Say What You Mean

According to an article in InformationWeek, the Fraunhofer Institute for Computer Graphics Research IGD in Rostock, Germany, will be at CeBIT showcasing techniques to enable a computer or robot to sense how you feel.

That's not the part of the article that annoyed me. What annoyed me was the author's blithe claim that computers already have sensors to allow them to see and hear.

Unrealistic Expectations?
Lo, see me quibble with this statement. To me, seeing and hearing involve not just visual and auditory input but the complicated processing required to make sense of these signals. Humans are really, really good at making sense of our surroundings. We have some scarily complicated built-in algorithms to help us figure out how near or far something is, where a noise is coming from, how fast something is moving in relation to us, and other such things.

Can computers hear? I say no. I say computers have sensors that pick up auditory signals, but they can't hear; speech recognition has improved massively over the years, yet it's still not perfect. The same goes for vision: cameras keep getting better in resolution and clarity, and the machine vision software that makes sense of the images has grown increasingly sophisticated. And yet, if you stick a camera on a mobile robot in an unfamiliar environment, the robot cannot make sense of what it is seeing. Can it be trained to identify certain types of objects and then respond to them? Yes. Does it "see"? No.
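
To make the distinction concrete, here's a toy sketch of what "responding to voice commands" usually boils down to once the recognizer has done its work (this is illustrative Python with made-up command names, not any particular system): the machine matches a transcribed string against a lookup table.

```python
# A toy "voice command" dispatcher. The machine isn't hearing anything;
# it's matching whatever string the recognizer produced against a
# fixed table. (The commands below are made up for illustration.)

COMMANDS = {
    "open file": lambda: print("Opening file..."),
    "save": lambda: print("Saving..."),
}

def respond(transcript: str) -> None:
    """Dispatch on the recognizer's best-guess transcript."""
    action = COMMANDS.get(transcript.strip().lower())
    if action:
        action()
    else:
        print(f"No matching command for {transcript!r}")

respond("Save")               # -> Saving...
respond("How am I feeling?")  # -> No matching command for 'How am I feeling?'
```

All the cleverness lives in producing that transcript; the "hearing" part is a dictionary lookup.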

By anthropomorphizing machines and describing their very limited abilities in human-like terms, we do a disservice both to the machines (by setting unreasonable expectations) and to the researchers behind these advances. Let's concentrate on the clever technological footwork involved in allowing a computer to respond to voice commands rather than glossing over it and saying the computer can "hear." And for the record? I don't want my computer to sense how I feel. What I want is for the computer not to randomly crash.