One of the things I read religiously whenever it appears in my inbox is Bruce Schneier's Crypto-Gram, an email newsletter devoted to security-related topics. The most recent issue included an essay on the psychology of security. To quote Schneier, "Security is both a feeling and a reality. And they're not the same." Humans have developed a large grab bag of handy tricks to help us survive and thrive. Unfortunately, our hardware hasn't caught up with some of the realities of modern life.
When I read "The Psychology of Security," I was struck by the following quotation from Daniel Gilbert:
"The brain is a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get. That's what brains did for several hundred million years—and then, just a few million years ago, the mammalian brain learned a new trick: to predict the timing and location of dangers before they actually happened.
Our ability to duck that which is not yet coming is one of the brain's most stunning innovations, and we wouldn't have dental floss or 401(k) plans without it. But this innovation is in the early stages of development. The application that allows us to respond to visible baseballs is ancient and reliable, but the add-on utility that allows us to respond to threats that loom in an unseen future is still in beta testing."
This intrigued me, because I'd just started reading "The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places" by Byron Reeves and Clifford Nass. The two authors met while working in the Department of Communication at Stanford, and they applied experiments from sociology and psychology to technology—comparing how people reacted to other people with how they reacted to computers or other media. What they found surprised them: even though we know that computers, television, and movies aren't real, we treat them as if they were real people and real places. We do it unconsciously, and we will often insist categorically that we don't. Intellectually, we may know these things aren't real, but our brains are optimized for detecting and responding to social cues, wherever (and from whatever) they originate.
The world has changed rapidly, but our firmware is still playing catch-up. How we see and interact with the world, as dictated by our physiology, affects how we can best use the technologies we develop, especially the data-rich ones. The question shifts from how we can gather data to how we can make sense of the vast quantities of data we've already collected. A good example is Google Earth's combination of aerial images, maps, and overlays, which mixes multiple types of data into a final amalgamated form that we find easy to parse. What do you think?