Recently, I was waiting for my father-in-law outside a building. I could see two or three very long blocks down the street, where many people were walking toward me. As I waited, men in hats approached from a hundred or more yards away. With each one, I knew within a second that he wasn't him. When I spotted a man with a very distinctive walk, I instantly knew it was him. I had practically no information, and a lot of other things in my field of view. Had I been a computer, people at that distance would only have been a couple of pixels wide.
It was one of those "Blink" moments of instantaneous pattern recognition and "thin-slicing" that everyone has experienced at least occasionally. Were I a computer, I'd have to be programmed to look at a scene filled with information and perhaps examine each distinctive pixel. I'd have to filter out trees and sky and cars and dogs and buildings, without actually recognizing any of those things, and I'd have to be programmed to look for something very specific. As a human, I can process all of this information after simply having been born and alive for some time. Any animal with some kind of visual sense can do some version of the same thing.
An article in the Wall Street Journal describes a technique at Honeywell that uses brainwave scanning to improve image recognition. In this technique, a person is looking for something in a series of images. For example, they might be scanning aerial surveillance photos, looking for Oscar the Grouch in the mountains of Western Pakistan. Computers aren't always good at this sort of open-ended task, so you generally need people to do it. Even for people it's hard work, and they can easily miss important information, say the silhouette of a garbage can beside a portable rocket launcher. Also, a person might notice something but pass it by without realizing it.
This new system monitors brain activity and watches for when a person has recognized something. Even without conscious acknowledgement, the brain shows a certain pattern of activity when it sees something it's looking for. By paying attention to where the person was looking at that moment, the system marks that part of the image for further analysis. The article reports massively improved image-analysis results from throwing in this extra, raw brain data.
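The basic loop of such a system can be sketched in a few lines. This is a toy illustration of the idea described above, not Honeywell's actual system; the threshold value, the fake EEG numbers, and the function names are all invented for the example.

```python
# Toy sketch: flag the image regions a viewer was looking at when their
# brain showed a "recognition" response. All values are made up.

RECOGNITION_THRESHOLD = 0.8  # pretend response strength that counts as recognition

def flag_regions(samples):
    """samples: iterable of (gaze_xy, eeg_strength) pairs recorded while
    the person scans an image. Returns the gaze coordinates whose brain
    response crossed the threshold, i.e. spots marked for further analysis."""
    return [gaze for gaze, eeg in samples if eeg >= RECOGNITION_THRESHOLD]

# Fake data: the analyst's eyes wander; only one spot triggers a response.
stream = [((10, 40), 0.2), ((120, 80), 0.95), ((300, 55), 0.4)]
print(flag_regions(stream))  # -> [(120, 80)]
```

The point is that the person never has to consciously report anything; the system simply correlates gaze position with the involuntary response.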
The internet is a pretty good example of spreading work to people through computer systems, though really that's about data input rather than computation. There are a couple of examples that I know about where humans are actually used to improve computation.
One is Google's image labeler. In this application, you and an anonymous partner on the net are paired to interactively view and label pictures dredged up from the big, messy web. You are shown images, one after another. You enter descriptive words and phrases about the picture. You see your own labels but not your partner's. The instant you enter a label that your partner has typed, that image is finished. It disappears and you get another. This repeats for a couple of minutes, and at the end you get a score. Your score stays with your Google login, and over time your total goes up. The high scores show up on the application's home page, which makes a certain type of competitive addict want to play this "game" all day long. With the label you two humans have agreed upon, Google now has a bunch of useful information about that particular image and can improve the results of its image search.
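The agreement rule at the heart of the game is simple enough to sketch. This is my own illustration of the mechanic, not Google's implementation; the function name and sample labels are invented.

```python
# Toy sketch of the labeling game's agreement rule: an image is "finished"
# the moment one player enters a label the other has already typed.

def first_agreed_label(labels_a, labels_b):
    """labels_a: player A's labels in the order entered.
    labels_b: labels player B has typed so far.
    Returns the first of A's labels that B also typed, or None if the
    players haven't agreed yet."""
    seen_by_b = set(labels_b)
    for label in labels_a:
        if label in seen_by_b:
            return label
    return None

print(first_agreed_label(["dog", "grass", "frisbee"],
                         ["park", "frisbee", "dog"]))  # -> dog
```

The agreed-upon label is valuable precisely because two strangers converged on it independently, which is a decent filter for junk.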
Another different take on human collaboration is Amazon's Mechanical Turk web service. Amazon describes it as "Artificial Artificial Intelligence," but it's really a marketplace for distributing tasks via the internet. The tasks might be things like labeling pictures, or sorting items, or writing short descriptions of something, whatever. The difference is that one person lists a task and a price they're willing to pay for it, and another person takes it on and gets some money, piecework style. The principle is the same in that you're distributing to humans the kinds of tasks they're inherently good at.
The Honeywell technology is really interesting because it taps into human ability at an even lower level. Whereas the other two applications involve skill and language, brain scanning taps instinct. Suddenly, bodies can be thrown at problems that once required great expertise. I can imagine warehouses of people in populous countries being fed huge swaths of images, each person told to look for a particular, fairly abstract thing while their brain is scanned. We would be using the pure circuitry of the human brain rather than its developed, higher-level functions. The work could be highly compartmentalized and easily replicated on a huge scale. The more people, the better, and since the work is electronic, it can be sent anywhere.
Where there are "more" people tends to be in developing countries. By now, we know this story: call centers in India, medical records processing in Pakistan, video game players across Asia playing role-playing games for pay to enhance the status and ranking of a paying customer's characters. All of the current examples involve varying degrees of skill and often a great deal of cross-cultural training before these folks are considered economically useful to the first world. Even within the United States this practice often pisses people off, because it's still humans speaking to humans from a very different place, and things like manners and customs get in the way. (For instance, I finally dumped Earthlink because their technical support had gone to a country where cultural and language differences made getting help well nigh impossible.)
The Honeywell technology suggests a means of more easily digitizing and leveraging human ability. With brain scanning, humans move closer to being actual processing units within the computer. This all suggests a future cyborg reality where the biological and computational merge to make new applications possible. Instead of the computer infesting people, the people become part of the computer, actually providing their brain power in the service of automated tasks.
Perhaps this pushes us toward the kind of dystopian mechanization that first cropped up when assembly lines and the modern factory appeared at the turn of the last century. Taylorism sought to homogenize the bulk of humanity into a giant, productive machine. Since then, robots and computers have removed some of that burden (and jobs) by more productively replacing people in the first world. The rest of the jobs have been exported to the people of the developing world, who are apparently replicating this country's early history. The direction of this Honeywell technology suggests a combination of the two: inexpensive labor at the service of massive data processing and service industry needs, utilizing global chains of distribution, and eliminating the need for genuine skills, initiative, or training. Why invest much in a person if all you're really after is something provided by their essential biology?
Reading about this, I thought about the first, pretty good Matrix movie and that one really great sequence showing humanity packaged up in little cocoons, being fed a continuous dream (which we know as reality) and being used as an energy source for giant, evil computers.
Unfortunately, these movies got too involved in trenchcoats and fake Taoism. Had they been less stupid, they would have followed a different plot (of my creation): humans are used by computers not for "battery power" but for their ability to think and dream. In those movies, the computers created a dream of reality in order to pacify humans, when instead they could have been using the humans to provide their thinking power.