The Ghost in Your Machine

The world of smart computers -- machines that would be familiar with your habits and know when you're stressed or fatigued -- could be only a few years away. The computers would note your mental logic for saving information and follow the same logic in saving files. They would accurately infer your intent, remember past experiences (for instance, that you tend to make errors in multiplication), and alert you to mistakes.

These so-called cognitive machines -- essentially, smart software that can be part of any computer environment -- are already here in prototype, having been developed over the past five years by a team of computer scientists and cognitive psychologists at the Energy Dept.'s Sandia National Laboratories. The software monitors everything you do and creates a mathematical model of your behavior, such as your patterns in saving information or doing your work. Think of it as an advanced cousin of today's software which, after you've typed in a few letters of someone's address in an e-mail, suggests the rest.
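The "advanced cousin of autocompletion" idea can be made concrete with a toy sketch (the class name, folder paths, and frequency-counting approach are my own illustration, not Sandia's method): learn a user's file-saving habits as simple conditional frequencies, then suggest a folder for a new file.

```python
# Toy illustration (assumptions mine, not Sandia's software): model a
# user's file-saving behavior as frequency counts, then predict where
# they would save a new file -- a crude "mathematical model of behavior"
# in the spirit of e-mail address autocompletion.
from collections import defaultdict, Counter

class SaveHabitModel:
    def __init__(self):
        # Maps a file extension to a count of folders it was saved in.
        self.by_ext = defaultdict(Counter)

    def observe(self, filename, folder):
        """Record one observed save action."""
        ext = filename.rsplit(".", 1)[-1]
        self.by_ext[ext][folder] += 1

    def suggest(self, filename):
        """Suggest the folder this user most often picks for this
        kind of file, or None if we have never seen the type."""
        ext = filename.rsplit(".", 1)[-1]
        counts = self.by_ext.get(ext)
        return counts.most_common(1)[0][0] if counts else None

model = SaveHabitModel()
model.observe("q3_report.doc", "/reports")
model.observe("q4_report.doc", "/reports")
model.observe("budget.xls", "/finance")
print(model.suggest("annual_report.doc"))  # -> /reports
```

A real system would condition on far more than the file extension, but the principle is the same: watch, count, predict.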

At their most benign, smart computers seem like executive secretaries for those of us who can't afford one -- offering tremendous advances in productivity. Yet some fear that the concept suggests an ominous encroachment out of a sci-fi movie. Cognitive psychologist Chris Forsythe, who leads the Sandia team, insists that the machines are designed to augment -- not replace -- human activity. "We don't want to take the human out of the loop," he says. The simplest versions of these cognitive machines could hit production in as little as one to two years.

Forsythe talked to BusinessWeek Online Reporter Olga Kharif on Aug. 19 about how cognitive machines will change our world. Edited excerpts of the interview follow.

Q: How would you characterize the current state of human-machine interaction?

A: The biggest problem is that if you're the user, for the most part the technology doesn't know anything about you. The onus is on the user to learn and understand how the technology works. What we would like to do is reverse that equation so that it becomes the responsibility of the computer to learn about the user.

The computer would have to learn what the user knows, what the user doesn't know, how the user performs everyday, common functions. It would also recognize when the user makes a mistake or doesn't understand something.

Q: Could you give me an example of a prototype of a system that you've already built?

A: One of the systems we built last year has a function called discrepancy detection. We give the machine a cognitive model of an air-traffic controller. You have an operator watching events going on in the world around him, and the computer is sitting there "watching" all the same things the operator sees, attempting to interpret what's going on using the operator's cognitive model -- essentially, a mathematical model of the user's behavior.

Thanks to our software, when you stop the simulation and ask the computer and the operator, "What do you think is going on right now?" about 90% of the time you get the same answer from both. Such a computer could alert the operator to a problem the operator hasn't picked up on yet.
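The mechanism Forsythe describes -- running a model of the operator in parallel with real events and flagging divergence -- can be sketched in a few lines. The rule, field names, and five-mile threshold below are hypothetical stand-ins, not Sandia's actual model:

```python
# Hypothetical sketch of "discrepancy detection": run a simple model of
# the operator's interpretation alongside the actual events, and alert
# when the model and the operator disagree.

def operator_model(event):
    """Stand-in for the cognitive model: predicts how the operator
    would classify an event (the rule here is purely illustrative)."""
    return "conflict" if event["separation_miles"] < 5 else "normal"

def discrepancy_detector(events, operator_judgments):
    """Compare the model's interpretation with the operator's actual
    judgment; return events the operator may have missed."""
    alerts = []
    for event, judgment in zip(events, operator_judgments):
        expected = operator_model(event)
        if expected != judgment:
            alerts.append((event["id"], expected, judgment))
    return alerts

events = [
    {"id": "AC101", "separation_miles": 8},
    {"id": "AC102", "separation_miles": 3},  # below the illustrative minimum
]
judgments = ["normal", "normal"]  # operator hasn't noticed AC102 yet

print(discrepancy_detector(events, judgments))
# -> [('AC102', 'conflict', 'normal')]
```

The 90% figure Forsythe cites is, in effect, a measure of how often `operator_model` and the human return the same answer; the alerts are the remaining cases where the machine thinks the human has missed something.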

Q: What kinds of other applications do you expect to see?

A: One application is an intelligence agent, looking at data coming from many databases. Another application is where you'd have a robot that would record its experiences, so that at some point it could say, "Oh, I saw something like this before and this is what I did, and this is what happened."

Our software could also be part of the basic desktop environment -- we have that prototype close to completion. The program monitors your e-mail traffic, who you interact with, the nature of these interactions. So you could later ask the system, "Do I know this person?" And it would remind you that you worked on a project together a year ago.

Q: Do you anticipate a lot of privacy concerns over this?

A: Absolutely. We're O.K. with the idea that other people sitting in our office know most of what we do. But people are much less comfortable that there's a record of this on their computer. There's also the issue of security. But [monitoring of employee activities] already [goes] on.

Some people would argue that our software would make the situation better. The breadth of information that's being recorded will make it harder for someone [who shouldn't be looking] to find what they're looking for.

Q: How are cognitive machines better than the search engines and functions we currently use?

A: The technologies available today are inadequate. There are a lot of days when I can't find a file and I just give up. The search engines today -- such as Google -- offer very generic, word-based searches. They have no understanding of the structure of your life. In contrast, our software develops a model based on what you know about your own work. It structures the knowledge on the computer in the same way you structure it in your brain.

Q: What kinds of data would the program need to look at to do that?

A: Your archived files. The software you use, how you use it. For instance, in e-mail, it's going to look at how you use the actual software -- do you frequently forward e-mails, do you blind-copy people? It's going to look at who you interact with. Then, it's going to look at the content, the body of the e-mail. [You can tell] a great deal [about] what a person knows in the words that they use.
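The kinds of behavioral features Forsythe lists -- forwarding habits, blind-copy habits, who you interact with -- lend themselves to a simple profile. A minimal sketch, with illustrative field names and addresses of my own invention:

```python
# Illustrative sketch (not Sandia's software): build a simple behavioral
# profile from e-mail metadata -- how often the user forwards or
# blind-copies, and which contacts appear most often.
from collections import Counter

def build_profile(messages):
    """Each message is a dict with 'to' (list of addresses),
    'bcc' (list of addresses), and 'forwarded' (bool)."""
    profile = {"forward_rate": 0.0, "bcc_rate": 0.0, "top_contacts": []}
    if not messages:
        return profile
    n = len(messages)
    profile["forward_rate"] = sum(m["forwarded"] for m in messages) / n
    profile["bcc_rate"] = sum(bool(m["bcc"]) for m in messages) / n
    contacts = Counter(addr for m in messages for addr in m["to"])
    profile["top_contacts"] = [a for a, _ in contacts.most_common(3)]
    return profile

msgs = [
    {"to": ["ann@example.com"], "bcc": [], "forwarded": True},
    {"to": ["bob@example.com", "ann@example.com"],
     "bcc": ["carl@example.com"], "forwarded": False},
]
print(build_profile(msgs))
```

Content analysis of the message body -- inferring what a person knows from the words they use -- would sit on top of metadata features like these.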

Q: When do you expect the product to be commercialized?

A: We could see some capabilities that could go into product development within the next year -- such as the ability for the computer to sift through your e-mail application. But I think most of the capabilities I talked about are going to be commonplace 10 years from now.

The technology is already there. But it will take time to put this kind of application into automobiles, for example -- simply because such systems have to be tested and proven.

Q: Is there anything that still has to be invented or developed to make your ideas work?

A: There's one very significant technological barrier: The systems we're building now require rigorous collection of data from a person to create a model. It's very labor-intensive and time-consuming. So we're investing in automating the process, letting the machine watch you and infer what you know and what you don't know. Today [this feature is] only available in a limited form.

Q: This project makes me think of The Matrix -- where machines run the world and humans are slaves to the machines. Isn't this technology a move in that direction?

A: Our team is very conscientious with regard to these types of issues. I also think that -- whether it's us or someone else developing it -- the technology is going to continue to advance. And people are going to push technology in ways that will make it more and more powerful -- to the point where it could begin to intrude on people's lives and take on autonomy.

There's no stopping the technological march. Still, most researchers are very conscientious about the ethical ramifications of what we are doing.
