A New Breed of Thinking Computer?
Scientists race to combine living neurons and silicon

For a whole bunch of jobs, today's computers ''are just plain terrible,'' says William L. Ditto, a physicist at Georgia Institute of Technology. Understanding human speech and handwritten notes, for example, is a snap for people but extremely difficult for computers. Until now, computer scientists could only throw more horsepower at such problems. ''But that's just making dumb machines faster, not smarter,'' says Ditto. ''Brains don't say, 'I'm inadequate, so I'll speed everything up.''' Instead, Nature evolved bigger brains with more interconnections among the neurons.

Ditto figures it's time to start building computers the way Nature does. His research team and a handful of other groups, including one at the University of Bordeaux in France, envision hybrid biocomputers that mate living nerve cells, or neurons, with silicon circuits. Neurons are the body's wires--they transmit signals in the brain and throughout the nervous system. Putting neurons into semiconductor circuits could create the basis for a new breed of computer--brainlike systems that finally live up to their name. Like the brain, neurosilicon computers might find solutions on their own, with no need for programmers to write explicit step-by-step instructions.

Ditto isn't talking through his hat. His team at Georgia Tech has just scored the first such breakthrough--doing arithmetic with two neurons (using the large neurons from leeches, which have been studied extensively). The researchers joined the neurons and linked them to a personal computer, which sent signals representing different numbers to each cell. Using principles of chaos theory, Ditto selectively stimulated the two neurons. From the chatterbox traffic that followed, the PC extracted the correct answer to a simple addition problem.

This is the first time invertebrate brain cells have used chaos to do arithmetic, let alone communicate the results to humans. What's more, computer simulations by Ditto and Sudeshna Sinha at the Institute of Mathematical Sciences in Madras, India, show that larger clusters of neurons should also be able to do multiplication and Boolean logic operations, the underlying principle of digital computers.
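The published details of Sinha and Ditto's scheme rely on threshold control of chaotic elements. The flavor of the idea can be sketched with a toy example -- the map, encoding, and thresholds below are illustrative choices of ours, not the published parameters. The point is that a single chaotic element can compute different logic gates depending only on how its output is read off:

```python
# Toy illustration of chaos-based logic (illustrative parameters, not
# the published Sinha-Ditto scheme): one chaotic element computes
# different gates depending only on how its output is thresholded.

def logistic(x):
    """One step of the chaotic logistic map."""
    return 4.0 * x * (1.0 - x)

def gate(a, b, threshold):
    """Encode two bits into the element's initial state, let the
    chaotic dynamics run one step, then read the answer off with
    a threshold."""
    x0 = 0.1 + 0.2 * a + 0.2 * b   # (0,0)->0.1, one bit set->0.3, (1,1)->0.5
    return 1 if logistic(x0) > threshold else 0

# The same chaotic element acts as AND or OR -- only the readout changes.
AND = lambda a, b: gate(a, b, 0.9)   # only f(0.5) = 1.0 clears 0.9
OR  = lambda a, b: gate(a, b, 0.5)   # f(0.1) = 0.36 stays below 0.5

for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
```

Reprogramming the element means changing a threshold rather than rewiring it -- a hint of why chaotic dynamics are attractive as a flexible computing substrate.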

''VERY CLEVER.'' The Georgia team has yet to dash off a technical paper, so their feat isn't widely known. ''The work is quite interesting and represents a new direction for neural modeling,'' says Terrence Sejnowski, director of the Institute for Neural Computation at the University of California at San Diego (UCSD). Physicist Henry Abarbanel, founder of UCSD's Institute for Nonlinear Science, says he's not familiar with the details of Ditto's achievement, but he knows him from his past work. ''Ditto is very clever,'' says Abarbanel, ''so he likely has something.''

That something is a way to control the behavior of neurons--using an esoteric branch of mathematics. That's because nerve cells and brain waves are not digital systems that simply flip on and off, so the software instructions that drive silicon computers just won't cut it in this realm. Brainlike chips will really be brainlike. They will be more creative than the machines on our desks, perhaps even mirroring some of the pluses and minuses of human thinking.

At this stage, Ditto says, it's just too early to tell if neurosilicon computers will have inherent limitations. But he and Sinha are optimistic that biosilicon systems can tackle anything today's hardware can--plus sensory-based computing that only biological ''wetware'' does with ease, such as understanding human language.

Ditto uses a personal computer to put neurons through their paces, but the PC doesn't throw conventional instructions at the neurons. It runs a sophisticated program based on chaos theory. The results are used ''to 'tune' the neurons--to tweak how they talk to each other,'' says Ditto. This ensures that their operations are consistent and predictable.

That's vital because nervous systems are ''nonlinear''--their responses are not proportional to their inputs. A minor sound or slight change in the visual field can unleash massive responses in the brain and nervous system, while major sensory disruptions may cause only minor ripples. In fact, when the signals exchanged by nerve cells become too regular and repetitive, it often warns of trouble. Medical researchers have discovered that a heart beating without minor variations may be on the verge of a heart attack, and a uniform ''chorus'' of brain waves can signal an impending epileptic seizure.
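The hallmark of such nonlinear, chaotic systems is extreme sensitivity to tiny differences. A toy chaotic map (a mathematical stand-in, not a model of any real neuron) shows how two starting states that differ by one part in a billion soon bear no resemblance to each other:

```python
# Sensitivity to initial conditions in a toy chaotic system: two
# trajectories of the logistic map that start a billionth apart
# quickly disagree by a large amount.

def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-9     # a "minor ripple": a one-in-a-billion difference
max_diff = 0.0
for step in range(50):
    x, y = logistic(x), logistic(y)
    max_diff = max(max_diff, abs(x - y))

print(max_diff)   # within 50 steps the trajectories diverge dramatically
```

That sensitivity is why the PC's chaos-control software is needed at all: without it, a neuron's response to the same input would never be repeatable enough to compute with.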

While none but Ditto claims success at using neurons to compute, biologists who study the bizarre workings of nerve cells have, in fact, been connecting neurons to silicon chips for years. ''That way, you can watch the neurons talk back and forth,'' says Avis H. Cohen, an associate professor of biology and neuroscience at the University of Maryland. Cohen experiments with the spinal cords of the lamprey, an eel-like fish, but she says the most popular biosilicon research tool, dubbed the dynamic clamp, uses mollusk neurons and was pioneered by Eve Marder, a biologist at Brandeis University. With such silicon-neuron set-ups, researchers can intercept the signal on the chip and selectively change it before sending it on to another neuron. That way, says Cohen, ''you can play like you're a neuron and try to deduce how they work together.''

HARMONY. The spontaneous emergence of cooperation among clusters of nerve cells has been an enduring mystery. Individual neurons behave unpredictably in isolation, yet collections somehow agree to synchronize and restrict their chaotic behavior. As a result, variations in heart rhythm, for instance, are normally confined to a fairly narrow, predictable range. Marder's research tool could help explain this enigma.

So might the results of an interdisciplinary study at San Diego's Institute for Nonlinear Science. Since mid-May, researchers there have been hooking up artificial electronic neurons, built with $7.50 in parts from a local Radio Shack store, to groups of living neurons from spiny lobsters. Surprise: The fake ones are accepted as the real thing. The artificial neuron is built to act chaotically, says UCSD's Abarbanel, ''and the real ones basically say, 'welcome, but behave.''' Very soon, he notes, the artificial neuron's signaling rhythm falls into step with the rest of the gang, indicating that the model is reasonably accurate.
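The ''welcome, but behave'' effect is, in mathematical terms, the synchronization of coupled chaotic systems. A toy sketch with two coupled logistic maps (our own stand-in, not a model of the UCSD circuit) shows a free-running chaotic ''artificial'' unit being pulled into lockstep with a ''real'' one once the coupling is strong enough:

```python
# Toy model of synchronization: a chaotic "artificial" unit is coupled
# to a "real" one and falls into step with it. (Coupled logistic maps
# as stand-ins -- not a model of the UCSD lobster circuit.)

def logistic(x):
    return 4.0 * x * (1.0 - x)

real, fake = 0.3, 0.9          # very different starting states
eps = 0.7                      # coupling strength, strong enough to sync
for step in range(200):
    real_next = logistic(real)
    # The artificial unit partly follows its own chaotic dynamics,
    # partly the signal it receives from the "real" group.
    fake = (1.0 - eps) * logistic(fake) + eps * real_next
    real = real_next

print(abs(real - fake))   # essentially zero: the fake unit has synchronized
```

Below a critical coupling strength the two units stay independent; above it, the difference between them shrinks exponentially -- the same qualitative behavior Abarbanel's team saw in the lobster network.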

The artificial neuron stems from two years of research, during which participants ran through $300 worth of spiny lobsters every month. ''They don't taste as good as Maine lobsters, that's for sure,'' says Abarbanel. Next, the team will gradually increase the number of artificial neurons in the group of 14 neurons that regulate the lobster's digestive system. If the network still functions normally when all 14 neurons are artificial, it will be solid evidence that it should be possible to replace damaged or diseased neurons in human systems. ''That's the medical goal,'' says Abarbanel. ''I'm not sure if that will take one year or five,'' he says, ''but there are no showstoppers that I can see.''

When Georgia Tech's Ditto first learned, two years ago, about all the biological work with neurons on silicon chips, a lightbulb went off. As head of Georgia Tech's Applied Chaos Laboratory, he figured he ought to be able to find a way to harness live neurons for computers. Two months ago, after a lot of computer-simulation studies, experiments with actual leeches started. And much sooner than he expected, Ditto had them performing arithmetic. ''We're way ahead of schedule,'' he says. ''I never thought we'd get to addition in just two months.''

Ditto estimates that it will be 10 years or more before biocomputers go commercial. That timing may be just right. Around 2015, semiconductor technology is destined to come to a screeching halt. Semiconductor circuits will then have shrunk as small as they can ever get. Transistor switching will then be triggered by a single electron, not the thousands of electrons that pulse through today's chips. ''When we get to single-electron circuits, that's it,'' says Ditto. ''You can't cut an electron in half.''

That will signal the end of Moore's Law--the doubling of chip power every 18 months that has been the hallmark of semiconductors for three decades. But scientists are working on some clever alternatives to maintain momentum in computer progress. One is DNA computing, which uses actual segments of genetic material to represent numbers. These segments are combined in a test tube to ''grow'' answers.

''GO AT IT.'' DNA computing may prove valuable for horrendously complex problems in science that only supercomputers can tackle. But it probably isn't suitable for many of the everyday jobs that computers now deal with. Besides, notes Ditto, ''Nature doesn't do computing with DNA, probably for good reason.''

The most attractive option may be Ditto's vision of computing with neurons. ''Now that we have an idea about how to go in there and program those little suckers,'' he says, tomorrow's computer engineers will be able to imitate Nature. ''So ultimately, for really tough problems, we'll just throw in more neurons and tell them, 'hey, go at it.''' The neurons will harmonize their operations and self-organize to find the answer--even if they have only partial data to work with. That's the magic of ''wetware.''

Eventually, Ditto plans to hook up neurons to video cameras and microphones, creating systems with artificial senses. How they will respond to the real world is anyone's guess, Ditto concedes. And that's as it should be. Life is unpredictable.

By Otis Port in New York


TABLE: Computing with Chaos

Copyright 1999, Bloomberg L.P.