Don These Specs, Toss Your Monitor
A chat with Richard Rutkowski of Microvision

Richard F. Rutkowski is president and CEO of Microvision Inc., a six-year-old company developing a screenless computer display. His device, called a "virtual retinal display" (VRD), scans rows of pixels directly onto the user's retina by means of a laser, creating images that simulate the experience of viewing a full-size screen. VRDs could help people explore three-dimensional virtual worlds, or they could be used to annotate real-world images. Silicon Valley correspondent Janet Rae-Dupree spoke with Rutkowski about how retinal displays might change computing.

Q: What was the genesis of this technology?
A: It originated in 1992 with Thomas Furness, founder of the Human Interface Technology Laboratory at the University of Washington and developer of some of the first head-mounted and helmet-mounted displays. Tom had the idea that if you could eliminate the display screen and write the imagery directly onto the retina using just light sources, you could make gains in resolution, contrast, brightness, and color quality, and you could also reduce the size and weight of display systems.

Q: How does the technology work?
A: When you look at a television set or computer screen, what you're seeing is an electron gun [sweeping a beam of electrons] very fast across a glass plate painted with phosphors, which glow red, green, and blue. By combining these colors and by varying the intensity of each one, we can create the full palette of colors. The retinal display works in a similar fashion, but instead of stimulating phosphors, we scan the image onto the eye to stimulate the retina.

Q: Doesn't that seem kind of creepy, to focus a laser beam inside your eye?
A: Actually, whenever we look at an electronic display, it's putting light on our eye. [VRDs] do this using a very small package that can deliver great viewing properties. They are safe, and it looks as if you are seeing a computer screen, but there isn't anything there except for one spot of light moving very fast.

Q: When will you have a product on the market?
A: Late next year. It's going to be a wearable display that allows you to overlay information onto the real world. It will be targeted initially at industrial applications and some medical applications. Imagine working with your hands and being able to position electronic information such as parts diagrams, instrument read-outs, or medical data right where you're doing the work. In a military environment, for example, we have satellites and terrain databases, and all sorts of intelligence. You can know where someone is and present data to them as a visual overlay. The VRD user can see information about enemy positions from several different intelligence sources at once. Helicopter pilots fly through very crowded battle space. A computer system could detect a radar signature and generate an image that represents its shape and size, so the pilot could see and avoid it. If he's flying at night, we could give him night vision, overlaying data on weapons emplacements. The computer could take the approximate range of those weapons and draw a no-fly zone.

Q: Why aren't any other companies doing laser retinal displays?
A: Some work had been done by Motorola and Sony. But it's a hard thing to do. Early on, people didn't fully subscribe to the notion that any of this was even possible. For example, there's a physical phenomenon called persistence: When you hit a phosphor with an electron, it glows and then takes a while to decay. We have some persistence in the eye, too, which is why a small point of light leaves trails in the dark. But some people didn't think there was enough persistence in the eye. We need to move a single spot of laser light across the plane of an image from the first pixel to the last pixel. It was thought that by the time you got to the last pixel you would not see the first pixel anymore. That turns out not to be the case.

Q: Are these devices expensive?
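The persistence argument above can be checked with back-of-the-envelope numbers. This sketch assumes an illustrative 640x480 raster refreshed 60 times per second; these figures are for illustration only and are not Microvision's actual specifications.

```python
# Rough timing check for a scanned single-spot display.
# Assumed (illustrative) parameters, not the VRD's real specs:
width, height = 640, 480   # pixels per frame
refresh_hz = 60            # full frames drawn per second

# Time between lighting the first pixel and the last pixel of a frame:
frame_time_ms = 1000 / refresh_hz

# How long the spot dwells on any one pixel:
pixel_time_ns = 1e9 / (width * height * refresh_hz)

print(f"first-to-last-pixel interval: {frame_time_ms:.1f} ms")
print(f"dwell time per pixel: {pixel_time_ns:.0f} ns")
```

At these assumed rates the whole frame is traced in about 17 milliseconds, while persistence of vision is commonly cited as lasting tens of milliseconds, so the first pixel is still perceived when the spot reaches the last one, consistent with the answer above.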
A: Initially, the medical and industrial applications will be in the $30,000 to $40,000 range. The military market could be anywhere from $50,000 to $100,000 because they'll need to be heavily ruggedized for operation in that environment.

Q: When might consumers see something like this?
A: We see a consumer market starting to emerge in the 2002 to 2003 timeframe. There are two killer apps for consumers: One is wearable displays for gaming applications. The other is mobile Internet access. Human beings are visual creatures. The handheld devices with their tiny screens just don't let people see a Web page or even e-mail in a meaningful way. We can miniaturize our scanners by building them as micromechanical devices out of silicon wafers. When you hold a small lens three or four inches from your eye it looks as if you're looking at a 15-inch computer monitor. We'll have prototypes of that starting next year.

Q: Are we all going to be walking around with glasses beaming information into our eyeballs?
A: We can't do that because the eyes have another important role: They're an extremely important part of our social interaction. It would be disconcerting to look at somebody and realize that the other person is seeing something else. I think covering the eyes is intrinsically antisocial. Someone asked me the other day whether we were going to replace television. I don't think so. Television is the campfire, or the hearth--a gathering place.

Q: In what situations might we use these displays?
A: I've thought it would be fun, if you were visiting Stonehenge in Britain or the Forum in Rome, to be able to overlay reconstructions of how people of the time lived, behaved, and dressed, and how the buildings used to look. If you start to incorporate global positioning satellites into this picture, you can deliver geographically relevant data to the mobile user.

Q: Spin forward a decade or two. Where will we be seeing these displays?
A: Literally everywhere. Wearable displays will be commonplace in a whole variety of applications. The concept of augmented vision is powerful. We have reached a point where we have huge amounts of electronic information, but it's still not integrated into our real, functioning world. We think of an interface as being between the man and the machine. Now if you're able to integrate the electronic information into the workspace, the interface actually becomes the entire space.