GigaOM

Why Watson Can’t Talk to Siri


On Tuesday night, I was schooled by Watson at Jeopardy in an exhibition match at the Computer History Museum. I discovered that despite our fear of the robot overlords, humans are much smarter than we give ourselves credit for. Case in point: Watson could never use Apple’s personal assistant, Siri.

While each service seemingly understands what we’re saying and can respond with amazingly functional or accurate answers, the truth is that both are programmed for specific tasks and could never actually converse with one another, or with a human, outside a narrow context. Watson can’t take dictation, and Siri can’t play Jeopardy. Understanding their limits shows how far we have to go when it comes to true artificial intelligence, and how overblown fears of robots taking over really are.

David Ferrucci, the guy at IBM behind Watson’s creation, explained during a conversation before the match that as intuitive as our interactions with Siri or Watson appear, they are fundamentally task-oriented. The questions Watson gets are in effect “translated”: not just into the zeros and ones of digital signals but also into a series of words that are then broken down into related concepts.

Watson then tries to ascribe “meaning” to those concepts, based on searches of unstructured data, to derive an answer. It next determines which answer is most likely to be correct, and how much confidence it has in the top answer, because anyone who guesses wrong in Jeopardy, human or machine, is penalized. Watson’s tasks are thus to figure out the context associated with a question, determine which answer is likeliest given that context, and then decide whether it’s confident enough in the probabilities to bother answering.
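
To make that last step concrete, here is a minimal Python sketch of the answer-or-pass decision. Everything in it, from the Candidate class to the 0.5 buzz threshold and the example scores, is an illustrative assumption rather than IBM’s actual pipeline.

```python
# A minimal sketch of the answer-or-pass decision described above.
# All names and numbers are illustrative assumptions, not IBM's code.
from dataclasses import dataclass


@dataclass
class Candidate:
    answer: str
    confidence: float  # estimated probability of being correct, 0.0 to 1.0


def decide(candidates: list[Candidate], buzz_threshold: float = 0.5) -> str | None:
    """Return the top-ranked answer only if its confidence clears the threshold.

    A wrong guess in Jeopardy costs money, so passing (returning None)
    is the rational move when no candidate is likely enough to be right.
    """
    if not candidates:
        return None
    best = max(candidates, key=lambda c: c.confidence)
    return best.answer if best.confidence >= buzz_threshold else None


# Two hypotheses scored against evidence pulled from unstructured text.
candidates = [Candidate("Toronto", 0.14), Candidate("Chicago", 0.83)]
print(decide(candidates))  # -> Chicago
```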

Watson’s Greater Firepower

Siri, on the other hand, does two important things: It recognizes speech (Watson doesn’t actually understand speech, but is fed a text version of the question), and it can figure out what steps to take in a limited number of applications once it understands the words, using a natural language process similar to the one by which Watson functions. The sense from IBMers (unsurprisingly) is that Siri doesn’t have Watson’s natural language depth. Siri certainly lacks the computing horsepower behind Watson (2,880 processor cores and 15 terabytes of RAM) and the 100 GB of text data that Watson uses to figure out how different words relate to each other.
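
For contrast, here is a toy sketch of the kind of task-oriented dispatch described above: recognized text gets mapped onto one of a small, fixed set of application actions, and anything outside that set simply fails. The intents, keywords, and handlers are hypothetical; Apple has not published how Siri actually works.

```python
# A toy sketch of task-oriented intent dispatch, in the spirit described
# above. The intents, keywords, and handlers are hypothetical examples.

def set_alarm(utterance: str) -> str:
    # A real handler would parse the time and call into the clock app.
    return "Alarm set."


def send_message(utterance: str) -> str:
    # A real handler would extract the recipient and message body.
    return "Message drafted."


# A deliberately small, fixed set of supported tasks.
INTENTS = {
    "alarm": set_alarm,
    "wake me": set_alarm,
    "text": send_message,
    "message": send_message,
}


def handle(utterance: str) -> str:
    """Route recognized speech (already converted to text) to an action."""
    lowered = utterance.lower()
    for keyword, action in INTENTS.items():
        if keyword in lowered:
            return action(utterance)
    return "Sorry, I can't help with that."  # anything outside the task set


print(handle("Wake me at 7 tomorrow"))         # -> Alarm set.
print(handle("Who won Jeopardy last night?"))  # falls outside the task set
```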

The net result of their differences? Not only are Siri and Watson unable to communicate directly, because each relies on a different input method, but even if they could, their tasks are fundamentally far apart. Each can do natural language processing, but one uses that skill to find related information and figure out which answer is most likely correct, while the other uses it to open applications and perform a set number of tasks.

So while Alan Turing proposed that the best test of artificial intelligence is whether humans can distinguish an interaction with a computer from one with a human, it may be more accurate to say that the best test will be the creation of a machine that can not only understand natural language, as Siri and Watson can, but also combine Watson’s ability to determine the best course of action with Siri’s skill at taking it.

Higginbotham is a writer for GigaOM.
