Predicting Popularity

Can Computers Pick the Next Big Thing?


In the early 2000s, a handful of entrepreneurs became convinced that machines could mimic human taste and effectively predict popularity. This was a revolutionary notion, suggesting that the talents of legendary tastemakers—the Harvey Weinsteins and Clive Davises—could be replicated by silicon and algorithms. In melodramatic terms, the idea represented an escalation in the war between humans and machines, furthering the debate over what skills and faculties, if any, are unique to Homo sapiens.

With each passing year, the humanists appear to lose ground. In 1997 an IBM (IBM) supercomputer named Deep Blue beat the world's best chess player, Garry Kasparov, in a six-game match. Kasparov later wrote off Deep Blue and its relatives as "brute-force programs" that played chess with no creativity, no concern for "hundreds of years of established theory."

Kasparov would have had little patience for the would-be hit predictors, who, for the last decade or so, have tried to do for art and culture what Deep Blue did for chess. Generally, they distilled a piece of content to its numerical essence. Songs were easiest, because their underlying structure is mostly math. Companies and research centers such as The Echo Nest and the International Society for Music Information Retrieval built up databases of variables like pitch, tempo, and melody. By correlating those variables with historical information on how songs fared in the market, the hit predictors could make an educated guess about whether a brand-new song stood a chance of topping the charts.
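The mechanics can be sketched in a few lines. This is a toy illustration of the general approach, not any company's actual system: the catalog, features, and weights below are all invented. Each song is reduced to numeric features, each feature is weighted by how it correlated with past chart success, and a new song is scored on standardized values:

```python
def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def zscores(xs):
    m, s = mean(xs), std(xs)
    return [(x - m) / s for x in xs]

def correlation(xs, ys):
    # Pearson correlation between a feature and hit/flop outcomes
    zx, zy = zscores(xs), zscores(ys)
    return sum(a * b for a, b in zip(zx, zy)) / len(xs)

# Hypothetical catalog: (tempo, pitch range, melodic repetition) -> hit? (1/0)
catalog = [
    ((120, 0.6, 0.8), 1),
    ((95,  0.4, 0.7), 1),
    ((140, 0.9, 0.2), 0),
    ((60,  0.3, 0.3), 0),
]
features = [list(f) for f in zip(*(song for song, _ in catalog))]
outcomes = [hit for _, hit in catalog]
weights = [correlation(f, outcomes) for f in features]
stats = [(mean(f), std(f)) for f in features]

def hit_score(song):
    """Higher score = closer resemblance to past hits (on these toy features)."""
    return sum(w * (x - m) / s
               for w, x, (m, s) in zip(weights, song, stats))
```

Ranking a label's unreleased tracks by `hit_score` would mimic, in miniature, the kind of output a firm like Hit Song Science sold—with all the limitations described below.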

One company trying to do this was called Hit Song Science, founded in Barcelona in 2001. Hit Song Science had some early success. In 2002, as the team was fine-tuning its algorithms, HSS determined that 8 of the 14 tracks on an album by a then-obscure singer had genuine hit material. That album, Come Away with Me by Norah Jones, subsequently sold more than 10 million copies.

The same year, an executive at BMG who was promoting a new band, Maroon 5, got in touch with Mike McCready, one of HSS's co-founders. The band's single, "Harder to Breathe," was going nowhere, and the BMG executive needed help. Running the album through his software, McCready determined that another track, "This Love," had much greater hit potential. The executive sent the new single to radio stations, and Maroon 5's album, Songs About Jane, went triple platinum.

As a business, HSS was not quite as successful. Many predictions turned out to be duds. The algorithms rated Michael Jackson's "Billie Jean" a flop and a six-minute instrumental a surefire hit. "We discovered we couldn't make the bold kind of claims we were hoping we could make with this technology," says McCready, who left the company in 2006. He now runs a website called Music Xray that helps match music executives and musicians. "The technology might get there someday, but it's not there now."

The number of different possible chess games is 10 to the 120th power—a staggering number, and a surmountable one, if you have the right "brute-force programs." Popularity, by contrast, is a social phenomenon. Making predictions without accounting for human interaction and influence is like programming a computer to play chess while ignoring the queen.

Duncan Watts knows this better than anyone. Now Yahoo!'s (YHOO) chief research scientist, Watts was a professor at Columbia University in 2006 when he and two graduate students performed a study that confirmed what marketers have long known: Humans are deeply susceptible to persuasion, and there's no way to predict their tastes unless social factors are considered. In the study, the researchers asked 14,000 people to rank songs by bands they'd never heard of. Some of the participants had no information to go on other than their own taste; others were grouped into pools and shown what the rest liked. What the researchers found was that there's no such thing as "intrinsic" quality: Each pool favored a different set of songs, and reviewers were heavily influenced by the rankings of others when they had access to them.
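The dynamic the study uncovered can be illustrated with a minimal simulation—my sketch, not the researchers' code. When each listener's pick is weighted by previous listeners' picks, early luck compounds, and rerunning the same "market" crowns different winners:

```python
import random

def run_market(n_songs=10, listeners=1000, social=True, seed=None):
    """One artificial music market. With social influence on, each
    listener picks a song with probability proportional to its current
    download count; with it off, picks reflect independent taste only."""
    rng = random.Random(seed)
    downloads = [1] * n_songs          # seed every song with one download
    for _ in range(listeners):
        if social:
            song = rng.choices(range(n_songs), weights=downloads)[0]
        else:
            song = rng.randrange(n_songs)
        downloads[song] += 1
    return downloads

# Same songs, different pools: which song tops the chart depends on
# which one happened to get downloaded early
winners = [max(range(10), key=run_market(seed=s).__getitem__)
           for s in range(5)]
```

Across runs, the socially influenced markets typically produce far more lopsided download counts than the independent ones—popularity feeding on itself, exactly the effect that makes hits hard to call in advance.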

For Watts, one of the lessons of this experiment is that "for most of the things we care about, you can't predict success. You can take a lot of historical data and attributes and show that, on average, certain attributes do better. Books about boy wizards do better than books about nonlinear equations, for instance. The problem is, there are many hits that have those qualities of success. But there are plenty of non-hits that do, too." Knowing the difference between the successful and unsuccessful boy-wizard book is not yet a computable skill; in that gulf is where art, marketing, and social influence work their magic.

Many would-be hit predictors weren't discouraged by this finding; they adapted. Thanks to Twitter, Facebook, Digg, and more, there is a sudden abundance of social data. Even a simple Google (GOOG) search contains social cues about the buzz around a movie, a product, or anything else people care about. A number of recent studies have used this insight to refine the prediction of popularity. Earlier this year, Bernardo Huberman, a senior research fellow at Hewlett-Packard (HPQ), analyzed Twitter posts about unreleased movies and came up with surprisingly good predictions of their opening weekend gross. (His model predicted that Dear John, the Amanda Seyfried movie from this February, would gross $30.71 million on opening weekend. It took in $30.46 million.) Other studies by researchers at HP, the University of Southern California, and even Watts himself have used search and social data to predict, with impressive accuracy, the popularity of YouTube videos, Digg articles, new video games, and more.
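At its core, a model like Huberman's maps pre-release chatter to a dollar figure. Here is a one-variable sketch of that idea; the training numbers are invented, and the published work also folded in factors this toy version ignores, such as theater counts and tweet sentiment:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical training data: (tweets/hour before release, opening gross in $M)
history = [(50, 8.0), (120, 18.5), (300, 41.0), (500, 70.0)]
slope, intercept = fit_line(*zip(*history))

def predict_gross(tweet_rate):
    """Estimated opening-weekend gross ($M) for a given pre-release tweet rate."""
    return slope * tweet_rate + intercept
```

The point of the approach is that the input is purely social—no one has to judge whether the movie is any good.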

A New York-based company called BuzzFeed extends the idea that popularity can be predicted to the Web's entire range of content. It culls data from more than 100 partner sites, including The Huffington Post and AOL (TWX), which together reach 150 million unique visitors a month, according to Quantcast, and then posts items it anticipates will go viral—everything from cat videos to celebrity gossip. BuzzFeed makes its predictions based on analysis of how much traffic comes from sharers. If a blog post gets 1,000 hits, for example, and only 200 of those come from home page links, then the other 800 visitors arrived because someone shared the post with them, and some of those 800 are likely to share it with others in turn. BuzzFeed's algorithms calculate this ratio and make a prediction: This will go viral.
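That ratio amounts to a crude viral coefficient: readers gained through sharing per directly seeded visitor. A back-of-the-envelope version of the calculation described above—my simplification, not BuzzFeed's actual algorithm:

```python
def viral_coefficient(total_hits, seed_hits):
    """Readers gained through sharing per directly seeded visitor."""
    shared_hits = total_hits - seed_hits
    return shared_hits / seed_hits

# The example from the text: 1,000 hits, only 200 from home page links
k = viral_coefficient(total_hits=1000, seed_hits=200)  # -> 4.0

# k well above 1 means sharing, not seeding, is driving the traffic:
# a candidate to go viral
```

A real system would also weigh how fast the ratio is growing, but the signal is the same: sharing behavior, not content.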

In terms of science, BuzzFeed is a bit of a kludge, a tacit acknowledgment of the limits of popularity prediction. No attempt is made to assess the intrinsic quality of the blog posts and videos that its servers track. It doesn't isolate the variables and assign them values, as Hit Song Science tried with the timbre of Norah Jones' voice. And how could it? Who knows why one video of a kitten falling asleep attracts 10 million views on YouTube, while another, nearly identical video gets 1,000? BuzzFeed is concerned only with popularity as a social phenomenon. It's like weather forecasting. BuzzFeed takes an early satellite picture of how the Web responds to, say, the news of Lindsay Lohan's arrest or zoo footage of newborn tapirs, then assesses which of these tropical depressions could turn into a hurricane.

Watts, who is generally quite skeptical of hit prediction, serves as an adviser to BuzzFeed. The system works, he says, "by being very good at noticing what is taking off organically and then feeding the flames." It is on this latter bit that BuzzFeed makes its money. By understanding so well how things become popular, the company can help clients make things popular. A recent case in point is DonQ rum, which had hired a firm called Undercurrent to build brand awareness with an elaborate Web feature called LadyData. It presented more than 200 attractive young women answering pressing questions that demographically appropriate men (ages 21-35) might ask, like "When is a mustache a fail?" The problem was how to grab the attention of the men who might first visit the site and then buy the rum.

BuzzFeed's algorithms churned through the LadyData site, singling out the pieces that generated the most sharing. Those were seeded across the Web on BuzzFeed's home page and on content-sharing sites like StumbleUpon and Digg. Within two months, the DonQ site was pulling in more visitors than those of its much bigger competitors, Captain Morgan and Bacardi, combined. And in the first six months of 2010, DonQ's sales were up 55 percent. That may not be as cool as predicting the next Norah Jones, but for rum manufacturers—and kitten videographers—it's progress.

Sheridan is a senior editor for Bloomberg Businessweek in New York.
