The wisdom of crowds is one of those perfectly of-our-moment ideas. The phrase comes from New Yorker writer James Surowiecki, whose book of that title was published almost a decade ago. Its thesis is nicely summed up in its opening anecdote about the 19th-century English scientist Francis Galton. Attending a county fair, Galton realized that in a competition to guess the weight of an ox, the average of all 787 submitted guesses was almost exactly right: 1,197 pounds against an actual weight of 1,198 pounds, a degree of accuracy that no individual guesser could match. As individuals we may be ignorant and short-sighted, but together we’re wise.
The implication is that the bigger the crowd, the greater the accuracy. It’s like running an experiment: All else being equal, the larger the sample size, the more trustworthy the result. The idea has a particular resonance at a time when online businesses from Amazon.com (AMZN) to Yelp (YELP) rely on aggregated user reviews, and social networks such as Facebook (FB) sell ads that rely in part on showing you how many of your friends “like” something.
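That sample-size intuition is easy to check numerically. Here is a minimal sketch — the spread of the guesses is a made-up assumption, not Galton's data — that simulates fairgoers whose guesses scatter randomly around the ox's true weight and watches the crowd's average tighten as the crowd grows:

```python
import random
import statistics

# Toy illustration (the error spread is assumed, not Galton's data):
# each "fairgoer" guesses the ox's true weight of 1,198 lb with an
# independent random error.
random.seed(42)
TRUE_WEIGHT = 1198

def crowd_error(n_guessers, spread=150):
    """Absolute error of the crowd's average guess, in pounds."""
    guesses = [random.gauss(TRUE_WEIGHT, spread) for _ in range(n_guessers)]
    return abs(statistics.mean(guesses) - TRUE_WEIGHT)

for n in (10, 100, 1000):
    print(n, round(crowd_error(n), 1))
```

With independent errors, the average's typical miss shrinks roughly with the square root of the crowd's size — which is exactly the "bigger is better" logic the new research complicates.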
A new paper by the Princeton evolutionary biologist Iain Couzin and his student Albert Kao, however, suggests that bigger isn’t necessarily better. In fact, small crowds may actually be the smartest. “We do not find the classic view of the wisdom of crowds in most environments,” says Couzin of their results. “Instead, what we find is that there’s a small optimal group size of eight to 12 individuals that tends to optimize decisions.”
The research started from the fact that, in nature—where, unlike at county fairs, accuracy has life-or-death consequences—many animals live in relatively small groups. Why, Couzin wondered, would so many species fail to take advantage of the informational benefits of the crowd?
The experiment that resulted didn’t employ people or even animals but used a computer model, an algorithm set up to replicate group behavior. Each individual “actor” in the program made decisions based on environmental cues that were either reliable or not. Those individual decisions were treated like votes, with the majority decision being the group decision.
The cues had an additional quality, however: They were either highly correlated or not. This was key, the researchers argue. In most real-world decisions, there are some cues that lots of people notice: a big landmark, an incoming hawk, or, in a human decision-making context, an injury to a star quarterback that changes the point spread. These cues have what Couzin and Kao refer to as “high observational correlation.” Then there are cues that only some actors see: the secret stock tip that an uncle passes along, a rumor you heard at work, a faint rustling in the bushes right beside you. These are low-correlation cues.
What Couzin and Kao found is that those low-correlation cues, the ones that only a few individuals noticed, were drowned out by the high-correlation cues as groups grew larger. That meant if the high-correlation cues were unreliable, the larger groups made poor decisions, whereas the smaller groups could still make the right decision, because they were still relying on a diversity of information. In other words, in big crowds, the stuff that everyone knows dominates the group decision, even if the thing that everyone knows is wrong. Couzin suggests that this phenomenon is why even big groups—with a few notable exceptions, such as Occupy Wall Street—rely on a smaller subset of the group to make decisions. Couzin likens the phenomenon to the statistical “noise” in experiments:
“Normally noise is considered a bad thing, but we show in this case—when you have these multiple cues and information can be correlated—noise can actually be adaptive, allowing the group to escape from being trapped in an overreliance on correlated information and actually employing other sorts of valuable information within their environment,” he says. “And that’s why these small groups did so well.”
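The trap Couzin describes can be sketched with a little probability. In the toy version below — the parameter values are illustrative assumptions, not the paper's — each individual attends to a single shared, high-correlation cue half the time and otherwise to a private, independent cue, and the group decides by strict majority vote. Accuracy peaks at small group sizes and then sinks back toward the shared cue's reliability as the group grows, because the shared cue increasingly decides every vote:

```python
from math import comb

# Toy sketch of the correlated-cue setup; these numbers are assumed
# for illustration, not taken from the Kao-Couzin paper.
P_SHARED = 0.5   # chance an individual attends to the shared cue
R_HIGH = 0.7     # reliability of the shared, high-correlation cue
R_LOW = 0.6      # reliability of each private, low-correlation cue

def tail(n, p, kmin):
    """P(Binomial(n, p) >= kmin)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(kmin, n + 1))

def group_accuracy(n):
    """Exact probability that a strict majority of n voters is correct."""
    need = n // 2 + 1  # votes required for a strict majority (use odd n)
    acc = 0.0
    for k in range(n + 1):  # k individuals happen to attend the shared cue
        w = comb(n, k) * P_SHARED**k * (1 - P_SHARED)**(n - k)
        # Shared cue right: those k vote correctly, plus some private-cue
        # followers; shared cue wrong: only private-cue followers can help.
        right = tail(n - k, R_LOW, max(0, need - k))
        wrong = tail(n - k, R_LOW, need)
        acc += w * (R_HIGH * right + (1 - R_HIGH) * wrong)
    return acc

for n in (1, 5, 9, 13, 51, 301):
    print(n, round(group_accuracy(n), 4))
```

In this parameterization the peak happens to land near the paper's eight-to-12 range, and the "noise" Couzin praises is visible in the math: it is the random variation in how many voters sample the shared cue that occasionally lets the independent, private information carry the vote.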
Having modeled this behavior computationally, Couzin is now working on testing it, in both humans and animals—in particular, in schooling fish.