How many people does it take to create a wise crowd?
You might think a crowd must mean a lot of people – hundreds or even thousands. But it turns out a wise crowd can be much smaller than that. Here’s how we found that out.
Some years ago, I ran an experiment with Prof Charles Plott at Caltech. The experiment was designed to answer a question: Can people accurately predict the box office performance of an unreleased movie? We called the experiment Box Office Prophecy.
The experiment worked like this: we invited people – mainly film school students – to bet on the box office performance of a series of movies that had yet to be released. Bets were placed in ‘buckets’ representing different box office totals. Once enough people had placed their bets, the spread of bets across the buckets amounted to a probability distribution over the possible box office outcomes.
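To make the aggregation step concrete, here is a minimal sketch of how bets in buckets become a probability distribution. This is an illustration only, not the actual mechanism used in the experiment: the bucket labels, the participants’ bets, and the simple one-bet-per-person counting are all hypothetical, and the real market involved placing money on buckets rather than a single vote.

```python
from collections import Counter

def bet_distribution(bets, buckets):
    """Aggregate participants' bets into a probability distribution
    over box office 'buckets' (fraction of bets landing in each)."""
    counts = Counter(bets)
    total = len(bets)
    return {b: counts.get(b, 0) / total for b in buckets}

# Hypothetical example: 10 participants betting on box office ranges ($M)
buckets = ["0-20", "20-40", "40-60", "60-80"]
bets = ["20-40", "20-40", "40-60", "20-40", "0-20",
        "40-60", "20-40", "20-40", "60-80", "40-60"]

dist = bet_distribution(bets, buckets)
# Half of this (made-up) crowd expects a $20-40M box office:
# dist["20-40"] == 0.5, and the values sum to 1.
```

Each bucket’s share of the bets serves as the crowd’s collective probability estimate for that outcome, which is why the shape of the distribution, rather than any single bet, carries the prediction.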
I’ll have more to say about this experiment in a future blog, but for now I’ll just say the answer was: Yes, people collectively can make accurate predictions about the performance of an unreleased movie.
But I noticed something unexpected in the data. We ran the experiment for 67 different movies, and each time we aimed to have at least 50 or so people making bets. In some cases, though, the numbers were much lower – as low as 6 or 7 people. And yet we saw the same predictive accuracy.
This unlooked-for finding, that a wise crowd can be as few as 6 or 7 people, is a clue we have built upon with Wizer.
Here's how Charlie Plott summarised the findings of our experiment:
‘The data show that amazingly accurate predictions can be made. Ten or so participants between them often have enough information to accurately assign probabilities to each box office ‘bucket’ or possible outcome. The results leave no doubt that while no single participant has generally reliable information, participants as a group possess solid information about potential box offices and that this information can be captured by a properly designed process.’