When people talk about the “wisdom of the crowd,” they are usually referring to the way groups aggregate information to reach accurate conclusions or decisions. Made especially popular by James Surowiecki’s The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, the theory has supplied us with some entertaining anecdotes about the accuracy of crowd-driven calculations (Surowiecki tells the story of a large crowd that accurately guessed the weight of an ox once its members’ individual guesses were averaged). As it happens, I enjoy reading these stories to discover where and when the wisdom of the crowd can bring truth, or wisdom, to strange or otherwise unexpected situations.
However, there are specific elements required to form a “wise crowd,” as Surowiecki calls it, and when it comes to the business I’m engaged in (measuring the service quality of Internet retailers), I think there is a lot of confusion and misconception about the use of, and value derived from, community-driven seller ratings, which many people mistake for a perfect application of the wisdom-of-the-crowd theory. To determine whether these seller ratings are indeed giving us the wisdom of the crowd, there are three criteria, according to Surowiecki, that must be met.
Question: are online seller ratings like the ones on BizRate or Amazon’s marketplace truly driven by a genuine wisdom of the crowd? We weren’t sure, so we turned to people who know more about this stuff than we do… MIT. In an MIT Technology Review article entitled “Can You Trust Crowd Wisdom?”, Vassilis Kostakos, an adjunct assistant professor at Carnegie Mellon, and his team confirm that the rating systems commonly used by online consumers can “easily be swayed by a small group of highly active users.” The article goes on to say that rating systems can paint a distorted picture if a small number of users do most of the voting. And sure enough, after looking at millions of votes across Amazon’s community ratings system (albeit for products, not sellers), he came to the conclusion that “a small number of users accounted for a large number of ratings… and only 5 percent of active Amazon users cast votes on more than 10 products. A handful of users voted [on] hundreds of items.” Even with the incorporation of mechanisms designed to control the quality of ratings – such as allowing users to vote on the helpfulness of other users’ reviews – there are significant dangers in placing your complete trust in the accuracy of these ratings.
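To make the distortion concrete, here is a minimal sketch of the arithmetic. The numbers are entirely made up for illustration (they are not Dr. Kostakos’ data): most shoppers cast a single middling vote, while a few highly active users cast many enthusiastic ones, and the published average drifts away from the typical customer’s experience.

```python
# Hypothetical illustration of how a few prolific voters can skew an average rating.
# All numbers below are invented for the sketch; they are not from the Amazon study.

one_vote_each = [3] * 95          # 95 shoppers who each cast a single 3-star vote
prolific = [5] * (5 * 20)         # 5 highly active users who each cast twenty 5-star votes

all_votes = one_vote_each + prolific

published_average = sum(all_votes) / len(all_votes)
typical_experience = sum(one_vote_each) / len(one_vote_each)

print(f"Published average rating:      {published_average:.2f}")   # ~4.03 stars
print(f"Average among one-time voters: {typical_experience:.2f}")  # 3.00 stars
```

Even though 95 of the 100 people behind this hypothetical rating had a middling experience, the five prolific voters supply more than half of the votes and push the published score past four stars.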
Based on Dr. Kostakos’ findings, and looking back at Surowiecki’s criteria for identifying a truly wise crowd, it appears that community-driven seller ratings aren’t quite as wise as people probably think.
Even though it’s clear that community-driven seller ratings fail to qualify as a representation of the wisdom of the crowd, can they still be helpful and provide some useful information? Yes, they’re obviously better than nothing. But standing alone, do they provide enough information for online consumers to feel completely comfortable that they’re making the best possible purchasing decision? Not at all.
What’s the solution? A dual-rating system, successfully pioneered by Metacritic.com and Gamespot.com, in which both the community and objective, third-party experts have their say. Whether evaluating the merits or quality of a camera, a restaurant, a video game or an Internet retailer, consumers increasingly want more information, and from more sources. While moviegoers are interested in what the “crowd” thinks of Alice in Wonderland in IMAX 3D, they are also interested in what the critics, or the experts, say. The “expert” rating allows consumers to counteract the potential biases, extremes and inconsistencies of user-driven ratings and provides the perfect point of comparison for someone who remains understandably skeptical of the community’s opinion.
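As a rough sketch of the dual-rating idea, here is what surfacing both scores side by side might look like. The class name, fields and numbers are hypothetical, invented only to illustrate the concept; they are not Metacritic’s, Gamespot’s or our own methodology.

```python
# Hypothetical sketch of a dual-rating display: a community average shown
# alongside an independent expert score, so neither stands alone.

from dataclasses import dataclass

@dataclass
class DualRating:
    community_votes: list[float]   # individual user ratings, e.g. 1-5 stars
    expert_score: float            # score from an independent, third-party review

    def community_average(self) -> float:
        return sum(self.community_votes) / len(self.community_votes)

    def summary(self) -> str:
        return (f"Community: {self.community_average():.1f} / 5 "
                f"({len(self.community_votes)} votes)  |  "
                f"Expert: {self.expert_score:.1f} / 5")

# Example with made-up numbers
rating = DualRating(community_votes=[5, 5, 4, 2, 5, 3], expert_score=3.5)
print(rating.summary())   # Community: 4.0 / 5 (6 votes)  |  Expert: 3.5 / 5
```

The point of the pairing is simply that the shopper can weigh a crowd score of unknown quality against an independent benchmark, rather than trusting either one in isolation.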
It might be possible for the real wisdom of the crowd to be reflected in a user-driven rating if the crowd were somehow able to meet the criteria mentioned above, but the ultimate problem facing consumers is that they would still have no way of knowing whether those difficult “wisdom of the crowd” criteria had been met… and the only way to reassure them that the ratings can be trusted is to place a completely independent, unbiased and professional rating right there next to them. I think Niki Kittur, an assistant professor at CMU who studies user collaboration on Wikipedia, summed it up nicely: “There are both intentional and unintentional sources of bias” in user-driven rating systems. “In the end, what we really need [are] tools and transparency.”
More information from more independent sources drives transparency. So when it comes to measuring the service quality of an online retailer, you’ll now have one more “tool,” or information set, to further enhance transparency: our STELLA Ratings.