As Andrew McAfee of the MIT Center for Digital Business has pointed out, many "established organizations still practice 'decision making by HIPPO (Highest Paid Person's Opinion).'" Google marketing evangelist Avinash Kaushik coined the term HIPPO. Since 2006, Avinash has helped organizations understand how they can use data to make decisions rather than relying exclusively on HIPPOs. Building on his ideas about the limitations of HIPPOs and the promise of data from large groups of people, McAfee notes that we can look to crowds during the decision phase in the same way that we look to them for ideas. So we can see a continuum emerging. At one end is the traditional approach of relying on experts to select the best ideas; at the other end is the wisdom of crowds, which relies on data and feedback from many stakeholders to judge a proposed idea's fitness.
Strong cases have been made for both the wisdom and the madness of crowds. The catch is that crowds are useful only on certain tasks. So which types of decisions are best suited to crowds? As David Leonhardt puts it, crowds or "markets are at their best when they can synthesize large amounts of information. . . Experts are most useful when a system exists to identify the most truly knowledgeable. . . ."
If we think about Victors and Spoils or Sequoia, we have good reason to believe that the people reviewing the ideas are some of the most knowledgeable in their fields. Crowds tend not to select the best results or the most feasible ideas when they have insufficient information for making a valid choice, or an inadequate ability to synthesize it. Adding many people who know nothing about advertising or investing will not increase the chances of picking the best ideas.
But it's even more complicated than that. It's useful to ask a group of potential users about a new product or service because they have experience with the problem and with the solution. But asking this same group to weigh in on something like which new energy technology might win out in ten years is an entirely different matter. Some level of expertise is required just to understand the basis for competition among energy technologies, from the physics involved to the economics of moving from test scale to production scale.
Beyond the issues of expertise and diversity, there is the question of how to best capture feedback from larger groups.
How the crowd decides
Most of us are familiar with simple up-or-down voting or, increasingly, liking or supporting. This type of signal provides a way to build a filter: at a basic level, more likes means more people support an idea. But while the act of voting is simple, it can introduce biases.
One simple issue is that some people vote once, while others vote much more often. The result is that frequent voters slant the outcome; the system can quickly become biased. A fix for this problem is simply to limit each person to a fixed number of votes so that everyone is equally represented. But does a simple majority vote alone ensure that the voting is fair?
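The one-vote-per-person fix can be sketched in a few lines of Python. This is a minimal illustration, not anything from the book; the voter IDs, idea labels, and function name are all invented:

```python
from collections import Counter

def tally_one_vote_per_person(votes):
    """Count votes, keeping only each voter's first vote.

    `votes` is a list of (voter_id, idea) pairs.
    """
    seen = set()
    counts = Counter()
    for voter_id, idea in votes:
        if voter_id in seen:
            continue  # ignore repeat votes from the same person
        seen.add(voter_id)
        counts[idea] += 1
    return counts

# "ann" votes three times, but only her first vote counts.
votes = [("ann", "A"), ("bob", "B"), ("ann", "A"), ("ann", "B"), ("cal", "B")]
counts = tally_one_vote_per_person(votes)
```

Without the `seen` check, idea A would lead 3 to 2 purely because one enthusiast voted repeatedly; with it, the tally reflects one voice per person.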
Let's look at a couple of examples that demonstrate different approaches in order to answer this question.
In a United States presidential election, the so-called popular vote is the majority vote: the sum of votes cast across the country for each candidate. However, this is not the math that decides the future president. Votes in each state count toward the Electoral College, and each state has a certain number of electoral votes it can cast. Once the popular vote determines a state's winner, all of that state's electoral votes are assigned to the winning candidate (with a few exceptions). The candidate who receives the majority of Electoral College votes wins.
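The two-layer tally just described can be sketched as follows. The states, candidates, and vote counts are hypothetical, and the winner-take-all rule is the common case noted above:

```python
from collections import Counter

# Hypothetical data: candidate vote counts per state, plus each
# state's number of electoral votes. Not real election figures.
state_results = {
    "StateA": ({"Smith": 500_000, "Jones": 400_000}, 10),
    "StateB": ({"Smith": 100_000, "Jones": 300_000}, 6),
    "StateC": ({"Smith": 220_000, "Jones": 210_000}, 4),
}

popular = Counter()
electoral = Counter()
for state, (votes, ev) in state_results.items():
    for candidate, n in votes.items():
        popular[candidate] += n
    winner = max(votes, key=votes.get)  # majority vote decides the state
    electoral[winner] += ev             # winner takes all the state's votes

# Jones wins the popular vote (910,000 to 820,000), yet Smith
# wins the Electoral College 14 to 6.
```

The sketch also shows why the two tallies can disagree: the popular vote sums every ballot, while the electoral tally discards each state's losing margin.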
The world of sports offers a different set of voting approaches. Participants in a knockout or elimination model compete to advance through a series of rounds: losers are eliminated while winners go on to the next round. In a league model, teams are assigned points for different outcomes (win, lose, or draw). The team with the most points at the end of the season is the winner.
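The league model is just a weighted tally. A minimal sketch, assuming the 3/1/0 points scheme common in football leagues (the team names and results are invented):

```python
# Assumed points scheme, as used in many football leagues.
POINTS = {"win": 3, "draw": 1, "loss": 0}

def league_table(results):
    """Map each team to its total points from (team, outcome) pairs."""
    table = {}
    for team, outcome in results:
        table[team] = table.get(team, 0) + POINTS[outcome]
    return table

season = [("Reds", "win"), ("Blues", "loss"),
          ("Reds", "win"), ("Blues", "draw"),
          ("Reds", "draw"), ("Blues", "win")]
table = league_table(season)
champion = max(table, key=table.get)  # team with the most points
```

Unlike a knockout, where one loss eliminates a participant, the points model lets a team absorb a bad result and still finish on top.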
AUTHOR SPEAK
Finding the right outside talent and keeping it motivated is the key, Abrahamson tells Ankita Rai
How is crowdsourcing different from crowdstorming?
I like to think about crowdsourcing in terms of what we’re asking participants to do. For example, in microwork, like Mechanical Turk, we’re asking people to do small things, like identifying information in an image or verifying a business listing. In crowdstorming, we focus on actions that crowds can take in relation to ideas: finding ideas, finding people or organisations to come up with ideas, offering feedback, and rating or ranking ideas.
While writing the book, I realised that some of the basic patterns were pretty old. They had been described by Alex Osborn (the “O” in BBDO) when he introduced the world to brainstorming just after WWII. Osborn was mostly concerned with small groups of people coming up with and evaluating ideas. My co-authors and I see networked crowds where Osborn saw folks in a conference room.
Are contests just another name for crowdstorming?
Contests are a subset of crowdstorming. We identified three patterns of crowdstorming: search, collaborative and integrated. Contests fall in the search bucket because they are focused on searching for the best ideas. A contest has only a few winners, so most participants don’t receive cash for their contributions.
In the mass collaboration model, people contribute without being paid directly: think open source or Wikipedia. But the most interesting crowdstorming pattern is only a few years old. You can see it at work in organisations like Quirky (a consumer products manufacturer in the US) or Giffgaff (a mobile operator in the UK). In these patterns, crowds are integrated into key value-creation processes and people are compensated. This ability to measure and compensate all types of contributions is what makes the model so powerful.
How should companies use “outside talent to the core”?
Stop thinking about inside and outside; think instead about where you’d add people if you had the space or the budget. For instance, GE would want to find as many teams as possible to build products related to sustainable energy. Pepsi would want to look beyond its agency partners for one of its most important TV advertisements. And that is why Nike recently partnered with Techstars to help find teams to work with them on new applications for the Nike FuelBand.
The question is: how will you find these people, and how do you motivate them? Firms have used many creative approaches. GE offers partnerships and venture capital, as does Nike; Pepsi offered cash and the bragging rights of having your creative work seen by a massive audience. Most of these firms enlist a variety of partners. Working with media partners and content producers is essential. It is also useful to work with specialty software vendors or process organisers to help you run the process.
Outside talent can do a lot more. For example, Amazon Studios asks its customers to review proposed film concepts and offer ratings and reviews. The challenge with these feedback tasks is that the number of participants is huge. And this presents a new problem: whom do you listen to? How do you value their contributions and compensate them? For now, this seems to be the domain of start-ups, but bigger firms are finding ways to benefit through partnerships.
CROWDSTORM: THE FUTURE OF INNOVATION, IDEAS AND PROBLEM SOLVING
AUTHOR: Shaun Abrahamson, Peter Ryder, Bastian Unterberg
PUBLISHER: Wiley India
PRICE: Rs 499
ISBN: 9781118433201.
Reprinted by permission of the publisher. Excerpted from 9781118433201: CrowdStorm: The future of innovation, ideas and problem solving. Copyright Wiley India. All rights reserved.