Sample surveys are just that. Their accuracy and validity depend on many factors: the representativeness of those sampled, the size of the sample, the quality of the questioning process, the possibility of systematic bias in the entire exercise, the weights assigned to different sub-groups sampled, and much else. It is the very imprecision of the process, with all its unavoidable pitfalls, that makes election forecasting in a heterogeneous society like India such a risky exercise; forecasting tends to be safer in more homogeneous cultures that have fewer divisions, whether of language, class, caste or anything else.
The irony at the heart of the media business is that its advertising revenues are determined significantly by the advertising community's reliance on precisely such sample surveys. Television audience measurement has frequently come under criticism in the very competitive TV industry. One common complaint is that there simply aren't enough TV monitoring meters to accurately track and measure the viewing habits of a diverse audience. Another has been that some TV channel producers have figured out where the meters are kept and rigged the market through local cable operators. Whether this commonly made allegation is true or not (and no one can know), advertisers are forced to do what the drunk does in the old story: look under the lamp-post for a lost key, not because that is where the key was lost but because that is where there is light!
The history of readership surveys in the print media has been no better. Indeed, because two competing surveys have existed for the past couple of decades (the original National Readership Survey and the younger Indian Readership Survey), their unreliability has been manifest: they frequently delivered very different readership numbers for the same publications in the same period. Inevitably, scepticism about the accuracy of such surveys grew, and with growing scepticism came lawsuits. Publications obtained stay orders from the courts, creating a situation that forced a review. Among other consequences, the NRS has not come out with a survey for the last three years or so.
Now it has been announced that the NRS and the IRS will be merged into a composite survey. Many media companies will immediately heave a sigh of relief, because they will now have to finance only one survey instead of two. However, the existence of a second survey (even if conducted on a slightly different basis) acted as a check, because sharply differing numbers would point to data and/or extrapolation problems. No such check exists if there is only one survey. In that sense, the market becomes even more susceptible to measurement mistakes flowing from a unified survey. Indeed, given the intrinsic reliability issues connected with such surveys, it works better for media companies and for advertisers if there are multiple surveys, whether or not they slice the market differently. As in election forecasting, a plurality of providers can uncover the complexities of the truth better than a monopoly provider of an error-prone sample survey.


