A recent Federal Trade Commission (FTC) order holding the US-based Cambridge Analytica guilty of “deceptive practices to harvest personal information from tens of millions of Facebook users for voter profiling and targeting” has shed new light on the business of psychological profiling aimed at predicting voter behaviour. The Commission’s order spells out in great detail the technicalities, business model and potential impact of voter profiling on election results.
How did the Cambridge Analytica model work?
The now-bankrupt firm’s chief, Alexander Nix, relied primarily on research done at the University of Cambridge that used Facebook profile information to predict an individual’s personality on the OCEAN scale. The OCEAN scale, also known as the Big Five personality traits, measures an individual’s personality on five counts – openness, conscientiousness, extraversion, agreeableness, and neuroticism. Researchers had developed an algorithm that used an individual’s Facebook likes to predict their personality traits – the more likes a person had, the more accurate the algorithm’s prediction would be. A researcher at the university, Aleksandr Kogan, had developed an application that could collect personal data not just from the Facebook users who installed it but also from their friends, who were not using the application. The charge against Facebook in this data collection exercise was that it allowed collection of data from users’ friends even though those friends had no knowledge that their data was being collected. The critical personal data harvested from millions of users in the US and across the world included their name, gender, age, location and likes of all public Facebook pages.
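The general shape of such a likes-based prediction can be illustrated with a toy sketch. All page names, weights and scores below are hypothetical – this is not the actual Cambridge model, only the idea that each liked page contributes a learned weight to each of the five OCEAN traits, so that more likes mean more signal:

```python
# Toy illustration of likes-based personality scoring.
# Pages, weights and scores are invented for illustration only.

PAGES = ["page_a", "page_b", "page_c", "page_d"]
TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

# Hypothetical learned weights: one row per page, one value per trait.
WEIGHTS = {
    "page_a": [0.4, -0.1, 0.2, 0.0, -0.3],
    "page_b": [-0.2, 0.3, 0.0, 0.1, 0.2],
    "page_c": [0.1, 0.0, -0.4, 0.3, 0.1],
    "page_d": [0.0, 0.2, 0.1, -0.2, 0.0],
}

def trait_scores(liked_pages):
    """Sum the weight rows for every page the user has liked."""
    scores = [0.0] * len(TRAITS)
    for page in liked_pages:
        weights = WEIGHTS.get(page, [0.0] * len(TRAITS))
        for i, w in enumerate(weights):
            scores[i] += w
    return dict(zip(TRAITS, scores))

# A user who liked two pages gets a score on every trait.
print(trait_scores(["page_a", "page_c"]))
```

With more likes, more weight rows are summed, which is one intuition for why the researchers found predictions improving as a user’s like count grew.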
As US investigators put it, this data, when analysed by the algorithm, could produce a psychological profile of users and predict an individual’s personality better than their friends, family and co-workers could. In the US, for instance, anyone liking a page related to George Bush, rap, hip-hop and certain other pages could be accurately linked to a conservative and conventional personality. The FTC noted, “Cambridge Analytica was interested in this research because it intended to offer voter-profiling, micro-targeting, and other marketing services to US political campaigns and other US-based clients.” The company also procured voter records in various US states.
A series of paid surveys was then carried out in which Facebook users were asked various questions, including some pertaining to national security in the US. The algorithm was then trained to analyse these responses along with the harvested Facebook information to classify personalities on a range of issues like “political enthusiasm, political orientation, frequency in voting, consistency in voting for the same political party, and views on particular controversial issues.” It is estimated that the data of up to 65 million people, a majority of them in the US, was harvested in this fashion to build psycho-political profiles. Most of these users had never consented to sharing their data, nor had they consented to their psychological profiling. These profiles were then matched with voter registration records, and the trial proved to be a great success. In effect, Cambridge Analytica now had a database of Facebook users whose political affiliations and voting behaviour had been predicted by an algorithm using harvested personal data. Cambridge Analytica would use this to “identify and build target voter lists and apply psychological profiles to target group of voters.” These profiles could then be sold to political parties or other clients, who could target these users with advertisements and influence their voting behaviour. The profiles, which also revealed consumption tastes and other personal traits, could likewise be sold to corporations, which would use them to send targeted advertisements for their products. The FTC noted, “The project would have had little value to Cambridge Analytica if the personality scores could not be matched with actual US voters.”
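The matching step the FTC highlights – joining predicted profiles against voter registration records – amounts to a record-linkage join. The sketch below is a minimal, hypothetical illustration (all names and fields invented) of matching the two datasets on a shared key:

```python
# Hypothetical sketch of matching predicted profiles to a voter file.
# Names, states and fields are invented for illustration only.

profiles = [
    {"name": "J. Doe", "state": "TX", "orientation": "conservative"},
    {"name": "A. Roe", "state": "OH", "orientation": "liberal"},
]

voter_file = [
    {"name": "J. Doe", "state": "TX", "voted_2014": True},
    {"name": "B. Poe", "state": "CA", "voted_2014": False},
]

def match_profiles(profiles, voter_file):
    """Join profiles to voter records on (name, state)."""
    index = {(v["name"], v["state"]): v for v in voter_file}
    matched = []
    for p in profiles:
        record = index.get((p["name"], p["state"]))
        if record:
            # Merge the predicted profile with the official record.
            matched.append({**p, **record})
    return matched

# Only profiles that line up with a registered voter survive the join.
print(match_profiles(profiles, voter_file))
```

Real-world linkage is noisier than this exact-key join suggests, but the sketch shows why, as the FTC put it, the scores had value only once they could be attached to actual registered voters.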
What can it mean for India?
The FTC’s findings and ruling in the Cambridge Analytica case have deep repercussions for a country like India, where data privacy is still in its infancy and a large part of the population is unaware of what bits of their personal data are being harvested and for what purposes. The FTC found Cambridge Analytica guilty of deceiving Facebook users by not telling them that their and their friends’ identifiable information was being collected and analysed for political purposes. The Personal Data Protection Bill, recently cleared by the Union Cabinet, does take many steps in this regard. It explicitly states that the data fiduciary – in this case Facebook – would have to tell users the kind of personal data it is collecting, the purpose of its collection, the source of such collection, and the entities with which the data will be shared, among other things. It provides similar safeguards to users when it comes to the processing of their personal data. But the recent FTC judgment in the Cambridge Analytica case shows that users are oblivious to the fine print of the consent they are often asked to provide. The language of these terms and conditions is often highly technical and voluminous, with users generally preferring to click the ‘I Accept’ button instead of bothering to read the fine print. In many ways, this was at the crux of the ‘deception’ Cambridge Analytica was found guilty of. It remains to be seen whether India’s Personal Data Protection Bill can address the issue of comprehensibility and simplicity when it comes to devising terms and conditions for obtaining citizens’ consent for collecting, processing and sharing their critical personal data.