Halfway through Denis Villeneuve’s sequel – and tribute – to one of the most celebrated science fiction movies in history, the scene shifts to Officer K sitting inside the DNA archives of the Los Angeles Police Department. By the archive’s door is a bilingual sign that reads: “suraksha adhikari ke bina iss sthan se aage badhna mana hai”, i.e., it is forbidden to proceed beyond this point without a security officer.
Since multilingual signs in public facilities target their most frequent users, one may well wonder why the LAPD’s DNA repository is signposted exclusively in Hindi and English. One theory is that LA’s future, however dystopian, is still cosmopolitan: even in 2049, it would appear, the LAPD has a healthy mix of officers of all ethnicities. That conclusion, however, is belied by the fact that other races, especially Asians, rarely feature elsewhere in the movie. An alternative possibility is that the movie is perpetuating the happy stereotype of South Asians (“desis”) as among the most sought-after engineers of the future. But which engineer would seek out the archives in a police station, especially when the 2049 LAPD doesn’t seem to have any South Asian officers? A more plausible explanation, then, is that the Hindi sign is meant for those undertaking maintenance and repair work on the LAPD’s premises, a reading strengthened by the scene’s juxtaposed imagery of individuals cleaning K’s office. If that is indeed the case, the Blade Runner 2049 universe captures present-day reality quite pithily. The sealed door and the “permission required” sign are a metaphor for India’s ambitions in the fields of artificial intelligence (AI) and bio-enhancement: the role envisaged makes it clear that it is still a Western world, and we are waiting to be let in.
Blade Runner 2049’s most dominant motif – that of machines rebelling against their human creators – is not new. Indeed, the word “robot” derives from the Czech “robota”, meaning forced labour; it was introduced by the Czech playwright Karel Capek in his 1920 play, Rossum’s Universal Robots. The lesson for India’s policy planners from Blade Runner 2049 is not that technology will fuel a man-machine struggle for power, but that humans need to master the uses machines can be put to. Will bio-engineered beings of the future really walk among us as law enforcement officials or guarantors of peace? Will they have the power to reproduce? Will their consciousness – a key trait of any “sentient” machine – be injected with false memories or real ones? Will AI-enabled bots of the future merely reinforce gender biases, as reflected in the relationship between K and Joi, his holographic companion? (Her sole purpose appears to be to nurture his emotional needs.)
These concerns should, of course, matter to everyone in India, since no lives will be left untouched by AI. But just as Blade Runner 2049 reframes popular narratives and reinforces old ones – Marjorie Prime, another movie released this year, also highlights the empathetic side of machines – political narratives crafted by nation-states often determine the use and availability of new innovations for developing countries. By contesting or embracing those narratives, India’s diplomats and foreign policy wonks become the first responders to a technological development. One need look no further than the history of the nuclear or space regimes for proof. The Cold War years were notorious for spawning export control regimes to limit access to so-called “dual use” technologies, those that could be deployed for both military and civilian purposes. Their culmination in sanctions against India in the nineties – in 1992 against ISRO for its attempt to purchase cryogenic engine technology from Russia, and a sweeping economic embargo after the 1998 Pokhran tests – marked watershed moments, but India’s politicians and diplomats had been resisting the creation of “technology denial” regimes since the country became independent.
Such narratives have also recently been constructed around “killer robots” and lethal autonomous weapons systems (LAWS), whose regulation will be the subject of discussion at a Group of Governmental Experts (GGE) meeting at the Conference on Disarmament (CD) in Geneva next month.
While the debate so far has largely been characterised as one about the “meaningful control” of autonomous weapons, India – as chair of the GGE – has done well to introduce concepts that had largely been sidelined because they concern the interests of developing economies. The food-for-thought paper circulated by Amandeep Singh Gill, India’s permanent representative to the CD and the GGE chairperson, poses a pointed question:
“Ethics/morality related concerns have focused so far on machines taking life. What about the human-machine pair acting collaboratively or human enhancement?”
Driven by the motivations of advanced economies to tame their adversaries’ capabilities, the LAWS narrative has focused almost exclusively on regulating sophisticated technologies, side-stepping their strategic implications for the civilian and military sectors of much of the world. This is in line with the philosophy behind many export control regimes, which let the worst-case scenario determine their operating principles. The tail cannot, however, wag the dog: the positive uses of AI-enabled applications matter more to economies for whom the adoption of lethal autonomous weapons is secondary to developmental objectives. Heading into the GGE meeting in November, India has done well to highlight this reality.
Unlike political narratives, however, popular narratives around technology travel beyond the four walls of windowless negotiating rooms and are more difficult to manipulate or resist. To say that movies and pop culture have influenced policymaking on new technologies is an understatement. Former US president Ronald Reagan, reportedly spooked by the 1983 blockbuster WarGames, signed off on a national security directive to shield the information systems of American strategic assets from hacking. (WarGames is the tale of a gangly teenager, played by Matthew Broderick, who breaks into an “intelligent” NORAD supercomputer designed to launch nuclear weapons on the basis of probability simulations.) Similarly, the 1979 Jane Fonda-Michael Douglas hit The China Syndrome popularised the (false) theory that the core of a nuclear reactor, in the event of a meltdown, could burn through the earth, “all the way to China”. The Three Mile Island reactor accident, which occurred less than two weeks after the movie’s release, contributed to the paranoia, but The China Syndrome probably played a crucial role in fanning the flames of nuclear scepticism internationally. Likewise, there has long been chatter about coding Isaac Asimov’s Three Laws of Robotics into the “minds” of machines that may put humans in harm’s way.
Blade Runner 2049 is no different in perpetuating such narratives. The movie’s opening lines read: “Replicants are bio-engineered humans […]. Their enhanced strength made them ideal slave labour.” Claims and counterclaims like these seep into politics and policy, often influencing the evolution of new technologies. There is very little Indian diplomats can do to check their proliferation, except perhaps to make such movies required viewing at the Foreign Service Institute in New Delhi. Blade Runner 2049, set to enter the pantheon of influential sci-fi movies, is a subtle reminder to India and other “recipients” of technology that its uses are determined as much by Hollywood as by Silicon Valley.
Arun Mohan Sukumar is a PhD Candidate at the Fletcher School, Tufts University
By arrangement with