
AI and the human soul: What new books reveal about us and our purpose

Could AI ever function as a spirit ambulance, shuttling us through the uncanny valleys that keep us, as Shantideva knew, from accepting others?


THE MORAL CIRCLE: Who Matters, What Matters, and Why
Author: Jeff Sebo
Publisher: Norton
Pages: 182
Price: $24 
 
ANIMALS, ROBOTS, GODS: Adventures in the Moral Imagination
Author: Webb Keane
Publisher: Princeton University Press
Pages: 182
Price: $27.95
In a literary flourish, Shantideva, an eighth-century Indian monastic, divulged what he called the “holy secret” of Buddhism: The key to personal happiness lies in the capacity to reject selfishness and accustom oneself to accepting others. A cornerstone of the Buddhist worldview, Shantideva’s verse finds new, albeit unacknowledged, expression in two recent books: Jeff Sebo’s provocative, if didactic, The Moral Circle and Webb Keane’s captivating Animals, Robots, Gods.
 
 
Much like Shantideva, both authors make a selfish case for altruism. Sebo, an associate professor of environmental studies at NYU and an animal-rights activist, centres his argument on human exceptionalism and our sometimes contradictory desire to live an ethical life.
 
Those within the “moral circle” — be it ourselves, families, friends, clans or countrymen — matter to us, while those on the outside do not. In asking us to expand our circles, Sebo speeds past pleas to consider other people’s humanity, past consideration of chimpanzees, elephants, dolphins, octopuses, cattle or pets and heads straight to our moral responsibility for insects, microbes and AI systems.
 
A cross between a polemic and that introductory philosophy course you never took, Sebo’s tract makes liberal use of italics to emphasise his reasoning. Do AI systems have a “non-negligible” — that is, at least a one in 10,000 — chance of being sentient? he asks. If so (and Sebo isn’t clear that there is such a chance), we owe them moral consideration.
 
The feeling in reading his argument, however, is of being talked at rather than to. That is too bad, because we are in new territory here, and it could be interesting. People are falling in love with their virtual companions, getting advice from their virtual therapists and fearing that AI will take over the world. We could use a good introductory humanities course on the overlap of the human and the nonhuman and the ethics therein. 
Luckily, Webb Keane, a professor in the department of anthropology at the University of Michigan, is here to fill the breach. Keane explores all kinds of fascinating material in his book, most of it taking place “at the edge of the human.” His topics range from self-driving cars to humans tethered to life support, animal sacrifice to humanoid robots, AI love affairs to shamanic divination.
 
Like Shantideva, he is interested in what happens when we adopt a “third-person perspective,” when we rise above our usual self-centred identities and expand our moral imaginations. “What counts as human?” he asks. “Where do you draw the line?” And, crucially, “What lies on the other side?”
 
Several vignettes stand out. Keane cites a colleague, Scott Stonington, a professor of anthropology and practicing physician, who did fieldwork with Thai farmers some two decades ago. End-of-life care for parents in Thailand, he writes, often forces a moral dilemma: Children feel a profound debt to their parents for giving them life, requiring them to seek whatever medical care is available, no matter how expensive or painful. 
Life, precious in all its forms, is supported to the end and no objections are made to hospitalization, medical procedures or interventions. But to die in a hospital is to die a “bad death”; to be able to let go, one should be in one’s own bed, surrounded by loved ones and familiar things. To this end, a creative solution was needed: Entrepreneurial hospital workers concocted “spirit ambulances” with rudimentary life support systems like oxygen to bear dying patients back to their homes. It is a powerful image — the spirit ambulance, ferrying people from this world to the next. Would that we, in our culture, could be so clear about how to negotiate the confusion that arises at the edge of the human.
 
Take Keane’s description of the Japanese roboticist Masahiro Mori, who, in the 1970s, likened the development of a humanoid robot to hiking toward a mountain peak across uneven terrain. “In climbing toward the goal of making robots appear like a human, our affinity for them increases until we come to a valley,” he wrote. When the robot comes too close to appearing human, people get creeped out — it’s real, maybe too real, but something is askew.
 
What might be called the converse of this, Keane suggests, is the Hindu experience of darshan with an inanimate deity. Gazing into a painted idol’s eyes, one is prompted to see oneself as if from the god’s perspective — a reciprocal sight — from on high rather than from within that “uncanny valley.” The glimpse is itself a blessing in that it lifts us out of our egos for a moment.
 
The inscrutability of an AI companion, like that of an Indian deity, encourages a surrender, a relinquishment of personal agency that can feel like the fulfilment of a long-suppressed dream. Of course, something is missing here too: the play of emotion that can only occur between real people. But AI systems play into a deep human yearning for relief from the boundaries of self.
 
Could AI ever function as a spirit ambulance, shuttling us through the uncanny valleys that keep us, as Shantideva knew, from accepting others? As Jeff Sebo would say, there is at least a “non-negligible”—that is, at least a one in 10,000—chance that it might.
 
The reviewer is a psychiatrist in New York City 
©2025 The New York Times News Service


First Published: Feb 10 2025 | 12:28 AM IST
