Business Standard

Increasing AI adoption puts humanities and critical thinking back in focus

As generative AI reshapes knowledge and decision-making, the humanities face both renewed relevance and existential pressure, raising urgent questions for universities and critical thinking

Human vs AI, artificial intelligence | Image: Canva/Free

Yasmeen Arif | New Delhi


Daniela Amodei, co-founder of Anthropic, recently said that the humanities are going to make a comeback, because what matters now, in her words, is “…understanding ourselves, our history, and what makes us tick”. Studying the humanities, she added, is “…more important than ever… while large language models (LLMs) are often very good at STEM, we need people with critical thinking skills.” Echoing her, it would appear, Steven Johnson, editorial director at Google Labs’ NotebookLM, says that the importance of philosophy will be felt now more than ever. At about the same time, Sage Publications sent out a “Critical Thinking Challenge… an initiative that spotlights creative, practical approaches to strengthening critical thinking in higher education,” with a winner’s prize of $5,000. Clearly, critical thinking, even philosophy, could be trending in the generative AI industry. But there is a counter-trend that stifles this one.
 
 
Amodei, not surprisingly, has a degree in literature, which, we are often reminded, is one among the many allegedly unemployable college degrees in the STEM-dominated world, like those in history, philosophy, sociology and anthropology – well, largely the humanities and the social sciences. In most parts of the world, the trend is to close these programmes, although the closures have many ostensible reasons. A strange scenario unfolds: now that generative AI and LLMs are set to take over ‘thinking’, we need the humanities to teach us what “critical” thinking is, and perhaps even philosophy, to retain our humanity against the machinic takeover. Yet not many of these disciplines will continue their work in university spaces. This leads to an urgent concern: AI and its impact on the human sciences, and vice versa. Some believe that we are about to lose the generation that could indeed think, reason and decide without AI. These are skills crafted in the manoeuvring space between certainty and doubt that the human sciences are meant to nurture. The initiation of those skills begins and ends in university spaces, and that is what I will focus on.
 
To start with, consider AI technology and its apparent opacity. For those not directly involved in it, AI tends to be a technical spectacle, shocking and awesome at the same time. Most of us know that AI, or algorithmic thinking, has been around for about 75 years, but its current crescendo, some say, came after Google’s AlphaGo beat the world champion, Lee Sedol, at a game of Go in Seoul in 2016, with moves so unusual that few in the industry have been able to fathom them. Some machinic capacity lurked there that made everyone take notice and, predictably, made Silicon Valley think big again. That launched the era of generative AI models, and since 2023, LLMs like ChatGPT, Google’s Gemini and Anthropic’s Claude have entered our everyday, ordinary spaces and certainly overtaken our professional skills in the university. To rehearse the well-known fact: ChatGPT claimed 100 million users globally within two months, where Google Translate took 78 months and Instagram 30.
 
We might also know that there is nothing artificial, or real, about this intelligence. These systems are giant algorithms working with staggering data sets, fantastic compute speed and a planetary computing infrastructure. That architecture makes them supersmart in all those knowledge domains that are verifiable – largely what we call the STEM disciplines of science, technology, engineering and mathematics, extending to the ubiquitous computer coding. Generative AI models are built on artificial neural networks that can access, or process, the information available online on any given topic in a matter of seconds – something no single human mind can do. That has made them indispensable, albeit not without substantial downsides, in medical, meteorological, governance, corporate and financial decision-making, among much else.
 
What we might know less is that this smartness gets trickier when it comes to the humanities and social sciences. When LLM makers call their models smarter than humans, they are not referring to smartness in any of those critical thinking disciplines. And that is where the forking path lies, so to speak. With Amodei, I will hope that the humanities and social sciences retain their role in the university, but we will need to choose that option. The other option is to allow AI to colonise the ‘idea’ of intelligence with the production of ‘correct’ answers, and ‘thinking’ with the capacity for ‘problem solving’. Intelligence and thinking will then be little more than the compute capacity to win the probability game, where the most frequent statistical pattern is the correct best answer, all based on past data. That correct best answer will make a claim on efficiency, and efficiency will trump all other qualities. Those of us in the humanities will pause when faced with that. That is not, we will say, how we read, learn, write and teach. We believe we do something more. We distinguish between quantity and quality, between predictive accuracy and interpretive nuance. We attend to differences of argument, and we build the ability to discern and intuit. And we do it face to face, in a classroom.
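For readers who want the “probability game” made concrete, it can be caricatured in a few lines of code. This toy sketch is purely illustrative – no real LLM works on a lookup table of past answers – but it shows the logic the paragraph above describes: the most frequent pattern in past data is declared the correct best answer.

```python
from collections import Counter

# Toy "past data": continuations previously observed for each prompt.
# (Invented examples, for illustration only.)
past_data = {
    "the capital of France is": ["Paris", "Paris", "Paris", "Lyon"],
    "the meaning of life is": ["42", "love", "subjective", "42"],
}

def most_frequent_answer(prompt: str) -> str:
    # The "probability game": no reasoning or judgement, just return
    # whichever continuation occurred most often in the past data.
    counts = Counter(past_data[prompt])
    return counts.most_common(1)[0][0]

print(most_frequent_answer("the capital of France is"))  # Paris
print(most_frequent_answer("the meaning of life is"))    # 42
```

The point of the caricature is that the second answer, “42”, wins not because it is true or considered, but because it appears most often – exactly the substitution of statistical frequency for judgement that the humanities would contest.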
 
We will then discover very quickly that LLMs can, apparently, do better at that too. Once again, they process ‘data’ of arguments, topical text and probable relevance at a scale no single human mind can apprehend. And here too, the LLM will produce the most frequent pattern of an argument, enough material to support it, and finesse it to the latest stance, all in seconds. It will also have personalities, well trained to mimic human conversation, with companionship skills good enough to make devoted friends out of people while making convincing, knowing arguments. A generation already more intimate with the screen than with human contact will find no need for the erstwhile classroom of people, voices and simple face-to-face, interactive, slow learning. Our classrooms will have to learn AI-led efficiency. But those of us watching this transition in the classroom will remember that efficiency, for us, is learning the economy of thought – one that guarantees the freedom to choose doubt and reflection before deciding on a perspective, let alone a correct answer. One of the hardest things to develop in the human sciences is an informed perspective. But that is not the AI industry credo. The race is for scale and speed that can blur all human cognitive capacity, toward a moving goalpost, and much of it will happen before most of us know the difference between one AI model and the next.
 
The problem boils down to the impasse between the soft sciences and the hard sciences, an impasse swaddled in the question of knowledge itself. In the AI ecologies we live in, sustaining that divide between the hard and the soft sciences – the problem solvers and the problem whisperers, the predictors and the interpreters – is a profoundly foolish choice. If they interact in university spaces and interface with AI pedagogy and industry applications, something different might take shape in the coming future. (It must be said, though, that the hospitality among the hard sciences for their ‘other’ is quite wanting in this regard.) Critical thinking, even philosophy, is alleged in current university ecologies to be a bad thing, feeding the many notions of universities as breeding grounds of unrest. That confuses critical thinking itself and narrows its intent down to countering establishment thinking – which has its own debates, and those are not my concern here. What is lost in that polarisation of efficient problem-solving and critical thinking is the space where careful attention needs to be paid to translating what AI is, and how it works, for everyone using it. Critical thinking here, among other things, is the ability to distinguish between human uncertainty, ignorance, reflexivity and intuition on the one hand and, on the other, machinic cognition, machinic language, statistical certainties, scale and speed – and then to recognise the right places to call them out. Humans continue to live in histories, geographies and cultures, and not all of those are just problems to be solved; they are accrued wisdom.
 
There have been a few revolutions in human history, born of technological innovation, that transformed human life in ways eventually called epochal. The one underway now, with artificial intelligence, is going to surpass those technological epochs. This is a technology that is, as Yuk Hui has correctly said, convergent in a way unlike any other: a singular technology that can and will be applied to any and every context across the planet. LLMs are that kind of technology. It is one planetary ‘super-brain’, albeit one out of Silicon Valley, that will tell us how to read, write and think. We now talk of Small Language Models, which will intervene where English or Chinese has dominated. But the model is the same: predictive accuracy built on algorithmic casuistry, nothing else – so far, at least.
 
A few prescient ones have said that we do not build technology; technology builds us. With LLMs around for a while now, it has become quite clear that it is very difficult to distinguish between machine writing and human writing. And for the most part, the prevailing opinion will be that AI does better. LLM industry jargon often mentions the human loss function – humans, with their ambiguous language, tend to create loss in modelling compute. As far as AI is concerned, human language is already a loss. In the universities, the humanities and social sciences are the loss. For those of us in academic professions, that should give more than pause. The next time we open our screens to find a friendly assistant popping up to help us do our work more efficiently, we might want to ‘think’ twice.
 
(The author is Professor of Sociology at Shiv Nadar University, Delhi, India, and Distinguished Fellow 25-26 at the Max Weber Kolleg, Erfurt, Germany)
 
Disclaimer: These are personal views of the writer. They do not necessarily reflect the opinion of www.business-standard.com or the Business Standard newspaper
 


First Published: Mar 31 2026 | 11:25 PM IST
