According to a report in The Guardian, a researcher recently asked OpenAI’s ground-breaking chatbot, ChatGPT, about articles on a particular topic, and the AI simply invented fake ones, including a fake Guardian article that even the newspaper’s own staff, who had never written it, found plausible.
However, it would be a serious mistake to assume that only ChatGPT-type chatbots can disseminate fraudulent content. Generative AI, in its many incarnations, poses a threat to society because it can generate new content, such as images, text, music, or even full videos, from enormous amounts of training data.
While the internet has been captivated by AI-generated images of characters from the epics or of historic rulers, such images can sometimes, though not always, be as dangerous as ChatGPT-generated text, as evidenced by the viral spread of realistic yet fake images of Pope Francis wearing a puffy Balenciaga jacket, Elon Musk strolling alongside rival General Motors CEO Mary Barra, or a “Stormy” arrest of Donald Trump. Designed to enhance human creativity, AI image generators have quickly emerged as the most eye-catching tools on the internet, producing a variety of realistic-looking fake images. The issue is that anyone can produce anything. For instance, using Midjourney, a Pennsylvania lawyer recently created images highlighting the resilience of the conspiracy theory that the moon landings were staged.
Although AI-generated art has existed for some time, modern tools allow even complete novices to produce intricate, abstract, or photorealistic works merely by typing a few words into a text box. OpenAI’s launch of Dall-E in 2021 was a “breakthrough” event. In the second half of 2022, Dall-E 2, Stability AI’s Stable Diffusion, and Midjourney were all made available to the general public. Indeed, a revolution!
Yes, AI-generated images can have a “plasticky” appearance and semantic inconsistencies, shortcomings that currently allow them to be detected. But as the technology continues to advance at a rapid pace, these images will become ever more realistic, posing several uncertainties for society. In February, the US Copyright Office ruled that the Midjourney-generated images in Zarya of the Dawn, an 18-page comic book by AI expert Kristina Kashtanova, are not protectable under current copyright law because they “are not the product of human authorship.” This ruling may have broad ramifications for artists. Meanwhile, Getty has accused Stability AI of illegally copying more than 12 million Getty photos, along with their captions and metadata, to train the software behind its Stable Diffusion tool.
A photographer who gained popularity with his black-and-white portraits on Instagram has disclosed that he used AI to create images that were mistaken for his own work. Additionally, a sexually explicit advertisement featuring actor Emma Watson’s face was recently promoted via the face-swapping app “Facemega.” Even human models may be in danger, given that Levi’s has partnered with an AI firm and is planning “tests of this technology using AI-generated models to supplement human models, increasing the number and diversity” of the models for its products “in a sustainable way.”
In the future, generative AIs may blur the line between “true” and “fake”, if they haven’t already, as further advances, investment, and new participants enter the field. Disinformation would spread more easily via the internet and social media, and political and social campaigning would grow ever shadier.
And then, there is the question of trust. Reporter Tiffany Hsu and author Steven Lee Myers noted in a recent New York Times piece that “the technology could hasten an erosion of trust in media, in government and in society. If any image can be manufactured — and manipulated — how can we believe anything we see?”
What should one do, really, when generative AI seems to be pushing the boundaries? “The age of AI has begun,” Bill Gates declared in a recent piece discussing how AI might impact employment, healthcare, and education. Mr Musk, whose feud with Mr Gates could one day become as epic as that of John Rockefeller and Andrew Carnegie, believes AI is more dangerous than nuclear warheads and says there should be a regulatory body overseeing the development of superintelligence. More than 1,800 signatories, including Mr Musk, cognitive scientist Gary Marcus, and Apple co-founder Steve Wozniak, recently requested a six-month pause on the development of systems “more powerful” than GPT-4. But some are genuinely curious: what, exactly, qualifies as “more powerful than GPT-4”?
The US Commerce Department has now officially announced that it is seeking public feedback on how to create AI accountability measures, in order to advise US policymakers on how to approach the technology. But it is far from clear how the use and development of AI should be regulated. In the meantime, even if I saw a picture of Volodymyr Zelensky shaking hands with Vladimir Putin on the internet, I wouldn’t be overly delighted. I would first confirm it with a reliable news source!
The writer is professor of statistics, Indian Statistical Institute, Kolkata