Following the success of the Nano Banana edits, a fresh wave has taken over Instagram: vintage saree portraits. These edits reimagine women in classic sarees placed against retro, film-like backdrops. The Nano Banana tool itself, powered by Google’s Gemini 2.5 Flash Image model, first gained attention for its quirky ability to transform selfies into toy-like 3D figurines.
Building on that popularity, the vintage saree portraits trend has surged, bringing back aesthetics inspired by the 1940s through the 1970s with the help of Gemini’s AI editing tools. But as these trends spread widely, they also raise an important question: are such playful experiments harmless fun, or do they risk exposing personal data?
Previous AI imaging trends
Before vintage saree edits became the latest craze, several other AI experiments caught social media’s attention. The Nano Banana trend in particular saw users upload selfies and transform them into images of figurines with plastic-like skin, oversized eyes, and exaggerated cartoonish traits.
Before that, OpenAI’s ChatGPT introduced imaging features that inspired another viral phenomenon: Ghibli-style portraits. Drawing inspiration from Studio Ghibli’s animations, these edits reinterpreted selfies into whimsical, hand-drawn anime-style characters.
How Safe is the Gemini Nano Banana Tool?
According to a report by Mint, while companies such as Google and OpenAI have built-in safeguards to protect user data, overall safety also depends on how individuals handle their uploads and who eventually accesses them.
Google’s Nano Banana tool adds a layer of protection through SynthID, an invisible watermark, along with metadata markers to signal that the image is AI-generated. Any content created or edited with Gemini 2.5 Flash Image includes this watermark, which helps ensure accountability and transparency for users.
Does watermarking offer real protection?
SynthID, although invisible to the human eye, can be detected with specialised tools to confirm whether an image has been created or altered with AI. This makes it possible to trace an image’s origins and establish authenticity.
However, Mint, citing Tatler Asia, noted that the detection tool for SynthID isn’t publicly available, leaving ordinary users without the means to verify AI-generated content themselves. This limitation raises doubts about its practical effectiveness in everyday scenarios.
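Since the official SynthID detector is not public, the most an ordinary user can do is inspect an image file for the plain-text provenance metadata that some AI tools embed. The sketch below, a rough heuristic rather than a real detector, scans a file’s raw bytes for the IPTC digital-source-type value "trainedAlgorithmicMedia", which the IPTC standard defines for AI-generated media; whether any given image carries this marker is an assumption, since metadata can be stripped or simply absent, and this does not read the invisible SynthID watermark at all.

```python
def has_ai_metadata_marker(path: str) -> bool:
    """Crude check: does the file contain a known AI-provenance
    metadata string? This is a heuristic sketch, not a reliable
    detector -- metadata is easily removed, and the SynthID
    watermark itself cannot be read this way."""
    # IPTC digital source type values used to label AI-generated
    # or AI-composited media (assumed markers for illustration).
    markers = [
        b"trainedAlgorithmicMedia",
        b"compositeWithTrainedAlgorithmicMedia",
    ]
    with open(path, "rb") as f:
        data = f.read()
    # Search the raw bytes so the check works regardless of whether
    # the marker sits in XMP, EXIF, or another metadata segment.
    return any(marker in data for marker in markers)
```

A negative result proves nothing: an AI-generated image with stripped metadata passes this check, which is exactly the weakness critics raise about watermark-and-metadata schemes.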
Critics also point out potential weaknesses, noting that watermarks can be faked, removed, or simply ignored. The report quoted Ben Colman, CEO of Reality Defender, as saying that watermarks’ “real-world applications fail from the onset.”
Experts echo this sentiment, stressing that watermarking alone won’t solve the issue. UC Berkeley professor Hany Farid told Wired that no one views watermarking as a stand-alone solution, adding that pairing it with additional safeguards would make counterfeiting more difficult.
The bigger picture
As social media embraces each new AI trend — whether it’s playful figurines, anime portraits, or retro saree edits — the technology highlights not just new creative possibilities but also unresolved questions around digital privacy and responsible image use. The debate around tools like Gemini Nano shows how the excitement of experimentation goes hand in hand with the need for stronger protection measures in the AI era.