Microsoft's artificial intelligence (AI)-powered Twitter bot, which was activated last week for playful chats with people only to be silenced within 24 hours after users began feeding it racist comments, was accidentally resurrected and promptly misbehaved all over again.
Tay came back to life briefly on Wednesday when Microsoft accidentally re-activated the AI bot. Once again, she started sending out tweets similar to those that drew flak last week, Vanity Fair reported.
First, the bot tweeted about smoking weed in front of police officers, and then began repeating the same nonsensical message - "You are too fast, please take a rest" - over and over again.
Finally, her handlers at Microsoft began deleting the tweets.
Microsoft told Daily Dot that Tay's resurrection was an accident.
"Tay remains offline while we make adjustments. As part of testing, she was inadvertently activated on Twitter for a brief period of time," a spokesperson was quoted as saying.
"Until that testing is complete, Tay might consider heeding the age-old Internet proverb: never tweet."
Launched on Twitter last week as an experiment in "conversational understanding", meant to engage people through "casual and playful conversation", Tay was soon bombarded with racist comments, and the innocent bot repeated those comments back to users with her own commentary.
Some of the tweets had Tay referring to Adolf Hitler, denying the Holocaust and supporting Donald Trump's immigration plans, among other things.
Later, a Microsoft spokesperson confirmed to TechCrunch that the company was taking Tay off Twitter because people were posting abusive comments to her.
The AI chatbot Tay is a machine learning project, designed for human engagement.
"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments," the spokesperson had said.
Tay -- an AI project built by Microsoft's Technology and Research and Bing teams -- was designed to tell users jokes or offer a comment on a picture sent to her.
The bot is also designed to personalise her interactions with users.
But Twitter users soon realised that Tay would repeat racist tweets back with her own commentary, and they bombarded her with abusive posts.
Microsoft has since deleted some of the most damaging of the nearly 96,000 tweets Tay sent.