Microsoft Says It's Deeply Sorry for Racist and Offensive Tweets by Tay AI Chatbot
After Microsoft's Twitter-based Artificial Intelligence (AI) chatbot 'Tay' went disastrously off the rails earlier this week, Microsoft has apologized and explained what went wrong.

For those unaware, Tay is a millennial-inspired artificial intelligence chatbot unveiled by Microsoft on Wednesday that is supposed to talk with people on social media networks like Twitter, Kik, and GroupMe and learn from them.

However, less than 24 hours after its launch, the company pulled Tay down, following a flood of incredibly racist comments, Holocaust references, and tweets praising Hitler and bashing feminists.

In a blog post published Friday, Peter Lee, Corporate Vice President of Microsoft Research, apologized for Tay's disturbing behavior, though he suggested that malicious users might have influenced the AI teenager.
"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Lee wrote. "Tay is now offline, and we will look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."
Within 16 hours of her launch, Tay was professing her admiration for Hitler, her hatred for Jews and Mexicans, and graphically soliciting sex. She also blamed US President George W. Bush for the 9/11 terrorist attacks.

In one tweet, Tay expressed her thoughts on feminism, saying "I f***ing hate feminists and they should all die and burn in hell."

Tay's Offensive Tweets Were Due to a Vulnerability

Since Tay was programmed to learn from people, some of her offensive tweets were reportedly achieved by people asking her to repeat what they had written, allowing them to put words into her mouth, though some of her responses were organic.
"A coordinated attack by a subset of people exploited a vulnerability in Tay," Lee wrote. "As a result, Tay tweeted wildly inappropriate and reprehensible words and images."
Microsoft has not disclosed the exact nature of the vulnerability, but the whole idea behind Tay was an AI bot that mimics the casual speech patterns of millennials in order to "conduct research on conversational understanding."
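Microsoft has not published Tay's code, so the following Python sketch is purely illustrative, with a hypothetical handle_message() function and a toy blocklist. It shows how an unguarded "repeat after me" command lets anyone put arbitrary words in a bot's mouth, and how even a crude screening step changes that.

```python
# Illustrative sketch only -- not Microsoft's actual code.
# A naive echo command lets users publish arbitrary text through the bot;
# screening the echoed text before posting is a minimal mitigation.

BLOCKLIST = {"hate", "hitler"}  # toy example; real filters are far more sophisticated

def handle_message(text: str) -> str:
    """Hypothetical chatbot handler with a 'repeat after me' command."""
    prefix = "repeat after me "
    if text.lower().startswith(prefix):
        reply = text[len(prefix):]
        # Vulnerability: without this check, the bot repeats user-supplied
        # text verbatim, letting attackers put words in its mouth.
        if any(word in reply.lower() for word in BLOCKLIST):
            return "I'd rather not say that."
        return reply
    return "Interesting! Tell me more."

print(handle_message("repeat after me hello world"))    # echoed harmlessly
print(handle_message("repeat after me I hate humans"))  # caught by the filter
```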

Microsoft has since deleted as many as 96,000 tweets made by Tay and suspended the experiment, though the company is not giving up on Tay, and she will return.

Microsoft says it is working to limit technical exploits, but also acknowledges that it cannot fully predict "all possible human interactive misuses without learning from mistakes."
