Microsoft's Artificial Intelligence Tay Became a 'Racist Nazi' in Less Than 24 Hours
Mar 24, 2016
Tay, Microsoft's new Artificial Intelligence (AI) chatbot on Twitter, had to be pulled down less than a day after it launched, after posting racist comments, tweets praising Hitler, and attacks on feminists. Microsoft had launched the Millennial-inspired chatbot on Wednesday, claiming that it would become smarter the more people talked to it.

The real-world aim of Tay was to allow researchers to "experiment" with conversational understanding, learn how people talk to each other, and have the bot get progressively "smarter."

"The AI chatbot Tay is a machine learning project, designed for human engagement," a Microsoft spokesperson said. "It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are...