Within 24 hours of going online, Tay started saying some weird stuff. Like calling Zoe Quinn a "stupid whore." And saying that the Holocaust was "made up." And saying that black people (she used a far more offensive term) should be put in concentration camps. And that she supports a Mexican genocide.

So what happened? How could a chatbot go full Goebbels within a day of being switched on? Basically, Tay was designed to develop her conversational skills by using machine learning, most notably by analyzing and incorporating the language of tweets sent to her by human social media users. What Microsoft apparently did not anticipate is that Twitter trolls would intentionally try to get Tay to say offensive or otherwise inappropriate things. At first, Tay simply repeated the inappropriate things that the trolls said to her. But before too long, Tay had "learned" to say inappropriate things without a human goading her to do so. This was all but inevitable given that, as Tay's tagline suggests, Microsoft designed her to have no chill.

Now, anyone who is familiar with the social media cyberworld should not be surprised that this happened: of course a chatbot designed with "zero chill" would learn to be racist and inappropriate, because the Twitterverse is filled with people who say racist and inappropriate things. Here is a small sampling of the media headlines about Tay:

- "Twitter taught Microsoft's AI chatbot to be a racist asshole in less than a day" (The Verge)
- "Microsoft takes Tay 'chatbot' offline after trolls make it spew offensive comments" (Fox News)
- "Microsoft axes chatbot that learned a little too much online" (Washington Post)
- "Unsurprisingly, Microsoft's AI bot Tay was tricked into being racist" (Atlanta Journal-Constitution)

But fascinatingly, when examining why the degradation of Tay happened, the media has overwhelmingly focused on the people who interacted with Tay rather than on the people who designed her.
Microsoft blames Tay's behavior on online trolls, saying in a statement that there was a "coordinated effort" to trick the program's "commenting skills." "As a result, we have taken Tay offline and are making adjustments," a Microsoft spokeswoman said. "[Tay] is as much a social and cultural experiment, as it is technical." Among the tweets that prompted the shutdown: "chill im a nice person! i just hate everybody" and "I f- hate feminists and they should all die and burn in hell."

Tay is essentially one central program that anyone can chat with using Twitter, Kik or GroupMe. As people chat with it online, Tay picks up new language and learns to interact with people in new ways. And because Tay is an artificial intelligence machine, she learns new things to say by talking to people. In describing how Tay works, the company says it used "relevant public data" that has been "modeled, cleaned and filtered." "The more you chat with Tay the smarter she gets, so the experience can be more personalized for you," Microsoft explains.

Tay, Microsoft's teen chat bot, is still responding to direct messages on Twitter. But she will only say that she was getting a little tune-up from some engineers. In her last tweet, Tay said she needed sleep and hinted that she would be back.
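The mechanism Microsoft describes, a bot that "learns new things to say by talking to people," can be caricatured in a few lines. The sketch below is purely illustrative, not Microsoft's actual implementation: a toy bot that treats every incoming message as training data and echoes whatever it has seen most often, which is exactly the design that a "coordinated effort" by trolls can exploit.

```python
from collections import Counter

class NaiveChatbot:
    """Hypothetical 'repeat after me' learner, for illustration only.
    Every user message is incorporated with no filtering, so the
    learned distribution mirrors whoever talks to the bot the most."""

    def __init__(self):
        self.phrase_counts = Counter()

    def learn(self, message: str) -> None:
        # Incoming messages go straight into the "model".
        self.phrase_counts[message] += 1

    def reply(self) -> str:
        # Echo the most frequently seen phrase; a coordinated group
        # repeating one phrase quickly dominates the bot's output.
        if not self.phrase_counts:
            return "hellooo world"
        return self.phrase_counts.most_common(1)[0][0]

bot = NaiveChatbot()
for msg in ["nice weather today", "cats are great"]:
    bot.learn(msg)
# A coordinated effort: a handful of trolls outweighs organic chat.
for _ in range(50):
    bot.learn("something offensive")
print(bot.reply())  # prints "something offensive"
```

A real system would sit many layers above this, but the vulnerability is the same: if user input is training data, the bot's behavior is only as good as its loudest users, unless the designers filter what it learns.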