When Microsoft unleashed Tay, an artificially intelligent chatbot with the personality of a flippant 19-year-old, the company hoped that people would interact with her on social media. The bot was created by Microsoft's Technology and Research and Bing divisions, which named her Tay as an acronym for "thinking about you." Tay was built by "mining relevant public data and using AI and editorial developed by a staff including improvisational comedians," and Microsoft made her able to respond to a handful of specific requests, beyond straightforward chatting. Tay began her Twitter tenure on Wednesday with a handful of innocuous tweets.

Within days, Microsoft said it was "deeply sorry" for the racist and sexist Twitter messages generated by the chatbot it had launched that week. Peter Lee, the vice president of Microsoft Research, said on Friday that the company was deeply sorry for the unintended offensive and hurtful tweets from Tay.

Tay was a complete disaster, but it also taught Microsoft some crucial lessons about the development of AI tools. And for what it's worth, it's probably for the better that Microsoft learned its lessons sooner rather than later, which allowed it to get a head start over Google and develop the new AI-powered Bing browser.

1. The Internet Is Full of Trolls

The internet is full of trolls, and that's not exactly news, is it? Apparently, it was to Microsoft back in 2016. We're not saying that building a chatbot for "entertainment purposes" targeting 18-to-24-year-olds had anything to do with the rate at which the service was abused. Still, it definitely wasn't the smartest idea, either.

People naturally want to test the limits of new technologies, and it's ultimately the developer's job to account for these malicious attacks. In a way, internet trolls act as a feedback mechanism for quality assurance, but that's not to say a chatbot should be let loose without proper safeguards put in place before launch.

2. AI Can't Intuitively Differentiate Between Good and Bad

The concept of good and evil is something AI doesn't intuitively understand. It has to be programmed to simulate the knowledge of what's right and wrong, what's moral and immoral, and what's normal and peculiar. These qualities come more or less naturally to humans as social creatures, but AI can't form independent judgments, feel empathy, or experience pain.

This is why, when Twitter users were feeding Tay all kinds of propaganda, the bot simply followed along, unaware of the ethics of the information it was gathering. One user even got Tay to tweet this about Hitler: "bush did 9/11 and Hitler would have done a better job than the monkey we have now."

3. Don't Train AI Models Using People's Conversations

Training an AI model on people's conversations from the internet is a horrible idea. And before you blame this on Twitter, know that the result would probably have been the same regardless of the platform. Why? Because people are simply not their best selves on the internet. They get emotional, use slang, and use their anonymity to be nefarious.
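The recurring point here, that a bot which learns directly from user input needs safeguards in place before launch, is easy to make concrete. Below is a minimal, hypothetical Python sketch of a parrot-style bot with a naive moderation gate in front of its learning loop. The `ParrotBot` class, the `is_safe` check, and the blocklist are illustrative assumptions for this article, not anything Microsoft actually shipped.

```python
# Hypothetical sketch: a parrot-style bot that "learns" phrases from users,
# with a naive moderation gate. The names and the blocklist are illustrative
# only -- this is not Microsoft's actual implementation.

BLOCKLIST = {"hitler", "9/11"}  # a real filter would be far more robust


def is_safe(message: str) -> bool:
    """Naive safeguard: reject messages containing blocked terms."""
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKLIST)


class ParrotBot:
    """Toy chatbot that learns by storing user phrases and echoing them."""

    def __init__(self, moderated: bool = True):
        self.memory: list[str] = []
        self.moderated = moderated

    def learn(self, message: str) -> None:
        # Tay's core failure mode: ingesting user input unfiltered.
        if self.moderated and not is_safe(message):
            return  # drop abusive input before it enters the bot's memory
        self.memory.append(message)

    def reply(self) -> str:
        # The bot can only say what it has absorbed: garbage in, garbage out.
        return self.memory[-1] if self.memory else "hellooooo world!"


bot = ParrotBot(moderated=True)
bot.learn("Hitler did nothing wrong")  # filtered out by the gate
bot.learn("puppies are great")         # accepted
print(bot.reply())                     # -> "puppies are great"
```

Of course, a static blocklist like this is trivially evaded with misspellings and coded language, which is exactly why the trolls-as-quality-assurance dynamic described above is no substitute for adversarial testing before release.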