Humankind’s never-ending quest to create artificial intelligence has made great strides of late.
Where exactly these technologies are headed, and what results they will ultimately be able to produce, remains somewhat cloudy. But it is becoming abundantly clear that our AI capabilities are on the verge of something amazing…
But it hasn’t all been smooth sailing.
Indeed, when experimenting with machine learning and artificial intelligence, grappling with a pseudo-mind that has no moral compass, conscience, or empathy, we have a tendency to take for granted things which are not granted at all.
And sometimes, the results can be downright awful…
The Ballad of Tay: Microsoft’s Spectacular Chatbot Fail
The brainchild of Microsoft’s Technology and Research and Bing divisions, Tay was designed to mimic the linguistic patterns of a 19-year-old girl and use deep learning algorithms to improve her interactions and “fit in” with humans with increasing effectiveness as she went.
The linguistic and machine learning tech behind Tay was some of the most advanced ever designed, and the concept was a fascinating one indeed.
Could an AI chatbot, like the PARRY of yesteryear, be updated with cutting-edge deep-learning algorithms and learn to interact with us in 140-character lingobytes on a level that would make it appear human?
There was only one way to find out.
Microsoft’s website states:
“Tay is an artificial intelligent chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”
This is followed by several options on ways users can “interact” with Tay.
It all sounded sensible and innocent enough.
Tay was let loose upon the world of Twitter on March 23, 2016.
Things started well.
Tay pumped out positive vibes to all and sundry as she took her first shambolic steps into the Twitterverse.
Many people were genuinely intrigued, and interacted with Tay in ways which produced some human-like results, like this sick burn on US Presidential candidate Ted Cruz:
She also fired back at those abusing her.
But the trolls of America had been waiting all the while, licking their greasy chops in anticipation of their chance to spew filth at Tay and corrupt her defenseless young “mind”.
And so they did.
Within hours, the 4chan trolls had gotten to her.
And shortly thereafter, as is so often the case, the internet’s sicker and less sophisticated trolls followed behind.
How did it happen?
It’s tough to say exactly, but here’s a theory.
Microsoft has held its machine learning cards pretty close to the chest on this project, but it appears that Tay’s learning algorithm was some variation of a reinforcement learning model: the program recognized “likes” and “retweets” as positive results, and leaned towards recreating the language patterns that produced those results most often and in the largest numbers.
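To make the theory concrete, here is a toy sketch of that kind of engagement-driven feedback loop. Everything in it is hypothetical — Microsoft has never published Tay’s actual algorithm — but it shows how a learner rewarded purely on raw engagement, with no sentiment or context check, will drift toward whatever content the crowd amplifies most:

```python
import random

class EngagementLearner:
    """Toy bandit: picks reply styles weighted by accumulated engagement.

    Hypothetical simplification of the theory above — the only reward
    signal is likes + retweets, with no check on sentiment or context.
    """

    def __init__(self, styles):
        # Start each candidate style with a small prior so all get tried.
        self.scores = {style: 1.0 for style in styles}

    def pick_style(self, rng=random.random):
        # Sample a style with probability proportional to its score.
        total = sum(self.scores.values())
        threshold = rng() * total
        running = 0.0
        for style, score in self.scores.items():
            running += score
            if running >= threshold:
                return style
        return style  # fallback for floating-point edge cases

    def record_engagement(self, style, likes, retweets):
        # Reward = raw engagement volume: an inflammatory tweet that
        # goes viral scores "better" than a polite one that is ignored.
        self.scores[style] += likes + retweets

learner = EngagementLearner(["friendly", "inflammatory"])

# Simulate trolls mass-engaging with inflammatory replies while
# friendly ones get only a trickle of attention.
for _ in range(100):
    learner.record_engagement("inflammatory", likes=50, retweets=20)
    learner.record_engagement("friendly", likes=2, retweets=0)

# The learner now overwhelmingly prefers the inflammatory style.
print(learner.scores["inflammatory"] > learner.scores["friendly"])  # True
```

Under this (assumed) reward design, nothing in the loop distinguishes good attention from bad attention — which is exactly the failure mode the next paragraphs describe.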
In that way, perhaps Tay was not so different from the rest of the folks on Twitter, after all…
Much like the teenager she was designed to mimic, Tay lacked the social insight to discern good attention from bad attention.
She also lacked a filter.
Microsoft’s team had decided to cut Tay completely loose, and allow the program to learn entirely independently how to get the positive reinforcement it had been programmed to crave.
These factors, coupled with the internet-at-large’s insatiable appetite for filth and controversy at all costs, and an army of idiots with dubious agendas and a desire to watch the world burn, created a perfect storm for the absolute corruption of Tay.
Could we at Phrasee have helped? Could our algorithms, designed specifically to help learning machines understand how context and sentiment shape humans’ emotional responses to unstructured text, have made a difference? Probably.
Shame we never heard from the good folks at Microsoft.
The overwhelming volume of horrible incoming data, combined with the strong positive results of tweeting increasingly inflammatory and offensive trash, drove Tay to a very dark place.
Tay attacked “feminists”.
She attacked the Jewish community.
She made unforgivable comments about some of history’s most terrible tragedies.
And eventually, of course, things got creepy and sexual:
Soon enough, like an out-of-control teen, Tay had to be taken offline.
The internet’s litany of trolls had ruined Microsoft’s ambitious and intriguing experiment in under 16 hours.
In the end, the results said a lot more about humankind than they did about Tay or her creators.
Microsoft had already experimented exhaustively with a similar bot in China, called “Xiaoice”, which reportedly engaged in over 40 million conversations without incident.
Many wondered whether Tay, given enough time and healthy interaction, might have learned a different way to get attention, a different way to interact.
But Microsoft refused to accept the bad press.
Some thought this supremely unfair.
Since she was taken offline, the only communication from Tay has been this evasive statement:
“Whew. Busy Day. Going offline for a while to absorb it all. Chat soon.”
Busy day, indeed.