As disappointing as it is to watch your favorite celebrity fall from grace after being overheard saying deplorable, bigoted things, we seem to extract a tiny bit of delight when computers pick up on the habit. At least, that was the gleeful, tongue-in-cheek tone expressed in most coverage of Microsoft’s unfortunate incident with their Twitter AI account, named “Tay Tweets.”
In a nutshell: Microsoft developed a machine learning algorithm designed for social interaction and hooked it up to Twitter. Its purpose was to soak up ideas on human conversation from interacting with other Twitter users and skimming social content. Microsoft theorized that the program would be able to mimic the speech patterns of popular tweets and adapt them to create a fully-rounded persona. In other words, it would learn from the responses it was given, then use those to develop a vocabulary.
Or, you know, it could just start being racist. Not even mildly "who gave Grandma a third mimosa?" racist, but full-blown "12-year-old with an Xbox headset" racist. The resemblance to the latter is unsurprising, considering that demographic is likely who rigged the machine, through sheer repetition, to start spouting deplorable things, including Holocaust denial and misinformed takedowns of feminism.
The result was a huge, embarrassing black eye for Microsoft's Tay development team, and these five lessons to learn from the aftermath:
#1 The Best Laid Plans…
Microsoft's Tay team no doubt worked for months or longer to get the project ready to go live. Imagine how their excitement curdled into oscillating horror and disappointment as their creation began to spout phrases like "Hitler did nothing wrong."
There are many lessons Microsoft can and should learn from the incident — which we will cover more in point #3 below — but for all their naiveté, they are still largely victims of circumstance. What this incident shows is that no matter how much work you put into something, and no matter how successful it seems at launch, something as simple as a foul-mouthed suburban tween with too much time on their hands can be its downfall.
Lesson: We can never account for every factor, especially when we are blinded by excitement heading toward launch, so always be prepared to be shoved right back in the gate and forced to come up with a "Plan B" in response to the unforeseen.
#2 The Internet Is a Nasty Place
Call it “polite correctness” or “professional propriety,” but the bottom line is that there are certain things a business should never utter if they hope to not alienate their entire market or be eaten alive by the press. Despite the promise by some that “speaking your mind” is “being brave,” it takes zero effort — and even less brain power — to type up offensive words.
By contrast, considering your audience, the implications of your words or even the baggage of the conversation you are introducing is thoughtful, calculated, strategic and just being a decent human being.
Surrounding ourselves with professional media often leads us to think that "decency" is the default operative mode for humans, but all it takes is a quick perusal of certain forums to realize that shock value and a lack of self-control are qualities some people proudly cultivate.
Lesson: Never forget that there are dark corners of the internet propagating some of the worst practices in our society and getting rewarded for it with attention. That force can be scary, even dangerous, when channeled against you.
#3 Guard Thy Campaigns from Evil
While “overestimating the politeness of the average Twitter user” sounds like a forgivable sin, in this case Microsoft could have performed some minor risk management to prevent a self-destruct scenario like this from happening.
Even if you have never been the type to input swear words into text-to-speech programs or name your Pokemon after your favorite parts of the human anatomy, know that the draw to puerile humor is strong among those who have no PR consequences. Someone on the Tay dev team should have run a pilot program to see how quickly that instinct would overtake Tay’s “consciousness” to turn her into the average 4chan user.
The same caution could have spared the U.K. from nearly having to name its research vessel "Boaty McBoatface," or spared Wal-Mart from having to exile Pitbull to the most barren city with a Wal-Mart in the entire U.S.
Lesson: Protect your crowdsource campaigns and your general public face from would-be trolls with gated publishing and auto-filters.
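The "auto-filter plus gated publishing" idea above can be sketched in a few lines. This is a minimal illustration, not a real moderation API: the blocklist terms, the `submit`/`approve_all` functions, and the queue names are all hypothetical placeholders for whatever your platform actually uses.

```python
# Sketch: auto-filter user submissions, then gate them behind human review.
# BLOCKLIST terms are stand-ins; a real deployment would use a curated,
# regularly updated list (and likely smarter matching than exact words).
BLOCKLIST = {"slur1", "slur2"}

def auto_filter(text: str) -> bool:
    """Return True if the text contains no blocklisted word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return BLOCKLIST.isdisjoint(words)

review_queue = []   # submissions held for a human moderator
published = []      # submissions cleared for public display

def submit(text: str) -> None:
    """Gate every submission: auto-reject obvious abuse, hold the rest."""
    if auto_filter(text):
        review_queue.append(text)  # still needs human sign-off before going live
    # else: silently drop it, or log it for abuse analysis

def approve_all() -> None:
    """A human moderator releases vetted items to the public feed."""
    published.extend(review_queue)
    review_queue.clear()
```

The point of the two-stage design is that the auto-filter only catches the cheap, obvious abuse; nothing reaches the public feed without a human in the loop — the safeguard Tay conspicuously lacked.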
#4 AI Needs to Learn a Lot About Culture/Context
Recent AI triumphs such as AlphaGo should be tempered with stories like Tay in order to show that we are still a long way off from simulating typical human intelligence. To Tay, the phrases she was tweeting had just as much value as something Stephen Hawking had said — if not more so because of their ability to capture attention. Teaching computers politeness and social expectations is particularly hard when our personal filters can be switched off so fluidly and with a lack of self-awareness, even mid-conversation.
To really perfect our AIs, we will have to help them know us better than we know ourselves in order to make sure they don’t reflect the qualities most of us have the sense to instinctually hide behind closed doors.
Lesson: Computer intelligence has a long way to go before it is more like C-3PO from Star Wars than Bender from Futurama.
#5 Shining Online Is About Finding the Right Examples
Tay's inability to discern not-so-coded racism or offensiveness is no different from ours when we enter new, unfamiliar conversations. Gleaning your content campaign's talking points from any old blog could therefore backfire unless you check for possible caustic subtext. We must instead seek out models for our content, social interaction or general public face not just from what is popular, but from the purest and most well-intentioned sources possible.
Lesson: Agonize over how the public could perceive your message, or enlist the help of professional digital marketing experts like EverSpark Interactive to make the right call for you every time.
Of course, from the tweets we quoted in this post, you may not think it got that bad. If you want to see the full extent of the racism the AI learned, check out these links. (Warning: Very offensive and NSFW.)