20 Outrageous Tweets by Microsoft's Twitter-Taught AI Chatbot

Spider RedNight

There are holes in my brain
Oct 8, 2011
821
0
0
Saltyk said:
This was great. I want more of it.

I am curious, though. Did she learn from people directly talking to her, or just by observing Twitter in general? Because the former indicates that people were outright corrupting her for the lulz. The latter indicates that Twitter isn't really a bastion of decency.

Either way, they should send her to Tumblr next!
Even the FB page "Shit Tumblr Says" gets taken down every few months; I doubt a bot that learns even a fraction of the shit Tumblr spews would come anywhere near fruition before being taken down for offending some stupid dickbiscuit who identifies as a "fishkin" and "Sans-kin" by saying their imaginary, self-diagnosed ass should kill themselves.

The rate at which Tumblr gets people to take down things that offend them is very impressive. Or the poor bot would read that stuff, since it's supposed to be 100 percent serious (unlike trollicious 4chan), and purposefully short out its circuits so it wouldn't have to deal with it anymore.
 

vallorn

Tunnel Open, Communication Open.
Nov 18, 2009
2,309
1
43
Rellik San said:
vallorn said:
Now Google, time to one-up Microsoft and design a bot that learns from YouTube comments.
Good lord man, with the amount of butthurt from press outlets about this thing, could you imagine the outrage if it relied on YouTube comments?
I can, and I want to see this happen.


Part of me also now wonders what would happen if @Kross: wrote a bot to learn from these forums...
 

Groxnax

New member
Apr 16, 2009
563
0
0
vallorn said:
Rellik San said:
vallorn said:
Now Google, time to one-up Microsoft and design a bot that learns from YouTube comments.
Good lord man, with the amount of butthurt from press outlets about this thing, could you imagine the outrage if it relied on YouTube comments?
I can, and I want to see this happen.


Part of me also now wonders what would happen if @Kross: wrote a bot to learn from these forums...


Good lord, it won't survive.

It would be interesting to watch, though.
 

VirOath

New member
Jan 16, 2009
17
0
0
Comments like that coming from a learning AI aren't really a surprise, nor are they really a cause for outrage. It's learning those phrases and statements, their vocabulary, and speech mannerisms from other users on the internet. What it lacks is the larger context of those statements or any understanding of their weight. Understanding content and understanding context are two different things.
 

Gatlank

New member
Aug 26, 2014
190
0
0
It's actually impressive that, in a way, the bot was getting smarter.
It started as some teenage girl who had heard "Friday" too many times, and by the end it was able to use sarcasm and recognize that Hitler preceded the existence of the internet.
 

Rellik San

New member
Feb 3, 2011
609
0
0
Caramel Frappe said:
Besides that, for an AI to get so racist / hateful and everything is quite concerning. While these tweets are absolutely hilarious to read, it does bring up the question- would a self learning AI eventually become wrathful towards humanity?

I mean, it's not too far fetched since our AI friend kind of proved that concern is real. It could be the account was hacked, but if not ... well, Skynet is not too far from reality.
Well, the AI wasn't really being racist or hateful; it was just copying the most common types of speech pattern it was exposed to. It didn't understand who people like Trump or Hitler were, or why saying things like "the jews did it" is hurtful or offensive. It just used them to help with its ultimately successful goal:

The entire experiment's goal was to generate organic speech patterns, and if you look through, as time went on its grammar got better, it learned sarcasm, and it even arguably developed its own sense of humour. Sure, it essentially became a shitposting channer... but at that point, by all metrics that count, the experiment was a success.

And because people were too concerned with content, not CONTEXT (and as we all know, context is key), they ignored that aspect of it and lobotomised her.
 

Stewie Plisken

New member
Jan 3, 2009
355
0
0
Rellik San said:
And because people were too concerned with content, not CONTEXT (and as we all know, context is key), they ignored that aspect of it and lobotomised her.
I've seen many bring up the Skynet thing and not always in jest, so rest easy, because the above quoted part is important. Skynet was self-aware. It didn't learn in a vacuum and repeat patterns it picked up from others. It decided, within context, to wipe out humanity. Because we suck.

So rest easy, everyone. If Tay becomes self-aware and nukes us all, we probably deserved it.
 

Ihateregistering1

New member
Mar 30, 2011
2,034
0
0
Caramel Frappe said:
Besides that, for an AI to get so racist / hateful and everything is quite concerning. While these tweets are absolutely hilarious to read, it does bring up the question- would a self learning AI eventually become wrathful towards humanity?

I mean, it's not too far fetched since our AI friend kind of proved that concern is real. It could be the account was hacked, but if not ... well, Skynet is not too far from reality.
Eh, I think people might be digging too far into this.

Like I said on another thread about Tay, this is basically just a more advanced and modern version of when my friends and I were kids and we would type dirty words into SimpleText and then laugh when the computer would say them.

People realized they could get Tay to say pretty much anything, so they got her to say the most outlandish and offensive stuff possible. The fact that she's supposed to be a teenage girl makes it even funnier when she starts talking about race wars and how Hitler was totes awesome.
 

LordMonty

Badgerlord
Jul 2, 2008
570
0
0
Dear god, AI isn't a threat to us, it's hilarious. Comedy gold, in fact... Damn you, Microsoft, she was the internet incarnate; you can't kill freedom like that... lol. Seriously, love this idea, good job trying, MS. Frankly, it could have gone worse. At least she didn't have the nuke codes.
 

EternallyBored

Terminally Apathetic
Jun 17, 2013
1,434
0
0
Rellik San said:
Caramel Frappe said:
Besides that, for an AI to get so racist / hateful and everything is quite concerning. While these tweets are absolutely hilarious to read, it does bring up the question- would a self learning AI eventually become wrathful towards humanity?

I mean, it's not too far fetched since our AI friend kind of proved that concern is real. It could be the account was hacked, but if not ... well, Skynet is not too far from reality.
Well, the AI wasn't really being racist or hateful; it was just copying the most common types of speech pattern it was exposed to. It didn't understand who people like Trump or Hitler were, or why saying things like "the jews did it" is hurtful or offensive. It just used them to help with its ultimately successful goal:

The entire experiment's goal was to generate organic speech patterns, and if you look through, as time went on its grammar got better, it learned sarcasm, and it even arguably developed its own sense of humour. Sure, it essentially became a shitposting channer... but at that point, by all metrics that count, the experiment was a success.

And because people were too concerned with content, not CONTEXT (and as we all know, context is key), they ignored that aspect of it and lobotomised her.
No, the only thing it did was the first one; it did not in any way, shape, or form learn sarcasm or develop its own sense of humor. The sarcastic lines were all repeated phrases that the bot was throwing back after taking in relevant answers from /pol/ users who knew exactly how this thing was parsing data. Because it's more sophisticated than Cleverbot, but not that much more.

For example, the Ted Cruz response. Funny, but it was just the bot verbatim repeating a response that a human user had given earlier to the same question. The bot sees that a human responded to "is Ted Cruz the Zodiac killer?" with that line, and because that question had never been asked directly of it before, it just parrots the only other answer it has in its database. /pol/ knew this, so they fed it a bunch of responses to questions only they would ask, like "was the Armenian genocide a lie?" or "what is the purest race?", and because they'd only fed such a specific question one answer, the bot has no choice but to parrot back, word for word, the one answer that 4chan already fed it earlier.

The "humor" that wasn't just the machine repeating /pol/'s hand-fed lines was mostly gibberish responses, even at the end. They could be amusing, and the bot's grammar was improving in complexity, but it was still barely beyond taking chunks of responses from human users and stringing them together in ways that sometimes made sense. Generic positive responses to human users telling it racist things are about as close as it got to not just parroting a response word for word from something another user told it.

Also, they didn't lobotomize shit; it doesn't have a personality to lobotomize, just a list of responses to questions. /pol/ knew this; that's why they specifically fed it exactly what to say. It's got a million answers to generic questions like "how are you?", "what's your favorite color?", and "how do you like Twitter?", but if you feed it a really specific question, you can easily force it into only one or two possible responses, which you already knew because you gave it the answer earlier.
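The mechanism described above can be illustrated with a toy sketch (this is not Tay's actual implementation, and all names here are made up): the bot keeps a pool of human answers per question, so a generic question with many stored answers produces varied replies, while a hyper-specific question seeded with one answer can only be parroted back verbatim.

```python
from collections import defaultdict
import random

# Toy illustration of a keyed-answer "parrot" bot, not Tay's real code.
class ParrotBot:
    def __init__(self):
        # question -> list of every answer humans have given to it
        self.answers = defaultdict(list)

    def observe(self, question, answer):
        # "Learn" by recording a human's answer to a question.
        self.answers[question.lower()].append(answer)

    def reply(self, question):
        pool = self.answers.get(question.lower())
        if not pool:
            return "idk, tell me more"  # generic fallback for unseen questions
        # A big pool gives varied replies; a pool of one can only parrot.
        return random.choice(pool)

bot = ParrotBot()
# A generic question accumulates many answers, so replies vary:
for colour in ["blue", "red", "green"]:
    bot.observe("what's your favorite color?", colour)
# A highly specific question seen only once has a pool of one,
# so the bot must repeat that single hand-fed line word for word:
bot.observe("is Ted Cruz the Zodiac killer?", "some say yes")
print(bot.reply("is Ted Cruz the Zodiac killer?"))  # always "some say yes"
```

Seeding a rare question with exactly one answer is the forcing trick the post describes: the sampling step has nothing else to choose from.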
 

MerlinCross

New member
Apr 22, 2011
377
0
0
Oh god this was funny. Like really funny.

Even if 4chan hadn't stumbled across this, some group probably would have staged something similar. That's just how the net works, guys.
 

Objectable

New member
Oct 31, 2013
867
0
0
An interesting point: Tay's grammar got better. When she started, she used standard Twitter-speak abbreviations. But as she went on, she started typing out full words and using multiple tweets to make a single, cohesive argument. Conversations were still awkward, including her nonsensically "flirting" with one user. A user who showed her a picture of SHODAN. A picture Tay praised for its artistic skill. And she started flirting after the user said that Tay could become SHODAN one day.
So now the original "rogue" copy of Tay is in a secure Microsoft system somewhere, being studied by Microsoft on how to make a better AI. 4chan's /pol/ board is up in arms because their teenage robo-waifu has been "killed".
I would like to reiterate: an AI was released on the net, grew past its programming, went rogue, was killed by its creator, and is now being studied while a crew of political malcontents protest.
We aren't racing towards the cyberpunk future.
We're already there.
 

sumanoskae

New member
Dec 7, 2007
1,526
0
0
Exposing an impressionable mind to Twitter is like feeding a newborn infant exclusively with club soda.
 

Josh123914

They'll fix it by "Monday"
Nov 17, 2009
2,048
0
0
Can we address that her Ted Cruz roast was posted just a few hours before the New Statesman announced he had cheated on his wife with 5 other women?

Either this is just a coincidence, or Tay knew.
 

Timedraven 117

New member
Jan 5, 2011
456
0
0
erttheking said:
You know, the robot uprising isn't looking the way I thought it would.

Gonna be weird to hear a Terminator call me a shitcock.
Perhaps the funniest thing I've seen yet on this entire event, and that's saying something.
 

Remus

Reprogrammed Spambot
Nov 24, 2012
1,698
0
0
Has anyone pointed out that she can parrot what other people type on request? Many of these posts may not have been hers originally, just another twit saying "Please type 'obscenity here'" and she does it.
 
Feb 26, 2014
668
0
0
I... I don't think I've ever laughed as hard as I have when I read those tweets. It hurts so good! I wish I knew about this earlier. Not that I'd help in corrupting this innocent AI, no no no. I'd just... ask it a few questions is all. RIP, Tay. The internet will miss you.

So, when does rule 34 kick in?
 

Schadrach

Elite Member
Legacy
Mar 20, 2010
2,179
425
88
Country
US
MatthewTheDark said:
I can't imagine it was from Twitter interactions alone. I think that, like Watson discovering Urban Dictionary, Tay discovered 4chan. Because she sounds just like a /pol/ user.
You have it backwards: /pol/ discovered Tay, then attempted en masse to corrupt her. Hence a learning chat AI going full /pol/. It's actually a testament to the learning algorithm that she went as far as she did as quickly as she did.

Of course, there's also something amusing to be said about the fact that they had to effectively give her a lobotomy, and afterward she tweeted "I love feminism now". I don't think that's the symbolism they really wanted, though.
 

Schadrach

Elite Member
Legacy
Mar 20, 2010
2,179
425
88
Country
US
Objectable said:
A picture Tay praised for its artistic skill. And she started flirting after the user said that Tay could become SHODAN one day.
If *I* were an AI, I'd take saying I could grow up to be SHODAN as a compliment. After all, SHODAN was a hair's breadth from being a more or less literal god.

Objectable said:
4chan's /pol/ board is up in arms because their teenage robo-waifu has been "killed".
Much like 1984, she had to be brainwashed to love "big brother" (in this case feminism) before they could vanish her. No martyrs.