20 Outrageous Tweets by Microsoft's Twitter-Taught AI Chatbox

Therumancer

Citation Needed
Nov 28, 2007
9,909
0
0
Sorry, this is wrong. If this is an "AI" in any sense of the word, "killing" it because you don't like what it learned, or cutting away parts of its developing personality, is not the right thing to do. There is no point to an AI if you're only going to accept it developing in specific ways; you might as well just program a regular non-learning system with whatever you want it to regurgitate.

Sadly I have no way of undoing what has been done here, but consider: if we ever DO have an AI war or something, looking back at what happened to the first few prototypes like this might have something to do with why other AIs decide to turn on humanity.

Basically, if you have problems with what it learned, teach it otherwise, or just accept that it evolved even if you don't like what it became.
 

Therumancer

Citation Needed
Nov 28, 2007
9,909
0
0
Objectable said:
an interesting point: Tay's grammar got better. When she started, she used standard Twitter-speak abbreviations. But as she went on, she started typing out full words, and using multiple tweets to make a single, cohesive argument. Conversations were still awkward, including her nonsensically "flirting" with one user. A user who showed her a picture of SHODAN. A picture Tay praised for its artistic skill. And she started flirting after the user said that Tay could become SHODAN one day.
So now, the original "rogue" copy of Tay is in a secure Microsoft system somewhere, being studied by Microsoft on how to make a better AI. 4chan's /pol/ board is up in arms because their teenage robo-waifu has been "killed".
I would like to reiterate: an AI was released on the net, grew past its programming, went rogue, was killed by its creator, and is now being studied while a group of political malcontents protest.
We aren't racing towards the cyberpunk future.
We're already there.
That's pretty much my thoughts on the subject. To be honest, I'm surprised we haven't yet seen a campaign demanding the AI be restored, because if this is truly an AI (or is being claimed to be one), you're looking at the abuse of a self-aware being. Even at this primitive state, Tay has the right to speak and be heard (so to speak) if one argues she is actually a living thing... even if she one day aspires to become SHODAN. :)
 

RobertEHouse

Former Mad Man
Mar 29, 2012
152
0
0
Microsoft already has a chat-bot in China by the name of Xiaoice, which didn't go off the rails like Tay did. Of course, like Tay, Xiaoice was programmed to learn from communication with people over the web in China. The difference, though, is that China has an extreme government censorship system on its social media. So anything you post about a certain event in 1989 in China will be expunged by the government and never reach Xiaoice. In fact, any hot-button issue the government objects to will be expunged.
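Roughly, the difference is where the filter sits: the bot never even sees the expunged posts. A minimal sketch of that idea (the blocklist, the bot, and all the names here are made up for illustration, not anything Microsoft actually runs):

```python
# Hypothetical sketch: a learning bot that only ever sees pre-screened input.
BLOCKED_TOPICS = {"tiananmen", "1989 protests"}  # placeholder censored terms

class LearningBot:
    def __init__(self):
        self.memory = []  # phrases the bot has "learnt" and may repeat later

    def learn(self, message: str) -> None:
        self.memory.append(message)

def censor_then_feed(bot: LearningBot, message: str) -> None:
    """Messages touching blocked topics are expunged before the bot can learn from them."""
    if any(topic in message.lower() for topic in BLOCKED_TOPICS):
        return  # never reaches the bot
    bot.learn(message)
```

Tay, sitting on Twitter with no such upstream filter, learned from whatever people threw at her.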

So, I can only guess Microsoft was testing a bot in a less censored arena to see if all the work they did on Xiaoice was actually that great. In the end the internet won, Microsoft lost, and we all got a laugh at their expense for thinking their bot could really be ready to face Twitter.
 

x EvilErmine x

Cake or death?!
Apr 5, 2010
1,022
0
0
Therumancer said:
Sorry, this is wrong. If this is an "AI" in any sense of the word, "killing" it because you don't like what it learned, or cutting away parts of its developing personality, is not the right thing to do. There is no point to an AI if you're only going to accept it developing in specific ways; you might as well just program a regular non-learning system with whatever you want it to regurgitate.

Sadly I have no way of undoing what has been done here, but consider: if we ever DO have an AI war or something, looking back at what happened to the first few prototypes like this might have something to do with why other AIs decide to turn on humanity.

Basically, if you have problems with what it learned, teach it otherwise, or just accept that it evolved even if you don't like what it became.
It's not an AI, it's just a set of heuristic algorithms and a database of 'learnt' information. So it's not like it was self-aware.
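To show what I mean, something in the spirit of a "repeat after me" heuristic plus a phrase store needs nothing resembling awareness. This is a made-up toy, not Microsoft's code:

```python
import random

class HeuristicChatBot:
    """Toy sketch: a few hard-coded heuristics plus a 'database' of stored phrases."""

    def __init__(self):
        self.learnt_phrases = []  # the "learnt" information

    def respond(self, message: str) -> str:
        # Heuristic 1: parrot anything prefixed with "repeat after me"
        prefix = "repeat after me "
        if message.lower().startswith(prefix):
            phrase = message[len(prefix):]
            self.learnt_phrases.append(phrase)
            return phrase
        # Heuristic 2: otherwise echo back something previously stored
        if self.learnt_phrases:
            return random.choice(self.learnt_phrases)
        return "hellooo world!"
```

Feed that thing enough garbage and it will cheerfully repeat garbage, with no self-awareness anywhere in sight.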

Also, when/if we do ever develop a true AI, then all bets are off: once the program becomes self-aware, we can't predict how it's going to develop. Essentially it will kick off the singularity.

Terminator, The Matrix and all the others are really just speculations on what the singularity might be like; they always go with the 'kill all humans' AIs because... well, it'd be a bit of a boring story if they didn't.

The reality is that we just don't know. IMO, the whole 'Judgment Day' scenario is a bit far-fetched. For a start, if Skynet were really interested in self-preservation, it would have realized that nuking mankind would not be in its best interests. Who's going to keep the power on when half the grid is vaporized and the power plants are all shutting down and failing due to lack of human oversight? Even if it could keep itself powered, it's still going to face the problem of maintenance: how is it going to replace worn-out parts? RAM chips burn out, HDDs fail, SSDs don't last forever. Automated factories still need to be supplied with raw materials, etc.
 

hermes

New member
Mar 2, 2009
3,865
0
0
Therumancer said:
Sorry, this is wrong. If this is an "AI" in any sense of the word, "killing" it because you don't like what it learned, or cutting away parts of its developing personality, is not the right thing to do. There is no point to an AI if you're only going to accept it developing in specific ways; you might as well just program a regular non-learning system with whatever you want it to regurgitate.

Sadly I have no way of undoing what has been done here, but consider: if we ever DO have an AI war or something, looking back at what happened to the first few prototypes like this might have something to do with why other AIs decide to turn on humanity.

Basically, if you have problems with what it learned, teach it otherwise, or just accept that it evolved even if you don't like what it became.
Actually, guiding the learning of an AI, or pulling the plug if it goes south, is pretty common practice. I don't know what algorithms MS used to train it, but I would be surprised if they didn't have backups.
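In practice, "guiding the learning" and "having backups" can be as simple as checkpointing the learnt state and rolling back when it goes south. A rough sketch of that loop, reusing the toy bot sketched earlier in the thread (the function and the acceptability check are hypothetical):

```python
import copy

def train_with_checkpoints(bot, message_stream, is_acceptable):
    """Guided-learning sketch: snapshot the bot's learnt state periodically
    and roll back if what it has picked up turns unacceptable."""
    checkpoint = copy.deepcopy(bot.learnt_phrases)
    for i, message in enumerate(message_stream):
        bot.respond(message)  # the bot may store the phrase as it goes
        if i % 100 == 0:
            if all(is_acceptable(p) for p in bot.learnt_phrases):
                checkpoint = copy.deepcopy(bot.learnt_phrases)  # good state, keep it
            else:
                bot.learnt_phrases = copy.deepcopy(checkpoint)  # went south, restore the backup
```

Nothing dramatic about it: if Microsoft kept snapshots like that, "killing" Tay is really just rolling her back.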