Artificial intelligence - why?

Palfreyfish

New member
Mar 18, 2011
284
0
0
DoPo said:
Because AI doesn't just appear out of nowhere. There are people, software developers (or AI developers, more likely) who are in charge of it. They would write code and generally "program" stuff.

That would also fall under programming. It doesn't only mean "write computer code".
I see what you mean; I keep thinking of it that way after two years of Computer Science, but yeah, it can also mean training, restricting and so on. Training a dog is just programming its brain.[footnote]Pavlov's dogs, for example.[/footnote]

DoPo said:
Well, you are more or less correct, but there are a couple of things to note. First, notice the bolded part - you don't actually matter. What you think is not necessarily what the AI researchers think. Second, even they don't know what to think. That's the whole concept of the singularity - they have no real clue what true AI would bring to the table or how exactly it would happen. There are leads, but nothing is known with certainty. AI researchers can't even agree on the definition of "agent", which is far simpler than an AI. So what would actually constitute an AI is really up in the air, if you drill into it.
Quite true, I don't matter.[footnote]At least, not at this point in time...[/footnote] And that's understandable: an advanced enough AI could change the way we look at the entire universe, discover cures for all known diseases, or invent FTL travel. Or, of course, it could be put to work monitoring civilian communications for terrorism. And I'm not surprised no one knows what will happen, seeing as we haven't made one yet...

DoPo said:
There is training. You can train an AI. And the ethical aspect is really shaky, at least at first. One might expect that during that time "brainwashing" may be somewhat common, but afterwards we would have enough control over the AI not to resort to screwing with their minds, so to speak. Same with people - you can keep them in line without resorting to actual brainwashing.
The ethical aspect depends on whether the AIs have the same rights as humans, or whether they have specific AI rights because they're man-made.[footnote]For what it's worth, I think that if an AI is as intelligent as an average human, it should be allowed the same rights as a human.[/footnote]

DoPo said:
The answer to the second question is "yes". Initially, we actually want to model it after humanity, or at least some people do. However, AI research would want to do that to answer another question - "What the fuck is intelligence anyway?". We don't really know, even now, even about humanity. What is intelligence, and why do we think? One branch of AI wants to research that, and once we know, we can go and have "machine intelligence" of some sort.
Excellent question, I'm looking forward to the answer.

DoPo said:
True, but that's the same everywhere else. Nobody just "built" a city in the first place, nobody "made" the BMW from scratch, we didn't land on the moon just like that, Facebook isn't a lucky first try at something, and so on. There have been years, sometimes decades or centuries, of research, experiments and other general development before something happened. There was flight to master, and launching things into space, before landing on the moon, for example. Same with AI - nobody just sits down and says "You know what, I'll make true AI now". At the very least they have been involved with AI for a while, and there has been a heap of research, development, tests, failures, and successes, both small and big, behind them. So no, it wouldn't be that sudden; it would be the result of much trying. Hell, people thought we would have had AI by the 70s or the 80s, so we're already half a century into trying.
Also a good point: everything's evolved over time, some things faster than others. And like you said, people thought we'd have AI by the 70s and that we'd be living in space by 2000, and look how that turned out...

DoPo said:
As you said, these already exist. But I really doubt it would be that coincidental. The software would need guidance and would probably be built for the purpose of becoming sentient - not some random accounting software that suddenly starts thinking for itself.
That's what I was getting at. What are your thoughts on the internet one day perhaps becoming sentient?
 

Squilookle

New member
Nov 6, 2008
3,584
0
0
Hagi said:
There seems to be a decent bit of misunderstanding of the concept of AIs in this thread.

Currently our closest approximations to actual intelligences (which don't yet have the IQ of a cockroach) use programming techniques such as neural networks and genetic algorithms.

The thing about these is that what you program is a framework that itself is incapable of anything until it is configured. You don't program the actual behaviour and as such you don't control it directly.

This process of configuration is very similar to learning and comes with all the downsides of human learning. These neural networks and other such techniques do make genuine mistakes that weren't put in there by the human programmer. They make associations that weren't explicitly programmed into them. They exhibit behaviour that wasn't expected from them.

You can't straight-up program an AI. There's much more to it. And because of that, the behaviour of that AI will be much more complex; it wouldn't be intelligent if it always did exactly as expected.
Yes, thank you! I was amazed at how many people thought AI was just another pocket calculator to be programmed to an exact specification. Its very nature is to think for itself, so it's not as simple as saying you can program it not to think certain things!
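To make the quoted point concrete, here's a minimal sketch in plain Python (no external libraries; the XOR task, the layer sizes and the learning rate are just illustrative assumptions on my part, not anything from the thread). The code only defines a framework of random weights; whatever "behaviour" it ends up with comes from the training loop, not from a rule anyone wrote:

[code]
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# The "framework": a 2-input, 4-hidden, 1-output network with random weights.
# At this point it can't do anything useful - it hasn't been configured yet.
H = 4
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b_hidden = [random.uniform(-1, 1) for _ in range(H)]
w_out = [random.uniform(-1, 1) for _ in range(H)]
b_out = random.uniform(-1, 1)

def forward(x):
    h = [sigmoid(w_hidden[j][0] * x[0] + w_hidden[j][1] * x[1] + b_hidden[j])
         for j in range(H)]
    y = sigmoid(sum(w_out[j] * h[j] for j in range(H)) + b_out)
    return h, y

# "Configuration" = training. We never write a rule for XOR anywhere; we only
# show the network examples and nudge the weights when it gets them wrong.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5
for _ in range(10000):
    for x, target in data:
        h, y = forward(x)
        d_out = (y - target) * y * (1 - y)               # error signal at the output
        for j in range(H):
            d_hid = d_out * w_out[j] * h[j] * (1 - h[j]) # error signal at hidden unit j
            w_out[j] -= lr * d_out * h[j]
            w_hidden[j][0] -= lr * d_hid * x[0]
            w_hidden[j][1] -= lr * d_hid * x[1]
            b_hidden[j] -= lr * d_hid
        b_out -= lr * d_out

# If training went well, the outputs should now be close to the targets.
for x, target in data:
    print(x, "->", round(forward(x)[1], 2), "(expected", target, ")")
[/code]

Even in a toy like this, nothing in the code says "output 1 when the inputs differ" - that association is learned, and if training goes badly the network makes mistakes nobody programmed in, which is exactly the point being made above.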
 

Agow95

New member
Jul 29, 2011
445
0
0
I think the best thing to do would be to create an AI in a virtual world and design the AI to think it's in the real world; then, if it repeatedly tries to kill all the (virtual) humans, we'll know we shouldn't trust this AI in the real world and can delete it.
 

Phase_9

New member
Oct 18, 2008
436
0
0
Kaleion said:
Because whoever invents them is practically a God that has created life?
Either way, it'd be great to be the guy who does it. Also, we could send them to populate Mars or some other planet we can't, just because, though I'm sure other people can come up with something more useful.

I don't think this is the primary motivator, although it's probably a powerful one for some AI researchers.

For the most part, more sophisticated A.I. would not only allow us to make more sophisticated machines, but would also provide insight into how our own thinking works. Whoever cracks true A.I. would have to understand very well how humans think, and watching an intelligent being develop and learn in such a way would give us loads of information about how we learn, think, and develop.
 

SuccessAndBiscuts

New member
Nov 9, 2009
347
0
0
Because machines can go places people simply can't. Look at Curiosity: it has intelligence after a fashion; it made the decisions necessary to land itself, after all. Given a seven-minute delay in data transfer at the speed of light, we need things that can think for themselves and solve problems in order to go places and do things beyond our squishy organic limitations.

Squilookle said:
Hagi said:
There seems to be a decent bit of misunderstanding of the concept of AIs in this thread.

Currently our closest approximations to actual intelligences (which don't yet have the IQ of a cockroach) use programming techniques such as neural networks and genetic algorithms.

The thing about these is that what you program is a framework that itself is incapable of anything until it is configured. You don't program the actual behaviour and as such you don't control it directly.

This process of configuration is very similar to learning and comes with all the downsides of human learning. These neural networks and other such techniques do make genuine mistakes that weren't put in there by the human programmer. They make associations that weren't explicitly programmed into them. They exhibit behaviour that wasn't expected from them.

You can't straight-up program an AI. There's much more to it. And because of that, the behaviour of that AI will be much more complex; it wouldn't be intelligent if it always did exactly as expected.
Yes, thank you! I was amazed at how many people thought AI was just another pocket calculator to be programmed to an exact specification. Its very nature is to think for itself, so it's not as simple as saying you can program it not to think certain things!
Yay genuine understanding, intellectual high-fives.
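Since the quote names genetic algorithms as well, here's an equally minimal sketch of that idea in plain Python (the target phrase, population size and mutation rate are made-up illustrative values). Again, nothing in the code says how to construct the answer; the loop only keeps and mutates whatever random variations happen to score better:

[code]
import random

random.seed(1)

TARGET = "thinking machine"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # Score = how many characters already match the target.
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def mutate(candidate, rate=0.05):
    # Randomly change a few characters - the "mistakes" come from here.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from pure noise: no part of the answer is programmed in.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if generation % 50 == 0 or best == TARGET:
        print("generation", generation, "best so far:", repr(best))
    if best == TARGET:
        break
    # Selection: the ten best survive and each leaves mutated copies.
    survivors = population[:10]
    population = [mutate(random.choice(survivors)) for _ in range(100)]
[/code]

Same caveat as before: change the fitness function slightly and the population will cheerfully evolve toward something you didn't intend, which is the "behaviour that wasn't expected" the quote is talking about.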
 

Tactical Pause

New member
Jan 6, 2010
314
0
0
I think this one falls under the "because we can" category. Of course there will be serious repercussions, but hell, it'll be worth it.

 

zehydra

New member
Oct 25, 2009
5,033
0
0
There can be no such thing as a "sentient AI" since such a thing is a contradiction in terms.

For something to be sentient, it cannot be an "artificial intelligence".
 

Aprilgold

New member
Apr 1, 2011
1,995
0
0
The idea of god was created by man, so the natural evolution of our power-hungry selves is to become that idea of god and create our own little humans to toy with. Except that we're not fake.

Also, below is why the Terminator will never happen.

zehydra said:
There can be no such thing as a "sentient AI" since such a thing is a contradiction in terms.

For something to be sentient, it cannot be an "artificial intelligence".
An AI is programmed; it would be a scientific marvel if it could do things on its own, but by default it is made to do what it is programmed to do, and it can't write new programming for itself.

Jonluw said:
Because with A.I. we may create minds that transcend our own and can be used to further our understanding of the universe to levels previously unimaginable?

Simply: Because it's progress.
This also. You know sticks? Well, the natural evolution was pointy sticks. Then, when everyone was using pointy sticks, someone cut a small hole into the top of a stick and put a sharpened rock in there. When everyone was using spears, someone decided to just make a giant metal spear, which became a sword. When everyone used a sword, then came guns, etc.

Point: progress will always happen and we will always make it happen; it's nature, yo.
 

Hagi

New member
Apr 10, 2011
2,741
0
0
zehydra said:
There can be no such thing as a "sentient AI" since such a thing is a contradiction in terms.

For something to be sentient, it cannot be an "artificial intelligence".
Because... reasons?

Allow me to provide you with a thought experiment:

Scientists develop a device the size of a single human neuron which acts exactly the same in all respects. They then create a giant network of these devices and add additional devices that act exactly the same as all other cells, hormones and processes present in the human brain. They have created, for all intents and purposes, a human brain.

Except they made it. It's an artefact. It's artificial.

Would this not be a sentient artificial intelligence?
 

deadish

New member
Dec 4, 2011
694
0
0
So we no longer have to work and can have willing slaves serve us without the guilt that comes with slavery.
 

Rowan93

New member
Aug 25, 2011
485
0
0
Agow95 said:
I think the best thing to do would be to create an AI in a virtual world and design the AI to think it's in the real world; then, if it repeatedly tries to kill all the (virtual) humans, we'll know we shouldn't trust this AI in the real world and can delete it.
That'd require us to have enough computer power to simulate at least some fully sapient humans, plus run the AI, plus run a whole bunch of really detailed physics simulation.

Simulating a human is basically creating an AI anyway, so the computing power you'd need would have to be several times what you need to run an AI on its own. What makes you think AI research is/will be that far behind computing power? And what makes you think an AI-creating project will be able to get enough funding to dedicate multiples of the computers available to it to not creating AI?

Yeah, the latter isn't so much a point against your idea as a point towards "humanity is doomed".

Oh, one other point: What if, when we tell it it's in a sim, it decides that the virtual world it's in is "real to me", and bases its morality around those humans and not the outside-world humans, who it decides are less important? Wouldn't you pick your "real" loved ones over strangers on the outside of the sim?
 

synobal

New member
Jun 8, 2011
2,189
0
0
If you can't think of uses for AIs then you clearly have not read enough scifi or lack imagination.
 

Rowan93

New member
Aug 25, 2011
485
0
0
Elect G-Max said:
Agow95 said:
I think the best thing to do would be to create an AI in a virtual world and design the AI to think it's in the real world; then, if it repeatedly tries to kill all the (virtual) humans, we'll know we shouldn't trust this AI in the real world and can delete it.
But what if humanity really should be exterminated, and humans are just too stupid to realize it? Maybe we should instead program an AI to measure how humanity adheres to its own moral standards, or what humans claim their moral standards are, and judge humanity accordingly.

By that metric, humans are contemptible bastards, and in the event of a robot uprising, I'll happily play the Gaius Baltar role.
The only "should" that matters is the human one. To say "what if humanity really should be exterminated" is absurd, because that unpacks to "what if it's really in humanity's best interest to exterminate humanity". Well, okay, you could argue for that if you really wanted to, but I don't think that's actually what you were getting at.
 

DoPo

"You're not cleared for that."
Jan 30, 2012
8,665
0
0
Palfreyfish said:
That's what I was getting at. What are your thoughts on the internet one day perhaps becoming sentient?
The Internet itself - no, I don't see it happening, at least not soon or easily. However, being used as a vehicle, or cradle, if you will, for sentience - yes, that is more of a possibility. I did mention agents before; they could very well crawl the net and pull enough information together to create something that thinks.

Well, I haven't actually looked into it enough, but that's just the general feeling I have - the Internet itself is largely... well, unconnected in the ways that would predispose it to self-awareness. There is information exchanged, but it's very predictable and boring. Something operating from inside there has a better shot.

Total LOLige said:
It's because we look forward to fighting an AI uprising in the near future.
XKCD is here to disappoint you. [http://what-if.xkcd.com/5/]

 

zehydra

New member
Oct 25, 2009
5,033
0
0
Hagi said:
zehydra said:
There can be no such thing as a "sentient AI" since such a thing is a contradiction in terms.

For something to be sentient, it cannot be an "artificial intelligence".
Because... reasons?

Allow me to provide you with a thought experiment:

Scientists develop a device the size of a single human neuron which acts exactly the same in all respects. They then create a giant network of these devices and add additional devices that act exactly the same as all other cells, hormones and processes present in the human brain. They have created, for all intents and purposes, a human brain.

Except they made it. It's an artefact. It's artificial.

Would this not be a sentient artificial intelligence?
Artificial intelligence is not what you have described. You have described an artificial brain. The intelligence which the artificial brain creates, however, is not artificial.
 

Agow95

New member
Jul 29, 2011
445
0
0
Rowan93 said:
Agow95 said:
I think the best thing to do would be to create an AI in a virtual world and design the AI to think it's in the real world; then, if it repeatedly tries to kill all the (virtual) humans, we'll know we shouldn't trust this AI in the real world and can delete it.
What if, when we tell it it's in a sim, it decides that the virtual world it's in is "real to me", and bases its morality around those humans and not the outside-world humans, who it decides are less important? Wouldn't you pick your "real" loved ones over strangers on the outside of the sim?
That's the brilliance: would a computer be able to form that bond with things it knows are virtual? And in any case, as long as we don't give the PC it's being run on an internet connection, what the hell could it do to fight us?