DoPo said: Because AI doesn't just appear out of nowhere. There are people, software developers (or AI developers, more likely) who are in charge of it. They would write code and generally "program" stuff.

I see what you mean; I keep thinking of it that way after two years of Computer Science, but yeah, it can mean training, restricting, etc. Training a dog is just programming its brain.[footnote]Pavlov's dogs, for example.[/footnote]
That would also fall under programming. It doesn't only mean "write computer code".
DoPo said: Well, you are more or less correct, but there are a couple of things to note. First, notice the bolded part - you don't actually matter. What you think is not necessarily what the AI researchers think. Second, even they don't know what to think. That's the whole concept of the singularity - they have no real clue what true AI would bring to the table or how exactly it would happen. There are leads, but nothing is known with certainty. AI researchers can't even agree on the definition of "agent", which is far simpler than an AI. So what actually constitutes an AI is really up in the air, if you drill into it.

Quite true, I don't matter.[footnote]At least, not at this point in time...[/footnote] And that's understandable: an advanced enough AI could change the way we look at the entire universe, or discover cures for all known diseases, or invent FTL travel. Or, of course, it could be put to work monitoring civilian communications for terrorism. And I'm not surprised no one knows what will happen, seeing as we haven't made one yet...
DoPo said: There is training. You can train an AI. And the ethical aspect is really shaky, at least at first. One might expect that during that time "brainwashes" may be somewhat common, but afterwards we would have enough control over the AI not to resort to screwing with their minds, so to say. Same with people - you can keep them in line without resorting to actual brainwashing.

The ethical aspect depends on whether AIs have the same rights as humans, or specific AI rights because they're man-made.[footnote]For what it's worth, I think that if an AI is as intelligent as an average human, it should be allowed the same rights as a human.[/footnote]
DoPo said: The answer to the second question is "yes". Initially, we actually want to model it after humanity, or at least some people do. However, AI research would want to do that to answer another question: "What the fuck is intelligence anyway?" We don't really know, even now, even about humanity - what intelligence is and why we think. One branch of AI wants to research that, and once we know, we can go and have "machine intelligence" of some sort.

Excellent question; I'm looking forward to the answer.
DoPo said: True, but that's the same everywhere else. Nobody just "built" a city in the first place, nobody "made" the BMW from scratch, we didn't land on the moon just like that, Facebook isn't a lucky first try at something, and so on. There have been years, sometimes decades or centuries, of research, experiments, and other general development before something happened. There was flight, and launching stuff into space, to go through first before landing on the moon, for example. Same with AI - nobody just sits down and says, "You know what, I'll make true AI now." At the very least they have been involved with AI for a while, and there is a heap of research, development, tests, failures, and successes, both small and big, behind them. So no, it wouldn't be that sudden; it would be the result of much trying. Hell, people thought we would have had AI by the 70s or the 80s, so we're already half a century into trying.

Also a good point. Everything has evolved over time, some things faster than others, and like you said, people thought we'd have AI by the 70s, and that we'd be living in space by 2000 - look how that turned out...
DoPo said: As you said, these already exist. But I really doubt it would be that coincidental. The software would need guidance and would probably be built for the purpose of becoming sentient - not some random accounting program that suddenly starts thinking for itself.

That's what I was getting at. What are your thoughts on the internet one day perhaps becoming sentient?