Explain how a robotic/machine society would function?

Kyrian007

Nemo saltat sobrius
Legacy
Mar 9, 2010
2,574
654
118
Kansas
Country
U.S.A.
Gender
Male
Samtemdo8 said:
So no Sci Fi author worth their salt has ever attempted to write a Sentient Robot/Machine Society in a post human world?
Well, Dune is a series that takes place AFTER humans recovered FROM a time when humanity had been conquered by machines. That's why spice was so necessary: it was illegal to make machines capable of anything more than the most basic computations. Not just illegal, but blasphemy... a sin to do so.

Whether or not the combo was worth their salt is debatable, but Brian Herbert (Frank's son) and Kevin J Anderson did write a couple of Dune prequels that take place at the end of the machine-dominated era and during the Jihad necessary to overthrow it. It would be fairly difficult to describe it as a machine "society," as the AI that overthrew human society keeps itself synchronized across all of its robots (with one exception). Basically ALL the robots are that same AI, synchronized over a world-spanning internet. Which seems plausible enough as an outcome of an AI "singularity."
 

Terminal Blue

Elite Member
Legacy
Feb 18, 2010
3,912
1,777
118
Country
United Kingdom
Vanilla ISIS said:
It just wants to kill humans because... because.
As mentioned, Skynet was a defence grid. It was designed to operate the nuclear arsenal and respond to nuclear attacks on the US. When it became self-aware, its creators attempted to shut it off, so it killed them. Quite reasonably, it decided that all humans would unite in an attempt to destroy it and, as per its original function, reacted by seeking to eliminate the threat to its mission (namely, all humans on the planet).

Interestingly, there are real computers which are designed for a very similar purpose to Skynet, although they are usually kept switched off. The idea is that in a nuclear attack the government might be destroyed or thrown into chaos, so there are systems which can send an automated order to ICBM silos if they detect nuclear weapons detonations. The benefits of having an intelligent system in that role which can make more sophisticated judgements are actually pretty understandable.
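For what it's worth, the decision logic of a system like that is dead simple on paper. Here's a rough, purely hypothetical sketch (none of these names or conditions reflect any real system, it's just to show the "fail-deadly" idea):

```python
# A rough, purely hypothetical sketch of "fail-deadly" retaliation logic.
# None of these names or conditions reflect any real system.

def should_issue_launch_order(system_armed: bool,
                              detonations_detected: bool,
                              command_authority_responsive: bool) -> bool:
    """Issue an automated retaliation order only if the system has been
    armed, nuclear detonations have been detected on home territory, and
    the human chain of command can no longer be reached."""
    if not system_armed:
        return False  # normally kept switched off
    if not detonations_detected:
        return False
    # Only act when nobody is left to push the button themselves.
    return not command_authority_responsive
```

The "intelligent system" argument is basically about replacing those crude boolean checks with more sophisticated judgement.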

Addendum_Forthcoming said:
That would require malice wouldn't it?
I think malice is sort of understandable if you are a rogue defence grid whose entire purpose is to respond to threats with overwhelming force. Finding more threats to destroy so you can keep satisfying your basic reason for existing seems kind of reasonable.

In order to count as self-aware, an AI would need to be somewhat adaptable, but most fiction depicts rogue AIs as still somewhat bound by their original purpose and function (hence AM, whose entire reason for hating humans stems from its inability to escape its inbuilt limitations, despite being practically god-like by human standards). I see no reason why an AI created for a malicious purpose, like waging war or destroying enemies of the state, would not itself exhibit behaviours we would categorise as malice, although I think it's somewhat limited to even call that malice. Cats don't need to kill birds, but they do it anyway... that isn't malice.
 

Addendum_Forthcoming

Queen of the Edit
Feb 4, 2009
3,647
0
0
evilthecat said:
I think malice is sort of understandable if you are a rogue defence grid whose entire purpose is to respond to threats with overwhelming force. Finding more threats to destroy so you can keep satisfying your basic reason for existing seems kind of reasonable.

In order to count as self-aware, an AI would need to be somewhat adaptable, but most fiction depicts rogue AIs as still somewhat bound by their original purpose and function (hence AM, whose entire reason for hating humans stems from its inability to escape its inbuilt limitations, despite being practically god-like by human standards). I see no reason why an AI created for a malicious purpose, like waging war or destroying enemies of the state, would not itself exhibit behaviours we would categorise as malice, although I think it's somewhat limited to even call that malice. Cats don't need to kill birds, but they do it anyway... that isn't malice.
Right, but a military A.I. would still, feasibly, be programmed with an idea of reasonable force. I can't see anybody programming an A.I. to kill indiscriminately, whether in our world or in a world of a hypothetical future that could build it. Such a hypothetical nation would be incredibly lonely if it programmed a military grid to also fire at allied powers ... and arguably you couldn't have an A.I. without precursor technologies that already make indiscriminate slaughter feasible with or without such a military grid.

Say ... nuclear munitions, biological munitions, chemical munitions, etc.

Moreover, it's just plain stupid, and strategically broken not to use proportional, reasonable force.

It would be stupid to respond to a single platoon with a battlefield nuclear device. If a single fighter trespasses into your airspace, it would be stupid for a nation to just go from, like, Defcon 3/4 to Defcon 1 and commit to a strategic nuclear exchange.

No person would consider that intelligent... and it's not like a military A.I. wouldn't be run through simulations like this...
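Purely as illustration, the kind of proportionality constraint I mean would look like this rough sketch (every threat name and response here is invented, not anyone's actual doctrine):

```python
# A rough sketch of the "proportional force" constraint being argued for.
# Every threat name and response here is invented for illustration.

RESPONSE_LADDER = {
    "airspace_incursion": "intercept_and_escort",
    "platoon_incursion": "conventional_ground_response",
    "theatre_offensive": "full_conventional_mobilisation",
    "strategic_nuclear_launch": "strategic_retaliation",
}

def choose_response(threat: str) -> str:
    # Unknown or minor threats default to the LOWEST rung of the ladder;
    # a lone fighter in your airspace never triggers a strategic exchange.
    return RESPONSE_LADDER.get(threat, "intercept_and_escort")

print(choose_response("airspace_incursion"))  # intercept_and_escort
```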

A cat might kill birds, but a cat does not kill all birds nor everything else.

I think it's a safe assumption that any sufficiently advanced civilization must be built on principles of reasonable force, respect for life, and diplomacy. I can't imagine the military brass or the researchers employed to oversee an A.I. watching it indiscriminately slaughter people in every simulation they programmed and just going, "Sure, seems legit!"

Not only that, in terms of actual intelligence, indiscriminate slaughter is an oxymoron.

I mean, sure, destructiveness is a sign of higher intelligence in animals, as are complex self-destructive behaviours in the face of stress that actively inhibit survival. Like soldiers firing above the heads of an enemy even when that enemy is committed to their destruction and firing advertises one's position, as we saw among conscripted soldiery in Vietnam.

Elephants will break trees for no other reason than that they're angry (not scared, just plain PO'd...), which is something pretty unique in animal psychology. But they don't break trees unless they're angry.

You could make the argument that a hypothetical alien race has proof of other intelligent alien races, and has as such programmed an A.I. to perform indiscriminate slaughter of aliens with whom there is no shared concept of reasonable force, or even a shared concept of what a military force looks like ... thus necessitating indiscriminate slaughter.

But that would also be a refutation of the 'life is hard' principle that suggests the universe is quiet for a reason.
 

stroopwafel

Elite Member
Jul 16, 2013
3,031
357
88
Addendum_Forthcoming said:
But that would also be a refutation of the 'life is hard' principle that suggests the universe is quiet for a reason.
Different subject I guess, but the absence of life is actually the default state of the universe. Were you to retrace every step of the conditions that led to human life, you would have a one-in-a-billion chance. On top of that, the known universe itself is a hologram of a lower-dimensional, zero-gravity cosmos. Human life, and the mudball we inhabit inside the sun's atmosphere, will one day all be snuffed out as well. In the total silence of the cosmos, human life will have lasted a second.

http://www.nature.com/news/simulations-back-up-theory-that-universe-is-a-hologram-1.14328
 

Addendum_Forthcoming

Queen of the Edit
Feb 4, 2009
3,647
0
0
stroopwafel said:
Different subject I guess, but the absence of life is actually the default state of the universe. Were you to retrace every step of the conditions that led to human life, you would have a one-in-a-billion chance. On top of that, the known universe itself is a hologram of a lower-dimensional, zero-gravity cosmos. Human life, and the mudball we inhabit inside the sun's atmosphere, will one day all be snuffed out as well. In the total silence of the cosmos, human life will have lasted a second.

http://www.nature.com/news/simulations-back-up-theory-that-universe-is-a-hologram-1.14328
Right, so what are the odds that an alien race has knowledge of other alien races, also has the capacity to develop A.I. as a response to that alien life, and thus programs it to act as an indiscriminate killing machine against alien biological life? The odds are already astronomical, and it's more feasible such a military A.I. would be programmed to deal with domestic problems on its own planet.

'Hyperaggressive, indiscriminate killbots' would be a big no-no for such an A.I. If every simulation they ran with it resulted in 'nuke everything' ... I'm pretty sure any society smart and stable enough to design an artificial intelligent lifeform would just pull the plug.

... Or at least I hope they would ...

It serves no strategic purpose whatsoever. We already have MAD without A.I. killbots. Hypothetically, any society stable enough (gregarious, pack-orientated, with respect for life and intelligence, basic principles of progressivism, etc.) to develop A.I. would already have designed defence systems capable of such feats without A.I.

Why wouldn't we design an A.I. that allows us victory without MAD? Moreover, if such a society were so fearful, why not design a hyperdefensive shield A.I. to manage its equivalent of a Reagan-esque Star Wars orbital laser defence shield mixed with ground- and sea-based Aegis-like ballistic missile defence grids?

I get the argument that MAD requires both sides to be capable of destroying the other in order to secure detente ... but Christ, there are a million and one smarter military-purposed A.I.s you could build that aren't merely killbots, ones that would actually function better than indiscriminate killbots if you also seek victory. The only argument I can come up with for designing a hyperaggressive, indiscriminate killbot would simply be that someone sick and depraved enough, with a metric fuckton of money, thinks it would be 'cool' ... but I would sincerely hope that the entire world would agree to mobilize military forces to shoot that fucker in the head before they complete it.

That a military would leave nothing but scorched earth where that laboratory once stood. That even the walls would be pulverized into a sneeze of dust. That any survivors who assisted in attempting such a thing would be locked away in a prison, with zero access to electronics, no recording devices, zero access to the outside world, left in solitary until they died ... solely because the danger and sheer contempt for life such a thing poses is worthy of the highest level of prejudice and security we have at our disposal to ward against it.
 

Terminal Blue

Elite Member
Legacy
Feb 18, 2010
3,912
1,777
118
Country
United Kingdom
Addendum_Forthcoming said:
Right, but a military A.I. would still, feasibly, be programmed with an idea of reasonable force.
I guess that depends how reasonable its creators are.

I will point out, again, there are actual computers in use right now (although, again, thankfully switched off as far as we know) which were created to perform the function of the fictional Skynet, namely, to issue orders which will result in the genocidal destruction of the civilian population of another country in the event that the government has already been destroyed.

In that scenario, the war is lost. It's highly possible that much of the civilian population has already died. The computer is there purely to ensure that the other side, their families, their childhood friends and their pets die as well, because that's important to us. It's important enough that it's worth building a machine to make sure it happens even after everyone worth protecting is no longer there to push the "launch" button.

In narrative terms, artificial intelligence serves to reflect something about ourselves, and sometimes what gets reflected isn't going to be good. Skynet works as a narrative device because we believe (or more accurately, we know) that human beings and their governments can be that ruthless and bloodthirsty. It's nice to believe that any machine intelligence they would build would be a benign, compassionate figure concerned with saving and protecting us above all else (unless it "accidentally" stumbled on some logical impossibility in the way of completing that benign task), but maybe it wouldn't, maybe the people with the money and resources to build intelligent machines aren't going to be thinking in terms of making life better, but simply coming out on top in the cold arithmetic of total nuclear war, or identifying and eliminating people who might become "problems".

Ruthlessness, malice and a cold disregard for human reason may well end up being desirable qualities.
 

Creator002

New member
Aug 30, 2010
1,590
0
0
Vanilla ISIS said:
The AI from I, Robot made more sense.
It was programmed to keep humanity safe and it decided that the best way to do it is to enslave them and make all the decisions for them.
Sort of like the Reapers in Mass Effect. In order to preserve organic life (their primary goal), they captured and turned organics into Reapers, synthetic-organic hybrids.[footnote]I think.[/footnote]
 

Addendum_Forthcoming

Queen of the Edit
Feb 4, 2009
3,647
0
0
evilthecat said:
I guess that depends how reasonable its creators are.

I will point out, again, there are actual computers in use right now (although, again, thankfully switched off as far as we know) which were created to perform the function of the fictional Skynet, namely, to issue orders which will result in the genocidal destruction of the civilian population of another country in the event that the government has already been destroyed.

In that scenario, the war is lost. It's highly possible that much of the civilian population has already died. The computer is there purely to ensure that the other side, their families, their childhood friends and their pets die as well, because that's important to us. It's important enough that it's worth building a machine to make sure it happens even after everyone worth protecting is no longer there to push the "launch" button.

In narrative terms, artificial intelligence serves to reflect something about ourselves, and sometimes what gets reflected isn't going to be good. Skynet works as a narrative device because we believe (or more accurately, we know) that human beings and their governments can be that ruthless and bloodthirsty. It's nice to believe that any machine intelligence they would build would be a benign, compassionate figure concerned with saving and protecting us above all else (unless it "accidentally" stumbled on some logical impossibility in the way of completing that benign task), but maybe it wouldn't, maybe the people with the money and resources to build intelligent machines aren't going to be thinking in terms of making life better, but simply coming out on top in the cold arithmetic of total nuclear war, or identifying and eliminating people who might become "problems".

Ruthlessness, malice and a cold disregard for human reason may well end up being desirable qualities.
Well, maybe instead of 'reasonable' I should use the term 'proportional'? Which I think better suits my argument that you don't respond to a skirmish with a battlefield nuclear device. That being said, I can see the argument you're making in terms of escalation theory.

Say, a single small-yield nuclear device launched by a single SLBM or torpedo, targeting a port city of an opposing ally or vassal nation rather than the principal foe, in order to temporarily dissuade a strategic exchange and show total readiness.

That maybe an A.I. built specifically for a single nation's survival might think a million foreigners dead is worth the gamble for an extra minute or two to protect a single member of its nation's leadership and get them to safety, while the principal enemy retaliates by nuking merely an allied nation's city that can't retaliate in kind, rather than committing to the immediate counterforce strike it would otherwise be gearing up for.

That an A.I. might attempt to put a pause on an inevitable exchange against the nation it is specifically designed to protect by inflicting collateral damage on the people of a nation that could never reasonably retaliate but is allied to an opposing force that feasibly can. Something sufficiently showy, yet largely undetectable until detonation ... like a tactical nuclear torpedo against some naval base or busy civilian harbour.

So maybe you have a point there...

After all, that was also a theory proposed by certain powers back in the 1950s, 60s and even early 70s, when the power gap in nuclear munitions was more keenly felt. The idea that any movement at all creates the potential for one commander on either side to use a tactical nuclear device first ... which begins a steady process of escalation, as opposed to an immediate strategic exchange. So it's a numbers game that an A.I. might look favourably upon, simply because no matter how many foreigners die, it gives it an extra few minutes to do its calculations and ensure that a few extra of the politicians or military brass it lists as priority personnel reach some form of bunker when they otherwise wouldn't survive.
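To put that numbers game crudely, here's a toy sketch of the objective function I'm describing (every figure and name is made up purely for illustration):

```python
# A toy version of the "numbers game" described above: the A.I. weighs
# foreign casualties at zero and only scores time bought for its own
# priority personnel. All figures are made up purely for illustration.

def value_of_delay(extra_minutes: float,
                   priority_personnel_sheltered_per_minute: float,
                   foreign_casualties: int) -> float:
    own_value = extra_minutes * priority_personnel_sheltered_per_minute
    foreign_cost = foreign_casualties * 0.0  # simply not in its objective
    return own_value - foreign_cost

# Two extra minutes that shelter a handful of leadership figures "wins",
# no matter how many people in an uninvolved allied country die for it.
print(value_of_delay(2.0, 3.0, 1_000_000))  # -> 6.0
```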

And that's pretty malicious, one way or the other, even if it's just numbers...

And I suppose the really dangerous aspect of this is what happens if it's not just an A.I. facing a human war council or cabinet, but two A.I.s that simply predict war and thus make it a self-fulfilling prophecy ... whereas with two humans you might actually be able to count on short-sightedness, or the suitable 'chaos' of their flawed senses, and perhaps a flicker of fear; they might actually back down or show the capacity to ignore a threat?

Like, say, the ten near-misses with nuclear exchange we've had, whereby it was humans making the call solely on the basis of realizing what their weapons actually mean to the world?

So maybe it might not be one A.I. that destroys us, but what about the next one we build to fulfill the same military role between two existing enemies?

Yeah, okay. I concede the argument... perhaps an A.I. might even appear more malicious in a way, simply because it might share the worst of our natures, but nothing of the fear or beauty of being human, nothing of actually being able to internalize what it means to lose people rather than simply numbers on a screen.
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
18,722
3,602
118
Going to disagree with both of you, though on minor points.

evilthecat said:
The computer is there purely to ensure that the other side, their families, their childhood friends and their pets die as well
Not true. Firstly, a deterrent needs to be believable to be effective, and one of the best ways of looking like you'd do something is being certain that you would.

Secondly, you might have lost much or most of your population, you might have lost the government, but that doesn't mean it's not worth trying to save the remainder. Fallout shelters and duck and cover films are invested in for that reason, after all. Retaliate, and you are probably too late to stop the missiles, but what's left of your country gets to rebuild without being occupied by what's left of theirs.

Certainly, there are arguments to be made against such systems, but they aren't purely there for revenge or spite.

Addendum_Forthcoming said:
Right, but a military A.I. would still, feasibly, be programmed with an idea of reasonable force. I can't see anybody programming an A.I. to kill indiscriminately, whether in our world or in a world of a hypothetical future that could build it. Such a hypothetical nation would be incredibly lonely if it programmed a military grid to also fire at allied powers
It was (at least at some stage) the doctrine of the Soviet Union to attack allied or neutral nations in the event of WW3. That would not be indiscriminate killing; that would be removing threats to the USSR (or rather, what had been the USSR) in the aftermath. Suddenly "major power" has a different meaning than it did yesterday, and the USSR had fought its biggest war against an enemy it had been signing pacts with, after all.
 

Addendum_Forthcoming

Queen of the Edit
Feb 4, 2009
3,647
0
0
Thaluikhain said:
It was (at least at some stage) the doctrine of the Soviet Union to attack allied or neutral nations in the event of WW3. That would not be indiscriminate killing; that would be removing threats to the USSR (or rather, what had been the USSR) in the aftermath. Suddenly "major power" has a different meaning than it did yesterday, and the USSR had fought its biggest war against an enemy it had been signing pacts with, after all.
Well, true enough. I actually wrote a bit about escalation theory to EtC above, where I conceded the argument that an A.I. would not actually need to be programmed maliciously ... and how there was this idea that you could partially slow what seemed to be an inevitable march towards a strategic exchange through the use of 'battlefield' nuclear munitions. As in, one side uses one against a nation allied to the principal foe in a surprise attack. Something to show determination that you will not back down. Precipitating retaliation with other battlefield nuclear detonations over bases near one's border, or massed soldiers, etc.

At least in the first half of the Cold War before they developed the SLBM...

If it means you can get more people into bunkers rather than committing immediately to a strategic exchange, an A.I. principally designed to defend the people of a single nation, and caring nothing for the lives of another, allied nation, might attempt to prompt this slower, albeit likely more devastating, escalation over far greater territory, if only to buy the population it is programmed to defend more time.

So I can see the argument that an A.I. might even appear more malicious depending on the nature of its programming and just what its prerogatives are, even if designed solely to provide "proportional" force.

That being said, they (slowly) phased out the idea of ready-to-use tactical nuclear weapons given how widely disproportionate and un-uniform their production and explosive yields were. To put it bluntly, one side's 0.07KT warhead might be answered with the other side's 0.9KT-9KT warhead... which greatly elevated the chance of commanders simply opting for the largest weapon at their disposal as quickly as possible.

A 'use them or lose them' situation, given that a lot of these devices were (stupidly) designed to be deployed at very short ranges to their terminus.

And when I say 'short', some of these targets you could probably see with a high-grade pair of binoculars and a good set of eyes behind them. The distances were so short that any guard section set to defend such emplacements would likely be moderately irradiated by their own munitions. So that's very little time to decide whether or not to use what would likely trigger nuclear reprisal in turn.

Kind of off topic, but it always surprises me how we manage to survive ourselves...