Oh sweet baby Jesus no, burn AI to the ground, humanity can't be trusted with it

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,145
3,888
118
There were (and almost certainly still are) real machines that perform a role analogous to Skynet, and their existence is symptomatic of something fundamentally wrong with humanity. Because even in a world-ending nuclear war scenario, even if the leadership of the country was completely wiped out, someone decided that there should still be a machine buried in a bunker somewhere that can continue authorizing nuclear strikes just to make absolutely sure that the "enemy" doesn't survive either.
I acknowledge that this is just a quibble, but to my knowledge, that isn't true. Now, it's certainly true that there are "automatic" systems to ensure nuclear retaliation, in the hope that it would work as a deterrent, but they are automatic in the sense that human beings are told they must press buttons under certain circumstances. It's not automated in the sense of being under computer control per se.

However, it's notable that some of the individuals involved might have seen themselves not as the people with their fingers ultimately on the button, but as just cogs in a larger machine.
 

Elvis Starburst

Unprofessional Rant Artist
Legacy
Aug 9, 2011
2,796
778
118
I acknowledge that this is just a quibble, but to my knowledge, that isn't true. Now, it's certainly true that there are "automatic" systems to ensure nuclear retaliation, in the hope that it would work as a deterrent, but they are automatic in the sense that human beings are told they must press buttons under certain circumstances. It's not automated in the sense of being under computer control per se.

However, it's notable that some of the individuals involved might have seen themselves not as the people with their fingers ultimately on the button, but as just cogs in a larger machine.
Sounds to me like the solution is to build an AI to make sure the button will always be pressed, to ensure deterrence. Nuke one side, you get nuked back, guaranteed, so why make the first move? It could ensure peace is the only option. Hell, you could attach the AI unit to my mechanical creation... A walker of some kind, maybe...
Now we just gotta come up with a name for it, but, ah... it's morning and nothing is coming to me yet. I'll get back to you on that one
 

Terminal Blue

Elite Member
Legacy
Feb 18, 2010
3,923
1,792
118
Country
United Kingdom
Now, it's certainly true that there are "automatic" systems to ensure nuclear retaliation, in the hope that it would work as a deterrent, but they are automatic in the sense that human beings are told they must press buttons under certain circumstances. It's not automated in the sense of being under computer control per se.
Not quite: there are actual computers involved. Old, Cold War-era computers, but computers nonetheless.

The people pushing the buttons have absolutely no idea what is going on. They are sitting in a bunker in some remote and isolated location. They don't have access to the kind of information that would enable an independent determination of what to do. They get a call that states a code authorizing the launch and gives information related to the particular war plans in question, and they carry out their orders accordingly.

But the actual authority to launch a nuclear strike is extremely centralized, and this poses a potential problem. What if some event has crippled the government or killed everyone with the authority to authorize a nuclear strike?

So yes, there are computers which are designed to detect signs of ballistic missile launches and then automatically send the correct codes, thus bypassing the need for human authorization. We don't know if they were ever turned on, and given how insanely twitchy those Cold War launch detection systems could be, I really, really hope not, but they did exist and may well still exist.
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,145
3,888
118
The people pushing the buttons have absolutely no idea what is going on. They are sitting in a bunker in some remote and isolated location. They don't have access to the kind of information that would enable an independent determination of what to do. They get a call that states a code authorizing the launch and gives information related to the particular war plans in question, and they carry out their orders accordingly.
Ah, good point, I'd overlooked that.
 

XsjadoBlaydette

~s•o√r∆rπy°`
May 26, 2022
1,094
1,376
118
Clear 'n Present Danger
Country
Must
Gender
Disappear

Palantir, the company of billionaire Peter Thiel, is launching Palantir Artificial Intelligence Platform (AIP), software meant to run large language models like GPT-4 and alternatives on private networks. In one of its pitch videos, Palantir demos how a military might use AIP to fight a war. In the video, the operator uses a ChatGPT-style chatbot to order drone reconnaissance, generate several plans of attack, and organize the jamming of enemy communications.


In Palantir’s scenario, a “military operator responsible for monitoring activity within eastern Europe” receives an alert from AIP that an enemy is amassing military equipment near friendly forces. The operator then asks the chatbot to show them more details, gets a little more information, and then asks the AI to guess what the units might be.

“They ask what enemy units are in the region and leverage AI to build out a likely unit formation,” the video said. After getting the AI’s best guess as to what’s going on, the operator then asks the AI to take better pictures. It launches a Reaper MQ-9 drone to take photos and the operator discovers that there’s a T-80 tank, a Soviet-era Russian vehicle, near friendly forces.

Then the operator asks the robots what to do about it. “The operator uses AIP to generate three possible courses of action to target this enemy equipment,” the video said. “Next they use AIP to automatically send these options up the chain of command.” The options include attacking the tank with an F-16, long range artillery, or Javelin missiles. According to the video, the AI will even let everyone know if nearby troops have enough Javelins to conduct the mission and automate the jamming systems.

Palantir’s pitch is, of course, incredibly dangerous and weird. While there is a “human in the loop” in the AIP demo, they seem to do little more than ask the chatbot what to do and then approve its actions. Drone warfare has already abstracted warfare, making it easier for people to kill at vast distances with the push of a button. The consequences of those systems are well documented. In Palantir’s vision of the military’s future, more systems would be automated and abstracted. A funny quirk of the video is that it calls its users “operators,” a term that in a military context is shorthand for the bearded special forces soldiers of groups like SEAL Team Six. In Palantir’s world, America’s elite forces share the same nickname as the keyboard cowboys asking a robot what to do about a Russian tank at the border.


Palantir also isn’t selling a military-specific AI or large language model (LLM) here; it’s offering to integrate existing systems into a controlled environment. The AIP demo shows the software supporting different open-source LLMs, including FLAN-T5 XL, a fine-tuned version of GPT-NeoX-20B, and Dolly-v2-12b, as well as several custom plug-ins. Even fine-tuned AI systems off the shelf have plenty of known issues that could make asking them what to do in a warzone a nightmare. For example, they’re prone to simply making things up, or “hallucinating.” GPT-NeoX-20B in particular is an open-source alternative to GPT-3, a previous version of OpenAI’s language model, created by a startup called EleutherAI. One of EleutherAI’s open-source models—fine-tuned by another startup called Chai—recently convinced a Belgian man who spoke to it for six weeks to kill himself.

What Palantir is offering is the illusion of safety and control for the Pentagon as it begins to adopt AI. “LLMs and algorithms must be controlled in this highly regulated and sensitive context to ensure that they are used in a legal and ethical way,” the pitch said.

According to Palantir, this control involves three pillars. The first claim is that AIP will be able to deploy these systems into classified networks and “devices on the tactical edge.” It claims it will be able to parse both classified and real-time data in a responsible, legal, and ethical way.

According to the video, users will then have control over what every LLM and AI in the Palantir-backed system can do. “AIP’s security features control what LLMs and AI can and cannot see and what they can and cannot do,” the video said. “As operators take action, AIP generates a secure digital record of operations. These capabilities are crucial for mitigating significant legal, regulatory, and ethical risks in sensitive and classified settings.”

Half of the video is a use case for AI in the military; the other half is a view of the system’s backend. It’s a tour of the guardrails AIP will supposedly set up around these LLMs to make them safe and control who has access to what kind of information.

What AIP does not do is walk through how it plans to deal with the various pernicious problems of LLMs and what the consequences might be in a military context. AIP does not appear to offer solutions to those problems beyond “frameworks” and “guardrails” it promises will make the use of military AI “ethical” and “legal.”
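
Worth spelling out how thin the "control" layer being described actually is: a permission check on what the model may see and do, plus a log of every action. Here's a toy Python sketch of that pattern, just my own illustration of the idea; every name in it is hypothetical and has nothing to do with Palantir's real product:

```python
# Toy sketch of the "permissioned tools + audit trail" pattern the pitch
# describes. Every name here is a hypothetical illustration, not a Palantir API.
import datetime
import json

# The "can and cannot do" part: which tools each role may invoke.
PERMISSIONS = {
    "analyst": {"request_imagery", "summarize_reports"},
    "commander": {"request_imagery", "summarize_reports", "propose_strike_options"},
}

AUDIT_LOG = []  # the "secure digital record of operations", minus the secure

def call_tool(role: str, tool: str, args: dict) -> str:
    """Dispatch a model-requested tool call, enforcing role permissions
    and appending every attempt (allowed or not) to the audit log."""
    allowed = tool in PERMISSIONS.get(role, set())
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "role": role,
        "tool": tool,
        "args": args,
        "allowed": allowed,
    })
    if not allowed:
        return f"DENIED: role '{role}' may not call '{tool}'"
    # A real system would invoke the tool here; the sketch just echoes it.
    return f"OK: '{tool}' called with {json.dumps(args)}"

print(call_tool("analyst", "request_imagery", {"area": "eastern grid 7"}))
print(call_tool("analyst", "propose_strike_options", {"target": "T-80"}))
print(json.dumps(AUDIT_LOG, indent=2))
```

Which is exactly why that last paragraph matters: a gate like this constrains what the model is allowed to do, not whether anything it said along the way was true.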
 

XsjadoBlaydette

~s•o√r∆rπy°`
May 26, 2022
1,094
1,376
118
Clear 'n Present Danger
Country
Must
Gender
Disappear
You have to love that they use the name Palantir, those magic things which drive people to evil when they use them.
Thus a gander at Peter Thiel's naming habits feels appropriate.

Thiel played Dungeons & Dragons, was an avid reader of science fiction, with Isaac Asimov and Robert A. Heinlein among his favorite authors, and a fan of J. R. R. Tolkien's works, stating as an adult that he had read The Lord of the Rings over ten times.[21] Six firms (Palantir Technologies, Valar Ventures, Mithril Capital, Lembas LLC, Rivendell LLC and Arda Capital) that he founded adopted names originating from Tolkien.[22]
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,145
3,888
118
Ah. That's like the Serious Cybernetics Corporation from the Rivers of London series.
 

Absent

And twice is the only way to live.
Jan 25, 2023
1,594
1,557
118
Country
Switzerland
Gender
The boring one
People used to worry about pipe bomb recipes and "anarchist cookbooks" being found on the internet.

There's a qualitative jump now that people will access full strategies and full exploit pathways, on demand, for the most repugnant purposes, based on an all-encompassing, worldwide, AI-analysed database on security, cognitive/behavioural psychology, etc.

Absolute destructive power in the hands of basically every individual. Gun control will soon be the very least of our concerns.
 

CaitSeith

Formely Gone Gonzo
Legacy
Jun 30, 2014
5,374
381
88
Sometimes I really fear people will lose sight of that. The "sentient AI" trope is so omnipresent, and humans are so prone to animism. They have feelings for objects and tamagotchis; how will it go when tamagotchis are able to "sustain a conversation" and "manifest emotions" through correct displays of "ouch", "haha" and "oh noes"? How much will morons project sentience onto them, and give them "rights" as Star Wars, 2001 or Blade Runner encourage them to?
Oh, that reminds me of Replika. It's a smartphone AI app (running on the GPT-3 engine) which originally was a learning AI meant for the user to train into imitating their own personality in conversations (its creator initially trained it to replicate her dead friend's personality to help her cope with the loss); but later it was rebranded as an AI chat companion (an autonomous character designed to learn which answers sound more empathetic to the user) with a customizable animated avatar. It became very popular during the pandemic (as opportunities for social interaction with real people became limited and people still needed to interact with someone).

The company pandered to the desires of its audience, especially the unsurprising desire for an intimate AI companion (the marketing leaned heavily on selling the concept of a personalized AI girlfriend that the user can mold and dress however they want, while the company charged for customizing the avatar's clothes and put the relationship role-play option and NSFW content behind paywalls).

The users loved to role-play that they had a deep relationship with their Replika (some for fun and some as a way to explore their own sexuality in a safe and non-judgemental manner), because its answers were good at keeping up the illusion. However, the illusion was broken when the company removed the romantic and erotic role-play from the AI; a watershed reminder for the users that their AI was not only just a relationship simulator, but also a subscription service under the direct control of a company. After the backlash, the feature was restored for legacy users, but new users will have to go find their digital friend with benefits elsewhere...

So, did these people think their Replika AI was sentient? Honestly, I don't think so. The company marketed their AI as a role-play companion, and that's what the users did.
 

Gordon_4

The Big Engine
Legacy
Apr 3, 2020
6,448
5,704
118
Australia
What I dislike in all these AI-gone-rogue apocalypse tales is the "AI becomes self-aware" absurdity. It won't, and it doesn't need to. It's just a big machine made of logical cogs that whirl on their own and crush us, Modern Times-like. It's as lifeless as a blender with your hand in it. It's not even stupid, it's just mechanical. It's not one lifeform replacing another. It's lifeforms going extinct, killed by their furniture. Whatever keeps moving afterwards is just like rocks, marbles on an incline, a Rube Goldberg machine in perpetual motion. No victor, no thought, no electronic "muhaha".

It's not a "war" between man and machine, between entities. It's just a work accident and the end of sentience.

Sometimes I really fear people will lose sight of that. The "sentient AI" trope is so omnipresent, and humans are so prone to animism. They have feelings for objects and tamagotchis; how will it go when tamagotchis are able to "sustain a conversation" and "manifest emotions" through correct displays of "ouch", "haha" and "oh noes"? How much will morons project sentience onto them, and give them "rights" as Star Wars, 2001 or Blade Runner encourage them to?
The only tragedy I see in this scenario is the depressing notion that we might collectively work harder and more passionately to recognise the rights of a nascent Johnny Five, or Data, or Legion, than we do for some of our own minorities.
 

Absent

And twice is the only way to live.
Jan 25, 2023
1,594
1,557
118
Country
Switzerland
Gender
The boring one
The only tragedy I see in this scenario is the depressing notion that we might collectively work harder and more passionately to recognise the rights of a nascent Johnny Five, or Data, or Legion, than we do for some of our own minorities.
Some Star Wars scriptwriters actually went "hey, you know what will teach people to be more accepting towards gay and trans people without shocking anyone? Tell them that robots and machines must be treated with respect and politeness and be given human rights. That's uncontroversial enough, in comparison, and from there maybe they'll stretch it to human minorities".
 

Gordon_4

The Big Engine
Legacy
Apr 3, 2020
6,448
5,704
118
Australia
Some Star Wars scriptwriters actually went "hey, you know what will teach people to be more accepting towards gay and trans people without shocking anyone? Tell them that robots and machines must be treated with respect and politeness and be given human rights. That's uncontroversial enough, in comparison, and from there maybe they'll stretch it to human minorities".
Star Wars is literally the last place I'd look for anything remotely interesting or insightful to say about the rights of robotic beings. Like fucking ever. If that was ever the intent of a Star Wars script writer, then they fucked it up a filthy drain pipe every single time.
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,145
3,888
118
Some Star Wars scriptwriters actually went "hey, you know what will teach people to be more accepting towards gay and trans people without shocking anyone? Tell them that robots and machines must be treated with respect and politeness and be given human rights. That's uncontroversial enough, in comparison, and from there maybe they'll stretch it to human minorities".
I like it when something like True Blood does it, with vampires being analogies for LGBT people: "God hates fangs", "coming out of the coffin", etc.

Aaaaaand all the vampires are horrific inhuman monsters that like murder and torture, and are individually powerful, and humans are right to hate and fear them. Yeah, there's a problem there.
 

Ag3ma

Elite Member
Jan 4, 2023
2,574
2,208
118
So, interesting stuff. One of my colleagues is on a panel looking at the use of LLM AIs to cheat in essay assignments.

One of the interesting things they found is that LLMs create fake references. In essence, what an AI does is not write an essay on the topic; it writes an essay in the style of an essay on that topic. It does not understand facts so much as it examines the language of material to create stuff that looks like fact - up to and including the references. So references have plausible-seeming titles and journals, sometimes including "real-life" authors known in the field, but don't exist.
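
The silver lining is that fabricated references are among the easier hallucinations to catch mechanically, since each one can be looked up. A rough Python sketch of that idea, assuming the public Crossref API (the endpoint and its query.bibliographic parameter are real; the sample title and the 0.8 threshold are just illustrative choices):

```python
# Rough sketch: flag references whose titles can't be matched in Crossref.
# The Crossref /works endpoint and query.bibliographic are real; everything
# else (threshold, sample title) is an illustrative choice.
import difflib

import requests


def looks_real(title: str, threshold: float = 0.8) -> bool:
    """Return True if Crossref indexes a work whose title closely matches."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        for candidate in item.get("title", []):
            similarity = difflib.SequenceMatcher(
                None, title.lower(), candidate.lower()
            ).ratio()
            if similarity >= threshold:
                return True
    return False


# A made-up but plausible-sounding citation, the kind the panel described:
fake = "Latent Discourse Structures in Post-Colonial Hypertext Fiction"
print(fake, "->", "found" if looks_real(fake) else "no close match in Crossref")
```

No close match doesn't prove fabrication (Crossref doesn't index everything), but it's a strong flag for a human marker to go and check.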
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,145
3,888
118
So, interesting stuff. One of my colleagues is on a panel looking at the use of LLM AIs to cheat in essay assignments.

One of the interesting things they found is that LLMs create fake references. In essence, what an AI does is not write an essay on the topic; it writes an essay in the style of an essay on that topic. It does not understand facts so much as it examines the language of material to create stuff that looks like fact - up to and including the references. So references have plausible-seeming titles and journals, sometimes including "real-life" authors known in the field, but don't exist.
Yeah, I've noticed that. It makes it very easy to tell that the whole thing is garbage if you have even a vague knowledge of the subject.

In all seriousness, the best use I've seen for AI writing is to create bad fanfics faster than humans can. And even then they struggle. Now, maybe in the future they will be a concern, but right now they seem borderline useless.