Safety experts quit OpenAI

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
18,866
3,676
118
OK. You may think you're immune to misinformation, but you're not.
Thinking you're totally immune to a massive societal influence that affects people to a great degree generally means it's got you hard.
 

Phoenixmgs

The Muse of Fate
Legacy
Apr 3, 2020
9,356
809
118
w/ M'Kraan Crystal
Gender
Male
Frankly, I think you just have an extremely simplistic view of how media is widely consumed.
How would one be consuming so much news that they run out of news to consume? It's like saying there aren't enough Mario Maker levels being made because I've played through them all and thus would want AI to create more levels to play.
 

Silvanus

Elite Member
Legacy
Jan 15, 2013
11,563
6,005
118
Country
United Kingdom
How would one be consuming so much news that they run out of news to consume? It's like saying there aren't enough Mario Maker levels being made because I've played through them all and thus would want AI to create more levels to play.
Or you could engage with what I actually said, rather than boiling it down to this nonsense strawman.
 
  • Like
Reactions: CaitSeith

Phoenixmgs

The Muse of Fate
Legacy
Apr 3, 2020
9,356
809
118
w/ M'Kraan Crystal
Gender
Male
Or you could engage with what I actually said, rather than boiling it down to this nonsense strawman.
What are you even talking about? I've said from the start of this thread the following:
There's already massive disinformation from humans alone. AI can't really create new information or disinformation; it writes things based on what humans have already written, or on what other AIs have already written from human text (so it becomes a copy of a copy of a copy). It can't create something truly new, only regurgitate what's already been written. Sure, AI can accelerate disinformation, but bots can already accomplish that, and they aren't considered AI because of how simplistic they are. The danger is how fast disinformation can spread, and it's already at something like max velocity without AI.

Hence why I'm asking: how would one be consuming so much news that they run out of news to consume?

What is the point/danger of AI?
 

CaitSeith

Formerly Gone Gonzo
Legacy
Jun 30, 2014
5,374
381
88
The AI can't really create new information or disinformation
What are you talking about? You don't need anything new to create disinformation. You just take two unrelated facts and put them together as if they were connected. That's something AI does very well, to a fault. The danger is in embedding those faulty AIs into search engines (you know, the thing people usually use when they want to check whether something they heard is true or false?).
 

Silvanus

Elite Member
Legacy
Jan 15, 2013
11,563
6,005
118
Country
United Kingdom
What are you even talking about? I've said from the start of this thread the following:
There's already massive disinformation from humans alone. AI can't really create new information or disinformation; it writes things based on what humans have already written, or on what other AIs have already written from human text (so it becomes a copy of a copy of a copy). It can't create something truly new, only regurgitate what's already been written. Sure, AI can accelerate disinformation, but bots can already accomplish that, and they aren't considered AI because of how simplistic they are. The danger is how fast disinformation can spread, and it's already at something like max velocity without AI.

Hence why I'm asking: how would one be consuming so much news that they run out of news to consume?

What is the point/danger of AI?
They wouldn't be "consuming so much news that they run out". That's nonsense.

The speed, reactiveness, reach, and scope of misinformation are not at "max velocity". That's just complacency. It's extremely high, but it could be far higher. And that's where AI comes in: it can outpace anything humans and basic bots can put out.
 

Phoenixmgs

The Muse of Fate
Legacy
Apr 3, 2020
9,356
809
118
w/ M'Kraan Crystal
Gender
Male
What are you talking about? You don't need anything new to create disinformation. You just take two unrelated facts and put them together as if they were connected. That's something AI does very well, to a fault. The danger is in embedding those faulty AIs into search engines (you know, the thing people usually use when they want to check whether something they heard is true or false?).
But AI is horrible at doing that.

[screenshot attachment]

They wouldn't be "consuming so much news that they run out". That's nonsense.

The speed, reactiveness, reach, and scope of misinformation are not at "max velocity". That's just complacency. It's extremely high, but it could be far higher. And that's where AI comes in: it can outpace anything humans and basic bots can put out.
You ain't gonna get people thinking there's some immigration crisis in, say, a few hours instead of a few months. You have to have events happen over time for people to believe that; people have to notice an actual difference. For example, you can't just say inflation is out of hand when everything costs the same as it did yesterday.
 

tstorm823

Elite Member
Legacy
Aug 4, 2011
6,824
940
118
Country
USA
I don't care what you're doing. Hundreds of millions of people do, including news content. How much do you think they're presented with, or passively take notice of, even if they don't actively open and read an article?
Not to mention how much of people's trusted sources are writing response pieces to information that came from somewhere else, which may or may not have been written by a computer algorithm trained to imitate the appearance of truth.
 
  • Like
Reactions: Silvanus

The Rogue Wolf

Stealthy Carnivore
Legacy
Nov 25, 2007
16,537
9,094
118
Stalking the Digital Tundra
Gender
✅
Thinking you're totally immune to a massive societal influence that affects people to a great degree generally means it's got you hard.
Advertisers love people who say "oh, advertising has no effect on me", because those people lack the ability to be self-critical and are therefore even more vulnerable to advertising than most others. And propaganda is just a form of advertising....
 

Bedinsis

Elite Member
Legacy
Escapist +
May 29, 2014
1,536
762
118
Country
Sweden
Advertisers love people who say "oh, advertising has no effect on me", because those people lack the ability to be self-critical and are therefore even more vulnerable to advertising than most others. And propaganda is just a form of advertising....
I prefer to say that advertising is a form of propaganda.
 

Silvanus

Elite Member
Legacy
Jan 15, 2013
11,563
6,005
118
Country
United Kingdom
You ain't gonna get people thinking there's some immigration crisis in, say, a few hours instead of a few months. You have to have events happen over time for people to believe that; people have to notice an actual difference. For example, you can't just say inflation is out of hand when everything costs the same as it did yesterday.
It's already been pointed out to you that people don't need to see the actual effects of something to have strong opinions on it. And even if they do experience a certain effect, they can easily be convinced that the cause was something else entirely.

Take a look at the Brexit referendum. One of the claims with the most traction was that Britain sends £300 million a week to the EU, and that leaving would allow us to invest it in the NHS instead. This was a complete lie. But it was repeated so often, and so widely (including in low-substance online ads), that it gained enormous traction. These people weren't actually reading full articles making that claim. They just saw it pop up, again and again and again. It stuck in their minds and stoked up anger and fear.

No-one had experienced the EU taking all our money. They had experienced worsening NHS care due to underfunding... and misinformation effectively exploited that to convince people of something untrue and swing a public vote.
 

Agema

Do everything and feel nothing
Legacy
Mar 3, 2009
8,811
6,086
118
It's already been pointed out to you that people don't need to see the actual effects of something to have strong opinions on it. And even if they do experience a certain effect, they can easily be convinced that the cause was something else entirely.
I'm completely unclear why anyone is bothering to entertain the argument that people don't believe misinformation, coming as it does from a man who spent so many months telling us hydroxychloroquine and ivermectin would save everyone from covid.
 

Phoenixmgs

The Muse of Fate
Legacy
Apr 3, 2020
9,356
809
118
w/ M'Kraan Crystal
Gender
Male
It's already been pointed out to you that people don't need to see the actual effects of something to have strong opinions on it. And even if they do experience a certain effect, they can easily be convinced that the cause was something else entirely.

Take a look at the Brexit referendum. One of the claims with the most traction was that Britain sends £300 million a week to the EU, and that leaving would allow us to invest it in the NHS instead. This was a complete lie. But it was repeated so often, and so widely (including in low-substance online ads), that it gained enormous traction. These people weren't actually reading full articles making that claim. They just saw it pop up, again and again and again. It stuck in their minds and stoked up anger and fear.

No-one had experienced the EU taking all our money. They had experienced worsening NHS care due to underfunding... and misinformation effectively exploited that to convince people of something untrue and swing a public vote.
1) Would an AI actually be able to come up with that on its own, given that it looks like it was based on some truth? Would an AI know that something like that would be believable misinformation?

2) It looks like the UK was, at the end of the day, sending about £150 million a week to the EU. That would likely be something people didn't like anyway.

3) If an AI is just basically pulling shit out of its ass hoping something sticks, you're gonna have so much shit that people will just ignore it all. It's like when you get a call from an unknown number: nobody picks up because it's probably spam, and if it were actually something important, they'd leave a voicemail. If the news were flooded with BS AI stories, people would probably just find a couple of sources they actually trust and follow those.

4) That happened without AI, why do you need AI to accomplish this?

I'm completely unclear why anyone is bothering to entertain the argument that people don't believe misinformation, coming as it does from a man who spent so many months telling us hydroxychloroquine and ivermectin would save everyone from covid.
Not true.
 

Silvanus

Elite Member
Legacy
Jan 15, 2013
11,563
6,005
118
Country
United Kingdom
1) Would an AI actually be able to come up with that on its own as it was based in some truth it looks like? Would an AI know something like that would be believable misinformation?
AIs can absolutely come up with false statistics from whole cloth. Recall those lawyers who used AI language programs to write legal briefs? The AI fabricated names, case details, dates, and numbered references, all of which looked superficially believable.

2) It looks like the UK was sending about $150 million to the EU a week at the end of the day. That would likely be something people didn't like anyway.
I don't care if you think the misinformation was fine.

3) If an AI is just basically pulling shit out of its ass hoping something sticks, you're gonna have so much shit that people will just ignore it all. It's like when you get a call from an unknown number: nobody picks up because it's probably spam, and if it were actually something important, they'd leave a voicemail.
Loads of spam and scams are successful. You and I ignore them; lots of vulnerable people don't. That's why they keep getting made: they work.
 
Last edited:

Schadrach

Elite Member
Legacy
Mar 20, 2010
2,065
376
88
Country
US
AIs can absolutely come up with false statistics from whole cloth. Recall those lawyers who used AI language programs to write legal briefs? The AI fabricated names, case details, dates, and numbered references, all of which looked superficially believable.
A legal research AI could absolutely be useful, but with LLMs there's a tuning trade-off: you can make them sound more varied and natural, or you can make them hallucinate less. You'd want to go hard on the latter, at which point they'd produce dry, robotic, painful-to-read, but mostly accurate responses.
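That tuning knob is, in practice, largely a decoding parameter like sampling temperature: lower temperatures make the model stick to its single most probable output (drier and more repetitive, but more conservative), while higher temperatures flatten the distribution and produce more varied, natural-sounding text. Here's a minimal, self-contained sketch of temperature scaling; the logit values are made up purely for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before softmax: lower temperature
    sharpens the distribution toward the most likely token; higher
    temperature flattens it toward uniform."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token distribution: one high-probability candidate,
# two lower-probability "creative" alternatives.
logits = [2.0, 1.0, 0.5]

creative = softmax_with_temperature(logits, 1.5)  # flatter: more variety
cautious = softmax_with_temperature(logits, 0.2)  # sharper: top choice dominates

print(creative[0])  # roughly a coin flip on the top candidate
print(cautious[0])  # almost certainly the top candidate
```

At a very low temperature the model nearly always emits its single most probable continuation, which is why hallucination-averse tuning tends to read as flat and robotic.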

But AI is horrible at doing that.
The Google AI bot is really good at synthesizing the top few search results for a question and expressing that information in an authoritative sounding manner. Unfortunately, that only works insofar as the top few search results actually provide a good answer to the question asked.
 

Phoenixmgs

The Muse of Fate
Legacy
Apr 3, 2020
9,356
809
118
w/ M'Kraan Crystal
Gender
Male
AIs can absolutely come up with false statistics from whole cloth. Recall those lawyers who used AI language programs to write legal briefs? The AI fabricated names, case details, dates, and numbered references, all of which looked superficially believable.



I don't care if you think the misinformation was fine.



Loads of spam and scams are successful. You and I ignore them; lots of vulnerable people don't. That's why they keep getting made: they work.
It still has to pull it from somewhere. It can't actually make up a sentence like we can. I'm not saying it can't put something together that's coherent and readable. I'm saying just doing that isn't necessarily misinformation that would be believable.

I didn't say it was fine. I said it was based on an actual real number, just the gross figure rather than the net one. It's like getting mad at a company's profits going up while its profit margins stay essentially the same.

You have to have X amount of people believe something for it to be believed on a population level. People believe the earth is flat; that doesn't mean the majority, or even a somewhat significant amount, of the population believes that. Getting like 0.001% of people to bite on some phishing attempt and getting something believed on a population level are two very different things.

The Google AI bot is really good at synthesizing the top few search results for a question and expressing that information in an authoritative sounding manner. Unfortunately, that only works insofar as the top few search results actually provide a good answer to the question asked.
But if there are no "answers", the AI has nothing to pull from.
 

Silvanus

Elite Member
Legacy
Jan 15, 2013
11,563
6,005
118
Country
United Kingdom
It still has to pull it from somewhere. It can't actually make up a sentence like we can. I'm not saying it can't put something together that's coherent and readable. I'm saying just doing that isn't necessarily misinformation that would be believable.
Yet to anyone who isn't able to actually look up case history, those references all looked believable. They included dates, names, page numbers, case titles. All fabricated.

I didn't say it was fine. I said it was based on an actual real number, just the gross figure rather than the net one.
But the number that gained traction wasn't the real number. It was a complete falsehood.


You have to have X amount of people believe something for it to be believed on a population level. People believe the earth is flat; that doesn't mean the majority, or even a somewhat significant amount, of the population believes that.
Right, but far more people-- millions upon millions a year-- fall for spam and scams.
 

Gordon_4

The Big Engine
Legacy
Apr 3, 2020
6,265
5,532
118
Australia
A legal research AI could absolutely be useful, but with LLMs there's a tuning trade-off: you can make them sound more varied and natural, or you can make them hallucinate less. You'd want to go hard on the latter, at which point they'd produce dry, robotic, painful-to-read, but mostly accurate responses.
I think web-search AIs like Google's come up with loads of inane bullshit because they're pulling from basically every source Google has. But if you're building one with, say, Azure (the platform I learned about), you can literally tell it to reference specific URLs, documents, or other specific sources and nothing else.

So if you set the bot to only pull from a state's online law library (let's take Mississippi, 'cos I just watched The Insider), then that's what will generate its answers. But at that point they're just slightly more advanced search functions.
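That whitelist idea can be sketched in a few lines: a bot that may only answer from an explicitly allowed set of documents, and admits ignorance when nothing matches. The document names, contents, and crude word-overlap scoring below are all made up for illustration; real grounded-retrieval platforms use proper indexing and ranking, but the restriction principle is the same:

```python
# Toy "grounded" retrieval: the bot may only answer from an explicit
# whitelist of documents, never from the open web. All names and text
# here are hypothetical placeholders.
ALLOWED_SOURCES = {
    "ms_code_title_97": "Mississippi Code Title 97 covers crimes and criminal procedure.",
    "ms_code_title_63": "Mississippi Code Title 63 covers motor vehicles and traffic regulation.",
}

def answer(question: str) -> str:
    """Return the best-matching whitelisted passage, or admit ignorance."""
    words = set(question.lower().split())
    best_name, best_score = None, 0
    for name, text in ALLOWED_SOURCES.items():
        # Crude relevance score: count of shared words with the question.
        score = len(words & set(text.lower().split()))
        if score > best_score:
            best_name, best_score = name, score
    if best_name is None:
        return "No answer found in the allowed sources."
    return f"[{best_name}] {ALLOWED_SOURCES[best_name]}"

print(answer("which title covers traffic regulation?"))
```

Which is exactly the point above: constrained this way, the bot is really just a slightly fancier search function, with the language model only phrasing whatever the whitelisted source already says.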