Safety experts quit OpenAI

Phoenixmgs

I would point out that older people are generally the targets because they aren't as technologically savvy....

And then you realise that we are going to be there in 40 years, not keeping up with the newest scam and getting swindled
The problem is that today's old people never grew up with technology and never learned it, whereas we did.

---

AI is looking real dangerous!!! This whole AI push is just a novelty act and a tech fad, and the bubble is going to burst soonish because it can't really do much, though I'm sure it will be good for some very specific tasks. For example, AI would be good for game devs to use to write NPC side-quest dialogue; it can't be worse than it currently is.

 

Silvanus

Phoenixmgs said:
The problem is that today's old people never grew up with technology and never learned it, whereas we did.
We didn't grow up with AI.

Phoenixmgs said:
This whole AI push is just a novelty act and a tech fad, and the bubble is going to burst soonish because it can't really do much, though I'm sure it will be good for some very specific tasks.
Exactly the same was said about IT, and TV & radio before it.
 

Gergar12

Silvanus said:
We didn't grow up with AI.

Silvanus said:
Exactly the same was said about IT, and TV & radio before it.
Soon Microsoft will make things like automation easier for the average casual user. Right now, the most casual users use automated web bots, medium users use UiPath or maybe GPT-4-generated Python scripts, and advanced users use a variety of languages, libraries, ERP systems, and maybe AI. Soon everyone will be able to do what advanced users do today. Instead of making my Mod Organizer 2 folders (for modding Skyrim SE) a Windows Defender exclusion by going into the security UI and clicking through lots of buttons, I will just type a prompt and Microsoft Copilot will do it: it will auto-locate the folder and add the exclusion by itself.
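Roughly, I imagine the generated script would look something like this. The folder name and drive letters are just my guesses, and Add-MpPreference (Defender's actual PowerShell cmdlet for exclusions) has to run from an elevated shell:

[CODE]
# Rough sketch of what a Copilot-generated script might look like for
# "exclude my Mod Organizer 2 folder from Windows Defender".
# The folder name and search roots are guesses; Add-MpPreference is
# Defender's PowerShell cmdlet and needs an administrator shell.
import subprocess
from pathlib import Path

def find_mo2_folder(search_roots=("C:/", "D:/")):
    """Look one level deep under a few likely drives for a Mod Organizer 2 folder."""
    for root in search_roots:
        for candidate in Path(root).glob("*/Mod Organizer 2"):
            if candidate.is_dir():
                return candidate
    return None

def add_defender_exclusion(folder):
    """Ask Windows Defender (via PowerShell) to stop scanning this folder."""
    subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f'Add-MpPreference -ExclusionPath "{folder}"'],
        check=True,
    )

if __name__ == "__main__":
    mo2 = find_mo2_folder()
    if mo2 is None:
        print("Couldn't find a Mod Organizer 2 folder automatically.")
    else:
        add_defender_exclusion(mo2)
        print(f"Added Defender exclusion for {mo2}")
[/CODE]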
 

Gordon_4

Depends what you mean by AI.
When most lay people talk about artificial intelligence, they're thinking of characters like Data from Star Trek TNG, or Legion and EDI from Mass Effect. What's being touted at the moment is more like a predictive response algorithm (a highly complex one, to be sure): a search engine with a name and a voice, but one that lacks self-awareness, self-reflection and curiosity.

Or if you want a simple breakdown: ChatGPT can produce an image, Data can paint a picture.
 

Silvanus

Gordon_4 said:
When most lay people talk about artificial intelligence, they're thinking of characters like Data from Star Trek TNG, or Legion and EDI from Mass Effect. What's being touted at the moment is more like a predictive response algorithm (a highly complex one, to be sure): a search engine with a name and a voice, but one that lacks self-awareness, self-reflection and curiosity.

Or if you want a simple breakdown: ChatGPT can produce an image, Data can paint a picture.
Sure, but when people talk about the dangers of AI currently, most of them are not talking about killer robots or truly self-aware programs. They're talking about the dangers those sophisticated response algorithms pose to modern life.

Easier to distribute misinformation, or muddy the water so much that search results are all-but-completely unreliable. Deepfakes, election interference and scams. The replacement of ever more jobs (even creative ones). And the potentially life-threatening consequences of trusting AI with things like guidance systems (as already seen in cars).
 

Phoenixmgs

Silvanus said:
Sure, but when people talk about the dangers of AI currently, most of them are not talking about killer robots or truly self-aware programs. They're talking about the dangers those sophisticated response algorithms pose to modern life.

Easier to distribute misinformation, or muddy the water so much that search results are all-but-completely unreliable. Deepfakes, election interference and scams. The replacement of ever more jobs (even creative ones). And the potentially life-threatening consequences of trusting AI with things like guidance systems (as already seen in cars).
But bots can already flood the internet with misinformation/disinformation. What is AI going to enable that isn't already a threat? And people already know the search results driven by AI are horrible. I'm actually going to stop using Google because of how ass it has become; I put stuff in quotes and it still can't find shit. I think there's some way to enable normal search on Google, and I need to find it and enable it, or just stop using Google. Self-driving cars never got past the "test" phase, and they aren't going to be a thing, probably ever, or at least not in our lifetimes, because you'd have to redo the whole infrastructure for that. What is this massive danger that AI is posing? The whole "OpenAI is dangerous" thing is probably just a marketing ploy to convince people that AI is so good it's dangerous!!!
 

Silvanus

Phoenixmgs said:
But bots can already flood the internet with misinformation/disinformation. What is AI going to enable that isn't already a threat?
Misinformation spread by bots is not adaptive. It is very easily recognised, even by most laypeople, and becomes anachronistic in a matter of months. "AI", or if you prefer adaptive language algorithms, can skirt these barriers with alarming effectiveness.

Phoenixmgs said:
And people already know the search results driven by AI are horrible.
Actually, they don't. Most savvy Internet users know search results are skewed and AI-driven ones are untrustworthy. But of the whole pool of people who routinely use search engines? The vast majority do not appreciate how bad they can be.
 

Phoenixmgs

Silvanus said:
Misinformation spread by bots is not adaptive. It is very easily recognised, even by most laypeople, and becomes anachronistic in a matter of months. "AI", or if you prefer adaptive language algorithms, can skirt these barriers with alarming effectiveness.

Silvanus said:
Actually, they don't. Most savvy Internet users know search results are skewed and AI-driven ones are untrustworthy. But of the whole pool of people who routinely use search engines? The vast majority do not appreciate how bad they can be.
Humans can't make the misinformation themselves and have bots spread it? You have months to come up with something new; that isn't too hard. You act like one news story goes out and another needs to be put out immediately, but it takes time (even with the internet at your fingertips) for news to circulate, and some time after that for people to think about it and discuss it. You're acting like we need stuff created in a constant stream (which AI can do and humans can't) or else people will catch on. People still believe misinformation to this day that was created by humans alone years, decades, or centuries ago. And AI can only regurgitate/rearrange what has already been created, so the more AI creates and the less humans create, the shittier and shittier the AI will become, because it will be reusing the same content more and more often.

The most likely scenario is that this is just a marketing ploy by OpenAI, or that these people are just getting out before the AI bubble bursts, or both.
 

Silvanus

Phoenixmgs said:
Humans can't make the misinformation themselves and have bots spread it? You have months to come up with something new; that isn't too hard.
Of course they can. They have since time immemorial. But that's just it: it takes a human time to produce even a single misinfo article. An adaptive language algorithm does so near-instantly, and provides a hundred times the content.

None of this is about AI being able to do something that humans fundamentally cannot. It's about scale and speed with minimal investment or input.

As you say, many humans today believe misinformation created by other humans decades ago. Yet how much of it is there, and what has it impacted? Now, with adaptive language models, imagine that the amount put out this year is orders of magnitude greater, with much further reach, much more sophisticated targeting, and content produced more quickly in response to trends.
 

Phoenixmgs

Silvanus said:
Of course they can. They have since time immemorial. But that's just it: it takes a human time to produce even a single misinfo article. An adaptive language algorithm does so near-instantly, and provides a hundred times the content.

None of this is about AI being able to do something that humans fundamentally cannot. It's about scale and speed with minimal investment or input.

As you say, many humans today believe misinformation created by other humans decades ago. Yet how much of it is there, and what has it impacted? Now, with adaptive language models, imagine that the amount put out this year is orders of magnitude greater, with much further reach, much more sophisticated targeting, and content produced more quickly in response to trends.
But why do you need such a constant stream of misinformation? Humans still have to take in the information (read, watch, listen), then think about it, and then talk about it with others. What do you gain by putting out info faster than humans can take it in? We already have the capability as humans to produce content fast enough, so why do you need this AI? It's like a CPU that can transfer data faster than the hard drive can write it. It's not like The Matrix, where humans are jacked into the network and can learn kung fu instantly.
 

Trunkage

Phoenixmgs said:
But why do you need such a constant stream of misinformation? Humans still have to take in the information (read, watch, listen), then think about it, and then talk about it with others. What do you gain by putting out info faster than humans can take it in? We already have the capability as humans to produce content fast enough, so why do you need this AI? It's like a CPU that can transfer data faster than the hard drive can write it. It's not like The Matrix, where humans are jacked into the network and can learn kung fu instantly.
It's a numbers game.

There were a thousand scams like the Nigerian Prince one; the Nigerian Prince scam was just the most successful.

If you increase the number of scams, the more likely it is that one will work on you.

Edit: Also, misinformation takes very little time to make even without AI. That's why it's so prevalent compared to actual information.
 

Silvanus

Phoenixmgs said:
But why do you need such a constant stream of misinformation? Humans still have to take in the information (read, watch, listen), then think about it, and then talk about it with others.
No, they don't. Humans passively absorb information they see all around them without thinking actively or consciously about it.

Take a very prominent piece of misinformation: the belief that immigration is so much higher than it actually is. This is extremely widely believed in the US and UK. And yet, very few people actually read a specific analytic article about it, sit down to have a think, and then discuss it with their peers.

...but they're bombarded with misinformation, online and in the dead-tree press headlines in every shop and newsstand. Eventually, it has an impact. And the result is a very widely held, completely incorrect belief, with barely any conscious or active thought or discussion put into it.
 

Agema

Silvanus said:
No, they don't. Humans passively absorb information they see all around them without thinking actively or consciously about it.
Whilst I agree that humans do passively absorb some information with little or no conscious thought, it is probably more that they do not critically assess the information they actively think about to any meaningful degree: essentially, monkey see, monkey do.
 

Phoenixmgs

Trunkage said:
It's a numbers game.

There were a thousand scams like the Nigerian Prince one; the Nigerian Prince scam was just the most successful.

If you increase the number of scams, the more likely it is that one will work on you.

Edit: Also, misinformation takes very little time to make even without AI. That's why it's so prevalent compared to actual information.
And if nearly every email you get is a scam, then everyone will just ignore email. It's like the California law where everything gets labelled as causing cancer, so everyone ignores the warnings.

Again, it's almost impossible to write a story about something (that isn't some very objective event) without it having some misinformation in it. Tons of government pages have misinformation on them, let alone news stories, where every news source has some bias.

Silvanus said:
No, they don't. Humans passively absorb information they see all around them without thinking actively or consciously about it.

Take a very prominent piece of misinformation: the belief that immigration is so much higher than it actually is. This is extremely widely believed in the US and UK. And yet, very few people actually read a specific analytic article about it, sit down to have a think, and then discuss it with their peers.

...but they're bombarded with misinformation, online and in the dead-tree press headlines in every shop and newsstand. Eventually, it has an impact. And the result is a very widely held, completely incorrect belief, with barely any conscious or active thought or discussion put into it.
There is an immigration issue in the US, whether it's higher than before or not doesn't really matter.

Looks like US immigration is higher than it's been in over 100 years so... definitely higher.

[Attached image: chart of US immigration levels over the past century]
 

Silvanus

Phoenixmgs said:
There is an immigration issue in the US, whether it's higher than before or not doesn't really matter.
Whether it's rising, or whether you think it's an issue or not, is irrelevant to the point I was making.

When polled, British people tend to have a perception that about 25-33% of the population are immigrants. Even the lower end of that is twice as much as in reality. And they also tend to believe that refugees and asylum seekers are the largest groups, when in fact they're the smallest.

Are these the result of an article explicitly claiming those numbers, and people falling for it? No. Nobody explicitly claimed 25-33%. This is not a lie that was believed. But people have been presented with so much overwhelmingly negative press, stereotypes and misinfo for years that their passive perceptions are completely out of whack with reality.
 