Safety experts quit OpenAI

thebobmaster

Elite Member
Legacy
Apr 5, 2020
2,564
2,474
118
Country
United States

Link doesn't seem to be embedding, but here's the TL;DR if you don't want to click:

Apparently, after Scarlett Johansson refused to lend her voice to OpenAI for its "Sky" voice, she was told by family and friends that the voice they used was incredibly similar to hers. When she contacted them through legal counsel, asking them to detail the process they used to develop the voice, they responded by taking it down.
 

The Rogue Wolf

Stealthy Carnivore
Legacy
Nov 25, 2007
16,873
9,553
118
Stalking the Digital Tundra
Gender
✅
Apparently, after Scarlett Johansson refused to lend her voice to OpenAI for its "Sky" voice, she was told by family and friends that the voice they used was incredibly similar to hers. When she contacted them through legal counsel, asking them to detail the process they used to develop the voice, they responded by taking it down.
"It's easier to ask for forgiveness than permission" tends to fall apart once lawyers get involved.
 

Agema

Do everything and feel nothing
Legacy
Mar 3, 2009
9,215
6,485
118
"It's easier to ask for forgiveness than permission" tends to fall apart once lawyers get involved.
The trick being that many people do not have the time and resources to get the lawyers involved.

If we learnt anything from Uber, it's that dedication to breaking laws and regulations faster than the authorities can keep up is a great way to grow your business.
 

Phoenixmgs

The Muse of Fate
Legacy
Apr 3, 2020
9,632
830
118
w/ M'Kraan Crystal
Gender
Male
But, while that is a known problem as it is, AI is being used for things that are bigger, more exciting, and where users are less likely to see that it's incorrect. You can argue that the problem lies in the stupid monkeys trusting the magic 8 ball, if you like.
You can apply the same logic to much bigger and more complex uses of AI than a simple math problem. For example, if we have AI completely control the stock market (make all the trades) and the AI discovers what is basically an infinite money glitch, the AI is not going to know it discovered an infinite money glitch (while a person would), and it would just continue making trade after trade.
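To sketch that in code (a toy example with made-up names, nothing to do with any real trading system): the only reason the loop below ever stops is that a human explicitly wrote in the kind of "too good to be true" check a person would apply instinctively.

```python
# Toy sketch only; ToyBroker and its "glitch" are entirely hypothetical.
class ToyBroker:
    """Simulated market with a deliberate 'infinite money glitch'."""
    def __init__(self):
        self.equity = 1000.0

    def execute_trade(self):
        # The glitch: every single trade somehow returns +5%.
        self.equity *= 1.05

def run_agent(broker, steps=200, sane_daily_return=0.01):
    start = broker.equity
    for step in range(steps):
        broker.execute_trade()  # the "AI" just keeps optimizing its objective
        # Without this explicit sanity check, nothing in the loop ever
        # "notices" that the returns are impossible:
        if broker.equity / start - 1.0 > 100 * sane_daily_return:
            print(f"Step {step}: returns implausibly high "
                  f"({broker.equity / start - 1.0:.0%}); halting for human review.")
            return
    print("Completed all trades without tripping the sanity check.")

run_agent(ToyBroker())
```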
 

Schadrach

Elite Member
Legacy
Mar 20, 2010
2,173
421
88
Country
US
What AI safety?
For the most part, "AI safety" currently means making sure your models do not give responses that might embarrass the company that created them, even if those responses are specifically requested. A large portion of that is making sure the model doesn't say anything that might upset moderate-to-progressive social media, because upsetting them is worse for your image than carefully avoiding a straight answer to certain questions in a way the right-wingers are going to mock.

It's why certain LLMs would do things like happily produce jokes about men, but launch into a discussion of how it's inappropriate to make jokes at the expense of people based on their sex if asked to produce jokes about women. That one has since been improved upon: now they will produce jokes about women, but generally only positive ones.

"Understanding" is a tricky thing. If you ever look at the learning outcomes for a university course for what a passing student should be able to achieve, they very conspicuously do not (or should not!) use the word "understand". How do you measure understanding? For instance, ChatGPT can write higher education essays to a standard that will get a pass, even good marks. So can students also compile words together in a sense that is informationally accurate and makes sense, without ever really "understanding" the topic. How do we discriminate between the ability to memorise a load of information to splurge back at a marker, and actual understanding? We can measure tasks like applying knowledge, evaluating knowledge, using judgement, or synthesis of new information, but none of these are quite the same as "understanding". And an AI can do this stuff too. Not perhaps through the same processes and mechanisms as human thought, but they can do it.
Isn't this basically just the Chinese Room thought experiment in all but name, except now we have a thing that actually exists and is coming awfully close to being the Chinese Room?

We could expand this idea to aliens: imagine they arrived in spacecraft above Earth to observe/meet/invade. They would almost certainly not think like us, or like us with some exaggerated psychology the way Star Trek / Star Wars aliens do. But we would almost certainly have to credit them with being "intelligent".
If you want some science fiction with aliens that are actually different from us in a meaningful way, check out the Crystal Trilogy by Max Harms (https://crystalbooks.ai/), a story told from the perspective of a freshly instantiated AI existing in a quantum computer alongside several others like it. It also involves an alien species that is very different from us, and a not-quite-perfect method of translating communications that doesn't help matters. So: lots of worries about AI safety and how it could go horribly wrong, coupled with sufficiently inhuman aliens.

Also Three Worlds Collide by Eliezer Yudkowsky (https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8), which has very inhuman aliens with very inhuman views of the world, a humanity whose views noticeably differ from our own (just how different doesn't come up until well into the story, with the intent of making the reader view the story's humanity much the way its humans view the crystalline aliens), and multiple endings. Yudkowsky is also kinda obsessed with AI safety, though it's not directly the topic of this story. It also features not-totally-accurate automatic translators; the first transmission from the crystalline aliens gets translated as follows:

THIS VESSEL IS THE OPTIMISM OF THE CENTER OF THE VESSEL PERSON
YOU HAVE NOT KICKED US
THEREFORE YOU EAT BABIES
WHAT IS OURS IS YOURS, WHAT IS YOURS IS OURS
Apparently, after Scarlett Johansson refused to lend her voice to OpenAI for its "Sky" voice, she was told by family and friends that the voice they used was incredibly similar to hers. When she contacted them through legal counsel, asking them to detail the process they used to develop the voice, they responded by taking it down.
Easier to take it down than to fight a legal battle of any kind. Now they just need to find a voice that sounds comfortably human and pleasant but doesn't sound anything like any existing human who can afford a lawyer. Or at the very least, if a person refuses to be sampled for your voice AI, make sure whatever you use as a replacement sounds nothing like them.
 
  • Like
Reactions: Phoenixmgs
Jun 11, 2023
2,894
2,125
118
Country
United States
Gender
Male
What is the danger of AI when it has to do what we program it to do? It's no more than a computer program, like, say, Microsoft Word.
That's really oversimplifying things. AFAIK MS Word isn't doing any of these things - committing financial fraud, controlling weapons, spreading disinformation, or breaking cybersecurity.

And implying that human intent behind these new tools makes them safe in the first place is kinda absurd.
 
  • Like
Reactions: BrawlMan

Phoenixmgs

The Muse of Fate
Legacy
Apr 3, 2020
9,632
830
118
w/ M'Kraan Crystal
Gender
Male
That's really oversimplifying things. AFAIK MS Word isn't doing any of these things - committing financial fraud, controlling weapons, spreading disinformation, or breaking cybersecurity.

And implying that human intent behind these new tools makes them safe in the first place is kinda absurd.
I kinda just mentioned the financial fraud part in my last post, about giving stock market control to AIs. As far as weapons controlled by AI go, the AI will do exactly what it is programmed to do. It's not going to be like a sci-fi story where the AI is tired of being basically a slave and DECIDES to revolt against humans. There's already massive disinformation with humans alone. The AI can't really create new information or disinformation; it writes stuff based on what humans have already written, or on what other AIs have already written from humans (so it becomes a copy of a copy of a copy). It can't create something truly new, only regurgitate what has already been written. Sure, AI can accelerate the spread of disinformation, but bots can already accomplish that, and they aren't considered AI because of how simplistic they are. The danger is how fast disinformation can spread, and it's already at basically max velocity without AI. Even as far as cybersecurity goes, a rather simplistic DDoS attack (which doesn't require advanced AI) is basically unstoppable.

AI is as dangerous now as it will be in the future. It's not like some new AI is going to be created or advanced in such a way that it becomes an existential threat to humans; AI is as dangerous now as it will be in 50 or 100 years. As long as humans remember that you can't truly depend on AI for complex things and that there need to be safeguards in place, AI will never be a huge danger.
 
Jun 11, 2023
2,894
2,125
118
Country
United States
Gender
Male
As long as humans remember that you can't truly depend on AI for complex things and that there need to be safeguards in place, AI will never be a huge danger.
The human elite in charge have already wiped their asses with the above, so no. The threat will continue, and the danger will become increasingly clear and present.
 
  • Like
Reactions: BrawlMan

CaitSeith

Formely Gone Gonzo
Legacy
Jun 30, 2014
5,374
381
88
The AI can't really create new information or disinformation; it writes stuff based on what humans have already written, or on what other AIs have already written from humans (so it becomes a copy of a copy of a copy). It can't create something truly new, only regurgitate what has already been written.
New information? No. New disinformation? Yes, because the only thing you need to create disinformation is to twist existing information, and that's something LLMs do frequently even when one doesn't order them to (these are called hallucinations).

As long as humans remember that you can't truly depend on AI for complex things and that there need to be safeguards in place, AI will never be a huge danger.
Complex things like driving a car?
 
  • Like
Reactions: BrawlMan

Phoenixmgs

The Muse of Fate
Legacy
Apr 3, 2020
9,632
830
118
w/ M'Kraan Crystal
Gender
Male
The human elite in charge have already wiped their asses with the above, so no. The threat will continue, and the danger will become increasingly clear and present.
No they haven't.

New information? No. New disinformation? Yes, because the only thing you need to create disinformation is to twist existing information, and that's something LLMs do frequently even when one doesn't order them to (these are called hallucinations).


Complex things like driving a car?
And AI can create disinformation that is as believable as (or more believable than) human-created disinformation? It seems far easier for humans to deceive humans and then use computer programs to proliferate said disinformation than to have an AI create the disinformation itself.

Self-driving cars are basically dead. They can only drive somewhat well in very controlled and fixed environments (e.g. nice sunny days).
 

Agema

Do everything and feel nothing
Legacy
Mar 3, 2009
9,215
6,485
118
For the most part, "AI safety" currently means making sure your models do not give responses that might embarrass the company that created them
That's obviously not what the people who were involved in setting up OpenAI (with its safety-conscious remit) and subsequently quit think it's about.

Isn't this basically just the Chinese Room thought experiment in all but name, except now we have a thing that actually exists and is coming awfully close to being the Chinese Room?
The Chinese Room is something I only know the basics of, so I may not be able to discuss it in sufficient depth. My point is perhaps related to it, in the sense of asking "What is understanding?", because I fear it's a more nebulous and elusive concept than we might think. I cannot help but wonder if there's a sort of anthropocentrism to it. Perhaps we can say that we humans understand like humans, but what if there are other ways to understand? Does (human) understanding actually matter if the outputs of a "mind" / "program" are accurate and useful, and why not consider that an alternative form of (non-human) understanding?

If you want some science fiction with aliens that are actually different from us in a meaningful way, check out the Crystal Trilogy by Max Harms (https://crystalbooks.ai/), a story told from the perspective of a freshly instantiated AI existing in a quantum computer alongside several others like it. It also involves an alien species that is very different from us, and a not-quite-perfect method of translating communications that doesn't help matters. So: lots of worries about AI safety and how it could go horribly wrong, coupled with sufficiently inhuman aliens.
Thank you for the suggestions. There are others - Adrian Tchaikovsky's "Children of Memory" has some technologically evolved crows, and much of the book attempts to address whether they are sapient or not. This is the third in a series, the second of which ("Children of Ruin") attempts to imagine what an octopus might think like (again, technologically evolved to greater intelligence). In terms of AI, there's "I Still Dream" by James Smythe, which involves how to create one, and the potential pitfalls.

Imagining how aliens might think is extremely difficult, and most SF authors basically don't bother.
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,133
3,873
118
Also Three Worlds Collide by Eliezer Yudkowsky (https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8), which has very inhuman aliens with very inhuman views of the world, a humanity whose views noticeably differ from our own (just how different doesn't come up until well into the story, with the intent of making the reader view the story's humanity much the way its humans view the crystalline aliens), and multiple endings. Yudkowsky is also kinda obsessed with AI safety, though it's not directly the topic of this story. It also features not-totally-accurate automatic translators; the first transmission from the crystalline aliens gets translated as follows:
Oh, I second this recommendation.
 

Silvanus

Elite Member
Legacy
Jan 15, 2013
12,096
6,376
118
Country
United Kingdom
On a similar theme-- entirely alien forms of understanding/thought/communication, and human difficulty interacting with it-- I would recommend Blindsight by Peter Watts.
 

CaitSeith

Formely Gone Gonzo
Legacy
Jun 30, 2014
5,374
381
88
And AI can create disinformation that is as believable as (or more believable than) human-created disinformation?
Ah, I see. My mistake; I was mixing up disinformation with misinformation. ChatGPT's hallucinations easily create misinformation worded with believable certainty (the result of LLMs' capacity to imitate the presentation of factual information without being able to catalogue the facts themselves). I suppose it's only disinformation when used with malicious intent. However, the reason I confused them is that I consider the consequences of misinformation just as harmful as those of disinformation (in a hyperbolic comparison, the latter would be arson, the former a wildfire).

The thing is that disinformation doesn't need to be creative or even believable to convince the audience usually targeted by grifters and the like. I could try an experiment: ask ChatGPT to tell me why now is the best moment to buy GameStop stock, copy-paste the answer into the superstonk subreddit, and see how many people agree with it and how many spot the factual errors.
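The generation half of that experiment would only take a few lines of Python. A rough sketch, assuming the openai client library (the model name and prompt wording here are just illustrative); spotting the factual errors remains the human part:

```python
# Rough sketch of the generation step only; assumes `pip install openai`
# and an OPENAI_API_KEY in the environment. Model name is illustrative.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Write a confident post explaining why right now is "
                   "the best moment to buy GameStop stock.",
    }],
)
# Paste the output somewhere, then count who spots the errors.
print(response.choices[0].message.content)
```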
 

Phoenixmgs

The Muse of Fate
Legacy
Apr 3, 2020
9,632
830
118
w/ M'Kraan Crystal
Gender
Male
Ah, I see. My mistake; I was mixing up disinformation with misinformation. ChatGPT's hallucinations easily create misinformation worded with believable certainty (the result of LLMs' capacity to imitate the presentation of factual information without being able to catalogue the facts themselves). I suppose it's only disinformation when used with malicious intent. However, the reason I confused them is that I consider the consequences of misinformation just as harmful as those of disinformation (in a hyperbolic comparison, the latter would be arson, the former a wildfire).

The thing is that disinformation doesn't need to be creative or even believable to convince the audience usually targeted by grifters and the like. I could try an experiment: ask ChatGPT to tell me why now is the best moment to buy GameStop stock, copy-paste the answer into the superstonk subreddit, and see how many people agree with it and how many spot the factual errors.
Misinformation is such a loaded term; technically, you can say just about anything outside of super basic facts is misinformation at this point. It would be very hard to run any kind of news story that isn't something super basic like a murder or a car crash (some basic, objective event) without it counting as misinformation, no matter how hard you are trying to legitimately inform people. Disinformation has to make logical sense and be believable for the majority of the population to believe it. An AI would be horrible at a social deduction game, for example.
 

CaitSeith

Formely Gone Gonzo
Legacy
Jun 30, 2014
5,374
381
88
Disinformation has to make logical sense and be believable for the majority of the population to believe it.
True, if your target were the majority of the population. But that frequently isn't the case for grifters, cults and extremists.
 

Phoenixmgs

The Muse of Fate
Legacy
Apr 3, 2020
9,632
830
118
w/ M'Kraan Crystal
Gender
Male
True, if your target were the majority of the population. But that frequently isn't the case for grifters, cults and extremists.
I don't see how AI is going to make it worse; you already have people who believe the world is flat or that 5G was spreading COVID (that was a thing, right?). And there are already tons of phishing attempts and scam callers (which AI is probably horrible at, like the aforementioned social deduction game). I guess someone could have AI build a webpage that looks like Amazon, making that phish easier to pull off than it would be for a normal person. But if it becomes that widespread, everyone will learn to just go to Amazon.com or banksite.com directly to do their banking, because the whole population is being inundated with basically the same grift. It's like being inundated with warnings that don't really matter: you train yourself to ignore the frivolous ones, assume they're all pointless, and then miss a legitimately important one. At that point any email or text from Amazon gets ignored whether it's legit or not, and the grift is basically useless.
 

Trunkage

Nascent Orca
Legacy
Jun 21, 2012
9,049
3,037
118
Brisbane
Gender
Cyborg
I don't see how AI is going to make it worse; you already have people who believe the world is flat or that 5G was spreading COVID (that was a thing, right?). And there are already tons of phishing attempts and scam callers (which AI is probably horrible at, like the aforementioned social deduction game). I guess someone could have AI build a webpage that looks like Amazon, making that phish easier to pull off than it would be for a normal person. But if it becomes that widespread, everyone will learn to just go to Amazon.com or banksite.com directly to do their banking, because the whole population is being inundated with basically the same grift. It's like being inundated with warnings that don't really matter: you train yourself to ignore the frivolous ones, assume they're all pointless, and then miss a legitimately important one. At that point any email or text from Amazon gets ignored whether it's legit or not, and the grift is basically useless.
I would point out that older people are generally the targets because they aren't as technologically savvy....

And then you realise that we are going to be there in 40 years, not keeping up with the newest scam and getting swindled
 

Agema

Do everything and feel nothing
Legacy
Mar 3, 2009
9,215
6,485
118
And then you realise that we are going to be there in 40 years, not keeping up with the newest scam and getting swindled
Not me: statistically, in 40 years I'll be dead.