Safety experts quit OpenAI

Agema

Do everything and feel nothing

Surprising no-one at all.

The boardroom counter-coup against the removal of Sam Altman last year was in all likelihood really about ensuring that OpenAI's products were more easily commercially available, and about deliberately dampening the safety-conscious remit of the organisation. Money talks.
 

Gergar12

Elite Member
If the US/OpenAI doesn't get there first, China will.
 

Agema

Do everything and feel nothing
How did you recover your account? Is there actually someone handling admin stuff again?
Who knows, it just started working again. It might have started working 12+ months ago, for all I know - I've not been testing it. I only logged into this old one accidentally.
 

Phoenixmgs

The Muse of Fate
What AI safety? There is no such thing as AI, nor are we close to developing it.

Smells to me like they're just trying to get out before the AI bubble bursts.
 

Agema

Do everything and feel nothing
An AI does not understand anything; it merely simulates the behaviour of an intellect.
That depends on what intelligence is: there's no fixed definition.

What AI truly lacks is self-awareness - maybe volition is a useful term: it has no self-directed purpose. However, I do not think this is the same thing as intelligence. It lacks other things as well, but I am not sure they are sufficient grounds to say it is not intelligent.

"Understanding" is a tricky thing. If you ever look at the learning outcomes for a university course for what a passing student should be able to achieve, they very conspicuously do not (or should not!) use the word "understand". How do you measure understanding? For instance, ChatGPT can write higher education essays to a standard that will get a pass, even good marks. So can students also compile words together in a sense that is informationally accurate and makes sense, without ever really "understanding" the topic. How do we discriminate between the ability to memorise a load of information to splurge back at a marker, and actual understanding? We can measure tasks like applying knowledge, evaluating knowledge, using judgement, or synthesis of new information, but none of these are quite the same as "understanding". And an AI can do this stuff too. Not perhaps through the same processes and mechanisms as human thought, but they can do it.

If we look at animal studies, we can see that all sorts of different creatures have forms of intelligence: crows, dogs, octopuses, etc. But their cognitive processes are probably, for the most part, very different - a crow is in some ways an idiot compared to a dog, and vice versa. We could expand this idea to aliens: imagine they arrived in spacecraft above Earth to observe, meet, or invade us. They would almost certainly not think like us - they wouldn't be some Star Trek / Star Wars alien that is essentially a human with exaggerated psychology. But we would almost certainly have to credit them with being "intelligent". (Frankly, an octopus is probably very "alien" itself, if we really understood how it thought - we might at least assume a reasonable level of commonality amongst mammals.)

So, is AI actually "intelligent"? On balance, I would perhaps lean to "yes". But by human standards it is both a genius and an idiot, the latter because it is certainly lacking something.
 

Gergar12

Elite Member
Sure. But are they any good?

" More than a dozen Chinese generative AI chatbots were released after Ernie Bot. They are all pretty similar to their Western counterparts in that they are capable of conversing in text—answering questions, solving math problems (somewhat), writing programming code, and composing poems. Some of them also allow input and output in other forms, like audio, images, data visualization, or radio signals. "
 

Bedinsis

Elite Member
What is the danger of AI when it has to do what we program it to do? It's no more than a computer program, same as, say, Microsoft Word.
Cause it doesn't actually understand anything, so giving it too much leeway can make it behave in ways that are logical given its learning, but the learning in question might not always be applicable.
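
A toy sketch of what that can look like in practice (plain Python with made-up numbers - nothing to do with any real AI system): a "model" whose behaviour is perfectly logical given its learning, right up until the learning stops being applicable.

```python
# Hypothetical illustration: a line fitted to a few days of spring
# temperatures. Within its experience it is sensible; outside it, the
# same perfectly "logical" rule produces nonsense.

def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b to (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# "Training data": temperatures creeping up over five spring days.
spring_week = [(1, 10.0), (2, 11.1), (3, 12.0), (4, 13.2), (5, 14.1)]
a, b = fit_line(spring_week)

print(f"Day 6 forecast: {a * 6 + b:.1f} C")      # ~15.2 C - plausible
print(f"Day 365 forecast: {a * 365 + b:.1f} C")  # ~385 C - the rule no
# longer applies out here, but the model has no way to know that
```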
 

Agema

Do everything and feel nothing
Cause it doesn't actually understand anything, so giving it too much leeway can make it behave in ways that are logical given its learning, but the learning in question might not always be applicable.
What's so different from a human?

I mean, there are plenty of people I've debated with on the internet over the last 30 years who talk a load of colossal shit because they've never bothered to learn anything useful on the topics they want to discuss.
 

Phoenixmgs

The Muse of Fate
Cause it doesn't actually understand anything, so giving it too much leeway can make it behave in ways that are logical given its learning, but the learning in question might not always be applicable.
It's not actually learning; it just refers to data it has access to or is given in order to get the answer. The data could be bad, or the logic of how it chooses the data to pick as the answer could be bad. It's why AI will never be able to drive cars properly. AI is only as dangerous as the amount of faith you/we put in it. If a human fucks up a math problem in their head and gets a weird answer, the human knows that can't be right because it doesn't make sense, but the AI is never going to know that doesn't make sense.
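
As a caricature of that point (toy Python with an invented "knowledge base" - not how any real system works internally), the thing answering has no concept that a retrieved answer might be absurd:

```python
# Toy caricature: answers come straight from supplied data, with no
# sense-check. The knowledge base and its bad entry are entirely made up.

knowledge_base = {
    "boiling point of water (C)": 100,
    "speed of light (km/s)": 299_792,
    "height of Mt Everest (m)": 884_800,  # bad data: off by a factor of 100
}

def answer(question):
    # Just retrieves; never asks whether the value could possibly be right.
    return knowledge_base.get(question, "no idea")

# A human would balk at a mountain a hundred times taller than any real
# one. The lookup happily repeats whatever its data says.
print(answer("height of Mt Everest (m)"))  # 884800 - no flag raised
```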

It's not that unlike a video game with context-sensitive controls, like how Uncharted decides for you whether O is a roll or taking cover: you'll never be able to code that to determine with 100% accuracy what the player wants O to do at any given time.
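
Sketched out (hypothetical logic, not Naughty Dog's actual code), the ambiguity is easy to see - some situations simply don't determine the player's intent:

```python
# Hypothetical resolver for a context-sensitive button, Uncharted-style:
# one input ("O") must be mapped to an action from the player's situation.

def resolve_circle(near_cover: bool, sprinting: bool, under_fire: bool) -> str:
    # Heuristic priority order - necessarily a guess at player intent.
    if near_cover and under_fire:
        return "take cover"
    if sprinting:
        return "roll"
    if near_cover:
        return "take cover"
    return "roll"

# The failure case: sprinting past cover while under fire. Did the player
# want to dive into cover or roll onward? Both are defensible, so whichever
# branch wins will sometimes be the wrong call.
print(resolve_circle(near_cover=True, sprinting=True, under_fire=True))
# -> "take cover", even when the player meant to keep rolling past
```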
 

Thaluikhain

Elite Member
What is the danger of AI when it has to do what we program it to do? It's no more than a computer program, same as, say, Microsoft Word.
Well, yes, in and of itself the AI isn't dangerous. But it allows for new and exciting ways for technology to be abused, either intentionally (deepfakes came about to make fake porn of female celebs, for example) or because it's new and thus understood even less than usual by the clueless muppets in charge of things.

Now, for the latter at least, this is true:

If a human fucks up a math problem in their head and gets a weird answer, the human knows that can't be right because it doesn't make sense, but the AI is never going to know that doesn't make sense.
But, while that is a known problem as it is, AI is being used for things that are bigger, more exciting, and where the users are less likely to see that it's incorrect. You can argue that the problem lies in the stupid monkeys trusting the magic 8 ball, if you like.

For instance, ChatGPT can write higher education essays to a standard that will get a pass, even good marks.
Not saying this is wrong, but this has not been my experience with it at all.
 

Agema

Do everything and feel nothing
Not saying this is wrong, but this has not been my experience with it at all.
I would suggest a caveat - it's probably not at final-year degree standard (yet), but it can passably do the early years.

Admittedly, I suspect part of this is down to the quality of the prompt used to generate the script, and that what I'm looking at, marking, and referring for academic misconduct is probably not the unvarnished AI output. Most students are probably smart enough to review it and remove the most egregious errors AIs are known to make. Although I've seen some test outputs from a colleague, and I think they'd pass muster for a first or second year.
 

Thaluikhain

Elite Member
I would suggest a caveat - it's probably not at final-year degree standard (yet), but it can passably do the early years.

Admittedly, I suspect part of this is down to the quality of the prompt used to generate the script, and that what I'm looking at, marking, and referring for academic misconduct is probably not the unvarnished AI output. Most students are probably smart enough to review it and remove the most egregious errors AIs are known to make. Although I've seen some test outputs from a colleague, and I think they'd pass muster for a first or second year.
Well, I haven't asked it for essays, but for simple stuff like "Have any of the actors from Muppets Christmas Carol been in Lord of the Rings?" and "Do any 40k eldar vehicles have wheels?", to which there are fairly simple and objective answers, it came back totally wrong.