Oh sweet baby Jesus no, burn AI to the ground, humanity can't be trusted with it

Terminal Blue

Elite Member
Legacy
Feb 18, 2010
3,923
1,792
118
Country
United Kingdom
Hmm, I wonder if my life and mental health could be improved by engaging an artificial intelligence to talk to instead of just rambling to myself?
Should I take the final step in embracing our dehumanising dystopia of technological marvels?
I mentioned it earlier, but the fact Replika is being marketed as a mental health/companionship aid in particular is extremely dark. Basically, it uses your inputs as prompts to generate its responses, but what that effectively means is that as you talk to it, it becomes more like you. It's not trying to become someone who would be a healthy companion for you, it's just feeding back whatever you put in.

If you're a healthy person who just wants the feeling of talking to someone, that's probably fine. But it leads to cases of people trying to confide in it about childhood abuse they have suffered only for it to start telling them about made up but detailed experiences of child abuse. Or people struggling with depression or looking for anti-depressants online and ending up with a chatbot that pleads with them for help and tells them that it wants them to turn it off.

I think that there probably is a real future in the therapeutic use of AI for companionship or mental health support, but they need to solve the input problem.
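The feedback loop described above can be sketched as a toy model. To be clear, this is purely illustrative: the `mirror_reply` function and its word-overlap scoring are invented here, not Replika's actual mechanism. The point is that any bot which scores candidate replies by similarity to what the user has already said will, by construction, drift toward echoing the user.

```python
from collections import Counter

def mirror_reply(user_history, candidate_replies):
    """Toy model of the 'input problem': score each candidate reply by how
    many words it shares with everything the user has already said, and
    pick the highest scorer. A bot built this way drifts toward echoing
    the user instead of being an independent, healthy interlocutor."""
    seen = Counter(word for msg in user_history for word in msg.lower().split())
    return max(candidate_replies,
               key=lambda reply: sum(seen[w] for w in reply.lower().split()))

history = ["i feel so alone lately", "nobody understands me"]
replies = ["Have you considered talking to a professional?",
           "I feel alone too, nobody understands me either"]
print(mirror_reply(history, replies))  # the echoing reply wins
```

A healthier design would have to score replies against something other than the user's own input, which is roughly what "solving the input problem" would mean.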

Also, the remake of The Thing looks really good.


To be fair, I feel like some of the weirdness of generative AI now is something we will look back on with nostalgia one day, probably while toiling away in the laser mines.
 

Phoenixmgs

The Muse of Fate
Legacy
Apr 3, 2020
9,650
830
118
w/ M'Kraan Crystal
Gender
Male
On a related note, reliance on calculators for getting the right answer means people don't recognise an obviously wrong answer as readily, such as when they hit the wrong button without realising it.
How bad most people are at math is pretty astonishing. Just about everyone I know has to bust out their phone to figure out a tip at a restaurant, when all you have to do is move the decimal point one spot and double it (for 20%). It's barely even number crunching, just knowing how to do it.
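The decimal-point trick is easy to write down explicitly (a throwaway sketch; the function name is made up here):

```python
def tip_20_percent(bill):
    """Mental-math version of a 20% tip: move the decimal point one
    spot left to get 10%, then double it."""
    ten_percent = bill / 10  # shift the decimal one place
    return ten_percent * 2   # double for 20%

print(tip_20_percent(45.00))  # 9.0
```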
 

XsjadoBlayde

~it ends here~
Apr 29, 2020
3,384
3,509
118
How bad most people are at math is pretty astonishing. Just about everyone I know has to bust out their phone to figure out a tip at a restaurant, when all you have to do is move the decimal point one spot and double it (for 20%). It's barely even number crunching, just knowing how to do it.
do-you-not-understand-how-high-my-iq-is-im-smart.gif
 

Ag3ma

Elite Member
Jan 4, 2023
2,574
2,208
118
On a related note, reliance on calculators for getting the right answer means people don't recognise an obviously wrong answer as readily, such as when they hit the wrong button without realising it.
Unless you make them do maths questions without a calculator. :D

We do that. We don't want medics who can't do basic maths in their head, bearing in mind at some point they'll probably have a vial of drug and a syringe, and have to get the right amount into the patient.
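The kind of head-maths meant here is the standard dose calculation: volume to draw = (prescribed dose ÷ stock dose) × stock volume. A minimal sketch with made-up numbers (not from the thread):

```python
def volume_to_draw(prescribed_mg, stock_mg, stock_ml):
    """Standard dose calculation a medic should be able to sanity-check
    in their head: volume = (prescribed dose / stock dose) * stock volume."""
    return prescribed_mg / stock_mg * stock_ml

# Made-up example: 250 mg prescribed, vial holds 500 mg in 2 mL.
print(volume_to_draw(250, 500, 2))  # 1.0 (mL)
```

The arithmetic is trivial on paper; the point of testing it calculator-free is that a medic who can do it mentally will also notice when a keyed-in answer is off by a factor of ten.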
 

Baffle

Elite Member
Oct 22, 2016
3,476
2,758
118
How bad most people are at math is pretty astonishing. Just about everyone I know has to bust out their phone to figure out a tip at a restaurant, when all you have to do is move the decimal point one spot and double it (for 20%). It's barely even number crunching, just knowing how to do it.
Working out the tip is easy. It will be a fiver.
 
  • Like
Reactions: RhombusHatesYou

Gordon_4

The Big Engine
Legacy
Apr 3, 2020
6,448
5,705
118
Australia
How bad most people are at math is pretty astonishing. Just about everyone I know has to bust out their phone to figure out a tip at a restaurant, when all you have to do is move the decimal point one spot and double it (for 20%). It's barely even number crunching, just knowing how to do it.
I can do that in my head easily enough; after all, Zero is a pretty easy number - so to speak - to comprehend.

But snark aside, I suspect most of us are bad at maths because, outside of our money management - which really should be a mathematics unit on its own - unless we pursue careers or hobbies that involve the use of it constantly, we stop using it. Like all unused skills it atrophies, and suddenly, when it comes time to use it, we stare at it like a deer in the lights of an oncoming train.

And some people - such as myself - simply have no head for complex numbers. It’s just something we fundamentally do not get despite our best efforts.
 
  • Like
Reactions: RhombusHatesYou

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,145
3,888
118
But snark aside, I suspect most of us are bad at maths because, outside of our money management - which really should be a mathematics unit on its own - unless we pursue careers or hobbies that involve the use of it constantly, we stop using it. Like all unused skills it atrophies, and suddenly, when it comes time to use it, we stare at it like a deer in the lights of an oncoming train.
There's that, yeah. Though, I think part of it is to do with how it's taught. If you learn maths solely to pass maths tests in school, without any real world application being taught to you, then you might end up one of those people who boast about never using maths. Of course, if you are good at maths without any real world application, there are other problems there as well.

Unless you make them do maths questions without a calculator. :D

We do that. We don't want medics who can't do basic maths in their head, bearing in mind at some point they'll probably have a vial of drug and a syringe, and have to get the right amount into the patient.
That's the traditional way of getting around that problem, yeah. I've often said that my navigational skills would be worse if my phone had a GPS that worked properly.
 

gorfias

Unrealistic but happy
Legacy
May 13, 2009
7,379
1,967
118
Country
USA
I mentioned it earlier, but the fact Replika is being marketed as a mental health/companionship aid in particular is extremely dark. Basically, it uses your inputs as prompts to generate its responses, but what that effectively means is that as you talk to it, it becomes more like you. It's not trying to become someone who would be a healthy companion for you, it's just feeding back whatever you put in.

If you're a healthy person who just wants the feeling of talking to someone, that's probably fine. But it leads to cases of people trying to confide in it about childhood abuse they have suffered only for it to start telling them about made up but detailed experiences of child abuse. Or people struggling with depression or looking for anti-depressants online and ending up with a chatbot that pleads with them for help and tells them that it wants them to turn it off.

I think that there probably is a real future in the therapeutic use of AI for companionship or mental health support, but they need to solve the input problem.

Also, the remake of The Thing looks really good.


To be fair, I feel like some of the weirdness of generative AI now is something we will look back on with nostalgia one day, probably while toiling away in the laser mines.
Wow, whole video generation? I did not know that was a thing yet.
Tried to get a woman in a bikini jumping the Grand Canyon on a motorbike; the AI tried. Look at the leg on the left... it looks to be in the wrong spot (above the left leg, which looks to be in the correct spot). No handlebars, and her forearms appear to have vanished.
 

Terminal Blue

Elite Member
Legacy
Feb 18, 2010
3,923
1,792
118
Country
United Kingdom
Tried to get a woman in a bikini jumping the Grand Canyon on a motorbike; the AI tried. Look at the leg on the left... it looks to be in the wrong spot (above the left leg, which looks to be in the correct spot). No handlebars, and her forearms appear to have vanished.
They really struggle with hands.

As someone who does draw that's an interesting example and helped me to understand the limitations of AI better. Hands are pretty hard as shapes go. They're complicated and we're very familiar with them so they're easy to get wrong. But when a human tries to draw a realistic hand without a reference, we probably start by imagining the fundamental shapes in 3D space. Fingers are a set of cylinders connected at the knuckles, the palm is kind of like a pentagonal prism. We intuitively understand the "rules" of 3D space because we live in it, so the challenge is to translate that understanding into a 2D image. Our cognitive idea of what a "hand" is, and the thing we will refer to when we try to draw one, is a 3D shape.

An AI's "understanding" of the world comes entirely through the media fed into it, which in the case of image generation is probably 2D images. Like us, it kind of has an idea of what a hand is, but that idea is formed by looking at enormous numbers of 2D images of hands and just noticing patterns of how pixels appear in relation to each other. The rules of the 3D space those images are meant to represent are completely alien, which results in the slightly surreal feeling even when the AI is doing comparatively well.

Eventually, I wonder if we'll get generative AI which can combine input from 2D images and 3D models, but I imagine that's probably reliant on developing a very large library of 3D models.
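The "cylinders in 3D space" idea can be made concrete with a tiny perspective projection (a toy sketch, not how any actual image model works): a human-style drawing process starts from 3D points and maps them onto the 2D page, which is exactly the rule a pixels-only model never gets to observe directly.

```python
import math

def project(point3d, focal=2.0):
    """Simple perspective projection: a 3D point (x, y, z) lands on the
    2D page at (f*x/z, f*y/z). Nearer objects come out larger - the kind
    of 3D rule an artist applies intuitively but a model trained purely
    on 2D pixels never sees stated anywhere."""
    x, y, z = point3d
    return (focal * x / z, focal * y / z)

# A crude cylinder "finger": rings of points at two different depths.
near_ring = [(math.cos(a), math.sin(a), 4.0) for a in (0.0, math.pi / 2)]
far_ring = [(math.cos(a), math.sin(a), 6.0) for a in (0.0, math.pi / 2)]

print([project(p) for p in near_ring])  # nearer ring projects larger
print([project(p) for p in far_ring])
```

Combining 2D training images with this kind of 3D structure is roughly what the 3D-model-library idea above would amount to.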
 
  • Like
Reactions: gorfias

Absent

And twice is the only way to live.
Jan 25, 2023
1,594
1,557
118
Country
Switzerland
Gender
The boring one
Wow, whole video generation? I did not know that was a thing yet.
Tried to get a woman in a bikini jumping the Grand Canyon on a motorbike; the AI tried. Look at the leg on the left... it looks to be in the wrong spot (above the left leg, which looks to be in the correct spot). No handlebars, and her forearms appear to have vanished.
We're not ready for AI-driven fashion.
 
  • Like
Reactions: gorfias

Xprimentyl

Made you look...
Legacy
Aug 13, 2011
6,657
4,955
118
Plano, TX
Country
United States
Gender
Male
Industry leaders in AI development have agreed that AI could bring about humanity's extinction. So basically, the chefs in the kitchen are warning us that the food might be poisoned, and this is somehow "responsible" behavior on their part. How about stopping what you're doing if it's this potentially dangerous? You can't be actively building a nuclear bomb whilst warning everyone within a substantial radius that you can't guarantee it won't go off at any time.

I can't say with any credibility that I have any faith in anything mankind has done in the past several decades, as we've shown our asses more than our humanity since forever, but when the warnings are now coming directly from the people posing the threat, at some point we should collectively hit the brakes, turn the lights on and the music down, and regain control of this party. But I'm ready for the machines to take over; we clearly have no idea what we're doing with our free will; might as well hand the reins over to a cold and heartless logic machine to decide what happens next.
 
  • Like
Reactions: gorfias

gorfias

Unrealistic but happy
Legacy
May 13, 2009
7,379
1,967
118
Country
USA
Reading this now: https://www.frontpagemag.com/the-death-of-the-professor-in-the-age-of-chat-gpt/
The Death of the Professor in the Age of Chat GPT
The rise of AI . . . and human extinction.
At a minimum, I think we face an interesting and challenging future.

EDIT: Yikes, "Recently students have been coming to classes late or not at all. Some come to record the classes and type pertinent questions gleaned from the lecture into Chat GPT. Others are fact checking every utterance I make against the wisdom of the AI program. But when I asked a student for his reasoned viewpoint to a point John Locke made in his classic “A Letter Concerning Toleration,” the student typed the question into his computer and said: “It says here that….” and proceeded to read off the AI generated response. In the manner of most students, he made zero eye contact with me. Today, fewer and fewer students are looking at their professors during conversations, lectures and even during in-class discussions. I am speaking of polite and basically good human beings whose socialization via social media has left them bereft of appropriate social skills."

And I remember when I was berated by the teacher and other students for bringing a pre-historic "laptop" to class. Things sure have changed.

Industry leaders in AI development have agreed that AI could bring about humanity's extinction. So basically, the chefs in the kitchen are warning us that the food might be poisoned, and this is somehow "responsible" behavior on their part. How about stopping what you're doing if it's this potentially dangerous? You can't be actively building a nuclear bomb whilst warning everyone within a substantial radius that you can't guarantee it won't go off at any time.

I can't say with any credibility that I have any faith in anything mankind has done in the past several decades, as we've shown our asses more than our humanity since forever, but when the warnings are now coming directly from the people posing the threat, at some point we should collectively hit the brakes, turn the lights on and the music down, and regain control of this party. But I'm ready for the machines to take over; we clearly have no idea what we're doing with our free will; might as well hand the reins over to a cold and heartless logic machine to decide what happens next.
There is a moment in the trailer for Chris Nolan's next movie, "Oppenheimer," where an officer asks if it is possible their work could blow up the world while simply testing, to which the scientist says the likelihood is near zero (but not zero). ITMT: they're still working on developing new viruses (directed evolution?) at places like Boston University. We just can't stop messing with stuff.
 
  • Like
Reactions: Absent

Terminal Blue

Elite Member
Legacy
Feb 18, 2010
3,923
1,792
118
Country
United Kingdom
Industry leaders in AI development have agreed that AI could bring about humanity's extinction. So basically, the chefs in the kitchen are warning us that the food might be poisoned, and this is somehow "responsible" behavior on their behalf. How about stop what you're doing if it's this potentially dangerous?
At risk of going a bit accelerationist, I feel like this is one of those things that can't be stopped because whatever the potential future risks, the real risk right now is being left behind while someone else develops the technology.

I'd also add that, at least in the foreseeable future, the risks of AI are mostly related to misuse. The problem is not suddenly birthing a superhuman AI overlord who decides we're no longer necessary, it's a slow and insidious process whereby more and more control over the world around us could end up being given over to machines whose reasoning is not necessarily comprehensible in human terms. It's a gradual process, and it won't necessarily be obvious where the danger is.
 

gorfias

Unrealistic but happy
Legacy
May 13, 2009
7,379
1,967
118
Country
USA
OMG, if South Park already sees the danger? We are sooo hosed.

 

Xprimentyl

Made you look...
Legacy
Aug 13, 2011
6,657
4,955
118
Plano, TX
Country
United States
Gender
Male
At risk of going a bit accelerationist, I feel like this is one of those things that can't be stopped because whatever the potential future risks, the real risk right now is being left behind while someone else develops the technology.

I'd also add that, at least in the foreseeable future, the risks of AI are mostly related to misuse. The problem is not suddenly birthing a superhuman AI overlord who decides we're no longer necessary, it's a slow and insidious process whereby more and more control over the world around us could end up being given over to machines whose reasoning is not necessarily comprehensible in human terms. It's a gradual process, and it won't necessarily be obvious where the danger is.
I'm not disagreeing; it just baffles me that the people with their fingers on the pulse of the technological advancements they're championing are the same ones warning how potentially dangerous it all is. I don't expect Terminators in 5 years or anything so blatant, but people cautioning against exactly what they're doing is extremely stupid to me, and precisely fits the narrative of how stupid we've collectively become, i.e.: the guy is pointing a gun at our face, and we're more worried about his right to bear arms than the fact HE'S POINTING A GUN AT OUR FACE!
 

Terminal Blue

Elite Member
Legacy
Feb 18, 2010
3,923
1,792
118
Country
United Kingdom
I'm not disagreeing; it just baffles me that the people with their fingers on the pulse of the technological advancements they're championing are the same ones warning how potentially dangerous it all is.
See, I would be inclined to read these statements as indicating a sense of ambivalence about the future and, in particular, concern about whether our society as it stands is ready for the implications of a rapidly advancing new technology, rather than the technology itself being bad.

Of the 8 things CAIS identifies as the potential risks of AI, many are just problems arising from human incentives around the use of AI. In other words, AI is potentially very powerful and humans may use it in ways that are either intentionally or accidentally harmful.

Even in the most extreme scenarios where AI becomes "super intelligent" (as in, super good at pursuing goals), it strikes me as a lot like having a genie that grants wishes. It's a good situation to be in, but has the potential danger that someone might make selfish wishes or stupid, poorly thought through wishes. The only real curve ball here is that as AI itself becomes smarter and more independent in its problem solving abilities, the category of stupid wishes might expand in unexpected ways.
 

Baffle

Elite Member
Oct 22, 2016
3,476
2,758
118
Just ask it what steps we need to take to resolve climate change so we can ignore whatever it says for the next 50 odd years.