> Neither of those require the utter bollocks that is undergrad essays.

Yes and no. People still need to be able to write comprehensible information in a formal style, and degrees still need to be able to test their ability to do so.
> I mentioned it earlier, but the fact Replika is being marketed as a mental health/companionship aid in particular is extremely dark. Basically, it uses your inputs as prompts to generate its responses, but what that effectively means is that as you talk to it, it's becoming more like you. It's not trying to become someone who would be a healthy companion for you, it's just feeding back whatever you put in.

Hmm, I wonder if my life and mental health could be improved by engaging an artificial intelligence to talk to instead of just rambling to myself? Should I take the final step in embracing our dehumanising dystopia of technological marvels?
How bad most people are at math is pretty astonishing. Just about everyone I know has to bust out their phone to figure out a tip at a restaurant, when all you have to do is move the decimal point one spot and double it (for 20%), so it's barely even number crunching, just knowing how to do it.

On a related note, reliance on calculators to get the right answer means people don't recognise an obviously wrong answer as readily, such as when they hit the wrong button without realising it.
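The shift-and-double trick is easy to sanity-check. A quick sketch in Python (the bill amounts here are made up purely for illustration):

```python
def tip_20_percent(bill):
    """20% tip: move the decimal point one spot left (10%), then double it."""
    ten_percent = bill / 10  # shifting the decimal one place gives 10%
    return ten_percent * 2   # doubling 10% gives 20%

print(tip_20_percent(45.00))  # 9.0
```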
> but what that effectively means is that as you talk to it it's becoming more like you.

Oh god no.
Knowing pre-junior-high-school math is bragging about your IQ?
> On a related note, reliance on calculators to get the right answer means people don't recognise an obviously wrong answer as readily, such as when they hit the wrong button without realising it.

Unless you make them do maths questions without a calculator.
> How bad most people are at math is pretty astonishing. Just about everyone I know has to bust out their phone to figure out a tip at a restaurant, when all you have to do is move the decimal point one spot and double it (for 20%), so it's barely even number crunching, just knowing how to do it.

Working out the tip is easy. It will be a fiver.
> How bad most people are at math is pretty astonishing. Just about everyone I know has to bust out their phone to figure out a tip at a restaurant, when all you have to do is move the decimal point one spot and double it (for 20%), so it's barely even number crunching, just knowing how to do it.

I can do that in my head easily enough; after all, zero is a pretty easy number - so to speak - to comprehend.
> But snark aside, I suspect most of us are bad at maths because, outside of our money management - which really should be a mathematics unit on its own - unless we pursue careers or hobbies that involve the use of it constantly, we stop using it and, like all unused skills, it atrophies, and suddenly when it comes time to use it, we stare at it like a deer in the lights of an oncoming train.

There's that, yeah. Though I think part of it is to do with how it's taught. If you learn maths solely to pass maths tests in school, without any real-world application being taught to you, then you might end up one of those people who boast about never using maths. Of course, if you are good at maths without any real-world application, there are other problems there as well.
> Unless you make them do maths questions without a calculator.

That's the traditional way of getting around that problem, yeah. I've often said that my navigational skills would be worse if my phone had a GPS that worked properly.
We do that. We don't want medics who can't do basic maths in their head, bearing in mind that at some point they'll probably have a vial of a drug and a syringe, and have to get the right amount into the patient.
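That head maths is essentially the standard dose-to-volume calculation: the volume to draw up is the required dose divided by the vial's concentration. A minimal sketch, with invented numbers purely for illustration:

```python
def ml_to_draw(required_dose_mg, vial_concentration_mg_per_ml):
    """Volume (mL) to draw into the syringe = required dose / vial concentration."""
    return required_dose_mg / vial_concentration_mg_per_ml

# e.g. a 150 mg dose from a vial labelled 50 mg/mL means drawing up 3 mL
print(ml_to_draw(150, 50))  # 3.0
```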
> Wow, whole video generation? I did not know that was a thing yet.

I mentioned it earlier, but the fact Replika is being marketed as a mental health/companionship aid in particular is extremely dark. Basically, it uses your inputs as prompts to generate its responses, but what that effectively means is that as you talk to it, it's becoming more like you. It's not trying to become someone who would be a healthy companion for you, it's just feeding back whatever you put in.

If you're a healthy person who just wants the feeling of talking to someone, that's probably fine. But it leads to cases of people trying to confide in it about childhood abuse they have suffered, only for it to start telling them about made-up but detailed experiences of child abuse. Or people struggling with depression or looking for anti-depressants online and ending up with a chatbot that pleads with them for help and tells them that it wants them to turn it off.

I think that there probably is a real future in the therapeutic use of AI for companionship or mental health support, but they need to solve the input problem.
Also, the remake of The Thing looks really good.
To be fair, I feel like some of the weirdness of generative AI now is something we will look back on with nostalgia one day, probably while toiling away in the laser mines.
> Tried to get a woman in a bikini jumping the Grand Canyon on a motorbike; the AI tried. Look at the leg on the left... it looks to be in the wrong spot (above the left leg, which looks to be in the correct spot). No handlebars, and her forearms appear to have vanished.

They really struggle with hands.
> We're not ready for AI-driven fashion.

Wow, whole video generation? I did not know that was a thing yet.
Tried to get a woman in a bikini jumping the Grand Canyon on a motorbike; the AI tried. Look at the leg on the left... it looks to be in the wrong spot (above the left leg, which looks to be in the correct spot). No handlebars, and her forearms appear to have vanished.
View attachment 8930
> Industry leaders in AI development have agreed that AI could bring about humanity's extinction. So basically, the chefs in the kitchen are warning us that the food might be poisoned, and this is somehow "responsible" behavior on their behalf. How about stopping what you're doing if it's this potentially dangerous? You can't be actively building a nuclear bomb whilst warning everyone within a substantial radius that you can't guarantee it won't go off at any time.

There is a moment in the trailer for Christopher Nolan's next movie, "Oppenheimer", where an officer asks if it is possible their work could blow up the world while simply testing it, to which the scientist says the likelihood is near zero (but not zero). ITMT: they're still working on developing new viruses (directed evolution?) at places like Boston University. We just can't stop messing with stuff.
I can't say with any credibility that I have any faith in anything mankind has done in the past several decades, as we've shown our asses more than our humanity since forever. But when the warnings are now coming directly from the people posing the threat, at some point we should collectively hit the brakes, turn the lights on and the music down, and regain control of this party. Then again, I'm ready for the machines to take over; we clearly have no idea what we're doing with our free will, so we might as well hand the reins over to a cold and heartless logic machine to decide what happens next.
> Industry leaders in AI development have agreed that AI could bring about humanity's extinction. So basically, the chefs in the kitchen are warning us that the food might be poisoned, and this is somehow "responsible" behavior on their behalf. How about stopping what you're doing if it's this potentially dangerous?

At risk of going a bit accelerationist, I feel like this is one of those things that can't be stopped, because whatever the potential future risks, the real risk right now is being left behind while someone else develops the technology.
> At risk of going a bit accelerationist, I feel like this is one of those things that can't be stopped, because whatever the potential future risks, the real risk right now is being left behind while someone else develops the technology.

I'm not disagreeing, it just baffles me that the people with their fingers on the pulse of the technological advancements they're championing are the same people warning us how potentially dangerous it all is. I don't expect Terminators in 5 years or anything so blatant, but people cautioning against exactly what they're doing is extremely stupid to me, and precisely fits the narrative of how stupid we've collectively become, i.e. the guy is pointing a gun at our face, and we're more worried about his right to bear arms than the fact HE'S POINTING A GUN AT OUR FACE!
I'd also add that, at least in the foreseeable future, the risks of AI are mostly related to misuse. The problem is not suddenly birthing a superhuman AI overlord who decides we're no longer necessary; it's a slow and insidious process whereby more and more control over the world around us could end up being given over to machines whose reasoning is not necessarily comprehensible in human terms. It's a gradual process, and it won't necessarily be obvious where the danger is.
> I'm not disagreeing, it just baffles me that the people with their fingers on the pulse of the technological advancements they're championing are the same people warning us how potentially dangerous it all is.

See, I would be inclined to read these statements as indicating a sense of ambivalence about the future and, in particular, concern about whether our society as it stands is ready for the implications of a rapidly advancing new technology, rather than the technology itself being bad.