> But by human standards it is both a genius and an idiot, the latter because it is certainly lacking something.

Free will and reason, perhaps?
"It's easier to ask for forgiveness than permission" tends to fall apart once lawyers get involved.Apparently, after Scarlett Johansson refused to lend her voice to the "Sky" OpenAI, she was told by family and friends that the voice they used was incredibly similar to hers. When she contacted them through legal counsel to detail the process they used to develop the voice, they responded by taking it down.
> "It's easier to ask for forgiveness than permission" tends to fall apart once lawyers get involved.

The trick being that many people do not have the time and resources to get the lawyers involved.
> But, while that is a known problem as it is, AI is being used for things that are bigger, more exciting, and where users are less likely to notice when it's incorrect. You can argue that the problem lies in the stupid monkeys trusting the magic 8 ball, if you like.

You can apply the same logic to much bigger and more complex uses of AI than just a simple math problem. For example, if we have AI completely control the stock market (make all the trades) and the AI basically discovers an infinite money glitch, the AI is not going to know it discovered an infinite money glitch (while a person would), and it would just continue making trade after trade.
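To make that concrete, here is a rough, purely hypothetical Python sketch of the kind of plausibility check a human trader applies instinctively but a naive automated trading loop never does. Every function and number in it is invented for illustration; it is not any real trading API.

```python
# Purely hypothetical sketch: these functions and numbers do not refer to any
# real trading system; they just stand in for "the model picks a trade" and
# "the trade gets executed".

def propose_trade():
    """Stand-in for whatever model is choosing trades."""
    return {"symbol": "XYZ", "quantity": 100, "expected_profit": 1_000_000.0}

def execute_trade(trade):
    """Stand-in for actually sending an order to a market."""
    print(f"Executing {trade}")

def run(max_trades=1000, plausibility_cap=10_000.0):
    """A trading loop with the sanity check a human would apply instinctively."""
    for _ in range(max_trades):
        trade = propose_trade()
        # The naive loop would simply execute every proposed trade forever.
        # The safeguard notices results that look "too good to be true" and
        # halts for review instead of compounding a possible glitch.
        if trade["expected_profit"] > plausibility_cap:
            print("Implausibly large profit predicted; halting for human review.")
            break
        execute_trade(trade)

if __name__ == "__main__":
    run()
```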
> What AI safety?

For the most part, "AI safety" currently means making sure your models do not give responses that might embarrass the company that created them, even if those responses are specifically requested. A large portion of that is basically making sure the model doesn't say anything that might upset moderate-to-progressive social media, because upsetting them is worse for your image than carefully avoiding a straight answer to certain questions in a way that right-wingers are going to mock.
> "Understanding" is a tricky thing. If you ever look at the learning outcomes for a university course, describing what a passing student should be able to achieve, they very conspicuously do not (or should not!) use the word "understand". How do you measure understanding? For instance, ChatGPT can write higher education essays to a standard that will get a pass, even good marks. Students, too, can string words together in a way that is informationally accurate and makes sense, without ever really "understanding" the topic. How do we discriminate between the ability to memorise a load of information to spew back at a marker, and actual understanding? We can measure tasks like applying knowledge, evaluating knowledge, using judgement, or synthesis of new information, but none of these are quite the same as "understanding". And an AI can do this stuff too. Not perhaps through the same processes and mechanisms as human thought, but it can do it.

Isn't this basically just the Chinese Room thought experiment in other words, except that now we have a thing that actually exists and comes awfully close to being the Chinese Room?
> We could expand this idea to aliens: imagine they arrived in spacecraft above Earth to observe/meet/invade. They would almost certainly not think like us, or even like some Star Trek / Star Wars alien (which is really just exaggerated human psychology). But we would almost certainly have to credit them with being "intelligent".

If you want some science fiction with aliens that are actually different from us in a meaningful way, check out the Crystal Trilogy by Max Harms (https://crystalbooks.ai/). It's told from the perspective of a freshly instantiated AI existing in a quantum computer alongside several others like it, and it also involves an alien species that is very different from us, plus a not-quite-perfect method of translating communications that doesn't help matters. So lots of worries about AI safety and how it could go horribly wrong, coupled with sufficiently inhuman aliens.
Also Three Worlds Collide by Eliezer Yudkowsky (https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8), which has very inhuman aliens with very inhuman views of the world, a humanity whose views noticeably differ from our own (just how different doesn't come up until well into the story, with the intent of making the reader think about the story's humanity much the same way the humans in the story view the crystalline aliens), and multiple endings. Also, Yudkowsky is kinda obsessed with AI safety, though it's not directly the topic of this story. There are also not-totally-accurate automatic translators; the first transmission from the crystalline aliens gets translated as follows:

THIS VESSEL IS THE OPTIMISM OF THE CENTER OF THE VESSEL PERSON
YOU HAVE NOT KICKED US
THEREFORE YOU EAT BABIES
WHAT IS OURS IS YOURS, WHAT IS YOURS IS OURS
> Apparently, after Scarlett Johansson refused to lend her voice to OpenAI's "Sky", she was told by family and friends that the voice they used was incredibly similar to hers. When she contacted them through legal counsel, asking them to detail the process they used to develop the voice, they responded by taking it down.

Easier to take it down than to have to fight a legal battle of any kind. Now they just need to find a voice that sounds comfortably human and pleasant but does not sound anything like any existing human who can afford a lawyer. Or, at the very least, if a person refuses to provide samples for your voice AI, make sure whatever you use as a replacement sounds nothing like them.
> What is the danger of AI when it has to do what we program it to do? It's no more than a computer program, like, say, Microsoft Word.

That’s really oversimplifying things. AFAIK MS Word isn’t doing any of these things -
> That’s really oversimplifying things. AFAIK MS Word isn’t doing any of these things -

I kinda just mentioned the financial fraud part in my last post about giving stock market control to AIs. As far as weapons controlled by AI go, the AI will do exactly what it is programmed to do. It's not going to be like a sci-fi story where the AI is tired of being basically a slave and DECIDES to revolt against humans.

There's already massive disinformation with humans alone. The AI can't really create new information or disinformation; it writes stuff based on what humans have already written, or on what AI has already written from humans (and it becomes a copy of a copy of a copy), so it can't create something truly new, only regurgitate what has already been written. Sure, AI can accelerate disinformation, but bots can already accomplish that, and they aren't considered AI because of how simplistic they are. The danger is how fast disinformation can spread, and it's already at like max velocity without AI.

Even as far as cybersecurity goes, a rather simplistic DDoS attack (that doesn't require advanced AI) is basically unstoppable.
The fact that you’re implying human intent with these new tools is safe in the first place is kinda absurd.
> As long as humans remember that you can't truly depend on AI for complex things and that there need to be safeguards in place, AI will never be a huge danger.

The human elite in charge have already wiped their asses with the above, so no. The threat will continue, and the danger will be increasingly clear and present.
> The AI can't really create new information or disinformation; it writes stuff based on what humans have already written, or on what AI has already written from humans (and it becomes a copy of a copy of a copy), so it can't create something truly new, only regurgitate what has already been written.

New information? No. New disinformation? Yes, because the only thing you need to create disinformation is to twist existing information, and that's something LLMs do frequently even when one doesn't order them to do so (it's what we call hallucinations).
> As long as humans remember that you can't truly depend on AI for complex things and that there need to be safeguards in place, AI will never be a huge danger.

Complex things like driving a car?
> The human elite in charge have already wiped their asses with the above, so no. The threat will continue, and the danger will be increasingly clear and present.

No, they haven't.
> New information? No. New disinformation? Yes, because the only thing you need to create disinformation is to twist existing information, and that's something LLMs do frequently even when one doesn't order them to do so (it's what we call hallucinations).

And AI can create disinformation that is as believable as (or more believable than) human-created disinformation? It seems far easier for humans to deceive humans and then use computer programs to proliferate said disinformation than to have an AI create the disinformation itself.
> For the most part, "AI safety" currently means making sure your models do not give responses that might embarrass the company that created them

That's obviously not what the people who were involved in setting up OpenAI (with its safety-conscious remit) and who subsequently quit think it's about.
> Isn't this basically just the Chinese Room thought experiment in other words, except that now we have a thing that actually exists and comes awfully close to being the Chinese Room?

The Chinese Room is something I only know the basics of, so I may not be able to usefully discuss it in sufficient depth. My point is perhaps related to it, in the sense of "What is understanding?" Because I fear it's a more nebulous or elusive concept than we might think. I cannot help but wonder if there's a sort of anthropocentrism to it. Or perhaps we can say that we humans can understand like a human, but what if there are other ways to understand? Does (human) understanding actually matter if the outputs of a "mind" / "program" are accurate and useful, and why not consider that an alternative form of (non-human) understanding?
> If you want some science fiction with aliens that are actually different from us in a meaningful way, check out the Crystal Trilogy by Max Harms (https://crystalbooks.ai/). It's told from the perspective of a freshly instantiated AI existing in a quantum computer alongside several others like it, and it also involves an alien species that is very different from us, plus a not-quite-perfect method of translating communications that doesn't help matters. So lots of worries about AI safety and how it could go horribly wrong, coupled with sufficiently inhuman aliens.

Thank you for the suggestions. There are others - Adrian Tchaikovsky's "Children of Memory" has some technologically evolved crows, and much of the book attempts to address whether they are sapient or not. This is the third in a series, the second of which ("Children of Ruin") attempts to imagine what an octopus might think like (again, technologically evolved to greater intelligence). In terms of AI, there's "I Still Dream" by James Smythe, which involves how to create one, and the potential pitfalls.
> Also Three Worlds Collide by Eliezer Yudkowsky (https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8), which has very inhuman aliens with very inhuman views of the world, a humanity whose views noticeably differ from our own (just how different doesn't come up until well into the story, with the intent of making the reader think about the story's humanity much the same way the humans in the story view the crystalline aliens), and multiple endings. Also, Yudkowsky is kinda obsessed with AI safety, though it's not directly the topic of this story. There are also not-totally-accurate automatic translators; the first transmission from the crystalline aliens gets translated as follows:
>
> THIS VESSEL IS THE OPTIMISM OF THE CENTER OF THE VESSEL PERSON
> YOU HAVE NOT KICKED US
> THEREFORE YOU EAT BABIES
> WHAT IS OURS IS YOURS, WHAT IS YOURS IS OURS

Oh, I second this recommendation.
> And AI can create disinformation that is as believable as (or more believable than) human-created disinformation?

Ah, I see. My mistake; I was mixing up disinformation with misinformation. ChatGPT's hallucinations easily create misinformation worded with believable certainty (the result of LLMs' capacity to imitate the presentation of factual information without being able to catalogue the facts themselves). I suppose it's only disinformation when used with malicious intent. However, the reason I confused them is that I consider the consequences of misinformation just as harmful as those of disinformation (in a hyperbolic comparison, the latter would be arson, the former a wildfire).
> Ah, I see. My mistake; I was mixing up disinformation with misinformation. ChatGPT's hallucinations easily create misinformation worded with believable certainty (the result of LLMs' capacity to imitate the presentation of factual information without being able to catalogue the facts themselves). I suppose it's only disinformation when used with malicious intent. However, the reason I confused them is that I consider the consequences of misinformation just as harmful as those of disinformation (in a hyperbolic comparison, the latter would be arson, the former a wildfire).

Misinformation is such a loaded term; technically, you can say just about anything is misinformation outside of, like, super basic facts at this point. It would be very hard to run any kind of news story that isn't just something super basic like a murder or a car crash (some basic and objective event) without it being misinformation, no matter how hard you are trying to legitimately inform people of something. Disinformation has to make logical sense and be believable for the majority of the population to believe it. An AI would be horrible at a social deduction game, for example.
The thing is that disinformation doesn't need to be creative or even believable to convince the audience usually targeted by grifters and the like. I could try an experiment: ask ChatGPT to tell me why now is the best moment to buy GameStop stock, copy-paste the answer into the Superstonk subreddit, and see how many people agree with it and how many people spot the factual errors.
> Disinformation has to make logical sense and be believable for the majority of the population to believe it.

True, if your target were the majority of the population. But that frequently is not the case for grifters, cults and extremists.
> True, if your target were the majority of the population. But that frequently is not the case for grifters, cults and extremists.

I don't see how AI is going to make it worse; you already have people who believe the world is flat or that 5G was spreading COVID (that was a thing, right?). Then you already have tons of phishing attempts and scam callers (which AI is probably horrible at, like the aforementioned social deduction game). I guess someone could have AI build a webpage that looks like Amazon, and that phish is even easier than it would be for a normal person to attempt. But if it becomes that widespread, everyone will then know to just go to Amazon.com, or to banksite.com to do their banking, when you inundate the whole population constantly with basically the same grift. It's kinda like getting inundated with warnings that don't really matter and then ignoring a legit important warning, because you basically train yourself to ignore the frivolous ones and just assume they are all pointless. Then any email or text from Amazon will be ignored whether it's legit or not, and that grift is basically useless.
> I don't see how AI is going to make it worse; you already have people who believe the world is flat or that 5G was spreading COVID (that was a thing, right?). Then you already have tons of phishing attempts and scam callers (which AI is probably horrible at, like the aforementioned social deduction game). I guess someone could have AI build a webpage that looks like Amazon, and that phish is even easier than it would be for a normal person to attempt. But if it becomes that widespread, everyone will then know to just go to Amazon.com, or to banksite.com to do their banking, when you inundate the whole population constantly with basically the same grift. It's kinda like getting inundated with warnings that don't really matter and then ignoring a legit important warning, because you basically train yourself to ignore the frivolous ones and just assume they are all pointless. Then any email or text from Amazon will be ignored whether it's legit or not, and that grift is basically useless.

I would point out that older people are generally the targets because they aren't as technologically savvy....
> And then you realise that we are going to be there in 40 years, not keeping up with the newest scam and getting swindled.

Not me: statistically, in 40 years I'll be dead.