> OK. You may think you're immune to misinformation, but you're not.

Thinking you happen to be totally immune to a massive societal influence which affects people to great degrees generally means it's got you hard.
> Frankly, I think you just have an extremely simplistic view of how media is widely consumed.

How would one be consuming so much news that they run out of news to consume? It's like saying that there aren't enough Mario Maker levels being made because I played through them all, and thus would want AI to create more levels to play.
> How would one be consuming so much news that they run out of news to consume? It's like saying that there aren't enough Mario Maker levels being made because I played through them all, and thus would want AI to create more levels to play.

Or you could engage with what I actually said, rather than boiling it down to this nonsense strawman.
> Or you could engage with what I actually said, rather than boiling it down to this nonsense strawman.

What are you even talking about? I've said from the start of this thread the following:
> The AI can't really create new information or disinformation

What are you talking about? You don't need anything new to create disinformation. You just take two unrelated facts and put them together as if they were connected. That's something AI does very well, to a fault. The danger is in embedding those faulty AIs into search engines (you know, the thing people usually turn to when they want to check whether something they heard is true or false?).
> What are you even talking about? I've said from the start of this thread the following:
>
> There's already massive disinformation with humans alone. The AI can't really create new information or disinformation; it writes stuff based on what humans have already written, or on what AI has already written from humans (so it becomes a copy of a copy of a copy). It can't create something truly new, only regurgitate what has already been written. Sure, AI can accelerate disinformation, but bots can already accomplish that, and they aren't considered AI because of how simplistic they are. The danger is how fast disinformation can spread, and it's already at something like max velocity without AI.
>
> Hence why I'm asking: how would one be consuming so much news that they run out of news to consume? What is the point/danger of AI?

They wouldn't be "consuming so much news that they run out". That's nonsense.
> What are you talking about? You don't need anything new to create disinformation. You just take two unrelated facts and put them together as if they were connected. That's something AI does very well, to a fault. The danger is in embedding those faulty AIs into search engines (you know, the thing people usually turn to when they want to check whether something they heard is true or false?).

But AI is horrible at doing that.
> They wouldn't be "consuming so much news that they run out". That's nonsense.

You ain't gonna get people thinking there's some immigration crisis in, say, a few hours instead of a few months. You have to have events that happen over time for people to think that. People have to notice an actual difference. For example, you can't just say inflation is out of hand when everything costs the same as it did yesterday.
The speed, reactiveness, reach and scope of misinformation are not at "max velocity". That's just complacency. They're extremely high, but they could be far higher. And that's where AI comes in: it can outpace anything humans and basic bots can put out.
> I don't care what you're doing. Hundreds of millions of people do, including news content. How much do you think they're presented with, or passively take notice of, even if they don't actively open and read an article?

Not to mention how many of people's trusted sources are writing response pieces to information that came from somewhere else, which may or may not have been written by a computer algorithm trained to imitate the appearance of truth.
> Thinking you happen to be totally immune to a massive societal influence which affects people to great degrees generally means it's got you hard.

Advertisers love people who say "oh, advertising has no effect on me", because those people lack the ability to be self-critical and are therefore even more vulnerable to advertising than most others. And propaganda is just a form of advertising...
> Advertisers love people who say "oh, advertising has no effect on me", because those people lack the ability to be self-critical and are therefore even more vulnerable to advertising than most others. And propaganda is just a form of advertising...

I prefer to say that advertising is a form of propaganda.
> You ain't gonna get people thinking there's some immigration crisis in, say, a few hours instead of a few months. You have to have events that happen over time for people to think that. People have to notice an actual difference. For example, you can't just say inflation is out of hand when everything costs the same as it did yesterday.

It's already been pointed out to you that people don't need to see the actual effects of something to have strong opinions on it. And even if they experience a certain effect, they can easily be convinced that the cause was something else entirely.
> It's already been pointed out to you that people don't need to see the actual effects of something to have strong opinions on it. And even if they experience a certain effect, they can easily be convinced that the cause was something else entirely.

I'm completely unclear why anyone is bothering to entertain the argument that people don't believe misinformation, coming from a man who spent so many months telling us hydroxychloroquine and ivermectin would save everyone from covid.
> It's already been pointed out to you that people don't need to see the actual effects of something to have strong opinions on it. And even if they experience a certain effect, they can easily be convinced that the cause was something else entirely.
>
> Take a look at the Brexit referendum. One of the claims that had the most traction was that Britain sends 300 million a week to the EU, and that leaving would allow us to invest it in the NHS. This was a complete lie. But it was repeated so often, so widely (including in low-substance online ads) that it gained that enormous traction. These people weren't actually reading full articles making that claim. They just saw it pop up, again and again and again. It stuck in their minds and stoked up anger and fear.
>
> No-one had experienced the EU taking all our money. They had experienced worsening NHS care due to underfunding... and misinformation effectively exploited that to convince people of something untrue and swing a public vote.

1) Would an AI actually be able to come up with that on its own, given that it looks like it was based in some truth? Would an AI know that something like that would be believable misinformation?
> I'm completely unclear why anyone is bothering to entertain the argument that people don't believe misinformation, coming from a man who spent so many months telling us hydroxychloroquine and ivermectin would save everyone from covid.

Not true.
> 1) Would an AI actually be able to come up with that on its own, given that it looks like it was based in some truth? Would an AI know that something like that would be believable misinformation?

AIs can absolutely come up with false statistics from whole cloth. Recall those lawyers who used AI language programs to create false briefs? Those AIs fabricated names, case details, dates, and numbered references, all of which looked superficially believable.
> 2) It looks like the UK was sending about $150 million to the EU a week at the end of the day. That would likely be something people didn't like anyway.

I don't care if you think the misinformation was fine.
> 3) If an AI is just basically pulling shit out its ass hoping something sticks, you're gonna have so much shit that people will just ignore it all. It's like when you get a call from an unknown number: nobody picks up because it's probably spam, and if it was actually something important, they'll leave a voicemail.

Loads of spam and scams are successful. You and I ignore them. Lots of vulnerable people don't. That's why they keep getting made: they work.
> AIs can absolutely come up with false statistics from whole cloth. Recall those lawyers who used AI language programs to create false briefs? Those AIs fabricated names, case details, dates, and numbered references, all of which looked superficially believable.

A legal research AI could absolutely be useful, but with LLMs you have to tune them one way or the other: either they sound more natural and respond in a more varied way, or they hallucinate less. You'd want to go hard on the latter, at which point they'd produce dry, robotic, painful-to-read but mostly accurate responses.
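For anyone curious, the "tuning knob" being described above is roughly what sampling temperature does. This is a toy sketch, not any real product's settings: a made-up two-token distribution where temperature 0 always picks the most probable (dry but safer) continuation, and higher temperatures sample more varied, riskier ones.

```python
# Toy sketch of temperature-scaled sampling. At temperature 0 the model
# greedily picks its most probable token; higher temperatures flatten the
# distribution, giving more varied output at the cost of more made-up detail.
import math
import random

def sample(logits, temperature):
    """Pick a token from {token: logit} using temperature scaling."""
    if temperature == 0:
        # Greedy decoding: always the highest-scoring token.
        return max(logits, key=logits.get)
    # Softmax with temperature, then weighted random choice.
    weights = {t: math.exp(l / temperature) for t, l in logits.items()}
    total = sum(weights.values())
    r, acc = random.random() * total, 0.0
    for tok, w in weights.items():
        acc += w
        if r <= acc:
            return tok
    return tok  # fallback for floating-point edge cases

# Hypothetical two-way choice between a grounded and a hallucinated detail.
logits = {"accurate": 2.0, "plausible-but-made-up": 1.0}
print(sample(logits, temperature=0))  # greedy pick
```

At temperature 0 this always prints the higher-scoring token; at, say, temperature 2 the made-up option gets chosen a substantial fraction of the time, which is the "natural but hallucination-prone" end of the trade-off.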
> But AI is horrible at doing that.

The Google AI bot is really good at synthesizing the top few search results for a question and expressing that information in an authoritative-sounding manner. Unfortunately, that only works insofar as the top few search results actually provide a good answer to the question asked.
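Mechanically, that kind of "AI overview" boils down to stuffing the top snippets into a prompt and asking a model to answer only from them. This is a rough sketch of the idea, not Google's actual pipeline; the question and snippets are invented. It also makes the failure mode obvious: if the snippets are wrong, the confident-sounding answer built on them is wrong too.

```python
# Sketch of "synthesize the top search results": concatenate the top few
# snippets into a grounded prompt. The model only ever sees these snippets,
# so the answer is exactly as good (or bad) as the retrieved results.

def build_prompt(question, snippets):
    """Combine top search snippets into a single grounded prompt string."""
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer the question using ONLY the sources below.\n"
        f"Sources:\n{context}\n"
        f"Question: {question}\n"
    )

# Hypothetical top-2 search results for a factual question.
snippets = [
    "The Forth Bridge opened in 1890.",
    "It is a cantilever railway bridge across the Firth of Forth.",
]
prompt = build_prompt("When did the Forth Bridge open?", snippets)
print(prompt)
```

Swap either snippet for a false claim and nothing in the pipeline pushes back; the summary just restates it authoritatively.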
> AIs can absolutely come up with false statistics from whole cloth. Recall those lawyers who used AI language programs to create false briefs? Those AIs fabricated names, case details, dates, and numbered references, all of which looked superficially believable.

It still has to pull it from somewhere. It can't actually make up a sentence like we can. I'm not saying it can't put something together that's coherent and readable. I'm saying that just doing that isn't necessarily misinformation that would be believable.
> The Google AI bot is really good at synthesizing the top few search results for a question and expressing that information in an authoritative-sounding manner. Unfortunately, that only works insofar as the top few search results actually provide a good answer to the question asked.

But if there are no "answers", the AI has nothing to pull from.
> It still has to pull it from somewhere. It can't actually make up a sentence like we can. I'm not saying it can't put something together that's coherent and readable. I'm saying that just doing that isn't necessarily misinformation that would be believable.

Yet to anyone who isn't able to actually look up case history, those references all looked believable. They included dates, names, page numbers, case titles. All fabricated.
> I didn't say it was fine. I said it was an actual real number, just without the net value.

But the number that gained traction wasn't the real number. It was a complete falsehood.
> You have to have X amount of people that believe something for it to be believed on a population level. People believe the earth is flat; that doesn't mean the majority, or even a significant share, of the population believes that.

Right, but far more people, millions upon millions a year, fall for spam and scams.
> A legal research AI could absolutely be useful, but with LLMs you have to tune them one way or the other: either they sound more natural and respond in a more varied way, or they hallucinate less. You'd want to go hard on the latter, at which point they'd produce dry, robotic, painful-to-read but mostly accurate responses.

I think web-search AIs like Google's come up with loads of inane bullshit because they're pulling from basically every source Google has. But if you're building one with, say, Azure (the platform I learned about), you can literally tell it to reference specific URLs, documents or other specific sources and nothing else.
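The "these sources and nothing else" idea is essentially a retrieval allow-list: documents outside the approved set never reach the model, so it can't base an answer on them. A minimal sketch of that filter, with entirely hypothetical URLs and a stub in-memory index rather than any real Azure API:

```python
# Sketch of source-restricted retrieval: only documents whose URL is on the
# allow-list can ever be returned as context, regardless of how well they
# match the query.

# Hypothetical approved sources; in a real system this would be your
# ingested document set.
ALLOWED_SOURCES = {
    "https://docs.example.com/handbook.pdf",
    "https://docs.example.com/policy.html",
}

def retrieve(query, index):
    """Return matching passages, keeping only allow-listed sources."""
    hits = [doc for doc in index if query.lower() in doc["text"].lower()]
    return [doc for doc in hits if doc["url"] in ALLOWED_SOURCES]

# Stub index: one approved document, one random blog that contradicts it.
index = [
    {"url": "https://docs.example.com/handbook.pdf",
     "text": "Refunds take 14 days."},
    {"url": "https://random-blog.example.net/post",
     "text": "Refunds take 2 days."},
]
print(retrieve("refunds", index))
```

Both documents match the query, but only the allow-listed one survives the filter, so the contradicting blog post can never end up in the model's context.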
> But at that point they're just slightly more advanced search functions.

I feel like you could take out the "just": slightly more advanced search functions could be immensely useful.