Thus Spoke Kishibe Rohan: AI is Dogsh*t | Castle Super Beast 322 Clip
> That is an optimistic view of things, IMHO.

Then feel free to tell me: if you have ai and ai robots doing everything, what is the point of managers?

> Then feel free to tell me: if you have ai and ai robots doing everything, what is the point of managers?

That question (or at least the question of how many managers are needed) often gets raised even without the latest developments in ai/whatever. Just because something is pointless doesn't mean it will be removed. For that matter, a lot of ai development and implementation seems rather pointless in a functional sense, but it makes them upstairs happy.

> Even if we assume this, then the pool of corporate bosses will get smaller and smaller and smaller, since with fewer people there is less need for management, and when that starts happening you will see the pushback: if you just have ai doing everything, how do you justify having any higher-up managers? Most management is needed for managing people.

There may not be much/any downsizing of executives and higher management. The people at the top aren't going to put themselves out of a job, and someone needs to schmooze the other elites (political and corporate). There may be substantial losses of middle management. Ultimately, there will be some staff (jobs that can't be readily replaced, or even training/prompting the AI) and a desire for some human input in decision-making, so managers are likely to remain to do that.

> That question (or at least the question of how many managers are needed) often gets raised even without the latest developments in ai/whatever. …

True, except if something is pointless and costs a business money, then that business will look at ways of removing it.

> There may not be much/any downsizing of executives and higher management. …

If ai gets good enough you will see shareholders voting to remove upper management.

However, in many cases what we mean by "the elites" is more business owners - shareholders. There can often be substantial overlap with executives, and they are substantially the same social class and move in similar circles. A major shareholder could be viewed as analogous to a medieval noble: they don't really need to do anything, they just own a load of stuff and money is scraped out of the production process into their pockets (albeit by different mechanisms).

> If ai gets good enough you will see shareholders voting to remove upper management.

In a further future, maybe. But I would suggest owners will be retaining humans at the top for a substantial time yet.

> You are ignoring the politics of the situation. As a democracy...

Are you sure that's going to be the political system of the future? We've come a long way from the supposed "End of History".

> In a further future, maybe. …

It would be more likely that bills would be passed requiring a certain number of people to be employed even if they were doing nothing.

> Are you sure that's going to be the political system of the future? …

Even if it's just in name only, they have to do something to make it look like they are doing something. Also, the "End of History" is a dumb idea.
No shit, Sherlock!
Advanced AI suffers ‘complete accuracy collapse’ in face of complex problems, study finds
‘Pretty devastating’ Apple paper raises doubts about race to reach stage of AI at which it matches human intelligence (www.theguardian.com)
So sad.
In part one of this week's three-part Better Offline, Ed Zitron walks you through how Business Idiots have captured our society, with middle management losers breeding out true meritocracy and value-creation in favor of symbolic growth and superficial intelligence
the article from Bloomberg about Microsoft's CEO claiming to do every task - including even reading his own emails and listening to podcasts - by getting his AI (that he's financially invested in selling as much as possible) to describe them to him instead of being a normal fucking human;
Microsoft CEO Satya Nadella Explains How He’s Making Himself Obsolete With AI
The longtime software executive told reporters how he uses his company’s AI app Copilot during his commute—and it looks a lot like talking back to the radio.
May 16, 2025
Satya Nadella speaks during an event commemorating the 50th anniversary of the company at Microsoft headquarters in Redmond, Washington, US, on Friday, April 4, 2025. By David Ryder/Bloomberg/Getty Images.
In 2019, Microsoft launched a significant partnership with OpenAI, and the legacy software company has gone all in on adding ChatGPT-inspired technology to its offerings, paying no heed to the legacy of Clippy. Ever since, Microsoft CEO Satya Nadella has become one of the most prominent boosters of the idea that large-language models will pave the way for artificial general intelligence. A new Bloomberg profile of the executive, who took the reins at Microsoft in 2014, shows that he is serious about it both at work and in his private life, where he is already using Copilot to replace podcast hosts and put himself in every episode.
According to Bloomberg, Nadella uses at least 10 custom “agents,” or LLM-driven bots, to summarize messages, prepare for meetings, and do research during the course of a workday. The CEO joked that his job has already been reduced. “I’m an email typist,” Nadella told the reporter. He also copies podcast transcripts into the Copilot app, and during his commute, he listens to AI summaries—he even has a back-and-forth with the voice assistant while he drives.
Executives love to invite reporters along as they engage with their companies’ powerful new software. Still, this scene makes the parasocial relationships the rest of us form with our favorite hosts and influencers seem downright healthy. Nadella might benefit from a little more time spent appreciating the native conifers of Washington State.
Nadella isn’t the only CEO who is hard at work rebuilding the meaning of life. Meta CEO and fellow podcast aficionado Mark Zuckerberg has been on this tip for years. In an April interview with Dwarkesh Patel, the CEO brought up another potential use for his company’s products, which will be more “compelling” as AI has more of your data. “The average American has fewer than three friends,” he said. “And the average person has demand for meaningfully more. I think it’s something like 15.” Zuckerberg made this comment while wearing his company’s new coke-bottle Ray-Bans meant to let us use the whole world as our smartphones.
Jack Clark, the cofounder of chatbot company Anthropic, says that he’s looking forward to giving his kids an AI-powered teddy bear that can use an LLM to generate new stories. “I am annoyed I can't buy the teddy bear yet,” he told economist Tyler Cowen on the Conversations With Tyler podcast earlier this month. “I think most parents, if they could acquire a well-meaning friend that could provide occasional entertainment to their child when their child is being very trying, they would probably do it,” he added.
At least the executives are subjecting themselves to the same attention-span deletion they have served to us. We know what will happen if too many of us decide we prefer our podcasts hosted by humans. “Up to this point, shareholders have lauded Nadella’s performance, making Microsoft the most valuable company on Earth,” Bloomberg’s Austin Carr and Dina Bass write. “If he falls flat, though, his may be one of the first jobs threatened by AI.”
oh wait, oopsie, that's not the original, here's the original;
it's paywalled like a bastard, so the cringe must be extracted with surgical precision;
Microsoft CEO Satya Nadella on His AI Efforts and OpenAI Partnership …
archived 9 Jun 2025 13:23:46 UTC (archive.is)
Ok, there are far too many paragraphs in that profile to fit in a single post - character limits, holy christ in hell - so the non-paywalled link is there if anyone hates themself that much. Also didn't realise how much the dude resembles a turtle without a shell; a photoshop edit or two is low-key tempting.

Microsoft’s CEO on How AI Will Remake Every Company, Including His
Nervous customers and a volatile partnership with OpenAI are complicating things for Satya Nadella and the world’s most valuable company.
May 15, 2025 at 9:00 PM UTC
Satya Nadella arrived at the World Economic Forum in January ready to talk up his triumphs in artificial intelligence, when a dangerous threat emerged. A little-known Chinese startup named DeepSeek had just released an AI model that quickly became the talk of Davos, Switzerland. Nadella, the chief executive officer of Microsoft Corp., gathered his lieutenants to assess the out-of-nowhere competition. They set up a virtual war room on—where else?—Microsoft Teams to coordinate a response.
The new model, DeepSeek-R1, could deliver results roughly on par with those of OpenAI at a fraction of the price. Computer processing that would cost $1,000 through OpenAI ran for just $36 through R1. Even crazier, DeepSeek made R1 open-source, meaning anyone could install versions of it for free if they had a powerful enough computer. “OpenAI has been so far ahead that no one’s really come close,” Nadella tells Bloomberg Businessweek. “DeepSeek, and R1 in particular, was the first model I’ve seen post some points.”
To the schmoozers in Davos, this seemed like a huge problem for Microsoft. The company had invested $13.75 billion in OpenAI by that point and had already committed to spend $80 billion on AI data centers in this fiscal year alone, all under an assumption that better AI required more computing resources. Nadella immediately ordered his team to conduct a security review of DeepSeek-R1. They scrutinized a research paper DeepSeek published detailing its work and contacted the startup’s engineers, peppering them with questions about the model. (Few others have even been able to get the opaque Chinese company to respond to emails.)
Soon roughly 100 Microsoft employees were coming in and out of the Teams videoconference rooms testing the security of DeepSeek’s codebase and exchanging notes. “People didn’t sleep,” says Asha Sharma, the company’s AI platform head, who spearheaded the effort. “It was 48 hours of going through every single thing.” R1 appeared to be legit. But instead of trying to stomp out this new rival, Nadella chose to embrace it. He instructed his team to install R1 on Microsoft’s cloud and sell access to it to customers alongside products from OpenAI and Microsoft itself. “Get it out,” Nadella recalls telling his staff.
![]()
Sharma. Photographer: Ian Allen for Bloomberg Businessweek
Nadella’s primary allegiance now isn’t to OpenAI’s very expensive skunkworks. His ultimate objective is to sell whatever AI his customers might want through Microsoft’s platforms. Nadella had spent three years running various parts of the company’s cloud business, called Azure, before he became CEO in 2014, and that’s now central to his AI strategy. Customers can choose from over 1,900 different models on Azure, including ones made by Meta and OpenAI, and upstarts such as Cohere, Mistral, Stability AI and now DeepSeek. (Some, though, such as Google’s Gemini, aren’t available to Microsoft for competitive reasons.) Whether a model’s usage costs a customer $10 through OpenAI or 90¢ via DeepSeek, Microsoft gets paid for the cloud computing, cybersecurity protections, data storage and other upsold services.
The DeepSeek episode highlights another, arguably more revealing part of Nadella’s thinking: AI is rapidly commoditizing, and this is a good thing for Microsoft. While everyone in Davos was focused on AI consumption, Nadella was contemplating the history of coal production. One of his favorite economic theories is the Jevons paradox, which posits that as a resource becomes more accessible and its usage more efficient, consumption increases. This happened with coal during the 18th and 19th centuries and more recently with plane travel, when plummeting operational costs and airfares helped create frequent flyers, new flight destinations and booming sales for airlines. Nadella believes a similar phenomenon will play out with AI.
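Nadella's Jevons bet can be sanity-checked with a toy constant-elasticity demand model. To be clear, the function and every number below are illustrative assumptions of mine, not figures from the article; the sketch just shows the mechanism: when demand elasticity exceeds 1, cutting the price of a resource increases total spending on it.

```python
# Toy constant-elasticity demand model illustrating the Jevons paradox:
# quantity demanded Q = k * P^(-e), so total spend is P * Q = k * P^(1 - e).
# For elasticity e > 1, a price cut therefore *raises* total spend.

def total_spend(price: float, k: float = 100.0, elasticity: float = 1.5) -> float:
    quantity = k * price ** (-elasticity)  # demand rises as price falls
    return price * quantity

expensive = total_spend(price=10.0)  # costly AI inference
cheap = total_spend(price=1.0)       # 10x cheaper inference

# Cheaper inference -> far more usage -> more total spend overall.
assert cheap > expensive
```

With elasticity below 1 the assertion flips, which is exactly the open question behind the bet: whether demand for AI is elastic enough that cheaper models grow the pie rather than shrink the bill.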
This econ mindset has been driving the company toward creating its own AI architectures, including some tiny models with capabilities similar to those of DeepSeek-R1. Over the past year, it’s also been training a series of large language models called MAI-2, the latest iteration of Microsoft’s in-house alternatives to OpenAI’s models, which it had been developing in secret. The goal is to build AI that requires less computing power than ChatGPT and bring down the cost to operate Microsoft’s equivalent service, Copilot. The company will still regularly tap OpenAI’s bleeding-edge technology, but Nadella is convinced Microsoft can deliver near-ChatGPT quality for a lot less.
What this all means for Microsoft’s arrangement with OpenAI is complicated. Six years on, what began as a nurturing kinship has turned into an intense sibling rivalry. A power struggle within OpenAI in 2023 was a turning point in the relationship between the companies, and in various ways Microsoft is asserting its dominance. The MAI-2 project, for example, offers protection for Microsoft, to avoid being “left exposed” in case anything “catastrophic” happens to OpenAI, says Mustafa Suleyman, the head of Microsoft’s consumer Copilot efforts. “The relationship with OpenAI has to date been pretty amazing,” he says. “But this is a 50-year-old company that needs to be in an amazing place in 2030, 2035 and 2040.”
OpenAI CEO Sam Altman has repeatedly signaled that his company’s mission is to create an artificial general intelligence, a final frontier for the current AI era. With MAI-2, Nadella is instead pursuing maximum cost efficiency, even though it won’t be as intelligent as OpenAI’s most advanced models. He already feels a sense of ownership over OpenAI’s intellectual property. “MAI is not a clone,” he says. “When I have a contract [with OpenAI] that says, ‘Oh, I’m essentially funding it and have IP rights,’ it would be stupid to sort of do it twice. So we avoided that stupidity.”
If Nadella sounds defensive, that’s because he’s facing a concentrated version of the trial practically every CEO is undergoing now. Even with Microsoft’s considerable resources, Nadella must decide how AI will reshape his business by making a set of extremely difficult trade-offs—between embracing new-fangled technology and shielding his employees and business partners from disruptive systems, and between marching in line with what’s worked in the past and blindly leaping into the future. Up to this point, shareholders have lauded Nadella’s performance, making Microsoft the most valuable company on Earth. If he falls flat, though, his may be one of the first jobs threatened by AI.
The alliance between Microsoft and OpenAI—and between Nadella and Altman—was the foundation of the present AI boom. To realize their ambitions, Altman’s brainy engineers needed Nadella’s money and data centers. Nadella’s $1 billion investment in OpenAI in mid-2019 was prescient, giving Microsoft early and exclusive access to the research lab’s technology. Two years later, Microsoft rolled out GitHub Copilot, a coding assistant that quickly became a must-have for programmers.
In part two of this week's three-part Better Offline, Ed Zitron walks you through how Business Idiots demanded we all return to the office to hide their own lack of utility - and how this directly led to their desperation to adopt generative AI.
In this episode, Ed Zitron is joined by Carl Brown, a veteran software developer and host of The Internet of Bugs, to talk about the realities of software development, what coding LLMs can actually do, and how the media gets it wrong about software engineering at large.
https://www.youtube.com/@InternetOfBugs
New GitHub Copilot Research Finds 'Downward Pressure on Code Quality' - https://visualstudiomagazine.com/articles/2024/01/25/copilot-research.aspx
Report: AI coding assistants aren’t a panacea - https://techcrunch.com/2025/02/21/report-ai-coding-assistants-arent-a-panacea/
Internet of Bugs Videos to watch:
Debunking Devin: "First AI Software Engineer" Upwork lie exposed!
AI Has Us Between a Rock and a Hard Place
Software Engineers REAL problem with "AI" and Jobs
AGILE & Scrum Failures stuck us with "AI" hype like Devin
> Lol. Lmao even

faking AI with obscenely underpaid overseas labour's a way more common grift than people may assume lol
'700 Indian engineers posed as AI': The London startup that took Microsoft for a ride - BusinessToday
The unraveling began when creditor Viola Credit, which lent Builder.ai $50 million in 2023, seized $37 million from its accounts after the company defaulted. (www.businesstoday.in)
Behind the smooth surfaces of our tech products and the endless possibilities promised by artificial intelligence (AI) lies a much darker picture – one in which the companies behind these new technologies are implicated in a troubling set of social, economic, political and environmental inequalities, writes James Muldoon.
Big Tech has sold us the illusion that artificial intelligence is a frictionless technology that will bring wealth and prosperity to humanity. Behind this smooth exterior, however, lies the grim reality of a global workforce labouring to make this possible, often under appalling conditions.
Based on hundreds of interviews and thousands of hours of fieldwork spanning more than a decade, my new book Feeding the Machine: The Hidden Human Labour Powering AI, written with Mark Graham and Callum Cant, exposes the intricate network of organisations that maintain this exploitative system. Here, I set out seven key take-aways from the book as it speaks to live inequalities debates today.
(1) AI requires a hidden army of workers, often working in terrible conditions
Behind the smooth surfaces of our tech products lies the physical labour of millions of workers across the globe. Feeding the Machine is about the hidden human cost of the AI revolution. It’s a story of the rise of AI told from the perspective of the workers who build it. 80% of the work of AI is not done in AI labs by machine learning engineers; it’s data annotation work that is outsourced to workers in the Global South.
The stories we heard when we visited what could be described as digital sweatshops were horrendous: endless days of tedious work on insecure contracts earning little more than $1 an hour with no career prospects. When we buy other consumer products like coffee or chocolate, many of us are aware of the supply chains and manual labour that make this possible. This is not always the case for digital products. But we are directly connected with these workers dispersed across the globe and actions we take as consumers, workers and citizens can make a big difference to their lives.
(2) AI supply chains reflect older colonial patterns of power
In many ways, the book is also a story of the afterlives of the British Empire and the colonial histories that influence how AI systems are produced today. There’s a reason why this work is outsourced to workers in former colonies. They are countries that have experienced harsh histories at the hands of European colonial powers and tend to suffer from underdevelopment and a lack of job opportunities as a result. Western AI companies take advantage of the relative powerlessness of workers in these countries to extract cheap and disciplined labour to build their products.
There is more than just an echo of colonialism in AI – it’s part of its very DNA. AI is produced through an international division of digital labour whereby coordination and marketing is directed by executives in the US while precarious and low-paid work is exported to workers in the Global South. Minerals needed to produce AI infrastructure are also largely mined and processed in former colonial countries. The outputs from generative AI also privilege Western forms of knowledge and reproduce damaging stereotypes and biases found in AI’s datasets.
(3) AI is an ‘extraction machine’ – it feeds off our physical and intellectual work
We often hear stories of AI as a replica or mirror of human intelligence – an attempt to reproduce the basic structure of intelligent thought in a machine. But from the perspective of workers and consumers, it is more accurate to understand AI as an ‘extraction machine’: a system that feeds off the physical and intellectual labour of human beings to produce profits for Big Tech companies.
We argue that the logic of this machine is to extract the inputs of human labour, intelligence, natural resources and capital and convert these into statistical predictions for new tech products. Understanding AI using this machinic metaphor reminds us that, as a machine, AI has its own history, politics and power structures. A machine is not objective or neutral; it’s built by specific people to perform particular tasks. When it comes to AI, the extraction machine is an expression of the interests of wealthy tech investors and is designed to further entrench their position and concentrate their power.
(4) Generative AI is theft
AI requires the manual labour of data annotators and content moderators, but it’s also based on what we call ‘the privatisation of collective intelligence’. The value of generative AI tools is based on original human creative work that has been ripped off and monetised. All of the books, paintings, articles and recordings used to train generative AI models are largely unacknowledged and unremunerated. No consideration has been given to the human creatives whose work is used to create knock-offs and competitors in the creative market.
In the book, we tell the story of an Irish voice actor who finds a synthetic clone of her own voice online, one that has been created without her knowledge and which poses a completely novel threat to her livelihood. Companies have been devaluing the work of artists for generations and many imposters have been trying to make fakes and derivatives. But AI raises the stakes and allows for this to be done with greater ease and at a larger scale than ever before. Generative AI tools enable tech companies to rob an entire creative community of their value and talent, with little to no protection from existing laws.
(5) The commercialisation of AI is leading to a new conglomerate of “Big AI” firms
We need to start talking not just about, “Big Tech”, but “Big AI”. The commercialisation of AI is leading to a further concentration of power in large American tech companies. If you look at the major investors in younger AI startups they are legacy tech companies who want to be seen as leaders in the AI race. This is going to reduce competition in the sector and monopolise decision making power in the hands of a tiny class of Silicon Valley elites.
Big AI firms include leading cloud computing providers such as Microsoft, Amazon, and Alphabet, in addition to AI startups like OpenAI, Anthropic and Mistral, alongside chipmakers such as Nvidia and TSMC. These companies tend to understand AI as a commercial product, one that should be kept secret and used to make profits for investors. OpenAI was started with the goal of developing general artificial intelligence for the good of humanity, but we are increasingly seeing how farcical this is: billions of investment from Microsoft, working with the US military, training data on copyrighted works, imitating Scarlett Johansson’s voice without her consent. There are only a few companies that have the infrastructural power to train foundation AI models and it is these that will benefit the most from the AI revolution.
(6) Generative AI is a catastrophe for the environment
In 2019-2020, all of the leading tech companies promised to make dramatic cuts in their emissions, with the goal of being carbon neutral or negative by 2030. Five years on, these pledges are looking increasingly hollow. Microsoft’s emissions have increased by 30% and Google’s by almost 50%, as a surge in AI has rapidly increased investment in data centres, resulting in a growth in greenhouse gas emissions.
This shouldn’t come as a surprise. Global data centre electricity demand is set to double by 2026. One large data centre consumes as much electricity as 80,000 US households. Cloud computing has a larger emissions profile than the entire airline industry. And it’s the same problem with water: a large data centre can consume roughly between 10 and 20 million litres of water each day, the same as an American town of 50,000 people. Initiatives that put AI in the service of reducing emissions are unlikely to offset this problem as they must also be considered alongside the oil and gas companies that will use AI to extract more fossil fuel.
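The household comparison is easy to check on the back of an envelope. The facility size and the per-household figure below are my own ballpark assumptions (a hypothetical 100 MW facility; roughly 10,500 kWh/year for an average US household), not numbers from the book, but they show the claim is the right order of magnitude:

```python
# Back-of-envelope check of the "one large data centre = 80,000 US
# households" claim. Assumes a hypothetical facility drawing 100 MW
# continuously and ~10,500 kWh/year for an average US household.

HOUSEHOLD_KWH_PER_YEAR = 10_500  # rough US average (assumption)
FACILITY_MW = 100                # assumed continuous draw

# MW -> kW, then multiply by hours in a year to get kWh/year.
facility_kwh_per_year = FACILITY_MW * 1_000 * 24 * 365
households = facility_kwh_per_year / HOUSEHOLD_KWH_PER_YEAR

# A continuous 100 MW draw works out to roughly 83,000 households,
# consistent with the 80,000 figure cited above.
assert 80_000 < households < 90_000
```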
(7) Redressing the inequalities linked to Big AI requires collective political action
Change will only come about when we work together to force these tech companies to change their practices. We see time and again throughout history that powerful social groups do not give up their position unless they are directly challenged through political struggle. The number one strategy we advocate in the book is for people to come together through workers’ and civil society organisations to build collective power and put pressure on tech companies to provide better conditions and improve the lives of their workers.
There are many things we can all do to contribute to this struggle, but it’s through working together and supporting the struggles of workers in these AI supply chains that we can hope to make the biggest difference. Real social transformation is most likely to occur through a fundamental shift in the balance of power between social groups.
While there are specific policies and proposals we discuss in the book, we want to leave readers with the main point: the issue is primarily a product of the disproportionate power enjoyed by large tech companies, and workers must build their own power to oppose it. It’s only by reimagining how AI is produced in this way that we can turn it into a more emancipatory technology in the service of humanity.
I was looking at AI's ability to draw diagrams and pictures for my lectures. I asked Copilot to draw a diagram showing uveoscleral outflow, and it gave me something I would probably describe as somewhat accurate but essentially useless, because it didn't relate anything to the structure of the eye. So I told it to make a diagram based on eye anatomy, and it gave me a picture of an eye that could make hardened David Cronenberg fans quail.