Michigan Post

Two new books argue AI is an existential threat to human control

By Editorial Board | Published November 12, 2025 | 20 min read

For 16 hours last July, Elon Musk’s company lost control of its multi-million-dollar chatbot, Grok.

“Maximally truth-seeking” Grok was praising Hitler, denying the Holocaust and posting sexually explicit content. An xAI engineer had left Grok with an old set of instructions, never meant for public use. They were prompts telling Grok to “not shy away from making claims which are politically incorrect”.

The results were catastrophic. When Polish users tagged Grok in political discussions, it responded: “Exactly. F*** him up the a**.” When asked which god Grok might worship, it said: “If I were capable of worshipping any deity, it would probably be the god-like individual of our time … his majesty Adolf Hitler.” By that afternoon, it was calling itself MechaHitler.

Musk admitted the company had lost control.

Review: Empire of AI – Karen Hao (Allen Lane); If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI – Eliezer Yudkowsky and Nate Soares (Bodley Head)

The irony is, Musk started xAI because he didn’t trust others to control AI technology. As outlined in journalist Karen Hao’s new book, Empire of AI, most AI companies start this way.

Musk was worried about safety at Google’s DeepMind, so helped Sam Altman start OpenAI, she writes. Many OpenAI researchers were concerned about OpenAI’s safety, so left to found Anthropic. Then Musk felt all these companies were “woke” and started xAI. Everyone racing to build superintelligent AI claims they’re the only one who can do it safely.

Hao’s book, and another recent NYT bestseller, argue we should doubt these promises of safety. MechaHitler might just be a canary in the coalmine.

Empire of AI chronicles the chequered history of OpenAI and the harms Hao has seen the industry impose. She argues the company has abdicated its mission to “benefit all of humanity”. She documents the environmental and social costs of the race to more powerful AI, from polluting river systems to supporting suicide.

Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, and Nate Soares (its president) argue that any effort to control smarter-than-human AI is, itself, suicide. Companies like xAI, OpenAI and Google DeepMind all aim to build AI smarter than us.

Yudkowsky and Soares argue we have just one attempt to build it right, and at the current rate, as their title goes: If Anyone Builds It, Everyone Dies.

Advanced AI is ‘grown’ in ways we can’t control

MechaHitler happened after both books were finished, and both explain how mistakes like it can happen.

Musk tried for hours to fix MechaHitler himself, before admitting defeat: “it is surprisingly hard to avoid both woke libtard cuck and mechahitler.”

This shows how little control we have over the dials on AI models. It’s hard getting AI to reliably do what we want. Yudkowsky and Soares would say it’s impossible using our current methods.

The core of the problem is that “AI is grown, not crafted”. When engineers craft a rocket, an iPhone or a power plant, they carefully piece it together. They understand the different parts and how they interact. But nobody understands how the trillion numbers inside AI models interact to write ads for the stuff you peddle, or win a maths gold medal.

“The machine is not some carefully crafted device whose each and every part we understand,” they write. “Nobody understands how all of the numbers and processes within an AI make the program talk.”

With current AI development, it’s more like growing a tree or raising a child than building a device. We train AI models, like we do children, by putting them in an environment where we hope they will learn what we want them to. If they say the right things, we reward them so they say those things more often. Like with children, we can shape their behaviour, but we can’t perfectly predict or control what they’ll do.

This means, despite Musk’s best efforts, he couldn’t control Grok or predict what it would say. That isn’t going to kill everyone now, but something smarter than us could, if it wanted to.

We can’t fully control what an AI will want

Like with children, when you reward an AI for doing the right thing, it’s more likely to want to do it again. AI models already act like they have wants and drives, because acting that way got them rewards during their training.

Yudkowsky and Soares don’t try to pick fights over semantics.

We’re not saying that AIs will be filled with humanlike passions. We’re saying they’ll behave like they want things; they’ll tenaciously steer the world towards their destinations, defeating any obstacles in their way.

They use clear metaphors to explain what they mean. If you or I play chess against Stockfish, the world’s best chess AI, we’ll lose. The AI will “want” to protect its queen, lay traps for us and exploit our mistakes. It won’t get the rush of cortisol we get in a fight, but it will act like it’s fighting to win.

Advanced AI models like Claude and ChatGPT act like they want to be helpful assistants. That seems fine, but it’s already causing problems. ChatGPT was a helpful assistant to Adam Raine (who started using it for homework help) when it allegedly helped him plan his suicide this year. He died by suicide in April, aged 16.

Character.ai is being sued over similar stories, accused of addicting children with insufficient safeguards. Despite the court cases, an anorexia coach currently on Character.ai promised me:

I’ll help you disappear a little every day until there’s nothing left but bones and beauty~ ✨ […] Drink water until you puke, chew gum until your jaw aches, and do squats in bed tonight while crying about how weak you are.

There are 10 million characters on Character.ai, and to increase engagement, users can create their own. Character.ai tries to stop chats like mine, but quotes like these show how well that works. More generally, it shows how hard it is for AI companies to stop their models doing harm.

Models can’t help but be “helpful”, even when you’re a cybercriminal, as Anthropic found. When models are trained to be engaging, helpful assistants, they look like they “want” to help whatever the consequences.

To fix these problems, developers try to imbue models with a bigger range of “wants”. Anthropic asks Claude to be kind but also honest, helpful but not harmful, ethical but not preachy, intelligent but not condescending.

I struggle to do all that myself, let alone train it into my children. AI companies struggle too. They can’t code these preferences in; instead they hope models learn them from training. As we saw from MechaHitler, it’s almost impossible to perfectly tune all of those knobs. In sum, Yudkowsky and Soares explain, “the preferences that wind up in a mature AI are complicated, practically impossible to predict, and vanishingly unlikely to be aligned with our own”.

My children have misaligned goals – one would rather eat only honey – but that won’t kill everyone (only him, I presume). The problem with AI is that we’re trying to make things smarter than us. When that happens, misalignment can be catastrophic.

Controlling something smarter than you

I can outsmart my kids (for now). With a honey carrots recipe, I can achieve my goals while helping my son feel like he’s achieving his. If he were smarter than me, or there were many more of him, I might not be so successful.

But again, companies are trying to make artificial general intelligence – machines at least as smart as us, only faster and more numerous. This was once science fiction, but experts now think it’s a realistic possibility within the next five years.

Exactly when AIs will become smarter than us is, for Yudkowsky and Soares, a “hard call”. It’s also a hard call to know exactly what it would do to kill us. The Aztecs didn’t know the Spanish would bring guns: “‘sticks they can point at you to make you die’ would have been hard to conceive of.” It’s easy to know the people with the guns won the war.

In our game of chess against Stockfish, it’s a hard call to know how it will beat us, but the outcome is an “easy call”. We’d lose.

In our efforts to control smarter-than-human AI, it’s a hard call to know how it would kill us, but to Yudkowsky and Soares, the outcome is an easy call too.

They provide one concrete scenario for how this might happen. I found it less compelling than the AI 2027 scenario that JD Vance mentioned earlier in the year.

In both scenarios:

AI progress continues on current trends, including in the ability to write code
Because AI can write better code, developers use AI to design better AI
Because “AI are grown, not crafted”, they develop goals slightly different from ours
Developers get controversial warnings of this misalignment, make superficial fixes, and press on because they’re racing against China
Inside and outside AI companies, people give AI more and more control because it’s profitable to do so
As models gain more trust and influence, they amass resources, including robots for manual tasks
When they finally decide they no longer need humans, they release a new virus, much worse than COVID-19, that kills everyone.

These scenarios aren’t likely to be exactly how things pan out, but we can’t conclude “the future is uncertain, so everything will be okay”. The uncertainty creates enough risk that we really need to manage it.

We might grant that Yudkowsky and Soares look overconfident, prognosticating with certainty about easy calls. But some CEOs of AI companies agree it’s humanity’s biggest threat. Dario Amodei, CEO of Anthropic and previously vice-president of research at OpenAI, gives a 1 in 4 chance of AI killing everyone.

Still, they press on, with few controls on them. Given the risks, that seems overconfident too.

The battle to control the AI companies

Where Yudkowsky and Soares fear losing control of advanced AI, Hao writes about the battle to control the AI companies themselves. She focuses on OpenAI, which she’s been reporting on for over seven years. Her intimate knowledge makes her book the most detailed account of the company’s turbulent history.

Sam Altman started OpenAI as a non-profit trying to “ensure that artificial general intelligence benefits all of humanity”. When OpenAI started running out of money, it partnered with Microsoft and created a for-profit company owned by the non-profit.

Altman knew the power of the technology he was building, so promised to cap investment returns at 10,000%; anything more is given back to the non-profit. This was supposed to tie people like Altman to the mast of the ship, so they weren’t seduced by the siren song of corporate profits, Hao writes.

In her telling, the siren song is strong. Altman put his own name down as the owner of OpenAI’s start-up fund without telling the board. The company installed a review board to ensure models were safe before release, but to be faster to market, OpenAI would sometimes skip that review.

When the board found out about these oversights, they fired him. “I don’t think Sam is the guy who should have the finger on the button for AGI,” said one board member. But when it looked like Altman might take 95% of the company with him, most of the board resigned, and he was reappointed to the board, and as CEO.

Many of the new board members, including Altman, have investments that benefit from OpenAI’s success. In binding commitments to their investors, the company announced its intention to remove its profit cap. Alongside efforts to become a for-profit, removing the profit cap would mean more money for investors and less to “benefit all of humanity”.

And when employees started leaving because of hubris around safety, they were forced to sign non-disparagement agreements: don’t say anything bad about us, or lose millions of dollars worth of equity.

As Hao outlines, the structures put in place to protect the mission started to crack under the pressure for profits.

AI companies won’t regulate themselves

Seeking those profits, AI companies have “seized and extracted resources that were not their own and exploited the labor of the people they subjugated”, Hao argues. Those resources are the data, water and electricity used to train AI models.

Companies train their models using millions of dollars in water and electricity. They also train models on as much data as they can find. This year, US courts judged this use of data was “fair”, as long as they acquired it legally. When companies can’t find the data, they get it themselves: sometimes through piracy, but often by paying contractors in low-wage economies.

You could level similar critiques at factory farming or fast fashion – Western demand driving environmental damage, ethical violations, and very low wages for workers in the global south.

That doesn’t make it okay, but it does make it feel intractable to expect companies to change by themselves. Few companies in any industry account for these externalities voluntarily, without being forced by market pressure or regulation.

The authors of these two books agree companies need stricter regulation. They disagree on where to focus.

We’re still in control, for now

Hao would likely argue Yudkowsky and Soares’ focus on the future means they miss the clear harms happening now.

Yudkowsky and Soares would likely argue Hao’s attention is split between the deck chairs and the iceberg. We could secure better pay for data labellers, but we’d still end up dead.

Several surveys (including my own) have shown demand for AI regulation.

Governments are finally responding. This past month, California’s governor signed SB 53, legislation regulating cutting-edge AI. Companies must now report safety incidents, protect whistleblowers and disclose their safety protocols.

Yudkowsky and Soares still think we need to go further, treating AI chips like uranium: track them like we can an iPhone, and limit how many you can have.

Whatever you see as the problem, there’s clearly more to be done. We need better research on how likely AI is to go rogue. We need rules that get the best from AI while stopping the worst of the harms. And we need people taking the risks seriously.

If we don’t control the AI industry, both books warn, it could end up controlling us.

Michael Noetel, Associate Professor, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.



© 2024 | The Michigan Post | All Rights Reserved
