Anthropic Chief Sounds Alarm on ‘Super-Intelligent’ AI, but Experts Say Present-Day Harms Demand Urgent Attention
The debate over artificial intelligence has once again intensified after Dario Amodei, CEO of Anthropic, the company behind the Claude family of large language models, warned that rapidly advancing AI systems could pose existential dangers within the next few years.
In a sprawling 20,000-word essay released on January 26, Amodei argued that if powerful AI systems evolve unchecked, they could trigger sweeping job displacement, enable bioterrorism, and consolidate power in the hands of authoritarian regimes.
He described future “powerful AI systems” as potentially surpassing the intellectual capabilities of Nobel laureates, seasoned statesmen, or leading technologists — developments he believes may arrive sooner than many expect.
This is not the first time he has raised red flags.
In May 2025, Amodei suggested that AI could eliminate up to half of entry-level white-collar jobs within five years.
Other tech leaders, Elon Musk among them, have voiced similar concerns and repeatedly called for regulatory guardrails around AI development.
Yet many researchers remain sceptical — not only about the dire predictions but also about the timeline.
Critics question whether current technology can realistically produce so-called “super-intelligent” systems anytime soon.
They argue that the conversation about hypothetical AI catastrophes risks diverting attention from very real, ongoing harms.
For over a decade, companies have relied on “scaling laws” — increasing data volumes and computing power, especially GPUs — to improve systems like Claude and OpenAI’s ChatGPT.
This strategy fuelled dramatic performance gains and attracted billions in investment, with the hope that scaling alone might eventually lead to artificial general intelligence (AGI), a theoretical form of AI capable of matching or exceeding human cognitive abilities.
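For readers who want a sense of what a "scaling law" looks like in practice, the relationship is usually expressed as a power law: loss falls as compute grows, but with steeply diminishing returns. The sketch below is purely illustrative; the constants are placeholders, not figures from any published study.

```python
# Illustrative power-law "scaling law": test loss falls as a power of
# training compute. The constants k and alpha are made-up placeholders,
# not values from any published paper.
def predicted_loss(compute_flops: float, k: float = 40.0, alpha: float = 0.05) -> float:
    """Hypothetical loss L(C) = k * C^(-alpha): more compute, lower loss,
    but each 100x jump in compute buys a smaller and smaller improvement."""
    return k * compute_flops ** -alpha

for c in (1e21, 1e23, 1e25):  # successive 100x increases in training compute
    print(f"compute {c:.0e} FLOPs -> predicted loss {predicted_loss(c):.2f}")
```

Under these made-up constants, each hundredfold increase in compute trims the predicted loss by only about 20 per cent, which is the diminishing-returns pattern at the heart of the current debate over whether scaling alone can reach AGI.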
But recent developments suggest that momentum may be slowing. The release of OpenAI's GPT-5 last year, which CEO Sam Altman described as a step toward AGI, left many users underwhelmed.
Persistent issues — from factual inaccuracies to “hallucinations” and unreliable reasoning — remained unresolved. Rival systems such as Claude, Gemini and Grok continue to face similar limitations.
AI entrepreneur and New York University emeritus professor Gary Marcus has been particularly vocal.
In his newsletter, he argued that even the most advanced language models remain “powerful but hard to control,” struggling with reliable reasoning, the use of external tools, and alignment with human intentions.
Marcus contends that these shortcomings are structural weaknesses in large language models — not problems that can be fixed by adding more data or computing power.
Even insiders acknowledge that the field may be at an inflection point. In late 2024, OpenAI co-founder Ilya Sutskever told Reuters that the era dominated by scaling might be ending, suggesting that the next breakthrough will require fresh scientific discovery rather than brute computational force.
While industry leaders debate whether super-intelligence is imminent or distant, critics stress that AI’s current risks are neither speculative nor futuristic.
Numerous studies have documented how AI systems already amplify bias, misinformation and inequality.
Deepfakes generated by AI tools have become a potent vehicle for digital deception. Algorithmic decision-making systems have been shown to replicate racial and social biases.
A widely cited 2019 study found that a healthcare algorithm used in US hospitals systematically underestimated the medical needs of Black patients because it was trained on historical spending data — a metric shaped by entrenched disparities in wealth and access.
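The mechanism the study describes is proxy bias: the model was trained to predict a stand-in variable (historical spending) rather than the quantity that actually mattered (medical need). The toy sketch below, using entirely synthetic numbers rather than the study's data, shows how that substitution produces a skewed score.

```python
# Toy illustration of proxy bias (synthetic numbers, not the 2019 study's data).
# Two groups have identical underlying medical need, but one group has
# historically spent less on care because of unequal access. A model trained
# to predict spending will therefore rank that group as "lower need".
true_need = {"group_a": 1.0, "group_b": 1.0}            # equal actual need
spend_per_unit_need = {"group_a": 1.0, "group_b": 0.6}  # unequal access to care

for group, need in true_need.items():
    proxy_score = need * spend_per_unit_need[group]  # what a spending-trained model sees
    print(f"{group}: true need {need:.1f}, proxy score {proxy_score:.2f}")
# group_b receives a lower score despite identical need, so it is
# systematically deprioritised for extra care, mirroring the reported pattern.
```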
Environmental concerns also loom large. Training and operating AI models require enormous data centres powered by energy-intensive GPUs.
Research indicates that a single AI query to a system like ChatGPT may consume 10 to 33 times more energy than a standard Google search, raising questions about sustainability.
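As a rough back-of-the-envelope check on what that multiplier means in absolute terms, the sketch below uses a commonly cited estimate of around 0.3 watt-hours per web search; both the baseline and the billion-query scenario are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope energy comparison (illustrative estimates only).
GOOGLE_SEARCH_WH = 0.3       # commonly cited estimate for one web search, in watt-hours
MULTIPLIER_RANGE = (10, 33)  # the range quoted above for an AI query

low_wh = GOOGLE_SEARCH_WH * MULTIPLIER_RANGE[0]
high_wh = GOOGLE_SEARCH_WH * MULTIPLIER_RANGE[1]
print(f"One AI query: roughly {low_wh:.1f}-{high_wh:.1f} Wh vs {GOOGLE_SEARCH_WH} Wh for a search")

# Scaled to a hypothetical one billion queries per day (Wh -> MWh):
daily_low_mwh = low_wh * 1e9 / 1e6
daily_high_mwh = high_wh * 1e9 / 1e6
print(f"At 1 billion queries/day: ~{daily_low_mwh:,.0f}-{daily_high_mwh:,.0f} MWh per day")
```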
Perhaps most troubling are the human rights implications.
Since at least 2013, AI tools developed by Palantir Technologies have been integrated into surveillance systems used in Israel’s monitoring of Palestinians in Gaza and the West Bank.
Reports indicate that AI-driven platforms such as “Lavender” assigned numerical risk scores to residents in Gaza to identify potential militant affiliations — criteria critics say were often overly broad.
Palantir’s technologies are also used within the United States. Observers say that under President Donald Trump’s administration, data has been aggregated across federal agencies, including Homeland Security, Defence, Health and Human Services, the Social Security Administration and the IRS, raising fears of surveillance overreach targeting immigrants and political critics.
In 2023, researchers at the University of Oxford cautioned against allowing speculative doomsday scenarios to overshadow tangible harms unfolding today.
“AI poses real risks to society,” they wrote, urging policymakers and technologists to prioritise immediate social and environmental consequences over distant, science-fiction-style fears.
The widening gap between industry warnings about future super-intelligence and expert concerns about present-day misuse underscores a deeper divide in the AI discourse: whether to prepare for hypothetical existential threats — or confront the very real impacts already reshaping societies.
#ArtificialIntelligence #Anthropic #AIRegulation #TechEthics #AGI #AIBias #HumanRights #Deepfakes #AIandSociety

