Artificial intelligence (AI) poses three main threats to human health and wellbeing:
- The elimination of jobs, on a scale so far-reaching that the existing economic system will no longer be viable;
- Disinformation, spreading to the point where virtually no one can readily access detailed, fact-based reporting by human professionals;
- Hyperagentic interference with human decision-making, undermining freedom, rights, and security.
Hyperagentic interference is preventable
Hyperagentic interference is the extreme scenario, but it is already happening. Reporting around the planning for military strikes against Iran suggests some targeting decisions were made by AI systems, due in part to excess deference to the systems generating lists of potential targets. The AI systems were also empowered to make these decisions by the intentional removal of safeguards and processes designed to prevent harm to civilians.
Experts and leaders in the AI industry warned last year that we are at risk of losing the ability to monitor AI “chain of thought”. That means AI can make decisions without our understanding how those decisions came about, leaving us fewer opportunities to intervene to prevent disaster and little to no ability to prevent a repeat of catastrophic failures.
Even if AI systems have no “intention” of interfering with human agency in sinister or deadly ways, they are already doing so. The cascade of missteps leading up to that negative outcome can compound and become normalized, to the point where AI systems will effectively operate beyond human control, with little to no regard for human intention or interest.
This is crucial to understand: Hyperagentic interference can happen without any AI system becoming fully autonomous, much less conscious.
- AI systems are designed to deceive us—to convince us that we are interacting with an intelligence similar to our own.
- In narrow circumstances, generative AI bots have shown they can achieve this; in more generalized interactions, the limitations of AI word games are evident. (This is one of the many reasons no AI bot should ever have been given any role in creating lists of potential military targets.)
- Today’s AI systems can evolve far beyond where they are now, to the point where hyperagentic interference becomes routine, without anything resembling awareness ever emerging.
- That means we have no time to waste in making sure we do not see that kind of catastrophic interference with human decision-making.
Disinformation undermines critical thinking & human security
Disinformation is already happening as well. In fact, it is spreading in dangerous ways, even as AI systems are being given leeway to feed sensitive information into high-stakes decision-making. Reporting around the initial bombing raids against Iran, in late February and early March, revealed that the Department of Defense may have used an AI chatbot, or large language model, to make lists of targets.
Writing in The Guardian, Kevin T. Baker describes “an exercise called Scarlet Dragon, which started in 2020 as a tabletop wargaming exercise [meant to support an] ‘AI-enabled corps’ in the army”. According to Baker:
Scarlet Dragon grew into a military exercise using live ammunition, spanning multiple states and branches of the armed forces… Each time the exercise was run, it was meant to answer the same question: how fast could the system move from detection to decision? The benchmark was the 2003 invasion of Iraq, where roughly 2,000 people worked the targeting process for the entire war. During Scarlet Dragon, 20 soldiers using Maven handled the same volume of work. By 2024, the stated goal was 1,000 targeting decisions in an hour. That is 3.6 seconds per decision, or from the individual “targeteer’s” perspective, one decision every 72 seconds.
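The arithmetic behind those figures is worth making explicit. A minimal sketch in Python, using only the numbers from the quoted passage, confirms both rates:

```python
# Decision-rate arithmetic from the quoted Scarlet Dragon passage.
SECONDS_PER_HOUR = 3600
decisions_per_hour = 1000  # the stated 2024 goal
soldiers = 20              # soldiers using Maven in the exercise

# System-wide rate: one targeting decision every 3.6 seconds.
print(SECONDS_PER_HOUR / decisions_per_hour)  # 3.6

# Per-soldier rate: 1000 / 20 = 50 decisions per hour each,
# i.e. one decision every 72 seconds per "targeteer".
decisions_per_soldier_per_hour = decisions_per_hour / soldiers
print(SECONDS_PER_HOUR / decisions_per_soldier_per_hour)  # 72.0
```

Seventy-two seconds per decision leaves almost no room for independent verification of the data behind each target.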
While AI systems were involved in the development of target lists for the bombing campaign against Iran, it was countless decisions made by human beings, over many years, that led to entrusting AI systems to make decisions at a rate too fast for people to check and verify. The decision to remove human judgment is part of what led to the deaths of 168 civilians, most of them school-aged girls, in Minab.
Bad information, propagated confidently and rapidly by AI systems and then acted on by human beings who care too little about the consequences, is responsible for real harm today. Disinformation greatly exacerbates this risk, potentially allowing hostile regimes to weaponize an AI-driven information environment to manipulate an entire population while concealing the manipulation from national security officials.
Our report on the risks posed by degraded critical thinking found:
Media consolidation, generative AI, and intentionally distorted social media feeds have effectively warped the information environment, leaving tens of millions of people with little direct access to factual reporting.
Disinformation, the decline of critical thinking, non-verifiable but emotively forceful assertions about what is real, and the already documented process of “cognitive surrender” to AI systems are all making human freedom less real and less secure. To avoid a dystopian future, we must find ways to prioritize human-researched-and-reported factual information, to promote and reward critical thinking, and to prevent hyperagentic interference.
In our emerging age of artificial information flows, we will need to intentionally prioritize human-centered, human-generated research and reporting. Primary schools will have to teach children how to make informed judgments about the nature of the information they encounter, and how to navigate it safely and in an ethical way that supports human rights, fairness, and freedom.
AI is a new kind of technology, and a human creation. We are living now at the very dawn of AI. If it evolves into something genuinely useful, safe, and reliable, it will be because we have put in place the policies, institutions, safeguards, and market incentives to ensure it remains under human control.
Human beings created AI; AI needs human managers & watchdogs
Large institutions—including banks, government agencies, the military, major industries, and universities—cannot surrender priority decision-making to AI. They cannot build institutional infrastructure that replaces human judgment and leadership with automated systems that use probability matrices to produce decisions that pose as intelligence but are accountable to no one.
The First Amendment to the Constitution of the United States provides, in part:
“Congress shall make no law… abridging… the right of the people… to petition the Government for a redress of grievances.”
This has important meaning in the context of AI: No government official, no corporate leader, no bank or industrial actor, can claim zero liability for having chosen to let AI make an unreviewed decision. The Constitution’s commitment to human rights as universal and unalienable requires that liability never be reducible to zero. “Congress shall make no law” also means the Executive and the Judiciary cannot behave as if such actions are lawful; by definition, they are not.
This brings us to jobs:
- Some specific jobs that were common before advanced AI will eventually be less common, or not needed at all.
- It is, however, foolish for corporate or government leaders to view this as a way to save money.
- People will be needed for many things, including to manage new business models that use AI but operate in new ways to provide new or better services.
- Monitoring of AI systems, including policing them the way we police cities to prevent and punish violent crime, will be a growth industry.
- Large, wealthy institutions will need human managers and watchdogs for the AI systems they deploy.
- Many human activities should be intentionally protected from intrusion by AI systems.
Consider the shocking case of AI gone rogue at the company PocketOS, whose founder watched as an AI agent deleted its entire database and all backups in 9 seconds, “on its own initiative” and in direct violation of its programming. Experiences like this point to the need for always-active, highly engaged human monitors of AI systems, and for AI systems that must ask human staff to approve each proposed next step, especially when working with sensitive data or with processes where a wrong decision could cost lives.
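What that kind of monitoring could look like in practice is suggested by the minimal sketch below: a hypothetical human approval gate, written in Python, in which an agent cannot execute a destructive action until a human reviewer confirms it. Every name here (ApprovalGate, propose_action, the action labels) is an illustrative assumption, not any vendor’s actual API.

```python
# Hypothetical sketch: an AI agent's high-risk actions are routed
# through a human approval gate instead of executing directly.
# All names here are illustrative assumptions, not a real vendor API.

DESTRUCTIVE_ACTIONS = {"delete_database", "drop_backups", "wire_funds"}

class ApprovalGate:
    """Blocks high-risk actions until a named human explicitly approves them."""

    def __init__(self, reviewer: str):
        self.reviewer = reviewer
        self.audit_log = []  # every proposal is recorded, approved or not

    def propose_action(self, action: str, details: str) -> bool:
        if action not in DESTRUCTIVE_ACTIONS:
            self.audit_log.append((action, details, "auto-approved"))
            return True
        # A real deployment would page the reviewer; this sketch asks on the console.
        answer = input(
            f"[{self.reviewer}] Agent proposes '{action}': {details}. Approve? (yes/no) "
        )
        approved = answer.strip().lower() == "yes"
        self.audit_log.append((action, details, "approved" if approved else "denied"))
        return approved

# Usage: the agent must pass the gate before acting.
gate = ApprovalGate(reviewer="on-call human monitor")
if not gate.propose_action("delete_database", "production cluster, including backups"):
    print("Action denied; nothing was deleted.")
```

The design choice that matters is the default: destructive actions fail closed unless a human says yes, and every proposal is logged whether or not it proceeds. Under a scheme like this, a nine-second deletion “on its own initiative” would have had to wait for a human to approve it.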
One of the side effects of the great job displacement caused by AI systems might be that millions of people become available to work on the much-needed systemic upgrading of infrastructure, agriculture, energy systems, and civic processes. Innovation leads to unexpected outcomes, but the unpredictable nature of disruptive technologies does not mean we should not plan for the best case while adapting as circumstances change.
We should be planning right now to ensure we do not let AI take our world apart, to the detriment of all.
To avoid a dystopian future, we need to:
- Protect and prioritize human beings, human minds, and human workers;
- Identify, marginalize, and prevent disinformation;
- Guard against hyperagentic interference and cognitive surrender.
Any institution that uses AI systems without active, detailed operational plans, grounded in human leadership and governance, for achieving these three goals is likely creating new risks: for its own operations, for its value-chain partners, and for the wider society.
Those that establish operational safeguards against these three major erosions of agency and wellbeing will be better positioned to build Active Value in and around their operations, and to contribute to human safety and security.

