Since late 2022, when ChatGPT opened to the general public, we have seen a proliferation of new web-based services using ‘generative artificial intelligence’ (gen AI) to provide text summaries, synthetic images, draft emails, and live meeting notes, among other things.
- Some database management platforms have started to use gen AI to assist users in structuring tables, profiles, formulas, and overall data flow.
- In most cases, users do not have immediately available information about which service provider is delivering the gen AI functionality.
- In many cases, the platform, app, or service claims to be offering its own unique AI service, though in almost all cases it is either fully dependent on, or is delivering a refined version of, a service built by OpenAI or another major generative AI developer.
- In 2025, we have seen the Trump administration—with urging from Elon Musk and other owners of AI services and corporate government contractors—deploying gen AI services to sort through government databases and cull programs and jobs.
- There have been numerous warnings about the risks inherent in doing so, if only because such systems require sensitive data be transferred over the open internet to third-party servers.
- In June, a scathing dissent from Supreme Court Justice Ketanji Brown Jackson warned that such access to personal data by AI systems likely violates the 1974 Privacy Act and could pose grave and irreversible threats to personal privacy and data security.
Gen AI interests have seen some of the biggest leaps in stock value on record, as well as some of the biggest losses. This raises the question of whether markets understand, in enough detail, where future value creation resides and what constitutes game-changing innovation.
Game-changing innovation changes the rules of the game in a way that cannot be reversed: there is no going back to the status quo ante, the way things were before. The iPhone marked such a shift in popular photography. Most innovation is not that, and that is good. Some of the most significant and impactful innovations are noteworthy but manageable improvements to existing standards, which are scalable, useful, and trustworthy precisely because they work well with existing systems.
When we look at game-changing AI-related innovation, we should first have clarity about what constitutes artificial intelligence. Here, it is worth looking at where large language models (LLMs) have their roots.
In the mid-20th century, Alan Turing proposed a rule for understanding whether artificial intelligence had been achieved. He posited that if a computer trained to use language could persuade a human being that they were conversing with another human being, then that computer should be treated as intelligent, even if it were not sentient. This became known as the Turing Test.
LLMs have many uses, including in services that transcribe spoken language or provide rapid translation of text or audio inputs. Since the 1960s, such voice-to-text and translation functions have been the primary area of mainstream benefit from computational language systems, most of which now use some variation of “neural networks”. Neural networks are intended to provide something like the synaptic richness and flexibility of the human brain, allowing machines to learn from examples and spontaneously recall relevant language.
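To make the idea of a neural network concrete: at its core, it is layered arithmetic, in which inputs are multiplied by learned weights and passed through simple non-linear functions, layer after layer. The sketch below is a hypothetical toy example in Python, not drawn from any particular system; all weights and inputs are invented for illustration.

```python
# Minimal sketch of one forward pass through a tiny two-layer neural network.
# Hypothetical weights and inputs; real LLMs use billions of such parameters.
import numpy as np

rng = np.random.default_rng(0)

# A toy "embedding" of three input features (e.g., a tiny slice of audio or text).
x = np.array([0.2, -1.0, 0.5])

# Layer 1: 3 inputs -> 4 hidden units, with a ReLU non-linearity.
W1 = rng.normal(size=(4, 3))
b1 = np.zeros(4)
h = np.maximum(0.0, W1 @ x + b1)   # ReLU keeps only positive activations

# Layer 2: 4 hidden units -> 2 output scores (e.g., two candidate words).
W2 = rng.normal(size=(2, 4))
b2 = np.zeros(2)
scores = W2 @ h + b2

# Softmax turns the scores into probabilities over the candidate outputs.
probs = np.exp(scores) / np.exp(scores).sum()
print(probs)  # the model "prefers" whichever candidate gets the higher probability
```

Training adjusts the weight matrices so that, over millions of examples, the probabilities assigned to correct outputs rise. Nothing in the process involves comprehension; it is statistical pattern-matching at enormous scale.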
The design of LLMs is indeed very smart, and often generates intelligent-seeming output, but generative AI chatbots are not sentient; they do not think; they are not reading and responding to conversation. They are computerized language games, designed to pass the Turing Test. This is an important step in the evolution of computational systems and artificial intelligence, but it is worth asking whether they provide the kind of service people think they do.
In a recent experiment, a professional writer tested ChatGPT by giving it a series of prompts aimed at eliciting help in compiling a list of sample essays. The exchange was published on Substack as a series of unedited screenshots. What it shows is alarming: The bot repeatedly misleads the writer, giving entirely fabricated feedback with no real reference to the actual essays shared. As one commenter noted, the bot appeared to be “gaslighting” the human writer, projecting a fictional version of reality and attempting to pass it off as fact.
What all of this means is: We do not have artificially intelligent machines making genuinely informed decisions or “thinking” through pros and cons, risks and rewards, or even what it looks like to simply deliver a good product. We have computers that are trained to give the appearance of doing so.
There are many reports warning that while gen AI systems may allow cost-cutting executives to eliminate entry-level knowledge-economy jobs, the human capital required to make high-level decisions, or to steer the strategic and operational direction of organizations and communities, will be depleted as a result. As the Washington Post reports:
“This [human capital] know-how is transmitted human to human, in real time and in real life. Until now, that hasn’t been a problem, because young employees doing grunt work picked up human capital along with their paycheck.
… If they are wise, [companies] will look beyond the dazzling immediate possibility of smaller payrolls and think about developing the talent they’ll need to stay competitive in the future.”
The BBC, Futurism, and others have reported on growing evidence that AI systems are not reducing costs, as people must be hired to correct the mistakes those systems make.
Meanwhile, gen AI systems have been trained on vast amounts of data “scraped” from the Internet, much of which is covered by copyright, patents, or trademarks, and cannot be lawfully reproduced without some form of legal agreement or fair compensation. Various arguments are being made as to why gen AI firms don’t need to follow those rules—including the idea that their outputs might constitute parody—a proposition many of their end users might find offensive.
Beyond the question of how human capital is developed for advanced professional work, and beyond intellectual property concerns, there is another point that seems to get lost in the speculative investment frenzy around AI: the vast majority of the direct and indirect experience that becomes human knowledge, character, imagination, and judgment is not recorded in any kind of data that AI systems can read.
So, what is going on with AI?
- Are companies developing game-changing innovations, or modest upgrades to mainstream services?
- Are companies and government agencies turning over human judgment to computers, on potentially high-stakes questions that could affect millions of lives, or the direction of nation states?
- Are people sharing information with gen AI firms because they want a human-like interaction but feel timid about sharing a particular need or concern with other people?
- Is privacy protected, or compromised? (The terms and conditions published by gen AI companies often openly state both, raising the question of how that contradiction is resolved in practical terms.)
- Are companies or governments developing and deploying material and operational safeguards that prevent high-stakes decisions being handed over to chatbots?
Among the major questions being faced by governments is how to ensure gen AI bots don’t replace human beings as the primary creators of the flood of information that makes up our everyday discourse. Such a scenario would allow disinformation, or at least misinformation, to become far more prevalent than reported facts and considered commentary.
If little to no publicly available information is fully sourced, fact-checked, and subject to human judgment, even those most skilled at critical thinking might have a hard time sifting through nonsense to detect the most reliable facts and evidence about the state of the world. While that sounds extreme, there is analysis that suggests millions of people are already experiencing this kind of “post-truth” information landscape, where many of the sources of information they favor lead them to other sources of information that do not directly report verified facts, but cite other unverified sources.
As verifiability declines, the need to rely on trust grows. This means people who mistakenly trust flawed—or deliberately distorted—sources of information will find themselves deprived of agency.
There are advanced AI applications that can help to deliver enhanced access to highly precise evidence-based insights, sorting and cross-referencing data in ways that are faster and more reliable than previous models. These are not the flashy gen AI applications that make it feel like you put a few key words into the ether and a detailed work of genius came back at light speed. There is no known way to eliminate errors of reference or potential fabrications from such gen AI outputs; they are inherent to the method.
The kind of advanced AI applications that might provide the most useful, high-value services over the long term, or even in the coming 5 to 10 years, are those that allow for rapid, reliable, complex cross-referencing of hundreds or even thousands of datasets, each operating across divergent timescales or using overlapping and conflicting criteria, to produce actionable insights that are genuinely fact-based. Important uses could include the following (a simplified sketch of such cross-referencing appears after this list):
- weather predictions, including the mapping of solar storms that could affect satellites and related services;
- more complex, compounding ecological and climate modeling, which can then refine weather forecasting and early-warning systems, and provide insight into the Earth-system impacts, and the related risk and resilience dynamics, that flow from specific investments;
- the complex interactive orbital physics involved in travel throughout the solar system;
- fluid dynamics inside the human body, including endocrine and cellular effects directly observed in both healthy and unwell people, and in response to toxins, including those present in the local environment.
All of these could be ways to ensure decision-makers have the best-available distilled insights, based on observational data, without handing over to AI systems the high-stakes decision-making such information would support.
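As a simplified, hypothetical sketch of what such cross-referencing can look like in practice (the datasets, readings, and thresholds below are invented for illustration), the Python example joins an hourly rainfall series with a river-gauge series recorded every three hours, the kind of alignment across divergent timescales described above, and derives a simple, auditable flag from the combined table.

```python
# Hypothetical sketch: aligning two observational datasets recorded on
# different timescales so they can be cross-referenced deterministically.
# All values are invented for illustration.
import pandas as pd

# Hourly rainfall observations.
weather = pd.DataFrame({
    "timestamp": pd.date_range("2025-03-01", periods=6, freq="h"),
    "rainfall_mm": [0.0, 1.2, 4.5, 7.1, 2.0, 0.3],
})

# River-gauge readings taken every three hours by a different agency.
gauge = pd.DataFrame({
    "timestamp": pd.date_range("2025-03-01", periods=3, freq="3h"),
    "river_level_m": [1.10, 1.15, 1.42],
})

# merge_asof pairs each rainfall reading with the most recent gauge reading,
# so the combined table reflects what was actually known at each hour.
combined = pd.merge_asof(
    weather.sort_values("timestamp"),
    gauge.sort_values("timestamp"),
    on="timestamp",
    direction="backward",
)

# A simple, auditable rule derived from the joined data: flag hours where
# heavy rain coincides with an already-elevated river level.
combined["flood_watch"] = (combined["rainfall_mm"] > 4.0) & (combined["river_level_m"] > 1.12)
print(combined)
```

The value of pipelines like this is that every output row can be traced back to specific observations and explicit rules, which is what distinguishes them from generative systems whose outputs cannot be fully verified.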
Activv puts forward these questions, in hopes of prompting a detailed, informed debate about the highest-value areas of AI research and technology development.

