We have become accustomed to treating computer systems as not merely analogous to our human experience, but alike in kind. We talk about computer “memory”. Information systems are afflicted by “viruses” that “spread like wildfire”, and viruses affecting our health “insert malicious code” into our genetic material. “Neural networks” aim to give silicon chips something like a brain’s networked-recall capacity.

These are useful metaphors: they allow us to see and interpret reality in terms familiar enough to come through on the first or second try. But that does not mean they relay absolute fact.

As we have become more dependent on digital devices to store the words and images we create or that are shared with us—or to call up from the wider internet words and images published there for our benefit—we have become more comfortable treating information technology as a repository of knowledge. The data we store in computational systems is a record of knowledge generated by, or about, us. Our science and evidence can “live” in digital systems that are increasingly designed to be adaptable, so that information does not disappear when the physical technology of digital recall evolves.

And yet… a book illustrated 1,000 years ago, or printed 400 years ago, is likely still readable today. The pages of that book contain what we can convert back into living knowledge—if we know both the language and the alphabet in which that knowledge was recorded.

This has long been a point of dispute among technophiles: If our systems are so advanced, why is their ability to retain and reliably reproduce the record of knowledge we entrust to them so vulnerable to interference or to technological displacement? Why are ancient books and scrolls a more stable record, at least in some cases, than our most advanced technological systems? Given that, are we just experimenting with a way of sharing information that might soon be obsolete? And what will become of all these high-minded, ambitious web pages when IT moves on?

Of course, there is the digital archive—the near-universal understanding that meaningful works, those reflective of our time, news and politics, law and scientific discovery, should be stabilized in a kind of evolving digital archive that will serve as the microfilm of our era. The Internet will not be the Alexandria of our day unless we manage to ensure that archive is created and sustained.

But, back to the question of where knowledge resides: Web pages are no more knowledgeable than printed pages; they need human eyes and minds to reconvert what is recorded there into living knowledge—the activity of a conscious mind, judging not only fact, but virtue and value. What is the meaning of this information? What good can it do? If we are not yet sure, can we say that further inquiry is warranted?

Artificial intelligence systems are another step in the evolution of recorded information. They are different from web pages and PDFs, from newspapers and manuscripts, and they have qualities those other systems do not. AI systems appear to engage in a kind of thought process, because we can input information and receive back a vast and diverse range of related information. But they do not, in fact, think.

Some of the most advanced work in AI is being done by people who fear we have already surrendered the game when it comes to understanding the “chain of thought” these systems use. That means, simply: How do they get from point A to point B to Q? Why did they “decide” that a particular output was satisfactory, and by whose standards? Were standards involved at all?

When putting the headline on this story, we considered the phrase “Unconscious machines do not know what they are doing”, but simplified it, because we do not yet know of any “conscious machines”. Some futurists believe machines will inevitably, eventually achieve self-awareness. We have already seen evidence that even some developers of AI systems have lost the ability to see that their systems are NOT, in fact, self-aware. Such is the threat to the general public’s clarity of mind about where the boundary lies.

But no machine we know of is conscious, and unless machines become biological and our understanding of consciousness evolves exponentially, it is unlikely that anyone alive now, or ten generations from now, will meet a conscious machine. This is the point: Machines do not hold knowledge; they do not “know”, because knowing is an active state of conscious engagement with the facts of one’s environment.

Knowledge can be expanded through engagement with artful records of knowledge generated by others—books, articles, websites. AI systems are one of these artful records, but with a complication: it can be difficult to trace a direct line from the way knowledge was recorded to the way “generative AI” systems repurpose that record to create new records. Whether they are drafting your email or giving you medical advice, gen AI systems do not “know” what they are doing, and they cannot make judgments about the benefits or harm that might follow.

There are growing concerns that reliance on AI systems will make people less literate—both in actual terms (less familiar with using letters and words to record, communicate, and access knowledge) and in terms of critical thinking. This is a different concern from the ancient philosopher’s worry about written language making memory less agile and enduring, and it is different from the Luddite concern about machines replacing workers, though both of these worries do attach to AI.

The concern about a “post-literate” society is that people who have lost the ability to seek and recall facts and evidence, or to critically parse the flow of information around them, will be less sovereign citizens and so human rights and freedoms—and the quality of applied scientific knowledge—will be degraded. Each new claim made by those developing frontier technologies must be weighed against:

  1. How much fantasy is there in the metaphorical language used to assert the value of a given innovation?
  2. What comparative advantages are being lost for each one that might be gained?
  3. Who, precisely, is affected by these changes, and how widespread and absolute are they?
  4. Can we trace the provenance of claims of fact, or is that capacity being erased?
  5. If the only boundary-setting tool we have is code, do we still know how to write it, or are new AI systems “inventing” a new language beyond our reach?

The Wall Street Journal reported last week that “the money invested in AI infrastructure in 2023 and 2024 alone requires consumers and companies to buy roughly $800 billion in AI products over the life of these chips and data centers to produce a good investment return.” The same article noted that:

consultants at Bain & Co. estimated the wave of AI infrastructure spending will require $2 trillion in annual AI revenue by 2030. By comparison, that is more than the combined 2024 revenue of Amazon, Apple, Alphabet, Microsoft, Meta and Nvidia, and more than five times the size of the entire global subscription software market.
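
Taking those figures at face value, the arithmetic can be checked directly. Below is a minimal back-of-envelope sketch in Python, using only the numbers quoted above; the implied market size is derived from the “more than five times” claim, not an independently verified figure.

```python
# Back-of-envelope check using only the figures quoted above (WSJ / Bain & Co.).
required_ai_revenue = 2.0e12    # $2 trillion in annual AI revenue needed by 2030
payback_2023_24 = 0.8e12        # ~$800 billion in AI product sales needed to
                                # justify 2023-2024 infrastructure spending alone

# "More than five times the size of the entire global subscription software
# market" implies that market is smaller than $2 trillion / 5.
implied_software_market_ceiling = required_ai_revenue / 5

print(f"Implied subscription-software market: under ${implied_software_market_ceiling / 1e9:,.0f}B per year")
print(f"Sales needed to justify 2023-2024 spending alone: ${payback_2023_24 / 1e9:,.0f}B")
```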

Knowledge is an active sorting of fact from fiction within a conscious mind. We cannot surrender that process to machines if we want understanding, sovereignty, security, and opportunity for humankind. Right now, it is the purveyors of these dubious investments who are telling us about the unprecedented magic of AI systems; we need to be able to judge for ourselves, each of us, with detailed, fact-based understanding, whether it makes sense to hand any function over to these systems.

At the very least, we should make sure that we retain our own right to know.


The Active Value approach to evolving knowledge management systems starts from this premise: Human beings must retain both a legal right to knowledge and the ability to determine whether claims of knowledge they engage with are factual. Given this, we will look for the following in all cases where AI systems are engaged:

  1. Text generated or modified by AI systems should be labeled as AI-sourced, including professional communications such as emails, advertisements, or purported voice recordings (see the sketch after this list).
  2. Claims of knowledge made by AI systems should clearly and appropriately cite sources of information, just as we would expect any professional curator of knowledge to do in their reporting.
  3. Scientific claims of fact should be grounded in research conducted by human beings and science-focused institutions, not simply stated as fact by systems trained to make such claims.
  4. No service should be sold on the premise that generative AI systems are “thinking for you”, or replacing specific areas of human knowledge generation.
  5. Decision insights generated by AI systems should come with an accessible record of the “chain of thought” used to arrive at that decision insight.
  6. AI systems that support public services should prioritize the empowerment of knowledgeable human agents, capable of using moral judgment to get the right result for users of that service.
  7. AI systems trained on intellectual property used without permission or compensation should be replaced by new systems that only use IP with proper benefit to the creators of that IP.
  8. AI platforms should be audited for their engineers’ ability to accurately describe the coding and statistical calculations used by the systems they create to generate outputs.

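To make items 1, 2, and 5 concrete, here is a minimal sketch of what a machine-readable provenance label might look like, written in Python. It is purely illustrative: the AIProvenanceLabel structure and its field names are our own assumptions, not an existing standard.

```python
# A hypothetical provenance label for AI-generated content, sketching items
# 1 (labeling), 2 (source citation), and 5 (an accessible reasoning record).
# Field names are illustrative assumptions, not an existing standard.
from dataclasses import dataclass, field


@dataclass
class AIProvenanceLabel:
    ai_generated: bool                 # item 1: is this content AI-sourced?
    model_identifier: str              # which system produced or modified it
    cited_sources: list[str] = field(default_factory=list)     # item 2
    reasoning_record: list[str] = field(default_factory=list)  # item 5


# Example: a label attached to an AI-drafted email.
label = AIProvenanceLabel(
    ai_generated=True,
    model_identifier="example-model-v1",
    cited_sources=["https://example.org/source-report"],
    reasoning_record=[
        "Summarized the source report.",
        "Drafted the email from that summary.",
    ],
)
print(label)
```

A label like this is only as trustworthy as the platform that populates it, which is why the audit requirement in item 8 matters.
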
Each of these might seem an inconvenience to those trying to compel society to finance the massive, industrial-scale energy and data operation behind AI systems. But platforms that cannot provide these specific reassurances fail to meet a basic standard of trust, with implications for quality of service and user security. Those that can meet these standards will be better positioned to operate services that play a constructive, value-building role.

