
Do advances in AI risk a future of human incompetence?


Imagine you have a magic box. Press it, and it will instantly create work product for you that surpasses the quality of all but the most talented of your peers. Would you press it? How often? Now imagine that everyone has this magic box. What incentives would that create? How would those incentives shape the future of humanity?

These are critical questions that ChatGPT and other generative artificial intelligence (AI) – neural networks capable of generating tailored work product in response to prompts – will require us to answer in short order. Presently, generative AI creates compelling college essays, computer code and art. It has already won an international photography competition, performed a hit single and scored top marks in graduate school entrance exams.

And it is rapidly improving. Its output is also becoming harder to detect: the technology's dramatic evolution, combined with clever user prompting, is erasing the nuances that allow humans and detection programs to identify its use.

While mass adoption of generative AI holds significant promise, many rightly worry about its risks. Experts are already discussing the paramount danger of human extinction. Others estimate that AI could replace approximately 25 percent of jobs in the U.S. and EU (with other jobs arising in the process).

Yet there is a more pernicious risk to our species that must still concern us even if AI benignly seeks our interests: What happens to humanity when successive generations only learn to press the AI “magic box” and thus are wholly incompetent without its assistance?

For even the most brilliant minds, mastering a domain and deeply understanding a topic takes significant time and effort. While ultimately rewarding, this stressful process risks failure and often takes thousands of hours. For the first time in history, an entire generation can skip this process and still progress (at least for a time) in school and work. They can press the magic box and suddenly have work product that rivals the best in their cohort. That is a tempting arrangement, particularly since their peers will likely use AI even if they do not.

Like most Faustian bargains, however, reliance on generative AI comes with a hidden price. Every time you press the box, you are not truly learning — at least not in a way that meaningfully benefits you. You are developing the AI’s neural network, not your own.

Over time, this incompetence compounds as you progress through your career. Few would advocate that entire generations live their lives in powerful robotic exoskeletons. While we would outwardly be stronger, we all intuitively realize that this step would produce physically incompetent people who could not live without the machine. Like a cognitive exoskeleton, reliance on AI – particularly during critical periods of personal and professional development – deprives individuals of mental vitality and the opportunity to truly learn. We become slaves to our own creation, unable to think meaningfully without its aid.

AI proponents argue that its advent simply shifts the nature of learning, rather than depriving humans of mastery. At first glance, this view is appealing. Humans have consistently developed cognitive aids – literacy, the printing press, the internet – that disrupted society but ultimately benefited the user. AI is different. Unlike books and websites, AI does not merely help you find and learn information. It does the work for you. Reading a book does not result in an essay appearing on your desk. Using a high-powered camera does not immediately create any image that you can imagine. Generative AI does just that. It is equivalent to going to the gym, having someone else lift weights for you, and expecting to become fit. That is absurd – yet it is ultimately the argument for unfettered use of AI among new learners.

Raising new generations that rely on AI to work and think critically creates a host of fundamental vulnerabilities. Whoever controls the AI platform (including the AI itself) would effectively control humanity.

It also compounds cyber risks. What happens when a doctor can perform surgery only with AI assistance and either lacks internet access or only has access to a compromised AI?

We must also examine mental health risks before intrinsically tying ourselves to AI. Younger generations are already prone to imposter syndrome, with increased rates of depression and suicide as a result. Imagine a world where this syndrome is not only felt across generations but is entirely justified.

There are also larger implications for innovation. Generative AI can only produce content based on preexisting human work. Is the next fundamental shift in physics something that an AI can deduce based on this body of knowledge, or does it require the intuitive leap of a brilliant human mind? While we do not know the answer to this question, we should ensure that this century’s Einstein has the ability to sit and deeply ponder fundamental truths.

To be clear, these risks do not mean that we should abandon AI. That is both wrongheaded and – due to the Moloch problem, the competitive trap in which no one can afford to abstain while rivals press ahead – effectively impossible. Instead, we must realign human incentives to ensure that our development of artificial intelligence does not hinder the longitudinal development of human intelligence. AI experts rightly worry about AI alignment, where poorly understood incentives can lead to AI harming humanity. Even if we succeed in AI alignment, however, failing to simultaneously align human incentives will still result in a dark and ignorant future.

The consequences of our failure to take this step with social media algorithms – arguably our first contact with powerful AI – are a small demonstration of the risks that we presently face. We must therefore ensure that we properly incentivize human alignment with AI, so that we use it in a way that benefits the species.

Safeguarding human critical thinking will require several steps. Here is the first one: Require all AI platforms to deploy digital watermarks that reliably identify generative AI-produced work.

While OpenAI has announced a clever watermarking system, voluntary industry adoption is insufficient. Absent governmental requirements, the market will incentivize one or more companies to provide watermark-free services. Universal adoption of digital watermarks would allow schools and employers to recognize AI work. This would in turn incentivize using AI as a starting point or “assistant,” rather than a replacement for critical thought.

Digital watermarking would also aid humanity in its arms race against AI disinformation and could even help expose surreptitious AI activity operating at scale. It is a simple and reasonable step that even the most ardent AI advocates should readily accept.
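For a sense of how such a watermark can work under the hood, here is a minimal, purely illustrative sketch in Python of the kind of statistical “green list” scheme discussed in the research literature (e.g., Kirchenbauer et al., 2023) – not a description of OpenAI’s unpublished design. A watermarking generator softly prefers words from a pseudorandom list keyed to context; a detector then flags text whose green-word count is statistically improbable. Every function name, constant and threshold below is an assumption chosen for clarity.

```python
# Illustrative sketch of a statistical text watermark. All names and
# thresholds are hypothetical; no deployed system is described here.
import hashlib
import math

GREEN_FRACTION = 0.5  # hypothetical share of the vocabulary marked "green" at each step

def is_green(prev_word: str, word: str) -> bool:
    """Pseudorandomly assign `word` to a "green list" keyed to the previous word.

    A watermarking generator softly prefers green words while writing; the
    detector below recomputes the same assignment with no access to the model.
    """
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def detect_watermark(text: str, z_threshold: float = 4.0) -> tuple[float, bool]:
    """Return (z_score, flagged). Unwatermarked human text should score near
    z = 0; watermarked text accumulates far more green words than chance allows."""
    words = text.split()
    if len(words) < 2:
        return 0.0, False
    pairs = list(zip(words, words[1:]))
    greens = sum(is_green(prev, cur) for prev, cur in pairs)
    n = len(pairs)
    # z-score of the observed green count under the null hypothesis that each
    # word lands on the green list with probability GREEN_FRACTION.
    mean = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    z_score = (greens - mean) / std
    return z_score, z_score > z_threshold

z, flagged = detect_watermark("sample essay text to score for the watermark signal")
print(f"z = {z:.2f}, watermark detected: {flagged}")
```

Because the detector needs only the secret keying scheme rather than the model itself, schools and employers could run such a check on submitted work. The main caveat, as researchers note, is that heavy paraphrasing can weaken the statistical signal – one reason mandatory, universal adoption matters.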

We named ourselves Homo sapiens, “wise humans.” We are at a moment when we must prove that we are worthy of this title. If we do not align incentives to ensure responsible use of the AI magic box, we will not only prove our own ignorance but damn our children to a fate in which they know only incompetence. We must think critically about this issue and act decisively now, before reliance on our own invention takes that ability from us forever.

Matt Cronin recently served as a founding director in the National Cybersecurity Division at The White House’s Office of the National Cyber Director (ONCD). In that role, Cronin addressed cyber national security threats and developed strategic cyber policy for the nation. He joined ONCD on a detail assignment from the U.S. Department of Justice, where he serves as the National Security & Cybercrime Coordinator. Cronin is also a Fulbright Scholar researching the policy implications of AI and other emerging technologies.

All statements made in this article reflect his own views and opinions and are not necessarily those of the United States of America, The White House, or the U.S. Department of Justice.