The cognitive rust belt is here: Why AI fluency needs anti-stagnation infrastructure
Research indicates that reliance on AI can lead to "cognitive atrophy," with professionals in medicine and tech seeing skills degrade when machines do the thinking. To counter this "Cognitive Rust Belt," organizations must build anti-stagnation infrastructure that keeps humans in the loop.
By Srini Koushik
Most organizations are not failing at AI because they lack tools. They are failing because they are slowly, quietly getting worse at thinking.
That is not a provocation. It is what the research is now telling us, across industries, across professions, across every population that has started leaning on AI to do the cognitive work it used to do for itself. The danger of AI, as The Atlantic recently put it, has moved from "apocalypse to atrophy." That shift from dramatic to gradual is exactly what makes it so hard to see coming.
The Evidence Is Not About Students Anymore
The most widely cited data point on this comes from MIT. Researchers at MIT's Media Lab tracked participants across three groups: those using ChatGPT, those using search engines, and those using no tools at all. ChatGPT users showed lower brain activity, weaker memory recall, and less ownership of their work. Their output was polished. But they learned and retained almost nothing. Brain-only participants showed the strongest neural connectivity. LLM users showed the weakest. Cognitive activity scaled down in direct relation to external tool use.
People looked at that study and said it was about college students writing essays. Fair enough. But the pattern does not stop at the campus gate.
Microsoft Research surveyed 319 knowledge workers across a range of occupations on how generative AI was affecting their thinking. For routine and time-pressured tasks, workers reported reducing their critical thinking effort when AI was in the room. Researchers found that knowledge workers were ceding problem-solving expertise to the system and focusing instead on gathering and integrating responses. As their confidence in AI grew, their engagement with the underlying thinking fell. They felt more productive. They were becoming less capable.
The medical profession has some of the most concrete evidence. A multicenter randomized trial published in The Lancet Gastroenterology & Hepatology found that endoscopists who routinely used AI assistance during colonoscopies saw their adenoma detection rate drop from 28.4% to 22.4% when they reverted to working without it. Detection rates remained stable with AI. Without it, performance had quietly degraded. These are physicians. Trained professionals. People whose entire career is built on judgment. And the skill eroded anyway, without anyone noticing until the machine was removed.
A 2025 review in Artificial Intelligence Review examining AI-induced deskilling in medicine found the same pattern across clinical settings: a progressive disengagement from complex cognitive and procedural tasks, reducing both technical proficiency and nuanced clinical judgment.
The healthcare sector in Singapore saw this coming and acted. The National University Health System began rolling out deliberate AI-free periods, requiring clinicians to perform clinical work and assessments without AI tools, specifically to prevent skill erosion from taking hold. When health systems start scheduling mandatory breaks from AI to preserve human capability, you are past the point of theory.
Software development is showing the same dynamic from a different angle. Researchers have begun framing the problem as "cognitive debt," arguing that when AI generates code, developers may accept it without building the mental model they would have built by writing it themselves. One paper calls this "cognitive surrender." Unlike messy code, eroded understanding does not show up in a linter. Its signals are indirect: resistance to change, unexpected results, slow onboarding despite good documentation.
Microsoft's own New Future of Work Report, published in December 2025, framed the risk plainly: generative AI shifts work from doing to choosing among outputs. Without deliberate effort, employees risk losing core cognitive skills, from planning and judgment to domain-specific expertise.
This is not a student problem. It is not a healthcare problem. It is not a software problem. It is a human problem, spreading across every profession that is now asking AI to carry the cognitive load it used to carry itself.
This Is the Cognitive Rust Belt
The original Rust Belt did not collapse because the steel ran out. It collapsed because automation broke the Think-Do-Learn-Adapt loop without anyone noticing until the damage was irreversible. The jobs went away, and with them went the skills, the institutional knowledge, the economic identity of entire communities. By the time anyone named what was happening, it was already done.
AI will run this play again. Faster. At a scale no previous automation wave could reach. And this time the loop does not break in a factory. It breaks inside the mind of every leader, every clinician, every engineer, every professional who lets the machine think for them long enough that thinking for themselves starts to feel like extra work.
As Jeff Raikes, former Microsoft executive and CEO of the Gates Foundation, wrote this week in Fortune: a society where fewer people develop the capacity for independent critical thought is not just less competitive. It is more vulnerable to manipulation, to misinformation, and to the erosion of the informed citizenship that democracy depends on.
Those are the stakes. Not productivity. Not efficiency. Not whether your quarterly numbers improve. The question is whether the humans inside your organization are getting better or getting worse at the thing that makes them irreplaceable.
The False Choice Everyone Is Making
When I started Right Brain Labs and began using the words "anti-stagnation infrastructure," people would stop me. Is this a training company or an infrastructure company?
Neither. That is the wrong question, and the speed with which it gets asked tells you everything about how the market has framed this problem.
Training assumes the goal is to bring people up to speed. You run a program, people learn things, they go back to their desks. Done. The problem is that AI Fluency is not a knowledge state. It is a capability, and capabilities do not get built in a classroom. They get built through repeated, mentored, real-world practice, and they decay the moment that practice stops. As researcher Janet Frances Rafner at Aarhus University put it, if left unchecked, deskilling can erode the expertise of individuals and the capacity of organizations. You cannot train your way out of a decay problem.
Infrastructure, in the traditional sense, suggests hardware, platforms, technology stack. That is not what I mean either. The infrastructure I am talking about is the support structure around human capability, the surrounding system that does not just get people started but keeps them growing. It is the mentored experience. The tools that force real thinking rather than passive consumption. The operating rhythm that makes AI Fluency a daily practice rather than a one-time event.
Anti-stagnation infrastructure is what you build when you understand that the Cognitive Rust Belt is not a training problem. It is a capability decay problem. And the answer to decay is not a single injection of knowledge; it is a system that keeps humans in the Think-Do-Learn-Adapt loop. That is exactly what the CoThink anti-stagnation infrastructure is designed to do.
What That Actually Looks Like
The first component of the anti-stagnation infrastructure we built at Right Brain Labs is the CoThink AI Fluency Compass.
Most people who start thinking seriously about AI Fluency run into the same problem immediately: they do not know where they actually are. They know they are using AI. They know they should probably be doing more. But they have no honest picture of where their thinking is sharp, where it is soft, and where the rust is already setting in. The Compass solves that.
The AI Fluency Compass gives you a clear, honest read on where you stand across the five durable skills of AI Fluency, which we call the 5Cs, at any point in your journey. Not a one-time snapshot you take at the start and file. A diagnostic you can return to as your practice evolves, as your organization changes, as the AI landscape shifts. The goal is not to score you and congratulate you. The goal is to show you what is there, so you know what to build next.
This matters because decay is invisible until it is not. The knowledge worker who stops challenging AI outputs does not feel the erosion day to day. The clinician who defers to the diagnostic tool does not notice the judgment softening in real time. The Compass gives you a way to check before the rust shows up in your work.
What you do with that picture is where the second component comes in.
The CoThink Simulator is where the real work starts. Most AI assessments tell you what you already know. They measure tool familiarity, surface-level awareness, basic prompt literacy, and they are designed to be completed comfortably. Comfortable assessments do not build capability. They confirm a story you already believe about yourself.
The Simulator is built on a different premise. It puts you inside real organizational scenarios, the kind of messy, ambiguous, high-stakes situations where AI Fluency gets tested. Not hypotheticals. Not case studies from other industries. Situations that mirror the actual problems leaders face when the stakes are real and the right answer is not obvious.
It does not flatter you. Difficulty is where the Think-Do-Learn-Adapt loop engages. When it is too easy, the loop collapses. You consume an output, agree with it, move on, and nothing gets built. When it challenges you, something different happens. You must decide what you think rather than just accepting what the AI produces. That is the muscle we are building, and the Simulator is the first instrument designed specifically to stress-test it.
Together, the Compass and the Simulator form the entry point into the full system. The Compass tells you where you are. The Simulator shows you what you are made of when the pressure is on.
The System Has to Be Bigger Than One Tool
The Simulator is the beginning, not the whole answer. Anti-stagnation infrastructure, by definition, cannot be a single product. Stagnation does not stop because you took one hard assessment. It stops when the conditions for continuous growth are embedded in how you work every day.
That is what the CoThink Framework and Mission Labs are designed to do. They take the diagnostic picture from the Compass and the Simulator and move it into real organizational terrain, with a real cross-functional crew, working on real problems, with a practitioner in the room who has actually done this work. Not a facilitator running a curriculum. Someone who has led organizations, recovered failing programs, and built teams from scratch, who can see where the thinking is being handed off to the machine and pull it back.
The 70-20-10 model is the architecture behind all of this. Ten percent is formal learning, the frame shift, the foundation. Twenty percent is mentored application, where capability gets built in real conditions. Seventy percent is the daily operating rhythm that sustains it after the engagement ends. Most organizations invest almost entirely in the ten percent and wonder why nothing changes. Anti-stagnation infrastructure is designed to fund all three layers, because capability lives in the seventy percent, not the ten.
Why This Matters Now
The organizations that win the next decade will not be the ones with the most AI. They will be the ones whose people stayed in the loop.
The research is clear. The pattern is consistent across students, clinicians, knowledge workers, and software engineers, across anyone who has started offloading the cognitive work that used to make them good at what they do. The Cognitive Rust Belt is not a metaphor. It is an active process, already underway, in organizations that think they are getting ahead by moving fast.
Avoiding it takes more than a training program and good intentions. It takes a system built specifically to keep humans growing, and that system must start now.
Srini Koushik is the CEO and Founder of Right Brain Labs, an AI Innovation Lab and CxO Advisory based in Columbus. An AI Top 50 Thinker and inductee into the CIO and CTO Hall of Fame, he leverages over 35 years of leadership experience across startups and Fortune 100s to balance strategic vision with hands-on execution. Find out more at www.rightbrainlabs.ai.