With great power comes great responsibility
Two camps are forming right in front of our eyes. No, I didn’t turn this into a political newsletter. I’m talking about the two sides of the heated AI debate. You’re either in the AI optimism camp, where everything is so exciting, and AI is going to bring us a new utopia, or you’re in the AI apocalypse camp, where everything is terrible, and we’re going to be slaves to the robots. Or at least that's what the media wants us to think.
Odds are, you fall somewhere in the middle of those two extremes, trying to make sense of the nonsense you’re hearing from both sides. I’m not here to convince you of anything, but I have found some studies that highlight things we should be aware of. After all, AI is an incredibly powerful tool. And every powerful new tool in the past has come with a great deal of responsibility. AI is no different.
The most common AI users are people like me, who use LLMs to help them think, write, brainstorm, design, create, or learn. The problem arises with persistent AI use. “While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving” (Lee et al., 2025). Letting something else think for us actually makes us worse at thinking. Another study reached a similar conclusion: “Our research demonstrates a significant negative correlation between the frequent use of AI tools and critical thinking abilities, mediated by the phenomenon of cognitive offloading” (Gerlich, 2025).
Researchers call this cognitive offloading. What’s more troubling is the possibility of cognitive atrophy from continued cognitive offloading: you can lose the ability to think and problem-solve without an AI tool to help you. The study by Lee et al. also found that “higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort.” That’s actually good news, because if our trust in the tool shapes how hard we think, then trust is a lever we control. Rather than blindly accepting whatever our AI companions produce, we can bring our critical thinking to bear: poke holes, consider alternative solutions, and prod our AI to think critically with us.
There’s no need to fear AI, nor is there a need to declare it our benevolent overlord. But there is a need to build responsible habits around your own AI use. AI is certainly a tool, but we don’t have to reach for that tool to solve every problem we encounter. Sometimes it’s best to break out pen and paper and think through a hard problem on your own.
Are you using AI as a tool or a crutch?
Until next time,
Rick
Sources:
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006
Lee, H.-P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '25). ACM. https://doi.org/10.1145/3706598.3713778
