An MIT study says AI tools may erode critical thinking. No shit. The real question is: are you going to let it happen to you?
They're not wrong. AI can erode critical thinking. But only if you use it like a digital vending machine: insert prompt, receive answer, call it a day.
The Calculator Test
Remember before everyone had a TI-83? You had to do long division by hand. Show your work. Understand the why before you earned the shortcut. Calculators made math faster for people who already understood what they were doing.
Same principle here. AI rewards clear thinkers. It exposes fuzzy ones. If you can't explain what you want or why you want it, the tool just amplifies your confusion at scale.
Teaching the Next Generation (Before It's Too Late)
New grads are walking into offices with ChatGPT muscle memory but no critical thinking calluses. I'm afraid we're about to see a generation that can generate content but can't generate ideas.
But here's what really worries me: they might stop knowing what they actually believe. When you can ask AI to argue any position convincingly, when you can generate thoughtful-sounding takes on demand, the line between your thoughts and its thoughts starts to blur. You become a curator of other people's ideas rather than someone with a point of view.
What actually needs to be taught:
Prompting as structured curiosity: not just "write me a thing" but "help me explore this angle I'm stuck on"
Editing as taste filter: the AI gives you raw material; your judgment makes it worth reading
Better questions over more answers: quantity is easy, quality requires thinking
The skill now is knowing when the answer is actually good. Which brings me to how I try to navigate this myself.
How I Actually Use This Thing
I don't ask AI to do my thinking. I use it to stretch my ideas, get unstuck, and riff through directions faster when I hit a wall.
Think of it as my writer's room. I'm still the showrunner. If anything, it's made my thinking faster and my standards higher because I can iterate through bad ideas quickly to get to the good ones.
Here's what I've learned to avoid:
Copy-pasting without editing
Outsourcing my judgment to something that doesn't know my audience
Using it to skip the hard parts where the actual insight lives
Confusing output with value
What actually works:
Using it to explore angles I hadn't considered
Asking it to poke holes in my thinking
Treating it like a collaborator who never gets tired of brainstorming
Editing as the real work, not just cleanup
Keeping Myself Honest
Sometimes I catch myself reaching for ChatGPT to think for me. It's so easy to outsource the struggle. So here's the personal framework I use to make sure I'm leading with my own thinking:
Origination: Did this idea come from me first, or did I outsource the observation?
Process: Am I using AI to push my thinking further, or to avoid the messy work entirely?
Growth: Am I still learning something new while I write, or just assembling pieces?
How are you teaching yourself (or your team) to use AI without getting intellectually soft? Hit reply and let me know what's working (or what's not).
Elan is the founder of Off-Menu, a design studio that helps startups build brands people actually care about.
Love this!
I wonder how many of us have the skill or discipline to resist the allure of outsourcing critical thinking to AI. I'm not worried about myself, but I'm concerned for the masses and the generations to follow.
As with the unintended consequences of the like button, short-form videos, and other digital inventions, we're likely completely unaware of the downstream positive and negative effects of what's coming. What's different, IMO, is how quickly the consequences will arrive. Strap in.