“Did the calculator make us dumb?”
This is a common refrain I see whenever someone starts asking whether AI (read: Large Language Models) is making us dumber.
Except here’s the thing – being skilled with a calculator doesn’t make you good at math.
Recently, I was speaking with someone who was once an incredibly talented security engineer. It was disheartening to hear them share how much of their skill seems to have atrophied over the last few years – and just how reliant they’ve become on Large Language Models (LLMs) in that time. This person could easily have been a Senior engineer on my team three years ago – but now it seems like they struggle to validate the bugs that AI agents are finding on their behalf.
For me, this served as a warning that LLMs – like the calculator – are just another tool. The moment one forgets this and begins using them to supply (unvalidated) answers is the moment atrophy begins.
I think it’s safe to say that – at least for now – there’s still a measurable difference between synthetic talent and authentic skill.
💭 A few quick thoughts
Focus on slow learning; in today’s world, it’s all too easy to grow complacent when answers come quickly. If you truly want to learn something, it’s going to feel draining – and slow. That’s what real learning feels like in a world filled with engineered dopamine loops. But I can promise you this: it will be worth it 📚
Do The Thing ™️; it’s easy to watch a video about a given topic – but all that can teach you is imitation. Reading about a topic is better, because it requires critically engaging with the content – but that will only ever teach you theory. To really learn something, you have to practice it for yourself (without training wheels). Thank you, Sensei 🙇
Be consistent; this is what differentiates people who achieve their dreams from those who are constantly chasing them. If you can be consistent with whatever it is you’re practicing – even if it’s for a short period of time every day – your growth is guaranteed 🕰️
📚 From the bookshelf
Virtual Unreality by Charles Seife. Published in 2014, this book was a prescient read for its time, given how the last eleven years have played out. If there’s one lesson to be learned from this book, it’s that you can’t trust what you read on the internet. That’s even more true in today’s world of AI summaries and LLM-synthesized content.
The Way of the Ronin by Beverly A. Potter. My copy of this book was published in 1984, and it reads like it could have been published in 2025. The gist here is that change is the only guarantee in one’s career, and preparing for it is essential – which is exactly why I encourage people to start building and maintaining their public brand now.
Taiko by Eiji Yoshikawa. “The summit is believed to be the object of the climb. But its true object—the joy of living—is not in the peak itself, but in the adversities encountered on the way up.” Discover joy in overcoming adversity through your journey in life.
📖 Recently read
Stanford Research Finds That "Therapist" Chatbots Are Encouraging Users' Schizophrenic Delusions and Suicidal Thoughts. “When a person — someone with schizophrenia or schizoaffective disorder, for example, or another psychosis-inducing mental illness — is in the throes of delusion, feeding into the delusional narrative . . . serves to validate and encourage the unbalanced thoughts”. Oof.
A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say. This is honestly very sad, and I sincerely hope that he gets the help he requires. I’m curious whether we’ll see more studies on the effect of the dopamine loop from LLM use, and how it impacts people’s well-being.
‘It’s the most empathetic voice in my life’: How AI is transforming the lives of neurodivergent people. In spite of the above, I’m glad to learn that individuals who experience challenges engaging with neurotypical people are finding relief in this technology. I wonder if this is partly why AI is so prominent within “nerd culture”…?
🤔 This week’s question
How can one safely experiment with LLMs without becoming an inefficient (and expensive) proxy for one?
In a blog post I published three years ago, titled “Life after the AI-pocalypse”, I suggested that a stratification would emerge between skilled people who use LLMs as a tool and those who cede thinking and task completion to machines. So how does one safely experiment with – and become skilled at using – LLMs without atrophying other skills in the process?
I think the root answer to this question is to simply validate the output.
By seeking to validate the answers LLMs provide us, we start engaging critically with the content – and in the process, build and reinforce our own skills. Practicing this is likely to impact business “productivity” measurements – but then again, it appears that LLMs are already negatively impacting productivity by adding 19% more time to code completion. At least engaging critically with LLM code output is likely to help you catch the vulnerabilities they’re introducing.
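To make “validate the output” concrete, here’s a minimal sketch of what that practice can look like. Everything in it is hypothetical – the regex, the premise that an LLM suggested it – but the habit it illustrates is real: before trusting a model’s answer, write your own adversarial checks against it.

```python
import re
import unittest

# Hypothetical example: a regex an LLM suggested for validating URLs.
# The specific pattern doesn't matter -- the habit of testing it does.
LLM_SUGGESTED_URL_PATTERN = re.compile(r"^https?://[\w.-]+(/.*)?$")


class TestSuggestedPattern(unittest.TestCase):
    def test_accepts_an_ordinary_url(self):
        self.assertIsNotNone(
            LLM_SUGGESTED_URL_PATTERN.match("https://example.com/path")
        )

    def test_rejects_a_javascript_scheme(self):
        # An adversarial case the model's answer may never have considered.
        self.assertIsNone(LLM_SUGGESTED_URL_PATTERN.match("javascript:alert(1)"))

    def test_rejects_a_missing_host(self):
        self.assertIsNone(LLM_SUGGESTED_URL_PATTERN.match("http://"))


if __name__ == "__main__":
    unittest.main()
```

The test framework isn’t the point; writing the failure cases yourself is. That’s the exact muscle that atrophies when we accept answers wholesale.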
So – how are you using LLMs today? What have you been able to accomplish by using them?
Have you experienced any skill atrophy resulting from your use of LLMs?
Let me know what you think.
– Keith