AI Bits Episode 6 – Is AI Making Us Dumber?

Will we all become like the students of Midvale School for the Gifted in the Gary Larson cartoon?

One of the major concerns people have about AI is its potential to reduce our ability to learn. The argument goes that if we outsource our thinking to an external process, our internal cognitive processes will diminish. On the surface, this seems like a reasonable concern. After all, we wouldn’t expect to get fitter by having a personal trainer lift weights and run on the treadmill for us. If we outsourced our fitness away from our embodied selves, we’d see no benefits or improvements.

Is this really going to be what AI does for our brains?

However, this analogy doesn’t quite hold up when we consider how we actually use personal trainers. A good trainer doesn’t do the work for you; they guide you, motivate you, and help you optimise your efforts. The problem with AI tools is that they will do exactly what you ask them to do. If you ask an AI to think for you, it will attempt to do so – though it may not do a great job unless you know how to ask the right questions. If you ask it to tell you the answer, it will – whether it actually knows it or not. It won’t always recognise when it doesn’t know, but whatever it tells you will sound convincing!

This is one of the big concerns educators have: if students have an “answer machine” in their pockets, they might just rely on it without engaging in critical thinking. It’s a valid concern, but it’s not a new one. Students have had the ability to Google any answer for quite some time now. 

If the internet didn’t cause this, why would AI?

Interestingly, even with this wealth of information at their fingertips, I’ve noticed in my classroom that students often still ask me to give them answers rather than looking things up themselves. Even in rooms with dictionaries, children still ask me to define words for them. While this might be understandable in contexts where English isn’t the first language or if students aren’t familiar with dictionary work, I often see this behaviour even when students have devices open in front of them.

I don’t think AI will necessarily make this situation better or worse. The lack of “grit” already seems to be baked into the educational system. What we need to do is integrate AI into our teaching in a way that prevents it from becoming just another automatic crutch. We need to ensure that students know how to use AI as an appropriate tool, not simply as a shortcut to answers.

We’ve seen this problem with Google already. In an effort to be helpful, Google now often puts summaries of the top results in highlighted boxes at the head of the page. Distressingly, I’ve found students taking these summaries as definitive answers, without understanding how Google works or critically evaluating the information.

The big question is: why are students doing this?

The root lies in our education system itself. We’ve become endpoint-oriented, detached from relevance and reality. We’re not teaching students in a way that makes them feel the material is valuable to learn. Many educators, myself included, try to make things interesting and engaging in our classrooms, but the system isn’t helping. There’s such a focus on measurable, verifiable targets and KPIs that these numbers, rather than learning itself, have become the end goal of the process.

Have we removed failure from the equation?

Sure, we’ve become experts at getting students through the exam, achieving the results, tracking the targets and supporting progress at every turn. We’ve built a system where failure is so undesirable that we’ve removed it from the learning process in favour of predictable flight paths and milestones. We engineer ‘success’, whether it is meaningful or not.

If we continue with this end-focused, numerically evaluated, product-driven education system, AI will simply become a cheap shortcut to the goal. Unfortunately, many schools will not care, as long as they get the numbers and results they need.

The productive struggle – no pain, no gain!

Interestingly, there’s research supporting the idea that some friction in learning is not only desirable but may be necessary, in much the same way that incremental increases in required effort improve fitness. “Productive failure” has been shown to lead to better long-term retention of knowledge and skills – which is surely the very aim of education, not the simple acquisition of certificates and grades.

A study published in the Journal of the Learning Sciences by Manu Kapur and Katerine Bielaczyc suggested that when students work in groups on ill-structured tasks that are quite difficult to complete, they actually perform better in subsequent individual learning tasks than students who are initially given well-structured tasks they can complete with relatively low cognitive effort.

The key finding is that students who struggled and even “failed” in their initial attempts were better equipped for individual learning later on. Even when their groups completed the task only partially, or not at all, these students had to think harder and engage more deeply with the material. This cognitive struggle, even when it doesn’t immediately lead to success, seems to prepare the mind for more effective learning in the future.

This research has important implications for how we think about AI in education. While AI tools can provide quick and easy answers, they might be depriving students of the valuable struggle that leads to deeper understanding and better long-term retention. Our challenge as educators is to find ways to incorporate AI that don’t eliminate this productive failure, but rather enhance and support it.

For instance, we might use AI to create more complex, ill-structured problems for students to grapple with, rather than using it to simplify tasks. Or we could use AI to provide targeted, minimal hints that keep students in their zone of proximal development without giving away the entire solution.

The goal isn’t to make learning unnecessarily difficult, but to create an environment where students are challenged to think critically, problem-solve, and engage deeply with the material. AI can be a powerful tool in creating and supporting these learning experiences, but it needs to be used thoughtfully and strategically.

By integrating AI thoughtfully into our teaching practices, critically evaluating its impact and helping students understand its strengths and limitations, we can prepare them not just to coexist with AI, but to use it as a powerful tool for enhancing their own cognitive abilities. The goal isn’t to compete with AI, nor is it to ignore it, but to use it as a means of augmenting and expanding human intelligence and creativity. 

The risk of cognitive outsourcing to AI is real, but not insurmountable. Just as we’ve adapted to technological advances in physical labour – running marathons despite having cars, lifting weights despite having machinery – we will need to evolve our approach to mental exertion. Schools, as our ‘brain gyms’ (I wonder how many of you remember when that first landed in the classroom), are the ideal starting point for this evolution. They need to be places where we don’t just learn facts but also strengthen our critical and creative faculties, ensuring that AI enhances rather than replaces human thinking.

Citations:

  • Bjork, R.A. and Bjork, E.L., 2011. Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In: M.A. Gernsbacher, R.W. Pew, L.M. Hough and J.R. Pomerantz, eds. Psychology and the Real World: Essays Illustrating Fundamental Contributions to Society. New York: Worth Publishers, pp.56-64.
  • Kapur, M., 2008. Productive failure. Cognition and Instruction, 26(3), pp.379-424.
  • Kapur, M. and Bielaczyc, K., 2012. Designing for productive failure. Journal of the Learning Sciences, 21(1), pp.45-83.
  • Kornell, N., Hays, M.J. and Bjork, R.A., 2009. Unsuccessful retrieval attempts enhance subsequent learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(4), pp.989-998.
  • Schmidt, R.A. and Bjork, R.A., 1992. New conceptualizations of practice: Common principles in three paradigms suggest new concepts for training. Psychological Science, 3(4), pp.207-217.


I’m The AI English Teacher

As a practitioner with over 25 years’ experience, I aim to help teachers find useful resources and create a space for a constructive dialogue about AI, EdTech and the future of education.
