The AI discourse is so wrong
We're wasting time debating whether AI is useless or dangerous instead of figuring out how to use it well in schools
Instructional leaders frequently encounter polarized perspectives on artificial intelligence in schools. These views typically fall into two camps: those who dismiss AI as merely another technological fad destined to fade, and those who view it as an existential threat to authentic learning and teaching. Casey Newton at Platformer captures this division well: people believe either that AI is overrated and ineffective ("fake and sucks") or that it poses genuine risks to education ("real and dangerous"). The resulting polarization undermines our ability to have productive conversations about AI's appropriate role in education.
This binary thinking creates significant barriers to thoughtful preparation for technological change. When we approach AI discussions from these entrenched positions, we limit opportunities to examine nuanced applications that might genuinely enhance learning while addressing legitimate concerns. Our professional discourse has become trapped in abstract debates rather than focusing on concrete implementations that could benefit students and teachers alike.
Educational institutions require leadership that moves beyond reflexive skepticism to engage with evidence-based perspectives on AI integration. Too often, our professional discourse positions technology as something teachers must defend against rather than a tool they might thoughtfully incorporate. To prepare our schools for technological transformation, we must foster more sophisticated conversations about AI's educational role.
The problem with our current AI discourse
Recent research by Bearman et al. (2023) reveals how limiting our professional discourse around AI has become. Their analysis of leading education journals shows that discussions of AI are "often vague and open to debate" (p. 369), focusing more on abstract threats than concrete applications. As leaders charged with guiding educational innovation, we should be particularly concerned that despite increasing references to AI since 2020, these discussions "are still not substantively concerned with AI itself" (p. 379).
This vagueness isn't just an academic problem - it directly impacts how we frame AI to our teachers and stakeholders. When we position AI as primarily a threat to teacher authority or authentic learning, we create what Bearman et al. describe as a situation where "staff identities are disempowered by AI and AI-embedded technologies, losing agency and authority" (p. 379). This framing makes it harder for teachers to envision constructive ways to incorporate AI into their practice.
Consider how often our professional development sessions or policy discussions start from a defensive posture, assuming we must protect our educational practices from AI rather than exploring how it might enhance them. This defensive stance reflects what the researchers identify as a "Discourse of imperative response" that treats AI as a threat requiring resistance rather than an opportunity requiring thoughtful engagement.
Moving past outdated frameworks
Many of our current approaches to educational technology still rely on outdated theoretical frameworks that fuel deficit thinking about AI. Kim et al. (2024) challenge us to embrace "an account of the learning brain that is predictive (not reactive), embodied, neuronally plastic, non-linear, dynamically self-organising, and inherently emotional" (p. 2). This newer understanding of how learning actually works opens up exciting possibilities for thoughtful AI integration.
As instructional leaders, we should pay particular attention to their finding that "attention, active engagement, error feedback, and consolidation are the true 'secret ingredients of successful learning'" (p. 8). Rather than worrying about AI replacing these fundamental processes, we could be exploring how it might enhance them. The authors note that when teachers are presented with evidence-based perspectives on learning and technology, they "love it, are fascinated by it...and are quick to see its relevance for their own professional classroom practice" (p. 9).
This suggests that our role as leaders isn't to protect teachers from AI, but to help them understand and leverage it effectively. We need to move past simplistic debates about whether AI will replace teachers and toward nuanced discussions of how it might augment and enhance their work.
The real costs of AI skepticism
Newton warns that dismissive attitudes toward AI create their own risks. While many leaders feel they're taking a prudent stance by remaining skeptical, he argues they're actually "staring at the floor of AI's current abilities, while each day the actual practitioners are successfully raising the ceiling." This gap between perception and reality can leave our institutions unprepared for rapid technological change.
Consider the concrete examples Newton provides of AI's current educational impact, from preserving endangered languages to enabling innovative forms of educational access. These aren't hypothetical future capabilities - they're happening now. When we maintain what Newton calls "the phony comfort of AI skepticism," we risk missing opportunities to thoughtfully incorporate beneficial AI applications while also failing to prepare for legitimate challenges.
As educational leaders, we need to be particularly attentive to this risk. Our skepticism might feel like responsible caution, but if it prevents us from engaging seriously with AI's potential and challenges, we're doing our institutions a disservice.
Implications for leadership practice
Moving forward, we need to reshape how we approach AI in our leadership roles. This means grounding discussions in evidence rather than anxiety, building teacher agency rather than defensive postures, and developing frameworks for thoughtful AI integration that recognize both opportunities and legitimate concerns.
The challenge isn't to convince everyone that AI is universally beneficial - it's to foster more nuanced, evidence-based discussions about its role in education. Only then can we develop approaches that effectively leverage AI's capabilities while thoughtfully addressing legitimate concerns about its implementation. As leaders, we have a responsibility to move these conversations beyond deficit thinking and toward productive engagement with the educational technologies that will increasingly shape our schools' futures.
References
Bearman, M., Ryan, J., & Ajjawi, R. (2023). Discourses of artificial intelligence in higher education: A critical literature review. Higher Education, 86, 369-385.
Kim, M., Duncan, C., Yip, S., & Sankey, D. (2024). Beyond the theoretical and pedagogical constraints of cognitive load theory, and towards a new cognitive philosophy in education. Educational Philosophy and Theory.
Newton, C. (2024). The phony comforts of AI skepticism. Platformer.
I agree that the discussion about AI and education needs to move towards a more detailed examination of this relationship. I suggest we stop thinking of it as one discussion when it is really two related but separate discussions. The first is about AI in the workplace: how school employees use AI to support their work and manage their workload. The second is about AI and student learning. We also need to be far more accurate in the terminology we use; in nearly every conversation I have, people say AI when they are really talking about LLMs. AI in other forms is surfacing in schools; however, LLMs are the most prevalent tools used in my school today. It does a disservice to the debate if we are not precise.
At the same time as this post popped into my inbox, a parallel blog post from Daisy Christodoulou landed. Daisy included links to slides from two presentations she gave recently. In those slides, she advised that when assessing the impact of AI on any process, we should start by examining the process itself and its purpose, and only then ask how AI enhances it. That resonates with me as a rule of thumb.
So let's take the second table in this post as an example. I checked and couldn't find the table in any of the referenced articles, so I am guessing it was created to illustrate the potential of aligning a model or theory of learning with potential AI benefits. It made me reflect on an ad hoc conversation I have been having with faculty. Most schools have some set of principles about teaching, but very few have a school-accepted model of learning. It seems to me that developing such a model is going to be foundational sooner or later. Only once that is in place can you examine how AI enhances or disrupts learning in your context. That seems to be the nuance that is needed. I wish I could say I find the second table a convincing example, but for me it falls short. For example, the four pillars of learning make sense, but where is memory? Or thinking?
Perhaps I am being very picky at a time when we are still figuring out the foundations. Perhaps I am wrong - actually, it is highly possible I am wrong in some or all aspects, as I am just a layman in the area. That's why we need to dig into this and actually do the work required.