Discussion about this post

Marcus Luther

While I support a lot of the points you raise within this piece—the nuance we need in considering how AI is implemented in different contexts and practices; the ubiquity of "cognitive offloading" across many non-AI practices, too, etc.—I was a bit curious about this piece overall, given your other writings.

Much of your emphasis with The Science of Dialogue establishes a pretty clear standard for teachers: our practices need to be driven by data and research, and you've articulated the case for this frequently. (And convincingly so; posts like yours on "providing claims" have been really helpful in my own reflections!)

Yet here, it feels like you reverse that standard: you seem to argue that objections to the use of AI in teaching practices are thin on evidence/research—an impossible barrier to overcome, I'd add, given how new this is!—and almost seem to scapegoat what to me feel like honest critiques by calling them "medicalizing tool use." Couldn't the same be said of much of the language around Science of Learning discourse?

I bring this up because right now this feels like the double standard I'm recognizing in broader education discourse: an insistence upon evidence-based and research-driven practices EXCEPT with AI, where it is incumbent upon us to experiment and explore without being held back by worries about proof of effectiveness, potential downsides, etc.

Just trying to square these two points across your writings! And I continue to appreciate the way you push my thinking as a teacher—so please know that this comes from genuine curiosity and appreciation around exchange of ideas.
