6 Comments
Michael G Wagner

Great post!

Katerina

Just a quick thought: even when using a calculator, you’re constantly thinking, “Does this answer make sense? Did I miss a decimal? Did I multiply instead of divide?” Just as you so eloquently explained in this article, human oversight and reflection are still necessary.

Javier Santana

I think that's a perfect example of how supposedly "higher-order" thinking skills aren't affected by offloading arithmetic, but are at risk when we offload wider language use (including reasoning, structure, and idea authorship). With LLMs, learners don't naturally activate their brain's error-detection mechanisms, including attention to detail. For LLMs to help us learn, we should use them intentionally, in ways that create meaningful effort.

Periodise

It also highlights the importance of habitual thinking strategies and the issues that arise from the black-box problem in behavioural psychology. Two people can read the same page and have totally different learning outcomes. Student A could be mentally asking, "Why does the author believe this? How do I know whether this is true or false? How does this idea compare to my prior knowledge of the topic?" Student B, on the other hand, could simply be thinking, "What am I supposed to learn from this passage?"

Periodise

This was quite enjoyable to read. I had been firmly on the side of the AI optimists, arguing that AI would shift priorities towards higher-order, deeper processing and thinking by reducing the 'grunt work' of lower-order tasks. It's a very interesting idea that AI may make us less aware of skill atrophy, and that seems quite consistent with the illusion-of-competence effect and Dunning–Kruger.

My biggest problem with the Lemov critique of Bloom's Taxonomy is that while lower-order knowledge IS needed for higher-order thinking, higher-order thinking strategies can also facilitate the acquisition of lower-order knowledge by encouraging deeper processing. As you quote, "you cannot connect the dots if you don't have any dots." That's fair, but it depends on the purpose of what you're learning. Is it mostly declarative or procedural?

It makes sense to focus more time on 'remembering' and 'understanding' in the context of language learning. On the other hand, if you're making business decisions, 'analysis', 'evaluation', and 'creation' appear to be more valuable, though both examples will require aspects of each knowledge type.

Jeff Treistman

I really appreciate level-headed discussions about AI use; I’ve been writing about it as well. I don’t know if you have ever heard Amy Kurzweil (yes, Ray’s daughter) on TED or her interview on the Amusing Jews podcast #101, but her recommendation is to use AI as a creative assistant: you should be in dialogue with these tools and skeptical of what they produce. They are bullshit-generating machines, after all. I’ve been thinking about LLMs as assistive technology, like my hearing aids, but even that analogy breaks down: the hearing aids have a clearly limited function, whereas LLMs aren’t limited in this way. I look forward to more of your thoughts.
