
When I was a kid, I was pretty good at maths. I could do quite complex calculations in my head, visualising the numbers. But at school, my teacher Mr. Cutcliffe (who went by the nickname Sigi for reasons I never quite understood) would get annoyed with me, even though I provided the correct answers. Sigi would stalk the silent classroom during tests, looking over students’ shoulders, tapping the chalkboard eraser against his palm and occasionally groaning. He emitted one such groan over my exam paper – it was accompanied by a vague whiff of kipper and tobacco. He groaned again and I watched him move on to the next student, wondering what I had messed up on my sheet. Suddenly, the chalkboard eraser hit me in the head. Sigi was glaring at me from the front of the classroom, his arm still curved around his body in the shape of a pitcher’s follow-through. “Show your workings, Asher!” I protested. I’d checked my answers and they were correct. “I don’t bloody care. Show your workings or I’ll mark your paper zero even if you have every answer correct.” He glared at me. “Now bring me my eraser.”
So I learned to show my workings. A couple of years later, I took an examination in the hopes of winning a prestigious scholarship. The maths paper was really tough, and I was pretty sure I’d failed to solve the main problem. It turned out I had indeed got the problem wrong. It also turned out that none of the other candidates had answered correctly either. But the examiners could see from my workings that I understood the problem and was working toward its solution in a valuable way. I wound up getting the scholarship. All thanks to Sigi and showing my workings.
I was reminded of Sigi this week when I read the Guardian’s coverage of a new Oxford University Press report on AI use in UK schools. The findings: while only 2% of students aged 13-18 don’t use AI for schoolwork and 80% use it regularly, 62% say AI has had a negative impact on their skills and development. One in four agreed that AI “makes it too easy for me to find the answers without doing the work myself,” while 12% said it “limits my creative thinking.”
This builds on a constant drumbeat of anxiety around student AI use. A 2024 Wharton study found students using ChatGPT improved their essays most but didn’t learn more about the topic they wrote about. A December 2024 study published in the British Journal of Educational Technology found students with AI access displayed “metacognitive laziness”—a dependence on AI assistance that leads to offloading thought processes without engaging in synthesis or analysis. An MIT Media Lab study using EEG to track brain activity found that ChatGPT users showed the lowest neural engagement and “consistently underperformed at neural, linguistic, and behavioral levels,” with brain connectivity systematically scaling down with the amount of external support. TIME Magazine profiled a college senior who said writing now “feels hollow” compared to her freshman year when her mind “felt sharper”—her grades climbing even as she felt increasingly empty about it.*
*In the spirit of ‘showing my workings’, I need to acknowledge here that this paragraph summarising recent findings around use of AI in the classroom was researched and written by Claude. I asked Claude to find the articles and studies I remembered, and then write a summary paragraph with links to the studies or articles. The first two drafts contained misattributions, incorrect dates and incorrect links – ultimately, I’m not sure I saved any time by asking Claude to assemble the information for me, not least because with each incorrect draft I merrily dropped the text into my Google doc and only noticed the mistakes as I proofed the piece. Almost hoist by my own petard!
All these studies point to the same thing: AI makes schoolwork easier, and that ease makes the mind lazy. Sixty percent of students in the OUP report said AI tools encourage copying rather than original work. They already sense this isn’t helpful for their skills in the long run. As OUP’s Alexandra Tomescu noted, “That’s a very deep understanding of what your schoolwork is meant to help you do, and what the pitfalls and benefits are with this technology.”
Is AI itself the problem? Or is it how we’re letting students use it—without guidance, without frameworks for what constitutes good use versus easy shortcuts? Or is it something even deeper: an educational system that values product over process, qualifications over skills, and incentivises using AI simply to pass the test rather than aspiring to genuine mastery?
Whether we like it or not, generative AI is here to stay. So we need to be teaching our children how to use these tools in ways that allow them to achieve more, not less. And that’s not about the product you generate from AI. It’s about the process that got you there.
This is what reminded me of Sigi’s demand that I show my workings.
I was fortunate that the examiners back then were as interested in how I thought as in what I already knew. But isn’t this, at its core, what we want our educational system to do? Value process over product, teach our young to think for themselves, be critical, creative and curious?
And these are also the core skills we all need to use AI in human ways, to invent with it rather than repeat with it. So it’s troubling that students already sense these tools are eroding exactly those skills.
How do we change this? Ban students from using AI tools in the classroom? Or teach them to use these tools the right way?
And how do we do that? By valuing process over product and making students show their AI workings. Let every student access GPT, Gemini or Claude in Socratic mode, ready to discuss, challenge and inform. The end result isn’t an essay or a summary. It’s the entire discussion. You get graded on your process, how you used the AI, and what learning journey you achieved through its use.
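To make that concrete, here’s a minimal sketch of what ‘Socratic mode’ plus a submittable transcript could look like, using the Anthropic Python SDK. The system prompt, model choice and transcript format are illustrative assumptions of mine, not any real classroom product.

```python
# A minimal sketch, not a product: a Socratic tutor that refuses to do the
# work for the student and saves the full conversation for the teacher.
import json
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Illustrative system prompt: the real craft is in writing this well.
SOCRATIC_PROMPT = (
    "You are a Socratic tutor. Never write the student's work for them. "
    "Respond with guiding questions, ask the student to justify their "
    "reasoning, and point out gaps rather than filling them."
)

transcript = []  # the whole back-and-forth is what gets submitted and graded

def ask(student_message: str) -> str:
    transcript.append({"role": "user", "content": student_message})
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # substitute whatever model is current
        max_tokens=1024,
        system=SOCRATIC_PROMPT,
        messages=transcript,
    )
    reply = response.content[0].text
    transcript.append({"role": "assistant", "content": reply})
    return reply

print(ask("Why does Hamlet delay killing Claudius?"))

# The student hands in transcript.json, not a polished essay.
with open("transcript.json", "w") as f:
    json.dump(transcript, f, indent=2)
```

The point of the sketch is the last three lines: the artefact that leaves the session is the conversation itself, which is what the teacher grades.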
What We Get Wrong About Education
Education has always obsessed over outputs. Correct answers. Finished essays. Polished presentations. We measure what students produce, not how they got there.
But the actual learning happens in the messy middle—the struggle with a problem, the dead ends, the moments of confusion followed by sudden clarity. We’ve been training students to hide their thinking and present only polished results. The process that actually builds understanding gets treated as scaffolding to be removed before anyone sees the final product.
AI just makes this tension impossible to ignore. When a student can generate a perfect-looking essay in thirty seconds, the finished essay no longer has any value on its own as a benchmark of understanding.
What the Transcript Reveals
Turning in the AI conversation alongside the final essay reveals both the thought process and the skills used. The transcript shows how a student framed their initial question—whether they asked something thoughtful or just “write my essay about Hamlet.” It shows how they responded when the AI gave them something that didn’t quite work, whether they accepted it uncritically or pushed back. It reveals how they refined their thinking through iteration, building toward something rather than just accepting the first output.
Most importantly, it shows whether they used AI as a shortcut or as a thinking partner. A transcript that reads “write this for me” followed by copy-paste is obviously different from a genuine exploration where the student is wrestling with ideas, testing understanding, and building toward their own synthesis.
You can’t fake the process. Either you engaged or you didn’t.
What This Actually Teaches
When students know they have to turn in the transcript, they learn critical thinking—how to evaluate AI responses and recognise when something is right, wrong, or subtly misleading. They develop question formulation skills, learning to ask better questions that generate more useful responses rather than generic outputs. This is precisely the skill they’ll need in professional life, where knowing how to work with AI will matter far more than treating it as a black box.
This also gives teachers something concrete to work with. Teachers can point to specific moments where a student asked a clarifying question, challenged an AI response, or built on an insight in sophisticated ways. They can help students improve both their AI collaboration skills and their critical independent thinking by analysing what worked and what didn’t in the transcripts themselves.
Time for a Reality Check
Of course, “show your AI workings” isn’t a magic fix. It would demand new habits and infrastructure, as well as data protection safeguards. Teachers already work under intense time pressure, so no one can realistically be expected to wade through full transcripts for every assignment. Assessment design would need to evolve too. Some students might try to game the system, crafting performative “Socratic” exchanges instead of genuine inquiry – though you could argue that this requires a level of creative ingenuity with its own merits; let’s not sidebar into that moral quagmire. Oral follow-ups or quick in-class defences could help surface real understanding, though that too adds workload.
None of these challenges are trivial. But they’re solvable if we treat AI literacy as a core skill and adjust the process of education accordingly, incorporating personalised AI tutoring into the learning flow and freeing up teachers for more quality human-to-human interactions with their students.
The Shift That Matters
Information mastery is obsolete. The differentiators are critical thinking, creative inquiry, emotional and intellectual exploration. We need to teach exceptional process powered by these human skills in an AI-enabled world. The students calling for guidance aren’t asking us to ban AI tools. They’re asking us to help them use these tools well, to show them what great use looks like versus just making things easier to get done. And they are also asking for permission. Not to cheat – but to do better. Judging the process rather than the product of their AI use is the way to achieve this.
Programs like Claude for Education are already building around the Socratic method, designed to guide students through questioning and exploration rather than simply providing answers. This conversational, inquiry-based approach should become a requirement for any AI deployed in classroom settings—not just because it prevents lazy shortcuts, but because it models the kind of thinking we’re trying to teach.
The solution isn’t to ban AI. It’s to redesign assessment to value what actually matters. Show your workings. Whether it’s long division on paper or a conversation with Claude exploring a topic, the principle remains the same.
Looking back now, it occurs to me that maybe my teacher’s nickname was not ‘Sigi’ at all but ‘Ciggy’. I can picture him now, sucking on a smouldering Woodbine out in front of the assembly hall between lessons. Most of Sigi/Ciggy’s methods have, thankfully, fallen out of use. Chalk erasers are obsolete and hence not available for throwing and, hopefully, teachers have sweeter-smelling breath. But the principle he was teaching me hasn’t.
Make the thinking visible. Share your AI conversation with your teachers, not just the final output. Show your workings.
Next time: My contract at Meta Superintelligence Labs is coming to an end. What’s my plan for what comes next?