Technology Solutionism and ChatGPT

The difference between shiny new tools and understanding


Three weeks ago I wrote about five pathologies of EdTech discourse on generative AI, and one of them has been on display recently.

Technology Solutionism: This is an endemic problem in EdTech. Both vendors and users tend to think of a tool as being the solution to a problem. Sometimes the tool can be part of a solution, sometimes it can’t. But it is always only a part of the solution. There are always changes and processes and people and sustained effort that are required to make a tool more useful in improving student outcomes, and the technology solutionism that we indulge in prevents us from seeing that.

[Image: Midjourney creation]

Following the general release of ChatGPT and its use by students, we are seeing the release of several products designed to catch student use of the tool. Well, make that tools: Google Bard will be worth watching as well, though it has not captured anything like ChatGPT’s usage.

Rushed Technology Solutions

Apart from problems with accuracy (claims of detection accuracy range from 26% to 98%, although your actual results may vary), there are additional issues with trying to solve the problem of generative AI use with yet another tool, issues typical of the kinds of things that bedevil technology solutionism. The detection tools are being used in automatic and unthinking ways, and they engender a lack of trust between students and instructors.
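To see why even the high end of that accuracy range can erode trust when detection runs automatically, a back-of-the-envelope calculation helps. This is an illustrative sketch only: the assumed 10% rate of AI-written submissions is made up for the example, and “accuracy” is treated loosely as both the true-positive and true-negative rate.

```python
# Illustrative arithmetic only: 98% is the top of the accuracy range quoted
# above, and the 10% share of AI-written submissions is an assumption.
ai_rate = 0.10       # assumed share of submissions actually written with AI
accuracy = 0.98      # treated as both true-positive and true-negative rate

flagged_ai = ai_rate * accuracy                  # AI essays correctly flagged
flagged_human = (1 - ai_rate) * (1 - accuracy)   # human essays wrongly flagged
false_share = flagged_human / (flagged_ai + flagged_human)

print(f"{false_share:.0%} of flagged essays would be false accusations")  # ~16%
```

Even under these generous assumptions, roughly one in six accusations lands on an innocent student, which is exactly the trust problem described above.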

I believe the more important risk, however, is that by rushing in with a solution, instructors and administrators are likely missing both a deeper understanding of the problem and the opportunity to craft better solutions.

But before I describe a better way forward, I think it worthwhile to make clear that cheating is a real problem. It undermines the integrity of academic institutions and shortchanges students in their learning. New technology creates new opportunities and problems, and we need to take academic integrity seriously. But if we oversimplify the situation, A) our response won’t work and B) we’ll miss opportunities to improve academic instruction and assessments. Generative AI is not going away, and simply detecting and banning it is short-sighted. Emphasis on the simply.

Student Insights

In the case of generative AI, we already are seeing some valuable insights from students.

There are several great articles by, and interviews with, students about how they use generative AI such as ChatGPT. One of the best is a recent piece in the Chronicle of Higher Education by Owen Kichizo Terry, an undergraduate at Columbia University. He argues that students are using ChatGPT more than most faculty and staff assume, that there isn’t a distinctive voice to detect in this use case, and that therefore AI use can’t easily be detected by the tools described above. Instead, he explains:

The common fear among teachers is that AI is actually writing our essays for us, but that isn’t what happens. You can hand ChatGPT a prompt and ask it for a finished product, but you’ll probably get an essay with a very general claim, middle-school-level sentence structure, and half as many words as you wanted. The more effective, and increasingly popular, strategy is to have the AI walk you through the writing process step by step.

So we are already seeing students use generative AI to create outlines and as a personalized writing coach.
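Terry’s description maps neatly onto how chat APIs actually work. Below is a minimal sketch of that step-by-step pattern, assuming the openai Python package and an API key; the staged prompts are illustrative assumptions, not a transcript of any student’s workflow.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
history = [{"role": "system",
            "content": "You are a writing coach. Guide the student; "
                       "do not write the essay for them."}]

def step(prompt: str) -> str:
    # Each call extends the same conversation, so later steps build on
    # earlier ones -- the "walk you through the process" pattern.
    history.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(step("Help me sharpen this thesis: social media weakens local news."))
print(step("Suggest a five-point outline arguing that thesis."))
print(step("Critique the argument in my draft of point one: ..."))
```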

Other students describe using ChatGPT as a debate foil, to give them another side of an issue. But more frequently students describe using ChatGPT as a source of answers. As one student explained it:

I used and experimented with ChatGPT and it is extremely useful for assignments. Not just because it answers all of your questions that you ask, but it completely destroys the use of tutors.

Owen Terry, whose Chronicle of Higher Education piece I quoted above, believes that many uses of ChatGPT are in fact cheating. He thinks that, at the very least, its existence and use have eroded the value of assessment types like essays for teaching students how to think. He is also not hopeful about the possibility of finding a good way forward.

As it stands right now, our systems don’t …. fully lean into AI and teach how to best use it, and we don’t fully prohibit it to keep it from interfering with exercises in critical thinking. We’re at an awkward middle ground where nobody knows what to do, where very few people in power even understand that something is wrong. … We’re not being forced to think anymore.

Terry suggests that maybe we need a two-part system.

  • Assessments where students are allowed to use AI.

  • Assessments where they are not, and where skills and knowledge are instead assessed by oral or in-person exams.

I agree with Terry that we are in a situation where an old way of doing things is ending but a new way has not yet been born, although I have a different perspective on recommendations.

Student Usage as Pedagogical Clues

I think that the ways students use AI can provide clues about how to design new types of instruction and assessment that teach people how to think critically and create. Some landscape architects look at where people walk – where they create what are called “desire paths” – and build walkways there rather than dictating a particular route that is likely to be ignored. As educators we need to do the same thing: find in student usage patterns clues about how students actually work, and build pedagogy around those patterns wherever possible.

Answers and Tutors

Students often use ChatGPT to find answers. Sometimes those answers are going to be wrong, but that instinct is very powerful. I am haunted (in a good way) by an interview I had with a student several years ago at the University of Illinois. She told me she was currently in a calculus class that was crap. She went to class routinely because she wanted to know what was going to be on the exam. While she was in class, she would put on headphones and start googling other explanations. She would find online lectures from other institutions, she would find online textbooks, she would find general information on the web, and she would then use all of those sources to explain the concepts that were being (badly) taught in the actual classroom.

Why don’t we build on this instinct to facilitate and amplify student learning? We could create smaller language models of disciplinary knowledge that students can interrogate on their own, and we could build assessments around who can find and evaluate the most useful information. In other words, we could engage students in constructing knowledge and have them use ChatGPT as one of the tools in their arsenal for doing so.
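To make that idea concrete, here is a minimal sketch of one common approximation of a “smaller language model of disciplinary knowledge”: retrieval over instructor-curated course notes, with a general model constrained to answer only from them. The file name, model choices, and the openai and sentence-transformers usage are assumptions for illustration, not a specific product or design.

```python
import numpy as np
from openai import OpenAI
from sentence_transformers import SentenceTransformer

# Hypothetical corpus: instructor-curated notes, one passage per line.
with open("course_notes.txt", encoding="utf-8") as f:
    passages = [line.strip() for line in f if line.strip()]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = embedder.encode(passages, normalize_embeddings=True)
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(question: str, k: int = 3) -> str:
    # Retrieve the k passages most similar to the student's question.
    q = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(vectors @ q)[::-1][:k]
    context = "\n".join(passages[i] for i in top)
    # Constrain the model to the retrieved disciplinary material.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "Answer only from the provided course passages. "
                        "If they are insufficient, say so."},
            {"role": "user",
             "content": f"Passages:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(ask("Why does the chain rule work for composite functions?"))
```

An assessment built on this could ask students to judge which retrieved sources actually support the answer, which is precisely the find-and-evaluate skill described above.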

Editors and Writing Coaches

As seen above, some students use ChatGPT as an editor, outline creator, and writing coach. Again, we can embrace that usage. Don’t build an assessment around the simple act of producing a five-paragraph essay; instead, build writing assignments around hands-on research. Build multimedia assignments (assuming generative AI will be used in those as well). Up the ante on creativity, but make it integrative and applied.

But always assume an editor and a writing coach are available to your students, whether virtual or human, and don’t simply grade on basic quality. Even with ChatGPT’s help, there are huge differences in the quality of thinking that goes into writing assignments. Assess that, not the basic mechanics.

Parting Thoughts

Yes, there are scalability challenges in the solutions I have briefly outlined. But we are at an inflection point, and assessments, as Owen Terry so rightly points out, need to change. Fortunately, ChatGPT has given us our own outline of how change can happen.
