A while back I had the pleasure of talking to a group of art school principals about the effects of generative AI on (art) education. It was a very interesting conversation, and I wanted to share a few ideas and thoughts that came up during that session.
Think about the children!
Concerns about the influence of technological advances on teaching are nothing new. I remember vivid discussions about the use of calculators (the more advanced types that came to market in the '90s), especially during tests or exams. Since then, the introduction of the internet and Google search, Wikipedia, laptops and smartphones have all raised their own set of concerns within education.
And many of these concerns are still being raised: excessive use of pocket calculators reduces number fluency, reliance on Google search hampers the training of one's memory, and so on… Similarly, concerns about the development of critical thinking, problem-solving, imagination and research abilities are raised with the introduction of ChatGPT.
On the other hand, every generation of parents and grandparents (read: old people) voices concern about today's youth and whether they will amount to anything. They don't have the right attitudes, they don't develop the right skills, … This all stems from the fear that they won't be well prepared to be productive members of society once they graduate. And as a parent myself, I am also concerned about these issues.
There is one important caveat though: the society in which we started our careers was different from the one our children will thrive in. In fact, they will shape that very society in their own image. So I'm usually skeptical about the messages of doom and gloom that some utter with regard to our youth.
That being said, we do have a responsibility to help children and young people make sense of the new technologies we're introducing, and to guide them so that they can use these innovations responsibly. This is something teachers have been doing for (at least) the last 30 years, and they should continue doing it now.
So, how can students use ChatGPT responsibly? Start by asking it the right kinds of questions: have it generate ideas and inspiration, summarize or reformulate text, explain basic topics, or suggest ways to formulate a specific idea.
Apart from using ChatGPT for the right tasks, students should also be very critical of the output it generates. Hallucinations (the confident output of erroneous facts) are still a big problem in large language models, and one shouldn't trust these systems blindly. So critical thinking, and finding the correct information sources, become even more important than they have been in years past.
In fact, with ever more of the information on the internet being generated by AI – especially where trustworthiness and factuality are not the main focus – a well-applied skepticism about any text becomes all the more important.
Cheating
And then there's the wrong way to use AI tools in a school context. This was one of the main concerns the art school principals raised: what do we do about AI-generated essays, handed in as if they were the student's own work? Are there good AI detection tools? My short answer was: no, there are no such tools that are reliable.
Shortly after ChatGPT's introduction in 2022, some companies started offering tools to detect AI-generated text. But they turned out to be woefully unreliable. The biggest risk of using these tools is false positives: flagging text genuinely written by human authors as AI-generated. This has resulted in students being falsely accused of cheating.
I've also heard some people claim that it's impossible to create a reliable tool for detecting AI-generated text. Some quirks of certain generative AI models might give readers a hint. But then again, some people have a writing style that looks a lot like these idiosyncrasies. Many of these quirks will also disappear as the quality of large language models improves.
There is also a more technical reason why I think we won't be able to detect AI-generated text: AI models can be trained against these detectors until they don't trigger the detection any more than human-written text does. And this completely invalidates the test. This can be repeated for every new detection method that pops up.
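The cat-and-mouse dynamic above can be sketched in a few lines of code. This is a deliberately toy example: the "detector" that flags telltale phrases and the rewrite rules are both invented for illustration, and real detectors and evasion techniques work on statistical features of the text, not a fixed phrase list. The point is only the feedback loop: once a generator can query the detector, it can be adjusted until its output no longer scores above the human baseline.

```python
import re

# Hypothetical detector: counts telltale phrases often associated
# with AI-generated text. Real detectors are far more sophisticated,
# but the evasion loop works the same way against any scoring function.
TELLTALES = ["as an ai language model", "delve into", "in conclusion"]

def detector_score(text: str) -> int:
    low = text.lower()
    return sum(low.count(phrase) for phrase in TELLTALES)

# "Training against the detector": rewrite exactly the phrases the
# detector keys on, so the score drops to the human baseline (zero).
REWRITES = {
    r"as an ai language model,?\s*": "",
    r"delve into": "explore",
    r"in conclusion": "to sum up",
}

def evade(text: str) -> str:
    for pattern, replacement in REWRITES.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

generated = ("As an AI language model, let us delve into art. "
             "In conclusion, art matters.")
print(detector_score(generated))         # 3: flagged as AI-generated
print(detector_score(evade(generated)))  # 0: indistinguishable to this detector
```

Whatever new signal a detector learns to use, the same loop applies: feed the detector's verdicts back into the generator until the signal disappears.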
So, as a teacher, how do you deal with a student handing in a text that you suspect is AI-generated? Ask them to explain what they've written. If a student uses ChatGPT to complete an assignment and can answer in-depth questions as if they had written the text themselves, then they found a way to use it well and certainly deserve a good grade.