AI Detection… The Future is Here
We should have seen it coming… along with the initial problems that come with it.
There is no denying that AI will be as transformative as the integration of the internet from the 1990s through today. It will change the way we live in ways we can’t yet understand or even imagine. And while the internet is now intertwined with our daily activities, its integration wasn’t clean. There were bumps, hiccups, and issues. The same is true of AI.
Students being falsely accused of using AI, and the life-altering implications
AI use not even being mentioned in assignments or policies, yet students being told they are “cheating”
Using AI detection in academic settings is both promising and problematic. As educators seek ways to maintain academic integrity in the age of AI, detection tools have emerged as a potential solution. However, their implementation isn’t without challenges, some of which could undermine their effectiveness and fairness.
As the stories show, there are issues to be wrestled with…
Accuracy is a significant concern. AI detection tools often struggle to reliably distinguish between AI-generated and human-written text, especially as students learn to edit and personalize AI outputs. This means that detection systems can flag legitimate student work as AI-assisted or fail to identify subtle AI influences. Such false positives can create unnecessary friction and stress for students and educators alike.
The story of the student who was threatened with expulsion is a prime example. She now records herself actually typing her assignments to ensure she has proof that she is the author of her own work. Is that what we want? And what level of proof is necessary to protect a student’s rights?
The rapid pace of AI development also complicates detection efforts. As AI tools grow more sophisticated, producing content that closely mimics human writing, detection tools have a hard time keeping up. This evolving landscape means that detection software often lags behind, making it less effective as a preventive measure.
And what about something like Grammarly (a program that helps with editing)? The program makes its recommendations using its own technology. Is that too much? Programs like these (including the “Editor” feature in Microsoft Word) have become a normalized, accepted practice in academia. And as seen in one of the above stories, using Grammarly (or a similar program) increases the chance of being incorrectly flagged for using AI. Is it editing or content development?
Students may be unaware that their work is being subjected to AI scrutiny, potentially eroding trust in the academic process. This lack of transparency can lead to ethical concerns regarding privacy and consent. Additionally, the unclear boundaries around AI use in academia make it difficult to establish what constitutes fair AI usage versus misuse. This is exactly what we see in the story questioning whether students’ rights under the Family Educational Rights and Privacy Act (FERPA) are being violated… and with no answers.
There are more questions than answers, to be honest. It reminds me a bit of the issues that surrounded plagiarism detection when the internet was the new technology: submit your essay or work to a program that checks whether it was lifted from another source. It was messy at first, with great challenges and inaccuracies. And the same is true here with AI.
As I said then, I say now: you have to embrace change. Find ways of using it. Understand that hard-and-fast rules (stubbornness without compromise) in the face of change only cause pain and discontent. And realize that running from it will be like that nightmare in the middle of the night…
eventually, it will catch up to you…