Our universities and institutions say they do not accept AI-generated essays; any work of that sort will be rejected. Yet we constantly advocate for technological advancement and digital innovation in our schools.
What do these institutions really mean when they insist that students must not use AI to write their essays, warning that failure to comply could lead to rejection? What is the point of promoting AI adoption and technological progress while simultaneously penalising those who use it?
If a student uses AI to develop ideas or refine an essay, and can confidently defend that work at any time, should that be a crime? Isn’t technology meant to make our work easier, smarter, and more efficient? Why, then, is its use being criminalised?
Am I missing something here, or are the dots simply not connecting well?
AI Detectors and the Paradox of “Human Perfection”
Lecturers and supervisors now frequently use AI detectors to check whether a student has relied on AI in research or writing, yet these tools come with serious limitations. How trustworthy are they?
Most AI detectors don’t actually detect AI authorship. Instead, they scan for patterns associated with machine-generated text, such as phrases like “This shows that…” or “In essence…,” perfectly structured grammar, balanced sentence lengths, and repeated transition phrases. In other words, detectors often interpret high-quality, academic English as AI-like.
They fail to recognise human reasoning or emotion; they simply detect formality and coherence and label it “possibly AI.” So even when a student writes an essay entirely by hand, a well-written, polished piece may still be flagged. This, in effect, suggests that humans can never produce perfect essays.
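To make the problem concrete, here is a minimal sketch, in Python, of the kind of surface heuristic such detectors boil down to. The phrase list, weights, and scoring formula are invented purely for illustration and are not taken from any real product; commercial detectors use statistical language models, but the failure mode is the same, because the score rewards formality and evenness rather than detecting authorship.

```python
import re
import statistics

# Hypothetical list of stock transitions; invented for this sketch,
# not drawn from any actual detector.
STOCK_PHRASES = ["this shows that", "in essence", "in conclusion", "moreover"]

def naive_ai_score(text: str) -> float:
    """Toy 'AI-likeness' score in [0, 1]: stock phrases plus very even
    sentence lengths push the score up, the very traits of polished prose."""
    lower = text.lower()
    phrase_hits = sum(lower.count(p) for p in STOCK_PHRASES)

    # Sentence-length uniformity: low variation reads as "machine-like".
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    uniformity = 1.0 - min(statistics.stdev(lengths) / statistics.mean(lengths), 1.0)

    phrase_signal = min(phrase_hits / 3.0, 1.0)  # saturate after a few hits
    return round(0.5 * phrase_signal + 0.5 * uniformity, 2)

# A careful, hand-written paragraph still scores high, because the
# heuristic only sees formality and evenness, not authorship.
essay = ("In essence, education must evolve. This shows that tools matter. "
         "Moreover, students adapt quickly. In conclusion, policy lags behind.")
print(naive_ai_score(essay))  # ~0.95: flagged as "possibly AI"
```

Notice that the cleaner and more consistent the writing, the higher the score. That is precisely the paradox: the heuristic punishes the very polish that good academic writing demands.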
But consider this: if we intentionally include errors just to “sound human,” which university or employer would take us seriously? People would question our professionalism and wonder why we didn’t use the available tools to improve our writing. It’s a no-win situation: damned if we use AI, and damned if we don’t.
Are Institutions Ready to Practise What They Preach?
This raises a bigger question: are educational institutions truly ready to adopt AI in practice, not just in policy talk? Are tutors ready to embrace flexibility and allow students to enjoy AI’s benefits fully in their education?
At the 11th KNUST Summer School, themed “Artificial Intelligence in Education,” the founder and CEO of MinoHealth AI Labs, Mr. Darlington Akogo, noted that by 2028, AI systems are expected to reach human-level intelligence. Imagine how powerful these tools will become.
If even human perfection is flagged as AI-generated today, what will happen when AI achieves true human-like intelligence? Are our institutions prepared for that future? From what I see, many tertiary institutions still warn students against using AI to write statements of purpose or academic materials, a sign that we are not yet ready.
Moving Beyond Fear to Intentional Use
Until we become intentional about integrating AI meaningfully, we will keep preaching about technology but never truly enjoy its benefits. AI should not be seen as a threat but as a tool, one that, when used responsibly, can enhance creativity, improve academic writing, and bridge educational gaps.
Banning its use outright only widens the gap between technological innovation and educational practice. What we need instead are clear, ethical guidelines that distinguish between AI as an aid and AI as a replacement. Only then can we prepare our students, and our institutions, for the digital future we keep talking about.