Case Study
Context
Treesa, an FFU educator in literacy-intensive subjects, faced challenges as generative AI use grew among students. Many learners were from equity cohorts, including those with neurodiversity, mental health challenges, and past literacy difficulties. Initially, AI use was prohibited, but this punitive approach disproportionately affected the students who needed the most support and created barriers to progression.
Problem
Early assessments designed to scaffold academic literacy were showing signs of GenAI influence, yet detection tools only flagged outputs above 350 words. Students unfamiliar with technology often used AI unknowingly through integrated tools, while others relied on it because of low confidence in their writing. The strict “no AI” policy led to more academic integrity concerns, delayed feedback, and higher intervention referrals.
Intervention
Treesa shifted from prohibition to “somewhat permitted AI use”, reframing AI as an educative tool rather than a punitive trigger. Key steps included:
- Week 2–4: Introducing AI guidelines and library resources on ethical use and referencing.
- Week 6: Providing feedback on early assessments, including advice on maintaining integrity when using AI.
- Week 10: Transforming the final assessment question into an AI prompt. Students generated outputs, critically reviewed structure, ideas, and sources, and learned how to acknowledge and reference AI contributions.
The assessment rubric was updated to include explicit criteria for AI use and academic integrity. Marks were adjusted for unacknowledged AI outputs, hallucinated sources, and lack of critical engagement. A feedback comment bank supported consistent, constructive feed-forward across the marking team.
Outcomes
The approach yielded significant improvements:
- Timely feedback restored, reducing delays in learning support.
- No academic integrity referrals and minimal incomplete grades.
- Positive student experience, with students expressing appreciation for the explicit assessment guidance.
- Improved communication quality in submissions, as AI-supported students produced clearer, more structured work.
Key Insight
Embedding AI use within enabling pedagogy—through structured guidance, transparency, and critical literacy—can transform AI from a threat to academic integrity into a tool for equity, confidence-building, and authentic learning.