AI-Generated Content in Academic Coursework: A Case Study

Dr.Q writes AI-infused insights
7 min read · Feb 26, 2024

The rapid proliferation of generative artificial intelligence (AI) tools, such as ChatGPT, in academic environments presents unique challenges and opportunities for educators and students alike. This article explores a case study from an Engineering Capstone Design Course for Electrical and Software Engineering in which AI-generated content in some student team reports raised concerns. It presents insights into handling AI-generated content in students' work and describes my approach to addressing these challenges while emphasizing transparency, ethical use, and academic integrity.

Reddit post: https://www.reddit.com/r/GPT3/comments/10qfyly/my_professor_falsely_accused_me_of_using_chatgpt/

The Course

During the Fall 2023 semester, I coordinated the first part of a two-semester capstone design project course in which students are tasked with producing substantial engineering reports. The capstone design project, a critical component of the students' final-year curriculum, had 156 students divided into 39 teams of four. The deliverables in the Fall semester include two reports: the first focuses on problem identification, research on related solutions, and requirements specification; the second focuses on concept design generation and prototyping.

From the outset, I proactively addressed generative AI tools in the very first class, emphasizing their ethical use. I informed students that while the use of such tools was permitted in the course, they must disclose that use and adhere to transparent citation practices. I demonstrated my own daily use of these tools, illustrating how they enhance productivity through brainstorming, creating outlines, and refining the tone of a message, while emphasizing that content should never be directly copied and pasted. Additionally, I provided them with our library resources on Citing Generative AI and ChatGPT, as well as my article on Ethical and Responsible Use of Generative AI, setting the course environment for honest and ethical academic behaviour in the generative AI era.

Initial Findings

Despite the initial guidelines, the submitted reports revealed significant variance in the extent and acknowledgment of AI-generated content. Using Turnitin at the time of submission, a similarity detection service that also includes AI content detection, I identified discrepancies in AI usage across the teams. In the first report, 11 teams had AI-generated content ranging from 10% to a striking 81%, while in the second report, 8 teams had between 9% and 36% AI-generated content. Five teams used the tools in both reports. This finding triggered a comprehensive investigative process aimed at understanding the depth of AI tool integration in student work and the level of awareness among team members. Before I discuss my approach to dealing with this, a brief overview of AI content detection tools, and why such tools are not perfect, is helpful.

AI Content Detection Tools

Tools like Turnitin play a pivotal role in academic integrity. This popular similarity detection service also reports a percentage of AI-generated content, which introduces complexities, notably false positives and false negatives. False positives occur when these tools erroneously identify genuine, original content as AI-generated, potentially leading to unjust accusations of academic dishonesty. Conversely, false negatives happen when AI-generated content goes undetected, inadvertently allowing AI contributions to pass as student work. The reliability of these detection tools is of paramount importance, yet they are imperfect and will never be perfect. Their algorithms are based on patterns and statistical probabilities, which can misinterpret nuanced or sophisticated writing as AI-generated or fail to detect AI content cleverly disguised using paraphrasing tools. This inherent limitation makes it challenging to accurately assess the extent of AI involvement in student submissions. The issue is compounded by the continuously evolving capabilities of AI writing tools, which can adapt and produce content increasingly indistinguishable from human writing. Thus, while AI detection tools are invaluable in flagging potential instances of academic misconduct, their limitations necessitate a cautious and contextual approach to their findings.

Acknowledging the potential for false positives was crucial in formulating a fair and balanced response to the detected AI content in student reports. It underscores the need for educators to maintain a balance between vigilance against academic dishonesty and awareness of the limitations of the very tools employed to uphold academic integrity.
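To make the false positive concern concrete, consider a rough back-of-the-envelope calculation. The numbers below are illustrative assumptions on my part, not Turnitin's published rates: even a small per-report false positive rate, applied across dozens of honest reports, makes it quite likely that at least one honest team is wrongly flagged.

    # Illustrative sketch only: the false positive rate below is an assumption,
    # not a published Turnitin figure.
    honest_reports = 39          # teams submitting genuinely human-written reports
    false_positive_rate = 0.02   # assumed chance any one honest report is flagged as AI-generated

    # Probability that at least one honest report is wrongly flagged
    p_at_least_one_false_flag = 1 - (1 - false_positive_rate) ** honest_reports
    print(f"P(at least one false flag) = {p_at_least_one_false_flag:.0%}")  # roughly 55%

Under these assumed numbers, more than half the time at least one honest team would be flagged, which is exactly why a detection percentage alone cannot be treated as proof of misconduct.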

The Turnitin AI writing detection indicator was hidden from students. I am not sure whether this was a decision by Turnitin or by the university, but if AI detection is enabled, I urge institutions to make the indicator visible to students, akin to the similarity detection feature, to give students an opportunity to address any issues identified.

Student Engagement

My approach to addressing this issue was multifaceted. To understand the extent of AI usage and awareness among students, I met with each team separately and asked each team member to complete the following questionnaire, focusing on their awareness and personal use of generative AI tools, knowledge of citation guidelines for AI content, and any additional insights they wished to share.

  1. Are you aware that your Report #X contains AI-generated content? Yes/No
  2. Have you personally used AI tools to generate content for the report? Yes/No
  3. If you answered Yes to question (2), what AI tool(s) did you use and why?
  4. Are you aware of the guidelines I covered in the first class about citing generative AI tools? Yes/No
  5. Anything else you would like to share with me?

I then showed each team the Turnitin report, which highlights the content flagged as AI-generated. This process was critical not only in gathering data but also in avoiding baseless plagiarism accusations and ensuring fairness.

This exercise revealed two key insights: first, in teams with reports showing up to 20% AI content, some members were unaware that their teammates had used ChatGPT for generating report content; second, two teams, each with less than 10% AI content, claimed that they did not use ChatGPT or any other AI tool for content generation or rephrasing.

Strategic Response

Based on the findings from these student interactions and subsequent consultations with academic leadership in the Faculty of Engineering and Applied Science, I developed a strategic response to this first case in new territory, considering the novelty of the situation, the potential for false positives from AI-detection tools, and the need for an educational rather than purely punitive approach. The strategy included:

  1. No action was required for teams that properly disclosed and correctly cited their use of AI. Only one team met this criterion; their first report contained 18% AI-generated content as identified by Turnitin.
  2. Up to 10% AI content was considered a potential false positive, requiring no action beyond a reminder of the guidelines.
  3. For 11–20% AI content, teams could choose between a mark deduction proportional to the AI content or writing a 500-word report on academic misconduct addressing four critical questions: the definition of academic misconduct, its potential consequences, strategies for avoidance, and lessons learned from the experience.
  4. For over 20% AI content, both the mark deduction and misconduct report were required.
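To illustrate the policy, the thresholds above can be summarized as a small decision-rule sketch. The thresholds mirror the strategy described here, but the function and its wording are my own illustrative framing rather than an actual tool used in the course.

    def response_for_team(ai_percent: float, disclosed_and_cited: bool) -> str:
        """Sketch of the course policy: map a team's Turnitin AI-content
        percentage to the response described above (illustrative only)."""
        if disclosed_and_cited:
            return "No action: AI use was disclosed and cited properly"
        if ai_percent <= 10:
            return "No action: treated as a potential false positive; guidelines reminder"
        if ai_percent <= 20:
            return "Choice: proportional mark deduction OR 500-word academic misconduct report"
        return "Both: proportional mark deduction AND 500-word academic misconduct report"

For example, response_for_team(18, disclosed_and_cited=True) returns no action, matching the one team that disclosed and correctly cited its AI use.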

Every team falling under categories (3) and (4) submitted a satisfactory report on academic misconduct that passed Turnitin for both similarity and AI content detection. As previously noted, since not all team members may have used AI tools, each team had the flexibility to decide who would write the academic misconduct report; however, my guideline was for it to be a collaborative team effort.

This approach balanced leniency with accountability, recognizing the challenges posed by new human-like text generation technologies while upholding academic standards. It underscores the complexity of integrating AI tools in academic settings and highlights the necessity for adaptive, context-sensitive approaches to academic integrity.

Discussion

This case study highlights the importance of setting clear guidelines for AI tool usage in academia, with several takeaways:

  • AI detection tools and false positives: Acknowledging up to 10% AI content as a potential false positive reflects an understanding of the limitations of AI detection tools. This aspect of the response is critical, as it prevents unwarranted punishment of students and underscores the importance of continuous evaluation and improvement of AI detection methodologies. It also highlights the need for educators and institutions to stay abreast of technological advancements and their implications for academic integrity.
  • Balancing leniency and accountability: The strategy effectively balances leniency with accountability. By not penalizing teams that properly disclosed and cited AI content, the response reinforced the value of transparency in academic work. This approach encourages students to be forthcoming about their use of AI tools, a crucial step in fostering a culture of integrity in the academic community. Conversely, for teams with higher percentages of AI content, the requirement of a mark deduction or an academic misconduct report (or both) served as a reminder of the consequences of not adhering to the set guidelines.
  • Educational focus over punishment: The option for teams to write a report on academic misconduct, particularly for those with 11–20% AI content, emphasizes an educational response to the issue rather than a purely punitive one. This aspect of the strategy highlights the importance of educating students about academic integrity in the context of emerging technologies. It provides an opportunity for reflective learning, allowing students to understand the implications of their actions and to learn strategies for maintaining integrity in their future academic and professional endeavors.
  • Implications for future policy and practice: This case study, which may serve as a precedent for future policy and practice in academia, demonstrates the need for adaptive, context-sensitive policies that can evolve with technological advancements. As AI tools become more prevalent in education, institutions must develop comprehensive strategies that address their use. This includes establishing clear guidelines, providing educational resources on ethical usage, and developing reliable methods for detecting AI-generated content when its use is not allowed. The most prudent recommendation for users of AI tools is to commit to transparent citation practices, openly disclosing their use.

The use of AI in academia is a double-edged sword, offering tremendous potential while posing significant challenges to traditional notions of authorship and originality. This case study serves as a reference for educators navigating similar challenges, underscoring the need for clear guidelines that resonate with both educators and students, open dialogue, and adaptable strategies in the face of rapidly evolving generative AI technologies.


Qusay Mahmoud (aka Dr.Q) is a Professor of Software Engineering and Associate Dean of Experiential Learning and Engineering Outreach at Ontario Tech University.