Ethical and Responsible Use of Generative AI in Scholarly Research
Generative AI is an evolving technology that poses ethical and technical challenges for researchers. Tools such as ChatGPT and its successors can affect scholarly research outcomes positively or negatively, including their quality, originality, and diversity. In their current state, these tools augment human intelligence rather than replace it. Researchers should monitor and evaluate these impacts and adjust their practices accordingly; it is equally important to share experiences and best practices so that other researchers can learn to use generative AI tools responsibly and effectively.
Ethical and responsible use
A lot has been written about the applications and implications of ChatGPT and other generative AI tools in education. No matter where you stand on the issue, it is crucial to prepare students to navigate the ethical, social, and economic implications of generative AI in their studies and future careers.
Generative AI also has serious implications for research methodologies and practices. This article presents methods to foster its ethical and responsible use in scholarly research.
- Avoid Plagiarism. Generative AI tools like ChatGPT generate content based on a wide range of training sources. Always validate the generated content, check its originality with plagiarism detection tools such as Turnitin and iThenticate, and cite any sources that are identical or similar to ChatGPT's outputs. A rough illustration of such a similarity check appears in the sketch below.
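To make the idea of an originality check concrete, here is a minimal Python sketch that flags close textual overlap between a generated passage and a known source using the standard library's string-similarity ratio. The sample passages and the 0.8 threshold are hypothetical, chosen only for illustration; real plagiarism screening should rely on dedicated services such as Turnitin or iThenticate, which compare against large databases rather than a single source.

```python
# Toy overlap check: NOT a substitute for Turnitin or iThenticate.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a similarity ratio between 0.0 and 1.0 for two passages."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical example passages for demonstration purposes only.
generated = "Generative AI can augment human intelligence in research."
source = "Generative AI augments human intelligence in scholarly research."

score = similarity(generated, source)
if score > 0.8:  # hypothetical threshold; tune for your own use case
    print(f"High overlap ({score:.2f}): verify originality and cite the source.")
else:
    print(f"Overlap {score:.2f}: still validate claims against primary sources.")
```

Even when a check like this reports low overlap, the generated text may paraphrase a source closely enough to require citation, so manual verification remains essential.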
Students should be careful with the use of editing services, including AI typing assistants, at any stage of the writing process, as this may constitute an academic offence. Supervisors and institutions may have specific guidelines on accepted levels of text editing and on whether the editor's name and the editing performed should be acknowledged. The Editors' Association of Canada provides Guidelines for Ethical Editing of Student Text.