Ethical and Responsible Use of Generative AI in Scholarly Research

Dr.Q writes AI-infused insights
May 29, 2023

Generative AI is an evolving technology that may pose ethical and technical challenges for researchers. Tools such as ChatGPT, and the many tools following in its wake, can have positive or negative impacts on scholarly research outcomes, including their quality, originality, and diversity. In their current state, these tools augment human intelligence rather than replace it. Researchers should monitor and evaluate these impacts and adjust their practices accordingly; equally important, they should share experiences and best practices for using generative AI tools responsibly and effectively so that other researchers can benefit from them.


Ethical and responsible use

A lot has been written about the applications and implications of ChatGPT and other generative AI tools in education. No matter where you stand on the issue, it is crucial to prepare students to navigate the ethical, social, and economic implications of generative AI for their studies and future careers.

There are also serious implications for research methodologies and practices. This article presents methods to foster the ethical and responsible use of generative AI in scholarly research.

  • Avoid plagiarism. Generative AI tools like ChatGPT generate content based on a wide range of sources used for training, so you should always validate the generated content, check its originality using plagiarism detection tools such as Turnitin and iThenticate, and cite any sources that are identical or similar to ChatGPT’s outputs.

Students should be careful with the use of editing services, including AI typing assistants, at any stage of the writing process, as this may constitute an academic offence. Supervisors and institutions may have specific guidelines on accepted levels of text editing and on whether the name of the editor and the editing performed should be acknowledged. The Editors’ Association of Canada provides Guidelines for Ethical Editing of Student Text.

  • Ensure transparency and disclosure. Disclose the use of any generative AI tools in the research process for the generation of any content, and share your methodology, code, and data whenever possible, allowing for transparency, reproducibility, and scrutiny by the academic community. This will help other researchers understand the context and limitations of your work. The disclosure can be done in (a) the acknowledgements section of the research paper and (b) a proper citation; here is an example from the Chicago Manual of Style (17th Edition):

OpenAI’s ChatGPT, Response to “Explain to general audiences the possible causes and effects of climate change.”

The IEEE has published guidelines for AI-generated text, which state that ‘…sections of the paper that use AI-generated text shall have a citation to the AI system used to generate the text,’ but they do not yet suggest a citation format to follow. The guidelines are silent on AI-generated content other than text.

ChatGPT has been listed as a co-author on research papers, but the Science family of journals has banned this practice. The Journal of the American Medical Association (JAMA), for example, states that “Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.” JAMA’s specific guidelines are provided in the article Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge.

  • Use for generating ideas, not as a source of truth. ChatGPT can help you brainstorm topics, questions, hypotheses, or arguments for your research, but it cannot verify the accuracy or validity of its outputs. You should always do your own research, cite sources properly, and cross-reference the generated information against reliable sources and established research methodologies. Exercise critical thinking and skepticism to ensure accuracy and reliability.
  • Maintain human oversight. Do not rely solely on AI-generated content; rather, use it as a tool to enhance and augment human expertise.
  • Respect the intellectual property rights of others. Large language models such as ChatGPT are trained on large amounts of data from various sources, including books, articles, websites, and social media posts. Some of this text may be protected by copyright or other legal rights. Refer to the very first point about avoiding plagiarism.
  • Ensure responsible use. Generative AI tools can produce content that is offensive, inaccurate, biased, or deceptive, depending on the input and the model parameters. Always check for harmful or misleading content by carefully reviewing and editing responses before sharing them with others, and address and mitigate biases to ensure fairness and inclusivity in your research outcomes.
  • Ensure the privacy and security of data. ChatGPT and similar tools may require access to your data, such as your research questions, keywords, or notes, to generate relevant content for you. Make sure your data is stored and transmitted securely, and adhere to ethical guidelines, including proper consent, privacy protection, and compliance with relevant regulations. Also, delete any sensitive or confidential data from your ChatGPT conversation history after use.
  • Understand the context. Recognize the limitations of generative AI tools in comprehending context, nuance, and real-world implications, and make these limitations clear when presenting research findings. Exercise caution when interpreting and utilizing the generated content in your research papers.
  • Acknowledge the limitations and uncertainties. ChatGPT and similar tools are not perfect systems; they may produce nonsensical or contradictory outputs, which should be treated as possible suggestions or perspectives that need further verification and analysis. You should acknowledge the sources of uncertainty and error in generated responses, as well as the randomness and variability of the generation process.

By adhering to these ethical guidelines, researchers can leverage the capabilities of generative AI tools like ChatGPT while ensuring the integrity, reliability, and responsible use of these technologies in scholarly research.

To probe further

ChatGPT and a New Academic Reality: AI-Written Research Papers and the Ethics of the Large Language Models in Scholarly Publishing

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge

ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations

APA Style — How to cite ChatGPT

Chicago Manual of Style — How to cite ChatGPT

How do I cite generative AI in MLA style?


Qusay Mahmoud (aka Dr.Q) is a Professor of Software Engineering and Associate Dean of Experiential Learning and Engineering Outreach at Ontario Tech University.