Researchers have used AI for decades for complex calculations, data analysis and modelling. AI writing aids are similarly long-standing (remember "Clippy", Microsoft's anthropomorphic paper clip?), and subscription-based generative text AIs have been available to the public since at least January 2021. However, it was the release of OpenAI's freely available research preview of ChatGPT that triggered the explosion in the use of generative text AIs and prompted the development of research policies specific to their use.
Publishers such as Elsevier, Springer Nature, Cambridge University Press, Taylor & Francis and others all have AI use policies addressing the many issues around integrity, authorship, originality and the methodology and citation of AI use.
However, it's worth noting that many journals explicitly do not allow the use of AI-generated images and multimedia unless specifically part of the research itself, due in large part to the rapidly evolving issues around copyright.
When it comes to citing the specific use of AI in a given work, citation frameworks have either updated their guidelines or, as APA Style did in April 2023, published supplementary guidance on how AI use should be cited.
"During the preparation of this work, the author(s) used [NAME TOOL / SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication."
This video (video length: 7 mins 30 sec) discusses the aspects of authorship, bias, originality and using AI tools to counteract AI fraud.
The Committee on Publication Ethics (COPE) has joined other organisations such as the World Association of Medical Editors and JAMA network to state that AI tools cannot be listed as an author of a paper.
"AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements.
Authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data, must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used. Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics".
- Committee on Publication Ethics. (2023, 13 February). Authorship and AI tools. https://publicationethics.org/cope-position-statements/ai-author
When it comes to using AI for research, governing bodies and professional associations recommend balancing the efficiencies to be gained from AI tools against the aforementioned considerations of ethics and transparency, IP, and critical evaluation of the outputs.
The Australian Research Council (ARC) has a specific policy addressing the use of generative artificial intelligence in ARC grant programs. In that policy, the ARC acknowledges that the use of generative AI can be beneficial but must be balanced against the inherent risks of using a third-party tool, such as security risks and the potential exposure of IP, as well as the ethical issues around the extent to which AI is used for authorship.
AI is increasingly being integrated into academic databases to summarise papers and make connections between them. Elsevier is building custom-engineered generative AI into its Scopus abstract and citation database and began beta testing in August 2023. The subscription-based, qualitative citation analysis tool Scite.ai helps science researchers make connections between papers. Elicit is described as a tool to support the literature review workflow and provides context from the bodies of articles based on natural language inputs. However, even a generative tool trained on quality content can hallucinate (produce erroneous output), as Elsevier cautions in the disclaimers on its Scopus AI beta page.
In their submission to the inquiry into the use of generative AI in the Australian education system, Universities Australia urged researchers and research supervisors to strike a balance between harnessing the capabilities of AI to improve research efficiency, being mindful of copyright and privacy issues and the development of "the general capabilities and skill that foster their professional expertise, critical thinking, evaluation and intellectual curiosity".
TEQSA has published 10 tips for using generative AI in research that provide high-level guidance for researchers, which you may wish to share with your students. See link below for the PDF file.
This video (video length: 8 mins) introduces how to perform a basic search and follow citation trails in Elicit.