According to a recent article published in the JAMA Network, the number and sophistication of artificial intelligence (AI) tools that help authors improve the quality of their drafts and final publications are increasing rapidly. These include tools for improving one’s writing, grammar, language, references, statistics, and adherence to reporting standards.
AI-assisted tools are used by editors and publishers for a variety of purposes, including but not limited to triaging submissions; validating references; editing and coding content for publication in various media; facilitating post-publication search and discoverability; and checking submissions for issues (such as plagiarism, image manipulation, and ethical concerns).
ChatGPT is a sophisticated chatbot (“GPT” stands for “generative pretrained transformer”) designed to respond to queries and requests in a human-like manner. Some have expressed reservations about its release, citing the possibility that students could exploit the language model to cheat on academic work, write essays, and pass exams (even medical licensing exams).
In January 2023, Nature reported that ChatGPT had been named as a co-author on two preprints and two articles in science and health. All of these articles relate in some way to ChatGPT, and one even lists a bot email address for the “author.” According to Nature, including ChatGPT in the paper’s byline was a “mistake that will be corrected immediately.”
Since then, Nature has established a policy to guide the use of large language models in scientific publication, which prohibits naming such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”
Under these standards, researchers should disclose the tools they used in the Methods or Acknowledgements sections of their articles. A growing number of journals and organizations have policies prohibiting non-human technology from being credited as an “author.” Some of these guidelines restrict the use of AI-generated text in submitted work, while others require complete transparency, responsibility, and accountability when disclosing the use of such technology.
The International Conference on Machine Learning has released a new guideline as part of its call for papers: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.” The organization acknowledges that the policy raises a number of questions, and it intends to evaluate “the effect, both good and bad, of LLMs on reviewing and publishing in the field of machine learning and AI” and reassess the policy in the future.
While ChatGPT’s text responses to questions are generally well written, they have been found to be formulaic (though this is not always obvious), out of date, false or fabricated, and lacking accurate or complete references; worst of all, the model can fabricate nonexistent evidence for the claims or statements it makes.
According to OpenAI, the current release is part of an open, iterative deployment intended to gather human usage, interaction, and feedback to improve the language model, which can generate “plausible-sounding yet erroneous or nonsensical replies.” This disclaimer should be read as a caution that the model is not yet a trustworthy source of information, particularly in the absence of human accountability.
JAMA and its affiliated journals have updated their guidance on the use of artificial intelligence and language models in manuscript preparation. For many years, these publications have provided guidance and set criteria for authorship credit and accountability, consistent with the recommendations of the International Committee of Medical Journal Editors.
They have also established guidelines for transparency about any assistance received in writing or editing. Changes in the nature of research and its reporting, along with concerns about author responsibility and accountability, have prompted adjustments to these standards and criteria over time.