ChatGPT: is it really a threat to medical research paper writing?

Commentary

ChatGPT, an artificial intelligence (AI) language model developed by OpenAI, is a well-known technology that requires no introduction in today’s world. ChatGPT became popular very quickly after its launch on November 30, 2022, with over one million users registering within the first week. Its ability to generate text that is virtually indistinguishable from text written by human authors is attracting researchers and medical professionals. However, the use of AI tools like ChatGPT has generated significant concern among researchers about their potential misuse and ethical implications (Curtis 2023).

Researchers tested ChatGPT by having it generate 50 abstracts for medical research papers based on the titles of real abstracts and then compared the results with the original abstracts. Both the artificial abstracts produced by ChatGPT and the original abstracts were passed through an AI output detector and were also reviewed by blinded human reviewers. The AI output detector was able to detect most of the artificial abstracts. Blinded human reviewers correctly identified 68% of the artificial abstracts as being produced by ChatGPT; however, they mistakenly identified 14% of the original abstracts as being generated by ChatGPT, which the authors highlighted as a matter of concern (Gao et al. 2022).
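
To illustrate what passing an abstract through an AI output detector can look like in practice, the following is a minimal, hypothetical sketch. It is not the pipeline used by Gao et al.; it simply assumes the Hugging Face transformers library and the publicly available RoBERTa-based GPT-2 output detector checkpoint openai-community/roberta-base-openai-detector, with placeholder abstract text.

```python
# Minimal sketch (assumption: Hugging Face "transformers" is installed and the
# public checkpoint "openai-community/roberta-base-openai-detector" is used).
# This is an illustration only, not the detector pipeline from Gao et al. 2022.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

# Placeholder abstract text; in practice this would be the abstract under review.
abstract = "Background: ... Methods: ... Results: ... Conclusions: ..."

# The checkpoint returns a label (e.g. "Real" vs "Fake") with a confidence score.
# Such scores can flag suspicious text, but they do not prove authorship.
result = detector(abstract, truncation=True)[0]
print(f"{result['label']} (score={result['score']:.2f})")
```

A score from such a detector is only a screening signal; as the study above shows, both automated detectors and human reviewers misclassify a nontrivial share of abstracts.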

The creation of a scientific paper requires that its content be substantiated and confirmed through multiple rounds of checking and rechecking. The text produced by these large language models is not always accurate or scientifically sound, and medical writing often requires a level of expertise and understanding that an artificial intelligence model cannot replicate. Researchers and scientists have a legal obligation and an ethical responsibility to ensure that the information they provide is accurate and reliable. It is therefore unlikely that a scientific paper could be produced solely by ChatGPT or any other AI language tool without significant human input and oversight.

Academic researchers have discussed the advantages and disadvantages of using ChatGPT in their work. One advantage is that AI tools can help with various time-consuming tasks such as language editing, text generation, translation, grammar correction, formatting, and literature reviews. The use of AI tools can therefore result in faster completion of these tasks, allowing academics to focus on new experimental designs and leading to breakthroughs in various fields.

Van Dis et al. have mentioned that researchers are under increasing pressure to use AI tools to complete tasks quickly. However, there are concerns about bias and inaccuracies, and it is important to examine the validity and reliability of these AI tools (Van Dis et al. 2023).

Another topic of debate among researchers is whether or not ChatGPT can be a co-author. Some argue that an AI tool may meet the criteria for co-authorship because it can contribute significantly to academic writing. However, co-authorship requires not only the ability to make such a contribution but also the ability to consent to being a co-author and to take responsibility for the study, or for the part of the study, to which it contributed. This second requirement is where the idea of granting co-authorship to an AI tool faces a major obstacle (Stokel Walker 2023). Nonetheless, it is ultimately up to the academic community to determine the standards for co-authorship and whether or not they apply to AI tools.

The authors of this commentary utilized the services of ChatGPT for language editing and grammar correction.

There are multiple AI language models similar to ChatGPT that help people communicate in natural language and perform various language-related tasks, and such tools are expected to shape the future of writing. However, even with these advanced AI technologies, human intelligence and input will always be necessary to verify the accuracy of the text generated by these models. In other words, these technologies can help humans write better, but human oversight is still needed to ensure accuracy and reliability. Given the widespread use of AI, it is crucial for journals to use AI output detectors and to establish policies that regulate the use of AI and prevent its misuse.

Availability of data and materials

Not applicable.

Abbreviations

AI:

Artificial Intelligence

References


Acknowledgements

Nil.

Funding

Nil.

Author information

Contributions

PT is the major contributor to the concept and drafting. ST and PRL contributed to the concept, drafting, and review. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Pooja Thaware.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Thaware, P., Trivedi, S. & Lakra, P.R. ChatGPT: is it really a threat to medical research paper writing?. Ain-Shams J Anesthesiol 15, 67 (2023). https://doi.org/10.1186/s42077-023-00365-z
