Editorial
Indian J Physiol Pharmacol 2023;67(1):1-2
doi: 10.25259/IJPP_133_2023

Machines to make manuscripts? Artificial intelligence - A boon or bane?

Ocular Pharmacology and Pharmacy division, Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India

*Corresponding author: Thirumurthy Velpandian, Ocular Pharmacology and Pharmacy division, Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India. tvelpandian@hotmail.com

Licence
This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-Share Alike 4.0 License, which allows others to remix, transform, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.

How to cite this article: Velpandian T. Machines to make manuscripts? Artificial intelligence - A boon or bane? Indian J Physiol Pharmacol 2023;67:1-2.

The ability to learn and understand a process, and to utilise the knowledge acquired through that process to predict the solution to a problem, has been recognised as intelligence. Understanding a complicated process with several input parameters in order to predict the best solution for a human situation has likewise been considered an ability based on intelligence. In a broad sense, artificial intelligence (AI) ‘broadly refers to the idea of computers that can learn and make decisions in a human-like way’.

Mathematical formulas and simple algorithms increased the speed of calculations but have since been overshadowed by the brute force of computers. Enthusiasm for man versus machine peaked in the chess match of 1997, when the computer Deep Blue surpassed human intelligence by defeating world chess champion Garry Kasparov. Ever since, tremendous development in the field of AI has been seen in all areas of science. AI is integrated into our daily lives in many forms, such as personal assistants (Siri, Alexa, Cortana, Google Assistant, etc.), automated mass transportation, aviation and computer gaming.[1] The amalgamation of the wet lab with data science is the field of bioinformatics, which seeks to apply computational tools to achieve a better understanding of science.

Machine learning and deep learning are now seen in all fields of medicine. AI has considerably reduced the time lapse between research and its deployment. In the recent past, prospective/retrospective studies using machine learning platforms, deep learning-based image analysis and disease diagnosis based on decision tree-based AI products such as ADA and MedWhat have kept increasing. Radiological images, pathology slides and patients’ electronic medical records are being evaluated by machine learning-embedded systems, aiding in diagnosing and treating patients and augmenting physicians’ capabilities.[2]

Screening of diabetic retinopathy using AI machine learning and deep learning algorithms has been found to provide clinically equivalent, rapid retinopathy detection from retinal images in places where a trained workforce is unavailable for mass-scale screening programs,[3] making it very cost-effective in developing countries. AI techniques are also increasingly being employed as a research tool in neurosciences to understand the working principle of neurons in the functioning of the brain.[4]

Chatbots are another transformative class of applications that has become extraordinarily popular in recent years owing to remarkable achievements in providing automated, AI-enabled solutions with the help of natural language processing. Chatbots are currently used by many companies, including those in healthcare, to provide customer service, route requests or gather information. Chat Generative Pre-Trained Transformer (ChatGPT), an AI chatbot launched in November 2022, is a recent advancement in this field. It has been developed on top of OpenAI’s GPT-3.5 family of large language models using supervised and reinforcement learning techniques. As GPT was trained on articles, websites, books and written conversations, ChatGPT represents the next advancement, capable of responding to prompts in a dialogue style. The use of ChatGPT to simplify radiology reports and patient discharge summaries is currently being weighed for its pros and cons. ChatGPT has even been used to decide how to handle simulated data to determine vaccine effectiveness and to draft a related research paper.[5] After quick reports of the use/misuse of ChatGPT for academic and research work, words of caution have appeared in many places.[6,7]

The World Association of Medical Editors (WAME), in January 2023, came out with recommendations on ChatGPT and chatbots in relation to scholarly publications. The main concerns of WAME are as follows: (1) ChatGPT is not sentient and does not ‘know’ that it is lying, but its programming enables it to fabricate ‘facts’; (2) chatbots are not legal entities and do not have a legal personality; therefore, fixing responsibility on them is not possible.[8] The WAME committee cautioned that while ChatGPT may prove to be a useful tool for researchers, it represents a threat to scholarly journals because ChatGPT-generated articles may introduce false or plagiarised content into the published literature. It has, therefore, recommended that chatbots cannot be authors and that authors must be transparent about the way chatbots are used in writing manuscripts. Authors remain ultimately responsible for the scientific content of the manuscript, and editors need to develop tools to detect the use of chatbots in submitted work.

The developers of ChatGPT have themselves acknowledged the program’s limitations: it sometimes writes plausible-sounding but incorrect or nonsensical answers, it is sensitive to tweaks in the input phrasing, and it can be excessively verbose and overuse specific phrases, such as restating that it is a language model. Instead of asking questions to clarify an ambiguous query, the current models guess what the user intended. Moreover, it is not free of false negatives and false positives.

Although AI is being employed as a powerful model in all branches of medical science, from designing vaccines to drug discovery and from diagnosis to treatment, unscrupulous use of AI can be harmful. Academic misuse will take away the learning skills of young students and research scholars and will create a disconnect from reality by halting the fundamental understanding of a problem in its original landscape. In the future, editors of medical journals and faculties of academic institutions will face tough situations in countering such automated, AI-driven approaches unless academic institutions come forward with appropriate policies.

References

  1. Introduction to artificial intelligence in medicine. Minim Invasive Ther Allied Technol 2019;28:73-81.
  2. AI in health and medicine. Nat Med 2022;28:31-8.
  3. Prospective evaluation of an artificial intelligence-enabled algorithm for automated diabetic retinopathy screening of 30 000 patients. Br J Ophthalmol 2021;105:723-8.
  4. Natural and artificial intelligence: A brief introduction to the interplay between AI and neuroscience research. Neural Netw 2021;144:603-13.
  5. Can ChatGPT draft a research article? An example of population-level vaccine effectiveness analysis. J Glob Health 2023;13:1003.
  6. This new conversational AI model can be your friend, philosopher, and guide and even your worst enemy. Patterns (N Y) 2023;4:100676.
  7. ChatGPT: Friend or foe? Lancet Digit Health 2023;5. Available from: https://www.thelancet.com/action/showPdf?pii=S2589-7500%2823%2900023-7; Limitations of ChatGPT: https://openai.com/blog/chatgpt
  8. WAME recommendations on ChatGPT and chatbots in relation to scholarly publications. Available from: https://wame.org/page3.php?id=106 [Last accessed on 2023 Mar 06]
