Challenges and opportunities for teaching and learning in an era of generative artificial intelligence: Students’ perspective
*Corresponding author: Om Lata Bhagat, Department of Physiology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, India. omlatabhagat@gmail.com
How to cite this article: Dulloo P, Vedi N, Bhagat OL. Challenges and opportunities for teaching and learning in an era of generative artificial intelligence: Students’ perspective. Indian J Physiol Pharmacol. doi: 10.25259/IJPP_285_2025
Abstract
Objectives:
The widespread adoption of generative artificial intelligence (GenAI) has revolutionised communication. It has also affected the teaching-learning process, as the vast syllabus in medical studies makes GenAI appear to be a magic wand for students. Although it is perceived as a threat that could replace humans in medical diagnostics and prescription writing, since it can generate sentences and provide answers, it appears unable to improve the comprehension and emotional skills required for the medical profession.
Materials and Methods:
An online survey was conducted to assess undergraduate medical students’ perceptions of the challenges and opportunities for teaching and learning in the era of GenAI and their readiness to adapt. A questionnaire with 15 closed-ended and two open-ended items was used.
Results:
We received 181 responses, of which 173 were analysed. The Chi-square value (141.56) of the items shows statistical significance (P = 0.001) as per the Friedman mean rank test, with the highest rank for item P-3, ‘The use of GenAI in medical education can help simulate complex medical scenarios effectively’ (9.56), and the lowest rank for item P-8, ‘I feel confident in navigating the evolving landscape of artificial intelligence in healthcare as a medical student’ (6.41). The open-ended question on the role of GenAI in the teaching-learning process identified five themes: Enhancing teaching and learning, clinical applications and skill development, efficiency and productivity, innovation in medical education and ethical and human-centric considerations. Six themes were identified for the challenges: Technological challenges, educational impacts, human-centric challenges, social and cultural barriers, practical and clinical limitations and psychological and perceptual concerns.
Conclusion:
Medical undergraduate students strongly perceive that GenAI has the potential to improve the landscape of medical education. Although apprehensive about patient privacy and data security, they are ready to embrace GenAI in medical education and training.
Keywords
Challenges
Generative artificial intelligence
Medical student
Perception
INTRODUCTION
Generative artificial intelligence (GenAI) offers a transformative leap beyond traditional educational technologies (EdTech) and conventional artificial intelligence (AI) by dynamically creating original, context-specific content in response to the user. Traditional AI analyses existing data and focuses on categorisation, prediction and pattern recognition, whereas GenAI learns from data to produce original outputs, such as literature, graphics or music.[1,2] Literature suggests that GenAI is reshaping medical education by making it more interactive and efficient, potentially leading to better-prepared health professionals.
GenAI creates many opportunities for enhancing personalised learning experiences among students by assessing their learning preferences, styles and performance statistics, thereby providing tailored activities and resources.[1,2] The shift from a one-size-fits-all approach to a more personalised approach allows students to progress at their own pace, enhancing their comprehension and retention of the material.[2,3] GenAI can democratise education by providing quality resources to marginalised regions and diverse learners, generating content in multiple languages and media forms, thereby enhancing learning accessibility.[3,4] This extends support to learners with special needs and customises the learning approach to fit their needs.[4] The tool can enhance learner participation and interactivity in group tasks, discussions and brainstorming sessions, transcending boundaries to cultivate diverse ideas and perspectives.[3,5] AI tools help students identify knowledge gaps, create reading materials, brainstorm, encourage group discovery and provide emotional support, sparking meaningful educational conversations.[6,7] Furthermore, GenAI can enhance administrative efficiencies for educators, such as lesson planning and grading, allowing them to concentrate on teaching activities and mentoring.[5] This may benefit learners by giving educators more time to connect with them, thus supporting students both academically and personally.[5,8] However, it may lead to increased costs of education or problems of inaccessibility in resource-constrained areas and communities.
Researchers highlight challenges in the use of GenAI, including ethical concerns about data privacy and reliability, as unfiltered datasets may produce biased or culturally insensitive educational resources.[4,7,8] Excessive dependence on GenAI for task completion can hamper the learner’s critical thinking and problem-solving ability, which, in turn, can hamper the quick decision-making process in those students.[7,9] The idea is to complement the old-school teaching strategy rather than replace it to ensure the best of both teaching-learning approaches.[2,8,10] Students must understand that while these tools can enhance learning, they must be used responsibly to prevent potential misuse.[7,11] Addressing the digital divide in education is crucial due to disparities in access and opportunities, as well as resistance from educators and learners to adopt a growth mindset.[2,5,12]
Keeping this background in mind, we explored medical students’ perceptions of the challenges and opportunities they face in the era of GenAI within educational settings in medical institutions in India. The research question was, ‘What is medical students’ perception regarding the usage of GenAI in the teaching-learning part of medical education?’ This approach will enable medical faculty to enhance their teaching, learning and assessment strategies by utilising GenAI for improved student learning outcomes. Simultaneously, we sought to explore the perceived challenges and identify ways to address them.
MATERIALS AND METHODS
A cross-sectional survey was conducted using a structured questionnaire with 15 closed-ended and two open-ended questions. The questionnaire was developed through a literature review and the authors’ expertise. The face validity of the perception questionnaire was assessed: three experts in the field of medical education were provided with the study’s aim and objectives, as well as the developed questionnaire, and their modifications and suggestions were discussed among the authors to finalise the survey questionnaire. Institutional Ethics Committee approval was obtained (PUIECHR/PIMSR/00/081734/7806; dated 18 October 2024) before initiating data collection. The target population for this study was undergraduate medical students in India. The invitation was forwarded to students from eight medical colleges nationwide (4 private and 4 government), with approximately 5,600 enrolled undergraduate medical students. Three attempts were made to invite participants. The survey was administered through a Google Forms link to ensure accessibility and a broader reach, disseminated through various online platforms, including email, university learning management systems and social media. The form description specified that participation was voluntary and that the data collected would be used for scholarly purposes; submission of the completed form was taken as consent to participate. As part of the study design, the authors did not ask about financial status, to reduce hesitation in filling out the form and to improve participants’ acceptance of the study. In addition, we steered clear of any bias stemming from social status. The minimum time taken by participants to submit the questionnaire was 3 minutes. The authors ensured the confidentiality of the data.
The survey questionnaire covers demographics (including age, gender, academic level and field of study), usage of AI tools (frequency and types of AI tools students used for scholarly work), perceived opportunities (perceived benefits of using AI for learning, such as personalised learning and enhanced creativity), challenges (such as over-reliance on AI, academic dishonesty and difficulty in critical thinking) and ethical concerns (plagiarism and fairness).
Statistical analysis was done using the Statistical Package for the Social Sciences (SPSS), version 20. Descriptive statistics (mean, frequency and percentage) were used to summarise demographic data and students’ overall perceptions of AI. Inferential statistics (e.g. t-tests and Friedman’s ranking test) were conducted to identify significant differences in perceptions based on demographic variables. Reliability analysis, factor analysis and regression analysis were performed on the 15 closed-ended questions. Thematic analysis was used for open-ended responses.[13] Two authors independently identified the codes and developed sub-themes and themes for the open-ended questions. All the authors collaborated to approve the themes and the sub-themes.
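The core of Friedman’s ranking test applied to the Likert items can be sketched in a few lines of pure Python. This is an illustrative implementation on toy data only, not the study’s SPSS analysis, and it omits the tie-correction factor that statistical packages apply (so values on heavily tied Likert data will differ slightly); Kendall’s W is derived from the same chi-square statistic.

```python
def friedman(ratings):
    """ratings: one row per participant, one column per item (Likert scores).
    Returns (chi_square, mean_ranks, kendalls_w)."""
    n = len(ratings)       # number of participants
    k = len(ratings[0])    # number of items
    rank_sums = [0.0] * k
    for row in ratings:
        # rank the k item scores within this participant (average ranks for ties)
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1            # average rank for the tied group
            for t in range(i, j + 1):
                ranks[order[t]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    mean_ranks = [s / n for s in rank_sums]
    # Friedman chi-square (no tie correction): 12/(n*k*(k+1)) * sum(R_j^2) - 3n(k+1)
    chi2 = 12.0 / (n * k * (k + 1)) * sum(s * s for s in rank_sums) - 3 * n * (k + 1)
    w = chi2 / (n * (k - 1))                 # Kendall's W from the chi-square
    return chi2, mean_ranks, w
```

With perfectly consistent rankings across participants, W evaluates to 1; the low W of 0.06 reported here corresponds to weak (though statistically significant) agreement across the 15 items.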
RESULTS
Of the 181 responses to the questionnaire, the final data from 173 participants (102 females and 71 males) were analysed after checking for completeness, outliers and duplicate responses. All participants were undergraduate students pursuing MBBS degrees in various institutions across India: 99 from government institutions and 74 from private institutions [Table 1]. Although we attempted to gather students’ perceptions across all academic levels of the undergraduate medical program, there was less participation from 3rd- and 4th-year students than from 1st- and 2nd-year students, and only 1st- and 2nd-year students from private medical colleges participated in the study. A comparison between government and private medical colleges could not be made, as the degrees of freedom for government participants were 3, while for private participants they were 1.
| Institutions | Female (%) | Male (%) | Total (%) |
|---|---|---|---|
| Government | |||
| 1st-year | 12 (19.7) | 20 (52.6) | 32 (32.3) |
| 2nd-year | 38 (62.3) | 8 (21.1) | 46 (46.5) |
| 3rd-year | 3 (4.9) | 4 (10.5) | 7 (7.1) |
| 4th-year | 8 (13.1) | 6 (15.8) | 14 (14.1) |
| Total | 61 (61.6) | 38 (38.4) | 99 (57.2) |
| Private | |||
| 1st-year | 25 (61) | 18 (54.5) | 43 (58.1) |
| 2nd-year | 16 (39) | 15 (45.5) | 31 (41.9) |
| Total | 41 (55.4) | 33 (44.6) | 74 (42.8) |
| Total | |||
| 1st-year | 37 (36.3) | 38 (53.5) | 75 (43.4) |
| 2nd-year | 54 (52.9) | 23 (32.4) | 77 (44.5) |
| 3rd-year | 3 (2.9) | 4 (5.6) | 7 (4) |
| 4th-year | 8 (7.8) | 6 (8.5) | 14 (8.1) |
| Total | 102 (59) | 71 (41) | 173 |
Presents the frequency and percentage (in parentheses) for the participants enrolled in the survey on the perspectives of undergraduate medical students regarding the challenges and opportunities of teaching and learning in an era of artificial intelligence (AI), particularly generative AI (GenAI, such as ChatGPT). The questionnaire is based on a 5-point Likert scale with 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree and 5 = strongly agree.
The survey questionnaire shows a Cronbach’s alpha reliability value of 0.879 for the 15 items; Cronbach’s alpha with individual items deleted ranges from 0.864 to 0.889. Most participants gave agreement responses for all items of the survey questionnaire related to GenAI in medical education, except for item P-8, which received neutral responses [Figure 1]. Responses by gender and institute showed no statistical significance for any item based on gender, type of institute or year of program [Figure 2], except item P-5 for gender (P = 0.037) and item P-9 for type of institute (P = 0.024). Based on the weighted mean calculated for the items (3.71), eight items showed high perception, while seven showed low perception. The Chi-square value (141.56) for the 173 participants shows statistical significance (P = 0.001) as per the Friedman mean rank test, with the highest rank for item P-3, ‘The use of GenAI in medical education can help simulate complex medical scenarios effectively’ (9.56), and the lowest rank for item P-8, ‘I feel confident in navigating the evolving landscape of AI in healthcare as a medical student’ (6.41) [Table 2]. Kendall’s W of 0.06 (P < 0.001) indicates a low yet statistically significant level of agreement. Post hoc Dunn tests showed no significant differences between the ranks of the items. No statistical significance was observed for mean perception or mean behaviour by gender, type of institute or year of medical program among the participants.
Figure 1: The spectrum of responses from 1 to 5 for each survey item, represented by varied colour hues as per the frequency distribution of responses by the undergraduate medical students. The number represents the frequency of students for each item as per the Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree and 5 = strongly agree).
Figure 2: Graphical representation of undergraduate medical students’ responses to the survey questionnaire by gender and institute. P denotes items related to perception, while B denotes items related to the behaviour of undergraduate medical students towards generative artificial intelligence.
| ID | Items | Mean±SD | Median | Decision (Perception) | Friedman Mean Rank | Mean±SD (Male) | Mean±SD (Female) | Mean±SD (Private) | Mean±SD (Government) |
|---|---|---|---|---|---|---|---|---|---|
| P-1 | AI, particularly GenAI, has the potential to enhance medical education by providing personalised learning experiences. | 3.80±0.88 | 4 | High | 8.59 | 3.73±1.06 | 3.85±0.72 | 3.7±0.97 | 3.88±0.79 |
| P-2 | I am concerned that automation through GenAI may replace certain tasks currently performed by medical professionals. | 3.61±1.0 | 4 | Low | 7.66 | 3.68±1.05 | 3.57±0.96 | 3.64±1.02 | 3.6±0.97 |
| P-3 | The use of GenAI in medical education can help simulate complex medical scenarios effectively. | 4.02±0.79 | 4 | High | 9.56 | 4.13±0.80 | 3.94±0.78 | 4.12±0.75 | 3.94±0.81 |
| P-4 | I believe that ethical considerations are crucial when using GenAI in medical training. | 3.94±0.94 | 4 | High | 9.07 | 3.92±0.92 | 3.96±0.95 | 3.84±0.99 | 4.02±0.89 |
| P-5 | I think that GenAI can undermine the value of human expertise in the medical field. | 3.51±1.05 | 4 | Low | 7.25 | 3.31±1.16 | 3.65±0.94* | 3.54±1.18 | 3.48±0.94 |
| P-6 | The integration of GenAI in medical education will improve the clinical skills and decision-making abilities of students. | 3.53±0.95 | 4 | Low | 7.13 | 3.52±1.06 | 3.54±0.86 | 3.53±1.06 | 3.54±0.86 |
| P-7 | Patient privacy and data security concerns are legitimate reasons to approach GenAI cautiously in medical training. | 3.72±0.90 | 4 | High | 8.01 | 3.83±0.89 | 3.65±0.90 | 3.77±0.94 | 3.69±0.87 |
| P-8 | I feel confident in navigating the evolving landscape of AI in healthcare as a medical student. | 3.42±0.86 | 3 | Low | 6.41 | 3.38±0.94 | 3.45±0.80 | 3.36±0.93 | 3.46±0.81 |
| P-9 | I believe that GenAI can assist in improving diagnostic accuracy and speed in the medical field. | 3.62±1.08 | 4 | Low | 7.71 | 3.61±1.17 | 3.63±1.01 | 3.41±1.30 | 3.78±0.84* |
| P-10 | The fear of GenAI replacing human tasks motivates me to stay updated with AI advancements in healthcare. | 3.54±1.01 | 4 | Low | 7.12 | 3.46±1.04 | 3.59±0.99 | 3.61±1.12 | 3.48±0.93 |
| P-11 | I am open to incorporating GenAI tools and technologies in my medical education curriculum. | 3.62±0.90 | 4 | Low | 7.38 | 3.69±0.91 | 3.58±0.89 | 3.57±1.05 | 3.67±0.78 |
| P-12 | I believe that GenAI can revolutionise medical research and contribute to scientific discoveries. | 3.78±0.93 | 4 | High | 8.3 | 3.77±1.05 | 3.78±0.84 | 3.7±1.09 | 3.84±0.79 |
| B-1 | GenAI in medical education should be accompanied by comprehensive training on its ethical implications. | 3.87±0.89 | 4 | High | 8.69 | 3.97±0.75 | 3.8±0.96 | 3.86±0.97 | 3.88±0.82 |
| B-2 | I am confident that GenAI can work collaboratively with medical professionals to enhance patient care outcomes. | 3.72±0.90 | 4 | High | 7.92 | 3.85±0.87 | 3.64±0.92 | 3.73±1.01 | 3.72±0.82 |
| B-3 | The integration of GenAI in medical education should prioritise maintaining the human touch in patient-doctor interactions. | 3.94±1.0 | 4 | High | 9.22 | 3.87±1.12 | 3.99±0.90 | 3.86±1.17 | 4±0.84 |
Presents the mean scores and standard deviations for the closed-ended items of the survey questionnaire, both overall and by gender and institution of study, among the enrolled participants. Comparisons between groups were made with an independent-samples t-test, showing statistical significance for item P-5 by gender and item P-9 by type of institute (*P < 0.05). No statistical significance was observed for mean perception by gender (males [71]: 3.67 ± 0.53; females [102]: 3.68 ± 0.58; P = 0.877) or for mean behaviour by gender (males [71]: 3.9 ± 0.75; females [102]: 3.81 ± 0.8; P = 0.47). No statistical significance was observed for mean perception by type of institute (government [99]: 3.7 ± 0.52; private [74]: 3.65 ± 0.61; P = 0.579) or for mean behaviour by type of institute (government [99]: 3.87 ± 0.7; private [74]: 3.82 ± 0.87; P = 0.713). No statistical significance was observed for mean perception by year of medical program (1st year [75]: 3.75 ± 0.57; 2nd year [77]: 3.59 ± 0.59; 3rd year [7]: 3.8 ± 0.55; final year [14]: 3.68 ± 0.29; P = 0.34) or for mean behaviour by year of medical program (1st year [75]: 3.88 ± 0.73; 2nd year [77]: 3.74 ± 0.86; 3rd year [7]: 4.05 ± 0.56; final year [14]: 4.10 ± 0.56; P = 0.335). GenAI: Generative artificial intelligence, SD: Standard deviation. ID is the identifier for the questions; P: questions related to perception; B: questions related to behaviour.
The factor analysis of the survey questionnaire yields a Kaiser-Meyer-Olkin (KMO) of 0.867 and a Bartlett’s test of sphericity that is statistically significant, with a Chi-square value of 1069.489. The component matrix utilises principal component analysis for factor analysis of the questionnaire, revealing four components extracted. However, when comparing them, the authors could only categorise them into two components (Perception Item 1–12; Behaviours Item 13–15).
Descriptive statistics for the regression analysis show mean ± standard deviation values for mean perception and mean behaviour of 3.68 ± 0.56 and 3.85 ± 0.78, respectively. With mean perception as the independent variable and mean behaviour as the dependent variable, perception has a significant effect on behavioural attitude (F (1, 171) = 225.019, P < 0.001). The P-P plot of the regression standardised residuals also suggests a positive relationship between them [Figure 3]. The Pearson correlation shows a strong association between the dependent and independent variables (r = 0.754). The Durbin-Watson value of 1.768 (between 1 and 3) indicates that the observations were independent. The model is statistically significant (P < 0.01), indicating a strong correlation between participants’ behaviour and perception regarding GenAI.
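The regression of mean behaviour on mean perception reduces to a simple linear model with one predictor, so the Pearson r and the F statistic with (1, n-2) degrees of freedom are directly related. The pure-Python sketch below illustrates this relationship on toy data only, not the study’s dataset.

```python
def simple_regression(x, y):
    """Ordinary least squares for one predictor.
    Returns (slope, intercept, pearson_r, f_statistic with (1, n-2) df)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5             # Pearson correlation
    r2 = r * r
    # For one predictor: F = (R^2 / (1 - R^2)) * (n - 2), i.e. F = t^2
    f = r2 / (1 - r2) * (n - 2) if r2 < 1 else float("inf")
    return slope, intercept, r, f
```

Plugging the reported r = 0.754 and n = 173 into the F relation gives a value of the same order as the reported F(1, 171) = 225.019, which is the internal consistency one would expect from a single-predictor model.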
Figure 3: The P-P plot of the regression standardised residuals, suggesting a positive relationship between mean behaviour (dependent variable) and mean perception (independent variable).
Thematic analysis was performed for the two open-ended questions. The first asked about the most important role of GenAI in the teaching-learning process and identified five significant themes, each further categorised into 2–3 subthemes based on the participants’ responses. The second addressed the challenges of using GenAI in the teaching-learning process and identified six major themes, each with 2–3 subthemes [Table 3]. Further, suggestions provided by the participants regarding the use and challenges of GenAI in the teaching-learning process were categorised as ethical and professional considerations, integration and implementation strategies and balancing AI and human expertise [Table 3].
| In your opinion, what is the importance of using GenAI in the teaching-learning process in medical education? | | |
| Theme | Sub-theme | Input from participants |
| 1: Enhancing teaching and learning | 1.1: Visualisation and conceptual clarity | GenAI aids in better visualisation through 3D models, animations and virtual reality. Helps understand anatomical structures and physiological processes that are hard to imagine. Makes learning interesting, interactive and fun. Provides clarity and eliminates rote learning. |
| | 1.2: Personalised and innovative learning | Offers personalised learning experiences tailored to individual pace and needs. Facilitates simulation of complex scenarios for better understanding and skill development. Enhances teaching methods and reduces reliance on traditional teaching styles. |
| | 1.3: Improved resource accessibility | Provides centralised and accessible knowledge. Enables students in remote areas to access quality education. Offers quick answers and resources for learning anytime, anywhere. |
| 2: Clinical applications and skill development | 2.1: Diagnostic and decision-making support | Assists in diagnosis and treatment planning with high accuracy. Helps analyse clinical cases using vast datasets for better outcomes. Supports doctors by reducing cognitive workload. |
| | 2.2: Simulation and practical training | Simulates real-life scenarios for surgical and medical practice. Improves hands-on training and skill acquisition. Builds interactive mannequins for enhanced surgical practice. |
| 3: Efficiency and productivity | 3.1: Time and cognitive load management | Saves time by offering direct answers to queries. Reduces the cognitive load by simplifying large volumes of medical information. |
| | 3.2: Automation and assistance | Automates routine tasks such as medical record maintenance. Serves as a side-by-side assistant for students and educators. |
| 4: Learning continuum and inquiry | 4.1: Enhancing research and knowledge creation | Improves research quality by providing resources for rare diseases. Facilitates staying up-to-date with medical advancements. Encourages innovation and new scientific discoveries. |
| | 4.2: Collaborative and lifelong learning | Supports collaborative learning among peers. Offers lifelong learning opportunities by adapting to advancements in technology. |
| 5: Ethical and human-centric considerations | 5.1: Limitations and concerns | Should not replace human interaction and empathy in teaching. Use should be limited to specific applications, maintaining a balance with traditional methods. A double-edged sword that requires ethical practices to ensure proper usage. |
| | 5.2: Impact on employment and roles | May affect job roles, requiring students and professionals to learn coding and technological skills. Can replace repetitive teaching tasks, prompting innovation among educators. |
| In your opinion, what are the challenges of using GenAI in the teaching-learning process in medical education? | | |
| Theme | Sub-theme | Input from participants |
| 1: Technological challenges | 1.1: Technical issues | Internet problems, system glitches, lack of technical support |
| | 1.2: Lack of expertise | Insufficient professionals, coding knowledge gap among students |
| | 1.3: Complexity in usage | Difficult learning curve, prompt creation challenges |
| | 1.4: Cost and infrastructure | High cost, resource constraints |
| 2: Educational impacts | 2.1: Reliance on AI | Reduced critical thinking, passive learning and diminished thought processes |
| | 2.2: Quality and accuracy | Risk of misinformation, unreliable AI-generated data |
| | 2.3: Reduced interaction | Decline in teacher-student and peer collaboration |
| 3: Human-centric challenges | 3.1: Ethical concerns | Privacy violations, data misuse, maintaining ethics |
| | 3.2: Human touch in education | Loss of empathy, decreased communication and interpersonal skill development |
| 4: Social and cultural barriers | 4.1: Acceptance | Resistance to AI adoption by stakeholders |
| | 4.2: AI literacy | Limited understanding of AI, increased academic pressure |
| | 4.3: Bias and equity | Inequitable access, perpetuation of biases |
| 5: Practical and clinical limitations | 5.1: Loss of clinical skills | Hindrance to practical skills, reduced real-world application experience |
| | 5.2: Inability to replace | AI’s inability to replace human emotional understanding and hands-on medical expertise |
| 6: Psychological and perceptual concerns | 6.1: Fear and distrust | Fear of replacement, distrust in AI’s capability |
| | 6.2: Reduced creativity | Limitation on innovative thinking |
| | 6.3: Dependency and motivation | Over-reliance on AI, diminished learning motivation |
Shows the thematic analysis for the open-ended questions. The question asking about the most important role of GenAI in the teaching-learning process revealed five major themes, each with 2–3 subthemes based on the participants’ responses. The question addressing the challenges of using GenAI in the teaching-learning process revealed six major themes, each with 2–3 subthemes.
DISCUSSION
The present study attempted to highlight the nuances of attitude among Indian medical students about the integration of GenAI in medical education. Today’s students frequently use technology for entertainment, social interactions and research. However, its integration into academia and teaching-learning must be strengthened, particularly in the health sciences. While digital technologies hold great promise for enhancing learning, their success depends largely on proper guidance toward structured usage and on striking a balance between these innovations and more traditional teaching methods. Our observations align with this: participants opine that the use of GenAI enhances resource accessibility, visualisation and personalised learning in medical education and clinical training, and that it may improve efficiency by supporting diagnostic decision-making. However, there are social, cultural and psychological barriers to adoption, along with fears of reduced human interaction and ethical concerns. While students express strong endorsement of GenAI’s potential, for instance ranking highly its capacity to simulate complex clinical scenarios (item P-3), there remains a notable and seemingly contradictory lack of confidence in their ability to navigate the evolving AI landscape in healthcare (item P-8). This juxtaposition of enthusiastic support coupled with personal uncertainty suggests an important experiential and educational gap that warrants critical reflection. Rather than simple acceptance or rejection, the findings reveal an ambivalent stance shaped by both optimism and perceived unpreparedness.
The developed questionnaire was shared with medical students from eight medical institutions across the country, receiving input from 181 participants from both government and private institutes. Only responses to P-5 (between genders) and P-9 (between types of institutions) were significantly different. The average scores indicated agreement with the use of GenAI in teaching and learning in medical schools. The factor analysis categorised the items into two major groups: perception (P-1 to P-12) and behaviour (B-1 to B-3). The findings are discussed under two major headings: benefits and challenges.
Benefits
The quantitative output of the present study showed that undergraduate medical students strongly perceive that GenAI has the potential to enhance medical education by providing personalised learning experiences and that its usage can help simulate complex medical scenarios effectively, as reflected by mean scores of 3.80 ± 0.88 and 4.02 ± 0.79, respectively. However, they are not convinced that using GenAI improves clinical skills and decision-making abilities in medical education. The qualitative responses of participants suggested the use of GenAI for visualisation and conceptual clarity, and for personalised and innovative learning. These observations align with previously published studies, which show that students perceive GenAI as a useful tool with many advantages and are open to using it, primarily for writing, learning and research. Specifically, they support personalised and immediate learning, writing and brainstorming, visual and audio multimedia and research and analysis.[7] In addition, students utilise GenAI to enhance their knowledge base and deepen their comprehension of the course material.[14]
We observed that participants believed that GenAI has the potential to revolutionise medical research and could contribute to scientific discoveries. This is reflected in the mean score of 3.78 ± 0.93 and as one of the sub-themes for ‘Enhancing Research and Knowledge Creation’, which is consistent with the findings of Chen and Hu.[7] Furthermore, Almassaad et al. reported that most respondents use GenAI to formulate ideas during the actual writing process and extract summaries from academic literature.[14] Zhou et al. indicated a very broad scope of possible uses of AI, including enhanced productivity, more individualised educational experiences and stronger language skills. The study also provides critical empirical information on students’ attitudes toward AI usage.[15]
Berg noted that GenAI enables participants to stay up-to-date with current research trends and build on initial findings by simplifying literature searches, reading summaries and generating hypotheses based on data analysis.[16] The rapid review by Hale et al.[17] suggests that generative AI can provide students with feedback during their assessments to enhance their clinical reasoning and decision-making skills, and can also serve as a source of educational materials or simulation scenarios for teachers.[17] Mondal et al. concluded that large language models (LLMs) are praised for their ability to simplify notes, provide personalised solutions, answer multiple-choice questions and expedite tasks while being time-saving and accessible.[18]
In the present study, participants show high agreement on the use of GenAI as a collaborative tool with medical professionals to enhance patient care outcomes, and on its potential to revolutionise medical research and contribute to scientific discoveries. This finding is consistent with earlier studies.[1,7,19] It implies that these emerging technologies are being used in numerous academic activities and practice exercises, which may lead to greater effectiveness, higher productivity and better overall quality of student outputs. Similar conclusions have been drawn in much of the previously published scientific literature.[7,15,17,19,20]
Challenges
Notwithstanding the favourable perspective, the research by Chen and Hu[7] also identifies difficulties associated with GenAI technologies, as students articulate concerns regarding excessive dependency on the technology, its potential impact on the value of higher education and matters of accuracy, transparency, privacy and ethical considerations. These concerns align well with those of our study, where participants’ responses could be categorised as technological challenges (lack of expertise, complexity in usage, cost and infrastructure) and human-centric challenges (ethical concerns, loss of the human touch in education, reduced creativity and increased dependency).
Concerns about accuracy and ethics were highlighted in a study by Peres et al., which emphasises the risk of plagiarism, given the difficulty of verifying the validity of GenAI-generated outputs.[21] Furthermore, the study by Mondal et al. revealed concerns about inaccurate results, restricted use due to privacy and dependability concerns and excessive dependence on chatbots for instructional purposes.[18]
In response to these concerns, Lubowitz indicated that these tools cannot judge truthfulness or identify their own errors, necessitating human supervision.[22] Medical students, for their part, were apprehensive about GenAI replacing human tasks, which motivated them to stay updated on AI advancements in healthcare, particularly in terms of digitalisation. This unease is echoed by various researchers, who describe GenAI’s potential to impede creativity and critical thinking, as well as its impact on human values and employment opportunities.[23-27]
However, a study by Jha et al. concluded that Nepali undergraduate medical students do not fully understand the potential effects of AI on healthcare; they remain uncertain about its effects on patients and the healthcare system, despite recognising its impact on the profession.[26] This inference partially aligns with our observations and supports the statement by Jha et al.[26] that delaying AI education may leave students less equipped to handle obstacles in their personal and professional lives. Therefore, a training programme for using GenAI, emphasising its ethical and prudent implementation, should be incorporated into the undergraduate medical curriculum. AI education for aspiring future doctors should not be delayed any further.
Implementation framework and implications
Our findings revealed a clear confidence–enthusiasm gap, where students acknowledged the high potential of GenAI but reported limited confidence in its use (P-8 score: 3.42 ± 0.86). To address this problem, we propose an implementation framework that systematically builds competence while leveraging students’ enthusiasm. At the foundational level, structured GenAI literacy modules are essential to strengthen baseline knowledge, complemented by ethics-first training that reflects students’ strong concern for responsible practice (P-4 score: 4.02 ± 0.79). Once confidence and ethical grounding are established, integration into learning environments becomes crucial. Simulation-based exercises provide safe opportunities to experience benefits most valued by students (P-3 score: 4.02 ± 0.79), while supervised decision support and case-based learning contextualise AI use in clinical reasoning. Finally, structured research engagement, collaborative care training and competency assessments ensure advanced application, balancing human judgment with AI assistance. This framework thus addresses identified challenges while transforming opportunities into structured educational pathways.
The coexistence of positive attitudes with low self-efficacy is consistent with patterns observed by Rani et al. (2025), who reported that although a majority of medical students favour inclusion of AI in the curriculum (nearly 87%), only a small fraction (around 12%) feel genuinely familiar or confident with its practical application.[28] An exposure–experience disconnect exists among medical students, who recognise AI’s theoretical benefits but often lack structured opportunities for hands-on engagement and skill development with GenAI tools. The lack of standardised AI curricula in Indian medical colleges likely contributes to this pervasive sense of unpreparedness. This ambivalence may also mirror the redefinition of the physician’s role in an era of rapid technological transformation. Beyond these foundational concerns, students fear that AI, like other disruptive technologies, may render their roles redundant, stifle creativity and foster overdependence. These psychological challenges expose the difficulty of integrating GenAI into a profession that is both humanistic and socioculturally embedded, highlighting the need for curricula that blend technical literacy with ethical, empathetic practice.
Limitations
First, although the authors made multiple attempts to reach undergraduate medical students nationwide, the sample size remains a significant limitation; moreover, participants from private institutes were limited to the first and second years of the programme. A larger sample is therefore required to generalise the findings. Second, to check for reading comprehension, the survey design should have included at least one ‘reversed item.’ Furthermore, the questionnaire should have been validated for construct validity.
Recommendations
To address the confidence–enthusiasm gap identified in our findings, we recommend structured integration of AI into the National Medical Commission competency-based MBBS curriculum. A dedicated AI module with clear objectives, practical training and competency-based assessments would bridge the gap between students’ strong interest and their limited confidence in application. Universities and medical colleges could further strengthen this by offering AI as a value-added, credit-bearing course accessible online, ensuring equitable participation. Beyond technical proficiency, curricular design must embed ethical, humanistic and socio-cultural dimensions to safeguard empathy, communication and professional integrity in future practice. For curriculum designers, progressive integration, beginning with AI literacy and advancing to clinical applications, directly addresses the 34.6% confidence deficit. Medical institutions should invest in simulation laboratories, reflecting students’ highest-ranked perceived benefit, and train faculty to provide expert guidance. Policymakers, in turn, must establish national competency standards and ethical guidelines to ensure responsible, patient-centred AI adoption across medical education.
CONCLUSION
The present study highlights that Gen Z medical students in India largely perceive GenAI as a transformative force in medical education. Students recognise the potential of GenAI in fostering personalised learning, facilitating comprehension of complex clinical scenarios and enhancing both teaching and patient care outcomes. They also appreciate its role in advancing scientific research and promoting efficiency and productivity within academic and clinical settings. However, despite this optimism, students expressed reservations regarding their own readiness to navigate the evolving landscape of AI in healthcare and raised important concerns relating to patient privacy, data security and the necessity of maintaining the core humanistic values within the medical profession.
Our findings underscore the urgent need for systematic and structured training modules on GenAI within the undergraduate medical curriculum. Such modules should not only enhance technical proficiency and critical thinking skills but also address ethical, legal and practical limitations associated with AI integration. The National Medical Commission and medical institutions must prioritise continuous faculty and student development programmes, invest in robust digital infrastructure and foster experiential learning to bridge the current preparedness gap.
In summary, meaningful and responsible integration of GenAI into Indian medical education will require a balanced approach – one that develops technological competence, safeguards ethical standards and upholds the principles of patient-centred care. Further multicentric research is warranted to evaluate the long-term impact of such curricular innovations and ensure sustainable utilisation of GenAI in medical education.
Ethical approval:
The research/study was approved by the Institutional Ethics Committee, approval number PUIECHR/PIMSR/00/081734/7806, dated: 18th October 2024.
Declaration of patient consent:
The authors certify that they have obtained all appropriate patient consent.
Conflict of interest:
There are no conflicts of interest.
Use of artificial intelligence (AI)-assisted technology for manuscript preparation:
The authors confirm that there was no use of artificial intelligence (AI)-assisted technology for assisting in the writing or editing of the manuscript, and no images were manipulated using AI.
Financial support and sponsorship: Nil.
References
- Traditional AI vs. Generative AI: What’s the difference. 2024. Available from: https://education.illinois.edu/about/news-events/news/article/2024/11/11/what-is-generative-ai-vs-ai#:~:text=ai%20analyzes%20and%20interprets%20existing,models%20based%20on%20existing%20data [Last accessed on 2025 Jul 20]
- Next-generation education: The impact of generative AI on learning. J Inform Educ Res. 2024;4:2009-14.
- The role of generative AI in education: Use cases, benefits, and challenges in 2024. Available from: https://www.fullestop.com/blog/generative-ai-in-education-use-cases-benefits-and-challenges [Last accessed on 2024 Sep 28]
- Generative AI in learning: Empowering the next generation of education. J Artific Intell Comput Sci. 2025;1:1-9.
- The use of generative AI in education: Applications, and impact. Available from: https://pressbooks.pub/techcurr2023/chapter/the-use-of-generative-ai-in-education-applications-and-impact [Last accessed on 2024 Sep 28]
- How students are currently using generative AI. Artificial intelligence. 2024. Available from: https://nationalcentreforai.jiscinvolve.org/wp/2024/03/26/how-students-are-currently-using-generative-ai [Last accessed on 2024 Sep 28]
- Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int J Educ Technol High Educ. 2023;20:43.
- Top 6 use cases of generative AI in education. Available from: https://research.aimultiple.com/generative-ai-in-education [Last accessed on 2024 Sep 28]
- Generative AI: The pros and cons for students and schools. Education horizons. 2023. Available from: https://educationhorizons.com/blog/generative-ai-the-pros-andcons-for-students-and-schools [Last accessed on 2024 Sep 28]
- With generative AI, we can reimagine education, but the sky is the limit. 2024. Available from: https://www.weforum.org/agenda/2024/02/with-generative-ai-we-can-reimagine-education-and-the-sky-is-the-limit [Last accessed on 2024 Sep 28]
- Evaluating the impact of students’ generative AI use in educational contexts. J Res Innov Teach Learn. 2024;17:152-67.
- Center for teaching innovation. Available from: https://teaching.cornell.edu/generative-artificial-intelligence [Last accessed on 2024 Sep 28]
- Thematic analysis. In: Cooper H, Camic PM, Long DL, Panter AT, Rindskopf D, Sher KJ, eds. APA handbook of research methods in psychology. Research designs: Quantitative, qualitative, neuropsychological, and biological. Vol 2. Washington, DC: American Psychological Association; 2012. p. 57-71. Available from: http://content.apa.org/books/13620-004 [Last accessed on 2024 Sep 28]
- Student perceptions of generative artificial intelligence: Investigating utilization, benefits, and challenges in higher education. Systems. 2024;12:385.
- Unveiling students’ experiences and perceptions of artificial intelligence usage in higher education. J Univ Teach Learn Pract. 2024;21:1-20.
- The case for generative AI in scholarly practice. 2023. Available from: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4407587 [Last accessed on 2024 Dec 10]
- Generative AI in undergraduate medical education: A rapid review. J Med Educ Curric Dev. 2024;11:1-15.
- A qualitative survey on perception of medical students on the use of large language models for educational purposes. Adv Physiol Educ. 2025;49:27-36.
- Chatbots and other AI for learning: A survey of use and views among university students in Sweden. Göteborg, Sweden: Chalmers University of Technology; 2023. Available from: https://research.chalmers.se/publication/535715/file/535715_fulltext.pdf [Last accessed on 2024 Dec 10]
- Artificial intelligence in medicine: A multinational multi-center survey on the medical and dental students’ perception. Front Public Health. 2021;9:795284.
- On ChatGPT and beyond: How generative artificial intelligence may affect research, teaching, and practice. Int J Res Mark. 2023;40:269-75.
- ChatGPT, an artificial intelligence chatbot, is impacting medical literature. Arthroscopy. 2023;39:1121-22.
- Attitude of college students towards ethical issues of artificial intelligence in an international university in Japan. AI Soc. 2022;37:283-90.
- Influence of artificial intelligence on Canadian medical students’ preference for radiology specialty: A national survey study. Acad Radiol. 2019;26:566-77.
- Medical students’ perceptions towards digitization and artificial intelligence: A mixed-methods study. Healthcare (Basel). 2022;10:723.
- Undergraduate medical students’ and interns’ knowledge and perception of artificial intelligence in medicine. Adv Med Educ Pract. 2022;13:927-37.
- Medical student perspectives on the impact of artificial intelligence on the practice of medicine. Curr Probl Diagn Radiol. 2020;50:614-19.
- Perception of medical students and faculty regarding the use of artificial intelligence (AI) in medical education: A cross-sectional study. Cureus. 2025;17:e77514.