Integrating AI-Generated Learning Materials into Health Professional Education

Curious Dr. George
Cancer Commons Editor in Chief George Lundberg, MD, is the face and curator of this invitation-only column

Miriam Chickering, RN, BSN, NE-BC
CEO of NextGenU.org

Artificial intelligence (A.I.) is a powerful tool that has made its way into diverse fields—and medical education is no exception. Our Curious Dr. George asks Miriam Chickering, RN, BSN, NE-BC, CEO of NextGenU.org, how the 22-year-old, U.S.-based organization she leads uses A.I. to help create free educational materials for healthcare professionals and students.

Curious Dr. George: NextGenU.org provides teaching materials and curricula in a wide range of medical, public health, and humanitarian fields to more than 2,000 institutions and countless students worldwide, free of charge. With such a vast mission, NextGenU.org would seem ideally positioned to use large language model-based generative A.I. to create original course materials. Products such as ChatGPT have taken the information world by storm in recent months, yet many users have encountered serious problems with factual errors and misrepresentation in A.I.-generated materials.

To what extent does NextGenU.org use products such as ChatGPT in creating course materials, and what quality control measures do you use to minimize errors?

Miriam Chickering, RN, BSN, NE-BC: Integrating A.I. into health professional education (HPE) holds the undeniable promise of a future in which boundless, personalized educational resources and experiences redefine how students prepare for the complexities of healthcare. However, that bright future is not without its challenges. The landscape is mired in concerns over the quality of A.I. outputs, which, if left unaddressed, could compromise the integrity of HPE and the safety of healthcare delivery.

Incorporating A.I. into HPE requires a thoughtful process. NextGenU.org’s current and evolving process is as follows:

  1. Educate the Team on effective interaction with A.I. tools and evaluation of A.I.-generated content.
  2. Evaluate Each A.I. Tool, beginning with a thorough assessment of its capabilities, dataset, and the expertise and philosophy of its developers.
  3. Identify Integration Points to determine areas for A.I. enhancement within each workflow process.
  4. Develop and Test A.I. Prompts to generate specific outputs, and test each prompt comprehensively to optimize it for accuracy.
  5. Curate A.I. Inputs with vetted, high-quality resources so that outputs are relevant and accurate (a minimal sketch of this grounded-prompt pattern follows this list).
  6. Create Content such as multiple-choice questions and interactive activities while ensuring alignment with educational objectives.
  7. Ensure Quality Control by having subject matter experts validate the accuracy and pedagogical soundness of outputs, confirming that each output aligns with the approved inputs or revising it against reliable, relevant resources.
  8. Integrate Content into the curriculum, and review and act upon student feedback while monitoring student outcomes.
  9. Expand Learning Opportunities by using A.I. to offer learners safe, virtually unlimited practice.
  10. Optimize Resource Allocation by shifting resources saved through A.I. efficiency towards teaching soft skills and nuanced clinical judgment.
  11. Practice Continuous Improvement through monitoring and refining the use of A.I. tools.
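
To make steps 4 and 5 concrete, the sketch below shows one way a grounded prompt might look: a template that confines the model to a single vetted source, sent through a chat-style A.I. API. The `openai` client, model name, and prompt wording are assumptions made for illustration; they are not NextGenU.org's actual tooling.

```python
# A minimal sketch of steps 4-5: a controlled prompt built around one vetted
# source. The openai client, model name, and prompt wording are illustrative
# assumptions, not NextGenU.org's actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """You are drafting course material for health professional education.
Use ONLY the vetted source text below; do not introduce outside facts.

Vetted source:
{source}

Task: {task}
"""

def generate_from_vetted_source(source_text: str, task: str) -> str:
    """Generate a draft grounded in a single curated input (step 5)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(source=source_text, task=task),
        }],
        temperature=0.2,  # a low temperature keeps drafts more reproducible
    )
    return response.choices[0].message.content
```

In step 4, a template like this would be run against several representative sources, with subject matter experts scoring each draft, before the prompt is adopted into the workflow.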

Evaluating an A.I. tool is like bringing a new research assistant onto a seasoned research team. As with any new addition, we start by scrutinizing the A.I.’s “credentials” (its dataset and developers), much like evaluating a candidate’s CV for academic lineage and references.

Once we welcome our “digital assistant” to the team, we don’t entrust it with independent research or content creation. Instead, we provide carefully vetted inputs and detailed prompts, akin to giving a new research assistant specific literature to review or data to analyze. These controlled inputs help ensure that the A.I.’s outputs are standardized. The content is then subjected to a rigorous quality control process: subject matter experts, like senior researchers, oversee and verify the A.I.’s work at every point where it is used.

We use A.I. assistance for a number of tasks, including:

  1. Comparing and contrasting competency sets and course structures
  2. Developing objectives and outcomes
  3. Drafting multiple-choice questions and case studies (see the sketch after this list)
  4. Identifying web-based learning resources
  5. Summarizing learning resources
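
As a hypothetical illustration of the third task, the grounded-prompt helper sketched above could be asked for a multiple-choice question in a structured format, with the draft queued for expert review rather than published directly. The JSON schema and sanity checks here are assumptions for the sketch, not NextGenU.org's actual format.

```python
# Hypothetical sketch of drafting a multiple-choice question with the
# generate_from_vetted_source() helper from the earlier sketch; the JSON
# schema and checks are illustrative, not NextGenU.org's actual format.
import json

MCQ_TASK = (
    "Write one multiple-choice question with four options. "
    'Respond with JSON only, in the form {"stem": str, '
    '"options": [str, str, str, str], "answer_index": int, "rationale": str}.'
)

def draft_mcq(source_text: str) -> dict:
    raw = generate_from_vetted_source(source_text, MCQ_TASK)
    mcq = json.loads(raw)  # a production pipeline would handle malformed JSON
    # Basic sanity checks before the draft is queued for subject matter
    # expert review (step 7); drafts are never published directly.
    assert len(mcq["options"]) == 4
    assert 0 <= mcq["answer_index"] < 4
    return mcq
```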

Readers interested in learning more can watch our videos demonstrating how we use A.I. assistance to create multiple-choice questions and activities. I have also coauthored an academic paper on the use of A.I. in health professional education, and I will take part in a live discussion of this topic in an upcoming webinar on December 1.

We envision a future where learners benefit from a virtually limitless sandbox to practice, experiment, and learn from an endless variety of situations, ensuring that when they face real-life challenges, they are well-prepared and confident.

Miriam Chickering can be reached at mchickering@nextgenu.org.

***

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.