AI-generated patient simulations are helping UNSW medical students develop and reflect on their communication skills.
One of the challenges of educating future doctors is giving them sufficient, safe opportunities to work with patients. Communication skills are vital for both compassionate care and clinical efficacy, but they require repeated practice with high-quality feedback, especially for challenging or sensitive conversations. This is a resource-intensive process. Generative AI tools are now providing students and educators with an additional way of developing these skills.
Building on in-person simulations and placements
Students are taught communication skills as part of UNSW’s Medicine & Health program. They then develop these skills, along with other aspects of clinical care, through placements in hospitals and clinics. There are limitations, however, on the real patient situations students can ethically work with during clinical placements. For this reason, simulations have long been a part of the medical education toolbox.
There are logistical challenges for in-person simulations, including lab capacity and the need for actors or volunteers to act as ‘patients’. GenAI is helping to address those challenges by enabling students to practise a wide variety of more challenging patient interactions, both prior to and alongside regular in-person simulations and placements.
Associate Professor Silas Taylor, Nexus Fellow and Convenor of Clinical Skills in the UNSW Medicine & Health program, is leading an AI-enabled Virtual Patient pilot as part of the Year 3 Medicine & Health curriculum.
The pilot is using a commercially available web-based simulation product, SimConverse, which students can access on their own devices. Across the year, students undertake virtual patient interactions and can repeat exercises to try different approaches to the conversation after receiving AI-generated personalised feedback and reflecting on their learning.
The simulations are based on detailed patient character prompts created by UNSW educators and clinicians.
Silas says the AI-generated simulations help students bridge the gap between campus learning and hospital experience.
“Clinical skills training and simulations help students develop skills for clinical practice and interacting with patients – essentially building their bedside manner and interpersonal communication skills – to both elicit information and make patients feel more comfortable.
“For our junior students, in-person simulations focus entirely on interactional communication skills. We’ve had great success with our in-person and online human simulated patients, but there’s still a ceiling to how many interactions you can provide. There’s also a ceiling on what we can ask the volunteers to do, in person or online, in terms of the ‘severity’ of the clinical situation, particularly when they are volunteers, not paid actors,” Silas says.
“GenAI approaches provide not only more opportunities, but also more complex scenarios in which students can develop more advanced communication skills.”
UNSW first introduced online non-AI simulations in 2016 using a video call platform for students to practise with a cohort of online volunteers. That process was used to focus purely on the students’ interactional skills and their ability to make the patient feel comfortable enough to share information about their condition.
Silas later trialled a GenAI virtual patient platform during the pandemic, when medical students couldn’t complete clinical placements. That initial pilot hit problems of authenticity, but the technology has evolved rapidly since then.
This year, 300 medical students used the tool as part of their program, interacting with 18 virtual patients – three scenarios for each of six important communication skills learning outcomes – linked with their Moodle courses.
“Students click through, land in the platform, open up the relevant activity, and then they’re able to have a very authentic, natural conversation with the AI patient,” Silas says.
The virtual patient interactions are audio-only conversations, with live on-screen transcription. Detailed background notes give the student the context and clinical information they need prior to the conversation. For some more complex or sensitive conversations, such as death and dying interactions, students are asked to imagine themselves in a more senior role.
“We have to think about the psychological safety of the user, so we provide background information about the case they’re about to go into before they’re interacting with the patient. Students aren’t going to get this experience in hospital, as it's not appropriate for junior students to interact with patients in these sorts of critical real situations,” Silas says.
They then start the conversation, speaking to the ‘patient’ who responds verbally in a natural and conversational way. The UNSW team has prompted the virtual patients to have specific character attributes and vocal traits, even authentic Aussie accents and vernacular, aligned with the background information given to the students.
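To make the idea concrete, here is a minimal sketch of how a patient character prompt of the kind described above might be assembled. The field names, wording, and structure are illustrative assumptions, not SimConverse's actual prompt format or the UNSW team's real resources.

```python
# Hypothetical sketch of a virtual-patient character prompt builder.
# All field names and behavioural instructions are illustrative
# assumptions, not the platform's real prompt schema.

def build_patient_prompt(profile: dict) -> str:
    """Assemble a system prompt describing a virtual patient's character."""
    lines = [
        f"You are {profile['name']}, a {profile['age']}-year-old patient.",
        f"Presenting complaint: {profile['complaint']}.",
        f"Personality: {profile['personality']}.",
        f"Speech style: {profile['speech_style']}.",
        "Stay in character, answer only what the student asks, and reveal "
        "sensitive details gradually as trust is established.",
    ]
    return "\n".join(lines)

# Example profile aligned with the background notes a student would read.
example = {
    "name": "Bruce",
    "age": 58,
    "complaint": "chest tightness that started while mowing the lawn",
    "personality": "stoic, downplays symptoms, worried about missing work",
    "speech_style": "casual Australian vernacular, short sentences",
}

print(build_patient_prompt(example))
```

Keeping the character attributes in structured fields like this, rather than free text, makes it easier to align each prompt with the background information given to students.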
Reflection and repetition
After each virtual patient interaction, students reflect on the conversation before repeating the interaction, applying what they learned from their first attempt.
“We ask them to think about what they did, how well it went and what they might do differently in the next interaction,” says Silas.
“They then go into that second practice with the same patient, same scenario, and do it again, but applying what they learned. They get more feedback, and then there’s a final reflection component.”
To generate personalised feedback, the AI platform parses the transcript of the interaction against a task-specific rubric and creates detailed feedback, as well as simple tick-box indicators of which outcomes have been met and which still need work.
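As a rough illustration of the tick-box side of this process, the sketch below checks a transcript against a small rubric. It is a deliberately simplified stand-in: the rubric items and keyword cues are invented for this example, and the real platform analyses the transcript with an AI model rather than string matching.

```python
# Illustrative sketch only: a keyword-matching stand-in for rubric-based
# transcript scoring. Rubric items and cue phrases are assumptions, not
# SimConverse's actual rubric; the real system uses AI, not string search.

RUBRIC = {
    "introduced self and role": ["my name is", "i'm the", "i am the"],
    "used open questions": ["how are you feeling", "tell me more", "what happened"],
    "checked understanding": ["does that make sense", "any questions"],
}

def score_transcript(transcript: str) -> dict:
    """Return a tick-box result (met / not yet met) for each rubric item."""
    text = transcript.lower()
    return {
        item: any(cue in text for cue in cues)
        for item, cues in RUBRIC.items()
    }

demo = ("Hi, my name is Alex and I'm the junior doctor today. "
        "Tell me more about the pain. Does that make sense so far?")
print(score_transcript(demo))
```

The per-item boolean output mirrors the article's "simple tick-box indicators of which outcomes have been met and which still need work"; the detailed narrative feedback would come from the AI layer on top.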
“One of the challenges in using these platforms is the creation of an effective rubric to provide feedback. We already have a lot of character prompting resources, because of our existing simulated patient program. But it was a challenge to create effective feedback prompts to make the AI accurately find the relevant information,” says Silas.
Early survey feedback on the pilot indicates students feel the AI virtual patient conversations were a safe space to practise challenging conversations and that the feedback they received was good for their learning.
“They liked not having somebody standing over their shoulder watching them when they were first practising these skills, and they found the AI-generated feedback to be very helpful to continue developing their skills for real patient interactions in future,” says Silas.