Educational content on VJHemOnc is intended for healthcare professionals only. By visiting this website and accessing this information you confirm that you are a healthcare professional.

ASH 2024 | The current limitations related to patient-facing artificial intelligence

Gwen Nichols, MD, The Leukemia & Lymphoma Society, Rye Brook, NY, comments on the limitations of patient-facing artificial intelligence (AI). She mentions the value of natural language processing to potentially enhance communication between healthcare practitioners but highlights that its accuracy is currently limited due to the risk of perpetuating incorrect information. She goes on to discuss the importance of ensuring that clinical trial databases are representative of all patients. This interview took place at the 66th ASH Annual Meeting and Exposition, held in San Diego, CA.

These works are owned by Magdalen Medical Publishing (MMP) and are protected by copyright laws and treaties around the world. All rights are reserved.

Transcript (AI-generated)

Well, I think we’re really at the very beginning of natural language processing. And so I think how it can potentially help is being utilized to pick up things, in conversations with patients and in notes from patients, that will enhance communication with other physicians and other healthcare practitioners, which at the current time is limited. If you need to look through thousands of pages in an EMR, if that can be successfully made succinct and readable, it’s very exciting.

But it’s limited because if it’s wrong, it’s wrong. That information gets perpetuated very easily with many of the AI algorithms. And so, garbage in, garbage out, we have to be very, very cautious. And that also goes for using AI in research. If we’re doing generative AI with available datasets, those datasets are already biased. And trying to make certain that the data we put in is representative of all patients, and not just who happens to be in various databases, particularly clinical trial databases, I think there’s a lot that needs to be looked at very carefully before we have a widespread use of AI and take it as gospel.

This transcript is AI-generated. While we strive for accuracy, please verify this copy with the video.
