Well, I think we’re really at the very beginning of natural language processing. And so I think how it can potentially help is being utilized to pick up things, in conversations with patients and in notes from patients, that will enhance communication with other physicians and other healthcare practitioners, which at the current time is limited. If you need to look through thousands of pages in an EMR, and that can be successfully made succinct and readable, it’s very exciting. But it’s limited, because if it’s wrong, it’s wrong. That information gets perpetuated very easily with many of the AI algorithms. And so, garbage in, garbage out; we have to be very, very cautious. And that also goes for using AI in research. If we’re doing generative AI with available datasets, those datasets are already biased. We need to make certain that the data we put in is representative of all patients, not just whoever happens to be in various databases, particularly clinical trial databases. I think there’s a lot that needs to be looked at very carefully before we have widespread use of AI and take it as gospel.