CBC on a medical translation service
CBC profiles a medical translation service used by doctors in Park Ex treating people who can’t yet communicate in French or English. But CBC didn’t consult Quebec over whether this is an acceptable practice.
DeWolf 12:45 on 2025-01-30 Permalink
My doctor told me he uses Google Translate for many of his patients, which he admits is not ideal.
Joey 15:00 on 2025-01-30 Permalink
There are companies, like Voyce, that provide access to live interpreters for medical scenarios like these – they are active in Quebec (their website lists the West Montreal CIUSSS and Maimonides as customers).
Somewhat relatedly, I took my kid to a walk-in clinic in the fall for some respiratory issues; the doctor asked us to consent to her using an app that would record, transcribe and (here comes the AI) summarize the visit, which she said would form the basis of (or, ideally, the entirety of) her notes, pending her review.
Kate 15:14 on 2025-01-30 Permalink
Wow.
I tried out DeepSeek this week and it made mistakes. Randomly, I asked it about an ancestor of mine (1790-1874) who wasn’t notable, but has had some historical writing done about him that I know about. DeepSeek told me all about a professor in Victoria, BC with a similar, but not identical name, who died a few years ago, and who isn’t, as far as I could tell, related to me.
It also told me that Notre-Dame in Montreal has flying buttresses, which it does not.
For these and other reasons I’m not sure yet about trusting medical analysis to AIs. An AI might think I had an ailment in the flying buttress.
Joey 16:02 on 2025-01-30 Permalink
I certainly wouldn’t trust a ChatGPT/DeepSeek-style large language model with any kind of diagnostic task; the doctor I saw was basically using a transcription/summary tool that, I suppose, had been trained on things like anatomy and medical jargon. Just to be clear, the tasks being automated were basically administrative: listen to a conversation, transcribe it, and summarize it. There was no expectation that it would provide any diagnostic value, and the doctor made sure to explain that she would immediately review the notes to make sure they were OK. I would imagine the end result is that the doctor gets decent-quality patient notes in a fraction of the time it would take her to prepare them herself – the kind of automation that could actually reduce wait times if it freed up enough time in enough doctors’ schedules.
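For the curious, here is a minimal sketch of what that kind of transcribe-then-summarize pipeline might look like, assuming the open-source openai-whisper package for speech-to-text and a Hugging Face transformers summarizer – the model names and the audio file are placeholders, not whatever tool the clinic actually uses:

import whisper
from transformers import pipeline

# 1. Listen: transcribe the recorded visit with a pre-trained speech-to-text model.
stt = whisper.load_model("base")
transcript = stt.transcribe("visit_recording.wav")["text"]

# 2. Summarize: condense the transcript into draft notes.
# (A real visit transcript would likely exceed the model's input limit and need chunking.)
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
draft_notes = summarizer(transcript, max_length=200, min_length=50)[0]["summary_text"]

# 3. Review: the doctor reads and corrects the draft before it goes into the chart.
print(draft_notes)

The crucial step is the last one: the model drafts, the doctor signs off.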
I suspect there already exist products that adapt or augment these kinds of tools to provide pre-visit support; you could imagine a tool that reads a patient’s file and guides the doctor on how to conduct the next appointment. Of course, the major risk is that doctors will over-rely on their tools and lose their critical-thinking instincts, or some intrinsic knowledge of their patients, if they effectively outsource their documentation practice (I would love to learn more about the relationship between a doctor’s memory abilities and their patient outcomes). Then again, maybe the AI would catch something the doctor wouldn’t – or maybe AI generally will be able to predict illnesses more effectively, based on patterns the typical human brain can’t analyze.
jeather 17:02 on 2025-01-30 Permalink
I would, probably, trust a transcription tool (as I don’t have a second language accent which reduces accuracy); I would not, at this time, trust anything that claimed to summarize.
MarcG 17:09 on 2025-01-30 Permalink
A pretty shocking study came out recently that compared diagnostic accuracy across three conditions: doctors with conventional resources, doctors with conventional resources plus AI, and AI by itself. Doctors with AI scored slightly higher than doctors without (76% vs. 74%), but AI by itself won by a landslide (92%).
DeWolf 17:18 on 2025-01-30 Permalink
The AI transcription services I have used are remarkably good — light years better than automated transcription used to be — but that’s a very different beast from generative AI that creates “original” content.
Kevin 15:29 on 2025-01-31 Permalink
MarcG
That seems to me more like a study of how doctors with enough time to spend hours participating in a study performed during simulated exams – a format that is itself still a fairly novel way of training medical students.
It would also be easy for the AI to pick up on one particular key word said by the actor/subject, while a poorly trained or poorly guided actor could inadvertently hide their disease under assorted complaints.
In any case, one study is one study.
MarcG 17:21 on 2025-01-31 Permalink
For sure it’s just one study, and I’m not generally a big fan of AI, but medicine does seem like one realm where it could be put to good use. For example, machine learning is being leveraged to make the scoring system used to decide who gets a liver transplant more accurate and equitable.
MarcG 14:47 on 2025-02-02 Permalink
An op-ed published today on the subject of medical AI – along with one of the author’s Substacks, which is worth a follow – references six studies in which AI alone outperformed doctors working with AI.