Clinicians could be fooled by biased AI, despite explanations: Regulators pinned their hopes on clinicians being able to spot flaws in explanations of an AI model’s logic, but a study suggests this isn’t a safe approach.

  • @abhibeckert · 11 months ago

    I just don’t think that’s how this will work in practice.

    What I expect is the AI will provide several possible explanations for the test results. Most of them will be wrong, one might be correct. If the clinician can think of more that the AI missed, those can simply also be added to the list of things to consider.

    Human clinicians are surprisingly bad at diagnosing problems - as soon as we think of something that fits the symptoms, we struggle to think of other conditions that would also fit. A lot of time (sometimes years) is wasted proving the first diagnosis wrong before anyone comes up with an alternative hypothesis. AI could do a much better job in that specific scenario… but that doesn’t mean it can replace humans entirely.