• @Wilzax
    45 · 1 month ago

    If it has as low a false negative rate as human-read mammograms, I see no issue. Run every scan through the AI first, then have a human check only the positive results. That saves doctors’ time when a scan is so clean that even the AI doesn’t see anything fishy.

    Alternatively, if it has a lower false positive rate than humans, have doctors check only the negative results. If the AI sees something, it’s DEFINITELY worth a biopsy, and a human doctor re-reads the negative readings just to make sure nothing that’s worth looking into goes unnoticed.

    Either way, as long as it isn’t worse than humans on both kinds of failure, it’s useful for saving medical resources.
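
    To make the routing concrete, here’s a minimal sketch of the “AI first, human checks the flags” version. `ai_score` and the threshold are made-up placeholders, not any real system:

    ```python
    # Hedged sketch of the triage above: the AI screens every scan, and only
    # the ones it flags go to a human reader. ai_score is an assumed function
    # returning a probability of malignancy in [0, 1]; the threshold value is
    # illustrative and would be tuned for a very low false negative rate.
    def triage(scans, ai_score, threshold=0.05):
        for_human_review, ai_cleared = [], []
        for scan in scans:
            if ai_score(scan) >= threshold:
                for_human_review.append(scan)  # doctor confirms or rules out
            else:
                ai_cleared.append(scan)  # even the AI sees nothing fishy
        return for_human_review, ai_cleared
    ```

    The alternative routing is the same loop with the branches swapped: AI positives go straight to biopsy workup and a human re-reads the negatives.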

    • Match!!
      23 · 1 month ago

      an image recognition model like this is usually tuned specifically to have a very low false negative rate (often well below human readers) in exchange for a high false positive rate (it’s overly cautious about flagging cancer)!
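
      a rough sketch of what that tuning looks like in practice: pick the decision threshold on validation data so sensitivity hits a target, and accept whatever false positive rate that costs. the 99% target and names here are illustrative, not from any specific model:

      ```python
      import numpy as np
      from sklearn.metrics import roc_curve

      def threshold_for_sensitivity(y_true, y_score, target_tpr=0.99):
          """Pick the decision threshold that reaches a target true positive
          rate (sensitivity), accepting whatever false positive rate results."""
          fpr, tpr, thresholds = roc_curve(y_true, y_score)
          # thresholds come back in decreasing order, so the first index where
          # tpr meets the target is the highest threshold that still hits the
          # sensitivity goal, i.e. the one that gives up the least specificity
          idx = int(np.argmax(tpr >= target_tpr))
          return thresholds[idx], fpr[idx]
      ```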

    • @Railing5132
      8 · 1 month ago

      This is exactly what is being done. My eldest child is in a Ph.D. program for human-robot interaction and medical intervention, and has worked on image analysis systems in this field. Their intended use is exactly that: a “first look” and a “second look”. The first look helps catch the small, easily overlooked pre-tumors and tentatively marks clear ones. The second look is a safety net for tired, overworked, or outdated eyes.

    • @UNY0N
      4 · 1 month ago

      Nice comment. I like the detail.

      For me, though, the main takeaway doesn’t have anything to do with the details; it’s about the true usefulness of AI. The implementation details aren’t the point, the general use case is.

    • @Dkarma
      4 · 1 month ago

      You in QA?

      • @Wilzax
        4 · 1 month ago

        HAHAHAHA thank fuck I am not