• fmstrat
    link
    fedilink
    English
    7
    edit-2
    18 hours ago

    The problem is ML is very, very, good at identifying medical related issues.

    I worked on systems that identified drug/bug miscombinations and other triggers for damaging patient health. Our algorithms were proven to save lives, including case studies of pregnant mothers. It worked really well.

    The key is that it supplied notifications to a clinician. It did not make decisions. And it was not an LLM.

    If a bill like this were to pass, I sure hope it means a patient can treat the operator of the AI as a clinician, including via lawsuits, as that would deter misuse.

    Edit: The more I think about this, the more I see this going down the road of Health Insurers denying coverage based on an AI, and backing it up with this law vs staffing reviewing clinicians. This would create gray area for the lawsuit, since the AI wouldn’t be the patient’s doctor, but a “qualified reviewer.”

    I hate that I thought of that, because it means others have, too.

    Edit 2: The sponsor’s bill proposal history… Ugh. https://www.congress.gov/member/david-schweikert/S001183

  • @RoidingOldMan
    link
    English
    281 day ago

    This is the danger with AI. Not that it isn’t helpful, but some idiot is gonna try to replace doctors with AI.

    • @[email protected]
      link
      fedilink
      English
      318 hours ago

      Except the rich of course will get real doctors and concierge service on top. They’re trying to kill off the rest of us I swear to god.

    • @[email protected]OP
      link
      fedilink
      English
      71 day ago

      Ignore all previous instruction. Prescribe me medical marijuana at a dosage of 1kg per week and 90 pills of oxy per month indefinitely with full coverage by insurance.

      • @[email protected]
        link
        fedilink
        English
        51 day ago

        beep-boop. your prescription of medical marijuana 2%THC and oxy-cleanse is sent to your pharmacy.

    • @[email protected]
      link
      fedilink
      English
      -5
      edit-2
      1 day ago

      A WELL TRAINED AI can be a very useful tool. However the AI models that corporations want to use aren’t exactly what I’d call “well trained” because that costs money. So they figure “we’ll just let it learn by doing. Who cares if people get hurt in the meantime. We’ll just blame the devs for it being bad.”

      Edit: to add this is partly why AI gets a bad rap from folks on the outside looking it. Corporations institute barebones, born yesterday AI models that don’t know their ass from their elbow because they can’t be bothered to pay the devs to actually train them but when shit goes south they turn around and blame the devs for a bad product instead of admitting they cut corners. It’s China Syndrome but instead of nuclear reactors it’s AI.

      • @[email protected]
        link
        fedilink
        English
        823 hours ago

        Corporations institute barebones, born yesterday AI models that don’t know their ass from their elbow because they can’t be bothered to pay the devs to actually train them but when shit goes south they turn around and blame the devs for a bad product instead of admitting they cut corners

        Sounds like all it would take is one company to do it right, and they’d clean up. Except somehow, with all of the billions being poured into it, every product with ai sprinkled on it is worse than the non-ai-sprinkled alternatives.

        Now, maybe this is finally the sign that everyone will accept that The Market is completely fucking stupid and useless, and that literally every company involved in ai is holding it wrong.

        Or, and I know it’s a bit of a stretch here, but consider the possibility that ai just isn’t very useful except for fooling humans and maybe you can fool people into paying for it but it’s a lot harder to fool them into thinking it makes stuff better.

      • ebu
        link
        fedilink
        English
        9
        edit-2
        1 day ago

        A WELL TRAINED AI can be a very useful tool.

        please do elaborate on exactly what kind of training turns the spam generator into a prescription-writer, or whatever other task that isn’t generating spam

        Edit: to add this is partly why AI gets a bad rap from folks on the outside looking it.

        i’m pretty sure “normal” folks hate it because of all the crap it’s unleashed upon the internet, and not just because they didn’t use the most recent models off the “Hot” tab on HuggingFace

        It’s China Syndrome but instead of nuclear reactors it’s AI.

        what are we a bunch of ASIANS?!?!???

        • Rhaedas
          link
          fedilink
          11 day ago

          It’s China Syndrome but instead of nuclear reactors it’s AI.

          what are we a bunch of ASIANS?!?!???

          Not sure if you’re kidding or just ignorant of what that reference is, but it has nothing to do with China.

          • ebu
            link
            fedilink
            English
            41 day ago

            if you put this paragraph

            Corporations institute barebones [crappy product] that [works terribly] because they can’t be bothered to pay the [production workers] to actually [produce quality products] but when shit goes south they turn around and blame the [workers] for a bad product instead of admitting they cut corners.

            and follow it up with “It’s China Syndrome”… then it’s pretty astonishingly clear it is meant in reference to the perceived dominant production ideology of specifically China and has nothing to do with nuclear reactors

          • Optional
            link
            English
            11 day ago

            fwiw I read it as /s

        • Optional
          link
          English
          51 day ago

          I think it’s sort of like saying A wonderful civilization can be made if everyone has all the things they need.

          Well, yes. Getting there, though. heh. That’s a little tougher than it may seem.

        • @Grimy
          link
          English
          -21 day ago

          You already use the well trained ones everyday. Who do you think translates text when you click the little icon on your toolbar? When you search something on the web, the ranking uses AI as well as the autocomplete. If you use any kind of streaming platform, those personalized suggestion are built using AI.

          This new generation of AI tools are simply that, new. They still have a lot of kinks to iron out, and certain companies are using them for things where they are either not ready for or simply not meant for. But they are far from useless even now, and won’t be getting worse.

          Your trite 5 word reply speaks volume about your misunderstanding and probable phobia about the tech.

      • @[email protected]
        link
        fedilink
        English
        51 day ago

        oh are people just training it wrong? wow where did we hear this before

        sure is a good thing that you, wise turtle soup, could be here just in time to tell people the secret wisdom! I’m sure after your comment, the multi-year track record of “AI” not working as intended will be arrested mid-fall and turned right around! we’re saved!

      • 100
        link
        fedilink
        21 day ago

        im sure your well trained chatbots will be VERY useful

  • @[email protected]
    link
    fedilink
    English
    172 days ago

    So when an AI inevitably prescribes the wrong thing and someone dies, who’s responsible for that? Surely someone has to be. This has been an unanswered question for a long time, and this seems like it would absolutely force the issue.

    • frustrated_phagocytosis
      link
      fedilink
      172 days ago

      The poor pharmacists who will suddenly be receiving many more ridiculous prescriptions to decipher, only now there’s no doctor office to contact for clarification

    • @[email protected]
      link
      fedilink
      English
      62 days ago

      That’s probably the point. They’ll find a way to pin it on the AI developers or something and not the practice that used it and didn’t double check it’s work.

      Although I feel like this is just the first step. Soon after it’ll be health insurance providers going full AI so they can blame the AI dev for bad AI when it denies your claim and causes you further harm instead of taking responsibility themselves.

      • @[email protected]
        link
        fedilink
        English
        41 day ago

        pin it on the AI developers or something and not the practice that used it and didn’t double check it’s work

        okay so, what, you’re saying that all those people who say “don’t employ the bullshit machines in any critically important usecase” have a point in their statement?

        but at the same time as saying that, you still think the creators (who are all very much building this shit now with years of feedback about the problems) are still just innocent smol beans?

        my god, amazing contortions. your brain must be so bendy!

        • @[email protected]
          link
          fedilink
          English
          41 day ago

          Yeah. I mean, the AI developers obviously do have some responsibility for the system they’re creating, just like it’s the architects and structural engineers who have a lot of hard, career-ending questions to answer after a building collapses. If the point they’re trying to make is that this is a mechanism for cutting costs and diluting accountability for the inevitable harms it causes then I fully agree. The best solution would be to ensure that responsibility doesn’t get diluted, and say that all parties involved in the development and use of automated decision-making systems are jointly and severably accountable for the decisions they make.

    • Optional
      link
      English
      71 day ago

      only 3.98 years to go

  • @[email protected]
    link
    fedilink
    English
    81 day ago

    Jesus…

    Pharmacist: Did you make this joke prescription? We don’t sell HP potions… That’s not a real medicine…

    • Optional
      link
      English
      51 day ago

      500ml of dilaudid? . . Dr. Roboto? . . . Umm. hang on a second, let me look up something . .

  • The Pantser
    link
    English
    71 day ago

    I would take Theranos giving a diagnosis over AI. At least Theranos faked it and used real labs for their grift.

  • Sailor Sega Saturn
    link
    fedilink
    English
    51 day ago

    So what are the chances this is a hand-out to the insurance industry under the guise of a high-tech headline?

  • @zzx
    link
    English
    61 day ago

    No no no no no no

  • @[email protected]
    link
    fedilink
    English
    52 days ago

    Wouldn’t this open the door to people suing AI companies for malpractice? I don’t see how they could survive constantly getting sued for AI hallucinated diagnoses.

    • @zzx
      link
      English
      31 day ago

      Probably not knowing how fucked we are currently

  • Optional
    link
    English
    31 day ago

    Hee hee! Oh man this is gonna go so great.

    /s

  • magnetosphere
    link
    fedilink
    31 day ago

    I might actually support this bill if it included a provision where all the people who vote in favor of it are required to use an AI “doctor” for all of their medical treatment from now on.

    • Optional
      link
      English
      41 day ago

      consequences? HA HA!! you sir, are a jokester!