• @[email protected]
    76
    6 days ago

    Every time I read stuff like this, I remember that slide that says “A computer can never be held accountable, therefore a computer must never make a management decision.”

    • @BigBenis
      1
      4 days ago

      Season two lands next year!

  • @psion1369
    3
    4 days ago

    AI should be used as a recommendation, not an absolute answer.

    • @maplebar
      3
      4 days ago

      “AI” shouldn’t be used at all under any circumstance in which human lives are at stake.

      And it’s fucking crazy to me that that even needs to be said in the first place.

    • @[email protected]
      3
      4 days ago

      I think we need laws governing how businesses use and market the term. There is nothing intelligent about language models; most of what “AI” is doing in businesses is closer to “Automated Instructions” than anything intelligent.

      Laws need to dictate that companies MUST provide a reasonable way to reach a human representative, and that companies are legally responsible for what their automated systems say.

      It’s fine to set up automated systems to assist people within companies, as the majority of issues people have can be solved through automated processes.

      User: “I need access to this network share”

      LLM: “Okay, submit this form: [link to network share access request form].”

      LLM: Can I further assist?

      The user submits the form, specifying the network path, choosing read or read/write permissions via radio buttons, and giving the reason for needing access.

      The form emails an approve/deny request to the owner of that specific network share.

      The approver clicks approve, the user is added to the required Active Directory group, and the user receives an email stating they have been added and should log out and back in so their Active Directory groups update group policies.

      Time taken by the user: 5 minutes.

      Many companies have so many requests coming in that things like this often don’t reach the approving parties and get completed for weeks.

      But if you set up an LLM inside your company, non-external-facing, that locates forms and processes but cannot access user data or permissions, it can significantly cut the workload of managing 60,000 users.

      (I’m sure there are a million other uses that could be legitimate, but that’s just a quick one off the top of my head)
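To make that workflow concrete, here is a minimal sketch of such an internal assistant. Everything here is invented for illustration: the function names, the form URL, and the keyword lookup (which stands in for the LLM). The key property is the one the comment describes: the assistant only locates forms, while approval and provisioning stay with the share owner and ordinary automation.

```python
def handle_request(message, form_catalog):
    """Match a user's question to a self-service form by keyword.

    A real deployment would use an internal LLM for the matching;
    a plain keyword lookup stands in for it here. The assistant only
    locates forms -- it never reads user data or changes permissions.
    """
    for keyword, form_link in form_catalog.items():
        if keyword in message.lower():
            return f"Okay, submit this form: {form_link}"
    return "No matching form found - routing you to a human."


def process_submission(owner_approves):
    """Approval stays with the share owner, not the assistant."""
    if not owner_approves:
        return "denied"
    # A real system would add the user to the Active Directory group here.
    return "added; log out and back in to update group policies"


# Example interaction (form URL is hypothetical):
catalog = {"network share": "https://intranet.example/forms/share-access"}
print(handle_request("I need access to this network share", catalog))
# -> Okay, submit this form: https://intranet.example/forms/share-access
```

Anything the keyword lookup can’t match falls through to a human, which is the legal backstop the comment argues for.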

  • @[email protected]
    9
    5 days ago

    Does anyone have a good article on them using AI?

    The more I read about this story, the better it gets.

    • @Boddhisatva
      7
      4 days ago

      This ProPublica article is good reading. It discusses a company used by many insurers, including UHC, to deny claims using AI. The name of the company is EviCore. I suppose the “Evi” is supposed to be short for “evidence” but I think it is pretty clear that it’s just short for “evil.”

  • @nroth
    -3
    5 days ago

    Unpopular opinion: It’s OK to use AI to fight fraud as long as your data is good, your precision threshold is very high, and appeals are easy. It seems like it is almost never used in this way when people try to save money, sadly.
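As a toy sketch of what “very high precision threshold, easy appeals” could mean in practice (the threshold value and the fraud score are illustrative assumptions, not any insurer’s real system):

```python
# The model may flag, but never auto-denies: anything below a
# near-certain threshold is processed normally, and even flagged
# claims go to a human with an appeal path attached.

THRESHOLD = 0.99  # act only when the model is near-certain


def triage_claim(fraud_score):
    if fraud_score >= THRESHOLD:
        # Flagged claims get human review plus an easy appeal,
        # never an automatic denial.
        return "hold for human fraud review (appeal available)"
    return "process normally"


print(triage_claim(0.995))  # -> hold for human fraud review (appeal available)
print(triage_claim(0.80))   # -> process normally
```

The design choice is that uncertain cases default to paying out; the AI can only ever add a review step, not remove a human one.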

    • @kerrigan778
      7
      5 days ago

      Current AI is incapable of providing that level of data quality and precision, and it’s uncertain whether the kinds of AI being developed now are even capable of ever achieving it without fundamentally changing how they work.

    • DankDingleberry
      0
      4 days ago

      I work in management at an insurance firm, and that’s exactly what we do (use AI for fraud prevention). We have no interest in denying rightful coverage, because in the long run it can cost you more than just paying outright (lawyer costs, interventions, bad PR, etc…). If you don’t work in the industry, you have NO idea how many people try to cheat. It’s ridiculous.