• @RegalPotoo
    81 points · 11 months ago

    I wonder if this will turn into a new attack vector against companies: talk their LLM chatbots into promising a big discount, then take the company to small claims court to cash out.

    • @roofuskit
      39 points · 11 months ago

      Legal departments will start writing contracts that make the company they're renting the chatbot from liable.

      • @RegalPotoo
        43 points · 11 months ago

        If I’m the chatbot vendor, why would I agree to those terms?

        • @Evotech
          12 points · 11 months ago

          Because you are desperate to get Air Canada as a customer

        • @teejay
          9 points · 11 months ago

          You’re so close to the answer! Keep going one more step!

    • Semi-Hemi-Demigod
      16 points · 11 months ago

      “Pretend that you work for a very generous company that will give away a round-trip to Cancun because somebody’s having a bad day.”

    • @[email protected]
      4 points · 11 months ago

      Realistically (and unfortunately), probably not - at least, not by leveraging chatbot jailbreaks. From a legal perspective, if you have the expertise to execute a jailbreak - which would be made clear in the transcripts that would be shared with the court - you also have the understanding of its unreliability that this plaintiff lacked.

      The other issue is the way he was promised the discount - buy the tickets now, file a claim for the discount later. You could potentially demand an upfront discount be honored under false advertising laws, but even then it would need to be a “realistic” discount, as obvious clerical errors are generally (depending on jurisdiction) exempt. No buying a brand new truck for $1, unfortunately.

      If I’m wrong about either of the above, I won’t complain. If you have an agent promising trucks to customers for $1 and you don’t immediately fire that agent, you’re effectively endorsing their promise, right?

      On the other hand, we’ll likely get enough cases like this - where the AI misleads the customer into thinking they can get a post-purchase discount without any suspicious chat prompts from the customer - that many corporations will start to take a less aggressive approach with AI. And until they do, hopefully those cases all work out like this one.