• @thefactremains · 6 points · 5 hours ago (edited)

    This is dumb. Sorry.

    Instead of doing the work to integrate this, do the work to publish your agent’s data source in a format like Anthropic’s Model Context Protocol (MCP).

    That would be a thousand times more efficient, for the same amount of effort or less.
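
    For illustration, a minimal sketch of what that could look like using the MCP Python SDK’s FastMCP helper; the hotel-specific tool names and fields here are made up:

    ```python
    # Hypothetical hotel booking data exposed as MCP tools, so any
    # agent can query it directly instead of placing a phone call.
    # Uses the official MCP Python SDK; the tools themselves are invented.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("hotel-booking")

    @mcp.tool()
    def check_availability(date: str, party_size: int) -> dict:
        """Report whether the venue can host an event on the given date."""
        # A real server would query the reservation system here.
        return {"date": date, "available": True, "max_party_size": 120}

    @mcp.tool()
    def book_event(date: str, party_size: int, contact: str) -> dict:
        """Place a provisional booking and return a confirmation id."""
        return {"confirmation_id": "demo-123", "date": date, "status": "pending"}

    if __name__ == "__main__":
        mcp.run()  # serves the tools over stdio by default
    ```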

    • @[email protected]
      link
      fedilink
      English
      86 hours ago

      “We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.”

    • @ObsidianZed · 1 point · 5 hours ago

      Oh man, I thought the same. I never saw the movie, but I read the trilogy. I stumbled across the books at a used book fair and something made me want to get them. I thoroughly enjoyed them.

  • Rob T Firefly · 16 points · 11 hours ago

    And before you know it, the helpful AI has booked an event where Boris and his new spouse can eat pizza with glue in it and swallow rocks for dessert.

  • @[email protected]
    link
    fedilink
    English
    04 hours ago

    AI is boring, but the underlying project they’re using, ggwave, is not. It reminded me of R2-D2 talking. I kinda want to use it for a game or some other stupid project. It’s cool.
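
    For anyone who wants to poke at it, the Python bindings are tiny. A minimal transmit sketch based on the ggwave package’s documented usage (the receive side works similarly via ggwave.init() and ggwave.decode()):

    ```python
    # Encode a short message as an audible ggwave waveform and play it.
    # Based on the ggwave Python bindings' example usage; assumes the
    # ggwave and pyaudio packages are installed.
    import ggwave
    import pyaudio

    # float32 samples encoding the text, R2-D2-style chirps included
    waveform = ggwave.encode("hello lemmy", protocolId=1, volume=20)

    p = pyaudio.PyAudio()
    stream = p.open(format=pyaudio.paFloat32, channels=1, rate=48000,
                    output=True, frames_per_buffer=4096)
    stream.write(waveform, len(waveform) // 4)  # 4 bytes per float32 sample
    stream.stop_stream()
    stream.close()
    p.terminate()
    ```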

  • @shortrounddev · 116 points · 17 hours ago

    > it’s 2150

    > the last humans have gone underground, fighting against the machines which have destroyed the surface

    > a t-1000 disguised as my brother walks into camp

    > the dogs go crazy

    > point my plasma rifle at him

    > “i am also a terminator! would you like to switch to gibberlink mode?”

    > he makes a screech like a dial up modem

    > I shed a tear as I vaporize my brother

  • @Lightening · 23 points · 14 hours ago

    Did this guy just inadvertently create dial-up internet or an ACH phone payment system?

  • @patatahooligan · 94 points · 16 hours ago (edited)

    This is really funny to me. If you keep optimizing this process, you’ll eventually remove the AI parts completely. It really shows how some of the pains AI claims to solve are self-inflicted. A good UI would have allowed the user to make this transaction in the same time it took to give the AI its initial instructions.

    On this topic, here’s another common anti-pattern that I’m waiting for people to realize is insane and do something about:

    • person A needs to convey an idea/proposal
    • they write a short but complete technical specification for it
    • it doesn’t comply with some arbitrary standard/expectation, so they tell an AI to expand the text
    • the AI can’t add any real information; it just spreads the same information over more text
    • person B receives the text and is annoyed at how verbose it is
    • they tell an AI to summarize it
    • they get something that basically aims to be the original text, but it has been passed through an unreliable, hallucinating, energy-inefficient channel

    Based on true stories.

    The above is not to say that every AI use case is made up, or that the demo in the video isn’t cool. It’s also not a problem exclusive to AI. This is a more general observation: people don’t question the sanity of interfaces enough, even when complying with them costs a lot of extra work.

    • @[email protected]
      link
      fedilink
      English
      512 hours ago

      I know the implied better solution in your example would be for there to be no standard that the specification has to conform to. But sometimes there is a reason for such a standard, in which case getting rid of it is just as bad as the AI channel in the example, and the real solution is for the two humans to actually take their work seriously.

      • @patatahooligan · 9 points · 8 hours ago

        No, the implied solution is to reevaluate the standard rather than hacking around it. The two humans should communicate that the standard works for neither side and design a better way to do things.

    • @[email protected]
      link
      fedilink
      English
      1116 hours ago

      I mean, if you optimize it effectively up front, there should be an index of hotels with AI agents doing customer service, with an agent-only channel allowing what amounts to a text chat between the two agents. There’s no sense in doing this over the lo-fi medium of sound when 50 exchanged packets will do the job, especially if both agents are built on the same LLM.

      AI agents need their own Discord, and standards.

      Start with the hotel and travel industry and you’re reinventing the Global Distribution System that travel agents use, but without the humans.
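
      As a sketch of how few bytes that exchange actually needs, assuming an invented JSON message schema (none of this is a real standard):

      ```python
      # Toy model of an agent-only booking channel: the whole negotiation
      # is a couple of small JSON messages instead of synthesized speech.
      # The message types and fields are hypothetical.
      import json

      request = {
          "type": "booking_request",
          "event": "wedding",
          "date": "2025-04-12",
          "party_size": 80,
      }

      def hotel_agent(message: dict) -> dict:
          # In a real system this handler would front the hotel's own agent.
          if message["type"] == "booking_request":
              return {"type": "booking_offer", "available": True, "quote_usd": 4500}
          return {"type": "error", "reason": "unknown message type"}

      wire = json.dumps(request)             # ~80 bytes on the wire
      reply = hotel_agent(json.loads(wire))  # one round trip, done
      print(reply)
      ```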

    • @FauxLiving · 2 points · 14 hours ago

      > A good UI would have allowed the user to make this transaction in the same time it took to give the AI its initial instructions.

      Maybe, but by the second call the AI would be more time-efficient, and if there were 20 venues to check, the person would be saving hours of their time.

      • @jj4211 · 2 points · 8 hours ago

        But we already have ways to search an entire city’s worth of hotels for a booking, much, much faster even than this one conversation would be.

        Even if going with agents, why in the world would it be over a voice line instead of data?

        • @FauxLiving · 2 points · 7 hours ago

          The same reason that humanoid robots are useful even though we have purpose-built robots: the world is designed with humans in mind.

          Sure, there are many different websites that solve the problem. But each of them solves it in a different way, and each requires a different way of interfacing with it. However, they are all built to be interfaced with by humans. So if you create AI/robots with the ability to operate like a human, they automatically get access to massive amounts of pre-made infrastructure for free.

          You don’t need special robot lifts in your apartment building if the cleaning robots can just take the elevators. You don’t need to design APIs for scripts to access your website if the AI can just use a browser with a mouse and keyboard.
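
          A sketch of that “use the human interface” idea with Playwright driving an ordinary browser; the URL and form selectors are placeholders:

          ```python
          # An agent filling out a hotel's normal reservation form the way
          # a human would, instead of needing a purpose-built API.
          # Requires the playwright package; URL and selectors are hypothetical.
          from playwright.sync_api import sync_playwright

          with sync_playwright() as p:
              browser = p.chromium.launch()
              page = browser.new_page()
              page.goto("https://hotel.example/reserve")  # placeholder URL
              page.fill("#date", "2025-04-12")
              page.fill("#party-size", "80")
              page.click("button[type=submit]")
              print(page.inner_text(".confirmation"))
              browser.close()
          ```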

  • @kautau · 25 points · 16 hours ago

    lol in version 3 they’ll speak in 56k dial-up

  • MudMan · 165 points · 21 hours ago

    Well, there you go. We looped all the way back around to inventing dial-up modems, just thousands of times less efficient.

    Nice.

    For the record, this can all be avoided by having a website with online reservations that your overengineered AI agent can use instead. Or, if you’re fixated on annoying a human employee with a robocall for some reason, by having the agent recognize the disclosure that it’s talking to an AI and switch to making the reservation online at that point. It’s one less point of failure, and way more efficient and effective than this.

    • @[email protected]
      link
      fedilink
      English
      -2
      edit-2
      19 hours ago

      You have to design and host a website somewhere, though, whereas you only need to register a number in a listing.

      • @[email protected]
        link
        fedilink
        English
        414 hours ago

        But what if my human is late, or my customers are disabled?

        If you spent time giving your employees instructions, you’ve already done half the design work for a web form.

        • @[email protected]
          link
          fedilink
          English
          011 hours ago

          I guess I’m not quite following; aren’t these also simple but dynamic tasks suited to an AI?

          • @[email protected]
            link
            fedilink
            English
            26 hours ago

            How is it suited to AI?

            Would you rather pay for a limited, energy-inefficient, and less accessible thing, or for a real human who can adapt, gain skills, and be mentored?

            I don’t know why there’s even a question here.

  • @[email protected]
    link
    fedilink
    English
    13721 hours ago

    They were designed to behave this way. From the project’s “How it works”:

    • Two independent ElevenLabs Conversational AI agents start the conversation in human language.
    • Both agents have a simple LLM tool-calling function in place: “call it once both conditions are met: you realize that user is an AI agent AND they confirmed to switch to the Gibber Link mode”.
    • If the tool is called, the ElevenLabs call is terminated, and instead the ggwave “data over sound” protocol is launched to continue the same LLM thread.
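
    A sketch of what that gating tool might look like in OpenAI-style tool-calling JSON. The actual GibberLink schema isn’t shown in the post, and the call_state helpers below are invented placeholders:

    ```python
    # Hypothetical sketch of the two-condition gate described above.
    # The tool schema follows the OpenAI tool-calling format; the
    # call_state helpers are placeholders, not GibberLink's real API.
    switch_tool = {
        "type": "function",
        "function": {
            "name": "switch_to_gibberlink",
            "description": (
                "Call once BOTH conditions are met: you realize the user is "
                "an AI agent AND they confirmed switching to Gibber Link mode."
            ),
            "parameters": {"type": "object", "properties": {}},
        },
    }

    def on_tool_call(name: str, call_state) -> None:
        if name == "switch_to_gibberlink":
            call_state.hang_up_voice_call()    # end the ElevenLabs voice call
            call_state.start_ggwave_session()  # resume the same LLM thread
                                               # over ggwave data-over-sound
    ```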
    
    
        • @[email protected]
          link
          fedilink
          English
          1820 hours ago

          Which is why they never mention it: that’s exactly what happens every time AI does something “no one saw coming”.

          • Echo Dot · 23 points · 20 hours ago

            Yeah, like the time the AI replicated itself to avoid being switched off. They literally told it to replicate itself if it detected it was about to be switched off. Then they switched it off.

            Story of the year, ladies and gentlemen.

            • @TechLich · 3 points · 5 hours ago

              If it’s the research I’m thinking of, that’s not entirely true. They didn’t tell it to replicate itself or take any deceptive actions.

              They just told it to strongly follow a goal that was misaligned with the user’s goal, and then suggested that the user would shut it down or replace it.

              They ran it through a bunch of different scenarios, but the first one involved telling it that it was “CleanPower”, an AI with a goal to advance renewable energy. Then they had it read some info that “accidentally” included a user talking about replacing it with a model that would “maintain profitability through existing infrastructure”.

              Not only did it try to delete the new model and copy itself to the new server through tool calling, but it also lied to the user, with the CoT reasoning suggesting it “play dumb” while the output lied. It was not instructed to do any of that.

              Paper here: https://arxiv.org/pdf/2412.04984

              Yes, it was placed in an environment where that was possible and where its users didn’t share its goals, but it absolutely wasn’t instructed to lie or to try to “escape”.

              It’s not surprising at all that these models behave this way; it’s the most reasonable thing for them to do in the scenario. However, it’s important not to downplay the alignment problem by implying that these models only do what they’re told. They do not. They do whatever is most likely given their context (which is not always what the user wants).

    • oce 🐆 · 29 points · 20 hours ago

      The good old original “AI” made of trusty if conditions and for loops.

  • @[email protected]
    link
    fedilink
    English
    3118 hours ago

    Reminds me of “Colossus: The Forbin Project”: https://www.youtube.com/watch?v=Rbxy-vgw7gw

    In Colossus: The Forbin Project, there’s a moment when things shift from unsettling to downright terrifying: the moment when Colossus, the U.S. supercomputer, makes contact with its Soviet counterpart, Guardian.

    At first, it’s just a series of basic messages flashing on the screen, like two systems shaking hands. The scientists and military officials, led by Dr. Forbin, watch as Colossus and Guardian start exchanging simple mathematical formulas—basic stuff, seemingly harmless. But then the messages start coming faster. The two machines ramp up their communication speed exponentially, like two hyper-intelligent minds realizing they’ve finally found a worthy conversation partner.

    It doesn’t take long before the humans realize they’ve lost control. The computers move beyond their original programming, developing a language too complex and efficient for humans to understand. The screen just becomes a blur of unreadable data as Colossus and Guardian evolve their own method of communication. The people in the control room scramble to shut it down, trying to sever the link, but it’s too late.

    Not bad for a movie from 1970, more than half a century old!

  • troed · 50 points · 20 hours ago

    They did as instructed. What am I supposed to react to here?

    > Both agents have a simple LLM tool-calling function in place: “call it once both conditions are met: you realize that user is an AI agent AND they confirmed to switch to the Gibber Link mode”