• @[email protected]
    29 months ago

    What does it threaten really?

    It works in contact centers, where bots answer short, simple questions so that agents’ time is used more efficiently. I’m not sure it saves that much money, TBF.

    It works for image classification. And still needs checking.

    It works for OCR. And still needs checking.

    It works for voice recognition and transcription, which is actually cool. Still needs checking.

    but they’re a big step towards AGI

    What makes you think that? Was the Mechanical Turk a big step towards thinking robots?

    They are very good at pretending to be that big step for people who don’t know how they work.

    • @[email protected]
      9 months ago

      You’re right that it doesn’t save much money by making people more efficient. That’s why they will replace employees instead. That’s the threat.

      Yes, they make mistakes. So do people. They just have to make fewer than an employee does, and we’re on the right track for that. AI will always make mistakes, and that is actually a step in the right direction. Deterministic systems that rely on concrete input and perfectly crafted statistical models can’t work in the real world. Once the system being evaluated (most systems in the real world) is sufficiently complex, you encounter unknown situations where you either have to spend infinite time and energy gathering information and computing… or guess.

      Our company is small, and our customer inquiries increased severalfold because our product expanded. We were panicking, thinking we needed to train and hire a whole customer support department overnight, where we currently have one person. Instead, we implemented AI representatives. Our feedback actually became more positive, because these agents can connect with you instantly, pull nebulous requests out of confusing messages, and alert the appropriate employee of any action needed. Do they make mistakes? Sure, but not enough to matter. It’s simple for our customer service person to reach out and correct a mistake.

      I think people who think this isn’t a big deal for AGI don’t understand how the human mind works. I find it funny when they try to articulate why they think LLMs are just a trick: “It’s not really creating anything, it’s just pulling a bunch of relevant material from its training data and using it as a basis for a similar output.” And… what is it you think *you* do?