• @[email protected]
    link
    fedilink
    English
    45
    3 months ago

    As for the prospect that AI will enable business users to do more with fewer humans in the office or on the factory floor, the technology generates such frequent errors that users may need to add workers just to double-check the bots’ output.

    And here we were worrying about being out of work…

    • @[email protected]
      link
      fedilink
      English
      19
      2 months ago

      Tediously fixing botslop all day is another kind of hellscape, but at least we won’t be homeless.

      • @trolololol
        link
        English
        4
        2 months ago

        Shitty idea: fire the workers anyway and gamify the product so the customer does quality control while earning fake money. If we’re feeling generous we may even award NFT monkeys to the harder working customers.

    • Final Remix
      link
      English
      8
      edit-2
      3 months ago

      It’s literally a word slot machine. A spicy autocorrect.

      I’ve found it has three great uses:

      1. As a fun “yes man” fiction helper when you can’t get the wording right in a passage.
      2. Interactive choose your own adventure porn literature.
      3. As a teacher, an easy way to spin up quick questions (not answers!) for quizzes and stuff.

      Other than that, it’s trash. I tried a free demo of Copilot through work, and it tried to “organize” my PowerPoints. It either did nothing, or incorrectly “summarized” a section and fucked up the slideshow, depending on the topic.

      • Semi-Hemi-Lemmygod
        link
        English
        5
        3 months ago

        I’ve found some use in it when doing programming. For example, I can explain what I need (e.g., a loop that reads every line in a file) and it will spit something out that I can try.

        The fact that I’m verifying the answers means I catch it making mistakes, but it’s still faster than me doing the task manually.
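        The file-reading loop mentioned above is the sort of snippet that is quick to verify by eye. A minimal Python sketch (`read_lines` is a hypothetical helper name, not anything from the thread):

        ```python
        # A loop that reads every line in a file, the kind of boilerplate
        # described above. Easy to request, and easy to double-check.
        def read_lines(path):
            """Return every line of a text file with trailing newlines stripped."""
            lines = []
            with open(path, encoding="utf-8") as f:
                for line in f:
                    lines.append(line.rstrip("\n"))
            return lines
        ```

        Verifying output like this line by line is exactly the double-checking step the comment describes.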

        But outside of this it’s not been terribly useful for me. My job has an AI bot trained on our documentation but we don’t really trust it and have to verify its answers anyway.

        • @candybrie
          link
          English
          5
          3 months ago

          My work has one where you can upload a document and ask questions about it. It’s hilariously bad at it. Example:

          User: How many Xs does Foo have?

          AI: The document doesn’t have any information about how many Xs Foo has.

          User: What information does the document have on Foo?

          AI: [OK summary which includes “Foo has 12 Xs”]

          Everyone I know who has played with it pretty quickly decided not to use it for anything.

          • Final Remix
            link
            English
            1
            3 months ago

            I mean… Google’s model can’t spell Strawberry properly.
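            The “strawberry” test being referenced is plain letter counting, which a couple of lines of ordinary Python settle instantly:

            ```python
            # Counting the letter "r" in "strawberry" -- the check that
            # several chatbots have famously gotten wrong.
            word = "strawberry"
            r_count = word.count("r")
            print(r_count)  # → 3
            ```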

        • @BlackPenguins
          link
          English
          0
          3 months ago

          It’s also been a major assist with programming for me too. I’ve asked it for some pretty complicated algorithms and it gets it mostly right out of the box.

      • @Rolando
        link
        English
        0
        3 months ago

        Interactive choose your own adventure porn literature.

        I tried generating fiction with a couple of the big-name LLMs. They’re not very good because any time there’s any non-trivial conflict, the systems will just refuse to continue. However, they’re very good at spitting out vapid sonnets about non-controversial topics.

        You might be able to generate good literature if you created your own system, but then you’d have to explicitly use data stolen from people you admire.

        • Final Remix
          link
          English
          2
          3 months ago

          NovelAI has no guardrails. Big-name LLMs are trash because they need to be investor-safe.

          That said, you do generally need to take the lead, since it’s not gonna just spin up a story entirely on its own. It’s more of a “yeah, and~” than the big instruct models.

          • @Rolando
            link
            English
            1
            3 months ago

            Thanks for the tip, I’ll check it out!

  • @[email protected]
    link
    fedilink
    English
    17
    edit-2
    2 months ago

    I have so far seen two working AI applications that actually make sense, both in a hospital setting:

    1. Assisting oncologists in reading cancer images. It’s still the oncologists who make the call, but it seems to be of use to them.
    2. Creating a first draft when transcribing dictated notes. Listening and correcting is apparently faster for most people than listening and writing from scratch.

    These two are nifty, but they don’t make a multi-billion-dollar industry.

    In other words, the bubble is bursting and the value/waste ratio looks extremely low.

    Say what you want about the Tulip bubble, but at least tulips are pretty.

    • David GerardM
      link
      fedilink
      English
      16
      2 months ago

      This is why you should never allow the use of the marketing term “AI”, and instead always refer to the specific technologies.

      The use case for the term “AI” is to conflate things that work (ML) with things that don’t work (LLMs).

      • @[email protected]
        link
        fedilink
        English
        4
        2 months ago

        Ok, point on language.

        But I thought LLMs were machine learning, or rather a particular application of it? Have I misunderstood that? Isn’t it all black-boxed matrices of statistical connections?

        • David GerardM
          link
          fedilink
          English
          2
          2 months ago

          they’re related in that sense, but what they learn is which token to generate next.
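          A toy bigram counter makes that objective concrete. This is nothing like how an LLM is actually trained, just the next-token idea in miniature (function names are made up for illustration):

          ```python
          from collections import Counter, defaultdict

          def train_bigram(tokens):
              """Toy 'training': count which token follows each token."""
              follows = defaultdict(Counter)
              for cur, nxt in zip(tokens, tokens[1:]):
                  follows[cur][nxt] += 1
              return follows

          def predict_next(follows, token):
              """Return the most frequent successor of `token`."""
              return follows[token].most_common(1)[0][0]
          ```

          A real LLM replaces the frequency table with a neural network over long contexts, but the thing being learned is the same: which token to generate next.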

    • @[email protected]
      link
      fedilink
      English
      9
      2 months ago

      this is something I’ve been mulling over for a while now too. there are lots of little boring ways in which some of the ML stuff definitely does work, but none of them are in the shape of anything the hypemen have been shouting. and afaict none of them will be able to justify all the investment either (and only some will be able to justify the compute, even then)

      couple months back I speculated in one of the threads here that I believe one of the reasons there’s such a hard push to get the llms and shit into as much as possible now is because it’ll be harder to remove after the air starts going out - and thus buy more time/runway/rent-extraction

  • @[email protected]
    link
    fedilink
    English
    10
    edit-2
    2 months ago

    The bill mandates safety testing of advanced AI models and the imposition of “guardrails” to ensure they can’t slip out of the control of their developers or users and can’t be employed to create “biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities.” It’s been endorsed by some AI developers but condemned by others who assert that its constraints will drive AI developers out of California.

    Man, if I can’t even build homemade nuclear weapons, what CAN I do? That’s it, I’m moving to Nevada!