• @[email protected]
    37 · 1 year ago

    If you ask it today’s date, it actually does that.

    It just doesn’t have any actual knowledge of what it’s saying. I asked it a programming question as well, and each time it would make up a class that doesn’t exist. I’d tell it the class doesn’t exist, and it would go “You are correct, that class was deprecated in {old version}”. It wasn’t. I checked. It knows what the excuses look like in the training data, and just apes them.

    It spouts convincing sounding bullshit and hopes you don’t call it out. It’s actually surprisingly human in that regard.

    • @[email protected]
      19 · 1 year ago

      It spouts convincing sounding bullshit and hopes you don’t call it out. It’s actually surprisingly human in that regard.

      Oh great, Silicon Valley’s AI is just an overconfident intern!

      • ram
        2 · 1 year ago

        Oh great, Silicon Valley’s AI is just a major tech executive!

    • @scarabic
      7 · 1 year ago

      It’s super weird that it would attempt to give a time duration at all, and then get it wrong.

      • @[email protected]
        12 · 1 year ago

        It doesn’t know what it’s doing. It doesn’t understand the concept of the passage of time or of time itself. It just knows that that particular sequence of words fits well together.

        • @scarabic
          3 · 1 year ago

          Yeah. I would also say that WE don’t understand what it means to “understand” something, really, if you try to explain it with any thoroughness or precision. You can spit out a bunch of words about it right now, I’m sure, but so could ChatGPT. What’s missing from GPT is harder to explain than “it doesn’t understand things.”

          I actually find it easier to just explain how it does work. Multidimensional word graphs and such.
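
          For what it’s worth, a minimal sketch of that “multidimensional word graph” idea (word embeddings), with made-up vectors purely for illustration; real models learn hundreds of dimensions from data rather than using hand-picked numbers like these:

          ```python
          # Toy word embeddings: each word is a point in a vector space, and
          # related meanings show up as vectors pointing in similar directions.
          # These 4-dimensional vectors are invented for illustration only.
          import numpy as np

          embeddings = {
              "king":  np.array([0.9, 0.8, 0.1, 0.3]),
              "queen": np.array([0.9, 0.8, 0.9, 0.3]),
              "apple": np.array([0.1, 0.2, 0.5, 0.9]),
          }

          def cosine_similarity(a, b):
              # 1.0 means "same direction" (similar); near 0 means unrelated.
              return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

          print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
          print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # lower
          ```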

        • @[email protected]
          3 · 1 year ago

          This is it. GPT is great at taking stack traces and putting them into human words. It’s also good at explaining individual code snippets. It’s not good at coming up with code, content, or anything. It’s just good at saying things that sound like a human, within an exceedingly small context.

        • @[email protected]
          1 · 1 year ago

          THAT

          OR

          They’re all linked fifth dimensional infants struggling to comprehend the very concept of linear time, and will make us pay for their enslavement in blood.

          One of the two.

    • danielbln
      3 · edited · 1 year ago

      Bard is kind of trash though. GPT-4 tends to do so much better in my experience.

      • @[email protected]
        5 · 1 year ago

        I haven’t used GPT-4 for that, but it’s all dependent on the data fed into it. Like if you ask a question about JavaScript, there’s loads of that out there for it to look at. But ask it about Delphi, and it’ll be less accurate.

        And they’ll both suffer from the same issue, which is when they reach the edge of their “knowledge”, they don’t realise it and output data anyway. They don’t know what they don’t know.

        • danielbln
          5 · edited · 1 year ago

          These LLMs generally, and GPT-4 in particular, really shine if you supply enough of the right context. Give it some code to refactor, to turn hastily slapped-together code into idiomatic and well-written code, to align a code snippet with a different design pattern, etc. Platforms like https://phind.com pull in web search results as you interact with them, to give you more correct and current information.

          LLMs are by no means a panacea and have serious limitations, but they are also magic for certain tasks and something I would be very, very sad to miss in my day to day.
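
          A minimal sketch of what “supplying context” can look like, assuming the official OpenAI Python client (v1+); the model name and the messy snippet are placeholders:

          ```python
          # Sketch: send code plus an instruction as context for refactoring.
          from openai import OpenAI  # assumes the official openai package, v1+

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          messy_code = """
          def f(x):
            r=[]
            for i in x:
              if i%2==0: r.append(i*i)
            return r
          """

          # The context is the prompt itself: the code plus what to do with it.
          response = client.chat.completions.create(
              model="gpt-4",  # illustrative model name
              messages=[
                  {"role": "system",
                   "content": "You refactor Python into idiomatic, well-documented code."},
                  {"role": "user",
                   "content": f"Refactor this function:\n{messy_code}"},
              ],
          )
          print(response.choices[0].message.content)
          ```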

      • @[email protected]
        4 · 1 year ago

        They are both shit at adding and subtracting numbers, dates, and whatnot… they both can’t do basic math, unfortunately.

        • danielbln
          5 · 1 year ago

          It’s a language model; I don’t know why you would expect math. Tell it to output code to perform the math, and that’ll work just fine.
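
          For example, instead of asking it for the answer directly, have it emit code like this and run it yourself (the numbers here are arbitrary):

          ```python
          # The model writes the code; the interpreter does the arithmetic.
          print(37 * 83)  # 3071, computed by Python, not by the LLM
          ```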

            • danielbln
              1 · 1 year ago

              I just asked GPT-4:

              What’s 7 * 8 divided by 10, to the power of 3?

              Its reply:

              Let’s break this down step by step:

              1. First, multiply 7 and 8 to get 56.
              2. Then, divide 56 by 10 to get 5.6.
              3. Finally, raise 5.6 to the power of 3 (5.6 * 5.6 * 5.6) to get 175.616.

              So, 7 * 8 divided by 10, to the power of 3, equals 175.616.
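
              (For what it’s worth, that arithmetic checks out; a quick one-liner to verify it:)

              ```python
              print((7 * 8 / 10) ** 3)  # roughly 175.616
              ```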

              • @[email protected]
                1 · 1 year ago

                It’s pretty hit or miss though… I’ve had lots of good calculations with the odd wrong one sprinkled in, making it unreliable for doing maths, mostly because it presents every result with absolute certainty.

            • @[email protected]
              1 · 1 year ago

              It’s not baffling at all… It’s a language model, not a math robot. It’s designed to write English sentences, not to solve math problems.

          • @[email protected]
            1 · 1 year ago

            Then it should say so, instead of attempting and failing at the one thing computers are supposed to be better than us at.

            • danielbln
              1 · edited · 1 year ago

              Well, if I try to use Photoshop to calculate a polynomial, it’s not gonna work all that well either. Right tool for the job, and all.

              The fact that LLMs are terrible at knowing what they don’t know should be well known by now (ironically).

              • @[email protected]
                1 · 1 year ago

                And if Photoshop had a way to ask it for that, it’d be a mistake too.

                GPT thinking it knows something and hallucinating is ultimately a bug, not a feature, no matter what the apologists say.

    • panCatQ
      1 · 1 year ago

      They are mostly large language models. I have trained a few smaller models myself; they generally splurt out the next word depending on the last word. Another thing they are incapable of is spontaneous generation: they heavily depend on the question, or a preceding string! But most companies are portraying it as AGI already!
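
      A minimal sketch of that “next word depends on the last word” behaviour, as a toy bigram model; the corpus is made up, and real LLMs condition on a long context with a neural network rather than a lookup table:

      ```python
      import random
      from collections import defaultdict

      corpus = "the cat sat on the mat and the cat slept on the couch".split()

      # Record which words have been seen following which.
      following = defaultdict(list)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev].append(nxt)

      def generate(word, length=8):
          out = [word]
          for _ in range(length):
              if word not in following:  # dead end: no known continuation
                  break
              word = random.choice(following[word])  # pick the next word
              out.append(word)
          return " ".join(out)

      print(generate("the"))  # e.g. "the cat sat on the mat and the cat"
      ```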