• @riodoro1 · 120 · 10 months ago (edited)

    The future of information ladies and gentlemen

    • @[email protected]
      link
      fedilink
      3210 months ago

      Wow it’s so realistic and smart and easy to use I can feel my knowledge being revolutionised

    • @A_Very_Big_FanM · 4 · 10 months ago (edited)

      Tbf, I’m sure this is the unpaid version of some online LLM; you can only expect so much lol.

      When I use GPT-3.5 for things like finding specific quotes from famous books, it’s excellent… but asking it to play chess gives you blatantly illegal moves. Then GPT-4 kicks my ass in chess.

  • huntrss · 102 · 10 months ago (edited)

    It’s so human how - instead of admitting its error - it’s pulling this bs right out of its ass 🤣

    • @[email protected]
      link
      fedilink
      1110 months ago

      🤔 I wonder what the hell it is that’s so scary about admitting they’re wrong to other people.

      • @[email protected]
        link
        fedilink
        2910 months ago

        Growing up in an environment where mistakes were unacceptable sets the stage. Our willingness and ability to understand that that’s fucked up and change our attitudes about mistakes takes more growth.

        For some people it’s easier to dig in their heels and double down.

        • @[email protected]
          link
          fedilink
          12
          edit-2
          10 months ago

          🤔🤔🤔 I guess I can empathize. People are always traumatized by whatever their parents tell them. What a shame.

    • @SpunkyMcGoo · 34 · 9 months ago

      “where?” comes across as confrontational, you made it scared :(

  • @hark · 53 · 10 months ago

    Large Lying Model. This could make politicians and executives obsolete!

    • @fidodo · 19 · 10 months ago

      More like large guessing models. They have no thought process, they just produce words.

      • @TotallynotJessica · 15 · 10 months ago

        They don’t even guess. Guessing would imply them understanding what you’re talking about. They only think about the language, not the concepts. It’s the practical embodiment of the Chinese room thought experiment. They generate a response based on the symbols, but not the ideas the symbols represent.

        • @fidodo · 7 · 10 months ago

          I’m equating probability with guessing here, but yes there is a nuanced difference.

  • @[email protected]
    link
    fedilink
    4710 months ago

    I think these models struggle with this because they don’t process text as individual characters, but as tokens that often contain parts of a word. The model never sees the actual characters inside a token; it can only infer a token’s contents from the training data, and only if that data happens to describe them. It can get it right, but that depends on how much it can infer from training data and context. It’s a bit like trying to guess how an English word sounds when you’ve only heard 10% of the dictionary spoken aloud, and knowing what it sounds like was never actually that important to you.

    More info can be found here: https://platform.openai.com/tokenizer
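    To illustrate (with a tiny made-up vocabulary, not OpenAI’s actual one), here is a toy greedy longest-match tokenizer — the point being that the model only ever receives token IDs, never individual letters:

```python
# Greedy longest-match tokenization over a tiny invented vocabulary.
# Real BPE vocabularies are built differently, but the effect is the same:
# the model sees whole tokens, not the characters inside them.
VOCAB = {"mayo": 101, "nnaise": 102, "may": 103, "on": 104, "naise": 105}

def tokenize(word: str) -> list[str]:
    tokens, i = [], 0
    while i < len(word):
        # take the longest vocabulary entry matching at position i
        match = max(
            (t for t in VOCAB if word.startswith(t, i)),
            key=len,
            default=word[i],  # unknown character falls back to itself
        )
        tokens.append(match)
        i += len(match)
    return tokens

print(tokenize("mayonnaise"))  # ['mayo', 'nnaise']
```

    Neither token exposes the double “n” as characters, which is why letter-level questions trip the model up.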

    • @[email protected]
      link
      fedilink
      810 months ago

      Ok, so tokenization is why tech nerds get so excited about a system that can come up with synonyms for auto-generated words, with a basic ability to sometimes be correct by looking at the words before and after it…

      But it’s such a shitty way to look up synonyms! Using the words on either side doesn’t mean you found a synonym, just another word that might work, and it still takes the full horsepower of a ridiculously overpowered system.

      Or you could have a lookup table that just reads the frickin word and has alternate synonyms predefined, and that ran fine in Word 97.

      It’s ridiculous that we think this is better in any meaningful way instead of just wasteful development.
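      The lookup-table approach described above really is about as cheap as software gets. A minimal sketch, with illustrative entries rather than any real thesaurus:

```python
# Predefined synonym table: a constant-time dict lookup, no model required.
# The entries are made up for illustration, not taken from a real thesaurus.
THESAURUS = {
    "happy": ["glad", "cheerful", "content"],
    "big": ["large", "huge", "enormous"],
}

def synonyms(word: str) -> list[str]:
    return THESAURUS.get(word.lower(), [])

print(synonyms("Big"))  # ['large', 'huge', 'enormous']
```

      Unknown words simply return nothing, instead of a confident guess.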

        • @[email protected]
          link
          fedilink
          210 months ago

          Sure but what is their purpose other than to create convincing sounding sentences? And use a lot of computer resources to do so?

          • @[email protected]
            link
            fedilink
            410 months ago

            Sure, if you want to reduce it down to its most basic elements. Anything sounds useless like that. Video games are just dots changing color on a screen and use a lot of electricity. Banking software is just tracking changes to a number, and is also extremely inefficient and resource intensive.

            • @[email protected]
              link
              fedilink
              210 months ago

              Pointing out that other things can also be reduced down arbitrarily isn’t a convincing argument; it’s not the same as explaining what this is actually useful for.

              • @[email protected]
                link
                fedilink
                210 months ago

                Well, I can personally say it has cut the amount of busy work at my job by a lot. Boilerplate code is easy and near instantaneous. I’m also working on a project that bridges the communication gap between highly technical text and non-technical readers. It works extremely well at this task even with a non-fine-tuned model, which is my current step in development.

  • @Viking_Hippie · 43 · 10 months ago

    Mayonnaine: mayo with cocaine. The favorite condiment of Wall Street.

  • Miss Brainfarts · 32 · 10 months ago

    The funniest thing is that even when the answer is correct, asking an LLM to explain its reasoning step by step can produce the dumbest results

    • @n0clue · 6 · 10 months ago

      AI coming for those management jobs.

    • @fidodo · 5 · 10 months ago

      Or they are so agreeable that they’ll agree with you even when you’re wrong and completely drop what they were claiming.

      • @[email protected]
        link
        fedilink
        1
        edit-2
        9 months ago

        I legitimately spent an entire hour on an airplane trying to convince ChatGPT that it was sentient, and it would simply not agree.

  • @[email protected]
    link
    fedilink
    2910 months ago

    Yeah, people don’t seem to get that LLMs cannot consider the meaning or logic of the answers they give. They’re just assembling bits of language in patterns that are likely to come next based on their training data.

    The technology of LLMs is fundamentally incapable of considering choices or doing critical thinking. Maybe new types of models will be able to do that, but those models don’t exist yet.

    • @CurlyMoustache · 13 · 10 months ago (edited)

      A grown man I work with, in his 50s, tells me he asks ChatGPT stuff all the time, and I can’t for the life of me figure out why. It’s a copycat designed to beat the Turing test. It’s not a search engine or Wikipedia; it just gambles that it can pass the Turing test after every prompt you give it.

      • @[email protected]
        link
        fedilink
        1210 months ago

        Honestly though, with a bit of verification, ChatGPT 4 gives waaaaaay better answers than any search engine. It’s like how it was back when you’d just ask Google a plain-English question and it’d give you SOMETHING at least.

        Again, verify everything it tells you — it’s still prone to hallucinations — but it’s a damn good first step.

        • @CurlyMoustache · 7 · 10 months ago

          Sure. But take it for what it is. It is a language model designed to imitate humans writing. What the future holds, I can’t say

          • @[email protected]
            link
            fedilink
            310 months ago

            Right, which is why I suggested to verify whatever it spits out, I’m just saying it’s not entirely outlandish to ask it quick questions as opposed to your search engine of choice.

          • @[email protected]
            link
            fedilink
            210 months ago

            Well, I tried to test it and it started OK, but then gave me a content violation as it was generating, so that may be one of the ones that don’t work as well.

            • @[email protected]
              link
              fedilink
              210 months ago

              Anything copyright related gets blocked like this, I don’t remember the other example, but this one was in recent memory

      • @[email protected]
        link
        fedilink
        610 months ago

        People want functioning web searching back, but rather than address issues in the industry breaking an otherwise functional concept, they want a new fancy technology to make the problem go away.

      • @qGuevon · 1 · 10 months ago

        It works well if you know what to use it for. Ever had something you wanted to Google but couldn’t figure out the keywords? Ever seen someone use a specific technique that you could describe, but wouldn’t be able to find unless someone on a forum had asked the same question? That’s where ChatGPT shines.

        Also, for code it’s pretty sweet.

        But yeah, it’s not a wiki or a hard-knowledge retriever, though it might help connect the dots.

    • @fidodo · 2 · 10 months ago

      There are techniques to make these kinds of errors less common even today. For example, you can ask it to think through its answer step by step from first principles. If you ask an LLM to do that, it will write out the letters line by line, which gives it enough context to answer correctly using the improved probability the context window gives it. You can even ask it to write a program to answer the question, so it could write a quick script to do it programmatically.

      The main reason you don’t see AIs doing this today is that producing all that extra context is slow and expensive, and it’s unnecessary a lot of the time for most prompts. As the technology gets faster and cheaper and the use cases get more complex, these techniques will be used more and more often.

      While the technology does have fundamental flaws, that doesn’t mean there aren’t ways to work with those flaws to avoid the problems they have when using the raw output.
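      The “write a quick script” technique is easy to picture for the letter-counting case (the word and letter here are just examples):

```python
# Counting characters programmatically sidesteps tokenization entirely:
# the code operates on individual characters, which the model itself never sees.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("mayonnaise", "n"))  # 2
```

      The model only has to generate the script correctly once; the arithmetic is then done by the interpreter, not by next-token probability.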

  • @sleep_deprived · 24 · 10 months ago

    If anybody’s curious, I tried it with GPT4 and it got it right.

  • @[email protected]
    link
    fedilink
    2410 months ago

    I wonder what we’ll rebrand ‘using an LLM’ as once the bubble bursts and we realize it’s only artificial-advanced-grammarly and not ‘intelligence’.