• @[email protected]
    206 points · 6 months ago

    I’m curious whether it could solve the traffic light and crosswalk ones. I’d try, but I’m out of free image uploads from asking it to explain memes to test its cultural knowledge.

      • @[email protected]
        39 points · 6 months ago

        I’m sure eventually someone will make a bot called something like ai-explains-the-joke that does this automatically.

        • bobaFeet
          13 points · 6 months ago

          Expl-AI-n Bot will break down whatever joke you feed it.

          • @TriPolarBearz
            2 points · 6 months ago

            Expl-AI-n itself is a pun. With the letters AI in the word explain capitalized, readers can infer that artificial intelligence is being used to explain jokes.

        • @[email protected]
          32 points · 6 months ago

          I mean it still parsed the specific text in the meme and formulated a coherent explanation of this specific meme, not just the meme format

          • @SpaceNoodle
            -8 points · 6 months ago

            Or it matched the text with an existing explanation upon which it was indexed.

                • @[email protected]
                  4 points · 6 months ago

                  Man, the models can’t store their training data verbatim; the data gets turned into a model that is hundreds or thousands of times smaller than the original source. If it were capable of simply recovering everything it was trained on, that would be some magical compression algorithm, and that by itself would be extremely impressive.
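                  Rough napkin math of the scale difference (all figures here are assumptions, roughly the shape of a 7B-parameter open model, just to show the order of magnitude):

                      # back-of-the-envelope: size of the weights vs. size of the training text
                      params = 7e9            # assumed: 7 billion parameters
                      bytes_per_param = 2     # fp16 weights
                      model_bytes = params * bytes_per_param        # ~14 GB

                      train_tokens = 2e12     # assumed: ~2 trillion training tokens
                      bytes_per_token = 4     # rough average for English text
                      data_bytes = train_tokens * bytes_per_token   # ~8 TB

                      print(f"model: ~{model_bytes / 1e9:.0f} GB")
                      print(f"data:  ~{data_bytes / 1e12:.0f} TB")
                      print(f"data is ~{data_bytes / model_bytes:.0f}x the size of the model")

                  Even with generous assumptions, the training text comes out hundreds of times larger than the weights that were distilled from it.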

                  • @SpaceNoodle
                    -4 points · 6 months ago

                    Congratulations on discovering compression

                • @[email protected]
                  2 points · 6 months ago

                  They do not store anything verbatim; they instead store the directions in which various words and related concepts relate to one another in some gigantic multidimensional space.
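                  A toy illustration of that idea, with made-up 3-D vectors (real models learn their embeddings, in thousands of dimensions, rather than having them written by hand):

                      import numpy as np

                      # hand-picked toy "embeddings" purely for illustration
                      vec = {
                          "king":  np.array([0.90, 0.70, 0.10]),
                          "queen": np.array([0.85, 0.65, 0.20]),
                          "bread": np.array([0.10, 0.20, 0.90]),
                      }

                      def cosine(a, b):
                          # 1.0 = pointing the same direction, near 0 = unrelated directions
                          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

                      print(cosine(vec["king"], vec["queen"]))  # high: related concepts
                      print(cosine(vec["king"], vec["bread"]))  # much lower: unrelated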

                  I highly suggest you go learn what they actually do before you continue talking out of your ass about them

                  • @SpaceNoodle
                    0 points · 6 months ago

                    If you trained a GPT on a single phrase, all you’d get out of it would be the single phrase.
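                    A toy stand-in for that claim (just a bigram table, not a GPT, but it shows the memorization point):

                        import random
                        from collections import defaultdict

                        phrase = "the quick brown fox jumps".split()

                        # "train" on a single phrase: record which word follows which
                        table = defaultdict(list)
                        for a, b in zip(phrase, phrase[1:]):
                            table[a].append(b)

                        # "generate": having seen only one phrase, there is nothing else to produce
                        word, out = phrase[0], [phrase[0]]
                        while table[word]:
                            word = random.choice(table[word])
                            out.append(word)
                        print(" ".join(out))   # always "the quick brown fox jumps"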

                    The mechanism of storage doesn’t need to be just the verbatim source material, which is not even close to what I said.

      • @kromem
        11 points · 6 months ago

        The majority of people right now are fairly out of touch with the actual capabilities of modern models.

        There’s a combination of the tech learning curve on the human side and an amplification of stories about the 0.5% most extreme failure conditions by a press corps that’s desperate to portray the technology they’re terrified will take their jobs as shitty.

        There’s some wild stuff most people just haven’t seen.

        • @[email protected]
          13 points · 6 months ago

          I can just as well say that the screenshot above is the top 0.5% pushed by people trying to sell the tech. I don’t really have an opinion either way tbh, I’m just being cynical. But my own experience with those tools hasn’t been impressive.

          • @kromem
            2 points · 6 months ago

            At the pretrained layer, the model essentially captures an entire normal distribution of capabilities.

            It can autocomplete a flat earther as much as a Nobel physicist given sufficient context.

            So it makes sense that even after the fine tuning efforts there’d be a distribution in people’s experiences with the tools.

            But the average person’s output from Photoshop isn’t going to be very impressive either, and if all you ever really see is bad Photoshops and average use, you might think it’s a crappy tool.

            There’s a learning curve to using the models, and even over just a year of research, the capabilities of the exact same model have become drastically different from then to now, based only on what’s been learned about better usage.

            The problem is the base models are improving so quickly that the best practices for the old generation of models go out the window with the new one. So even if there were classes available, I wouldn’t bother pointing you to them, since you’d just be picking up info that would be obsolete by the time the classes finished, or shortly thereafter.

            I’d just strongly caution against betting against the tech’s continued capabilities and improvements if you don’t want to be surprised and haven’t taken the time to look into them operating at their best.

            The OP post is pretty crap compared to the top 0.5% usage.

            • @[email protected]
              1 point · 6 months ago

              It really does depend on what you ask and how. I can get some really nice music recommendations from ChatGPT, but it also cannot comprehend GURPS skill rules; it’s actually funny how it manages to get them wrong in a completely different way each time.

        • The Picard ManeuverOP
          5 points · 6 months ago

          At the risk of sounding like a tech bro who’s desperately trying to secure funding: this truly does feel like a major leap in technology that is going to change the world.

          Anytime I hear it dismissed as “basically auto-complete”, I feel like it’s being underestimated.

          • @kromem
            3 points · edited · 6 months ago

            It’s not just underestimation, it’s outright misinformation.

            By this point there’s so much research from the past 18 months showing that there’s an incredible amount going on beyond “it’s just a Markov chain, bro.”

            It was never a Markov chain; that framing ignores the self-attention mechanism, which violates the Markov property. It was just that some people trying to explain it used a simplified description, and that description went viral.
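            A toy way to see what that distinction is about (this only shows what information each approach gets to condition on; it is not a working model of either):

                context = ["the", "bank", "of", "the", "river", "was"]

                # a first-order Markov chain predicts the next token from the current state only
                markov_sees = context[-1]       # just "was"; everything earlier is forgotten

                # self-attention can weight every token still in the context window,
                # so "river" is available to disambiguate which kind of "bank" this is
                attention_sees = context[:]     # the whole sequence

                print(markov_sees)
                print(attention_sees)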

            Sometimes talking to people who think it’s crap feels like talking to antivaxxers. The feelings matter more than the research and evidence.

          • @AdrianTheFrog
            1 point · 6 months ago

            It’s kind of funny, because autocomplete on phones is definitely moving in the direction of using LLMs. It’s like it wasn’t true when people started saying it, but it will be literally true in a couple of years at most.

    • @DarkCloud
      26 points · 6 months ago

      Yes, it probably can… CAPTCHAs don’t really work based on your answers (with many types you can answer wrong and still sometimes pass); they work by tracking your mouse’s movements and timing and deciding whether they look human-like.
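      If that’s roughly how it works, the kind of signal involved might look something like this sketch (purely hypothetical; real CAPTCHA scoring is proprietary and far more involved than one jitter statistic):

          import statistics

          def looks_human(samples):
              """samples: list of (x, y, t) cursor positions, t in seconds."""
              speeds = []
              for (x0, y0, t0), (x1, y1, t1) in zip(samples, samples[1:]):
                  dt = t1 - t0
                  if dt > 0:
                      speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt)
              if len(speeds) < 2:
                  return False
              mean = statistics.mean(speeds)
              if mean == 0:
                  return False
              # humans speed up, slow down, and wobble; scripted cursors are suspiciously steady
              return statistics.pstdev(speeds) / mean > 0.1

          robot_sweep = [(i, i, i * 0.01) for i in range(50)]   # perfectly uniform motion
          print(looks_human(robot_sweep))                       # False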

      • Match!!
        24 points · 6 months ago

        Why do I fail the “choose all images with motorcycles” challenges all the time then :c

        • @BrotherL0v3
          9 points · 6 months ago

          Because half of the pictures are mopeds / scooters and God only knows whether those count or not?

          • @[email protected]
            2 points · 6 months ago

            I’m stubborn. I refuse to give the machine the answer I know it wants. And no, that overpass is not a bridge. Usually there is an option to skip or verify another way; this is when the captcha drops the ruse, and it’s clear that the machine was just analyzing my mouse movements and response timings anyway to verify that I was behaving randomly in a human way. Still a better game than any of those in YouTube ads.

        • @TexasDrunk
          6 points · 6 months ago

          Are you the same guy who didn’t see me riding my motorcycle and tried to run over me? Because I think maybe you just can’t see motorcycles.

          No, that didn’t actually happen. I just wanted to give this person a hard time.

      • @Psychodelic
        2 points · 6 months ago

        I figured it’s due to using a VPN or ad blocker or something

    • @stormy
      2 points · 6 months ago

      Did you find that meme online, or did you create it yourself?

      • @[email protected]
        2 points · 6 months ago

        I don’t actually remember, but I checked the file metadata: I have the template in my downloads folder next to this one, and this one has an EXIF tag from 2 minutes later with GIMP metadata, so I’m pretty sure I must have made it. That makes it a bit more impressive, since I probably just sent it to friends privately and didn’t post it anywhere it could have been scraped for training.
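        For anyone who wants to run the same check, something along these lines works with Pillow (the filenames here are placeholders):

            from PIL import Image, ExifTags   # pip install pillow

            for path in ["meme_template.png", "finished_meme.png"]:
                img = Image.open(path)
                named = {ExifTags.TAGS.get(k, k): v for k, v in img.getexif().items()}
                print(path)
                print("  EXIF date/software:", {k: named[k] for k in ("DateTime", "Software") if k in named})
                # GIMP often leaves format-specific metadata (e.g. PNG text chunks) here too
                print("  other metadata:", {k: v for k, v in img.info.items() if isinstance(v, str)})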