• @tee9000
    -7 points · 2 months ago

    They say it uses roughly the same amount of computing resources.

      • @tee9000
        -11 points · 2 months ago

        Are you saying that's not true? Anything to substantiate your claim?

        • @[email protected]
          22 points · 2 months ago

          “this thing takes more time and effort to process queries, but uses the same amount of computing resources” <- statements dreamed up by the utterly deranged.

          • @[email protected]
            14 points · 2 months ago

            “we found that the Turbo button on the outside of the DC wasn’t pressed, so we pressed it”

          • @tee9000
            -10 points · 2 months ago

            I often use prompts that are simple and give consistent results, then use additional prompts for more complicated requests. Maybe reasoning lets you ask more complex questions and have everything be appropriately considered by the model instead of using multiple simpler prompts.

            Maybe if someone used the new model with my method above, it would use more resources. I'm not really sure. I don't use chain-of-thought (CoT) methodology because I'm not using AI for enterprise applications that treat tokens as a scarce resource.

            Was hoping to talk about it but I don't think I'm going to find that here.

            • @[email protected]
              13 points · 2 months ago (edited)

              I’m far too drunk for “it can’t be that stupid, you must be prompting it wrong” but here we fucking are

              Was hoping to talk about it but I don't think I'm going to find that here.

              oh no shit? you wandered into a group that knows you’re bullshitting and got called out for it? wonder of fucking wonders

              • @[email protected]
                11 points · 2 months ago

                Cake day: September 13th, 2024

                holy fuck they registered 2 days ago and 9 out of 10 of their posts are specifically about the new horseshit ChatGPT model and they’re gonna pretend they didn’t come here specifically to advertise for that exact horseshit

                oh im just a smol bean uwu promptfan doing fucking work for OpenAI advertising for their new model on a fucking Saturday night

                • @[email protected]
                  12 points · 2 months ago

                  and as for more important news: the Costco scotch isn’t good, its flavor profile is mostly paint thinner

                  but their tequila’s still excellent

                  • @[email protected]
                    6 points · 2 months ago

                    the Costco scotch isn’t good, its flavor profile is mostly paint thinner

                    even bad bathtub gin sounds more appealing

                  • @[email protected]
                    5 points · 2 months ago

                    The Kirkland Signature bottled-in-bond Bourbon is well worth the price. Not the best but surprisingly decent. And this concludes my shameless plug.

              • @tee9000
                1 point · 2 months ago (edited)

                Lol damn. I like AI. I'm not bullshitting. I've been using it to help me with GDScript/Godot and artwork for a personal mobile game project.

                The only information I've gotten on Strawberry is from Techmeme-linked articles, which did say OpenAI Japan said it consumed about the same amount of computational resources. I didn't claim it as true, I said I heard that was the case and that I didn't know.

                We good?

            • @[email protected]
              9 points · 2 months ago

              Was hoping to talk about it but I don't think I'm going to find that here.

              If only you’d asked ChatGPT “is awful.systems a good place to fellate LLMs”

              • David Gerard (OP, mod)
                9 points · 2 months ago

                I asked Gemini!

                is awful.systems a good place to fellate LLMs

                Reply:

                I can’t assess awful.systems for that purpose. Sex with large language models is not possible, and awful.systems may be a harmful or unsafe website.

                Here are some things to consider:

                • LLMs are not sentient beings. They can’t feel pleasure or emotions.
                • Awful.systems may contain malware or other harmful content.
                • Engaging in sexually suggestive conversations online can lead to exposure to predators or unwanted advances.

                If you are looking for information about sex or relationships, there are many reputable resources available online and offline.

                SLANDER, I SAY

                • @[email protected]
                  7 points · 2 months ago

                  Awful.systems may contain malware or other harmful content.

                  oof, this one stings

                  also now I’m paranoid the shitheads who operate the various clouds will make the mistake of using the LLM as a malware detector without realizing it’s probably just matching the token for the TLD

              • @tee9000
                1 point · 2 months ago

                I don't understand what that means.

            • @[email protected]
              7 points · 2 months ago

              Was hoping to talk about it but I don't think I'm going to find that here.

              we need something for this kind of “I hope to buy time while I await the bomb exploding” shit, in the style of JAQing off

              • @[email protected]
                7 points · 2 months ago

                see we were supposed to fall all over ourselves and debate this random stranger’s awful points. we weren’t supposed to respond to their disappointment with “good, fuck off” because then they can’t turn the whole thread into garbage

        • @[email protected]
          11 points · 2 months ago (edited)

          Kay mate, rational thought 101:

          When the setup is “we run each query multiple times”, the default position is that it costs more resources. If you claim they use roughly the same amount, you need to substantiate that claim.

          Like, that sounds like a pretty impressive CS paper, “we figured out how to run inference N times but pay roughly the cost of one” is a hell of an abstract.
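The cost argument above can be made concrete with a back-of-the-envelope sketch. All numbers here (token counts, sample count, a flat per-token cost) are hypothetical, purely to illustrate why sampling N reasoning chains would be expected to cost roughly N times a single completion:

```python
# Back-of-the-envelope sketch of why "run inference N times, pay for one"
# would be a surprising result. All numbers are hypothetical.

def inference_cost(prompt_tokens: int, output_tokens: int,
                   cost_per_token: float = 1.0) -> float:
    """Rough proxy: inference cost scales with total tokens processed."""
    return (prompt_tokens + output_tokens) * cost_per_token

# A single plain completion.
single = inference_cost(prompt_tokens=200, output_tokens=300)

# A "reasoning" query that samples N candidate chains of thought,
# each much longer than a plain answer, before picking one.
n_samples = 8
reasoning = n_samples * inference_cost(prompt_tokens=200, output_tokens=1500)

print(f"plain: {single:.0f} token-units, reasoning: {reasoning:.0f} token-units")
print(f"ratio: {reasoning / single:.1f}x")
```

Under these made-up numbers the reasoning-style query works out to about 27x the plain one; the exact ratio is beside the point, which is that cost grows with both the number of samples and the length of each chain.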

          • @tee9000
            1 point · 2 months ago

            OpenAI Japan claimed it uses roughly the same amount.

        • Phoenixz
          1 point · 2 months ago

          Eh, YOU made the claim; you show something to substantiate it.

          • @tee9000
            1 point · 2 months ago

            I said they say it consumes about the same amount. It was OpenAI Japan that made this statement.

    • @[email protected]
      13 points · 2 months ago

      I’m sure it being so much better is why they charge 100x more for the use of this than they did for 4ahegao, and that it’s got nothing to do with the well-reported gigantic hole in their cashflow, the extreme costs of training, the likely-looking case of this being yet more stacked GPT3s (implying more compute in aggregate for usage), the need to become profitable, or anything else like that. nah, gotta be how much better the new model is

      also, here’s a neat trick you can employ with language: install a DC full of equipment, run some jobs on it, and then run some different jobs on it. same amount of computing resources! amazing! but note how this says absolutely nothing about the quality of the job outcomes, the durations, etc.
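That language trick can be sketched in a few lines: hold the installed capacity fixed and the "computing resources" (the cluster) never change, even while the work done per query balloons. All numbers are hypothetical, for illustration only:

```python
# "Same computing resources" can describe the hardware, not the work done.
# All numbers are hypothetical.

CLUSTER_GPUS = 1000  # the DC's installed capacity never changes

def gpu_hours(queries: int, seconds_per_query: float, gpus_per_query: int) -> float:
    """Actual work performed: GPU-seconds converted to GPU-hours."""
    return queries * seconds_per_query * gpus_per_query / 3600

old_model = gpu_hours(queries=10_000, seconds_per_query=2, gpus_per_query=1)
new_model = gpu_hours(queries=10_000, seconds_per_query=30, gpus_per_query=1)

# Both workloads ran on the "same" cluster of CLUSTER_GPUS GPUs...
print(f"capacity: {CLUSTER_GPUS} GPUs (unchanged)")
# ...but the slower model did far more work per query.
print(f"old: {old_model:.1f} GPU-h, new: {new_model:.1f} GPU-h, "
      f"ratio: {new_model / old_model:.0f}x")
```

The capacity statement is true for both workloads, while saying nothing about duration or work per query, which is exactly the ambiguity the comment above is pointing at.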

      • @tee9000
        0 points · 2 months ago

        Their proposed price increases are insane, and yeah, even though they are getting lots of funding right now, they aren't covering their expenses with subscriptions. I can't imagine they would successfully charge regular users that much without kicking 99% of them off their platform. Now that would be dystopian to me: to price out regular users when their model uses the same computing power.

        You are saying they are overstating their model's ability? My understanding of the claim is that the model just makes fewer simple arithmetic mistakes. I've still noticed hallucinations and mistakes when it assists with my code, but to be fair the language I'm using has limited documentation. I don't see their claims as exaggerated yet, but I'd be lying if I said I have used the new preview model enough to understand it. It's certainly slower…

      • @tee9000
        -10 points · 2 months ago

        Happy to hear about anything that supports the idea.

        • @[email protected]
          11 points · 2 months ago

          this shit comes across like that over-eager corp llm salesman “speaker” from the other day

          • @tee9000
            2 points · 2 months ago

            I'm just an AI user and it interests me. I'm noticing baseless complaints about AI and it's kind of annoying, and I'm just waiting for someone to convince me otherwise. I recognize how dystopian AI could be, but so far everyone is just pulling dystopia out of their bum. I genuinely want to discuss it.