• @warbond
    link
    English
    8
    1 year ago

    Going to have to disagree with you there. I’ve gotten plenty of use out of ChatGPT in multiple scenarios. I find it difficult to imagine what exactly you think is useless about it, because it seems so indispensable to me at this point.

    • FLeX
      link
      English
      0
      1 year ago

      Indispensable, nothing less. lmao

      Have fun when they decide to multiply the price by 10 and you’re too dependent to have an alternative, or when it becomes stupid or malevolent 👍

      • @warbond
        link
        English
        5
        1 year ago

        Sorry, I’m not sure I understand how that makes it useless. I get the feeling that you just want to feel smug, so if it makes you feel better go ahead, I guess.

        • FLeX
          link
          English
          0
          1 year ago

          Because it’s too fragile and not ready to be used at scale without causing massive damage.

          Not useless for now (even if I’d like to know more about the domains where it’s really “indispensable”), but as useless as a drill with a dead battery the day they decide to cut it off.

          I don’t find it future-proof, as impressive as some results are

          • @[email protected]
            link
            fedilink
            English
            4
            1 year ago

            Nowadays LLMs can be run on consumer hardware, so the “dead battery” analogy falls short here too.

            • FLeX
              link
              English
              2
              1 year ago

              With the same efficiency? I’d be interested in an example.

              Why is everyone using these crappy SaaS offerings, then?

              • @AdrianTheFrog
                link
                English
                3
                edit-2
                1 year ago

                Llama 2 and its derivatives, mostly. A simple local UI is available here.

                Not as good as ChatGPT 3.5 in my experience. It just kinda falls apart on anything too complex, and is a lot more likely to get things wrong.

                I tried it out using the ‘Open-Orca/OpenOrcaxOpenChat-Preview2-13B’ 4-bit 32g model. It’s surprisingly fast to generate; it seems significantly faster than ChatGPT on my 3060 (with ExLlama).

                There are also some models tuned specifically to actually answer your requests instead of the ‘As an AI language model’ kind of stuff.

                Edit: just tried a newer model and it’s a lot better (dolphin-2.1-mistral-7b).

              • @[email protected]
                link
                fedilink
                English
                1
                1 year ago

                For the same reason SaaS is popular in general: yes, you could get a VPS, install all the needed software on it, and keep it up to date, or you could pay a company to do all that for you.

      • @guacupado
        link
        English
        0
        1 year ago

        You sound like the people who thought credit cards would never replace cash.

        • FLeX
          link
          English
          3
          1 year ago

          And you sound like the people who thought crypto would replace credit cards ;)