• @[email protected]
    link
    fedilink
    English
    3
    edit-2
    1 year ago

    But that’s still confusing, because we already can. Yeah, you might need a little bit more hardware, but… nothing that crazy. Plus, some simpler models can be run on more normal hardware.

    Might not be easy to set up, that is true.

    • Communist · 1 year ago

      For large context models the hardware is prohibitively expensive.

      • supert · 1 year ago

        I can run 4-bit quantised Llama 70B on a pair of 3090s. Or rent GPU server time. It’s expensive, but not prohibitive.
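
        A rough sketch of that kind of setup, assuming Hugging Face transformers with bitsandbytes 4-bit quantisation and `device_map="auto"` to shard the layers across both cards (the meta-llama repo name is only an example, and it’s gated behind the licence):

        ```python
        # Rough sketch (not necessarily the exact setup above): 4-bit Llama 2 70B
        # via transformers + bitsandbytes, sharded across the available GPUs.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

        model_id = "meta-llama/Llama-2-70b-chat-hf"  # example checkpoint (gated repo)
        quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            quantization_config=quant,
            device_map="auto",  # splits layers across both 3090s automatically
        )

        prompt = "Explain 4-bit quantisation in one sentence."
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=64)
        print(tokenizer.decode(out[0], skip_special_tokens=True))
        ```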

          • supert · 1 year ago

            3k? Can’t recall exactly, and I’m getting hardware stability issues.

        • @[email protected]
          link
          fedilink
          English
          11 year ago

          I’m trying to get to the point where I can locally run a (slow) LLM that I’ve fed my huge ebook collection to, and can ask where to find info on $subject, getting title/page info back. The PDFs that are searchable aren’t too bad, but finding a way to OCR the older TIFF-scan PDFs, and getting it to “see” graphs/images, are areas I’m stuck on.
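
          For the TIFF-scan PDFs, the OCR step could look something like this (a minimal sketch, assuming poppler and tesseract are installed; the filename is a placeholder, and getting it to understand graphs/images is a separate problem):

          ```python
          # Minimal OCR sketch for image-only (TIFF-scan) PDFs using pdf2image + pytesseract.
          from pdf2image import convert_from_path
          import pytesseract

          pages = convert_from_path("old_scan.pdf", dpi=300)  # placeholder filename
          for page_num, image in enumerate(pages, start=1):
              text = pytesseract.image_to_string(image)
              print(f"--- page {page_num} ---")
              print(text)
          ```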

      • @Grimy · 1 year ago

        I personally use RunPod. It doesn’t cost much, even for the high-end stuff. Tbh the OpenAI API is easier though, and gives mostly better results.
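
        For comparison, the OpenAI side is just a short API call (minimal sketch, assuming OPENAI_API_KEY is set in the environment):

        ```python
        # Minimal sketch of an OpenAI chat completions call.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": "Summarise quantisation in one sentence."}],
        )
        print(resp.choices[0].message.content)
        ```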

        • Communist · 1 year ago

          I specifically said “large context”. How many tokens can you get through before it goes insanely slow?

          • @Grimy · 1 year ago

            The max token window is 4k for Llama 2, though there are some fine-tunes that push the context further. Speed is mostly limited by your budget: you can stack GPUs, and most models are available (including the really expensive ones).
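
            You can check the stock window straight from the model config (a small sketch; the meta-llama repo is gated):

            ```python
            # Llama 2's stock context window is 4096 tokens; RoPE-scaled fine-tunes extend it.
            from transformers import AutoConfig

            cfg = AutoConfig.from_pretrained("meta-llama/Llama-2-7b-hf")
            print(cfg.max_position_embeddings)  # 4096
            ```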

            I’m just letting you know: if you want something easy, just use ChatGPT. I don’t find it overly expensive for what it is.

    • @[email protected]
      link
      fedilink
      English
      11 year ago

      You can, but things as good as ChatGPT can’t be run on local hardware yet. My main obstacle is language support other than English.

      • @[email protected]
        link
        fedilink
        English
        21 year ago

        They’re getting pretty close. You only need 10 GB of VRAM to run Hermes Llama 2 13B. That’s within the reach of consumers.
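
        Something like this runs on a single consumer card (a minimal sketch with llama-cpp-python; the GGUF filename is a placeholder for whichever Hermes quant you grab):

        ```python
        # Minimal sketch: 13B model with full GPU offload via llama-cpp-python.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./hermes-llama2-13b.Q4_K_M.gguf",  # placeholder local file
            n_gpu_layers=-1,  # offload all layers; a 4-bit 13B fits in roughly 10 GB of VRAM
            n_ctx=4096,
        )
        out = llm("Q: What is a context window?\nA:", max_tokens=128)
        print(out["choices"][0]["text"])
        ```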

        • @[email protected]
          link
          fedilink
          English
          11 year ago

          Nice to see! I’m not following the scene as much anymore (last time I played around with it was Wizard Mega 30B). Definitely a big improvement, but as much as I hate to do this, I’ll stick to ChatGPT for the time being. It’s just better on more niche questions, and does some things plain better (GPT-4 can do maths (mostly) without hallucinating).