• @[email protected]
    1 year ago

    Makes sense. In that case, I guess your next best option is probably to buy or rent hardware to run local models that are suitable for chat RP.

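    For anyone reading along, here's roughly what running one locally looks like; a minimal sketch using llama-cpp-python, where the model path and settings are placeholders rather than recommendations:

    ```python
    # Minimal local chat sketch with llama-cpp-python
    # (pip install llama-cpp-python). Model path and
    # parameters are placeholders, not recommendations.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/some-chat-model.Q4_K_M.gguf",  # any GGUF chat model
        n_ctx=4096,       # context window
        n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows
    )

    response = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are a roleplay partner."},
            {"role": "user", "content": "Hello there!"},
        ],
        max_tokens=256,
    )
    print(response["choices"][0]["message"]["content"])
    ```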
    • @Widowmaker_Best_Girl
      1 year ago

      I have definitely been considering it. My current hardware gives me about an 80-second delay when I run an LLM locally.

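      If it helps with diagnosing where that delay goes, a quick sketch for timing a single generation and computing tokens per second (assumes `llm` is a loaded llama-cpp-python model, as in the sketch above):

      ```python
      # Rough latency check: time one generation, report tokens/sec.
      # Assumes `llm` is a loaded llama-cpp-python model (see above).
      import time

      start = time.perf_counter()
      out = llm.create_chat_completion(
          messages=[{"role": "user", "content": "Say hello in one sentence."}],
          max_tokens=128,
      )
      elapsed = time.perf_counter() - start

      n_tokens = out["usage"]["completion_tokens"]
      print(f"{elapsed:.1f}s total, {n_tokens / elapsed:.1f} tokens/sec")
      ```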
      • @[email protected]
        1 year ago

        Same, at least for anything but the tiny ones that will fit in my limited VRAM. Hoping a GPU that's actually good for LLMs, and that isn't $15k and made for servers, comes out in the next few years.
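
        As a rough rule of thumb, whether a model fits is weights (params × bits per weight ÷ 8) plus some KV-cache/runtime overhead; a back-of-the-envelope sketch, where the numbers are approximations:

        ```python
        # Back-of-the-envelope VRAM estimate: weights take about
        # params × bits-per-weight / 8 bytes, plus runtime overhead.
        def vram_estimate_gb(params_b: float, bits: float,
                             overhead_gb: float = 1.5) -> float:
            weights_gb = params_b * bits / 8  # ~1 GB per 1B params at 8-bit
            return weights_gb + overhead_gb

        # e.g. a 7B model: ~5 GB at 4-bit (fits an 8 GB card),
        # but ~15.5 GB at fp16 (needs a 16 GB+ card).
        print(f"7B @ 4-bit: ~{vram_estimate_gb(7, 4):.1f} GB")
        print(f"7B @ fp16:  ~{vram_estimate_gb(7, 16):.1f} GB")
        ```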