• @[email protected]
    link
    fedilink
    English
    1310 days ago

    Do they not know it works offline too?

    I noticed ChatGPT being pretty slow today compared to the local DeepSeek model I have running, which is pretty sad since my computer is about a bajillion times less powerful.

        • @[email protected]
          link
          fedilink
          English
          1
          edit-2
          9 days ago

          Thanks

          Any recommendations for communities to learn more?

          Frustratingly, their setup guide is terrible. I eventually managed to get it running. I downloaded a model, and only after the download finished did it inform me I didn’t have enough RAM to run it — something it could have checked before the slow download. Then I discovered my GPU isn’t supported, and running it on the CPU is painfully slow. I’m using an AMD 6700 XT and the minimum listed is the 6800: https://github.com/ollama/ollama/blob/main/docs/gpu.md#amd-radeon
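
          That same doc mentions an environment override for otherwise-unsupported Radeon cards. A sketch of what I’d try, assuming the 6700 XT can be passed off as the supported gfx1030 target (the version string is from the docs, not something I’ve verified on this card):

          ```sh
          # Untested sketch: tell ROCm to treat the 6700 XT as gfx1030,
          # then start the Ollama server with that override in place.
          HSA_OVERRIDE_GFX_VERSION="10.3.0" ollama serve
          ```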

          • @JustARaccoon
            9 days ago

            If you’re setting up from scratch I recommend using Open WebUI. You can install it with GPU support on Docker/Podman in a single command, then quickly add any of the Ollama models through its UI.
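
            Something like this is what I mean; the flags are from memory of the Open WebUI README (the :ollama image tag bundles Ollama, and --gpus=all assumes an NVIDIA setup), so treat it as a sketch rather than gospel:

            ```sh
            # Single-command install: Open WebUI with a bundled Ollama,
            # persistent volumes for models and UI data, GPU passthrough.
            docker run -d -p 3000:8080 --gpus=all \
              -v ollama:/root/.ollama \
              -v open-webui:/app/backend/data \
              --name open-webui --restart always \
              ghcr.io/open-webui/open-webui:ollama
            ```

            After that the UI should be reachable at http://localhost:3000 and you can pull models from inside it.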

            • @[email protected]
              link
              fedilink
              English
              09 days ago

              Thanks, I did get both set up with Docker; my frustration was that neither Ollama nor Open WebUI includes instructions on how to set the two up together.

              In my opinion, setup instructions should guide you to a usable setup. It’s a missed opportunity not to include a docker-compose.yml connecting the two — is anyone really using Ollama without a UI? Something like the sketch below is what I had in mind.
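
              A minimal sketch, with my assumptions spelled out: the service and volume names are placeholders I picked, OLLAMA_BASE_URL is how Open WebUI gets pointed at an external Ollama as far as I know, and GPU passthrough is left out for brevity:

              ```yaml
              # Untested sketch: Ollama and Open WebUI wired together in one file.
              services:
                ollama:
                  image: ollama/ollama
                  volumes:
                    - ollama:/root/.ollama        # downloaded models persist here

                open-webui:
                  image: ghcr.io/open-webui/open-webui:main
                  ports:
                    - "3000:8080"                 # UI at http://localhost:3000
                  environment:
                    # Point the UI at the ollama service over the compose network.
                    - OLLAMA_BASE_URL=http://ollama:11434
                  volumes:
                    - open-webui:/app/backend/data
                  depends_on:
                    - ollama

              volumes:
                ollama:
                open-webui:
              ```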