Hi all!

I’m trying to use a local LLM to help me write in an Obsidian.md vault. The LLM is running through a flatpak called GPT4All, which says it can expose the model through an OpenAI-compatible server. The Obsidian plugin can’t reach the LLM on the specified port, so I’m wondering whether the flatpak’s sandbox settings need to be changed to allow this.

(This is all on Bazzite, so Obsidian itself is a flatpak too.)
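From poking around, this is the kind of thing I think I need to check (the app IDs and the 4891 port are from memory, so they might be off):

```
# See what permissions each flatpak already has
flatpak info --show-permissions io.gpt4all.gpt4all
flatpak info --show-permissions md.obsidian.Obsidian

# Grant network access per-user if either one is missing it
flatpak override --user --share=network io.gpt4all.gpt4all
flatpak override --user --share=network md.obsidian.Obsidian

# Test GPT4All's OpenAI-compatible endpoint from the host
curl http://localhost:4891/v1/models
```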

Anyone have an idea where to start?

  • @asap
    33 months ago

    It’ll be easier to run the LLM in Podman on Bazzite.
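    Something like this should give you a local OpenAI-style endpoint without fighting flatpak permissions (the image name and the 11434 port are Ollama's defaults, if I remember right):

    ```
    # Run Ollama rootless under Podman; 11434 is its default API port
    podman run -d --name ollama \
      -v ollama:/root/.ollama \
      -p 11434:11434 \
      docker.io/ollama/ollama

    # Pull a model (pick whichever you like) and sanity-check the API
    podman exec -it ollama ollama pull llama3.2
    curl http://localhost:11434/api/tags
    ```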

    • @UNY0NOP
      23 months ago

      Thanks! I’ll try it out.

      • @[email protected]
        13 months ago

        I’ve not tried GPT4All, but Ollama combined with Open WebUI is really great for self-hosted LLMs and runs fine under Podman. I’m running Bazzite too and this is what I do.
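        If you want the web UI on top, this is roughly how I run it next to the Ollama container (the OLLAMA_BASE_URL variable and the image tag are from memory, so double-check them):

        ```
        # Open WebUI, pointed at the Ollama API on the host
        podman run -d --name open-webui \
          -p 3000:8080 \
          -e OLLAMA_BASE_URL=http://host.containers.internal:11434 \
          -v open-webui:/app/backend/data \
          ghcr.io/open-webui/open-webui:main
        ```

        host.containers.internal should resolve to the host from inside the container on recent Podman; if it doesn’t, swap in the machine’s IP.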

        • @UNY0NOP
          13 months ago

          I was trying so hard to get GPT4All working inside Obsidian because of its LocalDocs feature, which lets you feed it documents so it can pull them into its answers. I want the model to be aware of the whole vault and to generate notes based on the vault contents, updating constantly.

          It looks like Podman Desktop + a ChatGPT-style Obsidian plugin is the way to go, though. I’m playing around with it and it looks promising.
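          Under the hood the plugin just needs an OpenAI-compatible base URL, so a quick request like this (port and model name are whatever your server ends up using; mine are Ollama’s defaults) tells me the backend is fine before I blame the plugin:

          ```
          # Minimal OpenAI-style chat completion against the local server
          curl http://localhost:11434/v1/chat/completions \
            -H "Content-Type: application/json" \
            -d '{
                  "model": "llama3.2",
                  "messages": [{"role": "user", "content": "Summarize this note in one sentence."}]
                }'
          ```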