Hi all!

I’m trying to use a local LLM to help me write in an Obsidian.md vault. The local LLM runs through a Flatpak called GPT4All, which states that it can expose the model through an OpenAI-compatible server. The Obsidian plugin can’t reach the LLM on the specified port, so I’m wondering if the Flatpak settings need to be changed to allow this.
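
For reference, here’s how I’ve been checking whether the server is reachable from the host at all (assuming GPT4All’s default API port of 4891; adjust if yours differs):

```
# Ask the local OpenAI-compatible server to list its models.
# 4891 is GPT4All's default API server port.
curl http://localhost:4891/v1/models
```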

(This is all on Bazzite, so Obsidian is a Flatpak too.)

Anyone have an idea where to start?

  • @asap

    It’ll be easier to run the LLM in Podman on Bazzite.
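
    For example, a rough sketch using Ollama as the server (just one option; any OpenAI-compatible server image works, and the model name below is only an example):

    ```
    # Run Ollama in a Podman container, publishing its default
    # API port (11434) to the host and persisting models in a
    # named volume.
    podman run -d --name ollama \
      -p 11434:11434 \
      -v ollama:/root/.ollama \
      docker.io/ollama/ollama

    # Pull and chat with a model inside the container.
    podman exec -it ollama ollama run llama3
    ```

    The plugin would then point at http://localhost:11434 instead of the Flatpak’s port.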

    • @UNY0NOP

      Thanks! I’ll try it out.

  • Leaflet

Flatpak doesn’t restrict ports: apps can reach any local port as long as they have the network permission.
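
    You can check whether an app actually has it, and grant it if not (using Obsidian’s Flathub ID as an example; the same applies to the GPT4All Flatpak’s ID):

    ```
    # Show the app's permissions; look for "network" under "shared".
    flatpak info --show-permissions md.obsidian.Obsidian

    # Grant network access if it's missing.
    flatpak override --user --share=network md.obsidian.Obsidian
    ```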

    • @UNY0NOP

      Thank you for the info!