I’ve just discovered OmniGPT, which seems to be a chat app where you can interact with different LLMs (Claude, GPT-4, Llama, Gemini, etc.) and costs $16/month (it was $7/month until a week ago 🤦‍♂️). I’ve read in a Reddit post that it uses the APIs of all the providers, which is something that can be done for free with a personal account (since the API limits seem to be high). Do you know of something like OmniGPT that can be self-hosted and uses the users’ API keys?
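To make it clearer what I mean, here’s a rough sketch of what such a frontend would do under the hood: one OpenAI-compatible client per provider, each configured with my own API key. The base URLs for Gemini and Anthropic are just my assumptions about their OpenAI-compatible endpoints, so double-check them against the providers’ docs before relying on them.

```python
# Rough sketch: one OpenAI-compatible client per provider,
# each using the user's own API key from the environment.
import os
from openai import OpenAI

PROVIDERS = {
    "openai": OpenAI(api_key=os.environ["OPENAI_API_KEY"]),
    "gemini": OpenAI(
        api_key=os.environ["GEMINI_API_KEY"],
        # Assumed OpenAI-compatible endpoint for Gemini; verify in Google's docs.
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    ),
    "anthropic": OpenAI(
        api_key=os.environ["ANTHROPIC_API_KEY"],
        # Assumed OpenAI-SDK compatibility endpoint for Claude; verify in Anthropic's docs.
        base_url="https://api.anthropic.com/v1/",
    ),
}

def chat(provider: str, model: str, prompt: str) -> str:
    """Send a single user message to the chosen provider and return the reply."""
    client = PROVIDERS[provider]
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(chat("openai", "gpt-4", "Hello!"))
```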

    • @peregus (OP) · 0 points · 8 months ago
      That seems to require the AI model to run locally

      • Scrubbles · 12 points · 8 months ago (edited)

        Oh yes, maybe I misunderstood what you were asking. This is the server that will host the models and the API; it also has a nice interface.

        So by local I mean local to the server: you can run it somewhere else and not put the models on your own computer, but yes, the server will need them.

        You can then use other apps to connect with it. That’s what I consider self hosting: hosting the whole thing, soup to nuts.
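        For example, anything that speaks the OpenAI-style chat API can be pointed at the server. The address and model name in this sketch are placeholders for whatever you actually host:

        ```python
        # Any OpenAI-compatible client can talk to the self-hosted server;
        # the URL and model name are placeholders for whatever you actually run.
        from openai import OpenAI

        client = OpenAI(
            api_key="not-needed-locally",  # many self-hosted servers ignore the key
            base_url="http://my-home-server:8000/v1",  # hypothetical address of the hosted API
        )

        reply = client.chat.completions.create(
            model="llama-3-8b-instruct",  # whatever model the server has loaded
            messages=[{"role": "user", "content": "Hi from another app!"}],
        )
        print(reply.choices[0].message.content)
        ```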

        • @peregus (OP) · -1 point · 8 months ago

          What I’m looking for is a frontend that uses GPT-4, Gemini and other AI engines with their respective API keys.

          • paraphrand · 23 points · 8 months ago

            Yeah, using “self hosted” in your title is misleading.

            • @peregus (OP) · -10 points · 8 months ago (edited)

              But I will…self-host this service! And besides the title, I’ve written a post with a description of what I’m looking for.

              • Nyfure · 8 points · 8 months ago (edited)

                You want a frontend, not the “service” itself.
                By “service” I usually mean the main logic part of something, in this case the LLM processing itself.
                That’s probably where the confusion is coming from.