• @[email protected]
    link
    fedilink
    English
    91 days ago

    I agree. (Digital artist here) I hate the AI hype machine madness, but I also understand trying to find ways to make it ethical and useful for eliminating tedium. As it should have been from the start.

    In the tech sphere, they likely have to play along to stay relevant since everybody’s all hyped about it.

    How’s this local LLM of yours? Does it tend to be more accurate than Recognize? It’s integrated into Nextcloud?

    Recognizing objects and people in pictures locally is one of the best uses, I think. …and if it can stop auto-tagging random EDM songs as “country” or “rap” when they sound nothing alike, I’d be excited about that!😂

    • @ikidd
      link
      English
      26 hours ago

      I’ve set up OpenWebUI with the Docker containers, which include Ollama in API mode, and optionally Playwright if you want to add web scraping to your RAG queries. This gives you a ChatJippity-style web page where you can manage your models for Ollama, and you can add OpenAI usage as well if you want. You can manage all the users too.

      On top, then you have API support to your own Ollama instance, and you can also configure GPU usage for your local AI if available.

      Honestly, it’s the easiest way to get local AI.
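      For anyone wanting to try this, the setup described above can be sketched as a compose file roughly like the following. This is not the commenter’s exact config; the service names, ports, and volume names are assumptions, though `OLLAMA_BASE_URL` and the two images are the ones the projects publish:

      ```yaml
      services:
        ollama:
          image: ollama/ollama
          volumes:
            - ollama:/root/.ollama   # model storage persists across restarts
          # For NVIDIA GPU acceleration, uncomment (needs nvidia-container-toolkit):
          # deploy:
          #   resources:
          #     reservations:
          #       devices:
          #         - driver: nvidia
          #           count: all
          #           capabilities: [gpu]

        open-webui:
          image: ghcr.io/open-webui/open-webui:main
          ports:
            - "3000:8080"             # web UI on http://localhost:3000
          environment:
            - OLLAMA_BASE_URL=http://ollama:11434   # point the UI at the Ollama API
          volumes:
            - open-webui:/app/backend/data
          depends_on:
            - ollama

      volumes:
        ollama:
        open-webui:
      ```

      With this, other apps (like Nextcloud) can talk to the same Ollama instance over its API at port 11434 on the compose network.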

      • @[email protected]
        link
        fedilink
        English
        14 hours ago

        How did you get your Nextcloud AI workers to actually function in Docker? I’m not using the AIO image, and I attempted to edit the systemd script provided in the docs, but it fails :/ Manually running the do command that I modified works until it runs out of time.
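        For context, the kind of unit being edited is the task-processing worker from the Nextcloud admin docs, adapted to run `occ` inside a container. A rough sketch of such an adaptation is below; the container name, paths, and the exact job class are assumptions, and `Restart=always` is what keeps the worker going after it exits or times out, which may be the missing piece when a manual run “runs out of time”:

        ```ini
        # /etc/systemd/system/nextcloud-ai-worker@.service (sketch, not verbatim from the docs)
        [Unit]
        Description=Nextcloud AI task processing worker %i
        After=docker.service
        Requires=docker.service

        [Service]
        # "nextcloud" is an assumed container name; adjust user/paths to your image
        ExecStart=/usr/bin/docker exec -u www-data nextcloud \
          php occ background-job:worker 'OC\TaskProcessing\SynchronousBackgroundJob'
        # restart the worker whenever it exits, so time-limited runs don't stop processing
        Restart=always

        [Install]
        WantedBy=multi-user.target
        ```

        Enabling a few instances (e.g. `systemctl enable --now nextcloud-ai-worker@{1..4}.service`) then keeps workers running continuously instead of relying on one long manual invocation.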