• @pixxelkick · 20 points · 1 year ago

    I'm curious to see what sorts of recommended minimum specs there will be for these features. It is my understanding that these sorts of models require a non-negligible amount of horsepower to run in a timely manner.

    At the moment I am running Nextcloud on some Raspberry Pis, and my gut tells me I might need a bit more oomph than that to handle this sort of real-time AI prompting >_>;

    • @[email protected]
      link
      fedilink
      English
      131 year ago

      The blog post states:

      We build the AI Assistant using a flexible, solution-independent approach which gives you a choice between multiple large language models (LLM) and services. It can be fully hosted within your instance, processing all requests in-house, or powered by an external service.

      So it sounds like you pick what works for you. I’d guess on a raspberry pi, on board processing would be both slow and poor quality, but I’ll probably give it a go anyway.

      • @pixxelkick · 2 points · 1 year ago

        Yeah, sorry, I was specifically referring to the on-prem LLM if that wasn't clear, and how much juice running that thing takes.

        • @[email protected]
          link
          fedilink
          English
          41 year ago

          Some of the other Nextcloud stuff (like the chat features) isn't suitable for a Raspberry Pi, and I expect this will be the same. It's released though, right? Might have to have a play.

        • @EatYouWell · 2 points · 1 year ago

          You’d be surprised at how little computing power it can take, depending on the LLM.
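How little it takes depends mostly on parameter count and quantization, since the model weights dominate memory use. A rough back-of-envelope sketch (the bytes-per-parameter figures are standard approximations, not measurements of any specific model):

```python
# Rough RAM estimate for running an LLM locally: the weights dominate,
# so required memory ≈ parameter count × bytes per parameter (plus some
# runtime overhead not counted here).
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def approx_ram_gb(n_params: float, quant: str) -> float:
    """Approximate weight memory in GiB for a given quantization level."""
    return n_params * BYTES_PER_PARAM[quant] / 2**30

# A 7B-parameter model: ~13 GiB at fp16, but only ~3.3 GiB at 4-bit,
# which is why quantized models run on surprisingly modest hardware
# (though a Pi would still be very slow on the compute side).
print(f"{approx_ram_gb(7e9, 'fp16'):.1f} GiB")
print(f"{approx_ram_gb(7e9, 'int4'):.1f} GiB")
```

Memory is only half the story: tokens-per-second still scales with CPU/GPU throughput, so fitting the model in RAM does not guarantee timely responses.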

    • Brownian Motion · 9 points · 1 year ago

      The AI that Nextcloud is offering uses OpenAI: sign up, get an API key, and add it. Your AI requests go to the cloud. (And I couldn't get it to work; constant "too many requests" or a straight "failed".)

      The other option is the "Local LLM" add-on: you download a cut-down LLM like Llama 2 or Falcon and it runs locally. I did get those all installed, but it didn't work for general prompts.

      Nextcloud will probably fix things over time, and the developer who made the Local LLM plugin will too, but right now this isn't very useful to self-hosters.

      • @TheDarkKnight · 2 points · 1 year ago

        Llama's getting pretty damn good; check out phind.com if you haven't yet… it's programming better than GPT-4, supposedly!

        • Brownian Motion · 2 points · 1 year ago

          I just asked it to write an assembly program for the Intel 8008 microprocessor, and it just knocked it out! That's not bad for a chip that was released in 1972!

    • @PeachMan · 8 points · 1 year ago

      Well, Nextcloud runs like shit on a Pi WITHOUT having to do AI stuff, so…

        • Neshura · 0 points · 1 year ago

          I love Nextcloud but it’s just oh so painfully slow at times

          • @AtmaJnana · 0 points · 1 year ago

            That's likely a problem with your configuration. Mine was slow too until I set up Redis.

            • Neshura · 0 points · 1 year ago

              Great that you don't have any problems with it; I do at times. It's not a problem with the config either (at least not the Nextcloud config) because I set it up using the Nextcloud VM script, not manually. It's not slow all the time, but when it is, it feels all the slower for it.

      • @[email protected]
        link
        fedilink
        English
        21 year ago

        Nextcloud struggles on devices with low CPU performance and slow storage, and a Pi checks all those boxes. You might increase performance a bit by running Nextcloud from an external SSD, but there is no fixing the Pi's low CPU performance.

        • @PeachMan · 1 point · 1 year ago

          I've tried running Nextcloud from a system with a SATA SSD and a Core i7 using WSL… and it still ran like shit.

          • @AtmaJnana · 1 point · 1 year ago

            Not using redis? Mine ran like shit and I almost gave up until I set up file locking and caching.
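For reference, the caching and file-locking setup being discussed usually comes down to a few lines in Nextcloud's config/config.php. The option names below follow Nextcloud's documented memory-caching settings; the host and port are assumptions for a Redis instance running locally:

```php
// In config/config.php -- enable local caching plus Redis-backed file locking.
'memcache.local' => '\OC\Memcache\APCu',     // fast per-request cache (needs the APCu PHP extension)
'memcache.locking' => '\OC\Memcache\Redis',  // transactional file locking via Redis
'redis' => [
  'host' => 'localhost',  // or a unix socket path such as /run/redis/redis.sock
  'port' => 6379,
],
```

Without a locking cache, Nextcloud falls back to database-backed file locking, which is a common cause of the sluggishness described in this thread.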

    • Lupec · 1 point · edited · 1 year ago

      Yeah, I’m wondering the same and also figure the requirements will be pretty significant. Still, pretty happy with things like this and Home Assistant’s recent work on local voice assistants.

  • @EatYouWell · 18 points · 1 year ago

    I really hope someone is working on integrating it with Home Assistant.

    • @TheDarkKnight · 8 points · 1 year ago

      Yeah I basically want a private Google Home/Alexa/Siri and it seems like we’re edging ever closer to that possibility.

      • @EatYouWell · 4 points · 1 year ago

        I haven’t had a chance to mess with HA’s voice assistant yet, but I’ve been hearing good things about it.

  • @[email protected]
    link
    fedilink
    English
    9
    edit-2
    1 year ago

    I'm glad they're taking AI seriously. I feel the worlds of commercial services and free software have been diverging for some time, with the former being extended with lots of recommendation algorithms, AI features, smart assistants, and machine learning shenanigans, and free software not so much.

    While I like my free software without recommendation algorithms that cater to advertisers and confine me in a filter bubble, I'd like the ML and AI stuff to be available in free software as well: a voice assistant, AI that helps with organizing stuff, querying documents, transcribing voice messages… This is all very useful.

    I'd like some of the machine learning stuff to be adopted in other free software projects as well. For example, GIMP adopting the current AI helpers, and tight integration of text-to-speech and speech-to-text into the desktop environments, available in the package manager of my desktop Linux install.

  • @[email protected]
    link
    fedilink
    English
    71 year ago

    Hey! This is Bob, your friendly NC AI assistant. I noticed all your dick pics had very small dicks so I’ve increased the length to a more respectable 8.5" and requested assistance from your 7 female contacts about girth size. User "your mother " preferred the 1.5 size but was ok with 400% increase “if that’s what you’re into”. You agreed to show your privates in private with user “Neighbor” tonight at 7:30. He suggested silicone lube. All images are uploaded and available for your review on your Facebook timeline. Let me know if I should increase the size or if the color is off. User “Coworker” complained about the color and will be discussing it with your manager and you tomorrow first thing.

    How may I be of assistance today?

  • ᕙ(⇀‸↼‶)ᕗ · 4 points · 1 year ago

    I am missing AI search. Imagine having 100 cooking recipes and wanting to ask the assistant something like how much salt I need for whatever recipe I'm cooking. Sure, text or image generation is nice, but I can get that elsewhere.

  • Justin · 3 points · 1 year ago

    Hi, Brent!

  • @errer · 3 points · 1 year ago

    How is this better/different than GPT4All?

  • Lunch · 2 points · 1 year ago

    Brent 😍