• @CeeBee · 37 points · 1 year ago

      It’s getting there. In the next few years, as hardware gets better and models get more efficient, we’ll be able to run these systems entirely locally.

      I’m already doing it, but I have some higher end hardware.

      • Xanaus · 4 points · 1 year ago

        Could you please share your process for us mortals?

        • @CeeBee · 6 points · 1 year ago

          Stable Diffusion SDXL Turbo model running in Automatic1111 for image generation.

          Ollama with ollama-webui for an LLM. I like the Solar 10.7b model. It’s lightweight, fast, and gives really good results.

          I have some beefy hardware that I run it on, but it’s not necessary to have.
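
          If it helps, here’s a minimal sketch of talking to a local Ollama instance from Python (assuming the default API on localhost:11434 and a model you’ve already pulled, e.g. with `ollama pull solar`):

          ```python
          import requests

          # Ask the local Ollama server for a single, non-streamed completion.
          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={
                  "model": "solar",  # or any other model you've pulled locally
                  "prompt": "Explain in two sentences what quantization does to an LLM.",
                  "stream": False,   # one JSON object instead of a token stream
              },
              timeout=300,
          )
          resp.raise_for_status()
          print(resp.json()["response"])
          ```

          Everything stays on your own machine; the only “server” involved is the one running on localhost.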

        • @[email protected] · 2 points · 1 year ago

          Depends on what AI you’re looking for. I don’t know of an LLM (a language model, think ChatGPT) that works decently on personal hardware, but I also haven’t really looked. For art generation though, look up automatic1111 installation instructions for Stable Diffusion. If you have a decent GPU (I was running it on a 1060, slowly, until I upgraded) it’s a simple enough process to get started, there’s tons of info online about it, and it all runs on local hardware.
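
          Once it’s installed you can also drive it from a script instead of the browser UI. A rough sketch (assuming you launch Automatic1111 with the --api flag, which exposes a local HTTP API on port 7860 by default):

          ```python
          import base64
          import requests

          # Send a txt2img request to the locally running Automatic1111 instance.
          payload = {
              "prompt": "a watercolor painting of a lighthouse at dusk",
              "steps": 20,
              "width": 512,
              "height": 512,
          }
          resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
          resp.raise_for_status()

          # The API returns the generated images as base64-encoded strings.
          for i, img_b64 in enumerate(resp.json()["images"]):
              with open(f"output_{i}.png", "wb") as f:
                  f.write(base64.b64decode(img_b64))
          ```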

          • @CeeBee · 2 points · 1 year ago

            I don’t know of an LLM that works decently on personal hardware

            Ollama with ollama-webui. Models like solar-10.7b and mistral-7b work nicely on local hardware. Solar 10.7b should work well on a card with 8 GB of VRAM.

      • lad · 5 points · 1 year ago

        You seem to have missed the point a bit

          • lad · -1 points · 1 year ago

            The post: “I wish X instead of Y”
            The comment: “And run it [X] locally”
            The next comment: “You can run Y locally”

            Also, the person I said this to literally admitted that I was right, and you’re still arguing.

    • arthurpizza · 8 points · 1 year ago

      This technology will be running on your phone within the next few years.

      • @icepick3455o65 · 2 points · 1 year ago

        Because, like every other app on smartphones, it’ll require an external server to do all of the processing.

        • arthurpizza · 3 points · 1 year ago

          I mean, that’s already where we are. The future is going to be localized.

      • anti-idpol action · 1 point · 1 year ago

        Yeah, if you’re willing to carry a brick, or at least a power bank (brick) if you don’t want it to constantly overheat or deal with 2-3 hours of battery life. There’s only so much copper can take, and there are limits to miniaturization.

        • arthurpizza · 7 points · 1 year ago

          It’s not like that though. Newer phones are going to have dedicated hardware for running neural networks, LLMs, and other generative tools. That dedicated hardware will make these processes just barely sip the battery.

          • @[email protected] · 1 point · 1 year ago

            Wrong.

            If that existed, all those AI server farms wouldn’t be so necessary, would they?

            Dedicated hardware for that already exists, but it definitely isn’t going to be able to fit a sizeable model on a phone any time soon. The models themselves require multiple tens of gigabytes of storage space; you won’t be able to fit more than a handful on even 512 GB of internal storage. Phones can’t hit the RAM required for these models at all, and the dedicated hardware still requires a lot more power than a tiny phone battery can supply.

            • arthurpizza · 2 points · 1 year ago

              Those server farms exist because the needs of corporations might just be different from the needs of regular users.

              I’m running an 8 GB LLM model locally on my PC that performs better than 16 GB models from just a few months ago.

              It’s almost as if technology can get better and more efficient over time.
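
              Rough back-of-the-envelope math for why the footprint keeps shrinking (just parameter-count arithmetic, ignoring overhead like the KV cache):

              ```python
              # Approximate weight storage for a 7B-parameter model at different precisions.
              params = 7_000_000_000

              for name, bytes_per_param in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
                  size_gb = params * bytes_per_param / 1e9
                  print(f"{name}: ~{size_gb:.1f} GB")

              # fp16: ~14.0 GB, 8-bit: ~7.0 GB, 4-bit: ~3.5 GB
              ```

              A 4-bit quantized 7B model is only a few gigabytes, which is why the “tens of gigabytes” figure really only applies to unquantized or much larger models.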

    • aubertlone · 3 points · 1 year ago

      Hey, me too.

      And I do have a couple of different LLMs installed on my rig. But having that resource running locally is years and years away from being remotely performant.

      On the bright side, there are many open-source LLMs, and it seems like there are more every day.

    • dream_weasel · -42 points · edited · 1 year ago

      Ha. Lame.

      Edit: lol. Sign out of Google, nerds. Bring me your hypocrite neckbeard downvotes.