• @MigratingtoLemmy
    link
    English
    -1 · 7 months ago

    I’m still baffled at how well Ollama works on paltry hardware like ARM and small VMs. Give it GPUs and it’s amazing.

    The next step should be to encrypt information in transit and at rest, so you could purchase GPU power from the cloud while maintaining client-side encryption throughout. That would bring even more power to the masses: imagine giving Ollama a cloud endpoint to remote GPUs it can compute on, without the consumer purchasing any hardware.

    • AtHeartEngineer
      link
      English
      6 · 7 months ago

      ARM is not paltry; it’s in small/portable devices because it’s efficient, not weak.

    • @foggy
      link
      English
      1 · 7 months ago

      Tell that to groq.