I’ve been playing around with ollama. Given that you download the model, can you trust it isn’t sending telemetry?

  • Jack · 2 points · 1 month ago

    Can’t you run it from a container? I guess that will slow it down, but it will deny access to your files.
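
    A minimal sketch of that idea, assuming Docker and the official ollama/ollama image (the model name is just an example): pull the model once with networking enabled, then run the container with networking disabled so nothing can phone home.

    ```shell
    # Start ollama with no network at all; the model directory is mounted
    # from the host, so models must be downloaded beforehand.
    docker run -d --name ollama --network=none \
      -v ~/.ollama:/root/.ollama \
      ollama/ollama

    # Talk to it from inside the container (ports can't be published
    # on the "none" network).
    docker exec -it ollama ollama run llama3
    ```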

    • marcie (she/her) · 9 points · 1 month ago

      Yeah, you could. Though I don’t see any evidence that the large open-source LLM programs like jan.ai or ollama are doing anything wrong with their programs or files. Chucking it in a sandbox would solve the problem for good, though.

      • SeekPie · 5 points · edited · 1 month ago

        You could use “Alpaca” flatpak and remove the internet access with flatseal after having downloaded the model. (Linux)

        Or deny the app’s access to internet in app settings. (Android)
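
        The same Flatseal toggle can also be applied from a terminal, assuming Alpaca’s Flatpak app ID is com.jeffser.Alpaca (check yours with `flatpak list`):

        ```shell
        # Deny network access for the Alpaca flatpak (user-level override).
        flatpak override --user --unshare=network com.jeffser.Alpaca

        # Undo it later if you need to download another model:
        # flatpak override --user --reset com.jeffser.Alpaca
        ```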

    • @[email protected]
      link
      fedilink
      61 month ago

      Containers don’t really slow down apps significantly. It’s not a VM; it’s still a native app running on your kernel, just in separate namespaces with restricted access to hardware.
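
      A quick way to see that this is plain kernel namespacing rather than emulation (using util-linux `unshare`; assumes unprivileged user namespaces are enabled):

      ```shell
      # Run a command in a fresh network namespace: it is still a native
      # process on the same kernel, but it sees only a loopback interface
      # and has no route to the outside world.
      unshare --net --map-root-user ip link show
      ```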

      • Jack · 0 points · 1 month ago

        That is true for Linux and maybe Mac, but on Windows I think they have a bit more overhead. But again, I agree that in most cases it is not significant.

        • @[email protected]
          link
          fedilink
          4
          edit-2
          1 month ago

          Is the overhead because of containers, or is it because you’re running something that is meant to run on Linux and is using a compatibility layer like MinGW?