• @sturlabragason
    3 months ago

You can download multiple LLM models yourself and run them locally. It's relatively straightforward:

    https://ollama.com/

Then you can switch off your network after the download, wireshark the shit out of it, run it behind a proxy, etc.
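
The workflow above is just two Ollama commands; a minimal sketch (the model name `llama3` is an example, any model from the Ollama library works):

```shell
# Pull the model weights once, while still online.
ollama pull llama3

# After this, you can disconnect from the network entirely;
# inference runs locally against the downloaded weights.
ollama run llama3 "Explain what a local LLM is in one sentence."

# Ollama also exposes a local HTTP API (default port 11434),
# which is what you'd watch in Wireshark or route through a proxy:
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "hello", "stream": false}'
```

Since everything talks to `localhost:11434`, a packet capture after download should show no outbound traffic during inference.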