I’m just curious about the potential of running a small LLM locally on GrapheneOS, maybe some image processing too.

  • @j4k3OP
    1 year ago

    The second link is closer. I think it’s technically the Edge TPU that is used to handle the ML stuff.

    It would take a higher level of accessibility for me to engage with this in practice. I’d need something with Hugging Face-style high-level accessibility to have a realistic chance of getting it working; I’m curious whether anything like that exists. The available RAM probably limits anything really useful. It might be interesting to see what kind of edge processing could be combined with an offline model running on a local server. I can already connect to models over LAN, but my largest models are slow.
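
    For the LAN side, talking to a model on a local server from the phone is mostly just an HTTP request. A minimal sketch, assuming the server is running Ollama (the IP address, port, and model name here are placeholders I made up, not anything from this thread):

    ```python
    import json
    import urllib.request

    # Hypothetical LAN address of the machine running Ollama (default port 11434).
    OLLAMA_URL = "http://192.168.1.50:11434/api/generate"

    def build_request(model: str, prompt: str) -> urllib.request.Request:
        """Build a non-streaming request for Ollama's /api/generate endpoint."""
        payload = json.dumps({
            "model": model,        # e.g. a small model that fits in limited RAM
            "prompt": prompt,
            "stream": False,       # ask for one complete JSON response
        }).encode()
        return urllib.request.Request(
            OLLAMA_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )

    def ask(model: str, prompt: str) -> str:
        """Send the prompt over LAN and return the model's text reply."""
        with urllib.request.urlopen(build_request(model, prompt)) as resp:
            return json.loads(resp.read())["response"]
    ```

    Since it’s plain stdlib HTTP, something like this runs fine from Termux or a small app on the phone; the heavy lifting stays on the server, which sidesteps the device’s RAM limit at the cost of needing the LAN connection.
    
    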