• @[email protected]
    11 months ago

    Usually there is a massive VRAM requirement. Local neural-network silicon doesn't solve that, but using a more lightweight, limited model could.
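
    For a sense of scale, here's a minimal back-of-envelope sketch of how much memory just the weights take at different quantization levels (the 20% overhead factor for activations and the KV cache is my own rough assumption):

    ```python
    # Rough VRAM estimate: (parameter count) x (bytes per parameter),
    # plus ~20% overhead for activations and the KV cache (an assumption).

    def vram_gib(params_billion: float, bytes_per_param: float, overhead: float = 1.2) -> float:
        return params_billion * 1e9 * bytes_per_param * overhead / 1024**3

    for name, params in [("7B", 7), ("13B", 13), ("175B (GPT-3 class)", 175)]:
        for precision, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
            print(f"{name} @ {precision}: ~{vram_gib(params, nbytes):.0f} GiB")
    ```

    Even at int4, a GPT-3-class model needs on the order of 100 GiB, while a quantized 7B model fits in a few GiB, which is why the lightweight-model route is the only realistic one for local hardware.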

    Basically, don't expect even GPT-3-level quality, but SOMETHING can be run locally.
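
    For example, a small quantized model can run on ordinary hardware with something like llama-cpp-python; a minimal sketch (the model filename is a hypothetical placeholder, substitute any small GGUF model you've downloaded):

    ```python
    # Sketch: run a small quantized model locally with llama-cpp-python
    # (pip install llama-cpp-python).
    from llama_cpp import Llama

    # "tinyllama-q4.gguf" is a placeholder path, not a bundled model.
    llm = Llama(model_path="tinyllama-q4.gguf", n_ctx=2048)
    out = llm("Q: What is the capital of France? A:", max_tokens=32)
    print(out["choices"][0]["text"])
    ```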