• @brucethemoose · 4 days ago

    You can use larger “open” models through free or dirt-cheap APIs though.

TBH local LLMs are still kinda “meh” unless you have a high-VRAM GPU. I agree that 8B is kinda underwhelming, but the step up to something like Qwen 14B is enormous.
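For what it's worth, hitting a hosted open model usually just means an OpenAI-compatible chat completions request. A minimal sketch below, assuming a placeholder provider URL, API key, and model id (none of these are from the thread — substitute whatever cheap/free host you actually use); the request is built but deliberately not sent:

```python
# Sketch: building an OpenAI-style chat completion request for a larger
# open model (e.g. a Qwen 14B variant) on a hosted API.
# The base URL, key, and model name are placeholders, not real endpoints.
import json
import urllib.request

def build_chat_request(base_url, api_key, model, prompt):
    """Construct (but do not send) an OpenAI-compatible chat request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request(
    "https://api.example-provider.com/v1",  # hypothetical provider
    "YOUR_API_KEY",
    "qwen-14b-instruct",                    # hypothetical model id
    "Summarize this thread.",
)
# urllib.request.urlopen(req) would fire the call; omitted here.
print(req.full_url)
```

The upside of the OpenAI-compatible shape is that the same code works whether the model is on a paid host or a local server exposing the same endpoint.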