• @kitnaht
    32 months ago

    I’ve found that 4o is substantially worse than the previous model at a ton of things, so I run all of my LLMs locally now through Ollama.