• @kitnaht
    313 hours ago

    I’ve found that 4o is substantially worse than the previous model at a ton of things. So I run all of my LLMs locally now through Ollama.