• @Kachilde
    48 months ago

    Cool. Remind me never to use Opera.

    • @carl_dungeon
      58 months ago

      Well, this is a local LLM, which isn't the same as sending everything to ChatGPT. I've been experimenting with Ollama to run some local LLMs and it's pretty neat. I can see it becoming pretty useful in a few years when performance and memory requirements improve; there have already been big advances for the local stuff this year. I'm curious how exactly it'll be used in Opera. I'll at least check it out.