• @brucethemoose
    25 hours ago

    Only because AMD/Intel aren’t pricing competitively. I define “best experience” as the largest LLM/context I can fit on my GPU, and right now that’s essentially dictated by VRAM.
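    To put rough numbers on that (a hedged back-of-envelope sketch in Python; the model shape, quantization level, and function names are all illustrative assumptions, loosely a 70B-class GQA model):

    ```python
    # Back-of-envelope VRAM estimate for a locally hosted quantized LLM.
    # Every number below is an illustrative assumption, not a measurement.

    def weights_gib(n_params_b: float, bits_per_weight: float) -> float:
        """Memory for model weights at a given quantization, in GiB."""
        return n_params_b * 1e9 * (bits_per_weight / 8) / 2**30

    def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                     context_len: int, bytes_per_elem: int = 2) -> float:
        """KV cache grows linearly with context: K and V per layer."""
        return (2 * n_layers * n_kv_heads * head_dim
                * context_len * bytes_per_elem) / 2**30

    # Hypothetical 70B-parameter model with grouped-query attention:
    w = weights_gib(70, bits_per_weight=4.5)           # ~4.5 bpw quant
    kv = kv_cache_gib(n_layers=80, n_kv_heads=8,
                      head_dim=128, context_len=32768)  # 32k context
    print(f"weights ~{w:.1f} GiB, KV cache ~{kv:.1f} GiB, "
          f"total ~{w + kv:.1f} GiB")
    ```

    With those assumed numbers it lands around 47 GiB total, which is why the same model that fits on a 48 GB card forces a much smaller model or a much shorter context on a 24 GB one.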

    That being said, I get why most people wouldn't want to go through the fuss of setting up Intel/AMD inference.