• @[email protected]
    link
    fedilink
    English
    402 days ago

    Man I hope Battlemage is an actually profitable launch, or at least not a massive loss. Otherwise who knows if the next CEO will axe their GPU line. People liked to fearmonger them killing Arc before, but with a new change in management I can actually see that happening.

    • @brucethemoose · 19 points · 2 days ago

      If it’s the lower-midrange-only launch it appears to be, it will be extremely unprofitable. AMD may even eat large chunks of this market with the Strix Halo APU, which could perform similarly to the B570 with no need for a discrete GPU.

      There’s actually a big and growing demand for ANY high-VRAM GPU from the LLM crowd (a market AMD is inexplicably ignoring, Strix Halo aside), but it appears Intel can’t even compete there. No 256-bit APU, and their GPU is 192-bit, so it’s capped at like 24GB…
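
      For reference, that 24GB figure is just bus-width arithmetic: each GDDR6 channel is 32 bits wide and carries one module, or two in a clamshell layout. A rough sketch, with the 2GB (16Gbit) module size being an assumption based on common parts:

      ```python
      # Rough VRAM ceiling from bus width: one 32-bit channel per module,
      # 2GB (16Gbit) GDDR6 assumed, clamshell doubles modules per channel.
      def max_vram_gb(bus_bits: int, module_gb: int = 2, clamshell: bool = False) -> int:
          channels = bus_bits // 32            # 192-bit bus -> 6 channels
          return channels * module_gb * (2 if clamshell else 1)

      print(max_vram_gb(192))                  # 12 GB, the usual 192-bit config
      print(max_vram_gb(192, clamshell=True))  # 24 GB, the cap mentioned above
      print(max_vram_gb(256, clamshell=True))  # 32 GB on a 256-bit bus
      ```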

      • @PalmTreeIsBestTree · 5 points · 2 days ago

        This is why I got a 4070 Ti Super: it has a 256-bit bus.

        • @brucethemoose · 3 points · 2 days ago

          Eh, actually the 4060 Ti is way better for LLMs :P With Nvidia, it’s all about VRAM capacity.

      • @[email protected]
        link
        fedilink
        English
        12 days ago

        Intel is totally missing the boat, honestly. Their mobile i9 with the built-in GPU can share DDR5 with the video card.

        You can put 96 gigs of RAM in a small form factor and load in a monster model. It’s not super fast, but it works, and it’s a lot faster than not offloading any layers off the CPU.
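
        If anyone wants to try that partial-offload setup, a minimal sketch with llama-cpp-python looks like this; the model path and layer count are placeholders, not a tested config:

        ```python
        # Minimal partial-offload sketch using llama-cpp-python.
        from llama_cpp import Llama

        llm = Llama(
            model_path="models/llama-70b-q4_k_m.gguf",  # hypothetical quant file
            n_gpu_layers=40,  # layers pushed to the (i)GPU; the rest stay on CPU
            n_ctx=4096,
        )

        out = llm("Q: Why does offloading layers help? A:", max_tokens=64)
        print(out["choices"][0]["text"])
        ```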

        They should be selling NUC-sized PCs with built-in graphics and 128 gigs of the fastest RAM they can put on the boards.

        • @brucethemoose · 1 point · 1 day ago

          IMO it’s not really “enough” until the bus is 256-bit. That’s when 32B-72B class models start to look even theoretically runnable at decent speeds.
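
          Back-of-the-envelope for why 256-bit is the threshold (the LPDDR5X-8000 speed and the ~18GB 4-bit 32B quant are both assumptions): decode speed is capped by how fast you can stream the weights.

          ```python
          # Decode-speed ceiling: each new token reads the whole model once,
          # so tokens/s <= memory bandwidth / model size.
          def bandwidth_gbps(bus_bits: int, mtps: int) -> float:
              return bus_bits / 8 * mtps / 1000   # bytes per transfer * MT/s

          model_gb = 18.0                         # ~32B model at ~4.5 bits/weight
          for bus in (128, 256):
              bw = bandwidth_gbps(bus, 8000)      # LPDDR5X-8000 (assumed)
              print(f"{bus}-bit: {bw:.0f} GB/s -> ~{bw / model_gb:.0f} tok/s ceiling")
          # 128-bit: 128 GB/s -> ~7 tok/s; 256-bit: 256 GB/s -> ~14 tok/s
          ```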

            • @brucethemoose · 1 point · edited · 1 day ago

              Also that is a very low context test. A longer context will bog it down, even setting aside the prompt processing time.

              …On the other hand, you could probably squeeze a bit more out of it by running OpenVINO instead of llama.cpp, so that is still respectable.
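
              The long-context hit is mostly the KV cache, which grows linearly with context and gets re-read for every new token. Rough numbers for a 70B-class shape (80 layers, 8 GQA KV heads, head dim 128, fp16; all assumed values):

              ```python
              # KV cache size grows linearly with context length.
              def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                               ctx: int, bytes_per_elem: int = 2) -> float:
                  # 2x covers the separate K and V tensors; fp16 by default
                  return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem / 2**30

              for ctx in (2048, 8192, 32768):
                  print(ctx, f"{kv_cache_gib(80, 8, 128, ctx):.1f} GiB")
              # 2048 -> 0.6 GiB, 8192 -> 2.5 GiB, 32768 -> 10.0 GiB
              ```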

              • @[email protected]
                link
                fedilink
                English
                21 day ago

                > that is a very low context test. A longer context will bog it down

                yeah, it’s definitely not good enough for user-facing work, but when I’m doing development on something like translations, being able to see the 70B output and compare it to other models is super useful before I send the job off to something that costs more money to run.

                9/10 times, the bigger model isn’t significantly better for what I’m trying to do, but it’s really nice to confirm that.
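
                For what it’s worth, that compare step can be a short loop over two local OpenAI-compatible servers (for example llama-server instances); the ports and model names below are placeholders:

                ```python
                # Sketch: run the same prompt against a big and a small local model.
                import requests

                endpoints = {
                    "70b": "http://localhost:8080/v1/chat/completions",  # placeholder
                    "8b":  "http://localhost:8081/v1/chat/completions",  # placeholder
                }
                prompt = "Translate to German: The meeting was moved to Thursday."

                for name, url in endpoints.items():
                    r = requests.post(url, json={
                        "messages": [{"role": "user", "content": prompt}],
                        "max_tokens": 128,
                    }, timeout=600)
                    print(name, "->", r.json()["choices"][0]["message"]["content"])
                ```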