• @Sconrad122
    7 months ago

    Why call out Intel? Pretty sure AMD and Nvidia are both putting dedicated AI hardware in all of their new and upcoming product lines. From what I understand, they are even generally doing it better than Intel. Hell, Qualcomm is advertising the AI performance of its new chips, and so is Apple. I don’t think there is anyone in the chip world who isn’t hopping on the AI train.

    • @T156
      7 months ago

      Because I was only aware of Intel (and Apple) doing it on computers, whereas most major flagship mobile devices have those accelerators now.

      GPUs were excluded, since they’re not as universal as processors are. A dedicated video card is still by and large considered an enthusiast part.

      • @Sconrad122
        7 months ago

        Fair enough. Was just asking because the choice of company surprised me. AMD is putting "AI Engines" in their new CPUs (a separate silicon design from their GPUs), and while Nvidia largely only sells GPUs, which are less universal, they’ve had dedicated AI hardware (tensor cores) in their offerings for the past three generations. If anything, Intel is barely keeping up with its competition in this area (for the record, I see vanishingly little value in the focus on AI as a consumer, so this isn’t really a ding on Intel in my books; I’m more so making the observation from a market forces perspective).

      • @Sconrad122
        7 months ago

        You’re not wrong that GPU and AI silicon design are tightly coupled, but my point was that both of the GPU manufacturers are dedicating hardware to AI/ML in their consumer products. Nvidia has the tensor cores in its GPUs, which it justifies to consumers with DLSS and RT but which were clearly designed for AI/ML use cases when it presented them with Turing. AMD has the XDNA AI Engine that it is putting in its APUs, separate from its RDNA GPUs.
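
        To make that concrete, here’s a rough sketch of how consumer software actually ends up on that dedicated hardware. It assumes PyTorch with a CUDA build on a Turing-or-newer Nvidia card, and it’s only an illustration of tensor-core use in general, not of DLSS or XDNA specifically:

            import torch

            # Rough sketch: FP16 matrix math is what the CUDA libraries route to the
            # tensor cores on Turing-or-newer GPUs; DLSS is built on the same units.
            if torch.cuda.is_available():
                a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
                b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
                c = a @ b  # dispatched to tensor-core matmul kernels by cuBLAS
                print(c.shape)
            else:
                print("No CUDA device found; this sketch assumes an Nvidia GPU")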