cross-posted from: https://sopuli.xyz/post/9700996

Nvidia’s AI customers are scared to be seen courting other AI chipmakers for fear of retaliatory shipment delays, says rival firm

  • @[email protected]
    link
    fedilink
    English
    48
    edit-2
    10 months ago

    Yeah, don’t be unrealistic. We can’t just have a group of competent individuals properly plan out how to dismantle a monopoly to allow for proper competition in the industry. If they don’t hold onto their monopoly, how will we ever see technological advancements?

    • @Buffalox · -10 points · 10 months ago

      No, really. NVIDIA’s entire business is based on one main chip design. How would you break up a company that essentially has only one design, which it implements at various scales across its products?
      There is literally nothing to break up.

      • @[email protected]
        link
        fedilink
        English
        14
        edit-2
        10 months ago

        AI accelerators and gaming GPUs could definitely be split apart. AMD already uses different architectures for those applications, and it has notably smaller engineering teams.

        Ray tracing could also arguably be spun into a separate division; that part is already fairly separate in the architecture. Then Intel, AMD and whatever other competitors pop up could license the ray-tracing tech stack or even buy ray-tracing chiplets.

        Some of the software solutions, like DLSS, could be spun off and licensed to competitors.

        • @Buffalox · -7 points · 10 months ago (edited)

          > AI accelerators and gaming GPUs could definitely be split apart.

          > Ray tracing could also arguably be spun into a separate division.

          No, they can’t, because all NVIDIA products are similar base designs at different scales.
          For many years NVIDIA has designed the main chip first, the biggest and baddest of them all, used for the very highest-end products. All other products are made by selecting parts of that design, so the chips are cheaper for their respective markets. There is no reasonable way to split this up.

    • @[email protected]
      link
      fedilink
      English
      -1410 months ago

      There is no monopoly. If Nvidia doesn’t play it right in the coming years, it won’t hold on to its current position. Nvidia isn’t getting into custom chips just for fun. If the major cloud providers end up using their own custom silicon, that’s a major blow for Nvidia.

      • @[email protected]
        link
        fedilink
        English
        2110 months ago

        The point the article makes is that NVIDIA is pressuring current customers by threatening shipping delays, which is an abuse of its power.

        • @[email protected]
          link
          fedilink
          English
          -410 months ago

          And I hope they get punished for it, but that is not the same as Nvidia having a monopoly.

          • @[email protected]
            link
            fedilink
            English
            610 months ago

            They have as much of a monopoly as Google has on search. Sure, there are competitors, and there is a chance that new tech might disrupt them, but they are able to abuse their market position (for example, forcing websites to use Google Analytics or be penalised in search results).

            • @[email protected]
              link
              fedilink
              English
              210 months ago

              I disagree. Most of the big actors in the cloud/AI space have their own silicon in the works, which is a big enough concern for Nvidia that it is looking into providing custom solutions. If the CUDA moat breaks, Nvidia will be in a much weaker position.

              The search-engine landscape is completely different, although, to be fair, I don’t think you meant that those markets are directly comparable.

              • @[email protected]
                link
                fedilink
                English
                110 months ago

                I’d argue that Google is suffering from somewhat the same issues when it comes to LLMs. They freaked out when ChatGPT dropped, and I’m pretty sure Bard was rushed out just to compete with Bing Chat.

      • @utopiah · 1 point · 10 months ago

        > If the major cloud providers end up using their own custom silicon, that’s a major blow for Nvidia.

        They already do, but AFAIK not at scale. My belief, though it’s only intuition and I don’t have numbers to back it up, is that Alphabet for GCP, Microsoft for Azure, Amazon for AWS and others do design their own chips, their own racks, etc., but mostly as promotional R&D. They want to show investors that they are acutely aware of their dependency on NVIDIA and are trying to be more resilient by having alternatives.

        What is still happening, though, is that in terms of compute per watt, and thus per dollar, NVIDIA, through its entire stack, both hardware (H100, A100, 40xx, etc.) and software (mostly CUDA here), plus the trust of CTOs, is the de facto standard. Consequently my bet is that GCP, Azure and AWS do have their custom silicon running today, but it accounts for less than 1% of their compute, and they probably even offer it to customers at a discount.

        It’s a bit like China and the billions it has poured into making its own chips. Sure, it shows they can (minus the dependency on ASML…), but at what cost? Making a chipset with performance equivalent to the state of the art is a research feat not to be downplayed, but doing it at scale in a commercially competitive way is quite different.
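
        To put that “compute per watt, and thus per dollar” hunch into rough numbers, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (electricity price, purchase price, TDP, sustained throughput) is an illustrative assumption, not a measured spec for any real part:

        ```python
        # Back-of-the-envelope: why "compute per watt" feeds into "compute per dollar"
        # at datacenter scale. Every number below is a made-up placeholder, not a real
        # H100 / A100 / in-house-silicon figure.

        POWER_PRICE_USD_PER_KWH = 0.10   # assumed electricity price
        HOURS_PER_YEAR = 24 * 365

        def yearly_energy_cost(tdp_watts: float) -> float:
            """Energy cost (USD) of running one accelerator flat out for a year."""
            return (tdp_watts / 1000.0) * HOURS_PER_YEAR * POWER_PRICE_USD_PER_KWH

        def cost_per_pflops_year(capex_usd: float, tdp_watts: float, sustained_pflops: float) -> float:
            """One year of hardware cost plus one year of energy, per sustained PFLOPS."""
            return (capex_usd + yearly_energy_cost(tdp_watts)) / sustained_pflops

        # Hypothetical comparison: a mature incumbent accelerator vs. in-house silicon
        # that is cheaper to buy but delivers less sustained compute at the same power.
        incumbent = cost_per_pflops_year(capex_usd=25_000, tdp_watts=700, sustained_pflops=1.0)
        in_house = cost_per_pflops_year(capex_usd=15_000, tdp_watts=700, sustained_pflops=0.6)

        print(f"incumbent part: ~${incumbent:,.0f} per sustained PFLOPS-year")
        print(f"in-house part:  ~${in_house:,.0f} per sustained PFLOPS-year")
        ```

        With placeholder numbers like these, the cheaper in-house part can still come out more expensive per unit of compute once its lower efficiency is factored in, which is the trade-off being gestured at above.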

        Anyway, that’s just my hunch, so if anybody has data to contradict it, please do share.