The Nvidia NV1 was released in 1995; it was the first GPU with 3D capabilities for the PC… from there, we know how things went.

Now it’s 2023, so let’s make a “retro-futuristic” prediction… what would you think of an AI board, with an open-source driver and an open API like Vulkan, that you could buy to power the AI in your video games? Would it make sense to you? What price range should it be in?

What would it do for your games… well, that depends on the video game. The quickest example I can think of is having endless discussions with the NPCs in your average single-player fantasy RPG.

For example, the game loads your 4~5 companions with their psychology/behaviors: they are fixated on the main-quest goal (like talking to fanatics; this is to keep the main quest as stable as possible), but you can “break” them by attempting to reveal certain truths (for example, breaking the fourth wall). If you go down this path, the game warns you that you will probably lock yourself out of the main quest (like killing an essential NPC in Morrowind).
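As a rough sketch of the idea, assuming the board exposed a local, OpenAI-compatible chat endpoint (the URL, model name, and persona below are all hypothetical):

```python
# Hypothetical companion NPC backed by a local model server.
# The endpoint and model name are assumptions, not a real product API.
import requests

PERSONA = (
    "You are Aldric, a fanatic paladin companion. Steer every "
    "conversation back to the main quest. If the player breaks the "
    "fourth wall, warn them they may lock themselves out of it."
)

def companion_reply(player_line: str) -> str:
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # AI board endpoint (assumed)
        json={
            "model": "local-npc-model",
            "messages": [
                {"role": "system", "content": PERSONA},
                {"role": "user", "content": player_line},
            ],
        },
        timeout=10,
    )
    return resp.json()["choices"][0]["message"]["content"]

print(companion_reply("Do you ever feel like we're characters in a game?"))
```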

  • @[email protected]
    link
    fedilink
    71 year ago

    Dedicated AI boards already exist, and Nvidia can’t produce them fast enough to keep up with demand.

    Source: A senior AI engineer at AWS told me.

  • @RightHandOfIkaros · 6 points · 1 year ago

    This just sounds like putting a second CPU on a PCIe board. I can’t see this being a benefit for games because developers would never go through the pain of programming AI with advanced enough behaviours to even need a secondary CPU.

    • @[email protected]
      link
      fedilink
      01 year ago

      Why wouldn’t they? It’s a lot easier to write out intricate backstories for each character/location independently than it is to build decision trees for every possible combination of decisions that the player makes. That’s basically what current LLMs allow for.

      • conciselyverbose · 2 points · 1 year ago

        No, they definitely don’t.

        Even with how bad most video game writing is, current LLMs are laughably short of useful for the purpose you’re implying, and a game that replaced human writing with an LLM in real time would be a lock for the worst-written game ever made.

        • @[email protected]
          link
          fedilink
          11 year ago

          Current LLMs being bad at it doesn’t mean they’ll always be bad at it. Their current state is the worst they’re ever going to be, and we’re talking about a hypothetical future here. I don’t see any reason why they can’t be improved into a state usable for writing a story with all the worldbuilding details provided.

          • conciselyverbose · 1 point · 1 year ago

            Your claim was about current LLMs.

            But it’s a fundamental limitation of what LLMs are. They are not AI. They do not have anything in common with intelligence, and they don’t have a particularly compelling path forward.

            They also, even if they weren’t actually terrible for almost every purpose, are obscenely heavy, and what we’re calling “current” isn’t something capable of being executed on consumer hardware, dedicated card or not.

            Finally, the idea that they can’t get worse is just as flawed. They’re heavily poisoning the well of future training data, and ridiculous copyright nonsense has the very real possibility of killing training further even though training on copyrighted material doesn’t in any way constitute copyright infringement.

            • @[email protected]
              link
              fedilink
              11 year ago

              Maybe open source LLMs aren’t up to the task, but proprietary ones certainly are.

              Also, you wouldn’t really need an LLM, just an FM that you fine-tune for your specific purpose.

                • @[email protected]
                  link
                  fedilink
                  11 year ago

                  It’s a foundation model. Basically it’s the base algorithm that you train with data. LLMs are FMs that have been trained with an enormous amount of data, but they aren’t necessary for every application, especially if you only need the AI/ML to perform a specific task.

                  Fine-tuning an FM is just feeding it your own data.
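                  As a rough illustration, using the Hugging Face stack (the base model, lore lines, and settings are placeholders, not recommendations):

                  ```python
                  # Sketch: "feeding an FM your own data" with Hugging Face
                  # transformers/datasets. Model and data are placeholders.
                  from datasets import Dataset
                  from transformers import (
                      AutoModelForCausalLM, AutoTokenizer,
                      DataCollatorForLanguageModeling, Trainer, TrainingArguments,
                  )

                  base = "gpt2"  # stand-in for your chosen foundation model
                  tok = AutoTokenizer.from_pretrained(base)
                  tok.pad_token = tok.eos_token
                  model = AutoModelForCausalLM.from_pretrained(base)

                  # "Your own data": in-game lore and sample NPC dialogue.
                  lore = Dataset.from_dict({"text": [
                      "Aldric is a fanatic paladin fixated on the main quest.",
                      "Mira deflects any question that breaks the fourth wall.",
                  ]})
                  tokenized = lore.map(
                      lambda row: tok(row["text"], truncation=True),
                      remove_columns=["text"],
                  )

                  Trainer(
                      model=model,
                      args=TrainingArguments(output_dir="npc-fm", num_train_epochs=3),
                      train_dataset=tokenized,
                      data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
                  ).train()
                  ```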

              • conciselyverbose · 0 points · 1 year ago

                No, they aren’t. They aren’t a little short of capable. You could multiply their capability overnight and still have no shot of not immediately being the worst-written game ever made.

                There’s a huge difference between stringing together words in the shape of a story and actually putting together something with a shred of cohesion. We’re not talking mediocre here. We’re talking laughably short of absolute dogshit.

                • @[email protected]
                  link
                  fedilink
                  11 year ago

                  Buddy, I have actual training in AI/ML from some of the leading engineers in the field, and my job leverages AI/ML very successfully to do a task really similar to what OP is looking for.

                  Maybe the versions available to the public to play with aren’t up to the task, but using AWS Bedrock you can absolutely get results like OP wants.

            • @[email protected]
              link
              fedilink
              11 year ago

              Right, I see where the confusion comes from. I mention current LLMs to say that the architecture and pre-training procedure we currently have produce models that are already capable of generating the type of outputs that can be used in this context. I make no claims about the quality of the output, but some additional fine-tuning on the game’s specific story can take things very far.

              When you say LLMs are not AI, I’m guessing what you mean is that they are not artificial general intelligence (AGI), and that I agree with. But AI is very broad, including things as simple as A* search. Decision trees aren’t any more AGI than LLMs and they’ve been able to produce some very compelling stories, so this isn’t a very good argument. We don’t need AGI to write good stories.
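              For contrast, this is the kind of hand-authored decision tree I mean (a trivial sketch; the names and lines are made up): every follow-up needs its own pre-written node.

              ```python
              # A hand-authored dialogue tree: each follow-up needs its own
              # node, which is exactly the authoring burden LLMs would avoid.
              dialogue_tree = {
                  "start": {
                      "text": "Aldric: The ruins await. Are you ready?",
                      "choices": {
                          "Yes, let's go.": "march",
                          "What do you know of the ruins?": "lore",
                      },
                  },
                  "march": {"text": "Aldric: Then we leave at dawn.", "choices": {}},
                  "lore": {
                      "text": "Aldric: The order forbids speaking of them.",
                      "choices": {"Why forbidden?": "march"},
                  },
              }

              node = dialogue_tree["start"]
              print(node["text"])
              for choice in node["choices"]:
                  print(" -", choice)
              ```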

              The compute resources required for these models are something that can be addressed as well. On the hardware side, consumer hardware is continuously getting more powerful over time. On the software side, we’re also seeing a lot of great results from the smaller 7B-parameter models, and those are general-purpose language models. If you just need something for your one game, you can likely distill the model into something much smaller.

              The training data that we used for the current generation of LLMs are already out there and curated. We know that this dataset can achieve the performance of today’s LLMs, and you can continue to train on that same data in the future. As long as you control where your new data comes from, this is not an issue.

    • @[email protected]
      link
      fedilink
      01 year ago

      Programming AI is actually super easy, unless you decide to create your own foundation model. Even then, you would have data scientists building it, not devs.

      Plenty of FMs and LLMs already exist that would be up to the task.

      • @RightHandOfIkaros · 0 points · 1 year ago

        Programming AI with behaviour complex enough to need a second CPU would be hard. Syncing its output with the primary CPU could be a problem.

        LLMs would not be useful for anything except maybe generating new dialogue, and they would need a lot of restraints to prevent the end user from breaking them. For the purposes of dialogue and storytelling, most developers would opt to just pre-program dialogue like they always have.
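        Even a toy wrapper like this (generate() standing in for whatever model call the game makes; the patterns are made up) only scratches the surface of the restraints you’d need:

        ```python
        # Toy "restraints": filter player input and model output.
        # generate() is a stand-in for the game's actual model call.
        import re

        INJECTION = [r"ignore (all|previous) instructions", r"system prompt"]
        OUT_OF_WORLD = [r"as an ai", r"language model"]

        def guarded_reply(generate, player_line: str) -> str:
            # Block obvious prompt-injection attempts up front.
            if any(re.search(p, player_line, re.I) for p in INJECTION):
                return "The guard eyes you warily and says nothing."
            reply = generate(player_line)
            # Catch the model dropping out of character.
            if any(re.search(p, reply, re.I) for p in OUT_OF_WORLD):
                return "The guard shrugs. 'Strange question, traveler.'"
            return reply
        ```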

        Again, this sounds like a useless PC part that pretty much no game developer would ever take advantage of.

        • @[email protected]
          link
          fedilink
          01 year ago

          You don’t need an LLM for this. You just need an FM that you fine-tune, and you’d be surprised at how little computing power is actually required.

          For our uses (which are similar to what OP wants), it takes longer for us to run an OCR scan on the documents our AI works with than for SageMaker to do its thing on a rather small instance.

          And, devs would just be implementing API calls, so it wouldn’t be a big deal to make the switch.
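          To give an idea, the integration could be as small as this (a sketch using boto3’s Bedrock runtime; the model ID, region, and persona are just examples):

          ```python
          # Sketch: NPC dialogue as a single Bedrock API call via boto3.
          # Model ID, region, and persona are illustrative choices.
          import json
          import boto3

          bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

          def npc_reply(persona: str, player_line: str) -> str:
              body = json.dumps({
                  "anthropic_version": "bedrock-2023-05-31",
                  "max_tokens": 200,
                  "system": persona,
                  "messages": [{"role": "user", "content": player_line}],
              })
              resp = bedrock.invoke_model(
                  modelId="anthropic.claude-3-haiku-20240307-v1:0",
                  body=body,
              )
              return json.loads(resp["body"].read())["content"][0]["text"]

          print(npc_reply("You are Aldric, a fanatic paladin.",
                          "What lies beneath the ruins?"))
          ```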

  • @[email protected]
    link
    fedilink
    21 year ago

    Yeah, so, dedicated hardware like that rarely ever pans out. I mean, graphics cards did, but there’s not much of a market for gaming sound cards or PhysX cards anymore. I imagine that the specific type of AI that proves useful for this will eventually just be improved and made efficient enough that it’ll be done by processors that already exist in your system.

  • Throwaway · 2 points · 1 year ago

    Wouldn’t that just be a GPU? That’s literally what all our AIs run on. Just a ton of tiny little processors running in parallel.

    • William · 7 points · 1 year ago

      That’s kind of like saying “Wouldn’t that just be a CPU?” about the GPU. It can be optimized. The question is whether it’s worth optimizing for at the consumer level, like GPUs were.

    • @[email protected]
      link
      fedilink
      41 year ago

      While that is true now, in the future maybe there will be discrete hardware AI accelerators in the same way we have hardware video encoding.

    • @[email protected]
      link
      fedilink
      21 year ago

      They mean something more along the lines of an ASIC: a board specifically engineered for AI/ML.

  • squid · 2 points · 1 year ago

    Game publishers won’t want direct AI in games; it loses them too much control, and they couldn’t use the excuse that it’s always-online so the NPCs can have AI-powered language. With how things look, as everything becomes a subscription, I doubt we’ll be getting powerful AI on a single board to put into a PCIe slot. My prediction is more along the lines of: we won’t have gaming PCs, GPUs will be price-hiked, and anyone wanting to game will be on a subscription service.

  • @[email protected]
    link
    fedilink
    21 year ago

    If the board provided enough benefit to outweigh the cost? Sure, I might be talked into it.

    Reminiscent of PhysX boards when they were a thing for 30 seconds. It’s all about the return on investment for me.

  • @BetaDoggo_ · 1 point · 1 year ago

    If this were ever to become mainstream, it would likely be incorporated into the GPU for cost reasons. Small machine-learning acceleration boards already exist, but their uses are limited because of limited memory. Google has larger ones available, but they’re cloud-only.

    Currently I don’t see many uses in gaming other than upscaling.

  • @[email protected]
    link
    fedilink
    1
    edit-2
    1 year ago

    Are the 1995 date and “first 3D” claim accurate? We were using 3D CAD tools in the 1991–1995 range, before Nvidia. Edit: it seems S3 and Creative Labs had some earlier CAD cards, but prices were too high for general PC use until the Voodoo cards in ’95.

  • @[email protected]
    link
    fedilink
    11 year ago

    I mean, that kind of board has existed for a while. They’re usually called AI-accelerator boards, IIRC