• @QuadratureSurfer
    22 days ago

    Yes, but with DLSS we’re adding ML models to the mix, each trained on a different aspect:

    Interpolating between frames
    For instance, you might normally get 30FPS, but the ML model has an idea of what everything should look like between those frames (based on what it has been trained on), so it can insert additional generated frames and boost your framerate to 60FPS or more.
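    As a rough sketch of the idea only - a plain average of two frames standing in for the trained model, with made-up frame sizes:

    ```python
    import numpy as np

    def naive_generated_frame(frame_a, frame_b, t=0.5):
        # Crude stand-in for learned frame generation: blend two rendered
        # frames to guess the frame in between. DLSS uses a trained network
        # plus motion data instead of a plain average.
        return (1.0 - t) * frame_a + t * frame_b

    # Two consecutive frames rendered by the game at 30 FPS (1080p RGB, values in [0, 1]).
    frame_a = np.random.rand(1080, 1920, 3).astype(np.float32)
    frame_b = np.random.rand(1080, 1920, 3).astype(np.float32)

    # Inserting one generated frame between every rendered pair turns 30 FPS into 60 FPS.
    in_between = naive_generated_frame(frame_a, frame_b)
    ```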

    Upscaling (making the picture larger) - the GPU and the rest of the hardware can do their work at a smaller resolution, which makes their job easier, while the ML model here has been trained to enlarge the image and fill in the right pixels so that everything still looks good.
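    Same kind of sketch for this one - nearest-neighbour upscaling standing in for the trained super-resolution model, with a made-up 720p to 1440p example:

    ```python
    import numpy as np

    def naive_upscale(frame, scale=2):
        # Crude stand-in for DLSS super resolution: repeat each pixel.
        # The trained model instead fills the new pixels with plausible detail.
        return frame.repeat(scale, axis=0).repeat(scale, axis=1)

    low_res = np.zeros((720, 1280, 3), dtype=np.float32)  # rendered at 1280x720
    high_res = naive_upscale(low_res, scale=2)             # presented at 2560x1440
    print(high_res.shape)  # (1440, 2560, 3)
    ```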

    Optical Flow -
    This ML model has been trained on motion (which objects/pixels move where from one frame to the next), so that frame generation can make better predictions.
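    A sketch of why that helps: if you know each pixel's motion vector, you can warp a rendered frame toward where things will be. Simple backward warping in numpy stands in for the real hardware/model here, and the flow field is made up:

    ```python
    import numpy as np

    def warp_with_flow(frame, flow):
        # Shift every pixel by its motion vector (dy, dx): a rough picture of
        # how optical flow feeds frame generation by saying where pixels go.
        h, w = frame.shape[:2]
        ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        src_y = np.clip(np.rint(ys - flow[..., 0]).astype(int), 0, h - 1)
        src_x = np.clip(np.rint(xs - flow[..., 1]).astype(int), 0, w - 1)
        return frame[src_y, src_x]

    frame = np.random.rand(1080, 1920, 3).astype(np.float32)
    flow = np.zeros((1080, 1920, 2), dtype=np.float32)
    flow[..., 1] = 4.0  # pretend everything is sliding 4 pixels to the right
    predicted = warp_with_flow(frame, flow)
    ```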

    Not only that, but Nvidia can ship us updated ML models, trained on specific game titles, through their driver updates.

    While each of these could be accomplished with older techniques, I think the results we’re already seeing speak for themselves.

    Edit: added some sources below and fixed up optical flow description.

    https://www.digitaltrends.com/computing/everything-you-need-to-know-about-nvidias-rtx-dlss-technology/
    https://www.youtube.com/watch?v=pSiczcJgY1s

      • azuth
        21 days ago

        No, rendering at a smaller resolution and upscaling is not the same concept as only rendering what will end up in frame.