Pretty damning review.

  • @NewBrainWhoThis
    1 year ago

    Let's say you run a single test and collect 10 samples at steady state for temperature and power. This data will have some degree of variability depending on many factors, such as airflow at that exact moment, CPU utilization, and inherent noise in the measurement device itself. Additionally, if you repeat the same test multiple times, on different days and with different testers, you will not get exactly the same results.

    So if you then compare a system A to a system B, you might see that system B is 12% “better” (depending on your metric). You must then answer the question: is this observed difference due to system B actually being better, or can it be explained by the normal variability in your test setup? Most of the time there are so many external factors influencing your measurement that even if you see a difference in your data, it is not significant but due to chance. You should always present your data in a way that makes it clear to the reader how much variability was in your test setup.
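
    To make this concrete, here is a minimal sketch (stdlib Python, all numbers made up) of how you could check whether a difference between two sets of power measurements is larger than the run-to-run noise. It computes Welch's t statistic; roughly, |t| well above ~2 suggests the difference is unlikely to be pure chance, while |t| near 0 means the gap is within normal variability:

    ```python
    # Hypothetical example: Welch's t statistic for two independent samples
    # of steady-state power draw (watts). The sample values are invented
    # purely for illustration.
    import math
    import statistics

    system_a = [101.2, 99.8, 100.5, 102.1, 100.9,
                101.5, 99.4, 100.2, 101.0, 100.6]
    system_b = [103.9, 104.8, 103.2, 105.1, 104.4,
                103.7, 104.9, 104.0, 103.5, 104.6]

    def welch_t(x, y):
        """Welch's t statistic: mean difference scaled by pooled standard error."""
        mx, my = statistics.mean(x), statistics.mean(y)
        vx, vy = statistics.variance(x), statistics.variance(y)  # sample variances
        return (my - mx) / math.sqrt(vx / len(x) + vy / len(y))

    t = welch_t(system_a, system_b)
    print(f"mean A = {statistics.mean(system_a):.2f} W, "
          f"mean B = {statistics.mean(system_b):.2f} W, t = {t:.2f}")
    ```

    With real data you would also report the spread (e.g. standard deviation or error bars per run), which is exactly the "present your data so the variability is clear" point above.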

    • @DrBeerFace
      1 year ago

      Gamers Nexus has entire videos dedicated to their methodology for different tests. No, there are no p-values on their charts; however, GN does a great job of discussing their experimental design and how it mitigates confounding factors, to the extent that GN can control for them.

      • @NewBrainWhoThis
        1 year ago

        Can you provide a link to that video? And also a link to a good example of how they present the data and the different confounding factors? Do they publish the data somewhere other than in the videos? Thanks 👍