• @WhyIsItReal
    9 · 1 year ago

    i’ve never heard of anyone using non-reduced fractions to measure precision. if you go into a machine shop and ask for a part to be milled to 16/64”, they will ask you what precision you need; they would never assume that means 16/64”±1/128”.

    if you need custom precision in any case, you can always specify that by hand, fractional or decimal.
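    Incidentally, standard rational-number tooling supports this point: fractions are normalized on construction, so a non-reduced denominator cannot carry extra information through any arithmetic. A minimal sketch using Python’s stdlib `fractions` module (chosen here purely for illustration):

    ```python
    from fractions import Fraction

    # Fraction reduces to lowest terms immediately, so the
    # "precision-carrying" denominator of 16/64 is lost on the spot:
    print(Fraction(16, 64))                     # 1/4
    print(Fraction(16, 64) == Fraction(1, 4))   # True
    ```

    Any convention that encodes tolerance in the denominator would have to survive reduction, which it doesn’t.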

    • @chiliedogg
      -5 · 1 year ago

      But you can’t specify it with decimal. That’s my point. How do you tell the machine operator it needs to be precise to the 64th in decimal? “0.015625” implies a precision over 15,000 times finer than 1/64th. The difference between 1/10 and 1/100 is massive, and decimal has no way of expressing it with significant figures.
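      For what it’s worth, the “over 15,000x” figure checks out if you read significant figures the usual way, as half a unit in the last decimal place. A quick check with Python’s `fractions` module (an editor’s illustration, not part of the original comment):

      ```python
      from fractions import Fraction

      # tolerance implied by "1/64" read as a non-reduced fraction: +/- 1/128
      frac_tol = Fraction(1, 128)

      # tolerance implied by "0.015625" under the half-unit-in-the-last-place
      # convention: +/- 0.0000005
      dec_tol = Fraction(1, 2) * Fraction(1, 10**6)

      print(frac_tol / dec_tol)  # 15625
      ```

      So the decimal form, taken at face value, claims a tolerance 15,625 times tighter than the fractional form.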

      • @WhyIsItReal
        3 · 1 year ago

        sure you can, you say “i need a hole with diameter 0.25″ ± 0.015625″”. it doesn’t matter that you have more sig figs when you state your precision

        but regardless, that’s probably not the precision you care about. there’s a good chance that you actually want something totally different, like 0.25±0.1”. with decimal, it’s exceptionally clear what that means, even for complicated/very small decimals. doing the same thing fractionally has to be written as 1/4±1/10”, meaning you have to figure out what that range of values is (3/20” to 7/20”)
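        The range arithmetic at the end can be verified mechanically; a small sketch with Python’s `fractions` module (added for illustration):

        ```python
        from fractions import Fraction

        nominal = Fraction(1, 4)   # 1/4"
        tol = Fraction(1, 10)      # +/- 1/10"

        lo, hi = nominal - tol, nominal + tol
        print(lo, hi)  # 3/20 7/20
        ```

        The same bounds in decimal, 0.15″ to 0.35″, need no common-denominator step at all.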

        • @chiliedogg
          -2 · 1 year ago

          Having to provide a “+/-” for a measurement is a silly alternative to using a measurement that already includes precision. You’re just so used to doing things a stupid way that you don’t see it.

          • @WhyIsItReal
            2 · 1 year ago

            providing an arbitrarily non-reduced fraction is an even sillier alternative. the same fundamental issue arises either way, and it’s much clearer to use obvious semantics that everyone can understand

            • @chiliedogg
              -1 · 1 year ago

              It’s not the same issue at all.

              How do you represent 1/64th in decimal without implying greater or lesser precision? Or 1/3rd? Or 1/2 or literally anything that isn’t a power of 10?

              You’re defending the practice of saying “this number, but maybe not, because we can’t actually measure that precisely, so here are some more numbers you can use to figure out how precise our measurements are.”

              How is that a more elegant solution than simply having the precision recorded in a single rational measurement?