I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.

This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s about a 20-minute read, so thank you very much in advance if you find the time to read it.

Feedback is very much welcome. Thank you.

  • TheMurphy

    I believe it’s because you always use bytes in pairs in a computer. If you keep pairing the pairs, you eventually reach 1024, which is the closest you can get to 1000.

    The logic is like this:

    2+2 = 4
    4+4 = 8
    8+8 = 16
    16+16 = 32
    32+32 = 64
    64+64 = 128
    128+128 = 256
    256+256 = 512
    512+512 = 1024
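
    In Python, that doubling is just a few lines (a sketch of the arithmetic above, nothing more):

    ```python
    # keep pairing the pairs until we pass 1000
    n = 2
    while n < 1000:
        n += n  # 2, 4, 8, 16, ..., 512, 1024
    print(n)  # 1024 - the first power of two past 1000
    ```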

    • PupBiru

      not exactly because of pairs unless you’re talking about 1 and 0 being a pair… it’s because the amount of numbers you can count in binary doubles with each additional bit you add:

      with 1 bit, you can either have 0 or 1… which is, unsurprisingly perhaps, 0 and 1 respectively - 2 numbers

      with 2 bits you can have 00, 01, 10, 11… which is 0, 1, 2, 3 - 4 numbers

      with 3 bits you can have 000, 001, 010, 011, 100, 101, 110, 111… which is 0 to 7 - 8 numbers

      so you see the pattern: add a bit, double the amount of numbers you can count… this is the “2 to the power of” that you might see: with 8 bits (a byte) you can count from 0 to 255 - that’s 2 (because binary has 2 possible states per digit) to the power of 8 (because 8 digits); 2^8
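
      a quick python sketch of that pattern (nothing here beyond 2 ** bits):

      ```python
      # each extra bit doubles how many numbers you can represent
      for bits in range(1, 9):
          print(bits, 2 ** bits)  # 1 -> 2, 2 -> 4, ..., 8 -> 256

      print(2 ** 8 - 1)  # 255 - the biggest value 8 bits (a byte) can hold
      ```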

      the same is true of decimal, but instead of 2 to the power, it’s 10 to the power: with each additional digit, you can count 10 x as many numbers - 0-9 for 1 digit, 00-99 for 2 digits, 000-999 for 3 digits - 10^1, 10^2, 10^3 respectively
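
      and the decimal version of the same check, for comparison:

      ```python
      # each extra decimal digit multiplies the range by 10
      for digits in range(1, 4):
          print(digits, 10 ** digits)  # 1 -> 10, 2 -> 100, 3 -> 1000
      ```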

      and that’s the reason we use hexadecimal sometimes too! we group bits into groups of 8 and call it a byte… hexadecimal is base 16, so it nicely lets us represent a byte with just 2 characters - 16^2 = 256 = 2^8
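
      as a sketch, the hexadecimal side of it:

      ```python
      # two hex characters cover exactly one byte: 16 ** 2 == 2 ** 8 == 256
      for n in (0, 10, 255):
          print(f"{n:02x}")  # 00, 0a, ff
      print(16 ** 2 == 2 ** 8)  # True
      ```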