I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.

This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s about a 20-minute read, so thank you very much in advance if you find the time to read it.

Feedback is very much welcome. Thank you.

  • Humanius · 11 months ago

    Short answer: It’s because of binary.
    Computers are very good at calculating with powers of two, and because of that a lot of computer concepts use powers of two to make calculations easier.

    1024 = 2¹⁰

    Edit: Oops… It’s 2¹⁰, not 2⁷
    Sorry y’all… 😅
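
    If you want to see the two conventions side by side, here is a rough Python sketch (the constant names are just for illustration):

        KILOBYTE = 10 ** 3   # 1000 bytes, the decimal "kilo" prefix (kB)
        KIBIBYTE = 2 ** 10   # 1024 bytes, the binary unit (KiB)

        print(KIBIBYTE - KILOBYTE)   # 24 bytes of difference per "kilobyte"
        print(2 ** 40 / 10 ** 12)    # ~1.0995, so the gap grows to roughly 10% at terabyte scale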

    • 🐑🇸 🇭 🇪 🇪 🇵 🇱 🇪🐑 · 11 months ago

      So the problem is that our decimal number system just sucks. Should have gone with hexadecimal 😎

      /Joking, if it isn’t obvious. Thank you for the explanation.

      • enkers · 11 months ago

        Or seximal!

        Not that 1024 would be any better, as it’s 4424 in base 6.
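
        (Checking the conversion: 4·6³ + 4·6² + 2·6 + 4 = 864 + 144 + 12 + 4 = 1024.)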

    • @Skasi · 11 months ago

      1024 = 2⁷

      I’m confused by this quote. 1024 is 2¹⁰, not 2⁷

    • insomniac_lemon · 11 months ago

      Just to add, I would argue that by the definition of the prefix it is 1000.

      However, there are other terms to use, in this case kibibyte (kilo binary byte, KiB instead of just KB). That way you are being clear about what you actually mean, which makes a particularly big difference with modern storage and file sizes (rough sketch below).

      EDIT: Of course, the link in the post goes over this. I admit my brain initially glossed over that and thought it was a question thread.
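
      For a rough sense of the scale, here is a quick Python sketch (hypothetical numbers, just to show why the gap matters on a big drive):

          drive_bytes = 10 ** 12          # a drive marketed as "1 TB" (decimal terabyte)
          in_tib = drive_bytes / 2 ** 40  # the same number of bytes expressed in binary TiB
          print(f"{in_tib:.3f} TiB")      # ~0.909 TiB, noticeably less than the "1 TB" on the box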