Day 9: Disk Fragmenter

Megathread guidelines

  • Keep top-level comments as only solutions; if you want to say something other than a solution, put it in a new post. (Replies to comments can be whatever.)
  • You can send code in code blocks by using three backticks, the code, and then three backticks, or use something such as https://topaz.github.io/paste/ if you prefer sending it through a URL.

FAQ

  • @Acters

    yeah, I was a bit exhausted from thinking in a high-level, abstract way. I do think that if I did the checksum at the same time I could shave off a few more milliseconds, though it is already close to the limits of speed, especially for Python with its limited data types (no trees, lol). Decently fast enough for me :)

    edit: I also just tested it, and splitting into two lists gave no decent speed-up and actually made it slower. Iterating backwards really is fast with that list slice (a small sketch of what I mean is below). I can't think of another way to speed it up past what it can do right now.
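
    To illustrate what I mean by the backwards scan over a slice (a simplified example, not the actual solution code): blocks[::-1] is built once, and the loop then just walks it, here to find the rightmost file block that still needs to move.

    blocks = [0, 0, '.', '.', 1, 1, 1, '.']        # hypothetical per-block layout
    for offset, x in enumerate(blocks[::-1]):
        if x != '.':
            rightmost_file = len(blocks) - 1 - offset  # index 6 in this example
            break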

    • @VegOwOtenks

      Thank you for trying, oh well. Maybe we are simply at the limits.

      • @Acters

        no way, someone is able to do it in O(n) time with OCaml. absolutely nutty. lol

        • @VegOwOtenks

          Thank you for the link, this is crazy!

      • @Acters

        so if I look at each part of my code: the first 4 lines take ~20 ms

        # strip newlines, then expand the dense disk map: even digits are files
        # (stored as their file ID), odd digits are runs of free space ('.')
        input_data = input_data.replace('\r', '').replace('\n', '')
        part2_data = [[i//2 for _ in range(int(x))] if i%2==0 else ['.' for _ in range(int(x))] for i,x in enumerate(input_data)]
        part2_data = [x for x in part2_data if x != []]   # drop zero-length runs
        part1_data = [y for x in part2_data for y in x]   # flatten into one block list for part 1
        

        The part 1 for loop takes ~10 ms (a rough sketch of that kind of pass is below).
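
        A sketch of what such a compaction pass can look like on the flat block list (a simplified two-pointer version, not the exact loop from the solution):

        def compact_blocks(blocks):
            blocks = blocks[:]              # work on a copy
            left, right = 0, len(blocks) - 1
            while left < right:
                if blocks[left] != '.':     # already a file block, keep moving right
                    left += 1
                elif blocks[right] == '.':  # trailing free space, keep moving left
                    right -= 1
                else:                       # move the last file block into the gap
                    blocks[left], blocks[right] = blocks[right], '.'
            return blocks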

        The for loop that sets up next_empty_slot_by_length takes another ~10 ms.
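
        Roughly, the idea is one queue of free-run start positions per run length (1-9), kept in left-to-right order so the leftmost candidate comes off the front. The layout below is just a sketch of that idea built from the part2_data runs above, not the exact code:

        next_empty_slot_by_length = {k: [] for k in range(1, 10)}
        pos = 0
        for run in part2_data:
            if run[0] == '.':                             # this run is free space
                next_empty_slot_by_length[len(run)].append(pos)
            pos += len(run)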

        The part 2 for loop takes ~10 ms, too!
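
        For comparison, the whole-file move in part 2 can be sketched with a different representation (files as id -> (start, length), gaps as [start, length] pairs); this is an illustration of the idea, not the exact loop used here:

        def compact_files(files, gaps):
            """files: {file_id: (start, length)}, gaps: [[start, length], ...] left to right."""
            for fid in sorted(files, reverse=True):      # highest file ID first
                start, length = files[fid]
                for gap in gaps:
                    if gap[0] >= start:                  # only ever move a file to the left
                        break
                    if gap[1] >= length:
                        files[fid] = (gap[0], length)    # file now starts at the gap
                        gap[0] += length                 # shrink the gap from the front
                        gap[1] -= length
                        break
            return files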

        Adding up the part 2 checksums adds another ~10 ms.
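
        With a files dict like the one in the sketch above, the checksum can even be done without expanding any blocks, since each file contributes fid * (start + start+1 + ... + start+length-1). Again just an illustration, not what this solution does:

        def checksum(files):
            return sum(fid * (length * start + length * (length - 1) // 2)
                       for fid, (start, length) in files.items())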

        So, in total, it does everything in ~60 ms, but Python startup overhead seems to add 20-40 ms depending on whether you are on Linux (20 ms) or Windows (40 ms); both are on the host, not virtualized. Linux usually has a faster startup time.

        I am not sure where I would see a speed-up. It seems that the startup overhead makes this just slower than the other top-performing solutions, which are also hitting a limit of 40-60 ms.
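
        One way to separate the two numbers (just an illustration): time the script body with perf_counter and compare it against the wall time the shell reports; the difference is roughly interpreter startup plus imports.

        import time

        t0 = time.perf_counter()
        # ... parse input, run part 1 and part 2 here ...
        body_ms = (time.perf_counter() - t0) * 1000
        print(f"body: {body_ms:.1f} ms")   # compare with `time python3 solve.py`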

    • @VegOwOtenks

      Trees are a poor man's Sets and vice versa .-.

      • @Acters

        ah well, I tried switching to Python's set(), but it was slow because it is unordered: I would need min() to find the lowest index, which was slow af. Indexing might be fast, but pop(0) on a list is also just as fast (switching to deque gave no speed-up either). The list operations I am using are mostly O(1) time.
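
        For comparison, heapq is the usual ordered alternative: it keeps the smallest index at the front, so getting the leftmost free slot is O(log n) per pop instead of an O(n) min() over a set. A tiny sketch:

        import heapq

        free_slots = [5, 2, 9, 14]            # hypothetical free-slot indices
        heapq.heapify(free_slots)             # one-time O(n) setup
        leftmost = heapq.heappop(free_slots)  # -> 2, in O(log n)
        heapq.heappush(free_slots, 7)         # inserting stays O(log n)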

        If I comment out this part, which does the adding:

        # adds checksums
        part2_data = [y for x in part2_data for y in x]
        part2 = 0
        for i,x in enumerate(part2_data):
            if x != '.':
                part2 += i*x
        

        so as to isolate the checksum's contribution, the runtime is still only 80-100 ms. So the checksum part adds no noticeable slowdown; even if I did the checksum at the same time as the sorting, it would not lower the execution time.