• @dragontamer
    11 months ago

    So this looks cool, though I’m not sure the filesystem is the right abstraction for microcontrollers. Yes, historically files were only a few kB, and littlefs provides real files that match larger computers’ usage patterns. But is that the simplest, easiest approach for our projects?

    The main benefit here is dynamic wear leveling, which is necessary for any user of flash (10,000 erase/write cycles is just too few…). But is this really the best way to deliver dynamic wear leveling to different projects? Yes and no. Yes, in that Linux/Windows does this for us, so a Linux/Windows-like interface for interacting with dynamically wear-leveled flash blocks in the form of “files” is certainly beneficial.

    No in that… there’s probably another abstraction we could use?

    Nothing against this project, mind you. This is really cool: shrinking traditional filesystems down to the point where they fit microcontroller usage patterns. But I do think this is one area where experimentation with different data structures, APIs, and usage patterns would be helpful. I don’t have any solutions myself (though I’m dreaming up some incomplete ones in my sleep sometimes…)

      • @dragontamer
        11 months ago

        The raw interface of any Flash / EEPROM is fail-safe with respect to writes. I.e., when the write completes, you know it has happened (just spin-loop and wait for each write to finish; no biggie).

        Rebuilding fail-safe writes in a filesystem is difficult because of all the metadata that needs to be updated to keep track of files. But if all you have is a simple queue of data that needs to be wear-leveled and write-safe, writing directly to EEPROM (carefully) is sufficient and simpler. It just isn’t very flexible.

        That’s where I recognize that having a concept of distinct files (as in littlefs) is good and useful. Flexibility is always nice, since the same code gets recycled across many projects.

        • @RubberElectrons
          11 months ago

          None of that holds up across the all-too-common power interruption, which is exactly why I like that littlefs supports CoW. I understand it seems like a strange feature, but I’m trying to keep my devices as rugged as possible.

          • @dragontamer
            11 months ago

            For simple data structures, this is very easy. Let’s say I have an 8-element queue, so that it’s easy for us humans to understand: [x, x, x, x, x, x, x, x]. Exactly one element at any time holds 0xFF (“erased”), and it marks the head/tail of the queue. We only support “pushing” to this queue, and when the queue is full, a “push” erases the oldest element. For example:

            [3, 4, 5, 6, 0xFF, 0, 1, 2]
            

            This queue currently contains ‘0, 1, 2, 3, 4, 5, 6’.

            For simplicity, let’s say all elements are byte-erasable, byte-sized, and byte-addressable (i.e., simple EEPROM); generalizing to Flash’s larger erase blocks isn’t very difficult. Note that, due to the circular nature of the queue, erases are also balanced evenly across all elements.

            So let’s say we’re adding an element to the queue. This is a simple two-step process:

            1. Write 0xFF to the element after the current 0xFF location.
            2. Write the new data to the current 0xFF location.

            Let’s say we’re adding the number 7 to the queue. There are only two points where power can be cut, and only two possible resulting states:

            [3, 4, 5, 6, 0xFF, 0xFF, 1, 2]
            

            Two adjacent 0xFF cells in the array, telling us power was cut between steps 1 and 2.

            [3, 4, 5, 6, 7, 0xFF, 1, 2]
            

            Or both steps completed, and we have a proper array again.


            Recovering from the failure case is easy. If there are two adjacent 0xFF locations, we know a push was interrupted, so we just favor the first of the pair for the next write.

            Inserting 8 in this case looks like:

            [3, 4, 5, 6, 8, 0xFF, 1, 2]
            

            Note: fail-safe writes don’t mean “7 is always written”. They mean “after the write returns, we know 7 has been written, and every in-between state can be recovered from”. After all, a power interruption can happen at any point in littlefs as well, and that may leave its metadata in an incomplete or otherwise interrupted state. littlefs just (allegedly) guarantees recovery from any such power interruption.

            Now, not everyone needs a queue. Maybe someone needs a tree, someone else a map or a hashtable, etc. The good thing about littlefs is that it provides a generic file that works in all those cases. What I described above is far simpler, but it only works if you need a queue with guarantees for up to 6 elements (the 7th element could disappear in an interrupted write, but I can provide a strong guarantee that my 8-element queue safely stores 6 elements in all cases). In the general case: a 1024-byte queue of this design safely stores 1022 bytes.
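            The push-and-recover scheme above can be sketched in C, simulating the EEPROM as a plain byte array. All names here are hypothetical (this is my sketch, not littlefs code); a real port would replace mem[] with actual EEPROM reads/writes.

```c
/* Power-fail-safe ring queue, simulated in RAM. Data values must never
 * equal 0xFF, since 0xFF is reserved as the erased head/tail marker. */
#include <stdint.h>
#include <stddef.h>

#define QLEN   8
#define ERASED 0xFFu /* erased state of an EEPROM cell */

/* Simulated byte-addressable EEPROM, in the state from the example above. */
static uint8_t mem[QLEN] = {3, 4, 5, 6, ERASED, 0, 1, 2};

/* Find the gap (head/tail marker). If a push was interrupted between its
 * two steps, there are two adjacent ERASED cells; requiring that the
 * predecessor holds data makes us favor the first cell of that pair,
 * which is exactly the recovery rule described above. */
static size_t find_gap(void)
{
    for (size_t i = 0; i < QLEN; i++)
        if (mem[i] == ERASED && mem[(i + QLEN - 1) % QLEN] != ERASED)
            return i;
    return 0; /* unreachable while the one-gap invariant holds */
}

/* Push: step 1 erases the element after the gap (dropping the oldest
 * entry when the queue is full), step 2 writes into the old gap. Power
 * loss between the steps is exactly the recoverable 0xFF,0xFF case. */
static void queue_push(uint8_t v)
{
    size_t gap = find_gap();
    mem[(gap + 1) % QLEN] = ERASED; /* step 1: erase the after-element */
    mem[gap] = v;                   /* step 2: write the new data */
}
```

            Note that find_gap() doubles as boot-time recovery: pushing onto an interrupted image simply reuses the first erased cell of the pair, matching the [3, 4, 5, 6, 8, 0xFF, 1, 2] outcome above.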

            • @RubberElectrons
              11 months ago

              I’m more interested in being productive no matter the data type or structure. I understand your point, more or less; I just prefer to help everyone take advantage of a battle-tested, general-purpose library.