• @orangeboats
    1 year ago

    so there must be some reason why they went with this design.

    Some applications have a hard zero-alloc requirement.

    • @[email protected]
      1 year ago

      But that’s not the case here, seeing as they have

      if self.len() >= MAX_STACK_ALLOCATION {
          return with_nix_path_allocating(self, f);
      }
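
      For context, the closure-passing design being discussed has roughly this shape. This is a paraphrase for illustration only, not the crate’s exact definitions (the real trait has more methods and uses nix’s own Result/Errno types):

      use std::ffi::CStr;

      // Stand-in error type, only so the snippet compiles on its own.
      type Result<T> = std::result::Result<T, std::io::Error>;

      trait NixPath {
          // The caller only ever sees the &CStr inside the closure, which is
          // what produces the nesting mentioned further down.
          fn with_nix_path<T, F>(&self, f: F) -> Result<T>
          where
              F: FnOnce(&CStr) -> T;
      }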
      

      in the code of with_nix_path. And I think they still could’ve made it return the value instead of calling the passed in function, by using something like

      enum NixPathValue {
          Short(MaybeUninit<[u8; 1024]>, usize),
          Long(CString),
      }

      impl NixPathValue {
          fn as_c_str(&self) -> &CStr {
              // ...
          }
      }

      impl NixPath for [u8] {
          fn to_nix_path(&self) -> Result<NixPathValue> {
              // return Short(buf, self.len()) for short paths, and perform all checks here,
              // so that NixPathValue::as_c_str can then use CStr::from_bytes_with_nul_unchecked
          }
      }

      But I don’t know what performance implications that would have, and whether the difference would matter at all. Would there be an unnecessary copy? Would the compiler optimize it out? etc.
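
      To put the copy question in numbers, a quick standalone check (the 1024 mirrors the sketch above; the crate’s real constant and the exact enum layout may differ):

      use std::ffi::CString;
      use std::mem::{size_of, MaybeUninit};

      #[allow(dead_code)] // the variants are only inspected for their size here
      enum NixPathValue {
          Short(MaybeUninit<[u8; 1024]>, usize),
          Long(CString),
      }

      fn main() {
          // The Short variant dominates, so returning a NixPathValue by value
          // means moving a little over 1 KiB unless the compiler constructs it
          // in place or otherwise elides the move.
          println!("{}", size_of::<NixPathValue>());
      }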

      Also, from a maintainability standpoint, the scope of code across which the library authors would have to manually verify that the unsafe code is used correctly would be slightly larger, since the safety of as_c_str would depend on invariants established earlier in to_nix_path.

      As a user of a library, I would still prefer all that over the nesting.
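
      A rough illustration of that ergonomic difference, using a hypothetical PathArg trait rather than the real nix API (with_c_str stands in for the existing closure style, to_c_string for the value-returning style; all names here are made up):

      use std::ffi::{CStr, CString, NulError};

      trait PathArg {
          fn with_c_str<T, F: FnOnce(&CStr) -> T>(&self, f: F) -> Result<T, NulError>;
          fn to_c_string(&self) -> Result<CString, NulError>;
      }

      impl PathArg for [u8] {
          fn with_c_str<T, F: FnOnce(&CStr) -> T>(&self, f: F) -> Result<T, NulError> {
              CString::new(self).map(|s| f(s.as_c_str()))
          }
          fn to_c_string(&self) -> Result<CString, NulError> {
              CString::new(self)
          }
      }

      // Closure style: one nesting level per path argument, and the result gets
      // wrapped once per level (hence the double `?`).
      fn link_nested(a: &[u8], b: &[u8]) -> Result<(), NulError> {
          a.with_c_str(|a_c| {
              b.with_c_str(|b_c| {
                  println!("link {:?} -> {:?}", a_c, b_c); // stand-in for the actual syscall
              })
          })??;
          Ok(())
      }

      // Value style: flat, with early returns via `?`.
      fn link_flat(a: &[u8], b: &[u8]) -> Result<(), NulError> {
          let a_c = a.to_c_string()?;
          let b_c = b.to_c_string()?;
          println!("link {:?} -> {:?}", a_c, b_c); // stand-in for the actual syscall
          Ok(())
      }

      fn main() -> Result<(), NulError> {
          link_nested(b"/tmp/a", b"/tmp/b")?;
          link_flat(b"/tmp/a", b"/tmp/b")
      }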