The next release of the Linux kernel, 6.6, will include KSMBD, an in-kernel server for the SMB networking protocol developed by Samsung’s Namjae Jeon.

It has undergone considerable security testing and, as a result, is no longer marked as experimental.

It’s compatible with existing Samba configuration files.

But why is KSMBD important? First off, it promises considerable performance gains and better support for modern features such as Remote Direct Memory Access (RDMA)… KSMBD also adds enhanced security, considerably better performance for both single- and multi-threaded read/write, better stability, and higher compatibility. In the end, hopefully, KSMBD will also mean easier share setups in Linux without having to jump through the same hoops one must with the traditional Samba setup.
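As a rough illustration of that Samba compatibility, a share definition for ksmbd (managed by the userspace ksmbd-tools, typically in /etc/ksmbd/ksmbd.conf) reuses smb.conf-style syntax; the share name and path below are hypothetical, and option names should be checked against ksmbd.conf(5) for your version:

```ini
[global]
        ; global options reuse the familiar smb.conf syntax
        server string = ksmbd server

[myshare]
        ; hypothetical share; adjust path and permissions to taste
        path = /srv/share
        read only = no
```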

  • tal · 1 year ago

    The above text says that the aim is to do RDMA, to let the NIC access memory directly, but I’d think that existing Linux zero-copy interfaces would be sufficient for that.

    https://en.wikipedia.org/wiki/Zero-copy

    The Linux kernel supports zero-copy through various system calls, such as:

    • sendfile, sendfile64;[9]
    • splice;[10]
    • tee;[11]
    • vmsplice;[12]
    • process_vm_readv;[13]
    • process_vm_writev;[14]
    • copy_file_range;[15]
    • raw sockets with packet mmap[16] or AF_XDP.

    So I’d think that the target workload has to be one where you can’t just fetch a big chunk of pre-existing data: one where you have to interleave server-generated data in response to many small requests, and where even the overhead of switching to userspace to generate that response is too high.

    Which seems like a heck of a niche case.

    But it obviously got approval from the kernel team.

    googles

    https://www.kernel.org/doc/html/next/filesystems/smb/ksmbd.html

    The subset of performance-related operations belongs in kernelspace, and the other subset, operations not really related to performance, in userspace. So DCE/RPC management, which has historically resulted in a number of buffer-overflow issues and dangerous security bugs, and user account management are implemented in user space as ksmbd.mountd. File operations that are related to performance (open/read/write/close etc.) are in kernel space (ksmbd). This also allows for easier integration with the VFS interface for all file operations.

    I guess you could accelerate open and close too.

    In all seriousness, I feel like if you’re in such a niche situation that you can’t afford the overhead of going to userspace for that, (a) there’s likely room to optimize your application to request different things, and (b) CIFS might not be the best option for sharing data over the network either.

    • @[email protected] · 1 year ago

      The above text says that the aim is to do RDMA, to let the NIC access memory directly

      Oh, so the attack surface is much bigger than I realized. The NIC is probably the last thing I’d want writing directly to memory and bypassing the kernel.

      I guess none of this will be enabled in desktop distros or even the majority of server distros…right?

      • @[email protected] · 1 year ago

        I was under the impression this is already the norm for network equipment, because the volume of data is no longer processable by the kernel. In fairness, though, that equipment most likely doesn’t really consume the data but rather just forwards it.

    • @FooBarrington · 1 year ago

      Shouldn’t io_uring solve the issues with speed between usermode and kernelmode?