Title.

The situation is basically this:

  • NFS works, it’s very fast and keeps the xattrs, but without Kerberos it isn’t secure. With Kerberos it works, but the ticket expires and forces me to re-enter my credentials frequently in order to use it. If there were a way to use NFS with Kerberos and save the credentials, NFS would be the perfect solution.

  • Samba works fine too and also keeps the xattrs, but I had some trouble with filenames (mainly with some special characters, emoji, etc.). Besides, as both my server and my clients run Linux, I’d prefer to avoid it if I have the choice.

  • sshfs would be the natural choice: not as fast as NFS, but pretty secure, and I already use it for most of my network shares. I just can’t find a way to make it preserve the files’ xattrs.

Do you guys have any suggestions or maybe any other options that I might use?

    • CtrlAltOoopsOP
      link
      1
      2 months ago

      Thanks for the suggestion. In fact I tried rsync and it works. But is it possible to integrate it into my current workflow? Maybe copying/moving files using a file manager?

      I’m asking because with the 3 options I mentioned I can, for example, create mount points in fstab, and from there on everything is transparent to the user. Would that be possible using rsync?

      • SolidGrue
        link
        English
        13
        2 months ago

        Secure file transfers frequently trade off some performance for their crypto. You can’t have it both ways. (Well, you can, but you’d need hardware crypto offload or end-to-end MACsec, both of which are more exotic use cases.)

        rsync is basically a copy command with a lot of knobs and stream optimization. It also happens to be able to invoke SSH to pipe encrypted data over the network, at the cost of SSH encrypting the stream.

        Your other two options are faster because of write-behind caching in the protocol and because they transfer in the clear: you don’t bog down the stream with crypto overhead, but you’re also exposing your payload.

        File managers are probably the slowest of your options because they’re a feature of the DE, and there are more layers of calls between your client and the data stream. Plus, they’re probably leveraging one of NFS, Samba or SSHFS under the hood anyway.

        I believe “rsync -e ssh” is going to be your best overall option for secure, fast, and xattr-preserving transfers. scp might be a close second. SSHFS is a userland application and might suffer some penalties for it.

        • CtrlAltOoopsOP
          link
          4
          2 months ago

          I’ll take a closer look into rsync possibilities and see if it applies to my situation. I appreciate your input.

      • @[email protected]
        link
        fedilink
          1
          2 months ago

        Maybe copying/moving files using a file manager?

        <plugging package="file_manager">FileZilla</plugging>

        -or-

        <plugging package="file_manager">Gnome Commander</plugging>

        …but call me quaint. I still like…

        <plugging package="file_manager">mc</plugging>

        … 'cause it always just works. mc can ostensibly preserve attributes, timestamps, and (with appropriate privilege on the receiving end) ownership of transferred files, supposedly via an sftp server.

      • @mumblerfish
        link
          1
          2 months ago

        How much delay could you live with between syncs? If it doesn’t need to be immediate, just an end-of-the-day thing, you could cron-job the rsync with the --update flag every so often.
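A cron job along those lines might look roughly like this (schedule, paths, and hostnames are all made up for illustration):

```
# Hypothetical crontab entry (added via `crontab -e`): every day at 18:00,
# copy only files that are newer on the server (-u/--update), keeping
# permissions, times, and xattrs (-aX), tunnelled over ssh.
0 18 * * * rsync -auX -e ssh user@server:/srv/share/ /home/user/share/
```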

  • @[email protected]
    link
    fedilink
    7
    2 months ago

    You didn’t mention rsync, which I think is usually considered standard. I’d look into that.

  • @[email protected]
    link
    fedilink
    4
    edit-2
    2 months ago

    The whole samba filenames thing is configurable. I only use linux systems and I ran into that same issue.

    By default samba seems to mangle file names. Not to mention that Windows systems don’t support naming your files whatever you want the way Linux does, so those characters need to be mapped to something else. To solve this I include a few entries in my samba config file to fix the issue.

    mangled names = no
    vfs objects = catia
    catia:mappings = 0x22:0xa8,0x2a:0xa4,0x2f:0xf8,0x3a:0xf7,0x3c:0xab,0x3e:0xbb,0x3f:0xbf,0x5c:0xff,0x7c:0xa6
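For reference, those hex pairs remap the characters Windows forbids in filenames onto Latin-1 lookalikes; decoded from the code points, the mapping line above amounts to roughly this:

```
# 0x22 "  ->  0xa8 ¨        0x3c <  ->  0xab «
# 0x2a *  ->  0xa4 ¤        0x3e >  ->  0xbb »
# 0x2f /  ->  0xf8 ø        0x3f ?  ->  0xbf ¿
# 0x3a :  ->  0xf7 ÷        0x5c \  ->  0xff ÿ
# 0x7c |  ->  0xa6 ¦
```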
    

    That’s just if you choose to go with samba. I only use it ’cause it was easier to set up than NFS when I tried.

    • CtrlAltOoopsOP
      link
      2
      2 months ago

      Hey, thanks for taking the time to reply.

      Yep, when I tried using Samba I had this catia:mappings configuration in my smb.conf. Thing is, it slightly changes things (two that I specifically remember are ¿ and ¡), sometimes doesn’t recognize filenames (I don’t remember exactly which chars), etc.

      I tried to set up Samba, NFS and sshfs. It took a couple of days to understand each one a little better and, by trial and error, get an idea of their perks. I do appreciate your suggestion but I don’t think Samba is what I’m looking for.

  • @[email protected]
    link
    fedilink
    English
    3
    2 months ago

    I assume you don’t intend to copy the files but to use them from a remote host? As security is a concern, I suppose we’re talking about traffic over the public network, where (if I’m not mistaken) Kerberos with NFS doesn’t provide encryption, only authentication. You can obviously tunnel NFS through SSH or a VPN, and I’m pretty sure you can create a Kerberos ticket which stores credentials locally for longer periods of time and/or read them from a file.

    SSH/VPN obviously causes some overhead, but they also provide encryption over the public network. If this runs in a LAN I wouldn’t worry too much about encrypting the traffic, and in my own network I wouldn’t worry too much about authentication either. Maybe separate the NFS server into its own VLAN or firewall it heavily.
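On the long-lived-credentials point: a keytab lets kinit obtain tickets without typing a password, so a cron job can keep them fresh. A sketch, where the keytab path, principal, and realm are placeholders and the keytab is assumed to have been exported beforehand (e.g. with ktutil or kadmin):

```
# Obtain a ticket non-interactively from the keytab:
#   kinit -kt /etc/krb5/user.keytab user@EXAMPLE.COM
#
# Hypothetical crontab entry to re-acquire the ticket hourly, before it expires:
0 * * * * kinit -kt /etc/krb5/user.keytab user@EXAMPLE.COM
```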

  • @[email protected]
    link
    fedilink
    2
    2 months ago

    forces me to reenter the credentials frequently

    Can you explain what your need is for copying files this frequently? Is this for backups? Do you always want the two sides to stay in sync? If so, something like a distributed filesystem such as gluster/ceph/etc. might work better for you.

    • CtrlAltOoopsOP
      link
      1
      2 months ago

      Sure. I have a little home server running Linux and 2 or 3 machines that access files shared by this server. I use Plasma on my desktop machines and I rely a lot on tags (just to clarify, Plasma uses xattrs - more specifically user.xdg.tags) to tag files. On the server I already have a couple of scripts that automatically insert some predefined tags on files.

      Thing is, when I try to copy and/or move files between server and desktop, depending on the protocol I used to mount the share, I lose this information.

      People suggested rsync, and it would be an excellent option if what I wanted was to keep both sides synchronized or something like that. In fact what I need is just a solution that allows me to mount a server share and transfer files from it while preserving their extended attributes, preferably using a file manager (I basically use Dolphin or ranger).

      No need to keep them synced.

    • CtrlAltOoopsOP
      link
      3
      2 months ago

      I appreciate your help, but notice that the article just covers some basics of xattr usage (I already know how to use them) and has no reference to transferring files, which is what I need.

      • @[email protected]
        link
        fedilink
          1
          2 months ago

        I suspect you use them more extensively than I do. Mine are usually limited to the extended ACLs: I use getfacl to generate a dump of all the ACLs of the files and subdirectories I am transferring or 7-zipping, and include that file in the transfer or 7z bundle. Then I use setfacl to apply all those permissions on the receiving end after everything has been copied or extracted.