So I'm trying to track down what's slowing down downloads from my Debian server to my devices (streaming box, laptop, other servers, etc.).

First, my network infrastructure: OPNsense firewall (Intel C3558R) <-10Gb SFP+ DAC-> managed switch <-2.5Gb RJ45-> clients, a 2.5Gb AX access point, and the Debian server (Intel N100).

Under a 5-minute stress test between my laptop (2.5Gb adapter plugged into the switch) and the Debian server (2.5Gb Intel I226-V NIC), I get the full bandwidth when uploading, but downloads top out around 300-400 Mbps. The download speed doesn't fare any better when connecting through the AX access point, and upload drops to around 500 Mbps there. File transfers between the server and my laptop also run at roughly 300 Mbps. And yes, I manually disabled the Wi-Fi card when testing over Ethernet. Speed tests to external servers show approximately 800/20 Mbps (on an 800 Mbps plan).
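
As a quick sanity check, here's what those rates translate to in the MB/s a file copy would display (just arithmetic, not a measurement; decimal units, ignoring protocol overhead):

```shell
# convert observed line rates from Mbit/s to the MB/s a file copy shows
# (decimal units, ignoring protocol overhead)
for mbps in 300 400 2500; do
    awk -v m="$mbps" 'BEGIN { printf "%4d Mbit/s ~ %5.1f MB/s\n", m, m / 8 }'
done
```

So the slow direction is moving roughly 37-50 MB/s, well below what the link should sustain.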

Fearing that the traffic may be running through OPNsense and that my firewall was struggling to handle the traffic, I disconnected the DAC cable and reran the test just through the switch. No change in results.

Identified speeds per device:

Server: 2500 Mb/s
Laptop: 2500Base-T
Switch: 2,500Mbps
Firewall: 10Gbase-Twinax

Operating Systems per device:

Server: Debian Bookworm
Laptop: macOS Sonoma (works well for my use case)
Switch: some sort of embedded software
Firewall: OPNsense 24.1.4-amd64

Network Interface per device:

Server: Intel I226-V
Laptop: UGreen Type C to 2.5gb Adapter
Switch: RTL8224-CG
Firewall: Intel X553

The speed test is hosted through Docker on my server.

  • Suzune
    2 points · 8 months ago

    Did you use iperf? It makes sure the HDD/SSD is not the bottleneck.

    You can also check the statistics and watch for uncommon errors. Or trace the connection with tcpdump.

    • dontwakethetrees (she/her)OP
      8 points · 8 months ago

      Using iperf3 I get the full 2.5Gb of bandwidth. The SSDs shouldn't be a bottleneck: the server has only NVMe storage and the laptop's SSD is part of the SoC, both far exceeding the network speed. Traceroute showed just a single hop to the server.

      • @[email protected]
        12 points · 8 months ago

        NVMe drives aren’t guaranteed to be fast. Based on those stats I’m guessing you have QLC and no DRAM.

        • dontwakethetrees (she/her)OP
          4 points · 8 months ago

          I think you might be right. I couldn't find an identifiable label on the drive, and the model reported in Debian shows up in searches as having only 2465 MB/s read speeds. After real-world losses, and with the drive also running the OS plus multiple services, I imagine that could be the source of my problems. Thanks!
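
          For what it's worth, the rated figure converts like this (a sketch; note that QLC drives without DRAM can drop far below their rated sequential speed once the cache fills or under random I/O):

          ```shell
          # the drive's rated sequential read expressed as a line rate;
          # sustained throughput on a DRAM-less QLC drive can be far lower
          awk 'BEGIN { printf "2465 MB/s ~ %d Mbit/s\n", 2465 * 8 }'
          ```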

    • @stevestevesteve
      2 points · 8 months ago

      OP said they tried without the firewall connected and had the same results.

      • @[email protected]
        1 point · 8 months ago

        Ah, right, I read too fast it seems! That still leaves the possibility of a software firewall, though any out-of-the-box one wouldn't be doing packet inspection.

  • @[email protected]
    9 points · 8 months ago (edited)

    Try iperf against your OPNsense firewall, from both your laptop and your server.

  • @[email protected]
    7 points · 8 months ago (edited)

    Have you tried changing out ethernet cables and trying different ports?

    Also, try hosting the speed test from your laptop and running the speed test from the server to see if the results are reversed.

    • dontwakethetrees (she/her)OP
      5 points · 8 months ago

      Just attempted that; the odd thing is that both directions evened out at ~800 Mbps on the reversed test, so higher than the earlier download result and lower than the upload. iperf3 still shows the full 2.5Gb of bandwidth, so I retried file sharing. Samba refused to work on Debian for whatever reason, so I ran SCP transfers instead; over a few runs with a 6.3 GB video file I averaged around 500 Mbps (highs around 800 Mbps, lows around 270 Mbps).
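
      For scale, the wall-clock time for that 6.3 GB file at the rates I saw (decimal units, ignoring protocol overhead):

      ```shell
      # time to move a 6.3 GB file at the observed rates (decimal units)
      for mbps in 270 500 800 2500; do
          awk -v m="$mbps" 'BEGIN { printf "%4d Mbit/s -> %3d s\n", m, 6.3 * 8000 / m }'
      done
      ```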

      • @filister
        11 points · 8 months ago (edited)

        SCP encrypts your traffic before sending it, so it might be a CPU/RAM bottleneck. You can try a different cipher or toggle compression, which can be set in your .ssh/config file.
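
        A sketch of what that could look like (the Host alias is made up; AES-GCM is usually fastest on CPUs with AES-NI, which both machines here have, and compression generally hurts on a fast LAN):

        ```
        # ~/.ssh/config -- illustrative sketch; the Host alias is made up
        Host server.lan
            # AES-GCM is typically fastest on CPUs with AES-NI
            Ciphers aes128-gcm@openssh.com
            # compression burns CPU and rarely helps on a fast LAN
            Compression no
        ```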

        • dontwakethetrees (she/her)OP
          1 point · 8 months ago

          I’ll check my server’s CPU usage while transferring. I only used SCP for testing yesterday because the Samba share stopped working.

      • @[email protected]
        1 point · 8 months ago

        iperf3 showed 2.5 in both directions?

        -R reverses direction

        Also note it can be set up as a daemon - I like to have at least one available on every network I have to deal with.

    • @[email protected]
      15 points · 8 months ago

      rsync and rclone both rely on disk performance. iperf3 is the best way to test network performance.

      Note that the Windows version of iperf is unofficial and very old now, so you really want to use two Linux systems if you’re testing with iperf.

  • @[email protected] (bot)
    6 points · 8 months ago (edited)

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    IP: Internet Protocol
    NVMe: Non-Volatile Memory Express (interface for mass storage)
    SCP: Secure Copy (encrypted file transfer tool, authenticates and transfers over SSH)
    SSD: Solid State Drive (mass storage)
    SSH: Secure Shell (remote terminal access)
    TCP: Transmission Control Protocol (most often over IP)

    [Thread #644 for this sub, first seen 31st Mar 2024, 06:35]

  • @[email protected]
    2 points · 8 months ago (edited)

    Try switching to bbr for congestion control, and adjust the buffer sizes. The defaults are good for Gigabit but not really for higher speeds. Not near my computer right now so I can’t grab a copy of my sysctl settings, but searching Google for “Linux TCP buffer size tuning” and “Linux enable bbr” should find some useful info.

    If the devices are different speeds (e.g. one system is 2.5Gbps and another is 1Gbps), try enabling flow control on the switch, if it's a managed switch.
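
    Roughly the knobs involved, though (illustrative values, not tuned for this network; BBR also needs the tcp_bbr module, which Debian ships):

    ```
    # /etc/sysctl.d/90-net-tuning.conf -- illustrative values, not tuned
    # pair BBR with the fq qdisc
    net.core.default_qdisc = fq
    net.ipv4.tcp_congestion_control = bbr
    # raise socket buffer ceilings beyond the Gigabit-era defaults
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    # min / default / max, in bytes
    net.ipv4.tcp_rmem = 4096 131072 16777216
    net.ipv4.tcp_wmem = 4096 131072 16777216
    ```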

    • @[email protected]
      1 point · 8 months ago (edited)

      I think speed-test data isn't read from or written to disk; it's generated in memory or just thrown away.

    • dontwakethetrees (she/her)OP
      1 point · 8 months ago

      I mean, compared to what it should be, it is. Especially when I paid for 2.5Gb infrastructure.

      And it also affects how fast I can pull files from my server. Trying to get some shows downloaded to my laptop before a business trip? Better plan for an hour or two of copying over the LAN. Pulling a backup OS image for my devices? More waiting.

  • @filister
    0 points · 8 months ago

    Try to execute

    ping -c 1000 1.1.1.1
    

    And check for any packet loss and jitter.
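
    To put a rough number on jitter, you can average the change between consecutive RTTs; a sketch with made-up samples:

    ```shell
    # jitter estimated as the mean absolute difference of consecutive
    # RTTs (ms); the sample values below are made up for illustration
    echo "12.1 11.9 13.4 12.0 25.7" | awk '{
        for (i = 2; i <= NF; i++) sum += (($i > $(i-1)) ? $i - $(i-1) : $(i-1) - $i)
        printf "jitter: %.2f ms\n", sum / (NF - 1)
    }'
    ```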

    Additionally, I'd recommend trying a different test server and comparing the results.

    Keep in mind that your ISP might also be having connectivity issues, which could clear up over the following days.

    • dontwakethetrees (she/her)OP
      7 points · 8 months ago

      I've done pings without any drops. The ISP isn't a factor here; this is LAN-only traffic, and the laptop and server are on the same switch.

      • @filister
        4 points · 8 months ago

        Sorry, in that case I would recommend running iperf and seeing what the throughput looks like. Make sure the iperf traffic is allowed through any firewall as well.

  • @[email protected]
    -3 points · 8 months ago

    Who is your ISP? I had some issues with my FIOS ONT. Had to disable IPv6 on my router for it to stop dropping packets.