So I was recently gifted some Mellanox 40gig network cards that I installed in my NAS and my desktop and connected with AOC fiber. I gave them both static IP addresses on their own dedicated subnet that’s not used anywhere else in my network. I was able to run `iperf3` between both computers, and that worked exactly as expected.
At that point, I edited `/etc/fstab` to update the IP addresses for my mounted network shares. I remounted the shares successfully and thought all was well.
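The entries look roughly like this (the server address, export path, mount point, and options are placeholders, not my exact config):

```
# hypothetical /etc/fstab entry pointing the NFS share at the fiber subnet
# (server address, export path, mount point, and options are placeholders)
10.42.69.2:/export/data  /mnt/data  nfs  defaults,_netdev  0  0
```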
The problem is, my computer defaults to my one-gig LAN connection for some reason, despite the entries in fstab using a completely different subnet.
The only way I’ve found to force it to work properly is to disable my LAN connection, then remount the network shares, then re-enable the LAN port.
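That dance looks roughly like this, assuming NetworkManager is handling the ports (the connection name and mount point are placeholders):

```
# placeholder names: "Wired connection 1" is the 1gig LAN profile, /mnt/data is the NFS mount
nmcli connection down "Wired connection 1"   # take the 1gig LAN offline
sudo umount /mnt/data && sudo mount -a       # remount; only the fiber link can carry it now
nmcli connection up "Wired connection 1"     # bring the 1gig LAN back
```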
On one occasion I noticed that a file I was duplicating on my NAS was being downloaded via my LAN to my computer to duplicate, then being uploaded back to the NAS via the fiber connection.
Does anyone have any clue why this may be happening or how to fix it more permanently?
The NAS is Debian, my desktop is Manjaro.
> I was recently gifted some Mellanox 40gig network cards
Where can I find such friends/relatives?
Marry way out of your league 😉
That’s what I did at least
I’m already at the lowest league unless you count people who marry their animu body pillow.
Are you sure you’re not using local hostnames / DNS resolution?
You may also need to update your NFS exports file for the new subnet. And also tell systemd about the fstab changes (daemon-reload).
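That part is just (both Debian and Manjaro are systemd-based, so no assumptions beyond that):

```
# regenerate systemd's mount units after editing /etc/fstab, then remount
sudo systemctl daemon-reload
sudo mount -a
```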
I did run a daemon-reload. I’m directly referencing the IP address in my fstab. On my desktop I have DNS turned off for the Mellanox interface.
I’ll have to look into the NFS exports file. I’m using OMV, and I just have my exports wide open. But I guess I could solve this by limiting the connection to the mounts by client IP.
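Something in this direction would probably do it (the export path and options are placeholders, and OMV normally writes this file itself):

```
# /etc/exports - hypothetical entry limited to the fiber subnet
# (export path and options are placeholders; OMV manages this file on its own)
/export/data  10.42.69.0/24(rw,sync,no_subtree_check)
```

followed by `exportfs -ra` on the NAS to re-read the exports.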
There might also be some magic/weirdness with IP routing in the kernel. Have a look at the net.ipv4.ip_forward sysctl.
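Checking it is a one-liner:

```
# 1 means the kernel forwards packets between interfaces, 0 means it doesn't
sysctl net.ipv4.ip_forward
```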
Sounds like the routing tables are finding a priority match on the 1Gb interface. Are you 100% sure the NAS connection is truly not on an overlapping subnet with the 1Gb NIC?
100% sure.
LAN: 192.168.69.0/24
Fiber: 10.42.69.0/24
Also: yeah, yeah. I know…
you need to look at the routing tables on your computer. these tables store the prioritized rules for how packets leave your host machine.
it might be that something is adding rules, or there is some overly broad rule taking priority (like a rule that says all 10.0.0.0/8 traffic goes to your home router over the 192.168.69.0/24 network, etc)
it’s also suspect that you can reach the NAS over the 1gb card. That to me means one of two things:
- something is not actually using the IP you’ve configured in your fstab and is using some IP that is on the 1gb interface
- you have some weird network routes configured that are leading to this issue. if 10.42.69.0/24 is accessible over the 192.168.69.0/24 network, then you might need to create a static route explicitly telling your OS to send packets out the 40gb card (rough sketch below)
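something along these lines, assuming the 40gb card shows up as enp1s0 (the interface name is a guess, check `ip link` for the real one):

```
# hypothetical static route forcing the NAS subnet out the 40gb card
# enp1s0 is a placeholder interface name; the lower metric wins over any broader rule
sudo ip route add 10.42.69.0/24 dev enp1s0 metric 50
```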
ultimately, i suggest you run something like tcpdump or wireshark on your computer (ideally on the NAS too) so you can start to visualize how the packets are being addressed and transferred over your networks.

sincerely, a fellow 10.0.69.0/24 enjoyer
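p.s. a concrete example of that kind of capture (the interface names are guesses):

```
# port 2049 is NFS; enp1s0 (fiber) and eno1 (1gig LAN) are placeholder interface names
sudo tcpdump -ni enp1s0 port 2049   # traffic here means the fiber path is being used
sudo tcpdump -ni eno1 port 2049     # traffic here means it's still going over the 1gig LAN
```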
It definitely sounds like he has redundant routes (this literally shouldn’t be possible otherwise), so yeah, he needs to fix the route priority.
A bad route would be the first thing I’d check, too.
It sounds like u/[email protected] is already pretty familiar with the Linux command line, but just in case: you can check the routing table (on both the NAS and the client machine) with `ip route` (that should show the whole routing table), and get the specific route to the remote device with `ip route get 10.42.69.XXX` (change the XXX to the address of the remote system). If either side shows that the route is going over your default gateway, that’s your problem.
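For illustration, the good case looks something like this (the addresses and interface names here are made up, not from the actual systems):

```
$ ip route get 10.42.69.10
10.42.69.10 dev enp1s0 src 10.42.69.1 uid 1000
    cache
# the bad case would instead go via the default gateway on the 1gig NIC, e.g.
# 10.42.69.10 via 192.168.69.1 dev eno1 src 192.168.69.20 uid 1000
```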
Could add a default route via interface
Maybe a /30 going to that gateway
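Roughly like this, if that’s the route you’d add (the address and interface name are guesses):

```
# hypothetical /30 covering just the fiber point-to-point link; enp1s0 is a placeholder
sudo ip route add 10.42.69.0/30 dev enp1s0
```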
Autoneg / link speed problems?
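Easy enough to check (the interface name is a guess):

```
# show negotiated speed/duplex on the 40gig card; enp1s0 is a placeholder name
sudo ethtool enp1s0 | grep -E 'Speed|Duplex|Auto-negotiation'
```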