Currently, I have two VPN clients on most of my devices:

  • One for connecting to a LAN
  • One commercial VPN for privacy reasons

I usually stay connected to the commercial VPN on all my devices, unless I need to access something on that LAN.

This setup has a few drawbacks:

  • Most commercial VPN providers have a limit on the number of simultaneously connected clients
  • I can either obfuscate my IP or access resources on that LAN (including my Pi-Hole for custom DNS-based blocking), but not both at the same time

One possible solution would be to route all internet traffic through a VPN client on the router in the LAN and figure out how to still keep at least one port open for the VPN Docker container that provides access to the LAN. But split tunneling around that would be pretty hard to achieve.

I want to be able to connect to a VPN host container on the LAN that in turn routes all internet traffic through another VPN client container while still allowing LAN traffic, and I still want to be able to split tunnel specific applications on my Android/Linux/iOS devices.

Basically this:

   +---------------------+ internet traffic   +--------------------+           
   |                     | remote LAN traffic |                    |           
   | Client              |------------------->|VPN Host Container  |           
   | (Android/iOS/Linux) |                    |in remote LAN       |           
   |                     |                    |                    |           
   +---------------------+                    +--------------------+           
                      |                         |     |                        
                      |       remote LAN traffic|     | internet traffic       
split tunneled traffic|                 |--------     |                        
                      |                 |             v                        
                      v                 |         +---------------------------+
  +---------------------+               v         |                           |
  | regular LAN or      |     +-----------+       | VPN Client Container      |
  | internet connection |     |remote LAN |       | connects to commercial VPN|
  +---------------------+     +-----------+       |                           |
                                                  |                           |
                                                  +---------------------------+
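
On the client side I’m picturing just an ordinary WireGuard profile pointing at the VPN host container, something like the sketch below (keys, addresses, and the endpoint are made-up placeholders), with per-app split tunneling then handled in the client app itself, e.g. the excluded-applications list in the WireGuard app on Android:

    [Interface]
    PrivateKey = <device private key>
    # address handed out by the VPN host container and the Pi-Hole on the remote LAN (placeholders)
    Address = 10.8.0.2/32
    DNS = 192.168.1.53

    [Peer]
    PublicKey = <VPN host container public key>
    # the one port forwarded to the VPN host container (placeholder hostname)
    Endpoint = vpn.example.net:51820
    # full tunnel: remote LAN and internet traffic both go through the host container
    AllowedIPs = 0.0.0.0/0, ::/0
    PersistentKeepalive = 25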

Any recommendations on how to achieve this, especially considering client apps for Android and iOS with the ability to split tunnel per application?

Update:

Got it by following this guide.

Ended up modifying this setup to have better control over potential IP leakage.
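
The leak control essentially means making sure traffic arriving from the tunnel clients can only ever be forwarded out through the commercial VPN interface. Roughly this kind of rule (a sketch rather than my exact rules; the interface names and LAN range are placeholders):

    # Allow client traffic to the remote LAN itself, and reject anything that would
    # leave via an interface other than the commercial VPN tunnel (wg0 here).
    # The LAN exception is inserted last so it ends up above the catch-all reject.
    iptables  -I FORWARD -i wg-clients ! -o wg0 -j REJECT
    iptables  -I FORWARD -i wg-clients -d 192.168.1.0/24 -j ACCEPT
    ip6tables -I FORWARD -i wg-clients ! -o wg0 -j REJECT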

  • Prison Mike · 25 months ago

    Your setup looks more advanced than mine, and I’d really like to do something similar. I’m just going to copy/paste what I have, with some addresses replaced by the following placeholders:

    • VPN_IPV4_CLIENT_ADDRESS: the WireGuard IPv4 address of the VPN provider’s interface (e.g. 172.0.0.1)
    • VPN_IPV6_CLIENT_ADDRESS: the WireGuard IPv6 address of the VPN provider’s interface
    • VPN_IPV6_CLIENT_ADDRESS_PLUS_ONE: the next IPv6 address that comes after VPN_IPV6_CLIENT_ADDRESS. I can’t remember the logic behind this, but I’d found an article online explaining it.
    • WG_INTERFACE: the WireGuard network interface name (e.g. wg0) for the commercial VPN

    I left 100.64.0.0/10 and fd7a:115c:a1e0::/96 in my example because those are the networks Tailscale traffic will come from. I also left tailscale0 because that is the typical interface name. Obviously these can be changed to support any network.

    I’m using Alpine Linux, so I don’t have the PostUp, PostDown, etc. in my WireGuard configuration; I’m not using wg-quick at all.

    Before I hit paste, one thing I’ll say is I haven’t addressed the “kill switch” yet. But so far (~4 months) when the VPN provider’s tunnel goes down nothing leaks. 🤞

    # Enable IPv4/IPv6 packet forwarding so this box can route between Tailscale and the VPN tunnel
    sysctl -w net.ipv4.ip_forward=1
    sysctl -w net.ipv6.conf.all.forwarding=1
    
    # Re-apply anything persisted in /etc/sysctl.conf
    sysctl -p
    
    # Create the WireGuard interface to the commercial VPN and assign the provider-issued addresses
    ip link add dev WG_INTERFACE type wireguard
    
    ip addr add VPN_IPV4_CLIENT_ADDRESS/32 dev WG_INTERFACE
    ip -6 addr add VPN_IPV6_CLIENT_ADDRESS/127 dev WG_INTERFACE
    
    # Load the provider peer configuration (plain wg format) and bring the tunnel up
    wg setconf WG_INTERFACE /etc/wireguard/WG_INTERFACE.conf
    ip link set up dev WG_INTERFACE
    
    # Masquerade traffic leaving through the VPN tunnel (with the Tailscale source ranges listed explicitly as well)
    iptables -t nat -A POSTROUTING -o WG_INTERFACE -j MASQUERADE
    iptables -t nat -A POSTROUTING -o WG_INTERFACE -s 100.64.0.0/10 -j MASQUERADE
    
    ip6tables -t nat -A POSTROUTING -o WG_INTERFACE -j MASQUERADE
    ip6tables -t nat -A POSTROUTING -o WG_INTERFACE -s fd7a:115c:a1e0::/96 -j MASQUERADE
    
    # Allow forwarding between the Tailscale interface and the VPN tunnel
    iptables -A FORWARD -i WG_INTERFACE -o tailscale0 -j ACCEPT
    iptables -A FORWARD -i tailscale0 -o WG_INTERFACE -j ACCEPT
    iptables -A FORWARD -i WG_INTERFACE -o tailscale0 -m state --state RELATED,ESTABLISHED -j ACCEPT
    
    ip6tables -A FORWARD -i WG_INTERFACE -o tailscale0 -j ACCEPT
    ip6tables -A FORWARD -i tailscale0 -o WG_INTERFACE -j ACCEPT
    ip6tables -A FORWARD -i WG_INTERFACE -o tailscale0 -m state --state RELATED,ESTABLISHED -j ACCEPT
    
    # Define custom routing tables; traffic coming from the Tailscale ranges gets its
    # default route via the VPN tunnel
    mkdir -p /etc/iproute2
    
    echo "70 wg" >> /etc/iproute2/rt_tables
    echo "80 tailscale" >> /etc/iproute2/rt_tables
    
    ip rule add from 100.64.0.0/10 table tailscale
    ip route add default via VPN_IPV4_CLIENT_ADDRESS dev WG_INTERFACE table tailscale
    
    ip -6 rule add from fd7a:115c:a1e0::/96 table tailscale
    ip -6 route add default via VPN_IPV6_CLIENT_ADDRESS_PLUS_ONE dev WG_INTERFACE table tailscale
    
    ip rule add from VPN_IPV4_CLIENT_ADDRESS/32 table wg
    ip route add default via VPN_IPV4_CLIENT_ADDRESS dev WG_INTERFACE table wg
    
    service tailscale start
    rc-update add tailscale default
    
    # Accept DNS queries arriving from Tailscale clients (answered by the local unbound resolver)
    iptables -A INPUT -i tailscale0 -p udp --dport 53 -j ACCEPT
    iptables -A INPUT -i tailscale0 -p tcp --dport 53 -j ACCEPT
    
    ip6tables -A INPUT -i tailscale0 -p udp --dport 53 -j ACCEPT
    ip6tables -A INPUT -i tailscale0 -p tcp --dport 53 -j ACCEPT
    
    service unbound start
    rc-update add unbound default
    
    # Persist the firewall rules so they are restored at boot
    /sbin/iptables-save > /etc/iptables/rules-save
    /sbin/ip6tables-save > /etc/ip6tables/rules-save
    
    # Advertise this node as a Tailscale exit node, accept advertised routes, and keep local DNS settings
    tailscale up --accept-dns=false --accept-routes --advertise-exit-node
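
    In case it helps, /etc/wireguard/WG_INTERFACE.conf above is in the plain wg(8) format that `wg setconf` expects, so there are no wg-quick-only keys like Address, DNS or PostUp. It’s roughly this shape (the keys, endpoint and port below are placeholders, not my real values):

    [Interface]
    PrivateKey = <private key issued by the commercial VPN provider>

    [Peer]
    PublicKey = <provider server public key>
    Endpoint = <provider server>:51820
    # send everything through the provider; the policy routing above decides
    # which traffic actually uses this interface
    AllowedIPs = 0.0.0.0/0, ::/0
    PersistentKeepalive = 25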
    
    • Prison Mike · 5 months ago

      Forgot to mention that I run a DNS server for blocking too. When using Tailscale, I’ve found it’s important to use their resolver as the upstream, otherwise App Connectors won’t work (the VPN provider tunnel on each VPS routes to a different country, so DNS wasn’t in sync). This kind of sucks, but I make do with it after a month or two of App Connectors being very iffy.
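
      In unbound terms that upstream just means forwarding everything to Tailscale’s resolver at 100.100.100.100; roughly this snippet in unbound.conf (a sketch; the blocking configuration itself lives elsewhere):

      # Forward all queries to Tailscale's MagicDNS resolver so App Connector
      # domains resolve consistently across the nodes
      forward-zone:
          name: "."
          forward-addr: 100.100.100.100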