Ethernet-layer NAT (userspace queueing for ebtables)?

Sean Reifschneider jafo00 at gmail.com
Fri Mar 31 00:30:55 UTC 2023


A bridge typically only floods a few kinds of traffic, like broadcast ARP
requests and multicast (plus frames for MAC addresses it hasn't learned
yet).  This is called a "learning bridge" and is what Linux implements.
You can bridge an eth interface to a tap device and it will forward the
appropriate traffic.  I'm pretty sure there are sysctls you have to enable
as well, but I don't remember what they might be.  Maybe "proxy_arp"?
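
From memory, the setup is roughly something like this (untested, the
interface names are just examples, and the sysctls are a guess on my
part):

    # create a tap device and a bridge, put eth0 and tap0 on the bridge
    ip tuntap add dev tap0 mode tap
    ip link add name br0 type bridge
    ip link set dev eth0 master br0
    ip link set dev tap0 master br0
    ip link set dev tap0 up
    ip link set dev br0 up

    # the sysctls I was thinking of may be these (not sure both are needed)
    sysctl -w net.ipv4.conf.all.proxy_arp=1
    sysctl -w net.ipv4.ip_forward=1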

What is the higher-level goal here?  I'm guessing that you are trying to
link the networks in two locations so that all VMs in multiple places can
seamlessly talk to each other as if they are on a single LAN?  If so,
consider just putting them all on Tailscale and using the tailnet addresses
to talk.  I have over 100 machines on Tailscale and it's working quite
well.  It's a mesh VPN, so all the nodes establish connections directly to
each other, as if on a single segment.
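
Getting a machine onto the tailnet is basically just (assuming the
tailscale package is already installed):

    # authenticate this machine and join the tailnet
    tailscale up

    # show the addresses / peers other nodes can use to reach it
    tailscale ip -4
    tailscale status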

On Thu, Mar 30, 2023 at 3:27 PM Gabriel L. Somlo <gsomlo at gmail.com> wrote:

> On Thu, Mar 30, 2023 at 02:53:40PM -0600, Stephen Warren wrote:
> > On 3/30/23 14:15, Gabriel Somlo wrote:
> > > Hi,
> > >
> > > I find myself in need of a Layer-2 (Ethernet) NAT solution to work
> > > around "cloudy" vmware's restrictions on promiscuous mode for guest
> > > network interfaces.
> > >
> > > For context, I'm running a network simulator (think gns3 or CORE),
> > > and need to bridge simulated nodes to the outside of the host VM,
> > > so they can interact with other host VMs sharing a VMWare vswitch
> > > (also, think of the simulated node as the default gateway for the
> > > LAN that's bridged to the outside vswitch):
> > >
> > > -----------------------------
> > > VMWare guest running sim    |
> > >                             |
> > > simulated node,             |
> > > default gateway             |
> > > -----------                 |              ------------
> > >           |                 |   vmware     | External VM
> > >      eth0 + -- br0 -- ens32 +-- vswitch ---+ using in-sim
> > >   Sim.MAC |          VM.MAC |              | dflt. gateway
> > > -----------                 |              ------------
> > > -----------------------------
> > >
> > > VMWare will prohibit traffic in/out of the simulator VM if the
> > > dst/src (respectively) MAC address doesn't match the ens32 MAC.
> > > ...
> >
> > I'm not 100% sure what you're doing since I didn't take the time to
> > digest every detail in your message, but maybe this suggestion will
> > work anyway: Don't use VMWare's network interfaces for your traffic.
> > Instead, run some kind of VPN client inside the VM and VPN server on
> > the host system or some other system. Push all relevant traffic over
> > the VPN (it will have a specific NIC show up inside the VM for this)
> > so VMWare can't see it. Then, you get no VMWare-imposed restrictions
> > as long as you can get a VPN connection. I know that OpenVPN can run
> > in L2 mode, and I imagine others can too. Of course, perf may suffer
> > a bit.
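
(For reference, a bare-bones layer-2 OpenVPN setup would look roughly
like this; all names, addresses, and cert paths below are placeholders:

    # server side, tap/bridged mode
    openvpn --dev tap0 --proto udp --port 1194 \
        --server-bridge 10.0.0.1 255.255.255.0 10.0.0.200 10.0.0.220 \
        --ca ca.crt --cert server.crt --key server.key --dh dh.pem

    # client side
    openvpn --client --dev tap --proto udp --remote vpn.example.com 1194 \
        --ca ca.crt --cert client.crt --key client.key

and the server's tap0 would then get added to a bridge alongside the
LAN-facing interface.)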
>
> Running a layer-2 tunnel is my backup plan, and I'm trying to avoid it
> so that I won't need to impose requirements on the external VMs (i.e.,
> making them aware that there's tunneling going on, setting up client-
> or server-side software to participate in said tunneling, etc.).
>
> I simply want to pass through a container's network interface and make
> it visible to the "outside world" as the ens32 interface of its
> hosting VM.
>
> > Or, get a multi-port Ethernet card that supports SR-IOV, and donate
> > the PCIe device into the VM so it has direct access to a real NIC.
> > Or even many regular single-port PCIe NICs (one per VM) and again
> > donate those NICs into the relevant VM.
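
(Again just for reference, with an SR-IOV capable NIC that would look
something like the following; the interface name and VF count are made
up:

    # carve the physical NIC into virtual functions
    echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs

    # the VFs show up as their own PCI devices, ready for passthrough
    lspci | grep -i "virtual function"

and each VF then gets assigned to a VM as a passthrough PCI device.)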
>
> Hardware is not an option; this is cloud-hosted vmware, where we don't
> even get the option of enabling promiscuous mode on the "cloudy
> vswitches", hence my predicament...
>
> All VMs (the one running the simulation *and* the ones using the
> simulated container as their default gateway) must be in the vmware
> "cloud thingie", so there's nowhere for the tunneled traffic to
> "surface" where promiscuous mode would be allowed.
>
> So my options are either configuring a Linux `br0` bridge to act as a
> hub (i.e., blindly flood everything without looking at src/dst MAC
> addresses, which turns out to be surprisingly difficult; I haven't
> figured out how to do it yet) :)  or layer-2 NAT (simple), but with
> the added fun of having to rewrite ARP packet payloads (hard) :)
>
> Thanks,
> --Gabriel
>
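
Regarding making br0 act like a hub: I believe you can get pretty close
by turning off MAC learning and turning on flooding on the bridge
ports, something like this (untested, port names taken from your
diagram):

    # stop the bridge from learning MACs, so all traffic gets flooded
    ip link set dev br0 type bridge ageing_time 0
    bridge link set dev eth0 learning off flood on mcast_flood on
    bridge link set dev ens32 learning off flood on mcast_flood on

And for the layer-2 NAT route, ebtables can rewrite the Ethernet-header
src/dst MACs (though, as you say, not the MACs carried inside ARP
payloads); the MAC addresses below are placeholders:

    # rewrite source MAC on the way out, destination MAC on the way in
    ebtables -t nat -A POSTROUTING -o ens32 -j snat \
        --to-source 00:50:56:aa:bb:cc
    ebtables -t nat -A PREROUTING -i ens32 -j dnat \
        --to-destination 02:00:00:00:00:01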

