Ethernet-layer NAT (userspace queueing for ebtables)?

Gabriel L. Somlo gsomlo at gmail.com
Thu Mar 30 21:27:08 UTC 2023


On Thu, Mar 30, 2023 at 02:53:40PM -0600, Stephen Warren wrote:
> On 3/30/23 14:15, Gabriel Somlo wrote:
> > Hi,
> > 
> > I find myself in need of a Layer-2 (Ethernet) NAT solution to work
> > around "cloudy" vmware's restrictions on promiscuous mode for guest
> > network interfaces.
> > 
> > For context, I'm running a network simulator (think gns3 or CORE),
> > and need to bridge simulated nodes to the outside of the host VM,
> > so they can interact with other host VMs sharing a VMWare vswitch
> > (also, think of the simulated node as the default gateway for the
> > LAN that's bridged to the outside vswitch):
> > 
> > ------------------------------
> > VMware guest running sim     |
> >                              |
> > simulated node,              |
> > default gateway              |
> > ------------                 |              -------------
> >            |                 |   VMware     | External VM
> >       eth0 + -- br0 -- ens32 +-- vswitch ---+ using in-sim
> >    Sim.MAC |          VM.MAC |              | dflt. gateway
> > ------------                 |              -------------
> > ------------------------------
> > 
> > VMware will prohibit traffic in/out of the simulator VM if the
> > dst/src (respectively) MAC address doesn't match the ens32 MAC.
> > ...
> 
> I'm not 100% sure what you're doing since I didn't take the time to digest
> every detail in your message, but maybe this suggestion will work anyway:
> Don't use VMWare's network interfaces for your traffic. Instead, run some
> kind of VPN client inside the VM and VPN server on the host system or some
> other system. Push all relevant traffic over the VPN (it will have a
> specific NIC show up inside the VM for this) so VMWare can't see it. Then,
> you get no VMWare-imposed restrictions as long as you can get a VPN
> connection. I know that OpenVPN can run in L2 mode, and I imagine others can
> too. Of course, perf may suffer a bit.

Running a layer-2 tunnel is my backup plan, and I'm trying to avoid it
so that I won't need to impose requirements on the external VMs (i.e.,
making them aware that there's tunneling going on, setting up client-
or server-side software to participate in said tunneling, etc.).
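
For reference, the fallback I have in mind would be OpenVPN in tap
(layer-2) mode with a static key, with the sim-side tap enslaved to
br0. A rough sketch, where static.key and SIM_VM_IP are placeholders:

    # generate a shared key once (newer OpenVPN spells this
    # "openvpn --genkey secret static.key")
    openvpn --genkey --secret static.key

    # sim-VM end: pre-create tap0, enslave it to br0, then listen
    ip tuntap add dev tap0 mode tap
    ip link set tap0 up master br0
    openvpn --dev tap0 --secret static.key --daemon

    # each external VM: point the tunnel at the sim VM
    ip tuntap add dev tap0 mode tap
    ip link set tap0 up
    openvpn --dev tap0 --remote SIM_VM_IP --secret static.key --daemon

But that means every external VM has to run tunnel software, which is
exactly the requirement I'm trying not to impose.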

I simply want to pass through a container's network interface and make
it visible to the "outside world" as the ens32 interface of its
hosting VM.
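
(Something like a macvlan in passthru mode might approximate that,
though I haven't tried whether VMware's filtering would be happy with
it; a hypothetical sketch, where macv0 and CONTAINER_PID are made-up
names:

    # ride on ens32 in passthru mode, so the macvlan uses ens32's own MAC
    ip link add link ens32 name macv0 type macvlan mode passthru
    # hand the interface to the container's network namespace
    ip link set macv0 netns CONTAINER_PID

...but that only covers a single interface/MAC, not a simulated LAN.)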

> Or, get a multi-port Ethernet card that supports SR-IOV, and donate the PCIe
> device into the VM so it has direct access to a real NIC. Or even many
> regular single-port PCIe NICs (one per VM) and again donate those NICs into
> the relevant VM.

Hardware is not an option: this is cloud-hosted VMware, where we don't
even get the option of enabling promiscuous mode on the "cloudy"
vswitches, hence my predicament...

All VMs (the one running the simulation *and* the ones using the
simulated container as their default gateway) must be in the VMware
"cloud thingie", so there's nowhere for the tunneled traffic to
"surface" where promiscuous mode would be allowed.

So my options are: (a) configuring a Linux `br0` bridge to act as a
hub, i.e., blindly flooding every frame without looking at src/dst MAC
addresses, which turns out to be surprisingly difficult, since I
haven't figured out how to do it yet :), or (b) layer-2 NAT, which is
simple for the Ethernet headers but comes with the added fun of having
to rewrite ARP packet payloads as well (hard) :). Rough sketches of
both ideas follow.
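
For the hub idea, the closest I've come up with is telling the bridge
to never learn and always flood; a sketch, assuming br0 already exists
with ports eth0 and ens32:

    # forget learned MAC entries immediately...
    ip link set dev br0 type bridge ageing_time 0
    # ...and disable learning / force flooding on each port
    bridge link set dev eth0 learning off flood on
    bridge link set dev ens32 learning off flood on

(With older tools, `brctl setageing br0 0` should have a similar
effect.)

For the layer-2 NAT idea, the Ethernet-header half would be roughly
the following ebtables sketch, with VM.MAC and Sim.MAC standing in for
the real addresses from the diagram above:

    # rewrite the source MAC of frames leaving via ens32; --snat-arp
    # also fixes the sender hardware address inside outbound ARP payloads
    ebtables -t nat -A POSTROUTING -o ens32 -j snat \
        --to-source VM.MAC --snat-arp
    # rewrite the destination MAC of frames arriving on ens32 back to
    # the simulated node (Ethernet header only)
    ebtables -t nat -A PREROUTING -i ens32 -d VM.MAC -j dnat \
        --to-destination Sim.MAC

As far as I can tell, dnat only touches the Ethernet header, not the
target hardware address inside inbound ARP replies, which is where the
userspace-queueing idea from the subject line comes in: if the kernel's
nftables bridge family supports the queue statement, the ARP payload
fix-up could be done in a userspace helper instead.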

Thanks,
--Gabriel

