OVS seems great, but it doesn't seem to garner much hype these days. Does it carry too many connotations from OpenStack, or why isn't it the de facto solution for all L2 bridging/virtual networking for containers/VMs/routers/etc.?
Most people just need data to come out the NICs on certain tags. If you need L2 between guests (in a way you can't already get from the network in the first place), you can just do it in the container or on the host without setting up an entire virtual switch. The configuration is manageable outside an OpenStack environment; it's just a ton of work for very little juice from the squeeze.
Same for the virtual VMware switches in Workstation. I'd love to have a way to connect "something" to a VM running in VMware Workstation.
There was once a "VMware Virtual Network SDK" or something like it, but I couldn't find anything that actually used it (and now that VMware is part of Broadcom, I suspect it's only getting harder).
Mostly of historical interest; User Mode Linux (UML) was one of the first more performant options for virtualising Linux.
This let you play with a network lab based on UML or other technologies without needing to set up a physical network.
This was before the hardware virtualisation we now take for granted. UML was actually the first technology Linode used for virtualisation; then Xen came along and was a far better option.
I wrote a docker network plugin once which interfaced with a VDE virtual switch. The benefit was that the switch was user space and pipes based, so technically you could wire more things into it (like packet droppers) without messing with your hosts networking stack.
The motivation was that I wanted to use easily customised dockerised services to test the boot process of bare-metal system images, which required a bunch of stuff (DHCP, config management, etc.) to be "available" on the network but easily swapped out.
This actually worked great! You could use QEMU to spin up a machine, have a DHCP container assign it an IP address, and then reserve an IP range on the docker network for docker to launch containers into.
It also meant you could bridge into the virtual network easily and without any host IP stack interaction: just launch an interactive container onto the network (or launch it with two interfaces, one on the "real" network, so you could provide internet connectivity).
I wanted to get around to testing failure states, but the need never ultimately came up.
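For anyone wanting to try something similar: the original plugin isn't named here, so this is only a rough sketch of the pieces using standard tooling. The socket paths, subnet, MAC, and image name are all made up, and it assumes `vde_switch`, QEMU, and Docker are installed (Docker's builtin bridge driver won't actually attach to the VDE switch without a plugin like the one described):

```shell
# Start a user-space VDE switch (no changes to the host network stack).
vde_switch -s /tmp/lab.ctl --daemon

# Attach a QEMU guest to that switch and have it network-boot,
# so it DHCPs/PXEs against whatever services live on the lab network.
qemu-system-x86_64 -m 1024 \
  -nic vde,sock=/tmp/lab.ctl,mac=52:54:00:12:34:56 \
  -boot n

# Reserve only part of the subnet for Docker-assigned addresses,
# leaving the rest free for leases the DHCP container hands to VMs.
docker network create --subnet 10.77.0.0/24 --ip-range 10.77.0.128/25 labnet
docker run --rm --network labnet some/dhcp-server
```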
Did you release that and does it still work? I would absolutely love a way to test DHCP and PXE virtually. Or if there's a better way I'm not picky about implementation details.
I use it all the time for my QEMU VMs. It’s less hassle than OVS and allows you to configure some complex networks with just a few socket files and tunnels.
I use one VDE switch per network, with each network having a single tap interface to my OS bridges. It has been a feature of QEMU since 0.15, and the performance is just as good as taps or OVS.
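A concrete sketch of that wiring (socket paths and tap name are illustrative; creating the tap uplink needs an existing tap device and the appropriate privileges):

```shell
# One user-space switch per virtual network; net0 also gets a single
# tap uplink into a host OS bridge via vde_switch's --tap option.
vde_switch -s /tmp/net0.ctl --tap tap0 --daemon
vde_switch -s /tmp/net1.ctl --daemon

# A guest with one NIC on each network, using QEMU's vde netdev backend.
qemu-system-x86_64 -m 2048 \
  -netdev vde,id=n0,sock=/tmp/net0.ctl -device virtio-net-pci,netdev=n0 \
  -netdev vde,id=n1,sock=/tmp/net1.ctl -device virtio-net-pci,netdev=n1
```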
> Not sure what one would prefer it for now though.
VDE can be used with wirefilter to introduce programmable, non-ideal characteristics (latency, packet loss, data corruption, etc) into a network link. Obviously you wouldn't want this in a production system, but it's useful for testing.
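As a sketch of that testing setup (paths and the exact delay/loss values are illustrative; check the wirefilter man page for the full set of knobs, which also includes bandwidth limits and noise/corruption):

```shell
# Two switches joined by a deliberately bad "cable".
vde_switch -s /tmp/left.ctl --daemon
vde_switch -s /tmp/right.ctl --daemon

# Insert wirefilter between them: ~100 ms delay and 5% packet loss
# on the link, without touching the host networking stack at all.
wirefilter -v /tmp/left.ctl:/tmp/right.ctl -d 100 -l 5 --daemon
```

Guests plugged into the two switches then see a WAN-like link between them, which is handy for exercising timeouts and retry logic.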
This one is old, but it's mostly used for rootless containers or well-thought-out VM setups where you need to route things properly. There are a dozen flavours of user-mode virtual network devices and routers to use, of varying maturity, features, and performance.
All that fell out of fashion during the brief period when everyone fired their sysadmins and every dev got a "devops" t-shirt, and then everyone just ran Docker as root with bridge-mode networking.
Notice: Virtual Distributed Ethernet is not related in any way with www.vde.com ("Verband der Elektrotechnik, Elektronik und Informationstechnik", i.e. the German "Association for Electrical, Electronic & Information Technologies").
I wonder how many times that came up, that they had to put it in the README.
Why is it that we all use VDE ratings for high-voltage insulated screwdrivers, but (as far as I'm aware/can think of) nothing else?
I suppose DIN is similar: to most people it's just the rail. (Not to mention I wouldn't be surprised if the common one is just one of many DIN rail specifications or size variants.)
Does anyone know where the API docs for the Hyper-V virtual switch are?