
BFD (Bidirectional Forwarding Detection) in OpenBSD [pdf] - notaplumber
http://www.openbsd.org/papers/bsdcan2016-bfd.pdf
======
iSloth
Fairly standard to use this in a core network to improve link failure
detection; even "fast" routing protocols can take several seconds to hit a
dead timer on troublesome links.

The key in some networks is that there may be multiple networks or ISPs
between your two routers, so a link may look up but actually be dropping
traffic.
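
To make the timer gap concrete, here is a minimal sketch using BIRD 2 (my
choice of daemon; the addresses, AS numbers, and intervals are illustrative,
not from the paper). Without BFD, the BGP session only dies when its hold
timer expires (commonly 90-240 seconds); with BFD at 300 ms x 3, a dead path
is detected in under a second, even when the physical link still looks up.

    # bird.conf (sketch)
    protocol bfd {
            interface "*" {
                    min rx interval 300 ms;
                    min tx interval 300 ms;
                    multiplier 3;    # declare the peer down after ~900 ms
            };
    }

    protocol bgp peer1 {
            local 192.0.2.2 as 65001;
            neighbor 192.0.2.1 as 65000;
            bfd on;    # tear down the session as soon as BFD fails
            ipv4 { import all; export all; };
    }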

However, it's not too clear why you'd want this on a server.

Most servers support bonding of NICs, quickly detect NIC and physical link
failures, and don't run routing protocols, which are what benefit from this
quick detection.

~~~
X-Istence
There is a new trend, especially with virtualisation, whereby you push Layer 3
all the way down to the host.

In my case, at $WORK, we use a product called Calico with OpenStack. Calico
turns our standard Linux machines into routers that use BGP to advertise the
IPs of the VMs running on them to the rest of the network.
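
Calico generates the BIRD configuration itself (BIRD is the BGP daemon it
ships), but the pattern it automates looks roughly like this hand-rolled
sketch; the ASN, the peer address, and the detail of how the per-VM routes
land in the kernel are my assumptions:

    # bird.conf on the hypervisor (sketch)
    protocol kernel {
            learn;    # pick up the per-VM /32 routes already in the kernel
            ipv4 { import all; export none; };
    }

    protocol bgp tor {
            local as 64512;
            neighbor 10.0.0.1 as 64512;    # the top-of-rack switch
            ipv4 { export where net.len = 32; };    # host routes only
    }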

We use BFD on the edge to detect failures other than link down (for example,
if for some odd reason the SFP is still inserted and the link is still
considered up, yet the ToR is deadlocked and no longer routing traffic).
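
Bolting BFD onto the sketch above is one more protocol block plus a single
line on the neighbor; the timers here are illustrative:

    # added to the same bird.conf (sketch)
    protocol bfd {
            interface "*" {
                    min rx interval 200 ms;
                    min tx interval 200 ms;
                    multiplier 3;    # dead ToR detected in ~600 ms
            };
    }

    # ...and inside "protocol bgp tor":
    #         bfd on;    # drops the session even while the SFP link stays up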

Pushing layer 3 down further means we no longer have large VLANs spanning
multiple switches with a huge broadcast domain; it gives us more flexibility
in where workloads live, and traffic can be sent to the end nodes using ECMP
and other forms of load balancing.

Our network guys love it: they already know how to traffic-engineer with BGP,
and this makes it even simpler.

~~~
inopinatus
I'll echo that - although I won't call it new. Using routing protocols at the
server level (binding a service IP address to the loopback interface and
announcing it through an IGP) has long been a tool in the infrastructure
arsenal. It is especially useful in multi-tenanted services and/or logically
partitioned infrastructure, although I first came across it around fifteen
years ago whilst building new-wave IP carriers in Europe.
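
Concretely, the pattern is just two steps: put the service /32 on the
loopback, then redistribute it. A BIRD/OSPF sketch with made-up addresses
(the daemon and the choice of IGP are mine; any IGP works):

    # 1. Bind the service address to loopback:
    #      ip addr add 203.0.113.80/32 dev lo      (Linux)
    #      ifconfig lo0 alias 203.0.113.80/32      (BSD)

    # 2. bird.conf (sketch): announce it into the IGP
    protocol direct {
            ipv4;
            interface "lo";    # picks up the /32 bound to loopback
    }

    protocol ospf v2 {
            ipv4 { export where source = RTS_DEVICE; };
            area 0 {
                    interface "eth0" { cost 10; };
            };
    }

Withdraw the announcement (or lose the server) and the IGP converges around
it; that's the cheap failover mechanism mentioned below.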

This pattern avoids some of the use cases for overlay networks (especially
tunnels, but anything that reduces MTU can still cause problems in this day
and age), reduces layer 2 domain scale, aids service mobility (especially
great for anything in a logical partition, whether it's a JVM, a jail/zone or
a VM), adds multipath resilience for clustered services, gives
infrastructure-level visibility of group membership, gives a cheap failover
mechanism for intermediate devices like reverse proxies or load balancers,
and is simple to extend, e.g. adding more service addresses for new tenants.
And yeah, the network guys love it.

If you are very clever it can also extend recursively into the virtual
domain: a VM binds a service address to its loopback interface and announces
it via an IGP over a virtual interface to the host, which is now basically
just a router. This reduces L2 domain size issues for VM farms (albeit by
moving them to L3), and solves IP configuration issues, e.g. during VM
migration to another host or during DR processes. However, the usual Knuth
caveats about being very clever apply to this situation. If you do this with,
say, a JVM running in Docker inside a Xen domU inside a physical host, then
you get what's coming to you. Or, you could aggregate the announcements at
the host if you didn't need the migration part.
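
For the aggregation variant, the host side might look like this sketch
(names and prefixes are hypothetical): learn each VM's /32 over OSPF on the
tap interfaces, but announce only a covering prefix upstream. The per-VM
routes stop propagating beyond the host, which is exactly why this trades
away the migration part:

    # bird.conf on the host (sketch)
    protocol ospf v2 vms {
            ipv4;
            area 0 {
                    interface "tap*" { cost 5; };    # adjacency per guest
            };
    }

    protocol static covering {
            ipv4;
            route 198.51.100.0/24 unreachable;    # covers all service /32s
    }

    protocol bgp upstream {
            local as 64512;
            neighbor 10.0.0.1 as 64512;
            ipv4 { export where proto = "covering"; };    # aggregate only
    }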

The downside of all this is that ops people without ISP or MAN/WAN experience
generally need the pattern explained a few times before they get it (some
never do - try not to hire those ones though), and practically no *nix
distribution or management/config tool supports it out of the box, so there's
a build and maintenance tradeoff. You can also build things that make some
applications get really, really confused: inadvertently creating an anycasted
service might be fun, but it's also a recipe for inconsistent application
data.

