The key to using this in some networks is that there may be multiple networks or ISPs between your two routers, so a link may look up but actually be dropping traffic.
However, it's not too clear to me why you'd want this on a server.
Most servers support bonding of NIC interfaces, can quickly detect NIC and physical link issues, and don't run routing protocols, which are what benefit from this quick detection.
In my case, at $WORK, we use a product called Calico with OpenStack. Calico turns our standard Linux machines into routers, using BGP to advertise the IPs of the VMs running on them to the rest of the network.
We use BFD on the edge to detect failures other than link down (for example, if for some odd reason the SFP is still inserted and link is still considered up, yet the ToR is deadlocked and no longer routing traffic).
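For a rough idea of what that looks like with an open-source routing daemon, here's a minimal bird-style sketch (not our actual config; the addresses, interface name and AS numbers are made up) where the BGP session to the ToR is tied to a BFD session, so routes are withdrawn quickly even when the physical link still shows up:

    # minimal bird 1.6-style sketch, hypothetical addresses/ASNs
    router id 10.1.1.10;

    protocol device { }

    protocol bfd {
        interface "eth0" {
            min rx interval 100 ms;
            min tx interval 100 ms;
            multiplier 3;           # ~300 ms detection time
        };
    }

    protocol bgp tor {
        local as 65010;
        neighbor 10.1.1.1 as 65000;
        bfd on;                     # tear the session down when BFD declares the peer dead
        import all;
        export all;
    }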
Pushing layer 3 further down means we no longer have large VLANs spanning multiple switches with a huge broadcast domain. It gives us more flexibility in where workloads live, and traffic can be sent to the end nodes using ECMP and other types of load balancing.
Our network guys love it: they already know how to traffic-engineer using BGP, and this makes it even simpler.
This pattern avoids some of the use cases for overlay networks (especially tunnels, though anything that reduces MTU can still cause problems in this day and age) and reduces layer 2 domain scale. It aids service mobility (especially great for anything in a logical partition, whether it's a JVM, a jail/zone or a VM), adds multipath resilience for clustered services, gives infrastructure-level visibility into group membership, provides a cheap failover mechanism for intermediate devices like reverse proxies or load balancers, and is simple to extend, e.g. more service addresses for new tenants. And yeah, the network guys love it.
If you are very clever it can also extend recursively into the virtual domain: a VM binds a service address to its loopback interface and announces it via an IGP over a virtual interface to the host, which is basically now just a router. This reduces L2 domain size issues for VM farms (albeit by moving them to L3), and solves IP configuration issues, e.g. during VM migration to another host or during DR processes. However, the usual Knuth caveats about being very clever apply to this situation. If you do this with, say, a JVM running in Docker inside a Xen domU inside a physical host, then you get what's coming to you. Or you could aggregate the announcements at the host if you didn't need the migration part.
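To make the recursive case a bit more concrete, here's a rough sketch of what the VM side might run with bird (purely illustrative: the 192.0.2.42 service address, the interface names and the OSPF settings are assumptions, and you'd want a tighter export filter in practice). The service address is bound to lo and redistributed into OSPF towards the host:

    # on the VM, after: ip addr add 192.0.2.42/32 dev lo
    router id 192.0.2.42;

    protocol device { }

    protocol direct {
        interface "lo";                      # picks up the service /32 on loopback
    }

    protocol ospf {
        export where net = 192.0.2.42/32;    # only announce the service address
        area 0 {
            interface "eth0" {               # virtual interface towards the host
                hello 1;
                dead count 4;
            };
        };
    }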
The downside of all this is that ops people without ISP or MAN/WAN experience generally need this pattern explained a few times before they get it (some never do - try not to hire those ones though), and practically no *nix distribution or management/config tools support this out of the box, so there's a build & maintenance tradeoff. You can also build things that make some applications get really, really confused. Inadvertently creating an anycasted service might be fun, but it might also be a recipe for inconsistent application data.
I recently did some Cisco ACI/Nexus work with virtualisation; however, the client was only really interested in having massive layer 2 bridge domains spanning two DC sites to make their app design/failover easier.
Is that just a general question about servers? If so, you're probably right.
However, OpenBSD is frequently used as a firewall or router. So if a developer has an itch to scratch, and it's within the bailiwick of what OpenBSD is generally used for, then why not have it?
My day job is networking for a UK ISP/DC, and the only places I could really think of people wanting this were on devices like route reflectors within your network, or at a peering exchange.
However I suppose people are just using OpenBSD to do more than I thought.
- Results of audits, or how they got bizarre bugs to surface with security features/randomization/running on exotic hardware platforms.
- Security mitigations and software hardening as they find potential attack surface.
- Recreating the most common use cases of a common piece of software with <5% of the codebase.
I use the implementation in bird (http://bird.network.cz/) for both IPv4 and IPv6.
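For anyone curious, the relevant stanza is small; something along these lines (timer values are illustrative, and with bird 1.x you carry the same block in both the bird and bird6 configs, whereas bird 2.x handles both address families in one daemon):

    protocol bfd {
        interface "eth*" {
            min rx interval 200 ms;
            min tx interval 200 ms;
            multiplier 5;           # ~1 s detection time
        };
    }

Sessions then get created on demand by whichever protocol (BGP, OSPF, etc.) asks for BFD monitoring of its neighbors.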
With Juniper you can actually set a static route to be live or not based on BFD directly, so there's no need to use BGP in our case, though that approach is common for routers that can't do it directly on a static route.
OpenBSD has a lot of routing technology built-in, so it's common to use it as a firewall, router, VPN gateway, etc.
A server running OpenBSD isn't going to replace a core router, but it's very flexible.
LACP/bonding is another challenge; it requires either min-links or micro-BFD.
BFD is often run at aggressive intervals (detecting a failure in under 1000 ms; e.g. a 300 ms interval with a multiplier of 3 gives a detection time of roughly 900 ms), and is not well suited to running on general-purpose CPUs that are also handling all of the management and control plane tasks. BFD is ideally implemented in hardware.