
GoBGP: BGP Implemented in Go - ryancox
https://github.com/osrg/gobgp
======
tptacek
This is the sort of thing Go really shines on: network and infrastructure
services that would ordinarily be provided by big ugly C programs, where the
latency requirements are significant but not as bad as raw packet forwarding.
If your current best alternative is a C program, I'm not sure why you wouldn't
seriously consider replacing any of the following with Go (or Rust) programs:

* Authority DNS

* DNS caches

* ntp

* SMTP

* SSH

* IMAP _(added later)_

* SNMP

* PBX/Telephony

Fortunately, as time goes on, fewer and fewer people need to run these
services at all.

~~~
dsr_
...well, because of the primary shortcoming of the Go system: lack of standard
infrastructure to manage the inevitable updates.

Let's say I have replaced bind, unbound, ntpd, postfix, openssh, dovecot,
snmpd and asterisk with Go-written equivalents. Three weeks later, there is a
bug found in the standard Go TLS library.

My distro ships all the packages noted above, but not their Go-equivalents, so
my work load now includes monitoring security-announce lists for eight
different products, where before I monitored the security-announce list for my
distro.

I need to be able to rebuild all eight systems myself, rather than getting
automatic package updates to my test systems, and then promoting the packages
through alpha and then production. Go is nicer than some other languages about
that, but it builds binaries, not packages.

Next:

I'm pretty sure you can't build an snmpd without ASN.1 parsing, and ASN.1
parsing is the very model of a fraught and perilous splatter-fest. Will the Go
ASN.1 parser be better maintained than libtasn1? Maybe, maybe not. Repeat this
for everything else.

Can these problems be solved? Sure. Are they ready right now? Not that I'm
aware of. Please enlighten me, if you have good answers.

~~~
tptacek
No. I've written SNMP from scratch in Ruby, Python, and C++. DER ASN.1 for
X.509 might be treacherous (simply in the sense that any mistake you make at
all will be ruinous), but that's just not the case for SNMP's BER.

The whole _point_ of using Rust or Go instead of C is that the "peril" of
implementing things like ASN.1/BER is pretty much eliminated.

As for your former point: I don't follow. Go's deployment infrastructure is a
superset of C's, and, if you're a masochist, almost everything in C's
deployment toolkit is available to Go projects as well.

~~~
dsr_
The viewpoint you were espousing was, if I understand correctly: I should
replace all my existing services with Go/Rust equivalents, unless they handle
packets directly.

My objection is that you are advocating this in the same narrow-focused way
that people advocate node with npm, python with pip, ruby with gem: little or
no cooperation with the whole system is available yet. This is perfectly fine
from the point of view of a group which does one thing, but not from my point
of view, running large numbers of diverse systems.

When libfoo gets updated, all N packages on the system which use it via
dynamic linking get the benefit as soon as they restart. This is highly
desirable.

If Go-libfoo is updated, each of those N packages needs to be rebuilt, but I
don't have a programmatic way of finding out.

If there are N teams developing those packages, some of them will be faster
off the mark than others, and now I have a window of vulnerability that is
larger than the one I had when I could update libfoo on day 1.

You have multiplied my workload. I won't do that without a really good reason.

~~~
tptacek
If the Debian people don't want to include Go for some logistical or religious
or religiously logistical reason, that's fine with me. I don't think people
should run critical infrastructure from Debian releases --- when things go
wrong, you want to be prepared to patch source on a moment's notice, rather
than waiting for the upstream synchronization dance --- but hardly anyone
seems to agree with me on that, either.

But I notice you didn't respond to my SNMP point, which is disappointing,
because I was hoping that at least some fake Internet points might accrue to
my otherwise fruitless efforts at implementing SNMP from scratch _three
separate fucking times_. Can I at least be rewarded for that by winning a dumb
message board argument!?

There's even a cool trick to implementing BER encoders I could have talked
about!
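The thread never gets to the trick, but one well-known approach to BER's definite-length encoding (which may or may not be what was meant here) is to serialize values before their headers, so every length is already known by the time it is written, and nested structures are built bottom-up in a single pass. A sketch; `encodeTLV` is an illustrative name, not any real library's API:

```go
package main

import "fmt"

// encodeTLV wraps an already-encoded value in a BER tag-length-value
// header. Because the value is serialized before its header, the
// definite length is always known; nesting is just repeated wrapping.
func encodeTLV(tag byte, value []byte) []byte {
	n := len(value)
	out := []byte{tag}
	if n < 0x80 {
		out = append(out, byte(n)) // short-form length
	} else {
		var lenOctets []byte // long-form length, big-endian
		for v := n; v > 0; v >>= 8 {
			lenOctets = append([]byte{byte(v)}, lenOctets...)
		}
		out = append(out, 0x80|byte(len(lenOctets)))
		out = append(out, lenOctets...)
	}
	return append(out, value...)
}

func main() {
	inner := encodeTLV(0x02, []byte{0x2a}) // INTEGER 42
	outer := encodeTLV(0x30, inner)        // SEQUENCE { INTEGER 42 }
	fmt.Printf("% x\n", outer)             // prints 30 03 02 01 2a
}
```

Encoding inside-out this way avoids the two-pass "compute all lengths, then emit" dance that makes naive BER encoders awkward.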

Instead, it looks like the thread is going to be about dynamic versus static
linkinnzzzzzzzzzzzzzzzzzz.

~~~
_wmd
> when things go wrong, you want to be prepared to patch source on a moment's
> notice, rather than waiting for the upstream synchronization dance

This is IMHO the most backwards logic ever. Everything about dpkg and apt
makes this process easy, from running an internal custom packages repository,
through to "apt-get source" for any system package on a moment's notice,
through to having everything just magically revert back to Debian-patched
versions as part of the normal upgrade process assuming you versioned your
custom packages carefully.

A well-run Debian shop is a thing to be seen; unfortunately, it's not
cohesively documented in any one location on the Internet. After 22 years,
almost any problem encountered in the wild has a solid process built into
Debian to handle it.

Compare that to home directories full of tarballs of binaries with dubious
compiler settings and god knows what else. I have no idea why someone would
advocate against it, assuming of course they've actually done sysadmin work
anywhere aside from the comfort of an armchair.

~~~
tptacek
It's super easy to install a patch using dpkg and apt. I didn't question that.

The problem is that you have to wait for the patch to be bundled. I've watched
that take a long time, while services I knew to be vulnerable had to sit there
and be vulnerable because the organization deploying the service didn't have
any infrastructure to apply a custom patch.

Consider the degenerate case, where you have to wait for a Debian patch
because _you paid for the research that found the vulnerability_. More than
one of my clients wound up in that situation. But that's not the only way to
learn about a simple, critical source patch that won't land in a Debian patch
for days.

~~~
_wmd
You bundle or create it yourself:

        $ apt-get source bash
        $ cd bash*/
        $ quilt new my_urgent_patch
        $ patch -p1 < ~/my-urgent-patch.diff
        $ quilt add file1 file2
        $ quilt refresh
        $ dpkg-buildpackage ...
        $ dupload ../*.changes
        # trigger apt-get upgrade on target machines

~~~
tptacek
That's fine. I don't care what you do with your self-built binaries once you
manage to build them yourself. But too many firms have no infrastructure in
place to do that. They wait for upstreams to synchronize to fix security flaws
that they could fix directly.

~~~
coalescence
scp and "dpkg -i" are readily available, but it's really not that much work to
set up a repository (aptly, reprepro, apt-ftparchive, etc.)

I know I'd personally choose maintaining the system packages and, where
possible, put extraneous language dependencies in packages too (fpm comes in
handy, as it can deal with a variety of package formats: gem, npm, etc.). It
makes life a lot simpler when it comes to administering a bunch of systems and
trying to keep things consistent.

------
educar
I look forward to the first programmer friendly SMTP/IMAP implementation.
Haraka is the closest friendly SMTP server I have come across.

~~~
102030485868
I can understand why there hasn't been much movement in the SMTP world. SMTP
is pretty hard to get right, as is maybe hinted at by the many RFCs. You
_really_ don't want to be making any mistakes because it's a somewhat
unforgiving protocol... unless you send an error code.

~~~
DanielDent
I think the reason SMTP is hard to get right is not because of the many RFCs.
The reason is that it's not documented.

Operational experience at scale is needed to know how to write an effective
SMTP implementation, and that experience is half-documented by many people in
many different information silos.

But... I'd also say it's an extremely forgiving protocol. In fact, it's the
fact that it's so forgiving which makes operational experience required to
implement it. A "correct" SMTP implementation has a lot of latitude in the
choices it makes - and it's that latitude which makes life difficult.

------
devnull42
So at the moment I see no reason why a BGP daemon written in Go would be
better than standard Quagga/Zebra. There aren't really concurrency or resource
issues with large-scale Quagga in my experience.

~~~
tptacek
Quagga/Zebra is a giant C project. The industry is moving away, as much as it
can, from serving critical infrastructure on giant C programs.

~~~
Rapzid
I'm not aware of any trend in the area of routing/switching for Linux away
from C projects. nftables and Open vSwitch are both new-ish and written in C.

~~~
tptacek
"As much as it can". nftables and Open vSwitch both forward packets, and thus
need to be written in C (or, perhaps, in the long term, Rust).

Really, you're playing on a semantic ambiguity in the word "router". A BGP
implementation doesn't forward packets; it maintains a database of forwarding
paths that the packet forwarding layer consults. In a large Cisco router, the
SOC that runs BGP and maintains the RIB isn't the same electronic component
that forwards packets.

~~~
xorcist
Not really. Neither the BGP layer nor the packet forwarding layer in that big
Cisco box of yours is moving away from C code.

Standard network software such as Postfix and OpenSSH took ten years to
replace their predecessors, and their eventual replacement will be just as
gradual. It's not happening right now, so I think it's a bit of a stretch to
call it a trend.

~~~
tptacek
I didn't say it was. But then: I don't trust that Cisco C code _at all_. Do
you?

~~~
sre_ops
Yes. It currently runs over 70% of the global internet, and considering all
the error conditions that show up on the global internet, the code is
extremely stable.

~~~
tptacek
Sendmail used to run on something like 90% of the global Internet. And mail in
the 1990s pretty much did work, pretty reliably. Would you have banked your
site's security on the quality of Sendmail 8.6.12's code?

~~~
nickpsecurity
Slam dunk on that comment! Such systems, thanks to lots of debugging, can work
reliably in a narrow set of use cases where specific features have massive
use. Then there are the uncommon usage scenarios and features that get much
less debugging. Then there are all the patches they keep distributing to
fix... "things."

And then there's the fact that safe, reliable code is only the first step
toward secure code, where an intelligent, malicious person is targeting it.
That's a totally different ballpark, and one that neither Sendmail nor Cisco
handled so well. Small shops like Sentinel and Secure64 did _way_ better with
a tiny fraction of the money. So, it has to be intentional, for the extra
profit at customers' expense.

------
rmdoss
I remember years ago when every new PHP application would have "PHP" before
its name. PHPNuke, PHPMyadmin, etc, etc.

Seeing the same trend with Go now. Why add the language name to the software
name? Real question...

~~~
sanderjd
Another real question: What is the better approach? Generic names (eg.
"bgpd")? That seems decent if you have an over-arching project to group the
generic stuff under (eg. "Apache httpd"). Making up codenames for everything
(eg. "Zebra")? It's a pain to think of those, and they're rarely descriptive
or meaningful.

I don't really like the language-name-prefix thing either. It makes the
language seem like the important thing about the project. Sometimes it is the
most important thing, but even then, that is mostly only true at the beginning
of a project when attracting contributors is most critical. But I'm not sure
the other approaches are much better.

------
malcolmgreaves
I really dislike projects that assume you know the definition of an acronym
and neither (1) expand it nor (2) explain it. BGP is super important to the
GoBGP project. It deserves at least a mention somewhere in the first 4
sentences introducing the project. Gahh!

------
arca_vorago
Would elixir or erlang also be a good potential language for bgp/quagga/zebra?

~~~
technion
I feel "BGP in Erlang" would be exactly the kind of thing I could implement
well - even if it did feel icky having to implement "MD5 Authentication" in
2016.

The problem with those sorts of projects however, is inertia. The average
hobbyist rarely ever uses BGP. Large networks and ISPs aren't going to
implement my personal project as a critical component to keeping their entire
infrastructure online without a very good reason.

This project looks promising, I'm hoping it doesn't suffer this problem.

------
dragonshed
I assume BGP == Border Gateway Protocol
[https://en.wikipedia.org/wiki/Border_Gateway_Protocol](https://en.wikipedia.org/wiki/Border_Gateway_Protocol)

Suggestion: include a quick abstract of what BGP is, with a link for more
information.

~~~
misframer
Is that necessary? I'm not sure how many people are unaware of what BGP is.

~~~
jimbokun
I'm not aware what BGP is, clicked hoping to find out, was sorely
disappointed.

~~~
harshreality
<ctrl-t> bgp <enter>

A project page for a $language implementation of $protocol shouldn't be
expected to give a basic description of $protocol. If you care about a new
implementation, you already know what the protocol does, at least generally.
If you're lucky, the project page links to a protocol description (possibly at
wikipedia), or, as above, you can simply google it yourself and then decide
whether a $language implementation of it is something you care about.

~~~
npizzolato
I don't think it's a lot to ask for a readme to contain the full name of the
acronym it's implementing and maybe a link to a wikipedia page.

I mean, it already has a link to the golang website, but no mention of what
BGP actually is.

~~~
nickpsecurity
A quick Google solves that problem. Anyone that wouldn't do that much is
unlikely to be valuable to the project. It's a nice filter at the least.

~~~
arca_vorago
Bah, accidentally downvoted you, nick, sorry. Since I downvoted, though, I
might as well play the game as if I had a reason (because it annoys me when
people dv without a reason): I think the request for some basic information,
without forcing the reader to google/duckduckgo/wikipedia some of the most
basic info (such as the full name and a basic description), is not too much to
ask from a journalistic perspective, and using it as a barrier is not a good
thing for encouraging education.

After all, there is a reason it's called the wikipedia rabbit hole, do you
know how often I start with a quick search and suddenly it's an hour later and
I've learned all about $something-other-than-originally-intended?

~~~
nickpsecurity
If I do it, I just load up that person's comments and upvote any decent
comment they have. Cancels it out.

On the other issue, here's what typing BGP into Google gave me at the top:
"Border Gateway Protocol (BGP) is a standardized exterior gateway protocol
designed to exchange routing and reachability information among autonomous
systems (AS) on the Internet. The protocol is often classified as a path
vector protocol but is sometimes also classed as a distance-vector routing
protocol."

Some things are hard to search for. Others, like the BGP protocol, are so
common you'll get the answer easily; for those, you can just default to
Google. Further, what use is a programmer going to be in robustly implementing
the protocol if they can't figure that much out? Hence the filter part. So, my
position is more solid now that I've Googled it.

