Like every single proxy written in Go, it uses the core httputil library with a shit ton of custom code on top of it.
Anyone who writes Go does not need any of this. And those who do not write Go can still write their own in no time, because it is literally a couple of lines of code. No harder than running a webserver in Go (two lines of code).
Because all it takes is no more than 50 lines of code. It's so easy that nothing else makes much sense. Go's standard library has everything you need to run networking services from the get-go, so you really do not need these things. Go with nginx, haproxy and the like if you need every last bit of performance, but otherwise you can just write your own in no time. Not only that, you can tailor it to suit whatever use case you have, and that knowledge will only make you more productive in the future.
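Roughly this, give or take error handling (the backend address is a placeholder):

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder backend; point it at your own service.
	backend, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	// httputil does the heavy lifting: request rewriting,
	// hop-by-hop header handling, body streaming, etc.
	proxy := httputil.NewSingleHostReverseProxy(backend)
	log.Fatal(http.ListenAndServe(":80", proxy))
}
```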
Now I kinda want to test it to see how it behaves when your application flushes lots of little packets smaller than the MTU. Some proxies will try to coalesce the packets to make the connection more efficient; some will just forward them as-is. This is super important behavior to be familiar with (and to have documented) for things like websockets.
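One way to probe that (a sketch; the handler and ports are made up) is to run a handler behind the proxy that emits many tiny flushed writes, then watch both sides with tcpdump to see whether the chunks arrive coalesced or as-is:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// drip emits many small writes, each flushed immediately,
// so every chunk is well under one MTU when it leaves the process.
func drip(w http.ResponseWriter, r *http.Request) {
	f, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	for i := 0; i < 100; i++ {
		fmt.Fprintf(w, "chunk %03d\n", i)
		f.Flush() // push it out now instead of letting it buffer
		time.Sleep(10 * time.Millisecond)
	}
}

func main() {
	http.HandleFunc("/drip", drip)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```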
It's funny that layer 7 remains in the vernacular. Nobody talks about layer 6 proxies. Or occasionally somebody will mention a layer 3 proxy. But never layer 5.
I hate the OSI model too, but a 240-page book with 137 references? To complain that a model from the 80s isn't the right fit 40 years later? This isn't a paper, it's a rant.
Before seeing this here, I went down a rabbit hole on why anyone cares about the OSI model, especially as a descriptor for their golang project. It seems to be just a classification that one person found useful and that people treat as an interesting thing.
Separately, we need more deprogrammers in the world.
For folks in the networking space, differentiating between L4 and L7 proxies is pretty important. And while you could call it an HTTP proxy in many circumstances, some proxies support other protocols, e.g. a MySQL proxy.
In my last role I started trying to enforce this by refusing to use the terms "Layer 7" and "Layer 4" (I worked on application- and transport-layer infrastructure at a big tech company), but it never caught on, and after having to give "the talk" about what happened to the OSI layers a few times, I resigned myself to the fact that it was never happening. I will continue to use those terms, though.
I think it's only misleading in that the only L7 protocol it supports is HTTP. It's not a huge deal, but when I work with other proxies, if I see L7, I assume support for multiple application protocols.
That's because layers 5 and 6 don't really make sense in the TCP/IP stack. Maybe you could say TLS is one of those layers; it is definitely a layer between TCP and HTTP, and in the HAProxy documentation it is layer 6, but it doesn't map to the OSI concept for those layers either, and is often said to be layer 7 as well.
And then there is QUIC, which is a transport protocol, so kinda layer 4, but it sits above UDP and also has TLS built into it.
QUIC has TLS built into it, and also (HTTP) streams, and a few other such goodies (say, MASQUE tunneling).
It definitely "fills the hole" between L4..L7. Or smashes the layers, if you prefer.
Layers, P's… blimey, leave them all out of my PSTN connections and bring X.25 back!
To rectify this most grievous transgression, I now unveil a device of eternal ingenuity and enchanting craftsmanship, a veritable marvel, which shall restore order to the realm of networking with unparalleled precision and grace: «Whispering X.Gate», an X.25 API Gateway – https://pastebin.com/S11LRJNS
Cloudflare "splits" their reverse proxies functionally into different processes; TLS termination may happen in a different process from WAF, or cache access, or origin fetch. I'm sure other large CDNs do similar things.
As others have said, "processing layers" in contemporary network service architecture don't align that well with OSI layers anymore, though.
I don't have much to add other than to compliment the README. At least it shows some concern for documenting the higher-level architecture... I get discouraged from contributing to open source by the laziness that leaves you to basically reverse-engineer the code.
This used to be pretty standard but has largely gone away, unfortunately.
I blame frameworks that encourage the user to just use their "obvious" specific directory structure, which works for 80% of cases while people still make up the other 20%.
And no need for documentation, since it is "obvious" [...to anyone who has invested dozens of hours working with that specific framework]
LLMs are quite capable of generating such overviews. mutable.ai generated one for a project I co-develop, and it was pretty neat for a v1 (they are on a much-improved v2 now): https://mutable.ai/celzero/firestack
Having an API gateway between the internet and your service(s) is a great idea, and one I've implemented no fewer than three times. But you should really just roll your own. It's a few dozen lines of code with Go's standard library reverse proxy, and it gives you way more flexibility than trying to YAML-configure someone else's.
As someone who already did this (because no other solution with our needs was available), I strongly disagree.
Most of the time, NGINX, Caddy, Traefik or APISIX are enough. The only time I felt the need to implement an API Gateway from scratch was to support a very specific use case with a specific set of constraints. No matter how robust the Go standard library is, implementing an API Gateway from scratch is rarely a good idea.
In my experience those specific sets of constraints come sooner or later. Someone is going to ask for some complex auth or routing rule, and it's easier to just write it in Go than it is to learn a whole new DSL or Lua to implement it.
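To make that concrete, here is a rough sketch of what "just write it in Go" looks like (upstream hosts, the header name, and the auth check are all made up): path-based routing plus an arbitrary auth rule, no DSL involved:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func mustProxy(raw string) *httputil.ReverseProxy {
	u, err := url.Parse(raw)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(u)
}

func main() {
	// Hypothetical upstream services.
	users := mustProxy("http://users.internal:8080")
	orders := mustProxy("http://orders.internal:8080")

	mux := http.NewServeMux()
	mux.Handle("/users/", users)
	mux.Handle("/orders/", orders)

	// Any auth rule, however complex, is just a middleware function.
	auth := func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// Placeholder check; swap in JWT validation, an IP
			// allowlist, a call to your auth service, etc.
			if r.Header.Get("X-Api-Key") == "" {
				http.Error(w, "unauthorized", http.StatusUnauthorized)
				return
			}
			next.ServeHTTP(w, r)
		})
	}

	log.Fatal(http.ListenAndServe(":8000", auth(mux)))
}
```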
If you need something more than this, either you're in a very specific situation (where an API gateway written from scratch might be a good idea) or someone is doing something wrong.
We had a specific use case, needed urgently, that we couldn't get working with any of the standard systems, so we wrote it in Go. It works very well. Sometimes it is simply a matter of time: we needed it before the morning (for a launch), and on Discord/Reddit people were suggesting solutions with haproxy, nginx, traefik, etc. that should have worked according to the docs but just didn't.
Unless you have a wild use-case that hasn't been tackled by what's out there, why on earth would rolling your own be a good idea? Building a proper, secure, and performant API gateway is NOT a few dozen lines of code.
There are some super robust (and fast) Go API gateways that take care of all the things you didn't think about when trying to roll your own.
I can absolutely assure you that building a fast and secure gateway is not as hard as you seem to be implying. This is, again, based on my real world experience.
This is such a wild take to me. Why on earth are complicated routing rules happening at the API gateway at all?
In MY real world experience, the API gateway does some sort of very simple routing to various services and any complex auth or routing rules would be the service's responsibility.
If the API gateway has your application logic in it it's not a separate component at all.
How complex can you really get with HTTP requests anyway?
Option 1: use something that solves 1000 use cases, of which yours is one. Some would say that's simplicity; others would say that's complexity. When it breaks, do you know why? Can you fix it properly, or are you just layering band-aids on a bigger problem inside the component?
Option 2: build something that solves exactly your use case, probably doesn't handle the other 1000, and needs to be put through trial by fire to fix all the little edge cases you forgot about.
Early in my career I opted for #1, but nowadays I generally reach for #2 and really try to nail the core problem I'm tackling and work around the gotchas I encounter.
Go is nice and all, but every single reverse proxy written in Go is outperformed by nginx, in both latency and requests handled: Traefik, Caddy, etc.
I doubt this will be different.
It's good that this exists, but new projects that come into a well established space should make it clear how they differentiate themselves from existing solutions.
For example, it's not clear to me why anyone would choose to use this instead of Caddy, which is a robust server that already has all these features, whether OOB or via plugins.
This space may be well established, but it still does not fulfill all needs. For my own:
- NGINX does not support ACME, and I'm fed up with dealing with Lego and other external ACME clients. Also, the interactions between locations have never been clear to me.
- Caddy plugins mean I have to download xcaddy and rebuild the server. I really do not want to rebuild services on my servers just because I need a simple layer 4 reverse proxy (e.g. so that I can terminate TLS connections in front of my IRC server).
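For reference, the kind of layer-4 proxy I mean is roughly this (a sketch; certificate paths and ports are placeholders):

```go
package main

import (
	"crypto/tls"
	"io"
	"log"
	"net"
)

func main() {
	// Placeholder certificate paths.
	cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
	if err != nil {
		log.Fatal(err)
	}
	// Terminate TLS on the public IRC-over-TLS port...
	ln, err := tls.Listen("tcp", ":6697", &tls.Config{Certificates: []tls.Certificate{cert}})
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			// ...and shovel plaintext bytes to the local IRC daemon.
			upstream, err := net.Dial("tcp", "127.0.0.1:6667")
			if err != nil {
				return
			}
			defer upstream.Close()
			go io.Copy(upstream, c)
			io.Copy(c, upstream)
		}(conn)
	}
}
```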
HAProxy already exists and will do both. You can even redirect layer-4 HTTP/HTTPS ports to another reverse proxy server if you want to get inception levels of crazy.
FWIW, it doesn't handle your use case of Layer 4, but for the people at Layer 7, another option is good ol' Apache: it is so flexible and extensible that it is almost a problem, people tend not to know that it long ago went event-driven with mpm_event, and it supports ACME internally (as a module, but a first-party module that is seamlessly integrated). (I do appreciate, though, that it is critically NOT written in a memory-safe language, lol; but neither is nginx.)
But see how in your project the very first paragraph explains why it exists, and what it does differently. This is what I think is missing from Dito. It doesn't have to be super in depth.
I do disagree with your argument against Caddy, though. How often do you realistically rebuild your services? If it's anytime you would upgrade, then it seems manageable. xcaddy makes this trivial, anyway. Though you don't really need to use it. There's a convenient pro-tip[1] about doing this with a static main.go file instead.
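If I understand that tip correctly, the static main.go amounts to something like this (the plugin's module path is an assumption; check its docs), after which a plain `go build` produces a Caddy binary with the plugin compiled in, no xcaddy required:

```go
package main

import (
	caddycmd "github.com/caddyserver/caddy/v2/cmd"

	// Standard Caddy modules plus whichever plugins you need,
	// pulled in via side-effect imports.
	_ "github.com/caddyserver/caddy/v2/modules/standard"
	_ "github.com/mholt/caddy-l4" // e.g. the layer-4 proxy plugin (assumed path)
)

func main() {
	caddycmd.Main()
}
```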
> - Caddy plugins mean I have to download xcaddy and rebuild the server. I really do not want to rebuild services on my servers just because I need a simple layer 4 reverse proxy (e.g. so that I can terminate TLS connections in front of my IRC server).
I mean, you don't have to "rebuild services" -- if you need the plugin, just deploy the binary with the plugin. It's not like it changes (other than upgrades, which you'd do regardless of the plugin).
If the plugin is downloadable from caddy's site, it can also be updated in place along with the caddy binary. (There's an option to keep the same plugins)
> Caddy plugins mean I have to download xcaddy and rebuild the server. I really do not want to rebuild services on my servers just because I need a simple layer 4 reverse proxy
You would be surprised by how many infrastructures run software without any containers :) I'm running FreeBSD on my servers, so containers are out, but even if I were on Linux, why would I use containers for base services?
This is a supported feature of podman, which can generate systemd units so containers run as system services.
But as for advantages (systemd has some of them too): sandboxing, resource constraints, ease of distribution, and not being broken by system updates (a glibc update on RHEL broke some Go binaries, iirc).
My rule of thumb is that only system software (e.g. the DE, firewall, drivers, etc.) belongs on the base system; everything else goes in a container. This is very easy to do now on modern Linux distros.
You can use kamal-proxy, recently released. It handles SSL and zero-downtime deployments. It's small, and from what I've checked the code is easy to read and understand.
I wish there were an alternative to the Kong API gateway where I didn't need to write my plugins in Lua (the Go and JS SDKs seem abandoned and are incomplete).
I don't like the AI generated art for the hero image. If you have the ability to write something like this, you probably have enough money to pay an artist $20 to draw something for it.
If you have the ability to write these comments on Hacker News, you probably have enough money to pay an artist $20 to draw something and donate it to the project.
That's true, and I even considered doing so, but I, for one, already do commission a lot of art. I'm currently sitting in two artists' queues, actually.
I also don't want to commission something for another project. They should get creative control in the process, else it just feels weird.
Where are you finding skilled graphic artists willing to do a project like that for $20? Having hired freelance graphic artists for work projects, I've found even the smallest things cost hundreds of dollars.
Given the project is FOSS, they'd probably be fine with that price still.
I could go through a list of literally hundreds of artists for whom $20 is within budget. The quality of a simple logo like the one needed here is about on par with what many would do for a Telegram sticker, which often run about $5 apiece.
Hundreds of dollars is totally reasonable if commercial. This isn't.
I love the focus on flexibility & integration with Redis.
We use a mix of Traefik and Envoy for complex + dynamic LB configurations. Doing anything related to custom middleware, dynamic configuration, and caching feels archaic on Traefik and requires a non-trivial amount of code on Envoy. I hope Dito becomes the next gold standard for load balancing.
One caveat — one of my biggest complaints with Traefik is the memory usage, which makes it difficult to run as an mTLS proxy between services. We use Envoy for these use cases instead. I’m curious to see how Dito compares on memory usage, despite also being written in Go.
https://github.com/andrearaponi/dito/blob/a57d396476cc618678...