Webhooks suck, but here are alternatives (deno.com)
38 points by 0xedb 11 months ago | 31 comments



This article made zilcho sense to me. I kept waiting for the part where it was going to tell me what was wrong with webhooks, but when it actually did it felt like one of those "where did the soda go" things, with overly-complicated emphasis on things that aren't really problems. "The hidden cost of webhooks" section is basically just complaining that you need to run your own actual server, including "maintain the code". Even that is really overkill because it's easy to set up any number of lambdas/cloud functions/workers to respond to webhooks.

Furthermore, in the "Workflow builder" and "In-browser IDE" sections, all the author is highlighting is the case where the service provider hosts the handler code, as opposed to the customer. But a major point of webhooks is that they allow clients to run code in reaction to their SaaS APIs' events, but in their own infrastructure. Certainly, if the provider wants to take on the cost and risk of running their clients' code, that's a great service to provide, but I wouldn't say that highlights the negatives of webhooks.


Agreed. This was one of the most underwhelming conclusions to an article I've seen in a while. Given the domain name of the blog, I thought Deno was offering some sort of next-generation webhook alternative that would actually be good. Instead, the "solutions" to webhooks (which are great, btw) are... workflow builders and in-app IDEs? What the hell?


They kind of buried the lede: the alternative is their "subhosting". This is similar to Cloudflare's "workers for platforms", or some of Fermyon's former marketing.

The idea is that your SaaS would offer an interface for your customers' developers to build integrations directly in your own product. I guess like building Zapier into your product.


Thank you for this, I certainly misunderstood it. But this just means my comment was correct: they're basically highlighting that the downside of webhooks is that the customer needs to run their own infrastructure, and they're (essentially) arguing that it's better to build a plugin system on their platform.

And that may all make sense! But framing it as "Webhooks suck" is such a bizarre choice to me, primarily because they have an extremely limited view of what webhooks are used for in the first place.


Yes, I felt the same, I thought I was unaware of some fundamental flaw of webhooks.

I feel webhooks have lasted because they are the simplest thing that gets the job done. They rely only on open standards like HTTP, they're secured via TLS, and they have well-understood failure scenarios.
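To illustrate that simplicity, a receiving endpoint can be little more than a plain HTTP handler. Here is a rough sketch using Deno.serve; the path and payload shape are made up:

    // Minimal webhook receiver sketch: plain HTTP over TLS (terminated by the host),
    // no vendor SDK required. The endpoint path and event shape are hypothetical.
    Deno.serve({ port: 8000 }, async (req: Request): Promise<Response> => {
      if (req.method !== "POST" || new URL(req.url).pathname !== "/webhooks") {
        return new Response("not found", { status: 404 });
      }
      const event = await req.json(); // provider-defined payload
      console.log("received event", event.type ?? "unknown");
      // Acknowledge quickly; do the real work asynchronously so the sender's timeout isn't hit.
      return new Response("ok", { status: 200 });
    });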


Obviously Deno has a vested interest here (and so do I, as the founder of Svix[1], which is mentioned in the post), but my take is that webhooks are great, though there are alternatives that could be better or complementary depending on the situation.

They bring up a few good points about the shortcomings of webhooks. Many of these we are trying to improve for the whole ecosystem (e.g. with Standard Webhooks[2]) or solve in Svix directly, including running JS instead of sending webhooks, which we already support (using Deno, which is great!). It is very useful, but there are many limitations to this approach, and oftentimes people just want the data passed to their systems so they can deal with it there, not write a bit of JS in the browser to do something ad hoc. So running JS is nice, but it's not a silver bullet. This is also why we don't support Wasm: in most cases it's just not that useful.
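For reference, the symmetric (v1) verification that Standard Webhooks describes looks roughly like this. A sketch only: the helper name is made up, it assumes the secret's "whsec_" prefix has already been stripped, and a real consumer would also enforce a timestamp tolerance to prevent replay:

    import { createHmac, timingSafeEqual } from "node:crypto";
    import { Buffer } from "node:buffer";

    // Sketch of the Standard Webhooks v1 symmetric scheme: HMAC-SHA256 over
    // "<id>.<timestamp>.<body>", base64-encoded. Header names are per the spec.
    function verifyStandardWebhook(secretB64: string, headers: Headers, body: string): boolean {
      const id = headers.get("webhook-id");
      const timestamp = headers.get("webhook-timestamp");
      const signatureHeader = headers.get("webhook-signature");
      if (!id || !timestamp || !signatureHeader) return false;

      // A real implementation would also reject stale timestamps here.
      const expected = createHmac("sha256", Buffer.from(secretB64, "base64"))
        .update(`${id}.${timestamp}.${body}`)
        .digest("base64");

      // The header may carry several space-separated signatures like "v1,<base64>".
      return signatureHeader.split(" ").some((candidate) => {
        const [version, sig] = candidate.split(",");
        if (version !== "v1" || !sig) return false;
        const a = Buffer.from(sig);
        const b = Buffer.from(expected);
        return a.length === b.length && timingSafeEqual(a, b);
      });
    }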

So in short, like always with software engineering: "it depends" and there are tradeoffs to each approach.

1: https://www.svix.com

2: https://www.standardwebhooks.com/



I spoke with Jeff about it before, and I respectfully disagree. :)


of course :)


Webhooks are _not_ great. Unless you want to offload them to a platform like yours, they are entirely too complex and require way more engineering effort to get right and be reliable. It's not enough to just issue a POST to the user-provided URL (I know I'm preaching to the choir on that).
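Even a sketch of the sending side shows how much is hiding behind "just issue a POST": timeouts, retries, backoff, and eventually dead-lettering. The function and retry parameters below are hypothetical:

    // Hypothetical sender-side delivery sketch: timeouts plus retries with exponential
    // backoff. Real systems also need signing, persistence, dead-letter handling, etc.
    async function deliverWebhook(url: string, payload: unknown, maxAttempts = 5): Promise<boolean> {
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          const res = await fetch(url, {
            method: "POST",
            headers: { "content-type": "application/json" },
            body: JSON.stringify(payload),
            signal: AbortSignal.timeout(10_000), // don't hang on a slow consumer
          });
          if (res.ok) return true; // consumer acknowledged with 2xx
        } catch {
          // network error, timeout, ECONNRESET... fall through to retry
        }
        // Exponential backoff: 1s, 2s, 4s, ...
        await new Promise((r) => setTimeout(r, 1000 * 2 ** (attempt - 1)));
      }
      return false; // give up; a real system would park this in a dead-letter store
    }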

However, Webhooks are also generally not ideal for most users. Why spin up new infra to handle occasional messages over the public Internet? It's just a legacy approach that is on its way out.

Wasm will replace them over time, as we can now run the event-driven user code in-process in the source application and shed the need to needlessly split up these systems, over-engineering distributed systems to just modify some data before it goes into a database or run a little function here and there.


> However, Webhooks are also generally not ideal for most users. Why spin up new infra to handle occasional messages over the public Internet? It's just a legacy approach that is on its way out.

I'm trying to be generous here, but I don't see any evidence that webhooks "are just a legacy approach that is on its way out". Your next paragraph basically highlights what I mean:

> Wasm will replace them over time, as we can now run the event-driven user code in-process in the source application and shed the need to needlessly split up these systems, over-engineering distributed systems to just modify some data before it goes into a database or run a little function here and there.

This feels like a "not even wrong" statement to me. Yes, I think things like hosted plugins are a good thing, but that is only a very small slice of what webhooks are used for. A huge value of webhooks is that while I get the notifications from some external system, I can, and want to, run the handler in my own infrastructure. While this may be a burden for some teams, for many others it is most definitely a feature.

I'm not sure if you're the author of this blog post, but I think it would have been much better if it just talked about how "hosted plugins" can be a good alternative to some of the things webhooks have been used for. But weirdly turning this idea into "webhooks suck" makes no sense.


I'm not the author -- but I do think webhooks generally are misused and need something new, not just a step-change improvement.

For integration and extensibility, webhooks are definitely the wrong choice. For handling event notifications, fine, but I don't see the majority of webhooks used like this. And even if they are, there's inevitably a bunch of cross-talk over the Internet to handle the event, make a follow-on request to the SaaS API, handle its response, rinse and repeat. We can do better.


> but I don't see the majority of webhooks used like this.

Can you give some examples of webhooks that are used solely for this "integration and extensibility" reason? Essentially all of my uses of webhooks have been to handle event notifications, but perhaps I'm just misunderstanding how you're making this distinction. For example, I work in fintech, and the main thing I use webhooks for is to get notified of a whole host of things that happen "in the real world" outside of my app (e.g. when a bank card is swiped, when an ACH is processed, when an account application is approved or denied, etc.)

> And even if they are, there's inevitably a bunch of cross-talk over the Internet to handle the event, make a follow-on request to the SaaS API, handle its response, rinse and repeat

I don't even understand why that would be considered a problem. So what, I make an API call to get some additional data so I can do something with it in my infrastructure? I mean, I'm going to need to write something to get the code to do what I want. There is literally 0 downside, from a business perspective, to "a bunch of cross-talk over the Internet" - why should I even worry about this?

I just generally feel that perhaps this article and what you are referring to have something very specific in mind when you talk about webhooks and what they're used for. I'd just argue that there is a much broader set of use cases out there for webhooks, so I think a better approach is to lay down how you think this kind of "plugin architecture" would be an improvement and highlight some examples where you think webhooks would be a bad fit.


> I just generally feel that perhaps this article and what you are referring to have something very specific in mind when you talk about webhooks and what they're used for.

I'm not the author or GP but I think you're right and can give my two cents. The specific thing these authors have in mind is a SaaS that offers an HTTP API to manipulate data and a set of HTTP webhooks to notify users about changes in the data. *If* the only thing SaaS users do within their webhook handler is make an API request back to the same SaaS, then the argument is there's a bunch of unnecessary work to shuffle bits back and forth across the internet when the SaaS could just offer to run a function.
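In other words, the criticized pattern is roughly this, where every hop crosses the public internet just to end up back at the same SaaS. The endpoint, event fields, and env var name are invented, purely to show the round trip:

    // Hypothetical webhook handler that only turns around and calls the same SaaS back.
    async function handleSaasEvent(event: { type: string; recordId: string }): Promise<void> {
      if (event.type !== "record.created") return;
      // Follow-on request back over the internet to the SaaS that just notified us.
      await fetch(`https://api.example-saas.com/records/${event.recordId}/tags`, {
        method: "POST",
        headers: {
          "content-type": "application/json",
          authorization: `Bearer ${Deno.env.get("SAAS_API_KEY")}`,
        },
        body: JSON.stringify({ tag: "needs-review" }),
      });
    }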

> I'd just argue that there is a much broader set of use cases out there for webhooks...

Agreed, which is why there are many commenters here who are confused. The "webhooks suck" take is overcooked without context.


Let's say I use some SaaS for handling payments and subscriptions. The SaaS can notify me via a webhook when a user successfully creates a subscription on their hosted payment page, so I can, let's say, adjust their allowed features and resource limits on my end. If we throw away webhooks, how would this work with Wasm?


In fact, your platform (https://healthchecks.io/) is a prime example of where running customer wasm would be really excellent.

Instead of sending webhooks out to customer-configured URLs, you could run a Wasm environment to execute customer code. Offhand, a good use case here is to do further inspection of the event before it gets sent off to some other system: maybe there are cases where you send false positives and needlessly trigger external system alerts. The customer Wasm could do more introspection on the healthcheck event and make a more informed decision about how to proceed.
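For example, the customer-supplied logic could be as small as a predicate over the event. A sketch of what such a function might look like; the event fields are invented and this is not an actual healthchecks.io or Wasm API:

    // Hypothetical customer-supplied filter, the kind of logic that could be compiled
    // to Wasm and run by the provider before any alert goes out. Field names invented.
    interface CheckEvent {
      checkName: string;
      status: "up" | "down";
      consecutiveFailures: number;
      lastPingAgeSeconds: number;
    }

    export function shouldAlert(event: CheckEvent): boolean {
      // Ignore a single blip; only alert once a check has failed a few times in a row.
      if (event.status === "down" && event.consecutiveFailures < 3) return false;
      // Ignore "down" events for checks that pinged very recently (likely a race).
      if (event.status === "down" && event.lastPingAgeSeconds < 30) return false;
      return true;
    }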


The SaaS can run your wasm code for you. In this case, you'd write the same code that you'd otherwise run in your infra to respond to the webhook, but it would be compiled to Wasm and run _in the SaaS app_ where the source event was triggered.

In your specific case, you'd still be making a call out to your system to adjust the entitlements as you describe. However, in many cases the handler is actually taking action back on the SaaS side via API calls to the SaaS. So instead of a handful of back-and-forth calls over the Internet between you and the SaaS, you just call their functions from the Wasm code running in their infra.


So you have to give your 3rd party providers access to _code you write_, _secrets to your own datastores_, _open firewalls to 3rd parties_, and then allow them to run any code which could access your system, hoping that they do in fact run the blob you upload?


pretty much everything of any level of sophistication requires this. unsure what your point is.

wouldn't your customers for inngest need to do the same for anything integrated?


No, Inngest doesn't host your code so it doesn't need your secrets


yet


I've re-read this a few times to try to grasp the wisdom which the confident contrarian tone makes me suspect is in there.

I have to say I couldn't find it. Could you give an example scenario where the status quo would be improved by uploading a wasm handler to a server instead of passing messages?


Look at the Shopify Functions platform


To me what stands out a lot in this article is the following:

> However, with this more robust setup, you would need 4 new services (SQS, S3, a Publisher(s), and a Consumer(s)), just to handle a single webhook safely. This is a lot of new infrastructure, architecture, time, and effort.

I most certainly agree that it is a lot of work, but the issue feels in large part self-inflicted because of the use of serverless. If you're using a monolithic setup with Erlang, or running any kind of queue on the side, you are fine. You might need to set up some retry logic and alerts, but that is done once.
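A sketch of that "queue on the side" shape, to show how little the HTTP-facing part needs to do. The queue module and names here are hypothetical stand-ins for whatever durable queue you already run:

    // Hypothetical monolith-style handling: the HTTP handler only makes the event
    // durable, and a worker with retry/alerting handles it out of band.
    import { enqueue, consume } from "./queue.ts"; // stand-in for your existing queue

    Deno.serve(async (req: Request): Promise<Response> => {
      const raw = await req.text();
      await enqueue("incoming-webhooks", raw); // durable write before acknowledging
      return new Response("accepted", { status: 202 });
    });

    // Worker loop (same process or a sidecar); the queue redelivers on failure.
    async function processWebhooks(): Promise<void> {
      for await (const raw of consume("incoming-webhooks")) {
        const event = JSON.parse(raw);
        // ...business logic; throwing here lets the queue retry and alert...
        console.log("processed", event.type);
      }
    }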

Also, the solution of providing an in-browser IDE sounds terrible: developers wouldn't have access to source control, their own tools, or any kind of local testing. Using Zapier or any third-party tool is IMO not worth it if it's a serious project.


> If you're using a monolithic setup with Erlang, or running any kind of queue on the side, you are fine. You might need to set up some retry logic and alerts, but that is done once.

I could be misunderstanding but I think the argument is

if something/somebody is pushing to your webhook URL, and you are down at that moment, then unless they retry, you lose the message

therefore

that's why you ask the entity pushing the data to your webhook to put it into something highly available (like an HTTP frontend to AWS SQS, I'm guessing), and then you get around to processing it on your end when you can

Webhooks seem to be the publish/subscribe side of message queues without any of the "acknowledge successful processing of the message" parts defined in a standard. Five different companies pushing five different webhooks might all handle timeouts, 500 errors, and ECONNRESET differently, retry-wise.


You do not want to host someone else's code. Webhooks are great because they simply require an endpoint.

If you host someone else's code, you also need to host their 3rd-party API keys, etc. You'll also have to combat spammers using your IPs to do bad stuff.


Where are the alternatives? Seems like half of the article is missing.


I have the same question, especially when they mention the IDE. ^^"


Webhooks' only dependencies are common standards. They are platform agnostic and have a low barrier to entry. The reliability challenges are going to be similar to any other self-hosted or self-managed infrastructure.

I get that you might want something better to shuttle a lot of data around between two apps but webhooks are relatively painless for a lot of straightforward interactions.


> deploy and host the server somewhere where it’s always available

This is such an underappreciated aspect; a naive webhook consumer service will just completely miss events in any outage. It means that if the information contained is important, you need a fallback approach to gathering it, usually by periodically polling the source system. And it turns out that a lot of the time that polling mechanism is good enough on its own; engineering to get the instant push, but only most of the time, is of questionable business value.

Incidentally, I think consuming webhooks and converting them to messages on a queue is one of the true killer AWS Lambda and Lambda-like use cases. Very spiky traffic, it shouldn't matter if it's warm, and availability is paramount. And if you stay disciplined about it (just routing the message to the queue, nothing else) you sidestep many of the function-as-a-service developer lifecycle frustrations.
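A minimal sketch of that disciplined shape, assuming an API Gateway proxy integration and the AWS SDK v3. The queue URL env var and handler name are placeholders:

    // Hypothetical Lambda whose only job is to push the raw webhook body onto a queue.
    import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";
    import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

    const sqs = new SQSClient({});
    const QUEUE_URL = process.env.WEBHOOK_QUEUE_URL!;

    export async function handler(event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> {
      // No parsing, no business logic: just make the message durable and acknowledge.
      await sqs.send(new SendMessageCommand({
        QueueUrl: QUEUE_URL,
        MessageBody: event.body ?? "",
      }));
      return { statusCode: 202, body: "accepted" };
    }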


Most complaints in this article about webhooks sound like they could easily be resolved with a Lambda function and CloudFront (or any alternative to AWS) without much complication. Or am I seeing this wrong?





