Show HN: Open-source OAuth service for 40+ APIs (nango.dev)
206 points by rguldener on Feb 7, 2023 | hide | past | favorite | 56 comments
Nango (https://github.com/NangoHQ/nango) provides pre-built OAuth flows, secure token storage and automatic refreshes for 40+ APIs and counting.

Why we built Nango: We built Nango to solve the pain of accessing OAuth APIs. Despite OAuth being a standard protocol in theory, it remains a major burden to implement it, even with the help of a library. You still need to add endpoints and logic to your app for the server-side dance, implement token refreshes, build & secure your token storage, deal with redirects on the frontend etc.
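To make concrete what that dance involves, here is a hedged TypeScript sketch of the two server-side steps an app normally has to implement itself (the URLs, parameter names, and the `OAuthConfig` shape are generic placeholders, not Nango's API; real providers differ, which is the point of the post):

```typescript
// Sketch of the "server-side dance". Everything below is a generic
// illustration; field names and URLs vary per provider.

interface OAuthConfig {
  authorizeUrl: string; // e.g. "https://provider.example/oauth/authorize"
  clientId: string;
  redirectUri: string;  // must match the callback endpoint you register
  scopes: string[];
}

// Step 1: build the URL you redirect the user's browser to.
function buildAuthorizeUrl(cfg: OAuthConfig, state: string): string {
  const params = new URLSearchParams({
    response_type: "code",
    client_id: cfg.clientId,
    redirect_uri: cfg.redirectUri,
    scope: cfg.scopes.join(" "),
    state, // random value you verify on the callback to prevent CSRF
  });
  return `${cfg.authorizeUrl}?${params.toString()}`;
}

// Step 2 (on your callback endpoint): exchange the returned code for tokens.
// Shown as a plain fetch; in practice many providers need non-standard
// headers, body encodings, or extra parameters here.
async function exchangeCode(
  tokenUrl: string,
  cfg: OAuthConfig,
  clientSecret: string,
  code: string,
): Promise<unknown> {
  const res = await fetch(tokenUrl, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      client_id: cfg.clientId,
      client_secret: clientSecret,
      redirect_uri: cfg.redirectUri,
    }),
  });
  return res.json(); // typically { access_token, refresh_token, expires_in, ... }
}
```

On top of this you still need state storage, token storage, and refresh logic, which is the part Nango packages up.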

But the worst part is that (almost) every API has quirks and non-standard behaviour. This is why we think open source and knowledge sharing are key here: with the templates in Nango we capture these edge cases and make sure that OAuth just works.

How it works: Nango is a small TypeScript/Node.js service that handles the OAuth dance, token storage & refreshes for you. It works with any language, API or framework. It is easy to self-host for free, or available as a cloud service if you want to avoid the burden of securing tokens yourself (that’s how we pay the bills). To get started we recommend you take a look at our Quickstart on the GitHub repo: https://github.com/NangoHQ/nango

We currently support 40+ popular APIs. Adding a new one is as simple as updating a YAML file, so anybody can contribute one. In the coming weeks we plan to add a dashboard, a proxy to authorise requests, monitoring and more APIs.
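For illustration, a provider template could look roughly like this (a hedged sketch; the field names here are hypothetical, and the actual schema lives in the providers YAML file in the repo):

```yaml
# Hypothetical provider entry; check the Nango repo for the real format.
my-provider:
  authorization_url: https://my-provider.example/oauth/authorize
  token_url: https://my-provider.example/oauth/token
  # Some APIs need extra, non-standard parameters; templates capture those quirks:
  authorization_params:
    response_type: code
```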

One thing we learned from talking to other engineers about OAuth is that everybody has their own horror stories: What was the hardest OAuth API you ever used? What made it so difficult? We look forward to your stories and your feedback on Nango!

Repo: https://github.com/NangoHQ/nango // Website: https://www.nango.dev




As soon as I saw this I planned on asking in the comments how it compares to Pizzly; great to see that this already awesome project is getting relaunched as a dedicated OSS startup!

Also, last time I checked out Pizzly I do not remember seeing any of the "sync" type functionality, and the fact that you've added that recently with Temporal is awesome!

You guys are awesome for putting open-source as a high priority!

Is Nango going to have the proxy server functionality? (Also, it would be really cool if you had a way to deploy the proxy to "the edge" via Cloudflare/Deno Deploy/Fly.io!)

Best of luck to the nango team!


Thanks for the kind words!

Yes, we are thinking about bringing the proxy back as well, probably with some added features such as rate-limit handling and automatic retries (powered by Temporal).

The proxy edge deployment idea is interesting. Would you be calling it directly from your frontend/mobile code? We were thinking most people would want to have the proxy as close to their backend as possible, but maybe we are missing something?


Glad to hear the proxy is in the plans!

Yeah, you're probably right: for most applications the proxy would best fit living close to the backend.

One reason to run on the edge would be if the edge "worker" could retrieve the required token(s) from an edge DB and attach them as a header to the request going through to the backend app server, so that it can then immediately make requests directly to the 3rd-party API. (Though this likely only simplifies the design for the devs a bit, and performance is nearly identical.)

Another, likely less common, use case is if you could keep certain data from 3rd-party APIs always fresh/cached at the edge. For instance, the Cloudflare Workers KV store has an API to update records (perhaps from a normal Nango server instance that's maintaining/syncing the records) so that this data can be "injected" as JSON into the body of an HTML response. This is definitely a niche use case though, lol.

Congrats on YC and the launch!


Nango is Pizzly.


Pitch looks cool, and I see you have some getting started docs.

Do you have some high-level overview of how it all fits together technically?

IIUC the tokens are stored in a backend service (available on GitHub)? Are they encrypted? How does the frontend SDK communicate with the backend, is there some OAuth flow first to the backend service, to get a user-specific key, which lets you store subsequent tokens?


That's a good point, we should have some kind of architecture page with a diagram.

At a glance: Nango's frontend SDK only handles redirects for the OAuth flow; the Nango server actually gets called by the OAuth provider (using a callback URL). That's when the token exchange happens. Tokens are stored in a Postgres database (by default we create the Postgres instance, but you can easily connect your own).

Before triggering the OAuth flow for an end-user, you indeed assign them a unique user-specific key, so that you can retrieve this user's token later on!


This is really cool. One thing to note is that they store the tokens server side (in the db).

This is the BFF pattern, where the browser/native client that needs, say, GitHub or Google Calendar data, has to go through Nango to make those requests. (More here: https://docs.nango.dev/reference/guide#node-sdk )

That works well for a large class of problems, but I've seen some architectures where the token is stored client side, so that you don't have to worry about the proxy (Nango in this case) being a chokepoint. YMMV, but I thought it was worth calling out.


This is cool. I like the frontend aspect. Looks really handy for a lot of apps where integration with other services is a core feature (I've built several just over the past few years, so I definitely get it).

I do wish it supported encrypted storage. For example, I wrote/maintain a Vault plugin to do basically the same work as the backend side of this project[0]. I wonder if you would be interested in supporting Vault as a backend in addition to PostgreSQL down the line? Feel free to reach out if so.

To answer your question:

Like some others here, I haven't found the actual integration points to be terribly difficult with most OAuth 2 servers. Once you have a token, you can call their APIs. No problem. I wrote the Vault plugin I referenced above to basically just do automatic refreshes without ever exposing client secrets/refresh tokens to our services, and it works fine.

Rather our customers would get into situations where they inadvertently revoked access, the user that authorized the integration initially left the company and it was automatically disabled, etc. and there was no notification that it happened. Basically all of the lifecycle management side that couldn't be automated down to "refresh my token when it's about to expire" sucked. So anything you're looking to support there would be a huge value-add IMO.

Another one is that each provider has their own scope definitions/mapping to their APIs. Some scopes subsume others (e.g. GitHub has all repos, public repos, org admin, org read-only, etc.). Some get deprecated and need to be replaced with others on the next auth attempt. We could never keep them up to date because they were usually just part of docs, not enumerated through some API somewhere. If you had a way to provide the user with a way to see and select those scopes in advance, that would be huge. Think if my app or a user could answer the question "I want to call this API endpoint, what scopes do I need?" by just asking your service to figure it out.

[0]: https://github.com/puppetlabs/vault-plugin-secrets-oauthapp
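A sketch of what that "which scopes do I need?" lookup could look like (hedged TypeScript; the endpoint-to-scope table below is invented for illustration, and real mappings would have to be curated per provider, which is exactly the hard part described above):

```typescript
// Hypothetical endpoint → scope table. Keeping this accurate per provider
// is the curation work the comment above is asking someone to own.
const requiredScopes: Record<string, string[]> = {
  "GET /user/repos": ["repo"],
  "GET /orgs/{org}/members": ["read:org"],
  "PUT /orgs/{org}/memberships/{username}": ["admin:org"],
};

// "I want to call this endpoint, what scopes do I need?"
function scopesFor(endpoint: string): string[] {
  const scopes = requiredScopes[endpoint];
  if (!scopes) throw new Error(`Unknown endpoint: ${endpoint}`);
  return scopes;
}

// Union of scopes to request up front for a set of planned calls.
function scopesForAll(endpoints: string[]): string[] {
  return [...new Set(endpoints.flatMap(scopesFor))];
}
```

A service answering this would also need to track scope deprecations and subsumption (e.g. an admin scope implying read scopes), which a flat table like this glosses over.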


Thanks a lot for all the insights, this is great!

"Rather our customers would get into situations where they inadvertently revoked access, the user that authorized the integration initially left the company and it was automatically disabled, etc. and there was no notification that it happened. Basically all of the lifecycle management side that couldn't be automated down to "refresh my token when it's about to expire" sucked. So anything you're looking to support there would be a huge value-add IMO."

I can definitely see value in notifying that API access has been revoked. If you think of any other case you'd like covered, I am interested!


Looks very slick! Wonderful landing page. Congrats.

> But the worst part is that (almost) every API has quirks and non-standard behaviour.

This is my long-standing pet peeve with identity providers (IdPs). Everyone goes ahead and does their own thing, especially when it comes to fetching user profiles (by having APIs in addition to a standard OAuth userinfo endpoint) and performing "federated logout" (when you want to terminate the session on the IdP's domain).

Does Nango relieve us of the burden in these areas? Thanks.


Thank you for your support!

The part we focus on (for now) is getting access to all endpoints of external APIs, on behalf of users, so you can enrich your product with integrations.

This involves (1) getting users to log in to external systems from inside your app, and (2) storing/refreshing access tokens so you can access all endpoints of external APIs.

This means we're less focused on the single sign-on use case for now!


I can't seem to find the pricing for Nango Cloud without going through a Typeform form.

I don't want to spin up a droplet or Linode only to have the tokens stored unencrypted, in plain text, in (IIUC) the client. I understand the need to keep things tight as a small OSS startup, but my project is a one-dev show on a small scale.

I want to check this out but having to self-host with very real security risks is a no go for me.


Cloud pricing is here: https://www.nango.dev/pricing It is free for 10 tokens (and then continues at $0.50/token/month); sounds like that might fit your use case well?


Fairly neat! Quick docs-bug report: "And _easy to add yours_." under the "40+ pre-built providers" heading on the homepage goes to a 404 in the documentation (https://docs.nango.dev/pizzly/contribute-api).


Oops, good catch, thanks! Fixed; the correct link is: https://docs.nango.dev/contribute-api


Looks interesting. But why do you recommend such large $20/$40 droplets for running on DigitalOcean? Surely one would scale pretty far by simply using a dedicated Postgres DB and a small app container?

Or does the backend do some heavy lifting I'm missing?


You are absolutely right, I'll change this.

We made this doc when we were focusing on syncing data, which was much more intensive.

Though in the future, we plan to offer a proxy that would funnel your external requests and would require more processing power. The advantages of a proxy are that we can automatically authenticate requests, handle retries & rate limits, monitor & alert, etc.


40+ most popular doesn’t include “Sign in with Apple” or did I miss something?


Thanks for checking it out! Our focus is currently not SSO (though you can use it for that too); we aim more for users who need OAuth tokens to access APIs and build integrations. E.g. fetch commits from GitHub repos (private or public), build a Slack integration, fetch contacts from HubSpot, etc.

With these kinds of integrations we haven't gotten requests for "Sign in with Apple" yet (though we are open to that).


Ok, so the use case is that if I want to get commits from GitHub I'm better off using the GitHub API directly, but if I want to get commits from GitHub, GitLab, Bitbucket, etc., this will help me with the authentication part?


Is it really that hard? I have a few integrations with OAuth APIs, and for the most part I just check the dates on the tokens before sending them through.


One thing I always wished Pizzly would have added was regular API-key-based authentication alongside OAuth.

Make it suitable for all integration authentication possibilities.


I'm not quite sure how this would work. Are you saying that Nango (née Pizzly) would be configured with an API key to, say, Stripe, and then you'd route your browser/native app Stripe requests through Nango and it would proxy them?

Or am I missing something?


This is a use case we have on the radar and would love to support.

Would be great to hear more about your use case for this; feel free to message me on our community or at robin (at) nango (dot) dev.


This is very similar to Keycloak, right? I use NextAuth when developing with Next.js, but I mostly use my own backend to manage the auth and tokens even though they offer a schema for backend integration. There have been many discussions and much confusion on the NextAuth repo about the flow for registering a user with backend services after getting the token from SSO services. I've never heard of or encountered issues related to saving or managing the tokens, but I do hope that this product solves some of the pain points for NextAuth.


Auth0 and Keycloak are just broken at documentation, and the API breaks with new updates as well.

Enterprisey people just don't care about minimalism in documentation and stable API design.


You could interface Keycloak with Nango, so Nango would stand between your app and Keycloak or any other identity provider like GitHub, ...


Great splash page. This can be a confusing subject. I am dealing with this right now myself and might give it a try to see how it integrates.


Very cool. Are you familiar with apideck? I especially like their Vault system for credentials storage. Did you investigate this?


Congrats Nango team! - This looks great and has come a long way :D


This certainly looks polished, but... What use case do you have for working on the token generation code so often that it warrants such a big layer of indirection?

I don't quite understand how all the fuss around different OAuth implementations is justified, especially not why you would need a separate library or SaaS for that. Implementing the client_credentials dance is like, get a token, cache token+refresh until expiration time, use until expired, exchange the refresh token, back to start. I mean yes, some providers want an extra parameter or have a strange token URL, but once you've grokked the general concept, it usually makes sense.
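The loop described above can be sketched like this (a hedged TypeScript illustration; the 60-second expiry skew, field names, and cache shape are my own choices for the sketch, not anyone's API):

```typescript
// Minimal token cache implementing: get a token, reuse until (nearly)
// expired, then exchange the refresh token and go back to the start.
interface CachedToken {
  accessToken: string;
  refreshToken: string;
  expiresAt: number; // epoch milliseconds
}

// Refresh a little early to guard against clock drift and in-flight latency.
const EXPIRY_SKEW_MS = 60_000;

function isFresh(token: CachedToken, now: number = Date.now()): boolean {
  return now < token.expiresAt - EXPIRY_SKEW_MS;
}

async function getAccessToken(
  cache: { token?: CachedToken },
  refresh: (refreshToken: string) => Promise<CachedToken>,
): Promise<string> {
  if (cache.token && isFresh(cache.token)) return cache.token.accessToken;
  if (!cache.token) throw new Error("no token yet: run the authorization flow first");
  cache.token = await refresh(cache.token.refreshToken); // back to start
  return cache.token.accessToken;
}
```

The per-provider quirks (strange token URLs, extra parameters, refresh tokens that rotate on every use) all live inside the `refresh` callback, which is where the "general concept" stops being general.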


I think it's pretty great for hackathons and other situations when you need to crank out an MVP without worrying about the authorization flow logic too much. Of course, once you have the bandwidth to build a proper backend you can graduate to your own OAuth logic.


Thanks for keeping Pizzly alive!


thanks for making it open source


They didn't make it open source. The headline is wrong.


It's not under an OSI-approved license. https://github.com/NangoHQ/nango/blob/master/LICENSE



Who cares about the trademark? The tech community has a meaning for the term. Everyone has understood what it meant for decades. Why would you try to deliberately take away the meaning?

Language is useful.


Yes they do. They applied for a trademark and were unfairly denied one. Why does Apple get a trademark on Apple?

Trademark or no, enough people were on-board with the OSI and OSD at the time it became a well known and established term that we have a claim to it.

touch .well-known/open-source


Your argument is that they own it despite courts denying it?


Yes - "Trademark or no"

Also the institution has proven to be stronger than one individual.

https://www.theregister.com/2020/01/03/osi_cofounder_resigns...?


Fine, but it only restricts AWS-type users; the rest have no trouble using it?


I cannot add it into my hosted product as a small company (I am not Amazon) either. Imagine I have a tiny niche product that needs this; the trouble is that it might as well not be open source, as I am not allowed to include it anyway. I understand not wanting AWS-type users, but why not just name them instead? Just include a clause: ‘the following companies cannot use this software unless they contribute to it by resolving at least x issues per week: Amazon, Google, Meta, etc.’

Edit: come to think of it, make it a ‘contribute GPL3 or MIT’ license (not sure if something like it exists); you can use it for anything, however, if you offer it as a hosted product to your clients (AWS etc.), you must contribute x.y% of your staff FTE to it. GitHub/GitLab can do the KYC per employee, for which the company has to pay.


No, no, no... My understanding is that if you are using it in-house for a product you are fine, but you cannot offer it as a hosted service to your customers. You cannot do the "AWS offers a managed DB" thing, but you should be able to use this in your "cat picture search engine" that uses this DB hosted in-house.

That is a big difference.


As far as I understand the Elastic license, I cannot create a SaaS product that includes this and then create a company selling that SaaS product to 1000s of paying clients (again: not the size of evil Amazon, but a not-that-successful lifestyle B2B SaaS business). It’s a crippling license that goes against many of the modern ideas of OSS, and only because Amazon abused it. So name and shame, but don’t punish the rest. I would never consider contributing to a project licensed like this, which shouldn’t be a goal for an open source project. In my opinion.

You are right about in-house, but the lawyers won’t care so much because ‘in house’ is not so clear. Can ‘in house’ include clients or partners using it? They pay and we host, so that seems to violate it. Pretty useless for most businesses. And it’s not dual-licensed, so I cannot contribute or pay to change this outcome. For me, it’s just the same as a closed-source SaaS product. That’s fine, I just find it strange that people would let Amazon etc. bully them into crap licenses when they could just use MIT-but-not-for-Amazon.


It's fine what they're doing. Just two things:

1. It doesn't solve my problem of finding an Open Source tool. I can just ignore it.

2. I won't read the code because they could claim I copied it if I wrote a similar tool. A browser extension to keep me from viewing Source-Available code would be nice.


Agreed. I think I know why they do what they do: it seems everyone using the Elastic license does so because of what AWS did with Elasticsearch. If that is the case, there are other options friendlier to open source (vs. source-available) while still not allowing AWS(-alikes) to do what they do. It’s not totally in line with freedom, but at least it makes it usable.


Does this license violate the four freedoms or not?


> MIT-but-not-for-Amazon.

But that's the problem: there cannot be MIT-but-not-for-Amazon. MIT is a do-whatever-you-want license.

The problem is that the four freedoms aren't protected downstream by MIT; that is why the GPL exists. The SSPL or this license extends that to exclude AWS.

You are right, they could just exclude AWS, but you cannot build a license around specific people or names; AWS could just change their name and avoid it.

If you were writing the license, how would you rewrite this offending line to exclude only AWS like businesses?

> It’s a crippling license that goes against many of the modern ideas of oss

Yes. This goes against MIT, because MIT assumes AWS or the mom-and-pop store can use your work without contributing back, and can use your open source code to build closed-source software.

The entire premise of the four freedoms is that the freedom to see/edit the code made by the author is passed on to the ultimate end user. MIT goes against that, so there is that.

Again, how would you rewrite that line to exclude SaaS providers who just leech off open source projects for their own gain?


> aws could just change their name and avoid that.

But you could say Amazon or any of their subsidiaries; they are not going to rename Amazon just to add one product to AWS.

The wording is indeed difficult. For my taste, to not be considered abuse it would need to be integrated into a bespoke product where it provides less than 5% of said product's functionality; however, that would still allow AWS. But I think it could be rewritten, or added on to, by saying that it cannot be offered as a distinct service. Meaning you cannot say ‘we offer a search engine, here is Elastic’, but it can be part of a no-code product that lets you add search and search APIs from Elastic by clicking them together, provided that no-code product also offers many other features not found in Elastic. So, some way to convey that it needs to be integrated and not just a copy you can spin up.

However, that would be difficult, convoluted, and hard to enforce. That’s why I was thinking about mandatory FTEs if you use it to make money; that type of thing would simply make the AWS lawyers not sign off on using it.


> As far as I understand the Elastic license, I cannot create a SaaS product that includes this and then create a company selling that SaaS product to 1000s of paying clients.

Yes, you can. You're not providing Nango as a managed service; you're providing your SaaS as a managed service, which uses Nango for some things. But as long as your customers can't access a sizeable portion of Nango *for purposes outside your SaaS*, you're fine.

https://www.elastic.co/licensing/elastic-license/faq

IANAL.


Worth mentioning: https://docs.nango.dev/nango-deploy/oss-limitations

To keep the setup of the Nango open source version simple we have made some choices that may not be ideal for production. Please take them into consideration before using it in production:

The database is bundled in the docker container with transient storage. This means that updating the Docker image causes configs/credentials loss. We recommend that you connect Nango to a production DB that lives outside the docker setup to mitigate this.

Credentials are not encrypted at rest and stored in plain text

No authentication by default

No SSL setup by default

The setup is not optimized for scaling

Updating the provider templates requires an update of the docker containers


> Credentials are not encrypted at rest and stored in plain text

LOL. Advertising something as a security product and doing this is absolutely ridiculous.


Well, you _could_ encrypt the tokens, but you can't hash them like a password, so they have to be stored in some format in their entirety.

However, it seems reasonable to infer they are referring to storage in the DB itself. How would you suggest they encrypt it securely at rest for an out-of-the-box "setup and go" scenario?

I will note that a lot of the existing "plugin" OAuth type providers for popular web frameworks don't encrypt user tokens in the DB by default either.


You can likely encrypt the tokens symmetrically with a key on the server (you can also have that key stored in a secrets manager and retrieved only for the purposes of encryption/decryption); obviously you have to make sure the key doesn't leak, but it's significantly better than storing this type of data unencrypted in the DB.
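As a sketch of that suggestion, using Node's built-in crypto module (assuming a 32-byte key supplied from outside; where that key lives, and how it rotates, is the part that actually matters):

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Symmetric encryption of a token with AES-256-GCM. In practice the 32-byte
// key comes from an env var or a secrets manager, never from the DB itself.
function encryptToken(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // standard GCM nonce size
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Store iv + auth tag alongside the ciphertext; neither is secret.
  return [iv, tag, ciphertext].map((b) => b.toString("base64")).join(".");
}

function decryptToken(encoded: string, key: Buffer): string {
  const [iv, tag, ciphertext] = encoded.split(".").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM authenticates: tampering makes final() throw
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

GCM buys integrity as well as confidentiality, so a tampered DB row fails to decrypt instead of silently yielding garbage.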

Interesting point about existing providers not encrypting user tokens in the DB by default, though; I wouldn't have expected that.


What I'm doing is having a _separate_ encryption key for _each_ OAuth provider on the server, encrypting with that, and saving the result in an HTTPOnly browser cookie. This way, if the key gets stolen from the server, the attacker doesn't have any user OAuth tokens. If the cookies are stolen from the user's browser, the attacker can't do anything with them. If one user's cookie is stolen AND the encryption key is stolen, only one user is affected. As the cookie is HTTPOnly, there is no XSS surface; the client needs to communicate with the server to be able to make API requests to the OAuth provider.



