Visual Studio Code 1.7 overloaded npmjs.org, release reverted (visualstudio.com)
302 points by eiopa on Nov 3, 2016 | 85 comments



I'd just like to say on behalf of npm that Microsoft's handling of this incident was A+. As soon as we alerted them to the issue they were all hands on deck and did a rollback.

We've been really pleased that Microsoft chose to put their @types packages into the npm registry rather than a separate, closed system, and in general happy with Microsoft's support of node and npm. We're confident we can make the new features of VSCode work; we just need to work with Microsoft to tweak the implementation a little.

This was an honest mistake on their part, and we caught it in time that there was very little impact visible to any npm users.

Fun fact: at its peak, VSCode users around the world were sending roughly as many requests to the registry as the entire nation of India.


> This was an honest mistake on their part

From my outside perspective, it doesn't seem like a mistake on their part at all. Later in the thread you say this accounted for 10% of traffic, mostly 404s. This is (I assume) a hell of a lot of requests, but given npm's position as developer infrastructure, I don't think they could have reasonably expected to melt it. It would have been good of them to give a heads-up, but I don't think I'd start assigning blame to the Code team.


Yeah. It leaves an unpleasant taste in my mouth to hear npm blaming Microsoft for this. As noted elsewhere, 404s are supposed to be very cheap to handle; otherwise DoS attacks become embarrassingly easy.

I feel like the npm team have once again failed to own their problems and instead tried to push the blame elsewhere. This is just an outside perspective, but I really feel like it would have been more honest and accurate to at least admit to the possibility that npm isn't perfect, and "blame" (which I'm not sure is even a helpful concept in this instance) is shared between parties more equitably.


I'm sorry my response looked like I was blaming them, that wasn't my intention. Like I said, it was an honest mistake: these things happen, and they handled it well.

Once we determined 404s were the problem we put mitigation in place that worked fine, but the problem of request volume remained: the 10% figure I gave was at a 5% rollout of VSCode. A full rollout would therefore have meant the registry's traffic roughly tripling overnight, with two thirds of it being 404s to VSCode users. At that point the issue is financial, not technical, which is another reason the rollback happened.
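Spelling out that arithmetic (treating nearly all VSCode requests as 404s, per the above):

  // Back-of-the-envelope from the figures above, normalized so that all
  // non-VSCode traffic equals 1.
  const baseline = 1;
  const vscodeAt5 = baseline / 9;    // VSCode at 5% rollout was 10% of total
  const vscodeFull = 20 * vscodeAt5; // ~2.2x baseline at a 100% rollout
  const newTotal = baseline + vscodeFull;  // ~3.2x: "3x bigger overnight"
  const share404 = vscodeFull / newTotal;  // ~0.69: "two thirds ... 404s"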


Hmm. What is the "mitigation"?


More efficiently handling 404s, which, as many have pointed out, we were handling quite naïvely.


Right, but I'm curious what exactly the issue was (on a technical level), and how you've mitigated it. This might be useful knowledge for other people building similar things, to avoid making the same mistakes :)


Check out my detailed answer a few comments down: https://news.ycombinator.com/item?id=12861180


Hi, I just wanted to say kudos for this little reply thread.

Many times I've seen someone on HN write a negative/flaming reply to a comment, which then nets a bunch of further agreement and consensus, and the original commentator is nowhere to be seen.

You quickly responded and fully acknowledged the faux pas (nuking any negative consensus), then you replied twice more, and one of those replies was to a request for technical info.

/o/



On the contrary, this seems like too kind a stance for npm to take. The approach Microsoft took here seems enormously and unnecessarily inefficient.

Microsoft maintain the @types scope. Instead of providing their own metadata endpoint listing available typings to filter requests on, they lazily opted to just mass-bombard the registry, hosted on a free service they don't fund, with requests for any and all possible package names, even though they themselves maintain the list of typings packages and should know in advance which don't exist.


Sounds like you're describing a mistake there...


Can you elaborate on what the issue is and how you want it to be fixed? Is it just something like rate-limiting requests or something more fundamental?

Edit: Answered at https://news.ycombinator.com/item?id=12861118


A VSCode person can (and probably will) answer in more detail, but at heart it's simple: if you want to add type-checking goodness to a library that isn't itself written in TypeScript, you can create a thing called a declaration file: https://github.com/DefinitelyTyped/DefinitelyTyped

Microsoft publishes a list of known good declaration files for popular npm packages to npm, under the scope @types: https://www.npmjs.com/~types

The 1.7 release of VSCode helpfully tries to automatically load type declarations for any npm package you use by requesting the equivalent declaration package under @types. When the package exists this is fine, because it's cached in our CDN.

What they forgot to consider is that most CDNs don't cache 404 responses, and since there are 350,000 packages and less than 5000 type declarations, the overwhelming majority of requests from VSCode to the registry were 404s. This hammered the hell out of our servers until we put caching in place for 404s under the @types scope.

We didn't start caching 404s for every package, and don't plan to, because that creates annoying race conditions for fresh publishes, which is why most CDNs don't cache 404s in the first place.

There are any number of ways to fix this, and we'll work with Microsoft to find the best one, but fundamentally you just need a more network-efficient way of finding out which type declarations exist. At the moment there are few enough that they could fetch a list of all of them and cache it (the public registry lacks a documented API for doing that right now, but we can certainly provide one).
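For illustration only, here's roughly what the client side of that could look like; the index endpoint and its response shape are invented, since (as noted) no documented API for this exists yet:

  // Hypothetical: fetch the full list of @types package names once, cache
  // it, and only hit the registry for names actually on the list.
  const TYPES_INDEX_URL = 'https://registry.npmjs.org/-/types-index'; // made up

  let knownTypes: Promise<Set<string>> | null = null;

  function loadKnownTypes(): Promise<Set<string>> {
    knownTypes ??= fetch(TYPES_INDEX_URL)
      .then(res => res.json() as Promise<string[]>) // e.g. ["node", "lodash", ...]
      .then(names => new Set(names));
    return knownTypes;
  }

  async function maybeFetchTypings(pkg: string): Promise<void> {
    const known = await loadKnownTypes();
    if (!known.has(pkg)) return; // no declaration exists: skip the 404 entirely
    // ...only now request the real @types metadata from the registry.
  }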


> At the moment there are few enough that they could fetch a list of all of them and cache it (the public registry lacks a documented API for doing that right now, but we can certainly provide one)

Might I suggest having a bloom filter containing all the existing type declarations (which would be quite small) and only querying the registry if the bloom filter reports the type declaration as a positive?

Since the filter can be really small it will probably scale a lot better than a complete list of all type-declarations, and a new filter could be downloaded by the clients every now and then.
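A minimal sketch of that idea (sizes and hash functions here are illustrative, not a tuned design): about 5,000 names at a 1% false-positive rate needs roughly 6 KB of bits and ~7 hash functions, so the filter could easily ship with the editor.

  // Tiny Bloom filter over package names; the client would download the
  // bit array prebuilt and refresh it periodically.
  class BloomFilter {
    constructor(private bits: Uint8Array, private hashes: number) {}

    private indices(key: string): number[] {
      // Double hashing built from two simple string hashes (illustrative only).
      let h1 = 2166136261, h2 = 5381;
      for (const ch of key) {
        h1 = Math.imul(h1 ^ ch.charCodeAt(0), 16777619) >>> 0;
        h2 = (Math.imul(h2, 33) + ch.charCodeAt(0)) >>> 0;
      }
      const m = this.bits.length * 8;
      return Array.from({ length: this.hashes }, (_, i) => (h1 + i * h2) % m);
    }

    add(key: string): void {
      for (const i of this.indices(key)) this.bits[i >> 3] |= 1 << (i & 7);
    }

    mightContain(key: string): boolean {
      return this.indices(key).every(i => (this.bits[i >> 3] & (1 << (i & 7))) !== 0);
    }
  }

  // Only a positive (possibly false, ~1% of the time) triggers a request:
  // if (filter.mightContain(pkg)) { /* fetch @types metadata */ }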


Is there an efficient diff algorithm for bloom filters?


Depends on the bloom filter, but for fixed-size filters with the same hash functions (and other implementations in the same general vein), it would just be an XOR of the two bit arrays: any bit that comes out set is present in one filter but not the other. It's slightly different for counting filters, where something along the lines of element-wise subtraction would get you what you need (anything non-zero is a difference).
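Concretely, assuming two filters built with identical size and hash functions, a sketch:

  // Bits set in exactly one of the two filters. A set bit only tells you
  // where the bit arrays disagree; recovering which keys changed still
  // means checking candidate keys against each filter.
  function diffFilters(a: Uint8Array, b: Uint8Array): Uint8Array {
    if (a.length !== b.length) throw new Error('filters must match in size');
    return a.map((byte, i) => byte ^ b[i]);
  }

For shipping updates to clients, though, XORing the old and new filters and sending the (mostly zero, highly compressible) diff may be all you need, since a client can reconstruct the new filter from old XOR diff without ever knowing which keys changed.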


> most CDNs don't cache 404s

Sounds like a good CDN-busting DDoS vector.


Yeah, 404 generation needs to be efficient for cases like this. Sounds like npm simply hadn’t encountered a situation where this mattered.


> 404 generation needs to be efficient for cases like this.

Indeed!

In general, it is an extremely efficient response. It took a huge number of users all hammering on the same set of 404 handling routes to get our attention, and we were able to handle the load, though it wasn't trivial to do so. The end user impact was minimal.

If it hadn't been a known-good actor, we had some options to shut down the flood a bit more forcefully, but we didn't want to inadvertently cause errors for vscode users. Like my colleagues have said in this thread already, we really dig what VSCode is doing, and as operational fires go, this one got put out very swiftly and did very little harm.

All that being said, knowing the npm devops team, this will no doubt be a source of insights for making the registry even more resilient in the future :)


If your CDN supports stale-while-revalidate and stale-if-error, you should consider enabling them -- it will take the load on your servers from O(users * packages) to O(POPs * packages)
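For reference, both are standard Cache-Control extensions from RFC 5861 that the origin opts into (assuming the CDN supports them); a sketch with made-up values, not npm's actual configuration:

  import express from 'express';

  const app = express();
  app.get('/:pkg', (req, res) => {
    // Let the CDN serve cached copies while revalidating in the background,
    // and keep serving stale copies for a day if the origin itself errors.
    res.setHeader(
      'Cache-Control',
      'public, max-age=300, stale-while-revalidate=600, stale-if-error=86400'
    );
    res.json({ name: req.params.pkg });
  });
  app.listen(8080);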


> At the moment there are few enough that they could fetch a list of all of them and cache it

at which point they would be back to the annoying race conditions for fresh publishes, no?

Can you speak to why it is so expensive on the NPM side to serve a 404? Would a bloom filter like another commenter mentioned be helpful?


"There are any number of ways to fix this, and we'll work with Microsoft to find the best one..."

It's refreshing to read actual engineers' writing. After this, going back to the tear-jerking, snark-filled gnashing of teeth on Twitter and Medium will be hard.


It's better to be part of the solution than to create the problem.


Most CDNs are able to invalidate a cache entry, so caching a 404 and busting it when it's not going to be a 404 anymore seems like it'd work?


Yeah, you're right. It was a simple oversight that we hadn't been caching 404s already, since we already have infrastructure in place to bust cache on publishes. It would have been our next step if necessary (it wasn't necessary to mitigate this flood).

You optimize for the use patterns you anticipate or see in normal usage, because, well, see the famous saying about premature optimization. The use pattern we see most often is people installing from pre-determined lists in package.json, so 404s aren't all that common ordinarily.
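To make that concrete, a simplified sketch (modeled on the HTTP PURGE method some CDNs expose for per-URL invalidation; the real pipeline has more moving parts):

  // On publish of @types/<pkg>, evict any cached 404 for its metadata URL
  // so the fresh package becomes visible immediately.
  async function purgeTypesEntry(pkg: string): Promise<void> {
    const url = `https://registry.npmjs.org/@types%2f${encodeURIComponent(pkg)}`;
    const res = await fetch(url, { method: 'PURGE' }); // assumes the CDN accepts PURGE
    if (!res.ok) throw new Error(`purge failed: ${res.status}`);
  }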


Can they publish a package that contains a list of all their other packages under @types?


This is the tentative fix, though we'll be looking all-up at ways to reduce the load generated by VS Code from this feature.


If there are only 5000 types, why don't you just keep a catalog file? Then it's a single hit.


The TypeScript team will do feature work to minimize npm requests via a well-known cached list of packages.


"Many requests to the registry as the entire nation of India" per what time unit?


Approximately 3 new JS frameworks per hour.


Funny. Sounds like the expansion rate of the Javascript ecosystem.


It's not a real HN thread until someone makes this joke...


I was a bit vague :-) India's about 10% of total requests on any given day. VSCode was 10% of requests for a couple of hours.


Doesn't matter, it's a comparison of request rates.

Requests over time where the user is in India ~= requests over time where the user is a VS Code user


How does the time unit make a difference? If they're making the same number of requests/second as India then they're also making the same number of requests/hour or requests/day, no?


There wasn't any time unit in the original statement, so it wasn't clear that the requests per X from India were the same as the requests per X from the IDE.


Awesome of you to share your thoughts. Don't mind the children here. You could give them gold and they would moan about the purity.


So one day they switched their entire user base to rely on a 3rd party free service without any load testing or heads up? What could possibly go wrong?


We have been testing this on Insiders builds of VSCode for a few weeks, as well as preview builds of Visual Studio, with no issues. We were just notified today by npm that we are flooding their servers.


Your installed base is quite large indeed! Your testing load was a drop in the bucket of our daily usage, but once you released to VS users we noticed. Should be straightforward to design something that works for this access pattern and load, now that we know what you need. Typeahead package name completion would be a neat feature.


Before you start sending a couple thousand QPS to any server, it's generally not a bad idea to test whether that server can handle that much; sometimes it is even worthwhile notifying the other team about the intended change.

Overall, when you've "tested" something but it still breaks in production and requires a rollback, it's usually a sign that your testing strategy could use some improvement. What is the point of testing if it doesn't prevent failures from happening?


Fwiw, if I were building a feature on something that's considered as core a technology as npm is, I likely would not have thought of this either. (Though maybe I would have if I were doing an in-depth look into it.)


As one of the folks on the front lines helping patch this, I certainly have no hard feelings, and I'm excited to be able to support this feature properly.

... also ... not going to lie, this was the first time we've gotten to test several of the checks and balances we have in the npm registry, which I was jazzed about :)


Thanks, Benjamin, Laurie and everyone else for mitigating this; it feels great to see the community pull together in such unanticipated scenarios.

On that note, however, I respectfully believe that features with the potential to hit the registry this hard should first be beta-tested against a private registry, and only then moved onto npm's high-traffic CDNs.

And 10% of the daily traffic is from India??? Whoa, every day is a school day.


> And 10% of the daily traffic is from India??? Whoa, every day is a school day.

Well, 17% of the world's population lives in India, so it doesn't seem surprising.


lol. agreed.


If I were on the Azure team I'd be offering tons of free credit to npmjs.org to get them to use Azure. Azure coming to the rescue would be the perfect ending to this story for Microsoft.


I don't think most organizations could relaunch their infrastructure on a totally different stack at the drop of a hat. And if it was really a "throw more servers at it" problem, then it wouldn't really matter who was hosting them, would it?


Depends on who's paying


CDN caching globally, please!



Shouldn't all these requests be cached by a CDN? What exactly is overloading?


CDNs don't usually cache 404s. VSCode was looking for @types packages for any and every npm package its users were using. Packages that had a type declaration caused no issue, but most packages don't, so we had a > 1000% spike in 404s. Our workaround before MS did the rollback was to cache 404s for @types packages specifically, and it was effective enough that the registry never really went down.
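As an illustration of the shape of that workaround, an origin-side, Express-style sketch (the real fix lived in the CDN layer, and whether a CDN honors Cache-Control on 404s is configuration-dependent):

  import express from 'express';

  const app = express();

  // Hypothetical in-memory index of known @types packages.
  const typesIndex = new Map<string, object>([['node', { name: '@types/node' }]]);

  // 404s under @types become cacheable, so the CDN absorbs the flood;
  // 404s elsewhere stay uncached to avoid races with fresh publishes.
  app.get('/@types/:pkg', (req, res) => {
    const meta = typesIndex.get(req.params.pkg);
    if (!meta) {
      res.setHeader('Cache-Control', 'public, max-age=300');
      res.status(404).json({ error: 'not found' });
      return;
    }
    res.json(meta);
  });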


Interesting. Thanks for sharing this information.


It's a pity; DNS handles negative lookup caching with TTLs (in fact, that's exactly what the minimum TTL field in a zone's SOA record is for!). So negative lookups can be cached, but you need to think about it ahead of time and set sensible TTLs (preferably configurable) for those negative caches.


"a > 1000% spike in 404s" overloaded your servers? Such are your generation times? Can I bring the entire NPM ecosystem down from my ADSL line using some silly threaded code to make requests to randomly named packages?


99.9% of our requests are handled by the CDN. The CDN doesn't cache 404s, so 404s are handled by our origin servers, which are much fewer in number and therefore quite easy to overwhelm.

You're right that our handling of 404s was naive, and that's definitely something we'll be improving as a result of what we've learned from this incident.


It is a real foreign feeling being exposed to such an active and well-run project. Every time I see a new release on HN I get a little "wow, that time of the month again." Even this rollback was indicative of how fast they move.


Which is more likely? A bug, or that they just underestimated the volume of traffic that could be caused by ATA in real life?


The latter.


Would yarn help here? (since FB have their own CDN and registry for it?)


They do? Yarn uses the npm registry, not something else.


It does, but it also goes through Cloudflare as far as I know (which does caching).


This issue though was with excessive 404s, which aren't cached.


No, they mirror the npm registry.


I wonder if any warning was given to npm that they would be getting this potentially huge new source of traffic. It doesn't seem to be mentioned anywhere.


Eh, NPM is a pretty core service and both sides probably should have done things a bit differently. I don't necessarily think vscode needed to reach out to NPM to let them know they were going to be consuming their public API. Both teams appear to be in communication as a result, however, which is good.

This will likely lead to more fault tolerant systems on both projects and hopefully more collaboration & features in the future.


> I don't necessarily think vscode needed to reach out to NPM to let them know they were going to be consuming their public API.

VSCode is used by a non-negligible number of users, and seems to rely on npm to operate at its best. It would have been good etiquette to let npm know, even though they couldn't forecast this exact situation.


I am not the architect of any large-scale system; that said, I wouldn't expect developers to reach out to GitHub.

However, it isn't bad etiquette and I'm sure Microsoft could get in touch with the devs. Interesting thought.


This is a really cool feature! Is there a similar extension for Atom / Sublime?


Not really; they don't have the user numbers for this.


[dead]


We've asked you twice before not to do this, so we have to ban this account. We're happy to unban accounts if you email us at hn@ycombinator.com and we believe you'll not do this in the future.


oops


But they told us that pulling in 1.6 GB for "Hello World" is normal and no big deal.


I'd love to use VSCode but can't until they or someone else rolls out a DocBlockr-style extension that works for PHP. I'm mostly tied to Laravel right now, and my company requires docblocks, which are not fun to write by hand.


They have a great extension ecosystem. Why not give writing the extensions a shot yourself?


I've never really written much software or extension-type things... I guess I could take a look at the code and compare it with DocBlockr on Sublime. I'm more of a web-app guy.



They are probably trolling for a debate about NPM ... I can smell politics.


> The feature was so great that we started to overload the npmjs.org service.

I'm not sure I would call my feature "great" if it could have brought down npm.


I thought that sentence sounded very Trump-ish.

The feature was so great that npmjs couldn't keep up with it, it was yuuuuuge!


...and they made npm Inc. pay for it!


it was a tremendous overload



