
I'd just like to say on behalf of npm that Microsoft's handling of this incident was A+. As soon as we alerted them to the issue they were all hands on deck and did a rollback.

We've been really pleased that Microsoft chose to put their @types packages into the npm registry rather than a separate, closed system, and in general we're happy with Microsoft's support of node and npm. We're confident we can make the new features of VSCode work; we just need to work with Microsoft to tweak the implementation a little.

This was an honest mistake on their part, and we caught it in time that there was very little impact visible to any npm users.

Fun fact: at its peak, VSCode users around the world were sending roughly as many requests to the registry as the entire nation of India.




> This was an honest mistake on their part

From my outside perspective, it doesn't seem like a mistake on their part at all. Later in the thread you say this accounted for 10% of traffic, mostly 404s. This is (I assume) a hell of a lot of requests, but given npm's position as developer infrastructure, I don't think they could have reasonably expected to melt it. It would have been good of them to give a heads-up, but I don't think I'd start assigning blame to the Code team.


Yeah. It leaves an unpleasant taste in my mouth to hear npm blaming Microsoft for this. As noted elsewhere, 404s are supposed to be very cheap to handle; otherwise DoS attacks become embarrassingly easy.

I feel like the npm team have once again failed to own their problems and instead tried to push the blame elsewhere. This is just an outside perspective, but I really feel like it would have been more honest and accurate to at least admit to the possibility that npm isn't perfect, and "blame" (which I'm not sure is even a helpful concept in this instance) is shared between parties more equitably.


I'm sorry my response looked like I was blaming them, that wasn't my intention. Like I said, it was an honest mistake: these things happen, and they handled it well.

Once we determined 404s were the problem, we put mitigation in place that worked fine, but the problem of request volume remained: the 10% figure I gave was at a 5% rollout of VSCode. A full rollout would therefore have meant roughly 20x that request volume: registry traffic would have tripled overnight, and two thirds of it would have been 404s to VSCode users. At that point the issue is financial, not technical, which is another reason the rollback happened.


Hmm. What is the "mitigation"?


More efficiently handling 404s, which as many have pointed out we were handling quite naïvely.


Right, but I'm curious what exactly the issue was (on a technical level), and how you've mitigated it. This might be useful knowledge for other people building similar things, to avoid making the same mistakes :)


Check out my detailed answer a few comments down: https://news.ycombinator.com/item?id=12861180


Hi, I just wanted to say kudos for this little reply thread.

Many times I've seen someone on HN write a negative/flaming reply to a comment, which then nets a bunch of further agreement and consensus, while the original commenter is nowhere to be seen.

You quickly responded and fully acknowledged the faux pas (nuking any negative consensus), then you replied twice more, and one of those replies was to a request for technical info.

/o/



On the contrary, this seems like too kind a stance for npm to take. The approach Microsoft took here seems enormously and unnecessarily inefficient.

Microsoft maintain the @types scope. Instead of providing their own metadata endpoint listing the available typings to filter requests against, they lazily opted to just mass-bombard a scope they maintain, hosted on a free service they don't fund, with requests for any and all possible package names, even though they themselves maintain the list of packages and should know in advance which don't exist.


Sounds like you're describing a mistake there...


Can you elaborate on what the issue is and how you want it to be fixed? Is it just something like rate-limiting requests or something more fundamental?

Edit: Answered at https://news.ycombinator.com/item?id=12861118


A VSCode person can (and probably will) answer in more detail, but at heart it's simple: if you want to add type-checking goodness to a library that isn't itself written in TypeScript, you can create a thing called a declaration file: https://github.com/DefinitelyTyped/DefinitelyTyped

Microsoft publishes a list of known good declaration files for popular npm packages to npm, under the scope @types: https://www.npmjs.com/~types

The 1.7 release of VSCode helpfully tries to automatically load type declarations for any npm package you use by requesting the equivalent declaration package under @types. When the package exists this is fine, because it's cached in our CDN.

What they forgot to consider is that most CDNs don't cache 404 responses, and since there are 350,000 packages and fewer than 5,000 type declarations, the overwhelming majority of requests from VSCode to the registry were 404s. This hammered the hell out of our servers until we put caching in place for 404s under the @types scope.

We didn't start caching 404s for every package, and don't plan to, because that creates annoying race conditions for fresh publishes, which is why most CDNs don't cache 404s in the first place.
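
For the curious, here's a minimal sketch of what scoped 404 caching could look like at the origin. This is an Express-style TypeScript sketch; the route shape, TTL, and in-memory cache are illustrative assumptions, not our actual implementation:

    import express from "express";

    // Hypothetical stand-in for the real registry lookup.
    async function lookupPackage(name: string): Promise<object | null> {
      return null;
    }

    const app = express();

    // Micro-cache of known-missing @types packages. A short TTL keeps the race
    // window for fresh publishes small, which is why caching 404s under this
    // one scope is safer than caching them registry-wide.
    const missing = new Map<string, number>(); // name -> expiry (ms)
    const TTL_MS = 5 * 60 * 1000;

    app.get("/@types/:name", async (req, res) => {
      const name = req.params.name;
      const expiry = missing.get(name);
      if (expiry !== undefined && expiry > Date.now()) {
        return res.status(404).json({ error: "not found" }); // cached miss
      }
      const pkg = await lookupPackage(`@types/${name}`);
      if (pkg === null) {
        missing.set(name, Date.now() + TTL_MS);
        return res.status(404).json({ error: "not found" });
      }
      res.json(pkg);
    });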

There are any number of ways to fix this, and we'll work with Microsoft to find the best one, but fundamentally you just need a more network-efficient way of finding out which type declarations exist. At the moment there are few enough that they could fetch a list of all of them and cache it (the public registry lacks a documented API for doing that right now, but we can certainly provide one).
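
On the client side, that could be as simple as fetching the list once, caching it, and checking membership locally before ever hitting the registry. A sketch, assuming a hypothetical index endpoint (as noted, we don't have a documented API for this yet):

    // Hypothetical endpoint serving the names of all @types packages.
    const INDEX_URL = "https://example.com/types-index.json";
    const REFRESH_MS = 24 * 60 * 60 * 1000;

    let known: Set<string> | null = null;
    let fetchedAt = 0;

    async function typesPackageExists(pkg: string): Promise<boolean> {
      if (known === null || Date.now() - fetchedAt > REFRESH_MS) {
        const res = await fetch(INDEX_URL);
        known = new Set<string>(await res.json()); // e.g. ["node", "lodash", ...]
        fetchedAt = Date.now();
      }
      return known.has(pkg);
    }

    // Only ask the registry for @types/<pkg> if the local index says it exists.
    async function maybeFetchTypings(pkg: string): Promise<Response | null> {
      if (!(await typesPackageExists(pkg))) return null;
      return fetch(`https://registry.npmjs.org/@types%2f${pkg}`);
    }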


> At the moment there are few enough that they could fetch a list of all of them and cache it (the public registry lacks a documented API for doing that right now, but we can certainly provide one)

Might I suggest having a bloom filter containing all the existing type declarations (which would be quite small) and only querying the registry if the bloom filter reports a positive for the type declaration.

Since the filter can be really small it will probably scale a lot better than a complete list of all type-declarations, and a new filter could be downloaded by the clients every now and then.
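
Sketched in TypeScript, with double hashing and illustrative sizing (the constants below are back-of-the-envelope, not tuned):

    // Minimal bloom filter: m bits, k indexes per key via double hashing.
    class BloomFilter {
      private readonly bits: Uint8Array;
      constructor(private readonly m: number, private readonly k: number) {
        this.bits = new Uint8Array(Math.ceil(m / 8));
      }
      private indexes(s: string): number[] {
        let h1 = 2166136261, h2 = 5381; // FNV-1a and djb2 seeds
        for (let i = 0; i < s.length; i++) {
          const c = s.charCodeAt(i);
          h1 = Math.imul(h1 ^ c, 16777619) >>> 0;
          h2 = (Math.imul(h2, 33) ^ c) >>> 0;
        }
        return Array.from({ length: this.k }, (_, i) => (h1 + i * h2) % this.m);
      }
      add(s: string): void {
        for (const b of this.indexes(s)) this.bits[b >> 3] |= 1 << (b & 7);
      }
      mightContain(s: string): boolean {
        return this.indexes(s).every(b => (this.bits[b >> 3] & (1 << (b & 7))) !== 0);
      }
    }

    // ~5,000 names at a ~1% false-positive rate needs m ≈ -n·ln(p)/(ln 2)^2 ≈
    // 48,000 bits (about 6 KB), with k ≈ (m/n)·ln 2 ≈ 7 hash functions.
    const filter = new BloomFilter(48000, 7);
    filter.add("node");
    filter.mightContain("node");        // true
    filter.mightContain("no-such-pkg"); // false, except ~1% of the time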


Is there an efficient diff algorithm for bloom filters?


Depends on the bloom filter, but for fixed-size, fixed-hash-function implementations and others in the same general vein, it would just be an XOR of the two filters: any bit that comes out set is present in one filter but not the other. It's slightly different for counting filters, where something along the lines of subtraction gets you what you need (anything non-zero is a difference).
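
Sketched out for two filters built with identical parameters (same bit count, same hash functions):

    // XOR two same-shape bloom filters: set bits in the result mark positions
    // where the filters disagree. Only meaningful if m and the hashes match.
    function diffFilters(a: Uint8Array, b: Uint8Array): Uint8Array {
      if (a.length !== b.length) throw new Error("filters must be the same size");
      const out = new Uint8Array(a.length);
      for (let i = 0; i < a.length; i++) out[i] = a[i] ^ b[i];
      return out;
    }

    // A client can XOR the (mostly-zero, highly compressible) diff into its
    // local copy to reconstruct the server's current filter.
    function applyDiff(local: Uint8Array, diff: Uint8Array): void {
      for (let i = 0; i < local.length; i++) local[i] ^= diff[i];
    }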


> most CDNs don't cache 404s

Sounds like a good CDN-busting DDoS vector.


Yeah, 404 generation needs to be efficient for cases like this. Sounds like npm simply hadn’t encountered a situation where this mattered.


> 404 generation needs to be efficient for cases like this.

Indeed!

In general, it is an extremely efficient response. It took a huge number of users all hammering on the same set of 404 handling routes to get our attention, and we were able to handle the load, though it wasn't trivial to do so. The end user impact was minimal.

If it hadn't been a known-good actor, we had some options to shut down the flood a bit more forcefully, but we didn't want to inadvertently cause errors for vscode users. Like my colleagues have said in this thread already, we really dig what VSCode is doing, and as operational fires go, this one got put out very swiftly and did very little harm.

All that being said, knowing the npm devops team, this will no doubt be a source of insights for making the registry even more resilient in the future :)


If your CDN supports stale-while-revalidate and stale-if-error, you should consider enabling them -- it will take the load on your servers from O(users * packages) to O(POPs * packages)
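
For reference, both behaviors are driven by a single Cache-Control header on the origin response; the TTLs below are illustrative:

    import type { Response } from "express";

    // Fresh for 5 minutes; after that the CDN may serve the stale copy for up
    // to 60s while revalidating in the background, and for up to a day if the
    // origin is erroring.
    function setCachingHeaders(res: Response): void {
      res.set(
        "Cache-Control",
        "public, max-age=300, stale-while-revalidate=60, stale-if-error=86400"
      );
    }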


> At the moment there are few enough that they could fetch a list of all of them and cache it

at which point they would be back to the annoying race conditions for fresh publishes, no?

Can you speak to why it is so expensive on the NPM side to serve a 404? Would a bloom filter like another commenter mentioned be helpful?


"There are any number of ways to fix this, and we'll work with Microsoft to find the best one..."

It's refreshing to read actual engineers' writing. After this, going back to the tear-jerking, snark-filled gnashing of teeth on Twitter and Medium will be hard.


It's better to be part of the solution than to create the problem.


Most CDNs are able to invalidate a cache entry, so caching a 404 and busting it when it's not going to be a 404 anymore seems like it'd work?


Yeah, you're right. It was a simple oversight that we weren't already caching 404s, since we have infrastructure in place to bust the cache on publishes. It would have been our next step if necessary (it wasn't necessary to mitigate this flood).

You optimize for the use patterns you anticipate or see in normal usage, because, well, see famous saying about premature optimization. The use pattern we see most often is people installing from pre-determined lists in package.json, so 404s aren't all that common ordinarily.
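
A sketch of that publish-time cache bust, assuming a CDN that accepts purge-by-URL requests (the PURGE method and auth header here are illustrative; real CDN APIs vary):

    // On publish, purge any cached 404 for the package's metadata URL so the
    // fresh package becomes visible immediately.
    async function purgeOnPublish(pkgName: string): Promise<void> {
      const url = `https://registry.npmjs.org/${encodeURIComponent(pkgName)}`;
      await fetch(url, {
        method: "PURGE", // Varnish-style purge; hypothetical for this registry
        headers: { "X-Purge-Token": "<token>" }, // hypothetical auth
      });
    }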


Can they publish a package that contains a list of all their other packages under @types?


This is the tentative fix, though we'll also be looking holistically at ways to reduce the load generated by VS Code from this feature.


If there's only 5000 types, why don't you just keep a catalog file? Then it's a single hit.


The TypeScript team will do feature work to minimize npm requests via a well-known cached list of packages.


"Many requests to the registry as the entire nation of India" per what time unit?


Approximately 3 new JS frameworks per hour.


Funny. Sounds like the expansion rate of the Javascript ecosystem.


It's not a real HN thread until someone makes this joke...


I was a bit vague :-) India's about 10% of total requests on any given day. VSCode was 10% of requests for a couple of hours.


Doesn't matter, it's a comparison of request rates.

Requests over time where the user is in India ~= requests over time where the user is a VSCode user


How does the time unit make a difference? If they're making the same number of requests/second as India then they're also making the same number of requests/hour or requests/day, no?


There wasn't any time unit in the original statement, so it wasn't clear that the requests per X from India were the same as the requests per X from the IDE.


Thanks for sharing your thoughts. Don't mind the children here. You could give them gold and they would moan about the purity.


So one day they switched their entire user base to rely on a 3rd party free service without any load testing or heads up? What could possibly go wrong?


We have been testing this on insider builds of VSCode for a few weeks, as well as on preview builds of Visual Studio, with no issues. We were just notified today by npm that we are flooding their servers.


Your installed base is quite large indeed! Your testing load was a drop in the bucket of our daily usage, but once you released to VS users, we noticed. It should be straightforward to design something that works for this access pattern and load now that we know what you need. Typeahead package name completion would be a neat feature.


Before you start sending a couple thousand QPS to any server, it's generally not a bad idea to test whether that server can handle that much; sometimes it's even worthwhile to notify the other team about the intended change.

Overall, when you've "tested" something but it still breaks in production and requires a rollback, it's usually a sign that your testing strategy could use some improvement. What's the point of testing if it doesn't prevent failures from happening?


Fwiw, if I were building a feature on something that's considered as core a technology as npm is, I likely would not have thought of this either. (Though maybe I would have if I were doing an in-depth look into it.)



