Npm's Self-Signed Certificate is No More (npmjs.org)
215 points by rcconf on Feb 28, 2014 | 128 comments



There's a comment on the blog post that I thought was very good. It was from Rob, and it ends with:

"Please, try harder or get forked. Not sure how else to say that."


It's disappointing to see unnecessary and unjustifiable censorship of perfectly legitimate comments like that.

I think it has become more of an issue in general as the discussion of software projects has moved away from mailing lists and newsgroups to blogs and other more centralized discussion forums.

I recently saw similar censorship happen at Opera's blogs. I would read some of the comments one evening, and there'd be some good discussion about features that the newer versions of Opera's desktop browsers are still missing. They weren't always positive comments, but they'd be relevant and remarkably civilized given the circumstances. Then I'd read the same blog post a day or two later, and entire discussion threads full of useful content would be gone.

Criticism and negative discussion are often the most valuable kind there is. They often highlight real problems that need to be solved. Deleting such commentary is not at all helpful, and is actually probably quite harmful.


It's happened to me on the Chromium blog as well when attempting to correct a factual error. Gotta love fauxpen source!


Rob's comment was spot on.

Here's a link to the full text: http://blog.npmjs.org/post/78085451721/npms-self-signed-cert...


And now that comment seems to have disappeared behind an "awaiting moderation" message after it had already been out there for four hours. Nice. But the above link still works (for now).


Yup. I hesitated to paste the post but someone else did it so it's in the wild anyway now.


Hi! As I said in the comments on Rob's own post: we didn't censor any comments. We did no moderation of any kind on any comments today; we were way too busy trying to fix the problem. I don't know what happened to Rob's comment, but it was nothing anyone at npm did.


Seems to have been moderated away by the blog owners. Classy. Here's the comment under discussion:

==

Hey. Crazy kids. This probably needs to be that one event where you sort of realize: "Oh. Shit. Other people...like...use this & stuff. We need a damn road map and a release schedule. Stop smoking dabs all day, breh."

Now is also a great time to learn how to think about the potential ramifications a production push will have prior to making said production push. And, if your change might impact some or perhaps even all of the other people who use your technology, then some degree of coordination - perhaps an email? - would be nice. It's one of those things that will help make you look professional. I suck at professionalism. You have no idea. But, even I know this much.

Because, right now, I sort of feel like I'm asking some very rightfully fearful people to consider entrusting perhaps their actual career into the development of technology they need to succeed and thrive. And, I just started recommending Node.js - with a caveat - that npm basically sucks. I hate having to do that and it needs to stop.

So, here we are.

Your words continue to be one thing, and your actions continue to be quite another. If it is even possible to break a tool like this, that tool is not enterprise grade. If there is nothing that can be done to successfully insulate a tool from unexpected behavior like this, then that tool scores less in evaluations that consider the risk of using it.

npm, at this point, has more going against it in the discussion than going for it right now. Events like this are, in the grand context, very significant and telling. They are also ill-timed. Because big, important decisions are trying to happen right now regarding the use of Node.js. It is literally on the cusp of going mainstream. And, that seems to be generating some pressure that at least one team (npm) doesn't seem to be equipped to handle.

So, before you find yourself facing a community that forks instead of trying to work with you, I would like to just make a simple recommendation. In the future, you seriously need to sit and think about the potential ramifications of a production push. And...this is the important part...if those changes are going to have a wide impact on your users - send some sort of email WELL IN ADVANCE. A flippant blog post the day of is not Doing It Right™.

Because, and I feel like I might not only be speaking for myself, I'm not going to allow the promise of Node.js to be voided by the lackluster and problematic performance of its weird bolt-on archive service. Someone, perhaps even me (as in: today), will simply replace you with a workable, decentralized solution that enterprise can specialize to purpose and communities can use to grow and thrive.

If you have any questions, ask somebody. Anybody. If you're struggling with some concept of enterprise grade operations, what people expect of you and how you can succeed with events like this in the future, I'm positive every capable person here would provide some level of guidance and support. We want you to succeed.

Please, try harder or get forked. Not sure how else to say that.

Best regards, -Rob


We didn't moderate away anything. I am literally the only person who CAN moderate those comments, and I was at a conference all day. 100% of my online time was spent working with my team to figure out the fastest path to a fix. We didn't realize the extent until way too late, and that's bad on us. I apologize.

I didn't delete your comment. I'll look at the moderation queue and see if maybe disqus is set to auto-hide after some time or something. I'm sorry for the confusion there.


Thanks for saving it, probably hit too close to home for them.

A lot of this is also in how they manage the fallout of this decision - the "Streisand Effect" may come into play here...


Who is Rob? (Was it simply attributed to "Rob", or am I supposed to know who Rob is? I do not do much with node.js, and Google searches for "rob node.js" are not pulling up anyone terribly canonical-looking; mostly just some one-off talks, and from node.js only people who work at companies I also haven't heard of that use node.js.)


I am Rob (Colbert) rob.isConnected@gmail.com. I'm nobody special other than someone who faces the constant uphill struggle to convince IT that Node.js is ready, only to have it shot down time and time again. I'm unemployed right now, just trying to find a way to use the right tools for the job. And, I'm sick of failing because other people suck, won't learn, won't adapt, won't address problems or make progress.

The "canonical" resource for my comment on npmjs.org today can be found here:

http://robcolbert.com/#/posts/5311055a38bb373638000007

That is my personal blog running on my personal server on a DSL line that I can't afford while about to go homeless. I have no choice. I have to speak up and fight for what I want to be using or throw in the towel.

What would you do?


> who faces the constant up-hill struggle to convince IT that Node.js is ready only to have it shot down time and time again

So it sounds like it is not ready. _This_ drama, tangled slinkies of callbacks, the drama with the Node.js contributor who got pushed out over "her" vs. "his" pronouns, and so on. None of that says it is ready. So I pretty much agree with your IT department. Sorry.

> And, I'm sick of failing because other people suck, won't learn, won't adapt, won't address problems or make progress.

Maybe they did, they just didn't arrive at the same conclusion as you. Maybe they are even right.


>> struggle to convince IT that Node.js is ready only to have it shot down time and time again

Devil's advocate: is it actually ready? Especially in light of the incompetence on display with this particular issue?


Node.js is not ready.

The main problem with Node.js is that the libraries out there simply are not good, are broken or don't do what they claim to do, or have critical issues outside of the most basic use cases. I'm talking about everything from the popular libraries (like Socket.io) down to all the ones being contributed by the community. I've found flaws in every single library I've used so far, and they are flaws not present in their Python and Ruby counterparts.

Error handling in Node.js is one of the worst of any language.

NPM is now deploying breaking changes to production, without apology.

Javascript is also a terrible language for larger teams, unless everyone follows the same exact convention, as the language itself is extremely flexible, more so than any language I've seen.

I've spent 2 years in this ecosystem, and pretty much anything else feels liberating in comparison, be it Python, Ruby, Go, Erlang, etc etc.

The only thing Node.js has going for it is a community of front-end developers who want to dabble in backend architecture, but I just feel this whole entire ecosystem is too fragile for my tastes.

As with MongoDB, once people begin creating successful businesses out of it and reach a certain scale, similar articles bashing Node.js are all but inevitable.


I'm going to address my opinions on your issues a few at a time.

As to the existing libraries, I find a bigger issue is sometimes the number of bad libraries that have become more popular than perhaps better libraries. That, and some bits have been abandoned for some time now. -- That said, since most of them are out in the open, it's easy enough to fork and address the issues at hand; if the upstream author is unresponsive, change the name in package.json and publish your own version.
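
A sketch of that fork-and-fix-and-republish flow (the repo URL and package name here are purely hypothetical):

    git clone https://github.com/example/some-broken-lib.git
    cd some-broken-lib
    # apply your fix, then rename the package in package.json so it doesn't
    # collide with the unresponsive upstream's published name
    npm publish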

I've seen far worse error handling... you can use try/catch/finally, which is pretty standard, and most internals will do this for you, and return the error against the callback.

npmjs.org isn't the entirety of the node community, and I think this particular instance was a bit of a mistake, but perhaps not having npm fall back to regular CAs was also a mistake... if they'd done that, it would have continued to work. I don't know of a way they could have done this without breaking something... The lack of advance notice is the biggest issue, but many/most wouldn't have seen it anyway.

With node/npm it's easy enough to create more/smaller packages/modules that are easier for larger teams to maintain. We've been using git+ssh:// references for our internal packages.
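
For example (host and path hypothetical), npm can install straight from an internal repo over ssh:

    npm install --save git+ssh://git@git.internal.example.com/team/private-module.git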

I've spent the better part of 4 years following node.js, and over 3 using it. I find that there is a huge mix of developers in the larger community, and that it's a lot like running with scissors at the moment, but things will stabilize when the dust settles a bit. I think its uptake has been far faster than Ruby/Rails, and there have been some growing pains.

More than the front-end developers, we get to have a full stack in one language for a lot of environments. We also have the benefit of a few very large companies participating (Yahoo, Walmart, and others). I'm at GoDaddy in Platform and Commerce development, and a huge portion of our new server-side development is going with node.js.

I don't doubt that there will be detractors regarding the use of node.js in certain environments. I wouldn't use it for say image processing... but I might use it as a manager against a queue which launches image processing. It's a matter of using the right tool for the job.


> I'm at GoDaddy in Platform and Commerce development

ಠ_ಠ


sockjs is better than socket.io if you're into WebSocket (streaming support etc).


This is the right solution - unfortunately, socket.io has been essentially abandoned, but it's the library that everybody knows, and is mentioned in almost every node+websockets tutorial. We get questions about socket.io all the time in #node.js, and the answer is usually "use sockjs".

Guillermo Rauch is working on a complete rewrite (engine.io), but it's been very slow going. I don't envy him; socket.io has gotten very popular very fast, it's complicated software, and people expect it to be much more polished than it is.


What do you mean by "ready"? Because for me, ready means being able to run a relatively stable production system, which plenty of teams who use node are able to do.

You listed some downsides, but none of them are actually breaking problems that block running a stable node production system.


What flaws did you find in Socket.io? What were some flaws in other libraries?


The flaw in socket.io had to do with redis-store. When deploying socket.io over multiple Node.js processes / servers, you want to use redis-store, at least, that's what they recommend.

When you use redis-store, after running for a while, it puts your CPU usage at 100%, and you have no idea that it was because you used redis-store.

Socket.io therefore is only really optimized for running a single process, which of course breaks down as soon as you have any significant I/O throughput or hit a certain # of connections.

The same issue happened to me when using the Node zmq library: an inexplicable increase in CPU, which later took 3 months to fix.

I could really go on, but these are mission critical libraries that have obscure bugs that are extremely difficult to track down and will bite you in production.

Before I'd ever consider using Node.js again, I'd wait for

- 1.0, this is taking too long. And even when they release a 1.0, it is more likely to be for marketing purposes than actual product stability.

- Library Maturity (maybe another 2-3 years)

- Better error handling. Nothing is worse than being left wondering why your server is failing in a production environment without relevant context and information.

Another problem with the Node.js community (not all of it, but a lot) is that you have all these former front-end guys who can build great APIs with pretty web sites that make you believe the library is battle-tested and safe to use, when in fact nothing could be further from the truth.


Wow that is a nasty bug.


Over 500 open issues, some over 3 years old.

For 3 years without a fix, you've been able to DDoS your own server with socket.io: https://github.com/LearnBoost/socket.io/issues/438


To be fair, Ruby's had similar issues with its gem repository, including an intrusion that one time (about a year ago).


Fair point, but to my memory the Ruby tools/community were never quite as ramshackle as those around NodeJS.


I would observe the CPAN community. I highly recommend reading http://www.cpan.org/misc/ZCAN.html and then begin forking+mirroring the registry immediately.


Why don't you just stop recommending Node.js?


Recommended it once. Literally the first time. I've waited a very, very long time to do that. Once. And, it was for building a prototype. Project went well. Very well. Wanted to be able to keep it. Gave up. And, I have stopped recommending it. I left.

Presently seeking to either find a place that's already working with Node, earn a living on my own project based on Node or switch careers. I intend for this to be my last attempt at being able to work my way. It's never going to make sense to others, so I have to make this work.

Sorry if my outburst asked all the hard questions, but reality is: How is this stuff going to get ready if there isn't some pressure to move it in that direction? And, given this is the environment I want to build in, what am I supposed to do? Wait for someone else to kick a tire & light a fire?

No, thanks. Done with that. Don't care if Node's not perfect for what you want to do. It's the right tool for my jobs and the kind of jobs I want to be working on. We're not all docking space shuttles to space stations in real life.

Anyway, I have stopped recommending Node.js. I just personally use it and won't work anywhere that doesn't.


Software development as a career isn't about making you as the developer comfortable. Your job is to build high quality software using the tools given to you. If you are good at your job, and you are able to be a convincing salesman, you can often build software using the tools _you_ want (this has never been an issue for me anyways), but it is not going to be given to you on a silver platter.

It sounds to me like you are convinced that Node is the one and only way to build the software you work on (I think that's bullshit frankly), and you won't accept any alternatives. Well, you are about to find out the hard way what happens when you aren't willing to work past your own pride.


why? I don't understand the attachment.


If you're having trouble finding a job where you can use node.js, try other things.


Rob, I have some spare compute on Rackspace you are more than welcome to use if you want it. Hit me up on Twitter. Same same username.


Have you applied for a job at npm, Inc? It sounds like they could use your help.


Does it really matter who he is? I don't think it does. It's the message that's important, not who is expressing it.


First, I will point out that when a bunch of people are talking about some guy by his name in a way that feels like everyone kind of knows who it is, it is natural to be curious. However, in this case, there is a very important reason why this is the first thing you should ask: the meat of the message is a threat that "Rob" will somehow build something and replace npm: if Rob were someone well-known in the community, is someone who maintains a package manager for another large community, or even if he had a history of rampaging through communities and disrupting them with successful solutions, this would be a meaningful threat; if Rob is just an angry user, then the threat is hyperbole which should be ignored. Like, I appreciate that "arguments should have weight regardless of who said them", but applying that knee-jerk as if that is true in all situations is harmful to discourse... in this case, what we are looking at is mostly a rant and a threat, not an "argument": it matters greatly who the speaker is.


FYI, all day yesterday I was getting npm registry timeouts from machines in DCs around the globe. It seems to be due to an unrelated issue also reported on their blog: http://blog.npmjs.org/post/78048834276/elevated-error-rates-...

So yeah, I was thinking this myself. These types of errors are unacceptable.


If anyone is wondering what the actual change was:

It looks like the npm registry used to have a certificate signed by npm's own CA, and existing npm clients only trust that CA by default, not the normal list of Verisign, DigiCert, etc. (Trusting Verisign et al. would defeat the point of using their own CA.) The signing key for that CA is pretty darn important, and maybe there are entities other than npm, Inc. who might know it (i.e. Nodejitsu).

So npm, Inc. rolled out a new cert that looks to be signed by DigiCert, but existing clients don't trust DigiCert until you explicitly configure them to.

I was thrown off by the SELF_SIGNED_CERT_IN_CHAIN error; I expected some error about an untrusted root CA if the problem was that npm clients didn't trust digicert, but apparently SELF_SIGNED_CERT_IN_CHAIN is what OpenSSL returns when the root CA isn't loaded.
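
If you want to see for yourself what the registry is serving, something like this prints the leaf certificate's subject and issuer (standard openssl tooling, nothing npm-specific):

    openssl s_client -connect registry.npmjs.org:443 -showcerts </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -issuer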


If the clients trust the npm CA, can't they just sign the digicert CA with that CA and include it in the certificate chain provided by the server? That way the chain would be:

    npm CA -> digicert CA -> any other intermediates -> server cert
Clients that only trust the digicert CA (and other standard CAs) will see that and accept it because they trust the digicert CA, and clients that trust the npm CA will trust the cert also, allowing both old and new clients to work. Once (almost) everyone has upgraded, the npm root CA can be removed from the chain presented by the server. Am I missing something here?

Edit: It looks like what I'm missing is that you'd need the private key of the digicert CA to generate the request to sign with the npm CA. I was thinking about how CAs have been migrated in the past (e.g. equifax to geotrust global CA). It looks like it won't work in this case.

Edit2: Actually, it appears to work after all. I just tested with the openssl ca command, and you give it -ss_cert instead of -in for the certificate to sign a certificate instead of a request.
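
For reference, a rough sketch of that cross-signing step with openssl ca (file names here are hypothetical, and it assumes you hold the npm CA's private key and an openssl ca config for it):

    # sign DigiCert's self-signed root with the npm CA, producing a cross-signed cert
    # that old clients (which only trust the npm CA) can chain through
    openssl ca -config npm-ca.cnf -ss_cert digicert-root.pem -out digicert-cross-signed.pem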


> Am I missing something here?

A sense of arrogance that precludes understanding x509 infrastructure before you roll out a world-breaking change.


Let's apply a bit of sense here: this was a failure of judgement, not arrogance. It's perhaps easier to picture the npm developers as maniacal villains, cackling as they wreak destruction among us. But that's not the case with them, just as it is pretty much never the case with project developers.


I just picture them as cowboy coders not really aware of what it takes to build and maintain software for large enterprises, which is, unfortunately, their stated mission.


Let any developer who has never pushed an update with unintended side effects raise their hand.

This mistake was, in hindsight, a clear error in judgement. It highlights missing steps in their change deployment process. And I expect them to learn from it, as the larger Node community has shown they can learn from mistakes.

Part of joining the ranks of "enterprise"-grade projects is first being an aspiring project, and part of that is learning a lot. Anyone who expects that to happen without a few bumps is naive.


I agree with you. Everyone makes a mistake once in a while. To be successful, you should learn from it.


I don't think people think they're arrogant. I think people find them unsuited to the task at hand. If you felt that way already, this incident would have been another nail in the coffin.


Indeed! Don't forget kids, don't ever do anything if you're not already an expert.


Yes, that sounds about right. Spend the time necessary to understand before you inflict your lack of understanding on the world.


That kind of chain doesn't seem to be ubiquitously accepted. I built up something similar at https://ssltest.greenapes.com:4443. There is a self-signed CA signing a trusted CA (StartCom), which in turn is signing a valid certificate for the hostname.

The chain is accepted by Firefox and Chrome with NSS, but Safari (and Chrome on OSX) gives a self-signed warning message. I asked AGL and he thinks that it should be valid: https://twitter.com/giovannibajo/status/439746540249567232

So it looks like it should work in theory, but in practice, even if npm had attempted this, it wouldn't work on Mac.


> I was thrown off by the SELF_SIGNED_CERT_IN_CHAIN error; I expected some error about an untrusted root CA if the problem was that npm clients didn't trust digicert, but apparently SELF_SIGNED_CERT_IN_CHAIN is what OpenSSL returns when the root CA isn't loaded.

All root CAs are self-signed; that's what makes them roots. What keeps the self-signature from being treated as an error is that the root is listed in the CA list available to the client, which is updated out of band.


This has also broken all of our own deploys on Jenkins. We had to use solution 2, and then upgrade, because solution 1 also produces the certificate error.

It was surprising that npm status didn't have any sort of warning. I had to find out about this via twitter. I think it's extremely irresponsible and I'm glad we've started to move away from node.js and started to use Java.


Ditto. Team wasted a few hours on this today.


That's not a loaded comment.. Changing your programming environment over an SSL certificate? Tell us all about how awesome it is building apps in Java!


You want to talk loaded comments? You're reducing "I'm glad to move away from a project that breaks functionality for all users with no notice" to a quibble over an SSL certificate change.

That npm as a project thought this was the correct way to handle this kind of transition is very unsettling.


Yeah, because no one ever had any problems with Maven, that thing is like package heaven, it's all rainbows and unicorns!


It's really disappointing to see this sort of "reasoning" used so often these days.

Just because somebody points out serious flaws or problems with one technology does not automatically mean that he or she thinks that other, unrelated-yet-similar technologies aren't flawed or are somehow perfect.


> I'm glad we've started to move away from node.js and started to use Java.

That is what I was responding to directly. The implication of the statement was that npm's flaws (i.e. certificate changes that break everything) are a good reason to switch to Java.

It isn't at all a good reason to switch to java, as java's equivalent would easily waste more time than even this rather embarrassing certificate problem with npm.

> unrelated-yet-similar technologies

It is related and similar; it is a like-for-like comparison between Node.js package management and Java package management.


> It isn't at all a good reason to switch to java, as java's equivalent would easily waste more time than even this rather embarrassing certificate problem with npm.

Works fine for me. Has a healthy ecosystem of 3rd-party artifact repository implementations.


> It isn't at all a good reason to switch to java, as java's equivalent would easily waste more time than even this rather embarrassing certificate problem with npm.

As someone who has used it daily for the past 5.5 years - not really. That said, I'd prefer something like Gradle, but I can't fault Maven for just working, goddamnit.


How is that comparison at all relevant?


> I'm glad we've started to move away from node.js and started to use Java.

I'm not sure you get how threading works, but this thread originated from the above quote. And comparing npm with Maven given the above statement is relevant, given it is the primary "Package management" system for Java, much like npm.


I perfectly well understand how threading works. Read the rest of the comment that you cherry-picked that quote from; it's about being upset at how npm doesn't seem to be taking its responsibility seriously.

If you had some information about Maven callously performing a user-hostile update, the comparison would be appropriate. As it is you're just relying on "lolz, Java sucks" as a form of argument.


No, I'm really not. I don't think java sucks at all, and never said as such.

I'm saying Maven isn't better, nor would npm's mistake be a good reason to switch to Java from Node.

npm didn't perform a user-hostile update, they made a mistake with certificate authorities.

How many 0-days have forced a Java upgrade on people... was that a 'user hostile' move by Sun/Oracle?

I think your line of reasoning is utterly ridiculous, and you are responding to words I didn't say.


He never said he was changing because of this issue. (Logically, if he's already been changing over then it can't have been because of an issue that just happened)

"Tell us all about how awesome it is building apps in Java!" sounds like a pretty loaded comment to me though.


In the last 5 years Maven (Central) hasn't given me a cert or PGP error. Building 50 times a day in Java is running just fine ;) Of course it could happen in theory to any similar system, such as NuGet, if people decided to yank the rug out from under their user base.


> Changing your programming environment over an SSL certificate?

That's not at all what he said.


This probably goes down as one of the worst large-scale blunders of this type, given the sheer number of people affected. It's actually fairly insane that node.js relies on npm like this - isn't it only a matter of time before one of the core node packages gets compromised and someone gets root access to thousands of servers and dev boxes?


Although npm ships with node, the problems aren't because of that.

People have (foolishly, in my opinion) chosen to make npm an integral part of their deployment process, which is why this change has broken a lot of people's deployments. They're going against the official npm recommendation [1], which is to check your dependencies into your source repository and not use npm in deployment scripts. (A good idea with any package manager, imo. [2])

Not that I'm excusing npm; a change like this seems like something they should have taken more carefully.

[1] Npm recommends checking product dependencies into your repo: https://npmjs.org/doc/faq.html#Should-I-check-my-node_module...

[2] I explain why checking dependencies into your repo is a good idea: http://www.letscodejavascript.com/v3/comments/live/2#comment...


As an alternative to checking in your dependencies: have your build server 'npm install' your module for testing, then archive the directory tree. Deploy from the archive.
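
A minimal sketch of that build-then-archive flow (all names and hosts below are hypothetical):

    npm install --production                   # resolve dependencies on the build server
    npm test                                   # exercise the exact tree you're about to ship
    tar czf ../myapp-build.tar.gz .            # archive the whole tree, node_modules included
    scp ../myapp-build.tar.gz deploy@prod.example.com:/srv/releases/   # deploy the artifact; no npm needed in production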


Yum (and I seem to recall up2date) have had certificates revoked before (due to compromises on the upstream packaging system) requiring everybody to re-import a good certificate before updates will work again.

Now, system updates are a different beast than tying your development process into Magic Hosted Things In Internetland, but certificates on things have been changed before and they will be changed again. Just have to keep aware of how your systems work and what they depend on, which goes against modern "javascript with a double large lack of knowledge" development.


If you're desperate for something, anything, then npm is what you tend to wind up with.


Well, it's pretty pleasant to use when there are no issues.


As is pretty much everything.


There are a lot of things I don't find pleasant to use even when there are no issues. Lots of clunky, weirdly designed tech in the world.


> core node packages

I assume you're talking about http etc.. These do not depend on npm and have nothing to do with npm.


I'm surprised that this wasn't one of those "hey in 60 days we'll be making this change and it will break x, y, and z. Be prepared." sort of things.


Doesn't matter. When you download Node.js itself, you are doing so over HTTP. https://nodejs.org/download/ (note the httpS) does not work. Hope you enjoy being a part of that botnet :)

Edit: I used to email project owners about issues like this, but never seem to get any response, so I stopped. The worst one, in terms of losing $$ was Pandora. I keep trying to sign up for the paid service, but the form to put in your credit card details is served over HTTP. I emailed them several times with no response.


Regarding Pandora, their login form is also served over HTTP instead of HTTPS (http://www.pandora.com/account/sign-in), which I find to be very disappointing, since it's less secure that way.


If you are so concerned about downloading anything via HTTP, why not always use a VPN? At least that way you aren't susceptible to local MITM. Just make sure you trust your VPN :)

Edit: Oh, plus you can always use a checksum to verify your download packages [1] :)

[1] http://nodejs.org/dist/v0.10.26/SHASUMS.txt
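
For what it's worth, a sketch of that checksum check (assuming SHASUMS.txt lists SHA-1 sums, as it did at the time; and note, as the replies point out, that fetching the checksum file over plain HTTP gives no MITM protection):

    curl -O http://nodejs.org/dist/v0.10.26/node-v0.10.26.tar.gz
    curl -O http://nodejs.org/dist/v0.10.26/SHASUMS.txt
    grep node-v0.10.26.tar.gz SHASUMS.txt | sha1sum -c -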


What others said. Checksums downloaded via HTTP are just as easy to spoof as the tarball. The VPN protects you part of the way but not the entire way. Now, if you said that I could verify the download because it was signed using the publisher's GPG key and that key was widely trusted, then I might have entertained the idea. Then again, that is very non-standard compared to getting a $10 TLS cert.


> why not always use a VPN?

Because then you're susceptible to remote MITM? VPNs are useful but do not replace end-to-end encryption.


That's why I qualified it with local. Have anything to say about checksums too?


A checksum that's published via HTTP doesn't protect you against MITM.


You know, this link is also not served over HTTPS. So a local MITM could change the checksum on this page too.


Regarding pandora, the form post is over HTTPS:

https://www.pandora.com/radio/jsonp/v35

And you can load the form, which is just an HTML input form with some sensitive data (last 4!), over HTTPS. It sucks that HTTPS is not default though.


> Regarding pandora, the form post is over HTTPS:

Sure, but for the purpose of being explicit to other readers that is still not secure. If the form collecting the sensitive data is served using HTTP only then a malicious actor could alter the page (by injecting some javascript for example) to steal the data even if the actual form post is secured. Serving the form page as https at least ensures that the page collecting the data for submission was unmodified in transit.


Didn't Moxie discuss this a few years ago, that it's insecure to send sensitive data from an HTTP page to an HTTPS one?


The problem is simple: if you send me an HTML file over HTTP, an attacker can inject some JavaScript code that will catch the onsubmit event and also send the data elsewhere.


That's a non-problem because people who care about online security don't have JavaScript enabled by default.


OK, what if the attacker replaces the form's action so it submits to their servers instead?

Also Pandora does not work without JavaScript just like most modern websites. Would be pointless to sign up then, wouldn't it.

Lastly, what you are saying is simply not true.


Security isn't binary.


Suffice to say: anybody who has JavaScript enabled by default demonstrably does not care about their security. Interest and competence in security is indeed a spectrum, but somebody in first-grade math still knows what 2 + 2 is.


You'd be able to see that, though, and you can actually load the form over HTTPS; just use https://pandora.com instead of http. It's stupid that it's not the default, though, and there is actually insecure content also loaded, so yeah. :(


Good. Perhaps I will sign up now. Last time I checked was almost a year ago. But yeah, that should be the default.


The problem with this line of thinking is that if you take it to its logical conclusion, you need to stop listening on port 80 altogether and train users (or search engines) to only key in https:// URLs.

If you listen on port 80 and respond with a 301 Moved Permanently (pointing to the https:// URL), that can be MITM'd also: just proxy to the real HTTPS site and rewrite all the links (using absolute http:// URLs) to be plain HTTP. Or, if users are trained to look for the lock icon, proxy through an HTTPS server under your control with a convincing domain name.

Even if you do block port 80, it requires users to be educated to never access the http:// version of the site, because a malicious network operator could just operate a forwarding proxy and rely on users hitting the http:// URL. After all, HTTP is the default protocol used when I type in www.website.url. (Aside: perhaps browsers should attempt HTTPS first and then fall back to HTTP?)

I don't know of any public websites that aren't vulnerable to this. It's currently too user-hostile to require https://, so everyone helpfully redirects.

The only saving grace is that most non-technical users today don't actually use URLs. They access Facebook by typing "facebook" into the Chrome URL bar. That's a secure search, which gives a secure link.


With HSTS, you're only vulnerable the first time you connect to a website; that's a massively reduced attack surface compared to being vulnerable each time you submit some form.
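
For reference, HSTS is just a response header sent over HTTPS (something like Strict-Transport-Security: max-age=31536000; includeSubDomains). A quick way to check whether a site sets it (the domain below is only a placeholder):

    curl -sI https://www.example.com/ | grep -i strict-transport-security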


When package managers misbehave, the whole software bundle suffers. Maybe the Node.js project should have more (decision) power over its package manager. Node maintainers properly announce API version changes, sometimes months in advance, so you don't need to read through the PRs/issues on GitHub or mailing lists to stay up to date.

I really don't think an advance notice would have hurt. So much for automation.


Well, nodejitsu are trademarking NPM so let's see how that plays out.


Go home NPM, you're drunk!

(add all the latest drama, and you're basically just asking to be forked..)


Solution 1: Upgrading npm actually does nothing and also complains about a cert error.

Solution 2 is a terrible idea for people who have their own private registries or proxy caches. It's also a security issue in general...

This leaves you with needing to upgrade node, which may or may not be possible due to operating system/platform constraints. Luckily we were using this https://launchpad.net/~chris-lea/+archive/node.js/ which makes life on ubuntu ok.

We set up our own proxy cache; at this point I'm thinking about increasing the cache time to something like 2 weeks.


The inmates are running the asylum :)

But it does provide a case study for the rest of us on what to do and what not to do.


Did they previously have client-side validation of the self-signed certificate, and now they changed it to "accept any cert signed by a root CA"? If so, why in the world would they do that?

Maybe I'm being hopelessly optimistic by not jumping to the conclusion that they did no validation whatsoever before?


If running "npm config get ca" on my not-yet-updated copy of npm is any indication, they were using the self signed certificate.


As per Joe Grund: http://blog.npmjs.org/post/78085451721/npms-self-signed-cert...

    npm config set strict-ssl false
    npm update npm -g
    npm config set strict-ssl true


That's ... still a dangerous way to accomplish the update. You're disabling a key safeguard that ensures you're actually talking to official npm servers, then re-enabling the safeguard after running the code that whoever-you-got-it-from provided.

I'm sure the likelihood of experiencing a MITM attack here is low, but the security consideration needs to be as widely advertised as the solution.


I was just about to post this link.

The change broke bower on my laptop. The above link fixed it.


Can someone explain what they actually did? What was the technical change that was made?

I can't figure out from the blog post whether the change was in the npm server or client.


FYI I was able to bypass this last night by temporarily switching to the EU NPM mirror.

    npm --registry http://registry.npmjs.eu/ install blah


I was bitten by this yesterday and I didn't know WTH was up. This is good to know. I had two or three package installs work flawlessly and then they started failing randomly across servers (all staging, so it wasn't that big of an issue).

I think Rob's comment sums up my feelings. A bit more aggressively, but the same gist.


Basically, don't ever deploy from npm. Tarballs.


This is exactly why I don't understand why it is called "package management." What good is npm if you just need to deploy tarballs anyway? It's good for bloating dependencies and installing 10 copies of the same library. It completely punts on actually managing packages by just copying everything -- recursively.


npm prune


This is also causing an issue with Dokku: I'm trying to deploy a new Node app and it fails because of the certificate issue. Anyone know how to update the buildpack to the latest version? Heroku already implemented a fix.


Yes I do :)

You can see on the following step which image is being used when deploying an app:

https://github.com/progrium/dokku/blob/master/dokku#L33

So what I did was change that to an image of my own based on the progrium/buildstep one:

  sudo docker run -i -t progrium/buildstep /bin/bash
Now you can make changes to it and save them into your new image (while you are still logged in to progrium/buildstep):

  sudo docker commit <id> myownimage
OK, and inside this image I have referenced my own buildstep instead of the default Heroku one. The relevant txt document is in /build/.

And to get to the point I have changed the npm install line to run pointing to the http EU repo.

  npm install --registry "http://registry.npmjs.eu" --userconfig $build_dir/.npmrc --production 2>&1 | indent
That is one way to go about it.


Great, thanks. I also figured out that I could put the following line into a file called `.env` (I was originally trying to use a file named `ENV`):

    export BUILDPACK_URL=https://github.com/heroku/heroku-buildpack-nodejs
This will use Heroku's version of the build pack, which already has a fix for the NPM problem.


FYI, npm is rolling back to an older cert that they used a few years back. It's a GlobalSign cert, and the GlobalSign CA has been in npm since Aug 2012. When it propagates through their global CDN, old clients will be fixed without breaking new ones, and they can focus on a more permanent solution.

1. https://news.ycombinator.com/item?id=7323093


WTH "We are rolling back to the older cert now, but since the registry is distributed by a global CDN this process is slower than we’d like, and we don’t want to break things (further) by rushing the process."

http://blog.npmjs.org/post/78165272245/more-help-with-self-s...


What broke? Running the npm command? Running any nodejs program? The blog post gives few details about what people were doing when they hit the issue.


You can't npm install <foo> without getting an error message about the self-signed cert. At least until you update npm, which you usually do with npm update -g npm, but that will also fail with the same error. Fun!

The blog post doesn't mention deleting the config change after updating, which (I'm not an expert) might be important. I think this is a full fix: http://stackoverflow.com/a/22099006/2708274
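
A sketch of what such a fuller fix could look like (set the CA override, upgrade npm, then remove the override; these are standard npm config commands, not necessarily verbatim from the linked answer):

    npm config set ca ""     # temporarily tell npm to trust the standard CA bundle instead of the pinned npm CA
    npm install npm -g       # upgrade npm itself now that the registry's cert validates
    npm config delete ca     # remove the override once the upgraded npm handles the new cert on its own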


Did the npm servers previously serve up packages with the self-signed cert?


Yes.


Broken builds, broken builds everywhere...


Not here... cache proxy internally.


The npm source is crazy. Here's my favorite bugfix (bug being that the prepublish script simply wasn't running):

https://github.com/npm/npm/commit/2389525a1df6d20e780cca5887...


Wow, this explains so much. I had to download the coffeescript compiler last night to fix a bug in a library I'm using and hit this error running npm install. I didn't even think to consider that npm itself was broken. I guess it's not always user error!


Welp that didn't take long. I sure hope someone (who is more skilled than I) forks NPM so we can all move on from this craziness. It's getting pretty sad.


Still broken. Following the instructions on Windows, I am getting:

    D:\repos\js-master\phantomjscloud-backend>npm install npm -g --ca=""
    npm http GET https://registry.npmjs.org/npm
    npm http 200 https://registry.npmjs.org/npm
    npm http GET https://registry.npmjs.org/npm/-/npm-1.4.4.tgz
    npm ERR! fetch failed https://registry.npmjs.org/npm/-/npm-1.4.4.tgz
    npm http GET https://registry.npmjs.org/npm/-/npm-1.4.4.tgz
    npm ERR! fetch failed https://registry.npmjs.org/npm/-/npm-1.4.4.tgz
    npm http GET https://registry.npmjs.org/npm/-/npm-1.4.4.tgz
    npm ERR! fetch failed https://registry.npmjs.org/npm/-/npm-1.4.4.tgz
    npm ERR! Error: CERT_UNTRUSTED
    npm ERR! at SecurePair.<anonymous> (tls.js:1349:32)
    npm ERR! at SecurePair.EventEmitter.emit (events.js:92:17)
    npm ERR! at SecurePair.maybeInitFinished (tls.js:962:10)
    npm ERR! at CleartextStream.read [as _read] (tls.js:463:15)
    npm ERR! at CleartextStream.Readable.read (_stream_readable.js:320:10)
    npm ERR! at EncryptedStream.write [as _write] (tls.js:366:25)
    npm ERR! at doWrite (_stream_writable.js:219:10)
    npm ERR! at writeOrBuffer (_stream_writable.js:209:5)
    npm ERR! at EncryptedStream.Writable.write (_stream_writable.js:180:11)
    npm ERR! at write (_stream_readable.js:573:24)
    npm ERR! If you need help, you may report this log at:
    npm ERR! <http://github.com/isaacs/npm/issues>
    npm ERR! or email it to:
    npm ERR! <npm-@googlegroups.com>


Also tried on my Debian 7 server; broken there also.


I really don't understand why they would make a server change that would orphan all the existing clients.

If anything, roll out a new version and gracefully drop support for other clients.



