
> File-system Based Routing: If your code resides in ./api/login.js it's exposed at http://<SERVER>/api/login. Inspired by good ol' PHP days.

> Auto Dependency Resolution: If a file does require('underscore'), it is automatically installed and resolved. You can always create your own package.json file to install a specific version of a package.

This sounds like a security nightmare.

EDIT: to be clear, I don't mean the combination of both is a security concern; each one of them separately is problematic.

Putting my money where my mouth is - use this to leak any file on the filesystem accessible by the user running zero:

    # curl -v --path-as-is

Hah. Now that’s a Zero day Zero server exploit. :)

I don't think I would use any package that makes this classic mistake in 2019. Web services need to be at least vaguely secure, and this destroys all my confidence in that.

You would've thought OP would have considered the security backlash before posting to HN. Unfortunately, the damage is done for me too.

This was fixed literally the minute it was commented here. Securing any project is always a long-term and continuous effort.

This project is brand new, I posted the repo publicly this morning. I frankly think this subthread is an overreaction. I don’t get the hate.

I saw your comments fixing it - which is pretty awesome by the way (& I hope you've taken on board the other comments regarding securing it even further).

Due to my PHP experience, this would've been something I ensured was implemented properly from the outset. I know this to be true because I've dabbled in the very space you're working in now, and it was one of the very first things I ensured: that no file could be served except from direct descendants. (I rethought my project and tossed the code.)

For something like this you need to be absolutely sure about security. Have a look at the annals of PHP security issues - and most likely you'll see a lot of similarities that you'll need to make sure you address.

I'm sorry if you read hate in my words: definitely was NOT intended! I have nothing against you as a co-habitor of this wonderful planet! To me, the bug highlights that a few design considerations may have been overlooked.

> This project is brand new, I posted the repo publicly this morning.

Are you implying that you don’t think it’s ready for production use? If so, maybe you should do like a lot of projects, and warn about it loud and clear in the docs. It’s not clear at all that users should expect the type of blatant security problems that were discovered here.

If the bar to posting to HN was "I am finally confident this project is mature", rather than "I can post any link I want and if people think it's cool, it'll get upvoted", HN would have a hell of a lot less cool shit on the front page on a daily basis.

People are allowed to screw up: it's how we learn. They got comments that pointed out flaws, they fixed them and posted a follow-up regarding that fix, why the hate? This person tried to make something cool, they learned important security lessons, and now have a deeper insight into what they made, and the world it operates in. How is this possibly a bad thing we don't want to have happen on HN?

But after reading the thread following your fix it seemed that the fix wasn't done properly. That, more than the security issue itself, kind of ruined my confidence as well, sorry to say.

Thanks for pointing this out. Fixed this particular bug!

You also need to report this to security@npmjs.com so they post an advisory [1] and mark the existing versions as vulnerable.

[1] - https://www.npmjs.com/advisories

Anyone, including yourself, can do that.

Everyone's talking about it but nobody did it, so I did.

or, shouldn't the person responsible do it, rather than relying on others to do their job for them?

it's not their job. there's a reason anyone can do it.

Seems like your fix[1] for this was a bit rushed. You are already importing `path` in that file. Also, you can do this with just one `path.relative`. Lastly, the url module method you are using is deprecated[2].

[1] https://github.com/remoteinterview/zero/commit/b4af5325c388e... [2] https://nodejs.org/api/url.html#url_legacy_url_api

This fix does not even work on windows. You can still request data on a different drive.

A simpler fix might be to canonicalize (i.e. no "..") the public folder path and the requested file path and then ensure the public path is a prefix of the other.

Any fix also needs to be sure to resolve any symlinks before doing a prefix check.

Can't you just add some logic to set a root directory that files can only be included from?

Just like "good ol' PHP days!" as the docs say :D

I agree that this doesn't seem particularly secure, and the auto dependency resolution is a bad idea for other reasons in my opinion, but I don't see the security aspect of it.

The moment I can upload files to the application folder that are executed, I can just `require('child_process').spawn("my_evil_stuff", [])`. In particular "my_evil_stuff" could be some npm install command. I don't see how automatically installing the dependencies makes this worse than it already is.

EDIT: While this is a different attack vector than I was envisioning here, jexco has provided a scenario in which there are additional vulnerabilities: Vulnerable dependencies that cannot be managed.

Where is the management of requirements? How do you force LTS versions? Or roll back if a version has a vulnerability? When things are automatic you are unable to stop bad things from happening.

Not that I was going to say this is a good idea in the first place, but rolling back vulnerable dependencies is an excellent point that I hadn't thought of. Thanks!

Well, for one giving the app the kinds of write permissions needed for this to work is not exactly ideal.

So the app would need write permissions to its own folder. That's obviously a bad idea in a production deployment. I guess I was thinking that the dependencies would be installed during a privileged one-time "deployment run" so you wouldn't need the permissions after. Maybe I'm giving the thing too much credit.

If security is a concern, this is probably a bad choice; this doesn't seem to be advertised as a bulletproof security solution to anything, rather a utility for small little one-off apps that might need _some_ backend functionality. Once you start adding features like file uploading, you're obviously gonna want to pick a more robust option.

> If security is a concern

At the risk of being presumptuous... When is security ever not a concern?

When all other layers are secure, like wearing a bulletproof vest inside a bulletproof car inside a bunker that can withstand a nuclear blast. And add to that some security by obscurity, like a bunker in a secret location, only accessed via a tunnel.

That said, if security is top priority you also want to be ready if you get shot from within the car. But more importantly you need to define the most common scenarios and make sure you are protected from all of them. One scenario might be someone shooting you, but another scenario might be food poisoning which would require an additional, different solution.

You know it's always a concern, but context is everything.

> small little one-off apps that might need _some_ backend functionality

The security implications of serving a static website vs. a dynamic application that processes payments and queries a database are two different beasts.

Student projects

Conversely though - doesn't that lead to bad habits?

If security is taught at the student level, by the time they get to junior developer they'll have an understanding of it / do it automatically.

Pretty sure student projects should teach you something other than `$ npm install`, no?

When I was a hiring manager scouting juniors from bootcamps, I had conversations with candidates who would say, "I built user registration and login". When I asked them to talk more about it they said, "well, I installed auth0"... Any student project which doesn't teach them how something works is not really teaching anything of value, is it?

Expecting a student to learn how to code at all, not to mention code well, from an academic/bootcamp setting, is an expensive fool's errand for anyone that hires them.

Programming is not academic. It has more in common with plumbing and carpentry and electrician work: you learn only by doing, and you learn how to do it well by doing with critical supervision from a mentor.

The difference between engineering and trade work is that trade work, like the jobs you mention, either follows a plan written by an engineer, or prescriptive standards designed by engineers (and usually certified by governing bodies of engineers). Prescriptive standards allow skipping all the engineering calculations as long as the guidelines are followed and tolerances respected.

Software development (and a lot of hardware development, to be fair) is unique in that doing it well requires functioning as both an engineer and a tradesperson. One's skill has to cover a wide section of the spectrum.

That slight wobble in Earth's orbit we're experiencing, that's Dijkstra rolling in his grave.

All joking aside, programming should be treated a lot more like engineering and a lot less like craft. Yes, it does have aspects of both, but neglecting the engineering aspects of it is proving to be increasingly harmful to our end users.

> a lot more like engineering and a lot less like craft

I think the curve of diminishing returns plays an important role. A near hack job will often get you 90% there, in terms of fulfilling what was exactly requested. I don't think this is true for any other skillset. It's so easy to make something featureful and fragile in software. The time and cost above that can be very difficult to justify to customers/management.

In the words of a previous boss, after I pointed out we need more testing, "Everything is working, we'll fix the bugs as they come".

When you're learning, you need to also learn security implications of what you're writing. Insecure projects should never be allowed to pass.

I disagree somewhat. If the goal of a project is to teach a different skill, and it may cause too much of a headache to add a real server, this service might make sense. It's like when you're learning a new spoken language: it's better to practice a breadth of situations and vocabularies and make mistakes (that get corrected over time) than to learn fewer things perfectly.

Prototyping or proofs of concept

If you're ever running this, and you've left it open to a LAN or the internet, your entire system is vulnerable for use in whatever way someone wants. There are bots looking for stuff like this all the time.

Is this even a serious question?

Not everything runs online connected to the internet.

But almost everything does. Assumptions like this lead to ~40,000 unsecured MongoDB databases on the public internet [1]

1. https://www.information-age.com/major-security-alert-40000-m...

And printers. Although in the linked story the exploit already existed and the guy who did the printing sounds like basically a script-kiddie.


> When is security ever not a concern?

Internal applications where the entirety of the userbase are trusted employees. (Preferably, the userbase is small, too.)

Nobody’s going to bother finding vulnerabilities in an application where, if they break it, their own job gets harder.

I agree with GP. Security is always a concern. I too used to think as long as it's behind the firewall, or a local only exploit, it doesn't matter. But it always matters. Small apps become big apps. Small user bases large ones. Someone gets onto your internal network and then your small userbase app for trusted employees becomes a jumping off point, etc.

Your sort of thinking is how you end up with Yahoo levels of account leaks.

Security always matters.

I think your comment about scalability is accurate. Small apps become big apps, and small user bases get bigger. I’ve seen it happen — but I’m not going to think about scaling to thousands of users when I just need a small application to share with my team. If I spent five days building it to the utmost standards, instead of spending one day on something that solves a problem immediately, I’d be laughed at. It is the same with security.

> Your sort of thinking is how you end up with Yahoo levels of account leaks.

I wouldn’t store any of my customers’ data on an insecure internal service! I know that’s mad!

> Security always matters.

The first part of securing a system is to come up with your threat model, isn’t it?

> I wouldn’t store any of my customers’ data on an insecure internal service! I know that’s mad!

I'm completely sure that you're right. You know that would be irresponsible and reckless with lots of very sensitive data.

With that said, how sure can you be of every other person writing a simple, small, business app for just a handful of their coworkers? I've encountered some people doing exactly what you've described without the same level of cool-headed risk-weighing as you.

At some point you will be outcompeted by businesses that don't sweat stuff like this.

One of the key functions of GDPR and CCPA and PIPEDA is to make many businesses consider what kind of liability might be attached to things they might otherwise opt to not sweat.

None of those apply to internal software that isn't used to store customer data.

I really hope you don't work for our infosec department!

>_some_ backend functionality

Famous last words.

...why? I get that file-system based routing means you know the location of a source file on disk, but if anyone can access that file you've already lost.

And auto-dependency resolution also doesn't seem any larger a security concern, all it's doing is skipping an "npm install" command.

Because if you ever have a broken upload system that allows you to drop a JS file somewhere accessible by the file system routing, you have remote code execution. Additionally, you now have to write guards in every non-endpoint JS file so that it doesn't get executed just by a misplaced HTTP request.

And as for automatic dependency resolution, this means you're not even aware of what transitive dependencies you're pulling in, what version they are and have no way to vet anything - everything is hidden behind a wall of magic.

Make your application directory read-only to the user running the application, as it ought to be anyway.

Automatic dependency resolution however... Fantastic for experimentation, but that's a dealbreaker for production. Maybe it would be OK if it actually wrote the package-lock.json to the application directory, I'd have to think about that.

In this case the application also tries to auto-install dependencies, so making it read-only removes one of the stated features.

I think this framework hasn't been written with security in mind at all.

Whatever it's doing, it's not writing the packages into the application directory.

If it can write to the application's dependencies, isn't that as good as writing to the application?

Probably having .htaccess / .env / database configuration / other files that are not supposed to be public exposed.

For instance, Rails has a public/ folder for files that are going to be served. And Jekyll hides files by pattern-matching them[1].

Zero doesn't seem to exclude folders by default. The solution would be to run Zero in a subfolder and require files from the parent folder, which would act as the tree's root.

[1]: https://help.github.com/en/articles/files-that-start-with-an...

Currently, files starting with _ (underscore) are hidden in zero. This is still a feature spec we need to finalize as this can create confusion. Maybe a .zeroignore file (as suggested in another comment) would be a better idea.

Why not reverse that and use a whitelist instead? It's a lot easier to decide what folders and files should be served than to think of all the things that shouldn't.

Next.js uses a pages/ subdirectory which gives you implicit routing without giving up the whitelist aspect. I think it's a better compromise.

So if an attacker already has access to your filesystem to modify your files, they can install stuff?

By this time it's already too late.

There's different levels of 'filesystem access' vulnerabilities.

Some classes of bugs that would be otherwise tame due to the constraints (eg., file upload that might be able to only create new files in some part of the directory tree, or a buggy routine that lets you create arbitrary symlinks, or leftover VCS/CM files that happen to end in .js and are not filtered out by the router) now become the most powerful kind, remote code execution.

Exploiting auto-downloading modules requires that the filesystem's already been exploited to the point where the app's code can be modified.

I could add `require('foo')` but I could also just require no third party code and have fun with the `process` module.
