There's already a cultural anti-pattern in the js world of just `npm install`ing stuff and shipping to prod without auditing well (I've heard a number of times, "well it has like 60 stars on github").
If the project marketed itself as a "development only" or non-production framework, then I'd agree with you 100%. However, as it stands, it's dangerous and could lead to extreme compromise of a system if it gets deployed to a production environment.
That said, security-minded people are often inconsiderate and horribly untactful in their approach. That needs to change. You don't need to be overly negative to point out a security issue. Something like "Cool start, but you might want to point out that it's not meant for production!" would be a lot better IMHO.
Agreed, and I also agree that maybe it should have a tagline about "not production ready" or even "never production ready". Not sure what the end goals of Zero are. I will say I thought it was pretty evident that this was not for production (if only from the sheer amount of "magic" inside), but maybe that's just me and it should have a disclaimer to that effect.
Devs need to change their culture. This behavior is actively harming end-users through repeated data breaches.
This isn't new, and not knowing how to deal with it is like a builder not knowing how to safely stand up a wall.
That's a great idea! It should definitely be implemented.
With that said, what do you make of the point that things that are clearly not suitable for production (and sometimes labeled accordingly) have a nasty habit of making their way into production anyway? Do you think it has salience here? You're clearly a thoughtful person with your heart in the right place, so I'd very much like to hear your opinion.
Again, your point is both completely valid and absolutely correct. Purpose and labeling should be clear enough that no person could possibly miss them.
I don't think I have a good answer. I'm currently serving a little over 6 million requests a month for a service I never advertised, yet some non-zero number of developers copied code from Stack Overflow, or wrote code based on my GitHub project, to hit an endpoint that was just a demo. Since it was unauthenticated and I didn't notice the traffic until much later, I have no good way to shut this down without pulling the rug out from under people. Eventually I will have to, but since it's not really costing me anything extra, I've left it.
I did put a warning on it when I noticed the traffic, but that was three years ago, when it was getting 2 million fewer requests per month, so...
I guess yes, we as developers should be better about putting warnings and the like on our projects, but by the same (if not greater) token, people using existing code/services bear the responsibility to vet the things they use. I don't know how to get people to vet stuff better. Hell, I don't know how to make myself do that. That said, you take risks every day: you risk your life when you get behind the wheel, and you risk security holes when you don't vet code you use or build upon. I've weighed the risks of driving against the rewards and found the risk acceptable. For a lot of software I use, I've made the same decision. I just don't have the time and energy to vet everything for myself, and timelines and other pressures at work make it equally impossible there.
At what point do you say "Ehh, it's secure enough below this point"?
* npm library with thousands of dependencies?
* npm library with no dependencies?
I honestly don't know a consistent/standardized way to handle it.
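One rough proxy for that question is simply how much code you'd have to vet. A minimal sketch (the lock-file contents below are a made-up example, not a real project) of counting installed packages from an npm v2/v3 `package-lock.json`, where every key under `packages` except the root `""` entry is something that lands in `node_modules`:

```javascript
// Sketch: estimate the audit surface of a project by counting every
// package its lockfile would install. The lock data below is a tiny,
// hypothetical example of the npm v2/v3 package-lock.json shape.
const lock = {
  lockfileVersion: 3,
  packages: {
    "": { name: "my-app" }, // the root project itself
    "node_modules/left-pad": { version: "1.3.0" },
    "node_modules/demo-lib": { version: "2.0.0" },
    "node_modules/demo-lib/node_modules/ms": { version: "2.1.3" }, // transitive
  },
};

// Every key except the root "" entry is an installed package you'd
// ideally vet before trusting the tree.
const installed = Object.keys(lock.packages).filter((key) => key !== "");
console.log(`${installed.length} packages to vet:`, installed);
```

In practice, `npm ls --all` or `npm audit` against a real lockfile gives the same picture. The point is that "no dependencies" keeps this list at zero, while a framework can easily push it into the thousands, and the vetting cost scales with it.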
At this point I'm leaning towards deprecating the idea that encouragement, positivity, and documentation will lead to developers making good decisions. As you've demonstrated, it's clearly insufficient, and I suspect your experience is an outlier but far from unique.
Increasingly, I think we may have to consider measures to get it right the first time. And we have to be sure our peers do too, because we're all living in the same environment and context. Otherwise, sooner or later, someone who didn't read a warning label is going to try to build a PII-handling business on Zero Server (or similar) and it's going to be a dumpster fire.
I would love a world where friendly encouragement, niceness, and documentation could scalably do this. Sadly, it's perhaps possible that that world and this one could be slightly different.
One starting point might be re-examining whether all the negativity shown in reaction to Zero Server is actually excessive. Some of it surely is! It's also perhaps possible that some of it could be viewed as the horrified reactions of professionals who care about quality (and think about downstream effects) coming face-to-face with reckless engineering.
I don't fully understand what you're advocating, so I apologize if I am misinterpreting or mischaracterizing your position. But I can't imagine "mature professionalism" including berating or embarrassing another developer for creating a security hole. Some people do need that, but most don't. Many devs I've worked with are horrified when they find out they wrote in a vulnerability, and they try to grow and find out better ways. This is startup culture tho.
When I worked in enterprise type environments it was the opposite. There, you could send the dev a working exploit and most of them would groan and bitch about how "you security guys just want to tear things down and break things." I can understand being more of a dick in those situations, but I would still contend that's a venting of frustration more than a desire to actually teach. Most people just get defensive and put up walls when they feel attacked or embarrassed, and the learning is over at that point.
One issue is that full-throated encouragement coupled with suggestions of problems couched in uncertainty is the Dale Carnegie approach. I suspect you're using it now! It's well-suited to a great many situations, as documented in the man's seminal work.
Unfortunately, this suitability is not universal. It is fallible, and in security those failures can be quite dangerous. The Carnegie approach thus described makes it very easy for devs to notice the encouragement and ignore the suggestions of criticism. I have personally encountered this reaction in both open source and enterprise-y contexts, generally from developers who might be charitably described as highly enthusiastic. Including right here on HN!
I've also encountered the hostility you described in reaction to kind, generous, compassionate security reports of the sort you suggest. This has happened in open source, startup-type, and enterprise-y environments.
The key to what I'm advocating is this: you are not your code and the other person is not their code. Who wrote a vulnerability is not as important as that it exists. How it can be fixed, and how it can be prevented in the future, are what matter.
Professionalism means understanding the distinction between a craftsperson and what they have produced. It also means understanding the distinction between yourself and your work. It means understanding that ignorance isn't a character flaw, it's a temporary state of affairs that can be fixed.
It also means realizing that someone who refuses to participate in fixing their technical ignorance is being unprofessional. Such a person might benefit from correction.
> The Carnegie approach thus described makes it very easy for devs to notice the encouragement and ignore the suggestions of criticism.
This is definitely true, I've seen it too. Definitely something to watch out for.
I guess at the end of the day I think it comes down to "know your audience." Similar to the Principle of Least Privilege I like to follow (what I call) the Principle of Least Criticism. Don't use any more criticism than what is necessary, but (and I suspect this is where we will both agree) you need to use enough criticism that the person understands your point.
With that said, there might be some significant limitations on the approach. The principal one is that it relies on knowing the individuals involved fairly well. This is easy in a close-knit and small startup environment! It could perhaps be more difficult in a sizable enterprise or open source context where you don't know the other party, don't have time to build a relationship, and/or can't rely on a long conversation to slowly build up to the least amount of criticism required. There's also the question of how to handle groups or meetings in which different people have different thresholds. When one person's minimum required might be someone else's excessive negativity and they're both in the room, there might not be a winning outcome with this approach.
Knowing your audience is an excellent and wise maxim to live by. It might not always be as simple to live by as could be hoped.
Thanks for the discussion
That's a wonderful idea! I'm absolutely certain that people will invariably respond quickly and reasonably to kind, compassionate, considerately made points. Especially ones that are very cautious to couch anything that might be taken as negative as a potential or a possibility.
For my own part, I've found this practice to be both exhausting to implement and highly unreliable in deployment. I'm absolutely certain that these just reflect my own failures. I'm similarly sure that you've seen infinitely better results!
After all, everyone knows that casually documenting something to the tune of "This code might not be as safe for production as it could be" will yield a reasonable level of caution in all developers.
I've also been part of many open source projects, and it's very common not to release or announce a project until we've proven it ourselves in production.
So while I think one should definitely exercise caution with brand-spanking-new code, there's no way to know whether a project was just ideated recently or has been in development for a while and only just got released/opened for the first time.