Hacker News
I'm not burned out, I'm pissed off (myname.website)
530 points by eitland on Nov 27, 2019 | 312 comments

I'm a recovering security guy.

When I listen to security people rant, I can see their points, and it's a bit of fun; I like a good rant. But I get the impression that they're continuously discovering new and exciting ways that individual facets of individual pieces of software (and the processes around them) suck, all without ever accepting that the entirety of the software ecosystem sucks (and that they're rarely moving the needle on that front).

All software and the internet combined is a giant ball of mud that just grows and grows as more people add onto it. There's no more architecture than the strict minimum needed to keep the whole thing from falling apart the moment someone breathes too heavily near it. And that's before you count the commercial interests that keep trying to design their chunks of the mudball in unique ways that make themselves more money at the cost of making everyone else's chunk more complex.

Like everyone else adding mud, security wants to get in, hit their requirements, and get out. I just don't like the chip on their shoulder that nobody else is doing enough to fix the system throughout in a way that helps them achieve their goals with the least fuss.

Not really security, but regarding software in general: mud is one of the reasons I gravitated towards pure FP languages. It doesn't solve everything, but the added guarantees shift some of the cognitive burden away from having to dig into every method to see what's going on, and I can spend that mental budget elsewhere.

Maybe FP forces some degree of added conformity on the code. But I find code maintenance and readability have more to do with the person writing it than the tool they used, IMHO.

FYI I am in fact an FP fan, and I use FP principles in a large amount of my code. But I've seen some seriously convoluted and confusing FP code just like I've seen behemoths of objects with twisted internal state manipulation balancing on blind assumptions and false guarantees of order.
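To make the "added guarantees" point concrete, here's a minimal TypeScript sketch (my own illustration, not from the thread): a `readonly` parameter type tells you from the signature alone that the function can't mutate its input, so you never have to dig into its body to check.

```typescript
// A readonly signature guarantees the caller's array is left untouched;
// you can reason about this function from its type alone.
function topScores(scores: readonly number[], n: number): number[] {
  // slice() copies before sorting, so the input is never mutated
  return scores.slice().sort((a, b) => b - a).slice(0, n);
}

const scores = [70, 95, 88, 60];
const best = topScores(scores, 2);
console.log(best);   // [95, 88]
console.log(scores); // still [70, 95, 88, 60] -- unchanged
```

Had the body called `scores.sort(...)` directly, the compiler would reject it, which is exactly the kind of check that frees up mental budget.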

This reminds me of the "but think of the children" mantra. From my observation, programmers often tend to justify/promote their languages/tools/etc. under the pretense of solving security problems.

"Not really security"

> and that they're rarely moving the needle on that front

This is key. They're not the ones pushing to replace C/C++, even though you'd think that would logically follow. In that way, infosec is the DEA of IT.

I'm supplying a +1 for this comment because it's hilarious to think of our industry as being as useless as the DEA.

> I get the impression that they're continuously discovering new and exciting ways that individual facets of individual pieces of software (and the processes around them) suck

As a software engineer, I think this is my experience at every job I've had around systems in general.


As a person sometimes overly distracted by grammar, I would like to thank you for using the "As a [description of subject], [subject] [rest of sentence]" construction correctly. Getting this wrong, which is called a "dangling modifier" error, is common. Examples from today's front-page HN discussions:

As a new (1 year) coder starting out at 39 years old, this sounds very similar to the path I envision for myself.

As another former Googler, yes, searching the entire monorepo was a frequent occurrence.


As a new (1 year) coder starting out at 39 years old, I envision a very similar to the path for myself.

I'm another former Googler. Yes, searching the entire monorepo was a frequent occurrence.


If you're going to pedant, at least fix your first "correction" so that it includes its subject.

Ha! You got me. Poor editing. Please delete "to the" in the corrected example.

(or maybe I was just adhering to Muphry's law https://en.wikipedia.org/wiki/Muphry%27s_law.)

I read the examples of dangling modifiers you give as assuming "Speaking" at the start of the sentence. So, e.g. "Speaking as a new (1 year) coder ... this sounds very similar...".

That's still a dangling modifier, because "this" is the subject of the sentence, and "speaking" doesn't modify "this"; rather, it's meant to modify an "I" that's not in the sentence.



(I found this mildly interesting!)

As a fellow Hacker News commenter, thanks for pointing this out.

There is a paper on "balls of mud" that I feel is essential reading for our discipline. http://www.laputan.org/mud/

Thank you for reminding me of this paper.

It is fascinating to me that the content (written 20 years ago) reads like an up-to-the-minute description of the madness of IT today.

The continual repetition of anti-patterns is so widespread and commonplace that it is clear to me that expediency will almost always win out against any engineering prudence.

Expediency gets assigned highest priority -- typically without any real reasoning. The negative effects accrue slowly but consistently -- and correcting for the shortcoming gets consistently more difficult.

The condition is then self-perpetuating.

I don't understand what "accept" is meant to mean here. If we accept that everything sucks, the only thing left to do is leave the field, since it will always suck no matter what we do. If we try to make it better, then we're not accepting that it sucks. I think there is a better way to phrase what you mean.

Leaving the field won't help, the whole damn world sucks.

You have to accept that there's no perfect alternative out there: you won't magically find a flawless system tomorrow, and system design is hard because the world is constantly changing and you don't have perfect information. If you just chase "something better" because you can't accept things as they are, you'll continue to be disappointed.

Once you accept it, you can start trying to make your little corner of the world better. Not perfect, but better. It's more satisfying than shouting into the void.

Thank you. I think I sounded too pessimistic in the grandparent post but you saw where I was coming from.

There's always more work to do. On almost everything. As a former co-worker liked to say: If it wasn't hard, they wouldn't pay us to do it.

You kind of need to accept that it sucks for you to want to make it better.

If you thought it was already perfect there would be no reason to.

The problem is interpreting what "it" means when someone says "it sucks." All software and the internet, like GP uses it? I don't think that's a fruitful line of thinking.

From experience I know it’s not all, but very close to 99%

you can acknowledge that it sucks, refuse to accept it and try to make it better

I hate when I see people throwing in the towel like this.

As a two developer company with four separate products doing nearly $500k in ARR collectively, Kumu [1] is a living example that it doesn’t have to be this way.

We rely heavily on bash, docker and cloudformation.

We only use Ubuntu LTS and we lag a release behind so there are plenty of tutorials available when it comes time to upgrade.

After experimenting with backbone, coffee script, flow, vue and multiple redux libraries we’ve settled on rewriting everything in typescript and developing our own thin redux abstraction.

Embrace new tech that makes developers’ lives easier while hopefully making things more secure too.

I get it if that’s not possible in large enterprise companies, but please don’t throw all software under the bus. Software is and will always be fun and there are still fun companies to work for if you’re willing to take a little risk.

[1]: https://kumu.io

I'm a bad writer. I'm not trying to throw in the towel, say software can't be fun, or that useful products can't be built within the status quo.

I can look at the product I develop and rattle off an impressive list of capabilities, talk about how well it is designed, say why it's the best product for the job on the market and talk about successes in the field. But I can also look at it and see a laundry list of design flaws, architectural limitations and unrealized enhancements that may never get time on the schedule.

The second side of the fence exists, and security people inherently spend a lot of time there. Spend enough time there and you see how the system that is the sum of all software is a mess. I'm not even saying it's a bad thing, just that it's the inescapable reality.

Your architecture is like swimming in a school of fish: by moving with the group you benefit from the successes of the group. Ubuntu, docker, typescript and delivering your product as a webapp bring a lot of benefits in featureset, maintenance and training at a reduced cost. For the same reasons I also prefer to use as much popular off-the-shelf tooling as possible and stick to familiar designs wherever possible.

You're probably doing better than most. But even with all that benefit, the components of your system are fraught with defects and limitations that in a perfect world would already be solved problems. Both in the stack you use and your own software. And you make it work despite that. Great. That's not my point.

Your writing is just fine.

To me, most of the critical comments seem to miss the point that your frustration centers on the foolishness of trying to "go as fast as possible" while at the same time "your shoelaces are tied together."

Not a bad writer at all. And I think all the problems you describe do exist. I'm just saying keep your head up and look to the bright side.

Be happy that you have a job that compensates you well, aligns with your values, is flexible to your personal needs, allows you to grow professionally, and enables you to reach for the goals you've set while you're here on earth.

And if that doesn't describe your job, please quit and come work with us or any other company that respects you as a human being first and a sysadmin second. Life's too short to do otherwise.

Docker has its own security nightmares and mis-designs -- for instance, are you using user namespaces? With LXC and LXD user namespaces are the default (and unlike Docker's design, they can use different ID mappings which blocks inter-container attacks). There are plenty of other missteps I can think of.

(I am a maintainer of runc and have contributed to Docker for a long time, as well as collaborated with the LXC folks.)
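For concreteness: user-namespace remapping in Docker is opt-in, enabled through the daemon configuration. A minimal sketch of `/etc/docker/daemon.json`, where `default` tells dockerd to create and use a `dockremap` user for the mapping:

```json
{
  "userns-remap": "default"
}
```

Even with this enabled, every container on the host shares the same ID mapping, which is the inter-container weakness mentioned above; LXD, by contrast, can give each container its own mapping.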

I love lxc/lxd. It's really a shame that there is little to no interest from the lxd team in supporting the OCI container format.

I assume you're referring to the OCI image format (not the runtime spec). This is because the OCI image format doesn't quite meet what they want for LXD -- in particular the whole layering design that OCI uses (which was inherited from Docker) is simply wrong for them. In fact there is a strong argument that the layering design doesn't even match what OCI really wants (it effectively embeds an optimisation for "docker build" into the storage format).

I am actually working on improving the current state of OCI images[1] by using a snapshot-based tree structure -- which will also solve many problems we have in OCI that are independent from LXD. But it is possible that the LXD folks would be more interested if the OCI format more closely matched what they need.

Though it should be noted that LXC has had an OCI template for several years now[2] (and it actually uses a tool I wrote -- umoci -- to extract the OCI image).

[1]: https://www.youtube.com/watch?v=bbTxdzbjv7I [2]: https://github.com/lxc/lxc/blob/lxc-3.2.1/templates/lxc-oci....

Yeah I am aware of the oci-template. I was mostly thinking of discussions like this[1] where Stéphane says there are no plans to support anything like that in LXD.

I find the distinction between "system containers" and "application containers" to be a bit arbitrary from a technical perspective. What does it matter what I'm running as PID 1? I find both system containers and application containers to be useful.

It seems like LXD would see larger adoption if it were easy to run docker container images directly (built into the LXD tooling).

[1]: https://discuss.linuxcontainers.org/t/using-oci-templates-in...

1) Increasingly, if you want to be in infosec, you have to learn how to code on the level of a SWE. This is how to not lose your mind when constantly addressing sec issues that others (devs) are entirely responsible for fixing.

2) Department of No doesn't have to be a thing; it just takes some emotional intelligence and pragmatism. 'Always saying no' is as much the fault of the sec eng as it is the system. If you have the willpower and a generally positive mindset, it's more than easy to go through a sec career without getting angry. It just requires not defaulting to No, and ....

3) Understand people want to do a good job. If you tell them they're doing a shitty job, they'll tell you to fuck off. If you (sarcasm font) admire what they're doing, and have some stuff that can get added to build an even better product, people are surprisingly amenable to working with you. It seems like 70% of sec folks think their job is to say No, and as a result never get there.

4) This was a huge lesson for me that made me understand the field: to be successful in sec, you must learn that your personal risk tolerance will not always equal the enterprise's risk tolerance. Making money, in a digital field, is an inherently risk-on undertaking. It's crucial to know what to not escalate beyond your team, what to not cause a fuss about, what is just a known risk that will continue to exist, and then have clear requirements for what risk you will escalate and shake cages about.

It seems like all of these sec rants boil down to seeing the system and being unwilling to work within it, vs. seeing the system, accepting it, and figuring out how best to navigate it, and what tools and personal skill sets you'll really have to maximize to succeed in it.

>Department of No doesn't have to be a thing

This is the key. Mastering "yes, but" will not only make people like you more, but will also change your personal outlook on situations and help you avoid the kind of burnout the author describes. Even if you don't realize it, there's a very different emotional impact between "I had to say no to that proposal" and "I presented the options for that proposal, and they decided not to go through with it". For lack of a better term, it's no longer "your fault".

From the article:

>No, we can't do that. No, it doesn't support that. No, the vendor doesn't allow for that. No, you don't have the right license. No, no, no. Isn't the point of technology to enable businesses? So why am I saying no so often?

Because you're shit at communicating. Yes, we can do that, but you'll need to (pay for X || do Y || get Z to do this other thing). Yes, but your current vendor doesn't support that, although vendor Q does. Yes, but your current license doesn't do that, you'll need this license instead.

The joke is once you present the "effort" required to do what they want to do, 80% of the time they'll give up on it anyways. It's the same end result. But you didn't have to say no!

I think you make some excellent points. One side effect of working with Agile instead of waterfall systems is that changes are pushed out to production at a staggering pace, so the job of the infosec person today requires some understanding of the fundamentals of SWE. Hilariously, I see this in my own organization, which has a Security department staffed by old-school "security" folks who have very little idea about what our systems actually do, what kind of crypto methods they use to secure things, etc. There is one person who does, and she is the only person who gets anything done.

I agree with everything you said. To expand on point 3, when you are pitching a security change, know your audience. Most people hear security change and they think another inconvenient pain in the ass that I'm going to have to deal with. That changes if you can show them benefits to people or to the business that are not security related. For example, traditional approach:

Boss: You are advocating spending a lot of money on a configuration management solution, why?

Sec: Well it can help prevent heterogeneity in our environment which can lead to different versions of software being run, some of which may be old and buggy and therefore exploitable.

Dev team: So now we need to do change management meetings and institute even more controls? No, this will slow us down!

Ops: Sigh, another fad system we will have to support, learn, and test.

Boss: This seems like a lot of money and inconvenience for a theoretical problem, no.

Better approach:

Boss: You are advocating spending a lot of money on a configuration management solution, why?

Sec: Because it's a win for the business. Our Dev teams can avoid dependency hell and be confident that code they develop on their machines is going to work in production because our production environment will mirror the development environment. This will let them ship faster and spend more time doing real work. Ops will be able to scale much faster and spend less time dealing with configuration issues. Imagine being able to develop a config and spin up 50 servers with it at the same time while you sit back and sip a soda! No more fixing damaged boxes, just replace them! No more fighting with the dev teams! Oh yeah, we also get a security benefit for free while making our lives easier as we won't have to deal with heterogeneous software running on our boxes which presents a security vuln. The ROI for the business will be large.

Everyone: wow sounds great, let's do it!
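The "spin up 50 servers from one config" pitch maps directly onto tools like CloudFormation (mentioned elsewhere in the thread). A hypothetical fragment, with placeholder AMI and instance type:

```yaml
# Hypothetical sketch: one launch template, fifty identical servers.
Resources:
  AppLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: ami-0123456789abcdef0  # placeholder image baked from the shared config
        InstanceType: t3.medium
  AppFleet:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "50"
      MaxSize: "50"
      DesiredCapacity: "50"
      AvailabilityZones: !GetAZs ""
      LaunchTemplate:
        LaunchTemplateId: !Ref AppLaunchTemplate
        Version: !GetAtt AppLaunchTemplate.LatestVersionNumber
```

Every box comes from the same template, so the homogeneity the security team wants falls out of the scaling story the business wants.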

>Increasingly, if you want to be in infosec, you have to learn how to code on the level of a SWE

I think this depends on what kind of infosec work you plan to be doing. I have been a security engineer for the past 4ish years, and while I do have some programming/scripting skill, it certainly isn't at the level of a SWE. On most of the projects I work on, I end up doing pentesting, risk assessments, remediation for compliance, assisting with policies/procedures, and/or security-focused devops work. It would definitely be useful if I could program at the level of a SWE, and I do plan to continue to develop my skills, but I wouldn't say that to be in infosec you have to learn to code at the level of a SWE. I agree with all your other points.

I've worked in a shop where InfoSec tried to be the Department of Yes. They were very effective at that - they basically never said "No" to anything. The downside was that they weren't actually able to say "No" to the things that they really needed to be able to change. A lack of accountability for decisions meant that people could ignore InfoSec consequence-free. All told, I've learned that security needs to have the ability to say "No" and the organizational backing to make it stick. It's the implicit threat that makes other groups play ball.

I've had at best mixed results with covering people with positivity and admiration in an effort to get them to fix things. It's often easy enough when it's something really small that looks easy and understandable. When you've got a whole architecture predicated on everything accessing everything unchecked, suggesting that maybe this absolutely amazing architecture could be even better with a tiny bit of TLS and authentication is unlikely to get you anywhere.

3) makes me believe even more deeply that most work life is about satisfying other people. You feel motivated because you feel capable of making X other people happy.

I 100% agree as a consultant working in product development. I think what drives this is a lack of ability for anyone to understand the end-to-end product from a technical standpoint and make coordinated decisions about direction. Instead, you have 30 teams with their own architects and roadmaps (which often overlap functionality) so you build the same thing 5 times across the org, then 3 of them end up drawing meaningful adoption so you have to figure out how to support that on an ongoing basis.

All of this leads to non-technical sales people becoming the de-facto source of product feedback. Which leads to non-technical middle managers making product decisions with a short-term mindset. It's impossible to do a good job under conditions like this, so folks just check out and clock their 40 per week.

I think this is the cause of a lot of burnout. People get emotionally invested in their work, but the way tech is structured, quality doesn't matter as much as velocity. Individuals don't fully understand how much impact their work has, so it can seem like toiling away in obscurity for years on end. That doesn't feel good to anyone, but when the company is making 35% margins it's really easy for them to ignore the cash bonfire.

I experienced this several times as an employee. It becomes hard to stand when you realize that every positive contribution just results in more money being shoveled out of the window behind your back (mainly because of greed, inefficiency, keeping the status quo, and reckless behaviour born of trust in your capability to somehow fix it again every time) -- as opposed to contributing to a more efficient way of working for everyone.

Been doing extreme over-hours in the hope of fixing stuff once and for ever, only to realize your job becomes more and more like shit-shoveling, since management starts to feel invincible (and protected by your contributions), which leads them to make even more errors without accountability. I've seen some of them with tears in their eyes when I finally quit. 'nuff said. Profiteers!

> Been doing extreme over-hours in the hope of fixing stuff once and for ever, only to realize your job becomes more and more like shit-shoveling, since management starts to feel invincible (and protected by your contributions), which leads them to make even more errors without accountability.

Legal departments have been dealing with this problem forever too. Such is the plight of being in an assurance function.

Induced demand doesn’t just apply to traffic, it turns out. https://en.m.wikipedia.org/wiki/Induced_demand

This is actually a very common systemic problem, even happened at organizations like NASA. When the risk is reduced in a single area within a system, people would tend to increase risky behaviors in other areas because they unconsciously justify those behaviors with a perception that the entire system has now become safer.

I'm afraid there's no solution to this. In the end we are all humans. How does one even begin to solve a psychological problem at a large scale like this?

> Been doing extreme over-hours in the hope of fixing stuff once and for ever, only to realize your job becomes more and more like shit-shoveling, since management starts to feel invincible (and protected by your contributions)

Isn't that when you start earning a lot of money?

> Isn't that when you start earning a lot of money?

This is very dangerous thinking. Figure out why this is true if it is not already obvious.

* Not sustainable
* Not healthy
* Not scalable

No sarcasm here. Try to avoid this thinking-trap.

Correct. Also, they won't give you that money if they know your motivation is the will to fix things. The money flows instead to the ones whose motivation is receiving more money.

Huge problem in many orgs. Many product decisions are made on the fly by the wrong person in an attempt to check a box and get something out the door quick. Or if the right decision gets made upstream, teams downstream don’t align for a litany of reasons.

Haven’t been able to put my finger on it exactly (it’s definitely a multi-dimensional issue), but I don’t think the issues you outlined will continue to fly if you want something of quality that’s not half-baked.

This excerpt from the Agile Manifesto comes to mind as part of the problem, one that I’ve come to disagree with (at least as I’ve seen it implemented in the past):

> The best architectures, requirements, and designs emerge from self-organizing teams.

Do you have any counter-example in mind?

The best architectures, requirements and designs come from a team with a fantastic lead/architect/product manager?

This. 100% this. Any product of real complexity _must_ have a single person or a very small group of people overseeing it. This is doubly true if your product is using micro-services.

The product my current employer is working on suffers immensely from a lack of central leadership to the point where each of the pieces of the application have different (and sometimes contradictory!) ideas of what the world looks like.

It's absolutely maddening.

Any product of real complexity is too big for any one person to hold in their head with sufficient detail. That's why we have modularity. (Microservices are just one kind of module boundary).

In my experience, central leaders want to impose grand sweeping worldviews that just aren't true when you get into the weeds of particular use cases. The Director gets a pretty architecture diagram but the engineers actually building and maintaining the feature live in a train wreck. And for what? You don't actually need uniformity across unrelated parts of the system. Architecture shouldn't be about a director's view of his kingdom, but about engineers' ability to sustainably deliver business value, which starts with using the right abstractions for the problem at hand.

I think we would be able to deliver business value a lot easier if everyone was consistent in the major ways. As it is, everyone has their own idea of what constitutes a dropdown.

Yes. If you think about it, the industry has been growing faster than fantastic leads/architects/project managers can be developed. Those roles take time and experience to nurture, develop battle scars, and grow.

As a backfill, many orgs are forcing people who are in over their heads to pinch-hit (not their fault; oftentimes they are also stepping up to help, which is great!). The problem is that these people and teams get to the point where they are just trying to survive until the next sprint.

Wash and repeat, sprint to sprint: deploy something to check the box, but it’s building a house of cards.

This isn't a counter-example, but a counter-motto.

Since you want to stick with theoreticals, a lot of great projects come from groups of close friends working together. You need a "fantastic lead/architect/product manager" when a team like that isn't available, and you need to compose one artificially.

This comment nails it. Is there any way to fix this problem? Not the burnout, as that's the symptom, but the cause: bad structure.

I'm a software engineer in a research environment. GP sticks out to me because it's nearly the opposite of how it works for me. I as the architect/lead software engineer make basically all of the important decisions about the software. I have a project manager and about 6 researchers who are, theoretically, my customers, but they all know that they're not software engineers so they tell me what they need and defer all intermediate decisions to me.

Combining my experience with GP, the glib, almost tautological, answer is that you fix the problem by not letting "non-technical middle managers" make technical decisions. Actually implementing this in an organization that already suffers from it is a political problem which I don't have any advice on. For orgs where this isn't a problem yet, the answer is simple. If you're a technical organization, don't hire non-technical managers. Every single person in my management structure started out as an engineer or researcher and it works remarkably well.

On your average project, how many people write code? I'm guessing fewer than 6. The OP is talking about problems that generally start to manifest once the number of programmers is more than the number of people who can reasonably crowd around the same computer/whiteboard discussing architecture at once.

Enterprise architecture, done well. But the tooling is incredibly difficult, it's expensive, and you need a culture devoted to engineering discipline over raw productivity. Other industries manage this just fine: manufacturing and defense contracting are two that come to mind. Both industries have pretty good work/life balance for their employees.

Tech is just worse because we haven't unionized like both of those industries did, so the companies are able to operate in a way that is more exploitative of its workers. I think that's actually the source of the outsized margins we see in tech. Agile is just shorthand for "it's too expensive to coordinate things so we'll just make our engineers do that on top of being engineers". The solution is to unionize and start to demand some standards in engineering discipline that allow people to have both a career and a life.

> you need a culture devoted to engineering discipline over raw productivity

Where can I find this? Sign me up -- please!

> Where can I find this?

In industries where engineers are unionized.

This is so so very true

"A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don't do one."

No one in management wants the expense or the overhead of actual security. They want the "theater" of security, the good feeling, the box to check on their resume (as management) so everyone can pretend everything is fine and go back to the business at hand. Then something real happens, a leak of user data, or credit cards or internal memos... suddenly everyone's job is security and no one knows what to do. The last problem gets solved, there is more "theater" and a few quickly forgotten changes that get worked around or just ignored in the long run.

Furthermore, your average engineer wouldn't eat a ham sandwich handed to them on the street by a stranger, but will happily run code from hundreds of other people on their servers without even looking at it. Note: I too am guilty as charged. Sure, you can vendor it, and miss out on future security patches (it was already broken). Or you could just pull it from whatever repo you got it from to begin with and pick up new flaws. Never mind the fact that pulling from random places assumes that all those other chains of trust remain uncompromised.

Management and engineers should be the ones most concerned and most thoughtful regarding security, and both seem to ignore it for cost and convenience reasons till it is (far too late and) a REAL problem.

>They want the "theater" of security, the good feeling, the box to check on their resume (as management) so everyone can pretend everything is fine and go back to the business at hand.

And don't forget the universal elixir that cures all security ills: adding even more rules about which passwords are allowed.

Make sure you change your password every 30 days, too. That way I have to write it down, or use a sequential password, just to be able to remember it.

"Your password is too similar to one of your previous 24 passwords" -- an actual error message I've gotten before.

Google started doing this. I spent an hour changing my password literally 100 times to clear their cache of the password I intended to set. It's insanity.

No worries, though: if you forget your password you can just call up IT and get it changed to "welcome1" :)

> Furthermore your average engineer wouldn't eat a ham sandwich handed to them on the street by a stranger

We would however eat one provided by a café that our colleagues recommended.

Or one from the company cafeteria. I’m not sure if this is the point you were making, but trust, insight and the ability to get things done are quite different when working internally in an org versus working with an outside org, and it’s a big deal.

My biggest burnout was when internal teams I worked with started to feel like external teams. That was more about engineering teams but another example: I strongly discouraged internal IT from adopting vendor/client language and mentality.

Relationships like that put them on the back foot and make them defensive, and when you encounter problems, the sense of being able to honestly ask "what's actually going wrong here?" evaporates.

That's true, and a good point, but a big part of that trust comes from the knowledge that no one can run a café without some kind of oversight and inspection on food safety.

I probably wouldn't eat a sandwich from a bootleg café.

I would if the alternative was to grow all my own food.

Define "bootleg."



> No one in management wants the expense or the overhead of actual security. They want the "theater" of security, the good feeling, the box to check on their resume (as management) so everyone can pretend everything is fine and go back to the business at hand. Then something real happens, a leak of user data, or credit cards or internal memos... suddenly everyones job is security and no one knows what to do. The last problem gets solved, there is more "theater" and a few quickly forgotten changes that get worked around or just ignored in the long run.

The incentive is that if every box was checked and shit hits the fan, insurance is on the hook to pay for stuff and not the company or the responsible party.

The biggest checkbox in any project/contract/initiative is the one marked "Plausible Deniability." Mgmt often goes by another dictum "Success has many fathers, failure is an orphan."

Almost everything anyone does in business, as far as I can tell, is dramatically optimized for speed at the expense of almost everything else.

This is a natural result of a very competitive marketplace and, as far as I can tell, it's rational if you take that circumstance to be an immutable state of affairs. It's no good stopping to dot the i's and cross the t's if, in doing so, you get completely edged out of the market by a competitor who isn't doing so. And you haven't even made the world a better, safer, nicer place by doing so. You've just left the door open for someone less scrupulous to beat you with a shittier product.

The solutions aren't palatable in the current philosophical mode: much more aggressive penalties (corporate death penalties, for instance, in which shareholders literally lose all their money) or much tighter and well enforced regulations. Move fast and break stuff truly is the philosophy of the day.

If you don't want to break stuff, you have to make moving fast less competitive. As far as I can tell, that's the only way.

> If X is less than the cost of a recall, we don't do one.

That didn't take into account loss of sales and reputation; the equation is not (and should not be) that simple.

It's a movie quote.

For those who didn't get the reference:

The OP is quoting Tyler Durden, a fictional character from the movie Fight Club. So take it with a grain of salt.

Or, take it for what it is, a quote from a movie.

But don't dismiss the merits of the idea. It's essentially Risk Management[0], which is most certainly a relevant part of Information Security. Specifically, this is a form of Cost-Benefit Analysis (more precisely, Quantitative Risk Analysis[1]), a critical component of proper Risk Management; the reality is you cannot "fix" every issue.

Take a look at methods like Single Loss Expectancy[2] and Annualized Loss Expectancy[3]; you'll find that the Fight Club quote is very close to the real-world methodology. I cannot speak for the Automobile Insurance industry, but given Insurance is founded in Risk Management, it seems likely to be closer to reality than not.

[0] https://en.wikipedia.org/wiki/Risk_management

[1] https://csrc.nist.gov/csrc/media/publications/conference-pap...

[2] https://en.wikipedia.org/wiki/Single-loss_expectancy

[3] https://en.wikipedia.org/wiki/Annualized_loss_expectancy
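For concreteness, the loss-expectancy arithmetic those references describe is just a couple of multiplications (all figures below are invented for illustration):

```python
# Quantitative risk analysis, SLE/ALE style. All numbers are made up.
asset_value = 1_000_000          # value of the asset at risk ($)
exposure_factor = 0.30           # fraction of the asset lost in one incident
annual_rate_of_occurrence = 0.5  # expected incidents per year (one every 2 years)

sle = asset_value * exposure_factor    # Single Loss Expectancy
ale = sle * annual_rate_of_occurrence  # Annualized Loss Expectancy

print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")  # SLE = $300,000, ALE = $150,000

# The Fight Club version: if a control costs more per year than the ALE it
# prevents, a purely quantitative analysis says don't buy the control.
```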

Fight Club is referencing a real document produced by Ford, the "Pinto Memo".

That is indeed fictional, but the real-world Ford Pinto case it was based on is real [1].

1. https://www.tortmuseum.org/ford-pinto/

The quote is resonant because anybody with first hand experience with the way "corporations" think internally about such matters KNOWS that this is EXACTLY how it plays out.

If you work in security, this resonates so much. No one really cares about security except to check a box or pay lip service to it. That's why so called security products ship without logging and clients don't want to make the smallest effort to enable you to improve their security. It's why companies that sell security products invest more in marketing than the product. The industry is full of conmen and marketeers. Information security can be great though if you find someone that really cares.

Bro (or Sis? :) )! They're not supposed to care about security, you are! Our job in infosec is to show others how insecurity affects what they care about, so that in order to reduce, transfer, or eliminate risk to what they care about, they allow us to implement good security. The failure is on the infosec side of the equation.

It confounds and mildly pisses me off when people get pissed and burned out over suits not caring about infosec. I mean, they care about promotions, reputation, bottom line, ROI, KPIs, etc... That's what they do. You know why the marketeers and buzzword snake-oil salesmen prosper? It is because they communicate not only risk but especially [fake] solutions better! Infosec is full of user and management blaming, expecting people outside of software development and infosec practice to care about infosec. I am not saying I have it figured out, but I am fairly certain users and decision makers need to be offered solutions within the context of risk that affects them. And if it doesn't affect them, they're not supposed to care.

I'll give you an example: a network is filled with TLS 1.0 and SSL 3.0. How does that affect some mid-sized company's bottom line or reputation? How do they get ROI on the man-hours and resources spent to upgrade everything to TLS 1.3 with proper cipher suites and key exchange? And what KPI can they use to measure efficiency of resources? How will you tell them security hygiene takes a very long time to show ROI, as do many other security concepts?
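For what it's worth, the remediation itself is often tiny compared to the argument about it. With Python's standard `ssl` module, for instance, refusing anything below TLS 1.2 is one attribute on the context (a sketch of the client side only, not a full migration plan):

```python
import ssl

# Build a client context that refuses the legacy protocols mentioned above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # or TLSv1_3 where both ends support it

print(ctx.minimum_version)  # TLSVersion.TLSv1_2
# SSL 3.0 / TLS 1.0 handshakes against this context now fail outright;
# the expensive part is inventorying every client and device that would break.
```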

You don't really have to do all that if you don't want to, plenty of skill demand to where you can progress to more exciting positions.

I did work at a major cable company building customer premises hardware about 5 years back (the only reason I'm sharing this story). They were alerted to a major, easily exploited and REALLY stupid vulnerability in their system that exposed their core management network for the product to customers. They just hired the guy who reported it then fixed the problem 6 months later. The short-term mitigation was to put passwords on all their database servers (they were not there previously).

Security was just not a concern until they had a major breach. The security teams had been screaming bloody murder for a while, but could not get the product teams to allocate sprint bandwidth to the massive, coordinated security hardening effort that needed to happen to prevent a potential headline in the New York Times.

Yes, nailed it.

Mgmt/non-sec care about pretty clear, often profit-oriented metrics (ROI, etc.). There is such a clear precedent for successfully internally selling, implementing, and creating buy-in for cost-producing (i.e. infosec) but business-saving practices. Insurance, financial risk departments, legal departments etc. etc. etc. Sec can fall under that too. Sec people don't bother to learn the language 90% of the time. Sec people then burn out because they feel they're paddling nowhere.

Failure to learn that ^ language as a sec eng, means you fail to learn how to successfully implement sec in a way that has lasting buy-in. It's doable. It takes a bit of leadership, a bit of buzzword-learning.

If you want to play ball with mgmt and not be a mindless keyboard monkey sec eng who has no care if people care about sec or not, you must be able to take all those sec thoughts, distill it into 3 power point slides and 120 seconds of 'so what,' and be ok doing it over and over.

Why doesn't legal have to fight the same fights? Their domain seems similar: legal problems take years to surface, and when they blow up, they explode spectacularly. Implementing procedures involving legal is a huge drain of time, motivation, and opportunities. Yet in many companies the power dynamic is inverted: anything non-trivial has to go through legal and is blocked by default. Why don't new deployments have to go through security?

> Why doesn't legal have to fight the same fights?

Legal constantly fights the same fights. They get those systems put in place because they acknowledge that fighting these fights is a critical aspect of their job and they make sure those control points are in place. Before I became an InfoSec PM I consulted for legal departments to fight those internal fights for them. They’ve had decades to refine and develop best practices around how to do these things.

Also, places where everything is blocked by default by legal are generally badly run legal departments, with plenty of handshake agreements and covert business activity going on, the same way places with intransigent and uncooperative InfoSec or enterprise architecture end up with tons of shadow IT. They've been moving towards automated review and self-service tools to speed things up for a while now.

It's a great point, and largely because legal teams speak a really similar language to business teams, just looking at it from different sides of the same apple. As nearly every company goes digital, security can fall into that same legal bucket.

Why does legal succeed, then? Partly, there are pretty firm laws covering risk that haven't quite caught up to security breaches and such (but this is clearly beginning to change).

However, the big reason: Legal can explain the 'so what' because of that shared common language. Sec folks seem to largely not bother learning how to translate tech jargon to 120 seconds and a power point slide or two that business can understand.

This is what happens when companies don't understand security.


You think a million a year is expensive? It's not - not if it's saving you a $200m fine, and possible class action damages.

It's tough for me to think that Yahoo! didn't "understand security," and yet, their entire user database was ganked. I have to assume that they were doing everything they could to implement all of the white paper suggestions and consulting recommendations they could get their hands on. I also presume they were running the largest, most-expensive "security" products that they could buy. The depressing thought that struck me at the time was: if one of the biggest web properties to ever exist couldn't figure out how to secure their database, what hope do the rest of us have? (Maybe I'm being naive; I've never seen a disclosure on the nature of the intrusion.)

All of this sort of thing leads me to think that there is currently a huge mismatch between the security products industry, and how companies implement all the conflicting white paper snake oil, and what the ACTUAL vulnerabilities are. And I know how stupid this mismatch winds up making the average Fortune 500 worker bee's daily life. But that's a topic for another post.

Pretty much.

I think it's difficult, but there's almost no doubt that Yahoo had people who understood the problem with security. The problem is, were they heard, and did they have the power to rein this in at scale?

All too often, you get risk people buying products then asking for all your logs, promising the easy silver bullet. Being a pessimistic engineer, you're unlikely to ever be near a leadership position with people who want easy answers.

The trick is, how do you estimate risk of such problems in a particular company, in order to provide the suits with an expected dollar figure?

The problem is, the people who refuse to understand this typically include the senior guys in InfoSec who own this, and they will continue to do so regardless of how you present it.

I mean, who else is buying a SIEM, asking for all the logs, and then hoping it'll take care of everything? That's the CISO, Director, VP, Head Of, etc.

I work as a contractor for a bank.

A few months ago everybody was up in arms about a "major" security issue discovered by an auditor (you could see the settings of random users by changing an id in a URL). I've just shown them you can credit money to your account, yet this is low priority, and they provided a fix that I'm 100% sure didn't fix anything; unfortunately the functionality is down on all but the production environment. I'm tempted to just credit myself 1 monetary unit in production and show them the statement.

> I'm tempted to just credit myself 1 monetary unit in production and just show them the statement.

I would be tempted too, though I would bet that this would result in termination of employment rather than the problem being fixed.

I would like to be proven wrong on this speculation...

Yes, that's what I thought as well. I'll just have to suck it up until the test environments are up again and provide a proof-of-concept exploit.

I'm just impatient because it's a really clever and somewhat complex hack that challenges some multi-threading and transactionality assumptions some people made, and I can't really talk about it (though I'd love to share it with my peers).
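The poster's exploit is undisclosed, so this is not it, but the general family being alluded to (transactionality assumptions broken by concurrency) is the classic "lost update", which can be shown with a deterministic interleaving:

```python
# A classic "lost update": two concurrent withdrawals both read the balance
# before either commits its write. This is a generic illustration of the
# bug family, NOT the undisclosed exploit from the comment above.
balance = 100

read_a = balance          # transaction A reads 100
read_b = balance          # transaction B also reads 100 (no lock taken)
balance = read_a - 100    # A commits: balance is now 0
balance = read_b - 100    # B commits using its stale read: balance is 0 again

print(balance)            # 0 -- yet 200 was "withdrawn" from an account holding 100
```

Row locks (`SELECT ... FOR UPDATE`), serializable isolation, or a single atomic `UPDATE balance = balance - 100` all close this particular hole.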

Raducule, did you do CYA (cover your ass)? Email(s) to the higher-ups responsible, in case of the fuck-up that most likely will happen in the future? Do it now if you didn't, or "wave and smile" if you did. Let them burn if you are ignored; no longer your problem.

Not just that, I would expect criminal charges to follow.

Sometimes it matters who finds the issue more than what it is....

Not to downplay the issue in question, but you have to be realistic about prioritization. An exploit of "if I change a variable in the URL I can see other people's data" trumps "if I make multiple concurrent requests with specific input I can give myself monies". I guess not by much, depending on the complexity involved and the know-how required to carry out one vs the other. Another thing altogether is who reported the issue - sometimes it's more of a PR exercise, depending on the company.

Debit yourself 1 monetary unit, and then say you could just as easily credit money.

Never to yourself. Maybe to a board member. Or all of the board members. But that's a big maybe.

And make sure your contract covers your ass, under production system testing / penetration, or something similar.

I could only exploit this for my own account, and there's only money involved; if human lives were at stake, I would not have worked on anything else until the issue was fixed.

The bug that led to me discovering the security issue was mitigated by another developer, and the security issue was also deemed fixed. I am 100% sure it was not, but there's no way to prove it at the moment except in production; that's why I said I was tempted to actually do it.

Anyway, there's not much for me to gain by antagonising another colleague, the management, or the bank. It's not worth the ego boost or the frustration scratch; worst case scenario, I don't patch the issue in time, the bank loses money, and they start taking security more seriously.

You'd get much less reprimand if this information was somehow leaked to someone else who then was stupid enough to do it, although to avoid any legal "abetting" you'd have to have some actual documented cya saying "don't mess with this broken feature".

I don't have any ill feelings against the bank or the management; it's just a stupid setup where the local prophet gets ignored. We are swamped with bugs, new business features, new regulatory features, outdated devops, a lot of teams scrambling to catch the monthly release, understaffed QA, non-functional test environments, and so on.

I can understand every piece of the long string of factors that led to this ridiculous situation where such a serious security issue is not being addressed; no single one of them is ridiculous in itself, but they all compound into the ridiculousness of the end result.

I've fixed another ridiculous security issue in the recent past without making big waves, where only one software architect understood the seriousness of a single option in a Maven config file (a whole declarative security module was not being woven into the bytecode because someone added another module, and instead of both being applied, only the most recent one was).

That might be true, but hacking without explicit consent is a good way to get fired with potential jail time.

Conversely, in a lot of industries the security department is only there to prevent you from doing everything you need to do, even if the threat and attack surface are both minimal.

Agreed; too often security people are incentivised to make a massive fuss over tiny issues, and then often don't seem to understand that security is just one of many requirements needing to be balanced.

So much this. In addition, these tiny issues are often purely technical in nature. I have yet to meet a security engineer who is able to identify information security risks at the business level and view vulnerabilities in their proper context.

So true! Security people think the end goal of the company is a secure system. No. The end goal of the company is a product that customers want to buy. And it's hard to build said product when security hinders you at every turn.

I've heard from someone selling security products that some companies prefer to pay ransoms to hackers instead of investing in building up their defenses and paying for security products.

The easiest security investment is to switch your shop from Windows, cutting like 98% of threats out there cold.

As well as cutting 98% of your workforce as no office employee knows how to work on anything different.

Most of the workforce would have little trouble working on Macs or using Office365.

The actual friction will come from Windows sysadmins with no other practical skills.

I agree with the sentiment but this is probably not the case as a sysadmin hardly has any say in this usually. (I've been a sysadmin at software companies for 10 years.)

> As well as cutting 98% of your workforce as no office employee knows how to work on anything different.

Techies repeating this should take a lot of the blame for why Windows still sells as well as it does.

A 50-year-old electrician convinced me to start using Ubuntu 13 years ago, after someone at his kids' elementary school or something had told him.

UX wise Linux passed Windows in many areas around the time Ubuntu was introduced.

The only reasons now are preference, hard dependencies on Windows-only software, stubbornness, and incompetence.

Only the first two are good reasons, in my opinion.

> UX wise Linux passed Windows in many areas around the time Ubuntu was introduced.

Is this something you decided on your own was a fact? If I disagree, would I be wrong, stubborn and/or incompetent?

I've worked in the field as a professional since around that time and as an amateur since 1995.

Things Linux did better at that time:

- installation experience, OS: installation of a "pre-installed" Windows laptop could take up to 4 hours before you had finished completely.

- installation experience, additional software: I was good at removing Windows spyware and adware back when that was a local problem. I never had a Linux user with that problem.

- driver issues: 50/50. Since around that time hardware would mostly "just work", unlike Windows, where one would typically, again at that time, have to hunt around the Internet or dig out the CD. The reason Linux doesn't win hands down is that if something wasn't supported, it would often be a dead end until the next distro release, sometimes longer.

- end user support, other UX issues: the same, which means Linux probably wins with a comfortable margin, since Windows had the benefit of everyone "knowing it" and still didn't come out way ahead.

- in addition, Linux is typically faster, even to this day, which is a huge deal for some users.

You might have noted I wrote in many areas, not all.

For users who earn their livelihood with AutoCAD and Photoshop, I'll have a hard time recommending Linux. The same goes for people who have tried Linux for a few weeks and still don't like it. It might be preference or it might be stubbornness; I don't care.

But blanket statements like the one I replied to:

> As well as cutting 98% of your workforce as no office employee knows how to work on anything different.

is just plain wrong. The failures here are mostly related to other issues, not dumb users. (I really don't like the idea that all users are so stupid they cannot adapt.)

Probably under stubbornness. Whatever good things Windows has going, it's not the UI.

I didn't say that. I disagreed with the statement of fact that Linux had passed Windows UX-wise in many ways around 2004.

Also, I wouldn't conflate UI and UX.

So would "a few areas" be true for you? "Many" has been true for 25 years for me and 15 years for many end users I know. UX is very much an opinion.

> UX is very much an opinion.

I'm certainly not going to object to you having that view, but if that's how you see it then it's not a very interesting discussion is it? Would be like discussing whether the Beatles were better than the Rolling Stones "in many ways".

For what it's worth, I believe you are very wrong. But I understand and kind of appreciate this view, largely because it keeps me employed.

And of your productivity for those tied to excel, word, and powerpoint

Excel for Mac does not support multiple cores and has a much more limited set of hotkeys, along with a more limited universe of addons.

this. I think most office workers couldn't care less if they run Windows. But they know those Excel hotkeys as well as any Vim hacker knows the escape key.

As a vim to excel user I’ve always wanted to create an addon that maps vim hotkeys to excel. My only fear is that I’d get too used to it and not be able to use any machine that I sit down at.

Run those products in Wine. Or use Google's web apps, Office 365, LibreOffice/OpenOffice, Atom/Sublime, vim/emacs.


LibreOffice has come a long way, but retraining thousands of employees is impractical.

That is as they say "a brave choice"

My job switched completely to GSuite a year ago, management first.

If anyone needs Excel they get it.

So far I've spotted one person that probably uses Excel. No complaints that I've heard or seen.

Ah, so how do you handle working with external people who use Excel?

Most of my work is with widely spread organisations, and we can have an Excel file go from the US to Russia to the UK, back to Ukraine, and then back to me in the UK.

So much this. The problem is mostly that this requires executive buy-in, and a clear explanation of the cost shift involved. Then, even after executive buy-in, you have to guard against CISSP "Windows on everything" saboteurs who know the C-speak better than you do, not to mention the people who want a vendor product for everything instead of just using industry-standard FOSS tooling. To me, there is a lot of market opportunity here, but the MBA side is behind, and so the implementation is lagging.

This is why I think one of the key things is stack standardization (choose best-in-class FOSS tools that match requirements; for example, I personally have a GPL or GPL-compatible requirement) and stack-size reduction (which means you don't need every fancy-sounding tool you hear about; make sure the use case is justified first).

People have such a Stockholm-syndrome relationship with MS (and proprietary software in general) that it's absolutely sickening. For example, I think educational institutions should be teaching and using FOSS first.

Some will whine about no one using Linux or not knowing how, and one response I use is: "You had to learn how to use Windows too, and even it changes things up (just look at 7 to 8/10), so why not learn to use GNU/Linux and free yourself from MS?"

Um mate slashdot is over here http://slashdot/

I'm studying infosec, so I lean on the "pay for infosec people" side.

But from a company's perspective, if they have to pay 1M for an infosec team over five years, or 1M for a breach once every 5 years, what's the difference? You're still paying the same amount of money.
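The two options only look equal if the breach has no indirect costs and never recurs. A back-of-the-envelope version with both of those assumptions relaxed (every figure below is an invented assumption):

```python
# Five-year cost comparison. Every figure here is an illustrative assumption.
team_cost = 1_000_000      # infosec team over five years

breach_direct = 1_000_000  # cleanup, notification, fines
breach_indirect = 500_000  # churn and lost sales -- hard to measure, rarely zero
p_repeat = 0.5             # breached once, you look like a soft target

expected_breach_cost = (breach_direct + breach_indirect) * (1 + p_repeat)

print(expected_breach_cost > team_cost)  # True: 2,250,000 vs 1,000,000
```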

Perhaps they consider that the employees will be way more productive without all the security barriers that the Infosec team would set up, so paying for the breach is a net gain in this light.

And then Equifax gets breached.

When does infosec start to realize that it's not just about company costs/risks, but the lives of all those users who are going to get screwed when your 'low risk = cheap fix' mentality pays off?

I'm in the Equifax breach (like sooooo many more)... part of my 'general concerns about the world' is whether/when I get my life hacked and have to rebuild.

Let me know where you get hired next, so I can take my business elsewhere.

Corporations do not care one bit for

> lives of all those users

Equifax cares about one thing: earning profits for its shareholders. They got caught with their pants down. Now other companies can look and try to estimate their expected cost of being breached (probability of being breached multiplied by the dollar cost) vs the dollar cost to upgrade their IT systems, infrastructure, management, company policies, etc etc etc. Realistically, Equifax is probably incapable of making the necessary changes upfront without a complete overhaul of its people and leadership structure.

The vast majority of companies will spend the least amount of money possible to pretend that they fixed the problem.

You want companies to care? Then create regulation that protects

> lives of all those users

You will pay more for the second breach, and paying once will get you put on the sucker list, so others will see you as an easy mark to hack.

That sentiment reminds me of how most people think of accessibility.

I asked my SO recently how she views the Internet - what it is and how it works. She was honest and told me: "If I click this button, this website loads. If that works I'm fine! If it doesn't, I will call you. Don't stop working in IT please; if you get it, we need you badly!"

I believe that is a good reason to be accepting towards the current state of affairs. People just don't care. They have more important issues to deal with, deadlines to meet, problems to face.

If you are frustrated and pissed off, great. That means you get it, and that's great!

Now, meditate - the journey to a perfectly secure world will take some time, like beyond your current life span. It's not until you accept that reality that you will make real change globally. Because it takes time, a lot of time.

Thanks for your effort so far.

Thing is, it took me a long time to accept that people not caring was ok.

Now I realize that my dad is frustrated I never learned something as simple as changing the oil on my car.

My mom does not understand how I can't name more than two flowers and can't bake a pie.

My legal-minded friends are astounded that I don't take a day to work on my legal status to pay less tax. Hell, my wife does the paperwork; I am not even sure of the tax rate we are paying.

And yet, I see myself as pretty curious, a jack of all trades. I understand many things about CS, mechanics, electronics, manufacturing, politics, economics, ecology... We live in a complex world and we rely on each other to make it work.

Love each other; we really depend on each other. It is easier if we don't consider others stupid for choosing different areas of skill.

Your mom and dad, or your grandparents, probably didn't need to know, even though they knew how.

Most of their interactions were with local businesses, with people who, like themselves, were part of the local community. The unofficial grapevine worked pretty well for rooting out the good and bad mechanics, lawyers, and florists. Your dad could change the oil, but almost certainly knew which mechanics could be trusted to have done what was on the invoice, and the few to avoid at all costs. Mostly it really was OK not to care, because they knew someone who did. The network meant something. Doubly so in smaller towns, and yes, small-town life came with some downsides too. :)

That breaks horribly when recommendations are of global mega-multinationals, and most businesses on most high streets are national and international chains. A recommendation counts for nothing to a business of that scale, and an individual voice may reach an employee you might never encounter again. The network means nothing, except as something to be gamed. Taking your custom elsewhere means nothing unless a million or two others do too. You have to care, as no one else gives a shit about your interests, just the sale or commission. Except precisely none of us have the time for that.

If we want the benefits of larger scale business I think we need to start giving them some responsibilities too. Like a duty of care in law as exists in some areas already, but further reaching to consider the public interest as a priority. Without something the power imbalance is impossible.

Without constraint, large business takes the piss. It's time for some constraint. Then maybe we actually can depend on each other again.

My dad was dealing with international teams managing semiconductor manufacturing at a global mega-multinational. His dad was the trustworthy mechanic, though; that's why he ended up being good at car repairs :-)

I am the one living in the countryside not far from a middle-sized city. I do know, from local community's word of mouth the reliable and cheap mechanic and florist.

Don't fall into the fallacy that things changed globally just because you moved from one place to another ;-)

Tracking reputation has never been easier. If you are interested in it, that is.

That's really the point of success in business - it gives you the ultimate privilege of hugely decreasing the odds that you'll be held responsible for damage you cause to others.

Occasionally the wheels come off (maybe literally) and someone with significant power ends up in jail. But realistically - how often?

Which is why software security and quality aren't a thing. There's no pressure to do the job properly and plenty of incentive not to.

> There's no pressure to do the job properly and plenty of incentive not to.

I won't name names, but doing some work for a company in the IT security space once, I learned that one of the issues they faced in sales is that the product cannot be too good - because if it points out a possible vulnerability and then that vulnerability gets exploited, the customer may be on the hook financially and legally, as the software could prove they knew about the problem and didn't address it.

And now I need to go check my blood pressure again.

I've worked in enough safety-critical environments (military weapons handling, warehouses, a shipyard) that the very idea of deliberately whitewashing a possible failure just makes me angry.

Lessons learned briefings were some of the best OJT I ever had.

I'd have filed CVEs on whatever they told you to leave uncovered and damn the consequences.

(I've also never been in a place where I could afford to be let go, so YMMV/MMMV).

> The network means nothing, except as something to be gamed.

Thank you for this insight, particularly the concise manner in which you have articulated it.

I can't accept that not caring is ok. The small "I don't care" extends into "I don't care about anything outside my immediate environment" and that has political and eventually global consequences.

If their bank account is drained they will care, and get angry, and then maybe do something (but preferably the bank will recompense them in which case they feel better and go back to not caring).

Some stuff you just can't do; as you say the world's too large, but many people don't want to put in the effort to learn stuff that would benefit them immediately, never mind over the longer term.

I really do not understand people.

> I really do not understand people.

Well, then learning more about people's psychology and motivations seems like a thing you could benefit from immediately :)

I realise people don't care. I don't understand how they can so freely ignore that this has consequences and not always even long term ones.

Given that I do recognise people won't change, would it help if I did learn to understand their builtin magic curtain / SEP field[0]?

I ask with no hope any more, what do you advise?

[0] https://en.wikipedia.org/wiki/SEP_field

It's like a runner doing laps: if they can keep going, they will. Sometimes they need support, but then they only do a pit stop. Getting information or acquiring a skill in that state of mind means you only learn what seems essential to you.

Accepting that has the upside that you don't need to know the background of a given individual (some people have a rough life, others just don't think they'll get it, e.g. they think they're not smart enough, others are working 400% and have kids) to accept the fact that they will just use tech and maybe not care about the inner workings. But is that really a bad thing? That's exactly how things are built nowadays: for ease of use.

I have people around me that I want to explain basic things in life to, but even if they actually listen, their previous arguments float back to the surface the next day. People need to change themselves.

At the other end of the park there's a big playground. That's where I sit and design the biggest sand castle I've ever built. I don't care that much about running around; I switch playgrounds when I have to.

Each mindset has its pros and cons. I solve stuff that takes a bit of effort. My SO keeps our life/house running.

So respect for and acceptance of others' choices is a great way to let go of the anger that nobody is interested in the things you are interested in. Finding someone with the same mindset in real life is rare, even on HN. How cool is that?

> many people don't want to put in the effort to learn stuff that would benefit them immediately

I take it you're not counting yourself in that group? Have you already learned everything that would benefit you immediately?

It would be a better question if you asked it fairly: "Have you already learned everything". I did not suggest or imply 'everything' should be learnt.

That's fair. Your comment would also be better if you quoted me fairly and didn't cut off the second half of my question.

"Everything" and "Everything that would benefit you immediately" are not the same thing. But I don't think there's much to gain from more bickering here, let me just retract my comment.

That's very civil of you, upvoted. OK, now I can answer. It's all a balance of investment, payback (which can be direct - "gets me a job" - and nebulous life-quality stuff - "that's interesting!") and erm, my lifespan.

I try to make those tradeoffs consciously. If I decide x is valuable but boring, these days I'll measure it against other priorities, if it wins I force myself to do it.

So I try. It's an acquired skill which I'm still working on.

What you call an acquired skill is a heuristic most people apply unconsciously.

People always do the right thing and don't dither, don't procrastinate, always evaluate the long term consequences? Sure they do.

> If their bank account is drained they will care

For people living in the US, that's just basically an act of God now. The refusal to have a decent ID system has made identity theft laughably easy. Even with top-notch computer security, if you have ever shopped online or swiped a card in a shop that doesn't do in-depth background checks of its employees, you are at risk.

"I really do not understand people."


> I really do not understand people.

If you cared to, you could learn how.

Thanks, this summarizes my thoughts quite neatly.

And most importantly, love yourself and don't let others disinterest get to your spirit of making good.

Just wanted to say this is a really good comment, thanks for putting it out there.

Thanks, you made my day :-)

This I don't understand.

You are interested in the economy and politics, yet you don't know how to file taxes. Is that not connected?

You understand mechanics, yet you can't do maintenance on your car, never mind fixing anything.

Theoretical knowledge that can't reach over into reality to do basic tasks seems just irrelevant to me. What is it good for, apart from talking about it?

There is joy to be had in just learning something. For some people, the knowledge is its own reward. Other people see knowledge only as a tool to achieve things they want.

Based on your respective comments, I'd expect that the writer of the grandparent comment is the former type, while you are more the second type. Both are fine.

There is nothing wrong with doing stuff just for fun.

There is nothing wrong with being focused on just few issues.

But I have doubts when you claim some knowledge without being able to use it practically, which is what I see here.

If I said I'm a jack of all trades because I'm interested in software - but I hire a guy to write all the code for me because I don't know how - and because I'm very interested in knives - but my wife sharpens our knives as I don't know how - you'd have doubts about my actual knowledge of those fields.

General curiosity in some topic? I can talk quite a bit about firearms, ammunition etc. yet I never owned a gun, shot altogether maybe 5-8 rounds in my whole life (apart from BB gun).

I like watching youtube channels like Demolition ranch or Iraqveteran8888 for their technical expertise, viewpoints and just fun. Without any real expectation to ever own a gun to expand on this knowledge. Maybe some nice bow one day (which these channels don't even cover).

Don't expect everybody to act rationally all the time, weighing cost/benefit for every activity, and so on.

Is somebody who watches people draw in their youtube channels interested in drawing, or in watching people draw? I'd say it's the latter.

Are the billions of people who watch sportsball even capable of playing whatever variety of sportsball?

That's my point - they're interested in following others play the sport, not in the sport itself.

I could not figure out an argument without sounding too confrontational toward you or OP, so sorry :).

But if your knowledge can't fulfill your needs in the same field, what is that knowledge for? Is it even actual knowledge?

People like to satisfy their curiosity. Think of the average tabloid at a newsstand. It provides some provocative information, but it's the curiosity of the details that drives people to buy them. Is it going to improve their life in any material way? Probably not.

You are reading this wrong.

Never said he couldn't change the oil on his car, just that he never bothered to. Didn't say he couldn't do his taxes, just that his wife takes care of it.

I can't respond directly to kissorgy, because of a shadow ban I guess.

Anyway, yes, the author says we are worse today, but keep in mind that we "just" connected a few billion devices. It will take a massive effort to get that fixed properly and to keep everything up to date security-wise.

Never mind that governments want to ban security in some places.

> but keep in mind that we "just" connected a few billion devices.

Yes, exactly this. Information technology has improved at a totally crazy rate, and I think it's unrealistic to expect safety and sanity to keep pace. The internet has only been around for 30-odd years, and getting all this right takes time. To examine another technology: the automobile was invented in 1885, seat belts were invented more than 60 years later, and it was another decade or so until they were made mandatory for new cars (in the US at least).

> If I click this button, this websites loads.

As a user of technology, I want the exact same thing. I'll do research to build a PC, or install a new OS, and tinker with it until it works to my satisfaction. Then, I never want to touch the internals again and I want them to just work. I get very upset when my product stops working.

Same for cars, cell phones, computers, etc.

As software devs, it's our job to make this possible for users.

And users should not need to care (too much). I drive a car, and while I know some of the inner workings of an ICE, most people who drive do not, and that's FINE. The car is a utility that people just want to work.

Why is software expected to be different? Why should it be? Why does your grandma require a basic understanding of password security anyways?

> the journey to a perfect secure world will take some time

The author argues that it will never happen because we are WORSE today than we were before and I agree with that.

The problem with the security mindset is that security goals are relative to other business goals within almost every organization. A breach can be OK. A rebuild can be OK. Some downtime can be OK. It depends on the system. To put it eloquently: "I don't trust security people to do sane things." - Linus Torvalds (2017), via https://github.com/globalcitizen/taoup

> I don't trust security people to do sane things

Seriously. Security decisions that demolish UX can tank entire products.

Recent-ish example: Oracle VirtualBox. I used to love that software and would recommend it to friends needing VMs. Then they added a new hardening feature that makes it unusable on my setup for whatever reason (VMs fail to start with "hardening failures"). There is no option to disable the hardening feature short of going into the source code, figuring out how to disable it, and building from scratch[1]. It's like, geez, I don't even care about hardening - I'm not trying to analyze Stuxnet or some ransomware here, I just want to work on my web app in between classes at school on my Windows laptop.

There's a massive FAQ on the VirtualBox forums on how to debug hardening failures[2]. I spent about 5 hours working through the FAQ and troubleshooting before I just said "screw this" and bought VMWare which worked perfectly first try. I no longer recommend VirtualBox to friends - I tell them it will likely give them headaches and to use VMWare or WSL instead.

[1] https://forums.virtualbox.org/viewtopic.php?f=6&t=84523

[2] https://forums.virtualbox.org/viewtopic.php?f=25&t=82106

[3] https://forums.virtualbox.org/viewtopic.php?f=1&t=62897

Note: [3] is actually a pretty entertaining thread, with insane security people defending VirtualBox's terrible new hardening issues.

The funny thing is, VirtualBox is the “best” product in principle: it adopts security by default and has one more feature on the bullet-point list. Analyzed in isolation, everybody would agree that's good, yet the overall value for the user, considering all the use cases, is lower, because the definition of “better” was “more restrictive”.

FYI, enable Hyper-V and you don't need VirtualBox.

Just curious if you have considered migrating from VirtualBox to Hyper-V given you are doing your work on a Windows laptop.

No, actually I had somehow never heard of Hyper-V before today (or maybe I had but for some reason it didn't click it was software I could make VMs with). Great suggestion, I'll look into it.

Definitely give Hyper-V a shot. I've been using it at home and work since Windows 8. I had a lot of issues with VirtualBox and networking (I could never get SSH into a VM from the host working, for example). In Hyper-V, all you need to do is set up a virtual switch and the host and VM can talk to each other just fine. Hyper-V just works. I've got Linux and Windows VMs and have never had a problem with either (as far as virtualization goes). I use it less frequently now that Win10 has WSL & WSL 2, though (although WSL 2 uses a lightweight VM on Hyper-V).

One of the only downsides is that Hyper-V doesn't support USB passthrough. However, if you had to, you could install a PCIe USB card and pass that through.

Indeed, if you are a security engineer, everything looks like a threat to you. Therefore, a developer's workstation must be protected just as strongly as a critical database server - at the gross expense, of course, of the developer's productivity.

I once worked somewhere as a contractor where developers had 3 separate PCs with a KVM switch - with there being separate development, test and production infrastructure.

Ironically they had a serious production incident that almost took the entire (large) company out for a day because they were doing load testing in one environment (they had about 10 separate environments) but they had shared email infrastructure between production and the production-1 environment. The application being tested generated zillions of emails using "real" email addresses that clogged up their production environment.

My worst: Developer machines were dumb (but secure) terminals for remote desktop connections to a jump box, where a VNC connection got us to a Linux desktop living on AWS GovCloud where actual development took place.

It was a remarkably risk-averse client.

Qualcomm is like this. Their developer workstations are ultra locked down. If you so much as plug in a flash drive, alarms will start going off. If you email them a file, any attachments are instantly quarantined.

While I was there it was extremely difficult to transfer a log file from one of their workstations to my laptop for analysis during a debug session. It was also extremely difficult to get new builds into the setup, since there was no easy way to get the binary file onto their workstations.

I guess you could have used a private nextcloud instance to transfer it.

I guess in such a locked down environment that would be grounds for termination.

The point is more that all that "data loss prevention" stuff will not prevent someone who has access to the network to get data out.

Precisely. Some folks over-estimate the security risks and under-estimate the business risks. Typically, the biggest risk to any organisation is going bankrupt, and doing "too much security" can make that more likely.

Isn't threat assessment (risk vs. mitigation cost) part of their job?

Also the defeatist ‘everything can be hacked if they try hard enough’ attitude, applied as a defense.

I genuinely believe that’s a better starting point though.

It is absolutely true that everything can be hacked if they try hard enough, so the question is — how hard do they need to try, and what resources do they need?

As you reach an answer to that question, you either consider that level of investment a credible threat you need to worry about (in which case you invest in tightening things up) or you don’t (beers at 5?).

That is not the defense of ‘I don’t need to improve security because nothing is secure anyway’ though. Or ‘It’s acceptable to keep making the same dumb mistakes over and over because nothing can be secure anyway’.

Yup, agreed. My point is that I find it easier to talk a defeatist into a somewhat proactive security practitioner (everything is hackable but the point is to deter, delay, and mitigate, not outright stop all attacks), but it's a lot harder to talk a security absolutist into backing down because their initiatives are a net negative.

Also, the nuanced posture can look like the defeatist attitude if you don't pay attention to the details - and, on the flip side, you need to pay attention to those same details to tell a proactive attitude from security theatre.

I don't explicitly work in security, but I've had a handful of arguments in the past where I've discovered a serious vulnerability, and been told either by my managers or by a client that it is low-priority.

Sometimes, the vulnerability was then exploited, and what shocked me was that people were okay with this. I won't name names, but one marketing department I worked with was happier to suffer a customer data leak than to spend budget at the end of the year to fix the issue. I later learned that the IT department got in trouble for allowing the error to happen, when the system was entirely managed by us and our sole point of contact was someone in marketing who left their position because they didn't get on with IT.

If there's one thing I learned, it's that the ultimate currency in business is risk, and that the software/IT industry lacks the power to really do anything when a company is found to be negligent. For many, the risk of "being caught" is worth not spending money on prevention, and ultimately there's absolutely nothing we can do about it outside of covering our asses when the finger is pointed our way.

You only need to look at the Panera Bread security breach to see that all the badmouthing on Twitter did nothing to stop the company from painting its own narrative. Hell, the WordPress theme/plugin company Pipdig was caught DDoSing its rivals with their software, and all they had to do was lie low on social media for a month and lie in a blog post. The worst part was that their non-techie customers were all too happy to back them up, meaning that the WordPress security community had zero clout to really do anything.

I have the utmost respect for anyone that works in security, because you're fighting a battle that no one wants to win, and is often a battle where it feels your partners are silently rooting for the other side.

I worked at a healthcare company in the US (we provided HIPAA data connections between insurers and providers) and discovered all the production passwords were stored in a text file in the code repo half the company had access to. The CTO told me "we trust our employees". There was also no auditing of who accessed the DB and servers, and they never changed the passwords because the chief architect did not want to remember anything.

I worked at a retailer (not Target) that did something similar. Once Target got breached in 2014, they mandated security training and began making changes to some things in the org. This was one of them: instead of storing those passwords in plain text, they were encrypted. So people encrypted them, committed them into the repositories, and deployed the now-encrypted files to production. Cool, right?

They didn't actually change the passwords, since that would break too many things at once. So you could just look at the git history to get the plain text password. Or debug the application locally.

Security theater all day. Sigh.
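The failure mode described above is easy to reproduce. Here's a minimal sketch (all file names, passwords, and commit messages are made up for illustration): the working tree looks "encrypted", but because the password itself was never rotated, the original plain-text value is still one "git log -p" away.

```shell
# Minimal sketch of the anti-pattern above (all names/values are made up):
# a plain-text secret is committed, later "encrypted" in place, but the
# password itself is never rotated - so history still leaks it.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev

echo "db_password=hunter2" > app.conf              # plain-text secret
git add app.conf && git commit -qm "add config"

echo "db_password=ENC(c2VjcmV0Li4u)" > app.conf    # "encrypted" in place
git add app.conf && git commit -qm "encrypt credentials"

cat app.conf                            # the current checkout looks safe
git log -p -- app.conf | grep hunter2   # but the old value is still in history
```

Rotating the secret at the same time as encrypting it (or rewriting history) is the only real fix; encrypting in place without rotation is pure theater.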

There's something both comforting and absolutely terrifying that everyone has similar stories of software negligence.

I would love to see a whistle-blower company formed, where you could report software engineering malpractice, and be compensated and/or protected from being punished. Not necessarily a union, but an industry body that could verify your security concerns and either "out" a company for punishing you, or provide you x months of work and a reference to compensate the termination of your employment.

Did you work at my current company? Because that's exactly what happens here.

Also, I'm so sorry you had to touch EDI, if you did.

That's why EU laws to punish such negligence are a good thing. They increase the risk for companies, to something very specific.

I'm in the UK, so all of these companies are either primarily based in the EU, or do business here.

GDPR has put some fear back in, but even today there are loads of companies that simply don't give a shit, and will happily risk it all so that they don't have to adapt their practices from years ago. In my experience, smaller companies are the worst offenders, because they know they are small fry.

Pipdig are a UK company, and have been actively caught using malicious code with proof, yet they're still running without a care in the world. This all happened recently too, so it's not like it's a pre-GDPR or pre-ICO crackdown issue.

I think CISOs are a bit more worried than they were before, but CFOs still refuse to pay for individuals. It's the old capex vs. opex debate and the random jealousy that flies around the market, combined with a lack of skills that leads to outsourcing to consultancies that sell snake oil. It'll remain crap, I imagine.

The bad news is that this will happen everywhere where you work for someone else, especially in large companies.

And this cannot be avoided while working as an employee, it's part of the system.

The good news is that there is one way to avoid this (the only way AFAIK): start your own company, where you get to call all the shots, for good or for bad.

It doesn't have to be a big company though, it can be just you and a laptop for starters.

Other than that, as an emotional coping mechanism and to preserve their mental sanity, people will simply become disengaged and cynical.

Take longer breaks and try to have a laugh with your colleagues to blow off some pressure and make friends.

Also, because things work badly in those environments, there is a lot of blame deflection going on, which is one of the main causes of stress and burnout.

Again, part of the system, not much that you can do about it other than learning to deflect responsibilities to others.

Getting rid of the hot potato is a very important skill in these corporate settings.

Usually on a team, it's always the same people getting the short end of the stick. Try not to be one of them if you can, but often that's easier said than done.

Also, try to get promoted, you will get more responsibility and might even be able to fix some of these things, that's another positive attitude towards chaos that helps a lot of people cope with it.

I couldn't disagree more. I work for the government (in R&D) of all places (about as close as possible to the opposite of working for myself IMO) and I don't feel this at all. I have near-total control over my work environment. If I need something, all I have to do is articulate why I need it and I will get it or get a precise technical explanation of why I can't have it.

That's a very special setting; most enterprises aren't like that. For example, for equipment you get a crappy old PC that can hardly run your IDE and on which you can't install anything.

These days it's doable to at least get a second monitor. But for things such as an SSD drive? At least a couple of years ago it was nearly impossible.

The reason burnout is frequent is that, in general, work conditions in most places are bad and stressful. In my view the problem is the system, not the people.

Your professional setting seems like a good example of how it doesn't have to be like that.

So cynical.

First of all, we live in an imperfect world, so there always needs to be a search for balance. No company is perfect, and if it were, we'd be out of a job, so to speak. The balance needs to be that there are enough things going well enough, for you not to get burned down (burned out, pissed off). Enough of the "problems" must be challenges, and actionable, and still excite you to solve them. If this is not the case at your place, I see three possible outcomes:

1. You be cynical. For a miserably long time to come

2. This is not the right company. My experiences are very different. I get cynical about a thing or two, but on balance I love what I do. So look for a better workplace

3. You're not in the right line of work. No hard feelings. Go do something else, I beg you

It's not cynicism. It's acknowledging the reality of work in a lot of these environments as the root cause of these problems.

You will be stuck with an old slow PC trying to fix some issue until midnight, and that just takes a heavy toll on people.

As things can't work well, there is a lot of responsibility deflection going on, it would be naive to think otherwise.

Also, a lot of these posts in HN assume that the only way to live is to work as an employee for someone else, under similar conditions.

It's important to realize that there are other professional options available, which many people won't even consider.

For your option list, I've seen a lot of 1. As for 2, there are a lot more bad companies than good ones, and you can't tell from the interview process; you also can't change companies every 2 months.

As for 3, half the workforce would switch jobs then. The problem is not the people, it's the system.

I would add 4: do contract work and stay while the balance of good things vs. bad things about the job makes it worth it, then do something else after a year or two when it doesn't.

This relates to 2, sometimes a company is the right company in the beginning, but it's no longer the case later on. For example, I bet you won't work there forever.

> The good news is that there is one way to avoid this (the only way AFIK)

I managed to recover from a similar level of burnout/pissed-offedness by changing my domain of software development completely. I got out of high-stakes server-side software and into client-side mobile software. It took some effort to pivot, but in the end it was a great relief. In my experience, Apple's slow approval process is a much better problem to have than getting pinged at 3am for any reason.

Everything a new graduate could wish for! /s

This is the downside of artificial scarcity for software. The upside is that sweet, sweet green. If you could remain cynical a few years longer, scrimp and save through like 10 years of a comfy mid-6-figure salary, you'd stop caring about the BS, and you might even learn to love it (or even contribute to it!). After all, nothing quite feels as good as being a well-paid expert in a complicated field, especially when it grows more complicated over time.

It's offensive, and it's not how it should be, it's a "defect/defect" Nash equilibrium when we should be going for "cooperate/cooperate". So kudos to you for fighting the good fight - you deserve to win.

What about those of us who don't get crazy compensation, but instead work at a midsized company selling a "security" product? All of the complexity in my field comes from stupidity: from certifiers, from legacy protocols that can't die, or from salespeople playing "defect/defect" with one another so nobody fucking talks to each other. If the complexity at least came from software, I would have a reason for my knowledge to feel valuable outside my specific domain.

The real knowledge you can gain is:

* Understand the true desired outcome

* Own delivering it

People who can't do the former are naive. People who can't do the latter are unambitious. You can go through life just fine being both.

Well, it sounds like you need to be learning things outside your domain in the hopes of entering the job market. Sometimes you can't fix the game.

BTW I'm quite wary of "security products" for enterprise; it reeks of antivirus software writ large. That said I can see some benefit to services like audits, or even things like honey pots or "dark net scans" for detecting leaks. But something tells me that's not what you're talking about...

> BTW I'm quite wary of "security products" for enterprise; it reeks of antivirus software writ large. That said I can see some benefit to services like audits, or even things like honey pots or "dark net scans" for detecting leaks. But something tells me that's not what you're talking about...

The product itself is quite reasonable on paper [1]. (And, yes, it would provide value even if all of the software industry ramped up its security practices, so it's not a band-aid like antivirus software.) The execution is the problem. Despite selling "nation state attacker secure" appliances, we are not internally focused on producing a high-security product; we give priority to certifications and features. The disconnect between the marketed identity and the day-to-day developer experience is breathtakingly depressing... at least to those of us who have an interest in security. Management doesn't care, of course. It sells (because of certification and the lack of alternatives on the market), so all is well.

[1] Sorry for being so vague. Given the set of statements I've given already, anything more would make me personally identifiable to my coworkers.

> artificial scarcity for software

Can you elaborate on this? Why would you say there is artificial scarcity for software?

I assume they're talking about the fact that software can be replicated at no cost but is constrained by pricepoints, encrypted code, licensing agreements, versioning flags, and so on. It's been artificially constrained in code.

Correct. Although the constraints often involve code, they don't have to. Enterprise license agreements are ultimately enforced with threat of legal action, coin-op arcade games by a physical mechanism, for example.

Software costs ~0 to copy and distribute absolutely perfect copies, world-wide. To drive the price up you create artificial scarcity, mostly rooted in IP law.

Important note for HN readers: 'driving the price up' is a net benefit for programmers. Artificial scarcity is why you get paid big bucks.

Interestingly, data is legitimately scarce. But that's a discussion for another time...

> Artificial scarcity is why you get paid big bucks.

You are assuming that my software is distributed.

Look, 50 years ago you got a catalog from Sears. You mailed a check, someone opened it, your order went to the warehouse, the check went to the bank, and your items got picked, packed, and shipped.

The software I spent a good part of my career writing got rid of a LOT of people from that scenario. The hardware and software combination that is the ATM got rid of many tellers. MP3s killed record stores.

If you find a programmer old enough, they will insult you by saying "I will replace you with a very small shell script". Many of us built software to replace people, and many of us continue to. Our value is literally reducing costs and "enabling scale" at a massive discount. For those of us who do this sort of work, we are massively underpaid compared to the manual processes that are replaced or never have to happen.

Copy and distribution costs are just a part of the cost. Development and maintenance require real flesh-and-blood people spending their days developing, building, and deploying. That part costs money.

The artificial scarcity applies to copies of an individual piece of software, rather than the scarcity of different pieces of software.

Once you build it, almost the only way to achieve scarcity is through IP law.

You tacitly assume that I am not aware that software products have an R&D cost, and you are (insultingly) wrong. Of course they do. And without artificial scarcity that R&D cost will not be recouped. The default state of software is an open source model, where the "developing, building, and deployment" doesn't cost money because there isn't any.

Copy and distribution costs for movies in a digital age are nonexistent. Are movies using artificial scarcity?

That movies are almost impossible to buy DRM-free should give you a hint?


Quality software is scarce, though. Even in a world without copyright, or a world in which copyright does not apply to pure information like source code, it would still be difficult to produce short, effective code (finding the shortest program for a task is uncomputable, never mind NP-hard!), and so some of us would still be employed without difficulty.

> Important note for HN readers: 'driving the price up' is a net benefit for programmers.

Well, no: if it is hard to get software written, people just won't bother. Most programmers will benefit from a commoditise-the-complement strategy, where everyone is using software and needs to hire programmers to tweak it to their exact needs.

The money is in support & hardware. Trying to primarily compete on software price is pretty risky; Open Source is quite competitive. The big paying companies in software (Googles, etc, of the world) don't particularly leverage IP law in their strategies. Even with players like Microsoft, artificial scarcity of their traditional primary products (Windows, Office) would be the kiss of death to the company.

Standard economic theory holds that in a competitive market the price of a product tends towards its marginal cost over time. I haven't seen anything that makes me think software is any different.

I don't think there is much profit in hardware, most companies seem to work on razor thin margins.

> I don't think there is much profit in hardware, most companies seem to work on razor thin margins.

Consider that Apple is America's most profitable corporation. They do squeeze people with software restrictions; and it is a core part of their strategy. But the focus of Apple has never been 'our software is rare', it has always been 'faster processor, thinner phone, better screen'. The artificial restrictions on the software aren't at all beneficial to the software or the programmers; they are part of a strategy to make the hardware more valuable. It is a strategic element to help their hardware resist the commodification of smartphones.

If they had inferior hardware, then the restricted software strategy would see them crushed under the booted heel of groups like Samsung.

Apple does seem to be the exception, that's why I did say "most companies".

The big paying companies in software aren't in the software business, they're in the data business. The software is a (very cheap, to them) loss leader.

There is a ton of free software. Price is not up just because of IP law. It's up because running and maintaining Wikipedia for example has a price even though most of the stack is free.

We're creating more complexity every day. Kubernetes, microservices, microfrontends, "big data" lakes, etc etc. Now instead of securing a physical server, or even a VM, you have to somehow secure thousands of interconnected endpoints in PaaS products in a cloud you can't even debug appropriately. Is it a surprise our software is worse than ever these days?

We've added complexity but we haven't increased our capabilities. What software can we make today that we couldn't 10 years ago? I'll even stipulate there could be a few examples, but 99% of the stuff we build today could have been built faster, cheaper and secured easier 10 years ago.

Basically everyone wants security, but nobody wants to pay for it.

Maybe the market equilibrium means people don't get hurt enough yet.

Everybody wanna be a bodybuilder. But ain't nobody wanna lift no heavy-ass weight.
Good ole Ronnie Coleman.

And in my opinion, having done some IT security work, it seems to be a relatively miserable field, even if the salaries are going up.

Miserable in the sense that customers don't care, you have to be paranoid and brilliant at the same time, it can be weeks without an emotional payoff, yet everything is always urgent but rarely important.

Having watched some conference videos on the topic, the bro-toxicity is still strong in that field, too.

I think this person should seek happiness in ways that don't involve technology.

>I'm pissed off that my clients don't seem to take it seriously, and I'm pissed off that the vendors don't seem to want to help.

Yes, I 100% agree with your assessment of this person's post. They are running into what I would call "the horrible realities of business", which have nothing to do with putting in too many hard hours at work and everything to do with being forced to deal with unethical or deliberately lazy people in the day-to-day of being a worker.

They’re expecting greatness from everyone around them. That would drive me crazy as well. My mental health is too important to be idealistic anymore.

When I was in my early 20s I couldn’t understand why people were so jaded about anything. A decade later, I understand.

People don’t give a damn and it’s mind-boggling. All you can do is shake your head and continue with your day.

I don't think they are unhappy. I think they see an opportunity, and this is a setup for an upcoming post where they announce the new product to fix the concerns.
