The Infosec Apocalypse (rickasaurus.com)
159 points by chillax on Sept 17, 2020 | 100 comments



I recently did an extensive competitive analysis of SAST tools for a client. Anyone who is thinking about buying one of these tools should pay attention to what versions of each language they support. Also try to get old release notes in order to determine when they first supported a particular language version.

Many vendors take a good long time to support new versions of languages, even mainstream ones like Java and the .NET family. None of them are particularly helpful in getting this information to you. They have their marketing checklists, and information any deeper than that can be hard to come by from the salespeople. A few were scared of letting us have this information at all once they knew it was for a competitive analysis. That's a sign that they take a long time to support new language versions.

In many companies adoption of new language versions happens organically at the developer level, often within days of the new version being released. Even if the system admins try to press the brakes a bit on deploying the new version on production servers, the pressure is there for it to happen. SAST vendors typically are not going to be able to keep pace, which will make your developers unhappy or even give them an excuse for not using the expensive tool you purchased.


Pray that you never have to enter domains where the security audits and assessments must be always done by an "approved supplier".

We have had to cancel engagements with suppliers who I have thoroughly enjoyed working with, and whose results were nothing short of spectacular.[ß] And all of this because certain regulators had their rules written by an established auditing firm who managed to make sure that only a select few (less than 5 globally) companies can ever perform their required assessments.

Regulatory capture through security auditing companies should deserve its own circle of Hell.

ß: Have you ever had a pentest report you could forward directly to your engineering teams and be certain that they understood the contents accurately? I have. But I can't use the supplier anymore, because they do not enjoy a royal charter.


Not just languages - Generally if the scanner isn't aware of the specific framework you're using it's unable to find entrypoints or reliably figure out sources / sinks and filters.

Many modern frameworks have one or more of: dynamic configuration, compile time annotations, reflection, IoC etc which make it very hard for a "first principles" scanner to make sense of what can actually happen at runtime.
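As a toy illustration (Python here just to keep all the examples in one language): when the handler is chosen by name at runtime, nothing in the source text ever calls it directly, so a scanner reasoning only from the code's call graph has no path to follow.

    # Hypothetical handler that a scanner would want to treat as sensitive.
    def delete_user(user_id):
        print(f"deleting user {user_id}")

    # "IoC-style" wiring: the mapping from action name to handler lives in data,
    # so nothing in the source text ever calls delete_user directly.
    HANDLERS = {"users.delete": "delete_user"}

    def dispatch(action, payload):
        handler = globals()[HANDLERS[action]]   # resolved by name at runtime
        return handler(**payload)

    dispatch("users.delete", {"user_id": "42"})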


I didn't write about this because it wasn't central to my exact point about FP, but you're totally right. This will be just as chilling for new web frameworks.


Even Fortify and other tools which have to compile beforehand don't do so well IMO. Someone is going to come up with an actually good SAST and print money...


I’ve had really good experiences with Fortify so far. It took me a lot of hours to figure out how to write custom rules but once I was able to do so it became super valuable.


What exactly is a "'first principles' scanner?"


Sorry, that was perhaps poorly worded. By "first principles" I meant a static analyser that works using only an understanding of the language (say, Java) and the application source code, looking over the AST/CFG etc. using only built-in language rules.

You get much better (read: maybe useful) results if you happen to have a "rule pack" for the specific framework you're using, which provides specific hints on sources + sinks, "gotchas" and how things are wired together. As a somewhat obsolete example, I would not expect a SAST tool working only from "first principles" to be able to do anything useful with a Spring XML configuration file.

On the whole my experience is that these things work very well on certain types of codebases - e.g. on naive PHP they can "go to town" because of the huge footgun surface and fairly direct control and data flow.

Stuff with lots of "magical" framework features and indirection (where even a human reviewer can often have trouble finding the implementation from the interface being invoked) they often silently fail to do anything useful.
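To make "sources + sinks" concrete, here's a purely illustrative sketch of the kind of information a rule pack conveys, using Flask rather than the Spring example above to stay in one language. This is not any vendor's actual format; only the API names referenced are real.

    # Purely illustrative: the sort of hints a framework rule pack provides.
    # Not any product's real syntax; the dotted names are real Flask/Werkzeug APIs.
    FLASK_RULE_PACK = {
        "sources": [                      # attacker-controlled data the framework hands you
            "flask.request.args",
            "flask.request.form",
            "flask.request.headers",
        ],
        "sinks": {                        # calls where tainted data becomes a vulnerability
            "sqlalchemy.text": "sql_injection",
            "subprocess.run": "command_injection",
            "flask.render_template_string": "server_side_template_injection",
        },
        "sanitizers": [                   # passing through these clears the taint
            "markupsafe.escape",
            "werkzeug.utils.secure_filename",
        ],
    }

Without something like this, the scanner has no idea which values the framework hands you are user-controlled or which calls are dangerous for that stack.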


> static analyser working using only an understanding of the language [...] looking over the ast

The part I quoted is called "tainting". Or if you are more academically inclined, "data flow analysis".

When it works right, it is an incredible force multiplier in security. You get detailed, actionable and above all helpful error messages directly from CI, because as a static analysis tool it's pretty fast and can be made part of the common linting pass. As you hinted, it does require a suitable config setup and/or code annotations to mark sources and sinks. And when it does work, it can eliminate an entire class of vulnerabilities - good taint analysis will prevent you from even accidentally using user-supplied data in anything that involves relaying, storing or displaying information.
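As a minimal sketch of the kind of source-to-sink path such an analysis flags (a made-up Flask handler; the route and table names are invented):

    import sqlite3
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/orders")
    def orders():
        customer = request.args["customer"]              # source: attacker-controlled
        conn = sqlite3.connect("shop.db")
        query = f"SELECT * FROM orders WHERE customer = '{customer}'"
        rows = conn.execute(query).fetchall()            # sink: tainted string reaches SQL
        return str(rows)

    # The fix that makes the finding go away: a bound parameter instead of
    # string formatting, e.g.
    #   conn.execute("SELECT * FROM orders WHERE customer = ?", (customer,))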

The downside is, when it doesn't work, it's a source of unhappiness. Debugging a taint failure because the AST analysis gives a false positive can be infuriating.


I suspect this means a scanner that can derive all necessary information without any configuration. For example, consider a scanner looking for API endpoint authorization inconsistencies. Does the scanner need you to describe your authorization scheme, or can you simply run the thing and it figures it out?

This can be easy or hard depending on how bespoke your application is. If you're using something like Ruby on Rails, then there's a paved road that a scanner can preconfigure to understand your application. If you're using a homegrown authorization framework, then a scanner will likely have a hard time understanding your application and will need to be configured.
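To make that concrete, here's a sketch of the inconsistency such a scanner looks for, with a hypothetical require_role decorator standing in for the homegrown framework:

    from functools import wraps

    # Hypothetical homegrown authorization; the decorator name is made up.
    def require_role(role):
        def decorator(fn):
            @wraps(fn)
            def wrapper(user, *args, **kwargs):
                if role not in user["roles"]:
                    raise PermissionError(f"{role} required")
                return fn(user, *args, **kwargs)
            return wrapper
        return decorator

    @require_role("admin")
    def delete_account(user, account_id):
        ...

    def export_all_accounts(user):        # same sensitivity, but no check:
        ...                               # the inconsistency a scanner should flag

On a paved road like Rails the scanner can ship knowing where the framework's authorization checks live; with a homegrown require_role it has to be told that this decorator is the check.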


This cannot be stated enough. The company I worked for purchased "the leading Gartner SAST testing solution" before they even started a project and had a framework chosen. It turned out the SAST solution didn't support the latest major version of the framework we were using at the time, and didn't support the latest version of the language we were using either. Even worse, it never seemed to report an issue, and it took hours to run at times, eating up build minutes.

I ran Bandit on the code base just for fun one day; we had four hits and it took 5 minutes to run. It took a while, but I finally convinced the powers that be that we were better off using the tool we could verify worked, rather than trusting the SAST vendor's claim that theirs did.
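For context, Bandit is a lightweight pass over the Python AST with a fixed set of checks, so code like the following (two classic examples of what it flags) turns up findings in seconds:

    import subprocess
    import yaml

    def run_backup(path):
        # flagged: subprocess with shell=True and non-literal input (shell injection risk)
        subprocess.call("tar czf /tmp/backup.tgz " + path, shell=True)

    def load_config(text):
        # flagged: yaml.load without a safe Loader (arbitrary object construction)
        return yaml.load(text)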


Fortify took almost a decade to support Python 3 and there were some hefty lags on Java updates, where multiple times it was like “do you want to use the supported Java or the one which Fortify can scan?”


waves, original Bandit author here. It could've done more, but pretty cool to see how useful it's been and where it's got to.. OSS is fun.


This. I work on an Android app written in Java. We were thinking about migrating to Kotlin years ago, but our blessed tool - Fortify - only just released a technical preview for Kotlin 1.3.50 in Fortify 20.1.0, and already Kotlin 1.4.10 is out.


My impression is that the current owners of Fortify lack the resources that HPE had and the product is suffering as a result. It's best suited for maintaining old code bases.


Wow. Great. Would you be able to publish your analysis?


Unfortunately I cannot as it is not my property. Plus one vendor made my client and myself sign an NDA even for the most basic information. What I can say is to not put too much faith in the Gartner Magic Quadrant for this space. The best thing to do is select a couple of representative projects for your company and get each vendor to let you do a POC and then compare the scan results from each. The process of making this happen will be long and painful as it is managed mostly by sales teams rather than the tech folks. In the end though, it's the best way to know what will work for your particular set of development technologies. The good news is pretty much every vendor was receptive to the idea of doing a POC even if it took a while to get each one set up.


I wish we could all just reject SOC2, it's such a grift. Bankers come in to read some docs (that they don't understand), look at screenshots (that they don't understand), take up hundreds of thousands of dollars in cash and time, and then write a document (that they don't understand) that no one will read or care about (except to fuel their own SOC2).

The harm is so significant. Tons of these due diligence terms are driven not by any security engineer, but by a legal or compliance team - I've even had members of the security team outright apologize for having to send it over.


Being a security person myself, I've found plenty of value in reading SOC2 reports. They're one of the few relatively standardized ways to address the laundry list of vendor due diligence questions.

Is it a good experience for small companies? No. Does it make it easy for your vendors to use cool new technologies? No.

Do I, someone advising on whether or not we should buy something, care about either of those in the moment? Also no. And I spend a rather distressingly large amount of my time trying to talk engineers out of using cool new technology for novelty's sake anyway.

You're absolutely right. SOC2 can, and I assume often does, go quite badly awry and waste literally everyone's time and money. I just know that I've found some value in it. And it helps provide a sound basis for making the vendor agree to assume liability for when they screw up due to grifting.


I think the fact that you don't care about challenges for small companies and new technology is a huge problem. Stopping new entrants into a space is pretty much anti-competition. Preventing new technologies is anti-innovation.

Competition and innovation are the most important things to a healthy economy and market. So essentially, SOC2 both stifles innovation and ruins economies.


You're absolutely right. New entrants, new technologies, and new people in a space are critical for the health of a market.

I just think it's possible that when procuring a tool for a given purpose, a company's chief concern might be about the safety of the tool and vendor rather than the health of the overall market. Your experience may well differ!

Also, I feel the need to clarify my remarks. I, someone advising on whether or not my employer should buy something, do not prioritize the health of the market or cool new technology or a good experience for the vendor over the safety of the tool and the vendor when advising on the purchasing decision. In other circumstances, I can and sometimes do make different decisions. I hope this removes any misunderstanding that I may have engendered by failing to write clearly.


I don't think it's on people in your position to solve this issue, but I do think it's important for people in your role to understand it and try to help by accommodating security solutions that are not "pay exorbitant fees to known security vendors."

For instance, the industry desperately needs nearly free, open security tools, that are also going to be accepted by people in your role. Too often open source solutions are immediately dismissed by compliance people simply because they are unfamiliar, or because they don't believe open source can be as good, or in the worst case because of propaganda against open source by security tool vendors.

Similarly we need free starter packages and standard templates for processes that small companies can use to get SOC2 equivalent process in practice, without paying hundreds of thousands a year to expensive auditors.

Maybe there should also be a push on vendors not to use SOC2-related security features as an enterprise tier gate. E.g. SAML or SSO is often only available on the "you can't afford it" enterprise tier.

There is a lot we can do to fix these problems, but we need people to care, including people in your role.


I am not someone who does not care. Please accept my deepest apologies for my repeated failures to communicate clearly. Please do not hesitate to ask for further clarification if it would be in any way helpful for you.

I fear you have mistaken me for someone who does not care about the health of the ecosystem. Merely because I am someone who advises on what's best for my employer's safety and risk management need not always mean this.

I am, for example, perfectly happy to make use of open source tooling. In many ways, I prefer it. I do not use price tag as a proxy for value. I also do not use cool new technology or vendor immaturity as a proxy for value. Your idea about making it easy for small vendors to understand what they need to do to attain compliance-equivalence is a wonderful one that should be broadly enacted immediately.

Again, please do not hesitate to ask if there is anything else I can clarify for you!


I'm not saying that no human being has ever derived value from a SOC2. I'm saying that the process around SOC2 is a grift.

The fact that it also causes harm to small companies, and detracts from resources for companies that actually care about security, just makes it a particularly harmful grift.


I agree.

I think if there was a better standardized process for getting all the same information, I would use it preferentially in a heartbeat. From experience, trying to replicate all that info-gathering process manually is not a good answer.


This is like saying Agile or DevOps is a mistake because the usual grifters sold a PowerPoint to the C-levels but didn’t change the culture. If you approach anything as a formality, you can definitely minimize the benefit you’ll get but that’s saying more about your organization.


That's the parent's point! The certification itself is completely meaningless. If a company has their SOC2 it basically tells you almost nothing about their security culture.

I've never been at a company that has taken SOC audits seriously -- plenty of companies that take security seriously but SOC compliance is an exercise in paperwork not security.


If that was their point, it wasn't well expressed. It doesn't seem right to call something “a grift” because you chose to treat it like a formality when you could have used it as a good faith engagement to actually try to make it better. That's why I made the Agile comparison — organizations can have positive or pointless adoption without that saying more than “you get out of it what you put into it”.


I call it a grift because it really feels like it. It's this weird "you have to do this or else" thing that provides 0 additional value (SOC2 is not prescriptive, it is just a forcing function, you could have... just done it).

A bunch of unqualified people coming in to rank the unrankable in a vague, gameable, expensive way is my definition of a grift. Then, on top of that, you have a sort of ring of high status organizations that will audit you and charge a fortune, but they've convinced all of these other orgs that their audits are Definitely Better (companies audited by them get breached all the time too), and now you've got this whole racket on top of the grift.

Agile isn't forced on anyone. People won't skip over your product because you're "not really doing agile" or whatever. You don't get forced into yearly audits where you pay some outsider with an "agile certification" and 0 technical knowledge to take up multiple engineers for months to write reports.

This post explains more: https://news.ycombinator.com/item?id=24505894


I don't think you understand SOC2 at all.

The business itself defines what controls they want to be tested and included in the SOC2 report. This is why SOC2 reports are simultaneously the worst and the best assurance you can get over a third party (in most cases, they're the only way to get any assurance at all).

The problem is, as you say people who consume SOC2 reports don't understand what they are getting.


Well I think I do understand SOC2 :)

I am well aware of how controls are defined, it was only just an hour ago that I was reviewing our controls with the feedback from our auditor.


So how would you look to get assurance over third parties without SOC reports?

I am happy to concede that SOC reports are terrible, but at the moment they are one of the few effective ways to get assurance over third parties.


As I said in another post,

> This is a very hard problem. It's a regulation on a quality that is a very fast moving target with weak consensus.

Doing this in a regulatory way just isn't viable today.

> the few effective ways

I disagree that they're effective. I don't think they are. Lots of companies with a SOC2 are breached, no company that people consider 'very secure' has earned that reputation due to SOC2 (or any other compliance for that matter).


I found the SOC 2 very useful. Not because I make any Ops or Engineering changes based on the process. It made me think about risks to the business as a whole and the concept of controls to mitigate and monitor those risks.

For example, what is everyone doing to minimize 3rd party risks? How do you know that the whole team understands why PII data should be avoided when possible?


SOC2 is exclusively a forcing function, you could have adopted a NIST framework for risk assessment and controls and gotten that same value, and then had an extra 6 figures of budget to implement high value security work.


Same with the CIS controls which at least come with a tool to test servers against their host benchmarks -- CIS Level 2 is a fairly practical "Server Hardening for Beginners."


What do bankers have to do with SOC2? Do you mean consultants?


I think he means a CPA. Certified public accountant. They are the ones who actually write the report.


perhaps you don't understand the way a typical company operates. security is all but ignored. SOC2 is an absolute necessity. it's a forcing function for baseline processes and controls, that 100% would not exist by default if companies were left to their own devices.

for sure, i'm not talking about the 5% of competent companies.


Let's oversimplify things and say there are two types of companies:

1. Those who care about security

2. Those who do not care about security

For (1), SOC2 provides no value, because any structure it lends (it lends none, but you'll end up choosing some NIST thing or whatever) is something you could have implemented for much less money. Remember, you'll spend roughly one full security engineer's worth of money/time, so you could hire an FTE to just do these things. Except you won't be constrained in nearly the same ways.

For (2), companies that don't care will just grift the grifters. It's simple - there are lots of easy checkboxes, and most of it is just documenting processes. Anyone who's gone through a SOC2 should see how easy it is to "game" it. It's tedious, but a large company will just hire their way out of it, and have a compliance team that's almost certainly isolated from security.

Because it's gameable, SOC2 is far easier for large companies to push off. They can hire a compliance team, call it 'security', and move on. Small companies, and/or companies that care about security, are left having to dedicate their much more limited resources to compliance over implementing meaningful controls. A small company isn't going to know the many 'tricks' for doing minimal work to pass, which is a really important quality of SOC2 - you want the least policy that will pass, otherwise you're setting yourself up for either stagnation or an even longer report next year, since changes between reports have to be documented and go through the process. Large companies can just get away with way more.

As one simple example, let's say you have 1 FTE seceng. For compliance, they could spend N% of their time setting up logging, documenting that logging, writing docs on their IR policy, etc. Or, without SOC2, they could spend N% of their time setting up logging, writing good detections, understanding and exploring their infrastructure, documenting in a much more natural way at a lower cost, etc. And then that budget could go towards improving infra, tooling, training, new hires, etc, to do that work even more effectively.

How many breaches has SOC2 stopped? Because clearly it hasn't been the deterrent in many cases - how many companies get owned, while being compliant, due to unpatched vulns (something any auditor is guaranteed to ask about)? What if they'd spent the few hundred thousand a year on a few more seceng? The way companies scale security puts ~10-500 employees to every seceng, meaning that even cutting a few would be a massive increase in risk.

In short, the companies that are already ignoring security will have no problem doing so when they're large, and smaller companies, or companies that do care, will only be drained by SOC2.

edit: I will also say that,

* I won't state that SOC2 is universally useless.

* This is a very hard problem. It's a regulation on a quality that is a very fast moving target with weak consensus.


I've done more than my fair share of vendor due diligences (and audits, action plans, contract reviews, ...).

To me this is a non-issue, because customers almost always ask for types of security checks, not for specific tooling (i.e. asking for source code analysis vs asking for Veracode). As a rule, compliance/government folks will be concerned about the types of security measures you have in place and not about the specific implementation. Commercial source code analysis tools have varying support depending on language (as others have mentioned: some languages are harder than others). A very valid alternative is to use a linter with security checks (and potentially custom rules). The advantage is that checking will go much faster, so you can do it more often (every PR instead of nightly, for example). Many security conscious companies have something like this in place.

In general when you're answering security due diligence, it's your job to convince the customer you're going to keep their data safe. They will ask about certain things you don't have and it's your job to explain how you're still solving the underlying problem. Typical example: customers asking for antivirus on all systems and you using (immutable) docker containers.

By the way, the interesting thing here is not the answers to the questions, but how you organise your company to quickly and effectively (as in: no follow up meetings or worse: action plans) answer them. My pet peeve here is "customer guided security": You start from what you think you need (baseline) and you add the security measures that take the longest to explain why you don't have them. That way, you're skating through most of the due diligences and sales velocity goes up, which will make your bosses very happy.


Just as a counterpoint, about 2/3 of the enterprise contracts I've either helped fulfill or reviewed have specified the tool (and sometimes a minimum version, but that was only twice out of the ~25-30 contracts I've seen). That being said, for the vast majority of those (90%+) the client was very reasonable, and if we had a good reason to remove a specific reference to Veracode, for example, they would probably be fine with it. But I could definitely see it becoming an issue if you just sign the contract to close the deal and try to get out of using Veracode later, especially with whatever the client's internal approvals/review process is.


My experience - primarily in healthcare data as a vendor... Employers & Insurance.

Client security teams have been very reasonable on deviations to their massive spreadsheet checklists.

On one hand, I think that if you, as a vendor, reply back with a few "well, we do X instead of Y in the same spirit" they will probably believe & trust your answers more than a spreadsheet returned in 2 hours with "yes/in compliance" for each question.


This is not my experience. I've had buyers push on specific software. E.g. I've had one push back and arbitrarily state that we must use "paid" source code analysis vs an open source solution. No reason given. I've had another say that vendor-supplied antivirus is not good enough (e.g. Windows Defender or Apple XProtect). Again, no reason given.


That's very interesting! What sectors were those buyers in? I've mostly worked with fortune 5000 and financial institutions.

It doesn't surprise me in the least that you didn't get any feedback. The default option for these companies is to make you accept their specific blend of security requirements... Of course, you then have to support that forever...

I've had good luck setting up a meeting with both the due diligence person and the actual buyer/champion present. It's often easier to explain your stance in person and the buyer is going to stop the due diligence person when he's getting into the weeds.


Meh, if you want your niche tool to break into big enterprise, you have to deal with compliance hoops. That's just the cost of doing business with big enterprise, certainly not a new thing.

If anything, in the long term that's probably a benefit to fancy functional programming languages with complex type systems - they provide much more info for static analysis tools to work with (static analysis is pretty highly related to type theory)


And this is probably a really good thing. We should want bleeding edge tech and languages to get battle hardened in smaller stakes products and open source projects before we try and use them at scale.


> That's just the cost of doing business with big enterprise, certainly not a new thing.

This feature is why small companies can compete with big enterprises. The big ones get the economies of scale, but they also get bogged down by their lack of agility.


Static analysis, and software component analysis tooling as well, are incredibly helpful though, and should really contribute to releasing stable products -- they're not just there to satisfy your customer's management types, they're there to actually make sure your tool doesn't have 5 RCEs active at any point in time.

I for one am happy companies ask about this type of stuff; it's basic hygiene for keeping control over your product's security, and the tooling makes it a lot easier.


As someone who works for a SAST vendor I will say it's mixed. By the choices we make in what we support in languages and their dialects, we can affect the ecosystem.

And at the same time, I have seen some terrible things that are picked up in code the first time they are scanned, that in theory should have been obvious but were missed for whatever reason.

It gets even worse when you're looking at included libraries.

Also, if you're using these tools, put in requests for new features and languages. This is how we know what customers want and where to focus resources.


I just spent some time with someone trying to recruit me back to writing medical software. The entire interview was dominated by HIPAA-related questions, which were mostly the interviewer justifying why the software sucked. And the software in question sucked in every way it could: bad UX, terrible limits on integration, data could not be exported without copy-paste magic, etc... Some of these issues really are dangerous because clinicians have to spend so much time fighting to make things work or doing immense amounts of double and triple data entry. Oh, and every time the justification was HIPAA requirements or infosec.

It made me realize that we do not know how to make software well enough to regulate it safely, and no, I do not want to go work in a sector where priority one is complying with some privacy regulation when the top priority should be diagnostic accuracy, system reliability, or eliminating operator error.


I've worked in medical software and I want to second indymike on this, because I think it is important. The requirements for security or privacy are used as excuses to make only the smallest and most moderate of improvements to these applications. Year after year the UI stays the same, the product stays the same, and the pain points for the customer's employees stay the same (remember that doctors in hospitals have some say, but it's the business side of the organization that buys this software).

IMHO, this is one of the big excuses they use to push off interoperability. The lack of interoperability combined with the big vendors controlling the standards (i.e. HL7 or whatever) are freezing smaller vendors who might have dramatically better products out of the market.

Mark my words, the big consulting firms will use security compliance as another way to keep smaller and more agile companies (some with dramatically better products) out of the market.


What's wrong with "and"?

Obviously you can't literally have 4 top priorities, but patient privacy isn't some dumb irrelevancy.


When you are making software that is used to make life and death decisions, privacy should not be the top priority.


The idea that there is a single "top priority" is the problem. There's inevitably going to be a list of primary criteria, with the specific situation driving how they are balanced.


Man who hunts two rabbits starves.


Chases maybe.

Putting a snare where two rabbits are active is just as good or better than putting it where one rabbit is active.


I also work on software in the medical industry. Maybe you were talking about the specific firm you were interviewing with, but patient safety (accuracy) is absolutely the highest concern in my team. That doesn’t mean we cut corners on security and privacy, it means that all must be done correctly to be successful.


>There are new forces at play which will calcify current software stacks and make it extremely hard for existing or new entrants to see similar success without a massive coordinated push backed by big enterprise companies [...] enterprises no longer trust their developers and SREs to take care of security, and so protocols are being implemented top down.

The security community got exactly what they asked for.

Security people were selling fear of insecurity with limited actionable advice for security to come into products/systems bottom-up, so the business has to solve it with process. Breaking into computers is fun and all, but throw around words like "risk analysis" to sound like hot shit for too long and you end up with comprehensive risk analysis process that spans beyond the bits of tech you want to play with.

I work in a highly-regulated domain so software security is just another type of risk analysis we do. So shrug whatevs this doesn't calcify us more than we are already calcified. I just think it's cute that infosec people thought they were hackers, but didn't realize they're another flavor of boring business analyst telling the kids to turn down their music and develop software to their requirements.


> In the wake of so many data leaks and hacking events enterprises no longer trust their developers and SREs to take care of security, and so protocols are being implemented top down.

As if enterprises were not responsible for not properly budgeting security concerns in their engineering teams. I guess it's easier to just buy a tool that will force a process overhaul, rather than doing a much more thorough process overhaul in the first place. The problem is that tools like vulnerability scanners address only one part of the problem; admittedly, it is a low hanging fruit.


Those things are not mutually exclusive. And the original framing that enterprises "no longer trust their developers and SREs to take care of security" is a bit leading as well. Real, practical security is a mixture of dedicated security teams doing in depth analysis and pen testing, developers and SREs building secure solutions and automated tooling that does basic sanity checks.


This isn’t worded correctly based on my experience. It should say ‘enterprises no longer trust their developers and SREs alone to take care of security,’.

Devs and SRE still have a very important role as SAST and DAST tools only catch a portion of security issues in code and are generally useless for gauging architectural/deployment/runtime issues.


I agree; my response was to the wording present in TFA, which seemed to imply that enterprises expected engineers to handle security, were surprised they didn't, and hence are now attempting to force a top-down approach where the whole thing will be secure _despite_ engineers doing the same things as before.

Tooling helps; no question about that. But you can't assume your engineering team can be oblivious to security concerns. You need to train, hire and equip your teams appropriately. You need to set the right incentives. Just limiting the kinds of software stacks you're allowed to use seems shortsighted.


Totally agree. One option is to make static scans a tollgate only at higher application risk levels and/or exposure. For example, low/medium risk internal automation and telemetry tooling? Go nuts. Internet facing, regulated, high/critical risk? Need code scans.


That sort of division assumes you can be completely sure about how a system will be used in the future. I've definitely seen internal systems deployed in a public-facing capacity because someone found a useful reason to do it.

How are you going to do that, if the internal tool is written in something that is not supported by SAST tooling?


I've got to say, I don't agree with this. The predicate of the argument appears to be that the team managing the VA/SAST platform will be able to block adoption of new technologies if their tooling doesn't support them.

I don't think I've ever seen a company where the security tooling team had that kind of authority or pull. I've seen plenty where the first time the security team hears about a new technology is after product development has started.

You only have to look at the rise of containerization in enterprise to see this in action. When it started tooling was way behind, and it's only catching up now, but that didn't seem to stop anyone.


Had that in JP Morgan. New fancy things better work with all the tooling in place otherwise you can forget about using it (code review, unit tests, linter, deployment, packaging, vulnerability scanning, etc...).

Honestly, this was great, this prevented developers and new graduates from rewriting every goddamn thing in the language du jour.

I hope I never have to work in an organization with 100+ developers that doesn't enforce some standards. It's impossible to join a project and collaborate with other teams when every single developer/project decided to use its own language and make its own deployment system.


The quality of consumer facing major banking websites shows how well this calcified development “culture” works in practice and long term.


I'd rather my bank not lose my money than have a cool website though.


If you're implying Chase the consumer bank, it's separate from JP Morgan.


I think they are arguing that this is changing now. When I first started software engineering ~15 years ago, it was pretty common for enterprises to exert quite a lot of control over what tech could be used. You had to use some solution from the "approved" list of languages/frameworks/versions (that was always hilariously out of date). Then that swung in the opposite direction (mostly concurrently with cloud adoption where dev teams were in charge of their own infrastructure) where individual teams had complete free rein to choose their own tech. The author is arguing that things are now swinging back in the other direction and it's largely driven by security considerations. So it's not "you have to use something on the approved list of tech maintained by the Enterprise Architecture gods" but rather "you have to use something that is supported by our enterprise SAST solution." It will still likely be more lax than it was back in the old days because it is harder to enforce those restrictions in a world where the dev team manages their own cloud infrastructure, but still.


Sure, and it's the argument that it's swinging back in favour of centralised control that I don't buy.

I look at major enterprises quickly adopting what are still quite new technologies, a good example being the uptake of things that come under the cloud native banner and that doesn't tell me that things are becoming more centralised/controlled.

I've spoken to multiple security teams looking at container security who've said things along the lines of "this is getting deployed whether we want it or not, so we're doing our best to keep up"

Ofc my examples will just be anecdotal, but that's what I'm seeing.


Yeah, I would mostly agree. I have noticed personally at my (largish) company a bit of a shift back towards centralized control, but only slightly. One thing that does seem very different now is that there is a lot more pressure on the vendors of SAST and other security products to support newer tech stacks. And if the big boys won't then there are always a huge number of smaller players ready to jump in to take advantage of that opening. The number of products just in the Serverless space offering "next-gen" WAF solutions is pretty amazing given how recent large scale adoption of Serverless stacks has been.


Yeah I'm confused by this article. Why would this push functional programming into a small niche? Is it just because the scanners are only written for a small range of languages like C# or JavaScript? Seems like functional programming makes for BETTER security scanning all around. If anything I actually see this potentially giving FP a boost, unless the scanners are just surface level and are adopted as a matter of faith. All good and well until one of them gets cracked; dynamic languages compared to compiled ones are like swiss cheese.


> Is it just because the scanners are only written for a small range of languages like C# or JavaScript?

Yes.

> Seems like functional programming makes for BETTER security scanning all around.

Yes.

> unless the scanners are just surface level and are adopted as a matter of faith

Kinda. They are very useful, as they catch all that stuff that should have been designed out of the language/framework to start with. They are not something to get cracked, but they also won't discover any deep issues.


I feel like the underlying issue is the actual quality of the scanner doesn't matter. It's more of a liability checkbox or culpability shifting device.


Whee, more compliance calcification that misses the point, to slow down your large competitors with.


Static analysis is coming to FOSS as well. There is no mystery tech behind that. Compilers actually do quite a bit of analysis: see Rust. Of course FOSS tools will probably have rougher edges, but the entire field will commoditize in the coming years. There are already a number of competing tools as mentioned by the article.

The usual FP vs imperative argument is not relevant here: the LLVM intermediate representation is FP.


The inverse point of view of this is that security, as everything else, is easier for stacks that are commoditized.


Well first of all, a smaller number of tools is a good thing. Most software tools suck ass. By focusing more on a few of them, hopefully their quality will increase (depending on who is making them and what their incentives are).

Second, security scanning is just part of an overall strategy for increased software quality, which helps the product made out of the software, which is the entire point of writing software. Who cares if your stack calcifies if the user has a better experience because your app crashes less, needs to be emergency-patched less often, and doesn't leak personal data like a fire hose?

I am an SRE, and not a little bit of a security nerd, and I wouldn't trust myself with getting security right.


Apocalypse is a strong word for a post that ends with no real prediction.


It seems to me that the author is concerned that a desire for vulnerability scanning will prevent new languages from entering use because these tools won’t support them.


"Apocolypse" is still a pretty strong word for failing to break into a market because your product doesn't meet the user's requirements.


Hi Author here, I'm just as worried about "established FP but globally niche" tech like Haskell or F# as I am about new tech. I've surveyed options and there's nothing available.


I worked in a Haskell shop, and later in a Clojure shop. In both cases, engineers were happy-go-lucky about every security or privacy concern under the sun. Often they seemed to believe that being functional saved from having to worry about validating their inputs, checking authorization, or (absurdly) having access control on their critical data stores. They told me as much on several occasions, and exhibited no interest in altering these beliefs.

As you might guess, this drove me nuts. In both cases it eventually blew up in their faces and the events proved to be free of side effects. Their imperative colleagues did not have the same mindsets.

If those two shops are in any way representative, then it may perhaps be worth considering very carefully if keeping cool new technologies away from serious usage could in some scenarios be a win.


It's funny, I've never seen a dev team that really deeply internalized security without someone embedded who was an expert. It's just too hard, too easy to make a mistake.

I don't think these tools are the answer though, they make it easy for CISOs to look good, driving down the number, but is it real security?

You really need experts thinking about the security of your app. Ideally someone who thinks like a hacker.


You're right. You do need that person. IMO, it's a lot easier to accept their feedback when you understand that using a functional language offers you no significant security gains by virtue of being functional.

In my limited and less-than-universal experience as a security person thinking like a hacker, those scanners can and do enable real security enhancements. Keeping up on your patching is real security, as is having a system that can point out which inputs you didn't validate and what code paths they're on. Couple them with someone with the right experience and background, and you have the basis of a real application security program!

Which is to say that you're absolutely right. Having the right person in the right place is absolutely critical. I just think it's possible that it might not always be sufficient.


Oh i agree 100% with that. Off the shelf static analysis tools have massive noise and rarely come up with useful vulns.

Most of the time they end up making a 20 page report with 500 issues that nobody reads because 499 of the issues are stupid.

(However, they can work if highly tuned to specific environment and workflow)


Blind SAST will give many false positives. It took a local megacorp a year to reduce noise on their Java build pipeline when adopting it.


Best way to stop hackers is to just start handing out life sentences. That will put a stop to it. Or even better, death penalties. Like seriously, just get a life. Society doesn't tolerate thieves IRL, so why would we tolerate them on the internet? Backdoors and exploits exist everywhere. No computer or building is 100% secure. We know this. So stop acting like you are doing everyone a favor by exposing them.


Best way to stop crime is to have the state execute anyone found guilty, right? Because the justice system never makes mistakes and is always on our side!

Tell me, in this utopia, is there a world government carrying out these executions? Or is it just our great country purging its security experts?


Not only is this psychopathic, to prioritize property above human life, to have such draconian sentences handed out so carelessly, not only is it so inhumane, to value human life so little, but it's also just factually and objectively bad policy.

We learned from history and the Bloody Code that when you have draconian sentences, people will "upgrade" their crime, since there is no difference between, say, "minor theft" and "major theft"; it also encourages horrible acts to avoid draconian punishments (murder is often involved in high-risk crimes that carry draconian sentences). Most modern criminology and justice theory is also built on the principle that the punishment must fit the crime for law to be effective under a punitive system.

On a technical note, it is doing a favor to expose the exploits, as long as it's handled in an appropriate way, I mean that's literally why companies pay a ton of money to white hat hackers, and why there are bug bounties, etc. But even more that's why someone like Edward Snowden is a hero.


I'm on the side of sending the big list of vendor due diligence questions to the maker of a potential product. I know it's a pain to be asked about GDPR, how you store/delete data, how you offboard employees, whether SSO is everywhere, whether you have 2FA, and how you access your servers.

We try to keep a high standard of tooling to protect our customers and company data. It's really not about bogging you down, I know it sucks. It sucks hard. It's about ensuring when we upload data to your SaaS, we know it's in good hands that have been vetted.

The good news is, if you make it through it once, other big companies start flooding to you as well, and it becomes much easier to deal with them as you've been through the intensive process before.

I deal with a lot of deals, and if you're building a startup it's a lot easier to think about security at the start than to retrofit and fix it all at the end.


I see no links between infosec and functional programming.


The link is that FP is usually bottom up and InfoSec is top down and they fail to meet in the middle

Big FP shops will have solved this of course. I doubt StanChart or Jane Street are losing any sleep over it.


I've heard a lot recently about Jane Street. Do you know a lot about them? I was curious to know their background in more detail and generally what kind of company it is, or the general attitude or atmosphere of the place. Also, why do you mention them specifically in the context of FP?


> Also, why do you mention them specifically in the context of FP?

They are mostly famous in tech circles for the day one of their interns, Yaron Minsky, said "hey, let's rewrite everything in OCaml." And they did, and were wildly successful, and he's the CTO now. They bet big on FP and it happened to be an excellent fit for their problem domain.


Still, no amount of FP will save your ass from someone making an internal backend service publicly accessible. It's tools that do things like enumerating all of the company's public endpoints that catch that, and I wrote that tool in Python :))
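Roughly this kind of thing (hostnames are placeholders; run it from outside the network so you see what an attacker sees):

    import socket

    # Placeholder inventory: services that are supposed to be internal-only.
    INTERNAL_ONLY = [
        ("billing.internal.example.com", 8080),
        ("admin.internal.example.com", 443),
    ]

    def reachable(host, port, timeout=3):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Anything reachable from outside is a finding, regardless of the language
    # the service behind it is written in.
    for host, port in INTERNAL_ONLY:
        if reachable(host, port):
            print(f"EXPOSED: {host}:{port}")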


I work in AppSec for a Very Large Company. I've worked in large companies before. These are not new trends. We have programmers who do F# and other functional programming. I would think the bigger inhibitor to functional programming is that most of the existing apps are Java or .Net so unless you are building a brand new team, you reuse the skills and technology you already have working for you.

Our devs use plenty of small open source projects. We [Security] like to recommend software that we are comfortable with, but any determination of "stacks" we leave to the actual software engineering teams. If something is pretty bad, not updated, constantly having problems, etc - we might ban it... but what's your case for using poorly engineered software given the alternatives?

Not sure if mom and pop is supposed to mean commercial, but not OSS? OSS we can patch and modify if necessary. We can even PR patches back based on what our "scanners" and manual testing find.

Generally, we don't care about language, most issues are in implementation not the language, and less so in more modern languages where the creators have heard about security.

The basic type of code scanning needed for PCI and other compliance is a commodity offering, and a manageable cost compared to the marketing and relationship management costs needed to pursue big clients.

I am not sure the OP understands what a SOC2 report says/does. It talks about pretty high level controls and practices. You certify an app/service, not a stack. If you scan and fix your bugs and have proactive security training, it doesn't care about how you do it. There is no golden stack that will help you pass a SOC2. You may be able to make your life easier with certain services/SaaS, but the issues come up in your practices and in the actual code implemented. If you have bugs, whether in procedural or functional programming, it's the same problem from this perspective.

Vendor due diligence? Some companies have their own questions; there are also agreed standards for these that some companies opt into. I am not sure why a big company should risk their bottom line on something unproven or that isn't ready for prime time. It's like getting an inspection when you buy a house. In the same way, your org can improve and make improvements. This is no different than adding in features some customer wants in order to win your business.

I don't understand the ultimate point, people who build functional apps shouldn't have to care about security? It's just another non-functional requirement that helps you win a broad audience. It's the same argument that says government should regulate this or that, that financial advisers shouldn't need to act as fiduciaries. It's the cost of doing business.

Maybe the OP has some weird experiences where auditors jumped on functional programming as an issue to justify not doing more work or make their lives easier, but I don't think this is something that is a commonly held belief across audit and security (if people even know what functional programming is).


Boy, this sucks. It’s absolutely correct and you can see it coming, though it didn’t occur to me before reading. Oof.



