
The Infosec Apocalypse - chillax
https://blog.rickasaurus.com/2020/08/31/The-Infosec-Apocalypse.html
======
Mountain_Skies
I recently did an extensive competitive analysis of SAST tools for a client.
Anyone who is thinking about buying one of these tools should pay attention to
what versions of each language they support. Also try to get old release notes
in order to determine when they first supported a particular language version.

Many vendors take a good long time to support new versions of languages,
even mainstream ones like Java and the .NET family. None of them are
particularly helpful in getting this information to you. They have their
marketing checklists, and anything deeper than that can be hard to come by
from the salespeople. A few were scared of letting us have this information
at all once they knew it was for a competitive analysis. That's a sign that
they take a long time to support new language versions.

In many companies, adoption of new language versions happens organically at
the developer level, often within days of the new version being released.
Even if the system admins try to pump the brakes a bit on deploying the new
version to production servers, the pressure is there for it to happen. SAST
vendors typically are not going to be able to keep pace, which will make
your developers unhappy or even give them an excuse for not using the
expensive tool you purchased.

~~~
zxcmx
Not just languages: generally, if the scanner isn't aware of the specific
_framework_ you're using, it's unable to find entry points or reliably
figure out sources/sinks and filters.

Many modern frameworks have one or more of dynamic configuration,
compile-time annotations, reflection, IoC, etc., which make it very hard for
a "first principles" scanner to make sense of what can actually happen at
runtime.

~~~
pc86
What exactly is a "first principles" scanner?

~~~
zxcmx
Sorry, that was perhaps poorly worded. By "first principles" I meant a
static analyser working only from an understanding of the language (say,
Java) and the application source code, looking over the AST/CFG etc. using
only built-in language rules.

You get much better (read: maybe useful) results if you happen to have a
"rule pack" for the specific framework you're using, which provides specific
hints on sources and sinks, "gotchas", and how things are wired together. As
a somewhat obsolete example, I would not expect a SAST tool working only
from "first principles" to be able to do anything useful with a Spring XML
configuration file.

On the whole my experience is that these things work _very_ well on certain
types of codebases - e.g. on naive PHP they can "go to town" because of the
huge footgun surface and the fairly direct control and data flow.

On stuff with lots of "magical" framework features and indirection (where
even a human reviewer can often have trouble finding the implementation
behind the interface being invoked), they often silently fail to do anything
useful.
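
For contrast, the "direct control and data flow" case is the kind of thing
they nail. A hand-rolled Java sketch (class names invented, classic servlet
plus JDBC style):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import javax.servlet.http.HttpServletRequest;

    class UserLookup {
        // The source (request parameter) flows straight into the sink
        // (string-built SQL) with no indirection, so even a language-only
        // scanner can flag the injection.
        ResultSet find(HttpServletRequest req, Connection db) throws Exception {
            String id = req.getParameter("id");               // tainted source
            String sql = "SELECT * FROM users WHERE id = '" + id + "'";
            Statement st = db.createStatement();
            return st.executeQuery(sql);                      // injection sink
        }
    }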

~~~
bostik
> _static analyser working using only an understanding of the language [...]
> looking over the ast_

The part I quoted is called "tainting". Or if you are more academically
inclined, "data flow analysis".

When it works right, it is an incredible force multiplier in security. You get
detailed, actionable and above all helpful error messages directly from CI,
because as a static analysis tool it's pretty fast and can be made part of the
common linting pass. As you hinted, it does require a suitable config setup
and/or code annotations to mark sources and sinks. And when it does work, it
can eliminate an entire class of vulnerabilities - good taint analysis will
prevent you from even accidentally using user-supplied data in anything that
involves relaying, storing or displaying information.

The downside is, when it doesn't work, it's a source of unhappiness. Debugging
a taint failure because the AST analysis gives a false positive can be
infuriating.
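
For anyone who hasn't seen it, "marking sources and sinks" often looks
something like this hypothetical Java sketch (the annotation names and the
checker that consumes them are assumptions on my part; real-world
equivalents exist, e.g. the Checker Framework's Tainting Checker):

    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;

    // Markers a taint / data flow analyser would be configured to understand.
    @Retention(RetentionPolicy.CLASS) @interface TaintSource {}
    @Retention(RetentionPolicy.CLASS) @interface TaintSink {}

    class ProfilePage {
        @TaintSource
        String readDisplayName() { return "..."; }  // e.g. straight off the request

        @TaintSink
        void renderHtml(String fragment) { /* relays/stores/displays data */ }

        void update() {
            String name = readDisplayName();
            // A taint-aware pass flags this flow: source -> sink with no
            // sanitisation step in between.
            renderHtml("<b>" + name + "</b>");
        }
    }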

------
staticassertion
I wish we could just all reject SOC2, it's such a grift. Bankers come in to
read some docs (that they don't understand), look at screenshots (that they
don't understand), take up 100s of thousands in cash and time, and then write
a document (that they don't understand) that no one will read or care about
(except to fuel their own SOC2).

The harm is so significant. Tons of these due diligence terms are driven not
by any security engineer, but by a legal or compliance team - I've even had
members of the security team outright apologize for having to send it over.

~~~
Kalium
Being a security person myself, I've found plenty of value in reading SOC2
reports. They're one of the few relatively standardized ways to address the
laundry list of vendor due diligence questions.

Is it a good experience for small companies? No. Does it make it easy for your
vendors to use cool new technologies? No.

Do I, someone advising on whether or not we should buy something, care about
either of those in the moment? Also no. And I spend a rather distressingly
large amount of my time trying to talk engineers out of using cool new
technology for novelty's sake anyway.

You're absolutely right. SOC2 can, and I assume often does, go quite badly
awry and waste literally everyone's time and money. I just know that I've
found some value in it. And it helps provide a sound basis for making the
vendor agree to assume liability for when they screw up due to grifting.

~~~
tensor
I think the fact that you don't care about challenges for small companies
and new technology is a huge problem. Stopping new entrants into a space is
pretty much anti-competitive. Preventing new technologies is
anti-innovation.

Competition and innovation are the most important things to a healthy economy
and market. So essentially, SOC2 both stifles innovation and ruins economies.

~~~
Kalium
You're absolutely right. New entrants, new technologies, and new people in a
space are critical for the health of a market.

I just think it's possible that when procuring a tool for a given purpose, a
company's chief concern might be about the safety of the tool and vendor
rather than the health of the overall market. Your experience may well differ!

Also, I feel the need to clarify my remarks. I, someone advising on whether
or not my employer should buy something, do not prioritize the health of the
market or cool new technology or a good experience for the vendor over the
safety of the tool and the vendor _when advising on the purchasing
decision_. In other circumstances, I can and sometimes do make different
decisions. I hope this removes any misunderstanding that I may have
engendered by failing to write clearly.

~~~
tensor
I don't think it's on people in your position to solve this issue, but I do
think it's important for people in your role to understand it and try to help
by accommodating security solutions that are not "pay exorbitant fees to known
security vendors."

For instance, the industry desperately needs nearly free, open security
tools that are also going to be accepted by people in your role. Too often
open source solutions are immediately dismissed by compliance people simply
because they are unfamiliar, or because they don't believe open source can
be as good, or in the worst case because of propaganda against open source
by security tool vendors.

Similarly, we need free starter packages and standard templates for
processes that small companies can use to get SOC2-equivalent practices in
place, without paying hundreds of thousands a year to expensive auditors.

Maybe there should also be a push on vendors not to use SOC2-related
security features as an enterprise-tier gate. E.g. SAML or SSO is often only
available on the "you can't afford it" enterprise tier.

There is a lot we can do to fix these problems, but we need people to care,
including people in your role.

~~~
Kalium
I am not someone who does not care. Please accept my deepest apologies for my
repeated failures to communicate clearly. Please do not hesitate to ask for
further clarification if it would be in any way helpful for you.

I fear you have mistaken me for someone who does not care about the health of
the ecosystem. Merely because I am someone who advises on what's best for my
employer's safety and risk management need not always mean this.

I am, for example, perfectly happy to make use of open source tooling. In many
ways, I prefer it. I do not use price tag as a proxy for value. I also do not
use cool new technology or vendor immaturity as a proxy for value. Your idea
about making it easy for small vendors to understand what they need to do to
attain compliance-equivalence is a wonderful one that should be broadly
enacted immediately.

Again, please do not hesitate to ask if there is anything else I can clarify
for you!

------
smu
I've done more than my fair share of vendor due diligences (and audits,
action plans, contract reviews, ...).

To me this is a non-issue, because customers almost always ask for types of
security checks, not for specific tooling (i.e. asking for source code
analysis vs. asking for Veracode). As a rule, compliance/governance folks
will be concerned about the types of security measures you have in place and
not about the specific implementation. Commercial source code analysis tools
have varying support depending on language (as others have mentioned: some
languages are harder than others). A very valid alternative is to use a
linter with security checks (and potentially custom rules). The advantage is
that checking will go much faster, so you can do it more often (every PR
instead of nightly, for example). Many security-conscious companies have
something like this in place.

In general when you're answering security due diligence, it's your job to
convince the customer you're going to keep their data safe. They will ask
about certain things you don't have and it's your job to explain how you're
still solving the underlying problem. Typical example: customers asking for
antivirus on all systems and you using (immutable) docker containers.

By the way, the interesting thing here is not the answers to the questions,
but how you organise your company to quickly and effectively (as in: no
follow-up meetings or, worse, action plans) answer them. My pet peeve here
is "customer-guided security": you start from what you think you need
(baseline) and you add the security measures that take the longest to
explain why you don't have them. That way, you're skating through most of
the due diligences and sales velocity goes up, which will make your bosses
very happy.

~~~
pc86
Just as a counterpoint, about 2/3 of the enterprise contracts I've either
helped fulfill or reviewed have specified the tool (and sometimes a minimum
version, but that was only twice out of the ~25-30 contracts I've seen).
That being said, for the vast majority of those (90%+) the client was very
reasonable, and if we had a good reason to remove a specific reference to
Veracode, for example, they would probably be fine with it. But I could
definitely see it becoming an issue if you just sign the contract to close
the deal and try to get out of using Veracode later, especially with
whatever the client's internal approvals/review process is.

~~~
masonhensley
My experience - primarily in healthcare data as a vendor... Employers &
Insurance.

Client security teams have been very reasonable about deviations from their
massive spreadsheet checklists.

I think that if you, as a vendor, reply back with a few "well, we do X
instead of Y in the same spirit" answers, they will probably believe & trust
you more than a spreadsheet returned in 2 hours with "yes/in compliance" for
each question.

------
bawolff
Meh, if you want your niche tool to break into big enterprise, you have to
deal with compliance hoops. That's just the cost of doing business with big
enterprise, certainly not a new thing.

If anything, in the long term that's probably a benefit to fancy functional
programming languages with complex type systems - they provide much more
info for static analysis tools to work with (static analysis is pretty
highly related to type theory).
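
You can fake a rough version of that even in Java by pushing the property
into the types. A made-up sketch (class names invented) of the kind of
information a richer type system hands to any static tool, including the
compiler itself:

    // The only way to obtain a SafeHtml is to go through escaping, so
    // "was this value sanitised?" becomes a type question the compiler
    // (or any static analysis) can answer locally.
    final class SafeHtml {
        private final String value;
        private SafeHtml(String value) { this.value = value; }

        static SafeHtml escape(String untrusted) {
            return new SafeHtml(untrusted.replace("&", "&amp;")
                                         .replace("<", "&lt;")
                                         .replace(">", "&gt;"));
        }
        String value() { return value; }
    }

    class Page {
        void write(SafeHtml fragment) { /* sink accepts only the safe type */ }

        void render(String userInput) {
            // write(userInput);               // does not compile: wrong type
            write(SafeHtml.escape(userInput)); // the safe path is the only path
        }
    }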

~~~
thinkharderdev
And this is probably a really good thing. We should want bleeding edge tech
and languages to get battle hardened in smaller stakes products and open
source projects before we try and use them at scale.

------
pubkraal
Static analysis tooling, but also software component analysis tooling, is
really incredibly helpful though, and should really contribute to releasing
stable products as well -- it's not just there to satisfy your customers'
management types, it's there to actually make sure your tool doesn't have 5
RCEs active at any point in time.

I for one am happy companies ask about this type of stuff. It's basic
hygiene to keep control over your product's security, really, and the
tooling makes it a lot easier.

~~~
pixl97
As someone who works for a SAST vendor, I will say it's mixed. Through the
choices we make in which languages and dialects we support, we can affect
the ecosystem.

And at the same time, I have seen some terrible things picked up in code the
first time it is scanned - things that in theory should have been obvious
but were missed for whatever reason.

It gets even worse when you're looking at included libraries.

Also, if you're using these tools, put in requests for new features and
languages. This is how we know what customers want and where to focus
resources.

------
indymike
I just spent some time with someone trying to recruit me back to writing
medical software. The entire interview was dominated by HIPAA-related
questions, which were mostly the interviewer justifying why the software
sucked. And the software in question sucked in every way it could: bad UX,
terrible limits on integration, data could not be exported without copy
pasta magic, etc. Some of these issues really are dangerous, because
clinicians have to spend so much time fighting to make things work or doing
immense amounts of double and triple data entry. Oh, and every time the
justification was because of HIPAA requirements or infosec.

It made me realize that we do not know how to make software well enough to
regulate it safely, and no, I do not want to go work in a sector where
priority one is complying with some privacy regulation when the top priority
should be accuracy of diagnosis, reliability of the system, or eliminating
operator error.

~~~
maxerickson
What's wrong with "and"?

Obviously you can't literally have 4 top priorities, but patient privacy isn't
some dumb irrelevancy.

~~~
indymike
When you are making software that is used to make life and death decisions,
privacy should not be the top priority.

~~~
maxerickson
The idea that there is a single "top priority" is the problem. There's
inevitably going to be a list of primary criteria, with the specific situation
driving how they are balanced.

~~~
indymike
Man who hunts two rabbits starves.

~~~
maxerickson
Chases maybe.

Putting a snare where two rabbits are active is just as good or better than
putting it where one rabbit is active.

------
finnthehuman
>There are new forces at play which will calcify current software stacks and
make it extremely hard for existing or new entrants to see similar success
without a massive coordinated push backed by big enterprise companies [...]
enterprises no longer trust their developers and SREs to take care of
security, and so protocols are being implemented top down.

The security community got exactly what they asked for.

Security people were selling fear of insecurity while offering limited
actionable advice for building security into products/systems bottom-up, so
the business has to solve it with process. Breaking into computers is fun
and all, but throw around words like "risk analysis" to sound like hot shit
for too long and you end up with a comprehensive risk analysis process that
spans beyond the bits of tech you want to play with.

I work in a highly-regulated domain so software security is just another type
of risk analysis we do. So _shrug_ _whatevs_ this doesn't calcify us more than
we are already calcified. I just think it's cute that infosec people thought
they were hackers, but didn't realize they're another flavor of boring
business analyst telling the kids to turn down their music and develop
software to their requirements.

------
ithkuil
> In the wake of so many data leaks and hacking events enterprises no longer
> trust their developers and SREs to take care of security, and so protocols
> are being implemented top down.

As if enterprises were not responsible for failing to properly budget for
security concerns in their engineering teams. I guess it's easier to just
buy a tool that will force a process overhaul, rather than doing a much more
thorough process overhaul in the first place. The problem is that tools like
vulnerability scanners address only one part of the problem; admittedly, it
is the low-hanging fruit.

~~~
jcims
This isn’t worded correctly based on my experience. It should say
‘enterprises no longer trust their developers and SREs _alone_ to take care
of security’.

Devs and SREs still have a very important role, as SAST and DAST tools only
catch a portion of security issues in code and are generally useless for
gauging architectural/deployment/runtime issues.

~~~
ithkuil
I agree; my response was to the wording in TFA, which seemed to imply that
enterprises expected engineers to handle security, were surprised when they
didn't, and hence are now attempting to force a top-down approach so that
the whole thing will be secure _despite_ engineers doing the same things as
before.

Tooling helps; no question about that. But you can't assume your engineering
team can be oblivious to security concerns. You need to train, hire, and
equip your teams appropriately. You need to set the right incentives. Just
limiting the kinds of software stacks you're allowed to use seems
shortsighted.

~~~
jcims
Totally agree. One option is to make static scans a tollgate only at higher
application risk levels and/or exposure. For example, low/medium risk internal
automation and telemetry tooling? Go nuts. Internet facing, regulated,
high/critical risk? Need code scans.

~~~
Kalium
That sort of division assumes you can be completely sure about how a system
will be used in the future. I've definitely seen internal systems deployed in
a public-facing capacity because someone found a useful reason to do it.

What are you going to do then, if the internal tool is written in something
that is not supported by SAST tooling?

------
raesene9
I've got to say, I don't agree with this. The premise of the argument
appears to be that the team managing the VA/SAST platform will be able to
block adoption of new technologies if their tooling doesn't support them.

I don't think I've ever seen a company where the security tooling team had
that kind of authority or pull. I've seen plenty where the first time the
security team hears about a new technology is after product development has
started.

You only have to look at the rise of containerization in the enterprise to
see this in action. When it started, tooling was way behind, and it's only
catching up now, but that didn't seem to stop anyone.

~~~
user5994461
Had that at JP Morgan. New fancy things had better work with all the tooling
in place, otherwise you can forget about using them (code review, unit
tests, linter, deployment, packaging, vulnerability scanning, etc...).

Honestly, this was great; it prevented developers and new graduates from
rewriting every goddamn thing in the language du jour.

I hope I never have to work in an organization with 100+ developers that
doesn't enforce some standards. It's impossible to join a project and
collaborate with other teams when every single developer/project has decided
to use its own language and build its own deployment system.

~~~
sheeshkebab
The quality of consumer facing major banking websites shows how well this
calcified development “culture” works in practice and long term.

~~~
jmnicolas
I'd rather my bank not lose my money than have a cool website though.

------
nautilus12
Yeah, I'm confused by this article. Why would this push functional
programming into a small niche? Is it just because the scanners are only
written for a small range of languages like C# or JavaScript? Seems like
functional programming makes for BETTER security scanning all around. If
anything, I actually see this potentially giving FP a boost, unless the
scanners are just surface level and are adopted as a matter of faith - all
good and well until one of them gets cracked. Dynamic languages compared to
compiled ones are like Swiss cheese.

~~~
marcosdumay
> Is it just because the scanners are only written for a small range of
> languages like C# or JavaScript?

Yes.

> Seems like functional programming makes for BETTER security scanning all
> around.

Yes.

> unless the scanners are just surface level and are adopted as a matter of
> faith

Kinda. They are very useful, as they catch all that stuff that should have
been designed out of the language/framework to start with. They are not
something to get cracked, but they also won't discover any deep issues.

~~~
nautilus12
I feel like the underlying issue is that the actual quality of the scanner
doesn't matter. It's more of a liability checkbox or culpability-shifting
device.

------
novok
Whee, more point-missing compliance calcification to slow down your large
competitors with.

------
jerome-jh
Static analysis is coming to FOSS as well. There is no mystery tech behind
it. Compilers already do quite a bit of analysis: see Rust. Of course FOSS
tools will probably have rougher edges, but the entire field will
commoditize in the coming years. There are already a number of competing
tools, as mentioned by the article.

The usual functional vs. imperative argument is not relevant here: the LLVM
intermediate representation is essentially functional (SSA form).

------
gbrindisi
The inverse point of view is that security, like everything else, is easier
for stacks that are commoditized.

------
peterwwillis
Well first of all, a smaller number of tools is a good thing. Most software
tools suck ass. By focusing more on a few of them, hopefully their quality
will increase (depending on who is making them and what their incentives are).

Second, security scanning is just part of an overall strategy for increased
software quality, which helps the product made out of the software, which is
the _entire point of writing software_. Who cares if your stack calcifies if
the user has a better experience because your app crashes less, needs to be
emergency-patched less often, and doesn't leak personal data like a fire hose?

I am an SRE, and not a little bit of a security nerd, and I wouldn't trust
_myself_ with getting security right.

------
RajSinghLA
Apocalypse is a strong word for a post that ends with no real prediction.

~~~
saagarjha
It seems to me that the author is concerned that a desire for vulnerability
scanning will prevent new languages from entering use because these tools
won’t support them.

~~~
bawolff
"Apocolypse" is still a pretty strong word for failing to break into a market
because your product doesn't meet the user's requirements.

~~~
Rickasaurus
Hi, author here. I'm just as worried about "established FP but globally
niche" tech like Haskell or F# as I am about new tech. I've surveyed the
options and there's nothing available.

~~~
Kalium
I worked in a Haskell shop, and later in a Clojure shop. In both cases,
engineers were happy-go-lucky about every security or privacy concern under
the sun. Often they seemed to believe that being functional saved them from
having to worry about validating their inputs, checking authorization, or
(absurdly) having access control on their critical data stores. They told me
as much on several occasions, and exhibited no interest in altering these
beliefs.

As you might guess, this drove me nuts. In both cases it eventually blew up in
their faces and the events proved to be free of side effects. Their imperative
colleagues did not have the same mindsets.

If those two shops are in any way representative, then it may perhaps be worth
considering very carefully if keeping cool new technologies away from serious
usage could in some scenarios be a win.

~~~
Rickasaurus
It's funny, I've never seen a dev team that really deeply internalized
security without an embedded expert. It's just too hard, too easy to make a
mistake.

I don't think these tools are the answer though; they make it easy for CISOs
to look good by driving down the numbers, but is it real security?

You really need experts thinking about the security of your app. Ideally
someone who thinks like a hacker.

~~~
Kalium
You're right. You do need that person. IMO, it's a lot easier to accept their
feedback when you understand that using a functional language offers you no
significant security gains by virtue of being functional.

In my limited and less-than-universal experience as a security person thinking
like a hacker, those scanners can and do enable real security enhancements.
Keeping up on your patching is real security, as is having a system that can
point out which inputs you didn't validate and what code paths they're on.
Couple them with someone with the right experience and background, and you
have the basis of a real application security program!

Which is to say that you're absolutely right. Having the right person in the
right place is absolutely critical. I think it may be possible that it might
not always be sufficient.

------
crb002
Blind SAST will give many false positives. It took a local megacorp a year to
reduce noise on their Java build pipeline when adopting it.

------
skee0083
Best way to stop hackers is to just start handing out life sentences. That
will put a stop to it. Or even better, death penalties. Like, seriously,
just get a life. Society doesn't tolerate thieves IRL, so why would we
tolerate them on the internet? Backdoors and exploits exist everywhere. No
computer or building is 100% secure. We know this. So stop acting like you
are doing everyone a favor by exposing them.

~~~
TheMblabla
Best way to stop crime is to have the state execute anyone found guilty,
right? Because the justice system never makes mistakes and is always on our
side!

Tell me, in this utopia, is there a world government carrying out these
executions? Or is it just our great country purging its security experts?

------
k4ch0w
I'm on the side of sending the big list of vendor due diligence questions to
a potential vendor. I know it's a pain to be asked about GDPR, how you
store/delete data, how you offboard employees, whether SSO is everywhere,
whether you have 2FA, and how you access your servers.

We try to keep a high standard of tooling to protect our customers and
company data. It's really not about bogging you down - I know it sucks. It
sucks hard. It's about ensuring that when we upload data to your SaaS, we
know it's in good hands that have been vetted.

The good news is, if you make it through it once, other big companies start
flocking to you as well, and it becomes much easier to deal with them as
you've been through the intensive process before.

I deal with a lot of deals, and if you're building a startup it's a lot
easier to think about security at the start than to retrofit and fix it all
at the end.

------
nuker
I see no links between infosec and functional programming.

~~~
goatinaboat
The link is that FP is usually bottom-up and InfoSec is top-down, and they
fail to meet in the middle.

Big FP shops will have solved this, of course. I doubt StanChart or Jane
Street are losing any sleep over it.

~~~
mikorym
I've heard a lot recently about Jane Street. Do you know a lot about them? I
was curious to know their background in more detail and generally what kind of
company it is, or the general attitude or atmosphere of the place. Also, why
do you mention them specifically in the context of FP?

~~~
goatinaboat
_Also, why do you mention them specifically in the context of FP?_

They are mostly famous in tech circles for the day one of their interns,
Yaron Minsky, said: hey, let's rewrite everything in OCaml. And they did,
and were wildly successful, and he's the CTO now. They bet big on FP and it
happened to be an excellent fit for their problem domain.

------
egsec
I work in AppSec for a Very Large Company. I've worked at large companies
before. These are not new trends. We have programmers who do F# and other
functional programming. I would think the bigger inhibitor to functional
programming is that most of the existing apps are Java or .NET, so unless
you are building a brand new team, you reuse the skills and technology you
already have working for you.

Our devs use plenty of small open source projects. We [Security] like to
recommend software that we are comfortable with, but any determination of
"stacks" we leave to the actual software engineering teams. If something is
pretty bad, not updated, constantly having problems, etc., we might ban
it... but what's your case for using poorly engineered software when
alternatives exist?

Not sure if "mom and pop" is supposed to mean commercial but not OSS? OSS we
can patch and modify if necessary. We can even PR patches back based on what
our "scanners" and manual testing find.

Generally, we don't care about language; most issues are in the
implementation, not the language, and less so in more modern languages where
the creators have heard about security.

The basic type of code scanning needed for PCI and other compliance is a
commodity offering, and it is a manageable cost compared to the marketing
and relationship management costs needed to pursue big clients.

I am not sure the OP understands what a SOC2 report says/does. It talks
about pretty high-level controls and practices. You certify an app/service,
not a stack. If you scan and fix your bugs and have proactive security
training, it doesn't care how you do it. There is no golden stack that will
help you pass a SOC2. You may be able to make your life easier with certain
services/SaaS, but the issues come up in your practices and in the actual
code implemented. If you have bugs, whether in procedural or functional
programming, it's the same problem from this perspective.

Vendor due diligence? Some companies have their own questions; there are
also agreed standards that some companies opt into. I am not sure why a big
company should risk their bottom line on something unproven or that isn't
ready for prime time. It's like getting an inspection when you buy a house.
In the same way, your org can improve and make improvements over time. This
is no different than adding features some customer wants in order to win
their business.

I don't understand the ultimate point - that people who build functional
apps shouldn't have to care about security? It's just another non-functional
requirement that helps you win a broad audience. It's the same kind of
argument as whether government should regulate this or that, or whether
financial advisers should need to act as fiduciaries. It's the cost of doing
business.

Maybe the OP has had some weird experiences where auditors jumped on
functional programming as an issue to justify not doing more work or to make
their lives easier, but I don't think this is a commonly held belief across
audit and security (if people even know what functional programming is).

------
draw_down
Boy, this sucks. It’s absolutely correct and you can see it coming, though it
didn’t occur to me before reading. Oof.

