
Why firewalls won’t matter in a few years - liotier
http://etherealmind.com/why-firewalls-wont-matter-in-a-few-years/
======
nickpsecurity
Firewalls are just some stupid crap industry made up and went with. We've
known since the Orange Book days that security had to be done holistically
involving every endpoint and network. Their standard for security was a strong
TCB on endpoint with trusted path (see EROS or Dresden's Nitpicker); a network
card with onboard security kernel, firewall, and crypto (see GNTP + GEMSOS);
connections between networks through high assurance guards (see Boeing SNS or
BAE's SAGE); proxies + guard software for risky protocols such as email (see
mail guards like SMG or Nexor). All of this collectively working together was
what it took to enforce a fairly simple security policy (MLS). More flexible
attempts happened in capability model with KeyKOS + KeySAFE, E programming
language, CapDesk desktop, and so on.

So, the above was the minimum that NSA et al would consider secure against
adversaries on their level. Every security-critical component was carefully
spec'd, implementation mapped against spec 1-to-1, analyzed for covert
channels, pen-tested, and even generated on-site. Commercial industry, aiming
at max profit and time to market, just shipped stuff with security features
but not assurance. Broke every rule in the field. Came up with firewalls
(knockoff of guards), AV, and so on to counter minor tactics. Of course that
didn't work as it doesn't solve the central security problem: making sure all
states or flows in the system correspond to a security policy.

The best route is to put security in the end-point along with E-like tools for
distributed applications and hardware acceleration of difficult parts. Within
your trust domain, you just check data types and use that for information flow
control (aka security). Outside trust domain, you do input validation and
checks before assigning types. The hardware will be like crash-safe.org or
CHERI processor in that it handles the rest. A security-aware, I/O offload
engine will help too. Fixing the root problem along with a unified model
(capability-based, distributed) will make most security problems go away. At
that point, firewalls will be about keeping out the riff raff and preventing
DOS attacks.
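The "check data types and use that for information flow control" idea can be sketched in miniature. This is a toy model with an invented three-level label lattice, not any real system; the names and levels are purely illustrative:

```python
# Toy sketch: data carries a secrecy label, and the policy allows
# information to flow only from lower to equal-or-higher levels. This is
# the "all flows correspond to a security policy" idea in miniature.

from dataclasses import dataclass

LEVELS = {"public": 0, "secret": 1, "topsecret": 2}

@dataclass(frozen=True)
class Labeled:
    value: object
    label: str  # one of LEVELS

def flow_allowed(src: Labeled, dst_label: str) -> bool:
    """Simple MLS-style rule: no leaking high data to low sinks."""
    return LEVELS[src.label] <= LEVELS[dst_label]

def send(data: Labeled, channel_label: str):
    if not flow_allowed(data, channel_label):
        raise PermissionError(f"flow {data.label} -> {channel_label} violates policy")
    return data.value  # delivered

# A "secret" datum may go to a "topsecret" sink, but not to a "public" one.
```

Real MLS systems enforce this at the TCB level rather than in application code, but the policy check is the same shape.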

~~~
tptacek
If this observation is meaningful, shouldn't it also be the case that firewall
deployments aren't meaningful to enterprise security?

Because: that seems intuitively _not_ to be the case.

To wit: on an annual site-wide pentest of any major enterprise network (this
is a project every security firm does for a couple clients a year), the moment
the pentester gets "behind the firewall" (ie: code execution on any
application server) is invariably game-over.

If firewalls were just some stupid crap the industry made up, shouldn't they
make no difference at all? Shouldn't attackers just make a beeline for
wherever the high-value information is, rather than scanning the perimeter and
looking for some chink to use to get behind the firewall?

My argument would be: whether firewalls are "stupid crap" or not, they
certainly do seem to matter right now.

~~~
joshrivers
'Game over': I think this is exactly the problem. In all the organizations
I've been in, firewalls have been an excuse for negligence. 'We don't need to
think about security because we are behind the firewall.'

Right now the compliance world is addicted to firewalls, to the detriment of
reasonable appsec. In my fantasy world, I'd like the auditors to be telling
companies 'in 5 years, you won't be allowed to firewall your business network,
and if you aren't secure without the crutches, then no certification for you.'
That would light a fire under management to care about software quality all
over the place.

~~~
sliverstorm
You're probably right that firewalls allow negligence elsewhere.

But if they can't secure their one firewall, what makes you think they can
secure their complex network of interdependent services running across many
subdomains on a whole roomful of machines?

"Simple" is a key step to effective security, and I think the reason we've
latched on to firewalls is they are often the simplest, most contained, and
most standard way to reduce the attack surface of your network.

~~~
joshrivers
I think in many cases you will be right and 'they' won't be able to secure it.
This will force them to contract those applications out to someone who can.
There are plenty of SaaS providers able to secure a network. Just because my
incompetent IT guy can't properly harden a mail server doesn't mean we can't
hire Rackspace or Microsoft or someone else who can. Let's incentivize
competence, not hide incompetence.

~~~
kibibu
Not all services can be "hardened", due to software quality. Not everything is
written as tightly as Qmail.

------
snowwrestler
> In the questions at the end, he points out the bug bounties are a PR
> Problem. When you pay a bug bounty and fix, the researcher needs to shutup
> instead of going public about the vulnerability. Of course, the researcher
> needs the publicity to build a business & credibility. So bug bounties are
> likely to die.

Because security researchers need to build their business, they will find
vulnerabilities and disclose them, no matter what. The biggest splashes in the
past year were Heartbleed and Shellshock. Correct me if I'm wrong, but neither
were driven by bug bounties.

Bug bounties are a PR problem, but they are a _smaller_ PR problem than a
zero-day disclosure that results in massive exploits. The point is to get the
company slightly ahead of the PR curve, not to kill disclosure (which would be
impossible).

~~~
jerf
Bug bounty programs do not in general involve "shutting up the researcher".
See the HackerOne disclosure page, for instance:
[https://hackerone.com/disclosure-guidelines](https://hackerone.com/disclosure-guidelines)

The "Disclosure Process" doesn't explicitly spell it out, because I think it's
just the mental baseline assumption all the authors were operating under, but
everything ends up disclosed in the end. It's just a matter of timing.

Perhaps sometimes things are hidden and never disclosed, but it is at least
not the general policy.

(Disclaimer: I work for a company that is a bugcrowd customer; I chose
HackerOne's policies as my point to avoid any entanglement. I'm not aware of
anything we've ever permanently hidden, either.)

------
kator
Years ago at a large car manufacturer I had an argument with the "Data
Security" team about firewall settings. They had some crazy, dumb ideas about
what had to be on the firewall, and it was constantly causing us pain. I went
to visit the person in charge to argue my case. He was in another building,
outside of the "Secure Datacenter". He argued with me for about 45 minutes
about how nothing leaves that data center and the firewall is our last line of
defense. I pulled a 5GB 8mm tape out of my pocket, dropped it on his desk, and
said, "That's a copy of every single customer in our database and our entire
parts catalog with all order history. So much for your firewall."

The next day we had more intelligent discussions about firewall settings, and
permissions on the mainframe for tape backups...

Additionally, my favorite trick to this day when visiting a company is just
plugging a laptop into various random ethernet ports. I was recently at a
place with a "Guest Wifi" that changes passwords every week; the password is
emailed to everyone. Sitting in a random conference room, I plugged in and had
100% access to their corporate network. In today's world of IoT, relying only
on a firewall for security is basically gross negligence.

To think a firewall is much protection at all is to stick your head in the
sand and pretend everything is ok.

~~~
nickpsecurity
Great stuff through and through. It's why I use end-to-end security that
doesn't trust the network wherever possible. Let them screw with my Ethernet
ports: the NIDS just tells me there's a problem and where to find it. Or they
walk away with a lot of data that might be useful for... Monte Carlo
simulations or studies in random numbers? Haha.

------
danellis
"You can’t use firewalls to secure East/West data flows in the network."

What does that mean?

~~~
Spooky23
Think of a blade chassis in a datacenter.

If blade1 needs to talk to blade2, running it through a firewall means that
the communications needs to flow out of the blade back to the datacenter
network (ie. flowing north to the top of the rack switch). That adds latency
and requires more network and firewall capacity, as all traffic needs to leave
the chassis.

If there is no firewall requirement, traffic flows east/west within the
chassis on the blade backplane. Security can be layered with host firewall or
similar technology. (ie. IPSec, proprietary solutions like Unisys Stealth)

~~~
rsync
"If blade1 needs to talk to blade2, running it through a firewall means that
the communications needs to flow out of the blade back to the datacenter
network (ie. flowing north to the top of the rack switch). That adds latency
and requires more network and firewall capacity, as all traffic needs to leave
the chassis."

For years (15 ?) I have been putting very simple, very small ipfw rulesets in
place on non-firewall systems that allow only the traffic I believe that
system should be sending/receiving.

It's a firewall. It's on the host itself. It is a firewall that is securing
"east/west traffic". It's a simple model that any host can implement and has
very low (typically zero) cost.
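The shape of such a host-local, default-deny ruleset can be modeled in a few lines. This is illustrative Python standing in for an ipfw config, with made-up services (a hypothetical ssh + https host), not actual ipfw syntax:

```python
# Toy model of a host-local, default-deny allowlist in the spirit of a
# small ipfw ruleset: only traffic the host is expected to receive is
# allowed; everything else is dropped.

from typing import NamedTuple

class Packet(NamedTuple):
    proto: str      # "tcp" / "udp"
    src_ip: str
    dst_ip: str
    dst_port: int

# Allowlist for a hypothetical web + ssh host.
ALLOW = {
    ("tcp", 22),    # ssh
    ("tcp", 443),   # https
}

def filter_packet(pkt: Packet) -> str:
    if (pkt.proto, pkt.dst_port) in ALLOW:
        return "allow"
    # Default deny, like a closing "deny ip from any to any" rule.
    return "deny"
```

The point being that the policy fits in a handful of rules per host, which is why the per-host cost is near zero.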

Related:

This is the first, and last, time I will ever use the term "east/west
traffic". Christ.

~~~
Swannie
Indeed. But the view of the NetSec team is that your server is not trusted to
secure itself.

If every service in your ecosystem implemented ipfw rules (or equivalent),
that's great. But if your box got popped, can I be sure it won't be used as an
attack vector against other machines? As the attacker, I will turn off the
ipfw ruleset locally and start connecting out to other systems. If there were
a firewall sitting between me and those systems, that traffic would hit rules
that should never be hit, and the NetSec team would get some alerts.

Now I believe, like most sane people, that if you've popped an appserver, it's
already likely to be game over, and this is a moot point.

For most applications, the app server doesn't live in its own little DMZ. It
usually has privileged access to the DB, and often shares an authentication
domain with other services that is not properly secured (e.g. your
[backup|log|monitoring|deployment] server connects to every machine with a
service account that isn't SSH-protected, and now I have the service account
for all machines).

You wouldn't be foolish enough to have mixed admin functions (content
management?), and user functions on the same app server... right? Right? Oh...
wait... almost everyone does that.

Etc.

------
kstenerud
Note that there is a difference between isolating devices and firewalling in
the sense of packet inspection. You're still going to want selective routing
and packet forwarding (like port forwarding).

Firewalls will continue to be useful for complex devices that connect directly
to the internet (like laptops on public wifi), where all sorts of things you
wouldn't want others accessing are exposed by default.
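The "selective routing and packet forwarding" idea can be sketched as a NAT-style port-forward table. The addresses and ports here are invented for illustration:

```python
# Toy sketch of selective forwarding at a network boundary: only
# explicitly forwarded ports reach internal devices; everything else is
# dropped. This isolates devices without deep packet inspection.

FORWARDS = {
    # external port -> (internal host, internal port)
    8080: ("192.168.1.50", 80),   # e.g. one device's web UI
    2222: ("192.168.1.10", 22),   # ssh to one chosen machine
}

def route_inbound(ext_port: int):
    """Return the internal destination, or None to drop the packet."""
    return FORWARDS.get(ext_port)
```

Everything not in the table simply has no route inward, which is the isolation property, independent of any packet inspection.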

~~~
snuxoll
What most consumers and sysadmins think of as "firewalls" and what the
presentation is talking about are two different things. Simple packet filters
like "don't allow communication on port 123 unless it's from IP a.b.c.d" will
always be part of a defense-in-depth strategy, but stateful packet inspection
tools from big-name firewall vendors do not scale as the number of cycles they
have to inspect a packet keeps shrinking, especially when they have fewer
cycles even for the basic I/O to get the packet through to the destination.

High performance networking means networking hardware has to get the packets
moved faster, so there's less time to do processing on them.

------
stcredzero
_Passwords are unsafe_

Passwords are unsafe for the same reason that roads are unsafe: human beings.
Things work well enough for most people, most of the time. However, during
certain situations, most people aren't trained correctly and often do the
wrong thing. What's more, there's even an accepted culture of doing the wrong
thing.

~~~
TheLoneWolfling
I'm thinking more and more that the best way to do passwords is to not - you
generate a random diceware passphrase (or similar) and give it to the user via
a secure channel, run it through the KDF, and throw the original away.
Preferably on an entirely separate server from everything else.

It still doesn't prevent users from being stupid w.r.t. writing down
passwords, but it at least presents users with reasonably secure logins that
are relatively easy to remember.
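The scheme described above can be sketched as follows. The tiny word list and PBKDF2 parameters are illustrative stand-ins; a real deployment would use a full diceware list (7776 words) and likely a memory-hard KDF:

```python
# Sketch: generate a random passphrase for the user, derive and store
# only the KDF output plus a salt, and discard the passphrase server-side.

import hashlib
import os
import secrets

WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet",
         "cobalt", "meadow"]  # stand-in; a real diceware list has 7776 words

def generate_passphrase(n_words: int = 5) -> str:
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

def derive_record(passphrase: str) -> tuple:
    """Return (salt, kdf_output); only these are stored."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    return salt, key

def verify(passphrase: str, salt: bytes, stored: bytes) -> bool:
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    return secrets.compare_digest(key, stored)
```

The passphrase is handed to the user once over a secure channel; after that, the server holds nothing that can be replayed as a credential by itself.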

~~~
jimktrains2
Best way is to not send credentials in plain text. I wish SRP had taken off
and become standard.

~~~
TheLoneWolfling
This doesn't send credentials in plain text.

~~~
jimktrains2
What is being sent to the server then?

And by plain-text, I mean the server receives information that could then be
used to authenticate later.

For instance, if you send the SHA of a password, and then store the SHA of the
SHA, you're still effectively sending the password in plaintext; it's just not
the password the user entered.
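This point can be demonstrated concretely. A sketch, with SHA-256 standing in for "the sha":

```python
# Hashing client-side and storing the hash of the hash does not help:
# whatever the client transmits is itself a reusable credential
# ("pass the hash").

import hashlib

def sha(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

password = b"hunter2"
transmitted = sha(password)   # what the client sends over the wire
stored = sha(transmitted)     # what the server keeps

def server_accepts(received: bytes) -> bool:
    return sha(received) == stored

# An eavesdropper who captures `transmitted` never needs the password.
```

The transmitted value authenticates on its own, so for an attacker on the wire it is the password.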

~~~
TheLoneWolfling
...Which is why I said "over a secure connection".

This method is no less secure than the standard "client sends server password
over HTTPS" scheme.

~~~
josai
> ...Which is why I said "over a secure connection".

... and how do you set up a secure connection without a pre-existing password?

Your solution has a chicken-and-egg problem.

~~~
jimktrains2
SRP (e.g. TLS-SRP) doesn't require the server to have the plaintext password.

[0]
[http://en.wikipedia.org/wiki/Secure_Remote_Password_protocol](http://en.wikipedia.org/wiki/Secure_Remote_Password_protocol)
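A toy numeric walkthrough of the SRP idea, with a deliberately tiny and insecure group so the arithmetic is visible (real SRP, per RFC 5054, uses large safe-prime groups and more careful hashing; the constants here are illustrative only): the server stores only a verifier, never the password, yet both sides derive the same shared secret.

```python
# Toy SRP-6a-style exchange. INSECURE demo parameters.

import hashlib
import secrets

N = 2267  # tiny demo prime; real SRP uses 1024-bit+ safe primes
g = 2
k = 3     # multiplier (a hash of N and g in the real protocol)

def H(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"|".join(parts)).digest(), "big")

# --- Registration: server stores (salt, v) only, never the password ---
salt = secrets.token_bytes(8)
password = b"pencil"
x = H(salt, password) % N
v = pow(g, x, N)

# --- One authentication run ---
a = secrets.randbelow(N - 2) + 1          # client ephemeral
A = pow(g, a, N)
b = secrets.randbelow(N - 2) + 1          # server ephemeral
B = (k * v + pow(g, b, N)) % N
u = H(str(A).encode(), str(B).encode())

# Client knows the password; server knows only v. Both reach the same S.
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow(A * pow(v, u, N), b, N)
```

Both expressions reduce to g^(b·(a + u·x)) mod N, so the shared secret matches without the password (or any password-equivalent) ever crossing the wire or sitting on the server.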

~~~
josai
The guy I was replying to was arguing against SRP and proposing his own ad-hoc
solution.

------
drcross
I also wonder how the move to IPv6 will affect the current paradigm.
Internet-facing firewalls were typically also NAT machines to save IPv4
address space, but all of that is gone in IPv6, meaning your global address is
now exposed and a hacker can persistently try to compromise your machine if
you don't firewall.

~~~
netman21
On top of that, many modern defenses are based on IP reputation, or black
lists. There are several companies that track the reputation of all 4 billion
IPv4 addresses, with scores updated every 5 minutes. With the vastly larger
IPv6 address space (2^128 addresses), this will be a lot harder to do.

------
KaiserPro
I'd like to counter: IoT will probably change this view (however, the points
raised are still valid).

IoT devices generally have utterly terrible security, and you'll not want them
publicly exposed. I can envisage a place for a house-wide firewall of some
sort, to stop publicly addressable devices being knocked offline or exploited
by persons unknown.

So there will be a need for a "virtual front door", something home routers
_should_ really provide but fail at utterly in most cases.

~~~
brentjanderson
I think the article is referring more to enterprise installations for
firewalls - I don't think we're worried about 100G internet to domestic
endpoints any time soon. Domestic use will still make sense, likely for years
to come. In data centers? Not so much.

------
nerdy
The presentation's audio quality is really unfortunate; I'm interested in the
subject, but it sounds like I'm listening from another room.

~~~
EtherealMind
Maybe you should pay to go to conferences?

~~~
nerdy
And maybe you could sponsor my travels.

I was simply pointing out that the audio could be better in case it would be a
deal-breaker for others, or in case another source was available. Sometimes
pointing out a problem is the first step to finding a solution!

P.S. The content is incredible and the presentation was great; I was just
trying to keep the audio from detracting from the overall excellent quality.

------
sargun
I agree with this heavily.

I outlined some of these problems in my "Critique of Modern Network Design"
post: [https://medium.com/@sargun/a-critique-of-network-design-ff8543140667](https://medium.com/@sargun/a-critique-of-network-design-ff8543140667)

------
bortels
Awesome Stamos talk (as per usual), but the headline here is a tad clickbaity.
Perhaps more accurate to the talk is that they will matter less and less as
time goes by. If you don't like that headline - do go watch the talk, there's
a lot more subtlety than 8 words convey, and Alex is a fun speaker, with one
of the highest signal-to-noise ratios around.

Heck - if you agree with the headline, still go watch the talk. If you care
the slightest bit about security, you won't be sorry.

Firewalls are not a 100% solution, nor have they ever been. Defense done
correctly is always defense in depth, and hardware firewalls are always likely
to be _part_ of that solution.

Alex's point in the video (and one well made, I think) is that as the
landscape evolves, the value-add of hardware firewalls becomes less and less,
because assumptions about the environment they sit in are changing. Anyone
depending only on firewalls (I have called this the "hard candy shell" in the
past) was vulnerable before, and as time passes they are becoming increasingly
vulnerable, because the things a firewall can be useful for are becoming less
relevant, due to architectural changes and exploits moving up the stack toward
the app.

I've said for a long, long time - I don't care how good your perimeter
defenses are, you gotta harden the hosts. And in the end, this also is moving
up the stacks. Your hypervisor may be secure as all-get-out, but if your app
is open to trivial exploits, you're still screwed. You need to do a reasonable
amount of security at all levels, including bits like user evangelism
(disallowing of insecure passwords, perhaps promotion of MFA) if you want to
have an expectation of security founded in reality.

The human element (users and passwords) cannot be ignored, because a chain is
only as strong as its weakest link, and if you do all of YOUR shit right,
that's gonna be the end-user. Someone who can figure out how to replace
passwords with a mechanism that ties access and authentication to a single
human being in a non-trivially-spoofable and inexpensive manner could become
very rich...

------
madsushi
"NSX is getting strong traction"

With fewer than 500 paying customers, I don't see how you can describe NSX as
having "strong traction".

~~~
EtherealMind
Three things:

1. The number is more than 800.

2. NSX is being deployed primarily as a security tool for micro-segmentation.
It is displacing firewalls in the data centre in a substantial way.

3. Change in the data centre is slow. Infrastructure is commonly built on
10-15 year cycles, so actual purchases are a lagging indicator.

------
rcthompson
My takeaway from this post, as someone who's not a security professional:
"Security is hard, swearing is normal."

------
bnewbold
I'm not sure I agree with the argument that faster line rates create a speed
limit for firewalls. It seems like firewall hardware could parallelize
internally at layer 3, sharding by source/destination IP or port, so all
packets from a single flow will go through the same processing core, no? This
would add a finite latency, but I don't think it would impact throughput.

Am I missing something?
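The sharding idea above can be sketched by hashing the flow's identifying fields so every packet of a flow lands on the same core. The core count and field choices are illustrative:

```python
# Sketch: hash each packet's flow identifiers so all packets of one flow
# map to the same processing core, letting stateful inspection scale
# across cores without sharing flow state between them.

import zlib

N_CORES = 8

def core_for(src_ip: str, dst_ip: str, proto: str, sport: int, dport: int) -> int:
    key = f"{proto}|{src_ip}:{sport}|{dst_ip}:{dport}".encode()
    return zlib.crc32(key) % N_CORES
```

Since the hash is deterministic, per-flow state stays core-local; the open question raised in the replies below is the cost of the buffering and flow-stickiness this requires at line rate.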

~~~
readams
Firewalls today are able to filter at line rate for a single flow on an
interface. If you want to allow 100G by handling 10 10G flows in parallel,
that is completely possible, but not quite the same thing.

~~~
EtherealMind
Delivering this function is very costly: because of stateful inspection, you
must implement flow sticking, which requires buffering, which then impacts
performance

..... and so on and so on.

no, doesn't work.

------
Dmatad
Firewalls fall into a dark category for IT -- cover-your-ass implementations
done without questioning the problem-solution dynamic. For years, the cloud
applications I work with have been slowed or made glitchy due to company
firewall interference. I will not miss them when my users' experience improves
by leaps and bounds.

------
cm2187
One of the bullet points says "DNSSEC is dead". But what is the plan then? It
sounds odd to rely on a completely insecure, unencrypted service for DNS (not
to mention all the new ways in which a secure DNS service could be used, to
distribute public keys for instance).

~~~
iyn
I'm also interested in this. DNSCurve? Something else?

~~~
xorcist
DNSCurve solves none of the problems DNSSEC solves, and vice versa.

The only realistic alternative to the DNSSEC PKI is the global SSL CA PKI,
with authentication happening higher up in the protocol stack. That does not
necessarily mean status quo, though, as the latter has obvious room for
improvement.

------
arca_vorago
You know, I keep hearing this, especially related to IPv6, but the problem to
me isn't that the industry is being lazy; it's that all the guard systems you
reference are archaic black boxes to most IT people. If you want to start
pushing guards to the endpoint of every server and desktop, OK, but show me a
product that makes it easy to do, where I don't have to be a unixbeard from a
defense agency to know how it works...

I don't disagree, but I hear a lot of terminology thrown around by you with
very little substantial practical and technical information. How about a guide
to guards, EAL, etc. for the common sysadmin?

------
oldpond
Thanks for sharing. Very interesting presentation. As soon as he said the
browser is the new OS he lost me, but I understand he's coming from the
Internet Industry. I completely agree that we need to design secure
application architecture though, and that's why I am excited about languages
like Go which facilitate a new client server model that doesn't involve the
browser.

~~~
stephengillie
The browser took over that throne 10 or 15 years ago, with the rise of web
2.0. We make and download way, way more applications that run in web browsers
(aka every web site) than applications that run on Windows, OSX, or any other
OS.

~~~
teacup50
Only by redefining what "application" means.

~~~
stephengillie
Please look up the definition of "application". I don't mean to be pedantic;
it was enlightening to me as well.

------
davidu
Strong agree. Network-based firewalls don't make sense, given performance
needs and their placement at the edge of an increasingly ephemeral network
perimeter.

Host and edge / stub firewalls with strong orchestration will be far more
pervasive, along with lots of network traffic auditing and anomaly detection
that happens in near real-time but out of the line of fire (out of band).

~~~
scurvy
I haven't seen firewalls on the edge in ages. I guess it's more of a Fortune
500 attitude than a tech company thing.

"Firewall" devices still have a place inside your network beyond the
perimeter. Today they do ACL enforcement as well as DPI, IDP, IDS, tap data,
etc. It's not unheard of to run a "firewall" in completely passive,
monitor-only mode to generate telemetry data.

------
api
Firewalls were always an ugly hack that got grandfathered in as a 'best
practice.'

------
pshc
Great speaker! Thanks for all the system design advice for my next webapp. I
want to do it right next time, with lots of containerization.

