
There's a reasonable argument to be made that router-based firewalls shouldn't be necessary for a home user to have a secure configuration.

If it's not safe to expose a service to the internet, it's also not safe to have it exposed within your LAN without access controls.

NAT-based 'firewalls' can have holes punched in them in a variety of ways - they are specifically designed to allow holes to be punched, because that's necessary for many applications to work.
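The core of UDP hole punching can be simulated on localhost: an outbound packet creates the state that lets inbound traffic flow back, just as a NAT's outbound mapping admits the peer's packets. A minimal sketch (the "NAT" here is just the OS assigning a source port; real hole punching also needs a rendezvous server to exchange public addresses):

```python
import socket

# "Peer" with a known address (stands in for the host outside the NAT).
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b.bind(("127.0.0.1", 0))
b_addr = b.getsockname()

# "Client behind the NAT".
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
a.settimeout(2)
b.settimeout(2)

# 1. The client punches: this outbound packet is what creates the
#    mapping a real NAT would use to admit return traffic.
a.sendto(b"punch", b_addr)

# 2. The peer sees whatever source address was exposed and replies
#    to it -- this inbound packet is exactly what the mapping allows.
data, a_public = b.recvfrom(1024)
b.sendto(b"hello back", a_public)

reply, _ = a.recvfrom(1024)
print(reply.decode())  # hello back
```

The point of the sketch: nothing in this exchange required the 'firewall' to be opened by an administrator - one outgoing packet was enough.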

It's also possible to take advantage of a user's browser as a relay. Combine that with the ability to use ad networks to target HTML to be served to the IP of an attacker's choosing, and the illusion that a perimeter firewall will prevent an attacker from initiating connections to your network starts to shatter.

I agree that in practice, in many cases today the stateful firewall-like functionality provided by a NAT device will provide a net security improvement. But that's not a situation we should continue to allow.

It's unrealistic to expect users to manually create firewall holes. That's why default configurations tend to include UPnP (which, naturally, introduces a new set of security and DoS concerns) - which will automatically open holes in the 'firewall'.
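For the curious, this is roughly what a UPnP IGD client sends to open a port: an unauthenticated SOAP `AddPortMapping` call to the router's WANIPConnection service. The lack of authentication is exactly the concern mentioned above. A sketch that just builds the request body (the control URL, discovered via SSDP, varies per router and is omitted here):

```python
# Illustrative sketch of a UPnP AddPortMapping request body.
# Actually sending it requires SSDP discovery of the gateway's
# control URL, which is omitted; values below are examples.

SERVICE = "urn:schemas-upnp-org:service:WANIPConnection:1"

def add_port_mapping(external_port, internal_ip, internal_port,
                     protocol="TCP", description="example"):
    """Build the SOAP body asking the router to forward a port."""
    return f"""<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:AddPortMapping xmlns:u="{SERVICE}">
      <NewRemoteHost></NewRemoteHost>
      <NewExternalPort>{external_port}</NewExternalPort>
      <NewProtocol>{protocol}</NewProtocol>
      <NewInternalPort>{internal_port}</NewInternalPort>
      <NewInternalClient>{internal_ip}</NewInternalClient>
      <NewEnabled>1</NewEnabled>
      <NewPortMappingDescription>{description}</NewPortMappingDescription>
      <NewLeaseDuration>0</NewLeaseDuration>
    </u:AddPortMapping>
  </s:Body>
</s:Envelope>"""

body = add_port_mapping(8080, "192.168.1.50", 8080)
print(body)
```

Note that nothing in the request identifies or authenticates the caller - any process on the LAN (including malware, or a browser coerced into issuing the request) can ask for a hole.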

Google has published some of their thinking on this topic under the 'beyondcorp' moniker. The summary is that there is no "safe" and "unsafe" - you need to do a risk evaluation of each attempt to access a service, and "this is coming from inside our network" is inadequate.

De facto they are necessary, because people expect their networks to be safe, run all kinds of not-great things in there, and have sort of gotten used to configuring port forwarding. Attacks that can go from inside the network are in practice quite rare (AFAIK), and are a reason to add more security between network devices, not to expose them to the public net.

I'd love it if we could trust most devices to be publicly exposed, but IMHO we cannot. If router manufacturers could be trusted, one could add all kinds of clever things there, but ...

> Attacks that can go from inside the network are in practice quite rare (AFAIK)

This is not true. Most real-world attacks I've seen begin by infiltrating malware via the web, e-mail, social media, or phishing. Once inside, connections between internal systems are exploited to crawl around the network.

Remote attacks against non-DMZ things are fairly rare in practice.

The only way to stop this is to implement even more firewalling inside the network, which basically breaks the LAN.

I very much agree with the parent and have been talking about Google's beyondcorp and deperimeterization for years. A device that can't be safely connected to a network is broken, and we should stop degrading our networks to support broken junk. If broken junk gets hacked, it is the fault of the makers of that broken junk.

It is not hopeless. I've been into this stuff since the mid-1990s and things have improved a lot since then. I would not be too afraid to hook up a Mac or a fully patched Windows 10 machine to the public Internet. In the 90s or early 2000s I would not even consider this. You'd get owned by a bot within an hour. I remember in 2000 hooking a virgin Windows machine up to a campus network and being able to watch it get infected within 5 minutes.

The trends for the future are positive. Safer languages like Go, Rust, Swift, etc. are getting more popular everywhere. Advances in OS security like W^X, ASLR, etc. are getting ubiquitous. Local app sandboxing and containerization is a thing almost everywhere. Device security postures are improving.
