Does this sound familiar?
It is primarily GUI-driven, which won't cut it for automation, version control, or some auditing.
AFAIK it is still based on m0n0wall. The init scripts are written in PHP. There is (or was when I used it years ago) one big PHP script with a bunch of includes. If any part of the PHP script fails, the rest may fail.
This led to a bunch of problems with packages. For example I installed the RADIUS package through the web UI, decided I didn't want it, removed the package through the web UI. Removing the RADIUS package removed something the large PHP init script was trying to include, which led to the script exiting immediately with an error.
The PHP script was also responsible for loading the firewall rules, but it exited before loading them. Which resulted in a booted internet gateway with _no firewall_ (allow any).
This aspect of pfSense may have changed, so I'm not trying to hate on it, just trying to point out that it may have shortcomings for some use cases.
I've managed straight pf on OpenBSD for gateways/firewalls. pf is very nice to work with, much nicer than iptables/tc for internet gateways. But that also won't scale to large enterprises or datacenters. The amount of data is too much for OpenBSD and generic hardware, at least as far as I've seen. At one employer the largest Palo Alto Networks firewalls couldn't handle our office network traffic; they had to be replaced by firewalls from another vendor.
So it really isn't fair to compare FortiGate firewalls, some models of which can do 300+ Gbps, with 40/100 GbE ports, to a little box running pfSense.
You forgot: and an ssh backdoor
Fortunately (for us), the large consulting firms disagree with your assessment.
Case in point: AES-GCM, including AES-NI support, and support for the same in IPsec.
There are other, more minor contributions, such as the new 'tryforward' code (replaces what was 'fast-forward', but doesn't break IPsec). r290383
Or r290028, where we eliminated the performance impact of IPsec (which is now on by default in -CURRENT).
I could go on to detail around 30 recent changes to FreeBSD, but I think the point is made.
In any case, it's a bit more than "bundling it all up and slapping a web interface on top of it", as you assert, but you're not the only person who thinks this way.
Your point that we leverage others' work is correct.
Best wishes for your 2.3 release and the new bootstrap based webGui!
Have at it.
What is your definition of "quality code"? (Without going into a huge rant.)
b) if you're serious about greppable bugs, please open bug reports (redmine.pfsense.org) or, failing that, email me with a description. (jim-at-pfsense-dot-org)
Accusations of astroturfing, sockpuppetry, and shillage are not allowed in HN arguments unless you have evidence. Someone disagreeing with your view doesn't count as evidence. So please don't do this here.
If you want to understand our thinking on this, I've posted about it many times, e.g. https://news.ycombinator.com/item?id=9277068 and the links back from there.
1. Clear description of every feature and requirement in system.
2. Mathematical spec of those where English ambiguity could affect results.
3. High level design with components that map 1-to-1 to those.
4. Low-level, simple, modular code mapping to that.
5. Source-to-object code verification or ability to generate from source on-site.
What people in faux security mocked as mere "paperwork" or "red tape" were actually prerequisites for defeating subversion by mentally understanding a system from requirements all the way to code. A problem like this would've been impossible in such a system because it would be beyond obvious and probably unjustifiable with requirements tracing.
Every story like this further validates the methods that consistently produced systems without all the security problems plaguing modern security products. The situation isn't inevitable or even necessary: it's merely an inversion of the scientific method, where security companies and professionals consistently refuse to use what's proven to help and reuse tactics proven to fail. It's gotta stop.
That it won't is why I favor liability legislation tied to a reasonable baseline of practices. We can use an inexpensive subset of what worked in highly assured systems. 80/20 rule. The baseline would look more like Secure64 or the HYDRA firewall than shit like Fortinet and Juniper. Hackers would have to work for exploits. I know I'm dreaming, though, as DOD and NSA just dropped the mandate to EAL1 w/ 90-day review for some stuff. (Rolls eyes.)
We're kidding ourselves to put our faith into these closed-source products and that's only just now becoming clear. Open source. It's the only thing that will work for us long term.
And no, OSS with reproducible builds is nowhere near enough for software to be trustworthy. It's why even Orange Book had more than one sentence in its feature and assurance activities recommendations.
Let's put it to the test though. If I'm right, the majority of OSS software will, like proprietary software, be full of easily-prevented holes and undocumented or barely-clear functionality, and will be difficult even to build. Whereas high assurance systems would've had the opposite attributes while faring well during professional pentesting w/ source.
One of us was right for a decade straight. Maybe it's because the principles and practices I promoted... work? Evidence in the field is on my side. Neither OSS nor good builds are enough.
Closed source has no obligation to reveal vulnerabilities, fix anything, or even work with customers who report vulnerabilities. ORCL will sue you if you learn too much about what you bought. It's often in their interest to fix something after public discovery for PR reasons, but leave it on the todo list otherwise.
So yes, of course closed and open source both have holes. The question is: will they be found, announced, and addressed, or will they lie secret for years behind a legal wall?
Btw, about obligation, my essay assumes the company is trying to differentiate by taking initiative and having their product reviewed. Companies that don't shouldn't be trusted at all. End of story.
Depending on your contract, proprietary vendors offer few choices about getting a vulnerability patched, if ever. If you're in the riffraff section (i.e., most router owners, who have few options), you might wait a very long time. One Netgear vulnerability languished for months. And what about the Juniper backdoor: won't fix?
With open source, you can take the code to whomever you wish, fix it in house, offer a bounty, etc etc. There are plenty of houses that give away code and sell support. If GPL'ed, this model also accelerates fixes because everyone gets immediate benefit of everyone else's fixes.
As a customer, you can claim what you like about your process and your glory: until I can actually verify the code as part of the deliverable, it's just faith on my part. I'd rather use an open source firewall than a closed source one, regardless of the claims made by the proprietary company. Again, this is about faith. I'd like to avoid having it when it comes to security.
"4. Low-level, simple, modular code mapping to that.
5. Source-to-object code verification or ability to generate from source on-site." (me)
Seriously, did someone hack my comment where it doesn't show that on everyone else's end or did they hack my system where 4 and 5 are only visible to me? Shit! Here I was using OSS, reviewed, well-maintained software specifically to reduce the odds of that. I'm blaming Arclisp: must have called a C function or something.
"You're not wrong that a correct process dramatically limits classes of issues (I've worked in a very high ceremony requirements-tracability shop)."
Well, there we go. At least you saw that, and have experienced that assurance activities can increase assurance. Now we're getting somewhere.
"Again, this is about faith. I'd like to avoid having it when it comes to security."
You're probably going to have it anyway unless you specifically verified the software, libraries, compiler, linker, build system, and all while producing it from a compiler you wrote from scratch. Nonetheless, open-source can increase trust but I say closed can be more trustworthy. Not is or even on average but can be.
Here's my essay arguing that the real factors that matter are the review, the trustworthiness of the reviewers, and verification that you're using what they reviewed. I'd like your thoughts on it, as I see where you're coming from and like the faith angle. Faith in the integrity of the process and of the reviewers are the two things I identified as core to security assurance. So, I broke it down to give us a start on that.
Note: I have stuff for other aspects like compilers, dev process, HW, etc. I'm just holding off to focus on the source aspect here.
I think OpenSSL's past disproves many of the pro-OSS claims.
As with most things, a blended approach is probably best. Defense in depth, layers of security, crunchy on the outside, still tough on the inside. If you put all your eggs in one basket, you're gonna have a bad time, unless it was a very expensive, well-engineered basket.
And even then you might still have a bad time.
"I think OpenSSL's past disproves many of the pro-OSS claims."
"Defense in depth, layers of security, crunchy on the outside, still tough on the inside. If you put all your eggs in one basket, you're gonna have a bad time, unless it was a very expensive, well-engineered basket.
And even then you might still have a bad time."
Decent points. Other engineers and I went back and forth on discussions involving the latter point due to all the factors involved. A high assurance design usually worked pretty well. Yet it might not, so the consensus Clive Robinson and I reached was to combine triple modular redundancy with voters and diverse-implementation concepts: three different implementations of a concept that shouldn't share flaws, with at least one high assurance (preferably all three). The voting logic is simple enough that it can nearly be perfected. Distributed voters exist, though.
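For illustration, the voting scheme above can be sketched as a simple majority voter over three hypothetical, independently written implementations of the same policy check (all names here are made up):

```python
from collections import Counter

def vote(results):
    """Majority voter: accept a result only if at least two of the
    three independent implementations agree; otherwise fail closed."""
    value, count = Counter(results).most_common(1)[0]
    if count >= 2:
        return value
    raise RuntimeError("no majority: implementations disagree")

# Three hypothetical, independently written checks of the same policy.
def impl_a(pkt):
    return pkt["port"] in (22, 443)

def impl_b(pkt):
    allowed = {22, 443}
    return pkt["port"] in allowed

def impl_c(pkt):
    return pkt["port"] == 22 or pkt["port"] == 443

def allow(pkt):
    # The decision only stands if the diverse implementations agree.
    return vote([impl_a(pkt), impl_b(pkt), impl_c(pkt)])
```

The point is that an attacker (or a bug) has to compromise two of the three diverse implementations, or the far simpler voter, to flip a decision silently.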
Shit gets worse when you think of the subversion potential at EDA, mask-making, fab, and packaging companies. You have to come up with a way to trust (or not) one entity. For HW, esp low hundreds of MHz, the diverse redundancy with voters should do the trick. Until the adversary breaks them all or the voter. Lol...
Not sure what your definition of modern is, but this is a pretty old school "support interface".
The PC and Internet eras were really getting going around that same time. Languages and approaches that introduce vulnerabilities one way or another became huge. INFOSEC research and products shifted toward all kinds of tactics for hardening, analyzing, monitoring, and recovering such inherently insecure stuff. Revisionism kicked in: people forgot the old wisdom, and how and why they got there in the first place, and started slowly reinventing it. The products, both regular and security, had tons of vulnerabilities that old methods and tools prevented. I call this the Modern Era of INFOSEC. It's still running strong.
Good news is the Old Guard published tons of papers and experience reports telling us what to do, what not to do, and so on. There's a steady stream of CompSci people and some in industry building on that. Keeps advancing. Even mainstream IT and INFOSEC adopted some of the strategies. Rust, "side channels" analysis, unikernels, trusted boot... all these are modern variations (sometimes improvements) on what was done in 70's-early 90's. So, it's not dead but it's mostly ignored and barely moving.
That's what I'm thinking when I see another modern firewall or whatever with less security than the guards from the 80's that predated them. You'd think they'd have learned something by now past just the features. The assurance activities were there for a reason.
Guards, if you were wondering...
Good essay on security assurance from engineering rather than subjective point of view that development often takes:
What does a mathematical specification of "secure" look like?
We develop a tool to verify Linux netfilter/iptables firewall rulesets. Then, we verify the verification tool itself. Warning: involves math!
This talk is also an introduction to interactive theorem proving and programming in Isabelle/HOL. We strongly suggest that audience members have some familiarity with functional programming. A strong mathematical background is NOT required.
TL;DR: Math is cool again, we now have the tools for "executable math". Also: iptables!
The point I am getting at is that in a firewall product, what you are checking is something like "only those users listed from this source may log in", which is "easy", but there are other, more complicated things like "this channel must be encrypted". What does "encrypted" mean? Is it just a word that you use in your specification language? If so, does it mean what you think it means? Etc. etc. etc...
Formally proving stuff about the behavior of a system, or a distributed system, is hard, but formally proving stuff about security, especially as it relates to information flows, side channels, etc., is very hard...
No, it means you don't allow those ports. That's all it means. Saying precisely what you're doing, or what attributes you're aiming for, is what the formal work is all about. Whether your security policy is enough, and whether your design embodies it, are different things altogether.
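To make the "easy" case concrete, here's a minimal sketch (hypothetical rules and names) of checking a first-match ruleset against the English requirement by exhaustive comparison over a small finite domain:

```python
# Hypothetical ruleset: first match wins, default deny.
RULES = [
    {"user": "alice", "src": "10.0.0.5", "action": "allow"},
    {"user": "bob",   "src": "10.0.0.6", "action": "allow"},
]

def policy_permits(user, src):
    for r in RULES:
        if r["user"] == user and r["src"] == src:
            return r["action"] == "allow"
    return False

def spec(user, src):
    """The English requirement as a predicate: only the listed
    (user, source) pairs may log in."""
    return (user, src) in {("alice", "10.0.0.5"), ("bob", "10.0.0.6")}

# Exhaustively compare the implementation to the spec over a finite domain.
users = ["alice", "bob", "mallory"]
srcs = ["10.0.0.5", "10.0.0.6", "203.0.113.9"]
assert all(policy_permits(u, s) == spec(u, s) for u in users for s in srcs)
```

Real tools do this symbolically over all inputs rather than by enumeration, but the shape of the question is the same: does the mechanism's decision match the spec's predicate everywhere?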
"what does "encrypted" mean? Is it just a word that you use in your specification language? if so, does it mean what you think it means? etc etc etc..."
That's actually one of the easiest things you could ever check in formal systems. It's helpful to think of it like programming. Actually, done side-by-side. You implement a formal spec for a requirement and/or high-level function that takes plaintext as input and outputs ciphertext. This typically uses Red/Black separation model. You also produce and vet an implementation for the encryption module. Now, how to know if something was encrypted in the system before going on the wire? Wait for it... wait for it... answer: it went through the encryption module first while it was initialized and in a state saying it's encrypting. Just like that. Labels and checking at interfaces were used for identifying what went with what and making policy enforcement easier.
Note: Guttman has a security kernel in cryptlib that does this at the interface level for every function in the system. In CompSci literature, it's called Assured Pipelines if you want to look it up. Easier to support now with Design-by-Contract and advanced typing systems. Past systems were kludgy when they happened at OS level except with capability systems.
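A rough sketch of that idea in modern typed terms (all names hypothetical; the XOR "cipher" is a stand-in for illustration, not real crypto): only the encryption module can construct a Ciphertext, and the wire interface refuses anything else, so "it went through the module" becomes a checkable fact at the interface.

```python
_SEAL = object()  # module-private capability token

class Plaintext:
    def __init__(self, data: bytes):
        self.data = data

class Ciphertext:
    """May only be constructed by encrypt(); holding one is evidence
    the data passed through the encryption module."""
    def __init__(self, data: bytes, _token=None):
        if _token is not _SEAL:
            raise TypeError("Ciphertext may only come from encrypt()")
        self.data = data

def encrypt(p: Plaintext, key: bytes) -> Ciphertext:
    # Stand-in XOR "cipher" purely to illustrate the pipeline.
    out = bytes(b ^ key[i % len(key)] for i, b in enumerate(p.data))
    return Ciphertext(out, _token=_SEAL)

def send(c) -> bytes:
    # Interface check: refuse anything the module didn't produce.
    if not isinstance(c, Ciphertext):
        raise TypeError("red data on a black interface")
    return c.data  # would go on the wire here
```

This is the Red/Black separation enforced as a pipeline: red (plaintext) data structurally cannot reach the black (wire) side without passing through the encryption module.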
Anyway, such components and functions are composed with the interactions and compositions analyzed. Each component and composition usually has a small number of execution traces it can perform so one can brute force the analysis if necessary. Finite state machines, both success and fail states, were common in high assurance development because they can be analyzed in full in all sorts of ways.
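A toy sketch of that brute-force trace analysis (hypothetical machine and invariant): enumerate every event sequence up to a bound and check that "send" only succeeds after the key-load and start transitions.

```python
from itertools import product

# A tiny success/fail state machine for an encrypting sender;
# any undefined transition drops into the absorbing "fail" state.
TRANS = {
    ("init", "load_key"):   "keyed",
    ("keyed", "start"):     "encrypting",
    ("encrypting", "send"): "encrypting",
}
EVENTS = ["load_key", "start", "send"]

def run(trace):
    state = "init"
    for ev in trace:
        state = TRANS.get((state, ev), "fail")
    return state

# Brute-force every trace up to length 3 and check the invariant:
# a trace containing "send" only avoids "fail" if the key was loaded
# and the machine started first.
for n in range(1, 4):
    for trace in product(EVENTS, repeat=n):
        if "send" in trace and run(trace) != "fail":
            assert trace[:2] == ("load_key", "start")
```

Because the state and event sets are finite and small, the check covers every possible trace, not just the ones a tester happened to think of.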
Note: It was Dijkstra that invented the method for this in the THE multiprogramming system. I think PSOS did it for security first. VAX Security Kernel Retrospective has nice sections where they show layering and assurance activities plus effect they had on analysis and defect hunting. Google any of those to understand the method better.
this misses the relationship between the size of the plaintext and the ciphertext...
edit: part of the problem is tooling, to deal with stuff like this you need dependent types and those do exist, but not in a way that "the programmer on the street" can use...
Smartcard OS by Karger, another founder of INFOSEC:
A comparable one for dev assurance, but maybe easier to emulate, was CompCert. Testing it against many other compilers validated that formal verification gave the best reliability.
Ironically, Microsoft competes neck-and-neck with seL4 in terms of verification with their excellent work on VerveOS:
Which is where EAL6/7's other assurance activities come in.
Amazon notes many kinds of problems that lead to reliability and security failures that TLA+ helped knock out. Their engineers are sold on it now and have no intention of dropping it. Certain sentences in that article are nearly identical to those I read in ancient papers using ancient methods. The benefits of precise, easy-to-analyze specifications are apparently timeless.
Here's it done in Z via Altran's Correct by Construction methodology:
They apply about every lesson learned in high assurance in their work. Defect rate, from this demo to old Mondex CA, is usually around 0.04 per 1,000 lines of code. That's better than the Linux kernel.
Rockwell-Collins formalized a separation architecture, HW, microcode, etc then integrated it into a CPU:
NICTA, who did seL4 verification, use tools to model stuff in the language that causes security errors then use provers to verify they're used correctly. Example tool:
Lots of groups using lots of different tools with great results. The difficulty and impact on time-to-market varies. The combination of compositional function or state-machine models, subsets of safe languages, design-by-contract (or just interface checks), static analysis, design/code review, testing, and compilation with conservative optimization seems to be the winning combo. There are free tools for all of that. It takes about the same time to code as usual while preventing lots of debugging during testing and integration.
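As a small illustration of the interface-check piece of that combo, here's a minimal design-by-contract decorator (a hypothetical example, not from any of the cited projects):

```python
import functools

def contract(pre, post):
    """Minimal design-by-contract decorator: check the precondition
    on the arguments and the postcondition on the result."""
    def wrap(f):
        @functools.wraps(f)
        def inner(*args):
            assert pre(*args), f"precondition failed for {f.__name__}"
            result = f(*args)
            assert post(result), f"postcondition failed for {f.__name__}"
            return result
        return inner
    return wrap

@contract(pre=lambda port: isinstance(port, int) and 0 < port < 65536,
          post=lambda ok: isinstance(ok, bool))
def port_allowed(port):
    # Hypothetical policy: only privileged ports pass.
    return port < 1024
```

Callers that violate the interface fail loudly at the boundary instead of corrupting state deep inside, which is exactly what the old layered-analysis approach bought you.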
> It leaves no traces in any logs (wtf?). It keeps working even if you disable "FMG-Access". It won't let you define an admin user with the same name to mitigate it, so make sure that SSH access on your devices is at least restricted to trusted hosts!
I really believe this has already begun with the FANG tech giants with Open Hardware initiatives. At some point you can begin pooling your resources to create safe, secure, and fast platforms that everyone can use.
Facebook, Amazon, Netflix, Google
Credibility is in a way binary - you either have it or don't.
The SSH backdoor means that any idiot with basic computer operation skills can log onto a firewall and start playing around with whatever they want.
Of course not all of the results also expose SSH but some do.
For this, there'd have to be a specific function in some Fortinet products for handling the special challenge/response backdoor.
A magic string like `"FGTAbc11*xy+Qqz27"` in firewall source code is going to jump out at you. Unlike an extra goto...
Do you mean "hopefully someone else will"? Because that's what I mean.
Maybe it is time we build open hardware and software for important things. Can't trust anyone.
Doing audits of open hardware and software is a whole other problem, however.
(It could of course be that nobody's finding the h/w ones...)
I don't quite understand the special handling. Looks like it takes a byte from the server's output, hashes a special string containing that byte, and passes that back to the server. This is the backdoor.
Edit: Maybe that "special handling" is just standard protocol and it's just sending a plaintext password. I dunno.
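For what it's worth, the public exploit write-ups describe the response computation roughly like the sketch below; the details (padding length, envelope format) are from memory of those write-ups and may not be exact.

```python
import base64
import hashlib

MAGIC = b"FGTAbc11*xy+Qqz27"  # the hardcoded string from the firmware

def backdoor_response(salt: bytes) -> str:
    """Compute the keyboard-interactive answer roughly as the public
    PoC did: SHA-1 over null padding, the server-supplied salt, and
    the hardcoded magic string, wrapped in an 'AK1' + base64 envelope."""
    digest = hashlib.sha1(b"\x00" * 12 + salt + MAGIC).digest()
    return "AK1" + base64.b64encode(b"\x00" * 12 + digest).decode()
```

So it's a challenge/response keyed only by a string baked into every unit, not a plaintext password; anyone who extracts the string from the firmware can answer the challenge for any device.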
EDIT: 5.0.7, not 5.0.2
On top of that I am now in an organization where we're starting to implement security levels on networks, anything above level 0 requires 2FA to access and you can never connect a lower level to a higher level. So best practices are a good thing.
Doesn't help. The attacker just has to get user-level access on some machine on the intranet or in the data center, which can be obtained via other attacks. Then they can attack other machines via the local network to escalate.
This is where VPN services like Junos (ironically Juniper) work well because they give you 2FA and group based access. So if you're not in the networking admins group then you have no reason to have SSH access to the networking equipment.
if you want to do backdoor probably should do it better, something like port knocking to start with at least.
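A toy sketch of what even minimal port knocking looks like (hypothetical sequence and class; real implementations like knockd watch the packet stream rather than taking calls):

```python
KNOCK_SEQ = [7000, 8000, 9000]  # hypothetical secret knock sequence

class Knocker:
    """Track each source's progress through the knock sequence and
    open (return True) only when the full sequence arrives in order."""
    def __init__(self):
        self.progress = {}

    def hit(self, src, port):
        i = self.progress.get(src, 0)
        i = i + 1 if port == KNOCK_SEQ[i] else 0
        if i == len(KNOCK_SEQ):
            self.progress[src] = 0  # reset; caller would open the real port
            return True
        self.progress[src] = i
        return False
```

Even this trivial gate would have kept the Fortinet hole off Shodan: a scanner hitting port 22 directly never sees the service at all.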
Come to think of it, backdoors are fundamentally "security by obscurity". Or insecurity through obscurity, depending on your POV.
This one is. But they aren't always.
For example, if a manufacturer put in a support/recovery backdoor, documented it, and utilised a secret that only the end user and manufacturer should know (e.g. something on the physical label), then that would be a backdoor while not relying on any obscurity for its security (or no more than a password does).
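Such a documented scheme could be as simple as keying the recovery response to a per-device secret printed on the label; this is a hypothetical sketch, not any vendor's actual mechanism:

```python
import hashlib
import hmac

def recovery_token(device_secret: bytes, challenge: bytes) -> str:
    """Hypothetical documented recovery mechanism: the response is
    keyed by a per-device secret printed on the physical label, so
    knowing the scheme exists grants no access by itself."""
    return hmac.new(device_secret, challenge, hashlib.sha256).hexdigest()
```

Unlike a hardcoded magic string shared by every unit, compromising one device's label secret gains nothing against any other device.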
The biggest difference between a "good" backdoor and a "bad" one is if it is documented. If the manufacturer is too scared to document it then it likely sucks and they know it.
Similarly hardcoding someone's SSH public key isn't going to help anyone else gain access just by knowing it's there, is it?
People who discover these types of exploits eat machine code for breakfast.
The USG probably would have had better luck if they had pitched backdoors as a consumer-protection measure and had a law mandating that:
1. Software companies must always have a remote and practical method to correct dangerous flaws in the software they issue.
2. To protect consumers' valuable records, all device manufacturers must back up all data on any device they produce. Such backups should ensure that the data is always accessible by the user even if the user were to lose their password or keys.
It would be a security disaster, and it already is.
Source: Fortinet admin