The Insecurity Industry (edwardsnowden.substack.com)
743 points by stanislavb on July 26, 2021 | 368 comments



"If you want to see change, you need to incentivize change. For example, if you want to see Microsoft have a heart attack, talk about the idea of defining legal liability for bad code in a commercial product. If you want to give Facebook nightmares, talk about the idea of making it legally liable for any and all leaks of our personal records that a jury can be persuaded were unnecessarily collected. Imagine how quickly Mark Zuckerberg would start smashing the delete key.

Where there is no liability, there is no accountability... and this brings us to the State. "

Yep, this definitely needs to happen eventually.


If this happens, it will be the end of open source and the indie web. Only large companies with large legal departments and serious liability insurance, and anonymous underground hackers, will be able to afford to make software public for commercial use or run a website.


That's one extreme extrapolation. How about, if this happens, it will be the end of commercial IP and the closed-web. Only open source with its inherent transparency and broad, distributed contributors (who would you sue? everybody at once?) and constant, real-time updates and improvements without lock-in or planned obsolescence would thrive when improved regulation gives avenues for redress and improves consumer awareness of security, as consumers flee the commercial silos in droves.


The regulations would be tailored to favor free software if the free software community had better lobbyists than the commercial silos. You can see that isn't the case.

Instead you can look at existing heavily regulated software markets to see what would happen: medical-device software, avionics software, car engine control units, cryptography before 01996, tax preparation software, PCI compliance measures. A vast wasteland of incompetence, waste, government graft, monopolies and duopolies, truly staggering profits, and easily avoidable deaths.

Consider: why aren't you wearing a Holter monitor? How about an automated external defibrillator? Why isn't cryptographic security integrated into all the internet protocols?

How's that Bitcoin rollout going in El Salvador?


One can only dream. What's likely to happen is the software industry will lobby the government and kill any law that doesn't favor their business model.


That is an unrealistic scenario, and it is also not desirable to get rid of commercial software. It would give large enterprises a software monopoly.


> If this happens, it will be the end of open source and the indie web

If I run over someone when driving a car, I'm responsible - not the car maker. If my customer's data is stolen, I'm responsible. Whether the makers of my software are responsible is a contractual matter - and nearly all open source licenses include a disclaimer of warranty and a limitation of liability, including the GPLv3 (see sections 15–17).


> Whether the makers of my software are responsible is a contractual matter - and nearly all open source licenses include a disclaimer of warranty and a limitation of liability, including the GPLv3 (see sections 15–17).

That was true in the US until MacPherson v. Buick in 01916 and in the UK until Donoghue v. Stevenson in 01932. Nearly all proprietary software licenses include the same disclaimer, but they are on slightly firmer ground in doing so, since typically those licenses are in fact contracts under common law, while the GPL explicitly purports not to be a contract and is very likely correct about that.

The kind of statutory imposition of liability we're discussing here would have to specifically outlaw such contract terms in order to work at all. You could imagine a statute that would specifically exempt open-source software licenses from that, but as explained comprehensively in this thread, such a statute would certainly not be the one that was passed.


Surely though without a contract there is no liability? IANAL but imposing liability on gifts seems difficult.


If I build an unsafe boiler, and gift it to you, and then it explodes and kills you - am I not liable because it was a gift and not a sale? Can I disclaim away any liability and "fitness for purpose" when I gift you the boiler?

ETA: the first couple of Google results say that no, product liability can't be disclaimed away - particularly when there is no contract or opportunity for bargaining. I am very much not a lawyer but this sounds correct to me (i.e. this is what the law is).

https://www.findlaw.com/injury/product-liability/are-product...

https://www.eltonlaw.com/does-a-disclaimer-mean-you-cannot-f...


Your second link mentions sellers avoiding liability by selling product (that can be inspected in stores) "as-is". It could be argued that open source falls in the same category. You have an opportunity to inspect it before using and if you don't like it or don't feel qualified to pass judgement, no one is forcing you to use it.


The second link doesn't mention "as-is", the first one does. I'll assume that's what you meant. It says,

> Though manufacturers cannot so easily escape liability, sellers can escape liability by informing the customer before the purchase that a product must be taken "as-is," which means how the product was found when it was purchased in-store. "As-is" works because the buyer has an opportunity to inspect the product and decide whether to buy it given its condition.

On that analogy, Github and RedHat aren't liable, but the original author of the software still is.


Without a contract there was no liability in the US until MacPherson v. Buick in 01916 and in the UK until Donoghue v. Stevenson in 01932, as I just explained in the comment you are replying to.


> it will be the end of open source

Not if the rules are carefully targeted at SaaS and not at codebases. If the rules are targeted at SaaS, the liability is actually lower for open source because of the inherent transparency of everything open source code does.


Do you think Nancy Pelosi is going to ask Richard Stallman, the Debian Project Leader, and the Apache Foundation how the regulation should work? Or is she going to ask SalesForce, Google, Apple, and Microsoft?


Kind of an indictment of the FSF that you think no one would bother talking to them.


Lots of people respect the FSF, Stallman, or both, and put a lot of effort into talking to them, but I have seen no evidence that any US legislator is among them.


I don't see why open source would end. Most open source software is not commercial. There's a massive and obvious difference between a person writing code and a huge obscenely rich corporation selling software to a wide audience. We expect much more from the latter.

There's also no liability associated with running a website. Simply refrain from collecting data of any kind and there should be no reason to worry.


Nope, they only need to be held to the same standards.

The guy selling food on the street has the same liability as a restaurant.


The guy selling food on the street has liability in proportion to his profits; fifteen customers, fifteen potential food-poisoning cases. He can set his prices accordingly. Simon Tatham doesn't have any profits, but his PuTTY is installed on every developer's Windows machine. OpenSSL is installed on even more machines. How long do you think it would take your proposed regulatory regime to find that Kurt Roeckx owed several million dollars to every company that generated private keys with Debian's copy of OpenSSL? He did, after all, introduce the subtle security hole that left them wide open for years.

Maybe you could make the case that he was following best industry practices in doing so; after all, using Valgrind is a best practice, right? But you could also pretty plausibly convince a jury that he was negligent. Especially if you're IBM's senior counsel. Or, say, RSA's.
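
(For readers who don't know the Debian OpenSSL incident being referenced: roughly, the offending patch silenced a Valgrind warning about reads of uninitialized memory in OpenSSL's random-number seeding code, and in doing so also removed most of the entropy that generated keys depended on. Below is a schematic C++ sketch of the shape of that bug, with invented names rather than the actual md_rand.c code.)

    #include <cstddef>
    #include <cstdint>

    // Schematic illustration only; invented names, not the real OpenSSL code.
    static std::uint8_t pool[256];

    // Mix a caller-supplied buffer into the entropy pool.
    static void mix_into_pool(const std::uint8_t *buf, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            pool[i % sizeof pool] ^= buf[i];
    }

    void seed_pool(const std::uint8_t *buf, std::size_t n) {
        // The buffer is deliberately allowed to be uninitialized, as a cheap
        // extra source of entropy. A memory checker flags the read below as
        // "use of uninitialised value"...
        mix_into_pool(buf, n);
        // ...and "fixing" the warning by deleting that call quietly removes
        // most of the randomness that every generated key depends on.
    }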

Now, is Kurt Roeckx or RSA going to be advising the US legislators who draft this candidate legislation, establishing the standards that they both must uphold?


  Simon Tatham doesn't have any profits, but his PuTTY is installed on every developer's Windows machine.
Then isn't it up to the commercial vendor who bundled the software to properly vet it?

Someone taking non-commercial products and commercializing them is where the line is drawn, right?


That depends entirely on how the law is written. Legislators will ask commercial vendors how to write the law. Why would the commercial vendors choose to write it in the way you're describing?


That is for the lawmakers to decide.

If someone gets run down by a bicycle that a hobby repair shop failed to fix, it doesn't matter that it was done for free by a guy who learned to repair bicycles during late nights.


The hobby repair shop can only be liable for a very small number of bikes, those they worked on. You cannot restrict the number of users of free and open-source software, and you cannot restrict the users' risk profile, like "good enough for an offline arcade game, but nothing else". Analogies have their limits.

But yes, lawmakers will decide, and given that they for instance try to de facto prohibit aftermarket OpenWRT installs, I have a guess how they would decide.


Sure and so what?

The company selling me a car is responsible for validating the security of each piece it gets from a third party, and a restaurant is responsible for the quality of the food it buys from the local bazaar.


This is a completely unrealistic demand of software and security. I am really surprised by Snowden's arguments here.

Lawmakers cannot make the internet one bit safer. Technical experts can, and lawyers would dream of having leverage against them. They should be denied it.


Nope, it is a matter of applying the same liability processes that are already in place for high-integrity computing and enterprise project deliveries.


I deliver software for enterprise and there is no such thing. I even develop software for medical appliances that have an extended software validation process.

It is about minimizing risk and it is a process that acknowledges that risk cannot be removed completely. It just forces you to work carefully and eliminates neglectful practices.

No serious developer will ever commit to ship software free of bugs. On the contrary, that would give people false security, which can in turn lead to further neglect.


People like Simon Tatham and Kurt Roeckx, however fallible they may be, are doing a much better job of deciding how software security should work on the internet than Nancy Pelosi and Mitch McConnell would. The question is not how we can give more power to Nancy Pelosi, Mitch McConnell, Amazon, Google, and whoever the Trump voters vote in as the next president, to regulate Sci-Hub, BitTorrent, Bitcoin, WikiLeaks, DeCSS, Matrix, GDB, and OpenSSL; the question is how we can take that power away from them.

When the wise must obey the commands of the foolish, disaster ensues.


This makes no sense. Open source doesn't collect my personal information. (Except when it does, which is bad behavior.) Anyway, the law is capable of making reasonable distinctions where necessary.


Nearly every website that collects your personal information is running lots of open-source software. Sometimes, as with PHPSESSID and httpd logs collecting your IP, the collecting of that information is automatically done by that open-source software.


The configuration defaults will evolve but FOSS will continue.


Not if publishing FOSS with bad configuration defaults makes you the defendant in a multi-million-dollar class-action privacy-invasion lawsuit.


It won't.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.



How would this make running a personal website risky from a legal liability perspective?


Are your HTTPD logs adequately secured? Are their security measures audited monthly, in keeping with established industry best practices? Do you have comments enabled on your blog? What's your policy for expunging blog posts about people exercising their right to erasure? How did this defamatory comment spam get past your comment filter? Did you know your open-source image thumbnailing software is being used on an illegal pornography site? Why didn't your legal office respond within 8 hours when you were notified of a privacy invasion on your blog last Saturday?


What a load of FUD. Personal websites don't need anything more complicated than an out-of-the-box config for apache or nginx to serve static files out of webroot. When's the last time that kind of setup was exploited?

Sure if you add more complexity, you add more attack vectors, but there's an easy way to reduce your legal culpability there: just don't collect any PII. Even in the scenario you propose where anonymous HTTPD logs are a liability (which... yeah, is not going to happen any time soon) the solution is simple: turn off logging. If the legal precedent is established, the defaults of our software will change to match.


Clearly you've never had to be responsible for PCI compliance. PCI auditors have no patience for arguments like "When's the last time that kind of setup was exploited? If you add more complexity, you add more attack vectors!" and just want you to install the damned antivirus software like their guidelines say. Yes, even though you're running Linux. No, they don't care that there's a CVE in ClamAV every two months. They don't make the rules. And PCI DSS is written by for-profit companies that lose money when their rules don't work. Legislators only lose money when their campaign donors don't donate enough to get them re-elected and nobody will hire them for speaking engagements.

You're coming at this whole thing from the perspective that wise and sane rules would be put in place and then sanely enforced for the welfare of everybody—by the same US government that told people not to wear face masks to protect against covid, while also shipping defective covid tests from the CDC and prohibiting the use of any other covid tests. And that's a case where nobody was in a position to profit by making the rules hard to comply with.

Listen, you know and I know that you can serve a personal website perfectly well with /var/www and a stock Apache config. But the proposal we're discussing here is precisely to take that judgment call away from people like you and me and give it to people like Donald Trump, using laws written by, most likely, lobbyists from Oracle and Microsoft.


You're assuming quite a lot about me and not actually responding very directly to what I'm saying. However, reading this and some of your other responses makes it a bit clearer what the concern you're raising is.

> Listen, you know and I know that you can serve a personal website perfectly well with /var/www and a stock Apache config. But the proposal we're discussing here is precisely to take that judgment call away from people like you and me and give it to people like Donald Trump.

Let's revisit that proposal:

> 1. defining legal liability for bad code in a commercial product

> 2. making [website operators] legally liable for any and all leaks of our personal records that a jury can be persuaded were unnecessarily collected

I believe we both agree that neither of these technically apply to the personal website scenario (static hosting not collecting any PII).

So your argument as I understand it is: in order to make the above liabilities legally enforceable in the scenarios where they do make sense, we will end up with regulations similar to those for handling "sensitive" data (such as financial/medical information) being imposed on _all_ software / online services (such as basic static websites). This will happen because laws will be written in an environment of near-total regulatory capture.

This argument is plausible, but it relies on a bit of a non-sequitur: expanding the scope of data collection/handling regulations will inevitably extend to regulating the publishing of software.

It might be in the interest of the current software behemoths to push for such a system, but I don't really see it. They derive too much economic value from the current "free as in lunch" open-source to shoot themselves in the feet like that. It seems more likely they would:

1. try and narrow the scope of their own liability (by heavily constraining which categories of software carry that burden)
2. try to minimize the costs to themselves (by demanding compensation from governments for the work required to meet those regulations)
3. try to offload liability to vendors (who can then demand compensation for taking on that liability).

Points 2 and 3 could be a large cash cow for free and open source software, though I doubt many will be able to successfully capitalize on it.


I wasn't responding directly to what you said because it's irrelevant. You were pointing out that hosting a personal website doesn't in fact expose its visitors to a lot of risk, especially if it's a static site instead of a blog or something. But that doesn't imply that people hosting their own personal websites will find it easy to comply with a regulatory regime tailored to raising the barriers to entry for "the next Facebook". More likely they will find it infeasible.

It's true that imposing liability for publishing defective software is logically independent from imposing liability for collecting unnecessary PII that leaks. But pjmlp's quote from the article we were commenting on explicitly proposed doing both of these:

> For example, if you want to see Microsoft have a heart attack, talk about the idea of defining legal liability for bad code in a commercial product. If you want to give Facebook nightmares, talk about the idea of making it legally liable for any and all leaks of our personal records that a jury can be persuaded were unnecessarily collected.

So my argument does not, as you say, "rely on a bit of a non-sequitur: [that] expanding the scope of data collection/handling regulations will inevitably extend to regulating the publishing of software." The proposal in question is to both regulate software publishing and also regulate data handling, so it's irrelevant whether or not the scope would thus "inevitably extend" from one to the other.

Probably it is true that the most favorable situation for the current incumbents would be to have no liability, as at present, or as minimal liability as they can get away with. But the second-most-favorable situation, and one that is definitely politically viable even if the current situation is not, would be to have a regulatory regime that raises the barriers to entry for new entrants as much as possible and prevents disruption to their markets, by enshrining in law the particular way they're doing business today: AI melody recognition for prior restraint of free speech, combined with armies of outsourced moderators to watch for terrorism and pornography, centrally-controlled app-store platforms, locked-down end-user hardware (with a grandfathered carve-out for desktops and laptops), real-name policies, fax-us-your-passport ID verification, "two-factor" authentication that turns out to be one-factor, and so on. Anything that encourages you to post stuff on your own blog or website would be a big drawback for GitHub, YouTube, and Fecebutt.


Personal website - what if someone takes over your server and does malicious stuff?

Are you going to accept accountability if you misconfigure something and it allows attackers to scam people or serve porn?

You are perfectly sure that you are going to keep your small site updated all the time and you won't forget about it?

Because that is where this is going: it is not just code that can be vulnerable, but also combinations of different software and configurations. If you install two applications, they might interact in a way that makes your system vulnerable.

Software is infinitely complex. We can cut down complexity, but then anything that is useful and complex will cost a lot more.


> what if someone takes over your server and does malicious stuff?

What if somebody steals my kitchen knife and uses it as a murder weapon?

> Are you going to accept accountability if you misconfigure something and it allows attackers to scam people or serve porn?

Yes. This is (and always has been) the price of operating a website on adversarial public networks. We established relatively simple ways to make this possible even for individuals decades ago.

> You are perfectly sure that you are going to keep your small site updated all the time and you won't forget about it?

As I replied to the sibling comment, when was the last time there was RCE for Apache or Nginx configured to serve static files from a webroot? We are talking about personal websites here after all.

> Software is infinitely complex. We can cut down complexity, but then anything that is useful and complex will cost a lot more.

I think I disagree with you on where the threshold of usefulness is.


I would like to show you how complex life is. Maybe you embedded a video on your personal website, and no one even had to hack your Apache/nginx:

https://www.vice.com/en/article/qj8xz3/a-defunct-video-hosti...

People like to embed things in their personal websites - integrate them with third parties. That is my point.

Now you are responsible for what you embed on your personal website and what you publish.


I agree that Apache and Nginx are not critical attack surfaces, especially for static web sites. I have been experimenting with running static sites on Cloudflare Pages, delegating security and infrastructure to them. This goes against my desire for decentralization, however. I usually use GCP for my web sites, and it is so little effort to occasionally start a fresh VPS, do a few git pulls, copy over an Nginx config file, and flip DNS settings. Automating this process to happen frequently would avoid the problems associated with hackers taking over your servers and using them long term.


Cutting down things that are unnecessarily complex would be a good start. For example, most websites out there could easily be replaced by static site generators. WebAuthn and client certificates can be used to protect admin interfaces. The technology is here, but there is no demand for it because making things less secure is way cheaper.


Mostly open source does not gather data that is not needed for the service/program to function.


Most open-source licenses explicitly say that there are no guarantees.


That doesn't excuse providing products that easily blow up and damage the consumer when there are safe means to build the product.


I disagree with Snowden here. Liability is an extremely bad approach and would not solve the problem at all. It would also fortify companies that can pay for guarantees nobody could ever give.

Security flaws are always inevitable. Better languages might help, but they are no panacea.

I agree with Snowden on a lot, but this doesn't solve anything.

The result would be software certificates. By whom? Take a guess.

Nobody can guarantee absolute safety. This is a trap you don't want to fall into.

It would end open source and any independent development. Quite surprisingly short sighted by Snowden.

The problem with iMessage wouldn't be solved by liability. It is a security flaw that cannot be removed by law.

edit: To clarify: I agree with him in the Facebook example. They collected the data for their business and should be liable. "Bad" or "insecure" code is a different matter however.


> The problem with iMessage wouldn't be solved by liability. It is a security flaw that cannot be removed by law.

If they had financial incentives to not get hacked, it would make more financial sense to port non-memory-safe C and C++ code to Swift and Rust. (Currently it's way too much effort to be worth it.) It would also incentivize better security layers like sandboxing.


Eh, contract law gives you all the tools you need to create liability.

If you want software where the vendors are liable, you can get that today.


Ahh, yes, the free market. The bastion of privacy.

What about those not even doing business with Equifax and getting their info stolen?


I was talking about currently available legal options. Not about any free market.

I don't know anything about Equifax? What are they doing?

I assume whatever Equifax is accused of doing is already illegal by current laws? Would making it 'more illegal' help?


Oh yea, I’ll definitely win if I sue Microsoft, and being in the right will absolutely make a difference.



Well, then don't do business with Microsoft?

Only contract with companies that either have a reputation for upholding their end of the bargain without a court threatening them (ie most good companies), or only contract with companies where you have a reasonable expectation of being able to win a fair court case.

As an example of a more generalised version of the former: Amazon is pretty generous in their customer service, and you don't typically have to sue them to get them to eg give you a refund.

Reputation is a powerful asset, and companies often want to protect theirs.

(Not always, though. Amazon is less nice to sellers or employees, I think, for example.)



I always wondered if IEC61508 would ever be applied like this.

IIRC it could be interpreted as covering a range of types of harm to people and the environment that is not restricted to control systems.

https://en.wikipedia.org/wiki/IEC_61508


We've been incentivizing people to properly write "definitely" for decades and look where it's got us.

d-e-f-i-n-i-t-e-l-y.com


> talk about the idea of making it legally liable for any and all leaks of our personal records that a jury can be persuaded were unnecessarily collected

https://www.gdpreu.org/compliance/fines-and-penalties/


> talk about the idea of making it legally liable for any and all leaks of our personal records that a jury can be persuaded were unnecessarily collected

That's kinda sorta one of the goals of the GDPR. And they go farther as they're liable for any leak of personal data, not just the unnecessary personal data.


Snowden did an incredible job on this article!


Yeah, imagine if we made pharmaceutical manufacturers accept liability for vaccine side effects. It's the same argument, right?


There are definitely a lot fewer pharmaceutical manufacturers now than there were before the Pure Food and Drug Act passed in 01906; ten companies have 40% of the whole worldwide drug market, and if you start openly making and selling drugs yourself (like Coca-Cola in 01886 and 7-Up in 01920), you will probably get arrested within a month. Almost nobody makes drugs as a hobby now.

There are certainly people who would like to make it so that the same thing happens with software and online publishing: a few companies controlling almost all of the activity, and if you release any software or host a blog without working for one of those companies, you get arrested within a month. Other people don't intend that, but advocate policies which would have that effect.


I am sympathetic to your argument, but be careful to avoid 'Post hoc ergo propter hoc'.


I think there's a clear line of causality here, though.


Hmm, I think so, too.

But it weakens your argument a bit. Basically, it would only convince people who are already convinced.


How would you make the argument?


Not sure, it's hard, since we seldom have randomised controlled experiments here.

Perhaps try looking for natural experiments, eg compare between countries, or between different sectors.

(Sometimes there's also silly legislation you can exploit for statistics, like the Onion Futures Act (https://en.wikipedia.org/wiki/Onion_Futures_Act) which can help to see the impact of futures trading on commodities.

Perhaps there's some corner of the pharmaceutical market that wasn't hit or was less hit by the Pure Food and Drug Act?)


I don't think it's the same argument. Being liable for the things you actively and intentionally do to all of your customers is not the same as being liable for a risk inherent in a medical procedure that you've done your due diligence to prevent and warn about. In either case, there should be some form of accountability, but they are not at all the same thing.


It's up to us to decide upon regulations which benefit society the most. Nothing wrong with applying common sense in each instance.


Side effects are expected. Vaccines provide enormous benefit but are not without risk. Patients must receive information about risks and choose whether to accept or reject treatments.

This is not at all comparable to corporations slurping up all data they can get their hands on for marketing purposes. Modern medicine provides enormous benefit for society. Surveillance capitalism... doesn't. Certainly not enough to justify the massive abuses being perpetrated.


I agree; however, allowing any company, regardless of its business, the opportunity to escape liability for consequences is dangerous. Pharma companies have had scandals in the past and will continue to have them, same as any other industry. There shouldn't be exceptions. Letting someone decide which industries are important enough to avoid regulation is dangerous, as it's only as reliable as whoever is in control.


I'm not saying pharmaceutical companies shouldn't be regulated. I objected to the notion they should be held liable for side effects. Those are known risks inherent to the treatments.

Widespread data collection on the other hand is totally unnecessary and should absolutely be a massive liability for any company that does it.


I'm not going to comment on Snowden's view of what liberal western states do when it comes to surveillance. I have my own opinion, but he's been right about stuff I'd disagreed with him on in the past, so I'm gun-shy about confronting his ideas again.

On the topic of unsafe languages though, he's absolutely right. We don't have to put up with this. We could pass a law and ban new code in unsafe languages from national-security-threatening devices like phones or cyberphysical devices like self-driving cars, and we would be the better for it in under a decade. We could even have taxes to give breathing room during a transitionary period to encourage it before outright banning it, but we don't. We don't because so often it is less of a hassle for the government to trust the private sector than it is to take some real position on the regulatory issues that matter. This will probably always be the case until the government is able to evaluate talent and pay salaries in accordance with that talent the way the private sector does.


As a professional software engineer I'm not sure the idea of "safe" or "unsafe" programming languages is a coherent idea, or if it is then all languages are unsafe in my eyes.

Yes, C/C++ have more footguns than Java, but there's no "hard line" in the safety differences, and there are real and important things that need doing that it's not always clear can reasonably be done in another language.

If you haven't, I'd encourage you to read the paper "Some Were Meant For C"[0] on why C still doesn't have a real replacement (though it could in the future).

[0] https://www.cl.cam.ac.uk/~srk31/research/papers/kell17some-p...


Certainly there's no boundary that clearly delineates "safe" and "unsafe" languages.

But certain languages (like C and C++, for instance) have built-in footguns. They were built in for various important reasons, but many of these reasons are not as pressing anymore. We can take something like Ada, Rust, or OCaml instead, or at least use the extensive tooling that can statically check programs written in (a rather wide subset of) C and detect a great many classical pitfalls.

So I'd say that it's not about safe languages, but mostly about safe methods of development for critical software, methods that remove large classes of defects that are commonly exploitable.

This, of course, can be only one layer of protection; the OS and the hardware design should also do their part. Advanced devices, like desktop and phone CPUs, have some very good hardware features for protection, like MMUs, w^x bits for RAM pages, secure enclaves, etc. Simpler devices (like IoT SoCs) often don't, and they can be potentially cracked into more easily.
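
As a toy illustration of the kind of classical pitfall meant above (a sketch, not taken from any particular CVE): an off-by-one write through a raw C array is silent undefined behavior, while the bounds-checked equivalent fails loudly, and most static analyzers and sanitizers will flag the raw version.

    #include <array>
    #include <cstdio>
    #include <stdexcept>

    int main() {
        // Classic footgun: the loop writes one element past the end of the
        // array, which is undefined behavior and may silently corrupt memory.
        int raw[4];
        for (int i = 0; i <= 4; ++i)    // off-by-one: i == 4 is out of bounds
            raw[i] = i;

        // The bounds-checked alternative turns the same mistake into a
        // deterministic, catchable error instead of silent corruption.
        std::array<int, 4> checked{};
        try {
            for (int i = 0; i <= 4; ++i)
                checked.at(i) = i;      // .at() throws std::out_of_range
        } catch (const std::out_of_range &) {
            std::puts("caught out-of-bounds access");
        }
    }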


Then punish defects. Give programmers the freedom to define their semantic space, but make sure to punish profit-seeking entities for insecure behavior. It's not that simple of course (it's easier for FB to pay a fine than a startup, for example), but we need to enforce the cost of defects at the organization level. I mean, there's also a world where FB apologizes about how they "forgot" to remove `unsafe` from a part of Rust code.


Definitely, liability needs to happen; sledgehammer with lawsuits any company that doesn't take security seriously.


Sledgehammer with lawsuits any company that knowingly buys uncertified software.

Like the FAA does.

Never, ever going to happen in enterprise and consumer software.


It already happens when that enterprise and consumer software is deployed in high integrity computing scenarios.


At the risk of being confrontational, show me a language without built-in footguns and we can talk. I've seen people fail to check their inputs in every language I've ever touched. I've seen people cobble together their own horrifically insecure SQL query builders. And so on.
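
For instance, the home-grown SQL query builder problem versus the boring fix; a minimal sketch using SQLite's C API, with a made-up users table purely for illustration:

    #include <sqlite3.h>
    #include <string>

    // Vulnerable: concatenating user input straight into SQL. Input such as
    // "x' OR '1'='1" changes the structure of the query (SQL injection).
    void find_user_unsafe(sqlite3 *db, const std::string &name) {
        std::string sql = "SELECT id FROM users WHERE name = '" + name + "';";
        sqlite3_exec(db, sql.c_str(), nullptr, nullptr, nullptr);
    }

    // Safer: a prepared statement keeps the query structure fixed and passes
    // the input purely as data, so it cannot be reinterpreted as SQL.
    void find_user_safe(sqlite3 *db, const std::string &name) {
        sqlite3_stmt *stmt = nullptr;
        sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?1;", -1,
                           &stmt, nullptr);
        sqlite3_bind_text(stmt, 1, name.c_str(), -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            // ... read columns with sqlite3_column_int(stmt, 0) ...
        }
        sqlite3_finalize(stmt);
    }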

Meanwhile, I have to argue with developers on a regular basis to convince them that they actually need to patch the libraries in their systems. Yes, even if they don't see an obvious exploit path. Yes, even if it's more work than drop-in-and-ship.

Getting them to use static analysis is a nightmare. Entirely too many developers view each finding as an attack on their style they can negotiate away.

As you so wisely and correctly say, we're at a point in time where we mostly have the tools to employ safer software development lifecycle methodologies. It is telling, then, how often we don't.


They certainly started off that way, but modern static analysis can catch a lot. Plus these other languages (yes, including Rust) are all still at least partly implemented in C/C++, so you can't outright ban it.


That paper has many problems. For example:

> There is no particular need to rewrite existing C code, provided the same benefit can be obtained more cheaply by alternative implementations of C

To be clear, those "alternative implementations" do not exist, and no-one actually working on C compilers or tools to make C code safer has been able to produce one, or even come up with a credible plan for producing one.


This is false. There are multiple safe C implementations; CompCert and Fail-Safe C (https://staff.aist.go.jp/y.oiwa/FailSafeC/index-en.html) are two such.


Compcert is a verified compiler. It guarantees that the generated code does what the source code requires. It doesn't turn unsafe C programs into safe ones.

"Fail-Safe C" is a research project that has been dead for ten years. Note:

> Some benchmark results show that the execution time are around 3 to 5 times of the original, natively-compiled programs, in avarage

That overhead is actually a lot higher than similar projects I also consider failures, such as CCured.

To be clear, when we talk about "an alternative implementation of C", it needs to give similar performance and other properties (e.g. function and data interop) to other C compilers. You can't impose a 3-5x slowdown and say "look, C is safe".


Actually that is what Oracle did with SPARC ADI, Google/ARM are pushing for with MTE, Apple is trying to do with PAC, Microsoft with Phonon.

When everything else fails, kill the bug with hardware memory tagging spray.


The CompCert C reference interpreter aborts execution on undefined behavior.

> To be clear, when we talk about "an alternative implementation of C", it needs to give similar performance and other properties (e.g. function and data interop) to other C compilers. You can't impose a 3-5x slowdown and say "look, C is safe".

Pretty much every language (except Rust) which claims to be "C but safe" has the same issue, except you have to rewrite your program in it.


> The CompCert C reference interpreter aborts execution on undefined behavior.

A C interpreter is not exactly the sort of thing that the embedded software industry has been waiting to deploy in production.



CHERI is far more than just an "alternative implementation of C". It includes a completely different hardware platform and replacing all your hardware can hardly be considered "cheap".


It's many orders of magnitude cheaper than replacing all your software.


You can't just adopt that hardware and get memory safety. The hardware is providing support for a capabilities model, which you then have to adopt at a software level both within your OS, compiler, and application code. This also would break C and C++ ABIs, so it's very unlikely to get adopted for a number of cases.

Further, CHERI is not enough to achieve temporal memory safety, it only provides the primitive that one could use to implement an efficient, correct tracing GC.

Perhaps most importantly, one can not use CHERI today. It is just not an option. Memory safe languages exist, and have existed for quite some time.

Pointer tagging and other approaches like CHERI are very promising. Keeping in mind of course that they double the size of pointers and have a global runtime performance cost. Less ambitious but otherwise similar mitigations are already being adopted.


Indeed, you need the whole stack. However, the (non-temporal) memory safety alone is still quite easy to get - the compiler will take care of it, as a programmer you just need to make sure your code doesn't get in the way by eg manually stashing pointers into non-pointer types. It's been demonstrated on large, real-world code bases, such as FreeBSD and PostgreSQL.

(Disclaimer: been there, done that, part of the CHERI team)


Yep, much of the wins look to be a matter of just recompiling, which is great. I'm a big fan of these pointer tagging techniques in general. I just want to be clear that pointing at CHERI and going "See? We already have memory safe C and C++" is glossing over some important details.

Best of luck with the research, I'm quite bullish on the work.


The author of that paper has written a safe C implementation.

https://github.com/stephenrkell/libcrunch


I applaud him for giving it a go, but it's far from complete ... no use-after-free protection at all yet, for example.


The line is far less blurry than you suggest. The stats from the article say 70% of vulnerabilities come from [insert list of a few memory-wrangling mistakes here]. I'd consider any language that enables the top 70% of vulnerabilities to happen to be unsafe.

Maybe a safety ranking is in order? How many CVEs from [year], weighed by severity, are impossible to happen in [language]. Obviously this idea has serious problems, but you see my point: C would get a straight up zero, as it should! (well, not completely zero, as many vulns come from dependencies and C's complete lack of a package and dependency system is actually an advantage here)


A statistical approach like that has the flaw that the most-used language would always appear at the top of the "most unsafe" list.

Instead, people should be schooled to write better code. That's it. Don't let some random new employee with no certifications write safety-critical code. Don't hire people who are underqualified. It's really that easy.

I have no idea, honestly, how you would introduce a use-after-free bug with C++'s smart pointers. It's the lack of schooling, the lack of certification, and the lack of a safety-focused selection of applicants that is the problem, not a language that you can write bad code in if you ignore all warnings and advice.

Even Rust has unsafe{}, but I don't see anyone complain when that's used to introduce safety issues, because "you're not supposed to do that, even though technically you could".

You can have all the safety mechanisms in a language, but you need control, too, and anyone who is completely unqualified will use that to break something.
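
(For readers who haven't used them, a minimal sketch of what the smart-pointer discipline described above looks like; ownership and lifetime are explicit, and the dangling-pointer risk only comes back when you escape to raw pointers. Session is an invented type for illustration.)

    #include <cstdio>
    #include <memory>

    struct Session {
        void ping() { std::puts("alive"); }
    };

    int main() {
        // unique_ptr: exactly one owner; the object is freed when the owner
        // goes out of scope, and ownership transfers are explicit moves.
        auto owner = std::make_unique<Session>();
        owner->ping();

        // shared_ptr/weak_ptr: shared ownership plus a safe "maybe gone"
        // observer. lock() yields an empty pointer instead of dangling.
        auto shared = std::make_shared<Session>();
        std::weak_ptr<Session> observer = shared;
        shared.reset();                        // last owner gone, object freed
        if (auto s = observer.lock())
            s->ping();
        else
            std::puts("object already gone");  // no use-after-free here

        // The escape hatch: owner.get() or stored references can still
        // outlive the owner, which is where use-after-free sneaks back in.
    }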


> Don't hire people who are underqualified. It's really that easy.

I understand what you're saying, but I'm not sure I agree.

For example, look at Google Chrome. They've got mountains of cash. They've got loads of people working for them, and loads of job applicants if they want more. They've got a strong business case to work on security. They've got in-house pen testers, and a bug bounty program. They've got code reviews. They can afford any static analyser on the market. They've got sandboxing. They've got open source so many eyes can spot bugs easily. They can dictate terms on requirements - if Chrome vetoes a new web standard, it's as good as dead, and if Google decides plugins have got to go, they go.

And they've got 177 CVEs so far in 2021 [1] - including such greatest hits as use-after-free, buffer overflows and out-of-bounds access.

You and I think we're writing secure C++ - but if the best-resourced team in the world can't write secure C++, isn't it more likely we're just fooling ourselves?

[1] https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=chrome


That many CVEs is largely a product of their bug bounty program: hackers are incentivized to sell vulnerabilities to Google rather than exploit them. CVEs are publicly disclosed after they are fixed, of course.


The CVEs are due to the bug reporting, the bugs are not.

The point is that memory safety bugs are the gift that keeps on giving.

You can throw as much money into the problem as you want, and the inherent complexity of memory management means you are still gonna ship bugs of all severity.


You could have made that argument a few years ago, before we started to see in-the-wild exploitation of these vulnerabilities.


You make a good point. There is a lot of evidence that it's not as easy as I make it seem.

I might be wrong about this, too, but I think a lot of smaller companies have the really bad practices. Google shines as one of the major "suppliers" of C++ tooling, and their code quality is undeniably very high (considering the complexity of some of their codebases), but a lot of popular libraries are written by small teams, often not even under contract, and you end up with a lot of PR'ed stuff, which is great, but it needs to be thoroughly reviewed.

A good example of "unqualified" PRs was that recent one with the malicious patches sent to the Linux kernel by some university. It was caught, but it does make you wonder how many vulnerabilities make it through simply because of trusting "random unqualified people" too much.


>You can have all the safety mechanisms in a language, but you need control, too, and anyone who is completely unqualified will use that to break something.

That's needless extremism, moderate programmers benefit from safety mechanisms, and you can't hire a lot of perfect programmers.


> As a professional software engineer I'm not sure the idea of "safe" or "unsafe" programming languages is a coherent idea, or if it is then all languages are unsafe in my eyes.

I agree, though that's not intractable. There's security compliance like ISO27001, which can't say "you're secure", but can say "you've got a threat model and you take measures to address it". It's then something you document and provide evidence for as part of your compliance obligations.

If we applied the same thing to a programming language I think it could be very clear to see the differences between languages.

That's not at all to say that I think ISO27001 is a good model for security, just that we already have systems that handle very nuanced ideas, so a formal definition, or proof, etc, is not really as necessary as it may seem.


I agree with this article’s criticism of managed languages (Java, C#) for systems programming, but the author does not convince me that Swift and Rust would not be much, much better to build safe(r) systems and infrastructure software.


Even within the confines of C there are coding conventions with lower BULLRUN compatibility, and some projects use them: PuTTY and s2n use them with C, and the gRPC library uses them with C++. Fat pointers have been talked about since the 80s and are cheaply implementable; meanwhile Fuchsia, an allegedly new codebase, is written in good old 70s vanilla C as if the PDP-7 were still hot.
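
(A fat pointer is just a pointer carried together with its bounds so every access can be checked; below is a minimal C++ sketch of the idea, at the cost of two machine words per pointer. std::span is a non-checking cousin of this.)

    #include <cstddef>
    #include <stdexcept>

    // A minimal fat pointer: base address plus length, with every access
    // bounds-checked. Costs two machine words instead of one.
    template <typename T>
    struct FatPtr {
        T *base;
        std::size_t len;

        T &operator[](std::size_t i) const {
            if (i >= len)
                throw std::out_of_range("fat pointer bounds check");
            return base[i];
        }
    };

    int main() {
        int buf[4] = {1, 2, 3, 4};
        FatPtr<int> p{buf, 4};
        p[3] = 42;     // in bounds, fine
        // p[4] = 0;   // would throw instead of silently corrupting memory
    }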

As for the article, the dream of getting all UB right without a systematic approach is idealism.


I advise you to update your Fuchsia knowledge, it is written in a mix of C++, Rust and Dart nowadays.


You have an awfully rosy view of how regulation of programming languages would play out. What will happen (and this is already de facto the case in many "safety-critical" industries) is that Rust is banned and everyone is forced to write straitjacketed C89 and C++03. There's already a congressman on the record complaining about a "native Nigerian" committing code to Rust.


> There's already a congressman on the record complaining about a "native Nigerian" committing code to Rust.

Wow. It comes from the Libra hearing I guess? By any chance, do you have an link to the exact quote?

Edit: as hsod answered below (I don't really get why this answer was flagged) the congressman isn't really “complaining” and it's not about a Nigerian committing to Rust, but to Libra itself.


It’s around 30 seconds of this clip https://www.c-span.org/video/?c4808083/user-clip-rust-langua...

“I went ahead and did a GitHub search on who’s actually leading the development on the Libra core side and it looks like, I think it’s gonna be international because it looks like a native Nigerian that’s actually building the actual Libra core in front of the code development of Libra core” - congressman Denver Riggleman (R-VA)

He lost his seat in 2020


It's an unsettling combination of literacy and illiteracy there.


I had never thought of passing legislation against unsafe languages. It sounds like the type of thorny legislative issue that leaves you with a half-broken system, like all the cookie notices that currently abound. Nice in spirit, impossible in practice. Regulation could be more sensible, but it's all in the enforcement. There's nothing preventing organizations from adopting safer coding languages and practices right now; a little incentive may help a bit, but I think only so far.


Would you refuse to use Postgres, the Linux kernel or SQLite because they’re all written in C?

Certainly C and C++ have more footguns than many other languages, but highly insecure as well as highly secure software gets written in all languages. I think coming up with better/easier avenues for digital-security-breach related lawsuits and fines is a better idea than banning specific languages.


Funny that you ask: Linux kernel rather actively embraces experimental code written in Rust.

And yes, for a really critical system I might consider taking something much simpler and potentially slower but formally proven correct, like seL4.

Google is working on Fuchsia as a new phone / chromebook OS, and it's very much focused on bulletproof security.


> And yes, for a really critical system I might consider taking something much simpler and potentially slower but formally proven correct, like seL4.

One of seL4's points is that security can still be fast, no?

From https://docs.sel4.systems/projects/sel4/frequently-asked-que...

>To the best of our knowledge, seL4 is the world’s fastest microkernel on the supported processors, in terms of the usual ping-pong metric: the cost of a cross-address-space message-passing (IPC) operation.


Yes.

But within the Linux kernel, you might not need that cross-process IPC at all, so if you're into squeezing every last microsecond of latency, you likely want your entire app running as a monolith in kernel mode.

But if you want security more than top speed, seL4 + a few daemons you write for other OS needs must be fine.


While Fuchsia's microkernel architecture has a lot of security benefits, the microkernel (Zircon) is still written in C++.


Related:

> Our kernel, Zircon, [is not in Rust](https://twitter.com/cpuGoogle/status/1397265884251525122). Not yet anyway. But it is in [a nice, lean subset of C++](https://fuchsia.dev/fuchsia-src/development/languages/c-cpp/...) which I consider a vast improvement over C.

https://blog.cr0.org/2021/06/a-few-thoughts-on-fuchsia-secur...


For the time being yes.

They are now starting to rewrite OS components in Rust, and I wouldn't be surprised if that happens to Zircon as well; it started as C anyway, and thus already suffered one rewrite.


s/suffered/enjoyed/

A rewrite usually improves a lot of small things that were found in the previous version. It's like a new major version, even if it does not add new major features.

It's an expensive undertaking, though.


Thanks, if you watch the Fuchsia tour session, they hint at some of the rewriting taking place.

https://www.youtube.com/watch?v=gIT1ISCioDY


> Linux kernel rather actively embraces experimental code written in Rust

False. Which broken telephone did this come from? The Linux kernel has 0% Rust code in it and has no plans to change that.


https://lore.kernel.org/lkml/20210704202756.29107-1-ojeda@ke... - A patch to include experimental rust support, with support from Linus; I believe it's in the latest mainline HEAD.


Read my post again. There is 0% Rust code in the Linux kernel at the moment, and no plans to change that.

(And no, "including experimental Rust support" isn't a plan to change that.)

You don't need Rust to compile the kernel. What's changed now is that you could theoretically write a kernel module in Rust. (But you can, e.g., write a kernel module in C++ too, I've done it before.)


As another commenter noted, there’s no Rust code currently in the Linux kernel. There may be some soon, but not much - it’ll be a dominantly C codebase for a very, very long time, maybe forever.

As for seL4, doesn't that just support my point that highly secure software can be written in any language, even C? seL4 is primarily implemented in C. The "executable spec" is written in Haskell, but the actual kernel that people use is written primarily in C and a little assembly.


Where is your legislation supposed to draw the line? In Rust you need to have occasional unsafe code - you can't even have a doubly linked list or a bidirectional graph without unsafe code. Would you also outlaw JNI calls in Java?


The way this would likely end up being handled is the same as any other compliance.

You define the threat model for your application and then justify why you're safe. In this case we'd be adding an explicit point to ensure that there are controls for attackers who can exploit memory safety issues.

You could end up with controls like:

1. We sandbox our code, so even though it's C we feel that we're safe

2. We use a memory safe language to reduce the risk of memory safety issues

3. We sanitize inputs before providing them to the program

etc etc

What ends up being accepted as legitimate would be up to the auditors and, generally, a matter of consensus.


would that imply that the state would have to review the design and implementation of every system under their jurisdiction? I think that would be a bit heavy for everyone involved.


This is already the case.


Can’t unsafe Rust (when properly isolated) exist within a larger codebase that, as a whole, can still be considered safe?


The same way you can write safe modern C++ and only use unsafe features of the language when it's necessary and isolated, but that's also considered unsafe, so Rust should be, too.


My experience with trying that in Visual C++, alongside the C++ Core Guidelines and "borrow checker" static analysers, is that there is what we wish would happen, and then there is how the code actually looks in reality.


I find that, if you enable all warnings, most as errors, and use clang-tidy and cppcheck, you end up with an incredibly powerful safe language. Now, combine that with some good patterns and avoidance of NIH syndrome, and you can get very far while keeping the huge amount of control and libraries that C++ gives.

Edit: And, of course, a lot of forced static typing and avoiding global scoped stuff. So, for example, you'd make a

    struct Meter {
        float value { 0.0f };
    };
    Meter wall_length;
instead of `float wall_length; // meters`.
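
To make the benefit concrete, a hypothetical caller (build_wall and Feet are invented here for illustration) can then no longer pass a bare float, or the wrong unit, where a Meter is expected:

    struct Feet {
        float value { 0.0f };
    };

    void build_wall(Meter length);        // hypothetical function

    void example() {
        build_wall(Meter{3.5f});          // ok
        // build_wall(3.5f);              // compile error: no conversion from float
        // build_wall(Feet{11.5f});       // compile error: wrong unit type
    }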


Even if we constrain ourselves to clang-tidy and cppcheck, and the platforms where they are supported, you can only assert that for your own code, and possibly for third-party libraries available as source code.

Then even if you find issues in those third-party libraries, there is the question of whether you are allowed to change them yourself, how those fixes get provided by the library vendor, warranties, and so on.


Instead of banning unsafe languages how about allowing for tort liability for software? There is a big difference in law between software and other engineering like building bridges. It used to be that software failures were not as severe as a collapsing bridge - but we are close to the point where it reverses.


Would you consider assembly to be unsafe? High-level software written with low-level languages like this includes:

- Decoders and encoders for video, images, and audio
- Graphics libraries
- Many parts of fast cryptographic libraries

This is typically for performance-related reasons.


If there is one thing politicians are surely better at than programmers, it is making decisions about what programming languages should be used! /s

Yes on the theme, but no on "there ought to be a law" that bans C/C++ because of an evolving goal of memory safety.

When Rust++ comes out surely there will be people complaining that Rust isn’t safe, and so on.

Best case is you make the consequence punishable (as was described in the article).


> If there is one thing politicians are surely better at than programmers, it is making decisions about what programming languages should be used! /s

Programmers aren't that good at it either.


Other industries handle this kind of thing with a layer of indirection: instead of laws from the state, the state imbues a professional body with certification power, and the body sets standards by consensus or whatever other process among their members. So if you're an architect, or lawyer, or tradesperson, or whatever, you have to do things in the manner prescribed or you lose your license.

I know this kind of thing is a lightning rod for a lot of software folks, but it does have the nice property that it's proactive; you don't have to wait around for the obviously-unsafe practice to blow up in someone's face before you can do something about it.

And no one wants to have to get licensed to write some crud app in NodeJS. But maybe these kinds of high-stakes modules related to encryption, security, and so on could be a place to start with it; if you're going to work in these areas, and offer your work to the public (for money or otherwise), then you either need to be licensed by the professional body, or clearly advertise in the header of each file that the work has not been overseen by a licensed professional.

That's the kind of arrangement that could do things like specify languages, methods, interfaces, and so on.


Anytime gov't gets involved though, you have to consider how the lobbyists will get involved. So a proponent with deep pockets that is thoroughly entrenched with a language about to be "banned" decides to hire a lobbyist to convince these "well educated in all things" gov't reps. Yet proponents of "safe" languages have no cash to spend. Which language is going to get banned then?


Yep, and what happens when the committee (perhaps correctly) chooses a proprietary language as the safest language? Anyone making essential software would end up writing code like this: https://i.imgur.com/8S6JbDS.png (LabVIEW; I haven't used it in 10 years, but I'm pretty sure it is 100% memory safe)

Don't worry, the pricing will be FRAND (fair, reasonable, and non-discriminatory). You just won't be able to see the compiler and runtime sources or follow stack traces into stdlib code. For security reasons. Defense in depth, y'all.


But the point of this is that it isn't the "government involved." As with other professional pursuits, the government says "we don't have the desire, expertise, or agility needed to make these decisions in-house, so we designate this organization to be our regulator of it."

For example, here's the Professional Engineers Act in Ontario: https://www.ontario.ca/laws/regulation/900941

None of it has anything to do with which tools or methods are used for the practice of engineering— it's all defining jurisdiction and constitutional meta-details about how the leadership is to be selected, term lengths, etc.

So no one can achieve regulatory capture about what kind of concrete is to be used in bridges in Ontario by lobbying the Ontario government; you'd have to lobby the PEO. Maybe your argument is that they're effectively equivalent, but they're really not— the PEO is a pretty different kind of organization from just another government office.


>"we don't have the desire, expertise, or agility needed to make these decisions in-house, so we designate this organization to be our regulator of it."

Yes, because this has stopped them doing dumb things before. The gov't will still be in charge of selecting the people in the regulator. The lobbyists will still have influence. There's just no way around it.


Can you please supply examples from law, medicine, trades, engineering, etc where you feel that lobbying of the government has led to the kind of outcomes you're envisioning?


Texas' ERCOT. It's a government appointed regulation body, not a branch of gov't.

Are you really doubting that this is something that "might" happen?

Edit: FAA allowing Boeing to self-certify the 737 MAX. FDA allowing/not allowing trials of drugs, or allowing a drug meant for one thing to be tried for something else totally untested (ex: AZT).


I'm not doubting that it might happen and may even actually happen in some cases. I'm asserting that society and government is built on compromises and a principle of taking the path of least harm. We shouldn't avoid a 90% solution because we can imagine possible flaws in it, particularly if we have close analogues already deployed in the real world and those flaws don't largely seem to manifest.

I don't know that bodies like FAA and ERCOT are really comparable to professional associations; the market conditions make them especially vulnerable to corruption because they both "oversee" such a small number of large players, so you end up with a revolving door. In any case, these are also odd examples to bring up, because in both cases their failure was not about market capture, it was about failure to protect the public. So it seems your argument is amounting to "the regulation provided by the FAA isn't perfect, so it shouldn't exist, just like regulation for security-critical software shouldn't exist."

I specifically asked for examples from "law, medicine, trades, engineering", because those are cases that I feel are much more aligned to what it would be with a professional body overseeing practices for secure software development— they're cases where you have a large number of mostly small-time practitioners, and where the professional oversight mechanism is working in terms of enforcing safe and consistent practices, while also evolving those over time in response to changing conditions.


>the regulation provided by the FAA isn't perfect, so it shouldn't exist

This is far from what I'm suggesting. I'm just saying that if it is a gov't regulated anything, those regulations will incur wacky decision making due to the influence of outside money. Nobody likes to be regulated against, and if they are in the position to do so, they will use any mechanism available to them to keep the status quo.

I'm also suggesting that any gov't regulation body is not always the panacea people may be dreaming it will be. Anytime a regulation body is proposed, I don't have rose colored glasses. I'd rather be pleasantly surprised that something turns out to be a good thing than having high expectations crushed.


But despite your cynicism about "any gov't regulation body", you would agree that professional associations have by and large been a success in protecting the public in areas like construction trades, engineering, law, and medicine?


Not necessarily. Recently, the Miami condo collapse. In the not-too-distant past, the London building fire. I'm sure if we were to go looking, we could find more examples. Are these edge cases?

I have less experience with other trades, but you did call out construction separately. My family comes from construction backgrounds at various levels. The 80s in the US saw a boom in 20-story building construction, and then a total collapse (no pun intended) in the construction industry. There are lots and lots of building contracts won by the lowest bidder, and the only way to do that is cutting corners somewhere. Usually in quality of material, or reducing the "over-engineered" portions to the point of risking safety, etc.

It's also widely known in construction that the permitting offices can be gamed. Talk to the right people with the right phrases. NYC is infamous in that people playing by the rules get absolutely nowhere. You have to start spending cash and using influence to get things done. Playing it straight is pointless next to being corrupt.


Interesting. I would see the occasional failure as actually a further indication that the system is working— that it's maintaining a balance wherein most practice is within the bounds of what is safe, but it's not NASA-level lockdown where there's massive waste due to unnecessary redundancy and safety factors.


Green Hills with its decades of DoD contracts versus the left wing browser company. Gee I wonder who would win?


Safer languages like Rust are awesome but cannot solve all of these problems.

Code reflects the programmer's understanding of the world. If this understanding is flawed, the logic will also be flawed.
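
A contrived sketch of that point (hypothetical function and field names, not taken from any real codebase): the snippet below is entirely safe Rust and compiles cleanly, yet the authorization logic is wrong because the author's model of the rule was wrong.

    fn can_delete(is_owner: bool, is_admin: bool, record_locked: bool) -> bool {
        // Intended rule: owners or admins may delete, but never a locked record.
        // `&&` binds tighter than `||`, so owners can still delete locked records.
        is_owner || is_admin && !record_locked
    }

    fn main() {
        // Memory safety is preserved; the policy is still violated.
        assert!(can_delete(true, false, true)); // passes: owner deletes a locked record
    }

No memory-safe language catches that class of bug; only review, testing, or formal specification does.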


Even if we could clearly differentiate between "safe" and "unsafe" languages, what use cases and devices go on the list?

Would phones really be that high up given that in environments with high security standards usually people are not allowed to carry them? What about home appliances? Could a state actor hack a bunch of stoves and burn the houses down?

I can easily see some huge bureaucracy being put in place without much benefit ("Federal Programming Language Commission"?).

Most high-security environments have a physical component, so why not learn from there? Phones could just have a physical switch to cut off microphones and antennae, for example - much easier to do than trying to police millions of lines of code to be secure.


"Why KeyKOS is fascinating" - https://github.com/void4/notes/issues/41


C and C++ are really versatile, so why are there no safe libraries for them (or are there)? If you want to make buffer overflows impossible, then add a library that does so and use it. If you want safer memory handling, use a library. Or extend C and C++ with standard libraries that are safer to use. Java is implemented in C, so anything Java can do, C can do. Many languages are actually implemented in C.

What is this fascination with creating a new language every time we want to add some features? Is it the fame of creating and naming your own computer language? From a security perspective creating a new language reduces security because it will never have been as thoroughly vetted as the old languages.


Because safety requires that the language prevent you from doing things and libraries don't have much ability to do that.

Example: C++ shared_ptr gives you reference counting which helps prevent use-after-free errors. However, shared_ptr can't stop you getting a raw pointer or reference to the underlying object, which can then be used to trigger a use-after-free bug.
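
For contrast, a minimal Rust sketch of the same escape attempt (not a claim about how any particular codebase is written): borrowing the object behind a reference-counted pointer and then destroying the pointer while the borrow is still live is rejected at compile time, which is exactly the kind of restriction a C++ library cannot impose on its callers. The snippet is intentionally one that does not compile; the refusal is the point.

    use std::rc::Rc;

    fn main() {
        let shared = Rc::new(String::from("payload"));
        let inner: &String = &*shared; // borrow the underlying object
        drop(shared);                  // compile error: cannot move out of `shared`
                                       // because it is borrowed
        println!("{}", inner);         // the would-be use-after-free never builds
    }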


Saying that Java is safe, therefore you can write safe C, is not actually sensible. The JVM is a virtual machine that runs users' programs, and those programs are in a memory-safe language. That doesn't mean the JVM is memory safe... at all.

It's like saying "llvm is in C++, and llvm is used to compile Rust, therefore C++ is as safe as Rust". It's just not a tenable argument.

As for the rest of your post, sure, you can create a new revision of C/C++ that's memory safe - feel free to do so. It'll basically be a new language, but that's fine and has nothing to do with the parent's point, which is that we should incentivize memory safety - they didn't say it had to be a new language.

As for why people don't take that approach, it's because: it'll be a massive breaking change, so it's really no different from a completely new language. If you're going to create a completely new language you may as well take the time to fix a bunch of other issues while you're there.

edit: The above applies to libraries too. There are libraries that enforce stronger security in C++ (I want to say... IronC++? Something like that), and it basically ends up being, give or take, the cognitive overhead of a new language, but with the added bonus of no one actually using it.

> From a security perspective creating a new language reduces security because it will never have been as thoroughly vetted as the old languages.

This is unsubstantiated, and I highly doubt it is the case.


You can actually make a safe C implementation without creating a whole new language, see my other post https://news.ycombinator.com/item?id=27969373


What he is referencing is what I call “artificial complexity”, a way for making solving problems ludicrously wasteful in man-hours. This is primarily accomplished via limiting remixes of known technology - think for example of how easy it is to add types to Lisp and yet it took decades to be attempted publicly.

It is the job of human intelligence to cause 'discontention' - discontent and contention. I like to think of it as how you can make powerful gears almost seize if you understand their weaknesses properly.


> We could pass a law and ban new code in unsafe languages

I can tell you exactly how this will end up: like PCI DSS.


> We could pass a law and ban new code in unsafe languages from national security threatening devices

Countries already do this. But they also put exemption clauses in the policies.

Ban exemptions (in all policies) first if you want to make progress.

But you won't like it.


This is unnecessary regulation and not a good idea. Would that also apply to runtimes, interpreters and compilers?

Liability should be created where when you expose third party data. That would disincentivise data collection massively.


I don't think you'd have to ban unsafe languages but you could pass a law that slowly decreased the amount of money the government could spend on 'unsafe' software (either through direct licensing or renting through clouds). The government is such a huge client to these companies that it'd immediately create a large financial incentive to migrate.


I think this is probably one of the better, more practical ideas that I've seen. If the government has to consider whether the underlying technology has adequately addressed memory safety issues (doesn't have to be at a language level, but that's obviously the easiest way), that puts pressure on them to fund projects that use memory safe approaches.

That's billions of dollars that will get slowly steered in the right direction.


And naturally the ones to decide which software is unsafe will be the lobbyists from the most powerful tech companies like Microsoft or Google.


Not really, no.


Are there any compelling reasons why it should be legal to sell exploits to anyone other than the company whose software is vulnerable? By neatly bundling these exploits up and selling the hacking tool to the highest bidder, this company is giving nation-state spying capabilities to cartels and dictators who would otherwise not have had that. I don't see why that shouldn't be regulated the same way as if they were selling nukes.


It's even more insane when you remember that strong encryption was regulated as munitions until the mid 90s [1,2] - and actual cyber-weapons aren't today.

Software that can cause real damage to not just data and business activities, but endanger lives is completely legal to sell to despotic lunatics. If you sold a bag of fertilizer to a Syrian you'd go straight to Guantanamo, but somehow this is okay.

[1] https://en.wikipedia.org/wiki/Export_of_cryptography_from_th...

[2] https://web.archive.org/web/20051201184530/http://www.cyberl...


Iirc the software sold by the NSO Group is considered a weapon under Israeli law, so it is somewhat regulated. Not as heavily as nukes though.

This raises the question: is there a compelling reason why selling weapons to other states should be legal?


In Germany, our economy pretty much depends on weapons and cars, so that.

But apart from economic reasons, not really.


Laws are made by the government of the country in question and in this case the government, Israel, approved the sales and used them to its advantage.


LEO and TLA need vendors with constant supply of exploits.


I tend to bang on about software not as engineering but as literacy. It makes some sense even here - that bad code is as common as bad law - and often for the same reasons: politics, money, and hard questions.

"Engineering" is a wide subject - the big stuff is carefully built and highly regulated - bridges and buildings. But as we go down the scale we see engineering give way to the problems of politics and money - tower blocks collapse for example, and then we see human level engineering - factory tools that try to meet inflicting goals, dangerous toys and so much more.

The software world should not beat itself up for not being like all those engineers - when lives are not on the line engineers get tied up just the same as the rest. And when lives are on the line, software and hardware engineering have learnt a few things - reduce the scope to the barest possible essentials - have a lot of redundancy and stick to well known designs.


Also, traditional engineers design things to withstand conditions that they would reasonably face in ordinary use, with some additional safety factor. They don't design them to withstand deliberate attacks by nation-level actors like we're seeing here.

If a car explodes because it got hit by an artillery shell, would anyone hold the automotive engineers responsible? If a building collapses because a bomb was dropped on it, would anyone hold the civil engineers responsible?


This is an interesting point.

I think the difference here is that no car could be engineered to withstand an artillery shell, whereas we can imagine an iPhone not susceptible to this particular vulnerability (it already exists).

Perhaps one argument is that the space of _potential_ vulnerabilities in something as complex as an iPhone is so huge that it just isn't feasible to create one that can withstand all network-based attacks (in which the attacker hasn't obtained user consent, Apple's signing keys, etc)? However I'm not sure if I buy that argument, or not...


That depends really. In the US, nuclear power plants are supposed to be able to withstand a certain amount of damage, specifically a direct impact of a certain sized plane (even before 9/11).

So, like all things, it's a bit of a matter of perspective.


In many cases, these are public or quasi-public works, and security is a cooperative venture between the engineers who develop vulnerable structures and the state. Rather than building bridges to withstand artillery strikes, the nation itself implements national anti-missile defense making it quite difficult to launch an artillery strike, and if one gets through, they strike back.

I'm not really sure how to analogize this with software. The reality is some communications networks were just never meant to be secure. This isn't unique to the Internet. Nothing ever stopped anyone from tapping your phone and stealing your personal information that way except that it is illegal. On the other hand, a whole lot of technical measures are in place to make sure it is very difficult and maybe impossible to "tap" a military or classified communications network at all. Nobody can stop you from intercepting radio, but good luck breaking the encryption.

But the national security infrastructure can't extend that level of protection to everyone, just as average citizens don't get police escorts and personal bodyguards assigned from the secret service. If someone wants to shoot you, the only thing the state does to stop them is make it illegal. Otherwise, it's on you to protect yourself, and we don't hold clothing manufacturers liable for not making your t-shirt bulletproof.


I hope it's clear that there's a major difference between a car being hit by an artillery shell (extremely rare) and a nation state attacker exploiting software (extremely common).


I doubt "real" engineering is better. It's just that attacking its artifacts doesn't scale, so it looks more secure. In fact, the average bridge or skyscraper is probably absolutely riddled with serious design and manufacturing flaws


I'm not a civil engineer, but I know some, and that doesn't ring true. I know, for example, that with regard to the NYC subway post-9/11 there was rigorous work to plan and implement safety measures against various terrorist threats. That included modeling the impact of various types of bombs in different parts of NYC's subway tunnels, etc.

This work was mandated for projects that were, in aggregate, likely billions of dollars.


Attacking a cell phone is different in some ways from attacking a subway system though.

The cell phone's defenses are effectively 100% automated, and the attacker can buy a perfect copy of their intended target's phone and safely try out an indefinite number of attacks on this copy until the attack is perfected. Once the attack is ready it can be deployed with very little exposure for the attackers. If the attack doesn't work, the attackers are unlikely to be discovered and can make another attempt.

Attacking a subway system in comparison, the planning will be done based on diagrams and theory, the attackers only get one attempt, many of the defenses will be unknown, the target environment is highly unpredictable, includes human defenders, and even if successful the attackers will likely either die or spend the rest of their lives imprisoned.


If your building or structure is a public hazard, under existing legislation the government (at least in the UK) has the right to remove the hazard, up to and including demolishing your building, if you don't make it safe sufficiently fast.

I'm not a lawyer, but I don't see any language in the law that would exclude unsafe automated systems which are part of a building or structure.


This is a key point. There’s no equivalent enforcement mechanism when it comes to software. It would be a very different world if one of thousands of local jurisdictions could at least theoretically issue a binding demolition order, within their boundaries of course.


The design process is pretty rigorous. Anything to back up your idea?


Not necessarily agreeing but reminded of the Harvard Bridge in Boston.

I heard that though it was often called the MIT bridge (because it connects MIT campus to Boston), they felt fine with the name “Harvard” after learning it was structurally unsound.

https://en.wikipedia.org/wiki/Harvard_Bridge


Smoots would have been ashamed, smh....


Challenger, Columbia (safety culture at NASA really improved after Challenger huh), Chernobyl, Three Mile Island, Fukushima, recent FL condo collapse, my average five story Brooklyn apartment building's basement and roof after a thunderstorm, Boeing's long and growing list of recent fuckups... is some stuff that comes to mind off the top of my head. Makes you wonder how many close calls, less major incidents, cracks that are noticed and repaired in time, buildings that happen to be demolished twenty years before collapsing to make way for redevelopment, etc we don't even hear about, no?

What about you - anything to back up your claim?


Every building that has never collapsed on you


It feels like we've gone full circle back to 1980, when the US Department of Defense commissioned a 'safe' embedded language.

Jean Ichbiah's team won this contract with the language 'Green' in 1979. They subsequently went on to further develop this and standardize it in what was then the Ada 83 language standard.

The more I think about it, the more it feels that Ada came about to solve the right problem, but at the wrong time.


Unisys still sells Burroughs to customers that care about top level security.

https://en.wikipedia.org/wiki/Burroughs_large_systems

ESPOL/NEWP were the very first system programming languages to have UNSAFE code blocks, 10 years before C was even an idea.

Before that there was JOVIAL as well, https://en.wikipedia.org/wiki/JOVIAL


Hasn't that morphed into some sort of emulation nowadays?

Like almost COTS (commercial off-the-shelf) Xeons, running some hypervisor with the 'legacy' within? Similar to what Symbolics did with Genera for Alpha?

edit:

[1] https://microsites.unisys.com/offerings/clearpath-forward

[2] https://docs.microsoft.com/en-us/azure/architecture/example-...

Azure?! Err...sure...


First of all it depends on the customer deployment scenario.

Secondly, having an issue with number 2 cloud provider at world scale?

Or would you rather have AWS?


That's not what I meant to say. It doesn't matter which cloud provider at all, if the product running on it is an emulation. All the assurances you spoke of regarding early Burroughs Large Systems resulted from hardware/software co-design. The hardware does not exist anymore. They ported some abstraction layer by whichever means to contemporary platforms, and now top that by making it available to the cloud, saying: 'Diz iz good! We king of the mainframe hill, Ugga Ugga!'

I wouldn't trust that. The same way I wouldn't trust that from another, very established competing vendor, saying the same things, having the same nimbus of security and reliability.

Because sometimes there are blips on my radar, wherein some conference talk pops up, and people looked under the rugs of these systems, not even hard, just casually, and discovered some oopsies.

So it seems like that nimbus of security and reliability is more a result of very careful isolation from public networks on one hand, and inaccessibility for the bored and curious pranksters of the world on the other. But it is not absolute.

Especially if it's running in emulation on contemporary COTS hardware. (In the EFFING cloud!)

Just my not so humble opinion, though.


I find Snowden's take on software engineering to be poorly reasoned and internally inconsistent.

The most important fact related to his argument is one he doesn't even bother mentioning, namely that Android is mostly written in a memory safe language (Java). The thing he's asking for already exists and is deployed on most smartphones worldwide, yet all he has to say on the topic is this:

"While iPhones are more private by default and, occasionally, better-engineered from a security perspective than Google’s Android"

You cannot claim to be a spokesman for freedom, then demand memory-safe languages be used everywhere, and then praise the one system controlled exclusively by a single American firm that's written almost entirely in (Objective) C, a memory unsafe language.

That sentence is the only mention of Android in the entire article, the word "Java" doesn't appear anywhere and he seems to think that Rust is the only memory safe language in existence. Why should I care about this guy's opinions? Java has been drastically more successful than Rust when it comes to making software memory safe. Nobody is gonna choose to write the next AirBNB in Rust other than for fashion reasons, because it'd simply be too unproductive. Developers already complain about Swift and its horrible compile times, Rust would be even worse.

If Snowden really cares about this topic, he should brush up his Java skills, download some OpenJDK early access builds and start experimenting with writing video codecs and 3D engines using the new vectorization, memory span and value types features. Java is getting the capabilities to do even higher performance work traditionally dominated by C++, but in ways that preserve memory safety. The engineering is very difficult and it's unclear if Google will ever adopt it into ART, but it's there for them if they want it.


Which part of Android is written in java? The kernel? The drivers? The JVM ? I somehow doubt anything significant of the OS itself is written in java :)


Which parts? I'm surprised this isn't well known already. Large parts of Android are written in Java. Amongst other things:

* All the UI libraries and networking API code.

* All the system apps and services like the home screen, the keyboard, the system server, the window manager, the telephony subsystem (very important!) and so on.

* Many of the system APIs including services like the alarm manager, dropbox manager, some cryptography services, location services etc.

* The entire developer toolchain: the build system and the IDE.

* All the client logic for Google Play Services, which is a big part of the overall Android API now.

* Large parts of the compatibility test suite.

* The standard libraries for all the above.

Just browse through the code and see for yourself: https://cs.android.com/android/platform/superproject

Most of the Java code is under the frameworks directory. No, not all Android code is written in Java. Lots of devs want to be able to use C++, and for some things its necessary. The point of the upgrade projects I just mentioned is that they are tackling the remaining cases where C++ can outcompete Java for things like media codecs. And the Java world has developed a JVM written in Java, actually several of them, so whilst Google hasn't done it, the tech is actually there. All this exists today, whereas an entire widely adopted consumer OS written fully in Rust is only a pipe dream.

Oh, and as for drivers, Google moved drivers out of the kernel and into user space some time ago (Project Treble):

https://source.android.com/devices/architecture

You can in fact now write drivers in Java. Google recommend you don't, but it's architecturally possible:

https://source.android.com/devices/architecture/hidl-java


I guess it all depends where you make the "OS" stop :)

I'd say the kernel, the drivers and the JVM are already enough to say that Android is not "mostly written in Java". And that's not even mentioning whatever shit operating system is running on the modem.


Apparently the idea that Android is a Linux distribution is hard to dispel, because most would rather read some random blog posts patting themselves on the back about a "Linux victory" than actually read what is going on in AOSP and Gerrit PRs.


Android is Linux because it uses the Linux kernel.

It is not GNU/Linux, which is what people think about when talking about Linux distributions.

Richard Stallman is known to throw a tantrum every time someone omits the "GNU" part but now, with Android, he has a point.


Except for the little detail that pisses off the Termux guys: neither the Linux kernel nor POSIX compliance is part of the public Android userspace APIs.

Targeting Android/Linux, if you so wish, is no different from targeting Windows, from a GNU/Linux point of view.


[flagged]


Personal attacks will get you banned here, regardless of how wrong someone else is or you feel they are. No more of this, please.

https://news.ycombinator.com/newsguidelines.html

Edit: you've unfortunately been doing this repeatedly in your recent comments:

https://news.ycombinator.com/item?id=27891065

https://news.ycombinator.com/item?id=27890950

This is easily grounds for banning you. I don't want to ban you right now, because you've posted good things in the past—but if you break the site guidelines like this again, we will have to, so please fix this. If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site to heart, we'd be grateful.


Loved seeing language safety features called out. How can we prevent monoculture yet retain interoperability and profitability? Are these inherently at odds?


Diversity can help improve interoperability. If everyone sticks to only the public APIs and protocols, you can cross-run unit tests and make sure that every implementation passes all of the unit tests of all the other implementations and yields the same results; the only difference should be in performance.

Like farming, monocultures seem profitable, until the pests show up, and can swamp out all of the advantages.


> The greatest danger to national security has become the companies that claim to protect it

No. The greatest danger is lack of software supply chain management followed by near-universal disrespect for formal complexity management methods.

The only way I have found to win at this "are we actually secure" game is to minimize the number of parties you have to trust. The smaller you get this figure, the easier it becomes to gain control over your circumstances again. How many of us can immediately state the exact number of unique parties that we have to trust as part of building solutions for other people?

What about complexity? Most of the time, something is insecure not because of malicious intent (covered by the trust angle above), but because it's so goddamn complex that no one can say for sure whether it's correct or not. Why do we tolerate this?


There is no supply-chain accountability without liability.

When adding a dependency, knowing it exposes you to prosecution if it becomes a vector for security violation would give pause. A premium on attested-secure components might develop. If Facebook depended on Zst being secure to be able to stay in business, we might be more inclined to use Zst than e.g. unmaintained Zlib.


Coming from embedded/avionics, it's shocking how careless and cavalier the rest of the development industry is about their dependencies. At most places, outside large FAANGs that bake external dependency reviews into their process, the process is:

1. Developer has a problem to solve

2. Developer does a few web searches and finds libXyz, which looks like it solves the problem.

3. Developer tries libXyz and it solves the problem.

4. Yolo! A libXyz dependency is added, and now it's part of the product that the company stakes its name and reputation on.

Total madness. What else does libXyz do? What data does it collect? What does it do with that data? What bugs does libXyz have, and do they put our product at risk? What about security risks? Does libXyz have tests, and do they pass? How extensive is its test coverage, and would that be sufficient at our company? Does libXyz impact the overall performance of our product? What is its maximum memory footprint? Does libXyz limit our product to a particular architecture? What is libXyz's license, and is it compatible with our product? Who is responsible if libXyz fails and our company gets sued?

You're lucky if the developer even thinks about one or two of these, let alone fully audits the library. We're staking products on the suitability of libraries without actually vetting them thoroughly. And some products out there have hundreds of dependencies, all added in the manner described above. We're just handing out loaded guns.


> A premium on attested-secure components might develop

Isn't this effectively the Microsoft/IBM/Oracle ecosystem play?


The difference is that they purport to sell you assurance (and lie), whereas aping FB would leave you no more exposed than deep-pocketed (therefore attractive-target) FB.


I was all ready to post a reply... when I was hit by the darkest pattern of all... you must pay to do so, with no hint prior to that moment.


Substack replies are paid? Hmm.

Thinking about this right now: annoying sure but why do you consider this pattern dark? I bet it reduces spam and trolling by orders of magnitude unlike say confusing cookie dialogs designed to make you surrender all your private info.


Seems like it would lead to marginally better discussions too. Or at least, very very heated ones.


The dark pattern is that you don’t know it beforehand, not the paywall itself.


You know before typing your comment though. The real dark pattern would be if it had let you type and then ask for money to submit your comment.


You do have to start typing before it informs you, which means you already had a reply in mind. I agree that's pretty bad.


> Fixing the hardware, which is to say surgically removing the two or three tiny microphones hidden inside, is only the first step of an arduous process, and yet even after days of these DIY security improvements, my smartphone will remain the most dangerous item I possess.

What's the purpose of these microphones? Do they pose more threat than the standard non-hidden microphone?


I think he's referring to the standard, non-hidden microphones.


These are the "standard microphones" of any modern smartphone. One microphone at the bottom, for calls, and 1 or 2 at the top, for stereo sound while recording videos and noise cancelling while doing call


I believe he's referring to the various MEMS sensors (accelerometer, gyroscope, etc) which can be repurposed into microphones (much like harddrives).


> it is still hard for many people to accept that something that feels good may not in fact be good

This strikes me as surprising. I have always been taught the opposite: if it feels good, it's probably bad for you, or illegal, or immoral, or all three.


I'd take out the immoral, as it feels puritan. Also, there are nice things that are good, legal and moral, like stretching after a good night sleep, a professional massage, ad blocking, etc.

Maybe I'm being too literal here.


I love it that you included ad blocking as moral. Absolutely agreed.


Yeah, I'm sure there are some that would call ad blocking immoral, as you're "trying to get something for free", blah blah.

As puritan as it feels, a large part of the population likes it that way and want it even more strict.


> if it feels good, it's probably bad for you, or illegal, or immoral, or all three.

Pretty much. The only exceptions I’ve found are exercising and saunas.


Creating art can feel pretty good. Most art is legal these days.


Interesting, when I read that line I thought the exact opposite. It sounds exactly like how I feel about the people around me.


I definitely agree with Snowden's call to action to make spyware illegal; I think the execution is plausible but unlikely, for the same reasons he stated in the article. Every country is working to produce these things itself for cyber-security self-defense.

I am unsure about his comments regarding unsafe code. Like many people have already stated here, that's a blurry line with good intentions that seems nearly impossible to draw. I think more regulation and certification for both employees and companies is much more likely to be successful.


I thought this was going to be about Instagram making money by making people, especially women, feel worse about themselves. Turns out there are multiple insecurity industries operating today...


The problem is bigger than hardware and software being "insecure"; ultimately it comes down to trust in these corporations. Do we trust that Apple, Google and so on will do the right thing? If not, maybe we can construct new (maybe public) companies which are accountable for our privacy. Or hold existing companies accountable. It's going to need a huge swing of power towards ordinary people though.


Do people really think any of this is ever going to change? Governments don't give a shit; time and again they say they won't do something, only to later do it in secret.

We should at this point accept that this isn't going to change and move ahead with the belief that your data is already hacked and is not private anymore. We should discuss more on the exact consequences of this and take actions accordingly.


So no point in ever doing anything? Just give up?

A mass movement of people can force a government to change.


Do people stop committing crimes because laws have made them illegal? I am saying there is no point trying to stop what is inevitable; instead, our energies are better spent trying to understand how best to navigate an online world without any privacy.

Right now many Western nations are involved with the likes of Pegasus; what happens when China gets involved? We will have absolutely no control over what China does, especially when no country will publicly acknowledge that it is spying on people.


I think we already know how to build secure tech; we just don't, because of the cost, which is not 2x or 10x but so much higher that it is deemed not worthwhile or feasible for both private companies and governments. So it affects everyone, and we are not getting out of it anytime soon (if anything, sometimes we go in the opposite direction, chasing money and calling it innovation). Heck, I think it would take a book just to explain all the components of this argument, let alone do something meaningful about it. Safe/unsafe languages is just one piece of this thing, and frankly a quite easy one to wrap one's head around: we already have so much stuff we use everywhere that was written when memory safety was not really cared for, and no one wants to rewrite, catch up with, or surpass it (there you go: because of the cost). So we get on with it, because meanwhile life goes on, bread needs to be put on the table, people want to play with their shiny phones and whatnot, etc.


Off topic: I counted no less than five different subscribe buttons. Is that the point of diminishing returns?


Is Python a safe language? It doesn’t seem to have the sorts of problems other languages do. Why is that?


Absolutely not, unfortunately. One of the architectural issues plaguing even comparatively memory-safe languages is the fact that there is a global scope that's accessible from anywhere. In some Python versions even numbers or truth values could be redefined. [0]

This makes it impossible to sandbox functions or imported modules, because they can communicate arbitrarily. But communication/access security is not the only problem. Resource security is something I haven't seen any significant language (besides Java perhaps) try to approach - being able to limit the memory and cpu usage of a (part of a) program. Modern languages should make it possible to do both with just a few lines of code.

I recommend reading about Capability Security[1], the E language[2] and the Principle of least Authority (POLA) [3]

[0] https://hforsten.com/redefining-the-number-2-in-python.html

[1] http://www.cap-lore.com/CapTheory/

https://en.m.wikipedia.org/wiki/Capability-based_security

[2] http://www.erights.org/

[3] https://medium.com/agoric/pola-would-have-prevented-the-even...

These originated in the mainframe era, in a connected world before the internet, at the inception of the first multi-user systems: https://github.com/void4/notes/issues/41


CPython is not even memory-safe. The compiled bytecode is accessible to ordinary Python code able to replace it with 'incorrect' bytecode that causes out-of-bounds memory access by the C interpreter.

(Presumably an unusual thing to do -- I've seen libraries doing bytecode hacks but I'm not sure how popular any of them are.)


The compiled byte code in the file system is accessible and reversible, yes. But not once the interpreter has imported it.

A subsequent import could patch the Python object namespace, but that’s normal and intended.

If a program has access to the bytecode files, it presumably has access to the actual .py files, and can easily change them to do whatever mayhem it wants.


You totally can create bytecode and jump into it from Python, no .py files needed. See https://codewords.recurse.com/issues/seven/dragon-taming-wit... where types.CodeType gets invoked.


I find the idea of resource security very interesting - I haven’t heard of it before.

Isn’t that the opposite of secure? Pulling resource allocations out of the kernel and putting it into user land?


Not if done correctly. Have a look at this link: https://github.com/void4/notes/issues/41

There is no issue with just limiting resources (unless there is unpredictable overhead). It doesn't have to be hardware resources either, it could be abstract/higher level resources like interpreter steps or managed memory slices.

I'm creating a series of VMs to show that this is possible, like rarVM, the recursively sandboxable virtual machine: https://esolangs.org/wiki/RarVM

Showcase: https://www.youtube.com/watch?v=MBymOp6bTII

When calling a function you can specify how many interpreter steps it can run until it aborts (and optionally gives you a continuation so you can "refill" and resume it later).

Stackless Python can do this too, but unfortunately due to the reasons discussed above will never be a safe language, also this specific mechanism works only in trusted environments since the called function has the ambient authority to increase its own resource limits: https://stackless.readthedocs.io/en/2.7-slp/library/stackles...
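
A minimal sketch of the step-budget idea, in Rust (a toy, unrelated to RarVM's or Stackless Python's actual implementations): the loop burns one unit of fuel per instruction and, instead of aborting, hands back enough state for the caller to refill and resume.

    enum Outcome {
        Done(i64),
        OutOfFuel { resume_at: usize, acc: i64 },
    }

    // Toy "program": each instruction adds a constant to an accumulator.
    fn run(program: &[i64], start: usize, mut acc: i64, mut fuel: u64) -> Outcome {
        for (i, add) in program.iter().enumerate().skip(start) {
            if fuel == 0 {
                // Suspend rather than abort; the caller decides whether to continue.
                return Outcome::OutOfFuel { resume_at: i, acc };
            }
            fuel -= 1;
            acc += *add;
        }
        Outcome::Done(acc)
    }

    fn main() {
        let program = [1, 2, 3, 4, 5];
        // The first call runs out of fuel after three instructions...
        if let Outcome::OutOfFuel { resume_at, acc } = run(&program, 0, 0, 3) {
            // ...so refill and resume from the saved position.
            if let Outcome::Done(total) = run(&program, resume_at, acc, 100) {
                println!("total = {}", total); // prints 15
            }
        }
    }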


lol … this is the first time I’ve heard some one mention the E language outside the context of “inspiration for Scala.”

This is a very interesting comment, and I’m going to study it and the links you gave. Thank you!


Most interpreted languages are "safer" in this regard as they don't leave memory management up to their users. The interpreter itself could still have such bugs, but that's a comparably much smaller attack surface and has far more eyes on it than any random person's code, so these are less frequent.

And it's not just fully interpreted languages - any language that manages memory for you and doesn't let you mess with it yourself (like Java, C#...) will be safer, assuming the compiler and runtime are secure.


There are at least two things that could be contributing to this perception:

1. Python is (usually) an interpreted language, and it's probably true to say that interpreted languages tend to have a lower attack surface (at the cost of lowered performance).

2. While Python is very popular in certain domains (numerical computing, ML etc), there are few low-level systems that are written in Python.


Great article that I am sharing with non-tech friends and family.

The most powerful point is to stop using unsafe programming languages. I usually use Common Lisp (for almost 40 years now), but I have been experimenting a lot with Swift over the last year. I have little experience with Rust.

I have seen comments that Swift and/or Rust are not appropriate for operating systems and general systems programming, but I call bullshit on that. The problem is a financial one: it is expensive to hire hard-to-find Rust and Swift developers to rewrite billions of lines of code. But it should be done. I pay an incredible amount of money to Apple every year and I expect it from them. Same for Microsoft customers (and Google).


Everyone in these comments mentions C and C++ as a single word, which frankly has nothing to do with reality. C++ is a FAR safer language than C, if you wish to use it properly (which many don't). I write C++ full time at work and I also use it a lot in my free time, and I rarely if ever have out-of-bounds accesses, use-after-free or any of these bullshit errors. If you use modern C++ and AddressSanitizer and don't do anything too "fancy" then it's very hard to get those types of bugs. I firmly associate them with C and C alone. Those times are over for modern C++. The people commenting here clearly have no idea what modern C++ even looks like.


The problem for modern C++ is that few (or zero?) code bases contain exclusively "modern C++" when you include all transitive dependencies. You end up using a huge amount of C and "legacy C++" just by virtue of reusing code.


This is true for Rust as well though.


Because even companies like Microsoft and Google don't write modern C++, you can easily validate this on their public repositories.

See how much modern C++ you will find in AOSP and NDK, WinUI or XBox code samples.

Modern C++ exists only at talks in CppCon, C++Now.


C++ can be safer, but that does not mean it is safe. The fact that you have out-of-bounds accesses at all, even if they are rare, is not really acceptable in a language that would like to be considered "safe".


As much as I like C and dislike C++, you are right.


>If hacking is not illegal when we do it, then it will not be illegal when they do it—and “they” is increasingly becoming the private sector.

Not sure I follow the logic here. The State has a monopoly on many things that are never allowed in the private sector -- violence being the most obvious one. We don't seem to have a huge problem with the split here; why we couldn't do the same for hacking?


> iOS's update model is 1000% better than Android's, which is a massive security improvement against most threat actors (who aren't using artisanal Israeli zero-day exploits).

What are the advantages of iOS's update model?


Software updates actually occur for old hardware. Android almost never gets updates for anything older than a couple years.

I hate iOS for its walled-garden bullshit, but Android seems to have found the one way to be worse.


>basically turns the phone in your pocket into an all-powerful tracking device that can be turned on or off, remotely, unbeknownst to you, the pocket’s owner

even more so with Android & iOS.


Targeting those is for n00bs.

Backdooring the modem firmware is far more useful, and reliable. Not to mention undetectable -- not only is the user unable to recompile or replace this firmware, they can't even get a checksum of it. Qualcomm's modem chips get their own private NAND flash that they can use as they please.


Realistically, how useful is compromising modem firmware if like 98% of communication done through it is encrypted at the AP level? It’s sorta like wiretapping a fiber optic cable—okay, what now?


On SoCs with an on-chip modem, the modem can write to any part of host memory it likes. No IOMMU.

On the (very) few remaining phones that use two separate chips, what do you think the odds are that the host OS is hardened against attacks originating from its own modem? That is a hideously complicated protocol spoken between the modem and the host processor. Plenty of validation+overflow footguns. Finding exploits here isn't going to get security researchers promoted, if they can find them at all. "Exploit is available only to modem manufacturer" does not engender a high CVE score.

Oh, and, just to top it off, that interface between the modem and the host processor when they aren't on the same chip is... drum roll... USB. As in, BadUSB. As in, Mr. Phone says: "wow, somebody plugged in a USB keyboard! And a USB mouse!".

But hey, the modem doesn't even need to own the CPU. It can record and exfiltrate your GPS location quite happily via LTE, all on its own. And buffer up a nearly unlimited amount of location data on that private NAND flash it has in case you're out of cell range. All while pinky-swearing that location services are definitely absolutely turned off, promise. I don't know why people believe that location-tracking is ever turned off on a phone; it's just absurd to think that.


Thanks for that. I'll sleep well now. These are the literal nightmare scenarios.


Nice to see that he linked to Rust as a safe alternative to the C's (C, C++, Objective-C). There is a place for Go and Java as well, above the system level. Both perform well and are safe.


Go is not safe for concurrent code, unfortunately. Specifically, it does not protect against data races on non-atomic types, which can lead to torn writes and break invariants that are required for safety.


You're right that Go has those flaws, and it would be nicer if they didn't exist. However, in practice Go is still far, far safer than C and C++. Use after free etc. are far easier to exploit than Go's races.

If you can write in 100% safe Rust, Swift, C#, Java, etc. - and not use any of the unsafe escape hatches they provide at all - that would be best. But using some small amount of unsafety in any of those languages, or using Go, would still be a huge improvement over the typical C or C++ codebase.


True, but it's still memory safe; also, Go has a race detector included which is really easy to use: https://blog.golang.org/race-detector


No, Go is not memory safe for concurrent code.

Race detection is good but can't eliminate races altogether.


Same applies to Rust as well actually.

Its type system only prevents data races for in-memory data structures, on the same OS process.

There are plenty of other concurrency races.


It's true that there are "higher level" races that Rust can't prevent. But in-memory races are particularly important to prevent, because a) they're a common and very nasty kind of bug and b) it means unlike Go, Rust (the safe subset) is memory safe (and in general, free of undefined behaviour) for concurrent code.
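
A small sketch of what that guarantee looks like in practice (nothing project-specific): sharing a plain counter across threads without synchronization simply doesn't compile, so the version that builds is forced to go through Arc plus a Mutex (or atomics).

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // let n = std::rc::Rc::new(std::cell::RefCell::new(0));
        // thread::spawn(move || *n.borrow_mut() += 1); // rejected: Rc/RefCell are not Send/Sync

        let counter = Arc::new(Mutex::new(0u64));
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                // The closure can only be spawned because everything it captures is Send.
                thread::spawn(move || *counter.lock().unwrap() += 1)
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        println!("count = {}", *counter.lock().unwrap()); // always 4, never a torn write
    }

Higher-level races (time-of-check/time-of-use against a database, say) remain entirely possible, as the parent comment points out.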


Can you actually build a security exploit from a concurrency bug in Go, though? You may corrupt data, but can you cause remote code execution with it?


To be clear, yes. The issue is with how Go's "interface{}"s (and some other types - like slices, etc) work - it's two pointers that can't be written to atomically, which means you can get into a situation where your objects are invalid.

https://blog.stalkr.net/2015/04/golang-data-races-to-break-m...

Here you can find a POC.

You could argue that this is contrived, but it's also not going to be fixed. This is in contrast to Rust, where there may be implementation flaws leading to unsafety in safe code, but the language retains the right to fix those, even if it breaks code.

Still, I think there's really no question that Go is a major step up over C and C++.


RCE in that process, maybe not, but a security exploit, absolutely. Things like that can be the difference between admin and non-admin privileges, or an "authenticated/not-authenticated" scenario.


Yes, not all RCEs are caused by memory bugs.


Can you point to some examples of RCEs caused by Go concurrency bugs?


I don't have a real world example. I was just stating it's possible.


Corrupt data may break invariants, which widen the attack surface.


It was in this context for the first time that I saw how Rust was something other than an overly complex replacement for C. I've heard all sorts of things about how it's memory safe, but nobody ever explained why that was important.

So thanks for that. This marks the place where I acknowledge my mind was changed about the value of Rust.

[Update] Upon reading [1], I think that memory safety is still vital, but Rust has a very strange and cumbersome way of doing it.

[1] - https://doc.rust-lang.org/book/ch04-01-what-is-ownership.htm...


Rust has a strange way of doing it because it's trying to be as performant as C but as safe as a garbage-collected language. Maybe there'll be a language that achieves both without having to worry about ownership, but I'm not aware of any that exist at the moment.
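
For reference, a minimal sketch of the ownership rules the parent found strange and cumbersome: ownership moves, and the compiler refuses any use of the old binding, which is the compile-time stand-in for the bookkeeping C leaves to the programmer.

    fn main() {
        let buffer = vec![1u8, 2, 3];
        let consumer = buffer;        // ownership moves to `consumer`; no copy, no GC
        // println!("{:?}", buffer);  // compile error: borrow of moved value `buffer`
        //                            // (the pattern that becomes a double free or
        //                            //  use-after-free when tracked by hand in C)
        println!("{:?}", consumer);   // exactly one owner, so the memory is freed once
    }

Much of the perceived cumbersomeness is the borrow checker making that bookkeeping explicit instead of leaving it implicit.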


"For example, if you want to see Microsoft have a heart attack, talk about the idea of defining legal liability for bad code in a commercial product."

That sort of discussion is quickly dismissed on HN. And probably elsewhere on the web/over the internet.

Instead we frequently see discussion blaming users of the software, i.e., Microsoft's customers, or even suggestions to make the customer liable, or comments from "security experts" on how Microsoft has made such amazing strides in securing Windows (a tangent). In the real world, outside the Redmond/Silicon Valley monopoly space, how many mistakes does someone have to make before we start to suspect there might be problems with relying on that person's work? Even more, how many times do we hire someone knowing they have made 500+ mistakes in prior work leading up to their application?

If Microsoft products are so infallible when used as instructed by Microsoft, then why would Microsoft have a heart attack, as Snowden suggests? What a remarkable state of affairs we have today, where employers such as Microsoft can call their employees "engineers", and yet both the employer and the employees are absolved of any liability for the so-called engineer's work. The number of "second chances" Microsoft gets is nothing short of astounding. A bit like the number of pardons we allow Google or Facebook for privacy infractions. Infinite.

https://www.nspe.org/resources/professional-liability/liabil...


Most of the people I went to Uni with ended up in fields where the companies are liable for bad stuff, to a certain degree. It does exist. However:

* you get paid a lot less

* the companies and industries move very slowly

* you spend a lot more time writing long-form, some time just re-using existing stuff wholesale, and almost no time building actually new things

I mean like Real Engineering fields. What we do in software is not real engineering, not even close. That has pros and cons.

I saw someone else talking about Rust, but I don't think that's what would happen in such a world. Rust is too new; if the company were actually liable for problems, the legal arm of the company wouldn't let you use it. I think what would happen is that everything would slow way down, half or more of the people working on code right now would lose their jobs, hobby programming would either disappear or become very insular and not well distributed (because if companies are liable, then individual people will also get sued for bad software), and you'd spend most of your time working with small pieces of 30-40 year old technology.

I think that eventually, software may get to such a point. Just, be careful what you wish for.


I completed a MSc in Formal Methods a decade ago, and I've worked in software projects where the level of rigor was equivalent or superior to any classical engineering field. For example, railway signaling or some real time control systems. We handed in complex artifacts that have had zero defects throughout their lifetime (> 15 years).

I believe lightweight formal methods are quite promising and might let software move relatively quickly and economically while retaining some rigor. Look into Liquid Haskell for some ideas that might become mainstream.


Liquid Haskell looks similar to Eiffel in terms of contracts.

I imagine a whole bunch of power could be derived from this in Haskell. I don't know how heavily contracts were/are used in Eiffel (I don't think it's so popular these days).

It would be great to know if contracts had a measurably useful outcome on projects and what that measurable was (addressed a market that we otherwise wouldn't have been able to, dropped runtime errors to only system based errors, etc).

WRT your projects, I imagine it would be great to see your product out there working with a known level of uptime and quality. I could imagine it's a bit of a shift from the normal "throw it together" of ... a lot of software.

I'd be curious to know what it's like to work on day to day, year to year. I imagine lots of software will still be "throw it together" for a long time yet, but even if the subsystems are formally verified, it could have a useful impact on the software that consumes these (verified) modules.


Liquid Haskell is a bit different from Eiffel. On Eiffel, contracts were designed to be verified at runtime. There were some efforts to verify them at compile time, like Liquid Haskell does, but that's hard unless the whole language and type system are designed for that from the ground up.

As for my day to day job, I think the real difference with common software development was that requirements were very well understood right from the beginning. This allowed us to formalize them into axioms and then derive the implementation in a stepwise fashion using some Standard ML tooling.

I've also worked on proving theorems on existing software artifacts, but that's way harder both in terms of effort and number of defects.


I personally think it would be excellent if that was the only way you were allowed to code. Though I'd probably lose my job haha. I'd sleep better at night.


How does liquid haskell compare to typescript?



The gp is arguing that companies should be held liable for the harm that they can and do cause. You are countering that argument by claiming that doing so would require all companies to adopt onerous measures. However, that counterargument is only valid if we assume that all companies can cause the same amount of harm and thus carry equal liability, and that causing such harm is unavoidable.

That assumption is deeply flawed. We do not hold toy car manufacturers to the same standards as actual car manufacturers. We do not hold every manufacturer of screws to the same standards as the manufacturers of screws on airplanes. Or rather, we do hold them to the same standards; it's just that we know certain use cases basically cannot cause too much harm in the event of failure, and thus in practice the standards needed to mitigate the worst case are much lower.

Software liability does not mean that everybody suddenly needs to take the same care as safety-critical industries. It only means you need that level of care if you are making safety-critical software and are incapable of separating the safety-critical components from the non-critical ones. What it really means is the repudiation of the one-size-fits-all, lowest-common-denominator expectation of quality.


Liability just means more controls to avoid blame and tighter specifications. Malpractice laws don't make doctors less dangerous; they mostly encourage ass-covering exercises.

I worked at a place that had a formally verified application running on some mainframe. It was wonderful, except that the process was excruciating and maintaining that validation prevented any changes. Every code change cost a minimum of $25,000 2002 dollars.

It was dumb. They would have been better off with a paper process and army of clerks.


How does Linux fare in this scenario? Few things are as critical in terms of infrastructure.


I can imagine two possible scenarios for Linux (the kernel): companies either choose to double down on it as a collaborative venture in order to distribute the cost of verification, or it is abandoned in favour of vendors who provide verified kernels at tremendous expense but in hopes of locking their competition out of the market.

As for the rest of the open source ecosystem that goes into Linux (the operating system), it would probably be abandoned.


Think of all the internet backbone routers, endpoint routers and switches, hardware firewalls, VPN concentrators, the SSH daemons, SSL software, RSA keyfobs and the like, the content delivery networks and DNS ecosystem, the SSL public trust system, the connectivity providers from ISP networks and national and international fibre connections to cellular and wifi networks, the web browsers which billions of people use to interact with untrusted content (and beyond tech: datacenters, the AT&T Long Lines building style classic phone system, the postal service, electricity subsystems, food and water supplies...). Even staying in tech, you've basically got to exploit something else before you get to whatever underlying OS there is, and even if you get to it there's not necessarily a need to attack it.

NotPetya, which took down Maersk and did $300Mn of damage, was apparently spread (through their Windows AD) by compromised admin accounts which they were lax at managing[1], rather than by kernel exploits. The SolarWinds Orion security flaws were blamed on weak passwords, not OS kernel exploits. And once an attacker is inside, something like last month's systemd/polkit exploit[2] shows that attacking the kernel isn't always necessary for privilege escalation.

Linux the kernel is important but it's the heart inside the ribcage, not the first or last line of defense, or the main thing to target.

[1] https://gvnshtn.com/maersk-me-notpetya/

[2] https://github.blog/2021-06-10-privilege-escalation-polkit-r...


The point is that if legislation were introduced that resulted in liability it would likely completely decapitate the FOSS ecosystem (among other things).


Why would it? If faced with the choice of taking liability for using Linux or rebuilding Amazon shop + AWS + Kindle + Echo on Windows I'd guess Amazon would do the former, wouldn't you?


In the short term? Perhaps. They might just develop their own proprietary OS - they already design custom CPUs!

I imagine it would depend heavily on how large the liability was. I expect individual components would begin being replaced with "certified" commercial alternatives. If not existing ones, definitely new ones. Remember that they make money by selling to customers who would also be subject to the same rules. Look at healthcare, aviation, and finance for concrete examples of the effects (both negative and positive) that red tape has on software and IT policies.

There's an entire FOSS ecosystem and the vast majority of it is composed of small-ish slow moving projects. The tech industry is also an entire ecosystem full of small and medium sized players. Even if behemoths such as mainline Linux and AWS somehow survived unchanged I would expect a much greater chilling effect on smaller players that couldn't afford to take on such risks. New companies and software projects would become very difficult to get off the ground (healthcare is a good example here). With few to no new entrants forward progress would slow to an absolute crawl.

All of this has downstream effects. Fewer consumer devices running Linux would mean even less hardware support. Security related liabilities would almost certainly mean more vendor locked hardware. Would companies like Purism remain viable (or even legal)? The steady stream of new FOSS users and contributors would almost certainly dwindle.

Depending on how such regulation was written, could open source contributors themselves become liable for a freely provided product?


Actually it's simpler. People and organizations would just move to a jurisdiction where such liability laws didn't exist. Apple would move. Microsoft would move. Google would move.

And then the US would be forced to decide whether to accept imports of foreign devices and software (created under the no-liability framework) or to stay with homegrown technology frozen in time.

The best thing you can say about this proposed reform is that it would make a great plot for a sci-fi novel.


Tech companies can't even manage to leave San Francisco's outrageous cost of living and rising crime, much less the United States.


You think they are trying? When VCs and executives live in walled castles and own multiple rental and investment properties?

I mean, the hub thing and synergistic collaboration are cool, but employees are not 3x more productive because of them.


> or to stay with homegrown technology frozen in time.

Why would it be frozen in time?


> Why would it be frozen in time?

Local development would be rendered unable to compete with the fast-moving zero-liability model that quickly and cheaply delivered the features that consumers wanted. Either its imports would be banned or the local industry would crumble.


I don't completely disagree but

> you get paid a lot less

Isn't that the point? That is, the argument is right now the money goes to the devs, management, and stockholders, when it rightfully should go to those people damaged by the software (or toward preventing them from being damaged).

> the companies and industries move very slowly

How much of this is due to liability law and how much is due to natural aspects of the relevant technology? Liability law may be part of the reason, but software probably naturally moves faster than other engineering fields.


> Isn't that the point?

If the money is supposed to go towards people preventing damage, wouldn't that include devs?

> How much of this is due to liability law and how much is due to natural aspects of the relevant technology?

I can only speak from my own experience in biotech, but moving slowly was due to a lot of compliance box-ticking that didn't actually contribute a lot to either safety, reducing defect rate or meeting requirements. Conway's law applied: since bio engineers and lab techs move slowly, so did the software org.


> fields where the companies are liable for bad stuff...

> * the companies and industries move very slowly

The reason the rest of us move so fast is we skip safety, security, quality, and maintainability to get to market. Those things are usually perceived as "not bringing immediate customer value".


> The reason the rest of us move so fast is we skip safety, security, quality, and maintainability to get to market.

And deliver cheaper results.


And still, the Toyota braking microcontroller code showed that coding practices in Real Engineering are somehow even worse. I hope that has improved since then.


> What we do in software is not real engineering, not even close.

The only reason our processes and practices aren't much heavier is because the stakes are lower. People do not die if a Tweet doesn't make it through, but they do if a bridge collapses.

The threat model is also significantly different. If we go back to the bridge analogy, a company like Microsoft has to deal with tens of thousands of people trying to blow it up or find some weakness every day, while a million people are going over it. Just by sheer laws of scale they are going to have a tougher time, Real Engineering or not.


> People do not die if a Tweet doesn't make it through

True, and what you're saying is generally true. But what were the total consequences of the Equifax breach? We can't even quantify it. Snowden himself in the article mentions activists and journalists being killed because of these vulnerabilities. There are definitely counterexamples.


That's true. I suspect part of the problem there is lack of liability and therefore lack of willingness to pay for security. They're just going to lose the best security engineers to Google and Microsoft.


That is due to issues with how Equifax operated (and related lack of meaningful consequences). It has nothing to do with a lack of liability for software companies.


Is moving more slowly in software a bad thing? I can just imagine an alternative world where people are better off, using sites that look like HN but are secure and work in their best interest.


There would be no sites. There would be no web.


>I mean like Real Engineering fields. What we do in software is not real engineering, not even close. That has pros and cons.

"Real Engineering" has definition and it fits SE, doesn't it?

The computer industry iterated and managed to reach the point where we can move really fast and not break computers with bad code. That's the result of thousands of hours of engineering effort by previous generations.

>Engineering is the use of scientific principles to design and build machines, structures, and other items, including bridges, tunnels, roads, vehicles, and buildings. The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. See glossary of engineering.


How about hardware engineers or software engineers working on hardware? I figured it achieves a certain middle ground. You are still writing code but you get to touch real things and may burn yourself if you work closely with the hardware, plus you need to pay more attention to security.


> I saw someone else talking about Rust, but I don't think that's what would happen in such a world.

I agree.

Let's not use tools as a crutch. Roman engineers built bridges that are still standing today with little to no maintenance. There's no indication that those structures are going to fail anytime soon either. I think we can agree that their tooling was worse than our current bridge building tools.

> I mean like Real Engineering fields. What we do in software is not real engineering, not even close. That has pros and cons.

So the word engineer is probably loaded if not dated, a throwback to engineering's boom in the 19th and early 20th century. No idea why we still use it since software is more like math than anything else. Computer scientist just never stuck.


> Roman engineers built bridges that are still standing today

Roman engineers didn't understand the principles of civil engineering; I'm sure they built a lot of stuff that fell down (survivorship bias). So they started massively over-engineering their structures ("moar rocks!")


> I'm sure they built a lot of stuff that fell down

Survivorship bias is the favourite catchphrase for HN these last few years.

I think you overestimate the importance of academic theory over practicality.

"Moar rocks" is a perfectly reasonable way ro fortify a structure. Because 1000 years from now engineers will be wondering how any of our stuff survived ("moar cement and steel rods!")

So yeah, I have a healthy respect for practicality.


Sounds more like Ada than Rust.


So called "Real Engineering" fields kill people at such a high rate. Software engineers are way better. Despite all the value created, a vanishingly small number of people have been killed.

That's the funny thing. As a software engineer, if I have to build something that might kill someone, I just don't do it. But so-called "real engineers"? Bam, condo building down, people dead. Bridge down, people dead. Tacoma Narrows? Experimenting on people.

If "real engineers" were half as good at their job as I am at mine, we'd have a space elevator and people would be going up and down it every hour and the only problem they'd face is that the music is sometimes not that great. But they're too busy killing people to create $10 in value. I'm too busy not killing people and creating $1 million in value. The numbers don't lie, dude. The numbers don't lie. More value made. Fewer dead people.

People don't like to hear it because of the fetishization of this "Real Engineering" nonsense. But it's true. Software engineering's generational scandals are outdone by a "real engineering" project every day.


Maybe it's because real engineering has higher stakes than a crud app. Condos can kill, bridges can kill, Javascript forms or video game engines generally can't. If they could we'd witness a lot more deaths, despite your confidence.


I have a personal philosophy: "if you can't do it well, don't do it at all". That's because I prize my ability at what I do. I am good at it. I am a craftsman. Not like these fly-by-night characters busy dropping concrete on people's heads. Maybe I need to teach these "real engineers" something about building things haha: "If you can't do it without killing people, don't do it". Man, that's a motto for the ages. You'd think you don't have to teach people that, but here we are.

Maybe we should teach them ethics, because whatever class they took on that, it didn't take. As a practitioner of quality and making money with zero killing, I could help.


>I have a personal philosophy: "if you can't do it well, don't do it at all". That's because I prize my ability at what I do. I am good at it. Maybe I need to teach these "real engineers" something about building things haha: "If you can't do it without killing people, don't do it". Man, that's a motto for the ages. You'd think you don't have to teach people that, but here we are.

You're trying too hard. It's embarrassing.


Why bother with software engineering? Your moral sensibilities are needed everywhere. It's not every day someone emerges capable of writing a todo app in React without killing anyone.


HAHA! What can I say? Some of us are made of sterner stuff.


I see an awful lot of skyscrapers not falling down every day, stands to reason some engineers are good at what they do, no?


Oh there are good engineers, certainly. But the field as a whole clearly is immature since it causes more deaths than software engineers do.


I actually agree with you. People keep doing shitty jobs in real engineering e.g. construction. It's just most of the time people don't have the skills to notice it.


We tried this with general aviation. Private plane manufacturers all went bankrupt, and now the minimum price for a new airplane is in the hundreds of thousands of dollars.

Apply strict liability to software, and you'll see the same results. Every piece of software will have to be constructed with the care of a medical device. Expect most forms of technological progress to come to a halt. Some part of the HN crowd will post "I want that" from their iphone (which wouldn't exist under such a regulatory scheme).


I’m reading a book called “An American Sickness” and it discusses medical devices. Turns out a lot of them are pretty poor and often have less testing and verification than most people think.

There’s one story about a hip implant that went bad. Turns out the doctor recommending and performing the surgery was also the patent holder and had a vested interest in getting this particular implant in as many patients as possible. Turns out the patient was actually patient #8 who received this particular implant. Also the implant wasn’t fully approved yet and the FDA simply trusted the doctor to monitor the device for problems.

Also this isn't isolated. The chapter has several examples of medical devices going into patients and patients experiencing negative health outcomes. Turns out laws are only as good as the agencies that enforce them.


Even with perfect enforcement, how do you tell if/when the laws stifle innovation?

If a company makes a bad implant, it's very visible, but all the potentially improved hip implants that never get built because of the barrier these laws create are invisible.


Private plane manufacturers went bankrupt, but more importantly, plane crashes have become incredibly rare.


True but that's the trivial case. You'll never get food poisoning if we outlaw food, never get into a car accident if we outlaw cars, ...

The point is there should be a better way than just pulling the plug on anything potentially unsafe.


Generally I get food poisoning when the restaurant, supermarket or producer fails to follow the health standards it is obliged to meet by law.

And then I can sue them to death or make a report to health authorities that will act accordingly.


In parallel, the GA planes in use got old. Imagine the same happening with software!!!

In 2000, the average age of the nation's 150,000 single-engine fleet was more than 30 years. By 2020, the average age could approach 50 years

https://www.faa.gov/aircraft/air_cert/design_approvals/small...


I don't know about "rare", since 1999 the broadest "accident" statistic fluctuates around 6 per 100,000h

Notice that "accident" definition conveniently excludes tons of partial failures! (see the legalese of 49 CFR § 830.2 - Definitions.)

To make parallel with broader discussion, the security failing of software could be considered "partial failures"...


>How many mistakes does someone have to make before we start to suspect there might be problems with relying on that person's work.

There are a lot of legitimate criticisms of modern OS security, whether we're talking about Linux/Android, macOS/iOS, or Windows. However, we can't ignore the scope of these programs. Supposedly Windows 10 is approximately 50 million lines of code, and due to its overwhelming popularity it has almost certainly been targeted more often than all of the other OSes listed combined. I am pretty sure all of these operating systems are > 10 million lines of code.

Whose work are you going to rely on instead? The level of security in these OSes isn't equal across the board, but I assure you zero days exist for all of them and, barring some kind of miraculous technological breakthrough, they'll continue to pop up from time to time as long as they exist.

Suppose someone pulled off a miracle by making a security-focused OS that's easy for non-technical people to install and use and that actually gains enough traction to establish a market share. If such a thing existed it would likely get lots of things right where others have failed, but it would also likely get lots of things wrong. That doesn't mean we shouldn't try, and it doesn't mean we shouldn't encourage both old and new companies to try to improve the situation; it just means that it's an incredibly difficult and likely never-ending task. Security is a process, not an achievement.


Does it need to be 50 million lines of code? When you design with security in mind you might have to prune old code and drop some risky optimizations, probably drop some features. Using a higher-level language might help reduce the line count as well at the cost of performance and memory consumption.


>Does it need to be 50 million lines of code?

Probably not, and they should try to reduce the size where it makes sense. However, given that all of the OSes I listed are above 10 million lines, even in the best-case scenario a modern operating system isn't going to be anything less than an overwhelmingly large and complex program.


> Even more, how many times do we hire someone knowing they have made 500+ mistakes in prior work leading up to their application.

While I don't disagree entirely with the point you're making, I have made many mistakes in my career and I still get hired, and I suspect you have too. Most people make mistakes, and still have jobs.

Realistically, people/orgs make mistakes. Requirements change. Expectations change, security practices change, etc.

Roman bridge engineers would likely struggle to build bridges at the scale and to the requirements they're built to in 2021 (miles long, tall, huge train weights, etc). Things change in technology jobs a LOT faster than in typical NSPE engineering licensees' jobs. It's a different world. There are probably bridges under construction today that were designed before the iPhone.


Put liability on software and you will halt innovation almost completely. It won't lead to a utopia of provably safe systems and secure languages, but to a dystopia of paranoid risk-averse companies refusing to allow any form of innovation or use any code that is not already in use. Instead of clean bug-free code we'll end up with code that looks like OpenSSL, ugly as hell but relatively bug-free after decades of brute force fixing.


Microsoft and friends would love this, as it would result in such a huge barrier to entry that only those who could pay large amounts for insurance would be able to enter the market, with insurance built into the price of the software.

Say goodbye to open source software!


My guess is the "users didn't know what they were doing" defense. Not that you can deploy software in a secure-by-default stance. But once somebody sets up users and a million other settings inside Active Directory, it's pretty much bound to be insecure. And then the defense is akin to "you weren't supposed to crash the car into the tree".


I guess they blamed "the army that didn't know what it was doing" for using that bridge wrong too. So it's not _just_ software "engineers"... <grin>


Masterlock is still in business, right?


Regardless of all that, I would love to see a big improvement in basic software Quality.

Easy to say; not so easy to do. It is actually difficult to answer the [seemingly] simple question “What, exactly, is ‘Quality’?”.

I think I do a fairly good job, there, on my own work, but what works for me, won’t work for most.


It is possible to build a bridge which doesn't fall down during normal use. Has any company demonstrated that it's possible to build large scale software with no security flaws in it? Microsoft haven't, Apple haven't, Google haven't, Facebook, IBM, Oracle, RedHat, Amazon, Cisco, Juniper, The Pentagon, The Whitehouse, The UK Government, the fact that we've heard of Snowden suggests the NSA haven't, banks haven't, airlines haven't, medical companies haven't, Cloudflare leaked memory to the world, Symantec screwed up SSL authority handling, BGP was hacked to steal cryptocurrencies. It's possible to commit to building a bridge and then everyone must wait patiently for the bridge to be finished - there's no push to have the bridge done in 6 weeks because another bridge is being built in the same place and if it appears soon all the traffic will be lost and the build cancelled.

Who even stands a chance? Colin Percival - math prodigy, cryptographer, former FreeBSD security officer - has paid out thousands in bounties for over two hundred bugs in Tarsnap software (not all security flaws) including buffer overflows, divide by zero, double-frees, mistakes in error handling, Unicode string handling error, numeric overflow mistakes, padding errors, user input handling bugs and one "critical security flaw" (against his very high standards) - https://www.tarsnap.com/bounty-winners.html

If someone like that working full time on a very constrained single purpose product can't make flawless software, what sense does it make for you to sneer at Microsoft as if Windows is singularly flawed here?

> "If Microsoft products are so infallible when used as instructed by Microsoft, then why would Microsoft have a heart attack as Snowden suggests."

They aren't, they're Swiss-cheese. So is approximately every other general purpose computing product, software and hardware, with the possible exception of a handful of very small very specific battle-hardened tools. Even the non-general-purpose walled garden appstore devices are Swiss-cheeses.

Possibly their heart attack would be because they are one of the biggest software development companies on the planet with a huge range of products used in tons of companies, so such a regulation would disproportionately affect them more than most companies? "All" Facebook develop is a web page, and very few companies use Facebook or Instagram or Oculus except by sending money and adverts to Facebook. Microsoft would have to secure a ton of large-scale software used on company premises in environments they have no say in. It seems to me the result of such a ruling would be something like Microsoft stopping selling on-premises software entirely and offering only web access to hosted Outlook, Office365, SharePoint, SQL, Biztools, Dynamics, with a ton of extra checks slowing them down. Companies would then face the choice of using Microsoft online tools, where Microsoft is responsible for the security, or on-premises tools like LibreOffice and Thunderbird and Firefox, where they have to take responsibility; their legal and insurance departments would push them into the Microsoft world, with even less configurability or interconnectivity than there is now. It may be more secure, but it doesn't sound great.

> "Instead we frequently see discussion blaming users of the software, i.e., Microsoft's customers, or even suggestions to make the customer liable, or comments from "security experts" on how Microsoft has made such amazing strides in securing Windows (a tangent)."

Not a tangent; Look at the recent discussion on HN about Windows Defender, and pretty much any security choice - it's full of people bemoaning Microsoft making decisions that are mildly inconvenient on the grounds of "how dare they think they know better than me", "I demand to be able to turn these security features off", "I want control to be able to do anything". If Adobe Reader is vulnerable and lets someone ransomware all your documents, what benefit is it to you if Windows underneath is an impregnable fortress? If someone can steal a user's 2-factor auth backup code from their house and access their remote VPN and exfiltrate data, what good is a perfect OS underneath doing for anyone?

This isn't whataboutism ("others are insecure so why can't Microsoft be"), and although it is somewhat in defense of Microsoft it's not "waah leave Microsoft alone", it's ... what kind of world are you living in where you heavily imply that Microsoft could have done differently and didn't? Microsoft trying to write Windows 3.1 in Pascal or Ada would have had their lunch eaten by every other company which didn't.

> "What a remarkable state of affairs we have today where employers such as Microsoft can call their employees "engineers", and yet both the employer and employees are absconded from any liability for the so-called engineer's work."

You think changing their job titles to "programmer" would improve the situation?


> Not a tangent; Look at the recent discussion on HN about Windows Defender, and pretty much any security choice - it's full of people bemoaning Microsoft making decisions that are mildly inconvenient on the grounds of "how dare they think they know better than me", "I demand to be able to turn these security features off", "I want control to be able to do anything".

I think it's somewhat reasonable to complain about the approach taken to security in Windows. MS had a lot of work to do to stay on top as long as they did, but they were also extremely well resourced and dominant for a significant time, where they could have made bigger systemic changes to prevent swiss cheese getting released in the first place. They could've built their own rust-like language, maybe, built some great static analysis & fuzzing tools, or generally advanced their own internal exploit discovery to beat outsiders to the punch. That we've mostly settled for constant security updates upon exploit discovery in the wild and blindly assuming super old code is safe (until it isn't) seems like a failure to act (a failure of incentives?) and not a necessity.


> "They could've built their own rust-like language"

They tried. Windows Longhorn was to be "entirely" managed code in the early 2000s; by 2004 they walked that back because vendors didn't want to rewrite their software for managed code and there were too many performance issues. Then they scrapped Longhorn altogether. As a research project they built Midori: "Midori is an operating system that did not use Windows or Linux. Was written in C#. Took 7 years to build and included device drivers, web servers, libraries, compilers and garbage collection." From what I know, they tried harder than Apple+macOS or anyone+Linux kernel and couldn't do it. To then casually say "they could've" is not so convincing. Especially if, in this "you're responsible for bugs" world, they would have had to have a proto-Rust production-ready around the time of Windows NT in 1993 or so.

https://www.theregister.com/2005/05/26/dotnet_longhorn/

https://www.theregister.com/2004/05/06/microsoft_managed_cod...

https://www.infoq.com/presentations/csharp-systems-programmi...

https://news.ycombinator.com/item?id=27809296

> "That we've mostly settled for constant security updates upon exploit discovery in the wild and blindly assuming super old code is safe (until it isn't) seems like a failure to act (a failure of incentives?) and not a necessity."

The blogpost I link below[1] was written in 2004, and he says "I truly believe that the patching fad in which we are currently living is not going to last much longer. It can't. In another couple years, we'll have one full-time patcher to each system administrator. What's odd is that if companies simply exercised a bit of discipline, it wouldn't be necessary at all. Back in 1996 a buddy of mine and I set up a web server for a high-traffic significant target. It was not the Whitehouse; it was a porn site. We invested 8 hours (of our customer's money) writing a small web server daemon that knew how to serve up files, cache them, and virtualize filenames behind hashes. It ran chrooted on a version of UNIX that was very minimized and had code hacked right into the IP stack to toss traffic that was not TCP aimed at port 80. 10 years later, it's still working, has never been hacked, and has never been patched. If you compute the Return On Investment (Or ROI in the language of Prince Ciao) it's gigantic."

And yet the patching fad has got hugely worse since then, and all the factors he complains about - CEOs buying from salespeople, desire for customisability and flashiness - are all still driving the industry hard.

[1] http://www.ranum.com/security/computer_security/editorials/m...


I appreciate your reply. I had heard of Midori but didn't know much. It's good to learn more about those efforts.

Even when I say "they could've", it was in some part wishful thinking that we can even get to a state of the art where constant bug-patching can go away. I believe we can in theory. I also know that the nuance of reality often gets in the way of that kind of idealism.


Nothing will change


Eh, I think it's plausible that in 20 years almost all critical code will be written in memory-safe languages and that this particular bug class will essentially be eliminated for new software. There'll always still be RCE bugs of some kind, but it's probably going to get harder and harder to find them, exploit them, and escape sandboxes and mitigations.


Maybe we'll get open source drivers and have the PostMarketOS devs handle it.


I think this is the most we can strive for: a sustainable niche alternative.


They will never change. It's only going to get worse as the world enjoys their new remotely controlled telecommunications devices.


[accidental dupe. see above comment]


I bet software that is secure enough to be compared to the products of other engineering fields does exist (e.g. avionics systems), but you have to pay a much higher cost to use it, like hundreds of thousands of dollars, since the cost of achieving the guaranteed security level is also much higher. Given the low cost of modern consumer software, you really have to understand that you get what you pay for. And if you want security guarantees, you don't just need that particular piece of software to be secure; you need qualified people to operate it, the hardware running it to be secure (probably proprietary and costing hundreds of thousands), and the whole software stack to be secure as well, from firmware to OS to networking code. This is simply not something the general consumer can afford, and if you enforce that level of security in consumer software like smartphones, it is safe to bet that almost everyone who has a smartphone would be protesting, since they would have no software available on their phone anymore.

Like one comment above said, there do exist ways to enforce that level of software security (like railway traffic lights), but the cost would be ridiculously high, and those systems are probably not running any consumer kind of software stack, probably not even an OS, since Linux has its own vulnerabilities as well. Those systems are probably built for custom hardware that the software vendor has total control over as well.


Who dismisses it? I think it's a pretty good idea. I think an enforceable law would have to be more thoughtful than just paying for every breach. We need a reasonable standard for responsible security and how to assign blame.


[flagged]


No he's not; he even stated that he is fine with coming back to the US if he is guaranteed a fair trial.

https://www.washingtonpost.com/news/the-switch/wp/2015/10/06...


Everyone is "guaranteed" a fair trial. As much as one can guarantee such a thing. Bill Cosby was just let out of prison because of shenanigans with his trial.


[flagged]


Look at the dirty tricks they employed with Assange. Snowden’s paranoia has been vindicated.


It's not even paranoia. He wants to present the reasons for his actions, and the judicial authorities won't allow that, and they are not even really hiding it.


Assange is in prison in England.


Did the FBI deny D.B. Cooper a fair trial?


John Kerry said nearly the same thing. Ellsberg called it disingenuous or simply ignorant.[1]

[1] https://www.theguardian.com/commentisfree/2014/may/30/daniel...


I'm aware and I disagree with Ellsberg on this.


While his message is undoubtedly important, I find his writing style overly condescending, and it detracts from the message.


He probably has earned the right to be condescending as much as say Linus has. Dude risked it all to expose what he thought was wrong, and only someone as smart as him could have managed to stay alive and out of jail (albeit barely) while facing off against the most powerful country in the world.


Messages from public advocates should be delivered with more tact and understanding if you're serious about influencing public opinion.

For instance, if you like the Washington Post and read this article, you'll be quite annoyed at the "have to lose your spine" remark. The message is then diluted.

If you're just a regular consumer who likes the iPhone you'd be annoyed at the latte status symbol quip. The message is then diluted.

I don't think comparison with Linus is equivalent because he doesn't publicly advocate much whereas that's mainly what Snowden does.

(Perhaps I should have included this in the original comment.)


"He probably has earned the right to be condescending as much as say Linus has."

It's not about rights, it's about desires and personal insecurities. People who derive pleasure from condescension are just telling you that they're vulnerable to manipulation via flattery. It's bad opsec! :P


C'mon. Compared to what I'm used to from the median big shot internet blogger or NYTimes op-ed page, Snowden is positively pulling his punches here.


TIL Edward Snowden has a substack.



