I cannot consistently write safe C/C++ code (ocallahan.org)
133 points by JoshTriplett 151 days ago | 118 comments



The worst thing about these kinds of articles is the droves of junior programmers who have never touched systems programming before, but who will read this on Hacker News today and sit in the office tomorrow lecturing seasoned coders on how they're dumb for not having seen the light and for still using an unsafe language.

This is how stupid cargo cults get made, guys. It's easy to repeat some talking points that you found on the rust homepage or another internet forum, but doing that does not make you an enlightened programmer!

Do c/c++ for 10 years, like the author, and then you're qualified to comment on the topic. If you're new to this, try to put your time into actually learning about these topics and not just blindly repeating other people's opinions. Please don't say things like "c/c++ is unsafe and should not be used" without understanding it first. And try to consider for a moment that a humongous part of the critical software in the world is written in it. It is the industry standard for all things embedded and low level. Consider that maybe the reason for that is not that everybody outside of Hacker News is stupid.

The thing is, I'm not even saying the author is wrong (I don't think he is). I'm just saying that circulating these kinds of opinion posts here and then applauding each other for being such brilliant rust fanboys is not helpful.


Beginners will feel overconfident and some will even try to teach more experienced coders how to do things "right" regardless of these types of articles. We've all been there and we probably still are in some fashion. I don't see how that diminishes the value of TFA though.

Any kind of guideline can be "abused" by taking it to the extreme without taking the time to consider whether it actually makes sense.

Also while I do have significantly more than 10 years of experience in C I think it's silly to say that you can't criticize the language if you can't recite the K&R by heart. You don't need to know all the intricacies of C's aliasing rules to peruse the long list of CVEs caused by faulty memory handling.

The world of computing has changed drastically over the past decades; C or C++ may have been a great choice for a given application in the '90s, but that doesn't mean there's no better alternative now.

People telling everybody to ditch C like it never existed and rewrite everything in Rust or Go are silly and are probably the junior coders you're talking about who lack real world experience. Doesn't mean that the opposite reaction of "if it ain't completely broke don't fix it" is any more clever.


Well, I agree with you on practically everything you said.

Just for the record, I never intended to imply that somebody should not criticize their tools until they fully mastered them and I certainly don't claim to have done so. I just (tried to) say that I don't like it when others recite opinions from some blog post without even a basic understanding of the issues at hand as I find it leads to a very superficial discourse. I also didn't mean to imply that new projects should be done in C/C++ for the sake of historical reasons or something similar.

In fact, I actually recently started working on a new project, written in rust (!) in an industrial setting. Part of the reason for my original comment is that I found there was _a lot_ of cult in content related to rust, in their marketing material and on rust-related posts on HN. While I really like some parts of the language, this aspect of the rust community is really turning me off at the moment and I hope it will get better over time.


Let's say you're a fresh grad looking at options for your future dev career:

Option 1. Spend the next 5 years learning the ins and outs of C++, where it can bite you, where it can go wrong, all the intricate edge cases, etc... and you'll come out a better C++ programmer. Hopefully after 5 years of hard concentration, you may be able to write safe code (I still haven't met a C++ programmer that hasn't been burned by segfaults in production).

Option 2. Spend a single month learning Rust. Play with it for another month or even two just to be sure. Hopefully after the 3rd month, you'll be 100% certain that your code won't break because of errors that are above your pay grade.

I've done programming for over 12 years. x86 assembly and C for the earlier years. Then I realised that higher level languages gave me the power to code confidently without causing the errors that I was constantly seeing without explanation.

After many years of HLL I wanted to go back down to the metal... after everything that I knew, "C or C++, not even once".

Rust gives me the confidence to "compile once, run safe everywhere".


That assumes there will be no errors in the Rust compiler or runtime environment to deal with. If there are then those would definitely be above their pay grade.


I'd take a bug within LLVM over my own any day


Rust has basically the same amount of runtime as C; I'd say "standard libraries" here instead, personally.


>> Spend the next 5 years learning the ins and outs of C++, [...] Spend a single month learning Rust. Play with it for another month or even two just to be sure. Hopefully after the 3rd month, you'll be 100% certain that your code won't break [...] Rust gives me the confidence to "compile once, run safe everywhere".

See, this is exactly the fallacy that I am criticizing. I think the belief that "C++ is too hard to learn but one could become a good systems programmer in rust in a few months" is - frankly - misguided. I think that in order to become a good systems programmer in rust, you _will_ have to know all of your systems basics (i.e. you should already be able to code in C for starters) and then some more. But sadly this seems to be the spin that their marketing is pushing also.

Personally, I found practical rust (that interfaces with actual system libraries - think openssl) to be more or less on par with good C++11 with RAII memory and resource management in terms of practical memory safety. Having somebody write rust code without a basic understanding of things like resource management and threading is just as bad as having them write "unsafe" C(++) code.

I hope I'm not feeding the trolls here.


Another scenario:

Option 1. Spends 5 years learning the ins and outs of systems programming, whilst using C++. Still writes unsafe code.

Option 2. Spends 5 years learning the ins and outs of systems programming, whilst using Rust. Zero unsafe code has ever been written.


If you are writing systems code, you will be writing unsafe code; it's just how it works when you are working with the underlying kernel and hardware.

The best you can do is attempt to contain it within "unsafe" blocks, but, as others in the rust community have already shown, this will not save the rest of your "safe" code from crashing if you write a bug.

There have been memory safe languages for longer than c has existed; rust is just the latest.


Again, the assertion that all code written in rust is automatically "safe" (let alone correct) is false unless you say precisely what you mean by "safe" in very specific terms. While there are some ways in which rust is "safe" while c-family languages are not, these cases are _subtle_ and you _will_ have to understand modern C++ first to see what they are.

But yes, rust could turn out to become a new widely accepted standard for systems programming some day and depending on whether you believe that or not, it might make sense to "invest" your time into it.


These cases are not subtle at all. You can understand a good number of them with barely the knowledge from an Intro to Java class- iterator invalidation, data races, etc.

There is no need to understand the particulars of C or C++ to do or appreciate safe systems programming.


I'm talking safe as in Rust's memory safety guarantees.


As far as I understand, there is no accepted formal specification or even single source of truth for what the rust community thinks are the "rust memory safety guarantees". Only a large number of people that have completely convinced each other that "it's better than C++" with little proof and without even clearly defining their own semantics. Please correct me if I'm wrong here (a blog post does not qualify as a specification).

Relevant: https://github.com/rust-lang/rfcs/issues/1447


It's a little more subtle than that: It's "safe code must be memory safe" and "unsafe code infects the whole module". The stuff you're talking about is "what exact invariants are unsafe code supposed to uphold". Because unsafe Rust is a superset of safe Rust, that being an open question does not mean that what's meant by "Safe Rust" inside of safe Rust is up for questioning.

Work on exactly what unsafe code is expected to do is ongoing.


It's nowhere near that uncertain. "Memory safety" is hardly a subtle property and it's been well-understood since we figured out how to enforce it in garbage collected languages.

The only uncertain part is which rules unsafe code has to follow to maintain it- it currently comes down to "whatever LLVM optimizations won't break," which is in practice what C and C++ programmers deal with already. Things like your link are work to improve the situation beyond C and C++.


[flagged]


Comments like these aren't OK on Hacker News, no matter what you're replying to.

https://news.ycombinator.com/newsguidelines.html


As already posted in another thread for this GP, I completely disagree with the overall sentiment of this whole discussion that is throwing (modern) C++ (i.e., C++11 ff.) into the same bucket with (any version of) C.

What I can agree to, however, is that C should be your first choice if (a) you have to, (b) you like it, or probably (c) you are several times more likely to find a job if you know C as opposed to Rust or Go (unless you live in very specific areas of this world - maybe...?). And then go learn Rust/Go/... anyway :-).


Fact 1, the article doesn't mention Rust a single time.

Fact 2, mostly-safe systems programming languages have existed since ESPOL (1961), which is 10 years older than C, and there is a great lineage of attempts at safe systems programming outside AT&T's walls, so plenty of alternatives are available.

So as someone with more than 10 years of C and C++ experience, among other programming languages, before focusing on Java and .NET, I find these kinds of articles valuable because they are proof that there is no such thing as the 10x C developer who never makes mistakes.


I feel you're missing my point. The article doesn't mention rust but my comment was phrased in the context of the current rust craze on HN.

As I said, I agree with you and the author that all of us inevitably make mistakes. I also agree that the c-family of languages makes it somewhat easier to shoot yourself in the foot in a bad way than others do. I'm not saying innovation on safe languages is bad, or that using safe languages is bad.

My disagreement with you and the sibling is probably on the point of whether one can learn something useful about the limitations and pitfalls of c-family languages by reading a single-page all-opinion post about it. You and the sibling appear to believe so, I do not. I don't think we will come to a complete agreement on this, but I certainly understand and respect your and sibling's view.


The main issue I have found during all these years of advocating safe practices in C and C++ - especially among C developers, given that the C++ community does care about type-safe programming - is that it lands on deaf ears.

Currently, writing blog articles re-explaining what is bad and should be avoided seems a waste of effort, given that even respectable researchers like DJB get ignored.


There's a broader question of when to investigate a technology yourself versus learning from others. A new programmer could spend ten years programming in [some language] or they could learn from other people to avoid it, and possibly save themselves a lot of time. Yes, that means you're just following the bandwagon, but sometimes that's efficient.

I would share the link, though, rather than making my own assertions, and be open to other arguments from more experienced people.


There is a big difference between modern C++ (C++11 ff.) and even the most current iteration of C. The two languages might have been quite similar back in the day when "C with classes" was created, but they have diverged significantly since then. Therefore, I think it is wrong to consider C a "proper subset" of C++, as is often claimed, or even to treat them as equals.

In other words, the real fallacy of the original post is throwing C and C++ into one pot, as if it were all the same - something I consider could not be further from the truth, even back in the C++98 days. Part of the problem is that people with a (possibly incomplete) understanding of only one of the two languages believe they can apply that understanding to the other language, too, without having to learn its correct idiomatic usage [1]. And that situation is made even worse by posts claiming there are only a few "gotchas" you need to be aware of to understand the other language, particularly when coding in C after picking up some C++. Just to name the most critical changes in C++ over C: RAII, abstractions (objects/polymorphism/templates/generics), I/O, error handling, and namespaces/encapsulation. Not even keywords work the same (e.g., `const` and `typedef`). Finally, when C code breaks, it is typically closely related to one of those features for which C++ has facilities to avoid the issue (in particular, containers to avoid resource leaks, in connection with more advanced error and I/O handling techniques).
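
To make the keyword point concrete with one well-known case (a minimal sketch of my own, not from the linked post) - a `const` object is a constant expression in C++ but not in C:

    const int N = 10;

    int a[N];   /* C++: fine, N is a constant expression here         */
                /* C:   error at file scope ("variably modified       */
                /*      type"); inside a function it becomes a VLA    */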

In other words, I pretty much agree with the GP about the hyperbole of the linked blog entry, and would only let the original post "stand" if it made some kind of significant `C != C++` distinction. I can agree with anyone who thinks that coding in C is probably a Bad Idea today unless you must maintain legacy code-bases, while I think that C++11 has greatly changed the issues you have to look out for when coding in C++, making it a lot safer - and IMO quite fun - to use.

[1] https://olvemaudal.com/2011/10/10/deep-c/ (note that all C++ issues in the post can be avoided by using proper modern C++, like "naked" `new` usages and such)

[minor edits after the post for spelling corrections and readability - but no semantic changes]


I kind of agree with you, back in the day during the C vs C++ flamewars, I was always in the C++ side, and still am if you follow my comment history.

However, a big part of the problem, which you kind of refer to when talking about the lack of understanding of the differences between C and C++, is that, at least in the enterprise space, many use C++ compilers for writing what is mostly C-like code.

Do you know why most MFC classes have an Afx prefix?

Microsoft created a C++ framework similar to OWL in abstraction capabilities, but the test group of early adopters said it was too high level and they just wanted a thin wrapper around Win32, hence Afx was reborn as MFC. [0]

I like modern C++ very much, and it is true that many of the "modern C++" concepts were already available in C++98; the problem is getting developers to actually use them, especially old-school devs working in companies where CI builds, static analyzers or sanitizers aren't part of the culture.

Which is the situation I see most of the time across many enterprise customers.

[0] http://cs.sookmyung.ac.kr/class/00891/C++/mfc-faq/


I completely agree with you on that take.

However, if you accept that take, I see little chance of convincing management in such a company to switch to a new, fledgling language - even if it were much safer to use. If anything, your best hope might be to teach their teams to use modern C++ and slowly massage their code-bases into a more up-to-date state accordingly...

I.e., (not least due to those vast code-bases) C++ (and Java) are quite certainly going to stick around for the foreseeable future, like it or not. I don't really see any great migrations coming our way as long as those languages keep updating themselves to reflect the more significant insights from programming language research, even if those updates lag behind by years.


Hence why I mostly focus on Java, .NET and C++, and tend to comment that regardless of how great Rust might be, it will take several years for any relevant uptake.

Just to show how hard it is to move those big enterprise ships: only now, in 2017, has BMW moved from C to C++14 for their car platform.

https://fosdem.org/2017/schedule/event/succes_failure_autono...


> Do c/c++ for 10 years, like the author, and then you're qualified to comment about the topic. If you're new to this, try to put your time into actually learning about these topics and not just blindly repeating other peoples opinions. Please don't say things like "c/c++ is unsafe and should not be used" without understanding it first.

I don't think you need 10 years to understand enough to be qualified to comment. Like it or not people make mistakes, even the most seasoned coders do. C does its best to turn every little oversight into a potential entry in the CVE database.

> And try to consider for a moment that a humongous part of the critical software in the world is written in it. It is the industry standard for all things embedded and low level.

That is a valid reason to keep using it, but it doesn't mean that shouldn't change. What you see quite often is people defending C not so much for leveraging the existing ecosystem, but because "it's so beautifully simple" and out of this "an experienced coder can do it right" elitism.

> This is how stupid cargo cult gets made, guys. It's easy to repeat some talking points that you found on the rust homepage or another internet forum, but doing that does not make you an enlightened programmer!

> The thing is, I'm not even saying the author is wrong (I don't think he is). I'm just saying that circulating these kinds of opinion posts here and then applauding each other for being such brilliant rust fanboys is not helpful.

Nobody is talking specifically about rust here; it's just a promising new language which is designed to fit the territory where C/C++ are prevalent and which seems to have some traction. Rust is simply a non-theoretical opportunity to shift to something better.


You took my post as saying "all systems code should be written in C and that should always stay that way because people don't make mistakes". However, that is not at all what I said (also see other replies).

What I said was that one should not blindly make the opposite (false) assumptions, which are that "code should not be written in C/C++ because they are 'unsafe'" or that "all code written in 'safe' languages is 'safe'" (both for a hand-wavy definition of safe). The point I tried to make was that if you do not have a good understanding of C++, you're probably not qualified to comment on whether some alternative is safer/better than it or not. And also that just blindly repeating others' opinions is not a path to understanding in this case.

I realize that the linked article didn't say or do that, but my comment was clearly not directed at the author of the post, but at the community of this forum (just look around in this thread to find some of the group-think I'm referring to).


The big safety wins from Rust for me are things like the type system disallowing iterator invalidation, the type system handling resource releases, iterators making bounds checks a lot more affordable, the shared immutable/unique mutable distinction, the ability to encode that something can't keep a value somewhere accidentally, type system enforced thread safety, and others in that category.
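
To make the first of those concrete, here is a minimal C++ sketch of the bug class; the equivalent Rust is rejected at compile time because the vector is already borrowed by the loop:

    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        for (int x : v) {          // range-for holds iterators into v
            if (x == 2)
                v.push_back(4);    // may reallocate; the hidden iterators
        }                          // now dangle - undefined behavior
    }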

Why would I need a good understanding of C++ to consider these built-in facilities or the ones that just happen due to the type system as safer?


I don't think you need a good understanding of C++ to appreciate these features per-se. Only if you want to make a statement like "X is an improvement over C++ because of these".

For example, enforced const-ness/immutability, option types and automatic resource management (RAII) are also already standard in modern/best-practice C++ code. In general I agree that there are a lot of nice language features that rust currently has which would also be nice to have in the c-family languages.
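
For instance, a sketch of those C++ equivalents (assuming C++17 for `std::optional`; the function is made up):

    #include <fstream>
    #include <optional>
    #include <string>

    std::optional<std::string> first_line(const std::string& path) {
        std::ifstream f(path);       // RAII: the file is closed on every return path
        std::string line;
        if (!f || !std::getline(f, line))
            return std::nullopt;     // option type instead of a sentinel value
        return line;
    }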


> I don't think you need a good understanding of C++ to appreciate these features per-se. Only if you want to make a statement like "X is an improvement over C++ because of these".

What about "X encoding thread-safety through the type-system is an improvement over the C++ way" and similar? This is how I usually read these kinds of statements.


No, you can also learn from the experiences of others instead of making mistakes yourself.


I think this whole "competent coders don't make mistakes" attitude is a testament to the immaturity of the software development world. Real men write in assembly, maybe C if they're a bit tired.

It would be like a construction worker saying "only noobs need a hard hat" or a surgeon refusing to wash their hands because they're careful never to touch anything contaminated. Or maybe simply refusing to wear your seatbelt because you're a good driver.

Guns don't kill people, people do - but if your gun has the bad habit of firing randomly when you don't want it to, I guess it's fair to blame the gun a little bit as well.


> or a surgeon refusing to wash their hands because they're careful never to touch anything contaminated.

Fun fact, although perhaps this is specifically what you had in mind: This was pretty much the case until Ignaz Semmelweis [1] noticed that washing your hands caused patients to die less frequently. His suggestion was not well-received by the establishment:

> "Some doctors, for instance, were offended at the suggestion that they should wash their hands, feeling that their social status as gentlemen was inconsistent with the idea that their hands could be unclean."

I'd apologize for going off on a tangent if it wasn't such an accurate analogy.

[1]: https://en.wikipedia.org/wiki/Ignaz_Semmelweis


I don't think it's inaccurate at all. Software development needs a scientific revolution equivalent to that of medicine, wherein every pattern and practice has to be backed up by evidence from a randomised, controlled trial.

I remember sitting in a "software quality" workshop organised by a coder at an old employer. He was enumerating various patterns and best practices, and yet I realised that nothing he was saying was backed up by any evidence beyond appeals to emotion and anecdotes. If he had been trying to sell me on a new religion or a health product, I would have rejected his arguments out of hand, so why should the standard be lower for software development?


> I'd apologize for going off on a tangent if it wasn't such an accurate analogy.

> I don't think it's inaccurate at all.

an accurate - not inaccurate! :)


Surgical checklists have met (and in some places are still meeting) the exact same opposition, which pilot checklists also met when first introduced.


> Real men write in assembly, maybe C if they're a bit tired.

    root@boxen:~# cat > a.out


Laziness, impatience, and hubris.

  # zcat > /dev/kmem


> "competent coders don't make mistakes"

There's a bit of folklore that I can't remember correctly. One of the seven steps to becoming a hacker (in the pejorative sense) is this kind of overconfidence - believing you don't make mistakes.

The last step is (the belief in) complete control and omniscience. Another one, I guess, is insomnia.

It hung as a poster in informatics class - fond memories.


Reminds me of this excellent blog post about sharp tools

https://schneems.com/2016/08/16/sharp-tools.html


> I cannot consistently write safe C/C++ code. I'm not ashamed of that; I don't know anyone else who can.

With respect, two mistaken beliefs:

1: Only a few programmers can write safe code.

2: One will naturally encounter such programmers in the course of a prestigious career working for a high-profile web browser company.

But mediocre programmers consistently write safe C/C++ code, every day. They do it in the context of aviation, automotive, etc. They do it as part of a much larger safety process, one designed to be robust against faults at all levels, from design through development through manufacturing through end-user servicing. Software development is but one facet, and compliance with MISRA, ISO 26262, etc. is only the start.

Get to know them, and see how safe C/C++ code is actually written!


A lot of the "safe" code written in embedded systems is only safe because the inputs are already safe. You don't get to exploit any bugs in a parser when the messages are created by a system under your control. Cars, planes, trains, medical devices, etc. don't really have interfaces to the outside world that can easily be exploited. The interfaces they do have get exploited. See for example https://www.wired.com/2015/07/hackers-remotely-kill-jeep-hig...


> But mediocre programmers consistently write safe C/C++ code, every day.

I think perhaps you and the author have different interpretations of "consistently". You might also differ on whether "safe code" is safe just because it hasn't failed (or been reported!) yet under current inputs.

> They do it as part of a much larger safety process, that is designed to be robust against faults at all levels

So, since we seem to have moderately good success through voluntary adherence to strict guidelines, we shouldn't automate away the possibility of some of these errors entirely when possible? What are the benefits of that approach?

Edit:

I think you can safely ignore my original comment. It's obvious now that I was interpreting it oddly by conflating it with some other comments and positions in this thread. Your comment does a good job of explaining how it is possible to write safe C/C++ consistently (although I might quibble by saying that it is only achieved by ignoring portions of the languages, thus confining you to a dialect of those languages, which is a point in itself to be discussed).

I would delete the original portion, but I believe in acknowledging mistakes rather than hiding them, thus this correction.


OK. I also made mistakes and I understand you. However you should perhaps move the disclaimer to the top of your post, so people have a chance to abort reading your post.


I could, but in this case I think it's more powerful as a reminder to be careful about interpretation for those that are in agreement with the original statement and then come upon the revision. That it might initially reinforce someone's similar mistake to then confront them with the fallibility of human nature appeals to me.


MISRA and ISO 26262 make C feel like Ada with curly brackets; it is so constrained that it actually isn't C any longer.

> Get to know them, and see how safe C/C++ code is actually written!

I know them and also know that only in the context of high integrity computing are companies willing to pay for such processes, because they are cheaper than closing down the company if humans are killed due to a memory corruption error.

Good luck imposing such process in industries where human lives aren't at risk.

If companies could be sued for bad software quality, it would be another matter.


That's a different kind of safe than the OP means. You mean "does not crash, produces the right results"; he means "can't be hacked".

A non-networked engine control unit is super hard to hack by simple virtue of being unreachable from the internet. I bet if you run a fuzzer against your perfectly safe aviation code you'd find lots and lots of security issues. But those issues aren't important.


> A non-networked engine control unit is super hard to hack by simple virtue of being unreachable from the internet. I bet if you run a fuzzer against your perfectly safe aviation code you'd find lots and lots of security issues. But those issues aren't important.

Those examples are interesting, specifically because history has shown us this point of view to be erroneous through those exact systems.

Automotive control units that are supposed to be segregated from certain data channels sometimes aren't. That was the cause of the Jeep remote hijacking stuff.[1] Any automation has to have some input and output to be useful as anything more than a heater, and assuming your inputs aren't "networked" may not be valid, given that you likely can't control exactly how the system will be used now or in the future.

For aviation, there have been problems with assumptions regarding the integrity of the channels used for communication between planes and the ground. This has led to certain systems being exploitable, even if not as bad as the initial hype made it sound.[2]

1: https://www.wired.com/2015/07/hackers-remotely-kill-jeep-hig...

2: https://defcon.org/images/defcon-22/dc-22-presentations/Pols...


> I bet if you run a fuzzer against your perfectly safe aviation code you'd find lots and lots of security issues. But those issues aren't important.

As long as no one does something silly a decade from now when it comes to the networking on the plane/ship/factory.


The "Industrial Internet of Things" is currently a very hot buzzword. I fear the worst.


I don't follow your point. Software has to be prepared to deal with abnormal inputs, whether those are from a malfunctioning sensor, or an attacker on the same WiFi hotspot.

Safety is a property of a system and the context in which it's used. Safety is not a property of its individual components, including its code. If your CPU's GPIOs were directly accessible through the Internet, your computer would be hacked too; does that mean your system is not safe?


A malfunctioning sensor is very unlikely to prepare a specially crafted input that exploits a bug. An intelligent adversary is something completely different.

Your claim that MISRA and standards for safety critical software also provide security against targeted attacks does not really follow.


I doubt writing safe C/C++ code is as straightforward as the parent post claims; otherwise, people at Microsoft, Google and Mozilla must clearly be doing something wrong. They have the budget, the skills and the motivation to always produce safe code, yet their products all contain serious bugs, which would not occur if all their C/C++ code were "safe".


We know how to write C/C++ code appropriate for safety-critical components; it's just very expensive and slow to develop. Any company who set out to write a social networking system at an ASIL D level (say) would rapidly be out-competed.

ISO 26262 recognizes this, which is why different components in your car are developed to different integrity levels. No lives are lost if Firefox misrenders a JPEG, so it has a low rating permitting lax development practices.

That said, with crypto-ransomware and other cybersecurity hazards, it's well past time for software development to take some pages from the safety engineering book. STPA-Sec is one example of an attempt at this.


I've read those safety standards and I disagree. They don't prescribe much that isn't standard practice. The only thing they require that isn't commonly done is very extensive testing or formal verification. The high expense mostly comes from proving compliance to external auditors. You could have 90% of the results with 10% of the work by just testing your software very thoroughly but not producing all the documentation required to prove compliance with the standard. SQLite, for example, is at least as rigorously developed as SIL 4 software.


In automotive, code is very rarely written by hand; it is mostly generated from Simulink models. But yes, software for aviation, automotive, nuclear and medical equipment is running safe C code.

It is mostly generated from a model, not necessarily Simulink. The generated code usually has to follow rules, so no global parameters, no pointers, etc. can be used.


That's a great point. Often you need to run your system in simulation via a model. Generating the code directly from the model is one way to prove that your system matches the simulation.

Of course the modeling tools speak mostly C. This illustrates how C's weaknesses are mitigated in practice, and how much work prospective C-replacements must do to be competitive.


> This illustrates how C's weaknesses are mitigated in practice

Except there's likely a false sense of security in how well those mitigations work. Part of the problem with C is that later compiler optimizations can yield insecure code at a later date, because of undefined behavior and compiler optimization techniques that take advantage of it. In other words, if your generated code takes advantage of any undefined behavior, there's no guarantee that the same code compiled with the same flags but with a different/newer version of that compiler will yield bug-free code in both cases.
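
A minimal sketch of that failure mode (a made-up function, not from the case linked below):

    /* This overflow check relies on signed wraparound, which is undefined
       behavior in C. An older or less aggressive compiler emits the intended
       runtime test; a newer one at -O2 may assume signed overflow cannot
       happen and delete the branch entirely. */
    int add_checked(int x) {
        if (x + 100 < x)    /* "impossible" under the no-overflow assumption */
            return -1;
        return x + 100;
    }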

Can those models produce C code with absolutely no undefined behavior? Maybe? When's the last time someone took a close look at exactly what they were generating? Did they make sure to look again when the underlying architecture changed (even from one version of ARM to another...)?

See the somewhat recent Cap'N'Proto remote vuln[1] submission for a modern case of this, and a good discussion of the problem in detail.

1: https://news.ycombinator.com/item?id=14163111


Yeah, testing needs to be done with compiled code on the target card, otherwise you can't be sure it works. Usually compilation is super strict - no optimization beyond what the standard allows, etc. Only specific cards and compilers are "trusted" for use.


Yes, and the code/models go through extensive testing. Everyone in the industry knows that a recall because of faulty software will cost an insane amount of money. This is changing a bit since more and more work is going towards over-the-air updates.


It's true, but I'd argue that they're not really "coding" in C or C++, in the way it's commonly understood, in those critical environments. It's more like a "fill in the blanks" exercise where everything is split into tiny functions that are thoroughly specified and tested.

Effectively 90% of the dev work is done higher level by the various specification and constraint tools. All that's left to the C coder is the most menial of tasks, "dumbly" implementing the algorithm exactly as spelled out in the spec. And even then every piece of code will be checked multiple times before being validated. There's simply not enough leeway for the coder to make a significant mistake, or at least that's how it's supposed to be.

Clearly generalizing this approach to all of software development seems impractical. It's extremely costly and makes it hard to make quick changes to the codebase. For non-critical applications it's probably overkill. There's room for a compromise I think.

The problem with C IMO is that on its own it's highly and uselessly unsafe. At the time of its conception it was probably impractical to write a compiler that would check things like lifetimes statically, because it would've made compile times unbearably long, but nowadays that's not really a problem anymore.

Do you really encourage people to write critical code in a language whose compiler gladly compiles code like this, without even a warning (tested with -Wall -Wextra on gcc 6.4)?

    int test(void) {
      int i[3] = {0};

      return i[8];
    }
And the following code only with a non-critical warning:

    int *test(void) {
      int i;

      return &i;
    }
Isn't the entire hacker mantra to have the computer work for you? I'd prefer if my compiler was actively trying to detect my mistakes instead of dumbly spewing machine code.


  > int test(void) {
  >     int i[3] = {0};
  >
  >     return i[8];
  > }
clang will warn. gcc will warn with -Wall -O2.

  > int *test(void) {
  >     int i;
  >
  >     return &i;
  > }
You are free to use -Werror.

I encourage people to learn to use their tools first, instead of whining about the language when it is their tools that do not do what they want by default.


Oh interesting - GCC's warning for the first code snippet only occurs with optimizations turned on. Still only a warning though.

And I don't think -Werror is a good general-purpose solution, because it's too broad. Many warnings are just fine as warnings, and for big projects that build on several operating systems using a lot of 3rd-party libraries, there are bound to be a few harmless warnings here and there. For instance, the perfectly valid "if (a && b || c)" will warn with -Wall. I guess I'd want a "-Werror-on-serious-warning".
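
For reference, the warning in question (a sketch; `f` is made up):

    void f(int a, int b, int c) {
        if (a && b || c) { }    /* -Wall via -Wparentheses: "suggest
                                   parentheses around '&&' within '||'".
                                   Valid code, since && binds tighter than
                                   ||, yet fatal under a blanket -Werror. */
    }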

That's why sane defaults matter: if those warnings were errors to begin with, they would be fixed upstream; if instead you decide unilaterally to turn on -Werror for your code, then you're screwed if your third-party headers weren't as stringent with their coding rules.


> Oh interesting - GCC's warning for the first code snippet only occurs with optimizations turned on.

Right. So GCC isn't really a static analyzer. But it does the analysis that is required to perform certain optimizations, and as a side effect you also get some new warnings with the optimizations enabled.

> And I don't think -Werror is a good general purpose solution because it's too broad.

It sure is too broad for all existing software to compile, but if we're talking about security-critical software, what is with this double standard of bashing C for the lack of stringent checks per the standard and at the same time requiring compatibility with third party software that isn't built to the same standard?

I think if you're doing something security critical, then you must want the same stringent checks from all the libraries you use. Which means you choose the good libraries, or make your own, or fix the existing ones. Yes, positive encouragement from sane defaults could push other projects into doing the right thing, but as long as they don't, you really don't have to use their bad code in your security critical product. Yet, it is entirely usable. OpenBSD for instance compiles the kernel and libcrypto among other things with -Werror. Other parts of the system may be built with less stringent checks.

It is, by the way, possible to turn only specific warnings into errors with -Werror=... Likewise, you can disable specific annoying warnings.

I agree that it would be nice if compilers provided a sane set of warnings (and errors) out of the box without any extra tweaking. I think we're getting closer and closer thanks to clang pushing things in that direction. On the other hand, I don't suppose people will ever agree on which warnings are the right ones. I'd say most people who want stringent checks will enable -Wall and -Werror, and they will add the parentheses in the expression you mentioned, even if it is completely legit without them.


> It sure is too broad for all existing software to compile, but if we're talking about security-critical software

The problem I have with this is the assumption that software you write that may be used by others can easily be classified into security-critical and non-security-critical. Sometimes you don't know; sometimes it changes ten years later as something gets pulled in and used because it's easier - multiple times over the years, until the original author of a portion of the code may be entirely unknown, much less their intention as to whether it was sufficiently robust for security-critical situations (think of an OSS library that may have been forked a few times, with small bits of code such as utility functions copied from other projects).

It's time to switch to safety by default. We should fail safe, and require flags to allow previously failing code to compile, not to just hear that it might cause a problem.


Right, that is why I proposed that people use -Werror, even if that means you have to vet and fix third-party code (which you should do anyway if you're depending on it). Ideally you use other tools too.

Apart from that, don't assume flags and static analyzers can save you. If you're building something security critical, you do not make any assumptions about the code you depend on. You don't assume it is in some class. You don't "don't know", you vet the code. Flags and analyzers, default or not, are at best going to save you from a small subset of problems. The code might still be completely backwards and insecure.


The aviation/automotive code doesn't get deployed onto millions of personal computers of countless varieties and locales, endlessly probed by professional hackers for vulnerabilities. The aviation/automotive code runs in a very well-defined environment, almost completely locked down and unavailable for user interaction for the most part. It's unfair to compare it with open-ended applications like browsers, which have to handle tons of formats, extensibility APIs and very complex and often incomplete protocols with numerous versions and compatibility issues. Two very different worlds.


The problem with security through obscurity (which is what this approach is analogous to) is that obscurity can't really be guaranteed, and you usually can't be sure exactly how your code will be implemented in the future. Here are some somewhat recent examples of that, where a Jeep was remotely controlled because the system's bus wasn't correctly segregated[1], and where an avionics messaging channel may be exploitable to send bogus information between planes and air traffic control[2].

1: https://www.wired.com/2015/07/hackers-remotely-kill-jeep-hig...

2: https://defcon.org/images/defcon-22/dc-22-presentations/Pols...


> I cannot consistently write safe C/C++ code. I'm not ashamed of that; I don't know anyone else who can. I've heard maybe Daniel J. Bernstein can

For anyone that thinks this, you might be interested to find that DJB himself apparently thinks C, or at least the current state of C as implemented by the common compilers, is untenable for important, long lived software.[1] At least, that's how I interpreted his position. He was responded to as you would expect, unfortunately.

Credit where it's due. [2]

1: https://groups.google.com/forum/#!msg/boring-crypto/48qa1kWi...

2: https://news.ycombinator.com/item?id=14170585


> I cannot consistently write safe C/C++ code.

I'm not sure how to interpret what this means. What do "consistent" and "safe" mean? Is safety about not corrupting program data? Even when dealing with Python arrays, I can end up corrupting my arrays one way or another (off by one, race conditions, etc.). Is "consistent" about going days without a bug? Because I can't do that in any language. If not, what do these mean? It'd be really nice to know what other languages he can write safe code in, so we can have something to compare to.


Of course, no programming language is perfect, including its libraries and related implementations.

In all of them it is possible to introduce logical bugs.

The problem with C and its derived languages is that one not only has the logical bugs common to all programming languages, but also the memory corruption and UB-introduced bugs to worry about.

While one may think of oneself as super competent and make use of all tools to reduce such error cases, there are always situations where such errors get introduced due to fatigue, project pressure, or continuous interruptions.

Additionally since most of us don't work alone, the actual code quality is an average of everyone that has ever worked on the code, and not everyone shares the same goals regarding quality of their work.

I share the author's feeling. I always tried to follow C and C++ best practices; when coding in C I adopted or evangelized tooling, safety standards and well-known books that lead to safer code.

Yet, like everyone else, I had my share of memory corruption issues back in my C and C++ days.

One anecdote was trying to track down a memory leak that was bringing down a server in production, with the customer calling technical support every single day.

It took one week to track it down, and it isn't something I would advise anyone to experience.


The good thing about C is that there is amazing tool support that can make it as safe as any other language. The bad thing about C is that most developers don't know about (or don't use) all the tools.


They don't even use their compiler... case in point: https://news.ycombinator.com/item?id=14787072

Of course they always just blame the language. Like the language needs to be responsible for the implementation and its proper use.


Any tool that requires special knowledge to use safely, rather than defaulting to safe operation and requiring special knowledge to use unsafely, is poorly designed. If the design specification for that class of tools encourages or requires that, it's a poor specification indeed.


I think it is clear what he means from the remainder of the blog post:

> I see a lot of people assert that safety issues (leading to exploitable bugs)

It is obvious that he is referring to typical C/C++ safety issues: buffer overflows, use-after-free, etc.

> Even when dealing with Python arrays

You can introduce security vulnerabilities in any programming language. But safe languages exclude a host of memory-related vulnerabilities, which are a substantial proportion of all vulnerabilities.

Safety is not binary, some languages provide better safety or better means to model domain data safely than other languages.


You can have "safe" buggy code as long as you avoid the subset of bugs that can open a security vulnerability. It's possible to make such a mistake in any language but some make it harder than others. C's laissez-faire approach to memory management makes it very easy to introduce a small bug that leads to a major security vulnerability.

Indexing out of bounds in python throws an exception. Indexing out of bounds in C triggers the dreaded "undefined behaviour". Here be dragons.


> Is "consistent" about going days without a bug? Because I can't do that in any language.

Amen to that. Even when I feel like I cover most of the cases, the sands of time will slowly eat away at the foundation, things become deprecated, and new error conditions added all the time.


Consistent as in you can do it while focusing on solving business problems, instead of only when safety is the primary thing you're concentrating on.

Safe as in a minor bug is unlikely to be an exploitable security hole.


I've worked in "IT security" as a C programmer for about 10 years. I both agree and disagree with this article.

A competent C/C++ programmer will have a lot fewer problems like buffer overflows and crap like that; I don't think a buffer overflow has been found in any code I've written during my 10 years as a C programmer.

I have still written code that has security issues, though; most of them stem from poorly designed code and are not necessarily a language problem.

I'm not claiming to be superhuman here - I've had my fair share of gotchas, like off-by-one errors and issues with pointer arithmetic when refactoring code, and so on. We might actually reduce the time needed to verify that C code is safe if we change to another "safer" language, but I'm 100% sure that you still have the issue of poorly designed code even with a "safe" language. And from my experience, those problems are a lot harder to find, since you need to understand how the code base works and how it fits together to find them.


This is a rebuttal against "use a safe language and all your security problems go away entirely" but that is not generally the argument being advanced. The argument that is generally advanced is "use a safe language and some of your security problems go away entirely".

Put another way, people are arguing for airbags to become much more common, and your rebuttal is "I've gotten into some accidents, and I've gotten hurt in ways an airbag would not have helped". That's entirely possible, but irrelevant to the argument at hand (unless you also state that you never get into accidents where an airbag would help).

Edit: Stating your position as a rebuttal may have been overstating it a bit. It's entirely possible you're just attempting to add information to the argument, in which case please read my comment as attempting to do the same.


I agree, but I still believe that one issue here is the lack of understanding of what the process around software development should be.

We would get rid of some issues if we used a safer language, but the real issue is that we don't find the issues; the attacker does instead. So people are finding the issues - but why are the people writing the software not finding them?

I believe that you should have a development team that makes sure there are no issues to be found, no matter the language you are writing your application in. That means you run the same tests no matter the language, so in the end it doesn't matter what language you write it in. And you choose a language that fits the problem; you don't make the language fit the problem.

So I think your analogy of an airbag is wrong in some sense. The issue isn't whether we have an airbag or not; the issue is that we don't test whether we have an airbag, and then go "Whoops, the airbag didn't deploy in the crash and somebody died".

We as programmers like to think of ourselves as engineers, but we don't treat the profession as engineers do: we very often deploy code we know is not tested - we might even know it is buggy. You open yourself up to a lot of damage if you do that as a bridge builder (even though it has happened).

I'm tired and this turned into a rant, but I hope that my point comes across.

EDIT: I don't mean that we should write bug-free code; I mean that we should strive for code without security issues. It can be done: I work at a place where we have written code - not only C code - for 15 years or more without any remotely exploitable holes.


> We would get rid of some issues if we used a safer language, but the real issue is that we don't find the issues, the attacker does instead. So people are finding the issues, but why are not the people writing the software finding them?

Attackers aren't finding all the issues. They are finding issues in the small subset of software they actually bother to examine. That programmers aren't finding the issues illustrates, I think, both the effort it takes to always be correct and the different skill sets involved. It doesn't take a good C systems or application programmer to find a lot of the common C unsafety errors in question. It takes someone who knows the C memory model and how such errors are commonly leveraged. In some cases it takes someone applying new fuzzing techniques to expose certain edge cases more consistently than prior fuzzers did.

> That means you run the same tests, no matter the language, so in the end it doesn't matter what language you write it in.

The important point you are assuming is that the language (or current implementation of it) won't change out from under you in a way that makes that assumption invalid. See my other comment in this discussion regarding DJB, and the HN comment I link to for a good discussion about why that's so.

> And you chose a language that fits the problem, you don't make the language fit the problem.

When you have two languages roughly similar in capability, but one allows errors the other doesn't, in what case is the one that allows the errors the better fit?

> So I think your analogy of an airbag is wrong in some sense. The issue isn't weather we have an airbag or not, the issue is that we don't test if we have an airbag and then go "Whoops, the airbag didn't deploy in the crash and somebody died".

Then you're misunderstanding my analogy. The car isn't the program written, the car is the compiler. The route driven is the program. You may make an error on the drive, but let's let the car save us in those cases where it's obvious it can and should. Sure, making sure your coworkers check you've secured the pillows to the steering wheel before every trip works, but it should be obvious why that's sub-optimal in multiple dimensions.

> I work at a place where we have written code for 15 years, not only C code, or more without any remote exploitable holes.

I congratulate you on your diligence (sincerely, it is an accomplishment to get to the level where you feel you can say this), but that's a strong assertion in at least one possible interpretation. Perhaps you meant no remotely exploitable holes found? The interesting question that immediately arises from that clarification is whether anyone has seriously looked? Companies that care about this hire pen testers. I hope yours does as well.


I bet there's at least one bug in code you wrote 10 years ago relating to buffer overflows caused by integer overflow. You may have been checking every input against the size of your buffer, yet still have had a buffer overflow. Every integer addition when dealing with buffers is suspect.
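
A sketch of the pattern meant here (a made-up function, not anyone's real code):

    #include <string.h>

    /* Looks carefully bounds-checked, but if offset + len wraps around
       (both are unsigned), the check passes and memcpy runs out of bounds. */
    int store(char *buf, size_t bufsize,
              const char *src, size_t offset, size_t len) {
        if (offset + len > bufsize)     /* can wrap modulo SIZE_MAX + 1 */
            return -1;
        memcpy(buf + offset, src, len);
        return 0;
    }

    /* A non-wrapping form of the same check:
       if (offset > bufsize || len > bufsize - offset) return -1; */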


The more you know as a C++ programmer, the more you're aware of all the things that can bite you and all the things you need to do to make sure you don't get bitten. Writing code within that cognitive envelope is a tax that's often underestimated. After spending some time out of that world, I find it amazingly tedious to get back into it.

If you're not aware of many of the pitfalls, you might not even be aware of the envelope you should be aware of.


Yet another subtle pro-Rust rant against C.

Programming is hard, and writing safe code requires knowledge, not just in C, but in every single language. Even in formally verifiable languages you can make mistakes: maybe you won't make errors by using sprintf, but you can make others because you're using a more complex language.


Good risk management is about cost vs payoff analysis. To address the risk of car accident, first you stop juggling chainsaws while driving, then you start to wear seatbelts, and then you start thinking about how the remaining inevitable driving mistakes can be mitigated through technology or practices.


I guess using C is analogous to juggling chainsaws in your analogy. But is that actually less safe?

Consider a component actually responsible for preventing car accidents, such as the anti-lock braking system. It will consist of input sensors, output signals, and an embedded computer running some code. All of these components have gone through an extensive qualification process. On the software side, the coding standard requires that all loops be manifestly finite, no `continue`, no recursion, etc. This coding standard is enforced by the compiler, and the compiler itself must be certified, an extensive and difficult process on its own. Redundancy and review are applied at multiple levels, with tons of supporting tooling around requirements tracking and the like.
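
As a sketch of what "manifestly finite" means in such standards (the names here are invented):

    #include <stdint.h>

    #define NUM_WHEEL_SENSORS 4u

    static void check_sensor(uint8_t id) { (void)id; /* hypothetical stub */ }

    void poll_sensors(void) {
        /* The bound is a compile-time constant and the counter is never
           modified in the body, so termination is trivially provable. */
        for (uint8_t i = 0u; i < NUM_WHEEL_SENSORS; ++i) {
            check_sensor(i);
        }
        /* Typically forbidden: a while-loop with no provable bound,
           `continue`, recursion, goto. */
    }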

Do that, and prove it over many years, and you can juggle chainsaws safely - or ride an explosion under your feet across the country.

The key idea here is that things that have been proven safe are safe. Using something unproven like Rust would be nuts. Think about how big LLVM alone is! Rust doesn't have any of that - not the tooling, qualification, standards, or track record. It could be done, but you would need to pour tons of resources into it to bring Rust up to the level that C has in this space.


This sounds like a very specialized C subset with nonstandard compilers, tooling and highly restricted functionality. Why use C as the starting point for such a language, instead of a hereditarily safer language like Pascal, Ada, etc.?


Rust has yet to prove that.


Safe subsets of C exist already. Any safeguards active at compile time could in principle be achieved via static code analysis. Only, if that means making assumptions and transformations to a different type system, that's like putting an old engine in a new car.


Sure. And the best "insurance" is having competent programmers.


Yes, competent drivers help, but I'm not sure a competent driver juggling chainsaws is better at driving than a bad driver not juggling chainsaws.


There seems to be no major C system not infested with errors most other languages forbid by design. The question is, do those other languages introduce their own types of errors? In my experience, no.

Do they fix all problems? Of course not. Neither does a hard hat.


Do you know any big code base without bugs? That's not exclusive to C; check any big Java code base. Regarding Rust, I would like to see the kinds of bugs that arise in its big code bases (thread contention, performance issues, etc.).


I have to say that I feel the problem here isn't a lack of competent programmers, though. It's competent testing that is lacking.

Sure, it's possible to make a lot of stupid mistakes in C, but most of these mistakes should be found in testing, before the code is deployed. You'll probably say "we don't even have those problems in language X," and I agree, but you should run the same tests no matter the language, because you want to find as many problems as possible, even if you write your code in language X.
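
For instance, a fuzz harness is exactly that kind of language-agnostic test. Here's a sketch against LLVM's libFuzzer entry point, with a hypothetical parse_message as the code under test:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical function under test. */
    int parse_message(const uint8_t *buf, size_t len);

    /* libFuzzer calls this with generated inputs. Building with
       -fsanitize=fuzzer,address turns memory errors into immediate
       crashes; the same input corpus could drive a parser written
       in any other language just as well. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        parse_message(data, size);
        return 0;
    }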


Rust does not appear on that page even once.


Reading between the lines, since he mentions Mozilla, Rust is likely a motivation for this post in some manner (even if loosely). That said, calling out this post as a pro-Rust rant against C completely ignores the fact that it might also be right; origin is irrelevant.

Positions should be evaluated based on their merits, not their origins.


I wonder how anyone can even use computers these days, since C and C++ are so unsafe. My OS is written in C and C++, as are my browser, my word processor, my shell, and most of the applications I use daily. The firmware in my phone, my set-top box, and my car is also written in those two languages, as are all the major web servers. Maybe Rust programmers are an exception, but I personally spend 99.99% of my troubleshooting time (in any language) on fixing logical bugs that I make. It's hard to get excited about a language like Rust when it seems its community is only interested in spreading FUD.


"So unsafe."

The thing is, we (C programmers) are a careful bunch and we like to treat everything as a potential security issue. Even if the likelihood of exploitation on a real world system is literally nil. People see these fixes and possibly associated security advisories, and they think it's all exploitable. Then one actually exploitable bug from some 15-year-old code gets publicized and the publicity spreads everywhere. Look, another exploitable bug! C is sooo friggin bad and every line is an RCE waiting to happen!

I think the safety issue is blown way out of proportion because of that.

It doesn't help that when people discuss UB, they assume the most adversarial compiler possible, one that does everything it can to turn the UB into a big problem, and that everyone must be using that compiler without the flags that turn off the bad behavior.
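
The canonical example (behavior varies by compiler and optimization level, so treat this as a sketch): signed overflow is UB, so an optimizer may assume it can't happen and delete the very check meant to detect it. Flags like GCC/Clang's -fwrapv make signed overflow wrap and keep the check alive.

    #include <limits.h>

    /* Intended overflow check - but x + 100 already overflows for
       large x, which is UB for signed int, so the compiler may assume
       x + 100 < x is always false and drop the branch entirely. */
    int add_checked(int x) {
        if (x + 100 < x)
            return INT_MAX;
        return x + 100;
    }

    /* Well-defined version: test before doing the arithmetic. */
    int add_checked_fixed(int x) {
        if (x > INT_MAX - 100)
            return INT_MAX;
        return x + 100;
    }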


Before 1990, other than UNIX related software, that was hardly the case.


How about you start writing C++ and not C/C++ code?

The two languages are so different, it isn't fair to mention them as if they were the same.


The author worked on Firefox, a largely C++ codebase, so he was writing C++.

I assume he feels that the problems he talks of are common to both languages, if different in magnitude.


C++ is certainly a different language now, but with a little care you can write code that compiles as either C or C++. Everything I write compiles as both (yes, I cast all the void* allocations and choose to ignore the C convention that this is verboten).
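
A sketch of that style: the cast on malloc is redundant in C but mandatory in C++, so writing it keeps one source file valid in both languages.

    #include <stdlib.h>

    /* Compiles as C and as C++: the cast satisfies C++'s stricter
       void* conversion rules and is merely redundant in C. */
    double *make_table(size_t n) {
        double *t = (double *)malloc(n * sizeof *t);
        return t;   /* caller checks for NULL and frees */
    }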


I'd be interested in trying an online coding exercise like the one he mentioned. I've definitely fallen into the trap of thinking that my code was robust because it worked for my specific use case (and no one tried to break it). Now I feel much more skeptical that my code works at all.


Safety, speed, language convenience.

Pick 2.

Safety is an issue wherever you're programming; good programmers just know how to mitigate it.

And to be really honest, you don't always need to use C++. Just use it where performance is needed, write good code, review it, etc., and when you need to go higher level, use Python or another scripting language. C++ should be used to write libraries.

Not much software in the world needs to be written entirely in C++, and when it is, more time is spent writing and fixing it.
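
A minimal sketch of that split (hypothetical names): keep the performance-critical core in a compiled library behind a C-linkage surface, and drive it from a scripting language via its FFI (e.g. Python's ctypes).

    /* perf_core.c - built as a shared library, e.g. libperfcore.so.
       The plain C signature keeps it callable from Python via
       ctypes/cffi, or from any other scripting language's FFI. */
    #include <stddef.h>

    double dot(const double *a, const double *b, size_t n) {
        double acc = 0.0;
        size_t i;
        for (i = 0; i < n; i++)
            acc += a[i] * b[i];
        return acc;
    }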

"There are languages people complain about, and languages nobody use".

I don't dislike Rust, but when I read its syntax it doesn't seem simple to learn, read, and write; the learning curve seems pretty steep, so it won't really do well with students.


Isn't this the same as saying you can't consistently write bug-free code?

And isn't this sort of the holy grail of CS?


You are conflating "safe" with "bug-free", when safety has a specific meaning in this context: memory safety. Safe code eliminates a class of errors, and thus reduces the possible bugs your program may exhibit.

At this point we can eliminate certain classes of bugs, sometimes with no runtime cost. The usual example is Rust, where extra information supplied in the source is used to ensure certain bugs are impossible at compile time. It requires some extra work, but likely quite a bit less than the more extreme coding standards used to produce safe C code, though those may cover a different or larger set of bugs.
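
To make "a class of errors" concrete, here's a sketch of one member of that class, a use-after-free. It compiles cleanly and may even appear to work, and it's exactly the kind of bug that memory-safe languages reject at compile time or prevent by construction.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *name = malloc(16);
        if (!name)
            return 1;
        strcpy(name, "hello");
        free(name);
        /* Use-after-free: undefined behavior and a classic exploit
           primitive, yet no C compiler is required to complain. */
        printf("%s\n", name);
        return 0;
    }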


I really wish we would disconnect C and C++. They're different languages used entirely differently by different people, and C++ gives C a bad rep.


C is more than capable of getting a bad rep on its own.


I don't know what the Dunning-Kruger effect is, but I can tell you for a fact I'm too smart for it to affect me.


Nice post. There are still too many C/C++ wannabes who think C is an awesome language and that they are awesome hardcore programmers. If they were challenged to code something, it's highly probable that flaws would surface.


There are still too many C/C++ wannabes

There are still too many people using the term C/C++ :] It never really was a thing, even in the beginning (just take e.g. destruction at scope exit: that alone makes C++ a very different language), and it's even less of one now with the new standards.

Anyway: a language can be considered awesome by people despite its flaws. That has always been the case and it never will change, as there will probably never be languages without any flaws. (E.g. I think Python is awesome; others never cease summing up its flaws; I couldn't care less, as it gets stuff done in particular situations.) Just as it's inevitable that every single programmer, including you, writes flawed code from time to time, challenged or not.


I'm actually going to defend the term.

For starters, the languages are in practice quite similar as used in many codebases (e.g. C++ codebases where they just don't use destruction at scope exit, have exceptions and RTTI disabled outright, and eschew modern features in favor of targeting older compilers, etc. - or C codebases where GNU extensions are used to get automatic cleanup at the end of scope!)
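
The GNU extension alluded to is __attribute__((cleanup)), which buys C a rough analogue of destruction at scope exit. A sketch:

    #include <stdio.h>
    #include <stdlib.h>

    static void free_ptr(char **p) { free(*p); }

    /* GCC/Clang C extension: free_ptr runs automatically when buf
       goes out of scope, on every exit path - a C approximation of
       C++ destructors. */
    void use_scratch(void) {
        __attribute__((cleanup(free_ptr))) char *buf = malloc(64);
        if (!buf)
            return;           /* cleanup still runs: free(NULL) is fine */
        snprintf(buf, 64, "scratch");
        puts(buf);
    }                         /* free_ptr(&buf) invoked here */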

Further, headers are routinely set up to be valid as both C and C++, with a few #ifdef __cplusplus extern "C" blocks to help manage name mangling. And I'm struggling to think of a significant codebase I've actually shipped that was pure C++, rather than C++ with at minimum a few customizations to some third-party C library embedded as part of the same build (to say absolutely nothing of system libs).
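
The routine setup in question looks something like this (a generic sketch):

    /* widget.h - usable from both C and C++ translation units. */
    #ifndef WIDGET_H
    #define WIDGET_H

    #ifdef __cplusplus
    extern "C" {
    #endif

    /* C linkage: no C++ name mangling, so the same symbols link
       whether callers are compiled as C or as C++. */
    int widget_init(void);
    void widget_shutdown(void);

    #ifdef __cplusplus
    }   /* extern "C" */
    #endif

    #endif /* WIDGET_H */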

I suppose you could argue it should in some cases be C/C++/Objective C/Objective C++/(C++/CX)/(C++/CLI)/...


> as there will probably never be languages without any flaws

Yes, but we can try to develop better languages. Rust is such an attempt, and that's good.

> Sure, but I don't belong to that fraction of programmers who think they write flawless code or develop a cult with old, error prone languages such as C.

The important thing is that we are not discussing programmers making mistakes, but a language which is old and which makes mistakes easy. C has proved in enough cases that it belongs in a museum.


it belongs in a museum

I agree that in C it's way easier to make mistakes than in e.g. Rust, but I don't agree that this (or whatever other reason) is proof that it belongs in a museum. Take MicroPython, for instance: it's a relatively young project written in C, and it runs a pretty complete Python 3 implementation on a variety of microcontrollers as well as on PC/mac/... Suppose we put C in a museum: what do we use instead to achieve the same functionality?


It makes me smile when someone starts off a complaint with something similar to "I've been programming in C++ for x decades. I have y qualifications that should impress you. I worked at z company which should impress you further. Yet, despite this, I cannot write safe C++ code".

The author's post, in particular, is excellent as it follows this token introduction with "I don't know anyone else who can write safe C++ code, either". This is further strengthened by "people I know to be skilled programmers (based on what, I don't know), have never professed that they can write safe C++ code, and therefore people who do, are obviously suffering from some psychological effect".

I mean, really? You can't do it, and your friends can't do it, so no one can do it? Even though there is a world of software out there that does exactly what you say can't be done? And further, anyone who says they can do it must be less skilled and suffering from some form of illusory superiority? Get over yourself.

How ironic that he knows about the Dunning-Kruger effect, but fails to apply it to himself.


> Get over yourself

The guy is simply describing his personal experience. Why don't you describe your own personal experience instead of attacking him?



