> That's not going to help. Many Windows users don't upgrade not because they fear the price of Windows 8, but because they can't stand it and prefer XP, as simple as that.
Or they don't want to bother with buying new hardware. A friend of mine is still using her 10-year-old laptop with 1 GB of memory and XP. And it works just fine for her purposes (web browsing, Gmail, a little office work).
For all C++ haters and C defenders: the cleanup/fail labels set result to 0 and perform a bunch of _gnutls_free_datum calls. This would not have happened in a properly designed C++ program because you could have just written "return 0;" and let the compiler release the resources (RAII, destructors).
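For illustration, roughly what that looks like -- a minimal sketch with made-up stand-ins for the gnutls types and free function, not the actual library code:

    #include <cstdlib>
    #include <memory>

    // Stand-ins for gnutls_datum_t and _gnutls_free_datum (not the real API).
    struct datum { unsigned char* data; std::size_t size; };
    void free_datum(datum* d) { if (d) std::free(d->data); }

    // RAII wrapper: the deleter runs on *every* return path, so there is
    // no cleanup label to reach -- or to forget.
    using datum_guard = std::unique_ptr<datum, void (*)(datum*)>;

    int check(datum* a, datum* b, bool ok) {
        datum_guard ga(a, free_datum);
        datum_guard gb(b, free_datum);
        if (!ok)
            return -1;   // both datums freed here...
        return 0;        // ...and here, with no goto in sight
    }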
IMO, C is no longer suitable as a systems programming language. It stopped being that a long time ago, and its only remaining value is ABI stability. (COM and similar technologies work with C++ and other languages, so C is unnecessary even for that purpose.)
What do you think of C++ and timing attacks? What do you think of the surface area of the C++ runtime itself? How do you find auditors for C++ code, when the standard is so big no one person could possibly understand it all?
These are genuine questions; it would be great to hear your answers. :)
Same underlying machine model => same mitigation techniques apply as in C.
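For example, the standard mitigation -- a constant-time comparison -- is written the same way in both languages. A sketch (the volatile is one common way to discourage the optimizer from short-circuiting; real code should be checked against the generated assembly):

    #include <cstddef>

    // Constant-time equality: inspects every byte regardless of where the
    // first mismatch is, so timing doesn't leak the mismatch position.
    // Compiles as C or C++; the machine model is the same.
    bool equal_ct(const unsigned char* a, const unsigned char* b, std::size_t n) {
        volatile unsigned char diff = 0;
        for (std::size_t i = 0; i < n; ++i)
            diff = diff | static_cast<unsigned char>(a[i] ^ b[i]);
        return diff == 0;
    }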
> surface area of the C++ runtime itself
No RTTI and no exceptions => no runtime. (By "runtime" I mean code necessary to support language features, not the C++ standard library. E.g., without RTTI and exceptions C++ is as suitable for building an OS as C is.)
Still, RTTI and exceptions are table-driven, and I'd worry about their integrity if somebody managed to change the RTTI and exception tables embedded in the executable. Largely prevented by signing executables. (Oh, the irony. :-))
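(For reference, with GCC and Clang that language-support runtime is dropped by compiling with the corresponding switches; module.cc is just a placeholder name:)

    g++     -fno-rtti -fno-exceptions -c module.cc
    clang++ -fno-rtti -fno-exceptions -c module.cc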
> How do you find auditors for C++ code [..cut]
More than half of the standard text is dedicated to the standard library. I've heard it said (I haven't checked myself) that the description of the core language is only slightly longer than that of Java or C#.
But the size of the standard is not that relevant. Reasonable C++ code is easy to write (for an experienced developer) and easy to understand, and auditors can always "fail" code when they don't know what's going on.
Auditing is expensive, so you have a lot of incentives to write reasonable code from the start.
Mercury is used in production with PrinceXML and ODASE. ATS is used in production in the implementation of a bitcoin mining pool. OCaml is heavily used by Jane St. SML (via the MLton implementation) is used in industry. Rust is not ready for production, I agree, but is being used to develop Servo by Mozilla and Samsung.
That said, I'd hope that systems like ATS, Mercury, MLton, and OCaml being open source makes it easier to contribute to the implementation when issues come up, and that this would offset any 'not enough real world' problems they have. If you don't like those languages, pick another (e.g., Haskell).
I wasn't setting up a straw man, I was just pointing out that it's possible to write subtly broken C++ (or Prolog, or Ada, or whatever) code as well.
Maybe it would make it harder to introduce this kind of bug (after all, goto-based error handling could be considered a hack because C lacks proper exceptions), but no language is bug-proof by design. If you're writing security-sensitive code you should have complete test coverage. If you have that, you could implement the library in assembly for all I care.
I'm not attempting to bash C++; it's a language I use from time to time. And I agree that RAII is a convenient pattern that I often yearn for in C. It's just that, in my opinion, in this case (as in the recent "goto fail" bug) that's not the core of the issue.
This is true. I think that people should have the capability of being successful, but only if they approach it like a full-time job. That doesn't, however, guarantee success; even a large operation like the last one I worked for can fail.
If it were easy, and everyone could do it and make as much money as working full-time without actually working full-time, then everyone would.
> What if the architecture doesn't have a carry?

Then the compiler has to generate one when needed.
Adding two N-bit numbers produces an (N+1)-bit result, and C gives you no direct means of accessing the full result. IMO, this is a language defect and has little to do with HW support. [This pertains to multiplication as well: N×N bits -> 2N-bit result.]
If the hardware actually returns the full result, excellent; if not, the compiler has to synthesize code to compute it, if needed. Whether it's needed is inferable from the code that follows, e.g., from the type of the variable the result is assigned to.
IMO, inferring that only a partial result is used (e.g., that the carry is discarded) is much easier for the optimizer than inferring what some instruction sequence is supposed to do (e.g., that 4+ multiplications and a few additions amount to a single 32×32->64 multiply).
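A sketch of the workaround C forces today, written here in C++ syntax but identical in C: compute in a wider type and let the optimizer recognize the idiom.

    #include <cstdint>

    // Full (N+1)-bit sum of two N-bit operands, via a wider type.
    // Optimizers generally map this straight onto the hardware add/carry
    // (e.g., a single 64-bit add on a 64-bit target).
    std::uint64_t add_full(std::uint32_t a, std::uint32_t b) {
        return static_cast<std::uint64_t>(a) + b;
    }

    std::uint32_t add_discard_carry(std::uint32_t a, std::uint32_t b) {
        return a + b;   // only the low N bits are used; the carry is dead
    }

    // Full 32x32 -> 64-bit product: one widening multiply on most targets,
    // instead of the 4-multiply sequence mentioned above.
    std::uint64_t mul_full(std::uint32_t a, std::uint32_t b) {
        return static_cast<std::uint64_t>(a) * b;
    }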
> At the time you're writing the Vehicle class, it seems perfectly reasonable to define a setPropulsionDevice() method
Are you claiming that it seems reasonable to be able to replace a car engine with a jet engine without changing anything else? (You'd have to change the car's hull, construction, etc., and after all those modifications it's not the "same" car anymore.)
Any example I come up with is going to sound contrived, because it is. Real examples get messy, and just add to the confusion.
But real issues show up due to fundamental problems with "is-a": Even if an X value "is-a" Y value, an X variable is not a Y variable (because it can't hold all objects of type Y). So passing a mutable object reference to a function is fundamentally different than passing an immutable object -- but inheritance hierarchies treat them the same.
You can get around this problem by not using inheritance hierarchies that work that way, or by always using immutable objects (value semantics). But I pretty much consider Java/C++-style inheritance to be a mess.
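A minimal C++ sketch of that difference, using the equally classic square/rectangle pair (all names are mine, purely for illustration):

    struct Rectangle {
        virtual void set_width(double w)  { width_ = w; }
        virtual void set_height(double h) { height_ = h; }
        double area() const { return width_ * height_; }
        virtual ~Rectangle() = default;
    protected:
        double width_ = 1, height_ = 1;
    };

    // A Square *value* is a Rectangle value, but a mutable Square variable
    // is not a mutable Rectangle variable: keeping width == height forces
    // the setters to change both dimensions.
    struct Square : Rectangle {
        void set_width(double w)  override { width_ = height_ = w; }
        void set_height(double h) override { width_ = height_ = h; }
    };

    // Written against Rectangle, this assumes the setters are independent;
    // handed a Square through the same reference, the assumption fails.
    double resize(Rectangle& r) {
        r.set_width(4);
        r.set_height(5);
        return r.area();   // 20 for a Rectangle, 25 for a Square
    }

With immutable values the problem disappears: a Square value is a perfectly good Rectangle value; only the mutable variable isn't.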
> But real issues show up due to fundamental problems with "is-a": Even if an X value "is-a" Y value, an X variable is not a Y variable (because it can't hold all objects of type Y). So passing a mutable object reference to a function is fundamentally different than passing an immutable object -- but inheritance hierarchies treat them the same.
That's not a problem with "is-a", it's a problem with unsound use of mutation in inheritance hierarchies. A mutating method can either:
1) Anticipate that it can fail, or at least that the mutation component can (however that is signaled, whether by return value or exception), on some combinations of argument values and object state, in which case it works fine as a method anywhere in an inheritance hierarchy (sketched after this list), or
2) Be guaranteed to succeed completely with any arguments of the appropriate types, in which case it's fundamentally unsound anywhere except in a final class (since declaring such a mutating method is essentially logically equivalent to declaring a method with a -- potentially additional -- return value whose type is simultaneously both the type of the class it is declared in and the type of the class it is used in).
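A minimal C++ sketch of option 1, reusing the circle/ellipse pair discussed below (names and signatures are mine, purely for illustration): the setter admits failure, so a subclass can refuse a mutation that would break its invariant.

    struct Ellipse {
        // Option 1: the mutation can fail, and the signature says so.
        virtual bool set_radii(double major, double minor) {
            a_ = major; b_ = minor; return true;
        }
        virtual ~Ellipse() = default;
    protected:
        double a_ = 1, b_ = 1;
    };

    struct Circle : Ellipse {
        bool set_radii(double major, double minor) override {
            if (major != minor) return false;        // would break the invariant
            return Ellipse::set_radii(major, minor); // still a circle
        }
    };

Callers must check the return value, but every subclass keeps its invariant intact.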
The circle vs ellipse problem is rather artificial and, if anything, it shows that modeling with types and everyday intuition are two different things.
If the rest of your program can handle general ellipses, it should also be able to handle ellipses whose minor and major radii happen to be equal (i.e., circles). The obvious solution is to not have a Circle class at all and _maybe_ equip Ellipse with an IsCircle() method. (Though, why would you care?!)
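A sketch of that suggestion (the epsilon-based predicate is my own choice, for illustration):

    #include <cmath>

    // One class for all ellipses; "circle" is a property, not a type.
    class Ellipse {
    public:
        Ellipse(double major, double minor) : a_(major), b_(minor) {}
        void set_radii(double major, double minor) { a_ = major; b_ = minor; }
        bool is_circle(double eps = 1e-12) const { return std::fabs(a_ - b_) <= eps; }
    private:
        double a_, b_;
    };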
See the C++ FAQ Lite, items 21.6 -- 21.8. (A quote from 21.8: "Here's how to make good inheritance decisions in OO design/programming: recognize that the derived class objects must be substitutable for the base class objects.")
The circle/ellipse problem is real, and it shows how mutability ruins people's intuitions about models and relationships. Note that mutability ruins programs in the same way, introducing subtle inconsistencies and invariant violations.
In engineering applications, it's perfectly sensible to use float for world modelling, which gives you a precision of about 1 in 10 million. Then you can use double for intermediate results and lose less precision in a long calculation, producing a more accurate result (rounded to float) in the end.
Also, if the dynamic range of your data allows it [and for physical applications it usually does], you can first rescale it to [0,1] in order to further increase precision. [It's the interval where floats are most dense.]
Having worked for a few years now in computational geometry, I'm more and more convinced that "float by default" is the way to go. Doubles should be used for intermediate computations.
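A tiny sketch of that discipline, assuming the long calculation in question is a running sum:

    #include <vector>

    // "Float by default": store the data as float, accumulate in double
    // so the long sum doesn't lose precision, round back to float once.
    // Precondition: !samples.empty().
    float average(const std::vector<float>& samples) {
        double sum = 0.0;                  // double intermediate
        for (float s : samples) sum += s;
        return static_cast<float>(sum / static_cast<double>(samples.size()));
    }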
Actually, 2^53 different mantissas, plus a few different exponents, depending on the function under test. Also, there's a huge number of NaNs, all of which are equivalent. For ceil/floor, it wouldn't make sense to test exponents larger than 54.
There is a huge number of double-precision NaNs in the absolute sense -- 2^53-2 of them. But they are a small portion of all doubles (roughly 1/2048), so omitting them from the tests does not help significantly with saving time.
And skipping the testing of large numbers and NaNs can lead to missed bugs. One of the fixed ceil functions handled everything except NaNs correctly. Implementations that fail on exponents beyond 63 or 64 are easy to imagine.
But testing doubles requires compromises. Testing special values, plus random testing, plus exhaustive testing of the smallest ten trillion or so numbers should be sufficient -- better than the current status quo, at least.
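A sketch of the exhaustive part of that strategy, sweeping bit patterns from zero upward; my_ceil is a hypothetical stand-in for the implementation under test, with std::ceil as the reference:

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Reinterpret a 64-bit pattern as a double (std::bit_cast in C++20).
    static double from_bits(std::uint64_t u) {
        double d;
        std::memcpy(&d, &u, sizeof d);
        return d;
    }

    // Hypothetical stand-in: replace with the implementation under test.
    double my_ceil(double x) { return std::ceil(x); }

    int main() {
        // Exhaustively test the smallest positive doubles (no NaNs in this
        // range); raise the limit toward ~10^13 for a run like the one
        // described above.
        const std::uint64_t limit = std::uint64_t(1) << 20;
        for (std::uint64_t u = 0; u < limit; ++u) {
            const double x = from_bits(u);
            if (my_ceil(x) != std::ceil(x))
                std::printf("mismatch at bit pattern %llu\n",
                            (unsigned long long)u);
        }
    }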