The general idea is straightforward: "The unchecked C pointer type * is kept and three new checked pointer types are added: one for pointers that are never used in pointer arithmetic and do not need bounds checking (ptr), one for array pointer types that are involved in pointer arithmetic and need bounds checking (array ptr), and one for pointer types that carry their bounds with them dynamically (span)." So this is really a new language derived from C, to which programs can be converted.
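For a flavor, here's a hedged sketch in the spec's template-style spelling (the compiler that actually ships spells these _Ptr, _Array_ptr, and so on, so treat the syntax as illustrative rather than exact):

    int x = 0;
    int buf[10];

    ptr<int> p = &x;                     // no arithmetic; needs only a null check
    array_ptr<int> a : count(10) = buf;  // arithmetic allowed; accesses checked
                                         //   against the declared count of 10
    span<int> s;                         // a "fat" pointer that carries its
                                         //   bounds with it at run time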
This is basically a good idea, but it's only useful if pushed hard by somebody like Microsoft. People won't convert without heavy pressure.
This seems interesting to me. I liked the way CCured did inference to document how you used and abused your pointers, but what it did with that information was maybe not ideal.
Add `checked` to array declarations (for the simplest case they've implemented).
I'm not sold on this project as we haven't seen any results of course, but the approach is simpler than switching to C++.
The problem here is compiler support and performance.
Of course, but those will come in time. The point is that C++ isn't any safer (actually, I'd say it's riskier in some respects, because of differences such as where C and C++ diverge on undefined behaviour), while Checked C is guaranteed to be safer, to a certain degree.
Quoting from the paper (https://github.com/Microsoft/checkedc/releases/download/v0.5...):
Reasoning about the correctness of programs with declared bounds sometimes requires reasoning about simple aspects of program behavior. To support this, lightweight invariants are added to Checked C. A lightweight invariant declares a relation between a variable and a simple expression using a relational operator. An example would be the statement x < y + 5. Lightweight invariants can be declared at variable declarations, at assignment statements, for parameters, and for return values. Checked C is extended with rules for checking these lightweight invariants. Just as type checking rules are defined by the programming language, so are rules for checking lightweight invariants. The checking of the correctness of programmer-declared bounds is integrated with the checking of invariants.
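If I remember the spec's examples right, these invariants hang off declarations and assignments as where clauses; a rough, non-authoritative sketch (syntax from memory, and the shipped compiler may well spell it differently):

    int lo = 0 where lo <= hi;          // invariant attached to a declaration
    lo = mid + 1 where lo <= hi + 1;    // invariant restated at an assignment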
The fact that you can fork is nice, but if the project is too reliant on the paid staff and doesn't have enough community around it to survive getting dropped by its benefactor, you'd better hope you have the resources to maintain it yourself. This can happen even with no ill intention involved, but the whole RoboVM debacle has convinced me that Embrace, Extend, Extinguish is not quite dead at Microsoft.
The RoboVM situation, while unfortunate, is a lesson for people not to rely on proprietary tooling for portable projects.
While I mostly code on JVM and .NET languages during the day, I use C++ for my hobby projects on mobile OSes.
The reason being that C++ is an open standard and officially supported in all SDKs, no need for third party layers like RoboVM.
A few similar lessons from my time spent on programming languages that weren't first-party on platform SDKs have made me always choose first-party languages when I have decision power in the selection process.
Yeah, that's why I said "Microsoft or anyone else". The only other time I mentioned Microsoft was in reference to RoboVM, which doesn't have much to do with Google or Facebook.
> The RoboVM situation, while unfortunate is a lesson for people not to rely on proprietary tooling for portable projects.
I agree, but the fact that RoboVM started off open source with a GPL licensed compiler was one of the main reasons I chose it. It's my own fault I didn't look to see how contributions were structured, or notice that very little code came from outside RoboVM, sure. That's why I'm advising caution in similar situations.
We're talking about Mozilla.
Netscape was a different company -- and besides we're discussing under the context of MS vs benevolent/FOSS entities.
Last month Firefox was ahead.
Source: http://de.statista.com/statistik/daten/studie/13007/umfrage/... (Statista, a German statistics portal. Sadly paywalled, but you can see the first paragraph, which puts Firefox's market share at 52%.)
Either Microsoft became the darling of the community, or, in addition to embracing open source, they embraced voting-ring tactics. Those are the two explanations I have; maybe there is another. Maybe my criticism really was obnoxious...
And the thing about open source projects is just that: they're open source. If the wider community takes hold of them, there's not much Microsoft can do, because then you'd just see a fork.
You don't have to be a fan of Microsoft to appreciate the contribution to open software for what it is.
And 'forced' is a stretch anyway: how to block it is well documented and was published months in advance. And you can revert, which is something Google and others won't let you do.
I also deleted all the Google apps I could find on my Android phone, and I avoid storing any of my personal data on it, because Google seems to believe it's their machine, not mine. I'd use some other mobile OS if I had any choice in the matter.
- Paid OS != Free OS.
- OS update != software update.
I paid for my phone, and I paid for my Windows license. I paid a lot more for the phone, which Android is trying to forcibly update without my consent to a version that will substantially cripple its functionality.
"Power to the people" has always been the whole point of personal computing, as far as I'm concerned. Give people control over their data, their processes, their lives; that's why we're doing this, isn't it?
That's why I'm doing it, at least.
I think the writing is on the wall. Computing devices will split into two diverging branches: one tailored for limited configuration, aimed at media consumption and non-tech users; the other providing (hopefully) full freedom to modify, aimed at IT companies and interested hobbyists. It will be quite an achievement if mechanisms to increase vendor control aren't increasingly integrated even into devices in the second segment, as things like Intel ME and UEFI Secure Boot show.
Now why does anything in your post matter?
I respect your right to have a different opinion. I don't have to respect the opinion itself. Hope you can see the difference.
The only thing I'm talking about is honesty. MS is trying to show they love open source just because they are so awesome and such hackers, but in reality it's just a profitable mask, and they are glad to wear it.
So, yes, we can edit those TeX documents they put on GitHub to our heart's delight.
Doesn't that look like an open source implementation? Or did I misunderstand something?
> The compiler is not far enough along for programmers to "kick the tires" on Checked C. We do not have an installable version of clang available yet. If you are really interested, you can build your own copy of the compiler:
Or maybe we stopped categorically demonizing technology because of its provenance and started evaluating it based on criteria like usefulness and applicability.

You know, as if we actually took the "engineer" part of "software engineer" seriously, instead of applying it to everyone who once wrote a PHP page for his uncle.
The argument is exactly to make that distinction. There is the technical and somewhat objective aspect of the language and its implementation, and there are the people who created the language, who use the language, etc. A bad language can have a great community, and vice versa.
Well, the "maybe in addition to embracing open source, they embraced voting ring tactics" part sure is.
I once defined a "strict mode" for C. The first step was to add C++ references to C. The second step was to make arrays first-class objects, instead of reducing them to pointers on every function call. When passing arrays around, you'd usually use references to arrays, which don't lose the size information the way reduction to a pointer does.
Array size information for array function parameters was required, but could be computed from other parameters. For example, UNIX/Linux "read" is usually defined as
    int read(int fd, char *buf, size_t len);

In strict mode, with the array size tied to the len parameter, that would become:

    int read(int fd, &char buf[len], size_t len);
You could do pointer arithmetic, but only if the pointer had been initialized from an array, and was attached to that array for the life of the pointer.
    char *p = s;       // legal only because p is initialized from array s
    char ch = *p++;    // arithmetic allowed; p stays attached to s, so the
                       //   bounds are always known
There was more, but that's the general idea. A key point was that the run-time representation didn't change; there were no "fat pointers". Thus, you could intermix strict and non-strict compilation units, and gradually convert a working program to strict mode.
This took less new syntax and fewer new keywords than "Checked C". I was trying to keep C style, adding just enough that you could talk about arrays properly in C.
As an extra security feature, Perl also supports "taint" mode, which marks all user-supplied data as potentially hazardous to the program. It was used mostly for CGI, but it's still a simple and powerful security feature. Unfortunately, it was never adopted the way 'use strict' was, because it more often got in the way of the programmer.
The fundamental memory safety problem in C is that arrays are not first-class objects. They degrade to pointers when passed around. Yet any function which references an array has to know how big it is. Somehow. The language has no syntax for this. That's the big problem, and what I was addressing.
No. That excuse has been bandied about for years, but it's wrong, as we're now seeing as Rust gets better. C's problems with arrays come from lack of expressive power in the language. C doesn't even let you talk about array size in parameters.
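To make that concrete, C throws away the size in a parameter declaration; both of these declare exactly the same function:

    void f(int a[10]);   // the 10 is silently discarded...
    void f(int *a);      // ...so this is the identical signature

    void g(int a[10]) {
        // Here sizeof a == sizeof(int *), not 10 * sizeof(int):
        // the array's size never made it into the function.
    }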
Many of C's painful design decisions come from trying to cram a compiler into a PDP-11 with 128KB (not MB) of memory for a process. Global analysis on a machine that small while retaining reasonable compile times was hopeless. This is no longer a limitation.
I just used C++ for a script last month and I got a glorious increase in performance.
Not to mention that C++11 is, in contrast with C++98, a joy to write.
I thought I knew C very well, but I came to the realisation (after many memory bugs/leaks) that maybe that's not the case after all. What is it then? I can get performance out of it, lots of it (that's one thing I do know how to do, somewhat at least). That's when I questioned myself and started thinking that it's not that I know C (I don't, after 20 years or so), but that I'm very comfortable with it and don't want to give up that comfort. C++, with its 90s traumas and post-90s (STL) traumas, had a lot to do with that.

Now I keep fishing for newer languages that could replace C for me, Rust namely, but I still fall back to C all the time. I'm starting to think that's just the way it is: I'm a C programmer and that's how it will stay, at least for a long while. I've used a lot (A LOT) of languages in my past, but my core is always C (also the first language I learned). I don't program for a living anymore (I do "creative stuff" in film and TV now), so that matters more now than at any other time in the past. I do stuff for me only and I can pick and choose whatever I want, yet it's always C.
Sorry for the "rant".
Thankfully Rust gets my ML instincts going. I'm trying to use it more. It'll be nice not to be anxiously running my test suite under ASAN after every build.
Google sought ECMA standardization when they took on Dart. Go has an official specification, so someone could hypothetically re-implement it if they wanted to and remain compatible with existing Go projects.
Mozilla only wants Mozilla to be involved with Rust. They want everybody to use it in their systems, and have no input on where the language should go.
As for "they want everybody to use it in their systems, and have no input on where the language should go", you appear to be utterly clueless about how Rust is developed. Enormous swaths of the language have been designed by contributors who just showed up out of the blue and started putting in elbow grease. Iterators. Every data structure in the stdlib. Crucial, fundamental details of how the borrow checker operates. There's an entire repository for allowing anyone to propose changes to the language (a process which Mozilla employees are also forced to follow), which is a model that several other open source projects have since adopted (e.g. Swift).
While Mozilla may not be the best organization, the Rust team seems much more willing to listen to their users than the Firefox team.
"Let's make an improved C, and then write millions of codes in nothing but that" is a myopic non-starter.
While the total lines of code in libraries are fewer than in applications, I would say their relative importance is greater. That zlib has a fast and correct implementation matters to a lot of people.
When a bug comes out in zlib (or more likely, OpenSSL again), and we can attribute it to poor engineering in C, it doesn't really matter that all the applications are written in a HLL if they still loaded the library and used it because the performance was needed.
People apparently aren't aware that C++14 requires C99 libraries, hence the update.
Even the new refactored C library was written in C++ with extern "C" for its entry points.
The official wording is that those that still care about C compatibility on Windows should make use of the new clang frontend + Visual C++ backend.
MSR and real Microsoft are different business units.
Many Cyclone ideas made it into Rust.
I strongly prefer Rust.
Use-after-free is not a theoretical problem. All of the Pwn2Own vulnerabilities this year were UAF, for example.
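For anyone unfamiliar with the bug class, a deliberately minimal example:

    #include <stdlib.h>

    int main(void) {
        char *p = malloc(16);
        if (p == NULL) return 1;
        free(p);
        p[0] = 'x';   // use-after-free: the allocator may already have
                      // handed this block out again, so the write lands
                      // in someone else's data (undefined behaviour)
        return 0;
    }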
People could write safe code in the 70's. The point of C was to write high level (kinda portable) assembly. Not safety.
Can you give one example of this, please?
GCC: Nope. Why? Performance concerns.

MSVC: Nope. Why? Performance concerns.

Clang/LLVM: Nope. Why? Performance concerns. (LLVM has the opcodes to implement this, but Clang won't emit them when the C99 switch is enabled.)
... are an optional part of the C standard library: C11 Annex K.
So just use a library for those and you get support in every compiler!
> MSVC: Nope, why? Performance concerns
The MSVC runtime library does support an early version of that, and has for over 10 years. They contributed it to the standard, I think.
Example, sprintf_s, present in MSVC2005: https://msdn.microsoft.com/en-us/library/ce3zzk1k(v=vs.80).a...
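For reference, the Annex K variants take the destination size explicitly. A minimal sketch (Annex K is optional, and MSVC's decade-old _s functions differ in details from the standardized ones, so this only builds where an implementation provides it):

    #define __STDC_WANT_LIB_EXT1__ 1   // opt in to Annex K where available
    #include <stdio.h>

    int main(void) {
        char buf[8];
        // Unlike sprintf, sprintf_s knows the buffer's size and reports a
        // constraint violation instead of overflowing on oversized output.
        sprintf_s(buf, sizeof buf, "%s", "hi");
        return 0;
    }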
Regarding the comparison to Rust, Rust prevents use-after-free, while this doesn't seem to from a skim of the paper. Use after free is one of the most, if not the most, common remote code execution security issues in C and C++ code nowadays.
I'd love to see a citation on this. My gut feeling tells me buffer overruns and integer overflows are seriously in the running.
Before getting too excited and pointing out that this is only 1.3% of all CVEs or something, remember that it's 1.3% of all vulnerabilities, across every language and product. (Especially with the explosion of dynamic web languages, a lot of CVEs aren't really C/C++-related.) There's a power law to these things, so by that metric it's not far behind "buffer overflow" (6,500 entries), and it's ahead of the well-known "format string" (577), which is also certainly "one of" the most common C issues.
And no, this is not like asking for citations for the sky sometimes being cloudy because the original comment didn't say "use-after-free sometimes leads to remote code exploit".
This is like asking for citations for a claim like "whenever the skies are cloudy it is due to acid rain more than any other reason". And a claim like that should be accompanied with some citations.
Let's have an honest discussion here, or don't bother, please.
Is that the only information you based your comment on?
It's representative of the state of the art in attacking large, mature, modern C++ codebases.
> Is that the only information you based your comment on?
No, it's due to watching lots of security bugs go by over the last several years. I work in this space, you know.
I would say it is representative of attacking client side desktop browser software and plugins. That seems quite a bit less representative of all C and C++ software, most notably excluding server-side software.
> I work in this space, you know.
That's why I was hoping for something more than "trust me" as a citation.
Interesting. In times when many people advocate a safe C++ subset, Checked C grows the other way, adding C++-compatible notation to represent the most vital things like smart pointers.
I kind of wonder if they're working on automatic conversion tools between C and checked C.
At least for ptr<> this should be trivial: if a function does no pointer arithmetic with a * parameter and only passes it to calls that take ptr<>, it can itself be converted to a function taking a ptr<>.
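A hedged sketch of what that mechanical rewrite could look like (hypothetical function, using the spec's ptr<> spelling):

    struct node { int value; };

    // Before (unchecked): n is never used in pointer arithmetic.
    //   int get_value(struct node *n) { return n->value; }

    // After the hypothetical conversion: ptr<> forbids arithmetic and
    // needs only a null check, so nothing else in the body changes.
    int get_value(ptr<struct node> n) { return n->value; }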
0) If the application needs checks on anything (at the cost of performance), then a higher-level language (like C++) should be chosen at design time. No use for new applications.

1) Existing applications will NOT port directly. Real-life applications are tightly coupled to the compiler(s) they support, so the compiler would need the update. Errors/exceptions (like overflows) would need handling and changes in logic. It could only deny reads/writes to illegal areas, but without feedback. Speed is also a major concern: bounds checks could possibly prevent some bugs, but performance will drop dramatically (example: commonly used libs like OpenSSL).

One use case I do see is adding an extension (like GCC's, for instance) to an existing compiler that does this. Users could build a slower debug binary and spot the silent errors during testing. An implementation thing, not a language extension.
The phrase "couldn't be bothered" is assuming a lot of bad faith on Microsoft's part. While I would agree that in times past MS didn't deserve the benefit of the doubt, these days they're a different company.
With no special insight into their reasoning, I think the more likely answer to the question, "Why didn't they contribute this upstream?" is probably "Give it time."
We ban any account that evidence suggests is part of this campaign or anything similar. Users who think they see signs of this are invited to email us at email@example.com so we can investigate. Please don't level accusations of astroturfing at other users in threads, though. For every case like this one that really is astroturfing, there are dozens more of legit users simply holding divergent views. We don't want a culture of accusing others lightly.
We detached this subthread from https://news.ycombinator.com/item?id=11900781 and marked it off-topic.
In this case, I think much of what this user posted sounds pretty reasonable. I'm not a Microsoft fanboy or anything, but all the vitriol Microsoft gets in light of all the love that Apple and Google get seems silly to me.
I think the concerns some people have that Microsoft will quickly return to their old ways are legitimate, but also a little on the paranoid side. That argument is contingent on Microsoft actually regaining the market position they had in the 90s and early 00s, which, frankly, I don't think will ever happen.

Most developers now flat-out expect that their development tools will be Freely available ("Free as in Freedom", etc.); most end-users now expect that the "apps" they want will be available for free or at least very cheaply (with prices listed in cents rather than dollars). I've seen many comments in the Google Play store where users complain about the price of a $2 app, forget about $5 completely. Frankly, people simply aren't willing to pay for software anymore. Regardless of how you feel about that, it's obviously the case these days. As such, I don't think it's unfair to claim that "free software has won".
Furthermore, Microsoft seems to be catching a lot of flak for things that Apple and Google do regularly, yet many are willing to overlook. I recall a recent thread here on HN in which a user pointed out that Google's update practices are the same as, if not worse than, the recent Windows 10 update fiasco. Yet you don't see nearly as many people complaining about it.
Finally, it should be obvious that any for-profit business exists for the purpose of generating profit. Given that, it should be no wonder that a business's behavior will be driven by a motive of revenue. Especially in the case of publicly-traded companies, there has been a long history of business behavior motivated by shareholders' interests.
All that said, I have to trust that the HN moderators made the right decision here, but I honestly haven't seen this user make any claims in this particular thread that seem unreasonable. Yes, they are defending Microsoft in their comments, but sometimes I feel as though Microsoft actually needs defending in nerd-circles nowadays, simply because their checkered past makes many of us unwilling to accept that Microsoft may have actually made a change of face recently. Furthermore, is a pro-Microsoft shill really any worse than an anti-Microsoft shill? Going yet further, I've seen plenty of HN users who talk like pro-Apple shills, yet it seems that since so many around here agree with them, that's deemed acceptable.
Anyway, I just wanted to put in my 2¢. Thank you for all that you do to keep this place clean, even if I might not always agree on what constitutes cleanliness :)
Although I don't agree that that account's comments in the thread were reasonable (they were highly uncivil, which is a bit of a new tactic for them), you can't judge this kind of abuse by the reasonableness of individual comments alone. This person or organization does professional-level dissimulation and manipulation of online discussions to deliver false impressions and corrupt the community. I rarely use such strong language (indeed I post 100x as many comments asking users not to accuse each other of astroturfing). I do so in this case because they've done a lot of damage to HN over the years and because users need to know how seriously we take real abuse when we see it. Each time this person or organization shows up again, we crack down hard, and publicly. They should know that they're hurting Microsoft's reputation, not helping it.
Most MS-related comments on HN, though, are by legit users. The generalizations people constantly make about the HN community—in this case how it sees MS—are nearly always bogus, driven by cognitive bias, not data. I see a ton of people complaining that HN has gone to shit because it's full of Microsoft promoters, and an opposing ton of people complaining that it's full of Microsoft haters. The explanation is that the community is simply divided. Both sides can point to lots of comments that seem to support their favored generalization, but going from "I see comments that rub me the wrong way" to "HN is full of shills posting against/for the things I like/dislike" is a non sequitur.
1. An example I responded to at length is https://news.ycombinator.com/item?id=11840127.
Every time I see people complaining that "HN has gone to shit", I check to see how long they've been a registered user. I'm not the most veteran HNer, but I've been around for about 6 years now, and if anything I think the quality of the site has tended to improve ever-so-slightly over the years, though it fluctuates a bit. There was also a noticeable boost when you and your crew took the reins as mods (I don't mean to ass-kiss here, I really feel that way). I sometimes wonder if I missed out on some sort of Golden Age of HN...
Exactly this. What's the deal? They open-source cool things, community benefits, they earn money. So what? It's become a crime to earn money?
Sounds like some zealots should keep that list handy, so they can keep their pristine paws far from anything that Microsoft touches.
Some memory issues with code you might be familiar with... From your own repo.
Not checking return values is not a great idea. Your code blows up phenomenally with certain "width" and "height" values. It does check against negative grid sizes, but that's not enough (https://github.com/AndrewBelt/game_of_life/blob/master/src/g...)!
With suitable width & height it'd also gladly free a "grid" it never allocated. Of course it crashes long before that happens.
Ok, so it was cloned from someone else's repo, and you never got around to cleaning it up.
Ok, still not checking return values (SDL, etc.). Debug-build asserts are better than nothing -- I didn't bother to check how your build is configured.

You also grow a buffer with realloc() and then access it. If realloc() fails there, you'll have a memory leak and, again, a NULL dereference later:
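(A generic sketch of the pattern rather than the repo's actual code, with hypothetical names:)

    #include <stdlib.h>

    /* The pitfall: on failure realloc returns NULL but leaves the old
       block allocated, so assigning straight back both leaks the block
       and sets up a NULL dereference on the next access. */
    int grow_bad(int **buf, size_t n) {
        *buf = realloc(*buf, n * sizeof **buf);
        return *buf != NULL;
    }

    /* The safe version keeps the old pointer until the call succeeds. */
    int grow_good(int **buf, size_t n) {
        int *tmp = realloc(*buf, n * sizeof **buf);
        if (tmp == NULL)
            return 0;   /* *buf is still valid (and must still be freed) */
        *buf = tmp;
        return 1;
    }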
There's also an integer overflow: alloc is an int, so if it grows past 2^31 - 1 it may wrap around (signed integer overflow is undefined behaviour in C), and you'll allocate fewer bytes than needed, leading to a buffer overflow.
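One hedged fix (hypothetical names, not the repo's code): do the growth arithmetic in size_t and refuse to double once it would overflow.

    #include <stdint.h>
    #include <stdlib.h>

    /* Doubling in size_t with an explicit guard: the size can never
       silently wrap to something smaller than what's needed. */
    void *grow_checked(void *buf, size_t *alloc) {
        if (*alloc > SIZE_MAX / 2)
            return NULL;    /* doubling would overflow; let the caller decide */
        *alloc *= 2;
        return realloc(buf, *alloc);
    }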
And another thing, to be honest: very few desktop/mobile programs actually check the return values of malloc/calloc, because the amount of data a program works with is usually much smaller than the amount of RAM available. It's certainly a concern for embedded, but you usually don't use malloc in embedded anyway.
Well, if that is true, you are either very lucky or very brilliant. I have written moderate amounts of C, and I have run into those problems repeatedly.
Maybe that just means I am an idiot, but it appears that smarter people than me make such mistakes as well (although less frequently, I hope). C - at least when compared to most other popular languages - makes it very easy to make such mistakes.
> Why does so much work go into fixing these problems?
Because these problems have cost lots and lots of money thanks to crashing or misbehaving software (including many a security problem). Not to mention the sanity of the programmers who had to debug these...
(Also, I am told that in embedded/realtime software memory management is a lot simpler, as malloc/free are typically not used; you didn't say what kind of C code you have written, but if you are an embedded developer, that would at least partially explain your experience.)