Secure C coding standards by SEI (cert.org)
103 points by tush726 on April 4, 2017 | 17 comments




Miserable fail by CERT: requiring an HTTPS-less registration for a document on security.



Is anyone else finding some of these rules to be bizarrely tone deaf?

Recurring pattern:

"Don't do such and such that is obviously wrong."

Well, no kidding! I would never do such a thing ... on purpose! It's the not-on-purpose occurrences that I need help with.

Without a concrete plan for how to prevent or detect that situation, this advice isn't helpful. I know I shouldn't rely on uninitialized memory, and, believe me, I do not want to. Give me a coding strategy which minimizes uses of uninitialized memory. Recommend a compiler and its particular compiler options, or some lint or other static checking tool, or run-time detection.

"Don't read uninitialized memory" isn't something I can turn into a concrete action to somehow improve software quality.

It's like a driver's manual which says "stay on the road and don't run over people".


Like the ISO C standard, I do not think CERT is meant to be a training document in and of itself, like a book. The user of the standard is going to have to draw on other sources, experience, and technology to put together an implementation strategy.

The purpose of the standard is to allow a piece of software to get CERT certified. You have to actually submit your code to CERT; they analyze it, give you a certificate, and add the software to the list of conforming systems.

This can in turn help to gain other certifications. For instance, if you are working on software for the Department of Defense, they have their own sets of certification standards, some of which may be satisfied by conformance to CERT.

I would start by investigating static analyzers and compilers and see what you can find. Specifically, look for claims that they diagnose standard violations, and which ones.

Also take a look at section 1.7. There is a difference between the 99 coding 'rules' and the 185 'recommendations'. Recommendations provide guidance but do not necessarily indicate a defect, so a recommendation is not necessarily exact.


In that case, why is it repeating material out of ISO C instead of just referring to ISO C as a base document for all those points?

You can summarize it all in one item:

"A program fails to conform to the CERT standard for secure coding if it violates any of the following 'undefined behavior' situations explicitly described in ISO C, or implicitly by omission: [give a summary list with brief descriptions and section references]"


I can't answer for the people who wrote it. What you could do is put together a few questions and send them an email. Or look around the wiki.

Just for future reference: when you quote, include the page number or section so people can look it up and see the context.

From my experience expert arguments about C end with a reference to the C standard. The C standard is authoritative. If you can quote the standard to back up your argument you win. Arguing C/C++ is different from most other languages in that respect because we have an official ISO standard and expert C/C++ programmers take the standard very seriously.

The language of the C standard is very terse and hard to read for most people (myself included), and a lot of the issues are very subtle. As a result, anyone quoting the C standard starts to adopt the C standard's style of language and idiom. So you get some arguments and documents, like CERT, that end up being kind of terse and hard to read as well. I have read some arguments where I had to open the standard to follow the argument, because it was essentially two people quoting the standard, arguing about the correct meaning of a particular passage.

If they give a section reference to the C standard, the purpose is to be authoritative, so that people reading it can look those up in the C standard and know that they are official undefined behaviors. There are lots of arguments about specific undefined behaviors, and the only way to settle such a dispute is to refer to the standard to see whether the behavior is actually an official undefined behavior, or someone's own judgment, which is usually wrong if it's not backed up by the standard.


I don't mean that they are repeating material verbatim out of ISO C as a quote; they are wastefully reinventing it in their own words.

A secure coding standard from CERT should focus entirely on describing conventions and program properties that do not already follow from the standard as a matter of correctness.

For instance, it's a security problem if, say, we manipulate sensitive data and then don't wipe the memory. That does not violate ISO C in any way.

A perfectly ISO C and POSIX conforming application could have a race condition with regard to a symbolic link, or some time-of-check to time-of-use (TOCTOU) race.

A perfectly conforming ISO C and POSIX application could do something stupid with permissions.

Creating a listening socket for local use and not restricting it to the loopback address (like 127.0.0.1 on IPv4) should fail a security review.

There are all kinds of things that either follow from the language and API documents in very non-obvious ways, or not at all.

Not initializing an object and then using it falls into a catch-all bucket of "language violation" that can be covered in a paragraph or two.


It's a bit tricky, I think.

> A secure coding standard from CERT should focus entirely on describing conventions and program properties that do not already follow from the standard as a matter of correctness.

from CERT 1.7 "The wiki also contains two platform-specific annexes at the time of this writing; one annex for POSIX and one for Windows. These annexes have been omitted from this standard because they are not part of the core standard."

So while CERT does use some examples from system interfaces, it's not a standard for programming the system interfaces of POSIX or Windows. It looks like they're trying to limit the standard to ISO C. The examples you gave fall into the system interface category. POSIX is huge, and the same goes for Windows; both are much bigger than ISO C.

I think in order to explain conventions for a system interface you really need a longer form publication like a book. So you can take 50 pages to describe an interface and how to use it and show examples etc.

The best way that I have found to figure this stuff out is the standard way. You get a copy of all the relevant standards as a foundation: ISO, POSIX, Windows, and stuff like CERT. Then you get some of the system programming books (listed below). Then you get some good reference code that shows best practices, usually code from the operating system or utilities. Lastly, read all the compiler docs and tool docs to set up the best code-analysis framework you can.

These are a few system programming books that I use.

(best intro book) GNU/Linux Application Programming https://www.amazon.com/GNU-Linux-Application-Programming/dp/...

UNIX Systems Programming https://www.amazon.com/UNIX-Systems-Programming-Communicatio...

Advanced Programming in the UNIX Environment https://www.amazon.com/Advanced-Programming-UNIX-Environment...

Windows System Programming https://www.amazon.com/Programming-Paperback-Addison-Wesley-...

The Linux Programming Interface http://www.man7.org/tlpi/

edit: I'm not sure of your skill level; you may have seen all of those, but I posted them regardless. There is a lot of security and convention in those books.


There are some dialects of C which provide additional guarantees through verification, but I don't think there is a coding strategy for standard C which minimizes uses of uninitialized memory. One can and should use static and dynamic program analysis, but that's not a coding strategy.

This is why many people say that standard C is unsafe and using it for writing robust software is a losing battle.


The coding strategy for not using uninitialized memory in C is to initialize it before you use it ;)

And I'm fairly certain that all the major compilers have checks and warnings for it in various forms.

If you make a global of a simple type, it is initialized to zero.

If you use calloc, the memory is initialized to zero.

If you use malloc etc., then you initialize it yourself with memset.

If you use automatic variables, you set them on creation: int i = 0;

If you make an automatic aggregate type, you set it like so: struct bob b = {0};

and so on.

If you want to make sure your pointers are not uninitialized, you set them to NULL on creation, set them to NULL on un-assignment or free, and then always test them for NULL.

All of that is very beginner to intermediate level knowledge and anyone who is going to use something like CERT is expected to know all the basics.

There are valid reasons why you can claim that C memory management is unsafe, or not as safe as in other languages, but it's not related to initialization.


Is there a code analysis tool that will flag code that violates these rules?


Elsewhere on the SEI site there are some checkers [1] [2]. They also advocate using clang's "-Wthread-safety" among others. It doesn't sound to me like there exists a tool that will measure this coding standard as conformance criteria.

[1] https://github.com/SEI-CERT/scvs

[2] https://sourceforge.net/projects/rosecheckers/


Clang tidy supports some of the rules http://clang.llvm.org/extra/clang-tidy/


If you're on someone else's budget, there are some commercial tools that flag against various coding standards.

A quick check on something I half remembered shows these people say they cover CERT: http://www.programmingresearch.com/coding-standards/complian...

I know there are more.


The “compliant solution” example for setlocale() is thread-unsafe, AFAICT.

(I’m not aware of a safe solution other than “don’t use the text processing part of the C standard library, because its design is too wrong”.)


There is a thread-specific uselocale() in POSIX.1-2008.




