Almost every "rule" SEI CERT has for Java either applies directly to C, or is mooted by C's fundamental insecurity.
For an instance of the former case, take SQL Injection: SEI CERT dings Java because it provides libraries that enable SQL injection. C doesn't provide any such libraries --- because it doesn't provide SQL libraries at all! But rest assured that if you bring SQL into your C application through, for instance, the SQLite library, you're inheriting the same set of concerns.
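To make the carried-over concern concrete, here is a minimal sketch of why string-built SQL is injectable in any language that talks to SQL. The table and column names are invented for illustration; the bind-parameter fix noted in the comments is the standard remedy in both JDBC and SQLite's C API.

```java
// Illustration: splicing untrusted input into query text is the injection bug,
// independent of whether the host language is Java or C.
public class InjectionDemo {
    // Naive construction: attacker-controlled input becomes part of the SQL text.
    public static String naiveQuery(String user) {
        return "SELECT * FROM accounts WHERE name = '" + user + "'";
    }

    public static void main(String[] args) {
        // A classic payload turns the WHERE clause into a tautology.
        System.out.println(naiveQuery("x' OR '1'='1"));
        // prints: SELECT * FROM accounts WHERE name = 'x' OR '1'='1'
        //
        // The fix is the same idea in both languages: bind parameters.
        //   Java/JDBC: conn.prepareStatement("... WHERE name = ?").setString(1, user)
        //   C/SQLite:  sqlite3_prepare_v2(...) + sqlite3_bind_text(...)
    }
}
```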
For an example of the latter case, take the Java "privilege" system. SEI CERT dings Java because operating the Java privilege system is fiddly, and if you do it wrong you're exposed to privilege escalation. You're not exposed to privilege escalation in C --- because all C code is privileged!
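For readers unfamiliar with the "fiddly" part: the classic model uses `AccessController.doPrivileged` to let trusted code exercise its own permissions on behalf of less-trusted callers, and wrapping the wrong thing is exactly the escalation hazard the rules target. A minimal sketch (the API is deprecated in modern JDKs, and the method name `readProp` is invented):

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

public class PrivilegeDemo {
    // A privileged block asserts THIS code's permissions, cutting less-trusted
    // callers on the stack out of the access check. If untrusted input can
    // choose `name`, callers have just been handed a read of any system property.
    public static String readProp(final String name) {
        return AccessController.doPrivileged(
                (PrivilegedAction<String>) () -> System.getProperty(name));
    }

    public static void main(String[] args) {
        System.out.println(readProp("java.version"));
    }
}
```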
This is about the level of pointy-haired boss analysis that I've come to expect from CERT.
> Almost every "rule" SEI CERT has for Java either applies directly to C, or is mooted by C's fundamental insecurity.
Did you read the article? That is precisely what it says:
> The preceding analysis demonstrates that all of the high-severity Java rules also apply to C code, except for those in Java's biggest category, which is internal privilege escalation (IPE). C has no possibility of IPE because C lacks an internal privilege model. We also note that C's biggest security category is memory corruption, which does not affect Java code.
The article ends with:
> If you are writing unprivileged code, therefore, you have many fewer rules to worry about in Java than you do in C. Consequently, this table strongly hints that Java is more secure than C. In fact, as we showed earlier, all nine of the remaining high-severity Java rules also apply to C, which provides more rigorous support for our hypothesis.
I did read the article, carefully. Notice the "if you are writing unprivileged code" predicate on that paragraph.
First, what do they mean, "if"? It's not 1998 anymore, and nobody is writing Java applets.
But more importantly, the question doesn't even make sense in context, because you can't ask it if you're comparing C and Java. IF you are considering C for your project, THEN you are writing unprivileged code one way or the other.
Java is categorically more secure than C. Notice how the article doesn't open with that statement? I did too, and so now I'm going to rant about how bad the article is.
The article acknowledges that the so-called "IPE rules" only apply to a small subset of libraries. (They don't even apply to applets/servlets, because the libraries that run these things are supposed to handle the isolation.)
IMO the gist of the article is this:
1. We came in believing that Java is more secure than C.
2. So we thought that Java would require fewer secure coding rules.
3. But it turns out Java has more! What does this mean? Was assumption (1) wrong or is there something else going on?
4. Analysis follows.
5. Our assumption was right, it's just that Java has more stuff than C.
IOW, the point of the article was to refute the strawman, "Java has more secure coding rules than C, hence it's harder to write secure Java code than secure C code."
I'm not sure what you expected the article to say but just asserting "Java is more secure than C" is useless because you just end up preaching to the choir.
That's a pretty good summary. It can be pretty tough to analyse permissions across a big Java application though, which is harder than analysing a small C programme that does some kind of math simulation or something. So I can see the real world application of this paradigm.
I believe this means running in a sandboxed environment where you are not only not root but also not able to access the file system, spawn processes, etc.
> Notice the "if you are writing unprivileged code" predicate on that paragraph.
> First, what do they mean, "if"? It's not 1998 anymore, and nobody is writing Java applets.
I think the intent is clarified by an earlier paragraph:
> The IPE rules are designed for code to handle untrusted code, including in applet containers and servlet containers, such as JBoss and Tomcat, and any libraries these containers may depend on, such as the Java core libraries. If you are writing Java core library code, or code that is used in a servlet framework, the IPE rules apply to you. If you are writing only desktop applications, applets, or servlets themselves, however, you can ignore the IPE rules.
I think the author, trying to be fair and balanced, is conceding that there is a small amount of code, written by a few people, that is subject to the IPE rules, which complicate the writing of secure code. Does that mean that for the purpose of writing such code, Java is "as insecure" as C? He doesn't say that, or even suggest it; he's merely pointing out that it mostly doesn't matter, since very few of us write such code.
> Java is categorically more secure than C. Notice how the article doesn't open with that statement? I did too
The article is written very coyly, and the author doesn't show his hand until the end. I started out tempted to have the same reaction you did, but then I got to this sentence in the third paragraph: "We acknowledge that the number of rules for any domain is an interesting but not persuasive metric regarding the domain's security." Then I thought, okay, maybe this isn't as stupid as it looks, let's see where it's going.
Exactly. Especially the method. I don't know if I should be annoyed people are paid to write stuff like this or be looking to get a job there for the easy money.
You must not have read to the end. You're in violent agreement with the author. The points about SQL injection and the Java privilege system are raised, not because the author considers them valid, but because they have rules about both of them, and someone who is just counting rules -- an obviously superficial metric that the author ultimately rejects -- might be influenced by their presence.
Yes, the lede is thoroughly buried. Fault the author for that if you like; but you don't actually disagree with the conclusion.
EDITED to add: I'm going to put this more strongly. You should retract your comment. The ad hominem at the end is not only uncalled for but extremely ironic given that you didn't (your assertion to the contrary notwithstanding) actually read the entire article carefully. Your comment is not up to the standards of discourse that I know you strive to maintain.
I understand and appreciate what you're saying. I'm also happy to retract things when I'm wrong, which happens pretty regularly. But you're incorrect about how I read the article (also: "how carefully other people read an article" is not a valid topic of debate on HN). I stand by what I wrote.
You are welcome to disagree with my assessment, strenuously. If you'd like to do that, please write arguments that are about what I wrote, not about what you think I personally did or didn't do.
(PS: that's not an actual ad-hominem argument. "Ad-hominem" doesn't mean "anything negative you say about a particular person or entity".)
> "how carefully other people read an article" is not a valid topic of debate on HN
Debate, perhaps not. As feedback, it's absolutely valid. Here you have two people whose first reaction is that you seem not to have read the article closely or completely. Will you really not consider the possibility that you didn't?
Anyway the operative point didn't occur to me until this morning. The author does indeed start out presenting an argument that could be taken to support the conclusion that Java is no more secure than C. But he never actually draws that conclusion.
It does take a closer reading to notice the absence of something than to notice its presence.
Your postscript is valid as far as it goes, though the sneering tone of your dismissal ("pointy-haired boss analysis") was unmistakable. I think it went beyond measured, defensible criticism.
For the record: I have no connection to SEI CERT or David Svoboda (the author), nor any pre-existing opinion about them.
> Java has more injection rules than C simply because it comes with more subsystems than standard C. For example, SQL injection is possible in both C and Java, but only Java provides a standard library for connecting to SQL databases (the JDBC); hence, only Java has a rule about SQL injection.
-- Acknowledging one of the points you made in your root post. I addressed the other one in a separate reply.
> For an example of the latter case, take the Java "privilege" system. SEI CERT dings Java because operating the Java privilege system is fiddly, and if you do it wrong you're exposed to privilege escalation. You're not exposed to privilege escalation in C --- because all C code is privileged!
On the other hand, I've heard a nonzero number of people say it's cool to load untrusted code into your JVM or CLR because high-level languages are magic pixie dust. So maybe it does need saying for somebody.
Only beaten by microprogramming. Only beaten by HDL. Only beaten by RTL. Only beaten by gates. Only beaten by transistor layouts. Only beaten by a mathematical description of the behavior of the particles. Only beaten by emailing an English description of that to a fabless operator in China that nobody's heard of and expecting a working chip.
Java, just the language, is absolutely more secure.
Just be careful what you're using in the JRE, or you might negate that gained security[1]. Of course many C libraries have vulns too, just not included in one commonly used package that runs on every operating system it supports.
"C just adds memory corruption to the set of issues one has to worry about."
That's a very casual way to say that C adds an entire class of errors that lets attackers own your machine with the slightest mistake in the most common coding situations. Memory safety is such a huge increase in security over C that it isn't even funny that people use C where not necessary. Type-safety plus good type system gets a start on knocking out data and interface errors. Type systems with context, or just Design-by-Contract, can knock out more interface errors.
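To make the contrast concrete: the same off-by-one that silently corrupts adjacent memory in C is a checked, catchable error in Java. A minimal sketch:

```java
public class BoundsDemo {
    public static void main(String[] args) {
        int[] buf = new int[4];
        try {
            buf[4] = 42; // one element past the end
        } catch (ArrayIndexOutOfBoundsException e) {
            // In C the equivalent write is undefined behavior that may scribble
            // over adjacent data or a return address; here the runtime rejects
            // it deterministically before any memory is touched.
            System.out.println("rejected out-of-bounds write");
        }
    }
}
```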
So, a good type system, type safety, and memory safety should be the baseline for reliable or secure systems. C tosses out the whole baseline in exchange for... running a bit faster or lazy programmers?
Note: I'm talking in general rather than building on existing C projects, where using C makes some sense.
Thanks for the link. It's all good so long as you keep countering the myth of C having good design, like you did here [1] and I backed you up on here [2]. Their house of cards (i.e. ideology) will come down easier as we continue to kick its foundation out from underneath. ;)
As far as resource constraints go, there's been interesting work along those lines, too. I mean, we have the obvious Modula-2/Lilith project. However, it was cool that a recent project used the ATS language to program an 8-bit microcontroller. That's on top of safe ASM, embedded DSLs like the Ivory language, and stuff like Cyclone. C proponents' excuses for its design advantages are getting thin, thin, thin...
Finished. Was a pretty good video. I did a write-up of it below on Schneier's blog to justify eliminating C, because it's provably bad by design. All due to limitations of the PDPs and, new to me, the EDSAC. Also more of a rip-off of BCPL than I thought before.
> If you are writing Java code to manage unprivileged Java code (such as an applet container), you are subject to about as many severe rules as if you are writing C code. If your Java code is itself unprivileged, however, or if you are ignoring Java's privilege model, you are subject to far fewer high-severity rules than if you program in C.
Accessing resources on a system requires privileges.
So Java is inherently more insecure than C, because Java developers are not aware that every time they access a socket, authentication, a file, or RAM, they need privileges. And they tweak whatever policies and rules of the OS to do so. Yes: the problem is not the Java dev but the sysadmins... Sysadmins say no, it is insecure, so companies say let's hire clueless devs, call them devops, and handle the delicate problem of secured access to resources by telling them: make it possible.
Hiding dust under the carpet, moving the problem to another place (from code to containers), and creating new "paradigms" to hide the inherent complexity of coding will not improve the situation.
In 2015 we are still lacking able developers, and every technique to work around the scarcity of really able developers is a failure that adds noise.
Software is only as good as the lowest "software" IQ among the crowd building it (managers included). Our managers are clueless; most of our devs/devops/sysadmins are frauds.
Though I understand your reasoning for calling devops etc. frauds, these developers are under a huge amount of pressure from the business side of things to "just make it work(TM)".
But you are conceding the point, aren't you? It does not matter whether you do the wrong thing because you are an incompetent hack or because you are a low-status pawn who got cornered by management into choosing the least bad of a bunch of bad options. The fact remains that a weak solution was implemented, by you, and that it is a matter of time before your employer and their customers suffer the consequences.
I'd like to believe that if we were like doctors and our managers came up with some incredibly stupid idea like "let's save cost by reusing hypodermic needles", we could simply refuse to work and force the penny pinchers to back off. But the cynic in me tells me that doctors have to fight every single day to fend off a parade of equally misguided (though not recognizable as such by the public) ideas, and that more often than not, they end up choosing to keep their own families fed and clothed over the theoretical Hippocratic Oath.
A bit OT, and perhaps I've been lulled into complacency, but beyond basic bounds-checking I don't think an ideal VM should do much security management at all: I would do away with the Java security manager and push all higher-level security considerations to the compiler.
Sometimes when I'm larping with the Gosu team we talk about an ideal VM with:
- A minimum of primitive data types (int64, fp64, etc.)
- A minimum of security management (bounds checks, etc.)
- No method overloading (making bytecode much simpler: no bridge methods, etc.)
- Stack-based, like the JVM
This would be a dream to target as a language developer.
I don't know if that's something I would handle at the VM level, probably rather an API on top of a native call mechanism in the core library.
I suppose if there is enough similarity across I/O then perhaps something could be added. I'm not familiar enough with the state of low level programming to know, and I'd lean towards minimalism unless there was a compelling reason not to.
One option would be to provide all IO operations as high-level plugins or 'virtual drivers' provided by the platform runtime, similar to how one might install and configure input and output plugins for video game system emulators.
What the VM would provide in a centralized manner would be a repository mapping API names to public header files.
This way, if a program distributed over the internet imported a reference to a 'simple-video.01' API, the end user and VM could look up the name and pull a copy of the header defining the calling convention directly from the VM website.
In order to run the program, they could then either find or write a 'native plugin' for their platform which implemented the 'simple-video.01' API using machine code, similar to a C shared object. Or they could find or write a 'soft plugin' which translates the calls in the 'simple-video.01' API into calls to another API which there was machine code for (say, 'direct-video.51'), using VM bytecode. This way the VM could automatically infer the native code necessary to generate effects, even if it was less efficient or qualitatively different depending on the needs of the end user.
For output, the headers would basically need to specify a list of tuples of primitive data types for effects. For input, the headers could perhaps specify a set of named primitive tuples, arrays, and tables which vary over time and can be directly accessed, similar to how one might access pipelined input variables in an OpenGL or Direct3D GPU shader.
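The registry idea above can be roughed out as follows. This is a hypothetical sketch; every name in it (`VideoApi`, `simple-video.01`, `DriverRegistry`) is invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical 'virtual driver' interface the VM would publish a header for.
interface VideoApi {
    void drawPixel(int x, int y, int rgba);
}

// The VM-side repository mapping API names to installed plugins.
public class DriverRegistry {
    private final Map<String, VideoApi> drivers = new HashMap<>();

    // A platform runtime installs a native or 'soft' plugin under the API name.
    public void install(String apiName, VideoApi impl) {
        drivers.put(apiName, impl);
    }

    // A program asks for the API it imported; the VM resolves it at load time.
    public VideoApi lookup(String apiName) {
        VideoApi impl = drivers.get(apiName);
        if (impl == null) {
            throw new IllegalStateException("no plugin installed for " + apiName);
        }
        return impl;
    }
}
```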
This article is coming at it from entirely the wrong way. Counting rules is not how you assess language security. We already had language security assessments back in the '80s and '90s with proper methodology that taught us more useful stuff. They had basically two approaches: systematic analysis of language attributes on specific types of defects or vulnerabilities; and empirical analysis of defect or vulnerability rates on real-world projects, with comparisons across languages. The author or another party should apply one or both of those to modern versions of the languages to assess real risk. Then, apply analysis of any coding guidelines to see what they counter and what effort is required, since productivity is an important criterion.
On the other hand, this analysis tells us nothing useful except that there are rules, the rules have claimed benefits, and there are rules here vs. there, with a recommendation based on that. It ignores too many real-world concerns. Waste of time.
Leaving the horrible security of the Java Runtime aside, I highly doubt that there is such a thing as "insecure programming languages". There is "insecure programming", though.
The question seems a bit trivial to me. With Java, you can create a binary file with any content and then execute it. So, in theory, you could replicate the executable code of any C program.
A system's security comes from the security of its components and their interactions. The functionality of each will be expressed in a language. That language will have features that affect the security of that component. Therefore, security is a component of languages and systems.
I'll go further and say language security is a subset or component of securing a system wherever it can be applied. This has been known since at least Multics, where the choice of PL/I dodged some security vulnerabilities thanks to the language's attributes. It's been known for reliability at least since Burroughs used an ALGOL variant for their OS.
Well, not exactly. See the Modula-3-based research operating systems that address some of the modularity/performance/security trade-off triangle of a microkernel.
C is categorically less secure than Java.