
Part of how C++ is successful is that it retains C compatibility, and C accurately models how the computer actually works (at least computers of the 70s and 80s, which is all we know how to program well). Lovely languages like lisp, python, haskell may be nicer to work with, but they do not model the underlying machine properly, and for so many problem domains that is just not acceptable. It's not just a performance question.

And then C++ implements several programming paradigms (object oriented, generic, etc) without compromising that machine model.

Plus Bjarne was truly correct when he said: "there are languages people complain about, and there are languages nobody uses."



> Bjarne was truly correct when he said: "there are languages people complain about, and there are languages nobody uses."

I think this is very true. What it says, literally, is that popularity and attracting ire are correlated.

However, my interpretation is that the reason why this is true is that (1) all languages involve design trade-offs, (2) every trade-off pisses someone off for at least a moment, and (3) popularity attracts eyeballs, therefore more popular languages have more people pissed off.

I am not suggesting you say this, but I have heard the (strawman?) argument that it is possible to design a nice, pure language that is above reproach but that it can't be popular, and thus his quote expresses the thought that there is an inverse relationship between elegance and popularity. I don't think this is the case.


My first impression of the quote is that any language that gains enough users will find people who complain about it.

However, I agree that his main point is more that you will never be able to design a language that is useful to a large number of people without making compromises, and with any compromise there will be at least one person for which it is not the optimal solution.

I don't want to say the relationship between elegance and popularity is strict though. There is simply a good correlation.


I think C models how the computer works a lot more closely than C++ (and no one would argue that C isn't carrying water out in the engineering world even today). IMO C++'s issue is precisely that it layers all of these leaky abstractions on top of the strict procedural model of C.

For my money, developers are better off knowing two tools (C + some very high level language) rather than the spork which is C++.


I remember 'cfront' when it first came out, and to this day I haven't really changed my mind about how I felt about it: it's a much too bloated language compared to the elegance of C.

If 'C' would have had a decent native string type I think C++ might not have happened ;)


Are you suggesting that std::string, or as my not-so-friendly c++ compiler likes to call it

    std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*,
    const _Alloc&) [with _CharT = char, _Traits = std::char_traits<char>,
    _Alloc = std::allocator<char>]
is an improvement? :-)


No, not at all. I was thinking of strings the way Dennis Ritchie would have done it.

I can see why they chose to omit it, but in retrospect I think it was a mistake. The problem they were faced with was that, the way they wrote it, the language didn't include any 'runtime' at all; a string package would have made it a must to have some runtime.

Everything that is 'runtime' in C is in libraries, and everything that is 'core' is in the compiler.


That's not what your compiler calls the string class. That's what it calls one particular constructor of the string class.

This constructor lets you customize the way the string class allocates memory. Now I would like to see the equivalent constructor in your favorite language :)


c + lisp, in my case. nice way to write DSLs.


The sad part is that there are a few good ideas in C++, so it is hard to throw out the baby with the bathwater. I use C++ like C with destructors, operator overloading, typed constants, and not much else.


Thus proving the author's point.


But C with vector, string, hash_map, const&, destructors and scoped_ptr is a pretty good language! It's certainly better than plain C. It's just a shame, aesthetically, that it sits inside the larger monstrosity that is C++.


Thus proving the author's point even further.

It is funny how everybody really seems to have their own favorite subset of C++.


Just as Lisp programmers add new syntax and vocabulary to build custom languages, C++ programmers do the same by taking them away.


Does anyone really use all of Common Lisp?

I'd say the justification for all of the large languages (like C++ and Common Lisp) is the same. You may not need all of it, but the features you do need will at least be standardized, compared to using a small language with ad-hoc extensions to achieve the same.

[ I here include the standard library as part of the language, for languages with extensive meta-programming support like C++ and Common Lisp, it makes little sense to distinguish. ]


Most users also probably have their own favorite subset of Word.


You've hit the nail right on the head there, though: when offered a bloated solution, people will 'subset'; when offered something small and powerful, people will customize.


You missed my point. Most users each probably have a different subset of Word.


No, I got it all right; that's what I meant. Each user subsets Word for themselves into what they need/can handle.


No, that's still not my point. Each user has a different subset of Word. Hence, the only set that contains all of the features that all users use is Word itself. If everyone uses a different subset, it's not accurate to call it bloated, too large, or complain it has unnecessary features. All features are necessary for someone.


A feature can be simultaneously "bloat" and "used by someone".


assembly more closely models how the computer works than anything...


The chief problem with the idea that C "accurately models how the computer actually works", as you allude to but don't go into detail on, is that modern CPUs work nothing like the CPUs that C is accustomed to. Between out-of-order execution, hyperthreading, caches, prefetching, pretending-to-be-uniform-memory-access-but-not-actually-being-it, and concurrency, the degree to which C's computer model is relevant is rapidly shrinking. We don't really have a good model for this, but that's not really a good excuse to keep writing new languages based on C.


"Part of how C++ is successful is that it retains C compatibility"

I think Objective C is a better object oriented C than C++. It's simpler, makes a syntactic distinction between message passing and "traditional" C, and has a more dynamic run time. I'm surprised no one else has brought up Objective C as a better solution to the "make C object oriented" problem.


Recently, I went from Ruby on Rails web programming to C++ w/Qt GUI programming. Using the C++ features that I choose, C++ is actually pretty fast to develop on... (I did MFC years ago and that was nasty but Rails internal code is also awful, with fifty-million separate classes just for exceptions).

Aside from C compatibility, C++ also allows one to create simple structures and procedures when such simple beasts are appropriate.

I currently suspect one should have either "full" object orientation plus weak typing OR weak object orientation plus strong typing. Strong typing plus full OO = Bondage and Discipline language, where the size of the code itself starts to really drag your development and debugging time down.

While Ruby or Python allows compact but slow code, Java and C#, among other excesses, force one to create a full class for every two-field data structure. Thus C++ is more compact than C# and Java and faster than Python and Ruby, winning the race for a desktop application language (oddly enough, desktop apps need a fast language because they must compensate for desktop windowing systems growing ever more bloated, while users expect more from their machines every year).

I'm sure Haskell or Lisp are excellent for some things but I don't think their paradigm is very compatible with GUI programming - I'd be interested if someone has a counter-example.


Haskell is pretty theoretical, but on the Lisp front I think AutoCAD and Emacs more than qualify.


AutoCAD is not written in Lisp. It has been written in several languages, including C.

AutoLisp is just a thin scripting layer.

See the Autodesk file: http://www.fourmilab.ch/autofile/www/autofile.html


That 'thin' scripting layer has been used to program some pretty impressive stuff in the engineering and architectural world.

Anyway, some more examples then:

http://www.cliki.net/Application


> That 'thin' scripting layer has been used to program some pretty impressive stuff in the engineering and architectural world.

Yes, I've seen people work wonders with AutoLisp. There was - probably still is - an application used for designing clothes. The designer creates, I think, a "size 8" and a lisp script generates the other sizes.

AutoLisp being lisp-like is really a happy accident. The Autodeskers chose Xlisp IIRC because it was free and embeddable. If they could have embedded a C-like language interpreter, they would have.

In fact later, AutoDesk introduced other languages and tried to push users onto them. But AutoLisp already had huge momentum.


At least C# doesn't require you to allocate a file for your class, and a simple two-field class should consume no more than 5 lines of code. And if you're willing to use a generic tuple type, you don't need to declare a class at all; just use Tuple<Foo,Bar> as needed.


Or just use an anonymous type:

  var simple = new { Field1 = "foo", Field2 = "bar" };


> Strong typing plus full OO = Bondage and Discipline language, where the size of the code itself starts to really drag your development and debugging time down

You're missing a crucial part of the equation: type inference. OCaml does what you want :-)


Sure, I just meant in the languages I've been using recently.

I hear someone is coming up with a type-inferring variant of Ruby called Juby, which could do the trick once it works.

Now, has anyone coded a GUI with OCaml?


Not personally (I still use Tcl/Tk), but: http://coherentpdf.com/blog/?p=38


I just have to nitpick here: Strong typing is always a good feature, and weak typing is always a bad one.

Now, whether the types should be dynamic or static, is entirely up to the situation and what kind of system you are building.


Objective C is cleaner and much more fun to write in, but it's less accommodating than C++:

* You have to actually run the dynamic runtime, which even OS X's XNU doesn't do.

* The dynamic runtime also obscures what's going on with the hardware, which can be a problem in embedded environments.

I like Objective C but it's always felt like voodoo to me. I'm always more comfortable in Ruby+C than I am in ObjC.


It looks like the only platform that had a production-level ObjC library was NeXT/OS X.


Computers actually have a von Neumann architecture; that is, code and data live in the same address space.

To the degree that C represents this reality, it is through exploits: e.g. buffer overflows, overwriting the stack and return address, and executing arbitrary code supplied by attackers. C itself does not represent the reality very well, since it does not lend itself to writing code that writes code at runtime.


> Computers actually have a von Neumann architecture; that is, code and data live in the same address space.

The ones that do not are actually pretty rare.

Most of those use the Harvard architecture and are DSP-style machines.

And then there are vector processors and SIMDs, but even those can be seen as many von Neumann machines running in lock-step.


The (vast) majority of programmers are not programming DSP-style machines, so like I said, and I still stand by it, C doesn't map well to the full functionality of the von Neumann architecture.


> C accurately models how the computer actually works

i don't even know what kind of hardware my java or python programs run on. neither google (appengine) nor our IT department tells me.

so for most app developers "a computer" is not really something they work with. of course someone must write those abstractions (python, etc), and they do it in C/C++ :)


But I can safely say that it is a machine that the C language models fairly precisely and that python, lisp etc will use in ways that will make it harder to predict how their constructs will interact with the machine.

That's exactly the point: if your machine is anywhere near 'standard' (as in, not a SIMD machine or something exotic), then C is as close as you can get to it without going to assembler.


> But I can safely say that it is a machine that the C language models fairly precisely

Not any longer. I don't know how C models the kind of massive parallelism that is becoming mainstream right now.


This is only true to a degree.

The only reason you know what the C you write will end up doing on the machine is because you are generally not doing much when you're working in C.

If you want to get productive work done, you'll be leveraging lots of libraries written by third parties, and you have no more insight into how they are working when you're invoking them with C than Ruby, Python, Java, C#, or any other language.


I think you're missing the point here.

A C program that accesses memory will do so in an extremely predictable way: if I lay out a memory structure, a piece of code, and an access pattern, then I can be very sure about how that will interact with things like the caches.

In other languages where there is more 'under the hood' that is a lot harder.

Libraries don't enter into the problem; that's another level altogether.

While it's true that library code written by third parties might be less (or more) efficient than the code you wrote, even those libraries carry the baggage of the underlying implementation of the language.

In C there is no implementation underneath; it is a 1:1 correspondence with machine instructions.

That is the reason that kernel code tends to be written in C.


I think my point is that you can lose track of the woods for all the trees; moreover, C is not actually a good map of the underlying hardware.

In C, you deal with so many trees that it sure feels like you're right there in the woods. But actually, if you're dealing with things on a tree-by-tree basis, you're not getting much done. And you can pretty easily lose your way by focusing on navigating the territory efficiently at that level while losing track of the overall goal.

As to whether kernel code should be written in C, that is a different matter. C is not a good map to von Neumann architectures because it doesn't model shared code and data in memory - C doesn't have a good representation for writing code at runtime. Most code written in C will not be the most efficient possible code for this reason.


Don't you just love drive-by readers who downvote all your comments in a thread, and never leave a reply indicating why they disagree with you? Because I sure do...


I characterize C as a portable assembly language.


Sure, portable assembly language. With, you know, named variables, type abstraction, recursive function decomposition, infix expression grammar, hierarchical flow control constructs... just a bunch of silly tricks. :)

Modern developers have gotten so used to the fact that C is "low level" that they tend to forget how low really low level coding is. There's a ton of "high level" language features in every 3GL, and that includes C. The abstractive distance from machine code to a language like C or Pascal is far, far higher than it is between C and Python.


One of the odd things about C, historically speaking, is actually how impoverished it was compared to macro assemblers written at around the same time. Those didn't have much of a type system either, but you could get all kinds of power with them. No modern assembler is really comparable, much to my dismay.


The comment was probably referring to things like "copying stuff is expensive" or "function calls are expensive", not more specific details of the machine you run on. In C++ you use pointers/references to solve the first problem, and define header-only classes for the latter. Even if you weren't aware, basic use of the standard library will clue you in.


I totally agree. C++'s main strength lies in its "portable assembler" nature. However, C++ is also [quite a behemoth][1], while Lisp, Python and Haskell aren't.

[1]: http://yosefk.com/c++fqa/

The answer then is obvious: we should change the hardware so it is not C/C++ optimized but Lisp/Python/Haskell optimized. Then these languages become easier and more practical.


Lisp, Python and Haskell should never ever be separated by a mere slash; they're totally different beasts.


Sorry, of course they are. I mis-phrased my sentence. I should have said "Lisp optimized, or Python optimized, or Haskell optimized", or even "lovely language(s) optimized".

But I didn't start this: "Lovely languages like lisp, python, haskell" (sic).


"we should change the hardware, so it is not C/C++ optimized, but Lisp/Python/Haskell optimized."

http://en.wikipedia.org/wiki/Lisp_machine


In general, hardware optimized for high-level languages has been a failure; usually the extra baggage of handling things like a machine-level interpreter slows things down nearly as much as a software interpreter would. Nothing has been a real commercial success, and it isn't for want of trying. Off the top of my head I can think of the Burroughs B5500, which had hardware support for typed data, the Lisp Machine, and the Intel 432. Sun had a Java machine specification, but I don't think anyone ever bit.


There's Jazelle for ARM; it's essentially an instruction overlay that makes the CPU execute all the simple and common Java bytecodes directly, and call a kernel function for all the hard ones.

However, even that is dying -- it is being replaced by ThumbEE, a more generic instruction set designed for efficient code generation from intermediate dynamic code.


There is Azul Systems, that produces massive servers for Java applications.

http://en.wikipedia.org/wiki/Azul_Systems

I don't know how successful they are economically though.


There is Forth on hardware, with a long tradition behind it. It's still alive and kicking.


IBM has plug-in hardware JVMs for its mainframe big iron. They are ungodly expensive.


According to Wikipedia, "zAAPs are not specifically optimized to run Java faster or better". Apparently they're just normal IBM mainframe processors crippled to only run Java and XML.


I can't find anything about that; it's quite interesting. Do you have a link?

edit: found it:

http://news.zdnet.com/2100-9584_22-135319.html


No, C is a portable assembler. C++ was created to add "high level" OO features to C. But the success of the effort is highly controversial, because there are differences between C and the OO model that are too difficult to overcome.

I think the strategy of Objective-C is much cleaner, since it better separates the concerns of "writing fast code" and "writing high-level OO abstractions".


How, on a site like HN that champions pragmatic entrepreneurialism, can you possibly claim that the success of C++, a language that has been deployed probably more than any other except C, is controversial? It's a matter of fact. C++ won the early OO race. Maybe Java and C# and whatever have since eclipsed it, but it was still a smashing success by all objective measures, and still is by some of those measures. Yeah it's ugly, but end users don't care.


He wasn't talking about the success (popularity) of C++ at all. He was talking about the success of the effort of adding "high level" OO features to C.

I think C++ totally failed at being high level. So, this "success" being highly controversial doesn't surprise me.


Two quotes from http://www.jwz.org/doc/java.html

C (the PDP-11 assembler that thinks it's a language)

C++ (the PDP-11 assembler that thinks it's an object system)

C isn't really a portable assembler. It was designed to be sufficiently efficient on a specific processor (the PDP-11, a register-based CPU), while providing basic abstractions (functions, structures…).

C was an overwhelming success. And so were CPUs that were fast at running compiled C programs. As far as hardware optimization is concerned, the main characteristics of C are pointer arithmetic, manual memory management, and a relatively low ratio of function calls (in C programs). These are pretty big constraints. So we ended up with C-optimized CPUs. Now, what if Unix had been implemented in Forth? All mainstream processors would have been stack based, and optimized for a very high ratio of function calls.

C++, by extending C, also has this "portable assembler" nature. It also has a number of incredibly low-level mechanisms which can be used to build pretty high-level abstractions, but it's still an extension of C.


C is a portable assembler. Sometimes that's exactly what you want.

C++ is C, except that it doesn't completely suck for higher-level projects.


LLVM IR [1] is portable assembler. C is "portable" in the sense that:

a.) you get direct access to your linear address space

b.) gcc targets so many instruction sets that you don't have to worry about your backend unless you have a custom platform.

C++ is the thing that screams "__gxx_personality_v0" at me because I always start off using gcc instead of g++.

[1]: http://llvm.org/docs/LangRef.html


Calling C portable assembler is not to be taken literally, but in context of its usage, which was for the most part systems level stuff that previously would have been written in assembler, requiring expensive ports between platforms.

C changed that dramatically and helped to cut down on porting effort tremendously.


I would say C++ is somewhat less of a portable assembler than C, due to its different linker semantics, which are not unified across platforms.


Siebel mentions this point, off-handedly, in covering Guy Steele's take, and yeah, I agree.



