
Why should I have written ZeroMQ in C, not C++ (2012) - sytelus
http://250bpm.com/blog:4
======
electrograv
It seems all too often we (coders) are encouraged to think of errors as these
_exception_-al things that happen rarely; deserving only of a few cursory
preventative treatments in the code -- or worse, treated as something only to
be fixed lazily, as bugs and crashes surface during testing. Indeed -- this
philosophy is ingrained into a significant percentage (if not the majority) of
the programming languages we use!

On the contrary, I believe we should _expect to spend the majority of software
engineering time writing and thinking through error-handling code, before even
your first test[1]._

I'd even go so far as to say: If you're not spending at least half your time
on so-called 'error handling', you're probably doing something wrong, like
using a language feature (like exceptions) to defer that technical debt to
later -- and you'll regret it if your project matures, I assure you.

This is why I so greatly appreciate languages like Rust and Zig[2] which
remove exceptions and unchecked null pointers from the language entirely (with
few edge-case exceptions), and provide a powerful and robust type system that
allows us to express and handle error conditions naturally, safely, and
elegantly.

[1] To be clear, by no means am I downplaying the importance of test code, or
even manual testing; rather, I'm arguing that purely "test driven development"
is _not sufficient_ to yield extremely robust software of significant
sophistication.

[2] These aren't the only examples, but they're among the only ones that aim
to be C++ (Rust) and C (Zig) replacements while also making the "right" design
choices (IMO) of removing both exceptions and unchecked null references.

~~~
nostrademons
This strategy tends to fail economically. The tech startups that succeed are
usually ones that let their customers _do things they would not otherwise be
able to do_. Usually doing something that nobody has done before is hard
enough without considering the corner cases; if it follows a typical 90/10
rule, then doing 100% of the job will take 10x as long as the competitor who's
only doing the easiest 90%, and your market will have been snapped up by them
long before you can release a product. Customers would rather use a product
that works 90% of the time than do without a product entirely, at least if it
delivers functionality they really want but can't get elsewhere (and if it
doesn't, your company is dead anyway).

Once you've got a commanding lead in the marketplace you can go back and hire
a bunch of engineers to finish the remaining 10% and make it actually work
reliably. That's why solutions like testing & exceptions (in GCed languages)
succeed in the market: they can be bolted on retroactively and incrementally
make the product more reliable. It's also why solutions like proof-carrying
code and ultra-strong (Haskellish) typing fail outside of markets like medical
devices & avionics where the product really needs to work 100% at launch. They
force you to think through all cases before the program works at all, when
customers would be very happy giving you (or a competitor) money for something
80-90% done.

Someday the software market will be completely mature, and we'll know
everything that software is good for and exactly what the product should look
like, and people won't dream of founding new software startups. At that
point, there'll be an incentive to go back and rewrite everything with 100%
solid and secure methodologies, so that our software has the same reliability
that airline travel has now. That point is probably several decades in the
future, though, and once it happens programming will not be the potentially
extremely lucrative profession it is now.

~~~
burfog
With that "works 90% of the time" idea, please don't ever involve yourself in
software for anything serious: air traffic control, self-driving cars,
autopilots, nuclear reactor control, insulin pumps, defibrillators,
pacemakers, spacecraft attitude control, automated train control, the network
stack of a popular OS, a mainstream web browser, a Bitcoin client, the trading
software of a major exchange, ICANN's database, certificate signing, ICBM
early warning system, cancer irradiation equipment, power steering, anti-lock
brakes, oil/gas pipeline pressure control, online tax software...

~~~
bsaul
Not working in those fields either, but I don't understand how people can be
comfortable writing life-or-death code in C either. Anything that doesn't
involve a heavy dose of formal proof or automatic validation of properties of
your code seems irresponsible as well.

~~~
axw10
C is very safe if you are experienced and don't do anything fancy.

What else would you use apart from Ada? I wouldn't trust any language with a
large runtime like Python, Java, and yes, also not Haskell.

C is very amenable to proofs that use Knuth's proof style. Also, of course,
Frama-C exists.

EDIT: If Rust is more mature, it _may_ be an option, but I'd wait at least 5
more years until (if?) it is widely used.

~~~
bsaul
ocaml with coq prover ?

------
symisc_devel
The consensus we have reached at our company is as follows:

1\. If you plan to write a software library [0] to be embedded directly into a
foreign piece of code (i.e. a host application), then go straight for ANSI C,
or C99 as a last resort. Not only should writing bindings for your code be
straightforward, but you will also benefit from the portability of C, which
makes your code virtually available on every platform or architecture out
there. Remember _write once, run everywhere_. Well, C fits exactly into this
category.

2\. If, on the other hand, you plan to write standalone software such as a
desktop app, a network server, a game, etc., then modern C++ should be your
first choice. You'll benefit greatly from shared/unique pointers, built-in
threading, lambdas, and high-performance data structures with relative
portability across modern systems. C++11/14/17 is a joy to program with once
you master the basics.

[0]: [https://sod.pixlab.io](https://sod.pixlab.io)

Edit: formatting.

~~~
blub
For 1. I'd still recommend using C++ (or D or Rust) and wrapping it in a C
interface because these languages are just better at handling complexity
compared to C.

Usually one has some idea what the target platforms are, and writing C code
that runs everywhere is pretty hard anyway.
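A minimal sketch of that wrapping approach (all names here are invented for illustration; this is not from any real project): the C++ class stays internal, and only an opaque handle plus free functions cross the boundary.

```cpp
// counter.cpp: a C++ class exposed through a plain C interface.
#include <cstdint>

namespace impl {
class Counter {
public:
    void add(int32_t n) { total_ += n; }
    int32_t total() const { return total_; }
private:
    int32_t total_ = 0;
};
} // namespace impl

// The C interface: an opaque handle and free functions. No C++ types
// cross the boundary, so any language with a C FFI can bind to it.
extern "C" {
struct counter;  // opaque to C callers

counter *counter_new(void) {
    return reinterpret_cast<counter *>(new impl::Counter());
}
void counter_add(counter *c, int32_t n) {
    reinterpret_cast<impl::Counter *>(c)->add(n);
}
int32_t counter_total(const counter *c) {
    return reinterpret_cast<const impl::Counter *>(c)->total();
}
void counter_free(counter *c) {
    delete reinterpret_cast<impl::Counter *>(c);
}
} // extern "C"
```

Callers only ever see the `counter` handle, so the C++ implementation can be swapped out without touching any binding.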

------
jdashg
That the author didn't understand how to write robust C++ by 2012 (after 5
years) without constraining themselves unnecessarily to C is not a compelling
reason to prefer C over C++.

"Doctor it hurts when I do this." Well stop doing that then. Learn what works
and what doesn't, but don't throw the baby out with the bathwater.

~~~
kllrnohj
Yeah it seems... odd. I fully get preferring return codes to exceptions, but
you can still just do that in C++? Heck, you can trivially do a type-safe
union of return value or error code in the spirit of Rust in C++.

And you can do things like enforce that the return value is used by checking
in the destructor if the return value was unwrapped.
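A minimal sketch of both ideas (hypothetical code, not from the post; assumes C++11): a Rust-flavored result type that holds either a value or an error code, with a destructor that asserts if the caller never examined it. For simplicity it stores both members side by side instead of in a union, so T and E must be default-constructible.

```cpp
#include <cassert>
#include <utility>

template <typename T, typename E>
class Result {
public:
    static Result ok(T v)  { Result r; r.has_value_ = true;  r.value_ = std::move(v); return r; }
    static Result err(E e) { Result r; r.has_value_ = false; r.error_ = std::move(e); return r; }

    Result(Result &&o)
        : has_value_(o.has_value_), checked_(o.checked_),
          value_(std::move(o.value_)), error_(std::move(o.error_)) {
        o.checked_ = true;  // the moved-from shell no longer needs checking
    }

    bool is_ok() { checked_ = true; return has_value_; }
    T &value() { assert(checked_ && has_value_); return value_; }
    E &error() { assert(checked_ && !has_value_); return error_; }

    // The "enforce that the return value is used" idea: abort if ignored.
    ~Result() { assert(checked_ && "Result discarded without being examined"); }

private:
    Result() = default;
    bool has_value_ = false;
    bool checked_ = false;
    T value_{};
    E error_{};
};
```

A caller then writes `if (r.is_ok()) use(r.value()); else log(r.error());`, and silently dropping the result trips the assertion in a debug build.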

Similarly for constructors if you have an object that can fail construction
instead of doing what the author suggested of

    
    
       class foo {
       public:
           foo ();
           int init ();
           ...
       };
    

You could instead do something better like:

    
    
       class foo {
       private:
           foo ();
           int init ();
           ...
       public:
           static std::optional<std::unique_ptr<foo>> create();
       };
    

This is a well-established pattern in languages like Java, for example, to do
exactly what the author wants - a constructor that can fail without forcing
callers to just know they need to try/catch.

~~~
shaklee3
To be fair to the author, std::optional was added in C++17, which came after
this blog post was written. I agree, maybe if he had written it today his
views would be different.

~~~
gpderetta
And yet boost::optional has existed for a good half of forever. C++
programmers have been writing their own versions (maybe, fallible, etc.) for
the other half.

------
fefe23
The article is mostly about exceptions, which is fair I guess.

However, by the time you write C code that handles _all_ errors (like
exception handling would), you end up with code that does not look as nice as
the examples given here. In production C code you often end up seeing a "goto
fail" or "goto done" and then hand-written destruction of objects. To make
that less error-prone, you have to explicitly initialize everything at the
beginning of the function. Oftentimes you even see the "goto fail" invoked via
a macro. Now, in my humble opinion, by this time you might as well have used
C++. Personally, I do like the regularity of such code when reading other
people's code. Still. It's a lot of effort. And the benefits over exception
handling are not so obvious to me. You end up restricting yourself in both
cases.
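The idiom being described can be sketched like this (the function and file format are made up; written in C style that also compiles as C++). Everything is initialized up front so the cleanup label can release resources unconditionally:

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>

int load_config(const char *path, char **out) {
    int rc = -1;
    FILE *f = NULL;
    char *buf = NULL;
    size_t n = 0;

    f = fopen(path, "rb");
    if (!f)
        goto fail;

    buf = (char *)malloc(4096);
    if (!buf)
        goto fail;

    n = fread(buf, 1, 4095, f);
    if (n == 0)
        goto fail;
    buf[n] = '\0';

    *out = buf;  /* ownership passes to the caller... */
    buf = NULL;  /* ...so the cleanup below must not free it */
    rc = 0;

fail:
    free(buf);   /* free(NULL) is a no-op */
    if (f)
        fclose(f);
    return rc;
}
```

Every exit path funnels through the one label, which is exactly the hand-written destruction the comment describes.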

Also, when considering whether constructors and destructors should throw
exceptions, consider the array case. You instantiate an array of 100 foo. If
constructor #42 throws, how do you know which ones have been initialized and
which ones haven't? Even worse, if you delete[] an array and one of the
destructors fails -- do you get memory leaks? Maybe the destructor was
supposed to release a mutex and you can get deadlocks?

I find it very telling that both C and C++ (and most other languages for that
matter) end up defining a "good subset" that you are supposed to stay inside.
You would be forgiven for concluding that we don't know what we are talking
about when we design new languages, and we should not hastily add new features
lest we end up having to recommend against them in the future. Yet, many
languages (I'm looking at C# and Javascript here) keep getting new features on
a yearly basis.

I think we should be much more sceptical.

I also think this is a dilemma that is similar to the startup dilemma. It is a
very risky proposition, so the people who end up doing it are not risk averse
and tend towards the "hold my beer" part of the spectrum.

~~~
Paul_Diraq
I think it is perfectly fine that destructors can't throw.

For the rare cases where you need finalizers that might throw, I would go for
something inspired by Python's with statement.

    
    
       WITH(expr , lambda)
    

expands to(sketch):

    
    
       bool exception_happened = false;
       auto __temp = expr;
       __temp.enter();
       try {
           lambda(__temp);
       } catch (const std::exception &ex) {
           exception_happened = true;
           __temp.exit(ex);
       }
       if (not exception_happened) {
           __temp.exit();
       }

------
tines
The one real reason to use C++ is for templates. Exceptions, object
orientation, etc. are all fluff. C++ is far better captured as “C with
metaprogramming” rather than “C with classes”, and the other features are the
supporting cast for that, not the other way around.

~~~
ndesaulniers
RAII strictly for preventing leaks of resources is the lone thing I miss in C.

Constant folding via constexpr in C++ is much, much better than in C.

Templates help write generic code that results in specialized machine code; it
may be hyper optimized at the cost of binary size. They make it harder to
write code that at runtime is a little more type generic (if that makes
sense). You can't always afford those additional copies generated. Not that
it's a bad thing; just an observation as I write both strictly C and C++ at my
day job.

Stealing the below; string handling is significantly safer with std::string.
I'm not sold on std::string_view.
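For what it's worth, the usual lifetime hazard behind that skepticism looks something like this (hypothetical functions, C++17): a string_view does not own its characters, so viewing a temporary string leaves a pointer into freed storage.

```cpp
#include <cassert>
#include <string>
#include <string_view>

std::string make_name() { return "zeromq"; }

std::string_view dangling() {
    std::string_view sv = make_name();  // the temporary std::string dies here
    return sv;                          // undefined behavior if the caller reads it
}

std::string_view fine(const std::string &s) {
    return s;  // valid only while the caller keeps s alive
}
```

Nothing in the type system distinguishes the two functions, which is why some people want a borrow checker for it.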

~~~
humanrebar
string_view is about expressiveness and design flexibility (vector<char> and
char* can trivially convert) more than safety.
That being said, string_view is arguably safer than passing around a char* and
a length.

~~~
ndesaulniers
[https://bugs.llvm.org/show_bug.cgi?id=34729](https://bugs.llvm.org/show_bug.cgi?id=34729)

Literally had to write a pseudo borrow checker to stop it:
[https://youtu.be/80BZxujhY38?t=1095](https://youtu.be/80BZxujhY38?t=1095)

So much work rather than admit the interface is bad and you should feel bad.

C++: "shoot yourself in the ~~foot~~ face."

~~~
rwbt
Sooner or later, C++ is going to get a borrow checker anyways. I guess it's
all going to be ok? :)

I'm not entirely sold on string_view or span yet. But I wasn't sold on tuples
either and I use them a lot now.

------
chombier
I don't get it: why use a cumbersome try/catch block to handle an error in the
same context when one could just use the so-called "C equivalent":

    
    
      ...
      if (condition1)
         handle_exception1 ();
      ...
      if (condition2)
         handle_exception2 ();
      ...
    

As far as I understand, the essence of exceptions is to defer error handling
_to the caller_, so local try/catches are examples of exception misuse.

~~~
sclangdon
Whilst I agree with you that this isn't the best use of exceptions, one reason
to use them this way might be that exceptions (generally) don't cost anything
on the success path; if statements do.

Of course exceptions cost a lot more on the failure path, but that's not
generally the priority.

------
IshKebab
I expected this to be about ABI stability or something. Exceptions aren't a
good reason because you don't have to use them. You can even use something
very close to Rust's Result<> if you want. LLVM does this.

"But what about constructors??" you cry. Again you can do something similar to
Rust - make the constructor very simple and private, and then have static
make() methods that return a new object or an error.

You don't have to resort to C.

------
dman
Nanomsg itself is now being rewritten - "Rationale: Or why am I bothering to
rewrite nanomsg" \-
[https://nanomsg.github.io/nng/RATIONALE.html](https://nanomsg.github.io/nng/RATIONALE.html)

~~~
wglb
And it showed up on HN at
[https://news.ycombinator.com/item?id=16796531](https://news.ycombinator.com/item?id=16796531)
with lots of commentary.

------
RcouF1uZ4gsC
And in the interim, nanomsg is in "sustaining mode".
[https://github.com/nanomsg/nanomsg/blob/master/SUPPORT](https://github.com/nanomsg/nanomsg/blob/master/SUPPORT)

There is a rewrite which addresses some of the issues with nanomsg:
[https://nanomsg.github.io/nng/RATIONALE.html](https://nanomsg.github.io/nng/RATIONALE.html).

IMO, in 2018, it is a mistake to rewrite nanomsg in C. Either Modern C++(17)
or Rust should be used. The advantages in expressiveness and safety make a big
difference.

------
acroback
We use ZeroMQ in production as part of a message layer which provides very
tight and predictable latencies, sustaining more than 300K QPS across multiple
services and across 3 data centers (GCP, AWS, and a private datacenter). This
is a stellar testament to how good ZeroMQ is as a mature solution.

I think the reason we are successful with our design is that ZeroMQ is just
great for what it does. It never crashed or gave us problems; the only issue
we saw was because we did not understand how it worked internally.

Had it been written in C, we would have been able to hack it a little to our
tastes, but I am not complaining. I can understand why someone would use C
instead of C++ for high-performance libraries.

Oh on the subject of C:

This is what we have: the main service, which is written in C, does around 8K
QPS at peak hours (on a 2-core, 4GB Amazon EC2 instance), so we have around
35-40 instances in the cloud. The cost to run this service is low for us; we
love it. :)

Biggest issue: people are bad at C, and writing correct multithreaded code in
C is an art.

------
erpellan
Glossing over the fact that Pieter Hintjens was actually responsible for
ZeroMQ.

------
mwkaufma
Exceptions are a specific solution to a specific problem. E.g. you're an OS
and you're pushing out a minor-version maintenance update which tightens
security
policy, adding new failure codes. You cannot expect every user application to
be recompiled. Exceptions ensure that a loosely-coupled interruption channel
exists in user code by compiler-fiat. Other uses are specious, but let's not
throw the baby out with the bathwater.

------
rwbt
I use C++ everyday and I would describe my usage as mostly "C with templates
and destructors." I also tend to avoid OOP paradigms if possible.

All the modern C++11 and above features are great if you're using standard lib
and algorithms. But I really don't want to implement copy/move semantics,
iterators etc for my own classes. Modern C++ feels more python-esque with a
lot of syntax sugar (which is great for expressiveness).

------
dkrikun
People mostly keep saying that: 1\. C++ has at least everything C has, plus
"other features", so it cannot be worse. 2\. C++ is safer than C.

Regarding 1.: a. C99 and C++11 have diverged. For example, `struct xx x = {0}`
has different meaning in those languages. b. Lack of certain "features" is an
advantage. See, C++ is a huge and complex language (r-values, x-values,
exception safety, templates, sfinae and so on and so forth). Yes, you can
restrain yourself from using them. But can you effectively restrain others
working on your codebase, now and in the future? Good luck writing and
maintaining code style guidelines.

2\. Well, yes, by using smart pointers, STL, RAII, C++ is safer than C for
small, fresh codebases. But remember: C++ was designed for backward
compatibility with C. It's full of undefined behaviour and pointer
arithmetic. Be careless once and all the safety is gone, and then it will be
harder to debug than plain old C.

------
zbentley
The title should indicate the year of the post: 2012.

------
iainmerrick
These objections seem pretty easily addressed:

1\. Don’t use exceptions

2\. If you have constructors that can fail, use heap objects and factory
functions.

~~~
flukus
Those solutions have other consequences because of how everything interacts.
No exceptions means no std and you have to be very careful about what other
libraries might do. Likewise, it makes the consuming code more complicated:
they now have to deal with both return codes and exceptions.

You can't really just avoid language features like that.

~~~
jcranmer
Many large C++ projects--LLVM, Chromium, Firefox--disable C++ exception
handling. They still use the standard C++ library.

Exceptions in the standard library boil down to a) passing through user's
exceptions, b) failing memory allocation (note that some systems, such as
Linux, won't fail a memory allocation but instead crash you when you try to
use it), and c) throwing exceptions when you exceed the bounds of an array or
the like (~= Java's RuntimeException). With exception handling turned off,
class (a) doesn't exist and classes (b) and (c) turn into hard program
crashes, which is almost invariably what you want to do for these errors
anyways.

Despite the fact that C++ has been my main programming language for a decade,
I don't actually know how to write catch clauses correctly in C++, unlike
Java, Python, and JS. That's how easy it is to avoid C++ exceptions.

~~~
wahern
> some systems, such as Linux, won't fail a memory allocation but instead
> crash you when you try to use it

False.

1) You can disable overcommit, and there are many of us that do this as a
matter of course on all our Linux servers.

2) malloc can fail because of process limits (e.g. setrlimit or cgroups).

I don't program in C++, but I do use RAII-like patterns in C. By that I mean
that when I create and initialize objects, all the necessary internal
resources--particularly those that rely on dynamic allocation--are also
created and initialized in the same routine.

That means most places where memory allocation can fail are grouped closely
together into a handful of initialization routines, and the places where
allocation failure results in unwinding of a logical task are even fewer.
(While C doesn't automatically release resources, following an RAII-like
pattern means deallocations are just as structured and orderly as
allocations.)

I can understand bailing on allocation failure in scripting languages--not
only is there much more dynamic allocation taking place, but allocation
happens piecemeal all over the program, and often in a very unstructured
manner (strings of variable length generated all over the place). Furthermore,
often script execution occurs in a context that can be isolated and therefore
unwound at that interface boundary--i.e. a C- or C++-based service executing a
script to handle a transaction.

But in languages like C++ and Rust, _especially_ for infrastructure software
and libraries, it's a sin IMO. These are languages intended for use in
situations where you can carefully structure your code, they make it trivial
to minimize the number of allocations (because POD-oriented), and they permit
one to group and isolate allocations in ways (e.g. RAII) that make it
practical (if not trivial) to unwind and recover program state.[1]

But why even bother?

1) Because these languages are often used in situations where failure matters.
A core piece of system software that fails on malloc is a core piece of system
software that is _unreliable_, and programs that rely on it can behave in
unpredictable and even insecure ways.

1.a) Go authors take the view that reliability comes from running multiple
instances in the cloud. Yes, that's one way, but it's not the only way, not
always an option, and in any event anybody with enough experience dealing with
"glitches" in cloud services understands that at least in terms of QoS there's
no substitute for well-written, reliable service instances.

1.b) Security. It's often trivial to create memory pressure on a box. OOM
killers are notorious for killing random processes, and even without
overcommit the order of allocations across all processes is non-deterministic.
Therefore, not handling OOM gives attackers a way to selectively kill critical
services on a box. Disruption of core services can tickle bugs across the
system.

2) Overcommit is an evil all its own. It leads to the equivalent of buffer
bloat. Overcommit makes it difficult if not impossible to respond to memory
resource backpressure. This leads to reliance on magic numbers and hand-
tweaking various sorts of internal limits of programs. We've come full circle
to the 1980s where enterprise software no longer scales automatically (which
for a brief period in the late 1990s early 2000s was a real goal, often
achieved), but instead requires lots of knob turning to become minimally
reliable. Nobody questions this anymore. (Ironically, Linux helped lead the
way to knob-free enterprise operating systems by making kernel data structures
like process tables and file descriptor tables fully dynamic rather than
statically sized at compile or boot time, so the kernel automatically scaled
from PCs to huge servers without requiring a sysadmin to tweak this stuff.
Notably, Linux doesn't just crash if it can't allocate a new file descriptor
entry, nor do most programs immediately crash when open fails. Even more
ironically, overcommit on Linux was originally justified to support old
software which preallocated huge buffers; but such software was written in an
era where sysadmins were expected to tailor hardware and system resources to
the software application. Overcommit has perpetuated the original sin.)

Not all software needs to handle OOM. Not even all C, C++, or Rust
components. But for infrastructure software, OOM should be handled no
differently than file or network errors--subcomponents should be capable of
maintaining
consistent state and bubbling the error upward to let the larger application
make the decision. And if you're not writing critical infrastructure software,
why are you using these languages anyhow?[2] If a language or framework
doesn't permit components to do this, then they're fundamentally flawed, at
least in so far as they're used (directly or indirectly) for critical
services. You wouldn't expect the Linux kernel to panic on OOM (although some
poorly written parts will, causing no end of headaches). You wouldn't expect
libc to panic on OOM. There's no categorical dividing line beyond which
developers are excused from caring about such issues.

[1] Granted, Rust is a little more difficult as it's hostile to allocation-
free linked-lists and trees, such as in BSD <sys/queue.h> and <sys/tree.h>.
Hash tables require [potential] insertion-time allocation. Still, it's not
insurmountable. Many forms of dynamic allocation that can't be rolled into a
task-specific initialization phase, such as buffers and caches, are colocated
with other operations, like file operations, which already must handle
spurious runtime failures, and so the failure paths can be shared.

[2] Maybe for performance? Performance-critical tasks are easily farmed out to
libraries, and libraries are particularly suited to handling OOM gracefully by
unwinding state up to the interface boundary.

~~~
jcranmer
Linux, in most distributions, enables overcommit. That is a fact that anyone
distributing software is going to have to deal with. Saying that you
personally choose to disable it whenever possible doesn't make that fact go
away.

> But for infrastructure software OOM should be handled no different than file
> or network errors--subcomponents should be capable of maintaining consistent
> state and bubbling the error upward to let the larger application make the
> decision.

OOM, to me, is more like a null pointer dereference or a division by zero. If
it happens, it's because I as a programmer screwed up, either by leaking
memory, having a data structure that needs to be disk-based instead of memory-
based, or by failing to consider resource bounds.

The problem with trying to handle memory allocation is that a) it can happen
just about anywhere, b) you have to handle it without allocating any more
memory, and c) if there is a _single_ place where you forget to handle
allocation failure, your program is not robust to OOM errors. I rather expect
that the fraction of programs that could not be crashed with a malicious
allocator that returns allocation failure at the worst possible time is well
south of 0.01%.

~~~
wahern
> a) it can happen just about anywhere

That's like saying a NULL pointer dereference or division by zero can happen
_anywhere_. Only for poorly written code that doesn't think through and
maintain its invariants. Languages like C++ and Rust make it easier to check
your invariants, but plenty of C code does this, including kernels and complex
libraries. And they don't do it by inserting assertions before every pointer
dereference or division operation.

As I said, the RAII-pattern is one way to keep your invariants such that most
allocations only happen at resource initialization points, not at every point
in a program where the object is manipulated.

> b) you have to handle it without allocating any more memory,

Unwinding state doesn't need more memory, not in languages like C, C++, or
Rust. If it did then the kernel would panic when cleaning up after programs
terminated when out of memory.

This argument is a common refrain from GUI programmers who wish to throw up a
notification window. But that's distinct from recovering to a steady state.

In those particular contexts where recovery isn't possible or practical, then
you can't recover. Again, scripting languages are an obvious example, though
some, like Lua, handle OOM and let you recover at the C API boundary.
(Notably, in Lua's case the Lua VM context remains valid and consistent, and
in fact you can handle OOM gracefully purely within Lua, but only at points
where the VM is designed to be a recovery point, such as at pcall or
coroutine.resume.) But the existence of such contexts doesn't mean recovery is
never possible or even rarely possible.

> c) if there is a single place where you forget to handle allocation failure,
> your program is not robust to OOM errors.

If you keep an RAII pattern then handling OOM is little different than
deallocating objects, period. In that case, your statement is the equivalent
of saying that because memory bugs exist, no program should bother
deallocating memory at all.

Now, I've seen programs that can't handle deallocation; programs that were
written to be invoked as one-shot command-line utilities and never concerned
themselves with memory leaks. Trying to fix them after the fact so they can
run from a service process is indeed usually a hopeless endeavor. Likewise,
trying to fix a program that didn't concern itself with OOM is also a hopeless
endeavor. But it doesn't follow that therefore when starting a project from
scratch one shouldn't bother with OOM at all, no more than saying nobody
should bother with deallocation.

The objections people have to OOM handling are self-fulfilling. When
programmers don't consider or outright reject OOM handling then _of_ _course_
their code will be littered with logic that implicitly or explicitly relies on
the invariant of malloc never failing. So what? You can't infer anything from
that other than that programs only work well, if at all, when the implicit or
explicit assumptions they were written under continue to hold.

------
jeremyjh
This has been posted before.

[https://news.ycombinator.com/item?id=9577988](https://news.ycombinator.com/item?id=9577988)

[https://news.ycombinator.com/item?id=3953434](https://news.ycombinator.com/item?id=3953434)

~~~
dang
Also
[https://news.ycombinator.com/item?id=6220049](https://news.ycombinator.com/item?id=6220049),
and part 2 at
[https://news.ycombinator.com/item?id=4455225](https://news.ycombinator.com/item?id=4455225).

------
ruduhudi
This is a long discussion and I hope this has not come up yet:

You can avoid the double constructor problem by using a Rust-style
constructor pattern:

    
    
       class Foo {
       public:
           static int from(Foo **out) {
               *out = new Foo();
               (*out)->x = 3;
               return Status::SUCCESS;
           }
           int x;
       };
    

------
sifoobar
I have the same experience from implementing an interpreter [0] in C++ and
later rewriting it from scratch in straight C. The popular advice to use C
with classes and skip the rest doesn't hold; for me it always turns into a
constant losing struggle to keep the code clean of fancy abstractions. And as
the author mentioned, fixing performance problems suddenly means replacing STL
features, which is a lot more work than simply writing what you need. Assuming
you can figure out which knob needs turning in the first place, that is.

I first came across Turbo C++ back in 1995; it was the third language I
learned, several years before I started my C journey. But I've gradually come
to the point where I consider C++ to be a bad compromise, period. For low
level fundamentals, C works better; and for higher level code there are plenty
of more convenient options.

[0] [https://gitlab.com/sifoo/snigl](https://gitlab.com/sifoo/snigl)

------
abacate
I never understand this kind of reasoning. Sure, C++ has exceptions, but you
are not forced to use them. Just because you go to a restaurant and they have
dessert on the menu it doesn't mean you have an obligation to order it.

In fact, you can easily define a result type templated by your result/error
code (similarly to Rust or OCaml/Base) and use it in your whole codebase.

I never use virtual or exceptions in my C++ code, for instance, and I rarely
need anything fancy like smart pointers.

Maybe one could argue that the language is too complex and supports too many
different programming styles, but that doesn't justify the programmer choosing
the worst way to do it: that's on the programmer and not on the language.

------
hi41
Can someone please suggest a book or exercises for understanding exceptions?
I was working on a Java project and I was not productive because I could not
understand where I should throw errors and which function should catch them.
I did not get far and was eventually removed from the project because of the
buggy code. I think it happened because I don't understand how to write code
using exceptions. At what layer of code should I catch exceptions? How do I
deal with nested function calls?

------
svilen_dobrev
Errors are the only way the programmer can tell anybody else what is happening
(when it is outside the norm). They are, in a way, more important than the
usual results.

On the other hand, error handling is the only way the programmer can
acknowledge what was said above (and do something about it).

And while software itself is just a means of communication, the above
conversation rarely exists in it. Most times it isn't mentioned even in the
high-level pre-software "software", that is, the specs.

Go figure.

------
kstenerud
For me, the most infuriating part of C++ is the error reporting. Even for
trivial programs, I have to run `make 2>&1 | less` and then search with
`/error` just to get to the first of 15 errors mixed in with 35 warnings
spanning 3 pages, all from one typo.

C++ has come a long way since the 90s, but it's still an incredibly clunky and
obtuse language, with 10x more ways to do it wrong than right.

~~~
jcelerier
If you use an IDE, you just click on a line in the error log and it takes you
to where the problem is. And nowadays IDEs such as QtCreator use clang to
analyse your code in real time, so the errors appear as inline hints as you
type.

------
overgard
Given that C++ is a superset of C I don’t get why the author wouldn’t just
cherry pick the features he needs. Nobody is forcing you to use exceptions or
the STL in C++ but you can still get templates and namespaces and smart
pointers and references and const and better string handling...

~~~
dkrikun
C++ is definitely not a superset of C, and not just for purely linguistic
reasons. Also, given that ZeroMQ is a community-driven project, requiring
contributors to use C++ but avoid 50% of its features is weird and
non-idiomatic at best. If you want C, write C.

------
somefive

        "The decoupling between raising of the exception and handling it, that makes avoiding failures so easy in C++."
    

The point of exceptions, over C-style error codes, is that you can't forget to
handle errors.

    
    
        "When you create an instance of the class, constructor is called (which cannot fail) and then you explicitly call init function (which can fail)."
    

That's an anti-pattern. Initialization should happen in the constructor, not
be separated into another step. (Look up RAII.)

If you can't make exceptions work for you (for instance, if you are writing
drivers or embedded programs), I have good news for you: lightweight
exceptions are being worked on and will most likely end up in the standard in
2023.

[http://open-std.org/JTC1/SC22/WG21/docs/papers/2018/p0709r0....](http://open-
std.org/JTC1/SC22/WG21/docs/papers/2018/p0709r0.pdf)

------
mobilemidget
"It's far more readable and — as a bonus — compiler is likely to produce more
efficient code."

I want to learn more about this. When, or how, do you know that a compiler
produced efficient code? Does anybody have links to interesting reading on
this?

~~~
narrowtux
The example

    
    
        int rc = fx ();
        if (rc != 0)
            handle_error ();
    

can be simply converted into ASM

    
    
        call fx            ; return code ends up in eax
        test eax, eax      ; set flags from the return code
        jnz  handle_error
        ; rest of code
        handle_error:
          ; code of handle_error()
    

whereas exceptions in C++ require the compiler to generate considerably more
machinery than this (unwind tables, plus landing pads for every call that can
throw).

~~~
mhh__
The C++ code can be compiled in such a way that the happy path has no
branching, using stack unwinding instead; the trade-off is that actually
throwing an exception becomes slower. However, exceptions shouldn't be used
for non-exceptional circumstances anyway.

------
wallstprog
My two cents, fwiw: [http://btorpey.github.io/blog/2014/09/23/into-the-
void/](http://btorpey.github.io/blog/2014/09/23/into-the-void/)

------
gameswithgo
If avoiding undefined behavior at all costs is important, I'm not sure C is
the right choice either. Rust would also avoid exceptions and eliminate most
(all?) of the undefined behavior that C has.

~~~
steveklabnik
Safe Rust has no UB, by design, but unsafe does exist. We have a fairly short
list of UB though [https://doc.rust-lang.org/reference/behavior-considered-
unde...](https://doc.rust-lang.org/reference/behavior-considered-
undefined.html)

------
taf2
I was always told C++ is fine, just don't use exceptions.

~~~
corysama
Someone smarter than me once pointed out that exceptions are great as long as
you don't have to worry about rolling back state. So, they are a great match
for pure functional, stateless languages. But, the more procedural the
language, the more complicated exceptions become. C++ can be very procedural
and I'm of the opinion that I'm not smart enough to use exceptions in C++.
Instead, I'm playing with Expected and monadic error handling.

~~~
jcelerier
> Someone smarter than me once pointed out that exceptions are great as long
> as you don't have to worry about rolling back state

and the point of exceptions in C++ is that they are meant to be used in
combination with the "rolling-back state" feature baked into the language,
namely RAII. It's only a pain in languages without such facilities, such as
C# or Java, but in C++ if I do

    
    
        std::ifstream my_file("data.txt");
        // .. do stuff
        auto f = std::make_unique<whatever>(...);
        // ...
        throw some_error;
    

I _know_ that my_file will be closed if it had been opened, and that my memory
will be freed.

Of course if your code is littered with `new` and `fopen` left and right that
won't work, but the code won't pass code review in the first place :-)

~~~
tonyedgecombe
C# will let you do something similar with the using statement.

~~~
jcelerier
But using "using" in C# requires an explicit action from the programmer; in
C++ it's the default behaviour for every value.

------
sagichmal
(2012)

------
User23
Well maintained C++ codebases have a style guide limiting programmers to a
restricted subset of the language. Not allowing exceptions at all is often a
good idea.

------
dejaime
Oh, wait, did he actually use C++ with exceptions enabled? Who th does that?

------
dejaime
You should have written it in Rust. Why? Because ... idk

~~~
comex
Rust was still years away from 1.0 when this was written. If we consider Rust
as it is today, it would address some of the author’s complaints about C++,
but not all. Rust’s error handling is more explicit and supports fallible
“constructors” (Rust doesn’t actually have a true concept of a constructor) –
but it doesn’t support fallible destructors, and the standard library notably
lacks the ability to cleanly handle out-of-memory conditions. Privacy is more
flexible, and there’s not as much of a strict “object-oriented” focus. On the
other hand, intrusive lists are arguably even worse than in C++, because they
require a lot of unsafe code and don’t play well with the borrow checker (in
particular, you can’t enforce unique access).

------
the_clarence
s/C/Rust/

------
austincheney
What the article attempts to exemplify is conciseness and transparency. C++ is
inherently OOP and OOP is not concise or transparent.

For example when you extend an object the object knows of its chain of
inheritance at compile time, but this is not immediately evident from reading
an extended instance in the code. This allows code that is simple to write and
expand, but more challenging to maintain. It also results in a large amount of
boilerplate programming by convention.

The way I prefer to think about this is programming as a means of
communication. There are three mental models of communication: _explicitness_
, _implicitness_ , and _stupid_.

Inherited object instances are implicit structures. You have some idea of what
they are because of where they come from and what you know about that thing
they come from. This requires some amount of reasoning that isn't directly
spelled out. Implicitness is a convenience that scales well. In spoken
language the most common implicit things are pronouns.

Making everything explicit takes a tremendous amount of work. The result,
though, is clarity: you know what you are looking at simply by reading it.
Counter-intuitively, explicit things are not necessarily more code, and in
many cases actually result in less code. This is the nature of sacrificing
convenience for precision.

~~~
jcelerier
> C++ is inherently OOP

in 1985, maybe. It was already pretty much not OOP in 1999 with libraries such
as boost.graph, and it is about as far from "Java/C#" OOP as you can imagine
in 2018, where you can easily build lazy functional pipelines
([https://github.com/ericniebler/range-v3](https://github.com/ericniebler/range-v3)),
pattern-match
([https://github.com/mpark/patterns](https://github.com/mpark/patterns)), and
where generic programming is the dominant paradigm.

