A lot of C++ verbosity comes simply from bad defaults. Const and virtual should be the default, for example, not the other way around. That would stop the virtual-destructor omission problem, and if you need efficiency you could make destructors non-virtual.
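For reference, the omission problem looks roughly like this (a minimal sketch with hypothetical class names):

struct Base {
  ~Base() {}                      // non-virtual destructor: the C++ default
};
struct Derived : Base {
  int* buffer = new int[64];
  ~Derived() { delete[] buffer; }
};
int main() {
  Base* p = new Derived();
  delete p;   // undefined behaviour: ~Derived() is never run, so the buffer leaks at best
}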
Also in the "hiding stuff behind the scenes" department C++ is quite bad.
SomeClass someMethod(SomeClass a, SomeClass b) {
...
}
is doing a lot behind the scenes. The code below is better, but it's longer and less convenient to write.
const SomeClass& someMethod(const SomeClass& a, const SomeClass& b) {
...
}
And actually C++ code isn't easy to reason about without reading the whole program. It isn't even easy to parse.
What's going to happen when you run this?
y = f(x);
It may be that f is a function, or a type. f may return the correct type to assign to y, or it may return something else and automatically run some conversion. There may be a copy constructor and some destructors involved if it's returning an object and not a reference. It may run an overloaded operator= and do anything at all, for example add f(x) to y. Hell, f can also be an object of a class with operator() overloaded, and you would need to track its state to see what will happen.
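To make that concrete, here is a sketch (all names hypothetical) of a few declarations elsewhere in the program that would each give y = f(x); a different meaning:

int f(int x) { return x * 2; }                    // 1. f is a plain function

struct f_type { f_type(int) {} };                 // 2. f could instead be a type: f(x) builds a
                                                  //    temporary, with a conversion plus a
                                                  //    copy/move and destructors behind the scenes

struct Counter {                                  // 3. f could be an object whose class overloads
  int calls = 0;                                  //    operator(); the result depends on its
  int operator()(int x) { return x + ++calls; }   //    current state
};

struct Accumulator {                              // 4. y's class may overload operator= to do
  int total = 0;                                  //    anything at all, e.g. add f(x) to y
  Accumulator& operator=(int v) { total += v; return *this; }
};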
And we haven't even touched the subject of #define.
It's much easier to reason about Java code for example.
It was the right default twenty years ago for performance reasons.
These days, saving a vtable pointer is not really a good reason, and all languages that have the opposite default (such as Java, as you point out) are doing quite fine.
Because of this default, I can't count the number of times I've seen "#define private public" and other horrors that developers used in order to extend classes whose creators were too short-sighted to design them properly.
I'm very glad that virtual is not the default. Most of the classes I write are simply value classes and do not use inheritance at all. Once you start using virtual, you really have to embrace traditional inheritance idioms whole hog, and then you've got std::vector<std::shared_ptr<Foo>> instead of std::vector<Foo>.
If anything, the performance difference between std::vector<shared_ptr<Foo>> and std::vector<Foo> is even greater today than it was twenty years ago.
On the flip side, if you are not using inheritance then you could have solved the same problem with abstract data types. The only big problem is that in C++ the method call syntax is much more convenient to use: foo.frob() vs Foo::frob(foo). IMO, the correct way to fix this is by adding syntactic sugar to the language, not by making methods non-virtual by default.
I don't understand. Do you mind elaborating? Not sure what you mean, specifically, by "abstract data type" (I think of ADT as just another synonym for a class), nor do I get the static method thing. If you were calling hard-coded static methods, you wouldn't have polymorphism anyway, so how would you have virtual methods?
The real problem with defaulting to virtual isn't the vtable pointer, it's the lack of inlining.
Not being able to inline a method like this one (from vector)
T& operator[](size_t pos)
{ return data[pos]; }
would kill performance.
In languages which do run-time optimisation you can inline such methods later, but in C++ that's not possible, and proving when you can de-virtualise a method (which most compilers try to do) is very hard and often fails.
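A rough sketch of the difference, using hypothetical buffer classes rather than the real std::vector code: with a non-virtual accessor the compiler can inline the call straight into the loop, while a virtual one forces an indirect call per element unless it can prove the dynamic type.

#include <cstddef>

struct PlainBuffer {
  int* data;
  int& operator[](std::size_t pos) { return data[pos]; }          // trivially inlined
};

struct VirtualBuffer {
  int* data;
  virtual int& operator[](std::size_t pos) { return data[pos]; }  // indirect call per element
};

long sum_plain(PlainBuffer& b, std::size_t n) {
  long s = 0;
  for (std::size_t i = 0; i < n; ++i) s += b[i];   // compiles down to a direct load
  return s;
}

long sum_virtual(VirtualBuffer& b, std::size_t n) {
  long s = 0;
  for (std::size_t i = 0; i < n; ++i) s += b[i];   // vtable dispatch unless de-virtualised
  return s;
}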
Why would it be bad? You could always mark a method non-virtual (maybe by reusing inline? it's already a useless keyword) if you need the performance, and most code doesn't. And the non-virtual default is a source of errors.
Because if the compiler cannot figure out how to de-virtualize your usage of a structure, then you're going to necessitate that a vtable be created at compile time and used at runtime.
The reason the previous commenter asked if you're from the Java world is that in many highly important areas, the effect this would have on memory and speed would be unacceptable. These areas _tend_ to be left to people who understand languages like C, though, so a lot of newer languages make decisions ignoring these good use cases.
I don't know which world I'm from. I mostly program (for money) in Java nowadays, but I also did C++ for money for about 6 years, and I knew C years before that. I mostly learnt programming on Turbo Basic and Turbo Pascal. But I never had to do systems programming.
Also, I don't think it's that big of an achievement to understand C. Despite its flaws it's a very simple language, very different from C++.
To the point - you could ensure that the compiler can de-virtualize your usage of a structure (or class, if you will) by adding "nonvirtual" to every method it implements or inherits. I don't see how that's any better than having to delete "virtual" from every method it implements or inherits. It's just a question of defaults, and I'd say most modern C++ code isn't written with the performance goals that justify non-virtual as the default. You can and should profile after writing something anyway if you care about performance.
And anyway, if you have derived classes it's almost always the case that you want at least some of your methods virtual; otherwise what's the point?
>I'd say most of modern C++ code isn't written with the performance goals that justify nonvirtual as default
I am using C++ right now for embedded code, but otherwise I write mostly JavaScript. There was a time when C++ was the standard choice of language for desktop apps etc., but that's hardly the case any more. You choose C++ if you need performance and control. And I think the "default means the least overhead" concept makes a lot of sense there.
Also, inheritance is a very central part of almost all Java code, whereas it's much less idiomatic in (modern) C++. In C++, classes serve to give you RAII and you specialize with templates.
>"To the point - you could ensure that compiler can de-virtualize your usage of structure (or class if you will) by adding "nonvirtual" to every method it implements or derives."
Think of it this way. Many respectable people have been making a case for avoiding inheritance[1][2]. What you are proposing would actually be an incentive to use more of it. I don't know about you, but the proportion of functions that I actually override in my code is not even close to 20%.
I agree with "composition over inheritance", but I disagree with "inheritance is evil". Composition over inheritance means you divide your classes into parts that ALSO use inheritance and virtual methods; you just don't make the hierarchy deep and don't mix many different subdivisions into one hierarchy. The problem isn't virtual methods, it's too many divisions and responsibilities in one class hierarchy.
And you do want to refactor your virtual methods, and the extracted methods usually need to be virtual too. Even if at first they don't, they may need to become virtual in the future, and IMHO it's better to just make them virtual from the start if you don't REALLY need the performance.
You can mess up because of nonvirtual-by-default too, especially in C++ because of the difference between stack and heap objects.
struct A {
virtual int f(int x, int y) { return g(x,y); }
int g(int x, int y) { return x+y; }
};
struct B : public A {
int f(int x, int y) { return g(x,y)+1; }
int g(int x, int y) { return x+y-1; }
};
B* b1 = new B();
A* b2 = b1;
B b3;
b1->f(2,2); // 4
b2->f(2,2); // 5
b3.f(2,2); // 4
Those actually all return 4. Since f is virtual in the base class, it remains virtual in all derived classes, regardless of whether they say so explicitly.
The override keyword was added in C++11 to help prevent the sort of mistake you were trying to show. Any methods you mark with it will result in a compile error if they are not actually overriding anything.
For example, if B::g were marked as override, it would fail to compile because A::g is not virtual.
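Spelled out, reusing the A from the snippet above (this version of B intentionally does not compile):

struct B : public A {
  int f(int x, int y) override { return g(x,y)+1; }  // fine: A::f is virtual, this overrides it
  int g(int x, int y) override { return x+y-1; }     // compile error: A::g is not virtual,
                                                     // so there is nothing to override
};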
>Many respectable people have been making a case for avoiding inheritance[1][2].
Unfortunately, class inheritance is the only way to create the equivalent of an interface or (Scala-type) trait in C++. So, even if you avoid class inheritance in general (which is a good thing IMO), you still end up doing inheritance if you want run-time polymorphism in the form of abstract classes or abstract base classes.
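For example, the usual C++ approximation of an interface is an abstract base class with only pure virtual functions (a sketch with hypothetical names):

// The "interface": no data, only pure virtual functions, plus a virtual destructor.
struct Drawable {
  virtual ~Drawable() = default;
  virtual void draw() const = 0;
};

// Implementing the interface still means inheriting from it.
struct Circle : Drawable {
  void draw() const override { /* ... */ }
};

void render(const Drawable& d) { d.draw(); }   // run-time polymorphism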
Over 20+ years of development, I've found this objection to be vastly theoretical and with no practical consequences.
In practice, most classes are designed without inheritance in mind and yet, being able to extend and override them has proven infinitely more valuable than the occasional case where such an overriding breaks the parent class.
The practical reality is that even if a class is not designed for inheritance, inheriting from it is unlikely to break it but very likely to make its user's life much, much easier.
Ok, that explains it. When you're used to Java it's easy to expect inheritance to be a big deal. The fact of the matter is that run-time polymorphism (aka virtual) is much more rarely necessary in the C++ world than in the Java world. The only reason it's common in Java is that it's the only tool available for many jobs that C++ has other tools for, and it also avoids an extra composition overhead that doesn't exist in C++.
tl;dr: Just because it's "infinitely more valuable" in Java doesn't mean the same for C++.
It's a lot easier to reason about your first version of 'someMethod', which passes by value, than the second. It can actually be more performant than the second as well, since the code inside the body of the function (which may have been compiled during another invocation of the compiler on a separate file) now knows the dynamic type of the object it's dealing with, meaning all interior virtual calls can be made direct.
The first form of 'someMethod' will also accept all 4 combinations of moves and copy operations on 'SomeClass': (copy a, copy b), (copy a, move b), (move a, copy b), (move a, move b). Your second, less convenient, function results in moves degrading to copies, leaving performance on the table if 'someMethod' performs mutation.
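A sketch of that point, assuming a hypothetical SomeClass with a string member and a made-up body for someMethod:

#include <string>
#include <utility>

struct SomeClass { std::string payload; };

// Pass-by-value version: the caller decides, per argument, whether to copy or move.
SomeClass someMethod(SomeClass a, SomeClass b) {
  a.payload += b.payload;   // free to mutate its own copies
  return a;
}

int main() {
  SomeClass x{"hello "}, y{"world"};
  SomeClass r1 = someMethod(x, y);                        // copies both arguments
  SomeClass r2 = someMethod(std::move(x), std::move(y));  // moves both: no deep copies
  (void)r1; (void)r2;
}

With const SomeClass& parameters, the function would have to make its own copies internally before mutating, so the caller's moves buy nothing.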
Your 'y = f(x)' ambiguity isn't a problem in practice, since most code styles use different naming conventions for classes/structs and function names.
Conversions, construction, copy, move, and assignment semantics etc, are one of the most important, and one of the most difficult things to get right, when it comes to class design. If you make sane choices though, and put thought in to it, automatic conversions etc shouldn't be bothersome.