a < b , c > d;
If a is a global variable, it describes invocations of the "<" and ">" operators on four different variables, joined by the comma operator.
Maintaining a parse tree of this is a massive mess, especially if func() is a template itself (and hence the meaning of a would change based on the instantiation).
a < b , c > d();
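To make the two readings concrete, here is a minimal sketch (all names are illustrative) in which the same token shape is either a pair of comparisons or a declaration, depending on what a is:

namespace value_reading {
    int a, b, c, d;
    void f() { a < b, c > d; }       // two comparisons joined by the comma operator
}
namespace template_reading {
    template <typename X, typename Y> struct a {};
    struct b {}; struct c {};
    void f() { a<b, c> d; (void)d; } // declares a variable d of type a<b, c>
}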
Of course you still need to "parse" the template when it is encountered but you have to do it without semantic information (e.g., you don't know what `a` will expand to until instantiation) -- I guess that is the problem.
It is funny that Lisp's defmacro doesn't have this problem because the code itself is a syntax tree.
In short: Token stream alone is not enough. You need to decide whether T::A * b; is a pointer declaration or a multiplication immediately when you parse the template. If A is a dependent name (i.e. if T is a template parameter), it is assumed to be a variable (if that's not correct, the programmer must use typename or template).
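A minimal sketch of that rule, with hypothetical names, which a conforming compiler accepts before any instantiation exists:

template <typename T>
void f(int b) {
    T::A * b;                    // parsed as multiplication: the dependent T::A is assumed to be a value
    typename T::A* p = nullptr;  // 'typename' turns the same shape into a pointer declaration
    (void)p;
}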
MSVC has only recently completed their implementation of two-phase lookup, some twenty years after it was defined as the correct option in the ISO C++ standard. They have an excellent writeup here: https://devblogs.microsoft.com/cppblog/two-phase-name-lookup...
In your example, 'a' is known at parse time to either be a type or a value even if it's a template parameter, so the statement will always parse one way or another. Of course, 'a' itself may change in type or value so the problem is still unwieldy and, probably key to your use case, requires knowledge of type information from a possibly far-off part of the translation unit.
template <typename foo> void func() {
    foo::a < foo::b , foo::c > d;
}
"We describe an alternative syntactic binding for C++. This new binding includes a completely redesigned declaration/definition syntaxfor types, functions and objects, a simplified template syntax, and changes to several problematic operators and control structures. Theresulting syntax is LALR(1) parsable and provides better consistency in the specification of similar constructs, better syntacticdifferentiation of dissimilar constructs, and greater overall readability of code."
I've been waiting to make it open source until I finish writing the user manual (aiming for the end of this year). If that's interesting to you, I post updates about it on Twitter at cigmalang.
Even if C++ is a little baroque, C++17 and now C++20 provide many of D's benefits while keeping all the libraries, and C++'s tooling is finally getting Java-like, which makes it hard to justify throwing all of that away.
No. D failed due to a mandatory GC and the 'two standard libraries' idiocy.
Thankfully, with the likes of Swift on iDevices, Java/Kotlin on Android (with an increasingly constrained NDK), COM/UWP über alles + .NET on Windows, ChromeOS + gVisor, and Unreal + GCed C++, those devs will have a very tiny niche to contend with.
I give it about 10 years for pure manual memory management to end up like Assembly and embedded development.
For someone who has spent some time thinking about memory management strategies, manual MM isn't actually that much additional work. By far, most code doesn't allocate or free (and that's a good thing). So MM->GC is hardly like Assembly->Compiler. In Assembly you're constantly allocating and pigeonholing, and you can't have nice names for things. Assembly->Compiler is a huge step compared to MM->GC, and GC can cause a lot of headaches as well. (disclaimer, I've done almost no assembly at all).
Depends on the code you write. If, like in C++, non-stack memory management is painful, programmers tend to react like you suggest.
In pure-by-default languages, you are creating new and destroying old objects all the time. (At least conceptually. A sufficiently smart compiler can eliminate most of that.)
Most GCs can deal with circular references just fine?
I realize that the kind of bug that leads to effective memory leaks with GCs has its own equivalent in manual memory management, but my overall point was that neither manual memory management nor GCs make you immune to leaks from badly designed or incorrectly implemented data structures. Each takes some aspect(s) of pain away.
It's all about proper planning and code organization. Use pooling, central manager structures, etc. If it can be avoided, then do not allocate and free stuff in a single function like you would carelessly do with automated GC. Structure the data such that you don't have to release stuff individually - put it in containers (such as vectors or maps), such that you can release everything at once at certain points in time, or such that you can quickly figure out what can be released at central code locations (that's much like automated GC, but it's staying in control and retaining room for optimization).
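As a sketch of the "release everything at once" idea, here is a minimal bump arena in C++ (hypothetical and illustrative only; a real one would handle growth and non-trivial destructors):

#include <cstddef>
#include <vector>

class Arena {
    std::vector<std::byte> buf_;
    std::size_t used_ = 0;
public:
    explicit Arena(std::size_t capacity) : buf_(capacity) {}

    void* allocate(std::size_t n, std::size_t align = alignof(std::max_align_t)) {
        std::size_t p = (used_ + align - 1) / align * align; // round up to alignment
        if (p + n > buf_.size()) return nullptr;             // out of space
        used_ = p + n;
        return buf_.data() + p;
    }

    void reset() { used_ = 0; } // one call releases everything at once
};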
I don't think "multiple distributed teams" makes the challenge any harder. You certainly want to (and I'm sure you easily can) contain each ownership management domain wholly in one team.
That doesn't work in enterprise projects with heavy doses of consulting.
For me the big attraction of GC is memory safety not convenience.
Thankfully, we have RAII as in C++ and Rust and ARC in Swift, which give you automatic memory management without a tracing GC.
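For example, a minimal C++ RAII sketch: the owning handle frees its resource deterministically at scope exit, with no tracing GC involved.

#include <memory>
#include <vector>

void demo() {
    auto data = std::make_unique<std::vector<int>>(1000); // heap allocation, owned by data
    data->push_back(42);
} // unique_ptr's destructor runs here and frees the vector, deterministically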
If your language requires a GC, it is a complete failure as a systems programming language.
That's not true. You can pool, you can call GC when needed, you can build incremental GC with bounded times, and so on.
Check Stack Overflow - there are plenty of links to papers and real-world examples of fixed-time garbage collectors. Or check Google Scholar and read papers.
Go has demonstrated a very efficient garbage collector.
Here is one from Oracle for Java.
>If your language requires a GC, it is a complete failure as a systems programming language.
Plenty of OSes are in development and/or researched using managed languages and GC. Singularity is but one example. I suspect in the future that doing memory management by hand will be as obsolete as writing an OS in assembly. The benefits for security, robustness, and productivity will outweigh the costs, just like the benefits of using higher-level languages to develop in, while slower than hand-tuned assembly, far outweigh the costs.
And apparently no one noticed that a part of Bing used Midori for a while.
That is alright; according to the Midori team, the Windows team also did not accept what Midori was capable of, even when proven wrong.
Having a GC is no different from malloc spending all its time doing context switches to reclaim more OS memory, using the actual OS memory management APIs.
It is up to the developers to decide whether to use GC-based allocation, the stack, a global memory segment, or plain untraced heap allocations.
The tools are there, naturally there is a learning process that many seem unwilling to do.
And by the way, Swift's reference counting implementation gets wiped out by tracing GCs in the ixy paper.
Realistically, having the option to use a GC is a boon for many applications. Not everything is hard realtime all the time. Some complex applications tend to have a hard realtime part and parts where it doesn't matter. E.g. a CNC machine controller does not need a guaranteed response time for the HMI or G code parser. But it needs to have a tight control loop for the tool movement.
D is a language where the GC is default, but optional. And the compiler can give you a guarantee that code that explicitly opts out does not interact with the GC and - importantly - can't trigger a GC run that way. However, as this was an afterthought, parts of the language need to be disabled when opting out, and not a lot of library functionality works with that.
GC is very useful for programs that don't have any form of real time - but games are real time and thus you need to be careful to ensure that the worst case of the garbage collector doesn't harm you. Reference counted garbage collection gives you this easier than the other means. Note that I said worst case - the average case of garbage collection is better in most garbage collected languages.
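A minimal sketch of that trade-off with std::shared_ptr (Texture is a hypothetical name): each object is freed exactly when its last owner lets go, at a point visible in the code, rather than at a collector-chosen pause.

#include <memory>

struct Texture { /* pixel data, etc. */ };

void frame() {
    auto t = std::make_shared<Texture>();
    auto alias = t; // refcount: 2
    alias.reset();  // refcount: 1 -- nothing freed yet
}                   // refcount: 0 -- Texture freed here, deterministically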
One of my first (after a while) forays into C++ was to analyze a big WFST graph to find various statistics. And I found that my program spent as much time freeing resources as doing actual work. Subjectively measured, of course.
I knew it could happen, so it was not a big surprise.
The rest of the C++ world has long moved into automatic memory management as best practices.
Yeah, I've heard that 25 years ago. It was, in fact, the big marketing bullet point on Java's first release.
Meanwhile here in 2019, with the death of Moore's Law, careful memory (and cache!) management is more important than before.
>A total of 76% of the CPU time is spent incrementing and decrementing reference counters.
When this standard library competition existed, the library ecosystem had this very weird split where half of the libraries you would have liked to use in your project depended on the other standard library, which you couldn't link against at the same time. This prevented me from picking up and trying D for a long time because I didn't want to deal with that. Now that this is over with and a ton of useful libraries exist, I'm glad that I started to use D because this is now a language in which I'm very productive.
Yes, and one of those fringe cases is when you're building a competitor for C++.
Knowing your target audience helps if you're trying to take over the world.
Having to deal with headers again and C++'s templates was torture.
Did C++ close the gap? Yes. However, I can still write D 2-3x faster than I can write C++. It's similar to how I'm 2-3x more productive in C++ than in C.
The problem with D in its current state is that its tooling isn't a match for OS SDKs + IDE + libraries, beyond the typical POSIX daemon scenarios, unless one is willing to put some effort into it.
So it is hard to catch up to C++, especially after all major compilers reach C++20 compliance.
The other problem with that approach is that C++ is everywhere.
I know that I can find a good compiler for C++ when I want to switch platforms. Will your new language support my new platform? Will your new language even exist? I've worked on a number of projects where the code was written in some language where the compiler vendor is out of business. This risk works against all new languages (some have overcome it, some have not).
With C++ I know if I need to hire more people I can hire experts to help out. If I choose your language do I have to pay my new employees to learn the language for the first few months? Learning my code (which is always hard no matter what the language) is already going to be a problem using something that nobody knows just makes it worse.
Will your language optimize well? C++ being everywhere means that compilers vendors have put a lot of effort into writing good optimizers. When performance matters C++ will often come in first because of this effort.
I, for one, find different languages with similar appearance needlessly confusing. I wonder what the experience with different syntaxes for the same language would be.
There was some talk of taking advantage of the transition to modules to mark translation units as implementing a specific version of the standard (I think Rust does something similar), to allow for backward-incompatible evolution of the language.
The committee doesn't seem too keen, because they fear fragmenting the language, and from a more practical point of view we will still be #including legacy code into new modules for at least a decade (and I'm probably wildly optimistic).
Ada was designed for programming safety-critical, military-grade embedded systems.
Rust was designed as a memory-safe, concurrency-safe programming language, largely to overcome the shortcomings of C++.
Each excels at what it was designed for, but the intended use cases are very different.
Rust is not (currently) being used for aircraft flight control systems--Ada is.
Ada is not (currently) being used for high-performance web browsers and servers--Rust is.
While there are SOME similar design goals in terms of memory safety, concurrency safety, and error prevention, Rust was not designed to compete with Ada.
On the other hand, we can, and I hope will, move to much more rigorous approaches, such as the use of Rust, for flight software implementations. As you say, Rust was not specifically designed to compete with Ada, but accomplishes a number of similar goals and ultimately strives for correctness-by-construction, as does Ada.
We will be better off in flight software using newer, safer languages employed by the software community writ large instead of trying to mandate niche languages.
Yeah, C++ has been working out great on the F-35.
>> On the other hand, we can, and I hope will, move to much more rigorous approaches, such as the use of Rust, for flight software implementations.
Competition is good and more choices for building avionics systems are welcome. I don't know of any DO-178C certified Rust implementations, but we need them.
>> We will be better off in flight software using newer, safer languages employed by the software community writ large instead of trying to mandate niche languages.
Part of the issue is that high-integrity, hard real-time embedded systems are their own niche in terms of requirements. Java and C# are widely-used programming languages with hundreds of millions of lines of code deployed in business-critical production environments and yet both are unsuitable for avionics environments. The more avionics niche-specific a programming language becomes the more likely it is to add complexity and features that those who program outside the niche will never use or care about.
The number of scary C and C++ architectures flying currently is quite troubling.
While DoD is coming to grips with the fact most aerospace primes take a 1990s approach to software development, other than mostly in research pockets, DoD is still not recognizing the impact of language choice. The late 90s push to embrace COTS threw a lot of baby out with the bathwater.
>> Competition is good and more choices for building avionics systems are welcome. I don't know of any DO-178C certified Rust implementations, but we need them.
One of the impediments to improvement actually is certification. Certification uses a lot of labor and paperwork-intensive proxies for code quality and configuration control that should be revisited in light of modern methods that can assure correctness-by-construction. I'm also not sure any major aerospace prime will generate demand pull for a certified Rust implementation without it being mandated in some fashion by a government regulator or customer (which I personally would not be opposed to).
>> Part of the issue is that high-integrity, hard real-time embedded systems are their own niche in terms of requirements. Java and C# are widely-used programming languages with hundreds of millions of lines of code deployed in business-critical production environments and yet both are unsuitable for avionics environments
Once running atop an RTOS of sufficient quality, what niche language features do you think would be required for avionics, given the widespread use of C and C++ there already? I can understand not wanting to run on garbage-collected runtimes like Java and C#, but once memory management has the determinism of something like Rust, what other functionality do you think is missing?
CppCon 2019: “Lifetime analysis for everyone”
It is available to play with on Godbolt.
"Ada developers either use a garbage collector, or they avoid freeing memory entirely and design the whole application as a finite state machine (both of which are possible in Rust, too, but the point is you don’t have to).
Of course, Ada has range-checked arithmetic, which Rust doesn’t have (it needs const generics first before that can be done in a library), so if you’re more worried about arithmetic errors than dangling pointers, then you might prefer Ada."
For me, not freeing memory sounds like a joke. It's the opposite of a zero-cost abstraction. Regarding GC, there are lots of great languages already (for example, modern Java).
Usually the libraries deal with low level issues and are doing the hard work of increasing safety.
Safety will always have a cost somewhere.
I dislike the C family of languages, a lot. I wish that Pascal or OCaml had "won". But being practical, we are stuck in this reality, so:
Is not "we". Is "them". I think only IF the core developers of that languages provide the "blessed" syntax it could actually catch up.
What I have wondered is why C/C++/JS don't provide a "clean up" forward policy.
I think all involved are smart enough to see what is wrong with those languages (with time, we always learn what sucks about what we build). Then they could say:
"This is $IDEAL-C we will targeting. This will fix this list of problems, and maybe this other list, BUT...
$IDEAL-C is a in-progress. Each change is iterative, and will deprecate in steps.
$BAD-C will be continued to be develop. $IDEAL-C transpile to $BAD-C. $IDEAL-C is another file extension. It will keep the same $IDEALS of $BAD-C.
Eventually, $IDEAL-C-STEP-1 will replace $BAD-C and become $BAD-C. And that until we reach $IDEAL-C!
I know this looks like what modern C/C++/JS is doing, but the trouble is that those are additive changes. That means triple work: keeping up with $new, still having the problems of $old, and maintaining $both at the same time. What is lacking is subtractive changes that REMOVE what is wrong.
The key is transpiling, and not changing the core tenets of the language (i.e., C stays a razor edge).
The big thing, probably, is to not make drastic paradigm changes (i.e., turning C into a functional language), but instead to clean up the language until it is what a good, idiomatic, modern developer of it would write anyway.
I think it is doable to make $IDEAL-C/C++/JS near identical for most developers, and from a distance not look different at all. Be progressive, go in steps, provide auto-transforming tools along the way, and I think the community will move along.
I have seen the idea partially applied with C#, so I think it is doable?
P.S.: Probably $IDEAL-C should only fix a very small list of things initially. For example, let's say "Remove dangling IFs from C. END".
That's it. This small scope is, I think, the key to making the experiment worthwhile.
That doesn’t mean it doesn’t matter though. The decidability of C++ grammar certainly matters to folks that are parsing C++ code.
They're slowly becoming available via clang now, which is nice.
Visual Studio has had Intellisense since forever, clang-format can enforce style standards, static analyzers these days are amazing, the address sanitizer and valgrind find memory problems easily, etc.
The most obvious example to me is in Eclipse, you can right-click on a field in a Java class and choose to Rename it. It will then correctly update that field's name across the entire codebase. AFAIK this is impossible in C & C++ because they are such complicated languages to parse. Macros alone make this feature effectively impossible.
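A hypothetical illustration of the macro problem: with token pasting, the identifier never appears literally in the generated function name, so a rename of the field cannot see it.

#define GETTER(name) int get_##name() const { return name; }

struct Counter {
    int count = 0;
    GETTER(count) // expands to: int get_count() const { return count; }
};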
As a daily user of Resharper in both C# and C++, I really notice how much poorer they work in C++. Renaming operations, as you mentioned, do work in C++ sometimes, but not others. Generally if it is a variable or parameter that's used locally I can rename it instantly with no problem. If it's a variable exposed in the class header, then it will tend to sit there churning for enough time before I decide that I should probably cancel the operation.
Likewise, simply using a "Find References" or "Find Usages" in C++ usually works, but at times it gives odd suggestions of things that are clearly not usages of the thing I'm searching, but something else with the same name that it just is not smart enough to understand is not a real usage. (possibly due to the difficulty of parsing templates or macros)
"Extract Method" is one of my favorite C# refactorings. Resharper C++ also has this operation but it is a bit of a gong show, and generates results that usually have to be tidied up considerably afterwards.
Try renaming buzz in this context; you really don't know how many other classes need the same rename. In Java and C# you know, because of generic constraints, and IDEs can leverage this information. Concepts in C++20 should hopefully solve this.
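The original snippet isn't shown here, but a minimal sketch of the situation (poll is a hypothetical name) could look like this: T is unconstrained, so any type with a buzz() member might be instantiated, and the IDE cannot tell which classes' buzz members belong to the rename.

template <typename T>
void poll(T& t) {
    t.buzz(); // which classes' buzz() must be renamed along with this one?
}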
Java overloading is simple: all overloads are in the same file (so overload behavior doesn't change depending on which files are imported) and Java lacks the user-defined conversions C++ has, so the compiler just picks the signature fitting all provided values (with numeric promotions) or reports an error.
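By contrast, a minimal C++ sketch of how user-defined conversions pull unrelated types into overload resolution (Meters and Feet are illustrative names):

#include <iostream>

struct Meters { double v; };
struct Feet {
    double v;
    Feet(Meters m) : v(m.v * 3.28084) {} // implicit user-defined conversion
};

void report(Feet f) { std::cout << f.v << " ft\n"; }

int main() {
    report(Meters{10.0}); // resolves via the implicit Meters -> Feet conversion
}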
Only if you do not have a concrete bound in the generic declaration; T extends Foo can result in a method definition that takes a Foo instead of an Object.
> Java overloading is simple since all functions are in the same file (so doesn't change overload behavior depending on which files are imported)
import static java.lang.Math.*; for programmers too lazy to write Math.sin instead of sin.
It is still not uncommon to see ICEs on some extreme template constructs.
 Internal Compiler Error, i.e. the compiler segfaulted or hit an internal assertion.
You could pick the winning entries and use them as a corpus for a fuzzer and you might find compiler crashes.
> In practice, compilers limit template instantiation depth, so this is more of a theoretical problem than a practical one.
Better to have this problem in the parser than the type checker, at least.
That particular syntactic ambiguity in C++ would have been trivial to fix (and could still be fixed today!), but no one really cares (and it would not be backwards compatible...).
Another example is the current situation with modules in C++. Instead of looking into the diverse ML implementations or even Java and trying to get the system right, the current discussion goes into wild compiler hacks just to avoid a simple limitation on filenames.
As opposed to a "professional" who is told by their boss to implement X before the end of the day?
Offering an alternative that's on par with the status quo, much less an objective improvement?
Not so much.
I don’t think you could study just “compilers”, say, at undergrad level. Maybe you mean postgrad or postdoc-level qualifications?
It would certainly be a pretty different world if you were only allowed to design a new programming language after getting your PhD in language design.
Which is neither here nor there. A reputable CS degree is an all-rounder; it's not expertise in PL design and research.
>I don’t think you could study just “compilers”, say, at undergrad level. Maybe you mean postgrad or postdoc-level qualifications?
For starters, yes, but it's not about official qualifications. Someone (e.g. Simon Peyton Jones) could be a PL expert without "official qualification" in the form of such a degree.
Even writing many increasingly successful languages could do it. Starting with your first (or first serious) attempt at a language, however, is not that...
Anders Hejlsberg is another famous example. He didn't complete his university degree (and it was in Engineering anyway), but after decades of successful work in the field he became a major PL designer and expert.
Stroustrup, however, was hardly anything like that at the time he first designed C++.
Pure CS theory tends to be a maths major.
Not at the time, when CS didn't even exist in many European universities, or was rudimentary at best.
IIRC the person who asked for it also regretted that, but I'm less sure on that.
Worse is, was and always will be better. It seems that's an unchanging law of software design. Practicality, getting things done and catering to user needs always beats purity, elegance and soundness.
Alternative phrasing: the market runs on greedy optimization, which means a lot of value that could be gained is simply unreachable.
There is still value in elegant and sound solutions though. Even if inevitably unsuccessful, they will still be influential on the next round of practical hacks.
The problem was the political wars of tons of companies that don't want to let go of their in-house build systems based on translation units just to make use of modules.
So the end result is a compromise to make some of those big names happy.
Which also brings their macros into scope.
Doesn't this require "typename" at the beginning of the line?