There's nothing exponential going on here. Note that in the expression 6^3, the number that's varying is the 6, not the 3. If it were the other way around, that would be exponential.
I realize "exponential" has come into common use meaning just "growing very fast", and it's sometimes pedantic to insist on a distinction between polynomial growth and exponential growth, but in a discussion of algorithmic time complexity, we should get it right!
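To make the distinction concrete, here's a small sketch (in Java, purely for illustration): in n^3 the base varies, so growth is polynomial; in 3^n the exponent varies, so growth is exponential.

```java
public class GrowthDemo {
    public static void main(String[] args) {
        long expo = 1;                     // 3^0
        for (int n = 1; n <= 10; n++) {
            long poly = (long) n * n * n;  // n^3: the base varies -> polynomial
            expo *= 3;                     // 3^n: the exponent varies -> exponential
            System.out.println(n + "\t" + poly + "\t" + expo);
        }
    }
}
```

By n = 10 the polynomial has reached 1000 while the exponential is already at 59049, and the gap only widens from there.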
(I love a good pun!!)
> Determining whether a given predicate has a set of valuations for its variables which make it true is an NP-complete problem; actually finding the set of valuations is an NP-hard problem. Therefore, overload resolution in C# 3.0 is at the very least NP-hard. There is no known polynomial-time algorithm for any NP-complete or NP-hard problem and it is widely believed that there is none to be found, though that conjecture has yet to be proven. So our dream of fast lambda type analysis in C# 3.0 is almost certainly doomed, at least as long as we have the current rules for overload resolution.
See Lambda Expressions vs. Anonymous Methods, Part Five by Eric Lippert.
As someone who used to spend a lot of time in C++ and LTCG, I got a good laugh out of this (aside from the crashing part; an ICE is always a bad thing). Our release link was ~40 minutes on each change.
Fun "bug" though.
But either way, I'm glad you enjoyed it.
> As someone who used to spend a lot of time in C++ and LTCG, I got a good laugh out of this (aside from the crashing part; an ICE is always a bad thing). Our release link was ~40 minutes on each change.
A long time ago I did some C++, so I know it can take longer, although I don't think I ever worked on something with 40 min build times!
The 40-minute link time comes from Link Time Code Generation, which can reduce your binary size pretty significantly (along with some code speed-ups); without LTCG, linking the project took about 1 minute.
Usually LTCG is a tool of last resort since the build times are so painful (and at the time it was the only step that couldn't be parallelized). We were shipping on X360 with a hard memory limit and couldn't have a binary larger than 20 MB if we wanted to hit it. If I recall correctly, LTCG gave us back about 5-10 MB, which was a godsend in those times.
In addition to binary size (which I've never worried about too much, as I mostly write code for scientific computing), it has often improved runtime significantly. The only other specific optimization besides -flto that I have had very real speedups from (not counting -O or -Ofast) has been -funroll-loops. I use it on nearly every project. (And, to be honest, I haven't seen a big slowdown in compile time over just -O3, though I think a lot of my costs there are from templates.)
So doing a full release build across all UNIX and Windows variants supported by our product would take almost a full working day.
I did a quick search in AppDomain.cpp for 'generic' and there are quite a few comments/variables that mention it. Maybe this is one of the tricky interactions?
So without asking Don Syme himself we can only guess.
> I'm not sure if fullblown .Net CLR implementation was released yet
No, and AFAIK they don't plan to. There's 'Reference Source', but that's the base class libraries only, not the CLR itself.
Here's an OCaml example: https://cs.stackexchange.com/questions/6617/concise-example-...
And a crazy ML example: https://gist.github.com/spacemanaki/72ed52766e0c7e0b85ef
When you want to show that something grows exponentially, plot it with a log scale on the y axis. That way, an exponential function becomes a line, and humans are much better at judging whether something is a line than judging whether something is an exponential function.
In fact, whenever you want to honestly show that something follows a certain law, scale your axes so that you get a line in the best case.
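A quick numeric sanity check of that point (a sketch in Java): on a log-scale y axis, an exponential sequence y = 2^n becomes a straight line, because the gap between consecutive log(y) values is the constant log 2.

```java
public class LogScaleCheck {
    public static void main(String[] args) {
        // For y = 2^n, log(y) = n * log 2, so on a log scale each
        // step between consecutive points has the same size: log 2.
        double prev = Math.log(2.0);            // log(2^1)
        for (int n = 2; n <= 10; n++) {
            double cur = n * Math.log(2.0);     // log(2^n)
            System.out.printf("n=%d step=%.6f%n", n, cur - prev);
            prev = cur;
        }
    }
}
```

Every printed step is the same (about 0.693147), which is exactly the straight line a reader can judge at a glance.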
Of course the article just had to include this bit:
> If we look at these results in graphical form, it’s very obvious what’s going on
When it really is not obvious at all what is going on.
Yeah, I suck at maths; according to this comment https://lobste.rs/s/j4545s/dos_attack_against_c_compiler#c_n..., it's actually 'double exponential'.
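'Double exponential' means something like 2^(2^n), which makes ordinary exponentials look tame: even the *number of digits* grows exponentially with n. A small sketch (illustrative only, not the compiler's actual cost function):

```java
import java.math.BigInteger;

public class DoubleExponential {
    public static void main(String[] args) {
        // 2^(2^n): the digit count itself roughly doubles per step.
        for (int n = 0; n <= 5; n++) {
            BigInteger v = BigInteger.valueOf(2).pow(1 << n); // 2^(2^n)
            System.out.println("n=" + n + " digits=" + v.toString().length());
        }
    }
}
```

At n = 5 the value 2^32 already has 10 digits; a few more steps and it no longer fits in any machine word, which is the flavor of blow-up being described.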
> When it really is not obvious at all what is going on.
Yeah, I guess I actually meant to say 'it's very obvious that it's increasing quickly', but as you say, it doesn't read like that!!
As humans we identify patterns far too readily, and as developers we try to reduce duplication far too prematurely.
class Blah<T> where T : Blah<T>
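For comparison, the Java analogue of that C# declaration is an "F-bounded" (curiously recurring) generic, where the type parameter is constrained to be a subtype of the class itself. The class names here are just illustrative:

```java
// F-bounded type parameter: T must be a subtype of Blah<T>, which
// lets methods in the base class refer to the precise subtype.
abstract class Blah<T extends Blah<T>> {
    abstract T self();
}

class Concrete extends Blah<Concrete> {
    @Override
    Concrete self() { return this; }   // returns Concrete, not Blah
}
```

The self-reference in the bound is exactly the kind of recursive structure that lets generic type names feed back into themselves and grow.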
Error:(4, 9) java: UTF8 representation for string "LMain$Clazz<LMain$Cl..." is too long for the constant pool
Four 'Inner's work; five gives an error:
The type generates a string that requires more than 65535 bytes to encode in Utf8 format in the constant pool
This is using Eclipse. So it is the Eclipse compiler which is integrated with the editor. I haven't tried Oracle's Java compiler.
Because the problem is to do with the size of a class's constant pool, the limit of four 'Inner's should be universal across Java compilers, IDEs, and operating systems. (Assuming the class-name mangling algorithm is part of the class file specification.)
Here the generic signature, which is a String, is just too big: the dictionary part of the classfile, the constant pool, stores a String as a UTF8 byte array prefixed by its length in 16 bits (hence the 65535 limit).
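To see how quickly such a signature can hit that 16-bit length prefix, here's a toy model. The doubling rule below is an illustrative assumption, not javac's actual name mangling (which evidently expands even faster, since five nested 'Inner's already overflow): each nesting level simply embeds the previous signature twice.

```java
public class SignatureGrowth {
    public static void main(String[] args) {
        // Toy model: every nesting level embeds the previous signature
        // twice, so the encoded name at least doubles per level and
        // soon overruns the constant pool's 16-bit length prefix.
        final int LIMIT = 65535;          // max bytes in a CONSTANT_Utf8 entry
        String sig = "LMain$Inner;";      // hypothetical seed signature
        int level = 0;
        while (sig.length() <= LIMIT) {
            sig = "LMain$Inner<" + sig + sig + ">;";
            level++;
        }
        System.out.println("limit exceeded at nesting level " + level);
    }
}
```

Even with mere doubling, a 12-character seed blows past 65535 bytes within a dozen levels; a steeper real-world expansion gets there in a handful.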