I'll give you rigorous, but how is it proven? I think it is a fair guess that the number of non-tiny programs (>100KLOC) making use of the "expert" concepts and skills is ~0. The number of large programs (>1MLOC) at the proficient level is also ~0.
What proportion of the entire global population of functional programmers who qualify as proficient or better on this scale is employed by Standard Chartered?
(This is intended as a serious question, not a troll. It often seems that in discussions of FP, and particularly of Haskell, someone will suggest that there are few widely-known, large-scale projects written in this style that can be used to evaluate its effectiveness, and someone else will then reply with one or more of the same very small collection of larger projects or high profile organisations using FP/Haskell that are publicly known.)
What I heard is that all of their Haskell software combined is about ~1MLOC (and they don't comprise a single system), and a huge chunk of that (a couple 100KLOCs at least) is the Haskell compiler itself, which they've modified and consider a part of their codebase. The Haskell compiler is still the largest Haskell program, and if it isn't, there are no more than a couple programs that are larger.
> a huge chunk of that (a couple 100KLOCs at least) is the Haskell compiler itself, which they've modified and consider a part of their codebase
Please don't talk about what you don't know. Standard Chartered's Haskell compiler is written completely from scratch and is not based on any existing compiler.
Sorry, that's what I'd heard (or I may have misinterpreted "a variant of Haskell" as "a variant of the Haskell compiler" all on my own); thank you for correcting me. So what you're saying is that the biggest Haskell program isn't the Haskell compiler, but that the two biggest[1] Haskell programs are two completely different Haskell compilers.
(Also, while it has little to do with my point, I've also heard that the person behind SC's Haskell compiler is the one who'd written the first ever Haskell compiler, long before he started working for SC; is that true?)
[1]: Please don't take this too literally. In the two decades that have passed since Haskell was declared the language to end world hunger, someone may have written a bigger program. Maybe even two.
So in that case, what's important about your 1MLOC requirement? I can't even begin to imagine that much Haskell code versus a similar line count in say C.
Modern programming languages do not vary too much in line count. If you want to compare to C, you may be able to get a one-order-of-magnitude reduction in size if you're lucky, but 10MLOC C programs are common (the Linux kernel is >15MLOC; MS Office is 30MLOC; LibreOffice is 12.5MLOC), as are 1MLOC programs in more modern programming languages.
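To give a rough feel for where that order-of-magnitude factor comes from (this is only an illustrative sketch, not a benchmark): a task like word-frequency counting, which in C typically needs a hand-rolled hash table and manual memory management spanning dozens of lines, composes out of standard library functions in a few lines of Haskell:

```haskell
import Data.List (group, sort, sortBy)
import Data.Ord (Down (..), comparing)

-- Count word frequencies, most common first.
-- The equivalent C needs manual hashing, allocation, and cleanup.
wordFreq :: String -> [(String, Int)]
wordFreq =
  sortBy (comparing (Down . snd))      -- order by descending count
    . map (\ws -> (head ws, length ws)) -- each run becomes (word, count)
    . group . sort                      -- collect equal words into runs
    . words                             -- split input on whitespace

main :: IO ()
main = print (wordFreq "to be or not to be")
```

The compression isn't free, of course; the point upthread stands that even at a 10x reduction, a system comparable to a 10MLOC C codebase would still be on the order of 1MLOC of Haskell.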