Also, Wirth's book Compilerbau (in German; it was later translated into English as Compiler Construction) is a piece of pristine clarity: at just ~100 pages in pocket paperback form, it leaves everyone who reads it feeling that writing a compiler or byte-code interpreter is simple and that they could do it themselves.
And since an equivalent module system is available in Ada, it is in fact rivalled today.
More to the point, Ada allows for multiple interface packages, an idea also copied by Modula-3, where the same package can be exposed in different ways to multiple clients.
For example, the same package can have a public interface for in-house consumption that is wider than the official public interface offered to third parties.
To me, it was hard to write code in Ada. Lots of niceties from other languages were unavailable in Ada, by design. For example, there were no variable argument lists.
It grew on me though, and several years later I worked on a commercial project that used Ada. I was surprised, because I had expected adopting Ada to be like adopting the tax code.
Then I realized one thing: although Ada is harder to write, it is nice to come into an existing Ada project. And people who have done Ada for a while learn to think in Ada, and being expressive stops being so hard.
It's also possible to be quite precise in Ada. You know exactly what the largest or smallest integer is. Moreover, you can define integer types with a specific range, like -11 to 219.
Nowadays all of that has matured, and I think Ada is a viable commercial language; interesting things like SPARK have happened as well.
Too bad in the intervening years other languages haven't changed much.
For example, C could have added modules. I guess nobody cares about C.
Yeah, just check the list of features for C2X; WG14 isn't that keen on innovating much, nor on fixing C's flaws.
Ada does things slightly differently. It manages to separate the various parts of OOP into different language constructs, and this makes it possible to pick and choose what you need, rather than getting everything including the kitchen sink when you try to use one thing (like inheritance).
So no, I never had any big problem with inheritance in any language, and as far as Ada is concerned, its tag-based dispatch is also quite an interesting idea.
Your generated code is free:
So yeah, you can go that route, and I have done so: poor man's modules. It helped keep me sane with C, but it requires discipline.
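As a sketch of what "poor man's modules" in C can look like (all names here are illustrative, not from any particular project): the header exposes only an opaque type and a handful of functions, and everything else in the translation unit is `static`. Both files are shown in one listing for brevity.

```c
/* counter.h -- the "interface": this is all clients ever see. */
#ifndef COUNTER_H
#define COUNTER_H

typedef struct Counter Counter;    /* opaque: layout hidden from clients */

Counter *counter_new(void);
void     counter_bump(Counter *c);
int      counter_value(const Counter *c);
void     counter_free(Counter *c);

#endif

/* counter.c -- the "implementation", normally a separate file. */
#include <stdlib.h>

struct Counter { int n; };         /* layout is private to this unit */

/* 'static' makes helpers module-private, like an unexported procedure. */
static int non_negative(int n) { return n < 0 ? 0 : n; }

Counter *counter_new(void) {
    Counter *c = malloc(sizeof *c);
    if (c) c->n = 0;
    return c;
}

void counter_bump(Counter *c)        { c->n = non_negative(c->n + 1); }
int  counter_value(const Counter *c) { return c->n; }
void counter_free(Counter *c)        { free(c); }
```

The discipline part is that nothing stops a client from redeclaring the struct layout or poking at internals; the compiler only enforces what the header happens to reveal.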
That pretty much defines software engineering.
For as long as I have been coding, I’ve watched people and corporations chase the will-o’-the-wisp of the “undisciplined coder”: the idea that we can hire naive, barely-educated children straight out of school and mold them into our personal “coding automatons,” or, even better, let people who are experts in other domains create software without having to deal with “annoying” engineers.
So...how’s that working out?
Even when we have AI creating software (and we will), the AI will still need disciplined requirements, which, I suspect, will look a lot like...code.
The outcome of Azure Sphere having chosen C as its main SDK language is playing out exactly as you'd expect, without surprises.
Not to mention that no matter how disciplined you are, you will make mistakes, and having the compiler catch those for you is valuable.
It also means that the discipline applied by the programmer can be focused on areas that can't be checked or enforced by a compiler.
Fantastically, if the goal is to set up recurring revenue from maintaining the produced systems.
> Even when we have AI creating software (and we will), the AI will still need disciplined requirements, which, I suspect, will look a lot like...code.
https://github.com/webyrd/quines is an interesting example of writing code to create code based on a specification. Perhaps not the AI code generator of some people's dreams, but it exists today.
For example, when I was writing ObjC and PHP, I got used to using Whitesmiths indenting. Once I started writing Swift, it was more appropriate to use KNF style.
It took a couple of months of having to remember to not use the old indenting style, but I haven’t given indenting any thought in years.
”We are what we repeatedly do. Excellence, then, is not an act, but a habit.” -Attributed to Aristotle
So well that it was a large part of the reason I accepted an offer (today, in fact) somewhere else; life is too short for that mess.
You could even have this type safety on the linker level as far as C is concerned. You just need an object file format that exports C types for symbols. This is not done on any of the (few) systems I know, and probably for practical reasons.
Some other languages give you this link time safety, but I assume at the cost of less interoperable object files.
Isn’t this similar to package private in Java or internal in C#?
So client A sees mylib-public-A, client B sees mylib-public-B, but both link to the same mylib so to speak.
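In C terms (an illustrative sketch with made-up names, not from any real project), the closest equivalent is shipping two headers for the same object file: third parties get the narrow one, in-house clients the wide one.

```c
/* mylib-public-B.h -- the narrow interface shipped to third parties. */
int mylib_add(int a, int b);

/* mylib-public-A.h -- the wider in-house interface over the same library. */
int mylib_add(int a, int b);        /* re-declaration is legal in C */
int mylib_scale(int a, int factor); /* extra entry point, in-house only */

/* mylib.c -- one implementation behind both headers. */
int mylib_add(int a, int b)        { return a + b; }
int mylib_scale(int a, int factor) { return a * factor; }
```

Unlike Ada or Modula-3, though, nothing enforces the split: anyone who obtains the wider header (or writes the declarations by hand) can link against everything.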
Given that Oberon is a simpler language than pretty much all of its predecessors, and that the latest revision went even further, I'd be interested in what Wirth thinks about contemporary strongly typed systems languages like Rust or Go (the latter being quite, erm, influenced by Oberon). Or heck, Eiffel, it being the language of his successor at ETH Zürich.
IIRC he didn't have a high opinion of functional programming.
"To postulate a state-less model of computation on top of a machinery whose most eminent characteristic is state, seems to be an odd idea, to say the least. The gap between model and machinery is wide, and therefore costly to bridge."
In the next paragraph, Wirth further indicates that he has chosen to argue against a caricature of functional programming when he suggests that "[Functional programming] probably also considers the nesting of procedures as undesirable." That's another strange thing to insinuate against a programming style that is noted for its use of closures.
(For that matter, where would closures be without state?)
The "less is more" crowd adheres to avoiding and reducing feature bloat and writes lower level, often efficient, very consistent code that is easy to grok.
And the "correctness by concept" crowd, with many variants thereof. Expressive type systems, functional programming, abstraction and general "higher order-ness" are dominating themes here.
Languages and paradigms often land on the spectrum of these two. I wonder if these concepts can be married in some way and what we would have to give up to do so.
The CTO's official reason for the "less is more" philosophy was not that he thought more powerful language features were useless; it was that sticking to less powerful features discouraged the growth of individual modules into large, complicated tangles, by making such growth actively painful.
My one, somewhat guarded, criticism of that approach is that I think it may have depended critically on the company being in a position to maintain some very selective hiring practices. When you limit yourself to only hiring people who can really appreciate Chuck Moore's style, well, you've limited your hiring quite a bit. I could be convinced that the "correctness by concept" approach is less fragile and dependent on having a rigid corporate monoculture in order to work out properly.
Let's be honest: there are two completely opposite meanings of "functional programming" that we're stuck having to put up with, and most of what passes for "functional programming" straight up isn't. Somehow the people making heavy use of closures manage to pass themselves off as doing FP, even though that style is decidedly unfunctional. I wish there were wide recognition of the distinction between this pseudo-functional (PF) style and actual FP, and that we'd call it out as appropriate.
Frustratingly, the inhabitants of this bizarro world who program in the PF style still tend to wear the same undeserved smug expressions as the FP folks do with regard to OO (noses lifted about OO being unclean), even though the PF folks' closure-heavy style is no better: PF is equivalent to OO, except that it is less syntaxy, which only makes the trickery employed in PF programs harder to spot. This is fairly annoying.
I don't know what definition of functional programming you are using[f], but you don't have to be arrogant about it. Your comment is fairly annoying.
[f] Let me guess, only immutable variables and pure functions? How PF.
No, you're conflating higher-order functions with closures. Higher-order functions that use closures make use of closures. Higher-order functions that don't use closures do not.
> I don't think there is even a purpose to higher-order functions without closures.
I'm not sure how anyone can say this with a straight face, let alone someone who considers themselves to be in a position to challenge somebody about whether or not they grok functional programming.
Well, if you are actually doing the real hardcore FP™, and not just the lame pretentious PF, then yes higher-order functions will indeed very much make heavy use of closures. Did you miss the part where I gave haskell as an example? And note that I didn't say that closures and higher-order functions are the same thing.
> Higher-order functions that use closures make use of closures. Higher-order functions that don't use closures do not
How are these tautologies even an argument? They don't say anything meaningful; it's like saying wet water is wet. Don't worry, I'm not even trying to keep a straight face while reading everything you've said so far.
Heh, I did qualify my statement with "I don't think", since I didn't really give that one much thought.
But okay, I admit that statement was dumb and invalid, since the usual map, filter, and reduce functions are good examples of higher-order functions that are not closures. But more often than not, you really do need closures to do anything beyond simple cases like map(array, x => x*x).
My overall point still holds. I'm still in a very good position to challenge your dogmatic belief that heavy usage of closures is pseudofunctional and unfunctional.
They're not, and that was exactly the point of my comment: it's a circular argument that you have to take responsibility for, not me. You seem to have missed that—it's your nonsense claims that are in focus when the tautology is being spelled out.
Higher-order functions and closures are different things.
> Well, if you are actually doing the real hardcore FP™, and not just the lame pretentious PF
I wouldn't call the pseudo-functional style "hardcore"—any more than OO is hardcore, given that they're equivalent. It's frequently portrayed as the naive/easy way out. Actual FP, on the other hand, is hardcore. (And pretentious—which is an odd attempt to try to stir me up; do you think I'm an advocate of FP or something? I suggest re-reading.)
> But more often than not, you really do need to use closures to do anything
Yes, which is why I'm not an FP advocate.
I was very clear in my original comment. The pseudo-functional style is a preference for how to write programs, and therefore immediately defensible as valid. What's not defensible, though, is equivocating on the meaning of "function" while simultaneously trying to lump the pseudo-functional style in with FP. The moment one starts making heavy use of closures and carrying around state is the moment one forfeits the right to be smug about how unclean OO is, given the equivalence of objects and closures and given that one is no longer actually practicing FP.
> I'm still in a very good position to challenge your dogmatic beliefs that: heavy-usage of closures is pseudofunctional and unfunctional
No, you're not. It's unfunctional by definition.
> It's unfunctional by definition.
There you go, more self-fulfilling tautologies. And for some magical reason, it's me who is making nonsense claims? How is "higher-order functions make heavy use of closures" a nonsense claim?
I have provided a very clear and direct counter-example that falsifies your core argument. On the other hand, you have provided zero actual rebuttals. In case it isn't clear, calling mine "nonsense, circular and tautological" and yours "by definition" doesn't count as an argument.
> The pseudo-functional style is a preference for how to write programs, and therefore immediately defensible as valid
Is the word style even relevant here? You can call it style, paradigm, or computational model, it doesn't change your point.
> What's not defensible, though, is equivocating on the meaning of "function" while simultaneously trying to lump the pseudo-functional style in with FP.
Ugh, I'm guessing your definition of "function" is a special amorphous one that changes meaning to conveniently support your claims.
> The moment one starts making heavy use of closures and carrying around state is the moment one forfeits the right to be smug about how unclean OO is, given the equivalence of objects and closures and given that one is no longer actually practicing FP.
No, repeating your statements doesn't make them true. Once again, see my original counter-example with Haskell. If you insist on ignoring it, then fine with me. I'm done here.
If I've moved the goalposts, you should be able to show where it happened. So do—point to it or fuck off.
As for the rest of your comment and being "done", that's fine. There's zero chance that I'm going to waste my time on a point-by-point rebuttal for anyone who's acting in this much bad faith, ignoring the points I've already made, and trying to pawn off the flaws in your arguments as mine.
Which really shouldn't be a surprise. After all, you can't curry if you can't close.
For instance, functional programmers would almost all tell you that `map (x => ...) xs` is "better" than `for i from 0..len(xs): xs[i] = ...`. But the former, implemented trivially, is very slow: there's the allocation of the closure, the allocation of the new list, a function call on each iteration, and no tail-call elimination in `map`'s implementation (it's a trivial implementation, remember?)
Of course, the functional programmer would tell you, "Well, it's easy to optimize that, the performance issues are just because your implementation is too trivial", and Wirth would rejoin, "Too trivial? What's that?"
If you're interested in maximum constness (which I tend to be, because I find it's almost always easier to read code where values don't unexpectedly change in a branch somewhere) then you'd be comparing
let ys = map f xs

with

let ys = array_of_length(len(xs))
for i from 0 .. len(xs):
    ys[i] = f(xs[i])
Sure, it's using "primitives further from the physical machine" but that is exactly what programming is about! You create a new layer of primitives on top of the old ones, where the new layer makes it slightly easier to express the solution to the problem you're solving. You do this incrementally.
When someone has built a more easily handled set of primitives for you, it would be silly not to use them, all else equal.
In other words: the only real reason to mutate values is to improve performance at the cost of readability, and at the cost of losing the ability to safely share that data across concurrent threads.
If, indeed, that is a cost you're willing to pay for the additional performance, no functional programmer I know would shy away from the imperative mutating loop.
Wirth is not talking about clarity in the sense of "can I look at the code and understand the high-level intent of the programmer"; Wirth is interested in clarity in the sense of "can I look at the code and understand exactly what it's doing, at every level?"
For Wirth, programming is not about using an endless stack of primitives that gets you further and further from the physical machine, so far that it starts to obfuscate what's happening at the lower layers. It's about building the smallest, simplest stack of primitives such that you can express yourself effectively while still understanding the entirety of the system. The Oberon system includes everything from the HDL for the silicon all the way up to the OS and compiler in around 10,000 lines of code, because you're supposed to be able to keep all of it in your head.
I'm not saying that any of this is correct, per se, nor am I arguing for it - I'm sympathetic to it in some ways and disagree with it in others (I am, in fact, very much into FP). I'm just trying to give a charitable and clear interpretation of his perspective. FP may not want to get rid of state in one sense, as you've pointed out; but it wants to get rid of state in another, and Wirth doesn't like that because it necessitates complexity - and Wirth hates that.
Now that is an interesting perspective I hadn't even considered. I'm also not sure I would agree, but supposing I wanted to learn more and found the OP unconvincing, where would I go to find out more?
(From the book The School of Niklaus Wirth: The Art of Simplicity.)
Each revision of Oberon-07 drops features; it has been reduced to something like C with a GC, with a single form of loop construct.
I think I remember him saying that if one would want to design a language, starting with Oberon would be his recommendation. In that regard Go at least does something right.
And it does at least have a specification, too, which is another item that Wirth is pretty adamant about.
I'd pay good money to have him and Meyer argue about design, syntax and semantics.
Each Oberon-07 revision, as mentioned, drops language features.
Also note that, as far as I know, he wasn't too keen on the offspring of Oberon, namely Active Oberon, Oberon.NET, Component Pascal, and Zonnon.
Oberon-2 was his last collaborative work in the context of Oberon language family.
And while for me Active Oberon is the best of them for systems programming (it's still in use in ETHZ OS classes), with support for several low-level features for which original Oberon requires assembly, I doubt Wirth would appreciate it, given that it is Modula-3-like in size and features.
If anyone is interested in using the language outside of the Oberon operating system, here is a freestanding compiler:
Sorry about that.
I'm not familiar with Modula-2's module system. What does it provide that the module system of OCaml does not?
> .def modules that specify the interface and that can be compiled separately from their .mod implementations, which may not even exist when a client application can already be coded against the compiled interface in a type-safe way.
I believe .mli files can be compiled separately from the matching .ml files, and client modules can be compiled against an .mli that does not have a corresponding .ml file.
And OCaml also supports functors, so modules can be parameterized.
Sorry, not arguing that Modula-2's module system is not good, guess I'm just not convinced that it's unrivalled today. And for all I know ML's module system was probably influenced by Modula-2.
The module concept of Oberon is also very good (leaner than Modula's). There are also other languages with good module concepts, e.g. Ada, or the CLR based languages.
> Wirth's book Compilerbau ... is a piece of pristine clarity
For a certain type of compiler (rarely used today).
What then, in the software world, is a "law of nature", and how might we discover such laws by examining evidence, such as the evidence deposited over the last 25 years of software proliferation?
Might we examine source code, or executable code?
Or number of users?
Or revenue from sales?
Or aesthetic qualities of the source code, such as structure, legibility, or maintainability, or portability; or qualities of the executable code, such as size, performance, or intuitiveness of the UI?
It doesn't have to be your day-to-day desktop, but for learning, research and experimenting I think solutions with that goal would still be worthwhile.
IIRC it bootstraps from a prescheme-like language into a Smalltalk-like dynamic language ( https://www.piumarta.com/software/maru https://www.piumarta.com/software/cola ), and uses PEG parsers ( https://en.wikipedia.org/wiki/OMeta ) to implement domain-specific languages, e.g. the Nile graphics language ( https://github.com/damelang/nile )
Wait, what does that even mean? I assume it cannot, then, run on hardware executing any form of microcode because that adds many man-years of what's essentially software complexity to the problem.
What I'm trying to say is that at some point you have to draw a line and say "anything below this line is considered the platform the software runs on, and does not need to be included in the understanding" and where you draw this line is completely arbitrary.
One might argue that x86 is a platform that doesn't have to be understood, as long as one understands its interface. Someone else can argue that the JVM is a platform whose internals need not be understood. Yet other people popularly picture the web browser as their platform. In the extreme case in the other direction, electrical circuits (with a dash of quantum mechanics, I think?) could be considered the platform of the x86.
I have yet to hear a rational argument for why a particular thing counts as "the platform" more than any other.
Have you looked at the hardware design at http://www.projectoberon.com ?
> What I'm trying to say is that at some point you have to draw a line and say "anything below this line is considered the platform the software runs on, and does not need to be included in the understanding" and where you draw this line is completely arbitrary.
What I am trying to say is that Project Oberon proves it's possible to run it on a hardware design that can be understood by that same human being in one lifetime.
> I have yet to hear a rational argument for why a particular thing counts as "the platform" more than any other.
Again, I am talking about the complete thing.
And how is this even a discussion? The amount of added complexity when you compare an Oberon system with a modern phone or desktop is staggering. Especially if you go as deep as the microcode level.
Something like https://www.amazon.com/Elements-Computing-Systems-Building-P... is not hard, but it's arguably a toy system.
IMO Project Oberon starts from the same principles and builds up to a productive environment including text processing, a compiler, very basic hypertext, basic networking, and the hardware to run it on.
Like many such products, it will work so long as you're able-bodied and only need the same human language as the author, preferably one without complex requirements that fits in 7-bit ASCII.
I no longer have the source, but a great example someone once gave was that Oberon had pretty minimal support for anything, to the point that sharing information by email was hard: the "how do I share my paper" problem.
Users can still choose to use vi, or emacs, or textpad, or whatever stripped-down, fast, minimalist software they prefer; the old stuff hasn't gone away. Meanwhile, my IDE re-compiles/re-parses my program as I type, and tells me immediately if there are errors. I choose to use the bigger, "slower" program because it makes me dramatically more productive. Sure, input latency might not be as good as a barebones editor, but that cost more than pays for itself (as always, for me -- YMMV).
I can appreciate the aesthetic desire for a lean, minimal program, but to ignore the very real productivity benefits that come from tolerating larger programs is, I think, myopic.
This comes up around here a lot in the context of slow web pages; a very important thing to note that I think is often overlooked is that it's not just the user's productivity that matters. If developers have to spend 2x the time optimizing their code to fit in a limited memory allocation, then that's 2x the cost to the consumer for the same amount of features (or 1/2 the features for the same price). If speed/performance is a feature that your customers want to pay for, they will let you know. If it's not, then your competitor will eat your lunch by building more features that users actually care about, while you optimize your existing feature-set.
Programs are bounded by "feels slow" on one side, and "expensive to optimize further" on the other. If 99.9% of your users (i.e. the lay users, excluding the experts in the HN crowd) don't perceive your program/page to be slow, why would you optimize further? Sure, if you're Google, a few ms faster can equate to millions of dollars of revenue, but that's not the fitness landscape that most software evolves in.
This doesn't really disagree with the argument from the piece. You could summarize this line of thinking this way: The pressures at work in the software market drive us to create slower and slower software. It seems like everyone actually agrees on that point.
The only real disagreement seems to be how to respond to the fact. It's either, "Well shucks, that's just the way it is. At least I have auto-complete" or a feeling that, despite the pressures to just bloat and expand and slow down forever, it would be nice if we tried to combat it.
My counterargument would be that if the new way of building software proposed in the OP were actually a significant improvement, someone would have built a new IDE with it and eaten JetBrains' lunch. Instead, we got Atom, prioritizing pluggability/extensibility/hackability over footprint/latency.
To be clear, I'm not arguing for a fatalistic position that "there's nothing we can do about it". We can, and do, optimize performance when required.
I'm just arguing for a more nuanced appreciation of the cost/benefit calculation that's at play here.
Either there's an explicit cost/benefit being done (e.g. a PM weighs the "go faster" story vs. the "add new widget" story) or an implicit one (app #1 prioritizes speed, app #2 prioritizes features, and customers vote with their dollars; market share reveals which is more valuable).
To offer an alternative perspective on this bit,
> If 99.9% of your users (i.e. the lay users, excluding the experts in the HN crowd) don't perceive your program/page to be slow, why would you optimize further?
For me personally as a practitioner of software engineering; because I can, and it’s meaningful to me to optimize in the broader context of designing and implementing a system, in part to see what’s possible in addition to the personal enjoyment of going through the process.
That being said, for a company focusing on optimizing value to customers as a function of engineering resource allocation, I agree that it doesn’t make sense to optimize. You summed it up nicely with “...but that’s not the fitness landscape that most software evolves in.”
Note that the mention of Vim by the original commenter is a red herring arising out of lack of familiarity with Wirth. In Wirth's eyes, even Vim would appear monstrous.
To appreciate Wirth's point requires realizing that his frame of reference is the Oberon (eco)system, which includes an OS, a mouse-driven graphical shell, a compiler, and the underlying CPU in an HDL all in a few tens of thousands of lines of code.
And how do you know if they perceive it to be slow? I suspect that if you did optimize it to be faster (e.g. lower latency interactions), they would notice the improvement, even if they weren't sure why it "felt" better.
> If speed/performance is a feature that your customers want to pay for, they will let you know.
I don't buy that. Customers often don't even know it's a possibility that their software could be faster.
In my experience, often lay users don't think of individual programs as being fast or slow. They either assume the "internet" is being slow, or their computer is slow. However, I think they do notice when performance is better. They say it "works better" or "feels more reliable", but they don't necessarily know why.
Ask them, or listen to what they are saying without you asking. (I.e. UI/UX research 101.) For an IDE with a large userbase (for example JetBrains' product line), there are plenty of bug reports / user reviews which complain about performance, and so it's possible to get a picture of how many users perceive your app to be slow.
Note, I chose my words carefully there -- if your users don't perceive the site to be slow, they may still respond positively to imperceptible performance improvements. To measure that you do A/B experiments. There's a lot of ink spilled on this subject; Google has done some really good research here.
> Customers often don't even know it's a possibility that their software could be faster.
Fair, I probably oversimplified there. More precisely, "If speed/performance is a feature that your customers will pay for, you should be able to measure that fact."
You don't necessarily need to invest in the performance improvements to measure this; when Google investigated this, they simply added artificial delay and measured the effect on revenue. From that you can estimate the gradient of the $-revenue / ms-latency slope, and figure out how much it's worth investing in improving your app/site's latency.
No, it isn't. That's "Recognizing Survivorship Bias 201".
- A: not slow
- B: slow, but still using
- C: intolerably slow and no longer use it
Indeed, you can't measure C with a survey. But for most apps, it's probably reasonable to assume a distribution of thresholds where, if B/(A+B) (i.e., the result you get on a survey of users for "is it slow") is less than 5%, there probably aren't many users in C.
The scientific approach to find out if app slowness is a problem is to make a much faster version, give that to some fraction of users in an RCT and see if their usage goes up. But that makes no sense for a business. If you make the effort to develop a fast version, just give that to everyone and move on to the next thing that might get you more users.
> [paper showing Google measuring how many users perceive their app to be slow]
Why don't you write to Jake, Hilary, and Maria and see if they'll explain to you why the existence of their paper doesn't mean what you're now trying to argue?
You see some shakeups here and there (a large push to a new browser once a decade, etc.), but features are king most of the time.
Sure, but I don't see any reason better tools couldn't get us closer. Instead of starting out with languages and tools that will almost guarantee slow software, maybe we work on designing languages that provide productivity while at the same time encouraging leaner software.
I don't disagree that overall the environment software is developed in doesn't encourage lean software. But I do think it's worthwhile to see if there are ways to improve the situation.
Also, faster software can often provide opportunities for new features that weren't possible before. For example, something that used to be a batch or background process can now become a real-time feature. That is something that customers likely would find useful.
The key point I'm making is that you need to consider the whole system/environment; developers aren't choosing "slow languages" because they simply don't care about performance, they are choosing those languages because they are more productive in other dimensions, and the return on that productivity gain exceeds the return on optimizing for more speed.
> Also, faster software can often provide opportunities for new features that weren't possible before.
Absolutely -- we do see performance improvements happen, when they give users something that they actually value; for example the reason IntelliJ triumphed over Eclipse was by doing more sophisticated compiling/parsing in real-time, which was only made possible by significant performance optimizations.
It's not just a developer writing a program for one user. It's a very small number of developers writing programs for many, many users (unless you make custom software). The impact of what devs do is multiplied by the number of users. That includes the negative impacts of being slow or requiring more resources than necessary.
Imagine Photoshop users having to buy more RAM not just because it is needed by some functionality, but because that functionality used more RAM than was actually necessary. The sheer waste, in money, carbon footprint, and pollution...
Granted, optimising takes time. Time that could otherwise be spent writing features. There's a tradeoff there. But we should keep the competition in mind here: there's a difference between a feature being unavailable because you spent time optimizing, and a feature being available elsewhere instead. While it makes sense for Photoshop devs to push features as fast as they can, it doesn't do users any good if Krita already offers those features. (I'm ignoring incompatibility here, but you get the idea.)
The incentives are all wrong in my opinion. We should have fast, lean, correct programs to work with. They just don't happen in the current economic system.
It's spelled Niklaus, see https://en.wikipedia.org/wiki/Niklaus_Wirth. And yes, he has a sound, pragmatic attitude towards software systems.
Nikolaus is another famous person.
For inspiration, Sundblom turned to Clement Clarke Moore's 1822 poem "A Visit From St. Nicholas" (commonly called "'Twas the Night Before Christmas"). Moore's description of St. Nick led to an image of a warm, friendly, pleasantly plump and human Santa. (And even though it's often said that Santa wears a red coat because red is the color of Coca-Cola, Santa appeared in a red coat before Sundblom painted him.)
In the beginning, Sundblom painted the image of Santa using a live model — his friend Lou Prentiss, a retired salesman. When Prentiss passed away, Sundblom used himself as a model, painting while looking into a mirror.
Of Myra, actually — his relics were stolen from his tomb in Myra by sailors from Bari, where they remain today, but he probably never even visited there.
If you want to get something up and running fast, then JS/Python should absolutely be your go-to. We have an "innovation sprint" every quarter where everyone gets to try out changes and new features and anything else they wish to hack with our system, and I would say 99% of people choose to do this work in JS/Python.
However, my personal opinion is that productivity's first and most important pillar should be maintainability, followed closely by readability, with speed relatively far behind.
My interpretation is that the conclusion, at the moment, is that we can't know for sure, scientifically, which one is "better".
That being said, I still have yet to find the randomized, double-blind test that proves a hammer is the right way to hit a nail.
I agree this deserves some study. But...
> when you have to do something you don't have yet any clue about the type system is just in the way
Even if you don't have any clue, you usually know the types of your functions and data structures.
I mostly write Python and OCaml code. Each language has its use cases, but I rarely feel the OCaml type system is in my way. When it's in my way, it's because my code is incorrect in an obvious way.
In the case of my anecdote here, I'm mostly talking about typescript, but I've worked with a bunch of other ones in a non-web context.
Every single time, without fail, when I get a "snag" from the type-system, it's complaining about something real. Sometimes it's a trifling bug, like a typo, but ... even there it's usually pretty nice to have the type system immediately jump on it and report it, without me having to deploy the thing and run it and only then find out that something is wrong.
But the other class of bugs - that's where it's solid gold. It'll often catch really sneaky bugs, bugs related to "nullability", where some object I'm blindly using isn't guaranteed to stay allocated during the use case I expect it to be usable in, and holy smokes are those a lifesaver. Having had to deal with those bugs from the opposite direction, they're an unmitigated nightmare to try to fix without the type system pinning down the exact culprit. Every time I see one, I immediately think "wow, this would have been a 5-10 hour nightmare if I had to fix this because of some production bug". I've been in the office till 10pm chasing those, and I never want to do that again if I can avoid it.
I would use typed APIs though, because it saves documentation reading time and makes editor autocomplete work like magic.
This is not true. Unit tests cannot replace a type system, just like a type system cannot replace unit tests.
You need unit tests because a type system cannot check for all possible types of correctness.
However, unit tests can only check for the presence of bugs; they cannot prove their absence. On the other hand, a type system can prove that certain classes of errors cannot exist in the program.
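A concrete illustration of that last point (a minimal TypeScript sketch with a hypothetical `Shape` type): an exhaustive `switch` over a discriminated union lets the compiler prove that no variant is left unhandled — a class of error that unit tests can only probe case by case.

```typescript
// A discriminated union: the compiler knows every possible variant.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius * s.radius;
    case "square":
      return s.side * s.side;
    default: {
      // If a new variant is ever added to Shape and not handled above,
      // this assignment stops compiling. The "missing case" bug is
      // proven absent in any program that type-checks.
      const unreachable: never = s;
      return unreachable;
    }
  }
}
```

A test suite could only show that `area` works for the shapes you thought to test; the `never` trick makes the compiler check all of them, forever.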
That's usually a symptom that you are trying to write untyped code in a typed language.
There are a few cases where types do get in the way, but in the huge majority of cases the types are there for you to explore your ideas on first, and only mess with the code once they make sense.
Subjectively, my experience is that writing out the type definitions and signatures of main functions is a great way to start exploring an unknown problem space.
I'm heavily biased towards small teams and small to medium programs though. I can at least imagine how TS improves ad-hoc documentation in some cases, which can definitely help in the "maintaining the code over a long period by different people"-scenarios.
Being able to specify interfaces is absolutely nice, but overall I'm not convinced it's worth the trouble.
Judging by the seismic shift in the industry away from vanilla JS towards TS I'd say that qualifies as an extraordinary claim.
It would be interesting to hear some of the details behind your experience.
Most of the functional errors will be caught either by unit tests or by functionality noticeably not working. These are not things that would be caught by Typescript anyway.
The irony is that we're using typescript in the front-end, where it mostly gets in the way. I think Typescript would have been more useful in the backend, but we're not using it there, because originally our backend was trivially simple. Now that the backend is becoming bigger, I can imagine Typescript would be more useful there.
It could be that Typescript doesn't work well with our version of Vue. (I think the latest version is designed around Typescript which will hopefully make the process a lot easier.)
In my experience working with React, which is pretty heavily invested in typescript these days, if you go reasonably deep on doing typescript interfaces, it's like a switch gets flipped.
A light dusting of typescript really does barely anything; it's just boilerplate. But once you get up to about 80-90% coverage, all of a sudden it's really, really good at detecting discrepancies.

I had a thing I was working on today: a cute little svg icon component in the giant SPA program we're writing. I was just reusing the thing, and attaching a click handler to it, and all of a sudden typescript starts griping about this component we hadn't touched in months. And I'm like "oh come on, this is so basic - what the hell could be wrong about passing in a simple onclick handler?" Well - it turns out nobody had ever needed to use the "event" param on that function, so it didn't even declare one internally. What I was passing in, in plain JS, would have just been thrown away, because the internal 'passthrough' version of the function had no parameter at all.

And I didn't notice it in light testing (we have TS set to emit our program even if type checking fails). I tested the component, and because the behavior's invisible/internal, it seemed like it was probably fine. Maybe I would have caught it with really earnest, aggressive testing later, but I didn't even need to - typescript just nailed it instantly.
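A hypothetical reconstruction of that bug (the names here are invented, not the actual component): the prop was declared as a zero-argument callback, so TypeScript rejects any handler that requires an event, while plain JS would silently discard the argument.

```typescript
// The icon component's click prop was declared without an event parameter:
type IconProps = { onClick: () => void };

let clicks = 0;
function renderIcon(props: IconProps): void {
  props.onClick(); // internal "passthrough" call: no event is ever passed
}

// In plain JS an extra argument is silently thrown away. In TypeScript,
// a handler that *requires* an event fails to compile:
//
//   renderIcon({ onClick: (event: MouseEvent) => event.preventDefault() });
//   // error: '(event: MouseEvent) => void' is not assignable to '() => void'
//
// A handler that doesn't need the event is accepted:
renderIcon({ onClick: () => { clicks += 1; } });
```

Note the asymmetry: a function taking *fewer* parameters than the expected type is fine, but one demanding a parameter the caller never supplies is exactly the invisible bug described above, and the checker flags it at the call site.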
I've had the privilege of working on some game development stuff outside of a web stack, and holy smokes does working in a complete, algebraically typed language change everything. When you go from 80-90% type coverage, to "hard 100%", it's just a complete 180°. It's just _freaky_ how good it is at catching errors. I'll change one little thing, and it can tell me "oh yeah - you know that cutscene an hour into the game? Yeah, you broke that." It's uncanny. It just absolutely changes everything about how I work.
Basically, any use of `any` should be avoided. Once you tolerate one `any`, you're on the way down.
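A small sketch of why `any` is contagious (hypothetical function names): once a value is `any`, every typo on it compiles, whereas `unknown` keeps the checker engaged by forcing a narrowing check first.

```typescript
// `any` switches the checker off for everything downstream.
function lengthOfAny(value: any): number {
  return value.lenth; // typo compiles fine; evaluates to undefined at runtime
}

// `unknown` is the honest alternative: you must narrow before use.
function lengthOfUnknown(value: unknown): number {
  if (typeof value === "string") {
    return value.length; // narrowed to string, so .length is checked
  }
  return 0;
  // return value.lenth;  // would not compile: 'value' is of type 'unknown'
}
```

This is why one tolerated `any` tends to spread: everything computed from it is `any` too, and the 80-90% coverage threshold mentioned above quietly erodes.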
One thing that I really, really do like about typescript is that you need to be explicit about whether a value can be null. Java lacks that, but the difference between `foo: string` and `foo: string|null` is stark.
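The difference is easy to see in a minimal sketch (assuming `strictNullChecks` is enabled, and using an invented `greet` function): with `string | null`, the compiler refuses to touch the value until the null case is handled.

```typescript
// With strictNullChecks, null is part of the type, not a surprise.
function greet(name: string | null): string {
  // return "Hello, " + name.toUpperCase();
  //   ^ would not compile: 'name' is possibly 'null'
  if (name === null) {
    return "Hello, stranger";
  }
  return "Hello, " + name.toUpperCase(); // 'name' is narrowed to string here
}
```

In Java, by contrast, every reference type admits `null` implicitly, so the equivalent mistake surfaces as a runtime `NullPointerException` instead of a compile error.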
Otherwise I dunno what this could be. In particular, what could TS be disallowing that is at once valid in JS, a good idea in JS, and especially time-consuming to fix?
I honestly wanted to believe in the value of Typescript and was enthusiastic about the switch, but it really hasn't proven itself over the past year.
For web gadgetry or exploratory ad statistics, you might end up with different productivity-enhancing features.
Strongly typed languages increase productivity if the cost of a type mismatch bug exceeds the cost of defining and specifying the types. That's true for some programs, and it's not true for others.
Dynamic typing is useful if you are forced to develop software before you understand the business model, or if you need to expose some DSL to your users. I normally view it as a short-term option that is typically reached for when not enough actual engineering has occurred (in more complex systems). It's also mandatory for working within certain domains (e.g. the web). I think this last point is why so many seem to think it's a perfectly acceptable way to carry on in any sense.
This is an experience issue, not a type-system issue. Rapid refactoring is also possible during initial passes while design is settled.
2014 (and better) https://news.ycombinator.com/item?id=8301511
IMO a great read for everybody interested in long-term reliability or sustainability.
The expected end result, in spite of the mitigations put in place: the Azure Sphere 20.07 Security Enhancements bug-fix release.
I write firmware in C because flash memory costs us money. I wouldn't recommend C or C++ otherwise. Stay away.
On the other hand, C is kind of docile as long as you avoid doing 'insert long list of sketchy stuff'.