The Programming Language Conundrum (evrl.com)
321 points by bibyte 52 days ago | 216 comments

> A lot has been written about "hyperproductive teams". ... there’s one thing in common between all the stories I’ve heard and the successes experienced first hand: small teams

Personally, I'd go one step further. It's small teams that were together from the start. Going through the initial setup of a project makes all the difference. When people come later, they lack the intuition that comes with being around when all the structural decisions were made. Early people make tradeoffs and bake good and bad practices into the pipeline, later people are told not to touch certain things because it's more complicated than it looks or will break something else that doesn't seem explicitly connected.

I've been on both sides of this fence. Having started a company and sold it: my partner and I were hyperproductive, but while the small team that inherited the code is extremely capable, it took them years to feel comfortable making big changes, longer than it took us to write the code in the first place.

Joining other people's projects and large codebases, on the other hand, has always taken me a long time. Typically it takes a year at a new company before I'm moderately useful.

Personally I don't think this has a ton to do with programming languages. I feel much more productive in Python and JavaScript than in C++, even though I've been using C++ longer, but I still suspect familiarity with the early decisions is a greater factor in productivity than language.

There’s another way out of this problem, but you have to focus on making the code dead obvious and simplify the tools for working with it.

That work doesn’t just pay off in rampup time. True, people can figure things out quicker. It’s easier for early career developers to transition to senior positions. It’s less likely for someone to vastly overrun estimates for tricky new features when the code is well factored. But it’s also about scaling your company up.

When your company gets bigger, you will have more than one project. You need to be able to remove people from project A without killing it, and occasionally bring people back to project A at a later date.

These things have been a constant source of drama everywhere I’ve worked where there “wasn’t enough time” to follow my recommendations. The projects grind to a halt and eventually they lose so many of the old people they have to start over. But it’s the same management style so the process repeats.

This works better for established companies than startups, once they know the pain of the first option. However, when on a 6 month runway, it may not be rational to optimize for 6 years, when you aren’t even guaranteed product/market fit.

I specialize in long life software, so I agree with you, but I understand the alternative.

> I specialize in long life software

How does one market themselves in this specialization?

I'd look at Corgibytes as an example - it's not how I market myself as much as what problems I gravitate toward, with the alternative being startups.

Do you have any more details on your recommendations?

Get variable and method/function names right. They matter more than most comments. I remember a code base I inherited:

    if avail > 0:
I could never reason well about that. Then I changed it to

    if available_hosts > 0:
and eventually into

    if number_of_hosts_to_scan > 0:
Technically this would be enough

    if number_of_hosts_to_scan:
but it's less clear.

I'm not afraid to type long variable names because the editor has autocomplete. I don't type much and I don't make typos.

If the function is called scan_hosts, then if the parameter is n, it's more or less obvious to me that n is the number of hosts to scan. I'm willing to read a block comment to remind me what n is.

Some issues could be:

1. It is not obvious to everyone.

2. Another integral parameter called k is added later on. Now which is which?

3. Another guy might call all his integral parameters k. n and k are used for the same thing all over the code-base. An outsider might initially think they are different.

4. Names which are descriptive of their use are obvious to everyone by definition.

It may not be obvious to strictly everyone, but that should be reasonably obvious to anyone with enough context to have any business touching that code.

Or nHosts, following a popular naming standard that uses camelCase.

At least one of them will be something similar to: write boring code. Document just a little more than you think you need to (be that in self-descriptive variables or actual comments). Don't get fancy unless actual measured performance dictates it, and when you do so, make sure you document both the why (which test indicated this was necessary) and the how (this is what we did, exactly, to make it perform better).

Write code like you’ll have to debug it at 2 am. Some day you will.

Uncle Bob’s Clean Code.

> It's small teams that were together from the start. Going through the initial setup of a project makes all the difference. When people come later, they lack the intuition that comes with being around when all the structural decisions were made. Early people make tradeoffs and bake good and bad practices into the pipeline, later people are told not to touch certain things because it's more complicated than it looks or will break something else that doesn't seem explicitly connected.

I've also seen this happen and agree with your analysis that it's probably not related to the programming language - it seems more like a problem with how the thought process behind those decisions is (not) recorded.

Short of making every project (and the codebase it results in) live for a short enough time to guarantee that nobody new will ever have to look at it, we've got to collectively do a better job of being able to explain the "wtf did you do that?!" aspect of development.

Your last sentence reminded me of "The only valid measurement of code quality: WTFs/minute": https://www.osnews.com/story/19266/wtfsm/

A readme (in plain language) in every folder helps a lot. Either that or it's replaying every commit from the start. Maybe tools can be made for this.

I don’t think it’s about tools as much as culture - whether it’s a readme, a wiki or a chat log that’s not full of talk about the weather and scones (no, really), people have to be suitably incentivised to work that way. Something that isn’t a customer-deliverable (they aren’t going to directly care if you don’t record the weird business process that led to you implementing feature X the way you did; they don’t have a comparison which tells them that if it was recorded, changes could’ve been quicker/cheaper/less risky) is at risk of being cut as soon as there’s a hint of time pressure to get something out the door.

It’s coincidental that doing things in less of a “let’s all get in a room and talk about it for 3 hours” way should make keeping remote workers in the loop easier too.


As it happens, my current project that uses it has a Readme in every folder: https://github.com/akkartik/mu

I've been chasing project longevity (particularly in infrastructure like languages and OSs) for a long time, and came up with a prerequisite: http://akkartik.name/about

This quest took me from programming in Lisp, to C, and now to raw machine code. I'm hoping to build back up to high level languages, but with key implementation features that enable outsiders to comprehend a codebase.


Part of that is the difference from having a system with paying customers vs a new system still trying to gain them. Risk for changes increases with success and for devs who don’t own the company they can’t make hard calls as easily.

I would have to agree with this wholeheartedly, thanks for adding it. Yes, risk tolerance drops over time as a business solidifies, and hyper productivity is partly a function of high tolerance for risk. Risk tolerance goes down at the same time that sales and technical debt and complexity and team size all go up, so I think losing the feeling of hyper productivity over time the more successful a company is, is both natural and inevitable.

Agreed: On the one hand, the anecdotes in this article are not structured in such a way that allows one to say that it must be the language difference, rather than people issues, that accounts for the difference in outcome. They present less of a conundrum if you don't assume the former.

On the other hand, C++ seems to be strictly more complex than Python, and if so, it would not be surprising for there to be cases where the latter can be applied more quickly.

Peter Naur has written before about the problems of ramping up new people on old codebases: http://akkartik.name/naur.pdf

I think this is the essential tech debt (and so shows up the essential problem with that framing; you aren't so much going into debt as selling yourself into slavery, because the debt will never be repaid).

I work on this problem:



Now that is interesting, thank you!

> Typically it takes a year at a new company before I'm moderately useful.

WTF? What kind of companies are these? How are they successful in the slightest?

I have the exact opposite experience going from working primarily with Java to Clojure. I worked with the former for nearly a decade, and I haven't contributed to a single library or project I used in that time. Every time I'd see a bug or a feature I'd like to add, I'd check the source see thousands of classes each with its own semantics and behaviors, and give up. The amount of effort needed to familiarize yourself with a non-trivial Java codebase is herculean.

Meanwhile, I've contributed to many Clojure libraries, and lots of people contributed to the ones I maintain. I can open a Clojure project and figure out what's going on much easier than with a Java one. The code tends to be much more direct. It's primarily written using pure functions that you can reason about independently. It's focused on the data, so I don't have to learn hidden behaviors. And I have a live environment with the REPL where I can run the code and see what it does immediately.

My team has been working with Clojure for nearly a decade now. We have many applications in production that have been developed and maintained for many years. We've hired many new devs during that time, and we've never had problems onboarding.

A language being expressive and flexible does not preclude it from scaling to large projects. And when the language is not expressive, you end up with huge amounts of incidental code because you're forced to map your problem domain to the language as opposed to the other way around.

I think that community practices for using the language are far more important. Clojure community started discouraging heavy use of macros early on, and a small set of common patterns tends to be used to solve problems, and codebases across projects tend to have a similar style to them.

Even if there is a scaling limit, you can still use SmallTalk or a Lisp for small projects (and the scope of small projects in those languages is larger than that of, say, small Java projects) and benefit. I imagine plenty of unglamorous software projects are small enough in scope.

I am a mathematician, not a programmer, but even I have a Lisp story like this. I'm working with a computer science professor at my university who has in the past worked as a programmer both for private companies and for our government. We're studying some combinatorial game played on graphs and wanted experimental data of which player has a winning strategy on thousands of small graphs.

To feel more certain of the computer results we decided to each write our own implementation independently without looking at each other's code. He wrote his in C and I wrote mine in Common Lisp. You can imagine the punchline since it is similar to all of the Lisp stories out there: my program is about a tenth of the number of lines of code, took much less time to write and runs about twice as fast!

How would you know the projects stay small? Also, if you have to work on small and large projects, and one tool works for both, getting proficient in both tools might just not add enough benefit...

I certainly have no idea how to predict if a programming project will stay small, but maybe people who do that kind of thing for a living have some idea?

Also, I'm skeptical about this scaling limit business, but if it's true, call the first program a prototype or proof of concept and then rewrite in a "scalable" language when necessary. "Plan to throw one away, you will anyway."

Any large project can be broken down into a set of small projects.

> Create a language to solve your problem;

> Solve your problem.

When I worked on a complex framework in a small team, we used Objective-C. In the early days of the project, we created some powerful utility methods like map and reduce which we used extensively in the early stages. These methods let us do things like traverse complex structures very compactly. Way better than the more plodding while-loop (perhaps with a stack) way of doing it. But, I found myself moving away from these later in the project in favour of more verbose code because it took me too long to understand the map/reduce code if I needed to revisit it. The longer code was, um, longer, but it was much easier to understand what I (or my colleagues) had done when I came back to this code. In my view, a bunch of very smart programmers can indeed be "hyperproductive" using a language like Smalltalk, but this hyper-productivity may not survive the longer term life of the project.

Note that map and reduce (and generally fold) tell you what happens. Loops show you how something happens, and you have to infer a higher-level concept from it yourself. That is, map belongs to a strictly more powerful language, but to benefit from it, you have to be fluent with this language.

The very same thing happened when structured programming was invented, and the Fortran IV guys were saying that all these "while" and "until" loops are harder to read than computed gotos.

This past week I sat in a class at work about the newer features of Java 8-10, mainly lambdas and streams. I was like "oh, Java is catching up I see" while another developer was like "What? I don't understand? How would you unit test a stream statement?"

Streams are reasonably testable; I wrote streams-heavy code for a year, and did not have issues with that.

Very long streams can be composed of shorter stages, and these stages are individually testable when needed.

How is map difficult to understand? It’s the same semantics as a for loop with input and output lists, but less boilerplate.
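A minimal Python sketch of that equivalence, with illustrative names only: the loop and the map below compute exactly the same output list, and the map version simply drops the boilerplate.

```python
numbers = [1, 2, 3, 4]

# Explicit loop: input list, output list, accumulator boilerplate.
doubled_loop = []
for n in numbers:
    doubled_loop.append(n * 2)

# map: the same semantics, minus the boilerplate.
doubled_map = list(map(lambda n: n * 2, numbers))

assert doubled_loop == doubled_map == [2, 4, 6, 8]
```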

From my personal experience, most developers that discover list comprehensions etc. get so excited about how much they can achieve with little code that they try to keep everything as small as possible. They end up using short, non-descriptive variable names, and you need to evaluate the code in your head to find out what a variable/list contains. It's even worse with non-sequential callbacks and closures.

In Python I went back and forth:

I discovered list comprehensions and was excited and used them a lot. Then I went to multi-level list comprehensions. Then I realised I could never debug those, so I went back to loops for anything needing more than a single level. Finally I discovered generator expressions, which are basically like list comprehensions but lazy (not evaluated), and since then I break my code into a series of generator expressions which I materialise at the end with a single list comprehension. This way I keep my code readable, it's a lot cleaner than a bunch of loops, and I also benefit from Python's special optimisation for list comprehensions.
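A minimal sketch of that pipeline style, with made-up data: each stage is a lazy generator expression that does one thing, and a single list comprehension at the end materialises the whole chain.

```python
lines = [" alpha ", "", "beta", "  ", "gamma "]

# Each stage is a lazy generator expression; nothing executes yet.
stripped = (line.strip() for line in lines)
non_empty = (line for line in stripped if line)
upper = (line.upper() for line in non_empty)

# One final list comprehension pulls data through the whole pipeline.
result = [line for line in upper]
assert result == ["ALPHA", "BETA", "GAMMA"]
```

Because each stage is named, you can comment out or reorder steps while debugging without unwinding a nested comprehension.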

I like writing generator loops with the yield statement for more complex processing.

That’s my style as well, and I followed the same path as yours!

>It’s the same semantics as a for loop with input and output lists

Arguably it's even easier, because you generally know you won't have inter-item dependencies.

The parent did say they rolled their own map/reduce. It might not have been exactly as one would expect it to work in that case.

But yeah, map/reduce are so common nowadays that I think most people would not have a problem with them anymore. Even Java is doing them now :)

OTOH, if you've never encountered them because you come from languages that did not have them, they might be confusing at first. Some of my colleagues did have trouble getting used to map/reduce when Java8 came around.

If map() is part of the language then it's not difficult to understand because you can always read up on it in official documentation or on the web. But if the language does not support it and you create your own it is unlikely that you document it as well as the vendors of a programming language.

This may be part of the problem with hyper-productive languages in general, you get stuff done fast but it tends not to be documented as well as a publicly available programming language.

map() that is part of the language is also very STABLE whereas if you write your own you are likely to tinker with what it does here and there, what default argument-values it uses and so on.

I've written maps that I've regretted later. "map(+1, list)" isn't too bad. But as you chain things and/or put more in the lambda, it gets less comprehensible. Then, some cross cutting requirement comes in and you either need two map reduces or to just refactor to a loop.
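A hedged illustration of that trade-off, with made-up data: the chained map/reduce version below is correct but dense, while the loop version is longer and easier to extend when a cross-cutting requirement arrives.

```python
from functools import reduce

orders = [{"price": 10, "qty": 2}, {"price": 5, "qty": 3}]

# Dense chained style: fine at first, harder to amend later.
total_dense = reduce(lambda acc, x: acc + x,
                     map(lambda o: o["price"] * o["qty"], orders),
                     0)

# The same logic as a plain loop: more lines, but a new
# requirement (logging, a discount rule) slots in trivially.
total_loop = 0
for order in orders:
    total_loop += order["price"] * order["qty"]

assert total_dense == total_loop == 35
```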

I would say it is the difference between first-order and higher-order. While loops are first-order, thus easier to understand, while map is higher-order, thus more difficult to understand.

"Higher order" functions are about whether a function takes code as a tuneable parameter, rather than "just" (user) data.

A regular function call is first-order:

    foo(bar: Int): Int
While is usually a magical special case, but conceptually it's just a higher-order function that takes two functions:

    while(keepGoing: () => Boolean, action: () => Unit): Unit
Or, if you want to be pedantic:

    while[M[_]: Monad](keepGoing: M[Boolean], action: M[Unit]): M[Unit]
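For readers who don't speak the Scala-ish notation above, here is a rough Python rendering of the same idea; hof_while is a hypothetical name, not a standard function.

```python
from typing import Callable

def hof_while(keep_going: Callable[[], bool],
              action: Callable[[], None]) -> None:
    """A 'while' loop expressed as an ordinary higher-order function."""
    while keep_going():
        action()

# Usage: count a mutable value down from 3 to 0.
counter = {"n": 3}
hof_while(lambda: counter["n"] > 0,
          lambda: counter.update(n=counter["n"] - 1))
assert counter["n"] == 0
```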

Your higher-order while construct certainly mimics the normal first-order while one. But that's not really the point.

Scala's "implicit" keyword seems a famously contentious example of code-obfuscating trickery.

Odd languages attract above-average programmers, if nothing else because they are engaged. Small teams are more productive than large teams because there are almost no communication problems. I wonder how much is due to this?

I came here to make the same point. The real test would be to take those same Lisp or Smalltalk programmers, and have them work in Java. I’ll bet you see the same increase in productivity. It’s the people, not the language.

No, the language is part of the equation.

I have code, for work, in: FoxPro, Delphi, Python, VB, VB.NET, C#, F#, Obj-C, Swift, Rust, Sql, Js.

I rewrite apps and codebases, and port them. I rewrite the same stuff many times; building my own pseudo-ORM is my main exercise when I learn new languages.

I'm absolutely more productive in some languages than others:

Amazing at:

- Fox, Delphi, Python (#1), F#, Swift

Average, low:

- VB, C#, Obj-C, Js

Barely move:

- Rust (this is my latest language, and I'm also writing a programming language that I have sketched in Python, Swift, and F#. The task hits the hardest and weakest parts of Rust).

I look at C and C++ and my instinct tells me I will suck at them forever. Same with Haskell. OCaml? I will fly. Lisp? Nope, that crazy stuff never clicked. Kdb+? I don't know, maybe.

I don't buy the meme "the language doesn't matter, it's the people", because languages are made FOR people. And some stuff clicks for you or it doesn't.

That is the reason APL is a thing for some.

Whenever I hear "it might work for you, or not", my ears perk up. Not understanding when something works or not is a great question. It's something to explore. It's not the endgame.

I briefly studied French in college, and to say it "didn't click" would be an understatement. It was the worst grade I got in any class ever, by far. And yet, even the dumbest French person is fluent from when they were just a kid. It's probably not the case that French is simply impossible for some people to learn. Something else is going on.

Couldn't it be that we simply haven't figured out a good way to teach programming languages yet? Software is still generally "go read the reference manual online and you're good", but most other mature fields have moved beyond that. Boeing is in hot water this month in part because they essentially used that as pilot training for the 737 MAX, and it's clear to everyone that this is not an adequate way to learn a complex new technical tool.

Unlike you, I don't find Swift particularly productive (and I've written tens of thousands of lines in it!) -- but maybe with the right training, I would.

Learning a language as an adult is completely different from learning as a kid. Your hypothetical “dumbest French person” would not have been able to learn French as an adult, the same way that the most physically fit 100 year old could not survive the falls down the stairs that 3 year olds do without even crying.

Correct. A different part of the brain is used when learning a language past a certain age (I think 8 or 10 years old).

It's not a "meme", it's just my speculation. Yes, languages are made for people, but the people aren't all the same. TFA overlooked the point that maybe it is more curious and more talented people who make their way to esoteric languages.

So we are in agreement (weird how often that happens on HN, at least on tech!)

I'd also add that you need to explore those languages to become talented.

I'm pretty certain I'm an average developer, at most. Not because of low self-esteem, but because after 20+ years I have known people above and below me.

BUT, the use of many paradigms has helped me look much better than if I had been stuck in a single language (or paradigm).

By intuition, I credit FoxPro for how I tend to be better at RDBMS work; Delphi, for how to build UIs and for a certain understanding of the low level. And so on.

Every new language/paradigm makes you better, and those lessons carry over.

One of my favorite anecdotes: one day I was stuck on a task in C#, which even with libraries I couldn't solve.

I thought to myself, "let's do that in Python". I solved it in no time. Then I ported it to C#, and ended up with almost the same line count!

Yes, you do need to explore languages to become talented in using languages. I think you are not an average developer if you are learning and using so many languages, pretty much by definition.

Thanks for the compliment.

I consider "talent" to be the amount of skills you have at your disposal. I think an average idiot will be more productive the broader their horizons are ;)

> I came here to make the same point. The real test would be to take those same Lisp or Smalltalk programmers, and have them work in Java. I’ll bet you see the same increase in productivity. It’s the people, not the language.

A good example to strengthen this argument is Petr Mitrichev who has won numerous competitive programming competitions and his language of choice is... Pascal https://en.m.wikipedia.org/wiki/Petr_Mitrichev

> It’s the people, not the language.

Then why is it that so many good developers who have learned these more esoteric languages cannot stand going back to Java etc?

Because those esoteric languages are better. The comment I responded to points out that TFA makes the unreasonable assumption that the Java programmers and the Smalltalk programmers are equally talented. And that perhaps what is going on is that the people attracted to explore beyond Java are more curious and perhaps more interested in their craft. If that is the case, then I predict these people will be better in Java, even if it isn't their first choice.

For one thing, highly popular languages tend to have communities which are flooded by people who can't use those languages, can't describe the problems they are solving, or can't understand basic programming concepts.

Like, it wouldn't even matter to me if PHP is a good language if I have to sift through thousands of comments of "I did this and it worked" without any description of why it worked, why it is better than other ideas, or what problem it is even meant to solve.

Did you get the name mixed up? Says here that Gennady Korotkevitch uses Pascal but Petr uses Java. https://www.quora.com/What-language-do-Gennady-Korotkevich-a...

>I came here to make the same point. The real test would be to take those same Lisp or Smalltalk programmers, and have them work in Java. I’ll bet you see the same increase in productivity. It’s the people, not the language.

They write a mini-Lisp in Java and obfuscate it. I've been on a project where this happened.

> above average programmers, if nothing else than because they are engaged

But are you really "above average" if you are incapable of actually producing something large scale because your temperament is such that you can't stand working in a "boring" "unproductive" language?

It's sort of a tortoise-and-hare situation: the hare runs really fast, but if that doesn't win you the race, who cares? There is an even rarer breed of programmer who operates as both the tortoise and the hare. Those are the true "above average" programmers in my view, but you can pretty much exclude from that category the elite crowd of folks who "refuse to program in Java" or will confine their entire job search to companies that use their favorite language.

Good point. When I discovered Clojure and saw the light after listening to Rich Hickey's sermons on The Mount I found it almost impossible to work with OO code. Years later I asked myself if this had actually benefited me professionally, other than simply broadening my mind, and I concluded it hadn't because it limited me to the tiny number of Clojure jobs available and ruled-out earning good money working on existing code. In recent years I've had to reverse-engineer this experience and get to grips with the likes of Java, Kotlin, Ruby and Python.

> But are you really "above average" if you are incapable of actually producing something large scale

Wasn't the OP's conclusion that large scale was due to not being able to scale the team into the hundreds and not any limitation of a small team?

There's only so much code that a handful of programmers can write and maintain in any language. The scaling issues comes up when you need to make your team a lot larger.

Yes. I think one of the interesting things in software is the irrational amount of productivity that a small team can accomplish, but that it can only be done by doing things in a way that can never be executed by a 50 person team.

So it is possible for 3 - 10 developers to maintain a product that perhaps, would need 50 developers using fully maintainable methods. But the 3-10 developers can never produce what 500 developers can. And there's a valley of death between about 10 and 30 where you actually go backwards. With 20 developers you might accomplish less than with 10 unless you very carefully and strictly compartmentalise the team (at which point you have two teams of 10 developers that don't interact ...).

I'm skeptical for the simple reason that this post never calls out any specific feature of Lisp or Smalltalk which could cause this problem, and other languages which apparently don't suffer from these problems have most of the same features. It's almost as if unpopularity is the root cause.

Guy Steele famously said that with Java they "managed to drag a lot of [C++ programmers] about halfway to Lisp". I don't hear many complaints about Java being too 'organic' or difficult to document or that it only works on 2-pizza teams.

If you think that sufficiently powerful languages only work on small teams of dedicated people, why draw the line at Lisp/Smalltalk? Why aren't the Ada people going crazy about how loose and dynamic Java is, much less Python and Ruby -- all of which apparently "breeze through the 100, 200, 500 developer marks without much trouble"?

I remember reading about how Codd's relational databases struggled with acceptance because it was believed at the time that 'tables' were too complex for the average programmer, yet today that idea is laughable. People tend to rise to the level of expectation.

>other languages which apparently don't suffer from these problems have most of the same features

Well, one of the very first differences I think of between Lisp/Smalltalk and Java is dynamic vs static typing. In Java, you have to write extra boilerplate code to satisfy the type system. This slows you down when you're writing code. However, when you're figuring out someone else's code, static types can be a big help. Oftentimes you can guess the right way to call a method based solely off the signature, and if you pass the wrong thing, your IDE can highlight your mistake almost instantly.

So when you're working on your own freshly written code here and now, static typing is usually a hindrance. When you're working on 3 year old undocumented code written by someone you've never met, static typing is a key source of information on how things work. The bigger the team/project, the more likely it is you're working on code that's not familiar, so it would follow that static typing helps scalability.

I've started using type hints with Python and they are amazing, not just for historic code, but for improving my productivity in real time (especially in conjunction with an IDE). What's great is that when you use hints, you get the benefits in linting, but you can also get away from them when you need to. With mandatory types, you are always having to work within the type system. Best of both worlds. But not everyone is as excited about, or as thorough with, them as I am.
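A small sketch of what that gradual-typing workflow looks like in Python; find_host is a made-up example, not an API from the thread.

```python
from typing import Dict, Optional

def find_host(hosts: Dict[str, int], name: str) -> Optional[int]:
    """Return the port for a named host, or None if unknown.

    The hints document the contract; a checker such as mypy can
    flag callers that pass the wrong types, while untyped callers
    still run unchanged.
    """
    return hosts.get(name)

port = find_host({"web": 8080}, "web")
assert port == 8080
# Gradual typing: a miss returns None instead of raising.
assert find_host({"web": 8080}, "db") is None
```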

I find type hinting to be very useful, but it has some gotchas (also present in unhinted Python), where the type you get isn't the type you expect:

- Int and float from numpy arrays are np.int/float32/64, and round(np.floatXX) returns a float, whereas round(float) returns an int in Python 3.

- ruamel.yaml loads CommentedMap and some custom scalar types; CommentedMap is not a subclass of dict.

> Well, one of the very first differences I think of between Lisp/smalltalk and Java is dynamic vs static typing.

That's true, but that's not what the article is about. He specifically calls out PHP, Python, and Ruby as examples of languages which do scale well to larger teams.

It seems that there's something else in the space between Ruby and Smalltalk that he thinks is responsible for a language being able to scale or not. I just can't figure out what it is.

I remember that many years ago somebody told me (or did I read it?) that he expected Smalltalk to eventually succeed but he didn't expect that it would be called Ruby.

My guess: maybe a Smalltalkish language in a conventional environment? Don't underestimate the power of being close to what the average programmer used to experience in the years before a new language is released.

Python is from the end of the 80s and its classes look like OO in C (without ++, I mean the explicit self), plus functional methods (len, etc). Ruby and Java are from the middle 90s and take that for granted and hide self and go all in with method calls. They take small steps and adopt a syntax similar to other existing languages. Too much change and developers won't follow en masse.

> It's almost as if unpopularity is the root cause.

It kind of is. What determines which things people are excited to learn? What determines which things people will feel safe doing? What determines which things get you in, and which get you sidelined?

Creating knowledge that can be used in practice in a non-enclave environment, and creating common knowledge, overlap a lot. “Make this maintainable” = “Make this something people can adequately be assumed to know how to maintain” = “Don't do anything weird”—thus the focus on keeping code idiomatic. So it's not really about absolute power; it's about relative power. If you can establish a cultural boundary, and then set up a pipeline of enough people dealing with a higher-power language, it seems possible to make that work, but that's a social undertaking that easily falls apart to things like “people will call you elitist for separating yourself and doing anything that might imply the others are inferior” (this visibly happens to Lispers) and “your code gains more from talking to other code easily than it does from being powerful in itself, so you need a lot of knowledge surface area, where people who use your environment are widely distributed in what else they know”. Notably, though, enclaves of Lisp and OCaml do exist in industry where there's some amount of big projects that have a limited surface area and they can make use of the higher power. I've heard similarly for Haskell. And academia seems to practically be a supercluster for higher-power language use.

I get the impression that one of the big ways Java and C#¹ are gradually steering the big ships into higher-power water is by having backing organizations who can introduce a new “fancy” feature, document it extensively, watch how the great masses of programmers in that language actually use it, push for them to learn it, and then wait before adding the next feature, so that the delta between highest and lowest power feels more socially safe; no one is ever far enough from the crowd to become a problem, but the rallying banner can still move it gradually. The slowness is widely detested among programmers who want to jump to the higher power level now and know that there isn't necessarily a technical reason not to, but it's a feature for social cohesion. It also provides a convenient binding for transmitting this information for teamwork purposes: “we're using Java 8” implies a whole set of things about expectations in a way that “we're using Common Lisp” by itself doesn't.

¹ Apparently I can't write the sharp sign here.

> I'm skeptical for the simple reason that this post never calls out any specific feature of Lisp or Smalltalk which could cause this problem

How about the fact that Smalltalk at scale involves custom version control systems? I haven't used Smalltalk seriously, but that feature gap alone could keep Smalltalk projects and teams small.


Hmm, it's true that one common feature of both of these languages is that they traditionally use process images. That's not required, though, and every Lisp program over 100 lines I've seen written in the past 25 years has used a normal text-based version control system.

Also, while bypassing version control and doing everything in-process can be a great productivity boon for a single programmer, even a team of 2 has outgrown that. This wouldn't explain why (he thinks) these languages work for small teams but not larger teams.

Pharo uses Git

While NoSQL databases don't owe all their popularity to a common developer aversion towards thinking in tables/relations, there's some portion of enthusiasm that was generated pretty much on that basis.

The fact that dozens of companies have been built around millions of lines of Common Lisp code at all levels of abstraction solving some of the most complicated problems is a testament to the scalability of Lisp.

What has changed is that most software shops aren’t solving technical problems anymore. Most places are engaged in the business of plumbing—connect this database (whose problems are solved) to this dashboard (whose problems are solved)—but many of us don’t want to admit it. It’s not glamorous and Silicon Valley prefers to keep its death grip on titles like “Senior Solutions Architect.” Plumbing doesn’t necessarily need expressive power of a programming language, but I contend that problem-solving greatly benefits from it.

This hasn't really changed. Yesteryear it was Visual Basic, reviled by anyone coming from a computer science background. Most of it was "connect to database". Prior to that, 4GLs: dBase III derivatives, right through Microsoft's (eventually) FoxPro. Before that, COBOL. There has always been "plumbing" for as long as computers have been "business machines".

I think the proportions have shifted significantly. (Proportion of “problem solving” to “plumbing”.) There’s at least a couple orders of magnitude more software and services available to use and connect to today than there was 25 years ago.

Plumbing is not what you read about in your textbooks.

It has always been the case that the majority of programmers are plumbers. As a fraction of people, I would be surprised if we did not have more people working on blue-sky problems today, because all the tools are available for free, as opposed to the 80s, when a workstation with the software on it cost 30% of a developer's yearly salary.

> The fact that dozens of companies have been built around millions of lines of Common Lisp code at all levels of abstraction solving some of the most complicated problems is a testament to the scalability of Lisp.

I would draw the opposite conclusion from your statement. "Dozens" is not a big number compared to the universe of software-based companies. Up that number by three orders of magnitude and I'm sold.

I would say there are several languages that straddle the line.

AFAIK there exist 50+ person teams that use, e.g., OCaml (Jane Street, Facebook), Haskell (Galois, Facebook, Google), and Erlang (WhatsApp, Pivotal).

I would say that beyond, say, 10 developers you suddenly need to have onboarding solved. You need to be able to teach your new hires. You need a community beyond your workgroup to ask questions. You need decent library support.

I would say it is beyond 50 where your leads start to worry that you didn't `go with IBM`, so to speak. You suddenly can't just hire an expert. And you can't coast on the fact that your programmers find the language familiar.

Like, I joined a 4-person team working in Clojure. It was fine. You could mostly see "oh, this is where the macro magic starts, and here I need to grok protocols first, and here I need to grok transducers first". Then my manager got nervous that he couldn't hire new people fast enough, so we had to do a Python rewrite :-/

Different team, different project, 60+ people, Python. You'd think: it's just Python. But you never know when there is a metaclass hiding behind an inheritance chain. There are parts where maybe a single person understands the code. Sometimes that person left a year ago.
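A contrived sketch of that kind of surprise (AutoRegister and Handler are invented names): the metaclass is attached to a base class, and every subclass quietly inherits its behavior.

```python
# Contrived illustration of the comment above; AutoRegister and Handler
# are hypothetical. The metaclass silently registers every subclass,
# and nothing at the subclass definition site hints at it.
class AutoRegister(type):
    registry = {}

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        AutoRegister.registry[name] = cls
        return cls

class Base(metaclass=AutoRegister):
    pass

# Written three files away, with no mention of metaclasses anywhere:
class Handler(Base):
    pass

print("Handler" in AutoRegister.registry)  # True
```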

I don't think the issue is that the language weaves with the mind (or the reverse). Instead I think there's several issues at play, at least these, and probably others: a) mindshare, b) dynamic type systems suck.

(b) alone may seem insufficient, since TFA mentions Python as a language that scales, but I've never seen large Python systems -- lots of small Python programs and libraries, yes, but monolithic multi-mloc Python applications? no. When you get to that size, dynamic typing issues become time sinks.

As to mindshare, if the number of developers who know Lisp is small, you're not going to get a lot of them. Mind you, you should be able to train them -- once you speak N>3 programming languages, getting to N+1 is not that difficult. But it's a lot easier to hire C++/Java/whatever programmers. This is a mindshare effect.

Also, if 1.5 FTEs allow you to build a successful company, then to scale development to 100 FTEs you have to rewrite, well, that's still a success. Many would like to have that problem. And nowadays we're getting good at incremental rewrites as an industry -- witness the Rust effort.

I think the future lies with Rust. Rust is a very high-level programming language, but it still lets you get down to the same low level as C. If you sold Rust as a Haskell with linear types, no/minimal monads, no I/O monad, and a C-ish syntax (basically, what it is), I think TFA might run away saying "oh no, another Smalltalk that won't scale".

At the very least I'd like to think and hear a lot more about the issue -- whether and why some programming languages do or do not scale teams -- before accepting TFA's conclusion.

"These languages are very powerful, but they don’t scale. So even when you get ten times the productivity, the mostly tribal complexity will not scale beyond your “pizza-sized team” and therefore you now have a hard upper limit on your total software development capacity."

Ultimately, the problem boils down to the languages giving too much freedom. More conventional languages like Java can scale to a massive degree because they standardize so many things, which means that you can now share components which amplify your work. Lisp et al are cool toys and small team tools, but you're not going to see any large scale db libraries and parsers and concurrency libraries and web servers and cachers and HA systems that can interoperate with each other without a ton of shoehorning.

When every foreign chunk of code requires a ton of ramp up time just to figure out what it's doing and how you could even integrate it, or requires changing your IDE and workflow in ways incompatible with what you have already, you lose more than you gain.

Worse is better. Worse gives you interoperable, reusable, amplifiable components, in a common framework everyone understands.

> the languages giving too much freedom

This is why the D programming language doesn't have macros, neither text nor AST. It's been a controversial decision, and I've had trouble convincing people of the reasons why. Others explain it as "Walter doesn't like macros."

The reason is that macros enable one to invent one's own language within the language, and that language is incomprehensible to others. It happens with every language that supports macros and macro-like constructs. (It happens with C++ "expression templates" and operator overloading.)
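The iostreams case translates into a small, hypothetical Python sketch: once `<<` is overloaded, the code stops being ordinary arithmetic and becomes a private mini-language.

```python
# Hypothetical sketch of the move C++ iostreams makes: overloading `<<`
# turns ordinary syntax into a private mini-language.
class Out:
    def __init__(self):
        self.buf = []

    def __lshift__(self, item):   # `<<` no longer means "shift left"
        self.buf.append(str(item))
        return self               # returning self allows chaining

cout = Out()
cout << "value: " << 42 << "\n"
print(repr("".join(cout.buf)))  # 'value: 42\n'
```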

People swear up and down that they'd never use macros that way, they'd use them responsibly, and the incomprehensible mess they create is indeed responsible.

For a hilarious example, an engineer I know recounted a story at Big Corp where there was an aged piece of software written in assembler. The programmer for it had long since moved on. The code had some problems and needed updating. Unfortunately, the programmer had invented his own language using the macro system, and nobody wanted to touch it.

My engineer friend said he could do it, and had the code fixed in a couple hours. Amazed, the manager asked how he'd figured out those macros. He replied "I didn't. I assembled the code, ran it through a disassembler, converted that back to source code, then fixed it."

D does have macros, it just has bad macros, which prevents people from going overboard with them.

CTFE returning strings to be mixed in is 90% of the way to a proper macro system. (It just needs quasiquoting, really.) The only thing keeping it from taking off is how ugly it is to use, keeping it relegated to hidden tooling code. And of course there's UDAs, mixin templates having full access to the lexical space they're mixed into, `q{}`... really, D has all the pieces of a fine macro system, you're just refusing to take the final step and make them consistent and easy to use.

It kind of reminds me of Path of Exile and their refusal to implement proper player shops. It doesn't mean there's no item trading, it just means there's only bad and annoying item trading. I guess I don't believe in gimping tools, especially when you're already willing to take them most of the way.

None of these enable ordinary D code to not be ordinary D code, which is the crucial difference.

Similarly, operator overloading is deliberately constructed in such a way as to allow new arithmetic types to be smoothly designed, but to make it clumsy to do things like C++ IOstreams.

They do enable expanding the definition of what is ordinary D code. With UDAs, you can write declarative data structures that are more UDA semantics than D semantics. Macros do the same thing, if with more range.

Well, due to these decisions D seems to have become a (quite unpopular) "middle ground" / "centrist" language:

- people who want something more dead-simple / no-magic go for Go

- people who want the full combo of "max performance" + "performance-wise-free abstraction" + "extra safety" go for Rust

And conservatives stick to C++. Hence D didn't get very popular. I think it's a programmer psychology issue: people just don't like "middle ground languages" they want to clearly be in one extreme and accept its tradeoffs.

(There's Swift as a counter-example currently growing outside of iOS-dev towards ML, but I guess it's more like it being "like Go but with operator overloading and some macro-like infra features that ML people want & need"...)

I think the main driver for Swift is that it's a modern replacement for Objective-C, which feels very dated at times.

Well, ignoring Apple's ecosystem uniqueness, being an Obj-C successor is kinda the same as being a C++ successor.

No experience with D, but using Go a lot, and it is the same thing there with lack of metaprogramming.

Turns out everyone uses reflection instead to get anything done.

Not sure how much of a success that is.

D fully embraces metaprogramming, which is a very powerful part of D. But it stops short of being able to redefine the language, which is what macros do.

Do you consider e.g. BER MetaOCaml approach as "being able to redefine the language"?

I don't know enough about OCaml to have a non-gibberish reply.

The lack of generics kills any joy I feel for Go.

The problem with macros and dsls is that there are many good programmers in the world but very few good language designers. Plenty of people can design a language for themselves, which probably explains the productivity gains of lisp for a one person project. But there are very few who can design a language for others.

And the problem of programmers not being good language designers is that the design of APIs consisting of nothing but functions and data types is also language design. Oops!

Unfortunately, not having powerful macros that can redefine the language hasn't made D popular, so it doesn't really factor into the line of argumentation here.

In any case, this whole debate, starting with the submitted blog and on down from there, is a lot of nonsense that completely ignores the fact that humanity has produced thousands and thousands of programming languages, only a handful of which were ever popular at any given moment. The numbers are stacked against any language, regardless of its peculiarities and merits.

What is popular in this area is a combination of luck, promotion by some big company, or bundling with an operating system or application platform.

Metaprogramming is amazing, though, and languages that allow it, like Ruby, make for amazing DSLs.

The beauty of Ruby is that you only need one metaprogramming call to make a DSL in Ruby. Define all your custom methods in a DSL class, then use instance_exec to run the code using the DSL in the context of an instance of that class. The instance can use instance variables to keep track of any state that the DSL requires. Super-elegant.
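For comparison, a rough Python analog of this instance_exec pattern (Pipeline and task are hypothetical names), with the DSL methods injected as the snippet's globals rather than via an implicit self:

```python
# Rough Python analog of Ruby's instance_exec DSL pattern described
# above. Pipeline and task are made-up names, not a real library.
class Pipeline:
    def __init__(self):
        self.tasks = []

    def task(self, name):
        self.tasks.append(name)

def build(dsl_code):
    p = Pipeline()
    # Expose the DSL method to the snippet, mimicking Ruby's implicit self.
    exec(dsl_code, {"task": p.task})
    return p

p = build("""
task("extract")
task("transform")
task("load")
""")
print(p.tasks)  # ['extract', 'transform', 'load']
```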

For C, gcc -E helps a lot with macros. Does C++ have a tool that expands all templates?

gcc has options for intercepting various intermediate representations of a translation unit; look into that.

I just found this in the man page (for a fairly dated version of gcc):

       -fdump-translation-unit (C++ only)
       -fdump-translation-unit-options (C++ only)
           Dump a representation of the tree structure for the entire
           translation unit to a file.  The file name is made by appending .tu
           to the source file name, and the file is created in the same
           directory as the output file.  If the -options form is used,
           options controls the details of the dump as described for the
           -fdump-tree options.
If this were Lisp, we wouldn't be scared of what "representation of the tree structure" will look like.

It doesn't say anything about whether templates are expanded/instantiated.

> More conventional languages like Java can scale to a massive degree because they standardize so many things,

But didn't Java go through the whole Factory/Design Pattern/Deep Inheritance nesting that created a large amount of complexity on big projects? Was that really better than creating a DSL that a newcomer to the team had to learn?

Maybe the reason is that Java was just more popular and more familiar to a majority of programmers. Worse is better often because worse is more popular for historical reasons.

There's nothing stopping you from using Lisp as a transpiler to generate the Java, or different interop glue, for the lowest-common-denominator code everyone will surely understand.

The magic was always that it's an AST all the way down, and you can create and experiment with your own personal semantics and vocabulary, so long as the language is expressive enough to define them in the domain-specific way you are doing development. For example, a language that is aware of what libraries and databases you're tying together could abstract the way they interconnect, compose multiple libraries together without the kludgy plumbing, or autogenerate boilerplate along the way that's required by your build and tests. As long as you generate readable, clean Java or whatever, your coworkers won't care how you got there.

Where would you rank a language like Kotlin in the JVM community, or F# in .NET?

Both allow the massive freedom of something like a Lisp but you get to leverage the massive existing codebases in both environments.

Reason I ask is because I'm just starting to learn Kotlin (or will when the book turns up from Amazon) and I'm curious.

Are you comfortable in Java? If so, you won't need any book; the documentation and Stack Overflow will give you all you need.

Reasonably comfortable in Java though not used it much for the last few years, much more comfortable in C#.

I decided to go the book route because I've found that, when learning a new language, it works better for me: it gives a structured walkthrough of the language rather than a piecemeal one, and it's more time efficient.

YMMV but for me you still can't beat a good programming language book.

Gotcha. Books never worked for me for some reason, only by diving right in to building something can I learn.

Best of luck to you! I've never been more productive than I am with Kotlin.

>Lisp et al are cool toys and small team tools, but you're not going to see any large scale db libraries and parsers and concurrency libraries and web servers

We already have many of those for Common Lisp.

I have yet to see a team of 100 successfully working on the same problem. I've seen large codebases that solve lots of different problems with hundreds of people working on them at the same time, but this isn't the same thing.

My point is, if you are in the second situation, you can very likely split your people into different teams, facing different trade-offs and with a formal (but not static) interface between their code. So if you can get a 10-person team with the productivity of a 100-person team, but unable to scale further, well, you will face the same problem with the 100-person team, and can apply the same kind of solution to both. The only difference is that you are saving 90 people.

I think there's a really important aspect to this that the software industry needs to confront in order to mature.

To some extent - not completely, but at least partially - power and unmaintainability are two sides of the same coin. That is, it is exactly the same properties that allow your amazing macro to boil 15 dimensions of a problem down into a single beautiful one-line piece of code that also make it an unacceptable liability that should never be allowed into a mature codebase [1]. Because if the macro didn't exhibit that amazing subtle complex behavior, then it wouldn't be powerful. But exactly by doing that, it is now something that nobody new can understand.

There is also a fraction of the software "power" equation for which it is a win-win: more power equals less overall non-essential complexity in the system and the burden of propagating the associated knowledge is lower than what is saved.

However distinguishing the above two things is completely non-trivial and if you can't actually do that well you need to fall back on the side of avoiding overly powerful behaviors if you want to have an actual long term, large scale, maintainable outcome.

[1] Obviously this is an overreaching statement. But to a first approximation this is true and the exceptions by definition have to be rare.

I wonder what the difference is between a DSL in Lisp/Smalltalk and a framework or powerful library like React, Rails or Pandas as far as complexity and learning curves go. Is it that the frameworks and libraries are typically maintained outside the team, and if popular, are widely known?

> Is it that the frameworks and libraries are typically maintained outside the team, and if popular, are widely known?

Pretty sure this is it. A framework developed in the company can be a horrible thing; and avoiding it may be politically impossible, because someone important has approved spending a ton of money on developing it.

The proposition that "custom development tooling doesn't scale" is very much news to Microsoft, Google, Facebook, LinkedIn, etc.

All of them have custom programmable configuration stacks, meta languages for their environments, and custom application harness instructions.

So why don't those count as counterexamples? It seems like a just-so story.

Those companies are so high-profile, with so many people wanting to work for them, that they can do what they want, and the new hires will keep coming.

But in an obscure company, especially one that's not very well funded, it may make sense to use a blub language so the company can attract programmers that it can afford. Of course, using a less powerful language will limit the productivity of those programmers. But having two or three moderately productive programmers is probably better than being dependent on one bipolar [1] programmer.

[1]: http://www.marktarver.com/bipolar.html

... My experience with those companies, and I've been employed by more than one: the decisions to make those special sauce systems predate their fame in every case.

So no, and I don't get why you're now trying to graft programming languages you don't like onto a medical diagnosis.


My reference for "bipolar" was an article written by someone who applied that term to programmers writing in a family of languages he does like. His description might even apply to me. I've never done any serious Lisp hacking, but I wrote large amounts of code in Python and Lua, which are closer to Lisp than they are to, say, C. In truth, the second paragraph in my earlier comment was me thinking out loud on why it might have been a mistake for me to use those languages in the context where I did. Maybe it didn't belong in a reply to your comment.

> of doing things at the last minute but still doing pretty well at them.

> Another feature about this guy is his low threshold of boredom. He'll pick up on a task and work frantically at it, accomplishing wonders in a short time and then get bored and drop it before its properly finished.

> But brilliance is not enough. You need application too, because the material is harder at university. So pretty soon our man is getting B+, then Bs and then Cs for his assignments. He experiences alternating feelings of failure cutting through his usual self assurance. He can still stay up to 5.00AM and hand in his assignment before the 9.00AM deadline, but what he hands in is not so great.

That linked article is literally just describing undiagnosed ADHD, which yes, can affect otherwise successful students (of which I was one). Usually it's a form of Primarily Inattentive ADHD: https://en.wikipedia.org/wiki/Attention_deficit_hyperactivit... [1], which doesn't present as hyperactivity but instead as inner restlessness.

Teenagers with this variety of ADHD go undiagnosed at significantly higher rates than Hyperactive ADHD. Those with the hyperactive variety are easily noticed due to their tendencies to run around rooms, act out, jump out of their seats at inappropriate times, and so on. But the Inattentive kids suffer due to the existence of just this very stereotype -- the smart but bored kid. For some reason, with ADHD-H we're able to say, "This kid isn't following the norms in school because of a cognitive deficit," but when it comes to ADHD-PI people think "This kid isn't following the norms in school because he's just so over it, man."

Meanwhile, these kids are often tortured by their inability to work as hard and as consistently as they want to. Everyone in my life just assumed for me that I was "so smart that I was just bored in school, no challenge!", but meanwhile I was depressed from about the age of 10 by my complete inability to succeed in school to the extent I wanted to. I loved school, and was depressed by my inability to pay attention and work harder.

These students were able to succeed in high school because the material is mostly accessible via some amount of common sense. I was able to ace a lot of objective (multiple choice, etc.) exams through nothing more than logical deduction. But in college, assignments require more discipline over time -- writing long research papers, motivating yourself to work on something over weeks -- which is when the failures start appearing more and more frequently. My GPA went from 4.0 in middle school to 3.7 in high school to 3.2 in college. 3.2 sounds fine to some, but it was the most depressed I've ever been because I felt my life's potential slipping through my fingers.

So rather than having professors write condescending articles over our "bipolar personality" I'd much prefer if we, as a society, could work to truly understand the challenges that these kids face; to look beyond the stereotypes and actually care about their wellbeing.

[1] I used to think ADHD was made up. Why? Because I couldn't understand what was different about kids with an ADHD diagnosis. I would read lists of ADHD symptoms and think, "But that's just normal life! I experience that stuff all day every day, and I don't have ADHD!" It took me until the age of 28, and until my wife was there to help me shake the cobwebs off of my denial, before I was able to have the realization that strongly relating with the ADHD symptoms meant that I had the disorder, not that ADHD was "just normal life."

Not sure how familiar you are with Google code, but ensuring a consistent, readable, uncreative coding style is kind of a big deal at Google.

Ask a Googler what it was like to get "readability", which is a process where you get certified as knowing how to produce code for a particular language up to company standards. You can also see how the value of consistent style influenced the design of the Go language, which is more resistant to metaprogramming than most similarly-popular languages.

> Not sure how familiar you are with Google code, but ensuring a consistent, readable, uncreative coding style is kind of a big deal at Google.

Yeah, I work there. And I'm aware of the lore. I'm also acutely aware of the fact that there are numerous configuration systems that are exquisitely crafted custom snowflakes.

> You can also see how the value of consistent style influenced the design of the Go language, which is more resistant to metaprogramming than most similarly-popular languages.

Yes I know, it is terrible and designed around the worst impulses of Google. I do not believe it solves a real problem, I believe it enables abusive hiring practices.

I agree that Google has many unique internal systems for job management, workflow management, authorization, CI/CD, etc., that have unique configuration and unique behavior that new employees will need to learn. As an SRE you probably spend a lot of time in that world.

Seems like the linked article was more on the topic of programming languages, though, not configuration management. I don't think a non-Googler would have any trouble reading and understanding Google internal code in a programming language they're familiar with.

Exceptions might be pre-TypeScript JS codebases using the closure compiler annotations, old ndb Python code if there's any of that left, or Java engineers who haven't used dependency injection before.

We are traveling down this road. We've implemented Clasp (https://github.com/clasp-developers/clasp.git) - an implementation of Common Lisp that interoperates with C++ and uses llvm as the backend. On top of this, we built Cando (https://github.com/drmeister/cando.git) - a computational chemistry programming environment for designing large molecules. The end of our story hasn't been written yet.

Clasp could be used for a lot of other things.

To add to cutting-edge (& open source) scientific applications, Rigetti Computing uses Common Lisp for their quantum optimizing compiler [1] and high-speed quantum simulator [2].

[1] https://github.com/rigetti/quilc

[2] https://github.com/rigetti/qvm

And one of the most serious and cutting edge graph databases, Allegro Graph, is a Common Lisp software.

The engine that does the core stuff at Grammarly is also a CL program.

DrMeister is an example of how an advanced hacker uses cutting edge tools to get things done.

The poster makes a point about conservatism in large companies, but doesn't explore this further. It's actually a more complicated matter than just technical ignorance in managers. The real problem is that programming languages are seen as a tactical matter for project managers ("just use the best one for the job"), whereas they should be considered strategic.

* At the start of a project the PM is given a budget and a timescale which were probably driven by the question "what's the earliest you can't prove you won't be done by?" There will be no slack for developers to spend 6 months coming up to speed on a new programming language and paradigm.

* Nor can the PM afford the career risk of trying something new. If you do it the standard way and it doesn't work, the decision to do it the standard way is obviously not to blame, and the search for blame can be pushed elsewhere. But if you pick something novel and it doesn't work, the blame for any failure will always focus on the novel element. So there is a strong institutional pressure to keep doing things the same way to avoid the blame for failure.

* Once a piece of software is finished it has to be maintained. Most software expenditure actually goes on this phase. If you use a new programming language then you are going to have to keep a bunch of programmers around who speak that language just for that project, even if there are no changes needed this year. That is expensive, so this is another disincentive for trying a new language.

* Every technical manager has seen enthusiastic young programmers bearing some new silver bullet (see Fred Brooks) which, the youngster says, will solve all the problems at a stroke. After you have seen half a dozen of these they all start to look the same. There is also concern about "CV engineering": getting the company to take on the risk and cost of training employees in trendy technologies which won't provide real benefit but which will make those employees more likely to find a higher paid job somewhere else.

If programming languages were considered as a strategic matter then a large company could make a decision to invest in tooling, training and increased project risk in order to adopt a new language which would make them more productive in the long term. But until then change will be forced on the industry by small companies who beat the averages.

I enjoyed the build-up throughout the article to his conclusion. While I can't prove him wrong, a lot of very smart people came up with languages like Java and C# to solve these problems (ramp-up, maintainability, retention, and more) as efficiently as possible. While I salute the author in his quest to be proven wrong, objectively, everything is a trade-off. I don't believe in better.

> These languages are very powerful, but they don’t scale.

The problem, it seems to me, is that software developers often think: "what programming language do we really like && can continue to use for $long_time over $scaling_of_business".

When this is the case, business-management expectation problems emerge regarding costs, maintenance, and timing when the business scales by 10 or 100x. Early-phase teams are lambasted for making poor decisions, and "adult supervision" (I hate that term) is hired to "fix things".

Thing is, the early "poor decisions" may not have been poor decisions at all relative to the business needs. However, since costs and retooling issues weren't discussed and planned for, the early team looks like a bunch of fools/amateurs.

As software developers, we need to understand and communicate that the tools we use to meet the needs of the business at one stage are different from those of a later, high-scale stage. It's OK, completely natural, and a requirement if we want to keep our jobs and grow with the business.

Analogy: if a specialty carpenter is designing and building high-end furniture with a set of well-made, brand-name niche tools and techniques, they wouldn't expect those same tools to be used if sales take off and 10x or 100x the number of units need to be produced. It would be understood that retooling would be required, that people with expertise in those tools would be hired, and that there would be new costs involved. Also, the carpenter who starts the business with small-scale tools isn't lambasted as an idiot for picking those tools in the early phases. It would be quite odd to suggest that that person should have picked tools and hired people for 100x production when they were selling single-digit units a year.

The question is what the intention was. A carpenter doing models for IKEA would be expected to design for million-unit volumes from the start.

Different problem here: if the production process and scale are already well known and understood, then arguably there are only design choices to be made that fit into the existing tooling.

"the new chair model must fit into these production constraints"

"the new software must be written to fit into this system - JVM based, web interface ..."

The real problem is that there's a missing step:

  1. Create a language to solve your problem
  2. Solve your problem
  3. Document the language
Presuming that the original creators made a reasonable language for the problem at hand, who wouldn't want to use a powerful, well-designed language with good fitness for purpose?

Now imagine that you were brought onto a team using a widespread (not homegrown) language you didn't know that was used for the company's main application, but you have no access to documentation.

I find that I almost never read documentation in languages like Common Lisp and Clojure for two reasons:

1) documentation is often unmaintained and reflects the state of the library at some point in time, but not the current API. But...

2) even when there is good documentation, it’s often easier to just take a code example and then use the excellent jump to definition and other tooling to understand how the example works than it is to read the definition: in part, because there’s always a repl available in these languages and it’s much easier and quicker to run mini-experiments on the language at the repl than it is to read the documentation and translate it to code.

I think Lisp’s “the source is the documentation” (with comments and doc strings) is a huge boon. I usually figure out how something works faster and more accurately than, say Ctrl+F’ing some Python Sphinx documentation.

>I think Lisp’s “the source is the documentation” (with comments and doc strings) is a huge boon.


At first I thought "Damn, there is little documentation on those Lisp libraries", but as the months passed, I understood that the code was often very readable, and that documentation strings being integral to the language is also a good thing.

My point was about documenting the home-grown domain-specific language. Do you think that there, too, it would be more useful to just read definitions? Doing mini-experiments might not be so simple, since they could depend on other parts you have yet to look up, etc.

My experience is that reading source code is often an easier way to figure out how something works than reading documentation: especially in an environment with good jump to definition and a language that doesn’t impose very much boilerplate and meaningless syntax on your code.

In a similar vein, I’d rather see your test-cases, nicely formatted, than your documentation: you can execute a test case to check that it still is true; you cannot execute your documentation.
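One mainstream mechanism for exactly this idea of executable documentation, shown here in Python since the thread already mentions it (the function itself is a made-up example): doctest turns the examples in a docstring into checks you can run, so they can't silently drift out of date the way prose can.

```python
def slugify(title):
    """Turn a title into a URL slug.

    The examples below are both documentation and test cases:

    >>> slugify("Hello, World!")
    'hello-world'
    >>> slugify("  spaces   everywhere ")
    'spaces-everywhere'
    """
    import re
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

if __name__ == "__main__":
    # Running the module verifies every example in the docstrings.
    import doctest
    doctest.testmod()
```

If someone changes slugify's behavior, the "documentation" fails loudly instead of quietly becoming a lie.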

I agree that learning from examples is very powerful. Good language documentation, in addition to defining what each construct does, also provides examples, which should be more concise than random ones found in a codebase.

> My theory, completely made up and untested, is that these powerful Lisp/Smalltalk systems weave code and brains together in a way that amplifies both. It’s very organic, very hard to document, and thus it becomes very hard to absorb new brains into the system. <

Here's a simpler and less mystical theory. These programming systems are fundamentally image-based, and much of their power comes from the coder's ability to leverage all aspects of the meta-system. However, the more the core image evolves, the easier it is for a team to get out of sync. This is the ultimate namespace clash.

A slew of other highly-productive languages share this feature, e.g., APL and Forth.

Are teams using these languages small because the language doesn't scale well or because you don't actually need more developers?

> 1. Create a language to solve your problem;

> 2. Solve your problem.

But don't we do this in every language, at least if we're doing a good job?

I remember the final "exam" of my high school CS class. It was mostly an in-class programming exercise in Pascal.

I was the only one to finish in less than the allotted time. The first one to finish after me did so after the second extension period.

I have to admit while I was obviously inordinately pleased with myself, I was also baffled, because the other people in the class were certainly just as smart as I was.

So what was the difference?

The problem we were given was to draw a bunch of boxes with character graphics, with contents and some overlap. Not really hard. Everybody else coded up each box individually with straight code, one after the other, each time having to handle all the edge cases (literally).

I was the only one who first created a procedure (or set of procedures) for box drawing, and then used those procedures to draw the boxes. And yes, that is very much a simple "language" for solving the problem. Of course Pascal set some pretty tight limits on how your "language" could look, which is a bane for expression and a boon for understanding.

So I did "define the language, solve the problem in that language". In Pascal. Classic bottom-up programming.
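In today's terms that little box "language" might look something like this (translated into Python rather than the original Pascal, with all names made up):

```python
def make_canvas(width, height):
    """A blank character-graphics canvas."""
    return [[" "] * width for _ in range(height)]

def draw_box(canvas, x, y, w, h, label=""):
    """Draw one box. The edge cases live here, once.
    Later boxes simply overwrite earlier ones, giving overlap."""
    for i in range(w):                      # top and bottom edges
        canvas[y][x + i] = "-"
        canvas[y + h - 1][x + i] = "-"
    for j in range(h):                      # left and right edges
        canvas[y + j][x] = "|"
        canvas[y + j][x + w - 1] = "|"
    for k, ch in enumerate(label[:w - 2]):  # contents, clipped to fit
        canvas[y + 1][x + 1 + k] = ch

def render(canvas):
    return "\n".join("".join(row) for row in canvas)

# The exam answer then shrinks to a few declarative calls:
canvas = make_canvas(20, 6)
draw_box(canvas, 0, 0, 10, 4, "A")
draw_box(canvas, 6, 2, 10, 4, "B")  # overlaps the first box
```

Each box is now one line of "vocabulary" instead of a fresh re-derivation of all the edge cases.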

So I dispute that there is a fundamental difference, though there definitely is a significant difference in degree. The difference with high-power languages is that they allow us to apply this much more broadly, whereas with other languages you often run into technical limitations that prevent you from really defining the right language.

With there not being a fundamental difference, I also don't buy that we have a fundamental, unbridgeable divide/chasm/whatever, with these high-power languages and their problems on one side and the "blub" languages with their other problems on the other. Rather, let's explore those regions of the language space and see if we can't narrow the gap.

"But don't we do this in every language, at least if we're doing a good job?"

There is a difference between expressing higher level abstractions using a language and actually morphing your language in way that you can directly express those abstractions in it.

The latter is often impossible, if not because of technical limitations then due to cultural barriers. More than once or twice I've worked on a team where stepping outside the official language syntax dogma was treated as a punishable crime. I've seen things as innocent as "from x import y" (in Python) being discouraged, and even using the dict function instead of {}.

People don't generally feel happy about seeing unfamiliar things in what they consider their comfort zone. I don't know what to do about it.

Boy is that ever the case! The lengths to which people will debate spacing after method declarations, while remaining completely ignorant of the architectural issues that are rotting their code-base.

> comfort zone

Tentative essay title: Can Programmers be Liberated from the Gentle Tyranny of Call/Return?

> actually morphing your language

Yes. And no. First let's have a look at what "morphing the language" means, by way of natural language. When we use natural languages, we can mostly adequately describe anything we want to using the mechanisms available. Sometimes, we need a new vocabulary, sometimes a set of interlocking vocabulary, i.e. jargon. I haven't really seen a case where we need to tinker with the structure of the language, with the grammar.

When we add a function, set of functions, interlocking objects/methods etc. I would say that we are also adding vocabulary. And that should be sufficient. The fact that it is not sufficient to me suggests that our fundamental "grammar" (not grammar of the surface syntax) is insufficient, not that we need to be able to make up grammar on the spot every time.

Of course making up grammar on the spot also solves the original problem, but it does lead to all these follow-on problems. One being that making up grammar on-the-spot is much more complex than just adding vocabulary, and so you are likely to get it horribly wrong. The other being comprehensibility.

metaprogramming allows more compression of your code than functions alone, sometimes by huge factors
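A mainstream, low-powered example of that compression is Python's @dataclass: one decorator generates, at class-creation time, the __init__, __repr__, and __eq__ you would otherwise write and maintain by hand.

```python
from dataclasses import dataclass

# The decorator inspects the annotated fields and synthesizes the
# constructor, repr, and equality methods for us.
@dataclass
class Person:
    name: str
    age: int

p = Person("Ada", 36)
print(p)                        # Person(name='Ada', age=36)
print(p == Person("Ada", 36))   # True
```

Lisp macros and Smalltalk's meta-facilities let you build this kind of generator yourself, for your own domain, rather than waiting for the language vendor to ship it.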

Yep. I named my company ‘metaobject’ after The Art of Metaobject Protocol. 20 years ago.

Metaprogramming is powerful. Can we make it more practical so we can reduce or eliminate the chasm Cees writes about?

I’ve actually just bought this book and plan to read it soon. Any advice on how to get the most out of it (order of reading, other papers/books to check out, etc.)?

Any resource recommendations?

Alas, not much at this point.

My basic starting point is architectural connectors as the basis for "metaprogramming".

So bake support for creating and adapting architectural connectors into the language, and then you can do most if not all the things you want with metaprogramming, while at least discouraging the things you don't want.

Key is to provide a set of adaptable connectors as a basis, so you guide good meta-design (and reduce its necessity) by providing good examples, rather than enforcing it.

Working on a language, tentatively called Objective-Smalltalk: http://objective.st/About/

The idea is pretty simple: concise languages (like Python, Smalltalk, etc.) work well when you can get away with being concise. Verbose languages (like the C family) work well when you can't. It only takes a couple of tries writing a project that favors verbosity in a concise language before this becomes apparent.

Languages like C allow you to control more (low-level) aspects of a program than languages like Ruby. Ruby is only concise because it handles tons of things for you, the way it sees fit.

You can make C++ quite concise, or quite verbose, depending on the level of detail you can hide under layers of abstractions.

It's much like frameworks vs. sets of libraries: a framework allows you to write very little and produce a lot when all you're doing is wiring together existing pieces and writing business logic in the slots provided for that. Similarly, Ruby allows you to write much less than C if all you're doing is wiring together hashtables of hashtables of hashtables and relying on garbage collection.

The point of this article was more about writing DSLs than just conciseness.

> So why aren’t we all using either of these two languages?

because $bigcorp's dreams with languages such as Java was not to be incredibly productive, but to be able to fire a whole team, hire replacements, and have them be more-or-less productive on day 1. Can't do that if every project lives in its own DSL.

This is both really wrong factually and really cynical to a degree that is also wrong. This may be _some_ people's dreams, but the reality is that this is not the foundation story of Java, at all. Java's popularity had more to do with its portability (you could write your code on your Windoze box and then deploy it to SunOS or Linux with a reasonable expectation of it working just fine) and its safety (it was a marked improvement over C) than on the dream that it would be the language that allowed for offshoring your team.

In fact, amusingly, I'd argue Golang is more explicitly designed to be able to bring new developers up to speed rapidly and not requiring them to be language experts to be productive. This has direct impact with respect to being able to hire developers, possibly en masse. And hey, whaddaya know, it was dreamed up by a $bigcorp way bigger than Sun was back in the day.

The reality tho is that $bigcorp just wants to be able to write more code, not replace the coders they already have.

Even in a lower-level language, is it possible to be productive on day 1? Instead of learning an explicit DSL, you're learning an implicit one spread across 10 or 100 times as much code.

Imagine if regexes didn't exist, and you showed up at a new company and they had hundreds of lines of Java for every little string matching function. Would anyone find that more productive? Even if regexes were never used outside of that one project, I think they'd be a huge win.
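To make the thought experiment concrete (in Python rather than Java, with a deliberately simple pattern and made-up function names):

```python
import re

# With the regex DSL: essentially one line.
def is_version(s):
    return re.fullmatch(r"[0-9]+\.[0-9]+\.[0-9]+", s) is not None

# Without it: the same check hand-rolled. Even for this toy pattern the
# code balloons; realistic patterns would each take dozens of lines.
def is_version_by_hand(s):
    parts = s.split(".")
    if len(parts) != 3:
        return False
    return all(p != "" and all(c in "0123456789" for c in p)
               for p in parts)
```

The DSL costs a newcomer an afternoon to learn once; the hand-rolled alternative costs every reader time on every function, forever.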

Especially at a bigco. My experience is you spend a while just getting all your accounts set up. Half the time, I haven't even known the programming language they used, so I spend weeks/months ramping up on that, even though it's not domain-specific. Spending a couple days learning a new DSL is just rounding error.

I'm not saying that this idea is grounded in reality :)

There is a less cynical version of this that is closer to the truth. Because $bigcorp is big there is proportionately much more money at stake if software suddenly becomes unmaintainable. There are various forms of insurance you can take out against that, but one is definitely "Yes I could pay a really expensive consulting company to front up 5 developers within 24 hours that could diagnose a problem in my code the day after my 3l1te dev team lead quit to join $startup". I am betting you would think this way too if you were in charge of such a team / product. It is a completely logical form of risk management.

I posted a reaction on the site of the writer of the blog, but it looks like he removed it, so I'll repeat it here.

The text lacks structure and clarity, and the writer makes the same assumption as Paul Graham, namely that a programmer thinks in the programming language he uses.

To start with the last point: "to write a good program, we need to think above the code level. Programmers spend a lot of time thinking about how to code, and many coding methods have been proposed: test-driven development, agile programming, and so on. But if the only sorting algorithm a programmer knows is bubble sort, no such method will produce code that sorts in O(n log n) time. Nor will it turn an overly complex conception of how a program should work into simple, easy to maintain code. We need to understand our programming task at a higher level before we start writing code. [..] There are two things I specify about programs: what they do and how they do it. Often, the hard part of writing a piece of code is figuring out what it should do. Once we understand that, coding is easy." (Leslie Lamport, https://archive.li/2zZKy)

Sentences in de Groot's blog like "these powerful Lisp/Smalltalk systems weave code and brains together in a way that amplifies both" are incomprehensible. The question at the end of the blog looks rhetorical to me: "I think I’ll end while hoping that someone out there on planet Earth reads this and proves me wrong :)". It is not clear what should be proved wrong; I suspect the writer is satisfied with his musings.

> A lot has been written about "hyperproductive teams". ... there’s one thing in common between all the stories I’ve heard and the successes experienced first hand: small teams

Funny, because I don't think it's small teams or a particular programming language that leads to great productivity. You don't have to look very far, or far back, to see teams of various sizes pulling off some amazing stuff with assembler and C. The key to productivity is a lot simpler, and independent of software development: it's a sense of responsibility, interest, and dedication.

In a small team that is using Smalltalk, the responsibility might come from being on a small team with a lot of lifting to do, the interest might come from having fun using a language that you like (regardless of the language), and the dedication might come from the promise of getting in early on a promising venture - hence the small team. All of these things add up to hyper-production.

If you can establish those three things, somehow, with a large team, I don't see why hyper-production can't be achieved. Unfortunately, modern project management for large teams seems to rely heavily on techniques that rob developers of all three qualities mentioned above, in the name of so-called "agile" and in pursuit of some fictitious software assembly line.

I agree with the OP. This has been called the Lisp Curse elsewhere. Over time, your codebase becomes so high-level and full of advanced, domain-centric abstractions that it basically alienates everyone else from working effectively on the project.

If you have a small, stable team, such an issue might be relatively easy to overcome. But with bigger teams and higher turnover, it might totally cancel out any technical advantage gained from using such non-mainstream programming languages.

I think what really makes those languages/teams productive is iteration time being low. Part of that is a natural consequence of small teams and lack of legacy code but certain languages definitely encourage it more than others. If you have fast iteration times you can really quickly figure out a system by experimenting, or work through bad ideas on the way to good ones.

For whatever reason software developers always seem to undervalue fast iteration times, and the less productive languages always seem to make iterating quickly harder, whether it’s C++'s ghastly compilation times and lack of a stable ABI making it hard to integrate libraries, or Java’s interdependent, class-based nature causing excessive mocking and/or lots of generally unrelated state being required to have a working object to test with. In contrast, Smalltalk lets you just highlight a chunk of code and run it, and the lack of separation between runtime and IDE generally means you have enough of the system live for things to work. I think where Smalltalk doesn’t scale is that it can’t easily take advantage of things invented outside of it, because it’s so integrated it's practically its own OS.

I’m not sure I buy this: I recently had to write a bit of ruby (an omniauth strategy for my company’s OAuth2 server) and the code I encountered there (as well as in the various rails projects I’ve worked on) may as well have been written in macro-heavy lisp: in fact, lisp or smalltalk would be preferable because they would have good source navigation tools for the macros, such as a stepping macro-expander.

Ruby supports a lot of metaprogramming, and some libraries and frameworks take advantage of that. It's not the best counter example to Lisp/Smalltalk.

My point is that a fairly popular platform that large teams use has nearly as much metaprogramming in its widely used libraries as any Lisp or Smalltalk codebase I’ve seen. So, I don’t think that the tendency of Lisp programs to evolve DSLs is incompatible with large teams.

That's a very good insight here on large/small teams.

Although I typically hate discussions of "programming languages are like human languages" I think it's the key here. All the team members need to share and be comfortable in not just a spoken language, but the teams local jargon, and the programming language in use (including your team-specific conventions, which form a kind of jargon too). So teams in these esoteric languages (and I say this as a Lisp hacker myself) tend to be by definition small, have a lot of cultural overlap, and can be hyper-effective. And small teams can be hyper productive/effective because of the lack of coordination overhead and because it's easier to have a high 1st derivative from a small starting point (in other words: no large team can beat a small team in efficacy/person, but small teams can't do everything).

That only answers half the problem; the other is: well if small team X can beat small team Y in significant part because of the choice of tools, why don't bigger teams use the same tools?

There's no easy answer. One is power: Lisp and C++ have too many degrees of freedom so it's easier to get in to trouble with languages with fewer "guard rails". This was an explicit design criterion for Go BTW: new grads are coming here and screwing up, so let's make it hard for them to make the most common/dangerous mistakes.

Second, in the case of Lisp: it wasn't considered a deployment language, as many of the things that made it powerful (e.g. GC, lambda, dispatch, dynamic polymorphism) were considered too expensive at a time when C and smaller machines were taking off. Now many of those features are considered mainstream in other languages, while the Lisp community itself therefore attracted people who were more power users.

C++ had a driver (Windows) and an impetus (like Go's: C gets you into a lot of trouble), but even then people looked for easier / safer languages (e.g. Java, an explicit reaction to C++).

> I can recite many of such stories; of people keeping up with their ten person team by coding whatever the team did during the day in 2-3 hours at night using Squeak Smalltalk; about an old Lisp hacker working in a research setup with mostly Java colleagues and always estimating to need 10% of the time (and then actually making the estimate). So why aren’t we all using either of these two languages?

Maybe it isn't about the language, primarily, but about the team? Maybe those super productive teams would have been successful with any language that gave them enough freedom?

That would also explain why you can't switch your 20 Java developers to Smalltalk, and have them perform miracles of software engineering.

Maybe this is part of the benefit of the whole micro-service trend? I've largely dismissed the idea until now as impractical in many cases, but this seems like a case where a team of three to six developers could work in something like Lisp (which may not scale well), expose an API, and communicate with the other parts.

It seems like we really need cross-language OOP for this reason; MS COM might be an example. I was reading something yesterday about reference counting, and someone commented that things like MS Office (COM-based) needed exactly this: an object could be initialized in one language, passed to another, and deinitialized in a third.

> Hmm, maybe this is part of the benefit of the whole micro-service trend?

I think micro-services can actually make this worse.

The issue is that the early coders write macros and other tooling that makes it harder for future coders to understand. In other words, they develop their own service-specific language.

With one big service, you would expect a somewhat consistent set of practices. So, one language for the entire service.

With micro-services you can have a different language for each service. Worse, each language might be only slightly different.

If you end up with a consistent language across all of your micro-services, then you’re back to the original problem.

(I’ve experienced this effect first-hand. If I’m not disciplined I’ll do a bunch of meta programming that makes me _way_ more productive, but makes it much harder for another programmer to understand.)

This could make an interesting case for a "hybrid" Clojure/Java codebase. Write focused modules in Clojure, exposing strong-typed APIs that are then stitched together in Java.

One downside of Lisp and Smalltalk for large projects may be that there is no official mechanism for declaring types and interfaces. Even if you adopt your own conventions for indicating types, they are just your conventions, and other programmers might have difficulty grokking and trusting them.

Hyper-productive languages may be hyper-productive precisely because they use dynamic typing, while static types may be better for creating a large piece of software, built by many programmers, that evolves over years and hopefully decades.
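Python's optional annotations illustrate the middle ground being gestured at here: dynamic by default, but types can be declared wherever the team wants the contract pinned down, checked by an external tool such as mypy rather than by the interpreter (the function below is a made-up example):

```python
def median(xs: list[float]) -> float:
    # The annotations document intent and let a checker like mypy verify
    # call sites; at runtime the interpreter ignores them, much as a
    # lenient Common Lisp implementation may ignore optional declares.
    s = sorted(xs)
    mid = len(s) // 2
    if len(s) % 2:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2
```

So a codebase can start fully dynamic and have types added only where many programmers need to trust an interface.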

>that there is no official mechanism for declaring types and interfaces.

This is not true for Common Lisp.

Want to declare types? "(declare (type ...))"

Interfaces? CLOS allows "generic functions", which can achieve the same as interfaces, and are light years beyond the typical OOP system.

I wasn't aware of Common Lisp type declarations, thanks. But they are optional (right?) so in practice if they are not there they do little good.

Could you tell a bit more about how generic functions do the same thing as Interfaces? Can you use them as part of a type-declaration? Say like a Java Interface can be used as a Type in a type-declaration of a method.

> But they are optional (right?) so in practice if they are not there they do little good.

That depends on the compiler. The SBCL compiler will treat them as type assertions, perform type inference, and check them at compile time.

>But they are optional (right?) so in practice if they are not there they do little good.

Modern implementations like SBCL also do type inference and report type errors at compile time.

CL is also a very strongly typed language so type errors are going to be caught by the implementation sooner or later.

Not that I care, though. Type mismatch errors are my least concern.

>Could you tell a bit more about how generic functions do the same thing as Interfaces?

You'll have to read the online chapter about Generic functions of the "Practical Common Lisp" book, available online for free and already a classic. Common Lisp Object System (CLOS) cannot be explained in a few words.

>Can you use them as part of a type-declaration?

Object classes can be used in type declarations.

>Say like a Java Interface can be used as a Type in a type-declaration of a method.

CLOS goes beyond and is even more powerful than what Java offers. See "multimethods" and "polymorphic dispatch".

CLOS makes Java's OOP look like a Little Tikes car next to an actual Formula 1 car. And I say this as a person who worked as a paid Java developer on "serious" stuff for years.

> (CLOS) cannot be explained in a few words.

No doubt. I was just curious how "generic functions" in particular could help in defining types and interfaces. I was not asking for the whole story about the whole of CLOS.

>I was just curious how "generic functions" in particular could help in defining types and interfaces. I was not asking for the whole story about the whole of CLOS.

I understand.

For something similar like a Java interface, you could define generic functions for each method your interface declares. For example consider interface IStream with methods open(), close(), sendbytes(b) and readbytes(b)

In Lisp you could declare the following generic functions: open(x), close(x), send(a, b), and read(a,b).

Then you simply define methods for any kind of (concrete) stream. Moreover, "send" and "read" could be defined to send/read not just bytes, but any other kind of object. Or you could define a "send" and "read" that works from stream to stream. For the latter, in Java you would have to either redefine IStream, extend IStream, or create another class specialized in copying from Stream to Stream. In Lisp you just define an additional method for the generic function send(a,b) where a and b are both streams.

The CLOS system will take care of calling the correct method. If there isn't a correct method this would signal an error: "No applicable method".
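For anyone more at home in Python, here is a rough sketch of the dispatch-on-all-arguments idea (functools.singledispatch only dispatches on the first argument, so this uses a hand-rolled registry; all names are invented for illustration):

```python
# Minimal multiple-dispatch sketch: a registry keyed on the classes of
# both arguments, loosely analogous to a CLOS generic function SEND(a, b).
_send_methods = {}

def defmethod_send(type_a, type_b, fn):
    _send_methods[(type_a, type_b)] = fn

def send(a, b):
    # Walk both inheritance chains looking for the most specific method.
    for ta in type(a).__mro__:
        for tb in type(b).__mro__:
            fn = _send_methods.get((ta, tb))
            if fn is not None:
                return fn(a, b)
    raise TypeError("No applicable method")  # mirrors the CLOS error

class Stream: pass
class FileStream(Stream): pass

defmethod_send(Stream, bytes, lambda s, payload: f"wrote {len(payload)} bytes")
defmethod_send(Stream, Stream, lambda src, dst: "stream-to-stream copy")

print(send(FileStream(), b"abc"))        # finds the Stream+bytes method
print(send(FileStream(), FileStream()))  # finds the Stream+Stream method
```

A real CLOS implementation also handles method combination, :before/:after methods, and eql specializers; this only shows the core "dispatch on every argument" trick.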

Ever heard of CLOS, the Common Lisp Object System? It has 'classes', 'generic functions' with 'methods' which support multiple dispatch over classes, etc. It was defined in the late 80s and integrating it into Common Lisp made it the first standardized OO language.

   (defclass person ()
     ((name        :type string)
      (age         :type (integer 0 150))
      (employed-by :type company)))

   (defmethod rename-person ((p person)
                             (old-name string)
                             (new-name string))
     ;; body sketch: update the NAME slot
     (setf (slot-value p 'name) new-name))

   (rename-person some-person "Jim" "James")

I've heard of it but not used. Being able to say: "(name :type string)" looks great.

Does it mean the system gives me an error if a method is called with a number where a string was declared to be the argument type?

At runtime every CLOS implementation will do that. Note that it allows for more than one parameter to be specialized. In the example FOO is using multiple dispatch with two parameters P and S. When we call FOO with arguments for which no method exists we get:

  CL-USER 22 > (defclass person ()
                 ((name        :type string)
                  (age         :type (integer 0 150))
                   (employed-by :type company)))

  CL-USER 23 > (defmethod foo ((p person) (s string))
                 (list p s))

  CL-USER 24 > (foo (make-instance 'person) 3)

  Error: No applicable methods for #<STANDARD-GENERIC-FUNCTION FOO 4060000B5C> with args (#<PERSON 402025F1DB> 3)
    1 (continue) Call #<STANDARD-GENERIC-FUNCTION FOO 4060000B5C> again
    2 (abort) Return to top loop level 0.

  Type :b for backtrace or :c <option number> to proceed.
  Type :bug-form "<subject>" for a bug report template or :? for other options.

  CL-USER 25 : 1 >

Note to galaxylogic: When lispm wrote:

     (defmethod foo ((p person) (s string))
                     (list p s))
He was implicitly defining a "generic function" "foo" of two arguments.

Defmethod was defining an implementation of the generic function foo for arguments person with string.

One could later put something like:

     (defmethod foo ((p1 person) (p2 person))
                     (send-greeting p1 p2))
And thus define a "foo" that works on two persons, etc.

CLOS allows specializing not just on classes, but also on specific instances, so some method applies only for a specific instance. For example a "foo" that applies only to a particular "important" person.

I don't buy that the "privilege" of

> 1. Create a language to solve your problem;

> 2. Solve your problem.

belongs mainly to Lisp and Smalltalk. Each time you define a method or a class (C#, Java, C++), you're extending and defining your "language".

I'm the leader of a 2.5-developer startup; we use C++, Java and C# for different purposes, and I have not once thought "shit, we'd be so much more productive if we used XXX instead of C# or Java" (C++ being the exception). The real problems are around the business domain: understanding customer needs, wrapping our heads around Azure AD, etc, etc, etc.

The programming language has ZILCH to do with our challenges and I do not feel hampered by PL during refactoring either -- thanks to static types.

I have some regrets, and these go against frameworks we decided to use (EFCore, i.e., ORM, being the biggest regret of mine).

Programming language? Almost not a factor. I'm happy with all of the 3 we use, though I'm least productive in C++ due to the amount of ceremony needed -- header/implementation files, linkage, fixing preprocessor mess by variations of PIMPL (looking at you windows.h), slow compilation, etc, etc, etc.

If anything is indispensable for our productivity, it's a good IDE and becoming comfortable with its capabilities.

> Each time you define a method or a class (C#, Java, C++), you're extending and defining your "language".

One mostly adds only new words to these languages: nouns and verbs. With Lisp you can add new syntax, as a user, as part of the program. That makes it possible to do code transformations at compile time as part of a program, which in turn makes it relatively straightforward to generate more complex code from simpler, descriptive code.
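Python has no compile-time macros, but the flavor of generating more complex code from a simpler descriptive form can be approximated at runtime; a minimal sketch (the name defrecord and the field-spec shape are invented for illustration):

```python
def defrecord(name, fields):
    """Generate a class with a type-checking __init__ from a descriptive
    {field: type} spec. A Lisp macro would expand to similar code at
    compile time; here the class is built at runtime instead."""
    def __init__(self, **kwargs):
        for field, ftype in fields.items():
            value = kwargs[field]
            if not isinstance(value, ftype):
                raise TypeError(f"{field} must be {ftype.__name__}")
            setattr(self, field, value)
    return type(name, (object,), {"__init__": __init__, "_fields": fields})

# One descriptive line expands into a whole checked class.
Person = defrecord("Person", {"name": str, "age": int})
```

Person(name="Jim", age=30) constructs an instance, while Person(name="Jim", age="old") raises a TypeError, mirroring the declarative :type slots in the CLOS example above.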

> With Lisp you can add new syntax

"With Lisp you can have any syntax, as long as it looks like Lisp."

Unless it doesn't:

    CL-USER 14 > (loop for i from 0 below 10
                       sum i into isum
                       when (evenp i) sum i into evensum of-type integer
                       else when (oddp i) sum i into oddsum
                       finally (return (* isum evensum oddsum)))

Arguably that still looks like Lisp (i.e. it's a valid S-expression). But then you also have reader macros...

Well, s-expressions are not Lisp. Not every s-expression is a valid Lisp program.

(1 * 2 ^ 3 + 1) is a valid s-expression, but not a valid Lisp program. With a corresponding INFIX macro, (infix 1 * 2 ^ 3 + 1) could be one.
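What such an INFIX macro has to implement, an operator-precedence parse, can be sketched in Python as a runtime evaluator over a flat token list (names are illustrative; a real Lisp macro would do this rewriting once, at compile time):

```python
def infix(tokens):
    """Evaluate a flat infix token list with precedence ^ > * > +,
    using a shunting-yard style pass (illustrative sketch only)."""
    prec = {'+': 1, '*': 2, '^': 3}
    apply_op = {'+': lambda a, b: a + b,
                '*': lambda a, b: a * b,
                '^': lambda a, b: a ** b}
    values, ops = [], []

    def reduce_top():
        # Pop one operator and combine the top two values.
        op = ops.pop()
        b, a = values.pop(), values.pop()
        values.append(apply_op[op](a, b))

    for tok in tokens:
        if isinstance(tok, (int, float)):
            values.append(tok)
        else:
            # '^' is right-associative; '+' and '*' are left-associative.
            while ops and (prec[ops[-1]] > prec[tok] or
                           (prec[ops[-1]] == prec[tok] and tok != '^')):
                reduce_top()
            ops.append(tok)
    while ops:
        reduce_top()
    return values[0]
```

For example, infix([1, '*', 2, '^', 3, '+', 1]) evaluates 1 * 2 ^ 3 + 1 with the usual precedence, giving 9.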

But the Lisp syntax still is on top of the s-expression syntax...

I could do that in C# as well if I felt I needed it. Define classes representing the "syntax/AST", utility functions to build the AST and an interpreter that compiles the AST to an expression tree which in turn is an executable lambda.

It's what EFCore does, translates C# method calls into SQL statements via lambdas bound to expression trees.

So, no, LISP is not special in that regard.
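The approach described, AST node classes plus a small compiler producing an executable function, might look roughly like this in Python (all class and function names here are invented for illustration):

```python
from dataclasses import dataclass

# Tiny AST: each expression form is a plain node class.
@dataclass
class Num:
    value: float

@dataclass
class Param:
    name: str

@dataclass
class Add:
    left: object
    right: object

@dataclass
class Mul:
    left: object
    right: object

def compile_expr(node):
    """Compile an AST into a function of an environment dict, analogous
    to compiling a .NET expression tree into an executable lambda."""
    if isinstance(node, Num):
        return lambda env: node.value
    if isinstance(node, Param):
        return lambda env: env[node.name]
    if isinstance(node, Add):
        l, r = compile_expr(node.left), compile_expr(node.right)
        return lambda env: l(env) + r(env)
    if isinstance(node, Mul):
        l, r = compile_expr(node.left), compile_expr(node.right)
        return lambda env: l(env) * r(env)
    raise TypeError(f"unknown node {node!r}")

# x * 2 + 1 as an AST, compiled to a callable
f = compile_expr(Add(Mul(Param("x"), Num(2)), Num(1)))
```

Here f({"x": 10}) returns 21. The same tree could just as well be walked by a different backend that emits SQL, which is essentially what EFCore's expression-tree translation does; the Lisp counterargument is only that macros make this kind of thing a built-in, uniform mechanism rather than a per-project framework.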

> LISP is not special in that regard

It's just MUCH MUCH easier, standardized and widely used.

> easier

For routine code generation, there are even simpler T4 templates. That one gets as easy as one could want.

> standardized

So are expression trees: https://docs.microsoft.com/en-us/dotnet/api/system.linq.expr...

> and widely used

By what statistics?

> So are expression trees

But they are not easier. Pick one or two out of three.

> By what statistics?

Number of defined or used macros per lines of Lisp code.

When you have a good language, you don't need a good IDE.

This is like saying that if you have sharp nails you do not need a hammer. (IDEs can be, and are, extremely helpful; even simple text editors like nano or SciTE try to include some of the features normally found in an IDE.)

Alternatively, when you have a good IDE, you don't need a good language, freeing you to optimize other concerns like ecosystem. Not defending crappy languages, just saying that argument cuts both ways.

I've noticed a curious pattern: some brand new languages have communities expressing similar sentiment initially, but then if/when the language becomes popular and IDEs for it appear, you see quite a few people being excited about simple things that those IDEs can do (and could do for other languages decades ago), like debugging or code completion.

When you have good memory, you don't need to google anything twice.

Any details about EFCore regrets? Too much magic, maybe?

At times too much magic, like queries mysteriously failing because you have a dangling "navigational property" without a database column. That one was a hell to debug.

At other times, too little control over how the query is translated to SQL, or DB features not being exposed. With time the frustration piles up.

EDIT: I'd be OK with a light-weight ORM, like stuff these columns into these fields of that class. Leave the rest to me. Though EFCore got a bit better with support for query types so I can map classes to views / stored procedures and do the "real" work there.

Compared to writing SQL stored procs, it feels really hard to write something that spits out efficient SQL.

You can't (as far as I'm aware) run things like query analyzers the way I can with plain SQL to get execution plans.

It can be an absolute nightmare to debug. Proper hell, where you can lose a day or so over something that ends up being stupidly simple.

I hate it and would choose just doing SQL with a plain ORM any day over ever touching EF again.

When I was using it, it also had poor support for immutable types. I imagine this has got better.

For any slow-performing query you can always log the raw SQL that is generated and run a normal EXPLAIN like always.

You should also have database profiling tools at your disposal at all times, regardless of whether you use an ORM or raw SQL.

ORMs are great when used sparingly and properly. For a bunch of normal crud operations they are great because they get rid of a lot of boilerplate code.

Also, as with any ORM, you can always drop down to raw SQL in the situations that require it.

I think the small-team aspect is far more important than anything else. With two developers in a startup, you can split up key components and have quick informal talks about design choices. In a real company, huge amounts of time are wasted on all-staff meetings, team meetings, 1:1 meetings, management fads, mandatory compliance trainings that come up quarterly and take hours, and the sheer human problem of coordinating that many people.

Big companies have huge waste stemming from many different areas. For example, at a large company there are people in charge of requirements who have no understanding of the domain or how to write software. It's like the telephone game you play as a kid, where the message completely changes by the time it gets around the circle. Of course you could have your devs do that job, but most companies simply aren't that logical.

I bet Paul Graham and another competent developer could use a language that is not considered to be very productive (e.g. C++) and still swim circles around a large IBM team that is being held down by its own weight.

> mandatory compliance trainings that come up quarterly and take hours

Compliance has a bad name because it's bureaucratic. But in software, compliance can cover important things like privacy, security, internationalization, and accessibility. Getting these things right is a moral imperative in many cases. For this reason, the rise of move-fast-and-break-things startups, with their developers unfettered by bureaucracy, worries me.

I'll agree that there is likely some value (usually CYA for the company) and some real value too. However, those things still take up significant time that the two-person startup doesn't have to worry about, and that is just one class of time-wasters for big companies. It's always something: a big day-long network outage, fire drills, performance reviews, management shakeups, water-cooler gossip, sending emails, reading emails, sudden surprise requests from executives, having to have IT install something because they took away admin rights, preparing for audits and disaster recovery, etc. I bet a startup could easily avoid 30% of the work that just comes with having 100-1000 people in an organization.

For various reasons I have to examine Byzantine fault-tolerant consensus mechanisms. One of the surprising things I noticed is that something like Dunbar's number falls out of it naturally.

I was not aware of either of these things (and now have some interesting reading to do), thanks. I will say that it doesn't surprise me that there is some limit on how much social coordination can be done in a group.

I guess you're doing this for blockchain?

The root issue is extensibility. The same sort of arguments come up with the simplest extensible language: Forth. There are whole categories of embedded applications where Forth can cut the development time down to a fraction of what it would take to do it in C. Once that programmer leaves it takes a long time for someone else to come up to speed to do the sort of simple modifications that such applications tend to require.

... and where do you get a programmer that can do anything effective with an extensible language anyway? Such people tend to be very good programmers in general and are rare and expensive.

Could it be that these languages haven't evolved all of the tooling and ecosystems other languages have, so you don't spend time setting up stuff? With web dev, you constantly spend time switching between various tools just to build the damn thing/app.

None of it is needed, per se - for years everybody made web sites without it - but we've all convinced each other that it's just what you do.

Maybe Smalltalk doesn't have these conventions (yet)?

"(and do not forget: they are that powerful)"

Are they? Because small teams are naturally much more productive, I thought that was going to be the 'aha' of the article.

Yeah, I was expecting something along the lines of the old saw "what a team of two programmers can do in two weeks, a team of six programmers can do in three months".

> It’s very organic, very hard to document, and thus it becomes very hard to absorb new brains into the system.

I think the way out is [something like] Alan Kay's "active documents". If you really need large teams (rather than communicating constellations of small teams, say) then your software has to be structured to be communicable. (Beyond Knuth's "literate programming" even, I think.)

Sounds like language oriented programming: https://resources.jetbrains.com/storage/products/mps/docs/La...

> Create a language to solve your problem;

> Solve your problem.

But a typical problem consists of multiple sub-problems. Are you going to create a new language for each of them?

I think it really comes down to the availability of training material.

Look at the amount of material for Python compared to, say, Smalltalk.

You're given a $10M budget to solve a problem, so you write it in Lisp and keep all the money.

Break this absurdity in any number of ways and see: tech, business, people, etc.

There is more than one way to do it. Or, only one: Perl.
