
Less is Moore - sgentle
https://samgentle.com/posts/2015-04-26-less-is-moore
======
jonathaneunice
The problem with this kind of YAGNI thinking is that it 1. commits everyone to
understanding how to do everything and 2. requires everyone to do everything
afresh, every project. The fact is, I don't know the gory details of all the
Unicode encodings, XML Namespaces, multithreaded concurrency locks, database
indexing strategies, or SVG rendering pipelines my code uses every day. While
I can dive down to understand those things if need be, at any given moment I
don't know even a fraction of those things my program directly uses. If I had
to dive down on them, I wouldn't get productive work done. And as a group,
we'd each be reinventing slightly different, mostly crappy single-use versions
of The Wheel. Thanks, but I'm perfectly happy standing on the shoulders of
giants--even if it gets a little wobbly sometimes.

~~~
ploxiln
When it comes to really supporting non-English languages well, you kinda do
have to know gory details of Unicode. (I had to.) Otherwise, stuff will
mysteriously not work, and you won't know what to do. OS X / Windows filenames
will surprise you. Python on OS X / Windows will surprise you. Javascript will
surprise you. URLs will surprise you. There's a lot of surprises in store.
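One classic surprise of this kind (an illustrative Python snippet, not from the thread): the same visible string can arrive in precomposed (NFC) or decomposed (NFD) form -- HFS+ on OS X historically stored decomposed filenames -- and naive equality checks quietly fail:

```python
import unicodedata

# "café" in precomposed (NFC) and decomposed (NFD) forms.
nfc = "caf\u00e9"    # 'é' as the single code point U+00E9
nfd = "cafe\u0301"   # 'e' followed by U+0301 COMBINING ACUTE ACCENT

print(nfc == nfd)                                # False: look identical, differ byte-for-byte
print(unicodedata.normalize("NFC", nfd) == nfc)  # True once both are normalized
```

Code that compares filenames across platforms without normalizing first is exactly the kind of thing that "mysteriously" breaks.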

If you have some real-time large-ish-data need, you'll need to evaluate your
options based on how they manage concurrency and locking. You may even have to
implement a custom cache or archive layer. (I had to.)
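To make the concurrency/locking tradeoff concrete, here's a minimal sketch (hypothetical Python, not the commenter's actual system) of the simplest possible policy -- one coarse lock guarding every operation. Evaluating whether this suffices, or whether you need sharding, read-write locks, or lock-free structures, is exactly the analysis described:

```python
import threading
from collections import OrderedDict

class LRUCache:
    """Minimal thread-safe LRU cache: one coarse lock guards every access."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()
        self._lock = threading.Lock()

    def get(self, key, default=None):
        with self._lock:
            if key not in self._data:
                return default
            self._data.move_to_end(key)  # mark as most recently used
            return self._data[key]

    def put(self, key, value):
        with self._lock:
            self._data[key] = value
            self._data.move_to_end(key)
            if len(self._data) > self.capacity:
                self._data.popitem(last=False)  # evict least recently used
```

The coarse lock serializes all readers and writers; whether that's acceptable depends entirely on your access pattern, which is why this decision can't be outsourced.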

I'll admit, I haven't dealt with SVG rendering, and it wasn't important to any
company I've worked for. In cases like that, you can say "whatever, fast
enough on today's computers, this one doesn't work so I'll replace it with a
png." But if you really need stuff to work right, if you want it done right,
you don't have to do it all yourself, but you do have to understand it all
yourself; otherwise you can't correctly choose and marshal the pieces that do
it.

EDIT: I feel like adding an addendum: yes, the specific views described in OP
are historic relics. Still... the number of levels that exist today between
do-it-yourself c programming with an alternative/minimal libc, and a
ruby/rails/activerecord/auth-gems/assets-gems project, is really amazing.

~~~
bandrami
_the number of levels that exist today between do-it-yourself c programming
with an alternative /minimal libc_

Turtles all the way down. I'm working right now on an OS-less Forth system to
run on a SoC, and keep thinking "man, how much easier would this be if I had a
kernel and a library!"

Meanwhile, somebody else is probably working on a pure logic processor and
thinking "wow, I wish I had a full SoC to work with..."

------
gwern
> I came into contact with a version of this philosophy even earlier, in The
> Mote in God's Eye by Larry Niven and Jerry Pournelle. In the story, there
> are aliens called Engineers who only build special-purpose things. To them,
> there's no such thing as a generic chair. They would instead build a Sam-
> chair to my exact proportions. If those proportions changed because of a
> series of brownie-related incidents, they'd rebuild the chair. Every item in
> their world is custom-made for its particular purpose. Both Moore and
> Niven's specialisation philosophies came from resource-constrained
> environments: the Engineers because of the limited physical resources on
> their home planet

To expand on this, and why Moore's views are a niche and will remain that way
for a long time: in _Mote_, the aliens in question have been trapped on the
same planet for millions of years, typically overpopulated, and have evolved
to an extremely high degree to cope with living at (and sometimes beyond) the
Malthusian limits. The Engineers are an example of these adaptations at work -
because there are so many Moties, Motie time and labor is dirt-cheap
(specifically, they are at the Malthusian limit where their wages equal how
much it costs to live the most minimal life) while resources remain at their
usual finite amounts; and so, it pays to have Engineers finetune and customize
each product, for the same reason evolution leads to ultra-optimized (but
often unclean and inelegant) solutions.

In contrast, as ugly incidents like the wagefixing scandal at Google/Apple/etc
show, we face the _opposite_ situation. Real resources are extremely abundant,
and computations have never been cheaper, while programmer time remains
expensive. So it makes little sense to take a Moore/Engineer approach, and
instead generally the tradeoff of performance for more generality is made.

This will remain true for as long as the cost of programmers remains the main
part of running software compared to the resources like CPU or RAM or joules
consumed to run the software. (The software could be run at Internet-scale, in
which case the possible efficiencies from specialization are worth the
programmer time; or programmers themselves could become much more abundant,
such as in a Robin Hanson upload/emulation SF-like scenario.)

~~~
zaphar
The corollary to this is that sometimes a company hits scale where having a
few "Moties" customize/refine every level of the stack saves billions and
suddenly it's financially worth it.

It has always been a cost/benefit analysis. The hard part is figuring out the
costs and benefits when it comes time to make that decision.

Case in point. My current work is in C# with some Go sprinkled around. Go
happened because the C# Console and IO APIs are a hopeless mess. They are
bloated with everything under the sun and special-cased in the API. And to top
it all off, they are just about the farthest thing from composable. The
creators of C#'s standard library created a monstrosity in the name of
abstraction that is ultimately harder to use correctly. What is the cost of
that decision? Longer development times than necessary for projects that do IO
in C#. Late refactors when you realize you should have been using a TextReader
instead of a StreamReader, and now you have these seams running all through
your app that have to change.
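The same seam shows up in any language. A rough Python analogue of the TextReader-vs-StreamReader mistake (hypothetical `count_words` helpers, for illustration only) is depending on a concrete source when a narrower abstraction would do:

```python
import io

def count_words_from_path(path: str) -> int:
    # Coupled to a concrete source: usable only with a real file on disk.
    with open(path) as f:
        return sum(len(line.split()) for line in f)

def count_words(reader: io.TextIOBase) -> int:
    # Coupled to the narrowest abstraction that suffices: any text stream.
    return sum(len(line.split()) for line in reader)

# Swapping sources later needs no refactor, and tests need no filesystem:
print(count_words(io.StringIO("hello world\nfoo")))  # 3
```

Pick the concrete type and every caller grows a seam; pick the abstraction and the late refactor never happens.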

------
akkartik
I'll make a stronger case to Keep it Simple: even if you have infinite
computing resources and memory, implementing the simplest possible thing is
far easier for other programmers to understand than the ugly generalized
'abstractions' most of us come up with 99% of the time. That 'cognitive
bottleneck' will remain no matter how far back you push the resource
bottleneck, and it will keep an anti-abstraction ethos competitive.

(Now, the Forth way isn't the one I would choose to be easy to understand:
[http://yosefk.com/blog/my-history-with-forth-stack-machines.html](http://yosefk.com/blog/my-history-with-forth-stack-machines.html).
But that's a separate story..)

~~~
eru
Actually, if your language is designed right, abstractions become much
cheaper, especially conceptually. As an example, I find a call to `map' or
`filter' much easier to understand than a (C-style) `for' loop.

Part of your point still stands: the `for' loop is harder to read and write
specifically because it is more general. `for' loops can do all kinds of wacky
things.
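For instance (an illustrative Python snippet, not from the thread): the declarative version states its intent up front, while the loop must be traced to rule out surprises:

```python
nums = [3, 1, 4, 1, 5, 9, 2, 6]

# Declarative: the intent (keep evens, double them) is visible at a glance.
declarative = list(map(lambda n: n * 2, filter(lambda n: n % 2 == 0, nums)))

# Imperative: same result, but the reader has to trace the whole loop to
# rule out early exits, index arithmetic, and other "wacky things".
imperative = []
for n in nums:
    if n % 2 == 0:
        imperative.append(n * 2)

print(declarative == imperative)  # True; both are [8, 4, 12]
```

The `for` loop *could* do anything; `map` and `filter` promise they won't, and that restriction is what makes them easier to read.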

~~~
akkartik
Yeah. When I said 99% I wasn't just including what the language provides. That
usually lies in the 1%, even for 'bad' languages. Generalization there is
usually justified. No, I was thinking of the functions and interfaces we
create _atop_ the language and standard library.

~~~
loup-vaillant
Language, standard library, or custom module boundaries, it's the same
problem: there are two kinds of generality.

The first kind is exhaustiveness. Being general by accounting for all special
cases. That one is complicated, and rarely worth doing beforehand.

The second kind is genericity. Being general by _ignoring_ all the special
cases. That one is often simpler and more solid than any special case.
Parametric polymorphism (or generics, or templates) is like that: ignoring
the specifics of the parameters limits what you can do with them, making the
generic thing simpler.
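A quick Python illustration of that point (a hypothetical `first` helper): the function gains generality precisely by knowing nothing about its type parameter:

```python
from typing import List, TypeVar

T = TypeVar("T")

def first(items: List[T]) -> T:
    # Generic by ignoring the specifics of T: because the function can't
    # assume anything about the elements, it works for every element type.
    return items[0]

# The "exhaustive" alternative would enumerate special cases instead:
# first_int, first_str, first_user, ... each one knowing too much.
print(first([1, 2, 3]), first(["a", "b"]))
```

The restriction is the feature: there is almost nothing `first` can do wrong, because there is almost nothing it can do.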

The layer at which you choose to apply exhaustiveness or genericity is
irrelevant in my opinion. C++ often leans towards the exhaustiveness end of
the spectrum despite being a language, for instance.

~~~
akkartik
Interesting. My take was that language, standard library and third-party
libraries are part of the same state space, just organized by the amount of
use they receive. Since user-space libraries receive less hammering (many have
just one user) they're usually still in the "adding exceptions" phase and
haven't yet attained (and perhaps never will attain) the simplicity on the
other side of complexity when the requirements stabilize.

I think that maps to your distinction, except that I don't believe in
'exhaustiveness'. The state space of a program isn't fixed for most programs.
It evolves and grows in response to what we want the program to do, which
dimensions we choose to generalize along, and where we stay stable. It's
equally reasonable to view the accumulation of special cases as exhaustive at
every point; it's the boundaries of the state space (the requirements) that
are growing in strange ways.

------
edpichler
This part I don't want to forget:

 _"When you take on a framework, you're like a consumer buying a product: if
it does a hundred things you don't need, or doesn't do things the way you
want, well, tough. That's what we've got for sale. But as a programmer you're
not a consumer. You're a producer. You aren't forced to accept an abstraction
that doesn't work for you, or solves a problem you don't have. The option to
build the Engineer-style specialised solution that conforms exactly and only
to your needs is always there."_

------
duncanawoods
I liked this:

 _If you're solving problem X and realise "hang on, that's actually a special
case of problem Y", it's actually the single most dangerous point in the
development of your solution; you're only one step away from the logical
conclusion of "I should solve Y instead". Now you're solving the wrong
problem._

I think this resonates with most of us, but we will still pursue the general
case. I think it's because problems are simpler when expressed in their
general case than with all the hoary detail of a special case. It's close to a
rule of nature: whenever we see complexity we can discover simple, elegant
rules that govern it, and as programmers, whose main challenge is taming
complexity, we are prepared to risk anything in our pursuit of simplicity,
including more complexity...!

