
Minimal Viable Programs (2014) - gglitch
https://joearms.github.io/published/2014-06-25-minimal-viable-program.html
======
branweb
This is a useful concept. I also thought these lines were insightful:

 _New features mean new sales opportunities, good for the company but not good
for the user._

 _The problem with adding features to MVP is that when we ship more complex
products like complete operating systems that are packed with programs, the
complexity of the individual programs contributes to the complexity of the
whole._

Not sure how to solve that particular problem though.

This made me think of a blogpost linked on Hacker News when Joe passed away:
[https://github.com/lukego/blog/issues/32](https://github.com/lukego/blog/issues/32).
This part seems relevant here:

 _Joe wrote amazingly simple programs and he did so in a peculiar way. First
he wrote down the program any old way just to get it out of his head. Then
once it worked he would then immediately create a new directory program2 and
write it again. He would repeat this process five or six times (program5,
program6, ...) and each time he would understand the problem a little better
and sense which parts of the program were essential enough to re-type. He
thought this was the most natural thing in the world: of course you throw away
the first few implementations, you didn't understand the problem when you
wrote those!_

~~~
nemaar
I really believe this is the only way you can write good software. It's mind-
blowing to me that most people try to "figure out" a problem before seeing
what it actually is. I always feel strange when someone writes a "study" by
reading the documentation of some API or, even worse, by reading someone's
feature request. The writer of a feature request usually knows even less about
the whole thing and really did not think things through. We need tools and
programming languages where you can create really dirty but working solutions
and make them iteratively better. You need to find all the edge cases and
pitfalls, and for that you need to fail. Of course you need to fail fast: this
method does not work if iteration is slow and the thing is already in
production when you find out that it barely works.

~~~
mike_hock
Where do I go to get paid to actually put in the TIME and effort required to
produce something elegant and high quality?

~~~
nothrabannosir
As a contractor working on a project basis, you can leverage this technique to
produce code of such high quality that a commensurate price can be commanded.

You’d need to find a niche that values this level of quality, but if that
exists, and _if the theory is true_, then there’s your gold.

------
kejaed
“I really like systems that do one essential thing and do it well. Good
examples are Dropbox and Twitter. Dropbox just works. Twitter has a no fuss
140 character tweet box. Simple, easy to understand and minimalist.”

They used to... Maybe even when this was published in 2014.

~~~
coleifer
And yet when I expressed a negative opinion on dropbox "Paper" I was downvoted
into oblivion. Same happened when I was critical of antirez "disque" project,
which is abandonware now. Part of HN seems breathless about new and shiny, and
others seem to like this kind of "Unix philosophy" minimalism. I guess it's
just a matter of taste at the end of the day?

~~~
antirez
Disque is not abandoned fortunately! I'm in the process of porting it as a
Redis module. It was not viable to continue it as a fork.

------
kiba
Huh. I had my own definition of Minimal Viable Programs: code that
demonstrates how to do X, and only X.

I found that tutorials often had an unnecessary amount of complexity when
showing people how to do something, like encumbering an example with classes,
title setting, etc., none of which is essential to the operation of the
program, and which makes it difficult to understand and learn from.

So, if I found a working example, I would strip it down to its essentials
until I could not take anything more away without it ceasing to function.

Maybe I should call it minimum viable examples instead.

~~~
oftenwrong
A bit like [https://stackoverflow.com/help/minimal-reproducible-example](https://stackoverflow.com/help/minimal-reproducible-example)

------
mankyd
This always becomes a tricky debate. Take the `ls` program. MVP would just be
`ls`. Or maybe `ls -la` to get all the details. You could theoretically pipe
that into other programs to get more information.

But I'll be damned if I don't use `ls -lah` because I am a human, not a robot.
`ls -1` is also super handy. I am sure there are lots of others that folks
use.

So where do you draw the line? It makes way more sense to include a few extra
options in your tool rather than requiring your users to jump through extra
hoops to get something _they_ find useful.
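
A concrete picture of the trade-off (standard GNU/BSD `ls` flags; `-h` is a
widespread extension rather than strict POSIX):

```shell
#!/bin/sh
# -1: names only, one per line -- the machine-friendly, pipe-able form.
ls -1 /
# -lah: long listing, hidden entries, human-readable sizes -- for humans.
ls -lah /
# Composition instead of a built-in "sort by size" feature:
ls -la / | sort -k5 -n | tail -3
```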

edit: On the other hand, you don't need to include the kitchen sink, either. I
have seen programs with literally hundreds of long-form command line
arguments. It's a nightmare to wade through.

~~~
lou1306
The flags you talk about are mostly about the _presentation_ of the output.
The, shall we say, "business logic" of `ls` is still pretty bare-bones. That
could be one way to draw a line.

Also, I think these presentation/formatting flags are a consequence of the
Unix shell "everything-is-text" paradigm. In Powershell, where you pass around
structured data rather than text, typically you handle presentation just by
piping into the appropriate cmdlet.

------
vanderZwan
Tangent: I miss Joe Armstrong. I never even programmed in Erlang, but I just
enjoyed watching his conference talks and the interviews he held with other
people so much.

------
kyberias
> If you want a job done find the busiest person you know and give them an
> extra job. This is because the reason they are busy is that lots of people
> want them to do things because they are good at doing things and that's why
> they are busy.

Oh man... that is so accurate it hurts.

~~~
NicoJuicy
Oh man, you know what happens, right?

Management wants 1 issue at a time per person. So they just mention the next
task in person, one that you are not allowed to create on the board.

Lol :-(

~~~
mc3
1 issue "in progress" might make sense, but 1 issue in total seems weird.

------
gglitch
I’ve been interested in shell scripting lately and I have to admit that I
posted this not because of Joe’s high-level observations on minimalism of
architecture, but because of his discussion of a tiny, shell-based system his
team used for ticket management, which reminded me of this shell-based notepad
posted a few days ago:
[https://news.ycombinator.com/item?id=21416338](https://news.ycombinator.com/item?id=21416338)

~~~
wodenokoto
TFA made me think of this recent similar post: A static site generator using
make, shell script and pandas, in about 150 lines including templates.

[http://stephenbalaban.com/static-site-generation-in-50-lines...](http://stephenbalaban.com/static-site-generation-in-50-lines-of-code/)
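
In the same spirit, a generator can be sketched in a few lines of POSIX
shell. This is an illustrative sketch only, not the linked implementation: it
wraps plain-text page bodies in an HTML template instead of driving pandoc
from make, and the directory names are invented.

```shell
#!/bin/sh
# Minimal static-site "generator" sketch: wrap each page body in an HTML
# template. Illustrative only -- the linked post drives pandoc from make,
# and the pages/ and site/ directory names here are invented.
mkdir -p pages site
printf 'Hello, minimal world.\n' > pages/index.txt   # sample content
for src in pages/*.txt; do
  name=$(basename "$src" .txt)
  {
    printf '<html><head><title>%s</title></head><body>\n' "$name"
    cat "$src"
    printf '</body></html>\n'
  } > "site/$name.html"
done
```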

~~~
vector_spaces
I think you meant Pandoc rather than Pandas?

~~~
wodenokoto
Yes I did. I wanna blame autocorrect but I’m afraid it was probably some sort
of Freudian slip

------
brunojppb
Good software is written like this. Joe could not be sharper on that. I did
the exact same thing at my current company. I created a ticket system
integrated into Slack where employees can quickly open a new issue and put a
tag on it. A ticket is created in HelpScout where our IT support team follows
up. It works 100% of the time and has had zero downtime so far (around 2
years in production). No need for new features. It just works.

------
JimmyRuska
Things are becoming more like the Unix philosophy: modular and shared. Even
though feature creep is increasing in modular systems, as long as the core
components are simple to understand and introduced first, extra features do
not necessarily decrease utility.

For example tmux: if you look at a cheat sheet you might be overwhelmed, but
when I teach people to use it they find it extremely simple. I just tell them
prefix + c to create a new window, and prefix + w to switch windows. They only
need 2 commands to get value, even though there are hundreds of features, and
later they can incrementally learn about them.

This is the same way I learned programming, and the same way people learn
emacs and vim. You start with the MVP feature set that will make it functional
and you learn about features from there.

Even AWS, Google Cloud, and Azure: even though they're mega-services of now
insane size and feature breadth, they're creating single services that are
easy to understand and compose together. You don't need to understand a lot to
copy files in and out of AWS S3, for example, and you can compose many
services together with only a basic understanding. You can slowly learn about
more features as you work.

In the future, things like Apache Arrow will also help, as you can keep
services modular but potentially share memory when composing them. This idea
of service composition is also common in Haskell and other functional
languages, where, if lazy evaluation is possible, you can avoid redundant
work. Now that the Unix philosophy is being more widely adopted, although
renamed to microservice fundamentals, these ideas of lazy evaluation and
protected data sharing will probably be the next level of services research.

~~~
0xFACEFEED
Might you be looking at this through rose tinted glasses?

I'm still a VIM user to this day, but my learning experience was hardly a
clean incremental process. VIM has no MVP if you're using it to program
computers. Early on, I spent hours getting sucked into making the editor do
what I wanted. I only stuck with it because I believed the investment of time
was worth it (and it was).

> You don't need to understand a lot to copy files in and out of AWS S3, for
> example, and you can compose many services together with only a basic
> understanding. You can slowly learn about more features as you work.

This is dangerous though. You might unknowingly become a victim of vendor
lock-in. Or you might have to throw out all of your work because you didn't
understand the constraints of the system.

Subjectively, my evidence is how I (an experienced person) evaluate software
and tools today. Step 1 is learning the architecture, Step 2 is finding the
limits/constraints, etc. (there's more but you get the idea). When I was
evaluating Git I spent a couple hours researching the internals. This sold me
on Git but it also made Git much easier to use down the road. It's not a very
big time investment most of the time. This kind of evaluation has been so
important when evaluating complex stuff like databases and SaaS products.

> Now that the Unix philosophy is being more widely adopted, although renamed
> to microservice fundamentals, these ideas of lazy evaluation and protected
> data sharing will probably be the next level of services research.

Microservices have been a disaster. The majority of implementations I've seen
are brittle/buggy/convoluted messes. Complexity that was previously managed in
code has been pushed to infrastructure -- and no one knows how to do
infrastructure!

Most people that I talk to don't really understand what made the Unix
philosophy work. Everyone cites "programs that do one thing and do it well".
That's certainly the fun part, but it isn't what makes Unix programs so great.
It's how programs do I/O (text and pipes).

In other words, it's not enough to build tiny modular single-use programs. You
need an equally elegant system that combines the programs together. The
programs need to strictly adhere to this system otherwise everything breaks
down. Containers are _not_ it. K8s is _not_ it.

~~~
dragonwriter
> This is dangerous though. You might unknowingly become a victim of vendor
> lock-in.

The simplicity of the individual services means it's easy to compose within a
cloud, but it also means (particularly when the fundamental problem each
service is directed at is similar) that it is easy to abstract away from a
particular cloud by composing the solutions from separate clouds for the same
problem behind a shared abstraction layer.

~~~
0xFACEFEED
The dirty reality is that software is tuned to constraints of a particular
cloud provider. Talk to anyone who's turned a long-lived single-cloud
production system into multi-cloud.

Example: How does AWS EKS provision control plane worker nodes? AWS
CloudFormation. How do you manage security policies? AWS IAM. What kind of
servers do you tune your software to run on? AWS EC2 of course.

Now let's say you wanna take advantage of this fancy "composing the solutions
from separate clouds" thing. Let's say you like Azure's Blob Storage over
AWS's S3. Guess what? You're now paying for cross-cloud latency! Both in time
and in cost.

The devil is in the details.

------
skybrian
This happened to work because all the users were programmers who were already
familiar with the command line, text editing, and the CVS tool, and they had
already been granted access to their source control system. It wouldn't work
for the general public.

Maybe the lesson is to figure out your minimum viable audience and build on
the tools they already know?

~~~
nimvlaj30
This is the real lesson. Programmers prefer simple, atomic programs that are
short and digestible.

Most other users need safety restraints such as a GUI. Doesn't stop you from
using minimum viable product as a philosophy.

------
opportune
Great post. I think this also ties into configurations. It is so annoying when
getting started with a new tool requires reading through a ton of
documentation and then making a big configuration file just to get it to work.
Adding features can be great, but adding cognitive overhead isn’t.

------
3xblah
One opinion, possibly shared by others, is that small programs are actually
more versatile than larger ones. The obvious example is working with text in
UNIX versus in a large, graphical program with, e.g., cascading menus full of
features chosen by a company, not by users. The text processing functionality
in the large program is usually pre-determined, and usually cannot be modified
by users. We can say what the program can and cannot do. Whereas it
is difficult to say with any certainty what are the limits of a UNIX user who
is creative in her use of a variety of small, single purpose programs. She may
write her own, in addition to using the ones provided.

------
AdieuToLogic
From the first sentence of the article:

> A minimal viable program is the smallest program that solves a particular
> problem. It is small and beautiful. It has no additional features.

From the FreeBSD source tree (/usr/src/usr.bin/tr):

    $ svn log tr.c | tail -6 | grep -v '^---'
    r1590 | rgrimes | 1994-05-27 08:33:43 -0400 (Fri, 27 May 1994) | 2 lines

    BSD 4.4 Lite Usr.bin Sources

So, yes, what a lovely mindset indeed.
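
`tr` itself shows why the mindset pays off: one transformation per
invocation, composed through pipes. For example:

```shell
#!/bin/sh
# tr does one thing: translate or delete characters read from stdin.
echo 'minimal viable program' | tr 'a-z' 'A-Z'   # uppercase everything
echo 'hello   world' | tr -s ' '                 # squeeze repeated spaces
```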

------
r3dk1ng
[https://news.ycombinator.com/item?id=5920732](https://news.ycombinator.com/item?id=5920732)
most minimal viable program

------
janpot
> A minimal viable program is the smallest program that solves a particular
> problem. It is small and beautiful. It has no additional features.

Yet, when a node developer creates left-pad...
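
For scale: the common case of left-pad (pad a string with spaces to a given
width) is a one-liner around `printf`. The `leftpad` function name here is
just for illustration, and npm's left-pad also supports arbitrary pad
characters, which this sketch omits:

```shell
#!/bin/sh
# Left-pad via printf's dynamic field width: '*' consumes the width
# argument, and %s right-justifies (i.e. left-pads with spaces).
leftpad() { printf '%*s\n' "$2" "$1"; }
leftpad abc 5    # "  abc"
```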

~~~
dragonwriter
There's two problems with leftpad:

(1) Dependencies always add a certain minimal level of complexity, so a too-
trivial dependency can add more-than-warranted complexity as a dependency even
when the implementation itself would not do so as an integrally-maintained
component of the larger system.

(2) The nature of the npm distribution ecosystem (and this is by no means
unique to npm) means that the complexity and risk added by dependencies
managed through that ecosystem is greater than it would be with immutable,
append-only repositories of dependencies.

~~~
thinkingkong
What's an example of an immutable, append-only repository of dependencies?
That _sounds_ intriguing but I don't follow.

~~~
dragonwriter
I don't know of any perfect examples, but the idea is one where once a version
of a library is posted, that version can no longer be deleted or modified,
only later versions added.

Legal and practical constraints would probably mean any real system would be
at best an approximation of this ideal, but a big part of the leftpad problem
is that NPM at the time didn't even try to approximate it. (The response also
underlined an NPM policy existing at the time that was useful in that context
but dangerous in general: NPM generally allowed others to claim the names of
abandoned projects.)
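
The publish-side rule can be sketched in shell (every name here, from
`publish` to the `repo/` layout, is hypothetical; the point is only the
append-only property):

```shell
#!/bin/sh
# Append-only package repository sketch: a published name@version can never
# be replaced or removed; only new versions may be added.
publish() {  # usage: publish <name> <version> <file>
  dest="repo/$1/$2"
  if [ -e "$dest" ]; then
    echo "error: $1@$2 is already published and is immutable" >&2
    return 1
  fi
  mkdir -p "repo/$1"
  cp "$3" "$dest"
}
```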

~~~
thinkingkong
So like the first use of a shared ledger ever for binary packages?

