
On software popularity - djanowski
http://soveran.com/popularity.html
======
beat
It's turtles all the way down.

Using a lightweight, comprehensible framework is good, until you hit the
limits of that framework and start pulling in lots more libraries to fill the
gaps (the Sinatra/Cuba world). Using a heavyweight, complete library is good,
until you start suffering from bugs caused by incomprehensible magic buried
deep inside (the Rails world).

I see the same problem in microservices versus monolithic apps. If you use
microservices, you have a lot of duplication and inter-component configuration
management complexity. If you use monolithic apps, you get the big ball of
mud.

Or, as I sometimes put it, "Which kneecap do you want to get shot in?"

The underlying problem isn't fashion, or bloat. It's that software is very,
very complex. Unless you're doing hardcore embedded work in assembly language,
you're building a raft on an ocean of code you won't read and can't
understand.

A friend of mine put it well once. He said that you should have a deep
understanding of systems one layer from yours (i.e., your frameworks), and at
least a shallow understanding of things two or three layers away (operating
systems, etc.). If you don't have some grasp of the things you depend on,
you're relying on magic and are vulnerable. But you can't realistically expect
to know it all.

~~~
j03m1
Which knee do you want to get shot in is the best way I've heard this
explained. Ever.

~~~
beat
"I don't believe in the no-win scenario." -Captain Kirk

A lot of the problems we face in software engineering are Kobayashi Maru
tests. They're no-win scenarios, tests of character rather than ingenuity.
There's a certain irreducible complexity to all interesting problems, so at
some point, you're not _solving_ complexity, but merely pushing it from one
place to another.

~~~
chongli
_There's a certain irreducible complexity to all interesting problems, so at
some point, you're not solving complexity, but merely pushing it from one
place to another._

Right, but I'm not about to believe we're even close to that limit. Every day
we have people writing bugs which, from a state of the art perspective, are
already solved problems. It's like people are out there riding horses while
others are driving past in their cars.

I think it's a little better to say that a lot of the problems we face in
software engineering are cultural, not technical. So many people adopt these
tribalistic mindsets when discussing their preferred technologies rather than
allowing themselves to be open to better ideas.

~~~
beat
Sure, there are lots of projects that are cranking out junk redundant code on
legacy systems. But even if you _do_ use the best tools, the best processes,
the best analysis, the irreducible complexity remains.

For example, I have to read data from one source and write it to another. I
can have a single app provide an API for both the writing source and the
reading source, or I can have two separate apps and two separate APIs. But I
can't get away from the problem of reading and writing. That's irreducible.
If I have separate apps for the two APIs, I have a configuration management
issue. If I have a monolithic app that does both, I have a coupling issue.
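The tradeoff above can be made concrete with a tiny sketch (the endpoint
names are hypothetical, purely for illustration): the read and the write are
irreducible; the only choice is whether the wiring between them lives in code
or in configuration.

```python
# The irreducible core: a read and a write, whatever the deployment shape.
def read_records(source):
    """Fetch records from the reading source (stub for illustration)."""
    return list(source)

def write_records(sink, records):
    """Persist records to the writing source (stub for illustration)."""
    sink.extend(records)

# Option A: monolith -- one app calls both directly.
# The coupling lives in code: changing either side means redeploying the app.
def sync_monolith(source, sink):
    write_records(sink, read_records(source))

# Option B: two services -- the same two operations, but the wiring moves
# into configuration (endpoints, credentials) instead of a function call.
# These names are made up; they stand in for real service discovery/config.
READER_ENDPOINT = "http://reader.internal/records"
WRITER_ENDPOINT = "http://writer.internal/records"
```

Neither option removes `read_records` or `write_records`; the complexity just
moves between the call site and the configuration.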

Imagining that Bleeding Edge Technology of the Month will make this go away is
wishful thinking. But facing the truth is awful, so we choose the wishful
thinking.

~~~
chongli
I'm not merely talking about _junk redundant code_ but entire classes of
problems such as memory errors (null pointers etc.) and race conditions (in
many, though not all, cases).

If you compare software engineering to another field like aerospace or
structural engineering, it's laughable how little rigour and formality goes
into most software. It's shocking how much disdain programmers show for
mathematics, which gives us the tools to _prove_ that our applications are
correct.

~~~
dkersten
_It's shocking how much disdain programmers show for mathematics which gives
us the tools to prove that our applications are correct._

I personally think these things are great but too expensive. My customers care
about solving their problems, not perfection. There's a balance to be struck,
and I'm not excusing bad software, but if the customer isn't willing to pay
for better and is happy with the current quality, there is little incentive to
put in this extra effort (and therefore cost) when the resources could instead
be put to work on providing value in ways the customer does care about.

I guess after you exceed the customer's quality requirements/expectations,
there are diminishing returns.

I'd love to produce perfect software, but nobody will pay me to do it.

~~~
chongli
_I personally think these things are great but too expensive. My customers
care about solving their problems not perfection. There's a balance to be
struck and I'm not excusing bad software, but if the customer isn't willing to
pay for better and is happy with the current quality, there is little
incentive to put in this extra effort (and therefore cost) when the resources
could instead be put to work on providing value in ways the customer does care
about._

I didn't say you had to do all of this yourself. I don't believe every
programmer has to have a degree in mathematics. Where I take issue is with the
refusal to take advantage of the hard work of others. There exist languages,
tools and libraries with a strong mathematical basis that are ignored in
favour of the latest fad. Places like Stack Overflow are full to the brim with
people asking for help with problems that they wouldn't have to deal with if
they'd chosen better tools.

------
t4nkd
I find it pretty interesting that the only real concern was "can this team
solve my technical problem using the best tools in the shortest amount of
time", with no consideration of things like "What happens when this team
moves on?". One of the biggest assumptions I've made is that when choosing
tools, choosing the most popular ones gives you the highest chance of bringing
someone on board who already groks them, finding learning material, etc.

I seem to be noticing more often now that "the best tool for the job, for the
person, at the time" is completely acceptable. I feel as though this didn't
used to be the case, and I know a lot of more "established" engineers who
believe it's naive to choose tools based on an inclination or personal/team
preference. While in this case, we're getting less code in the dependency, I'm
highly suspicious of how their working knowledge, method of code organization,
etc... can be transferred over time.

Ultimately though, knowing what actually happened over time with this project
would be the most interesting. Does he eventually find new team members who
convince him to switch back to a framework that is more widely understood and
practiced?

~~~
RyanMcGreal
> but did not seem to consider things like, "What happens when this team moves
> on".

FTA:

> 3. Cuba itself is extremely easy to work with because it barely does
> anything. You can read the entire source in 5 minutes and understand it
> completely. I'm confident future teams could pick it up.
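The "read the entire source in 5 minutes" claim is plausible because a
micro-framework's routing core is genuinely tiny. This is not Cuba's actual
source (Cuba is a Ruby library built on Rack); it's just a toy Python
analogue showing how little code a minimal router needs:

```python
# A toy micro-router in the spirit of Cuba/Sinatra: small enough to read in
# one sitting. All names here are illustrative, not Cuba's real API.
class App:
    def __init__(self):
        self.routes = {}  # (method, path) -> handler

    def on(self, method, path):
        # Register a handler for an exact method/path pair.
        def register(handler):
            self.routes[(method, path)] = handler
            return handler
        return register

    def call(self, method, path):
        # Dispatch a request; 404 when no route matches.
        handler = self.routes.get((method, path))
        return handler() if handler else (404, "Not Found")

app = App()

@app.on("GET", "/")
def home():
    return (200, "hello")
```

A dependency this small can be audited completely, which is the property the
quoted point is really about.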

~~~
_raul
I guess the main concern here wouldn't be Cuba, but the "handpicked solutions
for common problems that a web app would face" added by the development team.

~~~
t4nkd
This^ is actually where my concern was.

Specifically:

> While in this case, we're getting less code in the dependency, I'm highly
> suspicious of how their working knowledge, method of code organization,
> etc... can be transferred over time.

------
seppo0010
This article feels like a follow-up to "On Ruby"[1] and related to "It's OK
for your open source library to be a bit shitty"[2]. The common theme I find
in all three is the quality and health of a dependency. I agree with OP that
stars and latest commit are not clear indicators of quality.

I think more useful indicators are tests, documentation and pull requests.

* Tests. These are things you can actually run to see if the library actually works. In most cases, you can read them and understand how the library is supposed to be used. Even if the tests passed in the past, they may not pass now, for example under a different version of the language or platform. Test coverage would be a nice signal, but it is hard to measure at first sight, so the ratio of test LOC to library LOC might serve as an indicator.

* Documentation. The documentation itself does not change how the library behaves, but it is a clear indicator of whether the author expected someone else to be able to use it, and it shows a sort of responsibility. Most weekend hacks would not have any.

* Pull Requests. If there are old, open pull requests that seem useful, it is a bad sign about the maintainer of the project.

[1] [http://hawkins.io/2015/03/on-ruby/](http://hawkins.io/2015/03/on-ruby/)

[2] [http://www.drmaciver.com/2015/04/its-ok-for-your-open-source-library-to-be-a-bit-shitty/](http://www.drmaciver.com/2015/04/its-ok-for-your-open-source-library-to-be-a-bit-shitty/)

------
mcguire
" _That didn't prevent further criticism, and many comments followed
recommending Rails and Sinatra as alternatives based on familiarity and
popularity._"

Ah, the "industry standard" argument, where _industry standard_ seems to be
defined as "whatever I've read the most blog posts about", or "whatever was
used at my last job", or possibly "what I learned in school (last year)".

One of my major dissatisfactions is newly hired, junior (or mid-level)
developers coming on board and then immediately wanting to change _everything_
to be more "industry standard". They rarely seem to try to understand why
we're already doing what we're doing, they almost never seem able to explain
_why_ what they want to do is strongly better, and they frequently just go off
and do it, leaving a pile of projects each doing the same task in different
ways.

~~~
beat
The problem with stepping into legacy systems from the outside is the "Wow,
this all sucks, let's do it over!" thing. Experienced engineers who have dealt
repeatedly with the ball of mud (hopefully) learn that completely
restructuring legacy code is, at least, a dangerous endeavor, with no
guarantee of success. The really good ones know that the unknown unknowns are
lurking in that muddy swamp and can swallow development teams whole.
Sometimes, that tangle of spaghetti code is actually a cage for a dangerous
monster.

Part of the reason the ball of mud is so ugly and messy is that repeated
attempts to fix it just get absorbed into the structure. An application that
was shaped by many different engineers and leads over many years can have a
lot of different flavors of weird.

~~~
mgkimsal
Another problem (not you doing it) is quoting a Joel Spolsky article from 15
years ago saying "don't ever rewrite from scratch". _NEVER_ ever accepting a
rewrite, based on someone's experience in a different field... that's also a
bad situation.

About 20% of the projects I've dealt with would have been measurably improved
(by pretty much every measurement you can come up with) by a complete rewrite.
One of the commonalities I noticed is that no one from the original technical
team was still around. The notion of losing "institutional knowledge" about
the code goes out the window at that point - it's already lost. You have new
people just adding on more crap (or fixing crap) without ever knowing why it
was crap in the first place.

If someone _only_ wants to keep their systems running - that's 100% fine - no
need to rewrite. If the org expects new functionality on a regular basis, and
there's no one who truly understands any of the current code, a rewrite may
make sense.

------
hasenj
One of the problems with popularity is you will get a lot of mediocre
developers who don't know what they are doing. You know, the people who pick
up tools only because they think it will land them a job more easily. The same
people who picked up VB.

You can't build a product with a mediocre team, no matter what tools you have.

If you have a decent team, I think they will manage to build great products
even with not-so-great tools. Of course, if you force them to use shitty
tools, they might just decide to leave.

~~~
abvdasker
_One of the problems with popularity is you will get a lot of mediocre
developers who don't know what they are doing. You know, the people who pick
up tools only because they think it will land them a job more easily. The same
people who picked up VB._

I understand that you're saying it's a bad practice to adopt a tool for
reasons other than its utility in addressing a specific problem.

But is it really so terrible for a developer to want to learn a new tool so he
can broaden his skillset and make himself a more attractive candidate to
potential employers (so long as the tool is the right one for the task at
hand)?

~~~
hasenj
What I'm actually saying is a bit more subtle.

There are types of developers who are very limited in what they can do. They
learn some basic "tricks" in the form of "If I write this magic spell, I get
this magic result".

These are the mediocre developers.

They tend to pick some popular tool and try to market themselves as "an X
developer" where X is something that's popular right now.

Then there are developers who can _create_ magic. They can build something
almost from scratch, without needing a library that already does it for them.
These developers can pick up X or Y or Z in a week or two, and then they will
become 20x more productive with it than the mediocre developers are.

I'm not saying it's a bad thing to pick up a tool to market yourself as an "X
developer". I'm just saying there's a lot more mediocre people in the pool of
"X developer", and if that's what you go looking for, you'll get mostly
mediocre people.

~~~
taurath
At the same time, there's a very large number of managers and HR people who
want someone who has worked with those libraries, period. They want people who
will tell them "I'm an X developer".

~~~
hasenj
I think that companies managed by these types are being naturally-selected out
of the software products market.

------
Ologn
He talks about the virtue of using new and untested languages and frameworks.
The example he uses is Twitter's use of the relatively new Ruby on Rails
framework.

Unfortunately for him, this is a bad example. Twitter was notorious in its
early days for being down all the time. So Twitter decided to migrate to Java
- a dull, over-verbose, yet reliable language.

~~~
kelseydh
You're making the mistake of thinking that the software Twitter builds today
should be the same as the software they built early on as a startup. You
can't make that comparison, because as a company Twitter was facing completely
different realities during those times. What were those realities early on?
Twitter had:

1. Small team, small budget, smaller set of technical skills.

2. The need to do a lot with less.

For these needs, I would argue that Rails was the perfect tool for Twitter
early on. It likely gave them a competitive advantage in their development.

However, as with all fast-growing products, nearly every app experiences a new
set of challenges every time its traffic scales up by an order of magnitude.
Twitter is an _extreme_ example, among the most difficult-to-scale apps you
can imagine (data is rapidly being created and read -- by its nature it's
inherently hard to cache).

Their original app was built by relatively inexperienced developers hacking
together an MVP. By the time they reached scale, they had a roster of senior
developers capable of rebuilding the system far more professionally than they
ever could have in the early days. Rebuilding the core functionality was
trivial, and this time, they could build it from the ground up to support
5000+ tweets a second.

------
matthewmacleod
It's an interesting thought.

I'm primarily a Ruby developer. Literally every single time I've started a
web-oriented project using something like Sinatra or Padrino etc., I've ended
up regretting it. I end up building a shitty, half-featured, buggy version of
Rails. My experience is that most other projects end up the same.

I suppose mileage may vary; perhaps it's easier to write bloated apps in
Rails. I'm not convinced that's a good reason to avoid it, though.

~~~
rcfox
I don't know anything about the libraries mentioned, but it sounds like you've
decided that the Rails way is the correct way to do things, and are trying to
shoehorn that into the others.

------
jingo
That Knuth quote by itself is interesting.

Perhaps he is not suggesting that popular ideas are likely to be wrong.

Instead, maybe he is saying that to think and develop ideas like Knuth's, one
needs a certain amount of irreverence for what is popular.

(Undue?) reverence is rampant in the software industry, in my opinion. Would
Knuth agree?

~~~
soveran
An extended version of the quote would be "Don't just believe that because
something is trendy, that it's good. I would go the other extreme where if I
find too many people adopting a certain idea I'd probably think it's wrong. Or
if my work had become too popular I probably would think I have to change." I
took it from this video:
[https://www.youtube.com/watch?v=75Ju0eM5T2c](https://www.youtube.com/watch?v=75Ju0eM5T2c)

~~~
kelseydh
Unfortunately this sentiment is why software is plagued with an unhealthy
culture of constantly needing (/wanting) to learn new languages and
frameworks.

The "elite" developers (who often don't actually maintain real production
apps, or live with the consequences of their design choices) latch onto a new
language as a fad. O'Reilly publishes a book and sure enough... Influential
senior developers who obsess over these elites start fangirling, leading them
to convince their bosses/teams/companies to also adopt this new "cool"
technology. As the popular tide of generic bandwagoners rises, the "elite"
developers begin to feel the pressure to redefine their identity again, and
soon jump ship to the next hot thing.

Then the process repeats...

------
zkhalique
Wrong according to whom?

If most people believe X, and that makes you think X is probably wrong, then
you are probably in one of the groups that believes an alternative to X. There
are lots of groups like yours, they can't all be right, and therefore your
chances of being any more right than X are slim (all things being equal). At
least with X, you have the support of the population and can fit into society
during your lifetime.

And the nice thing about X is that it has been battle-tested by many people,
and while it may be "wrong", it WORKS.

~~~
soveran
The way you put it, it looks like it's a matter of trust rather than
understanding. If you don't understand what X does and how, if you don't want
to invest the time to analyze it, then you are gambling. The fact that a lot
of people use X makes it look like a safe bet, but the point is that even if
it turns out to be a safe bet, it's still a gamble. Isn't that cargo cult
programming?

------
animatedgif
Aforementioned CTO here. Great writeup soveran (:

With the past year in retrospect, I can say with certainty that I'm quite
pleased with our decision to embrace Cuba. It's led to a highly-declarative
codebase with clear layering and very little magic.

If anyone has any specific questions I'd be more than happy to answer.

------
hyperpallium
Mainstream stuff isn't necessarily the best, but it's "good enough". The big
benefit isn't the product itself, but a market effect: multiple third-party
add-ons and services will be available. Bugs and gaps will have been found and
gaffer-taped. A large skilled employee pool, commodity-priced. Basically, "if
we _all_ make the same mistake, it will be worth someone's while to take care
of us" (baby boomer effect).

This is what makes established products unreasonably difficult to dislodge
(from a technical perspective).

At the other end of the capability spectrum, Alan Kay said there's an
exception to the rule to _reuse, not reinvent_: those who _can_ make their own
tools _should_.

------
ChrisArgyle
On the flipside, "Is this task so special that we need to break from the
pack?"

~~~
kelseydh
Exactly. Unless you're doing an app with high concurrency or real-time
updating, you can pretty much scale any Rails app to the stratosphere if the
app is properly cached and optimized for performance -- and you get to do so
while staying within the confines of useful conventions.

Really, I think the developers chose Cuba more than anything so that they
could scratch the itch of trying something new.

------
redwood
Would be interesting to check in with the team now

------
_greim_
I think the article makes a good point.

Counter-point: at least in the open source world, popularity can translate
into lots of eyeballs on the code, lots of bug fixes, better ability to hunt
down solutions on Google, hire people, etc.

Counter-counter-point: lots of people depending on a project may slow its
progress and prevent its maintainers from correcting fundamental design
mistakes, all for fear of breaking existing installs.

~~~
abarkai
Dead-on. Lots of eyeballs.

Even in a non-open-source tool, lots of users means lots of references on
Stack Overflow, and a far better chance of not encountering bugs in your
particular use-case.

Any framework of moderate size has bugs in it. The only question is, has the
bug that would slow you down been fixed yet?

That's why the size of the alternative is so important. If the alternative is
just 300 lines of code (as in this case), then even if you do run into bugs,
fixing them is gonna be trivial. But what about 3,000 lines? 30,000? ...

I wouldn't say everybody should _automatically_ go with the popular option,
because this kind of herd mentality can really be destructive. If another,
less popular tool seems better for your use, give it a fair consideration.
I've actually personally chosen that route several times, and I would again.
But by doing so you accept the strong possibility of dealing with bugs which
your competition doesn't need to worry about.

------
emodendroket
> When Rails was created, Ruby wasn't mainstream. When Twitter launched, Rails
> had been out for just four months. Adopting Ruby and Rails were bold moves
> made by people that didn't care about popularity. They were forced to assess
> the quality despite the lack of stars.

Apparently they didn't do a good job since they've long since abandoned it.

~~~
matthewmacleod
On the other hand, while it's pretty obvious that Rails isn't appropriate for
global-telecommunications-level infrastructure, it was a project that allowed
Twitter to get off the ground pretty quickly. That's a pretty valuable feature
to have!

------
meistro
I don't remember his name, but one of the creators of Angular said
(paraphrasing) "When choosing between libraries and in doubt, choose the one
that has more testing in place". This is something I've personally found true
the majority of the time.

~~~
serve_yay
Which kind of testing? Is it better for a library to be in wide distribution
and therefore battle-tested but without unit tests, or to be unpopular but
full of unit tests?

~~~
meistro
It depends on the size of the library. I'm sure people will point to some
outliers, but I'd be hard-pressed to find a large library that is both widely
distributed and severely lacking in tests.

~~~
serve_yay
> It depends on the size of the library.

Haha, indeed it does.

------
serve_yay
Sure, sure. If a tool or an idea is popular it still may not be any good.

But the thing about groupthink is, it is at its most insidiously effective
when its participants don't recognize it as such. It works best when one feels
oneself an iconoclast for saying what almost everyone around them believes.

------
sinful
"Following the crowd is easy, but it's often a shortcut to the wrong place."

If a product does not work well, it won't even get much attention or
popularity to begin with. There are reasons why products become massively
popular in the first place.

------
xtx23
For micro vs. monolithic, why does it have to be a versus decision? "I suppose
it is tempting, if the only tool you have is a hammer, to treat everything as
if it were a nail."

