
Finish your stuff - Oxitendwe
http://250bpm.com/blog:50
======
projectramo
What would that look like?

Consider an example:

You have made a little web app that beeps at certain times of the day to
remind you to do something.

Is it complete?

Oh, you want a calendar integration. That makes sense. You add it. Is it
complete?

Oh, you say you want to release mobile versions? Okay, now is it complete?

Sorry, but the iOS version needs to be updated to remain compatible. Now is it
complete?

There is a new popular calendar app that everyone uses. Should you update
yours to work with it?

The problem isn't that the software is incomplete. The "problem" is that the
world keeps changing.

To the extent that the environment doesn't change (Unix), you can "complete"
your tool.

~~~
zokier
That is called feature creep, and is usually not viewed in a positive light.
It's a real art to know how to decompose problems and design neat, contained
solutions.

I'd also argue that the "change" you are seeing is mostly illusory, but
that is another story altogether.

~~~
sly010
> neat, contained, solutions.

Contained solutions can only be designed for contained problems. Sure, if you
define a limited set of requirements you can finish your project, but you will
have limited software that is only useful in that very limited context.

"Change" might be unnecessary, but it is certainly not illusory.

~~~
wbl
Then you split the problem into contained pieces.

~~~
sly010
But then you didn't solve the problem. You solved pieces of the problem.

~~~
Jtsummers
That's a part of software engineering though.

Once you reach a certain scale, you have to decompose your problems into
subproblems. You solve those and assemble them into the final system.

The people making AMQP (following the original author's example and line of
reasoning) were solving too many problems: they had a message queue, they had
a wire protocol, they had service discovery, etc. This is all _fine_.

The author and others went on to make ZeroMQ with the intention of focusing on
the communication between nodes and various communication patterns (pub/sub,
request/reply, etc.). They had some work on what should go over the wire but
not too much, and they didn't expose things like authentication or discovery
of services. Why? Because those are critical to applications, but not to a
distilled, core message queue.

Using ZeroMQ, now, you can build up to what AMQP wanted to be. You can agree
with your customers that ZMQ will be how you coordinate between systems,
but you define your own message format using a separate standard. That
standard is now independent of the wire protocol as well. Take something like
ASN.1 or protocol buffers and define your message schema using those.

You want discovery? You agree on how services should be announced (a schema in
ASN.1) and where the service registry should live (this would be a system
configuration detail, where you specify where your system looks for a
registry, maybe even federate registries). But this doesn't redo the work of
ZMQ or the work of the serialization/deserialization format defined above.

You want to handle authentication? You pick your desired authentication
protocol (public/private keypairs for identity, a capabilities-based system,
access tokens, whatever), you again define a schema, define where the
authentication service is located (or how it's identified in the registry
above), and continue.
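A minimal sketch of that layering, assuming Python, with the stdlib's JSON
standing in for ASN.1 or protocol buffers (all names here are hypothetical):
the schema is defined independently of whatever transport, ZeroMQ or
otherwise, carries the bytes.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical schema for a service announcement. In a real system this
# role would be played by ASN.1 or protocol buffers; JSON stands in here
# only to keep the sketch self-contained.
@dataclass
class ServiceAnnouncement:
    name: str
    endpoint: str  # e.g. "tcp://10.0.0.5:5555"
    version: int

def encode(msg: ServiceAnnouncement) -> bytes:
    """Serialize the message independently of any transport layer."""
    return json.dumps(asdict(msg)).encode("utf-8")

def decode(data: bytes) -> ServiceAnnouncement:
    """Reconstruct a message from its wire form."""
    return ServiceAnnouncement(**json.loads(data.decode("utf-8")))

# The bytes could be handed to any transport (a ZMQ socket, a pipe, a
# file) without the schema knowing or caring which one.
wire = encode(ServiceAnnouncement("clock", "tcp://10.0.0.5:5555", 1))
assert decode(wire) == ServiceAnnouncement("clock", "tcp://10.0.0.5:5555", 1)
```

The point of the sketch is only the separation of concerns: swapping the
transport never touches the schema, and vice versa.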

We don't reinvent TCP every time we want a new network protocol (well, some
people do). Why would you reinvent the message queue when it's already been
distilled to a basic and fundamental framework that can be used to implement
your desired business logic? (Now, if the MQ lacks some features that are
basic or primitive, then it should be extended, because it's not complete.)

~~~
a_t48
> We don't reinvent TCP every time we want a new network protocol

Hey man, speak for yourself. Reliable UDP is where it's at.

~~~
Jtsummers
I had "(typically)" in there originally. Should've left it, but it's complete
so I won't change it now.

------
primitivesuave
There is a whole class of rhetorically attractive analogies comparing
programming to the building of physical things like furniture, bridges, and
buildings. They usually overlook that software requirements vary greatly and
can be quite complex, while the physical requirements of a chair are not going
to vary much across chair designs. Modification of software requires no
physical resources and can be executed at scale (e.g. package managers), while
modifying the design of a chair once it has shipped is impractical. Software
errors can certainly be consequential, but I'd much rather encounter a bug in
a messaging queue than a defect in the chair I'm sitting on or the bridge I'm
driving over.

~~~
Jtsummers
> They usually overlook that software requirements vary greatly and can be
> quite complex, while the physical requirements of a chair are not going to
> vary much across chair designs.

I feel as though that's somewhat addressed in the essay:

    
    
      Yes, I hear you claiming that your project is special and
      cannot be made functionally complete because, you know,
      circumstances X, Y and Z.
    
      And here's the trick: If it can't be made functionally
      complete it's too big. You've tried to bite off a piece
      that's bigger than you can swallow. Just get back to the
      drawing board and split off a piece that can be made
      functionally complete. And that's the component you want
      to implement.
    

Software engineering is an offshoot of systems engineering, which is often
described as "managing complexity". I don't think it's at all unreasonable for
people to look at the methodologies actual systems engineers use and learn
from them. One of those things is decomposition of tasks, what this article is
about. If you're making a message queue system, divide it into subsystems that
are each more feasible. Your over-the-wire data format protocol is one thing
(ASN.1, protobufs, etc.), your concurrency library and patterns in your
systems are another (go's channels, erlang's processes, libmill from the
article), your database is another. Once you have your system decomposed in
this fashion you can do each part, and actually finish them, and assemble your
complete system by combining them.

Once they're small enough, the Unix philosophy applies: write components that
do one thing and do it well, and design them to communicate with each
other.
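In miniature, that philosophy is a shell pipeline. A Python sketch (function
names are hypothetical) of three single-purpose components composed the way
`cat | grep | wc -l` would be:

```python
def lines(text):
    """Split input into a stream of lines (the 'cat' of the pipeline)."""
    yield from text.splitlines()

def matching(pattern, stream):
    """Keep only lines containing pattern (the 'grep')."""
    return (line for line in stream if pattern in line)

def count(stream):
    """Count the items in the stream (the 'wc -l')."""
    return sum(1 for _ in stream)

log = "ok\nerror: disk full\nok\nerror: timeout\n"
# Each piece does one thing; composition does the rest.
assert count(matching("error", lines(log))) == 2
```

Each function is complete on its own and knows nothing about the others; the
pipeline is assembled at the point of use.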

~~~
derefr
> If it can't be made functionally complete it's too big.

Too big for whom? Most people on HN aren't building systems software or
embedded software with a strictly-defined purpose; they're building an app
business or a service business, where the "purpose" is "to make money" and
adding additional features or "checkboxes" is how you attract more customers
to make that money.

A _program_ can be done. A _product_ usually can't be, as long as its authors
want to continue to rely on it to put food on the table.

------
googamooga

        Out of the frustration with AMQP I've started my own ZeroMQ project.
    

I doubt that Martin was the sole initiator of the ZeroMQ project. I think that
the late Pieter Hintjens, the original author of AMQP, deserves some credit as
well, at least out of respect[1][2].

[1]
[https://en.m.wikipedia.org/wiki/ZeroMQ](https://en.m.wikipedia.org/wiki/ZeroMQ)
[2]
[https://en.m.wikipedia.org/wiki/Pieter_Hintjens](https://en.m.wikipedia.org/wiki/Pieter_Hintjens)

Disclaimer: I knew Pieter personally and met him several times at his home and
at conferences.

~~~
lebca
As someone who only discovered Pieter by finding his last article posted on
HN, and then subsequently learning a lot from reading many of his other
articles on his website (different perspective on life, humility, consulting
in tech outside of SV, presentations), here is his homepage for those who are
curious: [http://hintjens.com](http://hintjens.com).

Three examples I'd recommend:

Life, Consulting, Humility, Calming Down:
[http://hintjens.com/blog:125](http://hintjens.com/blog:125) (confessions of a
necromancer)

Presentations: [http://hintjens.com/blog:107](http://hintjens.com/blog:107)
(ten steps to better public speaking)

Life, Humility, Time:
[http://hintjens.com/blog:123](http://hintjens.com/blog:123) (fighting cancer)

He's one of the few online writers who keeps my attention no matter the
length. His humor is fun, and his imagery and storytelling style are engaging.

[edit: formatting]

------
tomc1985
Finish your work... only to have legions of newb "hackers" bitching at you and
calling your project dead because "(s)he hasn't posted an update in months"

Death to evergreen software! Let projects be finished!

~~~
onion2k
This is a really good point. I'm guilty of seeing a repo that hasn't had a new
commit in 18 months and choosing not to use that code because it's 'not
supported any more'. Maybe maintainers could help by making it clear in the
Readme or something that their code is still supported but just doesn't need
any changes.

------
xelxebar
I get the problem with feature creep (who wants a text editor that also
implements support for tetris?! ^•^), but as others have pointed out, that
sentiment probably isn't precise enough to be all that useful.

Being a command line junkie, what stands out to me is the composability of my
tools. This composability, I feel, is the key that lets us separate concerns
and write small, standalone tools.

~~~
TeMPOraL
> _who wants a text editor that also implements support for tetris?! ^•^_

Well, I do :) - given that said editor also has better UX than mainstream
operating systems (in terms of efficiency, extensibility and
interoperability), and it's precisely because of that that it's possible for
someone to implement Tetris in it.

------
hybridtupel
This sounded oddly familiar, until I read about the projects Libmill and
Ribosome. And yeah, it's a post already submitted in 2015 and should be
labeled 2015.

~~~
axus
The link to Ribosome led to a shopping site, I was disappointed.

~~~
tedmiston
Found the repo -
[https://github.com/sustrik/ribosome](https://github.com/sustrik/ribosome)

------
vortico
Any software tool that is deemed "complete" is not really complete. Even
TeX, which will get its final version when Donald Knuth feels the end
of his life is near, is not complete, because it is constantly expanded with
reimplementations like XeLaTeX and LuaTeX and with thousands of plugins in
constant development. GNU utilities like ls are not complete because they are
built on top of operating system syscalls, which change due to updated
filesystems, and on glibc, which releases a new version every 6 months. A
change upstream might trigger a change to ls, so it's not finished, just
evolving slowly. Just look at its git history
[http://git.savannah.gnu.org/cgit/coreutils.git/log/src/ls.c](http://git.savannah.gnu.org/cgit/coreutils.git/log/src/ls.c),
where the last change was _three weeks ago_. GNU Make, mentioned in the
article, is still evolving. Look at _its_ history:
[http://git.savannah.gnu.org/cgit/make.git/log/](http://git.savannah.gnu.org/cgit/make.git/log/)

~~~
Jtsummers
Re TeX: That's actually covered by the author. TeX itself is complete. Knuth
isn't adding new features; instead, the version number is converging to pi (a
new digit added with each bug-fix release) [0]. The other variations are not
TeX (they're reimplementations), or they're extensions built on top of the
completed TeX program.

[0]
[https://www.tug.org/TUGboat/tb11-4/tb30knut.pdf](https://www.tug.org/TUGboat/tb11-4/tb30knut.pdf)
[PDF, in case it wasn't obvious]

~~~
vortico
Yes, that's what I meant to imply. So yes, TeX is finished software but the
boundary between what is known as TeX and "not TeX" is blurred by the
thousands of extensions in the TeX environment that people _must_ use when
using anything beyond the basics.

Basically what I'm saying with that example is that you can draw a line around
some code and say "this is TeX, and it is finished." But what users mean when
they say "I use TeX" will never be finished and will be forever expanding.

TeX is the hardest case to argue, because it is probably the most stable
software in existence for what it's capable of, so any other software you use
is much less finished.

~~~
derefr
> the boundary between what is known as TeX and "not TeX" is blurred by the
> thousands of extensions in the TeX environment that people must use when
> using anything beyond the basics

I would say the boundary is very well defined.

Consider Internet wire protocols: to access this website, you're using HTTP
over TLS over TCP over IP [etc.]

That doesn't mean that TCP or IP are "changing" when HTTP or TLS change. TCP
and IP are feature-complete, low-level layers that each just do one thing
well. We add on more layers to get the effect we want, and those layers change
frequently, but changing those layers doesn't mean the lower-level thing
"changes" as a part of it.

I think the problem with TeX is just that people have the nomenclature
flipped. People think of things like LaTeX as being "what TeX is"—that LaTeX
"is an implementation of" TeX. But that's off; it'd be like saying that HTTP
is an implementation of TCP/IP, or that Ubuntu is an implementation of the
Linux kernel.

In reality, LaTeX et al. are _software distributions_: they _include_ TeX as
their text-constraint-processing engine, just like a Linux distro includes the
Linux kernel, or a Unity game includes the Unity engine. It's one component,
with a well-defined function.

------
EGreg
I respectfully disagree.

While it's very good to encapsulate functionality, and have stable and well-
defined interfaces, there is also a great value to having a unified platform
and community that builds interoperable things.

Would you rather assemble your project from 250 different libraries, some of
which may be incompatible with other ones? Sure, each may solve a tiny
problem, and they may all be orthogonal (best case) but there is a major need
for a unifying paradigm above all those components so they can all work
together.

Here is a counterpoint to solving tiny problems:
[http://www.haneycodes.net/npm-left-pad-have-we-forgotten-how-to-program/](http://www.haneycodes.net/npm-left-pad-have-we-forgotten-how-to-program/)

Basically, I prefer to have a growing community working on a growing snowball
of components that are all interoperable and can be assembled like Lego
bricks. I think Wikipedia has such a community. RDF has several such
communities. The Web has such a community. This really creates a lot of value
by having an exponentially growing platform where many things work with many
other things.

~~~
Oxitendwe
Tiny problems that are solved provide a common part that many people can be
reasonably sure is free of bugs. If thousands of people use left-pad, you can
be reasonably assured that it has no bugs (or else people wouldn't use it, or
a bug report would be filed; "many eyes make all bugs shallow"). If a
thousand people each write their own left-pad, you will get many versions with
bugs that are never found, because there is a much higher entropy/surface
area.
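Even something as small as left-pad hides edge cases that a one-off version
can get wrong. A hedged Python sketch (not the actual npm package's code):

```python
def left_pad(value, width, fill=" "):
    """Pad value on the left with fill until it is at least width long.

    Edge cases a hasty reimplementation often misses: a width shorter
    than the input (return it unchanged, never truncate) and a
    multi-character fill (rejected here for predictability).
    """
    s = str(value)
    if len(fill) != 1:
        raise ValueError("fill must be a single character")
    if width <= len(s):
        return s
    return fill * (width - len(s)) + s

assert left_pad("7", 3, "0") == "007"
assert left_pad("hello", 3) == "hello"  # shorter width: unchanged
```

The logic fits in a few lines, yet each branch is a bug waiting to happen in
one of those thousand private rewrites.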

~~~
EGreg
I agree with that, but I just think there should be some overarching platform
/ interoperability into which all these small pieces can fit. And also version
pinning in package managers, where you manually check diffs and compatibility
before upgrading anything.

------
fillskills
The whole point of software is that it is soft. Malleable. Ductile. That's its
strength and sometimes a weakness. It makes software hard to deal with (see
the many late projects), but it also makes it awesome, because you can change
it with changing requirements.

Yes, it's important to generally finish what you start. Equally important is
the capacity to accept, learn and make changes.

------
alkonaut
Software is never complete. It might be usable (in the world outside personal
projects, perhaps "shippable"), but never complete.

I think the point the author is making is to not start projects that aren't
likely to have a minimum viable product that is usable and achievable within
reasonable time.

I'm not so sure. It's sometimes more fun to write the skeleton of a compiler
and abandon it than it is to write something minimal. The only (proper)
way to learn what tradeoffs exist in SQL database design is to make one.
Making a SQL parser, B-trees and so on is HARD but rewarding. After a week
you'll know a lot about a lot of things. But you likely won't have a working
SQL database. If you aren't a very special kind of crazy you won't reach the
finish line, and you'll abandon it somewhere along the way. So was it all a
waste? Is exploratory hobby programming somehow a bad practice? I can't see
that it is.

~~~
Jtsummers
The point of the author is to define your targets. When you're building a
compiler, if you're also building a parsing engine then maybe the parsing
engine should be its own project, with the compiler a separate project that
uses the engine. When you build a web app you don't closely tie in the web
server code; you use an existing web server or build a new one. When building
a web server you don't build the TCP stack with it. You make the TCP stack and
then have the web server use it.

Complex systems are constructed from smaller, less complex subsystems.
Complete them, let them stand alone, and build out your final systems from
them. We generally don't rewrite tools like grep, because it exists and it
works well; we may port it to new platforms. And we don't change the effects
of the -R flag out from under the users (clients); it is what it is. The
interface is, essentially, completed.

If you're leaving incomplete hobby projects in your wake (all of mine are),
that's not what he's talking about.

His specific experience was going from AMQP (large, complex specification) to
ZMQ (smaller, focused on the essentials and let complexity be built on top) to
nanomsg and libmill (even smaller, more focused). So in theory his nanomsg and
libmill could be considered complete, as we'd consider TCP "complete", today
(from a client perspective at least). And we can work our way back up plugging
it into systems to recreate the capabilities of ZMQ, which can be plugged into
systems recreating the capabilities of AMQP. Which can then be used to solve
our business problems.

------
Steeeve
I find this funny coming from the originator of ZeroMQ. When I first tried out
ZMQ, it took me all of 12 seconds to figure out that if you didn't pass
exactly the message that it was expecting, it crashed the server.

That problem along with I'm sure hundreds of others were fixed along the way.

You don't achieve perfection in software development. Ever. Even when
developing something for yourself where you define the scope and you leverage
it for a menial task. There's always better. There's always more robust.
There's always something else you can do.

That's the draw to software development for many of us. There is a constant
challenge awaiting our brain. You can consistently push yourself in a new
direction that you care about.

The downside is that feeling of a completed accomplishment really is just an
arbitrary release ceremony. What software developers need to get good at is
letting go and moving on.

------
codingismycraft
Great software projects are never finished; instead they always evolve and
improve, similarly to living organisms and unlike material things.

I completely disagree with your analogy to a carpenter who builds a chair.
This metaphor is wrong and responsible for a lot of misunderstandings when it
comes to software development.

~~~
0xdeadbeefbabe
How about the words "fully baked"?

I also don't agree that "finish your program" needs to be added to the unix
philosophy because a tool that does one thing and does it well is finished by
definition.

~~~
codingismycraft
Except in very trivial cases, any piece of software contains bugs that need to
be fixed, needs to add features based on new user stories, and must adapt to
constantly evolving hardware. Linux is a perfect example of this constantly
evolving, never-finished paradigm, and if anything it can be used not to
support but to invalidate the "finish your program" philosophy.

------
gkya
> Imagine the carpenters were like programmers. You bought a chair. You bought
> it because you've inspected it and found out that it fulfills all your
> needs. Then, every other day, the carpenter turns up at your place and makes
> a modification to the chair.

505 Bad Analogy. Carpenters work all the time on new designs and better
chairs. When a programmer releases a new version of his program, it's as if
the carpenter made a new chair and put it up for sale on Amazon or eBay
(aren't these kind of like package managers for physical goods?). You may want
to buy that new one, but you may also not. Similarly, nobody forces software
upgrades down your throat. But sometimes people change; they want to sit
on divans or nail-beds instead of chairs, the carpenters follow the trends and
produce those instead of chairs, and people expect to find them when they
come to visit you. You can say "sit on chairs or f*ck off!", but you can't
avoid the consequence: loneliness. Similarly, you can hold on to your select
programs in select versions, maybe backport patches for them, but the
protocols change, the services change, the world goes on. You may say eff-off
and stick to your programs, but the consequence is that it'll become harder to
share experiences and information with others, both ways. Nobody fixes or
modifies your stuff should you not want them to; they may try hard to sell to
you and bug you, but you are always the one that buys. Mobile phones do by
default, but you can opt out.

~~~
nfriedly
> _nobody forces software upgrades down your throat_

Windows 10 does, as does every SaaS web app out there.

~~~
gkya
But isn't that a bit like sitting on someone else's chair? You can't control
things you don't own.

~~~
derefr
Yeah, I'd compare that more to living in a hotel, and coming into your room
every day to find that _the hotel_ has replaced the chairs. The hotel, here,
is the web service, or your OS's package manager.

If you have a problem with constantly changing chairs, you're free to find a
new hotel that doesn't do that—probably one with "LTS" in the name.

~~~
gkya
I was referring to individual software packages; an entire distro is a bit of
a different beast. Going with the same chair analogy, well, you'd mostly care
about the chair in your room, no? The one that you use the most. That's
similar to what I do with Ubuntu (and any other OS): it takes care of all the
programs except Emacs, which I compile myself following master; Firefox
Developer Edition, which I'll use until 57 comes out; and my scripts and
utilities that I've written. It's impossible to micro-manage every bit of
software I directly or indirectly use, but it's a nice trade-off to manage the
most important bits myself and trust the rest to the distro maintainers.

------
kenshi
> Please join me in my effort and do finish your projects. Your users will
> love you for that.

Will they?

If your users are programmers, they _might_ love you for that.

For pretty much all other users, they will want more. "It would be great if
your app did...". I would argue that users now _expect_ software to change and
add more features.

And why wouldn't they, when pretty much all software they use is doing this?
If your app isn't growing and changing, to users it may well look dead: "It's
abandonware".

~~~
icebraining
That's actually not my experience; I see people using "old" software all the
time, and rarely looking for newer alternatives even if they feel that some
things could be added or improved. As just an example, my mother still uses
ATnotes - last released in 2005.

And I think the limited data we have bears that out, like the number of people
who upgraded their browsers before it was automatic.

~~~
aeorgnoieang
And my experience is that most users want some combination of both. I find
that combination even in myself, sometimes even for a single piece of
software: "I wish they'd add Markdown support!" "I wish they hadn't changed
the UI!"

------
oddlyaromatic
> it's almost impossible to find anything that's truly finished, not simply
> abandoned.

This goes far beyond programming. "completeness" is nice but is highly
subjective. For work that can be infinitely revised, refactored, recut,
"finished" is an idea you impose from the outside, in relation to specific
criteria.

The author seems to be saying that there is value in creating smaller,
independent modules that can be said to be "finished" when they do their thing
and no further functionality is added or expected to be added. Further
functionality, if needed, would come from connecting "finished" things
together, I suppose, or making new things, not over-extending a component to
handle all cases.

This moves the complexity around a bit and maybe it's more efficient and
easier to handle this way.

Artists have a similar problem. It's hard to know when a poem or a song is
"functionally complete" but eventually you have to let them go or you never
get anything out there.

I'm all for conscientiously abandoning work, especially if the state of the
work is well documented so that somebody else could, in theory, pick it up.

~~~
tedmiston
> This goes far beyond programming. "completeness" is nice but is highly
> subjective. For work that can be infinitely revised, refactored, recut,
> "finished" is an idea you impose from the outside, in relation to specific
> criteria.

This is why I think a "definition of done" is so valuable on a dev team,
especially in a startup environment. Without it, there's just natural
inconsistency or variability across people and features.

------
pronik
The author implies that by releasing new versions of our software with
security patches or features we somehow differ from construction, where houses
are considered finished. They are not: there is a lot of maintenance to be
done, painting, replacing pumps, sewage pipes, insulation, cables. I even had
a house where we got a new balcony some 20 years after the house was "done",
ditto a completely new elevator recently, some 40 years after finishing the
house. I'm sure that if constructing houses were as easy as rewriting
software, we'd get a lot of other convenience changes just for the sake of
being modern.

Software engineering is not flawed. It just lacks hundreds of years of
experience which it will gain with time.

------
rkuykendall-com
> Imagine the carpenters were like programmers. You bought a chair. You bought
> it because you've inspected it and found out that it fulfills all your
> needs.

Then, every other day, the carpenter turns up at work and tries to improve how
they make chairs!

Why can't they just stop changing how chairs are made? I like my chair, but I
go back to the store after a few years and they're all different! Carpenters,
finish your chairs. Those ideas you have? Stop having them. I like chairs how
they are now.

~~~
gcoda
That is how the market works: people are not satisfied with the sitting
experience and constantly try to improve on it. Yet the QWERTY layout is still
in use by 99.9%; it satisfies typing needs with a good enough experience, so
nothing has changed for ages.

~~~
TeMPOraL
> _That is how market works, people not satisfied with sitting experience and
> constantly trying to improve on that_

Not sure if you're joking here, but if not - then no, it's not because of
that. Companies just make new, "better" versions, market them hard, and are
happy, because when they pull the old ones from the stores you have no choice
but to buy one of the new ones.

------
kmicklas
The reason this is an implicit principle of the Unix philosophy is because the
Unix philosophy is to be as lazy as possible as the tool-creator and push all
complexity onto the tool-user. Thus we get things like Go, regular
expressions, and null pointers.

People outside this school of thought don't finish their projects because
their projects actually try to solve the underlying problems in computing,
which are inevitably hard.

~~~
erikbye
Examples of underlying computing problems that are unsolvable if one follows
the Unix philosophy? Keep in mind that it's often misunderstood what the
primary tenets of the Unix philosophy are; your comment demonstrates you, too,
have misunderstood.

~~~
kmicklas
What have I misunderstood?

------
galaxyLogic
The difficult thing is to divide a big problem into a set of smaller ones that
together solve the problem BUT such that those smaller solutions are valuable
on their own, not only as part of the bigger solution that is your eventual
goal.

So the problem is not so much to solve a problem but to FIND a problem whose
solution is useful on its own and perhaps also as a sub-solution to bigger
problems.

------
elihu
I don't agree. The goal of software isn't usually to discover some perfect
abstraction with nothing left to add and nothing to take away; the goal is
usually to solve some practical real-world problem. If your software satisfies
the requirements without taking too much time or effort to create, then it's a
success.

If a project is going to be used by a lot of people or become some kind of
industry standard, then it makes sense to spend some time up-front and figure
out a clean interface and a well-defined feature set so you don't have to make
non-backwards-compatible changes. It also makes sense to look at the feature
set and figure out if you have the time and ability to complete those
features, whether they're important enough to justify the effort, and whether
there's an easier option that satisfies the requirement without having to boil
the ocean. All this should be driven by the project goals, though, not some
abstract ideal that all programs should be complete.

Some of the most useful software is going to be forever incomplete. Web
browsers, CAD packages, operating systems, etc... I'm glad there are people
working on software like that.

~~~
Jtsummers
The author's point is not that there shouldn't be new systems or that no
systems should ever change. He's not arguing for a perfect abstraction,
either.

Following his development path (AMQP->ZeroMQ->nanomsg + libmill + others) his
interest was in decomposing the larger system (AMQP which included message
queues, databases for storing messages, wire protocols, etc.) into its smaller
parts so that you could build back up to it in a better way. It happens that
when you get to those lower levels, you can call them done.

If you're building a web browser, what's the proper scope for your project?
Should it actually consist of all of these things in one project: TCP stack,
HTTP client, JavaScript interpreter, HTML parser and renderer, keyboard
drivers, mouse drivers, touch screen drivers, etc.?

No. You build on the existing OS for the drivers. You build on the OS for the
TCP stack. You may build an HTTP client, but odds are you can reuse an
existing one, or the one you make could be partitioned off into a separate,
and hopefully reusable, library. Your JavaScript interpreter can also be its
own project/library. And most of that already exists, so you can focus on HTML
parsing and rendering and glue in the other components. Your HTML parser
doesn't need to know how to send an HTTP request, and neither does your
renderer. Only your browser needs to connect the HTML renderer and the HTTP
client.

~~~
elihu
> Following his development path (AMQP->ZeroMQ->nanomsg + libmill + others)
> his interest was in decomposing the larger system (AMQP which included
> message queues, databases for storing messages, wire protocols, etc.) into
> its smaller parts so that you could build back up to it in a better way. It
> happens that when you get to those lower levels, you can call them done.

I think breaking things up into small pieces so that it's modular and you can
work on one part at a time and then assemble a working system out of
individually-testable working parts is a good thing (especially if you can
reuse parts that other people have already made), but I don't think that's
quite the argument the author was making.

I think the author was arguing that you should reduce the scope of the task
any time you notice open-ended requirements with no clear completion criteria.
I think that's good advice most of the time, but there are exceptions. It's
okay to have a project that isn't "finished" if it's useful.

------
georgeecollins
What I love about games is that at some point they have to ship. They used to
have to go in boxes. But even now that they don't, there is still a date
when they have to go live in stores. Deadlines terrify me and they create
great anxiety in my life. But ultimately they are cathartic.

------
Walkman
I don't think the chair example is appropriate, because there are different,
more comfortable chairs today than, let's say, 100 years ago. So the
"chairmans" are continuously improving the chairs, just not the one you
bought.

------
oldrny
Software can be usable but 'incomplete', in the same way that stools and
Aerons are both usable, but stools weren't an unfinished technology. If your
bug count is low and your software has features, it's already done.

------
rbranson
Except that grep and make have continued to be augmented over time?

------
1001101
Somebody has given up a significant amount of their time to give you something
that's free as in beer and speech. You get at least what you pay for; don't
ask for your money back.

~~~
za_creature
The one thing you can _expect_ to receive in exchange for said significant
donation of your time is recognition for your work.

If your work consists of half-finished code that you then attempt to pass off
as a usable product, the expected reward is shunning.

There's a difference between pushing your CS101 homework to github and
publishing your package to npm. As with academia, once you publish you
implicitly vouch for the quality of your work, and your reputation is
permanently tied to it.

