
The Hustler's Manifesto - choxi
http://blog.niroka.com/post/12253356252/the-hustlers-manifesto
======
abstractbill
It's a shame most of the comments are focusing on the not-testing point. The
last point was much more interesting (if I may paraphrase, engineers in an
early-stage startup should be responsible for _proving_ that the features they
build actually add value).

Anyway, I agreed with pretty much everything, personally.

~~~
choxi
agreed, i hope after all the testing controversy settles people will walk away
with that. that was my favorite point too.

upvotes for you

------
snorkel
"Don't worship the code." Well put. I've personally witnessed a situation
where a startup was paralyzed by code narcissism. The business was asking for
features they couldn't get because implementing those features would mean
violating the lovely design patterns chosen by the architecture astronauts.
Every developer wants to believe that someday their code will be released as a
heroic open source project that solves everyone's problems, but in the
meantime they neglect to solve the problem their customer is having right now.

~~~
ErikRogneby
I've worked with a software vendor (as a Fortune 50 client) where their
architecture and product vision were solid, but the engineering team was
deciding which aspects of both they would implement. Not when they would
implement them, but _if_ they would. Imagine if a carpenter or an autoworker
did this. The fallout was a major contributing factor to their loss of new
business.

------
mgrouchy
The main problem with this is that the author is creating a false dichotomy
between writing great software and running a business (hustling). I don't
think you have to forgo one to succeed at the other.

I just think you need to decide up front what your business's core competency
is. If it's supposed to be writing great software, then should you really be
doing things like "Stop being such a nerd"?

~~~
choxi
by "writing great software" do you mean writing software that other
programmers will admire? or writing software that your users will love?

you shouldn't let the first one get in the way of the second.

~~~
mgrouchy
I mean, writing software that is usable, useful, maintainable, robust and
reliable. This is the type of code that both other developers AND your users
will love.

------
derekreed
I think it's hilarious that most of the comments I'm seeing here so far seem
to be doing the exact over-analysis that this article is trying to dissuade us
over-analytical types from doing ... maybe?

Just build it motherfuckas?

~~~
sanderjd
What I find hilarious is the idea that things can "just" be built
(motherfuckas), rather than built through lots of hard work, dedication, and
yes, even that apparently dirty word, _analysis_.

What this article is suggesting sounds great for prototypes, which is what
startups should be doing early on, but not forever.

------
spenrose
"Here’s what you do instead: write integration tests for the critical parts of
your application."

Yes! The "unit" in unit tests makes sense when "units" are what define your
code -- because you are delivering units. If you are delivering applications,
focus your efforts on having quality tests for those.

(Of course competent coders will develop units, and collections of units aka
libraries, along the way, and of course unit tests as such will be right for
them. But the measure of an application must be taken at the application
level, not at the unit or aggregate-of-units level.)
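
Something like this is what I have in mind (a rough sketch against a
hypothetical Sinatra app, using rack-test): drive the application through its
front door and assert on what the user actually gets.

    require "minitest/autorun"
    require "rack/test"
    require_relative "app"  # hypothetical: defines MyApp, a Sinatra application

    class SignupFlowTest < Minitest::Test
      include Rack::Test::Methods

      # rack-test drives whatever this returns as a Rack app
      def app
        MyApp
      end

      def test_signup_then_login_shows_welcome
        post "/signup", email: "a@example.com", password: "secret"
        assert_equal 302, last_response.status

        post "/login", email: "a@example.com", password: "secret"
        follow_redirect!
        assert_includes last_response.body, "Welcome"
      end
    end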

~~~
bunderbunder
For that matter, what the author complained about in the article didn't strike
me as unit testing. It was testing, and it may have been done using a unit
testing package, but it sounds like what was being tested was not units.

------
joelhooks
As a consultant I am constantly called onto projects that are in varied states
of "prototype gone wild" - they seem to already be following this advice on
many levels and pay the price later. YMMV

~~~
m0th87
Right, which is why he states TDD and agile development are the best fit for
consulting. It makes perfect sense there.

~~~
joelhooks
It is WAY more expensive to do later than it is to keep this sort of thing in
mind up front. I've done a lot of work with product startups that have dug a
massive tech debt hole that is actively affecting their bottom line. Rickety
foundations.

TDD fully supports the concept of prototyping. You don't unit test a
prototype. The trick is to STOP that prototype when the concept is proven and
move on to something you can really build on.

~~~
anthonyb
A mess is not technical debt!
[http://blog.objectmentor.com/articles/2009/09/22/a-mess-is-n...](http://blog.objectmentor.com/articles/2009/09/22/a-mess-is-not-a-technical-debt)

~~~
gfodor
This is a nice idea, and perhaps is the original meaning of the term, but in
practice sometimes a mess is exactly what you should be creating. Since you
know you are going to be throwing it in the trash, this mental mode of
prototyping fosters creativity like no other.

------
blhack
Am I the only person who is _incredibly_ turned off by the term "hustler"?

It's right up there with "I'm an idea guy".

In fact, I think that "I'm an idea guy", has just morphed into "I'm a
hustler".

~~~
invalidOrTaken
I don't know...a hustler actually _solves problems_. Nasty, gross, annoying,
fuzzy problems that have no right to exist, but do anyway. Hackers hate these
problems[1].

I moved into a new apartment complex a few months ago from out of state. The
landlord was going to put us into a different living situation than we
contracted for. All our calls were met with a bland, "We ran out of room."
Resident hustler roommate had a talk with them and they meekly _knocked down a
wall_ to make things right, and gave us a discount.

Hustling involves a very different set of skills than hacking. Hacking rewards
thinking carefully and doing the Right Thing The First Time, whereas hustling
rewards more of a shotgun approach. Maneuvering one's way past secretaries,
remembering 10 acquaintances who might be talked into putting up seed money
and calling all of them---these are things hackers _hate_ to do if they can't
accomplish them cleverly, and so they often go neglected at great products
with hackers behind them (because no one else could be).

I'll take a good hustler over an idea guy any day. If they're a legit hustler,
they're worth their weight in gold.

[1]<http://www.paulgraham.com/gh.html>, search "nasty"

~~~
feralchimp
"Hacking rewards thinking carefully and doing the Right Thing The First Time"

You have just won the gymkhana for linguistic drifting.

------
petercooper
_The point is, code evolves. It’s never “done”, so don’t write tests that
presume it will be static and your interfaces won’t change._

That's exactly _why_ one does TDD, so that you can both be guided in your
design (code that's hard to test is probably crappy code) and also have
confidence when it comes to refactoring. This is _particularly_ important in a
dynamic language.

Good tests are not written with the requirement or presumption that the
codebase will be tightly coupled or be difficult to change. Good tests are
written entirely to _support_ change and to give the developer confidence in
the ability of that code to change easily.
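
To illustrate (a contrived sketch of my own, not from the article): a good
test pins down observable behaviour and says nothing about internals, so the
implementation is free to change underneath it.

    require "minitest/autorun"

    class PriceCalculator
      def initialize(rates)
        @rates = rates
      end

      def total(items)
        items.sum { |item| @rates.fetch(item) }
      end
    end

    class PriceCalculatorTest < Minitest::Test
      def test_total_sums_known_items
        calc = PriceCalculator.new("apple" => 3, "pear" => 2)
        assert_equal 8, calc.total(%w[apple apple pear])
      end
      # Nothing here peeks at @rates or at how #total iterates, so the
      # guts of the class can be rewritten without touching this test.
    end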

I think we need more and more materials out there on good TDD and OOD because
I'm finding that a lot of really smart people have just never seen or
experienced it and have been turned off by the first few slimy rungs of the
ladder (including me, once!)

~~~
arctangent
I agree with a lot of what you say.

However, it takes a lot more time to write code guided by tests than it does
to just write code. And you may end up throwing away a lot of those tests and
rewriting them when the code changes in response to customer feedback.

In my mind that's the main argument against TDD.

I'm happy to write tests after the v1.0 of an application is shipped and I
have sign-off from my customer, because it is clear then that the time is not
wasted - they have the working product as soon as possible, after all.

However, in my world it's inevitable that I'll be assigned to another task
almost immediately after I ship a product. Those tests often don't get
written.

I suppose I'm accumulating a lot of technical debt, but everything seems to
work...

~~~
21echoes
i feel the exact opposite-- if you have a good test, the code is nearly
trivial to write. the key is to always bounce back and forth between tests and
code, so that you're always just writing the minimum possible thing that
passes, which (when looking at a test) is super simple.
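
for example (a toy of my own): the test pins the behavior down, and the
minimum implementation that passes is almost a transcription of it.

    require "minitest/autorun"

    # the minimum thing that passes the tests below
    def slugify(title)
      title.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-+|-+\z/, "")
    end

    class SlugifyTest < Minitest::Test
      def test_lowercases_and_dashes
        assert_equal "hello-world", slugify("Hello World")
      end

      def test_strips_leading_and_trailing_junk
        assert_equal "hello", slugify("  ++Hello++  ")
      end
    end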

"throwing away tests due to customer feedback" is a red herring-- if you
weren't throwing away tests, you'd be throwing away code. additionally, the
goal with all development is to only sit down and write code when it's
_extremely_ unlikely it's going to change. you should be sure that you're not
going to drop this thing you're writing, because you've done the necessary
user research to be sure the feature is needed and usable.

"I'm happy to write tests after the v1.0 of an application is shipped and I
have sign-off from my customer". if you're shipping untested code, that's a
huge problem. the last thing you want is to deal with scaled feedback from a
1.0 release alongside ironing out code quality and reliability with tests.

~~~
petercooper
_additionally, the goal with all development is to only sit down and write
code when it's extremely unlikely it's going to change._

To a point. But as a general rule, I don't agree. The goal should be to write
code that's both loosely coupled and highly cohesive enough that you can defer
as many changes or decisions as possible to the latest point possible. While
this might involve spending more time in planning, the process of developing
will itself impact the overall design (in most, but not all, cases).

------
wpietri
I think this is too coarse-grained.

Code I plan to keep? I'm writing unit tests for anything that I'm worried
might get broken. Code I'm going to throw away? Fuck it. Rarely worth testing.

The problem comes when people start treating prototype code as production
code. That tells me that a) their prototype was probably too rich, and b)
they're asking for trouble by building on a shaky foundation.

I think the solution is to be very clear about whether a given chunk of code
is a prototype or for real. When a prototype pays off enough that you want to
take it seriously, rewrite it. Tests and all.

------
eridius
The points made here seem like a great idea if your goal is to get something
out the door as fast as possible. But they don't seem so great if your goal is
to actually produce something that you can continue maintaining over the next
few years. So maybe if your goal is to get bought out, then you only care
about having something working for a short time, but if your goal is to build
great software that you can continue to build and maintain for 5 or 10 years,
then you might need to rethink some of this.

------
jmathes
TDD does work, and I use it in practice. I know a whole company full of people
who all use it in practice, and it works.

I have a guess about why HN likes to upvote opinion pieces that hate on TDD.
TDD initially feels like it takes discipline. It's natural to dislike things
that require discipline. I think people are trying TDD, finding that it
doesn't work as advertised, and gravitating towards the most pleasant
explanation: that TDD sucks. They never consider the alternate explanation:
you're doing it wrong.

~~~
petercooper
I agree, from the perspective of once being one of those people.

I'm seeing a parallel with fitness or eating right. If you leap into a diet or
heavy fitness regime right away, it _really sucks_ for a while. But eventually
it works out. Practicing mindful, intelligent TDD and object oriented design
results in a similar experience. There's a "dip" you need to plough through
before things start to click.

------
timscott
I agree with almost all of what you said. I'm a coder, agilist, craftsman,
blah, blah who has worked on a startup more than once. On the last one I wrote
almost no unit tests. The whole thing was so experimental from the start that
I never got around to it. I explained to myself that I was being a hustler.
I'm okay with that.

However, I gotta say it. If unit tests are making it harder to restructure
your code, you're doing it wrong. The opposite should be true. The greatest
purpose of unit tests is to add comfort and safety (and thus speed) to very
big refactors and restructuring. If your system behavior changes, yes, your
tests gotta change, and that takes effort. But if you restructure your code
(e.g., break it apart to add an intermediate abstraction) and unit tests slow
you down because you gotta get all up in 'em, then brother, you got yourself
some bad tests. Those should not have been written.

------
shwa
The article draws too abrupt a distinction between integration and unit
tests. Testing the behavior of your code before it's written can be
organizing and efficient. Testing private methods, or code that is otherwise
internal, is, as was mentioned, an additional maintenance cost.

------
jakejake
I don't really agree that unit tests should need to be re-written quite as
much as indicated in the article. Our unit tests tend to have good coverage at
the lower, model level. At the UI level we rely more on usability testing
because it's a bit harder to test automatically and things tend to change more
frequently.

I do think it's best to find a balance that works for whatever particular
product you're developing. If you don't follow any methodology at all you're
likely to spend a lot of time reinventing the wheel. But you don't necessarily
need to follow a methodology to the letter in order to have a great team and
produce great software.

------
dylangs1030
I think a lot of startups (and companies in general) become so enamored with
their work that they lose sight of the real value of what they're doing. For
example, as the author said, losing the forest for the trees: getting tunnel
vision on one very good feature causes stagnation to the rest of the system as
a whole. It doesn't move forward, it just makes one 10/10 feature in a 7/10
system. Metrics that show how useful a userbase finds one feature are always
good because they keep the programmer's eye on the difference between being
objectively useful and just being neurotic.

------
thret
In case anyone is wondering, the meme "1. Do xyz 2. ??? 3. Profit" comes from
South Park S02E17, "Gnomes":
<http://www.youtube.com/watch?v=TBiSI6OdqvA>

------
jwatte
The real point is: Proving stage and scaling stage are fundamentally
different.

Not making the transition at the right time (or at all) has probably killed as
many startups as over-engineering in the proving stage.

------
jaggederest
> where if I had written unit tests I would have found myself essentially
> rewriting all of them to respect the new abstraction layer.

Here's your problem: You're either not writing unit tests correctly, or you're
writing _bad_ unit tests. If you have to rewrite all your unit tests to
respect the new abstraction layer, you're programming astonishingly poorly -
that should be a change to a few different tests, not all of them.

~~~
choxi
the problem isn't that my tests are bad, the problem is that it's commonplace
to adopt new interfaces in a constantly evolving codebase. when your unit
tests were built for one interface, you have to change them when you change
the interface.

for example, the interface to enroll in a course used to be an association
between an "enrollment" and the "course". when we added "courses has_many
sessions", you enrolled in "courses" via "sessions". you have to rewrite all
your enrollment specs to respect that new interface now, regardless of whether
they were shitty or well-written.
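
roughly what the change looked like in the specs (simplified, helper names
made up):

    # old spec: enrollment pointed straight at the course
    it "enrolls a user in a course" do
      enrollment = Enrollment.create!(user: user, course: course)
      expect(user).to be_enrolled_in(course)
    end

    # same spec after "courses has_many :sessions" -- same behavior,
    # but the setup has to go through the new interface
    it "enrolls a user in a course" do
      session    = course.sessions.create!(starts_at: Time.now)
      enrollment = Enrollment.create!(user: user, session: session)
      expect(user).to be_enrolled_in(course)
    end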

to me, that's a maintenance cost that isn't worth it at a startup given how
rapidly you have to change those internal interfaces.

~~~
jaggederest
> you have to rewrite all your enrollment specs to respect that new interface
> now, regardless of whether they were shitty or well-written.

No, not if you're doing it correctly.

Again, that's not a unit test, it's a functional test. Unit tests test
_really_ small blocks of code. If you're crossing class boundaries, generally,
it's not really a unit test.

Let me show you what I mean with tests I wrote:

[https://github.com/newrelic/rpm/blob/master/test/new_relic/a...](https://github.com/newrelic/rpm/blob/master/test/new_relic/agent/error_collector/notice_error_test.rb)

Those are unit tests (not beautiful ones, written to refactor, but I digress)
- they mock cross-object calls (and even cross-method ones, in some cases) and
they _really_ focus on individual paths through the code. If you change a
method, only tests relating to that method fail.
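
To sketch the style (a stripped-down illustration of mine, not the actual
tests in that file): mock the collaborator, then test one path through one
method at a time.

    require "minitest/autorun"

    class ErrorCollector
      def initialize(sender)
        @sender = sender
      end

      def notice_error(error)
        return if error.nil?
        @sender.send_error(error.message)
      end
    end

    class NoticeErrorTest < Minitest::Test
      def test_forwards_message_to_sender
        sender = Minitest::Mock.new
        sender.expect(:send_error, nil, ["boom"])

        ErrorCollector.new(sender).notice_error(RuntimeError.new("boom"))
        sender.verify
      end

      def test_ignores_nil_errors
        sender = Minitest::Mock.new  # no expectations: any call would fail
        ErrorCollector.new(sender).notice_error(nil)
        sender.verify
      end
    end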

Edit: also, "the problem isn't that my tests are bad," is a poor assumption to
start with - you're assuming _a priori_ something that we're discussing here,
which is that you're complaining about problems resulting from not testing
correctly. "Bad" is a loaded word in any case - I'm not sure there's a test
I've seen that couldn't be improved.

~~~
gfodor
And, if you decide to rename any of the methods on the classes you've mocked
out here, your unit tests will continue to pass despite the fact your
implementation is now full of bugs. And, if you do rename the methods or class
you've mocked, you now have to update every single test for any class coupled
with the changed method.
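
Concretely (a toy example; names are made up): say Mailer#deliver gets renamed
to #deliver_now. The mock below still answers to the old name, so the unit
test stays green while the real pairing is broken.

    require "minitest/autorun"

    class Mailer
      def deliver_now(message)  # renamed; used to be #deliver
        # ... actually sends mail ...
      end
    end

    class Signup
      def initialize(mailer)
        @mailer = mailer
      end

      def run(email)
        @mailer.deliver("welcome #{email}")  # NoMethodError against the real Mailer
      end
    end

    class SignupTest < Minitest::Test
      def test_sends_welcome_mail
        mailer = Minitest::Mock.new
        mailer.expect(:deliver, nil, ["welcome a@b.com"])  # mocks the old name
        Signup.new(mailer).run("a@b.com")
        mailer.verify  # passes -- green test, broken production code
      end
    end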

I've never understood endo-testing/mock objects in environments where the
compiler cannot check your mocked interfaces. I also don't understand how
people can argue that you shouldn't be testing against the implementation and
then say in the next breath that you should be mocking out every single method
call the internal implementation makes explicitly. You're just setting
yourself up to get lots and lots of green tests on code that will explode as
soon as it hits production. Whenever I've done aggressive mock-object-based
testing I soon have zero confidence in my tests, because I get burned when the
mock objects eventually start asserting that the _wrong_ behavior is _right_
and my code explodes when integrated.

(And yes, I know the excuse here is that you should then write integration
tests and functional tests too. But seriously now, how many tests are you
going to end up writing for your 100 line Ruby class before you decide you're
going overboard in the name of purity?)

Better to instead just write it so that there are obviously no deficiencies,
rather than merely no obvious deficiencies (avoid side effects, state, extra
coupling), and write some functional tests just to be safe. Yes, those ones
that actually hit the database and test the interaction between multiple
classes, the kind TDD advocates loathe because they are so slow and impure.
Slow they may be, but at least I know they're testing the code that's going to
run on my servers. I'd rather have 10 easy-to-fix tests break when I change
one class than have zero tests break and let broken code get to production.
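
For what it's worth, the kind of slow-but-honest functional test I mean looks
something like this (a sketch with a made-up schema; assumes the activerecord
and sqlite3 gems):

    require "minitest/autorun"
    require "active_record"

    # real (in-memory) database, real schema, real models
    ActiveRecord::Base.establish_connection(adapter: "sqlite3", database: ":memory:")
    ActiveRecord::Schema.define do
      create_table(:courses)     { |t| t.string  :name }
      create_table(:enrollments) { |t| t.integer :course_id }
    end

    class Course < ActiveRecord::Base
      has_many :enrollments
    end

    class Enrollment < ActiveRecord::Base
      belongs_to :course
    end

    class EnrollmentFlowTest < Minitest::Test
      def test_enrollment_really_persists
        course = Course.create!(name: "Intro")
        course.enrollments.create!
        # slow and impure, but it exercises the real models and the real SQL
        assert_equal 1, Course.find(course.id).enrollments.count
      end
    end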

~~~
jaggederest
Comments one at a time inline:

> And, if you decide to rename any of the methods on the classes you've mocked
> out here, your unit tests will continue to pass despite the fact your
> implementation is now full of bugs.

Yes, that's why you have other tests to cover those implementations. This is
just an isolated example.

> And, if you do rename the methods or class you've mocked, you now have to
> update every single test for any class coupled with the changed method.

Yes, this is true. It's a helpful thing in my experience: you wish to mock as
little as possible, and so having clear end points is important. Having to
change every single usage of the method means you tend to write better code,
in essence.

> I've never understood endo-testing/mock objects in environments where the
> compiler cannot check your mocked interfaces.

That's fine, don't do it. This is just a demonstration of what works for me.

> I also don't understand how people can argue that you shouldn't be testing
> against the implementation and then say in the next breath that you should
> be mocking out every single method call the internal implementation makes
> explicitly. You're just setting yourself up to get lots and lots of green
> tests on code that will explode as soon as it hits production.

This is true. However, it means that when you edit one method, at most a half
dozen tests will fail, as opposed to your entire test suite. You end up with
very good locality of failure, as opposed to binary 'something is wrong'
tests.

> Whenever I've done aggressive mock-object-based testing I soon have zero
> confidence in my tests, because I get burned when the mock objects
> eventually start asserting that the wrong behavior is right and my code
> explodes when integrated.

This is true, but not something that can be avoided - you end up with problems
either way, and these tests (as above) give you very good feedback about
_where_ your error is. That, combined with very comprehensive testing, leads
to a situation where you can trust your tests never to throw false positives.

> (And yes, I know the excuse here is that you should then write integration
> tests and functional tests too. But seriously now, how many tests are you
> going to end up writing for your 100 line Ruby class before you decide
> you're going overboard in the name of purity?)

It depends on how important it is to you - for example, the tests above test
functionality that is core to a piece of code that runs on many hundreds of
applications - not something you ever want to break. As a result, the
investment was worth it. You have to decide those tradeoffs on your own.

> Better to instead just write it so that there are obviously no
> deficiencies, rather than merely no obvious deficiencies (avoid side
> effects, state, extra coupling), and write some functional tests just to be
> safe.

I'm worried about more than obvious deficiencies - I'm worried about corner
cases and things you haven't thought of. In writing these tests I caught
dozens of corner cases with unspecified or poor behavior.

> Yes, those ones that actually hit the database and test the interaction
> between multiple classes, the kind TDD advocates loathe because they are so
> slow and impure. Slow they may be, but at least I know they're testing the
> code that's going to run on my servers. I'd rather have 10 easy-to-fix
> tests break when I change one class than have zero tests break and let
> broken code get to production.

I totally agree with that. There are comprehensive integration-style tests and
comprehensive functional tests too - but they're pointless without the
assistance of specific tests that indicate which portion of the application is
failing.

If a functional test fails without a unit test failing, you have work to do on
your unit test suite. Unit testing is a tool for programming as much as it is
a tool to verify correctness.

------
bomatson
Hustling is more than listening to customers. Making deals, negotiating,
CLOSING - that smells more like hustling. What kind of manifesto is this?

------
dariusdunlap
A strong dose of LUXr-style UX work and Vlaskovits & Cooper-style Customer
Development will do a lot to fix these problems.

We all tend toward "doing what we know" (coding the next feature, for
example). If you recognize it in yourself, that's a big start.

------
nroach
This is probably a bit of nitpicking, but did you mean tenets instead of
tenants?

~~~
choxi
yep, thanks! fixed it

------
amm3g
Everything in moderation.

------
c4urself
I identify with Rule #2.

I often "lose" days making code look just a little better, after making it do
what it needs to do.

------
mkramlich
I liked the points he made, but didn't like the title because it didn't seem
to involve hustling or a manifesto.

------
shareme
I would like to add that anyone who uses TDD to test-develop any GUI
application, such as an iPhone or Android app, is a freaking ill-informed
idiot... unit tests are poor behavior analysts or indicators

~~~
tikhonj
I think "idiot" is a little harsh. You can definitely be too earnest in
writing unit tests, but I think there are places in almost any program that
would benefit from them. For example, in a GUI app chances are you have some
sort of model of the data; tests could help ensure that the model is well
behaved. This way, if you have an odd GUI glitch you can be confident it's
actually in the GUI code rather than in the data underlying the interface.
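
For example (a hypothetical to-do model in plain Ruby): if tests like these
pass, an odd glitch on screen points at the view code rather than the data.

    require "minitest/autorun"

    # the data model behind a hypothetical to-do GUI -- no UI code here
    class TodoList
      attr_reader :items

      def initialize
        @items = []
      end

      def add(title)
        @items << { title: title, done: false }
      end

      def toggle(index)
        @items[index][:done] = !@items[index][:done]
      end

      def remaining
        @items.count { |item| !item[:done] }
      end
    end

    class TodoListTest < Minitest::Test
      def test_toggle_updates_remaining_count
        list = TodoList.new
        list.add("ship it")
        list.add("test it")
        list.toggle(0)
        assert_equal 1, list.remaining
      end
    end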

------
swah
Not engineering, though.

