
Doing things the wrong way to get the right result - zgryw
https://madebymany.com/stories/swallow-your-developer-s-pride-and-just-do-stuff
======
larrik
The problem with this approach is when they say "okay, we'll have an intern do
the manual keying each week and use the prototype as the finished product.
Thank you, goodbye."

Or the fact that it looks 90% done, so the assumption is that since you did it
in 2 weeks, you must be just days away from completion, right?

Sometimes the reasons for not hacking stuff together can be quite political,
both for internal projects and for consultant/client relationships.

~~~
lgunsch
"okay, we'll have an intern do the manual keying each week and use the
prototype as the finished product. Thank you, goodbye." is a valid business
choice though. Maybe the cost of the intern and the risk of being unable to
effectively maintain or add features is worth it to the client. It's their
choice if they would prefer it that way.

~~~
braveo
Not only that, but the intern can make decisions that an automated system
can't safely make, due to ambiguities, etc.

This reminds me of an automated licensing call I was putting together once.
The company owner wanted me to insert some verbiage into the notes field, only
the web api didn't support the notes field (but you could do it by logging in
and manually updating).

I had started to plan how I was going to do this via a headless browser. Then
I started asking the owner questions to clarify things, and he said "just
email XXX with the information and she'll manually log into the site and
copy/paste the notes you put in the email".

And I thought... you know what? that's really fucking smart, I'm way
overthinking this. They're going to get 4 or 5 license activations/month MAYBE
and this approach is a whole hell of a lot simpler and more robust than pulling
in a headless browser to simulate it.

I then had to stop and think about why I hadn't considered that approach
before.

I guess the point is that I agree with you wholeheartedly. Not only in terms
of simply cost, but in terms of complexity, stability, and ambiguity as well.

------
lmm
This is often good advice especially in a startup, but it's not quite
absolute. It's worth putting a certain amount of work into maintainability.
It's worth pre-empting some classes of issues. And the further you shift into
being a mature business, the more serious engineering becomes appropriate.

~~~
stinos
Yes, it's a bit of a trade-off, as usual in software. Hacking can be the
fastest route now, but having worked on a 10,000-LOC codebase that consisted
mainly of such hacks, with barely any maintainability left: your hack might
cause a ton of lost time in the future. Then again, the more experienced you
get, the easier it becomes to spot whether a hack is the appropriate thing or
not. If your software is already nicely established and loosely coupled, you
can apply hacks here and there without any negative effect on maintainability
or functionality whatsoever.

~~~
concrete777
Curious if you have some concrete examples of "loosely coupled" software
projects. On GH perhaps?

It's a concept I try to keep in mind, but am never sure I am "doing it right".

~~~
stinos
Mainly C++ dev here, I do think for example LLVM is very well done, and Boost
as well. But both are immense codebases and especially Boost is complicated to
the point it's insane. It's a shame but I don't have good examples of
relatively small projects to check out. Maybe that's worth an 'Ask HN' thread:
I've definitely seen threads like 'Ask HN: what are good open source projects
to check', but without the extra requirement that they're small(ish). The
concept of decoupling usually shows up not just in the details but in the
higher-level layers of the software, so to really grasp it you'd have to
spend some time on the whole project, not just read a few files here and
there. Which would obviously be easier with a small project.

~~~
braveo
I would point to both Clang and KDE as projects with very good modularity. I
remember reading through the sources of Clang a few years back and being
impressed with it.

------
awjr
I always think this is very much the approach MVP (Minimum Viable Product)
takes. I've seen it taken to the extreme in a company where the IT department
was applying MVP to their internal business users, creating absolute havoc.
Eventually Business Analysts were brought in to 'force' a more beneficial
approach to delivering good solutions, as well as to get the business talking
to IT in terms each side could understand.

There is a point within a company, where you have to switch from Minimum
Viable Product to Minimum Valuable Product and try and live by the motto
"Write your way out of a job".

Not sure where I am going with this, but I think the OP is right. We overthink
things sometimes.

------
dingaling
I encountered this in a personal project in the past week.

The ideal, normalized schema would be slow and awkward for manual insertions
on a daily basis, which I felt would deter me and hence undermine the project.

The not-quite-normalized version would be much quicker and more intuitive for
me to keep up-to-date, so I talked myself into it.

In the future I can always write a stored proc to process the less-normalized
data into idealized tables, and use the original tables solely for importing
the data.

Despite that reassurance it was remarkably difficult to accept that doing the
'wrong thing' programmatically was the 'right thing' in a project scope.
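
That staging-then-promote idea can be sketched in a few lines of sqlite3 (the
music-log-style schema and the promote() helper here are invented for
illustration; the actual project's tables aren't described above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# A flat "staging" table that's quick to key into by hand,
# plus the normalized tables it can be promoted into later.
cur.executescript("""
CREATE TABLE staging (artist TEXT, album TEXT, played_on TEXT);

CREATE TABLE artists (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE albums  (id INTEGER PRIMARY KEY,
                      artist_id INTEGER REFERENCES artists(id),
                      title TEXT);
""")

# Day-to-day entry stays simple: one wide row, no foreign keys to look up.
cur.execute("INSERT INTO staging VALUES ('Low', 'I Could Live in Hope', '2024-01-05')")
cur.execute("INSERT INTO staging VALUES ('Low', 'Things We Lost in the Fire', '2024-01-06')")

def promote():
    """Batch job standing in for the stored proc: normalize staging rows."""
    # Deduplicate artists; UNIQUE(name) plus OR IGNORE keeps one row each.
    cur.execute("INSERT OR IGNORE INTO artists (name) SELECT DISTINCT artist FROM staging")
    # Link each staged album to its artist's surrogate key.
    cur.execute("""
        INSERT INTO albums (artist_id, title)
        SELECT a.id, s.album FROM staging s JOIN artists a ON a.name = s.artist
    """)

promote()
artist_count = cur.execute("SELECT COUNT(*) FROM artists").fetchone()[0]  # 1: 'Low' deduplicated
```

The point of the sketch is that the "wrong" flat table stays the cheap entry
path, and normalization becomes a batch job you can write whenever it earns
its keep.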

~~~
tmaly
I took the exact opposite approach in a personal project. In my professional
experience I have seen this exact pattern of stored procedures used to hide
poorly designed tables. Over time, more entropy accumulates in the system.
Being able to change things slows to a crawl because the initial poor schema
design leaves you with duplicate data.

Many current databases like PostgreSQL let you create views, and the
performance even on a smaller machine is quite good. So if you stick to a
normalized schema and build views, it will pay dividends later on.
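
The view approach can be sketched in sqlite3 as well (the schema and view
names here are made up for illustration, not taken from either project):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Keep the tables normalized, and put the convenient denormalized
# shape in a view instead of baking it into the tables themselves.
cur.executescript("""
CREATE TABLE artists (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE albums  (id INTEGER PRIMARY KEY,
                      artist_id INTEGER REFERENCES artists(id),
                      title TEXT);

CREATE VIEW discography AS
    SELECT ar.name AS artist, al.title AS album
    FROM albums al JOIN artists ar ON ar.id = al.artist_id;
""")

cur.execute("INSERT INTO artists (id, name) VALUES (1, 'Low')")
cur.execute("INSERT INTO albums (artist_id, title) VALUES (1, 'I Could Live in Hope')")

# Queries read the friendly flat shape; the stored data stays normalized,
# so there is no duplication to drift out of sync later.
rows = cur.execute("SELECT artist, album FROM discography").fetchall()
```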

------
chris__butters
I think this needs to be said to a lot of stubborn egotistical designers too.
Ego in some ways is good but needs to be controlled and put aside so projects
get the most effective outcome.

~~~
zgryw
Absolutely. Same for designers, strategists, etc. - however, for some roles
it's easier to "hack stuff together" than for others.

~~~
chris__butters
I've seen designers simply copy and paste from an "inspirational" Pinterest
board, and strategists copy and paste the first idea from an article, speech,
or book - hacking stuff together is for more than just technical people.

------
gmarx
My first step would have been to search Amazon for a web enabled door buzzer.

Cool project though

------
mmartinson
In addition, once you have hack solutions that are validated by users,
replacing them with more robust solutions is extremely satisfying.

~~~
emperorcezar
Nah. We just throw another hack on top of it.

~~~
brain5ide
"If it works, don't fix it" usually gets applied here, wrongly.

------
lscharen
Being able to redefine the problem to get at a "quick and dirty" solution is
absolutely a useful skill. What I think gets glossed over in the fine article
and in some of the other comments is that this is really only a _good_
approach when one can also see the full solution and understands how to
evolve past the "mechanical finger" if it becomes necessary to refine the
system.

Where people get into trouble is when they have a short problem space horizon
and don't have a decent feel for the trade-offs being made and a reasonable
understanding of how the "wrong solution" is related to the "right solution".
That can lead to the creation of a hack-y, fragile and _unchangeable_ system
that can really limit progress in the future.

~~~
zgryw
Definitely.

If you know you're cutting corners, the ability to change things in the
future should be the priority. That usually means you need to consider what
the "perfect" solution would look like, so you know where the possible
refinements will come into play.

------
dpflan
Perhaps I am misunderstanding the discussion and do not see the "wrong" way:
it seems that this consistent, incremental improvement is the "right way" to
get to the "right result". Isn't that very much a tenet of whatever one may
want to call a "lean start-up"?

"""

We started with a completely unscalable solution, which enabled us to validate
the need. We then evolved it, step by step, to make it support more and more
users. On the way, we learned not only about our users, but also discovered
what the technology requirements were. We can only speculate what the outcome
of the project would have been if we hadn’t let ourselves find a “quick and
dirty” solution.

"""

------
JustSomeNobody
I can't. My first job out of college was extremely toxic. It was the other
developers' job to completely trash anyone else's code. It was sport to them.
Now, even several companies later, I have anxiety going into a code review.

------
rdiddly
This idea is basically all about the future. If you don't have to worry about
the future whatsoever, or have no business worrying about it (e.g. you're in a
startup that might die in 6 months), by all means hack it together, get as far
as you can, and "borrow" as much technical debt as you can rack up.

The longer the future of the project, the more likely that tech debt will have
to be repaid. And the more closely you're involved with it, the more likely
it's you who'll be paying it.

------
supergeek133
It may not be absolute advice, but certainly applicable in many companies.

For instance, getting a new environment set up inside a company that is a big
Azure customer: there are forms, reviews, cost centers, and more reviews,
even just to get a QA environment. I have access to a subscription, and more
than once I've just set things up for people.

We're talking potentially weeks of time lost just to start working.

Not to mention the seemingly severe lack of willingness to prototype within an
organization these days.

------
jwilk
Thanks for including multi-megabyte pointless animated GIF. :|

------
eksemplar
The coolest thing about ignoring trends is that most of them go away again.

I pity the fool who did test-first development on applications with fewer
than 10 functionalities, for instance.

Even as a manager, it's the gaffer-tape programmers who end up saving the day
rather than the best practices.

Obviously I won't advocate against following best practices. It's just that
people never seem to agree on what they are, which makes continuity rather
hard to pull off over longer periods of time.

~~~
hinkley
It's really hard to get gaffer-tape version 3.0 out the door. Those best
practices are ultimately about reducing wear and tear on the development
team, so you can keep momentum for a long, long time.

But since you can make pretty much any development strategy work for about 18
months, by the time the consequences are felt, either those people are gone
or nobody still there can paint a clear cause-and-effect story.

~~~
eksemplar
Which is the textbook explanation, but you can gaffer-tape and still do
SOLID, making everything easily replaceable later on.

The key feature of gaffer-tape programming is that it gets shit done, making
sure you're still around to do a 3.0 release.

The key downside of gaffer tape is that it requires better talent, because
your programmers need to know how to hack in a way that won't ruin your
codebase.

