
Software Development at 1 Hz - akkartik
https://medium.com/@MartinCracauer/software-development-at-1-hz-5530bb58fc0e
======
castratikron
I wrote numerical code for a while; it simulated a magnetic material. It would
take hours to run a simulation long enough to be able to verify that it was
correct. Make a change, wait five hours, check if the change worked.

I eventually started keeping a journal of every code change I made, along with
the hash of the binary that it created. I could use this to make several
independent changes and run them all at the same time. When one run finished,
I would verify its behavior, look at the binary's hash, and then know that my
code change was safe. It was a very slow process, but effective. After doing
this kind of development for a while, I'm wary of claims like this that imply
that feedback with a latency <10s is actually a good thing. High latency
feedback forces you to be more methodical in your development and think about
the changes you're making.
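
That journaling step is easy to automate. A minimal sketch in shell, where the file names and build command are all hypothetical: each entry pairs the binary's hash with a one-line note, so a finished run can be matched back to the change that produced it.

```shell
# log_run: build with the given command, hash the resulting binary,
# and append a journal entry. All names here are hypothetical.
log_run() {
    build_cmd=$1 binary=$2 note=$3
    sh -c "$build_cmd" || return 1
    hash=$(sha256sum "$binary" | cut -d' ' -f1)
    printf '%s  %s  %s\n' "$(date +%F)" "$hash" "$note" >> journal.log
}

# Example: log_run 'make sim' sim 'raise damping coefficient'
# When a run finishes, grep the finished binary's hash in journal.log
# to see which change produced it.
```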

~~~
fantispug
Being methodical is good when your problem is intricate and well defined. Fast
feedback won't be helpful in implementing a complex algorithm like a compiler
or a numerical simulation (although I would argue it will help you debug it).

When what you're doing is simple but error-prone, or not well defined, fast
feedback can make you much more efficient. If you're using an underdocumented
API/dataset, the best way to understand it is to probe it with code, quickly
iterating.

I used to write a lot of LaTeX and make lots of simple mistakes (missing
backslashes, typically). I found an environment that flagged my mistakes as
soon as I made them more efficient than searching through the output for all
mistakes and then tracking each one back to its source to correct it.

~~~
btbuildem
> When what you're doing is simple but error prone

That just means the problem is deeper than your understanding of it.

~~~
WalterSear
Or deeper than you >can< understand it without probing, because documentation
is simplistic, wrong or non-existent.

~~~
darthdeus
In that case you have a much bigger problem :P

------
nojvek
I am a huge believer in a <10s dev loop. I quit my previous team because
compile times were greater than 2 minutes and I wasn't in a position to make
them shorter.

In my new team I got it from 30s to 5s and the effects have been amazing. I
have to thank VSCode and Gulp for making this possible. With TypeScript,
VSCode does fast parsing of your code on every keystroke and shows squigglies
instantly. On every save, Gulp does its magic and VSCode runs the problem
matcher and shows more squigglies in my editor.

The browser knows when a file has changed and refreshes immediately. Vscode
has a chrome-debug plugin so my breakpoints hit immediately in the IDE.

It's been an amazing experience so far. We've also revved up our CI systems to
run 1000's of tests in parallel.

You actually enjoy work when you don't spend forever waiting for things to
compile.

~~~
pags
Web engineers are waiting for stuff to compile these days? I thought things
were supposed to be moving forward, not backward.

~~~
btbuildem
It's simultaneously frustrating and entertaining to watch the younger
generation toil under the impression that they're inventing something new.

------
eeks
A 20s turnover is not even that slow. Debugging nondeterministic timing issues
in an FPGA may have turnover counted in hours, given how slow HW synthesizers
are.

While the need for better tooling is clear, the OP does not mention one
important point: faster machines have created a sort of fast-food approach to
programming, where programs are built on the go with the help of the
appropriate tools (a sort of computer-aided programming).

Back in the 60s, when programs were being shipped by snail mail to some data
center somewhere in the country, to be entered by a random assistant and
scheduled in a long queue of jobs, turnover was counted in days. Yet we walked
on the moon.

So beyond better tooling, maybe the key to maintaining that attention span is
simply to spend more time at the drawing board.

~~~
digi_owl
Possibly. All this talk about getting antsy if things take time to build seems
to fit a worry I have about our current web-focused world.

Observe Google and their management of Android.

Again and again they have pushed out some half-baked X.0 version with the
unstated intent that it will be sorted in a near future update. A clear
example of that was the introduction of the Storage Access Framework.

This is the web mentality seeping into the development of firmware and
physical products, and it is leading to a massive culture clash and crap
products.

Basically, you can't just code, push, run, repeat on devices in people's
pockets and bags the way you can on a web site.

~~~
alphapapa
Seriously, the reason I can't rely on my phone is mostly Google Play Services.
I never know when it will update behind my back, and suddenly I'll feel a warm
spot on my leg, and less than halfway into my day my phone's battery is at 5%
because of Google Play Services running infinite loops.

Or how at least every two days, the phone's wifi and LTE connections just stop
working, for no apparent reason. But if I Force Stop on Google Play Services,
they suddenly work again. This has been going on for several months now.

Of course, these issues didn't used to happen with older versions of Play
Services. And if I downgrade the Play Services app to the version installed
with my ROM, I never have these problems--but then I can't use any current
Google apps, like Gmail, Maps, etc.

The AOSP software is mostly fine. But the Play Services side of Google simply
cannot be trusted. Their CADT/web-style of development is almost enough to
push me back to iOS.

I want a phone that just runs Debian. :(

------
maaaats
When I write code, I have this mental model of how the world (application)
works. I write some lines of code based on it, and then test the code. More
often than not, something happened that I hadn't thought about, and my mental
model is updated.

This summer I worked on an application where it took 8 minutes to see the
effect of changed code. Between each iteration, I had forgotten most of the
assumptions I had made. So when things didn't work, I had a hard time figuring
out why my model of the world didn't work. I basically started from scratch
each iteration.

Possibly very obvious, but it was interesting to learn how I attack a problem.
And scary to see that I didn't just lose the 8 minutes between each run of the
program, but much, much more.

~~~
hobo_mark
If 8 minutes sounds like a lot... turnaround times when you work with FPGA
code can be hours (and months if you're taping out an ASIC). After having to
cope with that for a while, I've found that even with software I think a lot
more before writing any code, and now I find myself stepping into the debugger
much less often.

(My point is: debugging first in your head is yet another skill that should
really be taught to everybody but isn't).

~~~
StavrosK
> debugging first in your head is yet another skill that should really be
> taught to everybody but isn't

Along with smelting iron. I don't know why people advocate skills one doesn't
need, saying "but they make you better".

Skills you don't need will atrophy, because you don't need them. Conversely,
skills you need will strengthen. Why would I debug first in my inaccurate head
when I have a perfectly accurate debugger _right here_, and I can run my test
suite in ten seconds?

Debugging first in your head is a good skill if actual debugging will take
hours. If feedback is instant, you're probably faster writing the first thing
that comes to you and iterating on it.

~~~
taeric
This is as silly as asking why people advocate exercise. I mean, you clearly
don't _need_ to be able to lift heavy things. Even if it is sometimes helpful.

Similarly, thinking about things before you do them will almost always be
something you could just skip out on. But... It can be very helpful. And
exercise is a great way to get better at work. "Practice makes perfect" and
all.

~~~
lmm
No, people advocate exercise because it's necessary for long-term health.

~~~
taeric
Some exercise certainly helps long-term health. Calling it necessary greatly
overstates the importance, though.

And, I should be clear, I was saying "exercise" to refer to gym style
exercise.

------
watermoose
> At the same time I cannot use toy languages that have no compile time type
> checking

This guy seems and sounds like a serious developer, so I'm totally confused by
this statement.

Dynamic languages that don't do compile time type checking are not toys.

I used to write only Java or C++, but I think it's a stage of maturity as a
developer to realize that you can write code that takes arguments on the
assumption that the objects passed in will have the behavior you need to work
with them.

If you argue that the code is faster when it is compiled, that's fine, and I
agree, and that's good, if it matters.

If you argue that you need types because otherwise you can't be safe, I'm
sorry, but that's like being a helicopter-parent. Sometimes maybe you can't
trust what is calling your code even when you give it trust, and that's valid;
just like as a parent, sometimes the child really needs that level of
micromanagement. But for a lot of, if not most, practical web development, you
can use dynamic typing, and most children do not need that level of
micromanagement.

There's nothing _wrong_ with languages that provide type checking, but it
isn't necessarily a deficiency when it's not there.

~~~
lmm
> If you argue that you need types because otherwise you can't be safe, I'm
> sorry, but that's like being a helicopter-parent. Sometimes maybe you can't
> trust what is calling your code even when you give it trust, and that's
> valid; just like as a parent, sometimes the child really needs that level of
> micromanagement. But, for a lot of if not most of practical web development,
> you can use dynamic typing, and most children do not need that level of
> micromanagement.

This is backwards. Working without types is like walking around with your eyes
closed: sure, you can do it, most of the time; if you're not doing anything
particularly dangerous you can even do it reasonably safely. But it makes
everything a lot slower.

The arguments against types usually boil down to either, as the saying goes,
"the belief that you can't explain to a computer why your code works, but you
can keep track of it all in your head", or to having only used languages where
the explanation to the computer is so cumbersome as to not be worth doing
(valid, but only in the scope of those languages, and the correct response is
almost always to get a better language).

Try using a language with a decent type system some time (something along the
lines of OCaml, Haskell, F# or Scala). Back when I'd only written Java and C++
I also thought type checking wasn't worth it.

~~~
mbrock
I'm a huge fan of static type systems and their ever helpful checkers.

For me the most difficult argument against static types is that the sweet spot
remains elusive: some type systems are too simplistic (e.g. the difficulty of
writing generic print in OCaml) while some are too fancy and difficult (e.g.
how many people understand even most of GHC Haskell's type system?).

There are also some real problems with compiler error messages. A great type
checker needs to be able to explain problems understandably, or decoding the
type errors will be more difficult than tracking down a null pointer in an
interactive debugger.

I wonder about the possibility of making type checkers more interactive. It
can be hard to understand them because they build up lots of implicit
understanding that's not apparent.

------
kabdib
I once had

- a super-fast assembler

- a super-fast way to get the assembled code over to a target system

... and my turnaround time was on the order of five seconds: Edit, hit a
button making the target ready to receive the code, assemble: Running.

It almost didn't matter that I was writing 6502 assembly; things just fell
together and it was _magic_.

Years later I was in a place where it was common to have half-day builds (30
minutes if you arranged things well). In fact, my last three weeks on that job
I never got a working build at all, despite our group having an entire source
control team whose job it was to make builds work.

Current big project, it's about 30 seconds of tool churning, then another 30
of startup time. Could be better.

~~~
agumonkey
I always laugh thinking that Turbo Pascal was so quick I didn't understand the
difference between "build" and "build then run". And that was statically typed
code on a Pentium-class computer.

~~~
FreeFull
I think Pascal was specifically designed in a way that allowed a compiler to
go straight from the source code to machine code in just one pass (although
the output wouldn't be particularly well optimised).

~~~
agumonkey
True, I've read that they made both the syntax easier to parse, and kept the
compiler relatively simple to avoid intermediate structures/allocations etc.

I took a look at Wirth's first Pascal compiler too; it's just two files, of
acceptable length. There's a lot of coupling, but it's easy to see how the
imperative style brings a lot of mechanical sympathy.

------
barrkel
You can get this experience in most interpreted languages fairly easily (if an
interactive debugger like pry or a JS console isn't good enough), and compiled
languages with sufficiently good tooling for dynamic code replacement under a
debugger.

For dynamic languages, use the `watch` command combined with a host script
that loads the code you're developing dynamically and executes it with some
interesting parameters, and dumps the output. As you edit the source, you can
see the effects immediately, character by character.
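
A minimal version of that setup, assuming a shell script under development (the file names here are made up): the host script runs the target once with sample input and stamps the output, and `watch` re-invokes it every second.

```shell
# run_once: execute the file under development with sample input and
# dump the output under a timestamp. "snippet.sh" is hypothetical.
run_once() {
    printf '== %s ==\n' "$(date +%T)"
    sh "$1" "sample input"
}

# Live loop (not run here):
#   watch -n 1 'sh -c ". ./harness.sh; run_once ./snippet.sh"'
```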

You can get the same effect in something like Java in an IDE if you can
structure the interesting code like a game loop (ideally it is a game loop):
it's being continuously evaluated, so every change has an immediate effect.
You can see Notch use this technique in this video here:

[https://youtu.be/rhN35bGvM8c?t=5757](https://youtu.be/rhN35bGvM8c?t=5757)

As he edits code in the IDE, the application responds dynamically.

------
mpweiher
I've been working on/with an environment that gives me feedback on every
keystroke. The effect has been amazing, and it's always painful to go back to
regular environments. Even traditional Smalltalks or REPLs where I have to do
something to execute the current command are jarring. Regular
compile/link/debug is painful. Xcode is excruciating.

~~~
wreft
What are you using now?

~~~
mpweiher
My own Objective-Smalltalk.

Here with a more traditionally interactive environment (so updates are applied
when you save the method):
[https://www.youtube.com/watch?v=ArcClqt2vTc](https://www.youtube.com/watch?v=ArcClqt2vTc)

Here in a really live environment I call CodeDraw, with updates computed on
every keystroke (the "Run" button is just left-over UI, it doesn't do
anything): [https://www.youtube.com/watch?v=sypkOhE-ufs](https://www.youtube.com/watch?v=sypkOhE-ufs)

Most interpreted or incrementally compiled systems should be able to do this.

------
ctdonath
This is why interview code tests are... badly misguided. Most "fizzbuzz"
screening is grossly artificial, denying one the feedback loops which are a
critical element of productivity, and without which one is relegated to
spending time manually checking what automation does almost instantly.

I rely heavily on the IDE reminding me of things, and quick compile/run cycles
verifying correctness, rather than trying to think thru a myriad of special
cases. Relegated to "whiteboard coding" interview questions, I'm left looking
a whole lot worse than I am - not because I don't know it, but because I know
enough that thoroughness is painfully slow (without leveraging tools
multiplying my skills & speed).

~~~
userbinator
I would argue that if you can't write a correct FizzBuzz without needing to
compile and run it, or even worse, an IDE to hold your hand, you don't
actually understand what's happening and are just leaning on a crutch.
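
For what it's worth, the program in question is small enough that its four cases can be checked by reading; a POSIX shell version:

```shell
# FizzBuzz up to $1: the branches must be ordered so that multiples
# of 15 are caught before the separate 3 and 5 cases.
fizzbuzz() {
    i=1
    while [ "$i" -le "$1" ]; do
        if   [ $((i % 15)) -eq 0 ]; then echo "FizzBuzz"
        elif [ $((i % 3))  -eq 0 ]; then echo "Fizz"
        elif [ $((i % 5))  -eq 0 ]; then echo "Buzz"
        else echo "$i"
        fi
        i=$((i + 1))
    done
}

fizzbuzz 15
```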

 _I rely heavily on the IDE reminding me of things, and quick compile/run
cycles verifying correctness, rather than trying to think thru a myriad of
special cases._

The peril of that sort of workflow is that you often won't realise the
importance of those special cases until it's too late to change things easily.
"It looks like it works, it compiles and runs" --- I've heard this sentiment
from such IDE users many times, and yet when I inspect their code, they
inevitably missed something important.

Someone once gave me a phrase I like to keep in mind when programming: "How
can you tell the machine what to do if you're not even sure how to do it
yourself?"

~~~
lmm
All tools are crutches. I mean a good carpenter probably _should_ be able to
bash a nail in with a rock rather than a hammer, but that would be a crazy way
to do interviews.

------
jimmytidey
This is an enormous deal to me, although I'm only messing about with Node and
Meteor.

Productivity plummets if a change takes long enough to show up that I can
justify looking at Twitter.

------
i336_
I'm reminded of this very amusing C++ compilation speedup trick: cat * >
everything.cpp -
[http://stackoverflow.com/a/318495/3229684](http://stackoverflow.com/a/318495/3229684)

---

Also, I tend to do all my work using a realtime-feedback model like this.

I discovered inotifywait a few years ago, and I consider it one of the coolest
things I've found. Using a loop like:

    
    
        while true; do clear; ./program; inotifywait -qq -e delete_self program; done
    

I run my program, then sit waiting until I re-save it, at which point I run it
again. This approach is very flexible; the example above works for shell
scripts, or I can do "gcc -o file file.c && ./file", or I can do "node
file.js", or whatever. Switching the sequence around, I can have inotifywait
pause before the first execution too, but I prefer the method above. I
occasionally substitute "tput reset" for "clear" so I can wipe my scrollback
and have shift+pgup stop at the top of the current execution.

There are some caveats though.

The biggest is that this approach doesn't work too well for things that need
to be killed to be restarted, like socket servers; that's fixable but
nontrivial and likely project-specific too.
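
One generic-enough shape for the kill-and-restart case is to track the previous instance's PID; a sketch, where `./server` and the watched file name are hypothetical:

```shell
# restart: kill the previous instance, if any, then relaunch.
# $1 is a whole command line; word-splitting on it is intentional.
restart() {
    [ -n "${pid:-}" ] && { kill "$pid" 2>/dev/null; wait "$pid" 2>/dev/null; }
    $1 & pid=$!
}

# Restart-on-save loop (not run here):
#   while true; do restart ./server; inotifywait -qq -e close_write server; done
```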

The second problem is the race conditions that will likely arise between your
editor's slow file-save process and inotify's fast response time, producing
irritating "File is in use" errors half the time (since inotifywait exited,
bash looped, and ./program is being read by your compiler or interpreter while
still locked by your editor).

The DELETE_SELF inotify event is specific to the Geany text editor; when Geany
saves a file it does quite a few operations, and DELETE_SELF is amongst the
last (but inotify doesn't see any of the following events since DELETE_SELF
marks that the file - that inotify was watching - got deleted; this is
thankfully coincidental with Geany being done with the file). You'll need to
do something like "inotifywait -m program" and watch what events occur;
inotifywait exits on the first event received; hopefully there's a lone unique
event at the end of the sequence. Worst-case scenario you might have to add in
"sleep 0.05"; I have not tested the success of prefixing the compilation step
with something like "while [ ! -r program ]; do true; done".

------
djur
I would really like to see someone write enough code to do anything (or even
compile), compile it, and interpret the results in a useful manner in less
than a second. Continually, over a significant period of time. The tooling is
certainly available, but the biological part of that process would seem to be
superhuman.

~~~
cracauer
The point is that you want to use the high frequency to get the simpler tools
you will need out of the way, preserving your energy and attention for the
complicated parts.

If you've used up your energy by the time you make it through the simple
stuff, you're looking at a bad time later.

------
keithnz
It's really interesting how little time it takes before things become
disjointed. Recently, working with some legacy code, I noticed this. Usually I
have a
continuous unit testing process which normally is < 4 seconds to give
feedback. I had to wrap some tests around some legacy code, which required
some expensive setup which added an extra 10 seconds. I really feel pained
working with that code and the extra 10 seconds. It is just enough time for
the brain to drift off somewhere else.

Ironic, given that in the past I have worked with C++ code bases that would
take 20 minutes to compile, though incremental and parallel builds brought
that number down.

Also in embedded systems, I've got 3 minute cycles to try something out.
However most of the coding is done on a PC with a fast unit testing cycle
which dramatically improves feedback loops.

------
kuon
I now work with Phoenix + live-reload, and the nearly instant dev cycle is the
best thing that has happened to me since I started web dev.

I know Rails has something similar, but I haven't worked with Rails for nearly
2 years now so I can't say.

As a side note, having a short dev loop is really good for education.

------
corecoder
From this perspective, the worst things I run into are setting up or tuning
complex CI pipelines and developing infrastructure automation: a cycle can
easily last more than half an hour, and there doesn't seem to be an easy way
to speed things up, really.

~~~
Retr0spectrum
With a modular TDD approach, you could in theory test individual components
very rapidly. I haven't looked into it too much, but I assume such systems
exist.

~~~
jerf
I have also often complained about this pain of CI systems.

One of the problems is that the things you need to have a proper CI system
conflict with fast build times. Proper CI requires a _clean_ checkout and a
total compile from scratch. If you're doing something in Docker, you ought to
start your Docker process from scratch. If you're in a VM, you really ought to
revert to a snapshot to make sure you're not accidentally accumulating
un-CI'ed state. And so on.

While in normal development you may have a very fast turnaround, a properly
configured CI system needs to assume the worst, start from scratch, and build
everything, in every combination you support. (You may also want a less
accurate CI build that trades accuracy for speed and just does an incremental
build of some particular aspect of the system. But that should be backed by
the full CI I describe here.)

Consequently, something that fails only 97% into that build process, and only
fails on the CI server, can be very annoying to fix. But you don't really have
a choice, because any hacky alternative is too risky. A CI system that has
human intervention isn't a CI system.

------
ams6110
This isn't relevant only for development. Applications that take 10+ seconds
to process an action are annoying and tiring as well.

At work a lot of stuff is like this. Enterprisey applications that take 10,
15, 20 seconds to process each entry. Time enough to get distracted. Time
enough to tab over to email or some other application and lose focus. Just
overall mentally fatiguing.

Developers who are annoyed by 10 second delays in their development process
should remember that their users will be just as annoyed by slow response time
in applications. Cut the weight. Make it faster. I'm more and more convinced
that nothing else will go as far in making users happy.

------
dkarapetyan
For Ruby, pry comes close, but he's right about the inability to have the same
kind of turnaround cycle when you're developing a C extension. I don't think
anyone is gonna beat SBCL in that regard any time soon. There is no way to
take Ruby and then generate machine code that ends up being callable from the
current address space. Although I guess there is some way to do it with cffi
and a compiler constantly running in the background.

------
sebastianconcpt
Today I was thinking about the idea that Lisp and Smalltalk are probably at
the top for instant feedback during development. And Smalltalk dev tools might
rank even higher because you can dig into any instance to any depth and
evaluate code "talking to it", getting instant answers right there.

------
k__
My dev loop currently takes ~10sec. But I've worked with worse...

Yes, I prefer faster loops, but I've never had a job where I could influence
the build pipeline...

~~~
MaulingMonkey
I just try and fix the build pipeline when it annoys me.

I was running into some 1-minute link times per project on Android, with
several projects. Investigating, I found there was an alternative linker named
"gold" - a few minutes to figure out how to reconfigure our builds, and our
link times were down to ~10 seconds per project or so. Nobody complained when
I checked that in ;)

Dealing with C++ build pipelines for large projects, there's only so low I can
get build times for the whole project - better to try and make the data hot
reloadable in a lot of cases. Similar goal: Fast iteration loops. Even if you
can't fix them for the entire project, you may be able to fix them for
whatever you're working on _right now_.

------
Waterluvian
Writing Django and JavaScript apps with a hot reloader has spoiled me. The dev
loop for other cases now feels almost painful.

------
_pmf_
The aversion towards upfront thinking and tendency to glorify the TDD slot
machine is a bit unsettling.

~~~
GrumpyYoungMan
That's somewhat unfair, don't you think? It's perfectly possible to
incrementally evolve code using TDD towards a particular destination
determined by upfront thinking.

------
PakG1
Relevant XKCD: [http://xkcd.com/303/](http://xkcd.com/303/)

