
For a while I wrote numerical code that simulated a magnetic material. It would take hours to run a simulation long enough to verify that it was correct. Make a change, wait five hours, check if the change worked.

I eventually started keeping a journal of every code change I made, along with the hash of the binary it created. I could use this to make several independent changes and run them all at the same time. When one run finished, I would verify its behavior, look at the binary's hash, and then know that my code change was safe. It was a very slow process, but effective. After doing this kind of development for a while, I'm wary of claims like this that imply that feedback with a latency <10s is actually a good thing. High latency feedback forces you to be more methodical in your development and think about the changes you're making.
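
A minimal sketch of that kind of journal, assuming the binary is identified by its SHA-256 hash (the script and file names here are illustrative, not what I actually used):

    # journal.py -- append a change note and the hash of the freshly built
    # binary to a plain-text journal, so a finished run can be traced back
    # to the exact change that produced it.
    import datetime
    import hashlib
    import sys

    def binary_hash(path):
        # hash the binary in chunks so large executables are fine
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        binary, note = sys.argv[1], " ".join(sys.argv[2:])
        entry = "%s  %s  %s\n" % (datetime.datetime.now().isoformat(),
                                  binary_hash(binary), note)
        with open("change-journal.txt", "a") as journal:
            journal.write(entry)

Run it after each build (e.g. python journal.py ./sim "flipped sign of the exchange term"); when a run finishes, the hash of the binary that produced it points straight back at the journal entry.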




Being methodical is good when your problem is intricate and well defined. Fast feedback won't be helpful in implementing a complex algorithm like a compiler or a numerical simulation (although I would argue it will help you debug it).

When what you're doing is simple but error prone, or not well defined, fast feedback can make you much more efficient. If you're using an underdocumented API/dataset, the best way to understand it is to probe it with code, quickly iterating.

I used to write a lot of LaTeX and make lots of simple mistakes (missing backslashes, typically). I found an environment that surfaced my mistakes as soon as I made them more efficient than searching through the output for every mistake and then, for each one, finding the corresponding source and correcting it.
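
A rough sketch of that kind of environment, assuming Python and pdflatex on the path (latexmk's continuous-preview mode, -pvc, does roughly the same job out of the box):

    # watch.py -- recompile a .tex file whenever it changes and surface
    # errors immediately, instead of hunting through the log afterwards.
    import os
    import subprocess
    import sys
    import time

    def mtime(path):
        return os.stat(path).st_mtime

    if __name__ == "__main__":
        tex = sys.argv[1]
        last = None
        while True:
            if mtime(tex) != last:
                last = mtime(tex)
                result = subprocess.run(
                    ["pdflatex", "-interaction=nonstopmode", tex],
                    capture_output=True, text=True)
                # LaTeX error messages start with "!"
                errors = [line for line in result.stdout.splitlines()
                          if line.startswith("!")]
                print("\n".join(errors) if errors else "OK")
            time.sleep(0.5)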


The problem is that fast feedback means you tend to create more bugs. It's faster in the short term, but very quickly produces unmaintainable code.

In the end you can write code to be read (aka maintainable), to be run (aka fast), or to be written (aka cheap).

PS: Sure, in theory it's simply better. However, people get lazy, make some change, and assume that if it passes their tests it works.


> When what you're doing is simple but error prone

That just means the problem is deeper than your understanding of it.


Or deeper than you >can< understand it without probing, because documentation is simplistic, wrong or non-existent.


In that case you have a much bigger problem :P


> High latency feedback forces you to be more methodical in your development and think about the changes you're making.

This! People rely on their fancy REPLs and super fast feedback loops and 1000 unit tests too much these days. What do you actually do when you can't run the code? What if you have to debug it just by reading it?

There's a lot to be said about being efficient with trivial changes vs being methodical and able to solve much more complicated problems when they arise.


This is exactly the reason why I actually quite strongly discourage teaching programming by starting with IDEs. Far too often I see beginners fall into what I call "programming tunnel vision", where they repeatedly make very tiny and often random changes to a piece of code in an attempt to get it to compile or produce the right result, seeming to completely abandon any thought about the overall goal. Lower-latency feedback only encourages this behaviour more. The same phenomenon also happens if you give them a debugger --- they spend plenty of time just stepping through the code without any good sense of the bigger picture. Maybe it feels productive, but it's not. Their attention is so preoccupied with the feedback that they do not think deeply enough about their solution, and as a result, overall code quality often suffers as well.

Instead, I believe in thinking carefully about the problem. Close your eyes and visualise the program and its data and control flow in your mind, then write the code. Use a whiteboard or even pencil and paper to collect your thoughts and get a good mental model of what you're trying to accomplish. Block out all other distractions and focus on the problem.

Many others I've talked to are in disbelief when I tell them I can spend an hour writing several hundred lines of code that compiles and works flawlessly the first time, but this is what careful thought will allow. Even with a very fast feedback loop you may spend several times longer fiddling with the code until you get something that seems to work but actually doesn't in all cases, precisely because you never thought about those cases while you were fiddling with it, your attention focused on getting that next dose of feedback.


I'm glad I found someone who shares my point of view. You're right about IDEs and debuggers.

> Instead, I believe in thinking carefully about the problem. Close your eyes and visualise the program and its data and control flow in your mind, then write the code. Use a whiteboard or even pencil and paper to collect your thoughts and get a good mental model of what you're trying to accomplish. Block out all other distractions and focus on the problem.

It's funny how many problems I've solved by writing code on paper/whiteboard when I got stuck doing actual programming. It's so much easier to focus on the problem when there's no code to run.

Another thing I found useful is just reading the code outside of an editor. Either by printing it out and scribbling over it with a pencil, or just reading it on a phone/tablet that can't run the code.

> Many others I've talked to are in disbelief when I tell them I can spend an hour writing several hundred lines of code that compiles and works flawlessly the first time, but this is what careful thought will allow.

I've been having the same experience. Recently at a uni we were given an assignment to write an interpreter for a rather simple imperative language (conditionals, loops, simple recursive functions and stack depth checking). We were given 3 hours to write a program that could interpret a sample program.

Most people struggled to get anything working at all during that time, since each and every one of them I talked to didn't have a clear picture of what they were trying to build.

It took me a little over an hour to write the whole thing in almost a single pass, in a modular fashion with a separate tokenizer, parser and evaluator, plus the necessary checks. There was no need to run the code; most of it was rather trivial, implementing simple state machines. It was quite a bit of code (over 1000 lines), but there was almost no thinking required if you knew how the parse tree should look.

In situations like this I'd even say it's hard to make the program not work if you're methodical, working step by step and checking if you've covered all the cases.
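
To give an idea of the shape I mean -- not the assignment code itself, just a toy sketch of the tokenizer/parser/evaluator split, shrunk down to arithmetic expressions with variables:

    # Toy sketch of the split: tokenizer -> parser -> evaluator.
    import re

    def tokenize(src):
        # numbers, identifiers, and single-character operators/parens
        return re.findall(r"\d+|[A-Za-z_]\w*|[+*()-]", src)

    def parse(tokens):
        # recursive descent: expr -> term (('+'|'-') term)*, term -> factor ('*' factor)*
        pos = [0]
        def peek():
            return tokens[pos[0]] if pos[0] < len(tokens) else None
        def eat():
            tok = tokens[pos[0]]
            pos[0] += 1
            return tok
        def factor():
            tok = eat()
            if tok == "(":
                node = expr()
                eat()  # closing ')'
                return node
            return ("num", int(tok)) if tok.isdigit() else ("var", tok)
        def term():
            node = factor()
            while peek() == "*":
                eat()
                node = ("*", node, factor())
            return node
        def expr():
            node = term()
            while peek() in ("+", "-"):
                node = (eat(), node, term())
            return node
        return expr()

    def evaluate(node, env):
        kind = node[0]
        if kind == "num":
            return node[1]
        if kind == "var":
            return env[node[1]]
        left, right = evaluate(node[1], env), evaluate(node[2], env)
        return left + right if kind == "+" else \
               left - right if kind == "-" else left * right

    print(evaluate(parse(tokenize("x * (2 + 3) - 4")), {"x": 10}))  # prints 46

The real interpreter is the same pattern, just with more token kinds, statement nodes, and the stack-depth check living in the evaluator.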


>>"I quite strongly discourage teaching programming by starting with IDEs..."

>>"...also happens if you give them a debugger..."

1). I assume you can cite no research supporting the idea that new programmers are better off with your recommendations?

2). Your idea doesn't seem to take into account that different people think in different ways. I believe this approach was good for you, but as far as we know you could be in the minority, right?

For these reasons I don't think there is enough data to make blanket recommendations against IDEs and debuggers.


Are you genuinely attempting to argue that thinking ahead and fully understanding the problem isn't preferable to tweaking one's way to a solution?


"Thinking ahead" is often a great excuse to design an overengineered mess of a solution that can't be tweaked and doesn't really properly solve the problem either. To be pithy - see Java.

Sometimes, exploring the problem space can give you a fuller understanding of a problem faster by forcing you to confront pitfalls that may not be obvious until you try a solution. We use all kinds of wonderful terms for this - "Agile", "Prototyping", etc.

Both extremes - fetishizing planning and up front design, or fetishizing short term iteration and poking things without deeper thought - have their problems, and occur too often. Neither tool is a panacea, but both have their place.


There's a difference between "thinking ahead into the next problem", i.e. premature generalisation, and "thinking ahead into the details of the current problem".

It's good that you mentioned Java, because it is a language I find extremely IDE-centric, and I suspect that's also what makes premature generalisation so easy --- creating new classes with tons of boilerplate automatically generated by the IDE is so easy that it encourages programmers to do it. That doesn't help one bit with the details of the algorithm, unfortunately; it often gets "OOP-ified" into a dozen classes and much-too-short methods created as a result of the "fiddle with it until it works" mentality.


'Fiddle with it until it works' has to be done when you are working with a product that isn't documented well enough. If that mentality is used for programming in general it is bad, but there are some situations where experimentation has to be done to work out how parts of the product work.


Java as a language is producing more value to actual businesses than most other popular languages. Where you see an over engineered mess, others see valuable abstractions, extensibility, compatibility and self documentation. Unfortunately, understanding this so called mess requires knowledge of the lingua franca of object oriented design, which has fallen out of favour with the new generation.

I'm not saying that there are no unjustifiably over engineered Java libraries, but the current hype cycle of web frameworks seems to indicate that the burden of proof of good design should lie with current technologies as well as previous ones.


> Java as a language is producing more value to actual businesses than most other popular languages. Where you see an over engineered mess, others see valuable abstractions, extensibility, compatibility and self documentation.

"Everyone uses it" or "it's producing value" doesn't mean it's not an overengineered mess that everyone recognizes as such - it just means that imperfect code still trumps no code. Switching languages usually means tossing out your old codebase, leaving you at "no code".

I have worked on such messes, created such messes (oops!), and cleaned up such messes.

That said, I'm sure there is a Java project out there which actually benefits from stereotypical levels of Java abstraction and patterns - and I'm sure there are a few codebases out there where "my" and "others'" opinions differ exactly as you say.

> I'm not saying that there are no unjustifiably over engineered Java libraries, but the current hype cycle of web frameworks seems to indicate that the burden of proof of good design should lie with current technologies as well as previous ones.

100% agreed - not that I'm qualified enough at web dev to have much of an opinion on this. If anything, the churn of web frameworks smacks of being both overengineered (do you really need a whole framework for that?) and underengineered (wait why are we replacing things yet again?) simultaneously.


Spring comes to mind as a widely used framework that benefits from those "stereotypical levels of Java abstraction and patterns."

But it's the exception rather than the rule. Once you have something like Spring in your codebase, to take care of modularity and reuse, everything else should be coded with as little "abstraction and patterns" as possible.


WhitneyLand is probably not arguing against the claim you make in the large, but against the unsubstantiated argument that using IDEs and debuggers is more likely to lead you to that style of thinking than high latency variants.

One could just as easily hypothesize that these tools let you avoid thinking in the small, and help you form a big picture overview that would otherwise be difficult to understand.


Those were not my words. I said that discouraging all students from using IDEs and debuggers doesn't make sense.

I went quite a while with no tools other than a hex editor to type in op codes. I don't think it did anything except hurt productivity.

Maybe you learn or work better that way. I don't. And I don't see how you justify assuming all new programmers would.


Discouraging someone from using an IDE is more a symptom of the target language's shortcomings. Xcode provided me with beautiful compile-time errors for both Objective-C and Swift, and forced me to really think about what I was doing. Incidentally, I learned both languages from the IDE.

Would I recommend an IDE for a low level language like C? Probably not, because it forces a kind of laziness on the programmer.

Maybe an IDE isn't the solution, but a starting point to build upon. Something like the interactive environment LightTable has, where you can quickly eval blocks of code and see the end result without having to re-compile your entire program. Certain languages are better suited to this, and certain paradigms (reactive programming comes to mind).


If they won't, I will. Working code (in a good language) is the best way to work on the problem, far better than a whiteboard where you have no undo, no VCS tagging, and no ability to come up with reusable components. Trying to do it all in your head would be even worse.

Of course it's possible to push code around on the page until it seems to work, just as it's possible to push symbols around the page until it seems to work when answering a mathematical question on paper. (Unfortunately some languages/compilers will run code that doesn't make any sense, but that's more true on paper, not less)


No, that was your interpretation.


I think the "compleat" programmer can move freely between the two extremes. I have one piece of code -- in Common Lisp! -- that I've been working on for a couple of months of weekends and still haven't tried to run (except for an occasional one-line experiment to check that I have the correct syntax for a macro).

But I can also adopt a much more interactive, experimental approach in situations where experiments are cheap and easy.

It all depends on the nature of the task.


oh, those 1000 lines of C...

If you think small code/functions are just the result of "short term decisions", that is wrong. Small and short doesn't imply ill-developed, short-sighted, or lacking the whole picture.

Small and well-thought-out code often contains an abstraction. And compiling such abstract-level code takes even less time, while still doing some syntax checking and (optionally) type checking. Once the abstract layer is fixed, you proceed to the details.

"Close your eyes and visualize the program"? Why not just draw the image as an abstract program on the screen? You know, Hackers and Painters is a real thing. Good abstract code describes itself on the screen, you don't have to imagine the behavior in your head. Code is much better than your volatile image in the head that may go away if you go to sleep.


I think that when you have to resort to using a debugger, it is probably better to discard the code entirely and rethink the solution.


I've only used a debugger a few times, when the program did not seem to behave according to the source code. That was invariably due to a third-party bug (compiler, library, OS) or to me failing to understand a subtle point about the language or library I was using.

But in general I agree with you.


If you have to break out the multimeter, it's probably just better to throw out that radio and make a new one.


I don't think this is the right analogy, because the code can't "go bad" on its own one day (as if some capacitor went dry), unless you modify it to do so. Maybe when you don't have access to the source code and you want to see what is going on, use of a debugger could make sense.


One thing that helps is having short, self-contained, composable pieces of code that are easy to run in a repl or compile. This also helps testing and general understanding.
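
A contrived illustration of the idea -- two small pure functions that can each be poked at in a REPL or a unit test with no setup, with I/O kept at the caller:

    # parse_record and total are small and pure, so each can be exercised
    # on its own in a REPL; only the caller touches files.
    def parse_record(line):
        # "key=1,other=2" -> {"key": 1, "other": 2}
        pairs = (field.split("=") for field in line.split(","))
        return {k.strip(): int(v) for k, v in pairs}

    def total(records, key):
        return sum(r.get(key, 0) for r in records)

    # REPL: total(map(parse_record, open("data.txt")), "key")  # data.txt hypothetical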


> High latency feedback forces you to be more methodical in your development and think about the changes you're making.

There is also an attitude in computer music that low latency audio and realtime systems are simply part of the march of historical progress and ultimately represent a step forward from the days when composers could wait a week to hear 15 seconds of music rendered.

I found that I like the think-write-render-listen-repeat loop that comes from a compiled music workflow. (I discovered this when writing a -- initially very slow -- nonrealtime Python computer music system which in the early days would sometimes have me waiting a few hours to render a couple of minutes of music.)

Realtime computer systems enabled all sorts of new forms of live improvisation, interactive algorithmic systems, etc., but sometimes it's very nice to have to sit in a quiet room with a text editor or notebook and think your way through the next long render.
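
Not the system I wrote, but as a generic illustration of the nonrealtime approach: compute every sample up front, write a WAV file, then go listen.

    # Offline render sketch: a plain sine tone written to a 16-bit mono WAV.
    import math
    import struct
    import wave

    RATE = 44100

    def render(duration, freq):
        # a real score would be far more involved than a single sine
        return [math.sin(2 * math.pi * freq * t / RATE)
                for t in range(int(duration * RATE))]

    def write_wav(path, samples):
        with wave.open(path, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)      # 16-bit samples
            w.setframerate(RATE)
            w.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                                   for s in samples))

    write_wav("render.wav", render(2.0, 440.0))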


I'm in a similar situation -- I write a lot of long-running hadoop jobs where I won't know the results for an hour or more. I try to "think twice, run once".

When I'm debugging, I'll kick off several variants to test different ideas, and write myself notes on what I expect to learn from each one when it finishes. Then I can go off and work on something else until it's done, and use the notes to get myself back into the debugging groove. Without the notes, I'd find myself an hour later struggling to remember why I even ran a particular variant.


> High latency feedback forces you to be more methodical in your development and think about the changes you're making.

So does not having tests.


Oddly... The most tested code bases I have worked in had similar problems. They had tested and designed themselves into a solution that did not lend itself to changes. To the point that making a change required reasoning about several micro services and integration test packages.

Again, there are no panaceas.


Right. If I were to work on this again, the first thing I'd do is come up with some way of automatically testing the code.
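
For numerical code like that, one cheap option is a regression test against a stored short-run reference -- a sketch, with run_simulation, the file name and the tolerance all as illustrative placeholders:

    # Regression test sketch: run a short, deterministic simulation and
    # compare it against a stored reference, so most mistakes show up in
    # seconds instead of hours.
    import json
    import math

    def check_against_reference(results, reference_path, rel_tol=1e-9):
        with open(reference_path) as f:
            reference = json.load(f)
        assert len(results) == len(reference)
        for got, expected in zip(results, reference):
            assert math.isclose(got, expected, rel_tol=rel_tol), (got, expected)

    # usage sketch (run_simulation is a placeholder for the real code):
    # results = run_simulation(steps=100, seed=42)
    # check_against_reference(results, "reference_100_steps.json")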


Sure, but my point is that being forced to be more methodical isn't a good thing. Not needing to be methodical is better (because your tools will catch your mistakes for you).


Doesn't that also remove the need for discipline? So the last vestiges of 'engineering' that so many programmers aspire to vanish, they can blame their tools, and the only development challenges are found in the tools themselves (perhaps already true to some extent).

This sounds depressing.


See it another way: you can spend the mental effort instead on solving harder and more interesting problems. That is, if you can find harder problems to solve...


> the only development challenges are found in the tools themselves (perhaps already true to some extent).

Definitely true on the web side of things. Bootstrap+React+Angular+jQuery+whatever else, minify, deploy on docker, done.


The best method is to automate your methods.


This is... What?

I'm not all in on what I call "Poirot's doctrines," but there is almost always room for method. I grant that some methods aren't needed and speed is always of high value.

However, bouncing from beep to beep in your tools is not always faster than stepping back and thinking.

As an example, no amount of fast feedback will help you complete all of the Euler problems.


> no amount of fast feedback will help you complete all of the Euler problems

I find this one almost a perfect counterexample. The Euler problems are almost ideally suited to high levels of interactivity, in order to tease out patterns and solutions.

By no means am I a massive Euler user, but when I was solving #555 I was sure glad to be using Python. If I had a >1m recompile time I probably would have just given up.


I contend that if you have solved 500 ish of those problems, you are a bit beyond the majority of programmers.

Also... I don't see how any single Euler problem could have minute plus compiles. No matter the language. :(


I haven't solved 555 problems, just problem #555 [1]. Sorry if I was unclear.

I also wasn't saying that any language would get long compile times on Euler questions -- they're far too short for that -- but that theoretically if they did it would cause problems.

[1]: https://projecteuler.net/problem=555


That misunderstanding is on me. What you put was perfectly clear, so my apologies.

It is worth noting that many of those problems originate from people that did them by hand. :)



