Ask HN: What change in your programming technique has been most transformative?
236 points by dmux on Jan 23, 2020 | 208 comments
As an example, the team I work on has been adding more precondition checking to all of our applications. The simple act of stepping back from the perceived data-flow and explicitly declaring what we believe should hold true has uncovered several bugs in our understanding of our applications.

We've likened it to having someone review a paper you've written: you often read what you think you wrote, not what's actually written.

This got me to questioning what others have found to be transformative in their development practices.

Programming in a team and accepting that it is not 'my' code but 'our' code. Not feeling slightly pissed when someone else changes the (not my) code. For me, this was a total game changer and a complete change in programming style. The focus changed to using the most common idioms, the clearest, cleanest, and most concise way of writing stuff down, and avoiding smart hacks or, if necessary, encapsulating and documenting smart hacks. Not being shy of writing stuff multiple times until it really is clean and inviting to be read and understood. Using a very strict style (including when/where to put white space and line breaks) that everyone else also uses, so that code is easy for everyone to read. It improved my programming enormously and my bugs/line ratio went down. It also gave me a much better intuition for smelling fishy code. It took quite a while to internalise, but I now always code like that, even in private projects and quick hack scripts in, say, Perl.

My second most important change was to learn how to use contract based programming, or, if the language has poor support, at least to use a ton of asserts. This, for me, feels like stabilising the code very quickly and, again, improved my bug/line ratio. It forces me to encode what I expect and the code will relentlessly and instantly tell me when my assumptions are wrong so I can go back and fix that before continuing to base the next steps on broken assumptions.
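A minimal sketch of what this looks like in Python (a toy example; the domain and names are invented), with asserts stating the contract at the function boundary:

```python
def transfer(accounts, src, dst, amount):
    """Move money between accounts, stating our assumptions up front."""
    # Preconditions: encode what we believe must hold before doing any work.
    assert src in accounts, f"unknown source account: {src}"
    assert dst in accounts, f"unknown destination account: {dst}"
    assert amount > 0, "transfer amount must be positive"
    assert accounts[src] >= amount, "insufficient funds"

    accounts[src] -= amount
    accounts[dst] += amount

    # Postcondition: no account was driven negative by the transfer.
    assert accounts[src] >= 0
    return accounts
```

The point isn't the bookkeeping; it's that the moment any assumption is wrong, the code tells you immediately, instead of letting the broken assumption propagate.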

I agree with the first bit; however, it's hard to find a balance between accepting that your code can be improved and standing by your solution. What I see in my team is that someone else has another solution and, whatever it is, it should be implemented; if you argue against it, it's often "I'm just trying to improve your code". However, their solution might not necessarily be better; it's just a different approach.

We also follow very strict style guides, but this is all defined in formatters and linters, so there's no arguing against it.

Even for my personal projects, the first thing I do is usually set up .editorconfig and TSLint (if I'm using TypeScript).

I would say: Automated testing and a functional programming mindset.

Even for legacy projects, starting with adding regression tests for every bug you find is a great way to introduce testing. And when you add new features, you can write tests for that as well.
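For example, a regression test pinning a bug might look like this (a toy Python sketch; the slugify bug is invented for illustration):

```python
import unittest

def slugify(title):
    """Turn a post title into a URL slug."""
    # The fix: we previously used title.replace(" ", "-"), so double spaces
    # produced double dashes; split() collapses any run of whitespace.
    return "-".join(title.lower().split())

class SlugifyRegressionTest(unittest.TestCase):
    # One test per bug actually seen, named after the symptom,
    # so the bug can never silently come back.
    def test_double_spaces_do_not_produce_double_dashes(self):
        self.assertEqual(slugify("Hello  World"), "hello-world")

    def test_plain_title_still_works(self):
        self.assertEqual(slugify("Ask HN"), "ask-hn")
```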

I find that a FP mindset is helpful mostly because it tends to reduce the amount of global or spread-around state. This in turn makes testing and quickly iterating in a REPL a lot easier. Also when later the time comes to debug, it's much simpler if you don't have a huge amount of state to set up first.

And even if a lot of state is required, having it be an explicit input to a procedure is helpful because it makes it much clearer what you need to set up when doing manual testing.

You had me at automated testing.


It's okay to be messy.

Treat code mess with the same techniques you would treat real-life mess. Sometimes you sweep it under the rug. You can toss it in a closet or attic. You can buy a shelf or box and toss everything in there.

You can clean it up seasonally, like set aside a sprint for it.

Some kinds of messes are hazardous and absolutely should not be tolerated; this is similar to leaving milk out or letting trash pile up. Many people make the mistake of assuming that because some messes are very dangerous, all of them are.

Most messes will slow you down a little, but cost more to clean up than they save. Overall, I've seen documentation do more harm than good: it's often faster to interrupt a colleague doing something than for that colleague to spend weeks writing and updating documentation for something that gets thrown away.

Some will take more time to fix later than now - this is what people mean by tech debt. But it's less common than it seems.

With a lot of mess piles, the important parts float to the top and the less important ones sink to the bottom. God classes are mess piles.

Once the mess becomes a burden, it's a good time to clean up.

You may need a few janitors and landscapers, especially for a large code base.

Some people place much higher priority on cleanliness than others. Be respectful of them but you don't have to be like them.

I've got a slightly different take on messes, and using another real-life analogy I call it "don't put your dirty dishes in the sink." If you have a couple of dishes you can't wash right away, it is tempting to put them in the sink instead of on the counter: from a distance it looks neater, any spills are contained, etc. Where this falls apart is when you add a few more dishes, and then more, and now your sink can't serve its main function because it's full of dishes, and some of them are full of dirty water, so you can't just move them back to the counter. Washing just the one dish you need requires you to either undertake cleaning the whole sink or delicately juggle the thing under the faucet while water sprays everywhere, and washing all the dishes now requires a mop.

Code messes are similar. Sweep them under a rug or toss them in a closet and your code will look clean, just like your counter looked clean while those two dishes were in the sink. And like the sink you can quickly get to the point where those closets are scary to open and no one wants to walk on that rug because it makes weird crunchy noises.

So how does one leave one's messy code "on the counter"? Well, in my "writing C with vi" days I actually outdented any code that I didn't consider clean/good/permanent. Like all the way to the left, regardless of where it would naturally be indented. It stuck out like a dirty mug on clean granite, and no one (importantly, including me) failed to notice it.

With modern IDEs and auto-formatting and the somehow unavoidable use of Python I no longer use that technique, but I will comment any such code liberally with #HACK or #TODO or similar

I think this is a great analogy because both code and cleanliness carry similar social consequences.

If you live by yourself, you can be as messy as you want. It's only a problem if your mess becomes a biohazard for your neighbors that you have to take care of it. Otherwise, if you're happy with your mess, that's fine.

If you live with other people, your mess in the common area becomes their mess. That's a problem if other people do not like your mess. There needs to be an agreed-upon standard among those living together.

Eventually, the mess will inhibit your ability to work productively regardless of the situation. I guess you can always throw out dirty dishes and buy new ones, but there is a cost to bear.

Wrong. Absolutely wrong analogy.

You can't discount people who keep code clean, modular, and simple. It's not an easy thing to do, unlike cleaning your house. Cleaning a house just takes will, but keeping your codebase clean takes much more than will.

Sound knowledge of software architecture and design principles is required to write and keep your code clean and unfortunately, the number of people with this knowledge is very small in most teams and sometimes there's none.

Also, it's not okay to be messy unless you are working on a school/university project.

You're not quite disagreeing with the analogy; you're actually reinforcing it. What you've really taken exception to is the OP's perceived standards of 'hygiene'.

And between say 'medicine' and 'refuse collection', standards of hygiene and priorities can and will be wildly different.

> It's not an easy thing to do unlike cleaning your house. It just takes one's will to clean the house but when it comes to keeping your codebase clean, it's much more than just will.

Let me put it this way: I've seen no relationship between code purity and a happy client.

Have you ever walked into a restaurant and asked to see how they refrigerate or organize their ingredients?

No? Would you say then that you don't care if your food comes out slowly or gives you food poisoning?

You care about the result, but you trust it to someone else.

It is a Chef's responsibility to maintain the mise-en-place and food safety to a degree which enables him or her to keep delivering meals quickly and without salmonella.

It is an Engineer's responsibility to maintain code clarity and tests to a degree which enables him or her to keep delivering improvements on business needs quickly and securely.

That analogy would work if I hadn't worked for companies who made $400m+ with code and processes that would be at home in a 14-year-old's hobby project. Oh, and 0 tests.

I find that a certain kind of software person needs analogies to make their mental model work. It never worked for me.

> the number of people with this knowledge is very small in most teams and sometimes there's none.

This is just it. It's expensive to organize things.

Bad architecture is worse than no architecture. It takes time to build, then more time to tear down.

How do you organize with no experience? A mess already holds a structure. It can be evolved into the right structure. But if you adopt the wrong structure too early, you're stuck with a bad system that has to be destroyed and rebuilt.

That's true in a way, but with the huge caveat that the cleanliness of a home is fairly objective: everyone recognises and agrees when a home is very clean or very dirty.

Software architecture isn't like that at all. Developers routinely have massive fights and disagreements over whether something is clean or not. One person's clean is another person's over-engineered monstrosity.

This changes the equation quite a bit. In any sufficiently big team either one person has to be top dog and enforce their personal perception of clean on a perhaps unwilling team. Or you have to flex and tolerate multiple competing definitions in a codebase, which if you have "clean freaks" in the team can lead to constant pointless refactoring and rewriting.

> Once the mess becomes a burden, it's a good time to clean up.

Definitely the way to go. It's not a problem until it's a problem.

Still, you may want to be careful with this attitude, as some people have... let's call it a high tolerance for mess. When you stumble over something every time you enter your room but proudly proclaim "not a burden, works for me, all under control!", and then you "suddenly" need to add a huge feature quickly, having a clean room can help you do it in a shorter timeframe.

> It's not a problem until it's a problem.

I see it as a problem waiting to happen until it happens.

Part of engineering is weighing the damage if the problem happens, and the risk of it happening, against the cost of preventing it.

Can you elaborate on why this was transformative?

The dropping of standards is always going to make things a bit easier on yourself but if there isn't really much more to it than that, is proclaiming "It's okay to be messy" really a good thing?

I don't think (I might be wrong) other engineering fields can get away so lightly with such attitudes; why do you think we can? Particularly in light of so many security disasters unfolding around us. How can you justify an attitude that seems to move in the opposite direction of where many believe we should be heading, i.e. tighter control, more strongly enforced standards, and a distancing from the 'move fast, break stuff' ethos?

Order has as many flaws as disorder. There's a balance in between.

Many of us are familiar with that balance in our daily lives; e.g. we don't mop the floor so clean we can eat off it. But code is often continually polished, refactored, documented as soon as there is the slightest smell.

We do keep things tidy enough. The standards for office cleanliness never rise to surgical standards, and yet a lot of people like to compare their code to medical-device safety levels.

Besides that, there is a clear structure to mess. With too much or too little order you lose control; the right amount gives you the correct structure.

This. Do it the stupid way first, I've heard it said.

That's an excellent analogy.

Learning different programming paradigms. For example, logic programming with Prolog makes you think about solving certain problems quickly and efficiently in a declarative style. Strongly-typed functional languages like SML and OCaml make it easy to use types and pattern matching to reduce errors and shift some cognitive burden from yourself to the compiler. Lisps allow you to quickly prototype functions in the REPL and test them interactively, and thinking about code as data (homoiconicity) is a powerful concept.

In short, learning new programming paradigms completely changed my view of programming and these skills can translate over to more "mainstream" languages, so it is still a worthwhile effort.

Yes, learning other paradigms is useful.

For me, introducing immutability of data (functional paradigm) helped clean up many interfaces and made the code feel resilient. I realized I rarely passed the input data back to the caller (it was often a new and different structure). At the end of it all, I was somehow more confident in the 'run time' of the app and reasoning about issues is much easier. Currently I am not using any language features to enforce immutability - I just code the receiving function to not change any input structures (or in rare cases, return a new one). I suspect there are some exceptions lying around but having most of the code behave this way has helped.
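A sketch of the convention in Python (names invented; here a frozen dataclass enforces what the above does purely by discipline):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # frozen: attempts to mutate raise FrozenInstanceError
class Account:
    owner: str
    balance: int

def deposit(account: Account, amount: int) -> Account:
    # Never touches the input; hands back a new Account instead.
    return replace(account, balance=account.balance + amount)
```

The caller's copy is guaranteed untouched, so reasoning about "what the app's state is right now" gets much easier.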

Especially when using very general-purpose languages in business, I think what you get from learning completely different styles, structures, and concepts in other languages is mental models for approaching problems that can help you write cleaner code and find simpler solutions. The choice is often very broad (say you want to build a React app: there are thousands of approaches, and many of them only show their nasty side once you have a huge, difficult-to-maintain app). I think some paradigms lend themselves to different architectures too, and this helps you discover more ways to architect code and make informed choices.

I found learning Rust valuable, as it forces you to think about (and specify) ownership and mutability. It has exhaustive branch matching enforced by the compiler, and it has no null.

Also ChucK (and other niche languages are often cool). "ChucK is a concurrent, strongly timed audio programming language for real-time synthesis, composition, and performance"

I did a Coursera course on digital music creation in ChucK and it was such a joy, and such an unusual language.

Learning Rust helped me become a stronger Python developer.

I've been working full time with Rust for long enough that I've acquired an intuition about ownership such that a lightweight borrow checker in my mind monitors ownership designs in Python. I use mutability sparingly. I use compositional design. Etc.

>shift some cognitive burden from yourself to the compiler

This is the best way I've seen the advantages of static type checking articulated.

The line I use is: "I'm a very bad programmer, but I'm very good at making the compiler force me to get it right".

You can come up with an algebra for a specific problem domain by thinking about the data models and repurposing some simple abstract algebra.


    new(state, &args) => mutation
    apply(state, mutation) => state'
Applied to finance:

    newPayment(balanceSheet, amount, date) => payment
    apply(balanceSheet, payment) => balanceSheet'
    newDiscount(balanceSheet', amount, date) => discount
    apply(balanceSheet', discount) => balanceSheet''
Applied to chess:

    newMovement(board, K, E2) => movement
    apply(board, movement) => board'
    newMovement(board', b, D6) => movement'
    apply(board', movement') => board''
As a bonus, it's clear where to enforce invariants and fail early:

    newMovement(board', K, E3) => Illegal Movement (Can't move to E3)
    newMovement(board'', K, E2) => Illegal Movement (White King not available on board)
    apply(board''', randomMovement) => Illegal Play
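A rough Python rendering of this new/apply algebra (the balance-sheet details are invented for illustration; `apply` is renamed `apply_mutation` for clarity):

```python
def new_payment(balance_sheet, amount, date):
    """Validate against the current state and emit a mutation record."""
    # Fail early: invariants are enforced where the mutation is created.
    if amount <= 0:
        raise ValueError("payment amount must be positive")
    return {"kind": "payment", "amount": amount, "date": date}

def apply_mutation(balance_sheet, mutation):
    """Pure: returns a new state (state'), never edits the old one."""
    return {**balance_sheet,
            "balance": balance_sheet["balance"] - mutation["amount"],
            "history": balance_sheet["history"] + [mutation]}
```

Because `apply_mutation` is pure, every intermediate state (state', state'', ...) remains inspectable, which is exactly what makes the invariant checks easy to place.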

Hot code reloading. It was one of the best decisions I made when I added it to a hobby project I've been working on for the last six years. Most of my development time for this project is spent adjusting code while the app is running. The feedback loop is incredibly tight, and this is the most engaging of all the projects I've ever worked on.

I'm working on extending auto reloading to all of the assets in the project because I know that tight feedback loops are that important.
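For anyone curious, the bare-bones mechanics in Python are just importlib.reload. This self-contained toy simulates the edit-while-running loop by rewriting a module file (the module name is invented; a real setup would watch file mtimes inside the app's main loop instead):

```python
import importlib
import pathlib
import sys

# Simulate editing a module while the "app" keeps running: write the module,
# import it, change the file on disk, reload, and the new behavior is live.
mod_path = pathlib.Path("live_logic.py")  # hypothetical module name

mod_path.write_text("def update(state):\n    return state + 1\n")
sys.path.insert(0, ".")
import live_logic

state = live_logic.update(0)       # old behavior: adds 1

mod_path.write_text("def update(state):\n    return state + 10\n")
importlib.reload(live_logic)       # the "hot" part: no restart, state kept

state = live_logic.update(state)   # new behavior: adds 10
mod_path.unlink()                  # clean up the scratch file
```

The running process keeps all its state; only the code behind `update` is swapped out.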

What language? Is it a game?

I've been wanting to try that ever since I saw a video of someone developing an FPS in Common Lisp while playing it at the same time. They would modify the bullets and the way they made collision, then fire after each change to see the effect.

Sounds like you're remembering John Carmack developing VR using Racket:



Also check Arcadia for Clojure:


And developing flappy bird in clojurescript (canonical figwheel example):


No, the one I saw was different. They were walking around the world and shooting stone building walls. They also weren't part of a conference. The video was just their screen split with their code on one side and the game window on the other. I'm pretty positive it was Common Lisp too because I think I was looking for SLIME videos at the time.

Thank you for the links.

It's an open-world roguelike in Clojure. It was counterintuitive to me at first, but modifying real-time behavior like rendering is even easier, because the feedback loop doesn't involve player interaction.

Game code doesn't need player interaction to hot-reload either. See this section of a talk by Bret Victor for some inspiration: https://www.youtube.com/watch?v=PUv66718DII&t=10m42s

1. Using REPLs: The tool or the principle.

REPL means Read-Eval-Print Loop, and it's a place where you can run code immediately and get immediate feedback. For example, press F12 in your browser and use the console to evaluate 1+1. But you can also use that console to interact with your code in the browser and the DOM, and to make HTTP requests.

But I also see REPL as a principle: get feedback from the code you write as quickly as possible. Things that help with this are:

* Quick compile times

* Unit tests

* Continuous integration

So that each stage is as quick as possible. I write code and within a second or two know if it failed a test. Within a few seconds, maybe I can play with the product locally to test it manually. Once committed, I can see it in production, or at least a staging environment, pretty quickly.

You can then have bigger REPL loops, where a product manager can see your initial code fairly quickly, give you feedback, and you can get started on that right away and get it out again for review quickly.

I don't think there is any excuse not to work like this given the explosion of tooling in the last 20 years to help.


Writing over-elaborate code because it is fun! That's fun at first, but you soon learn it's better to write what is needed now. There is a balance, and writing absolutely shit code is not an excuse, but adding overly generic code for stuff that might never happen is also a problem.

By far, starting to write automated tests as I code. The sheer number of bugs I find immediately, plus the bugs I find from changes I didn't expect to break anything, would be enough to justify it; but what is even better is that it forces me to keep my code testable, which also happens to correspond fairly closely to well-architected code. Not perfectly, but for something that is automated and always running, it's really helpful.

When I'm able to greenfield something myself, and use this from day one, I tend to very naturally end up with the "well-separated monolith" that some people are starting to talk about. I have on multiple occasions had the experience where I realize I need to pull some fairly significant chunk of my project out so it can run on its own server for some reason, and it's been a less-than-one-day project each time on these projects. It's not because I'm uniquely awesome, it's because keeping everything testable means it has to be code where I've already clearly broken out what it needs to function, and how to have it function in isolation, so when it actually has to function in real isolation, it's very clear what needs to be done and how.

Of all the changes I've made to my coding practice over my 20+ years, I think that's the biggest one. It crosses languages. At times I've written my own unit test framework when I'm in some obscure place that doesn't already have one. It crosses environments. It crosses frontend, backend, command line, batch, and real-time pipeline processing. You need to practice writing testable code, and the best way to do it is to just start doing it on anything you greenfield.

My standard "start a new greenfield project" plan now is "create git repo, set up a 'hello world' executable, install test suite and add pre-commit hook to ensure it passes". Usually I add what static analysis may be available, too, and also put it into the pre-commit hook right away. If you do it as you go along, it's cheap, and more than pays for itself. If you try to retrofit it later.... yeowch.

Ah man. Reading these comments makes me so happy. Automated testing from the start. Code as you go. Love every bit of it.

To piggyback on here: don't use mocking frameworks in tests. I've found it to be a great way to expose design flaws in the code.

Avoiding cleverness at all costs. Simple code that is easy to both read and write, without strong efforts to follow senseless DRY or take on abstraction for the sake of abstraction.

Amen! Developers spend their first decade learning how to write complex code, be clever, and compress everything into the smallest, hardest-to-read amount of code possible.

In their second decade, they learn that simplicity and readability are always better, even if they require more code.

It's better to have 300 lines of simple, straight-forward, easy-to-read code than to have a 30 line version of the same code that's difficult to understand, and difficult to debug.

Always code for your future-self and future-others who really don't want to spend an hour figuring out what the hell you did in that awesome looking 30 lines of entropy.

IME it's about time spent maintaining code rather than an age/experience thing: devs that flock like locusts from greenfield to greenfield will never learn, while devs that maintain "clever" code learn pretty quickly.

Sometimes you're lucky enough to have this expedited; my first CL at my first job was beautiful and elegant and concise and needlessly complex, and I was told to unroll it to make it more readable. The lesson sunk in the minute I had to start reading other people's code.

"Expedited", haha, yep. I mean, there are times when something genuinely requires totally complex code, but in those cases there should almost always be a full paragraph of explanation (documentation) for about every 10 or so lines of code.

That ends up with more documentation text than code itself, which is a good thing. But the majority of code can be kept simple and self-documenting.

Sure, but the converse is that when you have a 10k+ LOC project, you can't read that program linearly anymore. I find that sometimes proponents of "straight-forward code" strive for code to be immediately understandable line-by-line, but in big projects, this is not how you navigate a project; you have to create the right modules and abstractions, otherwise you will get lost.

I've worked mostly on massive apps with 400K to 1 or 2 million lines of code, across many projects, so reading through the code linearly is something that just never happens.

But with a "keep it simple" philosophy, the number of lines of actual code never really matters that much, while code where the developers were clever and complex at every opportunity (often just showing off) is quicksand and a quagmire every single line of the way, and a nightmare.

"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"


Your link goes on to say that this doesn't mean the obvious. They say that cleverness isn't innate, and that writing code that you have trouble understanding later will force you to grow as a programmer

> If we deliberately stay away from clever techniques when writing code, in order to avoid the need for skill when debugging, we dodge the lever and miss out on the improvement. We would then need other sources of motivation in order to grow as programmers, and if no such motivation appears, our abilities stagnate (or even deteriorate).

I'm not experienced enough to guess if this is accurate, but I found it very interesting

Can anyone give a specific example of code they've worked with that was "too clever"?

Based on my experience, if a coworker wanted to "avoid cleverness" I imagine we'd end up arguing over something like a for-loop versus a map, but such small code decisions matter very little in comparison to overall architecture and where you place the "seams" in your system.

So I ask, when you have felt that code is "too clever", was it because someone used a map, but you're more comfortable with for loops, or was it bigger than that?

My first job was maintaining some PHP. The author didn't know what a function was. The code was a 5,000 line script, top to bottom, with basic control and looping logic nested up to 17 deep (I counted). It was horrible. Yet surely, the author had avoided cleverness at all costs; he had built a working system with only the most basic tools, those being all he knew.


    for(let i=0;i<100;)console.log((++i%3?'':'fizz')+(i%5?'':'buzz')||i)

  for (let i = 1; i <= 100; i++) {
    if (i % 3 == 0 && i % 5 == 0) {
      console.log('fizzbuzz')
    } else if (i % 3 == 0) {
      console.log('fizz')
    } else if (i % 5 == 0) {
      console.log('buzz')
    } else {
      console.log(i)
    }
  }

I like that example. Since I preach functional-style programming and separating the side effect from the rest of the algorithm, I'd write the code this way:

  Array.from({length: 100}, (v, k) => k + 1)
    .map(i => {
      if (i % 3 == 0 && i % 5 == 0) return "FizzBuzz"
      if (i % 3 == 0) return "Fizz"
      if (i % 5 == 0) return "Buzz"
      return i
    })
    .forEach(line => console.log(line))
A couple of examples from Java/Kotlin code:

- Too much business logic hidden behind dependency injection (using Dagger in this instance). I think DI should be used sparingly, to decouple large subsystems like the database or the network. It can feel clever to inject everything, so your system is super modular and decoupled, but that just makes it much harder to understand, with little or no real benefit.

- Somewhat related, excessive use of annotations to accomplish tasks that could be done in a more straightforward way with normal code. For example, in Android, you might have an annotation that adds some fragment to your activity as a mix-in; but is that really easier than just calling a function to do the same thing?

This stuff starts to cause real trouble when there are 1000 occurrences sprinkled through your code, and suddenly you need to step through it to debug a tricky problem. Straight-line code is vastly easier to deal with.

So I’d say “clever” is more of a problem at an architectural level, rather than inside individual functions.

At the low level, shorter is almost always better. If somebody comes up with a clever way to reduce a function from 10 lines to 4, say, that’s great -- as long as its purpose is clear and it’s testable.

Interesting examples. I am still conflicted about DI. I have seen it used too little which means that classes become basically untestable. OTOH, things like Spring (don't know about Dagger) seem so overcomplex and hard to figure out, I can certainly understand your point. I've had some luck with just hand-rolled DI in certain cases (e.g. a class that I know is mostly coordinating other classes; in that case, I might just inject all the dependencies manually for easy testing but provide a default constructor for actual production use), but I think it's hard to find the right balance.
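For what it's worth, a hand-rolled sketch of that "default wiring for production, inject fakes in tests" pattern might look like this in Python (all names invented):

```python
class EmailSender:
    """The real dependency; a production implementation would talk SMTP."""
    def send(self, to, body):
        print(f"sending to {to}")

class SignupService:
    """A coordinator class: dependencies are injected by hand, no framework."""
    def __init__(self, sender=None):
        # Default constructor path for production; tests pass a fake instead.
        self.sender = sender if sender is not None else EmailSender()

    def register(self, email):
        self.sender.send(email, "welcome!")
        return email

class FakeSender:
    """Test double: records what would have been sent."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append(to)
```

No container, no annotations; the wiring is visible at the call site, which keeps the "what depends on what" question answerable by reading the constructor.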

Agreed about the annotations; it's one of the annoying things about Java, IMHO. The language itself is so inexpressive that people have to resort to annotations to do basic stuff. I think some level of "metaprogramming" can be useful (regardless of the language), either for boilerplate reduction or for removing aspects like logging from the main code, but it's too easy to become "clever" about it (not just in Java; Rubyists abuse metaprogramming way too often too, for example). I generally favour "explicit over implicit", which also means that I generally dislike inheritance because of all the non-local reasoning.

> I've had some luck with just hand-rolled DI in certain cases (e.g. a class that I know is mostly coordinating other classes; in that case, I might just inject all the dependencies manually for easy testing but provide a default constructor for actual production use)

Same here! I think that approach works really well when you’re able to use it.

I’d love to try removing the DI framework and see what you get just rolling it all by hand, but that’s a tough sell in a large pre-existing project.

you could do it as a personal side project without merging it into the mainline and you might learn one of three things:

- handrolled is better and the framework stinks

- there are some disadvantages to the framework, but after doing it all by hand you also understand how it makes a lot of things easier

- it's a tradeoff that largely amounts to which kinds of problems you're personally more willing to put up with (quite likely outcome)

in any case you would learn something

One example would be anything called a "validation framework", hand rolled or as a library. They add so much complication for something easily handled by a validate function and a bunch of linear if statements in the "not clever" version.
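A sketch of that "not clever" version in Python (the order fields are invented):

```python
def validate_order(order):
    """One function, linear checks, a plain list of error strings."""
    errors = []
    if not order.get("customer_id"):
        errors.append("customer_id is required")
    if order.get("quantity", 0) <= 0:
        errors.append("quantity must be positive")
    if order.get("currency") not in ("USD", "EUR"):
        errors.append("unsupported currency")
    return errors  # empty list means the order is valid
```

Every rule is grep-able, debuggable with a breakpoint, and addable by anyone on the team, with no framework to learn first.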

One I deal with at work is someone trying to be "clever" by coming up with an annotation-based tool to automatically serialize and deserialize objects to a third-party system. It's got complex class hierarchies, interception points, etc., and I regularly have to crawl through this code. The "not clever" way would be a function that translates the objects to the very simple CSV format. In fact I've done that to test interacting with it, and when the bash version is simpler to read than the C# version, you know it's overcomplicated.

Another frequent one is "this code is repeated, better move it to a function". The problem is that when you then need to make a change, you have to go through every code path to check its relevance or add optional parameters; the next time it needs to change, you have to check all code paths again and inspect which ones are using which parameters. The "not clever" way is to leave the repetitive code. This has its own downsides, like fixing one place and missing another, but those are much easier to deal with.

Basically, use the least level of abstraction that can reasonably get the job done. Your PHP example probably could have used a little more abstraction, like functions and structures, but most "enterprisey" code I see could use a lot less.

>Basically use the least level of abstraction that can reasonably get the job done.

There is a great section of John Ousterhout's "A Philosophy of Software Design" that goes into this. He mentions that moving code to functions / methods doesn't eliminate complexity, it just kind of shifts it: it adds complexity to the "interface" of the module.

Sometimes leaving code inline with its original context is better.

I once decided that the SNMP agent for an embedded device should not implement the leaves (responses) with simple hard-coded C functions, one for each implemented leaf on the great standardized ASN.1 tree.

Instead I implemented some kind of ASN.1 tree parser, which parsed to lisp, then I tied that through C bindings to some kind of callback to the lisp functions, and clever reuse of datatypes and whatever. I don't even remember how it worked!

I don't remember if it did all the processing on the device or if I had (the same flavour) lisp running in the build system too. I just remember that in the end, even I didn't know exactly how it worked or how to hunt down the inevitable bugs. It looked very elegant though, with lots of autogenerated types from the ASN.1 spec.

The C code which eventually replaced my unholy mess had to handle the datatypes manually in each function, but it was easy to understand and easy to update. Fewer bugs too, I'm sure. And one less dependency (the Lisp engine). It didn't have the automatic integration with our vendor MIB file, so you'd have to manually add or update functions in the C code whenever someone in another part of the company decided to update the MIB file.

Small price to pay. I'm not saying the "elegant" idea couldn't have been made workable and easy for other developers to understand, but definitely not by me at that point in time. Lisp at the time was my hammer and I saw a lot of nails. (I wasn't even any good with the hammer for actual nail-like things, I think.)

> Can anyone give a specific example of code they've worked with that was "too clever"?

Code that relies on implementation details of a library that a good portion of developers wouldn't necessarily know. In C# if your code relies on LINQ being lazily evaluated to produce the correct answer it's probably too clever.

Using reflection in static languages like Java or C# when it's not necessary.

Writing code that passes functions around when you don't need to.

Chaining together a series of map/filter/reduces in order to avoid writing a simple for loop.

> Based on my experience, if a coworker wanted to "avoid cleverness" I imagine we'd end up arguing over something like a for-loop versus a map, but such small code decision matter very very little in comparison to overall architecture, and where you place the "seams" in your system.

> So I ask, when you have felt that code is "too clever", was it because someone used a map, but you're more comfortable with for loops, or was it bigger than that?

This rule basically comes down to all things being equal try to write straightforward code. If you can write two functions in 5 lines of code but one function will be more easily understood by a more novice programmer, go with the more straightforward approach.

>My first job was maintaining some PHP. The author didn't know what a function was. The code was a 5,000 line script, top to bottom, with basic control and looping logic nested up to 17 deep (I counted). It was horrible. Yet surely, the author had avoided cleverness at all costs; he had built a working system with only the most basic tools, those being all he knew.

No one is arguing that writing straightforward code is the most important code quality to strive for, just that before you implement that currying solution to reduce the number of lines of code from 40 to 35. Think twice.

> Can anyone give a specific example of code they've worked with that was "too clever"?

- I remember being called in to help rewrite a few lines of Perl (the other developer hadn't managed to do it): it took me two or three hours of constantly consulting the manual to rewrite those few lines into Perl that a beginner could understand.

The end result had the same number of lines as the previous version.

- Configuration files made of C++ templates, for dubious reasons.

I was working on a pipeline that took data in, did a bunch of work concurrently, then decorated the original data and shuttled it off elsewhere.

I spent a whole sprint building out "the RAD" - a rooted acyclic digraph - that ensured all sorts of correct behavior and powerful options for working with the graph. It felt awesome building what I thought was this super elegant thing that, to me, made total sense since I had been working with it for 3 weeks.

The other devs were confused by it and were afraid to touch the code. I admitted failure and worked with them to rewrite it. It ended up being an adjacency list (i.e. a graph) with breadth-first traversal, but as long as I didn't call it that, they understood it and liked it.

And in hindsight, we didn't need all the guarantees that it provided, so they were right.

Generally bigger than that, though long clever statements using functional idioms like you suggest could be hard to read in some languages.

Cleverness also refers to architecture, designing meta-types to encapsulate all sorts of things that just don't need the flexibility, over-designing a system in anticipation of future needs that may never come. Sometimes it's a delicate balance and often only experience can dictate how "clever" one should be.

Many developers just out of university, or still in it, in certain languages, like to run with clever things they can do with the type system to abstract away all sorts of stuff, which becomes painful later.

+1 from me. Every time I revisit code that I wrote cleverly I roll my eyes at myself. What was I trying to prove, now I have to put more effort in understanding my "cleverness" instead of quickly going through the change I have to implement.

^this +1. took me years to learn.

Functional Programming fundamentally changed the way that I solve problems. It also changed my expectations for how code should be designed.

The concepts that influenced me the most were immutability and the power of mapping+filtering. Whenever I read a for/while loop now, I’ll attempt to convert it to the FP equivalent in my head so that I can understand it better.

I’d do the complete opposite. Instead of using mapping and filters I’d prefer for loops.

map and filter are not concepts of FP - they are just syntactic sugar around for loops, which is fully evident if you check any native source code. Many languages (such as Go, which I currently use) do not even have these functions but can still apply FP. This is by design, for simplicity and visibility: it does not get much easier to read than a simple for loop where you know the exact iteration count at runtime.

One of the key principles of functional programming is, well, functions. A for loop isn't a function and isn't composable, map and filter are.

They are not equivalent - a for loop prescribes the ordering of processing, but map and filter can work in any order. They are often associated with FP because to define these operators you need to be able to pass functions as parameters to other functions, which is a thing that surprisingly many languages struggle with.

That's like saying SQL is just syntactic sugar around loops for databases. Might be true but it misses the whole point of declarative programming.
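To make the contrast in this subthread concrete, here's a rough Python sketch of the same computation written both ways; the declarative stages are values you can name and compose, while the loop fixes ordering and mixes the steps together:

```python
from functools import reduce

nums = [1, 2, 3, 4, 5]

# Imperative version: the loop prescribes ordering and interleaves
# filtering, mapping, and accumulation in one body.
total = 0
for n in nums:
    if n % 2 == 0:
        total += n * n

# Declarative version: each stage is a separate, composable piece.
total_fp = reduce(lambda acc, n: acc + n,
                  map(lambda n: n * n,
                      filter(lambda n: n % 2 == 0, nums)),
                  0)

assert total == total_fp == 20  # squares of 2 and 4
```

Whether the second form is clearer is exactly the taste question being argued above; the point is only that the stages become first-class values.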

Recognizing that there's a special power in resilient technologies. Those that keep chugging along decade after decade, and getting stronger, too. Not from inertia or monopolist effects or other kinds of random lock-in. But because they tackle a certain set of fundamental problems very, very well. Despite endless complaints about their fundamental limitations, conceptual flaws or alleged lack of scalability.

SQL, Python, PHP, JavaScript, and many aspects of Unix and the C/Make toolchain all come to mind.

Mind you, I don't like all of the above. At least 2 in particular I definitely wouldn't mind seeing "just go away."

But I do recognize that they have special staying power. And they didn't get this power by accident.

This is certainly not true. Evolution of programming languages is circumstantial. There are many flaws in all of the ones you had listed.

The marginal cost of deprecating something or breaking something is higher than the incremental cost of workarounds. This happens all the time, from the laryngeal nerve in the giraffe to the evolution of a programming language, say JavaScript.

Please don’t mix good design with popularity. It’s a correlation/causation fallacy.

Look close enough and everything is flawed.

Evolution doesn’t ‘design’ anything, least of all anatomy.

The ‘special power’ is that they have stuck around, for whatever reason or circumstance.

I never claimed that evolution “designs” something. In fact, I’m talking about the opposite - Evolution has no hindsight. Therefore, the marginal cost of undoing something is larger than incrementally adding or bolting on fixes.

> The marginal cost of deprecating something or breaking something is higher than the incremental cost of workarounds.

So why are we in this python 2 to 3 mess?

Making debuggability (transparency) a first-class design criterion.

For many (most) systems, there are designs that make perfect sense, but will be hard to debug.

When I started designing so that programs could tell me what they are going to do, what they are doing, and why, and so that their state, wherever possible, could be expressed into a structure where I could "see" the wrongness, my development time went way down--because it was really time spent debugging, so my productivity went way up.

Yes! This is so much more important than whether you use "for" or "map".

Interesting. Any examples of libraries or papers?

I can give an anecdote. I needed to design a load testing system that would ramp up to an arbitrary number of users of certain types, over an arbitrary amount of time.

Rather than just make a loop and increment every so often, I made the system output its "plan" for the user ramp and had a second component execute that plan. We had a bug in the calculation, which we were able to spot easily by reading the "plan". Without this step, we would have been unsure whether the problem was in the calculation of when to add users or in the implementation of adding a user.
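A hypothetical sketch of that plan-then-execute split (the ramp parameters here are invented): compute the schedule as plain data first, so a human can eyeball it before anything runs:

```python
# Build the ramp schedule as inspectable data, separate from execution.
def build_ramp_plan(total_users: int, duration_s: int, steps: int) -> list[tuple[int, int]]:
    """Return (time_offset_seconds, users_to_add) pairs."""
    plan = []
    per_step, remainder = divmod(total_users, steps)
    for i in range(steps):
        add = per_step + (1 if i < remainder else 0)  # spread the remainder early
        plan.append((i * duration_s // steps, add))
    return plan

plan = build_ramp_plan(total_users=10, duration_s=60, steps=4)
# Reading the plan directly exposes calculation bugs before any user is spawned.
print(plan)  # prints [(0, 3), (15, 3), (30, 2), (45, 2)]
```

A separate executor then just walks the list, so a wrong ramp is visibly a planning bug, not an execution bug.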

For me it was three things: 1) learning to understand push/pull between data structures and algorithms 2) learning new programming languages from different paradigms, and 3) understanding the physical implications of programming (i.e. latencies of disk-access, network, and memory; the non-amortized cost of pulling data from a database only to do the joins in memory, etc.)

My younger self did not truly understand the strength of modeling or treating data as the reification of a process. I saw everything through the lens of what I knew as a programmer which was, for the early 2000s java developer I was, a collection of accessors, mutators and some memorized pablum about object orientation. I treated the database with contempt (big mistake) and sought to solve all problems through libraries.

Now I can see the relational theory in unix pipes. I can see the call-backs of AWK and XSLT processing a stream of lines/tuples or tree nodes as the player-piano to the punch cards. I understand that applications come and go, but data is forever. I no longer sweat the small stuff and finally feel less the imposter.

I throw away rough drafts all the time for non-trivial work that I don't fully understand yet. There are different approaches to it, but I usually just create a throwaway branch to hack out a naive solution until I run into the non-obvious roadblocks and then try again. I used to try to _really_ understand something before coding a solution, but not being afraid of throwing away a rough draft has helped my mindset a bit in terms of working out difficult problems. It's not a novel or huge concept but it works for me.

This is an interesting idea, I think I often do this subconciously but I might take your mindset next time and tell myself I can chuck the first draft away.

We call this phase the PoC (proof of concept) effort. Rough draft, I like that too.

Static typing + type inference

Parse, don't validate https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...

All libs/products should be pure functions, with input output documented, making libs/products predictable

Use in-app event sourcing to reduce the need for global state

DoD https://youtu.be/yy8jQgmhbAU In non-bare-metal languages, this will be useful for readability

For errors, return instead of throw
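A minimal Python sketch combining two of these ideas, "parse, don't validate" and returning errors instead of throwing (the Email type and the validity rule are illustrative, not from any particular library):

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Email:
    """A value that, by construction, always holds a plausible email."""
    value: str

def parse_email(raw: str) -> Union[Email, str]:
    """Parse into a typed value, or return an error string instead of throwing."""
    raw = raw.strip()
    if "@" not in raw:
        return "missing '@'"
    return Email(raw)

result = parse_email("ada@example.com")
# Downstream code that receives an Email never needs to re-check it.
assert isinstance(result, Email)
```

The payoff is that the check happens exactly once, at the boundary, and the type system carries the proof everywhere else.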

Thanks for that "Parse, don't validate" link. It definitely is the clearest explanation for type driven development I've found.

Turns out, years ago I was writing some C# code and I found myself wanting to explore something like that. However, it wasn't idiomatic for C# (especially at the time), and I talked myself out of it.

Since then, I've wondered if that's what people mean by type driven development. It's cool to see that is at least in the general ballpark.

Is C# nowadays friendly enough for type driven development?

Using TypeScript daily, I have a great time with type driven development. (C# and TS were created by the same guy.)

I was never comfortable with the Java community's approach of automating stuff, with its heavy use of reflection and method-name parsing instead of being strict with typing, generics, macros, templating, etc.

Well, you can "force" some of it ... make an object whose ctor can only succeed if a certain check passes, for example.

You'd be using types in a way that pretty much no one in C# does (not idiomatic), except for maybe those who are explicitly trying to go for type driven development (for example, people doing things like railway oriented programming).

> in-app event sourcing

What's the difference between in-app and not?

Event sourcing is already widely used as app to app communication. But few realize that it is plausible to take that paradigm and use it for module to module communication in-app.

That line serves as a reminder to utilize that, not to exclude non-in-app event sourcing.

I'd love to see a good example of in-app module-to-module event sourcing.

By module to module, do you mean between very loosely coupled systems such as microservices, or closely associated things like queues and method calls in a single process?

All three: in-process, inter-process, and inter-machine. The event sourcing technique really helps decouple components and reduce the need for higher-order state.

I work with TypeScript on a daily basis and I'm in the middle of building this library to help with event sourcing and other stuff. https://github.com/Kelerchian/florcky

I'll be working with Rust, C++, and C# in a few weeks due to professional demand, and I might get somewhere with event sourcing using those languages.

In the long run we might be utilizing wasm and ffi to introduce universal protocol for event sourcing.

"Premature optimization" was happening a lot more in my head than on the screen. By that I mean I would waste time struggling to succinctly generalize a function instead of making something work the easy and ugly way FIRST. It is far easier to copy paste working code in 4 places and leave a nice comment on each about what could be probably consolidated... without actually doing the work at least until you come back to the code to make a minor change in those 4 places. Half the time you never come back, most other times you end up merging them within a week. And for outliers... good commenting saves the day.

Some game changers for me off the top of my head:

* Automated regression testing and build/deploy pipeline. The machine will do things quickly and repeatedly exactly the same way, given the same input conditions/data.

* TDD. Create tests based on requirements, get one thing working at a time, and refactor with a safety net.

* Mixing FP style and OO style programming to get the best of both worlds.

* Understanding type systems, and how to use types to catch/prevent errors and create meaningful abstractions.

* Good code organization, both in physical on-disk structure and in terms of coupling & cohesion.

* Validate all incoming data and fail fast with good error messaging.

Many developers think of themselves as abstraction masters. "Moar abstraction is moar better!" But abstractions have a cost in readability and maintainability, and it is very hard to make an abstraction that improves both. I'm tempted to say that premature abstraction is a larger problem than premature optimization.

Make your code easy to read at the expense of easy to write. Don't abstract unless you know exactly the use case. If the use case is not immediately forthcoming, YAGNI (you ain't gonna need it). Related: don't be clever. Clever is cute and all, but it does not belong in production code.

When working on most things, I ask two basic questions: how does it fail? how does it scale? I picked that up years and years ago.

Any line, function, workflow, etc can fail. You need to have worked through all the failure cases and know how you are going to handle them.

Some things don't need to scale, ever. But most things I work on do. If I don't know the story on how I can ramp an implementation up a few orders of magnitude, then I can't say I've designed it well.

Aside from that, measure everything. Metrics, telemetry, structured logging. Design for failure and design for understanding what happened to cause the failure. Data will tell you what you are doing, not what you think you are doing.

There was a post here yesterday about a neat map thing. I hit a bug, and the author could only rely on their experience, saying it was potentially slow at times but should work. If there were proper metrics in place, they would know that N% of calls exceed M seconds and cause a timeout. They could then relate this to the underlying API and its performance as the service experiences it. With proper logging, they could see what the cache hit ratio is and determine whether new cache entries should be added.

Build, measure, learn.

Oh, and automated tests. So much to be written on that.

Leaning on the type system for as many guarantees as possible. A few of my teams use Kotlin's sealed classes and extension methods to prevent a lot of goofy invalid state and improve composability (using a Result<T, E> sum type very similar to what Rust has built-in).

I liked this article I recently came across discussing the topic: https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...

Continuous Delivery.

I'm building internally used software, and getting feedback from stakeholders in the testing environment was hard work, and often impossible.

We went from roughly monthly releases to deploy-when-it's-ready, and this tightened our feedback loops immensely.

Also, when something breaks due to a deployment, it's usually just one or two things that could be responsible, instead of 20+ that went into one monthly release. Waaaay easier to debug with small increments.

Optimising for quick feedback loops.

Basically it involved writing additional code to set the app up in exactly the state I needed.

It went from changing a line of code, running the app, clicking/filling fields as needed, and then seeing the change reflected, to the app immediately being in the state that I wanted.

Now that I'm working in web-apps, hot-reloading is quite beautiful.

Many incremental changes (some already mentioned here) but none comes close to grokking macros in Lisp and lisp-like programming languages such as Common Lisp, Scheme, Racket, Emacs Lisp and Clojure. That was truly, truly transformative.

Probably learning to use a debugger and decreasing my reliance on print() statements. It's a nonzero initial investment but saves time in the long run.

At the team level, probably CI/CD. It forces us to break the monolith into digestible chunks and makes regression testing easier.

I've recently joined a team that focuses massively on test-driven development, and it's just such a great way to shake out dumb bugs and focus on the requirements and the problem you're trying to solve.

Also recently did Thorsten Ball's Interpreter/Compiler books which focus heavily on unit testing the functionality (I can't recommend these books enough).

I now can't imagine going back to a pre-TDD world

I did Learn Go With Tests, which was my first intro to TDD, and I found it enjoyable even if the author is a little overly pedantic about his preferred testing approach.

However at work, I often find that I feel like I can't write unit tests before starting code because I just don't know how it's going to get built until I start poking at the code. I'm not sure how to break out of this

> However at work, I often find that I feel like I can't write unit tests before starting code because I just don't know how it's going to get built until I start poking at the code. I'm not sure how to break out of this

You'd start with shallow functions and quality asserts. Initially you'd have failing tests; you write code to fix them, and that's your TDD.

This is definitely the best way. Start with how you want everything to look and make your code match your expectation.
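A tiny illustration of that test-first flow in Python, where `slug` is a hypothetical function: write the call you wish existed, watch it fail, then implement just enough to pass:

```python
# Step 1: write the test as the client you wish you had.
def test_slug():
    assert slug("Hello World") == "hello-world"

# Step 2: the minimal implementation, written only after the test exists.
def slug(title: str) -> str:
    return title.strip().lower().replace(" ", "-")

test_slug()  # green; now it's safe to refactor
```

The test pins down the shape of the API before any implementation details get a vote.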

Hello, I wrote learn go with tests!

Your comment is exactly what I try to get across in the book but it's not always easy.

If you cant decide how you want something to look (as that's not always easy) just take a punt on something. Make sure it's a small decision and make something useful. Sure you might have to change it, but at least you'll be basing that on some real feedback.

Hi, thank you for writing the book! Great introduction to Go.

One thing I was uncomfortable with about TDD is the step to "just write enough to make the test pass". The danger seems to be when you have to walk away from some code and you or whoever picks up your code doesn't know what you were up to.

In a simple app with a few tests you could probably tell, but the larger an application gets, the harder it is to find out whether "this test passes because the application works how it should" or "this test passes because someone wrote just enough code and hardcoded some value somewhere to make the test pass".

It seems "safer" to write tests that are going to stay broken until every little corner of everything works properly, even though this is less iterative.

As mentioned above I have little TDD experience, so maybe I'm missing something.

The point of "just writing enough to make the test pass" is to get you to the point where you have working software (proven by the test, even if the code is "bad")

This is the _only_ point where you can safely refactor; if your tests are failing, how do you know you haven't broken something? So long as you keep things "green", you know you're OK.

> The danger seems to be when you have to walk away from some code and you or whoever picks up your code doesn't know what you were up to.

Just don't do this. You're not "done" with the TDD cycle until you make it pass and have refactored.

I reflect this in my git usage too, roughly it goes

- Write a test -> Make it pass -> git commit -am "made it do something" -> refactor -> run tests (if i get in a mess, revert back to safety and try again) -> git add . -> git commit --amend --no-edit

> It seems "safer" to write tests that are going to stay broken until every little corner of everything works properly, even though this is less iterative.

The problem with this is how do you safely refactor when you have potentially dozens of tests failing?

A big point of this approach is it makes refactoring a continuous process and makes it easier because you have tests proving you haven't accidentally changed behaviour.

Thank you for the thorough reply! I'll keep hacking away at it :)

I need to get this book now too ;)

Well it's open source so just go take a look


I'm not even thinking about "how it's going to get built" when I write test-first. I'm just writing (test) code as a client that's calling an API. It just so happens that the API (method, or method + object) in question doesn't actually exist yet, so the first thing I do is let the IDE generate them so it compiles.

Once the code compiles I go into the implementation and make it work.

Now and then I realize during implementation that I need some additional parameter, but it's easy to add that.

I guess to me:

1) Writing a test that calls an API and asserts that everything resulted correctly from it is more of an integration test in my opinion - in this case I can agree this can be written beforehand.

2) If I'm changing my tests as I write my code, then it doesn't matter if I go test-code-test-code vs code-test-code-test. I have still changed my definition of "correct" on the fly based on something I found out as I implemented.

What does your pre-code design process look like? Might beef that up; its what has helped me quite a bit.

How have you achieved a balance between too little and too much testing? I worked on a project that had a ton of tests, but they seemed to be "overfitted."

The trick is to get something covered to the point where you could comfortably refactor the bejeezus out of it and know that it's still working from a higher level. The test should be how you know the code works, under any situation.

That is, you don't need to test that inbuilt standard library stuff is working as expected (e.g does a string reverse) but rather, does the combination of possible inputs match your expectations?

Writing simple functions which take an A and return B in any context. Then, composing those simple functions together to build up more complex functionality.
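A small Python sketch of that style, assuming a hand-rolled `compose` helper (not a standard library function):

```python
from functools import reduce
from typing import Callable

def compose(*fns: Callable) -> Callable:
    """Right-to-left composition: compose(f, g)(x) == f(g(x))."""
    return lambda x: reduce(lambda acc, fn: fn(acc), reversed(fns), x)

# Small, single-purpose A -> B functions...
strip = str.strip
lower = str.lower
def dashify(s: str) -> str:
    return s.replace(" ", "-")

# ...composed into more complex functionality.
slugify = compose(dashify, lower, strip)
assert slugify("  Hello World  ") == "hello-world"
```

Each piece stays trivially testable on its own, and the pipeline reads as a description of the transformation.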

I would mention generics also. It has forced me to a clean architecture from the start without throwing away code.

Although sometimes the domain is not clear enough before you start, that's shitty.

Realizing that I can roll my own framework.

Why? Because I’ve tried to actually think about my underlying data structure instead of defaulting to convention out of reflex.

The best example? Marcel Weiher's "iOS and macOS Performance Tuning", Ch. 13, explains how to improve lookups in a public transport app by 1000x.

What's more, the description of the data (the code) and of the application (the intended use) are probably way better, because thinking about performance is similar to thinking about your data. You want answers fast. Fast answers, fast code.

I've been doing a lot of Python lately and using type annotations strictly and a decent IDE like VS Code or PyCharm. If you screw up, passing a foo instead of a bar, both almost immediately highlight the error. Don't even have to run the script to see it blow up. Huge time saver.

Also doesn't hurt that it improves Intellisense/Autocomplete, gives better hints/help as you're calling a method, and improves on "self documenting" code.
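A small illustration of the foo-vs-bar point, with invented `UserId`/`OrderId` wrapper types; a type-aware IDE flags the mix-up before the script ever runs:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserId:
    value: int

@dataclass(frozen=True)
class OrderId:
    value: int

def fetch_user(user_id: UserId) -> str:
    # A real lookup would go here; hardcoded for illustration.
    return f"user-{user_id.value}"

fetch_user(UserId(42))      # fine
# fetch_user(OrderId(42))   # VS Code / PyCharm highlight this immediately
```

Plain `int` IDs would sail through silently; the wrapper types turn a runtime bug into an editor squiggle.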

I ask myself what's the smallest, dumbest, least-coupled program I could possibly write to get the job done. Turns out that the answer is often "very", and short, simple programs are a lot easier to produce bug-free and work with over time.

I wouldn’t be surprised if the majority here are doing this, but version control everything

Distributed version control is one of the greatest things to have gained popularity in the last ~20 years; if you’re not incorporating git, hg, fossil, ... start now.

You gain: concise annotated commits, ease of cloning/transporting, full history, easy branching for feature branches or other logically separate work, tools like bisecting, blame/annotation, etc., etc.

I can think of 4 things:

1. Not being afraid to look at the code of the libraries that my main project depends on. It's a slow, deliberate process to develop this habit and skill. But more importantly, as you keep doing this, you will develop your own tactics of understanding a library's code and design, in a short amount of time.

2. Not worrying about deadlines all the time. Not a programming technique as such, but in a world of standups and agile, sometimes, you tend to work for the standup status. Avoiding that has been a big win.

3. (Something new I've been trying) Practicing fundamentals. I know the popular opinion is to find a project that you can learn a lot from, but that may not always happen. Good athletes work on their fundamentals all the time - Steph Curry shoots more than 100 three-point shots every day. I'm trying to use that as an inspiration to find some time every week to work on fundamentals.

4. Writing: essays, notes. In general, I've noticed I gain more clarity and confidence when I spend some time writing about a subject. Over time, I've noticed, I've become more efficient in the process.

What would be some fundamentals to practice for a software developer?

There are quite a few, and you can create a list of categories you consider as fundamentals for the nature of your work. As an example, I would think Algorithms and Data Structures is a fundamental subject. These are the easiest to practice.

You could for example pick something as simple as a HashTable, and implement it from scratch. Then, you could add more complexity to it, like HTs that won't fit in memory, expanding and shrinking HTs efficiently etc.
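As a sketch of that exercise, a deliberately minimal hash table with separate chaining might start like this (no resizing or deletion yet, which are exactly the kinds of complexity mentioned above as next steps):

```python
# A from-scratch hash table using separate chaining: each bucket is a
# list of (key, value) pairs.
class HashTable:
    def __init__(self, capacity: int = 8):
        self._buckets = [[] for _ in range(capacity)]

    def _bucket(self, key):
        return self._buckets[hash(key) % len(self._buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing key
                return
        bucket.append((key, value))

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

ht = HashTable()
ht.put("a", 1)
ht.put("a", 2)
assert ht.get("a") == 2 and ht.get("missing") is None
```

From here, adding load-factor-based resizing or spilling to disk makes the practice progressively harder, as the comment suggests.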

Or, you could use one of the several practice websites like LeetCode to practice Algorithms/DS problems.

Once you start building a habit, you will also become better at organizing your practice routine and finding out more about what to work on, and where to look for study/practice materials. But mind you, this is a slow process, which you want to build as a habit. There is no end goal here (like cracking Google interview or such), this is a process to get better at the fundamental skills in your field.

Moving from my college years of emacs on the server to a proper IDE on the desktop.

It wasn't college for me, but the first IDE I used that seemed like a step up from vi and makefiles was Visual Age for Java. And man, that was a huge step up.

Do you use any emacs key bindings in that IDE?

Yes, but compared to an IDE (PyCharm in this case) there is no comparison.

Honestly at a team level - code reviews. At least one other developer reviewing every line of code that is written is transformative in the quality of code that is produced. A lot fewer problems, a lot more maintainable.

Autoformatting (gofmt, prettier, etc). I can't believe how much time I was wasting having to manually indent, split lines etc.

+1... black for Python. I didn't agree with some of the formatting decisions, but it has gotten better with passing versions, and the gains of not thinking about it at all are huge.

Self discipline. Just sitting down and showing up is not enough, you need to concentrate. This is why I think open plan offices are terrible for getting real work done, there always should be possibilities for solitude for those that need to work on something hard. Oh, and KISS. No way without that.

Can't believe this list doesn't include revision control systems. So many benefits. Without these, you can't easily work with others. Never again break code then scratch your head wondering how to return it to a previously working state (we've all been there). Have a documented history of changes over time. Support for branching. Also amazing for efficiency, and not so frequently used, are pre- and post-commit hooks for CI/CD.

Basic inline documentation. Who wrote it, when and why. What else did you consider. How does it differ from other solutions. Why is it designed the way it is. Brief history of design changes. What state did you reach in testing. What are the forward looking goals. Takes 5 minutes, pays for itself many times over in future. 90% of effort is maintenance.

Immutability. I don't use it all of the time. And sometimes I use bits of the knowledge I've picked up through playing with it. But encountering it made me think a lot more about how state is available through the systems I write, and that definitely made me a better coder.

Logger as an anonymous function.

I think literally every logger lets you log at a certain point as a single line that doesn't cover the context of the log.

  logger.info('Doing this and that', function () {
    if (maybe) do_that()
  })
This way you know exactly how much code the log message is supposed to cover, and you can also benchmark that context.

I can never bring myself to just add a single-line logger anymore.

This is much better than inserting comments in the code, since a comment has no defined scope beyond the line below it.

You can also add a variant that doesn't log anything, to serve as a structured replacement for comment syntax.
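A rough sketch of the idea in Python (the thread's snippet was JavaScript; the `log_scope` name and message formats here are made up):

```python
import logging
import time
from contextlib import contextmanager

logger = logging.getLogger(__name__)

@contextmanager
def log_scope(message, enabled=True):
    """Log entry and exit (with duration) of a block, so the log message
    unambiguously covers the indented code. With enabled=False nothing is
    logged and the call acts purely as a structured comment."""
    if enabled:
        logger.info("start: %s", message)
    start = time.perf_counter()
    try:
        yield
    finally:
        if enabled:
            logger.info("end: %s (%.3fs)", message, time.perf_counter() - start)
```

Used as `with log_scope('Doing this and that'): ...`, you can see at a glance exactly which lines the message describes, and you get a rough benchmark of that context for free.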

Not an active HN user. There doesn't seem to be a downvote function..

I guess you can't downvote with fewer than 500 karma points yourself.

You can certainly add a comment on how you don't like it.

State Machines.

Thankfully that was decades ago, which means I enjoyed the magic and bliss for most of my professional career.

Also interesting: I started using state machines in hardware designs before I applied them in software.
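A state machine doesn't need a framework; here's a sketch of the table-driven style, using the classic coin-operated turnstile example (not anything from this thread):

```python
# Transition table for the classic coin-operated turnstile:
# states and events are just strings, transitions are data.
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("unlocked", "push"): "locked",
}

def step(state, event):
    """Return the next state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "locked"
for event in ["push", "coin", "push"]:
    state = step(state, event)
```

Because the transitions are plain data, the same table can be reviewed, drawn as a diagram, or checked exhaustively, which is much the same way state machines are specified in hardware designs.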

Environment variables with fallback to a default value.

They're a super cheap way of

1. allowing feature flags

2. injecting credentials in a way the user thinks about exactly once

3. moving workstation-specific details out of your code repository

They're built into the core of almost every language in existence (especially shell scripts) and you're probably already using them without knowing. They're (get this) _variables_ for tuning to your _environment_.

Sounds like I'm being sarcastic here (eh, maybe a bit) but it never really hit me until I really dug into the concept.
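For example, in Python (the `MYAPP_*` variable names are invented for illustration), all three uses are one call each:

```python
import os

# 1. Feature flag: off by default unless the environment says otherwise.
DARK_MODE = os.environ.get("MYAPP_DARK_MODE", "0") == "1"

# 2. Credential: injected once via the environment, never committed.
API_TOKEN = os.environ.get("MYAPP_API_TOKEN", "")

# 3. Workstation-specific detail kept out of the repository, with a fallback.
DATA_DIR = os.environ.get("MYAPP_DATA_DIR", "/tmp/myapp")
```

The second argument to `os.environ.get` is the fallback, so the code runs unchanged on a machine where nothing is configured.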

Honestly, I used to be such a computer nerd, to a fault. So over the last quarter century of programming for money, I'd have to say the most transformative has been writing comments like my livelihood depends on it (since it does).

Other development practices that boosted my output significantly: Regular cardio exercise like running, strength training by lifting weights, and regularly reading source code for pleasure.

Re: watchexec, another two patterns are: (1) trigger processes using pre-commit hooks with your local RCS/VCS. (2) replicate core watchexec functionality using inotify-tools, fsnotify or similar. As you seem to be using go also consider https://github.com/fsnotify/fsnotify

Async-await in Rust has made asynchronous programming a joy. It's no wonder that programmers are refactoring as fast as they can to support it.

Not a programming technique, but a mindset. If you separate your ego from your code ("My code doesn't define my value") then it becomes much easier to look at your code objectively and understand others' criticism. This allows you to grow faster and more openly (in directions you may have opposed before).

Writing a basic test before starting a task, and use that to verify that the task is correct. I use the word task because this can apply outside of writing code, for example making some large scale changes to data. Never underestimate your ability to overestimate your ability. Reality is hard so make the machine help you.
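A toy sketch of the idea, with an invented data-cleanup task: the check is written first and defines what "correct" means before any changes are made:

```python
import re

# Hypothetical task: normalize phone numbers in a dataset.
def normalize(phone):
    return re.sub(r"\D", "", phone)

# Written BEFORE the task: defines what "correct" means (10 digits, nothing else).
def check(records):
    return all(re.fullmatch(r"\d{10}", r) for r in records)

raw = ["(555) 867-5309", "555-123-4567"]
cleaned = [normalize(r) for r in raw]
assert check(cleaned), "large-scale change violated the pre-written check"
```

The same pattern works for non-code tasks too: write the verification query or script first, run the migration, then let the machine confirm the result.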

Something not mentioned yet. I find programming is the task of last resort, and I spend a good deal of my time talking with the product person to help them refine their product idea into something that is mechanically sympathetic with how computers work and the general problem domain. It often adds 10% to the time for the initial work and saves 90% of the time on iterations.

For example once we had a project that wanted to add Role Based Access Control and some juniorish engineer suggested adding boolean columns to the table for each of the user's roles.. Nope. Instead we created a document store w/ roles and defining new roles was as simple as adding a const to a set in the service's code. Good thing too because the number of roles grew from the initially requested ~3 to almost 100. That would have been too wide of a table.

I would have used flags if roles weren't dynamic

in our case they were not only dynamic, but also managed by the administrator on an account (by our customers, not by us the software creator)

Not that hard, if there's something like a Tenant implementation.

Which would limit the scope of the "moderator" role explicitly to the tenant database.

Eg. https://martendb.io/documentation/documents/tenancy/

FYI: the OP, which I was responding to, was referring to using roles as consts. Which is not dynamic :)

1. Write notes; when notes can't support your thoughts, draw, scribble. Date every page. Revisit the notes every 2-3 days. Put a priority on each idea. Write notes when you're about to get off work, or before you go into a boring or interesting meeting, as chances are you might otherwise forget what you were thinking.

2. Don't get into the trap of refactoring just because you read about a new cool way of doing it, unless it reduces the code size by half or improves performance multi-fold. Otherwise you waste time you could have spent writing or planning a new feature.

3. Write very big, elaborate comments. They're for future you: you can't remember everything, and sometimes you need to know why you wrote a piece of code or a condition long after you've forgotten the context.

If you have to run your code to understand it.

Try again.

This is so true for beginning developers. They tend to have a mindset that once they run something and it appears to work, then that means they likely got it right.

In reality the code might be right for 90% of use cases, but once you multiply this over a whole project, you end up with total dogsh*t, mainly because of laziness. This is the biggest reason inexperienced developers create such horrendously buggy code.

Migrating from JavaScript to Typescript and declaring interfaces for classes and data structures forced me to put more thought into each piece before I start implementing. Now I’ve cleaned up a lot of old code I didn’t have much visibility over thanks to the compiler warnings.

Using static typing to make changes slightly bigger than intended.

But with a better architecture and nearly zero mistakes, through using OO correctly (please read the word correctly).

One step at a time.

PS: This is mostly for when part of a code-base is found to be bug-prone. E.g. from today: better handling of payment providers and payments.

I thought it was important enough to make the change bigger, but with the intention of removing recurring/similar errors in someone else's code-base.

Afterwards, go to them and explain why. Developers can be proud of their code even when it's shitty code (e.g. long functions, loops, if/else, switch, by-reference, ... are mostly examples of bad code), so never say it's shitty..

Off the top of my head:

- Reading books (e.g. Clean Code, Refactoring: Ruby Edition, Practical Object-Oriented Design, ... these left a mark)

- Going to conferences (Rubyist? Attend a talk by Sandi Metz if you have a chance)

- Testing (TDD at least Unit Testing)

- Yeah, the usual stuff: quick deployments, pull requests, CI/CD ...

- Preferring Composition over Inheritance (in general and except exceptions :)

- Keeping on asking myself: what is the one responsibility of this class/object/component?

- Spending a bit more time on naming things

- Have a side-project! It's a fun way of learning new things (my current one is https://www.ahoymaps.com/ - I could reuse many things that I learnt also in my 9-5)

FP, REPL Driven Development and generative testing.

In short, I have been doing Clojure since last year.

I stopped doing small commits. In the past, as soon as I was "done" with something I would commit, even if it was one line.

What would end up happening is that sometimes, later in the day, I would decide to revert or modify the change, so my commit history ended up flooded with a bunch of small commits.

I ended up writing a script that checks my current work, and if enough changes have accumulated or enough time has passed, then a commit is recommended:
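The commenter's actual script wasn't shown; its core decision might look something like this (the thresholds are invented, and the changed-line count could come from something like `git diff --numstat`):

```python
def commit_recommended(changed_lines, minutes_since_commit,
                       line_threshold=50, minute_threshold=60):
    """Recommend a commit once either enough lines have changed or
    enough time has passed since the last commit."""
    return (changed_lines >= line_threshold
            or minutes_since_commit >= minute_threshold)
```

A cron job or shell prompt hook could call this and print a reminder rather than committing automatically.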


You can have your cake and eat it too by initially generating a series of small commits and then going back later to reorder them and squash them into a much shorter series of logically coherent commits.

This makes it way easier for someone (possibly you) looking later to understand what was done and why.

(I once almost took a job where the repo was just a series of huge commits taken at more or less regular intervals, but with no logical coherence and no attempt to document what was happening. Noped right out of there.)

This is exactly what I do, with one addition: Sometimes I don't check everything in as I go because I'm doing a lot of experiments at high speed. However, I will locally do "junk" commits for trivia such as comment typos and short, obvious bugs that are noticed.

At this stage I don't worry much about the commit message, as it's purely local, and many of them will be squashed (discarding the message). So the trivial ones get one-liner messages like "WIP: Adder: Comment fix" or "WIP: Parser fix '...', needs testcase". "WIP" commits are a useful tag for me: They are never allowed to be pushed to a shared repo.

Then the "git add -p" stage.

When a task, feature or bug has been dealt with, I'll start using "git add -p" to separate out parts of files, and commit them as logically independent changes. At this stage, I don't mind separately committing even very scrappy little changes, such as comment typos, as separate commits because I will merge them later. The key is to separate out logically separate kinds of things in the delta from worktree to HEAD. During this I will usually find a few things that are untidy or comments that could be worded better, and add tiny commits for those changes.

The "git add -p" stage is a great time to get some perspective on what logical units were actually needed for the main task, and what else was refactored or fixed in passing, and this "pick up the pieces later" method frees me up to fix things and do small refactors without having to switch context while doing it.

Then the "git rebase -i" stage.

When the adding is done, "git rebase -i" to reorder into a sequence of logical, coherent and explainable changes. Ideally in an order where things still work if they are partially checked out in that order (bisectability), and squash together separate "git add -p" chunks that really must be one logical unit. Also, squashing trivial fixes such as typos in comments, whitespace etc into logical commits.

I may then go back and clean up and flesh out some of the commit messages before pushing the lot upstream for review. In practice there's usually no review of my work other than testing it, but this way others can at least read through the commit messages and patches to see what was done and how, and hopefully learn from it.

The above cycle is usually done about every 1-2 workdays, but it can be longer if there's a tough problem being debugged or a complex new feature (but for big new features a branch is more useful). If something turns out to be too big, though, it doesn't matter because I can always commit any work in progress to a new local branch or stash, and rewind back to a stable worktree.

The final, logically coherent and documented set of commits is very satisfying to push, and rarely contains "junk" commits or unexplained changes, even though what I commit locally at first is often like that. I guess it helps to know Git quite well.

I have the opposite problem, my commits are way too large and I'm trying to get in the habit of making more frequent commits as you can always squash them.

If I don't commit often enough, then when I come back to a certain part of the code, the changed-line markers in the editor no longer tell me which parts were stable and which weren't.

I can't think of a specific technique, but one thing that has been very useful has been exploring the use of fuzz-testing.

I'm pretty much adding fuzz-testing to all my current/new projects now. It doesn't make sense in all contexts, but anything that involves reading/parsing/filtering is ideal - you throw enough random input at your script/package/library and you're almost certain to find issues.

For example I wrote a BASIC interpreter, a couple of simple virtual-machines, and similar scripting-tools recently. Throwing random input at them found crashes almost immediately.
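A bare-bones version of the idea in Python, fuzzing a toy parser with random strings (dedicated fuzzers such as AFL are far smarter about input generation, but even this naive loop shakes out surprises in real parsers):

```python
import random
import string

def parse_int_list(text):
    """Toy parser under test: comma-separated integers, e.g. "1,2,3"."""
    return [int(part) for part in text.split(",")]

random.seed(0)  # deterministic run for the example
unexpected = 0
for _ in range(1000):
    garbage = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    try:
        parse_int_list(garbage)
    except ValueError:
        pass  # the documented failure mode; fine
    except Exception:
        unexpected += 1  # any other exception is a bug worth investigating
```

The pattern is just: generate random input, feed it in, and treat anything other than the documented failure mode as a finding.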

- Letting code crash early and hard; deciding which parts of the code I never try to handle

- Dependency Injection (object-oriented or functional); controlling when things get created vs. used

- Accepting some level of redundancy in code; using code to configure things (wrappers, adapters, factories) rather than trying to abstract everything out from the start

- Figuring out which of the parts I could test I actually should test; being fine with just coverage

- Writing a CLI for every project

- Learning how to transform code without breaking it over long periods of time

The most transformative experience was learning how to retain information better following the Coursera course: Learning How to Learn (https://www.coursera.org/learn/learning-how-to-learn)

I'd not always been the best learner, and this taught me how to learn more effectively, while at the same time accepting the limitations of the brain.

It led me to find the Anki app and combined with more effective learning, it positively affected my ability to be a better programmer.

Voraciously eliminating unnecessary whitespace, especially vertical whitespace, was a trick I learned early. Most monitors are wide, not tall. When you can see a whole function without scrolling it's much easier to understand. I'll collapse 'if' statements to a single line in most cases to make it readable left-right instead of up-down. 1TBS (one-true-brace-style) bracketing is my default unless style guidelines prohibit it. End-of-line comments make it easier to associate functionality with comments and keep code tidy.

I personally hate one-liners. I used to love them and feel good about how neat the code looked.

After some time I've noticed that I read the code much easier and understand flow when the indentation is similar to python or go.

Also, I keep my editor on a vertical half of the screen.

I no longer put much on a line, even if it means adding a single-use variable.

Monitors may be wide, but your eyes won't change; moving your eyes left and right (or worse, your neck) is tiring, while vertical code is easier to follow because the code is what moves instead of you.

Same as the sibling comment, but I keep my apps' window size at about 70% of the display's width, anchored toward the right, so I don't have to look way to the left when pretty much everything is aligned to the left side. This brings most of the text, be it code or a web page, close to the middle, which is easier on your neck.

I’d argue you’re better off just getting two vertical monitors. Keeps the code easy to read (can see a lot more of it) without the negatives from one liners.

A lot of really good answers in this thread. I had a great mentor years ago and she really taught me the importance of working cleanly and professionally. Proper code formatting, checking your comments for spelling mistakes, indentation, auto formatting on save, linting, writing solid documentation, rebasing and tidying up your commits to create clear steps that a reviewer can step through. All of the things that often get overlooked but are very important in how you come across as a professional developer.

Not mentioned yet

1) Start with consistent naming conventions, notation and structures

2) Rely on autocomplete to simplify typing/readability/programming effort. It’s a tremendous force multiplier.


3) TDD

4) Functional programming

Consistent naming does not mean “use the same word for all similar things”. It means using one word every time you have the same thing.

Words stop meaning anything when you use them for everything.

Use a thesaurus. Naming everything “handler” or “node” is just naming them “function” and “object”.

Touch typing. There was a huge difference in speed between my brain and my fingers. Once removed, it was much easier to make the code flow.

You realize you never need anything printed on the keyboard; that's what I've used for 10 years now.

Good way to make non programmers laugh too.

A divide and conquer approach to complexity; which is turning the complex idea/code into several loosely coupled simple building blocks.

I used to just write code fast and forget about it, only slightly caring about the finished product. After having to build and support 2 fairly large applications (for use on a company intranet), I now fully plan, document and test my software. No more clever code, no more quick hacks. I also gave up on JavaScript and Node.js, and I mostly do C# now because of type safety.

Caveat: sometimes you just want to finish something. Specially when prototyping or creating an MVP, it's important to have a hard focus on the end goal.

From iOS perspective changing to clean architecture using VIPER has made our code testable and easier to reason about.

We enforce the architecture as much as possible through protocol conformance, which doesn’t always work, but works extraordinarily well for “Views” or “Scenes”.

It’s taken so much stress out of the development lifecycle for our team.

A surprising one is to avoid all kinds of jumps (early returns, breaks and continues).

A broader one: writing expressions as much as possible. Basically, it means avoiding unnecessary mutations (and jumps).

Then avoiding architecture: thinking in algorithms that process data (instead of "systems") has been transformative.
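A small illustration of the shift from statements-with-jumps to a single expression (a Python example invented for this comment, not from the thread):

```python
from collections import namedtuple

Order = namedtuple("Order", ["amount", "cancelled"])
orders = [Order(10, False), Order(99, True), Order(5, False)]

# Statement style: a mutation (append) plus a jump (continue).
totals = []
for order in orders:
    if order.cancelled:
        continue
    totals.append(order.amount)

# Expression style: the same result as one expression, no mutation or jumps.
totals_expr = [o.amount for o in orders if not o.cancelled]
```

Both produce the same list, but the expression form states the result directly rather than describing how to accumulate it.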

Why avoid early returns? If something doesn't match a condition to process anything in the function, then it's cognitively better to return early and say that case should be left out before possibly doing anything weird in it.

Being full stack in mind. You don't need to have full stack skill, but you should have experience working on frontend, backend and Dev/Ops. This helps with how you design the API between frontend and backend, and how you implement your system to ease maintenance.

1. It's better to solidly implement 8 features than to partially implement 10.

2. If you can, step through any new code with the debugger. You may find the execution path isn't what you thought it was.

I program slowly now, and only program in my head and on paper. Only when I am 100% sure that I have something I can add as a small step to a larger working goal do I touch the code

Using "this." for object-scope identifiers and "ClassName." for class-level identifiers.

Greatly improved readability, even though there was a short adjustment period.
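The convention comes from languages with a `this.` keyword (e.g. Java/C#); a rough Python analogue (example invented), where `self.` vs `ClassName.` makes the scope of each identifier explicit:

```python
class Counter:
    total_instances = 0  # class-level state, shared by all instances

    def __init__(self):
        # "ClassName." makes it obvious this mutates shared class state...
        Counter.total_instances += 1
        # ...while "self." marks per-object state.
        self.count = 0

a, b = Counter(), Counter()
```

At every use site a reader immediately knows whether an identifier is per-object or shared across the class.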

Learning generics and the Streams API in Java. There's a clear improvement in expressiveness, and they minimize a lot of duplication and visual clutter.

Functional programming with functions as first-class-citizens. And related: asynchronous programming. Both with Node.js, a couple of years ago.

using data structures and parsing techniques over "convoluted" algorithms ( pretty much discussed here: https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va... ).
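A minimal Python sketch of the linked idea ("parse, don't validate"): turn untrusted input into a typed value once, at the boundary, instead of re-validating a raw dict everywhere (the `User` type and fields are invented):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    name: str
    age: int

def parse_user(data):
    """Reject bad input once, at the boundary; everything downstream
    receives a User and never needs to re-check it."""
    name = data["name"]
    age = int(data["age"])
    if not name or age < 0:
        raise ValueError(f"invalid user: {data!r}")
    return User(name=name, age=age)
```

Once a `User` exists, its invariants hold by construction, so the "convoluted" defensive checks disappear from the rest of the code.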

1. Shifting from solo/modular to team software development

2. Figuring out unit and integrations tests

3. Embracing clean code paradigm and SOLID architecture

Entity component system as a replacement for OOP. Don't even bother writing a complex app without it.
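A deliberately tiny sketch of the ECS idea (nothing like a real ECS library's API; names invented): entities are just ids, components are plain data in per-type stores, and systems iterate over entities that have the components they need:

```python
positions = {}   # entity id -> (x, y) component
velocities = {}  # entity id -> (dx, dy) component

def movement_system():
    """Operate on every entity that has BOTH a position and a velocity."""
    for eid in positions.keys() & velocities.keys():
        x, y = positions[eid]
        dx, dy = velocities[eid]
        positions[eid] = (x + dx, y + dy)

# Entity 1 moves; entity 2 is static scenery (no velocity component).
positions[1] = (0, 0); velocities[1] = (1, 2)
positions[2] = (5, 5)
movement_system()
```

Behavior comes from which components an entity carries, not from a class hierarchy, which is the core contrast with classical OOP.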

TDD and clean architecture for me.

TDD for me. It's frowned upon but has helped me craft elegant code with fewer bugs.

KISS + YAGNI + constant refactor. Code is a means to an end and I suck at futurology.

Metaprogramming in Common Lisp.

JSIO - just shit it out. Stop worrying about it being perfect and get it done

Learning ReactiveX. I've never written classical imperative code since.

Start with explicit requirements for the system you are building. So obvious, but so neglected! I blame the typical belief in one's own omniscience.

Two things: 1) leading commas in SQL, 2) RAISE an EXCEPTION or RETURN -- no more `else` statements in code.

  SELECT this
       , that
       , something
  FROM table

takes a while to get used to

100% this. Nothing is more aggravating than having to go through someone else's SQL mess and reformat it with commas in front. Why, you ask? First, it's trivial to comment a value out for testing/debugging etc., but secondly, when you're reading through a large query, it is far more difficult to scan for which fields are included, aliased, or incorporated into calculations or functions. This ties into the longer-code, less-clever mantra noted earlier in the thread.

The else comment I get, but I still use it, typically set to a code or value that will push the error further up the stream. We do a lot of system-to-system integration, so often the else branch arises from missing setup info in the source or destination system; surfacing it later can (depending on the use case, of course) sometimes make it easier for end users to self-fix the problem by updating said source or destination system.

What is the point of selecting specific columns, unless some include large text or blob values that consume unnecessary IO?

Here's a few reasons why when you finalize a query, you select only the necessary columns (non-exhaustive list):

- Reduce unnecessary IO / resource strain, as you said

- Predictability; consuming program receives data in the same order, every time

- If the DBA adds additional columns to the table, it doesn't hose downstream consumer processes

- Easier to debug if some problem does arise.

- Clarity, if you are only using a few columns from a large table. I might just be getting old though ;)

Data should be served in the simplest, most robust manner.

It should be easily consumed by other services, with little extra effort from the programmer.

If you use Select *, you either accept the possibility of it breaking unexpectedly, or have to write logic within the consuming service to deal with that. If the problem can be eliminated by the DB/query itself, it should be.

I understand how listing columns conveys the programmer's intention in the query, but I'd rather just select all, so that code further down doesn't suddenly break because you forgot to sync the fetched columns.

> - Predictability; consuming program receives data in the same order, every time

For any joined table, you can specify the table name for each '*', or add an alias last to keep the column you need, instead of writing them all out.

Seems specifying each column name isn't in any way critically bad, given that you can't select all-but-one column in SQL, which is a deficiency in the language.

Starting to use map, reduce, filter on Arrays, and arrow functions.

Learning Doxygen

