How I Program in 2024 (akkartik.name)
317 points by surprisetalk 43 days ago | 267 comments



When you have no tests your problems go away because you don’t see any test failures.

Never have I tested anything and NOT found a bug, and most things I tested I thought were already OK to ship.

Thus, when you delete your tests, the only person you are fooling is probably yourself :(

From reading your page I get the impression you are more burnt out from variation/configuration management, which is completely relatable… I am too. This is a hard problem. But user volume is required to make $$. If the problem were easy, the market would be saturated with one-size-fits-all solutions for everything.


I think this is highly domain dependent. I'm currently working on a codebase where the tests for one part are an incredibly useful tool for refactoring that particular part. Other parts are so much UI behavior that it is significantly faster to catch bugs by manual testing, because the UI/workflow either changes so fast that you don't write tests for it (knowing they'll be useless when the workflow is redesigned in the next iteration) or changes so slowly that it just doesn't get touched again, so no refactors happen to it to introduce more bugs.

I have never found tests to be universally necessary or helpful (just like types). They are a tool for a job, not a holy grail. I have also never met a codebase that had good test coverage and yet was free of bugs that aren’t then found with either manual testing or usage.

Somewhat hyperbolically and sarcastically: if you are good enough to write perfect tests for your code, just write perfect code. If you aren’t perfect at writing tests, how do you know the tests are complete, bug free, and actually useful? :)


> if you are good enough to write perfect tests for your code, just write perfect code. If you aren’t perfect at writing tests, how do you know the tests are complete, bug free, and actually useful?

This sentence makes no sense. Tests are infinitely more straightforward than code. I always go back to my dad's work as a winder before he retired:

After repairing a generator, they'd test that it could handle the current it was expected to take by putting it on a platform and... running electricity through it. They'd occasionally melt all the wiring on the generator and have to rewind it.

By your logic, since they weren't "good enough" to fix it perfectly, how could they know their test even worked? Should they have just shipped the generator back to the customer without testing it?


No, they often aren't, and UI can be complex to test.


> if you are good enough to write perfect tests for your code, just write perfect code. If you aren’t perfect at writing tests, how do you know the tests are complete, bug free, and actually useful?

This was the original quote. Just because one aspect of a system is difficult to test doesn't make this quote true.

> No, they often aren't, and UI can be complex to test.

It can be but I've seen different ways to get confidence through testing. Just because it's complex doesn't mean we shouldn't do things to improve our confidence that what we're building works.

Go back to my example above: I guarantee that at some point a repair was tested, returned to the customer, and still failed after it was fitted. Why? The fault was elsewhere. But testing gave him confidence the generator wasn't the source of the problem.


IMO if your implementation is that unstable (you mentioned the UI/workflow changes fast) it isn't worth writing a test for it, but also, I don't think it should be released to end-users, because (and this is making a big assumption, granted) it sounds like the product is trying to figure out what it wants to be.

I am a proponent of having the UI/UX design of a feature be done before development gets started on it. In an ideal XP/agile environment the designers and developers work closely together and constantly iterate, but in practice there are so many variables involved in UX design and so many parties that have an opinion, that it'll be a moving target in that case, which makes development work (and automated tests) an exercise in rework.


I think there's a great balance here in these environments:

- write tests as part of figuring out the implementation (basically: automate the random clicking you're doing to test things anyways)

- Make these tests really loose

- Just check them in

Being unprecious about test coverage means you just write a handful of "don't blow up" tests for features, that help you get the ball rolling and establish at least a baseline of functionality, without really getting in the way.
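A minimal sketch of what such a "don't blow up" test might look like in Python with pytest; the render function and routes here are invented stand-ins for whatever entry point the real app exposes:

  import pytest

  ROUTES = ["/", "/settings", "/export"]

  def render(route):
      # stand-in for the real app's entry point
      return f"<html>page for {route}</html>"

  @pytest.mark.parametrize("route", ROUTES)
  def test_does_not_blow_up(route):
      # deliberately loose: no exception and some output is all we ask
      assert render(route)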


Chiming in as an end-user of software: please try to minimize the number of times I need to re-learn the user interface you put in front of me.


Aaaaaaand I replied to the wrong comment, mea culpa!


I agree with you that the ideal is to have UI/UX work resolved before starting dev work.

In my experience, this has never happened. I’ve moved around hoping that somewhere, leadership has fixed this problem and nope. It never happens.

There are just too many unknowns and never enough time for design to stabilize. It’s always a mad dash to collect whatever user info you can before you slap together a final mock-up of the interface and expected behaviour.


> Somewhat hyperbolically and sarcastically: if you are good enough to write perfect tests for your code, just write perfect code. If you aren’t perfect at writing tests, how do you know the tests are complete, bug free, and actually useful? :)

Well obviously, you just write tests for the tests. :3

It's called induction.


Quis testet ipsos tests?


It's actually called mutation testing. And luckily it's almost fully automated.
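Roughly: a mutation testing tool makes small edits to the code under test (flip an operator, delete a branch) and re-runs the suite; if no test fails, the suite has a blind spot. A hand-rolled Python sketch of what tools like mutmut automate (the clamp function is just an invented example):

  def clamp(x, lo, hi):
      return max(lo, min(x, hi))

  def clamp_mutant(x, lo, hi):
      return max(lo, max(x, hi))    # the tool's edit: min -> max

  def run_suite(f):
      assert f(5, 0, 10) == 5       # this assertion already kills the mutant
      assert f(-3, 0, 10) == 0
      assert f(42, 0, 10) == 10

  run_suite(clamp)                  # the real code passes
  try:
      run_suite(clamp_mutant)       # must fail, or the suite is too weak
  except AssertionError:
      print("mutant killed: the tests are doing real work")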


> Well obviously, you just write tests for the tests. :3

I had a friend whose first job out of school (many years ago) was part of a team at Intel assigned to write tests for their testing tools. When it matters enough economically, it will happen. As you can see from recent news, that is still not enough to guarantee a good result.


> if you are good enough to write perfect tests for your code, just write perfect code.

I have yet to see anyone claim they write perfect tests.

> If you aren’t perfect at writing tests, how do you know the tests are complete, bug free,

I never claimed to produce, nor have I seen, complete tests. I never claimed to write, nor have I seen, bug-free tests.

> and actually useful? :)

I know that whenever I fix something or refactor, a test fails and I find a bug in the code. I know that when we do not have have the same bag again and then again the same bug and again the same bug.

I know that testers' time is saved when they don't have to test repetitive basic stuff anymore and can focus on more complicated stuff.


watwut wrote:

> I know that when we do not have have the same bag again and then again the same bug and again the same bug.

Well, username checks out :-)


Types are there to guard against human error and reduce the amount of code we need to write.

Tests exist to guarantee functionality and increase productivity (by ensuring intended functionality remains as we refactor/change our code/UI).

There may be cases where some tests are too expensive to write, but I have never come across this myself. For example, in functional tests you would try to find a robust way to identify elements regardless of future changes to that UI. If your UI changes so much between iterations that this is impossible, it sounds like you need to consider the layout a little more before building anything. I'm saying that based on experience, having been involved in several projects where this was a problem.

Having said that, I'm horrible at writing tests for UI myself, an area I'm trying to improve; it really bothers me :)


Tests can be expensive to write if they are an afterthought, and/or the code is not written in a way that is easy to test.

UI tests can be cheap but they require some experience in knowing how to write a testable UI. One way of achieving that is writing them as early as possible, of course. Which is not always possible :/


> the code is not written in way that is easy to test

Which isn't devoid of downsides either


That's a good point. Sometimes more ergonomic APIs can be harder to test.


> the UI/workflow either changes so fast that you don’t write tests for it

This is my number one pet peeve in software. Every aspect of every interface is subject to change always; not to mention the bonanza of dickbars and other dark patterns. Interfaces are a minefield of "operator error" but really it's an operational error.


People are building multimodal transformers that try to simulate users.

No matter how stupid the AI, if it can break your code, you have a bug.


Tests are just a way of providing evidence that your software does what it's supposed to. If you're not providing evidence, you're just saying "trust me, I'm a programmer."

Think back to grade school math class and your teacher has given you a question about trains with the requirement "show your work." Now, I know a lot of kids will complain about that requirement and just give the answer because "I did it in my head" or something. They fail. Here's the fact: the teacher already knows the trains will meet in Peoria at 12:15. What they're looking for is evidence that you have learned the lesson of how to solve a certain class of problems using the method taught.

If you're a professional software developer, it is often necessary to provide evidence of correctness of your code. In a world where dollars or even human lives are on the line, arrogance is rarely a successful defense in a court of law.


Not quite. Tests are just a way to document your software's behaviour, mostly so that future people (including future you) working with the software know what the software is intended to do – to not leave them to guess based on observation of how undefined behaviour plays out.

That the documentation is self-validating is merely icing on the cake.


That's a beneficial side effect, not the purpose of having tests.

It's like saying the purpose of eating is to fill your senses with wonderful flavours and texture. No, that's a beneficial side effect: the purpose of eating is to prevent death from starvation.


Indeed. You could write the documentation in Word instead, but when the "herbs and spices" come for free...


> If you aren’t perfect at writing tests, how do you know the tests are complete, bug free, and actually useful? :)

I did like the rest of the post, but this is not hyperbole. It's just a disingenuous argument, and one that looks orthogonal to your point that "tests are a tool for a job".

If you aren't perfect at magnetizing iron, and you need a working compass, you better magnetize two needles and use one to test the other. The worse you are at magnetizing iron, the more important it is that you do this if you want to end up with a working compass.


> If you aren't perfect at magnetizing iron, and you need a working compass, you better magnetize two needles and use one to test the other. The worse you are at magnetizing iron, the more important it is that you do this if you want to end up with a working compass.

This is modern testing in a nutshell - it's ineffective but the author of the test can't actually tell that!

Using this analogy, if you created 10 magnetised needles using the wrong process and getting the wrong result, then all 10 would agree with each other and your test passes, but your needle is still broken.


I don't think you understand how magnets work.

Hint: if you think the way to test whether a needle is magnetized using another possibly magnetized needle is by building both needles into two separate compasses, you're nowhere close.


> Hint: if you think the way to test whether a needle is magnetized using another magnetized needle is by building both needles into two separate compasses, you're nowhere close.

I thought it was clear from my post that I do not think this.

I also think you are missing the point.


You wrote:

> if you created 10 magnetised needles using the wrong process and getting the wrong result, then all 10 would agree with each other and your test passes

This suggests that you do think something like this. Again, the way you test whether you successfully magnetized a needle using another potentially magnetized needle is not by checking whether they "agree with each other" in the end application.


> This suggests that you do think soemthing like this.

Or it suggests they’re continuing the analogy (which isn’t perfect) to make a different point.

> Again, the way you test (…) is not (…)

Twice you’ve spent the majority of words in your post telling someone they’re wrong without explaining the correct methodology. That does not advance the conversation, it’s the equivalent of saying “nuh-uh” and leaving. If you disagree, it’s good form to explain why.

It doesn’t take long to say the failed magnetisation would leave all needles pointing in disparate directions, not the same consistent wrong direction. Unless there’s something else in your test that is so strong and wrong that it causes that problem, in which case the analogy starts working again.


> You wrote:

>> if you created 10 magnetised needles using the wrong process and getting the wrong result, then all 10 would agree with each other and your test passes

You snipped out the qualifier to the paragraph. The qualifier is important. Here's the full quote with the qualifier:

> Using this analogy, if you created 10 magnetised needles using the wrong process and getting the wrong result, then all 10 would agree with each other and your test passes, but your needle is still broken.

IOW, using this analogy for software development, all the products created with the wrong algorithm and/or process would all agree with each other. It's why I say you are missing the point.

You're not creating a binary product that either exists or doesn't exist, like magnetism, so repeating the process as some sort of "test" is a broken way to test.

It's also the most popular way to write tests: the tests are effectively moot because a pass does not indicate that the result is correct.


I don't get this analogy.

Apart from the fact that in your example the product is validated using the exact same product, you are actually describing the perfect test:

Two magnetized needles will validate each other, so both the product (needle #1) and the test setup (needle #2) will be confirmed as valid in one step. If one is not working, the other will self-validate by pointing at the earth's magnetic field instead...


The problem with using two needles to test each other (instead of using an external third source like the earth's magnetic field) is North and South could be swapped on both of them. The test would validate correct but the needles be wrong.


I feel like our industry kinda went the wrong way wrt UI frontend tests.

It should be much less focused on unit testing and more about flow and state representation, both of which can only be tested visually. And if a flow or state representation changed, that should equate to a simple warning which automatically approves the new representation as the default.

So a good testing framework would make it trivial to mock the API responses to create such a flow, and then automatically do a visual regression of the process.

Cypress component tests do some of this, but it's still a lackluster developer experience, honestly

This is specifically about UI frontend tests. Code that doesn't end up in the DOM is great for unit tests.
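For what it's worth, a rough sketch of that flow using Playwright's Python API; the app URL, endpoint, and JSON body are made up, and a real setup would diff images with a tolerance rather than comparing raw bytes:

  from pathlib import Path
  from playwright.sync_api import sync_playwright

  GOLDEN = Path("goldens/preferences.png")

  with sync_playwright() as p:
      page = p.chromium.launch().new_page()
      # mock the API response so the flow under test is deterministic
      page.route("**/api/preferences", lambda route: route.fulfill(
          status=200, content_type="application/json",
          body='{"theme": "dark"}'))
      page.goto("http://localhost:3000/preferences")
      shot = page.screenshot()
      if GOLDEN.exists() and GOLDEN.read_bytes() != shot:
          print("warning: flow changed; approving new screenshot as default")
      GOLDEN.parent.mkdir(exist_ok=True)
      GOLDEN.write_bytes(shot)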


> When you have no tests your problems go away because you don’t see any test failures.

The flip side of this is the quote that "tests can show the presence of bugs, but never their absence". It better fits my experience here; every few months I'd find a new bug and diligently write a test for it. But then there was a new bug in a few months, discovered by someone in the first 10 minutes of using it.

I'm sure I have bugs to discover in the new version. But the data structures I chose make many of the old tests obsolete by construction. So I'm hopeful that I'm a few bugs away from something fairly stable at least for idle use.

Tests are definitely invaluable for a large team constantly making changes to a codebase. But here I'm trying to build something with a frozen feature set.


If your tests break or go away when your implementation changes, aren’t those bad tests by definition?


A lot of tests don't survive implementation changes, that doesn't make them "bad tests by definition". It means their value came and went. Think of it like scaffolding. You need it for a time, then the time is up, and it's removed. That doesn't make it bad, it was still necessary (or at least useful) for a time.

When there's an implementation change you'll likely end up discarding a fair number of unit tests and creating new ones that reflect the new implementation details. That's just natural.


A lot of tests, especially unit tests, are just change detectors and get updated or go away when change happens; that is just WAI (working as intended). It is fairly hard to write non-change-detection tests: it requires you to really reason abstractly about the contract of your module, or to write integration tests that move a bunch of things at once.


small, fine-grained black box tests can be really good for this. in my last project, a type checker, the vast majority of the test suite was code snippets and assertions about expected errors the checker needed to catch, and it was an invaluable aid when making complex changes to the implementation.


Anything that transforms or processes text, like a compiler or type checker, is pretty easy to test. You get into trouble with user interfaces, however.


If that is the case too often, I ditch them and write integration tests for that part.


Yeah, especially when you're exploring new ground.

Unit tests are awesome for fleshing out APIs; but once the fundamentals are in place, the tests no longer add any value.


I have two answers:

1. Yes. To the same extent that we are all bad people by definition, made of base material and unworthy urges.

I'd love to have better programmers show me how I can make my tests better. The code is out there.

2. Even if I have good tests "by definition", a radical rewrite might make old tests look like "assert(2x1 == 2), assert (2x2 == 4)". Tests exist in a context, and radically changing the context can change the tests you need.

---

This is not in OP, but I do also have a problem of brittle tests in my editor. In this case I need to test a word-wrapping algorithm. This depends intimately on pixel-precise details of the font. I'd love for better programmers than me to suggest how I can write tests that are robust and also self-evidently correct without magic constants that don't communicate anything to the reader. "Failure: 'x' started at x=67 rather than x=68." Reader's thought: "Why is this a problem?" etc. Comments appreciated on https://git.sr.ht/~akkartik/lines.love/tree/main/item/text_t.... The summary at https://git.sr.ht/~akkartik/lines.love/tree/main/item/text_t... might help orient readers.


>> If your tests break or go away when your implementation changes, aren’t those bad tests by definition?

> 1. Yes. To the same extent that we are all bad people by definition, made of base material and unworthy urges.

Good and bad are forms of judgement, so let's eschew judgement for the purposes of this reply :-).

> I'd love to have better programmers show me how I can make my tests better.

Better is also a form of judgement and, so, I will not claim I am or am not. What I will claim to do is offer my perspective regarding:

> This is not in OP, but I do also have a problem of brittle tests in my editor.

Unfortunately, brittle tests are the result of being overly specific. This is usually due to tests enforcing implementation knowledge instead of verifying a usage contract. The example assertions above are good examples of this (consider "assert (canMultiply ...)" as a conceptual alternative). What helps mitigate this situation is use of key abstractions relevant to the problem domain along with insulating implementation logic (note that this is not the same as encapsulation, as insulation makes the implementation opaque to collaborators).
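To make this concrete, consider a toy Python sketch (the Cart class is invented): the first test enforces implementation knowledge and breaks on any internal refactor, while the second verifies the usage contract and survives it:

  class Cart:
      def __init__(self):
          self._items = []          # implementation detail

      def add(self, price):
          self._items.append(price)

      def total(self):
          return sum(self._items)

  def test_brittle():
      cart = Cart()
      cart.add(3)
      assert cart._items == [3]     # breaks if storage becomes a dict

  def test_contract():
      cart = Cart()
      cart.add(3)
      cart.add(4)
      assert cart.total() == 7      # survives any internal refactor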

In your post, you posit:

> Types, abstractions, tests, versions, state machines, immutability, formal analysis, all these are tools available to us in unfamiliar terrain.

I suggest they serve a purpose beyond when "in unfamiliar terrain." Specifically, these tools provide confidence in system correctness in the presence of change. They also allow people to reason about the nature of a system, including your future-self.

Perhaps most relevant to "brittle tests" are the first two you enumerated - types and abstractions. Having them can allow test suites to be defined against the public contract they provide. And as you rightly point out in your post, having the wrong ones can lead to problems.

The trick is, when incorrect types and/or abstractions are identified, this presents an opportunity to refine understanding of the problem domain and improve key abstractions/collaborations accordingly. Functional testing[0] is really handy to do this fairly rapidly when employed early and often.

HTH

0 - https://en.wikipedia.org/wiki/Functional_testing


Automated tests ideally don't entirely replace manually executed tests. What they do replace is repetitive regression tests that don't need to be executed manually.

In an ideal world this opens up room for exploratory testing where someone goes "off-script" and focuses specifically on those areas that are not covered by your automated tests.

The thing is that automated tests aren't really tests, even though we call them that. They are automated checks at specified points, so they only check the outcome at those points in time. So yeah, they are also completely blind to the sort of thing a human* might easily spot while using the application.

*Just to be ahead of the AI bros, we are not there yet, hold your horses.


I watched a video by Russ Cox that was recommended in a recent thread, Go Testing By Example:

https://www.youtube.com/watch?v=X4rxi9jStLo

There's _a lot_ of useful advice in there. But what I wanted to mention specifically is this:

One of the things he's saying is that you can sometimes test against a simpler (let's say brute force) implementation that is easier to verify than what you want to test.

There's a deeper wisdom implied in there:

The usefulness of tests is dependent on the simplicity of their implementation relative to the simplicity of the implementation of what they are testing.

Or said more strongly, tests are only useful if they are simpler than what they test. No matter how many tests are written, in the end we need to reason about code. Something being a "test", doesn't necessarily imply anything useful by itself.
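A sketch of that idea in Python (the talk uses Go; my_unique is an invented stand-in for the clever code under test):

  import random

  def my_unique(xs):                # the "clever" implementation under test
      seen, out = set(), []
      for x in xs:
          if x not in seen:
              seen.add(x)
              out.append(x)
      return out

  def brute_force_unique(xs):       # slow, but easy to verify by eye
      return [x for i, x in enumerate(xs) if x not in xs[:i]]

  def test_against_oracle():
      for _ in range(1000):
          xs = [random.randint(0, 9) for _ in range(random.randint(0, 20))]
          assert my_unique(xs) == brute_force_unique(xs)

The test stays useful precisely because the oracle is simpler than the implementation.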

This is why I think a lot of programmers are wary of:

- Splitting up functions into pieces, which don't represent a useful interface, just so the tests are easier to write.

- Testing simple/trivial functions (helpers, small queries etc.) just for coverage. The tests are not any simpler than these functions.

- Dependency inversion and mocking, especially if they introduce abstractions just in order to write those tests.

I don't think of those things in absolute terms though, one can have reasons for each. The point is to not lose the plot.


> Never have I tested anything and NOT found a bug, and most things I tested I thought were already OK to ship.

I have found that, in my own case, every time I’ve written a unit test, it has exposed bugs.

I don’t usually do the TDD thing, where I write failing tests first (but I do it, occasionally), so these tests are usually against code that I already think works.

That said, I generally prefer test harnesses to unit tests[0]. They still find bugs, but the workflow is less straightforward. They also cause me to do more testing, as I develop, so the bugs are fixed in situ, so to speak.

[0] https://littlegreenviper.com/testing-harness-vs-unit/


> That said, I generally prefer test harnesses to unit tests[0].

That's a strange redefinition of harness.

The larger-scoped tests are more often called integration or even system tests.

And while I'm here, those are slow tests that are harder to debug and require more maintenance (often maintenance of an entire environment to run them in!). Unit tests are closer to what they test, fast, and aren't tied to an environment - they can be run on every push.


Not “strange,” in my opinion. Back in The Day, we called what people insist are “test harnesses,” “unit tests.”

But these days, the term “unit test” has morphed into a particular configuration.


The focus on automated unit/integration tests is a relatively modern thing (late 90s?). There was some pretty large and extremely reliable software shipped before that focus. A random example: the Linux kernel didn't have many tests (I think these days there is more testing). Unix likely didn't have a lot of "tests". Compilers tended to have them. Operating systems less so. Games (e.g. I'm sure Doom) didn't tend to have tests.

You need to find a balance point.

I think we know that (some) automated tests (unit, integration, end to end) can help build quality software. We also know good tests aren't always easy to write, bad tests make for harder refactoring, and flaky tests can suck up a lot of time on large projects. At the same time it's always interesting to try different things and find out what works, especially if you're a solo developer.


> A random example: the Linux kernel didn't have many tests (I think these days there is more testing).

As the author of many of Linux’s x86 tests: many of those tests would fail on old kernels, and a decent number of those failures are related to very severe bugs. Linux has worked well for many years, but working well didn’t mean it wasn’t buggy.


As was said in another comment, tests don't prove the absence of bugs. There is no software beyond a certain complexity without bugs.

Working is something ;) Lots of software barely does that and there is certainly plenty of software with tests that doesn't meet the no-test Linux quality bar.

That said, tests certainly have their place in the world of software quality, so thanks for your work!


Most video games have a full team of QA testers doing functional testing on the games as they go along.

Same thing for the kernel, plus some versions are fully certified for various contexts, so you can be sure fully formalised test suites exist. And that's on top of all the testing tools which are provided (KUnit, tests from user space, an array of dynamic and static testing tools).

But I would like to thank all the people here who think testing is useless for their attitude. You make my job easier while hiring.


> But I would like to thank all the people here who think testing is useless for their attitude. You make my job easier while hiring.

That's fine.

I've never written a test in my life. Have my programs ever had bugs? Sure. But I sleep very well at night knowing that I spent all my brain power and time writing actual code that Does Useful Work rather than have wasted significant lengths of my time on this planet on writing test code to test the code that does the Useful Work.

You speak of attitude and smugly "thank" those who don't write tests as that acts as your hire-or-not filter. With an attitude like that, I'd 100% not work for anyone with that attitude anyway.


> I've never written a test in my life. Have my programs ever had bugs? Sure. But I sleep very well at night knowing that I spent all my brain power and time writing actual code that Does Useful Work rather than have wasted significant lengths of my time on this planet on writing test code to test the code that does the Useful Work.

And that’s why I never want to have to work with you on anything shipping to a user ever.

Don’t get me wrong, the field is riddled with people who think testing is beneath them and wash their hands of the quality of what they ship and what they put their users through. That’s an issue to fix, not a situation we should tolerate.


> Don’t get me wrong, the field is riddled with people who think testing is beneath them and wash their hands of the quality of what they ship and what they put their users through. That’s an issue to fix, not a situation we should tolerate.

See, this is my point. It's not that testing is beside me, it's that my stuff gets tested anyway.

Here's the test: Does it fucking work or not?

You do that by running the thing. If it explodes, find out why and fix it. Job done. No thought or line of code was wasted in writing tests, all brain power was used to initially write a piece of code - which initially had a bug of course - and then said bug was fixed.

My code gets tested. By people using it. Or by me testing it as I write it ("does it fucking work").

There is really only one test.

You can choose to expend your brainpower and time on this planet writing code that will never actually be run by an end-user, or you can just write the fucking code that the end-user will run. That's how I work. Write it and run it. That's the test.

Test code written to test Useful Working Code is time wasted. It's like putting stabiliser wheels on bicycles - you're either gonna be stuck forever riding a bike with stabilisers, or you grow up and rip them off and have a few falls on the bike then become confident and competent enough to ride that bike without them. And have more freedom and time to experiment and go places you couldn't when they were put on.

So yeah. I definitely wouldn't work with people who like wasting my and their time on this Earth.

Write it. Run it. It either does what it's supposed to or not. If it doesn't, find out why and fix it. Or discover that your function/code abstraction/thought was shit in the first place, then write it differently - oh, and that's the worst part about writing code that tests the Code That Does The Work: say you discover that the function you're writing was a load of bollocks and needs to be highlighted and simply erased - there goes all the test code you spent brainpower and time on with it, too. And now you have to spend even more time writing new test code to test the Code That Actually Does Useful Work.

No thanks. And goodbye.


> My code gets tested. By people using it.

Users are not guinea pigs. They deserve better.

> Write it. Run it. It either does what it's supposed to or not. If it doesn't, find out why and fix it

That's called functional testing, and that's actually testing. You are one step removed from formalising what you do and getting non-regression testing for free. At that point, I think you either are arguing for the sake of it and do actually realise that testing is important, or somehow confuse testing with unit testing, which is only a narrow subset of it.


Congratulations. I already told you I test my programs. The discussion is about expending brain power and time writing hundreds of lines of code to test the intended user-facing code, which, in my opinion, is just dumber than a bag of hammers.


I think we're talking specifically about test automation like unit tests, integration tests, and end-to-end tests. You can't write software without ever trying to run it, which is functional testing. A solo developer e.g. has to be doing this sort of "QA"/functional manual testing.

Let's look at the source code for Doom: https://github.com/id-Software/DOOM

How many _test files do you see? Millions of people played Doom for endless hours and the quality beats a lot of modern software with tests.

Again, tests certainly have their place in modern software development, but the kind of thinking that says having tests means your quality is good is wrong, and actually leads to worse software quality. Tests are just one part of an overall approach to quality software.

EDIT: Re hiring. I would be looking for people that understand the nuances and the why vs. people that approach things through a religious lens. I.e. they understand the tradeoffs. If you're writing tests then your tests are code. Should you write tests for your tests? If not why not? How do you know that your tests are correct? If your religion says 100% unit test coverage for all code then it's pretty clear this is a religious belief not subject to reason (because otherwise you'd be also asking for 100% coverage for your unit test code by other unit tests).

There are situations where unit tests have a ton of leverage. There are situations where they have less. Testing happens in other disciplines, e.g. mechanical engineering, where certain things get tested (including with automation) and others do not. The decisions depend on the function of the component, the impact of failure, preexisting knowledge about the technologies, etc. software engineering can learn something from some of those other engineering disciplines...


Tests are less crucial when your code is written by one genius who understands the whole code base coz he wrote it.


My old man, who will always gladly mention that „we did this already in the 80‘s and it was called frobniz“ whenever I bring up a technique, architecture, etc., would beg to differ.

When I asked him about TDD he said they did practically the same thing. He forgot what it was called, though.

One recent gem was when he shared a video where they explained the recent crowdstrike debacle: „Look they’re making the same mistakes as 40 years ago. I remember when we dynamically patched a kernel and it exploded haha…“.

In any case, writing tests before writing the implementation was a thing during the 80‘s as well for certain projects.


"Unit-Testing" became popular about the time of Extreme Programming. The reason I think it became so popular was that its proponents programmed in dynamically typed languages like Smalltalk, and later JavaScript. It seems to me that synamic languages needs testing more than statically typed ones.


Beck's first xUnit framework was SUnit, for Smalltalk, but Beck's second take was JUnit, which is for Java. Java was and still is a statically typed language.

Tests are there to check the logical correctness of the unit under test; very few type systems can catch errors like using - instead of + in a mathematical formula, for instance. You either need to go to dependently typed languages or languages that otherwise permit embedding proofs (SPARK/Ada).
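A trivial Python illustration of that class of error - the type annotations are satisfied either way, and only a test notices the wrong operator:

  def celsius_to_fahrenheit(c: float) -> float:
      return c * 9 / 5 - 32    # bug: should be + 32; still type-checks

  def test_conversion():
      assert celsius_to_fahrenheit(0) == 32      # fails, exposing the sign error
      assert celsius_to_fahrenheit(100) == 212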


In dynamic languages tests also tend to fill the role of the compiler, which I think is the parent's point. Dynamic/interpreted language code might have syntax errors or be otherwise incorrect (including type errors), and you often don't find those until the code is run.


When this buggy method is compiled (not run) with Smalltalk, errors and warnings are shown. The code cannot be run because it failed to compile.

  hnQuestion
    | list max |
    list := #(1 8 4 5 3).
    ! Syntax Error: Nothing more expected
    1 to: list size do: [:i |
      max < (list at: i)
        ? Uses ifTrue:/ifFalse: instead of min: or max:
        ifTrue: [max := (list at: i)].
        ifFalse: [max := max].
    ].


For future reference, indent with 2 spaces to get code formatting.

https://news.ycombinator.com/formatdoc


For future reference, does "code formatting" provide a way to distinguish

  list := #(1 8 4 5 3).
from

! Syntax Error: Nothing more expected

as-well-as from comment text ?


That’s a fair point. However, dynamic languages tend to have very good linting that catches many basic type errors.

They can also run way more often during development, down to the function/expression level.


Of course there were tests, just not automated tests!

In better run organisations they had test protocols, that is, long lists of tests that had to be run by manual testers before any new version could be released. Your manager had to make sure these testers were scheduled well in advance before the bi-annual release date of your latest version of the software.

So listing old software and claiming that it didn't have many tests is misleading, to say the least.


I am talking specifically about automated tests.

> automated unit/integrations tests

Since that's what the blog is talking about. Formalized manual testing is a different topic, that also didn't universally exist in the 70's and 80's.


Yes, but your comment gave the impression that you could create quality software without testing. No one is going to interpret your comment as a call for replacing automated tests with manual tests, but many might read it as “testing in itself isn’t very important.”


> When you have no tests your problems go away because you don’t see any test failures.

> Never have I tested anything and NOT found a bug, and most things I tested I thought were already OK to ship.

I wasn't a very fast typist, I could do about 180 strokes per minute. My teacher, a tiny 80 year old lady, talked the whole time to intentionally distract her 5-6 students. It was a hilarious experience. One time, when I had an extra slow day, the monologue was about her learning to type: the teaching diploma required 300 strokes per minute, from print, handwriting and dictation. And not on such a fancy electronic typewriter! We had mechanical typewriters! And no correction lint! She was not the fastest in her class by far and many had band-aids around smashed fingers. Trying to read type, not listen, and not burst out laughing, I think she forced me down to 80 strokes per minute. Sometimes she had me sit next to a girl doing 450 strokes per minute. Sounded like a machine gun. They would have casual conversation with eye contact. I should not have noticed it, I was supposed to be typing.

When writing code and think about those "inevitable" bugs I always think of the old lady, who had 1000 ways of saying: you only think you are trying hard enough... and: we had no correction lint....

Take a piano, there is no backspace. You are supposed to get it right without mistakes.

If you have all of those fancy tools to find bugs, test code, the ability to quickly go back and forwards, of course there will be plenty of mistakes.

Whether they need to be there, no one knows.


World-class, best-in-the-world gymnasts still fall off the balance beam from time to time.

Mistakes are inevitable; it's why whiteout and then word processors were made.


Pain is a great teacher.


I'm puzzled by people debating tests. Why such hate? They catch bugs, prevent breaking changes, and ensure API stability. I have never seen tests prevent me from refactoring anything. I guess it depends on the company and the processes :thinking:


There are different kinds of tests.

Integration tests at the outer edges often gives you most bang for buck.

Granular, mocked unit tests often add little value and will become a maintenance burden sooner or later.

And some of it is unconscious; maybe having that big, comfy test suite is preventing the software from evolving in optimal directions, because it would just be too much work and risk.


Because writing good tests is very hard, and many engineers are simply mediocre, so they write brittle tests that require a lot of time to fix and don't actually test the right things (e.g. too many mocks), or they are simply overconfident (like some people in the thread) that their code will always work.

Also the TDD cultists are partially to blame for this attitude as well. Instead of focusing on teaching people how to write valuable tests, they decided to preach dogma and that frustrated many engineers.

I'm firmly in the circle of writing tests of course, I don't think a system that is not tested should ever be in production (and no, you opening your browser on a local machine to see if it works is not sufficient testing for production..).


I think there is a mostly psychological "problem": tests are not perceived as progress (unless you are mature enough to treat quality assurance as an objective) and finding them fun to write or satisfying to run is an unusual acquired taste.


Tests are tools - you wouldn't use a screwdriver for everything, even though it's a tool that's useful for many things.

Having said that - tests, codebase and data consistency, and static types are things I'd not want to be without.


A test will only catch an edge case you already thought of. If you thought of it anyway, why not just fix the bug instead?

Tests have burned out software engineers who waste the majority of their time deriving tests that will pass anyway. And then a significant code change will render them useless, at which point they have to be rewritten from scratch.

No your program will not be more correct with more tests. Deal with it.


> A test will only catch an edge case you already thought of. If you thought of it anyway why just not fix the bug instead?

The reason I do this is to prevent the bug from re-occurring with future changes. The alternative is to just remember for every part of the system I work on all edge cases and past bugs, but sadly I simply do not have the mental capacity to do this, and honestly doubt if anyone does.


If a future change is relevant to an existing piece of code, then the logic needs to be rethought from scratch. There is no guarantee your past tests will still be relevant or comprehensive.

So skip the tests and work more on the code instead.


If a requirement changes, the test for that requirement obviously has to change. These tests breaking is normal (you had a requirement that "this is red", and a test ensuring "this is red", but now suddenly higher ups decide that "this is not red", so it's obvious why this test breaking is normal).

If a requirement doesn't change, the test for those requirements should not change, no matter what you change. If these tests break, it likely means they are at the wrong abstraction level or just plainly wrong.

Those are the things I look at. I don't even care if people call stuff "unit tests", "integration tests". I don't care about what should be mocked/faked/stubbed. I don't care about whatever other bikeshedding people want to go on.

E.g. if your app is an HTTP API, then you should be able to change your database engine without breaking tests like "user shouldn't be able to change the email of another user". And you should also be able to change your programming language without breaking any tests for user-facing behavior (e.g. "`GET /preferences` returns the preferences for the authenticated user").

E.g. if your code is a compiler, you should be able to add and remove optimizations without changing any tests, other than those specific to those optimizations (e.g. the test for "code with optimizations should behave the same as code without optimizations" shouldn't change, except for specific cases like compiling only with that optimization enabled or with some specific set of optimizations that includes this optimization).


To me, advice like "just write your code in a way that you will only ever extend it, not change it" is about as realistic as "just don't write bugs".


Will all your team members also think about those edge cases when changing that part of the code? Will they ensure the behavior is the same when a library dependency is updated?

So, tests catch edge cases that someone else thought of but not everyone might have. This "not everyone" includes yourself, either yourself from the future (e.g. because some parts of the product are not so fresh in your mind), or yourself from now (e.g. because you didn't even know there was a requirement that must be met and your change here broke a requirement over there).

To put an easy to understand example, vulnerability checkers are still tests (and so are linters and similar tools, but let's focus on vulnerabilities). Your post implies you don't need them because you can perfectly prevent a vulnerability from ever happening again once you know about it, both because you write code that doesn't have that vulnerability and because you check that your dependencies don't have that vulnerability.

So, think of tests more like assertions or checksums.


> think of tests more like assertions or checksums.

That's a good way to summarize how tests can catch regressions. And, I think I'm stealing that!


You write the test to prevent the bug from being accidentally reintroduced in the future. I have seen showstopper bugs reintroduced into production multiple times after they were fixed.


For me at least, designing a test will usually let me discover problems with my code which may otherwise have gone unnoticed.

Leaving the tests there once written to help us in future refactoring costs nothing.

Granted, in some languages tests are more complicated to write than in others. In PHP it's a nightmare; in Rust it's so easy it's hard to avoid doing.

I hear what you are saying though, sometimes writing tests consumes more time than is necessary.


I completely agree with what you're saying - tests help me ensure nothing breaks and let me change stuff fast. But every piece of code you leave behind is a liability. In the best case it's free; otherwise it's another point of failure, and other engineers might spend time understanding it.

Code is a liability. It has to have a good reason to be there in the first place - in the case of tests, it's worth it because it saves more time on bugs, but this can easily turn into a premature optimization.


Very well put! Couldn’t have said it better myself


Do you think the test is written and the bug left in? What a weird take.

And then, you write the test so that future changes (small or big) that cause regressions get noticed before the regression is put into production again. Especially in complex systems, you can define the end result and test whether all your cases are covered. You do this manually anyway, so why not just write a test instead?


The time that you spend writing a test is time you would otherwise have spent finding another bug in the code. Your time is finite. Bugs are not.


> A test will only catch an edge case you already thought of.

Property-based tests and model-based tests can catch edge cases I never thought of.
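For example, with the Hypothesis library in Python, the tool searches for inputs that break a stated invariant rather than me hand-picking cases (my_sort is a deliberately buggy stand-in):

  from hypothesis import given, strategies as st

  def my_sort(xs):
      return sorted(set(xs))    # bug: silently drops duplicates

  @given(st.lists(st.integers()))
  def test_sort_keeps_all_elements(xs):
      assert my_sort(xs) == sorted(xs)    # Hypothesis finds e.g. [0, 0]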

> Tests have burned out software engineers who waste the majority of their time deriving tests that will pass anyway.

Burn, baby, burn! We don't need programmers who can't handle testing.


There are things that are easier to verify than to do correctly. Almost anything that vaguely looks like a proper algorithm has that property. Sorting, balanced trees, hashtables, some kinds of data splicing, even some slightly more complicated string processing.

Sometimes it's also possible to do exhaustive testing. I once did that with a state machine-like piece of code, test transitions from all states to all other states.
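A sketch of the exhaustive approach in Python - the three-state machine is an invented example, but the shape is the same: enumerate every (state, event) pair and assert the outcome is legal:

  import itertools

  STATES = ["idle", "running", "done"]
  EVENTS = ["start", "finish", "reset"]

  TRANSITIONS = {                   # (state, event) -> next state
      ("idle", "start"): "running",
      ("running", "finish"): "done",
      ("done", "reset"): "idle",
  }

  def step(state, event):
      return TRANSITIONS.get((state, event), state)   # unknown pairs are no-ops

  def test_all_transitions():
      for state, event in itertools.product(STATES, EVENTS):
          assert step(state, event) in STATES    # never leave the legal set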


I assume you are talking about unit tests here.

Thinking of edge cases is exactly what unit tests are for. They are, when used properly, a way to think about various edge cases *before* you write your code. And then, once you have written your code, to validate that it indeed does what you expected it to.

The issue I am seeing, more often than not, is that people try to write unit tests after the fact. Which means that a lot of the value of them will be lost.

In addition to that, if you rewrite your code so often that it renders many of your tests invalid I'd argue that there is a fundamental issue elsewhere.

In more stable environments, unit tests help document the behavior of your code, which in turn helps when rewriting your code.

Basically, if you are just writing tests because people told you to write tests, it is no surprise you burn out over them. To be fair, this happens all too often. Certainly with the idiotic requirement added to it that you need 80% coverage without any other context.

If you write tests while understanding where they fit in the process, they can actually be valuable for you.


Writing a test is often the best way to reproduce and make sure you fixed a bug.

Keeping them for a while lets you make sure it doesn't pop up again.

10 years later, they probably don't add much value.

Tests are tools. That's like saying 'No, your food won't taste better with more salt.' It depends.


Completely agree on tests. It's much more enjoyable for me to write some automated tests (unit or integration) and be able to re-run them over and over again than it is for me to manually run some HTTP requests against the server or something. While more work up front, they stay consistent and I can feel more comfortable with my code when I release.

It's also just more fun to write code (even a test) than it is to manually run some tests over and over again, at which point I eventually get lazy and skip it for that last "simple, inconsequential" commit.

Coming from a place where we never wrote tests, I introduce way fewer bugs and feel way more confident every day, especially when I change code in an existing place. One trick is to not go overboard and to strike an 80/20 balance for tests.


It depends a lot on what you work on and how you program. Virtually none of our software has actual coding errors, and when developers write new parts or change them, it's always very obvious if something breaks. Partly because of how few abstractions we use, partly because of how short we keep our chains, letting every function live in isolation, almost never used by multiple parts of the software.

Both the lack of abstractions and the lack of reuse go against a lot of principles, and it's not exactly like we refuse to do either religiously, but the only real principle we have is YAGNI, and if you build an abstraction before you need it you're never going to pass a code review.

As far as code reuse goes, well, in a perfect world it's sort of stupid to have a lot of duplicate code. But in a world where a lot of code is written on a Thursday afternoon by people who are tired because their babies kept them awake, the meetings were horrible, management doesn't do the right things, and so on - in that world it's almost always better to duplicate code so that it doesn't eventually become a complicated abstract mess. It shouldn't, and I'm sure it doesn't in some places; I've just never worked in such a place. I have worked with a lot of people who followed things like Clean Code religiously, and the results were always unwieldy code where even small changes would take weeks to implement. Which is completely counterproductive to what the business actually needs.

The benefit of YAGNI is that it mostly applies to tests as well, exactly because it's basically impossible to make changes without knowing exactly what impact you're having on the entire system.

What isn’t easy is business logic, and here I think tests are useful. Or at least they can be. Because far too often, the business doesn’t have a clue what they want up front. Even more often, the business logic will change so rapidly that automated tests become virtually useless, since you’re going to rely on acceptance tests anyway.

Like I said, I’m not religious about it. I sometimes write tests, but in my anecdotal experience things like full test-coverage is an insane waste of time over a long period.


He was basically starting over, so he definitely needed to delete the tests. One of the issues with enterprise development is choking the project with tests and other compliance shit as soon as people start coding. Any project should be in a workable/deployable state before you commit to tests.


Tests written for pure functions are great. Tests written for everything else may be helpful but might not be.


You need tests for all parts of the functionality you care about. I write tests to make sure that what is persisted is what we get back. Just the other day I found a bug because our database didn't preserve the timezone offset on our timestamps.
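A minimal round-trip sketch in Python with an in-memory SQLite table (the schema is invented): store an offset-aware timestamp, read it back, and compare values rather than eyeballing them. A storage path that silently drops the offset fails the assertion:

  import sqlite3
  from datetime import datetime, timezone, timedelta

  def test_timestamp_roundtrip():
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE events (ts TEXT)")
      original = datetime(2024, 6, 1, 12, 0,
                          tzinfo=timezone(timedelta(hours=5, minutes=30)))
      conn.execute("INSERT INTO events VALUES (?)", (original.isoformat(),))
      (stored,) = conn.execute("SELECT ts FROM events").fetchone()
      assert datetime.fromisoformat(stored) == original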


Not suggesting that testing other things isn't useful, but it's not as straightforward and not as obviously beneficial as pure-function testing. It is easy to just dogmatically pile on tests, but they may not be helpful.


I’d say just as beneficial, but as you say, not as straightforward. One of the reasons functional programming is popular is that it makes code easier to test, but it’s not that other code needs less testing.


> When you have no tests your problems go away because you don’t see any test failures.

>

> Never have I tested anything and NOT found a bug, and most things I tested I thought were already OK to ship.

It's a trade-off. Most of the business world ran on, and to some extent still runs on, Excel programs.

There are no tests there, but for the non-tech types who created these monsters, spending time on writing a test suite has a very real cost - there's less time to do the actual job they were hired for!

So, yeah, each test you write means one less piece of functionality you add. You gotta make the trade-off between "acceptably (in frequency and period) buggy" and "absolutely bullet-proof no matter what input is thrown at it".

With Excel programs, for example, if the user sees an error in the output, they fix the input data, they don't typically fix the program. It has to be a dealbreaker bug before they will dive into their code again to fix the program.

And that is acceptable to them.


> There are no tests there, but for the non-tech types who created these monsters, spending time on writing a test suite has a very real cost - there's less time to do the actual job they were hired for!

Not spending time on writing tests has a very real cost - a lot of time is spent figuring out why your forecast was way off, or why your year-end figures don't add up.

Not to mention how big parts of the world were thrown into austerity, causing hundreds of thousands of deaths, due to errors in your published research [0].

[0] https://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt#Metho...


>> It's a trade-off.

>> spending time on writing a test suite has a very real cost

> Not spending time on writing tests has a very real cost

Yes. That's what "trade-off" means.


My point is that there isn't a tradeoff between getting "real work" done or writing tests. Either you write tests, or you spend the even more time mitigating the consequences of not writing tests. You can't save time by not writing tests (except for the most trivial cases).


> Giving up tests and versions, I ended up with a much better program.

I can’t understand how anyone would willingly program without using source code control in 2024. Even on a single-person project, the ability to work on multiple machines, view history, rollback, branch, etc. is extremely valuable, and costs almost nothing.

Maybe I’m misunderstanding what the author means by “versions”?


I'm trying to build something small with a quickly frozen feature set. I've chosen to build on a foundation that changes infrequently. There is more background at https://akkartik.name/freewheeling.

You're absolutely right that this approach doesn't apply to most programs people build today, with large teams and constantly mutating requirements.

I do still have source control. As I say in OP, I just stopped worrying about causing merge conflicts with other forks. (And I have over 2 dozen of them now; again, see the link above for details.) So I have version control for basic use cases like backups or "what did I just change?" or getting my software on new machines. I've just stopped thinking of version control, narrowly for this program, as a way to help _understand_ and track what changed. (More details on that: https://akkartik.name/post/wart-layers) One symptom of that, just as an example of what I mean: I care less about commit message hygiene. So version control still exists, but it's lower priority in my mind as a part of "good programming practice" for the narrow context of programs like this with frozen feature sets, intended to turn into durable artifacts that last decades.


O the joys of solo programming! I do it too, and the thing I find interesting about it is that I think a lot about how to program better, like you are. If I was working on a team I would probably not think much about it; I would just be doing what my boss tells me to do.


This context helps me understand quite a bit more what you're getting at. I dunno if I could manage the same approach, but I at least appreciate how you're thinking about it. Thanks!


The author does not seem to have to support any professional / paying users, and wants freedom to experiment more than a guarantee of a known working version. The author also does not seem to work on large systems, or do significant teamwork (that is, not being the only principal author).

In such a situation, all these tools may not provide a lot of value. A flute player in a large orchestra playing a complex symphony needs a score and/or a conductor; a flute player playing solo against a drum machine, or playing free jazz, has much less need for a score, and would likely even be hindered by one.


Tests and version control still have immense value when working solo.

Tests help with ensuring that you don't introduce regressions, and that you can safely refactor. It's likely that you test changes manually anyway, so having automated tests simply formalizes this, and saves you time and effort in the long run.
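For example, a manual check like "type something, hit undo, eyeball the buffer" can be formalized into a few lines that run in milliseconds. A minimal sketch using Node's built-in test runner; `Editor` is a hypothetical stand-in for whatever you'd otherwise poke at by hand:

  import test from "node:test";
  import assert from "node:assert/strict";
  import { Editor } from "./editor.js"; // hypothetical module under test

  test("undo restores the previous buffer", () => {
    const ed = new Editor("hello");
    ed.insert(" world");
    ed.undo();
    assert.equal(ed.text(), "hello");
  });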

Version control helps you see why a change was made, and gives you the ability to revert changes, over longer periods of time. We tend to forget these things even after a few weeks, so having a clean version control history is also helpful for the future version of you.

Not having the discipline to maintain both, and choosing to ignore them completely, is just insane to me. But, hey, whatever works for OP. I just wouldn't expect anyone else to want to work with them.

The only scenario where I could conceive not using either is in very small projects with a short lifespan: throwaway scripts, and the like. The author is writing their own language and virtual machine, which don't really align with this. Knowing their philosophy, I would hesitate to use anything they made, let alone contribute to it.


Whatever floats your boat, but just to be clear my own language and virtual machine do have tests. The value of tests depends on the domain. Graphics and games benefit less from tests. My graphical text editor straddles the worlds.

I'm still using version control as I've clarified elsewhere. I wasn't expecting this post to gain such a broad audience; I realize now it is really about how one's workflows can keep one stuck in a rut, a local optimum.


Thanks for clarifying. I think you might want to make this clearer in the blog post, since many people had the wrong impression.

> The value of tests depends on the domain.

I agree with this. And like I said, for small throwaway projects and quick experiments I can see how tests and version control can be tedious to deal with. But even in projects like your Freewheeling Apps, where you're releasing them to the public and encouraging people to use them, you're doing them and yourself a disservice by not having tests.

But you clearly know what you're doing, so I'll stop preaching. :) Good luck with your projects!

BTW, I'm a big fan of LÖVE and it's super interesting what you're using it for. I only imagined it was good for games, not apps.


Thanks! I should add that almost every app you can get to from https://git.sr.ht/~akkartik/lines.love by traversing the "Mirrors and forks" sections of readmes has these thorough tests for the editor widget. Certainly every app you can see in the family tree image map from 2023 does: https://akkartik.name/freewheeling/#resources. It's only a tiny new sub-tree that currently does not:

https://git.sr.ht/~akkartik/lines2.love

https://git.sr.ht/~akkartik/text2.love

I tend to gravitate towards tests, and taking out tests as I describe in OP is a lot of work.


The author is probably experiencing mental fatigue or even burnout about programming.

If version control bothers you that much I'd say it's a good sign that you need to take a break.


This seems very far from my subjective experience. The little platform-independent programs I write for myself and publish are a source of spiritual rejuvenation that support my day job in a more conventional tech org with a large codebase, large team and constantly changing requirements.

I'm not "bothered" by version control. I've not even stopping using it. As I say in the post, I just don't think about it much, worrying about merge conflicts and so on, when I'm programming. I've stopped leaning on version history as a tool for codebase comprehension. (More details: https://akkartik.name/post/wart-layers)

This comment may also help clarify what I mean: https://news.ycombinator.com/item?id=41158040


All of your comments are without any arguments against VC. It also seems there is a misunderstanding of your stance: you seem to use it, but you aren't focused/disciplined in its use?

> I'm not "bothered" by version control. I've not even stopped using it. As I say in the post, I just don't think about it much, worrying about merge conflicts and so on

How is using VC, especially in a solo project, "bothering"? It really does seem you just hate the tooling around modern software development and you just want to spit out code that does something for you and yourself. Which, again, is fine, but it's usually not a good idea if you are making something for other people/users.


But I said VC is not "bothering"!

Perhaps I should replace the word "versions" in my post with "workflows". In some situations the workflows I settle into contribute to a feeling of being stuck in a local optimum. Throwing away familiar and comfortable workflows can help find a global optimum. It's just the first step, though. It takes hard work to build everything all at once. But it can be valuable for some projects if you aren't happy with where you are.


As programmers we are inundated with choice and options. Our tooling and whatever the zeitgeist considers "best tooling" tends to err on the side of making $THING easier to do.

But having 1000 easy options always available introduces severe cognitive burden in picking the correct choice. That's part of the reason why we as an industry have enshrined all sorts of Best Practices and socially shame the non-adherents.

Don't get me wrong, bad architecture and horrible spaghetti code are terrible to work with. However, questioning the things that feel Obviously Correct, and exploring different and austere development environments that narrow our set of available choices and tools, can genuinely sharpen our focus on the end-goal problem at hand.

As for version control, branching encourages cutting a program into "independent features"; history encourages blind usage of potentially out-of-date functional units; collaborative work reifies typically-irrelevant organizational boundaries into the code architecture (cf Mel Conway); etc.

Version control's benefits are also common knowledge, but there are real tradeoffs at the level of "solving business problem X". It's telling that such tradeoffs are virtually invisible to us as an industry.


> branching encourages cutting a program into "independent features"

But, you can choose not to branch then?

I’m really confused about the trade offs of version control. I can understand trade offs of branching strategies, but at its most fundamental (snapshots of your code at arbitrary times), I can’t think of any drawbacks?


You're, perhaps unintentionally, moving the goalposts a bit. "Version control" doesn't just mean database of code snapshots. It simultaneously connotes all the related functions and development processes we have around version control.

Are you familiar with the artistic practice of adding "artificial" constraints in order to promote creativity and productivity? See Gadsby, the novel written without using the letter "e", or anything produced by Oulipo.

The point is that we have a superabundance of choice with software architecture and programming tools. One subset of those tools comprises things provided by version control. Give yourself a version control-less, limited development environment and see how it influences the way you think about and practice coding. There will be sharp edges, but if you give it an honest attempt, you will also very likely discover novel and better ways of doing more with less.

There are many things you can try. Disable syntax highlighting in your editor; try exclusively using a line editor such as ed; flatten your codebase into a single directory; code everything in a single file; organize your data structures to minimize pointer chasing; support extreme cross-platform compatibility (10 OSes?); write platform-independent code using "only assembly" (a la Forth, sectorlisp, or whatever); write a thing and then nuke and rewrite 5 times; etc.

IMHO, value in the above is most easily discovered by retaining a strong introspective eye throughout your personal development process. Where are the pain points? What processes force you to think about non-end goal issues? When does coding feel the most glorious? When did you have the deepest insights? Blah blah blah.


> You're, perhaps unintentionally, moving the goalposts a bit. "Version control" doesn't just mean database of code snapshots. It simultaneously connotes all the related functions and development processes we have around version control.

Not OP, but I'd argue you are the one moving the goalpost here.

If someone says they are not using "version control", I'm going to assume that they are not using git (or similar) at all. Any other meaning would be so arbitrary as to be almost useless. No one can guess where you draw the line in the sand between "I'm not using any version control tool" and "I'm technically using a version control tool but I'm not doing version control because I don't do X, Y, Z".

I personally can't imagine writing any non-trivial piece of code without using git. Even in its most basic form, the advantages are overwhelming. But at no point in my 20+ years of development have I ever applied the same rigorous version control rules of professional environments to my personal projects. At best I've used branches to separate features (rarely, and mostly when I got tired of working on a problem and wanted to work on a different one for some time), and PRs to have an opportunity to review the changes I made to see if I forgot to do something. At "worst" I simply used it as a daily snapshot tool (possibly with some notes about what's left to do) or as a checkpoint after getting something complicated working.

If the author has finally figured out rigorous source control can be unnecessary and counterproductive on small projects - good on them! But if that's the case then say that. Calling the fine tuning of which process you want (or don't want) to use "no version control" is just misleading.


I’m working on a feature that is a moderate refactoring and extending of an existing feature. I’m in some sense taking extra burden by ‘sculpting’ my change out of the existing code and the working backwards to come up with the logically contained and discrete commits to get from where I started to where I want to go.

It would be nice to just make my change without having to show it as a series of discrete steps.

I’m not actually opposed to this standard, but trying to show one perceivable downside that op may be alluding to (I’m not actually sure?)


That's not version control; that's something you've chosen to do with version control.

You could just check in your code every night. And, vs not having those commits (even without messages) - what could possibly be the downside?


This is all in a professional environment requiring code review for actual submission. I need to follow this process to actually deliver


This sounds like you’re discussing code review and coding standards, not version control.


Maybe your confusion is in your assumption of what’s being discussed


I’m discussing version control.


And everyone else is discussing behaviors that are downstream of version control


But only if you choose to use them. I agree with the other commenter: it's very hard to see what trade-offs there are to pressing a button to initialise a repo at the start, then committing changes at the end of each session, or intermittently, so there's a copy of current progress somewhere.

If the OP is referring to version control because they need to handle multiple branch types, switching between versions, etc., that is much more involved... but that also makes it even harder to see how you could manage it by simply dropping version control entirely.

From the article, it does seem like it's not about any sort of specific feature they use, but rather the sheer basic "save versions of code" aspect of VC:

"Version control kept me attached to the past"

To go back to an earlier comment, this honestly sounds like burnout to me if you're having temporal anxiety from saving code.


If it's your personal project, you are in charge of deciding which "behaviors that are downstream of version control" you want to adopt. If you are applying unnecessarily complex processes to a given project, that's on you.


I think in this case, the author means coding version logic into the app itself, e.g. versioned API endpoints for backwards compatibility


I don't think so:

> Back in 2015 I was suspicious of abstractions and big on tests and version control. Code seemed awash in bad abstractions, while tests and versions seemed like the key advances of the 2000s.

> In effect I stopped thinking about version control. Giving up tests and versions, I ended up with a much better program.

> Version control kept me attached to the past. Both were counter-productive. It took a major reorientation to let go of them.


I don't get what they mean by "Version control kept me attached to the past."

You don't have to look at the history to use other features of version control. Typically everything is moving forwards in a repository.


Best guess, a reflexive need to keep diffs as small as possible. Personally I think this is a completely wrong mindset, having version control is what allows you to go wild because you can always use the version from before a crazy refactor - and if it goes wrong you can even keep it around on a branch for reference later on with a second attempt.


Your quotes seem to reinforce parent's assertion he's not talking about version control in the form of tooling but some kind of versioning in the code itself: "...while tests and versions..."


Holy cherry-picking, Batman.


Oh good, Reddit seems to be leaking in again.


He specifically mentions version control and avoiding merge conflicts, so I'm pretty sure it's stuff like git that he's finding himself cautious about.


How do you get a merge conflict with yourself?


By maintaining a family of related forks/branches: https://akkartik.name/freewheeling


Thanks a bunch, now the coffee is on my keyboard.


By trying really hard


That makes sense, but then why not just work on trunk and don't worry about branching?


This is about a desktop text editor built with Lua on a C++-based native framework for writing 2D games: https://git.sr.ht/~akkartik/lines2.love Very unlikely to have versioned API endpoints involved.


Yep, commit your code when it "works". Then I can safely go off on a harebrained experiment, knowing I can easily throw away the changes to get back to what worked.


Yeah, this is not good advice for the average person, even for solo projects.


I agree, and the author probably does as well.

I didn't get the feeling it was meant as general advice.


Could this person be intentionally giving bad advice?


I think it’s just an alternative way of thinking. It’t not one I agree with, but I can see where the author is coming from. Think he’s just tired of spending time on useless tasks around his projects. For all we know they may be, but I do have hard time viewing testing and version control as overhead xD


I'm pretty sure he's trying to find his balance, because it is always a balance and we tend to err big on the other side.


Yeah exactly


At first glance I thought the author was plain wrong, but I think there is some good insight here.

This workflow works very well for the author. Most of us can probably think of a time when Git or automated tests frustrated us or made us less productive. There are similar solutions that are simpler and get out of the way, e.g. backing up code with Dropbox, FTP, whatever.

The above works well because the author is optimizing for their productivity on a passion project where they collaborate with few others.

Automated tests are useful, but it sounds like the author likes creating programs so small that the value might not surface. I think that automated tests still have value even in this context, but I think we can all agree that automated tests slow you down (though many would argue that you see eventual returns).

Version control and automated tests solve real problems. It would be insane to start a project without VC today, and automated tests are a best practice for a reason. But, for the author's particular use case, this sounds reasonable.

---

Aside from the controversial bits around VC/tests, I think items 7/8/9 perfectly capture my mindset when writing/refactoring a large program. Write, throw it away, write again.


Disagree on VC, even for a solo project with no multi-version branching. Humans make mistakes; knowing what you changed in the last 3 weeks of a >100k LOC project is a godsend. It helps you find and fix issues. An even better feature is branching, because you can do what you want while still having a way to go back to the previous stable version.

As for automated tests? That's fine.


I think it's still worth asking "which VC?" through that lens, though. Git was designed for developing the Linux kernel - with countless LOC and contributors and commits pouring in constantly. It happened to also be readily suitable for GitHub's model of "social" FOSS development, with its PRs and such (a model that most other Git hosting systems have adopted).

...but that ain't applicable to all projects, or possibly even most projects. The vast majority of my FOSS contributions have been on projects with one or maybe two primary authors, and without all that many PRs. What is Git, or any particular Git repository host (GitHub included), really offering me?

I need to track changes (so I can revert them if necessary), I need to backup the code I'm writing, and I need to distribute said code (and possibly builds thereof). Just about any VCS can do those things. I ended up trying Fossil for various new projects, and I'm liking it enough that I plan on migrating my existing projects into Fossil repos (with Git mirroring) at some point, too. It's unsurprisingly more optimized toward the needs of the SQLite development team - a small cathedral rather than a Linux-style giant bazaar - and considering that all my projects' development "teams" are tiny cathedrals it ain't terribly surprising that Fossil would be the right fit.


imo, taking the time to learn enough git to set up an ignore file, then be able to run git init; git add -A; git commit -a -m "before I changed the foo function to use bar"; and go back to older revisions is well worth it. You don't have to master it, but just having a commit message and a version to get back to has saved my bacon more times than I can remember, never mind more advanced operations.


This is quite a confused article.

I really wonder what about it made it be upvoted to first place.


I keep trying to figure out the joke.


Author successfully drove engagement with psychological baits like bashing commonly accepted tools and practices and being intentionally obscure so a lot of people would comment about it.


Or, the author has so much more experience than you that his conclusions can't possibly make sense in your world. Not saying that's the case, but it's certainly possible. The more wisdom, the less need for rules and conventions.

That being said, I do feel like we have to learn to communicate over these boundaries if we want to evolve faster, as opposed to mostly repeating the same mistakes over and over.


Even if that’s the case, the exposition is quite poor and hard to follow. It doesn’t exhibit a lot of clarity of thinking on the author’s part, or at least it doesn’t translate to his writing. That’s what I meant by “confused”.


It's not really designed for a broad audience, so I share your surprise that it got upvoted so much. Writing for a broad audience takes me a lot of effort, which isn't always worthwhile.

FWIW this trail might help fill in context:

https://akkartik.name/freewheeling (this was designed for a broad audience, so is like a snapshot backup where the links below are incremental backups)

https://akkartik.name/post/2022-03-31-devlog

https://akkartik.name/post/2024-06-05-devlog

https://akkartik.name/post/2024-06-07-devlog

https://akkartik.name/post/2024-06-09-devlog

https://akkartik.name/post/2024-07-10-devlog

https://akkartik.name/post/2024-07-22-devlog

Sorry to throw a bunch of links at you :)


Strong opinion: there is no special wisdom in using nothing other than code to produce software for yourself. It's a personal choice. Selling it like it's an epiphany is definitely kind of a weird move.

For your personal projects you can choose any language, define any constraints, do whatever you like which is what I think the author is trying to communicate here, and that is fine. But sprinkling a bit of huge discovery/realization on top is not so much.


Making strong assertions without any evidence or data to back them up is not "wisdom". I agree with other people: the author is simply burnt out by software (which is fine) and is just YOLOing his code.


Yeah I am sure author has transcended such pedestrian things as versioning and testing code.


No one has claimed that.

It was simply suggested that in some situations, maybe they're not as important as we tend to assume. And it takes experience to see those patterns.


On the one hand this may be an article from a developer experimenting with different tools and techniques to advance themselves in life.

On the other hand it may just be that the author wanted to bait ppl into a debate xD


Given that the author has been exploring these themes* throughout the years since I first encountered them, I've got a strong weighting for the former.

* with varied approaches; I even recall a "test all the things" experiment


Yes I think so too, I was just trying to inject a little comic relief :)


> In 2022 I started working on Freewheeling Apps. I started out with no tests, got frustrated at some point and wrote thorough tests for a core piece, the text editor.

This is a primary motivation for having a reasonable test suite - limiting frustration. Test suites give developers the confidence to evolve a system. When done properly, contributors often form an opinion similar to:

> But I struggled to find ways to test the rest, and also found I was getting by fine anyway.

This is also a common situation. As functional complexity increases, the difficulty to test components or the system as a whole can become prohibitive.

> Now it's 2024, and a month ago I deleted all my tests. ... In effect I stopped thinking about version control. Giving up tests and versions, I ended up with a much better program.

This philosophy does not scale beyond one person and said person having recent, intimate, memory of all decisions encoded in source code (current or historical). Furthermore, given intimate implementation knowledge, verifying any change by definition must be performed manually.


> This philosophy does not scale beyond one person ... having recent, intimate, memory of all decisions encoded in source code

Some time ago on HN, I ran across a tale of someone who never merged code unless they'd written it all that day. If they got to the end of the day without something mergeable, well, that just meant they didn't understand the problem well enough to express it in under a day, and they tried afresh the following morning.

Anyone else remember this, or am I confusing sites/anecdotes again?


> This philosophy does not scale beyond one person and said person having recent, intimate, memory of all decisions encoded in source code (current or historical). Furthermore, given intimate implementation knowledge, verifying any change by definition must be performed manually.

As a one-man programming team, you are correct. And quite frankly, I shudder to think of not programming with a test suite or version control, even though I work alone!

Docs, tests, and version control reduce what I have to remember about the code context. Yes, I have to remember the details of the code in front of me, but if I document it, test it, and check it in with a good commit message describing the why and how and whatever, then I can discard that code from my memory and move on to the next thing.


All of the tools and artifacts you reference as important contribute to the same goal, whether it is for me or a future-you:

Understanding.


My favorite example for point number 3 "Small changes in context (people/places/features you want to support) often radically change how well a program fits its context." is K9 Mail, which is becoming the Android version of Thunderbird now.

It started with an unconventional UI with a home page listing email accounts and for each account the number of unread and total messages. There was a unified inbox but it was not forced on users.

I remember that I explicitly selected this app because it fit my needs: one personal account, one work account, several work accounts that my customers gave me. I wanted those accounts to stay separate.

Probably a lot of K9 users picked the app for precisely the same reason, because there were many complaints when the developer migrated to a conventional Android UI with a list of accounts sliding in from the left and an extra tap to move from one account to another. If we had liked that kind of UI, chances are we wouldn't have picked K9 to start with.

So one small change (but probably a lot of coding) destroyed the fitness of the app for its users. I keep using the old 5.600 version, the latest with the old UI, and I sideload it onto any new device I buy.

Furthermore, to make things even more unusual, I only use POP3 to access my accounts (I preview on the phone, delete stuff, possibly reply BCCing myself, eventually download on my laptop) and K9 fit that workflow perfectly. I don't need anything fancy. An app from the 90's would be good enough for me.


I really appreciate[1] the concrete example. Worth more than my opinion in OP and everybody's opinions in this thread put together.

[1] https://news.ycombinator.com/favorites?id=akkartik&comments=...


I too keep wondering where this path leads.

One thing is clear to me though, creating (software) by yourself is a completely different activity from doing it in a team.

About testing: tests are means, not ends. What we're looking for is confidence, I think. So when I feel confident about an implementation, I'll test less. And if I desperately need to make sure something keeps working, I'll add a few integration tests at the outer edges that are not so affected by refactorings and thus won't slow me down as much, e.g. poking a web backend from the outside, as opposed to testing the internals. Unit tests are good for fleshing out the design of new APIs, but those tests are pretty much useless once you know where you're going.


Plus, there are so many good reasons to have tests even in a single-person project:

* Hotwiring if statements with "true ||" to go straight to the feature you're building takes time, and you're gonna have to tear it down later. Just build a test and run it; that way you get to keep it for regression testing

* If you're shipping something big or slow (which can sometimes just mean 'I use Qt'), and launching or building the app takes ages, just make a test. A single test loads quicker and runs quicker

* If you're debugging and reproducing the bug takes 45 seconds, just write a test (see the sketch below). It automates away the most boring part of the job, keeps your flow going, allows you to check the status of the bug as often as you want without having to think about whether it's worth it or not, and, same as #1, you get to keep the test for regression testing
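As a sketch of that third point (hypothetical names throughout; `render` stands in for whatever you'd otherwise exercise by hand):

  import assert from "node:assert/strict";
  import { render } from "./view.js"; // hypothetical module under test

  // Formerly: launch the app, open a file, resize the window, squint.
  // Now: one command that reproduces the bug in milliseconds.
  const view = render(["a", "b", "c"], { width: 80, height: 2 });
  assert.ok(view.includes("c"), "last line should survive a resize");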



Just dropping by to say I adore this author and Mu is one of my favorite projects. A modern Lisp machine, kinda! In QEMU! So much fun!


Thank you so much, you made my day.


> Most software out there is incurably infected by incentives to serve lots of people in the short term.

Great quote! You can even replace “software” with “businesses” and the quote still works.


We are all a bit overwhelmed by the complexity of the field of software engineering - some of it arguably accidental. But I don't agree that rejecting all the ideas we have come up with over the decades is a solution. On the other hand, not all solutions should be taken to the letter or used "too much". "Overwhelming" is by definition what happens when something is used "too much". By all means, please, write tests, use the VCS, use abstractions, but *know why you use them*, and when the "why" doesn't hold - reassess.


I think a major source of the problem is academia. I'm an external examiner for CS students in Denmark, and they are basically still taught the OOP and onion-architecture way of building abstractions up front, which is one of the worst mantras in software development. What is even worse is that they are taught these things to a religious degree.

What is weird to me is that there has been a lot of good progression in how professionals write software over the years. As you state, abstractions aren't inherently bad for everything. I can't imagine not having some sort of base class containing "updated", "updated_by" and so on for classic data which ends up in a SQL db. But in general I'll almost never write an abstraction unless I'm absolutely forced to do so. Yet in academia they are still teaching the exact same curriculum that I was taught 25 years ago.

It’s so weird to sit there and grade their ability to build these wild abstractions in their fancy UML and then implement them in code. Knowing that like 90% of them are never going to see a single UML diagram ever again. At least if they work in my little area of the world. It is what it is though.


The only reason I started to _actually_ use git was magit. I wish there were command line level "porcelains" for everything. A standard '--help=ui' output and 'dialog' style interface and it could be automatic.

It's not so much being overwhelmed by the complexity, it's just that there's a limit to the amount of active muscle memory I can utilize, and I have to make the cut somewhere.


There are some interesting ideas in this article. Not using source control and removing tests resulting in a better program is quite fascinating.

It's a shame that there are so many rude comments. It seems like there are many close-minded folks lurking here, forgetting that experimentation is essential in tech.


It's also a shame that Kartik explicitly states his goals and his problem domain, yet folks react as if he'd been making comments about their goals and their problem domain.


> Not using source control and removing tests resulting in a better program is quite fascinating.

Can you clarify what is exactly fascinating here? They seem to be writing simple programs, used only by themselves. In these scenarios of course you don't *have* to use good eng practices.


You seem to think of writing simple programs used only by myself (and people I have a relationship with, and people who want to have a relationship with me) as some sort of special situation that doesn't require "good engineering practices." I think of it as the most basic situation of all.

The most foundational engineering practice of all: tailor interventions to the context.


I don't know, because no studies have been done on the so-called good engineering practices.

If a big company with 10 teams of 20 engineers each blogs about how they're able to ship good code with testing or source control, I won't be any more fascinated than I am here, because it sort of makes sense, since no one can prove that source control or testing improves the end product.


I don't agree with point 3:

"Small changes in context (people/places/features you want to support) often radically change how well a program fits its context. Our dominant milieu of short-termism doesn't prepare us for this fact."

My opinion here is that short-termism is precisely a consequence of the hardness of predicting/keeping up with these small changes: businesses prefer to be able to adapt quickly to new scenarios rather than risking being stuck in the wrong direction.


I have noticed a few articles recently on HN that talk about dropping tests because they are too slow, are holding them back, or are just extra cognitive load.

This kinda beggars belief for me. I wonder who these people are - do they have the "battle scars" from working on complex or big systems? Are they reasonably junior or new to the profession with less than 10 years experience?

Next up? Fuck structural engineers, it's just going to slow us down building this bridge...

If you are doing something for fun, sure, do whatever you want. I write zero tests for my own pet projects. But in professional environments, please don't ignore hard-won lessons in reliability and engineering velocity just because you don't want to do the extra work of updating your tests. Your customers and colleagues (potentially years in the future) will thank you.


Tech is a relatively immature industry. And a lot of time and effort and money in it is devoted to non-critical products.

I'm not directing this at the OP, because they have actually thought about it even if I disagree with them, but there are a lot of people working in tech and in software who do not care about product quality at all. They're paid a lot of money and exclusively focus on shipping ASAP, quality be damned, so they keep their metrics looking good and the $$$ flowing. Add in the industry's tendency for very short-term tenure at jobs and you end up in a situation where people think what they're doing is "optimal" simply because it keeps them getting $$$ --- product quality is just secondary. Their "craftsmanship" is their job-hopping. (I don't have a problem with job-hopping if the products and code are still good --- they usually aren't.)

They usually don't need to care about a bridge lasting 6 decades, but then they're writing critical software for infrastructure or airplanes and, unfortunately, they can actively resist a lot of the hard-learned lessons people in those industries had to learn, because they just want to move fast (and leave after ~2 years).

The culture isn't there yet.


It's my failure as a writer, because this is not one of those articles.

OP is about how I thought I had the answers in the past but was wrong, and how I have new answers and am still wrong in ways I will find out about. So what beggars belief for me is anyone reading it and thinking I'm offering any sort of advice for others in all situations. What here gives you a sense it's at all related to professional environments? My first bullet was, "building for others is hard so don't even try." If you have ideas for what I can reword to make it even clearer, definitely let me know.


The pattern I saw some years ago with relatively new people was: they'd write tests while writing a piece of code, then once it was done and all the tests passed they'd think the tests were no longer necessary because the code was done. They didn't have the experience to see how long their code would exist, how it is extremely likely to get tweaked over the years, and how future developers won't have any of the context they had while writing it.


There is adverse selection at play. The top/world-class programmers are too busy to write blogs.


There is a class of problem where you know the goal, and code which produces the goal, which you can test independently, is demonstrably OK. Of course the next run with different parameters may well be wrong, but if they aren't on the goal-path you don't much have to worry.

I do sometimes code in this pattern. I have high confidence in charts from Google and Akamai about some data I have exposure to (a variant of the inputs unique to my situation not in their hands) and when the curves I make conform in general trend to the ones they make over the time series, I am pretty sure I have this right. If the critique is in the fine differences I do some differential on it. If the critique is in the overall shape of the curve, if mine is like theirs, why do you think I am so wrong?


I love end to end tests even for personal projects https://ashishb.net/all/bad-and-good-ways-to-write-automated...



Data-orientation, abstraction avoidance, holistic rewrites. The values espoused by OP rhyme heavily with the stance I've begun to take after reading and writing significant amounts of APL.

The best code I've seen mercilessly elides anything that doesn't serve an architectural-level, problem-domain-relevant concern. GADTs, hash tables, and all our nice CS tools work much better when applied as cognitive tools in a domain-specific manner as opposed to reified language syntax or library APIs, as the latter necessarily introduce cross-domain concerns and commensurate incidental complexity.

The most blatant example of this in APL is using arrays and tables for everything. Trees? Can be efficiently encoded as arrays. Hash tables? Same. Tuples? Just a pair of vectors. Etc. APL's syntax really shines in this instance, since data interaction patterns become short, pithy APL expressions instead of zoos of library functions. Using direct expressions makes specialization much easier, by simply ignoring irrelevant concerns.
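To make the "trees as arrays" point concrete outside APL, here is the parent-vector encoding sketched in TypeScript (names are illustrative, not from any particular library):

  // A tree as two parallel arrays: no node objects, no pointers.
  const label:  string[] = ["root", "a", "b", "a1", "a2"];
  const parent: number[] = [-1, 0, 0, 1, 1]; // -1 marks the root

  // Children of node i: the indices whose parent is i.
  const childrenOf = (i: number): number[] =>
    parent.flatMap((p, j) => (p === i ? [j] : []));

  console.log(childrenOf(1).map((j) => label[j])); // ["a1", "a2"]

Whole-tree operations then become short array expressions instead of recursive pointer chasing, which is exactly the specialization-by-ignoring-irrelevant-concerns described above.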

Anyway, APL aside, I'd really like to see our software engineering zeitgeist move more toward optimistically refining our understanding of the human practice of software engineering, and away from pessimistic, programming-centric problem avoidance.

(The above really came out more treatisy than intended. Oh well.)


Can I just say... I love the return of the terms "programming," "to program," and "programmer." "Coder" and "coding" were popular for a while, and before Steve Ballmer put his stamp on it, "developers" and "development." But when I started, before 32-bit Windows was a thing, I was a programmer.

If the Primeagen has helped popularize the term again, great, thank you.


I've always been a programmer. Because it was good enough for Dijkstra.


I like that take, and wholeheartedly agree.


I can sympathize with the author's love/hate relationship with tests, but I can't help feeling like it's because we as developers so often test the completely wrong things.

I don’t typically write tests, but they do make sense for a few cases (specifically end to end tests that look for well defined outputs from well defined inputs). I was inspired by Andreas Kling’s method of testing Ladybird, where he would find visual bugs, recreate the bug in a minimum reproducible example, fix the bug, then codify the example into a test and make sure the regression was captured in his test suite[0]. This led to a seemingly large suite of tests that enabled him to continue modifying the browser without fear of regressing somewhere.

I used this method of testing while I was writing a code highlighter that used TextMate grammars. Since TextMate grammars have a well defined output for some input of code + grammar, I was able to mimic that output in my own code highlighter and then compare it to TextMate’s output for testing purposes. I wrote a bunch of general purpose tests, then ran into a bunch of bugs where I would have mismatched output. As I fixed those bugs, I would add the examples to my test suite.
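The shape of such a golden test is pleasantly boring (a sketch with hypothetical file and function names; the reference output is captured once from TextMate and committed as a fixture):

  import test from "node:test";
  import assert from "node:assert/strict";
  import { readFileSync } from "node:fs";
  import { tokenize } from "./highlighter.js"; // hypothetical module under test

  test("tokens match the TextMate reference output", () => {
    const input = readFileSync("fixtures/sample.rb", "utf8");
    const golden = JSON.parse(readFileSync("fixtures/sample.tokens.json", "utf8"));
    assert.deepEqual(tokenize(input, "source.ruby"), golden);
  });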

Anyways, my code highlighter was slow, and I wanted to re-architect it to speed it up. I was able to completely change the way it worked with complete confidence. I had broken tests for a while in the middle of the refactor, but eventually I finished it. As I started to fix the broken tests, there was a domino effect: I only had to fix a few tests, and that ended up automatically correcting the rest. Now I have a fast code highlighter and confidence that it's at least at bug-for-bug parity with the slow version :)

[0]: https://youtu.be/W4SxKWwFhA0?si=PJs_7drb3zVxq0ub


> Giving up tests and versions, I ended up with a much better program.

This is one of those sentences that is clearly an opinion but stated as if it were some undeniable, incontrovertibly true statement of fact.

In your opinion, you have a better program - but give the code or repository to another dev or a group of devs and I'm sure you'll hear very different things...


The person who wrote both the original and new versions isn't qualified to say one is better than the other?


If they are the only user or developer, sure. Otherwise they are the least qualified to say it's better -- like how I'd be the least qualified to declare myself the winner of a handsomeness contest.


I'm stealing this for all my future code reviews.


IMHO there are tests and there are tests. I had to work with codebases that had awful tests. They broke frequently because they were badly written. They used a lot of mocking where mocking was not appropriate. These tests were written for the purpose of having tests, not for the purpose of really testing the domain. I do not write tests for trivial cases, like a method in class A that just delegates to a method in class B.

For a one-man show - go on, don't write tests, especially if you do not know where you will end up with the software. But in teams I find a lot of value in (well-written) tests, both preventing bugs and documenting them. Sure, you can over-engineer it, like everything else.

BUT working without version control? Good that it works for you. I think version control is one of the MUST USE tools.


> Building durably for lots of people is too hard, just don't even try. Be ruled by what you know well, who you know well and Dunbar's number.

Wikipedia on Dunbar:

> A replication of Dunbar's analysis on updated complementary datasets using different comparative phylogenetic methods yielded wildly different numbers. Bayesian and generalized least-squares phylogenetic methods generated approximations of average group sizes between 69–109 and 16–42, respectively. However, enormous 95% confidence intervals (4–520 and 2–336, respectively) implied that specifying any one number is futile.

https://en.m.wikipedia.org/wiki/Dunbar%27s_number


I have always found integration tests most important in order to test business logic when your customers pay for your trust and especially when they rely on your code for revenue while interacting with a third party. However, they should be thrown away immediately after proving your coded logic matches business requirements as they are slow and lose value and become tech debt quickly. Unit tests, if needed, should be even more temporary in my opinion. Often a CLI can be sufficient as a "unit test" during the development process.


>However, they should be thrown away immediately after proving your coded logic matches business requirements as they are slow and lose value and become tech debt quickly.

Can you expand on why integration tests should be thrown away once validated? Isn't the idea that when you make a change later, these tests will ensure you haven't introduced a regression?


Integration tests can be very resource-intensive. In larger projects, the time it takes to set these up and run them is often daunting. Yes, the idea is to catch regressions of business logic in practice, but in reality I have found it leads to test resentment (and resentment of the writers of those tests) and smaller or no tests being written instead. Additionally, the regressions introduced are often in the test suite itself and not the actual application.


> Small changes in context (people/places/features you want to support) often radically change how well a program fits its context. Our dominant milieu of short-termism doesn't prepare us for this fact.

I'm wondering what would work better? Writing a small program from scratch, using a rich ecosystem of libraries, seems like one way to go. AI help makes that easier.

Another approach, more common with popular apps that have more to lose, is to evolve in place.


I worked with a guy who was so obsessed with testing he never even bothered to ask what feature or problem the code was supposed to solve

He happily and condescendingly told everyone else how much they sucked because he had 1000% test coverage

When he released he had tons of bugs because his code wasn't doing what it was supposed to

His answer: yelling at product and tech leads for not being clear

The rest of us had tests but spent as much time asking clarifying questions

The guy above is one of the reasons I just lost all interest in software. This was a major FAANG company, and his smooth talking continues today with management none the wiser because "he has the tests"


Seems to intersect with my experience. The best guys I've worked with had tests... to some extent... especially in places doing work you could easily get wrong by not thinking about a small edge case. Yet none of them had or pursued 100% coverage, as they were all clearly aware that there is no actual benefit in that number, and that it can also do harm by heavily slowing down dev speed and tying down your feature set, because you're too lazy to keep porting some useless tests.


Our field is burdened by complexity. Some people cannot function properly without deluding themselves that they can tame it. So they cling to rules, best practices, and tools, hoping that adopting them to the letter will protect them from the uncertainties of our job.

I've seen the opposite too: devs not only not writing any tests, but not trying to run a single line of the code they wrote. The reasoning being: I cannot test all the edge cases, so I won't test at all; QA will open a bug. And somehow they get praised by management for being faster than others at shipping changes.


I appreciate the honest answers by the OP, even if we all think there are fundamental flaws with what was given up.

For me, ChatGPT saved a lot of my mental load. I don't think about individual lines NEARLY as much. Obviously you need to understand what the program is doing and be smart about it, but you can really focus on business problems.

It spits out something like 40% of my code and 70% of my tests. I've started dropping whole files into it and telling it how to combine them into new code.


If you don’t have tests you don’t know if your shit works, and your team size can be at most 1. I even write broad coverage tests in my private repo to have a modicum of assurance that when I change things the remaining code still works.


1. It's only a modicum of assurance.

2. There are many ways to get a modicum of assurance. Types, tests, formal methods, cleanroom software engineering, NASA's IV&V, many others that I'm sure I'm forgetting.

So there are many ways to "know if your shit works" and none of them support turning off one's brain entirely (for the part beyond the "modicum"). What I did here is to explore some of the other approaches that I have long neglected.


You don’t know if your shit works just from “types”, and formal methods are not applicable to anything real. You’re arguing for the sake of argument, you know I’m right.


I know nothing of the sort. I'll stop arguing now.


> Types, abstractions, tests, versions, state machines, immutability, formal analysis, all these are tools available to us in unfamiliar terrain. Use them to taste.

How did people program in Lisp for decades? I like types and such, and have even gone so far as to write Python like it's Rust. But in the end I realized dynamic languages have an appeal for a reason, and by using types all over the place, I was not getting the benefits of a dynamic language like Python.

When context is mostly static, dynamic languages shine. Context could be, for example, the structure of a directory. If I want to read a file and I know that the file exists, throwing a bunch of type checks around the file-reading operation is just overkill and slows down development.


Hmm, rarely have I thought types were a burden rather than a help; maybe I'm weird.

Maybe I spend effort in making sure my types are useful and easy to work with, but in one previous TypeScript project I got to a state where _all_ of my database queries were automatically typed, and all of my requests and responses too, so both input and output were guaranteed to be correct by the compiler, and any bugs or errors that were left were business logic.
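A sketch of what that buys you (plain TypeScript, hypothetical types, no particular ORM assumed): once the row and response shapes are declared, the compiler checks every handler against them as you type.

  interface UserRow { id: number; email: string }  // shape of a DB row
  interface UserResponse { id: number; email: string; href: string }

  function toResponse(row: UserRow): UserResponse {
    return { id: row.id, email: row.email, href: `/users/${row.id}` };
    // Omitting `href`, or misspelling `email`, is a compile error,
    // caught as you type rather than by a unit test.
  }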

It was incredibly liberating - like pairing with someone who was junior but very pedantic. I ended up writing almost no unit tests and only having integration-level tests, 'cause the job of those tests was mostly covered by types.

And writing the code itself was such a pleasure - you get immediate feedback on whether your program is correct _as you type it_. The most bizarre consequence of all this to me was writing a program for almost 2 hours, hundreds of lines of code, and then executing it and having it do exactly what you wanted on the first compile. That was both scary and exciting!

One can get over-constrained with types for sure, where you're spending more time "fighting the types" than writing your code. But this is all just learning; once you understand how the type system works it all becomes easy to work with.

It was the same story with tests - once I started testing everything, it wasn't easy to adapt my code to be testable; it took effort, but then I learned how to make code pure, move state to the edges, manage dependencies, etc., and all of those are useful practices in their own right, regardless of whether you write the tests or not.

Same with types - schema design and invariants, state machines, edge cases in type conversion, and how to lock down and manage external dependencies. Sets, unions, etc. - it became the way I reason about code, with or without types, and my code is better for it.

I also assume that types would make AI-generated code much better _and more reliable_ because of the additional information and structure they provide, so I reckon they are here to stay.


Regarding programming with a mature static type system:

> It was incredibly liberating - like pairing with someone who was junior but very pedantic.

This is exactly what is happening. To achieve the same level of semantic confidence across a code-base in a dynamically typed language (such as JavaScript, Perl, Python, Ruby, etc.) would take the effort of a diligent junior programmer.

Which, in a way, is what a strongly-typed language compiler does IMHO.


Have you worked with Java or C#? Both are places where types sometimes become a burden.

No, you cannot just do something like this:

``` return { data, message: "OK" }; ```

you need to declare a class or struct that matches the definition. There are mappers and builders everywhere. Adding/deleting one column from a datatype can force you to make changes in 5 places due to mappers/builders.


The flip side of this "burden" is that it prevents you from returning an arbitrary data structure that is not expected by the caller and may cause errors further down the call stack if you return the wrong type. The whole point of static typing in the first place is to eliminate this kind of footgun.


Have you tried TypeScript? The code above would make the function declaration look like this:

function foo (): {data: MyCustomObject, message: string}

so foo().data and foo().message are valid, while foo().bar will throw a build error


As a matter of fact - no I haven’t. My experience with types is TypeScript, Elixir and a bit of Scala.

I always try to have "Clojure-style" types where I don't try to be too prescriptive - don't lock things down if you don't need to, just the minimum possible types to make sure the code I'm writing is correct, and nothing more - Rich Hickey's talks on Clojure's spec were an eye opener.

I have been told by some Java devs, though, that it is a matter of style - while unconventional, it is possible to write Java code with much less boilerplate if one uses newer language features and actually tries to hold the cruft at bay. Is that true?


idk, I haven't used newer Java or C# since 2014-ish, though I think I've read that C# supports some dynamic typing; never explored that. Never touched Elixir or Scala either, so I can't comment on those.

TS though - the type declarations are amazing, with their union, optional and intersection types!


> My experience with types is TypeScript, Elixir and a bit of Scala.

Dude, Elixir only recently introduced some sort of a type system...


Elixir has had dialyzer + type hints for years


While not part of the compiler, the dialyzer’s types are quite nice, even if sometimes it is a bit clunky - I’ve noticed that most of the time when I thought it was “wrong” it actually wasn’t and had picked up on some bug / misunderstanding in my code, though error messages could have been better


> Have you worked with Java or C#? Both are places where types sometimes become a burden.

> No, you cannot just do something like this:

> ``` return { data, message: "OK" }; ```

Yes, in Java you can:

  return new Object() {
    String message = "OK";
    Object data = itsValue;
  };
There are many reasons to dislike Java and it is nowhere near my programming language of choice. The specific semantic deficiency you chose happens to be an invalid one.


And how does the caller extract those fields?


> And how does the caller extract those fields?

The same way one would when returning an anonymous map in JavaScript - via reflection and the assumption of what was returned.

While JavaScript makes the use of reflection less burdensome, it has the same collaboration fragility as a Java version. Just with less ceremony.


You can do that in C# with dynamic types. But realistically it's almost never used because it's kinda dirty.

To each their own I guess. I've never felt burdened by types, but I feel like I'm programming in the dark when using an untyped language.


Yes you can. C# has had anonymous types since language version 3.0, released nearly 20 years ago; everyone who has used .Select() at the end of a LINQ query has used this.


I quite like Lisp 'cause I can do silly things like:

  ;; True iff LIST holds exactly three numbers where a + b = c.
  (defun sum-triplet? (list)
    (declare (type List list))
    (and (= (length list) 3)
         (destructuring-bind (a b c)
             list
           (and (numberp a)
                (numberp b)
                (numberp c)
                (= (+ a b) c)))))

  ;; A type usable in declarations: (typep '(1 2 3) 'sum-triplet) => T
  (deftype Sum-Triplet ()
    '(and List
          (satisfies sum-triplet?)))


The very next dot point is agreeing with you: "The ideal quantity to use these tools is tiny, much more miniscule than any of us is trained to think".

And I agree tbh. The TS community has turned me off static typing for life. It's just wall to wall bikeshedding from the least productive people I've ever had the displeasure of working with.


Honestly, that explains perfectly why all MS apps have gone from usable, if a bit slow or buggy, to just unusable buggy messes that crash constantly and make my machine feel like it's running on molasses.


It doesn't "explain" it at all (and indeed many of reject the idea that MS apps - in general - are worse. They've had a bad reputation for decades, and that isn't based on nothing).


The actual quality of their apps has significantly degraded over time. Many of their apps which use web-native technologies have become nearly unusable, and I strongly suspect it's exactly because they've gone down this insane road of bike shedding and constant re-inventing of the wheel instead of pursuing excellence.


Teams + the entire Office suite are two examples of Microsoft products going downhill over time.


That's true, but that has absolutely nothing to do with types in programming languages; it has more to do with MS's culture of always favouring backward compatibility and never culling features in any version.


> it has more to do with MS's culture of always favouring backward compatibility and never culling any features in every version.

The new Office suite is unbelievably buggy and slow not because of backward compatibility, but because of the numerous useless "features" Microsoft added, which made the software essentially bloatware. By the time Excel opens up on my Mac, I can open several Google Sheets and start working.


Has Teams ever been good? From the first time I used it it has always been one of the worst pieces of software I use.

Office seems fine to me - not great software, but I haven't noticed any particular decline (not a heavy user, but have been using it since Word v2 in the 1990s)

And in any case what does this have to do with Typescript and the (over?) use of types?


This article must have been written with the intention of trolling HN


My approach to programming in 2024 is a bit different: When I want to code a new module, I start by talking to an AI about the requirements and let the AI generate the tests and the code.

Sadly, the AI isn't capable of generating working code for all scenarios, so eventually, I take over the code and finish/fix it.

The workflow itself can be quite frustrating (who doesn't love fixing bugs in other people's code?), and the act of coding isn't as much fun as it used to be, but the AI also shows me new algorithms, which is a great way of learning new things.

Let's just say I am looking forward to 2025 ;-)


The author should really set up SSL on his website and make it secure to browse in the first place.


I do have SSL. It's just optional and it seems the submitter chose http.


Looks fine to me. TLS 1.3 with a cert from Let's Encrypt.


Insanity-grade takes end to end; not a single word of this should be taken seriously.


If you are writing tests but have no users then you are wasting your time and money.


> If you are writing tests but have no users then you are wasting your time and money.

If you have users but are not writing tests, then your users are your tests and you are wasting their time and money.


Is it recommended to learn shell if you are a beginner?


There's a saying that technology has inertia: if something has been used for 30 years and is actively used now, it will probably still be in use 30 years from now.

I learned vim by necessity after shying away from this weird old tech for years, when I was forced to work on a Solaris server where there was no other way to edit the code at all. It was pain and suffering for a few hours - we really wanted to fix something that day, and I was working on a machine that we were "not allowed to ssh into", having been driven to a different city in order to sit in front of it.

But after that day I've been using vim almost every day. It is not my daily driver (I've always felt more productive in TextMate, SublimeText and now VS Code), but it is still incredibly useful.

On any remote server I ssh into, there is no question about what I can or cannot do: I can easily edit everything I want to. And I use it for various quick-edit tasks in the shell.

Learning the shell wasn't so dramatic for me, but the same rule applies: I don't feel uncomfortable anywhere. That pod that is misbehaving in your cluster? Just ssh into it and poke around! Need to tie a few commands together because nothing does _exactly_ what your company needs? Just whip up a quick bash script! Zero dependencies, and it can be deployed anywhere: your Mac, the server the CI runs on, even Windows machines!

So the general rule is: if it has been used for 50 years and is still used now, it is probably worth learning.


I won't comment on which shell to learn, but you'll end up spending a lot of time in it, so learning your chosen shell well will pay dividends for the rest of your life.


Yes and no.

Shell scripting is incredibly powerful and omnipresent. So you want to know the basics about pipes, loops and the like.

But the language itself is broken by design (error handling is a mess; whitespace creates headaches daily; sub-shells can be a pain; ...). So creating reliable scripts can be a challenge, and you do not want to become an expert at writing large programs in the shell. Other languages, e.g. Python, are much better at this.

My favorite site in this context is https://shellhaters.org. It has a list of links to the POSIX standard so that you can easily look up functionality that is part of it (and should be present on all POSIX-compliant operating systems).

If you know everything on https://learnxinyminutes.com/docs/bash/ you most likely know more than you need.


What you should learn is when the shell is the right tool for the job.


It dawned on me, thank you.


I wish someone would share their programming workflow when using LLMs... I feel like I'm falling behind in this area.


The important task for you, now that LLMs write code, is to know the theory very well and to have a list of things to try out. The good thing about coding is that we have a very fast, tight feedback loop. You should be in a position to cross-question LLM responses, and that is possible only when you know your stuff.

https://x.com/jdnoc/status/1791145173524545874


I’m really curious whether my agree:disagree ratio will be higher in the article or in the comments.



