
Don't read too much into that. TDD, for example, is not leveling up; it's an opinionated approach to development.



Automated testing is not a choice in many industries.

If you're not familiar with TDD, you haven't yet achieved that level of mastery.

There's a productivity boost to being able to change quickly without breaking things.

Is all unit/functional/integration testing and continuous integration TDD? Is it still TDD if you write the tests after you write the function (and before you commit/merge)?

I think this competency matrix is a helpful resource. And I think that learning TDD is an important thing for a good programmer.


There is absolutely no need to follow TDD to be good at testing.


This is all unfounded conjecture: it seems easier to remember which parameter combinations exist and need to be tested while writing the function; "let's all write tests later" then becomes a black-box exercise, which is indeed a helpful perspective for review, but isn't the most effective use of resources.


IMHO, being convinced that there's only one true and correct methodology (TDD, Scrum, etc.) or paradigm (functional, object-oriented, reactive programming, etc.) is a sign of being a bad programmer.


A good programmer finds common attributes and behaviors and organizes them into namespaced structs/arrays/objects with functions/methods and tests. Abstractly, which terms should we use to describe hierarchical clusters of things with information and behaviors if not those from a known software development or project management methodology?

And a good programmer asks why people might have spent so much time formalizing project development methodologies. "What sorts of product (team) failures are we dealing with here?" is an expensive question to answer as a team.

By applying the tenets of named agile software development methodologies, teams and managers can discuss past and current experiences, successes, and failures in terms of comparable approaches that were, or are, appropriate for different contexts.

To argue the other side: cherry-picking from different methodologies is just creating a new methodology, which then requires time to justify what we already have established terms for.

"We just pop tasks off the queue however" is really convenient for devs but can be kept cohesive by defining sensible queues: [kanban] board columns can indicate task/issue/card states and primacy, [sprint] milestone planning meetings can yield complexity 'points' estimates for completable tasks and their subtasks. With team velocity (points/time), a manager can try to appropriately schedule optimal paths of tasks (that meet the SMART criteria (specific, measurable, achievable, relevant, and Time-bound)); instead of fretting with the team over adjusting dates on a Gantt chart (task dependency graph) deadline, the team can

What about your testing approach makes it 'NOT TDD'?

How long should the pre-release static analysis and dynamic analyses take in my fancy DevOps CI TDD with optional CD? Can we release or deploy right now? Why or why not?

'We can't release today because we spent too much time arguing about quotes like "A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines." ("Self Reliance" 1841. Emerson) and we didn't spec out the roof trusses ahead of time because we're continually developing a new meeting format, so we didn't get to that, or testing the new thing, yet.'

A good programmer can answer the three questions in a regular meeting at any time, really:

> 1. What have you completed since the last meeting?

> 2. What do you plan to complete by the next meeting?

> 3. What is getting in your way?

And:

Can we justify refactoring right now for greater efficiency or additional functionality?


The simple solution there is to not use specific parameters (outside obvious edge cases, i.e. supplying -1 and 2^63 to your memory allocator). Writing a simple, reproducible fuzzer is easy for most contained functions.
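For what it's worth, a seeded fuzz test can be only a few lines. A rough Python sketch, against a hypothetical checked_size() validator (names, bounds, and seed are made up for illustration):

    import random

    # Hypothetical validator under test: checks a requested size before it
    # would be passed to a real allocator.
    def checked_size(size):
        if not 0 < size <= 2**20:
            raise ValueError("invalid allocation size")
        return size

    def test_checked_size_fuzz(seed=1234, iterations=10000):
        # A fixed seed makes every failure reproducible: rerunning with the
        # same seed regenerates the same inputs in the same order.
        rng = random.Random(seed)
        for _ in range(iterations):
            size = rng.randint(-1, 2**63)
            try:
                checked_size(size)
            except ValueError:
                pass  # rejecting out-of-range sizes is the expected behaviour
            except Exception as exc:
                raise AssertionError(f"seed={seed} size={size}: {exc!r}")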

I find black-box testing itself also fairly useful. The part where you forget which parameter combinations may occur can be useful, since you now a) rely on the documentation you wrote and b) can write your test independently of how you implemented the function, just as if you had written it beforehand. (Just don't fall into the 'write the test to pass the function' trap.)


IMHO, it's so much easier to write good, comprehensive tests while writing the function (FUT: function under test) because that information is already in working memory.

It's also easier to adversarially write tests with a fresh perspective.

I shouldn't need to fuzz every parameter for every commit. Certainly for releases.

"Building an AppSec Pipeline: Keeping your program, and your life, sane" https://www.owasp.org/index.php/OWASP_AppSec_Pipeline


I mean, in general I don't think you should write fuzzers for absolutely everything (most contained functions => doesn't touch a lot of other stuff, and has few parameters with a known parameter space).

The general solution is to use whatever testing methodology you are comfortable with that is very effective, very efficient, and covers a lot of the problem space. Of course no testing method does all of that, so you'll have to constantly balance whatever works best (which is why I think pure TDD is overrated).


Is there irrefutable, objective proof that TDD is the one true way?

I've looked into TDD but it simply does not fit my way of thinking and how I approach problems. I prefer to test systems when I have finished them as I cannot formulate a test before I know what I'm testing.

Especially for organic, dogfooded development, TDD is a bad methodology, as you discover requirements as you go.

TDD is, however, great if you have requirements before you start writing any code (aka Waterfall, but with Testing and Coding swapped).


>Is there irrefutable, objective proof that TDD is the one true way?

No, there can never be irrefutable, objective proof for something that is a best practice. What you can do is check which teams have the most bugs found by customers, or how long they need to deliver code, and then draw some statistical conclusions from that.

In my (subjective) experience TDD a) gives you really good regression tests and b) makes you create smaller functions that are more easily testable and c) makes people think harder about their code.

b) and c) aren't really about the tests themselves, but because TDD drives this behavior (again, subjective experience) you get a better codebase, which is valuable in itself.

If you can get to that great codebase without TDD and write good regression tests catching all the edge cases post hoc, then you don't need TDD.

>Especially for organic, dogfooded development, TDD is a bad methodology, as you discover requirements as you go.

Especially in this case I find TDD very helpful, because it provides a kind of executable documentation of the requirements you just discovered you want.
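For example (all names here are hypothetical stand-ins, not anyone's real API), the requirement "deleting an account must also revoke its API tokens", discovered mid-dogfooding, becomes an executable statement:

    # Minimal in-memory stand-ins so the example runs; a real app would
    # have its own account and token services.
    _accounts, _tokens = set(), {}

    def create_account(name):
        _accounts.add(name)
        return name

    def issue_token(account):
        token = "token-for-" + account
        _tokens[token] = account
        return token

    def delete_account(account):
        _accounts.discard(account)
        # the freshly discovered requirement: revoke this account's tokens too
        for t, owner in list(_tokens.items()):
            if owner == account:
                del _tokens[t]

    def is_token_valid(token):
        return token in _tokens

    # The requirement, written down as a test before (or while) implementing it
    def test_deleting_account_revokes_tokens():
        account = create_account("alice")
        token = issue_token(account)
        delete_account(account)
        assert not is_token_valid(token)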


>No, there can never be irrefutable, objective proof for something that is a best practice.

That's rather begging the question, isn't it? There cannot be proof for the one true way because it's the one true way (or, as you call it, a best practice).

>In my (subjective) experience TDD a) gives you really good regression tests and b) makes you create smaller functions that are more easily testable and c) makes people think harder about their code.

In my experience, only when those people tended to do that beforehand. And b) and c) don't hold true in my experience either (people are happy to write big functions so their tests pass).

c) is IMO not true either, since it feels more like writing code to make tests pass rather than writing code that fits requirements (unless you're good at writing requirements into tests, but then why write code to test code if you could write the code directly?).

>Especially in this case I find TDD very helpful, because it provides a kind of executable documentation of the requirements you just discovered you want.

That's not how that works. I'm sitting in the middle of a function, discovering requirements for that function while I'm writing it and running the application (e.g., the function does X but should also send an event to Y to clear the cache for user Z who triggered action X).

When you don't have any fixed requirements, applications tend to evolve organically as they run, which doesn't mix well with TDD's specify-everything-up-front approach.


> Is all unit/functional/integration testing and continuous integration TDD?

No. The matrix differentiates between them.

> If you're not familiar with TDD, you haven't yet achieved that level of mastery.

That's not true - I've worked on teams with far lower defect rates than the typical TDD team.

TDD can help keep a developer focused - and this can help overall productivity rates - but it doesn't directly help lower defect rates.


> TDD can help keep a developer focused - and this can help overall productivity rates - but it doesn't directly help lower defect rates.

We would need data with statistical power, though randomization and control are infeasible: no two teams are the same, no two projects are the same, and no two evaluations of different teams' defect rates are an apples-to-apples comparison.

Maybe it's the coverage expectation: do not add code that is not run by at least one test.
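In a Python project that uses pytest-cov, for instance, that expectation can be enforced mechanically (package name and threshold here are illustrative):

    pytest --cov=mypackage --cov-fail-under=100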


Whether you like it or not, it would be remiss to ignore it or not understand it.



