
This is all unfounded conjecture, but it seems easier to remember which parameter combinations can exist and need to be tested while writing the function; "let's all write tests later" becomes a black-box exercise, which is indeed a helpful perspective for review, but isn't the most effective use of resources.



IMHO being convinced that there's only one true and correct methodology (TDD, Scrum, etc.) or paradigm (functional, object-oriented, reactive programming, etc.) is a sign of being a bad programmer.


A good programmer finds common attributes and behaviors and organizes them into namespaced structs/arrays/objects with functions/methods and tests. Abstractly, which terms should we use to describe hierarchical clusters of things with information and behaviors if not those from a known software development or project management methodology?
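To make that concrete, here's a toy sketch (all names invented) of shared attributes and behaviors pulled into one named structure, with a test sitting next to it:

    from dataclasses import dataclass

    @dataclass
    class Invoice:
        subtotal: float
        tax_rate: float

        def total(self) -> float:
            return self.subtotal * (1 + self.tax_rate)

    def test_invoice_total():
        # 100.00 at 10% tax; tolerance because of float arithmetic
        assert abs(Invoice(subtotal=100.0, tax_rate=0.1).total() - 110.0) < 1e-9

    test_invoice_total()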

And a good programmer asks why people might have spent so much time formalizing project development methodologies. "What sorts of product (team) failures are we dealing with here?" is an expensive question to answer as a team.

By applying the tenets of named agile software development methodologies, teams and managers can feel like they're discussing past and current experiences, successes, and failures with comparable implementations of approaches that were, or still are, appropriate for different contexts.

To argue the other side: cherry-picking from different methodologies creates a new methodology, which takes time to justify, when we already have terms for basically the same thing on the wall over here.

"We just pop tasks off the queue however" is really convenient for devs but can be kept cohesive by defining sensible queues: [kanban] board columns can indicate task/issue/card states and primacy, [sprint] milestone planning meetings can yield complexity 'points' estimates for completable tasks and their subtasks. With team velocity (points/time), a manager can try to appropriately schedule optimal paths of tasks (that meet the SMART criteria (specific, measurable, achievable, relevant, and Time-bound)); instead of fretting with the team over adjusting dates on a Gantt chart (task dependency graph) deadline, the team can

What about your testing approach makes it 'NOT TDD'?

How long should the pre-release static analysis and dynamic analyses take in my fancy DevOps CI TDD with optional CD? Can we release or deploy right now? Why or why not?

'We can't release today because we spent too much time arguing about quotes like "A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines" ("Self-Reliance," Emerson, 1841), and we didn't spec out the roof trusses ahead of time because we're continually developing a new meeting format, so we didn't get to that, or to testing the new thing, yet.'

A good programmer can answer the three questions in a regular meeting at any time, really:

> 1. What have you completed since the last meeting?

> 2. What do you plan to complete by the next meeting?

> 3. What is getting in your way?

And:

Can we justify refactoring right now for greater efficiency or additional functionality?


The simple solution there is to not hand-pick specific parameters (outside obvious edge cases, i.e. supplying -1 and 2^63 to your memory allocator). Writing a simple reproducible fuzzer is easy for most contained functions.
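A minimal sketch of what I mean by a reproducible fuzzer, in Python; `alloc` here is a hypothetical stand-in for the function under test:

    import random

    def alloc(size: int) -> bytearray:
        # Stand-in for the function under test; rejects nonsensical sizes.
        if size < 0 or size > 2**20:
            raise ValueError("invalid allocation size")
        return bytearray(size)

    def fuzz_alloc(seed: int = 0, iterations: int = 1000) -> None:
        rng = random.Random(seed)  # fixed seed => failures are reproducible
        edge_cases = [-1, 0, 1, 2**20, 2**63]
        for i in range(iterations):
            size = edge_cases[i] if i < len(edge_cases) else rng.randint(-2**63, 2**63)
            try:
                alloc(size)
            except ValueError:
                pass  # documented rejection path; fine
            except Exception as exc:
                # Anything else is a bug; the seed lets you replay it exactly.
                raise AssertionError(f"seed={seed}, size={size}: {exc!r}")

    fuzz_alloc()

Logging the seed on failure is the whole trick: rerunning with the same seed replays the exact input sequence.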

I find black-box testing itself also fairly useful. The part where you forget which parameter combinations may occur can be useful, since you now A) rely on the documentation you wrote and B) can write your test independently of how you implemented it, just as if you had written it beforehand. (Just don't fall into the 'write the test to pass the function' trap.)
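For example, a test written only from the documented contract, never looking at the body (`clamp` is a hypothetical documented function):

    # Documented contract: "clamp(x, lo, hi) -> x limited to the closed interval [lo, hi]".
    def clamp(x, lo, hi):
        return max(lo, min(x, hi))

    def test_clamp_contract():
        # Every assertion below is derived from the documentation alone.
        assert clamp(5, 0, 10) == 5     # inside the interval: unchanged
        assert clamp(-3, 0, 10) == 0    # below: pinned to lo
        assert clamp(42, 0, 10) == 10   # above: pinned to hi
        assert clamp(0, 0, 10) == 0     # boundaries are inclusive

    test_clamp_contract()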


IMHO, it's so much easier to write good, comprehensive tests while writing the function (FUT: function under test) because that information is already in working memory.
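For instance, a sketch (pytest-style; `parse_port` is a hypothetical FUT) of capturing the parameter combinations while they're still fresh:

    import pytest

    def parse_port(s: str) -> int:
        """Parse a TCP port; raises ValueError outside 1..65535."""
        port = int(s)
        if not 1 <= port <= 65535:
            raise ValueError(s)
        return port

    # The combinations enumerated here are exactly the ones in working
    # memory at authoring time: boundaries, out-of-range, non-numeric.
    @pytest.mark.parametrize("raw,expected", [("1", 1), ("80", 80), ("65535", 65535)])
    def test_parse_port_valid(raw, expected):
        assert parse_port(raw) == expected

    @pytest.mark.parametrize("raw", ["0", "65536", "-1", "http"])
    def test_parse_port_invalid(raw):
        with pytest.raises(ValueError):
            parse_port(raw)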

It's also easier to adversarially write tests with a fresh perspective.

I shouldn't need to fuzz every parameter for every commit. For releases, certainly.

"Building an AppSec Pipeline: Keeping your program, and your life, sane" https://www.owasp.org/index.php/OWASP_AppSec_Pipeline


I mean, in general I don't think you should write fuzzers for absolutely everything (most contained functions => they don't touch a lot of other stuff and have few parameters with a known parameter space).

The general solution is to use whatever testing methodology you're comfortable with that is very effective, very efficient, and covers a lot of the problem space. Of course, no testing method does all of that, so you'll have to constantly balance whatever works best (which is why I think pure TDD is overrated).



