The very first test can be a simple "Did my HTTP request receive a response?". Then you can build on that: "Does this HTTP response have the value I need?", and so on.
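A minimal sketch of that first pair of tests, assuming the API returns JSON with a `status` field (the local stub server here is a stand-in for whatever real endpoint you'd actually hit):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubHandler(BaseHTTPRequestHandler):
    """Stand-in for the real API; assumed response shape for illustration."""
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # silence per-request logging

def test_request_gets_response():
    # serve on an ephemeral port so the test is self-contained
    server = HTTPServer(("127.0.0.1", 0), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        resp = urlopen(f"http://127.0.0.1:{server.server_port}/")
        assert resp.status == 200              # did my request receive a response?
        data = json.loads(resp.read())
        assert data["status"] == "ok"          # does it have the value I need?
    finally:
        server.shutdown()

test_request_gets_response()
```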
The way I have always gone about TDD is that I test the code I'm writing by running the test, not by running the application from its main entry point. The things you would log and look for when running the application, you instead validate with an `assert()`. Then, once you have finished developing, you do a single verification pass from main, and you have both a test and a function written.
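That workflow, sketched with a hypothetical `slugify` function (the function and its behaviour are made up for illustration): you run the test repeatedly while developing, and `main` only gets run once at the end for the final verification pass.

```python
def slugify(title: str) -> str:
    # hypothetical function under development
    return "-".join(title.lower().split())

def test_slugify():
    # what you would otherwise log and eyeball, you assert instead
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Already   spaced ") == "already-spaced"

def main():
    # the single verification pass from the real entry point
    print(slugify("Hello World"))

test_slugify()

if __name__ == "__main__":
    main()
```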
Do you see how playing this game of writing iterative tests would actually provide slower learning than prototyping a call to an API and receiving a result that would inform a series of tests that would be much more informed?
It's just as easy to be inefficient writing useless/bad tests as it is writing bad code. I think every developer should do TDD at some point in their career, because it makes you think about the testability of the code. That said, once you understand how to write testable code, I think it makes sense to be flexible about whether you write the test first or take a crack at an implementation.
The argument that you should first have a good understanding of the requirements, I think, leads to a lot of analysis paralysis, and isn't a practical method for building novel things in a timely manner.
I think the key distinction here is "run my new function through the entry point of the application" vs "run my new function from a test". You skip whatever startup time the rest of the application needs to reach that function, and you get a tiny feedback loop for testing it.
You have your test which gets the response, and you can still log the output to see what it looks like. It's a little slower at first since you have to write the test, but I'm not sure I see a big distinction vs "have a small script that sends the request".
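The "log inside the test" point, as a tiny sketch (`fetch_user` is a hypothetical function under development, not anything from the thread):

```python
def fetch_user(user_id: int) -> dict:
    # assumed stand-in for a real lookup during development
    return {"id": user_id, "name": "alice"}

def test_fetch_user():
    user = fetch_user(42)
    print(user)              # you can still eyeball the raw output...
    assert user["id"] == 42  # ...while the assert pins down what matters

test_fetch_user()
```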
> Do you see how playing this game of writing iterative tests would actually provide slower learning than prototyping a call to an API and receiving a result that would inform a series of tests that would be much more informed?
And do you see that, in many cases, just coding ahead without considering how to test the code later will leave you with code that is such a pain to test that you start avoiding the tests because "they take too much time"?
If you start with the test the question of "how can this be made testable" is unavoidable.
I am not dogmatic here. If you are operating at the level of "I am happy if anything works at all", then starting by writing a test is probably a bad idea: better to try things out first, scrap the whole thing once you understand what you want to do and how, and then start writing the tests.
For a lot of ad-hoc code, test-driven development doesn't make much sense, but if the stuff you write could ruin someone's day (or life), maybe it is the way to go.