Creating abstractions without considering performance is probably the correct approach. It’s much better to at least start with a program with enough abstraction to allow reasoning about how things should work. Save performance concerns for when optimization is seen to be necessary.
This is how you wind up with fundamentally slow systems that can’t be easily sped up. You need to take some account of performance in systems design. Just don’t micro-optimize too soon.
This is how we get software that runs ten times slower and consumes ten times the resources. And that's on the occasions when the software works at all; more often it doesn't.
Big-O notation lets you conveniently ignore constant factors like 'ten times'.
(I'm only half joking. A system that's always ten times slower than the optimal solution can still be preferable to one that's quadratically slower than the optimal solution. Or it can be worse; it depends on how big your instances are.)
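To make the "it depends on how big your instances are" point concrete, here's a toy sketch (the step-count functions are made up for illustration) comparing a constant-factor slowdown against a quadratic one:

```python
# Hypothetical step counts for input size n: a solution that is always
# 10x slower than the optimal n steps, vs. one that is quadratic.
def constant_factor_steps(n):
    return 10 * n  # ten times the optimal, but still linear

def quadratic_steps(n):
    return n * n   # asymptotically worse, yet cheaper for small n

for n in (5, 10, 100):
    print(n, constant_factor_steps(n), quadratic_steps(n))
```

Below n = 10 the quadratic version actually does less work; above it, the constant-factor version wins by an ever-widening margin. Big-O hides the first regime entirely.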
Hard disagree. Abstractions are nice when they match the mental model of a problem domain, but the odds you have the right set of abstract primitives in mind from the outset is practically 0.
This is why TDD is not good when starting from scratch.
Your abstractions might change in ways that break your tests badly, and you end up rewriting not only a great part of your code but also of your test suite.
The stupid thing in TDD is that when you refactor while keeping tests working, many of them turn into duds: tests that are not testing any corner case.
Suppose you use TDD to write a function that adds positive integers together. You get it working for add(0, 0), and nothing else. Then add(0, 1), and add(1, 0), and so on. So you have all these cases. The function is just switching on combinations of inputs, branching to a case, and returning a literal constant for that case.
After writing a few hundred of these cases you say, to hell with this nonsense, and refactor the function to actually add the f.f.fine numbers together.
Now you have two, maybe three problems:
1. Almost all the tests still pass, but most of them are uselessly uninformative.
2. Almost all, because you made a typo in the add(13, 13) test case such that it required the answer 28 to pass, and that's what you hardcoded; the refactored code correctly puts out 26, so the test itself has to be fixed.
3. The function now handles untested combinations just fine, but you can no longer have it fail for a hitherto untested input and then make the test pass. You can no longer "do the TDD thing" on it.
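The progression above can be sketched directly (function names are made up; the 28 is the deliberate typo from problem 2):

```python
# The "test-driven" version only knows the exact inputs it was tested with:
def add_case_by_case(a, b):
    cases = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (13, 13): 28}  # 28 matches the typo'd test
    return cases[(a, b)]  # KeyError for any untested pair

# The refactored general version handles every input, so all the old
# case-by-case tests now exercise the same single code path:
def add(a, b):
    return a + b

assert add(0, 0) == 0   # still passes, but tells you nothing new
assert add(0, 1) == 1   # ditto
assert add(13, 13) == 26  # the test that expected 28 had to be fixed
```

After the refactor, every assertion except the typo'd one passes without distinguishing any behavior: they're the duds.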
TDD is just a crutch that has to fall away when a developer with two brain cells to rub together implements a general algorithm that makes all possible inputs work.
At the same time, that crutch is pretty good for systems that fundamentally are just collections of oddball cases that won't generalize. As a rule of thumb, if there is no obvious way to refactor the code, then TDD is likely still providing value, in the sense that the tests are protecting important properties of what has been implemented against breakage.