lukeramsden's comments | Hacker News

Can start with the number of unicorns in the USA vs Europe, especially when you take population into account https://www.failory.com/unicorns


That isn't a concrete example of a regulation that hinders innovation.


What do you think the cause is? Unwashed eggs?


Any number of reasons: language barriers, existing American firms anti-competing, smaller domestic markets, less centralisation, and, yes, in some cases, regulation. But when it comes down to it, it's better to have smaller firms that don't (or less frequently) damage society than larger firms that do, even just from the perspective of wealth distribution.


> it's better to have smaller firms that don't (or less frequently) damage society

I'm not sure about that - I really like my lifestyle which would be nearly impossible to attain in Europe, but is very attainable for Americans.

I don't see how you're materially better off because you're forced to use foreign companies (Google, Facebook, etc.) instead of having your own.


What are you talking about? I am unable to follow your reasoning, maybe you can walk us through?


I think he's saying that, yes, this regulation means that your own companies are more ethical, but European consumers end up using these less-regulated American companies anyway. This is true, but this problem has started to be addressed by the EU anyway, for example with the Digital Markets and Services Acts.


Why are you asking me? And what does 'Unicorns' have to do with innovation anyway?


How many of those unicorns are financial black holes never expecting to turn a profit?

And the inclusion of so many cryptocurrency "unicorns" in that list is also quite telling.


So Estonia is better than the US?


If you can’t afford to pay the current employees that raise, you certainly can’t afford new people - the market rate is the market rate.


The best way to do this is message passing. My current way of doing it is using Aeron[0] + SBE[1] to pass messages very efficiently between "services" - you can then configure it either to use local shared memory (/dev/shm) or to replicate the log buffer over the network to another machine.

[0]: https://aeroncookbook.com/aeron/overview/ [1]: https://aeroncookbook.com/simple-binary-encoding/overview/
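To make that concrete, here's a minimal Java sketch of the local case (the channel, stream id and raw-bytes payload are just placeholders - in practice you'd write SBE-generated encodings into the buffer rather than a plain string):

    import io.aeron.Aeron;
    import io.aeron.Publication;
    import io.aeron.Subscription;
    import io.aeron.driver.MediaDriver;
    import io.aeron.logbuffer.FragmentHandler;
    import org.agrona.concurrent.UnsafeBuffer;

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    public class AeronIpcSketch {
        public static void main(String[] args) {
            // Embedded media driver for the sketch; in production it usually runs as its own process.
            try (MediaDriver driver = MediaDriver.launchEmbedded();
                 Aeron aeron = Aeron.connect(new Aeron.Context()
                         .aeronDirectoryName(driver.aeronDirectoryName()))) {

                // "aeron:ipc" keeps everything in local shared memory.
                String channel = "aeron:ipc";
                int streamId = 10;

                try (Subscription sub = aeron.addSubscription(channel, streamId);
                     Publication pub = aeron.addPublication(channel, streamId)) {

                    UnsafeBuffer buffer = new UnsafeBuffer(ByteBuffer.allocateDirect(256));
                    byte[] msg = "hello".getBytes(StandardCharsets.UTF_8);
                    buffer.putBytes(0, msg);

                    // offer() is non-blocking; a negative result means back pressure or not yet connected.
                    while (pub.offer(buffer, 0, msg.length) < 0) {
                        Thread.yield();
                    }

                    FragmentHandler handler = (buf, offset, length, header) ->
                            System.out.println(buf.getStringWithoutLengthUtf8(offset, length));

                    // Poll until the one fragment has been delivered.
                    while (sub.poll(handler, 1) == 0) {
                        Thread.yield();
                    }
                }
            }
        }
    }

Swapping the channel for something like "aeron:udp?endpoint=host:port" is what moves the same code from shared memory to the network.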


Part of test-driven design is using the tests to drive out a sensible and easy to use interface for the system under test, and to make it testable from the get-go (not too much non-determinism, threading issues, whatever it is). It's well known that you should likely _delete these tests_ once you've written higher level ones that are more testing behaviour than implementation! But the best and quickest way to get to having high quality _behaviour_ tests is to start by using "implementation tests" to make sure you have an easily testable system, and then go from there.
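A toy illustration of the progression (JUnit; every name and number here is invented): the first test exists purely to force a usable shape onto the unit while I'm writing it, the second exercises the same rule through the public entry point - once the second exists, the first can usually go.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class BasketTests {

        // Toy system under test, just for illustration.
        static class BasketPricer {
            long price(long subtotalPence) {
                return subtotalPence >= 5000 ? subtotalPence * 90 / 100 : subtotalPence;
            }
        }

        static class Checkout {
            private long subtotalPence;
            void scan(String sku, long pricePence) { subtotalPence += pricePence; }
            long totalPence() { return new BasketPricer().price(subtotalPence); }
        }

        // Early "implementation test": written first to shape BasketPricer's interface.
        // The kind of test you'd likely delete once the behaviour test below exists.
        @Test
        void pricerAppliesTenPercentDiscountOverFiftyPounds() {
            assertEquals(5400, new BasketPricer().price(6000));
        }

        // Later "behaviour test": checks the same rule through the public entry
        // point the rest of the system uses, so the internals can change freely.
        @Test
        void checkoutDiscountsLargeBaskets() {
            Checkout checkout = new Checkout();
            checkout.scan("book", 6000);
            assertEquals(5400, checkout.totalPence());
        }
    }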


>It's well known that you should likely _delete these tests_ once you've written higher level ones that are more testing behaviour than implementation!

Building tests only to throw them away is the design equivalent of burning stacks of $10 notes to stay warm.

As a process it works. It's just 2x easier to write behavioral tests first and thrash out a good design later under their harness.

It mystifies me that doubling the SLOC of your code by adding low-level tests only to trash them later came to be seen as a best practice. It's so incredibly wasteful.


> As a process it works. It's just 2x easier to write behavioral tests first and thrash out a good design later under its harness.

I think this “2x easier” only applies to developers who deeply understand how to design software. A very poorly designed implementation can still pass the high level tests, while also being hard to reason about (typically poor data structures) and debug, having excessive requirements for test setup and tear down due to lots of assumed state, and be hard to change, and might have no modularity at all, meaning that the tests cover tens of thousands of lines (but only the happy path, really).

Code like this can still be valuable of course, since it satisfies the requirements and produces business value, however I’d say that it runs a high risk of being marked for a complete rewrite, likely by someone who also doesn’t really know how to design software. (Organizations that don’t know what well designed software looks like tend not to hire people who are good at it.)


"Test driven design" in the wrong hands will also lead to a poorly designed non modular implementation in less skilled hands.

I've seen plenty of horrible unit test driven developed code with a mess of unnecessary mocks.

So no, this isn't about skill.

"Test driven design" doesnt provide effective safety rails to prevent bad design from happening. It just causes more pain to those who use it as such. Experience is what is supposed to tell you how to react to that pain.

In the hands of junior developers test driven design is more like test driven self flagellation in that respect: an exercise in unnecessary shame and humiliation.

Moreover, since it prevents those tests with a clusterfuck of mocks from operating as a reliable safety harness (because they fail when implementation code changes, not in the presence of bugs), it actively inhibits iterative exploration towards good design.

These tests have the effect of locking in bad design because keeping tightly coupled low level tests green and refactoring is twice as much work as just refactoring without this type of test.


> I've seen plenty of horrible unit test driven developed code with a mess of unnecessary mocks.

Mocks are an anti-pattern. They are a tool that either by design or unfortunate happenstance allows and encourages poor separation of concerns, thereby eliminating the single largest benefit of TDD: clean designs.


You asserted:

> … TDD is a "design practice" but I find it to be completely wrongheaded.

> The principle that tests that couple to low level code give you feedback about tightly coupled code is true but it does that because low level/unit tests couple too tightly to your code - I.e. because they too are bad code!

But now you’re asserting:

> "Test driven design" in the wrong hands will also lead to a poorly designed non modular implementation in less skilled hands.

Which feels like it contradicts your earlier assertion that TDD produces low-level unit tests. In other words, for there to be a “unit test” there must be a boundary around the “unit”, and if the code created by following TDD doesn’t even have module-sized units, then is that really TDD anymore?

Edit: Or are you asserting that TDD doesn’t provide any direction at all about what kind of testing to do? If so, then what does it direct us to do?


>"Test driven design" in the wrong hands will also lead to a poorly designed non modular implementation in less skilled hands.

>Which feels like it contradicts your earlier assertion that TDD produces low-level unit tests.

No, it doesn't contradict that at all. Test driven design, whether done optimally or suboptimally, produces low level unit tests.

Whether the "feedback" from those tests is taken into account determines whether you get bad design or not.

Either way I do not consider it a good practice. The person I was replying to was suggesting that it was a practice more suited to people with a lack of experience. I don't think that is true.

>Or are you asserting that TDD doesn’t provide any direction at all about what kind of testing to do?

I'm saying that test driven design provides only weak direction about design, and it is not uncommon for it to still produce bad designs because that weak direction is not followed by people with less experience.

Thus I don't think it's a practice whose effectiveness is moderated by experience level. It's just a bad idea either way.


Thanks for clarifying.

I think this nails it:

> Whether the "feedback" from those tests is taken into account determines whether you get bad design or not.

Which to me was kind of the whole point of TDD in the first place; to let the ease and/or difficulty of testing become feedback that informs the design overall, leading to code that requires less set up to test, fewer dependencies to mock, etc.

I also agree that a lot of devs ignore that feedback, and that just telling someone to “do TDD” without first making sure they know they need to strive for little to no test setup, few or no mocks, etc., is pointless advice.

Overall I get the sense that a sizable number of programmers accept a mentality of “I’m told programming is hard, this feels hard so I must be doing it right”. It’s a mentality of helplessness, of lack of agency, as if there is nothing more they can do to make things easier. Thus they churn out overly complex, difficult code.


>Which to me was kind of the whole point of TDD in the first place; to let the ease and/or difficulty of testing become feedback that informs the design overall

Yes and that is precisely what I was arguing against throughout this thread.

For me, (integration) test driven development is about creating:

* A signal to let me know if my feature is working and easy access to debugging information if it is not.

* A body of high quality tests.

It is 0% about design, except insofar as the tests give me a safety harness for refactoring or experimenting with design changes.


Don't agree, though I think it's more subtle than "throw away the tests" - more "evolve them to a larger scope".

I find this particularly with web services, especially when the services are some form of stateless calculators. I'll usually start with tests that focus on the function at the native programming language level. Those help me get the function(s) working correctly. The code and tests co-evolve.

Once I get the logic working, I'll add on the HTTP handling. There's no domain logic in there, but there is still logic (e.g. mapping from json to native types, authentication, ...). Things can go wrong there too. At this point I'll migrate the original tests to use the web service. Doing so means I get more reassurance for each test run: not only that the domain logic works, but that the translation in & out works correctly too.

At that point there's little value in leaving the original tests in place. They're just covering a subset of the E2E tests so provide no extra assurance.

I'm therefore with TFA in leaning towards E2E testing because I get more bang for the buck. There are still places where I'll keep native language tests, for example if there's particularly gnarly logic that I want extra reassurance on, or E2E testing is too slow. But they tend to be the exception, not the rule.


> At that point there's no point leaving the original tests in place. They're just covering a subset of the E2E tests so provide no extra assurance.

They give you feedback when something fails, by better localising where it failed. I agree that E2E tests provide better assurance, but tests are not only there to provide assurance, they are also there to assist you in development.


Starting low level and evolving to a larger scope is still unnecessary work.

It's still cheaper starting off building a playwright/calls-a-rest-api test against your web app than building a low level unit test and "evolving" it into a playwright test.

I agree that low level unit tests are faster and more appropriate if you are surrounding complex logic with a simple and stable API (e.g. testing a parser), but it's better to work your way down to that level when it makes sense, not start there and work your way up.
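For what it's worth, the kind of test I'd start with is roughly this (Java Playwright purely as an illustration - the URL, selectors and assertion are all made up):

    import com.microsoft.playwright.Browser;
    import com.microsoft.playwright.Page;
    import com.microsoft.playwright.Playwright;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class SignInSmokeTest {
        @Test
        void userCanSignIn() {
            try (Playwright playwright = Playwright.create()) {
                Browser browser = playwright.chromium().launch();
                Page page = browser.newPage();
                page.navigate("http://localhost:8080/login"); // placeholder app URL
                page.fill("#email", "test@example.com");
                page.fill("#password", "not-a-real-password");
                page.click("text=Sign in");
                // Asserts on what the user sees, not on any internal module.
                assertTrue(page.textContent("body").contains("Welcome"));
            }
        }
    }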


That’s not my experience. In the early stages, it’s often not clear what the interface or logic should be - even at the external behaviour level. Hence the reason tests and code evolve together. Doing that at native code level means I can focus on one thing: the domain logic. I use FastAPI plus pytest for most of these projects. The net cost of migrating a domain-only test to use the web API is small. Doing that once the underlying api has stabilised is less effort than starting with a web test.


I don't think I've ever worked on any project where they hadn't yet decided whether they wanted a command line app or a website or an Android app before I started. That part is usually fixed in stone.

Sometimes lower level requirements are decided before higher level requirements.

I find that this often causes pretty bad requirements churn - when you actually get the customer to think about the UI or get them to look at one then inevitably the domain model gets adjusted in response. This is the essence of why BDD/example driven specification works.


What exactly is it wasting? Is your screen going to run out of ink? Even in the physical construction world, people often build as much or more scaffolding as the thing they're actually building, and that takes time and effort to put up and take down, but it's worthwhile.

Sure, maybe you can do everything you would do via TDD in your head instead. But it's likely to be slower and more error-prone. You've got a computer there, you might as well use it; "thinking aloud" by writing out your possible API designs and playing with them in code tends to be quicker and more effective.


>What exactly is it wasting?

Time. Writing and maintaining low level unit tests takes time. That time is an investment. That investment does not pay off.

Doing test driven development with high level integration tests also takes time. That investment pays dividends though. Those tests provide safety.

>Sure, maybe you can do everything you would do via TDD in your head instead. But it's likely to be slower and more error-prone.

It's actually much quicker and safer if you can change designs under the hood and you don't have to change any of the tests, because they validate all the behavior.

Quicker and safer = you can do more iterations on the design in the available time = a better design in the end.

The refactoring step of red, green, refactor is where the design magic happens. If the refactoring turns tests red again that inhibits refactoring.


> It's well known that you should likely _delete these tests_ once you've written higher level ones that are more testing behaviour than implementation!

Is it? I don't think I've ever seen that mentioned.


Put simply, doing TDD properly leads to sensible separation of concerns.


I did some research into this this year in the context of maybe trying to start a business to solve this - and this was the conclusion I came to. There’s lots of threads here on HN about it too. It’s a structural, market-wide issue where the primary service Ticketmaster provide is reputation laundering, and in return, large agents and promoters agree to continue to use Ticketmaster despite their reputation.

I don’t know what the solution is.


Like, don't go to overpriced concerts? How much more obvious than this should the solution be? None in that chain create value, so no need for you to feed their greed.


> There are many instances I've encountered where two pieces of code coincided to look similar at a certain point in time. As the codebase evolved, so did the two pieces of code, their usage and their dependencies, until the similarity was almost gone

https://connascence.io/


Learning to program does not equate to learning to architect a non-trivial system and evolve it over many years, operate it, document it, train others on it, scale it, etc. - “unmotivated” is a bit reductive. In the same way I might enjoy some DIY here and there but don’t want to and shouldn’t be trusted to build new houses, the same goes for non-trivial systems - sometimes you really do just need professionals.

That’s not to say you can’t give areas with guardrails to non-software-engineering-professionals if you can teach them git.


> “unmotivated” is a bit reductive

Well, sure, it's under-specified, I said "unmotivated (for whatever reasons)", eh?

I agree with you. When it comes to programming computers it's one of the few areas where I am unabashedly elitist ( https://sforman.srht.site/AnyoneCanCode.html "For most people learning anything more complicated than Excel is counter-productive." )

My point is that it's not that hard to learn to program, not that it's easy to program well (it's not. I've been at it for over a quarter of a century and I'm still barely capable.)

My additional point is that the barrier to entry (other than apathy or disinterest) is the complexity (just to make a website you need to know three or four computer languages!?)

My third point is that that complexity is about to vanish in a haze of linear algebra and massive data (the computers can talk now.)


The difference is that nobody has to actually accept updates that the core team makes - and this has happened a few times, resulting in forks, although those cases were over things the core team refused to do. How can you “fork” a security? What is the common enterprise when forking a PoW chain?


> how can you “fork” a security?

Exactly the same way you fork a non-security. Security is a legal designation, not a description of how a particular asset is implemented (in a forkable structure like a blockchain).

> What is the common enterprise when forking a PoW chain?

The developers/promoters of the fork, potentially.


> Even if this leads to humourously Java-esque code, it's worth it

Some people seem to get mad about the verbose naming common in Java - but it’s one of the biggest blessings I’ve ever experienced. If you name things after what they do, and that name is stupid, then it’s the quickest indicator of bad design I’ve ever seen. Good design is where every name is patently obvious and encompasses the entire purpose of the class / record / method / whatever.


I think it's not the verbose names that are the problem in stereotypical Java-esque names, but rather the large amount of "scaffolding" classes that implement design patterns - a wealth of middleware that becomes apparent thanks to the verbose naming. That is, it's not the ProcessedWidget::QueryStructuralIntegrityPercent() that's the issue - it's the associated WidgetFactory, ProcessedWidgetBuilder, ProcessedWidgetProcessingManagerDelegateProxy, etc. that bloat the code and tax cognitive memory, and which exist only because the language isn't (or used to not be) expressive enough to express those patterns without giving them a name.


There's nothing like ProcessedWidgetProcessingManagerDelegateProxy in Java or the standard library. I feel that Java gets whacked with a cudgel that should be aimed at middleware authors.


There's nothing in JavaScript forcing you to use 500 dependencies across 100 000 folders either. Opinions on a programming language are often, for better or worse, opinions on the language plus its community/ecosystem. And correct me if I'm wrong, but isn't the company responsible for maintaining Java (Sun, now Oracle) also one of the worst offenders in terms of naming things in its middleware?


> If you name things after what they do, and that name is stupid, then it’s the quickest indicator of bad design I’ve ever seen.

I agree it’s a smell, but I’d caution that it’s also very situational.

Sometimes the domain has a recognizable category of like-things which don’t have a name for their likeness within the same domain, and choosing even a stupid-sounding name is a good start towards choosing a better name.

Other times, a targeted portion of an existing codebase might have a common theme which doesn’t necessarily line up with the domain, but needs to be named something to untangle what outlived its WET shelf life. You can recognize it’s a stable abstraction, you can recognize it's entirely divorced from any domain concern except by happenstance. You just have to give it a name, or let it be an increasingly unnecessary burden.

It’s easier to take such a hard line on the design quality implications of naming when you have a vocabulary to reference and/or compose, or when you have extant design attitudes relatively aligned with that principle. It’s especially difficult to apply the principle in systems with extant design problems of this nature, because whatever established or emergent abstractions do exist might not align with any principle you’d apply, and you can’t move them in that direction without some intermediate step no matter how flawed. You have to name what is before you name what should be.


> If you name things after what they do, and that name is stupid, then it’s the quickest indicator of bad design I’ve ever seen.

As a reader of the code I think 'FooAndMaybeBar_SpecialBazSupportV2 ()' is a blessing.

Aiming for cleanliness when the problem is not clean is a worse sin than messy code that could be clean.


Then the price has to go up. This sounds like… regular market dynamics? It’s still not clear why that’s a crisis?

