
TBH, if we could boil software down to an immediately gratifying metric like lap time or race position, it would be a lot easier to figure out what 'simple' means.


First thought: 'Yeah, that would be nice.'

Second thought: 'Wait, we already have metrics on execution time, time-to-load/display, time to download, file size, etc., and can even count the clock cycles required...'

And it's kind of like auto road racing, where times only matter in comparison to other times of that particular car class, at that specific track, in the configuration for that race, under that day's weather conditions, etc. Software is similarly comparable only to software of the same type/class, and to how well earlier versions of that software performed.

It's almost like we can gather more performance data about software than we can about sports cars...?
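
E.g., a minimal timing sketch in Python, standard library only; 'do_work' here is a hypothetical stand-in for whatever you're measuring:

    import time

    def do_work():
        # hypothetical workload standing in for real code under test
        time.sleep(0.1)

    start = time.perf_counter()
    do_work()
    elapsed = time.perf_counter() - start
    print(f"elapsed: {elapsed:.3f}s")  # prints roughly 0.100s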


Which is better: a car that completes a two-lap race with laps of 1:00, 1:00, or one with laps of 1:30, 0:15?

Which is better: software that, used twice, has a time to first paint of 3 seconds then 3 seconds, or of 4.5 seconds then 1 second? The second obviously benefits from caching.

There, it's no longer obvious because there are competing goals.
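
Spelled out, just the arithmetic from the example above (a quick Python sketch; nothing here beyond the numbers already given):

    # lap times, in seconds: consistent car vs. fast-second-lap car
    car_a = [60, 60]    # 1:00 + 1:00 = 2:00 total
    car_b = [90, 15]    # 1:30 + 0:15 = 1:45 total
    print(sum(car_a), sum(car_b))  # 120 105

    # time to first paint, in seconds, across two uses
    app_a = [3.0, 3.0]  # 6.0s total, but consistent
    app_b = [4.5, 1.0]  # 5.5s total, thanks to caching
    print(sum(app_a), sum(app_b))  # 6.0 5.5

The totals favor the second in both cases; whether the worse single figure (the 1:30 lap, the 4.5-second first paint) costs you more than the total saves is exactly the competing-goals question.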


Said like you're proud to have found some kind of "gotcha". It's a cute example, but I don't see the relevance.

The goals the article, GP, and I mentioned were simplicity and reducing the amount of code, and the comment was about how we have the ability to measure our code's performance.

Yes, immediate fetch every time vs. caching is a question of competing goals. The default approach would be to avoid adding the code and complexity of caching. BUT, and that is a big "BUT", if the context of the software's use requires it (e.g., cached uses are far more numerous than fresh ones, and using cached values won't screw up the results), then add it, and be efficient about it.
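
In Python, for instance, the "be efficient about it" part can be as little as one decorator. A minimal sketch, assuming a pure fetch; 'fetch_fresh' is hypothetical, and note that lru_cache never invalidates, so it only fits the "won't screw up the results" case:

    import time
    from functools import lru_cache

    def fetch_fresh(url: str) -> str:
        # hypothetical slow fetch standing in for a real network call
        time.sleep(0.5)
        return f"contents of {url}"

    @lru_cache(maxsize=128)
    def fetch(url: str) -> str:
        # first call per URL pays the full cost; repeat calls hit the cache
        return fetch_fresh(url)

    fetch("https://example.com")  # ~0.5s, fresh
    fetch("https://example.com")  # near-instant, cached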

What's the big deal, what am I missing?


Other than basic politeness (that's a rude and dismissive opening to your post), my point was that it's easy to know what "optimal performance" means when racing, or when generating a profit. Software requires optimizing some aspects at the cost of others, and which aspects is a business decision.

Just as the best race car finishes the race fastest and the best company makes the most money over the time period you care about, the best software could be optimized to load fastest for new users (reducing bounce rates), or to work best for returning loyal users (reducing churn), or for several other metrics. It's a legitimate question of what you want to optimize for.


Yes, it's clear that software can offer different choices of what to optimize for.

That still seems orthogonal to, or at least a separate consideration from, the question of minimizing the code.

Of course what to optimize for should be as carefully considered as any other factor. It is kind of the core point of a design effort.

The developer or team should figure out what to optimize for, then figure out how to implement that with the least possible code. Of course, some optimizations will require more code than others, and that should be one consideration (e.g., "yes, feature XYZ is cool, but is it worth the amount of code — slowness & bug habitat — that it will require?") in deciding whether or not to implement it.

So I'm still not seeing how your point is an argument for writing more code, or how it invalidates the principles in TFA or the above posts? It just seems like an off-topic distraction.



