Hacker News | s188's comments

The JavaScript ones simply don't appear - which is nice. And yes, I believe they're in breach of GDPR if they use cookies and tracking pixels to track me without giving me the opportunity to deny consent. Please note I'm not a lawyer but I don't think there is a legal obligation for me to use their sites with JavaScript enabled.


As in Mutually Assured Destruction? Or do I misunderstand? I'd love to know more...


Yes, mutually assured destruction. From the movie War Games: "The only winning move is not to play."


UK: 30 minutes before work and 30 minutes at lunch


I've read Liar's Poker - a real eye-opener. I'd recommend Dreaming in Code by Scott Rosenberg. It's a look at 'how not to build systems'.


I don't do TDD on the first version. For me, the first version is a throwaway version. If it turns out to be commercially viable, that's when I start with a brand new codebase incorporating all the lessons from the first version, but this time using TDD. TDD has its place. I just don't think it's cost effective on the first version.


It's helpful to measure design quality and build quality - not just build quality. Software developers are mostly concerned about build quality. Software development should always proceed from complete and accurate specifications and the specification process is a key design process - not a build process.

Poor design quality increases the chance of poor build quality. That's why it's important to measure both.

As an aside, I've always felt that 'Defect' is a more useful term than 'Bug'. 'Bug' seems to be too open to interpretation (i.e. one person's bug is another person's feature). 'Defect', however, has a useful definition:

'An operation is defective if it doesn't conform to the specification.'

This provides a solid basis for identifying Defects. Once you can formally and accurately establish what a Defect (i.e. Bug) is, you can use the following two simple formulas as KPIs.

1. Design Quality = Change Requests / Requirement Specifications (DQ = CR/RS)

2. Build Quality = Defect Reports / Requirement Specifications (BQ = DR/RS)

The ideal is zero. In other words, zero Change Requests per Requirement Specification and zero Defect Reports per Requirement Specification. However, I suspect that no project in history has ever achieved that. Nevertheless, it's the quality target to aim for. Once you start treating these values as KPIs you can start to monitor the two numbers over time and steadily work to reduce them, thus improving quality in a measurable way. For instance, you may find that the project you're working on has a DQ of 2.5 and a BQ of 3.6. Your mission (if you choose to accept it) is to steadily increase quality so that those two numbers reduce over time.

By viewing the results across a date range you can start to see quality trends.
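The KPI calculation above can be sketched in a few lines of Python. The period labels and counts here are hypothetical (the first row is chosen to match the DQ 2.5 / BQ 3.6 example), but the formulas are the ones given: DQ = CR/RS and BQ = DR/RS.

```python
# Hypothetical snapshots: (period, change requests, defect reports, requirement specs)
snapshots = [
    ("2023-Q1", 25, 36, 10),
    ("2023-Q2", 18, 30, 12),
    ("2023-Q3", 14, 22, 14),
]

for period, cr, dr, rs in snapshots:
    dq = cr / rs  # Design Quality: change requests per requirement spec
    bq = dr / rs  # Build Quality: defect reports per requirement spec
    print(f"{period}: DQ={dq:.2f} BQ={bq:.2f}")
# First line printed: 2023-Q1: DQ=2.50 BQ=3.60
```

Run periodically (per release, per quarter), the declining numbers in a series like this are what a positive quality trend looks like.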

Design Quality is an important metric to monitor because that's often where the weakness lies in software projects. Managers love to offload design to developers in an ad-hoc way, largely because design is hard and time consuming (and no-one likes to write specs).

If you're concerned about build quality, first check the design quality. If you're building a product from poor, incomplete, inaccurate specifications, you're going to find it much harder to achieve decent build quality.


Could this be regarded as false advertising? It's not really free if you have to hand over PII. Problem is, no-one seems to know how much PII is worth.


'Tis a point. Should I give a direct link to the PDF then?


If possible, that would be great. I'd definitely download it.


Perhaps it's the fact that you tolerated the technical debt that got you to google scale. I've seen a few projects where developers aim for (as swatcoder so eloquently put it above) the intellectual purity of clean systems and then watch these projects fail to be delivered.

We'd all like to be a filthy rich pragmatist but I wonder just how much the 'low-quality standards' enabled you to achieve that. Seriously, I would love to be a millionaire with regrets about low quality code instead of being broke with junior developers pointing out my code smells and technical debt.


Perhaps, but you can't determine whether a system is over- or under-engineered a priori. Certainly not when you are racing a clock.

These kinds of assertions are always after the fact. And in hindsight I think we could've done a tad better.

I might be less rich, but that's not even close to being broke. And the people I hire to clean up my technical debt would hate working here less.


Wow, that last paragraph is spot on. I found both your comments to not only be accurate and clear but also rather eloquent.

I've always felt we've misunderstood technical debt. It seems to be regarded as a problem with a lot of negativity surrounding it. Developers seem to fear being accused of introducing technical debt - like it's the worst kind of developer crime. But in reality, there is value in embracing it in early stage projects. As thinkingkong mentions below: 'Its only debt if it sticks around long enough to need to be dealt with.'


The hardware engineering world (oil and gas, aerospace, civil) is less about ego and more about well established processes, practices and rules. These are much more mature industries than the software industry. In my experience, intellectual posturing is much more a software industry thing than a hardware industry thing. Software developers are still struggling to figure out the best way to do things. SOLID, YAGNI, Agile, RUP, RAD, Clean Code, OOP/FP - they're all just the start of a maturing process. They will no doubt be superseded in time by other, better practices, just as they have superseded others. In mature engineering industries, the engineering rules and practices are well established. Much of this has come about because of accidents (and death - planes crashing, bridges collapsing, oil rigs exploding) and the court cases that follow. In the early oil industry health and safety mattered little. Same goes for the aerospace industry. These are hard lessons to learn and practices had to change. The cost of not changing was uneconomical.

The software industry is still growing up and best practices still have to be formally established. All the articles and books written about best practices in software - they're just the beginning - and most are probably wrong to some extent. Where does the intellectual posturing come from in the software industry? It's largely because of a lack of provably reliable practices and processes. The ones we have are sold to us as 'the best thing' but they will eventually be found wanting. Ron Jeffries' recent article about software estimating is a classic example (https://ronjeffries.com/articles/019-01ff/estimation-again/I...). Some people are so fed up with how unworkable estimating is that they're willing to ditch it entirely.

And so, in the absence of mature, provably reliable practices and processes the way is open to 'whose ego is the biggest', because those with big egos (but not necessarily a lot of experience) often think they know best (Dunning-Kruger). Their proposals (which are just as likely to be wrong as anyone else's) tend to be adopted simply by force of ego. For instance, you won't hear terms like 'code smells' in the hardware world (I worked as a software developer in oil and gas and rail transport for 30 years and never once heard it mentioned). To say "that's a code smell" is a kind of intellectual put-down. It's intended to shame a developer into doing something differently and thereby elevate the speaker as someone who 'knows the right way'. Eventually, these things will disappear and the software world will have reliable, accurate processes, practices and rules, and the 'code gurus' will be consigned to history.

And that's when the intellectual posturing will end.

