
Feels like that PM is trying to micromanage and failing to keep his troops working in their box.

If you're going to make a spec that defines the 'how' as a PM, then you need to make sure your designers, coders and QA do exactly what you tell them. This can actually work if you're a genius and know the details of what you're trying to do extremely well.

Most PMs don't fall anywhere near there, and so the solution is simple: spec the 'what' very broadly. Delegate protocol specs to the programmer(s) who will be dealing with that protocol the most. Make sure the programmers create a product that the designer can 'skin' afterwards - such as moving elements around the screen without programmer assistance. Have the QA work with the actual product and not a spec - the users will be using the product after all, not the spec.
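
To make the 'skin it afterwards' idea concrete, a minimal sketch (the file name and fields are hypothetical): the programmers expose layout as data, and the designer edits that data instead of the code:

    import json

    # Hypothetical designer-owned file ("layout.json"); fields are made up.
    LAYOUT_JSON = """
    {
      "elements": {
        "login_button": {"x": 420, "y": 80, "w": 120, "h": 40},
        "search_box":   {"x": 20,  "y": 80, "w": 360, "h": 40}
      }
    }
    """

    def render(layout):
        # The code only knows how to draw "an element at (x, y)";
        # where things go is entirely up to the designer's file.
        for name, box in layout["elements"].items():
            print(f"draw {name} at ({box['x']}, {box['y']}) size {box['w']}x{box['h']}")

    if __name__ == "__main__":
        render(json.loads(LAYOUT_JSON))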

Standard stuff...




> Feels like that PM is trying to micromanage

I didn't get that from this piece at all. At what point did the PM in question appear to be micromanaging?

> spec the 'what' very broadly.

That's laughable. No client or internal stakeholder is ever going to spec the "what" broadly for any definition of the term, and if you let them you're setting yourself up for failure (because they're not going to like what you make, and they sign your check).

> Make sure the programmers create a product that the designer can 'skin' afterwards

And what if the programmers leave something critical out of the design? What if the designer was expecting behavior that is different from what was created?

> Have the QA work with the actual product and not a spec - the users will be using the product after all, not the spec.

QA resources use the spec to create a test plan from which they test the final product. If they don't test against the spec, what do they test against? If they're not testing against the spec, they're not exactly doing quality assurance, are they?

Your comment shows a strong lack of understanding of almost everything that was mentioned in the blog post linked to by the OP.

-----


What about speccing the "why", then engaging with the team to figure out the "what", then letting the team figure out the "how" by themselves? Believe it or not, the PM is not the only one on the team with the problem-solving skills to map the "why" into the "what".

-----


Sometimes this is true. It depends on the developers.

Sometimes the developers don't have product skills but know they don't have product skills. This is also fine.

Sometimes the developers think they have product skills when they don't, and this is always a complete disaster.

I can't tell what situation the company described in the article is in. Perhaps the product manager is incompetent (there are lots of incompetent product managers out there), and the developers are correcting for this. Perhaps the developers have good product sense that the product manager is underutilizing.

But if the developers don't have the product skills they need, skills you get by spending lots and lots of time talking with customers instead of (or in addition to) writing code, no amount of process tweaking is going to fix the real problem - individuals assuming that their technical skill translates into product skills, and insisting on also doing something they're not good at.

I've often wondered why the subset of developers who lack product skills and/or the relevant domain knowledge frequently insist on getting involved with product anyway. After all, the head of human resources rarely insists on committing code.

-----


I'm assuming this is for building a product, in which case the "why" is specced by market demand, and the "what" is little more than a goal of the product/feature - such as "product to allow consumers to record their taxes". The "how" is obviously more involved and includes things such as technology choices and specs on what the customer will interact with and how data will be stored.

The "what" would also often include essential features, such as "visual graphs or charts over time". These features would grow (or shrink) over time based on market feedback and testing.

You rarely want your programmers/designers/QA defining the "why" behind your product. If it's a startup, you're generally creating your business because of the "why" - you don't start a company and then use the company to find a "why". If it's an established company creating a new product, you'd have had market research and similar groups set up before the product even started to identify the "why".

-----


> Have the QA work with the actual product and not a spec - the users will be using the product after all, not the spec.

Why do these have to be mutually exclusive? There are plenty of cases, especially in enterprise, where if you relied solely on spec-less QA, plenty of important use-cases would be missed.

-----


The linked article spoke about creating the QA tests before the product was finished or even designed by the designer. No plan survives contact with the enemy, and no spec specific enough to have formal tests written for it is going to survive contact with reality - as clearly seen in the linked blog.

The PM needs to take a step back, let the designers/developers handle the tricky details of the spec, and let the testers test against the resulting product and the spec that emerges from it - not the high-level initial spec. You don't need to bring in your testers until you have something concrete for them to actually test against.

As something concrete so that we don't get lost in abstracts: if you are building an accounting package that needs to calculate a user's tax rate from information pulled from twitter, then having your testers create a framework for testing exact tax rates before the product is complete is folly. You may only be able to generate a 'rough band' of tax rates, and now your whole perfect spec and testing is dead in the water as all the values test incorrectly.
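
To put the 'rough band' point in code (the function name and numbers are made up, pytest-style tests):

    # Hypothetical sketch - estimate_tax_rate and the numbers are made up.
    def estimate_tax_rate(handle):
        # Stand-in for the real twitter-derived estimate.
        return 0.22

    def test_exact_rate():
        # Written up front against the spec: breaks as soon as the data
        # turns out to be fuzzier than the spec assumed.
        assert estimate_tax_rate("@alice") == 0.2132

    def test_rate_band():
        # Only claims what the rough data can actually support.
        assert 0.15 <= estimate_tax_rate("@alice") <= 0.30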

Rather, get your team to build in a way to add tax information to the system; then once they have the method to get the best data possible, you add details to your sparse spec and bring in the testers to test the real implementation against the real spec.

Obviously there are tons of other nice patterns to follow here that are heavily written about - but the "design in stone from on-high" approach is a very fallible one. Not that it hasn't worked in the past - it's had some great successes - but it requires someone with a perfect image of exactly what they want and a team willing to follow that person exactly.

-----


> As something concrete so that we don't get lost in abstracts: if you are building an accounting package that needs to calculate a user's tax rate from information pulled from twitter, then having your testers create a framework for testing exact tax rates before the product is complete is folly. You may only be able to generate a 'rough band' of tax rates, and now your whole perfect spec and testing is dead in the water as all the values test incorrectly.

There is a specific reason why a QA plan is formalized before development: so that QA tests for the client's requirements and not for what was developed. Ideally what is developed will be what was needed, but it would be foolish to assume that. If QA were simply QA'ing what was developed, it would be assuming that what was developed is what was needed. That is not a safe assumption to make, especially at scale and in the enterprise.

> You may only be able to generate a 'rough band' of tax rates, and now your whole perfect spec and testing is dead in the water as all the values test incorrectly.

I don't quite follow, but a QA plan does not need to hard-code values. It should have test values where possible - in this case, maybe QA could find a couple of example twitter accounts and work out what the tax rate info should be if done right. If you are saying the formula for the tax rate may change midway, that is okay as long as QA is notified. That way QA and the devs independently do their own calculations, and if they both do it right, QA's manual calculation should lead to the same answer as when QA uses the functional app to do the calculation.
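
Something like this is what I have in mind - QA keeps its own hand-calculated expected values in a fixture and only checks that the app agrees (the handles, rates and app.get_tax_rate call are all made up):

    # Hypothetical fixture - expected rates worked out manually by QA from
    # the spec, independent of whatever the developers implemented.
    EXPECTED_RATES = {
        "@example_freelancer": 0.24,
        "@example_smallbiz":   0.19,
    }

    def test_rates_match_manual_calculation(app):
        # "app" stands in for the functional app under test.
        for handle, expected in EXPECTED_RATES.items():
            assert abs(app.get_tax_rate(handle) - expected) < 0.005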

-----


I agree completely when it comes to enterprise, where your QA team is likely large enough and developed enough to have most of the required infrastructure set up before the product even begins. However, the article seems to be dealing with smaller teams, and in particular it has the product manager actually working with QA to specify the QA plan from the outset himself.

Your point about building up a fairly generic testing system right from the start like that seems like a very good idea. Most QA I've worked with hasn't been anywhere near that farsighted, which is probably where I got that opinion from - seeing QA throw out their hacked-together models and hack up new ones (and string together hacked-up models...). From experience, I think most companies take the hacked-together route for QA and put their junior recruits in QA. This is probably different in a Fortune 50 company? I've yet to work at one of those...

-----



