Why do these have to be mutually exclusive? There are plenty of cases, especially in enterprise, where if you relied solely on spec-less QA, plenty of important use-cases would be missed.
The PM needs to take a step back and let the designers/developers handle the tricky details of the spec, then let the testers test the resulting product and the spec that comes out of that work - not the high-level initial spec. You don't need to bring in your testers until you have something concrete for them to actually test against.
To keep things concrete so we don't get lost in abstractions: if you are building an accounting package that needs to calculate a user's tax rate from information pulled from Twitter, having your testers build a framework for testing exact tax rates before the product is complete is folly. You may only be able to generate a 'rough band' of tax rates, and now your whole perfect spec and testing regime is dead in the water as all the values test incorrectly.
Rather, get your team to build a way to add tax information to the system; once they have the method to get the best data possible, you add details to your sparse spec and bring in the testers to test the real implementation against the real spec.
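To illustrate the 'rough band' idea, here is a minimal sketch of what such a test might look like while the data source is still unsettled. Everything here is hypothetical: `estimate_tax_rate()`, the income figure, and the band limits are invented for illustration, not taken from any real system.

```python
# Hypothetical sketch: asserting a computed tax rate falls within a
# "rough band" rather than matching an exact figure. The function
# estimate_tax_rate() and the band limits are made up for illustration.

def estimate_tax_rate(income):
    # Stand-in for the real calculation fed by external (e.g. Twitter) data.
    return 0.27  # pretend the system derived this rate

def test_tax_rate_within_band():
    rate = estimate_tax_rate(income=85_000)
    # We can't pin an exact value yet, so assert a plausible band instead.
    assert 0.20 <= rate <= 0.35

test_tax_rate_within_band()
```

Once the real data pipeline exists and the spec is firmed up, the band assertion can be tightened to exact expected values.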
Obviously there are tons of other nice patterns to follow here that have been written about extensively - but the "design in stone from on high" approach is a very fallible one. Not that it hasn't worked in the past - it's had some great successes - but it requires someone with a perfect image of exactly what they want and a team willing to follow that person exactly.
There is a specific reason why a QA plan is formalized before development: so that QA tests for the client's requirements and not for what was developed. Ideally what is developed will be what was needed, but it would be foolish to assume that. If QA were simply QA'ing what was developed, it would be assuming that what was developed is what was needed. That is not a safe assumption to make, especially at scale and in enterprise.
You may only be able to generate a 'rough band' of tax rates, and now your whole perfect spec and testing regime is dead in the water as all the values test incorrectly.
I don't quite follow, but a QA plan does not need to hard-code values. It should have test values where possible - in this case, maybe QA could find a couple of example Twitter accounts and work out what the tax rate info should be if done right. If you are saying the formula for the tax rate may change midway, that is okay as long as QA is notified. This way QA and the devs independently do their own calculations, and if they do it right, QA's manual calculation should lead to the same answer as when QA uses the functional app to do the calculation.
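The independent-calculation approach described above could be sketched roughly like this. The account names, expected rates, tolerance, and `app_tax_rate()` are all invented for illustration; the point is only that QA's own figures live separately from the application's code.

```python
# Hypothetical sketch: QA keeps its own independently calculated expected
# values for a few example accounts, and the test compares the app's
# output against them. All names and numbers here are made up.

EXPECTED_RATES = {
    "@example_account_a": 0.22,  # QA's manual calculation
    "@example_account_b": 0.31,
}

def app_tax_rate(account):
    # Stand-in for the application's real calculation.
    return {"@example_account_a": 0.22, "@example_account_b": 0.31}[account]

def test_against_independent_calculations():
    for account, expected in EXPECTED_RATES.items():
        # A small tolerance guards against rounding differences between
        # QA's manual calculation and the app's implementation.
        assert abs(app_tax_rate(account) - expected) < 0.005

test_against_independent_calculations()
```

If the tax formula changes midway, QA recalculates `EXPECTED_RATES` from the new formula and the comparison still works unchanged.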
Your point about building up a fairly generic testing system right from the start seems like a very good idea. Most QA I've worked with hasn't been anywhere near that farsighted, which is probably where I got that opinion from - seeing QA throw out their hacked-together models and hack up a new one (and string together hacked-up models...). From experience, I think most companies take the hacked-together route for QA and put their junior recruits in QA. This is probably different in a Fortune 50 company? I've yet to work at one of those...