Some of the backstory here (it's the funniest fucking backstory ever): it's lately been circulating --- though I think this may have been somewhat common knowledge among practitioners, though definitely not to me --- that the "random" seeds for the NIST P-curves, generated in the 1990s by Jerry Solinas at NSA, were simply SHA1 hashes of some variation of the string "Give Jerry a raise".
At the time, the "pass a string through SHA1" thing was meant to increase confidence in the curve seeds; the idea was that SHA1 would destroy any possible structure in the seed, so NSA couldn't have selected a deliberately weak seed. Of course, NIST/NSA then set about destroying its reputation in the 2000's, and this explanation wasn't nearly enough to quell conspiracy theories.
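Concretely, the claim is that a check like the following, run over the right phrase, would light up. A minimal sketch (the phrase and its formatting are guesses; the seed is the published P-256 value from FIPS 186-2):

```python
import hashlib

# Published SEED for NIST P-256 (FIPS 186-2); each P-curve has its own.
P256_SEED = "c49d360886e704936a6678e1139d26b7819f7e90"

def matches_seed(phrase: str) -> bool:
    # The claim: some variation of a phrase like this, run through SHA-1,
    # produced the published seed. The exact wording is unknown.
    return hashlib.sha1(phrase.encode()).hexdigest() == P256_SEED

print(matches_seed("Give Jerry a raise"))  # almost certainly False
```

If this ever printed True for some phrase, the bounty would be claimed on the spot.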
But when Jerry Solinas went back to reconstruct the seeds, so NIST could demonstrate that the seeds really were benign, he found that he'd forgotten the string he used!
If you're a true conspiracist, you're certain nobody is going to find a string that generates any of these seeds. On the flip side, if anyone does find them, that'll be a pretty devastating blow to the theory that the NIST P-curves were maliciously generated --- even for people totally unfamiliar with basic curve math.
> if anyone does find them, that'll be a pretty devastating blow to the theory that the NIST P-curves were maliciously generated
IDK, I don't think that finding a seed matches a hash of "Give Jerry a raise of $100000 dollars now!!!" is any evidence for that, because if I had a desire to generate malicious constants, and knew some unusual property they must have to be weak, then nothing would prevent me from generating hashes of many, many variations of similar strings until one gave me constants with the properties I need.
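The search described here is mechanical to sketch. Below, `is_weak` is a stand-in for the attacker's hypothetical secret criterion; the toy property used (digest starts with a zero byte) is obviously not a real curve weakness:

```python
import hashlib

def is_weak(seed: bytes) -> bool:
    # Stand-in for the attacker's secret criterion. Here: a toy property
    # with probability ~1/256 per try, NOT a real curve weakness.
    return seed[0] == 0

def find_malicious_phrase(template: str, limit: int = 1_000_000):
    # Enumerate innocuous-looking variations until one hashes to a "weak" seed.
    for i in range(limit):
        phrase = template.format(i)
        seed = hashlib.sha1(phrase.encode()).digest()
        if is_weak(seed):
            return phrase
    return None

# Expected to succeed within a few hundred tries for a 1/256 property.
print(find_malicious_phrase("Give Jerry a raise of ${} dollars now!!!"))
```

The point being: an intelligible English preimage only rules this out if the hypothetical weak-curve class is rare enough that no feasible number of phrase variations would hit it.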
At the point where we find an intelligible English string that generates the NIST P-curve seeds, nobody serious is going to take the seed provenance concerns seriously anymore. I think everybody sort of understands that people who don't work in cryptography are always going to have further layers of theory to add, the same way people waiting for the "Mother of All Short Squeezes" do with Direct Share Registration and share votes and stuff. If the bounty program is successful, that's going to end the NIST curve "debate", such as it is.
If you're convinced the P-curves must be backdoored, despite the computer science arguments that suggests they really couldn't have been, then you should comfort yourself in the knowledge that we're probably not going to find the seed strings any time soon; presumably Solinas tried pretty hard himself!
For the reason stated in the article, it's actually pretty likely that there's a counter in there somewhere. A 32-bit number like "3263958374" doesn't seem especially interesting cryptographically.
The counter is described as the minimum value that will fit the pattern and make the result a prime. So that should be easily checkable and not really add any free variables.
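That check is cheap to sketch. Assuming, purely for illustration, that the scheme was "phrase, space, decimal counter; keep the first counter whose SHA-1 output, read as a 160-bit integer, is prime" (the real X9.62 criteria are more involved, and the phrase format is unknown):

```python
import hashlib
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    # Miller-Rabin probabilistic primality test.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def first_prime_counter(phrase: str, limit: int = 100_000):
    # Smallest counter i such that SHA-1("<phrase> <i>"), read as an integer,
    # is (probably) prime. Primes have density ~1/111 near 2^160, so this
    # is expected to succeed within a few hundred tries.
    for i in range(limit):
        h = int(hashlib.sha1(f"{phrase} {i}".encode()).hexdigest(), 16)
        if is_probable_prime(h):
            return i
    return None

print(first_prime_counter("Give Jerry a raise"))
```

Under any such rule, minimality is verifiable by anyone: re-run the loop and confirm no smaller counter works, so the counter adds no free variables.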
If you were evil and motivated you'd probably want to hide your variables in the innocent looking part, the simple English or the punctuation, instead.
Yeah, that counter is separate from a counter embedded in an ASCII string. The one thing that kind of string indicates is that it almost certainly wasn't the first string they tried.
This would basically be a Nostradamus attack. If we're going around claiming exciting (read: improbable) things about P256 I don't know what stops us from claiming the NSA can generate collisions in the hash functions it also chose the parameters for.
> At the point where we find an intelligible English string that generates the NIST P-curve seeds, nobody serious is going to take the seed provenance concerns seriously anymore.
GP is making a different argument along the lines of last week’s Twitter fad generating intelligible English sentences which spell out the first few bytes of their hash.
Exactly. How many grammatically correct sentences are there, versus what is the probability that a "random" hash used as an EC seed results in poor security? Research[0] has demonstrated it's not merely a theoretical vulnerability in the EC selection process.
By the late nineties using both MD5 and SHA1 for "additional robustness" together in ad-hoc constructions was also en vogue. SSLv2 and SSLv3 are good examples. The outputs match the size of a SHA1, but it wouldn't be that shocking if the pipeline were some form of echo "$string" | md5sum | sha1sum.
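If someone wanted to test that hypothesis literally, the shell details matter: `echo` appends a newline, and `md5sum` emits the hex digest followed by two spaces, a dash, and a newline, all of which feeds the next stage. A speculative Python sketch of that exact pipeline (nobody knows whether anything like this was actually used):

```python
import hashlib

def shell_pipeline(s: str) -> str:
    # Reproduce, byte for byte: echo "$string" | md5sum | sha1sum
    # echo appends "\n"; md5sum reads stdin and prints "<hex>  -\n".
    md5_line = hashlib.md5((s + "\n").encode()).hexdigest() + "  -\n"
    return hashlib.sha1(md5_line.encode()).hexdigest()

print(shell_pipeline("Give Jerry a raise"))
```

Note the result is a 160-bit value, so a pipeline like this is indistinguishable by size from a plain SHA-1, which is exactly why the search space of "silly constructions" is so large.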
What's smart about this bounty is that these kinds of silly hashing constructions, and scalable pipelines for figuring them out through brute force, are the bread and butter of password crackers, which is practically a sport now. So you'd be optimistic that if the problem is interesting enough, somebody will work it out if there's like, a pre-MD5 hash or something.
> Some of the backstory here (it's the funniest fucking backstory ever): it's lately been circulating --- though I think this may have been somewhat common knowledge among practitioners, though definitely not to me --- that the "random" seeds for the NIST P-curves, generated in the 1990s by Jerry Solinas at NSA, were simply SHA1 hashes of some variation of the string "Give Jerry a raise".
For a longer history see the mentioned-in-article paper by Koblitz and Menezes, "A Riddle Wrapped in an Enigma" (which is more about why the NSA said, in 2015, that things should move towards a post-quantum world):
The same "backdoor" thinking seems to have been present with (EC)DSA:
> Proponents of RSA bitterly opposed DSA, and they claimed that the NSA was promoting DSA because they had inserted a back door in it (“No Back Door” was the slogan of the anti-DSA campaign). However, they gave no evidence of a back door, and in the two decades since that time no one has found a way that a back door could be inserted in DSA (or ECDSA).
Then there was this little tidbit:
> As the heated debate continued, the NSA representative left to make a phone call. When he returned, he announced that he was authorized to state that the NSA believed that ECC had sufficient security to be used for secure communications among all U.S. government agencies, including the Federal Reserve. People were stunned. In those days the NSA representatives at standards meetings would sit quietly and hardly say a word. No one had expected such a direct and unambiguous statement from the NSA. The ECC standards were approved.
Other actions by the NSA that were treated as suspicious were tweaks to DES (which later turned out to be defence against differential cryptanalysis) and weaknesses in the original SHA ("SHA-0") which the final version ("SHA-1") does not have.
> "However, they gave no evidence of a back door, and in the two decades since that time no one has found a way that a back door could be inserted in DSA (or ECDSA)."
ElGamal/DSA is fragile and difficult to implement securely in a way that doesn't reveal secret k. ECC (and somewhat particularly NIST curves[1]) is also very difficult to implement securely from side channel attacks[2].
Consideration of "backdoors" should be broader than just mathematical equations on paper. "backdoors" could also include deliberate choice of algorithm or parameters that are fragile and difficult to implement in a secure way to minimise risk of side channel leakage of secrets. This is especially important in a whole variety of applications such as passports, payment cards and TPMs.
> ElGamal/DSA is fragile and difficult to implement securely in a way that doesn't reveal secret k. ECC (and somewhat particularly NIST curves[1]) is also very difficult to implement securely from side channel attacks[2].
IMHO this isn't evidence of a backdoor. Side-channel protection is hard. RSA decryption and especially keygen are also tricky to implement in a side-channel-protected way, and a widespread timing attack on RSA decryption in many libraries was published earlier this year [3].
The NIST p-curves used the state-of-the-art elliptic curve shape for general operations when they were released: short Weierstrass curves over prime fields. Edwards curves are easier to defend against side channels and are overall a better choice (with their own rough edges, mostly involving the cofactor), but those were not known until 2007. Montgomery curves (similar rough edges to Edwards, plus the point at infinity) were known earlier and are nice for key exchange, but they are not as nice for signatures.
Overall I would not choose the NIST curves for a new design today, because Edwards/Montgomery curves are a better choice. But I think the evidence they were backdoored (mathematically or otherwise) is weak.
> tweaks to DES (which later turned out to be defence against differential cryptanalysis)
The NSA's mission has changed, in the sense of its real priorities for securing comms vs. spying on them. They were clearly different at the time of creation of DES vs. at the Snowden leaks, so I wouldn't take that episode as too much evidence of helpful intent in the 90s.
It's funny - all this doubt and suspicion of the NSA, but then you end your post with how the NSA has been saving our asses. Maybe they aren't so bad after all? /s
They're not, in fact, comic book villains. They have a pretty understandable mission, and then a set of organizational values that are sharply different than those of technologists.
I burned an unreasonable amount of CPU power searching for inputs that were used to produce the ~166 bit 'random' value used to construct G in secp256k1 and secp224k1, without success. (In both cases the parameters' choice of G is the double of a point with a suspiciously sized x coordinate, and the same for both curves.)
For those curves the choice of G is the only particularly high entropy input into their selection, and it's provably almost irrelevant-- the choice of it lets them know one specific arbitrary discrete log. (you can imagine a contrived protocol where this would be a backdoor, but it would be a pretty contrived protocol). But since it's the only really unknown parameter I thought it was worth searching for.
If someone does find the seed used for the P-curves it might also be similar to the one used for the generators of those other curves and solve their minor mystery too.
> At the time, the "pass a string through SHA1" thing was meant to increase confidence in the curve seeds; the idea was that SHA1 would destroy any possible structure in the seed, so NSA couldn't have selected a deliberately weak seed.
It's standard to use transcendental constants like pi or e for this purpose as you can't select them. A phrase could, in theory, be selected to yield a more desirable hash.
> It's standard to use transcendental constants like pi or e for this purpose as you can't select them.
If one may choose between one or the other without arousing suspicion, that's one bit of entropy.
If I can add the possibility of sin(1) and cos(1), that's two bits.
And remind me: are we using sha1, or md5? Or perhaps sha1 -> md5, or md5 -> sha1?
And so on, until I have enough bits to reimplement the suspicious seed in the cracks of all these innocuous, and very poorly specified "standard" choices.
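The bit-counting above is easy to make concrete: the attacker's freedom is the log of the number of defensible-looking configurations. A toy tally (this menu of "innocuous" constants, offsets, and pipelines is illustrative, not exhaustive):

```python
import math

# Each entry is a choice that would raise no eyebrows on its own.
constants = ["pi", "e", "sin(1)", "cos(1)", "sqrt(2)", "ln(2)", "phi", "gamma"]
digit_offsets = 16   # e.g. "start at hex digit k of the expansion", k in 0..15
hash_pipelines = ["sha1", "md5", "md5|sha1", "sha1|md5"]

choices = len(constants) * digit_offsets * len(hash_pipelines)
print(choices, math.log2(choices))  # 512 9.0 -- about 9 bits of hidden freedom
```

Nine bits is nowhere near enough to hit a one-in-a-billion weak class, but the menu grows multiplicatively with every "standard-looking" knob you admit, which is the point being made.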
Dan Bernstein (and some other cryptographer?) wrote about this and did a demo creating over a million curves in a short/practical amount of time on commodity hardware. All of them had seeds specified using a "standard" method like what you mention above.
It makes me think all cryptographers ought to just go ahead and choose suspicious seeds where prudent, kinda like if you're going to play the lottery, just go ahead and choose the numbers 1 2 3 4 5 etc. At least then you get instant, free information about the lack of knowledge of the people who complain that you're doing it wrong.
And if it's possible to get an easy-to-break configuration in a relatively small number of attempts, it's probably because there's something fundamentally broken about the entire construct, not just the seed you picked. And that sort of brokenness is likely to be a lot harder to hide.
Conceivably there's a subset of weak values and Jerry tried "Give me more money", "Jerry deserves more money", etc until he found a phrase which produced a weak value.
I don't think that's what happened, but it does mean that "some variation of give Jerry a raise" doesn't mean the value wasn't chosen maliciously.
Agreed, I wouldn't have thought so a few weeks ago, but after seeing the messages that contain the first few SHA bytes of themselves in themselves, I don't know how implausible it is that they might have generated billions of strings and hashes and then chosen the weaker ones.
However, I don't find it funny reading all the "gory" details:
In the article, under the subtitle "Step back, what is this about?" is:
"The NIST elliptic curves (P-192, P-224, P-256, P-384, and P-521[1]) were published by NIST in FIPS 186-2 in 2000, and generated “verifiably at random” according to ANSI X9.62 by taking an arbitrary seed, hashing it with SHA-1, and using the output to derive some of the parameters."
Note the sentence: “verifiably at random” according to ANSI X9.62.
Now, the mentioned ANSI X9.62 describes very formal algorithms of what “verifiably at random” should mean:
"If it is desired that an elliptic curve be generated verifiably at random, then select parameters (SEED, a, b) using the technique specified in Annex A.3.3.1"

and then goes on to specify an exact algorithm: how the parameters are selected in A.3.3, and then in A.3.4 the verification algorithm: "The technique specified in this section verifies that the defining parameters of an elliptic curve were indeed selected using the method specified in Annex A.3.3"
So, if I understand correctly, the authors spent enough energy both to construct the algorithms to generate the constants and make the SEED public as well as the algorithms to later verify the parameters given the publicly known SEED as an input. And to publish all that in ANSI X9.62.
If that was the idea of “verifiably at random” according to ANSI X9.62, and if then nobody knows the SEED, then it appears that the very procedure, for which a lot of energy was spent to be developed or described, was just not followed. From which it can be concluded that the "if" condition of the sentence was just not true:
"If it is desired that an elliptic curve be generated verifiably at random..."
(Not to mention that the algorithms published there clearly aren't "simply SHA1 hashes" in the sense of result = SHA1(seed) but, at a casual look, a concatenation of only some bits of output from multiple SHA1 runs over increments of the seed. That could suggest that nobody trying a human-written string as a seed would ever find a match by comparing a whole constant against the output of a single SHA1 pass. Has anybody calculated how big the chunk of bits from a single SHA1 run actually is for each of the constants?)
Now, I'm probably missing something here; if so, I'd like to know what.
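To the parenthetical question: for P-256, only about 96 of the candidate's bits come from the first SHA-1 (with the top bit forced to zero), and the remaining 160 come from SHA-1 of the incremented seed. A loose sketch of the A.3.3.1 derivation (bit-selection details simplified; consult X9.62 before relying on this):

```python
import hashlib

def x962_candidate(seed: bytes, field_bits: int) -> int:
    # Loose sketch of ANSI X9.62 A.3.3.1: derive the candidate value c from
    # SEED by concatenating pieces of several SHA-1 outputs.
    g = len(seed) * 8                  # seed length in bits
    s = (field_bits - 1) // 160        # number of extra SHA-1 blocks
    v = field_bits - 160 * s           # bits taken from the first hash
    h = int.from_bytes(hashlib.sha1(seed).digest(), "big")
    c = h % (1 << (v - 1))             # v rightmost bits, leftmost forced to 0
    z = int.from_bytes(seed, "big")
    for i in range(1, s + 1):
        s_i = ((z + i) % (1 << g)).to_bytes(g // 8, "big")
        c = (c << 160) | int.from_bytes(hashlib.sha1(s_i).digest(), "big")
    return c                           # accepted only if it yields a valid curve

seed = bytes.fromhex("c49d360886e704936a6678e1139d26b7819f7e90")  # P-256 SEED
print(x962_candidate(seed, 256).bit_length())  # at most 255 bits
```

So yes: the bounty targets a preimage of the 160-bit SEED itself, which is exactly one SHA-1 output; the multi-hash expansion only governs how the SEED becomes a curve parameter.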
The standards claim that the existence of such a (SEED, a, b) tuple is enough to show that there is nothing special about the curve in question. But if one in a billion curves have a special property that only you know about, which would make it easier for you to attack the cryptosystem, you can try a variety of different SEED values until you find a desirable curve.
I don't think we can complain that there were retries over different human-readable seeds to make an appearance of "verifiably at random" design if the chosen human-readable seeds just haven't been published at all.
And if the argument is that publishing the human-readable seeds was unnecessary because the procedure could have been retried until some exploit was possible, why even define and publish these procedures? Was it an error? Or something else?
The procedures in Annex A.3.3.1 and A.3.3.2 do not really specify how you are supposed to come up with the SEED value used in the first step ("Choose an arbitrary bit string SEED"). Note that this value is part of the output that is published.
The claim here is that the procedure used for choosing the SEED in the first step involved SHA-1 of some ASCII text with a counter.
By the way, the construction with the incrementing counter (steps 3 and 4, respectively) is a horribly inefficient PRNG that expands/shrinks the entropy in the SEED to the size of a field element. I suspect that the inefficiency is intentional, to make it even more obvious that the authors did not try enough SEEDs to be able to specifically select a weak one.
> The claim here is that the procedure used for choosing the SEED in the first step involved SHA-1 of some ASCII text with a counter.
That's the story as I see it: there's a constant that doesn't appear "arbitrary" enough, in the sense that there's a suspicion it could be too "special", since nobody can recognize it and nobody can show how it was generated.
And as there's an official procedure to turn something into something "more random", that "something" appears to still be missing.
BTW I don't think that the "inefficiency" you see in the steps there changes anything.
The whole point of the procedure as designed is to make how the constant was selected irrelevant to the security of the resulting curve.
Also you have to consider the historical context. The procedure was originally designed to generate parameters for cryptosystems that were very much built on the assumption that SHA-1 is a secure hash. Any method to choose a weak SEED in a reasonably practical way involves either breaking SHA-1 (a collision does not really help; you would need a preimage) or the underlying ECC structure having some gaping security issue that only the NSA knows about (i.e. there being ridiculously many weak curves).
And we come once again back to the start: _because_ there's an explicit algorithm right there in the standard which allows one to start from something "not special" like the digits of Pi or even the ASCII string of the beginning of the Declaration of Independence, why the completely opaque constants instead? Even if it's, as Filippo suggests, because "the counter has to be there because only one in every 192 to 521 hashes is actually good to make a curve out of", if the counter is a known part of the selection process, all these details could still have been "open".
At least, that's my understanding why there's still talk about it all, and this bounty: those who don't like the opaque constants argue: why aren't they "open", if really "irrelevant"? Now, if the bounty shows that the constants come from something like
SHA-1("Jerry and Alice deserve a raise. 1398")
then all this looks a little better, especially if it can be shown that that "1398" was the first integer that "worked" for the selected phrase, according to the publicly known criteria.
> the NSA would have had to be aware of a class of weak curves so large that it’s not plausible that no one in academia or industry discovered them in 25 years.
GCHQ in the U.K. hires more mathematicians than any other research institute or university in the country. I'm not sure about the US equivalents, but I imagine it's similar.
Diffie-Hellman key exchange was known to GCHQ and the NSA prior to being rediscovered by Diffie and Hellman. I think it's hard to assume anything about the underlying capabilities of intelligence institutions. Not saying they do know this, but it's also not impossible; this stuff is their bread and butter.
I think you're right to be suspicious, especially because the arguments for it not being possible are presented as if they are mathematical but are actually social.
The usual response to these concerns is to argue that academics are so excellent that they would certainly have discovered what the NSA was up to by now, if there was a way to do it, and anyone who doesn't agree with that is a FUDy pleb who just doesn't get it. The article repeats this party line, although it's great to now see the problem being taken more seriously with an organized bounty programme. I'll be surprised but happy if anyone actually finds pre-images. But the frequency with which these concerns are blown off is just not good enough.
I've done some cryptography work in the past and for several years had to regularly review cryptography papers as part of my job. I've attended cryptography conferences, talked with researchers, implemented various "exotic" things with elliptic curve cryptography and so on. Not exactly an insider but not a complete outsider either. The party line here looks very dangerous to me.
Let's recap the arguments here because neither of them seem strong and neither are actually mathematical at their core.
1. If there was a way to execute a kleptographic attack on NIST curve standardization, academics would have found it by now. If not academics, then someone in industry.
2. Dual_EC_DRBG was detected as suspicious immediately, so the public cryptography community is good at spotting back doors.
For (1) why should that be so? It actually seems unlikely to me. Cryptography suffers the same problems as the rest of academia w.r.t. the file drawer problem and "publish or perish" incentives. If you're an up and coming university researcher, which path is more profitable for you? Develop some clever new zero knowledge proof algorithm and be fairly certain to publish some cool new paper that gets cited a lot, or start attacking an algorithm that "everyone" already knows to be incredibly strong and almost certainly come up with nothing. If you take the latter the expected outcome is that you languish in obscurity at best or simply perish at worst (lose funding, exit academia with nothing to show and being seen as a crank).
Put another way the argument for the strength of academic understanding is that lots of really clever people have studied this in depth and found nothing. There is a Consensus Of Experts. But because you don't get to publish null results in academia, there's actually no way to know how much effort has been put towards this kind of problem. That's why assertions about academic crypto-analytic supremacy are always so handwavey and difficult to reason about. There's no actual numerical proof of work being done, and we know that some fields of academia have extreme difficulty with things being declared a "consensus" for social reasons that later have to be walked back.
Moreover, researching kleptography specifically (how to backdoor standards) isn't a good career path because kleptography isn't useful for anything unless you're the NSA, so you probably won't be able to easily transition to a post-academic career in industry on the back of that expertise and you won't get many citations.
The NSA doesn't have any of these problems. It can pay better wages than academia, hire more researchers than all of academia put together, and then assign them full time to research that is likely to be a dead end or which is useful only for backdooring standards. It can also easily build multi-disciplinary teams and fund research that academic cryptography just can't tackle at all due to lack of hardware budgets. And it has been doing exactly that for decades.
If I had to bet on whose understanding of ECC is better, the NSA's or academia's, well, all the firepower is on the side of the government. It's not even a close competition. In fact we can even measure how much firepower the government has, hence the well worn line about how many maths PhDs they hire, but we actually have no idea how much firepower academia levels at this part of the problem space.
So we're left with argument (2), people immediately raised the alarm about Dual_EC_DRBG so the NSA must actually kinda suck at designing backdoors. But this isn't an ideal argument because people immediately raised concern about the NIST curves too. The only difference is that in the former case, there was already a known algorithm that could be used to pull off the needed attack, and in this case there isn't.
Fundamentally there's no reason this debate should even be needed. It's been known for decades how to avoid doubt here, that's the whole reason the NIST curves are the output of SHA1 to begin with. The best time to phase out the NIST curves was decades ago, the second best time is today.
I'm still not fully convinced by your point about what's best to fuel an academic career. While putting it as trying to find vulnerabilities in some specific cryptosystem largely believed to be secure indeed doesn't sound that interesting, if you instead put it as finding new attacks on elliptic curve cryptography, we're back in the "cool paper with lots of citations" territory.
As a side note, in the past a PhD student was tasked with the "not interesting sounding and apparently useless in practice" task of filling the gaps in OCB2's security proof. That's how it was discovered that it was utterly broken.
You only get a cool paper with lots of citations if you actually do find a new attack. If you spend a four year PhD searching and come up with nothing, which is what the field tells you to expect, then you end up with nothing. At the NSA you get a four year salary, maybe even a promotion, who knows.
I'm one of the people contributing to the bounty because if it really is a password-crackable phrase, it would be of significant historical impact to know.
> The NIST elliptic curves that power much of modern cryptography were generated in the late ‘90s by hashing seeds provided by the NSA.
I find this deeply troubling. So the seeds were provided by the NSA and they said "don't worry, they were generated by hashing a trivial sentence. Unfortunately we forgot it now, but trust us, it's just Jerry joking about getting a raise, nothing more..."
I can't believe this didn't undergo further scrutiny earlier, and I can't believe the seeds haven't been chosen in a more sensible way, such as combining random seeds provided by different parties with competing interests, also including hardware RNGs, etc...
> I can't believe the seeds haven't been chosen in a more sensible way, such as combining random seeds provided by different parties with competing interests, also including hardware RNGs, etc...
This is because you're looking at it from the perspective of someone living in 2023 with knowledge of what happened between the 90s and now. While that would have been a good way to go, at the time few people would have seen the need for it.
So if I understand this correctly: the community accepted those mysterious strings of unknown origin while it would’ve been trivial to replace them with different strings with a known origin, by providing another but known input to the hash?
You understand the situation correctly, hence why it's kind of disastrous. The phrases "so close" and "you had one job" seem relevant here. Unfortunately, as far as I know this would be the only case related to the NSA and cryptographic standards where someone has alleged incompetence. Normally the stories run the other way.
Doubly problematic: NIST is known to have been compromised and to have put backdoors into elliptic-curve-related standards, the fact that this mechanism didn't create trust was pointed out immediately, and neither NIST nor the NSA did anything to address the concern. Just like with Dual_EC_DRBG.
Triply problematic: the NSA explicitly told people in 2015 not to upgrade past the NIST curves to other curves, because quantum computers will soon be good enough to break ECC entirely and so everyone should switch to post-quantum crypto instead (which is new, still experimental, not widely used, etc.). If ECC worked fine, QC was far off, and you wanted to keep people on the NIST curves for as long as possible, this is exactly what you would say.
The cryptography community has not exactly covered itself in glory over this situation. It's been nearly 25 years now. There are newer curves that don't have this problem, why are the NIST curves still being used by anything? Where is the effort to phase them out, like there was with SHA1? This article even seems to be advertising them.
SHA1 was phased out, for another NIST standard, because it was known to be weak (in the sense of a likely cryptographic break --- which happened --- not in the sense of needing to be more careful using it). That's not the case with the P-curves; short of a QC attack that will break all modern curves, it's unlikely the P-curves are going to be broken.
Wait, that just generates the SHA-1 of your input string and nothing more? Because as the post says, it has to include a counter of some sort. I'm sure the NSA's seeder didn't just sit there and enter 500 unique phrases until they hit one that gives a nice curve, if they had any idea about cryptography and computers whatsoever. And even if that's how it went down, there are variants with a dot at the end, a capitalized sentence, title casing, ...
The list of plausible variations one should try might never be exhaustive, but this page doing nothing but generating a SHA-1 hash makes it practically impossible to find the seed even if you got the string right with correct punctuation and capitalization. The easiest/least thing it could do is check whether the first or last ten bytes match, to catch a hash that was incremented, though even that would mostly be asking people to waste time on the page, since the author knows it won't go anywhere.
It should be remarked on the page that it is a toy for demonstration purposes and will not actually let you find the seed even if you guessed correctly.
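A real search harness would expand each guess into plausible formatting variants before hashing, along the lines the parent suggests. A minimal sketch of that expansion (the variant list is illustrative and nowhere near exhaustive, and a serious effort would push the counter into the billions):

```python
import hashlib

def variants(phrase: str, max_counter: int = 3):
    # Expand a guess into formatting variants: case, punctuation, counters.
    for base in {phrase, phrase.lower(), phrase.upper(), phrase.title()}:
        for suffix in ("", ".", "!", "\n"):
            yield base + suffix
            for i in range(max_counter):
                yield f"{base}{suffix} {i}"

def search(phrase: str, target_hex: str):
    # Return the first variant whose SHA-1 equals the published seed, if any.
    for candidate in variants(phrase):
        if hashlib.sha1(candidate.encode()).hexdigest() == target_hex:
            return candidate
    return None

# P-256 SEED from FIPS 186-2.
print(search("Give Jerry a raise", "c49d360886e704936a6678e1139d26b7819f7e90"))
```

This is essentially what password-cracking rigs like hashcat industrialize with mangling rules, which is why the bounty authors are optimistic.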
> Something I've learned from a career of watching cryptographer flame wars: Don't bet against Bernstein, and don't trust NIST.
I should amend that to:
Something I've learned from a career of watching cryptographer flame wars: Don't bet against Bernstein or Filippo, and don't trust NIST. When these two rules are in conflict... still don't trust NIST.
I mean, SHA-1 is for sure broken, but I thought that mainly concerned stuff like collision attacks (e.g. chosen-prefix collisions), not preimages.
Finding what amounts to a passphrase just given a hash was still generally intractable, I thought.
Yes, SHA-1 is still considered preimage resistant. But preimage resistance isn't that important here, if the hypothesis about seed structure is correct: SHA-1 is also very fast and trivial to parallelize, and someone dedicated to exploring the permutation space of "Jerry needs a raise" stands a decent chance of discovering the original input.
This is yet another instance where more transparency on the NIST/NSA side would have been beneficial in the long run. If they said from the start, "here are the seeds, we generated them by computing SHA1(insert_solinas_string_here)" the whole debate would have never started in the first place.
Just curious: even if the origin plaintext becomes known and we can prove it's correct, this doesn't compromise the security of those curves, correct?
I'm showing my age here, but as someone who lived through and had to mitigate the results of the MD5 disaster, I'm all for a variety of [verified] cryptographic algorithms being available. I think having Edwards curves, NIST curves, or others is healthy for the ecosystem.
According to D&D 5e, you can only cast Raise Dead if the subject has been dead for less than 10 days. Even if those conditions are satisfied, the creature's soul needs to be both willing and at liberty to rejoin the body. So, no.
You guys are gonna revisit this post in 20 years and see I was right, cracking ECC is impossible, and starting with a "could have looked like this" seed phrase is a wild goose chase multiplied by a needle in a haystack.
So: pretty fun bounty.