
On the S-Box of GOST Streebog and Kuznyechik - fanf2
https://who.paris.inria.fr/Leo.Perrin/pi.html
======
nullc
Someone should hold a backdoored blockcipher or hash function competition.

The structure would be that submissions are structurally like those of normal
cipher competitions, e.g. they make the same security arguments, etc. To focus
efforts most usefully, submitters could be required not to outright lie about
the properties they claim (not a restriction on real attackers, but we already
know that claims which people accept and no one checks can allow backdoors--
and, given the risk of being caught, real attackers would probably avoid
outright lies anyway).

But every entry is required to contain an exploitable algorithmic backdoor,
not just a certificational weakness.

The process continues like a normal cipher competition, and the winner is
selected based on no one being able to break their scheme.

Then the backdoors are disclosed. (Perhaps some escrow process could be used
to make sure that they're really backdoored, or a ZKP proving the submitters
could break a nothing-up-my-sleeve instance could be required along with the
submission.)

I think from such a process we could learn a lot about what submission norms
are actually useful for protecting against cleverly constructed back doors,
and something about the submission norms that we're missing.

Alternatively, it might be useful for normal competitions to allow secretly
backdoored contributions. E.g. every submitter submits a commitment at the
beginning, and then at the end everyone reveals their commitment (to avoid the
case of a bad submission being revealed as "white hat" selectively only if it's
caught). ... but I fear that the result of such efforts would do more to erode
our trust in the process (because it would fail to find the backdoors) than it
would educate us about making things more secure. It would, however, make
cryptanalysis of proposals more attractive, because it would increase
confidence that there are discoverable weaknesses in at least some of the
proposals.

~~~
lucb1e
> some escrow process could be used to make sure that they're really back
> doored

Or the judges ask the teams to provide a working implementation, encrypt some
text with each implementation (a random sentence, perhaps), and return the
ciphertexts. Each team can then prove it is backdoored by providing (a
significant part of) the plaintext.
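A minimal Python sketch of that challenge-response idea (all names and the toy
"cipher" are hypothetical, purely for illustration; a real run would use each
team's actual implementation):

```python
import os
import hashlib

def challenge(encrypt):
    """Judges encrypt a random message under a secret key and publish
    only the ciphertext plus a hash committing to the plaintext."""
    key = os.urandom(32)
    plaintext = os.urandom(64)
    return encrypt(key, plaintext), hashlib.sha256(plaintext).digest()

def verify(claimed_plaintext, commitment):
    # A correct response can be checked without the judges revealing
    # the challenge plaintext in advance.
    return hashlib.sha256(claimed_plaintext).digest() == commitment

# Toy "backdoored cipher": it ignores the key entirely, so anyone who
# knows the trick (here, the fixed pad) can invert it.
PAD = hashlib.sha256(b"nothing up my sleeve").digest() * 2
def backdoored_encrypt(key, pt):
    return bytes(p ^ k for p, k in zip(pt, PAD))

ct, com = challenge(backdoored_encrypt)
recovered = bytes(c ^ k for c, k in zip(ct, PAD))  # using the "backdoor"
print(verify(recovered, com))  # True
```

The hash commitment lets the judges grade responses without trusting the
teams, and partial-plaintext recovery could be handled the same way by
committing to fragments.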

~~~
bcaa7f3a8bbc
> Each team can then prove it is backdoored by providing (a significant part
> of) the plaintext.

Some backdoors are asymmetric, so the original designers can decrypt anything
at will.

But others only hide a trick that reduces the computational complexity of an
attack to, say, 2^80. Unless you are a nation-state, you cannot use such a
backdoor. And a third type of backdoor only increases the susceptibility to
implementation vulnerabilities (cache-timing attacks, invalid-curve attacks):
mathematically speaking these ciphers are still secure, but they are too
difficult for most people to implement perfectly.

An escrow process would be better.

~~~
throwawaymath
_> But some others only hide a trick that reduces the computation complexity
to, say, 2^80. Unless you are a nation-state, you cannot use the backdoor._

That's not an issue - you just publish the backdoor. You don't need to
demonstrate the complexity reduction, you just need to prove it exists.

------
userbinator
The maths is way beyond my level, but I think it's rather amazing that they
were, given a presumably random permutation, able to figure out the algorithm
that generated it. I'd like to read more about how that was done --- did they
just try different algorithms until they found a match?

This reminds me of the story around the DES s-boxes:

[https://en.wikipedia.org/wiki/Data_Encryption_Standard#NSA's...](https://en.wikipedia.org/wiki/Data_Encryption_Standard#NSA's_involvement_in_the_design)

To me, the possibility of a backdoor in these GOST algorithms seems just as
likely as the possibility that the structure was chosen carefully to
strengthen them against a type of attack that is still classified.

~~~
blattimwind
If you pick a random S-Box, there is an expected value for its nonlinearity.
On the other hand, you can also look at the maximum nonlinearity an S-Box of a
given size can have (along with some other properties), and then maximize
those. This was done with DES.

The S-Boxes used here, on the other hand, have a whole lot of structure, i.e.
linearity, that has been put there intentionally. This is generally the
opposite of what you would want to see in an S-Box. It is of course
hypothetically possible that this linearity combines with other parts of the
algorithm to result in greater non-linearity or hardening against specific
unknown attacks, but that sounds a bit far-fetched to my ears (I can't think
of a precedent for this; e.g. in DES the issue that was worked around was
differential cryptanalysis, and it wasn't solved by adding more linearity into
the algorithm).
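For concreteness, here is a small Python sketch of what measuring nonlinearity
means: the nonlinearity of an n-bit S-Box is 2^(n-1) minus half the largest
absolute Walsh coefficient over all nonzero output masks. A purely linear map
scores 0; the PRESENT cipher's 4-bit S-Box (used here only as a well-known
example, unrelated to Streebog) reaches the optimum of 4 for n = 4:

```python
def dot(a, b):
    # GF(2) inner product: parity of the bitwise AND
    return bin(a & b).count("1") & 1

def nonlinearity(sbox, n):
    """NL(S) = 2^(n-1) - (1/2) * max |Walsh coefficient| over all
    nonzero output masks b and all input masks a."""
    best = 0
    for b in range(1, 1 << n):
        for a in range(1 << n):
            w = sum((-1) ** (dot(a, x) ^ dot(b, sbox[x]))
                    for x in range(1 << n))
            best = max(best, abs(w))
    return (1 << (n - 1)) - best // 2

# The identity map is fully linear: nonlinearity 0.
identity = list(range(16))
# PRESENT's 4-bit S-Box attains the optimum for bijective 4-bit S-Boxes.
present = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

print(nonlinearity(identity, 4))  # 0
print(nonlinearity(present, 4))   # 4
```

This is the kind of criterion one can maximize when generating S-Boxes
randomly, and the yardstick against which the unusual structure in the
Streebog/Kuznyechik S-Box stands out.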

------
wolfgke
Here is the opinion of one of the VeraCrypt authors on this topic:

[https://github.com/veracrypt/VeraCrypt/issues/419](https://github.com/veracrypt/VeraCrypt/issues/419)

Highlight:

"This leads me to think that maybe there is another interesting possibility:
the designers of Streebog and Kuznyechik are aware of a new type of algebraic
attack affecting modern ciphers and hash functions and they wanted to ensure
that these algorithms are immune against this attack without revealing
anything about the attacks they are protecting against!

This may seem far-fetched, as it would mean that someone has made a
cryptanalysis breakthrough, but history tells us that this is in the realm of
possibilities.

For example, people were suspicious of the SBoxes forced by the NSA in the 70s
for the DES encryption standard and everybody was saying that they were
backdoored. But more than 10 years later, we discovered that the NSA was
actually aware of differential cryptanalysis before it was discovered by
academics and designed the DES SBoxes to resist it.

So, if history is repeating itself, which is often the case, what we are
witnessing here is like the discovery of a cure before the illness is even
publicly announced. In which case, we should really worry about existing
ciphers like AES and Serpent."

---

TLDR: existing ciphers might contain a weakness that was fixed in the design
of Streebog and Kuznyechik.

EDIT: Why the downvote?

~~~
tuxxy
This is a really bad look for VeraCrypt and the writer is clearly not informed
enough on this topic to make that kind of statement. It really pains me to be
the lone -1 on that GitHub comment.

------
vbezhenar
> That is why I recommend that, until their designers provide a detailed
> explanation of their complete design process, you do not use these
> algorithms and, should you be in a position to make such a decision, that
> you do not standardize them.

Nobody would use those ciphers unless required. And they are required if
you're interacting with government state information systems or certifying
your software for certain uses. I have zero doubt that those ciphers are
intentionally backdoored. It does not really change anything: those are
"certified cryptography", and conventional ciphers like AES-256 are not, even
if that sounds absurd.

~~~
wolfgke
> I have zero doubts that those ciphers are intentionally backdoored.

> And they are required if you're interacting with government state
> information systems

What interest would the government have in interactions with state information
systems being done using broken/backdoored ciphers?

------
infinity0
backwindow

