Show HN: SHA-256 explained step-by-step visually (sha256algorithm.com)
1241 points by manceraio on Feb 7, 2022 | 137 comments



Oh this is great. When we taught SHA-256 last semester, we linked to this YouTube video: https://youtu.be/f9EbD6iY9zI. Next time we do it, we'll probably link to both. Having several different ways to visualize the same thing is very helpful, and I like that this one moves quickly.

A couple of details missing from this visualization are how you pad a message to be a multiple of the block size, and how you chain blocks together to form a longer message. In the pseudocode at https://en.wikipedia.org/wiki/SHA-2#Pseudocode, that's the "Pre-processing (Padding)" part and the "for each chunk" loop just below it. I get why you'd want to leave those things out, since they're not really the interesting part, and the screen is already pretty packed as it is.
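
If you want to see those missing pieces concretely, here's a rough Python sketch of the padding step from that pseudocode (the function name is mine, and this is just the pre-processing, not the whole hash):

  def sha256_pad(message: bytes) -> bytes:
      bit_len = len(message) * 8
      padded = message + b"\x80"                     # a single 1 bit, then zeros
      padded += b"\x00" * ((56 - len(padded)) % 64)  # pad so length is 56 mod 64
      return padded + bit_len.to_bytes(8, "big")     # 64-bit big-endian bit length

  padded = sha256_pad(b"abc")
  chunks = [padded[i:i + 64] for i in range(0, len(padded), 64)]
  # the "for each chunk" loop then runs once per 64-byte chunk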

If anyone's feeling curious about implementing this themselves, take a look at these project notes: https://github.com/oconnor663/applied_crypto_2021_fall/tree/.... At some point I'll clean that up for public consumption, but for now just ignore the parts about grades and cheating :)


Thanks for the feedback, and I'm glad you'll use it for teaching (which was the main goal of this project)! The padding part is briefly explained in the "dynamic" notes in the left column, but yes, it can be improved. Typing in the input gives you some sense of what is happening in the background, especially when the message jumps to two blocks.

The "for each chunk" is also implemented (which was one of the most difficult parts to synchronize with the UI), but I agree too, I should come up with some way to represent it better. Thanks again :)


Minor nit: input could also take hex.


Was about to reply with the link to the project. If anyone is curious about SHA-2, I highly recommend going through the project. Jack did an amazing job explaining everything step by step. Writing the code really helps to understand all the concepts much better.


Thank you for the link to your repo, this is the first time I've heard about length extension attacks. TIL, appreciate it! This SO answer explains them nicely, if anyone is curious: https://crypto.stackexchange.com/questions/3978/understandin...


What course did you teach?! Have you got a syllabus?


Applied Cryptography (CS-GY 6903) at NYU Tandon. You can find all our programming problem sets in the same repo: https://github.com/oconnor663/applied_crypto_2021_fall.


So, how do people come up with these things? I assume every aspect of the design is carefully considered to defend it against various attacks. For example, why "right rotate 7 XOR right rotate 18 XOR right shift 3" and not "right rotate 2 XOR right rotate 3 XOR right shift 4"?


Generally, come up with a set of operations that you want to use that meet your design criteria (fast in hardware, fast in software, fast in SIMD, whatever), then brute-force the instruction order and/or arbitrary constants (rotations) to minimize known attacks and maximize diffusion

e.g. the Chacha20 design document explains the choices made in a fairly high level and easy to understand way: https://cr.yp.to/chacha/chacha-20080128.pdf


> (fast in hardware, fast in software, fast in SIMD, whatever)

"Slow to compute in GPUs" has been another design criterion for a while. It helps against brute force attacks. Of course, you might want your hash to be fast on GPUs too, if you're trying to accelerate something, bu cryptographic hashes usually aim for attack resistance.


No, no, no. That's a criterion for password hashes, which are a totally different construction than standard cryptographic hashes. You can't brute force a cryptographic hash, practically by definition.


I must have conflated Keccak's "slow without specific hardware/extensions" with "slow on purpose against GPUs". Both are similar (why would a GPU have an operation for cryptography?) but obviously not the same thing.


The distinction being the relatively small space of possible preimages?


I’m surprised that’s a design criterion. Being able to calculate hashes quickly is advantageous.

Speeding up hashing by, what, a few orders of magnitude shouldn’t break your ≥ 128 bits of preimage protection, surely?


It depends what the goal of your hashing function is. Pure cryptographic hashes like sha256 want to be cheap to calculate. Slow on GPUs is only a design goal if you're making a password hash function, which is a rather different class of problem to sha256/chacha20 and similar.

I think the commenter you're responding to got a little confused somewhere along the way and conflated cryptographic hashes with password hashing.

EDIT: To be clear, password hashes are also cryptographic hashes - but they're a specific subclass of crypto hashes with different design criteria.


Yes, I did confuse the part about Keccak[0] being "slow" to compute without specific hardware/extensions with it being slow on GPUs as a design goal.

[0] Taking this as an example of newer cryptographic hashes, rather than the "old ones" that were easier/faster to calculate on GPUs.


This is a known technique for password hash protection too. By limiting the usefulness of the GPU, you shield yourself from high speed password crackers in the event your password database is compromised.
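
For contrast, here's a quick example (mine, not the parent's): a bare SHA-256 is a single cheap pass, while a memory-hard KDF like scrypt deliberately burns time and memory, which is what blunts GPU cracking rigs. The parameters here are just illustrative.

  import hashlib

  password, salt = b"hunter2", b"some-16-byte-salt"

  fast = hashlib.sha256(salt + password).digest()                # cheap, very GPU-friendly
  slow = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)  # slow and memory-hard on purpose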


>why not "right rotate 2 XOR right rotate 3 XOR right shift 4"

In this case, you generally want shifts that are far apart, so that bits that are far apart have a chance to influence each other. The other operations have only local effects on bits (and, or, xor affect only the same bit position; add affects the same bit position and perhaps a few that follow via carries). Only the shifts/rotates give a chance for "non-local" effects.
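
A tiny Python sketch of that (mine, using SHA-256's small sigma0 from the pseudocode): flip one input bit and the difference lands in several widely separated output positions at once.

  def rotr(x, n):
      return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

  def sigma0(x):                     # rotr 7 XOR rotr 18 XOR shr 3
      return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3)

  diff = sigma0(0x00000008) ^ sigma0(0x00000000)   # flip input bit 3
  print(f"{diff:032b}")              # bits 28, 17 and 0 change: far apart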


Visualized like this it feels like security through obscurity, but there must be a reason for this.

I did wonder why initialization is like:

1. Initialize hash values h0 to h7: first 32 bits of the fractional parts of the square roots of the first 8 primes (2..19).

2. Initialize the array of K constants: first 32 bits of the fractional parts of the cube roots of the first 64 primes (2..311).
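
If you want to check where those come from, here's a small sketch (mine, not from the visualization) that reproduces h0..h7 exactly using integer square roots; the K constants are the same trick with cube roots of the first 64 primes.

  from math import isqrt

  def first_primes(n):
      primes, c = [], 2
      while len(primes) < n:
          if all(c % p for p in primes):
              primes.append(c)
          c += 1
      return primes

  # floor(sqrt(p) * 2^32) mod 2^32 = first 32 fractional bits of sqrt(p)
  h = [isqrt(p << 64) & 0xFFFFFFFF for p in first_primes(8)]
  print([hex(x) for x in h])  # ['0x6a09e667', '0xbb67ae85', ...] as in the spec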


https://news.ycombinator.com/item?id=23166713

It seems like a "nothing up my sleeve" way to initialize the variables.


It's not security through obscurity. In fact, it's the very opposite. You can see the process exactly. The reason this is secure is because the process itself doesn't work backwards. You can create a hash using this algorithm, but you'll never reverse that hash back into the original text.


I'm also wondering, how does this prevent preimaging attacks (or whatever they're called)? That is to say, what's stopping people from reliably producing output based on input?


> I'm also wondering, how does this prevent preimaging attacks (or whatever they're called)?

First, see the Wikipedia entry about preimage attacks.

Second, I am not a cryptographer but I think in practice there are a couple of things to be aware of:

- make sure slightly different inputs have wildly different outputs

- make sure no part of the input survives

- practically speaking there are an unlimited number of inputs that map to most (all? I'm not sure how uniform the distribution of sha256 is) outputs, since input is unlimited and output is a short string

- the classic preimage attack, rainbow tables, works because (1) inputs, i.e. passwords, are often short and predictable, and (2) in ancient times password systems didn't use salts

> That is to say, what's stopping people from reliably producing output based on input?

I assume this should be the other way around, which is what I have tried to explain above.

Again, read the Wikipedia page.


from what I've seen, there's a lot of "obscurity" to this; there are many seemingly arbitrary choices all over the place.

In the end most encryption algorithms boil down to doing 'random' (arbitrary, hard to justify why) things to data and then undoing them exactly in order to decrypt.

the math is all incredibly abstract but not all that complex; the high level of abstraction does make it quite difficult to grasp, though.

What's worse is that I fear there are incentives (mostly political/security interests) to keep the field small and to keep many people far away from this very practical use for all these beautiful, elegant, simple (but extremely abstract) mathematics (referring to the entire cryptography field).


> What's worse is that I fear there are incentives (mostly political/security interests) to keep the field small and to keep many people far away from this very practical use for all these beautiful, elegant, simple (but extremely abstract) mathematics (referring to the entire cryptography field).

I mean, everything you want to learn about crypto is available online, in libraries, in textbooks. Including differential cryptanalysis and the theory behind these mathematical forms (Galois fields make things _EASIER_, not harder, actually. That's why CRC checks and Reed-Solomon codes were based on Galois fields, and AES being based on GF(2^8) is to take advantage of those same properties).

--------

What has happened is that the "old generation" of programmers is dying out / retiring. And they aren't passing on their knowledge to the new generation. The "old generation" of programmers were steeped in heavy math, abstract algebra and more, while "new generation" programmers just never bothered to learn this stuff.


> What has happened is that the "old generation" of programmers is dying out / retiring. And they aren't passing on their knowledge to the new generation. The "old generation" of programmers were steeped in heavy math, abstract algebra and more, while "new generation" programmers just never bothered to learn this stuff.

There may be some survivorship bias here. Even in the 1990s, business-grade programmers (the ones who, quite frankly, aren't inclined to learn difficult subjects) either went into management or did something else, although the timeframe and ageism are more aggressive these days due to the infantilization and humiliation (e.g., Agile Scrum) that engineers face today.

Research-grade programmers were the minority, even then, although this problem is a lot worse today due to the near nonexistence of R&D jobs.


Not all decryption is doing exactly the same things in reverse. For example, with CTR mode (and thus GCM mode, which is CTR plus GMAC), you call the /encryption/ routine regardless of whether you're encrypting or decrypting data. This means in an embedded environment you can save die space because your program doesn't need the e.g. AES decryption routines too.

https://upload.wikimedia.org/wikipedia/commons/4/4d/CTR_encr...

https://upload.wikimedia.org/wikipedia/commons/3/3c/CTR_decr...

(Note the bold text)
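
A toy sketch of that symmetry (my own stand-in construction, not real AES-CTR): the keystream generator here is just SHA-256(key || nonce || counter), and the exact same routine both encrypts and decrypts.

  import hashlib

  def ctr_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
      out, counter = b"", 0
      while len(out) < length:
          out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
          counter += 1
      return out[:length]

  def ctr_xor(key, nonce, data):  # the same call encrypts and decrypts
      ks = ctr_keystream(key, nonce, len(data))
      return bytes(a ^ b for a, b in zip(data, ks))

  ct = ctr_xor(b"k" * 16, b"n" * 8, b"attack at dawn")
  assert ctr_xor(b"k" * 16, b"n" * 8, ct) == b"attack at dawn"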


I mean it in the sense that they undo the 'noise' added to (or reverse the scrambling of) the initial input, even if they do it in a different way.


I doubt there is any concerted effort to keep the field small. That would be like saying tech companies don't want people learning how to code so that they can maintain an advantage.

If anything, governments and companies are encouraging people to study cryptography so that they are able to hire more experts in the future.

Now, once you get gatekeeper organizations and special licensing organizations like contractors licensing or beauticians licensing, those are examples of groups trying to keep the pool of experts small.


> What's worse is that I fear there are incentives (mostly political/security interests)

Nah, it's mostly just a mix of laziness, rigor, and salesfolk.

Most people don't want to, and can't, properly design a hash algorithm that works well. Public ones like SHA have received so much scrutiny, they are extremely well vetted...and then there's the mostly valid attitude of "never roll your own crypto" - Don't, not in production or anything that could become production. Unless you are a group of highly skilled cross domain career cryptographers/mathematicians...

Which leads to the last bit: people build whole businesses out of selling "security products" on top of publicly available crypto, then make the argument that you shouldn't do it yourself, buy theirs. Which sometimes makes sense - often it is a shill/marketing ploy. Rarely do these products provide much on top of the core freely available code...and they probably shouldn't, or else there is probably untrustworthy nonsense inside.

So yeah, don't assume malice where first incompetence is possible.


This seems pretty silly, as there is extensive (one might say tedious) detail on why the decisions in a hash function or block cipher were made; your challenge is that you have to do a literature search to collect all the reasons --- but that's science for you.


> from what I've seen, there's a lot of "obscurity" to this; there are many seemingly arbitrary choices all over the place.

When is a (7-/pk/win)zip compression algo not an encryption algorithm?

Do the use of certain mathematical functions make it an encryption algo?

I've always found the use of prime numbers in some encryption algos to be a red herring, namely because in theory there are an infinite number of primes, but in practice your computing device can only use one of a finite number of primes, otherwise it would take too long to encrypt something.

With this in mind, do primes actually make it easier to decrypt encrypted data?

>What's worse is that I fear there are incentives (mostly political/security interests) to keep the field small

Discussions like this, i.e. communication, poke light into those dark crevices of intellectuality.


Isn't all the math involved in the end based on the modulo operator and prime numbers?


Yes in "Galois Field" arithmetic. But GF(2^8) (or whatever) arithmetic is only in AES and a few other ciphers/hash functions. SHA-256 looks like an XOR / Add / Rotate kinda cipher.


It's helpful to understand that the algorithm wasn't designed the way it's presented in this illustration, and consists of somewhat discrete components.

It's an iterated design, like a block cipher, meaning that it's built around a simple round function that's repeated a bunch of times on each input, rather than a super-complicated function run once.

It belongs to a large, important family of cryptographic hashes called Merkle-Damgård, which pads messages to a multiple of a fixed block size, chunks them into blocks, and feeds each block to an iterated "compression function" (the heart of the hash) that takes a block and the chained n-1th compression value and spits out the nth compression value. So a lot of the mechanism in this illustration is just the vanilla mechanics of an MD hash.

Iterated cryptographic designs generally have "schedule" or "expansion" functions that take the (small) input to the iterated function and "expand" it so that not all the iterations are working on exactly the same data; in the same way, a block cipher will have a "key schedule" that expands your 128-bit key into distinct "round keys" for each round. Message and key schedules, to me, feel like tiny little ciphers embedded in the larger cipher, and they're yet more of the complexity you're looking at, but can be studied independently (the SHA2 message schedule is apparently a big part of what makes it difficult to attack).
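
For the curious, SHA-256's message schedule looks roughly like this (a sketch following the public pseudocode, not this site's code): the first 16 words come straight from the block, and the remaining 48 are expanded from them.

  def rotr(x, n):
      return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

  def expand(block):                 # block is 64 bytes
      w = [int.from_bytes(block[i:i + 4], "big") for i in range(0, 64, 4)]
      for t in range(16, 64):
          s0 = rotr(w[t - 15], 7) ^ rotr(w[t - 15], 18) ^ (w[t - 15] >> 3)
          s1 = rotr(w[t - 2], 17) ^ rotr(w[t - 2], 19) ^ (w[t - 2] >> 10)
          w.append((w[t - 16] + s0 + w[t - 7] + s1) & 0xFFFFFFFF)
      return w                       # 64 words, one per round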

When you see magic numbers in a block cryptography design like this, two relatively safe bets you can make are that they're either "nothing up my sleeve" numbers chosen from e.g. mathematical constants, just for the sake of avoiding arguments, or that they're the product of statistical testing to maximize things like diffusion or minimize things like linear or differential characteristics.

With all block cipher cryptography one goal you always have is introducing nonlinearity; you can accidentally design a simple iterated function that is actually linear, and that cipher will be solvable simply with Gaussian elimination. People have shown me CTF levels that, for instance, linearized the AES round function. So when you see particular operations chained together in a cipher/hash design, keep in mind that they're balancing goals of (1) ensuring nonlinearity, (2) maximizing diffusion, the rate at which a single-bit change in the message totally scrambles the output in the shortest number of rounds, (3) optimizing metrics to avoid differential and linear cryptanalysis, (4) maximizing performance on target hardware.

As was suggested downthread: a good way to come at this stuff is to start with MD4, and then read about MD5 (vs. MD4), then SHA1, and then take a closer look at SHA2. This stuff didn't get figured out all at once, and all these hashes are related. You might find MD4 easier to get your head around.

For the linear and differential cryptanalysis stuff, which is surprisingly (given its age) important in cipher/hash design, a great starting point is the Heys tutorial, which is built around worked examples of both attacks:

https://ioactive.com/wp-content/uploads/2015/07/ldc_tutorial...


> You might find MD4 easier to get your head around.

You can even generate collisions for it by hand calculation:

https://www.iacr.org/archive/fse2007/45930331/45930331.pdf


What a great question and what an excellent answer. Thank you!


1. You want an invertible operation. Invertible operations do _NOT_ lose information, and therefore have the maximum amount of entropy per step.

2. You want the invertible operation to pass a statistical test called "differential cryptanalysis". Over multiple rounds, it must be difficult / impossible to "predict" how 1-bit of difference changes the state. (ie: 1-bit of difference should lead to a 50.0000000% change in every output bit. If it's a 50.1% or 49.9% change of bits, you fail, because someone running cryptanalysis will pick that up).

----------

#1 is somewhat easy. It turns out that Data XOR (Rotate-left Data) XOR (Rotate-right Data) is all you need to make an invertible operation. 5-operations, no more (any more is redundant and therefore unnecessary use of compute power), no less (any less is not invertible and therefore loses information / entropy each step).

#2 is complicated: you gotta understand differential cryptanalysis and run a bunch of tests.

-----------

The discovery that (Data) XOR (Rotate-left Data) XOR (Rotate-right Data) was invertible became extremely popular in the 2010s through 2020s, and has become the cornerstone of XOR / Rotate / Add ciphers (aka: chacha), pseudorandom generators, and hash functions.

I don't know quite when it was originally discovered, but Jenkins was blogging about the importance of invertibility and playing with invertible xor/rotate stuff in (non-crypto) hash functions way back in the 90s.

I know Knuth's "Art of Computer Programming" book 2, Seminumerical Algorithms, discusses the importance of invertibility in random number generators, which is closely related to hashing / cryptographic procedures. So this "understanding" has been around for decades, but has only "become popular" in the SHA256 / Blake3 / pcg-random era.

------

In the 90s, ciphers were mostly using SBoxes for this step ("confusion", to grossly change a value to another value without losing information). But today, modern CPUs are much faster at add/xor/bitshift operations than reading/writing to memory. So SBoxes are no longer a high-speed methodology / primitive for these kinds of operations.

It makes more sense to change our algorithms to use a new "invertible confusion operation" (aka: what SBoxes did before, and what ((Data) XOR (Rotate-left Data) XOR (Rotate-right Data)) does today).

--------

EDIT: Remember: the modern crypto-primitive is just a combination of "confusion" principles and "diffusion" principles.

1. Confusion "lossless transforms" numbers into other numbers. (A "set permutation" of sorts)

2. Diffusion "moves" bits from one number into other numbers.

Iterating over confusion + diffusion many times (IIRC, SHA256 is 64 rounds) is all you need to make a cryptographic cipher. If you "just" need a pseudo-random number generator or hash function, maybe 5 to 10 rounds is all you need.


Not sure if I understand you right, but sha256 is not invertible; there are a couple of steps where you actually just 'cut off' information that is lost afterwards.


The primitive (data) XOR (rotate-data) XOR (rotate-data) is invertible, which means there's no "bit funnel" (Bob Jenkins's term).

You want your "primitives" to be invertible, so that your one source of "non-invertible" operations is controlled very carefully.

Hash functions are non-invertible. But all operations on the "internal state" should be invertible (!!!!) to maximize the entropy per round.

--------

All good hash functions have invertible operations (aka: a bijective (1-to-1 and onto) operation) over their state to "distribute" the bits around in some manner. You then perform a non-reversible XOR over the input message data (this is the one and only non-reversible step), maybe after operating over the input data somehow.

Whenever you're "mixing" bits around in a cipher or hash, you need every step to be invertible to maximize your entropy. If you have 2^256 possible input states, you want to have 2^256 output states. By the pigeon-hole principle, this only happens if you have a bijective / 1-to-1 and onto invertible operation.

--------

Let's look at the opposite. Let's say we have a non-invertible function. That is, 2^256 possible inputs become only 2^128 outputs. Guess what? We've just lost 128 bits of information (!!!).

It doesn't matter how complex your operation is. If you have 256-bits of input and only 128-bits of output, you've "lost information". You _NEVER_ want to lose internal-state information.

The hash-function's internal state should be at a maximum 256-bits (with 256-bits of entropy) every step. Sure, the input might be 512-bits, 4096-bits, or 100-MBs long, but each "step" of a 256-bit hash should retain 256-bits of state to be maximally "difficult" to reverse.

That's kinda-sorta obvious if you think of it from a pigeon-hole perspective.
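
A toy demonstration of that funnel at 16-bit scale (my own sketch): apply a non-invertible mixing step repeatedly and the reachable state space keeps shrinking.

  def rotr16(x, n):
      return ((x >> n) | (x << (16 - n))) & 0xFFFF

  def lossy(x):
      return x & rotr16(x, 3)        # AND throws information away

  states = set(range(1 << 16))
  for _ in range(3):
      states = {lossy(x) for x in states}
      print(len(states))             # fewer and fewer reachable states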


I guess sort of intuitively (I'm not a cryptographer):

If your round function isn't invertible, then it's going to converge at some point on some fixed value, and the round function is going to stop doing anything useful.

More broadly, SHA2 is a sort of instance of a construction called Davies-Meyer, which treats each message block as the key to a bona-fide block cipher (SHACAL, in SHA's case), each encrypting the previous block. It's hopefully pretty obvious why a block cipher core needs to be invertible. :)

So I also find it kind of helpful to remember that you can take any block cipher and turn it into a hash, and then a "good" hash function is just optimizing the block cipher core around the goals of a hash function (be fast, be nonlinear, be hard to find differential trails through the whole sequence of round functions, &c).

It's a thing I don't love about illustrations like the one we're discussing, in that it sort of presents this "intelligent design" problem of, like, "how did we arrive at this incredibly complex fully functioning eyeball", when there's a whole series of smaller evolutionary steps that led to this point; it's more productive maybe to look at the bones of the whole design before diving into the details of which bits go where at each precise step.


> I guess sort of intuitively (I'm not a cryptographer):

Don't worry. I'm not one either. And I think that's why I am actually able to tell you that invertibility / 1-to-1 and onto bijections is an important concept :-)

An actual cryptographer would tell you its 1-to-1 and onto and move on.

> It's a thing I don't love about illustrations like the one we're discussing, in that it sort of presents this "intelligent design" problem of, like, "how did we arrive at this incredibly complex fully functioning eyeball", when there's a whole series of smaller evolutionary steps that led to this point; it's more productive maybe to look at the bones of the whole design before diving into the details of which bits go where at each precise step.

Agreed. There's a lot of history here, and knowing all of the history helps a _LOT_.

Go back 50 years and ciphers are way easier to grok. DES / Feistel ciphers are really easy for example. But then we discovered issues with them and iteratively improved.

The old "DES" / Feistel cipher principles of confusion and diffusion remain with us today, but each step has been honed for 90s-era computers (SBoxes and 32-bit numbers), and then honed again for 2010s-era computers (recognition that XOR / Rotate / Add is a faster primitive today than memory-based SBoxes).

I don't think any of the principles have changed since DES / Feistel cipher days. It's just that today's designs are better for today's computers.

------

EDIT: As far as I can tell: "confusion" can be created by (data) XOR (rotate-data) XOR (rotate-data) primitives. "diffusion" can be created by the "ADD" operator (in particular: "carries" will diffuse the bits around).

So XOR, rotate, and Add are the only primitives you need to make a modern crypto-cipher. All three primitives are outrageously fast on modern machines.

AES and other older ciphers tried to make each round relatively high quality. Modern ciphers try to make each round low-quality, and then do something like 64-rounds or 80-rounds to make up for it.

So you'll see old ciphers like AES with just 11 rounds, but modern ciphers / crypto algorithms like SHA256 use 64-rounds.


That's right, but sha256 also utilizes e.g. a shift operation with a fixed length for the result. Meaning: you're moving information "out of scope". Or all the additions: the result of adding two 32-bit words is often greater than 32 bits and is therefore reduced.


Everything in context.

The operations you're talking about are "sigma_0" and "sigma_1", I believe, which are defined as:

sigma_0(x) = Rotate7(x) XOR Rotate18(x) XOR shift3(x).

sigma_1(x) = Rotate17(x) XOR Rotate19(x) XOR shift10(x).

Where "shift3" and "shift10" are both lossy operations.

----------------

While "shift3" and "shift10" are lossy, I'm not 100% certain that "sigma_0" or "sigma_1" is lossy. But that discussion aside, both sigma_0 and sigma_1 are applied to the _message_, not the internal SHA256 state.

The _message_ needs to be compressed, so a lossy operation over the message is not only expected, but required. 4096-bits of input need to become 256-bits of output. 8192 bits of message-input needs to become 256-bits of output.

-----------

But if you look at the "intermediate hash chain" where H(i) = a + H(i-1) for 64 rounds, all operations over "a" and the internal hash-state are invertible operations (SIGMA_0 and SIGMA_1 are both invertible, being (x) XOR (rotate x) XOR (rotate x) style functions).

------

I'm not saying that the "whole" hash function needs to be invertible. I'm saying that __particular__ elements of the hash function _should_ be invertible. The design of these particular elements (in particular, SIGMA_0, which is (Rotate2(x) XOR Rotate13(x) XOR rotate22(x))) is _clearly_ and evidently invertible / 1-to-1 and onto bijection / confusion principles.

The particular constants (why "rotate2", "rotate13" and "rotate22") are chosen for other reasons: probably differential cryptanalysis, but I admit that I'm not 100% sure on that (that's my expectation though).


Do you recommend any resources to learn about this?


The information in the above post is a merger of like, 20 different sources of information.

Your standard cryptographic books from any undergraduate program will tell you about the basics of confusion / diffusion, but I don't think the concept is very difficult at all.

Invertibility is hugely important but not discussed very much. It seems like crypto-experts 'obviously' know about it so they just don't talk about it? Knuth and Bob Jenkins both talked about the importance of invertibility to random number generators. EDIT: Invertibility is the basis of bijective / 1-to-1 and onto permutation functions. To be fair, I think everyone discusses the concept but maybe with different words.

Here's Jenkins's discussion on (non-crypto) hash functions, including a few paragraphs on "invertibility" (or "funnels" as they put it): http://www.burtleburtle.net/bob/hash/doobs.html

The "pcg-random" generator also has a large discussion on invertibility. Honestly, one of the best writeups on the subject, and I felt like my IQ went up severely after reading the paper end-to-end: https://www.pcg-random.org/paper.html

--------

The study of the invertibility of (data XOR (rotate-right data) XOR (rotate-left data)) is covered in many blogposts. https://marc-b-reynolds.github.io/math/2017/10/13/XorRotate....

The study of invertibility in general was covered by Lemire: https://lemire.me/blog/2016/08/09/how-many-reversible-intege...

-----

So as you can see, invertibility is "important", but rarely discussed as important. It's just assumed that everyone knows how important it is. Once you realize that invertibility / lack of funnels is important, everything else makes sense.

Galois field multiplication is useful in AES because it's invertible (multiply by the GF(2^8) reciprocal). Aaaaah. I see. Why didn't my undergrad teacher just start with the importance of invertibility before talking about prime numbers and extension fields?

-----

Ah right. Once you know that these things have great properties (ie: invertible), then you can just read the SHA256 paper directly and the rest is just "obviously" confusion + diffusion principles.

-----

Linear cryptanalysis is the "standard" run-a-distribution kinda thing. Anyone who has done Chi-squared tests over random numbers kinda gets this principle. Honestly, your standard non-crypto RNGs are roughly at this level at this point, so the study of any statistical test for any random-number generator is good. Knuth / Art of Computer Programming Vol 2 is one source of information on Chi-squared tests, but the study of statistics in general also results in Chi-squared tests and the like.

Differential cryptanalysis is more difficult and looks at the change-of-bits at each step of every operation. I don't quite remember how I was introduced to the concept, but differential cryptanalysis of many popular ciphers is a well-studied thing throughout the field.

--------

Knowledge of "what is fast" and "what is not fast" is... not really easy to come by, and seems to change as computer architectures change. I know that memory has gotten relatively slower compared to CPU-speeds in the past 30 years. I can imagine that in the 90s, an XOR-rotate-add cipher would take "relatively more" time than today, and that SBox-based / lookup table based ciphers were far more popular back then.

I'm not sure where I learned that information. Maybe software optimization manuals in general?


Forgive me for replying here, I couldn't find an option to DM you. And I cannot find any implementation of this in javascript, nor anyone else able to help me.

Can you shed some light here why I'm getting these differences?

https://jsfiddle.net/7kf15dje/

Here is an example of the output I'm getting:

  //       X  A  B           X^1         X^-1 :: Difference
   471490377  6 13 =  1365552781 =  471490377 :: 0
  1528396978  9 11 = -1576695076 = 1528396978 :: -0
  1592322722  9 20 =   622346385 = 1592322722 :: -0
  1214152986  8 16 = -1748578289 = 1214152985 :: -1
  1193897367  2 16 =   907713766 = 1193897366 :: -1
   335642564  9 10 =   318891964 =  335642564 :: -0
   486208953 16 23 =   894211128 =  486208952 :: -1
   629577059 13 14 =  1383225523 =  629577058 :: -1
  1609442937  8 18 =   674046110 = 1609442937 :: -0
   234450967  6 12 =  -459008694 =  234450966 :: -1
  1840721644 19 28 =  -602984005 = 1840721644 :: -0


I can't edit my post anymore but I found the solution. All I had to do was to convert the random input value (X above) to a uint32 using (X >>> 0).


You did things the hard way by trying to construct the exact inverse.

---------

I suggest the following instead:

1. Create a 16-gigabyte file consisting of f(x) from x=0x00000000 to x=0xFFFFFFFF.

2. Sort the file.

3. Determine that no repeats exist, that is, the values go from 0x00000000 to 0xFFFFFFFF.

--------

Once you have determined that all 32-bit inputs result in all 32-bit outputs, you've determined that the function is invertible. 4-bytes x 2^32 possible inputs == only 16GB these days, small enough to be handled by any old computer... possibly entirely in RAM.

But if you don't have enough RAM for that, an SSD or even a hard drive would be fast enough for the above procedure. It may take a few minutes but it's not that hard.

You see, the goal is to "prove you have an invertible function". At no point do you have to accomplish the difficult task of actually finding the inverse.

-------

Well, I guess if you're "just following the blogpost" about the inverse, maybe that's easier. But from the perspective of "I don't know the inverse yet", it's really difficult to figure out. So you should think about the simpler brute-force methodologies that a modern computer can do in just a few minutes.
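
Scaled down to 16-bit words so it runs instantly, that whole procedure is just a few lines (my sketch, with arbitrary rotation constants):

  def rotr16(x, n):
      return ((x >> n) | (x << (16 - n))) & 0xFFFF

  def f(x):                          # x XOR (rotate x) XOR (rotate x)
      return x ^ rotr16(x, 3) ^ rotr16(x, 7)

  outputs = {f(x) for x in range(1 << 16)}
  print(len(outputs) == (1 << 16))   # True: all outputs distinct, so f is invertible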


MD5 and SHA-1 are predecessors, and they are simpler, so certainly by starting from the understanding of the simpler ones. Just to say it wasn't all conceived in one go.


You may not get an exact answer for SHA-1 or SHA-2, because they were designed by NSA, and I'm not sure whether a complete design doc has been published. SHA-3, by contrast, was designed in an open competition, and is completely different from SHA-1 and SHA-2. But here's a rough go at it.

First of all, the overall form. SHA-2 is a Merkle-Damgård construction, meaning that it has a "compression function" which takes an algorithm state and a message block, and outputs a new algorithm state. The final block gets some padding, so that eg "x" and "x\000" don't have the same hash, and then the final algorithm state is the output. (This is a known weakness: from H(x) you can calculate H(x concat padding(x) concat other stuff). SHA-3 doesn't have this weakness. This weakness doesn't itself break collision resistance, but it can be dangerous for other uses of SHA2 if you aren't careful.)

The compression function is a Davies-Meyer construction, which means that given an input state, it:

* Copies the state.

* Encrypts the state with a "cipher" of sorts (the "cipher" extracted from SHA-2 is known as SHACAL-2), using the message as a "key".

* Adds the copied state to the encrypted state to produce a new state.

Davies and Meyer proved that, for an ideal cipher, this would make a collision-resistant hash function. Note that because the update is a "cipher", it must be internally invertible. This is a good idea anyway, because any non-invertibility is by definition a collision, and you want to push that out as far as possible to make collisions hard to find. The step that isn't invertible is the final "add the copied state to the encrypted state".

Within that design, the SHA-2 cipher is based on a shift register. Basic shift registers have been used in crypto for many decades; traditionally you get almost all of the bits in the register by shifting its current state, and at least one (typically only the one shifting in) by computing a function of the other bits. To extend this to general-purpose computers, you can do the same thing for words (32-bit words for SHA-224 and -256 and 64-bit words for SHA-384 and -512). So you have A shifting to B shifting to C etc, with a final value that's computed and shifted back into A. You can see there's a slight twist, which is that instead of E=D you get E=D+stuff. I expect that this mitigates issues where the same stuff is used in the same way throughout the whole round.
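
Concretely, one SHA-256 round looks roughly like this (a sketch from the public pseudocode, not the parent's wording); you can see the word-wide shift register, with new values injected at A and E:

  def rotr(x, n):
      return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

  def round_fn(state, k_t, w_t):     # k_t is the round constant, w_t the scheduled message word
      a, b, c, d, e, f, g, h = state
      S1 = rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)
      ch = (e & f) ^ ((~e & 0xFFFFFFFF) & g)
      t1 = (h + S1 + ch + k_t + w_t) & 0xFFFFFFFF
      S0 = rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)
      maj = (a & b) ^ (a & c) ^ (b & c)
      t2 = (S0 + maj) & 0xFFFFFFFF
      # the shift register: everything slides one slot; A and E get fresh values
      return ((t1 + t2) & 0xFFFFFFFF, a, b, c, (d + t1) & 0xFFFFFFFF, e, f, g)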

The "message schedule" is likewise based on a cipher's key schedule. The idea in a cipher is that each round needs a different version of the key to avoid "slide" attacks, so you mix the key during the round as well as mixing the cipher state. I'm not sure what the corresponding theory is for a hash function, but it's also pretty clear that you want a single byte of the message to be used in many places throughout the operation, so mixing the message is a must.

For the operations, the biggest constraint is that they need to be as nonlinear as possible, as cheaply as possible, because doing a bunch of linear functions in a row still gets you a linear function, and those are easy to break. They need to be nonlinear both as bits and as integers, so a popular design is to mix addition (linear over ints) with xor (linear over bits) and rotations (linear over both, but you need them to propagate changes from the top of a number to the bottom). That way the whole operation won't be linear over bits, nor over the integers, but all three operations are cheap on PCs. This is called an "ARX" (Add-Rotate-Xor) design.
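
A tiny illustration of that point (mine, with made-up constants): a function built only from xor and rotations satisfies the GF(2) linearity identity f(x) ^ f(y) ^ f(0) == f(x ^ y); add a single modular addition and the identity breaks.

  M = 0xFFFFFFFF

  def rotl(x, n):
      return ((x << n) | (x >> (32 - n))) & M

  def xor_only(x):                   # xor/rotate only: linear over bits
      return x ^ rotl(x, 5) ^ 0xDEADBEEF

  def arx(x):                        # one modular addition mixed in
      return ((x + 0x9E3779B9) & M) ^ rotl(x, 5)

  x, y = 0x12345678, 0x0BADF00D
  print(xor_only(x) ^ xor_only(y) ^ xor_only(0) == xor_only(x ^ y))  # True, always
  print(arx(x) ^ arx(y) ^ arx(0) == arx(x ^ y))                      # False for these inputs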

Picking the exact operations and rotation constants is something of a black art, and while you can trace descent from MD5-SHA0-SHA1-SHA2, the exact design criteria for the SHAs might not even be public. But you can see how some of the decisions may have been made. Consider the rotations in sigma0 and sigma1. The rotation constants in these designs are typically chosen so that if you look at any slice of the output, it will have many different input bits affecting it, which in turn means that small changes diffuse rapidly throughout the design. This is not done perfectly in SHA2, in that the same bits are used for the Maj and Ch in successive iterations (i.e. you compute Maj(A,B,C) and then Maj(newA,A,B) next time since they shifted over), so you'd get correlated outputs of those steps from round to round, but apparently it's good enough.

The setup constants are chosen as the square and cube roots of prime numbers, as "nothing-up-my-sleeve" numbers. The idea is that if the initialization were random numbers with no rationale given, then it would be a possible place that NSA could have hidden a backdoor. But it would be difficult to create a backdoor where the initialization is the square roots of the primes.


About linearity, that just means that there's no "multiply" or "exponent" function equivalent to our primitives.

For example: if "add Foo" were our primitive, then 64-rounds of "add-Foo" could be broken by simply "data + 64 * Foo". IE: We found a "trivial shortcut" and someone else can break our cipher way faster than we can use it.

Addition is linear over numbers, because of multiplication.

XOR is linear over bits: because XOR undoes itself. (5 x (XOR-Foo) is equivalent to 1x (XOR-Foo)).

It turns out that combining addition and XOR together results in a non-linear function over numbers and non-linear function over bits.

We can't "shortcut numbers", because the XOR-messes up our "number math". We can't "shortcut bits" because Add has this really annoying "carry" that basically has a 50% chance of shuffling bits around.

As such, we "expect" that combinations of Addition, XOR, and Shifts are non-linear. I'm not sure if it's been mathematically proven however, but no one has been able to show a "shortcut" thus far.


That's really cool. I made a terrible one for SHA1 years ago, yours is 1000x better.

https://lock.cmpxchg8b.com/sha1/visualize.html

I read a paper at the time where someone described a tool they made to find a near-collision; they explained they were just flipping bits and visually observing the effects. That sounded kinda fun, but they didn't release it, so I tried to replicate it from their description!


Your tool seems to understand the gist of differential cryptanalysis better though.

You can track a 1-bit change or 3-bit change to "M" and see how it propagates through the SHA256 rounds in your tool.

----------

So your tool is probably better at understanding the underlying design of SHA2. We know that SHA2 was created well into the era of differential cryptanalysis, for example, so the designers would have inevitably done analysis similar to how your tool works.


Before watching this: "Why can't cryptographers just figure out some tricks to crack these hash algorithms?"

After watching this: "How can any cryptographer EVER figure out any trick to crack these hash algorithms?!"


> "How can any cryptographer EVER figured out any trick to crack these hash algorithms?!"

More so, "How can any cryptographer EVER figure out any trick to come up with these hash algorithms?!" Seriously, they are incredibly impressive mathematical algorithms. Even coming up with an algorithm that is able to show the avalanche effect is mind boggling. To make sure that the algorithm is not biased to a set of samples AND shows the avalanche effect is tremendously mind blowing.


They can't. But there are certainly unsavory actors that are able to trick a certain percentage of the unsuspecting to just give them a private key.



This reminds me that I've always wanted to make a huge interactive combinatorial circuit that computes SHA-256 and shows all its internal state, then put it on a site with the claim that anyone who can make its output match a certain clearly-constructed value (e.g. 0123456...ABCD...) will win a prize. No mentions of hash algorithms or other such phrasing to deter anyone. I wonder how many people would try such a "logic puzzle", how much time they'd spend on it, and if we might even get the first successful preimage attack from that.


I think making a problem more accessible like that is the fastest path to a solution.

It reminds me of the Stephen J Gould quote:

"I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops."


I actually started this because my first idea was: can I implement SHA-256 with just TTL logic gates? That should be possible, but it would take months to do.

for puzzles try 12037465 for some coffee :)



There also exists a written description showing the process in Python, step by step, which I consider more helpful, because you do not need to stop and play the video.

https://nickyreinert.medium.com/wie-funktioniert-der-sha256-...


not everybody speaks barbarian sir


Python that bad?


I think it's a reference to the written human language of the blog post, which when I attempt to view, is in German. I realize both of the parent comments could be read in jest; I'm mentioning this here very clearly since tone and implication sometimes may not translate, and passing readers may stumble on the context(s) here.


hahahaha lol


Can it be proven whether values of m exist such that SHA256(m) == 0?

If I were omnipotent and wanted people to believe in me, I would write a book that hashes to 0, so that anyone could verify its authenticity.


One way to prove it might be to actually find the "null hash"; turn it into a challenge/puzzle, don't mention hashing or crypto or that it's really hard, and let all the bored people of the world play with it. Perhaps someone might notice something that all the highly-trained cryptographers have missed all along: https://news.ycombinator.com/item?id=30245419


Maybe I'm being incredibly naive, but it seems like this would be trivial. Can you just start with the output hash and then essentially run the algorithms backwards? Obviously the resulting "input" would be random-ish garbage, but it seems like if all you care about is the output, you can pretty much just "pick" any data for the last step that produces the output. Then do likewise for the step prior, and so on.


As a comment above stated, part of the "input" is the initialized values:

> Initialize hash values h0 to h7: first 32 bits of the fractional parts of the square roots of the first 8 primes (2..19).

My guess is h0 to h7 change throughout the algorithm. If you perform each step in "reverse" as you suggest, "picking" any input at each step that produces the required output for that step, then you may not arrive at the correct initial state with the square roots of the first 8 primes.

You'll arrive at "random-ish garbage".


Ah, yep. You're right. I overlooked that part. It looks like it's truly non-reversible—even if you don't care what the resulting input is.


If you do ever figure out how to reverse SHA-256, best keep it a secret until you've sold all your free Bitcoin.


It's funny, people used to explain that the encryption used in bitcoin is secure because otherwise all your online banking would be at risk.

People don't say that any more because if someone were to break SHA-256 they would actually steal bitcoin first.


Sure, that would be a pretty high difficulty factor, but it's possible.


But is it provably possible for SHA-256?

An n-bit cryptographic hash function should ideally cover every n-bit output value, given slightly more than n bits of input, but I don't know whether this has been proven for any real-world functions.


A hash function aims to replicate the properties of a truly random function.

The probability that a random function does _not_ output 0 given some specific input block is (1 - 1/2^n). Taking each of the possible 2^b input values into account this means that 0 is not an output for any input with probability (1 - 1/2^n)^(2^b) ~ e^(-2^(b-n)).

For SHA-256 with n = 256 and b = 512 (one can treat the compression function as a 768 to 256 bit random function, but we can stick to the worst-case single-block message case here) we have that the probability of 0 _being_ an output for a single-block message is 1 - e^(-2^256), which effectively means it exists, but the probability never quite reaches 1.


Your math looks plausible for an ideal cryptographic hash, but wouldn't you first have to prove that SHA-256 actually does behave as if each unique input generates an independent random number?


There's no real way to connect the compression function to any kind of mathematical model that would help here, other than modeling it as random. Provability is out the window.

So what you do is assume it behaves like a random function until proven otherwise, by _some_ property that deviates from this model. (This is not even the case for SHA-256, since neither the hash nor the compression function can be modeled as random oracles (due to length extension and the Davies-Meyer structure), but we can conveniently forget that for the duration of this thread.)

There _are_ some hash functions based on number-theoretic problems where you could reason about such things, but none of those are in common use, since they are usually slow and/or require an unstructured output transformation anyway to turn them into proper, rather than just collision-resistant, hash functions.


You could in theory also get that with a lot of computation of course


It should be straightforward to design a hash function that can't be preimage attacked using all the computing power in the universe.

Maybe an omnipotent cryptographer would find flaws in SHA-256, but then they could design a better function and include it in the book.


Maybe their omnipotence includes being lucky enough


Why would an omnipotent being need luck? They have infinite brute force.


Similar project which visualizes SHA-256 into terminal: https://github.com/in3rsha/sha256-animation


This comes to my attention at a really convenient time. As a teenager, I initially got interested in Computer Science due to cryptography. Over a decade later, I've gotten into the subject for the first time since then.

For the last few days, I've been writing my own encryption for fun even though it's 100% not secure or powerful enough. My belief is that even though it's not super useful, the experience of attempting to write one is teaching me a lot more than I would have learned by simply studying it.


Rather than write crypto, what I'd actually recommend is to break it. Schneier put together a block cipher cryptanalysis course a long time ago and while I don't usually recommend his crypto books these days, the course is good: https://www.schneier.com/academic/archives/2000/01/self-stud... (in this case, his crypto book might actually be useful, because it documents some of these out of date ciphers. There's (was?) a mistake in the DES code though iirc).

It is essentially a tour of all the early cryptanalysis literature, complete with suggestions of ciphers to target (e.g. FEAL). This will give you a sense of how the attacks work. Many of the ciphers are old, but I wouldn't let that put you off.

The study technique for this would be to implement each cipher with toggles to control rounds and any other features, then implement the attacks. Most of the papers should be open access by now since the 'course' was written in the year 2000. You could also 'catch up' on the constructions and attacks that have come out since.

I would caveat this with: what I am advising is purely for potential interest. Bear in mind there is little need to implement new block ciphers these days (what I'm saying is: this is a very specialized skill and most people won't find jobs in it).


How long before we see this website as the source for some "hacker sequence" in a movie where a person wearing a black hoodie states they are "... working on cracking their SHA-256 encryption, should only take a sec."


If such a movie even mentions SHA-256 I think that's above average on its own.


That depends..

"Just a second, I need to backdoor the SHA256 rootkit to penetrate the directory. Shit, they have an X62 firewall. Luckily I brought my pentester."


"These packets are not going to sniff themselves."


You're right. Still, mentioning SHA-256 brings better possibility of "someone knowing something", compared to, say, GUI, Visual Basic, and tracing IP addresses.


Syntax-highlighted ALGOL-y code blocks seem to be back in style if they're not going with the constant "toss that screen up to a hologram" bit.

Sometime there will be a nice interview about the design work that goes into these, not necessarily for "hacker sequences" but for general imaginary computer interfaces like https://www.hudsandguis.com/home/2021/theexpanse .


This is fantastic. I once implemented SHA-256 in Google Sheets to visualize it, but it had horrible performance compared to this. This is the best visualization I've seen yet.


SHA2: https://en.wikipedia.org/wiki/SHA-2

https://rosettacode.org/wiki/SHA-256

Hashcat's GPU-optimized OpenCL implementation: https://github.com/hashcat/hashcat/blob/master/OpenCL/inc_ha...

Bitcoin's CPU-optimized sha256.cpp, sha256_avx2.cpp, sha256_sse4.cpp, sha256_sse41.cpp: https://github.com/bitcoin/bitcoin/blob/master/src/crypto/sh...

https://github.com/topics/sha256 https://github.com/topics/sha-256

Cryptographic_hash_function#Cryptographic_hash_algorithms: https://en.wikipedia.org/wiki/Cryptographic_hash_function#Cr...

Merkle–Damgård construction: https://en.m.wikipedia.org/wiki/Merkle%E2%80%93Damg%C3%A5rd_...

(... https://rosettacode.org/wiki/SHA-256_Merkle_tree ... Merkleized IAVL+ tree that is balanced with rotations in order to optimize lookups: https://github.com/cosmos/iavl

Self-balancing binary search tree: https://en.wikipedia.org/wiki/Self-balancing_binary_search_t... )


You should be very satisfied with how well you conveyed the algorithm. Well done. What was your approach to arriving at this result?


Thanks :) I first implemented sha256 in js to understand its inner workings. Then I started displaying its variables with react and adding this stepped mechanism. Finally, I added some notes on the left to add some context of what is going on.


Very nice. Have you built any other algorithm visualizations? I have a very strong interest in how algorithms are visualized so that they are more easily understood.


First time doing this type of visualization. I have this one, tailwind.ink, which visualizes a color palette by its luminance, chroma and hue values, but it's not really representing the algorithm behind it.


Someone should make a collection of the various visualization web projects that crop up here from time to time.


I love single-purpose websites like this that put a potentially complex implementation behind an elegantly simple interface. This website’s design and styling are pretty too :) Another useful one is https://www.h-schmidt.net/FloatConverter/IEEE754.html . I’d say even https://godbolt.org/ counts!


Does anyone have any good references, preferably a book but a good detailed website is fine, on cryptography, hashing, public/private keys, tokens, encryption, etc. as it relates to a software engineer? I don't necessarily want to know all the nitty gritty details of how these things are implemented. Rather, I think I would prefer simply understanding them and how to use them, piece them together, etc. to build something out of them.

I just have very little knowledge in this area. I'm going through a how to build a blockchain book right now, and I find myself struggling a little bit where I'm just calling some library functions but not necessarily knowing how to compose things properly.


For that I like this one: https://cryptobook.nakov.com/


Thanks!


I wrote this one: https://app.qvault.io/course/6321ddbf-49eb-4748-9737-6bc12e8...

it's a crypto course where you write the solutions in Go. you might enjoy it :)


JP Aumasson is one of the authors of the BLAKE hashes and wrote "Serious Cryptography":

https://www.amazon.com/Serious-Cryptography-Practical-Introd...


Thanks!


I have an odd request regarding e.g. SHA-3. Can anyone tell me if it is implemented in a way that is in a sense 'one-pass' over its input, i.e. each byte of its input in memory is accessed only once, after which all of the algorithm state is held in registers and the original input is never accessed again? My scenario is one where I'm concerned about TOCTOU-like attacks on the memory where the input is stored, but I don't want to pay the overhead of first copying the whole input to a 'safe' memory location, e.g. imagine I have kernel code wanting to compute a hash over data stored in userspace.


Yes, sha3 reads every input byte only once. It does hold a pretty large internal state that doesn't fit in only registers.


this is funny. when i first learned the algorithm, i found some matlab code that computes it with bit vectors. i added support for displaying them as an image and used the movie feature to generate step by step movies to build intuition.

nice to see someone build something polished that visualizes it in the same way. once you look at the mechanics for each round of the compression function and see the bits get swirled around for yourself, it starts to make intuitive sense.

the other big intuitions are of course, the trapdoor nature of add mod 2^32 (which is implicit in unsigned integer overflow on many machines) and the fact that some operations (like xor) operate in galois field 2, while others (like addition) operate modulo 2^32, and the repeated stacking of the operations in different structures gives the function its nonlinear trapdoor property.

i remember reading a pretty good paper on the arx (add, rotate, xor) family of ciphers back in the day (sort of in the vein of, is that all you need?)...


Man, this is amazing. I had to hand-unroll bit packing in a binary encoding scheme we used in a game. Rare enough that making a tool wasn't worth it, but damn I love your visualizations! Doing something like that would have helped others understand how I was "seeing the matrix."


On the third step(?) of the second step, it says "Copy 2nd chunk into 1st 16 words", but it's accompanied by a visualization of copying the 1st chunk into the 1st 16 words. Am I just totally misunderstanding something?


Fantastic presentation! The utility functions from the source code are just as useful.


Is there a library or application that can take an annotated algorithm, and then generate a website like this one? That would be great for beginning CS and the sorting algorithms and other basic data structures too.


Looks fantastic, but the only thing missing is why each step is done.


great visualization. i've also checked the source code and utility functions. they are very well defined and useful too.

i've coded a sha256 decrypter recently which uses a dictionary attack and brute force. I read lots of articles about sha256 while coding this tool. there were still some missing parts in my mind, but your project clarified them all.

btw, the decrypter i coded -> https://10015.io/tools/sha256-encrypt-decrypt



Pretty cool. Have been looking for something like this for a while. Thanks for building it.

Just sent you a PR for some typos I found while running through an example.


Great visualization. Might become my new screensaver.


One could even feed the hash back into itself to get a different result each time


how long would the cycle last before it starts repeating?


On the order of 2^256 steps, if SHA-256 behaves randomly, since the largest cycle in a random permutation on N elements has expected length ~ 0.62 * N.


SHA-256 is not a permutation; the expected cycle length is ~2^128. It's how collisions are (generically) found.


Pigeonhole principle?


Each new output value i can collide with output 1, 2, ..., i-1. So the collision probability of iteration i is (i-1)/2^256. Adding all of the iterations up you have 1/2^256 + 2/2^256 + ... + (i-1)/2^256 = 0.5 i (i-1)/2^256 which approaches 1/2 as you get to 2^128.

In reality you use some variant of Rho which does not store every previous value but uses something like Floyd's cycle detection or distinguished points and requires minimal storage, at the cost of some extra hash computations but still on the order of 2^128.
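
A toy-scale version of that (my own sketch): truncate SHA-256 to 32 bits and the generic birthday collision shows up after a few tens of thousands of hashes (~2^16), which is the same argument that gives ~2^128 for the full 256-bit output.

  import hashlib

  def h32(i):                        # SHA-256 truncated to 32 bits
      return hashlib.sha256(i.to_bytes(8, "big")).digest()[:4]

  seen, i = {}, 0
  while True:
      d = h32(i)
      if d in seen:
          print(f"collision between inputs {seen[d]} and {i} after {i + 1} hashes")
          break
      seen[d] = i
      i += 1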


This situation calls for Birthday paradox, not pigeon hole.


Then wouldn't that be 0.62*2^256, closer to 2^255 ? Not that it makes a notable difference: at 1 ps per step (1000 GHz, or 1 TH/s), that would take about 2e57 years to happen. The universe is only about 1e10 years old...


It would be at least that, with a smaller contribution to the expectation from smaller cycles. That's why I said "on the order of", i.e. not much smaller.


Well made! Can you share how you made the website?


Can we have a video of this on youtube/dailymotion/vimeo/etc which we can download with yt-dlp?


You can screen record the full screen browser window with OBS (with the mouse disabled too)


That's super cool. Visualized or illustrated algorithms have always looked so magical, to me at least.


Looks beautiful. I understand it is really complicated (even though I know how SHA-256 is calculated).


Please do this for b+trees. :)


Beautiful color palette.


This is awesome!


Two of the buttons at the top have no "title" attribute and therefore no tooltip.


Very satisfying!



