So, how do people come up with these things? I assume every aspect of the design is carefully considered to defend it against various attacks. For example, why "right rotate 7 XOR right rotate 18 XOR right shift 3" and not "right rotate 2 XOR right rotate 3 XOR right shift 4"?
Generally, you come up with a set of operations that meets your design criteria (fast in hardware, fast in software, fast in SIMD, whatever), then brute-force the instruction order and/or arbitrary constants (rotations) to minimize known attacks and maximize diffusion.
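For flavor, here's a toy version of that search. It's entirely made up for illustration (a small add/xor/rotate mixer and a crude avalanche metric, nothing resembling a real design team's tooling), but it shows what "brute-force the rotation constants" can mean in practice:

    # Toy brute-force over rotation constants: score each (a, b) pair by
    # how close a one-bit input change comes to flipping half the output
    # bits (the "avalanche" ideal) after a few rounds of a made-up mixer.
    import random

    MASK = 0xFFFFFFFF

    def rotr(x, n):
        return ((x >> n) | (x << (32 - n))) & MASK

    def mix(x, a, b, rounds=3):
        for _ in range(rounds):
            x = (x + rotr(x, a)) & MASK  # add: carries mix neighboring bits
            x ^= rotr(x, b)              # xor-rotate: spreads bits non-locally
        return x

    def avalanche_gap(a, b, trials=500):
        total = 0
        for _ in range(trials):
            x = random.getrandbits(32)
            d = mix(x, a, b) ^ mix(x ^ (1 << random.randrange(32)), a, b)
            total += bin(d).count("1")
        return abs(total / trials - 16.0)  # distance from the ideal 16 flipped bits

    best = min(((a, b) for a in range(1, 32) for b in range(1, 32)),
               key=lambda ab: avalanche_gap(*ab))
    print("best rotation pair under this crude metric:", best)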
> (fast in hardware, fast in software, fast in SIMD, whatever)
"Slow to compute in GPUs" has been another design criterion for a while. It helps against brute force attacks. Of course, you might want your hash to be fast on GPUs too, if you're trying to accelerate something, bu cryptographic hashes usually aim for attack resistance.
No, no, no. That's a criterion for password hashes, which are a totally different construction than standard cryptographic hashes. You can't brute force a cryptographic hash, practically by definition.
I must have conflated Keccak's "slow without specific hardware/extensions" with "slow on purpose against GPUs". Both are similar (why would a GPU have an operation for cryptography?) but obviously not the same thing.
It depends on what the goal of your hashing function is. Pure cryptographic hashes like sha256 want to be cheap to calculate. Slow on GPUs is only a design goal if you're making a password hash function, which is a rather different class of problem to sha256/chacha20 and similar.
I think the commenter you're responding to got a little confused somewhere along the way and conflated cryptographic hashes with password hashing.
EDIT: To be clear, password hashes are also cryptographic hashes - but they're a specific subclass of crypto hashes with different design criteria.
This is a known technique for password hash protection too. By limiting the usefulness of the GPU, you shield yourself from high speed password crackers in the event your password database is compromised.
>why not "right rotate 2 XOR right rotate 3 XOR right shift 4"
In this case, you generally want shift/rotate amounts that are far apart, so that bits far apart in the word have a chance to influence each other. The other operations have only local effects on bits (and, or, xor affect only the same bit position; add affects the same bit position and perhaps a few that follow, via carries). Only the shifts/rotates give a chance for "non-local" effects.
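You can see the non-local effect directly with SHA-256's own sigma0 (the "right rotate 7 XOR right rotate 18 XOR right shift 3" from the top of the thread); a single set input bit lands at three spread-out output positions:

    # sigma0(x) = rotr7(x) ^ rotr18(x) ^ (x >> 3), from SHA-256's message schedule.
    MASK = 0xFFFFFFFF

    def rotr(x, n):
        return ((x >> n) | (x << (32 - n))) & MASK

    def sigma0(x):
        return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3)

    x = 1 << 16                 # a single set bit at position 16
    print(f"{sigma0(x):032b}")  # set bits land at positions 30, 13, and 9

And because sigma0 is built only from rotates, shifts, and XOR, it's linear over bits: which output bits flip depends only on which input bit you flipped, not on the rest of the word. The additions elsewhere in SHA-256 are what break that.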
It's not security through obscurity. In fact, it's the very opposite. You can see the process exactly. The reason this is secure is because the process itself doesn't work backwards. You can create a hash using this algorithm, but you'll never reverse that hash back into the original text.
I'm also wondering, how does this prevent preimaging attacks (or whatever they're called)? That is to say, what's stopping people from reliably producing output based on input?
> I'm also wondering, how does this prevent preimaging attacks (or whatever they're called)?
First, see the Wikipedia entry about preimage attacks.
Second, I am not a cryptographer, but I think in practice there are a couple of things to be aware of:
- make sure slightly different inputs have wildly different outputs (see the sketch after this list)
- make sure no part of the input survives
- practically speaking, there are an unlimited number of inputs that map to most outputs (all? I'm not sure how uniform the distribution of sha256 is), since input is unlimited and output is a short string
- the classic preimage attack, rainbow tables, works because (1) inputs, i.e. passwords, are often short and predictable, and (2) in ancient times password systems didn't use salts
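To see the first point concretely, hash two inputs that differ by one byte and count the differing output bits (the inputs here are arbitrary; Python's standard hashlib does the work):

    # Avalanche effect: a one-byte input change flips roughly half of the
    # 256 output bits of SHA-256.
    import hashlib

    a = hashlib.sha256(b"hello world").digest()
    b = hashlib.sha256(b"hello worle").digest()  # last character changed

    diff = int.from_bytes(a, "big") ^ int.from_bytes(b, "big")
    print(bin(diff).count("1"), "of 256 bits differ")  # typically near 128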
> That is to say, what's stopping people from reliably producing output based on input?
I assume this should be the other way around, which is what I have tried to explain above.
from what I've seen, there's a lot of "obscurity" to this; there are many seemingly arbitrary choices all over the place.
In the end most encryption algorithms boil down to doing 'random' (arbitrary, hard to justify why) things to data and then undoing them exactly in order to decrypt.
the math is all incredibly abstract but not all that complex; the high level of abstraction does make it quite difficult to grasp, though.
What's worse is that I fear there are incentives (mostly political/security interests) to keep the field small and to keep many people far away from this very practical use for all these beautiful, elegant, simple (but extremely abstract) mathematics (referring to the entire cryptography field).
> What's worse is that I fear there are incentives (mostly political/security interests) to keep the field small and to keep many people far away from this very practical use for all these beautiful, elegant, simple (but extremely abstract) mathematics (referring to the entire cryptography field).
I mean, everything you want to learn about crypto is available online, in libraries, in textbooks. Including differential cryptanalysis and the theory behind these mathematical forms (Galois fields make things _EASIER_, not harder, actually. That's why CRC checks and Reed-Solomon codes were based on Galois fields, and AES being based on GF(2^8) takes advantage of those same properties).
--------
What has happened is that the "old generation" of programmers is dying out / retiring. And they aren't passing on their knowledge to the new generation. The "old generation" of programmers were high-math, abstract algebra and more, while "new generation" programmers just never bothered to learn this stuff.
> What has happened is that the "old generation" of programmers is dying out / retiring. And they aren't passing on their knowledge to the new generation. The "old generation" of programmers were high-math, abstract algebra and more, while "new generation" programmers just never bothered to learn this stuff.
There may be some survivorship bias here. Even in the 1990s, business-grade programmers (the ones who, quite frankly, aren't inclined to learn difficult subjects) either went into management or did something else, although the timeframe and ageism are more aggressive these days due to the infantilization and humiliation (e.g., Agile Scrum) that engineers face today.
Research-grade programmers were the minority, even then, although this problem is a lot worse today due to the near nonexistence of R&D jobs.
Not all decryption is doing exactly the same things in reverse. For example, with CTR mode (and thus GCM mode, which is CTR plus GMAC), you call the /encryption/ routine regardless of whether you're encrypting or decrypting data. This means in an embedded environment you can save die space because your program doesn't need the e.g. AES decryption routines too.
I doubt there is any concerted effort to keep the field small. That would be like saying tech companies don't want people learning how to code so that they can maintain an advantage.
If anything, governments and companies are encouraging people to study cryptography so that they are able to hire more experts in the future.
Now, once you get gatekeeper organizations and special licensing organizations like contractors licensing or beauticians licensing, those are examples of groups trying to keep the pool of experts small.
> What's worse is that I fear there are incentives (mostly political/security interests)
Nah, it's mostly just a mix of laziness, rigor, and salesfolk.
Most people don't want to, and can't, properly design a hash algorithm that works well. Public ones like SHA have received so much scrutiny that they are extremely well vetted...and then there's the mostly valid attitude of "never roll your own crypto" - don't, not in production or anything that could become production, unless you are a group of highly skilled cross-domain career cryptographers/mathematicians...
Which leads to the last bit: people build whole businesses out of selling "security products" on top of publicly available crypto, then make the argument that you shouldn't do it yourself, buy theirs. Which sometimes makes sense - often it is a shill/marketing ploy. Rarely do these products provide much on top of the core freely available code...and they probably shouldn't, or else there is probably untrustworthy nonsense inside.
So yeah, don't assume malice where incompetence is the simpler explanation.
This seems pretty silly, as there is extensive (one might say tedious) detail on why the decisions in a hash function or block cipher were made; your challenge is that you have to do a literature search to collect all the reasons --- but that's science for you.
> from what I've seen, there's a lot of "obscurity" to this; there are many seemingly arbitrary choices all over the place.
When is a (7-/pk/win)zip compression algo not an encryption algorithm?
Do the use of certain mathematical functions make it an encryption algo?
I've always found the use of prime numbers in some encryption algos to be a red herring, namely because in theory there are an infinite number of primes, but in practice your computing device can only use one of a finite set of primes, otherwise it would take too long to encrypt something.
With this in mind, do primes actually make it easier to decrypt encrypted data?
>What's worse is that I fear there are incentives (mostly political/security interests) to keep the field small
Discussions like this, i.e. communication, poke light into those dark crevices of intellectuality.
Yes in "Galois Field" arithmetic. But GF(2^8) (or whatever) arithmetic is only in AES and a few other ciphers/hash functions. SHA-256 looks like an XOR / Add / Rotate kinda cipher.
It's helpful to understand that the algorithm wasn't designed the way it's presented in this illustration, and consists of somewhat discrete components.
It's an iterated design, like a block cipher, meaning that it's built around a simple round function that's repeated a bunch of times on each input, rather than a super-complicated function run once.
It belongs to a large, important family of cryptographic hashes called Merkle-Damgård, which pads messages to a multiple of a fixed block size, chunks them into blocks, and feeds each block to an iterated "compression function" (the heart of the hash) that takes a block and the chained (n-1)th compression value and spits out the nth compression value. So a lot of the mechanism in this illustration is just the vanilla mechanics of an MD hash.
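As a structural sketch only (the compression function and IV below are meaningless stand-ins, and the padding is the simplified "append 0x80, zeros, then the bit length" form), the MD skeleton is just this loop:

    # Merkle-Damgård skeleton: pad, chunk into 64-byte blocks, chain a
    # compression function. All real cryptographic strength lives inside
    # `compress`; this placeholder only shows the wiring.
    BLOCK = 64  # bytes, as in SHA-256

    def pad(msg: bytes) -> bytes:
        n = len(msg)
        msg += b"\x80"
        msg += b"\x00" * ((-len(msg) - 8) % BLOCK)
        return msg + (8 * n).to_bytes(8, "big")  # MD strengthening: append bit length

    def compress(state: int, block: bytes) -> int:
        for byte in block:  # placeholder mixing, nothing like SHA-256's real rounds
            state = (state * 31 + byte) % (1 << 256)
        return state

    def md_hash(msg: bytes) -> int:
        state = 0x6A09E667  # stand-in IV, not the real eight-word initial state
        padded = pad(msg)
        for i in range(0, len(padded), BLOCK):
            state = compress(state, padded[i:i + BLOCK])
        return state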
Iterated cryptographic designs generally have "schedule" or "expansion" functions that take the (small) input to the iterated function and "expand" it so that not all the iterations are working on exactly the same data; in the same way, a block cipher will have a "key schedule" that expands your 128-bit key into distinct "round keys" for each round. Message and key schedules, to me, feel like tiny little ciphers embedded in the larger cipher, and they're yet more of the complexity you're looking at, but can be studied independently (the SHA2 message schedule is apparently a big part of what makes it difficult to attack).
When you see magic numbers in a block cryptography design like this, two relatively safe bets you can make are that they're either "nothing up my sleeve" numbers chosen from e.g. mathematical constants, just for the sake of avoiding arguments, or that they're the product of statistical testing to maximize things like diffusion or minimize things like linear or differential characteristics.
With all block cipher cryptography one goal you always have is introducing nonlinearity; you can accidentally design a simple iterated function that is actually linear, and that cipher will be solvable simply with Gaussian elimination. People have shown me CTF levels that, for instance, linearized the AES round function. So when you see particular operations chained together in a cipher/hash design, keep in mind that they're balancing goals of (1) ensuring nonlinearity, (2) maximizing diffusion, the rate at which a single-bit change in the message totally scrambles the output in the shortest number of rounds, (3) optimizing metrics to avoid differential and linear cryptanalysis, (4) maximizing performance on target hardware.
As was suggested downthread: a good way to come at this stuff is to start with MD4, and then read about MD5 (vs. MD4), then SHA1, and then take a closer look at SHA2. This stuff didn't get figured out all at once, and all these hashes are related. You might find MD4 easier to get your head around.
For the linear and differential cryptanalysis stuff, which is surprisingly (given its age) important in cipher/hash design, a great starting point is the Heys tutorial, which is built around worked examples of both attacks:
1. You want an invertible operation. Invertible operations do _NOT_ lose information, and therefore preserve the maximum amount of entropy per step.
2. You want the invertible operation to pass statistical tests from "differential cryptanalysis". Over multiple rounds, it must be difficult / impossible to "predict" how 1 bit of difference changes the state. (ie: 1 bit of input difference should lead to a 50.0000000% chance of flipping each output bit. If it's a 50.1% or 49.9% chance, you fail, because someone running cryptanalysis will pick that up. A crude empirical sketch of this test follows.)
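Real differential cryptanalysis tracks how differences propagate analytically; the crude empirical version below just measures per-bit flip rates on a toy mixer I made up, to show the flavor of the statistic:

    # Under a fixed 1-bit input difference, each output bit should flip
    # about 50% of the time. Measure the per-bit flip rates empirically.
    import random

    MASK = 0xFFFFFFFF

    def rotr(x, n):
        return ((x >> n) | (x << (32 - n))) & MASK

    def f(x):  # toy 4-round add/xor/rotate mixer standing in for a real round function
        for _ in range(4):
            x = (x + rotr(x, 7)) & MASK
            x ^= rotr(x, 18)
        return x

    TRIALS = 10_000
    flips = [0] * 32
    for _ in range(TRIALS):
        x = random.getrandbits(32)
        d = f(x) ^ f(x ^ 1)  # flip input bit 0, see which output bits change
        for i in range(32):
            flips[i] += (d >> i) & 1

    rates = [c / TRIALS for c in flips]
    print(min(rates), max(rates))  # bits far from 0.5 are exactly the bias attackers look for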
----------
#1 is somewhat easy. It turns out that Data XOR (Rotate-left Data) XOR (Rotate-right Data) is all you need to make an invertible operation. Four operations (two rotates, two XORs), no more (any more is redundant and therefore unnecessary use of compute power), no less (any less is not invertible and therefore loses information / entropy each step).
#2 is complicated: you gotta understand differential cryptanalysis and run a bunch of tests.
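On the #1 point, you can verify invertibility exhaustively at a small word size (16 bits here, so all 65536 inputs can be checked in an instant; the rotation amounts are arbitrary picks for the demo):

    # Exhaustive bijectivity check of x ^ rotl(x, a) ^ rotr(x, b) at 16 bits:
    # if all 2^16 inputs give distinct outputs, the map is invertible.
    BITS = 16
    MASK = (1 << BITS) - 1

    def rotl(x, n):
        return ((x << n) | (x >> (BITS - n))) & MASK

    def mix(x, a=3, b=7):
        return x ^ rotl(x, a) ^ rotl(x, BITS - b)  # rotl by BITS-b == rotr by b

    outputs = {mix(x) for x in range(1 << BITS)}
    print("invertible:", len(outputs) == (1 << BITS))  # True for these rotations

Drop one of the rotations and the check fails: x ^ rotl(x, a) always has even output parity, so at most half the outputs are reachable, matching the "no less" claim above.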
-----------
The discovery that (Data) XOR (Rotate-left Data) XOR (Rotate-right Data) is invertible became extremely popular in the 2010s through 2020s, and has become the cornerstone of XOR / Rotate / Add ciphers (e.g., ChaCha), pseudorandom generators, and hash functions.
I don't know quite when it was originally discovered, but Jenkins was blogging about the importance of invertibility and playing with invertible xor/rotate stuff in (non-crypto) hash functions way back in the 90s.
I know Knuth's "Art of Computer Programming" book 2, Seminumerical Algorithms, discusses the importance of invertibility in random number generators, which is closely related to hashing / cryptographic procedures. So this "understanding" has been around for decades, but has only "become popular" in the SHA256 / Blake3 / pcg-random era.
------
In the 90s, ciphers were mostly using SBoxes for this step ("confusion", to grossly change a value to another value without losing information). But today, modern CPUs are much faster at add/xor/bitshift operations than reading/writing to memory. So SBoxes are no longer a high-speed methodology / primitive for these kinds of operations.
It makes more sense to change our algorithms to use a new "invertible confusion operation" (aka: what SBoxes did before, and what ((Data) XOR (Rotate-left Data) XOR (Rotate-right Data)) does today).
--------
EDIT: Remember: the modern crypto-primitive is just a combination of "confusion" principles and "diffusion" principles.
1. Confusion "lossless transforms" numbers into other numbers. (A "set permutation" of sorts)
2. Diffusion "moves" bits from one number into other numbers.
Iterating over confusion + diffusion many times (IIRC, SHA256 is 64 rounds) is all you need to make a cryptographic cipher. If you "just" need a pseudo-random number generator or hash function, maybe 5 to 10 rounds is all you need.
Not sure if I understand you right, but sha256 is not invertible; there are a couple of steps where you actually just 'cut off' information that is lost afterwards.
The primitive (data) XOR (rotate-data) XOR (rotate-data) is invertible, which means there's no "bit funnel" (Bob Jenkins's term).
You want your "primitives" to be invertible, so that your one source of "non-invertible" operations is controlled very carefully.
Hash functions are non-invertible. But all operations on the "internal state" should be invertible (!!!!) to maximize the entropy per round.
--------
All good hash functions have invertible operations (aka: a bijective (1-to-1 and onto) operation) over their state to "distribute" the bits around in some manner. You then perform a non-reversible XOR over the input message data (this is the one and only non-reversible step), maybe after operating over the input data somehow.
Whenever you're "mixing" bits around in a cipher or hash, you need every step to be invertible to maximize your entropy. If you have 2^256 possible input states, you want to have 2^256 output states. By the pigeon-hole principle, this only happens if you have a bijective / 1-to-1 and onto invertible operation.
--------
Let's look at the opposite. Let's say we have a non-invertible function. That is, 2^256 possible inputs become only 2^128 outputs. Guess what? We've just lost 128 bits of information (!!!).
It doesn't matter how complex your operation is. If you have 256-bits of input and only 128-bits of output, you've "lost information". You _NEVER_ want to lose internal-state information.
The hash-function's internal state should be at a maximum 256-bits (with 256-bits of entropy) every step. Sure, the input might be 512-bits, 4096-bits, or 100-MBs long, but each "step" of a 256-bit hash should retain 256-bits of state to be maximally "difficult" to reverse.
That's kinda-sorta obvious if you think of it from a pigeon-hole perspective.
I guess sort of intuitively (I'm not a cryptographer):
If your round function isn't invertible, then it's going to converge at some point on some fixed value, and the round function is going to stop doing anything useful.
More broadly, SHA2 is a sort of instance of a construction called Davies-Meyer, which treats each message block as the key to a bona fide block cipher (SHACAL, in SHA's case), each one encrypting the previous chaining value. It's hopefully pretty obvious why a block cipher core needs to be invertible. :)
So I also find it kind of helpful to remember that you can take any block cipher and turn it into a hash, and then a "good" hash function is just optimizing the block cipher core around the goals of a hash function (be fast, be nonlinear, be hard to find differential trails through the whole sequence of round functions, &c).
It's a thing I don't love about illustrations like the one we're discussing, in that it sort of presents this "intelligent design" problem of, like, "how did we arrive at this incredibly complex fully functioning eyeball", when there's a whole series of smaller evolutionary steps that led to this point; it's more productive maybe to look at the bones of the whole design before diving into the details of which bits go where at each precise step.
> I guess sort of intuitively (I'm not a cryptographer):
Don't worry. I'm not one either. And I think that's why I am actually able to tell you that invertibility / 1-to-1 onto bijections is an important concept :-)
An actual cryptographer would tell you it's 1-to-1 and onto and move on.
> It's a thing I don't love about illustrations like the one we're discussing, in that it sort of presents this "intelligent design" problem of, like, "how did we arrive at this incredibly complex fully functioning eyeball", when there's a whole series of smaller evolutionary steps that led to this point; it's more productive maybe to look at the bones of the whole design before diving into the details of which bits go where at each precise step.
Agreed. There's a lot of history here, and knowing all of the history helps a _LOT_.
Go back 50 years, and ciphers are way easier to grok. DES / Feistel ciphers are really easy, for example. But then we discovered issues with them and iteratively improved.
The old "DES" / Feistel cipher principles of confusion and diffusion remain with us today, but each step has been honed for 90s-era computers (SBoxes and 32-bit numbers), and then honed again for 2010s-era computers (recognition that XOR / Rotate / Add is a faster primitive today than memory-based SBoxes).
I don't think any of the principles have changed since DES / Feistel cipher days. It's just that today's designs are better for today's computers.
------
EDIT: As far as I can tell: "confusion" can be created by (data) XOR (rotate-data) XOR (rotate-data) primitives. "diffusion" can be created by the "ADD" operator (in particular: "carries" will diffuse the bits around).
So XOR, rotate, and Add are the only primitives you need to make a modern crypto-cipher. All three primitives are outrageously fast on modern machines.
AES and other older ciphers tried to make each round relatively high quality. Modern ciphers try to make each round low-quality, and then do something like 64-rounds or 80-rounds to make up for it.
So you'll see old ciphers like AES with just 10 rounds (for 128-bit keys), but modern ciphers / crypto algorithms like SHA256 use 64 rounds.
That's right, but sha256 also uses, e.g., shift operations with a fixed result width, meaning you're moving information "out of scope". Same for the additions: the sum of two 32-bit words is often wider than 32 bits and is therefore reduced mod 2^32.
Where "shift3" and "shift10" are both lossy operations.
----------------
While "shift3" and "shift10" are lossy, I'm not 100% certain that "sigma_0" or "sigma_1" is lossy. But that discussion aside, both sigma_0 and sigma_1 are applied to the _message_, not the internal SHA256 state.
The _message_ needs to be compressed, so a lossy operation over the message is not only expected, but required. 4096-bits of input need to become 256-bits of output. 8192 bits of message-input needs to become 256-bits of output.
-----------
But if you look at the "intermediate hash chain" where H(i) = a + H(i-1) for 64 rounds, all operations over "a" and the internal hash-state are invertible operations (SIGMA_0 and SIGMA_1 are both invertible, being (x) XOR (rotate x) XOR (rotate x) style functions).
------
I'm not saying that the "whole" hash function needs to be invertible. I'm saying that __particular__ elements of the hash function _should_ be invertible. The design of these particular elements (in particular, SIGMA_0, which is Rotate2(x) XOR Rotate13(x) XOR Rotate22(x)) is _clearly_ and evidently an invertible / 1-to-1 and onto bijection, i.e. the confusion principle at work.
The particular constants (why "rotate2", "rotate13" and "rotate22") are chosen for other reasons: probably differential cryptanalysis, but I admit that I'm not 100% sure on that (that's my expectation though).
The information in the above post is a merger of like, 20 different sources of information.
Your standard cryptographic books from any undergraduate program will tell you about the basics of confusion / diffusion, but I don't think the concept is very difficult at all.
Invertibility is hugely important but not discussed very much. It seems like crypto-experts 'obviously' know about it so they just don't talk about it? Knuth and Bob Jenkins both talked about the importance of invertibility to random number generators. EDIT: Invertibility is the basis of bijective / 1-to-1 and onto permutation functions. To be fair, I think everyone discusses the concept but maybe with different words.
The "pcg-random" generator also has a large discussion on invertibility. Honestly, one of the best writeups on the subject, and I felt like my IQ went up severely after reading the paper end-to-end: https://www.pcg-random.org/paper.html
So as you can see, invertibility is "important", but rarely discussed as important. It's just assumed that everyone knows how important it is. Once you realize that invertibility / lack of funnels is important, everything else makes sense.
Galois field multiplication is useful in AES because it's invertible (multiply by the GF(2^8) reciprocal). Aaaaah. I see. Why didn't my undergrad teacher just start with the importance of invertibility before talking about prime numbers and extension fields?
-----
Ah right. Once you know that these things have great properties (ie: invertible), then you can just read the SHA256 paper directly and the rest is just "obviously" confusion + diffusion principles.
-----
Linear cryptanalysis is the "standard" run-a-distribution kind of thing. Anyone who has done chi-squared tests over random numbers kinda gets this principle. Honestly, your standard non-crypto RNGs are roughly at this level at this point, so the study of any statistical test for any random-number generator is good. Knuth / Art of Computer Programming Vol. 2 is one source of information on chi-squared tests, but the study of statistics in general also leads to chi-squared tests and the like.
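A bare-bones version of that kind of test, over byte frequencies (os.urandom stands in here for whatever generator you'd actually be testing):

    # Minimal chi-squared uniformity test over byte values. With 256
    # buckets (255 degrees of freedom), a uniform stream should give a
    # statistic near 255; a wildly larger value is suspicious.
    import os
    from collections import Counter

    N = 1_000_000
    counts = Counter(os.urandom(N))
    expected = N / 256
    chi2 = sum((counts.get(b, 0) - expected) ** 2 / expected for b in range(256))
    print("chi-squared statistic:", round(chi2, 1))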
Differential cryptanalysis is more difficult, and looks at the change of bits at each step of every operation. I don't quite remember how I was introduced to the concept, but differential cryptanalysis of many popular ciphers is a well-studied thing throughout the field.
--------
Knowledge of "what is fast" and "what is not fast" is... not really easy to come by, and seems to change as computer architectures change. I know that memory has gotten relatively slower compared to CPU-speeds in the past 30 years. I can imagine that in the 90s, an XOR-rotate-add cipher would take "relatively more" time than today, and that SBox-based / lookup table based ciphers were far more popular back then.
I'm not sure where I learned that information. Maybe software optimization manuals in general?
Forgive me for replying here, I couldn't find an option to DM you. And I cannot find any implementation of this in javascript, nor anyone else able to help me.
Can you shed some light here why I'm getting these differences?
You did things the hard way by trying to construct the exact inverse.
---------
I suggest the following instead:
1. Create a 16-gigabyte file consisting of f(x) from x=0x00000000 to x=0xFFFFFFFF.
2. Sort the file.
3. Determine that no repeats exist, that is, the values go from 0x00000000 to 0xFFFFFFFF.
--------
Once you have determined that all 32-bit inputs result in all 32-bit outputs, you've determined that the function is invertible. 4-bytes x 2^32 possible inputs == only 16GB these days, small enough to be handled by any old computer... possibly entirely in RAM.
But if you don't have enough RAM for that, an SSD or even a hard drive would be fast enough for the above procedure. It may take a few minutes but it's not that hard.
You see, the goal is to "prove you have an invertible function". At no point do you have to accomplish the difficult task of actually finding the inverse.
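A variant of the same idea that skips the sort: mark each output in a 2^32-bit map (512MB) and stop at the first repeat. This is a sketch only; a straight Python loop over 2^32 inputs is slow (you'd do this in C or with numpy for real), but the procedure is the point:

    # Bijectivity check without sorting: a 512MB bitmap of seen outputs.
    # Any repeated output proves f is not invertible.
    MASK = 0xFFFFFFFF

    def rotl(x, n):
        return ((x << n) | (x >> (32 - n))) & MASK

    def f(x):  # the function under test; here, the xor-rotate primitive
        return x ^ rotl(x, 7) ^ rotl(x, 18)

    seen = bytearray(1 << 29)  # 2^32 bits
    for x in range(1 << 32):
        y = f(x)
        byte, bit = y >> 3, 1 << (y & 7)
        if seen[byte] & bit:
            print("collision found: f is not invertible (input", x, ")")
            break
        seen[byte] |= bit
    else:
        print("f is a bijection on 32-bit words")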
-------
Well, I guess if you're "just following the blogpost" about the inverse, maybe that's easier. But from the perspective of "I don't know the inverse yet", its really difficult to figure it out. So you should think about the simpler brute-force methodologies that a modern computer can do in just a few minutes.
MD5 and SHA-1 are predecessors, and they are simpler, so it's certainly worth starting from an understanding of the simpler ones. Just to say it wasn't all conceived in one go.
You may not get an exact answer for SHA-1 or SHA-2, because they were designed by NSA, and I'm not sure whether a complete design doc has been published. SHA-3, by contrast, was designed in an open competition, and is completely different from SHA-1 and SHA-2. But here's a rough go at it.
First of all, the overall form. SHA-2 is a Merkle-Damgård construction, meaning that it has a "compression function" which takes an algorithm state and a message block, and outputs a new algorithm state. The final block gets some padding, so that e.g. "x" and "x\000" don't have the same hash, and then the final algorithm state is the output. (This is a known weakness: from H(x) you can calculate H(x concat padding(x) concat other stuff). SHA-3 doesn't have this weakness. This weakness doesn't itself break collision resistance, but it can be dangerous for other uses of SHA2 if you aren't careful.)
The compression function is a Davies-Meyer construction, which means that given an input state, it:
* Copies the state.
* Encrypts the state with a "cipher" of sorts (the "cipher" extracted from SHA-2 is known as SHACAL-2), using the message as a "key".
* Adds the copied state to the encrypted state to produce a new state.
Davies and Meyer proved that, for an ideal cipher, this would make a collision-resistant hash function. Note that because the update is a "cipher", it must be internally invertible. This is a good idea anyway, because any non-invertibility is by definition a collision, and you want to push that out as far as possible to make collisions hard to find. The step that isn't invertible is the final "add the copied state to the encrypted state".
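In code, the Davies-Meyer wiring is tiny. The "cipher" below is a meaningless toy standing in for SHACAL-2; only the shape of the construction is the point:

    # Davies-Meyer: new_state = E(key=message_block, plaintext=state) + state.
    # The final addition is the deliberately non-invertible step.
    MASK = (1 << 256) - 1

    def toy_encrypt(key: int, plaintext: int) -> int:
        x = plaintext
        for _ in range(8):
            x = (x * 0x10001 + key) & MASK  # invertible: odd multiplier mod 2^256
            x ^= x >> 97                    # invertible xorshift
        return x

    def davies_meyer(state: int, block: int) -> int:
        return (toy_encrypt(block, state) + state) & MASK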
Within that design, the SHA-2 cipher is based on a shift register. Basic shift registers have been used in crypto for many decades; traditionally you get almost all of the bits in the register by shifting its current state, and at least one (typically only the one shifting in) by computing a function of the other bits. To extend this to general-purpose computers, you can do the same thing for words (32-bit words for SHA-224 and -256, and 64-bit words for SHA-384 and -512). So you have A shifting to B shifting to C etc, with a final value that's computed and shifted back into A. You can see there's a slight twist, which is that instead of E=D you get E=D+stuff. I expect that this mitigates issues where the same stuff is used in the same way throughout the whole round.
The "message schedule" is likewise based on a cipher's key schedule. The idea in a cipher is that each round needs a different version of the key to avoid "slide" attacks, so you mix the key during the round as well as mixing the cipher state. I'm not sure what the corresponding theory is for a hash function, but it's also pretty clear that you want a single byte of the message to be used in many places throughout the operation, so mixing the message is a must.
For the operations, the biggest constraint is that they need to be as nonlinear as possible, as cheaply as possible, because doing a bunch of linear functions in a row still gets you a linear function, and those are easy to break. They need to be nonlinear both as bits and as integers, so a popular design is to mix addition (linear over ints) with xor (linear over bits) and rotations (linear over both, but you need it to propagate changes from the top of a number to the bottom). That way the whole operation won't be linear over bits, nor over the integers, but all three operations are cheap on PCs. This is called an "ARX" (Add-Rotate-Xor) design.
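The canonical pure-ARX example is ChaCha's quarter-round, which is nothing but 32-bit add, rotate, and xor (this is the actual quarter-round from the ChaCha specification):

    # ChaCha quarter-round: addition mod 2^32, XOR, and rotation, nothing else.
    MASK = 0xFFFFFFFF

    def rotl(x, n):
        return ((x << n) | (x >> (32 - n))) & MASK

    def quarter_round(a, b, c, d):
        a = (a + b) & MASK; d = rotl(d ^ a, 16)
        c = (c + d) & MASK; b = rotl(b ^ c, 12)
        a = (a + b) & MASK; d = rotl(d ^ a, 8)
        c = (c + d) & MASK; b = rotl(b ^ c, 7)
        return a, b, c, d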
Picking the exact operations and rotation constants is something of a black art, and while you can trace descent from MD5-SHA0-SHA1-SHA2, the exact design criteria for the SHAs might not even be public. But you can see how some of the decisions may have been made. Consider the rotations in sigma0 and sigma1. The rotation constants in these designs are typically chosen so that if you look at any slice of the output, it will have many different input bits affecting it, which in turn means that small changes diffuse rapidly throughout the design. This is not done perfectly in SHA2, in that the same bits are used for the Maj and Ch in successive iterations (i.e. you compute Maj(A,B,C) and then Maj(newA,A,B) next time, since they shifted over), so you'd get correlated outputs of those steps from round to round, but apparently it's good enough.
The setup constants are chosen as the square and cube roots of prime numbers, as "nothing-up-my-sleeve" numbers. The idea is that if the initialization were random numbers with no rationale given, then it would be a possible place that NSA could have hidden a backdoor. But it would be difficult to create a backdoor where the initialization is the square roots of the primes.
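You can reproduce them yourself; the initial hash words are the first 32 fractional bits of the square roots of the first eight primes (the round constants are derived the same way from the cube roots of the first 64 primes):

    # SHA-256's initial hash values: the fractional bits of sqrt(p) for the
    # first eight primes. Anyone can recompute them, so nothing can be hidden.
    from math import isqrt

    for p in (2, 3, 5, 7, 11, 13, 17, 19):
        word = isqrt(p << 64) & 0xFFFFFFFF  # floor(sqrt(p) * 2^32) mod 2^32
        print(f"0x{word:08x}")
    # prints 0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a, ...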
About linearity, that just means that there's no "multiply" or "exponent"-style shortcut for repeating our primitives.
For example: if "add Foo" were our primitive, then 64-rounds of "add-Foo" could be broken by simply "data + 64 * Foo". IE: We found a "trivial shortcut" and someone else can break our cipher way faster than we can use it.
Addition is linear over numbers: repeated addition collapses into a single multiplication.
XOR is linear over bits: XOR undoes itself, so repetition collapses too (5x (XOR Foo) is equivalent to 1x (XOR Foo)).
It turns out that combining addition and XOR together results in a function that is non-linear over numbers and non-linear over bits.
We can't "shortcut numbers", because the XOR-messes up our "number math". We can't "shortcut bits" because Add has this really annoying "carry" that basically has a 50% chance of shuffling bits around.
As such, we "expect" that combinations of Addition, XOR, and Shifts are non-linear. I'm not sure if it's been mathematically proven however, but no one has been able to show a "shortcut" thus far.