I understand the urge to get a high-level grok of curves without much math, but I spent years bouncing off the outermost surface of curve understanding by starting with the curve picture and the intuitive geometry of curve shape. What finally made curves click for me (and quickly) was simply taking the curve equation --- which is itself high school math! --- and playing with it a bit.
So if you're a programmer and you want a baseline understanding of how curves work, do what you'd do with any other subject you're trying to understand: pop open an editor, take the Weierstrass curve equation, pick an x, solve for y; then do it in a finite field (i.e., mod a prime). Then write an "add", then a "scalar mult". It's a couple hours of noodling, tops.
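That noodling might look something like this: a minimal Python sketch of a toy Weierstrass curve over a small prime field, with hypothetical parameters (P = 97, A = 2, B = 3) chosen only for illustration. Nothing here is constant-time or remotely safe for real use.

```python
# Toy Weierstrass curve y^2 = x^3 + A*x + B over F_P -- learning only.
P = 97          # small prime modulus (hypothetical toy parameter)
A, B = 2, 3     # curve coefficients (also toy values)

def on_curve(pt):
    if pt is None:                      # None plays the point at infinity
        return True
    x, y = pt
    return (y * y - (x ** 3 + A * x + B)) % P == 0

def add(p1, p2):
    # Chord-and-tangent addition; None acts as the identity element.
    if p1 is None: return p2
    if p2 is None: return p1
    x1, y1 = p1
    x2, y2 = p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                     # p1 == -p2, vertical line
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (s * s - x1 - x2) % P
    y3 = (s * (x1 - x3) - y1) % P       # reflect across the x-axis
    return (x3, y3)

def scalar_mult(k, pt):
    # "Schoolbook" double-and-add (not constant-time!).
    result = None
    while k:
        if k & 1:
            result = add(result, pt)
        pt = add(pt, pt)
        k >>= 1
    return result

# Pick an x, solve for y: brute force works fine in a tiny field.
G = next((x, y) for x in range(P) for y in range(P)
         if on_curve((x, y)) and y)
assert on_curve(add(G, G)) and on_curve(scalar_mult(5, G))
```

The `pow(..., -1, P)` modular inverse needs Python 3.8+; everything else is the high-school algebra the comment describes, just reduced mod a prime.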
As always, but particularly with curves, remember that a basic understanding of what's going on is nowhere near enough to use them safely!
Just recently, an amateur programmer with very little background in cryptography discovered a flaw in libsodium's Argon2 implementation, and also in the reference implementation that everyone in the world was trusting without question. My advice: if you're an engineer, don't be afraid to write your own implementation of tried and trusted ciphers. This is how we find bugs and improve. This isn't the only trusted library or algorithm that has been shown to be flawed in recent times.
The strength of your cipher implementation can be tested and proven. We need to stop telling everyone these algorithms are absolutely trustworthy so don't try understanding them or implementing them. Nothing ever advances or improves that way. Buck the trends, create competing libraries, try new things.
The ancient ones were not all-knowing; they were doing everything wrong. Their designs are full of flaws. Deny them. We need to code ourselves out of the coming cryptographic apocalypse. Do not hide your heads in the sand and hope the world doesn't come crashing down around you.
Edit: I found the blog/website of the man I mentioned in this comment who discovered the Argon2 flaw.
In his own words, "There's something worrying about this bug: I was the first to discover it, in January 2017. According to Khovratovich himself, it was two years old. Now I understand why the authors themselves didn't find it: unlike me, they didn't have a reference implementation to compare to.
What I don't understand is, how come nobody else discovered this bug? If you implement Argon2 from spec, you cannot miss it. Test vectors won't match, and searching for the cause will naturally lead to this bug. I can draw only one conclusion from this:
I'm the first to independently re-implement Argon2. This 'never implement your own crypto' business went a little too far."
Believe it or not, libsodium's implementation of Argon2 is an example of libsodium going off the rails. Libsodium began as a cross-platform, easy-to-build repackaging of NaCl, a crypto library designed and written by cryptographers with carefully chosen primitives. Libsodium has gradually expanded into a kitchen sink of shiny cryptography, and as a result libsodium users now have to worry about the pitfalls of AES-GCM. Why does libsodium implement Argon2? Hell if I know. As an interface, it's actually worse than JCE, which at least had the self-respect to pretend it had a "password based encryption" abstraction.
It gets worse, though, if you really want to climb down this rabbit hole with me, because I'm not totally sure why Argon2 exists either. The "bug" he found in Argon2 actually has no practical impact, so much so that the reference implementation decided not to bother fixing it. But that's because very little in password hashes actually matters. Scrypt was the last important thing to happen to password hashes, and bcrypt still works just fine.
If we want to keep touring all the weird shit that happens because people pointlessly reimplement things, many of which don't need to exist in the first place, we can keep going all the way back to unpadded RSA-512 off a broken browser RNG, which (a) existed and (b) got me game over on a pentest once. Maybe I should be happy about that, but I mostly find it frustrating.
My point here, though, is that you aren't going to learn nearly enough from a tutorial of any sort to safely implement curves.
Can you elaborate a little on this? I ask because Noise protocol standardizes on AES-GCM or ChaChaPoly, will apps using Noise with AES-GCM face "pitfalls"?
Places to go from here if you enjoyed reading this and want to learn more about elliptic curves and cryptography related to them:
More on an elliptic curve and constructing a hypothetical one.
Elliptic curve pairings.
An intro to supersingular isogeny cryptography, which has a basis in elliptic curves as a mathematical structure, but is fundamentally different from elliptic curve cryptography.
The implementation doesn't concern itself with groups or fields, but they're still very useful for making sense of the code. Actually, I should add some types and implement an instance of Data.Group when I have more time.
Of course, it's for fun and not production use. I didn't give the slightest thought to timing attacks, optimized performance, etc.
Another good writeup:
Edit: but yes, the OP writes "integers (Z) are a field", which is wrong. Integers form a ring.
Reflect C across the X axis, and that's point 2A.
So you're not missing anything, the article is misleading at best.
In short, to take the sum of two points A and B, draw a line through them both to find a third point on the curve, reflect that point across the X-axis, and the result (which will also be on the curve) is A+B.
If A and B coincide then use the tangent at A.
The point at (vertical) infinity plays the role of the zero.
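The chord rule above can be run concretely over the reals. A minimal Python sketch, using the hypothetical toy curve y^2 = x^3 + 1 with points chosen so the arithmetic stays in small numbers:

```python
# Chord-and-tangent addition over the reals on y^2 = x^3 + 1 (toy curve).

def curve_rhs(x, a=0.0, b=1.0):      # right-hand side of y^2 = x^3 + a*x + b
    return x ** 3 + a * x + b

def chord_add(p1, p2):
    """Add two distinct points: draw the chord through them, find the
    third intersection with the curve, reflect it across the x-axis."""
    (x1, y1), (x2, y2) = p1, p2
    s = (y2 - y1) / (x2 - x1)        # slope of the line through p1, p2
    x3 = s * s - x1 - x2             # x of the third intersection
    return (x3, -(y1 + s * (x3 - x1)))   # reflection gives p1 + p2

A, B = (0.0, 1.0), (2.0, 3.0)        # both satisfy y^2 = x^3 + 1
C = chord_add(A, B)
assert abs(C[1] ** 2 - curve_rhs(C[0])) < 1e-9   # A + B lands on the curve
```

Here `chord_add(A, B)` comes out at (-1, 0), which you can check satisfies the curve equation; the doubling (tangent) case and the point at infinity are left out to keep the sketch short.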
I'm curious for a similar explanation for how decryption would work though; a trapdoor function is nice and all, but it's only half of the story if there is no 'way out'.
To encrypt/decrypt, for example, you can use the ElGamal scheme: https://en.wikipedia.org/wiki/ElGamal_encryption
(Note that Wikipedia uses multiplicative notation, while in ECC additive notation is typically preferred, so you'll have to translate every A^x into xA -- but it's a good exercise.)
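To make the exercise concrete, here is plain ElGamal over the multiplicative group mod p in Wikipedia's notation, with toy parameters (p = 2039, g = 7 are hypothetical small values; a real deployment needs a large, carefully chosen group). The comments show the additive translation for curves.

```python
import secrets

p, g = 2039, 7                        # toy prime and generator (illustrative only)
x = secrets.randbelow(p - 2) + 1      # private key
h = pow(g, x, p)                      # public key: h = g^x   (ECC: H = xG)

def encrypt(m, h):
    y = secrets.randbelow(p - 2) + 1  # fresh ephemeral secret per message
    c1 = pow(g, y, p)                 # c1 = g^y              (ECC: C1 = yG)
    c2 = m * pow(h, y, p) % p         # c2 = m * h^y          (ECC: C2 = M + yH)
    return c1, c2

def decrypt(c1, c2, x):
    s = pow(c1, x, p)                 # shared secret s = c1^x = g^(xy)
    return c2 * pow(s, -1, p) % p     # divide s back out to recover m

m = 1234
assert decrypt(*encrypt(m, h), x) == m
```

So decryption is the "way out": the private key x lets you recompute the shared secret g^(xy) from c1 and cancel it, which is exactly the step an eavesdropper (who sees only g^x and g^y) cannot do without solving a discrete log.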
If we use RSA as an example, we can see how this works in practice:
1. Let n be the product of two large primes, p and q. Large means 512 bits or greater in this context. Then n is the RSA modulus.
2. Select a special number e such that 1 < e < (p - 1)(q - 1), and such that e is coprime with (p - 1)(q - 1) (meaning no numbers divide evenly into both).
3. Your RSA public key is now (n, e) - you can share this publicly with anyone you want to securely communicate with. Conversely, p and q must not be public. This is where your one-way function comes in to separate what can be public from what must be private: n is part of the public key, but was generated by p and q, and multiplication of two primes is a one-way function.
4. Your RSA private key d is computed from p, q and e, such that for any pair (n, e) there is only one valid d. d is the inverse of e modulo (p - 1)(q - 1) - in other words, d is the unique number that, when multiplied by e, equals 1 modulo (p - 1)(q - 1). This can be expressed simply as ed = 1 mod (p - 1)(q - 1).
5. Assuming you chose e correctly, d is unique and very difficult to find from (n, e) alone. However, it's very easy to find if you also have p and q, because you can use what's called the Extended Euclidean Algorithm to compute d in polynomial time given p, q and e.
6. So now you want to encrypt something with RSA. Your peer has your public key, (n, e). The encryption process to compute the ciphertext C from the plaintext P is simple: C = P^e mod n (the plaintext P multiplied by itself e times and reduced modulo n). Modular exponentiation has polynomial complexity in the bit length of the operands.
7. To decrypt the ciphertext C in order to compute the plaintext P, all the holder of the private key needs to do is this: P = C^d mod n. In other words, they raise the ciphertext C to the power of their private key.
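The steps above can be traced with toy numbers in a few lines of Python (p = 61 and q = 53 are the classic textbook example; real RSA needs primes of 1024+ bits):

```python
# RSA key generation, encryption, and decryption with textbook-sized numbers.
p, q = 61, 53
n = p * q                      # step 1: modulus n = 3233
phi = (p - 1) * (q - 1)        # (p-1)(q-1) = 3120
e = 17                         # step 2: 1 < e < phi, gcd(e, phi) == 1
d = pow(e, -1, phi)            # steps 4-5: extended Euclid via pow(); d = 2753
assert (e * d) % phi == 1      # ed = 1 mod (p-1)(q-1)

P_msg = 65                     # plaintext (must be < n)
C = pow(P_msg, e, n)           # step 6: C = P^e mod n  -> 2790
assert pow(C, d, n) == P_msg   # step 7: P = C^d mod n recovers 65
```

Anyone can run step 6 with just (n, e); only the holder of d (derived from p and q) can run step 7 efficiently - which is the trapdoor in action.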
So, returning to trapdoor functions in general: why does this work? It works because the solution is easy to compute with all inputs, easy to verify with a reduced (public) set of inputs, and extremely difficult to compute with only the reduced set of inputs (and the secret inputs are themselves difficult to find from the public ones).
Hope that helps a bit and that I understood your question's context correctly! Trapdoor functions do not necessarily mean there's "no way out": they mean that a set of values are related such that computing a ciphertext requires only a few of those values (the public ones), while computing the plaintext back from that ciphertext requires all or most of them (including the secret ones). When we talk about functions being irreversible, we're specifically talking about one-way functions, and those are used in the context of hash functions. Trapdoor functions are "merely" difficult to reverse without the trapdoor.
One of the things I'm actively researching at the moment is whether or not (and how) it would be possible to construct a post-quantum secure public-key cryptosystem based on a hardness assumption derived from problems in Ramsey Theoretic graph problems (coincidentally, this problem had a recent treatment after being quietly raised over a decade ago: https://pdfs.semanticscholar.org/1599/62064634fe10897aea300c...).
TL;DR: the topology of these curves is quite different over finite fields.
I first encountered what I consider cryptographically unhelpful drawings/intuition when I studied lattice-based cryptography. It's a whole field where a fundamental hard problem is finding the closest lattice vector to a point not on the lattice. If you try to explain this by drawing a 2- or 3-dimensional example over the reals, anyone who's paying attention and proficient in linear algebra will ask themselves: why don't you just transpose the whole thing and take the orthonormal projection? The problem is that in a finite field, unlike over the reals or complex numbers, orthogonal projection doesn't get you any closer to your target. So the intuition over R^2 is, in my opinion, very unhelpful for understanding exactly why SVP/CVP are supposed to be hard, and indeed it had me confused for a while until my professor told me to forget "the silly drawing over R^2", which he didn't like either.
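To see why the R^2 picture makes CVP look deceptively easy: over the reals you really can "just project and round" (Babai's rounding). A minimal sketch with a hand-rolled 2x2 solve and hypothetical basis vectors:

```python
# Babai's rounding in R^2: solve for the target's real coordinates in the
# lattice basis, then snap them to the nearest integers.

def babai_round(b1, b2, t):
    """Approximate closest lattice vector to t for basis vectors b1, b2."""
    det = b1[0] * b2[1] - b1[1] * b2[0]          # invert the 2x2 basis matrix
    c1 = (t[0] * b2[1] - t[1] * b2[0]) / det     # real coordinates of t
    c2 = (b1[0] * t[1] - b1[1] * t[0]) / det
    k1, k2 = round(c1), round(c2)                # snap to integer coordinates
    return (k1 * b1[0] + k2 * b2[0], k1 * b1[1] + k2 * b2[1])

# With a well-conditioned basis this nails the closest vector:
assert babai_round((1, 0), (0, 1), (3.2, -1.7)) == (3, -2)
```

The "round to the nearest point" step is exactly what has no analogue once the coordinates live in Z_q: there is no useful notion of "closer", so the picture stops explaining anything. (Over the reals the method also degrades badly for skewed bases, which is its own story.)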
For Elliptic Curve Cryptography, I find the example of a curve over the reals again misses the point of why exactly problems like DLOG are hard. For discrete-log based crypto at the 256-bit security level over finite fields, you need a modulus of about 15k bits depending on which site you look at (NIST 2016 at keylength.com is a good place to start), due to speedups from the Number Field Sieve etc. This is the kind of structure that I mean you don't get.
On EC, to get 256 bit security you need exactly 2 * 256 = 512 bits of key (slightly oversimplifying, the factor 2 is because you get the "sign of the y value" for free). The number 512 pretty much stands for the conjecture "there is no other cryptographically exploitable structure to take DLOGs over these Elliptic Curves". In fact it's not just "we haven't found any such structure" but there's an argument about heights of points (Miller '85 I think - though I'm pretty sure I've also read something by Koblitz on this) why on certain kinds of curves such structure is unlikely to exist. (Of course, other kinds of curves for fancy bilinear group stuff exist and do have more structure. And supersingular curves are another topic altogether.)
The structure you obviously do get is a group, which you can extend to a vector space over the finite field so that (x \mapsto xP) is a linear function. The security property you want is roughly "you get this group, but only this group" (and the "sign", so add 1 extra bit to your keylength) and there is no useful concept of anything like points being close to each other, continuous maps in the usual sense etc. Plot the points on an EC over a small finite field and it looks like a random scatterplot rather than a neat and elegant curve - which is the whole point of using ECs over finite fields for DLOG-based crypto.
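The "random scatterplot" claim is easy to check for yourself: enumerate every affine point of a toy curve mod a small prime and look at the coordinates. A short sketch with hypothetical parameters (p = 73, a = 4, b = 7, a non-singular choice):

```python
# Enumerate all affine points of y^2 = x^3 + a*x + b over F_p.
p, a, b = 73, 4, 7
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x ** 3 + a * x + b)) % p == 0]

# The (x, y) pairs jump around with no visible pattern -- yet the total
# count obeys Hasse's bound: |#E(F_p) - (p + 1)| <= 2*sqrt(p), counting
# the point at infinity.
order = len(points) + 1
assert abs(order - (p + 1)) <= 2 * p ** 0.5
```

So the group is tightly structured in size even though plotting its elements shows none of the smooth geometry of the real picture - which is the contrast the comment is drawing.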
You are of course right that extra structure helps with solving DLOG. I was hoping that you could point me to some specific reasons why DLOG is easier on real/complex curves. I'd be very interested in learning these.
I don't find a "nice smooth curve vs. scatterplot" argument very convincing by itself -- you only care about a cyclic subgroup anyway. Take a complex elliptic curve, view it as a torus, consider its fundamental parallelogram, and plot a cyclic subgroup on that parallelogram. Won't you get a nice scatter plot just the same?
Even if you can use this extra structure to find some more efficient DLOG algorithms, you could try to apply the same solution as with the crypto based on multiplicative groups of finite fields: just use larger points. My understanding so far is that the reason we don't do it is purely computational efficiency -- I'd be very interested in learning any reasons to think the contrary.
And, back to the original point, I think that the drawings of real/complex curves are very helpful when learning the basics. The group law is really best understood if you actually draw some lines intersecting the real affine part of some curve, show what happens when you draw a tangent line etc. If you confine yourself to finite fields, while it formally works just the same, the geometry is much less obvious, and talking to beginners about Cartier divisors and line bundles won't help much.
While the finite fields are the whole point of elliptic curve cryptography, they are definitely not the whole point of elliptic curves, and in fact I believe that complex case with all the extra structure is best for educational purposes.
There are other reasons beyond efficiency why I would prefer people to switch their DLOG crypto to EC in practice: it's hard enough to implement - even with introductory blog posts with images - that your average developer leaves the details to an expert. libsodium, for example, is very well designed and written, and few people think "I know, I'll write my own Curve25519 implementation"; but lots of people seem to think that they understand finite fields well enough to build crypto - after all, they have a bignum library and a modexp function, what could possibly go wrong? I've seen finite-field discrete log software that is supposed to be production ready with the following problems:
* Non-constant time implementation of "schoolbook" square-and-multiply.
* Forgetting to check if points really are in the group, e.g. you're supposed to be working in a q-order subgroup of Z^*_p where q | (p-1)/2 but the software accepts any integer less than p as a group element.
* Crash or infinite loop if you pass 0 as a "group element".
* Parameters can be specified or overridden by the sender of a message and set to tiny values.
* Not checking whether the modulus is prime.
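The subgroup-membership check from the list above, sketched for the common safe-prime setup p = 2q + 1 (toy values here; a real p is thousands of bits, and none of this addresses the constant-time issues also listed):

```python
# Validate a claimed element of the q-order subgroup of Z*_p, p = 2q + 1.
p = 2039                 # safe prime (toy value): p = 2*q + 1
q = (p - 1) // 2         # q = 1019, also prime

def is_valid_group_element(y):
    """Accept y only if it's a nontrivial element of the q-order subgroup."""
    if not (1 < y < p):          # rejects 0, 1, and anything outside [2, p-1]
        return False
    return pow(y, q, p) == 1     # by Lagrange, y^q == 1 iff y is in the subgroup

assert not is_valid_group_element(0)         # the "crash on 0" input
assert not is_valid_group_element(p - 1)     # order-2 element, not in subgroup
assert is_valid_group_element(pow(7, 2, p))  # squares mod p land in the subgroup
```

The point of the check is that an attacker who can feed you an element of a tiny subgroup (or 0) can leak key bits or break the protocol outright; rejecting anything that fails y^q = 1 mod p closes that door for this group shape.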