> But math never decreed that sine and cosine have to take radian arguments!
That is not entirely true. It comes from the relationship between those functions and the complex numbers via the Euler formula.
e^(ix) = cos x + i sin x
There may be arithmetic/numerical inconveniences, but that's not all there is to "math".
Let's define ncos and nsin ("nice cos, nice sin") as follows:
nsin x = sin 2πx
ncos x = cos 2πx
So then what do we make of:
ncos x + i nsin x
This has to be
cos 2πx + i sin 2πx
which is then
e^(2πix) = (e^(2π))^(ix) = f^(ix)
Where f = e^(2π) is a weird number like 535.4916. This f doesn't have nice properties. E.g.:
d/dx f^x ≠ f^x
Otherwise it works; for instance 90 degrees is 0.25 and surely enough
f^(0.25i) = i
In situations not involving e in relation to angular representations via Euler, f cannot replace e.
I'm all for having parallel trig functions in libraries that work with turns, though.
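Such library functions are easy to sketch on top of the radian-based ones. A minimal version in Python (nsin/ncos are just the names used in this thread, not any standard API):

    import math

    def nsin(x):
        # sine of x, with x measured in turns (1 turn = one full circle)
        return math.sin(math.tau * x)

    def ncos(x):
        # cosine of x, with x measured in turns
        return math.cos(math.tau * x)

    # nsin(0.25) == 1.0; ncos(0.25) is ~6e-17, i.e. zero up to float noise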
The annoying 2π factor shows up in lots of places though. Should we, say, in electronics, define a new version of capacitive reactance which doesn't have 2πf in the denominator, but only f?
I see where you're coming from, if the formulas end up having weird numbers like 535.4916 or numbers like 2.718 or 6.28318 then obviously there's something suspicious about the equation. But a small correction, though. You got the number wrong, it's actually much more weird than any of those mentioned. The actual equation you come to for ncos and nsin is:
(-1)^(2x) = ncos(x) + i nsin(x)
And yes, -1 is a very weird number. If you take it to the power of something divisible by 2 you get itself raised to zero. What's up with this spooky periodicity? Also if you have x=1/4, then we get weird numbers like sqrt(-1) what on earth is that all about? No way that will fly, no way. No I'll take my 2.718^((-1)^(1/2)) and multiply through with 6.28318 that way I don't have to bother understanding what I'm doing I can sleep comfortable at night knowing that someone else has done all the thinking that needs to be done on the matter, and that turns or rotations are a blasphemous concept that breaks the very concept of math through scaling of an axis. You'd think math was strong enough to withstand such a minor change, but the textbooks do not mention it thus it must not be contemplated!
This is a very good point, but it took me a minute to get what you were saying beneath the snark. Translating without the snark:
There's a famous equation relating sin and cos to complex exponentiation. It also helps explain the Taylor expansions of sin and cos, which is one way to compute them and to find properties about them. It's a very important equation. It is:
e^(ix) = cos x + i sin x
kazinator's point was that this equation relies on cos and sin taking radians as arguments. If they take turns instead, then you need to insert messy extra constants to state this equation!
jVinc's counter-point, made with lots of snark, is that there's an equation that's even nicer if you just instead measure angles in turns with ncos and nsin:
(-1)^(2x) = ncos(x) + i nsin(x)
It's similar, but doesn't require the magic constant e.
A proof sketch that these are equivalent:
(-1)^(2x) = (e^(pi i))^(2x) = e^(pi i * 2x) = e^(i * (2 pi x)), using e^(pi i) = -1
That's a nice result. If we rearrange the products in the exponent we get
e^(2πix) -> e^(πi * 2x) -> (e^(πi))^(2x)
Where e^(πi) is -1. That shows there is something to the turns units; we can express the analog of the Euler identity using exponentiation with a base and exponent factor that are both integers.
> Because you’re obscuring the connection of sin/cos with their hyperbolic counterparts.
Only because we forgot the name change: these are supposed to be nsin and ncos.
Remember also that people use sin and cos with degrees (360 to the circle) just fine; and don't worry about wrecking the connection to the hyperbolic counterparts --- and without changing the names, either.
That version of Euler's formula might make a nice case for half turns. Then it's just
(-1)^x = ncos(x) + i nsin(x)
It's obvious how to handle it for integers (an even number of half turns is 1, an odd number is -1), and the extension to real numbers aids the intuition.
Or, depending on your focus, quarter turns are very clean too:
i^x = ncos(x) + i nsin(x)
Either way, turns > radians (it's what I think in when doing most fourier kinds of work anyways!).
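A quick numerical check of the half-turn and quarter-turn forms, in Python (using a complex base so ** stays in the complex domain; purely illustrative):

    import cmath

    x = 0.5                                  # 0.5 half-turns = 90 degrees
    print((-1 + 0j) ** x)                    # ~6e-17 + 1j, i.e. i
    print((1j) ** 1.0, (1j) ** 2.0)          # quarter-turn version: i, then ~ -1
    print(cmath.exp(1j * cmath.pi * x))      # same point via the radian route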
>The actual equation you come to for ncos and nsin is:
>(-1)^(2x) = ncos(x) + i nsin(x)
Try to formally define this procedure, though. You end up going in circles.
Here's another version:
lim[N->infinity] (1 + ix/N)^N = cos(x) + i sin(x)
Now there are no "weird numbers", and both sides of the equation can be calculated directly, even by hand if you wanted.
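For anyone who wants to try it, a rough numerical sketch in Python, evaluating (1 + ix/N)^N by repeated squaring with N = 2^k (k here is an arbitrary choice, not anything canonical):

    import math

    def cis_approx(x, k=30):
        # (1 + ix/N)^N with N = 2^k, computed by squaring k times
        z = 1 + 1j * x / 2 ** k
        for _ in range(k):
            z *= z
        return z

    print(cis_approx(math.pi))    # close to (-1+0j)
    print(cis_approx(1.0))        # close to cos(1) + i*sin(1) ≈ 0.5403 + 0.8415j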
If all you're teaching students is a bunch of formulas to be memorized, the (-1)^x notation is kind of cute. But usually when teaching math, we want to build some kind of understanding.
> Try to formally define this procedure, though. You end up going in circles.
The cos(x) + isin(x) formula gives us a way to find the point on the complex plane's unit circle corresponding to an angle x, given in radians. (Plus it does more, because the argument is complex valued.)
The new formula with ncos and nsin does the same thing for an angle given in turns. E.g. 0.25 turns (90 degrees): (-1)^(0.5) = i. It's understandable in terms of roots of -1.
When you want to know the principal N-th root of number on the complex plane, you can simply divide its argument (i.e. angle) by N. The other roots are then equidistant points around the circle. So for instance, the square root of -1, which is sitting at 180 degrees, is found at 90 degrees, and is therefore i.
We can use -1 as the reference for measuring angles. The turns unit (one circle) is twice as far around the circle as -1, so that's where we get the 2. Because 90 degrees in turns isn't 0.5, but 0.25.
We could use 1 directly, but then we need the first complex root of unity. For instance, consider the Wikimedia diagram of the fifth roots of unity: the root closest to i has an angle of exactly 1/5 of a turn. There is a relationship between turns and roots of unity, because the N roots occupy N equidistant points on the circle, spaced by 1/N turns.
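To make the turns/roots-of-unity correspondence concrete, a tiny Python sketch (root_of_unity is a name made up for this example):

    import cmath

    def root_of_unity(k, n):
        # the k-th of the n-th roots of unity sits at exactly k/n turns
        return cmath.exp(2j * cmath.pi * (k / n))

    fifth_roots = [root_of_unity(k, 5) for k in range(5)]
    # root_of_unity(1, 5) is the root near i mentioned above, at exactly 1/5 turn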
You seem to have missed the point. You need the formula I gave to rigorously compute the roots of -1. Of course, you could notice that (cos(x) + i sin(x))^n = cos(nx) + i sin(nx), but that's what I meant by "going in circles". You end up defining (-1)^x in terms of sines and cosines, making the "formula" trivial. It's difficult, working this way, to understand why (-1)^(1/3) is (1 + isqrt(3))/2 and not just -1.
By contrast, the Bernoulli formula is actually computable. In fact, the CORDIC algorithm corresponds quite closely to computing the Bernoulli formula by repeated squaring. The use of arctan(2^(-n)) is just like taking (1 + i2^(-n))^(2^n).
There's a reason why math is structured the way it is.
> You end up defining (-1)^x in terms of sines and cosines
But we are explicitly doing that; we have "nsin" and "ncos" on the other side, and those are explicitly defined as just cos and sin with a scale factor applied to the argument.
The goal is simply, if there is a goal, can we have a nice correspondence between complex exponentiation of some base and the scaled sine and cosine that work with turns.
Hey look; if we change the angle coordinate so that a full circle is just 1 rather than an irrational number, then the transcendental e disappears from our version of this famous equation.
> if the formulas end up having weird numbers like 535.4916 or numbers like 2.718 or 6.28318 then obviously there's something suspicious about the equation.
Well, 2.718 is different than those numbers, because the derivative of 2.718^x is 2.718^x, which is a very interesting property of 2.718. The same cannot be said about 535.4. (6.283 is the ratio of a circle's circumference to its radius, which is just something intrinsic to the universe. I think it even transcends the universe, but that's hard for me to reason about. But basically, both 2*pi and e are fundamentally interesting.)
But it really has nothing to do with the universe, except insofar as maths happen to (imperfectly) match it.
Presumably if the universe seemed to match some other maths, we would have invented that variety instead. The Greeks knew the Earth was round, yet made up plane geometry; and never touched on spherical geometry, as far as we know.
Astonishingly, the concept of the number line did not surface until 2000 years later. With the number line, school children can do on command what the best mathematicians of antiquity struggled with for centuries.
If you're not using derivatives, integrals, or complex numbers, maybe you'd be better off using Wildberger's "rational trigonometry" with quadrances and spreads instead of angles? I haven't actually tried it myself. Wildberger's motivation is a sort of ultra-strict Platonism* mixed with the desire to extend analytic geometry to fields other than the real numbers, though, so it wouldn't be surprising if it wasn't actually a simpler way to write Asteroids. Doing trigonometry in Galois fields sounds super cool though and I hope I understand it one day.
Alternatively you can just directly represent angles as unit vectors in the desired direction, which is pretty much the same as using complex numbers. Angle addition is complex multiplication, angle bisection is complex square root, and computing the sine and cosine is simplicity itself. (This takes twice as much space. If you choose to store only the real part of the complex number, you can only represent angles up to half a turn, same as in Wildberger's approach, you lose some precision near the limits, and the other operations require some extra computations.) I have tried this, for example in http://canonical.org/~kragen/sw/aspmisc/my-very-first-raytra... and https://gitlab.com/kragen/bubbleos/-/blob/master/yeso/sdf.lu..., and in the cases I've tried it, it works great.
I'm interested to hear other people's experiences on this count!
______
* His main concern is that irrational numbers don't, in some sense, really exist, so they're a bad basis for trigonometry. As I understand it, not only is Platonism now a minority among foundations-of-mathematics types, but even Platonists generally believe that irrational numbers are just as real as rational ones, so as I understand it, Wildberger's viewpoint is held by quite a small minority. That doesn't, of course, imply anything about whether it's correct.
I never saw rational trigonometry, but I imagine it is mathematically overkill in another sense. When calling `sin` or `cos` in code you get back a rational approximation to the presumably irrational answer.
From what I quickly gleaned, using spreads would be really annoying for representing rotations, because you lose additivity: composing two rotations with spreads `a` and `b` does not give a rotation with spread `a + b`.
Lots of code using trig is about rotations, so losing that feature would probably not be the nicest.
In order to reason about the quality of the rational approximation, you need to reason about the irrational number it's an approximation to, for example as a Dedekind cut. Wildberger, rightly or wrongly, doesn't trust reasoning about that sort of thing; he wants to ground everything in the rationals. Maybe he's worried about finding some equivalent of Russell's barber paradox in the irrationals.
Also, of course, irrational numbers don't allow you to extend trigonometric theorems to Galois fields, complex numbers, and so on, which to my mind is a much more interesting direction. I don't know how much you lose if you use Wildberger's construction into a division algebra like the quaternions.
I don't actually know how you compute the "angle sum" in terms of Wildberger's spreads, but I'm pretty confident that there's a way to compute it, and it's pretty simple.
In the unit-vector representation I described, angle-sum is not just simple addition, but it's really not that bad: (a + bi)(c + di) = (ac - bd) + (ad + bc)i. That's four real multiplications, an addition, and a subtraction, and the result is exact if computed in bignums or bignum rationals. This is usually cheaper than computing sine and cosine, especially if you can use SIMD or vectors, and of course if you want to rotate some points around a center, you end up having to multiply by the sin and cos in exactly the same way anyway. (Maybe in strength-reduced fashion if you're texture-mapping or something, but that applies just as well to representing the angles as sin and cos.)
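For reference, a minimal sketch of that unit-vector representation, with Python complex numbers standing in for the vectors (function names invented for illustration):

    import cmath

    def from_turns(t):
        return cmath.exp(2j * cmath.pi * t)    # unit vector at t turns

    a = from_turns(0.125)                      # 45 degrees
    b = a * a                                  # angle addition = complex multiplication
    half = cmath.sqrt(b)                       # angle bisection = principal square root
    # b is about 0 + 1j (90 degrees); its cosine and sine are just b.real and b.imag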
So if I wanted to, say, calculate the height of a pole from the length of its shadow, I should use Wildberger's rational trig, because I don't need derivatives, integrals, or complex numbers?
You can do that without derivatives, integrals, complex numbers, sines, cosines, tangents, exponentials, square roots, or even addition and subtraction. If the pole is plumb, erect a pole smaller than you with a plumb bob, measure its length a and its shadow's length b, and then calculate the height of the original pole from its shadow's length c as ac/b.
If you have a slide rule, you can do this in a single motion: align c on the C scale over b on D and read off the answer on C above a on the D scale.
It's simpler than that. Draw a circle of radius 1, and draw two lines through the center. The distance along the circumference between those lines is the angle between them in radians. If you really want to remove the multiplicative identity as a magic number, you can define the radian angle as the ratio of the subtended circumference over the radius.
IMO this is at least the most accessible argument for why radians are special, and while I don't pretend to understand complex exponentiation, I expect it's the root of why other math involving radians turns out nicely.
The author makes the point that turns allow for exact representation of many commonly used angles, but with binary floating point, many common angles (1/6 of a turn, for example) are inexact.
This could be addressed by using a whole number other than 1 to represent a turn ... one that is a multiple of 3 (or 3x3) and 5, and while we're at it, 2 (or 2x2x2), so most commonly-used angles are whole numbers! That gives us 360 as the value representing a whole turn.
I just want to point out that that is an issue with radians too (pi/3). Whenever this happens I just use that same integer representation (or rational as some poster said) and then remember to multiply by tau before using a math library. With a turns-based library it would only make my life (very slightly) easier
There's no need to represent fractions of a turn as binary fractions, since you don't ever need more than 1 turn. You can represent fractions of a turn as (pair of integer) rationals, and round on the rare occasion that the denominator gets too big.
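A tiny sketch of that idea in Python, with Fraction standing in for the pair-of-integer rationals (the rounding threshold is an arbitrary choice):

    from fractions import Fraction

    def add_turns(a, b, max_den=1 << 16):
        t = (a + b) % 1                        # an angle never needs more than one turn
        if t.denominator > max_den:            # round on the rare occasion it gets too big
            t = Fraction(round(t * max_den), max_den)
        return t

    # add_turns(Fraction(1, 6), Fraction(1, 3)) == Fraction(1, 2), exactly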
Indeed, maths never "decreed that sine and cosine have to take radian arguments". But thinking that makes any sort of point is a fundamental misunderstanding of maths.
There are infinitely many sinusoidal functions out there. You can just adjust amplitude, frequency and phase to your heart's content.
Trigonometry basically requires that sine and cosine have specific amplitudes and phases, but gives not one shit about how you map angles to frequency. Degrees are completely arbitrary, but both radians and turns have pretty natural definitions, with turns indeed being the easiest to work with. So far so good.
Calculus does have an opinion on frequency, though. There is exactly one non-trivial pair of sinusoids s(x) and c(x) where c'(x) = - s(x) and s'(x) = c(x), among a bunch of other very useful properties.
When you put calculus and geometry together, s and c have the same amplitude and phase as sine and cosine from geometry, and the two pairs are exactly the same if you match the frequencies such that the argument is the angle measured in radians. It's just so damned useful to use angles in radians and make everything play together nicely.
Degrees are very natural in the context of ancient astronomy/astrology, where you have (1) ~365 days in a year, so that if you look at the path of something that takes a year you get about one degree change per day but with a number that is more easily divisible. (2) approximately 4y, 10y, 8y, 15y, 12y, 30y cycles for the moon and various planets. (3) A calendar with 12 months, 12 zodiac signs. (4) A timekeeping system which breaks days into 24 hours and then uses divisions by sixty for smaller units. (5) A base-sixty number system – from ancient Mesopotamia, which persisted as the standard for astronomical calculations for millennia, only displaced in the very recent past.
All those approximations have error and the behaviour you describe depends on where you are on Earth. Moreover, degrees are not natural from a mathematical perspective.
My favourite way of handling angles was always with either unsigned char or 16bit unsigned int that was treated as 1/nth of turn. Usually in these cases cos/sin tables were pre-calculated for speed, although that need went away to an extent. As long as the calculations wrap around on the underlying system, it makes angles much easier to manage, because angle1 + angle2 = angle3 is always within 0 to 255 or 0 to 65535. Unfortunately I mostly work with higher level languages now that have mostly dropped integer types.
If anybody knows how similar calculations can be easily achieved in JS for example, I'd love to hear about it. I'm sure there must be a better way than boundary checks and manual wrap-around.
> If anybody knows how similar calculations can be easily achieved in JS for example
Simply "a = (a + 0x1234) & 0xffffffff;". Or whatever width you require, 0xff or 0xffff. JIT is going to optimize that and-operation away (at least for 32-bit mask 0xffffffff) and keep the integer value internally.
You can also "cast" a var to int by "ORring" 0 with it, like "a |= 0;"
Thank you! This does exactly what I meant. I think this is the best solution for my use-cases. It even handles floating point operations on `a` correctly, something I didn't expect.
For the same number of bits, an integer representation of angle is always going to be more precise than a floating point one for angles away from 0. It's also going to be equally precise for the whole circle.
In JavaScript, I’d stay with floating-point (don’t fight the language if you don’t have to) and use something like x => x - Math.floor(x) to normalize.
> PICO-8 uses an input range of 0.0 to 1.0 to represent the angle, a percentage of the unit circle. Some refer to these units as "turns". For instance, 180° or π (3.14159) radians corresponds to 0.5 turns in PICO-8's representation of angles. In fact, for fans of τ (tau), it's just a matter of dropping τ from your expression.
Back in the early 80's a common thing to do in games on 8 bit computers was to implement sin and cos as lookup tables with the angles being 0-255 or 0-128 or something like that and the result also an integer that was some fixed point representation, so you'd do something like:
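A modern Python sketch of the idea (rather than period BASIC or assembly; the 256-entry size and 8-bit fixed-point scale are just illustrative choices):

    import math

    # 256 angle steps per full turn, values scaled to signed 8-bit fixed point
    SIN_TABLE = [round(127 * math.sin(math.tau * i / 256)) for i in range(256)]

    def isin(a):      # a is an integer angle, 0..255 == one full turn
        return SIN_TABLE[a & 0xFF]

    def icos(a):      # cos(a) = sin(a + quarter turn), i.e. a + 64
        return SIN_TABLE[(a + 64) & 0xFF]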
The original Asteroids source code has been released, we can look at what it did! (Also, even before that the EPROM images had been disassembled, but looking at the original sources is more fun.)
It indeed used (-128, 128) for (-180°,180°)
For instance, here it is using 64 (written as "40", because hexadecimal) for "PI/2":
COS: CLC ;COS(A)=SIN(A+PI/2)
ADC I,40
; JMP SIN
It implements COS in terms of SIN, and SIN wraps everything to be in the domain [0,64] (that is [0°,90°] ) then retrieves that from the lookup table.
Pico8 is 128x128 res, so I think the maximum number of visibly unique lines you could draw from one point is 512. You can't even express sub-one-degree (sub-1.422222 really) rotations on Pico8 visually without subsampling etc.
Interesting. I couldn't imagine asteroids without at least 1 degree accuracy feeling right, but I just fired up a pico8 emulation of asteroids and it had what seemed like about 24 steps of rotation, and played beautifully. Funny how my memory gave it way higher resolution.
The sin/cos lookup tables would only contain an octant of the circle and use various symmetries to map other values to that range. Thus the resolution is ~0.176 degrees for a 256-entry table.
The limit on precision wasn't the size of the lookup table, it was the size of the integers. The CPU's integers were only 8-bits. So if you're using native integers (i.e.: [0,256) == [0°,360°) ), you can't get angles more precise than 1.4°.
I agree that this makes sense for the kind of situations that the article talks about. If you only need to express common angles like 90 degrees, 45 and so on, radians are just messy (though in physics, you get used to it).
But in other cases, radians are useful. For example consider the case of small deviations from a direction. If you give it in radians, let's say three mrad (milliradians), it's very easy to estimate how large the error will be over the course of a meter; three mm.
This is just to say: choose the right unit for the job.
That's because of the equality relationship between 2π radians and the length of the unit circle perimeter. If one is working with a sine taking in turns, one can just adjust by saying sin(q) ≈ 2π * q for small q.
As already mentioned by others, radians are not arbitrary units for angles; in fact, they are the "natural" "units", so to speak.
By definition, an angle is just the ratio of a circular arc (s) to its radius (r), θ = s/r (as an exercise, imagine how to apply this definition to the angle between two intersecting lines). When the length of the circular arc equals its radius (s = r), the angle subtended is exactly 1 radian; of course, since this is just a ratio, 1 radian is exactly the same as 1 numerically, which is why I put "unit" in quotes earlier -- a radian is not really a unit at all!
A degree, in contrast, equals pi / 180 radians. Of course, since 1 radian = 1, that really just means that 1 deg = pi / 180, similar to how 1%=0.01. Putting this all together, it is perfectly parsable (although not recommended) to say that a $5 burger costs roughly $29000% deg.
The fact that it is natural doesn't make it performant and straightforward for all applications.
For example linear algebra is the natural and general way to handle vectors. However game developers still find quaternions faster and more performant.
>The fact that it is natural doesn't make it performant and straightforward for all applications.
How does changing the scale make anything more or less performant?
If anything, it makes things less performant since to use any hardware supported trig functions you now have to convert your weird angle representation into radians. For simple addition or fractions of your angle, it is just as performant as using angles in any scaling.
> For example linear algebra is the natural and general way to handle vectors. However game developers still find quaternions faster and more performant.
They only use quaternions for a few things, like slerp, and mostly because of gimbal lock.
For everything else they still use linear algebra, and linear algebra is used much, much more than quaternions for nearly any 3d program.
> How does changing the scale make anything more or less performant?
As the article shows, in application code we are multiplying by 2 pi, and the very first step in the optimized assembly is to divide by 2 pi. Therefore changing the scale to what both sides want saves 2 operations.
And why would the optimized version want that division? It is because the next step is to reduce down to a fixed range, then use a lookup table.
> They only use quaternions for a few things, like slerp, and mostly because of gimbal lock.
And yet they still do use them.
> For everything else they still use linear algebra, and linear algebra is used much, much more than quaternions for nearly any 3d program.
>As the article shows, in application code we are multiplying...
The article is wildly misleading.
I just checked out the godot code (git version c2f6664, today), and checked to see if what the author wrote and you believe actually happens in the code. It does not. Not even once. Didn't you find it odd that the author took two different codebases to make an argument between them instead of simply using one codebase?
Here is what I find:
First, avx_mathfun is not in Godot, nor is sin256_ps, nor is anything like it I can find. Feel free to find it and post where it is in godot.
The function sin is called 347 times in c, cpp, and h files. The function sinf is called 25 times. Of these, only 50 are scaled by tau in some direction. So right there it's terrible to optimize the other way.
And here is the kicker. Not a single one, zero, nada, called a function like the contrived sin func the author wrote about. Every single one called std::sin.
std::sin resolves on different architectures to (often) a hardware instruction or code like this from glibc (used on most x86 machines). Note this implementation, likely far more widely used than any other (since it's the code in GCC), does not use the method in the code the author posted.
For cos and cosf the result is the same (with small changes in the counts since some audio filters use more sin or cos than the other): cos occurred 370 times, cosf occurred 41 times, only 52 of these involved tau (again the majority has no tau on the front side), and of all these, again zero get passed to anything other than hardware std::cos.
So this pretty clearly demonstrates for this codebase that changing scale as the author wants would make the code less, not more performant.
Didn't you wonder why the author searched on tau alone and didn't compare to the tau cases? Or why he then jumped to a sin/cos approximation not in the original code? And why he didn't mention how prevalent hardware sin/cos is?
So, do you now believe that changing scaling is going to make code perform worse, not better?
Radians are a natural unit to measurements that are based on the radius. Radians are notably not natural to measurements that are based on the circumference, or to any equally divisible arcs of a circle.
Saying 1radian=1 is just as senseless as saying 1m=1=$1.
It's true that abstract math often drops units because some things (like Taylor series) work nicely in certain units. That doesn't make the unit meaningless.
Street-Fighting Mathematics, thesis/book by Sanjoy Mahajan, shows what amazing things you can do in abstract math if you don't forget units.
Sorry, but this is incorrect. An angle is defined through the ratio of two objects with common units; it is dimensionless for the same reason that 5m / 5m is dimensionless. You could argue that radians should only refer specifically to angles, but your own example demonstrates how impracticable that would be: you can't sensibly Taylor expand a trigonometric function (eg, sin(x) ~ x) if the left-hand side and right-hand side have incompatible units.
It's a good question since "radius" seems somewhat arbitrary, but the reason for defaulting to radius is that the unit circle (radius 1) produces convenient numbers in general in a way that a circle of diameter 1 doesn't. However, that doesn't mean that it's the most parsimonious choice for your particular application.
That was quite convincing actually. I guess we all have this realization at some point in early math education.
Why is it 360 degrees? Mainly because that's a nicely divisible number, no other good reason. Sometimes you find a 400 degree system on calculators but it doesn't seem to be taught anywhere (is it a French thing?)
Then at some point you get shown radians, which relates the arc length to the radius. That somehow seems natural, but it does mean there's going to be this constant lying around somewhere in your calculations.
Parameterizing the angle as a proportion of how big it can be (number of full circles) seems pretty sensible. I mean if you can avoid the constant for at least some of your geometry, then why not?
360 comes from the Babylonians, who used base-60 for numbers much for the reasons you describe (and who gave us the 24-hour day, the 60-minute hour and the 60-second minute, not to mention the 7-day week).
NATO forces have compasses labelled in mils or milliradians, which are not actually 1/1000 of a radian but as an approximation 1/6400 of a full turn. I still have my Silva military compass from 1989 graduated thus.
More importantly, it can be used for ranging. The average human height is known, and (usually) so are the sizes of whatever vehicles the other side might be using. Thus, when observing things of known size through some optic with a reticle graduated in mils, you can easily determine the range to them. Which then gives you e.g. the amount of holdover necessary to hit the target with a gun (which can itself be expressed as number of mils on the reticle to aim above).
The 400 system is the grads or gradians, indeed originating from the French revolution.
Nowadays I don't think they're used as the principal unit in any country. Wikipedia does mention it gets some use in specialized fields such as surveying, mining and geology.
They are used indirectly through distance. At the time, the meter was defined as one ten millionth of the distance between the north pole and equator through the Paris meridian. That means that the meter corresponds to 1/100000 of a grad of latitude -- which is better read as "a kilometre is 1/100 of a grad".
This is symmetrical to the nautical mile, which is one minute of arc.
Regarding the 400 "degree" system: they're called gradians and it's part of the centesimal system of angular measures according to Wikipedia. And your French guess was right, they have their origins in the French Revolution. More here: https://en.wikipedia.org/wiki/Gradian
I was going to look that up to confirm it, but then I realized I could prove that statement true using some simple logic I already know.
Earth does one cycle around the sun in 365 days.
So at midnight, looking straight up at a specific star (one lying roughly in the plane perpendicular to Earth's rotational axis), the star you spotted that day would appear slightly off the next day at midnight.
It would only end up in the same spot at midnight after 365 days.
So we are 5 days off, but I am going to believe it is true until someone corrects me.
Wikipedia agrees with you, technically, except the theory specifically names the sun as the star used. However, the reference is a wolfram alpha article, which only references the book The Elements of Typographic Style. I've never read the book but using the sun position makes sense to me, so I will also choose to believe this until corrected.
I'm so glad someone else finally said this. This article takes the approach of simplicity of code, which I do agree with, but additionally I've been thinking turns would have to be more intuitive and easier to teach (particularly to disinterested teenagers) for YEARS, and I finally feel vindicated at least one person agrees with me.
Turns are really the most neutral way to count an angle. We don't use them for everyday physical things because the numbers we'd deal with would be too small to work well for feeble human minds, hence degrees. But for the mathematical world where we currently use radians, turns make so much more sense.
The mathematical world doesn't only use radians. Parametrizing with a factor of 2*pi is totally common. Sometimes it’s called the winding number which here is called turns.
But yeah the overall point is good. Use language appropriate to the problem at hand.
Using degrees, turns, etc instead of radians, is like using 10^y instead of e^x (where y=x/ln(10)). Useful for many practical things, but useless for a lot of math applications, especially involving differential equations, complex numbers etc.
Right - there's a negotiation in terms of complexity. The coefficient baggage has to go somewhere.
For me I don't want to care which units I use (and I'm rarely inspecting the exact angle as a number) - consistency is most important. I'm rarely interested in the precise numerical value of an angle - it's just a thing in the graphics/physics pipeline somewhere.
I don't know if that makes me agnostic about this proposal or conservative.
When you store a value that will be reused, like in an array of points, and every time you walk the array you multiply by a constant, you should seek to store the values in a form that does not need that treatment. If you are taking sin and cos each time through (presumably added to some angle), they should be pre-multiplied for the fastest implementation of sin and cos available.
I’m surprised game engines do conversion from degrees to radian to call trigonometric functions. I would have called that bad code in an industrial context. We did everything in radians and converted to degrees only for displaying and input.
It was a lot more natural for us because we are a lot more familiar with radians anyway. I don’t think I have used degrees often since starting high school twenty years ago.
I consider APIs that use degrees essentially broken. (And similarly, APIs that express ratios as numbers in [0, 100].)
But you have to consider that a lot of game engines and related tools are intended for an audience without a strong programming background. You wouldn’t expect a 3D modeling tool to display radians. And if the UI shows degrees, the file format would better use degrees as well to avoid rounding errors. And then maybe you want the scripting layer to use the same values that the UI does. And so on.
Most game engines work in radians under the hood and expose degrees in the UI, but still need to glue the UI values to the API, which is the code seen in Casey’s Godot example; the full context of that example is a color picker, and h has already been converted from degrees. Godot itself has a bunch of the API in degrees, which is pretty bleugh.
It's for convenience of developing tooling. Because the only thing harder than re-framing all of mathematics so that sine and cosine are properly defined in terms of turns would be teaching artists and level designers to believe that a full turn is 2π radians instead of 360 degrees.
Yeah, tooling is fine in degrees, which is what I was saying in the first part of my post. It’s more mixing it in the API rather than at the boundaries that is bleugh.
Most APIs are intended to be used, doesn’t mean you need to mix units, particularly when they both use the same basic type and thus are easy to muddle up. That’s why you keep the API consistent and deal with conversion at the edges. I get you’re probably very keen on Godot but it’s definitely one of the weird warts that all engines accumulate and my intention isn’t to say it’s bad because of this choice. Just that I don’t like that choice and the reason why.
Ah, I see what you mean. I tend to be operating with APIs in languages that wouldn't call radians and degrees the same units and wouldn't let you just call a function that takes radians with degrees without an explicit cast; forgot that was a thing people have to worry about.
That’d definitely be my preference as well. Games is full of this as well. Handedness and general definition of coordinate system is another fairly arbitrary decision you want to keep consistent in your API but have to translate at the boundaries quite often.
> We did everything in radians and converted to degrees only for displaying and input.
This is generally how game (engine) code works as well. The example in the article is an example of performing that conversion, except with turns instead of degrees.
The post says that programmers convert to radians to call APIs that take radians and then immediately divide by pi before computing the sine. So doing everything in radians would still involve an extra floating point divide compared to the alternative.
Saying "rad" are bad units for sin is like saying e is a bad base for logarithms.
The only "bad" thing about rads is that they're not taught early enough, so culturally 45 degrees is not known as pi/4. Then a turn would be known simply as 2pi (or "a one eighty" as Americans infuriatingly like to call it when someone rotates 360 about themselves)
I'm actually more familiar with the reverse mistake, when people say someone "did a full 360" to mean they have changed their mind/approach.
I suppose the confusion there is with the association of "full 360" with "comprehensive" (as in looking all around, without any blind spots), which is valid.
There are a lot of comments here saying that radians are the only true way to deal with angles, however I agree with the author of the original article that turns are a legitimate alternative - I just wouldn't use the same language. Instead I would say that the new function I'm calculating is sin(2π t), and maybe also say that t is measured in turns, where (1 turn) = (2π rad).
It still has a nice small angle approximation: sin(2π t) ≈ 2π t for small t (arguably this is easier to interpret than sin(x) ≈ x), and its derivative is slightly more complicated: d/dt sin(2π t) = 2π cos(2π t). But everything is still perfectly workable and makes sense. I don't think you would find a mathematician or engineer surprised to come across functions such as these. (They may prefer to make the standard [1] substitution ω = 2πt if there is going to be a lot of differentiation involved, but this is a choice, not a requirement).
Turns can also be helpful as an intermediate unit which is to be translated both to an angle, and something else (colour, pitch, etc). I used turns internally for a pitch pipe application [2], where a turn became both an angle around a circle (t ↦ (cos(2πt), sin(2πt))), and a pitch moving up and down in equal temperament (t ↦ C4_FREQUENCY * 2^t). That way t=1.5 means either 1.5 octaves higher, or 1.5 full turns around the circle.
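A rough sketch of that dual use of t, in Python (C4_FREQUENCY is an assumed reference value and the function names are made up; the linked app presumably differs):

    import math

    C4_FREQUENCY = 261.63                      # Hz, middle C (assumed reference)

    def needle_position(t):
        # point on the unit circle after t turns
        return (math.cos(math.tau * t), math.sin(math.tau * t))

    def pitch(t):
        # t octaves above C4, e.g. t = 1.5 is an octave and a half up
        return C4_FREQUENCY * 2 ** t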
What mathematicians or engineers would be unhappy with is finding a redefinition of sin(t) to sin(2π t). Instead, lean into the fact that algebra can be a compact and unambiguous method of communication, and make a new library function called sin2π or something, and document that it calculates sin(2π t). Everyone will know what you mean.
I think this whole comment thread is missing the forest for the trees.
The forest here is: know what abstractions your dependencies use and be ready to break your own when you need more speed. This is a vital skill for game developers, where every cycle tends to matter.
The author is a software engineer and when he says "people", he implicitly means other software engineers. When he says "replace" he means in code, not in equations.
It's a general challenge of writing on the web that you don't know what context the author assumes and the author doesn't know what context the reader assumes.
In this case, the blog title "Computer, Enhance!" and the article subtitle "Switching away from radians makes code simpler, faster, and more precise." sends a pretty clear signal that this is about programming and not pure mathematics.
For any given article on the web, you can always generate valid criticisms based on the author assuming some context that may not be true for all possible readers. You can't say, "Ice cream is cold" without some commenter pointing out that you're doing a disservice to astronauts for whom ice cream is freeze dried and room temperature.
I find the best way to extract value from writing on the web is to simply try to understand the author's assumed context and go from there.
Computationally, we all only ever work with approximations, but when doing mathematics, pi is exact all the way out to the infinity-th digit. To multiply by pi (or any irrational, but particularly transcendental) in a pure mathematical context is to audaciously specify an infinitely long computational process. It is dizzying to contemplate, almost mystical. Sort of like modular arithmetic with an infinitely-precise irrational modulus.
In the surreal numbers [1], the definition of division and hence modulo applies to all numbers, finite and infinite alike. And for any finite number -omega < x < omega you would indeed have x mod omega == x.
You might already know about it, but Ehrlich claims[1] that the Surreals form an ordered field which is a maximal hyperreal number system in NBG, and therefore all the results of non-standard analysis "work" (up to isomorphism) in the Surreals.
[1] Philip Ehrlich. "The absolute arithmetic continuum and the unification of all numbers great and small." Bull. Symbolic Logic 18 (1) 1 - 45, March 2012. https://doi.org/10.2178/bsl/1327328438 Theorem 20
And I think that's the most important part of the argument here. By using radians in this case, you do extra calculation steps only to reduce the accuracy. That seems like a poor investment of the processor's time.
The effort wanted to replace all uses of Pi (including the ones in math formulas) and not just their use in (floating-point) computer programs where we implement approximations of those formulas.
But just because there was {a lot of work to replace all uses of pi with tau} doesn't mean there wasn't {a lot of work to replace all approximate uses of pi with approximate uses of tau in computer programs}.
Right, which makes it an effectively incorrect description. It is like if I said I wanted to change all green fruits to orange ones and you reported that as "my undertaking to change green apples to orange apples." It's the truth, but it's not the whole truth.
Better than turns, radians, etc. is an `Angle` newtype AKA wrapper class.
It completely eliminates misinterpretation of the value, miscalculation from [angle = angle + tau*n] as all angles are normalized, is more descriptive, and in a decent language is zero-cost.
Programmers should not be using (radians: float) in modern languages which support wrapper classes
This has nothing to do with the article, and it is equally applicable to degrees, radians, or turns. It neither solves nor hinders the simplicity or performance issues the article was talking about.
I disagree. By wrapping an angle in an Angle class, the internal representation need never be exposed to the programmer.
Rather than every programmer needing to read this blog post to see the performance benefits of using 'turns', instead now just a few library developers need to.
Types are just labels applied to variables. Their only power is type-checking a program to see whether every variable use is consistent. Wrapping something in a type doesn't magically change its value.
Not to mention, Angle is a particularly poor name, since radians, degrees and turns are all different measures of angles.
Say I have this program:
x : Angle = 90
y : Angle = pi/4
z : Angle = 1/4
sin : Angle -> Real
sin x //what will this print?
sin y //how about this?
sin z // ?
Why is Angle a poor name? The fact that radians, degrees and turns are all different ways to represent angles is exactly the point. Despite their different format, they represent the exact same thing, and sin(90 degrees) should return exactly the same as sin(1/4 turn) or sin(pi/2 radians).
From a mathematical perspective where people don't care about types, this is weird, but from an OOP perspective with polymorphism this is exactly right.
So in that sense, instead of storing angles as just a number where the programmer needs to keep checking whether it's in radians, degrees or turns, you should store it as an Angle object. And that object shouldn't use any approximations of pi, but understand what pi means. Angle.fromDegree(90) should be exactly, and not approximately, Angle.fromRadian(PI/2).
You are right that `Angle` is the wrong name for a newtype but wrong that this is not a job for types.
> Wrapping something in a type doesn't magically change its value.
Most formal definitions of value disagree, e.g. from Stepanov & McJones:
A value type is a correspondence between a species (abstract or concrete) and a set of datums. A datum corresponding to a particular entity is called a representation of the entity; the entity is called the interpretation of the datum. We refer to a datum together with its interpretation as a value.

In this case the species is abstract (rotational measurement) and the interpretation is the unit. 2pi radians, 360 degrees, and 1 turn all correspond to the same abstract entity; radians, degrees, and turns are types.
But you can also find the mistake just based on your own comment, where you've taken an unjustified leap from variable to value:
> Types are just labels applied to variables... Wrapping something in a type doesn't magically change its value.
Types are labels applied to variables, specifically in order to produce a particular value from the contents of some memory region (what Stepanov & McJones call "datum"). If changing types didn't change values, types would be near-useless (cf, as a concrete example, C's near-useless type system).
One way to handle this with an Angle type is to not allow an automatic cast from float/double/int. Instead, you expose methods that normalise the input and can represent it internally however you like, i.e. from_degrees, from_turns, from_radians or similar, forcing the programmer to be explicit about their unit when the angle is constructed.
Alternatively, Angle could be the base class and each unit gets its own class that Angle inherits from.
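A rough sketch of that newtype idea in Python (the names and the choice of turns as the internal representation are just for illustration, not a real library):

    import math
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Angle:
        turns: float                           # internal representation, normalized to [0, 1)

        @staticmethod
        def from_turns(t):
            return Angle(t % 1.0)

        @staticmethod
        def from_degrees(d):
            return Angle((d / 360.0) % 1.0)

        @staticmethod
        def from_radians(r):
            return Angle((r / math.tau) % 1.0)

        def sin(self):
            return math.sin(math.tau * self.turns)

    # Angle.from_degrees(90) and Angle.from_turns(0.25) compare equal,
    # and both give sin() == 1.0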
> I disagree. By wrapping an angle in an Angle class, the internal representation need never be exposed to the programmer.
Because nobody will ever instantiate one?
At the end of the day, you're not implementing these as an exercise in hermetic design, you're trying to do arithmetic, presumably on numbers you have.
Does this matter any more than the fact the underlying representation within sin is e.g. an 80 bit number no programming language can give you? `Angle` is a bad name for this example because a newtype should be named after the unit rather than what it models, but once you've newtyped it, what matters is that you call sin with the right one, not the actual representation.
Yes, the entire point of the article is that it does in fact matter.
> what matters is that you call sin with the right one, not the actual representation.
No, the actual representation does matter. At some point, arithmetic operations are being performed and those operations consume CPU cycles. If you care about the performance of the code, you care about having a representation that minimizes the number of those operations.
So make sure you use `typeof sin[0]` or whatever the syntax is in your language for your own variables - in the end this still requires newtyping the thing.
For performance in this context, the actual representation does not matter - only that your code and your sin code agree to avoid the conversion cost.
I think you're confusing "representation" to mean just the number of bits allocated to the number, but the article and my comments also use it to mean what the numeric range of those values represents (hence "representation").
How that's modeled in the static type system is completely orthogonal to the underlying arithmetic operations that are performed and their efficiency. The article is entirely about the latter.
No, I think you're confused about what I'm saying about performance. I definitely don't think "representation" just means the number of bits.
> How that's modeled in the static type system is completely orthogonal to the underlying arithmetic operations that are performed and their efficiency.
No! A type system gives information in both directions. Normally yes, we use it to impose some interpretation on a representation to produce a value, so we can think in values and not bits. But if two types model the same range of entities, i.e. angular measurements, it also tells you if you have a value with a representation that you might have to spend time futzing with to integrate with something else expecting the same entity but a different type.
If we had reasonable types, sin could define the type it wants to do what it does efficiently (turns, or radians if the platform has an efficient instruction, or a lookup table index into some division of a quadrant, or whatever). That representation matters to the sin author, but not the user. All the user cares about is, then, how many times do I have to pay some conversion cost? And if I'm using the type the sin author provided, the answer is zero (or maybe like one, at initial load). But that relies on the sin author using such a newtype.
(Arguably the user also cares how much memory it takes to store the representation the sin author chose, but in practice for sin that's going to be a word or less whatever the representation is.)
You also have a blinkered view of the article if you think it's "entirely about" performance, given how often it also talks about simplicity and accuracy.
I feel like we're talking past each other or the goal posts are moving or something.
> If we had reasonable types, sin could define the type it wants to do what it does efficiently (turns, or radians if the platform has an efficient instruction, or a lookup table index into some division of a quadrant, or whatever). That representation matters to the sin author, but not the user. All the user cares about is, then, how many times do I have to pay some conversion cost? And if I'm using the type the sin author provided, the answer is zero (or maybe like one, at initial load). But that relies on the sin author using such a newtype.
Everything here is equally true if you take types entirely out. sin() will ultimately be doing math on floating point values. The sin() function expects angles according to a specified scale. Some scales (turns, according to the author) require one less multiplication in the function prelude. Therefore, sin() is faster if it specifies that as the input scale.
That benefit is negated if angles are stored in another scale and rescaled every time sin() is called. That's just hoisting the multiplication out to every callsite.
So what you need is to store angles in the numeric scale that is fastest for the implementation of sin().
Whether those floating point angles are wrapped in a newtype is completely orthogonal. What matters is that the numeric scale you use to store and represent angles is the one that requires the least arithmetic in the underlying implementation of the trig functions.
Unit types are great, but they solve different problems than the article is talking about.
Well, I don't think they are. Sure, they make computing the function simpler. But I would hazard that most people who use these functions are not doing geometry, and they are already working with radians. In short, if you give any kind of engineer working in any kind of signal-processing domain a sine function where sin(1) = 0, you are only going to confuse them.
PS: that's why mathematicians don't use turns; they are mostly not doing geometry, and radians in that case typically make for better formulas.
Not really; pretty much every single one would assume you just forgot the pi, because everyone writes "sin(1pi)" and never actually "sin(3.14…)". No one ever writes down numbers in the unit of radians; they already convert to half rotations or full rotations by scaling with pi. Imagine if someone went "nanometers are a dumb unit, because I always write down my numbers as h=342 x 10^-9 m": that last part, "10^-9 m", is just nm. In the same way, sin(2.86pi) might as well just be written sin(1.43*rotation). Arguing for radians is arguing that the most natural way to write it is sin(8.985), which you will pretty much never see anyone do.
In my example you would not actually write it out but instead have something like this.
[sin(x) for x in sample]
My point is that the trig functions are abstract and useful in multiple domains, and in most of these domains turns do not make sense. Turns only make sense in geometry and maybe some physics, but most of the time in these cases you might be better off working with other units, like say quaternions.
The fact is in the vast majority of literature trig functions take rads as arguments, it's the sane default for that reason alone.
> no one ever writes down numbers in the unit of radians, they already convert to half rotations or full rotations by scaling with pi
That statement is true only if you are talking about geometry. If you are working in any other domain, the trigonometric functions operate on real numbers and have nothing to do with rotations or angles, and if I call sin(1) I expect to get back 0.8415, not 0.
Still, the discussion is only about convenience. For example, e^ix = cos x + i sin x (with sin/cos taking an argument in radians) would become e^ix = cos 2pi x + i sin 2pi x (with sin/cos taking an argument in turns). It's more cumbersome than the radian-based definitions, but it's not strictly different.
I'm interested in this from the perspective of learning maths, rather than writing code (for now).
I've wondered for a few years now whether teaching angles and trig using turns, rather than degrees or radians, would be better from the very beginning. Degrees are arbitrary and based on the numeric preferences of a dead culture, rather than on what's happening on the page or in 3d space. Radians seem better because the units are related to a property of the circle, but they're hard to visualise and reason about because they don't fit a circle in whole numbers. Surely turns are the most clear.
I'm rusty and don't practice maths much. If I did I'd probably have the skills of 14y/o me, for anything outside set theory. I'd definitely do a trigonometry course based on turns if I could find one.
Radians make the most sense just because (as others have pointed out) pi appears all over in math, and in all those places if you are working with a unit of radians the math becomes a lot easier. In a few cases turns are easier to work with, but teaching them is a dead end for the vast majority of students who will never work in those domains, and even the few students who work in a domain where turns make sense will still spend time in other areas of math that work much better in radians.
If mathematicians believed things can’t change or improve, and we have to stick with the status quo only because it’s common and popular, we’d never have had radians in the first place, we’d be stuck with degrees, no? Why assume that people can’t convert between turns and radians as needed, when we all already convert between degrees and radians all the time? Why not imagine a pedagogy that teaches turns and radians and degrees, giving students more flexibility than the current set rather than less?
> but they're hard to visualise and reason about because they don't fit a circle in whole numbers.
I think, unfortunately, that you can't avoid encountering irrational numbers in trig. You would need to constrain yourself to working only with right angles, but in those cases sin and cos are trivial[0] so there would be no need to use trig in the first place.
[0]: 1 or 0, you're either on the right axis or you're not. No circles involved.
> Radians seem better because the units are related to a property of the circle, but they're hard to visualise and reason about because they don't fit a circle in whole numbers.
When I was learning this in school, radians were always expressed as (fraction * pi), not the final number.
This is quite convincing but it would have been more convincing if he'd acknowledged the downsides and explained why it is in radians in the first place. (On balance I think he's probably still right.)
Perhaps we can make new named functions that operate in turns, along the same lines as ln/log. sint, cost, etc. Ok maybe not cost.
We manage OK with cosh for the hyperbolic cosine even though "cosh" is a word. For that matter, "sin" and "cos" are both words, though of course the word "sin" and the function "sin" are pronounced differently.
`cost` is a normal variable name in a lot of code though, whereas `cosh` definitely isn't! In fact I bet you can't find a single use of `cosh` in code that isn't talking about hyperbolic cosine.
While I agree that "turns" are much more convenient in some applications than radians, there is no need to invent a new terminology.
For a long time, including the 19th century, the plane angle measurement unit corresponding with 4 right angles, i.e. a complete rotation around a point, has been named "cycle".
That is why in many old physics or engineering books one will find wave numbers measured in "cycles per meter" and frequencies measured in "cycles per second"; for example, what is now called a frequency of "10 MHz" was called a frequency of "10 megacycles per second".
There are 3 important measurement units for the plane angle and each of them is the most convenient unit for a certain class of applications: the right angle, the cycle and the radian.
(An example where the right angle is the most convenient unit is when expressing a complex number with unit modulus as (i^x) instead of the (e ^ (i * x)) that is used with radians.)
Using the inappropriate plane angle measurement unit for an application causes a loss of precision (which can be large for large angles that must be reduced to the 1st quadrant) and introduces extra arithmetic operations that are not needed.
The radian is used much more frequently than it should be used because many standard programming libraries provide only trigonometric functions with arguments in radians, which is a big mistake.
The recent versions of the Floating-Point Arithmetic standard recommend the functions sinPi, cosPi, tanPi, atan2Pi, asinPi, acosPi and atanPi.
This is another serious mistake, because one normally needs either the trigonometric functions of (x * Pi * 2), i.e. with angles measured in cycles, or those of (x * Pi / 2), i.e. with angles measured in right angles, and never the functions of (x * Pi), which are recommended by the standard.
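For concreteness, here is a minimal sketch (hypothetical wrapper names, not any standard library API) of entry points in the three units mentioned above, each just rescaling into the ordinary radian sin(). Naive rescaling like this of course reintroduces the precision loss mentioned above for large arguments; a real implementation would do its range reduction in the target unit first.

    #include <cmath>
    #include <cstdio>

    constexpr double kTwoPi = 6.283185307179586476925286766559;

    // Hypothetical wrappers, one per unit: cycles (turns), right angles,
    // and the half-turns used by the sinPi-style functions.
    double sin_cycles(double x)       { return std::sin(x * kTwoPi); }        // 1.0 per full rotation
    double sin_right_angles(double x) { return std::sin(x * kTwoPi / 4.0); }  // 4.0 per full rotation
    double sin_half_turns(double x)   { return std::sin(x * kTwoPi / 2.0); }  // 2.0 per full rotation

    int main() {
        std::printf("%f %f %f\n",
                    sin_cycles(0.25),        // a quarter turn      -> 1
                    sin_right_angles(1.0),   // one right angle     -> 1
                    sin_half_turns(0.5));    // half of a half-turn -> 1
    }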
That page doesn't actually state anywhere when the term 'turns' was actually introduced; the concept, yes, but not the terminology. As far as I can tell, the first reference it gives to the term being used is ISO 80000-3:2006, completed in 2009, although it must have existed earlier because Fred Hoyle appears to have used it and derivatives in 1962. I thought the reference to percentage protractors might be fruitful, but I tracked down the article reference for that from 1922 and it doesn't use the term.
The difference between representation in units of right angles (1/4 cycles), and cycles is just a matter of bumping the exponent up or down two clicks. So cycles / turns and radians suffice.
I suppose it is in solutions to differential equations where radians become important.
This is like complaining about g in F = g * m, the gravitational force formula, and redefining the gravitational constant g as 1 and multiplying "gram" by 1/9.8.
Sure it simplifies things for you, but you are breaking everything else that used the g constant.
Or we can redefine seconds perhaps, and multiply them by sqrt(1/9.8).
Oh yes. This is definitely a pet peeve of mine. CGS is so much nicer. I did E&M from Jackson before he converted it to MKS, and I still can't keep all those epsilon_0 and mu_0's straight. (Not that it comes up all that much.)
On the frontend, one thing that I discovered when implementing color spaces in my canvas library was that the CSS HWB standard[1] allows the hue part of a color (which is an angle value) to be supplied as either 'Ndeg', 'Nrad', 'Ngrad' or 'Nturn' values. Other CSS color spaces involving hue don't seem to accept 'turn' (though I could be misunderstanding them).
You’re misunderstanding things. CSS is typed, and deg/rad/grad/turn are all just angle units; anything that takes an <angle> supports angles in any unit. hsl(0.5turn 100% 50%) is equivalent to hsl(180deg 100% 50%) and #00ffff (and infinite other spellings).
When it was introduced, hsl() only took a <number> for hue, which was interpreted as degrees, but it has had proper <angle> support for over a decade (apart from Opera which only got it with the switch to Chromium in 2013). The current state of affairs is `<hue> = <number> | <angle> | none` (https://drafts.csswg.org/css-color-4/#hue-syntax), and <hue> is used by hsl(), hsla(), hwb(), lch() and oklch().
I recommend reading specs regularly when doing anything like this. Web specs are generally pretty approachable, and they generally match implementations very closely these days (largely because the implementers are the spec-writers, which wasn’t so much the case before, say, the HTML5 effort), with the caveat that parts of these drafts precede the implementations and thus may end up being altered due to implementation experience.
Is there a Rust library that also takes "turns" (or cycles or something similar) that I can use? I have been wondering (very low-key-ish) about this problem for some years and now that I know this I want to use turns in my code.
> But math never decreed that sine and cosine have to take radian arguments!
To quote a really important comment posted by Eduardo Vasquez on the article:
> [...] all those formulas of derivatives and primitives of trig functions in standard calculus books assume that arguments are expressed in radians. Say, the derivative of sin(x) w.r.t. x is cos(x) --- that is only true if x is in radians. Otherwise, you would get an extra factor, due to the chain rule. [...]
(There are a few comments here that point this out, but they are nested pretty deep so I thought it was worth repeating.)
Lots of things work better in radians: s = r * theta, area = 1/2 theta * r^2, d(sin(x))/dx = cos(x) (etc, for slopes and numerical algorithms), simpler series approximations and algorithms for computing these and other functions, movement rates of things using gears or wheels are simpler to calculate, radians are dimensionless whereas anything else is not (when used in nearly any physics or math uses), hardware support for lots of related functions, interoperability with legions of existing software, papers, books, and algorithms...
{vulgar fraction one quarter}{subscript zero}{subscript zero} is the wrong way of writing this, generally producing a suboptimal result (denominators and subscripts occupy different lines, and the known fractions often have slightly different, more manually-optimised layouts anyway). The proper way is {digit one}{fraction slash}{digit four}{digit zero}{digit zero}: 1⁄400. Won’t render looking like a fraction with split numerator and denominator in all fonts, but it should mostly look better.
A lot of people are getting hung up on the "math never decreed" sentence (not entirely wrongly since it's basically false, and surprising for Casey who has probably forgotten more linalg than I'll ever know), but that's not really the point of the article.
The point is that notationally, turns seem to read better in most code that isn't doing analysis. I'd say this points more to a flaw in our languages than in our function definitions. Of the major general-purpose languages I think only C++ has really taken a shot at implicit unit conversion, which would let you safely and correctly sum a `turn facing` and a `degrees delta` and pass the result to a `float sin(radian x)` function, while statically ensuring your dimensions remain correct.
Everything else I can think of either makes newtypes too complicated to define, lacks conversion overloads (or more likely lacks operator overloads entirely), or refuses to let you do them implicitly. C++'s approach is certainly too general, but is it really impossible to corral such behavior in a way that's both safe and convenient?
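As a rough sketch of the idea (hypothetical types, not part of any standard or existing units library), a strong angle type per unit with converting operators already gets part of the way in C++; a fuller library would also define mixed-unit arithmetic operators so the sum itself stays typed.

    #include <cmath>
    #include <cstdio>

    constexpr double kPi = 3.14159265358979323846;

    struct radians { double v; };

    struct turns {
        double v;
        constexpr operator radians() const { return radians{v * 2.0 * kPi}; }
    };

    struct degrees {
        double v;
        constexpr operator radians() const { return radians{v * kPi / 180.0}; }
    };

    // The library-facing function keeps taking radians, as today.
    double sine(radians x) { return std::sin(x.v); }

    int main() {
        turns facing{0.25};
        degrees delta{45.0};
        std::printf("%f\n", sine(facing));   // implicit turns -> radians: sin(90deg) = 1
        // Summing across units still needs an explicit common unit in this sketch.
        radians total{radians(facing).v + radians(delta).v};
        std::printf("%f\n", sine(total));    // sin(90deg + 45deg) ~= 0.7071
    }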
The major motivation for radians is arc length parametrization, really. Meaning that in a circle of radius 1 unit (in whatever measurement unit you've chosen), an arc formed by a k-rad angle measures k units. There is an intentional coincidence of angles and arc measurements.
What is the radius of a circle that has a circumference of 1 (as proposed in TFA)? It is 0.5/PI == ~0.15915, which means you are just moving the "problem" elsewhere. I am sure there's a lot of math that is simpler with radius being 1.0 (with some input in radians) vs an input in "turns" and having to deal with a 0.15915 radius.
Yes, agree turns would be so much nicer for many use cases, but I'd like to see which operations it makes worse first :)
If indeed almost every hardware/library implementation of `sin` would lose a multiply by an arbitrary constant by choosing a new input scale, that would convince me too, as long as it was the same scale for all.
But the proposal here would not change the circumference to 1, it's about representing angles in another currency. More often than not, when you are doing trigonometry you care more about angles than arc length. The issue stems from thinking of angles in terms of arc length.
The angle needing to be used as a length (to be able to use the same scale as a unit radius) is something that naturally happens in a lot of trig algorithms.
It depends whether you care more about the "UI" (the user of the algorithm gets a more pleasant input range to use) vs how convoluted the implementation is (somewhere deep down an extra multiply by 0.15915 was needed).
All I am saying that "turns" are not universally better, they have downsides too.
What subfield of mathematics? Surely if you do differential equations, your trigonometric functions will eat radians. But e.g. for geometry, where you do basic arithmetic operations on angles, turns are a bit more convenient than radians. (Radians are not that inconvenient if you denote 6.28 somehow, but then again, why not just use turns.)
The situation is similar to that of the logarithm and exponential functions[0].
There’s a log2 function and a log10 function and they are both useful. But when we talk about the log() function there can be no doubt that it is to base e.
If you want to define a sinT() function that works in turns then that’s totally fine. But the sin() function is defined as taking an argument scaled in radians, because it is mathematically natural.
Mathematically speaking, all trig functions are in radians. When you write sin(90°) the degree symbol ° is a conversion factor. I blame calculators for confusing high schoolers into believing that there is a separate set of functions that work in degrees.
[0] unsurprisingly because Euler’s formula equates the trigonometric and exponential functions.
The sine function, defined as x - x^3/3! + ... doesn't take an argument scaled in radians. It takes real numbers. It has nothing to do with radians, really. Or even with angles.
The other sine function, defined using right triangles, takes an argument in angles, and also has nothing to do with the measurement unit.
(Also I don't know who told you that log() denotes log_e. Maybe in your narrow environment, but definitely not universal between fields and languages. Personally I prefer ln, ld and lb for natural, decimal and binary logarithms.)
> who told you that log() denotes log_e. Maybe in your narrow environment
This is a nearly universal convention in modern mathematics (except a few niches like information theory and computational complexity theory where it means log base 2, which is usually clear from context).
Engineering disciplines used to use "common" logarithms (i.e. base ten) all over the place back when most calculations were done with slide rules, lookups in paper tables, and pen-and-paper arithmetic, but with the advent of computers multiplication is just as cheap as addition, and expressing things on log scales is less necessary.
Over time the mathematicians are winning the fight to define the symbol 'log'.
Yes, Google in this context is a glorified pocket calculator, and follows the convention from the slide rule era. But if you find log in a mathematics paper it almost surely means the natural log.
> The sine function, defined as x-x^3/3!+... doesn't take argument scaled in radians. It takes real numbers. It has nothing to do with radians, really. Or even with angles.
The point is that a definition of the sine function where sin(pi/2) = 1 is equivalent to a sine function taking radians.
You could also define sinT(x) such that sinT(1/4) = 1: sinT(x) = sin(2pi x) = sin (tau x) = 2pi * x - (8pi^3 * x^3) / 3! + [...]. Neither of these is more or less fundamental than the other, but one is more convenient in most (non-trig) calculations.
But I must confess that we had ln() in university courses and by default log used base 10. Now I use ln and a base for the log as a subscript like log_10, log_2, etc.
> But I must confess that we had ln() in university courses
Same. I often wonder why anyone would denote the natural logarithm with log(), when ln is shorter and easier to read (at least for the people that were taught to use it), and it is already somewhat established.
Haha the Brainfuck analogy was funny. Note that in programming languages it looks like Log denotes Ln. Anyway, Log_b(x) = Log_a(x) / Log_a(b), so we're covered.
Parent comment is way too dogmatic for my taste. It even mentions a mythological entity as an appeal to authority.
In my opinion, mathematicians always choose the notation that's more convenient for them, at the moment, for a particular problem.
If a given problem is easier using another form of sin/cos, etc., they will use it, and it will be used without hesitation. In that sense, mathematicians could not be more pragmatic.
However, for many things, as long as the result is correct, they don't care about the operations' computability. Performance is an afterthought because for them (a*pi)/pi is exactly the same as 'a'. All operations are instantaneous.
Taylor series for example are a perfectly fine final answer in calculus, but to a programmer they are an infinite set of partial approximations that can take any arbitrary time to execute.
This is what makes computer science fascinating =)
True, but if you are talking about video games in particular you could just build a sin/cos function with different parameters into the game engine.
Have a `sin(x)` where the unit of x is radians and a `sin_turn(x)` where x is expressed in turns.
Video games especially are a great situation to do it like this because they often use a framework (game engine) that was specifically created for this purpose.
Yes, this is the topic of the fine article and the point that is being made. Ideally both of the multiplies (one in the user code to multiply by pi and another in the engine code to divide by pi) can be omitted.
> But they are turns of a circle of a given radius.
> Radians (circle fractions)
It's the other way around actually. A turn is a turn, no matter the radius of the circle.
Radians are the length of the arc you need to draw a fraction of a circle with a radius of 1.
(In the end, both are just ways to describe angles and thus independent of any radii... the only effective difference between them is a constant factor of tau or two pi.)
>But they are turns of a circle of a given radius.
No, a revolution is a revolution. If you turn a 1 m wheel by one revolution and a 2 m wheel also by one revolution, both will have turned one revolution, or 2 pi radians. If you roll both of them 2 pi m forward on the ground, one will have turned one revolution, or 2 pi radians, and the other will have turned half a revolution, or pi radians.
Yes, it would devirtualize the dispatches for that kind of thing at compile time if the argument type is known at compile time. See how Unitful.jl works. You can then see inside of the LLVM and native code that it swaps in the required functions.
I am a bit surprised with all this commentary that no one has yet mentioned the two variants of Planck's constant - the original one in terms of cycles (turns, here) and the hBar=h/(2pi) that winds up being more common.
Questions like this ultimately have answers related to what formulas are most often deployed - more a sociological question than anything else.
I appreciate the article encouraging programmers to think about API design.
At the same time, I look forward to a future (or present?) where compilers and static analysis tools can point out examples like this; e.g, many examples of calling code multiplying by pi followed by function code dividing by pi.
P.S. This reminds me somewhat of the Department of Redundancy Department.
In my view, because pi crops up unavoidably in math, if you concoct a "unit" to get rid of pi in one place, it will simply crop up somewhere else, perhaps in a denominator.
For instance: The ratio of rise to run for small angles.
Working in optics, radians are such nice units: A milliradian is a millimeter per meter or a "mil" per inch.
> In my view, because pi crops up unavoidably in math, if you concoct a "unit" to get rid of pi in one place, it will simply crop up somewhere else, perhaps in a denominator.
That doesn't mean you shouldn't try to put it in a convenient place.
One way to think of the post is: where do you want pi to come up?
With arc length parametrization f(r) = (cos(r), sin(r)), it comes up in the parameter space (one turn: 0 <= r <= 2 pi). If you had the whole thing in terms of turns, you'd instead have (as a primitive) some kind of function g(t); with one full round for 0 <= t <= 1. It'd then have to be true that
f(2 pi t) = g(t) = (cos(2 pi t), sin(2 pi t)).
Pi would come up in the velocity:
f'(r) = (-sin(r), cos(r)) = i f(r)
(i u means rotate the vector u by 90 degrees counter-clockwise)
g'(t) = 2 pi f'(2 pi t) = 2 pi (i f(2 pi t)) = 2 pi (i g(t))
Before, you had |f'| = 1. Now you have |g'| = 2 pi.
For classical physics (kinematics and dynamics) applications and classical geometrical applications (curvature, etc), it's really convenient to have that speed term (|f'|) being 1. This is one of the major motivations for arc length parametrization.
By the way, this can't be overstated. It really simplifies kinematics, dynamics, geometry, etc, having |f'| = 1 throughout. It's not just for circles. This can be done for an extremely large class of curves and it makes the related math much more understandable and easier to deal with.
For a lot of computer graphics (I believe this is where Casey comes from), you care less about traditional mathematics for physics and geometry. So you'd rather (maybe) take this pi appearing in the parameter space and push it to the velocity.
Pi and radians mostly and naturally enter math when we are dealing with both angles and radial distances (e.g. in spherical coordinates). If we choose to normalize in terms of "turns", then angles and circumferences may look natural but radial distances acquire Pi's everywhere.
Radians are just half-turns, so we use turns either way. Some equations look better in turns and some in half-turns, but the math works fine either way.
Instead, to decide which is better, think of how a new student might learn this intuitively:
No, radians are pi * half-turns. For example, a 90 degree angle is a quarter-turn (1/4 turns) or pi/2 radians. It is most definitely not 1/2 radians. Equivalently, an angle's value in radians is tau times its value in turns: 1/4 turns is tau/4 radians.
Yea, I have some macros cysin and cycos that do that, I just call them cycles instead of turns, but it's the same idea.. ofc internally those are still stupid and just multiply by tau..
Who cares? It doesn't matter. No one doing serious work in physics, simulation, etc cares about units at all besides very broad distinctions between systems like natural units vs constructed. Arguing about imperial, metric, pi, tau, etc is 99% bikeshedding by people who don't even do this stuff.
> But math never decreed that sine and cosine have to take radian arguments
Yes it did. The lowest-Kolmogorov-complexity definitions of the trigonometric functions (free from non-integer constants) all take radian-based arguments.
Ooof, you're no true scotsmanning with that bikeshedding argument.
> No one doing serious work in physics, simulation
Well, he is doing game engines, so you are right, it is not about "serious work in physics, simulation", it is about simplicity and performance in games.
Not closely related, but the topic reminds me of this gem of a question from math.SE: Why does "Turn! Turn! Turn!" equal 241217.524881? (It's because of the calculator, but not in an obvious way.)
Bravo! Bravo!! Now if we can just make base10 math the default in modern languages and only let the very few propeller headed weenies that really need it ever pay attention to floating point math and IEEE-754 Hell, we can join the 21st century we were promised.
A motivation seems to be performance (avoiding useless multiplications followed by divisions by the same factor). I'm not sure that you really "pay" for these multiplications, with code optimization?
I strongly suspect that in most cases, yes, you do. The only time you wouldn't pay this cost is if the multiplication outside of the sin() call and the multiplication inside of it can be constant folded together. That requires the call to sin() to have its code inlined at the callsite. Given how large most sin() implementations are, I would be fairly surprised if it does get inlined.
The only way to answer this is to profile it and see.
"math never decreed that sine and cosine have to take radian arguments!"
This is at best questionable and at worst false.
If you only want to use sin and cos as functions for doing trigonometry, it is true that you can choose whatever angle unit you like and stick with it and it will be fine.
For most other stuff, e.g. differential equations, complex analysis, signal processing and mechanics, it's pretty much inescapable that the zeroes of sin are at integer multiples of pi, and that's that.
If you differentiate sin(x) with respect to x then you get cos(x), but only if your trig functions are using radians. Any other unit results in an extra coefficient appearing. That’s not an insurmountable problem, but radians are the fundamental unit here, not just an arbitrary choice.
The main thing to realise is that sin and cos are not fundamentally tools for doing geometry. The fact that you can use them for working out side lengths of triangles or converting polar to cartesian coordinates is somewhat incidental.
It doesn't help that at school our first look at sin and cos is all about adjacent sides and opposite sides in right-angled triangles. It's understandable, because jumping straight into the deep end would be too hard, but it's a bit misleading.
In most mathematical applications, the x in "sin(x)" doesn't even represent an angle, so it doesn't make sense to talk about whether sin and cos are "in degrees or in radians". They're simply functions that crop up as solutions to the differential equation that describes harmonic oscillation; or the imaginary and real parts of e^ix; or exponentiation of certain matrices; or a whole load of other stuff I haven't thought of.
In all those settings, it turns out that sin has zeroes at integer multiples of pi, which forces the convention that a half-turn is an angle of pi, and the definition of radians follows from there. But as I said, for the specific case of basic trig, carrying around a scaling factor and doing everything in degrees is easy enough. Carrying that same scaling factor around in pretty much any other application of sin, cos and related functions would be hell.
Well this isn’t very fair. Yes, triangles have very little to do with the true nature of sin and cos.
It’s also true that they are the basic building blocks of cyclicity.
But to say they are not geometric tools is dishonest. They instead show us that geometry is deeply connected to many other, sometimes-surprising, areas of mathematics.
That seems ahistoric as per the meaning of the word. "Sine" is derived from the Sanskrit word for 'chord' as per its initial usage in determining the length of straight line segments between two arbitrary points on a circle.
A turn doesn't have to represent an angle. It can also be a "cycle" in the oscillation.
I studied engineering, and pretty much everywhere where we needed the radian form with Pi, the mental reasoning was "one cycle, or repetition, or loop or whatever is Pi". Never did Pi have any deeper meaning that helped understand the logic of the problem.
True, however that doesn't mean x represents an angle. It means you can put a geometric interpretation on an abstract formula. e^ix = cos x + i sin x regardless of what x represents - if you were doing electrical engineering it might be time, for example.
Interpretation is often strongly motivated by what "comes first" in the order in which you learn things, so the whole debate is a bit subjective anyway. Another common example: at school we learn the integral is the area under the curve. At university we learn the area under the curve is the integral. The integral is the "real" thing and the area is just a convenient geometric interpretation, which actually makes no sense for many (most?) integrals. At school we learn it "backwards" purely because it's easier that way, and visual aids are helpful. I think something similar applies to sin and cos.
> the area is just a convenient geometric interpretation, which actually makes no sense for many (most?) integrals.
Could you elaborate on this? Because we defined the (Lebesgue) integral in my analysis 3 course exactly in this way: First define what measurable sets are, and what their volume is. Then the integral of a non-negative function is the volume under its graph, if that is measurable.
Because - numeric precision arguments aside, there's an excellent comment explaining that problem in this thread - it's the only unit of measurement that makes sense for angles.
It provides an easy way to connect the complex exponential with trigonometric functions (and everything you get from that, i.e. Taylor series, nice behavior in diffeqs). You can do the same in degrees as well, but you end up with weird conversion factors with pi in the denominator, a strong indication you should have multiplied by pi to begin with.
It's effectively the same, but cos, sin and the others are defined with radians as arguments. Otherwise, the Taylor series expansions change, and so does Euler's formula.
>which means the calling code is multiplying by a factor of pi just so the library code can immediately divide it back out again.
Mathematical philosophy aside, that's a pretty compelling argument from a practical perspective. You're doing two unnecessary relatively expensive (multiply/divide) operations in a process that's supposed to be fast.
>But math never decreed that sine and cosine have to take radian arguments!
Ummm, actually it did. The Taylor-series of sine and cosine is the simplest when they work with radians. Euler's formula (e^ix = cosx + isinx) is the simplest when working with radians.
Of course you can work in other units, but you'll need to insert the appropriate scaling factors all over the place.
"Turns" don't generalize to higher dimensions either. With radians you can calculate arc length on a circle by multiplying with the radius. This extends naturally to higher dimensions: a solid angle measured in steradians lets you calculate surface area on a sphere by multiplying with the radius. How do you do the same with "turns" on a sphere? You can't in any meaningful way.
>> Ummm, actually it did. The Taylor-series of sine and cosine is the simplest when they work with radians. Euler's formula (e^ix = cosx + isinx) is the simplest when working with radians.
That's nice, but as the article points out most implementations of trig functions on computers don't use things like Taylor series.
Another terrific use of turns is in calculating angle differences, where you take a difference and just use the fractional part of the result. No bother with wrap-around at some arbitrary 2*pi value. Since it wraps at integer values we simply discard the integer part. This can even come for free when using fixed-point math.
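A tiny floating-point sketch of that trick (names made up): wrapping to the shortest signed difference is just "subtract the nearest integer".

    #include <cmath>
    #include <cstdio>

    // Signed shortest difference between two headings given in turns,
    // result in [-0.5, 0.5]: whole turns simply fall away.
    double angle_diff_turns(double a, double b) {
        double d = a - b;
        return d - std::round(d);
    }

    int main() {
        // From heading 0.1 to heading 0.9: the short way round is -0.2 turns,
        // not +0.8, and no 2*pi constant appears anywhere.
        std::printf("%f\n", angle_diff_turns(0.9, 0.1));
    }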
That's an obfuscation from the blog post. If you read further down in the code that is mentioned, the actual computation of sin is done by a polynomial expansion in x (radians), not y (turns). The purpose of y is mainly to handle the case where x is more than pi, and if so, to find the corresponding angle in [0, pi/4).
You can if you want make a polynomial in turns. The CPU isn’t going to care one way or the other.
Implementations which are accurate in terms of turns even for values close to half a turn can be useful for avoiding numerical issues that sometimes pop up because π is not exactly expressible as a floating point number. These functions usually have names like sinpi, cospi, etc. It would be nice if they were provided more often in standard libraries.
Based on the article, CUDA has a sinpi instruction (or whatever they call them in CUDA-land). Does anyone know -- is sinpi commonly provided in the CPU assembly extension ecosystem (avx & friends)? Light googling showed me some APIs that had implementations, but I didn't dig in enough to see if they are directly implemented in assembly (this seems like the sort of info a wizard here would know about, and probably whether these types of instructions tend to be well-implemented...).
Right, radians are the "natural" units of angle, others generally just make a circle into some integral number of units for convenience, but you always have to go back to radians to actually do calculation.
In the next installment, maybe he'll propose that turns can be limiting because dividing up a circle requires the use of fractions, and suggest that instead of 1 turn per circle, we make a number that's easily divisible into many integer factors. Maybe 216, or I don't know, 360?
The point of the original post is that depending on your field (e.g. game engine), maybe all the calculations you need can be done easier in the unit of convenience (e.g. sine of a turn is easier to calculate than sine of radian), so if that is the case you should stick with the unit of convenience thru all the layers and forget about converting to radians in your code.
And using a fraction of a turn is also a very good option, much better than radians in many cases, especially if you choose a power-of-two fraction (e.g. 1/256), in which case all the modular arithmetic needed for angles comes for free as simple integer overflow, and lookup tables become a simple array access.
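Something like the following is presumably what is meant (a sketch, with an illustrative 256-entry resolution): store angles as unsigned 1/256ths of a turn, and both the wraparound and the table lookup fall out of ordinary integer behaviour.

    #include <cstdint>
    #include <cstdio>
    #include <cmath>

    constexpr double kTwoPi = 6.283185307179586;

    static float sin_table[256];

    void init_sin_table() {
        for (int i = 0; i < 256; ++i)
            sin_table[i] = static_cast<float>(std::sin(kTwoPi * i / 256.0));
    }

    int main() {
        init_sin_table();
        uint8_t a = 200, b = 100;   // angles in 1/256ths of a turn
        uint8_t c = a + b;          // 300 wraps to 44: modular angle arithmetic for free
        std::printf("angle index %d, sin = %f\n", c, sin_table[c]);
    }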
If you are making something like a game engine using computer hardware from the past 30 years, you should avoid angle measures to the extent possible.
It is much computationally cheaper and more robust (and easier to reason about) to use vector algebra throughout. Then you have no transcendental functions, just basic arithmetic and the occasional square root. You need the dot product and the wedge product (or combined, the geometric product), and derived concepts like vector projection and rejection.
If you need to store a rotation, you can use a unit-magnitude complex number z = x + iy, where x = cos θ, y = sin θ, without ever needing to calculate the quantity θ directly. If you need to compress it down to one parameter for whatever reason, use the stereographic projection s = y / (1 + x) = (1 – x) / y. Reverse that by x = (1 – s²) / (1 + s²), y = 2s / (1 + s²). [If starting from angle measure for whatever reason s = tan ½θ, sometimes called the "half-tangent".]
The angle measure is the logarithm of the rotation, θi = log z. In some contexts logarithms can be very convenient, but it’s not the simplest or most fundamental representation.
With units of "radians" angle measure is the logarithm of base exp(i) [related to the natural logarithm], and with units of "turns" it is the logarithm of base 1 (sort of).
I mean that’s all well and good until you have an object in your game and you’re like “I’d like this object to be leaning at 45 degrees, oops I mean 0.70710678118 + i*0.70710678118”
At a high level you should be expressing something like one of "turn this by the angle between vector (1,0) and vector (1, 1)"; "point this in the direction of vector (1, 1)"; or "turn this by the square root of the rotation i" (i = a quarter turn).
If you use angle measures (of whatever units), when you say "rotate by an eighth of a turn" you are instead going to end up with something internally like: multiply some vector by the matrix
[cos ¼π, –sin ¼π ; sin ¼π, cos ¼π]
which is ultimately the same arithmetic, except you had to compute more intermediate transcendental functions to get there.
If you just have to do this a few times, angle measures (in degrees or whatever) are a convenient human interface because most people are very familiar with it from high school. You can have your code ingest the occasional angle measure and turn it into a vector-relevant internal representation immediately.
P.S. If you write 1/√2 in your code compilers are smart enough to turn that into a floating point number at compile time. :-)
In most game engines, constructing the rotation matrix is trivial. Something like Quaternion.Euler(0, 45, 0). The ultimate position / rotation of any given object in a game is usually a compound transform computed via matrix multiplication anyway, e.g. a model view projection matrix. I'm not sure it's the best way, but that's just how most game engines work.
If your game engine is using quaternions as a canonical internal representation for rotations, it is already following my advice from above.
(Game engine developers are smart people and have lots of practical experience with the benefits of avoiding angle measures, as do developers of computer vision, computer graphics, robotics, physical simulations, aerospace flight control, GIS, etc. etc. tools.)
Yea, I see. I originally interpreted your comment as being about game developers, but you were actually talking about game engine developers. In which case, we agree :)
CORDIC is not based on radians or turns; it is based on decomposing the angle into a sum of:
phi_n = atan(2^-n)
and then using an abbreviated sum formula where computing cos(theta + phi_n) depends only on sums and bitshifts.
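For readers who haven't seen it, a rough sketch of that decomposition (written in doubles for readability; a real CORDIC works in fixed point with actual bit shifts and a precomputed atan(2^-n) table):

    #include <cmath>
    #include <cstdio>

    // Rotation-mode CORDIC: rotate (K, 0) by +/- atan(2^-n) steps until the
    // residual angle is used up; (x, y) converges to (cos(theta), sin(theta)).
    void cordic_sincos(double theta, int iters, double* s, double* c) {
        double k = 1.0;
        for (int n = 0; n < iters; ++n)
            k /= std::sqrt(1.0 + std::ldexp(1.0, -2 * n));   // gain correction

        double x = k, y = 0.0;
        for (int n = 0; n < iters; ++n) {
            double p = std::ldexp(1.0, -n);                  // 2^-n (a shift in fixed point)
            double sigma = (theta >= 0.0) ? 1.0 : -1.0;
            double xn = x - sigma * y * p;
            double yn = y + sigma * x * p;
            theta -= sigma * std::atan(p);                   // table lookup in fixed point
            x = xn;
            y = yn;
        }
        *c = x;
        *s = y;
    }

    int main() {
        double s, c;
        cordic_sincos(1.0, 40, &s, &c);                      // valid for |theta| < ~1.74 rad
        std::printf("%.9f %.9f\n", s, c);                    // ~0.841470985 0.540302306
    }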
The small-angle approximations sin(x) ≈ x and cos(x) ≈ 1-x^2/2 are the real killer feature of radians, though, because when you can deal with the loss of accuracy you get to avoid using any loops whatsoever. They're also fundamental to understanding simple physical systems like a pendulum.
"One of the best ways to calculate sin and cos is CORDIC" This is extremely false. Cordic is 1 bit per iteration, while polynomials (Chebyshev or minmax) converge exponentially faster.
You are of course right about the speed, when a fast hardware multiplier is available for the computation of the polynomials.
On the other hand with CORDIC it is extremely easy to reach any desired precision, and in cheap hardware it does not require multipliers.
So CORDIC may be considered as "one of the best ways" depending on how "best" is defined.
Even when developing a polynomial approximation for the fast evaluation of a trigonometric function, it may be useful to use an alternative evaluation method like CORDIC, in order to check that the accuracy of the polynomial approximation is indeed that expected, because for CORDIC it is easier to be certain that it computes what it is intended to compute.
> you always have to go back to radians to actually do calculation.
The article actually argues the opposite: that the common implementations of sine and cosine start by converting their radian based arguments to turns or halfturns by dividing by pi.
That's just because the power series would take ages to converge for large arguments, so you take advantage of periodicity. But the implementation in a floating point world is a different thing than the definition in an infinite series world.
For example, e^x can be implemented by handling the integer and fractional parts separately, for similar reasons. But no one really cares about the functions e^floor(y) and e^(y-floor(y)). They are only useful as part of an implementation trick.
That's really not the only thing going on. Yes, it allows you to take advantage of periodicity. But many common function approximations work best (i.e. not requiring any transform of the argument) over the interval [-1, 1].
This is not about the common implementation of sine and cosine, it's about the common argument: in his use case, he tends to want to calculate turns and half turns most often. He might be able to refactor his functions to optimize for this, but it's not exactly something I would expect to be a good idea for library code; people do want to calculate other angles.
sin(x) ~~ x only in radians, so honestly that's reason enough.
Once in a while we get programmers wanting to disrupt mathematical notation for whatever reason... Worst I've seen so far was one arguing that equations should be written with long variable names (like in programming) instead of single letters and Greek letters. Using turns because it's a little easier in specific programming cases is just as short-sighted, I'd say, it doesn't "scale out" to the myriad of other applications of angles.
Those perfect radians use 2*pi, aka tau, though, which is a different math notation issue, where mathematicians have (imho) chosen the wrong option, and a case for disrupting that part of math notation to make radians easier to teach: 1/4 of a circle could be tau/4 radians, 1/8 could be tau/8, etc., instead of confusing halved factors when radians are expressed as amounts of pi.
Regarding long variable names: I'd rather have long variable names, than a mathematician using some greek symbol in formulas without telling what the meaning of it is (and it could be different depending on their background). But I have no issues with the single letter variables if they're specified properly.
Just out of curiosity, where did tau come from? I never heard of it used for 2pi, and frankly, it seems like a poor choice because in engineering it is one of the most common symbols used (time constant tau).
It apparently was chosen because it's the starting sound of "turn": Hartl chose tau to represent 2pi because it nicely ties in with the Greek word “tornos,” meaning “turn,” and “looks like a pi with one leg instead of two.”
That is a completely different matter. The definition of sin/cos in radians doesn't change if you prefer to use 2 * pi or tau - it's still x - x^3/3! + [...]. sin(pi/2) = sin (tau/4) = 1.
> Worst I've seen so far was one arguing that equations should be written with long variable names (like in programming) instead of single letters and Greek letters.
That could never work. If anything the words comprising mathematical texts should be defined once and thereafter truncated to their first letter to reduce cognitive burden and facilitate greater comprehension.
c = "could";
d = "don't";
f = "for";
g1 = "go";
g2 = "great";
i = "it";
i2 = "i";
m = "me";
s = "see";
w = "works";
w2 = "what";
w3 = "wrong"
That is the way to do the math, but not the way to write the code.
That said, I would like for my compiler to combine any multiplications involved down to one factor for input to the fastest sin/cos operations the machine has. And, to treat resulting multipliers close enough to 1, 1/2, and 1/4 as exact, and then skip the multiplication entirely.
But the second part is a hard thing to ask of a compiler.
Good news, optimization is engineering, not CS. CS is all about what a program would eventually do, if you were ever to run it. Once you run it, you have moved to the domain of technicians. Engineering is about making it run better.
What really bothers me is that mathematicians seemingly never distinguish between doing and presenting mathematics.
You can do your own scribbles with single letters, so do I, it works fine.
But when you present maths in a scientific article, maths book, Wikipedia article or similar, your convenience as a writer should be secondary. Your task is to present information to someone who does not already know the subject. Presenting an equation as six different Greek letters mashed together means that the equation itself convey almost no information. You need a wall of text to make sense of it anyway.
Depends how you define “soundness”, but the idea of extending a function outside its domain of definition with an arbitrary value that doesn't make it continuous is arguably a curious one.
From an algebra perspective (the one given in the blog post) it may be fine, but from a calculus perspective it's really not.
The lack of continuity really hurts when you add floating point shenanigans into the mix; just a fun example:
When you have 1/0 = 0 but 1/(0.3 - 0.2 - 0.1) = 36028797018963970. Oopsie, that must be the biggest floating point approximation ever made.
But for 1/x you have that issue anyway. If x is on the negative side of the asymptote but a numerical error yields a positive x, you'll still end up with a massive difference.
That's only for small angles though (stems from Taylor's expansion). With other units, you have a conversion factor, but it remains true enough at small angles.
In my opinion, it puts too much emphasis on the variables compared to the operators and numbers and makes the expression as a whole harder to parse at a glance as I have to actually read the names.
Yeah, when doing it with hand, I surely would shorten it. But when doing math on the computer with help of autocomplete, why not?
But then, I don't really know whether that exists in pure math form; I only do math in the context of programming.
And for pedagogic purposes, I would like more meaningful names at times.
It's definitely a lot harder to read and make sense of an equation that is sprawled out. In some domains, I would contend that using greek letters in code would increase readability, especially for those familiar with the underlying formula, and especially if the code is not edited frequently (e.g. implementing a scientific formula which won't change).
A good compromise might be to put the equation in the comments in symbol-heavy form, and use the spelled out names in code.
If you go watch math lectures, there's a bunch of "x means Puppy Constant" or "let's substitute in k for the Real component", or "let's signify <CONCEPT> by collecting these terms into a variable". My argument wouldn't be to replace ALL the variables with meaningful names, just the ones with a lot of meaning that a reader might not understand. It'd also be great if constants, variables, and functions all got naming conventions. Lowercase letters are variables, all caps for constants, etc. It saves a little bit on writing to shorten the variable names, but if the goal of math is to share and spread knowledge within the community or without, better naming and less memorization would both help. You can also rename things for the working out and use friendlier names for the final equations; just tell people how you're renaming them and everyone will follow along, and the programmers will stop trying to sell you on readable code.
Most importantly the flat dismissal and horror that many express when someone brings up adjusting the symbolic traditions of Maths should be investigated. Engage with why you feel so strongly that anything other than rigid adherence to tradition is sacrilege. Based on what I've heard, in order to be a great Mathematician, you need to hold onto tradition lightly and think outside the box. Rigid adherence to tradition doesn't sound like that to me.
> Engage with why you feel so strongly that anything other than rigid adherence to tradition is sacrilege
Who’s saying that? Inventing good notation is a big part of mathematics (and that also frequently gets criticized on HN because it may introduce ambiguities)
Also, there’s nothing wrong with texts that target an audience with a certain level of understanding.
It’s not as if adding, for example, “By Hermitian matrix we mean a complex square matrix that is equal to its own conjugate transpose” will make a paper much easier to understand, just as adding a comment “this is where the program starts running” doesn’t help much in understanding your average C program, or adding a definition of “monarchy” to a history paper.
In the end, any scientific paper has to be read critically, and that means making a serious effort in understanding it. A history paper, for example, may claim that Foo wrote “bar” but implied “baz”. A critical reader will have read thousands of pages, and (especially if they disagree with the claim) then think about that for a while, and may even walk to their bookshelf or the library to consult other sources before continuing reading.
Got a reference for what that looks like with current notation? The internet is basically just showing the starting equation and ending equation and skipping all the intermediaries.
You can use whatever notation you want for your own work, but documenting with, at least, formal variable definitions would be a significant boon for math literacy.
Nothing, but their use in mathematical equations will certainly conflict with the implicit multiplication in equations (i.e. `abc` in a formula means `a * b * c`, not a variable abc).
> > But math never decreed that sine and cosine have to take radian arguments!
> Ummm, actually it did.
No, it didn't. Some specific uses looking better with radians does not mean you have to use radians always.
When I first learned sine and cosine, we used degrees, and that worked fine. Later we switched to radians, but there's no reason why you shouldn't use turns, and the article gives a very good argument why in some cases you definitely should.
>Some specific uses looking better with radians does not mean you have to use radians always.
It's not just some specific use cases, it's the majority of cases if you look across all of math and science. Switching to turns would be stupid, especially once you start doing differentiation and integration. The fact that we use radians almost across the board isn't some accident.
The simplicity of the Taylor series of sine and cosine is irrelevant, there are no important applications for those series.
There is only one consequence of those series that matters in practice, which is that when the angles are expressed in radians, for very small angles the angle, its sine and its tangent are approximately equal.
While this relationship between small angles, sines and tangents looks like an argument pro radians, in practice it isn't. There are no precise methods for measuring an angle in radians. All angle measurements are done using a unit that is an integer divisor of a right angle, and then the angles in radians are computed using a multiplication with a number proportional to the reciprocal of Pi.
So the rule about the approximate equality of angles, sines and tangents is at best a mnemonic rule, because to apply the rule one must convert the measured angles into radians, so no arithmetic operations can be saved.
"Turns" generalize perfectly to higher dimensions.
To the 3 important units for the plane angle, i.e. right angle, cycle and radian, there are 3 corresponding units for the solid angle, i.e. the right trihedron (i.e. an octant of a sphere), the sphere and the steradian.
The ratio between the right trihedron and the steradian is the same as between the right angle and the radian, i.e. (Pi / 2).
The ratio between the sphere and the right trihedron is 2^3, while that between cycle and right angle is 2^2. In N dimensions the ratio between the corresponding angle units becomes 2^N.
Moreover, while in 2 dimensions there are a few cases when the radian is useful, in 3 dimensions the steradian is really useless. Its use in photometry causes a lot of multiplications or divisions by Pi that have no useful effect.
There is only one significant advantage of the radian, which is the same as for using the Neper as a logarithmic unit, the derivative of the exponential with the logarithms measured in Nepers is the same function as the primitive, and that has as a consequence similarly simple relationships between the trigonometric functions with arguments measured in radians and their derivatives.
Everywhere else where the radian is convenient is a consequence of the invariance of the exponential function under derivation, when the Neper and radian units are used.
This invariance is very convenient in the symbolic manipulation of differential equations, but it does not translate into simpler computations when numeric methods are used.
So the use of the radian can simplify a lot many pen and paper symbolic transformations, but it is rarely, if ever, beneficial in numeric algorithms.
> The simplicity of the Taylor series of sine and cosine is irrelevant, there are no important applications for those series.
The addition theorems for trigonometric functions can easily be shown by the multiplication theorem for Taylor series (and adding two Taylor series). This proof would be more convoluted if the Taylor series were not so easy.
Also, because of the simplicity of their Taylor series, one immediately sees that sin and cos are solutions of the ODE y'' = -y.
Another application of the Taylor series is that by their mere existence, sin and cos (as real functions) have a holomorphic extension.
The proof of any property of the trigonometric functions is trivial when the sine and the cosine are defined as the odd and even parts of the exponential function of an imaginary argument, and the proof uses the properties of exponentiation.
Any proof that uses the expansion in the Taylor series is a serious overkill.
Moreover, those proofs become even a little simpler when the right angle is used as the angle unit, instead of the radian.
In this case, the sine and the cosine can be defined as the odd and even parts of the function i ^ x.
Only in school exercises can you solve a differential equation by expanding a sine function into a Taylor series.
In practical physics computations, the solution of differential equations requires numerical methods that do not use the Taylor series of specific functions, even if the theory used for developing the algorithms may use the Taylor series development of arbitrary functions.
For accurate prediction, the simple pendulum equation also requires in practice such numerical methods, which do not rely on the small-angle approximation that enables the use of the Taylor series of the trigonometric functions, for didactic purposes.
> Only in school exercises you can solve a differential equation by expanding a sine function into a Taylor series.
> In practical physics computations, the solution of differential equations requires numerical methods that do not use the Taylor series of specific functions, even if the theory used for developing the algorithms may use the Taylor series development of arbitrary functions.
I'm sorry, but you have no idea what you're talking about. Series expansion is one of the most widely used techniques in Physics. Obviously some equations require full-blown numerical methods to be solved, but one can do a whole lot with analytical techniques by doing series expansions and using perturbation theory.
Saying that this is only used "in school exercises" shows that you're completely out of touch with reality.
You have replied to something that I have not said.
I have said that the Taylor series of arbitrary functions have various uses, but there is no benefit in knowing which are the specific Taylor expansions of the trigonometric functions, with the exception of knowing that the first term of the sine and tangent expansions when the argument is in radians is just X.
Solving physics problems using the expansion of an unknown function in the Taylor series has nothing to do with knowing which is the Taylor series of the sine function.
> with the exception of knowing that the first term of the sine and tangent expansions when the argument is in radians is just X.
There are more terms in the expansion that you can use, that's the whole point of using an expansion...
> Solving physics problems using the expansion of an unknown function in the Taylor series has nothing to do with knowing which is the Taylor series of the sine function.
I hope you're aware that the sine function appears quite often in Physics problems.
When have you ever used the Taylor series of sine and cosine for anything (outside school) ?
When you approximate functions by polynomials, including the trigonometric functions, the Taylor series are never used, because they are inefficient (too much computation for a given error). Other kinds of polynomials are used for function approximations.
The Taylor series are a tool used in some symbolic computations, e.g. for symbolic derivation or symbolic integration, but even in that case it is extremely unlikely for the Taylor series of the trigonometric functions to be ever used. What may be used are the derivative formulas for trigonometric functions, in order to expand an input function into its Taylor series.
The Taylor series of arbitrary functions (more precisely, the first few terms) may be used in the conception of various numeric algorithms, but here there are also no opportunities to need the Taylor series of specific functions, like the trigonometric functions.
The Taylor series obviously have uses, but the specific Taylor series for the trigonometric functions do not have practical applications, even if they are interesting in mathematical theory.
> When you approximate functions by polynomials, including the trigonometric functions, the Taylor series are never used, because they are inefficient (too much computation for a given error). Other kinds of polynomials are used for function approximations.
Can you point me to some implementation of sin that’s not actually using Taylor expansion in some form? Because most that I am aware of do in fact use Taylor series (others are just table lookup). See glibc for example:
(The constants are easily checked to be -1/3!, 1/5! Etc)
This might have something to do with Taylor's theorem. You know, that the Taylor polynomial of order n is the only polynomial of order n that satisfies |f(x)-T(x)|/(x-a)^n -> 0 as x -> a. In other words, the Taylor polynomial of order n is the unique polynomial approximation to f around a to order n. This means you cannot get any better than Taylor close to the origin of the expansion. This causes implementers to focus on argument reduction instead of selecting polynomials.
If any of those libraries uses the Taylor expansion for approximation, that is a big mistake, because the approximation error becomes large at the upper end of the argument interval, even if it is small close to zero.
What is much more likely is that if you will carefully compare the polynomial coefficients with those of the Taylor series, you will see that the last decimals are different and the difference from the Taylor series increases towards the coefficients corresponding to higher degrees.
Towards zero, any approximation polynomial begins to resemble the Taylor series in the low-degree terms, because the high-degree terms become negligible and the limit of the Taylor series and of the approximation polynomial is the same.
So when looking at the polynomial coefficients, they should resemble those of the Taylor series in the low-degree coefficients, but an accurate coefficient computation should demonstrate that the polynomials are different.
The difference at the very edge of the interval occurs at the 14th digit of decimal expansion, and it's at the edge of accuracy of double, at 16th digit: after ...6547, the exact value starts with ...6547_5244, instead of 4617. I wouldn't exactly call it a big mistake, as the difference would not be relevant in almost all practical uses, but that would be a mistake nevertheless, and I'm sure someone would be bitten by this. Thanks, I learned something new today!
> When have you ever used the Taylor series of sine and cosine for anything (outside school) ?
I've used them a few times, mostly in the embedded space, and mostly in conjunction with lookup tables and/or Newton's method, but yes I've absolutely used them outside school (years ago, I forget the exact details).
- implementing my own trig functions for embedded applications where I wanted fine control over the computation-vs-precision tradeoff
- implementing my own functions for hypercomplex numbers (quaternions, duals, dual quaternions, and friends).
- automatic differentiation
Does the Taylor series form survive to the final application? Usually not, usually it gets optimized to something else, but "start with Taylor series and get back to basics to get a slow but accurate function" has gotten me out of several pickles. And the final form usually has some chunks of the Taylor series.
I agree that using the Taylor series can be easier, especially during development, mainly because convenient tools for generating approximation polynomials or other kinds of approximating functions are not widespread.
However, the performance when using Taylor series is guaranteed to be worse than when using optimal approximation polynomials, according to appropriate criteria.
Still I cannot see when you would want to use the Taylor series of the trigonometric functions, even if for less usual functions it could be handy.
There are plenty of open-source libraries with good approximations of the trigonometric functions, so there is no need to develop one's own.
In the case of a very weak embedded CPU there is the alternative to use CORDIC for the trigonometric functions, instead of polynomial approximations. CORDIC can be very accurate, even if on CPUs with fast multipliers it is slower than polynomial approximation.
> So the use of the radian can simplify a lot many pen and paper symbolic transformations, but it is rarely, if ever, beneficial in numeric algorithms.
If only computers could do a bit of symbolic algebraic manipulations before issuing the machine code.
Wait, isn't that what optimizing compilers can do? That requires optimization across library calls and thus a form of inlining, which doesn't seem far-fetched for a math library call. Or can some optimizations not be done because of floating-point error propagation (which could be relaxed)?
A simple example: suppose we want to compute cos(x)-1 near x=0, with high accuracy, in single-precision fp. How to do this? It's very easy: google "Taylor series for cosine", lop off the first term (1), and you're done.
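Spelled out (a minimal sketch of the trick being described, not anyone's library code): summing the series without the leading 1 avoids the cancellation you get from computing cos(x) first and then subtracting 1.

```python
import math

def cosm1(x: float) -> float:
    # cos(x) - 1 = -x^2/2! + x^4/4! - x^6/6! + x^8/8!  (Horner form),
    # plenty of terms for small |x|.
    x2 = x * x
    return -x2 / 2 * (1 - x2 / 12 * (1 - x2 / 30 * (1 - x2 / 56)))

x = 1e-4
print(cosm1(x))            # ~ -5.0e-09, computed to full precision
print(math.cos(x) - 1.0)   # same magnitude, but cancellation eats roughly half
                           # the significant digits (all of them in single precision)
```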
I already learnt in school to calculate trigonometry using radians or turns depending on the situation. It was part of the general math curriculum in Bavaria. As far as I am aware both are mathematically sound and there is no reason to religiously use one of them over the other. Let your use-case or input parameters decide. The examples given in the article definitely make no sense in radians.
"I already learnt in school to calculate trigonometry using radians or turns depending on the situation. It was part of the general math curriculum in Bavaria. "
Out of interest, when did you go to school in Bavaria and in which grade did you learn about turns? I was in school in Bavaria a long time ago and I don't remember learning about turns there. Could very well be that I forgot or our teacher forgot to teach it.
That should've been about 15 years ago. I don't remember the grade, but based on the subject probably 8th or 9th? I thought it was in the textbook but possibly our teacher just added it himself.
The shocking thing with some of these articles is that somehow the author asked “why do people use radians” and ended up with an answer of “it was an arbitrary decision and the world would be better off not using it”.
I feel a bit of humility would have helped the author and perhaps they would have considered the possibility that they didn’t think of the problem deep enough rather than hastily write a blog post about it.
It speaks to the hubris and the superficiality of thinking for some authors.
The shocking thing about some of these comments is somehow they didn't consider that the original author spoke in a specific context.
Casey didn't say the world would be better with turns instead of radians. He said that game engine code would be better with turns instead of radians. Be more charitable.
The writer doesn't seem to realise that the radian is not an arbitrary unit but a dimensionless one which is defined so that 1 rad is actually just 1.
Reading the submission and the comments here, I’m under the impression that trigonometry is not extensively taught in middle schools and high schools in the USA. While I’m slightly envious you might not have had to suffer through expanding powers of cosine and sine, that would explain the lack of familiarity with radians I see here. Am I wrong?
Yes. Trigonometry is extensively taught in the US. People forget this stuff if they don’t use it.
Ask some 30 year old chef in whatever country you fantasize teaches properly to compare and contrast turns vs radians and you’ll get similar responses.
I'm a 50 yo programmer. I have a CS degree. I don't even remember my college calculus much less my high school trig. I just haven't had cause to use it in my career, not as a sysadmin, not as a programmer. My son is taking calc 3 and I knew I happened to have my calc 3 notes from the mid-90s, so I pulled them out of the filing cabinet and my very carefully taken notes, my proofs, my hand drawn graphs, it was all gibberish to me. That was stuff I knew like the back of my hand when I graduated but it quickly faded away.
By far the most annoying myth I face when trying to discuss the pros and cons of various education techniques is the pervasive idea that everybody is a magical knowledge sponge and will go to their grave still remembering how to integrate by parts and every detail about some particular battle they covered in seventh grade, and therefore, if we slightly tweak a curriculum plan to drop something that was included on theirs we'll be stealing that knowledge from all the 70 year olds who will eventually have been on that plan.
Where this idea comes from I have no idea. Personally looking around in school itself it was plainly obvious this was all going in one ear and out the other for the majority of students even at the time. The better students retained it long enough to spew it out on the test but that was already above average performance. That doesn't mean there isn't still a certain amount of value in that in terms of what that knowledge may do to their brain during the brief period of time it is lodged in there. (I think there's a lot of value in just learning the "shape" of all this stuff, and perhaps having some index of what might be valuable to know.) But the idea that we can spend 15 minutes and a one-page homework assignment on something and expect that to last 60+ years is just nonsensical.
I mean, honestly, anyone over the age of 22 or so ought to be able to notice a distinctly sub-100% retention rate simply by looking inside themselves.
Yes, to a first approximation everyone with a normal education in the US has been present while some sort of trig was discussed. Not all of them, but still quite a lot of them, were present for the Taylor expansion discussion. The vast bulk of them have had it decay by 25, and there simply isn't anything to be done about that if you're talking about humans and not some homo educationous who mythically retains all knowledge they were exposed to for even 30 seconds, just as the mythical homo economicus perfectly rationally conducts all their economic business at all times. Perhaps they're actually the same species.
I remember being amused by this same observation when my own country decided to reduce mandatory education from k+12 to k+10 (cutting two years of high-school). They immediately began re-arranging the curriculum in high-school, for example to move organic chemistry from 11th grade to 10th grade, on the basis that it's important for students who only finish the mandatory 10th grade to know some organic chemistry as well, instead of the old curriculum which would have only taught them inorganic chemistry after 10th grade (this has the bonus of making the chemistry curriculum inorganic I -> organic I -> inorganic II -> organic II, for maximum confusion).
To me, even though I was barely out of high-school at the time, this was obviously absurd - expecting especially someone who wants to drop out of high-school early to retain any notion of organic chemistry taught in a school year, that they couldn't learn on the job if it was really required, seems so obviously nonsense that I couldn't help but laugh. Especially since the same thing was done to basically every other subject as well, with the same intentions.
One note: in my country, the curriculum is completely centralized; there is some small amount of choice, but it amounts to, at most, 1-2 classes per semester; everything else is fixed.
Actually, it's almost entirely the opposite—the idea that students are a "sponge" that can soak up knowledge perfectly is then taken directly to mean that some students are better at soaking up / retaining knowledge than others, and that the "smart" kids who do the best on the tests are the ones who are going to retain the knowledge the best. And then the ones that were the best knowledge-sponges will eventually go on to become the next generation of teachers, since they know the most information. Whereas for most kids it's completely the opposite—they memorize the information in their short-term memory without understanding the fundamentals, they do great on the tests, and then they forget all of it immediately. But they stand out from their peers as better students, because they're able to play the "game" of school better and optimize for being a knowledge-sponge that will absorb as much information as possible and forget it as quickly as possible.
>>> simply isn't anything to be done about that if you're talking about humans and not some homo educationous who mythically retains all knowledge they were exposed to for even 30 seconds, just as the mythical homo economicus perfectly rationally conducts all their economic business at all times. Perhaps they're actually the same species.
On a related note, it bothers me that there’s so much urgency to teach younger kids more and more advanced math. I use more and higher math on a day-to-day basis than practically anyone I know, but it’s very rarely even calculus, and even then it’s typically just discrete integrals or derivatives.
There’s just an absolute ton of math being taught that’s going completely to waste, and it’s at the expense of the humanities.
My biggest “Screw everything” moment about math was the first lecture of my numerical methods class in college when the professor said: “All that calculus you’ve been learning your whole lives? It’s useless. Carefully curated set of a few dozen problems that are doable by hand. Here’s how it’s really done for anything remotely practical”
And then we learned a bunch of algorithms that spit out approximate answers to almost anything. And a bunch of ways to verify that the algorithm doesn’t have a bug and spat out an approximately correct answer. It was amazing.
But the most long-term useful math class (beyond arithmetic and percentages) has been the semester on probabilities and the semester on stats. I don’t remember the formulae anymore, but it gave me a great “feel” for thinking about the real world. We should be teaching that earlier.
When I took 400 level Real Analysis: “All that calculus you’ve been learning your whole life? It’s a lie. Those epsilon delta proofs? They were fake - none of you were smart enough to challenge us on ‘limits’. And now we’re gonna do it all again only this time it’s really gonna be rigorous.”
Is there any somewhat simple explanation of what are the limitations of the epsilon-delta definition of limits that make it non-rigorous? I've been trying to find some information about your comment, but have so far come up empty.
I'm shaky on this - it's been thirty years - but I believe the Calc I epsilon delta proofs relied on the notion of open and closed intervals on the real line, which we all intuitively understood.
The upper level Real Analysis made us bring some rigor as to what an interval on the real line actually meant going from raw points and sets to topological spaces to metric spaces, then compactness, continuity, etc. all with fun and crazy counterexamples.
I think much of math 'education' is constructed as a filter to identify a small handful of math prodigies. The general population suffering anxiety and youth lost in the filter is seen as an acceptable sacrifice for the greater good of finding the math prodigies so those can be given a real math education.
Yes, this is a very good point. In my experience from, uh, several decades ago, it also felt like a lot of math educators watched (and showed in class...) Stand and Deliver way too many times and the only message they took away was "we should teach everyone calculus!"
I doubt the students would actually learn humanities in the extra time allotted if it's not used for math. I remember a distinct refusal to internalize, especially in my male peers, during "English" classes.
Forget humanities. The hours a week after school that highschool students spend on calculus homework would probably be better spent socializing with their friends. They'll never be young again, wasting the time of a teenager with unproductive busywork is a horrible thing to do.
I’m the opposite: I’m 15 years into my career of applied research, which for me is like an extension of university. I tend to lean on Mathematica to do my calculus though. I think the high school curriculum was optimized to expose a lot of people to things they won’t need, on the off chance that a few will end up as researchers of some sort. It would be more efficient to identify such people earlier and split them off. I think historically that was the idea, but there has been an egalitarian push to broaden the pool.
I think the point of high school is to make kids' brains do work, and what you are learning is secondary.
People love to hate on their school curriculum and all the useless knowledge they had to acquire but I'm positive it makes you a smarter person overall, and the body of high school knowledge makes learning more specialized knowledge easier (even if that's baking bread or whatever)
(People also love to talk about how little they remember from school, yes the brain is a muscle and you stopped working out, congratulations.)
USA here, same acronym. I still start off solving by writing it out and drawing slashes through O/H A/H O/A for reference.
I came to use trig functions quite frequently while playing video games, and that was a big surprise to me. Not to assume you've played it, but I recently discovered that Stormworks is a programmer's game - you can write microcontroller code in Lua for your vehicle designs. And, wow, does it ever use my trig knowledge everywhere.
I realized the transponder beeps can be triangulated: a tick is 1/60th of a second, which gives a distance estimate resolution of up to 5-10 km. And that's when cos and sin came back to be useful, because you can intersect circles and figure out more precisely where to do a sea rescue. So video games, trig. Who would've thought?
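A hedged sketch of that circle-intersection idea (not the actual Stormworks code; the helper name and the flat 2D setup are my own simplifications): given two beacons and a range estimate to each, the candidate positions are where the two circles cross.

```python
import math

def circle_intersections(x0, y0, r0, x1, y1, r1):
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []                               # no intersection (or same center)
    a = (r0 * r0 - r1 * r1 + d * d) / (2 * d)   # distance from center 0 to the chord
    h = math.sqrt(max(r0 * r0 - a * a, 0.0))    # half the chord length
    mx, my = x0 + a * (x1 - x0) / d, y0 + a * (y1 - y0) / d
    ox, oy = h * (y1 - y0) / d, h * (x1 - x0) / d
    return [(mx + ox, my - oy), (mx - ox, my + oy)]

print(circle_intersections(0, 0, 5, 8, 0, 5))   # [(4.0, -3.0), (4.0, 3.0)]
```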
I’d say I use it for something practical/random like that a few times a year?
Another example was placing some ceiling speakers whose tweeters had a 15° angle so that they were pointed directly at a seating position below. How far in front of the seating position, measured from directly overhead, did I need to place them?
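The arithmetic is just one tangent. Taking a ceiling-to-ear height of 2 m purely as an illustrative assumption (the comment doesn't give the actual figure): offset = h · tan 15° ≈ 2 m × 0.268 ≈ 0.54 m in front of the listener.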
I would guess in any sort of construction you’re using it fairly often.
Yeah, I know it's not the same type of math, but it's one of the few things that I still use today. To be honest, I can't think of one time in my professional career that I have needed to calculate the area under a curve to solve a life problem. Geometry has been the most used branch of math past basic arithmetic, oh, and algebra. It amazes me the number of people that don't realize how many times in a day they have solved for X.
It was by no means uncommon when I was taught in the US but I somehow missed it, instead just internalizing the various relationships directly, and was briefly confused when classmates started talking about SOHCAHTOA working together in college math courses.
What I remember from trig is to draw a unit circle. Most of the rest falls out of that.
I’m handy outside of work and use sohcahtoa often enough to remember it. Triangles are everywhere and sometimes you need to compute angles and lengths of sides.
Statistics is also useful and applicable to everyday life, but I didn’t learn that till college as best I can recall.
I don’t regret having spent time learning calc, or physics or chemistry or biology for that matter. If you asked me to come up with a curriculum I’d have a really hard time prioritizing. Maybe the one thing I’d like to see kids learn better is how to be self-directed learners. I’m still fairly surprised at the number of colleagues I have who seem unable to problem solve and figure something the fuck out. Even knowing when and how to ask for help.
I'm a 50+ year old American, of British descent...
I never managed to remember the 'American' mnemonic, but my dad taught me one they used to use in England around WWII:
Percy has a bald head, poor boy
There's lots of stuff I knew well and then forgot, but can re-learn quickly. For example, nearly all of calculus (useful when dealing with machine learning). Other bits I've retained and never forgotten, such as everything I've seen involving matrices. There are even things which I had conveniently completely "forgotten" but later emerged as suppressed latent memories- for example, set theory. I was so unhappy with the lead-up to Russell's paradox that I actively suppressed thinking about sets, groups, rings, and fields for several decades.
There are even other bits that I was shown, never incorporated into my brain at all, but later recognized as truly important (Taylor series expansions, the central limit theorem, the prime number theorem, etc).
Informally, big-O and limits have a similar smell to them, calc might have helped get some wheels turning in your head for that.
I do recall taking a "probability in CS" as an electrical engineering student -- it was pretty mind-blowing to me the extent to which the CS students did not like to talk about any continuous math. It makes sense, though, these are different specialties after all.
Honestly the typical developer needs a solid understanding of algebra, but not much beyond that. Though any time I get into game dev stuff I start ripping my hair out over quaternions.
I'd argue it's not so much taught in the US as it is tested. The common core standards say [0] that students should:
> Understand radian measure of an angle as the length of the arc on the unit circle subtended by the angle.
> Explain how the unit circle in the coordinate plane enables the extension of trigonometric functions to all real numbers, interpreted as radian measures of angles traversed counterclockwise around the unit circle.
and so on. However, in practice, this means that students need to be able to answer "C" when presented with the question:
> One radian is:
> A) Another word for degree.
> B) Half the diameter.
> C) The angle subtended on a unit circle by an arc of length 1.
> D) Equal to the square root of 2.
A surprising number of students can get through without ever really comprehending what a radian is. They might just choose the longest answer (which works way too often), identify trick answers and obviously wrong answers, and eventually guess the teacher's password from a lineup by association of the word salad of "radians" and "subtended."
They might not even have a clue what the word "subtended" means, but they know it's got something to do with "radian" and that's enough. It is more important for the school that the students answer (C) than that they understand what a radian is.
No, you’re missing the point. I went to school in the 80s and learned this stuff without multiple choice and fully understood it all of the way through undergrad where I took up through calc 3 and differential equations. Then I spent nearly 30 years as a SWE not using it and forgot nearly all of the details within maybe 15 years.
This happens with very basic things like human languages. Bilingual people can forget an entire secondary language if they don’t use it for a decade+.
> Derivative of sin(x) is cos(x). Many people probably think this works for degrees, but it's actually some abomination like pi cos(pi x/180)/180.
That's what it would be if you are using sin in degrees and cos in radians. But if you are using degrees for both then the derivative of sin(x) is pi/180 cos(x).
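Spelled out (the $\sin_d$/$\cos_d$ notation for the degree-argument versions is mine):

$$\sin_d(x) = \sin\!\left(\tfrac{\pi x}{180}\right) \;\Rightarrow\; \frac{d}{dx}\sin_d(x) = \frac{\pi}{180}\cos\!\left(\tfrac{\pi x}{180}\right) = \frac{\pi}{180}\cos_d(x),$$

so the "abomination" and the tidy $\frac{\pi}{180}\cos(x)$ are the same expression, read with cos in radians or in degrees respectively.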
Like most here, I've learned and forgotton lots of trig and calculus.
However, I still remember that "eureka!" moment of realizing that radians were special: the small angle approximation sin(x) ≈ x, and many related math rules, work only when x is expressed in radians. I guess that's a credit to my math teacher, who basically led the class in deriving mathematical formulas rather than just presenting them to us.
I think the article is still valid and interesting, as "turns" in some use cases might improve performance and accuracy. But radians aren't at all "arbitrary" - if we ever encounter technologically advanced aliens, they certainly won't use degrees, but they will understand radians.
I would only really expect programmers who work with angles regularly (those working in 3D) to remember it. Even then, you’re likely just smashing quaternions together anyway.
> The writer doesn't seem to realise that the radian is not an arbitrary unit but a dimensionless one which is defined so that 1 rad is actually just 1.
First, I would be cautious about suspecting someone of Casey Muratori's calibre of not having considered something just because he didn't directly address it.
Second, the choice of unit is kind of arbitrary, even if the unit itself is not. Radians are nice because the length of a 1 radian arc is the same as the length of the radius. But turns are also nice because angles expressed in turns are congruent modulo 1 instead of modulo 2π.
Third, he talks in the context of video games. Such games are made of code that has to be read by humans and executed by the CPU. And that's the main point of his article: in this context, expressing things in terms of (half) turns reduces the amount of code you have to write and read, reduces the number of multiplications and divisions the CPU has to make, and makes some common operations exact where they were previously approximated.
Do we even care at this point whether the definition of radians is arbitrary or not? I love the elegance of radians, but for game engine code I'm willing to accept they're just the wrong unit for the job.
Furthermore, the whole question of whether to treat radians as dimensionless is really about angles, not about radians specifically. Degrees are also considered dimensionless. So a turn could be treated as dimensionless too, with a conversion constant to radians and degrees, just like the one between degrees and radians.
Of course, the declared dimensionlessness of angles like radians isn’t something generally discussed in pre-college trig courses, that’s a subtle subject that matters more in physics. In my high school trig, we all understood radians to be a unit of angle and never pondered whether angles had dimension.
Also subtle point, but dimensionless doesn’t mean unitless. It’s another separate convention to drop the units when working with radians.
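As a concrete illustration of that "exact where it was previously approximated" point, here is a hedged sketch of what a turns-based API could look like (the names are mine, not Casey's and not Godot's): the wrap-around is a floor/subtract on the turn count, exact in floating point, instead of a remainder after dividing by 2π, and the single conversion to radians happens at the very end.

```python
import math

def sin_turns(t: float) -> float:
    t -= math.floor(t)               # wrap to [0, 1): turns are congruent modulo 1
    return math.sin(2.0 * math.pi * t)

def cos_turns(t: float) -> float:
    t -= math.floor(t)
    return math.cos(2.0 * math.pi * t)

print(sin_turns(0.25), cos_turns(0.5))   # 1.0, -1.0 (up to rounding)
```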
> "I’m under the impression that trigonometry is not extensively taught in middle schools and high schools in the USA. While I’m slightly envious you might not have to suffer developing powers of cosine and sine but that would explain the lack of familiarity with radian I see here. Am I wrong?"
It varies by school, but overall I think this prediction is incorrect. Trigonometry was an important subject in high school — for all of the math, physics, and possibly chemistry courses — and then if you take calculus in university, it's very, very important to learn trigonometry well (or you'll really struggle as a student).
So, even on the off-chance that trigonometry is not taught in high school (which I predict is rare), a first-year student taking calculus in university must learn it on their own time. Good calculus textbooks (e.g. Thomas Calculus) even account for this, having fairly comprehensive textbook sections on what you need to know about trigonometry to succeed in the calculus course.
Most students who took math up to pre-calculus or calculus (or physics and possibly chemistry) should therefore have had a good exposure to the definition of the radian.
I was a bad student through 8th grade, but managed to get selected for a STEM magnet school. I was supposed to enter 9th grade with Geometry, then Algebra II, Trig, and Calc over the 4 years. But when they discovered I'd never passed algebra, they put me in Algebra, which meant I would have finished with Trig. Due to a crazy 3.5 years, I never got a high school math education. Calculus makes my eyes glaze over, and all I know about triangles is SOHCAHTOA.
Every couple of years I try to get some higher math education, but nothing makes sense. It's one of the reasons I [think] I suck at programming - I should note that another reason is I first learned BASIC, then QBasic, then Fortran, and then C never made sense to me. At least I can putter around with Python and R.
However, I can do "basic" math things in my head that generally everyone else has to dig out a calculator app for: percentages, fractions, moving decimals, "making change". Since I suck at higher math, I'm only able to help my kids with basic math, and I try to ensure that they know it fairly well.
Yes, Rust does indeed, and long before that it was Pascal. I really love Pascal's syntax; it makes a lot of sense when you approach it with a math background.
- '=' is for equality only
- assignment is ':=' which is the next best symbol you can find in math for that purpose
- numeric data types are 'integer' and 'real', no single/double nonsense
- 'functions' are for returning values, 'procedures' for side effects
- Function and procedure definitions can be nested. I can't tell you what shock it was for me to discover that's not a thing in C.
- There is a native 'set' type
- It has product types (records) and sum types (variants).
- Range Types! Love'em! You need a number between 0 and 360? You can easily express that in Pascal's type system.
- Array indexing is your choice. Start at 0? Start at 1? Start at 100? It's up to you.
- To switch between call-by-value and call-by-reference, all you have to do is change your function/procedure signature. No changes at the call sites or inside the function/procedure body. Another bummer for me when I learned C.
Pascal wasn't perfect but I really wish modern languages had syntax based on Wirth's languages instead of being based on BCPL, B and C.
It's time-consuming, but there are great resources to learn high school math to a very high level (likely much more effectively in many cases, than actually taking a high school math course, due to thoughtful exercises and more control over the pace of learning).
I learned a lot from the Art of Problem Solving book series because they're highly focused on the reader solving problems to learn, versus giving explanations. Even if you don't finish all of it, you can strengthen any problem areas.
For a less comprehensive but still great introduction to precalculus (with, as I recall, a particularly good section on trigonometry), there is Simmons' Precalculus in a Nutshell. Then you can read a book like Thomas Calculus, which covers the trigonometry you need in its first review chapter.
I would even say that you would be better off working through the books above than if you had the high school classes; the best math students probably took the same approach too (working through books instead of focusing just on the class material). The main obstacle is time, because it's hard to find time when you have work and children to take care of.
Wow. You people went to crap schools. We got the derivation of modern trig functions with Maclaurin/Taylor series in 9th grade (though yeah... that was the "hard core math track"). And a year of proofs and derivations in 11th grade. Quaternions and their application in physics was 12th grade.
>> The writer doesn't seem to realise that the radian is not an arbitrary unit but a dimensionless one which is defined so that 1 rad is actually just 1.
It's been a while, but I used to have an argument that rad should be a unit. This even plays well in physics where it allows torque to not have the same units as a joule.
I don't see how radians come into the discussion of torque and energy, both of which are N*m in SI.
That discussion has to do with the failure of SI to notate the directions of vectors. When it's torque, the N and the m are at a right angle. When it's work, they are both in the same direction.
>> That discussion has to do with the failure of SI to notate the directions of vectors.
Well, if the radian is a unit then torque becomes N·m/rad and is no longer N·m like energy. Then when multiplied by an angle in radians you get energy. It was *something like that*.
Trig is generally called pre-calculus in US high schools. It is not a required course, but it is one of the courses everyone on the college track is expected to take.
Though most people haven't used any of that since college and so don't know it very well anymore. I smelled BS when I read the blog, but couldn't put my finger on why - the comment you replied to explained what I knew was the case but couldn't remember.
> I’m under the impression that trigonometry is not extensively taught in middle schools and high schools in the USA
Education quality and quantity vary greatly across the country. Many schools don't require trig at all or lump it in with other classes. I memorized SOH CAH TOA and brute forced a CLEP test (the state of MN is required to allow you to test out of classes and to write a test if one doesn't exist; usually AP and CLEP tests are accepted, and they don't count for/against your GPA).
It's also culturally accepted to "be bad at math," with undertones of defeat and that it's the world doing that to you and not something you can change (maybe the blame lies elsewhere like with how math is taught as a sequence of dependencies and bombing one course makes the rest substantially more difficult). I don't know how many people scrape by a D in trig and subsequently forget it all, but I'd wager it's a lot.
No, like many others you have been confused by the inability of those who vote on modifications to the International System of Units to decide what kind of units the units for plane angle and solid angle should be: base units or derived units.
A base measurement unit is a unit that is chosen arbitrarily.
A derived measurement unit is one that is determined from the base units by using some relationship between the physical quantity that is measured and the physical quantities for which base units have been chosen.
While there are constraints for the possible choices, the division of the units into base units and derived units is a matter of convention.
Whenever there are relationships between physical quantities where so-called universal constants appear, you can decide that the universal constant must be equal to one and that it shall be no longer written, in which case some base unit becomes a derived unit by using that relationship.
The reverse is also possible, by adding a constant to a relationship, you can then modify its value from 1 to an arbitrary value, which will cause a derived unit to become a base unit for which you can choose whatever unit you like, e.g. a foot or a gallon, adjusting correspondingly the constant from the relationship.
There are 3 mathematical quantities that appear frequently in physics: logarithms, plane angles and solid angles (corresponding to 1-dimensional, 2-dimensional and 3-dimensional space). All 3 enter into a large number of relationships between physical quantities, exactly like any physical quantity.
For each of these 3 quantities it is possible to choose a completely arbitrary measurement unit. Like for any other quantities, the value of a logarithm, plane angle or solid angle will be a multiple of the chosen base unit.
For logarithms, the 3 main choices for a measurement unit are the neper (corresponding to hyperbolic, a.k.a. natural, logarithms), the octave (corresponding to binary logarithms) and the decade (corresponding to decimal logarithms).
Like for any physical quantities, converting between logarithms expressed in different measurement units, e.g. between natural logarithms and binary logarithms is done by a multiplication or division with the ratio between their measurement units.
The same happens for the plane angle and the solid angle, for which arbitrary base units can be chosen.
What has confused the physicists is that while for physical quantities like the length, choosing a base unit was done by choosing a physical object, e.g. a platinum ruler, and declaring its length as the unit, for the 3 mathematical quantities the choice of a unit is made by a convention unrelated to a physical artifact.
Nevertheless, the choices of base units for these 3 quantities have the same consequences as the choices of any other base quantities for the values of any other quantities.
Whenever you change the value of a measurement unit you obtain a new system of units and all the values of the quantities expressed in the old system of units must be converted to be correct in the new system of units.
The fact that the plane angle is not usually written in the dimensional equations of the physical quantities in the International System of Units, because of the wrong claim that it is an "adimensional" quantity, is extremely unfortunate.
(To say that the plane angle is adimensional because it is a ratio between arc length and radius length is a serious logical error. You can equally well define the plane angle to be the ratio between the arc length and the length of the arc corresponding to a right angle, which results in a different plane angle unit. In reality the value of a plane angle expressed in radians is the ratio between the measured angle and the unit angle. The radian unit angle is defined as an angle where the corresponding arc length equals the radius length. In general, the values of any physical quantity are adimensional, because they are the ratio between 2 quantities of the same kind, the measured quantity and its unit of measurement. The physical quantities themselves and their units are dimensional.)
In reality, the correct dimensional equations for a very large number of physical quantities, much larger than expected at the first glance, contain the plane angle. If the unit for the plane angle is changed, then a lot of kinds of physical quantity values must be converted.
To add to the confusion, in practice several base units of the 3 mathematical quantities are used simultaneously, so the International System of Units as actually used is not coherent. E.g. the frequency and the angular velocity are measured in both Hertz and radian per second, the rate of an exponential decay can be expressed using the decay constant (corresponding to Nepers) or by the half-life (corresponding to octaves), and so on.
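A tiny sketch of the conversion-by-ratio point for logarithms (illustrative only; the "value in nepers/octaves/decades" framing follows the comment above, and the decay constant is an arbitrary example value):

```python
import math

x = 37.0
nepers  = math.log(x)          # logarithm "in nepers"  (natural log)
octaves = math.log2(x)         # logarithm "in octaves" (binary log)
decades = math.log10(x)        # logarithm "in decades" (decimal log)

# Converting between them is multiplication by a fixed ratio, like any unit change.
print(octaves, nepers / math.log(2))     # 1 octave = ln(2)  nepers
print(decades, nepers / math.log(10))    # 1 decade = ln(10) nepers

# Same idea for exponential decay: the decay constant (nepers per second)
# and the half-life (seconds per octave) carry the same information.
decay_constant = 0.1                       # 1/s
half_life = math.log(2) / decay_constant   # s
print(half_life)
```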
Thanks for writing that. While I don't automatically believe it all, I think it's important to see what's arbitrary and what's natural in our units. I've struggled with Hz vs rad/s before, and I think I resolved it by including the cycle as a quantity, so Hz = cycle/s and rad/s = 1/s. You don't seem to agree and I'm not confident of my decision, but it's now part of a big technical debt :P
A clear sign of how wrong people can be about the naturalness of units is Avogadro's constant which was recently demoted from a measured value to an exact arbitrary value. Chemists often believe that N_A, moles, atomic mass units, etc. are all somehow important or fundamental and don't realize that it's all based on a needlessly complicated constant with an (until 2019) needlessly complicated definition that could have just been a simple power of 10 if history had gone differently. Luckily the people defining SI have finally moved away from the old two independent mass units to just the kg that can now be exactly converted to atomic mass units by definition.
While it might be something you are only now realizing, the US is not a single entity in many ways. Rather, it's some 50 states that form a country. Each state has its own laws and ways of doing things. While there are many similar ways of doing things, none are exactly the same. On top of that, even within a state you'll have different school systems with different policies.
And we aren't even going to discuss going to American schools in Europe.
Use it or lose it. Most people have no reason to need knowledge of trigonometry, so even if they’re taught it they quickly forget it.
I never really learned trigonometry until I started doing game programming in my spare time when suddenly that knowledge and linear algebra became necessary to understand. They only way I learned it was by needing to know it.
In fact, I regularly forget knowledge I don’t need to know. The stuff I do need to know remains fresh in my mind.
But the angle is an adimensional unit (it's the ratio of two distances, one along the circumference and one along the radius) so 1 rad = 1. Therefore 1 degree is 0.0174... radians but it is also just 0.0174.
No, you're describing one particular way to measure angles. Radians express such a ratio, but degrees don't. 1° is not a ratio between distance along a circumference and radius, it's a ratio between amount rotated and complete revolution. 1° actually stands for 1/360 (of a revolution).
Which is why it's important to add the unit after the measurement. If someone tells you an angle measures 1, can you tell whether it's 1/360 of a revolution or the angle that would be formed by traveling along a circumference a distance equal to the radius of the circle?
Angles in the SI are a ratio of two lengths (and solid angles are a ratio of two surfaces), so degrees are also a ratio of two lengths. 1 degree is a ratio of pi/180=0.01745, which happens to be 1/360th of a revolution; and you have to write down the unit to indicate the multiplicative factor. But writing down radians is just for clarity.
While it is a fake unit, it was made to make the math easy. You could call the origin of everything the place where I'm standing - but good luck calculating a path for the mars rovers to travel if I happen to walk to the bathroom.
I drive a mars rover and this cracked me up. Understanding reference frames is indeed a big part of the job. We do have to deal with "site frame updates" based on rover observations of the sun -- important but annoying. I will bring your person-centered frame suggestion to the team :-)
Speaking of reference frames, I deal with quite a few for Earth-bound things, and the primary ones we use are ECEF (Earth-Centered, Earth-Fixed) and ECI (Earth-Centered, Inertial), which then we will often move to a relative local frame for whatever object matters.
Is the equivalent set available for Martian Nav (MCMF/MCI, I guess), or do you have different/specialized/etc. frames based on something unique to Mars.
For the rover, we're pretty much always dealing in local coordinate systems based on reference frames defined using the rover's observations of the sun and alignment of local imagery with orbital imagery. The two frames used most frequently are called RNAV (centered on the rover) and SITE (centered on where we last did a sun observation). But then there is a tree of frame transformations for knowing the location and orientation of each part of the rover with a lot of named frames (especially important for operating the robotic arm, which I also do).
I don't understand your comment. What I meant is that we can use the numeric values of radians without ever writing the radians unit, it is indeed dimensionless (it is length / length = 1, no unit)
Well it's not exactly surprising, the US is fundamentally built on arbitrary baseless measurement units so getting out of that mindset is probably difficult.
A unit that could be inherently defined by math itself and not a farmer looking at their hands and feet? Preposterous!
> Of course you can work in other units, but you'll need to insert the appropriate scaling factors all over the place.
You probably take out more scaling factors than you introduce.
> Euler's formula (e^ix = cosx + isinx) is the simplest when working with radians.
Euler’s still simple:
e^(2πiy) = cos y + i sin y
Or if you start noticing c = e^(2π) showing up all over the place:
c^(iy) = cos y + i sin y
> How do you do the same with "turns" on a sphere?… You can't in any meaningful way.
Why not do the same thing? One steradian is 1/(4 pi) of a sphere’s solid angle. What if one “steturn” or whatever just covered a full solid angle? And similarly for higher dimensions?
Neither definition seems more natural to me, especially being used to all the factors of 2 and π that pop up all over the place in the status quo.
I do not see how higher dimensions invalidates the concept.
Steradians are replaced by a scaled unit-less number, which I will call sterturns, that goes from 0 to 1.
The sine and cosine that are defined with Taylor series are not the same sine and cosine that are defined for right triangles.
The former are R->R functions, while the latter are defined on Angles (Angle is unfortunately not an SI physical dimension yet, but I expect that to change soon), and they don't care about the measurement unit.
I have no idea what you mean by radians generalizing for higher dimensions, but not turns.
sine and cosine are functions from ℝ->[-1,1]. They don't take in a value which has a unit, or even a dimension, they take in a real number.
sin(x) is precisely the unique function f(x) such that f''(x) = -f(x) with f(0) = 0 and f'(0) = 1, similar to how exp(x) is the unique function g(x) such that g'(x) = g(x) with g(0) = 1.
Sine does not operate on 'angles measured in radians'. It operates on real numbers. It is zero whenever the real number passed in is a multiple of pi. It happens to have applications in relating angles to distances in circles and triangles, and in order to use sine in that context it is useful to introduce the concept of a 'radian' as a specific, constructed angle of a particular size, such that when you express an angle in terms of multiples of a radian, you can just use the sine function to generate useful values.
I was taught that Euler's formula defined complex exponents?
If we used turns for cos and sin we could redefine what e^ix means so it works without radians. From the other answer I guess this is completely wrong...
(I do understand it is nuts to redefine, i'm just interested as a theoretical thought)
Now, how is Euler's formula deduced? How did we figure out what e^ix means?
One way to understand where the formulas come from is the power series of e^x, remembering that that function is (can be) defined as the function whose derivative is itself. Sin and cos are functions whose second derivative is -sin and -cos respectively. If you plug in ix to the power series for e^x, the complex exponential comes right out.
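Written out, it is just a regrouping of the series by real and imaginary parts:

$$e^{ix} = \sum_{n=0}^{\infty}\frac{(ix)^n}{n!} = \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\right) + i\left(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots\right) = \cos x + i\sin x.$$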
There are a couple other "paths" to this result, and the choice we have is by far the most elegant.
The abstraction Unity provides allows you to do neat things without fully understanding the math behind it. But if you don't also learn the math, you are limited by what those abstractions allow you to do and by the performance cost due to the fact that these abstractions can't make good use of special-case information that would be available to you if you handcrafted the operation.
It is perfectly legitimate to rely on abstractions a lot of the time because it's safe and easy, and _also_ want to roll your own manipulations sometimes, because 1) when you're accustomed to solving weird trig problems with trig, that's the most straightforward way to write the code, or 2) for performance, which, for a game developer, I think ought to be a major concern.
It's super common for people to reach for trig to solve geometry problems when they should be using vector arithmetic instead; e.g., this answer on SO has to special-case vertical/horizontal lines, which could have been avoided entirely by not reaching for trig in the first place.
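A hedged sketch of the general point (mine, not the SO answer in question): projecting a point onto a line with a dot product needs no angles, no atan2, and no special case for vertical or horizontal lines.

```python
# Closest point on the line through A and B to a point P, by projection.
# Assumes A != B.
def closest_point_on_line(ax, ay, bx, by, px, py):
    dx, dy = bx - ax, by - ay                                   # line direction
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy) # scalar projection
    return ax + t * dx, ay + t * dy

print(closest_point_on_line(0, 0, 0, 5, 3, 2))   # vertical line: (0.0, 2.0), no special case
```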
This constant need to redefine the known world around a favourite detail amazes me. Are people that bored?!
(Also, I itch hearing the idea of redefining the interface - and the world - to fit an implementation detail. How about reimplementing using the [0...0.7854] domain instead of [0...1], if this is such a huge worry after decades of computing - on slower machines - with the natural radian (arc_length/radius) values? I feel the Godot engine should fit the world and not the other way around.)
> Math doesn’t require radians.
....What?!?! Circumference, radius and volume, just to name a few: try calculating those easily with turns only (without introducing a new constant!).
I think that the author is speaking about their world, in which they regularly encounter a specific use of trigonometric functions which would be simplified (conceptually and computationally) by skipping the conversion from and into radians.
> redefine the known world around a favourite detail
I think that's a good way to think about software optimization. Deep inside nested loops of a game engine (TFA's example code comes from Godot), that's often what you need to do to squeeze some performance characteristic into your hardware.
I think they are trying to expand it beyond their world. They already have this in their world! Shown with examples. Just want it elsewhere as well.
Not everything is about software automation! Especially in this regard, where decades-long established practices work on legacy hardware and systems. This is ruining/complicating things for some chip of the scope.