How does a computer/calculator compute logarithms? (zachartrand.github.io)
178 points by ykonstant 3 months ago | 73 comments



This is a well done post, but please note that computers do not in fact use Taylor series for evaluating polynomials. They do use polynomials! (Or, sometimes, rational functions, but I think the recent trend has been toward polynomials only.) They also use transformations like the log(1+x) - log(1-x) one described in this article. (That one is used in fdlibm, a very influential math library, though more modern libraries seem to not use it.)

But the polynomials themselves are usually generated through some other technique, especially the Remez algorithm (https://en.wikipedia.org/wiki/Remez_algorithm) and modern improvements to it like Sollya's (https://sollya.org/) LLL-based floating-point polynomial fitting. It's still an active area of research; the RLibM project (https://people.cs.rutgers.edu/~sn349/rlibm/) from Rutgers has introduced a totally new way to fit low-precision polynomials (say, up to 32 bits) using massive linear programming problems (roughly, a constraint for every input value, on about 10 coefficient variables).

Source: am researcher in this field.


Hi Pavel. For people who don't know, this is the person whose lab produced Herbie (https://herbie.uwplse.org/).

Tacking on to this, I have seen rational approximants in "fast approximate" math libraries, while the ones targeting the highest numerical accuracy are often still using polynomials. Setting up a solver for 0.5 ULP when you have a rational function is definitely a lot harder. In my own work, it also empirically seems that the division creates some problems for precision of the last bit, so your polynomials are longer than you might expect.

One interesting approach for bit-accurate rational approximants is to have a rational approximant get most of the way there while fitting the error of your rational approximant to a polynomial, but I don't think there's a good way to figure out where the computationally-efficient split of the problem is (ie how big to make the rational side and the polynomial side).


Yeah, you're right on all of that. I think there have been recent breakthroughs on quadratic (I think at ARITH 23? Yep, just looked, it's the best paper: https://arith2023.arithsymposium.org/program.html) so hopefully this will become more accessible with time. Though OTOH RLibM-style techniques can get you very short polynomials, to the point that it's hard to imagine beating them by much given the high cost of division operations (like 3x a multiplication).


They used to, though! The AMD K5 paper is a classic, and it describes how they used Taylor series because they could compute them on-the-fly (they had limited ROM space). https://www.researchgate.net/publication/3612479_The_K5_tran...


"Used to" is sort of an overstatement, IMO. The use of Taylor series here is remarkable because they need to generate coefficients on the fly. Chebyshev and Remez have been around for quite a long time, since the early 1930's, and people doing numerics in a serious fashion have generally used them for polynomial approximation since the birth of computers (except in unique circumstances like the paper you cited).

The new engineering that has come recently with Sollya and similar solvers is explicit design around floating point operations. Chebyshev and Remez use the real numbers, but Sollya uses floats in its minimax function, and the coefficients you get are actually somewhat different due to rounding. Fast LP solvers have also enabled this approach more generally.
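For a concrete taste of the classical starting point, here is a minimal sketch (assuming NumPy; plain Chebyshev interpolation as a near-minimax stand-in, not Sollya's float-aware fitting):

    import numpy as np
    # Fit log(1 + t/3) for t in [-1, 1], i.e. log1p on [-1/3, 1/3]
    f = lambda t: np.log1p(t / 3.0)
    # Chebyshev interpolation is near-minimax: the error is close to
    # equioscillating, within a small factor of the true Remez optimum
    c = np.polynomial.chebyshev.chebinterpolate(f, 9)
    t = np.linspace(-1.0, 1.0, 10001)
    err = np.polynomial.chebyshev.chebval(t, c) - f(t)
    print("max abs error:", np.abs(err).max())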


It should be said that Sollya doesn't _really_ use floats. It restricts its coefficients to rationals that are (mostly) representable by floats, but the minimax is still run in full precision. Which means you can often beat it by brute force or similar.


Yes, that is a good clarification - it uses wider precision internally, but it takes floating point restrictions and operations into account. If you otherwise used Remez, you would have to just quantize the coefficients blindly and then tweak (probably manually) if something was off.

Shamelessly plugging, you can sort of see the old-school process here (using integers and mixed-precision fixed point is harder with Sollya): https://specbranch.com/posts/faster-div8/

The results of a follow-up quantization process that was programmatic using an LP solver to narrow the operands went into a paper I wrote for ARITH last year: https://arith2023.arithsymposium.org/papers/Newton-Raphson%2...

The coefficients and constants in both cases are substantially higher than what you get from Remez, to allow for truncation. When you have quantized operands, they are quite a bit higher. The same goes for approximations generated by Sollya - the optimal coefficients are relatively far from what Remez would tell you to do because of the error you get from rounding.


Do you think we'll ever get to a point where there is tooling to generate implementations for user-specified functions? E.g. say I want log2(x!). (I picked that example because the size of the intermediate value rules out computing it naively, and because it's a function I've had to implement before.)


Technically, that isn't a continuous function, so I don't think the Remez algorithm and similar would work directly.

But if you replace the factorial with the gamma function, you get a continuous function that is equivalent for positive integers. And you might be able to get something from that. But there might be something more efficient that can take advantage of the fact that the range of your function is positive integers.

If you just need something that doesn't overflow, you can take advantage of the properties of logarithms to compute it as the sum of log2(n) for all n from 1 to x.
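For instance, a quick Python sketch of both routes (math.lgamma being the standard-library version of the gamma-function idea above):

    import math
    def log2_factorial_sum(x: int) -> float:
        # sum of log2(n) for n = 1..x; never forms the huge x!
        return sum(math.log2(n) for n in range(1, x + 1))
    def log2_factorial_gamma(x: float) -> float:
        # log2(x!) = lgamma(x + 1) / ln(2), O(1) instead of O(x)
        return math.lgamma(x + 1.0) / math.log(2.0)
    print(log2_factorial_sum(1000))    # ~8529.4
    print(log2_factorial_gamma(1000))  # essentially the same value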


And if you want an integer result, log2 can be quickly computed using the number of leading zeros.
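E.g., a sketch in Python, where bit_length plays the role of a count-leading-zeros instruction:

    def ilog2(n: int) -> int:
        # floor(log2(n)) for n >= 1; in hardware this is
        # (word_size - 1) - clz(n)
        assert n >= 1
        return n.bit_length() - 1
    print(ilog2(1), ilog2(1024), ilog2(1025))  # 0 10 10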


It's not possible to fully automate. Log is one example: you can't just use log(x), you use log(1+x). There are other problems, like the "zero problem", and it's still possible to devise better approximations with other elementary operations such as |x| or ABS(x).


> computers do not in fact use Taylor series for evaluating polynomials.

I mean I get what you're saying, but the smartass in me observes that since any polynomial is its own Taylor expansion...


I think it's a typo and supposed to read "logarithms".


Ahhhh that makes more sense


Wang Labs made an amazing (for its time) calculator, the LOCI-1, quickly replaced by the much more capable LOCI-2. They were from 1964/1965 and were built with core memory and discrete transistors, 1200 of them, no ICs.

It operated with 10 digits of precision, and its native hardware could add and subtract. Rather than doing multiplication and division and square root directly, instead the calculator was able to compute logs and antilogs via successive addition and subtraction. Multiplying two numbers might return 199.9999999 instead of 200.0, but since this was built for engineers who were mathematically literate, it was acceptable.

There were other calculators that could be programmed to compute logs and antilogs, but they were slow. On the LOCI machines, the results were instant (on a human timescale).

Description of the LOCI-2:

https://www.oldcalculatormuseum.com/wangloci.html

Just found this description of the log algorithm -- it used only six constants in its iterative add/subtract/shift algorithm to compute logs and antilogs:

https://osgalleries.org/journal/pdf_files/20.2/V20.2P51.pdf


> antilog

So exp? e to the power of x?


Yes. I think the name "antilog" used to be more common in engineering, and it commonly referred to 10^x rather than e^x. Some calculators even had a log^-1 button.


Neat Google Ngram for it at https://books.google.com/ngrams/graph?content=antilog&year_s... showing a peak for "antilog" in 1976, since decreasing to its prewar level of about 25% of peak.


For anyone who remembers the Sinclair Scientific calculator, the site http://files.righto.com/calculator/sinclair_scientific_simul... has a good explanation of its way of calculating. It is unlike other machines, overcoming serious electronic limitations with some simple but clever approximate methods.

This little machine was my first calculator, a present from my Dad. Man, I loved that thing. It was so small that I could easily hold it, and strike the keys, with one hand. That single-hand aspect proved helpful in my undergraduate physics and chemistry labs. The RPN methodology also made it easy to work through complicated formulae without getting lost in parentheses.

Over time, the keys stopped working well. And then stopped working at all. I kept it for a year or two, but then got rid of it, having moved on to another calculator that was more powerful but much less fun. I don't even remember the brand.

Looking back four decades, I wish I had kept the Sinclair Scientific, for a memento. But I did not appreciate that a dead little machine like that, in its thin plastic case, would bring back so many fond memories. There's a life lesson in that.


RIP Sir Clive :')


> While these equations of polynomials contain a finite number of terms, we can have polynomials with an infinite number of terms. These are called series, and one of the simplest types of series is called a geometric series.

Slightly OT, but what made infinite sums a lot more sane to me was the understanding that the sums are in fact not infinite - it's just syntactic sugar for a perfectly ordinary limit of a sequence of finite sums, each with one more term than the last.

E.g.,

  sum(1 / 2^n) for n from 1 to +infinity 
"really" means:

  lim(sum(1 / 2^n) for n from 1 to m) for m -> +infinity 
So no infinite loops of additions are actually involved.
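E.g., watching the partial sums in Python makes the point:

    partial = 0.0
    for m in range(1, 11):
        partial += 1 / 2**m   # the m-th finite sum
        print(m, partial)     # 0.5, 0.75, 0.875, ... -> approaches 1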


Yes, great point. In fact, you don't even have to have exactly one more term than the last (assuming the limit converges).


Ah yes, that's true of course.


I'd also want to say that a series is not necessarily a polynomial.


The article's impl of its own series [1] centers on the octave [2/3, 4/3] but the error is actually more balanced across [1/sqrt(2), sqrt(2)]. (A series expansion of the error probably explains this - some of the experts here like pavpanchekha | pclmulqdq might be able to explain pithily.)

So, here is one way to evaluate the discussed series across the centered interval, and I think it only needs 9 terms for "near IEEE double", NOT the author's 15 terms:

    import sys, math # arg = number of terms of series
    n = int(sys.argv[1]) if len(sys.argv) > 1 else 3
    def lnATH(x):         # Ln via ArcTanH form/series
      u = (x - 1)/(x + 1) # Can expand loop to expr
      return sum(2/(2*j+1)*u**(2*j+1) for j in range(n))
    x  = math.sqrt(0.5)   # sqrt(1/2) centers error
    x1 = x*2     # IEEE FP exponent gets within 1 octave
    m  = 3840    # 4K monitor; adjust to whatever
    dx = (x1 - x)/m
    for i in range(m):  # Plot error in favorite tool
      print(x, lnATH(x) - math.log(x))
      x += dx
If you run it with 8 you begin to see round-off effects. With 9 terms, it becomes dominated by such effects.

Anyway 15/9 = 1.66X speed cost which seemed enough to be noteworthy. I mean, he does call it "log_fast.py" after all. (4 seems adequate for IEEE single precision - though you might be 1-bit shy of 24-bits of relative precision at the very edges of octaves if that matters to you.)

[1] https://github.com/zachartrand/SoME-3-Living/blob/main/scrip... (and yeah, the code says "16 terms" & article "15 terms", but any way you slice it, I think it's a lot of extra terms).


I'm just guessing here. What stands out is that the extrema of that octave have the same value. I would guess that this means that things are probably especially well-behaved in that region.

If you look at the Chebyshev series of log in that range, you may see some very small coefficients.


Re: centers on the octave [2/3, 4/3] but the error is actually more balanced across [1/sqrt(2), sqrt(2)]

For the computation of the natural logarithm using binary floating-point arithmetic there is no practical difference, i.e. the overall accuracy that can be achieved is basically the same. Here is a worked example based on minimax core approximations extracted from my personal code collection (design and implementation are my own; no guarantee of fitness for any particular purpose).

  /* natural logarithm */
  float my_logf (float a)
  {
      float m, r, s, t, i, f;
      int32_t e;
  #if LOG_VARIANT == 1
      // max ulp err = 0.86280  
      const float cutoff_f = 0.707106781f;
  #elif LOG_VARIANT == 3
      // max ulp err = 0.85089  
      const float cutoff_f = 0.666666667f;
  #endif // LOG_VARIANT
      if ((a > 0.0f) && (a <= 0x1.fffffep+127f)) { // 3.40282347e+38f
          m = frexpf (a, &e);
          if (m < cutoff_f) {
              m = m + m;
              e = e - 1;
          }
          i = (float)e;
          f = m - 1.0f;
          s = f * f;
  #if LOG_VARIANT == 1
          /* Compute log1p(m) for m in [sqrt(0.5)-1, sqrt(2.0)-1] */
          r =             -0x1.384000p-4f;  // -0.076232910
          t =              0x1.084000p-3f;  //  0.129028320
          r = fmaf (r, s, -0x1.0f218cp-3f); // -0.132388204
          t = fmaf (t, s,  0x1.226b04p-3f); //  0.141805679
          r = fmaf (r, s, -0x1.5427a4p-3f); // -0.166091233
          t = fmaf (t, s,  0x1.99a35ap-3f); //  0.200018600
          r = fmaf (r, s, -0x1.000416p-2f); // -0.250015587
          r = fmaf (t, f, r);
          r = fmaf (r, f,  0x1.55555cp-2f); //  0.333333433
          r = fmaf (r, f, -0x1.fffffap-2f); // -0.499999911
  #elif LOG_VARIANT == 3
          /* Compute log1p(f) for f in [-1/3, 1/3] */
          r =             -0x1.0ae000p-3f;  // -0.130310059
          t =              0x1.208000p-3f;  //  0.140869141
          r = fmaf (r, s, -0x1.f1988ap-4f); // -0.121483363
          t = fmaf (t, s,  0x1.1e5740p-3f); //  0.139814854
          r = fmaf (r, s, -0x1.55b36ep-3f); // -0.166846141
          t = fmaf (t, s,  0x1.99d8b2p-3f); //  0.200120345
          r = fmaf (r, s, -0x1.fffe02p-3f); // -0.249996200
          r = fmaf (t, f, r);
          r = fmaf (r, f,  0x1.5554fap-2f); //  0.333331972
          r = fmaf (r, f, -0x1.000000p-1f); // -0.500000000
  #endif // LOG_VARIANT
          r = fmaf (r, s, f);
          r = fmaf (i, 0x1.62e430p-01f, r); //  0.693147182  // log(2)
      } else {
          r = a + a;  // silence NaNs if necessary
          if (a  < 0.0f) r = 0.0f/0.0f; // QNaN INDEFINITE
          if (a == 0.0f) r = -INFINITY; // -INF
      }
      return r;
  }


Thanks for your code post, but your fused-multiply-add Horner coeffs seem like they are not from the same power series I was discussing / the article was promoting.

The effects I mention are pretty clear if you run my code with both `x = 2/3.0` and with `x = math.sqrt(0.5)`, even with a much smaller `m` / coarser granularity.

For example { for the python &| copy-paste impaired &| lazy :-) }:

    $ python3 j.py | headtail 3 3 # [2/3..4/3], m=300
    0.6666666666666666 3.7747748309890916e-06
    0.6688888888888889 3.5668329603222304e-06
    0.6711111111111111 3.3693446750415745e-06
    ---
    1.3266666666666544 -3.120628879194065e-07
    1.3288888888888766 -3.2510972441945896e-07
    1.3311111111110987 -3.3859621345611046e-07

    $ python3 k.py | headtail 3 3 # [rt1/2..rt2], m=300
    0.7071067811865476 1.279804798859363e-06
    0.7094638037905028 1.19727672415193e-06
    0.711820826394458 1.1195756742798935e-06
    ---
    1.4071424945612245 -1.1572535666970474e-06
    1.4094995171651796 -1.1970089550117358e-06
    1.4118565397691347 -1.2378520898859158e-06
Note in the 2/3..4/3 case the exponent changes from e-06 to e-07 - a whole order of magnitude difference in |error|, while in the rt1/2..rt2 case the nearness of |error|. This same pattern manifests for higher numbers of terms like 8 or 9 or higher resolution like m=3840.


If I remember correctly, real libm implementations rarely use an analytically derived approximation; they just fit a polynomial or rational approximation to minimize the error on the required range and precision.


Please check the RLIBM project; it's building a collection of correctly rounded elementary functions for multiple representations (32-bit floats, posits, bfloat16, tensorfloat32) and multiple rounding modes.

RLIBM: Rutgers Architecture and Programming Languages Lab's Correctly Rounded Libm:

https://people.cs.rutgers.edu/~sn349/rlibm/


Good article. I, too, am fascinated by calculators and how they work, especially the HP scientific RPN calculators. So much so, that I ended up developing this:

https://play.google.com/store/apps/details?id=com.limpidfox....


Is there a recursive computation for logarithms, just like for subtraction or division?


e^x = sum_n x^n/n!

= 1 + x + (x^2)/2 + (x^3)/6...

= 1 + x(1 + (x/2)(1 + (x/3)(1 +...

exp(x,n) = 1 + (x/n)*exp(x,n+1)

this gives you the exponential function as a recursion formula. Just set some condition for when to truncate. From there, a recursive function that does the exact opposite is easy enough.

I'm sure there are optimizations you can do on top in the case that you don't get convergence within the first few terms.
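A quick sketch of both directions in Python (the inverse done here by plain bisection on exp, just to illustrate; a real implementation would reduce the range first):

    def exp_rec(x, n=1, depth=60):
        # exp(x, n) = 1 + (x/n) * exp(x, n+1), truncated at fixed depth;
        # depth 60 keeps the tail negligible for |x| <= 10 or so
        if depth == 0:
            return 1.0
        return 1.0 + (x / n) * exp_rec(x, n + 1, depth - 1)
    def log_bisect(y, lo=-10.0, hi=10.0, iters=60):
        # "the exact opposite": invert exp by bisection on a bracket
        for _ in range(iters):
            mid = (lo + hi) / 2.0
            if exp_rec(mid) < y:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0
    print(exp_rec(1.0))                   # ~2.718281828459045
    print(log_bisect(2.718281828459045))  # ~1.0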


Thanks that's the kind of process I was aiming at but was stuck early.



Most calculators and hardware do use CORDIC for real-time implementation.

Why the CORDIC algorithm lives rent-free in my head:

https://news.ycombinator.com/item?id=40326563
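A sketch of the shift-and-add idea in Python (illustrative only, not any particular chip's microcode): store the constants ln(1+2^-k), then greedily build a product of such factors up to the argument; in fixed point each factor costs one shift and one add.

    import math
    LN1P = [math.log1p(2.0 ** -k) for k in range(1, 40)]  # stored constants
    def shift_add_ln(x):
        # ln(x) for x in [1, 2): pick factors (1 + 2^-k) whose product
        # approaches x; in hardware p * (1 + 2^-k) is p plus (p >> k)
        assert 1.0 <= x < 2.0
        p, result = 1.0, 0.0
        for k in range(1, 40):
            trial = p + p * 2.0 ** -k
            if trial <= x:
                p, result = trial, result + LN1P[k - 1]
        return result
    print(shift_add_ln(1.5), math.log(1.5))  # agree to ~11 digits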


Great post and fascinating series of comments.



Btw: I'm looking for efficient ways to represent numbers of arbitrary precision and perform mathematical operations on them. It's for a toy financial calculator I'm writing in my spare time. My knowledge is really lacking in this field and I'm not even sure what I should be asking/searching for. Could someone suggest some sources? Preferably online, but books would also be very welcome.


The typical library folks use is called MPFR: https://mpfr.org

It's fast, high quality, and very complete.


It might also be important to note that MPFR is really for scientific purposes that need to deal with such large precision - there are significant memory and performance costs. For financial accounting applications, a fixed-precision decimal library (or fixed-point arithmetic) is probably a better choice. Also, MPFR is C/C++, but the author didn't really specify their language, and there may be other better options, although bindings likely exist.


I once wrote some accounting software for a tiny team in an investment bank. The bosses decided that we should use our brand-new platform, which was cheap to develop for, because most (normal) things could be done without code.

All went well for a couple of months; then the customer reported that the system failed for currency conversions to and from Yen. There wasn't enough precision. We had to write an arbitrary-precision maths library, resulting in the project overrunning budget by a factor of three.


There’s a world of difference between an arbitrary precision library like mpfr suited for financial applications, a fixed precision “Decimal” type that’s better suited for financial applications, and trying to roll your own thing.

I don’t believe I wrote anywhere that you should roll your own. All I said is that for financial stuff, “Decimal”-like APIs are going to be easier to configure and manage, and better suited.


Sure. This was the early eighties; we didn't really have "libraries", and even if we had, there was no way to plug them into this proprietary development system.

> library like mpfr suited for financial applications, a fixed precision “Decimal” type that’s better suited for financial applications

I don't know mpfr, but my guess is you meant to say that mpfr is better-suited for scientific applications?


Yeah, arbitrary precision is different from the fixed-point precision that financial institutions typically need. MPFR is arbitrary precision, so if you’re doing stuff involving our solar system and beyond, you probably want to use MPFR and be very careful in setting up the equations you’re using. Financial transactions have different requirements, though: arbitrary precision is unnecessarily slow, uses unnecessary amounts of RAM, and I believe will probably be difficult to configure correctly. That being said, I’ve never really used MPFR. More just relaying what I understand to be the lay of the land.


MPFR itself is pretty easy to use.

Unfortunately, if your formulas are ill-conditioned you can get inaccurate results even using thousands of bits of precision... so there is no replacement for doing analysis and testing.

Perhaps if there were some interval-math equivalent of MPFR you could at least tell when your code might be producing wrong results without extensive mathematical analysis.


Thanks for the suggestion.

And do you know where I could read about the underlying theory? I'd like to implement the most I could from scratch, for learning purposes.


The underlying theory is primary school math if you stay in a decimal fixed point format.

All you have to do is choose the number of decimals you want - say, half the number of digits of the biggest number that fits your desired size, with half the digits for the integral part and the other half for the decimal part. For instance, if you want to store signed fixed point in 32 bits, the highest number is about 2 billion - that's 9 digits for you. Say you pick 4 for the decimal part: just multiply your numbers by 10000.

So 1 is 10000. You can use all the regular operators you want on these numbers, just remember to x10000 when storing them, and /10000 when extracting from them. You're really just "computing on tenths of thousands" instead of "computing on ones".

It is obviously wasteful to do this in a decimal base, but it's also trivial, so unless you cannot afford to lose that many digits, stick with it.

When comfortable, consider moving to a binary base so that you don't waste as many digits.

Note that it is not _strictly_ mandatory to use fixed point numbers when doing financial computations. Floating point can work just fine; it all depends on what kind of numbers and computations you have to deal with.

Typically, the need for fixed point arises when

1) You work with really small and large numbers together; these are the most likely to be misrepresented as floating point. The best example is if you need to deal with exchange rates to convert between currencies that have widely different bases.

2) You need to chain a lot of operations, which could compound imprecision.

3) You need to store exact numbers of arbitrary precision (e.g. market data)


> You can use all the regular operators you want on these numbers, just remember to x10000 when storing them, and /10000 when extracting from them.

??? You can only use addition and subtraction in this way. As soon as you touch multiplication and division the rules don't hold anymore!
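E.g., products come back scaled by 10000^2, so you have to divide the scale back out (and pre-scale numerators for division):

    SCALE = 10_000                   # four decimal places, as above
    a = 15_000                       # represents 1.5
    b = 22_500                       # represents 2.25
    print((a + b) / SCALE)           # 3.75    - addition just works
    print((a * b) / SCALE)           # 33750.0 - naive multiply, 10000x off
    print((a * b // SCALE) / SCALE)  # 3.375   - rescale after multiplying
    print((a * SCALE // b) / SCALE)  # 0.6666  - 1.5 / 2.25, truncated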


> Btw: I'm looking for efficient ways to represent numbers of arbitrary precision

The easiest way is to look at how fixed length numbers are implemented and extend the ranges of values to arbitrary length.

For financial operations it seems likely that you want fixed point arithmetic, which essentially just means you calculate in integers of cents. So you just need to implement arbitrary length integers, which isn't particularly hard I would say. The algorithms you need are the ones you already know from school. Long division and multiplication. (There are better ones though).


For finances, numbers are typically stored as integers, since floating point problems can crop up quite easily if you don't. Some languages can handle arbitrary-precision integers out of the box - this is true in Python, for example - while in others you have to use special libraries. Even when you can represent the numbers, you still have to take care, since doing things like calculating interest might lead to floating point numbers which need to be handled.


> efficient ways to represent numbers of arbitrary precision

I used to write COBOL for Burroughs machines, which had hardware BCD arithmetic. It wasn't "arbitrary precision" in the sense that you could input an arbitrary number and get an exact result; you had to allocate storage for each intermediate result, so your program had to be designed to handle some specific amount of precision.

I'm sure some Intel microprocessors had BCD hardware; maybe they still do.


For implementing it yourself from scratch for learning it is a good idea to first step back and ask how much precision do you need and how fast does it need to be. Also are we talking about just integers, or do you also want fixed point and/or floating point?

If you don't need really huge numbers and you don't need it to be very fast, and you just need add, subtract, multiply, and divide (with remainder) on integers, there's a decent chance you can write something by just implementing the same algorithms you would use to do it by hand on paper.

To make it easy to debug, you could represent an arbitrary integer as an array of base 10 digits, least significant digit first. E.g., the number 123456 would be represented as [6, 5, 4, 3, 2, 1].

Addition and subtraction are pretty straightforward in that representation. Multiplication is more complicated, but just follow what you do when you multiply by hand and you'll get it. Division is harder, but for a project you are doing for learning purposes it is doable.
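Here is a sketch of the add and multiply in that little-endian digit representation (division omitted; it's the hard one):

    def big_add(a, b):
        # schoolbook addition with carry, least significant digit first
        out, carry = [], 0
        for i in range(max(len(a), len(b))):
            s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
            out.append(s % 10)
            carry = s // 10
        if carry:
            out.append(carry)
        return out
    def big_mul(a, b):
        # schoolbook multiplication: accumulate digit products,
        # then normalize all the carries in one pass
        out = [0] * (len(a) + len(b))
        for i, da in enumerate(a):
            for j, db in enumerate(b):
                out[i + j] += da * db
        carry = 0
        for k in range(len(out)):
            t = out[k] + carry
            out[k] = t % 10
            carry = t // 10
        while len(out) > 1 and out[-1] == 0:  # trim leading zeros
            out.pop()
        return out
    print(big_add([6, 5, 4, 3, 2, 1], [4, 4, 5]))  # 123456 + 544 = 124000
    print(big_mul([6, 5, 4, 3, 2, 1], [2, 1]))     # 123456 * 12 = 1481472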

Once you've got those operations you can improve them by changing to a higher base. For example you might switch to base 1000. Then 123456 would be [456, 123]. Assuming the base is small enough that each digit fits in one machine word addition and subtraction are O(n) and multiplication and division as described above are O(n^2) where n is the number of digits in your operands (let's assume equal sized operands).

Switching from base 10 to base 1000 cuts the number of digits in your numbers by a factor of 3, speeding up addition and subtraction by a factor of 3 and multiplication and division by a factor of 9.

It usually is easy to go up to base whatever fits in half a machine word. For example, on a 32-bit processor that would be base 65536. Digits then range from 0 through 65535, and adding or multiplying a pair of digits will not overflow.

You can also go up to the full machine word for your base, but that can be more work. So for a 32-bit machine that would be base 4294967296. You then have to deal with overflow, which adds a small amount of extra logic for addition/subtraction. For multiplication you need a 32-bit x 32-bit => 64-bit operation. Many 32-bit processors do have that, but it might not be exposed in whatever language you are using.

The above is all stuff you can probably do without any more theory than you can remember from elementary school. Taking it farther you probably want to do some reading. You can get a decent speedup on multiplication by using the Karatsuba algorithm, which speeds up multiplication from O(n^2) to O(n^1.58). Karatsuba is pretty easy and Wikipedia explains it adequately.
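A sketch of Karatsuba's recurrence, using Python's built-in integers just to show the idea (the digit-array version is the same, splitting on array halves):

    def karatsuba(x, y, cutoff=10**9):
        # three recursive products instead of four:
        # x*y = p2*B^2 + (p1 - p2 - p0)*B + p0
        if x < cutoff or y < cutoff:
            return x * y  # base case: direct multiply
        m = max(x.bit_length(), y.bit_length()) // 2
        B = 1 << m
        x1, x0 = divmod(x, B)  # split each operand in half
        y1, y0 = divmod(y, B)
        p0 = karatsuba(x0, y0)
        p2 = karatsuba(x1, y1)
        p1 = karatsuba(x0 + x1, y0 + y1)
        return p2 * B * B + (p1 - p2 - p0) * B + p0
    a, b = 123456789123456789, 987654321987654321
    print(karatsuba(a, b) == a * b)  # True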

An extensive treatment of all of this is in Knuth volume 2. He covers everything from the "do it like you do by hand" up to whatever was state of the art at the time the book was written. I wouldn't necessarily say you should go out and buy Knuth volume 2, but if you can borrow it from a library or someone you know take a look.


You can check how Python does it, and also ask ChatGPT - I found it to be very helpful in cases like these.


> for if our calculators and computers calculated logarithms inaccurately, as well as exponentials, trig functions, and square roots, to name but a few, a lot of scientific and engineering work would be broken and end in catastrophe.

I think I have some news for the writer of the article.


“It would be better to never make a dime of profit than to have a product out there with a problem.”

(Today, the generator attached to David Packard powers half of California.)


Wait until they learn about floating point.


What are they going to learn? That it’s by far the best fixed-size approximation to real number arithmetic we have, sufficient for essentially all of our scientific and engineering needs?


Besides what you said (which I mostly agree with), that floating point calculations can at times be extremely counterintuitive, weird, and totally inaccurate.

While a single floating point operation gives you the best possible floating point result, the error for two consecutive floating point operations is unbounded.

They need to be treated carefully and people need to be aware of their shortcomings. You would be surprised how often I have seen even people here getting hung up over the fact that floating point numbers aren't behaving identically to real numbers.


> That it’s by far the best fixed-size approximation to real number arithmetic we have (*)

* for some use cases (**)

** conditions apply


I know that, but there are assumptions you cannot make about equivalence etc. So if he's commenting on inaccuracies of numbers, he may find that surprising.

Tough crowd, sheesh.


It's polynomials all the way down.


The VAX POLY instruction is generally held by s/w folk to be the poster child for CISC over-complication.

Evidently they were mostly correct, as it's disappeared, but one thing that is missing from many criticisms: POLY as part of the ISA should have enabled the h/w folk to play games with representations of intermediate results (higher precision than the visible register set, or maybe even carry save?) that would not have been possible with RISC algorithmic implementations.

[I once worked with people who provided additional crunch for your POLY-heavy problems via extra VAXBI boards. Their bugaboo was that their approach (like GPU offload today?) often succeeded in turning what had been compute-bound problems into i/o-bound ones]


It turns out that fma is the right framework here. It gets you the extra accuracy you want, and can be really fast.


What is fma?


Fused multiply add. A single operation that does

    a += b * c
https://en.wikipedia.org/wiki/Multiply%E2%80%93accumulate_op...
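For example, Horner's rule with one rounding per step (math.fma landed in Python 3.13; C has had fma/fmaf for ages, as in the listing upthread):

    import math  # math.fma needs Python 3.13+
    def horner_fma(coeffs, x):
        # coeffs ordered from highest degree down to the constant
        r = coeffs[0]
        for c in coeffs[1:]:
            r = math.fma(r, x, c)  # r*x + c rounded exactly once
        return r
    # degree-3 Taylor polynomial of e^x, evaluated at 0.1
    print(horner_fma([1/6, 1/2, 1.0, 1.0], 0.1))  # ~1.1051667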


To your point about i/o bottlenecks... these days, there is an i/o bottleneck between main memory and the registers--and even between the registers and the ALUs.

This changes the tradeoffs. And it's why we use chips sporting data parallelism and pipelined floating point units. Both techniques are, in the final analysis, backing off of pure RISC and making more complex instructions.


Damn good point. Sure, there’s no point in microcoding something if it is just doing the same steps as a hand-coded version could do.

But that doesn’t mean we just, say, stop implementing floating point operations!! We just make the hardware do it better than hand-coding can.


I don't think Taylor series (or, for that matter, Fourier series) will ever stop seeming magical to me.

It seems to defy all common sense that things could really add up and cancel each other out that perfectly.

I understand all the intuition behind them. But they still feel like magic.


For polynomials, it's kind of trivial that Taylor series work. It's sort of magical in the case of sine and cosine and the exponential function, but that's more a case, I think, of the "magical" properties of those functions than of the Taylor series. In the general case, Taylor series only approximate a function around a point, and it's not even guaranteed to do it very well or very far from that point.


> Taylor series only approximate a function around a point, and it's not even guaranteed to do it very well or very far from that point.

Half of the the article is mitigating that.


You wonder why we don’t have statues of him in college towns.


He has a lunar crater[1] named after him, which is a bit better than a statue imo.

[1] https://planetarynames.wr.usgs.gov/Feature/5889



