owalt's comments | Hacker News

They're saying the naive implementation was more than 150 lines of C code (300-450 bytes), i.e. too big.


No, it is correct. The integral is with respect to x, and the ordinary/partial derivatives are with respect to t. Written out fully, the derivative computation is

d/dt (x^t - 1)/ln(x) = d/dt [exp(ln(x)t) - 1]/ln(x) = ln(x)exp(ln(x)t)/ln(x) = exp(ln(x)t) = x^t.

Edit: d/dt exp(ln(x)t) = ln(x)exp(ln(x)t) by the chain rule, while d/dt (1/ln(x)) = 0 since the expression is constant with respect to t.

There are convergence considerations that were not discussed in the blog post, but the computations seem to be correct.
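For the skeptical reader, the identity can also be sanity checked numerically (a quick sketch; the values of x, t, and h are arbitrary):

```python
import math

# Numerically verify d/dt (x^t - 1)/ln(x) = x^t at an arbitrary point
# via a central difference.
x, t, h = 2.5, 1.3, 1e-6

def f(t):
    # The expression from the thread: (x^t - 1)/ln(x), constant x
    return (x**t - 1) / math.log(x)

central_diff = (f(t + h) - f(t - h)) / (2 * h)
print(central_diff, x**t)  # the two values agree closely
```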


Ah, yes. I don't understand how I differentiated with respect to x instead of t, but...


I'm not quite sure whether you're asking for an explanation or whether you're simply making the observation that this is often omitted from the discussion. Either way I think it's an interesting point, so I'll elaborate a bit on it.

As you say, there's no guarantee that even a convergent Taylor series[0] converges to the correct value in any interval around the point of expansion. Though the series is of course trivially[1] convergent at the point of expansion itself, since only the constant term doesn't vanish.

The typical example is f(x) = exp(-1/x²) for x ≠ 0; f(0) = 0. The derivatives are mildly annoying to compute, but they must look like f⁽ⁿ⁾(x) = exp(-1/x²)pₙ(1/x) for some polynomials pₙ. Since exponential growth dominates all polynomial growth, it must be the case that f(0) = f'(0) = f"(0) = ··· = 0. In other words, the Taylor series is 0 everywhere, but clearly f(x) ≠ 0 for x ≠ 0. So the series converges only at x = 0. At all other points it predicts the wrong value for f.
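The flatness at 0 is easy to observe numerically (a sketch, not a proof; the sample points are arbitrary):

```python
import math

def f(x):
    # The classic flat function: all derivatives at 0 vanish
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# Every Taylor coefficient at 0 is 0, so the Taylor series "predicts"
# f(x) = 0 everywhere -- yet f itself is strictly positive off 0:
for x in (0.5, 0.1, 0.05):
    print(x, f(x))  # f(x) > 0, while the Taylor series gives 0
```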

The straightforward real-analytic approach to resolving this issue goes through the full formulation of Taylor's theorem with an explicit remainder term[2]:

f(x) = Σⁿf⁽ᵏ⁾(a)(x-a)ᵏ/k! + Rₙ(x),

where Rₙ is the remainder term. To clarify, this is a _truncated_ Taylor expansion containing terms k=0,...,n.

There are several explicit expressions for the remainder term, but one that's useful is

Rₙ(x) = f⁽ⁿ⁺¹⁾(ξ)(x-a)ⁿ⁺¹/(n+1)!,

where ξ is not (a priori) fully known but guaranteed to exist in [min(a,x), max(a,x)] (i.e. the closed interval between a and x).

Let's consider f(x) = cos(x) as an easy example. All derivatives look like ±sin(x) or ±cos(x). This lets us conclude that |f⁽ⁿ⁺¹⁾(ξ)| ≤ 1 for all ξ∈(-∞, ∞). So |Rₙ(x)| ≤ |x-a|ⁿ⁺¹/(n+1)! for all n. Since factorial growth dominates exponential growth, it follows that |Rₙ(x)| → 0 as n → ∞ regardless of which value of a we choose. In other words, we've proved that f(x) - Σⁿf⁽ᵏ⁾(a)(x-a)ᵏ/k! = Rₙ(x) → 0 as n → ∞ for all choices of a. So this is a proof that the value of the Taylor series around any point is in fact cos(x).
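The remainder bound for cos is easy to check numerically (a small sketch; a = 0 and x = 2 are chosen arbitrarily):

```python
import math

def cos_taylor(x, n):
    # Truncated Taylor expansion of cos around a = 0; only even powers
    # survive: cos x = sum over k of (-1)^k x^(2k) / (2k)!, k = 0..n.
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(n + 1))

x = 2.0
for n in (2, 5, 10):
    error = abs(cos_taylor(x, n) - math.cos(x))
    # Lagrange bound with N = 2n (degree of the last included term):
    # |R_N(x)| <= |x|^(N+1) / (N+1)!
    bound = abs(x)**(2*n + 1) / math.factorial(2*n + 1)
    print(n, error, bound, error <= bound)
```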

Similar proofs for sin(x), exp(x), etc are not much more difficult, and it's not hard to turn this into more general arguments for "good" cases. Trying to use the same machinery on the known counterexample exp(-1/x²) is obviously hopeless as we already know the Taylor series converges to the wrong value here, but it can be illustrative to try (it is an exercise in frustration).

A nicer, more intuitive setting for analysis of power series is complex analysis, which provides an easier and more general theory for when a function equals its Taylor series. This nicer setting is probably the reason the topic is mostly glossed over in introductory calculus/real analysis courses. However, it doesn't necessarily give detailed insight into real-analytic oddities like exp(-1/x²) [3].

[0]: For reference, the Taylor series of a function f around a is: Σf⁽ᵏ⁾(a)(x-a)ᵏ/k!. (I use lack of upper index to indicate an infinite series as opposed to a sum with finitely many terms.)

[1]: At x = a, the Taylor series expansion is f(a) = Σⁿf⁽ᵏ⁾(a)(a-a)ᵏ/k! = f(a) + f'(a)·0 + f"(a)·0² + ··· = f(a). All the terms containing (x-a) vanish.

[2]: https://en.wikipedia.org/wiki/Taylor%27s_theorem#Taylor's_th...

[3]: Something very funky goes on with this function as x → 0 in the complex plane, but this is "masked" in the real case. In the complex case, this function is said to have an essential singularity at x = 0.


I don't quite understand your main concern, but it seems others did and have answered. To elaborate on why Cantor's argument doesn't go through for the integers:

Assume you had a list of every integer. To begin with, you couldn't start flipping bits from the left, since integer representations are finite: there is no leftmost digit of an infinite expansion to start from. You could however imagine them having a sequence of 0s extending infinitely to the left, e.g. 101 = ...00000101. If you did that, the diagonal argument is possible to perform only starting from the rightmost digit, and you do end up with some string of digits.

The problem is this: If you do walk right-to-left flipping bits in such a list, the integer case has an "out". You do end up with a string of 0s and 1s, but by assumption it can't have 0s extending infinitely to the left because then it would have been on the list. In other words, the thing you end up with never has its "final" 1, and so what you end up with isn't an integer.

This is really the crux of the argument: Not every string of 0s and 1s actually represents an integer. However, put any string of 0s and 1s after the radix point ("decimal point") and that does represent a real number. So what you construct in the real version of the argument really is a real number not on the list (i.e. a contradiction).

It's instructive to do the same thought experiment with an imaginary list of rational numbers written out as decimals and see why the diagonal argument fails in that case too.

Now I described the above as a kind of procedure ("walk right-to-left") but that's really just for convenience. There isn't any actual iteration going on.


> Not every string of 0s and 1s actually represents an integer.

Is that to say that only the finite ones are? Since an infinitely long string that starts with 1 is infinite in size?

Otherwise I can’t make sense of this statement.


Yes, for instance ...101010 (the pattern 10 repeated infinitely) isn't an integer.

It's maybe easier to think of the argument as working on sequences: From a sequence (x_0, x_1, x_2, ...) with each x_n a binary digit (i.e. 0 or 1) you can always construct a real number x = Sum(x_n / 2^n, n = 0, 1, 2, ...).

The integer you'd want to construct from such a sequence would be Sum(x_n * 2^n, n = 0, 1, 2, ...), but this only works if there's some N such that x_n = 0 for all n > N. Otherwise you would have an "infinite integer" (whatever that means). Valid positive integers are all of the form Sum(x_n * 2^n, n = 0, 1, 2, ..., N), i.e. the binary expansion is finite.

The subtlety of the diagonal argument (why it works for reals but not for integers/rationals) is that while it always produces _some_ string of digits, only the real numbers are defined such that the string of digits is itself always a real number. In the integer/rational case, you can construct a number "outside" the set you started from. (In fact you always do if the original list really was of all integers/rationals. Given any enumeration of the rationals, the diagonal argument can actually be used constructively to produce an irrational number.)
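If it helps, here's a finite-prefix sketch of the diagonal construction on reals in [0, 1) (the digit lists are a made-up toy example, purely illustrative):

```python
# Each "real" is represented by its first 4 binary digits after the
# radix point; a genuine diagonalization would use infinite expansions.
digits = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]

# Flip the k-th digit of the k-th number.
diagonal = [1 - digits[k][k] for k in range(len(digits))]

# Any digit string after the radix point is a valid real number:
x = sum(d / 2**(k + 1) for k, d in enumerate(diagonal))
print(diagonal, x)

# By construction it differs from every listed number at digit k:
for k, row in enumerate(digits):
    assert diagonal[k] != row[k]
```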


This does not – according to what I can see from the code and comments – implement the case of algebraic extensions. As someone only really familiar with the implementation of the transcendental case[1], my understanding is that the algebraic case is where the major difficulty lies.

[1]: Manuel Bronstein, Symbolic Integration I. Online: https://archive.org/details/springer_10.1007-978-3-662-03386...


That's incredibly cool looking!


Indeed. As someone who (in the past) struggled a lot with eating, fasting would not have been a solution - it was the entire problem!

This is the first cookbook I've read that seems to get the effort level vaguely right for a meal you still might be able to make after not eating or drinking anything for 40+ hours. In fact, the "literal depression cooking" meal is close to what most of my meals were, but I rarely even had cheese to add.


To be fair to the author (and I know this is by no means obvious context coming into the video), his channel blew up shortly before this video. His previous audience consisted of people in the Super Mario 64 speedrun/challenge community who would need less explanation.

The thoroughness of this video seems to me to be proof that he wanted to get these newer viewers up to speed. In fact it's basically a love letter to the uninformed viewer. The "complaining" at the start is just lightly poking fun at these newer people while also serving the double purpose of informing them about the explanations in his description boxes. Given the lasting popularity of this video, I don't think too many people took offense.


> In fact it's basically a love letter to the uninformed viewer

Absolutely, and it shows in how many people I've recommended this video to who know nothing about sm64 who absolutely love it. (I also know nothing about sm64 and absolutely love it.)

> The thoroughness of this video seems to me to be proof that he wanted to get these newer viewers up to speed

Yep and he literally says it right in the video, "but maybe what you guys need isn't a paragraph, maybe you guys need an example". He admits his previous attempts were maybe not getting across and adapted. Teaching at its finest. The TJ Henry Yoshi callout was clearly for comedic effect, and if I were TJ I'd be pretty happy to get an answer this thorough, understandable, and entertaining.

edit: listening to it again, even calling it a callout as I did above is kind of incorrect. "Well TJ Henry Yoshi, hear me out." He didn't say "Well TJ Henry Yoshi, this is where you're wrong" or "Well TJ Henry Yoshi, what didn't you understand about the paragraph?", he just asked for a listening ear for another chance to explain his perspective in a different way.


> Ah, matrices! Does * mean dot product, cross product, or the usually less useful matrix multiplication? Ooh, or maybe you should use . for dot! How clever that would be!

Why the snark? The fact that you're free to make a bad choice does not imply that having a free choice must be bad. Obviously neither dot nor cross product should be *. It should be the Hadamard product or matrix multiplication. You can choose one convention for your code and be perfectly happy for it.

As a follow-up question: How do you feel about languages like Fortran and Matlab then? Is it actually a good thing that mathematics convenience features are relegated to a few math-oriented languages and kept away from all the others? (Or are the linear algebra types in these languages offensive as well?)
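Incidentally, Python settled this exact question in PEP 465: * stays elementwise (Hadamard) and @ means matrix multiplication. A toy sketch of that convention (the Mat class here is made up for illustration):

```python
# One convention, two operators: * = Hadamard product, @ = matmul.
class Mat:
    def __init__(self, rows):
        self.rows = rows

    def __mul__(self, other):
        # Elementwise (Hadamard) product
        return Mat([[a * b for a, b in zip(r1, r2)]
                    for r1, r2 in zip(self.rows, other.rows)])

    def __matmul__(self, other):
        # Ordinary matrix multiplication
        return Mat([[sum(r1[k] * other.rows[k][j] for k in range(len(other.rows)))
                     for j in range(len(other.rows[0]))]
                    for r1 in self.rows])

A = Mat([[1, 2], [3, 4]])
B = Mat([[5, 6], [7, 8]])
print((A * B).rows)   # [[5, 12], [21, 32]]
print((A @ B).rows)   # [[19, 22], [43, 50]]
```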


The benefits from operator overloading are "I can show this to a mathematician and it looks like what they're used to". The downsides lurk in the corners of whether it's actually doing what you think.

I'll pass.


Maybe C++26 will let us write operator·() and operator×().


In C++34 we'll finally have a way of overloading the empty string operator, so that we can, at last, write AB for the product of matrices A and B. As God intended.


Stroustrup, B. (1998, April 1) Generalizing Overloading for C++2000. AT&T Labs. https://www.stroustrup.com/whitespace98.pdf


I do my current work (embedded) on an architecture with the following properties:

- 8-bit bytes

- 16-bit aligned accesses to 32-bit types

- 32-bit aligned accesses to 64-bit types

- Struct alignment depends on the size of the struct (32-bit aligned for >= 64-bit structs)

It's a pretty common architecture in the automotive industry, though probably would be considered esoteric for other applications.

This is not the first platform I've encountered with "unnatural" alignment rules in the embedded space, and I'm sure it won't be the last. (The extra packing this allows is actually quite handy.)
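Python's struct module gives a quick feel for what "natural" alignment costs in padding; the numbers below assume a typical x86-64 host (where 32-bit types are 32-bit aligned), not the automotive part described above:

```python
import struct

# A struct of {uint8, uint32}:
# '@' = native size and alignment, '=' = packed (no padding).
print(struct.calcsize('@BI'))  # typically 8: 1 byte + 3 padding + 4
print(struct.calcsize('=BI'))  # 5: no padding
# On the 16-bit-aligned platform above, the same struct would need
# only 1 byte of padding (6 bytes total).
```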

