
A Quirky Function - mathgenius
https://johncarlosbaez.wordpress.com/2016/06/25/a-quirky-function/
======
etatoby
I've always marveled at the exponential function.

Here's a 3D visualization of z=x^y that supports the 0^0 is undefined camp:

[https://s32.postimg.org/v9t4stdl1/x_y.gif](https://s32.postimg.org/v9t4stdl1/x_y.gif)

(Made with Grapher.app and Gimp)

You can clearly see that the surface is tangent to the z (vertical) axis,
which _suggests_ that any z>=0 satisfies z=0^0, hence 0^0 is undefined.

Edit: here's a picture that highlights two important rules:

[https://s31.postimg.org/4kozbjqbf/x_y_lines.png](https://s31.postimg.org/4kozbjqbf/x_y_lines.png)

The red line is the rule 0^y=0 for all y; the purple line is x^0=1 for all x.
It's clear _why_ these two lines would not meet at 0^0. Some will define
0^0=0, others 0^0=1. But the 3D surface shows that it's really a vertical
line.

~~~
dahart
This is a great example. Wikipedia has the same one with isolines
demonstrating this point in the 0^0 subsection of the exponentiation article,
here:

[https://en.m.wikipedia.org/wiki/Exponentiation#Continuous_ex...](https://en.m.wikipedia.org/wiki/Exponentiation#Continuous_exponents)

With the caveat:

"However, under certain conditions, such as when f and g are both analytic
functions and f is positive on the open interval (0, b) for some positive b,
the limit approaching from the right is always 1.[26][27][28]"

Also, definitely take note of Knuth's argument, because while you might feel
strongly that indeterminate is the right answer right now, there are big
implications for allowing other choices. (And NB: indeterminate vs undefined)

"The debate stopped there, apparently with the conclusion that 0^0 should be
undefined.

But no, no, ten thousand times no! Anybody who wants the binomial theorem to
hold for at least one nonnegative integer n must believe that 0^0 = 1, for we
can plug in x = 0 and y=1 to get 1 on the left and 0^0 on the right. The
number of mappings from the empty set to the empty set is 0^0. It _has_ to be
1."

[http://arxiv.org/pdf/math/9205211v1.pdf](http://arxiv.org/pdf/math/9205211v1.pdf)
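Knuth's counting argument is easy to check mechanically. Here's a small Python sketch (mine, not from the paper) that counts the mappings from a set A to a set B by brute-force enumeration:

```python
# There are len(B)**len(A) mappings from A to B: each mapping is a tuple of
# choices from B, one choice per element of A, which is exactly
# itertools.product(B, repeat=len(A)).
from itertools import product

def count_mappings(A, B):
    """Count the functions from A to B by explicit enumeration."""
    return sum(1 for _ in product(B, repeat=len(A)))

print(count_mappings([], []))      # 1 -- the empty mapping, so 0^0 = 1
print(count_mappings([], [1, 2]))  # 1 -- 2^0
print(count_mappings([1, 2], []))  # 0 -- 0^2
```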

------
dahart
[http://www.askamathematician.com/2010/12/q-what-does-00-zero-raised-to-the-zeroth-power-equal-why-do-mathematicians-and-high-school-teachers-disagree/](http://www.askamathematician.com/2010/12/q-what-does-00-zero-raised-to-the-zeroth-power-equal-why-do-mathematicians-and-high-school-teachers-disagree/)

It might be a mistake to try to argue that defining 0^0=1 is the best or most
consistent answer. The problem is that there are conflicts: there is no such
thing as "more consistent", because whichever rule you preserve, another one
breaks. Some math rules lead you to 0^0=0, some to 0^0=1, and some to 0^0
being undefined. So math doesn't have an answer for this; what really happens
is that we decide 0^0=1 by convention. There are other examples of this
happening in math, and it works reasonably well. But let's not pretend it's
"right", let's accept that it's a choice.

We're choosing to say that the rule x^0=1 takes precedence over the rule 0^x
= 0.

Just to play devil's advocate, I like the choice of 0 for intuition
consistency, not math consistency. The positive side limit is 0, so if 0^x is
defined at x=0, how about we _try_ to make the function more continuous,
instead of trying to make it discontinuous?

0^0 is also saying 'start with zero, then multiply it by itself ... Wait, on
second thought, don't start'

There's nothing, because you don't multiply zero by anything. The best number
we have to represent nothing is zero. Saying that not raising zero to a power
is one feels like a random number.

But, I'm wrong. ;)

~~~
lanna
> 0^0 is also saying 'start with zero, then multiply it by itself ... Wait, on
> second thought, don't start'

The same is true for any other integer: "2^0 is also saying 'start with two,
then multiply it by itself ... Wait, on second thought, don't start'" You
don't multiply two by anything.

~~~
dahart
Hehe, yep! x^0 isn't intuitive. Depending on how you think about it, anyway...
some people feel like it is intuitive.

[http://math.stackexchange.com/questions/9703/how-do-i-explain-2-to-the-power-of-zero-equals-1-to-a-child](http://math.stackexchange.com/questions/9703/how-do-i-explain-2-to-the-power-of-zero-equals-1-to-a-child)

The thing is, there is no argument to be had, x^0 is _defined_ to be 1, and
x^1 is _defined_ to be x. Those agree with and make consistent lots of other
rules and limits, but they are choices we made just like saying 0^0=1.

~~~
lanna
x^0 _is_ intuitive, pretty much so:

    
    
        x^0 = x^(y - y) = x^y * x^-y = x^y * 1/x^y = x^y / x^y = 1
    

x^0 isn't _defined_ to be 1, x^0 _IS_ 1.

x^1 isn't _defined_ to be x, x^1 _IS_ x.

The situation is analogous to multiplication. a X b is defined as a+a+a+... b
times. If you see a problem with x^0 and x^1, you should also see a problem
with a X 0 and a X 1.

x^0 is 1 because 1 is multiplication's identity element, just like a X 0 is
zero because zero is addition's identity element:
[https://en.wikipedia.org/wiki/Identity_element](https://en.wikipedia.org/wiki/Identity_element)

~~~
dahart
> x^0 _is_ intuitive

So you say. If it's plainly obvious, why do I get so many results for the
Google search "why is x^0 1"?

I understand the multiplicative identity argument, but that's a technical
explanation, not an intuitive one.

What's the intuitive reason that multiplying something zero times should equal
one? If I don't multiply, then I don't have an answer, and zero's closer to
nothing than one is. Why should I use the multiplicative identity if I
_don't_ multiply? Why would that make sense?

Using the multiplicative identity is yet another choice, not an intrinsic
property of numbers. It's a very good choice, and there are a lot of reasons
why it's a good choice.

[https://en.m.wikipedia.org/wiki/Empty_product](https://en.m.wikipedia.org/wiki/Empty_product)

"In mathematics, an empty product, or nullary product, is the result of
multiplying no factors. It is by convention equal to the multiplicative
identity 1 (assuming there is an identity for the multiplication operation in
question), just as the empty sum—the result of adding no numbers—is by
convention zero, or the additive identity.[1][2][3]"

> x^0 isn't _defined_ to be 1, x^0 IS 1.

Are you sure? How can you show that? How do you define what an exponent is
without defining what x^0 and x^1 are? What does it mean to suggest that x^0
is intrinsically 1? Are you absolutely certain that you just aren't so
comfortable with the idea that you can't imagine other possibilities?

[https://en.m.wikipedia.org/wiki/Exponentiation](https://en.m.wikipedia.org/wiki/Exponentiation)

"Formally, powers with positive integer exponents may be defined by the
initial condition

b^1 = b

and the recurrence relation

b^(n+1) = b^n * b"
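To make the role of the base case concrete, here's a minimal Python version of that recurrence (my sketch, not from Wikipedia). The recursion has to bottom out somewhere, and that line is a choice:

```python
def power(b, n):
    """b^n for nonnegative integer n, from the recurrence b^(n+1) = b^n * b.
    The base case below is a *choice*: the formal definition quoted above
    starts at b^1 = b, while choosing b^0 = 1 (the empty product) is what
    extends the recursion to n = 0 -- and makes power(0, 0) == 1."""
    if n == 0:
        return 1  # the convention under debate, as a single line of code
    return power(b, n - 1) * b

print(power(2, 10))  # 1024
print(power(0, 0))   # 1, purely because of the chosen base case
```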

~~~
lanna
It all follows directly from the very definition of the exponentiation
operation itself. You can go work the math yourself, I'm not here to educate
illiterate people.

~~~
dahart
Hahahaha! That's an extra unfortunate choice of insult, it really doesn't look
good for you. But I hope it made you feel better. Look, I'm honestly sorry if
I offended you along the way, that wasn't my intent. If it was my remark about
imagination, it might have been poorly worded on my part, but that wasn't
directed at you personally or meant as an insult -- it's actually really
difficult for all people to see how certain simple ideas were constructed,
when you've known them your whole life. You know what an exponent is so well
and so thoroughly, you might not have a strong grasp on how it was defined and
developed through history. I don't. Another example: it's _very_ hard to
imagine life without 0, but 0 didn't always exist, the symbol 0 was given a
definition, and we still haven't fully resolved how to use it in all cases.

Now, since I just quoted the definition of exponentiation itself from
Wikipedia and the formal definition includes the base case b^1 = b, I think
you've completely failed to make an educated point to go along with your
insult. I didn't ask for you to define exponentiation so you could educate me,
I asked so that you could think carefully and tell me _if_ you can define what
exponentiation is without using the base case. Were you to actually try, you
may find it difficult. Or not, I might be wrong, so feel free to prove me
wrong or cite a source that proves me wrong, if you want. As it stands, my
takeaway for now is that your insult is a substitute for the argument you
don't have, so you're forfeiting your position and handing me a walkover.

------
mjw_byrne
This is an interesting discussion about why it isn't possible to define a
"good" value for 0^0, i.e. a value which works in all cases, but the original
riddle doesn't really work. "Nice" operations like addition, multiplication
and exponentiation all preserve continuity and the function we're given is
discontinuous. The given solution contains a hidden special case, rolled into
the author's proposed definition for x^0.

From a formal mathematical standpoint, you define things in certain ways and
then work with the definitions. It's not like there is a "correct" value for
0^0 out there that we can mount a search for.

It's roughly the same with discussion of 0.999... - people still debate
whether it's less than or equal to 1. You just have to use the definitions. In
this case, 0.999... is shorthand for an infinite sum. An infinite sum is the
limit of the sequence of its partial sums (where such exists). In this case
the limit is provably one. So that's the value of 0.999... . If you look at a
series like 1 - 1 + 1 - 1 + ..., its sequence of partial sums is divergent, so
it doesn't have a value and that's that - although that doesn't mean we can't
apply all sorts of alternative definitions to try to work with such a series,
and some of them do in fact assign it a value. Such alternative definitions
can produce really weird-looking results - one of the most striking is the
result that (subject to certain definitions) 1 + 2 + 3 + 4 + ... is -1/12.

------
kens
The last time 0^0 came up on HN, I gave it a lot of thought. My conclusion is
the problem goes away if you think in terms of typed functions. For the
function x^y, where y is an integer, 0^0 is obviously 1. For the entirely
different function x^y, where y is real (or complex), 0^0 is obviously
undefined. The problem is people talk about 0^0 without making it clear which
overloaded exponential function they are talking about.

If you're doing things with x^n (integer n), such as combinatorics or
polynomials, you need to have 0^0=1 or else you have lots of annoying special
cases. e.g. if you have a polynomial sum(an * x^n), you don't want to have it
break at x=0.
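As it happens, Python's `**` operator already takes the 0**0 == 1 convention, so a naive polynomial evaluator works at x=0 with no special case (a quick illustration of the point above, not from the comment itself):

```python
def poly(coeffs, x):
    """Evaluate sum(a_n * x^n) naively. Python defines 0**0 == 1,
    so the constant term survives at x = 0 with no special case."""
    return sum(a * x**n for n, a in enumerate(coeffs))

# p(x) = 3 + 2x + x^2
print(poly([3, 2, 1], 0))  # 3 -- the 3 * 0**0 term; with 0**0 = 0 we'd get 0
print(poly([3, 2, 1], 2))  # 11
```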

But if you're working with real or complex exponents, defining 0^0=1 is just
strange.

~~~
Double_Cast
[Z] is a subset of [Q], which is a subset of [C]. So I don't think the problem
goes away.

If you examine the graph etatoby provided, the limit of 0^0 depends on which
direction it's approached from: 1 from along the real axis; 0 from along the
imaginary axis; and all sorts of [Q] from along a diagonal.

~~~
kmill
For your first sentence, it really depends. If you start with the naturals,
you can close it under addition using a Grothendieck-style construction: take
pairs (n,m) of naturals, and say (n',m') is equivalent to it if n+m'=n'+m.
There is a copy of the naturals embedded in these integers by identifying n
with (n,0).

Similarly, we can take pairs of these integers to close under division to get
the rationals. Let's denote such pairs n/m (m nonzero), so n'/m' is equivalent
when nm'=n'm. The integers are represented as a pair by n/1.

Next we can close the rationals under the property that Cauchy sequences
converge: take the set of all Cauchy sequences of rationals, and say two such
sequences are equivalent if the term-wise difference between the two converges
to the rational 0/1. This gives us the reals, with a rational q being
identified with the sequence (q,q,q,...).

Finally, we can close the reals under being able to have polynomials have
roots. It turns out adding the root of x^2+1 is all you need. The complex
numbers then are the set of all polynomials with real coefficients, where two
polynomials are equivalent if their difference is divisible by x^2+1. The
reals can be identified as a complex number by thinking of r as a constant
polynomial.

All I'm saying is that, while we commonly think of Z as a subset of Q as a
subset of R as a subset of C, this is only after a sequence of closures of
different kinds, each time creating a completely new set which the first
number system is not actually a subset of (though the first number system can
be identified as some subset in the new system).
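The first of those closures is easy to make concrete. Here's a toy Python version (my sketch) of the integers as pairs of naturals, where equality is the relation n+m' == n'+m rather than tuple equality:

```python
class Z:
    """Integers as pairs (n, m) of naturals, read as n - m (Grothendieck-style).
    (n, m) ~ (n', m') iff n + m' == n' + m, so equality uses that relation,
    not componentwise equality of the pairs."""
    def __init__(self, n, m):
        self.n, self.m = n, m
    def __eq__(self, other):
        return self.n + other.m == other.n + self.m
    def __add__(self, other):
        return Z(self.n + other.n, self.m + other.m)

def embed(n):
    """The copy of the naturals inside Z: n is identified with (n, 0)."""
    return Z(n, 0)

print(Z(2, 5) == Z(0, 3))             # True -- both represent -3
print(embed(2) + Z(0, 5) == Z(0, 3))  # True -- 2 + (-5) == -3
```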

With this in mind, it's completely reasonable to define exponentiation of
something to the integer or to the rational as being a completely different
operation from exponentiation to the real or to the complex. In fact, in
Rudin, exponentiation to the real is defined as a limit of exponentiation to
the rationals, which is defined as a positive root of real to the integer.

------
tomp
I studied math (BSc only, not PhD), and IMO that's not _quite_ how math works.

You can define a^x or 0^x by its _algebraic_ properties only for rational
numbers (i.e. fractions). In algebra, it makes sense that 0^0 = 1, but that
won't allow you to plot 0^x as a function.

To plot it as a function, you need to define it for all real numbers, which,
AFAIK, you need to do analytically:

1. First, define e^x as the limit of (1 + x/n)^n as n goes to infinity ( _n_
is a natural number, so this definition uses the standard algebraic
exponentiation).

2. Then, define ln(x) as the inverse of e^x (after proving that e^x is
monotonic and continuous).

3. Then, define a^x as e^(x * ln a).

4. Then, 0^x is defined as the limit of a^x as a -> 0, which is 0 for x > 0.
In the same way, you can see that x^0 is 1 for x > 0. Therefore, there is no
"correct" (analytic) result for 0^0 (i.e. one that would make the function
f(x, y) = x^y continuous at (x, y) = (0, 0)).
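A rough numerical sketch of steps 1, 3, and 4 (mine; it approximates the limit with a large finite n and leans on `math.exp`/`math.log` for the e^x and ln of steps 1-2):

```python
import math

# Step 1: e^x as the limit of (1 + x/n)^n, approximated at a large fixed n.
def exp_limit(x, n=10**7):
    return (1 + x / n) ** n

# Step 3: a^x defined as e^(x * ln a), valid for a > 0.
def power(a, x):
    return math.exp(x * math.log(a))

print(exp_limit(1))                  # ~2.71828...
print(power(2, 0.5), math.sqrt(2))   # both ~1.41421...

# Step 4: 0^x as the limit of a^x as a -> 0+.
# The x > 0 column heads to 0 while the x = 0 column stays pinned at 1.
for a in (1e-2, 1e-4, 1e-8):
    print(power(a, 0.5), power(a, 0.0))
```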

~~~
GFK_of_xmaspast
What's wrong with defining exp(z) in terms of the usual taylor series?

~~~
tomp
Yeah I guess that works as well.

I wonder though how hard it would be to prove basic equalities (e.g. e^(x + y)
= e^x * e^y)... Probably not too hard since that's how you define the complex
exponential function.
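That identity is at least easy to sanity-check numerically against the series definition (a quick sketch, not a proof):

```python
import math

def exp_taylor(x, terms=30):
    """exp(x) via a truncated Taylor series sum(x^n / n!)."""
    return sum(x**n / math.factorial(n) for n in range(terms))

# Checking e^(x+y) == e^x * e^y numerically:
x, y = 0.7, 1.3
print(exp_taylor(x + y))
print(exp_taylor(x) * exp_taylor(y))  # should agree to many digits
```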

------
Double_Cast

      0^0
      (\be.eb)(\sz.z)(\sz.z)
          [b := (\sz.z)]
      (\e.e(\sz.z))(\sz.z)
          [e := (\sz.z)]
      ((\sz.z)(\sz.z))
          [s := (\sz.z)]
      ((\z.z))
    

Lambda Calculus returns (\z.z) AKA the Identity Function. Which kinda sounds
like 1, except 1 is usually represented as (\sz.sz). Which means 0^0 is NaN? I
wonder if division by zero also gives (\z.z), but I don't know what division
in Lambda Calculus looks like.

~~~
kmill
I'm pretty sure (\sz.sz) and (\z.z) are the same function:
(\sz.(\z.z)sz)=(\sz.sz).

It would make sense that 0^0 for church numerals is 1, since this is
exponentiation to the natural (defined as recursive multiplication, with the
base case x^0=1).
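Encoding the Church numerals in Python makes this easy to check (my sketch; Church exponentiation b^e is just e applied to b):

```python
# A Church numeral n takes s and z and applies s to z exactly n times.
zero = lambda s: lambda z: z
two = lambda s: lambda z: s(s(z))

# Church exponentiation: b^e = e applied to b.
power = lambda b: lambda e: e(b)

def to_int(n):
    """Decode a Church numeral by applying the integer successor to 0."""
    return n(lambda k: k + 1)(0)

print(to_int(power(zero)(zero)))  # 1 -- 0^0 for Church numerals
print(to_int(power(two)(two)))    # 4 -- 2^2
print(to_int(power(zero)(two)))   # 0 -- 0^2
```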

------
cousin_it
I like how deeply Baez thinks about the problem, but back when I was trying to
figure it out, my reasoning was much simpler. Basically I just decided that
keeping the binomial theorem true without exceptions was very important, so 0⁰
should be 1 by fiat. (If you set it to any other value x ≠ 1, then (0+1)¹ ≠
0⁰⋅1¹ + 0¹⋅1⁰, because the left side is 1 and the right side is x.)
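That check is mechanical (a quick Python sketch; `**` in Python already follows the 0**0 == 1 convention that keeps the theorem exception-free):

```python
from math import comb

def binomial_expansion(x, y, n):
    """Right-hand side of the binomial theorem: sum of C(n,k) x^k y^(n-k)."""
    return sum(comb(n, k) * x**k * y**(n - k) for k in range(n + 1))

print((0 + 1)**1, binomial_expansion(0, 1, 1))  # 1 1 -- only the 0^0 term survives
print((0 + 1)**5, binomial_expansion(0, 1, 5))  # 1 1
```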

~~~
GFK_of_xmaspast
It's Baez' blog, but the post was by David Tanzer:
[http://www.azimuthproject.org/azimuth/show/David+Tanzer](http://www.azimuthproject.org/azimuth/show/David+Tanzer)

------
amelius
> You can’t use any special functions, including things like sign and step
> functions, which are by definition discontinuous.

So, let's go the other way around. Is it possible to define a step-function in
terms of this result? Can we define functions such as sign(x) and abs(x) in
terms of it?

~~~
gizmo686
We can already define abs(x) as sqrt(x^2) = (x^2)^(1/2).

sign(x) follows naturally as abs(x)/x.

The step function is a bit trickier; we can solve it using 0^0=1.

Begin by noting that, to construct an arbitrary step function, all we need is
a function f such that:

    
    
        f(x) = -1 | x<0
        f(x) = 1  | x>= 0
    

By composing this basic step function, we can construct an arbitrary step
function. We can also note that sign(x) comes very close to this definition.
The only problem is that sign(0)=0.

To work around this, we need a function, g, such that:

    
    
        g(x) = 1 | x=0
        g(x) = 0 | x/=0
    

0^x comes close, but is not defined for x<0. To work around this limitation,
we can take g(x)=0^(x^2).

With this, we find that f(x) = sign(x) + 0^(x^2) satisfies our initial
criteria. By composing functions of the form `a * f(x + b)`, we can construct
an arbitrary step function.
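The construction can be checked directly in Python, whose `**` already uses 0**0 == 1 (my sketch; `sign` here is a helper with the usual sign(0) = 0 convention rather than a closed-form formula):

```python
def sign(x):
    """sign via abs(x)/x, with the convention sign(0) = 0."""
    return abs(x) / x if x != 0 else 0

def f(x):
    """-1 for x < 0, +1 for x >= 0.
    0**(x**2) equals 1 exactly at x = 0 and 0 elsewhere; squaring keeps the
    exponent nonnegative, avoiding the undefined 0**negative case."""
    return sign(x) + 0 ** (x ** 2)

print([f(x) for x in (-2, -0.5, 0, 0.5, 2)])  # [-1.0, -1.0, 1, 1.0, 1.0]
```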

------
joefkelley
Here's Knuth's opinion on 0^0:
[http://arxiv.org/pdf/math/9205211v1.pdf](http://arxiv.org/pdf/math/9205211v1.pdf)
(on page 6)

He makes a few pretty convincing points about why it's useful to define it
equal to 1, while admitting that in some sense, it's not quite as well defined
as say 0+0.

------
d4nte
After reading the problem but before reading the solution, I came up with
0^ln(x+1). Did I do my math correctly, and is this a legitimate alternative
answer?

~~~
jwmerrill
Yes, the results are the same for 0^f(x) where f(x) is any monotonic function
with f(0)=0.

~~~
S4M
f has to be monotonic and increasing, and such that f(x) < 0 if x < 0 and
f(x) > 0 if x > 0 (note the strict inequalities, which make the condition
slightly stronger than monotonic).

------
fisherjeff
> And it equals its reciprocal, because 0^0 = 0^-0 = 1/0^0

Doesn't this already assume that 0^0 evaluates to 1 and not 0?

~~~
qf303rjr3
No - you have the following identity because 0 = -0

    
    
      0^0 = 0^(-0)
    

and by the definition of negative exponents,

    
    
      0^(-0) = 1/(0^0)
    

So whatever number 0^0 is, it is equal to its own reciprocal.

~~~
fisherjeff
Ah, ok. I didn't have an issue with the first half, I just misunderstood: I
thought he was stating that 0^0 == 1/0^0, not deriving it.

------
dahart
Check it out - this has been debated for at least 200 years!

[https://en.m.wikipedia.org/wiki/Exponentiation#History_of_di...](https://en.m.wikipedia.org/wiki/Exponentiation#History_of_differing_points_of_view)

------
S4M
I got another one (a bit more complicated): find a function f that verifies:

- f(x) = 0 for x <= 0

- f(x) = 1 for x >= 1

- f admits a derivative of any order

The last condition rules out functions like f(x) = 0 if x <= 0, x if x in
[0,1], and 1 if x >= 1.

(Note: piecewise defined functions are still OK)

~~~
johnp_
Not a math student, but I tried it anyway:

    
    
      0 for x<=0
      1 for x>=1
      e^(1 - 1/(1 - (x - 1)^2)) else
    

Does this work? Does the last condition mean smoothness (C∞)?

(Not a native english or math speaker, so please forgive my ignorance)

edit: HN ate the link :/ wolframalpha code:

    
    
      Piecewise[{{0, x <= 0}, {1, x >=1}}, {e^(1 - 1/(1 - (x - 1)^2))}]

~~~
S4M
That one will not work because the function is not C∞ [I didn't know how to
make that symbol] at 1. Its first derivative is 0 at 1, but only because the
derivative of 1 - 1/(1 - (x-1)^2) is 0 at 1; that will not be the case for the
second derivative.

If you look at what's going on at 0, the derivative of any order is something
like phi(x) e^(1 - 1/(1 - (x - 1)^2)) where phi is a rational function, and
thus it will always be 0 at 0 (the exponential always beats a rational
function).

so the answer is, similar to what kmill posted:

    
    
        0 for x <= 0
        1 for x >= 1
        1/(1+exp(K(x))) else
    

With K(x) = (x-1/2)/(x(x-1))

(x-1/2 is added to create a change of sign between 0 and 1)

At 0, K(x) -> ∞, so 1/(1+exp(K(x))) -> 0, and at 1, K(x) -> -∞, so
1/(1+exp(K(x))) -> 1.

To see that the function is C∞, you can check that the n-th derivative of
1/(1+exp(K(x))) is of the form phi(x) exp(K(x)) 1/(1+exp(K(x)))^(n+1) where
phi is a rational function, and then verify that the n-th derivative is 0 at
both 0 and 1.
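As a numerical sanity check of that construction (my sketch, with the piecewise branches written out explicitly):

```python
import math

def f(x):
    """Smooth (C-infinity) transition: 0 for x <= 0, 1 for x >= 1, and
    1/(1 + exp(K(x))) in between, with K(x) = (x - 1/2)/(x*(x - 1))."""
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    K = (x - 0.5) / (x * (x - 1))
    return 1.0 / (1.0 + math.exp(K))

print(f(-1), f(0), f(0.5), f(1), f(2))  # 0.0 0.0 0.5 1.0 1.0
print(f(0.01), f(0.99))  # very close to 0 and very close to 1
```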

------
nemo1618
How about 1/(x * ∞)?

Only works if you allow (0 * ∞) = 1, which is admittedly quite shaky.

~~~
majewsky
All arithmetic with +/- ∞ is shaky. But even then, your solution is wrong
because it's defined on negative numbers.

------
myle
I think he is looking for the Dirac delta function.

~~~
lvh
No, this is quite different (and the article tells you what it actually is).
The Dirac delta is a distribution; this is a perfectly well-behaved function.
It's just one that's tricky to write a formula for without using stepwise or
partial functions.

