Hacker News | mturmon's comments

On the contrary, unary numbers are a well-defined concept (http://en.wikipedia.org/wiki/Unary_numeral_system).

Unary numbers are perfectly commensurate with binary, decimal, octal, etc., numbers. In binary, you write

  B2 B1 B0
to mean the number:

  B2 * 2^2 + B1 * 2^1 + B0 * 2^0.
The same thing applies to unary numbers, just note that 1^n = 1 for every n, so writing:

  B2 B1 B0
means:

  B2 * 1^2 + B1 * 1^1 + B0 * 1^0 = B2 + B1 + B0, 
or, in other words, add the tally marks to get the number.
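That positional reading is easy to check mechanically. A quick Python sketch (the function name is mine) evaluates a digit list in any base; plugging in base 1 shows the value collapses to the digit sum, i.e., a tally:

```python
# Interpret a digit list positionally in a given base.
# In base b, digits [d2, d1, d0] mean d2*b^2 + d1*b^1 + d0*b^0.
def positional_value(digits, base):
    total = 0
    for d in digits:
        total = total * base + d  # Horner's rule
    return total

# Base 2: [1, 0, 1] -> 1*4 + 0*2 + 1*1 = 5
assert positional_value([1, 0, 1], 2) == 5

# "Base 1": since 1**k == 1 for every k, the value is just the tally count
assert positional_value([1, 1, 1], 1) == sum([1, 1, 1])
```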

Unary is a well defined concept, but it's not "base 1".

Or if you don't want to argue about the definition of base, at the very least it doesn't have any of the properties other base systems have.

A number in any base system can be thought of as an infinite string of zeros, followed by the digits. "1" is just a convenience, it could also be written "0000000001", or "0000...0001".

And then you can extend it to decimals, and have an infinite string of zeros after the decimal point. You can't have decimals in unary.

In other words it isn't necessary to store any additional information on how many symbols you need. The symbols are fixed, you just choose what state they are in.

Unary "cheats" by using a meaningless symbol and passing all the information in the number of times the symbol is written.

All the algorithms for adding numbers, dividing, subtraction, etc, can be generalized to work in any base. But not unary. Unary requires entirely different algorithms because it's not a base number system.

The formulas for converting between two bases work for any base system except unary. The formulas for measuring the number of digits you need to represent a number work for any base system except unary.

It's just an entirely different system of representing numbers. It's more similar to Roman numerals.


You are assuming the lack of a digit is a digit. Didn't you just make a way to write zero in base 1?

from the "base" definition on wikipedia: "In mathematical numeral systems, the radix or base is the number of unique digits, including zero."


Hmm, the original comment of yours, to which I replied, was rather different than the one there now.

Unary representation differs from larger bases in one very important regard: a base-n representation of a number uses n symbols, and you write the number as the coefficients of a power series in n. The n symbols you use are the natural numbers m such that 0 <= m < n.

But in a unary representation, the only number you could represent that way is 0. Instead, a unary system uses as coefficients the natural numbers m such that 0 < m <= 1. An immediate consequence is that there's no way to represent zero.

(In "unary-style" binary, you'd count like this: 1, 2, 11, 12, 21, 22, 111, 112... . The constraint that none of your coefficients can be zero (because it doesn't exist) makes the representation unique. If you're willing to use infinite representations along with finite ones, you could then represent zero in two's complement as ...111112, that is, one more than -1.)


Yes, thanks for the qualification regarding zero...the extension to the "base-1 case" is not as simple as I made it sound. Point taken.

In the case of porcelain technology, the process you describe went in reverse: http://en.wikipedia.org/wiki/François_Xavier_d%27Entrecolles

Both posts have their merits; the OP is more succinct.

Neat! Don't miss the (heuristic) argument in the comment by Tim Gowers about why 82000 was lucky, and that there may not be any other larger number with the same properties.

[deleted]

The argument isn't saying that arithmetic is random. It's saying that a lot of properties of number systems behave as if they are random.

A good example is finding arithmetic sequences of length K in the prime numbers (for example, the sequence 3, 5, 7 is an arithmetic sequence of length 3, as is 17, 23, 29). It can be shown that random sets that have a density similar to the primes (i.e. the chance that N is in the set is proportional to 1 / log(N)) have arbitrarily long arithmetic sequences - as, in fact, do the prime numbers.
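The length-3 case is small enough to enumerate by brute force. A quick Python sketch (names are mine, not from any particular paper): sieve the primes below a bound, then scan for triples in arithmetic progression:

```python
# Find length-3 arithmetic progressions among the primes below a bound.
def primes_below(n):
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_below(30)
prime_set = set(ps)
aps = [(p, p + d, p + 2 * d)
       for p in ps for d in range(2, 15, 2)   # even gaps: odd primes differ by evens
       if p + d in prime_set and p + 2 * d in prime_set]
print(aps)  # includes (3, 5, 7) and (17, 23, 29) from the comment above
```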


"a lot of properties of number systems behave as if they are random"

Choosing a weird base (and yes, base ten is weird) certainly contributes to this effect.


This looks like an interesting perspective. Do you have any pointers to discussions of topics that become more intuitive/clearer when thought about in other bases ?

The whole topic of "why does this random-looking (in base 10) number suddenly behave strangely?" doesn't exist in e.g. base 2.

The lead developer is on the Apple AppKit team.

In this and everything pop, Sturgeon's law applies (http://en.wikipedia.org/wiki/Sturgeon's_law).

We're all looking for that other 10%.


I agree. It's the problem when the other side asks the questions and edits the responses down.

Hersh was clearly annoyed that the interviewer wanted to talk about why the article did not appear in the New Yorker. The interviewer's agenda seemed clear: if the New Yorker did not see fit to print it, then I can safely ignore it. (People really do think like this, of course.)

Why would a writer respect such an agenda? So Hersh lets him have it.

Then the interviewer appears to back off that angle ("Can I tell you why?") and gives a different reason: he's interested in the publication venue because it would indicate a bias in the American press. Hersh says that's interesting.

At the very end, the interviewer seems like he's trying to catch Hersh up in some kind of logical error regarding the venue question ("I feel like you are telling me two different things.") And then it's over.

There's just nothing there. The interviewer asks questions, with a transparent agenda, that a grouchy reporter has heard before. They are kind of second-order to the actual story (i.e., venue). Hersh gets annoyed and hangs up.


I'm puzzled by this comment. Are you talking about GISS, the small Columbia-U. adjacent NASA institute that specializes in climate? Because there is no "climate science division" at NASA -- that activity is spread out over several centers, besides GISS, most prominently at GSFC (Maryland) and, to some extent, JPL (California). The post here, for instance, was released by JPL. Additional climate work goes on at NOAA, NCAR, etc.

I'm also puzzled by the "run by Congress"/"little autonomy" comment, which is not true.


The two essays of Tony Judt regarding trains mentioned in the OP are:

http://www.nybooks.com/articles/archives/2010/dec/23/glory-r...

http://www.nybooks.com/articles/archives/2011/jan/13/bring-b...

These were published a few months after his premature death in 2010. Judt was a prominent American/European historian and intellectual who wrote a series of serious, elegant, and spirited essays for the NY Review, all of which can be recommended.


One of my favorite C.S. professors, in a first-year grad class, gave out 3 of these problems as homework, including the first one, without indicating whether it was easy or hard.

The problem was posed as a while loop, and the question was, does the loop terminate for arbitrary n>0, so it seemed very tractable:

  given: n > 0.
  while n != 1:
    if n is odd:
      n = 3n + 1
    else:
      n = n/2
It was a tease. I tried a couple of different ways to prove it, but of course nothing worked. Then I tried to find a counter-example. The numbers get pretty big, in some cases, because applying

  n = 3n + 1 
several times in a row will blow "n" up, despite the intervening n = n/2. This was before arbitrary-precision arithmetic was a thing, so the only tool I knew that could do this was the Unix "dc". I think I was able to get up to n=10K or 50K on our campus VAX.
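These days the same search is a few lines in any language with big integers. A quick Python sketch (function name is mine; Python's ints are arbitrary-precision, so no "dc" needed):

```python
# Count iterations of the 3n+1 map until n reaches 1.
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Every n in 2..50000 reaches 1; this finishes in well under a second now,
# versus the campus-VAX effort described above.
assert all(collatz_steps(n) > 0 for n in range(2, 50001))
print(collatz_steps(27))  # 27 famously takes 111 steps
```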

That was about 30 years ago. I still remember it fondly.


This is a pretty good example of how far computing hardware has come. I was able to eliminate 2 to 50k in 81 ms in completely unoptimized C# just now. Makes me wonder who has taken this approach the farthest these days.

> ... who has taken this approach the farthest these days.

Tomás Oliveira e Silva of the Universidade de Aveiro, in Portugal, apparently. According to a page on his personal website [http://sweet.ua.pt/tos/3x+1.html] he has verified it up to 5 x 2^60.


My prof was also more impressed by my implementing it than by my attempted proofs. (Not the first time this has happened!)

I just implemented it too. I notice that by 100K, you get an intermediate result of size just less than 2^31, but at 1M, you get an intermediate result of size about 2^36. Maybe I went up above 100K.
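The peak intermediate values are easy to measure directly. A quick Python sketch (function name is mine) tracks the largest value each trajectory reaches:

```python
# Track the largest intermediate value reached while iterating the 3n+1 map.
def collatz_peak(n):
    peak = n
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        peak = max(peak, n)
    return peak

print(collatz_peak(27))  # peaks at 9232, far above its starting point

# Maximum over all starting points up to 100K: per the observation above,
# this is just under 2**31 (i.e., 31 bits)
print(max(collatz_peak(n) for n in range(2, 100001)).bit_length())
```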

For more: http://en.wikipedia.org/wiki/Collatz_conjecture#Experimental..., which includes some very nice plots, and a fractal construction when iterating an extension of the map to the complex plane. Whew! (On the other hand, https://xkcd.com/710/)


Now I'm curious, is it guaranteed to terminate? Is there a name for this problem?

It needs to land on a power of 2 at some point.


> Now I'm curious, is it guaranteed to terminate?

No one knows.

> Is there a name for this problem?

The statement that it always terminates is the Collatz Conjecture. It's the first problem in the posted article.


Just to clarify why: if the result ever reaches one (the loop's exit condition), the sequence would be about to repeat 1, 4, 2 forever if the loop didn't exit. So if you could prove the loop never exits (i.e., never reaches one) for some n, you would have proved that, for that n, the sequence never enters the 1, 4, 2 repetition pattern even without the stop condition.
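The 1, 4, 2 cycle is easy to see by iterating the map from 1 without any stop condition (a quick Python sketch):

```python
# One step of the 3n+1 map.
def step(n):
    return 3 * n + 1 if n % 2 else n // 2

# Starting from 1 with no exit condition, the map cycles 1 -> 4 -> 2 -> 1
n, seen = 1, []
for _ in range(6):
    n = step(n)
    seen.append(n)
print(seen)  # [4, 2, 1, 4, 2, 1]
```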

It's a clever way to re-state the problem to look more computer-science-y.


As some people have already said, this is the Collatz conjecture, aka the 3n+1 problem, aka the half-or-triple-plus-one (HOTPO) problem. The fact that we don't know whether the algorithm always lands on a power of two shows that it's quite a difficult problem. The famous mathematician Erdős said that mathematics is not yet ready for such problems.

There are heuristic arguments that the algorithm should terminate, but there are numbers for which convergence takes an arbitrarily long time. Furthermore, there are similar problems that have divergent orbits or cycles. What makes matters worse is that such problems have been proven to be undecidable in general.


It's the Collatz Conjecture. [0]

[0] https://en.wikipedia.org/wiki/Collatz_conjecture
