Unary is a well defined concept, but it's not "base 1".
Or if you don't want to argue about the definition of base, at the very least it doesn't have any of the properties other base systems have.
A number in any base system can be thought of as an infinite string of zeros, followed by the digits. "1" is just a convenience, it could also be written "0000000001", or "0000...0001".
And then you can extend it to decimals, and have an infinite string of zeros after the decimal point. You can't have decimals in unary.
In other words, it isn't necessary to store any additional information about how many symbols you need. The symbols are fixed; you just choose which state each one is in.
Unary "cheats" by using a meaningless symbol and passing all the information in the number of times the symbol is written.
All the algorithms for addition, subtraction, division, etc., can be generalized to work in any base. But not unary: unary requires entirely different algorithms because it's not a base number system.
The formulas for converting between two bases work for any base system except unary. The formulas for measuring the number of digits you need to represent a number work for any base system except unary.
It's just an entirely different system of representing numbers. It's more similar to Roman numerals.
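To make the "same algorithm for every base" point concrete, here's a minimal Python sketch (my own illustration, not from the thread): the standard digit-extraction loop works for any base b >= 2, but plugging in b = 1 would loop forever, since n % 1 is always 0 and n // 1 never shrinks n.

```python
def to_base(n, b):
    """Digits of n in base b, most significant first.

    Assumes b >= 2; with b == 1 the loop below would never
    terminate, because n % 1 == 0 and n // 1 == n.
    """
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % b)  # least significant digit
        n //= b
    return digits[::-1]
```

The digit-count formula has the same blind spot: floor(log_b(n)) + 1 works for any b >= 2 but is undefined for b = 1.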
Unary representation differs from better bases in one very important regard: a base-n representation of a number uses n symbols, and you write the number as the coefficients of a power series in n. The n symbols you use are the natural numbers m such that 0 <= m < n.
But in a unary representation, the only number you could represent that way is 0. Instead, a unary system uses as coefficients the natural numbers m such that 0 < m <= 1. An immediate consequence is that there's no way to represent zero.
(In "unary-style" binary, you'd count like this: 1, 2, 11, 12, 21, 22, 111, 112, ... The constraint that none of your coefficients can be zero (because the digit zero doesn't exist) makes the representation unique. If you're willing to use infinite representations along with finite ones, you could then represent zero in two's complement as ...111112, that is, one more than -1.)
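The "unary-style" counting described above is usually called bijective numeration, and the digit-extraction loop needs only a small twist: when the remainder would be 0, use the digit k instead and borrow. A Python sketch (the function name is mine):

```python
def bijective(n, k=2):
    """Bijective base-k representation of n (digits run 1..k, never 0)."""
    digits = []
    while n > 0:
        d = n % k
        if d == 0:
            d = k            # no zero digit: use k and borrow from n
        digits.append(d)
        n = (n - d) // k
    return ''.join(str(d) for d in reversed(digits))
```

Counting from 1 with k = 2 gives 1, 2, 11, 12, 21, 22, 111, 112, ... exactly as above; with k = 1 the same loop produces 1, 11, 111, ..., which is ordinary unary. And n = 0 comes out as the empty string, matching the point that zero has no representation.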
The argument isn't saying that arithmetic is random. It's saying that a lot of properties of number systems behave as if they are random.
A good example is finding arithmetic sequences of length K in the prime numbers (for example, the sequence 3, 5, 7 is an arithmetic sequence of length 3, as is 17, 23, 29). It can be shown that random sets with a density similar to the primes (i.e., the chance that N is in the set is proportional to 1 / log(N)) almost surely contain arbitrarily long arithmetic sequences - as, in fact, do the prime numbers (that's the Green-Tao theorem).
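A brute-force Python sketch of the length-K search described above (the sieve and search are my own illustration of the definition, not how such results are proved):

```python
def primes_upto(limit):
    """Simple sieve of Eratosthenes; primes strictly below limit."""
    sieve = [True] * limit
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

def prime_arithmetic_runs(limit, k):
    """All arithmetic sequences of k primes, all below limit."""
    ps = primes_upto(limit)
    pset = set(ps)
    runs = []
    for p in ps:
        # common difference d, bounded so the last term stays below limit
        for d in range(1, (limit - 1 - p) // (k - 1) + 1):
            if all(p + i * d in pset for i in range(1, k)):
                runs.append([p + i * d for i in range(k)])
    return runs
```

Running it with limit = 100 and k = 3 turns up both examples from the comment, 3, 5, 7 and 17, 23, 29, among many others.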
I agree. It's the problem when the other side asks the questions and edits the responses down.
Hersh was clearly annoyed that the interviewer wanted to talk about why the article did not appear in the New Yorker. The interviewer's agenda seemed clear: if the New Yorker did not see fit to print it, then I can safely ignore it. (People really do think like this, of course.)
Why would a writer respect such an agenda? So Hersh lets him have it.
Then the interviewer appears to back off that angle ("Can I tell you why?") and gives a different reason: he says he's interested in the publication venue because it would indicate a bias in the American press. Hersh says that's interesting.
At the very end, the interviewer seems like he's trying to catch Hersh out in some kind of logical error regarding the venue question ("I feel like you are telling me two different things.") And then it's over.
There's just nothing there. The interviewer asks questions, with a transparent agenda, that a grouchy reporter has heard before. They are kind of second-order to the actual story (i.e., venue). Hersh gets annoyed and hangs up.
I'm puzzled by this comment. Are you talking about GISS, the small Columbia-U. adjacent NASA institute that specializes in climate? Because there is no "climate science division" at NASA -- that activity is spread out over several centers, besides GISS, most prominently at GSFC (Maryland) and, to some extent, JPL (California). The post here, for instance, was released by JPL. Additional climate work goes on at NOAA, NCAR, etc.
I'm also puzzled by the "run by Congress"/"little autonomy" comment, which is not true.
These were published a few months after his premature death in 2010. Judt was a prominent American/European historian and intellectual who wrote a series of serious, elegant, and spirited essays for the NY Review, all of which can be recommended.
One of my favorite C.S. professors, in a first-year grad class, gave out 3 of these problems as homework, including the first one, without indicating whether it was easy or hard.
The problem was posed as a while loop, and the question was, does the loop terminate for arbitrary n>0, so it seemed very tractable:
given: n > 0
while n != 1:
    if n is odd:
        n = 3n + 1
    n = n/2
It was a tease. I tried a couple of different ways to prove it, but of course nothing worked. Then I tried to find a counter-example. The numbers get pretty big, in some cases, because applying
n = 3n + 1
several times in a row will blow "n" up, despite the intervening n = n/2. This was before arbitrary-precision arithmetic was built into everyday languages, so the only tool I knew that could do this was the Unix "dc". I think I was able to get up to n=10K or 50K on our campus VAX.
That was about 30 years ago. I still remember it fondly.
This is a pretty good example of how far computing hardware has come. I was able to eliminate 2 to 50k in 81 ms in completely unoptimized C# just now. Makes me wonder who has taken this approach the farthest these days.
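For anyone who wants to reproduce the check, here's a minimal Python sketch of the same brute force, mirroring the loop from the grandparent comment (the step cap is my own safety valve, since without the conjecture we can't promise termination):

```python
def reaches_one(n, max_steps=10_000):
    """Run the 3n+1 loop from the grandparent comment; True if it hits 1."""
    steps = 0
    while n != 1:
        if n % 2 == 1:
            n = 3 * n + 1
        n //= 2          # 3n+1 is even when n is odd, so this stays exact
        steps += 1
        if steps > max_steps:
            return False  # gave up; would need closer investigation
    return True

# every starting value from 2 to 50,000 reaches 1
assert all(reaches_one(n) for n in range(2, 50_001))
```

On current hardware this entire sweep takes well under a second even in interpreted Python, which rather underlines the point about progress.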
Just to clarify why: if the result is ever one (the loop's exit condition), then the loop would be about to get stuck repeating 1, 4, 2 forever if it didn't exit. So if you can prove the loop never exits (i.e., never reaches one) for some n, you've proved that, for that n, it would never reach the 1, 4, 2 repetition pattern even without the stop condition.
It's a clever way to re-state the problem to look more computer-science-y.
As some people have already said, this is the Collatz conjecture, aka the 3n+1 problem, aka the half-or-triple-plus-one (HOTPO) problem. The fact that we don't know whether the algorithm always lands on a power of two shows that it's quite a difficult problem. The famous mathematician Paul Erdős said that mathematics is not yet ready for such problems.
There are heuristic arguments that the algorithm should terminate, but there are numbers for which convergence takes arbitrarily long. Furthermore, there are similar problems that diverge or fall into loops. What makes matters worse is that it's been proven that such problems are undecidable in general.