so instead of being a tool which Google has already removed support for, it's in the bucket of tools which - at least to me - should be treated as liable to be cut at any time and not relied upon.
sorry, Google lost any credibility in this area over the Reader fiasco - Google has no interest in maintaining a project for the good of a community if it doesn't align with corporate strategy. 8 years of history means nothing.
"... it's in the bucket of tools which - at least to me - should be treated as liable to be cut at any time and not relied upon."
Unless you've paid for a guaranteed period of support, bug fixes and extensions of functionality, this is surely how you should treat all code libraries? All libs come as-is, and statements of future actions are always expressions of intent, not fact. If your company is utterly dependent on a single free third-party library then you should be prepared to take over ownership of it - or accept that your company's survival is at the whim of another. Personally, I would not be swayed by any brand name here - but who would you trust (enough to bet your company on) to provide long-term support for a free OSS library?
Personally, I would consider the closing down of a very niche, free, consumer product based on a seemingly dying web technology as a very poor guide towards how Google will treat GWT, which is actually used by them for website development.
I think the answer might be that, for now, it's of mathematical interest, but no one really knows what it might or might not turn into. I think the more of these sextuplet, twin, etc. primes we discover, the closer we may be getting to a grand theory of primes and to solving difficult problems like the twin prime conjecture and whatever else is out there. And if we could do that, then that is where practical ramifications might start to emerge. So for now I see it as just more data that might or might not help spark something in the mind(s) of some mathematical genius(es) one day.
I am in no way an expert and this is just amateur speculation, but I wanted to write it anyway because I was (am) wondering the same thing.
Mathematician here, have to disagree. Knowledge about individual primes (or n-tuples) is pretty much useless for understanding the big picture of them. And if things like "solving the twin prime conjecture" had practical applications, we wouldn't need to solve them to reap them: if there's a crypto-technique that hinges on TPC being true, you can start using it today, and if it somehow breaks, congratulations, you disproved the conjecture.
I absolutely agree with that. But there can be exceptions. For example the ternary Goldbach conjecture boiled down in the end (after a lot of analytic "pencil sharpening" as the author called it) to an exhaustive computer search up to some limit.
Of course that was an exceptional case. There were some very committed computational people who had been working on related stuff for years prior who just happened to have the code that could just about reach the required limit with sufficient hardware, which we just happened to have sitting around in between working on projects paid for by our grant.
By the time the real theoretical work was done with pencil and paper, the computational task was just completing, meaning the result could be stated unconditionally.
Whether you would say that the computer search allowed us to learn anything mathematically useful, though, is a matter of perspective. Mathematicians care much more about techniques than about knowing long lists of otherwise random-looking numbers, even if conjectures about such lists do motivate the search for new techniques, and even if there is some prestige in settling a long-lived conjecture.
*"any odd integer > 5 can be written as the sum of three primes"*
is at best a tiny bit better than the
*"any odd integer > exp(3100) can be written as the sum of three primes"*
that we had before, or even than the still weaker
*"any sufficiently large odd integer can be written as the sum of three primes"*
That would change (for me) if we ever link that magical constant 5 to another magic constant 5 in math, or if we create a series of related constants (sums of 4 primes? Sums of three numbers of the form pq with p and q prime? Who knows?) and show a pattern in them.
That, I think, is another good reason to try to pinpoint the exact values of these limits. The values themselves are dull, but knowing them may give mathematicians ideas about why they have the values they have. It may often be (relatively) dumb work, but it is still worth doing.
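For a sense of what the computational side of such a verification looks like, here's a toy sketch in Python. To be clear, this is my own illustration, not the actual code behind the ternary Goldbach result, which ran to astronomically larger bounds with far more sophisticated methods; it just brute-force checks that every odd number up to a small limit is a sum of three primes.

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, is_p in enumerate(sieve) if is_p]


def three_prime_sum(n, primes, prime_set):
    """Return (p, q, r) with p <= q <= r all prime and p+q+r == n, or None."""
    for p in primes:
        if 3 * p > n:
            break
        for q in primes:
            if q < p:
                continue
            r = n - p - q
            if r < q:
                break
            if r in prime_set:
                return (p, q, r)
    return None


LIMIT = 10_000
primes = primes_up_to(LIMIT)
prime_set = set(primes)

# Exhaustively confirm the conjecture for every odd n in (5, LIMIT).
assert all(three_prime_sum(n, primes, prime_set) for n in range(7, LIMIT, 2))
print(three_prime_sum(2015, primes, prime_set))  # (2, 2, 2011)
```

The real verification had to reach roughly exp(3100)'s much-reduced successor bound (around 10^27 in the final work), which is why it needed years of committed computational effort rather than a twenty-line script.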
Looking at repeat donations prompted us to ask: do people donate more or less
their second time? On average, the answer is roughly 50% more. While first
donations had a mean of $88 and a median of *$30*, repeat donations had a mean
of $114 and a median of *$50*.
Average doesn’t mean typical, however. If you look at each repeat donor one by one,
it turns out they’re split almost exactly into thirds: 33% donate less the second time
(most commonly half), 35% donate more (most commonly double), and 32% donate
exactly the same. The averages get pushed up because doubling (and the
occasional tripling or even quadrupling) makes a bigger difference overall
than halving does.
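The arithmetic behind that last point is easy to check with made-up numbers (these are illustrative, not the actual donation data): with equal numbers of halvers and doublers, doubling adds a whole donation while halving only removes half of one, so the mean rises even though the typical donor is unchanged.

```python
import statistics

# Hypothetical donors: everyone's first donation is $30, and repeat
# donations split into thirds as described above.
first = [30] * 9
second = [15, 15, 15,   # a third halve
          30, 30, 30,   # a third give the same
          60, 60, 60]   # a third double

print(statistics.mean(first))     # 30
print(statistics.mean(second))    # 35: each +30 outweighs each -15
print(statistics.median(second))  # 30: the typical donor is unchanged
```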
That's odd. Wouldn't you expect the median to stay about the same if 1/3 donated less, 1/3 donated the same, and 1/3 donated more?
Not only if they cross the median; the median might cross them. For example, what if the only people to come back and donate a second time were above the median in their first donation? Then even if they all donated the same amount the second time, the median of second donations would still go up.
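A quick sketch of that selection effect, with hypothetical numbers: if only the above-median first-time donors come back and each gives exactly what they gave before, the median of second donations still rises.

```python
import statistics

# Hypothetical first donations; median is 40.
first = [10, 20, 30, 40, 50, 60, 70]

# Suppose only above-median donors return, each giving the same again.
second = [x for x in first if x > statistics.median(first)]

print(statistics.median(first))   # 40
print(statistics.median(second))  # 60: rose with no donor changing amount
```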
I have a script that grows a big Perl hash. It fills up memory and starts paging, and the application (almost) grinds to a halt. If I tie the hash to a Berkeley DB file so it writes to disk, performance is about a third of the in-memory hash, BUT it doesn't slow down when the hash gets too big for memory.
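The Python standard library has a rough analogue of tying a Perl hash to a DB file (sketched here in Python since I don't have the original Perl): the `shelve` module backs a dict-like object with an on-disk dbm database, trading per-lookup speed for freedom from the paging cliff.

```python
import os
import shelve
import tempfile

# A dict-like object backed by an on-disk dbm database instead of RAM,
# roughly what Perl's tie to a DB file achieves. Each access costs a
# disk operation, but the structure no longer has to fit in memory.
path = os.path.join(tempfile.gettempdir(), "bigmap_demo")
with shelve.open(path, flag="n") as counts:  # "n": always start fresh
    for word in ["a", "b", "a", "c", "a"]:
        counts[word] = counts.get(word, 0) + 1
    print(sorted(counts.items()))  # [('a', 3), ('b', 1), ('c', 1)]
```

The same "about a third the speed, but no cliff" trade-off tends to show up here too, and writeback caching (`shelve.open(..., writeback=True)`) can claw some speed back at the cost of memory.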
It sounds from the language used ("he did not want his vital functions supported any further but should be allowed to cease functioning and promptly be cryopreserved") that he didn't regard this moment as death at all - merely pausing his life until a) ALS is curable and b) we have the technology to reawaken cryopreserved people.
To him, all that was worth announcing here is the cryopreservation.
Personally, I think cryonics is a pipe dream right now. But I completely respect and empathize with the wishes of Hal Finney to put hope into a future life, possibly without the burden of ALS that he was forced to deal with for a significant portion of his life.
If he had the means to do it, and it gave him something while he was still alive, then I would say it was worth it to him.
You're probably right, but I just thought about it from the point of view of someone dying from a (currently) incurable illness.
On one hand, you can die, and that will probably be it for you.
On the other hand, you can become cryogenically preserved. In that case, there are two outcomes I can see. One, you're never revived (which is functionally equivalent to death). The other, you awake (in what feels like an instant) in a world where you can continue living. That certainly makes it seem tempting for me.
It seems strange to me to place him into "cold storage" after he was completely ravaged by ALS and had died. Wouldn't there be more hope of recovery if he had been placed into a frozen state before his body died? No doubt a dicey area of law, but surely a treatment for ALS will be reached for those still living before we find one that can help those who have already died.
This almost exists already, in a slightly different form - it's the same principle as the unsolicited-goods laws codified in, e.g., the UK's Consumer Protection Regulations: if I send stuff to someone without it being asked for, they legally own it and I can't insist on payment.
Same thing now applies to bundling unwanted extra crap in with a real order - it's just the extra stuff which may not be charged for.
Agreed. Related to this, there's a school of thought which says our ability to break laws that don't yet reflect updated social mores is key to keeping society from stagnating, because it is what forces change in those laws.
Suppose the 1960s civil rights campaigners had had their protests and civil disobedience clamped down on with the ruthless efficiency you could imagine policing acquiring in a few decades: if the firehose-wielding cops had known with perfect intelligence who was planning to break the law, and had been able to easily crush any nascent rebellion against the laws of the time (laws we now recognise to be wrong), how easily would we have seen change?
And you don't think the dissidents would use technology to circumvent the law-enforcement bots? You don't seem to appreciate how brittle, rigid and flawed most (or all) of these algorithms are. With the right knowledge you can get around pretty much any technical system. And a single person with that knowledge, and the ability to apply it, is a danger to the bloated, slow government(s) that might seek to use such a system.