The IceCube Neutrino Detector at the South Pole Hits Paydirt (ieee.org)
221 points by amynordrum 70 days ago | 104 comments



My coworker went down to the South Pole to install Debian on the IceCube cluster (funny story: it's hard to cool servers there because the air is really dry) ~14 years ago. Basically, a 48 hour trip both ways just to insert a CD and press reset. We were all jealous.


> just to insert a CD and press reset

That may have been all he did that time, but imagine if stuff had gone sideways, he'd have to troubleshoot it from Antarctica! I thought working from home was bad...


Yes, that's why he was sent. In case something went wrong, he was qualified to fix it in production. I've come to appreciate people who have jobs where they don't do much most of the time, but when the shit hits the fan, they do exactly what is required.


Pilots being another good example.

Years ago, I interviewed for a couple of positions with Schlumberger which, at the time, basically went out to oil wells and made various measurements. In a lot of ways, it was essentially a tech job but the reason you had engineers (assisted by techs) was that you had various training and, presumably, other background for when things went sideways.


Fun story, though off topic: Schlumberger, having people all over the world in dangerous places such as oil platforms (but also in offices), thinks that physical safety for their employees is very important. They iterated on this so well for so many decades that the primary cause of employee death is now traffic accidents, involving employees thousands of miles away from oil wells, on their way to the office.

Problem solved? Of course not. Employee safety is still a core KPI, after all. So to address this problem, they installed a device in every company-issued car which makes a BEEP! sound every time you drive aggressively (as judged by the device). After three BEEP!s you have to explain yourself to your boss.


I do not want a company car that much...


Not only that, but not wearing a seatbelt (even as a passenger) can get you fired on the spot!

I was extremely impressed with the safety-first culture when I interned at Schlumberger a few years back. Your coworkers will yell at you if you simply step into a work area without proper PPE (hardhat, safety boots, etc.).


Seems like your standard tachograph with an alerting system; a lot of companies have company cars with something similar.


Power utility system operators, and other operators, are prime examples. Usually they just sit around and play farmville. But when it hits, they're the guys making complex decisions that affect when your power turns back on, and therefore how much money the company is losing until that happens.


Sometimes I wonder about an operator-type job like that, or security, or stuff like that - you know, sit around all day / night, browse the interwebs, play farmville or what have you, or do remote contract work to maintain skills and make a lot of money on the side. I had a neighbour who worked as a roadworks driver (like, the car that puts down cones and such), his job involved a lot of night shifts (good money) and then waiting for hours, occasionally moving - that gave him a lot of time to study.


Isn’t that exactly how firemen operate? In the business world these crucial roles are neglected and operations are driven toward maximum efficiency. On the other hand, no one sane would suggest utilizing firemen better during their idle time.

EDIT: I guess that is the reason why consultants come into the picture


You can think about it much like load-managing servers:

A job has a task load that varies over time. Those tasks also have some maximum latency before Bad Things happen. Thus, you need enough worker capacity to handle the peak load. You can allocate new workers on demand, but that also has a certain warm-up time, which can be problematic if it's longer than the maximum latency.

For most white-collar service-oriented jobs, the variance of that task load is quite smooth. If you're a software dev not working directly on a production service, there aren't that many urgent surprises. Also, the max latency is really high — many tasks can be put off till tomorrow, next sprint, etc.

That makes it easy to allocate just enough workers to handle your average task load, keep them busy all day, and rarely need to spin up or spin down new ones.

But some jobs just have really high variance or really low latency. No one knows when a patient is going to show up at a trauma hospital but when one does, you need the surgeon on site right now and not somewhere stuck in traffic.

For that kind of work, the logical thing to do then is to have spare idle worker capacity. It seems inefficient, but it's cheaper than the cost of missing your latency targets when a spike happens.

You can reduce the idle capacity needed by reducing variance. For example, if you flew all your trauma patients to one hospital, that actually lets you allocate surgeons more efficiently, because spikes then happen frequently enough that the aggregate number of patients in a given day becomes predictable. Instead of one surgeon sitting on their thumbs most of the day at each of ten city hospitals, you get three surgeons that reliably have a couple of patients a day at the regional hospital. (But of course there is the overhead of getting all the patients there.)

Also, you can reduce idle capacity by lowering warm-up time. That's why doctors have pagers — it reduces the time between a patient coming in and the doctor being ready to help.

Or you can increase the max latency, though this can be hard based on the nature of the work. For example, if you're better able to stabilize patients, you may be able to wait longer until the doctor is on-site.

In this case, the "install Debian" job has really high variance. 99% of the time it's a click, 1% of the time, you've gotta know some Linux internals. The max latency is low because you don't want the IceCube array non-operational for long. And the warm-up time is really high — fly someone out to the South Pole.

So the reasonable response is to throw money at idle capacity and send someone out pre-emptively.
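The variance-pooling argument above can be sketched with a toy Monte Carlo. The arrival rates here are made up for illustration (ten sites at 0.2 patients/day versus one pooled site at 2.0/day), not anything from the comment:

```python
import math
import random

random.seed(42)

def poisson(lam):
    """Sample a Poisson variate via Knuth's multiply-uniforms method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def p99_daily_load(rate, days=20000):
    """99th-percentile number of arrivals in a simulated day at the given mean rate."""
    samples = sorted(poisson(rate) for _ in range(days))
    return samples[int(0.99 * days)]

# Hypothetical numbers: ten city hospitals at 0.2 trauma patients/day each,
# versus one regional hospital that sees all 2.0 patients/day.
per_site = p99_daily_load(0.2)
pooled = p99_daily_load(2.0)

# Sizing each site for its own p99 costs far more total capacity than
# sizing one pooled site for the aggregate p99.
print(f"capacity for 10 separate sites: {10 * per_site}")
print(f"capacity for 1 pooled site:     {pooled}")
```

Pooling works because the relative spikiness of a Poisson load shrinks as the mean rate grows, so the aggregate p99 sits much closer to the mean.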


I work on an experiment at the South Pole (not IceCube). SSHing in to fix stuff is painful, even when the "good" satellite is up.

Protip: pressing up several times and then enter is very dangerous since the command that you will actually run may not have appeared on your screen yet.


Similar experience to being a hacker-cracker 20 yrs ago, bouncing off servers around the world over a modem connection to telnet into some poor SunOS 4 boxes in South Korea or at GSFC (NASA). Sometimes you had to wait minutes before anything appeared on the screen while TCP was doing its thing. God forbid if you accidentally (yp)cat'd some large file... :)


Even on a good console I put a # at the start of the line before editing or typing it in case I accidentally hit Enter too soon.

And never, ever paste commands without first trying them in a throwaway terminal or editor.


That's an interesting technique that I've never heard of. Do you have fast shortcuts/keybindings for removing the # when you are ready, or is it '(left arrow x N) + ^H' each time?


On Windows, what I do is Home, then Delete. I am amazed at how useful the Home, End, PgUp and PgDn keys are, and I am pretty sure not many people know about them or use them much.


^A ^D


^A ^D it is. But since you bring up moving the cursor, it's also very helpful to run 'xset r rate 160 80' to speed up key repeat substantially under X11.


I assume (by trying on zsh) that this is a configuration change you made vs. some keybinding I've yet to discover?



Those are Emacs style keybindings, which are the default on most shells. You've probably switched to Vi style bindings in zsh.


Readline bindings, emacs mode.

Despite (mostly) using vim as an editor, it's got to be emacs on the shell itself.
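For anyone who wants to pin this down explicitly rather than rely on defaults, here's a sketch (the options are the standard readline and zsh ones; adjust to taste):

```shell
# ~/.inputrc -- readline config, used by bash and most readline-linked tools.
# Emacs mode is the default, but stating it guards against a stray vi setting:
#   set editing-mode emacs

# ~/.zshrc -- zsh uses its own line editor (zle) and picks vi bindings
# automatically if $EDITOR or $VISUAL contains "vi", so set it explicitly:
bindkey -e    # emacs-style: ^A start of line, ^E end of line, ^K kill to end
# bindkey -v  # vi-style, if that's what you actually want

# And the key-repeat tweak mentioned above (X11 only: 160 ms delay, 80 cps):
xset r rate 160 80
```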


Use mosh and you will have a much better experience. It's made for high latency connections.


Some mission critical production systems don't really want you installing <insert current trendy software here>. That's probably a highly unpopular view on HN, but having downtime on IceCube, for example, means they may miss an extremely rare event (neutrino hitting something and generating muons/photon shockwaves) that they could detect.


Mosh has been around for a while; it's not exactly "current trendy software". I thought people used it for bad connections (and envied them, because my stuff in the jungle isn't performant enough even for ssh).


Something released in 2012 isn't exactly what I'd call "trendy". Technically it really does have some improvements for poor latency and high-packetloss connections.


I'm pretty sure they consider everything sub 10 years old "new and untested" in this field, on this server/array.


Indeed. Mosh is still new (less than 10 years old), though it could get an exception if it really helps avoid errors and typos due to latency.


Just a moment ago I learned about mosh.org; it sounds like it would be helpful for that kind of situation.


Since you're here! Is Iridium out of your price range, or does even that have latency/bandwidth issues for the kind of SSH work you're talking about?


At least a few years ago, IceCube had its own Iridium modems, in addition to a small pool of them for the station at large. Latency is very high, bandwidth is very low. It's much better when you can send an email over Iridium to ask for help from a human on the other side than to SSH in and do it yourself.

I think the situation now is a lot better than it was, but a few years ago, most of the day there was no connectivity except Iridium. Most communications satellites are in orbits that don't take them near the poles, with Iridium being the obvious exception.

Related: https://en.wikipedia.org/wiki/GOES_3 was an amazing piece of the South Pole infrastructure for several years. There was a period every year where the Earth would eclipse GOES-3 - the batteries onboard were long dead - so the satellite would power down and everyone just sorta crossed their fingers and hoped it came back online with the right orientation and with enough electronics going so that we could keep using it. Bandwidth wasn't great, but it was quite reliable especially considering that it was launched in 1978.


Iridium is available all the time but very slow, and the scarce bandwidth is not prioritized for incidental ssh access. The problem is that there are a lot of people / experiments at Pole.


I met some scientists (at the pub) from NASA the other day - they had been on a plane flying repeatedly from sea level to the stratosphere all the way from Hawaii to Christchurch, NZ to sample the atmosphere.

They had designed and built their own instruments, but they were on the plane so they could fix any problems, although the idea was to have no problems!


I would have thought that for the time and cost of doing that, they could just ship the CD with a booklet of photographic instructions. An instruction manual SO SIMPLE that even a scientist at the South Pole could understand it. Or they could make a video chat call and it could be explained.

(I don't mean scientists are so dumb, but that they might have Antarctic Stare...)

https://en.wikipedia.org/wiki/Polar_T3_syndrome


Former IceCube winterover here. Things at Pole often don't go smoothly, and detector uptime is very valuable considering the cost of building and operating it.

Video chat isn't a realistic option - not the greatest connectivity down there...


The satellite connections provided only limited communication windows and not enough for video chats.

They have some technical support at the SP but it's not great. In fact, one of my ex-coworkers, a well-respected research scientist, volunteered as a technical support person there (https://davidpablocohn.com/ and https://www.youtube.com/watch?v=eBaQtsft2bM). But I think he mostly helped people fix laptops.


>it's hard to cool servers there because the air is really dry

Not sure how this could be the case. Dry air has higher thermal conductivity than moist air.


The heat capacity of dry air is lower though, which I suspect is more important than the conductivity. Lower pressure also lowers the heat capacity of a given volume of air; you'll see a lot of electronics equipment with a specified maximum operating altitude of 10,000 feet for that reason. Incidentally, the South Pole is around the same elevation, with local weather conditions varying the pressure over that range at times...
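As a rough illustration of the altitude effect (an isothermal barometric estimate with assumed numbers, not anything from the comment):

```python
import math

M = 0.02896  # kg/mol, molar mass of dry air
g = 9.81     # m/s^2, gravitational acceleration
R = 8.314    # J/(mol*K), universal gas constant
T = 250.0    # K, assumed mean air-column temperature (it's cold down there)
h = 2835.0   # m, approximate elevation of Amundsen-Scott South Pole Station

# Isothermal barometric formula: density falls off exponentially with height.
density_ratio = math.exp(-M * g * h / (R * T))
print(f"air density relative to sea level: {density_ratio:.2f}")

# A fan moving a fixed volume of air therefore carries away roughly 30% less
# heat per unit time than it would at sea level, all else being equal.
```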

The bigger problems with keeping ICL (the IceCube Lab - where the computers on top of the detector live, also the biggest datacenter on the continent) cool are somewhat more mundane though:

* If memory serves, there's one air handler that brings in outside air. Like anything else, it occasionally breaks or needs maintenance, sometimes in the middle of winter.

* Anything that has to operate in outside-ish conditions there, like the air intake for example, needs to be simple and robust. I think that early on in ICL's life, they had problems with frost clogging up the intake louvers, but that was before my time with the project.

* South Pole Station gets Cold, and so you need to be careful when mixing a little bit of outside air with much warmer inside air, to keep the heat evenly distributed and the temperature from fluctuating too much.

* The station's HVAC is controlled by a DDC (https://en.wikipedia.org/wiki/Direct_digital_control) system, which is reputedly a pain to work on, and sometimes people get ideas about ways to improve it, which of course leads to new and interesting quirks in the system...

edit: formatting


Dry air also increases the probability of static damage. They probably can't use outside air exchange to cool the server room.

I imagine just circulating your nice toasty air into a larger area filled with moist humans who want to be warmer would work, but it depends on your building layout.


Static damage? Where I was raised, winter temps regularly reached -40. That cold, outside air already had most of the moisture 'wrung out of it'. Now let that cold dry air into the house and warm it: the relative humidity plummets.

Now walk across a carpet in leather-soled shoes. Do NOT even touch metal doorknobs, let alone metal faucets!


A favorite winter game for my siblings and me, growing up in Minnesota, was to shuffle across the floor and then poke each other.


In the winter (in MN), I usually wake up to my dog shocking us nose-to-nose while he checks if I'm still sleeping.


I'm kind of surprised he hasn't been trained out of this from the pain/surprise. Seems like sort of a shock collar effect.


There's a good article here: https://arstechnica.com/features/2012/04/coolest-jobs-in-tec... and a video here http://www.datacenterknowledge.com/archives/2010/07/23/the-d... but you're right, it doesn't mention the reason I give (and says now they do import some external cold air).


Recovering astroparticle physics PhD here.

I read this news with great interest because I thought they might have finally detected the flux of neutrinos predicted by the GZK process[1]. But this can't be the case, because 1) 10^14 eV is probably on the low side for GZK neutrinos, and (more importantly!) 2) that flux would be fairly isotropic, and not from a point source.

But still! Exciting results! IceCube is an ambitious and wonderful detector and I'm always pleased to see it in the news.

[1] https://en.wikipedia.org/wiki/Greisen%E2%80%93Zatsepin%E2%80...


Curious: alcoholics have Bill Wilson to help them recover ... who do astrophysicists have?

(the kind of question that reveals search engine limitations)


Jack Daniels?


Large paychecks from industry/finance


Nah, the IceCube neutrinos are probably photonuclear with local photons rather than CMB photons.


Is this how other people hear software developers when we talk about code?


Except that we physicists can usually understand you :)

(since many of us spend most of our time writing code too...)


Yes. And the same goes for finance people, and people in lots of other fields. Jargon-rich conversations that outsiders think are mumbo-jumbo.


I'm the author of the Spectrum article: this is the exact reason why I get paid for my job :)


You did a great job of inspiring me to understand something I have, literally, not even the foggiest iota of a clue about. I finished your article in awe at the human species.


Then I consider that a job very well done :) We can do some pretty neat stuff as a species when we want to...


You did great. Precise, informative, and evocative. Really fun to read.


Thanks!


>"After 3.9 billion years of hurtling unhindered through the vast reaches of the universe, a ghostly neutrino particle died on 22 September 2017. It was annihilated when it collided with an atom in the frozen darkness two kilometers beneath the surface of the south polar ice cap."

Your article says the neutrino came from far away, which disagrees with that post that says it was probably "photonuclear with local photons".


That's exactly why the author gets paid. By local I meant near the source, not near us.


Not saying you are wrong about the terminology, but if local photons are "produced at the source", the use of "local" seems to be redundant.

When would a photon not be created at the source? What is a non-local photon, only CMB photons?


I am probably being confusing because there are distinct photons involved here: the ones that protons interact with to make neutrinos, and the gamma rays (which are, of course, just high-energy photons) detected by Fermi or MAGIC.

The neutrinos are produced in the process:

proton + photon -> (usually) Delta(1232) (basically an excited proton, which is very unstable, so that's why you may not have heard of it) -> neutron + charged pion, with the pion then decaying into neutrinos, provided there is enough energy available to make pions. The same process also makes gammas (from a neutral pion instead of a charged pion), but there are also lots of other processes that produce gammas (which, again, are just high-energy photons).

The protons are accelerated at the source. The photons that the protons interact with to make neutrinos may either be local to the source (i.e. in the same galaxy, such that the neutrino is produced within the galaxy) or the proton could propagate some distance, bending in the intergalactic magnetic field and, if it's energetic enough, interact with a CMB photon (which are very low energy but everywhere in the universe, courtesy of the big bang). The former are likely the astrophysical neutrinos, while the latter are by definition the GZK neutrinos. The astrophysical neutrinos are at lower energy because inside galaxies, photons with higher energy than the CMB photons are available, allowing pion production at lower energies.
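A back-of-the-envelope check of that energy argument (my numbers, not the commenter's): for an ultra-relativistic head-on collision, the squared center-of-mass energy is s ~ m_p^2 + 4*E_p*eps, so the threshold proton energy for pion photoproduction scales inversely with the target photon energy:

```python
M_P = 938.272e6   # eV, proton rest mass
M_PI = 139.570e6  # eV, charged pion rest mass

def threshold_proton_energy(photon_energy_ev):
    """Proton energy at the pion-photoproduction threshold, head-on collision.

    Threshold condition: s = (m_p + m_pi)^2, with s ~ m_p^2 + 4 * E_p * eps.
    """
    return ((M_P + M_PI) ** 2 - M_P ** 2) / (4.0 * photon_energy_ev)

E_on_cmb = threshold_proton_energy(6.3e-4)  # typical CMB photon, ~6e-4 eV
E_on_local = threshold_proton_energy(1.0)   # ~eV photon near the source

print(f"on CMB photons:   {E_on_cmb:.1e} eV")    # ~1e20 eV, the GZK regime
print(f"on ~1 eV photons: {E_on_local:.1e} eV")  # ~1e17 eV, far lower
```

Which is the commenter's point: inside galaxies there are plenty of photons far above CMB energies, so pion production (and hence neutrino production) kicks in at much lower proton energies there.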


Basically when talking about multiple particles, and using the term source, it would be clearer to say "source of x". Yet three times now you've just said "the source".

It seems needlessly confusing so I think there is some reason I'm not grasping that you keep saying "the source".


>"The protons are accelerated at the source."

So by "source" you mean "source of the protons"? And "local photons" means "produced near the source of the protons"?


If you're in a field in a windy day you cannot tell where the wind is originating from. If you're in a room with a fan turned on, you can tell where the wind is being sourced. The source of the wind has been localized in this latter scenario, whereas in the first you can detect the wind but you can't tell what the source of the wind is. That's the gist of the local versus non-local neutrinos being detected.


So if I know the source but you don't, its local to me but non-local to you?


Great article, thanks!

OT: I keep meaning to join IEEE - do you get any commission or perks for referrals?


I don't get any commission or perks, but I would honestly still encourage you to join the IEEE: It takes its mission of "foster[ing] technological innovation and excellence for the benefit of humanity" quite seriously, plus you get a copy of IEEE Spectrum every month, filled with cool articles from me and my colleagues -- if there's a deliberately moving electron associated with a topic somewhere, we're likely to cover it. :)


To translate (correctly, I hope): the neutrinos come from far away and interact with local nuclei, generating photons within the vicinity of the neutrino/nuclear interaction, rather than the photons coming from far away space.


The opposite: the IceCube neutrinos are likely produced in proton-photon interactions near the source, with photons from the source, rather than being produced in proton-CMB-photon (GZK) interactions elsewhere.


> Recovering astroparticle physics PhD here

If I may ask: why 'recovering'?


There's a massive oversupply of people who want to do science vs science that people will pay to have done. Supply and demand are balanced by making the environment so toxic that most leave. Hence 'recovering.'


A pity some of our new billionaires don't make a hobby out of funding genuine science research. Rather than, say, adventures in space tourism.


Agreed. If Musk had wanted to make a science contribution, he might have launched one of these [1] instead of a Tesla.

[1] https://en.wikipedia.org/wiki/LARES_(satellite)


Would they have allowed their satellite to be launched on an untested platform?

That's the idea of putting a 'worthless' payload up first.


A tungsten or brass (LAGEOS) sphere covered in optical and radar retroreflectors is cheap and inert. The major cost for that type of mission is the launch.

Given 3-6 months, <$1M all-in, and a team of engineers and scientists, it would be quite possible to get something ready to rock. Such an instrument would allow an ever-improving test of solar-system dynamics for many millennia.

If Elon is reading and plans to do something similar again, get in touch (cah49@uw.edu). We'll make science happen.

Or, just launch a dense one of these: http://www.landfallnavigation.com/davis-emergency-radar-refl...


Agreed. I'm not a Musk fanboy, but most rockets are initially tested with inert dummy payloads. The choice wasn't between a Tesla and a science satellite, but between a Tesla and sand.


The problem is that each professor trains more than a dozen PhDs over their career but they open up only one spot once they retire. Exponential growth in scientists isn't sustainable, no matter how many billionaires like science.


It’s worth noting that this isn’t an iron-clad law of nature. We could train fewer students and hire more staff scientists, for example, but the people who decide such things are largely the folks who “made it” so....


I think "tourism" is probably a bit unfair, but it sure wouldn't hurt to see more people go the way of D. E. Shaw.

It's really sad that the only ways to fund science seem to be charity and war.

mjfl 70 days ago [flagged]

What if scientists actually considered being economically competent? There is no law of nature that holds that scientists must be aloof savants.


I think setting up machinery and waiting for discoveries like this one can be compared to being patient at the slot machines; it is essentially economically equivalent to gambling. Eventually, as a person living in society, you may run out of quarters before you hit the "jackpot." Yes, you could spend your quarters today on other things in society that are more guaranteed to bring you more quarters, but we don't know what kind of "jackpot" is waiting for humanity until we hit it. That makes for a sad, difficult decision in the life of a scientist, which is why we should have more rich people doing the funding (since a lot of them have gambling problems anyway, i.e. the stock market, brothels, etc.)

edit: to sum up, this would be an efficient economy at scale. Someone with money already puts someone with expertise to good use :)


This drives me nuts. If a quaint, ivory tower environment ever existed at all, it's long, long gone. Science these days is incredibly fast-paced and competitive (for better or worse), and running a successful lab takes a similar level of managerial chops, hustle, and luck to running a small business.

If you have actual suggestions, please do share them, but taking the mickey out of people trying to figure out the universe on $45k/yr is just mean.


Value capture in science is a known hard problem. Snide comments suggesting that it is tractable do not make it so.


The US may never have come to be what it is today if all Columbuses were bound to their homes.


I'm not in astro, but have a physics degree. It's pretty well known that only a few of us get to stay in basic research after we finish our degrees. Most of us have to make the transition to some "real world" activity. Some have an easier time of it than others.

While it's also well known that you don't go into certain fields to make the big bucks (astrophysics, viola da gamba performance, etc.), a corollary is that you do go into those fields because they're highly addictive. So, referring to it as "recovering" might be appropriate.

Disclosure: The people I know who are astrophysicists, and violists da gamba, both now have successful careers as software developers.


Yes: I've a physics degree as well, but I'm a sci/tech journalist now. I'm sometimes asked to talk to physics students as an example of someone using their degree as the basis of a career outside academic research.


Presumably he's out in industry making the big bucks now.


I'm just in awe that we're coordinating events world wide based on particle interactions one of which originated billions of light years away. That we're able to understand a little more about the vast space we live in because of it, the human cooperation, the technology, the amount of curiosity it took to get here, it's beautiful.


Totally! IceCube is such an amazing combination of great people, great Science, great engineering, and a completely amazing place where it all comes together (places, really, both Madison WI and South Pole ;) ). And, as you say, IceCube is just one project involved in this discovery.

All that effort focused on one incomprehensibly small neutrino...


I worked on this project in college and went down to the south pole after graduation to help install some of the detectors. Neat to see the results! ~10 years later.


The scientists at the IceCube Neutrino Detector are currently doing an AMA on reddit[0] for those interested.

[0] https://www.reddit.com/r/IAmA/comments/8yajhh/were_scientist...


Off-topic: Imagine discovering the IceCube detector 10,000 years from now, and trying to figure out what it was for.


If archaeologists 10k years in the future are like their present day counterparts, they'll conclude it was built for ritual purposes :)


I guess the details will depend on whether it happens before or after the discovery of the giant poo-bergs nearby :).


I believe all the poo is taken away.


Nope, at South Pole it's pumped into the ice. In general, there's an amazing amount of refuse down there; it's a very expensive place to ship things out from...

The water supply for the station comes from a rodwell, which is effectively a big bubble of meltwater, sustained by heat from the diesel generators. Once the rodwell gets to a certain size, some water can be extracted for use on station, and the bubble sinks progressively deeper into the ice. Eventually, the liquid surface in the well gets too deep for the pump in the well to lift water all the way to the surface, so a new well is started.

Waste water then goes into the void left by the old well; poo, gunk off the dirty dishes, etc. all included.


Nice article. Some people at my lab are working on a new machine which generates and detects neutrinos, called DUNE. It uses a superconducting linear proton accelerator to generate them, and hundreds of massive wire detectors immersed in liquid argon 800 miles away to detect them! Super exciting project.


If anyone's interested in other unusual (i.e. other than an optical imaging telescope) detectors, have a look at Imaging Atmospheric Cherenkov Telescopes [1]. More details can be found e.g. in [2].

TL;DR: how do you detect a very high-energy gamma-ray photon? Either with an orbital observatory (but these have limited size); or let the photon hit the upper atmosphere (at a few tens of km), where it creates (together with some atmospheric nucleus) a fast electron-positron pair, which through additional interactions eventually causes a narrow cone of Cherenkov light to shine all the way down to the ground (with a footprint a couple hundred meters across there), and then gather this light with an array of a few large reflectors. Finally, perform some tricky processing to recover the initial photon's direction and energy.
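The light-pool geometry can be estimated to order of magnitude (the sea-level refractivity, scale height, and emission altitude below are my assumed round numbers; real showers emit over a range of altitudes):

```python
import math

N_MINUS_1_SEA = 2.9e-4   # refractivity (n - 1) of air at sea level, assumed
SCALE_HEIGHT = 8400.0    # m, atmospheric density scale height, assumed
H_EMISSION = 10000.0     # m, typical Cherenkov emission altitude, assumed

# Refractivity tracks air density, so scale it down exponentially with height.
n_minus_1 = N_MINUS_1_SEA * math.exp(-H_EMISSION / SCALE_HEIGHT)

# Small-angle Cherenkov opening angle for an ultra-relativistic particle:
# cos(theta) = 1/n  =>  theta ~ sqrt(2 * (n - 1)).
theta_c = math.sqrt(2.0 * n_minus_1)  # radians

pool_radius = H_EMISSION * math.tan(theta_c)
print(f"Cherenkov angle ~{math.degrees(theta_c):.2f} deg, "
      f"light pool radius ~{pool_radius:.0f} m")
```

Even this crude estimate lands at a pool of order 100 m, which is why an array of a few telescopes spaced roughly that far apart can sample the same shower from several angles.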

[1] https://en.wikipedia.org/wiki/IACT [2] https://arxiv.org/pdf/1510.05675


I'm honestly in awe of that diagram of the detector. I'd seen pictures of the surface structure of the lab, but I had no idea there was a sensor array nearly two and a half kilometres deep.


The IEEE link includes an interstitial and then fails to load the article for me, probably my ad blocker. Alternate reporting:

https://www.npr.org/2018/07/12/628142995/a-4-billion-light-y...


The original is worth the read, it’s excellent.

http://archive.is/T9DB4


What a great story. That article was inspiring and easy to understand. It makes me proud to be a human, really!


Thanks! :)



