That may have been all he did that time, but imagine if stuff had gone sideways: he'd have had to troubleshoot it from Antarctica! I thought working from home was bad...
Years ago, I interviewed for a couple of positions with Schlumberger which, at the time, basically went out to oil wells and made various measurements. In a lot of ways, it was essentially a tech job but the reason you had engineers (assisted by techs) was that you had various training and, presumably, other background for when things went sideways.
Problem solved? Of course not. Employee safety is still a core KPI, after all. So to address this problem, they installed a device in every company-issued car which makes a BEEP! sound every time you drive aggressively (as judged by the device). After three BEEP!s you have to explain yourself to your boss.
I was extremely impressed with the safety-first culture when I interned at Schlumberger a few years back. Your coworkers will yell at you if you so much as step into a work area without proper PPE (hardhat, safety boots, etc.).
EDIT: I guess that is the reason why consultants come into the picture
A job has a task load that varies over time. Those tasks also have some maximum latency before Bad Things happen. Thus, you need enough worker capacity to handle the peak load. You can allocate new workers on demand, but that also has a certain warm-up time, which can be problematic if it's longer than the maximum latency.
For most white-collar service-oriented jobs, that task load is quite smooth. If you're a software dev not working directly on a production service, there aren't that many urgent surprises. Also, the max latency is really high: many tasks can be put off till tomorrow, next sprint, etc.
That makes it easy to allocate just enough workers to handle your average task load, keep them busy all day, and rarely need to spin up or spin down new ones.
But some jobs just have really high variance or really low latency. No one knows when a patient is going to show up at a trauma hospital but when one does, you need the surgeon on site right now and not somewhere stuck in traffic.
For that kind of work, the logical thing to do is to have spare idle worker capacity. It seems inefficient, but it's cheaper than the cost of missing your latency targets when a spike happens.
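To make that trade-off concrete, here's a toy queue simulation (a sketch with made-up numbers; the load, deadline, and one-hour-per-task service model are all my assumptions, not anything from this thread):

```python
import math
import random
from collections import deque

def poisson(rng, lam):
    """Knuth's Poisson sampler; fine for small lambda."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def missed_fraction(workers, hours=20_000, load=2.0, max_wait=3, seed=0):
    """Each hour, Poisson(load) one-hour tasks arrive; up to `workers`
    tasks start per hour, oldest first. Returns the fraction of started
    tasks that waited more than `max_wait` hours."""
    rng = random.Random(seed)
    queue, missed, started = deque(), 0, 0
    for hour in range(hours):
        queue.extend([hour] * poisson(rng, load))
        for _ in range(min(workers, len(queue))):
            arrived = queue.popleft()
            started += 1
            if hour - arrived > max_wait:
                missed += 1
    return missed / max(started, 1)

# Staffing at exactly the average load (2 tasks/hour) blows the latency
# budget; a little idle headroom keeps misses near zero.
for w in (2, 3, 4):
    print(f"{w} workers: {missed_fraction(w):.3f} missed deadlines")
```

With capacity equal to the average load, the queue is a random walk that drifts into long waits; one or two "idle" workers above average make deadline misses vanish.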
You can reduce the idle capacity needed by reducing variance. For example, if you flew all your trauma patients to one hospital, you could allocate surgeons more efficiently: spikes would happen frequently enough that the aggregate number of patients in a given day becomes predictable. Instead of one surgeon sitting on their thumbs most of the day at each of ten city hospitals, you get three surgeons who reliably have a couple of patients a day at the regional hospital. (But of course there is the overhead of getting all the patients there.)
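The pooling effect is easy to quantify if you model arrivals as Poisson (an assumption, and the hospital numbers here are purely illustrative):

```python
import math

# Pooling N independent Poisson streams of rate r gives a Poisson(N*r)
# total, whose relative spread (std/mean) is 1/sqrt(N*r): the more you
# pool, the more predictable the aggregate demand.
def relative_spread(n_streams, rate_each):
    lam = n_streams * rate_each
    return 1.0 / math.sqrt(lam)

# Ten city hospitals at ~1 trauma patient/day each, vs one pooled center:
print(relative_spread(1, 1.0))   # 1.00 -> each hospital is wildly spiky
print(relative_spread(10, 1.0))  # 0.32 -> pooled demand is predictable
```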
Also, you can reduce idle capacity by lowering warm-up time. That's why doctors have pagers — it reduces the time between a patient coming in and the doctor being ready to help.
Or you can increase the max latency, though this can be hard based on the nature of the work. For example, if you're better able to stabilize patients, you may be able to wait longer until the doctor is on-site.
In this case, the "install Debian" job has really high variance. 99% of the time it's a click, 1% of the time, you've gotta know some Linux internals. The max latency is low because you don't want the IceCube array non-operational for long. And the warm-up time is really high — fly someone out to the South Pole.
So the reasonable response is to throw money at idle capacity and send someone out pre-emptively.
Protip: pressing the up arrow several times and then Enter is very dangerous, since the command you will actually run may not have appeared on your screen yet.
And never, ever paste commands without first trying them in a throwaway terminal or editor.
Despite (mostly) using vim as an editor, it's got to be emacs on the shell itself.
I think the situation now is a lot better than it was, but a few years ago, most of the day there was no connectivity except Iridium. Most communications satellites are in orbits that don't take them near the poles, with Iridium being the obvious exception.
Related: https://en.wikipedia.org/wiki/GOES_3 was an amazing piece of South Pole infrastructure for several years. There was a period every year when the Earth would eclipse GOES-3 - the batteries onboard were long dead - so the satellite would power down, and everyone just sorta crossed their fingers and hoped it came back online with the right orientation and enough electronics running that we could keep using it. Bandwidth wasn't great, but it was quite reliable, especially considering that it was launched in 1978.
They had designed and built their own instruments, but they were on the plane so they could fix any problems, although the idea was to have no problems!
(I don't mean scientists are so dumb, but that they might have Antarctic Stare...)
Video chat isn't a realistic option - not the greatest connectivity down there...
They have some technical support at the SP but it's not great. In fact, one of my ex-coworkers, a well-respected research scientist, volunteered as a technical support person there (https://davidpablocohn.com/ and https://www.youtube.com/watch?v=eBaQtsft2bM). But I think he mostly helped people fix laptops.
Not sure how this could be the case. Dry air has higher thermal conductivity than moist air.
The bigger problems with keeping ICL (the IceCube Lab - where the computers on top of the detector live, also the biggest datacenter on the continent) cool are somewhat more mundane, though:
* If memory serves, there's one air handler that brings in outside air. Like anything else, it occasionally breaks or needs maintenance, sometimes in the middle of winter.
* Anything that has to operate in outside-ish conditions there, like the air intake for example, needs to be simple and robust. I think that early on in ICL's life, they had problems with frost clogging up the intake louvers, but that was before my time with the project.
* South Pole Station gets Cold, so you need to be careful when mixing a little bit of outside air into much warmer inside air, to keep the heat evenly distributed and the temperature from fluctuating too much (see the mixing sketch after this list).
* The station's HVAC is controlled by a DDC (https://en.wikipedia.org/wiki/Direct_digital_control) system, which is reputedly a pain to work on, and sometimes people get ideas about ways to improve it, which of course leads to new and interesting quirks in the system...
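A quick sketch of the mixing arithmetic from the third bullet (the temperatures and outside-air fraction are made up for illustration, not station data):

```python
def mixed_air_temp(t_outside_c, t_inside_c, outside_fraction):
    """Mass-weighted average of the two air streams, assuming equal
    specific heat for both (a reasonable approximation for air)."""
    return (outside_fraction * t_outside_c
            + (1 - outside_fraction) * t_inside_c)

# Even 5% of -70 C winter air pulls 20 C room air down several degrees:
print(mixed_air_temp(-70, 20, 0.05))  # -> 15.5 C
```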
I imagine just circulating your nice toasty air into a larger area filled with moist humans who want to be warmer would work, but it depends on your building layout.
Now walk across a carpet in leather-soled shoes. Do NOT even touch metal doorknobs, let alone metal faucets!
I read this news with great interest because I thought they might have finally detected the flux of neutrinos predicted by the GZK process. But this can't be the case, because 1) 10^14 eV is probably on the low side for GZK neutrinos, and (more importantly!) 2) that flux would be fairly isotropic, and not from a point source.
But still! Exciting results! IceCube is an ambitious and wonderful detector and I'm always pleased to see it in the news.
(the kind of question that reveals search engine limitations)
(since many of us spend most of our time writing code too...)
Your article says the neutrino came from far away, which disagrees with that post that says it was probably "photonuclear with local photons".
When would a photon not be created at the source? What is a non-local photon, only CMB photons?
The neutrinos are produced in the process:
proton + photon -> (usually) Delta(1232) (basically an excited proton, so unstable that you may never have heard of it) -> neutron + charged pion, which then decay to yield a proton plus neutrinos, provided there is enough energy available to make pions in the first place. The same process also makes gammas (from a neutral pion instead of a charged pion), but there are lots of other processes that produce gammas too (which, again, are just high-energy photons).
The protons are accelerated at the source. The photons that the protons interact with to make neutrinos may either be local to the source (i.e. in the same galaxy, so that the neutrino is produced within that galaxy), or the proton could propagate some distance, bending in the intergalactic magnetic field, and, if it's energetic enough, interact with a CMB photon (which are very low energy but everywhere in the universe, courtesy of the big bang). The former are likely the astrophysical neutrinos, while the latter are by definition the GZK neutrinos. The astrophysical neutrinos are at lower energy because inside galaxies, photons with higher energy than the CMB photons are available, allowing pion production at lower proton energies.
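For reference, the chain and the threshold argument in symbols (a back-of-the-envelope sketch using standard particle masses and the typical CMB photon energy; none of these numbers come from the comments above):

```latex
p + \gamma \;\to\; \Delta^{+}(1232) \;\to\; n + \pi^{+},
\qquad \pi^{+} \to \mu^{+}\nu_{\mu}, \quad
\mu^{+} \to e^{+}\nu_{e}\bar{\nu}_{\mu}

% Proton threshold energy for a head-on collision with a photon of
% energy E_gamma (from s = m_p^2 + 4 E_p E_gamma >= m_Delta^2):
E_p \gtrsim \frac{m_\Delta^2 - m_p^2}{4 E_\gamma}
\approx \frac{(1.232\,\text{GeV})^2 - (0.938\,\text{GeV})^2}
             {4 \times 6\times10^{-4}\,\text{eV}}
\sim 10^{20}\,\text{eV}
```

Since each daughter neutrino carries only roughly a few percent of the proton energy, GZK neutrinos are expected at energies far above the 10^14 eV mentioned upthread.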
It seems needlessly confusing, so I think there must be some reason I'm not grasping for why you keep saying "the source".
So by "source" you mean "source of the protons"? And "local photons" means "produced near the source of the protons"?
OT: I keep meaning to join IEEE - do you get any commission or perks for referrals?
If I may ask: why 'recovering'?
That's the idea of putting a 'worthless' payload up first.
Given 3-6 months, under $1M all-in, and a team of engineers and scientists, it would be quite possible to get something ready to rock. Such an instrument would allow an ever-improving test of solar-system dynamics for many millennia.
If Elon is reading and plans to do something similar again, get in touch (firstname.lastname@example.org). We'll make science happen.
Or, just launch a dense one of these: http://www.landfallnavigation.com/davis-emergency-radar-refl...
It's really sad that the only ways to fund science seem to be charity and war.
edit: to sum up, this would be an efficient economy at scale. Someone with money puts someone with expertise to good use :)
If you have actual suggestions, please do share them, but taking the mickey out of people trying to figure out the universe on $45k/yr is just mean.
While it's also well known that you don't go into certain fields to make the big bucks (astrophysics, viola da gamba performance, etc.), a corollary is that you do go into those fields because they're highly addictive. So, referring to it as "recovering" might be appropriate.
Disclosure: The people I know who are astrophysicists, and violists da gamba, both now have successful careers as software developers.
All that effort focused on one incomprehensibly small neutrino...
The water supply for the station comes from a rodwell, which is effectively a big bubble of melted ice, sustained by heat from the diesel generators. Once the rodwell gets to a certain size, some water can be extracted for use on station, and the bubble sinks progressively into the ice. Eventually, the liquid surface in the well gets too deep for the pump to lift water all the way to the surface, so a new well is started.
Waste water then goes into the void left by the old well; poo, gunk off the dirty dishes, etc., all included.
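For a sense of scale, a rough melt-energy estimate (the cavity volume and ice temperature are my assumptions; the material constants are standard values):

```python
ICE_DENSITY = 917           # kg/m^3
LATENT_HEAT_FUSION = 334e3  # J/kg, ice -> water at 0 C
SPECIFIC_HEAT_ICE = 2100    # J/(kg*K), approximate

def melt_energy_joules(volume_m3, ice_temp_c=-50):
    """Energy to warm ice from ice_temp_c up to 0 C and then melt it."""
    mass = ICE_DENSITY * volume_m3
    warming = mass * SPECIFIC_HEAT_ICE * (0 - ice_temp_c)
    melting = mass * LATENT_HEAT_FUSION
    return warming + melting

# A ~1000 m^3 bubble takes on the order of 4e11 J -- roughly 10 m^3 of
# diesel's worth of heat (~38 GJ/m^3), which is why waste heat from the
# generators is the natural source.
print(f"{melt_energy_joules(1000):.2e} J")
```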
TL;DR: how do you detect a very-high-energy gamma-ray photon? Either with an orbital observatory (but these have limited size), or let the photon hit the upper atmosphere (at a few tens of km), where it creates (together with some atmospheric nucleus) a fast electron-positron pair, which through further interactions eventually produces a narrow cone of Cherenkov light shining all the way down to the ground (roughly 100 m in diameter there); then gather this light with an array of a few large reflectors. Finally, perform some tricky processing to recover the initial photon's direction and energy.
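If you want to play with the geometry, the cone's opening angle follows straight from the refractive index (the index used here is a typical sea-level value for air, an assumption for illustration; it's smaller at altitude, which is part of what sets the size of the light pool on the ground):

```python
import math

def cherenkov_angle_deg(n, beta=1.0):
    """Cherenkov emission angle for a particle moving at beta*c in a
    medium with refractive index n (emission requires beta > 1/n)."""
    return math.degrees(math.acos(1.0 / (n * beta)))

# Air near sea level, ultra-relativistic particle:
print(f"{cherenkov_angle_deg(1.0003):.2f} deg")  # ~1.4 degrees
```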