Wow. I was a CERN summer student in 1995, and another student and I worked on the tagging of B-meson decays in ATLAS. Hers was exactly the B->K mu+mu- decay. So 20 years later, it's observed... ;-)
(Incidentally, the particle physics publication system is beyond screwed up. We, who did the actual work, weren't even allowed to be authors on the paper where the results were described, because we weren't "members of the ATLAS collaboration". We merely got an acknowledgement.)
The idea is that you need to quickly identify ("tag") interesting reactions out of the millions happening, so there is a set of increasingly stringent but slower filters that select events. Muons are good because they are easy to identify; then you look back at where they came from and see what other things were created. Typically mesons (and other hadrons) create a "shower" of particles, and by adding up all their energies you can estimate what particle they came from.
But all that stuff is mostly repressed from my memory. I decided that particle physics wasn't for me after that summer... ;-)
Having just gotten our PV panel installed on our roof this year, this is fresh in my mind. Of course everyone is doing it -- electricity in Hawaii is insanely expensive. Before we turned on the system, we paid $200/month for power! And this is in a house with no heat and no AC.
Do they have to buy the power? Yes, but not without limits. The system is built on "net metering": every kWh we deliver to the grid can only offset one that we have used or will use. If your system is so large as to make you a net producer of energy when averaged over a whole year, you will be giving the surplus away to the utility for free.
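As a toy sketch of how that accounting works (the function name, rates, and kWh figures below are all made up for illustration, not our actual numbers):

```python
# Toy sketch of net metering as described: exported kWh can only offset
# imported kWh, and any annual surplus is forfeited to the utility.
def annual_bill(imported_kwh, exported_kwh, rate_per_kwh):
    # Surplus production never becomes a credit -- clamp at zero.
    billable = max(imported_kwh - exported_kwh, 0)
    return billable * rate_per_kwh

print(annual_bill(7000, 5000, 0.35))  # net consumer: pays for 2000 kWh
print(annual_bill(5000, 7000, 0.35))  # net producer: pays 0, surplus is given away
```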
The rules also limit installation of PV systems to a level that will not exceed the minimum daytime load on the local circuit, exactly because there is nowhere to store the power. So every kWh they get from us is one they would otherwise have to generate to supply to one of my neighbors.
It's clear that the utility companies aren't happy with this. But studies have shown that these "extra costs" the utilities talk about being saddled with are offset by savings to them as well.
Solar typically produces power during times of high demand, so the net effect of solar is actually to even out the daily load variance. This is good for the utility, because they need to make sure they have capacity to handle peak load, even if that peak only occurs for a small fraction of the time. These "peak power" plants are expensive, because you almost never get to operate them. When PV reduces peak load, the utility doesn't need these plants, and that's a net win for them.
The fonts in almost every PDF that I have ever seen created with LaTeX just look horrible to me. I'm struggling to understand why my perception is so different from that of most people, who seem very happy with the results.
Just googled "latex pdf" and looked at the very first pdf on offer, and indeed the fonts look horrible to me, like there is no anti-aliasing applied -- jagged and inconsistent weighting of stems.
Am genuinely confused here, and wish I had an explanation of why my perception of LaTeX output is so different from the majority's.
The original TeX fonts are rendered as bitmaps from an outline format that isn't very compatible with PDF (based on pen strokes rather than outlines). Some older toolchains embed these fonts as bitmaps. It doesn't look particularly good in a PDF.
There are now outline (Type 1) versions of these fonts, which are the default in modern TeX distributions, but not everybody uses them (especially in older papers). There are some scripts out there to do the substitution in PS files, but not in PDFs.
Unrelated to the rendering technicalities, I agree that Computer Modern (the default) is not a beautiful font. It's the Times New Roman of the LaTeX world, saying "I don't give a shit about typography".
But throw on something like Sabon with a proper microtype config, use some nice large chapter/section headings, and LaTeX will give you the paper equivalent of world class latte art.
I don't think the physics of balancing a rocket on the way down are any different from doing it on the way up.
I'm also not sure where you got the idea that using multiple boosters is a problem. Look up the Atlas 5, Delta IV Heavy, the Space Shuttle, Ariane 5, etc, etc. (And multiple engines on the same booster have been used since the start of the space program.)
In general, burning while you descend is a terrible waste of fuel, because you have to generate 1G of acceleration just to keep your speed constant. (See http://en.wikipedia.org/wiki/Gravity_drag.) That's fine if you're not fuel limited (like the grasshopper tests) but if you want to land with minimal use of propellants, you want to use as high acceleration as possible. So the idea is to let the rocket drop freely and then light the engines just early enough that you can stop before hitting the ground.
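A back-of-the-envelope sketch of that tradeoff (illustrative numbers only, nothing SpaceX-specific; the function names are mine):

```python
# Hedged sketch: when to light the engine so a falling rocket stops
# exactly at the ground, and how gravity drag depends on thrust.
g = 9.81  # m/s^2

def ignition_altitude(v, a_engine):
    """Altitude (m) at which to ignite when falling at speed v (m/s) with
    engine deceleration a_engine (m/s^2). Net deceleration is a_engine - g,
    since gravity keeps pulling down during the burn."""
    return v**2 / (2 * (a_engine - g))

def burn_time(v, a_engine):
    """Burn duration (s); gravity drag wastes g * burn_time of delta-v,
    so a shorter, harder burn wastes less."""
    return v / (a_engine - g)

# Falling at 200 m/s: compare a gentle 2g burn with an aggressive 5g burn.
for a in (2 * g, 5 * g):
    h = ignition_altitude(200, a)
    t = burn_time(200, a)
    print(f"a={a/g:.0f}g: ignite at {h:.0f} m, burn {t:.1f} s, "
          f"gravity loss {g * t:.0f} m/s of delta-v")
```

The harder burn both starts much lower and wastes a fraction of the delta-v, which is exactly why you let the rocket fall and light the engines as late as possible.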
The HR screen was a bit off. I applied as a Python developer and was given a purely C memory-leak multiple-choice quiz. Sort of annoying, considering I told the recruiter explicitly that I don't know C.
EDIT: My broader qualm is this: if you've got hundreds of open reqs, but fail a candidate based on a quiz covering material you know the candidate doesn't know (and which aligns with one specific job posting), you're maybe doing hiring wrong.
You guys regularly fire some portion (3% to 5%) of your employees every year, yes? I love what you're doing and I'd love to be part of it, but the press doesn't make it out to be a great place to be an employee. It makes sense, I guess, given the risk involved in launching rockets, but this kind of work environment is probably not for everyone.
Being fired for low performance is okay — if you feel you're continually performing poorly, you should start looking for another job well before that moment. California is not a particularly saturated market for software developers.
Not following the legally required procedure is bad, though.
The problem is that companies following the "fire at least 10% every year in every department" Jack Welch philosophy tend to let some good people go. It's a management philosophy that just assumes that there's no such thing as a really good team, full of worthwhile players.
It's an attitude that works better in professional sports, where hyper-competitiveness is more often an asset than a liability. The knock-on effects of a bunch of coworkers trying to outdo one another to make the cut seem as though they'd make for a crappy work environment, which I believe is what VieElm was talking about.
It's not appropriate for me to comment on company HR policy. All I'm going to say is that my impression of the atmosphere on the software team is not at all "a bunch of coworkers trying to outdo one another to make the cut".
lutorm, I should have put a disclaimer in there. I know nothing about SpaceX policy and so I wasn't trying to cast any aspersions. (and frankly, the 3% figure VieElm mentioned sounds more like attrition than Neutron Jack policy)
Well, it's well documented that what happens is that the system ends up being gamed. For example, managers hire people to fire so they don't have to fire the people they've befriended. See Microsoft's stack ranking.
> To conform to U.S. Government space technology export regulations, applicant must be a U.S. citizen, lawful permanent resident of the U.S., protected individual as defined by 8 U.S.C. 1324b(a)(3), or eligible to obtain the required authorizations from the U.S. Department of State.
Optimistic that in the distant future we will be eligible.
Try graphing y = -1 * log(x) and imposing a limit on the upper bound of x and you'll get close to what he has. Perhaps that's the angle he's coming from. He provided the fitted equation further down in the featured article, and the log term does have a negative coefficient, plus an intercept term.
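For the curious, here's a quick sketch of that kind of fit on synthetic data (the data and constants are made up for illustration, not taken from the article):

```python
import numpy as np

# Synthetic "sick days" data, sorted in descending order the way the
# article's bar chart is, then fit with y = k*ln(x) + m via a linear
# fit in ln(x). The log coefficient should come out negative.
rng = np.random.default_rng(0)
n = 500
sick_days = np.sort(rng.exponential(scale=3.0, size=n))[::-1]

x = np.arange(1, n + 1)                      # rank of each employee
slope, intercept = np.polyfit(np.log(x), sick_days, 1)

print(f"log coefficient = {slope:.2f}, intercept = {intercept:.2f}")
```

The slope is negative, matching the sign of the log term in the fitted equation from the article.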
It screams exponential at me, especially given a potential underlying model where every sick person has some fixed probability of getting each individual they work with sick. As the number of sick individuals goes up, with no change in the rate of sickness arriving from outside the office, the number of sick people should go up exponentially (as with any multiplicative process).
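A toy simulation of that model (office size and infection probability are made-up illustrative numbers):

```python
import random

# Each sick person independently infects each healthy coworker with
# probability p per day. Early on, new cases scale with the number
# already sick -- a multiplicative process.
random.seed(1)

def spread(n_people, p, days):
    sick = 1
    history = [sick]
    for _ in range(days):
        healthy = n_people - sick
        # One Bernoulli trial per (sick, healthy) pair.
        new = sum(1 for _ in range(sick * healthy) if random.random() < p)
        sick = min(n_people, sick + new)
        history.append(sick)
    return history

print(spread(200, 0.01, 8))  # roughly doubles each step until saturation
```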
Edit: actually I think I completely misinterpreted the data. Now that I look more closely, I have no idea what the X axis is for. I assumed it was number of employees in a company whose sick time was somehow represented by bar height, but is it just a list of all employees sorted by how much sick time was taken?
If so, this is probably an example of a normal distribution with an exponential tail.
I'm pretty sure it's just a list of employees sorted by how much sick time is taken, so the X-axis is an "employee index number".
More interesting (and pertinent when trying to find a pattern in this data) would be a histogram for sick time taken. Trying to fit a curve to the graph as-is isn't useful, because the X-axis doesn't represent anything meaningful.
This is my thought as well. So you fit a curve to a sorted list of each employee's sick time. Does this give you any additional insight? So it follows a log function. Does that mean anything?
If you do a histogram and fit a function to it, you get something that could conceivably be interpreted as a probability distribution function, and then you might be able to say something about predicting the sick time a given employee will take, and the uncertainty of your prediction.
But I honestly don't see what visualizing the data in the method of the post, or fitting a function to it contributes. Hope that doesn't violate the new no negativity policy of HN.
Since there is confusion in the sibling comments, I want to explain how y = -k ln(x) + m fits in with the exponential function. I am going to be a little sloppy with closed and open intervals and round a little.
x is the rank of the employee. Let N be the number of employees. We can generate a new observation from the model by drawing an x' uniformly between 1 and N and inserting it into the formula for y. Then p' = x'/N is a number between 1/N and 1, or, if we round, between 0 and 1.
The generated observation will be distributed according to a (shifted) exponential distribution (convince yourself by looking at the submission's graph): since p' is approximately uniform on (0, 1), -ln(p') is exponentially distributed with rate 1, and y = -k ln(x') + m = k * (-ln p') + (m - k ln N) is just that variable scaled by k and shifted.
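A quick simulation of the model described above (with arbitrary illustrative values of k, m, and N, not the article's fitted constants) bears this out: if x' is uniform on [1, N] and y = -k ln(x') + m, the resulting y behaves like an exponential with scale k, shifted by m - k ln(N).

```python
import math
import random

# Draw from the rank model y = -k*ln(x') + m with x' uniform on [1, N],
# then check the mean and variance against a shifted exponential:
# mean = (m - k*ln(N)) + k, variance = k**2.
random.seed(42)
k, m, N = 2.0, 10.0, 1000

samples = [-k * math.log(random.uniform(1, N)) + m for _ in range(100_000)]

shift = m - k * math.log(N)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"mean ~ {mean:.2f} (expect ~ {shift + k:.2f})")
print(f"var  ~ {var:.2f} (expect ~ {k**2:.2f})")
```

(The match isn't exact because x' >= 1 truncates the tail at -ln(1/N), but for large N that's negligible.)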