Hacker News new | comments | ask | show | jobs | submit login
Live CERN Higgs Announcement in 20 Mins (cern.ch)
278 points by Rickasaurus on July 4, 2012 | hide | past | web | favorite | 211 comments



So, the magic number here was 5σ, the generally accepted "gold standard" for discovery, which would mean a 1 in 1.74 million chance that the results occurred by chance rather than being a signal. As other commenters have pointed out, the presenter originally announced a 4.1σ observation, then continued to add data from other experiments until the combined result was 5.1σ. However, right at the end, he added in some additional data which actually brought the significance down to 4.9σ... That's science - you can't ignore data just because it ruins your big presentation.

IANAPhysicist, but I'd be interested to know how strict the 5-sigma discovery rule is considered - for example, could they still get a Nobel prize for a 4.9σ announcement? I suppose it's not that big a deal - the LHC is still running, and I'm sure they'll have enough data for a true 5σ announcement soon. Regardless, hats off to all involved, it must be exciting to be at the forefront of human knowledge :)


To be explicit, since this comment is still top voted: This comment was made by someone who only watched the first of 2 presentations.

The second presentation showed a 5 sigma result.

I'm incredibly disappointed by the trivial inaccuracies in comments on Hacker News lately, and by how corrections never get upvoted quickly enough to prevent the spread of misinformation.


> which would mean a 1 in 1.74 million chance that the results occurred by chance rather than being a signal

No!! It's the chance that randomness could produce a result that large. This distinction sounds pedantic, but misunderstanding this is widespread and leads to fallacies committed frequently by very smart people who ought to know better in many fields.


Aren't those the same thing? How are they different?


Imagine testing 1000 potential cancer drugs. Only 100 of them actually work, but you don't know that; you have to do the trial. So you get out some petri dishes and start testing. You look for a 5% chance of false positive (p < 0.05), which is the statistical significance level usually used in medical trials.

Of the 100 real drugs, you detect all of them. Of the 900 fake drugs, 5% of them falsely appear to work. So you have 145 drugs you think work, only two thirds of which actually work. The chance of any individual drug having obtained its positive results by chance is 31%, not 5%.

I have a guide to this here, since it's so common:

http://www.refsmmat.com/statistics/#an-introduction-to-data-...

(Side note: the average medical trial would only detect 50 of the working drugs, not all 100, so the real situation is even worse.)
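The arithmetic in that example is easy to check directly; here is a minimal sketch using the comment's hypothetical numbers (1000 drugs, 100 real, p < 0.05, perfect power):

```python
# Hypothetical numbers from the example above: 1000 candidate drugs,
# 100 that truly work, threshold p < 0.05, and (optimistically) perfect power.
n_total, n_real = 1000, 100
alpha = 0.05

true_positives = n_real                       # every real drug is detected
false_positives = alpha * (n_total - n_real)  # 5% of the 900 duds pass by luck

# False discovery rate: of the drugs that appear to "work", how many are flukes?
fdr = false_positives / (true_positives + false_positives)
print(round(fdr, 2))  # 0.31
```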


> The chance of any individual drug having obtained its positive results by chance is 31%

And that's different from the chance that randomness could have produced the effect? How?


Sorry, I wasn't very clear. Usually people quote the p value as the chance the result is a fluke; the p value in CERN's case is p = 1/1,740,000. But that's the chance that the effect would be produced if the Higgs did not exist, which is different.

By analogy in the medical case, p = 0.05. The incorrect interpretation is that this means only 5% of drugs with statistically significant benefits actually achieved these benefits through luck; rather, the right interpretation is that 5% of the nonfunctional drugs somehow appeared to work.

You could also imagine testing 200,000,000 hypotheses which were all completely false. Even if you used CERN's level of statistical significance, you'd still quite likely find one hypothesis which appears to be true, simply by chance. The chance of that hypothesis being false is 100%, despite the significance level of 1 in 1,740,000.

So yes, 31% is exactly the chance that randomness produced the effect in the trial. But people will try to tell you that it's actually 5%, and they're wrong.


This thread started with the claim that "the chance that the results occurred by chance" was different from "the chance that randomness could produce [the result]".

But you're saying that in your example both are 31%. So again, I ask, are we talking about two separate things? And if so, can you give an example where the two things have different values?


In my medical example, "the chance that the results occurred by chance" is 31%. "The chance that randomness could produce [the result]" was only 5%.

For CERN, the chance that randomness could produce this result is 1 in 1.74 million; the chance that the results occurred by chance is larger, but not computable with the information we have.

The guide I linked to above gives a much better explanation than this. I rushed my first post here, and I think I was unclear.


I swear I'm not trying to give you a hard time, but I don't see how these two things you said could both be true:

> "The chance that randomness could produce [the result]" was only 5%.

> 31% is exactly the chance that randomness produced the effect in the trial.


Whoops, poor wording on my part.

Imagine flipping a perfectly fair coin 100 times. You'd expect to see 50 heads, but you don't always -- it's just an average. Suppose you see 75 heads. What is the chance that you'd see 75 heads with a fair coin? Very very small. The chance that randomness could produce such a result is small.

Now, imagine you test 100 perfectly fair coins. A few of them give more than 75 heads, just by luck. You conclude they're unfair, since the result is unlikely otherwise. The chance that randomness produced the effects you saw is actually 100%, because all the coins are fair.

There's a difference between the question "How likely is this outcome to happen if the coin is fair?" and "Given that this outcome happened, how likely is it that the coin is fair?" Statistical significance addresses the first question, not the second.
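The coin arithmetic above can be computed exactly with the binomial distribution; a small sketch:

```python
from math import comb

def p_at_least(heads, flips=100):
    """P(a fair coin shows `heads` or more in `flips` tosses)."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips

# The first question: how likely is this outcome if the coin is fair?
p = p_at_least(75)  # tiny -- a few parts in ten million
# The second question -- is the coin fair, given the outcome? -- cannot be
# read off p at all: if every coin you test happens to be fair, the answer
# is always "fair with certainty", however extreme the flip count looks.
```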


Thanks; I feel like I learned something. Your infinite patience was appreciated.


I suspect Almaviva is talking about the biases inherent to publication / talking about results.

Consider, by analogy, the event "rolling a six sided die and getting a 6 and announcing that fact to the world".

"What is the probability that random events could produce a result that large?": one in six, per die roll. The question excludes the whole "announce it to the world" filter.

"What is the probability that these results [getting a six and announcing it] occurred by chance, rather than being a signal?": We have no idea. If the person announced "I'm going to roll one die and announce the results, regardless of the outcome", then it's one in six. If they kept rolling dice until they got a six, then the probability is 1. If they rolled 3 dice, then the probability is 91/216.

The point is that the scientific method has all sorts of biases (publication bias, confirmation bias, etc.) and p-values are rarely "probability that the result is wrong".
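The die-rolling numbers in that analogy can be verified exactly:

```python
from fractions import Fraction

def p_at_least_one_six(rolls):
    """P(at least one six in `rolls` throws of a fair die)."""
    return 1 - Fraction(5, 6) ** rolls

print(p_at_least_one_six(1))  # 1/6    -- one pre-announced roll
print(p_at_least_one_six(3))  # 91/216 -- three rolls, as above
# "Keep rolling until you get a six" drives the probability to 1: the
# announcement filter, not the die, determines the number you get to see.
```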


Standard null hypothesis testing -- including what they're talking about here -- focuses on

P(results | random noise is active)

You're talking about

P(the thing we care about | results)

The former is more like a simple sanity check. If random noise could have produced what you see, you shouldn't take the results too seriously.

The former tells you a little bit about the latter -- which is good, because the latter is what we actually care about. But you can't explicitly compute the latter without making much stronger assumptions like priors and the like. That's why this last step of reasoning is often performed qualitatively.

A good stats book to read about all this is Larry Wasserman's http://www.stat.cmu.edu/~larry/all-of-statistics/
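Getting from P(results | noise) to P(effect | results) takes Bayes' rule plus a prior; here is a minimal sketch, where the prior is the purely illustrative 100-in-1000 figure from the drug example upthread:

```python
def posterior(p_value, prior, power=1.0):
    """P(effect is real | significant result), via Bayes' rule.
    p_value ~ P(significant | no effect); power ~ P(significant | real effect)."""
    hits = power * prior
    false_alarms = p_value * (1 - prior)
    return hits / (hits + false_alarms)

# Drug-trial numbers from upthread: prior 0.1, alpha 0.05, perfect power.
print(round(posterior(0.05, prior=0.1), 2))  # 0.69 -- so ~31% are flukes
```

Note how the answer moves with the prior: the same p-value gives a very different posterior if only 1 in 1000 hypotheses is true, which is exactly why the last step is usually argued qualitatively.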


A couple of extreme examples:

1. I'm rolling two dice to try to get the highest total. I get two sixes. What is the chance randomness produces this? 1/36, about 0.027. That's more than a two sigma result. What is the chance that this is caused by random chance? 2.7%? Nope, 100%.

2. I study the same thing in a million situations in parallel. I take the most extreme result and find that random chance can produce this one time in 1.7 million. It's a five sigma result! What are the chances this result is caused by random chance?
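Example 2 is the look-elsewhere effect in miniature, and its numbers are easy to check; a rough sketch:

```python
# If a million independent pure-noise tests are each held to the 5-sigma
# threshold (p = 1 in 1,744,278), the chance at least one crosses it:
p_single = 1 / 1_744_278
n_tests = 1_000_000

p_any_false_alarm = 1 - (1 - p_single) ** n_tests
print(round(p_any_false_alarm, 2))  # 0.44 -- near a coin flip, despite "5 sigma"
```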


The 5σ convention is more about announcement than about discovery. No one can really say when a discovery is made. As more information comes in, the image can remain fuzzy or can get clearer and clearer. In the latter case, 5σ is a convention as a threshold for when you are entitled to announce a discovery without being blamed for jumping the gun. They'll collect data for many more years before passing the threshold of Nobel-worthy discovery.


Atlas is still presenting. So let's wait and see what they came up with. 4.9σ was just from CMS.

Edit: It is worth noting that 125GeV fits well within Fermilab's recent announcement: http://news.discovery.com/space/tevatron-data-detects-higgs-...


She is working the crowd, so to speak. Here is the final CERN press release, which claims a 5 sigma result:

http://press.web.cern.ch/press/PressReleases/Releases2012/PR...


I think these physicists have been watching too many Steve Jobs presentations. "4.9σ... Oh, but one more thing!" ;)

In all seriousness, does anyone know how common it is to do this kind of gradual reveal during scientific presentations vs. stating your basic final results in the introduction? It's kind of fun, but you could cut the tension in that room with a knife!


The way they handled it was perfect. For over 20 minutes each, the presenters for CMS and Atlas explained their measurement capabilities, methods, recent improvements, and recorded measurements before stating their conclusion. They made sure to cover their bases and justify the conclusion before making a big announcement. I'm pretty sure the abstract for the papers will state the conclusion early on but the initial announcement for the greatest achievement in Particle Physics shouldn't be prefaced with "We found it!" just for the convenience of the impatient.


The bravado comes from ego and pride in one's self-worth, not from Jobs' keynotes.


5σ is not a rule. It's just a measurement of the probability that the result you observe is an accident. It means that it's very unlikely that it was just pure luck.

By the way, σ measurements are also used in many engineering jobs and in quality-system metrics. The typical industry "goal" is 6σ (say, when you manufacture consumer goods in large quantities, like millions every month). But for some industries, 9σ is the benchmark - from what I know, airlines run quality systems up to 9σ to minimize the risk of something going wrong.


How can they possibly have up to 9 sigma? 6 sigma is a failure rate about 1 in 10 billion, whereas 9 sigma would be 1 in 10^19 or so! That's oddly similar to the number of grains of sand on the earth![1]

[1] http://www.hawaii.edu/suremath/jsand.html


The math will happily pop out absurd numbers. Remember that all stats carry assumptions with them, such as "this is a Gaussian process from negative to positive infinity" and "everything is absolutely perfectly independent" and other things that break down if you push them hard enough. What that is telling you is just how hard you're pushing the math, rather than the real odds. I'm not 1 in 10^19 sure of anything.


Where did you get a 6 sigma failure rate of 1 in 10 billion?

6 Sigma is 3.4 defects per million opportunities. http://en.wikipedia.org/wiki/Six_Sigma


Yeah, I doubt most pilots have failure rates anywhere near or above 4 sigma.


That's not about pilots. It's about equipment failure rates.


>IANAPhysicist, but I'd be interested to know how strict the 5-sigma discovery rule is considered - for example, could they still get a Nobel prize for a 4.9σ announcement?

It's very much a HEP thing, and done that way because HEP is pretty much all statistical analysis these days. Other fields wouldn't treat the sigmas as the most important thing, and I've heard mutterings that it's not really the most accurate approach - but it is objective and easy to apply.


5σ seems like an unnecessarily high standard to this non-physicist; what's the rationale for that? At 5σ we could publish 1,000 major discoveries a year and still expect a false discovery to slip through only once every 1,740 years.


The two main rationales are that systematic uncertainties are historically under-estimated and that we are looking in many channels (more than 1000), so it would not be too hard to find a 3 or 4 sigma anomaly. The second part is the so-called "Look Elsewhere Effect." If you hit 5 sigma, you are fairly safe from either of these effects ruining your "discovery."


> The second part is the so-called "Look Elsewhere Effect."

Also more generally known as the "multiple testing" problem, fwiw (not sure why it has a different name in physics, unless I'm missing a subtlety).

It's a major problem in "big data" also, where people just data-dredge thousands of possible parameter choices and pairwise correlations, and then report the p<0.01 results that came up, even though you'd expect several false positives just by chance with that methodology.
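The data-dredging arithmetic is worth making explicit; a minimal sketch of the expected false positives, plus the simplest (Bonferroni) correction:

```python
# Dredging 1000 parameter choices at p < 0.01: even if every hypothesis
# is false, you expect several "significant" hits by chance alone.
n_tests, alpha = 1000, 0.01

expected_false_positives = n_tests * alpha  # ~10 spurious p < 0.01 results
bonferroni_alpha = alpha / n_tests          # per-test cutoff that keeps the
                                            # family-wise error rate near alpha
```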


I think it's mainly so they can be as condescending and smug as possible at interdisciplinary conferences.


Actually, it's because the rest of you "scientists" end up publishing bullshit results that you got by chance.

I'm kind of joking - most other scientists don't collect enough data to have to worry about 1 in 10,000 events happening by chance. In medicine, though, I'm not joking at all - those guys publish absolute statistical garbage all the time, I hesitate to even consider it a science because the data dredging is so bad. I can "prove" just about anything if the publication standard is 95% significance...


I searched but didn't find an explanation of this 5 sigma. Any reference to share?


5σ = 1 / 1,744,278 chance this was just luck.

http://en.wikipedia.org/wiki/Standard_deviation
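The 1-in-1.74-million figure is the two-sided Gaussian tail beyond 5σ; it can be reproduced with the complementary error function:

```python
from math import erfc, sqrt

def two_sided_p(sigma):
    """P(a Gaussian variable lands more than `sigma` std devs from the mean)."""
    return erfc(sigma / sqrt(2))

print(round(1 / two_sided_p(5)))  # 1744278 -- the "1 in 1,744,278" quoted above
```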


The question this actually answers is: Assuming it was luck (ie there's no Higgs), how likely is it to get the measured result (or one even more extreme)?


Exactly. P(results|luck) and P(luck|results) are two very different things. We don't know the probability that the results are due to "just luck" (whatever that means).


I think it's quite dramatic how many very intelligent people (up to writers in major publications) are uneducated about this distinction, and how dramatically wrong decisions can be if this is misunderstood. If we could teach this, as well as correlation vs causation, I'm convinced the intelligent public would make much better decisions about many things, medicine and nutrition to name a couple.


I know standard deviation but I didn't know scientists reported their results talking about x sigmas. Thanks for clearing that up.


You should have waited to watch the ATLAS woman's presentation afterward. During hers, they concluded with a solid 5 sigma.


I've given up. Considering that this is supposed to be a big announcement, probably important for a number of reasons that may affect a lot of people, I am surprised he didn't start with:

"I know a lot of people are tuning in without degrees in Physics. Let me break it down for you in Leymens terms. We are fairly certain we have discovered this. It is important because of that. Now let me get onto why we think this."

I get that this talk is not meant for me. However, it is important - apparently. It's on the front page of the Guardian.

If this is an announcement of great importance and it is 98% mumbo jumbo aimed at high-end physicists or whatever, then.. I don't know. It's another chance to get people interested in science that has been missed.

Note: I am not saying the whole talk should be dumbed down. I am just suggesting a 2-3 minute preface for those who do not understand a single word of the first 20 minutes of the presentation.


I am a particle physicist. The most interesting point of this announcement for me is that this is a confirmation of predictions of a very exotic particle.

The properties of subatomic particles include something called 'spin' - that's a fundamental quantum mechanical property. The higgs boson is the first elementary (i.e. not made of other particles) spin-less particle that we've discovered; it's completely unlike anything that we've seen up until now.

That the model we have constructed can accurately predict its existence and the way that it decays, without having observed anything like that beforehand, is a huge confirmation that we're in the right region of model space. Today seems to be a huge confirmation that our understanding of physics is not fundamentally broken.

That's why it's important; the prediction is like attempting a 5-point dive and nailing it pretty much perfectly. It's an impressive confirmation of 50 years of theoretical work.

(The anarchist in me would have preferred them not to find anything, I must admit. That would have been much more interesting, as the standard model came tumbling down... :-)


We still can't understand this completely.

What is the significance of discovering a particle without spin?

Does this answer some questions about dark matter, antimatter, the Big Bang, etc.?

Then it would be interesting for us laymen.


There will be time for popularization for laymen. It is not now. It's more important that the knowledge be transmitted accurately and completely than it is to give a reader's digest for laymen. We'll have to wait until all of the discovered information is processed and then summarized. A summary made this early in the discovery is likely to contain errors. It's also better handled by people other than the physicists working on the project.


The standard model predicted it and it was found. That's a major win for the standard model, which is the result of many years of theoretical work, as said above - it was able to predict something completely new, something that was never observed before. I'm not deep enough in the field to see what questions could be answered, and we will see, but it's always a good thing to be able to rely on your model of the world.

So - this is not about other questions in the first place, it's about the validity of the standard model. We can continue from there. (A common title for talks etc is "Physics beyond the Standard Model".)


I'm curious what you mean by "pretty much." Did the Standard Model predict 125 GeV?

I'm asking because I've seen a few casual descriptions of this Higgs as a "lightweight." I'm assuming that means it's not as heavy as expected?

EDIT: "If the mass of the Higgs boson is between 115 and 180 GeV, then the Standard Model can be valid at energy scales all the way up to the Planck scale (10^16 TeV)."

http://www.daviddarling.info/encyclopedia/H/Higgs_boson.html


This isn't exactly my field, but as I understand it a Higgs of 125GeV implies a supersymmetric model with relatively light squarks and no excitingly novel features. Basically, a small adjustment to the standard model that allows for one more family of heavy quarks and not much else.


So basically all of this will not result in having that transporter I want so badly?


There's nothing stopping you from having a transporter, but this implies no FTL, no time-travel and likely no anti-gravity machines :-(


Even the Guardian reporter has admitted that it goes over her head. Heck, I'm doing a Master's in condensed-matter physics and I don't understand much of the jargon.

What I can tell you is, the parts which sound the most intimidating are actually probably the simplest bits. CERN operates a particle accelerator -- this means that the LHC basically smacks subatomic particles into each other at absurdly high speeds to create infinitesimal explosions with tremendous amounts of energy (these are the TeV and GeV numbers that you see -- they describe the amount of energy concentrated in the explosion). The explosion essentially disrupts the underlying fields of the universe so much that new particles can be created or destroyed, but if you excite the Higgs field enough to produce its quantum -- the Higgs particle -- it tends to decay immediately into other things.

The other things are subatomic particles, including quarks (the letters u, d, c, s, t, and b for up, down, charm, strange, top and bottom -- you may have heard him for example say 'bb') and bosons (he talked a bit about W W* and gamma-gamma; gamma rays are light while W bosons are, well, a little more complicated let's say).

All of the stuff he says about Monte Carlo and so on is about creating "expected" curves from the Standard Model. You want to have two curves, "expected" vs. "actual", so that you can compare them.

On the base axis usually there is energy -- this is the energy of the explosion. There are usually two curves from Monte Carlo which tell you what you expect to see. Then there are data points with error bars which tell you what's actually seen and what the statistical "counting" errors are, how weak the signal is. Usually there is then a follow-up graph where they have tried to "subtract out the noise" to see the signal more clearly.
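The expected-vs-actual comparison described above can be caricatured in a few lines (illustrative counts, not CERN data):

```python
# Toy bump hunt: observed counts per energy bin vs. the Monte Carlo
# background-only expectation. All numbers are made up for illustration.
expected_bg = [100, 98, 95, 93, 90]   # "expected" curve from simulation
observed    = [103, 96, 121, 91, 92]  # "actual" data with an excess mid-bin

residuals  = [o - e for o, e in zip(observed, expected_bg)]  # noise subtracted
stat_error = [e ** 0.5 for e in expected_bg]                 # Poisson ~ sqrt(N)

# Local significance of each bin's excess, in sigmas:
sigmas = [r / s for r, s in zip(residuals, stat_error)]
print(round(max(sigmas), 1))  # 2.7 -- the middle bin sticks out of the noise
```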


I feel like the statistics going on here are almost as fascinating as the physics. Well, technically they're the same thing, but it's so intriguing to see people whooping and hollering at 5.0 sigma.

"We are 99.999995% sure we found it!"


This is for the scientific community. The scientific method is that all results should be scrutinized, tested and verified. If you want a 2-3 minute explanation. Wait for CNN.


I don't really buy that in this case. It's still high level and summarized and no one is verifying or scrutinizing anything based on that presentation alone.

NASA handles these kinds of announcements well, but then they also announce cyanide-based life. So.


When scientists present results to scientists, they present in a scientific way. I.e. methods, analysis, etc. There is no way a scientist can get by with just a short summary when talking to fellow researchers... it's just not how science is done.



I understand that. I don't understand why announcements of this significance are done like this.

It's like NASA landing on the moon without video and presenting geology findings based on the rocks. Sod that. The people want to see VIDEO! They want to live the moment. I thought this could be one of those moments where something significant was discovered which I may be asked about in many years' time. A "this changes everything" moment. The way it is presented, though, may be just that for scientists. For everyone else, though.. who cares, when the announcement is this technical.

Surely I wasn't the only person wondering if we are not closer to the hover board? That would have been a nice way to start.

Screenshot of hoverboard "For the leymens tuning in. Our discovery means this is / is not closer to being made."


This isn't an announcement. There's no press. This is CERN doing us a courtesy and letting the public see a presentation they were going to give anyway.


> Surely I wasn't the only person wondering if we are not closer to the hover board? That would have been a nice way to start.

Ah ok, you're trolling.

PS - it's "laymen", not "leymens".


The ATLAS lady has what I think must be the worst set of powerpoint slides I've ever seen. Epically bad. Pretty amazing.

Even if you are working at CERN running the equipment, there's no way you could absorb all the info on each of those slides in the 10 to 15 seconds she shows them. They might as well have pictures of frolicking kittens on them.


Bullshit. She has plenty of informative graphs and useful numbers.

The worst powerpoint presentations are ones that are content-free - this one could be accused of too much content.


I suspect the slides will be given to people afterwards. In this respect, they're not that bad.


Clearly you don't understand how much time pressure these people are under. If you were at CERN you would realize how intense the atmosphere has been in the past 2 weeks.

Also, focus on the content - if you're caring so much about the presentation, then you probably don't understand enough of the physics to comment on the content.


Seems like a lot of it is just comparing more recent data to last year's data. I'm sure they have spent plenty of time looking at last year's data, so they can probably understand quite a bit at this pace.


This, unfortunately, is not even that bad compared to the litany of bad, overloaded, eye-scorching PowerPoint presentations I usually sit through in research group meetings, conferences, and more. Maybe all intro STEM courses should include a unit on communicative design.


That, and Comic Sans!


Maybe the intent was to make it look less intimidating.


That'll happen, someone will explain it in "layman" terms about why/how this is fundamentally important. That's probably not going to be done at this announcement though, but hopefully in the aftermath by the media reporting on the announcement.


He specifically said that would come later.


Final (CMS) summary: We have observed a new boson with a mass of 125.3 +/- 0.6 GeV at 4.9σ significance!

Atlas comes in at ~126.5 GeV with 5.0σ. That would be a confirmed discovery!

Interesting that Atlas' mass is outside CMS's confidence range, though Atlas didn't have a range on theirs.


Are there any other theorized boson candidates, which may fit the experiment results or is Higgs Boson the only candidate?


Short answer: They discovered a particle which looks like a Higgs Boson.

They are talking about the standard model Higgs, which is the result of a specific way of breaking electroweak symmetry. A consequence is that there are quite well-understood predictions for what the cross sections and branching ratios should look like. And at the current level of statistical significance, it looks like a standard model Higgs.

On the other hand, there are so called effective field theories, that is you can start from a complicated theory and derive a simpler theory from it, which behaves the same in some aspects (for example at low energies).

So the more exact answer is probably that any viable theory now has to contain a Higgs boson in the appropriate limit.


The error bars on the mass range are typically just 1 sigma, so there's considerable overlap between the two figures (the Atlas figure is within the 95% confidence interval for the CMS figure, for example).
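Concretely, widening CMS's 1σ error bar to ~2σ gives the approximate 95% interval; a quick sketch with the numbers quoted upthread:

```python
# CMS: 125.3 +/- 0.6 GeV, where 0.6 is the 1-sigma error.
center, sigma = 125.3, 0.6

lo, hi = center - 1.96 * sigma, center + 1.96 * sigma  # ~95% interval
print(round(lo, 1), round(hi, 1))  # 124.1 126.5 -- reaching ATLAS's ~126.5
```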


Hawking owes Kane $100!



I wouldn't think Hawking would be betting on such a result.


In 2000, Hawking bet Gordon Kane $100 that the Higgs Boson would never be found.


In December 2000, Hawking bet Gordon Kane $100 that the Higgs Boson will be discovered at the Fermilab Tevatron.

Since the funding dried up for the Tevatron and the first hints of the Higgs boson's discovery came from the LHC, we can conclude that Hawking won the bet and Kane will be the one paying up.


This just in: "Stephen Hawking on Higgs: 'Discovery has lost me $100'"

http://www.bbc.co.uk/news/science-environment-18708626


That look he gives at the end really makes me tear up. What a great sentiment.


There are 2 big experiments (detectors) at LHC. This is for the CMS detector. Next up is Atlas.

Clarification:

http://public.web.cern.ch/public/en/lhc/LHCExperiments-en.ht...


Could we combine the ATLAS results to get over the 5 sigma mark?


It's possible to combine the data and get a higher significance. AFAIK, CERN doesn't officially do these combinations, but independent people do. See here:

http://blog.vixra.org/


CMS and ATLAS will do an official combination. For example, it was done for 2011: http://arxiv.org/abs/1202.1488.


This paper combines different channels from CMS, but not CMS and ATLAS.

"In this Letter, we report on the combination of Higgs boson searches carried out in proton-proton collisions at Sqrt[s] = 7 TeV using the Compact Muon Solenoid (CMS) detector at the LHC. ... Combined results are reported from searches for the SM Higgs boson in proton-proton collisions at Sqrt[s] = 7TeV in five Higgs boson decay modes: gg, bb, tt, WW, and ZZ."


Any idea about what prevents them from combining the data?


Those two detectors are attached to the same accelerator, right? My guess is they don't because they cannot guarantee that the results are independent of each other?



Charge? Color charge?

Since this is apparently "only seen" through the Monte Carlo analysis, I suppose they haven't seen one specific particle decay with it?


The world's most cutting edge scientific results presented with comic sans. :)

(edit: The current ATLAS talk, not the first one on the CMS data)


Exactly! Take that, Valley's "Beautiful" startups ;)


It's certainly an incredibly readable font, even with the webcast being somewhat blurry. A good choice.


Twitter is full of bashing. I don't think it matters. The presentation is already unfriendly to anyone outside the scientific community, so the styling isn't really that important.

However, as everyone else has said, it would have been lovely to have a layman's TL;DR. Perhaps that's the press conference at 11:00, and expecting it beforehand is arrogant.


There is a font called Dyslexie which looks like a much subtler Comic Sans and was designed to be more readable for dyslexics.

www.studiostudio.nl/project-dyslexie/


Simon Peyton-Jones, of Haskell fame (however relative such a fame may be), is also well known for using Comic Sans in his presentations.

So it's not unknown in advanced CS either :).


This happened already in 2011:

http://www.youtube.com/watch?v=0Vif4D-TJYQ

They must really like that font :)


Soon the web will move to Lobster.


I love how we're using Hacker News here -- this is definitely not what it was designed for. What we're really doing is something like a live chat room while watching a common talk, but unlike chat rooms it can be threaded and points can be allocated. Also unlike a chat room, HN does not automatically update when we get new discussion messages, but that's a constraint of the technologies at the time it was built.

It might be very interesting to try to use comet-casting or websockets to revolutionize chat in precisely that way, realtime threaded discussions. So, in addition to all of the chat constraints you have the ability to dynamically mark certain chat messages as replying to other messages, and as the noise in the chat room gets higher you can filter yourself to just "I want to follow this discussion."


This is why Google Wave got me so excited - and why I was so disappointed when it was botched and silently killed. I think an app like Wave could be great for many things, including discussing a live event like this.


My thoughts precisely. The code is still out there though - just waiting for someone to pick up where Google left off: http://incubator.apache.org/wave/about.html

My plan is to try to rewrite this as a SharePoint add-in, so that all those companies who install SharePoint as their "solution to everything" get something good with it; that could help grow the user base and promote belief in the technology. Then it could be opened out to let people connect to hosted public services, and eventually get back to Google's original vision.

PS: In honour of the subject matter, perhaps we should be calling it Google Duality?


These guys have picked up where Google left off: http://rizzoma.com/

In fact, many of my Waves were transferred to it.


Awesome - thanks for the link :)


I'm glad to hear I'm not the only one sad that Wave passed away. I really think it could have found some unique uses, but most people (and, transitively, Google) just didn't give it enough time.

Isn't it open source now? I might try the open source version one of these days... Maybe I can even convince my friends to use it :).


I love how HN users love to make live meta-analysis; in most topics, a significant percentage of people are not focused on the topic itself but on observing other users' interactions and their techno-historical context. These observations point to opportunities for future products.


This happens a lot on sports-based subreddits as well, when there's a game on.


Really striking what a large percentage of the words are jargon. Sometimes I understand less than 10% of the words in a sentence. I think I might now better understand how a non-programmer feels when seeing a talk related to programming.


That's true for almost every field of academic research and work; around 70-80% of the words are not in any dictionary (or, if they are, their standard definition has nothing to do with their professional meaning).


What always interested me is how the ratio of new words to repurposed words varies per field.

For example, in CS we use a whole bunch of words like "string", "thread", "class", "type", "object", "arrow", "map" and "macro" to denote CS-specific concepts related at best tangentially to the words' original meanings. On the other hand, biology seems to prefer to come up with new words for their technical terminology.

I wonder if this is a product of different cultures or something like that.


The worst is botany. Botanists use common culinary words to describe almost entirely non-overlapping sets of plants/fruit etc. The "Tomato a fruit?" question is nothing compared to the "berry" thing. According to the botanical definition cherries, raspberries, strawberries, boysenberries and blackberries are not berries, but bananas, watermelon, avocado and pumpkin are.

Nuts are worse. According to botanists, peanuts, cashews, macadamias, pistachios, walnuts, almonds, pecans, pine nuts and Brazil nuts are not nuts. According to most lay people, though, botanists are nuts.


I don't understand the particle physics they're talking about, but I do find it fascinating how a lot of the work is really how to sort through a massive amount of data to remove all the noise and find the signal. It sounds like they're using some machine learning algorithms to examine and classify various interactions in the data.
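As a toy illustration of that idea (all numbers hypothetical, nothing like the real analysis, which uses far more sophisticated multivariate classifiers): even the simplest "cut-based" selection is a classifier separating signal-like from background-like events, e.g. by an invariant-mass window:

```python
import random
random.seed(0)  # deterministic toy data

# Hypothetical events: background masses spread uniformly over
# 100-150 GeV, "signal" clustered near 125 GeV. Purely illustrative.
background = [random.uniform(100, 150) for _ in range(1000)]
signal = [random.gauss(125, 2) for _ in range(50)]

def in_window(mass, center=125.0, width=4.0):
    """Simplest possible classifier: keep events within a mass window."""
    return abs(mass - center) < width

selected_sig = sum(in_window(m) for m in signal)
selected_bkg = sum(in_window(m) for m in background)
print(selected_sig, selected_bkg)  # most signal kept, most background cut
```

The real experiments replace the hand-tuned window with learned decision boundaries (boosted decision trees, neural nets), but the goal is the same: maximize signal efficiency while rejecting noise.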


> It sounds like they're using some machine learning algorithms to examine and classify various interactions in the data.

They are.


I wonder if the people doing it are trained in computer science or physics? Not that it should matter in the end results, just curious how people got there.


A lot of the modern research in 'big data' analysis is/was driven by physicists. Bayesian Inference is about trying to make a decision about what you can infer from an observation or series of observations, and the impetus for this came from trying to make sense of experimental results.

Two of the really great text books in the field are by physicists,

1) 'Information Theory, Inference and Learning Algorithms' by David Mackay, a physics professor at Cambridge. Perhaps the most readable and enjoyable text book I own. Certainly up there.

2) 'Pattern Recognition and Machine Learning' by Chris Bishop, now a director at Microsoft Research in Cambridge but formerly a physicist. Delightfully, under the circumstances, his PhD supervisor was Higgs (yes, the one of boson fame)!


They're probably physicists. One can take IT classes during one's studies, and some people reach a very high skill level. Data interpretation is a big part of being an experimental physicist, and these algorithms are very useful, so people seek them out.

Computer science people work in other areas, such as setting up and running the data collection and online processing. (A professor told us many interesting stories about the many Unix servers they built and the bugs they created...)


few of us are formally trained in CS. some of us are good. others are not.

my understanding is that the computer engineers at CERN are mostly tasked with IT work; the rest (including DAQ software/firmware, network code, distributed+realtime data processing, etc.) is done by the physicists.


Right or wrong, the physicists would never trust data analysis done by a CS person.


That's Joe Incandela speaking, he taught me graduate quantum physics at UCSB. I feel awesome now :-P


Gaucho here too! :) Edit: Had David Stuart (CERN, Compact Muon Solenoid (CMS))


UCSB Physics Alum checking in.


I am having a lot of fun watching this, although I don't understand a single word of what he is saying :D


Pedant here.

Can people please stop posting intervals in headlines i.e. "20 minutes!"

It expired a long time ago and was probably too late by the time the first people read it.

Please use "at 12:00 UTC" or something (an absolute time).


Why is the speaker in such a rush? He seems to be under a constant time-crunch. He keeps saying he's going to go over time and that he'll speed up.

Didn't he practice this before? Can't he just tell us what he wants to tell us, and skip over the rest?


If you're going to make an announcement that is going to be held up to the highest rigorous standards, figuring out exactly what to say is going to be a difficult issue. This presentation isn't for you, this is for the physics community.


Right now, he's presenting general stuff that everyone in the room already knows, including him. It's just a summary of the status of the project up to now.

He could have practiced this 6 months ago.

Aaaand... at this very second he's starting on the new stuff, I think.


Sounds like the kind of content that is just very hard to condense into a reasonable amount of time. This seems super dense to me, but you can imagine to a physicist this is a high level overview.


That was simply nerves. He knew how much media attention was on him.


Next time you do a major presentation on state-of-the-art high-energy physics experiments live to a world-wide community on what could be a Nobel Prize winning event, let us know and we'll watch you do it better. ;)


So I see a lot of people are frustrated about the impenetrable physics "jargon".

I'm not exactly sure how you'd expect some sort of tl;dr of potentially one of the most important scientific announcements of the last 100 years.


Probably because the guy's presentation style is terrible, rushed, and his slides are a graphical disaster. He jumps from overload to overload, connecting with "obviously" and "as you can see". There is no overview, nothing connecting the endless series of slides. Everyone in the room is just waiting for the Higgs announcement.

I gather that they wanted to include last minute data, but given that they're livestreaming this and tons of people are watching, it was a huge chance to get a decent presentation done that would at least highlight the important results clearly rather than having them be throwaway lines between jargon.


The ATLAS presentation is set in Comic Sans. Monumental.



"We have observed a new boson at 125.3 +/- 0.6 GeV at 4.9 sigma significance"


CMS is at 4.9 sigma. Soooooooo close.


5 sigma is kind of arbitrary, isn't it? Just a nice "round" number. Obviously better than 4.9, but the line is fuzzy, if it even exists.


Correct. What really counts is independent verification, all at high sigmas: Tevatron, CMS, Atlas.


But then at the end he kept mis-speaking as to whether there were five Directors-General or six. Maybe they're at 5.9 sigma after all!


could you explain in easy terms what getting to 5 would mean? thanks


I am just a hobbyist, but as I understand it, 5 sigma corresponds to the probability that random fluctuation alone would produce a signal this strong. At 5 sigma that chance is very small, something like one in 1.7 million. 4.9 sigma means a somewhat larger chance of a fluke, but I am not sure exactly how much that 1.7 million number drops.
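For concreteness, the sigma-to-probability conversion is just the tail area of a normal distribution; a quick stdlib-Python check (two-sided, which is how the "1 in 1.7 million" figure for 5 sigma is usually quoted):

```python
import math

def two_sided_p(sigma):
    """Probability that pure chance produces a fluctuation at least
    `sigma` standard deviations from the mean (either direction)."""
    return math.erfc(sigma / math.sqrt(2))

for s in (3.0, 4.9, 5.0):
    p = two_sided_p(s)
    print(f"{s} sigma -> p = {p:.3g} (about 1 in {1/p:,.0f})")
```

At 5.0 sigma this gives roughly 1 in 1.7 million; at 3.0 sigma ("evidence"), only about 1 in 370, which is why the field insists on the stricter threshold before claiming discovery.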


This is basically a lecturer giving a lecture to lecturers. Pretty amazing.


Peter Higgs just teared up. This is amazing.


I heard once that when Peter Higgs tears up, each teardrop is 126.5 GeV.

And now it has been confirmed to 5 sigma. :)


Is that comic sans?


Nothing says High Energy Physics like Comic Sans.


Designers vs physicists:

Designer sits in coffee shop wearing his hipster outfit drinking his hipster coffee, writing incensed blog post about the outrages of Comic Sans.

Physicist makes presentation on what is clearly a state-of-the-art advancement in the progress of high-energy particle physics (and thus, physics) to a world-wide community, live, and NOT a fuck was given as to what font is used. :P

Much respect to the latter folks.


Thank you! It seems very few people understand the immense pressure and intense atmosphere that surrounds work like this, especially in culminating times like these.

Fabiola would likely rather spend her time improving the quality of her analysis, which is undeniably more valuable to the scientific community than agonizing over the font.


Looked more like Chalkboard to me.


I have a 5 sigma level of confidence it was either Chalkboard or Comic Sans. We'll need a real designer to chime in to present their own numerical analysis.


Sean Carroll is liveblogging from the room and has some actual interpretation of what's being said: http://blogs.discovermagazine.com/cosmicvariance/2012/07/03/...


I'm in a place with single-bar wifi signal and the livestream is really choppy.

Does anyone have something like a buffered stream on a delay so I can be sure I don't miss anything, even if I have to stop to buffer more of the stream?

I have a vanilla, chrome browser, and a fairly recent VLC.


Try the RTMP stream: rtmp://cern.fc.llnwd.net/cern/cern1_900

If that's choppy, save it to disk with rtmpdump: rtmpdump -v -o cern1_900.flv -r rtmp://cern.fc.llnwd.net/cern/cern1_900

edit: alternate bitrates (thanks to Brajeshwar): cern1_900 cern1_600 cern1_300


I am in office, and am following a live blog of the webcast. http://www.quantumdiaries.org/2012/07/03/higgs-seminar-live-... [The Guardian one pointed out below is more informative]


The Guardian has one too; it is not very detailed, but I guess they will cover the main points rather than all the details of the presentations.

http://www.guardian.co.uk/science/blog/2012/jul/04/higgs-bos...


Nothing to do with the science, but it looks suspiciously like they are using Squeak to produce their slides. Maybe they are using it in some capacity to collate data?


Mr. Higgs just walked in.


Sorry, noob, who is this?



tip: in the future, google Higgs


The magic number for CMS is 4.1 sigma. That was combined across all channels I think?


At the point you made the comment, the full number had not yet been shown. That was only the combined number for Higgs to gamma-gamma for 2011 and 2012 (so far).


Sorry about that. He just said 5 sigma for the combined signals (I think? maybe I misunderstood again.)

Nobel prizes all round!


Can someone please explain what 5 sigma signifies?

Edit: “Evidence” usually means a 3-sigma signal, which existed last December, “Proof” would be a better way to describe a 5+ sigma signal, if that’s what the combined CMS/ATLAS data shows - http://www.math.columbia.edu/~woit/wordpress/?p=4809


5 sigma is the traditional limit of significance for new discoveries in particle physics. If you have data showing the existence of a new particle or phenomenon at 5 sigma you publish and you announce and you start saying "this thing exists" instead of "this thing may exist".


They just announced 5.1 sigma; on a normal distribution that means 99.999966% confidence, i.e. about a one-in-three-million chance that random fluctuation alone would produce an excess this large - otherwise there is some sort of excess here due to a new particle.


This Physics Stack Exchange post does a great job of explaining why 5.0 sigma is required as the threshold for discovery in particle physics:

http://physics.stackexchange.com/questions/8752/standard-dev...


5 sigma means that, under a model without a Higgs, the actual data set would lie 5 standard deviations away from what is expected.

That corresponds to 99.99994% confidence on a standard null hypothesis test, and is the usual threshold in physics for considering something proven.


I believe 5 sigma is the threshold for "proof" in the physics community.


99.999% confidence.


Actually, it's 99.99994% confidence (1 in 1.7 million chance of error).


> Can someone please explain what 5 sigma signifies?

It means a statistically very very significant number of physicists are getting laid tonight, around the world. ;)


Yeah. The gamma-gamma and Z-Z together have been combined into a signal of 5 standard deviations from the no-Higgs Standard Model. Which is the common threshold for a discovery.


OK, guess I should stfu then.


TL;DR: New boson discovered!


What have they actually established, apart from the mass and the fact that it's a previously undetected particle?

Do they know that it's a boson and not a fermion? Do they know that it's an elementary particle?


Just announced 5 sigma.


Sorry if this sounds ignorant, but were they doing some sort of regression analysis over large data sets?


CERN: Makes you feel stupid since 1954


Meta, but come on, "in 20 Mins"? The headline was wrong seconds after it was posted...


OK, when can I order my bottle of Higgs bosons? ;-P


Why is potentially the world's finest discovery of the 21st century being communicated using the worst software to be cobbled together in the 20th (a.k.a. Flash)?


It works. Flash Player is the only browser plugin yet that can do actual streaming (not progressive download), using either Adobe's own Media Server or similar open-source software at the back end. If need be, that is the best form of protection against content theft so far.

It is not that difficult to setup the whole streaming with lots of free and open source solutions available today. It's just a good means to a useful end.

EDITS:

Just as I suspected, it's using an RTMP streaming server: "rtmp://cern.fc.llnwd.net/cern/"

It also automatically streams at the appropriate quality depending on the user's bandwidth.

  { bitrate: '1000', width: '640', file: 'cern1_900' },
  { bitrate: '700', width: '640', file: 'cern1_600' },
  { bitrate: '400', width: '640', file: 'cern1_300' },
So far, I haven't found a decent way to do that in HTML5 without having to encode multiple video-streams for multiple bitrates.


> It works

Unfortunately I was unable to listen in due to being on a mobile device....


Inkjet printers were invented in the 70s and everyone I know still uses one (labs, etc.), even for thesis prints.

To Flash or not is probably the least significant decision anyone at CERN makes.


Latest application of the uncertainty principle - you can have 21st century discoveries or 21st century technology, but you can't have both simultaneously.


Because the universe is conspiring against us.


Anyone else getting no sound?

edit: Sound is working now!


It's very quiet; you're just hearing the spillover from the room into the main mic. If you crank your speakers you'll hear people talking, but be careful when someone talks into the mic :)


I only heard sound for a few seconds, while I let it run in the background for 5min or so.


They're muting it until it starts. ;)


you're right, I have sound now, thanks! =)


The 404 page looks great.


Cool - they just showed a quick shot of Peter Higgs in the audience - he looked a bit emotional


TL;DR: "I think we have it."


This guy is excited to get to the conclusion of the talk.


Comic sans!


They desperately need a designer. The color scheme, use of Comic Sans and broken composition make my eyes bleed.


They couldn't care less about the presentation; these guys are all about content.


I am under the impression that what you think is not entirely true:

"How should we make it attractive for them [young people] to spend 5,6,7 years in our field, be satisfied, learn about excitement, but finally be qualified to find other possibilities?" -- H. Schopper

Perhaps an answer to the naive question:

http://cdsweb.cern.ch/record/1127343?ln=en

https://secure.wikimedia.org/wikipedia/en/wiki/Spin_(public_...


Ok, so how does that contradict my point about content over presentation?


The work at CERN is differentiated when it is performed by westerners or by people from the East:

"The cost [...] has been evaluated, taking into account realistic labor prices in different countries. The total cost is X (with a western equivalent value of Y)" [where Y>X]

source: LHCb calorimeters : Technical Design Report

ISBN: 9290831693 http://cdsweb.cern.ch/record/494264

Western discrimination is firmly in place there.


Really, I think this might take the cake as the worst comment I've ever read on Hacker News.


He cited a source, so I don't think it can take that particular cake.


Except he is misusing a source to come to an absurd conclusion!


In an attempt (presumably, I can't really know) to distract from a truly historic scientific achievement. Don't forget that part.


No attempts, just plain facts: a reminder of those who took their fair share in contributing to such a scientific achievement but were discriminated against compared to their "western equivalents", only for being born in a non-western country.

The cited document (did all the downvoters also take their time and actually read the TDRs and papers in detail??? Or are they just ignorant sheep?) is a rare case of putting the facts on the ground down in a written and approved document, despite being taboo in an organisation touting "equal opportunities" and such policies.

No downvotes will change the situation I warn about above, quite to the contrary: may my previous comment serve as a warning to all non-westerners at CERN for the time being.


If you have difficulties in accepting criticism based on a factual quote from a (peer!) reviewed document, then you might consider changing your behaviour.

The moment one experiences the consequences of such discrimination, things get far more real than an absurd conclusion you are alluding to.


No, I posit that you are taking the quote and extrapolating your own conclusion. The report does not make the same conclusions you are making. All it seems to be saying is that labour costs are different between countries.


> All it seems to be saying is that labour costs are different between countries.

Do you think that this evaluation scheme has any consequences for peer evaluation within the same group? [aside from the inherent bias, not even mentioning all the other loopholes with categorisations such as Scottish (read: western) MC-EST/PhD etc.]


No.


to quote somebody who claimed to have invented the Internet:

...inconvenient truth?

What makes it even worse is that the worst comment is actually factual.


What

That is referring to production costs for the calorimeter, which is part of the ATLAS detector (it picks up neutral particles, IIRC).

It doesn't refer to differentiating between work done by white people and non white people actually at CERN, which is what you seem to be implying?


Please take time to read it. But here is a summary:

"[1] the western equivalent value is 1'390 kCHF. [2] the western equivalent value is 5'450 kCHF"

http://lhcb.ecm.ub.es/spd/spd/General%20information/spd_cost...

It is not from the ATLAS experiment but from another LHC experiment - still within the organisation of CERN.

The comment/quote is about evaluating people - i.e. by the simplest budgeting metric, labour cost (with obvious consequences for peer evaluation). It has nothing to do with colour but with peer evaluation of equivalent work, differentiated according to eastern or western membership.

edit: unable to reply to the comment below. But I can cite concrete case(s) in which work was performed in Geneva by both eastern and western members within the same group. China isn't a member state anyway.


Uh, it says labour costs in different countries. I.e. if the work is done in China then wages may be lower, if it's done in Switzerland then wages will be higher.

There's no big conspiracy here by CERN, just different wages in different countries.



