Hacker News | archgoon's comments

That was number 17. I'd link to it, but individual cards don't seem to be real URIs.

"The case against: The lead theory has the same problem as the abortion theory: in the 1990s, even people who had been exposed to lead as children started committing fewer crimes. That indicates that while lead exposure may well have been a factor, it isn't the "real answer" that it's often characterized as."


They're missing some corroborative evidence though: several other countries removed lead in different years, and in almost all of them crime dropped around 20 years later.

> in the 1990s, even people who had been exposed to lead as children started committing fewer crimes

Maybe people also just commit less crime as they age?


Or maybe it isn't just a matter of whether someone has been exposed to lead at all but a matter of how much and over what duration. Toxicology is all about the dose but the public (and apparently Vox writers) tend to think about chemicals in terms of exposed/not exposed.

A big point of evidence that I didn't see in the Vox or Mother Jones piece is that crime rates seemed to decline in different areas corresponding to when they stopped using leaded gasoline.


That was covered in the Mother Jones article:

"Reyes discovered that this reduction wasn't uniform. In fact, use of leaded gasoline varied widely among states, and this gave Reyes the opening she needed. If childhood lead exposure really did produce criminal behavior in adults, you'd expect that in states where consumption of leaded gasoline declined slowly, crime would decline slowly too. Conversely, in states where it declined quickly, crime would decline quickly. And that's exactly what she found.

Meanwhile, Nevin had kept busy as well, and in 2007 he published a new paper looking at crime trends around the world (PDF). This way, he could make sure the close match he'd found between the lead curve and the crime curve wasn't just a coincidence. Sure, maybe the real culprit in the United States was something else happening at the exact same time, but what are the odds of that same something happening at several different times in several different countries?

Nevin collected lead data and crime data for Australia and found a close match. Ditto for Canada. And Great Britain and Finland and France and Italy and New Zealand and West Germany. Every time, the two curves fit each other astonishingly well. When I spoke to Nevin about this, I asked him if he had ever found a country that didn't fit the theory. "No," he replied. "Not one.""


B&Es are a lot tougher when you've got a bum knee and arthritis in your hands.


I believe they also measured this on a city level, too, and it fit the hypothesis.


I agree, but card 17 doesn't mention exposure to other harmful and poisonous elements (mercury and aluminum, among others), and there is strong scientific evidence that these elements trigger psychiatric disorders.

There are lots of studies on how environmental chemicals cause the illnesses that criminals seem to share, and these illnesses are related to underdeveloped or malfunctioning control centres in the brain.

Government policies to avoid exposing the population to poisons, together with psychiatric and neurologic advances in treatment and drugs, make a stronger case for that theory.

Update: missing 'c' in centres


> individual cards don't seem to be real URIs

Indeed! When you click on the target, you get this URL,


But if you load that URL, either directly or by reloading, you get a 404.


This is a bug. We're working on fixing it.


Looks good now!



Here the concern was that Lego would jeopardize its brand, but they also wanted to leverage that brand to increase appeal. Were the issues you were dealing with similar, or of a different nature? Can you give any examples? I know little of the biotech business, and I'm sure it'd be interesting.


That sounds interesting; can you provide a link to your homepage? Or describe the project in more detail?


Well, the first part of the project got scrapped, because it turns out people behave like a particle only under trivial conditions (or I modeled the problem wrong, that's also possible). The idea was that, since we know which target is attracting a pedestrian's attention, we could model both as a particle being attracted to a target (and being repelled by distractors). Whatever little remained of that idea ended up in another paper[1]. As for the grammar, the best I have is this paper[2].
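For intuition, a toy version of that particle idea might look like the following sketch. The force form, constants, and function names here are my own assumptions for illustration, not the model from the papers cited below:

```python
def step(pos, target, distractors, dt=0.1, k_att=1.0, k_rep=0.5):
    """One Euler step: linear attraction to the target,
    1/r^2 repulsion from each distractor."""
    fx = k_att * (target[0] - pos[0])
    fy = k_att * (target[1] - pos[1])
    for d in distractors:
        dx, dy = pos[0] - d[0], pos[1] - d[1]
        r2 = dx * dx + dy * dy + 1e-9  # avoid division by zero
        fx += k_rep * dx / r2
        fy += k_rep * dy / r2
    return (pos[0] + dt * fx, pos[1] + dt * fy)

# With no distractors the particle moves straight toward the target:
p = step((0.0, 0.0), (1.0, 0.0), [])
# A distractor above the path pushes it downward:
q = step((0.0, 0.0), (1.0, 0.0), [(0.0, 1.0)])
```

The catch the comment above alludes to is that real pedestrians only behave like this in trivial scenes; the interesting cases need something richer.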

I apologize for not having anything less dry to read than papers - as I'm not looking for another job, I don't feel the need to get off my ivory tower too often.

[1] http://ivan-titov.org/papers/emnlp13.pdf (poster: http://www.ling.uni-potsdam.de/~villalba/publications/refexp...)

[2] http://www.ling.uni-potsdam.de/~koller/papers/chart-gre-14.p...


> What? No. That's not how the Vorlons worked at all.

But it is how they wanted to be perceived. ;)

Granted, Microsoft can quite easily say "We have always been here."


So, when does Microsoft release their Genetic Algorithm / Z3 based Fuzzer called 'Morden'? It's what I want at least.


You'd think Snowden would've mentioned the NSA blackmailing the president.


That's the whole point, right? The info would be about any future presidential candidates.


Why, you didn't think that the links Google displayed were actually _links_ to the websites, did you? ;)

For a variety of reasons, the links that Google displays when you perform a search first redirect you to their servers, and then they redirect you to the site.

For example, here's the link for "sluggy.com" obtained from the search "sluggy"

As you can see, the original URL is in there, along with some metadata. 'usg' seems to be consistent across searches, so it probably ties back to me, or at least my browser's USeraGent. 'ei' is inconsistent across searches, so it's probably a per-search token, which presumably lets Google track the spread of this link and prevent abuse. 'sig2' is likely a signature of the whole thing (and thus also changes from search to search).
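Pulling the real destination back out of such a redirect is just query-string parsing. A minimal sketch, using a made-up redirect URL of the shape described above (the parameter values are invented for illustration):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical Google redirect URL; values are illustrative, not real.
redirect = ("https://www.google.com/url?"
            "url=http://sluggy.com/&usg=AOvVaw0abc123&ei=XYZ789&sig2=sIg")

params = parse_qs(urlparse(redirect).query)
original = params["url"][0]  # the destination embedded in the redirect
print(original)              # http://sluggy.com/
```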

* Disclaimer: I do not work for Google, and knowledgeable engineers there are probably laughing at the poor dweeb on the internet who thinks he knows more about their infrastructure than they do. ;)


Elon Musk is able to do what he does largely because he made himself rich off a startup. If you want to be Elon Musk, you have to be rich.


Even PayPal was bootstrapped partly with what he made off a previous startup.


The author explains exactly what he means by that in a bullet point.

"Because there are many other modeling groups, and scientific results are filtered through processes of replication, and systematic assessment of the overall scientific evidence, the impact of software errors on, say, climate policy is effectively nil."

The commenter seems to have ignored the explanation and taken the quote out of context; the objection was already answered in the original statement.


I don't find the author's explanation very reassuring for a couple of reasons:

1. There's probably a considerable amount of overlap between the software used by different research groups. The article says:

"Single Site Development – virtually all coupled climate models are managed and coordinated at a single site, once they become sufficiently complex, usually a government lab as universities don’t have the resources"

Which implies that several university research groups use the software being maintained by a handful of large labs. They might be running models with different parameters, but the underlying modeling software is the same. So an error in the software might be widely propagated across hundreds or thousands of research papers. Also new scientific results frequently build upon previously published research, so anyone who cited a buggy result as evidence may have a weaker case.

2. If other scientists have similar attitudes toward writing software (i.e., that climate software is "different" from other software), they're also not likely to be using the best practices of software development, and much of the modeling software in the world is likely to be unreliable. In that situation, how would you be able to figure out which modeling software is giving the correct result?

As that commenter wrote in another comment:

"Unfortunately, I have encountered the following argument a distressing number of times:

1. As a good scientist, I am automatically a good engineer.

2. As a good engineer, I am automatically a good programmer.

3. As a good programmer, I am automatically a good Software Quality Assurance Analyst (whatever that is, nothing significant I would guess).

What hubris. Programming is a domain in its own right. It’s the most complicated domain there is because if people could make their programs any more complicated they would. Programming is also an art. Even a computer science degree does not make you a good programmer."


If you missed it, you can see the next row being added to the output in the last 10 seconds of the video.

"Rule 110" is the name given to a cellular automata rule which is known to be Turing Complete.


Notch, of course, is famous for Minecraft, which was inspired by Infiniminer, made by the same author (Zach Barth) who made Infinifactory. I'm glad to see that Notch is having fun with the new game. :)


