Hacker News
More Wrong Things I Said in Papers (scottaaronson.com)
97 points by ikeboy on July 29, 2016 | 33 comments



You know, these days we shouldn't have the concept of a blog post or paper being "published". It should be a collaborative effort.

The initial post should open a comment period where you ask for feedback, and no matter how large your mistakes, they should be considered okay.

And slowly, as it gets more eyes on it, it should solidify into a more accepted paper.


I think an open system along these lines would result in too poor a signal-to-noise ratio. There are far more people willing to engage in bikeshedding than people with the expertise, interest, time, and funding to make valuable contributions.

Today's system is already collaborative. It is precisely those people with the expertise, interest, time, and funding for a particular problem who co-author the papers. Then there is the review process to provide feedback and spot the serious mistakes and problems with the papers. After that, for significant works, there are follow-up papers by various authors that expand, refute, or clarify the original work.


A large part of science runs on ego, and the process you describe would reduce that considerably. "Publish or perish" works only if there is a small number of authors at the top of a paper.


I think you can have both -- keep a record of who made what suggestion when and which got merged into what branch on whose approval. Wiki/git-style tools already do (most of) this.

In fact, that makes ego validation easier: "I contributed these insights, reviewed these contributions, and received these upvotes/contribution acceptances."
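Git already keeps exactly the record the comment above describes. A throwaway sketch (the repo path, author names, and commit messages here are all invented for illustration) that builds a tiny repository and then asks git who contributed what:

```python
# Sketch: git's history already records "who made what suggestion when".
# Everything here (paths, names, messages) is made up for illustration.
import subprocess
import tempfile


def run(args, cwd):
    """Run a git command and return its stdout as text."""
    return subprocess.run(args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout


repo = tempfile.mkdtemp()
run(["git", "init", "-q"], repo)

# Alice writes the initial draft.
run(["git", "config", "user.name", "Alice"], repo)
run(["git", "config", "user.email", "alice@example.com"], repo)
open(f"{repo}/paper.md", "w").write("draft\n")
run(["git", "add", "paper.md"], repo)
run(["git", "commit", "-qm", "Initial draft"], repo)

# Bob revises it.
run(["git", "config", "user.name", "Bob"], repo)
run(["git", "config", "user.email", "bob@example.com"], repo)
open(f"{repo}/paper.md", "a").write("revision\n")
run(["git", "commit", "-qam", "Revise section 2"], repo)

# Per-author contribution counts: the "record of who did what".
contributions = run(["git", "shortlog", "-sn", "HEAD"], repo)
```

`git shortlog -sn HEAD` then lists one commit each for Alice and Bob, which is the raw material an "upvotes/contribution acceptances" layer could be built on top of.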


The same way open source projects don't afford their collaborators any fame or recognition? Publish or perish still works as long as the history of contributions is clear (which seems like a requirement for a variety of reasons).


I know several open source projects first-hand where the founder gets all the recognition while most of the valuable parts were written by others.

Of course you can see the contributions in the VCS history, but that doesn't translate to public recognition.


Great, let's change that. Letting ego drive everything does not seem like a good idea.


Dude, how will science progress without ego? That's like trying to remove greed from business. The remainder might have higher quality than what we have now, but the quantity would be tiny.

It's not like scientists are in it for the money...


Okay, let me know when you get us all to be robotic science-proving automatons.


Yeah, because things like human nature are something we can just go ahead and change...


If the communists could do it, we can too! Oh wait....


Different specialties have different authorship cultures; it is trivial to demonstrate that science goes on (in physics, no less!) when papers are headed by many, many authors.

http://www.improbable.com/airchives/classical/articles/peanu...


> small number of authors

Have you seen papers in particle physics? Sometimes the author lists don't fit on a single page!


arXiv.org does a better job than most at allowing resubmissions... and serving as a repository for rebuttals that might not warrant journal publications.

Lijie Chen (the student Scott mentioned) used arXiv.org's preprints for this purpose.


I think we need, first and foremost, a good mechanism which allows people to comment on papers, ask questions, and (for authors) add errata.

Right now, there are websites which allow people to discuss papers, but the problem is that a paper has no "home", so the discussion takes place on different sites, and therefore gets watered down. I guess we need to agree on a standard.


In my PhD thesis I claimed a function was convex that was not. The embarrassing part was that in the next section I proved that an almost identical function was non-convex.

Phew, I feel better getting that off my chest. Good thing nobody reads dissertations.


I wish I was smart enough to be that wrong.


How do you know you are not? Well, you certainly have a case by making this assumption in the first place.

Are you "not smart enough" in all areas, or just not in (deep) math? Rhetorical question.

Look at "IQ". There is no such thing as "IQ" - simply because there are so many vastly different things that you should measure. I forget which lecture it was, but you can show that the exact same "IQ" score can belong to people with vastly different skill sets, or that a person with a completely different distribution of abilities can have a higher IQ yet lose out in one or more fields. The guy with Einstein's math and physics abilities can be a total dimwit when it comes to managing or just dealing with people, for example - skills I would argue the world needs much more than it needs additional math geniuses.

When you don't understand something, ask yourself: do I actually need to understand this?

I took about 70 edX and Coursera courses, mostly in completely different fields than my own (CS). One of the most important lessons is that there is sooooooo much knowledge.

Have a look at this little story of a very simple product that humans make (ignore which concrete one they chose): https://medium.com/@kevin_ashton/what-coke-contains-221d4499...

Summary quote (product name replaced with "X"):

    > The number of individuals who know how to make X is zero.
    > The number of individual nations that could produce X is zero.
That's for one of the simplest products out there.

You do not need to understand each and every subject. You are not dumb if you don't. You may be dumb if you think you are dumb because you don't understand every single random subject... :)


>Look at "IQ". There is no such thing as "IQ" - simply because there are so many vastly different things that you should measure.

When you measure vastly different things, like reaction time and performance on IQ tests, there is a surprisingly strong correlation (> 0.8 without correction for attenuation). This supports the hypothesis that there is such a thing as a g factor.

None of those vastly different things we can measure captures it directly, but together they all point in the same direction. What does reaction time have to do with academic performance, job attainment, income, or IQ tests? Apparently quite a lot.

Most of those who argue for different intelligences don't provide good evidence against the strong correlations.
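For readers unfamiliar with what a correlation of 0.8 means concretely, here is a minimal sketch of how the Pearson coefficient is computed. The paired numbers below are invented purely to illustrate the shape of such data (faster reaction times paired with higher scores), not taken from any study:

```python
import math


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Invented illustrative data: reaction time in ms (lower is better)
# paired with a test score, so the correlation comes out strongly negative.
reaction_ms = [180, 200, 220, 250, 300]
test_score = [130, 120, 110, 100, 90]
r = pearson_r(reaction_ms, test_score)
```

With these made-up numbers `r` comes out close to -1; the empirical claim in the comment is that real reaction-time and test-score data correlate roughly this strongly (in absolute value) across large samples.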


That argument is like "economy" vs. "individual". When I wrote "there is no such thing as IQ" I meant (as stated?) that it is not a concrete thing, but a summary measure of many things. Of course it exists; it just doesn't represent a (single) concrete thing. The growth rate of the economy is perhaps a comparable summary measure.

Of course averages work fine when you look at the grander picture, but they are useless when looking at individuals. So sure, IQ "works" in that sense, for "big picture" stats. It's not good for individuals. (Necessary additional comment: looking at more than one individual is not "looking at an individual"; it is again "statistics".)

If you only want to pick "the best" on average, going by "IQ" is enough. If you want to match each person (individually, not by global average - in which case, after hiring by IQ, you just place them anywhere) to the appropriate job, it isn't. The distribution of skills underlying a given IQ score can be very different.

    > IQ is an imperfect predictor of many outcomes. A person who scores very low
    > on a competently administered IQ test is likely to struggle in many domains.
    > However, an IQ score will miss the mark in many individuals, in both directions.
http://blogs.scientificamerican.com/beautiful-minds/what-do-...


> IQ is an imperfect predictor of many outcomes.

It's _very_ often the best single predictor.

As the article you quote puts it:

>IQ is an imperfect predictor of many outcomes. A person who scores very low on a competently administered IQ test is likely to struggle in many domains. However, an IQ score will miss the mark in many individuals, in both directions.

If an IQ test misjudges 5-20% of applicants, it's still a hell of a predictor.


    >  it's still a hell of a predictor.
I think I already responded to that. The question is "on what level" and "what for". See my previous response.

I wrote: "If you only want to pick 'the best' on average, going by 'IQ' is enough. If you want to match each person (individually, not by global average - in which case, after hiring by IQ, you just place them anywhere) to the appropriate job, it isn't."

It's the same as arguing for racial profiling because "it works". Well it does! If you don't care about people (individuals) but only about peoples.


I wrote a paper a few years ago that was published in a journal and that I felt was subsequently refuted by another paper. (I argued that, for various reasons, a server can never tell whether a client has been virtualized or modified in the absence of hardware-based anti-virtualization features, but https://www.cs.cmu.edu/~jfrankli/hotos07/vmm_detection_hotos... and some recent work on obfuscators make me think I was mostly to entirely wrong.) Particularly since I don't have a blog right now, I keep thinking that I have no way to correct my assertions in public. It feels like it would be useful to be able to do that, if only so that anyone who somehow comes across the old paper could quickly find out why its arguments aren't valid.


Personally I wouldn't worry about it, given it was an honest mistake, so long as more recent papers that refute your claim cite your paper. Whenever I'm reading an interesting paper I always do a reverse citation search to see what more recent papers say about it. I find that if a paper's claims are important enough someone in the field will publish a paper if they have results that contradict it.


If it makes you feel any better, I don't think that paper refutes your claim through any general law. It just enumerates a bunch of side effects of current virtualization approaches, all of which are focused on performance/isolation. It's not inconceivable that someone could develop virtualization software that offers a significantly slower clock speed processor to the VM to give it plenty of time to do things in the luxurious gaps without giving away timing information.
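The "timing information" idea above can be made concrete with a toy sketch. This is not any paper's actual detection method; it is a naive, invented heuristic: time a fixed busy loop repeatedly and look at the spread, on the theory that hypervisor scheduling gaps show up as timing jitter. As the comment notes, a hypervisor that also slows the guest's virtual clock would defeat exactly this kind of check:

```python
# Toy sketch of timing-based VM detection (invented heuristic, not a
# method from the cited paper). A hypervisor controlling the guest's
# clock could hide the gaps this tries to observe.
import time


def busy_loop_seconds(iterations=1_000_000):
    """Return wall-clock seconds spent in a simple CPU-bound loop."""
    start = time.perf_counter()
    acc = 0
    for i in range(iterations):
        acc += i
    return time.perf_counter() - start


# Repeat the measurement; a large spread between runs *might* hint at
# scheduling gaps, but is equally explained by ordinary OS noise.
samples = [busy_loop_seconds(100_000) for _ in range(5)]
spread = max(samples) - min(samples)
```

The weakness is visible in the code itself: `spread` conflates hypervisor pauses with normal scheduler noise, which is why the comment's point (that only hardware-rooted mechanisms, not timing tricks, give a reliable answer) stands.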


PubPeer might be a decent option. They have some mechanism to identify comments from the author but I haven't investigated how it works.

www.pubpeer.com


One usual approach is to write to the original journal and see if they'll post a notice.


Whatever happened to loyalty.org & virtuanova?


Some things happened that made me no longer feel that my life was "new", so I stopped blogging there. (One thing is that the friend's server that it was hosted on went down.) I had trouble thinking of a new blog name and so neglected to start a successor blog.


If it makes the process of restarting any easier, consider sticking with one of the existing brands.

I'm a big fan of not just writing to write, but focusing on having something to say. I find that when you think you do, you're almost always correct.


Aaronson is awesome. Intellectual honesty like this is to be admired in any field.


It's rare to see folks admit mistakes even in private. Writing it on a blog which is probably widely read is extremely rare. Such incidents are deeply inspiring.


http://retractionwatch.com/ is fun to visit once in a while



