Both CO2 concentrations and the global temperature varied significantly over that period of Earth's history (~400-250 million years ago). Around 300 Mya, CO2 was comparable to what it is today, and temperatures were likewise comparable. Around 400 Mya, CO2 was around 4000 ppm and the temperature was likewise higher (by ~8 °C). No contradiction.
Also, when discussing matters scientific, please cite your scientific sources --- otherwise the discussion is of little value. Like so:
After repeated interactions with the biochem profs at JHU, I think that in their case, it wasn't temporary. Maybe it was due to spending so much time standing in front of a blackboard that it just stayed reduced... and they had slaves... er, I mean graduate students doing all of their research for them, so they probably weren't getting a whole lot of intellectual exercise.
The main reason is probably that physicists found graphene more interesting to study than buckyballs. This can be quantified by looking at the number of articles published in top physics-only journals (e.g. Physical Review Letters).
There are a couple of good reasons why so: (i) graphene has a "simple" electronic and atomic structure that has interesting features of its own (Google for the Dirac cone), (ii) graphene flakes are big compared to buckies, so it's possible to study them with common methods that physicists like -- electronic transport, crystallographic methods, you name it, and (iii) many proposed applications of graphene e.g. in electronics fall close to physics.
In short: physicists are fond of simple things, and graphene is simple. So, Nobel prize in physics.
Such large differences imply a difference in the algorithm.
Indeed, you write
c = mat2.getcol(j)
norms[0, j] = scipy.linalg.norm(c.A)
which means (i) extract a sparse column vector, (ii) convert it to a dense vector, and (iii) compute the norm of that dense vector. This should explain the speed difference. Looking at the nnz, the dense norm can take up to a factor of 5e5/(1.2e8/1.3e7) ≈ 54000 longer :)
The main issue here is that the linear algebra stuff under `scipy.linalg` doesn't know about sparse matrices, and tends to convert everything to dense first. You'd need to muck around with `m2.data` to go faster.
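A minimal sketch of what mucking around with the raw data might look like (the matrix here is a made-up example; this assumes CSC format, where the `data`/`indptr` arrays store each column's nonzeros contiguously):

```python
import numpy as np
import scipy.sparse as sp

# Made-up example matrix in CSC format, so each column's nonzero
# values sit contiguously in m2.data.
m2 = sp.random(1000, 500, density=0.01, format="csc", random_state=0)

# Column norms straight from the sparse data array: slice out each
# column's nonzeros and take the norm of just those entries, never
# materializing the dense column.
col_norms = np.zeros(m2.shape[1])
for j in range(m2.shape[1]):
    start, end = m2.indptr[j], m2.indptr[j + 1]
    col_norms[j] = np.linalg.norm(m2.data[start:end])
```

Since only the nonzeros are touched, this scales with nnz rather than with the full column length. (Newer SciPy versions also provide `scipy.sparse.linalg.norm`, which accepts sparse matrices directly.)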
Thanks a bunch for the help. It would be nice if this were documented somewhere other than the source code, though :( And none of my googling turned up m2.data.
I'd actually guessed that it might be making columns full, but I'd expected to see a step-ladder up-and-down memory pattern as vectors were allocated, gc was triggered, more vectors were allocated, etc. I didn't observe such a pattern; memory usage was almost constant.
Anyway, thanks again for your help -- I'd offer via email to buy you a beer if you're ever in SF, but no email, so...