
I hear the rate-of-change argument a lot, but I haven't been able to work out how the comparison is actually made. Historic temperatures (pre-nineteenth century) have to be inferred from proxies: ice cores, fossilized tree rings, et cetera. Those, in turn, must be put into a date range by some form of (I assume) radiometric dating. Carbon dating can get you ±80 years; the other common techniques have even less precision. Uranium-lead dating can only pin down a window on the order of 2 million years, for example.
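
To be concrete about where a figure like ±80 years comes from: conventional radiocarbon ages are computed as t = -8033·ln(F), where F is the measured fraction of modern C-14 and 8033 years is the Libby mean life, so a ~1% relative precision on F translates into roughly ±80 years. A toy sketch (the 0.70 fraction and the 1% precision are just assumed numbers for illustration):

    import math

    LIBBY_MEAN_LIFE = 8033.0  # years; conventional C-14 ages use this value by definition

    def radiocarbon_age(f14c, f14c_sigma):
        # conventional radiocarbon age t = -8033 * ln(F), F = measured C-14 fraction
        age = -LIBBY_MEAN_LIFE * math.log(f14c)
        # error propagation: |dt/dF| = 8033 / F, so sigma_t = 8033 * sigma_F / F
        return age, LIBBY_MEAN_LIFE * f14c_sigma / f14c

    # assumed example: a sample retaining 70% of modern C-14, measured to 1% relative precision
    age, sigma = radiocarbon_age(0.70, 0.007)
    print(round(age), "+/-", round(sigma), "C-14 years BP")   # ~2865 +/- 80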

How can researchers say that the rate of change is so much greater now when, even before accounting for propagation of error, the granularity of measurements pre-nineteenth century is, at best, on the same order of magnitude as the WHOLE of the era of modern precision measurements?
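
To spell out the propagation-of-error part: if a rate is estimated as r = ΔT/Δt, the dating uncertainty in Δt adds to the relative uncertainty of r. A minimal sketch with made-up numbers (1.0 ±0.3 degC of change over a 500-year interval whose endpoints are each dated to ±80 years):

    import math

    def rate_with_uncertainty(dT, dT_sigma, dt, dt_sigma):
        # r = dT/dt, with first-order propagation for independent errors:
        # (sigma_r/r)^2 = (sigma_dT/dT)^2 + (sigma_dt/dt)^2
        r = dT / dt
        return r, r * math.sqrt((dT_sigma / dT) ** 2 + (dt_sigma / dt) ** 2)

    # made-up proxy example: 1.0 +/- 0.3 degC of change across a 500-year interval
    # whose two endpoints are each dated to +/- 80 years
    dt_sigma = math.sqrt(2) * 80          # combined dating uncertainty on the interval
    r, r_sigma = rate_with_uncertainty(1.0, 0.3, 500.0, dt_sigma)
    print(f"{100 * r:.2f} +/- {100 * r_sigma:.2f} degC per century")   # 0.20 +/- 0.08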

EDIT: I realize my question is inconvenient, but downvoting without meaningful response is no way to counter skepticism. This topic gets more religious as time goes on.




Sometimes you can simply count backwards, e.g. in the rings of living trees (a toy sketch of the idea follows the link below). Tree rings do have their own problems, though:

http://www.climatedata.info/proxies/tree-rings/
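
Something like this, as a toy example (the collection year and ring widths are invented; real chronologies cross-date many overlapping trees to catch missing or false rings):

    # toy layer-counting example: ring widths (mm) measured inward from the bark
    # of a tree cored in a known year; each ring is one calendar year, so no
    # radiometric dating is involved (values and year are invented)
    COLLECTION_YEAR = 2015
    ring_widths_mm = [1.8, 2.1, 0.9, 1.4, 1.7]   # outermost (youngest) ring first

    chronology = {COLLECTION_YEAR - i: w for i, w in enumerate(ring_widths_mm)}
    for year in sorted(chronology, reverse=True):
        print(year, chronology[year])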

Ice core data is correlated with solar activity cycles:

http://www.climatedata.info/proxies/ice-cores/

> Those, in turn, must be put into a date range based on some form of (I assume) radiometric dating.

Google can tell you precisely how it is done. Government institutions such as NASA and NOAA, or the universities (and their equivalents in other countries), generally have reliable information, and it comes straight from the source, since most publishing scientists work there.


Since CO2 emissions started increasing dramatically around the 1700s, that ±80 years of carbon dating becomes workable, no?


This is how science works: it is the best model and projection we can make from the current data. If you can improve on it, please go ahead.


You aren't addressing the arguments at all here. With the error bars that black6 suggests, we cannot possibly infer anything meaningful over long periods of time.


I know how science works. That is why I am asking how we can compare rates of change when the sample rates being compared are about three orders of magnitude apart.

EDIT: to give more benefit of the doubt, call the difference three orders of magnitude instead of six. For example, a monthly instrumental average is a sample rate of roughly 0.4 microhertz, while a dating resolution of ±80 years works out to roughly 0.4 nanohertz, about a factor of a thousand.
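
A quick back-of-the-envelope check of those figures, treating each record as one sample per period (the monthly choice for the instrumental record is just illustrative):

    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    def sample_rate_hz(period_years):
        # one sample every `period_years` years, expressed as a frequency in Hz
        return 1.0 / (period_years * SECONDS_PER_YEAR)

    monthly = sample_rate_hz(1.0 / 12)   # monthly instrumental averages (illustrative)
    proxy   = sample_rate_hz(80.0)       # record resolved no finer than +/- 80 years

    print(f"monthly: {monthly:.1e} Hz")          # ~3.8e-07 Hz, i.e. ~0.4 microhertz
    print(f"proxy:   {proxy:.1e} Hz")            # ~4.0e-10 Hz, i.e. ~0.4 nanohertz
    print(f"ratio:   {monthly / proxy:.0f}x")    # ~960x, about three orders of magnitude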



