They don't, and they can't cheat physical realities either.
Plants only filter out very small amounts of CO2 from the air over relatively long timeframes. That's why crop-based biofuels require such enormous amounts of space.
"The amount of CO2 removed from the atmosphere via photosynthesis from land plants is known as Terrestrial Gross Primary Production, or GPP. It represents the largest carbon exchange between land and atmosphere on the planet. GPP is typically cited in petagrams of carbon per year. One petagram equals 1 billion metric tons, which is roughly the amount of CO2 emitted each year from 238 million gas-powered passenger vehicles."
Man-made carbon emissions amount to over 40 billion metric tons of CO2 annually, according to a quick Google search. If plants only took in 1 billion tons per year, worldwide terrestrial plant carbon exchange would amount to less than 2.5% of the CO2 humans release.
From the perspective of averting climate change it is indeed very small.
A team of scientists led by Cornell University, with support from the Department of Energy’s Oak Ridge National Laboratory, used new models and measurements to assess GPP from the land at 157 petagrams of carbon per year, up from an estimate of 120 petagrams established 40 years ago and currently used in most estimates of Earth’s carbon cycle.
Whether 157 billion tons or 120 billion tons, these numbers are large compared to anthropogenic releases. Of course most of this carbon is quickly cycled back out from land plants due to animals/bacteria/fungi consuming the biomass produced by land plants.
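A quick unit check helps untangle the numbers in this thread (a sketch only; the 40-billion-ton figure is the mass of CO2, while GPP is quoted in petagrams of carbon, so a molar-mass conversion is needed before comparing them):

```python
# Rough unit check: anthropogenic emissions vs. terrestrial GPP.
# Emissions are usually quoted in metric tons of CO2; GPP in petagrams
# of carbon (1 Pg = 1 billion metric tons). Carbon makes up 12/44 of
# the mass of a CO2 molecule.

co2_emissions_gt = 40                          # Gt CO2 per year, approximate
emissions_pg_c = co2_emissions_gt * 12 / 44    # ~10.9 Pg C per year

gpp_old, gpp_new = 120, 157                    # Pg C per year (old and new estimates)
print(f"Emissions: {emissions_pg_c:.1f} Pg C/yr")
print(f"GPP is {gpp_old / emissions_pg_c:.0f}-{gpp_new / emissions_pg_c:.0f}x larger")
```

So even after converting units, gross uptake by land plants dwarfs emissions by an order of magnitude; the catch, as noted above, is that nearly all of it is respired right back out.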
You still need to turn incredible amounts of biomass into charcoal or other stable forms of carbon to make a dent in atmospheric CO2. It would take decades of hard work on gigantic scales to unburn and bury the fossil fuels we used.
That's the pay-off of our 150-year rush to monetize as much of the Earth's natural resources as possible -- all while making stringent efforts to keep quiet about, or outright suppress, efforts to utilize the benefits of free solar energy.
Having polluted and despoiled much of the biosphere, of course we'll be donating our supposed wisdom and that hard work to the future generations that will enjoy the fruits of our labors and entreasurement.
They're pretty amazing for the amount of capital cost. $50 in seed and an acre of land can sequester several to over a dozen tons of carbon per year. It might not be space efficient but it requires basically zero infrastructure.
Which is something some 'environmentalists' just don't get when I try to explain it to them.
The other benefits of a biodiverse green belt are great, but if tomorrow I had a concrete system that captured CO2 at 10x the lifetime rate of trees at a similar density, guess what I would like my futuristic city to look like.
Again, it's not that all telehealth doctors are great at this; it's that LLMs, when continually prompted, cave in and say something (with warnings the reader will opt to ignore) instead of remaining adamant that things are just too uncertain to say anything of value.
This is largely because an LLM that guesses an answer is rewarded more often than one that doesn't answer at all, which is not true in the healthcare profession.
LLMs almost never reply with "I don't know." There are mountains of research as to why, but it's very well-documented behavior.
Even in the rare case where an LLM does reply with "I don't know, go see your doctor," all you have to do is ask it again until you get the response you want.
> Physicians use all their senses. They poke, they prod, they manipulate, they look, listen, and smell.
Sometimes. Sometimes they practice by text or phone.
> They’re also good at extracting information in a way that (at least currently) sycophantic LLMs don’t replicate.
If I had to guess, I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie, hide, or prevaricate, and they get more time with the person.
> Sometimes. Sometimes they practice by text or phone.
For very simple issues. For anything even remotely complicated, they’re going to have you come in.
> If I had to guess, I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie, hide, or prevaricate, and they get more time with the person.
It’s not just about being intentionally deceptive. It’s very easy to get chat bots to tell you what you want to hear.
Would be interested to hear a legal expert weigh in on what 'advice' is. I'm not clear that discussing medical and legal issues with you is necessarily providing advice.
One of the things I respected OpenAI for at the release of ChatGPT was not trying to prevent these topics. My employer at the time had a cutting-edge internal LLM chatbot which was post-trained to avoid them, something I think OpenAI was forced to be braver about in their public release because of the competitive landscape.
I'm struggling to understand what the result really is: it seems that some dogs at some point would rather play with a toy than eat or come play with their owner. That seems pretty normal. Is this really "addictive-like"? Why isn't it "really enjoy"?
Whenever I try to read up on it, it seems like glaciers are receding at ~2x their without-climate-change rate. That's a huge increase, but it doesn't seem like there's something that a person can experience at a visceral level here that is based on fact and not just preconception.
It's definitely striking, I can't deny that. I crossed the last remnants of an almost-extinct glacier last year that my guide guessed would be gone in 1-3 years: at the beginning of his career it was a real glacier with non-trivial extent, crevasses, etc.
I live in one of the places in the lower 48 with relatively easy access to glaciers. The change in some of them is fairly noticeable to me over the last, say, 20 years. It tends to feel grim and hopeless if I think about it too much. But I hike, so I have spent more time close to them than the average person.
I grew up in a small town in rural Alaska that would have been completely under glacier ice when Columbus reached the Americas. In the time between Captain Cook exploring the area in the 18th century and the next western survey a hundred years later, the coastline had been transformed by glaciers receding and revealing inlets that hadn't been there for Cook to map. The glacier that was directly in between my town and the highway to Anchorage when I was a child is all but gone now, and there is a road.
Yes, there are unilateral policies and treaties that let the US and the UK collaborate in legal action (going through US institutions to judge them), some of them referenced in https://meta.wikimedia.org/wiki/Legal/Legal_Policies -- a keyword might be "letters rogatory".
Wikimedia also seems to have a presence in the UK https://wikimedia.org.uk/ that presumably would be affected.
In most cases they might have enough pull to get folks blacklisted by payment processors, but Wikimedia in particular might win that one.
    with sql_context(columns="x"):
        query, values = sql(t"SELECT {col} FROM y")
I think
1. this is relying on the `col = "x"` in the previous example
2. columns is a set of strings, so it might be sql_context(columns={"foo", "bar", "x"}) to allow those as valid options. It just happens that "x" is a collection supporting the `in` operator so it works much like the set {"x"} would.
2a. (You might hope that something would convert such a string to a singleton set, but I don't think it does, which would have weird results with a multi-letter string.)
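Point 2a is easy to demonstrate: Python's `in` operator does substring matching on strings but whole-element matching on sets, so a bare string allow-list only coincidentally behaves like a singleton set (a small illustration independent of the hypothetical `sql_context` API):

```python
# `in` on a string is substring search; `in` on a set is membership.
print("x" in "x")        # True  -- happens to match the set behavior
print("x" in {"x"})      # True

# With a multi-letter string the two diverge:
print("ba" in "bar")     # True  -- substring match; a partial column
                         #          name would slip through the check
print("ba" in {"bar"})   # False -- sets only match whole elements
```

That divergence is exactly the "weird results with a multi-letter string" mentioned above: `columns="bar"` would accept `col = "ba"`.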
Their model had 15 slots spread across three lists, with Prevost appearing on one list in the top spot (and not in the other two lists at all). I am not sure we can conclude a ton about their predictive power.