The research isn't there. The jury is still out on whether the long-term consequences are a net benefit. In the end you're talking about increasing emissions for a temporary decrease in temperatures. And the chemicals we have that are good candidates for albedo modification are quite toxic. Today more than 10% of deaths globally can already be attributed to poor air quality.
If India is experiencing large scale mortality from warming, they aren't going to give a damn about your concerns. They're just going to inject aerosols into the stratosphere.
Besides the well-documented increases in ground-level PM2.5 concentrations, which we already have clear research on, we'd also face:
- ozone layer depletion
- reduced precipitation in an area that is already drought-stricken, as well as other difficult-to-predict effects on local climate and weather
- alteration of many stratospheric chemical cycles. We're talking changes to nitrogen oxide chemistry and even impacts on the hydroxyl radicals that drive the atmosphere's cleansing capacity
- increased risk of acid rain from sulfuric acid
Like I said: the research is not there. There are many, many side effects we haven't worked out yet.
And spare me the personal attacks about dishonesty, jackass
Humans tend to not breathe in a lot of stratospheric aerosols, on account of that being pretty high up.
As they sink down, they grow larger (condensation & coagulation). Once they reach the troposphere, they mostly come down via precipitation, which again doesn't put much of them into the air people breathe.
They can absolutely have other effects (e.g. SO2/acidification), but air quality isn't really the main concern. For SO2 specifically, there's actually very little mortality sensitivity: https://www.giss.nasa.gov/pubs/abs/wa01010x.html
You're right that the research isn't there yet to make statements with confidence, but that applies to the air quality claim as well.
We were talking about timelines. Chinese timelines are faster than Western ones but slower than Japanese ones. So maybe you can expand your thought about how China is counting years differently.
Fast reactors have an extremely serious potential failure mode.
In a thermal reactor, reactivity is maintained by a carefully designed lattice of fuel elements and moderator. Disrupt this lattice and reactivity goes down. Thermal neutrons are also highly absorbed by certain neutron poisons with resonances that enable neutron capture at low energy; these can be added to shut down any potential reaction.
Fast reactors aren't like that. If fuel rearranges (for example, by melting and flowing into coolant channels) reactivity can increase. A fast reactor will have ~100 times the "bare core" critical mass of fissile material in it, so there's plenty of room for serious rearrangement to bring fissile material into a prompt fast supercritical configuration.
That by itself could give you an explosion. But if that explosion then compresses some other part of the system into supercriticality, one could get an even more serious explosion. The possibility of something with a yield in the kiloton range can't easily be ruled out. This would be far worse than Chernobyl.
The fast reactor concepts I've seen deal with this by saying "our design can't ever melt down". Color me skeptical on that, and defense in depth says you don't believe such claims when failure could be so catastrophic. Even if regulators can be convinced (or be made to say they are convinced), the first experience that indicates the assumption wasn't true will lead to all reactors of that design being permanently shut down. This would be a serious financial risk to anyone thinking of building them.
If I were dead set on a fast reactor I'd look at something like a fast MSR (chloride salt) where such rearrangement could be ruled out.
Not sure about this argument, do you have any references?
In a LWR, if the coolant/moderator boils away, sure, the reactivity goes down. But there is plenty enough decay heat left to melt all the fuel that can then flow into a puddle of suitable geometry and go boom. Hypothetically speaking, at least.
I suppose in practice most LWRs use lightly enriched fuel, so it's very hard to get enough material close enough together to make it critical, let alone supercritical, without a moderator of some sort. Of course, plenty of research reactors, naval reactors, etc. have operated with very highly enriched fuel (90+%?), but even these have AFAIU so far managed without accidentally turning themselves into nuclear bombs.
Seems most contemporary civilian fast reactor designs are designed to operate with HALEU fuel, where the limit is (somewhat arbitrarily) set at 20%. That's a lot higher enrichment than your typical LWR, but still much lower than you see in weapons, and you still need quite a lot of it before it can go boom.
It's straightforward. Consider what would happen (for example) if all the fuel in a reactor is compressed into a more compact configuration.
In a thermal reactor, there's no problem, as there's now no moderator. There was massive rearrangement and compaction of melted fuel at the TMI accident, but criticality was not going to be a serious issue for the fundamental reasons I gave above.
In a fast reactor? It can only become more reactive. Anything else there was only absorbing neutrons, not helping, and the geometric change reduces neutron leakage.
Edward Teller somewhat famously warned about the issue in 1967, in a trade magazine named "Nuclear News":
“For the fast breeder to work in its steady state breeding condition, you probably need half a ton of plutonium. In order that it should work economically in a sufficiently big power producing unit, it probably needs more than one ton of plutonium. I do not like the hazard involved. I suggested that nuclear reactors are a blessing because they are clean. They are clean as long as they function as planned, but if they malfunction in a massive manner, which can happen in principle, they can release enough fission products to kill a tremendous number of people.

… But if you put together two tons of plutonium in a breeder, one tenth of one percent of this material could become critical. I have listened to hundreds of analyses of what course a nuclear accident could take. Although I believe it is possible to analyze the immediate consequences of an accident, I do not believe it is possible to analyze and foresee the secondary consequences. In an accident involving plutonium, a couple of tons of plutonium can melt. I don’t think anyone can foresee where one or two or five percent of this plutonium will find itself and how it will get mixed with other material. A small fraction of the original charge can become a great hazard.”
(Natrium is not a breeder but the same argument holds.)
That no fast reactors have yet exploded is of course no great argument. How many fast reactors have been built, particularly large ones? Not many. And we've already seen a commercial fast reactor suffer fuel melting (Fermi 1).
That's false too. Most of the arguments from antinuclear activists in this direction were about the physical capability to modulate being too slow, which was false. Regardless, EDF is modulating now mostly for economic reasons. Above a 50-60% capacity factor you'll be fine; below that it'll be problematic with any asset, at which point you need to ask yourself whether you love gas, whether you let nuclear run at some minimal CF, or whether you mandate each NPP to build a BESS buffer to absorb capacity when needed.
Most of the claims from nuclear opponents talk about lack of flexibility in nuclear without specifying whether they are talking about technical or economic flexibility. Dishonest nuclear proponents then interpret that in a strawman way, as if the opponents were arguing they couldn't technically scale power output.
You can design a load following nuclear reactor (that's the industry term, only activists and marketers say flexible). Nobody does that because the basic NPP design that everyone uses is for a base load reactor. We have had load following NPP designs for 50 years but getting them approved is a political process that greens block.
You are just trying to politicize the laws of physics due to your own lack of understanding of the topic. Meanwhile, your solar panels are manufactured mostly with power gotten from coal, in the 3rd world, and are mostly sited in places where they do little to no good while at the same time destabilizing the grid. Then you have the temerity to argue with actual engineers who spend their lives studying this topic. Seriously???
Load following for modern reactors is mostly built in. For some Gen 2 plants it's possible to adapt ALFC from Framatome (as Germany did in the past). But if you want the fastest load following you need BWRs.
Solar being manufactured mostly with coal power is irrelevant; the panels offset that carbon many times over during their lifetime. The real problem, on the other hand, is providing firm power. In some regions, like Australia, it could be realistic to get by with renewables alone. In other regions, like, say, Germany, it's not realistic, and that's confirmed even by Fraunhofer ISE.
Calling BS on that last claim. It's not realistic to do it in Germany with just batteries for storage (since something like Li-ion batteries is poorly suited to handling Dunkelflauten or seasonal storage). But throw in a very low-capex, if poor-RTE, storage technology and renewables can easily get to 100% anywhere.
I will add that if a place like Germany tries to compete in energy-intensive industry against places nearer the equator with cheap, low seasonality solar they're going to lose.
Well, the race is definitely on. But another couple of years of reduced costs for solar and wind deployments and it may well be that nuclear projects underway will end up being cancelled before construction is complete.
Very long duration storage in my opinion is going to be thermal.
Standard Thermal's approach seems very simple and promises to deliver 24/7/365 heat (ultimately sourced from PV) at 600 C at a cost competitive with Henry Hub natural gas. It's difficult to see how nuclear competes with that.
There is a very interesting tech using superconducting loops to store power. I think that will be the 'battery of the future', but it is going to take a while before that sort of thing is safe enough (and cheap enough) for things like vehicles. But for stationary short-term 'ride through' situations it has already been deployed, and also for stabilization of the grid in the presence of fast-fluctuating loads or generators.
Superconducting storage is inherently short term, as the capex per unit of energy storage capacity is rather high. I doubt it's competitive with batteries even for diurnal storage. It might have niche uses, for example smoothing demand (on time scale of hours, not days) from intermittent high power users like electric arc furnaces.
Well, if you're programming in C or C++, there may not be a parse tree. Tree-sitter makes a best effort attempt to parse but it can't in general due to the preprocessor.
Great point. C/C++ with macros and preprocessor directives is where tree-sitter's error recovery gets stretched. We support both C and C++ in sem-core(https://github.com/Ataraxy-Labs/sem) but the entity extraction is best-effort for heavily macro'd code. For most application-level C++ it works well, but something like the Linux kernel would be rough. Honestly that's an argument for gritzko's AST-native storage approach where the parser can be more tightly integrated.
It's an argument against preprocessors for programming languages.
Tree-sitter's error handling is constrained by its intended use in editors, where incrementality and efficiency are important. For diffing/merging, a more elaborate parsing algorithm might be better, for example one that uses an Earley/CYK-like algorithm but attempts to minimize some error term (to which a dynamic-programming algorithm could naturally be extended).
Interesting idea. Tree-sitter's trade-off (speed + incrementality over completeness) makes sense for editors but you're right that for merge/diff a more thorough parser could be worth the cost since it's a cold path, not real-time. We only parse three file versions at merge time so spending an extra 50ms on a better parse would be fine. Worth exploring, thanks for the pointer.