This is incredible. I mean, we're a long way from seeing this in affordable silicon solar cells, but it's still amazing to me that materials engineering can accomplish things like this.
It leads to roughly a twenty percent boost in theoretical max efficiency. If that can be translated to a twenty percent actual improvement, that's huge.
We're already below 2 cents/kWh utility-scale cost for solar and utility-scale battery storage [1]. We're rapidly approaching the point where the solar is "almost free" and you're just going to pay for the battery storage (until that cost is rapidly driven down as well).
> We're already below 2 cents/kWh utility-scale cost for solar and utility-scale battery storage [1].
That price is under optimal conditions. They built in the desert where there are clear skies, lots of sunlight and cheaper land. Plus whatever kind of tax incentives or other subsidies they get in California that conceal part of the cost and may not be available everywhere.
The real question is how far can the price continue to fall before it hits a floor. And what happens to the other generation methods as a result of the competition. If people build a bunch of solar farms and storage and replace natural gas plants with them, then there is less demand for natural gas, so the price falls and the remaining natural gas plants stay competitive due to the reduced fuel cost.
The "solar takes over everything" narrative is possible, but there is no guarantee of it, and it's probably best not to take it as a hard assumption and make no contingencies for if it doesn't actually work out that way.
Any subsidies solar receives are a tiny fraction of what oil, gas and coal still receive today, without even factoring in all the subsidies they've received over many years, or the ton of money spent on geopolitics in order to secure fossil fuel supply lines. Or the subsidies and damage they cause when they leak all over the place, destroying entire ecosystems.
And yeah, the massive subsidies they've received by not having to pay for polluting the air, as well as being major contributors to climate change.
And the massive "subsidies" given by governments by not prosecuting any criminal or civil charges over the fact that gas and oil companies knew about climate change in the 60s and tried to cover it up.
> Any subsidies solar receives are a tiny fraction of what oil, gas and coal still receive today, without even factoring in all the subsidies they've received over many years, or the ton of money spent on geopolitics in order to secure fossil fuel supply lines. Or the subsidies and damage they cause when they leak all over the place, destroying entire ecosystems.
Very true, but also kind of the point. Those subsidies still exist -- we still have to do work to get rid of them, and in the meantime replacing fossil fuels with non-carbon sources is at best slower and at worst plateaus entirely.
Moreover, this is the same charge leveled at nuclear. Opponents claim that it's more expensive without exploring why it's more expensive -- and a big part of the reason is because we don't price carbon and then subsidize fossil fuels on top of that. Fix that and all the non-carbon sources become more competitive.
But we can't just sensibly conclude that's what needs to be done and then pack up and go home. We've got to actually make it happen first.
Do we have any idea of the true, unsubsidized cost of utility-scale solar under hypothetical optimal conditions, and under median conditions? For the hypothetical, assume the land is free (most desert locations with direct sunlight are cheap anyway). And what percentage of that cost is attributable to the battery?
Thanks for this great link! I posted a somewhat lengthy analysis of the photovoltaic cost structure, and how it relates to the cost structure of more traditional kinds of power stations, yesterday in https://news.ycombinator.com/item?id=20365309.
Is the power producer in this case receiving subsidies not included in this price, or is 1.997¢ the unsubsidized price?
Utility scale cost has been dropping exponentially, but small panels (e.g. 7W phone chargers) haven't.
I wonder if the efficiencies will eventually trickle down, or whether the form-factor, other components, retail sales channel and small market for solar phone chargers will prevent this.
I don't build 7W phone chargers, so I could be wrong, but I think there are a couple of factors:
· 7Wp of solar panels at this point would cost about US$1.50 if we were talking about the low-cost power modules used in utility-scale deployments. This is, I think, less than the (retail) cost of the case, sales transaction, or power electronics.
· Low-cost power modules are less efficient than higher-cost modules, which means you'd need a larger surface area to get 7 watts.
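To put rough numbers on both bullets (a back-of-envelope sketch; standard 1000 W/m² test insolation, and 16% vs. 22% are just example efficiencies I'm assuming for low-cost vs. premium modules):

    # Back-of-envelope for the two bullets above.
    # Assumptions: 1000 W/m^2 peak insolation (standard test
    # conditions); 16% and 22% are example module efficiencies.
    price_per_wp = 1.50 / 7                  # ~US$0.21 per watt-peak
    print(f"module price: ~US${price_per_wp:.2f}/Wp")

    for efficiency in (0.16, 0.22):          # low-cost vs. premium
        area_cm2 = 7 / (1000 * efficiency) * 1e4
        print(f"{efficiency:.0%}: {area_cm2:.0f} cm^2 for 7 Wp")
    # -> 16%: ~438 cm^2 (about 21 x 21 cm); 22%: ~318 cm^2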
Which chargers are you looking at? Searching on Amazon I find a lot of USB battery packs with built-in solar panels, which I can't help but think are basically fake, since a 16%-efficient panel that size would provide less than a watt. (I've probably been influenced by watching a lot of bigclivedotcom videos where he dismantles phony consumer electronics, including USB battery packs that have a tiny fraction of their claimed capacity.)
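For what it's worth, the less-than-a-watt claim checks out if you assume only ~60 cm² of actual cell area on a typical power bank face (my assumption; the case and electronics take up the rest):

    # Peak output of a power-bank-sized solar panel.
    # Assumptions: ~60 cm^2 of exposed cell area, 16% efficiency,
    # 1000 W/m^2 insolation, panel aimed straight at the sun.
    cell_area_m2 = 60e-4
    efficiency = 0.16
    insolation_w_m2 = 1000

    peak_watts = cell_area_m2 * efficiency * insolation_w_m2
    print(f"peak output: {peak_watts:.2f} W")   # ~0.96 W, under a watt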
AUD 150 (≈US$105) does seem too high; the eBay prices sound a lot more like what I would expect. But even when solar panels cost US$1 per watt or US$5 per watt (25 times the current price), US$105 would have sounded too high.
Not always, especially when working with electronics. Many manufacturers offer development parts at one price, then when you try to scale up production you find the price can be double or triple the original cost. Sometimes the smaller fabricating shops are willing to take slimmer margins, as they don't have the bloat of management, regulatory issues, and worker costs such as medical and collective bargaining agreements.
> Not always, especially when working with electronics. Many manufacturers offer development parts at one price, then when you try to scale up production you find the price can be double or triple the original cost.
Can you give an example? If one goes to Digikey or Mouser, the price of most standard electronic components decreases when the quantity increases.
I guess the phenomenon you described only occurs if special or proprietary components are involved, so that the company is willing to sell samples at a loss in exchange for more customers.
I have never found this to be the case, in that 1) normally I only design BOMs around parts that are "normally stocked" at Mouser or Digikey; anything else and I don't want to waste time on the supply chain, and/or 2) if you aren't designing around your batch manufacturing size, then how are you designing the circuit (economically) in the first place?
My guess is that the OP was referring to some non-replaceable proprietary stuff with higher-level of integration, like some kind of modules, devboards or FPGA or something. I don't know.
Yeah, you will see cases where this happens for specialist components like CPUs that are EOL. The five hundred I have left in the warehouse can be let go for hardly anything, mostly I want that warehouse space back. If you're a hobbyist making one of something for a personal project you don't care. But if you were actually prototyping for a large SKU project and later you want to make a real order - I will have very bad news to break to you. This is why serious players get volume quotes up front and execute on hardware quickly.
The Amigans ran into this when they had the idea that their next generation desktop computer should have the PPC chip people thought Apple was going to put in laptops right before it went x86-only - the PWRficient PA6T-1682M. Getting a few of them for prototyping was easy but there was never going to be a long term supply at a reasonable price. Nobody had a plant manufacturing this CPU since 2008, but the Amigans started needing them in 2010 - so they were soon resorting to playing scavenger hunt "Hey, did you have a box of these anywhere? Can you check? Yes we'll pay the original per-unit price".
Internally they were reassuring each other that hey, these devices are inside US military hardware, Apple assured the US DoD that it would be able to supply replacements. But the ability to send the DoD a few spare CPUs out of a crate you've kept in one corner of a warehouse is not the same thing as continued commercial availability.
I am sympathetic to the situation. I bought my long-awaited Amiga 500 about the time the 600/1200 came out. Ended up going for the 500 since it had more aftermarket support, but the higher res and the 020 would have been nice in the 1200.
One really needs to ensure there will be supply and attainable bulk pricing when building a niche device like this. I understand the desire to use some esoteric part like the PA6T, and with the expectation that it was going into a mass market laptop, that it would be available.
Is there an effort to get Amiga running on RISC-V? Custom silicon is also within reach, starting at about $1k. Machines are so damn fast now, you could run the whole thing in emulation, in Wasm inside of Firefox.
The Amigans are fragmented like fifty ways. As a result, and with waning numbers, funding and outside interest, they are mostly going nowhere.
There are several businesses (notionally for profit) fighting over this mess, in court, lying to each other and their fans in public, all that jazz. Some of them claim to own the name "Amiga" in various forms.
The device with a PA6T in it was the AmigaOne X1000, which eventually ran AmigaOS 4.1 update 6 or some similar string of nonsense. So that's a 32-bit, uniprocessor PowerPC operating system, kinda-sorta compatible with the Motorola 68000 series Amiga Workbench versions (aka AmigaOS 3.x)
There are also a bunch of Amigans running an OS called "MorphOS" that is also a 32-bit, uniprocessor PowerPC operating system, from a different outfit, mostly on old PPC Apple desktops or laptops.
Then there's a bunch of Amigans running souped-up Commodore/Motorola-era Amigas with patched-up "AmigaOS 3.x" versions, sometimes with PowerPC "accelerators". This is a more ordinary nerd-type culture, with a lot in common with people who own and operate actual steamrollers, or build their own space rockets as a hobby. Most of them are conscious this is all it is, a weird niche hobby.
And then there are groups building new hardware, often with FPGAs but sometimes other weird ingredients. It seems as though this group might build the thing lots of ordinary people want - which is a box you plug into a TV and then play games you remember as a kid. But much more than for Nintendo when they did that, the Amiga problem is mostly not about technology, it's about _Rights Management_. Those games were produced by a wide variety of different companies, many of which no longer exist or are now part of huge console companies that won't let your rival product exist. In some cases the games were licensed, so the license expired and you'd need to get OKs from two or more organisations that are now at exactly cross purposes. Imagine trying to get Sony, Microsoft and Nintendo to all sign off on you selling a product that competes with their consoles. Why would _any_ of them do that, much less all of them? And if you can't secure the rights, your product will be much worse than the illegal products available already.
Way riskier market (consumer goods). It will eventually trickle down though, once it's really mature and countless factories in China have access to it.
There's nothing wrong with solar per se, but it's just not sufficiently power dense if you're talking national-scale power usage. https://www.finder.com/uk/solar-power-potential seems to think that countries like the UK would need more than 10% of their area covered with solar panels to cover their energy needs. I think you'd need an order-of-magnitude improvement in efficiency to really start making it look feasible as a significant way to change our fossil fuel usage, no?
UK would do best supplementing wind with solar, rather than going wholly solar. UK is famous for many things, consistent reliable sun isn't one of them. Maybe in 20 years we'll have a more Spanish climate, though I would hope not. Adding solar on every roof that's suitable should be plenty for starters.
There are probably dozens of spots around the UK suitable for Dinorwig-style 10 GWh pumped-storage solutions. NIMBYs permitting.
If the UK fully built out offshore wind it would probably have enough to power the whole of Europe. Same goes for Ireland.
Actually the estimates are that the Irish coast alone has enough potential wind energy to power all of the planet, during peak seasonal wind, and at least all of Europe during the less windy summer season.
Building the infrastructure to transmit that power would be a massive undertaking though. Ireland and France are currently cooperating to build an undersea power cable which is going to cost $1Bn, take 5 years, run 575km and have an annual transmission capacity of 6.1 TWh.
Annual global electricity usage in 2013 was 19,504 TWh, or 3,197 times that.
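Those numbers are self-consistent; a quick sanity check (the ~700 MW cable rating is my assumption, chosen because it reproduces the 6.1 TWh/yr figure):

    # Cross-check the interconnector arithmetic above.
    capacity_mw = 700                        # assumed cable rating
    annual_twh = capacity_mw * 8760 / 1e6    # MWh -> TWh, run flat out
    print(f"max throughput: {annual_twh:.2f} TWh/yr")   # ~6.13

    global_electricity_twh = 19504           # 2013 figure quoted above
    print(f"cables needed: {global_electricity_twh / annual_twh:.0f}")
    # ~3181 at 6.13 TWh; the 3,197 above used the rounded 6.1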
Warning: this book is quite old now. While the discussion of theoretical limits should probably still hold up, all the economics has been turned upside down since then.
Solar and wind costs dropped faster and further than most people's best case estimates, while nuclear costs and timescales just keep going up.
Thermodynamic cycles and engineering constraints. ICE engines are about 30% efficient at best; operationally they're usually a lot worse, whereas modern combined-cycle power plants are pushing towards 60%. A big part of the difference is that stationary plants use multiple stages to wring the last bit of energy out of the fuel, where piston engines use a single stage.
They also suffer from mechanical and thermal losses due to their small size. Small size means a higher internal surface-area-to-volume ratio and higher operating speeds, which contribute to higher friction and thermal losses. Friction in an ICE is actually significant.
Electric cars need about 15-20 kWh/100 km. A liter of gasoline has about 9 kWh of energy. The average car needs about seven liters per 100 km, so around 60 kWh/100 km.
Similar improvements can be had for heating. You get a little less than 1 J of heat in your room per joule-equivalent of fuel you burn, but with an electric heat pump you get 3-5 J of heat per joule of electric energy.
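Putting the quoted figures side by side (a sketch using the numbers above, not measurements):

    # Energy per 100 km: EV vs. average gasoline car (figures above).
    ev_kwh = 17.5                 # midpoint of the 15-20 kWh range
    ice_kwh = 7 * 9               # 7 L/100km at ~9 kWh/L = 63 kWh
    print(f"EV uses ~{ev_kwh / ice_kwh:.0%} of the ICE car's energy")
    # -> ~28% per 100 km

    # Heating: a heat pump vs. burning fuel (a bit under 1 J/J)
    for cop in (3, 5):
        print(f"COP {cop}: {cop} J of heat per J of electricity")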
I was serious. Solar cells are like solid state heat engines in that they’re fundamentally limited by Carnot Efficiency (which is theoretically high for solar cells as the effective temperature of sunlight is high). If we’re going to use “primary energy” and not useful electrical or mechanical energy, then we’d need to include the energy of sunlight falling on the solar panels, which is about 5 times the useful electrical energy produced.
And just like actual heat engines, solar cells are improving. The best cells have achieved 47% efficiency in the lab, comparable to a high-performance combined-cycle natural gas plant (note that thermal power plants often cheat a bit by listing the efficiency in terms of the lower heating value of a fuel, i.e. the heat produced not counting condensing the water vapor out).
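To make the Carnot point concrete (a sketch; ~5800 K for the sun's effective surface temperature and ~300 K ambient are standard textbook values I'm assuming):

    # Carnot bound for a solar converter, treating sunlight as a
    # heat reservoir at the sun's effective surface temperature.
    # Assumptions: T_hot ~5800 K (sun), T_cold ~300 K (ambient).
    t_hot, t_cold = 5800.0, 300.0
    carnot_limit = 1 - t_cold / t_hot
    print(f"Carnot limit: {carnot_limit:.1%}")   # ~94.8%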
We also need batteries. But that's why solar is only part of the solution. No single source will solve all the problems in all the areas. I'm a big proponent of nuclear but it's only part. Solar is also what I want to see. But more than the other two I want batteries.
From what I heard from people in the solar business, batteries are generally not a good idea. Better to invest in a good grid where we can transfer electricity from one place to another (it's always windy somewhere), and have some sort of backup power plants for the lowest days (like natural gas, biofuel, etc.; not CO2-free, but they wouldn't be used very often, from what I understand). Or alternative "batteries", like pumping up a lot of water into some dam.
I like to point out that efficiency gains and load shifting will also reduce off-peak demand somewhat. Industrial users will chase the lowest cost if possible. If electricity at 1am costs 3X the 11am price, no one's going to be running an electric furnace at 1am.
Either way, I'm actually rather confident that the whole storage thing is more or less solvable, since there are a host of workable technologies.
So it's slightly pointless to argue, as some do, that the lack of current solutions is a reason not to move forward with renewables. Worst comes to worst, we already have peaking plants that could substitute for baseload power.
> lack of current solutions is a reason not to move forward with renewables
No one is saying we shouldn't move forward with renewables. Well maybe a certain canine news network, but not anyone here.
> Either way, I'm actually rather confident that the whole storage thing is more or less solvable, since there are a host of workable technologies.
What people are saying is that there isn't good enough technology yet, which means we need to continue funding research. Not being solved means that just building won't be the solution; we need to spend some serious money researching. That's what people mean by "battery tech isn't there yet". It isn't a defeatist attitude, it is a "continuing and doubling down on research is an important part of the equation" type of attitude. This is typically because that kind of research isn't done by the private sector, but rather by government. The DOE does a lot of this funding, and even funds a lot of the research being done in the private sector. So being vocal about this means we care, and hopefully those in control of allocating that money (probably not Rick Perry) will spend more on fixing that part of the equation while the private sector does the building-out part.
There's significant power loss in grid transmission, and to avoid that loss you need fairly expensive equipment to convert to and from high voltage. If you are adding wind to the mix and working on an AC grid as normal, phase sync is also tricky. Electricity supply on the grid has to be very closely matched to demand, and the problem will likely become fairly complex with lots of variable sources. A lot of problems potentially go away if you push renewable output into significant battery storage first, and you also get a much more resilient grid against natural disasters (see Puerto Rico's situation).
For an interesting graphic along these lines see [0].
The parent of this comment is talking about increasing the efficiency (reducing rejected energy) and the gp is talking about just increasing the total amount of energy. Both are solutions. But I wouldn't say it is a good thing we reject more than half the energy we create.
Because there's confusion, I'm adding an addendum to the addendum. Here's an LLNL scientist explaining the graph, in which he does talk about the rejected energy [0]
> But I wouldn't say it is a good thing we reject more than half the energy we create.
The meaning of "Rejected Energy" in that graphic is obscure but it certainly doesn't relate to energy which has been created and rejected. It may relate to excess capacity of fossil fuel plants (how much more they could produce if ordered to burn more fuel).
The US is thankfully not "rejecting" twice as much energy as it is producing, as those figures of 68.5 vs 32.7 might lead you to believe. Curtailment of wind and solar generation in the US has recently been around 2-4%.
Sfifs has an accurate link. But your comment really isn't correct: you suggest it applies solely to fossil fuels. There is inherent waste in any electrical system, as some of the energy produced goes into heating the wires (unless they are superconductors, but as far as I'm aware solar panels and wind farms don't use ultra-high-temperature superconductors; I don't even know of any that exist, tbh). Heat is just the simplest example, but Sfifs linked more information.
What does inherent waste have to do with your reference to "Rejected Energy"? What do you now understand of that graphic? Transmission losses are generally in the range of 5-10% for modern grids. Your interpretation of that ambiguous reference wrongly indicated 66% losses. That's not reasonable and doesn't support your advice on energy storage. I suggest gaining more familiarity with the general subject before judging it.
The graphic's term for rejected energy includes a multitude of things. Thermodynamic loss is part of it but not all. It was mentioned because it is one of the easiest for people to understand. There has been another link provided that I have already referred you to that includes a more comprehensive breakdown.
But yeah you can act all knowledgeable, or you can listen to the group leader of the LLNL staff that put this chart together using the same exact example I used.[0]
I, again, will say that this is only a portion of the rejected energy. But thermodynamic loss (i.e. heating wires, heating your ICE engine, etc) is part of that rejected energy.
I watched the explanation of the chart and it is still ambiguous, both what it is and its purpose. The narrator explains: "Rejected energy is the portion of energy that goes into a process and comes out generally as waste heat" - waste heat is not just, as you wrote, "part" of it; it is explained to be "generally" (all of) it. Really, neither of us should need to clarify the difference between "part of" and "generally all".
So then "rejected energy/waste heat" can not be used or as we are saying here -"accepted" by better battery storage, which was the opinion that you brought this ambiguous chart and presentation to explain. This chart is irrelevant to battery storage.
Also, most of the energy in the chart is thermo-electric, derived from fossil fuel and nuclear plants. Wind turbines and solar "reject" heat energy in a completely different manner and to a lesser degree than thermo-electric plants - it is truly nonsensical to mix the two together under the vague heading of "rejected". It makes so little sense that I do wonder about the credentials or motivation of this LLNL effort to educate on energy issues.
I think that rejected energy comes in the majority from thermodynamic losses, at least for electric power production. Converting heat to work has large theoretical losses (see the Carnot cycle) and losses due to engineering limits (turbines aren't 100% efficient). Those losses end up as waste heat.
Rejection is an action - whatever it means in that graphic, it's not a thermodynamic factor. My bet is the figures shown relate to capacity factor; they have no connection to what they have been presented as - a potentially reclaimable amount. Of course the proof should lie in the article which the graphic comes from - it's a very messy idea to link and debate graphics which have no clear sense without their article.
Personally I'm not really very happy with that graph, since it combines energy inputs into black boxes instead of groupings. Take natural gas, solar, wind, nuclear, and hydro and combine those into an 'electric power generation' fruit basket. It makes you think you know something, but at the same time makes sure you don't know enough.
Energy loss in the grid is not that dramatic. Average losses are typically well below 10%.
Matching demand to supply is not that hard. If you have oversupply you run pumped hydro, batteries, and power-to-gas plants and store the extra energy. If you have even worse oversupply you disconnect your solar panels, or your wind turbines. If you have undersupply you use the energy you stored previously.
Conventional power plants fail first because they are big and centralized. When a plant goes down temporarily you have a huge gap to fill. With renewables 100 turbines stopping out of 1000 is not a big deal because they don't all fail at the same time.
The problem is that if you have a completely renewable grid (note I'm not saying a zero or low carbon grid) you can't provide large parts of the world (even the US) with that energy unless you direct it from some big hub. An easy to see example of this is that certain parts of America aren't very sunny and/or windy throughout the year (PNW, NE, Midwest, Alaska). Other parts are very sunny and very windy (think desert areas like CA and southwest). This leads to centralization even with renewables. It isn't an easy problem to solve and that's why people suggest things like nuclear, to fill in the gaps. Hydro has been a great filler in many places, but not all of these regions have access to large enough rivers, so again we get some centralization (see Tennessee Valley Authority, who also still use a lot of nuclear).
The problem is 100% solvable, but that doesn't mean it is currently solved or what's left is trivial. We're almost there, but it is hard to finish fixing things when you slap a sticker on it saying "done" and stop addressing the remaining issues.
From what I understand (and I could be wrong), you could make biofuels using atmospheric CO2, algae, and heat from the grid during low demand... when you burn it, it releases that CO2, but when you're not, it acts as a big carbon sink.
Could be totally off base, or very much a "not yet industrialised" process... but the idea is super cool nonetheless.
Your own link says that about 6% of the UK has been built over; although I am unclear exactly what that means [1], this is an achievable level. Of course, I would still agree that it's good to have a mixture; for example, the British Isles have a lot of wind power (I've heard, but can't find a reference for the claim, that it could supply most of the European continent).
[1] the percentage sounds high enough that I assume it includes all gardens and roads not just the buildings themselves
Thank you, though — while interesting — that doesn’t tell me how much wind power the British Isles could produce if UK and Ireland built the wind farms necessary to exploit it.
The problem with solar isn't that it isn't power dense. The problem is that the capacity factor compared to something like wind energy is very low. I'm skeptical of that 10% area claim. They probably made a very basic mistake of replacing all chemical energy with the electricity generated by solar power plants. This completely ignores that we can only turn that chemical energy into heat and then have to convert it to electricity. Depending on the size of the engine the efficiency of this step can range from anywhere between 25% to 50%. So you need 2 to 4 times more primary energy than if your power source generated electricity from the start.
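That 2-to-4x factor is just the reciprocal of the heat-engine efficiency; a trivial sketch:

    # Primary energy needed per unit of useful electricity when the
    # source is burned in a heat engine (efficiencies quoted above).
    for eta in (0.25, 0.50):   # small engine vs. combined-cycle plant
        print(f"{eta:.0%} efficient: {1 / eta:.0f}x primary energy")
    # -> 4x and 2x; electricity-first sources skip this multiplier.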
Take a look at this chart [0]. As you can see, renewables produce barely any primary energy at all with the exception of biomass, because they don't need to. [1] If we really did increase our renewable deployment 8 fold then Germany would be at 400% renewables at current consumption rates which clearly isn't necessary even if every single car is electric and all industry switched away from fossil fuels. There will still be some processes that depend on fossil fuels as input material but those do not burn the fuels to obtain their energy. Oil products like plastic will most likely be dumped in the oce- I mean in landfills.
It’s also somewhat misleading we already have solar cells over this theoretical limit. They operate by having multiple layers that operate on different frequencies. So, this really more an alternative manufacturing approach.
I disagree that the title is misleading. To anyone familiar with solar cell technology, theoretical limits are discussed in relation to the material in question. Multi-junction cells are not something that are commonly discussed outside of those with expertise, so confusing a theoretical limit with efficiency gains through band gap augmentations is not likely.
This is analogous to an article about a chemist discovering a way to get more chemical efficiency out of gasoline and someone saying that we can already do that with a richer air-fuel mix using a turbocharger. The two are not comparable.
Whether or not it is "misleading", I'd say, at least to a layman like myself, it was certainly confusing.
Why? Because just yesterday there was an article on the front page touting how Toyota was testing putting improved solar panels on the roofs of their vehicles, and that article quoted an efficiency of 34%, well above the "max theoretical limit" given in the present article: https://news.ycombinator.com/item?id=20364348.
So I guess my question is how would the proposed manufacturing process in this article be an improvement? Cheaper manufacturing costs? Even higher efficiency if you had "multi-junction cells" made using the process discussed?
> Whether or not it is "misleading", I'd say, at least to a layman like myself, it was certainly confusing.
> well above the "max theoretical limit" given in the present article
Ok, so how would you improve these articles? Do you want the news articles to draw upon a distinction between 'maximum theoretical limit of the substrate' (or whatever it's called), and the 'maximum theoretical limit of the resulting panels'?
Like, I understand and support your point here, but I'm wondering at what point do you accept that nothing is truly fully-accessible to a layman, that you need to have at least some surface knowledge to understand anything, and that it is likely impossible to eliminate this form of confusion?
I think a distinction like that is the bare minimum if you intend to broadly distribute the text. Additionally, simply don't mention the word maximum until it's clear from context that you're talking about one component of many and that the overall system can do better than the max of one component.
Just look at that opening sentence, it's really bad if the aim is to inform readers. It's great if it's intended to be self-aggrandizing marketing without consideration of how deceptive it can be to readers.
Any expert can trick uninformed readers into drawing misleading conclusions. If you want to be trusted as an expert, it's necessary to avoid doing that.
MIT is a pretty renowned brand in its own right - I think it's fair to demand better. This doesn't come across as trustworthy to me, anyhow.
I think your concerns are addressed a bit toward the end of the article
> Other approaches to improving the efficiency of solar cells tend to involve adding another kind of cell, such as a perovskite layer, over the silicon. Baldo says “they’re building one cell on top of another. Fundamentally, we’re making one cell — we’re kind of turbocharging the silicon cell. We’re adding more current into the silicon, as opposed to making two cells.”
The opening sentence is accurate -- "conventional" silicon solar cells are exactly what this work is improving.
I think your parent's implication is that the articles would be improved by answering the question posed in their last paragraph: 'So I guess my question is how would the proposed manufacturing process in this article be an improvement? Cheaper manufacturing costs? Even higher efficiency if you had "multi-junction cells" made using the process discussed?'
If the researchers' intended audience was other experts, then this was the wrong channel to distribute the research.
Obviously this PR blurb was intended for a broad audience; that is why universities write and publish blurbs like this. And it's great that they try to make research a little more approachable!
But since it is intended for a broad audience, this really is misleading. Probably not maliciously so; the job of a PR department is to make things sound great and impactful, after all, and they did that. How much do you want to bet that most actual researchers would never have used words like this to describe their breakthroughs to non-experts they know and respect, at least not without really emphasizing what a "cell" is and is not?
> we already have solar cells over this theoretical limit
I need to ask my brother how this new method compares to his work[1] on high efficiency heterojunction solar cells. While this is all really interesting physics, it quickly races past my electrical-engineering-level of understanding.
(the apparatus he made to build these heterojunctions one-layer of atoms at a time (molecular beam epitaxy) was crazy; you're in the sci-fi future when your project needs a high-purity sapphire substrate and vacuum with a mean free path >10km)
In theory, with a single-layer, single-electron setup, you capture a set amount of energy from all photons at or above that energy and nothing from photons below that level. Based on sunlight, you then pick what frequency gives the best compromise. That compromise is how much energy you set up your electrons to capture.
In multiple layers setups, you tune multiple separate bands one for each layer. Electrons in different layers capture different amounts of energy.
This lets you have multiple bands too, but rather than separate layers you're using numbers of electrons to determine the thresholds.
PS: Also the efficiency bands are based on sunlight, a different light source changes the maximum efficiency.
Yeah, in this case it appears that they're focused on a single layer simply because it's easier to build one thing than multiple things. This one was focused on the blue/green portion of the spectrum.
The origin of the phenomenon lies in the molecular crystalline layer, the hafnium layer they are talking about keeps the excitons separated and intact.
Photosynthesis is less efficient than solar panels. According to the Wikipedia page on photosynthetic efficiency, typical plants have a radiant energy to chemical energy conversion efficiency between 0.1% and 2%.
It's true but photosynthesis has two stages. The light reaction stage is quite efficient. Chlorophyll a is slightly more efficient than commercial solar panels. The dark reaction stage, which assimilates CO2 to produce carbohydrates, is the limiting step that drives the efficiency down.
The actual antenna structures and reaction centers are reasonably efficient, at least over certain wavelengths. Even through electron transport and the whole photosystem, things still aren't too bad.
But overall efficiency tanks because of Rubisco. Even with C4 and CAM metabolism, oxygenation is a major limitation.
My quantum physics teacher at the U of U was doing research in this area about 10 years ago. It was very interesting hearing him talk about that process, and the ways that different plant molecules capture energy in different ways.
I seem to remember something about matching up photon energies so that extra energy isn't wasted, and multiple frequencies can contribute usefully.
Interesting how "through a process called singlet exciton fission" and other article "General Formula for Bi-Aspheric Singlet Lens Design Free of Spherical Aberration" [1] both deal with Singlets. Odd coincidence of the day.
FYI they are totally different "singlets", a singlet exciton refers to the spin state of the particle (as opposed to a triplet exciton, which has a different total spin). A singlet lens is just a lens with a single simple component.
Kind of, but not enough to matter for this application anytime in the next several years. The USGS report at
https://www.usgs.gov/centers/nmic/zirconium-and-hafnium-stat...
says the US imported 36 tonnes of hafnium last year. If a three-atom-thick layer of hafnium is 0.6 nanometers and weighs 13.3 g/cc, those 36 tonnes would cover 4500 km², enough to produce about 700 GWp of solar energy. The US installed 10.6 GWp of photovoltaic generation capacity last year, which would amount to one seventieth of its hafnium imports. (China installed 45 GWp.)
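Reproducing that estimate from the quoted inputs (a sketch; the ~155 Wp/m² module power density is my assumption, picked to match the ~700 GWp figure):

    # Hafnium coverage estimate from the inputs above.
    imports_g = 36e6             # 36 tonnes of Hf
    density_g_cm3 = 13.3
    thickness_cm = 0.6e-7        # 0.6 nm, three atoms thick

    volume_cm3 = imports_g / density_g_cm3
    area_km2 = volume_cm3 / thickness_cm / 1e10   # cm^2 -> km^2
    print(f"area covered: {area_km2:.0f} km^2")   # ~4511

    wp_per_m2 = 155              # assumed utility-scale module density
    print(f"capacity: {area_km2 * 1e6 * wp_per_m2 / 1e9:.0f} GWp")
    # -> ~699 GWp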
It is relevant once photovoltaic energy becomes the main power source for the humans, because the materials used in current photovoltaic cells are dramatically more abundant than hafnium.
Generally I wonder about sustainability of rare earths (and other elements with limited supply like helium). We build a lot of products that don't last very long - some of it gets recycled, a lot of it doesn't. Wonder what humans 200 or 2000 years later will think of our consumption
Helium is a special case: when it leaks into the atmosphere, it eventually escapes into space. So the humans really do consume helium, just as the humans consume fossil fuels. By contrast, all the lithium, phosphorus, carbon, aluminum, iron, hafnium, zirconium, and erbium ever mined on Earth is still on Earth, except for the extremely small amounts sent to Venus, Mars, the Moon, and interstellar space on spacecraft, or transmuted to other elements in nuclear reactors. It is not consumed, just moved around a bit. Where is it?
Well, it varies by element.
While some of the phosphorus makes up human bones, most of it is dissolved in the ocean as phosphate, enormously less concentrated than the original deposits of apatite and other minerals it was mined from. At some point the humans might run out of easily accessible apatite deposits, since phosphorus only makes up 0.1% of the Earth's crust. If we consider only the top 2 km of the continental crust (29% of the surface) to be "easily accessible", that's only 3.0 × 10⁸ km³ of rock, or about 7.1 × 10¹⁷ tonnes, of which about 7.1 × 10¹⁴ is phosphorus. Since phosphorus is currently being mined at 153 million tonnes per year, the humans are currently on track to deplete most of it in only 4.6 million years, considerably earlier than the Earth is on track to be destroyed by the sun.
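Plugging those inputs in (a sketch; the ~2.4 t/m³ rock density is my assumption, and it reproduces the 7.1 × 10¹⁷ tonne figure):

    # Phosphorus depletion estimate from the inputs above.
    surface_km2 = 5.1e8                # Earth's surface area
    rock_km3 = surface_km2 * 0.29 * 2  # top 2 km of continental crust
    rock_t = rock_km3 * 2.4e9          # ~2.4 t/m^3 = 2.4e9 t/km^3
    p_t = rock_t * 0.001               # 0.1% phosphorus

    years = p_t / 153e6                # mined at 153 Mt/yr
    print(f"rock: {rock_km3:.1e} km^3, P: {p_t:.1e} t, "
          f"depleted in {years:.1e} years")   # ~4.6e6 years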
As for the carbon, it's mined for its chemical potential energy, which is lost when it is burned; but the carbon remains on Earth. 870 billion tonnes of it is in the atmosphere, of which about a third is due to the humans setting rocks on fire; a comparable amount has been added to terrestrial biomass and carbon in the oceans. The part in the atmosphere is relatively easy to recover; existing processes have comparable cost to the cost of digging up rocks. This is not being done at scale because it is "unprofitable".
More than half of the aluminum that has been mined on Earth is still in use. The other half is mostly in landfills, where it is less concentrated than the bauxite ores that are commercially mined, but less oxidized.
Most of the iron that has been mined is in landfills, but not only ores but also slag heaps near foundries are a more concentrated form of iron.
Rarer metals like gold, hafnium, zirconium, and erbium are mostly in landfills. They mostly do not readily form soluble ions to leach into the ocean or groundwater, and they mostly only find their way into the air when incinerated. Their concentrations in landfills are dramatically higher than the concentrations in the ores they are mined from, and this is becoming a "profitable" way for the humans to spend their lives before being killed by the hazards of the landfills. Presumably automation will speed up this process.
This is really insightful. A lot of human “progress” and activity includes us exporting growing amounts of high entropy waste, for a mostly unaccounted for, deferred cost.
In most natural systems there is no waste - everything gets recycled. We create a mess digging for elements around the planet, consume the "value" of the things we make from those elements (and large amounts of energy) in a way which lasts a very short time, and dispose of the waste, which then mostly gets spread around the planet in an even greater mess. I find it difficult to see how automation will easily solve our waste problem - it seems many orders of magnitude easier to try to solve it before human waste gets into a giant garbage heap or gets spread around the atmosphere/oceans/groundwater/(soon low earth orbit), where it's difficult to retrieve and seems to cause huge problems to the natural world (and ultimately us) by being in places it shouldn't be.
Why are humans so good/bad at creating a mess? Are we equipped with skills/incentives to solve this conundrum, or will we continue to drown in our mess?
The Earth doesn't recycle solar energy; it wastes ≈100% of it by radiating it away into space. Over 99% of it never even gets photosynthesized, just being absorbed as heat and then reradiated as infrared. Far less than 1% was ever accumulated in fossil fuels, and of course accumulating it as heat at the surface would be fatal. Nor does it recycle, for example, crustal iron; once the iron finds its way into a subduction zone and melts into the mantle, it gradually finds its way to the core, never to return. Even the fast CO₂ cycle is noticeably leaky, as a substantial amount of CO₂ is taken up by marine organisms that sink to the bottom and stay there, eventually being compacted into limestone and remaining limestone for hundreds of millions of years (the slow CO₂ cycle), until probably being liberated again by volcanism. So, I think that for most definitions of "waste", it is false that in most natural systems there is no waste.
It's true that recovering materials from the atmosphere and oceans is more difficult than mining them. But most of the materials we're discussing don't end up in the atmosphere and oceans; they end up in landfills. And recovering them from landfills is, generally, much easier than mining them from natural deposits.
Landfills are much more concentrated and heterogeneous deposits of "valuable" elements than the ores from which they are mined; for example, a single accidentally discarded catalytic converter may contain 5 grams of platinum-group metals, which are reduced and relatively easy to recover; commonly mined natural ores of platinum-group metals (the Merensky, chromitite, and contact types of deposits) typically contain 5 grams per tonne of ore, mostly oxidized and thus requiring not only froth flotation but also smelting. And catalytic converter platinum-group elements are far from the only "valuable" elements in landfills.
The trouble is that when the humans go digging around in landfills with their bare hands, it reduces their life expectancy a lot, because of poisons, injuries, and poor safety practices around refining. This is not a new problem for mining — the Spanish silver mines were a death sentence for the enslaved indigenous Americans sentenced to work in them — but the particular measures needed to solve the problem are different for landfills than for other mining. Automation will largely solve them. I suspect that the same ritual-pollution taboos that cause the humans to deprecate garbage collectors, cannibals, and undertakers are a major factor in the slow development of landfill mining.
A Kessler syndrome in LEO will be only a short-term problem, as the lifetime of objects there is limited to decades, not even a single million years!, by atmospheric drag, and the quantity of mass there is limited by launch "costs". MEO Kessler syndrome would be a more serious problem.
Doing things creates messes. Lowering entropy locally — one definition of life — raises entropy globally. Doing more things will create more messes. Doing things intelligently can create safely contained messes.
I knew there was a high likelihood that I would be downvoted to oblivion for making a terrible Dad joke, but it was worth it. I feel that a few years back I could maybe have cracked that joke without my karma being bludgeoned.
Yeah, but you'd have to start with very energetic photons. The article says that you might get two electrons when a green or blue photon hits, so to get three even theoretically you'd need ultraviolet photons. And there just aren't enough of those in sunlight to make a noticeable difference in efficiency.
Let me try to do the math here, please correct me if I'm wrong.
The band gap of Si in solar cells is about 1.1 eV. To excite three electrons, a photon thus needs 3.3 eV, or about 370 nm.
The visible spectrum starts around 380nm or 390nm, depending on which source you believe. So the triple photon frequency would be very near UV.
According to this spectrum [0], there's a sharp drop in solar power around the 370nm.
It seems that with the 1.1 eV band gap, exciting three electrons isn't worth it. But if you could reduce the gap just a bit (by doping, most likely), it might be worth considering.
Of course, reducing the band gap would likely lead to much worse performance overall, so one would need a multi layer approach, which sounds pretty complicated.
And then you need to pray that your glass cover doesn't absorb the UV light, which seems to be a pretty close call[1].
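The wavelength thresholds fall straight out of E[eV] ≈ 1240 / λ[nm] (just hc in convenient units):

    # Photon wavelength needed to excite n electrons across the
    # ~1.1 eV silicon band gap, using E[eV] = 1240 / wavelength[nm].
    band_gap_ev = 1.1
    for n in (1, 2, 3):
        wavelength_nm = 1240 / (n * band_gap_ev)
        print(f"{n} electron(s): photons below {wavelength_nm:.0f} nm")
    # -> 1127 nm (near-IR), 564 nm (green), 376 nm (near-UV)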
You can activate as many electrons as you want, in fact photomultipliers can detect single photons by setting them up to cause cascades of countless electrons. Granted, number of electrons isn't what matters.
A photomultiplier has a pre-charged state where a photon triggers an avalanche -- not really relevant for photovoltaic applications.
The question was more whether you can lift three electrons through the band gap, triggered by a single photon, which would increase the solar cell's output current without dropping the voltage.
I don't think that three is possible, and I told Baldo as much back in 2013 at APS March Meeting (hence my downvoted comment below). He didn't really care to listen to my explanation why because he'd already gotten a paper out of the experiment.
It may be possible, if an intermediate storage mechanism can be found to add the energy of two or more photons, which is then tapped to initiate the extra electron(s). I understand that some natural processes do this, and with research a usable mechanism may be found?
Pardon my ignorance but don't avalanche photodiodes, scintillators, phosphors, etc. excite many electrons per 1 photon? Why is 2 electrons for 1 photon noteworthy?
I mean it's all bunk till a mass-produced product is on the shelves. A lot of these "breakthroughs" fail to note edge cases like scalability and durability.
While true, it's worth noting that it's happened. It's rather like when there's a theoretical breakthrough that might help to cure a type of cancer. Will it result in an actual medical breakthrough that helps people? Usually not. But, occasionally yes, and that's huge, so it's still worth noting. If you don't get enough shots on goal, you never score.
When we first discovered uranium, we made drinks and lipsticks out of it. Now we use it to get insane amounts of energy, and bombs.
Sometimes there are decades between the "breakthrough" and the use of it. Most of the time the final usage doesn't have much to do with the initial breakthrough.