Has Li-battery genius John Goodenough done it again? Colleagues are skeptical (qz.com)
170 points by M_Grey | 154 comments



From what I understand of this, the battery works like this:

You have a sodium or lithium metal anode, and a sulfur-carbon "ink" cathode on a copper charge collector. The battery is encased in steel, so a cross section would be steel-Na|Li-glass-S (aq)-copper-S (aq)-glass-Na|Li-steel. The anode has to be sealed away from oxidizers and water, or it will react violently with a direct chemical reaction rather than through the desired electrochemistry.

During discharge, at the anode, the reactive metal violently throws its extra electron away as hard as it can, because it hates having that extra one. It really, really wants to have +1 charge. The steel casing doesn't help, but that glass electrolyte is apparently porous enough that +1 ions can pass into it. So the lithium/sodium throws an electron down the wire and jumps into the electrolyte, because positively charged ions repel each other. That works fine until the electrolyte fills up all those empty spaces with positive ions. The anode can't throw any more electrons down the wire, because positive ions have nowhere to go, and attract the electrons right back out of the wire just as strongly as the atoms could throw them.

On the cathode side, electrons are coming in from the wire, spreading out across a copper plate, and jumping onto sulfur atoms, which devour extra electrons with a passion. Sulfurs love to have 2 more electrons than they usually own. Sulfurs near the copper plate devour two free electrons and become [S]--. It just so happens they live in a thin "ink" cathode, and are therefore very close to the positive ions that have been filling up the electrolyte when they become charged. They get yanked across to the glass. Ordinarily, if all this happened in an aqueous electrolyte, each negative sulfur ion would coordinate with two positive ions in solution so that the sulfur could keep its extra two electrons, and the lithium or sodium wouldn't have to take back their hated extras. The glass-aqueous interface probably prevents this?
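In standard half-reaction shorthand, the picture I'm sketching is roughly this (my own summary, not pulled from the paper; on recharge both arrows run in reverse):

```latex
% Discharge, as I understand it (sketch only)
\begin{align*}
\text{anode:}\quad   & \mathrm{Na \to Na^{+} + e^{-}} \quad \text{(or } \mathrm{Li \to Li^{+} + e^{-}}\text{)} \\
\text{cathode:}\quad & \mathrm{S + 2\,e^{-} \to S^{2-}}
\end{align*}
```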

I would guess that the glass electrolyte allows so much positive charge to be present in it, without allowing the sulfur inside, that as soon as the sulfur hits the glass, it fumbles its extra electrons, which fly out of its grasp and home in on one of the positive ions, converting it back to neutral metal. Sulfur grumbles and goes back to the copper to pick up another pair. Repeat until the positively charged layer in the electrolyte is not strong enough to yank the electrons off of sulfur, and it hangs on to its electrons at the ink-glass interface. The battery is now fully discharged. Physically, you have layers Li, [Li]+ (glass), Li (glass), [S]-- (aq), Cu. I'm guessing here, as I haven't read the paper.

During recharge, you are pushing electrons into the anode and pulling them from the cathode. That reactive metal wants nothing to do with extra electrons, so it starts attracting positive ions back through the electrolyte toward the metal anode. Meanwhile, that copper plate is having electrons sucked out, getting positively charged, and yanking the sulfurs across to take their borrowed electrons back. The neutral sulfurs can then wander back to the glass and grab electrons from the reactive metal in it. They shuttle the electrons back and forth, turning the metal on the cathode side of the glass back into ions that transport back toward the anode. Eventually, the electrolyte is once again saturated with only positive ions, and the battery is fully charged. As soon as the charge is removed, the atoms in the anode will once again try to throw away their extra electrons and escape into the electrolyte.

It seems important that the glass electrolyte be constructed such that the cathodic atoms and ions cannot enter it. While the battery is charging or discharging, those will be bouncing back and forth like ping pong balls in a Chinese recreation center, shuttling electrons between the glass-ink interface and the ink-copper interface. The action of the battery is allowing the reactive species to ionize, and get their charges closer together, but not so close that they can coexist in the same ionic crystal or aqueous solution. That would make it too difficult to separate the charges again, to recharge.

Of course, as I am not a chemist, I might be completely wrong about this.


This is going to be my children's bedtime story tonight. Thanks!


Fun times


I don't really care if it's correct or not, it was super entertaining. Thanks for sharing


Props for thinking it through yourself.


Also for anthropomorphizing atoms. Sulfur seems very grumpy.


If you've ever worked with sulfur, yes. Sulfur is grumpy.

Could be worse --- flourine is so clingy.


flourine -> fluorine

I assume it's a typo here but it is getting really common on the Web, doesn't anyone turn on the spell checker in Firefox/Chrome/etc?

Yours, Sulphur.


Is flourine a thing? If it doesn't exist this typo is a bit annoying, but not as bad as the silicon / silicone typos.


It's one of the elements found in the minerals breadite, crackerite, and hardtackum.


More a braino than a typo but yes, you are right.

And I can't even pretend I was talking about bread, because someone else made that joke further down. Damn you, logfromblammo!


I legitimately laughed out loud.


Sulfur is grumpy because it hates to be anthropomorphized like the GP did.


There are a couple of things wrong with what you said. Here's a clarification after I've read the paper and some of the references a few times.

1) The cross section is given in an image in the original article, and explained in a Medium post [1]. It is Li-(fancy) glass-(Cu/C/S combination) encased in steel. I think for now, you can ignore the steel. Nothing is aqueous, as it's a solid state battery.

2) During discharge, e- leave the anode, travel through the wire and end up in the Cu/C/S cathode. This leaves a Li+ cation which then travels through the glass, (most likely) does not remain there, and deposits itself on the cathode (see the half-reaction sketch after this list). Hence, in the original paper [2], they see a build-up of lithium on the cathode via SEM. From what I understand, the contentious part of the paper is that the researchers state that the S provides the driving force for the reaction but is not itself reacted (the main point in [1]), hence the "perpetual motion" attack. The deposited lithium does have a sort of screening effect on the cell voltage, slowly decreasing it with discharge time. From my understanding, the battery capacity is limited by the amount of Li before it is all oxidized, and not by the amount of reduction the cathode can handle, which is very interesting. The cross section at full discharge would be nothing-glass-Li-(Cu/C/S cathode), again encased in steel, but ignore that.

3) During recharge, the opposite happens: e- are forced back to the anode, and the preponderance of e- creates a driving force to remove the Li that has been deposited on the (Cu/C/S) and transfer it back to the metallic Li side. The key part is that the electrolyte shouldn't be holding onto any ions, if the paper's description is to be believed. The Medium article states that some sort of reaction may be occurring in the electrolyte; I can't really weigh in on that. Once all of the Li is back on its anode side, recharging is done.

4) It is important that the glass electrolyte is a solid, not that the cathodic atoms/ions cannot enter it. They most likely can; it would just take a recharging voltage much higher than the one they use to recharge the cell. The fact that it is a solid means that lithium dendrites (tree-like growths) cannot grow during redeposition, as dendrites that span the distance between anode and cathode short-circuit the battery.
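To make point 2 concrete, here is the discharge picture as I read it, written as half-reactions (my sketch, not the paper's own notation):

```latex
% Discharge as a metal-plating cell, per my reading of the paper (sketch)
\begin{align*}
\text{anode:}\quad   & \mathrm{Li \to Li^{+} + e^{-}} & \text{(metallic Li dissolves into the glass)} \\
\text{cathode:}\quad & \mathrm{Li^{+} + e^{-} \to Li} & \text{(Li plates onto the Cu/C/S current collector)}
\end{align*}
```

Both half-reactions are the same couple, which is exactly why the "where does the cell voltage come from" question keeps coming up.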

Feel free to ask me any questions about it. I am not a battery person, so there's a high chance I got something wrong, but I am an MSE grad student, so I might be able to say something that sounds intelligent.

[1] https://medium.com/@steingart/a-potential-big-deal-in-batter...

[2] http://pubs.rsc.org/en/content/articlepdf/2017/ee/c6ee02888h


1. I assumed it would be a cylindrical cell, or a sandwich, with the copper conductor common to both sides.

The cathode was described as an ink, with sulfur as one component. The copper is not part of the cathode, but a means to conduct electrons to and from it. I didn't see anything about how this cathode was processed, so I assumed it had a gel-like consistency, and that the individual components retain some physical mobility. The charge carriers would be in aqueous solution, and the liquid solvent would be stabilized by support molecules to keep it in place. It might need to be liquid at first to make a good connection between electrolyte glass and copper conductor, then dry out. Even ink that feels dry to the touch can retain enough water to still be considered aqueous chemistry, right? A block of firm gelatin is still mostly water. Maybe the cathode is like that? Perhaps it is dried entirely and sealed against moisture. In that case, I'm not sure what the atoms in the cathode are doing.

2. The cathode is physically very thin, according to the diagrams I have seen. There's no room for very much of the lithium/sodium to enter it. If the lithium exited the electrolyte as ions, it could plate at the cathode, but I think most of the ions remain on the cathode side of the electrolyte glass, as close as they can get to it, then get forced to take back an electron on the cathode side. Once neutralized, they are immobilized in the glass.

3. You need something present to pull the electrons off the lithium/sodium. It can't send its own extra electron through the electrolyte, which conducts ions but not electrons. The lithium is only mobile through the electrolyte when ionized. If the lithium were in direct contact with the copper, it could ionize there, but then what does the sulfur do?

4. The atomic/ionic radii in pm are Li+ 90, S 102, Na+ 116, Li 134, Na 154, S-- 170. If the mean aperture diameter in the glass is between 180 and 268 pm, lithium ions would be mobile through it while neutral atoms are immobile; at that size, neutral sulfur could enter it, but ionized sulfur could not. If the mean aperture diameter is between 232 and 308 pm, the same is true for sodium and sulfur. I know there are some macroscopic minerals useful in chemistry (e.g. zeolites) for their aperture size in relation to molecular dimensions, but I don't know how precisely we can tune man-made materials. It seems likely that we could still overproduce with an imprecise technique and then select the appropriate materials through binning, though.
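Just to re-derive those windows from the radii above (nothing here is from the paper, it's only the hard-sphere arithmetic):

```python
# Ionic/atomic radii in picometers, as quoted above
radii = {"Li+": 90, "S": 102, "Na+": 116, "Li": 134, "Na": 154, "S--": 170}

# A species fits through an aperture roughly when the aperture diameter
# exceeds twice its radius (treating everything as hard spheres).
diameters = {k: 2 * r for k, r in radii.items()}

# Window where Li+ passes but neutral Li does not: 180-268 pm
print("Li window:", diameters["Li+"], "-", diameters["Li"])
# Window where Na+ passes but neutral Na does not: 232-308 pm
print("Na window:", diameters["Na+"], "-", diameters["Na"])
# Neutral S (204 pm) fits inside both windows; S-- (340 pm) fits neither.
print("S:", diameters["S"], " S--:", diameters["S--"])
```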


Sure, all valid assumptions, but a lot of these are discussed in the paper. If you let me know your email, I can send you a copy. But in response to your points/anyone else who is interested,

1) It's a button or coin cell [1] (halfway down). So the conductor is the steel casing; the e- travel once you attach a wire to either side of the cell. The copper current collector is only on one side. You're right in that they don't really fully describe the cathode in the methods portion. They simply state the cathode consists of a redox center (S or MnO2) embedded in electrolyte and carbon in contact with the copper, which is pressed against the glass electrolyte. It could be a slurry that they press, or something that has water, but that is never mentioned. I think the problem is that nobody really knows what the atoms at the cathode are doing. I would be hesitant to say anything is aqueous, but I guess there is the possibility.

2) Cathode is reported at 0.06 mm. They show SEM pictures of Li plated on the cathode, and directly call the cell a metal plating cell. From their conclusion, "With the Li-glass and Na-glass electrolytes, we have demonstrated in this paper one possible new strategy in which the cathode consists of plating the anode alkali-metal on a copper–carbon cathode current collector at a voltage V > 3.0 V." Additionally, from the results, they "examined the electrodes with the naked eye and with SEM EDS analysis, as shown in Fig. 2, which indeed shows lithium plated on the cathode current collector and no evidence of metallic lithium remaining on the stainless steel at the anode or the anode side of the electrolyte after full discharge of the lithium anode."

3) The e- are pulled off by the voltage applied during charging with any voltage higher than the work function of whatever state the plated Li is in (I think), causing a charge imbalance and Li ionization. This is balanced by the ionized Li moving back through the electrolyte to the anode side. The role of the sulfur is debated and I would say not understood, but they claim it's a redox center, where redox does not actually happen. Here is the key part:

We therefore conclude that the sulfur acts as a redox center determining the voltage of the cell at which electrons from the anode reduce the Li+ at the electrolyte/cathode interface to plate lithium rather than reducing the sulfur, so long as the voltage remains above 2.34 V; below 2.34 V, the S8 molecules are reduced to Li2Sx (1 <= x <= 8) and the lithium on the anode becomes exhausted after 28 days in the cell of Fig. 1. The cell reaction was no longer reversible after this full discharge. At voltages V > 2.34 V, the cell is rechargeable and the sulfur is not reduced. The Fermi level of the lithium plated on the carbon–copper composite cathode current collector is determined by the Fermi level of the cathode current collector, whereas the Fermi level of the lithium anode remains that of metallic lithium, but the cell voltage is determined by the energy of the redox couple of the unreduced redox center.

That is the part that I think is confusing a lot of people, myself included, on how that happens exactly. They have some evidence that supports their claim, but I'm sure that's the next paper, how this actually works.

4) With a cathode that is 60 microns thick that they dry from a slurry, there is no real basis for "apertures" through the thickness, especially in a disordered glassy structure. Instead, ion transport is through vacancy mediated diffusion. This does have a dependence on atomic radii, but from what I'm reading, is more dependent on the atom species being ionic as opposed to neutral. Again though, the important portion of the finding is that the electrolyte is solid, not liquid, and therefore during lithium deposition on either the cathode or anode, ions cannot easily preferentially grow dendritic structures as opposed to evenly plating the anode/cathode.

I would agree that more characterization needs to be done of the role of the components during all stages of charging and usage. Comparing recharged and fully discharged batteries under a microscope would be helpful, as well as more careful chemical analysis of what is actually going on.

[1] http://batteryuniversity.com/learn/article/types_of_battery_...


So all the glass does is frustrate dendrification, and I have no idea how neutral sodium/lithium can sit next to neutral sulfur and not try to give it an electron.

In an ordinary sodium-sulfur battery, you get 2V and Na2S4 at the cathode.

In an ordinary lithium-sulfur battery, you get 1.7V to 2.4V and Li2Sx (x={1,2,3,4,6,8}) at the cathode.

So this battery has something extra in it that causes the lithium/sodium to plate (without dendrification) at the cathode instead of doing some sweaty redox with the sulfur that is right over there, practically begging for that outer electron.

The carbon is obviously in the cathode for electrical conductivity. Sulfur sucks at conducting electrons, even as a polymer.

...What if it is also acting as a graphene/fullerene "shell" over the sulfur, to keep the sulfides from forming and migrating? If you let the cell drain too far, the swelling sulfides physically overwhelm the nanostructure, but if you recharge it before then, it stays in place.


Yeah, I think you kind of hit the problem that people have with this paper, namely: how does Li exist un-ionized on both sides of the battery? They have a figure showing that, based on the amount of S they have in the battery, they can discharge something like 9000x the capacity that the available sulfur could account for, which is their basis for saying that it remains unreacted. There could be a multistep process, some sort of odd structure could be forming, but I don't think anyone knows yet, which is why it's both interesting and a little confusing.
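For a sense of scale, the theoretical capacity of sulfur fully reduced to S2- is easy to work out; the "~9000x" figure is their measured discharge capacity divided by that kind of limit. The masses and capacity below are made-up placeholders, not the paper's numbers:

```python
# Theoretical capacity of sulfur assuming full reduction S + 2e- -> S^2-
F = 96485           # Faraday constant, C/mol
M_S = 32.06         # molar mass of sulfur, g/mol
cap_mAh_per_g = 2 * F / M_S / 3.6   # ~1672 mAh per gram of sulfur
print(f"theoretical S capacity: {cap_mAh_per_g:.0f} mAh/g")

# Hypothetical numbers just to show the kind of ratio the parent comment refers to
m_S_in_cathode_g = 0.001          # placeholder sulfur mass, grams
measured_capacity_mAh = 15000.0   # placeholder measured discharge capacity
ratio = measured_capacity_mAh / (cap_mAh_per_g * m_S_in_cathode_g)
print(f"discharged {ratio:.0f}x what the sulfur alone could supply")
```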


I'm sure the details of the cathode preparation will be included in the patent filing documents.


Brilliant! And sounds plausible too.


Previous discussion: https://news.ycombinator.com/item?id=13778543 Maybe some additional detail in this qz.com article, idk.


There are some odd disparities between the UT article and this new article. The UT article claimed:

"The use of an alkali-metal anode (lithium, sodium or potassium) — which isn’t possible with conventional batteries — increases the energy density of a cathode and delivers a long cycle life. In experiments, the researchers’ cells have demonstrated more than 1,200 cycles with low cell resistance."

That would seem to completely rule out the lithium-air explanation.

Regarding the mechanism for energy storage, the UT article said:

"The engineers’ glass electrolytes allow them to plate and strip alkali metals on both the cathode and the anode side without dendrites, which simplifies battery cell fabrication. "

Does the "plating and stripping" language imply energy storage/discharge?

The abstract from the paper itself reads:

"The advent of a Li+ or Na+ glass electrolyte with a cation conductivity σi > 10−2 S cm−1 at 25 °C and a motional enthalpy ΔHm = 0.06 eV that is wet by a metallic lithium or sodium anode is used to develop a new strategy for an all-solid-state, rechargeable, metal-plating battery. During discharge, a cell plates the metal of an anode of high-energy Fermi level such as lithium or sodium onto a cathode current collector with a low-energy Fermi level; the voltage of the cell may be determined by a cathode redox center having an energy between the Fermi levels of the anode and that of the cathode current collector. This strategy is demonstrated with a solid electrolyte that not only is wet by the metallic anode, but also has a dielectric constant capable of creating a large electric-double-layer capacitance at the two electrode/electrolyte interfaces. The result is a safe, low-cost, lithium or sodium rechargeable battery of high energy density and long cycle life."

It is also referred to as a "metal-plating battery" in the abstract. I find it mystifying that neither of the terms "plate" or "plating" appears in the Quartz article.
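One concrete thing you can pull out of that abstract: assuming a simple Arrhenius dependence of the conductivity on the quoted ΔHm = 0.06 eV (my assumption; the paper may use a σT prefactor, which barely changes the estimate), the electrolyte should barely slow down in the cold:

```python
# Rough temperature scaling implied by the quoted motional enthalpy.
# Assumes sigma ~ exp(-dHm / kT), normalized to the value quoted at 25 C.
import math

k_B = 8.617e-5      # Boltzmann constant, eV/K
dHm = 0.06          # motional enthalpy from the abstract, eV
sigma_25C = 1e-2    # S/cm, the lower bound quoted at 25 C

for T_C in (25, 0, -20):
    T = T_C + 273.15
    scale = math.exp(-dHm / (k_B * T)) / math.exp(-dHm / (k_B * 298.15))
    print(f"{T_C:>4} C: sigma ~ {sigma_25C * scale:.2e} S/cm")
```

That works out to roughly a one-third drop between 25 °C and −20 °C, which is presumably why that small motional enthalpy is worth headlining.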


I hope they're doing more than just arguing over whether it works or not.

Assuming there is enough information in the paper to replicate the experiment, then the proper course of action would be to re-run the experiment and see whether the counter-hypothesis that this won't or can't work is true or not.

Of course, other groups should also do the same. I just hope this isn't another case of "we can't afford to do a replication of the experiment because that doesn't get us grant money" - which I have read about elsewhere is a big deal right now in scientific research (that is, new research is not being replicated - thus science isn't really being done properly).

This research, if it does turn out to be valid, could or would be world changing.


Not that you're completely wrong, but it's worth recognizing that experiments aren't exactly trivial to replicate. There's a (sometimes massive) cost in time and money, and expertise can get unbelievably niche.

It's really not as simple as "just replicate it".


> There's a (sometimes massive) cost in time and money, and expertise can get unbelievably niche.

This being a potential trillion-dollar business, I wouldn't worry about expediency in the pursuit of its veracity.


The promise of future trillions doesn't help when you need, say, 300k right now, not to mention that you can't pull requisite expertise out of thin air.

Again, replication is crucial. It's a Good Thing. It's just not a trivial thing that we can expect to "just happen".


Well unless you are going to fund it personally, there are hundreds of universities with the capability and the corporate relationships and the fiscal motivation to do it. Furthermore there are a lot of R&D labs in this sector with the capability in private hands.


The point was this: the paper was _just_ published and people seem to forget that securing funding takes some non-zero amount of time.


300k is nothing given the potential profit of this idea working. Tesla probably spends more than 300k on R&D every week.


I've seen discussions about replication a lot. Isn't it more often than not the case that replication is made expensive because the paper describing it simply leaves out too many details, making replication more of a guessing exercise than actual, well, replication?


Well, yes and no. Yes, to a certain extent papers don't have enough details for replication, but this is mainly because you just can't fit enough details into the scope of a journal paper. In a PhD thesis, you should be able to.

And no, it's not that replication becomes a guessing exercise, but it's the fact that the time spent setting up, debugging and then running a replication of even a perfectly described benchtop-style experiment costs much much more than the actual equipment involved. And the ROI is poor, since replication studies are hard to publish.


Should, sure.

But there can be critical details needed to replicate which the experimenter didn't realize were important and didn't think to record. And possibly didn't notice.

One of the reasons why replication matters is that it is a way of discovering these factors which matter but weren't obvious.


"...the paper describing it simply leaves out too many details,"

... and that would be because the patent hasn't been issued yet.


There's also skill. It's not as simple as running the same shell script twice, this time on your lab's cluster.


So ask the original researchers for more details? What is the problem here?


The problem is that they probably don't want people to be able to easily replicate it, if it really works. It's the same reason compsci papers rarely include source code.


If they don't want people to easily replicate then why do they publish at all? Doesn't it defeat the scientific process? If it's about protection of the invention then why not patent it?


I mean, there is one of these things on a lab bench somewhere.

Wouldn't hooking it up to a charge controller, weighing it, then watching how long a lightbulb stays on suffice?

Even if it's a complete black box, and the thing never gets cracked open, if I understand correctly, the claimed energy density has not been achieved?


If you're replicating a study with your rig, you aren't doing something else with it.

Moreover, replication by the same team isn't what's usually understood by "replication". A large part of the point is that others confirm a scientist's findings.


I agree and really want more real scientific studies but, and perhaps I'm naive here, if they have made a battery which behaves as they describe, you shouldn't need to use any sophisticated methodology to test it. If it looks like a duck, etc.

Supposing it was not a hoax, it becomes a curiosity how it works, and hopefully leads to awesome discoveries, but being able to produce a thing which is great and which we don't fully understand is fine by me.


> For his invention to work as described, they say, it would probably have to abandon the laws of thermodynamics, which say perpetual motion is not possible. The law has been a fundamental of batteries for more than a century and a half.

"A fundamental of batteries", yes, but also just, well, fundamental of everything.


Well, perpetual motion is not impossible; obtaining energy from it is impossible.


> Well, perpetual motion is not impossible; obtaining energy from it is impossible

This is incorrect.

From Wikipedia [0]: "A perpetual motion machine of the third kind is usually (but not always)[10] defined as one that completely eliminates friction and other dissipative forces, to maintain motion forever (due to its mass inertia). [...] It is impossible to make such a machine,[11][12] as dissipation can never be completely eliminated in a mechanical system, no matter how close a system gets to this ideal (see examples in the Low Friction section). "

[0] https://en.wikipedia.org/wiki/Perpetual_motion


Parent is probably referencing https://en.wikipedia.org/wiki/Time_crystal. Because a time crystal is a driven (i.e. open) quantum system that is in perpetual motion, it does not violate the laws of thermodynamics... does not produce work... does not spontaneously convert thermal energy into mechanical work... cannot serve as a perpetual store of work... A time crystal has been said to be a perpetuum mobile of the fourth kind: it does not produce work and it cannot serve as a perpetual energy storage. But it rotates perpetually.


In theory I think a closed system undergoing completely reversible processes could be considered perpetual motion (being reversible, entropy never increases). The second law states that those are the only sorts of isolated systems that don't gain entropy over time.

https://en.wikipedia.org/wiki/Second_law_of_thermodynamics

But time crystals aren't perpetual motion just by dint of not being closed systems at all, so they're kind of neither here nor there.


> But time crystals aren't perpetual motion just by dint of not being closed systems at all, so they're kind of neither here nor there.

Literally!


> Parent is probably referencing https://en.wikipedia.org/wiki/Time_crystal

Possibly, but that is not the meaning intended in the article, nor the usual meaning of the expression, so it's not relevant.


That was the weirdest shit I've seen in a good while.


He said perpetual motion, not 'perpetual motion machine'.

Do planets ever stop rotating?


> Do planets ever stop rotating?

Of course they do, on an appropriately long time scale. First they stop rotating around their own axis due to tidal forces, after which they lose the remaining kinetic energy to the gravitational waves emitted by their rotation around the star (again, on a very long time scale, but not infinite).

2nd EDIT: <removed gratuitous reference to moderation>.


Not gravitational waves (you're thinking caused by solar flares moving at the surface of the sun) - but the particle radiation from the sun will continue to slow planets over time.


> Not gravitational waves.

Yes, gravitational waves.

You are correct that in the case of the Earth, other causes will lead to its demise long before said waves would amount to any significant effect. But even in an ideal case where a single planet like the Earth revolved around, say, a black hole with the same mass as the Sun, which would emit no particles, it would still not orbit indefinitely due to the energy lost to gravitational waves emitted as a result of its revolution around another body. The amount of energy lost in one revolution is tiny, but is not zero (according to this calculation [0], for the Sun-Earth system it amounts to about 200W).

[0] https://en.wikipedia.org/wiki/Gravitational_wave#Power_radia...
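The ~200 W figure is easy to reproduce from the power-radiated formula on that page (circular-orbit, two-body approximation; a sketch, not a precise ephemeris calculation):

```python
# Gravitational-wave power radiated by a two-body circular orbit:
# P = (32/5) * G^4 / c^5 * (m1*m2)^2 * (m1+m2) / r^5
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
m_sun = 1.989e30    # kg
m_earth = 5.972e24  # kg
r = 1.496e11        # m, 1 AU

P = (32 / 5) * G**4 / c**5 * (m_sun * m_earth)**2 * (m_sun + m_earth) / r**5
print(f"Sun-Earth GW power: {P:.0f} W")   # comes out to roughly 200 W
```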


Yes, two bodies create a gravitational wave that affects other bodies - but I don't see how those variations represent work - and they aren't observable on the two bodies themselves, so one wouldn't expect any effect upon those two bodies. Throw in the other planets and you might have a better argument, for example that Jupiter-Sun waves affect Earth. Still, gravity "waves" aren't like water waves; as far as we yet know, they don't represent (changes in) the impact of particles. However, if you wish to argue that Higgs bosons mediate gravity (some have, I believe), this objection of mine might be overcome. In other words, though I might be wrong, I just don't think we're at the point where we can deduce friction from gravitational waves, and we certainly aren't at the point where we can empirically measure such friction (as opposed to the "wave" - variation in gravitational field).

PS, sorry: there should have been a question mark in my previous post asking whether you were thinking about flares.


> I just don't think we're at the point where we can deduce friction from gravitational waves

Where do you think the energy we measure in a gravitational wave comes from? The whole concept arose to explain the last parsec problem of orbital collapse.


There is a change in gravity, true - but a net slowing is another matter. That an explanation of X was needed doesn't justify any sufficient-if-true explanation Y.


> Yes, two bodies create a gravitational wave that affects other bodies - but I don't see how those variations represent work - and they aren't observable on the two bodies themselves, so one wouldn't expect any effect upon those two bodies

The lost energy results in orbital decay, eventually leading to the two bodies colliding. The page I linked to explains it in more detail.


Asserts it, yes - but my last sentence still applies. We don't have a fine-grained understanding of gravity; we'd need it for such a deduction. As it happens, I don't think we've ruled out that gravity itself - not just waves - slowly decays orbits. I rather expect this to be so, in fact, but on the basis of an eccentric theory of gravity.


BTW, I find it amusing that people take internet points away because they think the laws of thermodynamics apply only in certain circumstances.

They do only apply to closed systems.

(which information we are unable to ascertain about our universe)


It's better than that: the fact that the universe even exists implies that at some point there was either a violation of this law or it is part of a larger closed system within which the law still holds.


Or, there is no minimum entropy, and no matter how far back you look there's more entropy. Maybe the Big Bang is an illusion and before the big bang, there was even less entropy, and on and on infinitely into the past.


Which is like saying "I can swing forever as long as someone keeps pushing me." Clearly true, but pedantic and irrelevant to the discussion.


Pedantic needling is always a relevant response to internet point outrage.


Yes and yes. Any other softball questions?


Doesn't inertia guarantee the possibility of linear perpetual motion?


No because there is such a thing as the heat death of the universe, which with the knowledge we have at present seems to be the most likely end-game.

In other words: planets and comets will eventually completely disintegrate because their component atoms and even the components of those atoms will possibly disintegrate.

You're looking at 10^30 years or so, so don't wait up for it, but it is still substantially shorter than 'forever'.


That is such a pedantic and irrelevant answer.


Given the context (planets and comets) it actually isn't.

Planets are just like clocks, slowly winding down, especially the ones with moons through tidal friction.

And even if that weren't the case, the long run will still get you.

The problem with anybody proposing some kind of perpetual motion device is that perpetual really means 'forever', so no cheating.

If you want to change the scope you can do so but you'll have to drop the 'perpetual' bit.


If it lasts as long as the universe as we know it, I'm happy to call it 'perpetual'.


That's the trouble with definitions.

The way I look at it is the spin-down of planets and the heat death of the universe are reduction to extremes arguments about why perpetual motion machines won't work.

That's the very best you could do, in practice any 'perpetual' motion machine that is actually built will do enormously worse, so much worse in fact that I feel pretty confident placing the word between quotes.

The problem with this subject matter is that it brings the kooks out of the woodwork like not much else does (oh, maybe 'over unity energy', an even more absurd concept).

Of course any practical failures are due to insufficient funds or skill, and any theoretical limitations are just that: theoretical.

But so far, in spite of there being in excess of a million dollars of prize money on offer for the first perpetual motion machine, nothing seems to have passed the first level of testing.


"The very best you could do" is downright simple if you can afford a big rocket. I think it's fine to admit perpetual motion non-machines where perpetual means 'longer than the universe has existed', and to focus on the important part that there's no perpetual energy extraction.

A handheld box that outputs 1kW constantly for 2 billion years is very impossible despite bypassing all those arguments about eventual tidal/orbital/proton decay.


No, big rockets will return to their point of origin eventually. Compare with a comet, which also re-appears every so many years. The only thing we've managed to launch into inter-stellar space are the Voyager craft and they are simply coasting, like any other bit of inter-stellar debris, straight-line unless captured by some gravitational field. And they'll run out of power within a decade.

> A handheld box that outputs 1kW constantly for 2 billion years is very impossible despite bypassing all those arguments about eventual tidal/orbital/proton decay.

It is impossible in principle, but if it were possible chances are that it would still not be a perpetual motion machine, likely it would be producing an excess of 1KW and be capped at that much output to mask the eventual running down of whatever power source sat inside the box.

Applying all this to perpetual motion, it does not mean 'straight line unless interfered with', it means 'able to change configuration or accelerate relative to its own frame of reference'.

Dead objects are dead, whether they are moving respective to Earth or not is not relevant and does not magically make them perpetual motion machines, that's just energy imparted at some point in the past.


> Voyager

Yes, that's what I mean by 'big rocket'.

>It is impossible in principle, but if it were possible chances are that it would still not be a perpetual motion machine, likely it would be producing an excess of 1KW and be capped at that much output to mask the eventual running down of whatever power source sat inside the box.

I meant something that is uncapped and does not run down.

Just because it eventually breaks does not stop it from being a perpetual motion machine. Nobody claiming to have a perpetual motion machine is saying it will never need maintenance. An argument that everything needs maintenance is not enlightening. What matters is whether the energy source is infinite, not whether physical durability is infinite.

> Applying all this to perpetual motion, it does not mean 'straight line unless interfered with', it means 'able to change configuration or accelerate relative to its own frame of reference'.

> Dead objects are dead, whether they are moving respective to Earth or not is not relevant and does not magically make them perpetual motion machines, that's just energy imparted at some point in the past.

Did you forget that this line of conversation started with "Doesn't inertia guarantee the possibility of linear perpetual motion?" They're not talking about a machine with infinite power, they're talking about a dead perpetual motion non-machine.


You are redefining terms in ways that I'm not comfortable with.

Linear perpetual motion isn't.

Dead perpetual motion non-machine is indistinguishable from object at rest. If you don't agree with that you are disagreeing with just about all of physics and any speculation that follows is disconnected from reality as we currently understand it, which leaves a tiny little loophole but not enough to get your hopes up about.

If wishes were horses...

> An argument that everything needs maintenance is not enlightening.

Not everything comes in easy-to-understand ten-word soundbites: any machine needs maintenance due to friction, and as soon as there is friction there is a loss of energy and so the system will run down.

A needle bearing is a really good bearing but it still wears out (and warms up slightly losing some system energy to heat) and will eventually cause your machine to run down.

A big enough flywheel on air bearings could run for a really long time but it too eventually would run down, and air bearings use some energy. Magnetic bearings would work too but would induce a little bit of drag causing some losses to heat.

A current in a superconductor will run for a really long time but - you've guessed it - eventually it will run down. Very low resistance != zero resistance.

Anyway, HN is one of the few forums where scientific discussions are grounded in facts and that makes it a good place to hang out. If you feel that you want 'equal time' for perpetual motion, either by redefining the term in such a way that it becomes meaningless or in a way that it no longer qualifies as perpetual motion to begin with, I'm fine with that, but it makes the discussion rather pointless.


Let me try to explain my entire point in one go, because I don't think you understand what I'm trying to argue.

We have the real world, where something can go on nearly-forever but there are no magic energy sources.

We have unicorn world, where there are magic energy sources.

Arguments about friction and air resistance and tidal decay and heat death prove that nearly-forever is not literally-forever.

That's fine, but it doesn't disprove the unicorns.

If we were in unicorn world, our magic energy source would take 100% power input and turn it into 200% output, and even after losses to friction you'd still have 190% left over.

Saying that "eventually the magic device needs parts replaced" is completely true, but that's not what people care about.

When someone says "perpetual motion machine" they're talking about turning 100% power into 200% power. They're not worried about exactly how long it can run without maintenance.

In other words, you're taking 'perpetual' too literally. You're disproving the idea that anything goes forever, but what 'perpetual' actually means here is that it's extracting energy from an infinite source rather than a finite source.


> You're disproving the idea that anything goes forever, but what 'perpetual' actually means here is that it's extracting energy from an infinite source rather than a finite source.

There are no infinite sources of energy, only very very large ones.

> When someone says "perpetual motion machine" they're talking about turning 100% power into 200% power.

No, that's over-unity, a different concept altogether.

Anyway, it's way past my bedtime here (3 am almost), so I have to bow out here, apologies for that, it was an interesting discussion, I'm not sure if I've achieved anything but that's fine with me. FWIW I was active for years on a message board where 'perpetual motion', 'over-unity' and 'zero point energy' came up with great regularity, if you enjoy that sort of thing you might find similar minded folks there, it is called 'fieldlines'.

sleep well & best regards,

Jacques


> There are no infinite sources of energy, only very very large ones.

Yes, in the real world. But thought experiments are different.

> No, that's over-unity, a different concept altogether.

It's a subset. I'm pretty confident that if you have a hundred people define or describe a "perpetual motion machine", almost all of them will talk about pulling energy out of 'nowhere', or turning energy into more energy. And almost none of them will care if parts wear out, as long as those parts aren't supplying the energy. Maybe I'm wrong, but you're the first person I've seen that has taken 'perpetual' literally and to the end of time.

Good night.


Linear perpetual (unaccelerated) motion is the same thing as no motion.


Are you saying motion == acceleration?


He's making the point that relativity says that if something is "in motion" but never accelerates (including accelerating by turning), you can just put your point of reference with that object and define it as "at rest."

In practice, in the real universe, everything is subject to acceleration. You can only have an object that is infinitely moving without accelerating in a toy universe with only one object. At which point it is probably more intuitive to define it as at rest instead, since there's nothing else around for it to be moving in relation to.


I understand that everything is motionless (with respect to itself).

However, we are talking about motion. So, it is obvious that there is an implicit frame of reference other than the object itself.

It makes no sense for him to bring up that everything is motionless.


It makes perfect sense. Maybe not to you, but all motion is relative to some frame of reference. So if there is no acceleration in any dimension an object might as well be at rest until it collides with something (which would show either one or both objects to be in motion relative to a third reference frame).

The universe does not have convenient 'true' grid lines along its axis to indicate the one true reference frame. You would be hard pressed to indicate a frame of reference that is motionless compared to everything else.


The statement that motion is relative is true, but using it to argue that nothing has motion is what doesn't make sense. An object does not have to be alone in an entire universe to have no acceleration, or utterly negligible acceleration. And if it's not alone, it's trivial to have motion, even if you're not 100% sure which objects are moving.


> The statement that motion is relative is true

Ok.

> but using it to argue that nothing has motion is what doesn't make sense.

Nothing has motion worth discussing that is not able to (a) change its trajectory or (b) change its configuration.

As soon as those options are out you have an inert piece of matter, by definition not a perpetual motion machine.

> An object does not have to be alone in an entire universe to have no acceleration, or utterly negligible acceleration.

No, it can be without acceleration in an extremely crowded universe. And any acceleration that it does have will likely come from gravitational interaction with other bodies, or the impacts of particles or other objects (including light).

> And if it's not alone, it's trivial to have motion, even if you're not 100% sure which objects are moving.

That is where we depart. You are thinking of motion in a classical sense, possibly anchored in 'relative to the point of origin', but that's not how it works in physics, where acceleration relative to a local frame of reference is the only meaningful one because it shows you whether or not the 'device' is the master of its own fate or merely along for the ride.

That's a huge difference, roughly akin to 'dead' or 'alive' and of course there are always people who would like to split hairs over that but we all know what 'dead' means. In physics it can be a little harder to prove that an object really is dead but it gets a lot easier once you catch on to the inertial frame of reference idea.

https://en.wikipedia.org/wiki/Inertial_frame_of_reference

A final note that may drop the coin for you: An object 'at rest' does not change relative to its own frame of reference, but an object that has a power source can change relative to that same frame of reference. The reference frame stays still relative to where you found the object the first time, and then you track it over time. If the change is just a translation then the object is dead; if it can change trajectory relative to that frame of reference, rotate about an axis faster or slower than how you found it initially, or change its configuration relative to the frame of reference (for instance, by operating some kind of actuator), then it is still 'alive'.


> As soon as those options are out you have an inert piece of matter, by definition not a perpetual motion machine.

I think you're mixing up different comment threads. This was entirely about inert pieces of matter with 'dead' motion.

> You are thinking of motion in a classical sense

No I'm not. Look, if two billiard balls are 10 meters apart and that distance is growing by one meter per second, there is objectively motion there. It's inert motion, it's relative motion, but it's real motion. You can use any reference frame you want.


I'm willing to bet that existence of perpetually travelling energy or mass has been around for an infinite amount of time.


A superconductor is sort of a perpetual motion machine for electrons, isn't it?


> A superconductor is sort of a perpetual motion machine for electrons, isn't it?

It may seem like it, but no. As this link [0] explains, "A prime example is the superconductive metals, whose electrical resistance disappears completely at low temperature, usually somewhere around 20 K. Unfortunately, the energy required to maintain the low temperature exceeds the work that results from the superconductive flow."

[0] https://www.britannica.com/science/perpetual-motion


Yeah I didn't mean to imply you could get work out of it, just that it's a system with zero dissipation, so it would seem to be a perpetual machine "of the third kind" as referenced by the post above.

(If you don't count the work required to maintain low temperature, of course, which I don't think you should, since you could place the device way out in intergalactic space where it would attain a temperature near 2.7K without you having to do any work to maintain it.)


It's a misuse of the word 'machine' to use it on something incapable of performing any kind of work.


My money is on it being an air battery, in the paper they commented: "The charge and discharge voltages show a good coulombic efficiency over 1000 h; the cycling was continued beyond 46 cycles despite an imperfect seal of the cell. "

A rather simple test to determine if it is in fact behaving as an air battery would be to weigh the battery before and after discharge. Air batteries get heavier as they discharge.
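Rough numbers for how big that weight change would be, assuming a lithium-air cell whose discharge product is Li2O2 (my assumption for the sketch):

```python
# How much heavier a lithium-air cell gets per amp-hour of discharge,
# assuming the discharge product is Li2O2 (2 Li + O2 -> Li2O2, 2 e- per O2).
F = 96485          # Faraday constant, C/mol
M_O2 = 32.0        # g/mol
coulombs_per_Ah = 3600

mass_gain_g_per_Ah = coulombs_per_Ah / (2 * F) * M_O2
print(f"~{mass_gain_g_per_Ah:.2f} g of oxygen taken up per Ah")  # ~0.6 g/Ah
```

On the order of half a gram per amp-hour, which should be easy to see on an analytical balance.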


you could also try running it in various vacuums.


Or in a noble gas environment.


I had a highly upvoted comment on the original submission - the thing I missed was that in the original paper they claim they went from metal lithium on the cathode to metal lithium on the anode after discharge. You can't do that. In order to have a battery the lithium has to move from a high energy state to a low energy state - not from one state to the same state. There is something fundamentally fishy about the experiment, but I'm not sure what. The way it is portrayed in the paper, you could basically just switch the cathode and anode and suddenly have a fully charged battery.


If you had a normal electrochemical cell, with a sodium anode and sulfur cathode, and liquid electrolyte, the sodium would release an electron into the circuit and hop off the solid anode into the liquid electrolyte. On the other side, sulfur would grab electrons from the circuit and hop off the solid cathode into the liquid electrolyte.

In the middle, the ions would keep their charges and physically arrange themselves such that the charges are balanced in every direction. This would be an energy minimum, and it would be hard to reverse. Trying would probably cause the electrolyte to decompose instead of redepositing the original reactants onto their electrodes. It would not be rechargeable.

As a physical metaphor, you have two mutually-attracted boulders rolling down from opposite ridges into the same bowl-shaped energy valley. When they collide, they strike with such force they turn to sand.

In the rechargeable battery, you prevent that last step by putting a giant springy foam block in the center of the valley, so that the boulders can't touch. You prevent the sodium and sulfur from coordinating physically to cancel out the physical force from separated charges. Then to recharge, you tow or push the boulders back up the hill.

I think the "same metal at the cathode" might be an artifact of putting the springy foam block in to keep the boulders intact. You couldn't switch the anode and the cathode any more easily than you could swap the elevations of the ridges and the valley. That is, it takes as much energy to do the physical rearrangement as it would to charge the system electrically.


To follow your boulder/hill metaphor, the lithium boulder rolls down the hill (crosses the electrolyte), settles at the bottom of the valley (bonds with sulfur)... And then somehow, as more lithium rolls down the hill, it continues to pile up at the bottom until it forms another hill the same size as the first (lithium plates onto the anode). That's what they claim happens: that the sulfur acts as a redox center and sets the voltage for an unlimited amount of lithium. But a redox center cannot just lower the voltage of an arbitrary amount of metal. The metal bond is entirely normal, and the metal can be cut with an energy proportional to mass^(2/3), while the energy stored increases linearly.
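Written as a scaling relation, that last point is:

```latex
% Surface-energy cost of separating plated metal vs. electrochemical energy stored
E_{\text{cut}} \propto A \propto m^{2/3},
\qquad
E_{\text{stored}} \propto n_{\mathrm{Li}} \propto m
```

So for any appreciable mass of plated lithium, the cost of scraping it off can't be where the cell's energy is hiding.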


>I think the "same metal at the cathode" might be an artifact of putting the springy foam block in to keep the boulders intact. You couldn't switch the anode and the cathode any more easily than you could swap the elevations of the ridges and the valley. That is, it takes as much energy to do the physical rearrangement as it would to charge the system electrically.

That is the case in normal batteries, but in this battery the start and end product are both just lithium metal- not intercalated into graphite or bound to sulfur, literally just solid lithium. There's no (obvious) place for the energy to be hiding. It would appear that you could just scrape it off and receive free energy.


I agree that it is odd, but it isn't categorically impossible from first principles.

For a simple example, consider two plates close together, one with a positive charge and the other with a negative charge. That's a capacitor. Now physically pull the plates a bit apart. Voila! Every individual piece is in the same state, yet you now have more energy stored!


The voltage changes in that case. In this case, a voltage difference exists between two materials with the same voltage relative to a reference. It is as if you got pure gasoline out of a car's tailpipe.


Yes, but you can make the voltage not change.

Take the same capacitor, but make each plate into 2 layers. Pull the outer layers away and attach your electrodes to the inner layers. You can arrange to have more energy, at the same voltage. As the capacitor discharges, bring the outer layers in to keep the voltage across the inner plates constant. Reverse the process as it charges.

Please be clear, I'm not claiming that their battery works this way, or even that it works at all. I'm only making the claim that it is actually possible to design a device where what looks impossible here is not necessarily impossible.


I think they start with Li on one side, and a Cu/C/S cathode on the other, and then plate the lithium on the anode. They measure a difference in chemical potentials between cathode and anode, so there is reason for the Li to move, and as the Li moves, I think I remember the V_cell decreasing.

From the paper:

> At the cathode, the build-up of plated metallic lithium changes the morphology of the cathode to increase C_C and create a very slow fade of V_cell with time of discharge


Wait. There's more energy if you pull the capacitor plates apart? Where does it come from?

How much more energy are we talking about?


Since the plates attract each other electrostatically -- one is positively charged, the other negative -- it takes energy to pull them apart. The amount depends on various factors: the size of the plates, how close they are together, how much charge is on them.


Is there a formula for that? I'm curious now.


CV=Q. Pulling plates apart makes the capacitance C decrease, while the charge Q remains constant. This means V must be increasing. Energy in a capacitor is VQ/2, so the energy increased.

(This is somewhat circular reasoning. The point is that the charges on capacitor plates attract each other so it takes work to separate the plates. Integrating the force along the path the plate takes during the separation gives the extra energy stored in the capacitor.)
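A quick numerical illustration of that, with made-up plate geometry and charge, just to show the scaling:

```python
# Pulling apart a charged, isolated parallel-plate capacitor at constant Q:
# C = eps0*A/d drops as d grows, so E = Q^2 / (2C) grows. The extra energy
# is the mechanical work done against the attraction between the plates.
eps0 = 8.854e-12    # F/m
A = 0.01            # plate area, m^2 (10 cm x 10 cm, placeholder)
Q = 1e-7            # charge, C (placeholder)

for d in (1e-4, 2e-4, 4e-4):   # plate separations in meters
    C = eps0 * A / d
    E = Q**2 / (2 * C)
    V = Q / C
    print(f"d={d*1e3:.1f} mm  C={C*1e12:.1f} pF  V={V:.1f} V  E={E*1e6:.1f} uJ")
```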

A similar thing is a static generator made using a pie tin. You put a little charge on the pie tin by some means, like rubbing on styrofoam, and then you separate the tin from the styrofoam, which gives more energy to the tin (increases the voltage), which then you can transfer to a Leyden jar.

I also discovered in high school that you can use a pair of plastic chairs and two willing participants to generate high voltages when there is low humidity. Person one stands up off the chair, touches person two, then sits down. Person two then stands up off the chair, touches person one, then sits down. Repeat. Eventually, you will generate fairly high voltages that sort of hurt. You can hear the excess static charge on the chair squeal into the atmosphere after standing up.


If you're even the slightest bit interested in this topic, I highly recommend the Nova special "Search for the Super Battery" that aired in February, hosted by David Pogue.

https://www.youtube.com/watch?v=pCDuM_apIg8


Wonder why they didn't mention LiFePO4. Amazing, non-explosive production cells. A bit pricier, but not that much.


The energy density (Watt*hours per kg) is about half compared to the "explosive" cells on the market.


Honestly, it doesn't matter to most of us right now. Goodenough's original breakthrough didn't hit the mass market for 20 years. Turning breakthroughs into /products/ is still really, really hard.


Better battery technology increases demand for batteries. Lithium ion technology made batteries good enough to be common but not good enough for everything we want to do with them. There's so much battery dependent technology now that another breakthrough could be even more profitable. There's strong incentive to get it to market quicker than 20 years.


That is true, BUT: In the 1980s A) far fewer mobile devices were around and B) such devices ran happily on Mignon (AA) batteries (think the Walkman).

There is a vastly bigger hunger now for batteries (also as storage devices/for electric cars), and I bet inventions/innovations will be implemented faster.


All-solid-state Li-S batteries have been previously reported by ORNL. Manufacturing is the giant hurdle here.


Very interesting. I remember how many entries to the LITECAR challenge essentially said "Use graphene!" as the innovation, which, frankly speaking, is fundamentally useless. As in, it doesn't really exist as a viable notion right now, or for the foreseeable future in consumer application.

I like how the article notes one of the primary benefits is related to cost, which always is a concern of mine when reading about innovations and developments and discoveries like this.

All that said, I'd really like to see what a catastrophic failure looks like, because if they're going to be on the road eventually, let's see what we're in for in some worst-case-scenario testing.


I have the same intuition about this that I had when the OPERA team thought they had exceeded the speed of light [0] -> this violates a fundamental law of physics so this must be wrong.

My money is on experimental error.

[0]: (most upvoted HN post if you search for "speed"+"light") http://www.bbc.co.uk/news/science-environment-15017484


If there is a theory that the battery is in fact a Lithium-air battery, couldn't this be easily disproved by running the experiment in a vacuum?


Or just under an atmosphere of inert gas.


Or even a sealed container.


The use of glass in this invention sounds similar to this one: https://phys.org/news/2015-03-glass-coating-battery.html


"Goodenough" is a very apt name for a leader in battery technology.


Life lesson: John Goodenough is certainly happier than Pete Best.


Isn't it strange good enough sells better than the best?


As any HN reader should know, Worse Is Better (https://en.wikipedia.org/wiki/Worse_is_better)


You'd think if there were anyone on the planet who should have quit while he was ahead it would be a gentleman named "Goodenough"


Seems like a pretty good battery, but is it goodenough?


I am hoping for some experimental verification :)


TL;DR: Batteries store electrical energy by using materials with different potentials. The paper in question does not indicate materials with different potentials, at least as far as others in the battery field can see. The possibilities are (rough numbers sketched after the list below):

1. The paper is a complete fraud (unlikely)

2. Some sort of measurement error has occurred (possible)

3. The method of action is different from what is described in the paper

4. The method of action is novel
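
To make the "different potentials" point concrete, here's a minimal sketch of the underlying relationship. The half-cell potentials and sulfur capacity below are textbook ballpark values assumed for illustration; none of these numbers come from the paper:

    # A cell's open-circuit voltage is roughly E_cathode - E_anode, and its
    # theoretical energy is charge capacity x voltage.
    F = 96485.0           # Faraday constant, C per mol of electrons
    M_S = 32.06           # molar mass of sulfur, g/mol

    E_ANODE_LI = -3.04    # Li+/Li vs. SHE (assumed textbook value)
    E_CATHODE_S = -0.45   # S/S^2- vs. SHE (very approximate)

    v_cell = E_CATHODE_S - E_ANODE_LI                # ~2.6 V open-circuit estimate
    cap_ah_per_g = 2 * F / M_S / 3600                # S + 2e- -> S^2-: ~1.67 Ah per gram of sulfur
    wh_per_kg_sulfur = cap_ah_per_g * v_cell * 1000  # theoretical ceiling, sulfur mass only

    print(f"cell voltage ~ {v_cell:.2f} V, sulfur capacity ~ {cap_ah_per_g * 1000:.0f} mAh/g")
    print(f"theoretical cathode-material energy ~ {wh_per_kg_sulfur:.0f} Wh/kg")
    # If both electrodes sat at the same potential, v_cell would be ~0 and the
    # stored energy ~0, which is exactly the objection: where does the energy
    # come from, if not from the electrode reactions?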


Good synopsis.

Given the pedigree of the scientists involved, my money is on #4.


I'm skeptical. The conclusions drawn in the first paper were... egregious. I think it's more likely that there was insufficient oversight. I'm reminded of the FTL neutrino experiment.


It's definitely an extraordinary claim. Even people excited about the possibilities should be tempering their expectations.


Do researchers not extend the professional courtesy of replicating an experiment without publishing the experiment it was based on (i.e., without letting the cat out of the bag)? (I mean, if they were approached to do so by a researcher with an extraordinary result.)

What I mean is: if someone across the world sent you a detailed methodology to test, without mentioning the result you're expected to get, would you do it for them without wanting co-author credit, and without talking to anyone else about it?


No. A project lead doesn't (usually) do all of something personally, so outside of a large personal favor you'd be asking someone to have the people under them do a ton of work to replicate something for no tangible benefit. At most you'd get a special mention, or, very rarely, be added to the author list.

Replicating experiments requires extremely expensive equipment, materials, expertise, and time. You can't bang out most experiments in a few afternoons. If you want your results replicated, you do them again, because you'll be fastest and you're the only person you can be 100% sure will follow the same steps.


Thanks. This makes sense.

Now that it has been published, are teams going to try to replicate and understand it? Or are some seemingly breakthrough papers ignored because they "must" be wrong?


Given the publicity, someone is probably already planning to look into it. Publicity is one of the major differences between crank science that gets ignored and things like the reactionless drive at NASA.

I wouldn't say that papers are ignored because they must be wrong. It's hard to make a definitive statement either way, but "revolutionary" experiments very rarely come with good rigor and examination (the neutrino experiment is one of the few counterexamples; it was excellent). Crappy papers get ignored, revolutionary or not. It's very rare that a paper is well done, upsets normal beliefs, and isn't immediately seized on by the media and community.


> Where does the energy come from, if not the electrode reactions? That goes unexplained in the paper.

*shrug* Seems premature to publish this if no explanation is yet available. Otherwise I'd just assume that, as you state, a measurement error has occurred. IMO you can't really rule that out unless you have a detailed understanding of how the invention works.


From the article: "Goodenough invented the heart of the battery that is all but certainly powering the device on which you are reading this."

^ rumors of the demise of the desktop are greatly exaggerated.

Yeah, I get it, a lot of people read internet articles on a tablet or phone, but the phrase "all but certainly" goes well beyond reality. A brief Google search suggests mobile usage is probably between 50% and 60% of internet traffic.

Yeah, I think mobile will continue to grow, but I don't think the desktop will disappear in the near future, if it ever does completely. Of course, the blurring of the lines between the two could make the distinction harder to draw.


> A brief Google search suggests mobile usage is probably between 50% and 60% of internet traffic.

Devil's advocate: the author could have been making his assumption based on Quartz's (qz.com) readership profile, which I would assume consists of a high number of mobile and laptop users (also Li batteries) – probably more than desktop users (assuming we are not counting laptops as desktops).


Besides the obvious point that laptops make up a large part of the 40-50% "desktop" figure you use, the CMOS battery in your desktop tower is lithium.


Lithium is not lithium ion.


CMOS battery does not power a running computer, though.


Correct. If you never shut down or restart your computer, it is never powered by lithium batteries; however, that niche is consistent with the author's use of "all but certain."


hahahaha was about to point out the same exact thing.


A charitable interpretation of the "all but certainly" line is that they meant: given that you are reading this on a mobile device, that device is all but certainly powered by the battery Goodenough invented. In fact, that's how I read it. After all, rising above the level of semantics, what is the topic of the sentence? That the things he invented are in widespread use. The percentage of people reading a given article on mobile vs. desktop is not close to the topic of the sentence, so that's probably not what they intended their words to be a commentary on.


According to Gartner and Fortune, of the 2.3 billion devices sold worldwide in 2016, roughly 100 million were desktop devices; the rest were portable in some fashion (laptop, tablet, phone).

That's a 96% certainty.

http://www.gartner.com/newsroom/id/3560517 http://fortune.com/2016/06/09/pc-sales-are-worse-than-you-th...
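
Spelling out the arithmetic behind that figure, using the rough counts above ("portable" here just meaning anything that isn't a desktop):

    # Quick check of the ~96% figure from the approximate 2016 sales numbers above.
    total_devices = 2.3e9   # all devices sold worldwide in 2016 (approximate)
    desktops = 100e6        # desktop devices (approximate)

    portable_share = (total_devices - desktops) / total_devices
    print(f"portable share: {portable_share:.1%}")   # ~95.7%, i.e. roughly 96%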


I would expect desktop PCs to have a longer lifetime than portable devices: most people I know have a desktop that's 2-7 years old, but a phone from the last year.


But do your friends trash their phone when it gets to be 2 years old, or do they sell it on eBay or hand it down to family members?


Most people I know (including myself) seem to break their screen in under 2 years, and have to buy a new one.

Meanwhile, PCs are often shared by family members


This is an incredibly off-topic rant.


"Desktop" ambiguously conflates desktops and laptops.

Mobile web views have definitely surpassed desktop/laptop views.

Desktop/laptop share of web eyeballs is in free fall (a concave-down trend).

Actual desktop sales are continuing to crater, which is why IBM got out of the business long ago. Laptop sales are nearly floundering as well.

https://qz.com/825014/mobile-website-views-surpassed-desktop...


But all traffic is not equal: you might do research or look at a news app on mobile, but serious revenue comes from desktop.


You missed the boat. Mobile revenue edged out desktop revenue in 2016, and the desktop trend is concave down... which means dying. Desktops will continue to die back to a new equilibrium because they're less portable and less practical than a device that can go almost anywhere but a Dave Chappelle show. They won't go away completely, but their importance is greatly and permanently diminished, to the point of irrelevance for most average people, hence again why PC makers are struggling. Everyone these days has a smartphone, but not everyone has or needs a desktop. That's a fact of life as technologies change. Old PC homebrewers and $10k-rig gamers will continue to tout their resistance to getting a cell phone and the superiority of their own particular religion.


This is my problem with many HN comments. Picking some irrelevant point to nit-pick.


This is what I love about many HN comments. Picking some point of interest they care about and starting a discussion about it.


I didn't even see anything wrong with that sentence until you pointed it out. My desktop exists for computing power, and I just manage its tasks from my laptop (which I read the article on).


I mean, I prefer the desktop, so I sure hope it doesn't.


The title reads like bad dip.ly clickbait...



