The initial energy is mgh and the final energy is 2 * (m/2 * g * h/2) = mgh/2, so half of the energy has disappeared. It is clear that work could have been done by the water moving between the two barrels (like in a hydro-electric power station).
and it can cause things like water hammers: https://en.wikipedia.org/wiki/Water_hammer which are analogous to inductor sparks: http://web.physics.ucsb.edu/~lecturedemonstrations/Composer/...
Water analogies only go so far. You would do well to pick up a book on basic electrical networks to learn this material further. Don't shy away from the math; it's really the only way to build the understanding that leads to intuition.
The biggest difference is in mechanical effects: in hydraulics, force is primarily a function of the system pressure, and speed a function of the flow rate. For electromagnetics, those are reversed: voltage controls the speed of a motor and current controls the force, for example.
They also deviate from the linear regime in different ways, which means the more interesting components have to be built completely differently to perform the same job. A one-way check valve and a diode are both governed by similar equations on a macro scale, but you’ll never be able to understand the internal structure of the valve by an electronic analogy or the design of the diode by a hydraulic one.
Part of that is getting comfortable with the maths, and the way that’s visualised is just equations, graphs and sometimes things like diagrams of fields or heatmaps. I work in RF so we do antenna simulations and things like that, and the software generates radiation pattern diagrams in 2D, or in 3D heatmaps showing the energy density. It just doesn’t work to, say, try and think of what would happen if an antenna was spraying water out or something.
It's then equally straightforward to see how the same explanation applies to the more contrived case of two capacitors.
Once you include that, you'll see your LC resonator is perfectly undamped and oscillates the charges back and forth forever, breaking the assumption that a "steady state" would equalize the charge on both capacitors.
The equations will also reveal the problem to you when you try to calculate the current that flows from one capacitor to another, with no inductance or resistance in between. You might try putting in an inductor and looking at the circuit behaviour in the limit as the inductance goes to zero -- you'd see the frequency of oscillation climbs to infinity; the full charge essentially teleports back and forth from one capacitor to the other, but still never settles into a steady state of equal charge on both capacitors.
Once you see this, it's like having a problem set up with a frictionless ball on a hilltop beside a valley, saying "in the steady state, the ball has rolled down and settled in the valley. But there was no friction! Where did the energy go?", and the answer is just that the ball doesn't settle in the valley, but rather continues back up the other side, carried along by the kinetic energy that had been neglected in the problem statement.
(Before the well-actualies point out that an LC circuit will damp itself via radiation, let's just say it's also perfectly shielded).
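To make that limit concrete: for two equal capacitors C joined through a series inductance L, the inductor sees their series combination C/2, so the resonant frequency is f = 1/(2π√(L·C/2)). A minimal sketch, with assumed component values:

```python
import math

def lc_frequency(L, C):
    # Two equal capacitors in series present C/2 to the inductor,
    # so the resonance is f = 1 / (2*pi*sqrt(L * C/2)).
    return 1.0 / (2 * math.pi * math.sqrt(L * C / 2))

C = 1e-6  # 1 uF, an assumed value
for L in (1e-3, 1e-6, 1e-9):  # shrink the "parasitic" inductance
    print(f"L = {L:g} H -> f = {lc_frequency(L, C):.3e} Hz")
```

The frequency grows as 1/√L, so in the L → 0 limit the sloshing never stops; it just gets infinitely fast.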
A full model is complicated by the fact that your capacitors have an internal series resistance and leakage resistance, and that the leads and circuit board traces have resistance and inductance. Just like the pipes and valve have some resistance to flow, and some water might leak out or evaporate, and the water has inertia and nonzero viscosity, and turbulence will turn some of the motion to heat, and depending on the phase of the moon, the time of day, and the compass orientation of the barrels, the water may be pulled into a picometers-higher tide in one barrel. When you say "they equalize with half the water in each" you don't typically mention that the phase of the moon may be a factor.
We do store energy in water towers for example, so it is pretty surprising and unintuitive that if you put two large water tanks, one full and one empty, right next to each other, open a valve between them allowing their levels to equalize, then assuming there is no distance and you used teflon coated valves you lose... half of the energy as they equalize!
I certainly wouldn't have thought so. I'd have thought you keep 70%-95+% of the energy.
Actually the oscillation explanation didn't match my intuition at all, because I would have thought the water flows from high to low until the point of equalization and then stops flowing, without oscillation.
I get that this doesn't happen, but I would have thought it would!
It might very well do that. You will just have lost half the energy already to heat from friction with the piping and due to water's viscosity.
> it is pretty surprising and unintuitive that if you put two large water tanks, one full and one empty, right next to each other, open a valve between them allowing their levels to equalize, then assuming there is no distance and you used teflon coated valves you lose... half of the energy as they equalize!
If you did that, they would equalize - very briefly - and then the second tank would fill higher and higher, until it's (nearly) full. Then the reverse begins.
This is very similar to a pendulum. Just that our intuition about pendulums is better than for near-frictionless transfer of fluids in connected systems.
> I'd have thought you keep 70%-95+% of the energy.
Well, the potential energy being zero at the bottom of each tank is arbitrary. If both tanks are inside a water tower, you might keep 99% of the "useful" energy even if you let them equalize, because the height above ground is greater than the height above "tank bottom". Maybe this is the source of some confusion here?
The oscillation bit happens, but it doesn't dominate. Water analogies can be misleading because water has intrinsic properties (eg turbulence ~= resistance) which aren't always significant in an electronic circuit. To make the equivalent of an LC-dominant circuit with (open) water tanks, you'd need something like a high-momentum turbine in the transfer pipe.
The demonstration is called Pascal's vases.
So is it not accurate to say that water does this by oscillating and throwing away energy as parasitic losses, until it equalizes?
Are you saying in general does a system of connected tubes NOT throw off lots of energy as it gets into the equalized state shown?
If it does, I think this fact should also be mentioned when teaching the "water seeks its own level" demonstrated above. (Called Pascal's vases.)
In this case, the equations not matching up proved that our initial assumption (steady state) was itself wrong.
But obviously, even with this 1/2, your relation after dividing between two tanks still holds. :)
Now I want to try the water barrel thing myself and see how many times the water goes back and forth before it finally stops...
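For what it's worth, here's a toy sketch of that count, modeling the level difference between the barrels as a damped oscillator; the damping ratio is a pure guess, so this only illustrates the shape of the experiment, not a real prediction:

```python
import math

def level_swaps(z=0.2, w=1.0, cutoff=0.01, dt=1e-3):
    # Toy model: level difference x(t) = exp(-z*w*t) * cos(wd*t).
    # The damping ratio z is an assumed value; real plumbing is
    # heavily damped, so expect only a handful of swings.
    wd = w * math.sqrt(1 - z * z)  # damped oscillation frequency
    n, t, prev = 0, 0.0, 1.0
    while math.exp(-z * w * t) > cutoff:  # stop once the envelope is < 1%
        x = math.exp(-z * w * t) * math.cos(wd * t)
        if x * prev < 0:
            n += 1  # level difference changed sign: one more swap
        prev, t = x, t + dt
    return n

print(level_swaps())  # a handful of back-and-forths before it settles
```

More friction (larger z) means fewer crossings before the levels settle.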
At the first engineering firm I worked for, we had a very good heat transfer solver but no electrical solver since it was an infrequent need in our field.
One of the old-timers was an expert at reframing the electrical problems into heat transfer, solving in the available tools, then converting back. As he said, "it's all unit conversions". I never picked it up beyond simple resistance networks, but it was a cool way to abuse the tools.
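The mapping he would have been using is the standard lumped analogy: temperature ↔ voltage, heat flow ↔ current, thermal resistance ↔ electrical resistance. A toy sketch (all values invented) showing the same series/parallel solver serving both domains:

```python
def series(*rs):
    # Resistances in series simply add, in either domain.
    return sum(rs)

def parallel(*rs):
    # Parallel resistances combine via reciprocals, in either domain.
    return 1.0 / sum(1.0 / r for r in rs)

# Electrical: two 10-ohm resistors in parallel, in series with 5 ohms.
r_total = series(parallel(10.0, 10.0), 5.0)  # ohms
i = 20.0 / r_total                           # amps, for a 20 V source

# Thermal: same topology, now in K/W; "voltage" is a temperature difference.
rth_total = series(parallel(10.0, 10.0), 5.0)  # K/W
q = 20.0 / rth_total                           # watts, for a 20 K difference
```

Same solver, same numbers; only the units change, which is the whole trick.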
Radiation could be the sound produced by the rushing water?
Well, but maybe... if you equalize two water tanks through a very fat pipe, the water will slosh back and forth, causing the whole assembly, table, and room to rock?
Is there some underlying physical explanation of this? Something that says that "in a dynamical system the maximum efficiency can be at most 50%".
It works out to the same 50% because the analogy is just very good. Dividing over two barrels/capacitors means halving the potential level (the amount of water/charge doesn't change), which means 1/4 of the energy in each barrel.
If the fill level is h, so the weighted average of the mass is at h/2, then
E = m * g * h / 2
m = V * r
V = A * h
E = A * r * g * h^2 / 2
Unit check: m2 * kg/m3 * m/s2 * m2 = kg m2 / s2 = J
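Plugging that formula in for the two-barrel split (the numbers here are arbitrary) confirms the factor of two:

```python
def barrel_energy(h, A=1.0, rho=1000.0, g=9.81):
    # E = A * rho * g * h^2 / 2, per the derivation above
    return A * rho * g * h**2 / 2

h = 2.0                           # assumed initial fill level, metres
before = barrel_energy(h)         # one full barrel
after = 2 * barrel_energy(h / 2)  # two barrels at half the level
print(after / before)             # -> 0.5
```

Because E goes as h squared, halving the level quarters each barrel's energy, and two quarters make a half.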
On the second interview they asked the exact same questions and he got the job offer.
He's now got his name credited to several AAA games and moved in with an indie studio to publish his personal project under them.
See? Interview puzzles work!
Interview question was the first result... 7 years later.
Except that question in particular is a fairly silly one. A satirical rendition exists that illustrates why, and happens to have been discussed long ago on HN (https://news.ycombinator.com/item?id=569897) but apparently the original article has moved (https://sellsbrothers.com/12395).
Instead they should have asked: design a manhole such that these constraints are satisfied.
It was 1982. A small company from the Pacific Northwest called Microsoft sent some engineers down to recruit at Caltech. I was a senior that year, and interviewed with them. They asked me their probability problem, I got it wrong, and did not receive an offer.
I've occasionally wondered how different things would have been had I got it right, and so maybe got an offer, and accepted. I think pretty much every engineer that started at Microsoft around that time and stuck with it (which I almost certainly would have done) ended up quite well off.
I've also wondered if any of the engineers that interviewed me are people whose names I would now recognize.
 It was not at all certain I would have accepted. I was more interested in bigger machines than the microprocessors MS was focused on, and also had a strong preference for staying in the Los Angeles area.
We just hope to be certain about the present moment.
Seven puzzles later of me saying, “yes, I’ve heard that one” and giving a short description indicating that I understood the solution, the interviewer became visibly upset/annoyed.
And so when they asked the 8th question I suddenly ‘didn’t know this one.’ The interviewer was delighted. Of course I did know it already but pretended to work my way through it.
Got the offer, didn’t take it because that interviewer would’ve been my boss’ boss.
Unsurprisingly, did not get an offer.
In this case, if the interviewee had been radically honest and the interviewer had not had a backup question, the interview might have gone badly, even though the interviewee apparently was a good employee.
In fact, radical/extreme honesty is often seen as a sign of childhood problems or mild autism by mental health practitioners, as people normally learn to tell white lies at a young age.
That seems like a wild assumption to make.
Interviewers will usually tell you straight up to inform them if you already know the question.
GP is in fact, doing the employer a favour by bypassing this silly charade and letting them have access to his real talents.
But a good interviewer should also follow up on questions that can lead to trivia-type answers.
For example, if an interviewer asks you "What is the sum of the integers from 1 to 100", a valid answer from you could be
"Well we learned in school that Gauss figured out this problem at early age, and came up with the answer 5050 - so the answer is 5050"
In which case the interviewer has the options of
A) Conclude the question, and move onto something else
B) Try to get you to expand on this theory - and how Gauss came up with that
If he then goes with B, and you start explaining that (1 + 100) + (2 + 99) + ... + (50 + 51) equals 5050, he could ask you to come up with a more general form of this answer - i.e the sum of some arbitrary arithmetic sequence.
And if you then go on to expand on this, and arrive at the answer that the sum of 1..n is n(n+1)/2, then that is IMO very good for both parties.
The key here is that you actually walk them through the steps, and interact with the interviewer. Maybe they introduce constraints, or new rules.
One can take "easy" questions that you've seen thousands of times, and make them into interesting ones, where you spend some time communicating, and showcasing how you solve problems.
The point is not to arrive at "the sum is 5050", but rather to monitor how you think, and solve problems, with people around you.
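The pairing argument generalizes to any arithmetic sequence; a minimal sketch (`arithmetic_sum` is just an illustrative helper name):

```python
def arithmetic_sum(first, last, n):
    # Gauss pairing: n/2 pairs, each summing to (first + last).
    # Works for any arithmetic sequence with n terms.
    return n * (first + last) // 2

print(arithmetic_sum(1, 100, 100))  # -> 5050, the classic case
print(arithmetic_sum(1, 99, 50))    # odd numbers 1..99
```

The classic 1..100 case is just first=1, last=n, giving n(n+1)/2.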
This reminds me of another time on HN someone told a story about how they'd cheated in a hi-tech way in an exam and a dozen comments supported them, not one person suggesting it might be wrong...until me, then I got virulently attacked for doing so. At least there their comment got [dead]ed.
I just wanted to say - you're not alone, that also seems very dodgy to me. Moments like this HN is very disappointing.
But I also thought "quite unethical" was a bit silly (though I didn't downvote anyone). Perhaps people didn't like the tone. Also there could be a language issue with the degree implied by the word "quite". In stark black and white terms, it is unethical, yes, as a lie of omission. If there are shades of gray, it is not on the extreme to get lucky and have heard a question before.
I have also noticed there is a contingent of commenters who seem to view stories in terms of the victimized workers versus the evil corporations, and punish people who appear to take the wrong side. Probably a similar bunch shows up when the argument is about students versus teachers. Not necessarily a universal lack of ethics.
IMO, that's the perfect use of this thought experiment. In fact, I dare say if there are more examples like this, where laying out basic clear assumptions and following the model through its rules leads to an answer that is clearly wrong, then this example and those other examples of that type should probably make up a series of lectures in a course (or subsection thereof) on model use. It seems to me that plenty of people pick up on what models are and how to use them. It seems less people internalize, "All models are wrong. Some models are useful."
> The two capacitor paradox or capacitor paradox is a paradox, or counterintuitive thought experiment, in electric circuit theory.
> A paradox, also known as an antinomy, is a logically self-contradictory statement or a statement that runs contrary to one's expectation. It is a statement that, despite apparently valid reasoning from true premises, leads to a seemingly self-contradictory or a logically unacceptable conclusion.
(emphases mine). It's paradoxical in that it runs counter to expectations or intuition, using apparently valid reasoning. (Of course, maybe it doesn't run counter to your intuition; but that's how correct intuition is developed, by breaking naïve intuition against examples like this!) In fact, the article explicitly acknowledges this:
> Unlike some other paradoxes in science, this paradox is not due to the underlying physics, but to the limitations of the 'ideal circuit' conventions used in circuit theory.
However, I think my main point is still valid that this should not necessarily be called a paradox exactly because it differs a lot from how this term is commonly used in the naming of scientific thought experiments.
Perhaps it’s fair to say that all paradoxes are “just” part of a proof by contradiction and the only difference between ones that feel truly paradoxical and those that feel obvious is something subjective about a person’s understanding of the situation. Maybe it’s down to whether someone has enough of an understanding of the conceptual landscape to be able to actually recognize some of the problematic assumptions from the start of the problem statement or not, or maybe it’s something else, but either way I’m suspecting that it’s something subjective related to a person’s state of knowledge and/or approach to organizing their own knowledge.
- those which only contradict "common sense" intuitions/expectations, while reality conforms to the predictions of the theory rather than to "common sense"; these are usually not considered real paradoxes, like Pascal's hydrostatic paradox;
- those which show us the limitations of the models we use to construct them, like the two-capacitor paradox or the black hole information paradox;
- those true contradictions in the nature of reality which, when you discover them, make the universe disappear in a puff of logic and you wake up as a student in the middle of a physics lecture, and that student was Albert Einstein.
The difference between a model we know how to adjust to make more realistic and one we don't know how to adjust is a matter of the history of physics, not of physics itself.
It’s an apparent contradiction between our understanding of the meaning of the word and the actual usage of the word. The resolution of this paradox could be that there is widespread wrong usage of the word. Alternately it could be that our understanding of the meaning is wrong - which itself could be for multiple reasons. Perhaps our understanding of the definition is incomplete. Or it could be that the definition itself is simply wrong because it doesn’t reflect actual real-world usage.
Upon some reflection I think that a core part of the idea of a paradox is that there are many possible solutions to a problem and that some people, but almost certainly not all, will initially fail to even recognize the existence of solutions other than the one they expect, due to unconscious assumptions.
Naturally in many cases there will also be people who do not make those problematic assumptions to begin with, but that doesn’t on its own invalidate the use of the word “paradox” in that situation. The birthday paradox probably fits this definition just fine - ask anyone not trained in statistics about the subject and you’ll likely find a lot of people have unconscious assumptions that lead them to wrong answers, and in some cases thy will even be exceptionally confident in their wrong answers.
You might object to the “not trained in statistics” part but that objection itself would be circular - modern statistics education was designed with paradoxes like this in mind, specifically to lay down a foundation that doesn’t lead you down the wrong paths in cases like that. The solutions to the basic paradoxes of statistics are built into the way statistics is taught, just like the solutions to Gödel’s and Russell’s and many others’ paradoxes are built into the foundations of any modern logical system.
It is a fate of any solved paradox. A paradox is a confusing paradox only to a point when you learn where the contradiction comes from, and how it does it, and how to avoid it.
There is actually a small resistance in series with an inductor between the nodes. There’s a time varying voltage across those.
Edit: as a follow-on, the paradox is posed as a problem in electrostatics, when it’s actually a problem in electrodynamics.
My intuition is that there's a missing counter-action energy which would be required in order to maintain the initial state in such a circuit. Once the state change is being effectuated (switch closed), it would change the balance of action-counter-action which would lead to dissipation in the real world. Not sure how to quantify/formalize it.
In practice, you can get hundreds or even thousands of volts from inductors that way, which is how auto ignitions and boost-type switching power supplies work.
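The arithmetic behind that kick is just V = L·di/dt; with invented but plausible numbers:

```python
L = 0.01    # 10 mH, an assumed ignition-coil-scale inductance
di = 1.0    # interrupting 1 A of current
dt = 10e-6  # over an assumed 10-microsecond switch opening

# The inductor insists on its current; forcing it to change quickly
# produces a large voltage: V = L * di/dt.
V = L * di / dt  # about 1000 V from a modest coil
```

The faster the interruption (smaller dt), the bigger the spike, which is why a nominally 12 V ignition system can fire a spark plug.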
Similar problems come up in the idealized physics of impulse/constraint physics engines. Getting rid of the energy in collisions requires hacks to prevent things from flying off into space, a problem with early physics engines.
It's also compatible with barbegal's hydraulic pressure analogy elsewhere in the discussion. A liquid- or gas-filled tank feels pressure (i.e. force per area) on the walls as the fluid "wants" to either (1) flow to a lower gravitational potential energy location, for a liquid tank in a gravitational field, or (2) expand in volume, in the case of a pressurized gas tank.
Where does this charge come from?
2. The shorted capacitor will discharge some current into the short wire and some into the oscilloscope’s capacitor (probably wire capacitance).
3. Once you remove the short wire, some current will flow back into the capacitor under test to equalize the voltage.
That, or this is the "memory effect" I've heard about in certain dielectrics acted out.
For an infinitely small resistor the energy is effectively a spark pulse coupled directly to free space, and half is radiated away in EM waves.
Thanks for the great source!
Lee Hill, a popular instructor for EMC compliance trainings, has a saying about that: "There's a lot of money to be made understanding the circuit components that aren't captured in the schematic."
If you connect an ideal voltage source through a resistor R to a capacitor C, the amount of energy required to charge the capacitor is CV^2 while the amount of energy that winds up in the charged capacitor is 1/2 CV^2. The other 1/2 CV^2 is dissipated across R. This is unaffected by the value of R, even as R -> 0. The only thing that changes is the charge time.
R = 0 is impossible for real circuits, but no matter how close you get you still lose half of that energy in the resistor.
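A quick numerical check of that claim, with arbitrary component values, integrating i²R over the charging transient:

```python
import math

def dissipated(V, R, C, steps=200_000):
    # Charging current i(t) = (V/R) * exp(-t/(R*C)); integrate i^2 * R
    # out to many time constants with the midpoint rule.
    T = 20 * R * C
    dt = T / steps
    total = 0.0
    for k in range(steps):
        i = (V / R) * math.exp(-(k + 0.5) * dt / (R * C))
        total += i * i * R * dt
    return total

V, C = 10.0, 1e-6
half_cv2 = 0.5 * C * V * V         # 5e-05 joules
for R in (0.1, 1.0, 1000.0):       # the loss doesn't care about R
    print(R, dissipated(V, R, C))  # each ≈ 5e-05 J
```

Shrinking R just compresses the same loss into a shorter, hotter pulse.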
The system with 2 capacitors seems like a good analogy to a heat bath; when the switch is flipped the charge is divided equally between the capacitors when it reaches equilibrium. The entropy of the system has increased, and the potential to do work has decreased. The number of possible states the system can be identified in has decreased by half (it is not possible to know which side had the initial charge after the switch has been flipped). Flipping the switch back will not bring the charge back to only one side. While this line of thinking doesn't explain the physics of what is happening, there is clearly a (statistically) irreversible change going on which seems like the natural language is energy, entropy, and temperature.
If simple lumped element model really fails to model your circuit, there is distributed element model which can analyze all sorts of RF voodoo https://en.m.wikipedia.org/wiki/Distributed-element_model and https://en.m.wikipedia.org/wiki/Distributed-element_circuit
Even if you work by your idealization, saying wires have zero resistance means that every piece of connected conductor in the circuit is at the same potential (in the absence of a magnetic field) so saying you have two connected capacitors charged to different potentials is already violating your assumption.
You can find similar edge cases in almost every situation and field of study where you try to simplify things with an approximation, and most of them aren't called paradoxes.
There are many alternatives. A good start is noting that the inefficiency is actually lower the lower the starting voltage difference:
V1 = V, V2 = V-dV
V' = (V + (V-dV))/2 = V-dV/2
We can define efficiency as the ratio of energy lost by the first to energy transferred to the other.
dU1 = U1-U1' = CV^2/2-CV'^2/2 = C/2 (V dV-dV^2/4)
dU2 = U2'-U2 = CV'^2/2-C(V-dV)^2/2 = C/2 (-V dV+dV^2/4+2VdV-dV^2) = C/2 (V dV - 3dV^2/4 )
n = (dU2/dU1) = ( V - 3dV/4 ) / ( V - dV/4 ) ~ 1 - dV/(2V) (for small dV)
as dV → 0, n → 1. You lose about half of the fractional difference in efficiency, so for a 10% difference your efficiency is ~95%, pretty great.
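Checking those expressions numerically (C = 1 for convenience; `transfer_efficiency` is just an illustrative helper):

```python
def transfer_efficiency(V, dV, C=1.0):
    # Equalize a capacitor at V with one at V - dV; both end at V - dV/2.
    Vp = V - dV / 2
    U = lambda v: 0.5 * C * v * v
    dU1 = U(V) - U(Vp)        # energy given up by the fuller capacitor
    dU2 = U(Vp) - U(V - dV)   # energy received by the emptier one
    return dU2 / dU1

print(transfer_efficiency(1.0, 0.1))  # ~0.949, close to 1 - dV/(2V) = 0.95
print(transfer_efficiency(1.0, 1.0))  # 1/3 for a full discharge
```

The second case is the classic paradox setup: equalizing with a fully discharged capacitor only delivers a third of what the source capacitor gives up.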
Now, there are indeed devices that change voltages without energy losses! (transformers, for example). So if you plug in a variable transformer that keeps the voltage close to the target, your (dis)charging efficiency can be arbitrarily high.
Of course, if your second voltage is 0, the efficiency must start at 1/3 no matter what (which can seem to imply this cannot be changed) -- but as soon as you have a small voltage you can start tracking it and keep efficiency high.
Challenge to the reader: Use quantum mechanics and thermodynamics to derive a fundamental limit of efficiency (which must be less than 1 at positive temperatures) :)
Why did someone mod you down dead?
- quote -
It's a quadratic function of the voltage: W = CV^2
let's say initially:
C = 1
V = 16
W = 1 * 16^2
W = CV^2 + CV^2
V = 8
W = (1 * 8^2) + (1 * 8^2)
(I think the difficult part isn't to accept that some of the energy vanished from the system, it just contradicts our expectation from having supposedly no dissipation elements)
Edit: also when I think about it there is a little bit of additional energy in the open switch which is itself a capacitor.
Edit 2: for this circuit to stay the way it is in the initial state, wouldn’t the open switch need to have equal capacitance to the capacitors? Or some kind of voltage generating field applied across it that is removed when the switch is closed?
And why reversible computing can be done without using energy ;)
--------| |---------------| |----------
--- +6 -6 ------------ 0 0 -----
^^^^^^^^^^^^^^^^ NOT POSSIBLE!
Your diagram is wrong. Let's mark out some contact points:
---| |-------| |---
   A B       C D
So when you measure A at 12 volts above B, it doesn't mean B is at any absolute voltage. If only one capacitor is charged, then A can be 12 volts above B while B=C=D. It works out fine.
If you attach ground to B or C, then A is 12 volts relative to ground, and B and C and D are all 0 volts relative to ground.
If you attach ground to A, then A is 0 volts relative to ground, B and C and D are all -12 volts relative to ground.
If you somehow attach ground to the midpoint of A and B, then A is 6 volts relative to ground, and B and C and D are all -6 volts relative to ground.
None of these are impossible. Nothing goes wrong until you try to close the switch.
You have an ideal capacitor, charged to some voltage. You connect both ends, which empties it.
Where did all the energy go?
The answer is that there's no such thing as an ideal capacitor or an ideal wire. There are a few effects you can't avoid, but most importantly there are tiny bits of internal resistance that usually go ignored. In a short circuit, they become important, and will turn all the energy into heat.