
No, this is completely and utterly wrong. Entropy is not a function of knowledge.

Two people having different levels of knowledge of a system does not mean the system has two different entropy values. Even if I knew the exact position of all atoms in a cup of water, the temperature of that water does not change due to that knowledge.

Entropy does rely on your chosen configuration of macrostates and microstates. Temperature is an arbitrary choice of macrostate.



> Even if I knew the exact position of all atoms in a cup of water, the temperature of that water does not change due to that knowledge.

It actually does! You would disagree with the other person about the temperature of that water. But I agree that this is admittedly not obvious at first.


No it does not. The thermometer does not change based on my knowledge or opinion.


A thermometer doesn't measure temperature any better than a meterstick measures length. And we all know what Einstein had to say about the relativity of metersticks.

To paraphrase from the paper I linked in another reply to you, a thermometer is just a heat bath equipped with a pointer which reads its average energy, whose scale is calibrated to give the temperature T, defined by 1/T = dS/d<E>.

You can read the thermometer if you like, but if you know the exact microstate of the water to begin with, the thermometer reading will tell you much less than you already knew about the water. And precise knowledge of the water's microstate will (theoretically) allow you to extract much more work from that water than you would be able to with only the thermometer reading.


But entropy does not change with this knowledge.


You seem pretty convinced. Let me see if you're talking about the same pedantic distinction that oh_my_goodness was.

A: "an urn containing either a white ball or a black ball".

B: "I notice that the ball in the urn A is white".

I would say that initially the entropy of our ball-urn system is 1 bit, and that with observation B, we have reduced the entropy of our ball-urn system to 0 bits.
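
A minimal sketch of that calculation, in Python (my own illustration, not anything from the thread):

    import math

    def entropy_bits(probs):
        # Shannon entropy H = -sum(p * log2(p)), skipping zero-probability outcomes
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # A: the ball is white or black, equally likely as far as we know
    print(entropy_bits([0.5, 0.5]))  # 1.0 bit

    # B: we looked and the ball is white
    print(entropy_bits([1.0, 0.0]))  # 0.0 bits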

But if you are going to take the view that even knowing the ball in this particular urn is actually white doesn't change the fact that the entropy of <<"an urn containing either a white ball or a black ball">> is 1 bit and not 0, and that that's the entropy that we're discussing, then I won't argue about it any further.

Except for the obligatory xkcd: https://xkcd.com/221/


No I'm not saying that.

The entropy of the system was always 0 bits. Knowledge is irrelevant.

If the urn actually contained nothing and would materialize a black or a white ball randomly, then this can occur with or without your knowledge. When the ball materializes and nothing more can be done, THEN the entropy has changed, because there are no more possible microstates.

You not having knowledge about microstates DOES NOT change available microstates. You seem to think that if you don't know about something, anything goes.

You're really arguing abstract philosophy. Did a tree in the forest fall if no one was around to see it? Yes it did dude. Your knowledge of it has nothing to do with whether it fell. Same with entropy. And if you deny that the tree actually fell, then you're the one going off on a pedantic tangent.


I'm surprised to hear that "the entropy of the system was always 0 bits."

Let's say that the urn contains a ball that changes its color from black to white and vice versa every thousand years (relative to January 1, 1970 midnight UTC).

Given that "macrostate" there are two possible and equally probable "microstates". The entropy is positive. If I look into the urn and find that the ball is white was the entropy of the system always zero? Or is it always positive in this example?


Always positive, based on our choice of macrostate.


Ok, that’s at least more coherent with the other things that you wrote.


Don't appreciate that comment at all. Rude.

What I'm seeing in your other reply is actually you not even reading my reply. You're making statements on things I already touched upon.

It makes you seem not intelligent. But that would be a rude thing to say, wouldn't it? There's no point. If you want to have a discussion, it's probably smart to say things that will keep the other person engaged rather than pissed off.

This is actually bad enough that I demand an apology. Say you're sorry, genuinely, or this thread won't continue and you're just admitting you're mean.


I'm sorry, I really didn't intend any offense.

I really mean what I wrote: that answer is consistent with other things that you wrote indicating that you view the macrostate as a theoretical collection of microstates which is defined under some assumptions which may be detached from what is known about the state of the system.

So you think that it's still somehow meaningful to refer to the macrostate "the ball may be black or white" and the associated entropy even if the color of the ball is known.

The coherence is in discarding the knowledge of the microstate / the future outcomes of the die / the color of the ball and claiming the original models are still valid (which they may be for some purposes - when that additional knowledge is irrelevant - but not for others).

If you had said "the macrostate I originally chose becomes meaningless because I know the microstate and the entropy is zero now [or was zero all along, as in your previous comment]" it would have been less coherent with the rest of your discourse - and it would have merited a longer reply.


>So you think that it's still somehow meaningful to refer to the macrostate "the ball may be black or white" and the associated entropy even if the color of the ball is known.

Yes. A probability distribution can still be derived from a series of events EVEN when the outcomes of those exact events are known prior to their actual occurrence.

I believe this is the frequentist viewpoint as opposed to the Bayesian viewpoint.

The argument comes down to this, as entropy is really just a derivation from probability and the law of large numbers.


I would say that - in this particular example - knowing that the ball is white and will remain white for the next 950 years and will then be black for the next 1000 years, etc. makes the macrostate "the ball may be black or white" irrelevant.

However, I agree that one can still define the macrostate as if the color was unknown - and make calculations from it. (It's just that I don't see the point - it doesn't seem a good or useful description of the system.)


Probability is invariant to the arrow of time.

If I can look at data collected from the past and generate a probability distribution, why can't I look to the future and do the same thing? Even with knowledge of the future you can still count each event and build up a probability from future data.

Think of it this way. Nobody looks at a past event and says that the probability of that past event was 100%. Same with a future event that you already know is going to occur. The probability number is communicating frequency of occurrence across a large sample size. Probability is ALSO independent of knowledge from the frequentist viewpoint. Entropy is based on probability so it is based on this concept. Knowledge of a system doesn't suddenly reduce entropy because of this.


That's not the only definition of probability - and it's a very limiting one.

Statistical mechanics is based on the probability of the physical state now and not on the frequency of physical states over time. Sometimes we can assume a hypothetical infinite time evolution and use averages over that, but a) it's just a way of arriving at the averages over ensembles that are really the object of interest, b) the theoretical justification is controversial and in some cases invalid, and c) it doesn't make sense at all in non-equilibrium statistical mechanics.

Don't be offended, but "Nobody looks at a future event that he already knows is going to occur and says that the probability of that future event is 100%" is a very strange thing to say. Everybody does that! Ask astronomers what's the probability that there is a total solar eclipse in 2024 and they answer 100%, for example.


>Don't be offended, but "Nobody looks at a future event that he already knows is going to occur and says that the probability of that future event is 100%" is a very strange thing to say.

Not offended by your words but I am offended by the way you think. Maybe rather than assuming I'm wrong and you're superior, why don't you just ask questions and dig deeper into what I'm talking about.

Probability is a mathematical concept separate from reality. Like any other mathematical field it has a series of axioms, and the theorems built on the axioms are independent of our typical usage of it in applied settings.

Just because it's rare for probability to be used on future events that are already known doesn't mean the math doesn't work. We tend to use applied math, specifically probability, to predict events that haven't occurred yet, but the actual math behind probability is time invariant. It can be applied to events that have ZERO concept of time. See Ulam spirals. In Ulam spirals prime numbers have a higher probability of appearing at certain coordinates. This probability is determined independent of time. WE have deterministic algorithms for calculating ALL primes. Yet we can still write out a probability distribution. Probability still has meaning EVEN when the output is already known.

That means I can look at a series of known past events and calculate a probability distribution from there. I can also look at a series of known future events and do the same thing. I can also look at events not involving time like where prime numbers appear on a spiral and calculate a distribution. Just look at the math. All you need are events.

English and traditional intuitions around probability are distractions from the actual logic. You used an English sentence to help solidify your point but obviously our arguments are way past surface intuitions and typical applied uses of probability.

Look up the frequentist and Bayesian interpretations of probability. That is the root of our argument. You are arguing for the Bayesian side, I am arguing for the frequentist side.


I'm aware that there are different interpretations of probability. I said as much in the first line of my previous message!

You may be content with an interpretation restricted to talking about frequencies. I prefer a more general interpretation which can also - but not exclusively - refer to frequencies.

Even from a frequentist point of view I find perplexing your suggestion that nobody says that the probability of something is 100% when they are able to predict the outcome with certainty.

Probability may be a mathematical concept separate from reality, but when it's applied to say things about the real world not all probability statements are equally good - just like the "moon made of cheese" model is not as good as the alternatives even if it's mathematically flawless. This has nothing to do with Bayesian vs frequentist, by the way; empirical frequencies are not mathematical concepts separated from reality.

It's a perfectly frequentist thing to do to compare a sequence of probabilistic predictions to the realised outcomes to see how well-calibrated they are.

The astronomer that predicts P(total solar eclipse in 2023)=0%, P(t.s.e. 2024)=100%, P(t.s.e. 2025)=0%, P(t.s.e. 2026)=100%, etc. will score better than one who predicts P(t.s.e. 2023)=P(t.s.e. 2024)=P(t.s.e. 2025)=P(t.s.e. 2026)=2/3 or whatever is the long run frequency.

A weather forecaster that looks at satellite images will score better than one that predicts every day the average global rainfall. Being a frequentist doesn't prevent you from trying to do as well as you can.
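
To make the scoring concrete, here is a toy sketch (Python, my own illustration; the Brier score is one standard way to score such forecasts, and the eclipse numbers are the hypothetical ones above):

    def brier_score(forecasts, outcomes):
        # mean squared difference between forecast probability and what actually happened
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(outcomes)

    outcomes  = [0, 1, 0, 1]              # t.s.e. in 2023..2026, as in the example above
    informed  = [0.0, 1.0, 0.0, 1.0]      # the astronomer who computed the dates
    base_rate = [2/3, 2/3, 2/3, 2/3]      # only the long-run frequency

    print(brier_score(informed, outcomes))    # 0.0 (lower is better)
    print(brier_score(base_rate, outcomes))   # ~0.28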

I did already agree that you _can_ keep your model where the millenary-change ball may be either white or black and make calculations from it. (It just doesn't seem to me a good or useful description of that system once the precise state is known. You _can_ also change the model when you have more information and the updated model is objectively better. I think we will agree that from a frequentist point of view predicting a white ball with 100% probability and getting it right every time is more accurate than a series of 50%/50% predictions. And the refined model can calculate the loooooong-term frequency of colours just as well.)


>It's a perfectly frequentist thing to do to compare a sequence of probabilistic predictions to the realised outcomes to see how well-calibrated they are.

Yeah but it's a philosophical point of view. The Bayesian sees this calibration process as the probability changing with more knowledge. The frequentist sees it exactly as you described: a calibration... a correction on what was previously a more wrong probability.

>A weather forecaster that looks at satellite images will score better than one that predicts every day the average global rainfall. Being a frequentist doesn't prevent you from trying to do as well as you can.

Look at it this way. Let's say I know that for the next 100 days there will be sunny weather every day except for the 40th day, the 23rd day, the 12th day, the 16th day, the 67th day, the 21st day, the 19th day, the 98th day, and the 20th day. On those days there will be rain.

Is there any practicality if I say in the next 100 days there's a 9% chance of rain? I'd rather summarize that fact than regurgitate that mouthful. The statement and usage of the worse model is still relevant and has semantic meaning.
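
A sketch of that summary versus the full schedule (Python, my own illustration):

    import math

    rainy_days = {40, 23, 12, 16, 67, 21, 19, 98, 20}   # the days listed above
    p_rain = len(rainy_days) / 100
    print(p_rain)  # 0.09, i.e. the 9% summary

    # Entropy per day under the summary model vs. under the exact schedule
    h_summary = -(p_rain * math.log2(p_rain) + (1 - p_rain) * math.log2(1 - p_rain))
    print(h_summary)  # ~0.44 bits; the exact schedule leaves 0 bits of uncertainty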

This is an example of deliberately choosing a model that is objectively worse than the previous one, but it is chosen because it is more practical. In the same way we use approximate models, entropy is the same thing.

Personally I think this is irrelevant to the argument. I bring it up because associating the mathematical definitions of these concepts with practical daily intuition seems to help your understanding.


> The bayesian sees this […] as the probability changing with more knowledge. The frequentist sees it as […] a correction on what was previously a more wrong probability.

Ok, so the Bayesian starts with one probability (the best available) and ends with another probability (the best available). While the frequentist ends with two probabilities - one more right and one more wrong. Good enough for this discussion, it makes clear that frequentists are also able to use the “more right” probability instead of the “more wrong” probability when they know more. (Of course they can also keep using the “more wrong” probability - we agree that both options exist.)

> The statement and usage of the worse model is still relevant and has semantic meaning.

Sure. But the meaning no longer includes “as far as we know” as it would be in the absence of other knowledge. It’s still relevant but not as much as before. And I still wonder if you really claimed that _nobody_ would say “there is 0% probability of rain in the next ten days” if they knew it with certainty - or maybe I misread your comment.


> it makes clear that frequentists are also able to use the “more right” probability instead of the “more wrong” probability when they know more.

I shouldn't have used the word "more wrong." A more accurate model is the better term. Similar to how general relativity is more accurate than classical mechanics.

>Sure. But the meaning no longer includes “as far as we know”

The definition of any model, including entropy, doesn't include "as far as we know." And usage of any model, less accurate or more accurate, isn't exclusively based on that phrase.

Less accurate models are used all the time and they summarize details that the more accurate models fail to show. In fact all of analytics is based on aggregations, which essentially lose detail and accuracy as data is processed. But the output of this data, however less fine-grained, summarizes an "average" that is otherwise invisible in the more detailed model.

Entropy is the same thing, it is a summary of the system. Greater knowledge of the system doesn't mean you discard the summary as useless.


> Even if I knew the exact position of all atoms in a cup of water, the temperature of that water does not change due to that knowledge.

If you knew the exact position of all atoms in a cup of water you wouldn't assign any temperature to it. Not a thermodynamic temperature at least.


The number of microstates does not change, even if you KNOW that the cup of water is in a specific microstate.

The Boltzmann equation is based on total accessible microstates.


"accessible" means something only given a set of constraints.

Like the temperature, if you keep the temperature of the water fixed. And the number of molecules, if instead of a cup you have a closed container to prevent it from evaporating. Then what you have is water at some temperature that you control. And you could have the water at a different temperature with exactly the same microstate.

Or imagine gas at some fixed temperature within a cylinder with one movable wall. If you knew the location of every molecule of the gas it wouldn't make sense to talk about its pressure - you could compress it (reducing the number of accessible microstates) without doing any work.

Edit: In summary, thermodynamics loses its meaning if you know the microstate and can act on that knowledge.


>it wouldn't make sense to talk about its pressure -

If I have a pressure gauge that reads the same thing regardless of my knowledge, how is pressure meaningless? The tool that reads pressure gives me an accurate pressure number regardless of what I know or don't know. This number is correct.

Your argument is basically saying that the pressure gauge becomes wrong once you have more knowledge of the system. No it doesn't. The pressure gauge is still giving you a number defined as "pressure."

The gas in that cylinder is in a specific microstate within the macrostate defined as pressure.


> The pressure gauge is still giving you a number defined as "pressure."

As long as you define “pressure” as “the reading of the manometer” and not as “the variable that together with temperature specifies the state of the gas and measures the quantity of energy required to compress it further”.

Thermodynamics is based on state variables giving a complete description of the system. Statistical mechanics is based on looking at the ensemble of microscopic descriptions possible given what is known about the system and their probabilities.

If all you know is a handful of thermodynamic variables that ensemble is huge. If you know already the microscopic description of the physical system your ensemble has one single possible configuration in it.

As in jbay808’s xkcd example, if you have a random number generator and you know the sequence of numbers that will be generated, do you have a random number generator? The random number generator is still giving you a number defined as “random”, right?

I guess that it’s still random if you “forget” that you know it in advance and that the macrostate is still meaningful as a complete description of the physical system if you “forget” that you have a perfect knowledge of its state.

Edit: the GPS receiver in my phone is giving me some coordinates defined as “position” that happen to be in the middle of the road. However, I know precisely where I am. Don’t you think that the meaning of that “position” is somehow affected by this additional information?


>Edit: the GPS receiver in my phone is giving me some coordinates defined as “position” that happen to be in the middle of the road. However, I know precisely where I am. Don’t you think that the meaning of that “position” is somehow affected by this additional information?

No it is not affected by it. The meaning of position is never changed. Your knowledge of your position can change, but your actual position exists regardless of your knowledge or inaccuracies of your tools.

>As in jbay808’s xkcd example, if you have a random number generator and you know the sequence of numbers that will be generated, do you have a random number generator? The random number generator is still giving you a number defined as “random”, right?

Random number generators are a rabbit hole. There's not even a proper mathematical definition for it. We're not sure what a random number is... we just have an intuition for it. Case in point, the xkcd comic could not define it mathematically. This is the reason why the joke exists: because we're not even truly sure what it is or if random numbers are a thing. We have intuition for what a random number is, but this is likely some kind of illusion similar to the many optical illusions produced by our visual cortex. If formalization of our intuitions is not possible, then there is a likelihood that the intuition is not even real.

>Statistical mechanics is based on looking at the ensemble of microscopic descriptions possible given what is known about the system and their probabilities.

ok take a look at this: https://math.stackexchange.com/questions/2916887/shannon-ent...

They're talking about deriving the entropy formula for fair dice. But they talk about it as if we don't have knowledge about physics, momentum and projectile motion. We have the power to simulate the dice in a computer simulation and know the EXACT outcome of the dice. The dice is a cube and easily modeled with mathematics. So then why does the above discussion even exist? What is the point of fantasizing about dice as if we have no knowledge of how to mechanically calculate the outcome? The point is they chose a specific set of macrostates that have uniform distribution across all the outcomes. It is a choice that is independent of knowledge.
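
For what it's worth, the fair-die entropy under that uniform choice of macrostate is just this (a minimal Python sketch of my own):

    import math

    # Macrostate chosen so that all six faces are equally probable
    probs = [1/6] * 6
    entropy_bits = -sum(p * math.log2(p) for p in probs)
    print(entropy_bits)  # log2(6) ≈ 2.585 bits per throw

    # The same choice of macrostate - and the same number - whether or not a
    # physics simulation could tell us the exact outcome in advance.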


Thanks for your reply!

You didn't address the first line in my comment about the definition and meaning of "pressure" so maybe we actually agree.

To elaborate a bit, one may define "pressure" as the reading of a device that measures its exchange of momentum with the particles of gas averaged over time. The last bit is important because those microscopic impacts are discrete events. If we know [in a classical mechanics framework] the state of every particle in the gas we can predict when they will happen - and successfully calculate the (averaged) "pressure" measurement.
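
A rough numerical sketch of that time-averaged picture (Python; the helium-like numbers are assumptions of mine, just for illustration):

    import math, random

    k_B = 1.380649e-23   # J/K
    m   = 6.6e-27        # kg, roughly one helium atom
    T   = 300.0          # K
    N   = 100_000        # particles (scaled down; only the per-particle average matters)
    V   = 1e-3           # m^3

    # x-velocities sampled from the Maxwell-Boltzmann distribution
    sigma = math.sqrt(k_B * T / m)
    vx = [random.gauss(0.0, sigma) for _ in range(N)]

    # Time-averaging the momentum transferred to a wall gives P = N m <vx^2> / V
    mean_vx2 = sum(v * v for v in vx) / N
    p_kinetic = N * m * mean_vx2 / V
    p_ideal   = N * k_B * T / V
    print(p_kinetic, p_ideal)   # agree up to sampling noise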

However, one may also define and interpret "pressure" as a variable that - together with volume and temperature - completely characterizes the behaviour of an ideal gas in equilibrium. But if we have a precise knowledge of the physical state we could in principle do impossible things - like compressing the gas without effort or creating a temperature gradient.

If we have a fish contaminated with mercury and the concentration of 0.01% characterizes completely its toxicity we won't eat it. If we also know that the mercury is only on the surface we won't eat it either, but in principle we could if we are careful. The content of mercury in the fish remains the same although the meaning of that number changes - but of course if we're a bear unable to clean our fish the additional information doesn't change anything at all.

> They're talking about deriving the entropy formula for fair dice. But they talk about it as if we don't have knowledge about physics, momentum and projectile motion. We have the power to simulate the dice in a computer simulation and know the EXACT outcome of the dice. The dice is a cube and easily modeled with mathematics. So then why does the above discussion even exist? What is the point of fantasizing about dice as if we have no knowledge of how to mechanically calculate the outcome? The point is they chose a specific set of macrostates that have uniform distribution across all the outcomes. It is a choice that is independent of knowledge.

I can make a model where the moon is made of cheese. That model is independent of any knowledge about the true nature of the moon. But if I visit the moon and find that - surprisingly! - it's made of lunar rock I may re-evaluate the pertinence of that model.

The model where all the outcomes of the die are equally likely is particularly useful when all the outcomes of the die are equally likely. If you have no additional knowledge - apart from the number of outcomes - you have no reason to prefer one outcome to another. All of them are equally likely - to you. You can calculate the entropy of one event assuming that there are six equally-probable possible outcomes.

If I know exactly the future outcomes of the die - 4, 2, 5, 1, ... - I can also calculate the entropy of each event assuming that there is one single possible outcome that will happen with certainty. You have one model. I have one model. Are all models created equal? If we play some game you'll painfully realize that my model was better than yours - or at least you'll believe that I'm incredibly lucky.


All mathematical formulas representing physical phenomena are called models. Some models are more accurate than other models.

Entropy is one such model. The mathematical input parameter that goes into this model is a macrostate. We are also fully aware that the model is an approximation, just like how we're aware Newtonian mechanics and probability itself are approximations.

If you feel entropy is too vague a description, then you can choose to use another model for the system. One with billions of parameters that can record the exact state of the system. Or you can use entropy, which has its uses just like how classical mechanics still has uses.


Ok, we agree then. Models may or may not represent a physical reality. They may be in conflict with reality - as in "the moon made of cheese". They may be incomplete - as in "the fish is 0.01% mercury". Those inaccuracies may or may not have practical relevance. Fundamentally it makes a difference though. In principle, someone with a better model of the die can consistently win bets contradicting the predictions of the "fair die" model and someone with a better model of the gas can do things forbidden by the "entropy is a measure of the energy unavailable for doing useful work" interpretation.

To reconcile those views in the context of your first comment: "Entropy is not a function of knowledge."

Entropy is a function of the macrostate. The macrostate is defined by state variables (the constraints on the system). Those state variables represent what is known about the system. Given P1, T1 we calculate S(P1, T1). Given P2, T2 we calculate S(P2, T2). The entropy obviously changes with our knowledge in the sense that if we know that the pressure is P1 and the temperature is T1 we calculate one value, and if we know that the pressure is P2 and the temperature is T2 we calculate a different value. If we don't know P and T we cannot calculate _one_ "entropy value" for the system at all because the corresponding macrostate is not defined.

"Two people with varying and different levels of knowledge of a system does not mean the system has two different entropy values."

What is the “entropy value of the system”?

Imagine that the system is composed of two containers with equal volumes of an ideal gas at the same temperature and pressure that are then put together - the volume is now the sum of the volumes, the pressure and temperature don’t change.

Alice can calculate S1 and S2 and the final entropy is SA=S1+S2.

Bob knows something that Alice ignores: that it was hydrogen in one container and helium in the other. They will mix and he can calculate that in the end SB>S1+S2.
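
A hedged numerical sketch of the difference (Python; the placeholder values of S1 and S2 are assumptions of mine, and the standard entropy-of-mixing term is what makes Bob's total larger):

    import math

    R = 8.314          # J/(mol K)
    n = 1.0            # mol of gas in each container (assumed, for illustration)
    S1 = S2 = 100.0    # J/K, placeholder entropies of the separate containers

    # Alice: as far as she knows it's the same gas, so joining changes nothing
    S_alice = S1 + S2

    # Bob: hydrogen in one container, helium in the other; mixing two different
    # ideal gases in equal amounts adds the entropy of mixing
    x1 = x2 = 0.5
    dS_mix = -(2 * n) * R * (x1 * math.log(x1) + x2 * math.log(x2))   # = 2nR ln 2
    S_bob = S1 + S2 + dS_mix

    print(S_alice, S_bob)   # Bob's value is larger by about 11.5 J/K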

What is the “entropy value of the system”? It seems to be more a property of the description of the system than of the system itself.

I'll say more about that in a reply to https://news.ycombinator.com/item?id=31201129 (somehow I've missed that comment until now)


>What is the “entropy value of the system”? It seems to be more a property of the description of the system than of the system itself.

Yes. That is what entropy is as defined.

>If we don't know P and T we cannot calculate _one_ "entropy value" for the system at all because the corresponding macrostate is not defined.

If the input is the macrostate, and you don't know the macrostate, then you can't calculate the value. That's pretty basic and this applies to ANY model. If you don't know the input variables, you can't calculate anything. Nobody talks about mathematical models this way. This applies to everything.

I don't think you picked up on my model argument either. You seem to think you made progress on us agreeing that entropy is a "model." I'm saying every single math formula representing physical phenomena on the face of the earth is a "model." Thus it's a pointless thing to bring up. It's like saying all mathematical formulas involve math. If entropy uniquely has a parameter called knowledge that affects its outcome, citing properties universal to everything doesn't lend evidence to your case.

Let's "reconcile" everything:

You're implying that there is some input parameter modeled after knowledge, and that input parameter affects the outcome of the entropy calculation. I am saying no such parameter exists. Now you're saying that knowledge of the input parameter itself is what you're talking about. If you don't know the input parameter you can't perform the calculation.

The above is an argument for everything. For ANY model on the face of the earth, if you don't know the input parameters you can't derive the output. Entropy is not unique in this property, and obviously by implication we're talking about how you believe entropy is uniquely relative to knowledge.

>Alice can calculate S1 and S2 and the final entropy is SA=S1+S2.

Who says you can add these two entropies together? S1 and S2. The macrostates are different, and mixing the two gases likely produces a third unique set of macrostates independent of the initial two.


> You seem to think you made progress on us agreeing that entropy is a "model.

I thought we had agreed that entropy is something you calculate with a model, in fact.

> You're implying that there is some input parameter modeled after knowledge.

I was trying to say that the inputs to S(...) are the things that we know because we did measure them or set their values. It seems that we agree on that because it's extremely obvious.

Hopefully we also agree that if there are other relevant things that we know in addition to the inputs to that model we could refine our model. I fully acknowledge that we may choose to ignore the additional knowledge and keep using the old model - and it may be good enough for some uses. (We may also choose to incorporate the additional knowledge. Maybe it rules out some microstates and we could be using a smaller set of microstates to represent what we know about the system.)

When all we know is the macrostate, the macrostate is the most detailed description - and gives the most precise predictions - available to us regarding the system. However, if we know more the original macrostate is no longer "complete". Because we do know - and we can predict - more precise things. There is a fundamental change from "the macrostate represents all we know and is the basis of everything we can predict" to "not the case anymore".

Which also seems obvious. Probably we agree on that as well! (Sure, it applies to everything. Anytime one ignores information one has a suboptimal model compared to the model one could have. The improved model may or may not be better for a particular purpose.)

> Who says you can add these two entropies together? S1 and S2.

Alice, who considers two equal volumes of an ideal gas at the same temperature and pressure.

> The macrostates are different

They were the same in my example. Same volume. Same temperature. Same pressure.

> and Mixing the two gases likely produces a third unique set of macrostates indpendent of the initial two

For an ideal gas, doubling the volume and the number of particles (so the pressure remains the same for a fixed temperature) doubles the entropy. If you have two identical systems the total entropy doesn't change when you put together the two containers, resulting in a single container twice as large with twice as many particles.
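
One can check that numerically with the Sackur-Tetrode formula for a monatomic ideal gas (my own sketch; the specific N, V, T values are assumptions for illustration):

    import math

    k = 1.380649e-23    # J/K
    h = 6.62607015e-34  # J s
    m = 6.6e-27         # kg, roughly helium
    T = 300.0           # K

    def sackur_tetrode(N, V):
        # S = N k [ ln( (V/N) * (2 pi m k T / h^2)^(3/2) ) + 5/2 ]
        thermal = (2 * math.pi * m * k * T / h**2) ** 1.5
        return N * k * (math.log((V / N) * thermal) + 2.5)

    N, V = 1e22, 1e-3
    print(sackur_tetrode(2 * N, 2 * V) / sackur_tetrode(N, V))  # 2.0: twice N and V, twice S

Doubling both N and V just doubles S, so joining two identical containers adds nothing beyond the sum of the two entropies.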

If you thought that the number of microstates - and the entropy - increases when you bring together two identical systems because they will mix with each other, that's not correct. (Even though there are still debates about this issue 120 years later.)

https://en.wikipedia.org/wiki/Gibbs_paradox

The entropy would increase however if they are different ideal gases (it doesn't matter how different). Bob - who knows that they are different - would calculate the correct entropy.

It could be the other way. Maybe they're actually the same gas but Bob treats them as different because he isn't aware and keeps the general case. He calculates an increase in entropy due to the mixing. While for Alice, who knows that they are the same gas, the total entropy hasn't changed.

As Maxwell wrote: "Now, when we say that two gases are the same, we mean that we cannot separate the one from the other by any known reaction. It is not probable, but it is possible, that two gases derived from different sources but hitherto regarded to be the same, may hereafter be found to be different, and that a method be discovered for separating them by a reversible process."

If we think that the two gases are the same the entropy is 2S but if we discover later a way to tell apart one from the other the entropy is higher (there are more microstates for the same macrostate that we thought initially).


>I thought we had agreed that entropy is something you calculate with a model,

We did agree. I never said otherwise. Where are you getting this idea? I'm saying our agreement on this fact is useless. Why don't you actually fully read what I wrote.

>I was trying to say that the inputs to S(...) are the things that we know because we did measure them or set their values. It seems that we agree on that because it's extremely obvious.

I spent paragraphs remarking on this ALREADY. I get what you're saying. You're not even reading what I wrote. Every mathematical model has this property you describe. It is not unique to entropy. If you don't know the parameters of even the Pythagorean theorem, then you can't calculate the length of the hypotenuse. Does this mean the Pythagorean theorem depends on your knowledge of the system? Yes, but kind of a pointless thing right? If this is the point you're trying to make, which I highly doubt, then why are we focusing only on entropy? Because knowledge of any system is REQUIRED for every single mathematical model that exists, or the model is useless.

I don't think you're clear about the argument either. If you're not talking about knowledge as a quantifiable input parameter then I don't think you're clear about what's going on.

>I fully acknowledge that we may choose to ignore the additional knowledge and keep using the old model - and it may be good enough for some uses.

Entropy is used with full knowledge that it's a fuzzy model. It's based on probability. It doesn't matter if we "ignore" or don't know the additional properties of the model. The model doesn't incorporate that data regardless of whether that information is known or not known.

>They were the same in my example. Same volume. Same temperature. Same pressure.

No. The Boltzmann distribution changes with gas type as well. The models are different.

>For an ideal gas doubling the volume and the number of particules

In this case yes. But only for an ideal gas. I don't recall if you mentioned the gases were both ideal. Let me check. You did mention it. But then you mention the gases are different. Hydrogen and helium. Neither gas is technically ideal, and the quantum mechanical effects would likely influence the Boltzmann distribution when mixed. There are contradictions in your example that make it not clear.

>https://en.wikipedia.org/wiki/Gibbs_paradox

The article you linked explains it away. It's the choice of macrostates that affects the entropy outcome. The article says it's subjective in the sense that it's your choice of macrostates. The macrostates don't change based on your knowledge. You choose the one you want.


>>I thought we had agreed that entropy is something you calculate with a model,

>We did agree. I never said otherwise. Where are you getting this idea? I'm saying our agreement on this fact is useless. Why don't you actually fully read what I wrote.

It was a minor correction. I wouldn't say that entropy is a "model". But essentially we agree, that's what I meant. We agree that we agree!

>> I was trying to say that the inputs to S(...) are the things that we know because we did measure them or set their values. It seems that we agree on that because it's extremely obvious.

> I spent paragraphs remarking on this ALREADY.

Again, I was stressing that we had also reached a clear agreement on that point. (Except that I don't know what you mean by me implying something about "some input parameter modeled after knowledge" if every input corresponds to knowledge and that's a pointless thing to discuss.)

> The Macrostates don't change based off your knowledge. You choose the one you want.

And if you want, you can choose a new one when your knowledge changes! One that corresponds to everything you know now about the physical state. Then you can do statistical mechanics over the ensemble of states that may be the underlying unknown physical state - with different probabilities - conditional on everything you know. In principle, at least.

[it was an interesting discussion, anyway]


>Again, I was stressing that we had also reached a clear agreement on that point.

And I'm stressing the agreement was pointless and even bringing up the fact that entropy is a model doesn't move the needle forward in any direction. You haven't responded to that. I still don't understand why you brought it up. Please explain.

I also don't understand how zero knowledge of the input parameter applies as well. This argument works for every model in existence and is not unique to entropy. Again not sure why you're bringing that up. Please explain.


I never said it was unique to entropy. (And I still don’t understand if there is some meaning that escapes me in calling entropy a “model”. Are temperature and pressure also “models” or is it unique to entropy?)

If I have a system in thermal equilibrium with a heat bath and know the macrostate P,V,T I can calculate the energy of the system only as a probability distribution - it’s undefined. If I knew the state precisely I could calculate the exact energy of the system.
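
A toy version of that contrast (Python; the two-level system in a heat bath is my own choice of example):

    import math

    k_B, T = 1.0, 1.0           # units where k_B = 1
    energies = [0.0, 1.0]       # a two-level system coupled to the bath

    # Knowing only the macrostate (the temperature), the energy is a distribution
    weights = [math.exp(-E / (k_B * T)) for E in energies]
    Z = sum(weights)
    probs = [w / Z for w in weights]
    mean_E = sum(p * E for p, E in zip(probs, energies))
    var_E = sum(p * (E - mean_E) ** 2 for p, E in zip(probs, energies))
    print(mean_E, var_E)        # nonzero spread: the energy is only a distribution

    # Knowing the microstate, the energy is just a number
    print(energies[0])          # say we know it's in the ground state: no spread at all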

If I define lycanthropy as “error in the determination of energy” it’s positive given the macrostate for the system in a heat bath and zero given the microstate of the system. Of course, given the microstate one can know the energy but can also pretend that the energy is still indeterminate.

While the distribution of energies - and the whole thermodynamical model - may still be useful its meaning would change. It would no longer be the most complete description of the system that encodes what can be predicted about it. Of course if that was never the meaning for you, you’ll see no loss. But I thought we were talking about physics, not mathematics. The meaning of thermodynamics is a valid point of discussion. The interpretation of the second law remains controversial.

I think this discussion has run its course - I may no longer reply even if you do. Thanks again, it was interesting.


If the cup of water is in a specific microstate at time t=0, and evolves over time according to deterministic equations of motion, how will it "access" other microstates that aren't along that specific trajectory in phase-space?


It can't. But you're not typically defining ONLY microstates along that trajectory as accessible. You are defining all accessible configurations according to your defined macrostate.

Knowledge of future microstates does not change what was already defined as a macrostate. The definition and the rules you used to construct a macrostate are independent to knowledge of the system.

If you gain knowledge of the system and you would like to change your macrostate, then be my guest. You can certainly do that, but "entropy" as we know it does not actually change with more knowledge unless you change the parameters according to your gained knowledge.

Think of it this way. The thermometer ALWAYS reads the same thing EVEN if you have 100% knowledge of the current microstate. You can build a new thermometer using some other mechanism to get a different reading and to take advantage of your new found knowledge... but you'd be changing the definition of your macrostate.



