
I was going to rant, but this:

> The energy-driven reduction of entropy is easy to demonstrate in simple laboratory experiments, but more to the point, stars, biological populations, organisms, and societies are all systems in which energy is routinely harnessed to generate orderly structures that have lower entropy than the constituents from which they were built. There is nothing physically inevitable about increasing entropy in any of these systems.

is so straightforwardly incorrect that it frankly just should not have been published.


Perhaps you could point out what specifically you believe to be wrong, then?

The author very explicitly addresses the obvious flaw: all of these examples do, and must, increase total entropy. The point is that all of them produce reduced entropy structures, which is thermodynamically possible because none of them are closed systems.

The author is arguing that the overwhelming common assumption that a system is closed is problematic because it never actually holds.
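
(In textbook terms - my gloss, not a quote from the article - the second law constrains the total, not the subsystem:

    \Delta S_{\text{total}} = \Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \ge 0

so \Delta S_{\text{system}} < 0 is allowed whenever the surroundings pick up at least that much entropy, which is exactly what an energy-harnessing open system arranges.)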

An organism that is highly drought-adapted tends to not do as well as other organisms during a flood. We are similarly awash in energy: solar, chemical, nuclear, residual core heat, etc. It is worth considering if we are trying to overadapt to the wrong environment.


I would argue that what the author said is somewhat true, in that life can be roughly separated from nonlife by the observation that organisms do appear to tend toward less entropy in a universe where everything else seemingly does the opposite.

Genetically coded beings are orderly structures with far fewer microstates than their molecular constituents would likely contain otherwise. And we tend to make copies of ourselves, giving order to otherwise chaotic matter.

I think there's a profoundness in there, somewhere. Such a definition also solves the virus conundrum!


>in that life can be roughly separated from nonlife by the observation that organisms do appear to tend toward less entropy in a universe where everything else seemingly does the opposite.

In a superficial way, I suppose, but it's still wrong. Another way to look at it is that life actually tends to a more efficient increase of entropy than otherwise would be expected (when compared to non-living processes). For example, humans are a complex chemical reaction that has reached the point where it can release energy through splitting of atoms - which raises entropy far higher than would have been possible otherwise, and in a way that is completely impossible via non-living chemical reactions.


A squirrel will spend energy to collect nuts and bury them together in the ground, rather than having them scatter and roll and blow willy-nilly. A person is constantly sweeping up the dust inside their house, painting and repainting the trim of their windows, and organizing the cables in their desk drawers.

Life can most certainly be viewed as a counterforce to entropy. Certainly at the philosophical level, but why not at the genetic level too, reproduction being the repeated organization and duplications of chemical bonds from smaller constituents.

I certainly see my life as a constant battle against entropy, an adult's life consists pretty much 80% of putting things in things.


>I certainly see my life as a constant battle against entropy

Let's consider an amount of non-living matter equal to your mass. That pile of non-living matter wouldn't be able to generate the amount of entropy that you will generate during your lifetime. Your "battle against entropy" is actually a faster way to increase total entropy. That is the reason living matter exists - it is a faster way to generate entropy, and thus a direct result of the 2nd law, which states that any system evolves along the entropy maximization gradient. And living matter organizes into more and more complex systems - bodies/colonies/organisms, smarter organisms, societies - because that generates even more entropy than the simple set of constituent parts would generate on their own. Compare the entropy generated by a 10-strong tribe in the Amazon with that of 10 regular Americans or Europeans (bonus point - consider that the complexity of civilization allows 100 "civilized" people to actively generate entropy where hardly 10 could barely survive without the civilization). One can notice that intelligence arises as a power multiplier of living matter's entropy generation capability.


>Life can most certainly be viewed as a counterforce to entropy.

Sure, as long as we qualify the terms correctly. That is, you need to decrease the resolution of what you mean by 'entropy', because each one of your examples actually increased entropy more than inaction would have.

Regardless, this goes against the author's point, because in each case 'work' needs to be done to reverse the entropy of a local system (e.g. scattered nuts) at the expense of the larger system (squirrel heat emitted into the universe)

>I certainly see my life as a constant battle against entropy, an adult's life consists pretty much 80% of putting things in things.

Sure, with proper qualification that is one way to look at things. This works because of the resolution that we care about. Namely, we don't care about heat generated from our bodies, or smart phones, or nuclear reactors, accelerating global entropy, but we certainly care about dusty rooms.

Again, it seems like the author disagrees with this view.


>Regardless, this goes against the author's point, because in each case 'work' needs to be done to reverse the entropy of a local system (e.g. scattered nuts) at the expense of the larger system (squirrel heat emitted into the universe)

I submit to you that you did not get the point the article is trying to make, because it was exactly this. When considering the animal expending work as the system, its entropy can decrease, because it isn't a closed system. You can then retort that the 2nd law concerns a larger closed system, but you can keep playing that game until the 2nd law essentially becomes a tautology, and becomes useless in understanding the system at hand.

I don't know if the author made this point explicitly (he hinted at it at the end), but one needs to actually know the details of the system under consideration, and very general laws can bring some level of context but will be limited in terms of the actual relevant or useful insight one can glean.


If you consider "organism" and "environment" to be separate systems which may exchange energy/entropy, would it be fair to suppose that living organisms dump entropy into their environments to maintain internal order?

I agree that this is a typically murky discussion since the concept of entropy for a complex organism gets pretty handwavy...

I'm supposing that an organism in "living" condition has many orders of magnitude fewer valid microstates than the same constituent atoms/molecules would have once life sustaining reactions cease and decomposition begins - there are only so many valid ways to assemble a given living being...
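
(For reference, the textbook relation behind "fewer microstates = lower entropy" is Boltzmann's formula - my addition, not something from the thread:

    S = k_B \ln \Omega

where \Omega is the number of microstates compatible with the macrostate. A living body constrains \Omega far more than the same atoms mid-decomposition do, so its entropy is lower - at the price of entropy exported to the environment.)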


I think we are agents of accelerating disorder, somewhat like a growing fractal with order inside but a much higher disorder produced at the edge. The internal order supplies the growth, and the external disorder grows exponentially atop it.

Behaviourally, when you look at a lot of the things we do (e.g. breaking big chunks of metal into tiny coins and distributing them into people's pockets, or turning big clumps of clay and tree and metal into small piles to live in, or retail stores in general), you can see that humans are pretty good distributors, dis-aggregators, disintegrators.

At least so far, we don't disintegrate most things to the point that they are useless. Only to a point where they serve our growth. If one day we were to become a truly spacefaring, or an intergalactic, disintegrator, we might meaningfully hasten the advance of universal disorder as a whole. What better purpose for life, as a function of the universe, than to advance the march of a primary universal trend?

All that said, it's hard to see how our work will meaningfully bring about a true final state any faster, because we're not likely to accelerate proton decay. Maybe protons decay faster in isolation! Maybe we'll find out that we can poke them just the right way, and it'll be surprisingly useful.


Although rare, natural nuclear fission reactors are known to have existed:

https://en.wikipedia.org/wiki/Natural_nuclear_fission_reacto...


> humans are a complex chemical reaction that has reached the point where it can release energy through splitting of atoms

Could you expand a bit on this point? What are you talking about, exactly?

By the way, a reaction concerning splitting of atoms is not called a chemical reaction: it's a nuclear reaction.


An alien race with no notion of life (as we know it) would see us as nothing more than a natural biochemical reaction process that, as one of its byproducts, manipulated a local environment to split and release energy from atomic nuclei.

There is potential energy locked in multitudes of structures (from chemical bonds, to atomic nuclei). A simple chemical reaction may need a relatively small catalyst to free this energy. For example, a mix of oxygen and hydrocarbons will need a small spark - which can be easily provided through a natural process (e.g. lightning).

To release atomic energy, the catalyst that is needed is a highly complex and organized structure that cannot be achieved with a natural process like lightning, but instead required a reaction that lasted billions of years, guided by natural selection. Natural selection progressively and incrementally found improved catalytic structures (for lack of a better phrase) to free previously unachievable energies. But because there is no free lunch, as we access these higher energy levels we're actually accelerating global entropy and speeding up the heat death of the universe.

>By the way, a reaction concerning splitting of atoms is not called a chemical reaction: it's a nuclear reaction.

Well ... yes, but in our example, it is a chemical reaction (i.e. us) that serves as a catalyst to start the nuclear reaction.


I think the idea is that humans, despite being essentially no more than chemical reactions themselves, have developed the power to cause nuclear reactions (through technology).


What happens to this metaphor when we crack hydrogen fusion?


Nothing. It applies equally well. There is potential energy in free hydrogen nuclei that stemmed from the initial low entropy conditions of the big bang, and it can be freed, for example, in the high-pressure environments in the cores of stars, or in human fusion reactors. The act of fusing hydrogen nuclei still increases entropy (i.e. we're going from low-entropy to high-entropy). You can keep fusing the resulting elements and releasing energy until you hit iron, at which point you've reached the most stable atomic nucleus and you won't get any more energy out from fusion (or fission for that matter).
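
(A rough way to see the iron cutoff - standard nuclear physics, not something from this thread: the energy released by fusing nuclei is the mass deficit,

    E_{\text{released}} = \Big( \sum m_{\text{reactants}} - \sum m_{\text{products}} \Big) c^2

and since binding energy per nucleon rises toward iron and falls beyond it, past iron that deficit changes sign and fusion costs energy rather than releasing it.)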


Actually, I recognize this problem! I know it as Pirsig's Chemistry Professor. Here's the quote from Pirsig's "Lila":

"The Second Law of Thermodynamics states that all energy systems run down like a clock and never rewind themselves. But life not only 'runs up,' converting low energy sea-water, sunlight and air into high-energy chemicals, it keeps multiplying itself into more and better clocks that keep 'running up' faster and faster. Why, for example, should a group of simple, stable compounds of carbon, hydrogen, oxygen and nitrogen struggle for billions of years to organize themselves into a professor of chemistry? What's the motive? If we leave a chemistry professor out on a rock in the sun long enough the forces of nature will convert him into simple compounds of carbon, oxygen, hydrogen and nitrogen, calcium, phosphorus, and small amounts of other minerals. It's a one-way reaction. No matter what kind of chemistry professor we use and no matter what process we use we can't turn these compounds back into a chemistry professor. Chemistry professors are unstable mixtures of predominantly unstable compounds which, in the exclusive presence of the sun's heat, decay irreversibly into simpler organic and inorganic compounds. That's a scientific fact. The question is: Then why does nature reverse this process? What on earth causes the inorganic compounds to go the other way? It isn't the sun's energy. We just saw what the sun's energy did. It has to be something else. What is it?"

Pirsig's exploration of the answer depends heavily on his own metaphysics. We don't have a grand answer yet. In more modern terms, the question still stands: if quantum logic is completely reversible, then why is chemistry reversible only with extra energy in one direction?


That’s a complete misunderstanding of what’s going on. Fire also self-replicates and also requires very specific conditions to continue. But it's less obvious, for people or fire, that CO2, heat, and ash / human feces are the largest outputs of these open systems, rather than more humans/fire.


Key thing about life is it has a strong exponential tendency that overwhelms the ordinary processes of decay.


Not quite. Life accelerates entropy, whereas decay is a different idea. Consider: does water decay into ice on a cold day? Does ice decay into water on a warm day?


I feel like it's just missing the fairly fundamental thing that any flow of energy will result in an increase in entropy of the universe as a whole.

It doesn't mean that localised systems can't tend towards lower entropy.


In my experience what we have in a lot of places are cultures of anti-competence.

In a competence culture, you try to understand your sphere very, very well, including not only the current facts of the matter but how those facts might change under different circumstances. Then you try to understand enough of spheres of people around you that you can bracket what will affect you, and how you will affect them. It's implicitly understood that the functioning of the entire enterprise comes from people doing this.

In anti-competence culture, the functioning of the entire enterprise is mysterious and located somewhere else where it's not your problem. You only know as much about what you're doing as you need to barely keep going. You try to know as little about what's going on with others as possible, and if there is any interaction between you, you try to minimize and even resist it, in the hopes that you will have to change as little as possible or changes will be discovered to be unnecessary or deemed to be too expensive.

You want to stay in the cultures of competence. What the article is describing is just how much of the other kind is out there.


It's not the culture of anti-competence (although, yes, that exists).

How many times have you tried to communicate a novel idea to competent people just to have them dismiss it with obvious problems that clearly don't apply to your idea? That doesn't happen because they can't evaluate your idea, it happens because they don't understand the idea itself.

When the innovation is small you can get away with repeating it again and again. At some point people stop, take your idea into account, and suddenly understand why none of what they said applies. But when it is something too different, this doesn't work either.


One thing I've noticed is the better an engineer gets, the better they are at shooting down every idea they see. While I have seen many cases where a person did it anyways and failed, I have also seen many cases where they did it anyways and were massively successful.


Indeed, I suspect there is a curse of expertise. If you're an expert, you begin seeing everything as so nuanced, as so fragile, that you will dismiss lots of radical innovation as too simplistic. (And I think especially it affects motivation to actually try something new.)

But if somebody somewhat naive comes along, and just pushes through with sheer effort, they might succeed where many experts have predicted a failure.


Or more likely, the newbies will solve the “unsolvable” problem by removing some of the constraints that the old guard was holding inviolable. The newbies crow about their success for a few years while everyone struggles to work around the constraint violations.

NoSQL is the biggest example of this. Ignore everything we learned about ACID and just use key-value stores with no transactions or relations. People can build their own if they need them, right? And duplicating data to work around missing relationships is not a problem because storage is cheap? Then we get an explosion of new databases, each of which solves a subset of the missing functionality, with varying degrees of success.

I suspect this came across as more snide than I meant it. Sometimes holding a treasured constraint is the wrong thing, and the old guard of experts failed to understand that not every business problem needed the full solution. But for some reason the attitude of “we just solved this problem that experts could not solve for decades” annoys me, when the nuance is that only a subset of the problem was solved, with potentially extraordinary effort required to re-introduce those missing constraints.


I can't quite buy your example, because NoSQL is pretty much a reinvention of VSAM and IMS DB (not IMS DC).

Regardless, I don't think it's necessarily that the expert would not want to drop some design constraint. It's more likely that it is genuinely hard to decide which of them to drop and which of them to keep, because there are so many and problems are complex.

But if you're somewhat new (not a beginner either), you don't see all of these, and you can benefit from the ignorance. Unfortunately, I think it cuts both ways: it's far more likely that ignorance will actually hurt you. But for a small number of people, ignorance can lead to lucky innovation.


I call that the Procrustean Bed: Simplify the solution by cutting pieces off the problem.


There doesn't even need to be great effort. Sometimes the experts are just so good at making every idea look like shit that they don't even try.


Neh, the better an engineer gets, the better they are at asking questions which make the person with the idea realize there are some problems with his idea. These problems may be fixable.

Shooting down ideas is not productive. Making sure ideas are realizable is.


One thing I've noticed is the better an engineer gets, the better they are at shooting down every idea they see.

This was two jobs ago. Razor sharp developer, could come up with any feature you asked him to. Which was the problem: if ideas didn't originate from him, good freaking luck getting him to work on it. He caused us to miss a few critical deadlines because of his refusal to let other people do their jobs unless it satisfied his demands. The company refused to fire him because he had been around so long and carried institutional knowledge about the application, but he also refused to de-silo himself, thereby making the entire engineering effort dependent on his knowledge.

Guy went from a severely annoying veteran on the team to being given management duties over the team once our old boss retired due to health concerns (irrelevant to the job itself - he just drew a very unlucky health card). I and several other developers were out the door soon after because none of us wanted to report to him.

Company later got acquired. I like to peep in on their "About Us" company roster page every now and then. He's still there. But I've seen 3 different people in three years in my former role reporting to him show up on the roster, and then disappear and then get replaced by a new face.

"People don't quit jobs, they quit bosses".


One obvious example of that is when Dropbox was presented here as a startup and was shot down in glorious fashion.


That is a great example. The only ones I could think of were internal to my office, and didn't want to go posting about them on HN.


I would amend this to: the better an engineer gets while still staying a salaried engineer.

Calcified engineers are more likely to enjoy the high-level corporate environment. Flexible ones leave to start companies, contract, whatever.


  > the better an engineer gets, the better they are at shooting down every idea
Are they really better, or just more confident with time?


> But subscribers to a scientific worldview often make a more ambitious claim: that the best theories are isomorphic with the fundamental nature of the universe.

This is not an "extra" claim on top of conservation laws/fundamental symmetries.

> Reductionism can be understood as a combination of (1) the claim that the intelligibility of the universe depends on the unity of scientific theories

It's strange and frankly likely just projection to say that it's the reductionists that claim the universe must be a certain way in order for it to be intelligible.

> Despite its limited usefulness as a guide to scientific practice, reductionism is a powerful cultural idea. We might call it the Lego-block conception of reality: only the Lego blocks are real, so ‘fundamental’ science involves identifying what the blocks are and how they interact, while ‘applied’ science involves discovering the right combination or permutation of blocks that accounts for the phenomenon in question.

The question of realism is separate from reductionism about fundamental law, and it's not a good sign to (deliberately?) confuse them. EDIT: Just to be clear to people skimming this stuff, I can hold two theories: a) your dog is real, b) your dog is not real, only quarks are real. We can debate this for as long as we'd like, but what I am not necessarily saying is that your dog's dogness corresponds to some suspension or modification of fundamental physics.

> that parts and wholes have ‘equal’ ontological priority, with the wholes constraining the parts just as much as the parts constrain the wholes.

Again, if ontology means "realism" this is a confusion; if it means the way things work, it's simply wrong or completely unsupported.


RabbitMQ is a traditional message broker; you use it when you have lots of messages you don't particularly want/need to be stored persistently, and where you want/need to take advantage of the routing feature: you put keyed messages into some topic/exchange, and each application subscribes to only the subset of messages it is interested in.

Kafka creates the abstraction of a persistently stored, offset-indexed log of events. You read all events in a topic. Kafka can be used to distribute messages in the way AMQP is used, but is more likely to be the centerpiece of an architecture for your entire system where system state is pushed forward/transformed by deterministically processing the event logs.
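
A minimal sketch of the difference, assuming local brokers and the pika and kafka-python client libraries; the exchange/topic/group names here are made up purely for illustration:

    # RabbitMQ: routed, transient messages. A publisher puts keyed messages into
    # a topic exchange; each consumer binds a queue with a pattern and only ever
    # sees the subset it asked for.
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.exchange_declare(exchange="events", exchange_type="topic")
    ch.basic_publish(exchange="events", routing_key="orders.created", body=b"order 42")

    q = ch.queue_declare(queue="", exclusive=True).method.queue       # throwaway queue
    ch.queue_bind(exchange="events", queue=q, routing_key="orders.*")  # only order events

    # Kafka: a persistent, offset-indexed log. A consumer reads the whole topic
    # from whatever offset it chooses; the broker retains the log regardless of
    # who has already read it.
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "events",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",   # replay from the start of the retained log
        group_id="order-projector",
    )
    for record in consumer:
        print(record.offset, record.value)  # deterministically fold events into state

The RabbitMQ consumer only sees what its binding pattern matches; the Kafka consumer walks the retained log in order, which is what makes the "replay events to rebuild state" style of architecture possible.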


There is a sleight-of-hand in "meritocracy" evinced by the Scott Alexander quote--we ask "who should do surgery, the best surgeon or the worst?" and agree that surgeons should be chosen by "merit", or better yet, by their instrumental value to the task at hand.

The trick comes in when we switch without acknowledgement to describing the system for the distribution of wealth and status.

This kind of "meritocracy" is more like if we held an arm-wrestling tournament, declared the victor to be our new feudal lord, the next 6 runners up to be knights, and everyone else to be peasants. Our position in this new society was based on "merit", but that can't necessarily justify the difference between nobles and serfs.

We could even re-run the tournament every year. We could make sure no child gets extra time in the weight-room because of her noble parents. We could decide that arm-wrestling is stupid and brutish and so, in a glorious revolution, switch to speed chess. None of it would address the question of justice.


I grew up in a society where merit didn't really matter, most of the time. Regardless of whether you finished just primary school or went to college, the job assigned by the state (matched to your educational status) would pay mostly the same. Only if you ended up being a director at a (state run) factory or were higher positioned in The Party would you have significantly more benefits (financial and otherwise).

To my mind, that society sucked. It meant that passion, time, obsession weren't being appreciated. The way the state treated it was just one aspect, but it had huge trickle down implications for the rest of society. It meant that people with 4 classes didn't need to respect those with college. It created this perverse, inverted set of values where spending more time studying and doing things was seen as a sign of weakness, of stupidity; the "smart" people would study or do the least amount of work and "trick" their way to success. As the state assigned you your job, with extremely low risk of losing it (you would have had to be caught stealing, repeatedly), there was no incentive to work hard in your line of work. As you can imagine, if most everyone doesn't do serious work and tries to "trick" their way into everything, you don't get a very competitive economy, which had negative consequences for everyone.

The repercussions of that are still felt in the country I grew up in today; the value system of older generations hasn't changed. When they see a young person that got hired at a multinational company buy a shiny new foreign car, old retired people talk behind their back saying "who knows how much that young person stole" to get that car (or worse, if the young person is a young lady). When someone does anything extraordinary, the normal reaction isn't to congratulate or praise them, or to show them as an example to be followed; the natural reaction is to be envious and to suggest unsavory ways that could explain their success, because after all, like the state said a long time ago, everyone is equal.

It is a shitty, miserable society, rotten to its core that I wish nobody else would experience in their life.


> The trick comes in when we switch without acknowledgement to describing the system for the distribution of wealth and status.

I think the trick comes even earlier: in making people think that there has to be a single "system for the distribution of wealth and status".

Wealth is not a zero sum game; there is not a fixed pool of wealth in the world that has to somehow get distributed. Wealth can be created. Indeed, wealth is created every time people make a positive sum trade, a trade in which both sides come out better off.

Status tends to be more of a zero sum game, but it doesn't have to be. For example, status here on HN does not have to be the same as, or even measured by the same criteria as, status somewhere else.

However, if we set up one centralized system that is supposed to "distribute" wealth and status, we are making zero sum games out of things that shouldn't be (wealth) or at least don't have to be (status). The solution is to stop doing that. Stop centralizing power.


Here is yet another trick: situating the creation of wealth primarily in the exchange of goods, implicitly devaluing the act of producing the goods in the first place; treating allocation as the primary problem, with production a mere by-product.

The line of thought typically proceeds by claiming that it really is the exchange that makes the wealth, because it is only after exchange that the person who wants to use a thing can actually get their hands on it. However this is misleading, because production is necessary before exchange can take place.

The idolisation of the problem of allocation structures the world in a particular, and not inevitable, way, with many unsavoury properties. Allocation favours fungibility, as a tool for reducing the time needed to exchange, creating immense difficulties in valuing the act of production itself, because one line of production can just be exchanged for another. This abstraction over production removes almost all incentive to consider the future, or to plan for catastrophe, something we see visibly in the response of allocation-focused countries to the current pandemic, and in their unwillingness to attempt to mitigate, or even to prepare for, the consequences of human-induced climate change.


While I want to agree with the general gist of your argument, there is value created in economic activities besides production. For example, operating a pick & pack line to ship goods to end-consumers is labor intensive.

Speaking as someone who supports such an operation, shipping to end-consumers is expensive. None of the manufacturers I order from want to be in that business, they want to operate assembly lines and ship out truckloads at a time.

While my job is the classic 'middleman' in the supply chain, there is value provided that we do generate. The manufacturers I work with understand this as well; if they wanted to sell to end consumers it would be easy for them to cut us out. All they need to do is to advertise, package, and ship out individual products and provide support for those purchases.

Producers (read: manufacturers) want predictable demand and have long lead times. If I am running out of a product, my lead time is 8-12 weeks coupled with a sizeable minimum order. No end consumer wants to deal with placing an order for 26 skids of product and waiting 2 months. Specialization means some firms produce things and are good at it, others distribute those goods. Distribution is its own challenge, and takes specialization.

There are very few manufacturers an end-consumer can order goods from, for very good reasons. Honestly, I can't think of a single manufacturer that ships direct to consumers; even most alibaba 'factories' are intermediaries (and alibaba itself also acts as a retail channel).


Yes - I have a tendency to waffle, and it seems that editing came at the expense of my actual opinion, namely that a concern for both production and allocation is vital for a stable society.

Significant advantage has been found in organising redistribution of goods (and services, another important mechanism of wealth production, as pdonis mentioned in a sibling comment), and it clearly isn't something to be ignored. My point is only that a lot of the contemporary approach to thinking about economies focuses almost entirely on allocation/distribution - production is taken as a given, driven largely by demand through the allocation mechanism.

And to clarify - it is not even the first order effects (e.g. worse conditions the closer you are to "mere" production) that I find most concerning (though they are serious issues), but the higher-order effects of how society manages and maintains its productive capabilities, and prevents them from causing longer-term harms, simply because those harms aren't handled by the system of allocation.

The common response within the current mental framework is to try and manage those harms through the allocative system, by creating markets for them, but fundamentally the incentives simply aren't there in the way that they are for the allocation of things people want - they have to be coerced, and so people try to game the system.

No easy answers, unfortunately!


> situating the creation of wealth primarily in the exchange of goods, implicitly devaluing the act of producing the goods in the first place

Producing goods and services also involves exchanges, but it's a fair point that there are other activities besides direct trades that can create wealth, yes. Transformation of raw materials into finished products also can. So can providing services.


There are two kinds of wealth that get confused in the discussion.

The absolute wealth of having access to high quality services and goods with relatively low effort. This is the wealth we create by progress.

Then there’s the relative wealth of which share of the economy you wield power over. How much of the current means of production is commanded for your personal priorities, and how do they relate to others' priorities? What is the opportunity cost?


> There are two kinds of wealth that get confused in the discussion.

No, there's just wealth. Which is basically what you are calling "absolute wealth".

What you are calling "relative wealth" is not wealth; it's either simple trade (when you buy a product, you are "commanding the means of production" that produced it, but that's just how a free market works) or brute force (using either overt violence or government power to "command" resources that in a free market would go to other uses).


Relative to your peers, if that clears things up.


> Relative to your peers, if that clears things up.

I already understood that that's what you meant. It doesn't change anything I said.


> Wealth can be created.

Corollary: wealth doesn’t exist and the need for disparity to drive the economy doesn’t exist either.


> Corollary: wealth doesn’t exist

I have no idea how you are getting that from what I said.


yeah. i can create a sandwich. that does not imply there are no other sandwiches.


It does imply that sandwiches are not necessary. I’m not sure what else you can draw from a comparison between an abstract concept and physical expression of one—you certainly didn’t materialize the expression out of thin air.

Incidentally, the benefit of wealth is unclear.


> It does imply that sandwiches are not necessary.

You're still not making sense.

> the benefit of wealth is unclear.

How are you posting here without taking advantage of various benefits of wealth? Last I checked the Internet doesn't work with paper cups and string.


I can use raw ingredients to create a sandwich too, and if we each prefer the other's sandwich more, we can agree to trade sandwiches.

Was anything actually created during the trade though? I think that's what he's getting at. An economist would argue yes, value was created, but it's an abstract concept at best, a bit made-up at worst, because at the end of the day it's the same two sandwiches.

"Wealth isn't real" is a pretty radical take though, and I'm not sure how productive it is to contemplate because you'd have to entirely throw away the concepts of personal property and money before you can get there.


> at the end of the day it's the same two sandwiches

No, it isn't, because the sandwiches are in the possession of different people than when they started. That's where the wealth gets created: the value of each sandwich is different for different people. Many people fail to understand the concept of wealth creation because they think of "value" as something inherent to the object, instead of something that depends on who is using the object and for what purpose.


Depends on which type of value people are talking about; not all forms of value are subjective.


I suggest without underlying thoughts that we exchange the sandwich back after taking a closer look at it.

Is even more value created?


> I suggest without underlying thoughts that we exchange the sandwich back after taking a closer look at it.

You can suggest that, but why should I agree if I value the sandwich I now have more than the sandwich you have--which of course I do because that's why I agreed to the first exchange?


Say you do for sake of argument. It has shellfish on it and you are allergic. Can we reverse the exchange and create more value?

We could also take a bite before exchanging again.

(I'm not trying to prove a point, just curious how the logic extends)

edit:

Wait, by your refusal, is value lost or gained?


> Say you do for sake of argument. It has shellfish on it and you are allergic. Can we reverse the exchange and create more value?

More value relative to the state where I now realize I have a sandwich I can't eat, yes. But that's just because I was unaware, when I made the first exchange, that the sandwich I was getting was one I would be unable to eat. If I had known that before, I would have refused the first exchange.

More value relative to the original state, before the first exchange, no, since if we re-exchange the sandwiches back, we're just returning to the original state.

A simpler way of describing this case is that people can sometimes have mistaken beliefs, which causes the perceived value to them of something to be different from its actual value to them. The actual value is what matters for wealth creation. In the case where the sandwich turns out to be inedible, the first exchange actually did not create wealth; I just thought it did. Then I realized it didn't, so I was willing to undo it. In fact, given the actual facts, the first exchange destroyed some wealth, which undoing the exchange restored.

> We could also take a bite before exchanging again.

How would that make either of us want to exchange the sandwiches back again?

> (I'm not trying to prove a point, just curious how the logic extends)

I think you're missing a key aspect of the logic, which is that people make exchanges, not to satisfy someone's abstract, highly contrived thought experiment, but because they want to. So asking about exchanges that nobody would want to make is pointless.

(At least, that's how it works in a free market, where all transactions are voluntary. If people are forced to make exchanges they would not choose to make, you can no longer count on those exchanges creating wealth even if everyone's beliefs are accurate.)

> by your refusal, is value lost or gained?

Neither, because nothing happens if I refuse.


Interesting, thanks.


> This kind of "meritocracy" is more like if we held an arm-wrestling tournament, declared the victor to be our new feudal lord, the next 6 runners up to be knights, and everyone else to be peasants. Our position in this new society was based on "merit", but that can't necessarily justify the difference between nobles and serfs.

Isn't that because the aspect of merit which is measured by an arm wrestling contest isn't the same as that which is relevant to running a feudal society?

A tournament for selecting a feudal lord would need to measure economic and strategic literacy, intelligence, moral compass, etc.


It begs the question to just assume there is a "feudal society" that needs to be "run". For instance, why not have a "first citizen" who is selected to manage internal coordination and external strategy, but must live in the worst house in the village and wear a hair shirt.

You can say, well that wouldn't work! But now the idea is that the structure of this model feudal society is justified by reasons like "people won't follow someone if they don't have a gold hat and a scary sword" and not by any process that led to selecting the particular feudal lord.


This analysis is incomplete. Assume I am the best (objectively) at "managing internal coordination and external strategy" but I don't want to live in the worst house and I am allergic to hair shirts. Then I won't sign up for the job and the society is worse off for it.


Assuming a magical test that actually could pick the best lord... that would justify the lord. It wouldn't do much of anything to justify the subjugation of the serfs.


What if they happen to be really good at being subjugated serfs?


I think that the argument that they are making isn't about the role that the person ends up with but the dramatic difference in wealth, status, and privilege.


However, the inverted version of the lord and peasants scenario is not a scenario where the differences in wealth, status, and privileges are leveled out. It is a scenario where, since power corrupts, those appointed to decide how those things are allocated most equally somehow just happen to wind up favoring themselves.

Or, as Orwell so eloquently wrote "All animals are equal, but some animals are more equal than others."


One would think.

But the reality of feudal lords is that merit was measured by whose legs you were born between, if a penis was present and in what order you emerged. Things worked out.

Most of life is like that. Theoretically, we all want the best surgeon on the planet, but the reality is we mostly go to a random draw of a surgeon that meets or exceeds the minimum qualification.


I generally agree with your last sentence, but maybe it's a mistake to generalize it to other activities or professions.

If an adequate surgeon has an 85% success rate, and a brilliant one a 90% success rate, then it's arguable that being ten times smarter or more dexterous isn't important enough to be overly rewarded. So in that sense, merit might not matter.

But many activities or professions lend themselves to multiplying others' results. What if someone can teach all the surgeons to have 5% fewer failures? That still has diminishing returns, but what if someone figures out a way to do, say, twice as many surgeries with the same resources, or to eliminate the need for half of them? You might say they still don't need or deserve wealth, but in order to reap the benefits, society has to give those people power in some form to organize the activities of others. And wealth tends to flow to those with power.


I think you’re on good track of thinking.

But I would see your scenario as improving process, not doctors. Doctors are the last real guild profession in modern society, and industrialization always beats skill in the long run. As time goes on, IMO their role will get whittled away, first in primary care (already happening) and things like radiology, and eventually in other areas.


You optimize for what you actually measure, not what you wish you were measuring.


Now you need a tournament for selecting the measures.

It’s tournaments all the way down.


I think there's a second substitution, hinted at in your last paragraph, where the agreement that the best surgeon should do the surgeries moves to "the best people should be wealthy and run society", and then to "the people who already have wealth and power right now are better than everyone else".

They sometimes try to justify that by saying that the children of the rich who've been groomed from birth are better qualified than the children of the poor, but rarely would they venture into suggesting everyone should have an equal investment in their education before merit is decided.


But does anyone actually make the final claim? A lot of people accuse their political opponents of being snobby elitists, but I don't think I've ever heard someone actually say that rich Harvard graduates are better than everyone else.


The problem is, to a first approximation, we are using the arm-wrestling tournament to determine who should be our village's chief and deputy arm wrestlers, which (I presume) is important for settling inter-village disputes. We then compensate them well lest they defect to another village; and because the benefit we get from them is so large, we can afford to pay the relatively few of them well. This compensation then gives them power in a diffuse way that is hard to combat.

In more concrete terms, look at "anti-meritocracy" [0] positions. They are talking about making, say, the programming profession more equal; and not about making the wealth and status of programmers more equal to that of, say, teachers.

Approximately no one is arguing for major changes to the system for the distribution of wealth and status; so of course the reactionary movement will not frame their arguments in that way.

[0] This is a terrible name, as the "pro-meritocracy" crowd is the more reactionary one; but all the other identifiers I can think of pull in baggage I do not want.


Alternatively,

"If your life depends on a difficult surgery, would you prefer the hospital hire a surgeon who aced medical school, or a surgeon who had to complete remedial training to barely scrape by with a C-?"

If you are choosing a King, would you accept the decision of the class rankings of the Harvard School of Law?


Do you mean a systematic category error, that people are "ranked" or promoted according to criteria that is irrelevant to the purpose of the ranking (e.g. armwrestling is not politics)?


> None of it would address the question of justice.

I'm probably misunderstanding your point, but this is a question of justice. Specifically, it's a question of Distributive Justice, which is "[concerned with] the socially just allocation of resources". Saying "What would we need to change about our absurd arm-wrestling-based distribution system in order for it to be just" is squarely a question of distributive justice.

Are you saying that nothing that determines who gets to be Lord can be tweaked to reach a just distribution, because of the winner-take-all nature of the rewards? Further, are you suggesting that the current (American, I assume) system of distribution is equally unjust?

https://en.wikipedia.org/wiki/Distributive_justice


"The Old is Dying and the New Cannot Be Born: From Progressive Neoliberalism to Trump and Beyond" by Nancy Fraser is a good (short!) book that talks about similar themes. I'd recommend it.


Think of it as a Maslow's hierarchy of needs-like thing. You take care of the lower tiers before worrying about the upper ones.

The lower tier is that things work. Food gets grown, things get built, medicine gets done - and it's all done well.

The upper tier is 'justice'.

Meritocracy (and related concepts like capitalism) simply isn't about pursuing justice or equality or any of these higher-tier concepts. It's just a way to satisfy those lower-tier requirements - the only way that really works. To criticize it on the basis of justice is simply the wrong level of analysis.

Meritocracy is a foundation that satisfies the lower tiers. Once that is handled, and upon that basis, you then worry about how to pursue the upper tiers using the resources that meritocracy has provided. But that doesn't mean you jump off your foundation to do it, down into the pits of scarcity and hunger - that would make no sense at all.

And this is how all functional societies work - a meritocratic/capitalist engine of production, paired with other social structures (redistribution, military) to handle other social needs. Right tool for the job.


> The lower tier is that things work. Food gets grown, things get built, medicine gets done - and it's all done well.

> The upper tier is 'justice'.

I'm not sure about this distinction at all. The lower tiers of human needs are about people being able to access things, not whether or not those things exist. The "justice" of the system is just another link in the supply chain between a person and what they need to survive; it can be a bottleneck in the same way that low production or waste can diminish access. This means that issues that might seem abstract to us are concrete to people who can't access food, shelter, or healthcare and therefore can't meet their basic needs.


> The lower tiers of human needs are about people being able to access things, not whether or not those things exist.

If those things don't exist, nobody can "access" them. All of those things have to be produced before anyone can use them.


And yet producing them without the ability to access them is completely pointless, which is my point. In fact, it's worse than pointless because effort and resources are expended in production.


> producing them without the ability to access them is completely pointless

Your continued use of the word "access" obfuscates the issue. It isn't a matter of "access"; it's a matter of trade. Nobody is going to produce something that they aren't either going to use themselves, or sell in exchange for money that they can then use to buy something they are going to use themselves.

If people are producing things that never get used by anybody, it's because some other entity (which would be a government) is paying them to do useless work. It's not because they are just deciding to produce things that others don't have "access" to. So the way to fix that problem is not to "improve access". It's to stop governments from handing out the taxpayers' money in exchange for useless work.

You are also ignoring the other possibility: that governments pay various special interest groups to not produce things that would be used (a good example in the US is farm subsidies for not growing what the government thinks is "too much" of some crop). Again, that isn't a matter of the people who would be able to use the things not having "access" to them: it's that the government is preventing them from being produced at all, even though their production would be a net increase in wealth. And the way to fix that is not to "improve access"; it's to stop the government from paying people not to do useful work.


> Nobody is going to produce something that they aren't either going to use themselves, or sell in exchange for money that they can then use to buy something they are going to use themselves.

Citation needed. People do this all the time.


> People do this all the time.

Example needed.


People constantly produce things without complete certainty that they will find a buyer. Think of all the produce which is thrown away unsold. The reasons are certainly not limited to senseless government demand. Life and business involve uncertainty.


> People constantly produce things without complete certainty that they will find a buyer.

Without complete certainty, yes. But nobody has complete certainty about the future. And I didn't claim complete certainty. I'm assuming readers are capable of applying common sense.

> The reasons are certainly not limited to senseless government demand.

The reasons why life is uncertain aren't, yes.

But the reasons why people would either produce something which they know nobody will want but they still get paid for it, or would not produce something that they know people will want? Yes, that pretty much comes down to senseless government actions.


It sounds like you are dismissing the phrase "improve access" as some wibbly-wobbly social justice fuzzy meaningless concept that is obscuring the important stuff.

To me, though, it sounds like the fundamental basis of the whole capitalist system - I'm thinking of the idea that free markets can only function if transaction costs are reasonably low, and the economist Ronald Coase, etc.


> It sounds like you are dismissing the phrase "improve access" as some wibbly-wobbly social justice fuzzy meaningless concept that is obscuring the important stuff.

No, just as a term that is hindering understanding rather than helping it.

> I'm thinking of the idea that free markets can only function if transaction costs are reasonably low, and the economist Ronald Coase, etc.

Coase didn't say free markets couldn't function with high transaction costs. He only said it would be more difficult and take longer for those markets to reach equilibrium.

Also, the true observation that free markets in the real world are always imperfect does not justify the further claim that is usually made, that governments must intervene to "fix" these imperfections. In fact, the government "fixes" almost always make things worse, often much worse. Even on strictly Coasian terms this should be evident, since the most common source of high transaction costs in modern markets is...government regulations.

However, when it comes to basic necessities--things like food, clothing, and shelter, the kinds of things this thread was originally focused on--the issue is not transaction costs for the people who need these things. There are perfectly good, low friction markets for these necessities. The problem is that governments are skewing those markets by paying people not to produce useful things, or to produce useless things instead. Or, in the case of many third world countries, the government simply confiscates all the useful things for government officials and their cronies. I don't think "transaction costs" or "access" is a useful description of those problems.


When transaction costs are too high for transactions to be made that would improve society, the losses don't somehow get made up. You have a lot of produce or something, and you can't get it transported to the people who could use it, and it gets trashed, that's permanently lost.

It seems wrong to me to dismiss this as "taking longer to reach equilibrium", as though you get to the same destination either way. Perhaps you are interpreting "function" in a loose manner, but of course I didn't mean "function" = merely "do something".


> You have a lot of produce or something, and you can't get it transported to the people who could use it

And why not? "High transaction costs" doesn't seem like a viable explanation in a world where goods constantly get shipped around the world at extremely low cost per unit. Something more like "some stupid government regulation is preventing common sense from being applied", or "corrupt officials are stealing stuff instead of letting it get sold on the open market" seems much more likely.


Because you don't have a truck?

And maybe you don't have a truck, because there are no roads.

And there are no roads, because there is no nearby marketplace.

And there is no big market, because there is nobody with capital to invest in that.


I'm not sure what you're talking about, since in countries which have these attributes, nobody is producing anything that would require trucks to transport, so your hypothetical of someone having a lot of produce that they can't get to people who need it doesn't apply.

If you are simply saying that there are countries which are poor because there is nobody with capital, first, that's still not a problem of "high transaction costs", it's a problem of lack of capital. And second, the problem isn't even lack of capital, since there are plenty of people in rich countries who would be glad to invest capital in poor countries--if the corrupt governments of those poor countries weren't going to steal everything of value that got invested. So we're still looking at a problem of government corruption, not "high transaction costs".


Government corruption is exactly "high transaction costs". I mean, what is the stereotype of a poor corrupt country, but one where you have to bribe someone to get anything done?


You see “justice” as the societal equivalent of “self actualization”?


A court system that works perfectly still won't be of any help to get you fed when food isn't grown and made available to the civil servants in the first place.


There's a tendency in political philosophy (and economics) to imagine up some kind of implicit or explicit ordering of events that progresses from one point to another, the ordering of which is then used to assert something about justice or the "correct" ordering or way of running a society. On closer inspection, certain of the conditions in the progression (often the earlier ones) never actually existed, or did so so ephemerally that they're not really worth worrying about, or all the steps kind-of happened but at the same time or in a different or chaotically mixed-up order. It's encountered all over the place and big names do it all the time - Locke's Natural Law? Yep, built on exactly that kind of dubious base. It's everywhere in political philosophy, and such orderings-as-a-foundation-for-further-reasoning or guidance aren't necessarily wrong or useless, but they're often a sign you've wandered into some weak and/or misleading reasoning.

I have a feeling this is one of those. I'm not sure "society producing some food, but unable to produce any justice until they produce a little more food" is really a thing. Humans were decent at food fairly early, and some version of justice seems absolutely central to the functioning of human communities, so I'm not inclined to believe some kind of leveling-up from "food production" to "justice" is a real thing that ever, meaningfully, happened, and if it's not something that actually happens or has happened it's worth calling into question whether that "hierarchy of needs" is real or whether reality's sufficiently more complex (or even inverted—it may be more that you need some amount of justice to have a society of humans producing food in the first place, even if they're all on the verge of starvation at the "start", whatever that even is) that such a model isn't even useful as any kind of abstract, general guide (which I suspect is the case)


Huh?

What trick - we want to promote being an excellent surgeon, by rewarding them - the rewards that people are most pleased with are wealth and status.

Where is the trick?

What other justice do you imagine, other than the person who brings the goodies to the group being rewarded for doing so, and those attempting to break this mechanism being removed?


Meta comment, but I would love for there to be more insight into the sociology of Hacker News. I feel like in the last couple of years we've seen a stark decrease in the frequency of "What to expect when you're expecting (to be a millionaire founder)" articles and a stark increase in "Enterprise Patterns" discourse. The latter is oddly in pretty serious tension with PG's ideas about development and Scheme.

I find the "Liskov Substitution Principle" an awkward way of pointing at the idea that your usage of subtyping should not render the type variance in your system nonsensical.
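To make that concrete, here's the usual toy illustration, with hypothetical classes made up for the sake of the example. A subtype that strengthens the supertype's contract makes code written against the supertype quietly wrong, which is the "variance going nonsensical" part:

    // Square "is-a" Rectangle in the type system, but not behaviourally.
    class Rectangle {
        protected int width, height;
        void setWidth(int w)  { width = w; }
        void setHeight(int h) { height = h; }
        int area() { return width * height; }
    }

    class Square extends Rectangle {
        // Setting one side silently changes the other, so the Rectangle
        // contract ("setWidth leaves height alone") no longer holds.
        @Override void setWidth(int w)  { width = w; height = w; }
        @Override void setHeight(int h) { width = h; height = h; }
    }

    class Demo {
        // Correct for any honest Rectangle, broken by substituting Square:
        static int stretch(Rectangle r) {
            r.setWidth(4);
            r.setHeight(5);
            return r.area();   // 20 for Rectangle, 25 for Square
        }
    }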


It's cyclic. When I first joined over a decade ago there seemed to be a lot of technical posts and discussion, which gave way to a lot of SEO discussions, which gave way to technical discussions, then shit tons of Erlang (not that I minded), then other stuff for founders, etc. People tend to post interesting things, which leads to other interesting and related things. So you'll see a lot of a certain topic or field, maybe a surge in new users or user activity because it interests them, and then some new topic or field becomes dominant for a while and appeals to a different subset of users, who become a bit more active with submitting, posting, and voting.


It seems like there's something deficient in the way we tell the story of the history of software architecture (to the extent we tell a story at all) in terms of the name-brand techniques and technologies involved, rather than in the actual layout and organization of actual codebases.

For a while I've assumed that OOP as in C++/Java essentially formalized modular programming in C. In other words, that people were already writing programs whose state was divided into functional areas, with some functions serving as the interfaces between the modules. With a class-based system you can rigidly formalize this; and then OOP as we use the term essentially just reinterprets this formalization as actually creating the architectural paradigm that had already evolved as programs grew.

(This is NOT meant as the one way to sum up the whole world of things identified as or related to "object-oriented programming".)

But I wasn't around at the time...


This is my thinking too. It's really silly to have wars around programming paradigms. There are only a few principles around which we're all arguing:

* How do we make programs that are easy for the machine to execute efficiently?

* How do we make programs that are easy for humans to read and understand?

* How do we ensure, given the maintenance requirements of our programs, that another human who doesn't have the benefit of our experience can safely make changes to our programs without unintended consequences?

Discussions around OO versus Functional versus Procedural miss the point. You can write perfectly maintainable procedural, functional, or object-oriented code. If you're authoring something brand new you have to approach it with a complete understanding of all the moving parts. If you're not there, make a prototype, wait a few days, then go through and re-read it. Anything you don't understand is something nobody else will the first time they approach your code base. Come up with ways to be explicit and to communicate clearly what the intent is. Try to anticipate what things people will be changing often and make those easy things to change. Remember that it's about conveying a representation, not a deep understanding. You want to represent your understanding of the problem space to someone who doesn't have the same level of understanding as you.


> It's really silly to have wars around programming paradigms. There are only a few principles

Well, you say we're having wars around the programming paradigms, I say we're having "spirited debate" around the principles :). I've been working mostly in Java for the past 20 years or so, and I can't help but observe that most people, when they try to put together a Java application, default to a sort of design that looks an awful lot like old Cobol programs did: they have a "datatype" generator (usually automated from XML schemas) and a slew of "utility" classes with mostly static functions that have mostly static data that operate on these datatypes, and as little class instantiation as they can possibly get away with. I've seen this same basic architecture repeated many times across four different employers in two decades. It's always a lurching, monolithic, untestable behemoth that never works reliably and resists any attempt to change. In talking with the original designers, it's clear that there were no principles behind the design besides "it still doesn't work, how do I get this thing to work". If there were clear and adhered to principles like automated testability, you'd end up naturally with an OO (or even better, FP) type design.
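For anyone who hasn't run into that shape, a compressed sketch of what I mean, with names invented for illustration. The first half is the "Cobol in Java" pattern; the second half is roughly where a testability-first design pushes you instead:

    class Order { String id; }

    // Generated data holders plus static utilities sharing static state.
    // Untestable, because the collaborators are global and hard-wired.
    final class OrderUtils {
        static java.sql.Connection conn;              // shared static state
        static void process(Order o) { /* writes through conn directly */ }
    }

    // Instances with their collaborators injected behind small interfaces.
    interface OrderStore { void save(Order o); }

    final class OrderProcessor {
        private final OrderStore store;
        OrderProcessor(OrderStore store) { this.store = store; }
        void process(Order o) { /* validate, then */ store.save(o); }
    }

A unit test hands OrderProcessor a trivial in-memory OrderStore; the static version can't be exercised without standing up the real database.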


Interesting. I guess I've been more fortunate. Most of the Java code I've worked with has involved reasonably well thought out classes. For me that mostly means I can read and understand parts of the codebase in isolation. There are usually a few piles of sometimes ugly utility classes and the occasional mess of deeply nested inheritance that nobody wants to touch. When the latter becomes painful enough, someone usually decides to refactor it, which is often not as hard to do as everyone fears.

It seems to be improving in the last 5-10 years, as most practitioners have found that both of these eyesores can be reduced. DI (sometimes messy itself, but it can be done cleanly) tends to make people rethink those utility classes, and shallow inheritance is now favored, with a focus more on interfaces and composition.


Sort of... but the way this was done in C is still done, and was evolving long after C++ split off.

IMO classes as false separation is the reason we kept C++ and Java out of OSes. ABI compatibility, dynamic loading, etc. are all from the duck philosophy. It is no one's business if your duck thinks it is a duck.


When I learned C, it was drilled into us that the proper way to implement all the data structures and things was with abstract data types.

https://www.edn.com/5-simple-steps-to-create-an-abstract-dat...

And then you have nicely "namespaced" functions that operate on those abstract pointers, your code is isolated from the implementation, and you don't accidentally depend on some implementation detail you shouldn't.

But ultimately this is all just informal OOP. Objects/classes are just a natural way to organize programs.
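The class version is the same contract with the language enforcing it: the opaque pointer becomes the object reference and the "namespaced" functions become methods. A rough sketch (mine, not from the linked article) of what a C-style stack ADT turns into once it's formalized:

    // In C: stack.h exposes only "typedef struct Stack Stack;" plus
    // stack_push(Stack*, int) and stack_pop(Stack*); the struct body
    // lives in stack.c. The class is the same idea, enforced:
    public final class IntStack {
        private int[] items = new int[8];   // hidden representation
        private int size;                   // callers can't reach in

        public void push(int v) {
            if (size == items.length)
                items = java.util.Arrays.copyOf(items, size * 2);
            items[size++] = v;
        }

        public int pop() { return items[--size]; }
        public boolean isEmpty() { return size == 0; }
    }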


It's cargo-culting physics to assert that there is a set of "high level" laws in some area without having the reductionist mechanics in hand.

This cart-before-the-horse is so pervasive in social sciences, and developing sciences like neuroscience, that it's understandable one would ask why history can't get in on the action.

The glib invocation of phlogiston theory is telling. The essence of phlogiston theory is not wrong!


I strongly disagree with this view. I sympathize with you that it's probably true that scientists are inclined to cargo-cult their science after physics. But the endeavor here is not wrong, so even if it's cargo-cult, it's good.

Science's primary objective is to find models. A model should make predictions, and then scientists should collect data of interest and make sure the model's predictions aren't falsified by the data. Any scientist who cannot make predictions, and verify that the data doesn't contradict them, is no scientist at all.

Once you have models, it's simply too tempting to formalize them and build mathematical theories for them. If nothing else, for the computational advantage: you can have computers make the predictions, which eliminates a class of human errors. So it seems like any science will eventually build models that can generate predictions from first principles.

It is one thing to claim the entire human history can be predicted from first principles of a theory X. Clearly, we have no such X. Maybe we never will. It's another, and totally reasonable thing, to build a theory X from first principles that correctly predicts some data.


No one has ever built a historical model that predicts the future with any accuracy. There are plenty of overfit models that "predict" the past, but none hold up on new events.

Interesting book on the topic from the world's leading researcher, if you're curious: https://www.amazon.com/Superforecasting-Art-Science-Predicti...


Does Moore's law count as a historical model?

Perhaps it is too recent and limited in scope, but it is about some measurable things in society, it is based on past observations, and it has been somewhat predictive (though perhaps its being predictive is partly due to it becoming somewhat prescriptive).


Moore's law is more of an economic necessity. Circuits get cheaper to run, cheaper to make, and faster the more you shrink them, so you just keep shrinking them. This was observed by Carver Mead back in the late 1960s.

Moore's law requires an average of 3% increase a month in the number of transistors packed on a chip. With a few hundred thousand people working on the problem, that's not crazy. Of course it's going to be spikey, but combined over the lifetime of a chip project...
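(For scale: 1.03^24 ≈ 2.03, so a steady 3% a month compounds to roughly a doubling every two years, which is the form the law is usually quoted in.)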


I don't know that Moore's Law is what most people would mean by a historical law, but it has held up surprisingly well even for a non-historical law :)


Just because it has not been done before doesn't mean trying to do so (i.e. make a science of history) is invalid or just cargo-culting. Especially since GP referred to other social sciences, and named neuroscience. It has not been done for history, and maybe it never will be. I don't think generalizing this to all social sciences holds up. Surely there are some falsifiable predictions of psychology, sociology, etc. that have been tested.


The social sciences don't work like the hard sciences, though. The primary reason is that humans, unlike, say, atoms or molecules, are not independent objects. They are subjects with agency and can react and change their behaviour in response to what your models predict about them. Essentially, predictions can become self-fulfilling prophecies.


> I sympathize with you that it's probably true that scientists are inclined to cargo-cult their science after physics. But the endeavor here is not wrong, so even if it's cargo-cult, it's good.

Having been trained as a physicist and then worked as a biologist and dabbled in history, I'd say it isn't good. Physics cannot be used as the ideal of the sciences because it is an extremely strange science. Consider: physics deals with simple systems with only a few essential observables that can be repeatably measured. History deals with very complicated systems with an inordinate number of possible observables, none of which can be repeatably measured. Why would you expect them to resemble each other? I actually wrote a book about this...

> Science's primary objective is to find models. A model should make predictions, and then scientists should collect data of interest and make sure the model's predictions aren't falsified by the data. Any scientist who cannot make predictions, and verify that the data doesn't contradict them, is no scientist at all.

A model is supposed to recapitulate observations. Prediction is only relevant in the case of repeatable observations or the discovery of new observables. Astrophysics and history don't get the former, but you will find prediction in both in the latter case...but once you've measured it, it's not prediction anymore.

> Once you have models, it's simply too tempting to formalize them and build mathematical theories for them.

People do this to a limited extent with toy models, but the relationships among observables in history that you can get from the historical record tend to be fairly simple and rough, while the number of observables is enormous, so formal modelling doesn't buy you much. When people have done this on a large scale, you get the Club of Rome...trying to peer a few decades into the future with an enormously complicated model.

> It's another, and totally reasonable thing, to build a theory X from first principles that correctly predicts some data.

First principles are overrated, and I think the training of physicists overemphasizes them. In condensed matter there's a phrase, "more is different." We can go from atomic descriptions to macroscopic ones...in very, very simple cases. History has no such simple cases. You always have a huge number of parameters, which leads you back to the old adage that with four parameters you can fit an elephant, and with five you can make him wiggle his trunk. There's too much slop for a model from first principles predicting things in history to be interesting.


Boyle's gas law can be discovered without knowing anything about a kinetic theory of gases or quantum mechanics. You can discover "high level" laws (effective but incomplete approximations to reality) without knowing how to model the system components at the individual level. Inexact, sure, but with (limited) analytic and predictive power.

In biology you don't know how every cell in a population of wolves and rabbits works, or what its present state is. But you can use predator-prey models to predict (at some level) population fluctuations.
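To make the predator-prey example concrete, the whole "high level law" fits in a few lines. A minimal sketch of the classic Lotka-Volterra model, with made-up coefficients and crude Euler stepping, so treat it as illustrative rather than a serious integrator:

    // Rabbits grow on their own and get eaten in proportion to encounters;
    // wolves starve on their own and grow in proportion to encounters.
    public class PredatorPrey {
        public static void main(String[] args) {
            double rabbits = 40, wolves = 9;
            double a = 0.1, b = 0.02, c = 0.3, d = 0.01, dt = 0.1;
            for (int step = 0; step <= 1000; step++) {
                if (step % 100 == 0)
                    System.out.printf("t=%.0f rabbits=%.1f wolves=%.1f%n",
                                      step * dt, rabbits, wolves);
                double dr = (a * rabbits - b * rabbits * wolves) * dt;
                double dw = (d * rabbits * wolves - c * wolves) * dt;
                rabbits += dr;
                wolves  += dw;
            }
            // The populations oscillate without any model of an individual
            // animal, let alone its cells.
        }
    }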

If science waited for a complete description at the most fundamental level before making predictions, there would be no progress at all. By discovering "high level" laws you get closer to "lower level" ones, just as Boyle's law was a step on the way to kinetic theory and QM (so which is the cart and which is the horse?). Maybe all "laws" are emergent and our models only approximations, with no fundamental ones to be discovered, so that gravity, etc. are much more complex and "systemic" than we think.

But sure, there are a lot of epistemological traps, pseudoscience, bullshit, and honest dead ends in the process.


It's absolutely possible to argue persuasively for high level laws without having a full reductionist mechanics. For example, Darwin's arguments for evolution by natural selection were strong even though the physical/chemical basis of heritability was totally unknown in his time. He didn't even know Mendel's laws. Fisher also developed his models of evolutionary dynamics without knowledge of the underlying mechanics of heritability, and his models still stand. In fact, I'd argue that a big part of what makes a good scientific theorist is the ability to formulate and test high-level laws without complete knowledge of lower-level mechanics.


... "natural selection" IS the mechanics. The reason it's convincing is you can imagine what would arise, systematically, from imperfect reproduction under forces of selection, NOT because it's an attractive theory you saw in the tea leaves of the complexity of the world.


It's also telling that a historian would discredit an idea by referring to "phlogiston", which is only the go-to cliche for "haha, ridiculous premodern scientific bad idea" because the name sounds old-timey and obsolete.


In your view, what is the not-wrong essence of phlogiston theory?


That the energy output of a flame is regulated by the oxygen flux. Phlogiston theory posits a phlogiston flux that is just the oxygen flux with the sign reversed, and consequently, when they measured the mass of phlogiston, they got a negative value.


A lack of understanding is like dirty glasses: everything blurs together and it becomes impossible to distinguish between right and wrong ideas.


I don't agree that the essence is not wrong, but I can imagine a decent argument for that position predicated on considering Phlogiston as chemical energy. Some substances can easily release energy through oxidation, others not so much.


The problem with this way of thinking is, fundamentally, you don't determine what is self-contained.

(What you can do is remove unnecessary coupling, but you're doing that anyway...)


So I'm a little late here, but I'm interested in you elaborating on "...fundamentally, you don't determine what is self-contained."


I think to truly be an event-driven architecture you need to go a step or two further and be data-driven.

In other words, the appropriate way to describe your system would not be (subscribable) relationships between a set of components that describe your presumptive view of a division of responsibilities. (This is the non-event driven way of doing things, but with the arrows reversed.)

Instead, you track external input types, put them into a particular stream of events, transform those events to database updates or more events, etc. Your entire system is this graph of event streams and transformations.

These streams may cut across what you thought were the different responsibilities, and you will have either saved yourself headaches or removed a fatal flaw in your design.

If you're thinking about doing work in this area, don't just reverse the arrows in your component design!
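A compressed sketch of the shape I mean, with event types and names invented purely for illustration. The system is declared as streams of events plus transformations over them, and whatever "modules" exist fall out of the transformations that get shared:

    import java.util.List;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    // An external input type and a derived event type.
    record PaymentReceived(String accountId, long cents) {}
    record BalanceUpdated(String accountId, long newBalanceCents) {}

    public class Pipeline {
        // One transformation: input events -> derived events (plus a state update).
        static Function<PaymentReceived, BalanceUpdated> applyPayment(
                java.util.Map<String, Long> balances) {
            return p -> new BalanceUpdated(p.accountId(),
                    balances.merge(p.accountId(), p.cents(), Long::sum));
        }

        public static void main(String[] args) {
            var balances = new java.util.HashMap<String, Long>();
            List<PaymentReceived> inputs = List.of(
                    new PaymentReceived("a1", 500), new PaymentReceived("a1", 250));
            List<BalanceUpdated> updates = inputs.stream()
                    .map(applyPayment(balances))
                    .collect(Collectors.toList());
            updates.forEach(System.out::println);   // the derived event stream
        }
    }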


I'm really interested to understand your comment better.

Can you give an example for "presumptive view of a division of responsibilities" and generally the whole comment? Something like "bad way" vs "good way"? Thanks!


I'm currently reading this book, and it's clarified a lot for me with regard to structuring events https://www.manning.com/books/the-tao-of-microservices


It's abstract, but I'll try to get something down.

First, look at what happens to the system from the outside, say a web request that leads to a web response. In between, information is gathered from other areas (databases, program logic) and combined with the request data. There are also possibly other effects generated (writes to database state, messages to other users, etc.).

Now take all of those “effects”--the web response, but also the database updates, logs, messages, etc.--and look at each of them as a tree (going left to right, with the root, the result, on the right) where different kinds of information were combined and transformations were performed in order to get the result.

We’re being conceptual here, so imagine we’re not simplifying or squashing things together--the tree can be big and complicated. Also temporarily ignore any ideas you may have that there’s a difference between information coming from the “user” area versus the “admin” area versus the “domain object #1” area. In this world, those stores of information only exist to the extent they enable the flow that produces our results.

Now notice that there are many different requests and many different effects and responses. Thankfully, some number of the inputs are shared and reusable. Further, entire spans of nodes are in common (an event type) or entire subtrees are in common (a subsystem). These are your data streams and your modules. You didn’t add them in because you felt like there had to be a “user service” or an “object #1 service”--those commonalities factored out (to the extent they did) of the requirements of the data flows.

Often, there isn’t an “object #1” at all--that was a presumption used to put stakes down so you had somewhere to start. And our systems that are made of up of things like “object #1 service” and “object #2 service” very frequently end up with problems of the form: “we can’t do that because object #1s don’t know about [aspect of object #2s]! Everyone knows that! We need a whole new sub-system!”. In the data-driven world the question is always the same: what data do you need to combine in order to get your result?

This isn’t to say all modules we usually come up with will turn out to be false ones (especially since a lot of the time we’re basing our architectures on past experience). For instance, that there is some kind of “user” management system is probably made inevitable by the common paths user-related data take to enter the system.

Now for the reverse argument: imagine you have a system that was done with the sort of modeling where there is an “object #1 service” that must get info from the “user service” and work with the “object #2 service” through the “object set mediator service”. You’re tracing through all the code that goes into formulating a response to requests, from start to finish, but someone has played a trick on you: they’ve put one of those censoring black bars over deployment artifacts, package names, and class names. The punchline is that your architecture inevitably is one of the trees described above--it’s just a question of how badly things are distorted because someone presumed the system comes from the behavior of “object #1”s and “object #2”s and not the other way around.
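Or, boiled down to code, with everything invented for illustration: the response is the root of a tree of combinations, and each argument below is the root of its own subtree (a lookup, a join, another transformation). Whatever subtrees turn out to be shared across many responses are your real modules:

    record Request(String userId, String itemId) {}
    record UserPrefs(boolean wantsEmailReceipt) {}
    record ItemInfo(String title, long priceCents) {}
    record Response(String body) {}

    class Handler {
        // The result is a function of the data it actually needs, nothing else.
        static Response render(Request req, UserPrefs prefs, ItemInfo item) {
            String note = prefs.wantsEmailReceipt() ? " (receipt emailed)" : "";
            return new Response("Hi " + req.userId() + ": " + item.title()
                    + " for " + item.priceCents() + " cents" + note);
        }
    }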


It is the same as arguing whether lambda calculus is better than pi-calculus or a Turing machine.

These are all isomorphic structures. None of them can do more than the others.

For example - you're speaking of dependencies, etc. - but any language based on statements can be reduced to a dependency graph defined by its single-assignment form.
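To make the single-assignment point concrete (a toy example, any paradigm): rename each assignment so every value is bound exactly once, and the dependency graph is just sitting there, which is the same information a dataflow or event description would give you.

    class SsaDemo {
        static int compute(int a, int b, int c) {
            // Statement form would be: x = a + b; x = x * 2; y = x + c;
            final int x1 = a + b;     // x1 <- {a, b}
            final int x2 = x1 * 2;    // x2 <- {x1}
            final int y1 = x2 + c;    // y1 <- {x2, c}
            return y1;
        }
    }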

Event sourcing is not a panacea.

