Please don't post random LLM slop on HN, there's more than enough of it on the internet as is. The value of HN is the human discussion. Everyone here is capable of using an LLM if they so desire.
1. About 25% of their services revenue comes from App Store commissions, and roughly another 25% is Google paying them to be the default search engine. Other services include things like insurance (AppleCare). That's not exactly the kind of services most people would be thinking of.
2. A lot of their services are less critical (and that's not a ding at them - it's often a very explicit design choice).
3. The App Store having hiccups or iCloud backups being delayed isn't something that will usually gather much media attention.
You might be amazed to know how critical Services are to the functioning of Apple devices. While the devices can mostly run offline, there are dozens and dozens of services Apple runs that the modern ecosystem requires (certificate-related infrastructure, for example). Other oddball things around iCloud, APNS, and private services like iCloud Private Relay are all extremely critical to billions of devices. Thankfully they almost all fail open (captive portal handling is particularly tricky). I'm not saying they are as critical or visible as, say, Google.com going down, but there would nonetheless be a very large and visible problem if they all went down suddenly. Thankfully, due to Apple's design philosophy, most are totally decentralized and teams are given almost complete autonomy over how services are run. That makes them a huge confusing mess, but it's also kind of a feature: Apple generally expects them all to fail in odd ways, and the software can generally handle it.
A small nitpick that doesn't take away from the rest of your comment: staying alive and fed was not necessarily a laborious activity for hunter-gatherers living in good climates [0]. It's our expansion into less hospitable environments that made it so.
> Woodburn offers this “very rough approximation” of subsistence-labor requirements: “Over the year as a whole, probably an average of less than two hours a day is spent obtaining food.”
> Reports on hunters and gatherers of the ethnological present--specifically on those in marginal environments--suggest a mean of three to five hours per adult worker per day in food production.
The "original affluent society" theory is based on several false premises and is fundamentally outdated, but people keep it alive because it fits certain Rousseauean assumptions we have. I recommend reading this:
I just read the 'original affluent society' and (most of) your linked essay, and I kind of agree with you. That said, Kaplan's conclusions lead to estimates of 35-60 hours a week (with some exclusions depending on the group), and that surprised me a lot. That's very different from the image I got from some other comments in this thread talking about extremely long days of constant back-breaking work. Would you agree?
Constant, backbreaking work was not a feature of hunter-gatherer societies in the way it was of early agricultural societies, yes. At the same time, they still worked hours equal to or longer than ours, at things we would likely consider quite grueling and boring (mostly food processing), and what they got out of it was a level of nutrition even they regularly considered inadequate. Moreover, a lot of the reason the average daily work estimate is so low, as the paper covers briefly, is that there were very often times when food simply wasn't accessible (especially during the winter) or when it was so hot it was dangerous to work (during the summer). That was enforced idleness, which is not the same thing as leisure.
It's a detailed, complicated anthropological argument made by an expert, and a very well-written one at that. I could attempt to lay out the argument myself, but ultimately everyone would be better served by just... reading the primary source, because I doubt I could do it sufficient justice. I recommend you actually just do the reading. But a general TLDR of the points made is:
- the estimates of how much time hunter-gatherers spent "working" were based on studies that either (a) watched hunter-gatherers in extremely atypical situations (no children, tiny band, few weeks during the most plentiful time of the year, and they were cajoled into traditional living from their usual mission-based lifestyle) or (b) didn't count all the work processing the food so it could even be cooked as time spent providing for subsistence, and when those hours are included, it's 35-60 hours a week of work even including times of enforced idleness pulling down the average
- the time estimates also counted enforced idleness from heat making it dangerous to work, or from lack of availability of food, or from diminishing returns, or from various "egalitarian" cultural cul de sacs, as "leisure" but at the same time...
- ... even the hunter-gatherers themselves considered their diet insufficiently nutritious and often complained of being underfed, to say nothing of the objective metrics showing that they were
The anthropological research that came up with 2-3 hours of work per day only looked at time spent away from camp gathering, hunting, and fishing. When you account for food processing, cooking, water collection, firewood gathering, tool making, shelter maintenance, and textile production the numbers go way up.
Yes, pretty much this. If they had worked 12 hours per day in the fields, as in a Victorian industrial setting, they would have perished from exposure: there would have been no time for the obligatory work around the house, or for processing the food and the materials used to make food. Basically, peasants worked all the time just to maintain a level of "comfort" like in the article's picture: https://i0.wp.com/juliawise.net/wp-content/uploads/2025/12/S...
Also, idealization of rural life, past and present, tends to come almost exclusively from city dwellers - people who never set foot in a rural area, let alone grew up or lived there.
I grew up in rural Romania, and even though the conditions were (and are) exponentially better than what the non-industrial, non-mechanized, non-chemical (no herbicides, pesticides, or fertilizers) past offered, all I thought growing up was: get the funk out of here. Agriculture (and its relative, animal husbandry) sucks and I hate it! :)
And without mechanization it's incredibly labor intensive to tend a farm. Just to keep the animals alive over winter you have to dry and store a lot of hay, but before that you gotta scythe it. Scything is no walk in the park, and you basically gotta do a lot of it every day to cover enough area to keep the cattle fed. Then plowing without a tractor, using animals: not just dangerous but backbreaking work. Then hoeing the weeds, which you funking need to do all the time, because without herbicides the weeds grow everywhere, and by the time you've "finished" going once over all the crops, they've grown back where you first started. At some point my father had this fantasy of what is now called "organic" crops (in fact cheapskating on the price of herbicides), so I did so much hoeing that I was sick to death of it. I don't recall saying it, but my mother told me that at some point, in the middle of a potato-hoeing session, I declared I'd rather solve 1000 math problems than do even just one more row of potatoes. A defining moment in my career choice, which is a lot closer to solving math problems now than to hoeing organic potatoes :)
Not necessarily back, but to the right environments. As quoted above, we see the same today in isolated tribes that live off of hunting and foraging. All of this also doesn't account for the lack of all other modern convenience such as medicine, hygiene, etc. So it isn't about chill and romantic, but rather the time commitment specifically.
Without modern entertainment devices, or even books, what else are they going to do? Some "work" could have a lot of crossover with hobby. Some people enjoy cooking, making tools, spending time with kids, etc. They need to do something to pass the time. The work also has a clear purpose: making a tool to solve a problem right in front of you feels different from performing a seemingly arbitrary task every day because a boss says so.
The Bush People, previously called the Pygmies, are modern humans who eat the diet of earlier hominids and are stunted by the caloric deficit. The only thing they plant is hemp, which doesn't scale to actual agriculture.
There is a simple solution to this problem, but it's not very popular: do the same thing Workers do, require using a separate file. All the tooling works out of the box, you have no issues with lexical scoping, etc. The only downside is it's (currently) clunky to work with, but that can be fixed with better interfaces.
UDP gives you practically no guarantees about anything. Forget exactly once processing, UDP doesn't even give you any kind of guarantees about delivery to begin with, whether delivery will happen at all, order of delivery, lack of duplicates, etc, nothing. These things are so far from comparable that this idea makes no sense even after trying real hard to steelman it.
UDP plus incrementing sequence numbers means the client can detect a gap and request a snapshot to be re-sent. This mechanism is used in financial exchanges and works amazingly well.
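A minimal sketch of that sequence-number scheme (all names here are hypothetical, and the packet stream is simulated in plain Python rather than real sockets): the receiver tracks the next expected sequence number and, on detecting a gap, falls back to requesting a snapshot instead of chasing individual retransmits.

```python
# Sketch of gap detection on a sequence-numbered UDP-style feed.
# Packets may arrive out of order, duplicated, or not at all; the
# receiver detects gaps and recovers via a snapshot.

class FeedReceiver:
    def __init__(self):
        self.expected_seq = 1      # next sequence number we want
        self.applied = []          # updates applied, in order
        self.needs_snapshot = False

    def on_packet(self, seq, payload):
        if seq < self.expected_seq:
            return                 # duplicate or stale packet: drop it
        if seq > self.expected_seq:
            # Gap: we missed at least one update. Rather than chase
            # individual retransmits, request a full snapshot.
            self.needs_snapshot = True
            return
        self.applied.append(payload)
        self.expected_seq += 1

    def on_snapshot(self, last_seq, state):
        # A snapshot replaces local state and resets the cursor.
        self.applied = list(state)
        self.expected_seq = last_seq + 1
        self.needs_snapshot = False

rx = FeedReceiver()
rx.on_packet(1, "a")
rx.on_packet(3, "c")               # packet 2 lost -> gap detected
assert rx.needs_snapshot
rx.on_snapshot(3, ["a", "b", "c"]) # recover via snapshot
rx.on_packet(4, "d")               # resume the incremental feed
assert rx.applied == ["a", "b", "c", "d"]
```

The same structure underlies real market-data feeds, though production versions add retransmit channels and per-session sequencing.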
This illustrates that the webdevs who write articles on "distributed systems" don't really understand what's already out there. These are all solved problems.
You are 100% correct. UDP can be used to solve this problem; in fact, UDP can be used to solve any (software) networking problem, because it's kind of what networking is.
The thing that webdevs want to solve is related but different, and whether the forest is missed for the trees is sometimes hard to tell.
What webdevs want to solve is data replication in a distributed system of transactions where availability is guaranteed, performance is evaluated horizontally, change is frequent and easy, barrier to entry is low, tooling is widely available, tech is heterogeneous, and the domain is complex relational objects.
Those requirements give you a different set of tradeoffs vs financial exchanges, which despite having their own enormous challenges, certainly have different goals to the above.
So does that mean this article is a good solution to the problem? I'm not sure. It's sometimes hard to tell whether all the distributed aircastles invented for webdev really pay off versus just having a tightly integrated low-level solution. But regardless of the hypothetical optimum, it's hard to argue against the proposed solution being a better fit for webdev culture than UDP, and fit with the culture is unfortunately something very important to take into account if you want to get stuff done.
> in a distributed system of transactions where availability is guaranteed, performance is evaluated horizontally, change is frequent and easy,
Isn't that the situation inside a CPU across its multiple cores? Data is replicated (into caches) in a distributed system of transactions, because each core interacts with its own L2 cache, and changes have to be sent back to main memory for consistency. Works amazingly well.
An even more complex system: a multi-CPU motherboard with NUMA access, where two CPUs coordinate their cores and ship RAM contents over from the other CPU. I have one of these "distributed systems" at home; it works amazingly well.
Indeed, again you are right. I've gone through the same motions as you trying to understand why the webdev people make this so complicated.
For your specific question here: NUMA & cpu cores don't suffer from the P in CAP: network partitions. If one of your CPU cores randomly stops responding, your system crashes, and that's fine because it never happens. If one of your web servers stops responding, which may happen for very common reasons and so is something you should absolutely design for, your system should keep working because otherwise you cannot build a reliable system out of many disconnected components (and I do mean many).
Also note that there is no way to really check if systems are available, only that you cannot reach them, which is significantly different.
And then we've not even reached the point that communication on a CPU die is extremely fast, whereas in a datacenter you're talking milliseconds; and if you're syncing with a different system across data centers, or even with clients, the story becomes wildly different.
I don't think I'm qualified to answer the question, and I also think it depends on terminology where maybe 'core' is the wrong thing to say, but regardless: my general point is that the assumptions that hold for CPUs don't hold for webservices, and that's where the design ethos between them splits.
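The point above about not being able to check availability, only reachability, can be sketched in a few lines (plain Python, with the lost reply simulated by a flag rather than a real network): a timeout tells the client only that it didn't hear back, not that the server didn't act.

```python
# A timeout means "no reply", not "nothing happened". Here the server
# applies the write but the ack is lost, so the client's conclusion
# ("unknown") and the server's actual state legitimately diverge.

class Server:
    def __init__(self):
        self.data = {}

    def handle(self, key, value, reply_lost=False):
        self.data[key] = value       # the effect happens regardless
        if reply_lost:
            raise TimeoutError       # the ack never reaches the client
        return "ok"

def client_write(server, key, value, reply_lost=False):
    try:
        return server.handle(key, value, reply_lost)
    except TimeoutError:
        return "unknown"             # not "failed": we simply can't tell

srv = Server()
assert client_write(srv, "x", 1) == "ok"
assert client_write(srv, "y", 2, reply_lost=True) == "unknown"
assert srv.data["y"] == 2            # the write happened anyway
```

This is why "cannot reach" has to be treated as "outcome unknown" rather than "did not happen"; retrying blindly on timeout is exactly how duplicate writes sneak into a system.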
I'd tell you a joke about UDP, but you might not get it.
More seriously, you are confident and very incorrect in your understanding of distributed systems. The easiest lift: you can stop being very incorrect (or at least appearing that way) by simply rephrasing your statements as questions.
Personally, I recommend studying. Start with the two generals problem. Read Designing Data-Intensive Applications; it is a great intro to real problems and real solutions. Very smart and very experienced people think there is something to distributed systems. They might be on to something.
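The two generals problem mentioned above can be sketched as a toy model (not a proof): Alice and Bob alternate messages and acks over a lossy channel, and whoever sent the last message can never distinguish "it arrived" from "it was lost", no matter how many acks are stacked on top.

```python
# Toy model of the two generals problem. Odd-numbered messages go
# Alice -> Bob, even-numbered ones (the acks) go Bob -> Alice. A
# party's "view" is simply how many messages it has received.

def views_after(n, lost_last):
    """Views of (alice, bob) after n messages were sent, with the
    last one optionally lost in transit."""
    delivered = n - 1 if lost_last else n
    bob_view = (delivered + 1) // 2    # odd-numbered messages reach Bob
    alice_view = delivered // 2        # even-numbered ones reach Alice
    return alice_view, bob_view

for n in range(1, 100):
    a_lost, b_lost = views_after(n, lost_last=True)
    a_ok, b_ok = views_after(n, lost_last=False)
    if n % 2 == 1:                     # Alice sent the last message
        assert a_lost == a_ok          # her view is identical either way,
        assert b_lost != b_ok          # yet Bob's state depends on it
    else:                              # Bob sent the last message (an ack)
        assert b_lost == b_ok          # same asymmetry, roles swapped
        assert a_lost != a_ok
```

However large n gets, the sender of the final message cannot tell the two runs apart, so adding one more ack never closes the gap; that asymmetry is the core of the impossibility result.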
I'm not sure which endpoint gp meant, but as I understood it, as an example, imagine a three-way handshake that's only available to enterprise users. Instead of failing a regular user on the first step, they allow steps one and two, but then do the check on step three and fail there.
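For illustration, a toy version of that idea (everything here, the tiers and step names included, is made up): the entitlement check is deferred to the final step, so a regular user completes steps one and two and only fails at step three.

```python
# Hypothetical three-step handshake where the enterprise entitlement
# check is deferred to the last step, as described above.

def handshake(user_tier):
    steps_completed = []
    steps_completed.append("hello")           # step 1: no entitlement check
    steps_completed.append("key-exchange")    # step 2: still no check
    if user_tier != "enterprise":             # step 3: the actual gate
        return steps_completed, "denied"
    steps_completed.append("session-open")
    return steps_completed, "ok"

steps, result = handshake("free")
assert steps == ["hello", "key-exchange"] and result == "denied"
steps, result = handshake("enterprise")
assert steps[-1] == "session-open" and result == "ok"
```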
It tracks with my experience in software quality engineering. Asked to find problems with something already working well in the field. Dutifully find bugs/etc. Get told that it's working though so nobody will change anything. In dysfunctional companies, which is probably most of them, quality engineering exists to cover asses, not to actually guide development.
It is not dysfunctional to ignore unreachable "bugs". A memory leak on a missile which won't be reached because it will explode long before that amount of time has passed is not a bug.
It's a debt though. Because people will forget it's there and then at some point someone changes a counter from milliseconds to microseconds and then the issue happens 1000 times sooner.
It's never right to leave structural issues even if "they don't happen under normal conditions".
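The "happens 1000 times sooner" arithmetic, as a back-of-the-envelope sketch for an unsigned 32-bit tick counter:

```python
# When does an unsigned 32-bit tick counter wrap? Changing the tick
# unit from milliseconds to microseconds moves the wrap 1000x closer,
# turning a ~50-day horizon into roughly an hour.

WRAP = 2**32  # number of ticks before the counter overflows

ms_wrap_days = WRAP / 1000 / 3600 / 24       # ticks of 1 ms each
us_wrap_minutes = WRAP / 1_000_000 / 60      # ticks of 1 us each

assert round(ms_wrap_days, 1) == 49.7        # ~49.7 days
assert round(us_wrap_minutes, 1) == 71.6     # ~71.6 minutes
```

So an overflow that was comfortably unreachable at millisecond resolution becomes reachable within a single long soak test, or mission, at microsecond resolution.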
In hard real-time software, you have a performance budget otherwise the missile fails.
It might be more maintainable to have leaks instead of elaborate destruction routines, because then you only have to consider the costs of allocations.
Java has a null garbage collector (Epsilon GC) for the same reason. If your financial application really needs good performance at any cost and you don't want to rewrite it, you can throw money at the problem to make it go away.
I don't think this argument makes sense. You wouldn't provision a 100GB server for a service where 1GB would do, just in case unexpected conditions come up. If the requirements change, then the setup can change; doing it "just because" is wasteful. "What if we forget" is not a valid argument for over-engineering and over-provisioning.
If a fix is relatively low cost and improves the software in a way that makes it easier to modify in the future, it makes it easier to change the requirements. In aggregate these pay off.
If a missile passes the long hurdles and hoops built into modern Defence T&E procurement it will only ever be considered out of spec once it fails.
For a good portion of platforms they will go into service, be used for a decade or longer, and not once will the design be modified before going end of life and replaced.
If you wanted to progressively iterate or improve on these platforms, then yes continual updates and investing in the eradication of tech debt is well worth the cost.
If you're strapping explosives attached to a rocket engine to your vehicle and pointing it at someone, there is merit in knowing it will behave exactly the same way it has done the past 1000 times.
Neither ethos in modifying a system is necessarily wrong, but you do have to choose which you're going with, and what the merits and drawbacks of that are.
Again, when you're building a missile nobody should "forget" a detail.
You have very clearly in the specification, "this missile SHALL not have a run time before reboot of greater than 36 hours ref. donut_count.c:423 integer counter overflows"
Seriously, there's a military standard for pop tarts and they'd get rejected if they had out of spec amounts of frosting on top. It is not the software world you live in.
It's not that they don't ever make mistakes, just an extraordinary amount of effort is put into not making mistakes and oftentimes things are done "wrong" on purpose because of tradeoffs ordinary silicon valley software engineers have no context about.
The way it always seemed to go for me, when I was in that role, is the product is already complete, development is done, you're handed all the tests/etc that the disinterested developers care to give you, and you're told to make those tests presentable and robust, and increase test coverage. The process of doing that inevitably uncovers issues, but nobody cares because the thing is already done and working, so what was the point of any of it? The point was just to check off a box. At companies like this, the role is bullshit work.
There are many contexts where this comment would apply, but border crossing is not one of them. If you're a foreigner trying to enter another country, then by definition you have fewer rights than natives.