> This is false. The EU has put up more money than the US but they have not _donated_ more money than the US.
That's not quite right, either. A large portion, if not a majority, of the financial aid from the US is--and is tacitly required to be--spent on purchasing weapons and services from US defense contractors. That's in addition to direct military aid, where the Pentagon itself purchases and transfers weapons or transfers old stock--I think the replacement cost is what's counted as the US "contribution" for that portion.
This is how military aid packages are structured for Israel and Egypt also. I don't mean to insinuate anything negative, but the reality is that the majority of all this aid is effectively a direct subsidy to US defense contractors.
The EU does the same thing, but the structure and pretense are different--loans tacitly required to be used to buy EU weapons and services, with the loans forgiven after the public stops paying attention. Though in the case of Ukraine I think a much larger portion (relative to the US) of the aid is intended for civilian programs, at least early on, on account of the EU's squeamishness.
> You can't therapy your way out of epigenetic changes.
Isn't that exactly what the study purports to show? Why would psychological trauma (i.e. negative nurturing) affect epigenetics but not psychological healing (i.e. positive nurturing)? What we consider "trauma" by modern standards would have been experienced by a huge fraction of every prior generation, and over the course of generations by the entire population countless times over via epigenetic inheritance. Either it works both ways, even if unevenly, or it's BS (the framing if not the science). Ideological motivations aside, it just happens to be much easier to identify and track discrete incidences of severe trauma.
Yes, you can cushion people from the effect of trauma as it's happening, by providing resources at or soon after the point of trauma.
But your epigenome is a record (almost exclusively) of stuff that was observed during your development — it's a log, not a state database.
A particular epigenome-affecting gene process would look like: "when I was -6 months old, a process kicked off that began expressing a gene. This gene caused the expression of a protein that acted to detect whether there was enough magnesium in my blood or not. It detected that there was enough — and so the protein recorded [by not methylating anything] that I shouldn't silence any of the copies of the magnesium-transporter in my intestinal-lining tissue cells. Then, the buildup of the protein caused the expression of another gene, which methylated [silenced] the first gene, permanently deactivating this magnesium detection step from happening again."
Which means that trauma during early development — if it is not cushioned at the time, such that its impact is felt fully* — will lead to any epigenome-affecting process that looks for "impact of trauma" to observe said trauma as present at the time it activates, and so to record it. That record, within the cell line descending from the cell that made the observation, is now permanent; that tissue will do whatever it does with that knowledge, from then on, for the rest of the organism's life.
Treating the trauma later will help the organism psychologically, but it will not change those markers (and whatever biological effects they have), because they are markers of "what was at a certain point," not "what is."
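To put the "log, not a state database" idea in programmer terms, here's a toy sketch (purely illustrative, no real biology--the names and the single-trigger mechanism are made up):

    # Toy model of a write-once epigenetic mark: the detector runs once
    # during development, records what it observed, then silences itself.
    class Cell:
        def __init__(self):
            self.marks = {}              # the "log": written once, never updated
            self.detector_active = True

        def develop(self, environment):
            # One-shot detection step, like the self-methylating gene above.
            if self.detector_active:
                self.marks["mg_transporter_silenced"] = not environment["magnesium_ok"]
                self.detector_active = False   # the step can never re-run

    cell = Cell()
    cell.develop({"magnesium_ok": False})   # scarcity observed during development
    cell.develop({"magnesium_ok": True})    # later abundance changes nothing
    print(cell.marks)                       # {'mg_transporter_silenced': True}

Later "writes" are no-ops because the writer deactivated itself; the record only reflects conditions at the moment it was taken.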
> Not sure why everybody reports the average everywhere.
Because that's the modern definition of life expectancy: the average (mean). Probably because the primary driver behind the production of actuarial tables has always been insurance, annuities, and similar products. For calculating costs and valuations the mean is often the better measure because the distribution of deaths isn't normal. Though in modern actuarial tables you'll often find both.
I would assume popular literature quotes expectancy figures from birth because that's the easiest thing to do when you're just trying to give a single simple figure. But actuarial tables have always been just that--tables: given a person at age X, what's their life expectancy? The first row will be from birth, but people with money on the line have always understood that number is mostly irrelevant. Here's an actuarial table from the 3rd century, built for calculating annuities: https://en.wikipedia.org/wiki/Ulpian%27s_life_table Interestingly, that table seems to use median life expectancy.
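To illustrate why the from-birth number is mostly irrelevant, here's roughly how you'd compute conditional expectancy from the survivor column of such a table (toy numbers, not a real mortality table):

    # Toy cohort table: l[x] = people (out of 100,000 born) still alive at age x.
    # Values are invented for illustration; real tables have a row per year of age.
    l = {0: 100_000, 10: 60_000, 20: 55_000, 40: 45_000, 60: 30_000, 80: 8_000, 100: 0}

    def mean_remaining(age):
        """Mean years left for someone alive at `age` (trapezoid over survivors)."""
        ages = sorted(a for a in l if a >= age)
        person_years = sum((l[a] + l[b]) / 2 * (b - a) for a, b in zip(ages, ages[1:]))
        return person_years / l[age]

    print(round(mean_remaining(0), 1))    # ~35.9 -- dragged down by early deaths
    print(round(mean_remaining(20), 1))   # ~40.2 more years, i.e. dying around 60

High childhood mortality drags the from-birth figure way down, while someone who survived to 20 could expect to live well past it.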
Actuarial science was a significant driver in the development of modern statistics and mathematics, especially as the market for financial products exploded over the past several centuries. The tables and models for life expectancy from 150 years ago were startlingly accurate, even in their forecasts of changing life expectancies. When Social Security was passed in the 1930s, the models for how life expectancies would change--both from infancy and across all the age brackets--were spot on with the actual changes up to the 1990s.
> What wasm brings to the table is that the core tech focuses on one problem: abstract sandboxed computation. The main advantage it brings is that it _doesn't_ carry all the baggage of a full fledged runtime environment with lots of implicit plumbing that touches the system.
Originally, but that's rapidly changing as people demand more performant host-application interfacing. Sophisticated interfacing + GC + multithreading means WASM could (and likely will) fall into the same trap as the JVM. For those too young to remember, Java Applet security failed not because the model was broken, but because the rich semantics and host interfacing opened the door to a parade of implementation bugs. "Memory safe" languages like Rust can't really help here, certainly not once you add JIT into the equation. There are ways to build JIT'd VMs that are amenable to correctness proofs, but it would require quite a lot of effort, and the most popular and performant VMs just aren't written with that architectural model in mind. The original premise behind WASM was to define VM semantics simple enough that that approach wouldn't be necessary to achieve correctness and security in practice, in particular while leveraging existing JavaScript VM engines.
The thing is, sophisticated interfacing, GC, and multithreading - assuming they're developed and deployed in a particular way - only come into play for use cases that actually need those things. The core compute abstraction is still there and doesn't diminish in value.
I'm personally a bit skeptical of the approach to GC that's being taken in the official spec. It's very design-heavy and tries to bring in a structured heap model. When I was originally thinking of how GC would be approached on wasm, I imagined that it would be a few small hooks to allow the wasm runtime to track rooted pointers on the heap, and then some API to extract them when the VM decides to collect. The rest can be implemented "in userspace" as it were.
But that's the nice thing about wasm. The "roots-tracker" API can be built on plain wasm just fine. Or you can write your VM to use a shadow stack and handle everything internally.
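Roughly what I have in mind, modeled in Python rather than wasm (the hook names are hypothetical, not from any spec):

    # Hypothetical "roots-tracker": the runtime only tracks rooted pointers
    # and hands them over at collection time; marking and sweeping happen
    # "in userspace", i.e. in the guest's own code.
    class RootsTracker:
        def __init__(self):
            self._roots = set()

        def root(self, ptr):        # hook: a heap slot now holds a live reference
            self._roots.add(ptr)

        def unroot(self, ptr):      # hook: the slot was cleared or overwritten
            self._roots.discard(ptr)

        def extract_roots(self):    # API the collector calls when the VM decides to collect
            return frozenset(self._roots)

    def collect(tracker, heap_edges, all_objects):
        # Plain mark-sweep built on top of the tracker, entirely "in userspace".
        live, stack = set(), list(tracker.extract_roots())
        while stack:
            obj = stack.pop()
            if obj not in live:
                live.add(obj)
                stack.extend(heap_edges.get(obj, ()))
        return all_objects - live   # unreachable objects to reclaim

    tracker = RootsTracker()
    tracker.root("a")
    print(collect(tracker, {"a": ["b"]}, {"a", "b", "c"}))   # {'c'}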
The bigger issue isn't GC, but the ability to generate and inject wasm code that links into the existing program across efficient call paths - needed for efficient JIT compilation. That's harder to expose as a simple API because it involves introducing new control flow linkages to existing code.
> Other transport layer protocol rollouts have been stymied by ossification such as MPTCP
AFAIU, Apple has flexed their muscle to improve MPTCP support on networks. I've never seen numbers, though, regarding success and usage rates. Google has published a lot of data for QUIC. It would be nice to be able to compare QUIC and MPTCP. (Maybe the data is out there?) I wouldn't presume MPTCP is less well supported by networks than QUIC. For one thing, it mostly looks like vanilla TCP to routers, including wrt NAT. And while I'd assume SCTP is definitely more problematic, it might not be as bad as we think, at least relative to QUIC and MPTCP.
I suspect the real thing holding back MPTCP is kernel support. QUIC is, for now, handled purely in userland, whereas MPTCP requires kernel support if you don't want to break application process security models (i.e., by granting raw socket access). Mature MPTCP support in the Linux kernel has only been around for a few years, and I don't know if Windows even supports it yet.
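To illustrate why kernel support matters: once it's there, the application-level change is a single line, no raw sockets needed. Something like this on Linux >= 5.6 (Python ships the constant on 3.10+; 262 is the kernel's protocol number):

    import socket

    # Ask the kernel for an MPTCP socket at creation time; subflow management
    # stays in the kernel, so no special privileges are required.
    IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
        print("MPTCP socket created")
    except OSError:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # plain TCP fallback
        print("no kernel MPTCP support; using TCP")
    s.close()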
TL;DR: Apparently only traiteurs were permitted to sell meals. Restaurants were marketed as a kind of (I guess) upscale health service, originally only selling fancy broths. One of the early restaurateurs is documented as using the advertisement, "Venite ad me omnes qui stomacho laboratis, & ego restaurabo vos" ("Come to me, all of you whose stomachs are in distress, and I will restore you", an allusion to Matthew 11:28, "Come to Me, all of you who are weary and burdened, and I will give you rest.")
> It goes against the foundation of not only US law, but couple of hundred years of international democratic tradition in which allegiance is not to a person, but to the nation itself.
The United States had a spoils system of government administration until at least the late 1800s. The spoils system was still prevalent in many state and city governments until the mid 1900s.
This didn't mean officials were permitted to violate the law, but self-dealing and bald partisanship in administration was rampant, and of course violations of the law often went unpunished as administration officials had (and have) discretion to prosecute.
Yes, on paper VAT works out better, and it's a darling of economists. In practice, VAT requires more paperwork, accounting, and interaction with the bureaucracy. The end result is that even though the U.S. has the tax pyramiding "problem", you find much more tax avoidance in Europe than in the U.S. Grey and black markets constitute a huge, double-digit fraction of the European economy, and it's what helps sustain organized crime there, even in stereotypically rule-abiding Germany. Like many things in Europe, VAT works well for large enterprises; it's quite burdensome for small businesses, and that's probably where the complaints are coming from--small and medium-sized businesses in the U.S. who find dealing with EU taxation daunting.
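To make the pyramiding concrete (invented prices and a flat 10% rate, purely for the arithmetic):

    # Toy supply chain: each tuple is (stage, input cost, sale price), pre-tax.
    stages = [("miller", 0, 40), ("baker", 40, 70), ("grocer", 70, 100)]
    rate = 0.10

    # Cascading sales tax: every sale is taxed in full, so tax paid upstream
    # is embedded in downstream prices and taxed again ("pyramiding").
    cascade = sum(price * rate for _, _, price in stages)

    # VAT: each stage pays tax only on value added (output minus input), so
    # the total equals rate * final price no matter how long the chain is.
    vat = sum((price - cost) * rate for _, cost, price in stages)

    print(f"cascading tax: {cascade:.2f}")   # 21.00 collected on a 100 final sale
    print(f"VAT:           {vat:.2f}")       # 10.00 collected on the same sale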
In school (economics, law) I had learned all about how great the VAT system is. But about 10 years ago I wanted to buy a simple ~$100 rack shelf to fit a PC Engines APU from an Italian manufacturer. I had to create an Italian tax ID, which was annoying. I recently had to use it again just to buy some tins of anchovies from Italy.[1] In both cases I received more paperwork regarding the VAT than I did import paperwork. It seems slight, but it's actually quite a lot of friction compared to just handing over X dollars and receiving your product. Dealing with tax and import crap is exactly why import/export companies exist, creating needless intermediaries that siphon value.
In that light, the "inefficient" sales tax premiums we pay in the US can be interpreted as the cost of enjoying a more decentralized taxation system that makes compliance more convenient and transactions run smoother. There's less accounting and--more importantly (because US accounting can be complex, too)--less coordination required. It's the economics version of worse is better.
[1] And just to be clear, in both cases I was purchasing through a clearly retail-oriented store website. IOW, even as an effectively retail consumer you had to provide a tax ID--the equivalent of an employer ID or social security number in the US. I don't know if this is a hard requirement for retail generally in Europe, or just the easiest way for them to deal with VAT accounting on their end when only a small portion of their business is retail.
I'm in the US so didn't have a VAT ID. Plus, I originally wasn't trying to purchase it as a business, and the businesses I was purchasing from do sell to individuals; it's just that apparently they wanted even individuals to provide a [personal] tax ID. Though, to get the Italian tax ID I believe I did have to register using an American business entity. (I can't remember, and apparently because of the strict privacy laws, AFAICT there's no online database where I can query my Italian tax ID to see what name it's registered to, or even whether it's registered to a business or an individual. It's somewhat understandable, but at least in the US the government provides an online service that can confirm whether a name matches a tax ID.)
Not directly related, but note what is mentioned in the post about the purpose of individual object files for each function. What might be considered an anachronism is being leveraged for improved security.
What's happening (IIRC) with the reordering is that all those otherwise dynamically linked binaries (sshd is a PIE binary) are being relinked from scratch (OpenBSD ships with the .o files), but with the order of .o files passed to the linker shuffled. This effectively enhances ASLR.

The way ASLR normally works is that each binary (program or shared library) is loaded at a random base address, but all the functions defined within each binary are at fixed offsets relative to each other. In stock installations libc's fopen is at a fixed offset from libc's snprintf, for example, and this will be true for every program loading libc.so.

In the above scheme, libc.so is relinked with a randomized ordering of the .o object files. The result is that the relative offsets of libc functions on your individual system will differ from every other system, and even from your previous boot, hopefully making exploits (e.g. those utilizing ROP chains) more difficult. More modern projects like libcrypto and sshd tend to group related functions into the same .o file, so all the functions within each .o will still be at fixed offsets from each other, but there are still quite a lot of individual .o files to shuffle.
Note that OpenBSD does the same thing for the kernel.
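If you want the flavor of the mechanism, a sketch (illustrative only--OpenBSD's actual relinking is a shell script run at boot, and the paths and flags below are invented):

    import glob, random, subprocess

    # Shuffle the shipped .o files so every relink produces a unique layout:
    # function-to-function offsets differ per system and per boot.
    objs = glob.glob("/usr/share/relink/sshd/*.o")   # hypothetical location
    random.shuffle(objs)

    # Same program semantics, new internal layout; ROP gadget offsets
    # computed from one machine's binary are useless on another.
    subprocess.run(["cc", "-pie", "-o", "/usr/sbin/sshd.new", *objs], check=True)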