There is one exception to the foreigner comment, though. Blonde/white foreigners usually have an advantage, even over native Japanese people, since the Eurocentric beauty standard is very much in play there. You can see evidence of this in the modeling industry, where high-paying modeling gigs go to those who are Caucasian or half Caucasian.
That’s true in certain industries like modeling, but not in others, even if you’re a white, native-born Japanese person. Often, these people struggle to ever be fully accepted into society. They’re seen as outsiders and, at best, regarded as “cool” in an exotic way.
(Depending on your definition of compression, my monitor shows more colors than there are particles in the Universe at 1 billion frames per second. It's just that there's a little bit of quality loss from the compression.)
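(For scale, under one reading of that quip: a rough back-of-the-envelope in Python, assuming a 24-bit 1080p panel and counting every distinct full-screen image the panel can show. The resolution and bit depth here are assumptions for the sketch, not anything the comment above states.)

    # Back-of-the-envelope: distinct images a 24-bit 1080p panel can show,
    # versus the ~10^80 particles estimated in the observable Universe.
    # Panel resolution and bit depth are assumptions for this sketch.
    from math import log10

    pixels = 1920 * 1080             # 1080p panel
    colors_per_pixel = 2 ** 24       # 24-bit color
    log10_frames = pixels * log10(colors_per_pixel)
    print(f"distinct frames ~ 10^{log10_frames:,.0f}")   # roughly 10^15,000,000
    print("particles in the observable Universe ~ 10^80")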
I also like the N-1 strategy for things that are released often. For things that are released every 3+ years, though, I prefer going with the most recent release.
Intel has moved to DisplayPort 2.0, but their current product specifications, e.g. for NUC12SNK, say: "DisplayPort 2.0 (1.4 certified)".
But I assume that they will get a certification in the future.
For Thunderbolt, where Intel itself did the certifications, many companies sold motherboards or computers for months or years with non-certified Thunderbolt ports that worked fine while they waited for certification.
Regarding the Process–architecture–optimization model from Intel, what does each of those upgrades mean to the user?
I believe Process has the most to do with energy efficiency, due to the die shrink, which is why the M1 was so energy efficient when it was launched.
The M2 seems like an architecture/optimization change; it seems like they're just able to cram in more stuff, which is why it's faster without any increase in battery life.
For those familiar with Intel, what does the consumer mainly gain out of optimization product launches?
> I believe Process has the most to do with energy efficiency, due to the die shrink, which is why the M1 was so energy efficient when it was launched.
Eh, that's part of it, but a lot of it has to do with the M1 having very high IPC (instructions per clock, much higher than comparable x86-64 parts), meaning they could run the chip at a much lower max clock speed (3.2 GHz, versus boosting to 5 GHz for most competitive x86-64 CPUs) for similar overall performance.
This makes a huge difference because power consumption increases exponentially with clock speed.
edit: Thinking about how this relates to Intel's process–architecture–optimization model, it feels a bit tricky to compare. Apple's cadence seems to be something like: new architecture (the A-series chip for iPhones), then an optimization with a new die that reuses the basic cores in a different arrangement targeting performance in a small thermal envelope (the M1, for the iPad Pro and small MacBooks), then another optimization with yet another new die reusing the basic cores in yet another arrangement targeting performance in a wider thermal envelope (M1 Pro/Max/Ultra). All of that happens before you get to the next M-series increment, which begins with an A-series increment.
So the M2 is less an optimization of the M1 than it is a reuse of the cores from the new A-series chip preceding it, which was an architectural change, plus optimizations and other SoC differences.
I said "power consumption increases exponentially with clock speed" not "power consumption increases exponentially with f". That any modern processor has to crank up V to crank up its clock speed is a given.
Hence, power consumption is exponential with clock speed (which is achieved by cranking up both V - the polynomial term in P – and f).
(Which, fine, if you want to be pedantic, is polynomial growth, but that changes nothing about the point, which is that you have to burn a shitload more power at 5 GHz than at 3.2 GHz, because your consumption isn't scaling anything close to linear).
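(A rough numerical sketch of that point, assuming the usual dynamic-power relation P ≈ C·V²·f and that voltage has to rise roughly in proportion to clock speed, so power scales roughly with the cube of the clock; the exact exponent varies by chip and operating point. The relative_power helper below is just for illustration.)

    # Sketch: dynamic power P ~ C * V^2 * f, and V must rise roughly with f
    # to reach higher clocks, so P scales roughly with f^3 (an approximation;
    # real voltage/frequency curves differ per chip).
    def relative_power(f_ghz, f_base_ghz=3.2):
        """Power at f_ghz relative to power at f_base_ghz, cube-law estimate."""
        return (f_ghz / f_base_ghz) ** 3

    print(f"5.0 GHz vs 3.2 GHz: ~{relative_power(5.0):.1f}x the power")
    # -> ~3.8x the power for roughly similar performance, if the slower
    #    chip makes up the clock gap with ~1.56x the IPC, as claimed above.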
Waiting is not bad; most child abuse cases come from people who shouldn't have had kids in the first place. The worst case if you wait is that you'll have to adopt kids, which is arguably way more humane than introducing another hungry soul into the world.
It's not that we have too many mouths to feed, it's that we have too many selfish people in positions of leadership and authority who exploit the masses to line their own pockets.
Having 50% fewer people would reduce the overall hunger numbers, but not the ratio.
Yes, pollution and climate change affect everyone, and I am saying that is entirely unfair, because many people did not choose these things, do not contribute to them, and do not benefit from them. The loss of trees in Europe is due to global effects, not just pollution in or by Europe.
Sorry, there was a typo in my comment (very -> were). What I wanted to say is that the situation in Europe is actually better in most places now than it was in the past.
With respect to food poverty, the increase in logistics/technology/marketing productivity may well offset some (perhaps all) of the resource pressure that a growing population creates.
Generally speaking, logistics have huge economies of scale (see Amazon), so an increase in market size will reduce the logistics cost per item of food. Especially for cold, perishable foodstuffs like dairy, the cold-chain costs may well dominate the cost of the cows themselves.
IIRC the ratio of people at risk of famine has been going down steadily while the population increased, until Covid (I assume it should get back on track eventually, though).
Why is it great for remote work? The land is already developed, and the undeveloped areas get the sweltering sun of SoCal without the cooling ocean breeze to balance it out. The "good climate" only lasts for a mile or so from shore.