
Intel's flagship 10th-gen desktop CPU has 10 cores, reaches 5.3GHz - redm
https://www.engadget.com/intel-10th-gen-s-series-desktop-cpu-10900k-130000913.html
======
Dunedan
This article states the following about power consumption:

> The 10900K has a 125-watt TDP, for example, while AMD's Ryzen 9 3900X's is
> just 105-watts.

My understanding from other articles (like [1]) is, however, that Intel had to
_massively_ increase the amount of power the CPU consumes when turboing under
load:

> Not only that, despite the 125 W TDP listed on the box, Intel states that
> the turbo power recommendation is 250 W – the motherboard manufacturers
> we’ve spoken to have prepared for 320-350 W from their own testing, in order
> to maintain that top turbo for as long as possible.

Somehow it feels like the Pentium 4 days all over again.

[1]: https://www.anandtech.com/show/15758/intels-10th-gen-comet-lake-desktop

~~~
magila
AMD's not really any better when it comes to fudging specs. At least Intel
CPUs actually limit their sustained power draw to the TDP when running in
their factory default configuration, even if enthusiast motherboards usually
override it.

AMD CPUs OTOH draw significantly more power than their spec'd TDP right out of
the box. For example, that "105 W" 3900X actually ships with a power limit of
142 W, and it is quite capable of sustaining that under an all-core load.

AMD's turbo numbers are also pure fantasy. If AMD rated turbo frequencies the
same way Intel does, that 3900X would have a max turbo of 4.2 or 4.3 GHz
instead of 4.6 GHz. When Intel says their CPU can hit a given turbo frequency,
they mean it will actually hit it and stay there so long as power/temperature
remain within limits. Meanwhile AMD CPUs may or may not ever hit their max
turbo frequency, and if they do it's only for a fraction of a second while not
under full load.

The outrage over Intel CPUs' power consumption is pretty silly when you
realize that the only reason AMD CPUs don't draw just as much is that their
chips would explode if you tried to pump that much power into them. If you
care about power consumption, just set your power limits as desired and Intel
CPUs will deliver perfectly reasonable power efficiency and performance.
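
On Linux, for instance, that's scriptable through the intel_rapl powercap
interface. A minimal sketch, assuming package 0 and the usual constraint
numbering (sysfs paths, numbering, and sensible limits vary by platform, and
it needs root):

```python
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # CPU package 0

def set_limit_w(constraint: int, watts: float) -> None:
    # The powercap interface expresses limits in microwatts.
    path = RAPL / f"constraint_{constraint}_power_limit_uw"
    path.write_text(str(int(watts * 1_000_000)))

print("domain:", (RAPL / "name").read_text().strip())
set_limit_w(0, 125.0)  # constraint 0: long-term limit (PL1)
set_limit_w(1, 180.0)  # constraint 1: short-term turbo limit (PL2)
```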

~~~
sq_
> AMD's turbo numbers are also pure fantasy.

I'm pretty certain that I remember watching videos from GN, LTT, or Bitwit
(can't recall which) where they noted that AMD chips were turboing up to a few
hundred MHz above the specced numbers.

~~~
magila
Those youtubers are measuring "turboing" by looking at the maximum frequency
column in HWiNFO. So they consider a CPU to have hit a frequency even if it
only touched it for a single millisecond during a minutes-long benchmark run.
For AMD CPUs the average frequency is typically significantly lower.
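
The difference is easy to see for yourself. A rough sketch (Linux-specific,
assuming the cpufreq sysfs interface; run it alongside your benchmark): a
single brief spike lifts the max while barely moving the average.

```python
import time

FREQ = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"  # reports kHz

samples = []
for _ in range(600):  # ~60 s at 10 Hz
    with open(FREQ) as f:
        samples.append(int(f.read()) / 1e6)  # kHz -> GHz
    time.sleep(0.1)

print(f"max {max(samples):.2f} GHz, avg {sum(samples)/len(samples):.2f} GHz")
```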

~~~
sq_
Maybe smaller channels, but I find that most of the bigger ones (_especially_
GamersNexus) put a ton of effort into having a good, repeatable process for
in-depth looks at performance. Certainly on par with AnandTech or any of the
other typically trusted review sites.

~~~
magila
Ok, now show me the GN video showing an AMD CPU sustaining a turbo frequency
"a few hundred MHz above the specced numbers".

------
henriquez
I'm surprised they _still_ don't have a working 10nm desktop chip. They're 5
years behind schedule (and still counting) at this point! This is just a
reskinned 9900K with 2 more cores and a much higher (>25%) TDP at 125 W, which
is a generously low estimate of how much power it will actually suck. I
briefly had their "140 watt" 7820X chip (returned for a refund) that would
gladly suck down more than 200 watts under sustained load. Intel plays such
games with their single-core turbo at this point that the 5.3 GHz means very
little, and it's the same tired architecture they've been rehashing since
Sandy Bridge (2011).

This is an incredibly poor showing and if I were an investor I would be
seriously questioning the future of Intel as a company.

~~~
superpermutat0r
I believe Jim Keller is currently at Intel, maybe they'll have some surprises
soon.

~~~
oAlbe
I've always been puzzled by this whole Jim Keller situation. Is one man
literally the only reason we have technological advancements in
microprocessors?

~~~
blihp
Obviously not the only reason, but he appears to be a helpful ingredient in
the mix with good things tending to happen with him around. No idea how much
of it is due to his technical vs. management skill, but there definitely seems
to be some there there.

~~~
patrec
In that light AMD losing him to Intel via Tesla sounds like a potential few
billion dollar blunder. I wonder how difficult/expensive it would have been to
prevent that.

~~~
jessermeyer
Jim has a long history of jumping between companies.

~~~
patrec
All the more reason to try hard to avoid him ending up at your main competitor
if you finally start getting a leg up.

~~~
adventured
You probably can't. He appears to like jumping into challenges such as those
that AMD was facing and Intel is facing now. The jumping would seem to imply
he's doing it primarily to pursue an interesting challenge. Some
personalities need that (others are obviously repelled by that risk &
challenge). Lots of big corporations could afford to pay very large sums to
retain him, if that were his primary consideration.

~~~
hanniabu
Ask HN: I'm somebody with this type of personality that likes challenges and
to bounce around, but I'm not as talented and experienced as someone like Jim.
What are some suggestions on how I can satisfy this while also avoiding the
trap of not being somewhere long enough to gain a higher level of experience?
It seems that my habits have sabotaged me, while my peers have excelled in
their companies to much higher positions and compensation.

~~~
greggyb
Feel free to email me to talk more about this. I can share trajectories for
myself and a few peers who have similar experience.

Get a role at a >decent consulting firm in the field you like. Move
aggressively into a role where you get to take lead on projects. Find
conferences in your area of interest. Speak at those. Build relationships with
any vendors that are common to your customers and area of interest. Speak
more. Get to know organizers at conferences and seek keynote and panel
discussion opportunities. Engage with everyone you can to identify problems.
Evolve your content to focus on the intersection of interesting and common
problems. Somewhere in here you can shift to a top-tier consulting firm in
your field, or to independent consulting, or jump to a vendor, or take on a
senior role at an org that would otherwise be a customer of your consulting
firm.

At this point you should have strong experience and a reputation for the same.
Leverage this to filter opportunities to those you want.

All of this is predicated on you actually being quite good at what you're
interested in. You don't have to be world class to start, but you do need to
continuously improve. You'll probably end up in the top 10% of your field.
Again, predicated on ability.

------
antongribok
So, $488 for 10x 3.7 GHz Intel vs. $409 for 12x 3.8 GHz AMD?

I'll take one AMD Ryzen 9 3900X, thanks MicroCenter!

~~~
paulmd
Base clocks are irrelevant on desktop, it's not Sandy Bridge days anymore. No
modern (desktop) processor runs at baseclock in practice.

The 10900KF will do 4.8 GHz all-core at $472; the 3900X will do around
4-4.1 GHz[0]. AMD has a slight IPC advantage in productivity, Intel has a
slight IPC advantage in gaming[1].

The market for the 10900K/KF is basically someone who wants the fastest gaming
rig, but then as a secondary priority wants more cores for productivity. Kind
of an overlap with the 3900X, sure, but the 10900K/KF will still outperform it
in gaming, where the 3900X tilts a little more towards productivity.

There are of course exceptions, CS:GO loves Zen2's cache and Zen2's latency
makes DAWs run like trash, so your specific task may vary.

I'd personally point at the 10700F as being the winner from the specs: 4.6 GHz
all-core 8C16T with slightly higher IPC in gaming than AMD, for under $300.
That's competitive with the 3700X: a little higher cost, and more heat, but
more gaming performance too. The 10900 series and 10600 series are a little
too expensive for what they offer; the lower chips are too slow to really have
an advantage.

But really it's not a good time to buy these anyway. Rocket Lake and Zen3 will
probably launch within the next 6 months, if you can stretch another 6 months
there will be discounts on Zen2 and more options for higher performance if you
want to pay.

[0] https://www.pcgamesn.com/amd/ryzen-9-3900x-overclock

[1] https://www.techspot.com/article/1876-4ghz-ryzen-3rd-gen-vs-core-i9/

~~~
fermienrico
These days you can't say anything positive about Intel even if it is factually
true. I am personally excited about the 10900K, and the competition with AMD
is bringing Intel prices down.

People seem to have a deep hatred for Intel, exactly the way they had a deep
hatred for AMD pre-Conroe (when Intel released Conroe in 2006, people were
rooting so hard for Intel that it was hard to find any positive comments about
AMD). I don't get it. Both are multi-billion dollar companies innovating like
crazy. Can't we just sit back and feel the utter awe of what it takes to make
computer chips, whether it is Intel or AMD? You know, it's engineers like
anyone else: they have good intentions and work hard to make all of this
possible, yet the fans have this egregious entitlement that is so infuriating
and dismissive.

~~~
ImaCake
The engineering is great. People's bias likely stems from the numerous
security scandals surrounding Intel's CPUs in the last few years.

~~~
azinman2
Hardly unique to Intel's CPUs... they're just the biggest target.

~~~
rowanG077
That's simply not true; the biggest target is ARM. Most of the world's devices
run ARM.

~~~
azinman2
Which also have side channel attacks... these and much more:

[1] https://www.blackhat.com/docs/eu-16/materials/eu-16-Lipp-ARMageddon-How-Your-Smartphone-CPU-Breaks-Software-Level-Security-And-Privacy-wp.pdf

[2] https://duo.com/decipher/new-side-channel-attack-extracts-private-keys-from-some-qualcomm-chips

[3] https://eprint.iacr.org/2016/980.pdf

~~~
rowanG077
Still not even close to the clusterfuck that is Intel CPUs.

~~~
azinman2
Well they’re typically not as superscalar in processing, which helps. But I
wouldn’t make an assumption that ARM chips generally are more secure — plenty
of weaknesses have been found, and doubtlessly many more exist. Security
research has shown there are always new untapped directions to exploit, and
really nothing can be assumed to be actually secure at this point. Like I said
before, Intel is the big target, especially as they run the servers you access
over the internet. ARM devices tend to be local and thus aren’t as valuable
for side channel attacks. Instead, most attacks for ARM chips tend to be at
the sandbox/OS level.

------
unnouinceput
As a desktop user I don't care about power consumption, and I care very little
that it has x% more power than last year's processors (at least when x < 100),
because current processors have enough power for anything you throw at them.
But what I really do care about is this:

-Do they still pack that PoS Intel Management Engine inside the CPU? And are these new CPUs still vulnerable to Meltdown? Because if either of those questions is answered with "yes", then no amount of cores, GHz and performance % is going to change my mind from Ryzen.
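
For what it's worth, on Linux you can at least check what the kernel reports
for the chip you're running. A quick sketch, assuming a kernel new enough
(roughly 4.15+) to expose these files:

```python
from pathlib import Path

# Each file holds "Not affected", "Vulnerable", or "Mitigation: ...".
for f in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
    print(f"{f.name:25} {f.read_text().strip()}")
```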

~~~
sdflhasjd
Well, power consumption is still kinda important on a desktop. All that power
has to go somewhere, and you need a cooler that can keep up or you'll be
losing performance to thermal throttling.

~~~
unnouinceput
Water cooling. Desktops can afford that space generously.

~~~
sdflhasjd
True, but when you're approaching 300W, you can only take heat away so fast.

~~~
unnouinceput
My current desktop water cooling setup can handle 700W under load easily.
After setting it up, my test was to actually spin everything up under 100%
load with 2 crypto miners (one for the GPU and one for the CPU, in 2014), and
I ran it for 10 hours straight. Peak temp was 60 degrees Celsius in hot July.
Why did I never continue with that? Because at the time bitcoin was 120
dollars and the electricity was 10 times more expensive. According to the
worker pool I had joined, in those 10 hours I earned 3 cents. Anyway, back to
topic: I trust my water cooling 100% (I also built it myself because I like to
build things).

------
zaptheimpaler
It feels like we are moving towards ARM becoming the primary consumer CPU,
with x86/x64 used only for niche use cases (dev, audio/video, gaming, etc.) or
servers. Can anyone working in the space confirm/deny?

~~~
greggyb
Define "moving toward". We're nowhere close to that at this point.

~~~
Skunkleton
Define "nowhere close"? Apple's cpu's are competitive with x86 when looking at
low power applications. Microsoft has multiple examples of windows running on
arm. This seems pretty close in the grand scheme of things.

~~~
wmf
Arm has zero percent market share in PCs. It would take ten years or more to
hit 50%.

~~~
freehunter
Well they said “consumer CPU”, not “PC CPU”. So the question is, do most
consumers do their computing on PC CPUs or ARM CPUs? Which then raises the
question, what does “computing” mean...

~~~
nolok
They also said "x86 for niche cases", and I don't think every single desktop
computer in existence, from office to gaming to anything else, is a niche.

~~~
wtallis
Laptops outsell desktops by something like a 2:1 ratio. If you restrict your
analysis to just _consumer_ devices and exclude office PCs purchased in bulk
by businesses, that ratio is even more skewed toward laptops. Desktops _are_ a
niche, and x86 could be relegated to niche status for consumer computing
simply by ARM making significant inroads to the laptop market without having
any uptake in the desktop market. (It's already the case that consumers tend
to own more ARM-powered devices than x86-powered devices.)

~~~
freeflight
Sounds a bit like Google trying to sell Stadia: "Nobody needs processing power
close by, let's compute everything in the cloud and access it with thin
clients!"

Which sounds workable in theory, but is unworkable for many people due to
limited Internet access speeds/volume.

And the real disadvantage then becomes obvious when the "super powerful cloud"
only renders the game at console-level visual detail with built-in control
lag.

Not just limited to gaming: video editing is becoming increasingly popular as
a hobby and a field of work, which is another use case for lots of local
processing power.

So while in terms of market size desktops might be a niche, that niche still
fulfills an important function, so I don't see it going away any time soon.

------
MageSlayer
Too bad they have only 2-channel memory controllers and the same 32/32 KB L1
cache. That means all that power is still wasted waiting for memory (Max
Memory Bandwidth 45.8 GB/s, seriously?). Not sure why people feel so excited
about those processors.
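
That 45.8 GB/s figure is just dual-channel DDR4-2933 arithmetic. A quick
back-of-the-envelope (the mixed decimal/binary unit convention at the end
appears to be how the spec-sheet number is derived):

```python
channels = 2         # Comet Lake desktop: dual-channel
mt_per_s = 2933e6    # DDR4-2933: transfers per second
bytes_per_xfer = 8   # 64-bit wide channel

bw = channels * mt_per_s * bytes_per_xfer  # bytes/second
print(f"{bw / 1e9:.1f} GB/s")              # 46.9 in decimal gigabytes
print(f"{bw / 1e6 / 1024:.1f}")            # 45.8, matching the spec sheet
```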

~~~
vbezhenar
For typical desktop workloads memory bandwidth is not that important. They
will likely release Xeon-W counterparts later with similar frequencies but
more memory channels and PCIe lanes for those who need them.

~~~
MageSlayer
> For typical desktop workloads memory bandwidth is not that important.

Perhaps I need to take that as an axiom, right? The i7-5820K was non-typical
in that regard, then.

And yes, a Xeon-W at 5.3GHz? Tell me more :)

~~~
chapplap
The i7-5820K is from the "high end desktop" line of chips, derived from Xeon
workstation chips. The modern equivalent is chips like the i9-10900X (note X
not K), which does have quad channel DDR4-2933 and 48 PCIe lanes. Clock speeds
are a bit lower though.

------
dchyrdvh
Does anybody need those 5.3 GHz at 300 watts? I believe the next big thing is
"massively parallel distributed processing": thousands or maybe even millions
of tiny processing units with their own memory and a reasonable instruction
set, running in parallel and communicating with each other over some sort of
high-bandwidth bus. It's like a datacenter on a chip. A bit like a GPU, but
bigger. I think this will take ML and various number-crunching fields to the
next level.

~~~
motoboi
Those are desktop processors. Most applications won't even use 64 cores. I
agree that this is the future, but today a faster single core will speed up
your day a lot.

------
greendave
Somewhat surprised at the lack of PCIe 4.0. Is there really no demand for more
PCI bandwidth on high-end desktops?

~~~
kllrnohj
There is, but keep in mind this is yet another refresh of Skylake, not really
a "new" CPU.

Supposedly the new socket is PCI-E 4.0 capable, which the next CPU, "Rocket
Lake", is expected to enable.

But since it's still only 16 lanes from the CPU, that means you'll likely get
PCI-E 4.0 to just a single slot. And likely not to any of the M.2 drives,
which are typically connected to the chipset instead.

------
ksec
This may be just good enough to hold off AMD from further gaining market
share on desktop. For a certain group of users who don't need a discrete GPU,
the included iGPU is a good enough solution.

One thing that bothers me a lot is the 2.5Gbps Ethernet that is supposed to
come with those new Intel motherboards (assuming MB vendors use it). Why not
5Gbps Ethernet? How much more expensive is it? It seems we still don't have a
clear path on how to move forward from 1Gbps Ethernet. Personally I would have
liked 10Gbps, but the price is still insanely expensive.

~~~
tylerhou
2.5 Gbps will be an intermediate standard for consumer Ethernet applications
because existing CAT5(e) cabling can usually carry 2.5 Gbps (but not 5 Gbps or
10 Gbps).

~~~
paulmd
CAT5e is not in-spec for 10 Gbps, but it will usually do it for short runs.
Under ten meters, it's worth a try. The official spec is 100m, and the
transceivers are pretty solid at pulling a signal out of the noise.

Personally I'm with the other guy; I think this is a dumb half-measure and we
should just move straight to 10 Gbps. Right now the single biggest cost in a
household multi-gig deployment is the switches: a switch with 4-8 10Gbase-T
ports will run you around $500, and that figure is not really any cheaper with
multi-gig. You are paying through the nose for something that is a
watered-down version of the "real standard".

Say you have two switches, one upstairs and one downstairs: are you really
going to drop $1000 on a half-baked deployment? People need to try it, figure
out if their existing cable is stable enough for their needs (maybe that
segment can run at 5gbit speeds while it's 10gbit on your machines and on the
switch), and just pull new cable where it's not.

~~~
mastax
Multigig switches are expensive because they're new. The products have NRE
expense to pay off and so do the chips inside them. There's not yet the volume
to make them really cheap. They will get cheaper.

The question is: will they get cheaper faster than 10G? Perhaps. Despite being
14 years old, 10G is only just starting to get put into reasonably priced
prosumer switches, so practically it doesn't have that much of a head start.

~~~
ksec
I have been wondering for a long time whether switches with limited total
bandwidth would help save cost. For example, you don't need 40Gbps of
switching capacity on a 4-port 10Gbps switch, or 20Gbps on a 4-port 5Gbps
switch. For consumer usage, you only really need to be able to sustain maximum
speed on 1 port, while the others work at a lower speed or the next speed
grade down.

------
jobseeker990
So how is this possible now and not 5 years ago? Are there new discoveries
that let us bring up the clock speed again?

(Will they go even higher in the future?)

~~~
hedora
They’re basically factory overclocked.

Intel has been backed into a corner by AMD, so they're pushing clock speed
over balancing against TDP and bus.

Also, the multi-year delays in the next process shrink gave them lots of time
to improve yields and micro-optimize the fabrication generation these are
running on.

------
zamadatix
It'll be interesting to see what Intel looks like in performance after they
get past their 14 nm node for HEDT.

------
nimbius
Is this 5.3GHz with, or _without_, the mitigations for Spectre, RIDL,
Meltdown, ZombieLoad, Fallout, MDS, TAA, and whatever other vulnerabilities
are inherent in Intel chips?

~~~
vasco
Clock speed is clock speed; you're thinking of operations per cycle.

~~~
greggyb
You are correct. Just a nit: it's usually referred to as "instructions per
cycle" or IPC, rather than "operations per cycle."

~~~
hermitdev
Operations per cycle also matters. Circa P4, an instruction took around 40-50
cycles to complete. Yeah, it was 5+ GHz, though.

The Core architecture brought it down to around 8 cycles to complete an op.
Clock speeds dropped, but more shit got done at lower clock speeds.

~~~
CyberDildonics
That is still IPC, which refers to instructions per clock as an average
throughput and not the instruction latency.

> Circa P4, an instruction took around 40-50 cycles to complete

Different instructions can have wildly different latencies. Even then, an
instruction taking 50 cycles sounds like double-precision division or an
80-bit floating point operation. Most operations on the P4 had a latency of
1-7 cycles, but the P4's high clocks made memory latency and branch
mispredictions a bigger issue.

Shorter instruction latencies might have been part of the overall pipeline
shortening that made the Core architecture fast, but that is an
oversimplification, and the numbers here don't apply to the vast majority of
common instructions. Caches, deep out-of-order buffers, prefetching and branch
prediction all play a part.

------
api
This is still inferior to AMD's high-end offerings in terms of
price/performance and performance/watt, but it does have one edge: single-
threaded performance. There are certain applications where that matters a lot.
Other than those, I'd pass.

------
ineedasername
Are the gigahertz wars making a comeback?

~~~
sq_
Seems to me that all Intel can do at this point with their 14nm++++ is pump
the GHz and power consumption (when actually turboing to those high clocks) up
as far as they can.

~~~
gsnedders
The power consumption isn't going up quite as quickly as we'd expect; we're
seeing a return to high frequencies partly because high-end CPUs are being
manufactured on a _vastly_ more mature process than is typically the case.

~~~
kllrnohj
> The power consumption isn't going up quite as quickly as we'd expect

Eh? From anandtech's coverage:
https://www.anandtech.com/show/15758/intels-10th-gen-comet-lake-desktop

"Users wanting the 10-core 5.3 GHz will need to purchase the new top Core
i9-10900K processor, which has a unit price of $488, and keep it under 70 ºC
to enable Intel’s new Thermal Velocity Boost. Not only that, despite the 125 W
TDP listed on the box, Intel states that the turbo power recommendation is 250
W – the motherboard manufacturers we’ve spoken to have prepared for 320-350 W
from their own testing, in order to maintain that top turbo for as long as
possible."

Power consumption is going up, and by a lot, to hit that 5.3 GHz. And it
requires a much lower temperature to do it.

~~~
blattimwind
Keeping a CPU under 70 °C while dissipating 300+ watts is going to be tough
even for high-end custom loops. Air just isn't an option for these CPUs at
all.
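
To put a rough number on it, a crude steady-state estimate (assuming ~25 °C
ambient and ignoring where in the stack the resistance actually sits):

```python
t_die_max = 70.0   # °C, Thermal Velocity Boost threshold
t_ambient = 25.0   # °C, assumed room/coolant temperature
power     = 320.0  # W, the motherboard vendors' turbo estimate

print(f"{(t_die_max - t_ambient) / power:.2f} °C/W")  # ~0.14 total budget
```

That ~0.14 °C/W has to cover the whole path from die to air, and a good chunk
of it is consumed inside the package (die, solder, IHS) before the cooler even
sees the heat.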

------
gok
I'm kind of surprised at this point that Intel didn't just bite the bullet
and go with TSMC or Samsung for fabrication for a year or two until they
figured out their in-house sub-14nm story.

------
fortran77
I can't wait for the Xeon version of this to be released. There have been
unofficial reports.

------
andy_ppp
What's the betting that, when Apple moves to ARM processors, AMD and Intel
both start developing their own ARM CPUs? I have a feeling it's going to be
extremely hard for x86 to compete once Apple shows the way on this.

~~~
makomk
AMD were developing their own ARM CPU for a while. They dropped it to focus on
Zen 2, and after looking at the results it's easy to see why.

------
riazrizvi
So, about 400,000X faster than my first computer console, the 8-bit 1 MHz
Atari 2600, which cost $200 at the time, 40 years ago. Or 600X faster than the
64-bit 80 MHz Cray-1, which was $8 million at the same time.
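
Those ratios fall out of a naive clock × cores × word-width comparison,
ignoring IPC, SIMD, and memory entirely (and the 2600's 6507 actually ran at
~1.19 MHz), so very much a toy check:

```python
# Toy throughput proxy: clock x cores x word width.
i9_10900k  = 5.3e9 * 10 * 64  # 10 cores at 5.3 GHz, 64-bit
atari_2600 = 1e6 * 1 * 8      # ~1 MHz 6507, 8-bit
cray_1     = 80e6 * 1 * 64    # 80 MHz, 64-bit

print(f"{i9_10900k / atari_2600:,.0f}x vs Atari 2600")  # ~424,000x
print(f"{i9_10900k / cray_1:,.0f}x vs Cray-1")          # ~663x
```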

~~~
moonchild
Frequency is no longer a good proxy for speed, what with advancements in IPC.
Also, note that the amount of memory in a Cray-1 is similar (within 1-2 orders
of magnitude) to the amount of cache in a modern CPU.

~~~
wahern
Frequency is still very much a good proxy for speed, especially in the context
of a chip which has the same deep pipelines and sophisticated scheduling that
made "frequency is no longer a good proxy for speed" a popular retort. The
benefits of improved IPC are dwarfed by the frequency gains as compared to
those older chips.

~~~
moonchild
Not really. Straight FLOPS might increase pretty much linearly, but that's
not representative of a typical workload. Cache latency (as measured in clock
cycles) and size in a modern core are comparable to memory in an older CPU, as
I mentioned in the parent. But speculative execution means you can effectively
use the 'slower' medium (memory) for intermediate computation.

~~~
moonchild
To clarify, the problem is not 'clocks give X% improvement to performance,
but architectural improvements give Y%, and Y is bigger than you give it
credit for.' The problem is that CPU performance is not practical for humans
to reason about. It hasn't been and probably won't be until/unless computers
get _massively_ rearchitected. The speed-of-light limit means that 'fast'
memory (accessible within a couple of clock cycles) will always be limited,
and 'big' memory (gigabytes-to-terabytes in size) will always be at least 100
cycles away. This in turn means that the CPU architecture must be
asynchronous, which means it does not have consistent or intuitive
performance characteristics.

------
bzb3
Once again crushing the competition in single core performance, which is what
matters.

~~~
tpmx
It is what matters to web browsing performance.

Web browsing is no longer a "light" thing to do. You need solid integer single
core computing performance to have a snappy browsing experience.

AFAIK, there is not yet a web browser core that can parallelize tasks in a
meaningful way.

~~~
zozbot234
> AFAIK, there is not yet a web browser core that can parallelize tasks in a
> meaningful way.

Firefox is getting there. The CSS style engine is parallelized already, and
graphical compositing+rendering will follow shortly (already enabled
experimentally in some configurations). Other parts of the browser will be
next, including the DOM.

~~~
tpmx
That sounds interesting. Any links to design documents or something similar?

~~~
amaranth
The overall name of the project is Quantum, but you have to dig in a bit to
find out about the specific sub-projects and how they work or plan to work.

Overview: https://wiki.mozilla.org/Quantum

CSS: https://hacks.mozilla.org/2017/08/inside-a-super-fast-css-engine-quantum-css-aka-stylo/

DOM: https://billmccloskey.wordpress.com/2016/10/27/mozillas-quantum-project/

Rendering: https://www.masonchang.com/blog/2016/7/18/a-short-walkthrough-of-webrender-2

~~~
tpmx
I'm well aware of this Rust-based research project. (I thought you were
talking about something new. Those pages are 3-4 years old, btw.)

None of this parallelization has found its way into mainstream Firefox afaik.

~~~
gsnedders
Stylo shipped a long time ago now, in Firefox 57.

https://wiki.mozilla.org/Oxidation#Rust_Components has a list of what Rust
has shipped within Firefox, though most of it has been motivated by safety
guarantees rather than parallelism.

~~~
tpmx
Thanks, Sam (!).

~~~
gsnedders
"Sam (!)" and a Swedish browser person… _wonders who you are_

