
If this is seriously a $143 part then this could turn things upside down. For anyone who hasn't yet clicked through and read the article: the CPU is by far the cheapest one featured in the benchmarks, is usually middle of the pack at worst, and frequently beats CPUs that cost $250+.


It's also a part with Performance cores only, which is bad for most applications but might improve compatibility with older software (esp. games) that doesn't cope well with having both P- and E-cores, and sometimes might even fail to run altogether on these asymmetric machines.
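As an aside on how software can tell the two core types apart: on Linux, Intel hybrid parts expose which logical CPUs are P-cores vs. E-cores under `/sys/devices/cpu_core/cpus` and `/sys/devices/cpu_atom/cpus`. A minimal sketch that parses the kernel's cpulist format (the sysfs paths only exist on hybrid Linux systems, so their presence is an assumption about the running machine):

```python
from pathlib import Path

def parse_cpulist(s):
    """Parse the kernel cpulist format, e.g. "0-3,8" -> [0, 1, 2, 3, 8]."""
    cpus = []
    for part in s.strip().split(","):
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

def hybrid_topology():
    """Return {"cpu_core": [...], "cpu_atom": [...]} on Intel hybrid
    Linux systems, or None when the sysfs nodes are absent
    (non-hybrid CPU, or not Linux)."""
    topo = {}
    for name in ("cpu_core", "cpu_atom"):
        p = Path(f"/sys/devices/{name}/cpus")
        if p.exists():
            topo[name] = parse_cpulist(p.read_text())
    return topo or None

if __name__ == "__main__":
    print(parse_cpulist("0-3,8"))  # [0, 1, 2, 3, 8]
    print(hybrid_topology())       # None on non-hybrid machines
```

A scheduler-unaware game that pins threads by raw CPU index can end up on E-cores by accident, which is part of why older software misbehaves on asymmetric machines.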


That's not true. E-cores on desktop Alder Lake mostly allow for higher multithreaded performance for a given amount of silicon. At desktop-part power limits they are clocked so high that they don't deliver more work per power budget, so they have no advantage apart from cost. This will probably change in the future with higher core counts, but for now E-cores have no real advantage outside low-power laptop parts.

https://chipsandcheese.com/2022/01/28/alder-lakes-power-effi...


That article still shows E-cores as being more efficient for most of their clock speed range. They may lose out to P-cores at 3.5GHz+ clock speeds, but these speeds are very rare nowadays. You'd have to run a compute intensive workload that pegs both P- and E-cores to run into the issue.


They had to manually underclock the E-cores to get better efficiency than the P-cores, which isn't surprising; Intel's power management has historically been built around the idea of "race to idle": run at a high speed so you can complete the task faster and shut the CPU off sooner. AMD makes heavy use of downclocking its cores, but this causes all kinds of headaches with performance regressions on real loads, because of course it's not possible to predict in advance how much CPU time arbitrary code will need.
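The race-to-idle trade-off can be sketched with back-of-the-envelope arithmetic: dynamic power scales roughly with f·V², and since voltage also scales up with frequency, power grows faster than linearly while runtime only shrinks linearly. Whether racing wins then depends on how much fixed (idle/uncore) power you pay for the whole active window. All constants below are illustrative, not measured from any real part:

```python
def energy_joules(work_cycles, freq_hz, volts, idle_w, cdyn=1e-9):
    """Energy to finish a fixed amount of work at one frequency.
    Dynamic power ~ cdyn * f * V^2; idle_w models fixed power paid
    for the whole active window regardless of clock speed."""
    t = work_cycles / freq_hz              # runtime shrinks linearly with f
    p_dyn = cdyn * freq_hz * volts ** 2    # dynamic power grows super-linearly
    return (p_dyn + idle_w) * t

# Illustrative numbers: 4e9 cycles of work, voltage scaling with frequency.
fast = energy_joules(4e9, 4.0e9, volts=1.2, idle_w=5.0)  # race to idle
slow = energy_joules(4e9, 2.0e9, volts=0.8, idle_w=5.0)  # downclocked
print(f"fast: {fast:.2f} J, slow: {slow:.2f} J")  # fast: 10.76 J, slow: 12.56 J

# With no fixed overhead (idle_w=0), the downclocked run wins instead:
print(energy_joules(4e9, 2.0e9, 0.8, 0.0) < energy_joules(4e9, 4.0e9, 1.2, 0.0))
```

With significant fixed power, finishing in 1 s at 4 GHz costs less total energy than dragging the same work out over 2 s at 2 GHz; zero out the fixed power and the lower-voltage slow run wins, which is the regime where downclocking pays off.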


>which is bad for most applications

Why?


I assume what he means to say is that its power draw characteristics are bad for most common workloads, in the sense that the power draw is far higher than it needs to be when running non-demanding applications, but I actively see no reason why a higher clock speed would be "bad for an application".


My interpretation was more that it was kneecapped by not including the cheap-but-decent E-cores. Not surprising: if it were 4+4, it would make the 6+0 i5 redundant.


I’m reading it where applications = use cases not software apps.


I was thinking: why the heck is this review submitted and even upvoted on HN? It is just another Alder Lake CPU. So what?

But once you mentioned it is a $143 part and a cheaper version goes for $122, this suddenly is newsworthy. You are getting the latest Intel uArch for only $122. This looks like a very good chip for low-end machines, especially office work. And it suggests Raptor Lake may really deliver a 10-15% IPC uplift. Enough differentiation for the next gen.

Another important note is that Intel is doing it with Intel 7nm, which suggests their 7nm node yields must be doing exceptionally well. And their 4nm must be on track to reach market in 2H 2022.

I only wish I had money to invest.


Isn’t the latest Intel platform based on DDR5? I don’t know why someone who is budget-focused would opt to be an early adopter of a new platform with overpriced RAM instead of going for the EOL socket, where every component is well developed and priced sensibly.


From the article:

> "users are more likely from a cost perspective to build a system with one of the more affordable B660, H670, and H610 chipsets and pair that with DDR4 memory"


> And their 4nm must be on track to market at 2H 2022.

How does that follow from this?


If their 4nm isn't doing well (or at least on schedule), pushing your 7nm capacity toward a lower-end, lower-margin $122 chip doesn't make any sense. Of course, that doesn't rule out the possibility of Intel being completely irrational.


They might just be selling CPU dies where some cores aren't working. These chips also matter for business customers, where lower-end designs are important and the money is made by selling in huge volumes.


Then they should do it with their 14nm capacity, not on 10nm / 7nm.

>They might just sell CPU dies where some cores aren't working.

These chips aren't disabled dies. They are a native quad-core design and SKU.


Just out of curiosity, how do you know the die is designed with four cores? I would be interested to know which chips have cores disabled and which don't.


Who would have thought that Intel would be known as "the best cheap CPU" company? I think the last time I remember saying this was when the first Celerons came out.


> the first Celerons

They were not the best CPUs at all


What they said was

> "the best cheap CPU" company

And the Celeron 300A was definitely a contender for best cheap CPU during that pre-Athlon era, especially given how overclockable it was. Later there were also a couple of inexpensive Socket 370 motherboards which could accommodate two processors, so a few people had pretty kickass early dual-CPU setups for a good price.


I remember the lack of an L2 cache was a real pain. The ones you remember were the later models. The first model was a Slot-1 "cartridge". I still remember the legendary Abit BP-6 motherboard.


Yeah, it's true those weren't the very first model. Seems the 300A came ~four months after the initial Celeron (the Celeron launched April '98; the 300A was released August '98).


4 months... I guess version 1.0 was really bad, right?



