The debate about non-symmetric RISC cores feels like a tech battle that was already fought in the 2k10's. Amusingly, it's nearly the exact same debate this time around. The only real difference is two orders of magnitude of TDP, and Google is now playing Oracle.

You say to Oracle, "Yeah, T4 SPARC64 RISC chips look nice, but coding against non-symmetric asynchronous cores is hard."

Oracle counters, "Yup, that's why we wrote and maintain the Java JIT that does that for you!"

Now Google is selling ARMv8 with non-symmetric asynchronous A57 RISC cores, which you optimize with Java, and we do this song and dance all over again.

Intel, Nvidia, and AMD (formerly ATI) solved this problem a long time ago by putting the glue logic between the scalar processors in hardware and exposing only one scheduler to pipeline these smaller, weaker cores while making them pretend to be a single processor to the OS/user.




Apple's recent decision to promote LLVM bitcode as a canonical binary form for apps ("promote" meaning mandatory on the watch) makes me suspicious they're planning something similar in their future A-series or S-series chips.

The paired relationship between the phone and watch could probably benefit from either side being able to do a little computation and coordination without having to spin up the big cores.

There, of course, I'd expect Apple to make it developers' problem to designate which thread(s) could operate in a limited environment.


>Apple's recent decision to promote LLVM bitcode as a canonical binary form for apps ("promote" meaning mandatory on the watch) makes me suspicious they're planning something similar in their future A-series or S-series chips.

This makes a lot of sense. LLVM bitcode is pretty damn portable, and it means you can maintain PPC/x64/ARMv8 support from a single binary yet get native performance in every environment.

What I'd consider more likely is that on install you compile to native for both the high-power and the low-power chips. It might get funky from an OS standpoint, but theoretically you could dynamically switch between binaries if they were compiled to allow it. It's really just dynamic dispatch in kernel space.
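
For what it's worth, here's a minimal userspace sketch of that dispatch idea in Linux-flavored C. The "CPUs 4-7 are the big cluster" split and both function bodies are made-up assumptions for illustration:

    /* Both variants are stand-ins: imagine one tuned for the big core
       and one for the little core. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    static long work_big(long n)    { long s = 0; for (long i = 0; i < n; i++) s += i; return s; }
    static long work_little(long n) { long s = 0; for (long i = 0; i < n; i++) s += i; return s; }

    static long work(long n)
    {
        /* Pick a variant based on which core the thread is on right now.
           Assumes CPUs 4-7 are the big cluster, which is platform-specific. */
        return sched_getcpu() >= 4 ? work_big(n) : work_little(n);
    }

    int main(void)
    {
        printf("%ld\n", work(1000000));
        return 0;
    }

A real implementation would live in the loader/kernel and swap whole code pages rather than branching per call, but the shape is the same.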


> There, of course, I'd expect Apple to make it developers' problem to designate which thread(s) could operate in a limited environment.

Less so when using task queues, I guess.


Sure, but I'd expect that Apple would provide some API for you to designate certain tasks as intended to be run in the "Deep Background", with the understanding that such tasks are on a very short leash power- and capability-wise and may be arbitrarily cancelled, like iOS's forcible app killing but at a smaller grain. (Does GCD have a facility for the dispatcher to unilaterally do this already?)
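
The closest existing facility I know of is QoS classes, which are scheduling hints rather than a kill switch. A minimal sketch for Apple platforms (clang with blocks; QOS_CLASS_BACKGROUND is real API, the "Deep Background" semantics above are speculation):

    /* A block tagged QOS_CLASS_BACKGROUND may be run on the most
       power-efficient cores and have its I/O throttled; GCD will not
       unilaterally cancel it, though. */
    #include <dispatch/dispatch.h>
    #include <stdio.h>

    int main(void)
    {
        dispatch_queue_t bg = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0);
        dispatch_async(bg, ^{
            puts("low-priority housekeeping runs here");
        });
        dispatch_main();   /* park the main thread; never returns */
    }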


> You say to Oracle, "Yeah, T3 SPARC64 RISC chips look nice, but coding against non-symmetric asynchronous cores is hard."

But… the T3 is symmetric? And it's not from the '90s; it was launched in 2010?

> Intel, Nvidia, and AMD (formerly ATI) solved this problem a long time ago by putting the glue logic between the scalar processors in hardware and exposing only one scheduler to pipeline these smaller, weaker cores while making them pretend to be a single processor to the OS/user.

That's the original use case for big.LITTLE, actually: either cluster-switching between the low-power (A7/A53) cluster and the high-power (A15/A57) cluster, or presenting groups of low-power and high-power cores (1/1, 2/1) as a single virtual core.
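
You can even see that asymmetry from userspace on Linux: newer ARM kernels export a relative capacity per core, so the big and LITTLE clusters show up as different numbers. A quick sketch (the sysfs attribute is an assumption about the kernel and may simply be absent):

    #include <stdio.h>

    int main(void)
    {
        for (int cpu = 0; ; cpu++) {
            char path[64];
            snprintf(path, sizeof path,
                     "/sys/devices/system/cpu/cpu%d/cpu_capacity", cpu);
            FILE *f = fopen(path, "r");
            if (!f)
                break;                /* no such CPU, or attribute missing */
            int cap;
            if (fscanf(f, "%d", &cap) == 1)
                printf("cpu%d capacity %d\n", cpu, cap);
            fclose(f);
        }
        return 0;
    }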


>But… the T3 is symmetric? And it's not from the '90s; it was launched in 2010?

I was thinking of the T4, and I had my dates wrong. The T4 does weird dynamic threading internally, so it can be out-of-order and in-order at the same time, depending on the number of execution units you need.

Furthermore, you illustrate that Intel was again ahead, having a six-core scalar galaxy hiding behind a virtual processor. You can just change that to three and, lo and behold, 50% less power consumption!


> already fought in the 2k10's

Am I missing something? That sounds like it means the decade beginning in 2010 ... which we're still only half-way through.

Do you mean something like "in 2011-2012"?

[EDITED to add: Oh, wait, looking at the comments I think I see what happened. You originally wrote "already fought in the 1990s", then someone pointed out that the thing you were talking about was a lot later, and then you changed the date. I'm afraid the result is a pretty odd paragraph, talking about a decade still in progress as if it's ancient history.]


Focusing on single-app performance may be missing the mark.

For a mobile device, having multiple smaller cores running at lower voltage may be more beneficial than one big core screaming ahead at higher voltage, because most devices these days have all sorts of stuff running in the background.

Yeah, I know Intel talks big about race-to-idle. But my main concern there is that, given the number of background tasks, idle will never be reached in any meaningful sense.
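
The voltage argument falls out of the usual dynamic-power relation, roughly P ≈ C·V²·f. A back-of-envelope sketch in C (every number here is illustrative, not a measurement):

    #include <stdio.h>

    int main(void)
    {
        /* Dynamic power scales roughly with C * V^2 * f. */
        double c = 1.0;                        /* normalized capacitance   */
        double big    = c * 1.2 * 1.2 * 2.0;   /* one core: 1.2 V, 2.0 GHz */
        double little = c * 0.8 * 0.8 * 1.0;   /* one core: 0.8 V, 1.0 GHz */

        printf("1 big core:     %.2f power units, 2.0 GHz total\n", big);
        printf("4 little cores: %.2f power units, 4.0 GHz total\n", 4 * little);
        return 0;
    }

Of course, aggregate clock is not single-thread performance, which is exactly why the mix of background tasks matters.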


For a server, having multiple smaller cores running at lower voltage may be more beneficial than several big cores screaming ahead at higher voltage, because most servers have all sorts of processing running in the background while idle.

Notice how it sounds identical? We've had this debate before.

You can argue phones have stricter power requirements than server farms, but power is the primary expense of a server farm, much like battery life is a big selling point of a phone.

It's the same debate. Power is king; it always has been.


The work profile of a typical phone is different from the work profile of a typical server (which is different from the work profile of a typical desktop). Just because we decided this approach was not good for servers does not mean it is not worth revisiting for phones.


Do you save more power having lots of slow cores idle, or fewer faster cores idle?


Lots of low-voltage cores idle, assuming the faster cores don't use similarly small amounts of power when idle. https://en.wikipedia.org/wiki/CPU_power_dissipation#Sources


The only difference I see here is that Google will have a good opportunity to optimize the Android system to deal with these asymmetric cores properly.

It makes a lot of sense when you think about it: Android has some good clues about what a process is responsible for.

It would be quite interesting to have all the background daemons doing mostly I/O scheduled on a light core, and the heavy ones (UI -> GPU, whatever) on more robust ones.
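
On Linux you can approximate that policy today with affinity masks. A sketch, assuming CPUs 0-3 are the little cluster (that mapping varies per SoC):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t little;
        CPU_ZERO(&little);
        for (int cpu = 0; cpu < 4; cpu++)   /* assumed little cluster */
            CPU_SET(cpu, &little);

        /* Pin this process (pid 0 = self) to the little cores. */
        if (sched_setaffinity(0, sizeof little, &little) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        /* ... daemon main loop: mostly-I/O work stays off the big cores ... */
        return 0;
    }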


>The only difference I see here is that Google will

This is a cargo-cult mentality. You're expecting a company to get it right simply because you assume they will.

>Android has some good clues about what a process is responsible for.

This is actually already a thing on servers, or at least well-maintained ones. You set the core affinity of a running process to the same core that handles the interrupts for that task, to preserve cache locality when handling system calls.
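
Concretely, on Linux that looks something like the sketch below; the IRQ number (42) and core (2) are placeholders, and writing the mask needs root:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        int core = 2;   /* placeholder core */

        /* 1. Route IRQ 42 to that core; smp_affinity takes a hex CPU bitmap. */
        FILE *f = fopen("/proc/irq/42/smp_affinity", "w");
        if (!f) { perror("smp_affinity"); return 1; }
        fprintf(f, "%x\n", 1 << core);
        fclose(f);

        /* 2. Pin this process to the same core so interrupt handling and
              the syscall path share caches. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        if (sched_setaffinity(0, sizeof set, &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        return 0;
    }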

People keep telling me Phones/Desktops/Servers are fundamentally different, but it always seems like they're trying to solve the same problems.


> This is actually already a thing on servers

Servers are a different story; we're talking about mobile systems here.

> This is a cargo-cult mentality. You're expecting a company to get it right simply because you assume they will.

Do you want to bet that Google will totally ruin their mobile platform by messing up big time with a new architecture?

OK, I'm in. How much you got? :)


>Servers are a different story; we're talking about mobile systems here.

I'm talking about the similarities between the two, and how one can solve the other's problems. Creating arbitrary divides based on name alone is the crux of the problem I'm attempting to address.


With the current trend on mobile of all three major platforms using some form of bytecode (bitcode, MSIL, DEX), whether JITed (Dalvik), AOT-compiled at installation time (ART), linked at installation time (WP8/MDIL), or AOT-compiled at the store (bitcode, .NET Native), they are starting to look like mainframes.

Using processor-neutral binary formats was always a thing with mainframes.



