Hacker News
Apple, Huawei Both Claim First 7-nm Smartphone Chips (ieee.org)
135 points by extarial on Sept 12, 2018 | 92 comments



Disclosure: I work for Intel.

As far as I can tell, nanometer is basically a marketing term now and will not be as important to performance as it has been in the past. Not only is there no standard for what constitutes an X-nm chip these days (Intel, TSMC & Samsung all have different definitions), but future chips will be heterogeneous in feature size. It's getting so expensive to shrink features that the next generation of products will have some 5 nanometer features, some 7 nanometer features, and some 10 nanometer features.

Chip packaging technology, power performance and advanced fabrication techniques (e.g. EMIB, stacked die, EUV, die disaggregation) will decide who has the most performant chips from now on, not feature size.


More people need to understand this. The “<x>nm” nomenclature now just represents a new generation of product, similar to how 4G cellular is not 33% better than 3G. It just means the next generation.

One of the things that got Intel in trouble with their 10nm generation of products was that their 10nm process was much more ambitious than other companies'. It turned out that this increased the complexity (and decreased the yield) so much that they were not able to make it work when they expected to.


There are also process generations within the same process "size". Intel has 14nm, 14nm+, 14nm++, and now 10nm (the limited number of 10nm SKUs released so far is because of all the 10nm problems) and 10nm+. I thought I read somewhere that while 10nm+ was originally intended for Ice Lake and beyond, they're considering moving some 10nm products to 10nm+ early because the original 10nm process yields are so awful.


The 3G/4G analogy works here too. LTE has categories (0 to 19, with some extras) that increase bandwidth (or another metric) from Rel8 to Rel13.


Please could we not use this 3G / 4G analogy?

1. 4G, or LTE, really was at least 33% better than 3G even when it initially launched. The latency reduction alone was a huge improvement.

2. The difference between baseline 4G and top-end 4G is on a completely different scale from what the nomenclature suggests.


I'm sorry, I don't understand why it isn't a good analogy.

The comment is saying that nanometers really only denote the technology generation, and that is exactly what 3G and 4G denote.

There is wide variation between 4G phones, as there is wide variation in 10nm processes. The analogy seems ideal.


I think the point was that 3G -> 4G does not imply a literal 33% increase in performance, which one could guess was the case as 4 is a 33% increase over 3 (3x1.33333~=4)


I always thought it just meant "fourth generation". Never until now did I think 4G would be a 33% increase over 3G


I don't understand why. Could you elaborate? Isn't the whole point of reducing the minimum feature size to increase the number of transistors you can pack into a layout? In other words, the smaller the node, the higher the density, and therefore more processing power with less energy consumption. How is that not supposed to impact performance?


The gist of what's being said in this post and the thread on Apple's video is that everyone is generally around the same size for various features. So we might get marketing speak that we're on "10nm" or "7nm" or "5nm", but that doesn't correlate to much. Just as "4G" and "5G" technically mean something, that doesn't mean the carrier offers speeds remotely close to the specification they claim to provide.

Feature lists generally include the transistor gate pitch, interconnect pitch, transistor fin pitch, and transistor fin height. None of these features is actually "10nm" or "7nm"; they range anywhere between 30nm and 60nm. Although sources online conflict, it looks like Intel 10nm is about on par with TSMC 7nm: some of Intel's 10nm features are larger, others smaller, than TSMC 7nm's. Also, _most_ sites say Intel 10nm has a higher transistor density, but information on this metric is kind of flaky.
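To make the comparison concrete, one common back-of-the-envelope density proxy multiplies contacted gate pitch by minimum metal pitch. A rough sketch; the pitch numbers below are approximate figures circulating online, not vendor-confirmed:

```python
# Crude logic-density proxy from (approximate, unofficial) published pitches.
# Real density also depends on cell height, fin count, and design rules.
processes = {
    # name: (contacted gate pitch in nm, minimum metal pitch in nm)
    "Intel 14nm": (70, 52),
    "Intel 10nm": (54, 36),
    "TSMC 7nm": (57, 40),
}

for name, (gate_pitch, metal_pitch) in processes.items():
    # A smaller pitch product means more transistors per unit area.
    density_proxy = 1e6 / (gate_pitch * metal_pitch)  # arbitrary units
    print(f"{name}: pitch product {gate_pitch * metal_pitch} nm^2, "
          f"density proxy ~{density_proxy:.0f}")
```

Note that neither pitch is anywhere near 10 or 7 nm, and by this crude proxy Intel's "10nm" comes out slightly denser than TSMC's "7nm".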


Yea but TSMC's "7nm" is definitely smaller than Intel's "14nm"(+++), which is what Intel is actually on right now, and will be for at least another year.


What's funny about this still though is that people keep saying TSMC's 7nm is roughly equivalent to Intel's 10nm like that somehow scores points for Intel... even though Intel hasn't been able to ship their 10nm process for shit.

So no matter how you slice it Intel has fallen behind.


Or like CPU clock speeds, which have never correlated directly with performance across families, even when comparing something as old as the 6502 vs. Z80.


It's not quite the same thing, since CPU clock speeds do in fact have a well-defined meaning.


It's starting to get murky. We're back to the point where not everything runs at the same speed (take AVX for example [0]). Add to that that some of the frequencies are turbo, and specific frequencies are only achieved under very specific load conditions, etc.

You get something that physically is very clearly defined but in practice it's not. Just like the minimum feature size which doesn't tell you much about the rest of the transistor. Perhaps if they used an average feature size it would make more sense for the regular person.

So how are the fabrication sizes different from the frequency that gets advertised on the box? "Oh that? It's only when you load one core. And don't run anything using these specific instructions. And when the temperature isn't too high. And...".

[0] https://www.tcm.phy.cam.ac.uk/~mjr/IT/clocks.html


The way I understand it, it's really more the minimum feature size (think a trace/wire, not a transistor). So it's more like a sort of DPI measurement.


You're correct, smaller features will improve performance. What I'm saying is that it is becoming so expensive and difficult to get high yield at smaller feature sizes that, from product generation to product generation, other performance features (like the ones I mentioned before) will be the main factor in improved performance.

You can see this with Intel right now: Cannon Lake (10nm) has been delayed due to yield issues, but Kaby Lake, Coffee Lake & Whiskey Lake (14nm) are all improvements over the original 14nm product (Skylake).


Yea that's all true. The problem is consistently defining what exactly constitutes a 'feature'. Intel's 10nm features are smaller than Intel's 14nm features, but it's harder to compare Intel 10nm to other companies' 10nm processes.

Or at least that's how I understand it. At the end of the day the chips are now physically small enough to do whatever we want, so it's more useful to directly compare power consumption than the implementation details. After all, if a 10nm Intel chip can give you the same output per joule and per second as a 7nm Apple chip, does it really matter how big the transistors are?


A few generations ago they just stopped being able to reduce minimum feature size. Instead they're working on some kind of hypothetical combination of work-per-watt and transistors-per-centimeter, ignoring additional layers.


Do you have a source that backs this up? I am under the impression that 7nm is a marketing term but that it is still smaller than the previous node.


I don't disagree with you. Hell I have no basis to do so.

But this sounds soooo much like the marketing around the IBM/Moto G3/G4 chips that it's amazing.

"This specific number is bigger in the competition, but the number that really matters is this other one that's smaller!"

And this was legitimately true for a certain amount of time. There were G3 and G4 chips that actually smoked Intel's best chips in the mid-late 90s and early 2000s.

This is probably true as well. I have no reason to think otherwise.

But I love how you're sheepishly trying to make a win out of the different numbers. It tastes like justice for those of us who were a little chafed by other CPU fabs not quite getting it right for a while, 20-something years ago.


Totally understand your point. The main thing I'm trying to get across with my comment is that consumers should understand that the reported feature size will correlate more weakly with overall performance in the future than it used to. Nanometer is still the preferred marketing feature of choice, but there is a large class of innovative techniques maturing in the semiconductor fabrication space right now that are going to contribute more and more to real-world performance relative to feature size.

As feature size shrinks, new fabs will need to be built to support the manufacturing, and a fab is a multi-billion-dollar investment. Eventually the industry will have to choose whether feature size is something we continue to push, or whether to invest in other techniques instead.


I'm curious what the job titles and pay ranges are for the engineers (EEs, I assume) designing chipsets at an Intel or Apple.


There are many roles: SoC architects, ASIC design engineers, verification folks at all levels, analog folks, physical design folks (layout, etc.), circuit designers. It takes a village. Degree-wise it's mostly computer engineering, electrical engineering, and computer science. But I mostly know the CPU space. On something like a phone/watch there are also all the sensors, camera stuff, etc. No idea where that all comes from.


I'm a CompE and have done some FPGA work in school. This stuff is pretty interesting.


Maybe this explanation went over my head, but how is this a buzzword? Isn't it simply a measurement of size?

As a layman, I see how Retina can be a buzzword, but not 7nm vs 5nm.


> Isn’t this simply a measurement of size?

But what are you measuring? "Minimum feature size" is cited often... but what constitutes a feature?

I think this is where the line is blurred.


Couldn't they use the square root of transistors per area?


That's actually what Intel is trying to push instead of feature size.
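That metric can be turned into a single length: take the square root of the average area per transistor. A sketch, using ballpark public density estimates (MTr/mm² figures from press coverage, not vendor-confirmed):

```python
import math

# Approximate peak-density estimates in million transistors per mm^2.
# These are ballpark numbers from press coverage, not official figures.
densities = {
    "Intel 14nm": 37.5,
    "Intel 10nm": 100.8,
    "TSMC 7nm": 91.2,
}

for name, mtr_per_mm2 in densities.items():
    transistors_per_mm2 = mtr_per_mm2 * 1e6
    area_per_transistor_nm2 = 1e12 / transistors_per_mm2  # 1 mm^2 = 1e12 nm^2
    side = math.sqrt(area_per_transistor_nm2)
    print(f"{name}: ~{side:.0f} nm of linear space per transistor")
```

Notice the resulting numbers land around 100-160 nm, nowhere near "10" or "7", which is arguably a more honest label anyway.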


Does the small wire size make the iPhone more susceptible to drop shock impact?


It has nothing to do with the size of any wires. The chip itself is a solid piece of silicon with a tiny bit of copper, aluminum, etc. deposited on top. It's all one solid piece though; the silicon die itself is never going to be damaged by shock unless the impact is strong enough to break it in half, which isn't going to happen before the rest of the device is toast.

There are tiny bond wires that connect the silicon die to the pins on the outside of the chip, but those have nothing to do with the process size. Basically the feature size just means that somewhere on the die there's a tiny spot 7 nm wide. It's a measure of how fine they can get when etching and doping a silicon wafer.


"Basically the feature size just means that somewhere on the die there's a tiny spot 7 nm wide." Understood. And these tiny spots are not brittle you say. Presumably because of embedding?


It's sort of like measuring the wood grain of a log. The wafer is solid but the wells are measurable, just like the log is solid but the grain can be measured.


I mean, Huawei was technically correct when they announced it was the first. However, it's now clear that the A12 Bionic will be the first shipping 7nm SoC, and I think it's fair for Apple to claim it's the first 7nm SoC in a smartphone.


To quote Jobs: real artists ship.

I suppose that, since TSMC makes both processors, as others have pointed out, TSMC is the first to ship at the 7nm node.


What does SoC stand for?


System on chip. It usually refers to a CPU with integrated devices such as a GPU, memory controller, camera interface, wireless, etc.


System on a chip, which means CPU, GPU, modem, etc.: almost the whole system you need. Then RAM is literally baked on top of the chip.


Apple and Huawei don't have any fabs, TSMC makes them for both.

A better title would be "TSMC first to manufacture 7nm smartphone chip, Samsung close behind while Intel is playing catchup."


I think you are underestimating the challenges involved in moving to a new node and designing to a new node. Just last year, a lot of other companies were on the fence saying 7nm was not ready.

TSMC doesn't just invest in 7nm with a build it and they will come philosophy. I'm sure Apple and Huawei were both instrumental in bringing 7nm to market. Global Foundries has dropped out of the advanced node game and Intel is struggling to maintain parity.

Hurray to all of them for a tremendous achievement.


Not Huawei; just Apple. Huawei won't have any product for quite some time. Their announcement on this is fraudulent vapor.


Won't be long. Mate 20 will be announced on Oct 16.


I noticed you've used the word "node" in reference to semiconductor fabrication plants. I've seen this word used elsewhere in other blogs. Could you please elaborate on what it means with respect to chip fabs?


AMD is launching 7nm soon on the TSMC process but I recall them saying that they intend to use GF as well within 6 months of launch. So it seems like GF 7nm may also beat Intel's 10nm to market.



Maybe they can move other processors to be manufactured by them if it's cost effective.

(Not sure how much it would cost though, it's probably not a trivial cost, but at Apple scales, it might be doable)


Who ships a phone with the 7nm chip first? I’ll have to get the exact quote but I think Apple said first 7nm chip in a phone. When is Huawei’s phone shipping? Apple’s is here in 2 weeks.


Now that we're in the neighborhood of 1-10nm, putting one number on the feature size doesn't mean much. It's just marketing.

Once people start paying attention to a metric it gradually stops meaning anything. Feature size in semiconductors has hit the point where nobody should really care.


Did the jump from 28 to 14 mean more than 14 to 7?


I am by no means an expert, but I think the answer is yes. As I understand it, as we shrink further we run into greater leakage costs even when the devices aren't switching. This is probably not a completely correct answer.


The big jump there was from 2D to FinFETs.


Does anyone know where we're heading next? Will there be a slowdown of performance increase the next years? Did we hit a barrier?


That happened long ago. Low-power cores: done. Specialized coprocessors: more and more of them (from VPUs to AI and whatnot). Universal coprocessors: Apple has had an FPGA on board since the iPhone 7 (iCE5LP4K).


Would you mind sharing a source for this?



There isn't really one particular source. It's just kind of been happening, and being written about in numerous forms, over the last few years.


I was referring to the presence of an FPGA on the iPhone. There doesn't seem to be a good source on what they're actually used for. Just that they're there.



7nm+ in 2019, 5nm in 2020, 3nm in 2021, 3nm+ in 2022.

That is TSMC's roadmap. It's also likely to ship according to Apple's A-series SoC / iPhone shipping schedule. The 2020 / 2021 schedules are not set in stone yet, depending on EUV yield. There is work on 2nm that is likely coming in 2023 or 2024. All the nm numbers are TSMC's, i.e. don't compare them against Samsung or Intel.

People like to talk about barriers, but as you can see we still have 5-6 years of road ahead of us, and it is hard to see further than that. I am pretty sure we have many ways to improve even beyond TSMC's 2nm technical barrier, but the problem is the cost barrier. Who is going to pay for a 3x to 5x more expensive SoC or the R&D?


Slowdown in performance increase? Haven't we been there for a decade?


For PCs, sure. Not for mobile chips.


Apple is claiming only 15% performance improvement for the A12 CPU cores; in the past it was more like 30-50% per generation. But 15% is far more than the ~5%/year that we've seen in PC processors.


Yep, but they're also claiming 40% lower power consumption, so it's safe to assume they could have traded that efficiency gain for more performance. That said, most of that gain in PPA is due to the 7nm transition rather than micro-architectural developments.
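If both claims are taken at face value and applied to the same cores (which Apple's marketing doesn't guarantee), the implied performance-per-watt gain compounds:

```python
# Back-of-the-envelope perf/watt from Apple's two A12 claims.
perf_gain = 1.15    # "15% faster" CPU cores
power_ratio = 0.60  # "40% lower power consumption"

perf_per_watt_gain = perf_gain / power_ratio
print(f"Implied perf-per-watt gain: ~{perf_per_watt_gain:.2f}x")  # ~1.92x
```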


That's assuming they could cool off such a design. Cooling is harder to deal with in a mobile device. Can't just throw big fans in there.


Cooling is a function of energy input. If they can cool the A11 using 40% more power, they can cool an A12 using the same power.

That said, I’m more than happy to save that extra power - battery life is a good thing.


9x performance on the Neural Engine compared to the A11 is the big increase. We'll probably see slower CPU/GPU increases with massive advances in AI for a few generations now.


> But 15% is far more than the ~5%/year that we've seen in PC processors.

Thanks to Threadripper, there were huge jumps for PCs in the last 12-18 months. The 2990WX bumped the Cinebench R15 score from less than 4,000 (Intel's i9-7980XE) to 5500+ overnight.


It has to be said that the core Threadripper concept is just throwing more cores at the problem.


> According to Apple, the neural engine can perform 5 trillion operations per second—an eight-fold boost—and consumes one-tenth the energy of its previous incarnation

Sidenote, but Apple's claim seemed to be that apps using CoreML will see such an efficiency boost in their ML tasks, since CoreML did not use the neural circuitry on the A11 Bionic; it was restricted to system tasks. Not sure if the opening up of the A12's neural processing means CoreML will be altered to also use the A11's hardware.
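Taking those quoted numbers at face value, the implied gain in energy per operation compounds the throughput boost and the power reduction (simple arithmetic, assuming both claims refer to the same workload):

```python
ops_per_second_a12 = 5e12   # claimed 5 trillion ops/s
throughput_boost = 8        # "eight-fold boost" over the A11
energy_ratio = 0.1          # "one-tenth the energy"

# 8x the work for 1/10th the energy -> energy per operation improves 80x.
ops_per_second_a11 = ops_per_second_a12 / throughput_boost
energy_per_op_gain = throughput_boost / energy_ratio
print(f"Implied A11 throughput: {ops_per_second_a11:.3g} ops/s")
print(f"Energy per operation improved ~{energy_per_op_gain:.0f}x")
```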


Wouldn't Apple technically be the first to mass-market with a 7nm SoC?


Not just technically. There's nothing complex or "technical" about it. Huawei is announcing something they are not even close to shipping. Apple will be first, period.


"nm" is the new "pixels" (which was the new "MHz"-- a marketing buzzword sought after though non-understood by enthusiasts.

The other day I heard someone discussing FinFET who clearly didn't know what a FET is, or really even what a transistor is.

Oh but I need the new shiny.


Why don't, say, Intel laptop chips use that approach of 4 high-perf cores plus 4 low-perf efficiency cores?

I get that they dynamically adjust the frequency but surely cellphones can do that too?


I'm not exactly an expert. This is just speculation and I would love to be corrected if I'm wrong.

I'd guess that the ratio of CPU power to display power is much higher in a phone (whose displays are small).

If the Intel cores are idle anyway then their power consumption is probably so much less than the huge laptop display that it doesn't make a lot of sense to optimize further.


Even when idle, extra cores still cost in terms of area (hence yield). So if you want to sell very cheap Chromebooks and such, that may not be attractive. Power isn't at as much of a premium in a laptop CPU (bigger battery, and also a bigger screen, more RAM, etc., so the CPU's share of the power draw is proportionally smaller).


EDIT: After typing this up I realized big.LITTLE was mentioned in the article and is what OP was referring to, but people may find the explanation useful so I'm posting it anyway.

I'm a little surprised no one else has mentioned this yet, but this is exactly what most modern cellphones do; the architecture is known as big.LITTLE [0]. There are basically three ways you can use the extra set of cores:

1. Either all high performance cores are used or all low performance cores are used, but never both.

2. The cores are paired up in twos, where only one member of each pair is active at a time. This is very similar conceptually to the way frequency scaling works in a desktop or laptop chip.

3. All cores are always available for use and processes are scheduled to either a high performance or low performance core as needed. Cores without any scheduled processes can be powered off or put in a low power state
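The three models above can be caricatured with a toy scheduler. This is a purely illustrative sketch of the policies, not how any real kernel implements big.LITTLE:

```python
# Toy models of the three big.LITTLE usage modes described above.
BIG, LITTLE = "big", "LITTLE"

def cluster_switching(load_is_heavy, cores=4):
    """Mode 1: the whole SoC switches between the big and LITTLE clusters."""
    return [BIG if load_is_heavy else LITTLE] * cores

def in_kernel_switching(pair_loads):
    """Mode 2: each big/LITTLE pair activates exactly one member,
    much like per-core frequency scaling."""
    return [BIG if heavy else LITTLE for heavy in pair_loads]

def global_task_scheduling(tasks):
    """Mode 3: every core is schedulable; heavy tasks land on big cores,
    light tasks on LITTLE ones, and idle cores can power down."""
    return {name: (BIG if heavy else LITTLE) for name, heavy in tasks}

print(cluster_switching(True))
print(in_kernel_switching([True, False, False, True]))
print(global_task_scheduling([("ui render", True), ("background sync", False)]))
```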


No, the A12 can use all the cores at the same time; that's different from a traditional big.LITTLE. At least this is what they said during the event.


As I understand it, all cores at the same time is traditional big.LITTLE, while paired cores are proprietary hacks (e.g. Tegra X1)


I think because laptop chips tend to prioritise performance over battery life, at least compared to mobile chips.


Pretty sure this is the right answer. The power draw differential between your Intel laptop CPU and the ARM SoC in your phone is enormous.


AMD might go for that IMO. They have the tech to "glue together a bunch of CPUs", to misquote Intel, so they could feasibly make a low-power, low-perf CCX and pair it with a high-power desktop CCX.

The problem will likely be OS support: to my knowledge Windows doesn't have anything on board to manage switching to what essentially amounts to a low-perf/low-power NUMA node to save power. I don't think Linux has something on board either.


ARM's big.LITTLE works fine on Linux.


IIRC it doesn't work natively on Linux, ie needs a blob to work...


Really? I thought was just the job of the Global Task Scheduler in the kernel.


It would probably take Windows five years to properly use the small cores.


So? It's not like Linux on the desktop will suck less in 5 years.


How would any software know which CPU to use?

Apple controls the entire OS and SoC so they can schedule different tasks onto different classes of CPU.


The Linux thread scheduler already knows how to do this, which is why ARM's big.LITTLE architecture works fine in Android phones.

There's zero reason MS couldn't implement this in Windows. In fact, it might already be done, given that HP and others are currently shipping Windows on Snapdragon devices.


I hadn’t looked at the sched code in a couple of years. Is this article up to date?

https://community.arm.com/processors/b/blog/posts/ten-things...


I believe that patch referenced in the article for "global task scheduling" that allows you to use both the big and little cores at once is now in the mainline kernel. But I don't recall where I read this so apply a few grains of salt to my comment.


Why would it need to?

Application-level software shouldn't need to care about that, it's way too low-level. If I do:

    #include <stdio.h>

    int main(void)
    {
      printf("Hello from some core!\n");
      return 0;
    }
that code doesn't need to think about which core it runs on, or even whether there are multiple cores. Of course it does nothing to exploit multiple cores, but it does need some core to run on. The OS just schedules the process onto some core, and can then monitor the process's performance and decide to move it if it detects a better fit. All of that is invisible to the process itself.

Of course, high-performance/server software might want to actually care and use OS-specific APIs and services to deal with that, but those are the exception.


Who cares who was first?



