As far as I can tell, nanometer is basically a marketing term now and will not be as important to performance as it has been in the past. Not only is there no standard for what constitutes an X nm chip these days (Intel, TSMC & Samsung all use different definitions), but future chips will be heterogeneous in feature size. It's getting so expensive to shrink features that the next generation of products will mix 10 nanometer, 7 nanometer, and 5 nanometer features on the same chip.
Chip packaging technology, power efficiency, and advanced fabrication techniques (e.g. EMIB, stacked die, EUV, die disaggregation) will decide who has the most performant chips from now on, not feature size.
One of the things that got Intel in trouble with their 10nm generation of products was that their 10nm process was much more ambitious than other companies'. It turned out that this increased the complexity (and decreased the yield) so much that they were not able to make it work when they expected to.
1. 4G, or LTE, really was at least 33% better than 3G even when it initially launched. The latency reduction alone was a huge improvement.
2. The difference between baseline 4G and top-end 4G is on a completely different scale than the nomenclature suggests.
The comment is saying that nanometers really only denote the technology generation, and that is exactly what 3G and 4G denote.
There is wide variation between 4G phones, as there is wide variation in 10nm processes. The analogy seems ideal.
Feature lists generally include the transistor gate pitch, interconnect pitch, transistor fin pitch, and transistor fin height. None of these features are actually "10nm" or "7nm", but range anywhere between 30nm and 60nm. Although sources online conflict, it looks like Intel 10nm is about on par with TSMC 7nm. Some features of Intel's 10nm are larger, while others are smaller than TSMC 7nm. Also _most_ sites are saying Intel 10nm has a higher transistor density, but information is kind of flaky on this metric.
So no matter how you slice it Intel has fallen behind.
You get something that sounds physically well defined, but in practice it isn't, just like minimum feature size, which doesn't tell you much about the rest of the transistor. Perhaps if they used an average feature size it would make more sense to the regular person.
So how are the fabrication sizes different from the frequency that gets advertised on the box? "Oh that? It's only when you load one core. And don't run anything using these specific instructions. And when the temperature isn't too high. And...".
You can see this with Intel right now: Cannon Lake (10nm) has been delayed due to yield issues, but Kaby Lake, Coffee Lake & Whiskey Lake (14nm) are all improvements over the original 14nm product (Skylake).
Or at least that's how I understand it. At the end of the day the chips are now physically small enough to do whatever we want, so it's more useful to directly compare power consumption rather than the implementation details. After all, if a 10nm Intel chip can give you the same output per joule and per second as a 7nm Apple chip, does it really matter how big the transistors are?
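To put made-up numbers on that: say the 10nm chip renders a frame in 10 ms while drawing 5 W, and the 7nm chip also takes 10 ms at 5 W. Both burn 5 W x 0.01 s = 0.05 J per frame, so from the user's point of view they are identical, whatever the marketing node says. The difference would only matter if one of them finished the same frame in less time or fewer joules.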
But this sounds soooo much like the marketing around the IBM/Moto G3/G4 chips that it's amazing.
"This specific number is bigger in the competition, but the number that really matters is this other one that's smaller!"
And this was legitimately true for a certain amount of time. There were G3 and G4 chips that actually smoked Intel's best chips in the mid-late 90s and early 2000s.
This is probably true as well. I have no reason to think otherwise.
But I love how you're sheepishly trying to spin a win out of whichever numbers happen to favor it. Tastes like justice for those of us who were a little chafed by other CPU fabs not quite getting it right for a while, 20-something years ago.
As feature size shrinks, new fabs will need to be built to support the manufacturing, and a fab is a multi-billion-dollar investment. Eventually the industry will have to decide whether feature size is worth continuing to push, or whether that money is better invested in other techniques.
As a layman, I see how Retina can be a buzzword, but not 7nm vs 5nm.
But what are you measuring? "Minimum feature size" is cited often, but what constitutes a feature?
I think this is where the line is blurred.
There are tiny bond wires that connect the silicon die to the pins on the outside of the chip, but those have nothing to do with the process size of the chip. Basically the feature size just means that somewhere on the die there's a tiny spot 7nm wide. It's a measure of how fine they can get it when etching and doping a silicon wafer.
I suppose that since TSMC makes both processors, as others have pointed out, TSMC is the first to ship at the 7nm node.
A better title would be "TSMC first to manufacture 7nm smartphone chip, Samsung close behind while Intel is playing catchup."
TSMC doesn't just invest in 7nm with a build-it-and-they-will-come philosophy. I'm sure Apple and Huawei were both instrumental in bringing 7nm to market. GlobalFoundries has dropped out of the advanced node game and Intel is struggling to maintain parity.
Hurray to all of them for a tremendous achievement.
(Not sure how much it would cost though, it's probably not a trivial cost, but at Apple scales, it might be doable)
Once people start paying attention to a metric it gradually stops meaning anything. Feature size in semiconductors has hit the point where nobody should really care.
That is TSMC's roadmap. It is also likely to ship according to the Apple A-series SoC / iPhone shipping schedule. The 2020 / 2021 schedules are not set in stone yet, depending on EUV yield. There is work on 2nm that is likely coming in 2023 or 2024. All the nm numbers are TSMC's, i.e. don't compare them against Samsung or Intel.
People like to talk about barriers, but as you can see we still have 5-6 years of road ahead of us, and it is hard to see further ahead than that. I am pretty sure we have many ways to improve even beyond TSMC's 2nm technical barrier, but the problem is the cost barrier. Who is going to pay for a 3x to 5x more expensive SoC, or the R&D?
That said, I’m more than happy to save that extra power - battery life is a good thing.
Thanks to Threadripper, there were huge jumps for PCs in the last 12-18 months. The 2990WX bumped the top Cinebench R15 score from less than 4,000 (Intel's 7980XE) to 5,500+ overnight.
Sidenote, but the claim by Apple seemed to be that those apps that use CoreML will see such an efficiency boost in their ML tasks, as CoreML did not use the neural circuitry on the A11 Bionic, and it was restricted to system tasks. Not sure if the opening of the A12's neural processing means that CoreML will be altered to also use the A11's functionality.
The other day I heard someone discussing FinFET who clearly didn't know what a FET is, or really even what a transistor is.
Oh but I need the new shiny.
I get that they dynamically adjust the frequency but surely cellphones can do that too?
I'd guess that the ratio of CPU power to display power is much higher in a phone (whose displays are small).
If the Intel cores are idle anyway then their power consumption is probably so much less than the huge laptop display that it doesn't make a lot of sense to optimize further.
I'm a little surprised no one else has mentioned this yet, but this is exactly what most modern cellphones do; the architecture is known as big.LITTLE. There are basically three ways you can use the extra set of cores:
1. Either all high performance cores are used or all low performance cores are used, but never both.
2. The cores are paired up in twos, where only one member of each pair is active at a time. This is very similar conceptually to the way frequency scaling works in a desktop or laptop chip.
3. All cores are always available for use and processes are scheduled to either a high performance or low performance core as needed. Cores without any scheduled processes can be powered off or put in a low power state (see the sketch below).
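To make option 3 concrete, here's a toy sketch in plain C of a load-based placement decision. It's nothing like a real kernel scheduler; the task names, load numbers, and the 0.6 threshold are all made up for illustration:

    /* Toy sketch of option 3: every core is visible and each task is
     * placed on a big or LITTLE core based on its recent load.
     * Task names, loads, and the 0.6 threshold are hypothetical. */
    #include <stdio.h>

    enum core_class { LITTLE_CORE, BIG_CORE };

    struct task {
        const char *name;
        double recent_load;   /* 0.0 .. 1.0, fraction of a core recently used */
    };

    /* Arbitrary illustrative threshold: heavy tasks go to a big core. */
    static enum core_class pick_core(const struct task *t)
    {
        return t->recent_load > 0.6 ? BIG_CORE : LITTLE_CORE;
    }

    int main(void)
    {
        struct task tasks[] = {
            { "ui_render",    0.85 },  /* bursty, latency-sensitive */
            { "mail_sync",    0.10 },  /* background */
            { "audio_decode", 0.25 },  /* steady but light */
        };

        for (size_t i = 0; i < sizeof tasks / sizeof tasks[0]; i++) {
            enum core_class c = pick_core(&tasks[i]);
            printf("%-12s -> %s core\n", tasks[i].name,
                   c == BIG_CORE ? "big" : "LITTLE");
        }
        return 0;
    }

Real implementations track per-task load in the kernel and also migrate tasks between core types at runtime, but the placement decision boils down to the same idea.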
The problem will likely be support by the OS: to my knowledge Windows doesn't have anything on board to manage switching to what essentially amounts to a low-perf/low-power NUMA node to save power. I don't think Linux has anything on board either.
Apple controls the entire OS and SoC so they can schedule different tasks onto different classes of CPU.
There's zero reason MS couldn't implement this in Windows. In fact, it might already be done, given that HP and others are currently shipping Windows on Snapdragon devices.
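As a rough illustration of what "scheduling onto a class of CPU" can look like from user space on Linux today, you can already confine work to a subset of cores with sched_setaffinity. The assumption below that cores 0-3 are the LITTLE cores is hypothetical; a real tool would read the topology from sysfs:

    /* Minimal Linux sketch: restrict the current process to a subset of cores.
     * Assumes (hypothetically) that cores 0-3 are the LITTLE cores; real
     * topology would come from sysfs, not a hard-coded range. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        for (int cpu = 0; cpu <= 3; cpu++)      /* assumed LITTLE cores */
            CPU_SET(cpu, &set);

        if (sched_setaffinity(0, sizeof set, &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("background work now confined to the assumed LITTLE cores\n");
        return 0;
    }

That's manual pinning rather than an energy-aware scheduler, but it shows the mechanism an OS would build on.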
Application-level software shouldn't need to care about that, it's way too low-level. If I do:
printf("Hello from some core!\n");
then I shouldn't have to know or care which core that ends up running on. Of course high-performance/server software might want to actually care and use OS-specific APIs and services to deal with that, but those are the exception.
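For that exceptional case, the OS-specific API can be as simple as asking which core you're on. A minimal sketch using sched_getcpu, which is Linux/glibc-specific and not something ordinary application code needs:

    /* The "exception" case: a performance-sensitive program asking the OS
     * which core it is currently running on. sched_getcpu() is Linux/glibc
     * specific; ordinary code like the printf above never needs it. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        int cpu = sched_getcpu();
        if (cpu < 0) {
            perror("sched_getcpu");
            return 1;
        }
        printf("Hello from core %d!\n", cpu);
        return 0;
    }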