64-bit ARM chips in iPhone 5S serve up taste of Intel-free future for Apple (zdnet.com)
34 points by boh on Sept 11, 2013 | 58 comments



Disagree. Intel looks to be getting better at low power faster than ARM and its partners are getting better at perf/watt.

The ARM universe has a flexibility advantage, but Apple might not care, since Intel will surely make them the SoC they want if it means a spot in the iPad.

I don't think the outcome here is ordained, nor would Apple want it to be: Competition for supplying them fast, efficient CPUs is exactly what they didn't have in the PowerPC days, and it nearly killed them.


Intel looks to be getting better at low power faster than ARM and its partners are getting better at perf/watt.

Citation, please?


This is the kind of thing I have in mind:

http://www.anandtech.com/show/7117/haswell-ult-investigation

Even their "ultrabook"-class Haswell parts are becoming power competitive with ARM tablet chips. The Y series should get even closer to parity. Broadwell is, in theory, right around the corner, with Intel claiming another 30% power improvement in the 14nm process.

http://www.anandtech.com/show/7318/intel-demos-14nm-broadwel...

Intel will have a significant process advantage over the ARM partners for at least another couple of years, and possibly longer. I would be surprised if they failed to overcome the engineering challenges required to exploit that process advantage in mobile.

My point isn't that I'm sure who's going to win, but rather that Apple is not inextricably tied to ARM.


Google atom silvermont


Just because Apple says the processor is "desktop-class" does not make it so. If it consumes 5W, you are only going to get 5W worth of performance. Any half-recent Intel processor will easily blow this out of the water.


Of course it's desktop-class! If your desktop was built in 2005....


Or the early 1990s if you were unfortunate enough to have an Indigo. Mid-1990s if you were really unfortunate and had an Indy, or were fortunate enough to have an Indigo2 with decent gfx.

About the same amount of RAM then, too. :-)


Or if you just bought a mid-range chip from '08-'10. Comparing the 150W behemoths of 2005 to the 5W parts of today is kind of unfair, when we have always had 65-110W parts acting as the mainstream.

In that realm, this next generation of 5-30W parts from ARM and Intel performs on par with the last generation of Core 2 Quad processors at 120W. And people did some serious professional work on those kinds of chips.


What was wrong with the desktop in 2005?


Nothing... when running 2005 software. But due to Gates's Law you probably don't want to run 2013 software on 2005 hardware.


Name one software package (not a game), released in 2012-13, that would not work on a midrange 2005 desktop.


"Work" is pretty broad. Let's define it as "have a productive experience" because any non-period matched programs have issues other than pure compatibility. For example, bring many old DOS programs up on modern hardware and they will "work" but be entirely unusable unless you do something to slow them down.

So, 2013 programs that don't "work" on 2005 hardware? Off the top of my head: SolidWorks 2013, VMWARE running Ubuntu 12 on top of Windows, Avid, Visual Studio 2013.


>So, 2013 programs that don't "work" on 2005 hardware? Off the top of my head: SolidWorks 2013, VMWARE running Ubuntu 12 on top of Windows, Avid, Visual Studio 2013.

So basically things 95% of users don't have any need for?


Nothing, it just illustrates the meaninglessness of the term "desktop-class".


You mean around the same time (give or take 1-2 years) that common people found out that their machines were perfectly good for most of the uses they had in mind, and started feeling much less need to constantly upgrade them?


It doesn't seem like the author has a very clear grasp of what 64-bit word sizes actually allow over 32-bit. Here's a hint: not much. Sure, eventually more RAM, and more registers (which are very important but not really inherent to 64-bit as far as I know). It seems like the author thinks 64-bit magically makes things run faster or easier to develop for; neither is true except in a very small minority of circumstances.

I'll leave with this: Nintendo came out with a 64-bit game console ~15 years ago at half the cost of the iPhone. This is not groundbreaking stuff.


The "more registers" bit doesn't require going 64-bit, but it does require changing the instruction format, at which point it would be foolish not to take the opportunity to go 64-bit.


Place your iPhone 8S next to your Thunderbolt display, and full-blown OS X appears on the screen, and you use the keyboard/mouse as usual.

OSX and iOS will be running concurrently on the same hardware, when it gets fast enough. Of course as soon as you "undock" your phone (which doesn't actually involve a cable, or any physical connection), the OSX "VM" is instantly suspended.

By the way, you could do the same thing with a MacBook Air type device which is nothing more than monitor, keyboard, trackpad, and battery. Phone stays in your pocket. Open the lid, and boom, there your desktop OS is. Except it's actually being served over AirPlay from your "phone". Maybe an early iteration would require connecting the phone to the shell with Thunderbolt.

Although I think they're fundamentally two different OSes, I think the line will blur to the point where it's a meaningless distinction. For example, apps which support both modalities will install with a single click in both places.


Sounds silly to have two separate operating systems. Surely they're going to converge those?

What about Chrome OS and Android?


At least Microsoft has shown the market how not to do it.

Long-term, a converged, unified OS makes sense, but a lot more work is going to have to go into making a truly context-responsive UI system.


Although the about box says the author has more than 20 years of experience in the IT industry, at no point does the article convince me that the author understands the difference between a 32-bit and a 64-bit processor.


The amount of misinformation on the 5S's 64-bitness is hilarious. I've yet to see a single person get it right aside from people I know personally.


Care to share? I haven't heard a good explanation of what it's good for.


It's less about word size and more about the underlying CPU architecture, i.e. ARMv8. See

http://en.wikipedia.org/wiki/ARMv8#ARMv8_and_64-bit

It's much easier to market 64-bit than ARMv8. But what really matters is the new instruction set, with more registers and better SIMD.

You have to give Apple credit, as they are indeed the first to come out with a commercial SoC based on it and the first to get OS support added.


fleitz's and eddieplan9's replies are generally accurate, so I won't repeat them.

The revised instruction set is the main thing. ARM's 32-bit instruction set is a bit odd, and the better 64-bit one will give a decent speed boost.

The ability to expand the address space beyond 4GB is nice, although not yet killer. The device surely doesn't have enough RAM to need it yet, but it still lets you do interesting things like memory-mapping large files or playing other fun address-space games.
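
To make the address-space point concrete, here's a minimal C sketch (my own illustration, not from the article; the file name is hypothetical) of mapping a file larger than 4GB in a single call, which a 64-bit process can do but a 32-bit process simply can't fit into its virtual address space:

    /* Sketch: map a huge file read-only in one call on a POSIX system. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("huge_data.bin", O_RDONLY);   /* hypothetical file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) != 0) { perror("fstat"); close(fd); return 1; }

        /* In a 64-bit process this works even when st.st_size > 4GB. */
        void *base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        printf("first byte: %d\n", (int)((const unsigned char *)base)[0]);

        munmap(base, (size_t)st.st_size);
        close(fd);
        return 0;
    }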

In addition to that, 64 bits also lets Apple make various improvements on their end, such as adopting tagged pointers (already seen on 64-bit Mac), which can be a big performance and memory win for certain kinds of code.

What 64-bit doesn't do:

1. Make code vastly faster. The performance increase will be somewhere in the neighborhood of 10-20%, not the huge gain some people seem to think. Some code could end up being slightly slower due to not doing anything that benefits from the new architecture while using up more memory and cache due to larger pointers, although this probably won't happen often.

2. Allow qualitatively different things to be done with the devices. For example, the linked article implies that fingerprint recognition and VPNs are both made possible by the 5S's 64-bitness, neither of which is even remotely true. Fingerprinting might be a bit slower without it, but not much. VPNs have been on iPhones for years, and 64 bits makes nearly no difference there, as CPU load isn't high for network-limited crypto anyway.

3. Make code easier to port or more compatible. For any halfway decent, halfway modern code, the 32/64 divide is not a big deal at all. Maintaining a code base that works in both 32 and 64 is trivial (see the sketch after this list).

4. Signal anything about future iOS/Mac convergence at Apple. A 64-bit CPU in an iPhone was inevitable. It's pretty much required once you want to go beyond 4GB of RAM, and extrapolating forward, iPhones will hit that limit soon. The only surprise (and let's not downplay it, it was a pretty big surprise) is that it happened now, and there's no real difference in terms of hypothetical Apple plans between a 64-bit ARM in 2013 and a 64-bit ARM in 2015.
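
Since point 3 tends to surprise people, here's a minimal sketch (my own example, nothing Apple-specific) of the kind of code that builds cleanly as either 32- or 64-bit because it never assumes the size of a pointer or a long:

    /* Sketch: 32/64-bit-clean code. Use fixed-width and pointer-sized
       types instead of assuming sizeof(long) or sizeof(void *). */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        uint32_t header_len = 16;     /* explicit width, same on every target */
        size_t   buf_len    = 1024;   /* sizes always go in size_t            */
        char     buf[1024];
        memset(buf, 0, buf_len);

        /* Pointer-to-integer round trips go through uintptr_t, never int. */
        uintptr_t addr = (uintptr_t)&buf[0];
        printf("header=%" PRIu32 " buf=%zu bytes at 0x%" PRIxPTR "\n",
               header_len, buf_len, addr);
        return 0;
    }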


64 bits has little to do with the amount of addressable memory. In fact, 32-bit and 64-bit processors can address just as much. For example, the 32-bit ARMv7 Cortex-A15 can address up to 1TB of physical memory.

The only difference is the amount of physical RAM you can map at once: only the virtual address space is limited on 32-bit processors. But even then, you can change the virtual-to-physical mappings at runtime. It's only inconvenient.


In theory true, but in a practical sense, roughly no real-world programs are going to use more RAM than they can actually address with a raw pointer, which means that roughly no real-world programs are going to take advantage of more than 4GB of RAM on a 32-bit CPU.


There are two things where the bit width of a processor matters. First, there is the width of the address bus, which determines how much memory and how many I/O ports you can address - 32 bits buy you a maximum of 4 GiB, 64 bits a whole lot more: 16 EiB, that is 16 million TiB.

The other is the width of the registers, which determines the maximum size of the numbers you can process with a single arithmetic or logical operation. The register width is also related to the addressable memory space, because you have to store your memory addresses somewhere, but this is not a hard relationship - you can stuff all kinds of mapping logic between the registers and the address bus, or combine several smaller registers to form a wider address - but it is quite common for the register width and the address bus width to match.

Beyond more addressable memory, you don't gain that much. If you write your code carefully, with no dependency on the bit width of the processor, the compiler will happily perform the necessary transformations to make your code run even if you perform 64-bit math on an 8-bit processor. It will take a couple of instructions instead of just one, but it will work and be transparent to well-written code.
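
A small sketch of what that looks like in practice (my example; the point is only that the source never mentions the machine's word size):

    /* Sketch: width-independent 64-bit arithmetic. On an 8-, 16- or 32-bit
       target the compiler lowers these 64-bit operations into several
       narrower instructions; the source does not change. */
    #include <inttypes.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    uint64_t checksum(const uint8_t *data, size_t len) {
        uint64_t sum = 0;
        for (size_t i = 0; i < len; i++) {
            sum += data[i];
            sum = (sum << 1) | (sum >> 63);   /* 64-bit rotate left by one */
        }
        return sum;
    }

    int main(void) {
        uint8_t sample[] = {1, 2, 3, 4, 5};
        printf("checksum = %" PRIu64 "\n", checksum(sample, sizeof sample));
        return 0;
    }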


Mostly expanding RAM beyond 4GB.

If you have tight numerical code that requires 64-bit floats/integers, you may see a speedup.

If your code is for some reason register-starved, you may see a speedup.

Take the case of an app that downloads some JSON and builds a UITableView: it's not going to run much faster on 64-bit vs. 32-bit unless you're doing some extremely compute-intensive image transforms.


>unless you're doing some extremely compute intensive image transforms.

Which a lot of the best apps for the iPhone do quite a lot.

Especially since nobody cares about the performance of todo apps, those are good enough already.


In Mac OS and iOS, 64-bit is a big gain because it more than doubles the space available for data in tagged pointers (http://objectivistc.tumblr.com/post/7872364181/tagged-pointe..., http://www.mikeash.com/pyblog/friday-qa-2012-07-27-lets-buil...), making the approach usable in practice.

Given that Apple designs its own CPUs, I guess it would make sense to have a specific instruction for 'read 64 bits from this pointer, or, if it is a tagged pointer, extract the value from the tag'. Is that a good idea?


The question of hardware support for tagged pointers is interesting. Currently, objc_msgSend basically does (in pseudo-C):

    if(self & 1) {
        /* Low bit set: it's a tagged pointer, so the class comes from a side table. */
        tagged_class_index = self & 0xf;
        class = tagged_class_table[tagged_class_index];
    } else {
        /* Normal object: the isa pointer is the first word of the object. */
        class = *self;
    }
And the rest is all common code that looks up the method implementation and jumps to it.

Within the method implementation itself, there will be some shifting to extract the value, but not much, since the method implementation already knows what class it's in:

    value = self >> 4;   /* shift off the four tag bits */
Producing new "instances" of the class as tagged pointers involves some shifting and logical ORing:

    new_tagged_pointer = value << 4 | class_index;   /* low four bits carry the (odd) class index */
(Note that class_index is conceptually a 3-bit value, but the runtime treats it as a 4-bit value for which the bottom bit is always 1. The tagged_class_table has 16 entries but only the odd-numbered ones are used.)

Most of this is really fast on any modern CPU. The one part that could potentially be slow is the if statement up at the top, since that's a candidate for mispredicted branches. I don't know how much it matters, but I'd say that that would be the one place where a dedicated instruction might potentially be of use. You could eliminate the branch yourself, except for the fact that dereferencing self when it's a tagged pointer will likely crash, so you have to avoid reading that value when you aren't going to use it.

My guess is that the penalty for the branch there is small, but you'd have to do some benchmarking to say for sure. If modern CPUs are indeed able to predict that branch (or run both sides to completion internally, since they're short), then there's no need for a new instruction.


Apple did a bang-up job accelerating its deployment of ARMv8 and the A7.

The question is: what now, Intel? On one hand, Intel made a huge misstep and is missing out; on the other hand, at the time, "low-cost ARM CPUs" probably didn't seem like a good market for a company that has typically sold premium, high-end CPUs.

One thing is for certain, Apple is unwilling to let Intel or anyone else dictate features via CPU roadmap.

Did anyone else notice that the A7 has 1 billion transistors? That's on the scale of a desktop/mobile CPU - the latest Intel parts have about 1.4 billion or so. That's a big deal. And don't give me the CPU-vs-GPU argument, because Intel ships with a GPU on die.


I don't buy that Apple will merge iOS and OS X.

At the last All Things D that Steve appeared at (9? 10?), he talked at length about this, and said they'd thought about it and worked on it for a very long time. In the end, they figured that phones and Mac Pros are too different to share the same OS, and it just doesn't make sense.


Steve said a lot of things. He couldn't figure out what tablets were good for other than reading on the toilet ... as he was furiously working on a tablet. And he said 7" wasn't a great size for tablets ... as he was excitedly exchanging internal memos about making a 7" tablet. Phones and Mac Pros are very different but 10" iPads and MBAs not so much and are only getting closer. And the Mac Pro can't be more than 5% of their OS X business.


Agreed. The tablet-like features that got added into the recent versions of OS X are something I don't particularly care for. If I'm on a desktop/laptop, I want a desktop-like experience. If I'm on a mobile device, the paradigm is sufficiently different that it justifies having a different OS.

Good example: the default scroll direction got changed in OS X to operate like a touch device. Hated it with a passion. First thing I changed back.


Some people like moving the scroll bar (which seems to be your preference). Other people like moving the document (which is my preference).

There's no right or wrong choice here. The current default logically makes sense -- your cursor is located at a document, and when you're moving your fingers upward, the document also moves upward.

But like any changed default, there are people who liked the previous setting. And there are of course people who want to use (or have to use) other OSes where this setting can't easily be changed. Those people can change the OS X setting at will.


This was true then and it is still pretty true, but it will not be true forever. It isn't difficult for me to imagine the Ubuntu strategy - a phone that runs an OS usable as a phone as usual, or as a desktop/laptop if placed in an appropriate shell (perhaps turning the phone itself into a graphical touchpad) - becoming a pretty common hybrid solution.

Having said this, I still think there are some practical problems with Apple switching to non-Intel in the near term. Quite a lot of people I know who are diehard MacBook users use Boot Camp often, and some actually use their Macs as dedicated Windows boxes. I work at a place that is about 90% Mac laptop users but about half of them wouldn't be able to use Mac laptops for work if not for the ease of running Windows on their MacBooks.


Well, depends on how they do it. Obviously the Win8 approach of GUI unification is right out, but I could see support for iOS apps in MacOS. Ubuntu's idea of showing the phone-apps in a slide-out sidebar is nifty, for example.


There's a pretty fundamental reason why they can't merge them: OS X is their development platform. iOS has no hope to self-host, especially with the paranoia it suffers from right now (only running App Store apps).


Nor would it be sensible even on Android or any other mobile platform, which is a big reason I believe PCs will not completely die.


Sure. But Android at least has a chance, since you're allowed to run your own software on your own device.


That is why I say sensible.


Steve is dead. His visions no longer mean much. I don't know if the two will merge, but they certainly seem to be converging.


>Because both Mac OS X and iOS are now 64-bit, it will be easier to port the more demanding (but fewer in number) Mac apps to a single, converged future-state Apple device OS.

It seems like Microsoft has shown that strategy to be a dud. Touch interfaces are awkward with a mouse and mouse interfaces are too fiddly to touch.


And the 32-bit/64-bit thing seems to be largely irrelevant, too. You just set the 64-bit or 32-bit flags when compiling the vast majority of projects. There's no discernible difference.
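
For what it's worth, here's a tiny sketch of that in practice (assuming Apple's clang, where e.g. -arch armv7 vs. -arch arm64 selects the slice; the source itself needs no changes):

    /* Sketch: the same file builds as 32- or 64-bit purely via compiler flags.
       __LP64__ is predefined by the compiler for 64-bit targets. */
    #include <stdio.h>

    int main(void) {
    #ifdef __LP64__
        printf("64-bit build: pointers are %zu bytes\n", sizeof(void *));
    #else
        printf("32-bit build: pointers are %zu bytes\n", sizeof(void *));
    #endif
        return 0;
    }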


If your business logic (models/controllers) is strongly decoupled from your views, you still maintain a pretty large amount of code that becomes easier to port.


It really doesn't, though. The 32/64-bit divide is not all that consequential for this.


In 99.9% of cases, true. I was just trying to point out that the comment about UIs was invalid.

If your code drops down to ASM or anything like that, then it becomes an issue. But that's not most use cases.


I'll have nightmares if the future is hundreds of ARM cores in workstation-class systems. I'd rather have a couple of big x86_64 cores which are "just there and work" without having to deal with all kinds of scary things.

Also, I think it's funny how people fall for this 64-bit marketing BS. It's like back in the days of the AMD Athlon 64, when people thought everything was faster just because of 64 bits.

I am interested in the new Atom generation too, which seems really promising.


This is an interesting article for the theme, but not the content :-). Intel and ARM are on a collision course. What is interesting to me is that the previous two trains on these tracks were Intel and the bespoke computer makers.

Some folks here will recall the great microprocessor wars for the general purpose computer. Starting with the Z80, 8080, 6502, and 6809, moving to the 80286/386, 68000/68020, and SPARC/MIPS/PA-RISC/POWERPC.

Intel pretty much won that war by coming up from the bottom, powering 'toy' computers (IBM PC) running a 'toy' operating system 'MS-DOS' with 'toy' peripherals (640 x 480 graphics). They came up and ate workstations, and then they ate servers.

And here is the salient fact for me, they could do that because while the market for workstations might be 1 million/year, the market for PCs was 1 million a month (or more at its peak). So while a PC couldn't do what a workstation could early on, the PC market was generating a ton of cash which was going into improving the PC, much more cash and investment than was going into improving the Workstation. One by one the Workstation vendors dropped out of the market or replaced their offering with a "high end" PC.

ARM has set up the same scenario today, although this time it was portable devices that were the 'toys' - clever and useful, but not nearly as powerful as PCs, so hardly a threat - yet for every million PCs sold there were 5-10 million phones sold. And all that money and all that investment goes into making the chips that power these phones better and better, while the PC king sits there trying to make a more powerful PC when what people are buying is more powerful phones.

At IDF (in 2010 if memory serves) Intel noted that they were going to move into the embedded space more aggressively, but they didn't actually follow through very well, focusing on multi-chip solutions which didn't work well in space-constrained phones. Meanwhile ARM was getting design win after design win with its SoC partners.

So last year Intel started going all out on making a lower-power version of the x86 architecture that could compete with ARM, and at this year's IDF they took aim at the SoC business. All in an effort to pull enough oxygen out of the ecosystem to slow ARM down. After all, if today you could choose between two chips, one ARM and one x86-64, with the same cost and power curve, you'd probably go with the x86-64 for the assurance that software support would be easier. But once software support isn't an issue it gets to be more of a horse race, and ARM has notably offered much better licensing terms than Intel, which was nearly usurped by AMD the first time it tried licensing.

If ARM can get to 64 bits and I/O channel bandwidth faster than Intel can get to ARM's power envelope and cost, it will be a very interesting fight indeed.


Nice summary, and I agree with the view, but it's good to remember Intel is fighting a two-pronged battle now -- ARM's partners (QCOM, Samsung) and the foundries (TSMC, Samsung, GF, IBM) -- much different from the discrete merchant competitors they faced in the past. Intel therefore has to get to deeper process nodes faster (14nm and below) while also trying to create an embedded x86 ecosystem. I've never seen them more motivated to compete on both fronts.

For what it's worth, here's what Intel's key execs presented to address those challenges at this year's IDF: http://www.legitreviews.com/intel-confims-mobile-focus-shift...


Intel getting to ARM's power envelope and cost won't help them, though. They already did the Razr i, which I think shows people won't buy something just because it's Intel. Once Intel scales down to match ARM, it has lost all of the things that make it "Intel", so what's the point? If they compete directly with ARM they're going to lose, because they have more to lose.


"They already did the Razr i which I think shows people won't buy something just because it's Intel."

Pretty sure it's because it was made by Motorola and for sale only in Europe, where they have no mindshare.


Switching from Intel to ARM for their "real" computers sounds like a decision that Apple's current leadership would make.


It'll be an interesting fight with Intel's Silvermont Atom, which finally gets out-of-order execution.

Intel claims that dual-core Silvermont chips can outperform quad-core ARM chips by a factor of 1.6 when power draw is similar. Intel also says Silvermont can draw 2.4 times less power while delivering performance similar to the quad-core, ARM-based competition.


I think this is spot on: if you're buying 2 million units of X, it's nearly always cheaper per unit than buying 1 million. That said, I don't expect to see total platform convergence. I see a future where you have iOS and OS X, and where you could, for example, run iOS apps on OS X; I don't see them following the same route Microsoft did with Windows 8.


Current ARM memory systems are terrible compared to what you find on even medium-scale x86 chips. There's going to have to be a LOT of investment here before you'll see ARM approach the scale of large (or even medium) iron.



