A lot of people miss how big an advantage Apple keeps gaining by being first to market.
People try to pin their mobile success on Apple fanboys or their "magic" user experience/ecosystem, but Apple is simply some number of months ahead of everyone else's product roadmap and continually throwing curveballs at the rest of the players in the market. "Real world" benchmarks aside, their 64-bit CPU has everyone else distracted, rethinking their product roadmaps, and demanding it in their devices while Apple works on its next trick.
There is no "magic" that having more computing power on a phone can enable. The vast majority of consumers are more than happy with the speed of their phone. The most impactful ways that Apple could significantly improve the iPhone right now would be to (a) double or triple the battery life, (b) make the screen unbreakable, and (c) make the screen bigger. These are the most common complaints I hear from iPhone users. Everything else really isn't that important. And I don't see any progress in any of those areas.
a - I agree, I would pay for a thicker/heavier iPhone with 2x the battery life. However, this would fracture Apple's product lineup, making it economically unfeasible.
b - The screen on the iPhone 5S is already very strong.
c - After a lot of thinking, I agree with Apple. The current form factor is just right for a communication device [1]. If you need a bigger screen, chances are you do not want a phone - you want a tablet.
[1] iPhone user here, take this with a grain of salt. Over 3 models of iPhone I never had a case. The phone is small enough for a proper grip with one hand while using your thumb to control it.
Anecdotal experience: I know 12 people with phones that have large screens. 7 of them had screens cracked from dropping the phone. Every single one has a case (the people who dropped their phones got a case afterward). What is the point of getting a super-light, thin phone just to put a bulky case on it?
> The current form factor is just right for a communication device [1]. If you need a bigger screen, chances are you do not want a phone - you want a tablet.
Who are you to decide that for others? When I pick up an iPhone, the screen is way too small. I have a 5.2" display on my phone, and so 4" seems comically small.
And I also have a Nexus 7, so I can assure you that there's a huge difference between a 5" phone and a 7" tablet. The latter cannot substitute for the former.
> Anecdotal experience: I know 12 people with phones that have large screens. 7 of them had screens cracked from dropping the phone
And in my anecdotal experience, everyone I know with a broken screen has an iPhone. So this sort of conjecturing isn't very helpful.
As someone who might have larger-than-average hands (I don't know), I find the iPhone rather small. I currently have an SGS3, and I find it a much better fit for my hand than the overly skinny (and previously too short) iPhone. However, I may very well be an outlier in that.
> If you need a bigger screen, chances are you do not want a phone - you want a tablet.
This kind of remark is really annoying because it suggests that you know better than us (the people who want larger screens) when you really don't. Either that, or you are incredibly blind to the difference between a freaking tablet (7", designed for browsing on WiFi) and a telephone with a decently sized screen (4.7~4.8", compared to the 4" of the iPhone - this makes a HUGE difference).
I doubt this is a case of hubris, but rather of the Pareto principle. For 80% of all users, the iPhone is either the right size or they want a tablet. What you (and others making similar comments in reply) fail to account for is that this still leaves 20% of users who actually do want a bigger screen on their phone.
Yes, 20% of all users is still a lot of users, but it's not the 80% that Apple has traditionally targeted.
> c - After a lot of thinking, I agree with Apple. The current form factor is just right for a communication device [1].
All that thinking got you one vote - unfortunately for Apple, most votes are for a bigger phone (the numbers don't lie).
> If you need a bigger screen, chances are you do not want a phone - you want a tablet.
So now people buying larger phones don't really know what they want? This is typical, flawed Apple superiority complex thinking.
Who wants to carry a phone AND a tablet around (one in each pocket)? How is this even portable or feasible in most cases? No one wants to carry a tablet around with them everywhere they go. A larger phone is a logical compromise. Besides, economics is often the biggest decider. If someone really wants a tablet but really needs a phone, are they going to buy an iPhone AND an iPad? Can they even afford that? Or can they really justify spending $1,000 for it when they can spend less than half of that on a well-polished Android phablet?
While others will value other things, I vote for (a) - longer battery life. I've hated the barely-one-day life of most phones; it's downright ridiculous. However, I'm a person willing to sacrifice "sleekness" for longer battery life, and because of that I have an extended battery for my current phone (which I highly recommend if you're OK with a thicker phone).
The screen size is what keeps me on Android. With fat fingers, it's more important to me to be able to type out long messages without having to hit backspace every few characters; that saves me more time than any speed increase that 64-bit processors can bring in the near term.
I had a Sharp Actius MM10 in 2003, which in some respects was way ahead of where most of the market is today. Criticism centered on the lack of an internal CD-ROM (yeah) and video playback performance. But I could hack on something in Visual Studio for 6-8 hours with the extended battery. A quick search pulls up a Wired article that calls it a 'fugly black box' (mine actually had a silver Mg-alloy case), but people would ask me about it so frequently that I printed up an info card to hand out. I'm not even mentioning some of the more interesting features. Of course the thing was expensive, but so is a 13" Air today (and it doesn't come with a docking station). Then VMware came out and I was sure it wouldn't be long before I'd be able to dynamically share compute power between my laptop and workstation when I plugged the former into the docking station. Then OQO showed up and I thought the same thing there. But all of it died, for various reasons.
Apple deserves credit for pushing a more aggressive roadmap than most, and I'll be buying another round of systems here shortly, but being 'first' isn't enough. The Transmeta chip in the MM10 was a dual-core 128-bit VLIW with a soft x86 translation layer. The next chip was 256-bit. Both were very low-power devices for their day.
That advantage is running out. You can only have a revolutionary new product every 30 years or so (this 64-bit chip's effect is quite oversold in this article). See Mac vs. PC in the 90s. Apple is still playing the same game (expensive, closed, non-expandable hardware; limited hardware choices; a lone-wolf, act-alone mentality; not reacting to consumers' requests because they smugly always know best). Android has filled the role of the PC. The difference now is that there is no Steve Jobs leadership and cult following to bring Apple back to prominence after this era fades.
"But now Apple can also claim an advantage in raw horsepower. The new 64-bit A7 chip is the smartphone equivalent of a big V12 engine."
Except these performance gains most likely would have been seen on a 32-bit version of the chip.
It's interesting to see how much marketing is built around 32- vs. 64-bit chips. The biggest benefit is the ability to access more than 4GB of memory. For reference, the iPhone 5S has 1GB of memory. Eliminating the need for PAE and other expensive workarounds is definitely a big plus. Cheap int64 and double arithmetic is certainly a bonus too.
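(That 4GB ceiling is just 2^32 bytes. A trivial C sketch, my own illustration, makes the arithmetic concrete:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* A 32-bit pointer can distinguish at most 2^32 byte addresses. */
        uint64_t limit = (uint64_t)1 << 32;
        printf("32-bit address space: %llu bytes = %llu GiB\n",
               (unsigned long long)limit,
               (unsigned long long)(limit >> 30));
        /* Pointer width where this runs: 4 bytes on ILP32, 8 on LP64. */
        printf("sizeof(void *) here: %zu bytes\n", sizeof(void *));
        return 0;
    }
)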
I suspect the primary reason for moving to 64-bit is not performance gains now, but future compatibility. Eventually the iPhone 5S will be the oldest supported iPhone, and running a 64-bit processor will reduce the cost of maintaining separate 32- and 64-bit lines of software.
I think this would be a much more informative article if the author had interviewed a few experts to discuss the benefits of 64 bits over 32 bits, rather than just saying they go faster, which at this point is debatable.
> It's interesting to see how much marketing is built around 32- vs. 64-bit chips.
It's because it's a much more approachable pitch than "we doubled the size of the reorder buffers and added another instruction dispatch port." In Apple's case, they're simply using "64-bit" as a proxy label for all the other architectural improvements in Cyclone.
> The biggest benefit is the ability to access more than 4GB of memory. For reference, the iPhone 5S has 1GB of memory.
No. The biggest benefit is having an addressable virtual address space in the tera/peta/exabyte range, which makes a lot of things simple and efficient even if you don't have a lot of physical memory. Read about e.g. LMDB's design or Varnish's design for good examples.
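To make that concrete, here's a minimal C sketch of the pattern those designs rely on (my own example, not code from LMDB or Varnish): map a whole file into the huge 64-bit virtual address space and let the kernel worry about what's actually resident.

    /* Map an entire file read-only and touch a byte of it. In a 64-bit
       process this works even for a file of tens of gigabytes; a 32-bit
       process runs out of contiguous address space long before that. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
        const char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) { perror("mmap"); return 1; }
        /* The kernel faults pages in on demand; no hand-rolled buffer cache. */
        printf("first byte: 0x%02x, size: %lld\n",
               (unsigned char)data[0], (long long)st.st_size);
        munmap((void *)data, st.st_size);
        close(fd);
        return 0;
    }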
Or is the biggest benefit here the expanded set of registers available in 64-bit mode, so that programs compiled for such a processor execute on-core significantly more than those compiled for the older machine?
Conclusion
The "64-bit" A7 is not just a marketing gimmic, but neither is it an amazing breakthrough that enables a new class of applications. The truth, as happens often, lies in between.
The simple fact of moving to 64-bit does little. It makes for slightly faster computations in some cases, somewhat higher memory usage for most programs, and makes certain programming techniques more viable. Overall, it's not hugely significant.
The ARM architecture changed a bunch of other things in its transition to 64-bit. An increased number of registers and a revised, streamlined instruction set make for a nice performance gain over 32-bit ARM.
Apple took advantage of the transition to make some changes of their own. The biggest change is an inline retain count, which eliminates the need to perform a costly hash table lookup for retain and release operations in the common case. Since those operations are so common in most Objective-C code, this is a big win. Per-object resource cleanup flags make object deallocation quite a bit faster in certain cases. All in all, the cost of creating and destroying an object is roughly cut in half. Tagged pointers also make for a nice performance win as well as reduced memory use.
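For anyone curious what an "inline retain count" looks like in practice, here's a toy C sketch of the general idea (the bit layout is invented; Apple's real 64-bit isa format differs):

    #include <stdint.h>

    /* The retain count lives in spare high bits of the object's first
       word, so retain/release are simple arithmetic in the common case,
       spilling to an out-of-line side table only on overflow. */
    #define RC_SHIFT 45
    #define RC_ONE   ((uint64_t)1 << RC_SHIFT)
    #define RC_MASK  (~(RC_ONE - 1))
    #define HAS_WEAK ((uint64_t)1 << 44)   /* cleanup flag: weak refs exist */

    typedef struct { uint64_t isa_bits; } obj_t;

    static void obj_retain(obj_t *o) {
        if ((o->isa_bits & RC_MASK) == RC_MASK) {
            /* inline count saturated: move some of it to a side table */
        } else {
            o->isa_bits += RC_ONE;         /* common case: one add, no hashing */
        }
    }

The point is that the common case never touches a global hash table at all.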
Some JVMs already use tagged pointers for various things, often called "compressed pointers". It's of unclear utility for a language like Java, and it could easily be added to ART if deemed useful.
It reminds me of the AMD Athlon 64 era. Everyone claimed 64-bit was responsible for everything. In fact, the majority of the performance came from the improvements over the K7 architecture (the integrated memory controller, HyperTransport, better branch prediction, improved pipelines, caches, etc.), new instruction sets, and so on.
"The "64-bit" A7 is not just a marketing gimmic, but neither is it an amazing breakthrough that enables a new class of applications. The truth, as happens often, lies in between.
The simple fact of moving to 64-bit does little. It makes for slightly faster computations in some cases, somewhat higher memory usage for most programs, and makes certain programming techniques more viable. Overall, it's not hugely significant.
The ARM architecture changed a bunch of other things in its transition to 64-bit. An increased number of registers and a revised, streamlined instruction set make for a nice performance gain over 32-bit ARM.
Apple took advantage of the transition to make some changes of their own. The biggest change is an inline retain count, which eliminates the need to perform a costly hash table lookup for retain and release operations in the common case. Since those operations are so common in most Objective-C code, this is a big win. Per-object resource cleanup flags make object deallocation quite a bit faster in certain cases. All in all, the cost of creating and destroying an object is roughly cut in half. Tagged pointers also make for a nice performance win as well as reduced memory use."
So basically, it's only really a substantial performance win because of a quirk of how Apple's 32-bit code was written, a quirk that doesn't apply to competing platforms.
If you look at the history of desktop/server operating systems, it followed the same pattern as clearly explained in the article: problems start before 4GB and affect programming decisions as soon as users start working with files which aren't trivially small. Once 64-bit became the default, many small optimizations and larger design changes became easy enough that they actually happened, producing small but meaningful improvements.
This specific case might only apply to Apple (I don't know, haven't read about it before) but the quirks of ARM's 32-bit instruction set surely apply to everyone else.
The transition could perhaps have happened later without too much repercussion, but why not make the jump now, especially when you have a whole new OS engineered to take advantage of it?
"Adding it all together, it’s a pretty big win. My casual benchmarking indicates that basic object creation and destruction takes about 380ns on a 5S running in 32-bit mode, while it’s only about 200ns when running in 64-bit mode. If any instance of the class has ever had a weak reference and an associated object set, the 32-bit time rises to about 480ns, while the 64-bit time remains around 200ns for any instances that were not themselves the target.
In short, the improvements to Apple’s runtime make it so that object allocation in 64-bit mode costs only 40-50% of what it does in 32-bit mode. If your app creates and destroys a lot of objects, that’s a big deal."
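A benchmark in that spirit is just a timing loop; here's a rough C analogue (my sketch, with malloc/free standing in for Objective-C object creation and destruction):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        enum { N = 1000000 };
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            void *volatile p = malloc(64);  /* volatile: keep the calls */
            free(p);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per create/destroy pair\n", ns / N);
        return 0;
    }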
Processor word width has absolutely nothing to do with memory size. e.g. 8-bit processors regularly interface with 16-bit-wide memory buses, and 64-bit processors often allow use of 32-bit-wide pointers.
What the word width does affect is how much data can be processed by each instruction. Certain algorithms benefit immensely from this (you can easily double throughput), but the code needs to be written to take advantage of this (or at least, not to squander the advantage), and needs to be compiled to use 64-bit-wide registers.
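As a concrete (if simplistic) C example of "more data per instruction": a checksum loop that eats 8 bytes per iteration on a 64-bit register file, where the 32-bit version needs twice the iterations (assumes the buffer is aligned and a multiple of 8 bytes long):

    #include <stddef.h>
    #include <stdint.h>

    uint64_t xor_checksum64(const uint64_t *buf, size_t len_words) {
        uint64_t acc = 0;
        for (size_t i = 0; i < len_words; i++)
            acc ^= buf[i];   /* one 64-bit XOR covers 8 bytes */
        return acc;
    }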
If your processor doesn't have special pointer registers, you can use banked memory. (e.g. the high bits are selected on a page-by-page basis by the OS, which understands 64-bit-wide pointers.)
If the processor does have special pointer registers (e.g. 6502, 6809, 8086, AVR, etc.), you use those.
By that logic, 8-bit computers would only access 256 bytes of memory (which is definitely not true). On processors where the hardware word size is less than the useful size of memory, there generally is another means of extending the addressing range (either per-page bank addressing, or special pointer registers).
32-bit Windows Server 2003 supports up to 64 GB (although SQL Server is probably the only application which can actually do anything useful with that).
It may not be necessary today, but it's good to get it in the pipeline. In two years or so when top-end phones get >4GB of RAM Apple will be able to say 'iOS is 64-bit only, this is the line'. When they decide it's time to cut 32-bit support, the majority of their users will already be ready.
That's what happened on the desktop. I think Apple only shipped one generation of 32-bit Intel processors. Apple was shipping 64-bit CPUs for years when many Wintel computers were still on 32. So when it came time to say 'We're going 64-bit only' with Mavericks Apple left almost no-one behind. On the other hand Microsoft still supports 32-bit Windows 8.
Apple is preparing to make their lives easier in the future. For today's user, its usefulness may not be much higher than the level of a buzzword.
I remember the same argument being employed against 64-bit server operating systems and later desktops, too: it'll waste pointer space, nobody can afford 1GB of RAM, etc. It quickly became obvious that they were underestimating the value of all the little optimizations and bigger design reconsiderations[1] this made possible - from little things like having more registers to larger boons like not having to play games mapping memory around the operating system's reserved block. All of that added up, both in making things faster and in making systems simpler in ways which allowed more ambitious improvements.
1. Think about designs like the Varnish cache versus Squid, where the latter team has poured an enormous amount of effort into features which allowed them to manage storage manually - work which is increasingly counterproductive when you can simply mmap() 4GB and let the kernel perform that complex, error-prone task for you while you focus on more interesting features.
64-bit addresses have a number of subtle advantages, even if you aren't addressing more than 4GB of memory. One is address space layout randomization (ASLR), which makes it harder for attackers to determine memory locations to use in exploits. ASLR works with 32-bit addresses, but it works far better with 64-bit addresses, where any given guess is overwhelmingly likely to point to absolutely nothing. This is pretty relevant to Apple, who are increasingly concerned about security on iOS.
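It's easy to watch ASLR in action, for what it's worth. A tiny C sketch (assuming the platform randomizes stack and heap placement per process, as modern iOS, Linux, and macOS do):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int local = 0;
        void *heap = malloc(1);
        /* Run this twice: both addresses should move between runs. With
           64-bit pointers there are far more bits available to randomize,
           so a single guessed address almost certainly hits nothing. */
        printf("stack: %p  heap: %p\n", (void *)&local, heap);
        free(heap);
        return 0;
    }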
> Except that 64 bits is still nothing but a marketing gimmick as far as the memory sizes in smart phones is concerned.
We're less than one iteration of Moore's Law away from the 32-bit address space not being enough, even if you're just concerned about memory sizes of mobile devices. Getting this done doesn't seem particularly premature.
Prediction: Qualcomm wasn't actually going to be ready to release an ARMv8 chip even at the end of 2014. They may try to do it now, but I don't think they were planning for it. From recent news it seems they were more interested in jumping on the "8-core" bandwagon (you'd think they learned their lesson when they rushed out their first quad-core chip, the S4 Pro, in order to catch up to everyone else and also have a "quad-core chip", but no, they didn't).
So how do I figure this? Because their first 64-bit chip, announced just a few days ago (the Snapdragon 410), uses ARM's stock cores! That blew my mind. Why? Because Qualcomm licenses ARM's architecture in order to build its own cores, which should mean they can come out with a next-gen core faster than ARM's stock ones. And yet their very first 64-bit chip is based on ARM's stock cores?
That made it pretty obvious that Qualcomm was not prepared at all to make an ARMv8 chip, and I think it was mainly laziness, because of the nice position they enjoy in the Android market right now. They thought they could squeeze 3 years out of the 32-bit Krait, when normally it should've been two.
Well the joke is on them. I certainly won't buy any phone in 2014 that isn't 64-bit, since I usually change my phones every 2-3 years, and I don't want to buy one of the very last 32-bit phones.
I was actually expecting Qualcomm to release its own custom ARMv8 core in early 2014 - and on 20nm! So I can't believe Qualcomm was actually going to rest on its laurels to such a degree that they weren't even planning to switch to 64-bit in 2014, and probably not to 20nm either! That's insane, and only a company that thinks it has monopoly power in a market would do that.
As Andy Grove used to say - stay paranoid about your competition. A paranoid company would've expected at least to some degree Apple to release its first 64-bit chip this year. But they didn't even have to do that. They just had to follow the original plan. The one that says release a new core every 24 months, and on a new process node (the one ARM is also following). What ever happened to that plan?
While 64-bit mobile processors are certainly a significant step forward for mobile computing, at present they are a gimmick, because it's going to be a long time before the capabilities of such a chip can be taken advantage of. All this means is that we'll eventually see phones with 16GB+ of RAM hit the market, and I think we are a long way off from that. We are only now seeing phones with 3GB of RAM, which need equally large batteries to power them, like the Samsung phones.
It took years for the PC market to transition from 32 to 64 bits, and it'll take years for mobile to do the same. The amount of time it will take means competitors shouldn't worry, because by the time 64-bit processors make sense, everyone will be manufacturing one. Sure, memory addressing is just part of the picture, and wider registers can double per-instruction data throughput, a benefit that applies even below 4GB of RAM, but I think at present the iPhone isn't taking advantage of such improvements.
The processor, in 64-bit mode, offers more on-core registers and a beefier instruction set.
Programs built by a compiler aware of those distinctions should significantly, measurably, perceptibly outperform those compiled for the older, more constrained architecture.
The S4 still fits in your pocket. It's 2.75 inches wide. iPhone 5 is 2.31 inches wide.
Anyway, I realize some people like the current form factor; I just wish Apple would give people the choice. I know that in my circle of friends they're losing a lot of business to Android just because of the screen size.
I think Apple pulled a good one on Android and the other mobile OSes. With Android's ecosystem so fragmented and Windows still trying to get a foothold in the apps market, a push to 64-bit will fragment their ecosystems further.
I'm not sure that I understand how the switch to 64-bit will further the fragmentation of Android. I would have thought that the Dalvik JIT would be ported to a 64-bit processor and the issue would be done. Short of running an application which requires more than 4GB of memory, why would a developer care if her code was running on a 32-bit or 64-bit processor?
I don't follow the mobile space, so I'm probably missing something.
http://developer.android.com/about/dashboards/index.html shows it's not, although it's definitely improving. The best way for that to continue is not to pretend the problem doesn't exist but rather to keep reminding carriers and vendors that this is one of the reasons they're less successful than they hope.
"64 bit" is the media way of saying ARMv8-A. The architecture has more registers and is generally designed and tuned for the current scale of processors being built. ARMv8-A instead of ARMv7 is the big deal.
Often when a CPU becomes "64-bit" it's not just the memory bus that's expanded: all the CPU registers become wider, and often more are added. For example, A32 had 14 usable 32-bit general-purpose registers, whereas A64 has 31 64-bit ones. This can speed up a variety of calculations.
For example adding 64-bit numbers; finding the end of zero-terminated strings; certain hashing and encryption operations; and so on.
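The string case is a neat example of why width matters: the classic word-at-a-time trick tests 8 bytes per iteration for a zero byte. A C sketch (ignoring the alignment handling a real strlen needs):

    #include <stdint.h>

    /* Nonzero iff any byte of w is zero; lets a strlen-style loop scan
       8 bytes of the string per 64-bit load instead of 1 per byte load. */
    static int has_zero_byte(uint64_t w) {
        return ((w - 0x0101010101010101ULL) & ~w & 0x8080808080808080ULL) != 0;
    }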
Yes, the importance of more (and, to some extent, larger) registers can't be overstated. If you're not paying close attention, register spills (i.e. when the compiler doesn't have enough registers and so decides to stick stuff on the stack) can easily turn decently performing code into poorly performing code if the spill happens in a tight loop.
14 registers is pretty tight. 31 registers are better, and doubling the width helps for structure locals and parameters (which Clang/LLVM fortunately does a good job of keeping in registers). (I do a lot of work on a processor with 60-odd 64-bit registers, and even then GCC decides to spill now and then.)
This story did a good job of explaining the advantages and disadvantages of Apple's 64-bit chip. There are advantages to the extra 32 bits, but that is only part of the story.
What do you mean by "a phone"? Is the assumption here that since it's a "phone" it somehow doesn't need top-tier computing power? The word "phone" is a holdover from when these devices were used just to talk to other people in real time over voice. My current iPhone 5S is more powerful than a MBP from 2008.
How about a device with "physical memory of 1 or 2GB and tight power constraints"?
To quote the article, "the new 64-bit A7 chip is the smartphone equivalent of a big V12 engine." Sure, but there's a reason why you don't drive a car with a V12 engine, too.
I really hope I don't need to explain why this is a beyond-terrible car analogy. Do you have a better way to explain why a 64-bit chip is overkill for a phone? Is 2013 iPhone hardware overkill for a "phone" because it is 20x more powerful than the first iPhone? 640K should be enough for everyone, and all that.
The car analogy was just a way to bring up the generic engineering principle that "everything is a tradeoff".
It's been said many other times elsewhere in the comments, but to summarize:
1) One of the biggest benefits of a 64-bit processor is the ability to address a gigantic memory space. Phones just don't have that much memory today.
2) Even if they did, the types of processes that run on phones usually aren't the kind that need that much memory. "Isn't that just '640K is enough for everyone'?" Sure, but memory isn't free - if nothing else, a larger DRAM burns more power. So if you want to run an 8GB SQL server on your phone, it's going to cost you in terms of battery life.
3) Another big downside of a 64-bit process is that pointers are twice as large. So your 64-bit process actually consumes more memory than a 32-bit process; in programs with pointer-heavy data structures (like, say, trees) this can result in the 32-bit process being able to solve bigger problems than the 64-bit process (see the sketch after this list). I have seen this in the embedded world when we moved a piece of network equipment from a 32-bit processor to a 64-bit processor. We doubled the memory, but that only increased the solvable problem size by about 1.4x.
4) Additionally, a 64-bit process will tend to blow out your caches a lot faster.
5) If you've got a large data set that you'd like to plow through a big chunk at a time, going to 64 bits at a time might be a win. But a 32-bit processor with vector instructions is usually quite good, too.
6) Does that 64-bit processor have more transistors? Probably. Does that mean it burns more power? Probably.
There are a lot of good things about that 64-bit processor that I am not mentioning, but the point is they are not free. A lot of these tradeoffs are hard to measure by outsiders because they didn't just change one variable: in this case, the move to 64-bit introduced new instructions, more registers, was concurrent with a smaller and faster transistor size, bigger caches, etc, etc.
It's important to consider what you could have done otherwise. Possibly you could have had a 32-bit processor with more cores. Possibly you could have had similar performance with more battery life. Possibly you could have had a lot of things, but if you just think "64 is bigger than 32 and therefore better" you will miss those possibilities.
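To illustrate point 3 with a C sketch (my example): the canonical pointer-heavy structure, a binary tree node, roughly doubles in size when pointers do.

    #include <stdio.h>

    /* Typically 12 bytes on ILP32 and 24 bytes on LP64 (8 + 8 + 4, plus
       padding), so the same amount of RAM holds roughly half the nodes. */
    struct node {
        struct node *left;
        struct node *right;
        int key;
    };

    int main(void) {
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }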
I think one thing most commentators are missing re: 64-bit in the 5S is that Apple is not putting the 64-bit processor in there for phones today, but for phones tomorrow. The 5S, as they state, is its "forward looking" platform. If you want a phone that optimizes things differently, look to the 5C. The 5S is there so the compiler, app developers, and the overall ecosystem move to 64-bit and get the kinks out earlier rather than later.
> The 5S is there so the compiler, app developers, and the overall ecosystem move to 64-bit and get the kinks out earlier rather than later.
This is a good point, and it's the first time I'm hearing it stated so succinctly.
With mainstream 64-bit chips, the manufacturers got to roll them out slowly, first to the scientific computing community and then into server applications, before finally tackling widespread desktop adoption. Given that there's no equivalent path in the mobile space, I guess you have to test the waters somewhere.
I think it's a marketing gimmick, similar to upping the megapixel count on the phone's camera. Realistically, 10-12 MP is plenty to upload to Facebook, or text message back and forth. But people see a bigger number and think it's better.
My fear is that touting a certain number of bits in the instruction set as a proxy for technological leadership will bite Apple in the ass. It's never been something they emphasized with much success -- e.g. the early PowerPC chips were arguably superior to their x86 contemporaries but it wasn't enough to compel users to switch.
And whether Apple wants to buy into the chip game, well, good luck with that.
Companies like Apple and Google have significant power to direct the investment efforts of their competitors. Adopting something just to catch up with Apple is a marketing gimmick. Using that money to add more RAM or improve the integration of the software is not a gimmick and could give an edge. Just like how Microsoft got distracted by Bing whilst Google built an OS.
Anyone who argues that 64-bit at the mobile level is worth anything other than being a marketing gimmick is an idiot. We've had 64-bit at the desktop level for years, and the large majority of desktop programs have yet to find a way to take true advantage of it.
All 64-bit does at the mobile level is introduce a ton of power loss for no real performance gain. The power usage Apple is reporting is likely due to other technological improvements that more than compensate for 64-bit's loss.
The first poster had it right. All this does is screw up the rest of the industry and make them scramble, while Apple buys time to figure out what to do next.
Side note, Qualcomm's layoffs have little to do with their processor line...
We already have 128-bit registers for vector computations (for example, the SSE registers in x86).
As for addressing 128 bits of memory: that's more than a century off even if memory continues to double every two years (which doesn't seem likely to begin with) - even from a terabyte-scale (2^40-byte) system today, the ~88 doublings to 2^128 bytes would take about 176 years at that rate. It's actually plausible that the step to 64-bit addressing was the last one, ever.
(And while one could argue that a vector register isn't a "real" general-purpose register, the primary difference is that you can't perform fullwidth arithmetic, which is of very little use > 64 bits anyway.)
64-bit ARM means passing around 64-bit pointers by default.
In most code settings, this pollutes the cache (you can only hold half as many pointers), and leads to slightly weaker performance.
64-bit computing is NOT an advantage in the phone world. It is a massive advantage for database applications or large web services... but certainly not for phone apps in the near future.
I know what 64-bit computing is. And I didn't say anything about phone chips, or ARM. It seems like you're posting this rant to every subthread, regardless of content?
That is an interesting conclusion. I did not realize x86 already had 128-bit registers. That was really what I was referring to. We might not need to worry about addressing more memory yet, but I feel like larger registers will still be useful.
Actually, x86 already has 256-bit vector registers (AVX), and will have 512-bit vector registers in a few years (AVX-512). If you include the exotic-but-still-x86-based xeon phi it already has 512-bit vector registers.
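For anyone who hasn't seen them, those vector registers are exposed pretty directly through compiler intrinsics. A minimal C example using SSE2 on x86 (not ARM; just to show what a 128-bit register buys over a 64-bit general-purpose one):

    #include <emmintrin.h>  /* SSE2 intrinsics, x86 only */
    #include <stdint.h>

    /* Add four pairs of 32-bit ints in a single vector instruction. */
    void add4(const int32_t *a, const int32_t *b, int32_t *out) {
        __m128i va = _mm_loadu_si128((const __m128i *)a);
        __m128i vb = _mm_loadu_si128((const __m128i *)b);
        _mm_storeu_si128((__m128i *)out, _mm_add_epi32(va, vb));
    }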
Nice. I suppose this is for memory-mapped file I/O? Or do those beasts really have access to terabytes of core? (In which case I will become jealous of my dad, who works with AS/400s.)
No seriously, there is little to no speed difference in 32-bit vs. 64-bit computing. The REAL benefit is the ability to address memory beyond 4GB. But even the most high-end phones are stuck with 2GB... hell, a number of laptops are still shipping with 2GB of RAM, let alone phones. The 64-bit "advantage" is almost entirely a marketing gimmick.
That said, it is well known that Apple's iPhone chip is leagues ahead of its competitors in terms of performance per watt. Qualcomm has a value buy with Snapdragon and its integrated LTE chip + FCC conformance. But Apple has full integration and controls the software and hardware from the bottom up. That is a real advantage that is leading to improved battery life and faster performance.
(This is less an "Apple Advantage" and more of an "Android disadvantage". Android should be able to catch up if they got their act together... but the reliance on Dalvik-VM code and unoptimized APIs leads to noticeably worse battery life)
Beyond that, Apple is beginning to invest in state-of-the-art foundries, possibly to create chips in-house by 2016 or 2017. They've also bought out the high-end 20nm wafer supply through 2014, forcing their competitors to lag behind on the older 28nm process node. (Hell, even AMD and NVidia are feeling the sting: all of their roadmaps to better GPUs are stuck at 28nm. Only Intel, which has reached 14nm in its own fabs, remains unaffected by Apple's purchasing power.)
When your company has $100 Billion in cash, you can afford to have a process node advantage over your opponents.
> No seriously, there is little to no speed difference in 32-bit vs. 64-bit computing. The REAL benefit is the ability to address memory beyond 4GB.
No, there really is a huge performance gain for 64-bit processing for certain algorithms when coded correctly. Basically anything that works with vector-like data can easily benefit. I'm sure there are lots of mobile multimedia developers who relish the change to 64-bit.
(Of course it's possible the 32-bit predecessor to this chip special-cased certain 64-bit operations, e.g. double float arithmetic, in which case even fewer algorithms would benefit from widening registers across the board. I'm not familiar enough with ARM architecture to comment on this.)
Mobile multimedia developers rely on hardware decode for codecs, Fourier transforms, and the like. Such code can only slow down if moved to a vector processing unit similar to Intel's SSE. Embedded web video is standardizing upon H264, and accelerated audio is everywhere as well.
Game programmers will prefer a faster GPU, since none of that stuff is actually calculated on CPUs nowadays. (In fact, Apple's superior GPU is one of the reasons the iPhone "feels" so much faster than many Android devices.)
So unless you're going to be doing software decode of H265 (or some other future codec) or something, my bet is that multimedia processing will remain the same: it will go to the dedicated multimedia DSP that is on every phone, and be handled extremely efficiently (power-wise).
> Mobile multimedia developers rely on hardware decode for codecs, Fourier transforms, and the like
Uh, no? Yes, video decode for common formats is hardware-accelerated, but I've never seen dedicated Fourier transform hardware in consumer devices, and I can't think of any other "and the like" algorithms that are hardware-accelerated anywhere other than at the CPU register level.
> Game programmers will prefer a faster GPU, since none of that stuff is actually calculated on CPUs nowadays
Mm, I think this is dubious. I agree GPUs are better than CPUs for many multimedia applications, but getting data to and from GPUs is not fast. And of all the multimedia applications I run on my desktop (mplayer, Audacity, the GIMP, Inkscape), none currently use the GPU, except maybe mplayer for certain videos.
DxVA passes tasks like iDCT (Inverse Discrete Cosine Transform) to the GPU on Windows. If you are running ANY DxVA codec on any Windows computer, the process happens exactly as I've described.
I assume a lot of people watch Youtube on Windows computers, amirite? The iDCT is basically a Fourier Transform as far as the math is concerned. Other portions of the H264 codec (such as motion compensation) are similarly increasingly hardware-accelerated... even on crappy integrated GPUs like the old GMA950.
Phone hardware, on the other hand, is basically state-of-the-art. I wouldn't be surprised if phones of today had superior hardware decoders to the crap that Intel churned out for bottom-of-the-barrel consumers back in 2009.
Congratulations, you have proved the tautology that hardware-accelerated frequency-domain codecs use hardware-accelerated frequency-domain transforms.
Unfortunately you entirely missed my point about everything other than video decoding. Bandwidth between the CPU and GPU quickly becomes the bottleneck, unless you're able to move most of your processing onto the GPU, which I granted you was the right thing to do. But also as I stated, none of the popular software I use actually does this. It is all optimized for CPU processing.
EDIT: And in case you think I'm talking out of my ass, I work on a high-performance embedded product. We recently switched from a 32-bit to a 64-bit version of the (ARM-like) processor we use. Nearly every single one of our major algorithms benefited from the increased register width (although we did have to slightly modify some of them to do so). And we don't even use multimedia operations. A lot of the gains come from simply moving less stuff around, which, when you have to process a packet every 40 cycles, really adds up.
> "Beyond that, Apple is beginning to invest into state-of-the-art foundries, possibly to create chips in house by the year 2016 or 2017. They've also bought out the high-end 20nm wafers through 2014, forcing their competitors to lag behind on the older 28nm process nodes. (Hell, even AMD / NVidia are feeling the sting. All AMD / NVidia roadmaps to better GPUs are at 28nm technology... Only Intel: who has reached 14nm on their in-house labs, remains unaffected by Apple's purchasing power)"
Very interesting. Would you have a source? I know Apple did the same "trick" with touch screens when the iPhone was introduced. I think these actions are typical of Cook - from what I understand, before he became CEO he was responsible for the whole supply chain, and it's likely an area he's still involved in.
On Apple eating up the wafer supply: it's mostly speculation from rather illegitimate rumor sites. But those keeping up with the current roadmaps note that Apple and Samsung are reaching 20nm far ahead of their competitors... and even AMD and NVidia have their 20nm plans pushed out to 2015 or later.
"Proof" is nonexistent, but the product roadmaps I've seen seem to match the rumors.
Wouldn't a 64-bit processor be able to perform more accurate floating point operations faster? I definitely don't believe the only benefit of a 64-bit architecture is being able to address more memory.
Depends highly on the architecture. Both floating-point and vector operations are often special-cased in the pipeline (e.g. x86), so e.g. 64-bit floating point operations on a particular 32-bit processor may not exhibit worse performance than if that processor had 64-bit registers.
I'm not familiar enough with the particulars of ARM to answer confidently for floating point, but to take an example that's not usually special-cased, say bit-vector arithmetic: yes, those operations will execute twice as quickly with registers twice as wide.
ARM 32-bit did not have SIMD (vector) double precision, while ARM 64-bit does, so here it's definitely a win.
On x86 though, both 32-bit and 64-bit did double precision vectors just fine, so it didn't really apply there (except that the fp register count was doubled).
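For what it's worth, this shows up directly in the intrinsics: the AArch64 NEON double-precision vector types and operations simply didn't exist in 32-bit ARM NEON. A minimal C example (compiles only when targeting AArch64):

    #include <arm_neon.h>

    /* Two double-precision adds per instruction; 32-bit ARM NEON had no
       float64x2_t at all, so this had to be done one double at a time. */
    float64x2_t add2(float64x2_t a, float64x2_t b) {
        return vaddq_f64(a, b);
    }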
Only if your code were written for SIMD. But that is not the code a typical compiler outputs.
Most SIMD code is heavy number-crunching stuff like multimedia or GPU shaders. But much of that low-level work is handled off the CPU on phone platforms. It is simply more power-efficient to have a hardware multimedia decoder.
(1) auto-vectorization is enabled by default at -O3; (2) the loop constructs it needs are fairly simple; (3) arguing that unoptimized code will be slow is still a poor argument.
ARM64 does NOT grant you the vectorized instruction advantage.
So many people are arguing here, but clearly few of you people have even worked with ARM chips at the assembly level.
Yep. Thankfully I can back up my arguments with quoted facts.
EDIT: and I already granted that vectorized and floating-point operations don't necessarily benefit from larger register widths, so I don't know why you're even arguing. Besides, the OP wasn't even asking specifically about ARM!
To be fair, iOS is getting performance improvements from ARMv8. That leaves me with the question of whether such improvements apply to non-ObjC code and, if so, why the rest of the industry didn't realize it.
Android has basically been a three-legged horse in the speed/performance race because of the Dalvik runtime anyway. Who cares about 32/64 bits? When Android comes out with its new Java compilation model and runtime (ART) in version 5 and claims that your phone's OS and apps can now execute twice as fast, where does that leave Apple in the performance race?
All of this does not really matter anyway. The iPhone has always had better performance than almost every Android phone. This has not saved it from losing massive market share. The PC/Mac history is repeating with Android/iOS.
> Apple dominates the high end, and is at 0% of the low (some might say junk) end.
Where did you get the idea that Apple dominates at the high end? They haven't for a while now.
The Galaxy S4 was outselling the iPhone 5 all by itself for several months (before the 5S came out). The collective of high-end Androids has taken over the high-end market on a seemingly permanent basis. Apple is at less than 50% and on a steady decline vs. the collective of high-end Android phones.