
Hmm...sure puts all the criticism Apple got for not waiting for Kaby Lake for its MacBook Pros into perspective.

We are in an effective post-Moore's law world, and have been for a couple of years. Yes, we can still put more transistors on the chip, but we are pretty much done with single core performance, at least until some really big breakthrough.

On the other hand, as another poster pointed out, we really don't need all that much more performance, as most of the performance of current chips isn't actually put to good use, but instead squandered[1]. (My 1991 NeXT Cube with 25 MHz '40 was pretty much as good for word processing as anything I can get now, and you could easily go back further).

Most of the things that go into squandering CPU don't parallelize well, so removing the bloat is actually starting to become cheaper again than trying to combat it with more silicon. And no, I am not just saying that to promote my upcoming book[2], I've actually been saying the same thing since before I started writing it.

Interesting times.

[1] https://www.microsoft.com/en-us/research/publication/spendin...

[2] https://www.amazon.com/MACOS-PERFORMANCE-TUNING-Developers-L...




Kaby Lake offers significant reductions in power consumption, which is important for longer battery life on laptops and definitely something Apple should be concerned about.

Laptops that have had upgrades from Skylake to Kaby Lake reported significant increases in battery life:

http://www.itworld.com/article/3154243/computers/12-things-y...

http://www.theinquirer.net/inquirer/news/3001791/lenovo-thin...

http://www.pcworld.com/article/3127250/hardware/intel-kaby-l...

http://uk.pcmag.com/new-razer-blade-stealth-late-2016


Except it seems like any time there is a nice drop in power consumption Apple says "oh look at how much thinner we can make it by shipping it with a smaller battery!"


A physically smaller battery usually translates into a weight reduction. I'd even argue that it's the point of laptops: A mobile/light computer. But how you balance the two is a matter of opinion.

If you look at the number of battery-saving features they have been working on, it's obvious that they focus on lowering consumption instead of engaging in the spec numbers game.


> But how you balance the two is a matter of opinion.

I guess. But I do wonder if there isn't some framework for thinking about the tradeoff in a quasi-objective way. Is it possible to make any kind of general statements about the marginal utility of improvements in weight/thickness and battery life?

One hour of battery life lets me take my laptop to a meeting, or sit on the sofa for a while. Two hours lets me watch a movie on the battery. Eight hours lets me work away from a power socket all day. Twelve hours lets me do that and read HN in the evening. Twenty-four hours is longer than I'm ever away from a power socket. It feels like there's steadily increasing utility up to that "as long as I want to be sat in front of a screen in one day" point.

A 15 inch, 2.41 cm thick, 2.54 kg laptop rests comfortably on my thighs, goes in a padded envelope, and fits in my satchel without taking up much space. A 15 inch, 2.79 cm thick, 2.5 kg laptop somehow did seem a lot more ungainly. A 15 inch, 1.55 cm thick, 1.83 kg laptop fits in a bag just as easily, and is comfortable to hold in one hand. Given that I don't often hold a laptop in one hand, that seems like a small increase in utility.

Are there other reasons why that 0.86 cm of thickness and 0.71 kg of weight are a real improvement? What does it let me do that I couldn't before?


I don't know about thickness, but there is a reason I prefer light laptops: carrying them in a backpack. If I have a 20min commute by bike every day in which I carry the laptop, it really does make a difference. Your back will thank you for those 0.71 kg you took off it.


I have panniers. I rode a bike for years with just a rucksack, and I look back now on those wasted years and despair at my foolishness. If you get panniers, your back will thank you for much more than 0.71 kg!


Or you could put the whole weight of your laptop in a bike bag.


I had to do that; the MacBook Pro is very heavy.


> A 15 inch, 1.55 cm thick, 1.83 kg laptop fits in a bag just as easily, and is comfortable to hold in one hand.

Are you sure that's "kg" and not "pounds"? I could not imagine holding a 1.83 kg laptop in one hand for longer than a minute or so. (I assume you hold the laptop by its side while the screen is open and active, which creates a surprisingly high torque on one's fingers.)

I'm typing this on a 2012 Asus Zenbook that weighs 1.3 kg, which is quite near the sweet spot for me. The only thing I don't like about it is that it has no replaceable parts whatsoever, and the 4GB RAM is starting to feel a bit tight. So if I ever decide to get a new notebook, I might choose something a bit heftier if it offers replaceable RAM, battery, etc.


It's kg. You can't imagine holding ~4lbs for more than a minute?


That may be so, but it doesn't invalidate the point made by the parent comment. What Apple chooses to do with that improvement in power consumption may be up for discussion, but the fact that there is fair criticism of their decision not to wait is not.


Yes, and then they will make the battery thinner than the CPU cooler, and instead of solving that, one foot will sit slightly lower than the other three and the whole laptop will wobble on a desk.

The iPhone6(s)/7 camera lens is such shockingly bad design I still can't understand how Apple justified it to themselves.


That's what customers seem to want, so that's what Apple is giving them. I can say that one of my favorite things about my new iPad Pro is how thin and light it is, which makes it comfortable to read in bed in a way that hasn't been true for me in the past.


Oh no they don't. CPU performance is roughly the same, and power usage is also roughly the same. What you do get is some hardware-accelerated scenarios, like playing 4K high-bitrate HEVC, which would bring a Skylake machine to a standstill but drops to below 10% CPU usage on Kaby Lake.


Do you have evidence to back that up? All the media I've found talking about Kaby Lake has talked about several hours of battery life improvements.


> we really don't need all that much more performance

That's a ridiculous thing to say, especially on HN. There are many tasks that would benefit from higher performance. Not everyone limits themselves to text processing.

We're also CPU-bound for many tasks at the moment - consumer SSDs got to 2-3GB/s, RAM to 40-60GB/s. Not many useful computations can be performed on a single core at such speeds (hardware-accelerated encryption is one exception I can think of).

Yes, we need more cores, but you can scale linearly only that far - production gets expensive, electrical consumption gets expensive, cooling gets expensive, you run out of space.
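
To make that concrete, here is a minimal, hypothetical micro-benchmark (mine, not from any source cited in this thread) that times a trivial single-threaded byte sum over a 1 GiB buffer and prints the resulting GB/s, which you can compare against the SSD and RAM figures above. It assumes a POSIX system with clock_gettime; take it as a sketch, not a rigorous benchmark.

    /* Hypothetical sketch: how fast can one core do even a trivial
     * "useful computation" over a buffer, in GB/s? */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        const size_t n = 1u << 30;              /* 1 GiB buffer */
        uint8_t *buf = malloc(n);
        if (!buf) return 1;
        for (size_t i = 0; i < n; i++) buf[i] = (uint8_t)i;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        uint64_t sum = 0;
        for (size_t i = 0; i < n; i++)          /* the "useful computation" */
            sum += buf[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("sum=%llu, %.2f GB/s on one core\n",
               (unsigned long long)sum, (n / 1e9) / secs);
        free(buf);
        return 0;
    }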


Developers need more performance. Video editors need more performance. CGI artists need more performance. Servers need more performance. Phones need more performance.

In contrast, desktop computers for general-purpose needs are already adequate. Word processing, YouTube, Facebook, and non-enthusiast gaming have pretty modest needs that are easily met by a mid-range i5.

The only thing that will put pressure on that is entirely new applications that are much more demanding. VR could be one of those things, but I don't see it as a significant market maker inside of a decade. The same goes for AI.


> Developers need more performance.

But do we need more hardware performance or more software performance?

My main point was that things are slow mostly not because the hardware is inadequate, but instead because the software is inadequate, and the software is inadequate because developers have become lazy, because the hardware used to bail us out for free.


This is the laziest argument ever made.

Hardware is continuing to improve in performance, it's just that the gains are focused on things like power efficiency rather than brute force. Likewise, software continues to improve in performance: JavaScript engines, to use but one example, are consistently getting faster.

If you want to live in some grim world where everything is shit and nothing will ever get fixed, knock yourself out, but the truth of the matter is there are relentless and significant improvements being made across the board.


> JavaScript engines, to use but one example, are consistently getting faster.

JavaScript is your example for efficient software? Hmm...

The only thing happening with JavaScript is trying to optimize away all the incredible inefficiency of the original design, at tremendous expense.

And the reason we are doing this in the first place is that we are now re-implementing all sorts of desktop software in JavaScript...typically at least an order of magnitude bigger and slower.

So thanks for making my point for me.

EDIT: And I am sorry that you perceive my message as "grim". I personally find it hopeful, because we have massive untapped power lurking in the machines we already have, no Intel-provided improvements needed.


JS is quite a bit faster than, say, idiomatic Objective-C. Or Java run through Dalvik.

Personally, I don't see why everyone focuses on JavaScript as a massive source of inefficiency compared to, say, PHP. Not to mention the rose-colored glasses we see the past in: folks back in the '90s used to work day in and day out with VB6 (or even interpreted Java) apps.


Comparing browser games with Dalvik games on a device with a quarter of the CPU/GPU of my desktop, I feel obliged to point out:

Nope.

Now this may be that browsers just suck at graphics, or at games in general. Perhaps. Does it matter? No. JavaScript does not match Java in speed, not regular Java, nor Dalvik. Comparing node.js apps with PHP apps, it doesn't match PHP performance either.


> Now this may be that browsers just suck at graphics, or at games in general. Perhaps. Does it matter? No.

Yes, it does, because I'm talking about JavaScript. Not about the entire browser graphics stack (which, by the way, is architecturally better than that of Android).

> JavaScript does not match Java in speed, not regular Java, nor Dalvik.

No, Dalvik is slow compared to JS. Its trace based architecture is roughly on par with JS engines in 2009.

> Comparing node.js apps with PHP apps, it doesn't match PHP performance either.

Zend doesn't even have a JIT.


Well, that may be true, and a big plus for the developers of those systems, but if the results aren't there, what's the point?

Even if it's browser APIs rather than directly the language/interpreters ...


> JS is quite a bit faster than, say, idiomatic Objective-C.

(a) That depends hugely on what you consider "idiomatic" Objective-C and (b) yeah, that's kind of my point.

If you write Objective-C in the style that is currently popular and propagated, treating it as a unified programming language like JS, Java or Smalltalk, you're going to be slow. In my book[1], I show how easy it is to be slower not just than JS or Java, but even slower than a byte-coded Smalltalk interpreter or Ruby. And the sad part is that it's not even particularly expressive!

On the other hand, I also show how you can use the original Objective-C style, let's call it "Software-ICs", to create an XML parser faster than libxml and with a much nicer interface, or a Postscript interpreter that uses Objective-C objects as Postscript objects and is still faster than Adobe's interpreter (written in C) for most basic language stuff like arithmetic and loops (it doesn't do the graphics itself, so comparing those numbers would be pointless). In fact, said Postscript interpreter, written in Objective-C, is faster at arithmetic than the "bad" Objective-C style, while doing automatic arithmetic promotion, so being safer.

The "Software-IC" style for Objective-C involves fast components written in C connected at the architectural level by dynamic messaging. It is highly idiomatic, convenient, fun and fast as hell. Interestingly, variations of that basic style are being rediscovered these days by the people doing "imperative shell, functional core". See for example Gary Bernhard's talk[2] and then tell me that the final picture doesn't look almost exactly like the pictures Cox drew of Software-ICs (and remember that Cox also said that (a) messaging didn't have to be synchronous and (b) the implementation language for the Software ICs didn't matter much). The Unix tools are also very similar, with fast components written in C connected at the architectural level with pipes.

This is also the reason why I think my Objective-Smalltalk language[3] actually has a good chance of enabling very fast programs despite the very high level of abstraction it is aiming for. Being able to pick an appropriate interconnection style and fast components is the key to fast software.

Back to the original point, we are leaving orders of magnitude of performance on the table, quite frequently for no particular reason. And as I wrote elsewhere, I actually think this current point we have reached is good for software and for software people, because we can get that performance quite easily by writing decent code, and writing decent code is starting to matter more again. Just like I think the end of Moore's law might finally break the stranglehold Intel has had on hardware architecture, so innovative architectures (remember the Transputer[4]?) have a chance and aren't clobbered by Intel's next fab-step. Speaking of innovative hardware architecture, has there been any new news on the Mill?

Interesting times.

Anyway, I didn't bring up JavaScript; this problem is everywhere. I just thought that bringing it up as a counterpoint was...humorous.

[1] Did I mention my book? g https://www.pearsonhighered.com/program/Weiher-i-OS-and-mac-...

[2] https://www.destroyallsoftware.com/talks/boundaries

[3] http://objective.st/

[4] https://en.wikipedia.org/wiki/Transputer


So I checked out your Objective Smalltalk (aside: isn't that a pleonasm?) link, the last update was from May 2014. Are you still working on it?


Thanks for checking it out!

Yes, I am still working on it; in fact, I just added the first cut of an SQLite scheme-handler 3 days ago (https://github.com/mpw/Objective-Smalltalk ). Still read-only for now and doesn't do any complex queries. On the other hand, table-name completion is kind of nice.

You are right though that I should update the site more often.

The seeming pleonasm is intentional...and not really a pleonasm. When you look more closely at the "Objective" side of Objective-C, it really is more about architectural interconnection than just adding Smalltalk to C; Smalltalk-style messaging is just one mechanism chosen.

Adding the kind of architectural promiscuity that's in Objective-C to Smalltalk is a big boon. On the other hand, there is the (for me) big idea of the Smalltalk class libraries, the fact that the machine primitives are not the conceptual primitives (Object -> Number -> SmallInteger, Collection -> Array). Taking that and applying it to the "objective" part is also a big deal. I believe.


I agree with you and disagree with parent.

Remember the original Blackberry 950?

https://en.wikipedia.org/wiki/BlackBerry_950

It was an 80386 w/ 512KB of SRAM and 4MB of storage, and it ran on a single AA battery for days.

..or a VAX/11?

32-bits in 1997:

https://en.wikipedia.org/wiki/VAX#Processor_architecture


meant 1977.


JavaScript is surprisingly fast and can easily outperform Ruby, Python, and PHP. It can at times keep up with or exceed Java, which is extremely popular. How many gigawatt-hours of power are spent on an annual basis to run trashy, completely unoptimized PHP scripts? But okay, blame JavaScript.

Sure, you can piss all over JavaScript for having been shitty and slow, but it's not like that any longer. A lot of people worked very hard to dig it out of that ditch it was born in and make it into something that performs well.

It does everything Java promised and a whole lot more. This is not the world anyone predicted would happen.

> ...we are now re-implementing all sorts of desktop software in JavaScript...

Have you stopped to ask yourself why? It's cross-platform. It's ubiquitous. It's fast enough. Name another language that's as effortless to get started with, that's as insanely portable, that can be distributed easily through innumerable delivery channels. Java? Nope. C++? Hardly. C? Not really.

It's easy to sit in an armchair and bitch about how crappy things are, how much "power" we have lurking in these machines, and yet do nothing to tap into it.

What language do you use most frequently? What improvements could be made to that? No language is at its performance peak, not even long-time performance champions like C++ or C. There's still tons of room for compiler optimizations, for new libraries that better vectorize things, for better design patterns that make use of multiple cores better.

This is all at the language level as well. What about kernel issues? Linux is far from flawless. BSD could use improvements. That's not to say there aren't people working to make these things better, to push performance.

If these things are hard, that's the answer to why things aren't improving to your satisfaction.


> Sure, you can piss all over JavaScript for having been shitty and slow, but it's not like that any longer. A lot of people worked very hard to dig it out of that ditch it was born in and make it into something that performs well.

And yet Electron apps are still shitty and slow. Funny how you only compare JavaScript to languages that have always performed poorly on desktops and not something apps are actually built in, like C/C++/C#.


The original versions of things built with NodeWebKit and Electron were unbearably slow, it's true, but they've come a long way.

If you want to build a cross-platform desktop application with any of C, C++ or C#, knock yourself out. Someone using Electron will have a good-enough prototype within weeks while you're still working on your build process.

Where you can focus on one OS you can get better results with pure C++, Objective-C, or Swift, but going cross-platform massively complicates things. Today JavaScript is a pretty good answer to that problem, and performance is adequate enough that people are willing to pay the price in terms of footprint.

Software that exists but is suboptimal is better than software that doesn't exist but is hypothetically better.

If you don't like those applications and think you can knock out something better in C++, by all means, but you've got quite a hill to climb.

Microsoft Visual Studio Code, as one example, was written deliberately using Electron even though Microsoft obviously has some amazingly talented C++ people who can build cross-platform applications: Microsoft Office and a compiler. I don't think they took that decision lightly.


Rather missing the point that we've replaced much native software with thin clients running JS in a browser or pretending to be an app. Whether it needs to be or not.

For many reasons - multi-device support, convenience, data harvesting, lock-in. Performance isn't something we generally get now that everything becomes network-constrained rather than CPU- or IO-constrained.

So yeah it's being fixed because it has to be, because it's a terrible experience in comparison. I don't deny that often the convenience far outweighs the loss of performance and latency.

You can't avoid the fact it's heavily skewed the market against those needing native power on the desktop. If you're doing things that need that, the last 5 years have been rather underwhelming progress wise.


> JavaScript engines, to use but one example, are consistently getting faster.

When was the last time you were really impressed with https://arewefastyet.com/ ? Recent improvements are looking more like stragglers catching up with the state of the art than like the state of the art improving much. This is a very good thing, but it is not consistently getting faster. More like "getting more consistently fast". Good, but not the same. Actually it seems to be the exact same pattern of approaching a wall as we see (and lament) with desktop CPUs.


> Likewise, software continues to improve in performance: JavaScript engines, to use but one example, are consistently getting faster.

That's the point. We've seen hardware improve but we've had a net loss of performance because a lot of stuff has moved to web apps and other "higher level" languages.


> JavaScript engines, to use but one example, are consistently getting faster.

And now people write basic user-space tools in JavaScript instead of C, with a hundred libraries bolted on as dependencies, of which only 0.1% of the code is ever used, and you'll also need to boot the JS runtime before any of that code even runs.

Guess which tool is faster. The C one or the JS one?

I'm not saying "Do everything in C" (I'm rather in the opposite camp), but he does have a quite valid argument there.


That's great and all, and damn them for being lazy! But you know, it's going to take a lot of time and money to fix that problem, assuming it is ever fixed at all. In the meantime I'll take the nice hardware upgrades.


We are already witnessing it with AOT compilation to native code in Java and .NET, which should never have been JIT-based runtimes to start with.


We need both.


"Software gets slower faster than hardware gets faster" -- Wirth's law.

So it seems you get to pick one. And of course, the observation is that we're mostly no longer getting faster hardware.


Only software other people write. :P


Real programmers program in machine code, although you can get away with using assembler and still call yourself a programmer nowadays. Heck, I even heard about someone using C who called herself a programmer. These new processors allow non-programmers to write software and use high-level languages that make you more productive.


> In contrast, desktop computers for general-purpose needs are already adequate.

This falls down on the word "general purpose". What is general purpose? And what is general purpose today vs tomorrow? I put it to you that high definition VR and AR, which currently stretch the best consumer machines, are going to be part of "general purpose" as soon as people can put out affordable mass market hardware that supports it.

General purpose expands to use the hardware it has available.


Not really. This is the fundamental principle of technology disruption. At some point one paradigm over-serves the market to such a degree that a new, less functional entrant comes in and starts to steal away market share.

This happened most dramatically with hard disks, where 8" drives gave way to 5 1/4" even though they took a hit in storage capacity. Later 3 1/2" drives nibbled away at that even further, with the same penalties. Notebook-sized drives ate away at that, too, despite their limited capacity. Every time the driving factor was not performance or capacity, but convenience. Lugging a twenty-pound hard drive around in your laptop wouldn't make any sense even if it could hold 400TB of data.

Desktops yielded to notebooks, notebooks yielded to phones and tablets.

General purpose migrates to the most convenient solution that's adequate for their needs.


Except that the 3.5" disks were not only smaller but higher capacity and more robust than 5.25".


I put it to you that VR and AR are not going to be general purpose, at least not in our lifetime. I doubt there will even be a significant niche for either technology as they remain at best a novelty and at worst an expensive gimmick.

What people want (and have always wanted) from VR* will be out of reach for some time.

*You can't feel virtual objects or move freely in a virtual world (without being hindered by the physical world).


> In contrast, desktop computers for general-purpose needs are already adequate. Word processing, YouTube, Facebook, and non-enthusiast gaming have pretty modest needs that are easily met by a mid-range i5.

OK, but that has been the case for a long time. Microsoft Word users are never the market for the highest end desktop CPUs.


But performance from where?

The single biggest performance increase I've experienced in the last decade has been due to the move to SSD.

Video editors and CGI artists need more GPU and IO performance. It's what gives them real-time feedback. Developers need fast IO (I have slow compile times, but I rarely desire a new CPU to fix that).

The CPU has largely become irrelevant, performance wise. It's great when you hit the final render button on a video, or 3D scene, or when you're compiling code. But those actions constitute a minority of our interactions with computers.


> The CPU has largely become irrelevant, performance wise. It's great when you hit the final render button on a video, or 3D scene, or when you're compiling code. But those actions constitute a minority of our interactions with computers.

Read any HN thread full of complaints about browsers and you'll see otherwise.


> Developers need fast IO (I have slow compile times, but I rarely desire a new CPU to fix that).

Compile times are almost always CPU-bound. Put all your code and headers/libs on a ramdisk [0] and hit compile and see how much faster it is. My bet is, not much.

[0] https://www.jamescoyle.net/how-to/943-create-a-ram-disk-in-l... (just an example)


I acknowledged compile times are CPU bound. I'm just saying that I'll take fast IO over faster compile times any day, for development.

I want any data I access to appear instantly. Compilation often happens in the background and only very occasionally does it specifically block me from working. Slow IO, however, used to block me from working constantly.


Viewing a document or PDF can be remarkably slow. And large image heavy documents are pretty common. Of course that is not always entirely CPU bound, but it often is.

Also, a lot of the promise of computers is the ability to process things quickly. If a "big data" task takes 1 sec to run you can afford some trial and error. Increased performance should make software usable by more people with less training.


I'm pretty sure the only thing that's ever going to make the performance of Eclipse and IntelliJ IDEs tolerable is gains in raw single core performance, so it can't get here soon enough.


> Word processing, YouTube, Facebook, and non-enthusiast gaming have pretty modest needs that are easily met by a mid-range i5.

You can go lower than that. When I didn't do enthusiast gaming, my desktop PC was an Intel NUC with an Ivy Bridge low-power Core i3 and its integrated Intel HD Graphics. That thing could easily do Portal 2 at 720p on acceptable niceness settings, and Minecraft at 1080p30. Video playback and browsing was perfectly smooth at 1080p, too.


> Most of the things that go into squandering CPU don't parallelize well

That's not true. We just aren't trying very hard to parallelize our code. I work on parallelizing browsers and there's a lot that can be done. For example, libpng leaves 2x potential performance on the table by not pipelining the Huffman decode with the defiltering.

That said, I do agree that a better use of our time in most cases would be to improve the sequential performance of our software, because usually you shouldn't start parallelizing until you've exhausted the potential sequential improvements.
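
As an illustration of the kind of pipelining meant here, a toy sketch (not libpng code; the stage names, chunk counts and queue size are made up) where a "decode" thread feeds a bounded queue that a "defilter" thread drains, so the two stages overlap instead of running back to back:

    #include <pthread.h>
    #include <stdio.h>

    #define CHUNKS 8
    #define QSIZE  4

    static int queue[QSIZE];
    static int q_head = 0, q_tail = 0, q_count = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;

    static void *decode_stage(void *arg) {
        (void)arg;
        for (int i = 0; i < CHUNKS; i++) {
            /* ... stand-in for decompressing chunk i ... */
            pthread_mutex_lock(&lock);
            while (q_count == QSIZE)
                pthread_cond_wait(&not_full, &lock);
            queue[q_tail] = i;
            q_tail = (q_tail + 1) % QSIZE;
            q_count++;
            pthread_cond_signal(&not_empty);
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    static void *defilter_stage(void *arg) {
        (void)arg;
        for (int done = 0; done < CHUNKS; done++) {
            pthread_mutex_lock(&lock);
            while (q_count == 0)
                pthread_cond_wait(&not_empty, &lock);
            int chunk = queue[q_head];
            q_head = (q_head + 1) % QSIZE;
            q_count--;
            pthread_cond_signal(&not_full);
            pthread_mutex_unlock(&lock);
            /* ... stand-in for defiltering the chunk ... */
            printf("defiltered chunk %d\n", chunk);
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, decode_stage, NULL);
        pthread_create(&b, NULL, defilter_stage, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }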


What do you mean by putting it into perspective? Those are desktop CPUs, which are separate from laptop CPUs.

On laptops, Kaby Lake would give them the ability to add more RAM and IIRC even lower power consumption.


Apple didn't need to wait for Kaby Lake to be able to add more RAM; other laptops have 32GB.

Lower power consumption wouldn't mean a lot when most people's laptops are idle, or doing very little work, most of the time anyway.

I guess these are some of the trade-offs Apple chose to roll with.


The chipsets for the Skylake generation in current MacBooks top out at 16GB. They would have to use a more power-hungry version of them or wait for Kaby Lake. They decided to limit memory instead.


> Lower power consumption wouldn't mean a lot

> when most peoples laptops are idle

Er...RAM eats power even when the machine is idle.


That's a given. The question is: how much?

As one data point, Tom's Hardware measured¹ 12W for 32 GB DDR4 in 2014, scaling linearly. According to some random guy on the Internet², the MBP's LPDDR3 power usage is similar to DDR4 when active, and much lower - 10% - in standby. I'm not sure if standby is limited to system-wide standby, or if the computer can selectively standby portions of the system RAM (that would seem desirable). For what it's worth, this³ is the tech ref of the Samsung memory used in the 2016 MBP (according to the iFixit teardown).

So we have a range of 1.2 to 12W for 32 GB (and 0.6 to 6W for 16 GB), if LPDDR3 is comparable to DDR4 and depending on active or standby usage of the memory.
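
Spelling the arithmetic out (under the linear-scaling assumption and the ~10% standby figure above):

    P_{active}(32\,\mathrm{GB}) \approx 12\,\mathrm{W}
      \;\Rightarrow\; P_{active}(16\,\mathrm{GB}) \approx 12 \times \tfrac{16}{32} = 6\,\mathrm{W}
    P_{standby} \approx 0.1 \times P_{active}
      \;\Rightarrow\; 0.6\,\mathrm{W}\ (16\,\mathrm{GB}),\quad 1.2\,\mathrm{W}\ (32\,\mathrm{GB})
    \text{so } P(16\,\mathrm{GB}) \in [0.6, 6]\,\mathrm{W},\qquad P(32\,\mathrm{GB}) \in [1.2, 12]\,\mathrm{W}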

¹ www.tomshardware.com/reviews/intel-core-i7-5960x-haswell-e-cpu,3918-13.html

² With references, it's a good post/thread https://www.reddit.com/r/hardware/comments/5dimal/lpddr3_vs_...

³ https://memorylink.samsung.com/ecomobile/mem/ecomobile/produ...


Oops, that was pretty wrong.


> On the other hand, as another poster pointed out, we really don't need all that much more performance, as most of the performance of current chips isn't actually put to good use, but instead squandered

The real benefit, in my view, is to emulation. It's extremely CPU intensive. And the more accurate you want the emulation, the more the resource demands go up. Today's CPUs are only just powerful enough for very good reproduction of a 16-bit gaming system with a poorly optimized emulator, or a 32-bit system with a greatly optimized emulator. But for PS2 and beyond, we don't have the power for highly accurate simulation yet.

It's also nice for "wasting" power by writing in higher-level abstractions that are not as efficient, but are safer and easier to understand (think always bounds-checking a vector, instead of dropping down to raw pointer arithmetic.) Something that applies to all application development, including for emulators.


What we really need is not the performance increase, but a good excuse to spend some money on new toys.

It would be interesting to know how many of those who really feel strongly about this kind of thing are actually buying based on their computing needs and how many have some other reasons (like getting some tangible reward for having worked hard - money in a bank account is abstract, a new PC is quite concrete).


So was this article just nothing?

http://spectrum.ieee.org/semiconductors/devices/intel-finds-...

"Sometime in 2017, Intel will ship the first processors built using the company’s new, 10-nanometer chip-manufacturing technology. Intel says transistors produced in this way will be cheaper than those that came before, continuing the decades-long trend at the heart of Moore’s Law—and contradicting widespread talk that transistor­-production costs have already sunk as low as they will go"


Well, the comment you're replying to is sort of confusing - Moore's law does talk about transistors, yet his point was that even with more transistors, performance doesn't improve. That's also what your article mentions.


Does it always have to be about the speed? How about native USB 3.1 support (which would make a lot of sense since they went all in with USB Type-C)? New 200 series chipset (Union Point) anyone? This is not an exhaustive list of new features, just the ones that I care about.


Overshooting mainstream performance needs is what drove netbooks... and phones. Now, phones have overshot. So (e.g.) the iPhone 5s and Samsung Galaxy S5 are still selling well.

The mainstream doesn't need the fat trimmed. E.g. Vulkan/Metal, which halve the CPU load (not GPU load) of graphics, sit idle.



