Intel Publishes Microcode Patches, No Benchmarking or Comparison Allowed (perens.com)



Before Zen, we all kind of assumed they were so far ahead that AMD were more likely to be out of business before they would ever be a credible threat again.

I actually thought Intel must have had some tricks up their sleeves in terms of performance gains that we hadn't seen yet, simply because there was no market need to roll them out and they had so many years of coasting on marginal gains.

Seeing them take this stance makes it look a lot like the microcode hit really is that bad, and that the emperor has no clothes.

Clearly, they don't have an answer to AMD at all. If this is true, their shareholders should be asking serious questions about why they've nothing significant to show for all that time and money spent when they were raking it in without a serious competitor.


Equally troubling for Intel is the fact that they're losing their advantage in fabrication. The performance of their architecture is going backwards with every microcode patch, while their move to 10nm is hugely delayed.

https://www.tomshardware.co.uk/intel-cpu-10nm-earnings-amd,n...


I'm kinda curious to see how Zhaoxin, Hygon and other manufacturers in the x86_64 space are going to play out. It would be nice to see some real competition, not just Intel vs AMD, in this space.

There are other services like Packet that offer bare-metal hosting on small Atom and ARM processors. It'd be nice to see some alt x86 processors in this space.


Hygon is just AMD's EPYC CPU [0]. I am uncertain if they will differentiate themselves more as time goes on, but at the moment almost the only difference in the kernel is that it has a different name.

I think you're more likely to see some ARM CPUs which have comparable performance to low-end and mid-range x86 before you see a new x86 competitor [1] - the overhead of making x86 perform well is just so high that I can't imagine anyone new bothering to get into the space. VIA has been in the market forever as the third seller of x86, and despite the theoretical benefits of entering into the datacenter CPU market, they've never made that leap (though I don't know enough about their history to know if they tried).

I'm hoping that ARM becoming competitive in the client CPU space ends up motivating enough driver/kernel porting work that we can start to see some more diversity in the CPU market overall. I'm excited about RISC-V, especially now that they have shipping hardware you can play with today [2]. The Mill CPU sounds super cool in a bunch of ways, but the architecture has made so many decisions that I'm unsure will play out in practice that I'm holding my excitement until I see real silicon somewhere [3].

[0] https://arstechnica.com/information-technology/2018/07/china...

[1] https://www.engadget.com/2018/08/16/arm-says-chips-will-outp...

[2] https://www.sifive.com/products/hifive-unleashed/

[3] https://en.wikipedia.org/wiki/Mill_architecture


My own speculation on this is that it's a complicated legal dance to allow a "trusted" processor, native to and manufactured in China, that is tolerable for use in higher-security (Chinese) government computers and systems.

They might have also baked in slight tweaks or customized whatever back-doors could be included if such things exist...

It's better to think of it as a Chinese subsidiary in a franchise system.


> despite the theoretical benefits of entering into the datacenter CPU market, they've never made that leap (though I don't know enough about their history to know if they tried).

They haven't; they instead entered the niche of low-cost kiosk hardware. Intel and AMD completely abandoned it due to the lower profit margins, but it has enough raw sales volume to help VIA float by.


>they've nothing significant to show for all that time and money spent when they were raking it in without a serious competitor.

Big companies rarely innovate without competition around.


I don't think this is necessarily true for big hardware companies.

If you're selling a subscription to a best-in-market service, there's not a lot of motive to innovate, agreed. Maybe you'd try a new product or a premium variant, but there's no reason to sink effort into advances that won't let you expand userbase or raise prices.

But for Intel? Before AMD got going, Intel's biggest competition was itself from 9-18 months earlier. Innovation wasn't just for new devices and new buyers; it was how they sold updated processors for machines that were already Intel Inside. They're not waiting for processors to die, they're actively pushing for them to be replaced for performance reasons.

That might create an incentive to release 'good enough' updates and dole out big improvements gradually, but in practice any of that which happened was already ending. Intel appears to be up against the wall regarding 10nm even with a competitor, and has been attempting major innovations to handle 7nm and below for years. With a revenue stream that relies on annual improvements to their product, they seem to have been leaning hard into innovation and struggling, rather than waiting for a competitor.


It is really a nice point about competing with themselves. Intel's volume is so big that they have to sell to the same customers every 2-3 years, even though CPUs can in principle last 10 years or more. So they must increase performance substantially to justify those sales.


This is a great argument against the recent trend of companies moving to subscription models just for the heck of it.


Definitely.

The common complaint about subscription models is obviously good: if the company folds you have nothing, instead of an unsupported product. But it neglects the other issue, which is that companies intentionally cut off the possibility of "good enough" to guarantee revenue.

I don't think it's an accident that products like Microsoft Office went to subscriptions around the time it became very hard to imagine a new feature actually worth paying to upgrade for.


This is certainly a myth. Everyone knows, including big companies, that as you grow in size you have a harder time innovating. The classic "lean startup" book mentions this issue and gives the example of Intuit's in-house accelerator/incubator. No way Intel doesn't know about it too.

https://www.forbes.com/sites/forbestechcouncil/2018/04/19/in...


wasn't bell labs part of a very big corporation when they invented basically everything about the modern world?


AT&T's early dominance was the direct result of the invention of the transistor that came from their own labs. From then on, the company was committed to spending massive amounts of money into developing new technology. Even if AT&T didn't directly invent a new technology, they were still one of the few companies in the world that could actually afford to buy new toys and put them into the hands of researchers who would find novel uses for it.

The Computerphile YouTube channel has some interviews with computer scientists who worked at Bell Labs during its heyday. It's incredibly interesting to hear about how they screwed around with early Linotype printers (which cost $100k+ in the 70s) and did things like reverse-engineer the font formats to create custom chess fonts.


That's not accurate. AT&T dominated telecommunications for decades before the transistor. Many of Bell Labs great scientific inventions predate the transistor, such as information theory.


Sure, but the transistor ushered in the long-distance era and helped AT&T establish their monopoly (and their eventual breakup). Regional telephone services were already beginning to drive down costs in local markets.


Maybe you're thinking of the triode, which AT&T did have a monopoly on around 1915, when they deployed the first transcontinental long-distance service using vacuum tubes to boost the signal. They bought deForest's patents and filed many more of their own on amplification circuits.

They had direct-dial long-distance by 1951, using relays and tubes.

The transistor started making a difference in telephony with the release of the 1ESS switch in 1965. But transistors were a commodity by then.


I think it is also that with a huge monopoly in an area, you can capture most of the benefits from basic research. With lots of competitors basic research gains go to all the competitors too and at some level you don't capture enough of the benefit to justify it.


Add to that, in return for their monopoly status, they released much of their IP into the public domain. Imagine how electronic innovation would have been stifled if they had held onto or sold their IP.


[flagged]


They didn't invent Google and Facebook, but they're responsible for transistors, lasers, Unix, C and C++, CCD digital image sensors, long distance microwave radio relays, the first transatlantic phone cable, MOSFETs, communication satellites, and cell networks.

Not modern internet services no, but all of the infrastructure it's built on is grounded on theirs.


All of which they were required to license on FRAND terms. Consider that the basic research done by Bell Labs was paid for by a royalty paid to it by Western Electric and the various Bell operating companies - and it was a blanket royalty, not a per-product one.


Google and Facebook are ad companies. Their “innovation” isn’t even in the same ballpark as what came out of Bell Labs


>Bell Labs invented Google, Facebook, etc? Wow, did not know that.

Compared to what Bell Labs invented, Google and Facebook are insignificant.

We could go back to Altavista and no-Facebook and we'd be more or less fine.

Giving back the Bell Labs technology would be a much harder hit...


No-Facebook would arguably be a better world the older I get...


Google and Facebook would look a lot different without transistors.


And lasers. And Unix. And switched networking. And binary digital computers. And long-haul undersea cables. And the first successful communications satellite. And data networking.

http://blog.tmcnet.com/next-generation-communications/2011/0...


Yes, but apart from that, what have the Romans, er, Bell Labs ever given us?


TBH your examples are not something people should be particularly proud of. If you had mentioned e.g. Stack Exchange and similar human-grade tech, this argument would look somewhat better. Invasive tracking and manipulation through ads is really more the military style of thinking.


No, Bell Labs did not invent basically everything about the modern world.


yeah, part of it was invented at PARC


you down voters are not giving Al Gore enough credit.


What myth?

As you say, bigger companies have a harder time innovating.


Yeah, but it's not necessarily related to competition; size is the primary factor. At least I believe that was his point.


I think it is related to competition. Lack of competition brings complacency of executives, who think they can get free money for shareholders without having to reinvest into innovation. (And I think that was carlmr's point.)

Of course, the bigger you are, the more likely it is that there is no competition at certain times.


Yeah, my point was that a big company can coast without innovating. They usually have a lot of stuff to sell that they have done in the past. The pressure to rejuvenate comes from outside, rarely from the inside (surely there are exceptions as with every good rule).

If they can survive without investing, a lot of companies choose not to.


Apple / Google / Amazon don’t seem to struggle?

Microsoft / IBM and now Intel definitely did.

I think it has more to do with the leadership of the company than the size of the company.


Does Google innovate?

-ss


Have you seen how many messaging apps they've made in the last few years?


I wish they would fix up one of the existing ones rather than keep shoving out new half finished ones. I poked around in a few of them from a web client and phone perspective for a while and still don't feel like they beat Slack.


And it's still a mess that nobody wants to use.


Took over 500 lightbulbs before there was a usable one. Just don't give up.


This is sarcasm right? Messaging apps are the height of "innovation?"


:-)


This is also the topic of the book "The Innovator's Dilemma" by Clayton Christensen.


>Before Zen, we all kind of assumed they were so far ahead that AMD were more likely to be out of business before they would ever be a credible threat again.

It depends whereabouts on the timeline. When AMD hired Lisa Su and Jim Keller in 2012, we all thought it was too little, too late. Look back at the roadmap Intel was giving at the time; I used to joke that Tick-Tock was the sound of AMD's death clock. In 2012 we were looking at 10nm in 2016, 7nm in 2018, and 5nm in 2020. We had just had Sandy Bridge, but that was the last big IPC improvement we've had.

Fast forward to 2018 / 2019: no 10nm, and I would have been happy if they were selling me a 14nm++++++ quad-core Sandy Bridge. Broadwell and Skylake brought nothing much substantial. Intel was supposed to break into the ARM mobile market with a tour de force, and that didn't happen.

We all assumed Intel had many other tricks up its sleeve, a new uArch or 10nm waiting in the wings for when they were needed. Turns out they had nothing. Why did they buy McAfee (which has been sold off already)? And Infineon? Nearly eight years after that acquisition they are just about to ship their first mobile baseband made in their own fabs. Eight years! What an achievement! And it's nearly three years since their acquisition of Altera, which had already been working with Intel's custom foundry before that. What do they have to show for it?

During that time, the scale of the smartphone revolution helped pure-play fabs like TSMC make enough profit to fund R&D rivalling Intel's. And in a few more weeks we will have a TSMC/Apple 7nm node shipping in millions of units per week, which in terms of HVM on a leading node puts TSMC ahead of Intel for the first time in history. AMD has been executing well against its roadmap, and Lisa Su did not disappoint. Nothing on those slides was the marketing speak or tricks that Intel used: no over-hyped performance improvements, just the promise of incremental progress. She reminds me of Pat Gelsinger at Intel: down to earth, and telling the truth.

Judging from the results though, AMD isn't making enough of a dent in OEM and enterprise. Well, I guess if you are not buying those CPUs with your own money, why would you not buy Intel? The consumer market and the small web hosting market, where the owners are paying, seem to be doing better. I hope Zen 2 will bring enough improvement to change those people's minds: better IPC, a better memory controller.

If you loathe Intel after all the lies and marketing speak, you should buy AMD.

If you still love Intel after all that, you should still buy AMD, to teach them a painful lesson and wake them up.


Don't forget the Puma series of broadband modem chipsets bought from Texas Instruments in 2010. All defective; three generations in and the hardware is still not fixed, and just this month Intel released some half-assed software patches.

https://www.theregister.co.uk/2017/08/09/intel_puma_modem_wo...


> Broadwell and Skylake brought nothing much substantial.

Ironically, Broadwell's 128MB L4 cache did bring a substantial performance boost to a whole range of real-world applications, but it seems it's so expensive to manufacture that they've subsequently dropped the feature except for Apple's iMacs and expensive laptops.


Haswell massively improved the branch predictor, which gave a significant IPC boost to many real-world workloads (especially emulation and interpreters).


How much of that boost remains after patches to Spectre and meltdown?


I have an inexpensive laptop with an i3-6157U, which has 64MB of eDRAM. And yes, the performance boost is substantial, e.g. for a C++ compiler.


> If you loathe Intel after all the lies and marketing speak, you should buy AMD.

> If you still love Intel after all that, you should still buy AMD, to teach them a painful lesson and wake them up.

But how do I choose which AMD CPU I need? Back in my youth the P4 and Athlon were easy to compare (freq., IPS and a modifier because AMD), but now I can't even tell the differences between the i3/5/7 lines, and when I look at AMD's names it's just as confusing, only with different lingo. I feel the same about GPUs, so maybe I'm too old for this now.


The right answer to this is to look at benchmarks. It now works again to compare Intel and AMD clocks, but only within the current generation, and then there are core counts and motherboard prices to consider, and so on.

A project of mine is a hardware recommender that also includes a meta-benchmark: I collect published benchmarks and build a globally sorted order of processors out of them. https://www.pc-kombo.com/benchmark/games/cpu for games, https://www.pc-kombo.com/benchmark/apps/cpu for application workloads (that one still needs a bit of work; the gaming benchmark is better). Legacy processors are greyed out, so this might be a good starting point for you. There is also a benchmark for GPUs.

For most people this processor choice is also very easy, it is "Get a Ryzen 5 2600 or an Intel Core i5-8400."

Feel free to ask if you want some custom recommendations, email is in profile :)


Can you add the ability to sort based on price/perf?

Also, the existing bar graph is unclear to me. What does 10/10 mean?


10 is just the fastest. Because it is a meta-benchmark, this rating is not necessarily relative performance, it is based on the position in the ordering. The achieved average FPS is just a factor in that, used to make the distance bigger to indicate performance jumps.

Example, fictional values: The 8700K is the fastest, because it was most often seen as the benchmark leader. It gets a 10. The 8600K has almost the same FPS, but it was always a bit slower, so it gets a 9.9. The i5-8500 comes next, but its average FPS scaled to the 0-10 scale is lower, so it gets an 8.7. Then the i5-8400, always seen as slower than the 8500 in benchmarks, would at most be able to get an 8.6, no matter what the average FPS says (with enough benchmarks, average FPS becomes an almost meaningless metric; it's the position in the benchmarks that counts).

That's why it is not possible to calculate price/performance with this. I could only highlight good deals, processors that have a high position despite being cheaper than the processors below. Which is of course already baked into the logic of the recommender.
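To make the capping concrete, here is a minimal sketch in C (my own illustration using the fictional values above, not pc-kombo's actual code) of the scheme as described: average FPS is scaled onto a 0-10 range, but each processor's score is capped just below the score of the processor ranked above it.

```c
#include <stdio.h>

struct cpu { const char *name; double avg_fps; };

int main(void) {
    /* Already sorted best-first by meta-benchmark position; the FPS numbers
     * are the fictional ones from the comment above. */
    struct cpu cpus[] = {
        {"8700K", 160.0}, {"8600K", 158.0}, {"i5-8500", 139.0}, {"i5-8400", 140.0},
    };
    int n = (int)(sizeof cpus / sizeof cpus[0]);
    double top_fps = cpus[0].avg_fps;
    double ceiling = 10.0;                                  /* the leader gets a 10 */

    for (int i = 0; i < n; i++) {
        double scaled = 10.0 * cpus[i].avg_fps / top_fps;   /* FPS on a 0-10 scale */
        double score = scaled < ceiling ? scaled : ceiling; /* capped by the CPU above */
        printf("%-8s %.1f\n", cpus[i].name, score);
        ceiling = score - 0.1;              /* the next CPU must score strictly lower */
    }
    return 0;
}
```

This prints 10.0, 9.9, 8.7 and 8.6: the i5-8400 stays below the i5-8500 even though its average FPS happens to be higher in this made-up data, which is exactly why price/performance can't be derived from the rating.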


Thought a bit more about this. What I can do is filter out all processors/gpus that are slower than cheaper processors. Proof of concept implementation: https://www.pc-kombo.com/benchmark/games/cpu?optimize=true. Those are essentially the best price/performance choices, with some restrictions as explained in the other comment.
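For the curious, that "optimize" filter can be sketched in a few lines (again a hypothetical illustration with made-up prices and ranks, not the site's actual implementation): sort by price and keep a processor only if it ranks higher in the meta-benchmark than everything cheaper.

```c
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* rank = position in the meta-benchmark ordering, 1 = fastest (made-up data) */
struct cpu { const char *name; double price; int rank; };

static int by_price(const void *a, const void *b) {
    double d = ((const struct cpu *)a)->price - ((const struct cpu *)b)->price;
    return (d > 0) - (d < 0);
}

int main(void) {
    struct cpu cpus[] = {
        {"i7-8700K", 350.0, 1}, {"Ryzen 7 2700X", 290.0, 3},
        {"i5-8400", 180.0, 6},  {"Ryzen 5 2600", 160.0, 5},
    };
    int n = (int)(sizeof cpus / sizeof cpus[0]);
    qsort(cpus, n, sizeof cpus[0], by_price);   /* walk from cheapest to priciest */

    int best_rank_so_far = INT_MAX;             /* worse than anything in the list */
    for (int i = 0; i < n; i++) {
        if (cpus[i].rank < best_rank_so_far) {  /* faster than every cheaper CPU */
            printf("keep: %s\n", cpus[i].name);
            best_rank_so_far = cpus[i].rank;
        }                                       /* otherwise it is dominated: drop it */
    }
    return 0;
}
```

With this data the i5-8400 is dropped because the cheaper Ryzen 5 2600 outranks it; the rest survive as the best price/performance picks.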


Pretty cool project. Thanks :).


> but now I can't even tell the differences between the i3/5/7 lines, and when I look at AMD's names it's just as confusing, only with different lingo.

The way I see it, I can just look at AMD's core count, threads and frequency, as they are clearly labelled, and that is it. On Intel's side you have features turned on and off across the different i3/5/7/9 parts, AVX speed differences, etc. I don't even want to bother looking it up.


> On Intel's side you have features turned on and off across the different i3/5/7/9 parts, AVX speed differences, etc. I don't even want to bother looking it up.

Seriously, Intel has always been way too confusing with their processor lineup. AMD has always been straightforward: leave in the kitchen sink on nearly every CPU and performance scales with price. Not linearly of course, but it's much simpler to choose an AMD CPU.


I don’t get it. You just look at popular apps&games benchmarks and decide your hw stack. You don’t need to synth-bench AVX differences unless you’re writing bleeding edge specialized software that depends on it.


I just tried to recycle my old i7-4770K from my gaming rig into a home server. Only, guess what? The 4770K doesn't have VT-d. The cheaper 4770 does have VT-d, so why did the more expensive, "upgraded" 4770K not have it? Because Intel's product line is a disaster.

We're not talking minor performance differences in features, we're talking features randomly added and removed for no logical reason from the same generation & tier of CPU model.


Perfect example of what I was talking about. Their lineup has always seemed customized to maximize the need to buy more CPUs.

"Gamers need these features, but gamers often also setup servers. Let's remove server features from gaming CPUs so they can't reuse them when they upgrade, and so they have to buy new server CPUs!"


I write all kinds of different types of software. The feature matrix of Intel CPUs has always been remarkably inconsistent, like you get AVX, but don't get virtualization extensions, or vice versa, etc.

Basically, if you don't want the headache you just buy one of the highest end/most expensive CPUs on offer and you're probably fine. With AMD the feature set is pretty consistent, so you have considerably more choice to find a good price point.


Intel no longer just has i3/5/7: there are Celeron, Pentium, i3/5/7/9 and the "Plus" variants. Within those are dozens of variants differentiated on things like PCIe bandwidth, SMT, instruction sets, accessible RAM, and so on.

In comparison, AMD's offerings are surprisingly easy. There's a handful of SKUs differentiated on core count and frequency. Generally they all have the same PCIe lanes, RAM access, SMT, instruction sets, and so on.


On the AMD side, there's only 2 generations of Ryzen.

Ryzen 1 - slow, medium, fast, elite

Ryzen 2 - slow, medium, fast, elite

pick the one you need based on pricing/discounts if any. It's not that hard really.


Thanks but... bear with me https://www.tomshardware.fr/articles/comparatif-cpu-amd-inte... what are those ryzen 3/5/7 TR ?


13xx, 15xx, 17xx, ThreadRipper.

For gen 2, that'd be:

23xx, 25xx, 27xx, ThreadRipper.

I think they picked the names/numbers to show some kind of equivalence with i3/5/7, but that's not quite it.


Don't forget the 1700X and 1800X.

ThreadRipper is interesting. It requires a different socket than the other desktop processors and is targeted more at workstation class machines.

In servers there are Epyc and Epyc 2.


ThreadRipper seems more like a Xeon competitor to me. Something for the server room crowd.


Epyc is the server Xeon competitor, Threadripper is a workstation cpu.


That'd be EPYC, their server offering.

Threadripper is their workstation CPU.


That's exactly it

slow (3), medium (5), fast (7), elite (Threadripper)

If you don't want to spend more than USD$800 on the CPU alone, don't look at the elite/threadripper line.

What is your budget? That is the first question you need to answer. Instead of trying to understand the entire line of chips, look at how much you have to spend, and then find the fastest one within that budget.

It's useless to try to think about the entire line of CPUs when you will only buy one chip (unless you are representing Dell and need to buy thousands of chips).


Compare the benchmarks for those processors?


Ah, but now we're not allowed to benchmark Intel anymore, so nobody can prove AMD is faster now.


AMD was faster depending on the workload. With these patches, I think AMD being faster is just a given regardless of what you're doing.


My next CPU, in a few years, will probably still be Intel. I need the fastest single-core performance in the world, and Intel can provide 5+ GHz and the best IPC. I want an honest multi-core CPU, not NUMA, and Intel will give me 8 honest cores. Meltdown fixes should be in hardware by then, so performance won't be affected, and if it is, I'll disable those fixes; I don't need them anyway. But the CPU after that might very well be AMD.


After reading a few articles, here's what I got:

It seems that Intel couldn't jump into EUV manufacturing while they were fully dominant because it was too expensive and new, so they started improving multipatterning to improve resolution, which proved too hard to ship (hence the delays). Meanwhile, smaller players went their own way until recently, and now that EUV is accessible they can jump in swiftly while Intel is still caught inside its intermediate strategy, lazy market behaviour and unforeseen failures. Intel also has EUV planned, but not until the next generation. Note that even at a larger pitch their process is nearly competitive with the smaller ones today, but it sounds terrible.


I just want to throw it out there that it might be impossible for AMD to actually go out of business. They're Intel's only major x86 competitor, IIRC their x86 license is non-transferrable, and no way Intel would be permitted in the USA to have an actual monopoly on x86 production.

I kick myself for not buying AMD at $2 (or buying a LOT more at $5).


> They're Intel's only major x86 competitor, IIRC their x86 license is non-transferrable, and no way Intel would be permitted in the USA to have an actual monopoly on x86 production.

I just looked this up, and it seems to boil down to the patent cross-licensing agreement between Intel, which developed the x86 architecture, and AMD, which developed the 64-bit instruction set. I don't think there's a unilateral "non-transferable license" per se — and they're free to enter into a new agreement if either party does get acquired.

This Reddit thread seems pretty good at explaining it in much more detail: https://www.reddit.com/r/hardware/comments/3b0ytk/discussion...


> I just want to throw it out there that it might be impossible for AMD to actually go out of business.

Even if you buy this (I don't), there's no point in them staying in business with absolutely non-competitive products. The remarkable thing is that, thanks to Zen, that did not happen, and Intel actually feels some heat for the first time in years.


> Clearly, they don't have an answer to AMD at all

They still have a huge opportunity in CPU+FPGA; they bought Altera for that purpose.


It's remarkable how little we've seen from that acquisition. It's perfectly possible that Intel has butchered the acquisition the same way they have with many others.


Two huge companies merging is more often considered a failure than not.

> a 2004 study by Bain & Company found that 70 percent of mergers failed to increase shareholder value. More recently, a 2007 study by Hay Group and the Sorbonne found that more than 90 percent of mergers in Europe fail to reach financial goals.

http://edition.cnn.com/2009/BUSINESS/05/21/merger.marriage/

Especially when the merger has to go deep and involves engineering teams with different cultures joining up and working together on a product. So I'd consider the release of the first Xeon+FPGA three years after the acquisition something of a success.


Billion Dollar Lessons by Mui & Carroll goes through a lot of these grand strategies and demonstrates how much of a bonfire they turned out to be.


The Xeon + FPGA was actually already underway before the acquisition and is based on pre-acquisition technology (Arria 10).


Computer hardware has a very long lead time between product concept and metal-in-your-hand. Combined with the pains and huge initial slowdown of a megacorp purchasing a medium-corp, I think the real fruits of that acquisition are yet to be seen. I bet it took at least a year just for management to get their bearings on straight.

I would have guessed additional lead time for Altera to move their designs from TSMC to Intel process, but it looks like Altera has been planning to fab on Intels 14nm since 2013[0].

[0]http://chipdesignmag.com/display.php?articleId=5215


But CPU+FPGA will forever be a tiny niche, right? I don't see any path where mainstream programs get a boost from FPGA.


Not at all. Specialized coprocessors are actually mainstream, from offloading on network cards to the small chips in every iPhone, and for a reason.

The obvious way forward is universal specialized coprocessors, reprogrammable for the task(s). Better if tightly integrated with the memory, buses and CPUs.

The weak side of FPGAs has historically been programmability, and especially the tools. But since interest in FPGAs has been growing exponentially in the open-source community in recent years, things may change.

And by the way, 10 years ago you would have said exactly those 'niche' words about GPUs.


Specialized coprocessors are super common but they're basically all ASICs with nary an FPGA to be found. In theory you could have reconfigurable co-processors on a mainstream chip but nobody does that - partially because the latency involved in reconfiguring the FPGA makes it a losing proposition in most cases.

There are uses for FPGAs where there's enough money at stake for the hardware development but the number of units is small - stuff like high frequency trading or many defense roles. Or in the development of new hardware. But it's pretty niche.


How is every consumer device with a display "niche"?


Dedicated GPUs were not required historically to drive displays, and this was done by the CPU instead.

Similarly, we’re finding more functions we can take away from the CPU and migrate to dedicated circuitry (FPGAs) that can handle those tasks more efficiently than the CPU can.


dedicated circuitry != FPGAs.

GPUs avoid the overhead of FPGAs while still retaining a lot of flexibility.


Because they were all 2D. And the idea that GPUs would be used for mobile computing was not obvious.


Maybe 25 years ago. 10 years ago they were commonplace. Intel shipped integrated 3d graphics by no later than 1999:

https://en.wikipedia.org/wiki/Intel_810


Lol I don't know who bought that. Anyone who wanted 3D was buying Voodoo, RIVA TNT or ATI RAGE. Everyone else was happily 2D and running Word 95.

But, to clarify, I was speaking of consumer/mobile. The original iPhone was quite revolutionary for having a decent PowerVR graphics chip. High end symbian phones just had a CPU. See for example https://en.wikipedia.org/wiki/Nokia_6110_Navigator or https://en.wikipedia.org/wiki/Motorola_Razr2

Even though GPGPU was already big in 2008, people still thought of it as a difficult to use coprocessor for big compute jobs. Much as people consider FPGAs now.


I didn't highlight it as a desirable 3D processor; I pointed it out because 3D was essentially becoming the default at that point.

And the first iPhone shipping with a powerful graphics chip is a counter to your argument that the future of mobile wasn't clear. The people with the ideas wanted a graphics processor.


The fact that the entire industry apart from Apple (a computer company, not a phone/device company) was completely ignoring 3D shows exactly that it was NOT obvious or clear. After Apple demonstrated the potential, it became clear.


Okay, I should have clarified: the 'niche' thing 10 years ago was not the GPU per se, but of course GPGPU, the GPU as a specialized coprocessor for general-purpose computation, not restricted to graphics processing and output.


That depends what you mean by mainstream. These days the cloud data centre is a mainstream market for hardware vendors. In that environment FPGAs can make sense. They might have advantages in throughput, latency, power consumption, security (no spectre), reliability (no software updates). For example, Microsoft use them for Azure virtual networking, machine learning and something inside Bing. You can imagine a world where every blade server has an FPGA. In fact, you can imagine a world where many blade servers have an FPGA and only a small support CPU.


You can spin up an FPGA-accelerated EC2 instance right now. There are a few highly specialised applications where every scrap of performance matters, but for the most part the software development costs are prohibitive compared to just spinning up more CPU or GPU instances.

https://aws.amazon.com/ec2/instance-types/f1/


There are some interesting ideas about programmability, like automagically offloading some computation parts to FPGA.

http://conal.net/blog/posts/haskell-to-hardware-via-cccs http://conal.net/blog/posts/circuits-as-a-bicartesian-closed... https://github.com/conal/lambda-ccc/blob/master/doc/notes.md


The software needed to drive them needs to become much better (documented, ergonomics, everything) before that will be a realistic option.


As far as I know there are zero applications on the client or desktop side that can take advantage of an FPGA.



I can see there being a market for that. The reason nobody uses FPGAs at the moment is because nobody has them outside of specialized applications.

If Intel can release a CPU with a built-in FPGA and everyone has one, software developers will take advantage of them. I can see stuff like video editing programs, compression algorithms, etc taking advantage of that.


That's nonsense. PCI/PCIe FPGA accelerator boards have been around for ages, used mostly by finance traders and for machine learning and other math-intensive computations, before cheap GPUs with much simpler programming models (the various programmable shaders, CUDA and OpenCL: no need to write e.g. matrix multiplication in VHDL/Verilog anymore) pretty much relegated them to obscurity.

FPGA is great if you need to talk to some hardware very fast/on many pins. E.g. something like a network router where you are shuffling packets between many high speed interfaces. Or doing a lot of measurements/interfacing a bunch of high speed sensors.

But not for general purpose software - GPUs are both faster, easier to develop for (and with good tooling) and much cheaper for doing that today.


Looking forward to that, without 'you need to buy Skylake CPU that is just as fast as the one you had before to get H265 decoding'. Good riddance.


The other thing is that they're a nightmare to programme and they're expensive!


Yes, but the speedup can be huge. For example, for NFA processing, using (a couple of) 2W FPGAs against a 200W GPU, "GPUs underperform FPGA by a factor ~80-1000x across datasets": http://people.cs.vt.edu/~xdyu/html/ics17.pdf


The state transition table is encoded in logic for the FPGA and as a global memory table for the GPU. Why didn't they try to encode state transitions as code on the GPU?

Also, did they try to exploit the enhanced locality you get from processing several streams on the GPU? E.g., if you keep states sorted by the tuple (state id, stream id) across all your streams, you may get a more memory-controller-friendly access pattern. I haven't seen any mention of that technique (which MUST be considered after Big Hero 6 [1] - they used that technique to essentially never miss caches in the whole movie rendering process). Big Hero 6 is from 2014; the paper is from 2017.

[1] https://en.wikipedia.org/wiki/Big_Hero_6_(film)

I really do not like papers like the one you linked. One system gets all of the treatment while the other ones get... whatever is left.

I guess that had they tried to use these techniques on GPUs, they would have gotten a performance gap much smaller than the one reported.


Today they are. Programming can be made easier with large frameworks. Cost can be reduced with higher volume.


At SC and CUG this year the main focus was that in less than 10 years IPC improvement is going to go away (I am not sure where IPC improvement was in the last three years anyway). The next step is to make CPUs as heterogeneous as possible, like an SoC. Both Intel and AMD are going there, but we need to sit back and see which direction gets the momentum, like what Nvidia has done.


The new ARM AI processor looks like it'll play well in this space for some workloads:

https://www.nextplatform.com/2018/08/22/arm-stands-on-should...

Not a thing for everyday desktops, but looks like compute competition is er... heating up. :)


Here's a Stratechery article, nothing more to add. https://stratechery.com/2018/intel-and-the-danger-of-integra...


> shareholders should be asking serious questions about why they've nothing significant to show for all that time and money spent when they were raking it in without a serious competitor.

This might be the answer. No competition and a good cash flow is a comfortable position. You defend this model with ads, policy, etc. and technical innovation can languish. I am not saying this is necessarily the case, but it is possible that Intel just got comfortable, slow and fat. Having a scrappy, smart competitor can be a good thing.


Pretty stupid that they didn't continue their Itanium architecture. It does have in-order branch prediction, and we are at a point again where a significant number of users don't really care what kind of CPU they are using.

ARM and RISC-V have become a serious threat and are on the way to getting standardized ecosystems...


Most of the emperors out there have no clothes.


They did hire Keller, no? I wonder if AMD will have an answer to Intel in the coming years. Not only in CPUs but in GPUs as well. Especially GPUs. They missed the boat on AI, and now Nvidia is pushing RTRT left and right. Those are seemingly two areas they still don't have an answer to. It's a big battlefront, and Intel is only a step or two away from possibly bringing an answer to AMD. Not to mention the whole laptop domination by Intel, and mobile by ARM.


> Not to mention the whole laptop domination by Intel, and mobile by ARM

Traditionally yes, Intel has absolutely dominated the laptop market; however, I have been seeing a lot more laptops lately with a Ryzen processor and Vega graphics.

AMD making gains into a very lucrative market


Intel is still ahead in the low-power mobile (laptop) CPU race, and the clear leader in the laptop market. Who knows for how long, though?


>Intel are still ahead in low power mobile CPU race,

I have never met a person with a smartphone that uses an Intel chip. They probably exist, but I know none.

Apple aren't using them. Samsung aren't.. HTC nope.. Google pixel nope...

Intel basically sold off/scuttled their mobile division right before the iphone took off.


I think it's referring to laptops. That said I had the ASUS Zenfone 2 and the performance seemed alright, but certain apps didn't work because of the architecture difference, most notably Pokemon Go for the whole time it was popular (although it's probably for the better that I missed that whole craze).


I had a Zenfone 2 for a while too, performance and battery life were both pretty good. Thing was super unreliable though, not sure for what reason.


oh yeah i forgot laptops.


I meant "mobile" as in laptop CPUs, I've edited my post to clarify.


My apologies, you're right about laptops.


FWIW, I owned an ASUS Zen Phone 2 which contained an Atom x86 processor. It worked pretty well. I sold it to a friend of a friend who I think still uses it today. That said, that processor line was discontinued.


I think “mobile” was supposed to refer to laptops, not phones or tablets.


New Snapdragon 845-based Chromebook is on the way.

> It comes as Microsoft continues its work with Qualcomm to optimize Windows for devices powered by Qualcomm's Snapdragon chips, including the forthcoming Snapdragon 850, which Samsung used for its first Arm-based Windows 10 laptop. So it appears there is some momentum behind the concept.

https://www.zdnet.com/article/arm-on-windows-10-chromebooks-...


I think ARM could be a viable contender in the low-power laptop segment, kinda maybe possibly.

However, I used to own a Tegra K1-based Chromebook, and that thing was sl-o-o-o-o-o-o-w, and it only got worse with successive updates. I'm not really optimistic when it comes to performance, absent highly-optimized apps. The state of Firefox and Chrome doesn't really fill me with confidence.


The state of Firefox and Chrome shows more than anything else that many cores are a good thing and, especially, that RAM size is king. Modern smartphones with 6 and 8 GB of RAM make a difference.


I think Apple laptop CPUs will be like USB-C. First it's 'what's the point, nothing is compatible'; a year later it's 'everyone is doing it and it's actually pretty cool to attach everything, including power, with a single cheap dongle'.


I wasn't aware we were there with USB-C. I still need a dongle every time I use anything. But more importantly, I was in the camp of "I barely connect anything to my computer other than power, and MagSafe worked so well I forgot it was ever an issue, but then "USB-C charging" regressed things to the point where I thought I was charging but nope, turns out the connector was slightly out and my battery is at 5%".


I think it would take far longer than a year for a critical mass of OS X software to become ARM-compatible, but it could be viable in the long run.

I'd like for ARM laptops to become more popular though, as a Debian user nearly all the software I use is already there.


Apple already got a huge number of developers to transition their apps to 64-bit ARM with the iPhone, I feel like if they made it simple enough they could get a critical mass on the desktop just as quickly.


Not at all, they bungled low power bigly with the atom trainwreck and are so far behind ARM now that they have given up competing.


I don't see very many competitive AMD- or ARM-based laptops around, though.


> Clearly, they don't have an answer to AMD at all. If this is true, their shareholders should be asking serious questions about why they've nothing significant to show for all that time and money spent when they were raking it in without a serious competitor.

But they have lots to show for that money!

Rebrandings such as "Gold" and "Platinum", which gouge the customers more than ever before.


Here is one thing we can do about it: make a public service announcement to our users that we no longer recommend Intel CPUs because of security holes, censorship and crippled performance.

I am going to do that today. While we only have several thousand users, they do CPU-intensive work, buy a lot of new CPUs and rent a lot of servers. My small contribution will likely take low-to-mid six figures out of Intel's pocket in the coming 2-3 years.

Please consider making such an announcement if you could do some damage as well.


Honest question: is AMD any better? Do they somehow manage to avoid Spectre / Meltdown without a slowdown?


AMD is better; they are still affected by side-channel attacks, but they did not skimp on security checks and are not affected by speculative execution AFAIK

https://www.amd.com/en/corporate/security-updates


> ...and are not affected by speculative execution

AMD's chips definitely speculatively execute instructions. It's a common performance trick.

AMD's chips also definitely throw an exception at retirement (of course) for instructions that attempt to load a privileged address, just like Intel's chips do.

The difference is that when AMD's chips see a load instruction, the load isn't executed until it knows that the address isn't privileged. Intel's chips do execute the load (but then throw away the result when it realises the address was privileged).

The speculative part is for instructions that depend on the result of the load.


>The difference is that when AMD's chips see a load instruction, the load isn't executed until it knows that the address isn't privileged. Intel's chips do execute the load (but then throw away the result when it realises the address was privileged)

Thank you for the detailed explanation. Can't we conclude that AMD's engineering is more defensive than Intel's? That is what I concluded.


The general reaction from the CPU designers I interact with online after this was that Intel engineers should have known the optimizations that enabled Meltdown were dangerous, but that Spectre was totally unforeseeable.


Based on this datapoint? I don't think so. They're vulnerable to other side-channel attacks.

Both AMD and Intel hire really smart people, but this stuff is really, really hard.


Yes. Their arch is just different; they were not affected by Meltdown, and the chances of a Spectre exploit actually working are very small. I don't know why that is the case, but basically AMD is 99% free from this story.


They aren't. Spectre works fine on AMD. They avoided Meltdown only because their cores are less optimised. Meltdown isn't Intel specific - Apple cores were also affected.

It's clear that AMD weren't doing anything special w.r.t. side channel attacks. They were just further behind in the optimisation race and as a consequence, were less hit.


That is a horrifically biased & wrong summary.

AMD enforce privilege checks at access time rather than at retirement time. Whether this is due to "lack of optimization" or "good security engineering", nobody knows. But your claim that this was purely the result of "less optimised [sic]" cores is nonsense. You have zero evidence whatsoever that that was the case vs. AMD just having superior engineering on this particular aspect and not adding bugs to their architecture.

All we know is that Intel & ARM CPUs have an entire category of security bugs that AMD's don't, and that upon close analysis AMD's CPUs are operating on a more secure foundation.


Minor tidbit, but "optimised" is correct British spelling, so the disdainful "[sic]" is not needed.


The summary is correct and optimised is the correct spelling where I'm from.

If AMD were making conscious efforts to avoid side channel attacks they'd already have had various features to show for it like IBRS. But AMD's chips say nothing about side channel attacks. Their manuals do not discuss Spectre attacks. And there is no evidence anywhere that they knew about Meltdown type attacks and chose to avoid them.

I get that suddenly hating on Intel is cool and popular, but the facts remain. There is no reason to believe AMD has any advantage here.


For Meltdown, AMD followed what the x86 spec said. It's flabbergasting that you are trying to contort this into making AMD look bad. Your "summary" was basically that AMD was too incompetent to have multiple security issues.

The facts are that AMD does not have a significant per-core IPC deficit vs. Intel (as supported by every Ryzen review at this point) and that AMD has, on multiple occasions now, had objectively superior security.

You're trying to twist this into a negative against AMD. It's nonsense FUD. Intel fucked up, AMD didn't. Why are you trying to run PR damage control for Intel?


Please show me where in the x86 spec side channel attacks are discussed at all? They aren't.

I am really unsure where you're getting this from. Your reference to the spec makes me wonder if you really understand what Meltdown and Spectre are. They aren't bugs in the chips even though some such issues may be fixable with chip changes, because no CPU has ever claimed to be resistant to side channel attacks of any form. Meltdown doesn't work by exploiting spec violations or actual failures of any built in security logic, which is why - like Spectre - it surfaces in Apple chips too. Like Spectre it's a side channel attack.

I'm not trying to twist this into a negative for AMD: it's the other way around, you are trying to twist this into a positive for them, although no CPU company has done any work on mitigation of micro-architectural side channel attacks.

I'm simply trying to ensure readers of this discussion understand what's truly happening and do not draw erroneous conclusions about AMDs competence or understanding of side channel attacks. What you're attempting to do here is read far more into a lucky escape than is really warranted.


Meltdown was enabled by a clear-cut violation of the memory access restrictions of x86. Intel simply did the permission check at the wrong point in the pipeline. It's not anything more obscure or clever than that, and it wasn't even an optimization, as the end-to-end latency remains the same. It still did all the work; it just did it in the wrong order. The permission check was done after the read access had already happened anyway.

Memory was accessed that the spec says was not accessible. This has nothing to do with side-channels. The side channel part of the attack was how the spec violation was exploited.

For Meltdown specifically, Intel fucked up and AMD didn't. This is not at all vague. Whether this was due to luck or not is irrelevant; it was clearly NOT due to incompetence, as you were pushing. You pushed a narrative that AMD was too incompetent ("missing optimization") to have a severe security bug. That has nothing to do with reality whatsoever.

Side note: side-channel attacks are not exactly obscure. Guaranteed, AMD & Intel have security experts who were well aware of how side-channel attacks work long before Meltdown and Spectre came to light. They have been around for ages. Practical exploits of the L1/L2 cache via mechanisms like Prime+Probe date back at least as far as 2005: https://eprint.iacr.org/2005/271.pdf
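For anyone who hasn't seen how these cache side channels actually work, here is a tiny self-contained sketch (my own example, not taken from the paper above) of the timing primitive that Flush+Reload and Prime+Probe style attacks build on: an access to a line that is already in cache is measurably faster than one to a line that has been flushed. Real attacks add fences and statistics; this just shows the signal exists.

```c
/* x86 only; build with: gcc -O0 timing.c && ./a.out */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static uint64_t time_access(volatile char *p) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);   /* timestamp before the access */
    (void)*p;                          /* the memory access being timed */
    uint64_t end = __rdtscp(&aux);     /* timestamp after the access */
    return end - start;
}

int main(void) {
    static char buf[4096];
    volatile char *p = buf;

    (void)*p;                          /* warm the cache line */
    uint64_t hot = time_access(p);

    _mm_clflush(buf);                  /* evict the line from the cache hierarchy */
    _mm_mfence();
    uint64_t cold = time_access(p);

    /* The flushed access should take noticeably more cycles than the cached
     * one; the numbers are noisy, so real attacks average many samples. */
    printf("cached: %llu cycles, flushed: %llu cycles\n",
           (unsigned long long)hot, (unsigned long long)cold);
    return 0;
}
```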


Can you really call "fast but wrong/insecure" optimized?


It doesn't calculate anything wrong, so sure.

For this particular optimization, it looks better not to have it.

But in general, knowing how something is flawed lets you mitigate the flaws. We use floating point despite it being mathematically wrong, because it's fast and we can mitigate the wrongness. I could imagine a chip where speculation could be toggled between different modes, if there was enough demand.

I will say that "just further behind" is probably wrong. AMD has a lot of optimizations in their chips. They have safer ones, which might be luck or might be engineering talent, but it's not a mere artifact of being behind the curve.


Yes you can, if it makes your benchmarks look better. That's why Intel is trying to suppress the benchmarks.


AMD are not employing these anti-competitive practices (at least at the moment).


I think GP was asking not about the anticompetitive practices but about the actual exploit that Intel is responding to. But you make a good point.

Intel has two faults: one, they made a significant mistake in their chip design, and two, they responded to criticism poorly. AMD did not make the mistake and has responded well to criticism.


That's one of the best ways to actually make a change.


I am in the market for a new rig- looks like I'll be going with AMD.


Same. I _was_ looking at getting an HP Spectre x360 with the i7-8550U. Now I'm going to look explicitly at laptops with Ryzen chips. I think the Spectre might have a Ryzen model, but I don't know.


AMD is severely behind in the laptop space.

In the Desktop space, I can definitely recommend AMD. But Laptops are totally Intel right now.

There are AMD Laptops, but they are hard to recommend. Most are low-end offerings at best (laptop manufacturers don't put the high-end stuff with AMD). As long as high-end HP / Dell / Lenovo / Asus / Acer / Apple / Microsoft Surface are all Intel-based, you're basically forced to use an Intel Laptop.

Desktops... you can build your own. And even if you couldn't, there are a ton of custom computer builders out there making great AMD Desktop rigs.

-------------

With that being said, AMD laptops are competitive in the $500 to $750 space. Low-end to mid-tier laptops... the ones with poor mouse controls, driver issues, bad keyboards, low-end screens and such.

But hey, they're cheap.

It's not really AMD's fault. But in any case, it's hard for me to find a good AMD laptop. So... that's just how it is.


Check out the Huawei Matebook D with Ryzen if you want an amazingly built laptop for a great price. Might change your mind, it's the best 600 dollar laptop out there.


I actually thought about that one too. If they had a 16GB version, I'd totally go for it. That's entirely the reason I'm leaning towards an HP Envy x360 13z instead.


the new HP Envy x360 13z doesn't look bad.

Kind of waiting to see what the Lenovo ThinkPad A285 looks like, as well.


The HP Envy x360 has a noticeably worse screen though. It's definitely not in the same class or caliber as the Spectre x360.

That's what I'm talking about: most laptop manufacturers offer a "premium" Intel laptop. But then they have a lower-quality AMD one on the side.

Nothing AMD has done wrong per se, just laptop manufacturers refuse to sell AMD on the high-end.


Is Intel not still leading in per-core performance? That's what I'll decide my next cpu on, since I care more about use cases like emulation/games (many games don't use multiple threads or use them effectively)


The problem is now it will depend on which patch(es) your CPU may have. Many online benchmarks and reviews of Intel CPUs vs AMD may be old, or run without patches applied / security modes enabled. If I order an Intel CPU now, I am not sure what microcode version I'll be getting to be honest. As a result, those benchmarks that show Intel edge out AMD a bit on some games may actually be inverted depending on these factors.

Going with AMD at this point is simpler, better for security, and if you care about this sort of thing -- rewards more honest and less consumer-predatory behavior. I've always gone with Intel my whole life, but given these many incidents with Intel, combined with their really poor public responses to it, I will now be switching to an AMD user for all future PC builds.


They're neck and neck at this point, within a few percentage points. Zen 2 should close that gap to negligible hopefully.


This amounts to little more than making a statement at the expense of your users.

It would've carried that much more weight if it were _your_ low-mid 6 figures that were redirected away from Intel.


It's Intel's policy and silliness that is at the expense of this person's users. Making those users aware of it is absolutely defensible on the grounds of it being the right thing to do. Intel are relying on customers' ignorance - in fact they're trying to contractually ensure it! The Streisand effect is exactly what they deserve.


It will likely benefit their users. AMD's EPYC processors are cheaper for the same performance as intel's, as well as allowing more memory per processor.

As they allow more cores per socket this can also often massively reduce per-socket licensing cost, if you have the misfortune of using software which requires that.


Intel isn't really far enough ahead to say that users are going to be negatively affected.


Intel is actually behind in performance/price ratio by a wide margin, at least on workloads which can make use of many cores. The margin is likely getting bigger after the latest patches.


Well, AMD currently has better CPUs for high-performance computing, so OP's users would benefit from this public service announcement, if anything.


I thought it was the opposite of that? That is, AMD chips are currently great for most workloads but Intel still has a definite edge in working with big vectors of floating point numbers of the sort you usually encounter in HPC. Mostly because their core's vector units are 512 instead of 256 bits wide.


Skylake-X and certain Xeons have AVX-512 (which includes two 512-bit FMA units). The rest only have 256-bit-wide vectors, like Ryzen. But they still have the advantage of two 256-bit FMA units, while Ryzen instead relies on two 128-bit FMA units, meaning the FMA instructions critical to matrix operations are faster on Intel.

I think the idea for HPC though is that you want to offload these highly vectorizable operations to a GPU. Or maybe you're doing a lot of Monte Carlo that it is hard to vectorize.

I really do like avx-512 though. If you're writing a lot of your own code, and that code involves many small-scale operations and has control flow (like in many Monte Carlo simulations), it's a lot easier than mucking with a GPU. If you're using gcc though, be sure to add `-mprefer-vector-width=512`, otherwise it will default to 256 bit vectors (clang and Julia use 512 by default).
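If it helps to see what that flag actually affects, here is a small self-contained example (my own, with made-up data) of the kind of FMA-friendly loop being discussed; the vector width the compiler picks for it is what `-mprefer-vector-width=512` changes.

```c
/* Build e.g.: gcc -O3 -march=skylake-avx512 -mprefer-vector-width=512 saxpy.c
 * Without the flag, gcc prefers 256-bit vectors even on AVX-512 hardware,
 * as noted above; clang (and Julia) pick 512-bit vectors by default. */
#include <stddef.h>
#include <stdio.h>

void saxpy(float *restrict y, const float *restrict x, float a, size_t n) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];        /* compiles to vfmadd* on FMA-capable targets */
}

int main(void) {
    enum { N = 1024 };
    static float x[N], y[N];
    for (size_t i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }
    saxpy(y, x, 2.0f, N);
    printf("%f\n", y[N - 1]);          /* 2 * 1023 + 1 = 2047 */
    return 0;
}
```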


I can think of two theories:

1. It's a mistake. Someone in legal got carried away.

2. The performance of the L1TF mitigation is so awful that someone at Intel thought it would be a good idea to try to keep the performance secret.

(Which leads to option 2b. The performance of the L1TF mitigation is so awful that someone at Intel is afraid that Intel could be sued as a result, and they want to mitigate that risk.)

I would guess it's #1.


Normally I'd give people the benefit of the doubt, but in this case I think Intel has already shown that they have no credibility in anything anymore. They only ever spin everything; the last time anyone said something truthful (a former CEO mentioned they needed to try to limit AMD to 20% server share, up from less than 1%), he was not only fired but publicly humiliated.


I think it's #1, but this legal change has been in place for more than 2 weeks and there's been no response from Intel on the problem. I don't understand why Intel is screwing everything up lately.


Yeah, my guess would be it's #1 too; I can't believe Intel is so naive as to think no one on the internet will benchmark the performance, given how all these nasty CVEs are raising such a stink. Serious customers will demand performance numbers; you can't simply answer them with "blocked due to legal".


I am not a lawyer, but I question whether this is enforceable in the USA, and I don't merely question but outright state that it is not enforceable in the EU. Any click-through / shrink-wrap licensing that the end user is forced to accept automatically is invalid there.


Regardless of the details of enforceability, this just sends the wrong message to the rest of the industry and community, and doesn't inspire confidence in Intel.


So, Intel has indeed retracted the "no benchmarks" thing: https://news.ycombinator.com/item?id=17833777


It works for Oracle (it is famously illegal to publish benchmarks of DB2 vs other engines), I'm sure intel can make it work for them thanks to Oracle's court case(s).


DB2 is IBM. The court case you're on about concerned Oracle's own RDBMS (the product simply called Oracle Database). However, I believe DB2 might have been used as one of the comparisons against Oracle.

Anyhow, specifics aside, you do make a good point.


oops, yeah it would help if I got the names right.


But you can't buy an oracle license in a shop around the corner. It is going to be hard for intel to enforce it.


Perhaps if Intel were to play honestly then you might have a point. However there's certainly a few ways they could attempt to enforce it through slightly underhanded, yet pretty typical practices for how many multi-nationals like to operate in this day and age.

* Cease and desist orders. They could probably argue improper use of trademark or something. And by "could" I don't mean "they have a legitimate legal case" but rather "a flimsy one, but one that is still scary enough that few people might want to take the risk / expense of testing the argument in court."

* Many benchmarks are run by reviewers who might have access to components before they hit the shelves. It would be trivial for Intel to end that relationship. If it's suppliers further down the chain who are providing samples to journalists and reviewers, then Intel might put pressure on those suppliers to end their relationships with said journalists. This might break a few anti-competition laws in the EU, but it's not like that's ever stopped businesses in the past.

On a tangential rant: I think the real issue isn't so much whether it is enforceable but rather the simple fact that companies are even allowed to muddy the waters about what basic journalistic and/or consumer rights we have. I'm getting rather fed up of some multi-nationals behaving like they're above the law.


I literally got one with a book on databases as a student, so you kind of can? But the power of the argument is that if it's possible to put this kind of clause in place and enforce it by law for something that's harder to get, then Intel putting that clause in place for the general public has legal precedent when they do want to take someone to court.

It actually makes it easier for Intel to argue that chips are such specialized bits of equipment that even though your average joe can _get_ one, they won't understand how it works, and so only highly trained professionals who have been certified by Intel would be able to reliably benchmark their products. "As the average user would interpret the results incorrectly, their publication would hurt Intel's bottom line." And suddenly you're 95% of the way to having won the case already.


You don't need to buy anything, just download from their website.


So is the below technically illegal?

http://phpdao.com/mysql_postgres_oracle_mssql/


I think OP is using the wrong term - breaching terms of contract is not illegal in itself. It just means Oracle will not do any more business with you.


> Oracle will not do any more business with you

Isn't that a blessing?


Breaching terms of contract is most definitely illegal and Oracle can sue any time they want. It's not like trademark law, where you _have_ to sue even if you don't want to; Oracle can ignore anything that comes out not looking bad for them, while still being 100% able to sue the pants off of anyone that goes "hey look this product is worse across the board in real world comparisons to MySQL and MariaDB".


They can sue but it's not illegal, and they very well may not be able to get any money from you for truthful statements.


So what you're saying is that sysadmins should all benchmark Oracle products to prevent their employers from being trapped in Oracle land?


Makes sense. So unless they dedicate the resources to figuring out who did it, etc., it's reasonably moot.


Technically yes, but Oracle won't sue as the results are favorable for them in this case.


Yeah that was my first thought, too. They are not even losing in this ;)


I'm pretty sure DB2 is an IBM thing.


You're right, it is, I had the wrong DB name.


I believe you are referring to the DeWitt clause.


The wording sounds like it bans any and all benchmarks. How would groups like Linus or Dave2D even operate? It honestly doesn't seem remotely enforceable.


I'd say that it is related to L1TF, but not to keep it secret. It's additional ammunition to use in court when they get sued for performance loss.

> Cloud Company: Your honor, the security flaws in the hardware and microcode provided by the defendant necessitated the installation of updates, also provided by defendant, which resulted in a 30% loss of overall performance. Since our business model is predicated on selling the processing power of computers that have CPUs manufactured by defendant installed, they are liable for this loss in productivity.

> Intel: Your honor, plaintiff could not possibly prove any loss in productivity. If you'll examine Exhibit A, the Intel microcode EULA, you will see that it expressly prohibits benchmarking. Whether plaintiff is claiming they did these benchmarks themselves or a third party did them is immaterial because our license expressly forbids doing so. Plaintiffs need to show a loss of productivity without relying on performance benchmarks and therefore need to show that the workloads prior to and after the microcode has been installed are equivalent and that the results are detrimental.

Now, I don't expect most judges would go for it since EULAs are notoriously weak, but it does give them ammunition to impugn the evidence. It's always possible a judge or jury would listen to that.


"could not possibly prove" [without violating the license]. That doesn't invalidate the Cloud Company's claim, it only offers Intel an opportunity to countersue. In that case, they can likely only sue for copyright infringement damages.


What are the teeth on an admitted-in-court EULA violation?

Looking at the license, the only thing it grants the end-user and which Intel could revoke is the license to use the software:

> Subject to the terms and conditions of this Agreement, Intel hereby grants to you a non-exclusive, non-transferable right to use the Software (for the purpose of this Agreement, to use the Software includes to download, install, and access the Software) listed in the Grant Letter solely for your own internal business operations. You are not granted rights to Updates and Upgrades unless you have purchased Support (or a service subscription granting rights to Updates and Upgrades).

So yeah, they could revoke your license to the update and leave you with a lot of insecure silicon... but that's what you had before the update (and, in all likelihood, what you still have after the update, just with a known flaw patched and the system a lot slower). I don't even think they could sue you for copyright infringement, because you're not violating copyright, you're violating the EULA.


I didn't say it was particularly strong. The point is that it's something. Intel could claim that it's there because benchmark software impacts performance itself, and so they're explicitly making no claim of warranty when it's present. The point isn't that it's going to stick, just that it's more stuff to muck up the legal process with to delay judgement.

Weighed against the normal risks of someone actually reading the EULA, it seems like a minor thing that may help in some way.


A hotshot lawyer could sue them for this anticompetitive clause for big $$$.

(it disallows comparing the product against the competitors)


Sue Oracle.


A corporation the size of Intel doesn't do this sort of thing accidentally, especially as there was no good reason for them to be modifying the terms in the first place.


In 2014 I would certainly assume (1) as well but Intel has been getting less and less open and also more and more willing to creatively stretch the truth in their marketing over the last few years in a worrying way. Currently I think (2) is about equally as likely as (1).


The author updated the post. I'm guessing #1 (I work at Intel). Text and link below.

UPDATE: Intel has resolved their microcode licensing issue which I complained about in this blog post. The new license text is here (https://01.org/mcu-path-license-2018).


I'm going with option #3. Intel sold me a warranted part without these terms. Now they're trying to alter an existing sale after the fact because they need to fix a defect under that warranty. I will do as I please with my property and they can talk to my attorneys.


If you consider how fundamentally broken speculative execution is from a security standpoint, then a fully "fixed" processor will be significantly slower than unfixed ones. The benchmarking clause rather clearly shows this.


After it blows up I'm sure they'll claim it's the former, regardless of the latter.


My thought based on the HN headline "Intel Publishes Microcode Patches, No Benchmarking or Comparison Allowed" (after having read the article, of course):

Doesn't this show that it is time for someone to set up some kind of "ScienceLeaks" website, where scientists can upload research (results, papers, ...) anonymously which they are not allowed to do legally because of various such "research-restricting" laws?

---

UPDATE: Before people ask the potential question how the researchers are supposed to get their proper credit - my consideration is the following: Each of the researchers signs the paper with their own "public" key of a public-private key pair. This signature is uploaded as part of the paper upload. The "public" key is nevertheless kept secret by the researchers.

When the legal risk is over and the researchers want to disclose their contribution, they simply make their "public" key really public. This way, anybody can check that the signature that was uploaded from the beginning indeed belongs to this public key and thus to the respective researcher.


Or... just have people outside the US do the benchmarking.

These kinds of clauses are likely effectively null, void or unenforceable in any country that has decent consumer protection laws or laws concerning anti-competitive practices.

You also can't just go and write whatever into a document that already has weak legal footing in many places - and especially not after I purchased your defective product. This shit won't hold for a minute in court.


That's not how you do signing at all. You just use your private key to sign something, publish the public key for verification, and, when you want to reveal yourself, you just sign "I am X" with that key.
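For concreteness, a minimal sketch of that flow using libsodium's Ed25519 detached signatures; the messages and build command are placeholders I made up, only the libsodium calls themselves are real. Publish the paper, its signature, and the public key anonymously at upload time, keep the secret key, and later sign an "I am X" statement with the same key:

  /* Sketch of the sign-now, claim-later scheme described above.
   * Assumes libsodium is installed; build with: cc claim.c -lsodium */
  #include <sodium.h>
  #include <stdio.h>

  int main(void)
  {
      if (sodium_init() < 0) return 1;

      unsigned char pk[crypto_sign_PUBLICKEYBYTES];
      unsigned char sk[crypto_sign_SECRETKEYBYTES];
      crypto_sign_keypair(pk, sk);            /* sk stays with the researcher */

      /* At upload time: sign the (hash of the) paper, publish paper + signature + pk. */
      const unsigned char paper[] = "hash or contents of the anonymous paper";
      unsigned char paper_sig[crypto_sign_BYTES];
      crypto_sign_detached(paper_sig, NULL, paper, sizeof paper - 1, sk);

      /* Years later: claim authorship by signing a statement with the same key. */
      const unsigned char claim[] = "I am X and I wrote the paper above";
      unsigned char claim_sig[crypto_sign_BYTES];
      crypto_sign_detached(claim_sig, NULL, claim, sizeof claim - 1, sk);

      /* Anyone can verify both against the pk that was published at upload time. */
      int paper_ok = crypto_sign_verify_detached(paper_sig, paper, sizeof paper - 1, pk);
      int claim_ok = crypto_sign_verify_detached(claim_sig, claim, sizeof claim - 1, pk);
      printf("paper sig %s, claim sig %s\n",
             paper_ok == 0 ? "valid" : "invalid",
             claim_ok == 0 ? "valid" : "invalid");
      return 0;
  }

The important part is that the public key goes out with the paper on day one; only the mapping from that key to a real identity is deferred.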


OK, thanks for the correction. Much better, indeed. :-)


No problem! It's a very good strategy, it's what someone who wanted to remain anonymous would use.


> Doesn't this show that it is time for someone to set up some kind of "ScienceLeaks" website

Good idea, but I doubt it's necessary here. I'm fairly sure this sort of EULA clause is unenforceable in many jurisdictions. Run the benchmarks in a country where they are legal.


If that's the case would it be ok to publish these results in US-based websites such as Phoronix, even if they're done by other country's citizens in that other country?


If Phoronix never agreed to the license, then I can't see how that wouldn't be legal. And otherwise it's going to be a good year for Europe-based benchmarkers.


> UPDATE: Before people ask the potential question how the researchers are supposed to get their proper credit - my consideration is the following: Each of the researchers signs the paper with their own "public" key of a public-private key pair. This signature is uploaded as part of the paper upload. The "public" key is nevertheless kept secret by the researchers.

> When the legal risk is over and the researchers want to disclose their contribution, they simply make their "public" key really public. This way, anybody can check that the signature that was uploaded from the beginning indeed belongs to this public key and thus to the respective researcher.

Commonly-used public key signature systems (RSA, ECDSA) do not provide the security properties necessary for this. Someone else who wants to claim credit for the research could cook up their own key pair that successfully validates the message and signature. This is called a duplicate signature key selection attack (https://www.agwa.name/blog/post/duplicate_signature_key_sele...)


After chmod775's comment (https://news.ycombinator.com/item?id=17825777), I already suspected something in that direction. Thanks for the independent confirmation.


"Doesn't this show that it is time for someone to set up some kind of ..."

I don't think this is necessary ...

I think "circumventing" this benchmarking restriction is as simple as having one person purchase an Intel CPU and just drop it on the ground somewhere ... and have another person pick it up off of the street (no purchase, no EULA, no agreement entered into) and decide to benchmark the found item against other items.

"I found a chip on the ground that had these markings and numbers on it and here is how it performed against an AMD model."


I think that one has to apply the microcode patches after startup. For this, you have to obtain the microcode patch file from somewhere. So I am not sure whether this "legal hack" will work.


why can't you drop a computer with the microcode patches already applied?


Sleazy marketers would be all over that. :/


If they show reproducible results, then I don't see a problem. If they don't, well then you know it is either dodgy or not rigorous enough to take seriously.


They'd definitely show reproducible results, e.g. either get their mates to submit matching fake ones, or just submit an extra set themselves after slight tweaking.

That being said, as soon as someone with a clue comes along + tries them out and finds it's bogus... that would lead into potentially weird territory too. eg the dodgy submitters likely attempting to discredit the er... whistleblower(?).

Seems like a re-run of an old story. :/


> Doesn't this show that it is time for someone to set up some kind of "ScienceLeaks" website, where scientists can upload research (results, papers, ...) anonymously which they are not allowed to do legally because of various such "research-restricting" laws?

I haven't gone there since 2008, but why not just WikiLeaks?


Wikileaks is a Russian propaganda machine.

They are only interested in Anti-America leaks, not all leaks.


> Wikileaks is a Russian propaganda machine.

> They are only interested in Anti-America leaks, not all leaks.

Everybody should make their own judgement on the political bias of Wikileaks (which is, in my personal opinion, a very good reason why monopolies are typically bad; in other words: there is a demand for multiple independent Wikileaks-like websites), but the statement that Wikileaks is only interested in Anti-America leaks does not hold in my opinion. See for example the following leak about Russia's mass surveillance apparatus:

> https://techcrunch.com/2017/09/19/wikileaks-releases-documen...

Or let us quote

> https://www.reddit.com/r/WikiLeaks/comments/5mv07m/has_wikil...

"They released Syrian/Russian email exchanges, I don't know if it led to anything extremely controversial. In an interview, Assange said the main reason they don't receive leaks from Russian whistle-blowers is that the whistle-blowers prefer to hand over documents to Russian-speaking leaking organization.

And Wikileaks doesn't have Russian-speakers in their organization. You can tell your friend that if he or she wants Wikileaks to release damaging Russian documents, then he or she should hack the Russian government and give what they find to WL."

Or have a look at

> https://www.reddit.com/r/WikiLeaks/comments/5mv07m/has_wikil...

On

> https://www.reddit.com/r/IAmA/comments/5c8u9l/we_are_the_wik...

you can find a list of various counter-arguments to common criticisms of WikiLeaks.


Edit: Seems the parent comment has now reached net positive votes

It's a really sad thing you're being downvoted, as I made a similar comment about a year ago [1] in defense of WikiLeaks to over 30 upvotes. HN seems to be getting more and more active with their downvotes towards information that doesn't fit their current perspective.

[1] = https://news.ycombinator.com/item?id=13816762


> It's a really sad thing you're being downvoted, as I made a similar comment about a year ago [1] in defense of WikiLeaks to over 30 upvotes. HN seems to be getting more and more active with their downvotes towards information that doesn't fit their current perspective.

> [1] = https://news.ycombinator.com/item?id=13816762

I don't see myself as a defender of WikiLeaks. It is, for example, hard not to admit that the Democratic National Committee email leak and the exchange between Donald Trump Jr. and WikiLeaks (at least to me) has some kind of "smell" [2].

My argument rather is:

a) the position of WikiLeaks (if there exists one) is far more confusing and sometimes self-contradictory than it looks on the surface (in this sense, I am somewhat sceptical of both the "WikiLeaks defenders" and the "WikiLeaks prosecutors").

b) do not trust any side as "authoritative". Consider multiple different sources - in particular the ones that do not agree with your personal opinion - and form an opinion on the whole story.

[2] https://www.theatlantic.com/politics/archive/2017/11/the-sec...


Completely agree with that.

I just threw you into that category for the purpose of my comment as I believed it was a perceived defense of Wikileaks that you were being downvoted for, rather than something else regarding the content of your comment.


>Wikileaks is a Russian propaganda machine.

>They are only interested in Anti-America leaks, not all leaks.

Any evidence to prove this? If you go to Wikileaks there are leaks from all around the world. I personally am very glad Wikileaks exists, doing the work they do, and still has never been proven incorrect or fraudulent.

By the way, Anti-American leaks are also Pro-American leaks, even if they may not be favorable to your own political beliefs.


Ya, there's a huge niche in the market for pro-American leaks.


Nobody is asking for "pro-American" leaks. Wikileaks tried to present itself as an unbiased source of leaked news; the moment they refuse to leak negative news about a nation or individual, they are showing a bias.

If you go on Wikileaks right now and search "Russia" for 2012-2018, you get mostly news about how western plans are going to fail and how powerful Russia's military is. If you search for Russia in the "Syria Files" section, you get no results. How is that even possible?


That's not the way. The tech press isn't powerless here.

The media outlets that have been performing such benchmarks for decades, and have thus earned a large and faithful audience, can organize and simultaneously publish the relevant benchmarks in the US, complete with an unapologetic disclosure right at the top as to why this is happening. Put Intel in the position of suing every significant member of the US tech media and create a 1st amendment case, or perhaps try to single out some member and create a living martyr to whom we can offer our generous gofundme legal defense contributions over the decade it takes to progress through the courts. Either way Intel creates a PR disaster for itself.

Sack up and call their bluff. There are MILLIONS of people that will stand behind that courage. At some point the shareholders will feel it and this nasty crap will end.


The press can just give Intel a "Don't buy" rating with a comment that Intel has something to hide; we don't know what, but you would be a fool to even consider them when you can't know if they are any good. Technically I can't even show that the latest Intel chip is faster than an old i386 that I'm finally ready to replace, while I can show that the AMD is better.


How is the 1st Amendment relevant when private parties deal with each other? (As far as I know, it's not.)

It's contract law and tort law that's relevant. Intel's defective product, Intel's ToS, and maybe fair use. (As the tester could get the patch without an explicit license and try to claim fair use.)


Why would the legal risk ever be over?


> Why would the legal risk ever be over?

I can imagine some scenarios:

- A company is bought by another one which has a different legal policy and voids some legal restrictions even retroactively

- By some other way, the "illegal" knowledge in the paper became "public knowledge", so that company lawyers will have a hard(er) time convincing a judge that the respective paper is of any danger and thus that the author should be prosecuted. For example: after these microcode patches, people will of course privately benchmark the performance differences. So some years later, the order of the performance differences is kind of public knowledge.

- With time, companies have much less legal interest in suing people for disclosure. As long as there is a high commercial interest, companies can be very "sue-happy". On the other hand, each of these lawsuits is a PR risk for the company. If the respective product becomes outdated, the economic advantage of a lawsuit is much less than the probable PR loss.

- The researcher now (later on) lives in a different country that has a different legal system where there is much less legal risk. For example, in Germany (in contrast to the USA), what can be included in general business terms is heavily restricted.

In all of these cases, it can become much less legally dangerous to disclose the real identity of the author. On the other hand, there exists an incentive (academic credit) to do so.


As a side note: Some of the license changes also block Debian from updating their intel-microcode package[1].

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=906158#14


My guess is Intel will revert the license change soon. It's just too absurd to stay. But if not, I wonder if distros could have two packages, named with appropriate and well-deserved passive-aggressiveness, e.g.: intel-microcode-insecure and intel-microcode-legally-restricted.


And maybe they'll mark benchmarking packages as conflicting with intel-microcode-legally-restricted?


Installing the benchmarking package, or running it, isn't against the license. Providing or publishing (comparative?) numbers while the microcode package is applied would be.


There is nothing passive-aggressive in intel-microcode-legally-restricted, it's just a statement of fact.


It's not quite that clear cut IMO.

E.g. "intel-microcode-legacy" and "intel-microcode" would be more diplomatic (disgustingly so IMO).

And one could argue that every package in non-free would deserve a "-legally-restricted" suffix.


Debian has a dedicated archive, named "non-free", for everything that does not comply with the DFSG.

The DFSG is a set of guidelines that define what can be considered true FLOSS.

The reason is to protect users from legal risks.

https://www.debian.org/social_contract#guidelines

https://www.debian.org/doc/debian-policy/ch-archive.html


This seems like it would be a big deal considering this whole thing is related to servers, and I have to imagine some server operators are running Debian? Maybe at the bare metal level they're all running RHEL, which I presume doesn't care about the license restrictions.


From the article linked in TFA[1], Debian appears to be the only distribution that is refusing to release it. Gentoo has made it so that you have to agree to the new license when upgrading your intel-ucode package, while other distribution vendors (Arch, Red Hat, and SUSE) all appear to be shipping it without issue.

The argument from Intel is that the new changes don't actually affect distributions, as distributions are given the right to redistribute the microcode in the license (and this is separate to allowing third parties to publish benchmarks). Either that, or the lawyers at Red Hat and SUSE missed this somehow (though this is unlikely -- at SUSE all package updates go through a semi-automated legal review internally, similar to how openSUSE review works).

I do understand Debian's ethical issue with this though, and I applaud them for standing up for their users (though unfortunately it does leave their users vulnerable -- I imagine it was a difficult decision to make.)

[I work at SUSE, though I don't have any hands-on knowledge about how this update was being handled internally. I mostly work on container runtimes.]

[1]: https://www.theregister.co.uk/2018/08/21/intel_cpu_patch_lic...


> Gentoo has made it so that you have to agree to the new license when upgrading your intel-ucode package

As well as not redistributing it to mirrors, as they are unable to ask mirrors to accept a new license. [0]

[0] https://bugs.gentoo.org/664134#c2



Would be cool if this was thrown out by the courts.
