I actually thought Intel must have had some tricks up their sleeves in terms of performance gains that we hadn't seen yet, simply because there was no market need to roll them out and they had so many years of coasting on marginal gains.
Seeing them take this stance suggests the microcode hit really is that bad, and that the emperor has no clothes.
Clearly, they don't have an answer to AMD at all. If this is true, their shareholders should be asking serious questions about why they've nothing significant to show for all that time and money spent when they were raking it in without a serious competitor.
There are other services like Packet that offer bare-metal hosting on small Atom and ARM processors. It'd be nice to see some alt x86 processors in this space.
I think you're more likely to see some ARM CPUs with performance comparable to low-end and mid-range x86 before you see a new x86 competitor. The overhead of making x86 perform well is just so high that I can't imagine anyone new bothering to get into the space. VIA has been in the market forever as the third seller of x86, and despite the theoretical benefits of entering the datacenter CPU market, they've never made that leap (though I don't know enough about their history to know if they tried).
I'm hoping that ARM becoming competitive in the client CPU space ends up pushing enough drivers/kernel stuff over the cross-compile hump that we can start to see some more diversity in the CPU market overall. I'm excited about RISC-V, especially now that they have shipping hardware you can play with today. The Mill CPU sounds super cool in a bunch of ways, but the architecture has made so many decisions that I'm unsure will play out in practice that I'm holding my excitement until I see real silicon somewhere.
They might have also baked in slight tweaks or customized whatever back-doors could be included if such things exist...
It's better to think of it as a Chinese subsidiary in a franchise system.
They haven't; they instead entered the niche of low-cost kiosk hardware. Intel and AMD completely abandoned it due to lower profit margins, but it offers enough raw sales volume to keep VIA afloat.
Big companies rarely innovate without competition around.
If you're selling a subscription to a best-in-market service, there's not a lot of motive to innovate, agreed. Maybe you'd try a new product or a premium variant, but there's no reason to sink effort into advances that won't let you expand userbase or raise prices.
But for Intel? Before AMD got going, Intel's biggest competition was itself 9/18 months previously. Innovation wasn't just for new devices and new buyers, it was how they sold updated processors for machines that were already Intel Inside. They're not waiting on processors to die, they're actively pushing for them to be replaced for performance reasons.
That might create an incentive to release 'good enough' updates and dole out big improvements gradually, but in practice any of that which happened was already ending. Intel appears to be up against the wall regarding 10nm even with a competitor, and has been attempting major innovations to handle 7nm and below for years. With a revenue stream that relies on annual improvements to their product, they seem to have been leaning hard into innovation and struggling, rather than waiting for a competitor.
The common complaint about subscription models is obviously valid: if the company folds you have nothing, instead of an unsupported product. But it neglects the other issue, which is that companies intentionally cut off the possibility of "good enough" to guarantee revenue.
I don't think it's an accident that products like Microsoft Office went to subscriptions around the time it became very hard to imagine a new feature actually worth paying to upgrade for.
The Computerphile YouTube channel has some interviews with computer scientists who worked at Bell Labs during their heyday. It's incredibly interesting to hear about how they screwed around with early Linotype printers (which cost $100k+ in the '70s) and did things like reverse-engineer the font formats to create custom chess fonts.
They had direct-dial long-distance by 1951, using relays and tubes.
The transistor started making a difference in telephony with the release of the 1ESS switch in 1965. But transistors were a commodity by then.
Not modern internet services, no, but all of the infrastructure they're built on is grounded in Bell Labs' work.
Compared to what Bell Labs invented, Google and Facebook are insignificant.
We could go back to Altavista and no-Facebook and we'd be more or less fine.
Giving back the Bell Labs technology would be a much harder hit...
As you say, bigger companies have a harder time innovating.
Of course, the bigger you are, the more likely it is that there is no competition at certain times.
And if they can survive without investing, a lot of companies choose not to.
Microsoft / IBM and now Intel definitely did.
I think it has more to do with the leadership of the company than the size of the company.
It depends whereabouts on the timeline. When AMD hired Lisa Su and Jim Keller in 2012, we all thought it was too little, too late. Look back at the roadmap Intel was giving at the time; I used to joke that Tick Tock was the sound of AMD's death clock. In 2012 we were looking at 10nm in 2016, 7nm in 2018, and 5nm in 2020. We had just gotten Sandy Bridge, but that was the last big IPC improvement we have had.
Fast forward to 2018 / 2019: no 10nm, and I would have been happy if they were selling me a 14nm++++++ quad-core Sandy Bridge. Broadwell and Skylake brought nothing substantial. Intel was supposed to break into the ARM mobile market with a tour de force, and that didn't happen.
We all assumed Intel had many other tricks up its sleeve, a new uArch or 10nm waiting in the wings for when it was needed. Turns out they have nothing. Why did they buy McAfee (which has already been sold off)? And Infineon? Nearly eight years after that acquisition they are just about to ship their first mobile baseband made in their own fabs. Eight years! What an achievement! It is nearly three years after their acquisition of Altera, which itself had been working with Intel's custom fab business before that. What do they have to show?
During that time, the scale of the smartphone revolution has helped pure-play fabs like TSMC make enough profit to fund R&D rivalling Intel's. And in a few more weeks we will have a TSMC / Apple 7nm node shipping in millions of units per week, which in terms of HVM on a leading node puts TSMC ahead of Intel for the first time in history. AMD has been executing well on their roadmap, and Lisa Su did not disappoint. Nothing on those slides was marketing speak or the kind of tricks Intel used. No over-hyped performance improvements, just a promise of incremental progress. She reminds me of Pat Gelsinger from Intel: down to earth, and telling the truth.
Judging from the results, though, AMD isn't making enough of a dent in OEM and enterprise. Well, I guess if you are not buying those CPUs with your own money, why would you not buy Intel? The consumer market and the small web-hosting market seem to be doing better, where the owners are the ones paying. I hope Zen 2 will make enough of an improvement to change those people's minds: better IPC, better memory controller.
If you loathe Intel after all the lies they have been telling and all the marketing speak, you should buy AMD.
If you love Intel still after all, you should still buy AMD, teach them a painful lesson to wake them up.
Ironically, Broadwell's 128MB L4 cache did bring a substantial performance boost to a whole range of real-world applications, but it seems it's so expensive to manufacture that they've subsequently dropped the feature except for Apple's iMacs and expensive laptops.
> If you love Intel still after all, you should still buy AMD, teach them a painful lesson to wake them up.
But how do I choose which AMD CPU I need? Back in my youth the P4 and Athlon were easy to compare (frequency, IPS and a modifier because AMD), but now I can't even tell the differences between any i3/i5/i7, and when I look at AMD names it's just as confusing but with a different lingo. I feel the same regarding GPUs, so maybe I am too old for this now.
A project of mine is a hardware recommender, it also includes a meta-benchmark. I collect published benchmarks and build a globally sorted order of processors out of it. https://www.pc-kombo.com/benchmark/games/cpu for games, https://www.pc-kombo.com/benchmark/apps/cpu for application workloads (that one still misses a bit of work, the gaming benchmark is better). Legacy processors are greyed out, so this might be a good starting point for you. There is also a benchmark for gpus.
For most people this processor choice is also very easy, it is "Get a Ryzen 5 2600 or an Intel Core i5-8400."
Feel free to ask if you want some custom recommendations, email is in profile :)
Also, the existing bar graph is unclear to me. What does 10/10 mean?
Example, fictional values: The 8700K is the fastest because it was most often seen as the benchmark leader; it gets a 10. The 8600K has almost the same FPS, but it was always a bit slower, so it gets a 9.9. The i5-8500 comes next, but its average FPS scaled to the 0-10 scale is lower, so it gets an 8.7. Then the i5-8400, always seen as slower than the 8500 in benchmarks, would at most be able to get an 8.6, no matter what the average FPS says (with enough benchmarks, average FPS becomes an almost meaningless metric; it's the position in the benchmark that counts).
That's why it is not possible to calculate price/performance with this. I could only highlight good deals, processors that have a high position despite being cheaper than the processors below. Which is of course already baked into the logic of the recommender.
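If it helps, here is a rough sketch in C of that positional scoring idea (entirely fictional names and numbers, and a simplified stand-in for whatever the recommender actually does, not its real code):

```c
#include <stdio.h>

/* Hypothetical sketch: CPUs are already ordered best-to-worst by how often
 * they win in the collected benchmarks; average FPS only fine-tunes the
 * score and can never push a CPU above one that consistently ranks higher. */
struct cpu {
    const char *name;
    double avg_fps;   /* average FPS across collected benchmarks (fictional) */
    double score;     /* resulting 0-10 score */
};

int main(void) {
    struct cpu cpus[] = {
        { "i7-8700K", 150.0, 0 },
        { "i5-8600K", 148.0, 0 },
        { "i5-8500",  131.0, 0 },
        { "i5-8400",  130.0, 0 },
    };
    int n = sizeof(cpus) / sizeof(cpus[0]);
    double max_fps = cpus[0].avg_fps;

    for (int i = 0; i < n; i++) {
        double fps_score = 10.0 * cpus[i].avg_fps / max_fps;
        if (i == 0) {
            cpus[i].score = 10.0;                  /* the leader always gets a 10 */
        } else {
            double cap = cpus[i - 1].score - 0.1;  /* never overtake a higher-ranked CPU */
            cpus[i].score = fps_score < cap ? fps_score : cap;
        }
        printf("%-10s %.1f\n", cpus[i].name, cpus[i].score);
    }
    return 0;
}
```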
The way I see it, I can look at AMD's core count, threads and frequency, as they are clearly labelled, and that is it. On Intel's side you have features turned on and off for different i3/5/7/9, AVX speed differences, etc. I don't even want to bother looking it all up.
Seriously, Intel has always been way too confusing with their processor lineup. AMD has always been straightforward: leave in the kitchen sink on nearly every CPU and performance scales with price. Not linearly of course, but it's much simpler to choose an AMD CPU.
We're not talking minor performance differences in features, we're talking features randomly added and removed for no logical reason from the same generation & tier of CPU model.
"Gamers need these features, but gamers often also setup servers. Let's remove server features from gaming CPUs so they can't reuse them when they upgrade, and so they have to buy new server CPUs!"
Basically, if you don't want the headache you just buy one of the highest end/most expensive CPUs on offer and you're probably fine. With AMD the feature set is pretty consistent, so you have considerably more choice to find a good price point.
In comparison, AMD's offerings are surprisingly easy. There's a handful of SKUs differentiated on core count and frequency. Generally they all have the same PCIe lanes, RAM access, SMT, instruction sets, and so on.
Ryzen 1 - slow, medium, fast, elite
Ryzen 2 - slow, medium, fast, elite
pick the one you need based on pricing/discounts if any. It's not that hard really.
For gen 2, that'd be:
23xx, 25xx, 27xx, ThreadRipper.
I think they picked the names/numbers to show some kind of equivalence with i3/5/7, but that's not quite it.
ThreadRipper is interesting. It requires a different socket than the other desktop processors and is targeted more at workstation class machines.
In servers there are Epyc and Epyc 2.
Threadripper is their workstation CPU.
slow (3), medium (5), fast (7), elite (Threadripper)
If you don't want to spend more than USD$800 on the CPU alone, don't look at the elite/threadripper line.
What is your budget? That is the first question you need to answer. Instead of trying to understand the entire line of chips, look at how much you have to spend, and then find the fastest one within that budget.
It's useless to try to think about the entire line of CPUs when you will only buy one chip (unless you are representing Dell and need to buy thousands of chips).
It seems that Intel couldn't jump into EUV manufacturing when they were fully dominant because it was too expensive and too new, so they started improving multipatterning to improve resolution, which proved too hard to ship (hence the delays). Meanwhile, smaller players went their own way until recently, and now that EUV is accessible they can jump in swiftly while Intel is still caught inside its intermediate strategy, lazy market behavior and unforeseen failures. Intel also has EUV planned, but not until the next generation. Note that even at a larger pitch their process is nearly competitive with the smaller ones today, but it sounds terrible.
I kick myself for not buying AMD at $2 (or buying a LOT more at $5).
I just looked this up, and it seems to boil down to the patent cross-licensing agreement between Intel, which developed the x86 architecture, and AMD, which developed the 64-bit instruction set. I don't think there's a unilateral "non-transferable license" per se — and they're free to enter into a new agreement if either party does get acquired.
This Reddit thread seems pretty good at explaining it in much more detail: https://www.reddit.com/r/hardware/comments/3b0ytk/discussion...
Even if you buy this (I don't), there's no point in them staying in business with absolutely non-competitive products. The remarkable thing is that thanks to Zen that did not happen, and Intel actually feels some heat for the first time in years.
They still have a huge opportunity for CPU+FPGA; they bought Altera for that purpose.
> a 2004 study by Bain & Company found that 70 percent of mergers failed to increase shareholder value. More recently, a 2007 study by Hay Group and the Sorbonne found that more than 90 percent of mergers in Europe fail to reach financial goals.
Especially when the merger has to be deep and involve engineering teams with different cultures joining and working together on the product. So I'd consider the release of the first Xeon+FPGA three years after the acquisition something of a success.
I would have guessed additional lead time for Altera to move their designs from the TSMC to the Intel process, but it looks like Altera had been planning to fab on Intel's 14nm since 2013.
The obvious way forward is universal specialized coprocessors, reprogrammable for the task(s). Better if tightly integrated with the memory, buses and CPUs.
The weak side of FPGAs historically is programmability and especially the tools. But since interest in FPGAs has been growing exponentially in the open-source community in recent years, things may change.
And by the way, 10 years ago you would have said exactly the same 'niche' things about GPUs.
There are uses for FPGAs where there's enough money at stake for the hardware development but the number of units is small - stuff like high frequency trading or many defense roles. Or in the development of new hardware. But it's pretty niche.
Similarly, we’re finding more functions we can take away from the CPU and migrate to dedicated circuitry (FPGAs) that can handle those tasks more efficiently than the CPU can.
GPUs avoid the overhead of FPGAs while still retaining a lot of flexibility.
But, to clarify, I was speaking of consumer/mobile. The original iPhone was quite revolutionary for having a decent PowerVR graphics chip. High end symbian phones just had a CPU. See for example https://en.wikipedia.org/wiki/Nokia_6110_Navigator or https://en.wikipedia.org/wiki/Motorola_Razr2
Even though GPGPU was already big in 2008, people still thought of it as a difficult to use coprocessor for big compute jobs. Much as people consider FPGAs now.
And the first iPhone shipping with a powerful graphics chip is a counter to your argument that the future of mobile wasn't clear. The people with the ideas wanted a graphics processor.
If Intel can release a CPU with a built-in FPGA and everyone has one, software developers will take advantage of them. I can see stuff like video editing programs, compression algorithms, etc taking advantage of that.
FPGA is great if you need to talk to some hardware very fast/on many pins. E.g. something like a network router where you are shuffling packets between many high speed interfaces. Or doing a lot of measurements/interfacing a bunch of high speed sensors.
But not for general-purpose software: GPUs are faster, easier to develop for (with good tooling), and much cheaper for doing that today.
Also, did they try to use the enhanced locality introduced by processing several streams on the GPU? E.g., if you keep the states for all your streams sorted by the tuple (state id, stream id), you may get a more memory-controller-friendly access pattern. I haven't seen any mention of that technique (which MUST be considered after Big Hero 6; they used that technique to essentially never miss caches during the whole movie rendering process). Big Hero 6 is from 2014, the paper is from 2017.
I really do not like papers like the one you linked. One system gets all of the treatment while the other ones get... whatever is left.
I guess had they tried to use these techniques on GPUs, they would have gotten a performance gap much smaller than reported.
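To make the reordering idea concrete, here is a minimal host-side sketch in C (the struct, field names and values are made up for illustration; a real implementation would sort the pending work items like this before launching the GPU kernel):

```c
#include <stdio.h>
#include <stdlib.h>

/* One pending unit of work: which automaton state to advance, for which
 * input stream. Names are hypothetical. */
struct work_item {
    int state_id;
    int stream_id;
};

/* Sort by (state_id, stream_id): items touching the same state's transition
 * data end up adjacent, so consecutive threads/lanes access the same memory,
 * which is friendlier to the memory controller. */
static int cmp_items(const void *a, const void *b) {
    const struct work_item *x = a, *y = b;
    if (x->state_id != y->state_id)
        return x->state_id < y->state_id ? -1 : 1;
    if (x->stream_id != y->stream_id)
        return x->stream_id < y->stream_id ? -1 : 1;
    return 0;
}

int main(void) {
    struct work_item items[] = {
        { 7, 0 }, { 2, 1 }, { 7, 2 }, { 2, 0 }, { 7, 1 },
    };
    size_t n = sizeof(items) / sizeof(items[0]);

    qsort(items, n, sizeof(items[0]), cmp_items);

    for (size_t i = 0; i < n; i++)
        printf("state %d, stream %d\n", items[i].state_id, items[i].stream_id);
    return 0;
}
```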
Not a thing for everyday desktops, but looks like compute competition is er... heating up. :)
This might be the answer. No competition and a good cash flow is a comfortable position. You defend this model with ads, policy, etc. and technical innovation can languish. I am not saying this is necessarily the case, but it is possible that Intel just got comfortable, slow and fat. Having a scrappy, smart competitor can be a good thing.
ARM and RISC-V have become a serious threat and are on the way to getting standardized ecosystems...
Traditionally, yes, Intel has absolutely dominated the laptop market. However, I have been seeing a lot more laptops lately with a Ryzen processor and Vega graphics.
AMD is making gains in a very lucrative market.
I have never met a person with a smartphone that uses an Intel chip. They probably exist, but I know of none.
Apple isn't using them. Samsung isn't. HTC? Nope. Google Pixel? Nope...
Intel basically sold off/scuttled their mobile division right before the iPhone took off.
> It comes as Microsoft continues its work with Qualcomm to optimize Windows for devices powered by Qualcomm's Snapdragon chips, including the forthcoming Snapdragon 850, which Samsung used for its first Arm-based Windows 10 laptop. So it appears there is some momentum behind the concept.
However, I used to own a Tegra K1-based Chromebook, and that thing was sl-o-o-o-o-o-o-w, and it only got worse with successive updates. I'm not really optimistic when it comes to performance, absent highly-optimized apps. The state of Firefox and Chrome doesn't really fill me with confidence.
I'd like for ARM laptops to become more popular though, as a Debian user nearly all the software I use is already there.
But they have lots to show for that money!
Branding changes such as "Gold" and "Platinum", which gouge customers more than ever before.
I am going to do that today. While we only have several thousand users, they do CPU-intensive work, buy a lot of new CPUs and rent a lot of servers. My small contribution will likely amount to low-mid 6 figures out of Intel's pocket over the coming 2-3 years.
Please consider making a similar announcement if you could do some damage as well.
AMD's chips definitely speculatively execute instructions. It's a common performance trick.
AMD's chips also definitely throw an exception at retirement (of course) for instructions that attempt to load a privileged address, just like Intel's chips do.
The difference is that when AMD's chips see a load instruction, the load isn't executed until they know that the address isn't privileged. Intel's chips do execute the load (but then throw away the result when they realise the address was privileged).
The speculative part is for instructions that depend on the result of the load.
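For readers who want the concrete shape of that difference, here is a much-simplified illustration in C of the Meltdown-style sequence being discussed (not a working exploit; a real attack is typically written in assembly and also needs to suppress or recover from the fault and to time accesses to the probe array afterwards):

```c
#include <stddef.h>
#include <stdint.h>

/* Attacker-controlled array, flushed from the cache before the attempt;
 * volatile so the compiler doesn't drop the "useless" read below. */
static volatile uint8_t probe[256 * 4096];

void meltdown_style_gadget(volatile const uint8_t *kernel_addr) {
    /* This load targets a privileged address and architecturally faults
     * either way. On affected Intel parts the loaded value can still be
     * forwarded to the dependent access below before the fault is raised
     * at retirement, leaving a secret-dependent cache footprint in probe[].
     * With an access-time permission check (as described above for AMD),
     * the dependent access never sees the secret value. */
    uint8_t secret = *kernel_addr;           /* faults; data may still be forwarded speculatively */
    (void)probe[(size_t)secret * 4096];      /* secret-dependent load -> cache side channel */
}
```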
Thank you for the detailed explanation. Can't we conclude that AMD's engineering is more defensive than Intel's? That is what I concluded.
Both AMD and Intel hire really smart people, but this stuff is really, really hard.
It's clear that AMD weren't doing anything special w.r.t. side channel attacks. They were just further behind in the optimisation race and, as a consequence, were hit less.
AMD enforce privilege checks at access time rather than at retirement time. Whether this is due to "lack of optimization" or "good security engineering", nobody knows. But your claim that this was purely the result of "less optimised [sic]" cores is nonsense. You have zero evidence whatsoever that that was the case vs. AMD just having superior engineering on this particular aspect and not adding bugs to their architecture.
All we know is that Intel & ARM CPUs have an entire category of security bugs that AMD's don't, and that upon close analysis AMD's CPUs are operating on a more secure foundation.
If AMD were making conscious efforts to avoid side channel attacks they'd already have had various features to show for it like IBRS. But AMD's chips say nothing about side channel attacks. Their manuals do not discuss Spectre attacks. And there is no evidence anywhere that they knew about Meltdown type attacks and chose to avoid them.
I get that suddenly hating on Intel is cool and popular, but the facts remain. There is no reason to believe AMD has any advantage here.
The facts are that AMD does not have a significant per-core IPC deficit vs. Intel (as supported by every Ryzen review at this point) and that AMD has, on multiple occasions now, had objectively superior security.
You're trying to twist this into a negative against AMD. It's nonsense FUD. Intel fucked up, AMD didn't. Why are you trying to run PR damage control for Intel?
I am really unsure where you're getting this from. Your reference to the spec makes me wonder if you really understand what Meltdown and Spectre are. They aren't bugs in the chips even though some such issues may be fixable with chip changes, because no CPU has ever claimed to be resistant to side channel attacks of any form. Meltdown doesn't work by exploiting spec violations or actual failures of any built in security logic, which is why - like Spectre - it surfaces in Apple chips too. Like Spectre it's a side channel attack.
I'm not trying to twist this into a negative for AMD: it's the other way around, you are trying to twist this into a positive for them, although no CPU company has done any work on mitigation of micro-architectural side channel attacks.
I'm simply trying to ensure readers of this discussion understand what's truly happening and do not draw erroneous conclusions about AMDs competence or understanding of side channel attacks. What you're attempting to do here is read far more into a lucky escape than is really warranted.
Memory was accessed that the spec says was not accessible. This has nothing to do with side-channels. The side channel part of the attack was how the spec violation was exploited.
For Meltdown specifically Intel fucked up, AMD didn't. This is not at all vague. Whether or not this was due to luck or not is irrelevant, it was clearly NOT due to incompetence as you were pushing. You pushed a narrative that AMD was too incompetent ("missing optimization") to have a severe security bug. That has nothing to do with reality whatsoever.
Side note, side channel attacks are not exactly obscure. Guaranteed AMD & Intel have security experts that are well aware of how side channel attacks work long before any of meltdown & spectre came to light. They have been around for ages. Practical exploits of L1/L2 cache via mechanisms like Prime+Probe date back at least as far as 2005: https://eprint.iacr.org/2005/271.pdf
For this particular optimization, it looks better not to have it.
But in general, knowing how something is flawed lets you mitigate the flaws. We use floating point despite it being mathematically wrong, because it's fast and we can mitigate the wrongness. I could imagine a chip where speculation could be toggled between different modes, if there was enough demand.
I will say that "just further behind" is probably wrong. AMD has a lot of optimizations in their chips. They have safer ones, which might be luck or might be engineering talent, but it's not a mere artifact of being behind the curve.
Intel has two faults: one, they made a significant mistake in their chip design, and two, they responded to criticism poorly. AMD did not make the mistake and has responded well to criticism.
In the Desktop space, I can definitely recommend AMD. But Laptops are totally Intel right now.
There are AMD Laptops, but they are hard to recommend. Most are low-end offerings at best (laptop manufacturers don't put the high-end stuff with AMD). As long as high-end HP / Dell / Lenovo / Asus / Acer / Apple / Microsoft Surface are all Intel-based, you're basically forced to use an Intel Laptop.
Desktops... you can build your own. And even if you couldn't, there are a ton of custom computer builders out there making great AMD Desktop rigs.
With that being said, AMD laptops are competitive in the $500 to $750 space. Low-end to mid-tier laptops... the ones with poor mouse controls, driver issues, bad keyboards, low-end screens and such.
But hey, they're cheap.
It's not really AMD's fault. But in any case, it's hard for me to find a good AMD laptop. So... that's just how it is.
Kind of waiting to see what the Lenovo ThinkPad A285 looks like, as well.
That's what I'm talking about: most laptop manufacturers offer a "premium" Intel laptop. But then they have a lower-quality AMD one on the side.
Nothing AMD has done wrong per se, just laptop manufacturers refuse to sell AMD on the high-end.
Going with AMD at this point is simpler, better for security, and if you care about this sort of thing -- rewards more honest and less consumer-predatory behavior. I've always gone with Intel my whole life, but given these many incidents with Intel, combined with their really poor public responses, I will now be switching to AMD for all future PC builds.
It would've carried that much more weight if it were _your_ low-mid 6 figures that were redirected away from Intel.
As they allow more cores per socket this can also often massively reduce per-socket licensing cost, if you have the misfortune of using software which requires that.
I think the idea for HPC, though, is that you want to offload these highly vectorizable operations to a GPU. Or maybe you're doing a lot of Monte Carlo that is hard to vectorize.
I really do like avx-512 though. If you're writing a lot of your own code, and that code involves many small-scale operations and has control flow (like in many Monte Carlo simulations), it's a lot easier than mucking with a GPU. If you're using gcc though, be sure to add `-mprefer-vector-width=512`, otherwise it will default to 256 bit vectors (clang and Julia use 512 by default).
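As a tiny illustration (a generic kernel of my own, not from any particular codebase), this is the kind of loop where that flag matters:

```c
/* saxpy-style loop; with gcc, try something like:
 *   gcc -O3 -march=skylake-avx512 -mprefer-vector-width=512 -c saxpy.c
 * Without -mprefer-vector-width=512, gcc tends to auto-vectorize with
 * 256-bit (ymm) vectors even when AVX-512 is available. */
void saxpy(float *restrict y, const float *restrict x, float a, int n) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];   /* auto-vectorizes to zmm ops at width 512 */
}
```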
1. It's a mistake. Someone in legal got carried away.
2. The performance of the L1TF mitigation is so awful that someone at Intel thought it would be a good idea to try to keep the performance secret.
(Which leads to option 2b. The performance of the L1TF mitigation is so awful that someone at Intel is afraid that Intel could be sued as a result, and they want to mitigate that risk.)
I would guess it's #1.
Anyhow, specifics aside, you do make a good point.
* cease and desist orders. They could probably argue improper use of trademark or something. And by "could" I don't mean "they have a legitimate legal case" but rather "a flimsy one, but one that is still scary enough that few people would want to take the risk / expense of testing the argument in court".
* many benchmarks are run by reviewers who might have access to components before they hit the shelves. It would be trivial for Intel to end that relationship. If it's suppliers further down the chain who are providing samples to journalists and reviewers, then Intel might put pressure on those suppliers to end their relationships with said journalists. This might break a few anti-competition laws in the EU, but it's not like that's ever stopped businesses in the past.
On a tangential rant: I think the real issue isn't so much whether it is enforceable but rather the simple fact that companies are even allowed to muddy the waters about what basic journalistic and/or consumer rights we have. I'm getting rather fed up of some multi-nationals behaving like they're above the law.
It actually makes it easier for Intel to argue that chips are such specialized bits of equipment that even though your average joe can _get_ one, they won't understand how it works, and so only highly trained professionals who have been certified by Intel would be able to reliably benchmark their products. "As the average user would interpret the results incorrectly, their publication would hurt Intel's bottom line." And suddenly you're 95% of the way to having won the case already.
Isn't that a blessing?
> Cloud Company: Your honor, the security flaws in the hardware and microcode provided by the defendant necessitated the installation of updates, also provided by defendant, which resulted in a 30% loss of overall performance. Since our business model is predicated on selling the processing power of computers that have CPUs manufactured by defendant installed, they are liable for this loss in productivity.
> Intel: Your honor, plaintiff could not possibly prove any loss in productivity. If you'll examine Exhibit A, the Intel microcode EULA, you will see that it expressly prohibits benchmarking. Whether plaintiff is claiming they did these benchmarks themselves or a third party did them is immaterial because our license expressly forbids doing so. Plaintiffs need to show a loss of productivity without relying on performance benchmarks and therefore need to show that the workloads prior to and after the microcode has been installed are equivalent and that the results are detrimental.
Now, I don't expect most judges would go for it since EULAs are notoriously weak, but it does give them ammunition to impugn the evidence. It's always possible a judge or jury would listen to that.
Looking at the license, the only thing it grants the end-user and which Intel could revoke is the license to use the software:
> Subject to the terms and conditions of this Agreement, Intel hereby grants to you a non-exclusive, non-transferable right to use the Software (for the purpose of this Agreement, to use the Software includes to download, install, and access the Software) listed in the Grant Letter solely for your own internal business operations. You are not granted rights to Updates and Upgrades unless you have purchased Support (or a service subscription granting rights to Updates and Upgrades).
So yeah, they could revoke your license to the update and leave you with a lot of insecure silicon...but that's what you had before the update (and, in all likelihood, what you still have after the update, just with a known flaw patched and the system a lot slower). I don't even think they could sue you for copyright infringement, because you're not violating copyright, you're violating the EULA.
Weighed against the normal risks of someone actually reading the EULA, it seems like a minor thing that may help in some way.
(it disallows comparing the product against the competitors)
UPDATE: Intel has resolved their microcode licensing issue which I complained about in this blog post. The new license text is here (https://01.org/mcu-path-license-2018).
Doesn't this show that it is time for someone to set up some kind of "ScienceLeaks" website, where scientists can anonymously upload research (results, papers, ...) that they are not legally allowed to publish because of various such "research-restricting" laws?
UPDATE: Before people ask how the researchers are supposed to get their proper credit, my thinking is the following: each of the researchers signs the paper with the private key of a public-private key pair. This signature is uploaded as part of the paper upload. The public key is nevertheless kept secret by the researchers.
When the legal risk is over and the researchers want to disclose their contribution, they simply make their public key really public. This way, anybody can check that the signature that was uploaded from the beginning indeed belongs to this public key and thus to the respective researcher.
These kinds of clauses are likely effectively null, void or unenforceable in any country that has decent consumer protection laws or laws concerning anti-competitive practices.
You also can't just go and write whatever into a document that already has weak legal footing in many places - and especially not after I purchased your defective product. This shit won't hold for a minute in court.
> Doesn't this show that it is time for someone to set up some kind of "ScienceLeaks" website
> When the legal risk is over and the researchers want to disclose their contribution, they simply make their public key really public. This way, anybody can check that the signature that was uploaded from the beginning indeed belongs to this public key and thus to the respective researcher.
Commonly-used public key signature systems (RSA, ECDSA) do not provide the security properties necessary for this. Someone else who wants to claim credit for the research could cook up their own key pair that successfully validates the message and signature. This is called a duplicate signature key selection attack (https://www.agwa.name/blog/post/duplicate_signature_key_sele...)
I don't think this is necessary ...
I think "circumventing" this benchmarking restriction is as simple as having one person purchase an Intel CPU and just drop it on the ground somewhere ... and have another person pick it up off of the street (no purchase, no EULA, no agreement entered into) and decide to benchmark the found item against other items.
"I found a chip on the ground that had these markings and numbers on it and here is how it performed against an AMD model."
That being said, as soon as someone with a clue comes along + tries them out and finds it's bogus... that would lead into potentially weird territory too. eg the dodgy submitters likely attempting to discredit the er... whistleblower(?).
Seems like a re-run of an old story. :/
I haven't gone there since 2008, but why not just WikiLeaks?
They are only interested in Anti-America leaks, not all leaks.
> They are only interested in Anti-America leaks, not all leaks.
Everybody should make their own judgement on the political bias of WikiLeaks (which is, in my personal opinion, a very good reason why monopolies are typically bad; in other words, there is demand for multiple independent WikiLeaks-like websites), but the statement that WikiLeaks is only interested in Anti-America leaks does not hold in my opinion. See for example the following leak about Russia's mass surveillance apparatus:
Or let us quote
"They released Syrian/Russian email exchanges, I don't know if it led to anything extremely controversial. In an interview, Assange said the main reason they don't receive leaks from Russian whistle-blowers is that the whistle-blowers prefer to hand over documents to Russian-speaking leaking organization.
And Wikileaks doesn't have Russian-speakers in their organization. You can tell your friend that if he or she wants Wikileaks to release damaging Russian documents, then he or she should hack the Russian government and give what they find to WL."
Or have a look at
you can find a list of various counter-arguments to common criticisms of WikiLeaks.
It's a really sad thing you're being downvoted, as I made a similar comment about a year ago  in defense of WikiLeaks to over 30 upvotes. HN seems to be getting more and more active with their downvotes towards information that doesn't fit their current perspective.
 = https://news.ycombinator.com/item?id=13816762
>  = https://news.ycombinator.com/item?id=13816762
I don't see myself as a defender of WikiLeaks. It is, for example, hard not to admit that the Democratic National Committee email leak and the exchange between Donald Trump Jr. and WikiLeaks (at least to me) have some kind of "smell".
My argument rather is:
a) the position of WikiLeaks (if one exists) is far more confusing and sometimes self-contradictory than it looks on the surface (in this sense, I am somewhat sceptical of both the "WikiLeaks defenders" and the "WikiLeaks prosecutors").
b) do not trust any side as "authoritative". Consider multiple different sources - in particular the ones that do not agree with your personal opinion - and form an opinion on the whole story.
I just threw you into that category for the purpose of my comment as I believed it was a perceived defense of Wikileaks that you were being downvoted for, rather than something else regarding the content of your comment.
>They are only interested in Anti-America leaks, not all leaks.
Any evidence to prove this? If you go to WikiLeaks there are leaks from all around the world. I personally am very glad WikiLeaks exists, doing the work they do, and they have still never been proven incorrect or fraudulent.
By the way, Anti-American leaks are also Pro-American leaks, even if they may not be favorable to your own political beliefs.
If you go on Wikileaks right now and search "Russia" for 2012-2018, you get mostly news about how western plans are going to fail and how powerful Russia's military is. If you search for Russia in the "Syria Files" section, you get no results. How is that even possible?
The media that has been performing such benchmarks for decades, and has thus earned a large and faithful audience, can organize and simultaneously publish the relevant benchmarks in the US, complete with an unapologetic disclosure right at the top as to why this is happening. Put Intel into the position of suing every significant member of the US tech media and create a 1st amendment case, or perhaps try to single out some member and create a living martyr to whom we can offer our generous GoFundMe legal defense contributions over the decade it takes to progress through the courts. Either way, Intel creates a PR disaster for itself.
Sack up and call their bluff. There are MILLIONS of people that will stand behind that courage. At some point the shareholders will feel it and this nasty crap will end.
It's contract law and tort law that's relevant. Intel's defective product, Intel's ToS, and maybe fair use. (As the tester could get the patch without an explicit license and try to claim fair use.)
I can imagine some scenarios:
- A company is bought by another one which has a different legal policy and voids some legal restrictions even retroactively
- By some other way, the "illegal" knowledge in the paper becomes "public knowledge", so that company lawyers will have a hard(er) time convincing a judge that the respective paper poses any danger and thus that the author should be prosecuted. For example: after these microcode patches, people will of course privately benchmark the performance differences. So some years later, the rough order of the performance differences is kind of public knowledge.
- With time, companies have much less legal interest in suing people for disclosure. As long as there is a high commercial interest, companies can be very "sue-happy". On the other hand, each of these lawsuits is a PR risk for the company. If the respective product becomes outdated, the economic advantage of a lawsuit is much smaller than the probable PR loss.
- The researcher now (later on) lives in a different country with a different legal system where there is much less legal risk. For example, in Germany it is very restricted what can be included in general business terms (as opposed to the USA).
In all of these cases, it can become much less legally dangerous to disclose the real identity of the author. On the other hand, there exists an incentive (academic credit) to do so.
E.g. "intel-microcode-legacy" and "intel-microcode" would be more diplomatic (disgustingly so IMO).
And one could argue that every package in non-free would deserve a "-legally-restricted" suffix.
The DFSG is a set of guidelines that define what can be considered true FLOSS.
The reason is to protect users from legal risks.
The argument from Intel is that the new changes don't actually affect distributions, as distributions are given the right to redistribute the microcode in the license (and this is separate to allowing third parties to publish benchmarks). Either that, or the lawyers at Red Hat and SUSE missed this somehow (though this is unlikely -- at SUSE all package updates go through a semi-automated legal review internally, similar to how openSUSE review works).
I do understand Debian's ethical issue with this though, and I applaud them for standing up for their users (though unfortunately it does leave their users vulnerable -- I imagine it was a difficult decision to make.)
[I work at SUSE, though I don't have any hands-on knowledge about how this update was being handled internally. I mostly work on container runtimes.]
As well as not redistributing it to mirrors, as they are unable to ask mirrors to accept a new license.