Intel Performance Hit 5x Harder Than AMD After Spectre, Meltdown Patches (extremetech.com)
319 points by mepian on May 20, 2019 | 107 comments



Things just keep getting better and better for AMD... it’s like once you have a little bit of luck things just keep snowballing from there.

They are on track to have an amazing year, with CPUs that may take a very significant chunk of the cloud compute away from Intel, and then maybe even the performance crown. It's fascinating that a company that almost died just a few years ago is now a big contender on multiple fronts.


This reminds me of AMD's launch of the 1.4GHz Athlon Thunderbird. People were shocked when benchmarks showed AMD had dethroned Intel.

>"The Athlon 1.4GHz is far and away the fastest PC processor you can buy. Well, OK, it's not that much faster than the 1.33GHz Athlon, but it's a heap faster than Intel's 1.7GHz Pentium 4. Through a range of tests measuring a variety of applications and abilities, every one of our Athlons from 1.2GHz to 1.4GHz beat out the Pentium 4 with regularity."[1]

[1] https://techreport.com/review/2523/amd-athlon-1-4ghz-process...


And history is repeating again. It was a perfect storm: the Pentium 4 required Rambus DRAM, which nobody wanted and which was expensive. Later Pentium 4s switched back to DDR, but that was slower, and the Pentium 4 itself was already quite slow / not much of an improvement. Then AMD made the Athlon Thunderbird, and later AMD64.

It is happening again: Intel has been sitting on 14nm for 5 years, now has many security issues, and won't be able to react with 10nm until early next year; even the 10nm CPUs will likely not have a hardware fix. Last time this happened, Intel started a price war and spread FUD against AMD.

The timelines somehow always sync up: AMD executes its plan to perfection just as Intel messes up. And then Intel wakes up. Last time it was Pat Gelsinger who saved them (and was later forced out of Intel), and then somehow AMD made some missteps. This time Intel has Jim Keller; I hope Dr. Lisa Su won't repeat the same mistake again.


Not to mention there's a fair bit of evidence suggesting that Intel won't even have 10nm ready next year either.


You could also run dual AthlonXPs in SMP motherboards (even though it was only the AthlonMPs that were officially supported).


It wasn't just luck. It was Jim Keller. https://en.wikipedia.org/wiki/Jim_Keller_%28engineer%29


Given that he was only at AMD for 3 years and now works for Intel, I wonder if AMD will be struggling again after Zen 2/3 -- I seem to recall he designed "multiple" Zen versions before leaving, but I can't find any comments from AMD to that effect.


Regardless of who designed it, I'm concerned about how much progress Zen3 (and so on) can make. Zen was a massive leap, Zen 2 looks like it'll be a great upgrade, but can it go on like this? If not, what happens next? "AMD didn't absolutely blast Intel out of the water, so let's all go back to buying nothing but Intel."


Bulldozer was a mistake that cost AMD years of advancement. They tried to do what Intel had done a few years before: design for clock speed and nothing else, figuring the 4GHz ceiling would be easy to break.

Intel did this with the P4's "NetBurst" architecture. They hit a frequency wall that made their new, deeply pipelined CPU worthless. This is when AMD caught up last time, with the Athlon series.

Intel actually went back to the design of the Pentium III!! With higher IPC, a few upgrades lifted from the P4, and new processes, it gave birth to the Core series.

The Intel processors we have today still share more design history with the P3 than the P4. And since then, Intel has focused on IPC over clock speed.

The crazy part was AMD making the same mistake years later with Bulldozer. Makes me wonder if the remedy was the same... Go back and update the Athlon cores.

The ancient Athlon/P3 IPC is amazingly good compared to today's chips if you scale by clock speed and core count -- perhaps half of today's, which is impressive for the age of those designs. All these bugs affecting over a decade of CPU designs tell us these chips share a lot of the same logic, if not entire blocks left unchanged for more than a decade.


I'm typing this on a Bulldozer system. It is still fine for what I need it for - mostly compiling software, and provided a lot more cores than the equivalent Intel CPU at the same price point.


Nobody's saying they were unusable. But like--I do video production and was pricing out my first live video mixer around that time. I was ready to throw down for a Bulldozer build immediately because of how wide the processor was and had to do the proverbial "hard pull up away from the mountainside" because of how bad the throughput was.

Instead, ended up going i7-875K -> i5-3570K -> Ryzen 1600 -> Ryzen 3700X (probably, this year). So like, I'm glad to see AMD back in the game, but Bulldozer was pretty rough.


I'm still using mine for gaming daily, obviously it's not high end, but it was $99 like...9 years ago? I don't remember when actually. I've been computing since I was young and it's the best value I've ever purchased. I can still do just about anything I want to right now.


> The crazy part was AMD making the same mistake years later with Bulldozer. Makes me wonder if the remedy was the same... Go back and update the Athlon cores.

I mean, at the time the only thing on the minds of consumers was those GHz. So the only way to stay in the market was to chase those GHz, even if it meant some long-term pain. After hitting the 3GHz/4GHz frequency walls, consumers began to realise that processors are distinguished by more than frequency (obviously the lay person still doesn't quite understand, but they're more likely to buy based on i7 > i5 than 3GHz > 2.5GHz these days).


> I mean at the time the only thing on the minds of consumers was those GHz.

No, AMD dispelled that long before Bulldozer, and even Intel had abandoned the GHz game years ago with the Core lineup.

You're thinking late 90s; Bulldozer happened in 2011. People got over the GHz mindset in the early-to-mid 2000s, when the market told 'em we're now going to increase core count instead, and before that when AMD labeled Athlon CPUs with ratings like 1800+ ("it's not 1.8GHz, but it's as fast as one!").


Hilariously, the one takeaway I have from that article is that no one wants to lead the autopilot arm of Tesla.


It's certainly not a great sign that, less than a month before Musk parades around claiming they've got self-driving solved, the head of the project jumped ship.



maybe they can put that department on autopilot :O


And the expiration of a few patents, like the ones about simultaneous multithreading. Zen is AMD's first architecture to implement SMT; Bulldozer had CMT (https://www.extremetech.com/wp-content/uploads/2015/04/zen.j...).

https://patents.google.com/patent/US3728692

https://patents.google.com/patent/US3771138

https://scalibq.wordpress.com/2012/02/14/the-myth-of-cmt-clu...


Don't Intel and AMD have a fairly comprehensive patent cross-licensing agreement?

Not sure if SMT patents would fall under that, but it was my impression that AMD was relatively unconstrained wrt Intel patents due to this agreement.


The agreement can be read here https://www.sec.gov/Archives/edgar/data/2488/000119312509236...

I would say that only the instruction set is cross-licensed; implementation techniques and hardware interfaces aren't (AMD CPUs shall not be compatible with Intel sockets, for example). Intel's SMT is known as Hyper-Threading; that patent has expired too, but the name "Hyper-Threading" is still a trademark and the property of Intel.

https://patents.google.com/patent/US5361337A/en

https://en.wikipedia.org/wiki/Hyper-threading#History


Has he ever written about his management technique? He must be a great engineering manager.


There are some tidbits here about how he views his team and some stories: https://www.youtube.com/watch?v=SOTFE7sJY-Q


Yeah, once AMD was desperate enough to kick the MBA CEOs to the curb and finally put an engineer in charge, things started working better. Imagine that.


I hadn't realised he'd ended up at Intel! Will be interesting to see how much impact he has there.


From what I've read about Intel's corporate structure - don't hold your breath.


Oh he's the guy JP mentions every now and then when talking about AI.


> may take a very significant chunk of the cloud compute away from Intel

My understanding is that they don't have the manufacturing capacity to ship this many processors, even if data centre operators wanted to buy them.


>My understanding is that they don't have the manufacturing capacity to ship this many processors, even if data centre operators wanted to buy them.

That is the current situation, as they fab on 14nm with GlobalFoundries, together with their GPUs and APUs, so AMD is competing against itself for capacity. Once they move off GF to TSMC with 7nm Zen 2, things should [1] hopefully be a lot better.

[1] That is assuming the yields are good and the I/O chip's production isn't let down by whoever is fabbing it. The current guess is that it will still be GlobalFoundries for the I/O die.


Seems like a FUD-y non sequitur. Everyone isn't going to run out and completely replace their infrastructure with AMD overnight.

I doubt Intel could keep up much better.


I'm a fan of AMD, I hope they succeed. However, how do you figure things are getting better and better based on actual results?

Right now it looks like they're going to be reset back to 2017 numbers, losing the business gains they made in 2018. Their sales have fallen for the last three quarters in a row, quarter over quarter, and they barely turned a profit last quarter. Sales imploded by 23% last quarter year over year. When does the amazing year start?


You can't compare 2018 numbers to 2019 numbers without taking into account the intervening cryptocurrency crash. Setting that aside, their CPU and GPU sales have actually improved, even though both are nearing the end of a generation (and so are the console SoCs). The fact that AMD's still profitable even without cryptocurrency GPU sales is a pretty strong indicator of financial health, the likes of which they couldn't even dream of a few years ago.


Look at their own disclosures for 2019 H2 from their earnings statement - don't do your own estimate based on a linear projection of the last 3 quarters.

2019 is going to be an absolutely phenomenal year for AMD.


It starts when they get free positive publicity from scandals like this.


Once Rome and Ryzen 3000 are shipping. They have been hyped so much and everyone is waiting for those releases; there's no point buying now when whatever you buy will (probably) be severely outdated in a few months.


I'm much more excited about RISC-V and MIPS myself, because they're actual open architectures.


> Things just keep getting better and better for AMD

I'm happy to hear that AMD does better with this. I'd already decided that I won't be buying Intel CPUs anymore, so I like that AMD is a reasonable replacement.


Persistent memory is going to be a big thing in the cloud computing space in the coming years. Without an answer to Optane, AMD is going to have a tough time competing with Intel.


That is assuming Optane is the only persistent memory solution. Micron, Samsung, et al. all have something similar (whether by technological design or by function) in the works.

Optane used to be very attractive when it promised all that performance at a cheaper-than-DRAM price. But now DRAM prices have sunk, and we will have to see whether 2nd-gen Optane delivers what Intel promised.


Persistent memory can greatly reduce application start-up time. In SaaS usage that's pretty important. Downtime during software updates is a big headache.


At some point we need to start pointing the blame directly at Intel here. It’s becoming more and more obvious that this isn’t a problem where modern CPU architecture didn’t anticipate a security attack vector — but more that Intel took shortcuts with security infrastructure on the chip in order to improve their IPC count.



Not exactly the right link, that is for the server chips. This one is for the consumer CPUs: https://www.phoronix.com/scan.php?page=article&item=mds-zomb...


As far as I understand, the solution to this in sandboxes such as the JS world is simply to deny anyone timers with a resolution fine enough to reveal cache misses. How much does software really rely on timers with this resolution? What would it mean if CPU manufacturers simply gave up and said "to mitigate side channels, you can't have a clock so accurate that it lets you measure whether X has happened, because knowing that is equivalent to reading arbitrary memory"?

Or, instead of detecting various things and flushing out sensitive data on context switches, could the CPU just add noise to the timers instead? I'm guessing this is a complete no-go, but I'm wondering why.


Adding noise just makes side channel attacks slower, it doesn't stop them; there are statistical techniques to extract the original signal from the signal plus noise, given enough samples.

For a simple example, imagine you want to distinguish a 1ms difference in the execution time of some operation. Without noise, you just have to time it; now let's randomly add either nothing or 1ms to the operation time, so the "fast" operation will take either +0ms or +1ms, and the "slow" operation will take either +1ms or +2ms. But if you repeat the same operation several times and average the execution times, the "fast" operation will average +0.5ms and the "slow" operation will average +1.5ms. As you can see, in this simple example the random noise averages out to a constant offset, and the original 1ms signal is still clearly visible on top of it.
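If you want to see the averaging argument in action, here's a toy Python simulation (all the numbers are made up; it just models a 1 ms secret-dependent difference plus 0/1 ms of random jitter):

  import random
  import statistics

  def measure(secret_bit):
      # Hypothetical timings: the secret-dependent path costs 1 ms extra,
      # and the "defence" adds 0 ms or 1 ms of random jitter on top.
      base = 5.0 + secret_bit * 1.0
      jitter = random.choice([0.0, 1.0])
      return base + jitter

  for bit in (0, 1):
      samples = [measure(bit) for _ in range(10_000)]
      print(bit, round(statistics.mean(samples), 2))

  # Prints roughly "0 5.5" and "1 6.5": averaging strips the jitter,
  # so the 1 ms difference survives.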


Yes, it would be extremely difficult to make it so slow that it's not a viable attack vector. As long as the noise has a known distribution it's not random enough, because the mean is the unknown value. I remember too little statistics to know whether it's possible to add randomness such that the sample mean doesn't converge to the underlying value. It does sound impossible, at least.


Cauchy distribution. https://en.m.wikipedia.org/wiki/Cauchy_distribution The mean of a sample of cauchy(m,s) variables has that same cauchy(m,s) distribution, regardless of sample size.

Meaning no matter how many samples you take, the mean (of the samples) is just as variable as an individual sample.

The mean and variance of the distribution (equivalently, of infinite samples) are undefined. The Cauchy is equivalent to a t distribution with 1 degree of freedom.

Infinite variability will be undesirable for lots of reasons though.
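A quick stdlib-only Python check of that property (sampling via the inverse CDF; the specific numbers are just illustrative):

  import math
  import random
  import statistics

  def cauchy(loc=0.0, scale=1.0):
      # Inverse-CDF sampling of a Cauchy variate
      return loc + scale * math.tan(math.pi * (random.random() - 0.5))

  singles = [cauchy() for _ in range(2_000)]
  means = [statistics.fmean([cauchy() for _ in range(1_000)]) for _ in range(2_000)]

  def iqr(xs):
      # Interquartile range: a spread measure that exists even though
      # the Cauchy's variance doesn't.
      xs = sorted(xs)
      return xs[3 * len(xs) // 4] - xs[len(xs) // 4]

  print("IQR of single draws:      ", round(iqr(singles), 2))
  print("IQR of means of 1k draws: ", round(iqr(means), 2))

  # Both IQRs come out around 2: averaging a thousand Cauchy samples
  # doesn't concentrate the estimate at all.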


The browser makers also had to disable shared-memory JS arrays (SharedArrayBuffer) until other mitigations were in place. Having a single thread that continuously increments a shared value serves as a good enough approximation of the CPU clock for these exploits.


If you really want to, you can "manufacture" a high-resolution timer pretty easily with thread spinning.
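A toy Python sketch of the counter-as-clock idea (in a real browser exploit this would be a web worker bumping a SharedArrayBuffer; under CPython's GIL the resolution is far too coarse to attack anything, so this is purely illustrative):

  import threading
  import time

  ticks = 0
  running = True

  def spin():
      # The "clock": a thread that does nothing but bump a shared counter.
      global ticks
      while running:
          ticks += 1

  threading.Thread(target=spin, daemon=True).start()
  time.sleep(0.1)                      # let the counter get going

  before = ticks
  sum(range(10_000_000))               # the operation being "timed"
  after = ticks
  running = False

  print("operation took ~", after - before, "ticks of the home-made clock")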


I suppose then "adding timing noise" here would also require making sure instructions don't have fixed and dependable execution times, because then you can just manufacture a clock by incrementing a number and knowing how many clock cycles the increment is. So an increment cannot be a known number of cycles. It does sound messy.


Adding random delay makes timing attacks more costly but not impossible. Any random noise can simply be filtered out by performing the attack multiple times and averaging the measurements. This even works over the network with milliseconds of random delay.


But then you'd have to increase execution times, and here we are ...


That's true of course. So basically adding timing noise is equivalent to adding artificial slowdowns. The only upside, I suppose, is that it might solve all timing side-channel attacks in one go: it's not 3% for one and 4% for the next and so on, but a one-time cost to disable timing as an attack vector.


Adding randomness doesn't solve the issue, it just slows it down somewhat. Fast operations are still going to be faster on average, etc.


I am excited. Can you give an example? Also, is there any use for it in system software?


Proper dumb-ass question here, so please forgive.

When you talk about timers and resolution, what do you actually mean? When I hear timer, I think about setTimeout, when I hear resolution, I'm thinking about screen resolutions.

Is that what you mean, or are you referring to other things?


A timer simply reports the current time. In this context, people are using timers to calculate how long an operation takes.

Resolution is the precision with which the timer reports the time. For example, it could report seconds elapsed (e.g. 8 seconds), it could report milliseconds (8.432 seconds), it could report microseconds (8.432389 seconds), etc.

Attackers want high resolution timers so that they can distinguish cache hits from misses (8.432389 seconds vs 8.432367 seconds, for example).
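To make that concrete, here's a tiny Python sketch of what coarsening a timer does (the 1 ms cutoff is arbitrary; real browsers picked various values after Spectre):

  import time

  def coarsen(t_ns, resolution_ns):
      # Round a timestamp down to a coarser resolution, similar in spirit
      # to what browsers did to performance.now().
      return t_ns - (t_ns % resolution_ns)

  start = time.perf_counter_ns()
  sum(range(10_000))                   # a short operation (tens of microseconds)
  end = time.perf_counter_ns()

  print("nanosecond timer:", end - start, "ns")
  print("1 ms timer:      ",
        coarsen(end, 1_000_000) - coarsen(start, 1_000_000), "ns")

  # With the 1 ms timer the short operation usually reads as 0, which is
  # exactly why attackers want the high-resolution one.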


Awesome, thank you!


In this case, a higher resolution timer might be accurate to microseconds, while a lower resolution might be accurate down to milliseconds. Resolution here is about how low you can drill down before it becomes useless.


Does anyone have the full kernel command line to turn all of the mitigations off? I don't need this on my private development server which runs all code either written by me or under my control.



At least based on my tests it is not possible to remove the impact of the mitigations with boot-time flags: this "disables" the mitigations, but the code paths, e.g. for system calls, are still quite different from and slower than pre-mitigation kernels. Pre-mitigation I measured just over 100 ns for a bare do-nothing system call, while after it is over 1,000 ns. However, if you disable mitigations on the boot command line I still see over 500 ns, so it is not close to the old performance.

You'd have to run an old kernel to really get pre-mitigation performance. I tested this just after Meltdown + Spectre 1-3, so things may have gotten better or worse since then; I'm not sure.
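If you want to reproduce that kind of measurement, here's a rough Python sketch (interpreter overhead is included in the per-call number, so treat the absolute values with suspicion; a tight C loop is the proper way to do this):

  import os
  import time

  # getppid() is a cheap, uncached system call, so this loop mostly
  # measures syscall entry/exit cost plus Python call overhead.
  N = 1_000_000
  start = time.perf_counter_ns()
  for _ in range(N):
      os.getppid()
  elapsed = time.perf_counter_ns() - start

  print("approx. cost per call:", elapsed // N, "ns")

Comparing the same loop across kernels or mitigation settings still shows the relative slowdown being discussed.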


That's why the average Joe doesn't like Linux. How am I going to use this?


Maybe you'll prefer the Windows version:

  reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 3 /f

  reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f



Clearly windows is never going to be ready for the desktop.


Haha! Made my day.


Absolutely. I was talking to an average joe the other day and he was complaining how hard it was to roll back Spectre, Meltdown, and ZombieLoad mitigations to improve his compilation speed.


The "average joe" shouldn't be doing this.


> That's why average joe doesn't like Linux

Having seen the attempts at making Linux friendly for theoretical frictionless spherical Joes in vacuum, I'd rather just have my 1-2% market share GNU/Linux OS for unix greybeards, warts and all. The friendly GUI shell on top of Linux for average Joes is called Chrome OS and Android.


In Linux 5.2 (the next major release) the "mitigations=off" flag should cover it all. Up until recently there wasn't really a need for such a catch-all, so it hadn't been made.


You put it in the kernel options, in the GRUB config files.
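For example, on a Debian/Ubuntu-style setup (paths assumed; this also needs a kernel new enough to understand the flag):

  # /etc/default/grub
  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"

Then run "sudo update-grub" (or regenerate grub.cfg with grub-mkconfig on distros that don't ship the wrapper) and reboot.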



mitigations=off is the only parameter you need.


What are we supposed to draw from those graphs? The AMD part is already much slower than the Intel part.


I mean, a 15% reduction in effective computation is not nothing just because AMD's CPUs aren't faster. The 8700K has two fewer cores and costs ~$100 more, so now it is not a very compelling purchase if you plan to turn mitigations on.

Many people bought the 8700K for a level of performance that they now have to compromise on: 1) higher performance and a small chance of exploits, or 2) lower performance and fewer exploits.

This dilemma did not exist until this year, and now everyone is paying for it except Intel.


The point is that it isn't comparable. It is likely that one of the reasons the measured Intel parts are so much faster is insecure optimizations. It would have been a much more interesting comparison to start from two SKUs with similar performance.


Isn't that what they've done? The 6800K had been slightly faster than the 2700X, now it's significantly slower. The 8700K had been significantly faster than the 2700X, now it's slightly slower (and yet more expensive). The same for the 7980XE and the 2990WX.


> It is likely that one of the reasons that the measured intel parts are so much faster is because of insecure optimizations.

I doubt it. The cost of workarounds is probably far worse than the time saved from using unsafe methods.


I wonder if there's a sort of 'herd immunity' effect that would reduce your risk of not running the mitigations. Since the assumption is that this class of exploit is known and patched, it might be less attractive for exploit makers to target. Given that, you can actually disable the mitigation to get the performance back without suffering a breach.

I wouldn't do this at a data center or a nuclear powerplant, but for personal workloads like gaming this might be an okay tradeoff.


> Since the assumption is that this class of exploit is known and patched, it might be less attractive for exploit makers to target. Given that, you can actually disable the mitigation to get the performance back without suffering a breach.

This is an argument that anti-vaxxers use. And it works -- until enough people do it to compromise herd immunity. Then, suddenly, it doesn't work and you have a Very Bad Time.


This is basically one of the arguments why Linux doesn't have as many exploits. If Linux were in more widespread usage, then more focus would be put on exploiting it and hence running it without e.g. anti-virus or other mechanisms would be more dangerous. So Linux's relative market share might protect it more than many security choices in the OS.

(I don't want to claim that any of the reasoning here is true, but I just wanted to point out the parallel to what you're saying.)


Well, it remains to be seen if someone can even mount an attack against Firefox or Chrome. The researchers (RIDL) only managed ~1 byte/s against a detached SpiderMonkey engine, with the old high-resolution timers enabled (these have since been removed). They were not able to mount an attack against Chrome. I have my doubts that we will ever see attempts at widespread exploitation against browsers. So not running mitigations for personal use/gaming would probably be absolutely fine.


Google attempted to fix Chrome against Spectre but gave up, so what you're saying about no feasible attack is not true.

See: https://v8.dev/blog/spectre

> we quickly discovered that software mitigation of all possible leaks due to Spectre was infeasible.

Are there many other security issues that are easier to exploit with potentially higher impact? Sure. Does this mean that Spectre is fixed or can be mitigated in software? No. It's a bit like the formerly theoretical timing attacks against TLS: attacks only get better.


As said in that very blog, no attacks have been observed. Spectre has been out for a year and a half. Can you show me any indication of a feasible attack on Firefox or Chrome that has a PoC, or at least proof that it can be utilized for anything remotely useful?

However this conversation might be meaningless, as it seems we have a different definition of what constitutes a feasible attack.

I still maintain my opinion that not turning on mitigations is safe for personal computing.


This conversation reminds me of discussions around cryptographic hash functions:

https://slideplayer.com/slide/12035043/69/images/28/Reaction...

Google clearly considered it an important enough issue to spend considerable resources on trying to mitigate Spectre and in the end only gave up because they didn't find a feasible way to do so. They emphatically didn't conclude that it's fine because attacks are impractical.

This attitude was learned the hard way though: about a decade ago the PoC or gtfo attitude was prevalent among browser makers and large tech companies. Theoretical vulnerabilities were dismissed if no immediate proof of concept was provided.

What changed this was a bunch of security/cryptographic vulnerabilities. MD5 was known to be theoretically weak for years and years, but when researchers minted their "can break every SSL/TLS connection" intermediate certificate to finally make browser vendors move on the issue, it was too late.

You see with systemic issues, in cryptography or hardware, by the time you actually demonstrate a PoC, things are way too late: it takes years if not half a decade (as in MD5's case, or with older TLS versions) to deprecate insecure things, if you look at the timelines.

So for issues in fundamental building blocks, it's more or less irrelevant if there is a working PoC today or not: if we don't move to fix the underlying issue and start acting on a roadmap to move away from insecure things, people _will_ come up with a working exploit that allows practical attacks. If mitigation is only attempted at that point then we're being left vulnerable for years to come.


>it's more or less irrelevant if there is a working PoC today

By that logic, all current crypto is already broken and we should only use quantum safe crypto.

You guys' threat model for your personal computers is way beyond most of the planet's, so I will concede and agree that you should not use browsers or run untrusted code until new CPUs are released. That is pretty much the only thing that will match your threat model.


"all possible leaks" is not the same as "no feasible attack"


I do have that CPU but I bought it because it was really close to the X series chips (feature wise) without the extra dime.


I wonder what other optimizations Intel has made that could turn out to be security threats some day...


If I'm not wrong, these are kernel patches. Intel is adding hardware (they call it in-silicon) mitigations to Meltdown (in Coffee Lake [1]) and Spectre (in Ice Lake[2]).

This should help increase performance again, right? I'm actually waiting for Ice Lake before buying a new laptop.

[1] https://www.tomshardware.com/news/intel-9th-generation-coffe... [2] https://en.wikipedia.org/wiki/Ice_Lake_(microarchitecture)


What’s the cost of interprocessor communication versus clearing these caches so aggressively?

I wonder, as cores continue to increase, if another solution will present itself in the form of segregating traffic per processor.


Dispatch units are slowly increasing in number, and there are enough registers to octuple the threads per core.

It isn't hard to imagine a future where multithreaded applications with write disabled executable memory would be faster to the point where browsers will stop using JIT as we know it.


Those who care already do this. NUMA takes advantage of memory locality by itself, and processes can be pinned to cores manually as well. But even reading memory within your own process can be an issue (it is for browser tabs).


The browser is providing OS services so if those services are “broken” they have to be fixed twice. Wouldn’t one process per origin and letting the OS do its job reduce the surface area here?


Admitting all the issues with this 'attack' as given, I can't help thinking this is only an issue because CPUs use the old 'secrecy as security' model. E.g. if we add to each thread an encryption key used to interpret every memory access (just an XOR with a rolling mask), then access to somebody else's memory becomes access to an encrypted document with a key you don't know. No longer a problem?


These are really timing attacks. It's not a matter of directly reading the data, it's a matter of being able to deduce what's there based on how long it takes to do it.


Yes; but that would be irrelevant if they didn't depend on secrecy for security, right?


Ah, you're proposing a solution for Meltdown rather than Spectre. The difference is that with Meltdown there is speculative access to memory that isn't even mapped to the original process.

Memory encryption might work as a way to solve that, but we already know how to solve that -- do what AMD and others have been doing the whole time and don't continue speculative execution through a page fault.


...until somebody finds another vector to reveal the value of Kernel memory.

The mantra is, secrecy is not security. The User/Kernel dichotomy flagrantly ignores this principle.


The title is clickbait, because 5x does not matter if AMD's difference is under 1%. Also, this is not the original source.


Clickbait because it could be misleading in a counterfactual?

15% vs. 3% is pretty meaningful. 15-16% is comparable to the gap between 4th and 9th generation Intel processors at the same frequency. And in that time turbo has gotten better but base clocks have dropped.


Also, I think fully mitigating the security issues on Intel processors requires applying all the patches and disabling hyperthreading, at which point you're seeing more like a 20+% impact on performance. In particular, from what I can tell, the microcode and software patches for RIDL and ZombieLoad only flush the state on context switches and cannot protect against the hyperthreading-based variants of the attacks.


Loss of hyper-threading is only "20%+" because you put a plus there. Disabling hyper-threading in synthetic workloads usually costs on the order of 40% of performance. Depending on how heavily the hyper-threads are used, it can actually be higher.

It's unlikely to be lower than 40% unless you're bottlenecked somewhere other than the CPU (memory speed or heavy CPU cache invalidation).


The article says...

"Disabling [hyperthreading] increases the overall performance impact to 20 percent (for the 7980XE), 24.8 percent (8700K) and 20.5 percent (6800K)."

...so I think a performance impact of "20%+" (i.e. 20% or more) is fair - for the 8700K it's 24.8%.


Yeah, it depends on your workload. If you are playing video games with well-optimized C++ code that also takes advantage of SIMD etc., then it's not going to matter a lot. However, if you're running a Python web server that causes a cache miss every other line of code, then this could actually result in a 40% loss of performance.


The "5x" difference is a measure of how wrong chip designs from the two companies are.

Intel being five times more careless and/or incompetent and/or malicious has implications beyond practical performance degradation.



