I'd love to be a fly on the wall in the board meeting room.
I know there's a non-trivial mobile performance penalty, but a fourth gen mid-range i5 I got on eBay for $30 has nearly the performance of the 8750H in last year's Dell XPS (my work laptop). Even when you get away from mobile, compare two i7 chips that were released five years apart for basically the same price, and userbenchmark.com shows just a 21% overall performance difference in them:
This window shrinks even further when you look at what overclockers get out of the older chips with fancy cooling setups. But yeah, when else in the history of personal computing has five years of progress yielded such small gains?
I know eventually I'll upgrade my system for real, probably to something AM4-based, but when I saw these numbers, it became a no brainer to just bump up the CPU on an old LGA1150 system I had lying around rather than spend a bunch of money on an all-new motherboard, new RAM, etc.
Intel really has got to be in an internal panic. The last time AMD pushed Intel this hard, Intel went to extraordinary tactics to push AMD out of the market. I wouldn't be surprised if pushing their workers well beyond safety limits was in the current playbook.
There's a funny series of LTT videos from this past fall where they bring up a 2008 Skulltrail mobo and try running some modern games on it, with and without overclocking:
The last big microarch jump for Intel was Skylake (6th gen); the 7th, 8th, 9th, and some of the 10th gen parts (not all of them, but they needed "10th gen" for marketing reasons) are just incremental improvements over it. Skylake brought a real 15-20% increase in performance, but took a few shortcuts (Meltdown and Spectre). From then on, you got a 5-10% performance improvement per generation (let's round it to 7%). That's not bad per se, but it was done out of comfort. The "cursed" 10nm node that never arrived is also part of why Intel stalled: they insisted on "tick-tock" and were unwilling (or unable?) to deliver a new microarch on 14nm+++, the super-refined node. The extra refinement did let them pull clocks higher with each new gen, getting close to the 5 GHz barrier and, with the new turbo modes (which seem more marketing than substance), even past it.
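To put those per-generation figures in perspective, here's a quick back-of-envelope compounding sketch. The 7% figure is the comment's own rounding, not an official benchmark:

```python
# Rough compounding of per-generation gains, using the figures from the
# comment above (~7% per incremental post-Skylake generation). This is a
# sketch, not benchmark data.
def compounded_gain(per_gen_gain, generations):
    """Total relative speedup after `generations` releases at `per_gen_gain` each."""
    return (1 + per_gen_gain) ** generations

# Four incremental generations (7th through 10th gen) at ~7% each:
total = compounded_gain(0.07, 4)
print(f"~{(total - 1) * 100:.0f}% cumulative over Skylake")  # ~31%
```

So even four "boring" generations compound to roughly a third more performance, which matches the small-but-nonzero gap people see between old and new chips.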
And that's how you get to Intel's situation in 2020: no 10nm node, no real new microarch in the last 5 years.
As for why we never saw something like this before... I think we never had a situation where one competitor had such an advantage (the microarchs before Zen on AMD's part were bad, and that hit them hard for a long time) that they could pull these duckish moves for so long. For a long time, Intel chose not to improve their offering, because they were so far ahead of the competition that they could keep milking the Whatever-Lake microarch family. They'd blame it on the difficulty of extracting more IPC, or on Moore's Law coming to an end (or not coming to an end, depending on what marketing was trying to sell).
In the end it took AMD+TSMC offering superb processors for Intel to slash their prices in half, and even then they're still being crushed performance-wise in desktop, server, and now mobile too. And all Intel has to show for it is AVX-512, which, after being gatekept for years in the server market, they've finally released on consumer processors - but hey, nobody's using it because there was no support!
Chips fabbed on 10nm have been shipping in volume since last summer. The Sunny Cove CPU cores found in Ice Lake chips, fabbed on the 10+ node, average roughly 16% higher performance vs. Skylake, clock-for-clock, before accounting for improvements in memory bandwidth. Tiger Lake (fabbed on 10++) will be shipping this year, alongside Ice Lake Xeons.
Extract from Anandtech comparing Comet Lake vs Ice Lake:
Some of these tests rely heavily on turbo, such as the PCMark tests, and so the Comet Lake i7-10710U can hit 4.7 GHz on the latest variant of Skylake, while the Ice Lake i7-1065G7, despite its higher IPC difference, can only do 3.9 GHz. This means in a lot of bursty workloads (which a lot of business workloads are), the Comet Lake wins and we see that play out.
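The Anandtech point above is easy to sanity-check with a rough peak-throughput estimate: relative IPC times turbo clock. The ~16% IPC uplift figure comes from elsewhere in this thread, and real workloads vary widely, so treat this as a sketch rather than a benchmark:

```python
# Back-of-envelope: peak single-thread burst throughput ~= relative IPC x turbo GHz.
# The 1.16 IPC factor for Sunny Cove is an assumption taken from the thread above.
comet_lake = {"ipc": 1.00, "turbo_ghz": 4.7}   # i7-10710U (Skylake-class core)
ice_lake   = {"ipc": 1.16, "turbo_ghz": 3.9}   # i7-1065G7 (Sunny Cove core)

def burst_score(chip):
    return chip["ipc"] * chip["turbo_ghz"]

print(burst_score(comet_lake))  # 4.7
print(burst_score(ice_lake))    # ~4.52, so the higher-clocked part wins bursty work
```

Which is exactly the outcome described: the IPC advantage isn't enough to overcome an 800 MHz clock deficit in short, bursty workloads.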
As I said before: I really hope Intel sorts this mess out and comes along with something that can compete with AMD's offering. Otherwise we might see the tables turn, with AMD eventually stalling progress just because they aren't pushed hard enough by the competition.
They seem to have learned from their 10nm mistake, so they're apparently planning to have + and ++ versions of the following nodes, that could handle production of a new microarch if the next node isn't ready, but only time will tell how that goes along.
Last month, I started getting some kernel_trap BSODs, so I reduced the voltage a bit and also brought the clock speed down to 4.7 GHz.
I don't see a reason for me to upgrade my CPU for another few years unless it burns out or something. The only component I've upgraded is my GPU, from a GTX 670 to a 980 TI.
The newer chips may be great for specific needs, but for gaming and light development work I don't see any reason why I need to upgrade. Sure, a quad core from 2012 isn't the hottest thing anymore, but along with my 980TI it runs all my games at 2k without any issues.
Intel is a very odd company, and for all that there are truly brilliant people there, it feels increasingly like the IBM of 2003.
Probably explained by: https://www.youtube.com/watch?v=P4VBqTViEx4
On another point. I'm not sure how much of this is true, but I've read that most of AMD's chips are made from the same basic building blocks, and their different CPUs are mostly just a matter of how many of these blocks are stitched together, along with whatever tuning/other hardware. Such a design is probably very scalable, which simplifies manufacturing and reduces costs. Whereas I've heard Intel chips have more separate customizations, requiring more separate tooling for each of their chipsets, which complicates manufacturing...
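A rough illustration of that building-block idea, with the assumption (true of Zen 2, at least) that the common compute die ("CCD") carries 8 cores; the SKU names and counts here are made up for illustration:

```python
# Illustrative sketch of chiplet-style product composition: one common
# compute die is replicated to build the whole lineup. CORES_PER_CCD = 8
# matches Zen 2; the SKU names and CCD counts are hypothetical examples.
CORES_PER_CCD = 8

def build_sku(ccd_count):
    return {"ccds": ccd_count, "cores": ccd_count * CORES_PER_CCD}

lineup = {
    "mainstream": build_sku(1),   # e.g. an 8-core desktop part
    "enthusiast": build_sku(2),   # e.g. a 16-core desktop part
    "server":     build_sku(8),   # e.g. a 64-core EPYC-class part
}
for name, sku in lineup.items():
    print(name, sku)
```

The manufacturing win is that every tier uses the same small die, so yields and binning are shared across the whole stack, instead of each market segment needing its own large monolithic design.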
The bigger TSMC failure was on 32nm, which had to be killed outright.
Their problem right now is that they have an inferior process and their high margin datacenter parts are in critical danger of facing competition from devices fabbed by TSMC and (to a lesser extent) Samsung.
Their offerings in the market now, though, are still winning and Intel is making a ton of money.
So the net effect of a downturn is that the future market, the one where AMD is competitive and Intel needs to drop prices, is going to shrink in total dollars. That's a win for Intel, comparatively. It means the window into which AMD could have jumped to steal revenue is smaller, and Intel has more time to catch up.
Remember that most of the money in the CPU industry is made in the datacenter. AMD having better desktop parts doesn't affect Intel's bottom line much as long as Intel remains competitive in servers. (AMD still lags in laptops too, which are also a bigger market than desktops.)
Uh... because their revenues are down 20%, they have a hiring freeze, and are delaying all capital expenditures? That was my point: there may be technical reasons for people to move away from Intel's products. But the economic reality means that fewer customers will.
Intel is Xilinx + AMD (CPU + GPU) + Nvidia (competing with CUDA) + TSMC (and more, since Intel does everything in-house while TSMC relies on many partners forming an ecosystem) + Qualcomm's modem/WiFi + AI chips + autonomous vehicles + Broadcom's networking (consumer-to-enterprise networking controllers), and many, many smaller things that are not listed.
And since the R&D budget was set during the era when the current CEO was the CFO of the company, I wouldn't say Intel doesn't know much about it.
And if we are looking at fiscal results, Intel is earning exactly 10x the revenue of AMD. So in terms of dollars generated per R&D dollar, they are pretty much in line with industry standards. Now of course, in reality Intel is likely earning 80% of its profits from ~50% of the R&D budget, purely on fabs and x86 CPUs. The rest of the R&D budget still provides little return at the moment, but, for example, it has been a long time since anyone has had the potential to challenge Nvidia's CUDA or GPU/GPGPU dominance. And Intel's leaked gigantic GPU offering seems to be just that.
Having said all that, the other side of the argument is that there is currently no reason why AMD can't double, triple, or even quintuple its revenue with its existing R&D budget. Their EPYC line is still doing poorly by (my) standards, at less than 5% market share. Their desktop parts are doing well in the consumer market but not the OEM market (the majority of the desktop market), and laptops have barely started. And that is all due to AMD's (comparatively speaking) incompetence in sales and marketing. (You could probably argue that it was Intel's sales and marketing doing its job far too well.)
I would argue that fab capacity at TSMC is very much a limiting factor, putting AMD in the same spot as Intel: selling practically every working chip produced. This puts a cap on the number of units sold, and with that a soft cap on revenue. Doubling revenue at this point would mean doubling prices, which would be very unhealthy for their turnover.
The fab capacity problem is even worse when you consider that basically all current and upcoming AMD products (CPUs and GPUs for the PS5, laptops, PCs, servers, the new Xbox) are produced on 7nm or 7P, and therefore compete for the limited capacity available.
Intel performance will always be one step ahead of AMD, intel have faster tech in their pocket waiting to be released.
No, they delayed time and time again because the 10nm process utterly failed to deliver the necessary yields and performance to be viable. It's not some ingenious display of sandbagging - only now in the past year or so has the 10nm process yields risen high enough to make low-power chips viable, and it is still not viable for high-power desktop chips, and probably will never be viable.
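One common simplified way to see why low-power chips become viable on a struggling node long before big desktop chips do is a Poisson die-yield model: yield falls exponentially with die area at a given defect density. The defect density and die areas below are made-up illustrative values, not Intel figures:

```python
import math

# Simplified Poisson die-yield model: yield = exp(-area * defect_density).
# d0 and the die areas are hypothetical, for illustration only. The point
# is that at the same defect density, a large die's yield collapses much
# faster than a small one's -- so small mobile dies ship first on a bad node.
def die_yield(area_mm2, defects_per_mm2):
    return math.exp(-area_mm2 * defects_per_mm2)

d0 = 0.005  # assumed defects per mm^2 on an immature node
print(f"small mobile die (~100 mm^2):  {die_yield(100, d0):.0%}")
print(f"large desktop die (~250 mm^2): {die_yield(250, d0):.0%}")
```

With these toy numbers the small die yields around 61% while the large one yields under 30%, which is the shape of the problem the comment describes.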
>Intel performance will always be one step ahead of AMD
A completely ridiculous assertion, and one that is already largely false at the present moment.
>intel have faster tech in their pocket waiting to be released.
Likely true, but only because of their failure to execute on 10nm. CPU designs are closely tied to the process they are designed to be manufactured on -- especially for Intel, who have never been concerned about being too tied to a single manufacturer (since it is themselves). This tight coupling can be good when the process is good, but the failure of 10nm meant that for years and years those architectural improvements you speak of are unshippable and useless.
Intel is not completely screwed by any measure, but the idea that they have the situation entirely under control and are intentionally not competing with now-superior AMD chips is entirely ridiculous.
I really had never imagined this could be so. Could you expand somewhat on why it is? Thanks.
This just isn't true. When Ryzen first came out, Intel was still ahead in many use cases, especially if $$ was no object to you. But after several years, AMD is now taking the crown from Intel in every area they compete in. If Intel really had some magical faster tech, they would have released it years ago, let alone this year, but they haven't.
Instead they announce the same old chips then hook them up to industrial strength cooling units to make it look like they can compete with AMD. This isn't the behavior of a company that is just biding its time.
They've even had processors that basically only shipped on paper.
Citation very much needed.
Show me an Intel CPU with PCIe 4.0 or with the performance per watt of the latest AMD Ryzen mobile chips. You can't, because to date they don't exist.
I think if they had anything in reserve it would have been released by now, because AMD is trouncing them in reviews.
After that, the story is well known: their microarchs were tied to their node, Intel's 10nm was a failure, and it still is not up to what the competition is doing. They started backporting the next microarch to 14nm too late, IMO. They may release new ones on 14nm, or they may suddenly manage to strongly improve their 10nm (not likely; at this stage improvement is usually very gradual).
And it is again obvious that even in the last 3 years they had nothing (on the microarch side, that they could produce on their mainstream node) to rush out as a fallback to counteract Zen; otherwise, why would they not have done it?
Now they are clearly behind on process, and on the microarch side Zen 2 is good enough that it doesn't need to compete much against Sunny Cove. Plus, if Intel releases a 14nm backport of Sunny Cove, it may be up against Zen 3...
Hmm. I think if Intel had a significant architecture enhancement waiting in the wings we would have seen it by now. They need it badly.
I just updated a 2013 quad core laptop with a late 2019 8 core laptop. Tasks that really use the cores are of course faster. Other than that, no substantial performance increase. 7 years and it is just incremental. Their process issues of late are horrific but there is not much happening architecturally either. Sad to see. They just seem a bit lost.
Meanwhile in a couple of months AMD will be on 7nm+
(However, it should be said that Intel 10nm is roughly equivalent to TSMC 7nm in terms of density. AMD is still ahead but not by as much as the numbers would imply).
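The widely cited (approximate) peak transistor densities back this up; note these are paper figures, and achieved density varies a lot by design:

```python
# Approximate peak transistor densities in millions of transistors per mm^2,
# as widely reported in the tech press. Ballpark figures only -- real chips
# rarely hit peak density.
density = {
    "Intel 10nm": 100.8,
    "TSMC N7":     91.2,
}

ratio = density["Intel 10nm"] / density["TSMC N7"]
print(f"Intel 10nm is ~{(ratio - 1) * 100:.0f}% denser than TSMC N7 on paper")
```

In other words, the "10 vs. 7" naming gap makes Intel look further behind than the actual density numbers do.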
"Recent technology nodes such as 22 nm, 16 nm, 14 nm, and 10 nm refer purely to a specific generation of chips made in a particular technology. It does not correspond to any gate length or half pitch."
"At the 45 nm process, Intel reached a gate length of 25 nm on a traditional planar transistor. At that node the gate length scaling effectively stalled; any further scaling to the gate length would produce less desirable results. Following the 32 nm process node, while other aspects of the transistor shrunk, the gate length was actually increased."
Nodes mean 3 things: density, electrical performance (Ion, max frequency, leakage, etc.), and processing flexibility (how many feature primitives or designs are supported).
Also, I assume the complaints were for outside of any clean room environment since inside a cleanroom you are obviously wearing facial covering at all times. I do wonder about virus circulation inside a cleanroom though since there is a high volume of laminar flow and the filters are designed for particulates much larger than virus-sized.
That being said, serious accidents have happened when employees violated procedures/protocols, failed to lock-out/tag-out, or defeated interlocks inappropriately. You can read about them in the press. For example: https://www.oregonlive.com/silicon-forest/2018/10/intel_sued...
Employees who fail to follow safety protocols rarely keep their jobs, even if there happened to be no injuries.
High voltages, high-power optics, vacuum systems, etc, all exist, too, but the risks associated with chemistry/process-gas accidents are perhaps the greatest.
As a teaser, HF is used throughout the industry -- it is spooky stuff (and awesome at getting things done).
Dimethylmercury able to kill through 2 drops absorbed through a latex glove: https://www.acsh.org/news/2016/06/06/two-drops-of-death-dime...
And of course the amazing series of ‘things I won’t work with’. E.g.: https://blogs.sciencemag.org/pipeline/archives/2014/10/10/th...
I would say that it is an extremely safe occupation, having worked previously at a major semiconductor manufacturing facility for a few years. A lot of the staff virtually never enter the cleanroom. I personally only ever went in there 3-4 times. The automation is nearly absolute, so most of the effort is spent making sure the computers are doing the right thing. Sending humans into the cleanroom is directly in conflict with the objective of realizing low defect rates, because people are walking particulate factories.
I do not doubt it, but it all sounds like ordinary circumstances. How about extraordinary stuff?
How about times when safety is in direct conflict with executives' pay?
Before any of that, I'd already decided back in early March—when it became clear that things were more serious than people had been treating them—that I'd stick around until we either temporarily shut down or the reported cases started getting too close for comfort.
A few weeks ago I sat down in the morning to find another company-wide email sent out at 9:00 PM the night before (I don't know why they hadn't also forwarded a copy of this one to my personal inbox that I could access from home, as with the ones before). In this case, symptoms had been reported 4 days earlier, now confirmed by a positive test. This time it was on my floor, not far from where I sit in our open office. I sent an email that I was leaving and would not be returning for at least two weeks, pending further info about any spread. That was a Friday. I got fired over email (although not in those terms) at the end of the day the following Monday:
Although you did not resign failure to report to your scheduled shift(s) has been taken as a voluntary resignation. We have gone ahead and ended your assignment for you. Please do not report back to the SAS site as your assingment here is done.
This was all within the last two weeks. At the time, our group had been doing a piss-poor job of observing both company and local rules about distancing. It had been less than a week since the people around me had finally been moved to different workstations. It wasn't an availability problem, because (a) half the office was already a ghost town due to select folks being ordered to work from home, and (b) the rest of the company had been working spread out for something like a month already—but not our group.
At the beginning of April, we were supposed to get a scheduled payout for earned PTO, but we received an email that due to COVID they were actually going to delay the payout. I had to jump through a lot of hoops alternately taking on extra days that I wasn't ordinarily scheduled and taking off other days to burn the PTO to give the effect of getting the payout for the hours I'd accrued.
There is a big difference between unilaterally announcing that you won't be coming to work and having your request refused and getting fired for leaving anyway.
It looks like the employer interpreted the message as a type of ultimatum. In the face of such an ultimatum, they basically have no choice but to treat it as a resignation.
The ultimatum is a very dangerous negotiating tactic, and generally harmful to long-term relationships. As an example, I recommend against posing an ultimatum to a spouse or partner.
Edit: speaking of "ultimatums," what do you call a strike? Is it only an "ultimatum" (thus unacceptable) when one person does it, but not when a large group does it?
If they had said that they have concerns about safety, and requested an urgent (possibly remote) discussion, the employer would have more options. By telling the employer that they were leaving immediately, and not coming back until they felt something had changed, they force an immediate decision of either treating it as a resignation, or accepting and condoning unilateral actions by employees.
Strikes are definitely ultimatums, though they generally have clearer objectives, and are usually preceded by a series of discussions. Strikes are not conducive to good relationships.
Read this again, and comprehend that HR unilaterally and knowingly subjected them to the risk of contracting the virus.
Subsequent self-quarantine at home benefits the company as it reduces the chances of spread. It was the right thing to do, and there's not much to negotiate here given the initial screw-up.
I am not making a judgement as to who is at fault for the situation. I am only stating that the decisive move was the grandparent's e-mail, which left the employer only one rational option (that I know of).
What does Samsung stand to gain from this? They lost an employee (which will cost them in recruiting when they open up again) and gained bad reputation (and a disgruntled former employee).
This is power play that hurts everyone.
You do know that frequently the only reason these "discussions" are able to be had is because of the threat of a strike? What if the GP had these "discussions" and was told to go back to their desk and STFU? Then what?
Strikes are actually the result of employers abusing employees, which is what creates the bad relationships. You sound as though you believe the other way around, that strikes create bad relationships.
Do you see an alternative way the corporation could have handled this that would at least given the impression they valued the lives of their labor force?
Imagine someone gets up at their desk, and yells out: "I am leaving, and not coming back until you agree to my terms". That is the (assumed) situation here, and leaves management with almost no options.
I read "sent an email that I was leaving and would not be returning for at least two weeks and pending further info about any spread" as such an ultimatum. It is a final statement, the rejection of which will result in a breakdown of relations.
The issue here is that Samsung notified an employee of a coronavirus case on their floor by an email they could only read at work.
The HR willingly and knowingly subjected the employee to the risk of contracting coronavirus, taking away their ability to make an informed decision of whether or not to come to work that day given the risk.
This is criminal behavior. Don't make excuses for them.
I'm not even sure that, if you pushed this hard enough as a wrongful termination claim, it wouldn't stick.
That said, it may be difficult to prove. The employee should demand answers to their concerns about workplace safety, and the employer should answer them properly. That doesn't look like it was done here. The employer knee-jerk firing the employee the next business day, after being informed that the employee was not abandoning their post without reason, also doesn't look good. They knew why he was not there, and I wonder if they made any attempt to reassure him, or to reiterate their COVID-19 safety plan and the expectations of staff. If the employee was not satisfied with the answer, or with how it was applied, then they should file a complaint with the local employment board and OSHA, and possibly speak with a lawyer if they feel they are under threat of retaliation for reporting genuine workplace safety concerns.
"By the way, your office is infected, and we didn't want to tell you this before you arrived here. Sorry not sorry."
This is criminal, and should not be excused.
Or to put it another way: just because you can fire someone by rolling dice doesn't mean you can fire someone for being black.
That's part of why you'd often see lots of HR bureaucracy regarding firing process, documenting infractions, performance improvement plans, etc also in states where technically you can be fired without a reason.
They're trying to avoid paying out unemployment by saying he failed to show up for a shift, but even sans COVID, firing after a single absence is abnormal.
Instead, it feels like most corporations are managed by the people who run Royco in Succession:
It's the same as a manager who hears that being interested in the well-being of his employees can get them to work harder, so he starts pretending to be interested in his employees, not because he cares but because he wants performance improvements. It ends up being insincere, creepy, and off-putting instead.
If the brass of a company try to emphasize safety to get the benefits Alcoa got, they won't pull it off, not because safety isn't important but because they didn't really care about safety in the first place.
Stabilizing employee mindset is the measure they should care about.
Hard to ship chips if they feel the company doesn’t care about their demand to stay alive and they all quit. AMD wouldn’t mind a bunch of chip experts being suddenly available.
Ogling market economics first and foremost is really not the priority for agency these days.
If demand falls such that we’re just generating chips for science and industry, so be it.
No one owes tech nerds tech to fetishize.
They may have contracts that require them to ship units or lose large contracts (and thus have to engage in mass layoffs).
For instance, see the USCSB's video on the BP Texas City refinery explosion. The refinery operator had a great record for safety, but it was measuring individual worker safety, ie PPE and recordable injuries. It was not measuring Process safety.
Edit: I should probably make clear that I don't think BP actually had a great safety record. I think they measured and cared about a certain class of safety, but not another class of safety, and that's the point I was trying to make.
In a very limited scope, it could work. If a team has a sprint or something, they can work on the safety tasks first.
But that's not really how a company operates. You have a whole mess of tradeoffs and often the costs and benefits can't be clearly quantified.
What they do is work out an operational model, some people work exclusively on examining safety, and they put out guidelines to management and employees who then have to practice safety themselves. Generally, you only know if it works after the fact by examining the results, maybe even tally up the costs of lawsuits or bad PR.
But it's meaningless to say "safety is job #1" because you're doing all the jobs 2 and on or you're going out of business. And that means all of them; no one says "taking out the trash is job #1" or "sending out W2s is job #1" but a company grinds to a halt pretty quickly if they aren't done.
That's pretty much the entire reason for pandemic-mitigation measures.
Even healthy people with no need for hospitalisation can be taken down for a couple of months of fatigue and symptoms from it.
I'm curious how this works in other countries; if a company lets someone go inappropriately, is there faster recourse than the courts? Can you go to the police and have them escort you back in, or what?
The problem with even arriving there is that there are three ways of letting go of someone:
1) Just letting them go for any reason, but this requires giving them minimum notice (usually at least a month, but in many places it's 3 months or even more). Companies get around this by simply paying the employee their salary for the duration of the notice period and telling them not to come in any more.
2) Both sides can agree to terminate the contract immediately. This usually happens when the employee wants to go somewhere else but also has, say, a 3-month notice period; in that case you can sometimes agree with the company that you will leave immediately but also not receive your 3 months' pay.
3) You can let someone go immediately without pay, but only for gross misconduct, and oh boy, you'd better have it very, very well documented that it was gross misconduct (a recording of someone stealing is usually good enough).
So the entire "letting someone go inappropriately" issue is lessened, because it's really hard to argue in cases 1 and 2, and employers really try to avoid case 3 specifically because of the risk of being sued. Most companies would rather just pay you for the remaining time on your contract to let you go immediately than risk a lawsuit.
In other countries, you can't "just let someone go". In Belgium there are CAOs, which loosely translates to Collective Employment Agreements. Basically, all employment agreements for a whole industry are more or less standardised. On one side are the employers (of different companies); on the other side are the workers, represented by union leaders (and perhaps also political representatives).
These agreements include a lot of rules on how employment can be terminated. One example: an employer is only allowed to fire an employee for underperformance after giving the employee 3 written reports over a period of 3 months and working on a plan to improve their performance. Only after that fails to produce results are you allowed to fire the employee. And you need to keep the documentation, as it could be reviewed in court if the employee decides to fight it. (That doesn't happen often.)
For this reason, often the employer will try to get the employee to quit. Or at least come to a mutual agreement to terminate the employment.
In Belgium, it's very hard to fire employees, and I think this is the same for several other European countries.
As a result, I haven't heard of many EU companies firing employees during these rough times. At least, not as often as US companies with "at-will employment".
The flip-side is that large companies will often have a glut of employees that aren't productive, but can't be fired.
If you need to get legal, it says “It can take a few weeks or a few months for an application to be processed, heard, and determined by a member. The length of the process will depend on things such as urgency of the application, whether parties have tried to resolve their problem at mediation, the availability of parties, representatives and the complexity of the case.”
You cannot be directly denied your salary or get demoted/fired for this. The company can of course appeal by opening a court case saying you abused this right, in which case any sanction may be applied if the ruling is in their favor.
The law also mandates that the company is responsible for the health of its employees on company time and premises, so you can also open a court case if you do not feel adequately protected (which is what happened to Amazon).
But what if they do it anyway? That's more the context here: we have wrongful termination as well, though it is likely much less strict.
Not to mention that the US ranks below many countries in net immigration per capita, including: Luxembourg, Switzerland, Australia, Austria, Italy, Sweden, Germany, Belgium, Great Britain, Denmark, and others...
But I think we do everyone better by trying to put some effort into posting about why it sucks, and how it specifically seems to be impacting this situation. Low effort posts aren't going to change anything, and they clutter up HN.
E.g., Intel's primary motivation is to increase its capital; it has relatively fixed material costs and somewhat flexible labor costs, and so it chooses to expend its labor (in this case by putting workers in a risky environment) to continue increasing its capital.