Intel accused by workers of prioritizing chip output over safety (bloomberg.com)
313 points by fortran77 on May 8, 2020 | 181 comments



I can't imagine what higher-ups at Intel are feeling about the current state of things, especially since they're spending nearly 10x as much on R&D as AMD ($1.5b for AMD[0] vs $13.1b for Intel[1]). Of course, AMD doesn't need to spend R&D money on the process they're using from TSMC, although I'm sure some of the money they're handing over goes toward TSMC's R&D.

I'd love to be a fly on the wall in the board meeting room.

[0] https://www.statista.com/statistics/267873/amds-expenditure-...

[1] https://www.investopedia.com/news/amazons-23b-rd-budget-sets...


I was recently putting together a desktop PC for some light gaming after being out of the market for a long time, and I couldn't believe how incremental Intel's progress has been in the CPU space over the past few years.

I know there's a non-trivial mobile performance penalty, but a fourth-gen mid-range i5 I got on eBay for $30 has nearly the performance of the 8750H in last year's Dell XPS (my work laptop). Even when you get away from mobile, compare two i7 chips released five years apart for basically the same price, and userbenchmark.com shows just a 21% overall performance difference between them:

https://cpu.userbenchmark.com/Compare/Intel-Core-i7-4790-vs-...

This window shrinks even further when you look at what overclockers get out of the older chips with fancy cooling setups. But yeah, when else in the history of personal computing has five years of progress yielded such small gains?
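For a sense of scale, here's a quick back-of-the-envelope annualization of that 21% figure (just arithmetic on the number quoted above):

    # Annualize the ~21% five-year gain from the userbenchmark comparison.
    five_year_gain = 1.21
    annual = five_year_gain ** (1 / 5) - 1
    print(f"equivalent annual improvement: {annual:.1%}")  # ~3.9% per year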

I know eventually I'll upgrade my system for real, probably to something AM4-based, but when I saw these numbers, it became a no brainer to just bump up the CPU on an old LGA1150 system I had lying around rather than spend a bunch of money on an all-new motherboard, new RAM, etc.


Five years? Make that ten; the progress since Sandy Bridge has been incredibly slow. If you take a Sandy Bridge i5 or i7 CPU and turn off the mitigations, it's still within spitting distance of the current i5/i7 models. The only reason I changed out my old i5-2500K two years ago was that the motherboard had finally failed. It was holding up fine with any software I was running, including modern games from that time such as Doom 2016.

Intel really has got to be in an internal panic. The last time AMD pushed Intel this hard, Intel went to extraordinary tactics to push AMD out of the market. I wouldn't be surprised if pushing their workers well beyond safety limits was in the current playbook.


That's a good point; past a certain point I think it's less about the single threaded performance and more about interconnect constraints— slower RAM speeds, slower PCIe lanes, slower SATA, USB, etc.

There's a funny series of LTT videos from this past fall where they bring up a 2008 Skulltrail mobo and try running some modern games on it, with and without overclocking:

https://www.youtube.com/watch?v=wNo7qoLRtkQ

https://www.youtube.com/watch?v=Hl7Dx895ND4


Memory bandwidth is a big limitation with the older i5/i7s, at least when trying to pair them with anything beyond say a gtx 970.


Intel really made their own bed on this one. Anyone with a few grad classes and some interest could tell you what changes to make at each generation to capture more performance. Intel consistently under-delivered on those fronts. It was not because they lacked the technical know-how; that part is easy. The people making decisions at Intel were not interested in giving customers the same or better value for their money without competition. Competition came, and their pants were still down. Intel's pants continue to stay down, since it takes so long to pull them up. They got greedy and they're paying for it.


I found an ITX board for that chip from Asus or Asrock...it's still living as an extra workstation for the office!


Userbenchmark is useless; it can barely compare between Intel microarchs. The best it can do is """help""" compare two processors from the same family.

The last big microarch jump for Intel was Skylake (6th gen); the 7th, 8th, 9th and some of the 10th gen (not all of it, but they needed "10th gen" for marketing reasons) are just incremental improvements over it. Skylake was about a 15-20% real increase in performance, but it took a few shortcuts (Meltdown and Spectre). From then on, you had a 5-10% performance improvement per generation (let's round it to 7%). That's not bad per se, but it was done out of comfort. The "cursed" 10nm node which never came is also part of the reason Intel stalled: they insisted on "tick-tock" and were unwilling (or unable?) to deliver a new microarch on 14nm+++, the super-refined node. That extra refinement did let them pull the clocks higher on each new gen, getting close to, and with this new weird turbo [that seems more marketing than substance] past, the 5 GHz barrier.
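To see how those figures compound, here's a rough sketch (the baseline and per-generation numbers are the estimates from this comment, not official figures):

    # Skylake's real jump (midpoint of the 15-20% estimate), then
    # ~7% incremental gains for each later generation.
    perf = 1.175
    for gen in ("7th", "8th", "9th", "10th"):
        perf *= 1.07
        print(f"{gen} gen: {perf:.2f}x over the pre-Skylake baseline")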

And that's how you come to Intel's situation in 2020: no 10nm node, no real new microarch in the last 5 years.

Edit:

As for why we never saw something like this before... I think we never had a situation where one of the competitors had such an advantage (the microarchs before Zen on AMD's part were bad, and that hit them hard for a long time) that they could pull these duckish moves for so long. For a long time, Intel chose not to improve their offering, because they were so far ahead of the competition that they could keep milking the Whatever-Lake microarch family. They'd blame it on the difficulty of extracting more IPC, or Moore's Law coming to an end (or not coming to an end, depending on what marketing was trying to sell).

In the end it took AMD+TSMC offering superb processors for Intel to slash its prices in half - and it is still being crushed performance-wise, in desktop, server and now mobile too. And all Intel has to show is AVX-512, which, after gatekeeping it for years in the server market, they've released on consumer processors - but hey, nobody's using it because there was no support!


> And that's how you come to Intel's situation in 2020: no 10nm node, no real new microarch in the last 5 years.

Chips fabbed on 10nm have been shipping in volume since last summer. The Sunny Cove CPU cores found in Ice Lake chips, fabbed on the 10+ node, average roughly 16% higher performance vs. Skylake, clock-for-clock, before accounting for improvements in memory bandwidth. Tiger Lake (fabbed on 10++) will be shipping this year, alongside Ice Lake Xeons.


That 10nm node (10nm+ as they call it) is not as mature as 14nm+++; they can't reach the same clock speeds (or yields), and that's why they have to resort to comparing clock-for-clock against their previous generations.

Extract from Anandtech[0] comparing Comet Lake vs Ice Lake:

    Some of these tests rely heavily on turbo, such as the PCMark tests, and so the Comet Lake i7-10710U can hit 4.7 GHz on the latest variant of Skylake, while the Ice Lake i7-1065G7, despite its higher IPC difference, can only do 3.9 GHz. This means in a lot of bursty workloads (which a lot of business workloads are), the Comet Lake wins and we see that play out.
Again, that means that some chips aren't produced on 10nm, because if they were, they'd end up being straight-up slower, despite the 18% IPC improvement.
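As a sanity check, you can model single-thread throughput as turbo clock times relative IPC (a simplification; real workloads are messier):

    # Figures from the Anandtech extract above.
    comet_lake = 4.7 * 1.00   # i7-10710U: Skylake-class IPC at 4.7 GHz
    ice_lake   = 3.9 * 1.18   # i7-1065G7: ~18% IPC uplift, but only 3.9 GHz
    print(f"Comet Lake {comet_lake:.2f} vs Ice Lake {ice_lake:.2f}")
    # 4.70 vs 4.60 -> the mature 14nm part still wins bursty workloads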

As I said before: I really hope Intel sorts this mess out and comes along with something that can compete against AMD's offering. Otherwise we might see a turn of the tables, with AMD eventually stalling progress, just because they aren't pushed hard enough by the competition.

[0] https://www.anandtech.com/show/15385/intels-confusing-messag...


Are they just going to skip to 7nm at this point?


Intel's 7nm should be comparable to TSMC's 5nm (actually Intel's original 10nm, the one that failed spectacularly, would've been better than TSMC's 7nm and 7nmP in terms of transistor density; see [0]). And it was started long ago (like new nodes usually are), so it should be coming along somewhere around the same time, maybe? Intel hasn't said much about its future nodes, but there's some information and a few marketing slides around[1].

They seem to have learned from their 10nm mistake, so they're apparently planning to have + and ++ versions of the following nodes that could handle production of a new microarch if the next node isn't ready, but only time will tell how that goes.

[0] https://en.wikipedia.org/wiki/7_nm_process#7_nm_process_node...

[1] https://www.anandtech.com/show/15217/intels-manufacturing-ro...


Userbenchmark is totally useless because too many users run the benchmark in wildly varying environments. Better to just read proper review sites like AnandTech, GamersNexus, Phoronix.


Yup. Same boat. I have an i7 3770k that I purchased in 2012 with the intention of overclocking it. It turned out to be a great chip that was stable at 4.8 GHz. I use this computer mainly for gaming.

Last month, I started getting some kernel_trap BSODs, so I reduced the voltage a bit and dropped the clock speed to 4.7 GHz.

I don't see a reason for me to upgrade my CPU for another few years unless it burns out or something. The only component I've upgraded is my GPU, from a GTX 670 to a 980 TI.

The newer chips may be great for specific needs, but for gaming and light development work I don't see any reason why I need to upgrade. Sure, a quad core from 2012 isn't the hottest thing anymore, but along with my 980TI it runs all my games at 2k without any issues.


It is also much greener to reuse your old hardware, especially if it is Sandy Bridge or later (1st gen and earlier hardware consumed lots of energy even at idle; Sandy dropped idle consumption from 50 to 5 watts).
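To put that idle-power drop in numbers (simple arithmetic, assuming an always-on machine):

    # Idle draw fell from ~50 W (pre-Sandy Bridge) to ~5 W.
    hours_per_year = 24 * 365
    saved_kwh = (50 - 5) * hours_per_year / 1000
    print(f"~{saved_kwh:.0f} kWh per year saved on an always-on box")  # ~394 kWh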


AMD users and overclocking have always gone hand in hand. Four years before the 4770 came out, I had an FX-8120 at 5.1 GHz that benchmarked faster than said 4770, and I ran that chip daily at 4.9-5.1 GHz for five years. I also had a 9590 that did 5.2. FX was amazing in all the work I did, at a fraction of the cost. In their idle time those CPUs pushed me to the top of the F@H charts with BIGadv work units.


That's because chip makers have been focusing on energy efficiency far more than chip speed.


Yet the TDPs of desktop processors are around 100W, just like ten years ago. And apparently current Intel CPUs Turbo Boost themselves to 200+W!


Take the long view. AMD and others suffered through the 45nm-era delays at TSMC, and they benefit from TSMC's acceleration now.

Intel is a very odd company, and for all that there are truly brilliant people there, it feels increasingly like the IBM of 2003.


Having spent a few years inside Intel, it's a company paralyzed by meetings and meetings about meetings. People seem to be afraid to make any decisions, so they have meetings to talk about the decisions that need to be made and then schedule more meetings to follow up. Working there felt like swimming in molasses. People who come into the company with any experience realize that it's a mess, try to change things in their area, meet resistance, and after a while just give up and leave. That's probably why they mostly hire NCGs (new college grads) at this point.


>it feels increasingly like IBM of 2003

Probably explained by: https://www.youtube.com/watch?v=P4VBqTViEx4

On another point: I'm not sure how much of this is true, but I've read that most of AMD's chips are made from the same basic building blocks, and their different CPUs are mostly just a matter of how many of these blocks are stitched together, along with whatever tuning/other hardware. Such a design is probably very scalable, which simplifies manufacturing and reduces costs. Whereas I've heard Intel chips have more separate customizations, requiring more separate tooling for each of their chipsets, which complicates manufacturing...


TSMC's 20nm process failed as well.


TSMC 20nm didn't fail, it could be made in volume just fine, even if thermals were a bit disappointing.

The bigger TSMC failure was on 32nm, which had to be killed outright.


I just remember that AMD and Nvidia canceled their plans to use 20nm for GPUs and skipped straight to 14/16nm.


Actually in a relative sense the pandemic clearly helps Intel.

Their problem right now is that they have an inferior process and their high margin datacenter parts[1] are in critical danger of facing competition from devices fabbed by TSMC and (to a lesser extent) Samsung.

Their offerings in the market now, though, are still winning and Intel is making a ton of money.

So the net effect of a downturn is that the future market, the one where AMD is competitive and Intel needs to drop prices, is going to shrink in total dollars. That's a win for Intel, comparatively. It means the window into which AMD could have jumped to steal revenue is smaller, and Intel has more time to catch up.

[1] Remember that most of the money in the CPU industry is made in the datacenter. AMD having better desktop parts doesn't affect Intel's bottom line much as long as they remain competitive in servers. (AMD still lags with laptop devices too, which are also a bigger market than desktops).


Why wouldn't data centers switch when they can get more computing power for less energy and at a lower initial price point? That's happening now with the current offerings.


> Why wouldn't data centers switch when they can get more computing power for less energy and at a lower initial price point?

Uh... because their revenues are down 20%, they have a hiring freeze, and are delaying all capital expenditures? That was my point: there may be technical reasons for people to move away from Intel's products. But the economic reality means that fewer customers will.


While I agree that Intel's R&D budget isn't giving enough returns, this isn't a correct perspective on the subject.

Intel is Xilinx + AMD (CPU + GPU) + Nvidia (competing with CUDA) + TSMC (and more, since Intel does everything itself while TSMC relies on many partners forming an ecosystem) + Qualcomm's modem/WiFi + AI chips + autonomous vehicles + Broadcom's networking (consumer to enterprise networking controllers), and many, many more smaller things that are not listed.

And since the R&D budget was set during the era when the current CEO was the CFO of the company, I wouldn't say Intel doesn't know much about it.

And if we look at fiscal results [1][2], Intel is earning exactly 10x AMD's revenue. So in terms of dollars generated per R&D dollar, they are pretty much in line with industry standards. Now, of course, in reality Intel likely earns 80% of its profits from ~50% of the R&D budget, purely on fabs and x86 CPUs. The rest of the R&D budget still provides little return at the moment, but, for example, it has been a long time since anyone had the potential to challenge Nvidia's CUDA or GPU/GPGPU dominance. And Intel's leaked gigantic GPU offering seems to be just that.

Having said all that, the other side of the argument is that there is currently no reason why AMD can't double, triple or even 5x its revenue with its existing R&D budget. Their EPYC line is still doing poorly by (my) standards, at less than 5% market share. Their desktop parts are doing well in the consumer market but not the OEM market (the majority of the desktop market). And laptops have barely started. That is all due to AMD's (comparatively speaking) incompetence in sales and marketing. (You could probably argue that it was Intel's sales and marketing doing its job far too well.)

[1] https://www.anandtech.com/show/15747/intel-reports-q1-2020-e...

[2] https://www.anandtech.com/show/15754/amd-reports-q1-2020-ear...


> Having said all that, the other side of the argument is that there is currently no reason why AMD can't double, triple or even 5x its revenue with its existing R&D budget.

I would argue that fab capacity at TSMC is very much a limiting factor [0], putting AMD in the same spot as Intel: selling practically every working chip produced. This puts a cap on the number of units sold, and with that a soft cap on revenue. Doubling revenue at this point would mean doubling prices, which would be very unhealthy for their turnover.

The fab capacity problem is even worse if you consider that basically all current and coming AMD products (CPUs and GPUs for the PS5, laptops, PCs, servers, the new Xbox) are produced on 7nm or 7P, and therefore compete for the limited fab capacity available.

[0] https://wccftech.com/amd-7nm-wafer-production-set-to-double-...
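To make the capacity cap concrete, here's a hypothetical dies-per-wafer estimate using the standard approximation (the ~74 mm^2 die size is the commonly cited Zen 2 chiplet figure; the defect density is made up for illustration):

    import math

    # Standard approximation: pi*(d/2)^2/A - pi*d/sqrt(2*A)
    wafer_d = 300.0         # mm, standard wafer diameter
    die_area = 74.0         # mm^2, approx. Zen 2 chiplet (CCD)
    defect_density = 0.001  # defects per mm^2 -- hypothetical value
    dies = (math.pi * (wafer_d / 2) ** 2 / die_area
            - math.pi * wafer_d / math.sqrt(2 * die_area))
    yield_rate = math.exp(-defect_density * die_area)  # simple Poisson yield
    print(f"~{dies * yield_rate:.0f} good chiplets per 300mm wafer")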


The others are being "subsidized" by TSMC's R&D expenditures, which are being amortized across a larger market namely mobile and specialized chips.


.


Do you have any data to back this up? I should hope people would be embarrassed to post wildly unsubstantiated nonsense under a username linked to them.


Ya, this doesn't really make any sense to me. Intel's attempts at shrinking the node failed due to diversity hires? I have a very hard time believing this. If you're capable of working at that level at all, I can't imagine skin tone or gender is holding you back...


Realise that Intel has not been releasing chips with any major improvement either. They have delayed new chips and designs time and time again, on purpose, because they haven't had the competition to go up against.

Intel's performance will always be one step ahead of AMD's; Intel has faster tech in their pocket waiting to be released.


>They have delayed new chips and designs time and time again, on purpose, because they haven't had the competition to go up against.

No, they delayed time and time again because the 10nm process utterly failed to deliver the yields and performance necessary to be viable. It's not some ingenious display of sandbagging: only in the past year or so have 10nm yields risen high enough to make low-power chips viable, and the process is still not viable for high-power desktop chips, and probably never will be.

>Intel's performance will always be one step ahead of AMD's

A completely ridiculous assertion, and one that is already largely false at the present moment.

>Intel has faster tech in their pocket waiting to be released.

Likely true, but only because of their failure to execute on 10nm. CPU designs are closely tied to the process they are designed to be manufactured on -- especially for Intel, who have never been concerned about being too tied to a single manufacturer (since it is themselves). This tight coupling can be good when the process is good, but the failure of 10nm meant that for years and years those architectural improvements you speak of were unshippable and useless.

Intel is not completely screwed by any measure, but the idea that they have the situation entirely under control and are intentionally not competing with now-superior AMD chips is entirely ridiculous.


> CPU designs are closely tied to the process they are designed to be manufactured on

I really had never imagined this could be so. Could you expand somewhat on why it is? Thanks.


Hardware has layout constraints just like data have alignment constraints.
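To make the analogy concrete, here's a minimal Python illustration of the data-alignment side (the hardware analogue being layout rules tied to a specific process):

    import struct

    # With native alignment ('@'), a 4-byte int following a 1-byte
    # value gets padded out to a 4-byte boundary; packed ('=') doesn't.
    print(struct.calcsize('@bi'))  # 8 -> 3 padding bytes inserted
    print(struct.calcsize('=bi'))  # 5 -> no alignment padding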


> They have delayed new chips and designs time and time again, on purpose, because they haven't had the competition to go up against.

This just isn't true. When Ryzen first came out, Intel was still ahead in many use cases, especially if money was no object to you. But after several years AMD is now taking the crown from Intel in all areas they compete in. If Intel really had some magical faster tech, they would have released it years ago, let alone this year, but they haven't.

Instead they announce the same old chips then hook them up to industrial strength cooling units to make it look like they can compete with AMD. This isn't the behavior of a company that is just biding its time.


AMD seems to have supply issues, though. You can have the best chips on the market; if they're out of stock, you don't really compete.


Whatever supply issues AMD has, I can still get their chips at or below MSRP from a number of vendors. The latest generation of Intel chips, on the other hand, are hard to find and selling at a significant premium where they are available.


That's not unique to AMD. Intel also goes through periods of supply issues.

They've even had processors that basically only shipped on paper.


Yeah, like good luck trying to source a 9900KS for anywhere near MSRP, which was just a polished revised 9900K so Intel could still technically claim the “gaming crown” from AMD.


Have you tried to get your hands on any of the Intel HEDT parts? There's little to no supply and the supply that does exist is heavily marked up. 3950Xs and Threadrippers are in stock everywhere.


AMD's supply issues have improved significantly now that Apple is moving from 7nm to 5nm this year. AMD is now TSMC's top 7nm customer.

https://twitter.com/chiakokhua/status/1258393158766886913 https://twitter.com/chiakokhua/status/1258393160784330754


Intel ran out of 9th gen chips and started fishing quality-control-reject chips out of the trash, selling them as F chips (9100F etc.) - chips that failed GPU quality control and so had the GPU disabled. Credit for the honest label: F for F-grade GPU.


> Intel's performance will always be one step ahead of AMD's; Intel has faster tech in their pocket waiting to be released.

Citation very much needed.

Show me an Intel CPU with PCIe 4.0 or with the performance per watt of the latest AMD Ryzen mobile chips. You can't, because to date they don't exist.

I think if they had anything in reserve it would have been released by now, because AMD is trouncing them in reviews.


If they were to have a revolutionary technology that they haven't released yet, they'd release it as soon as their OEM and enterprise sales actually started to take a nosedive. Even though PC enthusiasts are outspoken about it, Intel is still doing just fine in their much higher-volume areas.


Have you seen the numbers and prices on Epyc Rome? All signs point towards Intel's reign in all areas being over. Short of a miraculous turnaround, Intel won't be competitive in three years. I want Intel to do well, but they've demonstrated over the past ten years that they are not interested in doing well.


Aren't there rumblings of AMD starting to make headway in data centers?


I can speak only about our smallish datacenter (1TB Core Bandwidth concurrent) and we definitely have been moving towards more AMD in the past year or two. Currently we have roughly a 60/40 AMD/Intel split on new machines.


They are only doing fine through inertia in enterprise and datacenter.


Intel has faster tech in their pocket? 5 years ago I would have believed you. But they have seen the AMD zen architecture for years now and they can't meaningfully compete anymore. If they had tech ready to go they would have released it by now to crush AMD.


I agree. 5 years ago Intel put Skylake out. This was what they had, and it was great; it was years ahead of the competition. They did not have anything else secret, even better, and ready to go out, because that makes absolutely no sense: if you can crush the competition even more, just do it...

After that, the story is well known: their microarchs were tied to their node, and Intel's 10nm was a failure, and still is not up to what the competition is doing. They started backporting the next microarch to 14nm too late, IMO. They may release new ones on 14nm; or they may suddenly manage to strongly improve their 10nm (not likely; progress is usually very gradual at this stage).

And it is again obvious that even in the last 3 years they had nothing (on the microarch side, that they could produce on their mainstream node) to rush out as a fallback to counteract Zen; otherwise, why would they not have done it?

Now they are clearly behind on the process, and on the microarch Zen 2 is good enough and does not need to compete much against Sunny Cove, plus if Intel releases a backport of Sunny Cove for their 14nm it may be against Zen 3...


>>Intel has faster tech in their pocket waiting to be released.

Hmm. I think if Intel had a significant architecture enhancement waiting in the wings we would have seen it by now. They need it badly.

I just replaced a 2013 quad-core laptop with a late-2019 8-core laptop. Tasks that really use the cores are of course faster. Other than that, no substantial performance increase. Seven years, and it's just incremental. Their recent process issues are horrific, but there isn't much happening architecturally either. Sad to see. They just seem a bit lost.


Intel is literally on an older fabrication process right now, they’re on 10nm while AMD is on 7nm. They’re objectively behind.


You're giving Intel a little too much credit. They're only "on 10nm" for tiny, low-power laptop chips. The yields are reportedly still too low to use it for desktop or server CPUs.

Meanwhile in a couple of months AMD will be on 7nm+

(However, it should be said that Intel 10nm is roughly equivalent to TSMC 7nm in terms of density. AMD is still ahead but not by as much as the numbers would imply).
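For reference, the peak transistor-density figures usually cited for these nodes (approximate numbers from public sources such as WikiChip; shipping chips rarely hit peak density):

    # Peak logic density in million transistors per mm^2 (approximate).
    density = {
        "Intel 14nm": 37.5,
        "TSMC N7": 91.2,
        "Intel 10nm": 100.8,
        "TSMC N5": 171.3,
    }
    for node, mtr in sorted(density.items(), key=lambda kv: kv[1]):
        print(f"{node:>10}: {mtr:6.1f} MTr/mm^2")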


Meanwhile AMD & TSMC are talking about 5nm for their Zen 4 architecture.


The nm numbers are marketing. Processes are too nuanced to boil down to a single number.


And if one insists on comparing, Intel 10nm is kind of TSMC 7nm. However, Intel only manages to make a few of their laptop parts on their 10nm, and with not-great frequencies, IIRC. So they are behind now.


Process isn’t everything, but they’re not just marketing. The process has a huge impact on the heat & power characteristics of the core, and also controls how much stuff you can stuff into a die.


The point he's making is that 10nm, 7nm, etc don't actually denote anything that lets you do an apples-to-apples comparison anymore. At this point, they're not measures the way they were in legacy process nodes.


They haven't been apples to apples in over 20 years. But being stuck on one feature size for more than 2-3 years is still a major roadblock to performance improvement.


I thought that it used to measure gate width or something like that. Did that stop being true?


Yep

https://en.wikichip.org/wiki/technology_node

"Recent technology nodes such as 22 nm, 16 nm, 14 nm, and 10 nm refer purely to a specific generation of chips made in a particular technology. It does not correspond to any gate length or half pitch."

"At the 45 nm process, Intel reached a gate length of 25 nm on a traditional planar transistor. At that node the gate length scaling effectively stalled; any further scaling to the gate length would produce less desirable results. Following the 32 nm process node, while other aspects of the transistor shrunk, the gate length was actually increased."


It never stopped being true, but a lot of what makes a processor faster these days is smarter layout, better branch prediction, and better microcode. Process size matters, but it’s probably less important than it was in the 1990s.


Also, there's a difference between the minimum feature size that can be fabricated (mostly a lithography challenge), the minimum size of a reliable device that works well (e.g. with enough doping atoms in the channel) and the size and shape a specific transistor needs to be for its particular requirements.


That is literally not how process nodes work. They are objectively matched.

Nodes mean 3 things: density, electrical performance (Ion, max frequency, leakage, etc.), and processing flexibility (how many feature primitives or designs are supported).


AMD and Intel do not use the same node designs.


A generalization, but accurate enough. Disappointing to see it getting downvotes. Intel 10nm and TSMC 7nm have similar characteristics, as a brief Wikipedia search reveals.


I worked at Intel in a fab for 2 years. The culture is extremely safety conscious, if not safety obsessed. The motto is Safety, Quality, Output, in that order. In my experience, Intel strives to create an environment in which workers are safer at work than at home.


Yeah, it's ingrained into the culture for lower level employees to call out people even in upper management for such minor safety violations as not using a hand rail on the stairs.

Also, I assume the complaints were about areas outside any cleanroom environment, since inside a cleanroom you are obviously wearing a facial covering at all times. I do wonder about virus circulation inside a cleanroom, though, since there is a high volume of laminar flow and the filters are designed for particulates much larger than virus-sized.


Protocols designed to avert industrial accidents don't automatically translate to protection from COVID. Specifically, the article mentions that 6 ft distancing is not being maintained.


From what I've seen, the protocols are there but compliance is lax among some personnel - not speaking about Intel in particular.


Speaking as someone who knows nothing about fabs: What are the most dangerous aspects of the job? What are the most likely injuries to happen doing X?


Due to NDA, I can't talk about specifics, but a fab is basically a bunch of small chemical plants in the same (giant) room. There are all sorts of dangerous chemicals in use, but there are multiple layers of safety equipment and protocols to ensure their safe use. Because of how seriously they take safety, the most common injuries were things like ergonomic strain or tripping down stairs. Every stairwell has multiple "Keep one hand free for the rail" signs.

That being said, serious accidents have happened when employees violated procedures/protocols, failed to lock-out/tag-out, or defeated interlocks inappropriately. You can read about them in the press. For example: https://www.oregonlive.com/silicon-forest/2018/10/intel_sued...

Employees who fail to follow safety protocols rarely keep their jobs, even if there happened to be no injuries.


I've done lithography in research settings. To me, the chemistry is the scariest.

High voltages, high-power optics, vacuum systems, etc, all exist, too, but the risks associated with chemistry/process-gas accidents are perhaps the greatest.

As a teaser, HF is used throughout the industry -- it is spooky stuff (and awesome at getting things done).


Seconded. TMAH is particularly scary, though less common than HF. If enough splashes on your skin, you die. Highly toxic stuff.


Off topic, but that reminded me of some horror stories from my chemistry days:

Dimethylmercury able to kill through 2 drops absorbed through a latex glove: https://www.acsh.org/news/2016/06/06/two-drops-of-death-dime...

And of course the amazing series of ‘things I won’t work with’. E.g.: https://blogs.sciencemag.org/pipeline/archives/2014/10/10/th...


The chemistry is easily the most dangerous aspect for humans and safety protocols have been developed to match. Chances of exposure at the tool-level are probably zero (i.e. on the factory floor you see pictures of). If you are working with bulk chemicals (i.e. one of the support floor(s) below the factory floor), chances are probably higher that you could run into some exposure.

I would say that it is an extremely safe occupation, having worked previously at a major semiconductor manufacturing facility for a few years. A lot of the staff virtually never enter the cleanroom. I personally only ever went in there 3-4 times. The automation is nearly absolute, so most of the effort is spent making sure the computers are doing the right thing. Sending humans into the cleanroom is directly in conflict with the objective of realizing low defect rates, because people are walking particulate factories.


Other than the nuclear fuel processing industry, the semiconductor industry is the only other place where you're likely to come across chlorine triflouride, which is the fun stuff that'll set concrete on fire.

https://blogs.sciencemag.org/pipeline/archives/2008/02/26/sa...


*trifluoride


Semi-related, he kind of covers it in here: https://www.youtube.com/watch?v=NGFhc8R_uO4


I can't speak to Intel or CPU manufacturing, but in almost any job, the most dangerous part of a worker's day is their commute.


> I worked at Intel in a fab for 2 years. The culture is extremely safety conscious, if not safety obsessed. The motto is Safety, Quality, Output, in that order. In my experience, Intel strives to create an environment in which workers are safer at work than at home.

I do not doubt it, but it all sounds like ordinary circumstances. How about extraordinary stuff?

How about times when safety is in direct conflict with executives' pay?


That's for the fabrication system, not the office around it. Fans are designed around chemical and physical issues, not biological.


I have firsthand experience with this, albeit not at Intel. Several known positive cases have appeared at Samsung's fab in Austin. Samsung mandates that they not return and instead quarantine at home.

Before any of that, I'd already decided back in early March, when it became clear that things were more serious than people had been treating them, that I'd stick around until we either temporarily shut down or the reported cases started getting too close for comfort.

A few weeks ago I sat down in the morning to find another company-wide email sent out at 9:00PM the night before (I don't know why they hadn't also forwarded a copy of this one to my personal inbox that I could access from home, as with the ones before). In this case, symptoms had been reported 4 days earlier, now confirmed by a positive test. This time it was on my floor, not far from where I sit in our open office. I sent an email that I was leaving and would not be returning for at least two weeks and pending further info about any spread. That was a Friday. I got fired over email (although not in those terms) at the end of the day the following Monday:

Although you did not resign failure to report to your scheduled shift(s) has been taken as a voluntary resignation. We have gone ahead and ended your assignment for you. Please do not report back to the SAS site as your assingment here is done.

This was all within the last two weeks. At the time, our group had been doing a piss-poor job of observing both company and local rules about distancing. It had been less than a week since the people around me had finally been moved to different workstations. It wasn't an availability problem, because (a) half the office was already a ghost town due to select folks being ordered to work from home, and (b) the rest of the company had been working spread out for something like a month already - but not our group.

At the beginning of April, we were supposed to get a scheduled payout for earned PTO, but we received an email that due to COVID they were actually going to delay the payout. I had to jump through a lot of hoops, alternately taking on extra days I wasn't ordinarily scheduled for and taking off other days to burn the PTO, to get the effect of the payout for the hours I'd accrued.


The issue here is that you probably would not have lost your job if you had made this a request to HR or your manager instead of a blanket announcement.

There is a big difference between unilaterally announcing that you won't be coming to work and having your request refused and getting fired for leaving anyway.


This seems correct.

It looks like the employer interpreted the message as a type of ultimatum. In the face of such an ultimatum, they basically have no choice but to treat it as a resignation.

The ultimatum is a very dangerous negotiating tactic, and generally harmful to long-term relationships. As an example, I recommend against posing an ultimatum to a spouse or partner.


No, they also have the choice to implement proper safety measures. You sound as if you believe the employer is entitled to abuse every bit of power they have over an employee.

Edit: speaking of "ultimatums," what do you call a strike? Is it only an "ultimatum" (thus unacceptable) when one person does it, but not when a large group does it?


The problem is not the legitimacy of the issue; the problem is the approach. Grandparent comment said: "I sent an email that I was leaving and would not be returning for at least two weeks and pending further info about any spread."

If they had said that they have concerns about safety, and requested an urgent (possibly remote) discussion, the employer would have more options. By telling the employer that they were leaving immediately, and not coming back until they felt something had changed, they force an immediate decision of either treating it as a resignation, or accepting and condoning unilateral actions by employees.

Strikes are definitely ultimatums, though they generally have clearer objectives, and are usually preceded by a series of discussions. Strikes are not conducive to good relationships.


I think you missed the part where the employee was notified of a coronavirus case on their floor by an email they could only read at work.

Read this again, and comprehend that HR unilaterally and knowingly subjected them to the risk of contracting the virus.

Subsequent self-quarantine at home benefits the company as it reduces the chances of spread. It was the right thing to do, and there's not much to negotiate here given the initial screw-up.


You are looking at this issue from a moral/ethical lens, while I am looking at it from a game theory perspective.

I am not making a judgement as to who is at fault for the situation. I am only stating that the decisive move was the grandparent's e-mail, which left the employer only one rational option (that I know of).


You seem to be making an argument for unionization in the tech industry, then.


The rational option is to grant the employee a (possibly unpaid) leave in such circumstances.

What does Samsung stand to gain from this? They lost an employee (which will cost them in recruiting when they open up again) and gained bad reputation (and a disgruntled former employee).

This is power play that hurts everyone.


> Strikes are definitely ultimatums, though they generally have clearer objectives, and are usually preceded by a series of discussions. Strikes are not conducive to good relationships.

You do know that frequently the only reason these "discussions" are able to be had is because of the threat of a strike? What if the GP had these "discussions" and was told to go back to their desk and STFU? Then what?

Strikes are actually the result of employers abusing employees, which is what creates the bad relationships. You sound as though you believe the other way around, that strikes create bad relationships.


This comes off as pretty heavy victim-blaming bootlicking.

Do you see an alternative way the corporation could have handled this that would have at least given the impression they valued the lives of their labor force?


In this specific situation, I think the grandparent left their employer with no options after the "email that I was leaving and would not be returning for at least two weeks and pending further info about any spread". Putting this in an e-mail really forces the employer's hand. Remember that the manager does not know who is being BCCed on the conversation, and must assume the worst (that everyone the employee knows is on a BCC list). Given that state of affairs, I think any competent HR department would insist on treating the e-mail as a resignation.

Imagine someone gets up at their desk, and yells out: "I am leaving, and not coming back until you agree to my terms". That is the (assumed) situation here, and leaves management with almost no options.


If there was an active shooter in the building and the email was “yo, I’m leaving for the rest of day and not coming back until I have more info on the active shooter situation” does the corporation have any option other than to immediately terminate that employee?


In that situation, the employee is saying they will do something that the employer likely wants them to do, or is at most indifferent to. It would be more similar to someone sending an e-mail to their boss saying 'I am coming in to work on Monday, and planning to work very hard, because I fully support the management'.


Yes, they have the option of providing more info on the active shooter situation.


Doesn't sound like he gave an ultimatum... he said he wasn't going to come into the office for two weeks, he didn't say UNLESS the company did something... there was no demand, and a demand is required for an ultimatum.


The definition of an ultimatum is: 'a final demand or statement of terms, the rejection of which will result in retaliation or a breakdown in relations.'

I read "sent an email that I was leaving and would not be returning for at least two weeks and pending further info about any spread" as such an ultimatum. It is a final statement, the rejection of which will result in a breakdown of relations.


>The issue here is you could probably have...

No.

The issue here is that Samsung notified an employee of a coronavirus case on their floor by an email they could only read at work.

HR willingly and knowingly subjected the employee to the risk of contracting coronavirus, taking away their ability to make an informed decision about whether or not to come to work that day given the risk.

This is criminal behavior. Don't make excuses for them.


You are legally correct but morally quite wrong. Any employer is obliged to provide a safe workplace; not taking this seriously, and leaving the employee no option but to announce they were going home, was the first fault.

I'm not even sure that, if you pushed this hard enough as wrongful termination, it wouldn't stick.


Were you a contractor? If not, sounds like you have cause for unlawful termination. Bring it to them.


Texas is an at-will employment state. You can be fired for pretty much any reason, or more precisely, without any reason.


The 'pretty much' part is very important, though... you can fire people for no reason, but you can't fire people for protected reasons... I am not sure if firing someone for following legal social distancing orders is protected or not.


Why is that unlawful termination?


Unsafe work environment; it may put the employer into OSHA compliance failure territory. Also there may be some local COVID-19 laws in place offering additional protections/mandates for employers to follow.

That said, it may be difficult to prove. The employee should demand answers to their concerns about workplace safety, and the employer should answer them properly. This doesn't look like it was done. The employer knee-jerkingly firing the employee the next business day, after being informed the employee was not abandoning their post for no reason, also doesn't look good. They knew why he was not there, and I wonder if they made any attempt to reassure staff or reiterate their COVID-19 safety plan and what the expectations of staff are. If the employee was not satisfied with the answer, or the application of the answer, then they should file a complaint with the local employment board and OSHA, and possibly speak with a lawyer if they feel they are under threat of retaliation for reporting genuine workplace safety concerns.


Generally, if you email your boss and just say "I'm not coming in for two weeks" without discussing it with anyone beforehand, you are probably going to get fired. But if what this person says is true, and they were working in cramped conditions with people who were testing positive for Covid, then something is definitely out of whack at Samsung and needs to be dealt with.


Not only that, but Samsung also informed them of covid cases on their floor via an email they could only read at work.

"By the way, your office is infected, and we didn't want to tell you this before you arrived here. Sorry not sorry."

This is criminal, and should not be excused.


Sure, you can get fired, but they are framing it as a voluntary resignation, meaning they are going to deny unemployment claims, which is really fucked.


Many employment agreements have a clause that says that if you fail to report to work for X days without an approved leave you're considered to have voluntarily resigned. Based on the language, I assume that's what happened here.


There is a push by Senate Republicans to include liability protections for employers. Assuming it gets passed, then depending on how it is worded, there may be no case against employers in this type of circumstance, regardless of arguments over other worker protection laws.


In any first world country, it would be. In the USA...meh.


In Texas you don't need reasons to fire someone.


Just because you can fire someone for no reason doesn't mean you can fire someone for any reason.

Or to put it another way: just because you can fire someone by rolling dice doesn't mean you can fire someone for being black.


Nothing stopping you from firing someone for being black, then claiming you rolled dice, however.


IANAL but I believe juries tend to be biased towards the plaintiff (terminated employee) in wrongful termination suits simply because there are far more low level employees in the jury pool than there are managers, not to mention how expensive it can be to litigate. Hence all the CYA measures like performance improvement plans and so on.


You can claim that you rolled dice, but that won't be sufficient: discrimination law explicitly puts the burden of proof on the employer to demonstrate that the firing was for a non-discriminatory reason, so if it's just the employer asserting that it was because of dice and the employee asserting that it was because of race, the employer automatically loses unless they have convincing evidence to state their case.

That's part of why you'd often see lots of HR bureaucracy regarding firing process, documenting infractions, performance improvement plans, etc also in states where technically you can be fired without a reason.


Attorney here who has handled a number of similar wrongful termination claims. You can try this, but you'd better have a lot of good, clear documentation backing up your non-discriminatory reason for the firing, which, if you're just making it up as a pretense, you probably don't. And if it's found that you did fire for a discriminatory reason and attempted to lie about that fact, you're opening yourself up to a world of hurt.


If they make the job impossible or dangerous, it might at least count as "constructive termination", so they'd be eligible for unemployment.


True but the list of reasons that allow you to fire someone without paying unemployment is limited.

They're trying to avoid paying out unemployment by saying he failed to show up for a shift but even sans-COVID firing after one absence is abnormal.


Why would firing make someone ineligible for unemployment? That seems silly


Quitting your job makes you ineligible for unemployment -- that's why the company called it their "resignation"


Why does it matter to the company if you claim unemployment benefits?


Unemployment taxes for a company scale based on historical claims from that company. More Unemployment Benefits paid out = more future UI taxes paid by that employer. It's meant to be a self-funding system.


To expand on what phonon said -- the UI tax rate is company-specific and depends, in part, on how much UI is paid out to the company's employees.
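A toy sketch of the mechanism (the formula and all rates here are hypothetical; real state formulas differ):

    # Experience-rated UI: employers whose former workers drew more
    # benefits pay a higher tax rate on future payroll.
    def ui_tax_rate(benefits_charged, taxable_payroll,
                    base=0.01, floor=0.005, cap=0.06):
        rate = base + benefits_charged / max(taxable_payroll, 1)
        return min(max(rate, floor), cap)

    print(ui_tax_rate(20_000, 1_000_000))  # 0.03 -> 3% of payroll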


That sounds like a nightmare of bureaucracy to calculate!


I mean, that’s how most insurance premiums work.


Are you going to take legal action?


All in all, COVID19 is just about the one thing that's not a safety concern in a chip-fabbing cleanroom. Hydrofluoric acid is somewhat harder to deal with though, so I'm not at all sure that 'deprioritizing' workplace safety is the right approach.


This is an excellent observation. A class 1000+ clean room is probably the last place you would catch something airborne from someone else.


It might not be a concern in some cleanrooms, but it definitely is when entering and exiting a cleanroom. And no matter how good the equipment and protocols are in manufacturing areas you could also still get it by grabbing the doorhandle of the main building entrance or a bathroom.


We're not talking about the folks in bunny suits here, obviously.


Idle concern -- what about the inside of those bunny suits? Are they sanitized between uses? As I understand it, folks wear 'em to keep the cleanroom clean, not so much to keep them safe from each other's germs.


Since reading about Paul O'Neill's turnaround of ALCOA, I've been convinced that worker safety (or well-being, for less life-and-death office environments) should be the one OKR/KPI/whatever-you-want-to-call-it every company is focused on. Especially for manufacturing companies. As Charles Duhigg notes in writing about O'Neill's leadership, it's a focus that can improve communication and performance all around:

http://txti.es/duhigg-keystone-habits

Instead, it feels like most corporations are managed by the people who run Royco in Succession:

https://youtu.be/UcTmBfA7Qik?t=48


I heard about that story too, albeit through "The Power of Habit" by Duhigg, which is probably what you're referencing. My reservation about just saying "safety is the top priority" is that it can quickly turn into management cargo-culting if someone doesn't actually understand what changes were made at ALCOA and why they worked.

It's the same as a manager who hears that being interested in the well-being of his employees can get them to work harder, so he starts pretending to be interested in his employees, not because he cares but because he wants performance improvements. It ends up being insincere, creepy and off-putting instead.

If the brass of a company try to emphasize safety just to get the benefits ALCOA got, they won't pull it off: not because safety isn't important, but because they didn't really care about safety in the first place.


How has demand for chips looked over the past few months? Has it been enough to potentially necessitate from a business perspective keeping output high? I can imagine arguments for higher or lower demand that seem perfectly reasonable (low because consumer segment slowdown, high because surge in cloud service use increasing failures/expansion demand or business laptops for WFH) and I'm wondering what the dominant force is.


Personally, demand for chips is not the measure I'd want them to use.

Stabilizing employee mindset is the measure they should care about.

Hard to ship chips if they feel the company doesn’t care about their demand to stay alive and they all quit. AMD wouldn’t mind a bunch of chip experts being suddenly available.

Ogling market economics first and foremost is really not the priority for agency these days.

If demand falls such that we’re just generating chips for science and industry, so be it.

No one owes tech nerds tech to fetishize.


Oh, I completely agree that it's a bad measure. I'm not looking for a justification; I'm curious for an explanation of why management might be incentivized to act this way. If there's a business case for it, it's fairly easy to rationalize as a manager why you have a moral responsibility to put employees at risk (the economy is running on WFH powered by our CPUs! They need us!). Especially if they're in for a nice bonus if sales spike on higher demand. I'm curious whether there's even a business case to be made, which might explain (but not necessarily justify) reckless behavior.


They need to sell units, to make money, to continue to employ and pay employees.

They may have contracts that require them to ship units or lose large contracts (and thus have to engage in mass layoffs).


I doubt many people are comfortable quitting their job in the current market. I also doubt AMD would hire any of these people, even in good economic times, since they divested their fabs 11 years ago.


Even safety conscious employers have a hard time reacting to new and different safety related areas.

For instance, see the USCSB's video on the BP Texas City refinery explosion. The refinery operator had a great record for safety, but it was measuring individual worker safety, ie PPE and recordable injuries. It was not measuring Process safety.

https://www.csb.gov/bp-america-refinery-explosion/

---

Edit: I should probably make clear that I don't think BP actually had a great safety record. I think they measured and cared about a certain class of safety, but not another class of safety, and that's the point I was trying to make.


The whole notion of priorities in safety vs. revenue never works for me. (And you can s/safety/security/ or s/safety/privacy/.)

In a very limited scope, it could work. If a team has a sprint or something, they can work on the safety tasks first.

But that's not really how a company operates. You have a whole mess of tradeoffs and often the costs and benefits can't be clearly quantified.

What they do is work out an operational model, some people work exclusively on examining safety, and they put out guidelines to management and employees who then have to practice safety themselves. Generally, you only know if it works after the fact by examining the results, maybe even tally up the costs of lawsuits or bad PR.

But it's meaningless to say "safety is job #1" because you're doing all the jobs 2 and on or you're going out of business. And that means all of them; no one says "taking out the trash is job #1" or "sending out W2s is job #1" but a company grinds to a halt pretty quickly if they aren't done.


Not the location in the article, but my roommate is a technician at one of the fab plants in Hillsboro, and he is still required to go to in-person classes alongside his main role despite the non-essential lockdown order in Oregon.


The fab is probably the safest public place on earth right now.


Let's be specific: managers prioritized chip output over safety. The call came from the top -- the board of directors.


This seems redundant. Who else would it be?


You might be surprised to see how many times manufacturing/warehouse employees ignore safety protocols for reasons of aesthetic/fashion, comfort, or convenience. Many managers have to discipline employees for disregarding precautions.


As for the original topic, it's patently wrong. It sounds like an individual case being blown out of proportion or a disgruntled employee trying to leave a mark. I can speak from experience that their safety standards are above and beyond regulation, and sometimes reason.



Intel should have placed that fab here in Sweden. What this accusation is saying is the norm here.


At first, I thought the headline meant chip designers were pressured to create chips that had huge performance at the risk of catching fire.



So this is how it's going to be right? Endless lawsuits based on what really??


If you are a healthy adult, COVID-19 is barely more lethal than the Flu, so what is the concern here?


Many healthy adults have parents and non-healthy friends.

That's pretty much the entire reason for pandemic-mitigating measures.


Having both your legs amputated is less lethal than the flu; is that your only measure of whether you want something to happen to you or not?

Even healthy people with no need for hospitalisation can be taken down for a couple of months of fatigue and symptoms from it.


Are workers rights a thing in the USA? Seems like it's not.


The serious answer is: yes, but it requires courts, which are slow.

I'm curious how this works in other countries; if a company lets someone go inappropriately, is there faster recourse than the courts? Can you go to the police and have them escort you back in, or what?


>>if a company lets someone go inappropriately

The problem with even arriving there is that there are three ways of letting go of someone:

1) just letting them go for any reason, but it requires giving them minimum notice (usually at least a month, but in many places it's 3 months or even more). Companies get around this by just paying the employee their salary for the duration of the notice period and telling them not to come in any more.

2) both sides can agree to terminate the contract immediately - this usually happens if the employee wants to go somewhere else, but they also have say a 3 month notice period - in that case sometimes you can agree with the company that you will leave immediately but also not receive your 3 month pay.

3) You can let someone go immediately without pay, but only for gross misconduct - and ooh boy, you had better have it very, very well documented that it was gross misconduct (like, a recording of someone stealing is usually good enough).

So the entire "letting someone go inappropriately" thing is lessened because it's really hard to argue for in cases 1 and 2, and employers really try to avoid case no. 3 specifically because of the risk of being sued. Most companies would rather just pay you for the remaining time on your contract to let you go immediately than risk being sued.


IANAL

In other countries, you can't "just let someone go". In Belgium there are CAOs [0], which loosely translates to Collective Employment Agreements. Basically, all employment agreements for a whole industry are more or less standardised. On one side there are the employers (of different companies); on the other side are the workers, represented by union leaders (perhaps also political representatives).

These agreements include a lot of rules on how employment can be terminated. One example could be that an employer is only allowed to fire an employee for underperformance after giving the employee 3 written reports over a period of 3 months and working on a plan to improve their performance. Only after that fails to produce results are you allowed to fire the employee. And you need to keep the documentation, as it could be reviewed in court if the employee decides to fight it. (That doesn't happen often.)

For this reason, often the employer will try to get the employee to quit. Or at least come to a mutual agreement to terminate the employment.

In Belgium, it's very hard to fire employees, and I think this is the same for several other European countries.

As a result, I haven't heard of many EU companies firing employees during these rough times. At least, not as often as US companies with "at-will employment".

The flip-side is that large companies will often have a glut of employees that aren't productive, but can't be fired.

[0] https://en.wikipedia.org/wiki/Collective_agreement


This is what the NZ government says: https://www.govt.nz/browse/work/workers-rights/your-options-...

If you need to get legal, it says “It can take a few weeks or a few months for an application to be processed, heard, and determined by a member. The length of the process will depend on things such as urgency of the application, whether parties have tried to resolve their problem at mediation, the availability of parties, representatives and the complexity of the case.”


That does not sound dissimilar to the situation in the US. (But it's hard to know without having been involved in a labor dispute in either country.)


Genuine question: are there other countries in which you can not show up for a scheduled shift, without making prior arrangements with management, and not get reprimanded? On top of that, telling your boss that you won't be in for two weeks without their approval...


Yes. E.g. in France you have something called "droit de retrait" ("right of withdrawal") for when you consider that you cannot do your job without endangering yourself (e.g. inadequate protection or measures, especially in a pandemic); you only have to inform the health committee or your boss by any means before doing so.

You cannot be directly denied your salary or get demoted/fired for this. The company can of course appeal by opening a court case saying you abused this right, in which case any sanction may be applied if the ruling is in their favor.

The law also mandates that the company is responsible for the health of its employees on company time and premises, so you can also open a court case if you do not feel adequately protected (which is what happened to Amazon).


> You cannot be directly denied your salary or get demoted/fired for this.

But what if they do it anyway? That is more the context here - we have wrongful termination as well, though it is likely much less strict.


Well, then you go to court and win, because that’s not how they are supposed to do it. If you are a temp worker they can obviously not renew your contract as well.


Yes, this is the law in the US as well. The catch in both the US and France is that you have to say you believe it is unsafe and why.

https://www.osha.gov/right-to-refuse.html


We detached this subthread from https://news.ycombinator.com/item?id=23115743.


No, USA is the worst country ever, please don't immigrate here :).


[flagged]


US has net positive immigration from most of the 'proper modern western countries'.


Keyword: most. And even those come for money opportunities, not for anything related to preferring the lifestyle and/or culture. Be assured people don't leave Western Europe for the US because of "freedom" or better "legal rights" there (which is what we were discussing)...

Not to mention that in net immigration per capita, the US is below:

Luxembourg, Switzerland, Australia, Austria, Italy, Sweden, Germany, Belgium, Great Britain, Denmark, and others...


[flagged]


Assuming you are correct, why would the US be in a situation need to trade worker protections for civil liberties? If those politicians offering worker protections were threatening civil liberties, I wouldn't take that deal either.


[flagged]


I'm just about as socialist as they come, but I'm dropping you a downvote. Yes, this move is clearly precipitated by capitalism; yes, capitalism sucks.

But I think we do everyone better by trying to put some effort into posting about why it sucks, and how it specifically seems to be impacting this situation. Low effort posts aren't going to change anything, and they clutter up HN.

E.g. Intel's primary motivation is to increase their capital, they have relatively fixed material costs and somewhat flexible labor costs, and so they choose to expend their labor (in this case by putting them in a risky environment) to continue to increase their capital.


Or, you could zoom out and look at a pattern: Intel's failure to care for the safety of their employees is analogous to their failure to care for the safety of their customers, in the sense of Meltdown and Spectre. In both cases, upper management prioritizes the bottom line and reinforcing the status quo, despite the fact that the world fundamentally changed and thus their practices should change too.


This post made up my mind: I am buying an AMD CPU next time.


AMD's superior CPUs are why you should be buying them


Oh wow, who would have thought weak labor protection laws and no unions would end up reproducing exactly the same conditions that lead to labor protection laws and unions?

I'm shocked.

https://en.wikipedia.org/wiki/The_Jungle


"your boos mean nothing to me, I've seen what makes you cheer"



