Hacker News | new | past | comments | ask | show | jobs | submit | rwmj's comments

The technical term seems to be "Zombie Simpsons".

"Silicon Valley" doesn't get to make the decision unless they are willing to send some of those hundreds of billions to TSMC up front. (TSMC isn't going to want future promises of business either since those are worth very little.)

I don't disagree. I wrote the top comment here basically saying the same thing: https://news.ycombinator.com/item?id=46764223

If big tech prepays for the entire fab, I think TSMC would do it.


And if the Big Tech companies think it is so important to get all those compute and/or memory chips sooner and in larger supply, it should be no problem at all for those Big Tech companies to pay for the costs and then have priority access to all (or their portion of) the output for the future years.

OTOH, if they are insisting on not investing their funds or stock, and it is simply pressure on TSMC to take on the risk, TSMC should be very wary of taking on risk for those players (unless TSMC sees another advantage of producing into a likely glut or supply canyon shortly after the new fabs come online).


If what Elon recently said is true (if - but he might not be... inaccurate... on this particular thing), they have already bought the forward production capacity of those new fabs, and it still isn't enough.

I believe that. TSMC would have to start another fab or two.

PS. I'm pretty sure Intel is also at max capacity. They cancelled a bunch of fabs a few years ago when they were in a downward spiral.


There's no world in which this case is being covered up. It's literally on the BBC News website and you have linked to it.

The poster linked to the story they 'could think of', not one that may be upcoming. My guess is on a nonce-case, and the royals are involved.

Is this company a candidate for being "Jia Tan"?

Jia Tan wouldn't be interested in secret spyware firms. They hide their code in plain sight.

No need, they have plenty of 0-day exploits that don't leave discoverable traces.

I remember that after I read the 1st edition, I bought MINIX ($150 !!), and was then very annoyed to find that the compiler source was not included. Luckily it was '89 or '90 and the GCC sources were available.

There's a first time for everything.

It's a shame Google doesn't let us use a log scale on that graph.

Slashdot didn't allow you to vote and comment on the same topic. (If you voted, then commented, your votes on the post were rescinded.)

That's the most authors I've seen on any paper. I counted 46 across 36 separate institutions.


46 authors isn’t that many. Big projects necessitate many authors (e.g. https://arxiv.org/abs/1807.06209)

In high energy physics it can easily be thousands of authors, like in the ATLAS collaboration.

Turns out launching a gigantic camera into orbit and developing a photograph of the beginning of the universe takes teamwork.

The OP should try with -march=native so the compiler can use vector instructions.

Slightly off-topic but I like this way to test if memory is all zeroes: https://rusty.ozlabs.org/2015/10/20/ccanmems-memeqzero-itera... (see "epiphany #2" at the bottom of the page). I really wish there were a standard libc function for it.
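For anyone who doesn't want to click through: the "epiphany #2" trick is to check a small prefix of the buffer by hand, then let the heavily optimized memcmp compare that known-zero prefix against the rest of the buffer. A rough sketch (illustrative, not the exact code from the post):

```c
#include <stdbool.h>
#include <string.h>

/* Sketch of the trick: verify the first 16 bytes manually, then use
 * memcmp to slide the known-zero prefix along the remainder. */
static bool memeqzero(const void *data, size_t length)
{
    const unsigned char *p = data;
    size_t len;

    /* Check the first 16 bytes by hand. */
    for (len = 0; len < 16; len++) {
        if (!length)
            return true;   /* buffer shorter than 16 bytes: all checked */
        if (*p)
            return false;
        p++;
        length--;
    }

    /* First 16 bytes are zero; compare them against the rest. */
    return memcmp(data, p, length) == 0;
}
```

The nice part is that memcmp(data, data + 16, length) implicitly checks every byte against an earlier byte already known to be zero, so the whole buffer gets verified with one library call.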


> The OP should try with -march=native so the compiler can use vector instructions.

I just tried "-O3 -march=znver5" as well as "-O3 -march=native" and it didn't seem to make any difference.


