Okay, maybe it's just Twitter, Stratechery, and Matthew Ball, but in my little filter bubble... it felt like everyone was talking about the Metaverse non-stop.
HN probably isn't the ideal audience for your article.
Your content would be novel to someone who wasn't following Boom, but... that person isn't here. Because [0]. Parent could have put that more gently.
That said, if you did want to target HN readership (which is non-representative of general readership!), you could drill down into each of your points. How is Boom doing these things?
For example, their struggle to find a moderate-bypass engine (even an uprated and recertified one) without having to fund novel engine development is critical to the entire endeavor, and is still an open question [1].
In short, my recommendation would be to (1) know your audience, (2) research prior art (previous HN Boom stories), and (3) bring something novel to your article, either through new reporting or synthesis.
It depends on the price they paid for stock or the exercise price for stock options. If you were an early employee and you received options when the company was only worth $10M or something low, you could make money.
The risk is that one of the later investors had a ratchet or something similar that would allow them to claim more of the proceeds in a sale. You can't just take the $220M in cash, subtract the $105M in funding, and pass the rest to the founders and employees. The preferred shares were probably "participating," meaning that after their preference is paid back they also take a pro-rata share of what's left for the common.
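To make that waterfall concrete, here's a toy Python sketch. Only the $220M sale price and ~$105M raised come from the thread; the 1x multiple and the ownership split are made-up assumptions:

    # Toy liquidation waterfall: 1x participating preferred vs. common.
    sale_price = 220e6          # acquisition price
    preference = 105e6          # assumed 1x preference on capital raised
    preferred_ownership = 0.55  # assumed preferred share of the cap table

    # Step 1: preferred takes its preference off the top.
    remaining = sale_price - preference

    # Step 2: "participating" means preferred also takes its pro-rata
    # share of what's left, alongside the common.
    preferred_participation = remaining * preferred_ownership
    common_pool = remaining - preferred_participation

    print(f"Preferred total: ${(preference + preferred_participation) / 1e6:.0f}M")
    print(f"Common pool:     ${common_pool / 1e6:.0f}M")

With those assumptions the common (founders and employees) split roughly $52M, not $115M. With non-participating preferred, investors would take either the $105M or their pro-rata share of the full $220M, not both, which is why that one word in the term sheet matters so much here.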
Does anyone have any theories as to how Nanite actually works? I've never heard of virtualized micropolygon geometry before and it sounds a bit buzzwordy. Do we think they are just loading the full model into GPU memory, or are they baking down various LODs and normal maps at compile time through some automatic process? Either way, it's a huge workflow improvement. It's just unclear what's actually happening...
I wouldn't be surprised if it has something to do with "Geometry Images." Like REYES, the goal is to target pixel-sized polygons, but it handles tessellation differently. Brian Karis, the programmer speaking in the video, linked this old blog post of his on Twitter when talking about inspirations for the technology: http://graphicrants.blogspot.com/2009/01/virtual-geometry-im...
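If it is geometry images, the appeal is that the mesh gets re-parameterized onto a 2D grid of positions, so "tessellating" is just resampling that grid at whatever resolution you want. A toy NumPy sketch of the idea (the analytic bumpy patch stands in for a real geometry image; none of this is from Karis's post or from UE5):

    import numpy as np

    def sample_geometry_image(geo_img, n):
        """Resample a geometry image (H x W x 3 positions) at n x n vertices
        and emit a regular triangle grid over the samples."""
        h, w, _ = geo_img.shape
        ys = np.linspace(0, h - 1, n).round().astype(int)  # nearest sampling;
        xs = np.linspace(0, w - 1, n).round().astype(int)  # real impls filter
        verts = geo_img[np.ix_(ys, xs)].reshape(-1, 3)

        # Two triangles per grid cell -> index buffer.
        idx = np.arange(n * n).reshape(n, n)
        a, b, c, d = idx[:-1, :-1], idx[:-1, 1:], idx[1:, :-1], idx[1:, 1:]
        tris = np.concatenate([np.stack([a, b, c], -1), np.stack([b, d, c], -1)])
        return verts, tris.reshape(-1, 3)

    # Fake 256x256 geometry image: positions of a bumpy patch.
    u, v = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
    geo = np.stack([u, v, 0.1 * np.sin(8 * u) * np.cos(8 * v)], axis=-1)

    v_far, t_far = sample_geometry_image(geo, 64)     # coarse, far away
    v_near, t_near = sample_geometry_image(geo, 256)  # dense, up close
    print(len(t_far), len(t_near))  # triangle count scales with sampling rate

The nice property is that LOD selection collapses to "pick n," so there's no offline LOD-baking step at all.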
“Micropolygon” I assume means Reyes rendering, i.e. that the polygons are created on demand from underlying geometry. Instead of baking various LODs, you tessellate at render time, specifically for the current view, so each pixel gets ~1 vertex. Walk closer to the statue and it gets more triangles from the tessellation.
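A rough sketch of the view-dependent part: pick the tessellation density from the object's projected size on screen, so the triangle count tracks pixel coverage instead of a fixed LOD. The math below is just a pinhole-camera approximation I'm assuming, not anything confirmed about Nanite:

    import math

    def grid_size_for_view(object_radius, distance, fov_y_rad, screen_h_px):
        """Vertices per side so adjacent vertices land ~1 pixel apart
        after projection (clamped to something sane)."""
        focal_px = screen_h_px / (2 * math.tan(fov_y_rad / 2))
        projected_px = (2 * object_radius / distance) * focal_px
        return max(2, min(2048, int(projected_px)))

    # Walking toward a 1m-radius statue: the grid (and triangle count) grows.
    for d in (50.0, 10.0, 2.0):
        n = grid_size_for_view(1.0, d, math.radians(60), 1080)
        print(f"distance {d:>4}m -> {n}x{n} grid, ~{2 * (n - 1) ** 2} triangles")

Feed that n into whatever generates the surface (patches, a geometry image, etc.) and you get the "walk closer, get more triangles" behavior without authoring discrete LODs.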
But how is it running so quickly? I've seen adaptive rendering implementations before, but they couldn't run in real time. If they are really using billions of polys, they can't store them all in VRAM. Is the PS5 SSD fast enough to recalculate polys for every model in the scene every frame (or even every few frames)?
> I don’t think the SSD but rather the GPU would be doing the tesselation...
Ha, I phrased that badly. I meant: if the high-poly models can't all be stored in VRAM at once, is the SSD fast enough to load them back onto the GPU every frame?
If the tessellation is performed on the GPU (surfaces uploaded to the GPU in some non-triangle representation, e.g. geometry images/SDFs/patches), then I don't think the tessellated triangles ever need to be read back out of GPU memory to RAM (never mind disk). It isn't that much data either: at ~1 vertex per pixel you're regenerating a few million vertices and triangles per frame, into a buffer that just gets overwritten every frame. That's tens of MB of transient geometry, tiny next to total VRAM.
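Back-of-the-envelope, with assumed per-vertex and per-index sizes just to get the order of magnitude:

    # Per-frame footprint if the tessellated output targets ~1 vertex per pixel.
    width, height = 1920, 1080
    verts = width * height     # ~2.07M vertices
    tris = 2 * verts           # roughly two triangles per vertex

    bytes_per_vert = 16        # packed position + normal (assumed)
    bytes_per_index = 4        # 32-bit indices, three per triangle

    total_mb = (verts * bytes_per_vert + tris * 3 * bytes_per_index) / 2**20
    print(f"~{total_mb:.0f} MB of transient geometry per frame")

So it's a small, fixed budget that lives and dies in VRAM each frame; nothing has to round-trip through RAM or the SSD.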
Ah, that makes sense. I was thinking the problem lay in loading the original meshes, but I didn't consider they could be using a smaller non-triangle format. I really hope they share more about how this works.