Hacker News | johncoogan's comments

Okay, maybe it's just Twitter, Stratechery, and Matthew Ball, but in my little filter bubble... it felt like everyone was talking about the Metaverse non-stop.


Author here, what do you think would give this kind of content more substance? Still very new to this and genuinely looking for feedback!


HN probably isn't the ideal audience for your article.

Your content would be novel to someone who wasn't following Boom, but... that person isn't here. Because [0]. Parent could have put that more gently.

That said, if you did want to target HN readership (which is non-representative of general readership!), you could drill down into each of your points. How is Boom doing these things?

For example, their struggle to find a moderate-bypass engine (even an uprated and recertified existing design) without having to fund novel engine development is critical to the entire endeavor, and is still an open question [1].

In short, my recommendation would be to (1) know your audience, (2) research prior art (previous HN stories on Boom), and (3) bring something novel to your article, whether through new reporting or through synthesis.

[0] https://hn.algolia.com/?q=boom

[1] https://en.m.wikipedia.org/wiki/Boom_Overture#Engines


Hey thanks for posting this!


By "never be technically possible" do you mean:

Not possible in the usual 10 year timeframe we consider for Silicon Valley type companies.

Or

Never in a million years, completely violates the laws of physics.


The market is pricing in a small chance that the acquisition won't go through (maybe due to FTC clearance, but it could be other due-diligence-related items).
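That discount can be read as an implied probability. A toy model with hypothetical numbers (none of these figures refer to any specific deal): if the stock would fall back to some standalone value on a deal break, then the current price is a probability-weighted blend of the two outcomes.

```python
def implied_close_probability(market_price, offer_price, standalone_price):
    """Probability of the deal closing implied by the market price, assuming
    the stock reverts to `standalone_price` if the deal breaks:
        market_price = p * offer_price + (1 - p) * standalone_price
    Solving for p gives the expression below."""
    return (market_price - standalone_price) / (offer_price - standalone_price)

# Hypothetical: offer at $30, stock trades at $29, worth $20 standalone.
p = implied_close_probability(29.0, 30.0, 20.0)
print(f"implied probability of closing: {p:.0%}")  # 90%
```

A 3% discount to the offer price can therefore imply a much larger break probability once the downside is accounted for.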


TheMelt wasn't in Y Combinator, but it is an interesting story.


I'm sorry for linking the two. I have had this wrong idea stuck in my head since 2011. I should have validated that fact.


What's the story?



It depends on the price they paid for the stock, or the exercise price for stock options. If you were an early employee and you received options when the company was only worth $10M or something low, you could still make money.

The risk is that one of the later investors had a ratchet or something similar that would allow them to claim more of the proceeds in a sale. You can't just take the $220M cash, subtract the $105M in funding, and pass the rest to the founders and employees. The preferred shares were probably "participating," meaning the preferred holders also get a share of what would otherwise go to common.

Here's an example of a ratchet: https://www.forbes.com/sites/petercohan/2015/11/07/unicorn-s...
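To make the waterfall concrete, here's a toy model of a 1x participating preferred with no participation cap. The $220M sale price and $105M preference come from the numbers above; the 60% preferred ownership is a made-up assumption for illustration.

```python
def participating_preferred_waterfall(sale_price, preference, preferred_pct):
    """Toy 1x participating-preferred waterfall: one preferred class, no cap.

    preferred_pct: the preferred holders' ownership on an as-converted basis.
    Returns (to_preferred, to_common) in the same units as sale_price.
    """
    remainder = max(sale_price - preference, 0.0)  # preference comes off the top
    to_preferred = min(sale_price, preference) + remainder * preferred_pct
    to_common = remainder * (1.0 - preferred_pct)
    return to_preferred, to_common

# Hypothetical: $220M sale, $105M of 1x participating preferred owning 60%.
pref, common = participating_preferred_waterfall(220.0, 105.0, 0.60)
# Preferred takes 105 + 0.6 * 115 = 174; common splits only 46,
# far less than the naive 220 - 105 = 115.
```

Real cap tables have multiple preferred classes, caps, and ratchets stacked on top, which only push more of the proceeds away from common.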


I figured there were one or more investors with participating preferred, especially since Chef held most of its valuable IP in an open-source project.


Came here to say this.


I'll give it a go! Seems pretty easy to replace the blacklist test with a WHOIS lookup.
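For what it's worth, a raw WHOIS query is just a TCP round-trip on port 43 (RFC 3912). A minimal sketch of what the swap might look like; the server shown is the .com registry server, and the field names are illustrative since response formats vary by registry:

```python
import socket

def whois_query(domain, server="whois.verisign-grs.com", timeout=10):
    """Raw WHOIS lookup over TCP port 43 (RFC 3912). `server` handles .com;
    other TLDs use other registry servers."""
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall(domain.encode() + b"\r\n")
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode(errors="replace")

def whois_field(text, field):
    """Pull the first value for a 'Field: value' line out of a WHOIS response."""
    prefix = field.lower() + ":"
    for line in text.splitlines():
        if line.strip().lower().startswith(prefix):
            return line.split(":", 1)[1].strip()
    return None

# e.g. whois_field(whois_query("example.com"), "Registrar")
```

As the reply notes, though, widespread privacy redaction means the interesting fields often come back empty.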


Don't bother; with privacy redaction, WHOIS is pretty much useless 9 times out of 10 nowadays.


Does anyone have any theories as to how Nanite actually works? I've never heard of virtualized micropolygon geometry before and it sounds a bit buzzwordy. Do we think they are just loading the full model into GPU memory, or are they baking down various LODs and normal maps at compile time through some automatic process? Either way, it's a huge workflow improvement. It's just unclear what's actually happening...


I wouldn't be surprised if it has something to do with "geometry images." Like REYES, the goal is to target pixel-sized polygons, but it handles tessellation differently. Brian Karis, the programmer speaking in the video, linked this old blog post of his on his Twitter when talking about inspirations for the technology: http://graphicrants.blogspot.com/2009/01/virtual-geometry-im...


“Micropolygon” I assume means Reyes rendering, i.e. that the polygons are created on demand from underlying geometry. Instead of baking out various LODs, you tessellate when rendering, specifically for the current view, so each pixel gets ~1 vertex. Walk closer to the statue and it gets more triangles from tessellation.
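One simple way to hit "~1 vertex per pixel" (a sketch of the general idea; nothing confirms Nanite works this way) is to derive a subdivision level from the patch's projected size on screen: each subdivision level doubles the segment count along an edge, so the level needed is roughly log2 of the edge's length in pixels.

```python
import math

def subdivision_level(projected_edge_pixels, max_level=10):
    """Uniform subdivision level so each sub-edge spans ~1 pixel on screen.
    Each level doubles the number of segments along an edge."""
    if projected_edge_pixels <= 1.0:
        return 0  # already sub-pixel; no subdivision needed
    return min(max_level, math.ceil(math.log2(projected_edge_pixels)))

# An edge projecting to 200 px needs level 8 (2**8 = 256 segments).
# Walking closer grows the projection, so the level and triangle count rise.
```

The `max_level` clamp is an assumed guard against unbounded tessellation when the camera gets extremely close.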


But how is it running so quickly? I've seen adaptive tessellation implementations before, but they couldn't run in real time. If they're really using billions of polys, they can't store them all in VRAM. Is the PS5 SSD fast enough to recalculate polys for every model in the scene every frame (or even every few frames)?


No details yet, but hopefully those will follow soon.

I don’t think the SSD but rather the GPU would be doing the tessellation...

Here is a good presentation on micropolygon rendering (note: nothing says Nanite works like this at all): https://www.cs.cmu.edu/afs/cs/academic/class/15418-s12/www/l...


> I don’t think the SSD but rather the GPU would be doing the tessellation...

Ha, I phrased that badly. I meant that, if the high-poly models can't all be stored in VRAM at once, is the SSD fast enough to load them back onto the GPU every frame?


If the tessellation is performed on the GPU (surfaces uploaded to the GPU in some non-triangle representation, e.g. geometry images/SDFs/patches), then I don't think the tessellated triangles ever need to be read back from GPU memory to RAM (never mind disk). And it isn't a lot of data: at ~2 triangles/1 vertex per pixel, a frame of ~4M screen pixels is roughly 8M triangles and 4M vertices, transient data that can be overwritten each frame. At ~16 bytes per vertex plus index data, that's on the order of 100MB living entirely in VRAM, small next to a modern GPU's memory and bandwidth.
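Rough arithmetic for that per-frame budget (the resolution, bytes-per-vertex, and index size are all illustrative assumptions):

```python
def per_frame_geometry(width, height, bytes_per_vertex=16, index_bytes=4):
    """Rough size of one frame's tessellated geometry at ~1 vertex and
    ~2 triangles per screen pixel. Constants are illustrative assumptions:
    16B per vertex (position + packed normal/UV) and 32-bit indices."""
    pixels = width * height
    vertices = pixels            # ~1 vertex per pixel
    triangles = 2 * pixels       # ~2 triangles per pixel
    total_bytes = vertices * bytes_per_vertex + triangles * 3 * index_bytes
    return vertices, triangles, total_bytes

# 2560x1440: ~3.7M vertices, ~7.4M triangles, ~150MB,
# regenerated in VRAM every frame and never read back.
v, t, size = per_frame_geometry(2560, 1440)
```

The takeaway is that the screen-space triangle budget is bounded by resolution, not by source asset complexity, so the billion-polygon source meshes never need to exist in triangle form all at once.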


Ah, that makes sense. I was thinking the problem lay in loading the original meshes, but I didn't consider they could be using a smaller non-triangle format. I really hope they share more about how this works.


Agreed, Peopleware is great. Very good for engineering leadership specifically.

