A request for everyone writing docs with content like this:
>NOTE: This only works on desktop Chrome 129+ currently. Firefox and Safari are hopefully [supported soon](link), but currently even firefox nightly and safari technical preview do not work.
This is great, especially with that link! Thank you! But please say when "currently" is, e.g. add an "(Oct 2024)". Stuff like this tends to be time-sensitive but isn't consistently updated, and it's often years out of date with no easy way for visitors to tell.
And when it's recent, it also tells people that the project is active.
Also, WebGPU isn't enabled by default on Chrome for Linux currently. You'll need:
> The chrome://flags/#enable-unsafe-webgpu flag must be enabled (not enable-webgpu-developer-features). Linux experimental support also requires launching the browser with --enable-features=Vulkan.
I’m realizing this might not cross some developers' minds, because the current time in that context is attached to the commit when it was added or changed.
I do this quite often. I probably shouldn’t, though. It’s only useful if you’re looking at commit logs or have an inline ‘last changed by [author] on [date]’ helper in your IDE.
Then again, even that could be made wrong by future edits.
Yeah, there are frequently ways to figure out what date a relative measure is referring to; it's one of the best things about version control being a true norm in this field.
But it's a few extra steps (depending on the UI), and many will not take those steps. They'll just trust it (far beyond when it's relevant), or think "that's probably old" and doubt it (immediately, because old docs are so common).
It's relatively minor, but it's extremely easy to prevent, and just a better habit when communicating with the future.
And that’s assuming the repository never gets reorganized in a way that fails to perfectly preserve history, which over long timespans is bound to happen at some point.
Wow - the in-browser demo (https://arthurbrussee.github.io/brush-demo/) runs way more performantly and renders much better-looking results than any other I'd tried in the past.
It loaded my 50MB .ply file almost instantly. Orbiting around the scene is extremely smooth and everything is free of flickering or artifacts.
I never tried out training a Gaussian splat from images/video myself before, but this tool makes me want to give it a go.
Love to hear it!! Most viewers take some shortcuts, like only sorting every so often, so it's good to hear the difference is noticeable :)
Training a splat requires a lot less setup with this, but it does still require running COLMAP (https://github.com/colmap/colmap) first, which is a big barrier... one thing at a time!
How expensive is the COLMAP step to run? I was also really impressed with the speed in the demo (but I was thinking the training shown was the only step).
Could you ELI5 what the training is versus what the COLMAP part is?
This is definitely still a big blocker to adoption. The goal is to get to a more all-in-one system. The splatting optimization can also help align cameras, if they don't start out entirely random, so any system to quickly provide a good "initial guess" will help here. At least for mobile devices, initialization from ARCore / ARKit poses should be enough.
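To sketch roughly how the pieces fit together (all types and functions below are hypothetical placeholders, not Brush's or COLMAP's actual API): COLMAP does structure-from-motion, i.e. it works out where each photo was taken from plus a sparse point cloud, and the splat training then iteratively fits a cloud of Gaussians to those posed photos.

```rust
// Hypothetical placeholder types -- for illustration only.
struct Photo;      // one input image
struct CameraPose; // estimated position + orientation of the camera for one photo
struct Gaussian;   // one "fuzzy point": mean, covariance, opacity, colour

// Stage 1: structure-from-motion (COLMAP today, or ARKit/ARCore poses on mobile).
// Figures out where each photo was taken from.
fn estimate_poses(photos: &[Photo]) -> Vec<CameraPose> {
    unimplemented!("feature matching + bundle adjustment")
}

// Stage 2: splat training. Render the current Gaussians from each known pose,
// compare against the real photo, and nudge the Gaussians to shrink the error.
fn train(gaussians: &mut Vec<Gaussian>, photos: &[Photo], poses: &[CameraPose]) {
    for _step in 0..30_000 {
        // render -> compare to photos -> adjust positions, shapes, colours, opacities
    }
}
```

The "initial guess" mentioned above is what stage 1 provides; the better it is, the less work stage 2 has to do.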
If you're capturing on a mobile device, why not use Scaniverse? It's about as all-in-one as it gets - you just scan and it'll generate a .ply after a minute or two of processing. They'll host the splat for you in the cloud if you want too.
For me, at least, I want to own all my data, and not give any away without explicit permission. So, even in the case of Scaniverse, I'm reluctant.
But I'm just an artist trying to read and learn, and I haven't gotten around to actually figuring out how to do all this on my MacBook Pro M1 yet ^-^
Thank you for releasing this. It is the first option, afaik, to generate a 3D Gaussian splat on a Mac without a discrete GPU (using an M1 Pro). It is quite slow, but quick enough to test-train a dataset while on site, without the need to carry heavy workstations around!
I really like the option to use rerun.io for training analytics. Again, thank you.
Thanks for the advice. I did compile the repo on my M1 using VS Code, but I was comparing the speed to my workstation's RTX 4090, and that comparison is not appropriate.
Making a movable 3D scene from limited initial information. Some pros and cons against a traditional '3D model' approach. Pros: faster/simpler to generate (especially with lots of data), better at dealing with light and reflection. Cons: no real 3D geometry, which can be useful e.g. for collision detection, volumetric understanding, surface alteration/deformation, etc.
Not much widespread use right now. Possible commercial use cases are things like real estate walkthroughs and maybe replacing Google Street View with something more interactive.
That was in some way the original motivation for the project!
I think if you're reconstructing your own data, the algorithm should ideally just work, without extra input.
But, imagine you could add in generated videos. Lay down a camera path, tell it what to generate, and add it to the reconstruction. A brush stroke, one might say ;)
Sorry for the dumb question: What are the inputs? Photos? Videos? Any other data?
And then, what's the output?
Otherwise I find the whole website far too "involved" to understand what it's doing at all. Someone who already understands the area won't have my trouble of course.
Not a dumb question! This first version is still mainly targeted at people who are already in this area, to generate some excitement; I do hope to make this more accessible though!
The output is a 3D model. Specifically a "Gaussian Splat", which is a sort of fuzzy point cloud. There are some tools out there to view & edit these (besides Brush), eg. https://playcanvas.com/supersplat/editor.
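If it helps to picture what's in the file: each point of that "fuzzy point cloud" is roughly a record like the one below. Exact field names vary between exporters, so treat this struct as illustrative rather than a spec.

```rust
// Roughly what one splat in a Gaussian-splat .ply stores (illustrative only;
// real files flatten these into named float properties per vertex).
struct Splat {
    position: [f32; 3],  // where the fuzzy blob sits
    scale: [f32; 3],     // how stretched it is along each axis
    rotation: [f32; 4],  // quaternion orienting that ellipsoid
    opacity: f32,        // how see-through it is
    sh_coeffs: Vec<f32>, // spherical-harmonics colour, so it can vary with view angle
}
```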
One of the things that's held me back from being super interested in this field is my understanding that some kind of mesh backing is likely needed for this to progress.
IIRC some researchers had started to back the gaussians with a mesh to provide an editable artifact that would allow the gaussians to be moved and manipulated.
Is this anywhere near being a standard feature yet?
I expect this tech to make big waves in the backend of robotic systems soon. Accurate SLAM with accurate semantic tagging of objects is a big deal. It offers an efficient and accurate reality-to-simulator translation, allowing reinforcement learning in simulated environments to be directly applicable to reality.
Begone, lidar units, for basic robot tasks! All praise normal cameras! (Though it's far too slow to run on autonomous cars, since the environment changes so rapidly.)
Accurate SLAM with accurate semantic tagging would be a big deal, yes, but this project still relies on pretrained data with COLMAP so how is that relevant to your comment?
I'm not saying you can use THIS method for SLAM; I'm saying you can use this METHOD for SLAM.
You don't need a perfect COLMAP for this method (well, not this in particular, but for this method more broadly with some modifications); you just need an approximate location for a few of the images to start and then match the others progressively... which is literally what SLAM is all about.
And "pre-trained data" makes no sense. It's trained, as in slowly chewing iteratively on the data before getting a decent 3D space, but that just means it's a bit slow. Hence my mention of simple robots that move in a semi-fixed environment, rather than self-driving, where it's unusable.
But more broadly, it's a method to describe the real-world appearance of 3D space, which may have computational and flexibility advantages over massive point clouds.
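To make the "match the others progressively" idea concrete, it's essentially the classic SLAM loop; a purely conceptual sketch with hypothetical types (not any real SLAM library, and not Brush's API):

```rust
// Hypothetical placeholder types -- conceptual sketch only.
struct Frame;    // one incoming camera image
struct Pose;     // estimated camera position + orientation
struct SplatMap; // the Gaussian model built up so far

impl SplatMap {
    fn from_seed(_frames: &[(Frame, Pose)]) -> Self { SplatMap }
    // Match a new frame against the current model to guess its pose.
    fn localize(&self, _frame: &Frame) -> Pose { Pose }
    // Refine that pose and fold the frame's new geometry into the model.
    fn integrate(&mut self, _frame: &Frame, _pose: &Pose) {}
}

// Start from a few frames with roughly known poses, then process the rest
// incrementally: localize each new frame first, then update the map with it.
fn build_progressively(seed: &[(Frame, Pose)], frames: impl Iterator<Item = Frame>) -> SplatMap {
    let mut map = SplatMap::from_seed(seed);
    for frame in frames {
        let pose = map.localize(&frame);
        map.integrate(&frame, &pose);
    }
    map
}
```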
This is great. Thanks so much for putting this together.
It works on a laptop with a so-so graphics card... But it's the first time I've ever been able to process a Gaussian splat myself.
I'm a bit out of the loop on async runtimes. I know Tokio is of course the big one, but that seemed much too heavy to just run some tasks, and it isn't very WASM-compatible afaik.
Otherwise there's smol, and maybe others? Would love to hear what a good web/WASM-compatible async framework is nowadays!
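For what it's worth, one pattern that sidesteps the runtime question (a sketch under the assumption that you pull in `tokio` on native targets and `wasm-bindgen-futures` on the web; I don't know what Brush actually does) is a tiny cfg-gated spawn helper:

```rust
use std::future::Future;

// On the web, futures are driven by the browser's event loop via wasm-bindgen-futures.
#[cfg(target_arch = "wasm32")]
pub fn spawn<F>(future: F)
where
    F: Future<Output = ()> + 'static,
{
    wasm_bindgen_futures::spawn_local(future);
}

// On native, hand the future to a runtime (this arm assumes a Tokio runtime is running).
#[cfg(not(target_arch = "wasm32"))]
pub fn spawn<F>(future: F)
where
    F: Future<Output = ()> + Send + 'static,
{
    let _handle = tokio::spawn(future);
}
```

If Tokio feels too heavy, iirc smol exposes a global `smol::spawn` that could slot into the native arm the same way.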
To place a barely-educated guess: because "... In Rust" is enough of a trope around here that it brings very specific crowds of people out to argue the same points each time, whether it's even remotely relevant to the link or not.
Better to just avoid it unless the "in rust" part is somehow intrinsically relevant (e.g. it's in Rust for specific reasons, doing things that were previously too hard in other languages).
There is still brigading that happens regardless. I understand trying to minimize the battles but when there are actors propping up these same posts, it ends up having the opposite effect. (There are coordinated audiences on Discord and Mastodon looking to swarm these posts and game things like the front page of HN.)
Better to suppress these posts if the desire is to avoid the inevitable arguments with the bonus that it can be automated rather than requiring manual intervention. Otherwise, it likely ends up a well-intentioned but poisonous pill. I don't know if this will stop the most motivated members though.
Part "yes, you have a point" and part "this is letting perfect be the enemy of good".
There is apparently enough time and energy for manual intervention, given that it just happened - if it isn't making things worse, it may still be worth doing. Particularly since brigading tends to move in temporary bursts.