After more than a decade working with 3D graphics on the web, nothing has been more fun or magical to play with than transmission, IOR, and bump/env maps (thank you Three.js and now WebGPU).
I put together this rough diamond configurator so you can try it yourself.
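For anyone curious what that combination looks like in code, here's a minimal Three.js sketch of a diamond-style material; the values and texture paths are illustrative, not the ones from the actual configurator:

```ts
import * as THREE from 'three';

// Illustrative environment map; the path and file names are placeholders.
const envMap = new THREE.CubeTextureLoader()
  .setPath('textures/env/')
  .load(['px.png', 'nx.png', 'py.png', 'ny.png', 'pz.png', 'nz.png']);

// MeshPhysicalMaterial exposes the transmission + IOR combo mentioned above.
const diamondMaterial = new THREE.MeshPhysicalMaterial({
  transmission: 1.0,   // fully transmissive: light passes through the surface
  ior: 2.42,           // real-world index of refraction for diamond
  roughness: 0.0,
  metalness: 0.0,
  envMap,              // reflections and refractions sample the environment
  envMapIntensity: 1.5,
});

// A faceted placeholder mesh; a real configurator would load a cut-gem model.
const diamond = new THREE.Mesh(
  new THREE.IcosahedronGeometry(1, 0),
  diamondMaterial,
);
```

Drop that mesh into any standard scene/camera/renderer setup and the env map does most of the work.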
And you're right, that's what makes 3D tough right now. Without WebGPU, there can be a big cost to adding 3D content to a site, one that in many cases may not be worth it. But on the other hand, we're hoping this is a good time to jump into this space!
This is absolutely fantastic. I love it. Works great on about 80% of the website elements I've tried across about 10 websites now.
I tried it on our site, and was able to completely replicate some of our most esteemed components.
A few sites won't allow the selection: I activate select mode and hover over elements, and it won't pick up on anything. Now that has me wondering how I could have our own site prevent people from copying it as well.
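If I had to guess at one mechanism behind both, it would be closed shadow roots: components rendered into a closed shadow root are invisible to ordinary DOM traversal, which could trip up an element picker. A rough sketch of the idea (pure speculation on my part, not confirmed as what those sites actually do):

```ts
// Speculative sketch: a closed shadow root keeps a component's internals out
// of document.querySelector results, and element.shadowRoot returns null, so
// a DOM-walking picker only ever sees the opaque host element.
class GuardedWidget extends HTMLElement {
  constructor() {
    super();
    const root = this.attachShadow({ mode: 'closed' }); // root is not exposed
    root.innerHTML = '<button>Click me</button>';
  }
}
customElements.define('guarded-widget', GuardedWidget);

// document.querySelector('guarded-widget button') -> null (selectors don't pierce)
// document.querySelector('guarded-widget')?.shadowRoot -> null (mode is 'closed')
```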
> I activate the select mode then hover over elements, and it won't pick up on anything.
Did you have these pages (the tab itself) open before you installed the extension? Try refreshing the page and seeing if that works; Chrome only makes an extension's content script available on pages loaded after the extension is installed.
If that doesn't fix it, I would love to know what sites don't allow selection at all, I actually haven't seen that before: alex [at] magicpatterns.com
And finally, thanks for the kind words!! Here to help if you need anything.
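For the curious, the refresh is needed because manifest-declared content scripts only attach to pages loaded after install. Here's a minimal MV3 sketch of the programmatic workaround, injecting into tabs that were already open; `content.js` and the permission setup are assumptions about how such an extension might be organized, not a description of Magic Patterns' actual code:

```ts
// Background service worker (MV3). Assumes "scripting" plus host permissions
// are declared in manifest.json, and that content.js is the element-selection
// script (a hypothetical name for illustration).
chrome.runtime.onInstalled.addListener(async () => {
  const tabs = await chrome.tabs.query({ url: ['http://*/*', 'https://*/*'] });
  for (const tab of tabs) {
    if (tab.id === undefined) continue;
    try {
      // Inject into tabs that were open before install, so no refresh is needed.
      await chrome.scripting.executeScript({
        target: { tabId: tab.id },
        files: ['content.js'],
      });
    } catch {
      // Some pages (chrome://, the Web Store) refuse injection; skip them.
    }
  }
});
```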
I'm a bit confused. While the demo looks amazing, I feel it is quite misleading, along with some of the wording they use.
Is it actually creating the 3D environment and character models, or are those premade, with the tool handling only character rigging and camera tracking?
You have to provide rigged 3D character models yourself (or use their premade ones). It does camera tracking plus motion matching, or whatever algo/AI fun they use to track the biped animation. So yeah, you feed it a video and the 3D models, and it spits out either a video of the composite or a 3D scene you can download for further use/massaging in other applications.
BTW, animation filmmaker here. I tested a previous version; it was a janky toy that wasn't useful to me. I checked out the new stuff today but didn't get to testing it after reading through the several pages of limitations on camera work, composition, etc. I don't want my cinematography/blocking constrained.
We've looked at a number of FOSS and commercial options for a project recently, and found most were not much better than https://freemocap.org/ at handling video occlusions.
However, we did purchase commercial seats for https://faceit-doc.readthedocs.io/en/latest/mocap_general/ in Blender, and have found it workable with results from the iPhone 3D camera app compared to other options (the 52-marker lip sync, gaze, and blink cycles will still need to be cleaned up to look less glitchy in complex lighting).
Combined with Auto-Rig Pro in Blender, re-targeting for Unreal Engine with volume-preserving rigs is fairly trivial (you can avoid secondary transforms, so elbows don't fold in weird ways by default the way MakeHuman-rigged assets do).
Best of luck. After dropping/donating a few grand into several dozen add-on projects, we concluded there were still quite a few version-rotted or broken Blender add-ons around that people had zero interest in maintaining (some already made redundant by FOSS work, etc.). However, there were also a few tools that were surprisingly spectacular. You will still likely need to run both 3.6.x and 4.x... YMMV =3
Used your app when I saw it posted a few weeks ago. The results and UI were pretty good. I also tried another platform I saw posted on the Stable Diffusion subreddit. On that platform I uploaded pictures, then got a text 30 minutes later saying my model was done. I opened the app and saw 30 images of someone who bore about 1% resemblance to me. Then I went to generate with my own prompt, and they were asking $6 for 15 generations. $6 to generate 15 versions of me that look nothing like me? No thanks.
Appreciated the 50 minutes of generation time you gave me. Generated over 150 images.
A GPU costs us about $2 an hour, so with training time plus 50 minutes of generation, we're only making about $3. Not very much, but we'd rather spread the joy of generating yourself than overprice it.
Will give this a go. I saw an app kind of like this, but the images did not look like me; very unsatisfying. And I know it was just released, but is it using Stable Diffusion 2?
Not yet, but that's what I'll be spending much of my Thanksgiving on. Being able to train models in half or a third of the steps is huge, so this is a priority.
Hope you enjoy!