

Semantic Shape Editing - stokedmartin
http://www.meyumer.com/semantic_editing.html

======
noahlt
Reminds me of this scene from Star Trek
[https://www.youtube.com/watch?v=d0qGXPDJ19c&feature=youtu.be...](https://www.youtube.com/watch?v=d0qGXPDJ19c&feature=youtu.be&t=19m57s)
where the characters try to recreate their memories using the Holodeck. When I
first saw it I thought that scene was ridiculously unrealistic -- but I
suppose that's what I get for betting against technology.

------
pubby
It seems like defining how the semantic sliders behave would be much more work
than having an artist do the 3d modelling. And either way, you're going to
need that 3d artist in the first place. Am I missing something?

~~~
sourkremlin
I think so. You don't need a 3D artist -- models can also be created with a 3D
scanner.

I also don't think (I haven't read the paper, yet!) you need to do anything to
get the semantic sliders other than to label existing data. If you're going to
design a single chair, it's probably more time efficient to have an artist
make a 3D model to your specifications. On the other hand, if you're going to
design something like a customized biomedical part with variations that depend
upon the patient, then this could easily be a net win. Not to mention if you'd
like to automate that design, you now have a much smaller dimensional
parameter space to play with.

~~~
Gracana
> You don't need a 3D artist -- models can also be created with a 3D scanner

As someone who creates 3D models from scanned objects... hah haha hahahahaa.
<sob> It's _so freaking hard_. I wish it weren't so, but it's so frustrating,
it is probably the aspect of my job that I enjoy the least. I use a FARO CMM
arm with a laser line scanner and PolyWorks (IIRC that whole package costs
about $90k), and you can get decent scans of certain objects without much
effort, but it's really hard to get good definition in small details, and some
surfaces (shiny ones, transparent ones, ones where light scatters slightly
below the surface) and some details (holes, crevices, small protrusions) are
really hard to capture. And even still, the resulting model has high
complexity... If you want a smooth mesh without holes or other aberrations,
that's another layer of work. And if you need to convert it to proper NURBS
surfaces, that is yet another layer of work (and one that takes an entirely
separate skill set).

I know these technologies are getting better all the time, but that one's
still a really hard problem that's waiting for a better solution.

~~~
sourkremlin
Oh I agree with you on all those points. That's one of the main thrusts of my
work as well. It's hard, but doable, and if you've gone through the effort
then you might as well leverage it with a technique such as this.

------
nicklaf
This looks fantastic.

Before Google axed it, there was 'Google Sets', which expanded a few basis
terms into a longer list of related phrases.

To be sure, this seems to be more powerful than that. The ability to eke out a
3D model by simply playing around with five or so intuitive parameters could
be enough to get millions of people to use 3D modelling for many casual
purposes in the first place (whereas traditional tools require orders of
magnitude more deliberate thought, and therefore cannot be justified for non-
critical drawings).

------
haversine02
I haven't read the paper, but from the video this kind of feels like blend
shapes to me which are definitely not a new thing. Custom facial rigs often
have blending between different expressions, controlled by 2d sliders. Another
issue is probably the fact that the meshes need to be composed of the exact
same number of vertices to blend correctly, so no scanned 3d models. Would be
cool if they blended on an underlying voxel representation though.
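For readers unfamiliar with the technique: the blend shapes mentioned above are usually just per-vertex linear interpolation between a base mesh and target meshes that share the same vertex count and ordering (which is exactly the constraint noted). A minimal sketch of that scheme, with made-up toy data and NumPy for the arithmetic:

```python
import numpy as np

def blend_shapes(base, targets, weights):
    """Linearly blend a base mesh toward one or more target shapes.

    base    : (V, 3) array of base-mesh vertex positions
    targets : list of (V, 3) arrays, each with the same vertex count
              and ordering as the base (the constraint in question)
    weights : list of slider values, typically in [0, 1], one per target
    """
    result = base.astype(float).copy()
    for target, w in zip(targets, weights):
        # Each target contributes its offset from the base,
        # scaled by its slider weight.
        result += w * (target - base)
    return result

# Toy example: one triangle blended halfway toward a uniformly scaled copy.
base = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])
stretched = base * 2.0
blended = blend_shapes(base, [stretched], [0.5])
print(blended)  # vertices land halfway between base and stretched
```

This is why mismatched topologies break naive blending: the subtraction `target - base` only makes sense when vertex i of every mesh refers to the same semantic point on the shape.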

~~~
IanCal
> Another issue is probably the fact that the meshes need to be composed of
> the exact same number of vertices to blend correctly

These examples don't require the same topology, in fact they seem to use
really quite different ones.

This should allow them to take _new, unseen shapes_ and modify them in a
similar way. I think that's what they're doing anyway but haven't read all the
paper yet.

------
starmole
I like this a lot! One big challenge in 3d today is making the jump from data
to content: The jump we made for 2d in the late 80s when Photoshop changed the
perception from pixels to images. But what is the equivalent to a histogram or
a brightness/contrast filter in 3d?

------
ValentineSmith
I was thinking about EXACTLY the same idea before I went to sleep last night!

~~~
M8
This is not an idea, it's an implementation.

