1000fps image projection on deforming non-rigid surface (u-tokyo.ac.jp)
1306 points by hongzi 16 days ago | 128 comments

The second generation of the projection tech does color: http://www.k2.t.u-tokyo.ac.jp/vision/dynaflashv2/index-e.htm...

And they can now do the tracking without the infrared paint: http://www.k2.t.u-tokyo.ac.jp/vision/MIDAS/index-e.html

I can't describe how awesome this is to me, personally.

In 1998 I was hired on a contract to do a Fashion Show, where I developed graffix to project onto models who were on the catwalk in this theater....

Basically it was a music + visuals show where the models would walk down the catwalk and present themselves....

Guess what tool I had available to me to make this happen...


I F5'd that bitch and projected a slide per model on a projector borrowed from work with me manually changing slides with --> for each model....

Lost my shit reading this. Thank you so much for sharing this.

That is just stunning.

I guess the Disney Imagineering dept. would love to do some cool things with this. Or perhaps they already have this capability?

These are the kind of things where I'd like to have a commercial version already!

Edit: Also, the url in the original post is from 2016! wow!

(@dang maybe add (2016) to the title?)

Disney has a (poor) version of this idea. They project an animation onto wedding cakes at their resorts. They're pre-rendered though, rather than dynamic like this. It'd be fascinating to see what uses they could come up with for this stuff.

Disney does a lot of projection mapping in the parks right now, but it's all (AFAIK) the traditional, pre-rendered type, with animations projected onto large static surfaces like the Magic Kingdom's castle, or onto animatronics with pre-planned motion paths.

I'm having a little trouble visualizing how this non-rigid projection mapping could get applied in a practical way in the parks today, mostly because it seems like it has a fairly small "active" area (determined by your projector and sensor resolution, essentially). I could imagine this being used in a parade or stage show, for example, but this system seems like it would be pretty restrictive as to where the performers could move and remain in the projection space.

Those projections onto the castle are quite stunning. The technology for producing an enormous, apparently seamless image on that uneven canvas is amazing (even if it's static). And a lot of creativity went into building an animation that used that very specific venue so well.

What if the projection space were mobile? One of the applications is to put projectors on moving objects, and project onto static (or moving) surfaces, assuming the system gets small & portable enough. How about one or more projectors mounted on each car of the Haunted Mansion ride, or projectors mounted on flying drones aimed at Spaceship Earth (the geodesic Epcot globe) during the night light show? Combine multiple projectors and a position tracking system, maybe even viewer head/eye tracking too, and I think there might be some amazing possibilities...

I think you could even apply the same algorithms to AR projections, though. This would have massive applications in parks: guests wearing AR projection goggles while touring the park, allowing for a hugely immersive experience.

I could imagine doing something for a haunted house, like projecting ghosts onto the curtains of a doorway, or onto sofas/couches after you stand up (and having your shadow follow you while doing creepy things, for instance).

Of course, I'm no expert but throw a few million dollars into this and you can probably come up with some neat stuff even if the projection space is small-ish for now.

Augmenting human actors/customers (think Harry Potter spells) could get interesting.

Disney uses something similar in a lot of their stage shows at their parks. They use a water fountain to create a screen and project onto the water sheet. It works surprisingly well, but I would assume that it is pre-rendered.

The water screens are actually a pretty old technique that you can do with any traditional light projector (film or digital). The hard part is making the water sheet.

> Disney has a (poor) version

Seems an unnecessarily nasty way to refer to what I guess is earlier less advanced work.

What exactly is "nasty" about it, and why would anyone take offense at the use of the term "poor version"?

What do you think the word 'poor' means when referring to quality? Might be kinder to say 'less advanced' or 'simpler' or 'earlier'.

>What do you think the word 'poor' means when referring to quality?

poor (adjective): of a low or inferior standard or quality.

>Might be kinder to say 'less advanced' or 'simpler' or 'earlier'.

Kinder towards what? Is a multinational like Disney sensitive or is the technology sensitive to the choice of words criticising it? Or will the researchers take offense to their technology, which is objectively inferior, being described as "poor"?

We're stretching this too thin, inventing issues where there are none...

Real people worked on it, and you’re calling their work poor.

First of all, I'm not calling it poor, the grandparent did.

I'm saying it's nothing special to call it poor.

We say 10x harsher things everyday in HN for frameworks, languages, etc. Heck, check any thread about Apple products. Don't real people work on those?

Plus, ever read art criticism, or restaurant criticism, or political critiques even in the most respected newspapers? "Poor" is the least harsh of the terms they use. And those are also real people they level those things at...

Would the Disney version be considered inferior quality to this new research?

“Poor” doesn’t connote merely “anything that isn’t the absolute best”. Words have meaning beyond the dictionary definition. “Poor” in this case means you think the researchers did a bad job, not that their work was a step on the path to something better. “Earlier”, or even “dated”, would have been a much more charitable depiction. And plenty of people are deriving joy from the application of that technology.

> in this case means you think the researchers did a bad job

It really does not. You’re inventing things to be offended by. It simply means it is a low-quality version of the same thing, which it very much is.

In retrospect, Ford’s first cars were a poor version of their 2019 ones. Doesn’t in any way diminish the accomplishments of the past.

I’m really not. And I don’t get offended by very much. Maybe you’re not a native speaker? “Poor” as it refers to quality is a negative qualifier. And it’s typically used in a subjective context. That’s the connotative meaning.

Even the definition used here (taken from Google) indicates so, if it weren’t truncated:

poor (adjective) worse than is usual, expected, or desirable; of a low or inferior standard or quality.

The list of synonyms is even more telling about its real meaning: shoddy, bad, deficient, defective, lamentable, deplorable, awful, etc.

These are not words I’d use to describe the technology, having seen it first-hand. The technology is not worse than usual and I really don’t see how it’s worse than expected or is otherwise undesirable. It’s out there and being enjoyed by people.

Apparently if someone thinks it’s poor?

Very cool, a sort of live-action cgi capability.

Hope it doesn't easily scale out to larger spaces and crowds, or the current tech industry would soon have public spaces filled with ads projected on people's belongings.

I doubt it would ever be used on private property like that, but we absolutely will see this tech used in the advertising space.

Going to start ordering everything coated in vantablack then...


Unless you're Anish Kapoor you'll have to settle for Black 3.0 https://www.culturehustleusa.com/products/black-3-0-the-worl...

Plus, Black 3.0 isn't toxic like Vantablack.

...or Anish Kapoor.

Who makes a fine mirror.

The tech industry would soon have some very expensive equipment destroyed.

The target needs to be marked with infrared ink: you can project only on specific surfaces prepared in advance.

Edit: discard this comment, I misread the parent comment.

Not according to the comment that started this thread, which suggests that they no longer need the infrared paint.

No? The second link showed that it doesn't have to.

> public spaces filled with ads projected on peoples belongings.

They literally could have done this for 100+ years.

They do not.

Do you see how what you say makes no sense other than a fear of technology?

You've seen projectors, you've seen how they don't just project them anywhere, it's hard to see how one makes this leap.

That second one is insane! I can't get my head around how they would be able to do it.

The second example doesn't, IMO, demonstrate the kind of deformation tracking the example in the OP does. As far as I can tell, they are able to get a depth representation, segment objects from a video, get an approximation of surface normals and reflectivity for those objects, and project a shaded surface onto those objects.

What they do not seem to be able to do without IR markers is project a diffuse texture onto an object so that it sticks properly. See the example with a non-uniform texture, where the fingers of a hand are fanned out: the texture warps noticeably.
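The shading part, at least, is conceptually simple: once you have per-pixel surface normals, it's Lambert's cosine law. A toy sketch in Python (function and parameter names are mine, not the lab's):

```python
import numpy as np

def shade_lambertian(normals, light_dir, albedo=1.0):
    """Shade a surface from estimated per-pixel normals.

    normals:   (H, W, 3) array of unit surface normals.
    light_dir: (3,) vector pointing toward the (virtual) light.
    Returns an (H, W) intensity image in [0, 1].
    """
    light = np.asarray(light_dir, dtype=float)
    light = light / np.linalg.norm(light)
    # Lambert's cosine law: I = albedo * max(0, n . l)
    return np.clip(albedo * (normals @ light), 0.0, 1.0)

# A flat patch facing the projector, lit head-on, shades to full white:
flat = np.zeros((4, 4, 3))
flat[..., 2] = 1.0
img = shade_lambertian(flat, light_dir=(0.0, 0.0, 1.0))  # all 1.0
```

This is enough to fake a moving light source on a tracked surface; the hard part the lab solves is getting those normals fast enough.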

You say that like it's not ridiculously impressive and interesting.

Could you imagine using something like this for reviewing finish options for a product with zero turnaround time?

It is impressive, but it's not an improvement on the tech in the OP - it's a different approach to a similar problem as far as I can tell.

wow they even added shading, to do fake lighting effects

I hope to see this at the next Tool concert

This is what popped in my head while watching the video!

Very cool. I do live projection work[0] and latency is always the killer with immersion. Anything higher than ~90ms of latency breaks integration at normal dancer movement speeds. 1000fps seems like overkill but it allows for very fast movement.

0. https://youtu.be/ggRcDQZWD_8?t=1281

Nice work :)

Is this real-time 3D taking into account dancer’s motions? I imagine it takes a crazy GPU rig?

In any case, very impressive.

It is, but I cheat! The dancer has a small android device with a custom gyroscope app, mounted in the middle of her back. I can get her general orientation accurately this way (more accurately, and faster, than state of the art pose estimation).
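For anyone curious, the core of such a gyroscope app is just integrating the angular rates into an orientation quaternion every sample. A minimal sketch (my illustration, not the actual app's code):

```python
import numpy as np

def integrate_gyro(q, omega, dt):
    """One Euler step of quaternion orientation tracking.

    q:     current orientation as (w, x, y, z), a unit quaternion.
    omega: (3,) body-frame angular rates in rad/s (what the gyro reports).
    dt:    sample interval in seconds.
    """
    w, x, y, z = q
    ox, oy, oz = omega
    # dq/dt = 0.5 * q (x) (0, omega), written out component-wise:
    dq = 0.5 * np.array([
        -x * ox - y * oy - z * oz,
         w * ox + y * oz - z * oy,
         w * oy - x * oz + z * ox,
         w * oz + x * oy - y * ox,
    ])
    q = np.asarray(q, dtype=float) + dq * dt
    return q / np.linalg.norm(q)  # renormalise to keep it a rotation

# Spin about the vertical axis at pi rad/s for one second:
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    q = integrate_gyro(q, (0.0, 0.0, np.pi), 0.001)
# q is now approximately (0, 0, 0, 1): a 180-degree turn.
```

In practice you'd fuse in the accelerometer/magnetometer to fight gyro drift, which is presumably why a phone's sensor stack works so well here.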

I am in the process of bringing together a community around art making like this. Let me know if this is something that interests you.

It is of interest! I am in the process of learning TouchDesigner, looking to integrate it with Ableton Live. I doubt I’ll have enough GPU power for real-time 3D renders based on changing sound or visual input though…

Currently I am considering pre-rendering scenes to given BPMs (where applicable) and doing only limited realtime alterations with TD nodes.

This seems like an extremely advanced version of those sand tables at science centers that kids play with, where digging a trench in sand with your hand modifies the projection to affect virtual water flow.

The youtube video[0] from that page is especially interesting.

0: https://www.youtube.com/watch?v=-bh1MHuA5jU

Yeah, and for the sandboxes, latency is the part that breaks the immersion. But it is straightforward to set up with a kinect and a projector.
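The whole sandbox loop fits in a few lines: depth image in, height map out, threshold into "water" vs "land" for the projector. A toy sketch (sensor height and water level are made-up numbers):

```python
import numpy as np

def sand_classes(depth_mm, sensor_height_mm=1000.0, water_level_mm=50.0):
    """Classify each sandbox pixel for projection.

    depth_mm: (H, W) sensor-to-surface distances from the depth camera.
    Returns (H, W) uint8: 0 = water (sand below the water line), 1 = land.
    """
    height = sensor_height_mm - depth_mm   # sand height above the table
    return (height > water_level_mm).astype(np.uint8)

# Dig a "trench": most sand sits 100mm high (land), the trench 30mm (water).
depth = np.full((4, 4), 900.0)
depth[1, 1] = 970.0
classes = sand_classes(depth)  # all land except the trench pixel
```

A real setup adds calibration between camera and projector coordinates, plus smoothing, but the classification itself is this cheap, which is why the latency budget is dominated by the sensor.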

You want a high-bandwidth interface to computing? Combine this with Dynamicland technology (https://dynamicland.org/). This makes way more sense than creepy neural interfaces. Best of all, it would allow people to freely collaborate on things in real life.

This here is the real winner. The thread is full of amazing ideas but successfully executing this one combination would be a complete paradigm shift.

This is crazy awesome! And look at this one! I don't know if it comes from the same project, but looks cool. https://www.youtube.com/watch?v=c40cxE-dfPg

In 1899, H.G. Wells wrote "A Story of the Days to Come", which was later adapted to become the 1936 film "Things to Come". In the original story, the main characters mention being irritated by the advertisements projected onto the backs of the people they walk behind. Old idea, only now possible without image distortion.

The latency is way more impressive than the fps

Edit: the video gets the point across more effectively IMO: https://youtu.be/-bh1MHuA5jU

Well you can't really achieve low latency if the end-to-end system has low FPS.

Yes but you can achieve high fps very easily if you have a day to compute each frame! :)

Yeah, you totally can. You need a low duty cycle, though.
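The distinction is worth making concrete: throughput (fps) and latency are independent. Each pipeline stage (capture, compute, display) costs at least one frame period, so the latency floor scales with the frame period, no matter how deep you pipeline. A toy model (the stage count is illustrative):

```python
def latency_floor_ms(fps, pipeline_stages, fixed_processing_ms=0.0):
    """Best-case end-to-end latency when every stage is one frame deep."""
    frame_period_ms = 1000.0 / fps
    return pipeline_stages * frame_period_ms + fixed_processing_ms

# A 3-stage pipeline at 60fps can never beat ~50ms end to end,
# while the same pipeline at 1000fps floors out at ~3ms:
slow = latency_floor_ms(60, 3)
fast = latency_floor_ms(1000, 3)
```

Which is the "day per frame" joke in reverse: you can pipeline your way to any fps, but only a short frame period gets you low latency.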

Another use of a high-speed projector would be to create a real 3D display anywhere in a volume swept out by a moving surface. Objects would be translucent, but otherwise it's real 3D with wide viewing angles.

AFAIK that's one of the more popular current 3D display technologies: https://www.youtube.com/watch?v=FVYoWsxqK8g

The first time I heard of it was in the 1980s. TI used a laser to project points onto a rotating surface. Then someone used a flexible mirror in front of a speaker: the mirror would flex convex/concave, changing the apparent distance to a vector display screen.

Just saying a modern 1000fps display could do this much better.

That flexing mirror idea is really smart.

That’s an amazing idea. A naive approach would be to slice the volume into N surfaces (say, ~30 layers to stay above 30fps) but there is probably a much more efficient organic pattern or interlacing that would give good volume and angle coverage at much higher resolution - think of crumpled cloth blowing in the wind at speed.
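The slice budget for the naive approach is just a division, which also shows why the 1000fps projector is the enabling piece:

```python
def slices_per_sweep(projector_fps, volume_refresh_hz):
    """Depth slices a single projector can paint per sweep of the surface."""
    return projector_fps // volume_refresh_hz

# 1000fps refreshed 30 times a second leaves 33 slices per sweep;
# a standard 60fps projector would manage only 2.
```

Any cleverer interlacing pattern still has to fit inside that same frame budget, just spending it on different parts of the volume.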

My first thought is that this could provide some awesome effects for live theater.

The Mandalorian on Ice!

Iron Man: The Musical

Benjamin and the Buttons in Concert

Ruining film was bad enough.

Ruining comics was bad enough

This is just too much. My mind is completely blown. And makes me think of how many things I think are impossible but actually are or will be in the near future.

Similar feelings. The video doesn't go easy on the tech either. He is shaking that paper violently and I can't see any faults. The part where he has 2 bits of paper as well as when he stretches his shirt are mindblowing.

This is from 2016, but still very cool.

This makes me even happier. Means the SOTA is even more advanced?

I was fortunate to see some of the 2019 projection mapping competition on Odawara Castle. It is absolutely mind-bending technology.

This is the real beginning of augmented reality, not VR or cell phones

One downside with this approach is that you need something to bounce the light off of (e.g., a surface), so adding virtual objects to AR is difficult if the objects aren't positioned at the surface of real world objects. That's an effect you often want to achieve in AR applications.

Nah if you see it in person you understand. No matter the surface, you can create a virtual 3d space from the perspective of the observer.


This gives you an idea but in person the illusion is stronger.

Yeah exactly. I want to see Dynamicland-style AR rooms based on this

Here's a very cool art project done with this technology: https://www.youtube.com/watch?v=3Aos1Z2htDU

This lab is doing a whole series of amazing stuff:

500fps hand gesture recognition system: http://www.k2.t.u-tokyo.ac.jp/perception/zSpace/index-e.html

Robust tracking for moving objects: https://www.youtube.com/watch?v=p7IL0Gvux7U

Having just watched a bunch of videos on Deep Fakes after the revelation of the Deep Fake video of Nixon's Moon Landing Disaster speech on here this morning, I can't help but feel like this is something else that will make fakes more and more difficult to distinguish from genuine.

I think it would be a very interesting idea for a music show. Tell everyone to wear white t-shirts and create this kind of projection from multiple beacons spread around the venue. Not sure if that would be possible, but it would be very cool to see IRL and talk about.

My initial reaction was that it would be brilliant to use in theatre productions.

The idea of a stage version of A Scanner Darkly's scramble suit seems amazing.

There he is! The guy in the red shirt! blue shirt! striped shirt!

Reminds me of Jeff Han's original multi-touch demo. Slightly different topic, but also mind-boggling back then: https://www.youtube.com/watch?v=EiS-W9aeG0s

Back in 2012, my wife and I attended a stage show called "The Animals and Children Took To The Streets" [0]. It was done with "dumb" projectors, with choreographed movement of different screens on the stage, but created a highly dynamic show.

Remembering that show, and seeing these videos, it makes me giddy to think what could be done with the latest tech today.

[0] https://www.theguardian.com/stage/2011/dec/11/animals-childr...

I'd love to see Cirque du Soleil leverage this type of technology in a show...

That is pretty impressive. I tried something similar with a DLP projector stolen from Texas Instruments (not stolen, but they tend to be picky about selling them. Probably because these are awesome devices).

It was slow as hell, since I used a raster with multiple images to measure the topology. The resulting heightmap was awesome, but even with a synchronized camera and projector, I needed a pretty long illumination time per image. So it would be interesting to know what camera(s) they used too. I doubt you would need multiple projectors, because the available ones are extremely fast.
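For reference, the classic way to do that raster measurement is Gray-code structured light: project log2(width) stripe patterns, then decode each camera pixel's projector column from which stripes lit it. A toy encode/decode sketch (a real system also needs per-pixel thresholding of the camera images, which is skipped here):

```python
def encode_column(col, n_bits):
    """Stripe values (1 = lit, 0 = dark) that projector column `col`
    receives across the n_bits patterns, most significant bit first."""
    gray = col ^ (col >> 1)  # binary-reflected Gray code
    return [(gray >> bit) & 1 for bit in reversed(range(n_bits))]

def decode_column(bits):
    """Recover the projector column from the bits a camera pixel observed."""
    gray = 0
    for b in bits:
        gray = (gray << 1) | b
    binary = 0
    while gray:          # Gray -> binary via cumulative XOR
        binary ^= gray
        gray >>= 1
    return binary

# Round-trip: every column of a 32-wide pattern decodes back to itself.
ok = all(decode_column(encode_column(c, 5)) == c for c in range(32))
```

Gray code rather than plain binary stripes means adjacent columns differ in exactly one pattern, so a decode error at a stripe boundary is off by at most one column.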

First saw this on the prosthetic knowledge Tumblr account years ago. Really miss that account. Whoever was behind it did a phenomenal job of curating incredibly interesting technology developments.

They can project what's behind an object onto an object to make it "invisible"

How could you demo that?

Turn the projection on and off to see the cloaking in action?

This is so cool. Now I want to take this ( and so many more things like it that have come up recently ) and show the students at the art school I attended that projection mapping can be so much more than lining up all the parallel lines.

There are going to be clubs/bars/venues that will market themselves as "foo" enabled light spaces for your "bar" apparel.

We really don't need this stuff to overload our already maxed-out sense organs, but here we go!

The demo video answered my questions, is short and the tech is impressive. Not entirely sure of the business model though. The non-rigid tracking might be more useful than the projection -- perhaps a Defense application?

Skimming the description, they used a structured light approach (active) for geometry deformation tracking. This is still probably useful for certain defense applications but an ideal goal is often to use passive tracking systems. It's impressive either way.

Lots of theater, performing, and visual art uses come to mind.

Some retailers (especially Japanese ones) have a fascination with the idea of allowing customers to "try on" different colors/styles via an on-screen avatar that's based on a scan of the customer. This would seem like a logical next step.

I'm unsure how much customers would actually want it though, at least after the novelty wears off.

I immediately thought of Disney parks and stuff like Cirque du Soleil and other performances (concerts, raves, etc)

Streamers could liven up their streams with this tech.

Reminds me of this high speed tracker video from 2013:


Much easier problem to solve at 1k fps but still cool.

This would go great with the teamLab borderless exhibit, which currently seems primitive in comparison (it’s an art exhibit where they project images everywhere).

Advertising, advertising, advertising and dance floors.

Football games need more advertising... sigh.

Could this be applied to motion capture to remove all the unnecessary gear actors have to wear? This could revolutionize digital animation.

What kind of specs would someone need to reproduce this on a Linux system, say?

Are there any FOSS projects that are doing this?

Sure seems like a lot of science and tech related awesomeness is introduced to the world on a website from 1998.

I can see some big usage for this in medicine. Projecting stuff to help doctors performing complicated surgeries.

I was thinking the same thing. Stryker already has a system with a tracking device for surgeons that shows pre-op CT scans around the area and shifts slices around depending on the position (it also performs a 3D rendering).

Projecting this directly on the surface may be useful but you'd have to be careful not to skew information surgeons may find useful. Seems great for training on cadavers and stuff though.

This could be cool at techno parties.

This is very cool. I suspect that it's only a matter of time before all projectors work like this.

Something like this should play a role in the future of more sustainable clothing/fashion.

If I want to do object tracking for a low-grade version of this kind of thing, what's the best off-the-shelf solution right now? Is it ARKit?

That could be big for the fashion industry.

I'm working on 3D for clothes design, and we're already exploring VR; seems like this could be really interesting too, yes.

Does anyone know what camera they use?

Simple but effective. Really cool.

Military camouflage applications?

That demo looked incredible!

Advertisers will love this.

This is pretty neat.

Very cool!

This is awesome!


An invisibility cloak is not far away.

Invisibility cloaks need to work from multiple points of view?

How would you project it onto the person without giving away their position?
