Revolutionary "Light Field" camera tech - shoot-first, focus-later (allthingsd.com)
385 points by hugorodgerbrown on June 22, 2011 | 91 comments



The downsides, which, of course, this press release doesn't mention:

- Greatly, greatly reduced image resolution. Great big dedicated-camera sized lens and image sensor, cellphone-camera sized pictures. 1680×1050, at most. (1.76MP)

- Color aberration. The microlenses have to be small, of course, so they're going to be made of single physical elements, rather than doublets.[1]

- Various amusing aliasing problems. (note the fine horizontal lines on some of the demo shots)

- Low FPS. Each image requires lots of processing, which means the CPU will have to chew on data for a while before you can take another image.

- Proprietary toolchain for the dynamic images. Sure, cameras all have their particular RAW sensor formats, but this is also going to have its own output image format. No looking at thumbnails in file browsers. Photoshop won't have any idea what to do with it. Can't print it, of course.

- You can just produce a composite image that's sharp all over, but why not use a conventional camera with a stopped-down[2] lens, then?

- It's going to be really thrillingly expensive. This is a given, of course, with new camera technology.

[1]: http://en.wikipedia.org/wiki/Doublet_(lens) [2]: http://en.wikipedia.org/wiki/F/stop#Effects_on_image_quality


Proprietary toolchain for the dynamic images – you could embed the full dynamic information in a JPEG extension and use the JPEG thumbnail as a thumbnail and the JPEG image as your selected representation. Then you could use your nonstandard code to regenerate the JPEG image data from your full data when you wished.
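Something like this byte-level sketch could work (my own illustration, not anything Lytro has announced; APP15 is an arbitrary choice of marker, and real metadata formats like EXIF and XMP use APP1 in the same way):

  def embed_in_jpeg(jpeg_bytes: bytes, payload: bytes) -> bytes:
      # Stash extra (light field) data in an APP15 segment right after SOI;
      # ordinary viewers skip unknown APPn segments and decode the plain JPEG.
      assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI)"
      assert len(payload) <= 65533, "one segment holds at most 65,533 bytes"
      segment = b"\xff\xef" + (len(payload) + 2).to_bytes(2, "big") + payload
      return jpeg_bytes[:2] + segment + jpeg_bytes[2:]

Larger payloads would just be split across multiple segments.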

You can just produce a composite image that's sharp all over… – but what fun is that? I'm having a grand time using SynthCam on my iPhone to act like a large-lens camera, regaining distance-dependent focus even though I have a tiny lens.

It's going to be really thrillingly expensive. – It doesn't need to be. Tiny micro lenses on the image sensor might be much cheaper than large chunks of precision glass. Think "inkjet-like print head squirting one of the resins used for plastic eyeglasses into etched depressions relying on surface tension to form the lens". Just guessing there. Maybe placing precision sized beads in each depression and then heating to reflow into a surface tension defined lens would work better.


     You can just produce a composite image that's sharp 
     all over… – but what fun is that?
With DSLRs that have huge lenses with large apertures, the depth of field varies a lot, and those lenses also have a sweet spot in focal length / aperture at which the images produced are sharpest. Reducing the aperture size increases the depth of field, but then you've got another problem, as the shutter speed also has to be adjusted, and you end up using a tripod.
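To put rough numbers on that tradeoff (illustrative values, not from any particular camera): each stop halves the light, so stopping down from f/2.8 to f/11 costs about four stops, i.e. roughly 15x the shutter time.

  from math import log2

  def stops_lost(wide, narrow):
      # Exposure goes as 1/N^2, so each stop is a sqrt(2) step in f-number.
      return 2 * log2(narrow / wide)

  print(stops_lost(2.8, 11))       # ~3.9 stops
  print(2 ** stops_lost(2.8, 11))  # ~15x longer shutter for the same exposure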

Getting a picture that has everything in focus and is tack sharp can be seriously challenging.

To tell you the truth, while these photos with adjustable focus seem cool, focus is not my pain - what I want is to be able to take great photos (the kind that 35mm cameras can do) for reasonable prices, preferably with something that fits in my pocket.

And to expand on the point above - focus is not painful when the camera has enough focus points. I played with a Nikon D3s that has a whopping 51 auto-focus points; let me tell you, it's freaking awesome, as it can track your subject as it moves. The problem is that consumer-level DX DSLRs only have like 11 focus points, which is still cool, but point&shoots suck badly in this area, most of them focusing only in the center of the image.

Another problem with this project that I can see - people don't like playing with their images on the computer. When you take 500 photos in a single day, and another 600 photos the next day (like when going on a trip), it's really painful to carefully adjust each image, not to mention that the RAW formats are huge and seriously cut into the number of photos you can take ... yeah, making adjustments is great, but I prefer making more photos; that's why I shoot in JPG and don't regret it.


Focusing with point and shoots should not be a problem. They all have a huge depth of field due to their small sensors, so you have to try really hard to put something out of focus. Smudged pictures on point and shoots are usually not a result of lack of focus, but something else (e.g. shaky camera).
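A back-of-envelope hyperfocal calculation shows why small sensors make focus nearly moot; the values below are assumed typical numbers, not the specs of any particular camera.

  def hyperfocal_m(f_mm, N, coc_mm):
      # H = f^2 / (N * c) + f; everything from H/2 to infinity is acceptably sharp.
      return (f_mm**2 / (N * coc_mm) + f_mm) / 1000.0

  # Typical point-and-shoot: 6mm lens, f/2.8, ~0.005mm circle of confusion
  print(hyperfocal_m(6, 2.8, 0.005))   # ~2.6 m: everything past ~1.3 m is sharp
  # Full-frame DSLR: 50mm lens, f/2.8, ~0.03mm circle of confusion
  print(hyperfocal_m(50, 2.8, 0.03))   # ~29.8 m: focus genuinely matters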

Even with DSLRs you usually only need one focus point. You can focus at the center and re-frame. It is very simple to do, and is much simpler than choosing a focus point (and much safer than trusting the camera to automatically choose a focus point for you). Multiple focus points can be very useful in certain rare circumstances: when shooting something really fast off-center without being able to pre-focus, or when your camera is bolted down on a stand. Even then I cannot imagine why anyone would need 51 points. This is obvious feature creep.

But yeah, I am really not sure who the intended market for this camera is. Focusing is just not a pain point, in my opinion. This camera could be used by artists and professional photographers to play around with the depth of field to get a great artistic shot, without making their subjects wait. But with the micro-lens design, will it have enough image quality for professionals? I guess we will see.


Focusing with the center AF sensor only and recomposing will generally put the plane of focus behind the intended subject (assuming the lens does not have an extraordinary degree of field curvature). The geometry is easy to see if you remember that your focus distance is the distance from the sensor/film plane to the subject, not the distance from the lens to the subject. The problem is more noticeable with a wider lens, a larger aperture setting and a closer subject, but it's always there. That's why Hasselblad (who only offer one AF point in the H4D camera system) has incorporated positional sensors and focus correction in their latest models.
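If you want to put numbers on it, here's a quick sketch of that geometry (the distance and angle are made up, but realistic for a portrait):

  from math import cos, radians

  d = 1.5              # subject distance in metres (assumed)
  theta = radians(15)  # recompose angle (assumed)

  # Lock focus at d, swing the camera by theta: the subject's depth along
  # the new axis drops to d*cos(theta), while the focal plane stays at d,
  # so the plane of focus lands behind the subject by d*(1 - cos(theta)).
  error = d * (1 - cos(theta))
  print(f"{error * 1000:.0f} mm")  # ~51 mm, easily outside a wide-open DOF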

If your camera offers the option, choose a focusing composition that will put your subject as close as possible to its final position in the picture and use the focus point closest to that position.


You cannot re-frame when shooting moving subjects - kids playing, sports, birds, cars, motorcycles, boats - there's all kinds of instances in which your subject doesn't stand still and even has unpredictable moving patterns, so keeping your subject in the center of the frame and/or re-framing is in many instances not feasible.

Nikon DSLRs have this 3D tracking feature in which you select an object to keep in focus and it refocuses based on its movement inside the frame when it hits the focus points. And when the subject exits the frame and re-enters, auto-focus comes back. 51 focus points may seem like feature creep, but as I said, it's freaking awesome when shooting moving targets like birds.

Even for subjects that are still, like for portraits, you have a lot more freedom for composition as you just select the person's eyes and then you can move around while the eyes are kept in focus.

Of course, you can do a good job with a single focus point, but professionals and amateurs need predictable results, because good moments for taking photos are rare and you don't want to screw up because your camera wasn't properly focused.

That's why I can partly see the utility of this technology here, but on the other hand I can see serious problems with it too, the biggest one being that for most people quantity of photos trumps quality. Another problem is the one I mentioned above; precise and predictable focus is not that much of a problem with modern cameras. And yet another: mega-pixels and quality of optics count a lot. Well, maybe once past a certain threshold there's less ROI from higher MP, but still, under 6 MP a camera is only usable for publishing on Facebook.

My consumer DSLR has 4 FPS and I don't worry about focus as I just continuously shoot like 20-30 pictures in a row to make sure one of them is good, and usually one of them is.


> under 6MP a camera is only usable for publishing on Facebook.

2560x1600 pixels is 4.1MP. At 1:1 that will completely fill the most monstrous computer monitors one can get for under about $10k. At 300 pixels per linear inch (a very reasonable resolution for photos; about the same as the iPhone 4 "retina" display) it will give you a picture with 10" diagonal.
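For anyone who wants to check the arithmetic:

  from math import hypot

  w, h = 2560, 1600
  print(w * h / 1e6)              # 4.096 megapixels
  ppi = 300
  print(hypot(w / ppi, h / ppi))  # ~10.1 inch diagonal at 300 pixels per inch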

Unless you're making posters or big prints for photographic competitions or something, even at 4MP raw pixel count is not going to be your problem. It's completely untrue that below 6MP your pictures will be useless for anything beyond Facebook.

(Of course pixel count starts to matter more if, e.g., you're taking pictures of distant birds or distant celebrities with a not-especially-long lens and you need to crop heavily. Most photographers, most of the time, are not doing that.)


Don't forget about pixel quality (it's understandable because the industry has been encouraging it for years).

A 6MP sensor would produce a fantastic 10"-diagonal print if its pixels weren't affected by noise and it was combined with a high quality lens. Unfortunately sensors of that resolution are typically small (=noisy pixels) and placed in point-and-shoot cameras (=cramped, low-quality optics).


Or you could buy a top-of-the-range pro DSLR from 5 years ago with this resolution, with fantastic lenses and bulletproof build, for the price of a modern entry-level camera.


I find the higher megapixel count really helps with cropping. Especially when photographing wildlife like a dragonfly, it's nice to be able to crop a photo to focus on just that while still maintaining resolution. For me, 10MP minimum is for this purpose rather than large prints. My 2c: this looks like a really cool technology; I would love to see it built into phones.


"people don't like playing with their images on the computer."

...until they have a great shot ruined by improper focus. Or worse, an entire afternoon of great shots at a memorable event ruined by the autofocus switch set to "off". Us geeks may notice in time, but most consumers/users won't.


It's a piece of first-generation consumer electronics, made in small production runs by a new company that's already spent $30 million in R&D. While each camera will not cost much to make, they will be priced high.


Not necessarily. If the company is looking to recoup its investment soon, sure. But if the company is looking to get mass-market appeal then they'll price it more aggressively. The latter may be a good idea given the declining interest in dedicated cameras - make it cheap enough, while it's still novel, and people will actually buy the thing.


Well, if you watch the Techcrunch video, the CEO was only willing to say it was between free and $10k. That's a big range.


"greatly reduced image resolution"

Not necessarily. If you're using a plenoptic camera purely for producing 2d images, you can reduce pixel size further than you would normally. This is because each pixel in the output image is the average of many primary pixels in the sensor. The primary pixels can be smaller and noisier while still getting a smooth output image. In rough terms the resolution loss is a cosine term over the pixel areas involved, so it's modest.
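A toy sketch of that averaging, assuming the plenoptic raw is laid out as an s x s block of angular samples under each microlens (a simplification of real sensor layouts): averaging the s*s noisy primary pixels per output pixel cuts the noise standard deviation by a factor of s.

  import numpy as np

  def render_2d(raw, s):
      # Collapse each s x s angular block into one output pixel by averaging.
      H, W = raw.shape[0] // s, raw.shape[1] // s
      blocks = raw.reshape(H, s, W, s)
      return blocks.mean(axis=(1, 3))

  raw = np.random.rand(3000, 3000)  # toy stand-in for a noisy plenoptic raw
  image = render_2d(raw, s=10)      # 300x300 output, much lower noise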

The technology also allows a tradeoff between resolution and sharpness that is novel, including the possibility of realizing resolutions beyond the diffraction limit.

The cameras used in research projects suffered reduced resolution because they were a frankenstein modification of an off the shelf DSLR. A camera designed to be plenoptic from the beginning has different constraints.

"color aberration and aliasing"

I believe these can be addressed in the processing stage. I don't see anything about either of these issues that's insurmountable.

"Low FPS. Each image requires lots of processing"

The processing can be done at any time after the fact. The primary image capture is just that, a raw image, same as any other camera.

The processing is also relatively simple convolution which can be done via FFT. Overall it's comparable to common video and image compression algorithms, not something new that requires a supercomputer in your camera.
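For the curious, a minimal numpy sketch of convolution via FFT (a generic illustration of the technique, not Lytro's actual pipeline):

  import numpy as np

  def fft_convolve(img, kernel):
      # Zero-pad the kernel to the image size and multiply in the frequency
      # domain; this is circular convolution, O(n log n) instead of O(n * k).
      pad = np.zeros_like(img)
      kh, kw = kernel.shape
      pad[:kh, :kw] = kernel
      return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

  img = np.random.rand(512, 512)
  blur = np.ones((9, 9)) / 81.0  # toy box-blur kernel
  out = fft_convolve(img, blur)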

"Proprietary toolchain"

There's no reason we can't standardize raw plenoptic images. Also, once the plenoptic raw has been processed it can be saved as any available 2d or 3d format, from png to psd to jpeg. These files will be fine in Photoshop or on the web.

"You can just produce a composite image that's sharp all over, but why not use a conventional camera with stopped-down[2] lens, then"

This allows depth of field independent of aperture size, which is a new capability. This will greatly aid low light photography, particularly dim landscapes.

But secondly, the images contain depth information, with all that implies. This is a very different tool from a stopped-down lens. It's capable of realizing images that are physically impossible with a traditional lens and sensor.

"thrillingly expensive"

Citation needed. AFAIK each piece of technology involved is well understood and readily manufactured.


"The processing can be done at any time after the fact. The primary image capture is just that, a raw image, same as any other camera."

I suppose that's the advantage. I could see myself changing the settings so that you preview the image right after the shot (fashion, portrait) or have no preview at all and keep processing power focused on shooting as many photos as possible (sports, wildlife, candid).


It's the first version, man. This is exciting technology. I love seeing true innovation like this come to life.

I'm gonna be all over this. You know, for the kids.


The NYTimes also has an article on Lytro which suggests that they have solved the resolution problem. From http://www.nytimes.com/2011/06/22/technology/22camera.html?_... :

"The picture resolution, he added, was indistinguishable from that of his other point-and-shoots, a Canon and a Nikon. Eliminating any loss of resolution in a camera like Lytro’s, which is capturing light data from many angles, is a real advance, said Shree Nayar, a professor at Columbia University and an expert in computer vision."


There are other techniques for computational cameras that produce "flexible depth of field" -- images with objects both near and far in focus [1]. That technique works a little differently: the image detection plane is shifted during image capture, and then you apply digital signal processing. Very cool stuff.

[1] http://www.hizook.com/blog/2009/06/26/computational-cameras-...


I realize that this type of discourse is the raison d'être of this site, but I think it's pretty funny that I came to this discussion immediately after reading today's xkcd (http://xkcd.com/915/)


Mouth-open really makes a statement about the quality of cheese-steak:

http://blogs.abcnews.com/.a/6a00d8341c4df253ef01347fc42e5d97...


Truly, you are a meta-connoisseur.


"- Low FPS. Each image requires lots of processing, which means the CPU will have to chew on data for a while before you can take another image."

This will improve as processor speed improves, eventually making it a non-issue.

"- Proprietary toolchain for the dynamic images. Sure, cameras all have their particular RAW sensor formats, but this is also going to have its own output image format. No looking at thumbnails in file browsers. Photoshop won't have any idea what to do with it. Can't print it, of course."

Sure, the raw image needs to be processed, but you can export the focused product and do what you like, can't you? I can envision a raw pic and a "best guess at what you'd like to be in focus" JPEG along with it. You can then improve on the JPEG by using the raw image to change what's in focus and recreate the JPEG.


  Sure, the raw image needs to be processed but you can export the 
  focused product and do what you like, can't you?
Certainly, and that will be a cool feature. I don't think it's worth all the other tradeoffs inherent in a light-field camera.

  This will improve and processor speed improves eventually making it a nonissue.
We are not talking about the future. We are talking about what this particular first-generation product is likely to do, and I, Samuel Bierwagen, will bet you $20 that this camera won't do better than three seconds per photo.


Good point about referring to the first-gen model, though I'd be looking forward to it being able to take 1 good picture fast rather than 3 so-so pictures. Either way, I can't wait to see how well it and future versions work in a real-world environment.


"No wireless. Less space than a nomad. Lame."


I take it you bought the first gen iPod?


This is the biggest advancement in photography I've ever seen. All radical new technologies have caveats like this initially, like how LCD displays were worse than plasma for a long time. Wait a few generations for the tech to mature and I bet there will be no going back.


>cellphone-camera sized pictures.

With 5+ megapixel cell phone cameras being common these days, what resolution do you imply in that sentence?


Most cell phones have about 1 MP of signal, and 4 MP of noise.


Why, back in my day phones didn't have cameras in 'em t'all!

So, I don't know. 1280×720? 1680×1050 at most? They don't quote a megapixel number on their site, of course.


FYI Ren Ng (the founder of this company) won the 2006 ACM Doctoral Dissertation award for the research that turned into this product: http://awards.acm.org/doctoral_dissertation/



I read these papers in grad school in 2007. Apparently he's been working on commercializing it ever since. This guy is the real deal.


I remember seeing this "news" years ago...

edit: here's the article from 2005 http://graphics.stanford.edu/papers/lfcamera/

The company that flourished from this research in 2008: http://www.crunchbase.com/company/refocus-imaging

And another startup already doing this for mobile phones: http://dvice.com/archives/2011/02/pelican-imaging.php


The article is about a company named Lytro. 'Refocus Imaging' is the previous name of Lytro.


One awesome use for this technology is in microscopes! Instead of having to focus on each slide, slides can be run through much faster, photographed once, and interesting objects (like cells in a culture) can be found by processing afterwards.

And even cooler IMO, is that a display panel with proportionately sized microlenses can be used (after a little image processing) to recreate the light field for a glasses free 3d display.


Very nice. But it leaves me wondering about a couple of things:

(1) Given the info captured by the camera, can we, without further human input, create an image in which everything is in focus?

(2) What the heck are these people thinking? Going into the camera business? That means that, in order to get my hands on this technology, I am stuck with whatever zillions of other design decisions they made. One product. No competition. No multiple companies trying different ways to integrate this idea into a product. And if this company goes belly-up, then the good ol' patent laws mean that the tech is just gone for more than a decade. <sigh> Please license this.

P.S. FTA:

> Once images are captured, they can be posted to Facebook and shared via any modern Web browser, including mobile devices such as the iPhone.

Surely there must be a more straightforward, but still understandable to non-techies, way to say "the result is an ordinary image file".


A. Yes, I'd imagine so, within reason. Makes for a less interesting demo, though - we've seen lots of images with large focal depth, and lots of images with a narrower depth of focus used to call out one thing, but we've never seen snapshots which you can refocus after they are taken.

This is seriously awesome.

B. This tech isn't going anywhere. If the camera succeeds they might license it. If the camera fails they will surely try to license it. Note also that the technique is apparently not wholly new so the key patents are already running down.

And your point about all those design choices that go into the camera cuts both ways. If they license this tech to a consumer electronics company that flubs the execution they will lose money, as the lousy execution will reflect badly on the tech and will prevent it from getting popular sooner. (The sooner every camera buyer wants this tech, the more profits there will be before the patents expire.) In a world consisting mainly of (a) Apple and (b) hardware companies that cannot design software to save their lives, keeping control of your own fate seems wise. The popularity of this technique among the general public will presumably depend crucially on the UI, both when taking the photo and when displaying it. Better to screw that up yourself than outsource the screwing up to someone else. ;)


Re (1), the guy in the video gives a couple demonstrations of focusing everything in an image. It seems reasonable to assume it could be automated, if that's what you're asking.

And for what it's worth, the just-one-design thing has worked out pretty well for Apple.


(1) - cameras already do this. Photography 101 - if you use a small aperture, everything is in focus (minus diffraction and motion blur). Cheap lenses (like most cellphones) have small apertures, and don't really need to focus. Expensive lenses (like the iPhone 4 lens) often have large apertures, which let you take faster photos, and artistically blur stuff.

Photographers don't like to do this, as blurring the background draws your attention to the subject.


> Cheap lenses (like most cellphones) have low apertures, and don't really need to focus.

Some companies seem to follow this idea and then, surprise surprise, barcode scanning app doesn't really work on my phone because someone decided not to install AF with the camera :/.


This technology has been around in research articles for at least 15 years. http://scholar.google.com/scholar?hl=en&q=light+field+ph...


True, but that's usually the amount of time it takes for many research technologies to come to fruition as products.


Apologies if I wasn't clear, I was referring to the above poster's comment about this technology being patented.


The image quality seems a bit low (low contrast/greyish, lots of grain), but the concept is very cool. I also see lots of artifacts (horizontal lines) in several of the demo images:

http://www.lytro.com/picture_gallery


(1). If my memory serves, they can do this by taking one pixel from each lens and composing them into a total-focus image.
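A toy sketch of that idea, under the same simplified assumption of an s x s block of pixels per microlens (not necessarily how Lytro's sensor is arranged): keeping only the central angular sample is like shooting through a tiny aperture, so everything comes out in focus, at reduced resolution.

  import numpy as np

  def all_in_focus(raw, s):
      # Keep the centre pixel of each s x s microlens block.
      H, W = raw.shape[0] // s, raw.shape[1] // s
      blocks = raw.reshape(H, s, W, s)
      return blocks[:, s // 2, :, s // 2]

  raw = np.random.rand(3000, 3000)  # toy plenoptic raw, 10x10 pixels per lens
  sharp = all_in_focus(raw, s=10)   # 300x300 pinhole-like image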


You can have your car in any color you would like, as long as it's black.


Imagine it being combined with technology that tracks your eyes motion, focusing the part of the image you're looking at automatically.


Then take it a few steps further with a 360-degree fisheye lens strapped to your forehead, and some sort of VR helmet display for playback. Then take the camera skiing, mountain climbing etc., and you would essentially have a brain-scanning device like in the movies Strange Days or Brainscan.


Combine it with 3D glasses and maybe, just maybe, 3D movies will become bearable to watch.


Looking forward, I see interesting applications of this tech in motion graphics and film. Where 3D movies have failed in forcing the user to focus on something, I can see this bringing photos and eventually film to life in ways that let the audience control more of what they want to experience.

On the motion graphics side, I imagine all kinds of creative potential in compositing photography together with procedural or rendered graphics.


The ability to focus afterwards is at the tradeoff of image size and quality, assuming they use a microlens array similar to the study located here: http://graphics.stanford.edu/papers/lfcamera/. However, this is cleverly marketed towards the social media crowd, which has little use for high resolution photos.


I'm not an optics expert, but couldn't this be used to generate 3d depth maps? By stepping through each field depth you could find the edges of objects (by how clear they were at each depth) and map those edges onto a mesh. Effectively, doing what the kinect does but without any of the infrared projections...
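Something along these lines, sketched naively (the shapes and names are made up for illustration): score each refocused depth slice by local sharpness and take, per pixel, the depth that maximizes it.

  import numpy as np

  def depth_map(stack):  # stack: (depths, H, W) refocused slices
      # Second derivatives as a crude per-slice sharpness measure.
      lap = (np.abs(np.gradient(np.gradient(stack, axis=1), axis=1))
             + np.abs(np.gradient(np.gradient(stack, axis=2), axis=2)))
      return np.argmax(lap, axis=0)  # index of sharpest depth per pixel

  stack = np.random.rand(16, 240, 320)  # 16 toy depth slices
  print(depth_map(stack).shape)         # (240, 320) map of depth indices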


You could probably do it, but it's not clear why it would be superior to stereo vision. Both approaches have the same pitfalls (CPU-intensive, need texture to work well) and stereo vision is largely a solved problem that works on commodity hardware.


Because you don’t need a second sensor and a second lens?


Stereo vision does need two sensors and two lenses, where a micro-lens approach would only require one of each. However, the micro-lens camera would need a much larger and higher resolution sensor to produce a depth image that has the same resolution of the equivalent stereo camera.

Ignoring the size of the sensor, producing two standard camera lenses will always be cheaper than producing an array of multiple (i.e. more than two) micro-lenses. This is doubly true considering that the micro-lens technology is already encumbered by patents.

Finally, stereo is very well understood and has already been implemented on the GPU, on FPGAs and in ASICs (commonly known as STOC, Stereo On-Chip). I would personally love to see a demo of a micro-lens array used for creating a depth map, but I just don't see any practical advantages over stereo.


Sure, but you get it for free with this camera. It wouldn’t make sense to build a second sensor and lens into a light field camera.

This is obviously not the main use case – and we seem to be talking past each other.


Well, to be fair, you actually need a whole array of (micro)lenses, so 2 lenses sounds pretty reasonable.


That’s definitely possible but low contrast regions without any edges to detect (e.g. a grey wall) are probably a problem.


Raytrix already have a plenoptic camera on the market:

http://raytrix.de/index.php/r11.185.html


Now just give me this in full stereoscopic, hi-res, for my cellphone. With video.

Of course (hopefully), that's version 4 or 5. This initial roll-out is looking great! Can't wait to play around with one of the units in the local photo shop.

Looking at the demos, I wonder what the depth-of-field is? Is it entirely calculable, or is it just a few feet and then the user sets the target? It looks like it is tiny, but I'm guessing it's set that way to show off the cool features of the technology.


This sounds very exciting. To play the devil's advocate however, on most of the example photos on Lytro's site, you really only need two points of focus - roughly near and far. Clicking on those two shows you everything there is to see in a picture.

If someone comes up with software to allow refocusing on two distance points with existing photos, they could eat Lytro's lunch. Can Picasa do something like this?


If you have a photo that's all in focus (i.e. f/15 aperture or something) you can throw parts out of focus later by using masks in a photo editing tool, though it's time-consuming.

I'd suggest you could get a similar effect with a camera that had two or three lenses using different focal lengths. Fujifilm already released a camera with two lenses: http://www.dpreview.com/news/0809/08092209fujifilm3D.asp so it's just a software modification for that.


Using a photo where all is in focus you only need a depth-map to process the blur. I think something like the Kinect is making this possible already.


According to Wikipedia this is called "focus stacking", combining depths of field to create a larger depth of field. The page lists a number of apps/plugins that blend photos to create this effect:

http://en.wikipedia.org/wiki/Focus_stacking
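A minimal sketch of that kind of blend (naive per-pixel selection; the real tools listed on that page also align the frames and smooth the seams):

  import numpy as np
  from scipy.ndimage import laplace

  def focus_stack(frames):  # frames: (n, H, W) grayscale, pre-aligned
      # Per pixel, keep the value from whichever frame is locally sharpest.
      sharpness = np.array([np.abs(laplace(f)) for f in frames])
      best = np.argmax(sharpness, axis=0)
      return np.take_along_axis(frames, best[None], axis=0)[0]

  frames = np.random.rand(3, 240, 320)  # three toy focal distances
  composite = focus_stack(frames)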


Once it can capture high speed video of the light field, so that you can actually change the timing and exposure of each shot, as well as the focus... then we'll really be somewhere. Then you can just aim the camera, click the button some time shortly after something cool happens, and go back and get the perfect shot. Hell, capture a 360 degree panorama and you can even aim after the fact!


This is much more useful than a simple depth map, since it works with translucent and amorphous things like steam, and other things that are hard to model with meshes like motes of dust. Also, if you have a shiny object, focusing at one depth might show the surface of the object in focus, but focusing at a different depth would show the reflection in focus.


Doesn't Magic Lantern already do this? I mean the Focus Bracketing shoots 3 (or maybe more) pictures as fast as the camera is able to do it at different focal distances. You just make sure the depth of field is wide enough to cover the distances between those points and you should have a similar effect.


It's possible to buy a light field camera right now, for example from a German company named Raytrix (http://raytrix.de/index.php/home.181.html). I don't know whether they are the only example or whether there are other companies.

They don't name a price on their website (write them to find out) and, looking at the applications they are naming on their website (http://www.raytrix.de/index.php/applications.html), they certainly do not target consumers.

Here are their camera models if you are interested: http://www.raytrix.de/index.php/models.html


This is a very interesting concept, I would be doubly interested to see this technology used for video in camcorders. However, I'm curious to see if they have the resources available to go toe to toe with Nikon, Canon, and Olympus. The camera industry is so competitive... and the life cycles on digital cameras are so quick nowadays. They may find it difficult to keep up.


I don't think they need to keep up with the middle-to-high end yet.

Given the choice of glass and bodies, I think most professionals would still keep with Nikon and Canon (especially for print).

But for entry-level, I could see it being a killer because of the ease.


Semi-pros. Don't forget about Medium Format, etc: http://en.wikipedia.org/wiki/Medium_format_%28film%29#Digita...


These guys clearly target investors' money, not consumers. 1) Lots of hype long before the product is released. 2) Ignoring the market trend (consumers prefer smartphone integration over picture quality). 3) Instead of focusing on refining and selling the technology, they want to reinvent the wheel and produce their own camera.

I'd say that investors would lose lots of money on that venture.


I'd say that investors would lose lots of money on that venture.

I'd bet against that. They definitely solve a problem people have with taking focused photos and I see this technology becoming even more popular as it becomes integrated into videos. Just the other day I tried to take a quick photo of my niece and cat playing, and trust me when I tell you it was a real hassle trying to keep them in focus. This issue is just one of the many that can be solved with this technology. Even if there isn't strong demand for their own camera, they still should be able to license the technology to camera makers down the road and eventually integrate it into the smartphone market.


I'm not saying that technology would be useless. It might be at least somewhat useful.

What I'm saying is that this particular business approach would fail (too much hype, not focusing on the team's advantages, ignoring customers' preferences).

Investors would over-invest, but business would not get enough revenue to pay them back.


Investors would over-invest, but business would not get enough revenue to pay them back.

We shall see, but just for the fact that their product and technology improve a previous experience in such an obvious way, I have much less of a problem seeing this company receive a lot of hype and funding than, say, Color.


See also the discussion three weeks ago on hn: http://news.ycombinator.com/item?id=2596377

There is also an (unrelated) iPhone App by the inventor for playing with depth of field: http://sites.google.com/site/marclevoy/


Foveon was supposed to revolutionize digital photography too. Hell, there was even a book written about it.


Props to the innovation, but I doubt the appeal will suffice for widespread reach in the consumer market. Even if it did, a licensing deal would be more appropriate, so they can keep investing in the innovation, which is what they're good at, rather than distribution.


How closely is Lytro's method related to Adobe's Magic Lens demonstrated last summer? http://www.youtube.com/watch?v=-EI75wPL0nU


As a geek, I think this is totally awesome. As a casual photographer, I'm less excited. I've pretty much got the hang of focusing my shots as I'm taking them. Why focus later what I can focus now?


I'm a casual photographer too.

The advantage I see in the future is that you are 'guaranteed' a potentially sharp picture.

Even when I do portraits with a wide aperture (1.2/1.4) there are times when I miss the focus on a tiny detail that I wish was more in focus. And since I prefer doing candid poses, redoing a situation just isn't that preferable.

For sports or wildlife, I imagine it can be hard to focus too, sometimes just missing a shot of a bird because of a split second.

It does make me wonder how the motion blur on this would work.


This might be a bit far into the future, considering the speed and resolution deficiencies that these first products are bound to have, but in the case of sports and nature photography, this could greatly improve the ratio of keepers to frames shot, by making those otherwise perfect but slightly out-of-focus shots viable. Even with the amazing AF systems available today, a significant number of frames don't have perfect focus. This could provide a solution for that problem.


Well. One use would be to experiment with different focuses and pick the best later. It is not a rarity for me to look at old pictures and think that a different focus would have worked better. (Then again, I am no great photographer. For all I know, others never feel this need. So take that with a pinch of salt.)


Another Segway? Let's see if any reviewers ever get their hands on one.


Combine this with eye tracking (to the level that my focal depth can be detected) and an automatic lens over my monitor and you'd have a pretty immersive picture.


Didn't Adobe showcase the same technology in September 2009?

Of course they haven't delivered a consumer product with it yet... But neither has this company.

Let's wait and see...


I think some kind of Ken Burns effect with transitions between the different planes would make a good screensaver.


There may be an opportunity in 'doing something' with all the data collected...


Can this light field tech be used in reverse to create 3D holographic images?



