You might check out Mitsuba (http://www.mitsuba-renderer.org/) and LuxRender (http://www.luxrender.net/). Both are more modern open-source renderers with Python APIs, and I know that Mitsuba is quite popular in the graphics research community (in part because that's where it comes from).
Berkeley, CA - Contract - REMOTE - iOS front-end engineer
We’re bringing interactive cinema-quality physics to mobile devices. We need someone to help us build a slick, seamless iOS app to show off our awesome physics. If you want to push the frontiers of computer graphics, work with brilliant researchers, and build a game with the most beautiful interactive 3D liquids the world has ever seen: come talk to us!
Our team is three professors and a Ph.D. student, from CMU and Berkeley. We have a track record of producing great research, with six SIGGRAPH papers among us this year alone. This is a research project, not a company, but we do have a reasonable budget. If all goes well, we'll be presenting this project at SIGGRAPH next year.
If you write fantastic iOS apps, are interested in computer graphics, and (ideally) have some experience with video on iOS, you should get in touch!
Send an email to firstname.lastname@example.org with your resume and tell us a bit about why you’re interested.
In any case, running global illumination often causes a major increase in rendering time. So it's understandable that Pixar, which has to render a huge number of frames at huge resolutions, did not traditionally use it much.
There's also another factor at play, which is directability. Physical correctness is not usually a priority except insofar as it advances the artistic goals of the people making the movie. If the director says, "can you make the right side of that table look less red?", you need to have some way for the artist to achieve that goal, even if that's not how the scene would "really" look. I expect that developing new tools and processes to allow precise manipulation of the lighting in globally illuminated scenes was at least as much of a barrier as the additional rendering time, if not more.
For an interesting parallel, this is analogous to my experience with emergent gameplay when I was in the game industry. Everyone really likes the idea of emergent gameplay and the open-endedness and flexibility it gives you. But you sacrifice a lot of control when you go that way. This can leave game designers and producers feeling like their hands are tied when the game doesn't play the way they want.
Less flexible, more scripted behavior is often the smarter choice when you want to be able to ensure a certain gameplay experience.
And less flexible, more scripted behaviour is one of the biggest things driving me away from gaming these days. Most games seem to end up as a sequence of action bubbles punctuated by cut-scenes, often with heavy-handed hints about the "correct" way to handle the situation - sometimes even unwinnable (through e.g. infinitely spawning enemies) until you do things the "right" way.
And the resulting primary gameplay experience is boredom; I felt this most heavily recently with BioShock Infinite.
The other type of game follows the open-world formula, as in Assassin's Creed and GTA, and to a certain extent Fallout, Skyrim, etc. But these become boring in another way: they rely on making navigating the territory interesting, but eventually the novelty wears off and you just want to enable the "instant teleport" function.
I still miss games like Thief, where navigating the territory was the main challenge of the game, but the territory was carefully designed, still very open, and not revisited often enough to become boring. Dishonored came within 60%, but the player character was too powerful.
This is something that is often overlooked in any analysis of global vs. local illumination. Local illumination gives you perfect control and allows you to "paint with light", which is the cornerstone of the Pixar lighting process.
We used GI at Pixar when it was appropriate, even at the expense of long render times - that is to say, only when it made the final product look better. How you get to the result doesn't matter, only what it looks like on screen.
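To make the directability point concrete, here is a toy Python sketch (my own illustration, not anything resembling Pixar's actual pipeline): with purely local illumination, each light's contribution to a surface point is an independent additive term, so an artist can retune one light without any side effects on the rest of the image.

```python
# Toy direct (local) illumination: each light's contribution is an
# independent additive term, so an artist can retune one light freely.
# All names and values here are illustrative, not any production setup.

def lambert(normal, light_dir, light_rgb, albedo_rgb):
    """Single-light Lambertian term: albedo * light * max(0, N.L)."""
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(a * c * ndotl for a, c in zip(albedo_rgb, light_rgb))

def shade(normal, albedo_rgb, lights):
    """Sum per-light terms; no inter-reflection, hence full control."""
    out = (0.0, 0.0, 0.0)
    for light_dir, light_rgb in lights:
        term = lambert(normal, light_dir, light_rgb, albedo_rgb)
        out = tuple(o + t for o, t in zip(out, term))
    return out

# "Make the right side of the table look less red": dim the red channel
# of the key light aimed at it; no other light's contribution changes.
normal = (0.0, 1.0, 0.0)
albedo = (0.8, 0.7, 0.6)
key = ((0.0, 1.0, 0.0), (1.0, 0.9, 0.8))   # overhead key light
fill = ((0.0, 1.0, 0.0), (0.2, 0.2, 0.3))  # dim fill light
before = shade(normal, albedo, [key, fill])
less_red_key = (key[0], (0.5, 0.9, 0.8))   # artist dims key's red only
after = shade(normal, albedo, [less_red_key, fill])
```

With global illumination, by contrast, light bounces couple every surface to every other, so a local tweak like this can ripple through the whole frame - which is exactly the control problem the comment above describes.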
Interesting. The OP got me thinking along the lines of manually tagging salient features of each model (as well as ranking models by salience, either manually or automatically based on criteria related to the object that the model represents).
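A minimal sketch of what that could look like (all model names, tags, and scores below are hypothetical, purely to illustrate the data structure): each model carries hand-assigned salient-feature tags, and ranking can be either manual or automatic - here the automatic criterion is simply the tag count.

```python
# Hypothetical sketch of the tagging idea above: models carry manually
# assigned salient-feature tags and can be ranked either by a manual
# score or by a simple automatic criterion (here, number of tags).
from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    tags: list = field(default_factory=list)  # salient features, hand-tagged
    manual_salience: float = 0.0              # optional hand-assigned score

def rank_by_salience(models, automatic=False):
    """Sort most-salient first; automatic mode scores by tag count."""
    key = (lambda m: len(m.tags)) if automatic else (lambda m: m.manual_salience)
    return sorted(models, key=key, reverse=True)

models = [
    Model("teapot", ["spout", "handle", "lid"], manual_salience=0.4),
    Model("mug",    ["handle"],                 manual_salience=0.9),
]
manual_order = [m.name for m in rank_by_salience(models)]
auto_order = [m.name for m in rank_by_salience(models, automatic=True)]
```

A real automatic ranking would presumably use criteria tied to the object the model represents, as the comment suggests, rather than a raw tag count.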
What would worry me is that I don't understand what Facebook really wants out of this acquisition, and why this will motivate it to do right by Parse users. It looks to me like a number of other people in this thread don't understand either and are coming up with their own theories, few of which are good news for the developer.
Long story short: Facebook needed a way to be a platform for other mobile app companies' products without building a new OS.
Developer mindshare is a big deal for technology companies. It has been argued that Microsoft's dominance in the '90s was fueled by its ownership of the Windows API. Because Windows had the most users, Microsoft owned the main platform where software developers and consumers would meet. While there was often friction between MS and the rest of the software community, they had a mutually beneficial relationship.
Microsoft collected rent and could leverage other people's work in making its value proposition to customers (i.e., if you wanted to game seriously, you needed to run Windows). And all of those developers did not have to write their own operating system or deal with all of the different computer companies.
Because Facebook apps have turned out to be more attractive as a way to access its social-media users (e.g., for dating services) than as a channel for general-purpose software products, Facebook needs a new way to grab developer mindshare.
Amazon didn't care about making its own mobile operating system because it has AWS; that's why it just forked Android. Facebook doesn't want to buy a whole new mobile operating system in 2013, because BlackBerry and Windows Phone have shown how expensive it is to convince customers that you are a viable competitor to Android and the iPhone.
Being a mobile backend-as-a-service lets Facebook collect rents and work with developers while avoiding that big marketing effort. On the other hand, it takes on risks similar to Netflix's: Facebook needs to prove itself as valuable to Android and iOS developers as Netflix is to Verizon and Comcast, or it will get jerked around.
If you sympathize with the author: do not post things like this. Listen to him. What makes you think you were the intended audience? What makes you think you know why he was writing? Do you see the irony in you trying to "educate" a black person on talking about racism?
Don't fault people who have to put up with this shit, day in and day out, for not having limitless reserves of patience. His job isn't Racism Educator, so why should he have to act like it?
I mean, yeah, ideally everyone is infinitely calm and can call these things out in a measured way every time they happen. But you're asking a lot from the victims here in order to spare the feelings of the aggressors. (Perhaps unintentional aggressors, but aggressors just the same.) What you're suggesting isn't easy.
And what's to say that the person he calls out will take it well? Look at what happened at his workplace: they told him that the things he was complaining about were only jokes and that he was too sensitive. Look at what is happening in this thread: people are complaining about his tone, complaining about his word choice, and insinuating that he's making things up. That's not even going into the people who start out ostensibly agreeing before segueing into what sounds like their main point: how racist they think the author was being.
So give the tone argument a rest. It's one of the most reliable distractions that people fall back on to avoid discussing racism, sexism, and all other kinds of oppression, and it's already taken over way too much of this thread.
It looks to me like the robot just exhibits less motion in general. This might be because (as far as I can tell) the authors optimize only to match a set of static poses, not for physical or perceptual realism of motion.