
I wonder if it might also be tech related. Like how AAA games these days all have the same “default Unreal Engine look” thanks to its out-of-the-box settings being good enough not to motivate further tinkering, maybe your run-of-the-mill Hollywood big-budget action films also use the same de facto default equipment and out-of-the-box setups?

I think there is a lot to be said about the technology factor. On the movie side, special-effects-heavy stuff is still largely shot on wall-to-wall green or blue screen sound stages. Motion capture and motion tracking tools have gotten insanely good, so studios can do a lot more than in the days of Lucas' Star Wars prequels (which were notorious wall-to-wall green screen films of their era), but budgets, time constraints, and "crunch" still limit how well the digital and practical elements integrate and the overall look of the effects. Also, there are only so many ways to light a green or blue soundstage so that things are readable and you don't get ugly color spill that the effects team has to work even harder to remove. (I think the base lighting options being so homogeneous is a subtle but huge factor in everything feeling the same. Differences in lighting have always been one of the biggest "cinematic languages".)

Special effects houses working on those sorts of movies are probably using heavier, slower renderers than Unreal Engine today, but given that the average lifespan of an effects house is still appallingly short, something like two years and scenes scattered across one and a half movies, those renderers are still generally off-the-shelf or borrowed from other companies. Industrial Light & Magic is about the only effects house that no one wants to kill, so it also becomes something of a homogenizing factor through whatever tools it white-labels and rents out to other effects houses. Disney owning ILM seems like an obvious reason why almost everyone working on an MCU film is renting or aping the ILM house style at all times.

It's also interesting that right now TV can't afford as much time with the motion tracking/motion capture tools and the studio space for wall-to-wall green screen, so in effects-heavy shows we're seeing a lot more experimentation, some of which I'd argue looks better than the movies currently do. As a big example, ILM's TV efforts with The Volume (CBS Paramount has its copycat AR Wall, and other studios have copycats now too) are fast changing the look of TV. Because these LED walls are their own light source, dynamically showing what the background should be doing, there seem to be a ton more options for lighting scenes in these rooms/volumes/spaces, or for doing shots that would be tricky or expensive even with how good motion tracking and motion capture have gotten. Of course, the trade-off is that real-time background rendering requires an engine more in the vein of Unreal Engine, and in most cases allegedly is literally Unreal Engine, so the "default Unreal Engine look" starts to apply literally to background elements in TV shows.


It's not just CGI, background replacement, and other effects work. Some cinematographers have been noting that the increasingly flat, uninspiring look of much modern content is partly due to the amazing dynamic range of modern camera imagers as well as HDR production workflows. While wonderful, this also enables a kind of laziness (or expediency) when shooting. You don't have to light many scenes deliberately (or at all) as long as there's enough ambient light. While this can make styles like cinéma vérité higher quality, it also lets film-makers just ensure there's enough flat light overall and call it good enough. Intentional lighting is hard and time-consuming, but it's also one of the most expressive visual elements of film-making.

Another factor is that dramatic lighting tends to dance closer to the edge in terms of quality. Lighting a high-contrast scene with inky blacks in one area and your hero elements in moody shadow means those very dark areas are at risk of slipping below the imager's noise floor. The same is true when extremely bright elements sit near the clipping point. While modern imagers and HDR capture give film-makers more latitude than ever to avoid these problems, perversely, the same luxuries seem to be making younger film-makers less skilled, or at least less confident, in their ability to dance close to the edges of too dark or too bright.

Newer post-production color grading tools also expand what's possible to "just fix in post." Once again, this is wonderful when we need to fix a mistake, or for amateur and hobby productions which don't have the time, budget, gear, or even knowledge to do creative lighting. The downside is an ever-present temptation, when under time and/or budget pressure, to take the shortcut: bounce a key light off the ceiling and a fill off the wall and call it done. It's sad because modern imager sensitivity and battery-powered, micro-sized lighting instruments now make it easier, cheaper, and faster than ever to do intentional lighting at a Vittorio Storaro level. Storaro might spend half a day with a crew of six gaffers lighting one scene that a four-person indie crew could largely recreate today in an hour with $1k of lights, clamps, and stands from Amazon.

This recent video discusses the problem from a cinematography perspective: https://www.youtube.com/watch?v=EwTUM9cFeSo


I wonder if it has to do with the snapshot timing resolution being asymptotically high when you have not yet experienced enough passage of time.

As you know, we generally experience timescales logarithmically with age, i.e. each increment of time is judged relative to the total passage of time already experienced, which is why children tend to get bored much more quickly than adults: waiting ten minutes constitutes a much larger fraction of a child's experienced life than of an adult's.

Since a baby/toddler has only experienced a tiny amount of time thus far, their tiny reference "yardstick" would have their memory snapshotted at an untenable timing resolution thanks to division by almost zero. So perhaps the brain does some form of filtering to prevent the entirety of experienced memory from being dominated by the super-early ages, or perhaps there is some equivalent of an overflow in its internal counter.
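
A tiny toy sketch of that proportional model (purely my own illustration of the metaphor, not anything from actual memory research): weight each moment by 1 / (time already lived), so the perceived length of an interval works out to ln((t + Δt)/t), which shrinks with age and blows up as t approaches zero.

    import math

    def subjective_weight(age_seconds, interval_seconds):
        # Toy model: each moment is weighted by 1 / (time already lived), so the
        # perceived length of [age, age + interval] is ln((age + interval) / age).
        # Note it diverges as age -> 0: the "division by almost zero" problem.
        return math.log((age_seconds + interval_seconds) / age_seconds)

    YEAR = 365.25 * 24 * 3600
    TEN_MINUTES = 10 * 60

    for age_years in (0.1, 5, 40):
        w = subjective_weight(age_years * YEAR, TEN_MINUTES)
        print(f"{age_years:>4} yr old: perceived weight of 10 minutes = {w:.2e}")

Under those made-up assumptions, the same ten minutes carries roughly eight times the subjective weight for the five-year-old as for the forty-year-old, and it tends toward infinity near birth, which is where the filtering/overflow hand-waving above would have to come in.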

Keep in mind that the above are purely metaphorical as a functional description, and not to be treated as a literal hypothesis on the mechanistic operation of memory.


An interesting idea, although I think it's somewhat unlikely. This is one of those instances where I would assume the brain is doing something carefully calibrated. But it'd be easiest to just not record as many experiences for the first few years rather than do a complete flush.

My guess has always been that there is some optimal approach to learning that works by developing a really basic schema (what does a person look like, which ones are mine, roughly how does this body thing work, etc.), then flushing all the training data and starting again. I vaguely expect the machine learning people to develop some similar process where they get a lot of value from lightly conditioning a model before sinking compute into a full training run.

Basically, my wild guess is there is a lot to learn in a baby's first few experiences but the risk of mis-encoding the lessons is so high that the brain uses the data to bootstrap but then throws it out as too unreliable and starts again once it orients.


> which is why children tend to get bored much more quickly than adults: waiting ten minutes constitutes a much larger fraction of a child's experienced life than of an adult's

I think boredom is the brain's hunger more than its watch. It's about the needs of a developing brain, rather than plain time.

Right after the bulk of neurogenesis the brain goes into synaptic pruning where it strengthens some synapses and eliminates others. Developmental pruning is experience dependent and the scale of the process in childhood needs a lot of experiences to fuel it.

Those experiences are literal "food for thought". Building the body takes more food, building the brain takes more experiences.


I disagree.

I think the brain compresses experiences.

If you have a routine of k activities every day, you won't remember every instance of that routine, but you may remember what the routine was.
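
A loose analogy in code (just a toy illustration of the compression idea, with entirely made-up "days"): if repeated days get de-duplicated into a schema plus a count, a thousand routine days take about as much to store as one.

    from collections import Counter

    routine = ("wake", "commute", "work", "gym", "sleep")
    unusual = ("wake", "flight", "street food", "get lost", "sleep")

    # 1000 near-identical days plus a few outliers...
    days = [routine] * 1000 + [unusual] * 3

    # ...compress down to two remembered "schemas" with counts attached.
    compressed = Counter(days)
    print(len(days), "days ->", len(compressed), "schemas:", dict(compressed))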


N of 1, but my experience doesn't fit this common explanation. Big life changes, travel, having kids... none of it has done a thing to disrupt a steady progression to the point where (around age 40) a year feels subjectively about as long as maybe three months did in high school. It started being noticeable around age 25 IIRC, and time's done nothing but keep getting faster. This doesn't revert, at all, when things get shaken up, not even temporarily. It's still "you know that place we went the other day... oh, shit, that was six weeks ago".

Exactly what I was going to say. Go travel through strange lands for 3 months and compare the feeling of the passage of time to 3 months doing your normal routine.

I’ve heard great things about the fun-loving culture of the programming team at a certain radiotherapy vendor...

I chose my words very carefully

An article about Hans Bethe without a picture of his huge forehead just doesn't seem right.

I don't think he actually had a huge forehead; it's just that he balded front to back, which made it look like his forehead just kept going.


what's with the forehead fad?

1^100 = 1 though

That's a typo in the response, I believe. The screenshot shows 10^100 (10 to the power of 100).

Perhaps a better term to differentiate it from regime-run ones -- "Ethically North Korean Restaurants"?

Are North and South Korea ethnically distinct?

Probably not in any meaningful way, but I would agree with the above comment that they are ethically distinct.

Yes. Why would they not be?

"Told you so."

That's personal pronouns.

I identify as a conspiracy theorist, my pronouns are: Told/You/So.


Specifically, "told you so" as we all sink into the abyss.

And birds are at 2nm process nodes while mammals are 22nm

looks great, have you considered making a startup business for this?

They should track office furniture provenance with blockchain; that way you could verify which startup it came from, and then when your startup fails and you sell the Aeron chairs again it can be an added bonus to the item's value.

I’ve always wondered why they don’t just call them “luxel displays”.

By borrowing a term from graphics rendering that describes an abstract platonic idealization, it would perfectly capture the meaning of “the holy grail of each pixel being an ideal self-contained light emitter”.

Engineers in general often seem overly fixated on naming things to describe “what the thing is made of” instead of “what the thing does”.



