Embedded Software Engineer looking for a part-time engagement. I have a working setup at home with all the necessary equipment. I have experience with different architectures and frameworks (FreeRTOS, Espressif, Zephyr RTOS), but I'm currently looking mainly for Zephyr-based development.
Has anyone else noticed the leg swap in the Tokyo video at 0:14? I guess we are past uncanny, but I do wonder whether these small artifacts will always be present in generated content.
It also begs the question: if more and more children are introduced to media from a young age and are fed ever more generated content, will they still be able to feel that "uncanniness", or will they become completely numb to it?
There's definitely an interesting period ahead of us; I'm not yet sure how to feel about it...
There are definitely artifacts. Go to the 9th video in the first batch, the one of the guy sitting on a cloud reading a book. Watch the book; the pages are flapping in the wind in an extremely strange way.
Yep, I noticed it immediately too. In fairness, though, it's subtle.
I'm not that good at spotting imperfections in still images, but in the video I immediately felt something was not quite right.
There have been children who reacted with irritation when they couldn't swipe away real-life objects. The idea is to give kids enough real-world experience that this doesn't happen.
I noticed at the beginning that the cars are driving on the right side of the road, but in Japan they drive on the left. The AI misses little details like that.
(I'm also not sure they've ever had a couple inches of snow on the ground while the cherry blossoms are in bloom in Tokyo, but I guess it's possible.)
The cat in the "cat wakes up its owner" video has two left front legs, apparently.
There is nothing that is true in these videos. They can and do deviate from reality at any place and time and at any level of detail.
These artifacts diminish with more compute. In four years, when they attack it again with 100x the compute and better algorithms, I think it'll be virtually flawless.
I had to go back to 0:14 several times to see whether it was really unusual. I see it now, of course, but I could probably have watched it 20 times without ever noticing.
I don't think that's the case. I think they're aware of the limitations and problems. Several of the videos have obvious problems if you're looking — e.g. people vanishing entirely, objects looking malformed across many frames, objects changing size incongruently with perspective, etc.
I think they just accept it as a limitation, because it's still very technically impressive. And they hope they can smooth out those limitations.
Certainly not perfect... but "some impressive things" is an understatement. Think of how long it took to get halfway-decent CGI; this AI is already better than clips I've seen people spend days building by hand.
Embedded Software Engineer looking for a full-time position. I have a working setup at home with all the necessary equipment. I have experience with different architectures and frameworks (FreeRTOS, Espressif, Zephyr RTOS), but I'm currently looking mainly for Zephyr-based development.
Contact me only if you're interested in having a call to meet and discuss the position.
Remote: Yes
Willing to relocate: No
Technologies: Zephyr RTOS
Résumé/CV: on demand
Email: neutral1@net.hr