I am using Calibre with my 6 inch Kindle Paperwhite. I notice that it displays books bought from the Kindle store a lot more nicely than the same books in .epub format sideloaded via Calibre. Do you know how I can improve this?
Likely there's a Calibre plugin to convert to Kindle's format. Kobo also supports .epub out of the box, but the software really works best with their .kepub format. There's a Calibre plugin which automatically converts .epub into .kepub when sending to the device.
Looks cool but seems like it doesn't work on torch 2.0
"AttributeError: module 'torch' has no attribute 'export'"
The torch.export API is currently in active development with planned breaking changes. The installation guide for this is still very minimal. Does anyone know how to get it working on torch 2.0?
I haven't managed to successfully export my custom ViT model yet, but I've not had an issue accessing the export methods in torch 2.3 within the nvcr.io/nvidia/pytorch:24.02-py3 container.
I may have some more time to debug my trace tonight (i.e. remove conditionals from model + make sure everything is on CPU) and will update if I have any new insights.
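For what it's worth, a quick way to see what's going on: torch.export first shipped in PyTorch 2.1, so on 2.0 the attribute genuinely doesn't exist, which is exactly the AttributeError quoted above. A minimal sketch (the `Tiny` module is just a made-up example):

```python
import torch

if hasattr(torch, "export"):
    # PyTorch >= 2.1: torch.export.export traces a module into an ExportedProgram
    class Tiny(torch.nn.Module):
        def forward(self, x):
            return torch.relu(x) + 1

    ep = torch.export.export(Tiny().eval(), (torch.randn(2, 3),))
    print("exported:", type(ep).__name__)
else:
    # PyTorch 2.0: the module doesn't exist at all
    print(f"torch.export needs PyTorch >= 2.1, found {torch.__version__}")
```

So rather than hunting for an install recipe on 2.0, upgrading to 2.1+ (or using a container that ships a newer torch, like the one mentioned above) is the practical fix.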
Go to any engineering school, and I mean real engineering like Mechanical, Electrical, or Civil, and you will see almost no one uses a Mac. The software just isn't there. And even when the software is available, the exorbitant price of RAM makes it a bad deal for many students.
Sure, but that represents a lot of fast cuts balanced out by a selection of significantly longer cuts.
Also, you're less likely to want to upscale a modern movie, which is probably already high resolution, than an older movie that was recorded on older media or encoded in a lower-resolution format.
Huh, I thought this couldn't be true, but it is. The first time I noticed annoyingly fast cuts was World War Z; for me it was unwatchable, with tons of shots around one second each.
The first time I noticed how bad the fast cuts in most movies are was when I watched Children of Men by Alfonso Cuarón, who often uses very long takes for action scenes:
So sad they didn't keep to the idea of the book. Anyone who hasn't read the book should; it bears no resemblance to the movie aside from the name.
It's offtopic, but this is very good advice. As near as I can tell, there aren't any real similarities between the book and the movie; they're two separate zombie stories with the same name, and honestly I would recommend them both for wildly different reasons.
And similarly, I, Robot, which is much more enjoyable when you realize it started as an independent murder-mystery screenplay that had Asimov’s works shoehorned in when both rights were bought in quick succession. I love both the movie and the collection of short stories, for vastly different reasons.
Its style is based on the oral history approach used by Studs Terkel to document aspects of WW2 - building a big picture by interleaving lots of individual interviews.
The Lost World is also a great book. It explores a lot of interesting stuff the film completely ignores, like the fact that the raptors are only rampaging monsters because they had no proper upbringing, having been born in a lab with no mama or papa raptor to teach them social skills.
Disagree, Jurassic Park was an amazing movie on multiple levels, the book was just differently good, and adapting it to film in the exact format would have been less interesting (though the ending was better in the book.)
I think that, like the motorcycle chase they borrowed from The Lost World for Jurassic World, they also have a scene with those tiny dinosaurs pecking someone to death.
The textures of objects need to maintain consistency across much larger time frames, especially at 4k where you can see the pores on someone's face in a closeup.
I'm sure if you really want to burn money on compute you can do some smart windowing in the processing and use it on overlapping chunks and do an OK job.
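The overlapping-chunk idea can be sketched in a few lines. This is just an illustration of the windowing, with made-up chunk/overlap sizes, not how any particular upscaler actually does it:

```python
def overlapping_chunks(n_frames, chunk, overlap):
    """Yield (start, end) frame windows covering n_frames, each
    sharing `overlap` frames with its neighbor so results can be blended."""
    assert chunk > overlap >= 0, "chunk must be larger than overlap"
    step = chunk - overlap
    start = 0
    while start < n_frames:
        end = min(start + chunk, n_frames)
        yield (start, end)
        if end == n_frames:
            break
        start += step

# e.g. 10 frames, 4-frame windows with 2 frames of overlap:
print(list(overlapping_chunks(10, 4, 2)))  # [(0, 4), (2, 6), (4, 8), (6, 10)]
```

The overlapping regions are where you'd cross-fade or average the two chunks' outputs to hide seams, at the cost of processing those frames twice.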
I believe the relevant data point when considering applicability is the median shot length to give an idea of the length of the majority of shots, not the average.
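To make the median-vs-mean point concrete, with entirely made-up shot lengths: a few long takes drag the mean way up while the median still reflects what most shots look like.

```python
from statistics import mean, median

# Hypothetical shot lengths in seconds: lots of fast cuts plus two long takes
shots = [1, 1, 1, 1, 2, 2, 3, 30, 45]

print(f"mean:   {mean(shots):.1f} s")   # ~9.6 s, dominated by the long takes
print(f"median: {median(shots)} s")     # 2 s, what the majority of shots are
```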
It reminds me of the story about the Air Force making cockpits to fit the elusive average pilot, which in reality fit none of their pilots...
For real. To the degree that I compulsively count seconds on shots until a show/movie has a few shots over 9 seconds, then they "earn my trust" and I can let it go. I'm fine.
I actually can't wrap my head around this number, even though I have been working on and off with deep learning for a few years. The biggest models we've ever deployed on production still have less than 1B parameters, and the latency is already pretty hard to manage during rush hours. I have no idea how they deploy (multiple?) 1.8T models that serve tens of millions of users a day.
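A back-of-envelope calculation, with every number an assumption (fp16 weights only, ignoring KV cache, activations, and redundancy), shows why a model that size can't fit on one machine and has to be sharded:

```python
import math

# All figures are illustrative assumptions, not deployment facts.
params = 1.8e12          # rumored 1.8T parameters
bytes_per_param = 2      # fp16 / bf16
gpu_mem_gb = 80          # e.g. an 80 GB accelerator

weights_gb = params * bytes_per_param / 1e9
min_gpus = math.ceil(weights_gb / gpu_mem_gb)

print(f"{weights_gb:.0f} GB of weights -> at least {min_gpus} GPUs for weights alone")
```

And that's just storing the weights once; serving tens of millions of users means many such sharded replicas plus aggressive batching. Models that large are also reportedly mixture-of-experts, so only a fraction of the parameters are active per token, which helps latency even though all of them must sit in memory.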