"Essentially, for the HD release of Star Trek, all people had to do was scan each episode. For The Next Generation, they would have to scan all those original pieces of film and then edit together each episode again, themselves. It’s more difficult, more expensive, and much more time-consuming.
"Unfortunately, it wasn’t actually worth it. Sales of the extravagant TNG remaster—original retail price $118 for just season one—failed to reach CBS and Paramount’s expectations. A similar process would have to be done for both DS9 and Voyager—and would actually be even harder."
Well, there's your problem right there. Expecting three figures for a single season of anything is just pure madness.
Yes, this means if a show lasted more than 5 to 7 seasons I expect a discount on the remastered version!
How many people bought an entire season for that price is another matter, but that was the asking price.
US release: Jan 95
UK VHS release: Jun 95
UK TV release: Sep 96
1) Throw the top/bottom parts of the frame away
2) Stretch the frame
Neither is acceptable.
Japanese DVDs/Blu-rays for anime sell for upwards of $40 and usually only include around 4 episodes.
"What if we go over budget?"
"Does this mean they 'own' any rights to the remaster?"
It's possible ML could help with a lot of that, but the budget just isn't there.
While it was shot on film, the effects weren't; each has to be created again. That's not just the external scenes, like the Defiant shooting something, but also scenes like Odo morphing.
Here's a comparison picture of the DVD vs film quality though
On second thought, considering the shirts that Jake wore on the show, it's probably better for the world that we don't have the higher-vibrancy film version.
Remember that those who do a lot of video call NTSC "Never The Same Color"
The technician over at the film lab would receive the film every day and run it through the development solution. As the image formed on the film, he kept saying to himself, ‘My God, this woman is green!’ And so he kept correcting the film developing process in order to turn her back to normal skin color again!
"Imagine everyone’s surprise, upon viewing the developed film the next day, to find the actress’ face just as normally pink skinned as ever! There was no trace of green."
The problem is they'd devolve into various camps bickering about the "right" way to do it, creating their own mini standards bodies, and forming a massive internal bureaucracy of tribalism.
It seems the more passionate a group of people is about something, the more convinced they are that they know the right way to do things.
This is Star Trek we are talking about. Their fans are dead serious.
(I bought individual TNG remaster seasons as they came out specifically to support the remaster effort.)
It means they charged at a level where sales volume times net (after variable costs) price per unit was insufficient to cover fixed costs.
But whether that's because they charged too much (variable costs for the disc sets were a small fraction of the price; it's quite possible they would make more profit with higher volume and lower price), too little (maybe the people willing to pay as much as was charged would have paid even more), or either would have worked and they priced in an unprofitable valley, or neither would have worked is speculation.
It makes sense to price high to get maximum returns from the users who must have it now, and then catch everyone else later with lower priced offerings.
It makes cultural, or creative, sense to release the footage at cost and let people remix and use it for whatever purposes.
The profit for the show was already made, it should be public domain by now.
I'd guess from a studio's POV the trade-off is between spending money updating an ancient show in the hope of making a little cash now and a little more from residuals, and making new shows which are more or less guaranteed to bring a profit.
But that aside, I'm not sure there's any way to make something like Star Trek public domain. Never mind the studio - the actors, writers, and producers will still be relying on residuals for continuing income.
And even if that weren't true - what exactly do you make public domain? You can't just hand out the original unedited film stock to anyone who asks for it. How about the scores for the music? Or the audio mix files? Or the EDL? Or the various revisions of the scripts? Props? Set carpentry - if there's any left? Wiring?
Reality: very few elements are digital files that can be copied/shared, even if you wanted to allow the public to copy/share them.
The situation is that the studio could do work on the footage to make it releasable, then sell the processed footage for profit.
The argument is that the processing of the footage makes it too expensive to do this, the studio might make a loss.
So, all the IPR, releases and such need to be in order for it to be possible to release the processed footage.
It's possible there is contractual obligation preventing release of rushes and other unfinalised footage, but it seems highly unlikely. I'd imagine the studio have rights to publish anything, thus enabling "making of" and "blooper reel" type videos.
So, if it's possible to arrange the IPR for the processed footage, then it's close to certainty that there's no IPR limitation on releasing the unprocessed footage.
Meaning finances are the only remaining issue.
Sure, the studio may not want to spend the money up front to arrange release of the footage; but that is likely to be primarily to avoid fan works based on the footage from competing with their own outputs.
>You can't just hand out the original unedited film stock to anyone who asks for it. //
You could hand it to an archive for digitisation and give them rights to sell it at cost. Or, burn it. Or leave it in a canister to degrade until it's unusable.
>How about the scores for the music? Or the audio mix files? Or the EDL? Or the various revisions of the scripts? Props? Set carpentry - if there's any left? Wiring? //
Well, IPR aside, why not. If you're paying to keep the set in storage, why not give it away and save your storage costs and get on with making new sets that are going to be used?
Primarily however, we the demos should be taking the question of what to do with unneeded IPR away from the studios. Make copyright terms shorter to match patent terms.
IAE, I can understand the reasons something upscaled is all we’re ever likely to see of DS9 or Voyager. Primarily that as soon as CGI became the dominant effects technology, the frequency of use went way up, and all those scenes would require non-trivial reconstruction.
As I noted in another comment, many of the effects artists from DS9 and Voyager have their original meshes and scene files, and some have posted insanely high resolution files on their blogs. It would be a frustrating effort to gather all of this, but arguably CBS already owns the works in question, so compensation for retrieving these would probably be minimal, and the quality of the work done for the series was more than adequate if re-rendered in HD.
When I saw the submission title I was actually hoping re-creating the out of shot image data is what the ML was being used to do.
Some TV networks have aired a 16:9 version of TNG (as well as an upscaled Voyager and DS9) but they just zoomed and cropped. There wasn't anything smart going on.
Once they have manually scanned all those pieces, surely they can let machine learning edit together each episode again?
Then try to color balance (when the original information isn't there).
If you get ML to do that, congratulations, you'll make a fortune in the production industry.
No, we've long had the technology to solve most of this. Clip correspondence is content-based image retrieval across a database of keyframes. Matching framing and position is an image registration problem (feature correspondence). Color balance seems like an almost trivial problem if you've solved the other ones, because the information _is_ there -- just modify each channel to match the histogram in the lower resolution image.
The challenge is that earlier stages in the pipeline need to be robust to inexact matches, and we don't want to rely on absolute color or pixel position. But I don't think that should be insurmountable for a slightly creative implementor - use local variation in color, gradient descriptors, pull in the motion vectors for an additional channel, etc...
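The histogram-matching step described above can be sketched in a few lines. This is a minimal per-channel version in NumPy (a simplification; real footage would need the robustness tweaks just mentioned, and the function names here are my own):

```python
import numpy as np

def match_histogram_channel(source, reference):
    """Remap one channel of `source` so its histogram matches `reference`."""
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Cumulative distribution functions of both images
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source intensity, find the reference intensity at the same quantile
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(source.shape)

def match_histograms(source_rgb, reference_rgb):
    """Per-channel histogram matching, e.g. pulling an HD scan's color
    toward the low-resolution broadcast reference."""
    return np.stack(
        [match_histogram_channel(source_rgb[..., c], reference_rgb[..., c])
         for c in range(source_rgb.shape[-1])],
        axis=-1)
```

Libraries like scikit-image ship an equivalent (`exposure.match_histograms`); the point is only that the color step is standard machinery once clip correspondence and registration are solved.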
Sure, you could go down the deep learning path by trying to reduce scenes to bags of labeled objects and semantic actions but that's bringing a water cannon to a squirt gun fight, which only makes sense if Google is giving you a free water cannon.
I'd probably try it if I really thought there was a fortune to be made here, but it's such a niche application. When something hasn't been done yet, there's usually a good economic reason, unfortunately.
1. First Google result for "histogram matching color balance":
Of course, the quality won't be as good as something done by a professional, but the question, since we don't have an HD DS9, is whether the version produced by an automated system is noticeably better than what we have.
If you're aiming for network TV quality you can probably do an episode for $5k though.
Of course that's from scratch. The trouble is that using the original video as your source will have lost a lot of data, and that means making a lot of judgement calls about what the scene is meant to be doing, so you're not much nearer.
Where, precisely, did that number come from? It doesn't happen to include marketing, does it?
- Many of the frames look like someone just applied a "sharpen" filter -- there's (as expected) no real new information, it's just sharper... so it doesn't seem like a big deal, like I could do it in Photoshop trivially
- But then there are a few spots where new details are truly seamlessly added... the fireball in the spaceship explosion, and forehead wrinkles. Stunning, they're absolutely seamless and believable, with detail that is simply not there in the original... that's literally magic
- But at the same time, characters in the background that are slightly out-of-focus get oversharpened when they're supposed to be blurry, like it can't tell when moderate blurriness is due to resolution or focus
Overall, I'm pretty shocked that the effect is so seamless across frames -- I totally would have expected this to produce weird discontinuities in time, but I didn't see any at all. I mean, this actually seems like it's already production-ready to throw into TVs or VLC. Which is crazy.
I wonder how much of this is "general-purpose", or to what extent this works on this episode because it's trained on this episode? E.g. the neural network is learning from a close-up of the Ferengi face or spaceship, to apply specifically to a later smaller version. Or to what extent this will work well across TV shows, across actors, across genres, without prior training, or with sparse training?
Deep Space Nine. Mission succeeded?
The creator says that 4K-vs-480p renders (that is, rendering the YouTube video at 4K) are on the way.
It has been delayed by the work of mastering the old footage (per one of the latest updates).
I'll leave this as a comment on the project.
How did they manage to get the entire cast, EXCEPT Avery?
You're ultimately right, though, and that a true HD is only going to come from the raw film content. What the neural network gives us are essentially plausible higher-res hallucinations.
Edit: as per the other comment, if the original exists only on video and not film, perhaps this is the best we're going to get.
I don't think that's quite right, at least it doesn't jibe with what the DS9Doc people have been doing (which consists partly of remastering pieces of DS9 scenes):
I think the footage really was on film, but the issue was that it was composited with low-quality CGI effects, or something like that. So you can rescan the film, but you have to redo all the compositing (and probably with your own models because I'm guessing the original CGI didn't look that good). That's why a DS9 remaster is so expensive.
The main difference here is that the interpolation algorithm on your TV is online. It's handling 30 frames per second, over 9 million pixels per second. Doing the interpolation offline (ahead of time), you can take as long as you want, look at multiple frames to try to make better guesses, try multiple things and use some fitness measure to pick a winner, even a frame or a pixel at a time.
It's still interpolation.
In this case they're using machine learning to add additional information about textures that isn't in the footage broadcast. They can add frames by interpolation, but the ML texturising and detailing is not interpolation.
Starting with a blob, if you interpolate you get a smoother blob, with this process you get a more structured figure.
It can still look nicer than naive upscaling though.
> ... interpolation is a method of constructing new data points within the range of a discrete set of known data points
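In that textbook sense, naive upscaling is pure interpolation: every output pixel is a weighted average of pixels that already exist, so no new information can appear. A minimal bilinear upscaler in NumPy makes this concrete (a toy sketch, not any particular product's resampler):

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Upscale a 2D image by `factor` with bilinear interpolation.
    Every output pixel is a convex combination of existing pixels,
    so no value outside the original range -- and no new detail --
    can ever appear."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]   # vertical blend weights
    wx = (xs - x0)[None, :]   # horizontal blend weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

The ML texturising discussed above is different precisely because its output is *not* bounded by the input pixels: detail comes out of the trained weights.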
There are entire catalogues of overlay comparisons of different releases, encodings, etc.
In the fully general case of arbitrary video this is true, but in practice it isn't.
You can gather information over time to do superresolution, and if you want to get super fancy you can build a world model (e.g. get more information about what an actor's face looks like from a close up shot, and apply that knowledge to less detailed shots).
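A toy illustration of that multi-frame idea, with hypothetical, already-known integer shifts (a real system would have to estimate sub-pixel motion per region, which is the hard part):

```python
import numpy as np

def temporal_stack(frames, shifts, factor=2):
    """Toy multi-frame super-resolution: upsample each frame, align it by
    its (assumed known) shift on the upsampled grid, and average.
    Averaging aligned frames suppresses noise; with genuine sub-pixel
    motion between frames, it can recover detail no single frame holds."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    for frame, (dy, dx) in zip(frames, shifts):
        up = np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)
        acc += np.roll(np.roll(up, dy, axis=0), dx, axis=1)
    return acc / len(frames)
```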
I expect ML based upscaling to eventually produce some truly stellar results.
I imagine upscaling the Phantom Menace podrace to 4K, and giving the model a bunch of NEW rock texture info to use to create new detail.
Kind of an automated way to combine these two ideas.
What you don't want is every upscale starting to look homogeneous, so it would be best for a design team to specifically map old to new texture sets, giving each upscale a unique look.
"They’re using the original Lightwave scene files for camera and model movement, lights, etc. It’s also the original 3D models and textures used on the show – and nothing has been updated in any way other than being rendered out at 1920x1080. It’s the raw CGI without any post work."
In DS9 / Voyager as I understand it, they transitioned from using models to directly generating the whole shot with CGI at NTSC resolution. See https://memory-alpha.fandom.com/wiki/CGI#Acceptance
This means that in TNG they just had to recomposite the film with a newly created high-res phaser shot; whereas for DS9/Voyager they would have to recreate the whole shot in high-res CGI.
Well, that is the difference between regular upscaling and upscaling with neural networks. With a neural network, the additional information is being stored within the network during training and added to the video during the upscaling process.
Ultimately, you could argue that this is just interpolation too, but the quality of the interpolation depends on the training material. If you were to train it on an original and use such a neural network to upscale a lower resolution version, you could end up with the original (a perfect interpolation).
So it all comes down to the quality of the training material and while AI Gigapixel seems to have quite good material, I wonder if the result could be improved by transforming the video as a whole and not just frame by frame, as that would give the NN even more information to interpolate on.
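The self-supervised setup behind such networks is easy to state: degrade a known high-resolution frame, then train the model to invert the degradation. A sketch of the pair generation (box downsampling chosen for simplicity; AI Gigapixel's actual pipeline isn't public, and the function name is mine):

```python
import numpy as np

def make_training_pair(hd_frame, factor=4):
    """Create a (low-res, high-res) training pair by box-downsampling an
    HD frame -- the standard self-supervised setup for super-resolution.
    The network trained on many such pairs learns to invert the
    downscaling, storing texture statistics from the originals in
    its weights."""
    h, w = hd_frame.shape[:2]
    h, w = h - h % factor, w - w % factor   # crop to a multiple of factor
    hd = hd_frame[:h, :w]
    lr = hd.reshape(h // factor, factor,
                    w // factor, factor, -1).mean(axis=(1, 3))
    return lr, hd
```

Training on whole shots rather than isolated frames, as suggested above, would amount to feeding the network sequences of such pairs so it can exploit temporal context.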
I've seen a few people say this in the thread. It doesn't seem accurate to me. Information is being created/hallucinated/interpolated. Re-scanning filmstock gets new information. If not, it doesn't matter how sophisticated your algorithm is (naive upscaling or deep learning), you're still interpolating values.
"While the popular Original Series and The Next Generation were mostly shot on film, the mid 90s DS9 had its visual effects shots (space battles and such) shot on video.
While you can rescan analog film at a higher resolution, video is digital and can't be rescanned."
Edit: why am I getting downvoted for asking a question? Sorry :-(
I assume your downvoting is because people (uncharitably) believed you were being an annoying pedant instead of asking a genuine question.
But you can infer visual detail from the information that is already there. Especially because ML uses information from the training set to help make sense of the information that is already there.
e.g. if I show you a picture of a key then you can figure out what the lock looks like because the key contains that information, even if it’s not visible.
You know, I think there is some real promise here!
With some HD remasters, you can start to really see the makeup, the little pores in everyone's skin, the smudges and uncleanliness of real film-making. Film-makers and directors choose the lighting and the focus with the end product in mind. They know that the screen won't capture certain things, and so they know where they can skimp and save. When you re-master it, you're going against the 'vision' of the directors. Not in a big way at all, it's very subtle. But it's still there.
With ML techniques, you get the 'idea' that the director was going for, without seeing all the screw-ups that they knew they could get away with. It's crisper, but the idea and vision are the same.
Peter Jackson's recent 'They Shall Not Grow Old' is another great example of using ML too. In that case it was to preserve the old WW1 footage, bring its frame rate up to a modern 24 fps, 3-D it, and colorize it. The results are literally breath-taking. Personally, I gasped when it finally hits; it's that good. Not to geek out too much here, but Jackson is literally changing history with that film. He changed the way we all view old footage, from something all herky-jerky and grainy to something that is modern and real. Those 16-year-old child soldiers become real people again.
Though Jackson's work is a lot different than this effort, I think we all know that ML and the movies are here to stay. It's relatively cheap to update, takes little time, and can be profitable (remember the Disney Vault gimmick?). How long will it be before Chaplin's 'City Lights' is ML'd and remastered into 24 fps, 3-D, color, and with sound? Maybe 5 years?
Hell, I'd pay to see the best of old cinema brought back to modern standards like that.
I know no film-makers; this is supposition.
It’s this bizarre smodge of minority group agendas, dark horror-like scenes and modern political ideas disguised as the family friendly Star Trek that you could watch with your kids that we all actually loved.
And are the plot lines intelligent? Do they make you think about interesting (non-overdone, i.e. non-left) philosophical issues? Not really, and they're strung along in an awkward and long story arc that's pretty dull and predictable and takes away from the per-episode storytelling :/
Uh, every Star Trek series was, in its time, a “bizarre smodge of minority group agendas, dark horror-like scenes and modern political ideas disguised as” something else (“Wagon Train to the stars”, in the case of TOS.) It seems tamer retrospectively because it's usually been on the side that became the accepted standard on the issues, and because the presentation of horror evolves.
OTOH, the pervasive long story-arc thing hasn't been constant throughout Trek, but it's been a growing trend within Trek since at least DS9 (and also, within adult TV generally; static-background episodic fare outside of children’s shows seems to have been progressively going out of fashion for decades.)
It is, because CBS says it is, and your opinion on the matter is not relevant.
It's canon. It's part of the franchise. It takes place in the same timeline as TOS. Accept it.
> family friendly Star Trek
Star Trek has always been preachy. Did you forget about the TOS episodes with the half-black-half-white and half-white-half-black guys who hated each other?
Star Trek has always been incredibly progressive/left-leaning. The Federation is a communist utopia where everyone has everything they could ever want for free. If you talk to people on the original series, plenty of them will talk about things that they wanted to do, in terms of gay characters and the like, but that they couldn't get permission to actually air. But they have always pushed the envelope of being about as far left as they can get away with from a business side perspective.
I've seen a lot of complaints about "liberal" and "feminist" agendas being pushed in Discovery. It baffles me. Do such people not know what the Federation is about? They're literally social justice warriors in space.
could you refer me to an analysis that elaborates on this claim?
Honestly the worst part of the currently available DVDs is the compression artifacting.
One of the other interesting things is that while CBS may not have all of the source files for most of the VFX shots: The original artists do have many of them, and would likely offer them up in a heartbeat. The original 3D models and scenes for DS9 and Voyager are incredibly detailed, and would be more than adequate in HD. In many cases, it'd just be a matter of loading up the files, changing the output resolution, and mashing the "render" button.
Indeed. I think it's TrekCore that has some HD images of re-renders of the 3D model of Voyager used on the show. They are so detailed, you can start to see where the 3D artists copy/pasted the same empty sickbay stock photo in various places around the ship to show there is something inside the windows.
Maybe someone at Google can use spare cloud time to remaster the whole series as their 20% time project? This remastering process would be easily parallelizable.
EDIT: I didn't mention fan models as a possible approach because this is WB/PTEN we're talking about here. They aren't Paramount-dumb, but they seem to take that as a challenge, at least vis-à-vis B5.
The biggest reason you'll never see a remaster of Babylon 5 is that it lacks the continued financial interest of its backers or widespread market appeal. Star Trek is a wildly popular 50-year-old franchise which is still seeing new work today, and even then, CBS decided it wasn't worth it to remaster two of its shows.
If I remember correctly, the show was originally shot at an aspect ratio of 16:9. Because it was only going to air in 4:3, the special effects were done in 4:3. For the DVDs (which I think are the same version that Amazon streams), the 4:3 shots were enlarged to fill a 16:9 screen, making the picture look fuzzy. It would be interesting to see how well this technique would work on those types of shots.
One could take a generic "AI Gigapixel" net, and retrain it on some of the newer Star Trek content, for which an upscaled/remastered "ground truth" exists. My guess is that it would help a lot with features that are specific to the Star Trek universe.
Taking this further, one could take the resulting samples, and "rate" them, before feeding it back in the training engine. This would make some kind of "adversarial" feedback loop, but instead of the GAN, humans are involved in the loop (which could be used to train an adversarial network as well, that said). My hope is that it would converge to much better results in a shorter time, and with less input data.
My apologies if those are common concepts in ML. If so, I'd love to look at some references that could further my understanding on these topics, and get me to use the right terminology.
Edit: Okay, new plan, anyone wanna make a cli for AI Gigapixel?
I changed the exposure too, but this is what I was able to achieve using both their JPEG to RAW and their Gigapixel, along with some color correction. (There's still a tiny bit of blocking I'd like to clean up, but it's not bad for a test run.)
If anyone has used Photoshop's Match Color feature before: I do expect a day where I can open some textures and apply them to similar low-res textures in a photo, and use the samples to repaint the picture. Similar in end result to how texture packs work for games.
I also think by taking HDR footage and compressing it to 6-8bit color space, and training a model to reproduce the original, that AI will be pretty good at upconverting older footage into high color depths.
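Generating training pairs for that idea is straightforward: quantize high-bit-depth frames down and learn to restore them. A sketch of the degradation half (the 6-bit figure follows the comment above; the restoration model itself is omitted, and the function name is mine):

```python
import numpy as np

def quantize_to_bits(frame01, bits=6):
    """Crush a float frame in [0, 1] down to `bits` of per-channel depth.
    The (quantized, original) pair is one training example for a
    bit-depth-restoration model that learns to smooth out the banding
    this quantization introduces."""
    levels = 2 ** bits - 1
    return np.round(frame01 * levels) / levels
```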