Roanot is an AI video editor for sales letters, demos, and explainers. The core idea is simple: instead of treating a video as one giant generation, it treats it as a sequence of editable scenes.
I started building this after repeatedly running into the same problem with AI video tools: if you change one line of a script, one visual, or one voiceover, the entire video has to be regenerated. Iteration becomes slow, expensive, and frustrating, especially for sales letters where small copy changes matter a lot.
What I wanted instead was something closer to how people actually write and refine sales letters: scene by scene. So in Roanot, you start with a script (AI-generated or your own), and it’s automatically split into scenes. Each scene can have its own video (AI-generated or uploaded), text overlays, and voiceover. If you don’t like one scene, you regenerate or replace just that part and leave the rest untouched.
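To make the scene model concrete, here's a simplified sketch in Python. The field names and structure are illustrative only, not the actual schema — the point is just that each scene owns its own assets, so swapping one scene's video touches nothing else:

```python
from dataclasses import dataclass, field
from typing import List, Optional
import uuid

@dataclass
class Scene:
    """One independently editable unit of the letter."""
    script_text: str
    video_url: Optional[str] = None       # AI-generated or uploaded clip
    voiceover_url: Optional[str] = None
    overlays: List[str] = field(default_factory=list)
    id: str = field(default_factory=lambda: uuid.uuid4().hex)

@dataclass
class Letter:
    scenes: List[Scene]

    def replace_scene_video(self, scene_id: str, new_video_url: str) -> None:
        # Only the matching scene is touched; every other scene keeps its
        # existing assets, so nothing else needs re-rendering.
        for scene in self.scenes:
            if scene.id == scene_id:
                scene.video_url = new_video_url

letter = Letter(scenes=[Scene("Hook"), Scene("Offer"), Scene("CTA")])
letter.replace_scene_video(letter.scenes[1].id, "https://example.com/new-clip.mp4")
```

Regenerating a scene is just producing a new asset and pointing that one scene at it.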
Under the hood, the product is built to support that workflow. AI video, audio, and text generation (mainly ChatGPT & Google Veo) run as queued jobs processed by a separate worker service, so the editor stays responsive while heavy jobs run in the background. Assets are stored privately and delivered via signed URLs, and prompts are moderated to avoid surprises.
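The queued-jobs pattern itself is nothing exotic. Here's a toy sketch of the idea — an in-memory queue and a thread standing in for what is really a separate worker service with a persistent queue, and a placeholder string standing in for the actual model call:

```python
import queue
import threading

jobs: queue.Queue = queue.Queue()
results = {}

def worker() -> None:
    # Runs in the background, so the thread handling editor requests
    # never blocks on a slow generation call.
    while True:
        job = jobs.get()
        try:
            if job is None:       # sentinel: shut the worker down
                return
            # Placeholder for the real generation call (video/TTS/etc.)
            results[job["scene_id"]] = f"rendered:{job['prompt']}"
        finally:
            jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# The editor just enqueues and returns to the user immediately.
jobs.put({"scene_id": "s1", "prompt": "opening hook"})
jobs.put({"scene_id": "s2", "prompt": "product demo"})
jobs.put(None)
jobs.join()   # only for this demo: wait until everything is processed
```

In production the editor polls (or is notified) for job completion instead of blocking on a join.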
Another design choice I made early on was to treat the output as a digital sales letter, not just a rendered MP4. When you publish a letter in Roanot, it’s a web-based experience that can be embedded or shared, with the video as one part of it. That opens the door to things like real-time personalization, dynamic content, embedded CTAs, and analytics, all of which are hard to do with a static video file. Some of that is still early, but it’s the direction the product is heading.
The product is currently focused on video sales letters and demos, but I can imagine people using it for explainers and educational content as well—anywhere the “iterate one piece at a time” model makes sense.
There’s a free tier to try it out (with some added credits), and paid plans if you want higher limits.
I’d really appreciate feedback on whether the scene-based editing model matches how you’d expect to build AI video, and where it might fall short compared to timeline-based editors. Happy to answer questions about the tech or the workflow. If anyone has a way to increase my Veo video generation limits (or has access to Sora), I’d love to hear about it.
Just wanted to say thanks for keeping this alive! I used Magic Lantern in 2014 to unlock 4K video recording on my Canon. It was how students back then could start recording professional video without super expensive gear.
Not quite… in the most common version of chessboxing it’s more a case of alternating between the chess and the boxing via, say, a 3-minute timer from the start of each “round”. You do need to be very careful moving the chess board in/out of the ring between rounds so as not to upset any of the pieces, but otherwise play just continues round by round until checkmate or knockout (or TKO, or a clock falls, etc.)
Source: I fought in (and won!) a one-off chessboxing exhibition match in London back in 2012
Would anyone be able to hack Voyager? Just wondering: if the software is that old, then as long as you can get a signal to it, anyone should be able to hack it / wipe it, right?
Is the biggest barrier the enormous satellite dish you'd need to contact it, or do the commands have an auth header with some key you'd need to brute-force?
I think the big radio telescope is the main barrier. At this point there's only one in the world with enough transmit power to talk to it, and a handful that can receive the signal.
Site: https://www.roanot.com
Demo: https://www.roanot.com/app/demo/de745846-87e2-4861-88f2-b91f...