Hacker News | Sean-Der's comments

That's exciting! When you were evaluating it, did everything about the protocol/APIs fit your needs?

Is it just features/software that need to be implemented?


I wouldn't say I'm done evaluating it, and as a spare-time project, my NVR's needs are pretty simple at present.

But WebCodecs is just really straightforward. It's hard to find anything to complain about.

If you have an IP camera sitting around, you can run a quick WebSocket+WebCodecs example I threw together: <https://github.com/scottlamb/retina> (try `cargo run --package client webcodecs ...`). For one of my cameras, it gives me <160ms glass-to-glass latency, [1] with most of that being the IP camera's encoder. Because WebCodecs doesn't supply a particular jitter buffer implementation, you can just not have one at all if you want to prioritize liveness, and that's what my example does. A welcome change from using MSE.

Skipping the jitter buffer also made me realize with one of my cameras, I had a weird pattern where up to six frames would pile up in the decode queue until a key frame and then start over, which without a jitter buffer is hard to miss at 10 fps. It turns out that even though this camera's H.264 encoder never reorders frames, they hadn't bothered to say that in their VUI bitstream restrictions, so the decoder had to introduce additional latency just in case. I added some logic to "fix" the VUI and now its live stream is more responsive too. So the problem I had wasn't MSE's fault exactly, but MSE made it hard to understand because all the buffering was a black box.
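The no-jitter-buffer approach described above is easy to sketch with WebCodecs: feed each encoded access unit straight into a `VideoDecoder`, and when the decode queue backs up, drop delta frames until the next key frame. A minimal TypeScript sketch; the browser wiring is defined but not invoked, and the message framing parser and the `avc1.64001f` codec string are my assumptions, not anything the linked example mandates:

```typescript
// Liveness policy: if the decode queue is backing up, drop delta frames
// and resume at the next key frame instead of letting latency accumulate.
function shouldDropForLiveness(decodeQueueSize: number, maxQueue: number = 2): boolean {
  return decodeQueueSize > maxQueue;
}

interface FrameMessage {
  isKey: boolean;
  timestampUs: number;
  data: ArrayBuffer;
}

// Browser-only wiring (defined but not invoked here; WebCodecs is reached
// through globalThis so the sketch also loads outside a browser). The
// `parse` callback is a hypothetical framing parser: it assumes each
// WebSocket message carries one access unit plus a timestamp/key-frame flag.
function startLiveDecode(
  ws: any,
  parse: (buf: ArrayBuffer) => FrameMessage,
  drawFrame: (frame: any) => void,
): void {
  const g = globalThis as any;
  const decoder = new g.VideoDecoder({
    output: (frame: any) => { drawFrame(frame); frame.close(); },
    error: (e: any) => console.error("decode error", e),
  });
  decoder.configure({ codec: "avc1.64001f", optimizeForLatency: true });
  let waitingForKey = false;
  ws.binaryType = "arraybuffer";
  ws.onmessage = (ev: any) => {
    const { isKey, timestampUs, data } = parse(ev.data);
    if (!isKey && (waitingForKey || shouldDropForLiveness(decoder.decodeQueueSize))) {
      waitingForKey = true; // skip deltas until the next key frame arrives
      return;
    }
    waitingForKey = false;
    decoder.decode(new g.EncodedVideoChunk({
      type: isKey ? "key" : "delta",
      timestamp: timestampUs,
      data,
    }));
  };
}
```

`decodeQueueSize` is the decoder's own backlog counter, which is exactly what made the six-frame pile-up above visible once the jitter buffer was out of the way.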

[1] https://pasteboard.co/Jfda3nqOQtyV.png


What was the WebRTC bug? I would love to help! I saw at work that Firefox doesn't properly implement [0]; I wanted to go fix that after FFmpeg + WHEP.

If you are still struggling with WebRTC problems, I would love to help. Pion has a Discord, and https://webrtcforthecurious.com helps a bit with understanding the underlying stuff, which makes it easier to debug.

[0] https://datatracker.ietf.org/doc/html/rfc8445#section-7.2.5....


I really thought that OBS + WebRTC would be a few months' project. It ended up being way stickier than I expected.

Most days I'm more excited about this space than Pion. It's just more fun to make software for users than for businesses mostly doing AI, maybe?


If anyone is using/testing WebRTC, I would love to hear how it is working for them :) I am hoping Simulcast makes an impact with smaller streamers/site operators.

* Cheaper servers. More competition and I want to see people running their own servers.

* Better video quality. Encoding from the source is going to be better than transcoding.

* No more bad servers. With E2E encryption via WebRTC, you send video to your audience and the server can't do modification/surveillance.

* Better latency. No more time lost transcoding. I love low-latency streaming where people are connected to a community, not just blasting one-way video.
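On the publisher side, simulcast in the browser comes down to a `sendEncodings` ladder on the transceiver: offer several quality layers at once so the server can pick per-viewer instead of transcoding. A sketch; the `rid` names and the bitrate/resolution ladder are my assumptions, not anything the spec fixes:

```typescript
// Three simulcast layers: each rung halves resolution and roughly quarters
// bitrate relative to the one above it.
function buildSimulcastEncodings(topBitrateBps: number) {
  return [
    { rid: "high", maxBitrate: topBitrateBps, scaleResolutionDownBy: 1 },
    { rid: "mid", maxBitrate: Math.floor(topBitrateBps / 4), scaleResolutionDownBy: 2 },
    { rid: "low", maxBitrate: Math.floor(topBitrateBps / 16), scaleResolutionDownBy: 4 },
  ];
}

// Browser-only (defined but not invoked here): attach the ladder when the
// video transceiver is created, then negotiate as usual.
async function publishSimulcast(videoTrack: any, stream: any): Promise<any> {
  const pc = new (globalThis as any).RTCPeerConnection();
  pc.addTransceiver(videoTrack, {
    direction: "sendonly",
    streams: [stream],
    sendEncodings: buildSimulcastEncodings(2_500_000),
  });
  await pc.setLocalDescription(await pc.createOffer());
  return pc; // localDescription.sdp then goes to the SFU (e.g. over WHIP)
}
```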


I built an Open Source tool to help people debug/understand their live stream quality better. I would really appreciate people's feedback! https://streamsniff.com/

I see a constant stream of people saying 'my video is blurry' or 'why do I have so much latency', and I wanted to build something that helps people understand/diagnose. I also wanted to make something that runs in the browser, so you can share a link with a friend (or on Discord) to make debugging easier.


https://github.com/sean-der/stream-sniff is the code.

I wanted to help people debug/understand their live stream quality better. I would really appreciate people's feedback!


I am hoping to move it to https://github.com/glimesh org soon so it can sit next to broadcast-box.


WebRTC is a real superpower for this stuff :)

I also love seeing it used for 'kill the jump box' and file transfer. It just drives me crazy that we let files sit on file providers.

Especially if you are transferring in the office! Send it right over the LAN and it could be instant. Being forced to upload to and download from remote servers frustrates me.
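For the LAN transfer case, the usual pattern is an RTCDataChannel fed small chunks with backpressure. A sketch; the 16 KiB chunk size, the 1 MB backpressure threshold, and the end-of-transfer marker are all my assumptions:

```typescript
const CHUNK_SIZE = 16 * 1024; // stay under common SCTP message-size limits

// Split a file's bytes into DataChannel-sized chunks. Pure, so it behaves
// the same in the browser and in tests.
function chunkForDataChannel(data: Uint8Array, chunkSize: number = CHUNK_SIZE): Uint8Array[] {
  const chunks: Uint8Array[] = [];
  for (let off = 0; off < data.length; off += chunkSize) {
    chunks.push(data.subarray(off, Math.min(off + chunkSize, data.length)));
  }
  return chunks;
}

// Browser-only (defined but not invoked here): drain chunks while honoring
// bufferedAmount so a large file doesn't overflow the channel's send buffer.
async function sendFileOverChannel(channel: any, data: Uint8Array): Promise<void> {
  for (const chunk of chunkForDataChannel(data)) {
    while (channel.bufferedAmount > 1_000_000) {
      await new Promise((resolve) => setTimeout(resolve, 10));
    }
    channel.send(chunk);
  }
  channel.send("EOF"); // hypothetical end-of-transfer marker
}
```

Over a LAN the candidates stay host-local, so the bytes never leave the office network.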


Broadcast Box is a project that I have been maintaining for a few years, but it feels even more relevant with streaming services locking things down more and more.

I originally built Broadcast Box so I could get WHIP support into OBS, but it outgrew that. I love this new generation of streaming that is more intimate: instead of pushing video to a large audience, you pull a group of friends into a stream, and low latency makes you feel more connected.

Always looking for feedback on the architecture, docs, and onboarding experience! Especially suggestions to make setup easier for new users.


I would love to host an ultra high quality stream on my own web server, and then have that exact stream piped to YouTube live via OBS. Is there an easy way to do that now?

YouTube likely won't support streaming 3440x1440 60FPS video, and while discord technically supports it, they usually compress the footage fairly aggressively once it's sent up to the client, so I'd like to host my own; it only needs to support a few people. I wouldn't mind hosting it so my friends and side project partners can watch me code and play games in high quality.


I would send your high quality stream to something like Broadcast Box with Simulcast enabled.

Then you can forward your lowest quality stream to YouTube with FFmpeg/GStreamer. Hopefully no re-encoding needed!
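Forwarding without re-encoding is just a remux: stream-copy the pulled feed into YouTube's RTMP ingest. A sketch of building the FFmpeg invocation; the input URL is a placeholder (how you pull a single simulcast layer out depends on your server), while `rtmp://a.rtmp.youtube.com/live2/<key>` is YouTube's documented ingest shape:

```typescript
// Build ffmpeg arguments that stream-copy (no transcode) a pulled feed to
// YouTube's RTMP ingest. "-c copy" is the whole point: no quality loss,
// no CPU spent re-encoding.
function youtubeForwardArgs(inputUrl: string, streamKey: string): string[] {
  return [
    "-i", inputUrl,
    "-c", "copy", // copy audio and video as-is
    "-f", "flv",  // RTMP ingest expects an FLV container
    `rtmp://a.rtmp.youtube.com/live2/${streamKey}`,
  ];
}

// Usage (not run here):
//   child_process.spawn("ffmpeg", youtubeForwardArgs(lowLayerUrl, myKey));
```

One caveat: stream copy only works if YouTube accepts the codec as sent (e.g. H.264), which is part of why "hopefully no re-encoding needed".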


Do you have any resources for someone who would want to get started in small-time self hosted options?

Context here is just self-hosting my own site for friends to stream to friends (instead of whatever we squeeze out of Discord).

The WebRTC work sounds awesome, would like to try it out.


Yes! I maintain https://github.com/glimesh/broadcast-box for this.

You can try it out at https://b.siobud.com to see if you like it first. If it fits your needs, then go for the self-host :) I run my instance on Hetzner.

I want to add more features to it, but I have mostly been focused on OBS lately. If you have any ideas/needs that would make it work for you and your friends, I would love to hear them! Join the Discord; I would love to chat.

What I want to do next is a 're-broadcast' feature, so friends can stream to it and hang out. When they are ready, they hit a button and it goes out to Twitch/YouTube, etc.


I am hoping this space improves. I wanted to cast video to watch some stuff with friends last year, and the software to accomplish this now is both really heavy (does EVERY part of the process need to run an HTTP server?) and convoluted.

We ended up just doing a Discord screen share, which evaded all the tunnelling/transcoding/etc. issues that made us give up on WebRTC.


What software did you try and use last year?

Can you try Broadcast Box? If that is still too heavy, what could I do to make it better?


Around last year I was using some custom plugins for OBS, I haven't used Broadcast Box but I can pick it up to try sometime later.

> If that is still too heavy, what could I do to make it better?

I haven't picked it up yet to see whether it's complex enough to really need it, but it has the same pain point a few prior options did: being yet another service that I must configure via the browser, so it has to run an entire frontend for that rather than supporting config files.


Broadcast Box can be configured via environment variables. The frontend is also optional!

It is like RTMP in that you can just use a URL to publish/watch. It does come with a frontend, though.
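The "just a URL" flow is WHIP underneath: a single HTTP POST of an SDP offer, with the stream key carried as a Bearer token. A hedged sketch of the publish side; the endpoint path and token handling follow the general WHIP shape, so check Broadcast Box's own docs for the exact URL:

```typescript
// Pure helper: the request shape for a WHIP publish, testable outside the
// browser. WHIP is one POST of the SDP offer; the answer comes back in the
// response body.
function whipRequestInit(offerSdp: string, bearerToken?: string) {
  const headers: Record<string, string> = { "Content-Type": "application/sdp" };
  if (bearerToken) headers["Authorization"] = `Bearer ${bearerToken}`;
  return { method: "POST", headers, body: offerSdp };
}

// Browser-only (defined but not invoked here): publish camera and mic to a
// WHIP endpoint such as a Broadcast Box instance.
async function whipPublish(endpoint: string, streamKey: string): Promise<any> {
  const g = globalThis as any;
  const pc = new g.RTCPeerConnection();
  const media = await g.navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  for (const track of media.getTracks()) pc.addTrack(track, media);
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const res = await fetch(endpoint, whipRequestInit(offer.sdp, streamKey));
  await pc.setRemoteDescription({ type: "answer", sdp: await res.text() });
  return pc;
}
```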


The only thing I know about WebRTC is that VDO.Ninja uses it: https://docs.vdo.ninja/getting-started/vdo.ninja-basics#powe...


OBS outputs WebRTC, so you can push directly into VDO.Ninja now.

If you want more control over your video quality/capture, it's nice to not have to use your browser. The trade-off is it's way harder to set up.


Is that using the WHIP output or something else?



I'd never heard of vdo.ninja. It sounds like the base use case, according to their main page, is the opposite? Phone into OBS via WebRTC:

> In its simplest form, VDO.Ninja brings live video from a smartphone, tablet, or remote computer, directly into OBS Studio or other browser-enabled software.

I really hope I'm processing what they're saying incorrectly, but this sure sounds like they are doing a video encode for each peer, which is madness & obviously bad:

> VDO.Ninja is a peer-to-peer system. This means for each new person viewing your feed, a new encode is processed. It also is CPU bound since encoding usually takes place on the CPU. Take care not to overload your system. Keep an eye on your CPU usage.

The intro video also emphasizes that each person has to send video to all peers: that in fact it's not about sending to OBS, it's about having people in a room. It warns that a room size of 10 is about as good as you'll get, seemingly because of these limits.

But if it does what the original purpose states, streaming to OBS (a single consumer), it doesn't really matter. I am curious to see how it handles maybe sending multiple people's streams to OBS: if that's what the room is for, that's very rad (even if it's weirdly inefficient at it)!

I really like the idea of web-based tools for video capture, and for some video production. It's cool that vdo.ninja is here. But what the heck; this sounds not good.

Also I find it a weird claim that anyone would have heard of vdo.ninja but not webrtc. 3 results for https://hn.algolia.com/?q=vdo.ninja , about a thousand for webrtc. Always an interesting world, interesting people.


It has a lot of options, including the use of a meshcast server https://docs.vdo.ninja/steves-helper-apps/meshcast.io and using OBS.

It has uses for things like hosting video podcasts.


I've been waiting for the WHEP support PR to be merged so I can input video from a stream into OBS and mix it before outputting it again with WHIP. Or am I thinking about it wrong?


No, you are thinking about it right!

I think the best way to do it today is via a Browser Source. It is hard to get people excited because it is already technically possible.

I will keep working on that PR!

