OBS Studio: Open-source software for video recording and live streaming (obsproject.com)
1518 points by open-source-ux 56 days ago | 362 comments

I randomly learned OBS a while ago for doing some twitch streams in the evenings. I'm so glad I did.

I run a 6,000-person company during the day and have OBS set up to push into Google Meet. I've done townhalls with live on-screen Q&A voting, hosted podcast discussions, and done PIP product reviews. I use its video recording feature to react to Figma prototypes and post the MP4s in the respective channel for discussion.

OBS is an amazing tool and it's worth learning. Even simple things like adding a compressor to an audio stream can make a huge difference to the quality. As one of our coaches recently said, "Video quality is the new presence in 1:1s".

On Windows it's reasonably easy to output OBS to a virtual camera for video conferencing software through a plugin. I recently posted a $10k bounty to make this a native feature and it's getting lots of traction.

https://twitter.com/tobi/status/1242641154576965634

https://github.com/obsproject/obs-studio/issues/2568

https://github.com/obsproject/rfcs/pull/15

Are there any existing plugins for a virtual microphone?

I want to use OBS's realtime noise suppression and noise gating in another app (mainly the online lecture platform Echo360). I got it working using VoiceMeeter in what seems like a hacky way, but only with high latency so far.

I'm not aware of any plugins, but in case anyone is curious about how to replicate that setup on Linux systems with PulseAudio, you can create virtual outputs with:

    pactl load-module module-null-sink \
        sink_name=Device_Name
If some program filters out output monitors from its input list, you can usually use pavucontrol to force-change it. Or, you can create a linked virtual source:

    pactl load-module \
        module-virtual-source \
        source_name=VirtualMic \
        master=Device_Name.monitor
Then, in OBS settings, you can set the virtual device as the Monitoring Device.
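Putting it together, a complete session might look like this (a sketch — `Device_Name` and `VirtualMic` are placeholder names, and the module indexes printed by `load-module` will differ per system):

```shell
# 1. Null sink: point OBS's Monitoring Device here
pactl load-module module-null-sink sink_name=Device_Name

# 2. Virtual source wrapping the sink's monitor, so apps that hide
#    ".monitor" inputs still see a normal microphone
pactl load-module module-virtual-source \
    source_name=VirtualMic \
    master=Device_Name.monitor

# 3. Sanity-check that both devices exist
pactl list short sinks   | grep Device_Name
pactl list short sources | grep VirtualMic

# To undo, unload by the index each load-module call printed:
#   pactl unload-module <index>
```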

I took the commands from one of my side projects, maybe someone will find other parts of it interesting: https://github.com/pzmarzly/mic_over_mumble

Thanks! This fixes my issue!

Use Equalizer APO with a noise-reduction VST. This is by far the best option on Windows if you are concerned about latency×compatibility×underruns as a figure of merit. It is far superior to using "virtual audio cables" and a VST host (like Lighthost or SAVI) or voicemeeter.

It's not well known, but it works spectacularly well compared to those other options. For me it has never had any audible buffer underruns (unlike Lighthost), no noticeable latency (unlike SAVI and Voicemeeter, even with small buffer sizes), no problems with exclusive mode (unlike Voicemeeter), and it works with every single application.

The UI is not terribly clear about this, but it can drive multiple devices independently, simply by adding several "Device" blocks to the configuration.

Wow, I just did this and it's crazy how much better it sounds. It's a little complex to set up, but this guide makes it super simple: https://antlionaudio.com/blogs/news/removing-background-nois...

This was great advice! The key thing about Equalizer APO is that it processes the sound before the Windows sound API, so any program recording from my mic gets the processed version, and there's no virtual microphone needed (well, the actual microphone becomes virtualised - there's no separate device). I have it all working, and mostly followed this tutorial: https://www.youtube.com/watch?v=J3fBx2ftaBs

I adjusted the numbers for my setup but his were a good starting point.

What VSTs would you recommend for improving audio quality of video calls?

ReaFIR works quite well and has very low latency. It's a bit fiddly to auto-generate a noise profile due to the architecture of Equalizer APO (the entire audio processing runs inside the Windows audio stack, so the VST panels in the configuration editor don't have a signal). Basically you use another VST host (e.g. Lighthost or OBS), generate your noise profile there and then copy/paste the chunk data into the APO config file.
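For reference, the relevant part of the APO `config.txt` ends up looking roughly like this (from memory — treat the exact syntax, paths, and names as assumptions, and let APO's Configuration Editor generate the VST line for you):

```
Device: Microphone
Preamp: +3 dB
# VST line generated by the Configuration Editor, with the chunk data
# (the noise profile) pasted in from the other VST host:
VSTPlugin: Library "C:\VSTPlugins\reafir_standalone.dll" ...
```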

Some general EQ'ing on the mic also works wonders for how well it sounds, but that's very specific to your voice and mic.


Another use case of Equalizer APO where it is much better than everything else is compressing game audio. Some games simply have audio that was designed without regard for hearing safety (CS:GO is a strong contender for #1 here), and this helps immensely with it.

ReaFir is amazing.

Do you have any recommendations for VST plugins to use?

Please see my sibling comment. The video I linked uses ReaPlugs (https://www.reaper.fm/reaplugs/) and I ended up without needing any explicit noise reduction, just noise gating and some other adjustments.

On Mac I use Soundflower to redirect audio between apps.

There's also AU Lab, which can be used for some effects in realtime (e.g. adding reverb and redirecting to built-in audio out).



Blackhole is a nice Soundflower alternative that might be appealing for some folks in certain use cases as well: https://github.com/ExistentialAudio/BlackHole

I have no association with them, but https://krisp.ai might help you.

I think there should be software that provides virtual audio devices, so you could configure it as the output for OBS and the input for you other application - something like this: https://www.vb-audio.com/Cable/

Yes, that's what VoiceMeeter does (same author).

I use Loopback on Mac, and it's never let me down! https://rogueamoeba.com/loopback/

Can you recommend any resources for learning OBS?

My daughter wanted to record some videos of her playing a web-based game. I found the interface to OBS unintuitive. I managed to figure out how to capture a specific area on the desktop, but it was unexpectedly difficult to resize the output to match the input. I found some way of doing it that I can't remember.

A few months later I had to do it again and that time I couldn't find the option to resize the output to match the input.

I'd love to find some resources to help her learn how to setup OBS for recording or streaming.

Osiris, there are a lot of tutorials around learning OBS. One of the best ones that I've come across is EposVox's OBS Studio Master Class 2018. It helps you figure out what you want to learn and covers a large swath of the various OBS functionalities.

EposVox's OBS Studio Master Class 2018 YouTube Playlist: https://www.youtube.com/playlist?list=PLzo7l8HTJNK-IKzM_zDic...

Instead of streaming the entire desktop (or a portion of it), you'll actually want to set up the ["Game Capture" source](https://obsproject.com/wiki/Sources-Guide#game-capture), and configure it to automatically attach to any fullscreen application.

Then add an audio stream, and you're good to go.

Try Streamlabs OBS or Twitch Studio - they're a lot more beginner-friendly and built on OBS, if your daughter just wants to record or stream.

I'll say that StreamLabs was better before they changed their monetization model. I was lucky to download and install a few sets of overlays a few months ago that, today, would cost a minimum monthly fee to access.

Check out "Gaming Careers" on YouTube, it's a channel dedicated to teaching all the various aspects of streaming from the ground up, assuming little or no previous knowledge.

The official OBS website has a wiki and a forum; that's probably the best place to go.

I can also help you set it up if you want!

I was just thinking about this, but mostly to route the audio through OBS so I can play my teammates some sweet synth music while we wait for others to join the standup.

I've just started learning how to use it and it's blown my mind how bloody useful it is, while also being open source. I've got it set up to record part of my screen and my webcam at the same time, and then I can chop it up in post using Resolve, no problem.

Until I found OBS I was basically trying to record my screen and then narrate over it after the fact, but it just didn't jibe with me as there's basically no room for improv or failure at that point. And I personally prefer to leave my less egregious mistakes in the final cut to demonstrate that you don't always get things right the first time.

Thanks for posting the bounty. Awesome! I wonder if it would make sense to add a virtual mic output as well, so that the video and audio can be synced together - e.g. when someone is using OBS with Zoom... I've gotten it to work via PulseAudio routing, but the audio isn't automatically synced.

Sadly, OBS is busted for interfacing with Google Meet on macOS. :( Found this out the hard way on Monday.

How do you do q/a and voting?

Poll Everywhere (YC S08) has Q&A question support with voting. I'm a developer at Poll Everywhere and we use it during our weekly townhalls. Our company was 40% remote before the coronavirus crisis, and my experience with the Q&A poll as a remote worker has been great.


You're replying to Tobi, the CEO and founder of Shopify, who recently and very publicly posted about 40-hour work weeks and how Shopify lives and breathes work-life balance, which multiple employees confirmed.

I don't know him from a bar of soap outside of what's shared publicly, but a cold, calculating executive is about as far from the mark as you could possibly be.

Hm, I will admit I didn't realize the particular executive I was addressing, and that Shopify is off my radar in terms of executive abuses, but CEOs are all of a class, and 40-hour work weeks are still an untenable arrangement, and the Overton window for commonly acceptable work-life balance is far, far toward the wage-slavery side of things–to say nothing of the work-life balance for millions upstream of the Shopify supply chain, e.g. computer-mineral miners in the Global South. So I am not ready to give Tobi any humanitarian awards.

In truth, I have long admired Shopify for their open culture, their tech blogs, and a product that empowers small businesses. None of these things are enough, however, to convince me that he's anything more than a wealthy Libertarian seeking (primarily) personal gain through economic exploitation, managerial coercion, and authoritarian hierarchies.

> a cold, calculating executive is about as far from the mark as you could possibly be

This is an indefensible exaggeration. You're telling me Tobi is a saint? A CEO? An absolute absurdity.

Liberals love democracy until it comes to the workplace. You disgust me.

You couldn't have Tobi pegged worse. I don't even agree with Tobi's thoughts on the 40 hour work week and I know you're dead wrong.

> dead wrong

You're exaggerating.

"Presence" in terms of 1:1s should be about listening to, understanding, and empathizing with the person opposite you.

"presence" before remote working was about actively being in the room - if you've ever had a manager who checks their email in your 1:1 you've experienced someone not being "present".

Video quality in a remote 1:1 is the foundation of that presence, e.g. how the other person can see that you're understanding and empathising.

> Video quality in a remote 1:1 is the foundation of that presence, e.g. how the other person can see that you're understanding and empathising.

You ever had a "phone call"?

How is this comment not dead? It violates all the criteria for participating on this site.

What has the world come to when you can't even trust the self-appointed oligarchs to enforce their arbitrary rules?

I'm one of the core contributors for OBS. Our website traffic has more than doubled over the last couple of weeks due to the COVID-19 situation - when we released the v25 update we accidentally killed our site due to a cache stampede after purging the CDN (oops).

We're seeing all kinds of new uses, especially users who are integrating the OBS Virtualcam plugin to do presentations and other content sharing with apps that only support webcam input.

Thanks for maintaining OBS! Any timeline for OBS Virtualcam to be available on macOS? It seems like several apps, such as Snap Camera, are reaching users in this mode; OBS could be really helpful this way in webinars that don't happen over Twitch/YouTube.

We're hoping sooner rather than later, thanks to Tobi's bounty on this. You can follow the design spec at https://github.com/obsproject/rfcs/pull/15 if you're interested.

It really is awesome and fast development. I am hoping it ends up cross-platform (Linux) and not OSX-only.

Woah, I was gonna ask if anyone knew when Display Capture was gonna be possible on macOS Catalina again, but when I opened OBS just now, I see that it's actually possible in version 24.0.6, which I am running. Guess I must've overlooked it the last few times I looked for it, lol.

Anyway. Does anyone have any recommendations for settings to use in order to avoid lots of frame dropping and OBS making other applications sluggish when I try to stream to Twitch? I'm using a MacBook Air Retina 2018 model with an external monitor connected to it.

Check out CamTwist in the meantime - that has a virtual camera on Mac, and I enjoy using it to flip my video upside down (because I live in New Zealand). Also for typing large-text subtitles over the image, which was great for my late grandmother. I'm not affiliated, and it's free.


My problem is more like OBS does everything I need, but I need to use OBS as a camera in Zoom or Hangouts, but I suspect CamTwist isn't as feature-rich.

You can pipe OBS through Syphon into CamTwist.

OBS is very good software, thanks for working on it!

Recording the screen works much better than with any other screen recording software I've tried. For this use case the preview can be a bit confusing. If the resolution doesn't match (because of the OS's 200% scaling, for example), going into the settings each time to adjust it is a bit cumbersome; the interactive resizing handles in the preview have somehow never helped me. Sometimes one of the "reset zoom" context menus helps.

Also, the "Window source" would be awesome, but it's a bit cumbersome to set up every time, and unfortunately doesn't capture things like menus.

It's probably really difficult to improve these things so they work automagically for dummies like me who know very little about OBS and use only a tiny feature set, without making things worse for power users, who are probably happy with things as they are.

You guys are awesome. I've been using this for a long time to stream, record whatever I want from my computer, and it's always been fantastic.

Keep up the great work and thank you.

OBS is awesome. I love being able to switch scenes on the fly. But I have one suggestion that would save a lot of embarrassment for a lot of people.

TL;DR: There needs to be a master audio level display, or at least some sort of master indication of whether a stream is getting an audio signal or not.

Or at the very least, audio sources should be muted by default in new scenes.

We have an Intro screen scene that just displays our logo and some background movement with a message that we will begin soon. We started the live stream and then muted the mic on the audio inputs and then, for good measure, muted the physical mic. We then proceeded to chat and get things ready for the presentation, etc.

Little did I know that OBS includes -all- audio sources on every scene, by default, unmuted.

And though I had muted our regular mic, the webcam's built-in mic was on and transmitting. We didn't see the green audio level animation or even the listing for that input either, because it was at the bottom of the list of audio input sources where you have to scroll down within that box to see it.

Luckily, we didn't say anything too embarrassing, but it was embarrassing nonetheless.

This is something we want to do, but there are some complications, both in the design of the program and in getting the UI balance right between "normal" users and users familiar with professional equipment/DAWs. There have been some proposals, but we haven't reached a consensus yet.

Something as simple as sorting the unmuted audio to the top of the audio source list when you start streaming would be helpful.

Would it be possible to create a big "mute all audio" button with an OBS plugin?

An opt-in "mute by default" option

Happy user of OBS here. Thanks for all the hard work.

Huge thanks to you, Jim, and everyone else involved. It truly is easily up there as one of the greatest open source success stories.

OBS is amazing and I love everything about it. Thanks for maintaining it.

Are there any paid developers working on it? Does the project make money?

Currently the lead developer (Jim) is the only paid developer, and he's able to work on the program full time thanks to a few large sponsorships. That said, we'd really like to be able to pay more people, as the program has many development needs that could really use attention. More detail about sponsorship/donation opportunities can be found here: https://obsproject.com/contribute

This makes me so sad. Thousands of streamers make millions of euros using this great software, and so few of them contribute back.

Twitch, which makes as much as the streamers, is a sponsor though.

Just thinking out loud.

I haven't used it yet, so maybe this already exists, but maybe think about adding something to the app itself to remind people to contribute financially, similar to how Wikipedia is doing it.

Maybe count how many times someone uses the app (a local count), and when the count is high, show a little something - a short message and call to action to donate.

Repost this comment as a standalone so it can get more attention than a buried comment!

Can we just get the 'color picker' option that exists on Windows when you are using the chroma/color key feature for 'green screen' effects?

Not having that on Linux when it exists on Windows is frustrating.

What CDN are you using? I worked on a feature at Cloudflare that prevents this scenario and is on for all users.

Fastly have kindly sponsored us with free CDN service. They have a tiered caching feature called Shielding that ended up being our solution - it just needed turning on.

I see! I didn't mean tiered caching. I meant, if a thousand clients simultaneously request a cacheable resource from the same colo that lacks the resource, only one request should make it to your origin, and the response should be streamed to every client. This is possible in Fastly but IIRC it depends on what combination of http and https your client and origin connections are using.

Thanks for making OBS!!

Thanks I use obs all the time at work for recording stuff keep up the good work!

It's so cool to see you in the mainstream and not hidden on teamliquid!

r1ch hwaiting!

Thank you. Although I've moved away from it, it's excellent.

OBS is amazing. Half our faculty just went all-out and spent thousands of dollars on some commercial screen recording software.

Meanwhile I'm doing my online courses with OBS, and it works beautifully. I have multiple scenes set up in OBS that grab different parts of my screens, and I switch between them with simple key strokes, while narrating on my actions as I do them.

It's a very simple, and very effective setup, and my students love it.

To me, it is immensely powerful to be able to switch scenes and narrate live, instead of doing these things in post. This saves a ton of time, that I can instead spend on refining my content.

Love hearing stories like this! Often, we only hear the negative or when people are having issues (it's rare for folks to speak up when everything is working well!), so it's genuinely heartwarming to hear how much people are able to use our program to keep their livelihoods moving.

I've been using OBS for quite a while now and I absolutely love it!

By chance, do you know what software your faculty purchased?

Probably any of the education-specific tools that are maintained similar to enterprise software platforms, where they add 'online video tutoring' capabilities to check off a feature/table-stakes box, but it has a painful UI that makes OBS look like a dream to use, and adds an extra few thousand dollars to the school's bill every year.

At the university I work for they use Teams, Blackboard and Echo360. I do know of some people in maths schools using OBS though.

Camtasia, I believe.

Which OS are you using?

Not OP but I can say that on Windows it may require a lot of fiddling. Hardware drivers appear to be a big factor.

Another concern is the many overlays that accompany game launchers and drivers - Nvidia, Steam, GOG, etc. These add latency and can leak private notifications into the capture.

The fiddling step was what I hoped to minimize. Did you have better luck with Linux or MacOS?

The interface is similar across platforms; I had to do a few test streams on macOS before I could find settings that didn't cause streaming hiccups or problems with other software. I also had to make sure the OBS app interface was on a separate monitor from the one I was streaming; otherwise it would sometimes have weird stuttering issues.

No matter what, you'll probably need to spend a little time tweaking things to get it all working like you want. But the 'scenes' and preferences are pretty good about letting you lock things down once you do find something you like.

Linux. Kubuntu 18.04, to be precise.

What is a "Scene" exactly?

In OBS, a Scene is an arrangement of video inputs, images, text written on the screen, and so on.

If you're a teacher and you're going through a PDF exercise opened on your screen while drawing things on a whiteboard behind you, you may want to have 2 scenes:

* One with the opened PDF in full screen, with your camera feed small in the bottom-right of the video.

* One with the camera feed in full screen, where viewers can clearly see what you're writing

You'd then be able to switch between those 2 scenes at will depending on what you're currently doing. You'd show the first scene when you're reading the exercise out loud and then switch to the second scene when you're resolving it on the whiteboard.

Great explanation, thank you kindly!

I use OBS Studio with OBS-VirtualCam [0] to attend virtual lectures & hold meetings for my team. I've found it to be incredibly convenient because you can control nearly everything with scenes and the audio controls.

Before meetings start, I can broadcast music and display announcements, and then without having to hit a jarring "End Screenshare" can switch to my webcam and start a meeting. Live demos and presentations are another scene with the desktop/window/browser and webcam. 100% would recommend.

[0] https://github.com/CatxFish/obs-virtual-cam

I'm doing something similar on Linux, although a bit more complicated.

I'm using v4l2loopback [0] to create a dummy video device, ffmpeg to create a stream endpoint that streams into the dummy video device, and then setting up OBS to stream to localhost.

It is actually really nice to have the capability to fully control what is going in to the video input.

I haven't run into a need to also change the audio input yet but if it becomes necessary, it should be possible to set up loopback with ALSA.

[0] https://github.com/umlaeute/v4l2loopback
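For the ALSA side, the usual building block is the `snd-aloop` kernel module - a sketch only (untested in this setup; card and device numbering vary, check `aplay -l`):

```shell
# Create an ALSA "Loopback" card: audio played into subdevice
# hw:Loopback,0,n can be captured from hw:Loopback,1,n
sudo modprobe snd-aloop

# Play something into the loopback...
aplay -D hw:Loopback,0,0 some_audio.wav &

# ...and capture it from the other side (point the meeting app here)
arecord -D hw:Loopback,1,0 -f cd captured.wav
```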

Are you using this with Zoom, by any chance? I had no luck trying to capture my webcam with ffmpeg, add text to it with ffmpeg, and output everything to a fake webcam with video4linux. Actually, it works perfectly well, but this particular stream I can't open with Zoom, even though Zoom will accept it perfectly if, instead of my webcam, I add text to a video file.

I suppose Zoom detects that my actual webcam is in use, and therefore refuses to display any webcam whatsoever, including the virtual one...? Makes little sense, but maybe...

I have not tried it with Zoom, but Chrome refused to recognize any loopback video devices unless they were capture-only. So the following worked:

    # modprobe -r v4l2loopback
    # modprobe v4l2loopback video_nr=7 exclusive_caps=1 card_label='Screenshare'
where exclusive_caps=1 is the workaround for Chrome (both video_nr for /dev/video7 and card_label should be settable to more or less arbitrary values). You need to first start writing the stream to the loopback device; it then switches itself into a capture-only device, and Chrome will recognize it.
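One way to sanity-check that behavior is to feed the device a test pattern with ffmpeg before opening Chrome (a sketch; assumes the loopback was created as /dev/video7 as above):

```shell
# Write a generated test pattern to the loopback device. While this
# runs, the device advertises capture only and Chrome should list it.
ffmpeg -f lavfi -i testsrc=size=1280x720:rate=30 \
    -pix_fmt yuv420p -f v4l2 /dev/video7

# In another terminal: confirm the device and its current format
v4l2-ctl --device=/dev/video7 --all
```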

I use it with Zoom as well and haven't had a problem with it. I haven't tried using pure ffmpeg to output to the dummy device though.

Could you give some detail on how you are creating the ffmpeg streaming endpoint? That would be very helpful!

Personally, the command I use is this:

  # Replace `/dev/video2` with the dummy video device added by `v4l2loopback`.
  ffmpeg -re -listen 1 -i rtmp:// -c:v rawvideo -an -pix_fmt yuv420p -f v4l2 /dev/video2

After starting ffmpeg, you set up OBS to stream to a custom streaming server at `rtmp://` and start streaming.

It's not very efficient and there's a delay, since OBS is encoding with h264 and then ffmpeg is decoding it. It's not too bad for me because I can use the NVENC encoder, but I'm sure there's a way to get OBS to stream raw video somehow.

I don't know exactly what you mean, but I would use something like this: https://github.com/arut/nginx-rtmp-module

How are you getting desktop audio (music or whatever) sent to your meetings? I didn't see how to expose the audio output from OBS as a "microphone" to video conferencing software. I ended up hacking my setup together with VoiceMeeter, but it's pretty sloppy and error-prone.

Pipewire [0] (the successor to PulseAudio) attempts to streamline this process for Linux. I've been messing with wf-recorder [1] for my screen+audio recordings, and might try to get it to spoof a camera input so I can get any program attempting to connect to the webcam to instead turn into a screen-casting tool.

[0]: https://en.wikipedia.org/wiki/PipeWire

[1]: https://github.com/ammen99/wf-recorder

At least in my case, I haven't had to configure anything. Perhaps it's included in OBS VirtualCam? (see my original post)

JACK audio or ALSA monitor device

Tip: Zoom doesn't list PulseAudio monitors in the available sources, and trying to set Zoom's input manually from pavucontrol fails silently (even if Zoom is set to "use system default" or whatever it's called).

The only way I found to stream audio to Zoom is to use a PulseAudio module that lets you use a named pipe as a source. You can then output your sound to said named pipe and set it as the microphone in Zoom. The sound quality is pretty bad, of course.
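For reference, that pipe-based setup looks roughly like this (a sketch - `ZoomMic`, the FIFO path, and the sample format are my own placeholder choices):

```shell
# Create a PulseAudio source backed by a named pipe; Zoom can then
# pick "ZoomMic" as its microphone
pactl load-module module-pipe-source \
    source_name=ZoomMic \
    file=/tmp/zoom-mic.fifo \
    format=s16le rate=44100 channels=2

# Anything written to the FIFO in that raw format becomes mic input,
# e.g. decoding a file with ffmpeg:
ffmpeg -re -i music.mp3 -f s16le -ar 44100 -ac 2 - > /tmp/zoom-mic.fifo
```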

Have you tried PulseAudio's module-virtual-source? https://news.ycombinator.com/item?id=22754216

Are there any "virtual USB" devices - something like software USB gadgets that emulate a webcam, so that software needs no special knowledge of these alternative sources?

OBS is commonly used for video game streaming, but it's a great tool for any scenario where you need to take live audio and video from different sources and display them at the same time or transition between them.

I've been using it to make a corny music interview show with my local musician friends during the coronavirus shelter in place. Whereas a lot of my fellow musicians are streaming from their phone, I'm able to connect a mixer to my computer and stream the show with really good audio quality.

The B in OBS hints at streaming, but it's also fantastic for pure recording. It's honestly surprising how lacking that space was before OBS. I remember using FRAPS/Taksi a bit, and stuff like Camtasia, but they were all pretty awful to be honest, and definitely not free or open source.

It feels like OBS has been here forever but I remember the days when I had to use Camtasia. The software was actually surprisingly good and easy to use but for the amount you were paying, it wasn't worth it, not to mention the proprietary recording format wasn't doing it much good.

I really like using the recording feature to do sound checks. We go through all of our checks, then I watch the video locally in VLC. That way I'm certain when it goes live it'll sound the way it's supposed to.

I wish more people would do this. Or maybe OBS should have some (opt-out) warnings for when your audio is unbalanced, too low, or too loud. I've seen way too many videos and streams with bad audio levels.

Agreed. I used it just this week to record my screen (have used it in the past for streaming) and was blown away at not only how easy it was to use, but how small the files were and how well it integrated the encoders my system supported.

I've been using OBS for a couple months for live streams on YouTube (see https://www.jeffgeerling.com/blog/2020/ansible-101-jeff-geer...), and I have had rock-solid reliability, using an external mic interface, an external camera (displayed PIP), and sharing one of two displays during some instructional sessions.

One thing to keep in mind, though—unless you have a dedicated video card, and it's supported by OBS (the list of supported cards on macOS is very thin), your CPU has to do all the compositing and compression, meaning you need a lot of CPU to be able to manage the streaming.

On my 2016 MacBook Pro 13", it barely has the horsepower to do a stream and also run processes that I'm explaining (e.g. manage some VMs, run some database operations, etc.). I had to turn down the compression method to 'ultrafast', which is lowest quality (but still pretty good with 1080p output), and I also use SwitchResX to set my shared display at 1080p 1x resolution (instead of 4K/2x resolution).

OBS core team member here, just a quick clarification on this post. OBS will not run without a supported GPU for compositing; that is always handled by the GPU, using OpenGL on macOS and Linux (we use Direct3D on Windows). The available encoders, however, might change based on the available hardware. Hardware encoders are generally much faster and have a lower impact on system resources, but may have lower quality per bitrate as a trade-off.

Ah, thank you for the clarification, TIL!

Any ideas if OBS could support FFGL for video effects?

Thanks for all of your contributions to the ansible ecosystem! My team and I greatly appreciate the work you do.

Lots of options:

- Use Quick Sync

- Lower your bitrate

- Use a faster encoding preset

- Lower your output resolution

If I remember correctly, your PC is a dual-core, and you can't expect a low-end machine to do high-res encoding well without hardware acceleration.

Have you ever thought about getting a PC?

If I were doing Logic Pro X tutorials or Xcode tutorials - the only kinds of tutorials I personally have enough knowledge and drive to create - this would be a non-option.

PC is a non-option for a chunk of people in dev and media - I, for instance, do iOS dev for a living, and live by Logic Pro X for professional audio work. (I've been using Logic for about 15 years...)

The amount of time it would take for me to transition to, say, Ardour - or the amount of my career that would be lost if I swapped from, say, iOS to... I'm not sure what the FOSS equivalent is (Android doesn't count, obviously, because Google) - I'd lose years and years of training, experience, and wisdom.

I think in that case you'd use a capture card on the PC and still continue to use your Mac.

I don't think you would lose that much.

In my experience, real expertise comes from learning concepts, not applications.

Someone who knows the ins and outs of Microsoft Word can relatively easily switch to LibreOffice Writer.

Or someone who is a good modeler in 3ds Max can also become a good modeler in Blender.

Buttons in applications change position all the time. UIs get reworked, keyboard shortcuts change, etc., from one version to the next - but if the concept behind doing certain stuff is learned, it doesn't really matter which software product is used.

The only time things get really hard is when you encounter a concept you haven't worked with before and have to radically change the way you think.

I have one but 30 years of muscle memory and favorite Mac apps is hard to overcome. If anything I’d go Linux, where similar limitations seem to apply.

My PC laptop has almost identical specs to my Mac anyways... I haven’t owned a desktop in 10 years :/

If you're running a modern Linux desktop you're probably running Wayland, and screencasts there have long been a complete pain in the neck, with per-compositor "solutions" that mostly don't work quite right. Fortunately someone who works on Gnome wrote the obs-xdg-portal plugin that should fix this, at least for Gnome and hopefully soon for wlroots and KDE once they fully support the underlying portal API. Until then, the easiest way to get screencasting working is just to run in X11.

(Ask me about ffmpeg raw GPU buffer capture one day; running a bunch of codec code as root is always exciting.)

OBS Plugin: https://gitlab.gnome.org/feaneron/obs-xdg-portal/

For GTK: https://github.com/flatpak/xdg-desktop-portal-gtk

For KDE: https://github.com/KDE/xdg-desktop-portal-kde

For wlr: https://github.com/emersion/xdg-desktop-portal-wlr

Does Wayland do anything better than Xorg? Every time I see it mentioned, it is about how it does not support this or that core feature of Xorg (e.g multiple displays with custom pixel densities/scaling, screen sharing apps being broken, etc...). What is Wayland's reason for existing?

It does do a ton of things better. It's just that X11 was basically a giant shared buffer, so things like screenshots etc. were easy.

Wayland is a lot more security conscious, but once the compositors are up to parity, these things won't be a problem.

At least for me, Wayland is way smoother than X11 ever was as well.

Same reason systemd exists, the previous solutions were old and clunky and some people got fed up and decided to update. Plus some good old NIH. People who were dealing just fine with previous solutions then start complaining about breaking things for the sake of breaking things and not being able to do things the way they were used to.

In the specific case of Xorg I find the situation strange because I'd gladly have made the switch 15 years ago back when messing with Xorg.conf was a common occurrence for me and it kept getting in the way (although a big portion of the blame was with the proprietary drivers, especially AMD's). Xorg was sometimes a bit of a pig too resource-wise, but that's when I was running a PC with 256MB of RAM. I remember being fairly optimistic when I first heard about Wayland and Mir, the prospect of ditching X11 was enticing.

But now? I haven't really had to wrestle with X in a long time. It just works for me. I'm definitely not looking forward to reworking my entire workflow for minor benefits although I suppose I'll have to one day. I also use X forwarding pretty extensively, but I'm probably a small minority these days.

I agree with this. Wayland has taken so long and is still lacking in a few areas, while Xorg somehow managed to reach a point where it actually "just works" on the first try, with only some minor problem every other year. I don't need Wayland anymore.

Is it possible to use XWayland for X forwarding? I've used X forwarding on macOS with no problems, using XQuartz.

Yes, XWayland is architecturally very similar to XQuartz.

I'm doing my classes online with Zoom. For whatever reason, under Wayland I can not choose individual windows to share--I can only share the whole desktop. I switched to Xorg and now I can share individual windows in Zoom. I honestly don't care to support proprietary software, but application writers must do extra work to support both.

Wayland by default prevents applications from accessing other applications' display, input and output, while on X it is basically a free-for-all. Any application can see what any other application is displaying, read its input & send it input events.

This might have been fine in the past, but it is not really OK any more with efforts to make things more secure (e.g. to prevent a malicious application from reading your password entry, making screenshots of sensitive data, injecting input events into your secure sessions, etc.).

The side effect is that new protocols need to be developed that applications can use to request access to the display/input/output of other applications in legitimate cases (such as screen sharing in your case), & not everything is in place for that yet.

> The side effect is that new protocols need to be developed that applications can use to request access to the display/input/output of other applications in legitimate cases (such as screen sharing in your case), & not everything is in place for that yet.

This is the main hurdle for most people. I would say that 99% of people agree that the Wayland way makes sense and is the better way of doing things, but without the needed access controls it's just not ready yet.

Like if Google said "Apps can't access [location|files|whatever] without permission" on Android with no way to grant those permissions.

>Wayland by default prevents applications from accessing other application display, input and output,

So it breaks the entire linux philosophy of using input and output streams to pipe data between different modular applications?

>This might have been fine in the past, but is not really OK any more

Says who? Personally I like my computer being able to access other things on my computer. It kind of makes it more useful that way. The ability for applications on linux to fairly seamlessly work together using a set of standard protocols is one of the primary reasons I use it.

> So it breaks the entire linux philosophy of using input and output streams to pipe data between different modular applications?

Not really. To use your analogy, the way that X works - every application being able to read the framebuffer of any other - is the equivalent of every application running as root and being able to read and modify any file on the system. When you consider that applications running under Wayland may include e.g. banking details, any app being able to read that is like anything being able to read /etc/shadow.

If your computer is perfectly secure, with no untrusted code running, that's great - and also far more secure than 90% of desktop computers out there.

Maybe a stupid question, but can't any program read the memory of other programs from the same user? Can't you attach to it for debugging?

On many systems, and by default, yes - but the other part of what's going on is that Wayland allows applications to be sandboxed like they couldn't be before, as they can no longer use your X server as a conduit to spawn an unsandboxed shell and run commands. You can, today, run e.g. Firefox in a sandboxed environment and be certain it can't access anything you don't want it to.

AFAIK graphical application distribution/sandboxing systems such as Flatpak pretty much require this to be available if they ever want to provide reasonably secure sandboxing & might already be making use of it on Wayland systems.

Not necessarily. Newer Linux distros have ptrace disabled on all non-child processes. You can still turn off this protection if you need to.
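To see which mode your kernel is in, the Yama setting is exposed as a sysctl; a small shell sketch (the helper name is mine):

```shell
# Report Yama's ptrace_scope: 0 = classic (any same-user process may attach),
# 1 = parent-only (plus CAP_SYS_PTRACE), 2 = admin-only, 3 = attach disabled.
# Prints "none" when the kernel has no Yama (or on non-Linux systems).
ptrace_scope() {
    if [ -r /proc/sys/kernel/yama/ptrace_scope ]; then
        cat /proc/sys/kernel/yama/ptrace_scope
    else
        echo none
    fi
}

ptrace_scope
```

Ubuntu, for example, ships with 1 by default; sysctl kernel.yama.ptrace_scope=0 turns the protection back off, as the parent comment says.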

Arguably the end state planned for Wayland in this regard (having access to specific applications provided) is conceptually closer to streams than the current situation with X (one big shared ball of global state).

Not really - I should have been clearer - by input I mean keyboard input and its manipulation.

You can pipe stuff to other executables all you want under Wayland; you just might not easily (that is, without the user granting permission using the correct protocol) inject keyboard events from one application into another (say, malware masquerading as a game injecting code in the form of keyboard events into a running terminal emulator or ssh client).

>Does Wayland do anything better than Xorg?

Nitpick: This is not a meaningful comparison. Wayland is a wire protocol. Xorg is a display server, and the wire protocol it implements is called X11. There are several other display servers that implement the Wayland protocol. Some of these display servers do support those core features, and some of them don't (yet). It depends on which one you're using. The display server used by GNOME should support those features.

>What is Wayland's reason for existing?

From the website [0]:

>Wayland is intended as a simpler replacement for X, easier to develop and maintain.

[0] https://wayland.freedesktop.org/

> easier to develop and maintain

Wayland may be easier to maintain than X. But from what I've seen, writing a compositor for Wayland is more difficult than writing an X window manager, because things you got for free with X have to be implemented by the compositor in Wayland.

And many applications find it more difficult to target Wayland, because there aren't (at least yet) standard protocols for things like screenshots, screen capture, etc. So they have to either choose one desktop environment to target, or maintain implementations for all of them.

Take a look at wlroots [0] for a library that massively simplifies the task of writing a wayland compositor. It also gives many of those lower level things "for free". For an even higher-level API built on top of wlroots, you can look at wltrunk [1].

There are standard APIs for screenshots and screencapture, implemented through the desktop portal and pipewire. Check the top-level post for more info about this -- it's part of why Wayland support for OBS has progressed.

[0] https://github.com/swaywm/wlroots

[1] https://git.sr.ht/~bl4ckb0ne/wltrunk

I'm aware of wlroots. But KDE, GNOME, Enlightenment, etc. don't use it, so each of those have to implement things separately.

Concerning the desktop portal API. It's basically just a wrapper around the native custom API's of the underlying compositor. And it is pretty limited in functionality. For example, the screenshot API just has a way to request a screenshot, it doesn't have a way to specify that you would like to select a window, region, or display/screen/monitor. In the case of wlr-portal, from what I could tell it just always gives you a screenshot of the full desktop.

>But KDE, GNOME, Enlightenment, etc. don't use it, so each of those have to implement things separately.

I am not sure how this is relevant if you're trying to write your own compositor. If those projects want to create extra work for themselves, that's on them.

>the screenshot API just has a way to request a screenshot, it doesn't have a way to specify that you would like to select a window, region, or display/screen/monitor.

Yes, that's on purpose. What's supposed to happen is that the portal daemon (NOT the application) pops up a dialog asking the user to choose which one they want. Unfortunately the wlr portal is still not done yet and doesn't implement this.

> I am not sure how this is relevant if you're trying to write your own compositor. If those projects want to create extra work for themselves, that's on them.

I'm actually more concerned about the fact that wlroots has/had to duplicate work done by Gnome and KDE (wlroots is more recent than much of gnome and kde's wayland support).

> Yes, that's on purpose. What's supposed to happen is that the portal daemon (NOT the application) pops up a dialog asking the user to choose which one they want. Unfortunately the wlr portal is still not done yet and doesn't implement this.

Yeah, the problem is that each compositor has to implement its own screenshot dialog, and you _have_ to go through that dialog for that compositor. So on wlroots, currently, an app can only get a full-screen screenshot. And a tool like flameshot becomes awkward if the compositor opens its own dialog. In X, if you don't like Gnome's screenshot tool, you have a handful of other options. With Wayland, tough luck; the most you can get is a better editor/annotation tool.

>I'm actually more concerned about the fact that wlroots has/had to duplicate work done by Gnome and KDE

I don't think so, GNOME and KDE have never had the goal of making a reusable and generic compositor library like wlroots. You can try to build something with their internal compositor libraries (libmutter or kwayland) but they probably won't be as nice.

>The problem is that each compositor has to implement its own screenshot dialog, and you _have_ to go through that dialog for that compositor.

This is on purpose and it's not the problem. It's the only way to do it securely. The problem is that you are trying to perform a privileged operation, which is the only way that something like flameshot can even work. Allowing random unprivileged programs to scrape your screen without confirmation is how you get trojans and other spyware. It's not worth adding more APIs to the portal just to support this because it's intended to be a secure API that can be accessed from within sandboxed applications.

Sure there are other tools on X but unfortunately none of those options are secure either.

> This is on purpose and it's not the problem. It's the only way to do it securely. The problem is that you are trying to perform a privileged operation, which is the only way that something like flameshot can even work.

That's not true. One way is to have secure protocols that can only be used by whitelisted programs in a secure context. sway has something like this (although by default I think it is pretty open), but there isn't any kind of standard mechanism for privileged protocols in wayland.

Also, I don't see why the screenshot API couldn't take a value for the type of screenshot to take. Like an enum with values for Region, Window, Screen, Full, and Any. To hint at what kind of screenshot to prefer.

Yes, one way to have a whitelist is to pop up a dialog asking to approve elevated permissions for a certain application. This is what mobile operating systems already do. The security implementation in sway is incomplete and has stalled, and is not going to work for all other types of desktop anyway. Pluggable security configuration should probably be added to wlroots at some point. This would allow any compositor to implement their preferred security policy and support whatever MACs or auditing they need.

The desktop portal does sort of support choosing a source, but only for screencast. See the enum here: https://flatpak.github.io/xdg-desktop-portal/portal-docs.htm...

> There are standard APIs for screenshots and screencapture

What standard? Does anyone track how compliant the existing compositors are with the standard?

The standard is the desktop portal. The top-level post has a list of backend implementations. The readme in the main daemon has the same list: https://github.com/flatpak/xdg-desktop-portal#using-portals

It actually works in a way which makes sense for modern compositors and GPUs, which means the rendering is much smoother, without tearing issues and so on. The difficulty of getting this to work reliably in Xorg is what led the maintainers to abandon it and work on a replacement. It just turns out that shifting a bunch of software built around a core, complicated interface to another system is quite difficult.

I'm kinda new to Linux as a desktop, and thus went straight to Wayland, so these kinds of comments from ol'-timers are super interesting to me.

I run a Wayland desktop, and I start it by typing its executable from the TTY after I log in. No fuss, no muss.

Everything works great, except there was this one game I wanted to try out that's a Windows .exe and needs to run in Wine and I couldn't quite get it to run in Wayland. So I installed xorg-server and an X window manager. Tried to just run it from TTY and it complained that there was no X server running. Okay, turns out I need another program to start X, then start my window manager, as a kind of desktop chaperone. Finally get that worked out, try running my game, and the screen tearing is a nightmare. So now I have to run a compositor in there as well to be an intermediary in the already extremely complicated X protocol. And since X needs to run as root (I think?), half the time I try to start it, I get odd permissions errors, or it tries to use the wrong TTY. As someone going the _other_ direction, I can't fathom how anyone puts up with X.

The good news is that after it did its initial setup and install in X, the game now seems to run fine in Wayland. :D
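For anyone retracing those steps, the glue that startx expects is a ~/.xinitrc config file. A minimal sketch - the window manager and compositor here (openbox, picom) are just stand-ins for whatever you have installed:

```
# ~/.xinitrc - read by startx(1)
picom &          # standalone compositor; fixes most tearing under X
exec openbox     # exec so the X session ends when the WM exits
```

With that in place, running startx from the TTY brings up X, the compositor, and the window manager in one go, no display manager required.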

X11/Xorg was the default on many distros, so often it was preconfigured in a working state. I started my Linux journey around...2004? And booting into Mandrake or Slackware (or a Knoppix Live CD, my true beginning), X would work fine. But as soon as I had to install it myself (minimal Fedora, minimal Debian, or my fave, stage 1 Gentoo), I'd hit all kinds of issues with configuration and starting the X server.

As a counterpoint, I tried to set up Wayland a couple years back on Ubuntu and Fedora before it was default anywhere, and that was also a nightmare.

It's easy to forget sometimes just how much the distro maintainers make our lives easier.

Xorg has definitely become easier to configure. Back in the old days, you had to write the XF86Config file, either manually or automatically during installation, or else it wouldn’t do anything. These days, Xorg auto-detects everything and you only need an xorg.conf if you’re doing something weird.

Yeah, back then I was using one of the glorious Trinitrons at 75 Hz and 1600x1200. To work at all, I had to manually look up the horizontal and vertical sync ranges and put 'em in the XF86Config.
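For the curious, that ritual looked roughly like the Monitor section below in XF86Config. The sync ranges and Modeline here are illustrative (taken from typical VESA 1600x1200@75 timings), not from a real Trinitron datasheet - and a wrong HorizSync range could genuinely drive an old CRT out of spec:

```
Section "Monitor"
    Identifier  "Trinitron"
    # Ranges had to be copied from the monitor's manual
    HorizSync   30-96
    VertRefresh 48-120
    Modeline    "1600x1200" 202.5 1600 1664 1856 2160 1200 1201 1204 1250
EndSection
```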

Most people don't start X from the TTY; they use a display manager that starts X and manages the login.

The screen tearing isn't caused by X. It's a faulty video driver.

You had decades to fix the faulty video drivers for X11... Meanwhile they have worked on Wayland since day one.

> Meanwhile they work on Wayland since day one.

So how's that Nvidia support coming along?

Vendors with closed source drivers had decades to fix.


We found the Nvidia fanboy.

Debian 11, AMD RX560, KDE on X, FreeSync on and working; screen tearing appears in landscape orientation and isn't there in portrait. If that's a driver problem, then explain why it works fine at the same buffer size.

More uninformed lies...

There used to be all kinds of nice rules. Like that X would run on tty7.

> I can't fathom how anyone puts up with X.


I don't think that's the case. X just works for me. I had to configure it once; after that I experienced no friction using or updating it.

I use ssh -X multiple times per week in general. Wayland in itself does not provide something equivalent.

Keep an eye on the waypipe project for the equivalent functionality: https://gitlab.freedesktop.org/mstoeckl/waypipe/

I don't think any solution based on video streaming can ever match what X11 provides, which is that remote apps use the settings of the client computer for rendering. E.g. with ssh -X, if I set my DPI in my .Xresources, no matter which machine I'm ssh'ing to, I always get a correct font size for my local screen.
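For reference, that .Xresources trick looks like the fragment below (144 is just an example value for a HiDPI panel). Because the resource database lives on the local X server, remote apps started over ssh -X pick it up automatically:

```
! ~/.Xresources - resolved by the local X server,
! so apps forwarded over ssh -X render at the right size
Xft.dpi: 144
```

Load it with xrdb -merge ~/.Xresources.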

I haven't tested but there is no reason that can't be done in waypipe. It works by intercepting certain protocol messages and proxying them over the network. The client just has to be given the output information from the remote machine.

See chapter 7 of the Unix haters handbook from 1994, which was linked here the other day: http://simson.net/ref/ugh.pdf

The amazing thing about Wayland is that it's taken over 25 years to happen. Over those 25 years, X has become less of a problem as CPU speed and RAM have grown exponentially, and we now have GPUs to help it too.

Not an expert but it's something along the lines of security, things like keyloggers must not work, or at least have a hard time working for example.

On X:

1. Implementing a screenlocker in a secure and working manner on X is impossible.

2. Screenlockers cannot be activated while a context menu (or something similar) is open.

3. There is a lot of screen tearing, in particular in Firefox and Chromium on X scrolling is a joke.

4. Dual-DPI is a no.

5. Everything is in a shared buffer, so security is a joke.

On Wayland these things should be fixed.

I had to turn on X11 for screen recording once (the screen recording was done by a windows only app running under wine). It didn't take a minute to see extreme tearing. I seriously don't understand how anyone can use X11 other than as a fallback.

Those will cover the capture plugins, but OBS still needs XWayland to run due to a dependency on GLX for rendering. For those interested, there is an open PR here to add native EGL/Wayland support: https://github.com/obsproject/obs-studio/pull/2484

For wlroots compositors there is also the wlrobs plugin, which can be used if you don't need pipewire: https://hg.sr.ht/~scoopta/wlrobs

There is an even cooler pull request that adds zero-copy screen capture. https://github.com/obsproject/obs-studio/pull/1758

I think this is a better way to go to get the same performance and low latency gaming capture as on Windows with gaming GPUs.

The guy who made that PR frequently streams coding sessions on Youtube. I think he made it because he wanted a better way to stream some cool live opengl coding sessions. And even though that code isn't production ready, he has used it for some time now and it seems to work great.

If there is some company that slightly cares about Linux desktop and gaming on Linux, I would suggest helping with that pull request and getting it merged. (Anyone from Steam, AMD or Nvidia here?)

Some of the EGL portions of that PR are actually included in the one I linked :) At some point my plan is to go through and merge these all together if no one else does it, but streaming has not been a priority for me at the moment.

Whoa, that's awesome! I tried to get the same thing working with ffmpeg, a la:

  ffmpeg -device /dev/dri/card1 -f kmsgrab -i - \
    -vf 'crop=4096:2160:0:0,hwmap=derive_device=vaapi,scale_vaapi=w=4096:h=2160:format=nv12' \
    -c:v h264_vaapi -qp 24 output.mp4
... and that process was, um, exciting, but not in a good way if you care about having an MP4 at the end with minimal fuss.

Right, I figured capturing is the big ticket item, and that most people wouldn't care what OBS itself runs on. Is there a reason to care about it running on XWayland other than being able to say you don't need X at all anymore? Would you expect to see major improvements for apps that are already doing all their heavy lifting in GLX on X11?

>Is there a reason to care about it running on XWayland other than being able to say you don't need X at all anymore?

I personally don't on my setup but the reason to do it is so other plugins can make use of EGL extensions. Native Wayland support just comes along with that trivially. Future development on platform-integration extensions is expected to happen in EGL instead of in GLX. For a current example the other PR that does direct KMS capture needs EGL to work, even with the X11 backend.

> If you're running a modern Linux desktop you're probably running Wayland

I'm a single data point but I'm running Ubuntu 19.10 and I'm not running Wayland. I don't remember if I opted out during the installation or if I wasn't given the choice.

The top reason to stay on X11 is that no screen sharing application works with Wayland (Meet, Slack, Skype), and I need them a few times per week to work with my customers.

This was more or less your point but from a different perspective.

> If you're running a modern Linux desktop you're probably running Wayland

I believe the big exception to this is Nvidia. It looks like things might be changing, but until quite recently, the Nvidia proprietary driver was X11 only, so anyone running Nvidia graphics would automatically fall back to X11.

Wayland won't be a thing until they agree on a common API for real-time screen capture. I've read that some Wayland developers said screen capturing is not a priority, and I can't understand that. Demand for screen capturing is higher than ever, now that we have ubiquitous live streaming sites and people earning money from them. Besides, the easiest way to explain how to use the GNU/Linux desktop to complete beginners is with videos.

The common API is the desktop portal and pipewire. The major projects (GNOME/KDE/wlroots) all agree on this one. Take a look at the links in the GP comment for more info.

The common API is whatever gains traction. Wayland has even less direction than Xorg development had (especially in its early days), because it's a spec with lots of holes that others have to implement and fill in. Even Keith Packard doesn't think Wayland is on a good track anymore.

The Linux ecosystem needs a standard and unified API or SDK for its desktop endeavour, like macOS and Windows have.

This is why this whole thread - in which the user has to find out whether an app like OBS is running on KDE or GNOME, with X11 or Wayland - risks losing traction with general users. I always recommend that people don't bother trying out other distros and use Ubuntu instead.

The Linux community is eternally stuck with its micro-ecosystem of alternatives to alternatives in the desktop stack, best described as a Howl's Moving Castle of components.

Also for future Linux app developers, never tell the user to 'compile' something as a way of distributing your app.

>The Linux ecosystem needs a standard and unified API or SDK for its desktop endeavour

In my opinion, this is incredibly unlikely to happen any time soon. The closest existing thing to that is building web apps targeting Chrome and Chrome OS. If that's not your thing, then I would advise against operating on the assumption that there will ever be a unified SDK. At least for me it's gotten easier to understand and work with the open source world after internalizing that. There are both upsides and downsides to it.

Ubuntu is a funny example because they were ready to drop both X and Wayland for a while. They came very close to shipping their own incompatible display server called Mir.

Maybe ChromeOS could do since it is the closest to this idea.

But distro-wise, if that's the case then the second-to-last sentence in my previous reply is an unfortunate tautology, which doesn't look good for those who just want to get work done or need to reproduce/trace bugs in subsystems. :(

My point with web apps is that you can target both Chromebooks (technically a "Linux desktop") and any other system that has the Chrome browser installed.

If you're shipping a native B2B application the standard solution I see is to target a specific distro version (Latest RHEL/CentOS, Ubuntu LTS, etc) and tell customers you only support the default desktop. If they want support for some other weird configuration they can pay extra for that.

The desktop portal has gained traction. This is what we have right now, I don't know how to solve the problem of vendor- or desktop-specific features that need to be supported in extensions. X has experienced fragmentation from having to do this through its entire existence. I think the only thing a protocol designer really can do is make it easier to ship extensions. If Wayland does that for you, you probably know it already.

> If you're running a modern Linux desktop you're probably running Wayland

I missed the point when Wayland took over all the major modern distros. Has it superseded Xorg now? I've been using X11 forever and never thought of alternatives.

I think Fedora is the only major distro running Wayland by default. Debian and Ubuntu are still on X11.

I think Canonical shipped 17.10 (or another minor release) with Wayland by default, but subsequent releases reverted to X11.

Debian Buster (which is "stable" now) defaults to Wayland in Gnome, but you can switch to Xorg at the login window, which I had to do this week for screencasting to work as expected.

> with per-compositor "solutions"

Didn't everyone agree to use Pipewire for that? Or is it still being bikeshedded to death?

Somehow for input, everyone settled on libinput and this didn't become a problem.

That's mostly the case now, but it's a much more recent development than Wayland on the desktop becoming popular. Also, Pipewire gives you the plumbing, but you still need the portal API. My understanding is everyone more or less agrees that's the way forward, but it's still not stable and ubiquitously implemented. Even then, that resolves capture but not control: if you want something closer to ssh -X you need ways to forward input too, and IIRC right now the main answer for that is still compositor specific, e.g. krfb relying on kwin/KDE.

Maybe libinput should also implement remote features?

There's still plenty of things which don't work with Wayland. And even Gnome Shell, which perhaps has the best support for Wayland, doesn't work very well with Wayland on some of my machines (Gnome Shell is extremely slow and jerky at least on my one machine if I try to run it with Wayland).

I've been using screen sharing on Plasma/Wayland for a while now and it works absolutely fine. With krfb remote desktop control is also fully available. The latter uses a KWin specific protocol though IIRC as virtual input isn't part of the portal API.

wayland is mega broken on HiDPI. Or at least apps are broken on wayland on HiDPI. Very sad

I am currently running GNOME on Wayland with multiple displays at different fractional scaling settings, and it works fine. On the other hand, in Xorg I can't set a different scaling factor for each monitor (at least GNOME doesn't allow it), nor can I use fractional scaling.

Does fractional scaling work with Electron programs like VS Code, Discord, Slack, etc? On Wayland? It is a blurry mess for me...

That's GDM's fault, although XRandr doesn't make it any easier to implement.

If you are looking for a more robust solution, Streamlabs OBS[1] is more popular in the livestreaming community; it's OBS on steroids. It is also open source, and just released a Mac beta this week.

[1]: https://streamlabs.com/

I wouldn't say Streamlabs is more robust - it's the same core of OBS Studio, with Electron instead of Qt for the frontend and better Streamlabs integration.

For newbies, it also comes with a ton of demo content and setups (pre-roll, transition, etc), to ease the learning and spin-up curve. Agree that doesn't make it more robust (see my other comment). I have no affiliation with either project, just an Old Guy learning/doing some streaming projects.

The main issue I have with Streamlabs is that they _heavily_ push their "Prime" SaaS model.

I started to get myself set up on Streamlabs for the first time the other day, and accidentally deleted my free Theme/ Scenes that I set up during install. So I went to their Store to re-find it (https://streamlabs.com/library#/), and it's almost impossible to filter through to find the non-prime things -- none of their filters / sorts allow filtering by price or Prime.

I stumbled upon that I could type "free" in the search bar to finally do it, but it was quite painful, and without that I was having to filter out the first 20-30 pages to get past all the "Prime" addons.

Is this a fork of OBS or another project?

If it's a fork, why not work on the OBS project to implement these enhancements there? Is there backlash to that sort of thing from the OBS maintainers?

Almost all of their changes are UI changes. They use our core OBS Studio code, with an Electron GUI instead of Qt. Not easy/basically impossible to port back over. We monitor any back end changes they make, and pull when appropriate. They do not collaborate with us, however, so it's rare they make changes that we can use.

I think one of the reasons why is because Streamlabs sells premium add-ons https://streamlabs.com/goprime

That aside, I think it specializes OBS in a way that's too narrow for the upstream project, which tries to stay generic. I think they both have their place, but personally I use OBS for recording videos and SLOBS adds no value for me.

I didn't know about SLOBS, but a "theme store" might be worth exploring for OBS. It could actually be a revenue stream, just like WordPress, where the core is open source but several theme stores exist around it.

It's a frontend for OBS.

This is the Electron app: https://github.com/stream-labs/streamlabs-obs/

This is the wrapper library for Node: https://github.com/stream-labs/obs-studio-node

A link to the Mac beta? And no Linux version, right?

No Linux version, no. They concentrate on Twitch / game streamers. Most gamers are on Windows, so Linux doesn't even pop up on their radar.

Can you elaborate on what makes it more robust?

One example: keeping the Preview and Program windows open (common for realtime stream prep/management) on Mac with OBS will kill your frame rate (dropping 20%+ of frames) due to GPU rendering issues in Qt [1]. Streamlabs has no issues with this. I guess there's a question about whether that counts as "robust", but in terms of the app's features performing as they should, I would say so.

Some simple/weird workarounds: literally move the Preview window offscreen, and/or open a Windowed Projector (popup) for Preview (but again keep Preview off screen).

[1] Can't find the thread on it now, but it's something about the way the two views are structured in a container

It's still in beta, so it's not actually better yet. I tried it out for a week and it didn't work out. After about 30 minutes of streaming, my dropped frames went up to 90%, despite my connection being strong. OBS never had a problem with this.

Streamlabs OBS has support for alerts (subs/followers, donations, etc), themes, overlays/widgets, etc. Most livestreamers use it for a more interactive experience. Just depends on what you need.

Oh snap, it's out of Beta!?


Love it. I've been using it on Linux with v4l2loopback to get it into things like Skype, zoom, jitsi, and teams. Really slick.

For quarantine levity, combining this with the live audio effects possible in JACK Rack (voice changers, echoes) is hilarious. Maybe today's a good day to try that out in the engineering managers meeting.
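The v4l2loopback side is only a couple of commands. A rough sketch (the device number and card label here are arbitrary choices; exclusive_caps=1 is what makes Chrome-based apps see the device):

```shell
# Create one virtual video device at /dev/video10 with a friendly name
sudo modprobe v4l2loopback devices=1 video_nr=10 \
    card_label="OBS Camera" exclusive_caps=1

# Confirm the device showed up
v4l2-ctl --list-devices
```

Then point OBS's v4l2sink output (or the built-in virtual camera in newer builds) at /dev/video10, and the conferencing apps list "OBS Camera" as a regular webcam.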

That sounds like a great setup. I use OBS for recording and PulseEffects for some features like a noise gate. However, the latter doesn’t work well for me.

Do you have some docs you could share on the setup you describe above, please?

In any case, thanks for sharing your setup so far!

I have a video about the JACK Rack setup at least from a while back but haven't written anything about the OBS/v4l2loopback stuff. It's probably a good time to write something like that up, eh?


I started using OBS when our church moved our services to live streaming due to the pandemic. Our mostly non-technical volunteer media team has had zero issues using it to stream to Facebook or a self-hosted Restreamer instance. Easy to use, straightforward interface. I'm sure we'll keep using it for streaming even after the pandemic is over.

A blog post about your setup would be invaluable for small churches with a skeleton/volunteer AV crew. Would you be willing?

Yep, same here. Went from zero to a fairly professional streaming experience in a few hours. I had a couple Logitech C920 webcams, and a Presonus Audiobox lying around. Combined with an older iMac, I was able to set up a pretty "fire and forget" rig.

What's your setup if you don't mind me asking?

I know I've been getting a number of questions around livestreaming for churches lately (since I used to do that a lot more in the past), and I've been gathering my thoughts in a blog post here: https://www.jeffgeerling.com/blog/2020/how-livestream-masses...

It depends mainly on the budget, but with Easter coming up (probably _the_ major day in many (if not most) Christian churches), it seems many groups are scrambling to find a way to get a decent quality stream set up in time.

Many groups on the lower end of the budget scale are using an iPhone on a tripod (but the audio is terrible). Medium range you have one or two cameras plugged into a laptop with OBS, and you can get audio from the church's sound system. High end many places already have PTZ camera systems installed, and they just need someone to control the video system during the event.

What kind of camera are you using?

I've found OBS Studio to be brilliant. I needed to capture and livestream the screen of a small embedded Windows box. I purchased an HDMI-in/USB-out HD capture device. Plugged the HDMI side into the little Windows box, and the USB side into my linux box. OBS Studio recognized the new "HD Capture" virtual device, and captured the live video off the other system. I could save to a file, livestream, etc. No driver issues or problems. Just amazing.

My team loves OBS! We’ve been using it to live stream “learn to code” sessions for kids 4-5x daily after the world went into lockdown.

Shameless self plug: https://makecode.com/online-learning

We just ran the entire foss-north conference virtually using OBS. Not a glitch during 4 days. I started documenting the setup here: https://github.com/e8johan/virtual-conf-resources .

If you are looking for self-hosted desktop streaming with OBS via nginx and RTMP, you might find some insights in my recent blog post: https://bitkeks.eu/blog/2020/03/desktop-video-streaming-serv... The nginx module also supports DASH encoding, which can be delivered by dash.js - I have it in production, but not yet updated the article. Next I'll try setting up SRT.
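For reference, the RTMP part of the nginx config is small. A minimal sketch, assuming nginx was built with the RTMP module; the application name "live" and the HLS path are arbitrary choices here:

```nginx
rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;
            record off;

            # Optionally repackage the incoming stream as HLS for browsers
            hls on;
            hls_path /tmp/hls;
            hls_fragment 3s;
        }
    }
}
```

OBS then streams to rtmp://your-server/live with whatever stream key you choose.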

Also check out https://github.com/alfg/docker-nginx-rtmp for a Docker setup with a template to output HLS.

I've been trying to find a way to easily use my green screen outside Zoom.

Have been tinkering with OBS: https://www.rightpoint.com/thought/2017/12/19/improving-your..., https://streamshark.io/blog/chroma-key-software-live-streami...

I'm looking for an actively-developed macOS virtual webcam tool, as CamTwist's website is showing a PHP error, and their Mac software doesn't seem to be notarized: http://camtwiststudio.com.

Last week, HN's front-page featured "Proposed bounty for adding virtual camera" [0] that ended up generating an RFC. [1]

You might find some encouraging and helpful information there, if you haven't seen those links already.

[0] https://news.ycombinator.com/item?id=22682022

[1] https://github.com/obsproject/rfcs/pull/15

Thank you!!

I too am looking for a virtual cam. Have been for a long time.

ManyCam is an option, though I'm not really sure how reliable or performant it is.

You can try out the free version, seems ok. I was worried about it being sketchy, but it looks like it's from a team in Canada.

I'm not sure if it's true but I love the notion that Canadians are inherently better developers than the rest of the world.

I didn't mean to suggest they're better developers per se, but there's very little malware that comes out of Canada so with limited information about the trustworthiness of this application it might be inherently less sketchy than camera/mic capture software that comes out of some other countries. That said, they are in Quebec. :D

>I was worried about it being sketchy,

It's not sketchy, it's just aimed at, well, pro cammers. Or at least used to be.

Now that we're all camming, we're all in the target audience.

This. Just a few days ago I wasted a few hours playing with OBS, CamTwist, and a few other "chroma key capable apps"... all with no luck... Zoom's built in beats them all....

What I really would like is to have my green screen capability regardless of the video conference app (e.g. FaceTime, FB, etc.)...

I ran into the issue with Syphon and CamTwist... and I gave up :)

I've been using it to create videos of an infrastructure provisioning product. One of the most useful things so far is being able to record a process that may take 15 minutes to fail but only has an error for a few seconds before clearing the screen, rebooting, hanging, etc. Much easier to rewind and grab the failure from a stream than to hang poised over a keyboard waiting to bang print screen at the precise moment needed.

Sounds like a perfect fit for OBS Replay Buffer.

...shouldn't you just log errors as you go and check the log?

Absolutely. However this isn't my code and it can fail in strange ways with an ephemeral error message. If I can't change the code, this is my workaround.

If you think about it, that's exactly what they're doing. ;-)

OBS is a very nice example of developing and distributing a Qt app for Mac / Windows / Linux with rad performance.

Telegram is the best Qt cross-platform app I've used.

Although Telegram is built on QML, not Widgets, so I think they belong in different categories.
