I run a 6,000-person company during the day and have OBS set up to push into Google Meet. I've done townhalls with live on-screen Q/A voting, hosted podcast discussions, and done PIP product reviews. I use its video record feature to react to Figma prototypes and post the MP4s in the respective channel for discussion.
OBS is an amazing tool and it's worth learning. Even simple things like adding a compressor to an audio stream can make a huge difference to the quality. As one of our coaches recently said, "Video quality is the new presence in 1:1s".
On Windows it's reasonably easy to output OBS to a virtual camera for video conferencing software through a plugin. I've posted a bounty of $10k recently to make this a native feature and it's getting lots of traction.
https://twitter.com/tobi/status/1242641154576965634 https://github.com/obsproject/obs-studio/issues/2568 https://github.com/obsproject/rfcs/pull/15
I want to use OBS's realtime noise suppression and noise gating in another app (mainly the online lecture platform Echo360). I got it working using VoiceMeeter in what seems like a hacky way, but only with high latency so far.
# Illustrative reconstruction; the sink/source names are placeholders.
pactl load-module module-null-sink \
    sink_name=virtual_mic_sink
pactl load-module \
    module-remap-source \
    master=virtual_mic_sink.monitor \
    source_name=virtual_mic
I took the commands from one of my side projects, maybe someone will find other parts of it interesting: https://github.com/pzmarzly/mic_over_mumble
It's not well known, but it really works spectacularly well compared to those other options. For me it has never had any audible buffer underruns (unlike Lighthost), no noticeable latency (unlike SAVI and Voicemeeter, even with small buffer sizes), no problems regarding exclusive mode (unlike Voicemeeter), and it works with every single application.
The UI is not terribly clear about this, but it can drive multiple devices independently, simply by adding several "Device" blocks to the configuration.
I adjusted the numbers for my setup but his were a good starting point.
Some general EQ'ing on the mic also works wonders for how well it sounds, but that's very specific to your voice and mic.
Another use case of Equalizer APO where it is much better than everything else is compressing game audio. Some games simply have audio that was designed without regard for hearing safety (CS:GO is a strong contender for #1 here), and this helps immensely with it.
There's also AU Lab, which can be used for some effects in realtime (e.g. adding reverb and redirecting to the built-in audio out).
My daughter wanted to record some videos of her playing a web-based game. I found the interface to OBS unintuitive. I managed to figure out how to capture a specific area on the desktop, but it was unexpectedly difficult to resize the output to match the input. I found some way of doing it that I can't remember.
A few months later I had to do it again and that time I couldn't find the option to resize the output to match the input.
I'd love to find some resources to help her learn how to set up OBS for recording or streaming.
EposVox's OBS Studio Master Class 2018 YouTube Playlist: https://www.youtube.com/playlist?list=PLzo7l8HTJNK-IKzM_zDic...
Then add an audio stream, and you're good to go.
I can also help you set it up if you want!
Until I found OBS I was basically trying to record my screen and then narrate over it after the fact, but it just didn't jibe with me as there's basically no room for improv or failure at that point. And I personally prefer to leave my less egregious mistakes in the final cut to demonstrate that you don't always get things right the first time.
I don't know him from a bar of soap outside of what's shared publicly, but a cold, calculating executive is about as far from the mark as you could possibly be.
In truth, I have long admired Shopify for their open culture, their tech blogs, and a product that empowers small businesses. None of these things are enough, however, to convince me that he's anything more than a wealthy Libertarian seeking (primarily) personal gain through economic exploitation, managerial coercion, and authoritarian hierarchies.
> a cold, calculating executive is about as far from the mark as you could possibly be
This is an indefensible exaggeration. You're telling me Tobi is a saint? A CEO? An absolute absurdity.
Liberals love democracy until it comes to the workplace. You disgust me.
"presence" before remote working was about actively being in the room - if you've ever had a manager who checks their email in your 1:1 you've experienced someone not being "present".
Video quality in a remote 1:1 is the foundation of that presence, e.g. how the other person can see that you're understanding and empathising.
You ever had a "phone call"?
We're seeing all kinds of new uses, especially users who are integrating the OBS Virtualcam plugin to do presentations and other content sharing with apps that only support webcam input.
Anyway. Does anyone have any recommendations for settings to use in order to avoid lots of frame dropping and OBS making other applications sluggish when I try to stream to Twitch? I'm using a MacBook Air Retina 2018 model with an external monitor connected to it.
Recording the screen works much better than any other screen recording software I tried. For this use case the preview can be a bit confusing. If the resolution does not match (because of OS 200% scaling for example) going to the settings each time to adjust it is a bit cumbersome; the interactive resizing handles in the preview somehow never helped me. Sometimes one of the reset zoom context menus helps.
Also the "Window source" would be awesome, but is a bit cumbersome to set up every time, and doesn't capture things like menus unfortunately.
It's probably really difficult to improve these things so they work automagically for dummies like me who know very little about OBS and use only a tiny feature set, without making things worse for power users, who are probably happy with things as they are.
Keep up the great work and thank you.
TL;DR: There needs to be a master audio level display, or at least some sort of master indication of whether a stream is getting an audio signal or not.
Or at the very least, audio sources should be muted by default in new scenes.
We have an Intro screen scene that just displays our logo and some background movement with a message that we will begin soon. We started the live stream and then muted the mic on the audio inputs and then, for good measure, muted the physical mic. We then proceeded to chat and get things ready for the presentation, etc.
Little did I know that OBS includes -all- audio sources on every scene, by default, unmuted.
And though I had muted our regular mic, the webcam's built-in mic was on and transmitting. We didn't see the green audio level animation or even the listing for that input either, because it was at the bottom of the list of audio input sources where you have to scroll down within that box to see it.
Luckily, we didn't say anything too embarrassing, but it was embarrassing nonetheless.
Are there any paid developers working on it? Does the project make money?
I haven't used it yet, so maybe this already exists, but maybe think about adding something to the app itself to remind people to contribute financially, similar to how Wikipedia does it.
Maybe count how many times someone is using the app (a local count) and when the count is high, show a little something: a short message and a call to action to donate.
Not having that on Linux when it exists on windows is frustrating.
Thanks for making OBS!!
Meanwhile I'm doing my online courses with OBS, and it works beautifully. I have multiple scenes set up in OBS that grab different parts of my screens, and I switch between them with simple key strokes, while narrating on my actions as I do them.
It's a very simple, and very effective setup, and my students love it.
To me, it is immensely powerful to be able to switch scenes and narrate live, instead of doing these things in post. This saves a ton of time, that I can instead spend on refining my content.
Another concern is the many overlays that accompany game launchers and drivers, including: Nvidia, Steam, GOG, etc. These add latency and sometimes private notifications.
No matter what, you'll probably need to spend a little time tweaking things to get it all working like you want. But the 'scenes' and preferences are pretty good about letting you lock things down once you do find something you like.
If you're a teacher and you're going through a PDF exercise opened on your screen while drawing things on a whiteboard behind you, you may want to have 2 scenes:
* One with the opened PDF in full screen, with your camera feed in the bottom-right of the video, in small.
* One with the camera feed in full screen, where viewers can clearly see what you're writing
You'd then be able to switch between those 2 scenes at will depending on what you're currently doing. You'd show the first scene when you're reading the exercise out loud and then switch to the second scene when you're resolving it on the whiteboard.
Before meetings start, I can broadcast music and display announcements, and then without having to hit a jarring "End Screenshare" can switch to my webcam and start a meeting. Live demos and presentations are another scene with the desktop/window/browser and webcam. 100% would recommend.
I'm using v4l2loopback to create a dummy video device, ffmpeg to create a stream endpoint that streams into the dummy video device, then setting up OBS to stream to localhost.
It is actually really nice to have the capability to fully control what is going in to the video input.
I haven't run into a need to also change the audio input yet but if it becomes necessary, it should be possible to set up loopback with ALSA.
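For reference, a minimal sketch of what that ALSA loopback could look like, assuming the snd-aloop kernel module (device numbering and filenames here are illustrative and may differ on your system):

```shell
# Load the ALSA loopback kernel module; it creates a "Loopback" card.
sudo modprobe snd-aloop

# Audio played into one side of the loopback appears as capture input
# on the other side: write to hw:Loopback,0, read from hw:Loopback,1.
aplay -D hw:Loopback,0 some_audio.wav &

# Any app that records from hw:Loopback,1 now "hears" that playback:
arecord -D hw:Loopback,1 -f cd -d 5 captured.wav
```

The app that needs a fake microphone would then be pointed at the `hw:Loopback,1` capture device.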
I suppose Zoom detects that my actual webcam is in use, and therefore refuses to display... any webcam whatsoever, including the virtual one...? Makes little sense, but maybe...
# modprobe -r v4l2loopback
# modprobe v4l2loopback video_nr=7 exclusive_caps=1 card_label='Screenshare'
# Replace `/dev/video2` with the dummy video device added by `v4l2loopback`.
ffmpeg -re -listen 1 -i rtmp://127.0.0.1:5050/ -c:v rawvideo -an -pix_fmt yuv420p -f v4l2 /dev/video2
It's not very efficient and there's a delay since OBS is encoding with h264 then ffmpeg is decoding that. It's not too bad for me because I can use the NVENC encoder but I'm sure there's a way to get OBS to stream raw video somehow.
The only way I found to stream audio to zoom is to use a pulseaudio module that lets you use a named pipe as a source. You can then output your sound to said named pipe, and set it as the microphone in zoom. The sound is pretty bad of course.
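A sketch of that pipe-source approach, assuming PulseAudio's module-pipe-source (the FIFO path, source name, and sample format here are placeholders to adjust for your setup):

```shell
# Create a PulseAudio source backed by a named pipe.
pactl load-module module-pipe-source \
    source_name=virtmic \
    file=/tmp/virtmic.fifo \
    format=s16le rate=44100 channels=2

# Anything written to the FIFO in that exact raw format shows up as
# microphone input; e.g. decode a file into it with ffmpeg:
ffmpeg -re -i input.mp3 -f s16le -ar 44100 -ac 2 /tmp/virtmic.fifo
```

Then pick "virtmic" as the microphone in Zoom. The format arguments on both sides have to match, since the pipe carries headerless raw samples.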
I've been using it to make a corny music interview show with my local musician friends during the coronavirus shelter in place. Whereas a lot of my fellow musicians are streaming from their phone, I'm able to connect a mixer to my computer and stream the show with really good audio quality.
One thing to keep in mind, though—unless you have a dedicated video card, and it's supported by OBS (the list of supported cards on macOS is very thin), your CPU has to do all the compositing and compression, meaning you need a lot of CPU to be able to manage the streaming.
On my 2016 MacBook Pro 13", it barely has the horsepower to do a stream and also run processes that I'm explaining (e.g. manage some VMs, run some database operations, etc.). I had to turn down the compression method to 'ultrafast', which is lowest quality (but still pretty good with 1080p output), and I also use SwitchResX to set my shared display at 1080p 1x resolution (instead of 4K/2x resolution).
If I remember correctly then your PC is a dual-core, and you can't expect a low-end machine to do high-res encoding well without HW acceleration.
PC is a non-option for a chunk of people in dev and media - I, for instance, do iOS dev for a living, and live by Logic Pro X for professional audio work. (I've been using Logic for about 15 years...)
The amount of time it would take for me to transition, to, say Ardour - or the amount of my career that would be lost if I swapped from, say, iOS to...I'm not sure what the FOSS equivalent is...(Android doesn't count, obviously, because Google) I'd lose years and years of training, experience, and wisdom.
From my experience real experience comes from learning concepts not applications.
Someone who knows the ins and outs of Microsoft Word can switch relatively easily to LibreOffice Writer.
Or someone who is a good modeler in 3ds Max can also become a good modeler in Blender.
Buttons in applications change position all the time. The UI gets reworked, keyboard shortcuts change, etc. from each version to the next, but if the concept behind doing certain stuff is learned, it doesn't really matter which software product is used.
The only time when stuff is really hard is when you encounter a concept you haven't worked with before and have to change the way you think radically.
My PC laptop has almost identical specs to my Mac anyways... I haven’t owned a desktop in 10 years :/
(Ask me about ffmpeg raw GPU buffer capture one day; running a bunch of codec code as root is always exciting.)
OBS Plugin: https://gitlab.gnome.org/feaneron/obs-xdg-portal/
For GTK: https://github.com/flatpak/xdg-desktop-portal-gtk
For KDE: https://github.com/KDE/xdg-desktop-portal-kde
For wlr: https://github.com/emersion/xdg-desktop-portal-wlr
Wayland is a lot more security conscious, but once the compositors are up to parity, these things won't be a problem.
At least for me, Wayland is way smoother than X11 ever was as well.
In the specific case of Xorg I find the situation strange because I'd gladly have made the switch 15 years ago back when messing with Xorg.conf was a common occurrence for me and it kept getting in the way (although a big portion of the blame was with the proprietary drivers, especially AMD's). Xorg was sometimes a bit of a pig too resource-wise, but that's when I was running a PC with 256MB of RAM. I remember being fairly optimistic when I first heard about Wayland and Mir, the prospect of ditching X11 was enticing.
But now? I haven't really had to wrestle with X in a long time. It just works for me. I'm definitely not looking forward to reworking my entire workflow for minor benefits although I suppose I'll have to one day. I also use X forwarding pretty extensively, but I'm probably a small minority these days.
This might have been fine in the past, but it is not really OK any more with efforts to make things more secure (e.g. to prevent a malicious application from reading your password entry, making screenshots of sensitive data, injecting input events into your secure sessions, etc.).
The side effect is that new protocols need to be developed that applications can use to request access to the display/input/output of other applications in legitimate cases (such as screen sharing in your case), and not all of that is in place yet.
This is the main hurdle for most people. I would say that 99% of people agree that the Wayland way makes sense and is the better way of doing things, but without the needed access controls it's just not ready yet.
Like if Google said "Apps can't access [location|files|whatever] without permission" on Android with no way to grant those permissions.
So it breaks the entire linux philosophy of using input and output streams to pipe data between different modular applications?
>This might have been fine in the past, but is not really OK any more
Says who? Personally I like my computer being able to access other things on my computer. It kind of makes it more useful that way. The ability for applications on linux to fairly seamlessly work together using a set of standard protocols is one of the primary reasons I use it.
Not really. To use your analogy, the way that X works - every application being able to read the framebuffer of any other - is the equivalent of every application running as root and being able to read and modify any file on the system. When you consider that applications running under Wayland may include e.g. banking details, any app being able to read that is like anything being able to read /etc/shadow.
If your computer is perfectly secure, with no untrusted code running, that's great — and also far more secure than 90% of desktop computers out there.
You can pipe stuff to other executables all you want under Wayland; you just might not be able to easily (i.e. without the user granting permission using the correct protocol) inject keyboard events from one application into another (say, malware masquerading as a game injecting code in the form of keyboard events into a running terminal emulator or ssh client).
Nitpick: This is not a meaningful comparison. Wayland is a wire protocol. Xorg is a display server, and the wire protocol it implements is called X11. There are several other display servers that implement the Wayland protocol. Some of these display servers do support those core features, and some of them don't (yet). It depends on which one you're using. The display server used by GNOME should support those features.
>What is Wayland's reason for existing?
From the website :
>Wayland is intended as a simpler replacement for X, easier to develop and maintain.
wayland may be easier to maintain than X. But from what I've seen writing a compositor for wayland is more difficult than writing an X window manager, because things you got for free with X have to be implemented by the compositor in wayland.
And many applications are more difficult to target wayland, because there aren't (at least yet) standard protocols for things like screenshots, screencapture, etc. So they have to either choose one desktop environment to target, or have implementations for all of them.
There are standard APIs for screenshots and screencapture, implemented through the desktop portal and pipewire. Check the top-level post for more info about this -- it's part of why Wayland support for OBS has progressed.
Concerning the desktop portal API: it's basically just a wrapper around the native custom APIs of the underlying compositor, and it is pretty limited in functionality. For example, the screenshot API just has a way to request a screenshot; it doesn't have a way to specify that you would like to select a window, region, or display/screen/monitor. In the case of the wlr portal, from what I could tell it just always gives you a screenshot of the full desktop.
I am not sure how this is relevant if you're trying to write your own compositor. If those projects want to create extra work for themselves, that's on them.
>the screenshot API just has a way to request a screenshot, it doesn't have a way to specify that you would like to select a window, region, or display/screen/monitor.
Yes, that's on purpose. What's supposed to happen is that the portal daemon (NOT the application) pops up a dialog asking the user to choose which one they want. Unfortunately the wlr portal is still not done yet and doesn't implement this.
I'm actually more concerned about the fact that wlroots has/had to duplicate work done by Gnome and KDE (wlroots is more recent than much of gnome and kde's wayland support).
> Yes, that's on purpose. What's supposed to happen is that the portal daemon (NOT the application) pops up a dialog asking the user to choose which one they want. Unfortunately the wlr portal is still not done yet and doesn't implement this.
Yeah, the problem is that each compositor has to implement its own screenshot dialog, and you _have_ to go through that dialog for that compositor. So on wlroots, currently, an app can only get a full-screen screenshot. And a tool like flameshot becomes awkward if the compositor opens its own dialog. In X, if you don't like GNOME's screenshot tool, you have a handful of other options. With Wayland, tough luck; the most you can get is a better editor/annotation tool.
I don't think so, GNOME and KDE have never had the goal of making a reusable and generic compositor library like wlroots. You can try to build something with their internal compositor libraries (libmutter or kwayland) but they probably won't be as nice.
>The problem is that each compositor has to implement its own screenshot dialog, and you _have_ to go through that dialog for that compositor.
This is on purpose and it's not the problem. It's the only way to do it securely. The problem is that you are trying to perform a privileged operation, which is the only way that something like flameshot can even work. Allowing random unprivileged programs to scrape your screen without confirmation is how you get trojans and other spyware. It's not worth adding more APIs to the portal just to support this because it's intended to be a secure API that can be accessed from within sandboxed applications.
Sure there are other tools on X but unfortunately none of those options are secure either.
That's not true. One way is to have secure protocols that can only be used by whitelisted programs in a secure context. sway has something like this (although by default I think it is pretty open), but there isn't any kind of standard mechanism for privileged protocols in wayland.
Also, I don't see why the screenshot API couldn't take a value for the type of screenshot to take — like an enum with values for Region, Window, Screen, Full, and Any — to hint at what kind of screenshot to prefer.
The desktop portal does sort of support choosing a source, but only for screencast. See the enum here: https://flatpak.github.io/xdg-desktop-portal/portal-docs.htm...
What standard? Does anyone track how compliant the existing compositors are with the standard?
I run a Wayland desktop, and I start it by typing its executable from the TTY after I log in. No fuss, no muss.
Everything works great, except there was this one game I wanted to try out that's a Windows .exe and needs to run in Wine and I couldn't quite get it to run in Wayland. So I installed xorg-server and an X window manager. Tried to just run it from TTY and it complained that there was no X server running. Okay, turns out I need another program to start X, then start my window manager, as a kind of desktop chaperone. Finally get that worked out, try running my game, and the screen tearing is a nightmare. So now I have to run a compositor in there as well to be an intermediary in the already extremely complicated X protocol. And since X needs to run as root (I think?), half the time I try to start it, I get odd permissions errors, or it tries to use the wrong TTY. As someone going the _other_ direction, I can't fathom how anyone puts up with X.
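For anyone else making this trip, the X startup dance described above usually boils down to a small `~/.xinitrc` run by `startx` — a sketch, with the window manager and compositor chosen here purely as examples:

```shell
# ~/.xinitrc -- executed by startx after it brings up the X server.
picom &        # standalone compositor, mainly to avoid screen tearing
exec i3        # the window manager must be the last, foreground process
```

Then running `startx` from the TTY launches the whole stack; whether Xorg itself runs rootless depends on how the distro built it.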
The good news is that after it did its initial setup and install in X, the game now seems to run fine in Wayland. :D
As a counterpoint, I tried to set up Wayland a couple years back on Ubuntu and Fedora before it was default anywhere, and that was also a nightmare.
It's easy to forget sometimes just how much the distro maintainers make our lives easier.
So how's that Nvidia support coming along?
The amazing thing about Wayland is that it's taken over 25 years to happen. Over those 25 years, X has become less of a problem as CPU speed and RAM have grown exponentially, and we now have GPUs to help it too.
1. Implementing a screenlocker in a secure and working manner on X is impossible.
2. Screenlockers cannot be activated when the context menu (or something similar) is open.
3. There is a lot of screen tearing; in particular, scrolling in Firefox and Chromium on X is a joke.
4. Mixed-DPI setups are a no-go.
5. Everything is in a shared buffer, so security is a joke.
On Wayland these things should be fixed.
For wlroots compositors there is also the wlrobs plugin, which can be used if you don't need pipewire: https://hg.sr.ht/~scoopta/wlrobs
I think this is a better way to go to get the same performance and low latency gaming capture as on Windows with gaming GPUs.
The guy who made that PR frequently streams coding sessions on Youtube. I think he made it because he wanted a better way to stream some cool live opengl coding sessions. And even though that code isn't production ready, he has used it for some time now and it seems to work great.
If there is some company that slightly cares about Linux desktop and gaming on Linux, I would suggest helping with that pull request and getting it merged. (Anyone from Steam, AMD or Nvidia here?)
# kmsgrab hands out DRM frames; map them to VAAPI before encoding
# (filter chain per the ffmpeg kmsgrab docs; adjust the card path).
ffmpeg -device /dev/dri/card1 -f kmsgrab -i - \
    -vf 'hwmap=derive_device=vaapi,scale_vaapi=format=nv12' \
    -c:v h264_vaapi -qp 24 output.mp4
I personally don't on my setup but the reason to do it is so other plugins can make use of EGL extensions. Native Wayland support just comes along with that trivially. Future development on platform-integration extensions is expected to happen in EGL instead of in GLX. For a current example the other PR that does direct KMS capture needs EGL to work, even with the X11 backend.
I'm a single data point but I'm running Ubuntu 19.10 and I'm not running Wayland. I don't remember if I opted out during the installation or if I wasn't given the choice.
The top reason to stay on X11 is that no screen sharing application works with Wayland (Meet, Slack, Skype), and I need them a few times per week to work with my customers.
This was more or less your point but from a different perspective.
I believe the big exception to this is Nvidia. It looks like things might be changing, but until quite recently, the Nvidia proprietary driver was X11 only, so anyone running Nvidia graphics would automatically fall back to X11.
This is why this whole thread about having the user find out whether an app like OBS is running on KDE, or on GNOME with X11 or Wayland, risks losing traction with general users. I always recommend people not bother trying out the other distros and use Ubuntu instead.
The Linux community is eternally stuck with its micro-ecosystem of alternatives to alternatives of the desktop stack, which is best described as a Howl's Moving Castle of components.
Also for future Linux app developers, never tell the user to 'compile' something as a way of distributing your app.
In my opinion, this is incredibly unlikely to happen any time soon. The closest existing thing to that is building web apps targeting Chrome and Chrome OS. If that's not your thing, then I would advise against operating on the assumption that there will ever be a unified SDK. At least for me it's gotten easier to understand and work with the open source world after internalizing that. There are both upsides and downsides to it.
Ubuntu is a funny example because they were ready to drop both X and Wayland for a while. They came very close to shipping their own incompatible display server called Mir.
But distro-wise, if that's the case then the second-to-last sentence in my previous reply is an unfortunate tautology, which doesn't look good for those who just want to get work done or need to reproduce/trace bugs in subsystems. :(
If you're shipping a native B2B application the standard solution I see is to target a specific distro version (Latest RHEL/CentOS, Ubuntu LTS, etc) and tell customers you only support the default desktop. If they want support for some other weird configuration they can pay extra for that.
I missed the point when Wayland took over all the major modern distros. Did it supersede Xorg now? I've been using X11 forever and never thought of alternatives.
I think Canonical shipped 17.10 (or another minor release) with Wayland by default, but subsequent releases reverted to X11.
Didn't everyone agree to use PipeWire for that? Or is it still being bikeshedded to death?
Somehow for input, everyone settled on libinput and this didn't become a problem.
I started to get myself set up on Streamlabs for the first time the other day, and accidentally deleted my free Theme/ Scenes that I set up during install. So I went to their Store to re-find it (https://streamlabs.com/library#/), and it's almost impossible to filter through to find the non-prime things -- none of their filters / sorts allow filtering by price or Prime.
I stumbled upon that I could type "free" in the search bar to finally do it, but it was quite painful, and without that I was having to filter out the first 20-30 pages to get past all the "Prime" addons.
If it's a fork, why not work on the OBS project to implement these enhancements there? Is there backlash to that sort of thing from the OBS maintainers?
This is the Electron app: https://github.com/stream-labs/streamlabs-obs/
This is the wrapper library for Node: https://github.com/stream-labs/obs-studio-node
Some simple/weird workarounds: literally move the Preview window offscreen, and/or open a Windowed Projector (popup) for Preview (but again keep Preview off screen).
Can't find the thread on it now, but it's something about the way the two views are structured in a container.
For quarantine levity, this combined with live audio effects possible with JACK rack like voice changers and echos is hilarious. Maybe today's a good day to try that out on the engineering managers meeting.
Do you have some docs you could share on the setup you describe above, please?
In any case, thanks for sharing your setup so far!
It depends mainly on the budget, but with Easter coming up (probably _the_ major day in many (if not most) Christian churches), it seems many groups are scrambling to find a way to get a decent quality stream set up in time.
Many groups on the lower end of the budget scale are using an iPhone on a tripod (but the audio is terrible). Medium range you have one or two cameras plugged into a laptop with OBS, and you can get audio from the church's sound system. High end many places already have PTZ camera systems installed, and they just need someone to control the video system during the event.
Shameless self plug: https://makecode.com/online-learning
Have been tinkering with OBS: https://www.rightpoint.com/thought/2017/12/19/improving-your..., https://streamshark.io/blog/chroma-key-software-live-streami...
I'm looking for an actively-developed macOS virtual webcam tool, as CamTwist's website is showing a PHP error, and their Mac software doesn't seem to be notarized: http://camtwiststudio.com.
You might find some encouraging and helpful information there, if you haven't seen those links already.
It's not sketchy, it's just aimed at, well, pro cammers. Or at least used to be.
Now that we're all camming, we're all in the target audience.
What I really would like is to have my green screen capability regardless of the video conference app (e.g. FaceTime, FB, etc.)...
I ran into the issue with Syphon and CamTwist... and I gave up :)