edit: forgot to mention that this works over USB; you don't have to pay a crazy markup for a capture card.
edit2: (because I'm so excited about getting this to work) here is a list of supported cameras. Sadly, I was not able to get the GoPro Hero 6 to work.
I followed the instructions in your post, and (although it didn't work right away) they gave me the will to make it work ;)
Little advice: I fixed my setup by finding the correct v4l2 device, because video0 was already assigned. If you run:
it will tell you where the v4l2 device is plugged in on your machine, so you can enter the correct command (that was the only missing piece of my puzzle). If you already have a webcam on your computer, /dev/video0 will already be assigned, and the gphoto | ffmpeg pipeline gives overly cryptic messages (it complains about the formats not being correct, when it should complain about the target not being a v4l2 device).
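The command itself was lost from the comment; my best guess (an assumption on my part) is that it refers to `v4l2-ctl` from the v4l-utils package, which lists every v4l2 device and its /dev node:

```shell
# Assumes v4l-utils is installed; prints each device name and its /dev/videoN node
v4l2-ctl --list-devices

# Quick alternative without v4l-utils: just list the video device nodes
ls /dev/video*
```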
I mostly shoot with my cell phone these days, not because I mind spending money on cameras, but because it's a better device for most photography. It integrates with the world. Cameras integrate with their manufacturer's closed ecosystems.
brew install gphoto2
brew install ffmpeg --with-ffplay
# Abilities for camera : Sony Alpha-A6300 (Control)
gphoto2 --stdout --capture-movie | ffmpeg -i - -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f matroska - | ffplay -
$ brew install ffmpeg --with-ffplay
Usage: brew install [options] formula
# Install flags here, nothing about --with.
Error: invalid option: --with-ffplay
gphoto2 --stdout --capture-movie | ffmpeg -i - -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f matroska - | gst-launch-1.0 fdsrc fd=0 ! decodebin ! videoconvert ! videoscale ! autovideosink
it should be:
gphoto2 --stdout --capture-movie | ffmpeg -i - -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f matroska - | gst-launch-1.0 fdsrc fd=0 ! decodebin ! videoconvert ! videoscale ! osxvideosink
First of all, it works! And it's pretty cool to not need a capture card. But, for most cameras, you only receive a stream at the resolution of the on-camera screen.
In other words, the video stream gphoto2 receives is intended for a camera remote preview screen. Check your camera's resolution before investing in this as a solution -- my very expensive 4k @ 60fps-capable mirrorless camera only produces a pretty poor 640x480 @ 50fps stream using gphoto2.
Additional video recording features like flicker reduction or IS seem to be lacking through this method as well.
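A general suggestion (not from the thread itself): before investing in this as a solution, you can ask gphoto2 what it knows about your specific camera over USB:

```shell
# What libgphoto2 believes this camera model can do
# (look for the capture-related entries)
gphoto2 --abilities

# The connected camera's own capability/status summary
gphoto2 --summary
```

Neither output is guaranteed to state the liveview resolution explicitly, but they will at least confirm whether movie capture is supported at all.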
In the meantime, I'm patiently awaiting the delivery of my 4k capture card :)
I've been looking for something that does this for a while now, WFH on the Mac Mini without a camera or mic is just dreadful :(
- https://obs.camera/ (which seems to be the most promising)
- https://www.newtek.com/software/ndi-camera/ (mentioned in a forum)
- and EpocCam https://www.kinoni.com/ (from the answer below)
I'll leave these here for people who stumble upon this answer, but if you were talking about another alternative I'd love to know about it. For now I'll give OBS.Camera a go and see if I like it. Thanks!
Pretty sure it “just works”.
Will give this a shot, luckily I'm still in the return window.
E: Works great! Make sure to enable "PC Control" in settings for other Sony cameras. I had it set to USB Mass Storage (which is maybe the default?)
Additionally, many camera features (flicker-reduction, electronic shutter options) are unavailable through this method.
This is really equivalent to a camera remote preview video, not intended to be used for actual video capture.
It works perfectly, and the virtual camera can be used with Jitsi, BigBlueButton and the like :).
And yeah, 100% agree on video quality: it's so much better than what you get from that potato sensor on the MBP.
Plan to give it a go later tonight but wondering if anyone has any success stories.
It’s unclear if that’s the list of supported cameras or if it’s a list of cameras and only those with entries in the next two columns are supported.
Either way I can’t wait to get home and mess with it.
- Send out a composite overlay (screen capture + webcam + lower thirds) on Teams/Skype/etc.
- Send out screen capture from another machine (usually OBS to OBS via NDI and then out via this plugin)
OBS is a lot of fun, but, alas, extremely demanding on system resources in some configurations, enough that I've started considering getting a new machine solely for video conferencing.
This happened in the middle of all the Zoom-bombing, and I've seen an allegation that Zoom did this intentionally to nerf the OBS -> Zoom pathway, since it was found to have been used by many Zoom bombers. I have no idea if this is true, though, so don't get out the pitchforks over it.
I couldn't find an NDI product for macOS from NewTek.
The OBS NDI plugin is not part of OBS proper, but available separately:
Amazing. It took a good 30 minutes to figure it all out, but I now have the output of OBS (in my case, just the preview itself without needing to stream/record) as a video/webcam input source for Zoom, Microsoft Teams, and within Firefox. Note: doesn't work with Discord or QuickTime's File > New Movie Recording.
And all this with me being on macOS. Not Windows, but macOS. Incredible.
Steps (should work for macOS and Windows, not sure about Linux):
1. Install OBS. Run it, and set up a basic scene for testing (eg. webcam and a text label).
2. Download/install NDI Tools for your OS from https://ndi.tv/tools/#download-tools (note: system restart required). You only need the "NDI Virtual Input" app; on macOS each app had its own .pkg file bundled in the single .dmg archive; on Windows I assume it's an install wizard with checkboxes for each component. Again, only need "NDI Virtual Input" app/component.
3. Run the NDI Virtual Input application installed in step 2. It should live in your systray (without doing anything useful yet).
4. Download/install the obs-ndi plugin for your OS from https://github.com/Palakis/obs-ndi/releases - right now for Windows or macOS it's version 4.9.0 (expand the "Assets" link). There's a 4.9.1 update specifically for Ubuntu/Debian, but I'm not sure how those OSes are supported when there is no NDI Tools for Linux in step 2.
5. Run OBS. If you're lazy and didn't read the GitHub release notes in step 4, starting OBS should popup with a direct link to the NDI runtime you also need to download/install; then restart OBS.
6. In OBS, go to Tools > NDI Output Settings. If you want to "clone" the OBS output to the NDI virtual device only when you start streaming/recording in OBS (ie. to disk or to a streaming platform like Twitch), check "Main Output". Otherwise check "Preview Output", in which case your OBS preview will be output to the virtual video device at all times without having to start OBS streaming/recording.
7. In your operating system's systray (ie. top-right on macOS, bottom-right on Windows), you should have an NDI icon living there as started in step 3. With OBS running and configured according to step 6, you should be able to click the systray icon and select that output source as the input source for the NDI virtual device.
8. Open Zoom, Teams, or hopefully other apps which will work. Wherever you configure which input source to use for video/camera within that app's settings, there should be an "NDI Video" source. Select that… and BAM – your OBS canvas is now your input source!!
That took so long to type out, I hope someone manages to make use of it. :)
a) Make sure Chrome has access to cameras (System Preferences > Security & Privacy > Camera > Google Chrome checked).
b) In step 6 from my comment, make sure to try "Preview Output" in OBS's Tools > NDI Output Settings, as "Main Output" requires you to start OBS streaming/recording (which you probably don't want to do).
c) After setting up that OBS setting, clicking the NDI systray icon should show a dropdown list with a single item. You actually need to click that item in the list (it will checkmark it). If you don't do this, you'll get a black screen when trying to pull from NDI.
d) I used https://webcamtests.com/ to test – maybe try that site.
Otherwise I'm sorry, I can't imagine what the problem is.
He's talking about the poorly named NDI Scan Converter.
The Windows version doing the same thing feels light as a feather when running (3% CPU, <10% GPU usage).
1) Potentially an entire framebuffer memory copy could be avoided if we could get CMBlockBufferCreateWithMemoryBlock to work. See the CMSampleBufferCreateFromDataNoCopy method (currently unused -- linked below) in my code. It mostly worked but the virtual camera video wouldn't show up at full resolution in OBS, which is how I typically test while developing. Not sure why it wasn't working; possible it's an obscure OBS bug.
2) It might also be possible to get the virtual camera to advertise one of the pixel formats that OBS supports natively which would avoid the pixel format conversion in the CPU. I _bet_ this is where the majority of the performance hit from my plugin happens. I'm not sure if this is possible, however. Maybe OBS doesn't natively support any formats you can use for virtual cameras.
3) If #2 isn't possible, maybe the pixel format transformation could happen on the GPU? I don't know much about GPU programming but maybe this would help.
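As a rough illustration of why the CPU-side pixel format conversion in point 2 is costly: every byte of every frame has to be read and reshuffled. This toy NumPy sketch (my own illustration, not the plugin's actual code) de-interleaves a packed UYVY frame into planes:

```python
import numpy as np

# Toy sketch: de-interleaving a packed UYVY frame into Y/U/V planes on the
# CPU. Every byte of every frame is touched and copied per frame, which is
# the kind of per-frame cost that advertising a natively supported pixel
# format would avoid.
def uyvy_to_planes(frame):
    """frame: H x (W*2) uint8 array laid out as U0 Y0 V0 Y1, U2 Y2 V2 Y3, ..."""
    y = frame[:, 1::2].copy()   # luma: every odd byte, one sample per pixel
    u = frame[:, 0::4].copy()   # chroma U: shared by each horizontal pixel pair
    v = frame[:, 2::4].copy()   # chroma V: shared by each horizontal pixel pair
    return y, u, v

# 2-row, 4-pixel-wide frame with luma 10,20,30,40 and neutral chroma (128)
row = np.array([128, 10, 128, 20, 128, 30, 128, 40], dtype=np.uint8)
frame = np.stack([row, row])
y, u, v = uyvy_to_planes(frame)
print(y.tolist())   # [[10, 20, 30, 40], [10, 20, 30, 40]]
```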
I'll definitely be using this!
And I do use this plugin.
The side effect seems to be that a bunch of the code that prevents re-rendering the same frame when nothing has changed gets bypassed, and I'd bet that kills performance.
I'd still point back to USB-C's complexities, though. Apple isn't the only major company to have gotten USB-C wrong; look at the Nintendo Switch.
I don't know if it is correlated to your problem though.
I use OBS a lot with a Mid-2012 MacBook Pro, but I stay at 720p.
1. Install headers for your Linux kernel:
- sudo pacman -S linux56-headers
- git clone https://aur.archlinux.org/v4l2loopback-dkms.git
- cd v4l2loopback-dkms
- makepkg -scCi
- sudo modprobe v4l2loopback devices=1 video_nr=10 card_label="OBS Cam" exclusive_caps=1
- sudo modprobe snd-aloop index=10 id="OBS Mic"
- pacmd 'update-source-proplist alsa_input.platform-snd_aloop.0.analog-stereo device.description="OBS Mic"'
ffmpeg -an -probesize 32 -analyzeduration 0 -listen 1 -i rtmp://127.0.0.1:1935/live/test -f v4l2 -vcodec rawvideo /dev/video10
- File > Settings > Stream, set Service to "Custom..." and "Server" to `rtmp://127.0.0.1:1935/live/test`
- File > Settings > Output, set Buffer Size to 0, CPU Usage Preset to "ultrafast" and Tune to "zerolatency".
9. Select your virtual camera and audio devices in Google Meet / Zoom / etc.
I get virtually no latency with this setup but I'm running an AMD Ryzen 7 2700X with 32GB of RAM. As always, YMMV.
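Once the modules from the steps above are loaded, a couple of quick sanity checks (assuming v4l-utils is installed; this is my addition, not part of the original steps) confirm the virtual devices exist before pointing ffmpeg and Meet at them:

```shell
# The loopback video device created by v4l2loopback (video_nr=10)
v4l2-ctl -d /dev/video10 --info

# The ALSA loopback card created by snd-aloop should appear here
cat /proc/asound/cards
```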
strings /Applications/zoom.us.app/Contents/Frameworks/nydus.framework/Versions/A/nydus | grep "Developer ID Application"
A number of folks have reached out to Zoom support to request that my plugin be added. See the latest in https://github.com/johnboiles/obs-mac-virtualcam/issues/4 for details.
I wouldn't fault Zoom for being restrictive about what they include in their allow list, since every addition adds risk, but it looks like they've already included closed-source applications from individuals as of 5.0.5. At least my code is open source and auditable!
I see "OBS Virtual Camera" in the source list of Video Capture Device, so I know it's running.
I do see Snap Cam (from SnapChat) listed in Zoom, so it doesn't seem to be a global virtual webcam block.
I've used it with Zoom / Google Meet / Discord and it's never failed me.
There are also other apps out there, besides DroidCam and Iriun, that support different phone-computer connections.
It's available for Mac and Windows.
So you can use it directly as a normal webcam, or pull it into OBS and manipulate it with filters, text, etc., then use OP's tool to output the OBS-processed version of your EOS.
Ironically Apple came up with FireWire a long time ago to bring (mostly prerecorded) video faster into Macs and now they lag far behind in every aspect related to video. This includes their webcams, which are terrible. Now that's a different story on iOS devices...
Thanks for the reminder; I'll make a donation to this person. Hope this becomes the actual implementation and he gets the $10k bounty from @tobi.
@seanchas116 made a Swift port of my minimal example
Overall it _was_ very difficult! Apple's documentation and sample code for CoreMediaIO DAL (virtual camera) plugins are terrible. I just brute forced it for hours trying all sorts of different combinations of things before I got something to work.
... and Apple's examples are in C++, but I ended up doing it in straight C out of frustration since their (C++) examples were so convoluted
"Changes to existing features
Re-enable virtual camera support
Support for virtual cameras will be re-enabled for users on client version 5.0.4. "
- combine and arrange multiple portions of the screen as you like
- filter things
- switch between different layouts whilst you're in a call
Essentially, it lets you run a live production over Zoom, and it is a blast. I've run two shows myself and helped with tech on another two with this setup, and they've all been fun.
I've found that the preview window tends to be glitchy. I don't know what causes it, but I've had OBS crash on me multiple times when using that interface.
Also, the virtual cam lets you output at a given resolution, but if you want to do the same with the preview window, you need to scale it up, which just eats more desktop space and may also eat more resources.
Finally, it is just a more direct connection. I would rather get a feed directly out of a program than go the window-capture route, as that just adds an additional layer of possible bugs, jitter, and failure.
Alt AULab link, without the requirement of an Apple ID to download: https://www.apple.com/ca/itunes/mastered-for-itunes/
It's quality software, but it's a bit unintuitive to set up. First you create a virtual audio source in Loopback; then, with Audio Hijack, you can route the adjusted audio to this source (you can't route to the normal outputs). You also need to have Hijack switched to "record" for the effects to work.
But after that it works pretty well, and you can add pretty cool things: you can record both the raw input and the processed input, and route to your destination at the same time. Or you can mix other apps, like music, into the recording/routing.
Side note: I've also tried BlackHole and unfortunately didn't have much luck with it. It somewhat worked as a Loopback alternative, but I think Loopback and Hijack share the same audio drivers/code, so the integration seems smoother.
Here's a crude diagram that shows how this might work:
| Loopback: virtual_keyboard |
| +-------------------+ +-----------------------+ +--------------------------+ |
| |MIDI / Logic piano | |output channels: L & R | | monitors: headphones | |
| | (Pass+Thru) +----->+ +--->+ | |
| +-------------------+ +-----------------------+ +--------------------------+ |
| Zoom |
| +---------------------------+ |
| | input: virtual_keyboard | |
| | | |
| +---------------------------+ |
I don't know of free, good alternatives.
Paid: Loopback, Audio Hijack, Sound Siphon
Then it'd be cross platform and just a matter of running an optimized VM that can run on a limited amount of resources.
So OBS is better somehow. How? Why is OBS better than built-in OS recorders?
These various inputs can be arranged into "scenes" for easy management and switching among them.
If all you need to do is record the raw video from a video device, you don't need OBS.
This. And you can have multiple layouts (scenes) and quickly switch between them. It's a video switcher on steroids.
OBS is feature rich, the native video recorder is no comparison.
What would I do if I want the enhanced stream to be piped to the screen share instead of the webcam? Most video chat solutions display the webcam and the shared screen differently.
OBS has full desktop as well as window capturing capabilities, so if you only want to show the contents of a single window, you would add a "window capture" item to your scene: https://github.com/obsproject/obs-studio/wiki/Sources-Guide#...
...then select "VirtualCam" from the Tools menu, and click "Start" to start sending images to the selected virtual camera. These virtual devices appear in eg. Zoom, so if you select it and enable video within the app, others will see 5 faces and Visual Studio Code having a conversation.
(Of course you can add more sources to the scene, aligning them appropriately, so you can have your face in the corner alongside the captured window.)
It also has a lot of advanced features beyond simple recording of a video source or your screen.
ALSO the name is literally "Open Broadcaster Software"
Maybe take two seconds to investigate what you're talking about before making comments like this. Sheesh.
OBS just isn’t for macOS users. If you record video with it it’ll be corrupted. The performance is bad. It’s for people who stream to Twitch on a Windows computer.
The real problem is that the macOS security model broke virtual cameras in the latest version of Zoom.
It's true that there are some performance concerns, and hardware encoding isn't available for streaming purposes on macOS due to Apple's Video Toolbox API not exposing the appropriate encoder options.
However, I doubt that OBS would be responsible for corrupted video. It uses the industry-standard x264 encoder — if there's a problem with the video, it derives from (A) your settings, and (B) x264. I'm more inclined to believe A than B.
I'm more than happy to use Wirecast if you'll pay for a licence. I and most other people, mostly amateurs, don't have a spare US$599 lying around.
You state this like a fact. I have recorded video, it was not corrupted. OBS works fine for my needs.
I never had any issue playing an OBS recording on macOS or uploading it to YouTube.