Most of them use a USB Type-A to Type-A cable for the link between the switch and PC.
This cable violates the USB spec, and is the moral equivalent of an electrical cord with prongs on both ends. It should never, ever be made or used, because plugging one of the ends into the wrong receptacle can destroy both USB ports.
The correct cable to use here would be a standard Type-A to Type-B or Type-A to Micro-B cable, but for some reason only a very small minority of these switches do that.
WTF is going on here?
And if the product under consideration does brandish a trademarked USB logo but clearly doesn't satisfy some observable aspect of the spec, then it's liable to be a counterfeit and you should seriously be questioning the integrity of your source.
The reason was that the signal amplifier it came with was built into the housing of a 2-way coaxial cable splitter, presumably because said housing was inexpensive and readily available. One of the ports that would normally have a second TV attached was instead repurposed to provide power to the amplification circuits.
I shudder to think what it would have done if I'd run this directly to a TV, though.
The coaxial cable that comes from the pole already has a voltage differential on it, 96 volts IIRC, and it powers the demarcation device (which is kind of a modem/VoIP device breaking out into a phone line or two).
How would this happen?
That is why all serious inter-computer USB devices use galvanically isolated connections (e.g. using optocouplers).
It won't stop you from sending 200V into it and frying something, but it will stop short-circuits and faulty/non-spec USB devices from doing serious damage.
I guess it's a cheap safeguard that works pretty well for consumer products.
If the GNDs are different then neither device will be “providing power” in the traditional sense. Rather, current will flow from the GND of one device to the GND of the other, and GNDs are normally not protected.
You will probably also see current flow on the V+ rails, because they are referenced against the GND in each device. In that case the devices can cut the power, but that only protects the V+ rail, not your GND rail, which could still be carrying enough current to melt something.
This is usually more of a problem with audio equipment that has analogue signals with very high sensitivity. A ground loop can convert any nearby magnetic fluctuations (like say from the electricity flowing through the mains cables in the walls) into a nasty bit of noise.
I don't understand how that's cheaper than the alternative of paying a few cents more for the correct receptacles and then not even including cables, because the customer can use literally any standard off-the-shelf cable.
Also, in more recent years, Intel DCI uses an A-A USB 3.0 cable for CPU debugging on tablet PCs. It's lesser known, but Intel defines the USB host controller spec.
So not allowed in spec for public use, but not custom cables per se.
So, I'm mildly confident you're right about A-A cables being valid, at least at some point.
Those have a dongle inside the cable to actually do something useful, versus mindlessly delivering 5 volts to the other end.
I have a hub on every port, some of them powered. An advantage of using non-powered hubs is that they can easily be used for EMC isolation (e.g. to prevent buzzing noise on speakers).
It works surprisingly well, no noticeable lag even when gaming.
It works very well too.
Have been using InputDirector to combine a desktop and a laptop every day for the last 6+ months and am very happy with it.
Doesn't Synergy do the same thing, is open source and works on basically all platforms?
(I'm talking about Synergy 1, Synergy 2 is a shitshow and I don't know how it's licensed)
So the idea is you have PC A + Monitor A, and PC B + Monitor B.
You install the software on both machines (they talk over a local network), and configure PC A to be left of PC B, and PC B to be right of PC A. Attach a keyboard+mouse to one machine.
When your mouse goes 'off screen right' on PC A, it knows that PC B is over there, so it redirects keyboard/mouse input to PC B.
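For reference, that left/right arrangement maps directly onto a Synergy configuration file. A minimal sketch, assuming the machines are named pc-a and pc-b (the names are placeholders for your actual hostnames):

```
section: screens
    pc-a:
    pc-b:
end

section: links
    pc-a:
        right = pc-b
    pc-b:
        left = pc-a
end
```

The server (the machine with the physical keyboard and mouse) reads this file; each entry under `links` says which screen sits on which edge.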
I used Synergy for a while, and it was pretty great. The only issue is when you need to do control keys that the OS takes over (like CTRL-ALT-DEL on Windows), that wouldn't transfer over, and you'd be looking at Monitor B wondering why keyboard/mouse wasn't working... then look over at Monitor A and see it's at the lock screen.
You arrange the geometry in the software to make the mouse move between machines in an intuitive manner. Whichever machine has the mouse has the keyboard input as well.
So you do need a dedicated monitor per machine from my experience.
Add in noise sources from the power supplies, your cell phone, and other sources, and it becomes a challenging physics problem. Not to mention, you don't want those signals leaking out either. At these data rates, wires are both transmitting and receiving antennas.
Switches are inherently very mechanical devices, so designing one that can operate in these conditions reliably would be both complex and expensive. Much easier to have some silicon do it for you.
However, the challenges I am referring to above mostly have to do with the steady-state operation of the bus, after it has negotiated its final data rate and is transmitting at full blast. That's when you run into the transmission problems (e.g. radiation of your signals), or your setup just won't allow you to reach the full data rate due to interference.
Obviously impractical for a KVM switch, but I wonder if you could build one out of 20 of these.
Additionally: The DisplayPort spec mandates that operating systems have to throw a huge temper tantrum and garble your desktop and windows when you disconnect or turn off a display. (Yes, nVidia and Microsoft consistently deny implementing an option to turn this utterly braindead behavior off, saying that it's not a bug, but intended behavior because DisplayPort and VESA say so).
What KVMs instead contain is similar to the rescaler ASICs found in a monitor: they receive multiple streams of display data and mix/forward/scale one of them. But since they constantly receive all of the streams, switching can be instant, or at least as quick as the monitor can mode-switch (which, for some reason, takes screens at least a second).
Do you know of a DisplayPort KVM switch that actually does that? I tried one a few years ago (it was one of the first ones I saw for a reasonable price), but I returned it immediately because I got the brain-dead window-rearranging behavior you described whenever I switched. I've been reluctant to try another until I know it solves that issue.
I still use one of these HDMI switches (https://www.iogear.com/product/GCS62HU/), because its EDID support actually does make it transparent to the PCs, so I can switch instantly like I want to.
I finally ordered one a while ago after I couldn't find anything cheaper with similar functionality, but it's currently on backorder so it will take some time to arrive & see how well it works in reality.
There is an interruption if you press the KVM reset button, but that's not something you should often do.
> Most of the support on (their DisplayPort 1.2 KVM offering) have been down to bad or dodgy DisplayPort cables. Seriously, it's like sort of nuts. Like you go on Amazon, you order a DisplayPort cable, it doesn't work, and it's like, "Well, I have two DisplayPort cables that work independently, but when I plug both of them into the KVM, it doesn't work." The KVM doesn't have a repeater. So if you've got two 6-ft cables that pass through the KVM, it is like as if you have one 12-ft cable, and while one 6-ft cable might work, 1 12-ft cable will not because...(Jedi hand gestures)...repeater. So use good quality cables, you'll be fine.
Ignoring the "repeater" terminology wank, that soundbite disclosed enough about the white-label device's internal architecture for me to infer with confidence that even if EDID was properly implemented (the easiest part), the video interface is skimpy and doesn't actually buffer frames in realtime...which means that on switch, the monitor's receiving ASIC will detect a frame interruption as it starts to receive its new stream and blank for a second or two while it tries to reacquire...which is definitely not the instantaneous performance I was looking for.
I assume garbling windows isn't technically mandated by the spec, it's just what DWM does when Windows tells it to "rearrange" all windows on a desktop that has lost a bunch of displays, because Windows helpfully noticed that they were turned off.
https://www.reddit.com/r/nvidia/comments/66opvp/this_display... https://answers.microsoft.com/en-us/windows/forum/all/how-to... https://answers.microsoft.com/en-us/windows/forum/windows_7-... (AMD workaround) https://www.overclock.net/forum/74-graphics-cards-general/12... https://superuser.com/questions/630555/turning-displayport-m... https://www.nvidia.com/en-us/geforce/forums/discover/209725/... https://www.reddit.com/r/Windows10/comments/4895u9/turning_o... https://www.reddit.com/r/nvidia/comments/9qn0dk/has_anyone_f... https://www.reddit.com/r/windows/comments/788l4f/put_one_dis... and many, many more, like this one: https://what.thedailywtf.com/topic/24548/displayport-is-the-...
Until I figured out what the hell was going on, I was pulling my hair out every morning when I turned on my work monitor (40-something inch 4K display) and found all program windows in the top-left corner of the monitor, stacked on top of each other and resized to some SVGA-ish resolution.
Sounds colorful. Can you cite the requirement and revision of the standard referenced? I'd like to read the actual verbiage for myself.
Looking at the normative language of the early DP v1.1a § 3.3 Hot Plug/Unplug Detect Circuitry:
> The HPD signal is asserted by the DisplayPort Sink whenever the Sink is connected to either its main power supply or “trickle” power. HPD signal specification is shown in Table 3-2.
To be sure, HPD assertion on "trickle" power is the monitor's requirement.
Looking closer at Table 3-2 (empty Nom col removed, Comments col separated for clarity):
| Parameter | Min | Max | Units | Comments |
| ---------------------------------- | ---- | --- | ----- | -------- |
| HPD Voltage | 2.25 | 3.6 | Volt | (a) |
| Hot Plug Detection Threshold | 2.0 | | Volt | (b) |
| Hot Unplug Detection Threshold | 0.8 | | Volt | (b) |
| HPD source termination | 100 | | kΩ | (c) |
| HPD sink termination | 100 | | kΩ | (d) |
| IRQ HPD Pulse Width Driven by Sink | 0.5 | 1.0 | ms | (e) |
| IRQ HPD Pulse Detection Threshold | 2.0 | | ms | (f) |
> (a) HPD signal to be driven by the Sink Device
> (b) HPD signal to be detected by the Source Device
> (c) Source Device must pull down its HPD input with a ≥ 100kΩ resistor.
> (d) When a Sink Device is off, it must pull down its HPD output with ≥ 100kΩ resistor.
> (e) Sink generates a low going pulse within this range for IRQ (interrupt request) to the Source
> (f) When the pulse width is narrower than this threshold, the Source must read the link / sink status field of the DPCD first and take corrective action. When the pulse width is wider than this threshold, it is likely to be actual cable unplug / re-plug event. Upon detecting HPD high, the Source must read the link / sink status field, and if the link is unstable, read the link / sink capability field of the DPCD before initiating Link Training.
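To make note (f) concrete, the Source-side decision it describes boils down to a threshold test on the low-pulse width. A rough sketch; the 2 ms value is the Table 3-2 minimum, and the rest is a paraphrase of the note, not spec text:

```python
# Sketch of the Source-side HPD pulse classification from note (f):
# pulses narrower than the detection threshold are IRQs from the Sink,
# wider ones are treated as real cable unplug/re-plug events.
IRQ_THRESHOLD_MS = 2.0  # min "IRQ HPD Pulse Detection Threshold", Table 3-2

def classify_hpd_low_pulse(width_ms: float) -> str:
    if width_ms < IRQ_THRESHOLD_MS:
        return "irq"     # read link/sink status in DPCD, take corrective action
    return "unplug"      # likely an actual unplug/re-plug; may retrain the link

print(classify_hpd_low_pulse(0.7), classify_hpd_low_pulse(5.0))  # irq unplug
```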
Then just below Table 3-2:
> The voltage level of the HPD pin is monitored by the Source Device. TTL levels must be used for the
> The Sink Device may detect the presence of the Source Device by monitoring the DC voltage level of the AUX CH lines. Source detection is an optional feature of a Sink Device.
Based on this language and description of noted workarounds, color me unconvinced that DP spec ambiguity was the root cause of this issue.
So I think what might be going on behind the curtains is that 1) DP requires handling of hot plug, 2) DP forbids "trickle power" being sourced from the DP port, 3) monitors perhaps can't meet 0.5 W standby if they keep the DP interface alive, and 4) operating systems handle display hotplug disgracefully; these combine into one heck of a bad user experience.
Things actually start to align now, because this issue started to happen en-masse around 2013-2014. And that's precisely when the more stringent standby power requirements came into force. For example, the Dell U2713H doesn't do this, while the later U2715H does. Both of these are DP 1.2 monitors, so any new mandates in the non-publicly available DP 1.3/1.4 specs shouldn't matter.
The answer seems to me unambiguous when considering the normative language in § 184.108.40.206:
> The standard external cable connector assembly must not have a wire on pin 20, DP_PWR.
...in conjunction with the normative definition of trickle power per § 1.4 (my emphasis):
> Power for Sink Device that is sufficient to let the Source Device read EDID via the AUX CH, but insufficient to enable Main Link and other sink functions. For sink to drive the HPD signal high, at least the trickle power must be present. The amount of power needed for the trickle power is sink implementation specific.
(To be sure, Figure 4-23 per DP v1.1a is single-slot PCI "Panel Cut Out Reference Dimensions", so unsure what was meant to be conveyed there).
> Note that EC demands that electronics put in stand-by / "off-mode" may not consume more than 0.5 W of wall power.
...which is ~150 mA at 3.3 V nominal. Even generously derating 30% to account for ballpark efficiency losses, we're still talking > 100 mA budget to support an explicitly constrained requirement. I mean asserting HPD in the worst case is apparently a 36 uA affair.
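A quick back-of-envelope check of those numbers (0.5 W at a nominal 3.3 V, with the generous 30% derating mentioned above; the efficiency figure is my assumption, not from the spec):

```python
# Back-of-envelope standby power budget, per the discussion above.
STANDBY_LIMIT_W = 0.5   # EC "off-mode" wall-power limit
RAIL_V = 3.3            # nominal logic rail
DERATING = 0.70         # generous 30% knocked off for conversion losses

raw_ma = STANDBY_LIMIT_W / RAIL_V * 1000
derated_ma = raw_ma * DERATING
print(round(raw_ma), round(derated_ma))  # 152 106
```

Either way the budget is well over 100 mA, dwarfing the ~36 µA needed to assert HPD.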
Anecdotally, I haven't encountered the issue despite a bias towards DP (on advice from a friend in compliance test over a random beer discussion on radiated emissions performance); all my setups discriminate to either dual U2415 or dual U2412M as a matter of personal preference.
Just thought I'd remark in passing, because VESA specs never really struck me as technically egregious in their ambiguity.
> ...which is ~150 mA at 3.3 V nominal.
Small power supplies become very inefficient because their fixed losses are large relative to their output power. The 0.5 W spec is wall power, so if you fully use the 0.5 W, you might get 0.1-0.2 W worth of output.
> Anecdotally, I haven't encountered the issue despite a bias towards DP (on advice from a friend in compliance test over a random beer discussion on radiated emissions performance); all my setups discriminate to either dual U2415 or dual U2412M as a matter of personal preference.
I'm using a Dell U2415 connected by DisplayPort to Intel graphics right now, and if I turn it off (using the soft buttons) my desktop and windows are garbled because it "disappears" from the system despite not being disconnected.
Compare this to the DVI days where I could literally switch the outlet strip powering my screens off and nothing would get garbled, and the system would never hang or crash. I'm pretty sure it's impossible for DisplayPort to support that, since the display has to be powered to avoid being un-detected by the system. So I'm not asking for that to work. Obviously a good standard would support this usage, but DP never will, so okay, fair enough. I'm just asking for the insanity of "disconnecting" monitors sent to stand-by / "soft button off" and subsequent garbling of my workspace to stop. And while they're at it, maybe fix crashes and hangs on monitor re/disconnects. Y'know, it's not really "plug and play" if I plug it in and have to hit the reset button to get my machine back.
Real world analogy: A standard for desks that dump everything on the floor as soon as you turn your back to them would never fly. But somehow the equivalent is acceptable for computers.
PS: A Japanese entrepreneur sells DPHPDMA devices, which are little dongles plugged directly into a DisplayPort output and contain a microcontroller powered by DP_PWR, which intercepts the EDID communication etc. and essentially performs EDID emulation in hardware (since GPU manufacturers disable software EDID emulation for consumer cards). They're like 50 bucks plus international shipping each.
> There is no mandatory time constraint on the Source Device’s response to a plug / re-plug event, but a Source Device vendor may want to impose a voluntary constraint similar to that for HPD IRQ events (for example, 100 ms) to ensure a good user experience via prompt discovery and configuration of newly attached devices.
There's a lot more going on behind the scenes than mere "scanning for a valid data frame and flinging bits onto the glass", and much of it has nothing to do with approaching technical limits, e.g. the dynamic of multiple product vendors finding incentive to reach consensus in standards development despite naturally seeking to cut each other's throats in the open market.
> PS: A Japanese entrepreneur sells DPHPDMA devices, which are little dongles plugged directly into a DisplayPort output and contain a microcontroller powered by DP_PWR, which intercepts the EDID communication etc. and essentially performs EDID emulation in hardware (since GPU manufacturers disable software EDID emulation for consumer cards). They're like 50 bucks plus international shipping each.
There are similar considerations regarding DDC passthrough on monitors for resolution negotiation and so forth.
A bad KVM presents to the computer as though the user is disconnecting all the input devices and the monitors every time you switch. These are indeed much simpler and cheaper devices but they cause all kinds of quality-of-life issues when you use them regularly.
The price was around $50 IIRC.
Most people aren't plugging/unplugging monitors often.
They still handle it horribly though. My favorite is how macOS used to assign the logical displays randomly, so I'd have to fix the arrangement in Displays preferences several times every day.
Of course it's easy to say that these are silly problems that should have pure software fixes, but Microsoft and Apple don't exactly accept patches. Let's not pretend that software is optimal.
That's not possible any more. Microsoft, VESA, nVidia say no. This mustn't be possible. (Actually it used to be possible to add a registry key in Windows 7 that disabled all of this bullshit, but Microsoft wisely decided in Windows 10 to remove that. It must not be done.)
Turning compliant DisplayPort devices off while using a compliant operating system must always garble all windows and shuffle desktop icons around. The spec says so. So it must be. The will of the ancients.
(nVidia will allow you to turn this off, if you are using a Quadro card - then you can use EDID emulation even on display-ports, which causes the OS to think the display is always connected - and more importantly - turned on. One of these areas where Linux is ahead -- although Linux is back to garbling windows like the rest in case you do actually decide, as an individual with agency, not a compliant operating system, to change the arrangement of displays.)
I'm also not 100% sure if it'll freak HDCP out.
Too often is something like 20-30 times.
I use magnetic 20-pin USB C adapters to connect laptop thunderbolt and usb c to a DisplayPort headset and misc. Less worry about yanks; easy dis/re-connect; simplified cable and headset handling. No issues so far. I like them. Though I've not been pushing the DP bandwidth hard. And because they disconnect much more easily than a plug, I can't for instance move the laptop around and expect the cabling to simply follow.
But to use them for frequent hot swap, compared with a real switch, I'd be worried about duty cycle lifetime. And the dis/connect isn't hidden from the environment, which might be inconvenient.
They were very expensive and very unreliable. If something was weird, click it back and forth a few times and maybe it fixes itself.
Nowadays the signaling is too fast to go through a physical switch and most devices would be unhappy to be plugged/unplugged so rapidly.
Probably could build a mechanical contraption to do that on a lever push. So that’s what I’m curious about too: why yet no mechanical switch if I’m literally doing it manually already?
It also does 4K at 60 Hz. You can switch via the keyboard (uses scroll lock as a shortcut). It has worked flawlessly for me and is only £86.
The only real downside is that it is yet more wires, and it can only switch one video signal. If I had two monitors I'd probably go with this guy's solution.
By the way, I have that same USB switch. I have tried a number of them, and that is the fastest I've found. The only major flaw is the status lights -- they are too hard to see from far away.
To fix that issue, I drilled a hole in the top of the case, over the lights, about the same size as the adjacent power button. I put a small piece of foam inside to separate the two lights, and covered it with tape. It works perfectly now and it's so much easier to see the status from a typical usage angle.
The whole modification took maybe 15 minutes, and I didn't even bother disassembling it first.
Now, switching the monitor however…
This involves clicking the six buttons on the rear of the monitor in an arcane sequence (your fingers sort of curl around the side, the buttons have labels on the front of the monitor on the right-side bezel, and yes, that works as well as you can imagine). Which then switches the monitor:
* Button 1: activate menu
* 1 again: select input
* 2: move down list of three inputs
* 3: select mini display port
A somewhat similar sequence exists for going back to its HDMI input.
But hold on! It gets better if you turn off the computer you have live at the moment. The monitor then goes blank, and until it throws its 'no signal found' message, it won't respond to anything. If you are lucky, you can switch inputs before the current computer is shut down completely and the monitor blanks itself.
I do feel like a caveman.
Monitor is ASUS PG348Q.
Feature: 60 (Input Source)
sudo ddcutil setvcp 60 0x11 # To HDMI
sudo ddcutil setvcp 60 0x10 # To DisplayPort 2
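Those two commands are easy to wrap in a toggle script you can bind to a hotkey. A sketch in Python using the same VCP values as above; it assumes ddcutil's `--terse` output for a non-continuous feature, which looks like `VCP 60 SNC x11`:

```python
# Hypothetical hotkey helper: toggles the monitor input by shelling out
# to ddcutil with the values shown above (0x11 = HDMI, 0x10 =
# DisplayPort on this particular monitor; yours may differ).
import subprocess

INPUT_SOURCE_VCP = "60"
HDMI, DP = "x11", "x10"

def parse_terse_vcp(line: str) -> str:
    # --terse output for a non-continuous feature: "VCP 60 SNC x11"
    return line.split()[3]

def toggle_input() -> None:
    out = subprocess.run(
        ["ddcutil", "getvcp", INPUT_SOURCE_VCP, "--terse"],
        capture_output=True, text=True, check=True,
    ).stdout
    target = DP if parse_terse_vcp(out) == HDMI else HDMI
    subprocess.run(
        ["ddcutil", "setvcp", INPUT_SOURCE_VCP, "0" + target], check=True
    )
```

Run `ddcutil capabilities` first to confirm which values your monitor actually accepts for feature 60.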
I wonder if there's any hope of hacking a solution to this, as it seems unlikely the LG would actually fix it.
Top comment - https://news.ycombinator.com/item?id=24344045
Also, if your monitor stores the current input in EEPROM, changing it via the OSD will also write to EEPROM, so it's not like you're saving the EEPROM by doing it manually (buying a real KVM is another story, obviously).
Generally I’ve only been unplugging my work laptop from my desk about three times a week, but for work-life hygiene I should probably unplug it at 5pm five days a week. Since it’s mostly split across 2 laptops and occasionally a third, I know the cable will go but haven’t thought too hard about the laptops so far.
TB monitors mostly means I’m wearing out one connector instead of three (can you wear out MagSafe?) so it’s an improvement.
You seem to be asking about a solution where you don’t have to move any cables? I’m not sure I have enough TB3 ports anywhere to do that. Someone with a complete TB3 system might be able to put one computer at each end of a chain, but I think you’re right that they won’t play nice, and you could never go to three. Docking station or KVM switch are probably it.
I wouldn't want to wear out the slot in the expensive monitor, but rather have a short extension permanently attached just for manual cable plugging purposes.
The issue is 4k@60.
Not full emulation and remaining "connected" - just the ability to press a button and change what the device is plugged into.
Edit: the only ones I could find with usb c were full KVMs and far more expensive. I would guess a female usb c to male usb 3 adapter might be a cheap workaround.
Offtopic: I wanted to buy this USB switch, but it's hard to get on the current market in my country (or available for an absurd price). So I started trying to build my own... which was fun, but then I understood I need some USB/MAX chips that are also hard to find, and it started to cause me frustration instead of fun, because I'm not really that good with low-level electronics; university was a long time ago. And I was still without my switch/hub. So in the end I just chose a 3-pole double-throw toggle switch and a couple of USB connectors and wired them up. I only need to switch my keyboard, because my mouse is multi-device. If I need more, I'll just wire a hub to it. Work is in progress, but it can't get simpler than that.
If you're wondering what got complicated: as far as I read and understood, actually breaking a USB connection with solid-state logic is kind of a pain in the neck if you want to stay within the USB parameters, at least for an inexperienced designer like me. And you need to build the power supply for the circuit; I didn't want to introduce an external power supply, I wanted to piggyback off the USB source, something this Ugreen product does not do. So I left this project for another time, when I've freshened up my rusty knowledge. A mechanical switch just switches the lines and that's it.
What the board looks like:
It has both a USB-C input and a USB3 upstream port that can be combined with either DisplayPort or HDMI. It also has signal auto-detection.
When I'm starting up my PC, it will use the USB connection from my PC via USB3 upstream. When I power up my work laptop, it will take USB from its USB-C connection.
That even eliminates the need to push a button to switch. I'm really happy with this setup.
Added bonus, it's quite a good monitor!
It's a small taste of the stuff that software-defined networks have added to networking.
I just checked my 2019 Acer XB27HU, which reports brightness control but it doesn't seem to work - it then got stuck in Factory Test mode. It doesn't support input selection and doesn't do any kind of hotplug detection. I've never seen a monitor review mention these kind of features, either.
DDC works fine from my desktop over DisplayPort from an nvidia GPU, but I can't seem to get it working from my laptop over HDMI from an Intel GPU.
It worked well enough that I'd forget which PC the KVMs were actually connected to.
But I was only programming, not gaming, so maybe the 60 Hz would be a problem. No harm in trying; it was free, and there were a couple of software solutions.
At 60 fps that is one frame every 16.7 ms.
Sub-ms latency is easy on a local network. Heck I can get a few states away and back in 16ms.
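For reference, the arithmetic behind those two numbers (the LAN round-trip figure is a typical value, assumed here, not measured):

```python
# Frame budget at 60 Hz vs. a typical LAN round trip, per the comments above.
frame_ms = 1000 / 60      # one refresh period in milliseconds
lan_rtt_ms = 0.5          # sub-ms LAN round trip (assumed typical value)
print(round(frame_ms, 1), lan_rtt_ms < frame_ms)  # 16.7 True
```

So a software KVM's input latency is a small fraction of a single frame on a local network.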
I've resorted to having both machines plugged into my monitors and switching inputs using the monitor front panel. Not ideal. For the keyboard/mouse I currently just have two of each on my desk. Also not ideal, although I must admit it provides some semblance of work/life separation. I'm going to look into one of the software solutions for switching the keyboard/mouse.
I use a similar setup where I literally have a single USB cable that gives me access to everything: monitor, keyboard, mouse, lighting, camera, microphone.
I just need to plug the cable into the computer I want to use. However, I do need to change the input on the monitor, as one of them is connected via HDMI.
Might just be USB-A, I haven't looked at the other specs
Currently I have one computer connected to the AV receiver via TOSLINK (optical audio). The other computer is connected to the receiver via coaxial S/PDIF. When I switch computers I have to physically go to the receiver and switch between the two input channels. This presents two problems:
P1: Manual action required
P2: Only one computer can play sound at any given time
Ideally, I would want a S/PDIF mixer that can decode the two S/PDIF streams, add each constituent channel together, and then output that as a single S/PDIF stream to the receiver. But I can't seem to find this anywhere.
Most mixers I've found are:
1. Analog mixers
2. Pro audio mixing consoles that have a dozen input channels and 50 dials on it, with a price tag to match
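The S/PDIF decode/re-encode parts are the hard bit; the mixing step itself is just per-sample addition with clipping. A sketch, assuming both streams are already decoded to 16-bit PCM at the same sample rate (clock recovery and resampling omitted):

```python
# Core mixing step an S/PDIF mixer would perform, assuming both inputs
# are already decoded to 16-bit PCM samples at a common sample rate.
def mix_samples(a: list[int], b: list[int]) -> list[int]:
    """Sum two PCM streams sample-by-sample, clamping to the 16-bit range."""
    return [max(-32768, min(32767, x + y)) for x, y in zip(a, b)]

mixed = mix_samples([1000, -32000, 20000], [500, -2000, 20000])
print(mixed)  # [1500, -32768, 32767]
```

The clamping is why real mixers attenuate inputs first: two full-scale signals summed naively will clip hard.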
I worked for Symless for 2 years and I can tell you first hand that the owner had little interest in fixing or improving the core functionality, the code for which has remained largely unchanged for 12-15 years.
Most of the input bugs in the software are a decade old, and unless a community patch comes along that can be merged, the owner only has interest in keeping it working on new OS versions, and tarting up the UI/UX to support Symless as a lifestyle business.
In the 2 years since I left I've seen exactly 3 bugfix releases, one of which broke TLS, and none of which seemed to contain any significant original work.
Supposedly Synergy 3 is in the works, after Synergy 2 (which I worked on) was scrapped, but this will almost certainly only be a proprietary Electron UI for the FOSS CLI instead of a Qt/QML UI for the FOSS CLI (as v2 was).
Major versions of Input Director (v2.0) and ShareMouse (v5.0) have been announced in the last few months, and it looks like significant work has taken place on both projects. Both of these products need better marketing.
Unfortunately none of the alternatives to Barrier/Synergy seem to want to support Linux, which remains its moat. The days for that are numbered due to Wayland though (Synergy uses the X11 XTest extension to inject input).
The issue I repeatedly ran into with these solutions is that they want to have a "hot" edge and both systems up on independent screens at the same time. I really only want a hot key and one system on my single monitor at a time. Which I could configure with Synergy 1 (not 2).
I also updated my post with the recommendation to use Barrier.
- Your solution requires Synergy. In my experience, Synergy has been buggy--or at least extremely finicky--every time I've tried it since release circa 2001. When it comes to input devices, I want things to be basically flawless. So I like this very simple fix better.
- The posted solution is cross-platform between Mac and Windows, which is a common use case. Yours relies on Autohotkey, which is awesome on Windows but, last I checked, terrible on Mac OS.
I've often dreamed about doing this with a keyboard shortcut ala switching layouts in a tiling WM (or 'real' kvm switches from the CRT + ps2 era).
Is there a good usb hub that supports switching outputs via software?
Not particularly cheap but at least no longer back-ordered. I've been pretty happy with it switching 2x4K monitors, keyboard and mouse between a Mac and a PC laptop. Both corporate laptops where custom software is more difficult.
I bought HDMI/DisplayPort switches which are capable of 4k60 and are very cheap. Only these combined KVMs have outrageous prices, but if you're fine with using multiple switches it can be achieved quite cheaply.
Seriously, I'd love to see this. I have had mixed results with udev and no idea where to start to send an input switch command to my monitor.
The display is very good at automatically switching KVM based on the selected input, but the selection still has to be done with the little buttons underneath the screen
1. I have a 4K 144 Hz monitor. Not many devices or even display protocols can handle that (I won't be able to fully utilize it until I upgrade my desktop's graphics card).
2. The number of cables I have for my current KVM switch (for my second, 1080p monitor) is getting unwieldy.
Are there any keyboard switches that emulate a keyboard correctly instead of just unplugging and replugging the USB connection?
I found that X-Keys make a range of keypads that are specifically designed for KVMs. I have it setup for KVM switching and the remaining keys have other macros. I'm glad I got it, but it's expensive compared to DIY kits.
StarTech's KVMs have a rather robust USB stack which handles this situation very well.
But this is friggin awesome.
Weird flex, but okay.
Also, I can tell you it's harder than you'd think to find good hardware in this area, so I don't think the software effort is wasted.
- it only supports one 4K monitor, not two,
- it doesn't support USB-C.
And my heart sank.
Zero chance of getting that past corporate laptops that are way too locked down to actually code anything beyond VBA.