Have any hardware companies ever written good accompanying software? From all the custom-ui graphics card config nonsense to utilities that phone home of their own accord, to things like this which are laughably awful.
I feel glad I left for the mild shores of Linux in the early 00s.
I didn't use iTunes for Windows, but I did use the Mac version a couple of times. I can imagine how terrible the Windows one must be if it's worse than the garbage they ship for Mac. Nothing I needed from it was ever easy or worked the way I wanted...
They need to refactor the monolithic app that does everything into smaller, more focused apps. My media player shouldn't also do document management for my phone.
Note that the grandparent said: "Have any hardware companies ever written good accompanying software?"
I think this should be read as an existential quantification. Sure, not all Apple software is great, but for a hardware company Apple has made a lot of great software (or: for a software company, Apple has made a lot of awesome hardware).
When I first started using Linux, I was annoyed that I couldn't customize my Corsair keyboard lights and layout, so I swapped it for a Poker II that lets you do everything directly on the board through keypresses and store settings onboard. So much better.
OpenBSD's official FAQ has this to say about Flash:
Adobe's Flash plugin is distributed in binary form only, and they do not provide a native OpenBSD version. Considering their security record, we thank them for this neglect.
The situation is equivalent here; given the kind of software that hardware manufacturers tend to write, I'm quite happy to take volunteer efforts over what they produce.
I would like to inform you that it does in fact work really rather well. I write this on a laptop running Debian Linux which I use for work. It's great.
I also use a machine running Debian Linux for my personal computing needs, but prefer to separate those concerns a bit. I can confirm the community's support is excellent though.
It doesn't work at all, for the thousands of pieces of hardware that simply don't work on Linux, or work poorly.
Further, it doesn't work at all from an economics standpoint. There is zero incentive driving the community to support every iteration of every type of hardware other than personal need, and if you're suggesting that anyone who needs something has to go out and build a driver themselves, please let me know who your dealer is because I want to be as high as you.
As for, "the community is better than the manufacturer" try having an outage over the holidays, and see how your community reacts to, "help me or my business fails".
I see your _thousands_ of pieces of hardware that don't work and raise you all the hardware written for old versions of Windows whose drivers no longer work on new versions yet still work _perfectly_ on Linux.
See: professional audio and video gear, perfectly good network cards.
Also, note in my post I referred specifically to 'accompanying software'. Drivers that work with the OS in question are to be expected in the commercial software world.
So many hardware companies fell into the trap of making steaming piles like ATI's CCC[0], Creative's SoundBlaster Control Panel[1], ASUS's Control erm.. thing[2]. And it's not just gaming peripherals, buy a computer from HP and see how many HP-specific utilities clog up the system tray. Same goes for most of the other big players.
> As for, "the community is better than the manufacturer" try having an outage over the holidays, and see how your community reacts to, "help me or my business fails".
And for those people there are companies like Canonical, who derive their product from Debian. There are also Red Hat, SUSE, etc. Lots of commercial options grow up around these vibrant communities to fill the gaps they sometimes leave. This is an unmitigated good.
Actually, support from the community is often much better than support from the vendor, especially for older hardware that works perfectly fine but just has no drivers for modern Windows.
The main problem with hardware support on Linux is that there is no up-to-date source of information about what works and what doesn't (and everything in between). You can usually find info about a particular piece of hardware, but that misses the main purpose of such a database: getting more people to buy more open hardware.
Name one, just one, hardware manufacturer you can reach by email to report a bug, with the expectation of both a reply and a fix. And without being charged a dime in the process (well-deserved yet voluntary donations aside).
No thanks, I'll stay with the Linux community any day.
It's not, not even remotely. You clearly haven't actually lived day-to-day with Linux, or you'd know how buggy and unreliable the random drivers for less-than-popular hardware actually are.
Further, you've clearly never had a critical outage happen during off hours. Try running to the community when your job is on the line at 3AM Christmas Day, let me know how responsive they are to you then.
You get the quality you pay for, and while the open source community is wonderful and amazing, it's done by people working for free, and subject to the whims of those people, which makes it A) unreliable B) inconsistent, and C) lower quality, on average.
"Try running to the community when your job is on the line at 3AM Christmas Day"
That's a bit extreme. Of course throwing a shitload of money at support contracts will give you helpdesks answering during holidays and drones flocking into the data center, but that applies only to a very small subset of hw/sw products where any outage can be fatal.
The consumer market is just a bit different though.
I'm actually struggling to find any kind of comprehensive list, but I'll point out that I've used Intel, AMD, and NVIDIA first-party drivers in Linux over the years. I'm sure there are countless others.
It's a slippery slope towards a world in which the computers we use are completely locked down and we have no control over what they trust, with that dictated by corporations and their interests. The freedom to modify your roots of trust is extremely important, and situations like this should not scare us into depriving ourselves of it. I say this as someone who runs an adblocking/filtering proxy and MITMs my own traffic continuously. What's scarier than being MITM'd? Being force-fed by those who want to control your life.
These certificates are used by the software to communicate with the headset using a TLS encrypted web socket.
No you don't disagree. The comment you're responding to doesn't say users shouldn't be free to add whatever root certificate they want. Of course they should.
What should be illegal are programs that silently add a root certificate without the user's informed consent. I very much doubt you disagree with that.
The law doesn't have to lock it down, though. Software companies should be accountable for putting users at risk in this way; it should not be legal to install fundamentally insecure software without explicit permission from the user.
Did it not ask for permission? "Type in your admin password to install?"
This seems like an OS problem to me. I wish Apple and MS would clamp down on even allowing apps to ask for root to install. I don't think the OS should be as locked down as iOS (as a user I still want control), but the fact that so many apps ask for root to install needs to be stopped. 99.9% of all apps need to be sandboxed.
> The freedom to modify your roots of trust is extremely important.
I agree, modifying certificates shouldn't be technically impossible, but perhaps we could have regulations under which companies that do this are penalized. If companies are expected to safeguard customer data, then facilitating MITM is a huge breach that should result in fines.
A lot of root CAs are backed by national governments anyway, shouldn't they object to being impersonated?
It's also extremely common for companies to install root certificates on work computers that employees use to check their personal email. Why is nobody objecting to this?
> It's also extremely common for companies to install root certificates on work computers that employees use to check their personal email. Why is nobody objecting to this?
A question: if I'm on a work PC that has a root certificate installed, can you tell if they're using it to MITM? When I go to a site on my work PC with the padlock (e.g. my bank) and click to get more info, it does show the bank's certificate. Can they still be MITMing that connection?
Of course it will show a cert with the matching name; practically all MITM proxies generate correct-looking certificates so that browsers don't complain. You need to look at the CA it's signed with and compare it to the one you get when you access the site from somewhere else.
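If you'd rather check mechanically than eyeball the padlock dialog, a few lines of Go will dump the chain the server presents; this is just a sketch, with www.example.com standing in for your bank's hostname. Run it once on the work PC and once from a machine you control: a corporate MITM proxy will show its own CA in the issuer line.

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        // Dial the site using the default (system) trust store and
        // print every certificate in the chain the server presented.
        conn, err := tls.Dial("tcp", "www.example.com:443", nil)
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%s\n  issuer=%s\n", cert.Subject, cert.Issuer)
        }
    }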
Also,
> It's also extremely common for companies to install root certificates on work computers that employees use to check their personal email. Why is nobody objecting to this?
Because you do not own those computers, and they should not be used for non-work-related activity. The company policy will explicitly mention this, something like "all communications on company property are subjected to monitoring at all times."
>Because you do not own those computers, and they should not be used for non-work-related activity. The company policy will explicitly mention this, something like "all communications on company property are subjected to monitoring at all times."
Just as a thought experiment, let's say I set up a "free computer booth" on a street, but made people tick a box when they started that expressed the same conditions.
Would I then be entitled to MITM and read all communications that passed through that machine?
Yes. The owner always has the right to monitor what goes on with his/her property. If you remove that right (like what is happening on mobile devices, unfortunately), it's a slippery slope to a situation where no one has absolute ownership of what they "own", which is even more treacherous for privacy.
That's why it bothered me so much when Firefox started banning unsigned addons, even if you go through a tedious process to enable them and recompile said addon. Ticked me off so much I made a Hitler parody:
In practice the corporation "dictating" the set of publicly trusted CA roots is the Mozilla Foundation, a 501(c)(3) non-profit with a large volunteer effort.
On paper all the major browser vendors / operating system vendors (Microsoft, Apple, Google, Mozilla) have independent root trust programmes. But after several years working on this stuff I would say that all real public oversight is done by Mozilla, which AFAICT suits everybody else just fine.
So you can and should help Mozilla "dictate" what is trustworthy. You might be surprised how much difference you can make.
The libertarian dream solution (everybody makes their own trust decisions somehow) is unrealistic in the face of the reality that most people just vaguely assume that "someone" is keeping them safe and lack both the technical knowledge and the spare time to make useful decisions.
> In practice the "corporation" "dictating" the set of publicly trusted CA roots is the Mozilla Foundation, a 501(c)(3) non-profit with a large volunteer effort.
As a bit of a layman, is there even any legitimate reason at all (other than a user installing it in their own machine for reverse engineering purposes) for anyone to install a root certificate anymore?
I could understand it if it was a small company doing so at the time when certificates were expensive, but Sennheiser has plenty of money and certificates can be obtained for free nowadays.
Nobody will issue Sennheiser a certificate for this purpose. Every so often a company abuses a cert they were issued to do what Sennheiser wanted to achieve here (local loopback HTTPS) and when they're caught the cert is revoked and they get a slap on the wrist. Blizzard is a recent example.
The Right Thing (TM) is to not do HTTPS, a modern web browser is supposed to conclude that ::1 and 127.0.0.1 are secure without HTTPS since there is no possibility of a "man in the middle" of your own computer's loopback.
If you want an arbitrary (thus HTTPS-based) website to be able to communicate with a localhost server using websockets, you are forced to use HTTPS on the localhost server. This is because the browser won't connect to non-secure websockets from an HTTPS website, even if the websocket is to localhost.
The actual right thing to do is to generate a private key and certificate (for a specific, public name you point to 127.0.0.1) during the software installation and add the latter to the trusted store. Now you don't have this vulnerability because each computer has a different trusted certificate with a different key, so a random attacker cannot just use the key they got to spy on other users.
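A minimal sketch of what that per-install generation could look like in Go; the hostname and output path are made up for illustration. The vendor would point the name at 127.0.0.1, and the installer would add the certificate (never the key) to the OS trust store. Since each install has its own key, extracting one machine's key gets an attacker nothing on anyone else's machine.

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Fresh key pair generated on THIS machine at install time;
        // it is never shipped with the product.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }

        serial, err := rand.Int(rand.Reader, new(big.Int).Lsh(big.NewInt(1), 128))
        if err != nil {
            panic(err)
        }

        // "local.example-device.com" is a hypothetical public name the
        // vendor would resolve to the loopback address.
        tmpl := x509.Certificate{
            SerialNumber: serial,
            Subject:      pkix.Name{CommonName: "local.example-device.com"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(5, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"local.example-device.com"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("::1")},
        }

        // Self-signed: the template is its own parent.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }

        out, err := os.Create("install-cert.pem")
        if err != nil {
            panic(err)
        }
        defer out.Close()
        pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
        // The installer then adds install-cert.pem to the OS trust store;
        // the private key stays with the local daemon only.
    }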
I run some services for my private use. It's crazy that I need to have them certified by some third-party overseas CA because I can't get my own devices to trust my own certificates.
We're not at that point yet, but running your own trust root is getting quite annoying. For example, Android constantly nags about "network might be monitored" when custom certificates are installed.
As far as changing the certs, I know offhand how to do it with a couple of random Linux distros, but I'm not 100% sure for Android; you might try searching the repo for the default certs, then looking at how they're built into the image, and tweaking that.
As someone who has been burned by self-signed, internal-only sites: take the extra 15 minutes and get a proper cert and domain name for your internal sites. It can save a massive amount of pain later.
Just hope you never need an external system or a cloud-hosted service to talk to your dev/test environments. It's so cheap and easy to do it right that it just doesn't make sense to do it the other way.
In this case we started doing hybrid cloud, and we were unable to address a ton of sites since they were on a made-up, internal-only TLD. Plus, everything we could address served up certs we couldn't trust, since we were utilizing services that didn't allow us to modify the trusted root cert store.
We saved probably $100 and 2 hours by rolling our own solutions instead of doing things the standard way. It took weeks to clean the mess completely up.
Well, yeah, using a made up domain is obviously a bad idea ... but what does that have to do with root CAs? And how does trusting your own root CA lead to not trusting the certs presented by other parties? I don't really understand what kind of scenario you are describing there.
And no, I don't see anything "standard" about not running your own CA, it is perfectly standard as far as I am concerned, and a really good idea as well. Relying on an external CA for internal services just creates risks of both availability and security. If you need an external CA to set up or continue operating internal services, that is an availability risk, and if you trust the whole standard set of root CAs for all of your internal services, that's a massive security risk.
Obviously if all your services are hosted in house and you will never need to expose internal services externally go for it. But as soon as your organization grows, splits, merges or starts utilizing other services that don’t give you access to the trust store you are boned. It screwed us, and was a giant pain to fix.
Why would all your services have to be hosted in house, and why would it prevent you from "exposing internal services" (I mean, apart from the fact that they kind of aren't internal services anymore from that point on)?
For one, there is no problem hosting your own services elsewhere and having them use your own certificates. But more importantly: Why should your own CA prevent you from obtaining certificates from an external CA for external services? I mean, it just doesn't, that's how I run stuff: Purely internal stuff runs on internal CA, stuff that needs to face the public somehow runs on globally recognized CAs. And it's mostly trivial to switch services from one to the other - or to just run two endpoints, one using the internal CA, one using an external CA.
It seems to me like your problem wasn't your own root CA, your problem was that your services were incompatible with external CAs for some reason, among them probably your private DNS root? But that isn't a reason why you should put your internal services at risk from mismanaged public CAs, that's simply a reason why you should use a global domain and support provisioning of certificates from external CAs.
The big issue was identifying all the impacted services, reconfiguring all of them testing and redeploying them. If it’s a few services fine. But once it’s a few hundred it’s a pain.
Well ... but then that still has nothing to do with using your own root CA, does it? I mean, why would you want to suddenly reconfigure all of your services to use a different CA? It might come up here and there that you need external access to some service that was internal before, but that is hardly a huge problem to reconfigure?!
And also, if you have so many services running that swapping out all of the certificates is a major headache, your primary mistake probably was that that wasn't automated? When keys are compromised, you should be able to reprovision anyway.
It can be a real pain in the butt to go up and down the whole stack and reconfigure every library and application that might detect an insecure connection and bail. Need several independent webapps to communicate? Hoo boy.
Who should have the ability to install root CA certs?
I like using HTTPS instead of HTTP, so I need some installed. Who should be responsible for managing them?
And before you say the CABF, they're ultimately not the ones who decide what gets installed on your computer. The answer to that question is much more complicated.
There was never a good reason to install a root certificate for the purpose of speaking securely to your own gear or web services. In that case, you don't need to add them to the root trust store, you just need to create an SSL context that has those certificates set as trusted.
However, the problem is, nobody understands SSL properly. Among the people who don't understand SSL properly are, alas, a number of people writing SSL libraries. Not the actual SSL libraries like OpenSSL, but all the surrounding libraries meant to make it "easy to use", which includes libraries that try to make HTTP easy to use and abstract away the difference between HTTP and HTTPS. Pretty much every "ease of use" library I have ever seen accomplishes its ease of use by dropping features from the underlying SSL support, and it's clear the authors often don't understand why those features were there or the consequences of dropping them.
I have a particular case where I've got some Perl code that is literally 4 levels deep in "SSL ease of use" code, with pretty much every layer dropping SSL features along the way (even the base level Net::SSLeay is missing a lot of stuff once you go beyond the basics, and it only gets worse as the stack gets higher). Once I had to poke support for a certain feature all the way down through the entire stack because it got dropped really early.
So what you need to do in this case is create your own root store of trust, and then stick your own certs in there from some .pems or something, and then use that to initialize SSL on your connection. But it's complicated to do that at the base level of a lot of SSL libraries, and this is often one of the first features to be "abstracted away" by support libraries, and a lot of HTTP libraries end up trying to "abstract away" SSL so thoroughly that they don't even have parameters to control the SSL elements of the connection, and if they do, they have some ad-hoc selection of random bits that someone once needed, rather than comprehensive support.
The upshot is that I'd consider it likely that they were using one of these libraries, and the only way they could see to get their certificates trusted was to stick them in the default root store, because that's the only thing that would work with such libraries. You can also find web pages and such recommending this approach. It's also possible they just came across one of those pages and put stuff in the root store without realizing what it really meant, even though their library allowed them to do everything I said, because it's way easier to slam something into the default trust store than to write the code to create your own at run time.

It ought to be easy to write that code. The rather nice Go TLS library makes it almost as easy as I say it is: create a cert store, add a certificate, set that as your root of trust; modulo a bit of error handling, it really is just about that many lines of code. But the "ease of use" libraries can really get in the way, when the people adding the "ease of use" abstractions don't themselves understand what that means, why you might want or need it, or how to make it easy.
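Concretely, here's a sketch of that Go approach; the file path and host name are invented. The vendor's cert is trusted by this one client only, and the system-wide store is never touched:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "net/http"
        "os"
    )

    func main() {
        // Load the vendor's certificate from a PEM file (made-up path).
        pemBytes, err := os.ReadFile("my-device-ca.pem")
        if err != nil {
            panic(err)
        }

        // Private root store: create it, add the cert to it.
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(pemBytes) {
            panic("failed to parse CA certificate")
        }

        // Set it as the root of trust for this client's connections only.
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{RootCAs: pool},
            },
        }

        resp, err := client.Get("https://device.example.local:8443/status")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }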
> what you need to do in this case is create your own root store of trust, and then stick your own certs in there from some .pems or something, and then use that to initialize SSL on your connection
Their goal was to allow the browser to connect to the local daemon, for web-based softphones, so this wouldn't work.
They give permission to modify their computer/device, but obviously there's an implicit trust that it isn't going to do something horrifically bad.
Trusting a builder to come into your home and change things: you'd be pretty angry if they took down a supporting wall to put up a new light fitting (and you'd probably have some legal comeback).
I'm a little lost on why they need a new root CA cert on a computer that already has a cert store.
Can't they safely communicate with whatever.sennheiser.com using the existing certs? Afaict, this isn't a stand-alone device trying to communicate, but your computer, running some app.
What am I missing?
Edit: okay, I see below that they are using a local web server, and (thanks to browser decisions about localhost) it requires https.
The really silly thing is that it's 2018 and the browser vendors still refuse to implement name based constraints on certificate authorities. It should be perfectly reasonable for a local, single domain CA to be generated and installed with the application. Instead we treat every CA as worthy to handle every domain always.
The X.509 spec specifies an extension for that (Name Constraints), which in OpenSSL config is called "nameConstraints". The rules for the constraint can be found in RFC 5280.[0] Mozilla have had an open development track for CA name constraints for quite some time, but the last edit to the page is from 2015.[1]
I tried to actually use this field a couple of years ago, and none of the existing tools I tried had any support for it. OpenSSL would fail to parse a CSR config with this key. Same for Go's TLS library.
So of course I did what any enterprising hacker would do: I created a CSR manually with the correct OID in place. Trying to sign that was nothing short of hilarious. Loading up the CSR into OpenSSL would trigger a BIO_read_* error. Trying the same with Go's TLS library triggered a panic!
I then realised that if you could somehow supply a certificate chain with a name-constrained CA in it, it would act as a highly reliable DoS against virtually all clients. (Probably against servers too, if you supplied a client-cert chain.)
Based on discussions since, I have been informed that Microsoft's TLS stack supports this - or at least should be technically capable of issuing CAs with the field in place. But because practically nothing else in the world has the support, and is in fact likely to crash when presented with one, even a gradual rollout is simply not possible.
Hence every single CA you see will be valid for *.
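(For what it's worth, Go's x509 package does expose fields for the extension these days, so you can at least mint such a CA yourself; whether anything in the wild will accept it is exactly the rollout problem above. A rough sketch, with the domain made up:)

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }

        // Name constraints (RFC 5280, section 4.2.1.10): even if this CA
        // lands in a trust store, leaf certs it signs only verify for
        // names under local.example.com.
        tmpl := x509.Certificate{
            SerialNumber:                big.NewInt(1),
            Subject:                     pkix.Name{CommonName: "Example Constrained CA"},
            NotBefore:                   time.Now(),
            NotAfter:                    time.Now().AddDate(10, 0, 0),
            IsCA:                        true,
            BasicConstraintsValid:       true,
            KeyUsage:                    x509.KeyUsageCertSign,
            PermittedDNSDomainsCritical: true,
            PermittedDNSDomains:         []string{"local.example.com"},
        }

        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Printf("constrained CA: %d bytes of DER\n", len(der))
    }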
Another simple mitigation would be the ability for one certificate to be signed by several CAs.
We could combine that with DNS records that state the policy for validating a certificate (e.g., the number of CAs that must validate it).
It could be a huge improvement in security, eliminating CAs as single points of failure for the whole internet, at least for critical pieces of it.
I work on a product that is in a similar boat. We have LAN based remote control using HTTP connections, but have no practical way of TLS enabling it. Stunts like this would work, but are a really bad idea.
The other alternative is to have the devices and remote control connect to a central server over TLS and Internet and then have that server relay traffic. But that is not nice either. However we may be forced into it because of client restrictions on non-TLS connections.
The problem is not that they add a root CA to the store. The problem is that they use the same root CA on every computer in the world, AND add it to the store, AND have the root CA PRIVATE KEY on ANY computer.
There is a valid reason: to enable website-to-hardware communication (i.e., to provide a JavaScript-based API for websites to interface with that hardware).
Ideally browsers should implement well standardized, secure APIs for all devices in the world, but we are far from there. Until browser vendors implement the API you need, the only option is to employ this trick.
Of course, companies should NOT reuse the same certificate between installations though (just generate a certificate during the installation process and life is good again).
> Ideally browsers should implement well standardized, secure APIs for all devices in the world, but we are far from there. Until browser vendors implement the API you need, the only option is to employ this trick
See how bad software companies are at creating reliable software/services that don't crash and have no security flaws.
Now imagine companies where software development is not a priority and/or not among their core competencies.
As long as it functionally (kind of) works, it will be good enough.
The comfort in this article is knowing that for every boutique German headphone company that insists on becoming a CA, there are thousands of nameless Chinese companies producing superior products at lower prices that do, to some measure, respect the user's privacy, in that they aren't more than just a USB peripheral.
Sades and Xiberia, for example, make perfectly useful (if a little bit cyberpunk) headsets that just operate as USB soundcards with no special CA requirement.
And if you're just in the mood to listen to some music without special software in this foul year of our lord 2018, might I suggest a pair of Superlux 668Bs? For ~$40 they're easily better than anything Sennheiser produces that requires its own PKI.
Exactly. The funny thing is that most of the 'boutique' headphone companies are small Chinese vendors now, mostly unknown to Westerners outside of the HeadFi forums.
Nameless Chinese companies' software support is usually rather limited, to say the least, from the absence of firmware updates to a very basic set of software features. Once they produce comparable software support for their hardware, they'll have the same problems, maybe much worse.
This seems like pure laziness. Did the developers really not have an understanding of basic PKI? Or did they realize late in the game that their local web socket was gonna require HTTPS and slap this on at the last minute?
Hmm, I recently picked up a pair of PXC 550s (crazy good Black Friday deal), and I saw the thing about installing their Android/iPhone app to do NFC pairing, but I frankly have no idea why you would want that, or really much of anything else in the app. The reviews even mention that the EQ controls don't even work for DRM'ed content.
OTOH, it seems that if you pair the headphones with normal Bluetooth it's just using A2DP/SBC and the audio quality is _miserable_. Maybe it's using a custom Bluetooth profile/A2DP codec?
Basically, why exactly do they even need a full-blown app?
(On a further side note, I've gotten to the point where I don't really even notice AC and computer fan noise, so much so that while a couple of coworkers complained about it, it wasn't until I tried the PXC 550s at work that I realized our AC blowers are really _LOUD_. With the 550s the constant low-frequency rumble is just gone. I guess my earplugs just weren't blocking that much low frequency.)
> Sennheiser does make BT headphones with a dedicated tower that helps a bit, but nothing can replace a hard line.
Their RS line of headphones doesn't actually use Bluetooth. The older ones use the Kleer protocol, while the newer ones use a proprietary wireless protocol. Lossless audio in either case.
How does the process of getting the certificate installed work? Does the user manually accept the installation at any point? Or is part of the blame on Microsoft for allowing this?
You "just" need admin privs to install one. There are "legit" reasons for using one atm (how else do you communicate with a locally running application from a site using SSL/TLS without using a browser extension? I honestly want to know, as it would be handy for a project I'm working on).
For example, Battle.net has a per-machine generated CA that gets installed when you install BNet (which is why it recently started asking for admin privs to install/update instead of just asking for them when installing a game).
It's used for talking to BNet when following a battle.net link, which can be used to prompt you to join Battle.net groups and other things. They used to use a cert signed by a public CA, but that is frowned upon, as the only way the client could really use it is if it knew the key for the cert, which would lead to either a million localhost.bnet.tld certs (I don't remember the actual hostname, so pulled one out of the air) or a shared cert with a million people who could access the private key if they went looking hard enough. They made a forum post about it when the issue with their own self-signed CA started showing up everywhere [0].
I believe Spotify does something similar so that things like open.spotify.com and other widgets can control the locally running Spotify app.
MS themselves have a cert tool in Visual Studio that creates and adds certs when dev'ing with the latest builds of ASP.NET Core 2, as the default for new projects is to use SSL (though IIRC the cert tool VS uses does prompt you about installing a cert).
Sorry if that sounded promotional, but VPNs do in fact protect against MITM attacks. And the service I mentioned I found reliable so I felt like the information was relevant, but point taken for future notice :)
All the technical mistakes aside, this is yet another illustration of why I refuse to use wireless peripherals. They're uniformly shoddy at best, often dropping connections or having difficulty pairing. The idea that headphones should need software strikes me as insane.
This is why Apple’s AirPods have been so popular. They really work quite well, compared to any other wireless headphones I’ve used. Easier pairing and better connections.
Also, surprisingly durable. Around April I accidentally left them out of their charger, outside in the pocket of a foldable chair, when I went out of town for a month. It rained on them multiple times. When I found them I was sure they would be broken. I charged them up and I still use them daily. No issues.