Until recently, AlwaysOnSSL was issuing automated, free, trusted certificates with 12-month validity via HTTP or DNS domain validation. They've since disabled new registrations (who knows why), but I'm still able to get them on my existing account.
Comodo is issuing 12-month certificates for cPanel servers, and three-month certificates for all domains on any licensed cPanel server (without the rate limits imposed by Let's Encrypt). In fact, the cPanel Comodo CA is currently the second-largest CA, with ~13-14% of all currently trusted certificates, ahead of Comodo proper with 8% and behind Let's Encrypt with ~66%.
So, I'm not sure that it's strictly true that commercial CAs will refuse to do anything that cannibalizes their own sales. I think there are some mixed models that can work (RIP StartCom), and I hope we see more of them for the sake of the WebPKI. Overwhelming reliance on a free CA that is funded purely by donations doesn't seem like a good thing to me.
Someone (i.e. some CA) had to cross-sign Let's Encrypt's root certificate (or a direct derivative of it). It turns out that CA is IdenTrust, a bank consortium.
I'm not disagreeing that it's a good rule of thumb, though.
It seems likely that at some point they will drop the cross-signing.
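A toy sketch of why the cross-signing matters for old devices: chain validation walks issuer links from the leaf until it reaches a root the client already trusts. The certificate names below are the real ones from the Let's Encrypt hierarchy, but the structures are heavily simplified (a "certificate" is just a subject/issuer pair), so this is an illustration of the mechanism, not of real X.509 validation.

```python
# Toy model of chain building. A "certificate" is a (subject, issuer) tuple;
# a trust store is a set of root subjects the client ships with.

def build_chain(leaf, certs_by_subject, trust_store):
    """Walk issuer links from `leaf` until we hit a root in `trust_store`.
    Returns the chain as a list of subject names, or None if no trusted
    root is reachable."""
    chain = [leaf]
    subject, issuer = leaf
    seen = {subject}
    while issuer not in trust_store:
        parent = certs_by_subject.get(issuer)
        if parent is None or parent[0] in seen:
            return None          # dead end (or a loop): untrusted chain
        chain.append(parent)
        seen.add(parent[0])
        subject, issuer = parent
    return [s for s, _ in chain] + [issuer]

# Intermediates a Let's Encrypt server might send alongside the leaf:
modern = {"R3": ("R3", "ISRG Root X1")}
cross  = {"R3": ("R3", "ISRG Root X1"),
          "ISRG Root X1": ("ISRG Root X1", "DST Root CA X3")}  # cross-sign

leaf = ("example.com", "R3")
old_store = {"DST Root CA X3"}                 # ancient device: no ISRG root
new_store = {"ISRG Root X1", "DST Root CA X3"}

build_chain(leaf, modern, old_store)  # → None: old device can't validate
build_chain(leaf, cross, old_store)   # → works via the cross-signed path
```

The old device only trusts IdenTrust's root, so without the cross-signed copy of ISRG Root X1 the chain dead-ends; with it, the same leaf validates on both old and new trust stores.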
The top row of "Valid¹" items with one "No" and one "Defer to OS" is the trust you care about, whether ISRG is trusted for the Web PKI. "Defer to OS" is what Chrome says by default about almost everything - basically Google prefers to let your OS vendor pick what's trusted unless they feel the need to step in and override.
I have no idea why Apple trusts ISRG for things ISRG doesn't do and has no intent of ever doing; my instinct is to label it "incompetence", but I've never been an Apple fan, so I'm sure they have some excuse.
On the other hand, at least Microsoft has a comprehensive and reliable update process, so there's a good chance most of their population of systems can be upgraded in months rather than years once they do make a decision.
All but one of the programmes listed are closed-door operations: we have no idea how they decide what to do, and only a cynic like me would suggest that it's because they mostly react to events after the fact...
The exception is Mozilla, which makes all these decisions in public in the newsgroup m.d.s.policy.
This way, LE works on ancient Android devices as well as on god knows what strange Internet-enabled devices.
I used to work in a casino, and Compliance/the regulator (a type of audit) would check the settings I input. After much employee churn, it got to the point where the regulator was just a warm body with a clipboard. I'd enter a setting, say a jackpot limit of $1,200; the auditor asked me to show him, then checked it off. But the regulator was using my copy of the data, so they were confirming what I had input against the exact same data. The regulator should have used a separate source of data (they used to); what if my data was wrong? It's happened before, many times.
Anyway that's what CA auditors reminded me of.
Part of Symantec's failure (which led them to exit the CA business last year under financial pressure) was that, rather than maintaining effective oversight of their own, they increasingly relied on audits from the Big Four to confirm that things were being done correctly at their "partner" companies, even though the wider PKI community had seen repeatedly that audit is not very effective and is no substitute for an oversight role.
As a result, when problems with CrossCert were identified, Symantec found they didn't have the records that ought to have existed and ended up asking CrossCert for them; of course, CrossCert didn't have such records either. This situation couldn't have happened if Symantec had effective oversight, since that oversight role would necessarily have examined the (non-existent) records.
Individual audit companies may prove incompetent or corrupt too. Mozilla concluded that the Hong Kong branch of EY (once "Ernst and Young", but short names are cool now, apparently) was either incompetent or corrupt, and told its parent that it would no longer accept audits from this outfit. This was part of the problem with StartCom: their audits didn't mention things that should have been evident at the time to a competent auditor.
A central purpose of Certificate Transparency is to reduce the need to rely on auditors. With all the signed certificates made public, independent researchers can interrogate data once accessible only to auditors. For example, the question "Were any certificates issued for Debian Weak Keys?" is something we might once have resolved with audit: ask the CA "Do you check for weak Debian keys?", and if they say "Yes", that's a pass. But today researchers can simply download all the public keys and memcmp() them against the Debian Weak Keys list. Upon finding one, they raise a Problem Report. Sure enough, the CA discovers that although it of course believed it was checking for these weak keys, for a brief period the check was disabled and it issued a certificate. Oops. Observatory tools like Censys might have helped, but Certificate Transparency gives us guaranteed coverage.
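The weak-key check described above boils down to set membership: compare a fingerprint of each certificate's public key against a precomputed blacklist of every key the broken Debian PRNG could generate (this is how the openssl-blacklist tooling works). A minimal sketch, with made-up key bytes standing in for a CT log download:

```python
import hashlib

def key_fingerprint(der_public_key: bytes) -> str:
    """Fingerprint a DER-encoded public key; real blacklists use a
    similar hash-based scheme so keys can be compared compactly."""
    return hashlib.sha256(der_public_key).hexdigest()

def find_weak_certs(certs, weak_fingerprints):
    """certs: iterable of (cert_id, der_public_key) pairs.
    Returns the IDs of certificates whose key is on the blacklist."""
    return [cert_id for cert_id, key in certs
            if key_fingerprint(key) in weak_fingerprints]

# Hypothetical stand-ins for real key material:
weak_key = b"weak-debian-key-material"
good_key = b"properly-random-key-material"
blacklist = {key_fingerprint(weak_key)}

certs = [("cert-1", good_key), ("cert-2", weak_key)]
find_weak_certs(certs, blacklist)  # → ["cert-2"]
```

The point is that anyone with the public CT data can run this, no auditor access required.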
Nevertheless, audit is a necessity for some practical questions. Auditors physically visiting a facility can verify that it is indeed physically secure (are the data centre fire doors left open onto a shopping centre car park?) and that documentation physically exists (are there, in fact, binders full of procedures? It's funny, when they know nobody follows the procedures humans are often too lazy to even bother writing them down...). Likewise auditors must attend certain ceremonies e.g. to create a new private key. It would be very expensive for representatives from the major Public Trust Stores to be sent around the world to do this, and it's clearly not acceptable to just insist that all CAs must be physically located in Northern California.
I agree; I've experienced both situations. KPMG did an audit of my former workplace, and I was amazed at what they found, and how fast. It was a single receipt a cashier rang in after the supervisor had forgotten to log off the POS. The cashier didn't notice and rang in that one item under the supervisor's login. The KPMG auditor caught it on paper, not electronically: they had gone through four years of paperwork for one department and spotted it (on the first day of the audit, too!).
After that Deloitte or KPMG seemed to be mailing it in. I knew some departments in the place where I worked had greatly slipped in following protocol and none of it was discovered by the auditors. Or I should say I wasn't aware of any discoveries since I was not directly involved in the process.
(DigiNotar was trustworthy right up until people discovered that it had been compromised. StartCom actively hid that it had been bought by WoSign; it took some digging for people to turn this up and verify it. Other CAs have routinely denied that they had problems when they did.)
Even if you could, picking an especially trustworthy CA doesn't add any extra security for your website, because you are actually dependent on the security of all of the CAs out there, any of which can in theory mis-issue a TLS certificate for your site. Sure, there are CAA records, but they only help if the CA actually follows the rules properly; CAs can fail here at both a technical and a business level. (One CA recently proposed that it should be allowed to ignore CAA records if the domain owner told it to.)
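For reference, the CAA check a CA is supposed to perform (RFC 8659) is simple tree-climbing: find the closest ancestor domain with any CAA records and see whether the CA appears in an "issue" property. A simplified sketch, with DNS replaced by a plain dict so it runs without network access (it ignores wildcard/"issuewild" handling and CNAMEs):

```python
def caa_permits(domain, ca_id, caa_records):
    """caa_records maps a domain name to a list of (tag, value) CAA
    properties. Returns True if `ca_id` may issue for `domain` under
    a simplified reading of the RFC 8659 lookup algorithm."""
    labels = domain.split(".")
    for i in range(len(labels)):
        name = ".".join(labels[i:])
        records = caa_records.get(name)
        if records:                      # closest ancestor with CAA governs
            issuers = [v for t, v in records if t == "issue"]
            return ca_id in issuers
    return True                          # no CAA anywhere: anyone may issue

records = {"example.com": [("issue", "letsencrypt.org")]}
caa_permits("www.example.com", "letsencrypt.org", records)       # → True
caa_permits("www.example.com", "some-other-ca.example", records) # → False
caa_permits("no-caa.test", "some-other-ca.example", records)     # → True
```

Note the failure mode the comment describes: nothing in the protocol *forces* a CA to run this check; it only protects you if every CA does.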
Even attempting to do basic checking of a CA probably doesn't make sense at a business level unless you're buying certificates in bulk. Suppose that you're buying four TLS certificates and the best price you can get is $30 per certificate per year, or $120 in total for a year. How much engineer time does it make sense to spend in order to pick the most 'trustworthy' CA (ie one that is likely to remain trusted by browsers for a year), given that CA de-trusting is an extremely rare event and if it happens, you're only out another $120 to buy certificates from the next CA?
(I'm the author of the linked-to entry and as you can tell, I have opinions on CAs and the TLS ecosystem. One of them is that Certificate Transparency is a really big deal because it removes a lot of the 'trust me' from the whole CA ecosystem.)
I can generate a set of magic bits on my own computer without any input from a CA, and I'll be able to use them to encrypt a TLS session. What I can't generate on my computer is some trust, which is what I get when I buy a cert from a CA. Also, as a customer of CAs, I have a selection of different companies to choose from, and I base that selection mostly on trustworthiness. It's not so difficult for me to determine which CAs have had the least number of trust-undermining incidents.
To your point about the likelihood of CAs being de-trusted: something that happens even less often (as far as I'm aware) is relying parties claiming on the relying party warranty, so I don't think appealing to likelihood is an especially valid point here.
If advance warning is nonexistent, there's no point in even trying to check. If advance warning is hidden, you have a cost/benefit tradeoff to make, and I believe that in most cases (ie with low amounts of money involved), it's not good use of scarce and expensive engineer time to try to assess the state of a CA.
(This is especially the case if your company doesn't already have an engineer who is relatively expert in the CA ecosystem and knows where to even start looking. Without that expertise, would something like StartCom's quiet sale to WoSign have raised any alarms even if you discovered it?)
As far as magic bits go, the one thing that self-generated magic bits can't do is ensure that a complete first-time visitor isn't MITM'd by an active attacker. Whether this matters (or matters more than the scary browser warnings) depends on circumstances.
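What you get with self-generated keys and no CA is trust-on-first-use (TOFU), the model SSH uses for host keys: the first connection is taken on blind faith, and every later connection is checked against the fingerprint remembered from it. A minimal sketch of that logic:

```python
def tofu_check(host, fingerprint, pin_store):
    """Check a host's key fingerprint against a local pin store.
    Returns "trusted-first-use", "ok", or "MISMATCH"."""
    pinned = pin_store.get(host)
    if pinned is None:
        # Blind trust: an active MITM on this very first connection
        # goes completely unnoticed. This is the gap a CA closes.
        pin_store[host] = fingerprint
        return "trusted-first-use"
    return "ok" if pinned == fingerprint else "MISMATCH"

pins = {}
tofu_check("example.com", "aa:bb", pins)  # → "trusted-first-use"
tofu_check("example.com", "aa:bb", pins)  # → "ok"
tofu_check("example.com", "ee:ff", pins)  # → "MISMATCH" (key rotated, or MITM)
```

The last case also shows TOFU's other weakness: it can't distinguish a legitimate key rotation from an attack, which is exactly the judgment call a CA-signed certificate spares the visitor.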
If we say that this CA is not 'trustworthy' here, what we really mean is 'this CA is not in browser root sets and so the TLS certificates it issues provoke browser warnings'. This is useful in one sense (it is what most people care about), but I prefer to be explicit about what we mean (partly because 'trust' is a loaded term with tangled implications).
But as a user of certificates, there are no levels of trust to compare: either the padlock is green or it's not.
So I look only at price, and maybe more abstract things like customer service.