This was the bit that I spotted as potentially conflicting as well. Having managed (and sanitised!) tech & security policies at a small tech company, I can say the fail-open vs. fail-closed decisions are rarely clear-cut. What makes it worse is that a panicked C-suite member can make a blanket policy decision without consulting anyone outside their own circle.
The downstream effects tend to be pretty grim, and to make things worse, they start to show up only after 6 months. It's also a coinflip whether a reverse decision will be made after another major outage - itself directly attributable to the decisions made in the aftermath of the previous one.
What makes these kinds of issues particularly challenging is that, almost by definition, the conditions and rules will be codified deep inside nested error handling paths. As an engineer maintaining these systems, you are outside the battle-tested happy paths and first-level unhappy paths. The conditions that lead to these second/third-level failure modes are not necessarily well understood, let alone reproducible at will. It's like writing code in C with all your multi-level error conditions declared 'volatile', because they may be changed by an external force at any time, behind your back.
> Your argument seems to be that it's fine to break the law if the net outcome for society is positive.
In any other context, this would be known as "civil disobedience". It's generally considered something to applaud.
For what it's worth, I haven't made up my mind about the current state of AI. I haven't yet seen an ability for the systems to perform abstract reasoning, to _actually_ learn. (Show me an AI that has been fed with nothing but examples in languages A and B. Then demonstrate, conclusively, that it can apply the lessons it has learned in language M, which happens to be nothing like the first two.)
> In any other context, this would be known as "civil disobedience". It's generally considered something to applaud.
No, civil disobedience is when you break the law expecting to be punished, to force society to confront the evil of the law. The point is that you get publicly arrested, possibly beaten, and thrown in jail. This is not at all like what OpenAI is doing.
That second trick is neat, I'll need to remember that. I also really wish I had known about it back in 2015, when we had to massage an inconveniently big (at the time) PG database and started to carve it out into smaller, purpose-specific instances.
Being able to verify that an index was either useless or inefficient without jumping through hoops would have saved quite a lot of time.
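One way to spot index candidates for removal without jumping through hoops is Postgres's cumulative statistics view `pg_stat_user_indexes`, which counts how often each index has actually been scanned. A minimal sketch (the connection handling is a placeholder; any DB-API driver such as psycopg would work the same way, and the "never used" cutoff only covers the period since the last stats reset):

```python
# Find indexes Postgres has never used since the last statistics reset.
# Sorted largest-first, since big unused indexes cost the most to maintain.
UNUSED_INDEXES_SQL = """
SELECT schemaname,
       relname       AS table_name,
       indexrelname  AS index_name,
       idx_scan,     -- number of scans that used this index
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0   -- never used since stats reset
ORDER BY pg_relation_size(indexrelid) DESC;
"""

def find_unused_indexes(conn):
    """Return (schema, table, index, scans, size) rows for unused indexes."""
    with conn.cursor() as cur:
        cur.execute(UNUSED_INDEXES_SQL)
        return cur.fetchall()
```

Note the caveat: an index with `idx_scan = 0` may still back a unique constraint or serve a rare-but-critical query, so this surfaces candidates rather than giving a final verdict.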
That is also why so much security[tm] software is so bad. Usability and fitness for purpose are not box-tick items. The industry term in play is "risk transfer".
Most security software does not do what it advertises, because it doesn't have to. Its primary function is to let those who bought the product blame the vendor. "We paid vendor X a lot of money and transferred the risk to them, this cannot be our fault." Well, guess what? You may not legally be the one holding the bag, but as the business on the other end of the transaction you are still at fault. Those are your customers. You messed up.
As for vendor X? If the incident was big enough, they got free press coverage. The incentives in the industry truly are corrupt.
Disclosure: in the infosec sphere since the early 90's. And as it happens, I did a talk about this state of affairs earlier this week.
Some kind of additional leverage and/or connections were certainly used.
The open dirty secret of infosec is that outside of authentication systems, the products and services sold do not actually work. Usability and real world functionality are not box-tick items in feature matrix comparison. It is enough that a security[tm] product does something technically correct to get a green tick in the relevant feature list row.
As a result, the products are not commonly sold to their end users. They are sold to the C-suite, and inflicted upon their victims. And how does the C-suite choose which vendor to throw their money at? DDQ/RFx templates. I wish I was joking.
The other dirty secret of infosec is that everyone does their vendor/client/etc. vetting with bingo sheets full of meaningless, context-free questions that try to enumerate SYMPTOMS of different kinds of breach scenarios - they do not attempt to look at root causes, and they certainly do not consider threat models. These bingo sheet templates are used by everyone: vendor teams, insurers, auditors, you name it.
And now we finally get to how Wiz pulling connections intersects with the above. A fair number of the bingo sheet templates come with pre-populated dropdown choices. The choices usually include no more than 8 options, including "Other". The implication is very clear: "if you use one of these known & approved vendor products, then we are fine with it".
Wiz got their offering included in the bingo sheet templates within approximately 18 months of launching publicly. That has provided them with constant advertising from the countless infosec questionnaires thrown around the various industries, plus the implied checkmark of being pre-approved as a vendor of choice. Given the landscape and the general quality of competing vendors, your product merely needs to be not-shit to stand out and get traction through the various back channels.
Now, from personal exposure I can say that Wiz's products (or at least those I have been faced with) are still better[ß] than the competition. A recent security scan report from a client using Wiz had a false-positive rate of only ~85%. The average FP rate for other vendors tends to be 95% or even higher.
ß: security products must be the only segment where the vast majority of results being false positives is considered both acceptable and normal. In any other field, a product that routinely gets >90% of its answers wrong would be consigned to the rubbish heap.
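To make the difference between those FP rates concrete, here is the back-of-envelope arithmetic (the report size is purely illustrative; the rates are the ones quoted above):

```python
def actionable(findings: int, fp_rate: float) -> int:
    """Findings left after discarding false positives."""
    return round(findings * (1 - fp_rate))

report_size = 1000  # hypothetical scan report

better_vendor = actionable(report_size, 0.85)   # 85% FP rate
typical_vendor = actionable(report_size, 0.95)  # 95% FP rate
```

At 85% false positives, a 1000-finding report still contains 150 items worth a triage engineer's time; at 95%, only 50 - meaning the team wades through the same pile of noise for a third of the signal.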
My experience as well. Better product, and a very aggressive sales team, which is something you missed. They were very willing to cut any deal at all to get the sale. Win-win IMO, and exactly the VC 101 playbook.
I can provide an example where cloud, despite its vastly higher unit costs, makes sense. Analytics in high finance (note: not HFT). Disclosure: my employer provides systems for that.
A fair number of our clients routinely spin up workloads that are CPU bound on hundreds-to-thousands of nodes. These workloads can be EXTREMELY spiky, with a baseload for routine background jobs needing maybe 3-4 worker nodes, but with peak uses generating demand for something like 2k nodes, saturating all cores.
These peak uses also tend to be relatively time sensitive, to the point where having to wait two extra minutes for a result has real business impact. So our systems spin up capacity as needed and, once the load subsides, terminate unused nodes. After all, new ones can be brought up at will. When the peak loads are high (& short) enough, and the baseload low enough, the elastic nature of cloud systems has merit.
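A minimal sketch of that elastic sizing policy: keep a small warm baseline, grow toward demand under a hard ceiling, and release nodes once the spike subsides. The baseline and ceiling come from the figures above; the per-node core count and the deadline are hypothetical stand-ins:

```python
BASELINE_NODES = 4   # always-on capacity for routine background jobs
MAX_NODES = 2000     # peak fleet size seen under heavy load

def desired_nodes(pending_core_seconds: int,
                  cores_per_node: int = 64,      # assumed node size
                  deadline_seconds: int = 120) -> int:
    """Nodes needed to clear the queued work within the deadline,
    clamped between the warm baseline and the fleet ceiling."""
    capacity_per_node = cores_per_node * deadline_seconds
    needed = -(-pending_core_seconds // capacity_per_node)  # ceiling division
    return max(BASELINE_NODES, min(MAX_NODES, needed))
```

The real scheduler is of course far more involved (pre-warm pools, spot interruptions, bin-packing), but the clamp-between-baseline-and-cap shape is the core of it.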
I would note that these are the types of clients who will happily absorb the cross-zone networking costs to ensure they have highly available, cross-zone failover scenarios covered. (Eg. have you ever done the math on just how much a busy cross-zone Kafka cluster generates in zonal egress costs?) They will still crunch the numbers to ensure that their transient workload pools have sufficient minimum capacity to service small calculations without pre-warm delay, while only running at high(er) capacity when actually needed.
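For anyone who hasn't done that Kafka math, here is a hedged back-of-envelope. Every input is an assumption: sustained produce throughput, replication factor 3 with brokers spread across three zones (so each produced byte crosses a zone boundary twice for replication), two consumer groups each reading ~2/3 of their data from out-of-zone leaders (no follower fetching), and an effective cross-AZ price of $0.02/GB (a common list price counting both the sending and receiving side - check your provider):

```python
def monthly_cross_zone_egress_usd(produce_mb_s: float,
                                  replication_factor: int = 3,
                                  consumer_groups: int = 2,
                                  price_per_gb: float = 0.02) -> float:
    """Rough monthly cross-zone transfer bill for a Kafka cluster."""
    seconds_per_month = 30 * 24 * 3600
    produced_gb = produce_mb_s * 1e6 * seconds_per_month / 1e9
    # RF-1 replica copies leave the leader's zone
    replica_multiplier = replication_factor - 1
    # with 3 zones and no follower fetching, ~2/3 of reads cross zones
    consumer_multiplier = consumer_groups * 2 / 3
    return produced_gb * (replica_multiplier + consumer_multiplier) * price_per_gb
```

Under those assumptions, a steady 100 MB/s of produce traffic lands around $17k/month in cross-zone transfer alone - which is exactly why these clients crunch the numbers instead of hand-waving them.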
Optimising for availability of live CPU seconds can be a ... fascinating problem space.
There are absolutely plenty of spaces where this is true and cloud makes sense either because it's actually cost effective, or because the cost doesn't matter.
Most people aren't in those situations, though, but I think a lot of them think they're much closer to your scenario than the much more boring situation they're actually in.
Chinese companies have one massive advantage in aggregate: they know that from 2028 onwards they will be competing for a captive domestic market of >1.3B people. The CCP have declared as their industrial [service] policy that by the end of 2027, all Chinese companies must be using services exclusively from Chinese suppliers. The target ratio of domestic/foreign services is being ramped up year over year, so that by 2028 the base expectation is for everyone to have 100% Chinese suppliers.
From then on, every exception must be justified to - and approved by - their respective politburo.
An obvious second-order effect is that there has been an explosion of Chinese B2B companies eager to establish themselves in the market. They know that in just a few years they will still be able to sell their services outside China, while facing very limited competition from non-Chinese companies at home. And inside the country, they have a population ~4x that of the US to compete for.
That is particularly true for anything dealing with security. I evaluated both BitWarden and 1Password when we wanted to migrate away from LastPass. My recommendation was to eventually go with BW. Its open-source nature was a factor, but for corporate use the UX factors were even more prominent.
Over the course of a month, I ran into several subtle footguns with 1P. Search covered only some of the fields. The password reset/rotation flow was easy to mess up (thanks to the confusing and inconsistent "copy field" functionality), leaving the generated password stored in the vault different from the one actually set: in my tests there was a 50/50 chance of accidentally regenerating the password after submitting the new one to a remote service but before the vault storage step.
There were a whole load of "features" that didn't make any sense. The UI for 1P was a real mess. The feeling I got from it was that their product had been captured by Product Managers[tm] desperate to justify their own existence by shipping ever more Features[tm] without considering the impact on the core functionality.
BW's UI is by no means perfect, and their entry editing flow is far from ideal. But at least most of the actual usability snags in their browser extension have a common workaround: pop the BW overlay out of the browser into a separate window. Their open-source nature and the availability of independent implementations mean that there will be alternatives, should BW go down the same features-features-and-more-antifeatures hellhole in their race to eventually appease their VC backers.
Sounds like our experience with it could not be more different.
> The UI for 1P was a real mess.
In what way? You described how you feel about the UI, but I’m curious about actual specifics.
It’s entirely possible that I’m just too accustomed to it because I’ve been using it for many years, but what you’re describing is how I felt about Bitwarden.
I can completely see choosing BW in a corporate setting for a host of other reasons. But for me personally, the priority is a tool that gets out of my way and just works.
The tool that has done that is 1P.
> Less is more.
That really depends. If less means that the password manager doesn’t get used, then less is less.