Hacker News: macrael's comments

Passkeys can't actually replace passwords, right? I will always need a username and password with a website, and can then generate a passkey as a separate auth mechanism, which, if I lose it, I'll recover by setting it up again using my username and password? I don't get how we can get to a place where passkeys are all there is. How do you get a passkey onto a new device when you only have passkey auth enabled on some other device?


That's not the idea, no. The idea is that - instead of a password - you have a cryptographic key. Like an SSH key. This key is managed for you, so you never have to see it or type it. You ought to be able to either have just a few keys, or else a different key for every service you use.

Unfortunately, the big players are trying to force this (really excellent!) idea into platform dependency. They want to store the keys on physical devices, which (a) eliminates portability and (b) restricts the number of keys you can have. If your device fails, you will also be faced with account-recovery problems.

Great idea, but the implementations are looking...not great.


Both iOS and Android sync to other devices in the same ecosystem, so there is at least a limited form of device portability.

If you have both, register a passkey from each ecosystem with every account; that's even better, since they back each other up if one vendor somehow deletes your account.


Wouldn't transferring the keys around just massively increase the attack surface? There's a security reason why we want them stored on-device and never moved, right?


Think of all of the password leaks you've heard of. How many were due to password syncing versus password reuse or poor site security (e.g. storing in plaintext, weak encryption, etc.)?

I'm not saying syncing is 100% secure (nothing is), but for most people it's not the main attack surface to be concerned about.


The KeePassXC file is encrypted (granted, only with a password). Sure, that file is now on multiple devices, so somewhat more vulnerable.

The problem with storing on-device comes when you use multiple devices. I have three devices (PC, laptop and phone) that I use regularly and interchangeably. What am I supposed to do, if the keys are tied to a single device? Worse, what do I do if that device dies, or is stolen?


This is how I'm using them. Still have a username/password, with a passkey as an additional factor. I use 1Password for passkeys rather than Apple's solution, which enables me to use them wherever I have 1Password.


Passkeys change nothing about account recovery. The same process for a forgotten password should be followed for a lost passkey.


They can, and hopefully will.

To get a new passkey onto another device, the provider needs to let you prove possession of your existing device first. For example, they can send you a one-time code when you authenticate using your existing device; you type it into the new device, and that lets you associate the new device's key with your existing account.
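A minimal server-side sketch of that one-time-code flow (the function names, the 6-digit format, and the 5-minute expiry are my assumptions, not any particular provider's API):

```python
import secrets
import time

# In-memory store of pending enrollment codes: code -> (account_id, expiry)
_pending = {}

CODE_TTL_SECONDS = 300  # assumed 5-minute validity window

def issue_enrollment_code(account_id: str) -> str:
    """Called after the user authenticates on their EXISTING device."""
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit one-time code
    _pending[code] = (account_id, time.time() + CODE_TTL_SECONDS)
    return code

def redeem_enrollment_code(code: str, new_device_public_key: bytes):
    """Called from the NEW device; associates its key with the account."""
    entry = _pending.pop(code, None)  # single use: gone after first attempt
    if entry is None:
        return None
    account_id, expires_at = entry
    if time.time() > expires_at:
        return None
    # A real system would persist this; here we just return the association.
    return (account_id, new_device_public_key)
```

The important properties are that the code is short-lived, single-use, and only issued after an authenticated session on the old device.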

With iCloud, you don't need even that, because Apple can, and does, sync your keychain across all your Apple devices. So as long as you use the same Apple ID on your different devices, all passkeys are automatically synced.

If you lose ALL your passkeys, you may be in trouble. For that reason, it's common that when you register your first passkey you're also given a long recovery code, which you must keep privately in a very secure physical location (as it will allow anyone who gets it to reset your account). You could say that IS a password, and perhaps you're right, but there's a difference in my mind that's pretty big: you're never supposed to use that "password", nor keep it easily accessible or even anywhere in digital form.
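A sketch of how a service might issue such a recovery code, storing only a digest so a database leak doesn't expose the code itself (grouping and length are my choices, not any standard):

```python
import hashlib
import secrets

def generate_recovery_code() -> str:
    """High-entropy recovery code, grouped for easier transcription."""
    raw = secrets.token_hex(16)  # 128 bits of entropy
    return "-".join(raw[i:i + 4] for i in range(0, len(raw), 4))

def hash_recovery_code(code: str) -> str:
    """The service stores only this digest, never the code itself."""
    # A plain hash is acceptable here because the code is random;
    # human-chosen passwords would need a slow KDF instead.
    return hashlib.sha256(code.encode()).hexdigest()

def verify_recovery_code(code: str, stored_digest: str) -> bool:
    return secrets.compare_digest(hash_recovery_code(code), stored_digest)
```

The user prints the output of `generate_recovery_code()` and the server keeps only the digest.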

Finally, a lot of people in this thread are missing that passkeys prevent phishing, and are basically the only way we know to prevent phishing. And phishing is extremely high in the ranking of security issues we currently have to try to solve.

If you know a better solution to phishing than passkeys, please let us know (password managers are not it if they allow a clueless user to extract the password and manually enter it anywhere)!


A long recovery code that both you and the provider need to know in order to authenticate you IS a goddamn password no matter how infrequently you expect to use it. It just changes what knowledge a hacker looks for either in your digital storage or in a company's databases.

If you get rid of all knowledge-based authentication in order to increase account security, then you necessarily increase the chances of permanent lockout. You can't square a circle.

As for phishing, maybe google should put its AI capabilities to good use, and if the text of an email matches enough patterns of examples it's seen before, there should be a banner at the top of the email warning "this looks like a phishing attempt: common tactics include X, Y, and Z. Confirm authenticity before reacting to this email."


Long recovery code is definitely a password lol. Needs to just have email recovery and call it a day.


There’s more to phishing threat models than strictly credential theft, as you seem to imply.

If passkeys are around, phishing will certainly still exist, and shift to dropping malware on endpoints or w/e vs going after logins


I don't think you know what phishing means. By "dropping malware on endpoints" I think you mean having a website serve malware? That's not phishing. For an attack to be "phishing", the website needs to be pretending to be some other website that the user trusts. Passkeys completely prevent the user from logging in to a website other than the one they created their account with.

Your attack only works on people who basically "trust any website" at all. For those, yeah there's no salvation.
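A toy model of that origin binding (HMAC stands in for the real per-site public-key signature; the point is that the credential is keyed to the relying-party ID, which the browser supplies and the user cannot override):

```python
import hashlib
import hmac

# The authenticator keeps a separate secret per relying-party ID, and
# will only ever sign challenges for the site that registered it.
_keys = {}

def register(rp_id: str) -> None:
    _keys[rp_id] = hashlib.sha256(rp_id.encode() + b"device-secret").digest()

def sign_challenge(origin_rp_id: str, challenge: bytes):
    """The browser supplies the ACTUAL origin, not what the user believes."""
    key = _keys.get(origin_rp_id)
    if key is None:
        return None  # no credential for this origin: nothing to phish
    return hmac.new(key, challenge, hashlib.sha256).digest()

register("example.com")
# The legitimate site gets a valid assertion:
assert sign_challenge("example.com", b"nonce") is not None
# A lookalike domain gets nothing, even if the user is fooled:
assert sign_challenge("examp1e.com", b"nonce") is None
```

That is the whole anti-phishing argument: the fake site never receives anything it can replay.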


I’m a security engineer, I’m pretty fluent on the topic. And phishing comes in beyond the methods you describe -> malicious attachments downloaded, etc etc.


I do agree but in this discussion we're talking about the general problem of logging in to a website. That's the case where phishing is the most devastating. Solving that problem is a huge step in making people's online lives more secure. Just because we didn't solve all problems, doesn't mean we shouldn't solve what we can solve. If you're a security engineer, it's your job to promote ways for people to be more secure online. And this is what I am trying to do myself.


The discussion is what you brought up, and it's what I object to:

> Finally, a lot of people in this thread are missing that passkeys prevent phishing, and are basically the only way we know to prevent phishing. And phishing is extremely high in the ranking of security issues we currently have to try to solve.

And I can point out several ways phishing is currently prevented without passkeys. And several ways it occurs without logins, such that it’ll still be around after passkeys. And phishing is difficult, but per defense in depth concepts, it is not the mission critical focus you label it as.

So to turn it back around, I don’t think you understand phishing threat vectors well haha.


It depends on the implementation. Some sites use it to replace both 2FA and passwords, some use it for just 2FA.

> how do you get a passkey on a new device when you only have passkey auth on some other device enabled?

You'd use a sync service like a password manager.


They are stored in your platform's password manager. So they're available on all the devices you're logged into.

If you're enrolling a new device (say you buy a new Android phone) you can scan a QR code from your previous phone to log in.


“My platform”? So, like, the BIOS? What if I want to use both a PC and an iPhone?


Platform as in "ecosystem". iCloud Keychain, Google Password Manager, Bitwarden, 1Password, your Yubikey. Anything that can store passkeys.

If you want to use both you simply enroll both your PC and your iPhone. There's nothing stopping you from doing this. You can register multiple passkeys from different providers to the same account.

You can also log in to your PC with your iPhone by scanning a QR code. And then afterwards enroll your PC as a secondary passkey.


I see, so you can use one device to auth and create a passkey on another


Sublime Text is great! It's been worth it to write small plugins to fix my biggest complaints, and since LSP support became first class it has been a great place to be.


Never a good sign when you have a line in your slides that says “Alien spacecraft are made to be very light. Why?”

He mentions aliens multiple times. Not a good sign when claiming to have discovered a new force coming out of a static electric charge.


If you work for NASA it's probably quite common to think about aliens more than the average person.


Reminds me of The One Electronic from Rice Boy


yes


Protected bike lanes save lives. https://www.nyc.gov/html/dot/downloads/pdf/2012-10-measuring...

Painted-only bike lanes, in fact, do not really have an impact on safety. This is why it's law in the Netherlands that all bike lanes need to be protected.


Unbelievable how often this is written about without mentioning one of the primary reasons for this design: parklets. Before the pandemic there was a finalized design for Valencia that would have done what has been done most commonly elsewhere, which was to swap the car parking and the bike lanes, putting bike lanes right by the curb. But then the pandemic happened, many more parklets opened on Valencia for outdoor dining, and people loved them! But parklets replace parking spaces, so suddenly swapping the parking and bike lanes would have made a mess instead of improving safety. Take a look at Telegraph in Oakland: they had swapped parking and bike lanes before the pandemic, and now a ton of bars and restaurants have active bike lanes cutting through their dining areas. It's terrible! The alternative to the center lane that was considered involved weaving the bike lane around parklets, creating stretches of bike lane no longer protected from cars and eating up parking spots with the weave.

As a cyclist, I love the center lanes. I feel safe. Before the change every time I biked down Valencia I had to weave in and out of the car lane b/c 100% of the time someone was parked in the bike lane. That basically doesn't happen anymore, and the one time I had an ambulance blocking my way, I went around it _in the opposing bike lane_ not exposed to cars at all.

It's always been the merchants opposing this stuff and study after study says they are wrong about it.


This was a great lightbulb moment for me. Yeah, detached bike lanes seem semi-incompatible with parklets.

Not being doored or having my bike lane taken over is probably the base of my hierarchy of needs. Not having a real-life game of pedestrian Frogger is also on the list.


Did you notice most of the parklets on Valencia are actually gone now? The city forced businesses to get rid of them over the last 3 months or so. Monk's Kettle, City Beer Store, and many more.


They've also stopped closing down every other block regularly and a bunch of other stuff that were actual wins from the pandemic. I'm as confused about that as anyone else but it's true that parklets were a big driver for the center lane design.


Yeah I was sad when they stopped closing the street on Saturday :/


>parklets opened on Valencia for outdoor dining and people loved them!

Patrons of specific dining establishments, sure, but they objectively inhibit transportation even more than parking spots do. This is painfully straightforward. Optimizing for wealthy people who frequently dine out is an expert way to undermine the collective interests of the people who actually live in the city and those neighborhoods. Gentrification 101.


Just curious, are you saying wealthy people _don't_ live in the city and those neighborhoods? Specifically around Valencia?


"Third: Package it nicely. Ideally, your customer should be able to download a zip with an executable and some config files from a well designed enterprise page - or get a URL to a Docker Repo. When your on-premise version starts for the first time, it should set itself up as autonomously as possible: Check if it has the required access and permissions, connect to the database to set up tables, have sensible log output etc."

LOL, that paragraph hides all of it. On-prem means that someone who does not understand your application is administering it. Which means YOU are administering it, but via Slacking with your customers' infra engineers rather than through your own control plane.

The article is right that there are customers who require this and if you want them you'll have to figure this out, but it is a huge pain in the ass and should be avoided if at all possible.


The industry used to be able to ship shrink wrapped software that a non-technical person could install on their home computer, without 24/7 instant messenger support to get working.

I think B2B software should be able to create manageable installations for technically competent IT professionals.

If your install has that long of a manual task list, your cloud SaaS infra is probably junk too.


You’re thinking of a time when only “computer geeks” could install and run the software of their choice. When every office had fleets of IT admins to install the pre-approved applications people needed for their day-to-day.

Back then, learning how to manage your own PC and install software was tantamount to earning an IT certificate. Doing so for a few years in high school would be enough to qualify you for a career in IT.

So I politely disagree. The industry was never able to ship software that a non-technical person could install. Especially not B2B, but barely B2C as well.


"The industry" sold software in the 1980s that had use instructions like "Insert disk 1 and press the 'Reset' key to load".

In the very early 1990s, something like "Insert disk 1 and click the disk icon, then click <name of application>". Installation to the hard disk was optional, and might be one or two steps.

Later in the 1990s, installation was pressing "Next", "Next", "Next" after inserting the CD.


Yes, and now we have to deal with security theatre, whether it is corporate, or pushed on us by our OS.

Software has to handle much more hostile environments.


> You’re thinking of a time when only “computer geeks” could install and run the software of their choice.

Perhaps not thinking of a "time when", perhaps thinking of an OS that required an IT certificate.

Because even early Macintosh System Software (classic Mac OS before OS X) was bewildering to Windows users, since any user could "drag" a program from a floppy disk onto their hard disk, leave it anywhere, and it worked.

Arguably, OS X introduced the Applications folder in 2001 partly to give people a place to drag things to and feel more comfortable that's where they could find their "Program Files" (but mostly so multi-user system users would get to add and re-use apps in a common clear place they "should" be).

But you still didn't need an IT admin.


We do it right now on Android and Ubuntu.

Installing software means clicking the button, waiting, clicking yes on some permission prompts, and logging in with Google.

The only thing about any other software I can think of, that would be inherently any harder, is if it involves domain names and certificates. Which is a problem we should be working to solve, and I'm still annoyed at Mozilla for cancelling FlyWeb.

Unless you're such a big company you can definitely afford IT staff, you're probably not doing anything that needs separate services or a database or anything beyond one executable with SQLite, same as consumer apps.
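For that "one executable with SQLite" shape, the self-setup the article asks for (check access, create tables, sensible logs) really is small. A hypothetical first-run sketch, with all names my own:

```python
import logging
import os
import sqlite3

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("firstrun")

def first_run_setup(data_dir: str) -> sqlite3.Connection:
    """Idempotent first-start setup: verify access, then create the schema."""
    if not os.access(data_dir, os.W_OK):
        raise PermissionError(f"need write access to {data_dir}")
    db_path = os.path.join(data_dir, "app.db")
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users ("
        "id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
    )
    conn.commit()
    log.info("database ready at %s", db_path)
    return conn
```

Safe to run on every start, which is exactly what makes it operable by someone who didn't write it.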


Windows 7 Enterprise?

Mac OS X Server?

The vast vast majority of SaaS products talked about on HN are probably not even 1/10th as complex.


"But we want it to run on our chosen database product."

"But we want it to integrate with our single-sign on product."

"Our security team scanned it with our chosen tools and you have to fix these things before we will deploy it."

"We aren't willing to make those network changes to allow it to run."

"We won't allow it to connect to our <foobar> server but it is a requirement to connect to our <foobar> server if it is going to be hosted internally."

This is the stuff that makes "enterprise" deployments difficult. Oh and they want you to hold their hand through it but they aren't willing to pay for consulting.


> Oh and they want you to hold their hand through it but they aren't willing to pay for consulting.

These sound like tire kickers and un-serious customers, why focus on their expectations?

Serious customers, almost by definition, are willing to pay for custom work they want done.


I'm speaking from my experience selling both software and services to the Fortune 500. If they can cut a corner, slow pay you, try to do it themselves without paying you, etc - they will. Billions in profits but they will refuse to pay a $10K invoice just to spite you.


And in my experience you have to provide a lot of this up front BEFORE the contract is signed otherwise they won't even evaluate whether they will purchase it.


Software had infinitely fewer demands on its usability and non-technicalness back then. Maybe it would be cheaper to have IT staff and have them maintain this, but I doubt it.


Let's not exaggerate here, the software industry managed to deliver software for on-premise use for several decades.


They used things like standardized packaging mechanisms, matrix testing across supported profiles, dedicated staging environments, and sensible logging schemes. These are expensive and time-consuming! Product velocity is greatly increased when less time is spent on such engineering cruft.


AND a several orders of magnitude larger investment in tech writers, and manual "How can my customer break this software" testing.

When I worked in shrink-wrapped software back in the dark ages, the documentation-writing team and a very extensive manual QA department were each the same size as the development department. Think people trying for DAYS to find out why, out of hundreds of thousands of active users, a few dozen reported being able to launch 2 instances of the main window when that should not be allowed. (Fix: a race condition in the "double click" handling code with a window of a few milliseconds.)


> standardized packaging mechanisms, matrix testing across supported profiles

While this is somewhat true, it's not unreasonable to keep the matrix very small to start with and gradually expand it. People think that matrix testing is some insurmountable task when it mainly comes down to some reasonable, thoughtful engineering. If for some reason you've got some part of the matrix that's particularly expensive (looking at you AIX) then charge a heavy premium for that platform.

> dedicated staging environments

Even if you're a SaaS company you should have dedicated staging environments. Without this you're using your customers as QA, and that's just rude.

> sensible logging schemes

Please have sensible logging schemes. Your on-call/ops folks deserve to be happy too.

Having lived on both sides of the on-prem / SaaS flow, the things that make an on-prem offering successful also make the SaaS offering easier to develop, maintain, and troubleshoot. On-prem can be more expensive but it also means bigger deals, more predictable income, and higher margins. Not every company can fully adopt the mechanisms to generate an on-prem offering but the purported costs of an on-prem offering frequently come down to "ship higher quality software" rather than "shipping somewhat sketchy code at a breakneck pace."


> Product velocity is greatly increased when less time is spent on such engineering cruft.

This attitude is why I get paged. Perhaps we shouldn’t be encouraging lightning-fast growth above all else.


This attitude is why the IT field is never going away. Businesses need adults and professionals ensuring reliable infrastructure, not "product velocity".


This is very circular -- yes, that's true.. and because it was so difficult everyone, vendor and customer alike, moved towards SaaS hosted offerings.

There has to be a name for this argument-type: A leads to B, and those who seem to know only B start arguing for A because they don't remember why A lead to B in the first place. I see this all over the place.


I don't know that the shift was about difficulty, at least not primarily. As best I can tell it's more about profit, and companies usually move to SaaS because it makes them more money and the other pluses and minuses are less important to them.

I know as a consumer I'm much more reluctant to buy a new subscription product than I am to make a single purchase, but the options get less and less every year.


This is exactly it. On-prem deals were generally large perpetual license fees based on some quantity of software installed, and then a % of that as an annual "maintenance" fee which would provide access to support and software updates.

The problem was that the maintenance revenue was valued at a much higher multiple by investors than their Perpetual Licensing revenue because it was recurring. Cue the thought... "What if the whole thing was recurring?"..."Hmm how could we justify that?"..."We make it a SERVICE that WE host!"... and then here we are now where if you want to actually own software, chances are you're out of luck.


It can also be how difficult it is to support on-premise software installations especially when customers use leverage to keep from updating to recent releases that had bug and security fixes, etc…

I ran a SaaS and PaaS for fintechs and we had annual events where we had to hard negotiate with customers who didn’t want to incur the employee training burden for upgrading to newer releases with new features etc. Instead of accepting the rolling releases they would do one or two updates/year and we would stand up monolithic support programs to accommodate them. Moving away from this customer model was about headaches and release/support debt.

There are a lot of challenges with complex on-prem solutions that hosting-for-customers can alleviate. Staffing for every customer hosting variation gets unduly complicated, and then impossible, very quickly.


I think it's also about changing expectations and environment. Big customers expect timely updates for the purposes of security, keeping third party integrations working, and bugfixing. They also expect high uptime and prompt outage mitigation. That's really hard to square with on-premise offerings.

Don't get me wrong, it's also hard, but providing on-premise software is less difficult than it used to be in absolute terms. The main things that changed are SaaS driving expectations up, and needing to upgrade faster.


>> because it was so difficult everyone, vendor and customer alike, moved towards SaaS hosted offerings

I actually remember why one company I worked for moved from on-premise to SaaS. They told us that the market valued recurring software revenue at a higher multiple than it did one-time software revenue.

On-prem was a large one-time sale and a small recurring support contract. SaaS was almost all recurring revenue.

On the customer side, it is capital expenses vs. operating expenses.


And some companies continue to do so! It's not a lost technology. But the way it typically works is that big contracts come with a ops engineer or two to live with the customer and help them set it up.

At some companies, this is clearly demarcated from the SWE role - at others, stereotypically the "IBM mainframes, SAP and Oracle DBs" category the OP mentions, they kinda shade into each other. Which is, of course, why those technologies have such negative reputations. (There's also the Palantir route, where you have a clear demarcation but lie about it to new grads who don't fully understand the conflict between "forward deployed" and "software engineer".)


I really disagree. On-Premise software is frequently bought by large customers with significant resources - including a competent ops team. Yes, there may be some initial handholding, but once things are up and running, there is little support necessary.

Basically, if you want to build an on-premise offering, look at the way Nginx, Postgres, Redis and similar projects are packaged and do the same.


> Basically, if you want to build an on-premise offering, look at the way Nginx, Postgres, Redis and similar projects are packaged and do the same.

Nope. You should look at how an app that uses all of those is published. That's what on-premise software truly is: a combination of dependencies and your company's secret sauce.

Your own app code is just one cog in that engine.


This highly depends on the industry… I worked on projects for large F500 companies who had neither ops nor were willing to spend resources. As I mentioned in another thread, on-prem can be a world of pain… but it can also be a lot of fun if you have customers like you mentioned.


> On-Premise software is frequently bought by large customers with significant resources - including a competent ops team.

These are orthogonal. Resourcing is no guarantee of ops competency, or a culture where ops is enabled to deliver, hence the concept of shadow IT. On-prem software is bought by orgs with low risk appetite or other compliance objectives that skew procurement towards running on prem. I have seen exceptional ops teams on a shoestring ramen budget, and I have seen dumpster fire ops teams in companies with thousands of workers and billions of dollars a year in revenue who could not get you a single virtual machine in less than 90 days even if preventing the end of the world depended on it.


Over 40% of the Web is deployed this way. WordPress installation is basically "upload some files, complete the installation GUI, and get started."

There's no reason other products couldn't do this as well, even those with a complicated stack that requires provisioning cloud resources (or equivalently, SSH'ing into servers on the local network or calling private cloud API endpoints). If you can deploy a commit from a pipeline, then you can make a self-hosted bootstrappable installer for your users.

The installer could be a tiny shim running on the user's laptop that's responsible for bootstrapping the rest of the infrastructure and then deleting itself after redirecting the user to the cluster admin panel. The trick is to make it smooth, so even non-technical users can provision and eventually administer a secure, self-updating cluster as easily as they can keep their iPhone up to date. Walk them through the process of getting AWS credentials. Heck, scrape and script the login flows if that makes it smoother (it's their computer running the code so who cares?). Ask the user the minimal questions needed to fill the config. Once all that's in place, trigger your deployment scripts on the cluster, to pull images from a container registry or whatever. Wait for it to be up and running.

When the cluster's up, redirect them to its admin panel that's more sophisticated and controlled by you. Let them check for updates, install plugins, run common operations (like "restart," etc).

Then just don't ship broken code, make sure your scaling policies work (or at least that they work well enough to avoid phone calls from customers that don't make you any money). And boom you've got a nice product.

I'm not saying it's easy. But if you already have automated deployments (you do have automated deployments, right?), then it's not that many extra steps. Productizing it is hard but can be done.
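The shim described above, boiled down to its skeleton (every name here is hypothetical; the real deploy step would be your existing pipeline, e.g. terraform or helm, passed in by the caller):

```python
import json

def gather_config(answers: dict) -> dict:
    """Ask only the minimal questions; everything else gets a default."""
    defaults = {"region": "us-east-1", "instance_count": 1}
    missing = [k for k in ("aws_access_key", "domain") if k not in answers]
    if missing:
        raise ValueError(f"missing required answers: {missing}")
    return {**defaults, **answers}

def write_config(config: dict, path: str) -> None:
    with open(path, "w") as f:
        json.dump(config, f, indent=2)

def bootstrap(answers: dict, path: str, deploy) -> None:
    """Write config, trigger the real deployment, then get out of the way."""
    config = gather_config(answers)
    write_config(config, path)
    deploy(config)  # the existing automated deployment, supplied by the caller
```

The shim's only job is collecting answers and handing off; the hard part stays in the deployment automation you already have.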


> On prem means that someone is administering your application that does not understand it. Which means YOU are administering it but via slacking with your customers infra engineers rather than through your own control plane.

I got a first-hand view into how this model falls apart - and it wasn't the fault of the customer's infra engineers; it was entirely our fault and we walked away from piles of money because we couldn't deal with the issue.

Here's my rather salty take on the issue.

The killer issue with SaaS software that attempts to migrate to on-prem offerings is that SaaS companies tend to build up cultures that are locked into the "move fast and break things" model. Simply creating an on-prem offering from a SaaS product does not generate the cultural shift needed to successfully ship on-prem software.

If it's extremely easy to ship then the development cycle can be fast and sloppy. You can ship buggy software, bypass meaningful testing, lean on your ops team/on call rotation to limp the software along, skip documentation and use humans to glue the thing together. SaaS is popular because of the accelerated iteration cycle because shipping a bug is much less difficult.

Things fall apart extremely fast when moving from a SaaS offering to an on-prem offering because _all of the developers are still conditioned for the fast iteration cycle_. Developers are still going to be targeting SaaS first because they can interact with it, debug it, and evaluate telemetry from that running system. They won't be developing in the model where they're truly freezing features, polishing implementations, and shaking bugs out of the system. This ultimately means that regular releases are more of snapshots of main rather than something that can be shipped to a customer and run unattended.

In addition, having an ops team or on-call rotation for a SaaS offering means that troubleshooting/operational guides can be largely informal or loosely written runbooks. For an on-call/ops person, given the option between scratching together a loose note or using Slack for documentation, and taking the time to write well-polished documentation that can be consumed by outsiders, operational velocity is going to drive people to follow the easier, less labor-intensive path. The end result is that on-prem offerings are not going to have a lot of critical operational documentation.

I was on the team responsible for building, shipping, and supporting an on-prem version of our software; it was one of the most grindingly miserable work experiences I've had to date. In addition to dealing with the on-prem version my team was also on the on-call rotation and I can say from first hand experience that the SaaS model was only held together with a great deal of firefighting; shipping an on-prem version exposed every single weakness we had on the SaaS version in painful detail.

The project ultimately failed; millions of dollars of deals were left on the table and my entire team was laid off because the company was not capable of shipping a reliable, documented product that could be operated without constant babysitting. The product itself had a great number of incredible features, and if people had slowed down and focused on improving quality rather than sheer velocity we'd be drowning in enterprise contracts.

> Which means YOU are administering it but via slacking with your customers infra engineers rather than through your own control plane.

If your product requires that you're remotely administering your software because your customers can't operate it successfully, then please invest in documentation, validation and testing, and troubleshooting. Your customers will need it as much as your on-call rotation.


On prem can mean self manageable to a small or large degree.

You can also sell a managed on-prem setup where you set it up and maintain it in the client environment.

It's not a huge pain in the ass as long as the software is reasonably designed to be multi-tenant as well as multi-server. For serverless architectures, create an internal service provider to route to your serverless (or theirs).

A lot of on-prem can be made easier by being Azure-friendly, which in turn offers some paths that make going on-prem easier.
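To make the multi-tenancy point above concrete, here's a minimal, hypothetical sketch of tenant-aware routing; all names, tenants, and addresses are made up for illustration. The idea is that the same code path serves SaaS when you run the backend pools, and on-prem when the customer points the map at their own servers.

```python
# Hypothetical sketch of tenant-aware routing in a multi-tenant,
# multi-server design. TENANT_BACKENDS and the function names are
# illustrative, not taken from any real product.

TENANT_BACKENDS = {
    "acme": ["10.0.1.10:8080", "10.0.1.11:8080"],  # tenant with a two-server pool
    "globex": ["10.0.2.10:8080"],                  # single-server tenant
}

def tenant_from_host(host: str) -> str:
    """Extract the tenant slug from a host like 'acme.example.com'."""
    return host.split(".", 1)[0]

def resolve_backend(host: str, request_id: int) -> str:
    """Pick a backend for this tenant, round-robin by request id."""
    pool = TENANT_BACKENDS[tenant_from_host(host)]
    return pool[request_id % len(pool)]
```

For an on-prem install, only the backend map changes; the routing logic stays identical, which is what makes the design "not a huge pain" to ship both ways.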


Sounds like a great business model if you're able to pull off the reduction in support load the article also mentions.


The problem with on-prem is that you need your customer to be operationally competent, and that is not the core competency of most organizations (and in most cases it shouldn't be).

SaaS is just vendor specialization in operational software delivery, freeing end-user organizations from having to maintain that competency in house. Properly supporting, securing, auditing, certifying, and updating software at scale is hard and complex: an expertise that is, as you say, a huge pain in the ass.

The main driver, at scale, of abandoning this efficiency (excluding enterprise, security-sensitive, and real-time apps that really should be on-prem) is either excessive profit-taking by SaaS vendors, with the overall cost becoming far more expensive than in-house expertise, or end customers considerably underestimating how hard developing and supporting a competent DC/NOC operation can be.

I’ve seen both.

SaaS vendors move to value-added pricing and frequently price themselves out of contention at nosebleed levels, because the most profitable top-of-market customers are who the category market leaders build their valuations and VC-pleasing returns on. So there is massive pressure to push prices up and disqualify less profitable customers, leading to surging prices for market-leading solutions as SaaS companies mature.

And then there is the “don’t stack the coffee machine on the network rack” crowd, which can get pretty sophisticated. I’ve seen some impressive network operations inside nondescript office buildings even recently. But everyone is confident until it’s time for SOC 2 certification and they realize their IT staff doesn’t even know what they should be audited on; or they suffer their first successful intrusion that gets the attention of the CEO, and their emergency PR group realizes they have nobody to fire but their own VP; or somebody finds the unsecured Postgres database with nmap and tips off a journalist friend who writes an article about it… suddenly it’s back to AWS or vendor soup.

And beyond those two, there is also an optics problem.

IME, in a lot of big companies you sell an on-prem solution and install it, only to have a management change three years later whose entire board pitch is about outsourcing to the cloud for efficiency and moving off “legacy” systems. Then you realize you screwed up, because on-prem doesn’t have the right optics for the ocean of management consultants who see your category as an easy win. The exception is when you’ve got hardcore integration with the mainframe in the basement that runs the bean-counter machines, which nobody wants to touch because the integration runs over an RJ-11 jack and serial-port adapters from 1993, supported only by Susan or Steve, half retired but keeping the mainframe up because they’re the last people in the county with the expertise.

It’s hard to counter these trends with on-prem. Regime changes are killers even when the on-prem solution is justified, or even a no-brainer; it’s tough to pitch and support on-prem for a lot of use cases and hard to keep broad support.

I remember when Supermicro would drop off entire rack assemblies, preconfigured with software, burned in, and ready to go, and they couldn’t keep it going because the cloud and SaaS trend was just a deal killer. Maybe that’s changed, but I don’t see much evidence beyond organizations at scale looking to recapture the millions AWS takes in profit. Maybe companies like Backblaze and Dropbox are exceptions, but they are among the companies that should run DC ops, because it’s core to their business; it’s what they do. That’s not true of most companies.

Just my 2c.


What a delightful shit show. I don't even personally care whether Sam Altman is running OpenAI, but it brings me no end of schadenfreude to see a bunch of AI doomers make asses of themselves. Effective Altruism truly believes that AI could destroy all human life on the planet, which is a preposterous belief. There are so many better things to worry about, many of which are happening right now! These people are not serious and should not hold serious positions of power. It's not hard to see the real dangers of AI: replacing a lot of the make-work that exists in the world, giving shoddy answers with high confidence, taking humans out of the loop of responsible decision making. But I cannot believe it will become so smart that it becomes an all-powerful god. These people worship intelligence (hence their belief that infinite intelligence brings infinite power), but look what happens when they actually have power! Ridiculous.


I'm so unsurprised that he's part of Alcor. The comments on HN lately do make me hopeful that this quote: "He is the techiest of tech evangelists, the purest distillation of Silicon Valley’s reigning ethos." is starting to be about the past and not the future.

