That might be the goal. If you are storing private data for your users that should not be queried, you can have separate encryption keys for each user and keep their data in separate databases.
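To sketch what I mean (Python, using the cryptography package's Fernet cipher; the per-user file layout and key handling here are only illustrative, and a real key belongs in a KMS, not next to the data):

```python
import sqlite3
from cryptography.fernet import Fernet

def create_user_key() -> bytes:
    # Generate a fresh key for a new user; store it in a KMS/HSM,
    # not alongside the data it protects.
    return Fernet.generate_key()

def store_note(user_id: str, key: bytes, note: str) -> None:
    # Each user gets their own database file, so there is no single
    # table that can be queried across all users.
    conn = sqlite3.connect(f"user_{user_id}.db")
    conn.execute("CREATE TABLE IF NOT EXISTS notes (ciphertext BLOB)")
    token = Fernet(key).encrypt(note.encode("utf-8"))
    conn.execute("INSERT INTO notes VALUES (?)", (token,))
    conn.commit()
    conn.close()

def read_notes(user_id: str, key: bytes) -> list[str]:
    conn = sqlite3.connect(f"user_{user_id}.db")
    rows = conn.execute("SELECT ciphertext FROM notes").fetchall()
    conn.close()
    return [Fernet(key).decrypt(row[0]).decode("utf-8") for row in rows]
```

Delete the user's key and their data is effectively gone, even from backups.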
We did pretty much that ages ago when I built a webmail provider.
Marketing hated it, because collecting user data took conscious effort, which meant they had to ask the dev team, which meant requests were often turned down when hard questions were asked about whether collating personal information was justified.
That was a feature, from my perspective.
We actually took most user demographics data entirely offline. Data that was only intended to create aggregate, anonymised profiles of our users was kept encrypted in a bank box when not being analysed. Any analysis of it was done on an airgapped machine, and only the anonymised reports were taken out.
The only central database that was online was one that kept a mapping of whether or not a given user name was available.
Then each storage shard kept track of which users were on which backend, and the account data that had to be online for each user (primarily their actual mail, since we were a mail provider, along with settings etc. for their mailbox) was kept on a per-user basis.
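Roughly, the split looked something like this. This is only a rough sketch of the idea, with SQLite standing in for the central registry and one shard, and made-up table names; it is not the actual system:

```python
import sqlite3

# The only central, online table: is a username taken at all?
central = sqlite3.connect("central.db")
central.execute("CREATE TABLE IF NOT EXISTS usernames (name TEXT PRIMARY KEY)")

# One of many per-user backends; everything about the account lives here.
shard = sqlite3.connect("shard_07.db")
shard.execute("CREATE TABLE IF NOT EXISTS accounts (name TEXT PRIMARY KEY, settings TEXT)")
shard.execute("CREATE TABLE IF NOT EXISTS mail (name TEXT, message BLOB)")

def username_available(name: str) -> bool:
    row = central.execute("SELECT 1 FROM usernames WHERE name = ?", (name,)).fetchone()
    return row is None

def claim_username(name: str) -> None:
    # The central database only learns that the name is taken; it holds
    # no mail, no settings, and no pointer to the user's personal data.
    central.execute("INSERT INTO usernames (name) VALUES (?)", (name,))
    central.commit()
    shard.execute("INSERT INTO accounts (name, settings) VALUES (?, '{}')", (name,))
    shard.commit()
```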
It worked well, but you need buy-in from the top for this approach, as there will be constant pressure to ease access to more and more user data.
I think this is an excellent architecture for powerful, respectful, hosted applications. I’ve been thinking about a few extensions of this idea:
First, use advances in privacy technology to create a service-wide data warehouse that has enough information to help you make good decisions without exposing any specific user’s data. Done properly, users will benefit from your improved decision-making without giving up their personal data. Differential Privacy can do this.
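The classic building block here is the noisy count: answer aggregate questions with Laplace noise scaled to the query's sensitivity and the privacy budget. A toy sketch (the epsilon value and the query are illustrative):

```python
import numpy as np

def dp_count(true_count: int, epsilon: float = 0.5) -> float:
    # A count query has sensitivity 1 (one user changes it by at most 1),
    # so Laplace noise with scale 1/epsilon gives epsilon-DP for this query.
    scale = 1.0 / epsilon
    return true_count + np.random.laplace(loc=0.0, scale=scale)

# e.g. report dp_count(users_with_feature_x) instead of the exact number.
```

Repeated queries consume budget, so a real system needs to track how much epsilon it has spent in total.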
Second, give users the opportunity to download their own little database in a native format (e.g. SQLite). This is the ultimate in data portability. I think Dolt [0] might be good for this, because its git-like approach gives you push/pull syncing as well as diffing. That would make it easy for users to keep a local copy of the data up to date.
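If each user's data already lives in its own SQLite file, the export is nearly free. A sketch using Python's built-in online-backup API (the file names are made up):

```python
import sqlite3

def export_user_db(user_id: str) -> str:
    # Copy the live per-user database into a standalone file the user
    # can download and open with any SQLite tool.
    src = sqlite3.connect(f"user_{user_id}.db")
    out_path = f"export_{user_id}.sqlite"
    dst = sqlite3.connect(out_path)
    src.backup(dst)          # sqlite3's online backup API (Python 3.7+)
    dst.close()
    src.close()
    return out_path
```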
Third, you can start to support self-hosting and perhaps even open-source the primary user-facing application. The hosted service sells convenience and features enabled by the privacy-respecting data warehouse.
The big questions, of course, are many:
- Would users pay for this?
- Does increased development cost and reduced velocity outweigh the privacy benefits?
- Would the open-source component enable clones that undermine your business, or attract new users who may eventually upgrade to your paid service?
One of the interesting side effects, to me, with respect to what you mention, is that designing things this way prevents you from accidentally building solutions that are hard to self-host. The boundary between "per-user" or "per-tenant" and "site-wide" becomes very sharp, because it becomes a choice of where the data is stored, so it's always obvious when you're stepping across that boundary.
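In practice the boundary tends to collapse into two separate connection helpers, so crossing it is always a visible, deliberate act. A sketch with hypothetical names:

```python
import sqlite3

def site_db() -> sqlite3.Connection:
    # Shared, service-wide data only (e.g. the username registry).
    return sqlite3.connect("site.db")

def user_db(user_id: str) -> sqlite3.Connection:
    # Everything belonging to one user/tenant, which is also exactly
    # the piece a self-hoster would run on their own machine.
    return sqlite3.connect(f"user_{user_id}.db")
```

Any code that reaches for site_db() from a user-facing path stands out immediately in review.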
Re #3: At my former B2B SaaS, each customer had their own MySQL schema. We allowed users to perform a full mysqldump of their schema as a form of backup. We found that, for us, the database schema alone wasn’t enough for anyone to straight up copy our product. The magic was in the business logic code which was closed-source.
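The backup itself can be as simple as wrapping mysqldump per schema; something along these lines (the schema naming and credential handling are illustrative):

```python
import subprocess

def dump_tenant(schema: str, out_path: str) -> None:
    # Stream a single tenant's schema to a .sql file; credentials are
    # assumed to come from an option file (~/.my.cnf).
    with open(out_path, "wb") as out:
        subprocess.run(
            ["mysqldump", "--single-transaction", "--routines", schema],
            stdout=out,
            check=True,
        )

# dump_tenant("tenant_acme", "acme-backup.sql")
```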
Offering to give users a $50 Amazon gift certificate in exchange for a 45 minute call works pretty well for us.
Since we know that 3 out of 10 users who receive this offer will accept it, we segment users by industry, feature usage, etc. and send them offers to talk so we can listen to their stories and feedback.
This is why when investors replace founders the companies stop innovating. Obvious ideas don’t have any value. You need deep domain knowledge to see how things really are, and not how things seem to be.
What he means: If your product (first version) is Great (at a few important things), it doesn't need to be Good (at everything)
I think the unstated “first version” part is important. You should be gradually adding the missing parts, not leaving the product in that great, simple, but missing-too-many-things state forever.
True but I think another important factor is that by avoiding the missing parts, you might realise they're not really missing after all. What once seemed essential actually isn't.
Facebook/Twitter is terrible for putting down ideas. You don’t control what happens to your content. Posts get lost in the sea of millions of other posts. These companies constantly change policies.
Writing is a public act. You might still expect some readership even without any expectation of ad money or of promoting something.
What I miss most about blogs in the '90s and early '00s is the conversation between blogs: linking, quoting and commenting on each other. Reading blogs involved going from one blog to another, reading what 10 people had to say about a single issue. It was a conversation. And you would constantly discover new blogs this way.
In the US, perhaps. If you're going after international customers — the topic which the original article complains about — you'll find that this preference isn't nearly as strong as you think.
In the UK I disagree. I personally always prefer to pay by debit/credit card. The charges are instant (and will be displayed instantly if you use a modern bank), you get the ability to make chargebacks (or the UK's law-mandated protection for credit card customers), etc. Direct Debits on the other hand take 3 days to set up before the money actually leaves the account.
Internationally, the local card details often don't match US "debit cards" enough to actually be called by that term; I've taken the habit of translating to "bank card". I've had non-US bank cards with no credit with security & user terms better than US credit cards.
The problem is that in mature companies there is often no next version where you need pioneers. The next version is just an incremental change to the previous version. If the business is very management or sales driven then as engineer you are told exactly what to do in the shortest time. No room for or interest in pioneering work.
A corollary to this is when a business is so mature that it decides to no longer take risks (or higher risk) by designing a new product, going after a new market, or going for the next disruptive change that is valid. Instead it goes after reliability, exploiting a current product, resource or asset and optimizing the product or process to extract yet more value. The latter is always viewed as safer and more predictable than the former. The key is to recognize that that is what is happening and accept that, in the long run, the latter will not produce sustainable growth.
Even worse, sometimes it turns out that the business assumptions baked into the first version were completely wrong, and now have to be undone. The original pioneers can be the worst suited people for that work, if they are emotionally invested in the old approach.
I disagree - a bit. There is often a blue ocean that can be created in the industry or around it, and usually pioneers are part of a small group to see it first.
That's a nice goal, but not realistic, especially for a lot of B2B products. They're just too complicated to understand all the functionality right off the bat.
A good way to measure outcome is to watch how many people are using a new feature (or a new version of an existing feature) successfully and how often they use it. The key word here is “successfully”. Don’t just measure how many people tried to use a feature. Define and measure success with a feature.
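Concretely, that means logging an explicit "completed" event, not just an "opened" one, and reporting the ratio of users who got there. A sketch with made-up event names:

```python
def feature_success(events: list[dict]) -> tuple[int, int]:
    # The event names are made up; the point is that "completed" has to
    # be an explicitly defined and logged outcome, not just "opened".
    tried, succeeded = set(), set()
    for e in events:
        if e["name"] == "feature_x_started":
            tried.add(e["user_id"])
        elif e["name"] == "feature_x_completed":
            succeeded.add(e["user_id"])
    return len(succeeded), len(tried)

# done, attempted = feature_success(event_log)
# success_rate = done / attempted if attempted else 0.0
```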