
Local-first software (2019) - kkm
https://www.inkandswitch.com/local-first.html
======
pvh
Hello, we've published some more work in this space -- no HTML version yet,
but here's a more recent paper on our PushPin project.

[https://martin.kleppmann.com/papers/pushpin-papoc20.pdf](https://martin.kleppmann.com/papers/pushpin-papoc20.pdf)

You can also try PushPin for yourself:
[https://github.com/automerge/pushpin/](https://github.com/automerge/pushpin/)

Likely outdated binaries are available here:
[https://automerge.github.io/pushpin/](https://automerge.github.io/pushpin/)

Also, we're currently in the midst of a new project called Cambria, exploring
some of the consequences and data demands of a new Cambrian era of software:
[https://inkandswitch.github.io/cambria/](https://inkandswitch.github.io/cambria/)

~~~
codetrotter
This Cambria project looks pretty interesting. Is it an internal project only?
One of the blog posts gives an example of running it, but I couldn’t find any
links to source nor binaries, so I am assuming it is currently internal only.

------
AriaMinaei
> ... cloud apps depend on the service continuing to be available: if the
> service is unavailable, you cannot use the software, and you can no longer
> access your data created with that software. This means you are betting that
> the creators of the software will continue supporting it for a long time —
> at least as long as you care about the data. Although there does not seem to
> be a great danger of Google shutting down Google Docs anytime soon, popular
> products (e.g. Google Reader) do sometimes get shut down or lose data, so we
> know to be careful.

Plus, the value of the cloud app is not just your data, but the network
effects. Like, if you've emailed links to a GDocs document, and 5 years later
you decide to move to another service, those GDocs links will 404, regardless
of whether you've transferred all of your data to the new service.

With local-first apps, the URL starts with _you_, not _some-saas-provider.com_.

~~~
elmo2you
Many far smarter people have said this before, far more eloquently than I can,
but in short:

Cloud computing (and SaaS even more so) is little more than another attempt to
recreate access/information monopolies, essentially the same profit proposition
as existed with closed-source software, while pretending to be one of the cool
kids and using politically more acceptable (but in this context rather
meaningless) terms like Open Source and Open Standards. It may be a different
generation of companies, with slightly different cultures, but they are all
just as predatory in nature as the old ones.

It's going to be a rude awakening when some of the bigger service providers
eventually fall over (which they will). Of course, when that happens, everyone
will blame anything and everything but their own willful ignorance.

~~~
scarface74
What do you propose? That every company reinvent the wheel or host everything
locally even if it’s not their core competency? Every company has to decide
what its “unfair advantage” is and concentrate on that.

~~~
zozbot234
What's wrong with hosting software locally and buying 3rd-party support for
the stuff that's "not core competency"? It eliminates a single point of
failure.

~~~
scarface74
>What’s wrong with hosting software _locally_....it eliminates a _single point
of failure_.

I’ll let that just sit there.

But you are going to host your own project management software? Your own
expense reporting software? Your own email server? Your own payroll
processing? Salesforce equivalent? Your own git server? Dropbox equivalent?

~~~
encom
Why not?

~~~
gav
Short answer: generally it costs more money for worse results.

Longer answer: doing things in-house that are outside your core competencies
and/or value creation model is a poor use of scarce resources (both capital
and human—predominantly management bandwidth) and increases risk carried.

To give a concrete example: imagine you need to host your source code
repository. You can pay for something like Bitbucket for $6/month/user and not
have to worry about it. It’s a price that scales linearly with your team size
and is a tiny fraction of your total costs.

Doing it in-house: you have to pay for hardware, storage, worry about backups,
have somebody support it, have somebody manage the person that supports it,
deal with users, find a solution to remote access, and so on. But all these
miss the big cost—risk—what happens if the server dies or your office burns
down? Nobody used to get fired for buying IBM, nobody now gets fired for
buying a popular SaaS product.

You aren’t Google; at some point scale changes the equation, but that’s a rare
spot to be in.

~~~
toast0
At my last job, we hosted our own git repo with gitolite. That doesn't give
you a ticketing system, of course, but it was fairly easy to administer. And
it sat on a miscellaneous-jobs server we already needed, and didn't use up too
many resources.

Remote access followed production norms, so no extra work there (other than a
lot more people need access to git than other production servers). Maybe a few
hours one time to lock down permissions for git etc, probably less fuss than
getting SSO setup for a git SaaS.
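
For anyone curious what the admin side of that looks like: gitolite's access
control lives in a single text file (conf/gitolite.conf) inside the special
gitolite-admin repo, and pushing a change to that repo applies it. A minimal
sketch, with made-up user and repo names:

```
# conf/gitolite.conf in the gitolite-admin repo
@devs = alice bob

repo myproject
    RW+ = @devs        # devs can push, rewind, and create branches
    R   = deploy-bot   # read-only key for deployment
```

Adding a user is just committing their SSH public key and referencing them
here; there is no database or web UI to run.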

~~~
scarface74
Now repeat that for your mail server, your expense tracking system, your HR
system, your project manager system, various collaboration software....

And what happens when that one server goes down or becomes overloaded?

~~~
toast0
Some of these are better to outsource than others. Mail probably needs more
active upkeep, but you could more easily block links to reduce spear phishing
if you were running it yourself. Expense tracking needs detailed knowledge of
accounting to ensure only proper expenses are approved (possibly by an outside
consultant); I don't even know if there are low-cost software options here. HR
systems (including payroll?) are also specialized and high-effort, and not a
core competency.

I've got opinions on collaboration software (why not put text files in git),
but ignoring that, I don't really want to run a wiki, so sure, maybe your
email provider offers something anyway.

> And what happens when that one server goes down or becomes overloaded?

You fix it? Same as when production breaks: hopefully you have people who can
fix production, and hopefully you monitor your important tools. What happens
when it gets overloaded and it's outsourced? You hope your provider fixes it,
and you call and yell at them.

~~~
scarface74
> You fix it? Same as when production breaks: hopefully you have people who
> can fix production, and hopefully you monitor your important tools. What
> happens when it gets overloaded and it's outsourced? You hope your provider
> fixes it, and you call and yell at them.

Well, in my case, I just click on “minimum” and increase it by 1 in my
autoscaling group, or I scale vertically and change the server from 2xlarge to
4xlarge.

Can you get another server shipped to your colo in 5 minutes, brought online,
and then roll over and go back to sleep, like I did at my last job? Most of
the time, by the time the notification alarm woke me up, scaling had already
taken place.

But, depending on the service, part of “monitoring” is bringing up another
server automatically when CPU usage spikes.

But I think my current employer knows just a little about managing
servers/services at scale. As far as I know, my current employer manages more
servers than anyone else on the planet and has more experts on staff than
anyone.

Even that being said, _we_ outsource a lot of the services that aren’t in our
core competencies to other companies.

I couldn’t say that about my previous employer who had 50 people in all.

------
oblib
This is the 3rd time in a week a "Local-First software" overview has been
submitted and the 2nd time it's made the front page here. I'm pretty surprised
about that because I'm about to release a local-first, offline-first, option
for an app I make.

This article also quickly moves past "local-first" software to conflict
resolution, which, in my opinion, is a distinctly different issue. It's
certainly not reason enough to hold off offering users a local-first option.

At this point I believe that since it can be done it should be done. I'll even
go so far as to say it's a necessity. At some point users will understand it's
a necessity and demand it. All that really needs to happen to convince them is
one big incident where they lose access to their data for an extended period
of time, or worse yet, lose all their data forever, and it won't matter why or
how.

Aside from that, as more app makers start offering local-first options and
users begin to see the benefits of that they will begin to demand it. That
could take some time, but I expect it's inevitable.

There are other benefits to a local-first approach for developers. Take a
"Contacts" app, for example. If we had a standard for saving contacts data on
the client side that any app could access, it would give users and developers
options to create and use new apps and features that all use the same data.
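
As a sketch of what that could look like, assuming a hypothetical agreed-upon
JSON file that any local app reads and writes (the path and schema here are
invented for illustration):

```python
import json
from pathlib import Path

# Hypothetical shared location; any app on the machine reads and writes
# the same file. Path and schema are made up for this example.
CONTACTS_PATH = Path.home() / ".local" / "share" / "contacts.json"

def load_contacts(path=CONTACTS_PATH):
    """Return the shared contact list, or an empty list if none exists yet."""
    if path.exists():
        return json.loads(path.read_text())
    return []

def add_contact(name, email, path=CONTACTS_PATH):
    """Append a contact and write the file back for other apps to pick up."""
    contacts = load_contacts(path)
    contacts.append({"name": name, "email": email})
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(contacts, indent=2))
    return contacts
```

Any other app that knows the convention gets the same data for free; the
interesting (and unsolved) part is getting everyone to agree on the schema.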

CouchDB & PouchDB.js provide a pretty solid and easy way to do this right now.
Installed on the user's desktop PC, CouchDB provides the missing link to a
robust client-side web app runtime environment.

There may be other ways of achieving this right now, but I am not aware of
them.

~~~
rzzzt
Unhosted.org [1] mentions the remoteStorage protocol [2] as one of the options
for storing user data separate from an application.

[1] [https://unhosted.org/adventures/7/Adding-remote-storage-to-u...](https://unhosted.org/adventures/7/Adding-remote-storage-to-unhosted-web-apps.html)

[2] [https://remotestorage.io/](https://remotestorage.io/)

~~~
cxr
remoteStorage is pretty cool, and it's a shame that neither it nor something
like it has really taken off yet. The spec has some rough edges, though—in
particular the protocol requires a smart server to handle the network
requests, when it _should_ be fairly straightforward to define a "static"
profile that can turn most commodity dumb hosts (Neocities, GitHub Pages,
etc.) into a user store. I'm convinced that this seemingly minor design tweak
would give remoteStorage a new life and cause it to spread like wildfire.

The spec gets periodically refreshed/resubmitted. It last happened a couple of
months ago and is set to expire at the end of the year.

------
mkoubaa
I think there's probably a market for a personal cloud, which probably sounds
dumber than personal computer did in the 80s. What I mean by that is a
computer somewhere in the garage, like a furnace, with enough compute and
storage to drive all the devices and appliances in a house. In this model,
devices do not have CPUs or memory, only input/output and a network chip.

The way this would work is for the computer in the garage to have the ability
to divide itself arbitrarily into VMs for each purpose, with an ecosystem of
images designed for things like fridges and gaming consoles. It should be
possible to add or upgrade compute to the device in a hot swapped fashion, and
because it doesn't have to be in a thin tablet, it could be easily cooled.

~~~
TeMPOraL
I wonder if there would be a market for _community clouds_, or _neighborhood
computes_? Imagine that a new apartment building comes bundled with a server
room in the basement. Every dweller gets compute/storage there. This could
serve as an edge cache for services like Netflix/YouTube, as well as for the
ecosystem you describe.

I once imagined that homomorphic encryption would allow people to store data
in their personal/neighborhood clouds and have third party SaaS code operate
on that data locally. But I've recently been made to understand that
homomorphic encryption would also allow companies to fully close off any
access to data beyond what a program/service wants to give out, and
unfortunately I get the feeling that the market will prefer the latter over
the former.

~~~
mkoubaa
Could be. You could also implement it for smaller businesses. I think another
possibility is to sell excess compute back to some decentralized cloud, the
same way you could sell excess solar power back to the grid.

------
eternalban
I skimmed the desired qualities, the review of current tools, and finally the
software centric approach to achieving the stated goals.

While we can reasonably expect software elements in any proposed solution, the
hardware and physical elements of distributed computing may provide a far
simpler pathway and likely will permit much greater reuse of existing proven
software approaches.

For example, all future multi-unit residences could come with a 'data center'
along with the boiler, or possibly the individual units will host this
equipment along with their air-conditioning units. All your cloud apps can now
point to this cloud. I don't see any fundamental reason why the 'data center'
cannot become a modular utility unit, coming in domicile, commercial, and
industrial-grade flavors.

In my view, the pure-software approach to the 'modern information society' has
implicit political dimensions. One of these is the concentrated private
ownership and control over physical resources which are now a required
substrate of modern society. I, for one, am not ready to accept that as
'acceptable'.

~~~
sukilot
What's the point of putting an amateur-run data center in every apartment
building instead of using a proper one in town? Or instead of just putting it
in my unit (with off-site backup of course), since a personal data center is
just a single computer?

My landlord can barely run the water and A/C; no way they can run IT.

The improved quality and reliability is worth it for the trivial latency cost.

~~~
eternalban
You are assuming the only possible solutions require user maintenance.

However, your implicit point regarding income level and the range in quality
of building management is valid, and successful products in this space would
address it.

------
asiachick
I'll take cloud-first, thanks. No software to update, nothing to back up. I
just open a computer/phone/tablet and access my stuff.

I get the tradeoffs. I'm not going back 10+ years.

~~~
robenkleene
Out of curiosity, how do you think about the prospect of being arbitrarily
locked out of your account if you’re mainly using cloud-first software?

(E.g., you can find example horror stories here on Hacker News:
[https://www.google.com/search?q=locked+out+of+gsuite+site%3A...](https://www.google.com/search?q=locked+out+of+gsuite+site%3Anews.ycombinator.com+&t=ipad&ia=web))

~~~
CPUstring
One of those stories is someone forgetting their password. One is a person
conducting business on his personal account and triggering flags. Another is a
business messing up.

It's a good habit to keep multiple interlocking personal email accounts from
multiple providers, but being cloud-first is still obviously correct.

~~~
robenkleene
What does "multiple interlocking personal email accounts from multiple
providers" mean?

Just to be clear, regarding "being cloud-first is still obviously correct":
your stance is that it's OK for access to your life's work to be at the
discretion of a company? (Presumably one you trust.) Not saying you're wrong
here, just curious how people who are all-in on the cloud think about this.

------
jjordan
This is why I'm a frequent visitor to /r/selfhosted. Any other self-hosted
centric communities to be aware of?

~~~
ffpip
Maybe r/datahoarder? Or even r/opendirectories, r/plex, r/homeserver?

------
valuearb
The No Spinners thing seems to be my professional niche. At every job I’m
tasked with cleaning up a poorly performing native app. And it’s always caused
by developers writing views like web apps: firing off server requests as views
open, and putting up a spinner to wait.

It’s not that hard to have a caching strategy. And then your native app feels
like a native app.
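
One minimal version of such a strategy (roughly stale-while-revalidate: serve
the last known value instantly and refresh in the background; the fetch
function here is a made-up stand-in for the server request):

```python
import threading
import time

class CachedStore:
    """Serve the last known value instantly; refresh it in the background.

    `fetch` is a placeholder for the app's real server request.
    """

    def __init__(self, fetch):
        self._fetch = fetch
        self._value = None
        self._lock = threading.Lock()

    def get(self):
        # Return whatever we have right away -- no spinner.
        with self._lock:
            value = self._value
        # Kick off a background refresh so the next read is fresher.
        threading.Thread(target=self._refresh, daemon=True).start()
        return value

    def _refresh(self):
        fresh = self._fetch()          # the (slow) network call
        with self._lock:
            self._value = fresh

store = CachedStore(lambda: "fresh-from-network")
store.get()        # first call: nothing cached yet, view shows a placeholder
time.sleep(0.2)    # give the background refresh a moment
store.get()        # now serves the refreshed value instantly
```

A real app would persist the cache to disk and invalidate it, but the shape is
the same: the view renders immediately from local state, and fresh data
arrives as an update rather than something the user waits on.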

~~~
hbogert
That's not caching; that's the optimistic UI from the referenced paper that
you're describing.

~~~
valuearb
You can call it Tomato or Tomatoe.

------
RivieraKid
What's the origin of the "X-first" language pattern? The earliest example I
can remember is Eric Schmidt talking about a "mobile first" approach.

~~~
zurfer
Nice question. I'm not that old, and am only aware of the more recent "cloud
first" and "America first".

------
oblib
It's worth noting that a local-only offline-first web app shifts the load of
storing and managing user data entirely onto the user themselves.

That's a huge load of responsibility taken off the software supplier's
shoulders, and it eliminates the entire cost and complexity of building and
maintaining cloud-based data management infrastructure.

And that's a pretty great option to have on the table for both users and
software makers.

I'll also note that these apps consume almost no network bandwidth. With
Service Workers implemented they barely speak to your server. The amount of
bandwidth that can be potentially saved has got to be pretty huge.

------
webscalist
TODO: build a local-first livestream remote conference zoom alternative
software

~~~
ForHackernews
Wasn't this Skype, originally? Pre-Microsoft acquisition?

~~~
ramraj07
Everyone starts with p2p and realizes it doesn't work. OG Skype predates fast
internet and expectations of extremely smooth conferencing between an
arbitrary number of users. A centralised server is basically a requirement for
acceptable performance.

~~~
pferde
There is a step between peer to peer and centralized, and that is federated.
For most services, that should be the sweet spot.

~~~
fastball
How do you federate a video call with 10 people?

Federation isn't needed, because you don't need servers that communicate with
each other; you just need some server that can host a given video call. So any
open-source solution works; no federation architecture is required.

~~~
hinkley
The same way people federated email forever? Local servers that contact
upstream servers?

~~~
fastball
My question is not "how does federation work?" It is "in what situation is
federation _useful_ for video conferencing in a way that a single open source
server is not?"

~~~
toast0
Depending on the network topology and number of callers, you could get some
benefit from a multiple-server model. Large meetings across offices could make
use of a server at each office to streamline bandwidth. That's not a good fit
for today's reality of everyone calling from home, or your scenario of 10
callers, but it could still be useful for EU vs. US callers to connect locally
and send only one stream across the Atlantic. Multiparty e2e calling is
already challenging, though, and adding multi-hop routing might make it more
so.

------
m3047
As partly a philosophical exercise, and partly as hyperlocal "no really, you
are here so you have access", I added Apache with DAV and Etherpad to a
DNS+DHCP "sled":
[https://github.com/m3047/pangolin](https://github.com/m3047/pangolin)

The irony that Etherpad is written in Node is not lost on me.

------
merricksb
Made the front page a few days ago but didn't get much traffic or many
comments:

[https://news.ycombinator.com/item?id=23985816](https://news.ycombinator.com/item?id=23985816)

[http://hnrankings.info/23985816/](http://hnrankings.info/23985816/)

------
oscargrouch
I'm building a "browser-based something" that can mostly be reasoned about as
web-over-RPC-services, where the primary pitch to start it was exactly based
on the thinking generously expressed by this article (albeit I had to start it
much earlier).

The first thing is to expect the app devs to define an API in RPC, so that
what the app serves is transparent to the whole platform.

That way, users and other apps can reuse that API and integrate tools and
other services without worrying about whether the node is online or not.

Given that they are RPC (over TCP or IPC), they can be used as a peer node in
a distributed computing flock; or, in the app's service process, the
application can serve the RPC requests not locally but over the network, if it
wants or needs to.

But even in the case where it goes straight to the "cloud" for resources, on
the node it will always go through the RPC API first. (How the application
handles the RPC request is defined by the app developers.)
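
A toy version of that idea, using Python's stdlib XML-RPC purely as a stand-in
for whatever RPC layer the platform would actually use (the `search` function
and port handling are invented for illustration): the app registers its API
with a local RPC endpoint, and any other app reuses it through that same
interface.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# The app's API: a stand-in for whatever the app actually serves.
def search(query):
    return f"results for {query!r}"

# The app registers its API with a local RPC endpoint...
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(search)
threading.Thread(target=server.serve_forever, daemon=True).start()

# ...and any other app (or a remote peer) reuses it through the same
# interface, without caring how the app fulfils the request.
client = ServerProxy(f"http://127.0.0.1:{port}")
print(client.search("local-first"))
```

The caller never sees whether `search` was answered locally or proxied to a
remote service; that indirection is the whole point of the platform described
above.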

There are actually a lot of other important details: for instance, the UI SDK
and window management are already there, the storage layer (files and a
key-value DB) is distributed over p2p (torrent), and everything is
accessible/bootstrapped through a DHT address reachable from anywhere.

But imagine if you had this architecture before, and Twitter, Facebook, or
Google Search had to install an API-based application, where they would need
to ask you for permission to index content or store your list of friends
locally, and where other apps could later extract that information from your
machine. For instance, you could change your search to DDG, or export your
Facebook list of friends and post it to some other social network (this is
actually the primary reason almost no one can compete with a popular social
network).

In the end, things that should be yours are yours, and third-party apps will
have to be installed and granted permission by you to manage that digital
property.

But overall, I'm very glad to see that this is starting to become some sort of
a trend, and that people are finally waking up, because beyond all its
technological implications, this approach also has a lot of political and
social benefits.

------
z3t4
As platforms get more and more locked down (Chromebooks, Android, iOS),
hosting the software in the cloud becomes an easier option. For example, I'm
working on an editor/IDE for programming, but it's next to impossible to
compile programs on the platforms people use nowadays. So in order to run a
program that you made on your Apple phone, Android device, or Chromebook, it
has to run on a third-party server...

------
MaxBarraclough
Much of this seems to be rediscovering the principles of the IndieWeb
movement, which has already thought about many of the issues discussed here.

[https://indieweb.org/silo](https://indieweb.org/silo)

[https://indieweb.org/data-portability](https://indieweb.org/data-portability)

------
wilsonrocks
How about a more generalised collaboration system, where you just edit your
files locally as standard, but can kind of dial in to collaborate, a bit like
a screen share on a call? So suddenly the people you've dialled in can leave
comments on your work etc., but when you cut the connection, they can't any
more?

------
smithssoso
Take a look at the PJON protocol, which is a perfect example of a local-first
networking technology. I am really amazed by the job they did on that!
[https://github.com/gioblu/PJON](https://github.com/gioblu/PJON)

------
WrtCdEvrydy
Unfortunately, local-first software is incompatible with being a slave to your
technology company of choice (Facebook, Apple, Google, Microsoft).

If it wasn't for GDPR, most of these companies wouldn't even offer exports of
your data.

~~~
henvic
GDPR was born yesterday, and government intervention in such private matters
can really cause a disaster. You want to have control over your data, and, to
cite one case, Apple started embracing that a while ago, ahead of the GDPR
laws. Perhaps because it is a hardware company it has more incentives (for
example, it could very well create something akin to a 'family vault': a fancy
digital vault that you could buy and store your family data on, and that would
survive even fires. Just thinking some crazy ideas, so don't judge me too
much).

I am positive that market forces would have achieved the same effect sooner if
it were not for GDPR. One thing that has to change, in my opinion, is the idea
of 'exporting out'. People should be using such services to import what they
have to share with others, instead of blindly relying on them for remote
access.

~~~
Frost1x
>I am positive that market forces would have achieved the same effect sooner
if it were not for GDPR.

Considering how many business practices across the board aren't very consumer
friendly and are seemingly growing ever more anti-competitive in nature while
skirting antitrust definitions in grey areas, I'm personally not nearly so
sure we can rely on "market forces" to protect anything but the business in
question.

In fact, the only thing I've seen market forces achieve is self-preservation
and interest which may or may not align with the rest of what society wants or
needs.

------
amelius
But we still need a catchy name.

------
staticvar
Was this paper in a journal?

~~~
cxr
It was published in the proceedings for _Onward! 2019_.

[https://doi.org/10.1145/3359591.3359737](https://doi.org/10.1145/3359591.3359737)

------
flas9sd
if you prefer HTML over PDF it's [https://www.inkandswitch.com/local-first.html](https://www.inkandswitch.com/local-first.html)

