Bring Your Own Client (geoffreylitt.com)
533 points by pcr910303 7 months ago | 249 comments



The competitive advantages of owning both the application and the data are so high that I don't see widespread BYOC ever happening without government intervention.

I'd like to see an enforced separation in law between commercial application providers and storage providers. So, if someone writes an app like Google Docs, they can't just store the data opaquely on their own cloud servers. They legally have to integrate with a separate storage provider. And that storage provider has an obligation to make my data accessible and manageable by me.

This wouldn't be a panacea, but it at least makes it theoretically possible to write a replacement client for Google Docs that could be a drop-in replacement.


> I'd like to see an enforced separation in law between commercial application providers and storage providers. So, if someone writes an app like Google Docs, they can't just store the data opaquely on their own cloud servers. They legally have to integrate with a separate storage provider. And that storage provider has an obligation to make my data accessible and manageable by me.

I think that is a bit much. I would much rather make reverse engineering/screen scraping/whatever for interoperability 100% legal with zero grey area, including a provision that says no TOS/EULA/NDA/non-compete/any contract can sign away this right.

Edit: basically let asshole companies use technical means to try to stop us, but give them no legal recourse if we manage to get the cheese out of their trap.


We can split the difference by mandating that whatever storage services are provided via "webapps" must also be provided via a plain API. Users shouldn't have their data locked behind proprietary javascript. This would create zero additional burden, as every webapp needs such an API for the front end to talk to anyway.


Maintaining a stable API surface is a burden.

That said, I would agree with such a mandate, as the costs are probably worth it.


It wouldn't have to be any more stable than what's already required to coordinate with the proprietary front end. A company would actually have an incentive to stabilize the API, to encourage cooperative use rather than migration.

(And if a company starts churning their API in bad faith, that's exactly the kind of thing courts are meant to figure out)


Stable isn't required. Just let the users use the same API the client software (JS, mobile) uses, with password or key authentication. The free market will build clients.

For security against social engineering, add some rigamarole around opting into API access.
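To sketch what that looks like in practice (the endpoint, path, and token below are all hypothetical), a third-party client would just hit the same JSON API the official front end talks to:

    # A minimal sketch, assuming a hypothetical service whose web front end
    # talks to a JSON API; the user authenticates with their own opt-in key.
    import requests

    API_BASE = "https://app.example.com/api/v1"   # same API the JS front end uses
    TOKEN = "user-issued-api-key"                  # obtained via an explicit opt-in flow

    resp = requests.get(
        f"{API_BASE}/documents",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    for doc in resp.json():
        print(doc["id"], doc["title"])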


I'd like to see cold storage providers split from app providers.

Strict enforcement policy on cold storage data warehouses. And sensible policy for the app providers, like not caching PII.

Implement and enforce this division at the federal level, in a manner akin to Glass-Steagall.

Owning data should not be something providers can ever exploit.

[edit]: I also think this could be used as a means to break apart Big Tech


That seems like a stronger mandate, but I think what I proposed would actually rein in their power better than your proposed decoupling.

First, different services would still have to agree on data formats to interoperate. The ecosystem would be quite similar to Windows file formats, and we've got plenty of experience seeing how that works out. It's better than what we have now, but not a panacea.

The bigger issue is that in addition to wielding data lock-in, these companies wield network lock-in. With storage decoupling, if I load up my own profile in a hypothetical "Facebook competitor" app and make a post, Facebook has no desire to display that post to users of its app that are my friends. However with my API proposal, competitor apps have the ability to publish directly on Facebook's site and be treated exactly the same as every other post.

Also, my proposal has a longstanding philosophical grounding that the abilities of computing should be available to all - companies shouldn't be able to insist on specific methods of usage that computationally disenfranchise users.


(post author here)

While I agree that commercial incentives are critical to consider, two caveats I'd add:

1) I think the nature and depth of the incentives problem varies a lot by industry. Social media is a really complex case, for example. But I'm most interested in collaborative productivity tools, where I think many companies are okay being incentivized to build the best product rather than build data moats.

2) Cutthroat commercial incentives aren't the only thing that got us here. There are also tech barriers.

If I was starting a Google Docs competitor today, even if I wanted to make it open, I think it would be hard to pull off. What kind of API would I expose to enable realtime editing with good offline mode and conflict resolution? How would I deal with clients that have slightly different rich text representations than my own?

In an alternate universe where we already had a user-owned "web filesystem" with good answers to these questions, I could just hook up my client to that existing system and not even worry about persistence at all. It needs to be _easier_ for devs to build the right thing for users, not harder. And it needs to be a _better_ experience for end users, not a sacrifice. The convenient thing will win.


Hey Geoffrey! We are building this in https://braid.org: a standard protocol and API for synchronizing state with realtime editing, conflict resolution, and a good offline mode. By building this into the web, we're creating the equivalent of a "web filesystem" as you put it, where every file (aka "resource") has full versioning, realtime updates, offline abilities, and merge resolution.

I know you're aware of Braid, but I'm not sure you see how much our work is actually aligned. In fact my deepest personal motivation in creating Braid is to enable the "BYOC" vision — but I've been calling it a "separation of UI from state", where each user can choose their own interface to interact with the world. In today's web, the data owner controls the interface. But this means that they control how far too many people interact with the world, because we increasingly have to interact with the world through computers.

The Braid abstraction makes it easier for developers to program with distributed state, and in the process also makes user-interfaces and back-ends interoperable, so that we can easily switch out different UIs for the same state.


Well it can happen if a significant enough open source project establishes a protocol early enough and doesn't get co-opted.

That, or a consortium of companies agrees on some sort of standard. (See the work on PC buses in the early PC days.)

But you are right. When companies run on locking in as many users as possible during an investor-subsidized period, this model makes BYOC precisely the opposite of the expected outcome, as the whole point is to build a moat around the sales model.


One of the better examples still seeing widespread mainstream use is email, and this even applies to multiple stages throughout the email ecosystem: sending mail, receiving mail, managing mailboxes. Imagine if email wasn't cross-provider compatible.


The surviving open protocols are like libraries: they were created before the space got commercialized. They got grandfathered in. If libraries didn't exist for centuries already, there is no way they could have been created under current intellectual property laws and IP economy. Similarly, new open protocols don't generally gain widespread adoption - and in the rare case they do, that's only because some companies are using them to gain adoption; once those companies gain the market share they want, they switch to proprietary protocols and the open one dies. See e.g. Google and XMPP or RSS.


What about, tangentially, cellphone standards or the HTML standard, which have iterated and are now at version X? Your point stands, and yet a simultaneous coevolution of the standard, the userbase, and the (app/thing) functionality does seem possible.


Great points -- with the caveat that RSS isn't dead! :)


Technically. Neither is XMPP :). It's just increasingly rare to see in the wild.


Email is only useful because it is cross platform compatible. Otherwise, we’d all be back to writing to some friends in AOL, others in MSN, maybe some back in Compuserve. Some people like to send messages only in Facebook and others via Twitter DMs. None of these are/were interoperable which is why they aren’t ubiquitously used.

Email is only useful because each server can send and receive with other servers.

But we all know this, so what's the point here?

I would say that my point is that if interoperability becomes more useful than the walled garden approach, then we will see something like this post's call for BYO client. Email is an example. We have interoperable email because it was more useful than service-specific inboxes.

Until interoperability is necessary for Word, we won’t see an opening for other Word “clients”.

Aside: you could argue that Google built Gmail to make a dent in the ubiquity that was Exchange for corporate messaging. It exploited the openness/interoperability of email in order to do this. If you accept that argument, then maybe they should take a page from that playbook and work to make Google Docs as open as possible. Allow for more interaction with external clients. Let a more open ecosystem develop. That might have the potential to diminish the position of Office for document creation.

Or maybe email is just so old that it got established before it could be commercialized... it might happen again, but I’m not hopeful. For example, I don’t expect Twitter to care about federating their system with other messaging applications. There is too much to be gained (selling ads) by keeping people on their site.


Gmail was Embrace, Extend, Extinguish. You can't read your own email using your own email client anymore.


Absolutely. However, a major problem occurs, and email is a great example of this: when there is a fundamental need to change the protocol for reasons of privacy, security, authentication, etc., it becomes almost impossible to do when it is as widespread as email, or it takes a very, very long time to spread enough.

When one person controls the whole system, it's easy to pivot rapidly. I guess that's another argument against having widely used protocols (even though I disagree with it.)


I think that's a great argument for keeping widely used protocols. I don't want any one person, or one entity, to rapidly pivot my critical infrastructure (whether it is "broken" or not). The slowness of change is a feature.


What's insecure or frozen about email? We have SPF and all that jazz.


For something to get traction, it has to have excellent UI and UX. So far open source hasn't been very successful at providing that since there is no economic model to finance the immense amount of work required for good UX.


That view seems a little shoehorned because the parent comment specified "protocol", implying that it would be the connection between client and storage, thus not user-facing.

Regardless,

* Blender

* Godot

* Krita

* GitLab

* Olive

* Firefox

* Thunderbird

* NextCloud

* Amarok

* Vital

* and more

The problem isn't an "economic model". The problem is that most open-source projects simply don't attract UI designers, UX researchers, and users because contribution flows are more technical or geared toward developers.

UX research isn't "expensive" when referring to open-source projects because it predominantly requires people: designers, researchers, and users. The whole foundation of open-source is volunteerism.


What IS the contribution flow for a designer, anyway? If they can't submit a PR, it seems like the best they can do is submit an issue with a mockup. Are volunteer developers likely to jump on that and implement it?

Most developers I've worked with don't exactly love going back and forth with designers on UI stuff.


Volunteerism is much rarer than paid work, and paid work prefers proprietary to get the money to pay.

And volunteerism was much more popular in the 1990s, when it was easy to write software but hard to sell it. App stores suffocated open source by offering a path to revenue.


This view is the one that feels shoehorned. "Normies" (not pejorative) have no idea what a protocol is. They just want something that works and is convenient as possible. That's how we ended up at the $free+$surveillance App Store / modern Web model.


Since when was HN a normie site?


HN builds products for "normies".


I posited a similar point to a former colleague (mine was more about eventual market dominance due to excellent UI+UX), and he came back with Craigslist as a perfect example of a product that gained a massive user base well into an era where its UI felt very outdated. A few years after this exchange, I feel more like I was right about my point: FB Marketplace has almost completely taken over as the classified product marketplace. But the point still stands that Craigslist was able to do a ton with very lackluster UI+UX.


I think the problem with this analysis of Craigslist vs. Facebook is twofold. One, Craigslist had, in fact, a great UI. It was simple, clean and accessible to everyone, no matter whether you were using the newest desktop browser, a screen reader, a cheap phone or a potato. Craigslist is a success story of bullshit-free UI.

It got beaten by FB Marketplace because Facebook has almost three billion captive users, and a marketplace bundled in the service they use many times a month takes less effort to visit than a completely separate site.

From this, and many other cases - like e.g. everyone hating the IM/videoconference system they use and yet using it anyway - I conclude two rules of thumb:

- UI that's good for users has very little to do with what UI/UX specialists peddle today;

- Network effects trump UIs - a sticky service with bad UI will beat a competitor that offers much better UI, but doesn't hold the user captive.


Craigslist also got beaten because they fought becoming a unified marketplace--with lawyers.

Had they not fiercely destroyed all the various "Search all Craigslist" mechanisms, they might have had a chance at fighting FB Marketplace.


Another counterexample: Amazon. By far the worst user interface of any webshop: no filters, hardly any categories, and a search where very small changes in spelling/wording make huge changes in results. And still they are by far the biggest game in town.


UX is a trojan horse, now everything has to be mobile friendly for absolutely no bloody reason. You know what also works on mobile? Zooming, yes you heard me.

Just the other day I had someone, an old person, complain about how an internet banking site was "revamped". They replaced their functional, working UI with a site designed by "UX designers", and you know how he phrased his opinion of the new site? "They made the site for stupid people".


GitLab? Xfce?


> without government intervention

Well that's the real trick, isn't it? </Han Solo>

Our government is "captured" by campaign financing. Now we have 2 problems: ineffective governmental oversight, AND Citizens United. We have to solve the latter before we can ever solve the former.



I don't know if this is a good example :) Anecdote: I recently wanted to extend the hoses for my tabletop dishwasher (the one that hooks up to your tap, not to the wall). It's 2 hoses with 2 connectors each, should be easy. But all 4 are different. After 3 trips to 3 different hardware stores, I found exactly 1 fitting that fits 1 (of the 4) connectors. I ended up slicing the existing hose and extending it in the middle, because that was easier than finding the right fittings. I thought about ordering them online, but that would assume I successfully identified them, which I didn't. Granted, I don't know anything about plumbing, but neither does the average person know anything about technology.

That said, I agree, standards are necessary, I just wanted to rant about NPT.


I genuinely cannot tell you how relieved I am this isn’t a link to the Treaty on the Non-Proliferation of Nuclear Weapons, the thought of FAANG cos “disarming” but snuffing out competitors with government approval is terrifying.

https://datatransferproject.dev/ from yesterday’s iCloud photos thread seems to be a step in this NPT direction though.


I think it's not completely inconceivable that this could be regulated some day with music streaming. There are zero technical hurdles (unlike with complex apps like Google docs), and because there is such potential for decoupling the client the consumer benefits would likely be much greater.

I've been meaning to switch from Apple Music to Spotify for a while now (I'm on Android). I just had a look around the Spotify app today and the interface is terrible, it won't let me play and view my library like I want to. It would be so much easier if I could just use both from a single open source client, and then I wouldn't have to switch off to soundcloud and bandcamp for certain artists.

Wouldn't it be in the interests of players like Google and Apple to create this service/protocol, even before the regulatory net starts closing in?


You should check out Spot[0]. It's a Spotify client written in Rust using GTK. It's quite barebones at the moment and only works with premium accounts, but the interface is much cleaner than the official Spotify client.

[0] https://github.com/xou816/spot


If a terminal is more your style you can even use Spotify-tui[0].

Requires some setup and having an additional client installed for the actual audio playback but it's still interesting using Spotify from a terminal.

[0] https://github.com/Rigellute/spotify-tui


remoteStorage[1] is already a thing. I don't think anyone's made a full Word-style document editor for it yet, though.

[1]: https://remotestorage.io


I generally support the spirit of requiring cloud apps to provide equivalently powerful API access to their service. The (solvable) problem is how do you regulate this without stifling development of new cloud apps?

Another more practical problem is how do you even get something like this on the agenda of a lawmaker? This isn’t a readily apparent problem to most people but it likely would result in less monopolistic outcomes for cloud providers and potentially better and/or more efficient products.


I think the focus here is on the wrong thing, the "app", which makes ideas like legal intervention and other distractions feel relevant.

I think client interchangeability, and ultimately user freedom, is fundamentally about data-driven design. Look at all the examples: text is text is text... Git is a file-based graph of Merkle trees. They are both highly abstract, portable, unopinionated data structures that don't describe APIs, UI or implementation. Don't like the git CLI? You can literally write a completely different one with different porcelain and use the exact same git repos as everyone else; people have done this. Text is the same, it's just more obvious because so many examples exist.
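To make the Git example concrete: the whole object store is content-addressed, so any alternative client that computes the same hashes can read and write the same repos. A minimal sketch of the blob hashing (the same scheme `git hash-object` uses):

    # Git stores every object under the SHA-1 of a typed, length-prefixed
    # payload - that stable on-disk contract is why alternative clients work.
    import hashlib

    def git_blob_id(content: bytes) -> str:
        """Compute the object id Git assigns to a blob (as `git hash-object` does)."""
        header = b"blob %d\0" % len(content)
        return hashlib.sha1(header + content).hexdigest()

    print(git_blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a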

If you start with data and make it successful, clients will emerge around it without anyone having to choose to make them interchangeable.

The problem with things like Google Docs isn't the app, it's the opaque data format. And then there are formats posing as open and portable, like DOCX, which is essentially a proprietary Microsoft format forced open due to reverse engineering - all of its details will no doubt be closely coupled to the history of MS Word features and implementation.


Exactly. But in this case that opaque data format is exactly the thing that enables live editing. Gdocs runs on Operational Transforms. There is basically no way to do OT without a ton of back end support which means tightly coupled APIs.

Martin Kleppmann has a great talk on CRDTs for distributed text editing. It is a hard problem. You also need a dedicated binary format (basically a table) but at least in this case it is a lot easier to use an open format with federated clients.
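For a feel of the CRDT idea, here's a sketch of about the simplest one there is, a grow-only counter - far simpler than the text CRDTs Kleppmann covers, but it shows the key property: merges are commutative, associative, and idempotent, so replicas converge no matter the message order.

    # Grow-only counter CRDT: each replica only increments its own slot,
    # and merging is an element-wise max, which is safe to apply repeatedly
    # and in any order.
    class GCounter:
        def __init__(self, replica_id: str):
            self.replica_id = replica_id
            self.counts: dict[str, int] = {}

        def increment(self, n: int = 1) -> None:
            self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

        def value(self) -> int:
            return sum(self.counts.values())

        def merge(self, other: "GCounter") -> None:
            for rid, n in other.counts.items():
                self.counts[rid] = max(self.counts.get(rid, 0), n)

    a, b = GCounter("a"), GCounter("b")
    a.increment(3); b.increment(5)
    a.merge(b); b.merge(a)
    assert a.value() == b.value() == 8  # both replicas converge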


Separation of application and storage is a great idea!

How about web search and social news feeds? Can there be a similar frontend / back-end separation for Google, FB and Twitter? I am thinking - different search front ends with different UIs, rankings and filters. We could have an EFF-Google and a Conservative-Google. Users could pick their preferred flavor, as I don't think it's fair for Google to decide for us all.


We just have to do what the phone manufacturers did for the mobile phone connectivity. GSM was a huge hit thanks to interoperability.


It was but then deteriorated into a big patent war with every company trying to shove its proprietary patents into the next cellular standard.

So while everyone can implement the standard, very few can do this profitably (In practice, Qualcomm, Mediatek and to some extent Samsung)


OTOH, give it time. Open alternatives exist for every private walled garden. I believe the decentralization/federation and FLOSS movements have a real chance, even if they move at 0.05 velocity. Private platforms are subject to closure, but the light of an open source project will shine as long as there's someone spending time on it.

In due time we'll have networks of small providers catering for BYOC hosting. Cryptocurrencies can play a part in efficient and fair compensation.

But then again, maybe by the time we catch up to current date, everything happens in walled VR moats.

Maybe I'll be the oddball in the metaphorical cabin in the woods, but given the progress we're making so far I'll refuse to enter a closed VR/AR ecosystem.


Maybe the best solution would be legislated government donations to open source projects, with a mandatory legislative bill each year on who they go to.


> with a mandatory legislative bill each year on who they go to.

Another option would be something similar to farm subsidies, although I don't know if that would be enough.


There's been some of that already, not in the sense of governments forcing the separation, but in the sense of governments making the keeping of data a liability, e.g. with the EU's GDPR.

There's certainly still a whole class of applications where owning the data is a competitive advantage, but there are also lots of organisations whose core competency is not selling data, even though they do have to work with user data. When storing data yourself is a risk, it becomes a lot more attractive to just let the user store that data somewhere themselves and have them give your application access to it.

I'm sure this is one of the reasons we're seeing a lot more interest from Europe than elsewhere in Solid [1].

Then of course, there are also lots of organisations struggling to maintain correct and up-to-date data. If multiple organisations access the same data as controlled by the user, the user is more likely to have kept that data up-to-date.

(Disclaimer: views are my own.)

[1] https://solidproject.org/


From a European perspective, it depends. I wouldn't ever even consider something like Auth0 to store our user records for example, due to the potential legal fallout of storing all our PII on American servers. You have to be really careful just what kind of storage provider you pick, where their servers are located and which specific regulation your business falls under.

That being said, I'm a huge fan of splitting concerns - such as connecting anything and everything via OAuth delegations with narrowly scoped permissions!


> The competitive advantages of owning both the application and the data are so high

Honestly, this is wrong. Google Drive has an API for everything, Notion is working on an API, Confluence has an API for everything, Trello... Any serious text editing software (including all the ones wrongly cited in the article) has an API for everything. Want to export your Confluence data to Notion? You can. In the documentation tools space, owning the data is not something companies seek, because companies pay for the service, so the data belongs to them.

BYOC can exist today; the only reason it doesn't is that the document formats are super complex and require complex editors. But the formats themselves are not secret, the application is (and not even always; for example, the Confluence editor is open source).


Those APIs are not meant for building full-blown dedicated clients; they're for application integrations.


Check that https://developers.google.com/docs/api/reference/rest/v1/doc...

You have the full specification of Google Docs documents and an API to edit any kind of block inside.


First, that’s not the full spec for Google docs. It’s missing many aspects of the environment.

Second, is that meant for building a full blown client or is it meant for 3rd party integrations, like plugins?


> It’s missing many aspects of the environment.

I'm curious what is missing.

> Second, is that meant for building a full blown client or is it meant for 3rd party integrations, like plugins?

It is certainly not for plugins, as there are libraries to call that API from Python, Ruby and others, which are then not meant to be run in the browser.

My point is that if you want to build an editor for Google Docs, you can do it: there is an API to get the doc format and an API to make modifications to that document.
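Roughly, under the v1 REST API it looks like this (getting an OAuth2 token with the right scope is omitted; the document id and token here are placeholders):

    # Sketch of documents.get / documents:batchUpdate against the Docs REST API.
    import requests

    DOC_ID = "your-document-id"
    TOKEN = "oauth2-access-token"  # needs the Docs documents scope
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    # Fetch the full structured document (title, body, styles, ...).
    doc = requests.get(
        f"https://docs.googleapis.com/v1/documents/{DOC_ID}", headers=HEADERS
    ).json()
    print(doc.get("title"))

    # Insert text at index 1 (start of the body) via a batchUpdate request.
    requests.post(
        f"https://docs.googleapis.com/v1/documents/{DOC_ID}:batchUpdate",
        headers=HEADERS,
        json={"requests": [{"insertText": {"location": {"index": 1}, "text": "Hello"}}]},
    )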


> I'm curious of what is missing

My initial response was related to missing comments and collaborative real-time editing in the v1 link you provided. However, I searched a bit more and found that their newer API versions include both of those features.[1]

Here's a blog post announcing their real-time collaborative editing[2] back in 2013, so that's been there for a while.

That blog post also lists a few products that make use of the real-time feature but the only client that's an actual editor ("Neutron Drive") has since died with its domains no longer working.

I guess this is one of the rare cases where you can build a client based on their API and Google actually promoted one of them but I'd still say that it's not really meant for it.

The primary use case for these APIs is bot services like Zapier and integrating a "Save to cloud folder" feature in your own app.

Even if they actively encouraged you to build a client based on their API, who would risk it? We've seen how Twitter handled 3rd party clients after it got big and that's likely going to be the case with every major player.

[1] https://developers.google.com/drive/api/v3/manage-comments

[2] https://developers.googleblog.com/2013/03/build-collaborativ...


You touch on an important point here. Even if you were able to gain full access to the data behind say Confluence, the schema is probably very specialized to their client, contains lots of tech debt, and design is probably very opinionated. You would most likely find it easier to just write your own schema rather than integrate with theirs.


Even better: They have to use the storage provider of your choosing.


> The competitive advantages of owning both the application and the data are so high that don't see widespread BYOC ever happening without government intervention.

This.


Interesting proposal, but I see it greatly reducing development and also further entrenching the cloud providers as Lords Of Everything.


Can I ask why?

As I see it, cloud providers are currently entrenched as Lords of Everything because they can both provide the infrastructure and the products that run on top of it. Making them act more like utility providers would reduce their power, I think.


Think it through:

> enforced separation in law between commercial application providers and storage providers. So, if someone writes an app like Google Docs, they can't just store the data opaquely on their own cloud servers. They legally have to integrate with a separate storage provider.

Now, is the law going to mandate the exact API as well?

The likely implementation of this is that, just as every app developer copies every other app developer, when choosing which User Storage Backend to integrate with, they will pick the most popular one, or a near competitor. AWS, Azure, or Google Cloud. (Apple-focused developers may choose Apple Cloud; I'd expect Facebook to spin up one if this became law too).

Just as you don't get general-purpose OAuth integration so much as "log in with Facebook / Google" buttons. Those are your two choices.

Yes, the original intent would be targeted at Google and Microsoft, which currently own both big web apps and big cloud platforms to run them on. I'm not convinced that splitting them vertically would stick; the convergence effects are very strong. So you end up with (choice of two office suites) x (choice of two backend providers), big deal.

Is it sufficient that Google Cloud Storage would have a separate stock ticker from Google Cloud Apps?

It reminds me of rail privatization and the nonsense of having thin shell companies run the trains while leasing all the rolling stock from a couple of companies and running on tracks owned by exactly one company. It didn't really expand choice and it provided plenty of opportunity for blame deflection.


Your objections seem to be predicated on the assumption that the attempt to separate storage providers from application providers wouldn't be properly enforced. Yes, obviously if Alphabet can just set up Google Cloud Storage and Google Cloud Apps and continue with business as usual, then this won't work.

But what I'm suggesting would be a functioning regulatory regime: An independent authority who classifies cloud providers as utilities, writes regulations specifying standards of interoperability, hears complaints from people and businesses and acts to enforce them. Antitrust regulation that stops collusion between service and application providers, and prevents companies or individuals owning a controlling stake in both.

And the major difference I see in contrast to railways is that digital infrastructure is not limited in the same way physical infrastructure is. So long as there's only one set of tracks, you can never have a functioning market running trains on them, and it's impossible to build a second set. But while the capital costs of setting up a new data centre are large, they are still within the realm of possibility for many large companies.


Any such regulatory regime would serve primarily to entrench incumbents by creating vast compliance requirements on newcomers. How's your <pick random> healthcare startup doing?

You imagine a benevolent dictator making optimum choices. I think the F35 design/procurement process is a better metaphor.


With storage we already have "S3-compatible" as a widely implemented and supported standard. Just allow the user to set an arbitrary API endpoint for your regular S3 library and you support a breadth of providers.
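For example with boto3, it's a single parameter (the endpoint and credentials below are placeholders):

    # The same client library works against any S3-compatible provider
    # once the user can supply their own endpoint.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://storage.example.com",  # user-chosen provider
        aws_access_key_id="USER_KEY",
        aws_secret_access_key="USER_SECRET",
    )
    s3.upload_file("notes.md", "my-bucket", "notes.md")
    objs = s3.list_objects_v2(Bucket="my-bucket").get("Contents", [])
    print([o["Key"] for o in objs])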


I don't think that's true.

In the areas in which "bring your own client" is the standard ("text editors / IDE, RSS readers, email clients, web browsers" to cite the article), there is a seemingly greater diversity in hosting than when it comes to the backends of the services mentioned.

Yes, many of these _may_ use AWS in the end, but there is a great many of them that don't.


> don't see widespread BYOC ever happening without government intervention

This is why people don't like nerds, it's like you actually want to fuck up this industry. The number of users that care about this is a rounding error. What you're talking about is increasing the complexity and flakiness of every product through modularization. Have you ever actually shipped a product for the general public?


>I'd like to see an enforced separation in law between commercial application providers and storage providers. So, if someone writes an app like Google Docs, they can't just store the data opaquely on their own cloud servers. They legally have to integrate with a separate storage provider. And that storage provider has an obligation to make my data accessible and manageable by me.

The same can be said about Gmail. In the old days, one could access Gmail via IMAP/POP3. I believe that was removed. All email providers ought to provide an alternative way of getting the email other than the company's web interface.



Probably because, as described in the linked article, IMAP & SMTP are disabled by default - though if you're willing to ignore the scary warnings, they're still possible to use.


> In the old days, one can access gmail via imap/pop3

I believe it's still possible.

Also IIRC it required some adjustments from mail clients to the "imap as gmail sees it" protocol, like "one message being visible in several folders", since "folders" in imap sense were mapped to "labels" in gmail


I'm still using imap and smtp, unless k9 mail for Android is abstracting something else away?


Gmail still allows imap access, although it requires an app-specific password. (Or it has to be allowed on the enterprise side with G-Suite)
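A plain stdlib client still works against it, e.g. (address and app password below are placeholders):

    # Connecting to Gmail's IMAP endpoint with an app-specific password.
    import imaplib

    imap = imaplib.IMAP4_SSL("imap.gmail.com", 993)
    imap.login("you@gmail.com", "app-specific-password")
    imap.select("INBOX")
    status, data = imap.search(None, "ALL")
    print(f"{len(data[0].split())} messages in INBOX")
    imap.logout()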


This idea suffers from the same problem federation does. It's harder to move a whole ecosystem. Makes it harder to innovate. Leads to centralized platforms outperforming these BYOC/federated/open options. Basically how Reddit replaced Usenet with upvotes and basic spam filtering.

There's more on this from 'moxie

https://www.youtube.com/watch?v=Nj3YFprqAr8

https://signal.org/blog/the-ecosystem-is-moving/


Can you imagine a world where you could only use a specific phone model with a specific operator? Or where you could only send text messages to people that are on the same network? If we can regulate phones, we should be able to regulate social networks.


When the iPhone originally came out in the US it was locked together with a 2 year contract with AT&T.


(Deleted my identical but less detailed comment)

Then again, this was never the case in the EU, where bundling handsets with a GSM subscription was banned.

But yes, we don’t have to imagine - and the change certainly didn’t come out of the goodness of the carrier’s hearts!


> this was never the case in the EU, where bundling handsets with a GSM subscription was banned.

It is not: https://en.wikipedia.org/wiki/SIM_lock#European_Union

Last time I was looking for a phone (~8 years ago), in France, I had to check which ones were simlocked.


Usually you had the option at least: either fork out the full price for an unlocked phone, or take the subsidized monthly rental of a phone with a locked SIM.

And while the second option does lock you into one operator for 12-24 months, you still had the choice of which operator to tie yourself to; I don't think I ever saw a phone model only available on a single provider.

Finally, no matter which operator you chose, you could still call people on other operators, even if they initially tried very hard to make this less pleasant by allowing free texts and minutes only within the operator.


Huh, I lived in two EU countries so thought it was Union law, but turned out the national law in the two just coincided!


And it took how long for people to figure out how to unlock it?


No amount of unlocking would have enabled you to use your iPhone on the second-largest operator in the US: Verizon.


Ah. Right. I always forget that the US is about the only country in the world where there still are widely used CDMA networks.


> Reddit replaced Usenet

Many things replaced Usenet. Usenet is more of a protocol (that uses NNTP) than a social media platform. If you're any way a systems thinker, you would know that protocols are more resilient than services.


NNTP is basically dead, maybe abused* for piracy in some corner of the Internet, while Reddit ate half of the web that Facebook didn't eat.

* To be clear, I'm rather ok with piracy. NNTP is just really poorly suited for it.


> If you're any way a systems thinker, you would know that protocols are more resilient than services.

This is not just a terrible saying, it's also completely wrong. All things are not equal, so a crappy protocol may or may not be more resilient than a service. "Systems thinking" is not code for "lazy thinking."


When making the point that protocols are more resilient than services, consider including an example of a protocol that has at least 0.0001 times as many users as Reddit does.


It's also not the _same_ users, which may not matter but probably does.


> Leads to centralized platforms outperforming these BYOC/federated/open options. Basically how Reddit replaced Usenet with upvotes and basic spam filtering.

That is a nice insight.


> This idea suffers from the same problem federation does.

Yes, this is a problem you also see with the Fediverse (based on W3C ActivityPub). You get innovators and laggards, and the latter, when they are popular, can hold the former back (you see this with Mastodon, which as an early adopter created its own client APIs, but is now lagging in implementing the Client-to-Server part of the AP specification).

At the same time standardization of federated protocols can also be an advantage in that it allows many different projects and applications [0] to be developed and mature independently. Innovation can come from unexpected corners here (heads up to openEngiadina with ERIS [1], DREAM [2] and Spritely [3] project).

[0] https://git.feneas.org/feneas/fediverse/-/wikis/watchlist-fo...

[1] https://gitlab.com/openengiadina/eris/-/blob/main/doc/eris.a...

[2] https://dream.public.cat/

[3] https://spritelyproject.org


The C2S part of ActivityPub isn't for everyone, and it's okay. It basically moves too much logic to the client. It prevents any semblance of a good UX because you have everything — posts you would display in the feed, interactions with your content, etc — in the same inbox. You have to connect to many different domains to render something to the user, and you also have to have a sophisticated caching system on the client. Also good luck loading those full-size images over an EDGE connection.

There are 2 kinds of ActivityPub servers.

- The "dumb" one, that does minimal processing and simply stores JSON objects. Someone POSTs an activity to its inbox, the server does minimal required verification, stores it, and the client then queries it with GET. And does the reverse for the outbox. That's it. That's where c2s would work fine.

- The "smart" one, that treats s2s ActivityPub like an API, comes with a built-in web interface, and stores everything in a way which makes sense for its particular presentation. That's the kind I'm making (Smithereen, it's on the list you linked, btw), and that's what Mastodon is. Implementing c2s in this kind of server is a major pain in the ass, and I won't. You'd have to throw away all your careful optimizations, synthesize ActivityPub objects out of things that never were ActivityPub objects in the first place, do horribly inefficient things to merge notifications and feed and other conceptually unrelated stuff from several different database tables just for the client to split them back apart... Doesn't make much sense. A domain-specific API is the only way to make a client for this kind of server.


There can be considerable use-value to ecosystems which don't move as a whole, or do so with deliberation and in a component-wise manner.

As with, say, LaTeX, or Unix/Linux.


>...Reddit replaced Usenet...

According to Wikipedia, Reddit was created the same year Usenet service was discontinued by AOL. Reddit "replaced" Usenet in the same sense that cell phones "replaced" telegrams.


AOL's only contribution to Usenet was to fill it with people who didn't understand Usenet, all at once, almost completely eradicating the existing community.

They discontinued the service when Usenet was no longer a selling point, because they had destroyed it.


Reddit also has tons of unofficial clients that interact with it. It’s not quite the same since you still don’t own the data, but it comes close.


tl;dr

>>> So while it’s nice that I’m able to host my own email, that’s also the reason why my email isn’t end-to-end encrypted, and probably never will be. By contrast, WhatsApp was able to introduce end-to-end encryption to over a billion users with a single software update.


This strikes at something that people often WANT when they say "text files are better than the registry/binary formats" - they want to easily bring their own tooling to bear.

Standard APIs let you do this - even if you have a "binary format" like MySQL or PostgreSQL (on disk) - nobody really complains about that because they have a defined API you can interact with.


Or SQLite, which I would recommend over some homebrewed text file format, see https://www.sqlite.org/appfileformat.html
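A sketch of the idea from that link: the document simply is a SQLite file, which gives you structure, transactions, and partial reads for free (filename and schema here are made up):

    # Using a single SQLite file as an application's document format.
    import sqlite3

    con = sqlite3.connect("mydoc.proj")  # the document *is* this file
    con.executescript("""
        CREATE TABLE IF NOT EXISTS meta(key TEXT PRIMARY KEY, value TEXT);
        CREATE TABLE IF NOT EXISTS pages(id INTEGER PRIMARY KEY, title TEXT, body TEXT);
    """)
    with con:  # transactional save
        con.execute("INSERT INTO pages(title, body) VALUES (?, ?)", ("Intro", "Hello"))
    print(con.execute("SELECT title FROM pages").fetchall())
    con.close()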


Opening untrusted SQLite files tends to not be safe: https://research.checkpoint.com/select-code_execution-from-u...


There was a bug in an older version of SQLite that CheckPoint was able to exploit. That bug has been fixed for a long time. It was fixed even before the referenced talk was given. They had to use an older version of SQLite in that talk so that the attack would work. The CheckPoint attack was a clever new idea, to be sure, and so more recent versions of SQLite have added new defenses to this kind of mischief, such that even if new bugs in SQLite are found, there will be defense in depth and attacks similar to the CheckPoint attack will still be unlikely. And yet for some reason, because we had that one bug, long ago, people keep saying that SQLite is "unsafe" years after the bug was fixed. Why is that?

See https://www.sqlite.org/security.html for tips on safely opening SQLite database files that have been tampered with by a hostile agent.


Well it's more a question of it being a much more complex interface than with a simpler format. Just about everyone runs a service that parses untrusted JSON, but I personally really wouldn't feel as safe running a service that consumes client-submitted SQLite databases.

"It's possible to get it right" is not a great security boundary.


Unless you like your database to take types as little more than suggestions.
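What I mean - by default (non-STRICT tables) this succeeds, and the "INTEGER" column happily stores text:

    # SQLite type affinity demo: the declared type is not enforced.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t(x INTEGER)")
    con.execute("INSERT INTO t VALUES ('definitely not an integer')")
    print(con.execute("SELECT x, typeof(x) FROM t").fetchone())
    # -> ('definitely not an integer', 'text')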


We're talking about SQLite instead of text files, not instead of an RDBMS. Plaintext files don't even have the concept of a type; they're just an unstructured blob with a well-known mapping to printable characters. SQLite is a step up, because it lets you express structure and hint at the desired interpretation.


Text file formats have the benefit of being easier to merge sometimes. SQLite files are binary and hard/impossible to merge.


And this works both ways. Like the Postgres standard API you mention: Not only does it allow you to bring your own client, you can swap the underlying implementation and still use all the clients that speak the protocol.

Like the author, I would really like to see some open protocols (and adoption) for todos and calendars. I mention adoption because for calendars there are some standards, but popular software like Google Calendar and Exchange do not implement them properly or fully.


Calendaring is a huge embarrassment and a mess, and it is super annoying if you have to deal with anyone “outside” your group. Emailed ical files seems to be the norm but only work over email. :(


Calendaring is surprisingly complex. Especially when it comes to stuff like rules for repeating events and timezones. I had to do work on this at a previous job and it was pretty hard even with all the libraries and tools out there.
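A taste of it: even a single, fairly tame RFC 5545 recurrence rule ("last weekday of every month") needs real machinery to expand - this sketch leans on the dateutil library, and that's before timezones, DST transitions, and EXDATE exceptions enter the picture:

    # Expanding an iCalendar-style recurrence rule with python-dateutil.
    from datetime import datetime
    from dateutil.rrule import rrulestr

    rule = rrulestr(
        "FREQ=MONTHLY;BYDAY=MO,TU,WE,TH,FR;BYSETPOS=-1",
        dtstart=datetime(2021, 1, 1),
    )
    for occurrence in rule[:3]:
        print(occurrence)  # last weekday of Jan, Feb, Mar 2021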


It's insanely complex - one of those problems that is easy to state and nearly impossible to solve (which means there's opportunity here).


Or in a lot of places, that’s Microsoft outlook’s moat


Standard APIs still can require a fair amount of code to write. A plain text format with a well defined grammar allows me to pull out any old text editor, and get to a working change with trial and error.

Protocols and binary formats have their place too -- but there's simply nothing as universal as text.


> even if Google wanted to expose an API for editing Google Docs in third-party editors, it would probably be very challenging

It does, https://developers.google.com/docs/api/reference/rest/v1/doc...

The reality is that building an editor for text is easy. Building a rich text editor is faaaaaaar from being a simple thing


The problem (currently) is that most people simply don't care about being forced to use a particular client - as long as that client looks nice.

I say this with a lot of love for the idea. My company sells a collaborative knowledge base that supports Bringing Your Own Client. Out of the hundreds of users I have spoken to, only ONE has ever asked me if they could use their favorite markdown editor to access their knowledge base.

On the positive side of things, having the capability to bring your own client makes it really easy for us to support many different use cases. We just have to write those clients ourselves.


Most people just aren't familiar enough with the underlying idea. You can't express or even recognize a desire for your own client if you don't know what a client is.

Because businesses focus on meeting users' desires instead of needs, they are content with the status quo of rigid UI/UX design.

This is why we see BYOC so prevalent in developer tools like text editors and terminal emulators, but not in office tools like Google Docs or art/design tools like Photoshop and Maya.

I've heard a million times or more the phrase "industry standard" used to defend rigid proprietary tooling, even by users who would benefit from and sincerely enjoy more freedom in their tooling.


I think it makes sense to at least have the mindset from a developer's point of view. It helps prevent lock-in between the client and whatever APIs or storage your product uses.

I've tried to build apps with a strict client / server separation for a decade now, it made sense at the time because of mobile apps, and it makes sense now due to application half-life (front-ends don't last as long as their back-ends).

In my current project we may offer API access to our customers in addition to a user interface.


Agreed - with the understanding that maintaining distinct clients is a LOT of work.


Immutability and mutability of content, and expectations or conventions surrounding it, seems to play a large role in feasibility here. The successful examples listed are generally cases where content is immutable to the general audience (someone produces content and then publishes it to others) or the way content is mutable is "understood" by the participants. That is, text editors mutate content, but no one expects there to be multiple editors of a text file, and the convention of a file is understood and encapsulates the "state" of the data.

In the wishful cases listed, collaboration is the norm, so to make it BYOC, you'd have to expose core "mutation" API or else use a general convention that is understood across the board.

In my dream world, we'd be using CRDTs for data, and the "schema" of the data for a given "service" (say something like Google Docs) would be open. The data storage layer would be a commodity and you can swap in different providers as you see fit. Of course, there is no benefit to the creators of such services to do this, and I don't think CRDTs are quite there yet with defining mutation efficiently with respect to multiple collaborators. However, from a data portability standpoint, it feels like the ideal to me.


The main reason for using google docs is not its superlative editor... /sarcasm

It’s the collaboration features. And no I don’t mean the multiple people typing the doc at the same time.

It’s the ability to quickly share a document with a set of people that you want.

Sure I use Vim every day, but without that git push and a trip to GitHub to invite collaborators I’m a man on a desert island waiting for that next weekly ferry.

Code editors and that ecosystem is good for code, but too slow for people who just want to share some recipes with their brother and not think about it too much.

Until we have some sharing infrastructure easy enough for a any client to sit on top of, it’s all a dream.


> It’s the ability to quickly share a document with a set of people that you want.

In particular, it lets you do that while maintaining a single source of truth.

It's equally easy to just email people a document. But then if you change the doc, they still have the old version. Maybe they forwarded it on to some other people you don't know about after making some changes. The next thing you know, there are twenty versions of this file floating around, all slightly different, and everyone thinks they are on the same page.


I think your point is spot on and probably one of the bigger reasons why we won't see any change happening in the Google Docs department.


What about emailing your brother the recipe? That's ultimately the sharing infrastructure Google docs sits on top of.


I've got lots of small tweaks I'd love to incorporate into a Twitter client. Unfortunately, I think many of them would violate the developer TOS.

For example: hiding avatars completely or generating replacement avatars using the username to remove any chance of internal bias associated with an account's avatar. Another one: hiding images by default and forcing you to click to expand/open to see them (no previews). A lot of these ideas are intending to modify Twitter's default salience landscape.

However disappointing it might be though that I can't do this, I get it. These kinds of modifications could totally change the ways in which a user experiences Twitter, and how Twitter would be able to monetize those users. Simply forcing Twitter (via legislation) to allow BYOC doesn't seem like a good idea because of this. It doesn't seem right to force them to run a service with a reduced potential for monetization -- e.g. many clients could just not show any ads, which would totally remove Twitter's ability to monetize at all.

An idea might be to charge users money in order to allow BYOC, so Twitter could focus on building out the core offering, which is just basically the public messaging substrate of the internet. But I'm not at all sure how well that would work out in practice.


FWIW the Twitter web client can be somewhat easily hacked by blasting it with an interval from a web extension or userscript. It's a bit inefficient and you have to awkwardly navigate the DOM since they're using obfuscated class names but it works and it can be stable.

Here's some example code from a web extension I wrote that removes promoted tweets: https://gitlab.com/t0astbread/no-promoted-tweets/-/raw/maste...


Couldn't you accomplish that with a browser extension?


I think OP was making a meta point about someone having carte blanche ability to modify a (company's/team's/project's) initiative towards its existing end users (who may prefer the original setup) in a way which threatens its survival - or, restated, about not giving you the ability to restrict someone from doing so.

Extending the analogy ad absurdum, should (CompanyX/Twitter) be allowed to have ToS against a (browser extension/etc) which would reduce its revenue to $0, and theoretically force it to shut down and lay off all the (engineers/employees)? If no, what rights does Twitter have to decide alterations to its product, if any, and where do they begin? If yes, who makes that decision: a government, a regulatory body, etc.?

What if the ToS said "don't circumvent our API rate limiting" and someone made a browser extension doing just that. Is it okay to have it in the Google chrome store? If it got in, should it stay up?

Note, against my points, the Supreme Court decision on scraping in the LinkedIn case.


Twitter is actually working on a new BYOC model for their platform. Called Blue Sky: https://twitter.com/bluesky


Tweetbot on iOS supports hiding images in tweets (but not hiding avatars). It costs a bit of money, but I have bought every version so far.


>What would it look like to have a thriving ecosystem of third-party clients for Google Docs style word processing, which can all interoperate with each other, even supporting realtime collaboration?

Well, based on decades of history of other complex file formats such as pdf, zip, and MS Office formats of doc, docx, xls, xlsx, etc... it would be a buggy mess. It didn't matter whether the format was reverse-engineered or an officially open specification.

The issue is that a plain text file used for programming code is linear from top to bottom and so the low complexity for universal editors is just parsing CR/LF to interpret lines. (Yes, there can be extra complexity of syntax highlighting but the base level complexity of opening the file for display is still just parsing CR/LF.)

The complexity is higher for pdf/zip/xls because a common theme is each has an internal hashmap/dictionary/directory that has byte pointers backwards and forwards to other parts of the file. And they have internal hierarchies of data structures. And changing from binary representation to XML such as Microsoft's OOXML doesn't change the base complexity, which is why LibreOffice has constant bug reports from users unable to open their particular docx/xlsx file.
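The gap in miniature (filenames below are placeholders): a text file is consumed linearly, while a zip reader must consult an internal directory, stored at the end of the file, that points back into earlier byte ranges:

    # Linear vs. container formats.
    import zipfile

    # Linear: line-oriented parsing is just splitting on newlines.
    with open("program.py", "rb") as f:
        lines = f.read().splitlines()

    # Structured container: the reader consults the central directory to
    # find each member's offset, sizes, and compression method.
    with zipfile.ZipFile("archive.zip") as z:
        for info in z.infolist():
            print(info.filename, info.header_offset, info.compress_type)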

When I collaborate with others in MS Word/Excel, a best practice is to make sure everybody is using the same version of MS Office. If somebody is using MS Office 2007 while others are on Office 2013, a roundtrip save & open between different versions will eventually corrupt the file or lose data. Even staying with just one vendor like MS can get unreliable. The wild west has lots of utilities/libraries that write incorrect zip files and broken pdf files.

>Some successful existing examples of client ecosystems built around open standards: [...] email clients

I'm not totally sold on this example. Yes, the open standard is SMTP for network communication... but there's another aspect that isn't standard: the on-disk file format for email archives. Microsoft Outlook uses binary PST files but Mozilla Thunderbird uses text-based MBOX files. But Mozilla's MBOX is slightly different from other tools that use MBOX.


> Some successful existing examples of client ecosystems built around open standards: [...] email clients

Yeah, the way they just casually slid that in there...

That example really cuts both ways.

Yes, email is fundamentally better than other messaging formats in one respect. Standardization gives it permanence and universality. There are many things we use email for where it would be absurd to use anything else.

But when you actually lift the cover on email, it is a crusty box full of hornets and spiderwebs.

The actual, in-practice email standard comprises:

- A weird, undocumented and ancient subset of HTML

- A weird, undocumented and ancient subset of CSS

- A motley assortment of supported attachment types. Multipart MIME. For the main message body, you can use any type you want as long as it's `text/html`

- A ton of headers. Did you know that SMTP FROM and the From line we all know and love are two different headers? Often containing two different addresses! Don't even look at threading/References/In-Reply-To, it will make you sad.

- And, of course, a bunch of overlapping bandaids to try to retroactively add authentication. DKIM, SPF, DMARC etc.

It is so bad that there's a whole ecosystem of companies that try to make it less painful, often with very brute-force approaches.

I once had to use an expensive tool called Litmus. They literally just run VMs for every OS + mail client combo, then show you how your email will render in each case. Haha try again, your CSS is broken on Win7 / Outlook 2013.

--

The same underlying facts that make it a great format as a user make it obnoxious as a developer.
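To make the two-froms point above concrete, a minimal sketch with Python's standard library (host and addresses are placeholders) - the envelope sender (SMTP MAIL FROM, where bounces go) is passed separately from the From: header the recipient's client displays, and the two can differ:

    # Envelope sender vs. displayed From: header.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "Newsletter <news@example.com>"   # what the recipient sees
    msg["To"] = "alice@example.net"
    msg["Subject"] = "Two different 'from's"
    msg.set_content("Hello!")

    with smtplib.SMTP("mail.example.com") as smtp:
        # Envelope sender (SMTP MAIL FROM) - where bounces are routed:
        smtp.send_message(msg, from_addr="bounces@example.com")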


Maybe collaborate using Markdown with stylesheets instead of Word, then.

As for the mbox problem: on a normal machine the mbox must be shared between multiple tools (at the very least: biff, lmtpd, and the MUA), so if Thunderbird can't interoperate, that's a serious bug.


"Give up all of these cool features because it's 'open'" isn't a winning argument with any consumer.

WYSIWIG rich-text documents are table stakes. Users have had them for 30 years, love them, and are never giving them up. Their layout (and thus their schema) are also incredibly complex by necessity.


Markdown is both underspecified and underpowered as a markup language.


There's the CommonMark spec: https://spec.commonmark.org/

It's still an underpowered language though.


I'm in medicine, and I desperately want this model for electronic health records.

The current state of lock-in has been effectively maintained by the entrenched players, and it's really bad.


Closest we have so far:

http://hl7.org/fhir/

I really like their documentation. My team has moved a lot of our stuff to produce FHIR (or something close to it), for interoperating with other teams and clients.


Since HL7 has become increasingly required, I've seen that the lock-in doesn't mean diddly squat. Now we're not "locked in", but for my new vendor to drop in, I have to pay an extra "API Fee" for them to whip up the API interface to pull everything from the old EMR into the new EMR.

So either we get to the point where we are legislating perfect compatibility (and I can't imagine how good EMRs will get once the federal government has to outline every individual data field, and update them through, what, the rulemaking process?); or we'll always be paying up for this transition, and lock-in is beside the point.


I saw that you doubled down on [this comment](https://news.ycombinator.com/item?id=26181228). Week 5 excess mortality for 65+ in Israel is now up to 7.4 times above normal. It increased every update for the last three weeks. Unbelievable that people like you take 1 minute to read something, think you are an expert and try and proclaim victory. I'm 100% sure you will double down again instead of taking the honorable path of admitting you were wrong. People like you never get held accountable.


I made my comment two weeks ago, describing a fall in mortality that began three weeks prior. The z-score of mortality for Israel peaked as I described in my last post, and has continued to fall. The 65+ group in particular is well within historical norms.

The aggressiveness of your statement requires me to include an image of the current mortality graph for anyone not willing to take the time to dig it out themselves: https://ibb.co/1vCsD7Y

People like you never get held accountable. Please continue thread-stalking me, I'm happy to keep this up.


Maybe we need a coalition of second-tier EMRs to band together and commit to supporting an API and a group to keep it open and updated? I agree that the current approach isn't really working, and doing more of the same is probably not going to help.


The problem, from my perspective, is that we're fighting the battle on two fronts: I need to get all my data from my old EMR into my new EMR, and I need my new EMR to slot in where my old EMR was with respect to feeding data to my data warehouse. Part of the difficulty there is the API, and part of it is that a bunch of shit is done as a black box in-EMR (e.g., my EMR will feed my warehouse some financial data, but my vendor is opaque as to how it's calculated).

It's gotten to the point where I don't want a legal requirement to be HL7 compatible: I want a legal requirement separating back-end data from front-end UI, so that we can shop for each of those independently. Once all the front-end shops (which are more valuable than the back-end - as a healthcare org, I care about documentation and billing and error prevention) can't lock you in via data, I imagine there will be a fucking quick race to be the universally compatible back-end. And the back-end is ultimately the stuff that affects patients (portability of records) and loosens the bindings on provider organizations (because... portability of records).

Which of course is why even things like HL7 didn't really start working until major orgs like CMS and NYS Medicaid came along and said "you will find a way to be compatible with HL7 voluntarily, or you will do it via regulation. One way or another it's going to happen within the next 12 months." (I was at a major conference where that was laid out pretty much that explicitly. It was wonderful.)


Re: Notion, Trello, etc. — wouldn’t enabling interoperability on top of a presumably-free “open core”, completely commoditize these services? (As, for anything they offered on top of their open core, you could instead substitute a third-party service that does the same thing by operating against the service’s API.)

Which is to say — in a world where interoperability is a social norm or legal requirement, how would these services exist? (I would suspect they wouldn’t.) And, without them, would there be any money in advancing the state of the art in these verticals?

It’d be a lot like if there were a legal requirement for every drug to have a generic available from the start. Would there still be an incentive for drug research?


> in a world where interoperability is a social norm or legal requirement, how would these services exist? (I would suspect they wouldn’t.) And, without them, would there be any money in advancing the state of the art in these verticals?

They would. The verticals wouldn't. A service like Trello is at least two components - a storage layer for lightly connected lists of rich data, and UI for displaying/managing them. In a world where interoperability would be the norm, these two components would be two different markets. The user would be free to choose or buy the UI separately from the storage layer. Most would probably choose a vendor that provides both services, just out of convenience.

The main difference would be that all these companies would operate on lower margins, and wouldn't support such insane valuations like our-world tech companies have these days.


That's why I hate forums without a proper e-mail gateway -- for me, plain-text e-mail is the most essential API between humans. My mail client is the main reason I stick with it, and I really hate it when someone changes something in a way that keeps me from using my preferred editor.

Every time some modern hype takes that from me, I cannot stop thinking of goose fattening: forcefully stuffing it down the throat, and not for the goose's good.


I feel Reddit is another thing that has "Bring your own client". I'm using the old design and Reddit Enhancement Suite to tweak it to whatever I want. Reddit also gives you everything as JSON if you just add ".json" after any URL.
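For example, a quick sketch in Python (assumes the third-party requests package; Reddit wants a descriptive User-Agent or it may rate-limit you):

    import requests
    # Appending ".json" to a reddit URL returns the listing as JSON.
    resp = requests.get(
        "https://www.reddit.com/r/programming/.json",
        headers={"User-Agent": "byoc-demo/0.1"},
    )
    resp.raise_for_status()
    # Listings nest posts under data.children[*].data.
    for post in resp.json()["data"]["children"]:
        print(post["data"]["title"])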


And Apollo, a wonderful alternative client for iOS.


It's a situation like parents: There should be an incentive for a company to build a closed, innovative system. But once that system converges on features, and they reaped their rewards, it should be replaced by an open standard.

Instant messaging was fine, WhatsApp and Slack brought a lot of innovation and drove adoption, and I would argue it has gotten to a stable place where we should all use Matrix.

We need to figure out that process of standardization and opening up.


> situation like parents

patents?


yes. patents :)


I work on a popular Google Docs alternative product in the market.

I've given some thought to this idea of experimenting with a standard for collaborative rich text that any client can implement. The problem, though, is that people want easy collaboration more than they want the freedom to bring their own clients.

To put it simply: how do we deal with assigning people (for @mentions, comments, document ownership, and content locking for a group of people) in such file formats? The SaaS universe has the concept of a userbase with unique ids, so when a person is assigned/mentioned, the product knows who is relevant and what to do. This implies we need a universal userbase standard (which is already hugely complicated) that is also adopted by the SaaS your target users belong to.

This is one huge roadblock that I don't see any practical solution for.


Decentralized Identifiers (DIDs) aim to solve this: basically, interoperable identifiers you can use across different applications while still referring to the same user (see the example below).

The initial concept for DIDs was made for blockchains and other decentralized projects (I think; someone please correct me if I'm wrong), but they're also useful for federation, or for centralized projects that want to offer flexible data migration.

- Specification: https://www.w3.org/TR/did-core/

- https://en.wikipedia.org/wiki/Decentralized_identifiers

- One organization working on DIDs: https://identity.foundation/

- Project leveraging DIDs: https://sovrin.org/
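For a flavor: a DID is just a URI of the form did:<method>:<method-specific-id>, and it resolves to a "DID document" listing keys and service endpoints. A minimal, hypothetical document, loosely following the did-core examples (property names vary by key suite):

    {
      "@context": "https://www.w3.org/ns/did/v1",
      "id": "did:example:123456789abcdefghi",
      "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#keys-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyMultibase": "z...key material..."
      }]
    }

An app could then attach @mentions and ownership to DIDs rather than to its own private user ids.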


Rich text is expressed with markup. The rest can be done with a protocol.

Just like clients would have to implement markup parsing and rich-text rendering, they would need to implement protocol endpoints for notifications, etc.


This applies well to the remote work environment. My employer makes no requirement about which editor or terminal or operating system or anything else we use to access their resources. The deliverables are (mostly code) files, and they just don't care what we use to manipulate those. They don't provide any hardware either.

What reads to some as worker exploitation is actually empowerment in practice.


I’m reading Thinking Forth at the moment and there’s a part in the description of Forth about how it decouples words from their parameters and return types by only allowing stack manipulation. This means that all words share a common interface and so they’re infinitely composable. The reason I started reading this book is due to learning how WebAssembly works which turns out to be quite similar to Forth. It got me thinking if there might be a way to make all code modules interoperable. Being able to compose an application or client out of many independent parts would be a dream come true.
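Here's a toy sketch of that composability property in Python (not real Forth semantics, just the idea): if every "word" is a function from stack to stack, then concatenating programs is literally composing functions.

    def push(n): return lambda stack: stack + [n]
    def dup(stack): return stack + [stack[-1]]
    def add(stack): return stack[:-2] + [stack[-2] + stack[-1]]
    def mul(stack): return stack[:-2] + [stack[-2] * stack[-1]]

    def concat(*words):
        # Concatenation of programs == composition of functions.
        def program(stack):
            for word in words:
                stack = word(stack)
            return stack
        return program

    # "3 dup +" and "3 2 *" denote the same function: [] -> [6]
    assert concat(push(3), dup, add)([]) == concat(push(3), push(2), mul)([]) == [6]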


Yup yup. I'm on the trail of just this using a purely functional Forth-like language called Joy.

Your hunch is correct, you're intuiting the "Categorical" (as in Category Theory) nature of "point-free" format:

http://conal.net/papers/compiling-to-categories/

> It is well-known that the simply typed lambda-calculus is modeled by any cartesian closed category (CCC). This correspondence suggests giving typed functional programs a variety of interpretations, each corresponding to a different category. ... This paper describes such an implementation and demonstrates its use for a variety of interpretations including hardware circuits, automatic differentiation, incremental computation, and interval analysis. ... The general technique appears to provide a compelling alternative to deeply embedded domain-specific languages.


That Conal Elliott paper is beyond my current level, and I don't see the relation to the comment about composability in Forth. Is there a simplified explanation somewhere? (Preferably less mathematical and more verbal, but I'll take what I can get.) Thanks.


I don't know if it's written up anywhere like that, sorry.

The very brief section on mathematical purity of Joy in the Wikipedia article describes it but in (IMO) an unhelpful way: https://en.wikipedia.org/wiki/Joy_(programming_language)#Mat...

> In Joy, the meaning function is a homomorphism from the syntactic monoid onto the semantic monoid. That is, the syntactic relation of concatenation of symbols maps directly onto the semantic relation of composition of functions. It is a homomorphism rather than an isomorphism, because it is onto but not one-to-one; that is, no symbol has more than one meaning, but some sequences of symbols have the same meaning (e.g. "dup +" and "2 *").

> Joy is a concatenative programming language: "The concatenation of two programs denotes the composition of the functions denoted by the two programs".[2]

[2] "Mathematical foundations of Joy" by Manfred von Thun https://web.archive.org/web/20111007025556/http://www.latrob...


  > make all code modules interoperable. 
  > Being able to compose an application or client 
  > out of many independent parts would be a dream come true.
You can stop dreaming, this exists and it's called json and npm install. Some would argue this particular implementation of it is a nightmare rather than a dream though :)

At some point you also have to integrate all this interop. The possibility of composing doesn't mean the composition automatically puzzles itself together.


BYOC for shopping online would be nice. Exchange formats for product aggregators sort of solve one part of the equation. But ordering is still an unsolved issue.

It shouldn't be that hard to do. You'd need maybe five endpoints: get all product data for the whole stock, get individual product images, get order requirements (terms and conditions, information required for the order, payment options, human contact info, etc.), post an order, and get order status.

On top of this API you could easily create fully featured 3rd-party online shopping clients. Some shopping experiences these days are atrocious.

Some hugely popular platforms have product lists hidden like 4 screens below the fold, separated by tons of crap that just gets in the way. Category selection is placed such that the categories slide away when you get to the product list, and if you want to switch categories you have to scroll all the way back up. No options for selecting alternative views, like in a file manager (large icons with a ton of detail, list view, etc.).

With each new eshop you have to learn how to navigate it, and collecting information about the T&Cs and payment/delivery methods is a hassle.

Could be so much better if there was some simple web protocol for this, and it could be exposed in addition to existing website, too.

In the simplest implementation, everything but the POST could just be some static json files uploaded to the http server. And POST /order could just be sending an e-mail to someone.
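For instance, a hypothetical /products.json (every field name here is made up for illustration) could be as simple as:

    [
      {
        "id": "sku-1042",
        "name": "Stainless travel mug, 450 ml",
        "price": {"amount": "14.90", "currency": "EUR"},
        "category": "kitchen/drinkware",
        "images": ["/products/sku-1042/main.jpg"],
        "in_stock": 37
      }
    ]

A third-party client could then render every shop the same way: proper list views, cross-shop price sorting, whatever the user prefers.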


I think you forgot the API endpoints that tell you how few items are left in stock and how many other users are looking at this item right now. Not all stores would be sophisticated enough to implement such an endpoint, though, but luckily it is easy to approximate in your client using stock=Random.nextInt(1,2) and users=Random.nextInt(50, 2000).


heh, if those are gone, good riddance. :)


I think a major omission from the author's wishlist of BYOC-capable applications is chat. Maybe in their case they are lucky enough to have a choice in the matter, but I think overall the walled gardens and closed APIs of the major chat programs are a shame. Every day that I have to use Microsoft Teams in an attempt to communicate with coworkers is an exercise in patience. Basic chat functionality should be open and interoperable.


Maybe you should suggest to your company to set up a bridge[0] to mirror it to something like irc.

0: https://github.com/42wim/matterbridge


disclaimer: Markdown Notes App Author

At least for simpler text documents, where formatting isn't a big concern, markdown seems to be winning as the standard. Sure, each of these apps has added a few features on top of the markdown standard, but even those features are slowly getting quite standardized. [0][1][2][3][4][5]

[0] https://obsidian.md

[1] https://zettlr.com/

[2] https://dendron.so/

[3] https://foambubble.github.io/foam/

[4] https://logseq.com/

[5] https://gitjournal.io

I wish using Git as the VCS would also become more standardized, but we're still lacking good clients that hide the complexity [5] (my project; it fails at hiding all of the complexity, but hopefully it'll get there).


I would like something like this but for banking logs. I got an alert from the bank about suspicious activity yesterday. The customer support rep couldn't give me much information about it. I had to stop myself from asking if I could see the logs to determine for myself whether it was a bug in their code or a real risk... I wondered if it was a bug because they had a similar buggy incident in the past.


Really, a good API is all that is necessary.

Also, maybe an SDK, but I am not particularly enthused by some of the SDKs I encounter, these days.

If a service can be exposed by a good, basic API that doesn't require (but also affords) an SDK, then this allows development of connectors. Many client packages, these days, are written to allow connectors to be developed.


For data-only applications. But if an application wants or needs to promise pixel-perfect results, at the very least, there would need to be a single standard rendering engine.


I would suggest otherwise. One of the reasons that people choose differing clients, is because they have preferences in how they like the data/process represented.

If it is necessary for a team to be "all on the same page," then the management needs to require that everyone use the same rendering client.


Can you elaborate? What kind of API? Just an HTTP/JSON API?


I prefer a REST-like API, like Google uses for their services (basically, sending is done with "raw" GET/POST/PUT arguments, instead of blocks of JSON/XML), and Raw/JSON/XML responses.
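Something in the spirit of this, say (a made-up endpoint, just to show the knobs-in-the-URL style rather than a JSON request body):

    GET /v1/documents?owner=me&modifiedAfter=2021-03-01T00:00:00Z&maxResults=20
    Accept: application/json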

But realistically, some levels of functionality are so intense, you need a “beefier” API, like GraphQL.

If that is the case, it might not be a bad idea to review the API. In my opinion, overcomplicated APIs are likely to be the result of the service trying to exert too much control over the interaction/presentation.

Many times, an API is a "back window" or partial view to a more ambitious effort. For example, if the project is a Web-based framework, it will have a lot of rendering and View/Controller functionality that may be of no interest to something like a specialized text editor for content maintenance, so there would be no reason to expose that functionality.

It may be a good idea to provide a suite of APIs, as opposed to a single one.


There is a company called EditShare whose primary stock in trade is (or was, times do change) digital video server appliances. AVID sold such a thing, but it requires AVID software to work... and it turns out that film editors are about as persnickety about their choice of NLE software as programmers are about their text editor choice. So Alice may favor AVID, Bob likes Final Cut, and Claire uses Adobe Premiere. All three are competent editors and the director wants them all on the project. By using standard file sharing protocols like AFP and SMB, the EditShare server lets them work on the same footage.


I don't see Google Drive ever providing an API for bringing your own client; it will probably have to be from the other side.

We already have OT and CRDTs for concurrently defining changes from each user, and a way to merge them. There is also the Braid spec (https://datatracker.ietf.org/doc/html/draft-toomim-httpbis-b...) attempting to standardize state synchronization, on top of which one can bring any client.


> I don't see Google Drive ever providing an API for bringing your own client

In a way, they already have, since you can mount your Google Drive on your local file system and interact with the files from there. The problem lies with the proprietary Google Doc format.


https://developers.google.com/docs/api

Most things you would want to do to a Google Doc are available via the OAuth API, right?


Not... really? I can't open it with a local application.

At least, not until someone (not me, no time) creates a local application which interacts with Google Docs via this API.

In the meantime, I'll continue to use .docx, since Google Docs can export and import from this format.


Yes! I think Braid is a good effort in this area, will add to the prior art section.


Great read! Just wanted to chime in and say I’m working on a collaborative editing backend [1] based on Y.js, which already makes some of the things you write about possible.

For example interop between different editors, like CodeMirror and Monaco. And thanks to Y.js, the server helps with syncing, but isn’t the single source of truth anymore. It’s more like Git, where every copy has all changes.

I really hope we all can leave the cloud behind soon.

[1] https://hocuspocus.dev


I went with the BYOC route for my current side project: https://inter.tube

It’s a music locker service with a custom web client. Instead of writing my own native app I implemented the Subsonic protocol: http://www.subsonic.org/pages/api.jsp

It's awesome to have a polished native app for "free" (e.g. play:Sub on iOS is really nice), but it's such an underspecified and loose spec that implementing it was a huge pain in the ass. For example, clients will choke on results missing values that are specified to be optional in the spec. The spec imposes integer offsets for pagination but my database wants string continuation tokens. It's specified as XML but there's an option to make it JSON, with zero information on how to deal with discrepancies. Some clients use GET, some POST, some mangle the URL or capitalize random paths. I just had to reverse engineer pre-existing servers to figure out what clients can handle, and there are still a few clients that are mysteriously broken. I'm going to have to test as many as possible and create a list of recommended ones.

I’m happy I went with BYOC but I highly recommend to plan for it from the start so you can deal with the protocol’s weaknesses sooner than later.


Looks neat. Mind shooting me an email matt@engn.com?


Thanks! Just sent one.


Isn’t this just a reinvention of the idea of Open Standards?

The fundamental problem is not that no one has formalized the idea, but that the baseline incentives for commercial software do not include optimization for interoperability.

For that we need standards bodies or other regulation - RFCs and the like. And, Hacker News, we need line engineers to recognize when their product is suboptimal by these standards, and why.


This isn't to detract from what the author has said, but Google Docs does have a public API now. I wanted to mention this because the public API is fairly new, and people may not know about it. I am not sure if the API is rich enough to achieve what the author is describing though. It's also not built on any kind of standard, like email, or other examples that are described.


Background gradient makes me dizzy after reading a single paragraph


(author here) sorry to hear that! Out of curiosity, what screen size are you reading on? Anyone else have similar problems?

Sounds like I should add a button to disable the gradient.


Hi! Sorry for the late reply.

Here's a screenshot of my screen from the time of writing the comment https://i.imgur.com/ngD1Xkj.png

I'm on a macbook pro 15.4-inch (2880 x 1800).

I see you've changed background, thank you for that! <3 Now it is far far better. Though I still find myself having to try harder to read the text on such color. I'm not sure of the reasons, though. Maybe it's me, maybe it's f.lux.

(though f.lux is on Movie Mode, meaning the dim and color correction are lighter than default)

P.S. I do not know of and never experienced any color perception problems.


It's a cool idea but I wrote a userstyle to override it. Reading on a plain color is easier for me


Thanks for the feedback everyone! Never realized it was causing problems. Will change or at least add an option.


It might be that it goes left to right instead of top down.


I'm having issues with this gradient too. (Though I'm running a fever after being vaccinated yesterday, so my perception might be different than usual today).

It feels like that optical illusion with disappearing background [1] where your brain attempts to re-tune itself, and it's kind of uncomfortable.

My screen width is 1280px.

[1] https://knowyourmeme.com/photos/834411-optical-illusion


Interesting premise, but some of the examples given on the BYOC wishlist are a little weird to me. Google Docs (the main example) makes sense, sure.

With others like Notion -- I recently started using Notion at a personal level, and I feel like much of the draw of Notion is basically the UI (the client). I don't use it to publish public pages or anything, only private notes. Most of its usefulness to me comes down to it having a really nice client that syncs between desktop/mobile/etc. I don't know what Notion would be without the built-in client. Simply a set of markdown files?

Other examples like Trello / Asana seem to already have solutions. They offer APIs, and while I haven't worked with them to know them extensively, my understanding is you can do basic CRUD operations on tickets. You could possibly build your own clients there.


Notion basically is the client. Everything can be exported as .csv or markdown, so the only things that Notion provides are the nice database features and the UI.


Unirest is the BYOC that I was hoping would get traction: https://github.com/kong?q=unirest

I don't understand why each API needs its own SDK. Or even why developers use SDKs instead of just calling the RESTful endpoints directly.


I think the problem described in this article is more fundamental: these services don't have documented, stable APIs for clients to build on. Not HTTP APIs. Not libraries either. In some cases they go out of their way to prevent third parties from using whatever internal API their own apps use. [edit to add: to combat abuse/spam, or because they have ads, or as deliberate lock-in.]

But to answer your question: when an API exists, why have an SDK? One reason is to provide strong typing (and everything that goes with it, like IDE auto completion). A very thin layer over the HTTP requests is enough to provide this. Eg, in one of my projects [1] I wrapped a HTTP API with Go calls like this:

    Method(ctx context.Context, req *MethodRequest) (*MethodResponse, error)
where each method call corresponds to exactly one HTTP request. It handles deadlines, cancellation, HTTP error checking, and using Go's built-in json marshalling/unmarshalling on structs. That's about all I want in an SDK. The unirest project you linked doesn't seem to offer strong typing so I wouldn't use it (even if it weren't dead).

[1] https://github.com/scottlamb/luxor


If the SDK in question is just a thin wrapper on REST then sure, it isn't providing very much value. But most of the SDKs I've used implement higher level logical operations -- things that require understanding the intent of the API/data model and how things are supposed to fit together. By being able to write my code against that higher level abstraction I'm able to save time and offload some of the responsibility for knowing how the pieces all fit together to the people who know that best. Ultimately my goal is not to make REST requests, it's to provide value to my customers, and the APIs I call are means to that end.


I understand both sides of it. The thing is that most code bases that aren't using an official SDK will tend to wrap the raw HTTP calls in something that performs the relevant operations anyway. A good SDK cuts out a few classes/functions that would have been written anyway.

Of course, that assumes that the SDK exists and is good.


Nice thoughts, now please think of some nice pro "bring your own OS" arguments I can make towards my IT department ;)

Regarding BYOC, I guess Git is where it is at right now, no? Although I switch clients for my personal note-taking every now and then, which works well with NextCloud syncing of MD files.


For the "todo list" item: check out Todo.txt and any number of the clients that use the same, human-readable/editable text-based format. I really enjoy being able to choose the client I want on my Mac, iPad and Android phone while keeping the same todo list file...


Small note: Trello has an excellent API - it is my go-to example of a pleasant API to work with as an end user.

That means you can "bring your own client": I have a "todo today" list that I pull via the API and display via conky right on my desktop.
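Pulling a list's cards is about this simple; a sketch in Python (IDs and credentials are placeholders, and I'm going from memory of the /1/lists/{id}/cards endpoint, so double-check against their docs):

    import requests

    LIST_ID = "your-list-id"             # placeholder
    KEY, TOKEN = "api-key", "api-token"  # placeholders; issued by Trello

    cards = requests.get(
        f"https://api.trello.com/1/lists/{LIST_ID}/cards",
        params={"key": KEY, "token": TOKEN, "fields": "name,due"},
    ).json()
    for card in cards:
        print(card["name"], card.get("due") or "")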


Part of the problem is that APIs are intrinsically harder to interop with than a local file format, for fundamental reasons like latency and shared state that others might modify from underneath you. You couldn't just read once at the beginning and then keep things in synchronous memory thereafter, persisting as needed but never reading again.

Of course that doesn't mean the accessibility situation couldn't be improved with (for example) more standardized APIs, better documentation/discoverability, etc. iOS Shortcuts make it easy to work with third-party services; what would the equivalent in a real programming context look like? There's probably some interesting brainstorming to be done here.


>Today we generally think about BYOC at the “app” level. But can we go finer-grained than that, picking individual interface elements?

Maybe we will end up re-inventing OLE from the 1990s for the web. In OLE 2, there was a standard OLE Compound Document storage API/format. Different apps could all work with different parts of a document. You could even change which app you wanted to handle a certain type of feature. It was complicated, but when it worked, it was pretty neat. This was done at a time when most people had maybe 4-8MB of RAM. With much more powerful CPUs and vast amounts of RAM, we could probably come up with something even better.


There is an app called Twetch which is basically Twitter on the blockchain with money. Recently, I wanted to share someone's feed to a friend, but Twetch requires you to login, and that friend didn't have an account. I then remembered a third-party app called bitsurf.network that essentially served this role, and sent him that instead. Similarly, before Twetch had its search feature, a random developer had built one, because the data was available. So long as you are OK with limited privacy, these are pretty magical experiences compared to the status quo. I wonder if the incentives will allow it to last.


We had standardisation at the protocol level for the internet, which let us build services on top, but now we're looking for the same at the app level. HTTP was meant for web pages; we never created anything for apps and services. What that new standard is going to look like, I have no idea. gRPC does well to define the APIs, and so we use it across dozens of services here: https://github.com/micro/services. Curious to hear what others think will happen.


FWIW, my $0.02: I think this problem has been noticed and solved over and over again. If you squint a little, stuff like CORBA is in the same stew pot, eh?

There's relatively recent work on collaborative editing and CRDTs, but as it says in the Wikipedia entry:

> The first instance of a collaborative real-time editor was demonstrated by Douglas Engelbart in 1968, in The Mother of All Demos. Widely available implementations of the concept took decades to appear.

https://en.wikipedia.org/wiki/Collaborative_real-time_editor...

There are a lot of these protocol/API tools like gRPC, Thrift, Swagger/OpenAPI, or even good old ASN.1, eh?

To me the question is how to drive convergence? How to alleviate https://xkcd.com/927/ ? (The "Standards" comic.) Who shall forge the "one true ring to bind them"?

- - - -

Also, I think Data Model catalogs (like "Data Model Patterns: A Metadata Map" by David C. Hay) should be part of the answer. Instead of rolling your own data models you should be able to just pull models from a standard catalog.

- - - -

In sum, if we're lucky, we'll start to get convergence in the protocol and model space. Anything that isn't working towards convergence is just adding another log to the burning pile of standards, eh?


I admire the thought here; however, as the author himself pointed out, this is an innovation killer in some ways. I think we then need to separate what can be shipped with universal BYOC and what can't. Any new innovation is built on existing features, and extending it to heterogeneous clients will be neither cost-effective nor innovation-friendly. I work on MS Office, and shipping the same feature on the online client vs the win32 apps takes 1-2 quarters and sometimes a whole rewrite.


I think no one wants to go through the work of creating a full-featured client. What most people want is just ad-hoc customizability.

For this reason, the web is great if you know how to code. Most sites are pretty easy to reverse-engineer so you can add whatever feature you want via extensions.

For example I do a lot of simple stuff with greasemonkey scripts like making stuff sortable or filterable, creating a bulk download and rename button, or just customizing appearances/css.


Another one is storing data. Apple, for instance, can easily lock in users with iCloud, because keeping all devices and apps in sync without it is so much more complicated: with iCloud you have one login for passwords, browser, files, etc.

In theory, a shared folder plus clients that handle multiple devices/operating systems well in their dot-files could replace that, but with most phones in a closed ecosystem, I don't see that happening.


I'd love to be able to pay a music streaming service a monthly fee to play their catalog without having to use any of their first party software.


On Google Docs, there were some CLI tools that today are deprecated.

EDIT: not yet:

https://coderwall.com/p/elfkaq/editing-google-docs-with-vim

The best setup would be:

Wordgrinder/nvi+py3-markdown for Google Docs.

Sc-IM+GNUplot opening Google Sheets.

MagicPoint generating HTML files usable for Google Slides.

Rclone for Drive.


ProseMirror might be a good starting point for a headless collaborative rich-text environment. It manages rich-text state and is collaborative out of the box.

https://prosemirror.net/examples/collab/#edit-Example


Not having to ship your own client to every platform is the biggest selling point of web apps (or cloud apps). The border between app and client shifted to more low-level, but also more capable interfaces: HTTP, HTML, CSS, JS rather than single-purpose file formats like DOCX or PSD.


Most of these closed systems exist because of gaps in the open alternatives, and those gaps may be difficult to close. For example, an open client for Google Docs would have to work fundamentally differently than, say, a closed-source word processor written in the 90s.


> Today we generally think about BYOC at the “app” level. But can we go finer-grained than that, picking individual interface elements?

Reminds me of this:

https://en.wikipedia.org/wiki/OpenDoc


The only thing I've used that has gotten close to this is emacs.

Even then, you have to fight the default ui/ux.


This article is basically describing the UNIX philosophy from a user-facing perspective, right?


Another great example of BYOC is SQL. Even though popular relational databases have slightly different SQL dialects, it's still possible to use different clients to query/view/design/update a relational database.


The problem with a BYOC client for Google Docs / Figma is that the underlying data isn't simple plaintext; it's too complex, and the protocol requires things like operational transforms, so BYOC wouldn't make sense.


Clients don't have to have the exact same power. For example, it would totally make sense to grep over a Google Doc represented as a plaintext file, or at least traverse the complex structure to extract some data, even if it's read-only.
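In fact, Drive's files.export endpoint can already hand a Google Doc back as text/plain, so a read-only "grep this doc" sketch could look like this in Python (file ID and token are placeholders; you'd need a real OAuth token with a Drive read scope):

    import requests

    FILE_ID = "your-doc-id"            # placeholder
    TOKEN = "your-oauth-access-token"  # placeholder

    # files.export converts a Google Doc to the requested MIME type.
    text = requests.get(
        f"https://www.googleapis.com/drive/v3/files/{FILE_ID}/export",
        params={"mimeType": "text/plain"},
        headers={"Authorization": f"Bearer {TOKEN}"},
    ).text
    for lineno, line in enumerate(text.splitlines(), 1):
        if "TODO" in line:
            print(lineno, line)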


BYOC is why I'm so excited about decentralized social media.

I hate how we have to put up with whatever UI / algorithms each site gives us. Why can't we have access to the underlying data and choose how we wish to consume it?


What about options for importing and exporting files? For example, Google Docs lets you download your file in another format, making it easy to open in another client, e.g. MS Word. Wouldn't that be somewhat of a BYOC approach?


This reminds me of the Solid project from Tim Berners-Lee (https://solid.mit.edu/), which I find very interesting.


This article was pleasant to read. Well reasoned, well delivered, and timely.


It does not surprise me that it's a kid proposing this. They probably live and breathe open source and still dream of a free and open world, where companies prioritize the user far above profit.


He's at MIT in the Software Design Group, so his objective is to do blue-sky thinking for things that will never make sense economically.


All I want from office software is a clone of Google Docs that is open source, self-hosted, allows concurrent file editing, and uses the OpenDocument file standard for storing files.

Humanity really needs that.


MSFT's docx is only a documented open format because the European Union sued them in like '07 for antitrust.

Fight for open file formats for common things.


Open network and interface protocols. This was Sun's argument before they opened up Solaris and Java.


It seems like local-first software is a good foundation for promoting Bring Your Own Client more broadly.

It also goes hand in hand with end-to-end encryption. This sounds a lot like the case I made for building client-first web apps, which I just posted on HN:

https://news.ycombinator.com/item?id=26356391


I'm a developer. Pretty much everything is a text file. Text files get edited with Emacs :)


I trust someone from the HN community will get started working on some of these :)


So maybe something like CRDT or OT on top of libp2p?

Does something like that exist already?


Not exactly, that I know of, but the automerge project is one of the more active open-source CRDT spaces.

https://github.com/automerge/hypermerge for example


I agree that you should choose your own clients, but I think the examples provided raise a different question: why isn't it like this today?

PDF and DocX files are open specifications that provide for extension. Nothing is stopping anyone from building clients around these formats with the features listed in the article. PDF is definitely more common as most programming languages have comprehensive libraries to work with it.

The path forward would be to build the features you want and publish the extension specifications for others to use. Perhaps the interesting question, however, isn't technical possibility but whether a market exists for it. Email clients were very widespread over a decade ago but have consolidated to 3-4 over the years. Hey.com is the first big new email client that I am aware of. I'm curious if it can prove there is big business in improving on existing, standardized specifications.


Git does not have collaboration tools. It's very much a "work in a silo for a period of time and then eventually figure out how to mash these things together" approach.


That is false. See the built-in git-send-email [0] and git-am [1] commands.

You may not like it or use it, etc., but Git does have collaboration tools.

[0]: https://git-scm.com/docs/git-send-email

[1]: https://git-scm.com/docs/git-am


I meant it in the sense of how it compares to Google Docs.


Has anyone tried to build real-time collaboration on Git? What if every participant ran a system that saved on every keystroke, committed, and synced with the remote? It would break due to conflicts as soon as two concurrent edits were too close to each other, but you could probably just accept either commit when they're so granular and happening in real time. The conflict is going to be obvious and fairly easy to resolve, since you will see it happen in the document itself.
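A naive sketch of that loop in Python, polling mtime instead of hooking keystrokes, and with none of the conflict handling a real version would need:

    import os, subprocess, time

    def autosync(path="doc.md", interval=1.0):
        last = os.path.getmtime(path)
        while True:
            time.sleep(interval)
            mtime = os.path.getmtime(path)
            if mtime == last:
                continue
            last = mtime
            subprocess.run(["git", "add", path], check=True)
            # Commit can fail if the content is unchanged; that's fine here.
            subprocess.run(["git", "commit", "-m", "autosave"], check=False)
            # Rebase so our micro-commits replay on top of everyone else's;
            # real conflicts would surface in the file itself, as above.
            subprocess.run(["git", "pull", "--rebase"], check=False)
            subprocess.run(["git", "push"], check=False)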

EDIT: For some reason, speculating about the potential to repurpose a technology is attracting replies that point out how it wouldn’t work seamlessly with how they are currently using it. I’m not even mentioning editing code, and I’m certainly not proposing that you try this in a random Git repo where other contributors aren’t expecting it.


Keep in mind that a big part of working with a VCS is refining your code until it is done or ready to be merged. Syncing on every keystroke would drown others in noise, and I'm usually not interested in seeing my colleagues' unpolished work-in-progress code.


Every time you modify a file in git you're actually creating a new file. Committing every keystroke would create a massive number of files that only differed by a single character. If you wanted to have a history of changes that can be shared and potentially merged then you would want deltas instead. A better underlying technology would be something like CouchDB https://docs.couchdb.org/en/stable/index.html


I think that's close to what overleaf.com is doing for LaTeX.


That’s interesting. Their website mentions Git integration [1] but I don’t get the impression that their real-time editing is built on Git. Do you have a reference for that?

1: https://www.overleaf.com/learn/how-to/Using_Git_and_GitHub


This problem is what CRDTs are good for. I'm not sure if Git is the right tool to build this on top of.


My comment speculates about what it might be like to repurpose Git for this, so I hope it’s clear that I’m also not sure if it’s the right tool.

CRDTs have the problem the article brings up, that they’re usually very specific to the data they’re modeling and need support from a very purpose-built editor. Git is supported by a lot of tooling and uses plain text which works everywhere.


True, but how would you create a useful client without having access to a data format that is specific to the application?


... why would you want that? How would you ever run the program when it's broken on every keystroke?


It may be the case that the Git model is good enough for the way a lot of people want to collaborate, if combined with a nice UI.

Maybe "save" tries to commit, and pulls up a diff interface if there is a conflict? UI indications of other people's commits pending, and maybe merge those with current state of the document if there is no conflict?

There are a lot of edge cases to consider. But I feel like there might be a way to design UX for a Git workflow that makes it comfortable and intuitive to Muggles.


I want to add Music services to the list


I want BYOC for consuming social media.


To work day?



