I'd like to see an enforced separation in law between commercial application providers and storage providers. So, if someone writes an app like Google Docs, they can't just store the data opaquely on their own cloud servers. They legally have to integrate with a separate storage provider. And that storage provider has an obligation to make my data accessible and manageable by me.
This wouldn't be a panacea, but it at least makes it theoretically possible to write a replacement client for Google Docs that could be a drop-in replacement.
I think that is a bit much. I would much rather make reverse engineering/screen scraping/whatever for interoperability 100% legal, with zero grey area, including a provision that says no TOS/EULA/NDA/non-compete/contract of any kind can sign away this right.
Basically, let asshole companies use technical means to try to stop us, but give them no legal recourse if we manage to get the cheese out of their trap.
That said, I would agree with such a mandate, as the costs are probably worth it.
(And if a company starts churning their API in bad faith, that's exactly the kind of thing courts are meant to figure out)
For security against social engineering, add some rigamarole around opting into API access.
Strict enforcement policy on cold storage data warehouses. And sensible policy for the app providers, like not caching PII.
Implement and enforce this division at the federal level, in a manner akin to Glass-Steagall.
Owning data should not be something providers can ever exploit.
I also think this could be used as a means to break apart Big Tech.
First, different services would still have to agree on data formats to interoperate. The ecosystem would be quite similar to Windows file formats, and we've got plenty of experience seeing how that works out. It's better than what we have now, but not a panacea.
The bigger issue is that in addition to wielding data lock in, these companies wield network lock in. With storage decoupling, if I load up my own profile in a hypothetical "Facebook competitor" app and make a post, Facebook has no desire to display that post to users of its app that are my friends. However with my API proposal, competitor apps have the ability to publish directly on Facebook's site and be treated exactly the same as every other post.
Also, my proposal has a longstanding philosophical grounding that the abilities of computing should be available to all - companies shouldn't be able to insist on specific methods of usage that computationally disenfranchise users.
While I agree that commercial incentives are critical to consider, two caveats I'd add:
1) I think the nature and depth of the incentives problem varies a lot by industry. Social media is a really complex case, for example. But I'm most interested in collaborative productivity tools, where I think many companies are okay being incentivized to build the best product rather than build data moats.
2) Cutthroat commercial incentives aren't the only thing that got us here. There are also tech barriers.
If I was starting a Google Docs competitor today, even if I wanted to make it open, I think it would be hard to pull off. What kind of API would I expose to enable realtime editing with good offline mode and conflict resolution? How would I deal with clients that have slightly different rich text representations than my own?
In an alternate universe where we already had a user-owned "web filesystem" with good answers to these questions, I could just hook up my client to that existing system and not even worry about persistence at all. It needs to be _easier_ for devs to build the right thing for users, not harder. And it needs to be a _better_ experience for end users, not a sacrifice. The convenient thing will win.
I know you're aware of Braid, but I'm not sure you see how much our work is actually aligned. In fact my deepest personal motivation in creating Braid is to enable the "BYOC" vision — but I've been calling it a "separation of UI from state", where each user can choose their own interface to interact with the world. In today's web, the data owner controls the interface. But this means that they control how way too many people interact with the world, because we increasingly have to interact with the world through computers.
The Braid abstraction makes it easier for developers to program with distributed state, and in the process also makes user-interfaces and back-ends interoperable, so that we can easily switch out different UIs for the same state.
That or a consortium of companies agree on some sort of standard. (See the work on pc buses in the early pc days)
But you are right. When companies operate by trying to lock in as many users as possible during an investor-subsidized growth period, this model produces precisely the opposite outcome, since the whole point is to build a moat around the sales model.
Email is only useful because each server can send and receive with other servers.
But we all know this, so what's the point here?
I would say that my point is: if interoperability becomes more useful than the walled-garden approach, then we will see something like this post's call for BYO client. Email is an example. We have interoperable email because it was more useful than service-specific inboxes.
Until interoperability is necessary for Word, we won’t see an opening for other Word “clients”.
Aside: you could argue that Google built Gmail to make a dent in the ubiquity that was Exchange for corporate messaging. It exploited the openness/interoperability of email in order to do this. If you accept that argument, then maybe they should take a page from that playbook and work to make Google Docs as open as possible. Allow for more interaction with external clients. Let a more open ecosystem develop. That might have the potential to diminish the position of Office for document creation.
Or maybe email is just so old, the owners got established before it could be commercialized... it might happen again, but I’m not hopeful. For example, I don’t expect Twitter to care about federating their system with other messaging applications. There is too much to be gained (selling ads) by keeping people on their site.
When one person controls the whole system, it's easy to pivot rapidly. I guess that's another argument against having widely used protocols (even though I disagree with it.)
* and more
The problem isn't an "economic model". The problem is that most open-source projects simply don't attract UI designers, UX researchers, and users because contribution flows are more technical or geared toward developers.
UX research isn't "expensive" when referring to open-source projects because it predominantly requires people: designers, researchers, and users. The whole foundation of open-source is volunteerism.
Most developers I've worked with don't exactly love going back and forth with designers on UI stuff.
And volunteerism was much more popular in the 1990s, when it was easy to write software but hard to sell it. App stores suffocated open source by offering a path to revenue.
It got beaten by FB Marketplace because Facebook has almost three billion captive users, and a marketplace bundled in the service they use many times a month takes less effort to visit than a completely separate site.
From this, and many other cases - like e.g. everyone hating the IM/videoconference system they use and yet using them anyway, I conclude two rules of thumb:
- UI that's good for users has very little to do with what UI/UX specialists peddle today;
- Network effects trump UIs - a sticky service with bad UI will beat a competitor that offers much better UI, but doesn't hold the user captive.
Had they not fiercely destroyed all the various "Search all Craigslist" mechanisms, they might have had a chance at fighting FB Marketplace.
Just the other day I had someone, an older person, complain about the "revamp" of an internet banking site. They replaced their functional, working UI with a site designed by "UX designers", and you know how he phrased his opinion of the new site? "They made the site for stupid people".
Well that's the real trick, isn't it? </Han Solo>
Our government is "captured" by campaign financing. Now we have 2 problems: ineffective governmental oversight, AND Citizens United. We have to solve the latter before we can ever solve the former.
That said, I agree, standards are necessary, I just wanted to rant about NPT.
https://datatransferproject.dev/ from yesterday’s iCloud photos thread seems to be a step in this NPT direction though.
I've been meaning to switch from Apple Music to Spotify for a while now (I'm on Android). I just had a look around the Spotify app today and the interface is terrible, it won't let me play and view my library like I want to. It would be so much easier if I could just use both from a single open source client, and then I wouldn't have to switch off to soundcloud and bandcamp for certain artists.
Wouldn't it be in the interests of players like Google and Apple to create this service/protocol, even before the regulatory net starts closing in?
Requires some setup and having an additional client installed for the actual audio playback but it's still interesting using Spotify from a terminal.
Another more practical problem is how do you even get something like this on the agenda of a lawmaker? This isn’t a readily apparent problem to most people but it likely would result in less monopolistic outcomes for cloud providers and potentially better and/or more efficient products.
I think client interchangeability and ultimately user freedom is fundamentally about data driven design, look at all the examples: text is text is text... git is a file based graph of merkle trees. They are both highly abstract, portable, unopinionated data structures that don't describe APIs, UI or implementation - don't like git CLI? you can literally write a completely different one with different porcelain and use the exact same git repos as everyone else, people have done this. Text is the same, it's just more obvious because so many examples exist.
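To make the git example concrete: every git object is content-addressed, so any alternative porcelain that computes the same hashes can read and write the same repositories. A minimal sketch of how a blob's object ID is derived (git hashes the header `blob <size>\0` followed by the content):

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Compute the object ID git assigns to a blob, without any git binary."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches `echo "hello" | git hash-object --stdin`
print(git_blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```

Because the identity of the data is defined by the data itself rather than by any one tool, alternative clients interoperate by construction.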
If you start with the data and make it successful, clients will emerge around it without you even having to choose to make it interchangeable.
The problem with things like Google Docs isn't the app, it's the opaque data format. And then there are formats posing as open and portable, like DOCX, which is essentially a proprietary Microsoft format forced open through reverse engineering - all of its details are no doubt closely coupled to the history of MS Word features and implementation.
Martin Kleppmann has a great talk on CRDTs for distributed text editing. It is a hard problem. You also need a dedicated binary format (basically a table) but at least in this case it is a lot easier to use an open format with federated clients.
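For intuition, here's a sketch of one of the simplest CRDTs from that literature, a grow-only counter: each replica increments only its own slot, and merging takes the element-wise maximum, so replicas converge no matter the sync order:

```python
class GCounter:
    """Grow-only counter CRDT: one slot per replica, merge = element-wise max."""

    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts: dict[str, int] = {}

    def increment(self, n: int = 1) -> None:
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self) -> int:
        return sum(self.counts.values())

    def merge(self, other: "GCounter") -> None:
        # Merge is commutative, associative, and idempotent,
        # so replicas converge regardless of sync order.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

a, b = GCounter("a"), GCounter("b")
a.increment(2); b.increment(3)
a.merge(b); b.merge(a)
assert a.value() == b.value() == 5
```

Text editing needs far richer structures than this (the hard part Kleppmann's talk covers), but the convergence guarantee works the same way.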
How about web search and social news feeds? Can there be a similar frontend / back-end separation for Google, FB and Twitter? I am thinking - different search front ends with different UIs, rankings and filters. We could have an EFF-Google and a Conservative-Google. Users could pick their preferred flavor, as I don't think it's fair for Google to decide for us all.
So while everyone can implement the standard, very few can do this profitably (In practice, Qualcomm, Mediatek and to some extent Samsung)
In due time we'll have networks of small providers catering for BYOC hosting. Cryptocurrencies can play a part in efficient and fair compensation.
But then again, maybe by the time we catch up to current date, everything happens in walled VR moats.
Maybe I'll be the oddball in the metaphorical cabin in the woods, but given the progress we're making so far I'll refuse to enter a closed VR/AR ecosystem.
Another option would be something similar to farm subsidies, although I don't know if that would be enough.
There's certainly still a whole class of applications when owning the data is a competitive advantage, but there are also lots of organisation whose core competency is not selling data, even though they do have to work with user data. When storing data yourself is a risk, it becomes a lot more attractive to just let the user store that data somewhere themselves and have them give your application access to it.
I'm sure this is one of the reasons we're seeing a lot more interest in Solid from Europe than elsewhere.
Then of course, there are also lots of organisations struggling to maintain correct and up-to-date data. If multiple organisations access the same data as controlled by the user, the user is more likely to have kept that data up-to-date.
(Disclaimer: views are my own.)
That being said, I'm a huge fan of splitting concerns - such as connecting anything and everything via OAuth delegations with narrowly scoped permissions!
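As a sketch of what "narrowly scoped" means in practice (the scope names here are made up for illustration): a token carries only the scopes the user delegated, and each action checks for its exact scope:

```python
# Hypothetical scope-check helper; scope names are invented for this example.
def authorize(granted_scopes: set[str], required_scope: str) -> bool:
    """Allow an action only if that exact scope was delegated to the token."""
    return required_scope in granted_scopes

token_scopes = {"docs:read", "docs:comment"}  # user granted read + comment only
assert authorize(token_scopes, "docs:read")
assert not authorize(token_scopes, "docs:write")  # write was never delegated
```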
Honestly this is wrong. Google Drive has an API for everything, Notion is working on an API, Confluence has an API for everything, Trello ... Any serious text editing software (including all the ones wrongly cited in the article) has an API for everything. Want to export your Confluence data to Notion? You can. In the documentation-tools space, owning the data is not something providers seek, because companies pay for the service, so the data belongs to them.
BYOC could exist today; the only reason it doesn't is that the document formats are super complex and require complex editors. But the formats themselves are not secret, the application is (and not even always; for example, the Confluence editor is open source).
You have the full specification of Google Docs documents and an API to edit any kind of block inside them.
Second, is that meant for building a full blown client or is it meant for 3rd party integrations, like plugins?
I'm curious of what is missing
> Second, is that meant for building a full blown client or is it meant for 3rd party integrations, like plugins?
It is certainly not for plugins, as there are libraries to call that API from Python, Ruby and others, which are not meant to be run in the browser.
My point is that if you want to build an editor for Google Docs, you can do it: there is an API to get the doc format and an API to make modifications to that document.
My initial response was related to missing comments and collaborative real-time editing in the v1 link you provided. However, I searched a bit more and found that their newer API versions include both of those features.
Here's a blog that's announcing their real-time collaborative editing back in 2013, so that's been there for a while.
That blog post also lists a few products that make use of the real-time feature but the only client that's an actual editor ("Neutron Drive") has since died with its domains no longer working.
I guess this is one of the rare cases where you can build a client based on their API and Google actually promoted one of them but I'd still say that it's not really meant for it.
The primary use case for these APIs is bot services like Zapier and integrating a "Save to cloud folder" feature in your own app.
Even if they actively encouraged you to build a client based on their API, who would risk it? We've seen how Twitter handled 3rd party clients after it got big and that's likely going to be the case with every major player.
As I see it, cloud providers are currently entrenched as Lords of Everything because they can both provide the infrastructure and the products that run on top of it. Making them act more like utility providers would reduce their power, I think.
> enforced separation in law between commercial application providers and storage providers. So, if someone writes an app like Google Docs, they can't just store the data opaquely on their own cloud servers. They legally have to integrate with a separate storage provider.
Now, is the law going to mandate the exact API as well?
The likely implementation of this is that, just as every app developer copies every other app developer, when choosing which User Storage Backend to integrate with, they will pick the most popular one, or a near competitor. AWS, Azure, or Google Cloud. (Apple-focused developers may choose Apple Cloud; I'd expect Facebook to spin up one if this became law too).
Just as you don't get general-purpose OAuth integration so much as "log in with Facebook / Google" buttons. Those are your two choices.
Yes, the original intent would be targeted at Google and Microsoft, which currently own both big web apps and big cloud platforms to run them on. I'm not convinced that splitting them vertically would stick; the convergence effects are very strong. So you end up with (choice of two office suites) x (choice of two backend providers), big deal.
Is it sufficient that Google Cloud Storage would have a separate stock ticker from Google Cloud Apps?
It reminds me of rail privatization and the nonsense of having thin shell companies run the trains while leasing all the rolling stock from a couple of companies and running on tracks owned by exactly one company. It didn't really expand choice and it provided plenty of opportunity for blame deflection.
But what I'm suggesting would be a functioning regulatory regime: An independent authority who classifies cloud providers as utilities, writes regulations specifying standards of interoperability, hears complaints from people and businesses and acts to enforce them. Antitrust regulation that stops collusion between service and application providers, and prevents companies or individuals owning a controlling stake in both.
And the major difference I see in contrast to railways is that digital infrastructure is not limited in the same way physical infrastructure is. So long as there's only one set of tracks, you can never have a functioning market running trains on them, and it's impossible to build a second set. But while the capital costs of setting up a new data centre are large, they are still within the realm of possibility for many large companies.
You imagine a benevolent dictator making optimum choices. I think the F35 design/procurement process is a better metaphor.
In the areas in which "bring your own client" is the standard ("text editors / IDE, RSS readers, email clients, web browsers" to cite the article), there is a seemingly greater diversity in hosting than when it comes to the backends of the services mentioned.
Yes, many of these _may_ use AWS in the end, but there is a great many of them that don't.
This is why people don't like nerds, it's like you actually want to fuck up this industry. The number of users that care about this is a rounding error. What you're talking about is increasing the complexity and flakiness of every product through modularization. Have you ever actually shipped a product for the general public?
Same can be said about gmail. In the old days, one can access gmail via imap/pop3. I believe that was removed. All email ought to provide an alternative way of getting the email other than company's web interface.
I believe it's still possible.
Also, IIRC it required some adjustments from mail clients to the "IMAP as Gmail sees it" protocol, like one message being visible in several folders, since "folders" in the IMAP sense were mapped to "labels" in Gmail.
There's more on this from 'moxie
Then again, this was never the case in the EU, where bundling handsets with a GSM subscription was banned.
But yes, we don’t have to imagine - and the change certainly didn’t come out of the goodness of the carrier’s hearts!
It is not: https://en.wikipedia.org/wiki/SIM_lock#European_Union
Last time I was looking for a phone (~8 years ago), in France, I had to check which ones were simlocked.
And while the second option does lock you into one operator for 12-24 months, you still had the choice of which operator to tie yourself to; I don't think I ever saw a phone model available from only a single provider.
Finally, no matter which operator you chose, you could still call people on other operators, even if the operators initially tried very hard to make this less pleasant by offering free texts and minutes only within their own network.
Many things replaced Usenet. Usenet is more of a protocol (carried over NNTP) than a social media platform. If you're in any way a systems thinker, you know that protocols are more resilient than services.
* To be clear, I'm rather ok with piracy. NNTP is just really poorly suited for it.
That is a nice insight.
Yes, this is a problem you also see with the Fediverse (based on W3C ActivityPub). You get innovators and laggards, and the latter, when they are popular, can hold the former back (you see this with Mastodon, which as an early adopter created its own client APIs, but is now lagging in implementing the Client-to-Server part of the AP specification).
At the same time, standardization of federated protocols can also be an advantage in that it allows many different projects and applications to be developed and matured independently. Innovation can come from unexpected corners here (heads up to openEngiadina with the ERIS, DREAM and Spritely projects).
There are 2 kinds of ActivityPub servers.
- The "dumb" one, that does minimal processing and simply stores JSON objects. Someone POSTs an activity to its inbox, the server does minimal required verification, stores it, and the client then queries it with GET. And does the reverse for the outbox. That's it. That's where c2s would work fine.
- The "smart" one, that treats s2s ActivityPub like an API, comes with a built-in web interface, and stores everything in a way which makes sense for its particular presentation. That's the kind I'm making (Smithereen, it's on the list you linked, btw), and that's what Mastodon is. Implementing c2s in this kind of server is a major pain in the ass, and I won't. You'd have to throw away all your careful optimizations, synthesize ActivityPub objects out of things that never were ActivityPub objects in the first place, do horribly inefficient things to merge notifications and feed and other conceptually unrelated stuff from several different database tables just for the client to split them back apart... Doesn't make much sense. A domain-specific API is the only way to make a client for this kind of server.
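To illustrate the "dumb" flavor, here's a minimal sketch (plain Python, no HTTP plumbing) of a store that accepts activities as opaque JSON, does only the minimal verification described above, and returns them untouched; all presentation lives in the client:

```python
import json

class DumbInbox:
    """Sketch of a 'dumb' ActivityPub-style store: activities go in as
    opaque JSON, get minimal validation, and come back out unmodified.
    The server never interprets them; that's the client's job."""

    def __init__(self):
        self.activities: list[dict] = []

    def post(self, raw: str) -> None:
        activity = json.loads(raw)
        # Minimal required verification: it must at least look like an activity.
        if "type" not in activity or "actor" not in activity:
            raise ValueError("not an activity")
        self.activities.append(activity)

    def get(self) -> list[dict]:
        return self.activities

inbox = DumbInbox()
inbox.post('{"type": "Create", "actor": "https://example.org/alice", '
           '"object": {"type": "Note", "content": "hi"}}')
assert inbox.get()[0]["object"]["content"] == "hi"
```

The "smart" kind can't work this way precisely because its storage schema is shaped by its own presentation, as described above.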
As with, say, LaTeX, or Unix/Linux.
According to Wikipedia, Reddit was created the same year Usenet service was discontinued by AOL. Reddit "replaced" Usenet in the same sense that cell phones "replaced" telegrams.
They discontinued the service when Usenet was no longer a selling point, because they had destroyed it.
>>> So while it’s nice that I’m able to host my own email, that’s also the reason why my email isn’t end-to-end encrypted, and probably never will be. By contrast, WhatsApp was able to introduce end-to-end encryption to over a billion users with a single software update.
Standard APIs let you do this - even if you have a "binary format" like MySQL or PostgreSQL (on disk) - nobody really complains about that because they have a defined API you can interact with.
See https://www.sqlite.org/security.html for tips on safely opening SQLite database files that have been tampered with by a hostile agent.
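Following that page's advice, a hedged sketch of what "safely opening" might look like in Python: open the file read-only and run an integrity check before trusting anything in it (the page lists further mitigations beyond this):

```python
import sqlite3

def open_untrusted(path: str) -> sqlite3.Connection:
    """Open a possibly hostile SQLite file read-only and sanity-check it
    before trusting it. (The linked page lists further mitigations.)"""
    conn = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
    (status,) = conn.execute("PRAGMA quick_check").fetchone()
    if status != "ok":
        conn.close()
        raise ValueError(f"integrity check failed: {status}")
    return conn

# Demo with a healthy database file:
db = sqlite3.connect("demo.db")
db.execute("CREATE TABLE IF NOT EXISTS t(x)")
db.execute("INSERT INTO t VALUES (42)")
db.commit()
db.close()

conn = open_untrusted("demo.db")
assert conn.execute("SELECT x FROM t LIMIT 1").fetchone()[0] == 42
```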
"It's possible to get it right" is not a great security boundary.
Like the author, I would really like to see some open protocols (and adoption) for todos and calendars. I mention adoption because for calendars there are some standards, but popular software like Google Calendar and Exchange do not implement them properly or fully.
Protocols and binary formats have their place too -- but there's simply nothing as universal as text.
It does, https://developers.google.com/docs/api/reference/rest/v1/doc...
The reality is that building an editor for text is easy. Building a rich text editor is faaaaaaar from being a simple thing
I say this with a lot of love for the idea. My company sells a collaborative knowledge base that supports Bringing Your Own Client. Out of the hundreds of users I have spoken to, only ONE has ever asked me if they could use their favorite markdown editor to access their knowledge base.
On the positive side of things, having the capability to bring your own client makes it really easy for us to support many different use cases. We just have to write those clients ourselves.
Because businesses focus on meeting users' desires instead of needs, they are content with the status quo of rigid UI/UX design.
This is why we see BYOC so prevalent in developer tools like text editors and terminal emulators, but not in office tools like Google Docs or art/design tools like Photoshop and Maya.
I've heard the phrase "industry standard" used a million times or more to defend rigid proprietary tooling, even by users who would sincerely benefit from and enjoy more freedom in their tooling.
I've tried to build apps with a strict client / server separation for a decade now, it made sense at the time because of mobile apps, and it makes sense now due to application half-life (front-ends don't last as long as their back-ends).
In my current project we may offer API access to our customers in addition to a user interface.
In the wishful cases listed, collaboration is the norm, so to make it BYOC, you'd have to expose core "mutation" API or else use a general convention that is understood across the board.
In my dream world, we'd use CRDTs for the data, and the "schema" of the data for a given "service" (say something like Google Docs) would be open. The data storage layer would be a commodity and you could swap in different providers as you see fit. Of course, there is no benefit to the creators of such services in doing this, and I don't think CRDTs are quite there yet in defining mutation efficiently with respect to multiple collaborators. However, from a data portability standpoint, it feels like the ideal to me.
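As a sketch of the kind of trivially mergeable building block this implies, here's a last-writer-wins register; real systems would use hybrid logical clocks, but a timestamp plus a replica-id tiebreak is enough to show the merge rule:

```python
from dataclasses import dataclass

@dataclass
class LWWRegister:
    """Last-writer-wins register: a trivially mergeable unit of state."""
    value: object
    timestamp: int
    replica: str

    def merge(self, other: "LWWRegister") -> "LWWRegister":
        # Highest (timestamp, replica) pair wins; the replica-id tiebreak
        # keeps the result deterministic on concurrent same-time writes.
        return max(self, other, key=lambda r: (r.timestamp, r.replica))

a = LWWRegister("draft v1", timestamp=10, replica="laptop")
b = LWWRegister("draft v2", timestamp=12, replica="phone")
assert a.merge(b).value == b.merge(a).value == "draft v2"
```

Any storage provider that can hold `(value, timestamp, replica)` triples can participate, which is what makes the storage layer a commodity.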
It’s the collaboration features. And no I don’t mean the multiple people typing the doc at the same time.
It’s the ability to quickly share a document with a set of people that you want.
Sure I use Vim every day, but without that git push and a trip to GitHub to invite collaborators I’m a man on a desert island waiting for that next weekly ferry.
Code editors and that ecosystem is good for code, but too slow for people who just want to share some recipes with their brother and not think about it too much.
Until we have sharing infrastructure easy enough for any client to sit on top of, it's all a dream.
In particular, it lets you do that while maintaining a single source of truth.
It's equally easy to just email people a document. But then if you change the doc, they still have the old version. Maybe they forwarded it on to some other people you don't know about after making some changes. The next thing you know, there are twenty versions of this file floating around, all slightly different, and everyone thinks they are on the same page.
For example: hiding avatars completely or generating replacement avatars using the username to remove any chance of internal bias associated with an account's avatar. Another one: hiding images by default and forcing you to click to expand/open to see them (no previews). A lot of these ideas are intending to modify Twitter's default salience landscape.
However disappointing it might be though that I can't do this, I get it. These kinds of modifications could totally change the ways in which a user experiences Twitter, and how Twitter would be able to monetize those users. Simply forcing Twitter (via legislation) to allow BYOC doesn't seem like a good idea because of this. It doesn't seem right to force them to run a service with a reduced potential for monetization -- e.g. many clients could just not show any ads, which would totally remove Twitter's ability to monetize at all.
An idea might be to charge users money in order to allow BYOC, so Twitter could focus on building out the core offering, which is just basically the public messaging substrate of the internet. But I'm not at all sure how well that would work out in practice.
Here's some example code from a web extension I wrote that removes promoted tweets: https://gitlab.com/t0astbread/no-promoted-tweets/-/raw/maste...
Extending the analogy ad absurdum: should (CompanyX/Twitter) be allowed to have ToS against a (browser extension/etc.) which would reduce its revenue to $0, and theoretically force it to shut down and lay off all the (engineers/employees)? If no, what rights does Twitter have to decide alterations to its product, if any, and where do they begin? If yes, who makes that decision: a government, a regulatory body, etc.?
What if the ToS said "don't circumvent our API rate limiting" and someone made a browser extension doing just that. Is it okay to have it in the Google chrome store? If it got in, should it stay up?
Note, against my points, the Supreme Court decision on scraping in the LinkedIn case.
Well, based on decades of history of other complex file formats such as pdf, zip, and MS Office formats of doc, docx, xls, xlsx, etc... it would be a buggy mess. It didn't matter whether the format was reverse-engineered or an officially open specification.
The issue is that a plain text file used for programming code is linear from top to bottom and so the low complexity for universal editors is just parsing CR/LF to interpret lines. (Yes, there can be extra complexity of syntax highlighting but the base level complexity of opening the file for display is still just parsing CR/LF.)
The complexity is higher for pdf/zip/xls because a common theme is that each has an internal hashmap/dictionary/directory with byte pointers backwards and forwards to other parts of the file. And they have internal hierarchies of data structures. Changing from a binary representation to XML, as in Microsoft's OOXML, doesn't change the base complexity, which is why LibreOffice has constant bug reports from users unable to open their particular docx/xlsx files.
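You can see the "internal directory with byte pointers" directly in a ZIP file; this sketch builds a tiny archive in memory and then parses its end-of-central-directory record by hand to follow the backward offset:

```python
import io
import struct
import zipfile

# Build a small zip in memory, then locate its end-of-central-directory
# (EOCD) record, which points backwards to the central directory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("hello.txt", "hello")
data = buf.getvalue()

eocd_pos = data.rfind(b"PK\x05\x06")  # EOCD signature
# EOCD layout after the signature: disk(2) cd_disk(2) n_disk(2)
# n_total(2) cd_size(4) cd_offset(4) comment_len(2)
n_total, cd_size, cd_offset = struct.unpack_from("<HII", data, eocd_pos + 10)

assert n_total == 1
# The offset points back at the central directory header record:
assert data[cd_offset:cd_offset + 4] == b"PK\x01\x02"
```

A linear text file has none of this: there is no index to corrupt and no pointers to keep consistent, which is exactly why universal editors are easy for text and hard for these formats.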
When I collaborate with others in MS Word/Excel, a best practice is to make sure everybody is using the same version of MS Office. If somebody is using MS Office 2007 while others are on Office 2013, a roundtrip save & open between different versions will eventually corrupt the file or lose data. Even staying with just one vendor like MS can get unreliable. The wild west has lots of utilities/libraries that write incorrect zip files and broken pdf files.
>Some successful existing examples of client ecosystems built around open standards:
[...] email clients
I'm not totally sold on this example. Yes, the open standard is SMTP for network communication... but there's another aspect that isn't standard: the on-disk file format for email archives. Microsoft Outlook uses binary PST files but Mozilla Thunderbird uses text-based MBOX files. But Mozilla's MBOX is slightly different from other tools that use MBOX.
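The divergence is easy to poke at with Python's stdlib `mailbox` module, which reads one particular mbox flavor; a sketch:

```python
import mailbox
import os
import tempfile

# Write a minimal two-message mbox file by hand, then read it back.
# "From " at the start of a line is the (fragile) message separator.
raw = (
    "From alice@example.org Thu Jan  1 00:00:00 2020\n"
    "From: alice@example.org\nSubject: one\n\nbody one\n\n"
    "From bob@example.org Thu Jan  1 00:00:01 2020\n"
    "From: bob@example.org\nSubject: two\n\nbody two\n"
)
path = os.path.join(tempfile.mkdtemp(), "inbox.mbox")
with open(path, "w") as f:
    f.write(raw)

box = mailbox.mbox(path)
assert [m["Subject"] for m in box] == ["one", "two"]
```

That separator convention is exactly where the flavors disagree: tools differ on how to escape body lines that happen to start with "From ", which is why one tool's mbox isn't always another's.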
Yeah, the way they just casually slid that in there...
That example really cuts both ways.
Yes, email is fundamentally better than other messaging formats in one respect. Standardization gives it permanence and universality. There are many things we use email for where it would be absurd to use anything else.
But when you actually lift the cover on email, it is a crusty box full of hornets and spiderwebs.
The actual, in-practice email standard comprises:
- A weird, undocumented and ancient subset of HTML
- A weird, undocumented and ancient subset of CSS
- A motley assortment of supported attachment types. Multipart MIME. For the main message body, you can use any type you want as long as it's `text/html`
- A ton of headers. Did you know that SMTP FROM and the From line we all know and love are two different headers? Often containing two different addresses! Don't even look at threading/References/In-Reply-To, it will make you sad.
- And, of course, a bunch of overlapping bandaids to try to retroactively add authentication. DKIM, SPF, DMARC etc.
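On the two-froms point, the split is easy to demonstrate: the header From is part of the message, while the envelope sender is passed separately at SMTP time (the addresses below are made up):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "Newsletter <news@brand.example>"  # the From line users see
msg["To"] = "you@example.org"
msg["Subject"] = "hi"
msg.set_content("hello")

# The envelope sender travels in SMTP MAIL FROM, not inside the message.
envelope_from = "bounce-12345@esp.example"
# With smtplib you would send it separately:
#   smtplib.SMTP(host).send_message(msg, from_addr=envelope_from)

# SPF checks the envelope_from domain; DMARC then checks whether it
# aligns with the domain in msg["From"]. Here the two don't match:
assert envelope_from not in msg["From"]
```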
It is so bad that there's a whole ecosystem of companies that try to make it less painful, often with very brute-force approaches.
I once had to use an expensive tool called Litmus. They literally just run VMs for every OS + mail client combo, then show you how your email will render in each case. Haha try again, your CSS is broken on Win7 / Outlook 2013.
The same underlying facts that make it a great format as a user make it obnoxious as a developer.
As for the mbox problem: on a normal machine the mailbox must be shared between multiple tools (at the very least: biff, lmtpd, and the MUA), so if Thunderbird can't interoperate, that's a serious bug.
WYSIWYG rich-text documents are table stakes. Users have had them for 30 years, love them, and are never giving them up. Their layout (and thus their schema) is also incredibly complex by necessity.
It's still an underpowered language though.
The current state of lock-in has been effectively maintained by the entrenched players, and it's really bad.
I really like their documentation. My team has moved a lot of our stuff to produce FHIR (or something close to it), for interoperating with other teams and clients.
So either we get to the point where we are legislating perfect compatibility (and I can't imagine how good EMRs will get once the federal government has to outline every individual data field, and update them through, what, the rulemaking process?); or we'll always be paying up for this transition, and lock-in is beside the point.
The aggressiveness of your statement requires me to include an image of the current mortality graph for anyone not willing to take the time to dig it out themselves: https://ibb.co/1vCsD7Y
People like you never get held accountable. Please continue thread-stalking me, I'm happy to keep this up.
It's gotten to the point where I don't want a legal requirement to be HLX compatible: I want a legal requirement separating back-end data from front-end UI, so that we can shop for each of those independently. Once all the front-end shops (which are more valuable than back-end - as a healthcare org, I care about documentation and billing and error prevention) can't lock you in via data, I imagine there will be a fucking quick race to be the universally compatible back-end. And the back-end is ultimately the stuff that affects patients (portability of records) and loosens the bindings on provider organizations (because... portability of records).
Which of course is why even things like HLX didn't really start working until major orgs like CMS and NYS Medicaid came along and said "you will find a way to be compatible with HLX voluntarily, or you will do it via regulation. One way or another it's going to happen within the next 12 months." (I was at a major conference where that was laid out pretty much that explicitly. It was wonderful.)
Which is to say — in a world where interoperability is a social norm or legal requirement, how would these services exist? (I would suspect they wouldn’t.) And, without them, would there be any money in advancing the state of the art in these verticals?
It’d be a lot like if there were a legal requirement for every drug to have a generic available from the start. Would there still be an incentive for drug research?
They would. The verticals wouldn't. A service like Trello is at least two components - a storage layer for lightly connected lists of rich data, and UI for displaying/managing them. In a world where interoperability would be the norm, these two components would be two different markets. The user would be free to choose or buy the UI separately from the storage layer. Most would probably choose a vendor that provides both services, just out of convenience.
The main difference would be that all these companies would operate on lower margins, and wouldn't support such insane valuations like our-world tech companies have these days.
Every time some modern hype takes that away from me, I can't stop thinking of goose fattening: forcefully stuffing things down the throat, and not for the goose's own good.
Instant messaging was fine, WhatsApp and Slack brought a lot of innovation and drove adoption, and I would argue it has gotten to a stable place where we should all use Matrix.
We need to figure out that process of standardization and opening up.
I gave some thought to this idea of experimenting with a standard for collaborative rich text that any client can implement. The problem, though, is that people want easy collaboration more than the freedom to bring their own clients.
To put it simply: how can we deal with assigning people (for @mentions, comments, document ownership, and content locking for a group of people) in such file formats? The SaaS universe has the concept of a userbase with unique ids. So when a person is assigned/mentioned, the product knows who is relevant and what to do. This implies we need a universal userbase standard (which is already hugely complicated) that is adopted by the SaaS your target users belong to.
This is one huge roadblock that I don't see any practical solution for.
The initial concept for DIDs was made for blockchains and other decentralized projects (I think, someone please correct me if I'm wrong), but they're useful for federation, or for centralized projects that want to offer flexible data migration, too.
- Specification: https://www.w3.org/TR/did-core/
- One organization working on DIDs: https://identity.foundation/
- Project leveraging DIDs: https://sovrin.org/
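For a sense of the shape: per the did-core spec, a DID is just a URI of the form did:&lt;method&gt;:&lt;method-specific-id&gt;, so any service can mint and recognize identifiers without a central userbase. A simplified syntax check (much looser than the spec's full ABNF):

```python
# Crude DID syntax check — a simplification of the did-core ABNF, just to
# illustrate the did:<method>:<method-specific-id> structure.
import re

DID_RE = re.compile(r"^did:[a-z0-9]+:[A-Za-z0-9._:%-]+$")

def is_did(s):
    """Return True if s looks like a DID (simplified check)."""
    return bool(DID_RE.match(s))

assert is_did("did:example:123456789abcdefghi")   # example from the spec
assert is_did("did:sov:WRfXPg8dantKVubE3HX8pw")   # Sovrin-style DID
assert not is_did("https://example.com/users/alice")
```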
Just like clients would have to implement markup parsing and rich text rendering, they would need to implement protocol endpoints like notifications, etc.
What reads to some as worker exploitation is actually empowerment in practice.
Your hunch is correct, you're intuiting the "Categorical" (as in Category Theory) nature of "point-free" format:
> It is well-known that the simply typed lambda-calculus is modeled by any cartesian closed category (CCC). This correspondence suggests giving typed functional programs a variety of interpretations, each corresponding to a different category. ... This paper describes such an implementation and demonstrates its use for a variety of interpretations including hardware circuits, automatic differentiation, incremental computation, and interval analysis. ... The general technique appears to provide a compelling alternative to deeply embedded domain-specific languages.
The very brief section on mathematical purity of Joy in the Wikipedia article describes it but in (IMO) an unhelpful way: https://en.wikipedia.org/wiki/Joy_(programming_language)#Mat...
> In Joy, the meaning function is a homomorphism from the syntactic monoid onto the semantic monoid. That is, the syntactic relation of concatenation of symbols maps directly onto the semantic relation of composition of functions. It is a homomorphism rather than an isomorphism, because it is onto but not one-to-one; that is, no symbol has more than one meaning, but some sequences of symbols have the same meaning (e.g. "dup +" and "2 *").
> Joy is a concatenative programming language: "The concatenation of two programs denotes the composition of the functions denoted by the two programs".
 "Mathematical foundations of Joy" by Manfred von Thun https://web.archive.org/web/20111007025556/http://www.latrob...
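The homomorphism is easy to see in code. Here's a toy Joy-like interpreter (my own sketch, not von Thun's): concatenating program texts composes the functions they denote, and "dup +" really does mean the same thing as "2 *":

```python
# Toy illustration of the homomorphism: program concatenation maps onto
# function composition, and distinct programs can share one meaning.
def run(program, stack):
    """Interpret a tiny Joy-like concatenative program over a stack."""
    stack = list(stack)
    for word in program.split():
        if word == "dup":
            stack.append(stack[-1])
        elif word == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif word == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(int(word))  # numeric literals push themselves
    return stack

# "dup +" and "2 *" are different syntax with the same semantics:
assert run("dup +", [3]) == run("2 *", [3]) == [6]
# Concatenation of texts = composition of meanings:
assert run("dup + " + "dup +", [3]) == run("4 *", [3]) == [12]
```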
> make all code modules interoperable.
> Being able to compose an application or client
> out of many independent parts would be a dream come true.
At some point you also have to integrate all this interop. The possibility of composing doesn't mean a composition automatically puzzles itself together.
It shouldn't be that hard to do. You'd need maybe 5 endpoints: get all product data for the whole stock, get individual product images, get order requirements (can include terms and conditions, desired information for the order, payment options, human contact info, etc.), post an order, get order status.
On top of this API you could easily create fully featured 3rd party online shopping clients. Some shopping experiences these days are atrocious.
Some hugely popular platforms have product lists hidden like 4 screens below the fold, separated by tons of crap that just gets in the way. Category selection is placed so that the categories slide away once you get to the product list, and if you want to switch categories you have to scroll all the way back up again. No options for selecting alternative views, like in a file manager (large icons with a ton of detail, list view, etc.)
With each new eshop you have to learn how to navigate it, and collecting information about the terms and conditions and payment/delivery methods is a hassle.
Could be so much better if there was some simple web protocol for this, and it could be exposed in addition to existing website, too.
In the simplest implementation, everything but the POST could just be some static json files uploaded to the http server. And POST /order could just be sending an e-mail to someone.
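To make that concrete, here's roughly what the simplest implementation could look like (every endpoint name and field below is made up for illustration, not an existing standard):

```python
# Hypothetical minimal shop protocol: everything but the order POST could
# be static JSON files on the web server.
import json

# GET /products.json — full stock
products = [
    {"id": "mug-01", "name": "Mug", "price_cents": 900, "images": ["/img/mug-01.jpg"]},
    {"id": "tee-02", "name": "T-shirt", "price_cents": 1500, "images": ["/img/tee-02.jpg"]},
]

# GET /order-info.json — terms, payment/delivery options, human contact
order_info = {
    "terms_url": "/terms.html",
    "payment": ["bank-transfer", "cash-on-delivery"],
    "contact": "shop@example.com",
}

def post_order(order):
    """POST /order — in the simplest case this could just send an e-mail.
    Here: validate the items against the product list and ack."""
    known = {p["id"] for p in products}
    if not all(item["id"] in known for item in order["items"]):
        return {"status": "rejected", "reason": "unknown product"}
    return {"status": "accepted", "order_id": "A-0001"}

print(json.dumps(post_order({"items": [{"id": "mug-01", "qty": 2}]})))
```

A generic shopping client would then only need to know these few URLs, the same way a feed reader only needs to know where the RSS file lives.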
At least for simpler text documents, where formatting isn't a big concern, markdown seems to be winning as the standard. Sure, each of these apps has added a few features on top of the markdown standard, but even those features are slowly getting quite standardized.
I wish using Git as the VCS would also become more standardized, but we're still lacking good clients which hide the complexity (my project fails at hiding all of the complexity, but hopefully it'll get there)
Also, maybe an SDK, but I am not particularly enthused by some of the SDKs I encounter, these days.
If a service can be exposed by a good, basic API that doesn't require (but also affords) an SDK, then this allows development of connectors. Many client packages, these days, are written to allow connectors to be developed.
If it is necessary for a team to be "all on the same page," then the management needs to require that everyone use the same rendering client.
But realistically, some levels of functionality are so intense, you need a “beefier” API, like GraphQL.
If that is the case, it might not be a bad idea to review the API. In my opinion, overcomplicated APIs are likely to be the result of the service, trying to exert too much control over the interaction/presentation.
Many times, an API is a "back window" or partial view to a more ambitious effort. For example, if the project is a Web-based framework, it will have a lot of rendering and View/Controller functionality that may be of no interest to something like a specialized text editor for content maintenance, so there would be no reason to expose that functionality.
It may be a good idea to provide a suite of APIs, as opposed to a single one.
We already have OT and CRDTs for concurrently defining changes from each user, and a way to merge them. There is also the Braid spec (https://datatracker.ietf.org/doc/html/draft-toomim-httpbis-b...) attempting to standardize state synchronization, on top of which one can bring any client.
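To get a feel for why CRDTs make bring-your-own-client syncing tractable, here's about the smallest possible example (a G-Counter, one of the simplest CRDTs — a sketch of the idea, not the Braid protocol itself):

```python
# G-Counter CRDT sketch: each replica increments only its own slot, and
# merge takes the per-replica maximum. Merge is commutative, associative,
# and idempotent, so replicas can sync in any order and still converge.
def increment(state, replica_id, n=1):
    """Return a new state with this replica's slot incremented."""
    state = dict(state)
    state[replica_id] = state.get(replica_id, 0) + n
    return state

def merge(a, b):
    """Combine two replica states; safe to apply repeatedly, in any order."""
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(state):
    """The counter's current value is the sum over all replicas."""
    return sum(state.values())

a = increment({}, "alice", 2)   # alice's replica
b = increment({}, "bob", 3)     # bob's replica, edited concurrently
assert merge(a, b) == merge(b, a)            # order doesn't matter
assert value(merge(a, b)) == 5
assert merge(merge(a, b), b) == merge(a, b)  # re-syncing is harmless
```

Real collaborative-text CRDTs are far more involved, but they lean on exactly this property: any client that implements the merge rule can participate.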
In a way, they already have, since you can mount your Google Drive on your local file system and interact with the files from there. The problem lies with the proprietary Google Doc format.
Most things you would want to do to a Google Doc are available via oAuth API, right?
At least, not until someone (not me, no time) creates a local application which interacts with Google Docs via this API.
In the meantime, I'll continue to use .docx, since Google Docs can export and import from this format.
For example interop between different editors, like CodeMirror and Monaco. And thanks to Y.js, the server helps with syncing, but isn’t the single source of truth anymore. It’s more like Git, where every copy has all changes.
I really hope we all can leave the cloud behind soon.
It’s a music locker service with a custom web client. Instead of writing my own native app I implemented the Subsonic protocol: http://www.subsonic.org/pages/api.jsp
It’s awesome to have a polished native app for “free” (eg. play:Sub on iOS is really nice) but it’s such an underspecified and loose spec that implementing it was a huge pain in the ass. For example, clients will choke on results missing values that are specified to be optional in the spec. The spec imposes integer offsets for pagination but my database wants string continuation tokens. It’s specified as XML but there’s an option to make it JSON with zero information on how to deal with discrepancies. Some clients use GET, some POST, some mangle the URL or capitalize random paths. I just had to reverse engineer pre-existing servers to figure out what clients can handle and there’s still a few clients that are mysteriously broken. I’m going to have to test as many as possible and create a list of recommended ones.
I’m happy I went with BYOC, but I highly recommend planning for it from the start so you can deal with the protocol’s weaknesses sooner rather than later.
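The offset-vs-token mismatch, for instance, can be bridged with an adapter that walks the token-based pages until the requested offset is reached. A sketch with a stubbed backend (not the actual server code, and the token values are invented):

```python
# Serve integer offset/size requests (what the protocol demands) on top
# of a backend that only understands string continuation tokens.
def fetch_page(token):
    """Stub backend: returns (items, next_token) for a continuation token."""
    data = {None: (["a", "b"], "t1"), "t1": (["c", "d"], "t2"), "t2": (["e"], None)}
    return data[token]

def fetch_by_offset(offset, size):
    """Walk token-based pages, skipping items until `offset` is reached."""
    items, token, pos = [], None, 0
    while len(items) < size:
        page, token = fetch_page(token)
        for item in page:
            if pos >= offset and len(items) < size:
                items.append(item)
            pos += 1
        if token is None:  # backend exhausted
            break
    return items

assert fetch_by_offset(1, 3) == ["b", "c", "d"]
```

It's O(offset) per request unless you cache token boundaries, which is part of why retrofitting a spec's pagination model onto a mismatched store hurts.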
The fundamental problem is not that no one has formalized the idea, but that the baseline incentives for commercial software do not include optimization for interoperability.
For that we need standards bodies or other regulation - RFCs and the like. And, hacker news, we need line engineers to recognize when their product is suboptimal by these standards, and why.
Sounds like I should add a button to disable the gradient.
Here's a screenshot of my screen from the time of writing the comment https://i.imgur.com/ngD1Xkj.png
I'm on a macbook pro 15.4-inch (2880 x 1800).
I see you've changed background, thank you for that! <3 Now it is far far better. Though I still find myself having to try harder to read the text on such color. I'm not sure of the reasons, though. Maybe it's me, maybe it's f.lux.
(though f.lux is on Movie Mode, meaning the dim and color correction are lighter than default)
P.S. I do not know of and never experienced any color perception problems.
It feels like that optical illusion with the disappearing background, where your brain attempts to re-tune itself, and it's kind of uncomfortable.
My screen width is 1280px.
With others like Notion -- I recently started using Notion at a personal level and I feel like much of the draw about Notion is basically the UI (client). I don't use it to publish public pages or anything, only private notes. Most of its usefulness to me comes down to it having a really nice client and it syncs between desktop/mobile/etc. I don't know what Notion would be without the built in client. Simply a set of markdown files?
Other examples like Trello / Asana seem to already have solutions. They offer APIs, and while I haven't worked with them to know them extensively, my understanding is you can do basic CRUD operations on tickets. You could possibly build your own clients there.
I don't understand why each API needs its own SDK. Or even why developers use SDKs instead of just calling the RESTful endpoints directly.
But to answer your question: when an API exists, why have an SDK? One reason is to provide strong typing (and everything that goes with it, like IDE auto completion). A very thin layer over the HTTP requests is enough to provide this. Eg, in one of my projects  I wrapped a HTTP API with Go calls like this:
Method(ctx context.Context, req *MethodRequest) (*MethodResponse, error)
Of course, that assumes that the SDK exists and is good.
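The same thin-layer idea can be sketched in Python, with hypothetical request/response types and a swappable transport (which is also what makes such a wrapper easy to test):

```python
# Thin typed wrapper over an HTTP API: one request/response type per
# method, and a pluggable transport. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class ListCardsRequest:
    board_id: str

@dataclass
class ListCardsResponse:
    card_names: list

class Client:
    def __init__(self, transport):
        # transport: callable (method, path, payload) -> decoded JSON dict
        self.transport = transport

    def list_cards(self, req: ListCardsRequest) -> ListCardsResponse:
        raw = self.transport("GET", f"/boards/{req.board_id}/cards", None)
        return ListCardsResponse(card_names=[c["name"] for c in raw["cards"]])

def fake_transport(method, path, payload):
    """Stub standing in for real HTTP calls during tests."""
    return {"cards": [{"name": "Write spec"}, {"name": "Ship it"}]}

resp = Client(fake_transport).list_cards(ListCardsRequest(board_id="b1"))
assert resp.card_names == ["Write spec", "Ship it"]
```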
Regarding BYOC, I guess Git is where it is at right now, no? Although I switch clients for my personal note-taking every now and then, which works well with NextCloud syncing of MD files.
That means you can "bring your own client": I have a "todo today" list that I pull via the API and display via conky right on my desktop.
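That kind of glue is tiny. A sketch of it (the API endpoint and the JSON shape are assumptions, not any particular service's format): a formatter that turns the service's JSON into the plain text a conky widget can render:

```python
# Glue script for a desktop widget: fetch a todo list as JSON, print
# plain text for conky to display. The endpoint and JSON shape are made up.
import json

def render_todos(payload):
    """Turn the service's JSON into plain text lines for the widget."""
    lines = []
    for item in payload["todos"]:
        mark = "x" if item["done"] else " "
        lines.append(f"[{mark}] {item['title']}")
    return "\n".join(lines)

# In real use, something like:
#   import urllib.request
#   payload = json.loads(urllib.request.urlopen(API_URL).read())
sample = {"todos": [{"title": "Pay rent", "done": True},
                    {"title": "Write report", "done": False}]}
print(render_todos(sample))
```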
Of course that doesn't mean the accessibility situation couldn't be improved with (for example) more standardized APIs, better documentation/discoverability, etc. iOS Shortcuts make it easy to work with third-party services; what would the equivalent in a real programming context look like? There's probably some interesting brainstorming to be done here.
Maybe we will end up re-inventing OLE from the 1990's for the web. In OLE 2, there was a standard OLE Compound Document file storage API/format. Different apps could all work with different parts of a document. You could even change which app you wanted to handle a certain type of feature. It was complicated, but when it worked, it was pretty neat. This was done during the time when most people had maybe 4-8MB of RAM. With much more powerful CPUs and vast amounts of RAM, we could probably come up with something even better.
There's relatively recent work on collaborative editing and CRDTs, but as it says in the Wikipedia entry:
> The first instance of a collaborative real-time editor was demonstrated by Douglas Engelbart in 1968, in The Mother of All Demos. Widely available implementations of the concept took decades to appear.
There are a lot of these protocol/API tools like gRPC, Thrift, Swagger/OpenAPI, or even good old ASN.1, eh?
To me the question is how to drive convergence? How to alleviate https://xkcd.com/927/ ? (The "Standards" comic.) Who shall forge the "one true ring to bind them"?
- - - -
Also, I think Data Model catalogs (like "Data Model Patterns: A Metadata Map" by David C. Hay) should be part of the answer. Instead of rolling your own data models you should be able to just pull models from a standard catalog.
In sum, if we're lucky, we'll start to get convergence in the protocol and model space. Anything that isn't working towards convergence is just adding another log to the burning pile of standards, eh?
For this reason, the web is great if you know how to code. Most sites are pretty easy to reverse-engineer so you can add whatever feature you want via extensions.
For example I do a lot of simple stuff with greasemonkey scripts like making stuff sortable or filterable, creating a bulk download and rename button, or just customizing appearances/css.
In theory a shared folder with clients that handle multiple devices/operating systems in their dot-files well would be able to replace that, but with most phones in a closed ecosystem, I don't see that happening.
EDIT: not yet:
The best setup would be:
Wordgrinder/nvi+py3-markdown for Google Docs.
Sc-IM+GNUplot opening Google Sheets.
MagicPoint generating HTML files usable for Google Slides.
Rclone for Drive.
Reminds me of this:
Even then, you have to fight the default ui/ux.
I hate how we have to put up with whatever UI / algorithms each site gives us. Why can't we have access to the underlying data and choose how we wish to consume it?
Humanity really needs that.
fight for open file formats for common things
It also goes hand in hand with end-to-end encryption. This sounds a lot like the Case I made for building Client-First web apps, I just posted it on HN:
Does something like that exist already?
https://github.com/automerge/hypermerge for example
PDF and DocX files are open specifications that provide for extension. Nothing is stopping anyone from building clients around these formats with the features listed in the article. PDF is definitely more common as most programming languages have comprehensive libraries to work with it.
The path forward would be to build the features you want and publish the extension specifications for others to use. Perhaps the interesting question, however, isn't technical possibility but whether a market exists for it? Email clients were very widespread over a decade ago but have consolidated to 3-4 over the years. Hey.com is the first big new email client that I am aware of. I'm curious if it can prove there is big business in improving on existing, standardized specifications.
You may not like it, don't use it, etc. But Git does have collaboration tools.
EDIT: For some reason, speculating about the potential to repurpose a technology is attracting replies that point out how it wouldn’t work seamlessly with how they are currently using it. I’m not even mentioning editing code, and I’m certainly not proposing that you try this in a random Git repo where other contributors aren’t expecting it.
CRDTs have the problem the article brings up, that they’re usually very specific to the data they’re modeling and need support from a very purpose-built editor. Git is supported by a lot of tooling and uses plain text which works everywhere.
Maybe "save" tries to commit, and pulls up a diff interface if there is a conflict? UI indications of other people's commits pending, and maybe merge those with current state of the document if there is no conflict?
There are a lot of edge cases to consider. But I feel like there might be a way to design UX for a Git workflow that makes it comfortable and intuitive to Muggles.
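One possible shape for that "save = commit" flow, as a rough sketch (assumptions: git is installed, the document lives in its own repo, and the conflict/merge UI itself is out of scope here):

```python
# "Save" becomes a commit; a non-zero exit from `git commit` is the
# signal for the UI to either say "nothing changed" or surface a diff.
import subprocess

def git(repo, *args):
    """Run a git command inside the given repository, capturing output."""
    return subprocess.run(["git", "-C", repo, *args],
                          capture_output=True, text=True)

def save(repo, path, message="save"):
    """Stage the document and commit it; report what happened."""
    git(repo, "add", path)
    result = git(repo, "commit", "-m", message)
    if result.returncode != 0:
        return "nothing-to-save"  # no changes — or something for the UI to surface
    return "saved"

def behind_upstream(repo):
    """True if collaborators' commits are waiting upstream (needs a remote)."""
    git(repo, "fetch")
    count = git(repo, "rev-list", "--count", "HEAD..@{u}").stdout.strip()
    return count.isdigit() and int(count) > 0
```

A real client would layer the diff/merge UI on top of these primitives, but the plumbing the Muggle-friendly UX needs is already all there in Git.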