I run as de-Googlified a stack as I can: Nextcloud, DAVdroid, LineageOS, exclusively Linux, a private cloud server, etc. This is for my personal usage - at work, we run the whole Google/Atlassian suite, and I'm (mostly) fine with that separation.
But I positively pine for the ease of use and general UX of the closed services when I go back to my private, open-source stack. A quick case in point: the Conversations Android XMPP client.
I'm a Librem 5 backer, and I'm looking forward to the challenge of using it as a daily driver. So I'm looking to wean myself off WhatsApp in favour of something open. But Conversations isn't it. It's a great XMPP client, but it's not something I will ever convince the majority of my friends and family to use. It's couched in technical know-how, and while it's a great app for an open source project, it still manages to feel subpar to the closed-source, back-doorable, privacy-draining consumer offerings. Signal seems the closest so far.
(I should add here that I am trying to do something about this: I contribute haphazardly to OSS I use and I evangelize it in my social circle, but it feels like an uphill battle)
More anecdata: I run self-hosted GitLab, Nextcloud, and Wallabag. Hopefully Mastodon and Pixelfed soon.
There is no freaking way to do SSO between all of these. Despite OAuth+OpenID being older than some of that software. Don't get me started on the fact that the quickest way to get a file from phone <-> PC is to Slack it to myself from my work account.
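For what it's worth, the plumbing the standards already provide is not much code. Here's a minimal sketch of building a standard OIDC authorization-code request from a provider's discovery document (the JSON served at `/.well-known/openid-configuration`); the provider URL, client ID, and redirect URI below are all made-up values:

```python
from urllib.parse import urlencode

def build_auth_url(discovery: dict, client_id: str,
                   redirect_uri: str, state: str) -> str:
    """Build an OIDC authorization-code request URL from a provider's
    discovery document. Only the authorization_endpoint field is used here."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile email",
        "state": state,  # CSRF token, checked again on the callback
    }
    return discovery["authorization_endpoint"] + "?" + urlencode(params)

# Hypothetical provider values, purely for illustration:
discovery = {"authorization_endpoint": "https://sso.example.com/authorize"}
url = build_auth_url(discovery, "wallabag",
                     "https://wallabag.example.com/callback", "xyz")
print(url)
```

The point being: the redirect dance itself is trivial; what's missing is each self-hosted project wiring it into their login flow.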
The web I want will compete with closed-source ecosystems, instead of being a disparate collection of DIY server software that doesn't talk to each other in the slightest.
Assuming that phone and PC are on the same network, KDE Connect solves that scenario. You pair the devices once, and afterwards you can not only share files but also have access to a host of other features. The one I use most is clipboard synchronization.
I have a computer that I use to play podcasts, and the subscription list (a file) is synced to my main computer, so I can edit it from there.
Syncthing is also on my Lineage OS phone (until my Librem 5 arrives) so I can have a shared Downloads folder.
In the past, I used it to sync a subset of my music collection to my work computer.
It's reasonably usable for a geek.
Are you ready to roll out free email, maps, photos storage on a massive scale? Cannot beat free, as they say.
I'm going to presume a few funding models to keep this short: free and advertising-supported, donation-based, and paid-for. I'm focused on the third option.
Ever notice how many new (typically closed) services are "just the price of a cup of coffee a month"? What happens when I want 20 of these "coffee-priced" services? That's more than I personally spend on actual, gourmet coffee a month (of which I buy copious amounts).
I think there's a decent gap for services that cost $/€/£1/month. At that price point, the volume of subscriptions might break even with the profitability of pricier subscriptions at lower volume. It might not, of course, but I'm tempted to try it. Mail for $1 a month. Not remotely profitable, but with elastic compute and a high volume of customers...maybe. With a self-hosted option to boot, to lure in the early adopters and convert them when they want to offload the hassle of running the service...
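The break-even question above is simple arithmetic; a sketch, where every number is an assumption rather than real pricing:

```python
import math

def breakeven_subscribers(fixed_costs: float, price: float,
                          variable_cost: float) -> int:
    """Subscribers needed per month so revenue covers fixed plus
    per-user costs. All inputs are in the same currency per month."""
    margin = price - variable_cost
    if margin <= 0:
        raise ValueError("price must exceed the per-user cost")
    return math.ceil(fixed_costs / margin)

# Assumed numbers: $500/month fixed infrastructure, a $1 plan,
# and $0.30/month of elastic compute + storage per user.
print(breakeven_subscribers(500, 1.00, 0.30))  # -> 715
```

At those (made-up) margins you'd need under a thousand subscribers to cover costs, which is what makes the "$1/month, high volume" bet at least plausible.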
I don't intend to compete with free, but I definitely think there's room for innovation here. I have a day job, but I'd drop that in a second if there was an opportunity to manage a multi-tenant cloud of Nextcloud instances for customers that a) doesn't cost more than DigitalOcean, and b) had other software bundled at the same price point, e.g. Collabora Online, LDAP auth, Mattermost, Mediawiki, etc. etc. All for $5 instead of $5 x n services....
:shrug: maybe I'm too much of an idealist ;)
"Unfortunately Sandstorm is basically dead at this point. I kept actively working on it for a while in my spare time after the company failed in early 2017, but I haven't found any time to do serious work on it in probably a year now. I do push monthly updates to keep dependencies fresh but that's about it.
Oasis takes in about $1800 per month, which is just about enough to cover serving costs and business expenses (e.g. annual tax preparation). For a while it was making only $800, with me making up the difference out-of-pocket. Then, last October I stopped offering a free plan. Remarkably, revenue more than doubled and has even gradually continued to increase over time -- I had expected a smaller spike followed by a drop-off.
But now it's getting to the point where it's feeling really awkward to let people pay for a service that hosts a library of apps that mostly hasn't been updated in 2.5 years or more. In theory developers could still be updating their apps and submitting new ones, but basically the only app actually getting updates is Wekan. So, not sure how long this can really continue... :("
IIRC, the SSO thing I mentioned is exemplified in Sandstorm. They either wrap or fork the relevant projects (I can't recall the specifics) to make them "fit" within the Sandstorm ecosystem. SSO, ACLs, and overall storage/permissions are handled by Sandstorm, which is why there's a list of supported apps: apps that haven't been "wrapped" don't integrate with the Sandstorm APIs, so they can't be used directly (I may be wrong; it's 2019 now).
That's...kinda my point. There's OIDC, OAuth, Unhosted, IndieWeb, microformats (repeat ad nauseum) and yet there's still somehow a need to create the XKCD #927 situation (yet another standard). There's already schema.org (OK, Google influence :shrug:), the ActivityPub vocab (still new-ish), RDF / Linked Data etc.
It's not a slight against any of these projects, particularly sandstorm. IMHO the effort and onus should be placed on the component projects to play well with others. GitLab and Nextcloud deserve shout outs for exploring or working on federation strategies that should remove the need for things like sandstorm, but we should keep fighting for and implementing similar things for other projects.
(This discussion is now convincing me to open a second pull request to Wallabag implementing OIDC login)
Have you taken a look at Keybase as an alternative for IM and replacement of Slack-for-sharing-with-yourself?
Advantages over Signal:
* Doesn't rely on phone number for auth or id
* No limit on devices
* Great CLI experience
* Smooth to share files and messages in groups
* Encryption closer to widely used protocols
* Server-side closed source (but hey, that goes for pretty much any IM alternative you hope to use outside of close circles today unfortunately)
* US-based company
* Can't think of much else
I am - unfortunately - picky :D as you mention, they haven't released their server code and don't seem particularly interested in doing so (which is their full right). Good thread for that here: https://github.com/keybase/client/issues/6374
I get that some people aren't too fussed if the cryptography is done right, and you could reverse engineer the protocol and interop a la Amazon S3, but...yeah. Just not me I guess.
I'm new to the limitations you mentioned (number of devices, phone-number auth, etc.). I'll need to take a good look before I jump ship. But my first thought is that I'll accept those downsides as part of the current status quo of messaging apps, in exchange for a fully open platform I can support and possibly get my parents to use without too much trouble (yep, my principles have priorities; lower ones get sacrificed when absolutely necessary ;))
(I lied about the phone sharing bit a little - Nextcloud sync is really good somehow, I just wish there was something like AirDrop for everyone else)
I managed to get a lot of my non-geek friends over to Signal, but if things don't change I think I will switch preference to Keybase soon in lack of better alternatives.
It's unfortunate that there's no great implementation of federated IM today. XMPP is a mess and Matrix has a long way to go when it comes to that part, but from what I can tell they have a good vision and might just be able to pull it off so I am keeping a close eye on it.
Then we have Secure ScuttleButt, which I find very interesting and would love to see mature.
Matrix is pretty usable for me and a bunch of my friends and family these days. The clients are getting pretty good as well.
I mean, finally throw away that html/css mindfuck, please, and provide some decent shaping and coordinate relationship system instead. So I could go back to that boring 20 projects a week routine.
And, by the way, the weirdness of the day -- and the capabilities of the tools -- shouldn't be underestimated! The good ones did proper relative positioning and alignment, supported various kinds of "flowing" layouts and so on. It wasn't uncommon to have to support all modes between 640x480@4 and 2048x1536@32, and basically no applications ran in full-screen mode, so you didn't have about a dozen resolutions and aspect ratios, you had pretty much an infinite number of combinations.
FWIW, it's part of why I moved away from all this stuff, more than 15 years ago. I discovered a lot of other fun things computers could do (I got into embedded programming, operating systems, security, etc.), and web development stopped being fun when we seriously continued digging the hole we were in instead of climbing out of it.
well, at least we got Docker and Kubernetes to save the day, and can scale infinitely with our superior CI, which sadly breaks every build and needs a whole department to shepherd it into this brave new world
Similarly, how does CSS help me translate my document? (Based on the court findings in the recent Domino's case, web accessibility technologies may as well not exist, even though that problem has also been solved for paper for a century.)
I don't recall mentioning anything about translation. I don't know what hot-button issue you're going on about but I have a feeling it's tangential to the topic of web design.
Pre-web, a content layout shop could apparently do about 20 designs (in three form factors each) per seat, per week. These would be done to best-practices, and also be aesthetically pleasing.
I don’t think that is true for modern web design. If anyone here has that sort of design throughput, I’m guessing a lot of people on this thread would be interested in it.
But why? What in full-flexible geometry and linear problem solving prevents us to support diverse userbases or devices?
I’m not speaking about throwing away accessibility support or using fixed-size medium. I’m simply asking for sane math-based rather than MSWord-based primitives.
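The "math-based primitives" ask can be made concrete. Below is a toy one-dimensional row solver mixing fixed widths with proportionally flexible ones (roughly the flexbox-grow idea stripped down to arithmetic; the item encoding and names are invented for this sketch):

```python
def solve_row(total: float, items: list) -> list:
    """Distribute `total` width across items. An item is either
    ("fixed", width), which keeps its width, or ("flex", weight),
    which gets a share of the leftover space proportional to weight."""
    fixed = sum(w for kind, w in items if kind == "fixed")
    weights = sum(w for kind, w in items if kind == "flex")
    remaining = max(total - fixed, 0)
    return [w if kind == "fixed"
            else (remaining * w / weights if weights else 0)
            for kind, w in items]

# An 800px row: one 200px sidebar, then two panels sharing the rest 1:3.
print(solve_row(800, [("fixed", 200), ("flex", 1), ("flex", 3)]))
# -> [200, 150.0, 450.0]
```

The whole "layout engine" is two sums and a division; the point is that a constraint-style model can be specified and reasoned about as plain math.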
* Usable on a older devices and slower connections
* Respect for accessibility, including no-JS and no-mouse
* Markup that weighs less than 20x the content
* No malware and adware
* Respect for non-Chromefox users
Edit: Just more respect for the user in general would be very nice. Pop-ups. Hostile TOSes. Making user jump through hoops. Spam. Unsubscribe that requires login. Text that's mixed with promotional links. Hostility to no-JS users.
I decided a while ago that this is not the Web I Want, and when I come across this type of crap, I just leave. I don't click medium.com links anymore, for example. I am much happier now as far as browsing the Web goes; my stress levels are down.
With the time I've saved, I've been able to find new, nice, quality websites, which I visit instead.
And I realized something else: The quality of content I access has become higher as a result. Good quality content is apparently accessible. Crap quality comes with the above.
TL;DR: Stop putting up with crap.
How is JS directly related to accessibility?
JS websites without fallback reduce accessibility for:
* Users with slower devices
* Users with dated devices and browser software
* Devices with poor connectivity
* Users who have JS disabled through no choice of their own.
* Users who choose to disable JS.
* Browsers which do not support JS.
* Accessibility software.
* Not to mention bots, scrapers, etc., who also have valid use cases.
This is a non-comprehensive list I came up with in just a few minutes. I'm sure there are many more items that can be added to it.
• ignorance: someone hasn't taken the time to understand what it does or how to use it.
• maturity: the existing thing is perceived as "old", "fragile", "prone to error", not conforming to current expectations and norms (oddly, norms that the existing thing put in place). This perception is influenced by the ignorance mentioned above.
• ego: the replacer's name isn't on the established thing.
• hubris: the current state of the art is fragmented, and will finally be unified under a new regime, and only a new guard and new stewards can do this.
There's a difference between (a) continuously making things better/adjusting to new requirements, and (b) forever creating Brooks' style Second Systems. Recognizing when the line has been crossed or that the thinking and approach is tending towards (b) is more art than science and by then the tipping point was way in the past.
Once upon a time I had just gotten a junior C# developer job (without knowing anything but basic C++ and SQL), was given a computer and a real task, and implemented a reasonably good-looking business app (looking and feeling the way common to all Windows apps) that went straight to production in just a few hours. And I didn't need anything but intuition and the API reference. It felt exactly like playing Lego.
By the time you've thrown your laptop out of a window trying to make elements stay in correct positions relative to each other and the page, a desktop or a mobile developer will have shipped a production-ready app with 10 times the number of layouts and features.
Hell, a dev from 2000 will have shipped 10 times the amount of layouts in half the time with Delphi or C++ Builder.
Edit: Someone linked Figma's blog post on how they ended up "creating a browser inside a browser" for their complex and amazing app to work: https://www.figma.com/blog/building-a-professional-design-to...
The whole browser facing side of the web is a mess. I think it's quite similar to C++'s situation.
Because we require backwards compatibility, the standard is becoming an unmaintainable mess. The best plan would be to phase out the current browser technology while building something that applies the lessons learned from the current web, something simple and powerful. I think WebGL and WASM would be a good way to start. That way we could build a new browser inside the old browser, and once the new browser is stable, we could build the old browser inside the new browser and let it have its well-deserved EOL. I'm usually in favour of fixing existing solutions rather than starting from scratch, but the current web is such a badly engineered mess that it warrants a rebuild.
> to phase out the current browser technology while building something that applies the lessons learned from the current web and build something that is simple and powerful.
Yeah, I do remember a couple of HTML/CSS rendering implementations in Java applets, so we've been there, seen that.
As of React …
React is a whole Scottish dance just to get didMount/didUnmount calls on custom elements, which can be accomplished far more easily. And using diff algorithms (polynomial complexity, sic!) just to populate the DOM is kind of too much …
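For illustration, the keyed part of React-style reconciliation can be sketched in a few lines. This is a toy model, not React's actual algorithm: matching children by key is what lets one level be diffed in linear time, avoiding the pairwise comparisons that make general tree diffing polynomial:

```python
def diff_children(old: list, new: list) -> list:
    """Keyed one-level diff in the spirit of React's reconciliation.
    Returns mount/unmount operations; keys present in both lists
    are treated as kept (moves/updates omitted for brevity)."""
    old_keys = set(old)
    new_keys = set(new)
    ops = []
    for k in new:
        if k not in old_keys:
            ops.append(("mount", k))    # didMount would fire here
    for k in old:
        if k not in new_keys:
            ops.append(("unmount", k))  # didUnmount would fire here
    return ops

print(diff_children(["a", "b", "c"], ["a", "c", "d"]))
# -> [('mount', 'd'), ('unmount', 'b')]
```

Which rather supports the complaint: if mount/unmount notifications are all you need, the mechanism for them is tiny.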
It’s a pretty awesome app.
Most technical issues can realistically be tackled. I doubt you could win against Google et al. on the tracking/code execution stuff.
I'm pretty ignorant of how desktop apps work; how simple is it? Are we talking a straight up pub/sub system?
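It can be close to that simple. A minimal in-process pub/sub sketch follows; this is a toy model of how desktop toolkits dispatch signals, and real toolkits add event queues, threading, and unsubscription on top:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Tiny synchronous publish/subscribe hub: handlers register
    for a topic and are called in order when something is published."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, payload) -> None:
        for handler in self._subs[topic]:
            handler(payload)

bus = EventBus()
seen = []
bus.subscribe("clipboard", seen.append)
bus.publish("clipboard", "hello")
print(seen)  # -> ['hello']
```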
1. I dislike how mutable information is. While I really value ephemerality, I think the "web" needs the ability to refer to content in an immutable way. Misinformation is big these days, and mutation only adds to confusion. It's why some information store stuff I'm writing is all content addressed; in my view information needs to be immutable.
3. The web as it stands also seems to embrace centralization over decentralization. This again is fine (imo) for user experience stuff; not everything needs to be decentralized. Yet, for information, especially immutable information, decentralization seems not only useful but vital. Ideally, stupid simple decentralization too. Connected hubs of peers are nice, but I don't think we should be required to run a process/server to view and manage something immutable. `git pull` a slice of the immutable information you want seems (to me) to be a critical lowest denominator here. Sharing, slicing up, distributing on flash drives, etc - the web as I'd like it should promote sharing of information in ways that fit as many people as possible; not just the ones permanently connected to the internet and so forth.
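The content-addressed idea above can be sketched in a few lines. This is a toy store, not a real protocol, but it shows the core property: the link (a hash) cannot silently point at changed bytes:

```python
import hashlib

class ContentStore:
    """Toy content-addressed blob store: the key IS the SHA-256 of
    the content, so retrieving by key either yields exactly the
    bytes that were linked, or fails the integrity check."""
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data
        return key

    def get(self, key: str) -> bytes:
        data = self._blobs[key]
        # Tamper check: the address must still match the content.
        assert hashlib.sha256(data).hexdigest() == key
        return data

store = ContentStore()
key = store.put(b"immutable article text")
assert store.get(key) == b"immutable article text"
```

Sharing a slice is then just copying some key/blob pairs around, whether over the network or on a flash drive; verification needs no server.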
All I know is I'm making an information store, likely very imperfect and full of flaws, with these ideals in mind. I love the model of Scuttlebutt and a compatibility of that and connected-distributed ideas like IPFS/DAT seem best to me currently. I hope we continue to embrace immutability.
A presentation-free Web also empowers the browser to set the presentation, which really helps with issues around accessibility, anti-tracking, end-user customization, and (I'd argue) decentralization of Web software overall.
Users like having experiences.
It's much easier to build something like Google/Uber/AirBnB on GUN/CRDTs than only-hash based or append-only protocols.
End users don't use something for the protocol, they use it because the app developer created something that gives them value.
We need to stop pitching decentralization to the fantasies of developers, and start giving value to makers, shippers, and users.
I imagine you're thinking of specific implementation(s) of immutability. Regardless, my use case is not that of Google; my use case is of the person. It's of being able to share, own and read information with trust.
> Users like having experiences.
This feels like a straw man. I very much supported UXs in my post; I even said I love bloated feature rich user experiences. I don't think anyone intends to entirely give up feature rich web. We often just want options. Right now we only have the mutable, feature bloated web. While I love it for applications, it seems terrible for information.
> It's much easier to build something like Google/Uber/AirBnB on GUN/CRDTs than only-hash based or append-only protocols.
Quite possibly; I do not claim to have all the answers. What I have is a desire. A desire to be able to refer to content, and not have someone change the meaning of that content.
In the age of misinformation I don't think we have a choice. In the same way that a signed binary can be important, I think signed / immutable information is important. Do you disagree?
> End users don't use something for the protocol, they use it because the app developer created something that gives them value.
Agreed; which is why I was talking about value I see in the features of immutability I described. As well as the anti-value (is that a thing?) I see in the state of the mutable web as it stands now.
> We need to stop pitching decentralization to the fantasies of developers, and start giving value to makers, shippers, and users.
Saying decentralization is a developer fantasy seems to ignore some of the most successful technologies in human history. This is odd to me.
edit: as well as new federated software embracing decentralization. You being against this is perplexing to me.
Yes, a person owning their data should mean they can change it!
Fair point on UI/UX, you got me here - you are right.
It is trivial to build immutability on mutability, this is what MD5/SHA checksums do all the time on things like downloads. But if your base system is immutability, it is hard to do anything else well.
To your question: I agree signing is important. Doesn't mean it needs to be immutable!
Very well said! I'm impressed with how both precise and concise that was. Good point.
Oh sorry, decentralization is great; its target use case just shouldn't be "developer fantasy" but "user value-add". Sorry for the confusion here. I'm pro-decentralization (I build p2p protocols).
Well being able to change data is of course needed; so that goes without saying (I should hope). However what I mean by immutability is rather that I think the web should allow me to link to a piece of content, and ensure that what I link cannot be changed from underneath me if I choose not to.
I feel like I'm describing features and you're describing implementation. To me, immutability is a feature. I'm not at all referring to blockchain or any of that junk - merely that I want to be able to link (in the web sense, url/etc) to content - to discuss it with other people and etc, and have that content not change.
Optionally of course I should be able to link to mutable content. However we have mutability down. The feature add I'm talking about, is immutability.
> It is trivial to build immutability on mutability, this is what MD5/SHA checksums do all the time on things like downloads. But if your base system is immutability, it is hard to do anything else well.
Sure, but that's an implementation detail. I was not saying (or did not intend to, at least) that the "base implementation" must be immutable. All I care about is that immutability is a first-class citizen and that linking immutable content is possible: building user features around immutability, whereas nothing like that exists in the current "web".
Do you write/blog/tweet? Your clarity and ability to see through things is fantastic. I'd love to follow your work.
Append-only logs do scale. Dat 2.0 uses a hash-trie structure inside its file metadata log to act as pointers for fast lookups. Partial replication means you don't have to pull down the full history.
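A toy version of the hash-chaining that makes append-only logs verifiable (a much simpler cousin of Dat/hypercore's Merkle structures, not its actual format):

```python
import hashlib

def append(log: list, data: bytes) -> list:
    """Append an entry whose hash commits to the previous head,
    so editing any historical entry changes every later hash."""
    prev = log[-1][0] if log else "0" * 64
    head = hashlib.sha256((prev + data.hex()).encode()).hexdigest()
    return log + [(head, data)]

def verify(log: list) -> bool:
    """Recompute the chain from the start; any tampering breaks it."""
    prev = "0" * 64
    for head, data in log:
        if head != hashlib.sha256((prev + data.hex()).encode()).hexdigest():
            return False
        prev = head
    return True

log = append(append([], b"entry 1"), b"entry 2")
assert verify(log)
# Replacing an old entry's data invalidates the log:
assert not verify([(log[0][0], b"tampered"), log[1]])
```

Real implementations replace the linear chain with Merkle trees precisely so a peer can verify a partial slice without replaying everything, which is the partial-replication point above.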
Mark, you work on a competing product to ours. I'd appreciate it if you didn't FUD our work. We don't do that to you.
I'm FUDing append-only logs and immutability, not products - don't conflate the two, or else you discourage legitimate discussion around the scalability of different architectures.
Question: Do you have to still store the full history?
How would you explain scaling something like Uber with this model, then? You're gonna have 100s of GPS coordinate updates per second, for each car, potentially millions of them.
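For context on the CRDT side of this exchange, here is a minimal last-writer-wins map, a toy CRDT in which high-frequency updates like GPS positions collapse to one current value per key, so replicas converge without storing the full update history:

```python
class LWWMap:
    """Last-writer-wins map: each key keeps only the entry with the
    highest timestamp, so merging replicas in any order converges.
    Toy model; real systems also need clock/tie-breaking rules."""
    def __init__(self):
        self.state = {}  # key -> (timestamp, value)

    def set(self, key: str, timestamp: float, value) -> None:
        current = self.state.get(key)
        if current is None or timestamp > current[0]:
            self.state[key] = (timestamp, value)

    def merge(self, other: "LWWMap") -> None:
        for key, (ts, value) in other.state.items():
            self.set(key, ts, value)

a, b = LWWMap(), LWWMap()
a.set("car-42", 1.0, (52.52, 13.40))   # stale position on replica a
b.set("car-42", 2.0, (52.53, 13.41))   # newer position on replica b
a.merge(b)
print(a.state["car-42"])  # -> (2.0, (52.53, 13.41))
```

This is the contrast being drawn: an append-only log of millions of coordinate updates keeps everything, while a state-based CRDT like this keeps only the latest value per key.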
1. Web as a source of information. Super Wikipedia as you wish. Mostly for us - readers.
2. Web as a functionality delivery tool - access and interaction with applications.
As we know, any universal tool is not as good at a particular task as a specialized one.
And that is the main problem I think.
At early stages we had semistatic HTML(3.2)/CSS(2.1) that was quite adequate for task #1 (reading, consuming information). And we had Java applets for applications.
These two were more or less orthogonal and could evolve independently, so they could remain specialized tools, each optimal for the tasks it served.
Hey, we had Android with Java UI... so technically things like VSCode or Google Docs could be just Java [or Kotlin or whatever] applications inside browser if someone really wants that ...
After transpilation, minification, and with flaky source-map support in many places (e.g. Rollbar downloads new source maps after the first error and doesn't apply them retroactively to the stack trace), a lot of information about errors is lost. Combine this with ubiquitous async programming, and all you get is something like "a.x is undefined", which is hardly actionable.
I wouldn't mind common libraries (jQuery, React, etc.) being shipped by browsers (lazy-loaded), with signature verification in the browsers themselves. CDNs / single code locations sort of help by increasing the chance of a browser cache hit, but that too seems like a workaround.
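Browsers do already support a piece of this: Subresource Integrity lets a page pin a CDN-hosted script to a hash via `<script integrity="...">`. A small sketch of computing the integrity value (the input script below is just a stand-in):

```python
import base64
import hashlib

def sri_hash(content: bytes) -> str:
    """Compute a Subresource Integrity value in the sha384-<base64>
    format browsers accept in the integrity attribute."""
    digest = hashlib.sha384(content).digest()
    return "sha384-" + base64.b64encode(digest).decode()

# Stand-in for the bytes of a CDN-hosted library file:
print(sri_hash(b"console.log('hi')"))
```

It's integrity rather than signatures (any change to the file, including a legitimate update, needs a new hash in the page), which is part of why it only partially solves the shipped-libraries idea.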
Yes, web standards offer loads of tools that let developers and designers get _really_ creative with their work, but we've reached a point where many of us are spoiled for choice, and loads of the mainline options are used, abused, or otherwise ill-optimised to the point of making many websites heavy enough to turn many laptops into space heaters.
Worse still, the "spoiled for choice" factor means that people keep finding radically new ways to use the toolchain; while this is arguably not a bad thing in isolation, it makes it _so_ difficult to keep up with what you're _supposed_ to use at any given time.
A lot of our time goes to deciding, dealing with devtools and toolchains, and debugging magic helpers or abstractions on top of abstractions (and I love a good abstraction); that takes time away from what we are actually building: the product.
Simple is good; complexity is bad unless it comes in simple layers or parts, and not too many of them. Simplification is good, and refactoring is necessary at this stage. In a way, there is a choice of technical debt. Move fast and break things: sometimes the "break things" part is adhered to a little too much.
Engineering/product development is taking complexity and simplifying it, are we doing that? Or are we adding complexity where it was simple? I am looking forward to the next great simplification wave.
What I'd like to see is an HTMLx that addresses the core needs of the Web-using entities, and an ecosystem of Web extensions that expand on the capabilities of it. So, something like what smartphones with their apps and markets are doing. Basic call and message capabilities, but then there's TrueCaller and WhatsApp and the like.
Bear with me. Business, gaming, video and publishing would be the core interests. So this HTMLx would get forms and credit card processing and the like, accelerated sound and input and graphics and networking, two way streaming and en/decoding, and lastly, a responsive grid and symbolic styling.
Everything else, all of it – client side tracking, syndication, advertisement, compulsory client-side computation of various forms like fonts and GPU or native kernels, even JS code and client storage and cookies – I think should be available through a market. If your website is using something like that, let it suggest an "app" module to the user to install and enable on this domain.
"Please support our project by enabling advertising on this domain [OK] [Later] [Never]"
"For Youtube Movies please install DRM/x264 package from the market [OK] [Later] [Never]"
"You will need to enable storage, code, and gaming modules to use Itch [OK] [Later] [Never]"
Because browser extensions suck. Browsers suck. Mobile browsers suck. The experience of the Web sucks, for anyone that knows better. Shopping, gaming, streaming, syndication and publishing all suck big time.
That wasn’t the dark ages of design. It was the bottom floor of a pit in the eye-torturing prisons of the Inquisition in the dark ages of internet design.
Modern web lacks cross platform - even cross browser - compatibility and is buggy as hell.
I agree Java applets were pretty ugly. But that was 20 years ago...
I want the user agent to enforce this, and I want open standards so anyone can build a user agent.
NNTP achieved these goals. The web has failed in all of these dimensions in practice.
It is very likely the wide majority of today’s content won’t be searchable by historians 200 years from now, because of all the walls we put up, of all types (sign-up-only content, native apps, disappearing companies trashing their databases, etc.).
Still, I guess outside of that, I want more customisation options and standardisation around the date/time picker interfaces on those custom fields. The fact every browser has its own version, and most of them are basically impossible to style is ridiculous.
Also it's time we stopped treating form inputs like some immutable part of the OS UI and let developers customise those too. We should be able to fully style select fields/drop downs, radio buttons and checkboxes, etc.
"The Web We Want... doesn't require you to accept a cookie notice, ignore an anti-ad-blocker warning and newsletter pop-up before circumventing a soft paywall in order to read a quick, shared article."
I prefer a scientific method to discover what we really want here.
This is not the Web that I want.
A lot of people say they want this and that, yet they keep using software and services from the known evil companies, and don't advocate against them.
It's easy to just want things.
Like me, I want federated social media so that users control their content. Probably that's only achievable legislatively, but the companies that don't want to break down their walled gardens - Facebook, Google, Microsoft (e.g. Skype) - can buy off ("lobby", you might euphemistically call it) the major political party in my country, I'm sure. Where does that leave me?
If I stop using Facebook I just lose out from contact with various social circles (maybe that would be better for me, but that's an entirely separate question), I can use Matrix (say?) but I don't know of a bridge that will free my Facebook messages?
Moreover most of the population probably don't know what they don't know and so aren't aware of the technical possibilities that are there, never mind how they might be put together as a tool they then can realise they want ...
And that's not even to address things like companies finagling themselves into education or healthcare.
My opinion is that this approach is wrong and less smart than it sounds. It suggests that it is possible to smoothly move from one closed platform to another, but it is not.
Network effect is everything in a social network, and if you stay on Facebook you're keeping its network effect in place, effectively removing any need to even explore another social network, federated or not.