The Web We Want (webwewant.fyi)
105 points by feross 79 days ago | 95 comments



I know this isn't quite on topic, but this is not the web I want. I want open services to compete with the entrenched players of free-as-a-service, and I want more interop between these open services. I'm sure HN already gets the gist.

I run as de-Googlified a stack as I can: Nextcloud, DAVdroid, LineageOS, exclusively Linux, a private cloud server, etc. This is for my personal usage - at work, we run the whole Google/Atlassian suite, and I'm (mostly) fine with that separation.

But I positively pine for the ease of use and general UX of the closed services when I go back to my private, open-source stack. A quick case in point: the Conversations Android XMPP client.

I'm a Librem 5 backer, and I'm looking forward to the challenge of using it as a daily driver. So I'm looking to wean myself off WhatsApp in favour of something open. But Conversations isn't it. It's a great XMPP client, but it's not something I will ever convince the majority of my friends and family to use. It's couched in technical know-how, and while it's a great app for an open source project, it still manages to feel subpar to the closed-source, back-doorable, privacy-draining consumer offerings. Signal seems the closest so far.

(I should add here that I am trying to do something about this: I contribute haphazardly to OSS I use and I evangelize it in my social circle, but it feels like an uphill battle)

More anecdata: I run self-hosted GitLab, Nextcloud, and Wallabag. Hopefully Mastodon and Pixelfed soon.

There is no freaking way to do SSO between all of these. Despite OAuth+OpenID being older than some of that software. Don't get me started on the fact that the quickest way to get a file from phone <-> PC is to Slack it to myself from my work account.

The web I want will compete with closed-source ecosystems, instead of being a disparate collection of DIY server software that doesn't talk to each other in the slightest.


> the quickest way to get a file from phone <-> PC is to Slack it to myself from my work account.

Assuming that phone and PC are on the same network, KDE Connect solves that scenario. You pair the devices once and afterwards you can not only share files, but also have access to a host of other features. The one I use most is clipboard synchronization.


I use Syncthing for this.

I have a computer that I use to play podcasts, and the subscription list (a file) is synced to my main computer, so I can edit it from there.

Syncthing is also on my Lineage OS phone (until my Librem 5 arrives) so I can have a shared Downloads folder.

In the past, I used it to sync a subset of my music collection to my work computer.

It's reasonably usable for a geek.


> I want open services to compete with the entrenched players

Are you ready to roll out free email, maps, and photo storage on a massive scale? You can't beat free, as they say.


Just to derail the conversation further, I've had a thought for that too - one I'm working up the courage to build myself :D

To keep this short, I'll presume three funding models: free and advertising-supported, donation-based, and paid. I'm focussed on the third option.

Ever notice how many new (typically closed) services are "just the price of a cup of coffee a month"? What happens when I want 20 of these "coffee-priced" services? That's more than I personally spend on actual, gourmet coffee a month (of which I buy copious amounts).

I think there's a decent gap for services that cost $/€/£1/month. The volume of subscriptions at that price point might match the profitability of higher-priced subscriptions sold at lower volume. It might not, of course - but I'm tempted to try it. Mail for $1 a month. Not remotely profitable, but with elastic compute and a high volume of customers...maybe. With a self-hosted option to boot, to lure in the early adopters and convert them when they want to offload the hassle of running the service...

I don't intend to compete with free, but I definitely think there's room for innovation here. I have a day job, but I'd drop it in a second if there were an opportunity to manage a multi-tenant cloud of Nextcloud instances for customers that a) doesn't cost more than DigitalOcean, and b) has other software bundled at the same price point, e.g. Collabora Online, LDAP auth, Mattermost, Mediawiki, etc. All for $5 instead of $5 x n services....

:shrug: maybe I'm too much of an idealist ;)


From working on Purelymail (which does offer mail for ~$1 a month), I would say that it's feasible to run such a service and be profitable, but that the $5/month services have a lot more leeway in marketing, hires, opportunity for huge profits, etc.


I'm personally a fan of Sandstorm.io and hope it can continue growing and offering an alternative much more aligned with individuals' interests than what the largest few software companies in the world are creating.


> hope it can continue growing

:/

> https://news.ycombinator.com/item?id=20312789

"Unfortunately Sandstorm is basically dead at this point. I kept actively working on it for a while in my spare time after the company failed in early 2017, but I haven't found any time to do serious work on it in probably a year now. I do push monthly updates to keep dependencies fresh but that's about it.

Oasis takes in about $1800 per month, which is just about enough to cover serving costs and business expenses (e.g. annual tax preparation). For a while it was making only $800, with me making up the difference out-of-pocket. Then, last October I stopped offering a free plan. Remarkably, revenue more than doubled and has even gradually continued to increase over time -- I had expected a smaller spike followed by a drop-off.

But now it's getting to the point where it's feeling really awkward to let people pay for a service that hosts a library of apps that mostly hasn't been updated in 2.5 years or more. In theory developers could still be updating their apps and submitting new ones, but basically the only app actually getting updates is Wekan. So, not sure how long this can really continue... :("


Ouch. I hope that $1800 grows to enough to be sustainable for Kenton to invest some time or hire someone's time to grow the product, even if slowly.


I don't want to dismiss what sandstorm is doing. They've been around for a while and they've shown some incredible persistence in pursuit of their vision. I'm also a little out of date since I last checked up on it.

IIRC, the SSO issue I mentioned shows up in sandstorm as well. They either wrap or fork (I can't recall the specifics) the relevant projects to make them "fit" within the sandstorm ecosystem. SSO, ACLs, and overall storage / permissions are handled by sandstorm, which is why there's a list of supported apps - apps that haven't been "wrapped" don't integrate with the sandstorm APIs, so they can't be used directly (I may be wrong, now that it's 2019).

That's...kinda my point. There's OIDC, OAuth, Unhosted, IndieWeb, microformats (repeat ad nauseam) and yet there's still somehow a need to create the XKCD #927 situation (yet another standard). There's already schema.org (OK, Google influence :shrug:), the ActivityPub vocab (still new-ish), RDF / Linked Data etc.

It's not a slight against any of these projects, particularly sandstorm. IMHO the effort and onus should be placed on the component projects to play well with others. GitLab and Nextcloud deserve shout outs for exploring or working on federation strategies that should remove the need for things like sandstorm, but we should keep fighting for and implementing similar things for other projects.

(This discussion is now convincing me to open a second pull request to Wallabag implementing OIDC login)


Wow, this really looks awesome! It requires polish, as most OSS does, but this idea of an open web app store / cloud that isn't tied to a vendor is amazing.


I am in a pretty similar position to you.

Have you taken a look at Keybase as an alternative for IM and replacement of Slack-for-sharing-with-yourself?

  Advantages over Signal:
  * Doesn't rely on phone number for auth or id
  * No limit on devices
  * Great CLI experience
  * Smooth to share files and messages in groups
  * Encryption closer to widely used protocols

  Downsides:
  * Server-side closed source (but hey, that goes for pretty much any IM alternative you hope to use outside of close circles today unfortunately)
  * US-based company
  * Can't think of much else
UI/UX still has some way to go before being on par with e.g. WhatsApp or Telegram, but they've been improving a lot recently, so I think they might get there soon.


Wow, again - last time I looked at Keybase I just remember it being identity verification. Glad to see they're expanding on that.

I am - unfortunately - picky :D as you mention, they haven't released their server code and don't seem particularly interested in doing so (which is their full right). Good thread for that here: https://github.com/keybase/client/issues/6374

I get that some people aren't too fussed as long as the cryptography is done right, and you could reverse engineer the protocol and interop a la Amazon S3, but...yeah. Just not me I guess.

I'm new to the limitations you mentioned (number of devices, phone-number auth, etc.). I'll need to take a good look before I jump ship. But my first thought is that I'll accept those downsides - part of the current status quo of messaging apps - for a fully open platform I can support and possibly get my parents to use without too much trouble (yep, my principles have priorities, lower ones get sacrificed when absolutely necessary ;))

(I lied about the phone sharing bit a little - Nextcloud sync is really good somehow, I just wish there was something like AirDrop for everyone else)


Yeah, I totally get you. Basically on Signal you have the same issue as on WhatsApp: phone number required, unique per account, you can only ever have one device at a time (desktop apps are not first-grade clients but sync to the phone), and all the privacy and other security issues that come with that...

I managed to get a lot of my non-geek friends over to Signal, but if things don't change I think I will switch preference to Keybase soon in lack of better alternatives.

It's unfortunate that there's no great implementation of federated IM today. XMPP is a mess and Matrix has a long way to go when it comes to that part, but from what I can tell they have a good vision and might just be able to pull it off so I am keeping a close eye on it.

Then we have Secure ScuttleButt, which I find very interesting and would love to see mature.


> Matrix has a long way to go when it comes to that part, but from what I can tell they have a good vision and might just be able to pull it off so I am keeping a close eye on it.

Matrix is pretty usable for me and a bunch of my friends and family these days. The clients are getting pretty good as well.


Back in the 199x I was able to draw a design and a pretty decent UI simply in CorelDraw. Stack some shapes, promote outlines, stretch textures, intersect/combine/diff/layer/group, place along a few guides and that's it. Now add some linear algebra on a few points to make it resizable (including scroll and time axis) and everyone will create any app or site they need in hours. That's what I did, working as a teenage 'designer' in a small print shop producing promotional posters, business cards and info booklets (basically, almost all websites you can imagine, but on paper). Oh, and forms, lots of forms.

I mean, finally throw away that HTML/CSS mindfuck, please, and provide a decent shaping and coordinate-relationship system instead. So I could go back to that boring 20-projects-a-week routine.


I won't argue that today's web landscape isn't bloated, but the designs you built in 199x had potential audiences several orders of magnitude smaller and less diverse (in terms of users and devices) than what the average webdev has today. I don't think it's possible to have such breadth and size of userbase and use-cases without a non-trivial increase in build complexity.


It may not be possible, but that doesn't mean the HTML and CSS clusterfuck is anywhere near the level we'd have expected back in the 1990s. None of the tools we have today can hold a candle to a proper UI builder -- which would suffice for a lot of "progressive" web apps. Back In The Day (TM), with a proper UI builder (as seen in Visual Basic, C++ Builder, Qt Designer) I could build ten different versions of the same dialog, for ten different resolutions and aspect ratios, in about the same time it takes me to get a button to stay where I want it in CSS.

And, by the way, the weirdness of the day -- and the capabilities of the tools -- shouldn't be underestimated! The good ones did proper relative positioning and alignment, supported various kinds of "flowing" layouts and so on. It wasn't uncommon to have to support all modes between 640x480@4 and 2048x1536@32, and basically no applications ran in full-screen mode, so you didn't have about a dozen resolutions and aspect ratios, you had pretty much an infinite number of combinations.


Granted, I'm not a huge fan of the HTML/CSS/JS that it generates, but don't services like Wix make it incredibly easy for a layperson to build a serviceable and attractive UI? Again, I reiterate that the generated code is an absolute clusterfuck (even compared to what I remember Frontpage 99 making), but from what I can tell, the end-user never touches the generated raw code.


Tools like the ones I mentioned weren't meant to allow a layperson to build a UI, they were meant to allow developers to build a UI. The experience hasn't degraded much for people who don't write code for a living, but it has degraded significantly for developers.

FWIW, it's part of why I moved away from all this stuff, more than 15 years ago. I discovered a lot of other fun things computers could do (I got into embedded programming, operating systems, security etc.), and web development stopped being fun when we seriously continued digging the hole we were in instead of climbing out of it.


> web development stopped being fun when we seriously continued digging the hole we were in instead of climbing out of it.

well, at least we got docker and kubernetes to save the day and can scale infinitely with our superior CI, which sadly breaks every build and needs a whole department to shepherd it into this brave new world


Why should a layout designer care which device the document is rendered on? (Beyond form factor which is addressed by “ promotional posters, business cards and info booklets (basically, almost all websites you can imagine, but on paper)”)

Similarly, how does CSS help me translate my document? (Based on the court findings in the recent Domino's case, web accessibility technologies may as well not exist, even though that problem has also been solved for paper for a century.)


Huh? Nothing is stopping you today from creating an image map or doing table-based design, other than a client who wants a mobile-friendly design.

I don't recall mentioning anything about translation. I don't know what hot-button issue you're going on about but I have a feeling it's tangential to the topic of web design.


The article specifically calls out CSS usability and rendering fidelity as things they are interested in.

Pre-web, a content layout shop could apparently do about 20 designs (in three form factors each) per seat, per week. These would be done to best-practices, and also be aesthetically pleasing.

I don’t think that is true for modern web design. If anyone here has that sort of design throughput, I’m guessing a lot of people on this thread would be interested in it.


I did print design (e.g. PageMaker, Illustrator, Photoshop) and web design (including Flash) in the early web days. The workflow and output are not comparable, period. Print and web layout both have their respective inherent advantages and limitations. But I think it's self evident that, in general, a webpage is meant to have far more interactive features for a far bigger audience across a far bigger variety of devices than a print design. But I'm happy to see examples of PDFs that are the exception to this.


“I don’t think it’s possible to have such breadth and size of userbase and use-cases without a non-trivial increase in build complexity.”

But why? What in fully flexible geometry and linear problem solving prevents us from supporting diverse userbases or devices?

I'm not speaking about throwing away accessibility support or using a fixed-size medium. I'm simply asking for sane math-based rather than MSWord-based primitives.


But the Flash Designer could accommodate more design complexity and would scale to different screen sizes. It's not impossible to build modern RAD tooling and layout designers. Even some of the ones existing on the web these days for end users aren't bad.


Nothing about today's web precludes doing the exact same thing now, except maybe that the editor you used doesn't run on today's OSes. In fact, the exact websites you made back then probably still work as intended.


The Web I Want:

* Usable on older devices and slower connections

* Respect for accessibility, including no-JS and no-mouse

* Markup that weighs less than 20x the content

* No malware and adware

* Respect for non-Chromefox users

Edit: Just more respect for the user in general would be very nice. Pop-ups. Hostile TOSes. Making users jump through hoops. Spam. Unsubscribe that requires login. Text that's mixed with promotional links. Hostility to no-JS users.

I decided a while ago that this is not the Web I want, and when I come across this type of crap, I just leave. And I don't click medium.com links anymore, for example. I am much happier now as far as browsing the Web goes; my stress levels are down.

With the time I've saved, I've been able to find new, nice, quality websites, which I visit instead.

And I realized something else: The quality of content I access has become higher as a result. Good quality content is apparently accessible. Crap quality comes with the above.

TL;DR: Stop putting up with crap.


> Respect for accessibility, including no-JS and no-mouse

How is JS directly related to accessibility?


Accessibility is not just about vision-impaired users. It is about making your website accessible -- to anyone.

JS websites without fallback reduce accessibility for:

* Users with slower devices

* Users with dated devices and browser software

* Devices with poor connectivity

* Users who have JS disabled through no choice of their own.

* Users who choose to disable JS.

* Browsers which do not support JS.

* Accessibility software.

* Not to mention bots, scrapers, etc., who also have valid use cases.

This is a non-comprehensive list I came up with in just a few minutes. I'm sure there are many more items that can be added to it.


Not really an expert, but here goes: JS is for talking to computers, text is for talking to people. Accessibility, practically speaking, often means text-to-speech. At that point, JS is just gobbledygook in the way.


the current web makes me miss the days of gopher and browsing with lynx.


There's enough tooling as it stands. In-browser devtools are a veritable kitchen sink of possibilities to tap into. And even a cursory glance at some open source projects on Github yields a plethora of tools, modules and snippets of code to use in your project. In essence you can do anything you want with some JS, CSS, and HTML. This is the beauty of the web. We get lost when we ask for more tools and techniques to do things. It's like a massive collective hallucination that the current stack(s) for developing are unworthy or need to be drastically improved. Yes, progress is good, but we've reached a tipping point IMHO.


So many things get needlessly replaced for a mix of these reasons:

• ignorance: someone hasn't taken the time to understand what it does or how to use it.

• maturity: the existing thing is perceived as "old", "fragile", "prone to error", not conforming to current expectations and norms (oddly, norms that the existing thing put in place). This perception is influenced by the ignorance mentioned above.

• ego: the replacer's name isn't on the established thing.

• hubris: the current state of the art is fragmented, and will finally be unified under a new regime, and only a new guard and new stewards can do this.

There's a difference between (a) continuously making things better/adjusting to new requirements, and (b) forever creating Brooks-style Second Systems. Recognizing when the line has been crossed, or that the thinking and approach is tending towards (b), is more art than science - and by the time you do, the tipping point is usually well in the past.


True; nevertheless, the current stack(s) for developing need to be drastically improved as long as the web is meant to be a platform for building apps and not only for publishing documents. I recall how easy and fun it was to build apps with Delphi and similar IDEs, and now the whole React&Friends stack scares the shit out of me.


What scares you about it? React seems pretty easy to use?


When I make a WinForms app in Visual Studio I just drag-drop controls, click them and adjust properties the way I want, then click the controls and write the code that is going to execute on particular events. I never had to learn this; I didn't need a single tutorial.

Once upon a time I got a junior C# developer job (knowing nothing but basic C++ and SQL), was given a computer and a real task, and implemented a reasonably good-looking (looking and feeling the way common to all Windows apps) business app that went straight to production in just some hours. And I didn't need anything but intuition and the API reference. It felt exactly like playing Lego.

Although I also know basic HTML, CSS and JavaScript (which I learnt during the Windows NT 4 years), I have no bloody idea what to do to get approximately the same result with today's web-based front-end technologies. I've seen some React code and it seems pretty extraterrestrial. What scares me is the amount I need to learn before I can produce a useful application. And it seems (and this feels the scariest) that Photoshop and the art of visual design are on the list.


Open a SwiftUI tutorial and build the same thing in React + CSS.

By the time you've thrown your laptop out of a window trying to make elements stay in correct positions relative to each other and the page, a desktop or a mobile developer will have shipped a production-ready app with 10 times the number of layouts and features.

Hell, a dev from 2000 will have shipped 10 times the amount of layouts in half the time with Delphi or C++ Builder.

Edit: Someone linked Figma's blog post on how they ended up "creating a browser inside a browser" for their complex and amazing app to work: https://www.figma.com/blog/building-a-professional-design-to...


If you are throwing RAD tooling in, aren't things like WooForms and SquareSpace in the same arena?


Can they produce an app you can run on a server (or localhost) of your own?


Webflow can make you the UI part, which you can host yourself. In fact I think a lot of devs are starting to use it this way.


I've had this idea for quite some time, but it really manifested after I started using React. I'd like to hear the thoughts of fellow web developers on this.

The whole browser-facing side of the web is a mess. I think it's quite similar to C++'s situation. Because we require backwards compatibility, the standard is becoming an unmaintainable mess, and the best plan would be to phase out the current browser technology while building something simple and powerful that applies the lessons learned from the current web. I think WebGL and WASM would be a good way to start. That way we could build a new browser inside the old browser, and once the new browser is stable, we can build the old browser inside the new browser and let it have its well-deserved EOL. I'm usually in favour of fixing existing solutions rather than starting from scratch, but the current web is such a badly engineered mess that it warrants a rebuild.


Java applets were the very first incarnation of WebGL and WASM really.

> to phase out the current browser technology while building something that applies the lessons learned from the current web and build something that is simple and powerful.

Yeah, I do remember a couple of HTML/CSS rendering implementations in Java applets, so we've been there, seen that.

As for React …

React is a Scottish dance just to get didMount/didUnmount calls on custom elements, which can be accomplished way more easily. And the use of diff algorithms (polynomial complexity, sic!) just for populating the DOM is kind of too much …
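For what it's worth, the platform already gives you those hooks natively: Custom Elements fire connectedCallback and disconnectedCallback when they enter and leave the DOM. A minimal sketch (the hello-widget element name is just a made-up example):

  // Custom Elements provide mount/unmount hooks without any framework:
  // connectedCallback fires when the element is inserted into the DOM,
  // disconnectedCallback when it is removed.
  class HelloWidget extends HTMLElement {
    connectedCallback(): void {
      this.textContent = "mounted";
      console.log("didMount");
    }
    disconnectedCallback(): void {
      console.log("didUnmount");
    }
  }
  customElements.define("hello-widget", HelloWidget);

  // Usage:
  const widget = document.createElement("hello-widget");
  document.body.appendChild(widget); // logs "didMount"
  widget.remove();                   // logs "didUnmount"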


I read about how Figma (online design tool) basically did indeed end up building a browser inside the browser using WebGL, WASM, etc.

It’s a pretty awesome app.

https://www.figma.com/blog/building-a-professional-design-to...


What are some of the design choices you would make differently than existing browsers?


Ditch CSS and HTML completely and create a sensible layout engine. If you base everything on a low-level rendering API, you can basically create your own framework (though you shouldn't need to). Databinding, event handling, etc. should be as simple as in desktop apps, so we don't need to create ham-fisted abstractions à la Angular or React. Make it so that 90% of websites don't need any additional code (so that you can reasonably ask users for code-execution permissions, similar to notifications). Maybe add the option for low-level networking so that mail protocols can be supported natively without extra interfaces (I'm unsure about this). Make accessibility so simple it becomes the default. Performance should be a no-brainer too. Make tracking as hard as possible from the get-go. Keep the specification lean and don't add stuff on corporate whims. Reduce the memory usage of browsers. Since it's based on WASM you can use any language you want. Hopefully this will encourage better programming practices. Because frankly, the current JavaScript framework/NPM/Yarn/Gulp/Bower/whatever environment is just ridiculous.

Most technical issues can realistically be tackled. I doubt you could win against Google et al. on the tracking/code execution stuff.


> Databinding, event handling etc. should be as simple as in Desktop apps

I'm pretty ignorant of how desktop apps work; how simple is it? Are we talking a straight up pub/sub system?


Sorry, just saw this. The problem with angular and react databinding is that it takes away control from developers. A simple pub/sub system can be extended to everything a developer wants, without clumsy hash-key insertion, digest cycles or array reconciliation. I think WPF did it fairly well back in the day.
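To make that concrete, here's a minimal sketch (my own naming, nothing WPF- or React-specific) of the kind of pub/sub binding I mean - subscribers register against a topic and get notified on publish, with no diffing or digest cycles involved:

  // A minimal pub/sub-style databinding sketch: handlers subscribe to named
  // topics and are notified on publish. No virtual DOM, no reconciliation.
  type Handler<T> = (value: T) => void;

  class EventBus<T = unknown> {
    private handlers = new Map<string, Set<Handler<T>>>();

    subscribe(topic: string, handler: Handler<T>): () => void {
      if (!this.handlers.has(topic)) this.handlers.set(topic, new Set());
      this.handlers.get(topic)!.add(handler);
      // Return an unsubscribe function so bindings can be torn down explicitly.
      return () => this.handlers.get(topic)?.delete(handler);
    }

    publish(topic: string, value: T): void {
      this.handlers.get(topic)?.forEach(h => h(value));
    }
  }

  // Usage: bind a DOM element's text to a topic, then publish updates.
  const bus = new EventBus<string>();
  const label = document.querySelector<HTMLElement>("#status");
  bus.subscribe("status", v => { if (label) label.textContent = v; });
  bus.publish("status", "Saved.");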


Interesting. I'd love to give feedback, but I don't think anything I want fits into the modern web. What I want are often problems with the modern nature of things, not so much with features lacking in the web as it stands. Eg;

1. I dislike how mutable information is. While I really value ephemerality, I think the "web" needs the ability to refer to content in an immutable way. Misinformation is big these days, and mutation only adds to confusion. It's why some information-store stuff I'm writing is all content addressed; in my view information needs to be immutable. (I sketch what I mean by content addressing right after this list.)

2. Obviously tracking is a problem these days. I don't think this is a problem of the "web", but I do think it's a problem of the "web" that we have no easy way to distinguish between information sources, like an HTML page, vs applications with a user experience. I love big bloated web apps, but I think the user needs to be able to distinguish what type of application they're seeking. Information should not opt into complex application user experiences, aka JavaScript. The web keeps opting into more feature bloat as the default; this seems wrong - the defaulted opt-in, not the feature bloat itself, to be clear.

3. The web as it stands also seems to embrace centralization over decentralization. This again is fine (imo) for user experience stuff; not everything needs to be decentralized. Yet for information, especially immutable information, decentralization seems not only useful but vital. Ideally, stupid simple decentralization too. Connected hubs of peers are nice, but I don't think we should be required to run a process/server to view and manage something immutable. Being able to `git pull` a slice of the immutable information you want seems (to me) to be a critical lowest common denominator here. Sharing, slicing up, distributing on flash drives, etc - the web as I'd like it should promote sharing of information in ways that fit as many people as possible, not just the ones permanently connected to the internet and so forth.
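(The content-addressing sketch promised in point 1: the "link" is a hash of the content's bytes, so it can never silently point at different content later. Node's built-in crypto module here, but any hash function would do.)

  // A minimal sketch of content addressing: the address of a piece of content
  // is a hash of its bytes, so the address can only ever refer to that content.
  import { createHash } from "node:crypto";

  function contentAddress(content: string): string {
    return "sha256-" + createHash("sha256").update(content, "utf8").digest("hex");
  }

  // Anyone holding the address can verify that what they fetched is what was linked.
  function verify(content: string, address: string): boolean {
    return contentAddress(content) === address;
  }

  const addr = contentAddress("The exact text I am citing.");
  console.log(verify("The exact text I am citing.", addr));  // true
  console.log(verify("An edited version of the text.", addr)); // false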

I could go on, but I think ultimately what I end up describing is an alternate web. Something based around immutability and information, with opt-in full user experiences (aka javascript, wasm, etc) on top of that. What does this look like? Is this IPFS or DAT? No idea.

All I know is I'm making an information store, likely very imperfect and full of flaws, with these ideals in mind. I love the model of Scuttlebutt and a compatibility of that and connected-distributed ideas like IPFS/DAT seem best to me currently. I hope we continue to embrace immutability.


The immutability argument makes a lot more sense once you detach presentation from content (JSON, markdown, style-free HTML). A presentation-free Web is basically a DB with immutability acting as its versioning system. IPFS/Dat make that argument stronger by adding writability for end-users.

A presentation-free Web also empowers the browser to set the presentation, which really helps with issues around accessibility, anti-tracking, end-user customization, and (I'd argue) decentralization of Web software overall.


Immutability doesn't scale.

Users like having experiences.

It's much easier to build something like Google/Uber/AirBnB on GUN/CRDTs than only-hash based or append-only protocols.

End users don't use something for the protocol, they use it because the app developer created something that gives them value.

We need to stop pitching decentralization to the fantasies of developers, and start giving value to makers, shippers, and users.


> Immutability doesn't scale.

I imagine you're thinking of specific implementation(s) of immutability. Regardless, my use case is not that of Google; my use case is of the person. It's of being able to share, own and read information with trust.

> Users like having experiences.

This feels like a straw man. I very much supported UXs in my post; I even said I love bloated feature rich user experiences. I don't think anyone intends to entirely give up feature rich web. We often just want options. Right now we only have the mutable, feature bloated web. While I love it for applications, it seems terrible for information.

> It's much easier to build something like Google/Uber/AirBnB on GUN/CRDTs than only-hash based or append-only protocols.

Quite possibly; I do not claim to have all the answers. What I have is a desire. A desire to be able to refer to content, and not have someone change the meaning of that content.

In the age of misinformation I don't think we have a choice. In the same way that a signed binary can be important, I think signed / immutable information is important. Do you disagree?

> End users don't use something for the protocol, they use it because the app developer created something that gives them value.

Agreed; which is why I was talking about value I see in the features of immutability I described. As well as the anti-value (is that a thing?) I see in the state of the mutable web as it stands now.

> We need to stop pitching decentralization to the fantasies of developers, and start giving value to makers, shippers, and users.

Saying decentralization is a fantasy seems in spite of some of the most successful technologies in human history. This is odd to me.

edit: as well as new federated software embracing decentralization. You being against this is perplexing to me.


Granted, there are certainly worse implementations of immutability than others, but immutability as the foundational architecture (even in its best design) creates bottlenecks.

Yes, a person owning their data should mean they can change it!

Fair point on UI/UX, you got me here - you are right.

It is trivial to build immutability on mutability, this is what MD5/SHA checksums do all the time on things like downloads. But if your base system is immutability, it is hard to do anything else well.

To your question: I agree signing is important. Doesn't mean it needs to be immutable!

Very well said! I'm impressed with how both precise and concise that was. Good point.

Oh sorry, decentralization is great; its target use case just shouldn't be "developer fantasy" but "user value add". Sorry for the confusion here. I'm pro decentralization (I build p2p protocols).


> Yes, a person owning their data should mean they can change it!

Well being able to change data is of course needed; so that goes without saying (I should hope). However what I mean by immutability is rather that I think the web should allow me to link to a piece of content, and ensure that what I link cannot be changed from underneath me if I choose not to.

I feel like I'm describing features and you're describing implementation. To me, immutability is a feature. I'm not at all referring to blockchain or any of that junk - merely that I want to be able to link (in the web sense, URL/etc) to content - to discuss it with other people, etc. - and have that content not change.

Optionally of course I should be able to link to mutable content. However we have mutability down. The feature add I'm talking about, is immutability.

> It is trivial to build immutability on mutability, this is what MD5/SHA checksums do all the time on things like downloads. But if your base system is immutability, it is hard to do anything else well.

Sure, but that's an implementation detail. I was not saying (or did not intend to, at least) that the "base implementation" must be immutable. All I care about is that immutability is a first-class citizen and that linking immutable content is possible - building user features around immutability, whereas nothing like that exists in the current "web".


Excellent separation of concerns (feature vs implementation), totally clears things up in my head, and I agree with you now.

Do you write/blog/tweet? Your clarity and ability to see through things is fantastic. I'd love to follow your work.


Haha, alas no. You have me curious about CRDT and immutability though and will be investigating it more for my application. If at all possible. Appreciate the talk!


Discussion around the technical side isn't developer solipsism. It's part of the process for product design.

Append-only logs do scale. Dat 2.0 uses a hash-trie structure inside its file metadata log to act as pointers for fast lookups. Partial replication means you don't have to pull down the full history.

Mark, you work on a competing product to ours. I'd appreciate it if you didn't FUD our work. We don't do that to you.


DAT is one of the better protocols because it has evolved its approach and is starting to adopt CRDTs.

I'm FUDing append-only logs and immutability, not products - don't conflate the two, or else you discourage legitimate discussion around the scalability of different architectures.

Question: Do you have to still store the full history?

How would you explain scaling something like Uber with this model, then? You're gonna have 100s of GPS coordinate updates per second, for each car, potentially millions of them.


What I really want is for browser tabs not to give AAA games a run for their money in resource use, for little to no benefit in usability.


I believe that most of the frustrations we've got come from the fact that we are trying to handle two distinct tasks with a single tool.

1. Web as a source of information. A super-Wikipedia, if you wish. Mostly for us - readers.

2. Web as a functionality delivery tool - access and interaction with applications.

As we know, a universal tool is never as good at a particular task as a specialized one.

And that is the main problem I think.

At early stages we had semistatic HTML(3.2)/CSS(2.1) that was quite adequate for task #1 (reading, consuming information). And we had Java applets for applications.

These two were more or less orthogonal and could evolve independently, so they could remain specialized tools optimal for the tasks they serve.

Hey, we had Android with a Java UI... so technically things like VSCode or Google Docs could be just Java [or Kotlin or whatever] applications inside the browser, if someone really wanted that ...


I'll volunteer mine, in the hopes that it's been solved already:

After transpilation, minification, and with flaky source map support in many places (e.g. rollbar downloads new source maps after the first error, and doesn't apply them retroactively to the stack trace), a lot of information about errors is lost. Combine this with ubiquitous async programming, and all you get is something like "a.x is undefined", which is hardly actionable.

I propose two solutions. First, instrumentation of functions (like rollbar does in python) where you can see the function's local variables and arguments alongside the stack trace. Second, async traces, which trace the scopes where a promise was created, fulfilled, rejected, and awaited. These are huge omissions in javascript tooling today.
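To illustrate the second one, here's a rough sketch (my own wrapper, not an existing library API) of capturing the creation-site stack and attaching it to any rejection:

  // A rough sketch of "async traces": wrap a promise-returning function so that
  // the stack at call time is captured and appended to any rejection's stack.
  function withAsyncTrace<A extends unknown[], R>(
    fn: (...args: A) => Promise<R>
  ): (...args: A) => Promise<R> {
    return async (...args: A) => {
      const creationStack = new Error("async trace (created here)").stack;
      try {
        return await fn(...args);
      } catch (err) {
        if (err instanceof Error && creationStack) {
          err.stack = `${err.stack}\n--- created at ---\n${creationStack}`;
        }
        throw err;
      }
    };
  }

  // Usage (fetchUser is hypothetical): the rethrown error now carries both the
  // rejection site and the site where the async call was initiated.
  declare function fetchUser(id: string): Promise<{ name: string }>;
  const tracedFetchUser = withAsyncTrace(fetchUser);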


What I'd like to see is compiled JS with the equivalent of debug symbols; minified JS with source maps feels like a patchy solution at best - after all it's still JS, represented and transferred as ASCII text. I feel like this could be accelerated by a lot.

I wouldn't mind common libraries (jQuery, React, etc.) being shipped by browsers (lazy loaded), with signature verification in the browsers themselves. CDNs / single code locations sorta help by increasing the chance of a browser cache hit, but that too seems like a workaround.
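Something adjacent does exist already: Subresource Integrity lets a page pin a script to a digest via the integrity attribute, so the browser refuses to run a tampered copy. A rough sketch of computing an SRI-style digest with the Web Crypto API (the URL is just a placeholder):

  async function sriDigest(url: string): Promise<string> {
    // Fetch the script bytes and hash them with SHA-384, the usual SRI algorithm.
    const bytes = await (await fetch(url)).arrayBuffer();
    const hash = await crypto.subtle.digest("SHA-384", bytes);
    // Base64-encode the digest and prefix it the way the integrity attribute expects.
    const b64 = btoa(Array.from(new Uint8Array(hash), b => String.fromCharCode(b)).join(""));
    return `sha384-${b64}`;
  }

  // Usage: the resulting string is what goes into <script src="..." integrity="...">.
  sriDigest("https://example.com/lib.js").then(console.log);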


I haven't used rollbar, but both Sentry and BugSnag allow you to upload source maps on build. I've been satisfied with stack traces.


Hot take: the web technology toolchain is plenty powerful as it is, and expanding it will cause more problems (through misuse of newer features) than it will solve.

Yes, web standards offer loads of tools that let developers and designers get _really_ creative with their work, but we've reached a point where many of us are spoiled for choice, and loads of the mainline options are used, abused, or otherwise ill-optimised to the point of making many websites heavy enough to turn many laptops into space heaters.

Worse still, the "spoiled for choice" factor means that people keep finding radically new ways to use the toolchain; while this is arguably not a bad thing in isolation, it makes it _so_ difficult to keep up with what you're _supposed_ to use at any given time.


Technology and progress is great, but deciding on what to use is becoming like browsing for something to watch on Netflix (or other streaming services).

The time we spend deciding, dealing with devtools and toolchains, and debugging magic helpers or abstractions on top of abstractions (and I love a good abstraction) takes time away from what we are building: the product.

Simple is good; complexity is bad unless it's made of simple layers/parts, and not too many of them to deal with. Simplification is good; refactoring is necessary at this stage. In a way, this is technical debt by choice. Move fast and break things - sometimes the 'break things' part of that is adhered to a little too much.

Engineering/product development is taking complexity and simplifying it, are we doing that? Or are we adding complexity where it was simple? I am looking forward to the next great simplification wave.


It's in the interest of big corporations (like Google, Mozilla, and Microsoft) for making a browser to be as complicated as possible, so they face no competition.


I threw this down the tubes, but put this here as well:

What I'd like to see is an HTMLx that addresses the core needs of the Web-using entities, and an ecosystem of Web extensions that expand on the capabilities of it. So, something like what smartphones with their apps and markets are doing. Basic call and message capabilities, but then there's TrueCaller and WhatsApp and the like.

Bear with me. Business, gaming, video and publishing would be the core interests. So this HTMLx would get forms and credit card processing and the like, accelerated sound and input and graphics and networking, two way streaming and en/decoding, and lastly, a responsive grid and symbolic styling. Everything else, all of it – client side tracking, syndication, advertisement, compulsory client-side computation of various forms like fonts and GPU or native kernels, even JS code and client storage and cookies – I think should be available through a market. If your website is using something like that, let it suggest an "app" module to the user to install and enable on this domain.

"Please support our project by enabling advertising on this domain [OK] [Later] [Never]" "For Youtube Movies please install DRM/x264 package from the market [OK] [Later] [Never]" "You will need to enable storage, code, and gaming modules to use Itch [OK] [Later] [Never]"

Because browser extensions suck. Browsers suck. Mobile browsers suck. The experience of the Web sucks, for anyone that knows better. Shopping, gaming, streaming, syndication and publishing all suck big time.


This is turning into a great thread for developers to vent about browsers. "Accept/block notifications?" prompts have become the DHTML popups of our day.


One day the modern web will be sufficiently powerful and standardised to fully recreate the 1990s Java applet.


Welcome to now.


Needs to crash more. Lacks platform incompatibility. Needs more cheese.

That wasn't the dark ages of design. It was the bottom floor of a pit in the eye-torturing prisons of the Inquisition in the dark ages of internet design.


Can’t tell which way around you’re talking.

Modern web lacks cross platform - even cross browser - compatibility and is buggy as hell.

I agree Java applets were pretty ugly. But that was 20 years ago...


I want tracking to be opt in, and content serving to be decentralized (at an organizational level — not one company with a decentralized network).

I want the user agent to enforce this, and I want open standards so anyone can build a user agent.

NNTP achieved these goals. The web has failed in all of these dimensions in practice.


This is the Web I want and am working on, in significant organizations that are also working on this web (connected to DLTs &c via DIDs), as a solution to everyday problems and a transformative way forward. https://ruben.verborgh.org/blog/2017/12/20/paradigm-shifts-f...


Something I hadn't considered before I started working with a historian friend is the duty we have as developers to provide searchable, durable, standard content for future generations.

It is very likely the vast majority of today's content won't be searchable by historians 200 years from now, because of all the walls we put up, of all types (sign-up-only content, native apps, disappearing companies trashing their databases, etc.).


Hmm, I really want to suggest that linked SVG files should be able to be styled like inline/embedded ones, but I partly feel this item on the site would probably cover that:

https://webwewant.fyi/wants/13/

Still, I guess outside of that, I want more customisation options and standardisation around the date/time picker interfaces on those custom fields. The fact every browser has its own version, and most of them are basically impossible to style is ridiculous.

Also it's time we stopped treating form inputs like some immutable part of the OS UI and let developers customise those too. We should be able to fully style select fields/drop downs, radio buttons and checkboxes, etc.


If this wasn't technology-specific, I'd submit an idea of:

"The Web We Want... doesn't require you to accept a cookie notice, ignore an anti-ad-blocker warning and newsletter pop-up before circumventing a soft paywall in order to read a quick, shared article."


There are block lists that deal with cookie popups and the like.


Would you pay $0.20 to avoid that?


Yes. Or rather, I'd be happy to pay let's say $25-$50 a month for an overall "News Subscription". Then this amount would get distributed over the various news sources I visit, either based on how many articles I read from a certain source or some other "fair" mechanism.


I find it interesting that for this site the "we" is web developers. It isn't web consumers or, for lack of a better term, typical users.


Wouldn't it make more sense to have those discussions under the moniker of the WWW consortium rather than having yet another entity?


I was gonna write that I do not want any more tools in my web toolbox. Then it struck me: I'm working on developing ... a web tool. But I'm struggling to find product-market fit - who needs more/better tools? And what do you need?


The problem with the web isn't its technology; that part is amazingly great now. The problem is with (a subset of) the people who make websites, and the business interests driving them.


You often don't even know what you want until you build it and try it in PRODUCTION.

I prefer a scientific method to discover what we really want here.


Well that was a bad first impression. I don't want mobile websites asking to place their shortcut on my homepage.


I want a Web in which we use different languages to describe a document and the UI for an app.


The second I saw this, I knew what I wanted: CSS transition to auto heights/widths.


I want the user to be in full control and not the web site provider.


Has anyone looked at the source code for this page?

This is not the Web that I want.


Can they help building a time machine to undo the iPhone?


Meh.

A lot of people say they want this and that, yet they keep using software and services from the known evil companies, and don't advocate against them.

It's easy to just want things.


Most people perhaps don't know how to address the gap between what they want and what they use?

Like me, I want federated social media so that users control their content. Probably that's only achievable legislatively, but the companies who don't want to have to break down their walled gardens - Facebook, Google, Microsoft (eg Skype) - can buy off ("lobby" you might euphemistically call it) the major political party in my country, I'm sure. Where does that leave me?

If I stop using Facebook I just lose contact with various social circles (maybe that would be better for me, but that's an entirely separate question). I can use Matrix (say?), but I don't know of a bridge that will free my Facebook messages.

Moreover most of the population probably don't know what they don't know and so aren't aware of the technical possibilities that are there, never mind how they might be put together as a tool they then can realise they want ...

And that's not even to address things like companies finagling themselves into education or healthcare.


> If I stop using Facebook I just lose contact with various social circles. I can use Matrix (say?), but I don't know of a bridge that will free my Facebook messages.

My opinion is that this approach is wrong and less smart than it sounds. It suggests that it is possible to smoothly move from one closed platform to another, but it is not.

The network effect is everything in a social network, and if you stay on Facebook you're keeping its network effect in place, effectively eliminating the need to even explore another social network, federated or not.



