
Mozilla’s Fix-the-Internet Incubator - SQL2219
https://builders.mozilla.community
======
detaro
previous discussions:

[https://news.ycombinator.com/item?id=23182232](https://news.ycombinator.com/item?id=23182232)
(AmA)

[https://news.ycombinator.com/item?id=23194178](https://news.ycombinator.com/item?id=23194178)
(TC article)

------
RodgerTheGreat
Standards-compliant web browsers are impossibly complex to build and maintain,
which is why the handful which exist are all under powerful commercial
influence.

If you want to "fix" the internet, spend some time enriching open, federated
protocols that are designed to be human-scale. The internet is here to stay,
but we can do better than the web as it presently exists.

Gemini is worth looking into:
[https://gemini.circumlunar.space](https://gemini.circumlunar.space)

~~~
Taikonerd
I love the idea behind Gemini, and on a similar note, I'm wondering: is there
a name for the back-to-basics, web 1.0 philosophy? Like, a name for the "sites
that look like
[https://motherfuckingwebsite.com](https://motherfuckingwebsite.com) " ?

~~~
Reelin
Perhaps the idea of brutalist web design fits? ([https://brutalist-web.design/](https://brutalist-web.design/)) HN is a good example of it IMO.

~~~
clarry
It looks alright but to me the simple-web ideal is rather like.. no design.
Like, you don't design an essay, or a comment, you just write one. And so was
the web. Initially you did not do "web design", you wrote hyperlinked
documents with some markup to support formatting (not design!) and embedding
images; how it looks in the end is up to the user agent (and transitively to
the user who could pick and configure their UA).

Then web design happened and it all went to hell. Web "designers" started
doing "cute" things and business owners started wanting more and more branding
and fashionable stuff.. and here we are. I've said it before:
[https://news.ycombinator.com/item?id=22608293](https://news.ycombinator.com/item?id=22608293)

------
lightninglu10
Hey everyone, cool to see this being shared again. I'm Patrick, one of the
mentors for this Incubator. Lots of chatter in this thread already, but just
want to reiterate what we're offering / looking for in this Incubator:

We have 3 different offerings for this Summer. All incubator style, meaning
you meet weekly or biweekly with mentors and we really try to help drive you
from point A to point C.

1. $75k investment in a startup. MUST be serious about wanting to build
something awesome and put in the hard work it takes to do so.

2. $16k funding in a much earlier-stage project (idea stage / MVP stage).
MUST be serious about the commitment it takes to get to launch.

3. OPEN LABS: these are open to the entire community and you have access to
the mentors. 10-minute check-ins each week & peer sessions. We've had TONS of
amazing projects in our Open Labs in the Spring and we hope to see TONS more
for the Summer.

In terms of MISSION and what we're looking for:

We started this new incubator out of Mozilla in order to work with & invest in
developers, startups, and technology enthusiasts who are building things that
will shape the internet and have a positive impact without needing to hyper
focus on the bottom line. Projects, apps, & technologies that will be huge and
a big part of the internet ecosystem while also being in line with Mozilla's
mission of an open web & ethos of "Privacy over Profit".

Comment below if you have any questions about the programs and we'll be happy
to answer them.

------
Wowfunhappy
> Join us on Slack to be a part of the Builders community.

I know it's a bit petty, but I find it quite odd that Mozilla is promoting (or
at least using) Slack, when Slack has a large feature—video conferencing—which
only works in Chrome due to using non-standards-compliant WebRTC. And when
asked by Mozilla, Slack reportedly stated they weren't interested in fixing
it.

~~~
CameronNemo
And Mozilla recently stated they are moving comms to Matrix. Talking about
fixing the internet while using Slack is an oxymoron.

~~~
clarry
I came to say the exact same thing.. using and endorsing Slack makes them a
part of the problem.

------
kps
> _Fix-the-Internet — Join us on Slack_

Doubt.

~~~
dependenttypes
Mozilla has been part of the problem for ages now. Even their language Rust
has been using Discord. Not to mention that their browser has been a pioneer
of telemetry.

~~~
capableweb
So strange they chose Slack and Discord when they had a perfectly fine IRC
server (which is now deprecated). Instead of trying to improve IRC (either the
protocol or any clients), they chose to go with closed-source hosted
platforms. Damn shame.

~~~
steveklabnik
Mozilla chose Matrix, not Slack or Discord.

~~~
catalogia
On the Mozilla-branded website that this discussion is about: _"Join us on
Slack"_

~~~
steveklabnik
I missed that! That is very strange, given their very public move to Matrix.

~~~
freeopinion
From any Matrix client try searching for Rust.

It pops right up for me with #/room/#rust:matrix.org at the top of the list.

Now try searching for Firefox.

There's a Russian Firefox community on Page 2 of the search results. There are
beta and nightly groups on page 3. There's various other noise. But the
maintainers have hidden the real Matrix room for Firefox so well, it doesn't
even seem to be in the search index.

The choice was public. The move has not been public at all.

------
chillydawg
Genuinely, if they want to help fix the internet, they should sack everyone
not doing core browser work and spend every penny they have spare on marketing
to increase their market share. Buses, billboards, TV, radio, Google ads,
whatever. Spend it all. Millions a month, internationally. With balance in the
browser market comes accountability for Google, which is sorely lacking.
Everything else is just mucking about, imo.

~~~
snazz
I find it very hard to believe that Mozilla would have any luck with any
amount of advertising. Firefox offers no features that are useful to non-
technical people that Chrome doesn't have. If anything, it's worse for the
average person, since there are quite a few sites that don't work well in
Firefox.

It's not crazy fast like Chrome was compared to Firefox when Chrome came out.
Privacy is difficult to quantify or feel, so it's not really an effective way
to market to a large number of people.

~~~
asdff
FF does strain your overall system a lot more than Chrome. If they kept track
of this, users would see better battery life. I save hours, personally.

------
GuB-42
The internet is not broken.

It is certainly not perfect (for example, IPv4 has a scaling problem), but it
still works damn well. It can be improved so that it fits needs it didn't have
originally (ex: with IPv6), but that is not fixing, it is adaptation.

None of the points in "fix-the-internet" are things that are broken. They are
new needs. Everything privacy-related, for instance. The base internet
protocols are all clear text; then we started encrypting secrets like
passwords, then personal data, then everything with full end-to-end
encryption. Decentralization went from providing a reliable link between two
computers, to services that are fully accessible even when part of them is
offline, to services that simply can't be shut down.

There is no fix here, it is just building stuff on top of an internet that
works better than I ever hoped for. Good, but calling it "fixing" is a bit
pretentious.

------
whydoyoucare
"Fix-the-internet" and "shape the internet" have different meanings, and you
have used both... so I am not sure what you are offering!

Where are the problems to be fixed? I can see glossy bios of mentors and
their song-and-dance about how great this is, but it seems like a PR gimmick
from Mozilla.

------
komali2
Is there a way to see the people who have won / applied to this grant and are
looking for help? I suspect many of the projects in this space are ones I'd be
interested in working on.

------
jorangreef
Following Conway's Law, the problem with the internet is not so much the
internet but the destruction of IRC and the centralization of email. The
stagnation in open-source email software and security, the embrace-extend-
extinguish of AMP for email, and the decline of quality independent email
providers. Fix email (and revive IRC) and we can start fixing the internet.

~~~
zanny
Probably less than 1% of browser users have any idea what IRC or SMTP are. For
your average user, chat is whatever FAANG puts on their device / screen and
email is whatever address you got from one of the several services you signed
up for when getting Internet access.

The Internet is considered broken by Mozilla because corporate centralization
has shrunk the web experience to at most a dozen websites per user, down from
the near-infinite number of possible domains.

------
mikorym
Why not make webpages that are not accessible by any of x in x = {Chrome,
Firefox, IE, Edge, Opera, ...} and only accessible via simplistic web
interfaces?

(Yes, IRC is like that.)

The cool factor can be as much "everyone is doing it" as "the cool kids are
doing it".

------
zackmorris
A few ideas off the top of my head. Note that I mostly work on the backend, so
my frontend knowledge is at least 5 years out of date. My terms might be
wrong, but I think the concepts align closely with how I projected the web
would work back in 1995 when I first saw it. Pretty much everything has gone
the opposite direction though, and we’ve recreated all of the pitfalls of
desktop programming on the web.

- A one-shot, partially Turing-complete scripting language would be nice. We
should be able to include external scripts, position elements, perform basic
computation, and then have a guarantee that the script has finished and will
no longer run or use resources. Javascript is fine, but the user would have
to give permission for it to run setTimeout(), setInterval() and similar
ongoing operations. Discussion is needed about scripts running post-load, like
for handling clicks or keeping elements positioned (possibly with a
user-settable instruction, memory or time limit).

- I'd vote to largely retire frontend model-view-controller (MVC) patterns. The
browser is already the view, so I find most of the approaches introduced by
frameworks like Angular.js to be at the very least superfluous, but
potentially misguided or even hazardous. See Intercooler.js for declarative
alternatives to what we have now.

- Instead of hand-rolled scripted pages, browsers need a basic component or
window metaphor, in which all GUI elements descend from a base element, can be
nested, and have a full life cycle that can be customized at each step. The
user would have to give permission to override builtin elements like <body>,
<input>, etc, but elements could descend from them with overrides. Discussion
is needed about security. For what it's worth, I just don't think that the
various approaches to building components have panned out. I feel like there
is a common abstraction here behind things like React that could be
generalized.

- On that note, the way we include remote resources on the web is broken. We
need a way to choose whether the HTML for a component is written in-place, or
as a URL (or other address) that includes the remote definition. Just as
inline data: base64 images correspond to our current markup, we could replace
a block of text with an href to allow reuse. Like <div
href="example.com/mydiv.html">, where mydiv.html would hold
<div>something<span>something</span></div> or whatever. See server side
includes (SSI) for an early failed example of this.
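Pending such a feature, the idea can be approximated in today's browsers with a tiny fetch-and-inline step (the data-include attribute is invented here, since href is not valid on a div):

```javascript
// Fetch the remote fragment named by a (hypothetical) data-include
// attribute and inline it, mimicking <div href="...">. fetchText is
// injected so the sketch doesn't depend on a live network.
async function expandInclude(el, fetchText) {
  const url = el.getAttribute("data-include");
  if (url) {
    el.innerHTML = await fetchText(url);
  }
}
```

SSI did essentially this on the server; this sketch moves it to the client, which is roughly the layer where intercooler-style libraries operate.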

- Individual requests need to go away and be replaced by a content-
addressable caching system where the hash is the address, so we can go back to
including individual global sources of truth for things like jQuery or
Bootstrap files, rather than maintaining copies of those includes on our own
server. See subresource integrity (SRI) as a starting point.

- It may be time to scrap build systems. I'd like to see the world take a
step back, and consider architecting a web where more protocols and formats
like UDP or Markdown "just work" and don't need to be transpiled to whatever
brittle/proprietary-infested conventions are available. Think of this as how
compilers can provide generators by altering the code to work in runtimes that
don’t have generators. That concept could be extended to everything, so that
the server/browser is able to provide features that aren’t yet mainstream, by
perhaps running portions of our build pipelines directly under the hood.
Basically, a Turing-complete language should be able to run anything, but
right now wide swaths of computer science are blocked to us, sometimes for
security and other reasons, but usually because a proprietary player like
Microsoft doesn’t want to implement something. We should be able to run
Firefox within Edge,
conceptually. And figure out what fundamental capabilities must be provided to
allow for something so general. It could be as basic as a general-purpose
sandbox where the user decides which system calls and data to allow the
executables running inside to access.

- Let's talk about how to remove undefined and optional rules from the major
RFCs and specifications. As an analogy, much of the optional functionality in
OpenGL caused the renderer to fall off the fast path back to the software
renderer. But since performance is a feature in OpenGL, dancing around
optionals caused an explosion in implementation complexity. Today many
thousands of developers reinvent the wheel countless times because someone
decided way back when to limit video card programming to SIMD (thus barring
conceptually simple techniques like ray tracing). A better way would be to
have general-purpose computing so we could go back to some kind of cluster
programming in languages like Erlang and Go, but I digress. A similar thing
needs to happen with the web, where the underlying implementation would be
general-purpose and open source (to virtually guarantee that all options are
implemented); then we could pick and choose which ones to disable due to
security concerns or whatnot.

- The semantic web is unlikely to ever happen, but resource description
framework (RDF) triples and concepts like hyperlinks pointing back to the
caller could/should happen first. This needs to be done by the server or
browser in an automated way 100% if possible. Content-addressable links might
partially fix this. Basically the gap between raw HTML and the metadata that
search engines and advertisers store needs to be formalized. Think of this as
taking HEAD requests, REST API descriptions in things like Swagger, and any
other information gleaned by spiders or AIs, and making that readily
accessible for any page. So when we go on something like eBay, we could look
at the page outline, detect any tabular data, contact information, form
actions, etc., and be able to manipulate that page metadata the way we
currently examine things like an element's styling.

- A prerequisite to the above is to be able to download portions of websites
or the entire web as a directed graph, and to do that, we need to be able to
query the web in something like SQL, only without a search engine. So to use
the eBay example, if I had this then I could find my dream car by searching
and cross-referencing against things like performance, fuel economy, warranty,
and so on, the way we might manipulate such data in Excel or a SQL database.
Another way to say this is that most sites would lose their search field, and
the browser would see the site as a gist or index describing the graph, with
common actions suggested, provided by the community of previous searchers. See
Neo4j for how something like this might look in practice.
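To make the eBay example concrete: if listing pages exposed their rows as data, the "dream car" search would be an ordinary filter-and-sort rather than a site-specific search box. The record fields here are invented for illustration:

```javascript
// Query hypothetical listing rows the way one would in SQL:
// WHERE price <= maxPrice AND mpg >= minMpg ORDER BY price.
function queryListings(listings, { maxPrice, minMpg }) {
  return listings
    .filter((car) => car.price <= maxPrice && car.mpg >= minMpg)
    .sort((a, b) => a.price - b.price);
}
```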

- It’s time to talk about retiring the app as a separate concept from a
website. No single metric like performance, device accessibility or ease of
use exists as a hard limitation, or justifies developers having to maintain 2+
sets of code and get mired in cross-platform development. A path forward here
is to provide funding for game developers and other performance-oriented
developers to revamp browser rendering engines to run at many thousands of
frames per second instead of the glacial pace they crawl now. For example, the
DOM diffing of React should have already existed as a feature of early HTML
renderers. We need to be looking at things like forking and copy-on-write to
enhance the performance of web pages that share the vast majority of their
content. There are also streaming techniques that would let browsers access
media as memory-mapped data rather than loading files into memory. Browsers
could be as powerful as they are now, but run in something like 16 MB of ram
rather than GBs, and always react instantly rather than having multi-second
delays when many tabs are open.

Well that’s 10. I know there are many, many more things I would like to see
but maybe this isn’t the best forum for this. I’m temporarily retired from
programming after going through the worst burnout of my life last year. All I
see around me are these disjunctions between what the world is and what it
could be, so it feels like in order to work anymore, I would have to go back
to basics and rewrite most stuff from the ground up. That’s with all of tech,
not any one area. So I just concentrate on the things I can control, like
making rent, getting my car fixed, staying in touch with relatives, etc. Tech
is just too much of a time suck for me to justify anymore, when all of the
capital has been vacuumed up by a handful of large players that don’t care
about this stuff, to the point where winning the internet lottery with a
startup is about as likely as being in the NBA.

~~~
recursivedoubts
NB: intercooler.js 2.0 (renamed htmx) is now available:

[https://htmx.org](https://htmx.org)

------
Nextgrid
You want to fix the Internet?

Easy. Start by releasing a browser with built-in uBlock Origin and a bunch of
other extensions that make the internet better (strip utm_* crap from URLs,
etc).

You know, actually _do_ something instead of merely pretending like they
currently do. Their current "tracking protection" is an absolute fraud when
you look at the list of domains it whitelists explicitly, and both Google and
Facebook are in there. The opt-out telemetry (illegal under the GDPR; it
should be opt-in instead), as well as the telemetry for those who opt out of
telemetry (yeah, you heard that right), is just the icing on the cake.

~~~
dralley
>(illegal under the GDPR, should be opt-in instead)

Anonymized telemetry that cannot be correlated to an individual user, and
which contains no sensitive information, is perfectly legal under the GDPR.

Mozilla telemetry isn't tracking your movements across the web, it's measuring
things like "how many crashes are being experienced per 1000 hours on Nvidia
graphics cards" and "what percentage of users have addons installed".
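To illustrate the difference, that style of measurement reduces many sessions to a single fleet-wide rate, so no per-user row needs to leave the aggregation step. The fields below are invented for illustration, not Mozilla's actual schema:

```javascript
// Aggregate-only telemetry sketch: many sessions in, one number out.
function crashesPer1000Hours(sessions) {
  const crashes = sessions.reduce((sum, s) => sum + s.crashes, 0);
  const hours = sessions.reduce((sum, s) => sum + s.hours, 0);
  return hours === 0 ? 0 : (crashes / hours) * 1000;
}
```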

~~~
Nextgrid
Anonymized datasets have been deanonymized in the past, so there's no telling
how anonymous the telemetry actually is, and when in doubt I'd err on the side
of caution.

The IP address is also transmitted as part of the telemetry request, and
while they can claim they don't log it, there is nothing technically
preventing them (or an attacker) from logging it, which could be a privacy
violation, especially if you correlate it with the telemetry data.

------
101404
Mozilla? I guess the most important step for them to fix the internet is to
fire people who dared to have a "wrong opinion" ten years ago.

An organization that uses that kind of violence against political opponents
will never "fix" the internet.

------
admin_account
This probably won't be popular but, does anyone take Mozilla seriously? For me
they've always seemed well intentioned, but completely oblivious to reality.
For example, does a browser with 5% market share really need its own engine?
Can a browser with 5% market share even change anything if it wanted to?

Personally I don't think so, which is also why I don't take them seriously. They seem
to be more concerned with waving their "For The People" flag, than actually
trying to change anything. If they were serious about "fixing the internet"
they'd swallow their pride, transition FF to Chromium, & essentially become
something akin to an activist shareholder within Chromium.

~~~
CameronNemo
They did not always have 5% market share. iOS and Android pushed WebKit/Blink
hard, and Mozilla did not quickly respond; the ecosystem has suffered as a
result.

~~~
admin_account
That's all true, but overall they've been steadily falling since 2010.

However I disagree with your view of the ecosystem. The web needs stability
and consistency more than anything else. FF switching to Chromium would help
with that. So many people have this knee-jerk reaction of Chromium = Chrome =
Google having total control, but don't understand that the only reason Google
has had this much control over Chromium is that no other major vendor used it.
They were the biggest kid on the street. But now that MS moved in a few doors
down, that's no longer true. Google has to acknowledge MS in a way they never
did with Opera, Vivaldi, Brave, etc.. And the same thing would happen if FF
switched to Chromium.

Idk about you, but having 3 of the 4 biggest vendors all being forced to
collaborate and implement solutions supported by at least 1 of the others, is
1000000x better than having each do their own thing. You effectively go from a
monarchy to some form of democracy.

------
hu3
Mozilla is largely a for-profit corporation.

Their end goal is profit. Same for their execs: maximize their own profit
individually.

Once this is understood, it's easier to digest PR bullshit.

~~~
crazygringo
Can you expand? I did a little research and found on Wikipedia:

> _The Mozilla Foundation is an American not-for-profit organization that
> exists to support and collectively lead the open source Mozilla project...
> Unlike the Mozilla Foundation, the Mozilla Corporation is a tax-paying
> entity, which gives it much greater freedom in the revenue and business
> activities it can pursue._ [1]

> _The Mozilla Corporation is a wholly owned subsidiary of the Mozilla
> Foundation... Any profits made by the Mozilla Corporation will be invested
> back into the Mozilla project. There will be no shareholders, no stock
> options will be issued and no dividends will be paid. The Mozilla
> Corporation will not be floating on the stock market and it will be
> impossible for any company to take over or buy a stake in the subsidiary._
> [2]

So seems like the for-profit part is just to give it more flexibility in
achieving the non-profit aims?

I'd love to know why you think "their end goal is profit". There's nothing in
the Wikipedia pages about any substantiated criticism there. So just curious
if there are real reasons for thinking this, or not.

[1]
[https://en.wikipedia.org/wiki/Mozilla_Foundation](https://en.wikipedia.org/wiki/Mozilla_Foundation)

[2]
[https://en.wikipedia.org/wiki/Mozilla_Corporation](https://en.wikipedia.org/wiki/Mozilla_Corporation)

~~~
hu3
You're right. I meant corporation. Fixed.

------
camdenlock
“Female Founders Pitch Practice”

When is this nonsense going to end? How about just “Founders Pitch Practice”?
If you want to fix something, Mozilla, maybe start by fixing your bigotry.

~~~
orthecreedence
> When is this nonsense going to end?

A critical mass of people treating women (as a group) with the same respect
and dignity as men are treated (as a group) would be a good start, no? Or are
we post-sexism now?

------
gambler
How about investing into their own browser in ways that actually improve the
internet? Like adding IPFS support? Or PGP encryption?

~~~
zanny
IPFS isn't http. Web browsers as software are meant to access and render html
obtained via http. Content addressing on IPFS is entirely different from how
dns resolution works, and as such you shouldn't just glue an entirely
different name resolution system onto existing software. You want an ipfs
browser that is UX-optimized for actually browsing ipfs content.

Back in the day browsers supported alternative protocols like ftp, but in
practice those were largely abandoned, leaving the browser to function solely
as an http client.

And what in the world would adding PGP to Firefox do? It's not an email
client. It's not an IM client. A website that wants to can already use PGP via
wasm. Firefox is not an SMTP client, it's an HTTP one.

~~~
zoobab
"Web browsers as software are meant to access and render html obtained via
http"

HTTP is a slave protocol, where you need a master.

It's time to retire it, and design the web with an uncensorable protocol that
does not lose memory.

The slave design of a web server needs to go.

~~~
dredmorbius
Can you unpack this? There's much that may or may not be in what you said.

~~~
6510
The goal should be to render HTML documents, not control where they come from.
The server/client model or the master/slave model is only interesting if the
clients are mediocre computers with limited bandwidth.

Think of it like managing a bunch of children. It's a great formula until they
are old and mature enough. If you continue to control them as if they were
children, they effectively become your slaves.

I have tons of bandwidth and disk space. This is all that is required. If you
unreasonably insist I should also be the one to serve every request, it's game
over for me. Not by technical limitation, but by force.

Oh, I can rent my own web space? I don't have money for that, and it has been
tried before. Those sites are all gone. Everything is gone. I have tens of
thousands of blog postings talking about stuff that doesn't exist anymore.

------
thepiratesailor
This is total BS. If anything needs a fix, it is Mozilla. They have been
taking hundreds of millions of dollars from the same company that tried to
kill their flagship product. Nothing is wrong with the Internet, but a lot is
wrong with the Web. Mozilla cannot fix the Web until they start attacking
FAANG, which I very much doubt they will.

~~~
fenwick67
> Mozilla cannot fix Web until they start attacking FAANG

They already attack Facebook (they put Facebook in Facebook purgatory by
default, don't they?), and I'm not sure what they need to go after Apple,
Amazon or Netflix over; I think you really just mean Google here.

~~~
Nextgrid
They have a list of domains to exclude from their so-called tracking
"protection" and that list contains Google and Facebook. I wouldn't call that
attacking by any means.

The Facebook Container is absolutely pointless when you account for
fingerprinting and IP address tracking.

~~~
lol768
>The Facebook Container is absolutely pointless when you account for
fingerprinting and IP address tracking.

The former being something they actively support defending against with
privacy.resistFingerprinting?

