MDN Web Docs evolves: Lowdown on the upcoming new platform (hacks.mozilla.org)
335 points by headalgorithm 27 days ago | 196 comments

So, they say that the system is evolving, but from these parts of the article I would argue that the ease of contributing (which is the important part) is taking a massive hit:

> "you will no longer be able to click Edit on a page, make and save a change, and have it show up nearly immediately on the page. You’ll also no longer be able to do your edits in a WYSIWYG editor."

> "you won’t have a WYSIWYG to instantly see what the page looks like as you add your content, and in addition you’ll be editing raw HTML"

They had a user-friendly system previously... Now users have to learn GitHub and author raw HTML! - I would point out that not all documentation writers are developers!

I _wish_ MDN had a user-friendly contribution process. My experience has been _far_ from that. I very much welcome these changes.

In late 2017, I spent upwards of 80 hours standardizing 85 MDN docs pages so they used the same best-practices layout as other, more popular docs pages on MDN.[1]

Of the 85 pages I updated to have better formatting, more than 18 were reverted by random contributors within two days, with no reason or explanation in the changelog. It was _impossible_ for me to get in contact with the people reverting the changes, because they are identified only by a username.

I tried contacting MDN maintainers on IRC, searched for those users on IRC, and came up empty-handed. I was left to make a plea on the mailing list asking for those change authors to get in contact with me, only to receive no response.

I came away from the experience highly encouraged not to contribute to the MDN.

[1] https://github.com/hexops/vecty/issues/136

Does MDN/Kuma not support talk pages like MediaWiki? IRC and other live chat options are good for building a community, but for discussing content changes (and preserving that discussion) keeping that discussion on the wiki is really useful. It's a shame if MDN never realised that.

> They had a user-friendly system previously... Now users have to learn GitHub and author raw HTML! - I would point out that not all documentation writers are developers!

So you'll need to know HTML and how to create a pull request on GitHub to be able to contribute to documentation about web development... I feel like if you're writing about web development topics, that doesn't seem like a high bar to pass.

The friction is higher. Now you have to deal with github and accounts and permissions and so on. Before it was a simple web form.

Well, now balance that against the rest of the upsides they present, like the new UX not letting someone merely revert the changes that you spent the weekend writing.

Let's stop acting like just because we can think of one downside, we can ignore the rest of the trade-offs. Btw, this is a trade-off they already mentioned in TFA, rehashed by an HN comment for some reason.

Because HN commenters always feel like they know better and like to talk down on any changes even when they have no context for why the decision was made.

Or, you know, because it’s a legitimate concern that readers on HN value higher than the MDN maintainers.

Having used this site for a long, long time: no, almost never.

The friction is lower because I already have a GitHub account but I don’t have an MDN account.

Is higher friction a bad thing though? This is web standards documentation, not Wikipedia; once a document is finished, it'll be a lot more static. I can imagine they've had edit wars (another commenter mentioned it), abuse, etc, whereas with a GH workflow there's a better review process involved.

It's much more than that. Have you seen how many ways there are to write HTML? So you have to learn their flavor on top of providing the actual information. That's a good chunk of time that could be used elsewhere. People don't contribute to projects that have high barriers to entry and artificially difficult participation requirements.

> They had a user-friendly system previously

TFA goes into why this wasn't user friendly. You haven't responded to their own justifications. You've just reposted a trade-off that the TFA acknowledges and justifies.

> Now users have to learn GitHub and author raw HTML!

Ignoring the fact that Microsoft's GitHub has recently had DMCA insanity, a proprietary frontend, and recently enforced Web Components (effectively removing UXP devs, who were forced to self-host Gitea just to continue working)... this seems like Mozilla is trying to outsource as much as possible from their own hosting. They also shut down Firefox Send.

Perhaps Mozilla is having issues / unable to afford their own hosting anymore?

> They also shut down Firefox Send.

IIRC it was being abused and there was no way to combat that

Although I have a hard time believing they completely overlooked it when they were designing the service.

If I think of serving user-generated/uploaded content, malware and copyright violations come directly to my mind.

Not trying to be inflammatory here. What does it matter if Microsoft has a proprietary front end for Github when all the docs are in markdown?

One example would be that it is not possible to even make a PR on GitHub, as half the GUI no longer works in non-WebComponents browsers. If it were open source, we could perhaps see what WebComponents feature they think is missing and implement it in the browser engine; as it is, they have no desire to collaborate, and the black box makes it all the more difficult to debug.

Github's response: "... further degradation is a likelihood. I appreciate that this is disappointing and frustrating for you..." - https://forum.palemoon.org/viewtopic.php?p=202146#p202146

GitHub's CLI lets you make a pull request and edit it in $EDITOR, works great.


If you've chosen to use a web browser that's forked from an old version of Firefox and isn't aiming for compatibility with modern Web Platform Tests, you're going to encounter a lot of obstacles. The fact that you can't see the server-side code generating the failing client-side code, which you can see, seems like the least of them.

Yeah, pointing to what is essentially an issue only in a mostly unmaintained fork of Firefox is really disingenuous and sounds like someone complaining about something not working on IE6.

Palemoon was last updated ~2,280 times more recently than IE6, and UXP is a Free and open platform used by more than one browser.

What does creating a PR even need in terms of functionality? It's effectively nothing more than a big HTML form with some inputs, something that's been around and working perfectly fine in browsers for decades.

The other comment here about how it's like complaining it does not work in IE6 is really pertinent: Yes it damn well should, because I should not need the latest technology just to do something that would've been perfectly possible with the technology of TWO DECADES AGO. It should be entirely possible to use GitHub with a text-based browser because none of its interactions require anything more than that.

It's sad that I could probably write a more accessible interface in less time and resources, and I haven't even done much in the way of web development, yet dedicated web-developers with uninhibited trendchasing mentality will fuck it up so badly with this constant need for useless breaking changes.

Seriously, fuck this "modern" bullshit.




The PR form has a bunch of quality-of-life features that would be impossible without JavaScript. The reviewers/assignees/labels pickers are JS-based and fetch data on the fly, and the Markdown editor with previews also couldn’t exist without JS. Many of those components appear on other pages, and it’s a no-brainer that GitHub wants to reuse them between pages. GitHub picked a solution for this that is supported (at least partially) by browsers with a total 94% market share [1].

Would it be possible to make a form with all those features and that works (at least with 95% of the features) with your legacy (dead) browser of choice, be it IE6 or Pale Moon? Sure, it would. But to do that, GitHub would need to spend a lot of resources, even though most GitHub users do not need those dead browsers. Those browsers don’t support many modern APIs that web apps need, or that make developers’ lives easier.

Speaking of dead software, Pale Moon dropped support for Windows XP in 2016 [2]. What happened? Why does software drop support for old runtime environments? Because nobody has the time and resources to test things on XP, and to write polyfills for features that XP does not support, or to skip some features because they don’t want XP users to get a worse browser, and because new features can make your software better for the user or more secure.

The world has moved on. Install a modern version of Firefox (or Chrome/ium if you must) and stop complaining.

[1]: https://caniuse.com/?search=components

[2]: https://en.wikipedia.org/wiki/Pale_Moon_(web_browser)#Releas...

It seems people just don't like to hear the truth.

> One example would be that it is not possible to even make a PR on GitHub, as half the GUI no longer works in non-WebComponents browsers

Not that I think this is the right move by Github, but you can create PRs from external tools. I do that from Magit+Forge all the time.

What is UXP, and how is it affected by GitHub's decision to use WebComponents in their frontend?

Well, given they laid off a huge number of people while continuing to pay their executives huge salaries, it seems their priority is not product development at all.

Sadly, I think we are seeing Mozilla go the way of Netscape, and I would not be surprised if they sell off all the IP to someone soon.

Or just make Firefox yet another Chromium skin.

The success of Medium has proven that lowering the barrier to write content is extremely valuable. If you're asking people to "work for free" then it should be as easy as possible and not feel like an errand. When you're inspired to write something, you just want to write it and not have the computer get in your way, or else you'll understandably give up. Wikipedia also proved this a couple decades ago.

Also rolling your own wiki is beyond stupid in this day and age. Just import all the MDN articles to MediaWiki. Making a script for this is a one-person weekend project. As I understand it though, Mozilla owns the copyright to all the content and they might shut you down for doing so.

While I acknowledge you have a point, the parameters here are different; on Medium, everyone has their own space, and low-quality work is harmless. On MDN though (and to a degree Wikipedia, although it's a lot broader), you want authoritative, high quality documentation. Quality over quantity.

MDN won't become better just because there are more contributors and more activity (churn).

Should probably normalize to a superset of GitHub-Flavored Markdown (GFM)... most contributions can be done on GitHub via the integrated editor, btw... .md files would allow for a reasonable preview in the editor as well.

All of the devs on my team would rather maintain docs in markdown documents with pull requests versus in wiki form or in google docs. Yes the bar for contribution is slightly higher. But barely.

Did they confirm you have to write raw HTML? Markdown seems vastly more likely to me.

Can we have both PR and WYSIWYG? Login with github, Edit page, click button to submit PR with your changes.

In theory, yes; GitHub is working on / has deployed a web-based version of VS Code, and VS Code has a Markdown preview (and I'm sure there may be a WYSIWYG Markdown editor as well).

It should be doable to create a WYSIWYG editor just for the MDN pages as well, or any static site generator for that matter. Question is whether they want to invest in that; will that actually improve things, given how the MDN pages are all fairly uniform in look, feel & layout.

That's yet another option, authoring Markdown with preview.

I've thought of the GitHub REST API for pull requests [1], though this requires authorizing a web application.

Another option: is there any service that allows posting changes in a pull request body? Maybe extend GitLab? The workflow would be: edit page, preview, copy to clipboard, open a new pull request, paste.

I am not a fan of Markdown anymore. WYSIWYG looks like a much better approach, and it's already there. HTML-to-Markdown tools are not great.

[1] https://docs.github.com/en/free-pro-team@latest/rest/referen...
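For illustration, the REST route in [1] takes a small JSON body. Here's a hypothetical sketch of just that payload (the repo, branch names, and title are invented, not real MDN values); actually sending it needs an authorization token, which is exactly the friction being discussed:

```python
import json

def build_pr_payload(title: str, head: str, base: str, body: str) -> str:
    """Serialize the fields the pull-request creation endpoint expects."""
    return json.dumps({
        "title": title,
        "head": head,   # branch containing your edits, e.g. "fork:fix-typos"
        "base": base,   # branch you want the edits merged into
        "body": body,   # free-text description shown in the PR
    })

# Invented example values, for illustration only.
payload = build_pr_payload(
    "Clarify Array.prototype.map examples",
    "contributor:clarify-map-docs",
    "main",
    "Edited in an external WYSIWYG tool and pasted here.",
)
```

A wrapper service could generate this body from a WYSIWYG diff, but the user would still have to grant it repo access.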

Shouldn't people contributing to MDN be fairly comfortable with writing HTML? You could make the case for github and git but those aren't exactly uncommon for developers either.

This is a strange thing to gripe about.

It is going to be the 5th iteration of CMS/wiki software. Hopefully this is the right one.

1. Netscape DevEdge

2. MediaWiki

3. Deki Wiki (renamed to MindTouch, closed source since 2013)

4. Kuma

5. Yari

There's a nice history here: https://developer.mozilla.org/en-US/docs/MDN_at_ten/History_...

And the problem is that each time the software changes, attributions are lost. I used to be an active contributor to MDN, with thousands of contributions. Countless hours of volunteer work.

When the platform changed and my attributions were lost, I stopped contributing. I had no street cred anymore. I was angry.

The switch to GitHub means the same problem once again. Nice way to alienate your community, MDN. Good luck with that.

It should be possible to migrate to GitHub without losing attribution. I've seen scripts in the past that build up a Git repository from scratch, back-dating contributions and crediting them to the author.

No idea if MDN are planning to do that though.
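For what it's worth, the back-dating and re-crediting part is plumbing that git exposes directly through environment variables. A minimal sketch of one step of such an importer (the author name, email, date, and file are all invented):

```python
import os
import subprocess
import tempfile

# Create a throwaway repo to demonstrate one imported revision.
repo = tempfile.mkdtemp()
subprocess.run(["git", "init", "-q", repo], check=True)

with open(os.path.join(repo, "page.html"), "w") as f:
    f.write("<h1>Imported page</h1>\n")
subprocess.run(["git", "-C", repo, "add", "page.html"], check=True)

# Credit the original wiki author and back-date the commit to the
# original revision's timestamp; the committer stays the import bot.
env = dict(
    os.environ,
    GIT_AUTHOR_NAME="Original Author",      # invented contributor
    GIT_AUTHOR_EMAIL="author@example.com",
    GIT_AUTHOR_DATE="2008-05-01T12:00:00",  # original revision date
    GIT_COMMITTER_NAME="import-bot",
    GIT_COMMITTER_EMAIL="bot@example.com",
)
subprocess.run(
    ["git", "-C", repo, "commit", "-q",
     "-m", "Import page.html (wiki revision 1)"],
    check=True, env=env,
)

log = subprocess.run(
    ["git", "-C", repo, "log", "--format=%an %ad", "--date=short"],
    check=True, capture_output=True, text=True,
).stdout
```

The hard part isn't git; it's mapping wiki usernames to names/emails, which is the mapping problem raised below.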

They never did that in the past. Why start now? Entire articles I wrote from scratch... the attributed author is not me, it’s some bot.

Impossible unless you have a thorough map from MDN accounts to Github accounts in advance (because once published the repo can't really change much, especially in this case).

You don't need a map from MDN accounts to GitHub accounts. You just need email addresses.

You can use email addresses, but you can also append a profile page URL to the commit message.

Since you probably spent hundreds of hours on this, and they are apparently in violation of the license under which you contributed this work, maybe get a lawyer and send them a letter asking for your attribution to be reinstated or for the removal of all the misattributed material?

I mean, Mozilla earns hundreds of millions a year and pays its CEO several million a year; they could well have afforded not to violate your rights by spending a few dev days on writing a proper importer script.

But imagine how much their CV will improve!

Oh no, my internet points!

Volunteer work like that that results in visible, attributable output can become part of your resume/portfolio. You can point people to it and say, "Here are a set of articles representative of my work." Much like open source contributions (if you're not the primary maintainer of anything open source).

Those are unpaid volunteers. If Mozilla robs them of attribution for their work (and yes, this is work), then what's left?

Especially now that they fired every single paid worker.

> Those are unpaid volunteers. If Mozilla robs them of attribution for their work (and yes, this is work), then what's left?

Warm fuzzy feelings? The knowledge that people read your article thousands of times every day?

Honestly, while I agree that it’s a bit sad to lose the attribution, I find it hard to be sympathetic with people who get salty over losing internet points.

That’s so disingenuous.

It’s a knowledge base. It’s well designed documentation. Something that many folks in IT lack at times. Showing people you can properly document knowledge in a collaborative way helps you get hired.

Mozilla never promised them any pay in the first place.

It did promise to attribute the contributions, since they are licensed under CC BY-SA.

It’s not about payment; it’s about the legal open-source practices that help foster and protect the community.

The ability to write good tech documentation is a valuable skill. I understand the GP was upset when they lost an easy way to signal they have it to potential employers.

It's mean to make fun of people for caring about things. If people are proud of work they've done, why shouldn't they be upset when they get no credit?

I assume they could've put it on their resume and gotten something from that. Now it's just baseless claims.

If your potential employer is so skeptical of you that they don’t trust you when you tell them you wrote certain articles on MDN, they’re not likely to hire you in the first place.

If you reference work you do in your own time, I will go look at it. It’s unfair that MDN doesn’t easily make note that their contributions may have been lost to the ether.

I don't mean for this to come off rude, but at some point you have to learn that spending time in life on things you don't control can have extremely negative costs.

How can you be angry? MDN isn't yours. Do they owe you something? It's volunteer work. You're not owed anything. You should consider attribution a privilege.

I don't mean for this to come off rude, but how can you be so cold? How can you lack so much empathy? At some point you have to learn that things matter to people.

Volunteer projects rely on an unspoken contract that symbolic recognition, awarded fairly, is a real motivation. And if a project wants to succeed, it needs to take that seriously.

Does the project owe its contributors anything? Legally, no. But if it wants to survive and keep contributors, then it had better damn well work hard to recognize them. The project isn't owed anything. The project should consider its free, voluntary member contributions a privilege.

Understand now?

> Does the project owe its contributors anything? Legally, no.

Uh, what? Legally, yes.

Why are so many people (see sibling below: "Perhaps legally, in this case, they're owed nothing") just rolling with the suggestion that this is a grey moral issue and not a legal one? It's more than a moral issue. This is Creative Commons content. Mozilla doesn't acquire ownership of project contributors' work...

Once again, we have another Mozilla-related thread where we find two "sides" of an issue, with both offering takes that reveal that neither has any idea what they're talking about. What is it about Mozilla that attracts this sort of thing?

I think it's completely clear from context that the references are to the project as a human organization, not its content. Obviously recognition is given by organizers... not by Creative Commons-licensed content, ha! Unless text has become sentient now. :)

But I don't know why you're then choosing to baselessly insult people who discuss things about Mozilla...? Call me crazy, but I don't think that's a helpful or constructive attitude here...

> baselessly insult people who discuss things about Mozilla...?

It's neither baseless nor is it an attitude that is not "helpful"—I laid out exactly what the basis for the comment is, which comes almost directly from Frank Hecker's post a couple months ago after the most recent layoffs:

> Incidentally, doing a Twitter search on ”Mozilla” gives a good feel for public perception of Mozilla among technologists, but unfortunately most of the people commenting have no real idea what they’re talking about.


On the other hand posting false or simply misleading information, whether intentional or unintentional, is unhelpful. And if it's unintentional, there are a few different ways to respond when someone points it out. One way is to feel insulted and post an emotional response. Another is something like, "Oops, my mistake. I wasn't really thinking about that when I wrote what I did, but on second thought: good point!"

I don't really know what your first two sentences about human organizations and context are supposed to mean. Mozilla does have a legal obligation to abide by the license terms—in contrast to what you wrote—and that's pretty much that.

> I don't really know what your first two sentences about human organizations and context are supposed to mean.

They mean that you have completely misinterpreted the conversation, and apparently continue to do so.

The discussion started about moral responsibility. You're the only one confusing it with legal responsibility. And that's pretty much that.

You're right, it did start that way. And then the matter of legal responsibility was brought up; one person even posed the question, "Does the project owe its contributors anything?", and gave a direct and unequivocal response: "Legally, no." (Side note: that person was you.) And to say that is to say something that is simply not true—as untrue as any statement now about my being confused about whether legal responsibility was being discussed.

You can't rewrite history. (And we shouldn't have to replay all this. It's still all there to see...)

I don't know why you're so willfully misreading this.

The project doesn't owe the contributors anything legally in terms of recognition (or payment, etc.) which was the subject being discussed. That's quite obvious from the context.

Nobody ever brought up legal ownership of content at all -- that's 100% your misinterpretation.

It's kind of amazing how you misunderstand comments and then go on to insult others for supposedly misunderstanding comments... and then proceed to do it all over a second time! Amusingly ironic. Better luck in the future, my friend... ;)

Sorry, that's not going to work. The comment you responded to outright said these things:

- "MDN isn't yours."

- "Do they owe you something?"

- "You're not owed anything."

... and your response? "Legally, no"—but of course the problem with that response, again, is that legally, yes; they do owe something.

So try rewriting the context and all the rhetorical gerrymandering you want, but it doesn't change the fact that (a) there was a discussion in terms of legal responsibilities and (b) in that discussion about those responsibilities, your comments were incorrect. Being wrong because of a slip-up is fine—and it wasn't even wholly your slip-up; you were yes-anding someone else's comment. But this scrambling now to double down after it's pointed out, and the subsequent projection—particularly in your last paragraph here—is more than a little annoying to encounter.

> The project doesn't owe the contributors anything legally in terms of recognition

... except they do, for the reasons already stated. Maybe there's some attempt at sleight of hand in your choice of the word "recognition" here (i.e., as distinct from "attribution", but even then, it's not clear whether any argument there, if there is one, would even hold up)—but it's not really important. Because "attribution" is the word that was used, attribution is what the BY part of CC-BY-SA stands for, and attribution is what's required by that license—yes, legally.

It's pretty bewildering that you think you have an argument here.

It doesn’t matter. Licenses of different kinds are violated all the time, and it goes unenforced due to cost, time, and energy.

> Why are so many people (see sibling below: "Perhaps legally, in this case, they're owed nothing") just rolling with the suggestion that this is a grey moral issue and not a legal one?

I mean, I dedicated an entire paragraph above that comment to pointing out possible legal problems, and used "perhaps" as an explicit indicator of uncertainty and doubt. The moral issue isn't particularly grey. The legal one...

There are various attributions for "Mozilla Contributors". Does that technically suffice, either under the license terms or in jurisdictions which recognize authors' rights? (Which jurisdiction(s) apply - the hosting provider's, Mozilla's headquarters', or perhaps the original authors'?) Do they perhaps more explicitly attribute the original authors elsewhere? Stripping names from an explicit copyright header would almost certainly be a license violation, but does flattening VCS history in this manner also count as one? Did CC licensing terms apply at the time of previous CMS conversions? Were there perhaps contributor agreements and/or clickwrap licensing agreements previously which would've made this legal? Are there more buried attributions which might technically meet the burden of attribution while still being done poorly enough to feel slighted? Perhaps in some jurisdictions but not in others?

Can you answer all that with enough certainty as to assert that you "have any idea what you're talking about"?

> What is it about Mozilla that attracts this sort of thing?

Both sides offering takes that reveal that neither has any idea what they're talking about is far from unique to threads involving Mozilla. I dare say it's not even unique to the internet.

> The moral issue isn't particularly grey. The legal one...

Also not grey—same as before.

> Can you answer all that with enough certainty as to assert that you "have any idea what you're talking about"?

Hey there. I'm a former Mozillian. I was a heavy contributor to Devmo in its early days (2006–2008). A bunch of that content is mine. It's not Mozilla's, and I know on what terms I made it available. So to answer your question quoted above (a) yes, in fact, I do know what I'm talking about, and (b) I don't have to be able to give an answer for every slot in your contrived matrix; it suffices if I'm able to say, "hey, you can't do that with the pieces that belong to me". And that's something that I can say—with certainty.

Because that’s not how the world works. This is evident by living in it. Otherwise, this wouldn’t have happened.

What happens in the real world is that you abide by contractual agreements, and if a party breaks the agreement, you sue them for breach of contract. This also requires that you be able to sue them for breach of contract. Otherwise, you can kick dirt.

Understand now?

This is how Wikipedia attracts and retains volunteers.

> MDN isn't yours. Do they owe you something?

Yeah, they do owe something: they owe the attribution which is a condition of the Creative Commons license, which is the license under which the content is available to Mozilla (and everyone else).

> You should consider attribution a privilege.

Absolutely not. That's not how copyright works. What a deluded, entitled take.

It’s absolutely a right, and recognised in the Berne convention, although unfortunately not well protected in US law compared to other states.

That's an absurd take on it... MDN owes this guy nothing, and this guy owes MDN nothing. But attribution, and the volunteer labor, are not part of any financial transaction, so the question of who formally owes whom what is entirely irrelevant.

But from the social contract's perspective, attribution is a perfectly fair expectation for this kind of work -- especially because attribution was given at the time, and even going forward. It may not have been codified as a public contract, but the relationship was, and is, quite clear.

And the breaking of that relationship for fairly arbitrary reasons (it couldn't be imported/converted?) is a pretty damn good reason to be annoyed -- hell, what other reason would you consider valid? If it was financial rather than social, you don't get angry, you get even (lawsuit).

Now I'm getting annoyed as I write this; it's not a fucking privilege to have been given the honor of doing volunteer labor, and it's not a fucking privilege to be given attribution for it -- it's like the most basic free compensation you can give for free work. It's a norm across the board.

Attribution is important enough that several software licenses amount to little more than "attribute the original authors and don't sue us." Attribution is important enough that a right of attribution is incorporated into some countries copyright law as "authors rights".

Perhaps legally, in this case, they're owed nothing - but it's not unreasonable to consider stripping said attributions a dick move, even when legal. Maybe unsurprising, but a dick move nonetheless.

In fact, it's part of the license they tell you that the contributions are made under (CC BY-SA), hence it's not even legal.

In some countries you can't even sign away the right to attribution (IANAL), so depending on where the contributor resides, they might legally owe them that.

This seems like an odd argument. One can be angry about... well, anything. Not just things owed or not owed.

GP is venting some anger here at a relevant moment and the perspective is obviously valuable.

> Do they owe you something

Mozilla is allegedly an open source project, so yes they owe you things based on free software principles, such as attribution and right to fork.

The open-source ecosystem is built on a very thin veneer of perceived fairness: labor is free in the hope that it will bring you reputation (which you can leverage into actual remuneration). If you remove one side of the equation (no matter how hopeful or hypothetical that might have been in practice), the setup appears entirely exploitative, and people get angry.

If he was a contributor, MDN is partly his, in the sense that it can even be anyone's.

Wow, DevEdge. Now that takes me back. I hadn't thought about DevEdge in ages.

> Better community building: At the moment, MDN content edits are published instantly, and then reverted if they are not suitable. This is really bad for community relations. With a PR model, we can review edits and provide feedback, actually having conversations with contributors, building relationships with them, and helping them learn.

This is a longstanding debate in wiki/collaboratively written content, but it is interesting to note that Wikipedia and the other WMF projects have been pretty successful with the "anyone can edit; revert or fix after" model. The review-first model is used on specific pages, where it is called pending changes protection, although this is not really like a PR in that there is no comment functionality, and the standard for acceptance is lower--I don't remember the exact wording, but it is closer to "this is not vandalism/not obviously wrong" rather than "I personally endorse this", much less "this is the community consensus." These edits have no special endorsement once approved and can still be changed by other editors freely. One benefit of the PR model is finality: once an issue/PR is decided, no one else is supposed to open another one with virtually the same change. Wikipedia sort of has that via article talk pages, but there is of course no centralized "maintainer" to adjudicate each and every controversy. Also, many editors just don't check or even understand them.

I think the wiki approach works best for very diverse references, where there is no such thing as a "good enough" maintainer. You can't do Wikipedia like this, because nobody is an expert in all of human knowledge.

On the other hand, MDN is probably sufficiently narrow in scope that a team of subject matter experts can competently manage contributions in a timely manner. Of course, by "narrow" I do mean basically every technology currently in use on the web, so... guess we'll see how realistic that is!

In short, what you are saying is: There is a fundamental difference between an encyclopedia and technical documentation which warrants a different approach to contributions.

I don't think you need to "really" change your approach when it comes to wikis.

The more quality control you need, the more often you just let the contributor prove themselves first. Meaning, you don't automatically approve their submissions, but you don't completely prevent them from writing either, at least for as much as your community has the resources to review.

Being able to have a defense/discussion about your documentation edit is something you get in pull requests that is sorely lacking on wiki platforms.

Frankly, I always felt the wiki model was pretty much a show-stopper to making any real edits, with no good UI for commit review and no good UI for having a granular discussion about your changes. Instead the UX is to burn a lot of time getting a large change through a bunch of potential reversion cycles while trying to communicate through talk page(s). It's a hoop that selects for die-hards.

I think there's an argument to be made that the impact of temporary errors in an MDN page is more significant than temporary accuracy errors on someone's Wikipedia article. If a developer goes off incorrect information on MDN it can waste days-to-weeks of development effort and potentially lead to defective software in end-users' hands.

To be fair, maybe people should view MDN skeptically just like they do Wikipedia. But I think it's valuable for it to be a reliable source and review-first is a good way to maintain that.

I don’t think I would agree. There are so many different topics on Wikipedia. In all likelihood there is an enormously larger number of topics on Wikipedia that could lead to far bigger problems than a few days of wasted time for a software developer, if those pages were to contain seriously bad information.

This model - storing markdown files in a git(hub) repo and editing them on GitHub - is getting more and more popular. It would be great to see GitHub add a richer markdown/frontmatter editor to their interface to accommodate this.

From what I understood, it's going to be raw HTML, not markdown.

I was thinking of a more featureful markdown-based rich text editor, like you might see in a proper CMS.

Reading between the lines, it seems obvious that Mozilla wishes to stop funding localization.

Translations aren't even that expensive, especially relative to the cost of producing the docs in the first place. It hurts me to see MDN so starved of resources that they can't pay for even that small piece.

MDN has been key to so many people's technical education. High-quality web documentation is an essential resource for those looking to elevate themselves into a technical career these days.

Much of the world (20%, according to their own research) is set to lose access to this vital body of knowledge.

From TFA:

> Note: In addition, the text of the UI components and header menu will be in English only, going forward. They will not be translated, at least not initially.

But that's the easiest part! I understand the part about translations becoming stale, and how it's hard to manage, but honestly, translating a UI isn't that hard.

I think a good recipe to get better translations is:

- Don't treat it as an afterthought. Co-locate the en-US assets with the translated versions; that way your contributors actually see that there are other languages to support. Each page on the MDN website should be a folder containing a file for each language.

- If a translation is out-of-date, link to these more recent languages, and offer an auto-translated version in the meantime. Importantly, let the user switch between the automatic translation and the manual one. Also, make it obvious how they can contribute.

Idk, ESL speaker as I am myself, I don’t think localizations are that important for technical writing, where little grammar or rhetoric is involved, and technical terms can simply be seen as “tokens” that can be represented in any language as long as you can identify their “shapes.” Therefore, if you are among the targeted reader group, the documentation not being in your mother tongue is unlikely to discourage you from reading it. On the other hand, if you are not comfortable with reading technical descriptions generally, having a localized version will not help much either.

If the text is in github, you can download it, backup, make an iso and share on piratebay, it won't be lost.

The concern I'm trying to express is that the translations of this content will grow increasingly stale as the original articles are updated, and will be missing entirely for new articles. People who don't speak or read one of the top 12 languages will just be out of luck, unless some generous soul decides to do that substantial amount of work for free.

As far as I understand Mozilla's point is that this is already happening. I agree that stuff like the UI should still be localized manually.

We shouldn't discount volunteer translations. Won't the new platform support that?

I think that storing the localized content in separate flat HTML files makes it very difficult to maintain translations, as the article points out.

Ideally, you would have a master document in which some parts are shared across all translations (like the layout, the actual terms in the spec, browser compatibility information, etc), while other parts were localizable (like the field descriptions). Then when content was added/changed, it would be clear which parts of the translations needed updating, and you could either flag them as such for volunteers, or use machine translation as a stop-gap, or both.
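A rough sketch in Python of how that master-document idea could track staleness (the section/hash scheme here is entirely hypothetical, not anything MDN has announced): each translated section records a fingerprint of the English source it was translated from, so a changed source makes the translation visibly stale.

```python
import hashlib

def source_hash(text: str) -> str:
    """Fingerprint of the English source a translation was made from."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]

def stale_sections(master: dict, translation: dict) -> dict:
    """Classify each localizable section as 'ok', 'stale', or 'missing'.

    master:      {section_id: english_text}
    translation: {section_id: {"text": ..., "source_hash": ...}}
    """
    report = {}
    for section_id, english_text in master.items():
        tr = translation.get(section_id)
        if tr is None:
            report[section_id] = "missing"
        elif tr["source_hash"] != source_hash(english_text):
            report[section_id] = "stale"  # English changed since translation
        else:
            report[section_id] = "ok"
    return report
```

Sections flagged "stale" or "missing" could then be surfaced to volunteers, or machine-translated as a stop-gap, exactly as described above.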

With the architecture they described, however, your options are either to make the translations completely independent or to make translations completely machine translated, neither of which is a good solution.

It's quite telling already that they basically admit that mixing markup and content as tightly as that is problematic.

I mean I don't think it's actually that big of a deal as long as they stick to basic HTML markup (paragraphs, bold/italics, etc), but you're already running into problems when you add links.

Not having multi-language support as a first-class citizen is a step backwards IMO. Especially considering the world is still getting connected at an enormous rate.

Multi-language support has the inherent problem that it requires effort linear in the number of languages. Wearing a hopeful hat, this new architecture would only require effort "linear" in the error rate of the machine translations.

Personally I had some very good experiences with some translators, and it could work even better if Mozilla manages to tune and train them properly. (Like making so that manual fixes improve the algorithm)

I don't doubt that the Github-based CMS will allow for community-provided translations. I _do_ doubt that we'll see anything like the current set of languages covered at anything like the current levels of breadth and quality.

Like I said in another comment, approximately nobody translates technical documentation for fun.

A surprising amount of MediaWiki documentation is translated by volunteers (I can't speak to the quality, as I don't speak the other languages; realistically the English source material isn't that great), but volunteers do translate things.

Perhaps interestingly, I had a frustration in the other direction just last night:



Notice that the English page seems obviously translated by a non-native speaker, which has some implications not favorable to Mozilla's behavior here, specifically that ESL people will contribute documentation in English that native English-language speakers won't even provide for themselves.

Wikipedia translations are a bit different because they are not meant to be 1:1. The translations in theory are entirely separate articles.

I was thinking more like https://www.mediawiki.org/wiki/Download vs https://www.mediawiki.org/wiki/Download/es

Did they actually fund localisations? I'm unfamiliar with their process, but this article gave me the impression that it was completely community driven.

Yes, absolutely. Approximately nobody translates technical documentation for fun, much less at such a high level. The article is pretty clear that the number of languages supported is a business decision.

Wikimedia/Wikipedia show that people do translate for fun (I do it myself on the wiki software website), so I'm not sure how you have that blindspot.

Your comment is the same as people who saw the launch of Wikipedia and wondered who could possibly write articles and find citations for fun.

Hell, who would write hundreds of thousands of wiki articles about The Elder Scrolls universe for fun? https://en.uesp.net/wiki/Main_Page

> Hell, who would write hundreds of thousands of wiki articles about The Elder Scrolls universe for fun? https://en.uesp.net/wiki/Main_Page

70k articles, not hundreds of thousands,[1] and of those 70k fewer than 1/3 are translated to any other language.

And speaking as someone who was a system and content admin for a MediaWiki-powered fiction wiki of about 15k articles... there are not many people relative to the amount of content who are actually interested in contributing anything, much less improving or maintaining what's already there. It's a very, very small core of contributors.

Toward your example, the 161 "active" users[2] (meaning any number of edits in the last 30 days) as of today on English UESP combined for 5,974 edits in that span. Of those edits, 4,779 were from 13 users. 3,039 — more than half of the last month's activity — were from the top 5 users.

On the non-English UESPs? No active users in Portuguese, one in Italian, none in Arabic.[3][4][5]

From my experience at least, it's because it's work — fun work, at times, but still work. Translating MDN content is also work. Doing _any_ documentation of _anything_ is work. Hell, I burned out on fiction documentation work before I burned out on paid work.

Some people do enjoy it! But voluntary documentation is still going to attract a small, specific core group of consistent contributors (if it attracts any at all).

[1] https://en.uesp.net/wiki/Special:Statistics

[2] https://en.uesp.net/w/index.php?title=Special:ActiveUsers&of...

[3] https://pt.uesp.net/wiki/Especial:Utilizadores_activos

[4] https://it.uesp.net/wiki/Speciale:UtentiAttivi

[5] https://ar.uesp.net/wiki/%D8%AE%D8%A7%D8%B5:%D9%85%D8%B3%D8%...

So they're hoping that they can replace the people they fired with open source contributors working for free.

Man these kind of views bother me so much. They’re taking one of the most (if not, most) useful documentation sources for cross platform web development that benefits them very little, and moving it to a system of open contribution so it can live past the company’s financial problems. And this is your first take?

Granted, it depends on how it turns out. But my first impression is that authoritative reference documents of MDN are going to become cluttered with comments and disagreements.

MDN has never been "authoritative reference documents". It's a wiki. Anybody can edit and republish pages, change examples, do whatever. I noticed somewhere about a year ago that it was a wiki, and have made minor changes to a couple of pages since then, when something was missing an example or something.

I change, I publish, it's done. And if my change was wrong, or opinionated, or bad, it's just like that until somebody else catches it and fixes it back.

Neither Safari, Chrome/Chromium, IE, old Edge, nor new Edge have substantial HTML and CSS developer documentation on their own web sites. Their vendors - excepting Apple and the WebKit contributors - all chose to work with Mozilla instead to put their docs on MDN: https://blogs.windows.com/msedgedev/2017/10/18/documenting-w...

It seems to me that as far as the browser makers are concerned, MDN is as close to authoritative as you can get without going to the standards documents themselves - and the standards don't give you browser compatibility matrices or make any note of browser-specific quirks.

MDN may be a wiki, but it certainly isn't treated like one by the people who are most interested in it being accurate and up to date.

I suppose I was reacting to this:

"Authoritative reference documents of MDN are going to become cluttered with comments and disagreements"

Which suggests previously authoritative docs are becoming less so due to being editable. I'm just pointing out they are and always have been subject to edits from the public. If anything the PR process will likely increase the reliability of the info because there would be a chance to do some vetting of changes.

I totally agree that MDN is as close to authoritative as you can get, but it's one (important) step away.

I think people put authoritative sources on too much of a pedestal because it's possible for someone to naively or maliciously edit wikis. People who write official docs are also frequently wrong, and the editing and review cycle there is far slower!

Every recent review I've seen of Wikipedia vs arbitrary big-a Authoritative sources has said that both have issues and overall Wikipedia is better.

I hope they don't suffer the same type of problem that caniuse has had since incorporating the MDN data. In that case, source data sets that were each well-regarded and, particularly in caniuse's case, well-curated on their own became overwhelming and harder to use than before once combined. The slightly different styles for each source and the partial duplication often seem to obscure the information I'm really trying to get to in the search results now. If MDN is now going to be community-led, I hope they manage to find some arrangement where there is still decent curation and not a Wikipedia/SO-style free for all followed by heavy-handed mod over-reaction.

Why is it now more likely to become cluttered than before?

I think you are both right. This is good for the future of MDN, but OTOH cutting cost is likely to be what motivated Mozilla to make this change.

They had already cut cost, I would phrase it as this change being now even more necessary.

With corporations, the cynical view is most likely to be the correct view. And I wouldn’t say MDN does very little for Mozilla; it’s the main part of their brand in developers’ mind share. If they want to have a browser without developers who like it, that’s their choice, but it won’t really work out well business wise.

As far as I can see, Mozilla have no reason to be losing money other than their own mismanagement.

This is an article about MDN, not Mozilla. How exactly does MDN bring money? Do you suggest MDN should be a revenue source? Or do you expect Mozilla to maintain the MDN for free eternally just because?

This is seriously entitled. Mozilla gave us an amazing resource, entirely for free. This attitude isn't right, it's toxic and harmful.

Mozilla is in a spot of difficulty, and instead of taking down a resource that is operating entirely at a loss, they're taking the engineering steps to ensure the resource can outlive them.

That their mismanagement caused them the difficulty is entirely irrelevant here. It could have happened in a myriad of other ways. We should be praising Mozilla and the MDN tech team for the steps they're taking in making sure the MDN can live on.

It's stuff like MDN that attracts donations.

Fun fact: I have an open source repo with millions of downloads, tens of thousands of active users, and, for at least a year, a prominently displayed donate button. How much in donations do you think it took in over its lifetime?

Hint: it's less than $200.

Donations don't really work.

Thanks for the vim plugin for vscode! It's quite useful and usable. FWIW, I don't think you get many donations because it looks like an official Microsoft-backed vscode plugin, so why would people throw money at Microsoft? That said, I do think you are correct that people in general do not donate much to open source individually.

We didn’t have many for Firebug either, though we appreciated those we got and funneled them to new contributors living outside the States where it made a difference.

>for at least a year, a prominently displayed donate button.

Just to be sure, are you referring to the "BuyMeCoffee" button on the GitHub page for the extension (https://github.com/VSCodeVim/Vim)?

Nah, we used to have a more prominent link. It's been removed since I stopped developing on the extension.

At the bottom of a long scroll isn't something I'd consider prominent. I'd probably change it to simply "Donate" as a button, and put it with the badges at the top. Red, purple or orange as a contrasting color from the blue and green badges.

Which one is it? Maybe you will win a handful of donations here.

> Or do you expect Mozilla to maintain the MDN for free eternally just because?

That's what I expect of a non-profit organization dedicated to maintaining one of the last two remaining major browser engines, yes.

I didn't know that becoming non-profit meant that money is free :)

What are their revenue streams?

Now that antitrust is happening anyways, Google's main reason for shoveling money at them is going away.

Well, the way I see it, it was the financial problems who personally made the decision, so that complicates things somewhat.

If the alternative is to close shop and let it die entirely, I don't have an issue with it

It's not like they were extremely profitable and fired people to squeeze an extra buck

Laying off people isn't inherently evil

I have more of a problem with who they laid off than the fact that they laid people off. I'm probably biased as an engineer, but IMO the people who actually create your products should be the last ones to be laid off, not the first. And if the company is struggling, highly paid execs should take a pay cut before you start laying off lower-paid engineers.

hey, firing people is hard work.

It's more the clear mismanagement of their finances on ill-considered products outside of their core offering that never go anywhere. Some of them appear to be nothing more than vanity projects that subsequently disappear having got no traction at all.

I still think they are/were in a great position to offer what is now office365 and google docs earlier on, with a position to offer both dedicated and cloud software options.

They had electron effectively a decade before electron with XULrunner and let it die on the vine.

There were opportunities for developer mindshare and for services adjacent to those tools they were already building... that they didn't do more for getting Thunderbird more competitive to Outlook to offering a better breed of messenger platform.

No, their management sat on fat cash, with fat paychecks and ill-conceived projects that led to little in terms of mindshare or income longevity.

Managing is inherently risky; we shouldn't expect managers to make no mistakes, and at the same time, expecting them to keep workers around in an unprofitable business would only serve to further harm the business.

Sure, one can rant all one wants about management's lack of business acumen, but that's completely separate from the issue of the layoffs

I read it more as they cleaned up the contribution process so that it's easier for people to participate.

But if what you said does happen, then RIP. It's not going to be the same - everyone is too busy writing medium articles to sell their course on udemy to make real contributions for free.

There were a lot of useful guides showing practical applications of features beyond just listing the api spec.

But I guess it'll still be useful as a more approachable wiki to api standards rather than having to go to w3schools (yuck) or the horrible ui of w3c.

Nevermind that work on Yari started at least a year prior to the layoffs in question. https://github.com/mdn/yari

So, you want to use it for free, but whoever makes it to be paid by someone else?

I'm a bit confused on the diagram for the new architecture: What's the purpose of the Lambda function? I don't see them explain that anywhere.

For the arrow coming from github to the cdn, I would bet that's a deployment where the existing docs site gets deployed as a lambda. From there, CDN calls that aren't cached go to the lambda which are then served up.

If that’s the case, why not just use GitHub Actions? It’s more integrated and requires no setup.

Sure, that's one of a hundred ways to deploy it. But they don't say if they're using that approach or any other

They are probably more interested in cost rather than setup.

I would assume the lambda is to invalidate / update the cache on the pages that are being changed as new commits are merged in.
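If that's the role, a minimal sketch of what such a cache-purge step might compute from a merge's changed files (the repo layout and URL scheme here are hypothetical, not MDN's actual ones) before handing the paths to the CDN's invalidation API:

```python
def invalidation_paths(changed_files, content_root="content/"):
    """Derive CDN paths worth purging from a merge's changed files.

    Hypothetical layout: content/<locale>/<slug>/index.html in the repo
    is served at /<locale>/docs/<slug> on the site.
    """
    paths = set()
    for f in changed_files:
        # Ignore anything that isn't a rendered content page.
        if not f.startswith(content_root) or not f.endswith("/index.html"):
            continue
        relative = f[len(content_root):-len("/index.html")]
        locale, _, slug = relative.partition("/")
        paths.add(f"/{locale}/docs/{slug}")
    return sorted(paths)
```

The returned list would then be passed to something like CloudFront's create-invalidation call; only the path derivation is shown here.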

First impressions:

PR model via GitHub, good.

That new platform schema diagram for Yari looks complicated.

It's about the level of abstraction. Kuma's "Kubernetes Cluster" is very complicated but it serves no purpose to include its details in a blog post that describes Yari. Yari's diagram is more detailed than Kuma's, while Yari itself is simpler than Kuma.

Probably looks complicated because of the names of all the services. In a nutshell, the data lives on GitHub, gets automatically compiled into HTML and sent to a web server. They have basically taken the rendering out of the app server and into a CI task that does it all in advance.

Agreed... I'm not sure what of value is really left for non-contributing users on the site.

This is a similar tactic to what MS has been using for their documentation as well, so it's not so bad, and for those most likely to contribute to technical documentation like this, the overlap with GitHub is significant enough.

Good to know that the MDN team are still working on the platform and even evolving it (hopefully for the better), despite severe layoffs.

What about the content part of it? Is it the hope that the move to GitHub and PR model will encourage broader community contribution and thus make up for the sad lack of dedicated content writers?

I am a big fan of jamstack. For a lot of my client's use cases it allows me to save them a lot of money and headaches while delivering a better product, especially with a headless CMS like Netlify CMS

I still don't fully grok the term. Is it a loose/blanket term sorta like 'devops' or is it a more literal prescription of actual requirements and practices?

A big problem is that the "Jamstack" branding and the website make it deliberately unclear that "JAM" is an acronym. I can't even find it expanded on the site. Instead it contains relatively verbose explanations that never become clearer than the actual words they so desperately avoid.

It's Javascript, API, Markup. Put "pre-rendered" before "Markup" and you understand everything.

Maybe they do this because they really want to push the fact that you can serve the J and M from a CDN?

They don't want it to be an acronym anymore: https://github.com/jamstack/jamstack.org/issues/279

It's in a similar category as "LAMP stack" or "MEAN stack", but more vague on details: JAM = Javascript + APIs + Markup. APIs tend to perform the role of a CMS (sometimes called "headless CMS"), and Markup tends to refer to declarative templating languages rather than code. The final output is compiled into static HTML+CSS+JS, which is obviously highly performant, compared to dynamically generated PHP/RoR/etc.

I can get that a significant part of it is just "static HTML," what I don't quite get is what makes it different than just static site generation. I guess that would be the "A" part...

Marketing pretty much. Some vague cloud of "uses serverless to generate", "pushes static files to a CDN through an API", but if you drill down none of that is key to it. I.e. you'll find advocates state that if a site uses a static site generator or Netlify it's JAMStack.

It mostly concerns how often content is updated.

Real-time: dynamically generated on the server-side. (e.g. injecting the current user's username)

Occasionally: Jamstack (e.g. rebuilding a static site every time a contributor adds a blog post via the CMS)

Rarely: manual updates (e.g. manually deploying a new version of a small-town restaurant's website when the menu changes)

no, that's it. jamstack - static site generator, but javascript

Yea... SPA == same thing imho.

More "REST" than "JSON". There's no Jamstack spec, no checkboxes to get your Jamstack compliance score. It is a description of a particular set of patterns and practices.

Jamstack is essentially a client/cloud model. Instead of client/server/db it is client with CDN and apis/microservices.

Jamstack is static sites/headless CMS, CDN for content/storage/speed, microservices/apis for data/auth/etc focused on speed, pre-rendering, and decoupling for easier swappable parts rather than monolith. [1]

[1] https://jamstack.org/what-is-jamstack/

tl;dr it's basically just static HTML/JS/CSS sitting in an S3 bucket, rather than having an active server (that you code/manage yourself).

It pushes the page rendering from runtime to build time.

I don't know. Moving from a Wiki model to a GitHub PR model actually sounds worse for users. They talk about, "building a relationship with contributors". This could also be called, "building a lot of friction for contributors".

With a Wiki, even non-technical users can contribute, because the editing tools are all built in.

But with the PR model, now contributors need to understand HTML, CSS, Git, and GitHub PRs. I've been on GitHub for something like 10 years and I still have trouble figuring out PR workflows.

Making simple edits like grammar or punctuation fixes becomes significantly more effort in the submission than in the edit itself.

Wanting to have control over the submission process to prevent drive-by vandalism is certainly important. But that should be managed with role-based authentication and new users needing to have their edits approved by a moderator or by votes from long-term users in good standing. Once again, this sounds like the same old Mozilla line of "doing the right thing for the current user experience is too hard for our developers, so we're going to move the goalpost instead."

Compared to MSDN, MDN isn't as broad, but goes a lot deeper on topics, and has much better examples. Does Microsoft wring its hands over how much running MSDN costs, to the point of considering giving it up entirely? I don't think they do. I think they know that it's important to keep developers on-platform, in-ecosystem, and they do that by offering a comprehensive learning resource.

MDN is the best online resource for learning about Web technologies. I think a large part of that is because of the relatively low friction to edit. It not making Mozilla any money is completely their own fault. They could have taken that a step further and started offering training services and events. It stretches belief that you could have such a popular resource and can't figure out some sort of monetization strategy related to it. But, I guess, that's another of the same, old Mozilla lines.

> Does Microsoft wring its hands over how much running MSDN costs

Microsoft revenue stream is slightly more reliable than Mozilla's reliance on Google.

>We are replacing the current MDN Wiki platform with a JAMStack approach, which publishes the content managed in a GitHub repo.

So... Ceding control over content to GitHub. Ceding control over uptime to Amazon. (Both done by a company that paints itself as pro-people, pro-privacy, etc.)

>You will no longer be able to click Edit on a page, make and save a change, and have it show up nearly immediately on the page. You’ll also no longer be able to do your edits in a WYSIWYG editor.

The fact that this is presented as some sort of improvement is pure farce.

Frankly, vandalizing a page becomes harder.

The job of people who would have to check and approve the PRs becomes harder, too.

The current Wiki is already hosted on AWS.

"If you want to be notified when the new system is ready for testing, please let us know using this form."


"You need permission

This form can only be viewed by users in the owner's organization."


IIRC Microsoft a few years back decided to throw in with MDN as THE source for web API docs: https://mspoweruser.com/microsoft-redirect-7700-msdn-pages-m...

> Microsoft is joining Google, the W3C, and Samsung to make Mozilla’s MDN Web Docs as the single place for web API reference.

Now that Microsoft is on Chromium, and THAT is also documented in MDN, and Google and Microsoft both own public cloud providers.. I feel like this move is more a step in the evolution of MDN rather than Mozilla simply sending it out to pasture.

This also matches the focus on mass edits and tooling in the article.

Is there any way to back the actual MDN?

Will the quizzes disappear in the near future? I was planning on brushing up my HTML/CSS/JS skills, and I was planning on doing it through MDN.

I'm working on an editor that lets you clone any repository that uses a static site generator (or the like) to render static pages, and edit it in WYSIWYG (fully rendered, what you see is what you get) while producing clean diffs in plain HTML (or markdown). I would love to add support for editing MDN articles.

Am I the only one put off by the AWS (S3+Lambda) lock-in they seem to be creating for themselves?

It should only take a couple of days to switch those to a different cloud provider. When building a Jamstack website there is only very thin logic in the Lambda itself, and S3 is only used as managed file storage.

S3 is just files and Lambda is just FaaS, you can host your own open source versions of these tools and not encounter an issue.

This would be a more solid argument if they were using a lot of AWS features together, but you could replace this with Minio and FastCGI if you really wanted to.

and GitHub...

Are they locked into github (using their proprietary APIs) or just using it as a git backend (using standard git tooling)? It's not clear from the article.

I'm pretty confident they're going to handle PRs via the github interface. I don't think they'd get many contributions if you had to email patches in.

Right, though the GitHub lock-in is probably going to be the least bad if they mostly rely on git itself (GH issues and PRs can be migrated to other providers or self-hosted a lot more easily than uncoupling yourself from AWS APIs and services).

And Kubernetes, and React, and MySQL. How about adding something Apple to the mix?

(About the comments I see here.) This rhetorical method of mentioning only the bad/debatable parts and completely ignoring the benefits is toxic. It disregards any compromise, as if we could always achieve the ideal solution.


So when "retire" or "sunset" isn't enough of a euphemism, we now have "evolve"...

What are you talking about? MDN is not being retired nor sunset. They’re only changing the underlying technology. It’s completely fair game.

> Simplified back-end platform

Shows two flowcharts, of which the new one is arguably more complex.

While one org that's serious about privacy moves to Gitlab and selfhosts everything (wikimedia), another one decides to embrace Github, Google and Amazon. Why Mozilla... why?
