Memories of Mozilla at 15 and Thoughts on Mozilla Research (brendaneich.com)
101 points by kibwen 1384 days ago | hide | past | web | 40 comments | favorite

Mozilla Research is awesome. They punch way way way above their weight.

Google is chock full of programming language people, but Mozilla's work on Rust and JavaScript (ES6, asm.js, sweet.js) is producing far better languages than Google has with Go and Dart, in my humble opinion.

You can't compare Google Research against Mozilla Research. Google is really moving computer science forward; just look at their papers and real implementations (disclaimer: I am not a Google fan).

If you compare the Google Chrome source code vs Firefox you will quickly understand how miles ahead Google is.

My opinion is that what Mozilla produces in the domain of programming languages is superior to what Google produces in the same domain. I make no claims comparing the two outside this domain.

Google has far more resources than Mozilla, and produces loads of great stuff in many domains.

I use both browsers daily (Aurora specifically, for Firefox) and have no preference between them. I have no opinion on their source code.

Google has been pretty influential in PL, especially in the PL/systems domain (MapReduce); just because they aren't designing so many mainstream languages doesn't mean people like Jeff Dean aren't putting their PL skillz to work. In fact, many of the heavy hitters at Google have PL backgrounds and do a lot of PL work; they are just pragmatic and realize that not all PL work involves overtly new languages.

And then there is what Hölzle, Bak, Bracha, etc. are doing with Dart, and of course Pike with Go.

To be fair, though, Chrome was started from scratch in 2007/2008, whereas Firefox has a lot of legacy stuff still hanging around from the early 2000s.

Chrome was started off of WebKit, which also has legacy from many years earlier. Both WebKit and Gecko have code that dates to before the year 2000 in fact.

This is pretty much expected for huge, multimillion-line C++ codebases, which all web browsers currently are. All have crufty parts. (Not sure why GP thinks one browser has nicer code overall; that's not my opinion based on the code I've read.)

V8 was new and WebKit was still a newer codebase than Gecko.

Chrome started in 2006.

WebKit forked KHTML in 2005.

KHTML actually started in late '98 if memory serves, when we rebooted Mozilla around Gecko.


Rewriting the browser stack in an evolving language feels like trying to shoot the moon. However, I hesitate to bet against Mozilla when it comes to impossible rewrites.

I love the idea. It will give Rust lots of feedback about the real-world, practical problems that arise in large-scale software. I think it will make it that much more likely that Rust becomes an industrial-strength systems language.

Yep. It's a high-risk project, but the pay-off could be huge (both for programming languages and browsers) if they pull it off.

As Peter Norvig argued, design patterns are bug reports against your programming language.

Does anybody know what the actual quote is? I love this.

There was a link: http://www.norvig.com/design-patterns/

I think you are looking for an exact quote but the link is to slides of a talk :(

I also agree with the sentiment, but your Architecture Astronauts will no doubt disagree.

You really can't just take a design pattern and apply it blindly across languages. I spend far too much of my time on a C# application where I have to deconstruct anti-patterns to make the code more amenable to change.

Yeah, I've shared that sentiment for a while now, but it was the turn of phrase that pulled me in, and I wanted to know whether to attribute it to Peter or Brendan.

I turned that particular phrase based on Peter Norvig's old Harlequin-era talk.

I've also heard people say that "design patterns are feature requests", or "the kindest form of feature request", but bug or feature -- what's the difference?


An aside, but I like this series of articles on design patterns in Perl, first here:


"If a pattern is really valuable, then it should be part of the core language"
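A concrete sketch of that sentiment (my illustration, not from the linked articles): Rust folds the classic GoF Iterator pattern into the core language via the `Iterator` trait, so the pattern stops being boilerplate you write and becomes a one-liner you call.

```rust
// The Iterator design pattern, absorbed into the language: no
// hand-rolled Iterator interface or concrete iterator classes needed.
fn main() {
    // Ranges already implement Iterator; adapters compose lazily.
    let squares: Vec<i32> = (1..=4).map(|x| x * x).collect();
    assert_eq!(squares, vec![1, 4, 9, 16]);

    // `for` loops desugar to the very same trait.
    let mut sum = 0;
    for s in &squares {
        sum += s;
    }
    assert_eq!(sum, 30);
    println!("{:?} sums to {}", squares, sum);
}
```

The same logic in a pattern-free language requires an explicit iterator object with `has_next`/`next` methods; here the "pattern" is just the trait.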

You know, I was also thinking of Perl yesterday when I read this, and this is definitely a Perl idiom.

I really believe you get a lot better at seeing the benefits of patterns when you try to apply them in different languages and different contexts.

Are patterns something we should try to transcend in our learning? Meaning, you really do need to know them before you understand how to dismiss them in favor of something else?

One of the slides shows a benchmark where Rust beats GCC; it would be really nice to see the code.

In what? Compilation speed? At the moment, Rust is in no way the light, nimble language they need it to be.

Runtime performance. This is mostly an LLVM versus GCC thing.

Runtime performance is far more important to us than compiler speed—a fast-to-compile Web browser is useless if it loses to existing engines in UX or benchmarks (every cycle counts!)—but we continue to spend time on compiler speed, because it's important for developers.

> This is mostly an LLVM versus GCC thing.

So how does clang do on nbody.c?

I don't know which code Brendan was using, but using the http://shootout.alioth.debian.org/ implementations I ran some tests of my own:

  Performance counter stats for './nbody.gcc_run 50000000' (5 runs):

       6232.525219 task-clock                #    0.999 CPUs utilized            ( +-  0.09% )
               533 context-switches          #    0.086 K/sec                    ( +-  0.43% )
                 1 CPU-migrations            #    0.000 K/sec                    ( +- 66.67% )
               166 page-faults               #    0.027 K/sec                    ( +-  0.15% )
    27,385,819,080 cycles                    #    4.394 GHz                      ( +-  0.05% ) [83.31%]
    19,494,722,200 stalled-cycles-frontend   #   71.19% frontend cycles idle     ( +-  0.07% ) [83.30%]
     6,403,657,640 stalled-cycles-backend    #   23.38% backend  cycles idle     ( +-  0.21% ) [66.70%]
    31,905,910,193 instructions              #    1.17  insns per cycle
                                             #    0.61  stalled cycles per insn  ( +-  0.01% ) [83.37%]
     1,902,954,908 branches                  #  305.326 M/sec                    ( +-  0.05% ) [83.36%]
            39,925 branch-misses             #    0.00% of all branches          ( +- 31.16% ) [83.34%]

       6.239027874 seconds time elapsed                                          ( +-  0.08% )

  Performance counter stats for './nbody_rustc 50000000' (5 runs):

       6970.131625 task-clock                #    0.999 CPUs utilized            ( +-  0.07% )
               601 context-switches          #    0.086 K/sec                    ( +-  0.33% )
                 5 CPU-migrations            #    0.001 K/sec                    ( +-  7.41% )
             1,024 page-faults               #    0.147 K/sec                    ( +-  0.02% )
    30,587,917,853 cycles                    #    4.388 GHz                      ( +-  0.04% ) [83.31%]
    19,368,464,950 stalled-cycles-frontend   #   63.32% frontend cycles idle     ( +-  0.06% ) [83.34%]
     7,348,766,082 stalled-cycles-backend    #   24.03% backend  cycles idle     ( +-  0.84% ) [66.69%]
    36,479,184,126 instructions              #    1.19  insns per cycle
                                             #    0.53  stalled cycles per insn  ( +-  0.01% ) [83.35%]
     2,404,008,283 branches                  #  344.901 M/sec                    ( +-  0.04% ) [83.35%]
            65,846 branch-misses             #    0.00% of all branches          ( +- 11.89% ) [83.33%]

       6.977211938 seconds time elapsed

The C version is here: http://benchmarksgame.alioth.debian.org/u32/program.php?test... . It appears to be the fastest C version on the site. The Rust version looks pretty gross (it's likely very old code); I'd like to try porting the C version to Rust (it's only 140 lines) and compare the results. Note also that I'm using a version of the compiler so new that it's not even in the official repo yet, containing optimizations for floating-point math.

Change the sqrt call to use the sqrtf32() intrinsic. That's the modification we made to make nbody fast.
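For context, a hedged sketch of the kind of change being described (this is my illustration, not the actual shootout-nbody.rs): the n-body hot loop computes 1/r³ from a squared distance, and the speedup came from routing the square root through a compiler intrinsic instead of a library call. The 2013-era `sqrtf32()`/`sqrtf64()` intrinsics are long gone; in today's Rust, `f64::sqrt` lowers to the LLVM intrinsic directly.

```rust
// Illustrative inner-loop fragment of an n-body force update.
// In 2013 Rust this sqrt would have been the unsafe sqrtf64 intrinsic;
// modern f64::sqrt compiles to the same llvm.sqrt.f64 instruction.
fn inv_r3(dx: f64, dy: f64, dz: f64) -> f64 {
    let d2 = dx * dx + dy * dy + dz * dz; // squared distance
    let d = d2.sqrt();                    // the call being discussed
    1.0 / (d2 * d)                        // 1 / r^3
}

fn main() {
    // d = 5.0, d2 = 25.0, so inv_r3 = 1/125 = 0.008
    let m = inv_r3(3.0, 4.0, 0.0);
    assert!((m - 0.008).abs() < 1e-12);
    println!("{}", m);
}
```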

The branch I'm on has already updated shootout-nbody.rs for the new intrinsics, which is the reason that it takes just 7 seconds instead of 11. :)

I never liked Mozilla for a number of experiences:

Their transition from an open source project to an organization receiving millions from the Google search box.

The restrictions (based on security reasons) on installing an extension from a site other than Mozilla Add-ons. For me this was a precursor to app stores.

Mozilla Thunderbird team had no respect for people who reported bugs: they closed important bug reports.

Lack of respect for extension developers: breaking compatibility between versions and not cleanly isolating extensions. I remember looking into the TabMix Plus extension and seeing ~40 ifs handling special cases for other extensions.

Lack of a good community (circa 2007) where complex questions were never answered.

I can think of more disgusting experiences...

>Their transition from an open source project to an organization receiving millions from the Google search box.

I have no problem with this as long as they use the funding to continue producing open source software. I don't think it would be possible for them to compete with Chrome as a pure open source project with no major funding. Mozilla's role since its founding of providing a viable alternative FOSS browser is just as valuable and appreciated now as it was in the IE days, imho.

>The restrictions (based on security reasons) to install an extension from a site different than mozilla addons. For me this was a prequel of app stores.

For a product aimed primarily at easily-duped non-tech lay people, I have no problem with this either. As long as the addons remain FOSS.

>Lack of respect for extension developers breaking compatibility between versions and not clearly isolating extensions.

As a FF user I can live with this, as long as the breaking changes move the browser forward technically, or get rid of some legacy cruft, or similar. It's definitely not in FF's interest either, as it incentivizes users to try other browsers, so I expect it's on their todo list, just maybe not as high a priority as other issues.

> As a FF user I can live with this...

The issue is from the developer perspective but also (although invisible) from the user side. Extension developers do titanic work, and users can lose extensions along the way to compatibility. Obviously popular extensions will never be left incompatible, but lesser-known ones will.

The security restrictions regarding add-ons make sense; I believe Chrome has only recently adopted them, after seeing browsers get owned via hacked websites.

Thunderbird was a silly waste of resources and should have been shelved well before it was.

On balance I'm quite happy Mozilla exists, and my experiences have been generally positive.

Can you explain your disdain for Thunderbird? I'm using it as my main email program and am quite happy with it.

Mozilla have limited resources and a large number of more important projects that could do with those limited resources.

I explained it: it has important bugs, and the team, instead of saying "won't fix", said "it didn't occur".

I agree with you on some Thunderbird bug particulars, and I weighed in and in at least two cases helped turn things around:




Sorry I was not earlier on the scene to help -- people are supposed to mail me when my early warning systems don't see a problem in the community.

Honest question: did you try to intervene or get involved in any of these cases?


It would be interesting to receive replies instead of downvotes.

"Their transition from an open source project to an organization receiving millions from the Google search box."

You can be both things. Generally it's nice to be paid for your work.

"The restrictions (based on security reasons) to install an extension from a site different than mozilla addons"

If someone convinces you to install a malicious browser extension (easily done in a world where people will click OK on basically anything) they can spy on and control everything you do online.

> If someone convinces you to install a malicious browser extension (easily done in a world where people will click OK on basically anything) they can spy on and control everything you do online.

Do you think, then, that for example desktop software must be installed through a central company like Microsoft, because someone could convince you to install it from any website?

And also: do you think Mozilla was really doing security inspections in the early days? It wasn't; the extensions had all kinds of security bugs. I think it was more about control.

You can install extensions from any site. You'll only get an additional prompt if you're not installing from the mozilla add-on site.

You can think it was about control, but that could not be further from the truth.

<disclaimer>I work for mozilla</disclaimer>

An additional prompt means fewer conversions.

There are alternative approaches Mozilla could take, like issuing a Mozilla certificate that lets you host the extension on your own site without an extra prompt.

And how would we decide to whom to extend the certificate? To whom should we entrust "conversions"?

You are talking around the larger problem, which is a huge one for many extension and app ecosystems (e.g. Google Play, where weak-AI scanners fail to stop malware and spamware).

Mozilla uses community review, which works much better but is of course imperfect, a human thing.

No one that I know of has solved this larger problem. I would be interested in research pointers and tips (not complaints).


I work for Mozilla, but speak for myself. If you think it was about control, then you don't know how Mozilla works. We fight larger players with one arm tied behind our back because of our commitment to making things open and interoperable. We do it gladly, because we know that sometimes our reference implementation won't be the best implementation, and even if we fail, we want the tech to live on. Every day I have or overhear a discussion about making certain what we build doesn't privilege our own solutions over others. Are we perfect at this? No. Sometimes security of our users trumps a completely level playing field. But every time we have to slightly close a technology, know that it's done extremely begrudgingly.
