Congratulations! And thank you for the great write-up. I work with the Chromium code base a lot, and it can indeed be daunting. I use Sublime Text, which treats the code as plain text, apart from syntax highlighting. But it's also possible with at least VS Code to get some more intelligence, such as going to the definition or declaration of a function, etc.
People who have now become interested in creating their own Chromium-based browser may want to take a look at my article: https://omaha-consulting.com/how-to-fork-chromium. It gives a high-level view of what goes into maintaining a Chromium fork.
> you will (...) want to change the name of your browser [to] "Browser of Bliss" instead of as "Chromium". You will find that this is already hard to do. The browser name is hard-coded in many places in the millions of lines of Chromium source code. (...) Viasat are offering a (...) fork called Rebel that makes this easier
I am surprised that kind of change has not been upstreamed, or is Google actively working against forks?
It’s just the kind of thing that happens in a huge, production codebase.
There are plenty of reasons to be skeptical of Google, but some strings in multiple places isn’t a good reason to be skeptical of the Chromium maintainers.
I know it's naturally happening in a large codebase, I'm asking why they specifically maintain a fork just for that instead of trying to push what are probably easy (but tedious) upstream fixes.
Purely guessing: abstracting the browser name is yet another abstraction layer. A layer that is not needed by Chromium itself. Maintainers of Chromium primarily care about the maintainability of Chromium, not of other forks.
They could decline those changes without being actively against forks. It would just mean they aren't actively supporting forks, which is a different thing.
It depends on the individual part of Chromium. Some teams seem to be much more open to contributions than others. (I believe I recall that this is also what someone at Viasat told me at some point, but I'm not sure.)
Also, the browser name, for example, appears in a lot of places. It is very hard to fully extract it into a single configurable option.
>So how much does it cost to maintain a Chromium fork?
>It obviously depends on the number of customizations your browser has, and on how quickly you want to incorporate security fixes from upstream. Chromium is one of the world's most complex pieces of software, and you need very capable engineers and powerful hardware to match this. It is going to be expensive. And not just once, but also on an ongoing basis.
This confirms my thoughts after I tried messing with Chromium's and Brave's code bases.
I dunno, I've maintained a personal fork since around Chrome 100 or so. I am only targeting builds on Linux, I don't care about branding, and my patches are limited to about 300 lines total. The initial fork and figuring out the build process took probably 20 hours. After that, I've only had to spend on average around one hour per release of "engineering time" to keep my patches current. The build takes about an hour of machine time on my 5950x.
I'm not saying it's trivial by any means, but I don't think it's outside the range of a motivated hobbyist or a small startup. I don't think you really need "very capable engineers" personally. I barely know C or C++ and I haven't had too much trouble working with the codebase.
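Concretely, that hour per release is basically a rebase loop. A rough sketch of what it implies, with made-up branch and tag names:

    # fetch the new release tags from upstream Chromium
    git fetch upstream --tags
    # replay the local patch branch onto the new release tag
    git checkout my-patches
    git rebase 126.0.6478.55   # hypothetical release tag
    # resolve any conflicts, then rebuild
    gn gen out/Release && autoninja -C out/Release chrome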
What kind of changes have you made with your patches? Did you make them yourself? I wanted to locate the code responsible for chromium's manifest v2 support so I could patch it back in once it's removed but I just couldn't get very far.
Manifest v2 is a bit tricky, because there are really three changes overlapping with each other:
- Manifest v3 site permissions changes
- Removal of protected APIs (like the blocking version of chrome.webRequest)
- Chrome Store code review changes that prohibit remotely loading sources
For now, you can either do nothing (as the rollout is not yet complete), or set the ExtensionManifestV2Availability policy to 2, which will still allow Mv2 extensions to be loaded.
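On Linux, for example, that policy can be set with a small JSON file dropped into the managed-policies directory (the exact path varies by build; /etc/chromium/policies/managed/ is common for Chromium packages). A minimal sketch, with a made-up filename like mv2.json:

    {
      "ExtensionManifestV2Availability": 2
    }

Here 2 corresponds to "enabled for all extensions" in the policy documentation.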
>The other option is to follow their issue tracker and git history, and just revert whatever patches you don't like.
Yeah, the first thing I tried was to look at the issue tracker but as you stated they haven't rolled this out yet. So I tried to find the code myself.
Google's documentation didn't really help that much for this and seemed to be outdated in many sections. So I went and tried to do some experiments on places that seemed like they may have something to do with extension APIs.
I remember there was some build-generated code that I was looking into because it related to the extension APIs, but I was never able to test my guesses very well since my machine is not super fast and compiling takes so long. So eventually I threw in the towel.
I was a C++ Chrome developer till 2020, and I primarily used Sublime Text because of speed, and I found VSCode weekly releases too distracting.
I indexed code locally with CTags, and then used SublimeText CTags extension for navigation. This worked great for my local branches. When I needed to dig deep, I'd use source.chromium.org which indexes perfectly.
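If you want to replicate the selective-indexing part, plain universal-ctags can be pointed at just the subtrees you work in; the paths below are only illustrative:

    # index only selected modules rather than the whole checkout
    ctags -R --languages=C++ \
        third_party/blink/renderer/core \
        content/browser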
I am not sure if the built-in indexing supports indexing only a subset of the tree. You want to index selectively, for speed and accuracy. The complete tree might contain multiple definitions of the same functions, as headers get copied, pre-processed, etc.
It supports selective indexing: you can specify paths to exclude from indexing, or open just a single module. Here is more info: https://www.sublimetext.com/docs/indexing.html
Reminds me of my early experience with larger-scale JS development (early single page apps or whatever they're called now); there were no good IDEs yet, no module / require system, no types or whatever. Sublime Text and fast global search were my go-to tools, and it gave me a newfound appreciation of consistent naming schemes and structures.
Not so much nowadays though, most of the time I use IDEA with Typescript and the like. And yet, I still feel like I lost something moving away from sublime. I've reinstalled and am trying it again lately.
Sublime Text has had some IDE-like abilities even before the LSP plugin, because its own filesystem code indexer reuses the syntax-highlighting language grammars to power best-effort goto-definition / goto-references functionality. How well that actually works varies by language.
For Chromium, IntelliSense, at least in the form of the VSCode plugin, works pretty much out of the box with recent releases of VSCode. It is not as precise as the clang LSP, but the latter can easily consume over 10GB for Chromium.
>Because of this huge codebase size, I wasn't able to get VS Code's C++ extension to work very well with the project. Features like go-to definition (which I usually rely on heavily when navigating codebases) and find references didn't work well or at all, and one of my CPU cores would stay stuck at 100% permanently while the project was open.
The Chromium Code Search [1] tool is very helpful with that, and I believe there are some extensions that integrate with it.
It's also possible to get go-to-definition etc working in VSCode locally. You need to switch from Microsoft's C++ extension to the clangd extension. Clangd scales better and is more accurate for projects using clang like Chromium. Instructions here: https://chromium.googlesource.com/chromium/src.git/+/HEAD/do...
The Chromium code search site is still very useful too.
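The gist of the linked instructions is generating a compile_commands.json for clangd from an existing build directory; roughly (check the doc for the current flags):

    # newer GN can export the compilation database directly
    gn gen out/Default --export-compile-commands
    # or use the helper script referenced in the Chromium docs
    tools/clang/scripts/generate_compdb.py -p out/Default > compile_commands.json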
I've been working on an extension https://github.com/phil294/search-plus-plus-vscode-extension for instant search results in gigantic repos like this one because it's a recurring pattern that bothers me. And eventually I'd like it to use its index to provide full go-to, autocomplete etc. on a pure plain text basis, because why not? I don't get the obsession with full-fledged language integration when plain text-based search results can get you all the way 9 out of 10 times, whereas a typical language plugin will constantly suffer from brokenness, performance problems and general annoyance, unless maybe you're working in pure JS/TS. And while LSP is great, you still have to fight this battle separately for every language you use. And regular "search" features are dreadful too.
It's one of those things that JetBrains products are vastly superior at. It's fast, always works, falls back to text matching, and also natively allows multiple languages per source file.
Sounds really interesting, and useful for most developers. I think the issue is probably more what people have read about edge cases, than what it's actually like to use an extension like you are building. People tend to worry that they will hit the edge case and not realize it, so completely avoid anything that might give them faster results in the name of 100% accuracy that they will probably never hit. In a typed language, I would think that it would be extremely rare to search and make the type of changes that would be missed in that edge case, and the code still compiles without issue. Maybe I just haven't worked on large enough codebases, or ones that are full of "magic" where such a thing would create such an issue.
Regardless, I think it's impressive that you've taken on this task and are sharing it with the VSCode community, and appreciate you sharing.
For Visual Studio on Windows, Google provides a search extension that indexes Chromium locally and gives instant results for search. I was always puzzled why such functionality is not available in most IDEs by default.
BTW there also exists the same code search tool for Android (AOSP): https://cs.android.com
I use it all the time as an Android app developer. Its introduction was a big deal, because before that, it was common among Android developers to pull AOSP sources (all the many gigabytes of them) to one's local machine and just grep around.
It is not as good as the search built into Azure DevOps [1], however. You don't know power search until you've used a search tool built on the backs of the devs of Windows. Being able to say "Uhhh, I know it was a macro called DeBeanIt2k near a comment with 'HACK' in it", have it turned into "comment:HACK macro:DeBeanIt2k", and get answers back is super nice. There's also an API for it.
First, thank you for sharing this helpful link, but LOL at needing to use a third party server to search plain text data that could fit in RAM (at least on this developer's machine). JavaScript- and JSON-based developer tooling is a terrible idea.
I'm sure it can do a plaintext search just fine. What the author is talking about is language-aware features like "go to definition". Holding all of a whole web browser's C++ parsing tree in memory is a lot bigger ask than just its plain text.
You're assuming that the only way to a definition of an identifier is 1) parse the entire source tree 2) keep the entire source tree in memory 3) use that in-memory source tree to go to definition. If you accept those constraints and then implement in a slow language, then yes, it won't work.
You’re rather underestimating the vast expanse that is the Chromium codebase, I think. That said, distributing tags files was a common thing once, and a dedicated symbol package you could just download and feed into your language server (instead of being constantly tethered to a symbol server) could make for a nice affordance today.
So this actually exists, as it turns out. You can run clangd-indexer to produce a static index and then load it with clangd -index-file. The caveat is that
> [r]unning clangd-indexer is expensive and produced index is not incremental.
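For reference, the (hedged) shape of that workflow from the clangd docs, with a made-up index filename:

    # build the static index once (slow, not incremental)
    clangd-indexer --executor=all-TUs compile_commands.json > chromium.idx
    # then tell clangd to load it
    clangd -index-file=chromium.idx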
Chromium's codebase isn't so bad for a first timer. Years ago our product had a bug on Windows where if you paste an image from the clipboard, the image had garbage in it (something to do with alpha channels). I realized Chrome has no such bug so they probably had a workaround. It took me like 30 minutes of lurking around in the codebase for the first time to find their workaround and apply it to our code.
Why build it if you are just reading? I find https://source.chromium.org/chromium wonderful. With things like go-to-definition and find-all-overridden-functions working wonderfully well.
I find this to be ideal when working with a large codebase. I don't even need an editor with fancy intelligence features and LSP integration; a bare bones vim or emacs paired with a website with all the intelligence already there.
Is this a serious question? Assuming it is: to insert printf statements, or attach a debugger and step through the program and take backtraces, to supplement or confirm the information you gather from reading the code.
Yes, it's a serious question. And no, "reading" doesn't involve inserting printf statements or taking backtraces. These are different activities. That's called debugging, not reading.
The OP's scenario is being curious how Chromium does something: so it suffices to find the relevant snippet and then copy it elsewhere. The Chromium code is assumed to be already working and does not need debugging.
>The OP's scenario is being curious how Chromium does something: so it suffices to find the relevant snippet and then copy it elsewhere. The Chromium code is assumed to be already working and does not need debugging.
To clear myself of any wrongdoing in regards to copyright :) They had a very nice comment explaining how the workaround works, and we used a different image manipulation library, so I couldn't, and didn't, copy it as is.
So not only was it easy to find the workaround in their huge codebase, they also explained it well enough that I could replicate it in our codebase from scratch.
Sure. Doesn't that depend on the complexity of the code you are looking at though? In many cases you can go through code, find the bit you are looking for and have pretty good idea of what it does. Which sometimes is all you need.
One thing Chromium does really well is hooking up cross references in the code search tool (source.chromium.org). This makes it easy to browse, see where things are called from, subclassed, etc. Github feels far behind on this.
I’m pretty sure that code search webpage is the external-facing version of Google’s internal code search, which they use to index their huge internal monorepo, so it makes sense that it works well.
Last year I worked with JS, and Chrome gave me a weird exception (I think it was something with websockets and SSL? I don't remember). I googled the exception, found it in the Chromium source code, quickly found out when exactly that exception was triggered, and could then easily fix my JS.
I too was pleasantly surprised how readable it was.
Not that I really see a way around it, given the size and feature set of Chrome, but those build requirements are just crazy. It kinda throws the open source and "everyone can contribute" model out the window; if you can't afford a pretty insane workstation, you're going to have a bad time.
I doubt that Firefox is better. I seem to remember building Firefox on a VIA C3 processor years back taking around half a day of compiling, but that was also an extremely poor choice of CPU for the task.
Depends on how much you really have hyped yourself on the task :-P.
Back in early 2000s a friend of mine was mentioning on IRC how middle clicking on a scrollbar in Mozilla under X11 didn't jump to the clicked point like in other X11 GUI programs. I was full into the Mozilla hype back then (the project was basically at its apex of coolness :-P) and wanted to get people into it, so i thought "this is opensource, right? I can do it myself and convince people how great Mozilla is".
Problem being, i had a Pentium MMX @ 200MHz with 128MB of RAM. It took at least six hours to do a build (most likely more: i fell asleep at some point during the night, woke up ~5-6h later and it was still compiling, then i left to go out with a friend and it was done when i came back some time later). Even if i didn't make a change and tried to recompile, it took half an hour.
Fortunately i had already done some GUI programs by that point (and even tried to make my own GUI systems and toolkits), so i had a rough idea where to look and it didn't take too long to figure out the CPP file where scrollbar behavior was implemented (though it did take some extra hours).
I did submit it and remember being impressed by the process of having my patch "reviewed" and then "super reviewed", thinking that it now made sense how Mozilla is of high quality (remember, i was a teenager big into the hype, everything was filtered through the most positive lens). FWIW i was asked to make a couple of minor changes but it was merged in.
I don't think i'd have as much patience these days. The last time i tried to build some relatively complex piece of software so i could contribute to it was Krita, but that took only about an hour (including getting everything needed to build) - and it didn't work at the end (it built but crashed at startup), so i decided to try some other time, as i had lost interest by that point in debugging why it didn't work :-P.
Open source doesn't imply "everyone can contribute" at all. The most famous example is perhaps SQLite, which is fully open source (even in the public domain) but contributions are not welcome.
Firefox builds are seemingly more modular. I was hacking on the Firefox devtools a while back, which from source involved downloading a prebuilt main browser binary and building just devtools from your source. This made it significantly faster, since you don't have to build the rest of Firefox from source. Of course, this all depends on which part of the source code you're changing.
Firefox isn’t very modular. There’s just a C++ part and a JS part and they can be built independently. My understanding is that new developers are encouraged to focus on the JS. The devtools team is entirely JS, with some fancy modern affordances (TypeScript type definitions!) that teams working on the older parts of the JS codebase don’t have.
The build process for the rest of Firefox could be as modular as devtools. When I was involved with Mozilla (and for a while after), it was a frequent criticism of mine that no one was prioritizing making the contribution process as easy as possible for people who didn't have an email address ending in @mozilla.com.
Zotero has (had?) a really cool build system. Zotero is written on an Electron-like architecture, except it uses a Gecko-based runtime instead of Blink. (Firefox is the same way and has always been this way, and this predates not just Electron, but Chrome/Chromium, too.) Similar to the way that Electron apps' build scripts generally work by downloading a prebuilt Electron (rather than requiring developers to build it from source), Zotero works by downloading a release build of Firefox, unarchiving everything, sweeping away all the XHTML/XUL/CSS/JS that comprises the Firefox UI and application logic, and then swapping in the components that make up Zotero. (In other words, just how it should work for 90+% of contributors who want to submit patches against Firefox nightlies, too.)
At which point was Chromium even attempting a "everyone can contribute" model of opensource? Like most corporate OSS projects, community contributions aren't something they rely on.
16 core CPU, 64GB RAM, and 100GB of storage is definitely not a budget build, but I don't really see how that would be a "pretty insane workstation", either. That's firmly in consumer space these days and has been for a few years.
But that's also not a requirement, either, that's just what it took for a 40 minute clean build time.
I'd argue that it's not the 'default' workstation most developers who do not have specific requirements will be running, especially on the RAM side.
My personal machine is very mid-spec'ed because my personal projects are all small and don't really require much.
Even my work machine tops out at 32GB RAM, which has always been plenty, even for our comparatively large codebases (although more is always welcome :)).
RAM is cheap and easily upgraded after the fact, though. Going from 32gb to 64gb is $100
I certainly wouldn't spend that for a one-off patch to an open source project, but if you're doing this regularly it's not exactly a steep upgrade either
> It kinda throws the open source and "everyone can contribute" model out the window; if you can't afford a pretty insane workstation, you're going to have a bad time.
That, and it also means you are going to spend a lot of time on it before you can even attempt to do anything.
Overall, there can be a pretty substantial amount of effort involved before you are even ready to make a PR of any kind. Then it remains to be seen if it is well received by the people who can approve it.
You mention Firefox, my dealings with various contributors and people at Mozilla over the years would make me very hesitant to even consider diving into the deep end. To be fair, I have had good interactions with various people as well. But a lot of communication also has been just outright difficult.
All of this also throws out the "if you don't like it, you can just fork it" mindset.
K. To start with, it's been a common phrase in open source since before "open source" was invented and fairly self-explanatory; it means what it means on its face. Even if that weren't the case and someone were just encountering it for the first time in 2024, a 2-second attempt to get acquainted will not leave anyone wondering.
There are 200 results for "patches welcome" on HN:
The top results on Google for the phrase "patches welcome", most being posts written around the 4-, 5-, 6-year mark after the post I linked to, are all based on the same premise: "patches welcome" is (a) common, and (b) not what anyone wants to hear.
But I'm also missing some context here; my "Huh?" was rather more directed at the part of the remark that says "[...] for me to agree or disagree with it".
Firefox uses unified builds, where a bunch of .cpp files are globbed together and compiled at once. That helps a lot, but a build still takes a bit of time unless you are on an absurdly fast machine. Chrome used to also support this, called "jumbo builds", but they didn't want to deal with the maintenance overhead. Presumably all of the Chrome developers employed by Google are using some kind of massive distributed build infrastructure so there's little impact of slower builds on individual developer productivity, so the use case of building on a single computer is not as prioritized.
> I doubt that Firefox is better. I seem to remember building Firefox on a VIA C3 processor years back taking around half a day of compiling, but that was also an extremely poor choice of CPU for the task.
Around 6-7 years back, I was able to make a change and build Firefox from source on a mid-range gaming laptop without much fiddling. I think the build took maybe around an hour or more, and it was not long enough to stand out.
I haven't tried building chromium to compare but from my past experience, Firefox's build was not too challenging for first time contributors.
For a long time I would build Firefox from source every morning, and I don't have the build logs anymore, but I would guess it was in the range of 60 to 75 minutes. Comparing against building anything on a VIA C3 is not serious, nor is using some 5400rpm disk for the same task.
I can make a clean build of Firefox in less than 15 minutes on my MacBook Air M2, C++, Rust, and all. If you’re working on a frontend feature that only needs to modify Firefox’s frontend JS code, you can use “artifact builds” (prebuilt object files) so you don’t need to recompile the native code.
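For reference, artifact builds are enabled with a single mozconfig line (per the Firefox build docs; a sketch):

    # in your mozconfig
    ac_add_options --enable-artifact-builds

After that, ./mach build rebuilds only the frontend bits and downloads the compiled pieces.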
Wow, your rust experience is way different from mine (not just FF, I mean invocations of rustc are where time goes to die). I also readily admit that I struggled constantly with trying to get sccache to behave sanely, so I'm open to that being part of the difference, too
My solution for a few "bugs considered features" in huge open-source software like Firefox was to just patch the binary. Much easier than figuring out how to build it, and with only the change I wanted.
Yes, several decades of experience. Mainly Windbg now but I used to use SoftICE a lot. Look for error (or otherwise) messages/codes and use breakpoints to guide your exploration. Of course, the source code is also available, but sometimes it's even more difficult to follow than the binary if what you're looking for is obfuscated in several layers of compiled-out indirections.
The onerous requirements are just for a first build, though?
I used to leave things like that running overnight in the old days.
All subsequent builds are just going to be compiling the one or two files you changed and then linking in the other 99.9%, which isn't going to take very long.
45 minute build for 32 million lines of code seems pretty reasonable, to me. What are some projects of equal complexity with lower build time in other languages?
There's this one Chrome (?) bug I've been experiencing for a long time on Linux.
Every once in a while, the browser detects I'm typing "±±±±±±+..." and writes that to any selected text input. It stops when I type anything, but sometimes comes back rather quickly.
I thought it was a keyboard issue, but it doesn't affect Firefox or other applications, only Chrome based ones like Spotify and VSCode.
I've found no other mention of this on the internet, and I'd love to hunt this down and fix it but have no clue where to start. I guess the first step would be to consistently reproduce the bug...
If you're interested, I screen recorded it happening once. Mind there's music playing: https://youtu.be/S7OGTULLsqg.
> and I'd love to hunt this down and fix it but have no clue where to start. I guess the first step would be to consistently reproduce the bug...
I am not familiar with Chromium at all, and I also don't run Linux on the desktop, as I'm guessing from your video you do (?), so take this with a grain of salt...
I would start looking at the focus and key event handlers. e.g. maybe log the contents of pressed_keys and/or step thru the code from the beginning of the focus handler. It looks like this might be the place:
Even if you can't repro it, you may be able to figure out the issue by just reading thru that code with some theories in mind. e.g. Since pressing another key seems to fix it, maybe look at what the code is doing there... my guess is the release event fixes whatever corrupted state it is in upon focus.
I vaguely recall experiencing this on my gaming PC (Windows), although with a different character (W?).
Seems like a race condition, and exacerbated by whatever kb you are using. Are you using a Corsair or Razer kb by any chance? And is it wireless or wired?
I know you suspected it may be a kb issue but have you tried swapping kb to make sure? I junked my old corsair kb because I thought it was just stuck keys after too many “kb smash” events. Don’t recall it happening again after swapping to diff manufacturer (Wooten, wired).
> Every once in a while, the browser detects I'm typing "±±±±±±+..."
Interesting bug! Not exactly following on what triggers the bug. Do you have a ± key on your keyboard (some international one)? Or does it occur after e.g. pressing "+", then "-"? Do you use compose keys? Does it do this randomly?
You could probably set up some keylogging to see if anything special is happening before it, it'd also let you know for sure if it's a keyboard issue or not.
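If you're on X11, one low-effort check of the keyboard side (assuming xev is installed) is:

    # watch raw key events; look for 'plusminus' keysyms arriving
    # without a corresponding physical key press
    xev -event keyboard

That would at least tell you whether the ± events are coming from the input stack or being synthesized inside Chromium.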
This post is really great! My biggest piece of advice to someone attempting to do the same is to browse the code via the online code browser, which has working cross referencing. (The codebase is so large it is not the sort of index you can reliably build locally...)
Yep. Debuggers are more powerful, they can do everything `printf` debugging can do + more, but take more work to set up. For interpreted languages, they often take more work to use than just adding a print statement & rerunning, for compiled languages the reverse is more likely.
Conditional breakpoints that run a script and continue can be used equivalently to printf debugging, just set it to print when the selected line is hit. You can do this without restarting the application, even for multithreaded code.
Also watchpoints, for the equivalent but at a memory location instead of a code line.
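For instance, in gdb a printing breakpoint plus a watchpoint might look like this (the file, line, and variable names are made up):

    (gdb) break url_request.cc:1234 if bytes_read > 0
    (gdb) commands
    > silent
    > printf "bytes_read=%d\n", bytes_read
    > continue
    > end
    (gdb) watch -location request->status_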
This is great! You should consider fixing the Chromium bugs you run into! Chrome releases relatively quickly, so in 4-6 weeks you can have a bug fixed forever for all of your users on Chrome.
I used to work on Chrome and WebKit and I still have committer status. I've often wondered if there are people out there who would be willing to pay a contributor to get their bug fixed, but don't know who to contact. Feel free to email me :)
There is an annoying bug in Chrome DevTools that people who want to impede debugging of their JS files exploit. I think it's probably related to making the regex engine use excess memory and crashing the tab.
Anyway, just mentioning it to see if someone here knows whether it's a well-known and difficult-to-fix bug, or if it's just too obscure to have had any fixes.
I haven't seen that one. I'd start by searching crbug.com. Then, the first step to a fix is always to find a reproducible example of the bug. In this case that would probably be finding an example in the wild and trying to save it locally in a way that still reproduces the bug. If you can get those files attached to the bug report there's a good chance it can be fixed. When I was fixing Chromium bugs, repro cases were worth their weight in gold.
> Although the worklet was running on a worker thread, it didn't have a WorkerGlobalScope - it had a WorkletGlobalScope.
It took me a while to see these were different, I thought it was a wrong copy-paste.
Naming things is hard, but this is a bad convention. Always put the changing bits at the beginning preferably, or the end otherwise, but never in the middle, especially when it's subtle in a rather verbose name.
The changing bit is at the beginning, unless I misunderstand you.
Worker and Worklet are primitives, you can't really split them up. You can't have a LetWorkGlobalScope and an ErWorkGlobalScope, so WorkerGlobalScope and WorkletGlobalScope is the best you can do.
That said, I usually prefer the changing bit at the end. So something like GlobalScopeForWorker, GlobalScopeForWorklet. But then that's clunky, so we're back at WorkerGlobalScope and WorkletGlobalScope again.
> That said, I usually prefer the changing bit at the end. So something like GlobalScopeForWorker, GlobalScopeForWorklet. But then that's clunky, so we're back at WorkerGlobalScope and WorkletGlobalScope again.
I wouldn't necessarily call "GlobalScopeForWorker" more clunky than "WorkerGlobalScope", just a bit longer, but also more descriptive.
Using the language's namespacing features might also make it more obvious, e.g. "Worker::GlobalScope" and "Worklet::GlobalScope", or the inverted version "GlobalScope::{Worker,Worklet}".
Looking at it from a functional programming perspective, I also like approaches of the form "GlobalScopeFor({Worker,Worklet})", i.e. a function returning the respective thing.
Naming things is hard, but the possibilities are endless...
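A tiny C++ illustration of the namespaced variant (hypothetical names, not Blink's actual layout):

    // Disambiguate via namespaces instead of name prefixes.
    namespace worker  { class GlobalScope { /* ... */ }; }
    namespace worklet { class GlobalScope { /* ... */ }; }

    worker::GlobalScope ws;   // reads as "the worker's GlobalScope"
    worklet::GlobalScope wl;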
"Of" is underused in programming. It’s short, can appear standalone, as prefix, infix, and suffix, and it’s generic enough that it works in most contexts for types, type functions, functions, constructors, mappings, etc.
> That said, I usually prefer the changing bit at the end. So something like GlobalScopeForWorker, GlobalScopeForWorklet. But then that's clunky, so we're back at WorkerGlobalScope and WorkletGlobalScope again.
I've done this and it's always ended up biting me in the ass when I want to auto-complete and have 9 million "GlobalScope..." to choose from. Which is where "Work..." becomes handy.
On the other hand, it can be nice for other types of autocomplete usage. Say I know I'm looking for a GlobalScope, but I just don't know which one. Type "GlobalScope" and you get a nice list of everything prefixed with it. It's nice for things like Error enums, or similar usages. I know I'm looking for an Error, but I'm not sure of all the ones available to me.
Depending on your IDE and/or plugin used for autocomplete, you can usually type "worklet" and the symbols containing that substring will still be included in that list, even if it's at the end.
It seems to me that the problem was naming something Worklet when another thing called Worker already exists. I personally strive for unique class names when possible.
But that decision was made long before OP started contributing to this project.
That seems reasonable. I understand Worker and Worklet are established concepts in the domain, though, so better to use those names than invent new terminology.
When I joined my current team, I was surprised when I realized all of my co-workers were using the end of strings to verify their identity, when I was looking at the beginning of them. It was confusing: I'd be reading off random characters, and they'd be reading them aloud at the same time, and we'd all be saying different things.
I've run into this with GUID/UUID strings. Some apps truncate them while appending an ellipsis, so that the last characters are not even visible by default. It's a mixed bag, to be sure.
It doesn't matter when/where the IDs are or how they are to be used. Ever been on a call trying to get someone to look at a specific commit? There are plenty of cases where using some sort of ID is necessary. I think you've chosen the wrong part of the comment to focus on, rather than the actual point.
This is no name-confusion bug; the bug is that whoever wrote the code simply didn’t consider enabling the functionality for worklets (maybe worklets weren’t even a thing when that particular piece of code came into existence), and the fix is to change WorkerGlobalScope to WorkerOrWorkletGlobalScope.
I don’t write code so that drive-by internet commenters looking at a random snippet always find it unmistakably clear in ten seconds, and I don’t expect anyone else to adhere to that standard.
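A sketch of that kind of fix in simplified Blink style (DynamicTo<> is Blink's checked-cast helper; the context variable and logging call are made up):

    // Before: the null-check and the type-check are fused into one
    // condition, so worklets are silently skipped.
    if (auto* scope = DynamicTo<WorkerGlobalScope>(context)) {
      LogRequest(scope);  // hypothetical
    }

    // After: widen the type so worklets are covered too.
    if (auto* scope = DynamicTo<WorkerOrWorkletGlobalScope>(context)) {
      LogRequest(scope);
    }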
I noticed in Chrome-based browsers that when I copied an image to the clipboard, the whole UI would freeze. For large images it would become unresponsive for 5-10 seconds.
I dug into the source, and it turns out they PNG-encode it, I believe at the highest compression. (The comments indicate this is something to do with how old versions of MS Word handle transparency..?)
My "workaround" was to change the compression level to 0. Not ideal but I only needed to change 1 byte in the exe, and I was glad I didn't need to rebuild all of Chrome!
Firefox has 0 lag and has the benefit that you can paste directly into file explorer, because they put the original image file into clipboard instead of image data.
As someone who had the misfortune of working on clipboard support in Chrome, I thought "wow, there's no way we do that in places other than Linux".
... turns out we do and I helped review that patch. Doh!
For how widely used the clipboard is, the actual implementation (both in the OS and in the browser) is surprisingly unloved and unmaintained.
FWIW, Chrome intentionally doesn't plumb through the original image bytes. I wasn't around when it was initially implemented, but even for many years afterwards, there were no (Windows) platform conventions for passing around non-bitmap images on the (Windows) clipboard. And another (probably unintentional) benefit was "the encoded image bytes are from an untrustworthy source and could trigger bugs in buggy image decoders", while bitmaps are (relatively) safe in comparison.
Of course, this is a rather arbitrary line, because it's easy to get the original image bytes out of the sandboxed renderer, e.g. by dragging out the image or by saving the image.
At this point, someone could probably try plumbing through the original bytes or even implementing delayed rendering... but it's quite expensive in terms of time, especially to test all the random things that might break. :(
> one thing that I was completely unsure about was how to add tests for this fix.
Similar to using blame on the file to find maintainers, the diffs of those commits can direct you to their tests. The full patches that those commits belong to can also be useful for finding undocumented habits that have led to approval.
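Concretely, that might look like this (the path is illustrative):

    # recent commits touching the file you are fixing
    git log --oneline -10 -- third_party/blink/renderer/core/workers/worklet.cc
    # for a given commit, --stat lists the *_test.cc / web_tests it touched
    git show <commit> --stat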
>I started my debugging by finding where the network request for the worklet script was initiated and tracing it down as far as necessary until the request was actually made - or retrieved from the cache. The call tree looked something like this:
It completely escapes me how you can find that in such a codebase.
regarding the comments about the "build time" of firefox/chromium: a couple of weeks ago i installed Gentoo i686 on an old netbook, including a DE/WM and Firefox. I also told it to completely recompile everything that comes "preinstalled" in the stage3 tarball (that's prior to installing the WM and ff).
llvm took forever to compile, and then for some reason i needed to have two versions of llvm - i don't recall why offhand. So i have a devuan VM on my desktop here, i set up a gentoo chroot, updated it and installed distcc, installed distcc on the netbook - just like i've always done in these circumstances. Believe me when i say: it's still like magic, even if "distcc-pump" no longer works.
total time to get the netbook to a stable, running as i want it, useful machine - ~1 week. Results? It's actually useable - more usable than it was with windows 7 on it when it was new, and much more usable than whatever ubuntu i had installed on there 7 years ago or whatever.
I did, however, make a mistake: i didn't need to use i686 (32-bit) - the Atom is a dual-core, and Ark shows the CPU is 64-bit. So i'll probably do all this again (after a reboot onto Gentoo boot media and 'dd'ing /dev/sda2 to a network location, just in case). I may even see if it's possible to resurrect pump, because that will speed things up even more. If pump is working, the only thing that sucks about "emerge" on Gentoo on a slow machine is waiting for the spinner at the beginning and the "installing <pkg> ..." parts of the flow, due to memory and CPU constraints. I'm using an SSD in there, so at least i've got that going.
Excellent. One small question, if anyone can answer: when an outside contributor submits a fix like these, do open source software maintainers ask for tests to be written as well? The fix itself is worth acceptance. What if the contributor doesn't have any more time/interest beyond submitting the fix?
It depends on the project, but most large-scale projects require test(s) for the fix, and will block submission unless they're provided.
These types of projects undergo constant code change/refactoring/re-architecture, etc. If you don't add a test for your specific issue, there is a non-trivial chance that it'd be broken again in some future release.
It's somewhat worse if an issue gets fixed and broken again vs. it being broken the whole time. E.g. with the former, users have likely started to rely on the fixed behaviour, and will then experience disruption when it breaks again.
Yes, if the standard practice on the codebase is to include tests, the maintainers will ask for tests.
If the author doesn't want to do it, the PR will likely remain abandoned unless an external contributor comes along to finish it or a maintainer takes over.
I'm coming out of reading this a bit dismayed, as I really thought that the `if let` pattern (to use the Swift convention) would finally be a good and reliable solution for these silent errors.
And at the same time, reading the code in question and putting myself in the position of the person writing it, I would totally have thought that I was handling the "is there a global scope" case, totally forgetting that the same check is also checking the "is the global scope a `WorkerGlobalScope`" condition - mixing both checks into a single return value.
And here we are with the code happily chugging along and (for all intents and purposes) causing data corruption (by causing network requests to not be logged and not respect policy).
And here I was thinking that `if let` is fixing exactly this problem while also providing the best ergonomics.
So here we are back to the drawing board, ready for the next pattern which will compromise on ergonomics in some as-yet unknown way in the future.
In 2013 I also worked, as a first-time contributor, on a bug that affected the Linux version of Firefox.
Firefox took 2 hours to compile on my Pentium dual-core Intel laptop. It was August, so it was warm inside the house, and the 2 hours of compiling at close to 100% CPU made the laptop halt due to excessive temperature before finishing. I also recall the build failing with an out-of-memory error (RAM exhausted); I had 2GB and had to expand to 4GB, or something like that.
After overcoming the struggles I was finally happy as the fix was committed.
I wonder how many more (quantity or quality) commits OP would have to make before getting interviewed as a chrome dev either by Google or another company that uses Chrome heavily like Samsung.
A fun project would be to implement a cross-platform compatible web browser in Rust. Chromium just has so much baggage, with years of Google junk on top.
Interesting to read all of this. Bugs in more obscure areas being open for years is something I am pretty familiar with, although then on the Firefox side of things.
I personally never have been able to muster up the courage or energy to try and dive into the code base there, though. Part of that is simply because such a huge code base is daunting to delve into. But an even bigger stumbling block was always the prospect of having to deal with the entire process of submitting the fix and getting it approved. Certainly with Mozilla the interactions I have had on Bugzilla with various people there as well as in other places simply made me decide to work around the issues.
I am honestly surprised how relatively smooth the process seems to have been for the author, dealing with Chromium developers.
> Certainly with Mozilla the interactions I have had on Bugzilla with various people there as well as in other places simply made me decide to work around the issues.
Sure. I should point out though that I also had many positive individual experiences with people from Mozilla: interesting conversations and insights into various things. It is just that, overall, I had a few too many interactions which would make me hesitate to invest a lot of time in things like PRs.
What it mostly comes down to is that communication several times seemed to be a one-way street, where I provided information (often explicitly asked for) only to be effectively ghosted.
Not in the sense that I was dealing with busy people where it just took time for them to get back to it. But really getting no response at all. Often when I did follow up (several months later), I would see the Bugzilla ticket change a tag or some other meta attribute, but nothing more.
To be clear, this isn't even unique to Mozilla/Firefox; I had similar experiences on other open source projects, although it really differs per project. It is more that with something as big as a browser, where setting up the development environment can already take up the better part of a day, it becomes an extra barrier to even trying.
My negative experience with Firefox was with this 6-year-old feature request relating to container colors. [0] They have hardcoded some colors & icons (6 or 8, I think?) as the possible options. The problem is: if you have more than half a dozen Gmail accounts that you want to containerize (e.g. for client work), it is really hard to keep them apart at first glance. Compare this to Google Chrome, where you can choose the browser color for each associated Gmail account individually.
I tried to manually extend & build it for myself, but the codebase relating to that was just a mindfuck to work on...
Yeah, pretty much my experience with any large open source project: I find that if it's a solo activity or a small focused group, my bugs might get some attention, but once there are more than 10 or so contributors, I politely move on.
Too many times I've submitted a feature request or reproducible bug report on GitHub that's "ranked by thumbs up" and then had it closed as "won't fix".
On a similar note, i always wanted to contribute to Firefox, but every time i looked at how to compile it i noped the fuck out of it. It's probably doable on Linux, but it sounds like a nightmare on Windows.
Yep. The way you do it on Linux is to grab your distro's package build script and use that. It will specify all of the build- and run-time dependencies (which you use your standard package manager to resolve), and contain whatever commands are required to build it. Usually you just install dependencies and run one command, and you've got a package you can install like any other.
Just install the dependencies listed there, run "makepkg", and boom, Firefox pops out the other end. If you're doing active development, you can probably figure out a quicker change/build/test loop, but that'll get you started.
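On Arch, for instance, the loop is roughly (details per your distro):

    # in a directory containing the distro's PKGBUILD for firefox
    makepkg -s   # -s resolves build dependencies via pacman
    # after editing sources, skip re-extracting with -e
    makepkg -e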
> but it's sounds like a nightmare on windows.
I wouldn't wish the hell of software development on Windows upon my worst enemy :)
The reason I am asking is that I see volunteering time for extremely wealthy big corporations as foolish.
At the very least, developers should get together and lobby that if big corporations use open source software, they should pay royalties to contributors.
That said, if you look at volunteering time, it is much better to do it for charities, which often struggle to find competent IT people, though of course it is not as glamourous.
Consider this: the thing they fixed is something they depend on for their daily programming needs relatively often. By fixing it themselves instead of waiting for who knows how long, they are saving a lot of future time and frustration. They don't have to work around the issue anymore and can simply focus on what they want.
Chromium is open source, in my opinion, because Google can brag about open source; it has all the right buzzwords. But it also gives them free R&D and labour.
I think given the size and wealth of Google, this is entirely inappropriate and people shouldn't be contributing to it, because it will only encourage this parasitic and exploitative behaviour.
>I think given the size and wealth of Google, this is entirely inappropriate and people shouldn't be contributing to it, because it will only encourage this parasitic and exploitative behaviour.
So you would rather have Chromium be closed source, or for people to fix bugs but hoard the patches? Do you hate the idea of Google benefiting from your work so much that you're willing to screw over yourself (by having to maintain the bugfixes yourself) and others (because they don't get the bugfixes) in the process? Are you also against contributing improvements to other OSS projects (e.g. the Linux kernel) because corporations might benefit from it and it "gives them free R&D and labour"?
The majority of work on Chromium is done by engineers on Google's payroll.
There certainly are OSS projects out there where the majority of work is done by volunteers and where companies profit from their labor. This isn't really one of them, at least not in the way you are describing it.
Applying one "truth" to the entire world generally means that you're simplifying things to such a degree that they become meaningless or even ridiculous parodies of themselves.
I feel like this might be what you are doing here.
This is fair, and is my general philosophy. But if you're running into the bug first-hand and it's costing you otherwise, it may be the most pragmatic thing to just fix the bug yourself and contribute it back.
One of my Google interviewers leaned back in his Eames lounge chair and stated that he got his start there by contributing optimizations to Chromium. I think it worked out well for him.