I tried many times to switch from Chrome to Firefox and went back every single time. Yesterday, by mistake, I started Firefox with a few hundred tabs - it was using 128GB of memory and simply died. Chrome handles thousands of tabs without an issue. The only lean browser is Safari. Many sites don't work on Firefox or Chrome, but I've never had any issues with Safari.
uBlock Origin and other ad blockers will never perform as well on Chrome as they do on Firefox, since Chrome forced MV3 on users (a spec they pushed to keep their ad empire running), while Firefox has stated it will always keep supporting MV2 as well.
Truth
Firefox is the last major independent browser engine standing. As engineers, it's our responsibility to keep the fox strong and help it reclaim the popularity it once enjoyed. Let's rally behind it!
Not to mention firing a CEO with cancer, throwing a party (Feminist, Decolonial, LGBTQIA+, Climate Justice using AI) in Zambia (no joke) after firing 1/3 of the staff, and so on. Firefox/Mozilla is the best-funded OSS project (apart from Linux), but spends most of the money on non-technical stuff (same with the Linux Foundation).
Mozilla.org/Mozilla is now an ad company, not an OSS project.
According to Mozilla's financial filings, CEO Mitchell Baker's compensation increased from $5,591,406 in 2021 to $6,903,089 in 2022. During that period, Mozilla's revenues – long dominated by payments from Google to make it Firefox's default search – dipped from $527,585,000 to $510,389,000 [...] "Fully 30 percent of all expenditure goes on administration" [1]
In 2018, Baker received a total of $2,458,350 in compensation from Mozilla, a 400% pay rise since 2008. Over the same period, Firefox market share was down 85%.
[..]
In 2020, after returning to the position of CEO, Baker's salary rose to in excess of $3 million. In 2021, her salary rose again to more than $5 million, and again to nearly $7 million in 2022. In August 2020 the Mozilla Corporation laid off approximately 250 employees due to shrinking revenues, after previously laying off roughly 70 in January [2]
I'm of the opinion that there should exist mechanisms for nonprofits to reclaim ill-gotten remuneration from ex-CEOs.
Unfortunately, many ex-Mozilla people thought she deserved the money.
They loved Mitchell Baker's focus on culture: Mozilla's progressive social causes and performances. They loved their culture more than their business and their own browser.
Like other companies in the ZIRP era, the staff wanted a fun tech company culture that made them feel like they were good people. Even great people. They preferred a celebrity CEO with pleasant illusions over a business leader with hard truths.
Please look at their financials: how much money they spend on salaries, remuneration to managers, administration, and finally moonshots.
Besides, how would YOU finance paying around 1,200-1,800 employees (from my initial online search) salaries competitive with Facebook, Google, or Microsoft? Why would someone work at Mozilla for less money when they could work in the same city, at a bigger office, for more cash?
Now, if Mozilla took to employing foreigners, say from Asia, they could reduce their salary spend by 5-10x, which would make them less dependent on external funding, or employ 5-8 times as many people for the same money.
Point is, building a browser takes money, so how would YOU finance the company/nonprofit if you were put in charge?
Edit: also, how much remuneration would you consider commensurate with the level of work you are putting in?
>Point is, building a browser takes money, so how would YOU finance the company/nonprofit if you were put in charge?
My point is that they don't invest it into the browser, but into:
>managers, administration, and finally moonshots.
>how would YOU finance paying around 1,200-1,800 employees
Get rid of most of the non-technical staff and work on the software you get your donations for. Out with the leeches, in with the makers; it's software, not a political campaign.
Google Chrome has/had ~23 paid developers; then we bring back the MDN team, let's say 8 people, plus administrative staff of 5, for a total of 36 people. Let's double that because we have $590 million to spend, and round it up to 80 people, okay?
But hey, look at that ("Firefox Maker Rebrands as 'Global Crew of Activists'"), this is where the money is going:
>Chrome, with an emphasis on making the web great for the next billion users. The team consists of around 40 engineers, in addition to a number of PMs, test engineers, UX designers, researchers, and others. We built a lot of features in Chrome that are used by more than a billion people.)
You've accidentally left off some pretty important context from that quote:
> Chrome Mobile teams in Seattle and Kirkland, spanning four sub-teams in Chrome, with an emphasis on ...
It's not the number of engineers working on Chrome. It's the number of engineers who worked on that guy's Chrome Mobile team in a specific location. (It's not clear whether there were other teams working on Chrome Mobile in other offices.) That's a team making mobile-specific improvements to Chrome, or adapting it to the mobile environment, not a team building a browser from scratch. So it ignores the people working on the layout engine, the rendering engine, the JavaScript engine, security, the desktop UI, codecs, web APIs, developer tools, networking and protocols, extension APIs and store, etc.
There is also absolutely no way Chrome had 23 engineers in 2012, but since you didn't give the source, I have no idea of exactly what tiny subset that number was actually representing.
> Here are some members of the Developer Relations team
It's not the Chrome engineering team. It's some part of just the team doing developer outreach (not necessarily even the entirety of that team).
You keep finding these obviously incorrect references to support your arguments, and presenting them as facts. And by obvious I mean really obvious. There is no way you can read that page and mistakenly think it's the Chrome engineering team. At this point the best case is that you're not actually reading any of these sources, and just randomly pasting them here. The worst case is that you've noticed that your sources are bogus, and just don't care.
> The ~23 was from 2012, sorry about the outdated information
In 2011, when asked "How many engineers work full-time on Chrome[...]?", a member of the Chrome team already said there were "enough to fill many buildings around the world". So even in 2012, ~23 seems way off.
Yes, 40 engineers, 200 PMs, and 300 "privacy/ads" advisors, in typical Google fashion.
Just look at who wrote the second article I linked. I never said Google is slim and fast, but at least they have infinite money.
And look at the second question from your "reddit" post:
>I don't understand. There are enough devs to fill many buildings for Chrome, that work on things exclusive to Chrome, not Chromium? In this post it sounds like Chrome is not much more than Chromium. What gives?
Pretty much this. Firefox is the Twitter of software projects. You can Elon the workforce and end up way better off, releasing features people care about again, if you get a decent Elon knockoff to skillfully fire everybody.
I knew it was going to happen and said it anyway. This place doesn't like truth when that truth is that they, or someone they know, should be unemployed. What's the point of getting upvoted if you don't spend those points getting downvoted where it matters?
Side note: one thing that is good about this place is that the downvoting is never that crazy, maybe -5 worst case most of the time. On Reddit it's far more expensive to say true things that are uncomfortable, and that's a big problem.
Losing the general audience was due to Google using their dominant search position to shove constant ads like "Use a secure, fast browser: switch to Google Chrome!" into user searches.
While browser projects like servo or ladybird are certainly appreciated, I think their state of "something browsable" is not really what most users, even technical ones, would like for their daily browsing.
How do you think Firefox has lost the power users/devs?
Google didn't steal many Firefox users initially. They absorbed more new users in a rapidly growing market. They used to pay sites and software downloads to install Chrome alongside whatever the user actually wanted to install (sometimes without the user's consent).
Later, the network effects started going against Firefox, but most people go by market-share percentage and assume Firefox was losing users long before it actually did.
Not to forget the magnificent growth of Android, with Chrome as the default browser since 2012 and absolutely no incentive to install any other browser.
Anonymous telemetry is more than justified if it improves system stability. I don't think they do anything remotely like the tracking that other browser vendors, like Microsoft or Google, do.
Except when the browser won't start, and then loses your entire previous session, because your internet is out when you start it and so it can't do whatever 'necessary' telemetry and verification it needs.
Yes, just happened to me. Not happy about losing my session. Even less happy about the dependency and what might be being telemetered or 'verified'.
Container support in SST is a great addition! But I'd really like to see support for other providers like Hetzner, or VPS services in general, which often offer a more cost-effective option. [Update: it seems SST supports a lot of providers (incl. Hetzner) with varying feature sets; it's just that most if not all examples in the guide use AWS]
Alternatively, you can offload server management to Coolify Cloud for an extra ~$5/month, so your Hetzner resources are dedicated solely to running your containers.
- Hetzner VPS + Coolify Cloud: ~$10/month
You can scale vertically via Hetzner (rescale) and horizontally via Coolify (add more servers).
A more budget-friendly option like this could be valuable for users running small to medium, or even larger, setups!
> Hetzner will arbitrarily null route your traffic
Never happened to me, been using Hetzner (dedicated though, not VPS) for almost 10 years. What exactly were you hosting before they pulled the plug on you?
I feel it's quite unfair for the title to call out SQLite on checksums when, in reality, as the very closing line of the article states, most databases don't do checksum verification.
The submission doesn't seem unfair. It's simply pointing out something for users of a specific database to be aware of, and I don't really think SQLite is maligned or impugned in any way.
And FWIW, Oracle, SQL Server and DB2 enable page checksum storing/verification by default.
A long time ago there was a famous/infamous ad campaign for some food product like bread or milk, and the ad merely stated the true fact that their milk didn't contain any bleach. Of course, no one's milk (or whatever it was) had any bleach (or whatever it was).
Specially highlighting something true, but out of context and with no equally special justification, is not an innocent act; it is misleading. And yes, absolutely, very clearly, it causes harm, and does so unfairly.
A not-unfair version of this same article would just talk about databases in general, include SQLite among the others, and not be titled around SQLite alone.
And then there is this:
"Hey there I am v. I work at Turso Database."
Your milk/bread comparison is specious and invalid. I might say "As an apple enthusiast -- the fruit kind -- I want you to know that apple seeds contain cyanide, so don't eat lots of them" without having to disclaim every other fruit in existence and the toxins possibly found in parts of them. If some true-believer apple fan felt victimized, well, that would be bizarre, right?
This is a SQLite guy talking about SQLite to SQLite users. They're describing a feature/possible downside of SQLite that users might want to be aware of. They don't need "balanced" coverage of every other DB because it hurts someone's bizarrely fragile feelings. And as I mentioned elsewhere, almost all "enterprise" databases do do checksums by default, if people really want to lean on this "no one does! Leave SQLite alone" argument.
And Turso is literally a SQLite-based firm. This isn't the aha you think it is.
People are just bored of security sensationalism, I guess. Too many people want to gain visibility just by reporting either something little-known (but still known) or something that needs the stars to align just right to be exploitable at all.
Major corporations are not paying exorbitant licensing fees to have checksumming enabled by default. In fact, for enterprises running things like vSAN, ECC DRAM, etc., database checksumming is probably nothing more than additional overhead.
Database defaults in general are a touchy topic. Whatever set of defaults is chosen will be suboptimal for almost any serious user. A far more serious issue is figuring out the actual behavior of a database in different configurations. For instance, Oracle's SERIALIZABLE transaction isolation level only offers snapshot isolation (it still permits anomalies, like write skew, that true serializability forbids).
You don’t understand the critical problem that checksums solve at the I/O boundary. PCIe has weak error detection and correction. To transfer your data from ECC memory to your favorite super-robust storage technology requires transiting the PCIe bus, where for a brief moment it becomes relatively easy to corrupt data without anyone noticing. This is the problem that can’t be solved any other way and why checksums are primarily done at the I/O boundary in databases. It is an issue seen in real systems.
PCIe v6 is intended to materially improve the integrity of data transfers, but what we are using today is much worse.
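The standard mitigation is end-to-end checksums: compute the checksum while the data is still in (ECC-protected) memory, store it with the page, and verify it when the page comes back across the bus. A minimal sketch of the shape of it (Python, CRC32, made-up page format; real databases use stronger codes and more metadata):

    import zlib

    PAGE_SIZE = 4096  # hypothetical page size

    def write_page(f, offset: int, page: bytes) -> None:
        # Checksum computed before the page leaves memory; a bit flipped
        # in transit over PCIe is caught the next time the page is read.
        assert len(page) == PAGE_SIZE
        crc = zlib.crc32(page).to_bytes(4, "little")
        f.seek(offset)
        f.write(crc + page)

    def read_page(f, offset: int) -> bytes:
        f.seek(offset)
        stored = int.from_bytes(f.read(4), "little")
        page = f.read(PAGE_SIZE)
        if zlib.crc32(page) != stored:
            raise IOError(f"checksum mismatch at offset {offset}")
        return page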
> I feel it's quite unfair for the title to call out SQLite on checksums when, in reality, as the very closing line of the article states, most databases don't do checksum verification.
Hi, author of the post here. I work mostly with SQLite and that's the database I'm most familiar with, hence mentioning it in the title. I have also noted at the bottom:
> Again, this is not a bug. Most databases (except a few) assume that the OS, filesystem, and disk are sound. Whether this matters depends on your application and the guarantees you need.
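To make that concrete, here's a minimal sketch of what the absence of checksums means in practice (the file name is just an example):

    import sqlite3

    # Build a tiny database.
    con = sqlite3.connect("demo.db")
    con.execute("CREATE TABLE t (v TEXT)")
    con.execute("INSERT INTO t VALUES ('hello world')")
    con.commit()
    con.close()

    # Simulate bit rot: flip one byte inside the stored row.
    with open("demo.db", "r+b") as f:
        data = bytearray(f.read())
        i = data.find(b"hello")
        data[i] = ord("j")  # 'hello world' -> 'jello world'
        f.seek(0)
        f.write(data)

    # SQLite returns the corrupted row without complaint; with no page
    # checksums there is nothing to tell it a byte changed underneath it.
    con = sqlite3.connect("demo.db")
    print(con.execute("SELECT v FROM t").fetchone())           # ('jello world',)
    print(con.execute("PRAGMA integrity_check").fetchone())    # ('ok',) - structure is still valid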
Yes, it's a balancing act. I think it's better to have safer defaults here, not prioritizing extra performance. People who care about performance to the degree where checksums would matter need to consult the configuration anyway (in many aspects, not only this one) and can disable this specific thing easily.
Every crumb of information you encounter doesn't have to be subjected to a lens of "but what if the entities involved were sports teams? Would this be 'fair' to 'them'?"
This reminds me of when Apple first introduced, with great fanfare, their pivot to privacy-first and “Differential Privacy.”
However, when privacy experts later examined Apple's implementation, they found that the promised privacy was largely an illusion. The parameters Apple had chosen for their Differential Privacy were so weak that only a few data exchanges would be enough to de-anonymize individual users.
I don't know if they've improved it since, but back then it was less about true privacy and more about the appearance of privacy: an unfortunate example of marketing (core differentiator, premium justification) taking precedence over meaningful protection.
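The failure mode is easy to see with randomized response, the textbook differential-privacy mechanism (a toy sketch with made-up numbers, not Apple's actual mechanism):

    import math, random

    def randomized_response(bit: int, eps: float) -> int:
        # Report the true bit with probability e^eps / (1 + e^eps);
        # smaller eps means more noise and more privacy.
        p_truth = math.exp(eps) / (1 + math.exp(eps))
        return bit if random.random() < p_truth else 1 - bit

    true_bit = 1
    eps = 4.0  # illustrative per-report budget
    reports = [randomized_response(true_bit, eps) for _ in range(20)]

    # With eps = 4 each report is truthful ~98% of the time, so a simple
    # majority vote recovers the user's true value almost surely, and
    # pure-DP budgets compose additively: 20 reports leak up to eps * 20 = 80.
    print(sum(reports) / len(reports))

Privacy only holds if the per-report budget is small and reports aren't collected endlessly; the critique was that Apple's parameters failed on both counts.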
With the computational efficiency of Gaussian splatting, this could be ground-breaking for photorealistic avatars, possibly driven by LLMs and generative audio.
Unfortunately, not yet. Also, the code alone, without the training data and weights, might still require considerable effort. I also wonder how diverse their training data is, i.e. how well the solution will generalize.
I'll note that they had pretty good diversity in the test subjects shown - weight, gender, some racial diversity. I thought it was above average compared to many AI papers that aren't specifically focused on diversity as a training goal or metric. I'm curious to try this. Something tells me this is more likely to get bought and turned into a product or an offering than to be open sourced, though.
As a simple example, if you ask a question and part of the answer is directly quoted from a book from memory, that text is not computed/reasoned by the AI and so doesn't have an "explanation".
But I also suspect that any AGI would necessarily produce answers it can't explain. That's called intuition.
It wouldn't be a reference; "explanation" for an LLM means it tells you which of its neurons were used to create the answer, i.e. what internal computations it did and which parts of the input it read. Their architecture isn't capable of referencing things.
What you'd get is an explanation saying "it quoted this verbatim", or possibly "the top neuron is used to output the word 'State' after the word 'Empire'".
Of course the AI could incorporate web search, but then what if the explanation is just "it did a web search and that was the first result"? It seems pretty difficult to recursively make every external tool also explainable…
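The closest thing that exists today is attribution: scoring how much each input feature moved the output. A toy sketch of the idea (a linear stand-in, nothing LLM-scale):

    import numpy as np

    # Toy stand-in "model": y = w . x. In a real network the per-input
    # sensitivities come from backprop, but the idea is the same:
    # d(output)/d(input_i) * input_i scores how much feature i
    # contributed to this particular answer.
    w = np.array([0.1, -2.0, 0.0, 3.5])
    x = np.array([1.0, 0.5, 2.0, 1.0])

    attribution = w * x  # gradient * input; exact for a linear model
    print(attribution)   # [ 0.1 -1.   0.   3.5] -> the last feature dominates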
Then you should have a stronger notion of "explanation". Why were these specific neurons activated?
Simplest example: OCR. A network identifying digits can often be explained as recognizing lines, curves, numbers of segments, etc. That is an explanation, not "computer says it looks like an 8".
But can humans do that? If you show someone a picture of a cat, can they "explain" why it is a cat and not a dog or a pumpkin?
And is that explanation how they obtained the "cat-ness" of the picture, or do they just see that it is a cat, immediately and obviously, and when you ask them for an explanation they come up with some explaining noises until you are satisfied?
Wild cat, house cat, lynx...? Sure, they can. They will tell you about proportions, the shape of the ears, size compared to other objects in the picture, etc.
For cat vs pumpkin they will think you are making fun of them, but it very much is explainable. Though now I am picturing a puzzle about finding orange cats in a picture of a pumpkin field.
> They will tell you about proportions, the shape of the ears, size compared to other objects in the picture, etc.
But is that how they know the image is a cat, or is that some after-the-fact, tacked-on explaining?
Let me give an example to better explain what I mean. There are these "botanical identification" books. You take a specimen unknown to you, and the book asks questions like "What shape are the leaves?" "Is the stem woody or not?" "How many petals on the flower?" It leads you through a process and at the end ideally gives you the specific Latin name of the species. (Or at least narrows it down.)
Vs the act of looking at a rose and knowing, without having to expend any further energy, that it is a rose. And then, if someone questions you, you can spend some energy counting petals, describing leaf shapes, finding the thorns and pointing them out, etc.
It sounds like most people who want "explainable AI" want the first kind of thing: the blind and amnesiac botanist with the plant-identification book. Vs what humans are actually doing, which is more like a classification model with a tacked-on bullshit generator that reasons about the classification model's outputs without having any real insight into them.
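To spell out the contrast, a toy sketch (names and the rule are made up):

    # The botanist's key: the answer is a traceable sequence of questions.
    def identify_by_key(petal_count: int, woody_stem: bool):
        trace = [f"petals == {petal_count}", f"woody stem == {woody_stem}"]
        if woody_stem and petal_count == 5:
            return "rose (hypothetical rule)", trace
        return "unknown", trace

    # "Just seeing it": an opaque classifier with no inherent trace.
    def identify_by_gut(image) -> str:
        return "rose"  # stand-in for a trained model's forward pass

    species, steps = identify_by_key(petal_count=5, woody_stem=True)
    print(species, steps)  # every step that led to the answer is inspectable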
And it gets worse the deeper you ask them. How do you know that is an ear? How do you know its shape? How do you know the animal is furry?
Shown a picture of a cloud, why it looks like a cat does sometimes need an explanation before others can see the cat, and it's not just "explaining noises".
When people talk about explainability I immediately think of Prolog.
A Prolog query is explainable precisely because, by construction, it itself is the explanation. And you can go step by step and understand how you got a particular result, inspecting each variable binding and predicate call site in the process.
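A toy sketch of that property (Python standing in for a real Prolog engine, one hard-coded rule):

    facts = [("parent", "tom", "bob"), ("parent", "bob", "ann")]

    # grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    def grandparent(x, z):
        for (_, _, y) in [f for f in facts if f[:2] == ("parent", x)]:
            if ("parent", y, z) in facts:
                # The proof IS the explanation: which facts fired, with
                # which bindings, in which order.
                return [("parent", x, y), ("parent", y, z)]
        return None  # backtracked through every binding; no proof exists

    print(grandparent("tom", "ann"))
    # -> [('parent', 'tom', 'bob'), ('parent', 'bob', 'ann')]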
Despite all the billions being thrown at modern ML, no one has managed to create a model that does something like what Prolog does with its simple recursive backtracking.
So the moral of the story is that you can 100% trust the result of a Prolog query, but you can never trust the output of an LLM. Given that, which technology would you rather use to build software on which lives depend?
And which of the two methods is more "artificially intelligent"?
Neural networks can encode any computable function.
KANs have no advantage in terms of computability. Why are they a promising pathway?
Also, the splines in KANs are no more "explainable" than the matrix weights. Sure, we can assign importance to a node, but so what? It has no more meaning than anything else.
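For reference, the structural difference being debated, as a minimal sketch (piecewise-linear interpolation standing in for the B-splines in the KAN paper):

    import numpy as np

    # MLP edge: the contribution of input x along this edge is w * x,
    # one scalar weight, always a line through the origin.
    def mlp_edge(x, w):
        return w * x

    # KAN edge: the contribution is phi(x), a learnable 1-D function;
    # here a piecewise-linear interpolant stands in for B-splines.
    def kan_edge(x, knots, values):
        return np.interp(x, knots, values)

    knots = np.linspace(-1.0, 1.0, 5)
    values = np.array([0.0, 0.8, 0.1, 0.9, 0.2])  # the learnable parameters

    x = 0.3
    print(mlp_edge(x, 0.5))            # 0.15
    print(kan_edge(x, knots, values))  # whatever learned shape phi takes at x

You can plot each edge's phi, which is the claimed interpretability win; whether a plot of a learned squiggle actually "explains" anything is exactly the point in dispute.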