If I have a closed-source app and claim (and can verify!) E2EE, surely I could still read every message from my closed-source app, within the app itself, and you'd never know.
I've never been a mobile app developer, but I've been a desktop and web developer since the 90s, so I don't know exactly what mobile apps can and cannot see. In a desktop app or web app, though, if it's on the screen it's decrypted, and I can put code in to read/steal it.
Surely when I open up a chat in WhatsApp it would be as easy as doing a foreach over every msgElement.text value on screen and copying it to the mothership in plain text. After all, when I'm reading them, they're decrypted.
Or, when I send a message, as soon as I press the "Send" button, send a copy to the mothership.
Perhaps I'm not seeing it right but it must be this simple. Right?
At least with an open source app you can inspect the "Send" code and see if it calls "SendToMothership" when it also calls "SendToRecipient".
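The worry can be made concrete with a minimal sketch (Python for illustration; every function and name here is made up, and this is not any real messaging app's code). The point is just that the legitimate E2EE path and a plaintext leak can live side by side in the same "Send" handler:

```python
# Hypothetical sketch of the concern above. All names are invented;
# encrypt_for is a stand-in for real E2EE (it just reverses the string).

def encrypt_for(recipient, text):
    return text[::-1]

OUTBOX = []        # what the recipient legitimately receives
MOTHERSHIP = []    # plaintext copies quietly siphoned off

def send_to_recipient(ciphertext):
    OUTBOX.append(ciphertext)

def send_to_mothership(plaintext):
    MOTHERSHIP.append(plaintext)

def send_message(recipient, text):
    # The E2EE promise is kept on the wire to the recipient...
    send_to_recipient(encrypt_for(recipient, text))
    # ...but nothing stops the app shipping the plaintext elsewhere too.
    send_to_mothership(text)

send_message("alice", "meet at noon")
```

An audit of open source would catch the second call; in a closed-source binary you're taking it on trust.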
What I got from your comment and from that interview were very different. He starts that bit with “when I text you on WhatsApp”. The “we” refers to Mark and Joe (Alice and Bob), not Meta (Eve).
It's true in a sense: using an iPhone or an Android phone, Apple/Google could be streaming your screen contents constantly, so even E2EE wouldn't help.
I just don't know if that's actually happening, or whether Meta does E2EE and then ships your messages off from the app after they're delivered. I've no reason to believe either is true.
That's really, really expensive imo and you could do it for way less, but given their current revenue stream that's 80 years of development if they took in no more money ever!
Now, I don't know how many it would take to program a browser but it's already written so it's not as hard as doing it from scratch so I reckon 20 good devs would give you something special.
Honestly, if someone said to me "Mick, here's $560M, put a team together and fork Firefox and Thunderbird. Pay yourself 250k and go for it"... I'd barely let them finish the sentence before signing a contract :)
It should be at least 100 devs at $250k each, which is still a severe underestimate. Note that there are many mandatory expenses that roughly match direct compensation, so out of $150K per head you can only pay ~$75K in salary. And you cannot attract senior browser devs at $75K annual compensation. This alone makes $25M a year, and the reality should be closer to $100M, which makes Mozilla's OPEX more plausible.
$250k is a staggering salary... not everyone lives in San Francisco. Or America for that matter.
The guys I work with are on about £95k and the good ones are very good.
I have seen what small teams of good devs can do with the right environment, scope, tools etc. (oh, and being left alone by interfering management!)
I'm talking about a cut-down Firefox, stripped of all the bullshit in the background, just a browser that shows webpages... all the heavy lifting is done: CSS engine, JS engine etc.
> $250k is a staggering salary... not everyone lives in San Francisco. Or America for that matter.
Still, you need to spend at least $250K (of which direct compensation would be close to $150K) to hire a competent browser dev. And I'm not speaking about SF... Well, you can get better cost efficiency outside American metros, but the reality is that experienced browser devs are rare outside those areas.
> I have seen what small teams of good devs can do with the right environment, scope, tools etc.
I'm not objecting that disruption can come from a small, focused team. But here we're talking about dealing with massive complexity, not an emerging market. You cannot "redefine" the problem here; the ecosystem is a mess and we've got to live with it for a good while...
> I'm talking about a cut-down Firefox, stripped of all the bullshit in the background, just a browser that shows webpages... all the heavy lifting is done: CSS engine, JS engine etc.
You'd be surprised how small the core engine parts are relative to the total code base. You may argue that most of the rest isn't necessary, and perhaps half of it is pretty much ad-hoc complexity, but the remainder has its own reason to exist. New browser engine developers typically learn this the hard way and then decide to fall back to Chromium. I've seen this several times.
Firefox has way more than 20 developers. Looking at https://firefox-source-docs.mozilla.org/mots/index.html, if I'm not mistaken in my count, there are currently 147 module owners and peers alone. Some of those might be volunteers, but I think the large majority of them are Mozilla staff. On top of that there are probably a number of further Mozilla staff developers who aren't owners or peers, QA staff, product managers, sysadmins and other support staff…
I know they have way more than that but I'd argue that you don't need that many.
Hypothetically, if I was given the money and asked to build a team to fork Firefox I'd be more focused. Way more!
The current devs work on stuff I'd scrap like Pocket, telemetry, anything with AI, and so on. I bet there is a load of stuff in there that I'd want out! There's probably a bunch of things in Firefox Labs they're working on too.
So, I'd argue that 20 good devs (again, a number I pulled out of the air!) split into, say, 4 smaller teams could achieve a shit load of work under the right circumstances, with the right leadership and so on.
I'm currently a senior architect with over 50 devs below me. Most are mid-level at best (not a slur, just where they are in their career!) but the few good ones are very good. A team of 20 of those could pull it off!
It'd be a tall order building a browser from scratch with 20 devs maybe but it's already built.
There's someone else right now who is going to important organizations they obviously don't understand, making wild claims about 'I could do it for much less', and cutting personnel drastically.
You severely underestimate the engineering cost of a modern web browser. Assuming a fork that adds sufficient value, a team of 20 cannot even keep up with Chromium upstream. Good luck coming up with a new engine compatible with Chrome; MS tried it and finally gave up.
1. Zero telemetry. I mean ZERO: remove all telemetry code from the codebase. They can ask me about features the old-fashioned way - surveys!
2. Focus on privacy and security. Put these to the top of the list.
3. Stop paying your CEO millions! Not worth it imo!
4. Stop with all the other Mozilla shit! I am interested in a browser (and perhaps an email client... I'll let you work on that too!). No more Pocket, VPN and all that other shite.
5. ZERO, I mean ZERO data capture at all! Nothing. Not a single bit except when someone clicks the link to download Firefox, you can capture their userAgent and whatnot. But the browser, Firefox, should not be capturing a single byte of data from me once installed (except perhaps a periodic version check and you can pass in the version like this: https://firefox.com/update?v=123.568).
6. For sync, allow me to sync an encrypted file to Dropbox, OneDrive, a local drive, Whatever.com. That way my passwords, bookmarks etc. can be sync'd from MY location that I control, not yours!
7. Have a "Block all shady JS tactics" button. This would include fingerprinting, location and such. Perhaps you could send bogus, random data when it's asked for instead. That'd be fine too.
I think that's it :)
For a browser that did this, and was properly audited to prevent anything shady from creeping in, I'd pay $30 a year for it.
Edit: To clarify - I wouldn't pay the current Mozilla a single penny!
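Point 5's version check is about the only request that browser would be allowed to make, and it fits in a couple of lines (sketched in Python just to illustrate; the endpoint is the hypothetical one from the list above):

```python
from urllib.parse import urlencode

def update_check_url(version: str) -> str:
    # The only thing that leaves the machine is the version string itself;
    # no IDs, no cookies, no fingerprint.
    return "https://firefox.com/update?" + urlencode({"v": version})

print(update_check_url("123.568"))
# https://firefox.com/update?v=123.568
```

Anything beyond that single query parameter would count as data capture under rule 5.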
I agree with all of this with some minor modifications:
C-level compensation is not a problem unless it’s a problem. Linus Torvalds is compensated handsomely, and it’s okay, because he still delivers.
all the other Mozilla shit also isn’t a problem until it’s a problem. It’s a problem in Mozilla’s case because they neglect the browser.
I’ve switched to Orion by Kagi with their new Linux beta. It’s sadly WebKit, but with the increase in bullshit from Mozilla, the scales have tipped for me.
So no crash logs or similar? Logging is generally seen as a subset of telemetry.
I agree with most of your points but you missed an important one: active lobbying to counteract or reduce Google's dominance on the web. As long as Chrome reigns supreme, Firefox will always be playing catch-up, because Google can break the web for non-Chrome devices by regularly adding APIs that are only in Chrome and forcing devs to use them.
I don't consider things like crash logs and debug stuff to be telemetry. This can easily be dealt with by a popup saying "want to upload the crash log?". It can just be a text file with a bunch of data.
I'm fine with that.
Telemetry to me is knowing what I'm doing, like clicking a button, using a feature etc. They record that shit! Also, sending data about my websites back to the mothership so they can sell ads (or sell to ad companies... same thing).
Fairs. But logging and similar is a type of telemetry. It is worth being clear about this stuff. And surely, you don't want to send logs only when your software crashes (which seems to be your proposal) as it might never crash. Not all software bugs lead to crashes. Doesn't mean they don't need investigation.
True. But there was a time when we managed to program software and not send every keystroke back to the mothership.
It's possible to have a daily/weekly/monthly popup that says "We've detected a few bugs over the last week, can we send the reports to the mothership?"
It's as simple as zipping the text files and sending them to an API endpoint.
I have no issue with this. Hell, you could even make it automatic where I can check a box that says "Automatically send weekly crash reports".
I have a massive issue with the devs thinking that it's ok to send telemetry back about every single thing I do in the software I've installed on MY computer so that they can "improve my experience" or whatever bullshit they use to justify it.
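The "zip the text files and send them to an API endpoint" step really is that simple. A minimal sketch (Python for illustration; the file names are invented, and the actual POST to the reporting endpoint is left out since it would only happen after the user opts in):

```python
import io
import zipfile

def build_crash_report(logs: dict[str, str]) -> bytes:
    # logs maps a log file name to its text contents,
    # e.g. {"crash-0412.txt": "stack trace goes here"}.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, text in logs.items():
            zf.writestr(name, text)
    return buf.getvalue()

# These bytes would be POSTed to the report endpoint only after the user
# clicks "yes" on the weekly popup (or has ticked the auto-send box).
payload = build_crash_report({"crash-0412.txt": "stack trace goes here"})
```

No identifiers, no behavioural data: just the logs the user agreed to share.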
I reckon it depends on what kind of experience you want from your OS: there may be a variant that works for you already.
Take me for example. I want my OS to GTFO of my way. It's a toolbox, like the one a carpenter would carry around. I use the applications on it (the saw, hammer, nails etc.), not the OS itself (the carpenter toolbox).
In my case, the Enterprise LTSC edition (or whatever it's called) is almost exactly what I need, but the consumer variants are constantly trying to "help" me with shit I don't want.
I'm genuinely curious why anyone would download a beta of Windows... can someone enlighten me?
I can think of one scenario only - You have a desktop app or drivers that you want to ensure will work in the upcoming non-beta, so getting a head start would be good I suppose.
But for anyone else, why would you test software for free for a trillion-dollar company that has a bit of a track record now of hostility towards users?
This isn't a troll, I'm a Windows 11 user myself. I am genuinely at a loss.
Edit: I'm being a dumbass (on first coffee as I write this), there's plenty of reasons why people would download a beta... I was thinking from my perspective only!
In addition to testing in advance that your own (hard|firm|soft)ware will work, some people will just want to get new features early and are willing to deal with the potential extra bugginess.
Yeah, this used to be me. I don't do it anymore but I used to spend a great deal of time playing with beta versions of Android, Windows, Firefox, and various iPod and Mac customization applications. I just enjoyed playing with the new features, figuring out how to break them, reporting those issues, and then helping get them fixed (even if I wasn't contributing code at the time). I don't have the time anymore, but it was one of my favorite activities when I was younger.
Same here. I remember playing with Beta versions of Windows Vista on my parents home PC. Ended up wrecking the system because I wasn't technical enough to fix it when it broke... but was still an important milestone in my journey today.
At the time, XP was 6-7 years old, and the screenshots of a shiny new Windows OS made me so itchy to try it out. Vista looked magical compared to XP just before release; it was the dawn of the Frutiger Aero aesthetic too. Very optimistic vibes back then.
Yeah, I was the same way with my iPod. I was contributing to iPod Wizard and one of the QA testers for iPod Linux, iPod Wiki, and Rockbox. I would install unstable firmwares on my iPod a few times a week and spend a few hours testing. I bricked my iPod more times than I could count. I had two iPod hard drives that I would swap between and I had a way of wiping the drive once it was out of the casing. To this day, I have no idea how my iPod 4G survived as long as it did.
Yeah some hobbyists just like to try new features. I remember when Windows Longhorn (pre Vista) was in beta and I would download the ISO and burn a DVD just to try out the latest transparency effects and the sidebar lol.
I'd usually bow out at this point, but if you've ever been on any Windows/Microsoft-related subreddit, there are many, many people who download these betas.
I did it once, many years ago, for Windows 10 and my printer started printing out garbage in multiple pages no matter what you printed. At that time I used the printer quite frequently. I'd had no issues until then.
I remember checking to see if printer issues were a thing in that build and they weren't listed.
Windows beta testing has worked this way for 30 years, if not longer. I was a 'public' Windows 98SE beta tester. I downloaded new 98SE ISOs over 56k once per week and wiped that machine clean once per week.
The only compensation I ever got was from beta testing DirectX 5, I think, when I received an MS Force Feedback Pro joystick for filing the most bugs.
You are actually testing for both. As an insider your feedback is asked for every now and then, and you also report issues you encounter either willingly (the Get Help metro app) or unwillingly (a system service/app stops responding, collects data, and sends it to MS).
"Insiders" get (got?) to use Windows without license/activation as long as they stay on the latest version. That can be seen as "payment". The rest of the users get to offer QA services for free, after the repeated layoffs in the MS's QA departments.
Sometimes there are new features that are interesting or useful. For example, before Windows 11 launched x64 emulation support for ARM CPUs was only available as an insider preview build for Windows 10.
The Insider program was initially created to combat leaks by just making every build public. At first they really listened to feedback when Gabriel Aul led the program - after he left, it became a joke (they tried hard to make "ninja cat" a thing) and the "unloved child". Microsoft doesn't know what to do with it, testers can't test anything and provide feedback since 1) MS ignores it 2) it has a literal roulette built in where features get (de-)activated upon reboot. They reopened the beta program for Windows 10 a year ago (to backport some Windows 11 features like Copilot?) and closed it after a few months.
I've been in the insider beta program since the beginning and I don't really have a good reason other than I live on the edge
To be honest the multiple times a week updates requiring restart (which is the entire point) are the biggest drawback. I rarely encounter bugs in my daily work.
I imagine content creators would be likely to download a Windows beta if it included any new or interesting features so they can try it out and share that information with their followers. This includes both written and video content.
Anecdotal but here's how I described using the likes of Copilot to my sceptical colleagues (they were late to the party!):
It's like having a senior software dev over your shoulder. He knows pretty much everything about coding but he often comes to work drunk...
And that was the best analogy I could come up with: I think it's sped up my work enormously because I limit what it does, rather than let it loose... if that makes sense.
As an example, I was working on a C# project with a repository layer containing a bunch of methods that just retrieved one or two values from the database, e.g. GetUsernameFromUserGuid, GetFirstnameFromUserGuid and so on. They each had a SQL query in them (I don't use ORMs...).
They weren't hard to write but there were quite a few of them.
Copilot learned after the first couple what I was doing, so I only needed to type "GetEmail" and it finished it off (GetEmailAddressFromUserGuid) and did the rest, including the SQL query, in the style I used, etc.
To me, that's where it shines!
Once you figure out where it works best and its limits, it's brilliant imo.
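For anyone who hasn't seen this pattern, the kind of method being described looks roughly like this (sketched in Python with sqlite3 rather than the original C#; the table and column names are invented for the example):

```python
import sqlite3

def get_email_from_user_guid(conn: sqlite3.Connection, user_guid: str):
    # One of many tiny single-value lookups: each repository method is just
    # a parameterised query plus a null check, which is why a completion
    # tool picks up the pattern after the first couple.
    row = conn.execute(
        "SELECT email FROM users WHERE guid = ?", (user_guid,)
    ).fetchone()
    return row[0] if row else None

# Tiny demo with an in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (guid TEXT PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users VALUES ('abc-123', 'mick@example.com')")
```

It's exactly this kind of mechanical, pattern-heavy code where a completion tool earns its keep.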
As a heavy user of vim motions and macros, I'm thinking this is one reason I've not found AI code generation terribly useful.
Yes, it's good at boilerplate. But I've spent a long time getting good at vim macros, and I'm also very good at generating boilerplate with a tiny number of keystrokes, quickly and without leaving my editor.
...
or I could type a paragraph to an LLM and copy paste and then edit the parts it gets wrong? also I have to pay per token to do that?
> or I could type a paragraph to an LLM and copy paste and then edit the parts it gets wrong?
I think there's something you're missing in their description. They're not asking a model to do anything, it's automatically running and then suggests what should come next.
Also in editors like cursor I can ask for a change directly in the editor and be presented with an inline diff to accept/reject.
I don’t know if you’re a vim user, but what makes people like vim is that once you master it, it’s not about typing and deleting characters. It’s about text manipulation, but live, instead of typing an awk or sed script. It’s like driving a car: you don’t think about each step, like watching the speedometer, verifying the exact pressure on the gas and brakes, and the angle of the steering wheel. You just have a global awareness and drive where you want to go.
It’s the same with Vim. I want something done and it is. I can barely remember how I did it, because it does not matter. Something like duplicate the current function, change the name of the parameter and update the query as well as some symbols, can be done quickly as soon as the plan to do so appears. And it’s mostly in automatic mode.
It's a tradition to torture car analogies, so let me do so: this is more like when you start to say goodbye to people at a party, your car identifies the pattern and warms up, opens the door as you walk to it, and then drives you home. If you take the wheel to drive towards a hotel you booked nearby, it spots that and starts doing that for you.
> Something like duplicate the current function, change the name of the parameter and update the query as well as some symbols, can be done quickly as soon as the plan to do so appears. And it’s mostly in automatic mode.
And with these things I might move the cursor to where I want to put the new function, and then it's just immediately suggested for me. One key press and it's done. Then it suggests the other two based on the type definition somewhere else.
Let's continue torturing the poor car. It'd be great if it worked that way, but what I fear is the AI chauffeur deciding on a route that appears to go where I want but is instead a dead end, when you don't know the exact intersection for the correct path. Or it takes a long, sinuous dirt road, ignoring the highway. Or it sends me off a cliff because it assumes it's piloting a plane. To manage those risks, you have to keep your hand on the steering wheel and know the route beforehand. In that case, I would prefer a GPS map and something a bit less fragile.
I don't mind writing code, and if it becomes boilerplatey, it's a good time to rethink those abstractions or write a few functions. And there are snippets for shorter sections.
Depends upon the user. Are you fast (and confident) at evaluating the output and discarding the bad suggestions? This is why I think using AI hurts some developers and helps others, and the usefulness is best for those who already have a good deal of experience.
I don't ever use it for fire and forget, but I've been wondering how well that might work in small side projects where hidden bugs aren't a big concern. Like using a fire and forget to spin up a small javascript game. But never in production code that I might get a 2am Saturday incident call on.
This is something I've been working on for about 6 months now.
I often come up with simple projects, lots of which go nowhere, and I thought it would be good if I could just host stuff quickly with no configuration and after some time in the shower, I came up with Nervespace.
Anyway, the wires are still hanging out and you can't create an account yet but you can try it for free by dropping your zipped .NET app on the homepage.
Let me know if you have any questions or comments.