$30-50/month is a wild price point for this. Who is going to pay that? It feels too expensive both for enterprise (existing remote desktop solutions run about half the cost) and for end-users.
I worked on a similar solution to this and we had a price point of $5/month per user...
EDIT: 16GB of RAM and 16vCPUs. What a weird balancing of resources. Chrome is typically memory bound, not CPU bound. This also explains why it would be so wildly expensive compared to anything else out there.
EDIT2: A lot of the replies I'm getting seem to think my implication here is that no one would pay for this or it would be easier for people to build this themselves. I'm not saying that at all, I'm just critiquing the price point. There's huge market demand for browser isolation, I've worked on products in that field, I just haven't encountered any customers willing to pay $30-50/month for it.
Fwiw, we had 5 customers pay $30/mo in the last 12 hours who have been trying Mighty for a few weeks.
Believe me, I was skeptical too. I remember sitting in a car driving back up from YC with Michael Siebel asking him: "Hey man, do you think I am absolutely nuts thinking people would pay for a browser that's FREE? That's an idiotic idea right?" and, of course, he encouraged me and I am still feeling pretty encouraged based on talking to users and seeing the revenue/usage/praise 18 mo later.
We have a lot of work to do and I am pretty embarrassed by what we've got still, but it felt right to go public with it.
Why might I use this instead of / in addition to Shadow (https://shadow.tech)? I'm a Shadow user, and they seem to give you beefier hardware at half the price, and it's a general purpose OS that will let you run any app (as opposed to "just" a browser).
Most people want an experience where the underlying OS and the application (the browser) interoperate seamlessly, versus having to tame two desktop experiences. The application people most often think is slow is their browser, by a wide margin, so that's where we decided to focus as more native desktop apps become web apps. That focus lets us constrain the problems we have to solve vs boiling the ocean with all of Windows.
Fwiw, we started by streaming Windows and pivoted away.
It's not clear to me that Shadow's business is sustainable. Windows licensing alone for virtualization across end-users, if you buy from a reseller, is $11/mo/user. I only know because we tried and briefly became a reseller. They also seem to use consumer GPUs, which violates NVIDIA's licensing and agreements. Maybe they know something we don't.
> They also seem to use consumer GPUs that violate NVIDIA's licensing and agreements
They claim to; in reality they are sliced Quadro/Tesla cards that get a GTX 1080's worth of performance. I was wondering about the Windows licensing myself; it's not clear how they got around that.
Perhaps they rotate the licenses somehow? That is, not every subscriber is active all the time? Imagine it as the public computers at the library. Maybe?
In any case, even at $20/mo it feels like a strong value. That's ~$1000 every four years - without ever being stuck with an out-of-date machine.
Yep, this is exactly what I was getting at. Shadow is one of many examples of application streaming services which aren't limited to the browser and offer similar hardware (or even flexible hardware) at a lower price point.
+1 I love their service, it's flawless and I often forget I'm using a stream. Then again, I'm on a wired Ethernet connection and a fiber line within 5ms of the datacenter so that probably helps.
What's actually crazy is that I even ditched my Ethernet cable and am running on a Ubiquiti AmpliFi 5GHz router, and there is seemingly no difference, in my location at least.
Isn't Shadow basically going out of business? Pre-orders aren't estimated to be available until October and I thought I read somewhere that they are selling off pieces of the business.
There are 2 competing offers to buy the company, as far as I know. One from OVH founder Octave Klaba, the other from JB Kempf, of VLC fame. So no, I don't think it will go out of business - in the short term at least.
I'm not skeptical at all that people would pay for this. I worked on a cloud browser for seven years; there's a bunch of different market needs for this stuff. But $30-50 feels really high. We got feedback from enterprise customers that they were looking in the $5-15 range per user per month. That said, we pushed the security angle much more than performance, so the dynamics are a bit different.
Congrats on all the work here. Browser streaming isn't easy stuff!
Pricing is a good example of something that most people are intuitively wrong about. What you think people will pay and what people actually will pay are rarely congruent, and most of the time people guess far too low. Literally every bit of advice and writing about pricing I've ever read boils down to "Charge more than what feels right; you'll be surprised at how high you can go before you lose customers."
Also, the highest-paying 50% of customers bring in more than 50% of the revenue, by definition. Often much more.
Apple has been applying this strategy since the 1990s.
Tesla bootstrapped itself off $80k cars, and only now is expanding to the "reasonable" $30k market segment.
You may not need everyone to jump on your service just yet; you can start with those who need it most and have the money. Then you expand, economies of scale kick in, and you can introduce lower and lower price tiers, and people enjoy falling prices and getting a bargain.
Enterprise might say $5-15, but someone who controls their own budget and spends all day in the browser would easily pay more. Freelancers. Bootstrappers. The same way people pay for an IDE.
I agree they would pay more, but I'm still skeptical of $30-50. As I mentioned in a comment below, why limit it to the browser? If you've got all these resources just offer a full VDI which more typically prices in this ballpark.
> If you've got all these resources just offer a full VDI which more typically prices in this ballpark.
Perhaps their solution has something specific to the browser which allows them to do it really fast and cost effective. Eg. Sending just diffs of DOM to the client.
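To make that guess concrete (this is purely speculation - nothing public from Mighty confirms how their protocol works): if you represent the page as a plain tree of { tag, text, children } nodes, you can compute a patch list between frames and ship only what changed, instead of re-sending the whole page or a video stream. A minimal sketch:

```javascript
// Diff two page trees and emit a list of patches. Node shape and patch
// format here are made up for illustration.
function diffTree(prev, next, path = []) {
  if (!prev) return [{ op: "insert", path, node: next }];
  if (!next) return [{ op: "remove", path }];
  if (prev.tag !== next.tag) return [{ op: "replace", path, node: next }];
  const patches = [];
  if (prev.text !== next.text) patches.push({ op: "text", path, text: next.text });
  // Recurse into children, diffing position by position.
  const len = Math.max(prev.children?.length ?? 0, next.children?.length ?? 0);
  for (let i = 0; i < len; i++) {
    patches.push(...diffTree(prev.children?.[i], next.children?.[i], [...path, i]));
  }
  return patches;
}
```

A one-character text edit then produces a single tiny patch rather than a full re-send, which is the kind of bandwidth win a cloud browser would need over raw pixel streaming.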
a useful comparison is other proxy/cloud browsers and especially VDI. $30/user/mo seems normal in enterprise: https://www.nutanix.com/products/frame/pricing , citrix, ... . Frame and some others were a good perf+quality jump, and maybe mighty is/will be the next
positioning for consumer/prosumer is interesting and invites changing the math! opera was notable here as a web accelerator, but also a warning sign for pursuing this as a VC-funded business. the internet is bigger now..
Exactly, I always wonder how much faster Safari is than competing browsers. I have dozens of open tabs and it just works. With other browsers, I cannot even work after a certain number of tabs.
Indeed, I'm really bad at closing tabs. One day I wondered how many Safari tabs I had open on my pre-M1 2018 base model MacBook Air. I went into the tab preview pane and discovered it was around 480 tabs. Mind you this was in between system restarts so some were probably suspended or something, but still. I don't even notice with both IntelliJ and VSCode open as well.
People who don’t close tabs because they unconsciously don’t want to lose their search history.
At the end of the day your search history should be fed into a personal search engine which digests the data and figures out which pages were most useful to you (maybe by helpful browser buttons)…and uploads that into some open database. This can then be the basis for a new type of search engine.
It could be implemented trivially on something like Mighty, since everyones browsers run in the same datacenter.
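A toy version of that ranking idea (entirely hypothetical - the field names and scoring formula here are made up for illustration) could be as simple as:

```javascript
// Score each visited URL by visit count plus dwell time in minutes, then
// keep the top N - a crude stand-in for "figures out which pages were
// most useful to you".
function rankHistory(visits, topN = 3) {
  const scores = new Map();
  for (const { url, dwellMs } of visits) {
    scores.set(url, (scores.get(url) ?? 0) + 1 + dwellMs / 60000);
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])     // highest score first
    .slice(0, topN)
    .map(([url]) => url);
}
```

The real problem, of course, is everything around this function: privacy, spam resistance, and getting anyone to upload their history to an open database.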
As a reference, I've got more than 100 open tabs in this Firefox on Android (it counts them up to 99, then displays ∞) and probably another hundred on my desktop (Ubuntu/Gnome), split across five windows on five different desktops. I can't assess the speed of Safari because it doesn't run on my hardware. It could be faster, but those Firefoxes don't feel slow and don't slow down as the number of tabs increases. Compared to some people here, I don't have a really large number of open tabs though.
I expect to pay for this with high probability. I don't think I'm in the first target batch as I'm giddy in M1 land now, but I do work on so many different machines and love the idea of a persistent environment in the cloud. I also expect to want to do genomics in my browser at some point, and thus envision a need for 100x+ more powerful browser tabs.
What would you be doing that would require 100x+ more powerful tabs? I'd imagine most process-intensive work is already being done server-side or in a desktop app, not the frontend of a browser app.
Someday I want to run a whole world simulation. Think "The Sims" except the whole world. 8 billion agents, say a million bytes per person, so 8PB of RAM. While the sim is running I want to copy and paste the URL in a new tab and change a few params to compare the results. I want things to be instant.
Today I want to visualize 100,000 rows across 1,000 dimensions in 10 different tabs.
Between Today and Someday there are endless things I want to do.
You are not explaining your architecture. The parent (and I, as it happens) assumes that when you paste the URL and change params, that URL is sent from your browser to a server. The server runs the simulation based on the params in the URL it received and returns the results to the browser. With that architecture you would need a lot of resources for the server but not for the browser. What architecture are you thinking of?
Server is just a dumb nginx server sending HTML and Javascript. No dynamic routes. Everything happens clientside (main thread and/or web workers, local storage for persistence).
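That architecture is easy to sketch: if every simulation parameter lives in the URL's query string, pasting the URL into a new tab reproduces (or forks) the run with zero server-side state. The parameter names below are made up for illustration:

```javascript
// Parse simulation parameters out of the URL. The server never sees or
// stores any of this state; it only serves static files.
function paramsFromUrl(url) {
  const q = new URL(url).searchParams;
  return {
    agents: Number(q.get("agents") ?? 1000),  // fall back to defaults
    seed: Number(q.get("seed") ?? 42),
  };
}

// "Copy the URL into a new tab and change a few params": fork a run by
// overriding selected parameters in the query string.
function forkUrl(url, overrides) {
  const u = new URL(url);
  for (const [k, v] of Object.entries(overrides)) u.searchParams.set(k, String(v));
  return u.toString();
}
```

With this scheme, comparing two runs side by side is just two tabs whose URLs differ in one parameter; the heavy lifting all happens in the client's main thread or web workers.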
Still too early to think about how to sell our web-app subscription [we are at a very early stage] as a bundle with MightyApp. But the price would be too high [add $15 for our side]. Waiting for the future when the Linux and (sadly) Windows versions arrive. Imagine the Arduino guys with this. Go buddy!
This is a huge challenge, BTW. I am dealing with an Electron-style web app, and we are thinking of selling our subscription as a bundle with MightyApp in the future, waiting for the other versions: Windows (sadly) and Linux (imagine the Arduino users). Wait and see. All the best to MightyApp!
I agree. People just point to the exceptions and not the vast majority of products and businesses that failed.
That Dropbox comment was a bit off, since having an offsite backup of your most important data and having it available across all your devices is super useful. However, I see where he was coming from. I still have on-site backups. And most of the time that's way cheaper for massive backups.
$30-50 USD for browser inception? If I had my entire environment there I could see the usefulness. But the browser alone?
I see some comments where people are already paying. Who is using this?
Well, that's one of the points. It's easy and trivial to come up with the downsides of something. There are already a bunch of people trying to do that in every thread.
Might as well exercise the less-used part of the brain where you try to imagine the positive aspects of something.
I would care if I knew whether the buzz around these things is organic or manufactured. This thing popped up simultaneously on my Twitter, HN, and elsewhere; clearly some marketing machine is pushing it. Overall though, technology that reduces the options of the user and gatekeeps is always a net negative imho.
Wasn't trying to! Also, you're absolutely right: Mighty might fail as a consumer application. But rigging together a very complicated system of software would definitely fail as a product for these people.
I'm confused by how hostile Zed thinks Drew is being here.
He quotes a post from Drew thanking Brandon for his remarks, and spends the rest of the essay saying the thanking is an uncalled-for "level of retribution", "effective slander case". But both exchanges between Drew and Brandon (the one in 2007 as well as the one in 2018) seem friendly to me.
My impression is that when linking to Brandon's post, people are usually saying "a company can still succeed by offering something that was previously possible, by making it easier to do" and "don't be discouraged by criticism saying it's already possible". They're not saying that Brandon was a bad person or that his feedback wasn't useful or anything.
Zed also makes a big deal about Brandon not being able to delete his post - but I remember dang mentioning that they would delete posts when asked, but everyone so far has agreed to a compromise of removing the username but keeping the post, which does seem like the best solution in a case like this (where the content of the post has historical value but the author might want to disavow it).
Sorry about replying to a week old post but your link and post really made me think:
I think the initial HN comment was justified, albeit a bit nerdy; the marketing was just poor at the time. Not being able to delete a post is sort of a problem with all written media; the internet is not your group of friends at a bar. Being able to distance yourself from something you've previously said might be a solution - an "I stand corrected" button, or perhaps just being able to add a strikethrough to an old post.
You're right to point to this, but I feel the comparison is much more "unfair" in the Dropbox case. FTP+SVN (lol) is not even close to the experience Dropbox gives.
In the case of Mighty the experience is known. It is Chrome, just faster. Sure, someone might prefer to use Mighty, fair enough, but there's no "extra magic"
BTW, Dropbox was huge when it launched and everybody was using it. I don't know anybody using it anymore. Maybe it's because people are using fewer desktop programs and more browser apps and apps on their mobile devices. So, fewer "explicit" files?
Why would someone want to do all that instead of paying this company $30/month? There are lots of people whose jobs are spent in a web browser. Your examples aren't selling a solution to a problem - they are just tools. Which is fine and great for people who need them, but if I simply wanted a faster browser, I'd rather use a service that is dedicated to that purpose.
I think the main selling point is the always-on browser, not a faster browser. I don't know what demand there is for faster browsers; if speed were a big deal, I'd think most web apps would have moved to native, but almost none of them have. People who use beefy web apps are likely capable of setting up their own server, which could double as a terabyte of remote storage, file sharing, any self-hosted app really.
I'm sure the makers have done their research and found $30/month is the optimal price of a browser. Surely a lot of businesses will be convinced it's worth the money because $bigCorp uses it as well, and cargo cults work; I'm just pointing out what money can buy at that price point.
Then someone might figure out that they can rent servers for $30/mo and sell 10 remote desktop subscriptions on them.
The BBC loses an additional 10% of users for every extra second its site takes to load. And when Yahoo! reduced its page load time by just 0.4 seconds, traffic increased by 9%.
A 1-second delay reduces customer satisfaction by 16%.
The longer a webpage takes to load, the more its bounce rate will skyrocket.
I'm not on board with this price point either. If it's aimed at shitty Chromebook users, I get the price point even less.
Nvidia GeForce NOW (cloud gaming streaming) is $10 a month and gives you access to top-of-the-line enterprise GPU/CPU/RAM hardware and nearly your entire Steam, Unreal, Ubi, etc. libraries. I can play Cyberpunk 2077 with fully maxed-out graphics settings with no perceptible latency.
But you are updating. You're spending $360-600 a year on this.
RAM isn't that expensive, even if you do feel like you need to upgrade again in another 2-3 years. I can buy a completely brand new, good computer every 3 years for that price. And it will be able to handle running 100 tabs.
There are a lot of potential reasons why someone might benefit from a remote browser, but I don't think computer processing power is one of them. My phone can handle running over 100 tabs in Firefox.
I don't know, is this an adblock thing? I currently have ~950 tabs open on my 6-year-old desktop computer, and my computer isn't crashing. I think it's currently using 8-9 gigs of RAM. Maybe my system is particularly optimized, or maybe without an adblocker websites are way heavier and multitasking is a big problem? I do run uMatrix and uBlock Origin, so maybe my experience isn't typical. But the point is, for $30-60 a month I could buy another 16 gigs of RAM.
I am sure it is uBlock + uMatrix that's giving you the boost. I use both and whenever I open a regular website in Incognito mode (in case uMatrix ruleset adjustment would be too consuming for a one-off) you can feel the fans spinning up.
Wish more people used these addons -- there is no reason why webpages should download megabytes of JavaScript to "improve my experience" :-)
You're paying THEM to update their servers at a price point you could easily match or come in lower on YOUR workstation upgrades. I don't understand how people are trying to justify this cost.
Twitter's special move was a character limit. There are people who just want the browser and will ironically pay a premium to have that one thing done very well.
16GB of RAM and 16vCPUs. What a weird balancing of resources.
They are probably doing things somewhat inefficiently in the beginning, like renting whole, generic VMs for every customer. Both the price and the resource balance should get better when they catch a little scale.
If you’re making good money, investing $1-2 a day to be able to work more productively is incredible ROI.
I hope to see people normalize spending $ on software. A lot of software is way under priced, and if it was priced higher, we’d have more incentives for companies to come and make more great software.
I can imagine a small niche for something like this. Big corps can end up with weird IT department restrictions and capex/opex inelasticity. There are a tragic number of professionals stuck with a cheap Dell thin-and-light laptop with a 1366x768 TN display and 6 GB RAM. They can absolutely afford a better computer, but they can't get IT/purchasing to give it to them. They're unincentivized to spend their own money on a nicer computer, and even if they did want to, they could never get it onto the domain and approved with IT's spyware and antispam software. But they may have a small amount of opex; their direct manager could accommodate a monthly "I need this subscription to do my job". This results in stupidly expensive todo-list collaboration subscriptions, cloud computers that are more expensive than local computers, and IT-bypassed remote storage systems... it's not a rationally optimal state of affairs, more like a weird corner of the chaos of modern society.
Genuine question, but would the places that are that inflexible wrt hardware upgrades have the flexibility to allow you to use a cloud service to perform your most sensitive work?
I worked there and they had these awful surface pros with hardly any memory. Their solution was to use AWS's hosted Desktop for Developers. It.. sort of worked OK.
This, by the way, was not just for a few people: because of Brexit there are thousands of people all working on making the new systems for customs etc work.
I suspect organisations that are undergoing digital transformation (as they are) will have this kind of setup. It was rife through the whole place: rubbish old IT stuff rubbing shoulders with modern SaaS.
I hate this setup. You generally need to have anything audio/video related on the laptop anyway, and those are the most CPU-hungry apps. Working through remote sessions sucks and is high latency even on good connections; it's really noticeable for certain things like alt-tabbing and IntelliSense, and it works awfully for multi-monitor setups.
I suspect it's more so companies can pretend all their old rules about keeping data on site can remain. Still, it's better than going back to the office.
Exactly. Also, who needs those resources just for a browser? Why not make it a full VDI instead? With those resources it feels like a waste to limit it that way.
This is my thing about it - I always hear people complaining about Chrome being a memory hog, but it never feels slow to me. I'm writing this with 15 tabs open and it's not even a worry. I only have 16gb in my laptop too.
I regularly run Docker + Slack + dozens of Brave tabs (still Chromium), and both individual tabs and my whole computer will slow down with some frequency, despite having 16GB of RAM.
Am I missing something? How does Mighty allow professionals access to internal websites and other internally hosted content. If this is priced for professionals, how is it even possible to allow workers to stream sensitive documents etc from a cloud service browser?
This argument falls apart when you consider how often it is made nowadays. Yes, for any one individual product, spending $1-2 per day isn't much. If you did this, however, for everything people advocate it for nowadays, you'd suddenly be spending a thousand bucks plus per month solely on subscriptions.
> it was priced higher, we’d have more incentives for companies to come and make more great software
This is also a strange logic. The definition of innovation and the benefit of competition is to drive down prices for consumers, not up. Let's not turn software into some sort of Veblen good.
> If you’re making good money, investing $1-2 dollars a day to be able to work more productively is incredible roi.
Sure, but investing $2/day pays for an M1 MacBook Air in under 2 years. That's why so many of us are struggling to understand this.
It might make sense in the context of companies with weird IT department restrictions that won't let them buy new laptops but will let them spend $50/month on a service.
For the "why would someone pay" question, I think it's quite simple.
1. We are moving more and more to a world of highly valuable workers. Improving their efficiency in a high-salary country is easily worth it. A company should be willing to pay 0.4-1% of your salary to make you more efficient.
2. Longer lifetime for company computers. No need to upgrade to M1 yet.
3. Seems like they are building a full-on WorkOS as well. That might also just be worth it.
Pardon me for being rude, but this seems like a pretty naive marketing take on what they're offering. What exactly is the use case here? Employees that have hundreds of tabs open saving a couple seconds loading web pages? How much productivity is being lost there, objectively?
Once you get above 20 tabs, are you genuinely keeping track of every single one as something to return to later? Or are you just being lazy and lack the personal systems to track what's actually important or needs to be returned to later?
I've been using a 11y/o computer at home for everything--code compilation, VMs, work AND personal life--and this has never been an issue for me.
Maybe I'll give you #3, but if an employee came to me asking for this as a paid subscription, I'd shut the idea down immediately. Seems like another startup trying to fill a space that doesn't need to be occupied.
> Once you get above 20 tabs, are you genuinely keeping track of every single one as something to return to later?
yes! Ideally I have around 500 tabs that I all need. For example, I let your comment sit here for a while, unsure if I was going to reply to it. There are more topics on HN currently under investigation. Each spawns a series of extra tabs. Cloud browsers, whole OS in the cloud, what happened to Paperspace? I open several articles that I may or may not read. When I get back to this discussion I look over the tabs it spawned and continue exploring while closing old ones... There is a window with music, one with youtube videos I might want to watch/comment on, with the further research tabs they spawn. A dozen tax tabs, courier services, business card services. Dozens of tabs for websites I'm working on. jsfiddles, specs, demos. Tabs about wind turbines without propellers, roadside wind turbines, covid, oil and coal reserves. And aggregators ofc
Basically, I can only do work or look at depressing shit for so long but I get back to it after watching a cat video.
When closing lots of tabs I go over the topics which helps me remember what I've looked at.
It's funny how many people I talk with have a single tab (usually also a single application and a single monitor) but know instinctively that their approach is better (as if there should be only one metric). I can't begin to explain how much I'm enjoying myself.
In the old days there were webspeedreader and MyIE2, which were much more suitable for the giant session. Then there was Tab Mix Plus, and then came Chrome, which is pretty much a turd with 10+ tabs, and then WebExtensions killed all the good tools.
It's definitely interesting to see how people's workflows can be so different, I get by with at most ~10 tabs, and close things as soon as I'm done with them. At the end of the working day, I prefer to have at most 2 or 3 left. I sincerely start to experience existential anxiety when the number of tabs goes up too much :-P. Probably related to some subconscious feeling that I need to 'do something' with all these tabs and when they increase in number it starts to feel like I'm 'running behind'. Different people, different workflows, that's perfectly fine.
What I don't really see is why this service needs to exist to solve that particular problem (browser gets slow because too many tabs), because IMO that problem has already been solved very well by most decent browsers. They just swap out the inactive tabs and are able to restore them fast enough even on low-end systems, as long as they have an SSD. Inactive tabs that are not swapped out don't take a lot of CPU resources either. This service sells you a cloud browser with 16GB of RAM, which is pretty much the norm for laptops and desktops now, so it's not going to save you much if 'too many tabs' is causing slowness.
I keep the things I need to do in a separate window. If it gets too crowded I drag some less important ones to a different window. I get anxiety when behind, but also if I forget to live. Switching between topics effectively is hard if you are not used to it, and it definitely eats away at my focus if I don't pay attention.
For a while I used different browsers simultaneously for different things. The session turns out entirely different for some reason, as if one is a different person in a different location. I could see a cloud browser as something like that. I have no idea what would happen. Portability will probably influence the session.
I wish bookmarks were good enough. I use tabs instead, to preserve scroll, audio, and video offsets and to have a bunch of tabs for a domain with related tabs next to them. Browsers have poor organization for large numbers of tabs, but bookmarks are even worse.
I have no real idea how the session should be organized but I'm sure there are tons of visualizations out there that would work wonderfully. Perhaps some filters with a flow chart for the entire browsing history. Full text search? I don't know.
The price doesn't really matter, as I spend way too much time online. 1 euro per day is nothing.
> yes! Ideally I have around 500 tabs that I all need. [...] I open several articles that I may or may not read. [...] Basically, I can only do work or look at depressing shit for so long but I get back to it after watching a cat video.
I thought you were trolling at first, but I realize this may actually be serious. You can lose the tab with my comment. I'm a worthless internet stranger, and if you REALLY feel the need to reply, you'll remember, anyway.
How many of those HN topics actually matter? The "may or may not read" stuff I think you can comfortably file under "does not matter" and discard for your sanity's sake.
I waste a lot of time looking at animal videos, too, but I close the tab after. I don't think that counts as something productive or necessary to revisit...
If you're closing lots of tabs, I'd hope you understand those tabs should've been closed earlier--rather than something nostalgic to revisit that never really mattered in terms of what you actually need to do?
It's fun to abuse technology, but at the end of the day, you should ask yourself... why? Is this really making your life more complete? Are you being more productive?
So you have a highly valuable worker where you can afford to pay 1% of their salary for increased efficiency but somehow you can't afford the $1000 to upgrade their machine? Hmmm...
Or you already upgraded the machine and require more efficiency :)
Or the upgraded machine comes with other differences that worker doesn't want :)
It doesn't need to be each of these reasons, and it doesn't need to be a combination; I'm just pointing these out as possible ways to justify the pricing.
(1) Sure, but installing more memory works as well and is typically possible without upgrading the CPU a la (2). I'm also not really sure what (3) is about--I'm a bit familiar with WorkOS, but I'm not familiar enough to understand how Mighty is competing.
The 3rd point is clear if you read the mighty website. They advertise improved functionality and hotkeys for common work webapps. That's definitely part of a push to become an OS for work (not workOS).
With that kind of money, you can almost lease a nice laptop that comes with a browser. I've been looking at options for this recently. $50/month should get you something decent if you commit to that for 2-3 years. That's $1800 over 3 years. E.g. one of the fancy Apple M1 MacBook Pros would cost about $1300, I believe. The Air is cheaper.
The other thing is that browsers need GPUs as well. A lot of stuff is hardware accelerated in browsers these days. Just a bunch of vCPUs does not help that much.
As it is, the target audience seems to be people with too much money and yet with a shitty laptop unable to run a browser properly. I'm sure these people exist but it does not sound like a great market opportunity.
Also, from a security point of view, I don't think that a lot of Fortune 500 companies would ever agree to this.
However, that oddly might be the path to success for this company as well: play the security angle rather than the performance angle. Lots of companies worry about their employees having their laptops and phones stolen; the less data on these devices, the better. But it will need to be iron clad and more than "We won't look! We promise" - that in itself basically just screams "but we could if we wanted to". If you think about it, though, a lot of office workers access internal tools almost exclusively through browsers these days, and most of that stuff is cloud based anyway. It's just that the terms of use, SLAs, etc. give IT departments enough warm fuzzy feelings that they don't forbid it. Office 365 is a good example: it hosts a lot of very confidential information in a lot of companies. So it's not a strange thought to narrow the surface between the user and all that to just a remotely running browser.
Also there's the convenience angle. A lot of developers are running their development tools remotely. GitHub is pushing e.g. Codespaces. VS Code can mount things over SSH. It's becoming normal to do that. So why not do that for other things as well? Streaming games is another good example. I would not be surprised to see Google go there eventually.
Apple needs to double down on Safari; it really lags behind Chrome right now, and likely won't ever be a dev's first choice until they adopt V8 or release the underlying engine. Chrome is the new IE: it's the only browser I'm allowed to use on my work computer.
I have a 2013 MacBook Pro and I want to keep using it forever. Now if it (Mighty) helps me focus and saves 1 hour of my time per month at work, I'll pay the $30/mo. However, I'd like to pause my subscription whenever possible. I am a consultant. Time is money. Can it save me an hour per month? How many hours will I spend customizing it? Will my existing plugins work? What layer of Chrome did they optimize? Security implications? I'll be looking for answers to these questions. The founder is someone with a level head and I trust them enough to not chase after "made up" problems.
EDIT: I did wonder about offsetting some of my CPU load by renting a VM out on the cloud instead of paying the $30 though. Not sure of the cost there.
I can't see it being a thing on the consumer end, so it has to be enterprise.
The problem it'll face being marketed to consumers is that every one of the big JS application sites has already deployed a mobile app that takes care of the "works on light hardware" part.
For the ad-laden, tracker-heavy news sites of the world, there are ad-blocker extensions and Brave.
Independent professionals that have to use a heavy site will opt to upgrade their spec, almost certainly; computer financing has made it so that you can pay $30-50 a month to get a whole new system - why would you pay to get a worse experience?
Now, the enterprise can afford to spend on this and it can even solve some major problems. But that's a "current enterprise" problem, and not where I see tomorrow's enterprise going. There will always be startups aiming to be savvier than this, cut out more fat and not get locked into this particular opex and security model. The basic premise relies on the Web keeping its dominant state and I suspect we're in the midst of a trend reversal against centralized systems.
And...if the current enterprise doesn't provision correctly, it's likely they'll just continue not to, and leave their employees to suffer with 2GB laptops, because it hasn't become mission critical yet.
So, I really do suspect that while it might have a chance for a few years, it's in a race against time to get some market share and expand differentiating factors. In this respect it could have the success of a Dropbox, i.e. "get big, then run out of places to go".
All they need is to get a few large companies on board, and then to convince Web developers that it's no longer necessary to think at all about front-end performance. The rest of us will be forced to follow suit when a critical mass of Web sites require beefier hardware than we can afford to buy ourselves, and faster connections than we can even get access to due to Comcast and Time Warner not giving a shit about speed.
This kind of service lives and dies based on the experience customers initially get. It makes sense to put the price tag on a level where you can provide top-notch service, even if it means serving fewer customers at first.
It's not a bad thing if people get the feeling your service provides great experience, but is too expensive. You can fix this later by dropping price or giving discounts.
This is a bit like Superhuman. Who will pay $30 a month for faster Gmail? Turns out a lot of people do, and they love it. Sure, a lot of people won't and will continue using free email services, but those that do really value it and give it a high NPS.
I see this being similar: people who spend a lot of time in Chrome, and for whom the improved speed is highly valuable in terms of opportunity cost, will not think of $30 as 'too expensive'.
The other thing is customer service, like Superhuman, with a $30 a month price tag you can actually give good customer service.
Finally, at this price tag you only need about 275,000 customers to reach $100m ARR. I don't know how long the Mighty wait list is; I do know Superhuman's was last reported as 275,000.
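The back-of-envelope behind that ARR figure, at the $30/month tier:

```python
# Rough ARR math at the $30/month price point.
subscribers = 275_000
monthly_price = 30  # dollars

arr = subscribers * monthly_price * 12
print(f"${arr:,}")  # → $99,000,000, i.e. roughly $100m ARR
```

At the $50 tier the same customer count would land closer to $165m, so the 275,000 figure is the conservative end.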
Only time and the market will tell, but I'm really bullish on this company doing great things.
As a happy Superhuman user paying $30/month for slightly faster Gmail: yes. It's absolutely worth it for tools you use daily to be as fast as humanly achievable.
They actually tried! There was a Chromium project called Blimp for a while which supported browser streaming, but it got shut down after less than a year in development. Had some major dev power behind it too, not sure what happened.
I was head of the Blimp team at Google and could tell you exactly what happened, although it’s probably not something I can discuss too much publicly. Great project, great team, turned out to be very hard and involved making major changes to Chrome to do what we wanted. And unlike Mighty we were not willing to charge users a ton of money to use it. Fast, cheap, high quality: pick two :-)
Very different projects than what Blimp was. Blimp was integrated into Chromium's rendering pipeline itself to stream draw commands directly to the client browser.
Imagine Apple partnering with telcos to do this off M2 racks then Google doing the same with Chrome split into a client and Linux container - $10/month for "Chrome Pro".
As a self-hoster, nothing irks me more than software that hands control from the user to some random third party.
And I fail to see why anyone would use this: you need high-speed internet capable of streaming 4K, for one, and if you have access to that, then chances are you also have access to a sufficiently powerful computer capable of running Chrome locally.
Coming to security, this is a complete disaster. All your traffic, including passwords, goes to a third-party server, and you have to trust that server not to do anything shady.
This can't be economical either, or it will have to be too expensive.
And the testimonial on the website: I find it hard to believe that the CEO of a company cannot afford a powerful computer but can afford a (presumably expensive) subscription service giving them access to a video stream of a browser running on powerful hardware.
Like another user said, VNC can already do this, and much more, without the Electron wrapper.
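For a sense of why 4K-capable internet comes up at all, here's a back-of-envelope comparison of raw versus encoded video bandwidth (the 25 Mbps encoded figure is a typical streaming-bitrate assumption, not Mighty's actual number):

```python
# Raw 4K60 video bandwidth vs. a typical encoded stream.
width, height = 3840, 2160
bits_per_pixel = 24  # 8-bit RGB, no chroma subsampling
fps = 60

raw_bps = width * height * bits_per_pixel * fps
encoded_bps = 25_000_000  # assumed H.264/HEVC streaming bitrate

print(f"raw: {raw_bps / 1e9:.1f} Gbps")              # → raw: 11.9 Gbps
print(f"compression: {raw_bps / encoded_bps:.0f}x")  # → compression: 478x
```

The ~500x compression is what makes streaming a browser feasible at all, but it's also where latency, artifacts, and the battery cost of decoding come from.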
It's pretty rare that I root for a company to crash and burn on principle. I'm an entrepreneur myself so it takes a lot for me to go there.
I hope every single one of these cloud-streamed remote-app or remote-OS plays fails and fails hard. They're helping lead the Internet and the computing ecosystem in an even more dystopian direction. I've been happy to see Stadia not really take off.
So let's say this succeeds. Then Google or Facebook buys it. Now all your browser sessions, including passwords, keys, authentication codes, private messages, etc., are globally visible to be data mined.
Who's to say they're not doing this already?
What if this is hacked?
This is worse than that Amazon idea of giving Amazon delivery people keys to your house. In the physical world it's pretty easy to see people when they come in your front door. In the digital world you have no idea what these people are doing with your data. There is zero situational awareness.
I think you kinda hit it on the nose. Who knows where or how or who has access to these machines. IDC if it's encrypted in transit or what, but there is no way a corporation with strict data privacy rules would be able to stream potentially sensitive information across the wire especially when it will be stored in the cloud in web form for a period of time. IDK good luck, but I'm definitely tin-foil hatting with this guy above me.
For me it's not so much a trust issue with this company, though for cloud and mobile stuff I have come to a "guilty until proven innocent" rule as regards privacy. It's (1) the trend this supports, and (2) what happens if worse players get access to it either through hacking or acquisition.
I guess I'm more thinking: if the target is enterprise (because it's $30 a month), what enterprise is going to green-light workers using a browser where content doesn't reside on the user's machine? I've worked several tech jobs where it's mandated to use a specific browser because it's locked down to not leak sensitive information. Not to mention it allows users to access internal resources. IDK, I'm not necessarily hating the product, I just don't know how it's going to work at scale for the listed CPU/memory/price point.
I don't usually care about companies success or failure, this none of my business, after all, but this kind of "innovation" could have extremely unpleasant side effects.
You have valid concerns but no need to hope for their failure.
Tech people are the minority. The market IS moving towards cloud. It's happened, it's happening, it will keep happening. Stadia may have failed now, but it IS conceptually the future of gaming. It's like you're arguing for blockbuster in a netflix world. We cannot stop this from happening, no matter how many choirs we preach to. All we can do is find ways to make this happen better.
I think it's more constructive (and technically difficult) to accept that the market is heading to full cloud and we as tech people need to find better ways of making this vibe with good privacy practices.
Personally, I would not use anything like this without knowing a lot more about their security. Even then, maybe we're still a few years out from a security perspective before I would feel comfortable storing my passwords and browsing data with a 3rd party server AND pay for it (wild). But, I could see myself doing this if my privacy was ensured.
I hope these guys really focus on innovating in that aspect, and then I hope they succeed big.
> It's like you're arguing for blockbuster in a netflix world.
No, it's not like that at all. Nobody is arguing for going back to distributing software in boxes with floppies or CD-ROMs in them.
Reason from first principles, not by analogy. Context and details matter.
Here are some major reasons for the push to cloud. None of these reasons are immutable or universal.
(1) Wimpy mobile devices with constrained power, storage, and bandwidth requirements.
(2) Cloud is the only kind of DRM that works. It's a way to lock things up and make piracy virtually impossible. As a bonus you can still build on "open source" and placate the open source zealots who don't understand the current state of things and are still living in the 90s.
(3) Application delivery and installation/uninstallation are terrible. OSes are broken.
Here are some solutions:
(1) Moore's law, huge improvements in battery capacity, 5G, WiFi 6, etc. are eating away at this problem. This issue will die of natural causes.
(2) The hopelessly naive idea that "information wants to be free" and everything has to be "free" (as in beer) needs to die, be cut into a thousand pieces, burned, encased in concrete, and sunk to the bottom of the ocean. Nothing is free. Software takes a vast amount of labor to produce, and that must be funded. If it's not funded directly and honestly it will be funded indirectly and dishonestly (surveillance capitalism, cloud lock-in, etc.). "Everything has to be free" and piracy actually help push us toward a surveillance capitalist panopticon future.
(3) This might be the toughest problem. Windows is by far the worst offender here with its nightmarish installation subsystem. Closed app stores are another huge problem but eventually I think anti-trust action is going to chip away at that.
That's not by any means a complete analysis. This is just a comment on a HN thread. It does hit the major points I think.
That's irrelevant, though. The "tech people" aren't preferring local solutions because they're funny this way - they prefer them because cloud-streamed remote apps objectively suck. It takes some knowledge about computers to comprehend how and why exactly, but that doesn't change the facts.
(To use an analogy - doctors are a minority too, but you listen to them when they say you should vaccinate.)
> The market IS moving towards cloud. It's happened, it's happening, it will keep happening.
The important question to ask is why. Why has it happened, and why is it happening? The answer has little to do with providing value to customers; it's mostly about creating the ability to seek rent. Privacy issues only happen on top of that; they're not the entirety of the problem.
> I think it's more constructive (and technically difficult) to accept that the market is heading to full cloud
Or, we could fight it. Maybe it's a quixotic quest. Maybe not. The market is a dumb greedy optimizer, it flows down the profitability gradients the way water flows downhill. If you want it to flow elsewhere, you have to put obstacles in the way, or cut out a better path.
While I totally agree with you, if this succeeds my hope is that it will finally push browser vendors to come up with a good authentication/authorization story. Make it totally integrated in the browser, such that I remain in control and Mighty only sees the equivalent of an OAuth token it can't use to log in as me. No more custom signup forms, no more botched login flows redirecting you through 13 sites, no more passwords stored on websites... That is an innovation I would gladly welcome both as a web user and a potential web developer.
Every service needs auth. I can't believe nothing is properly integrated. I still have to click and enter a password, which fortunately the browser can create for me. I still have to receive an email and click on a link to validate my account. Web developers still have to create forms, manage the whole process, hash, salt and sauce my password and not leak it.
Your point about security is valid. But we are already past that point when we started moving all of our apps and our data to cloud.
Before SaaS was popular, almost all of our data stayed on our local machines. But now everything is in the cloud. We have already lost the privacy battle.
> And I fail to see why anyone would use this, you need high speed internet capable of streaming 4k for one and if you have access to that, then chances are you also have access to a sufficiently powerful computer capable of running chrome locally.
Plenty of people who can’t afford a fast computer currently have access to a fast internet connection. The ability to substitute internet bandwidth for CPU and RAM will be very valuable for them.
Most people don't need a fast computer. For many, a 5-year-old average computer is good enough in terms of hardware.
What makes this hardware feel not great is the many developers with fast machines who are OK consuming a lot of those resources in the software they develop. This makes the experience on older systems slow. It's unplanned obsolescence.
For Chrome stuff and using the web I shouldn't need a killer system. No one should.
On the one hand: yes, $50 * 12 months would go a long way toward a machine upgrade, so it doesn't make a ton of sense purely on your-machine's-too-weak grounds.
On the other hand, I don't really run Chrome or Firefox on anything that operates on battery, because I don't like seeing the little battery icon deplete twice as fast, and it barely even matters how powerful the machine is (M1 helps, but there's still a noticeable difference). Maybe there are people who really, really want to run Chrome all the time, but also work mostly on portables and like them actually lasting as long on battery as they're supposed to. Maybe that's worth $50/m to them.
Good point. Decoding's usually pretty efficient, but you're right that use of wifi plus everything else related to this program might erase much of the power-savings.
You can get a computer more than powerful enough to run a web browser (and more) for $400 if you buy used, and you'll get to keep it forever. A subscription to Mighty would only last you a year for the same $400 price tag.
I imagine VNC can't do this well because it streams pixels with no optimizations other than antiquated compression (it can't even match WebRTC screen sharing), and crappy color depth.
The idea is interesting for lightweight computers e.g. chromebooks and ultrabooks, but it would irk me a lot to have my browser and personal information running on some other machine that I don't control.
What I would be super-interested in though is a self-hosted version of Mighty, that I could install on a Linux box anywhere of my choosing. For example, the server runs on my powerful desktop at home, and my ultrabook in the bedroom can be a client.
This project actually made me think that, since the X-Window protocol is practically a dead-end and everything's gotta be made with web tech now (ugh), it'd be really cool to have a version of FF or Chrome that's smart enough to send some kind of render instructions between a server-instance and a client-instance. Process server-side, render client-side, like X-Window but for web junk.
(the notion that this is completely fucking absurd since those "render instructions" are called "HTML" and I'm just describing server-side rendering isn't lost on me, but it's not my fault things have gotten so bad that having a server-side browser forward draw commands from bloated "web apps" to a resource-light client might actually be kinda nice)
I see this being extremely valuable for companies that hire contract workers, especially for UI-intensive tasks. Say you do labeling for self driving cars where you'll have to render a lot of images to the end user. Rather than giving all these users a powerful computer, you could give them chromebooks and reduce your capital cost massively.
Many thin-client setups like Teradici PCoIP are used by tons of film studios in post-production. The last few animation Oscar winners have all been made without computers at people's desks.
There are already services like Nimble Collective (bought by Amazon) that stream 3D apps like Maya, Blender, etc. to your browser.
WebRTC is already seeing tons of companies move to streaming content to thin-client endpoints. Epic's MetaHuman Creator, for example, runs in the cloud.
> And I fail to see why anyone would use this, you need high speed internet capable of streaming 4k for one and if you have access to that, then chances are you also have access to a sufficiently powerful computer capable of running chrome locally
That's a weird assumption. Where I'm from, gigabit (or at the very least 100 Mbit) fibre is the norm, which means fast 4K-ready internet cuts across virtually every socioeconomic demographic.
Just because you have a fast internet connection does not mean that all your client devices have a lot of RAM or a GPU. Even if they do, pushing computation to the cloud could mean improved battery life when you are on the go.
Would be interesting to see how far you can take a raspberry pi with mighty.
How much lithium battery degradation is due to some mobile tab going rogue?
For the moment, I would consider the concept and not so much the price. What they charge is likely not a lower bound on their internal cost structure. The product came out of beta today, so their pricing seeks first to attract those users with a high need, and to test their pricing power. Better to try to charge too much and then go lower than to take too little.
Given that their engineering expenses are a fixed cost and the majority of their spending, they'll be able to lower prices as they scale.
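To illustrate where room to cut prices could come from, here's a hypothetical unit-economics sketch. Every number in it is an assumption for illustration, not Mighty's actual cost structure:

```python
# Hypothetical per-user infrastructure cost (all inputs are assumptions).
hourly_instance_cost = 0.70  # assumed on-demand rate for a 16 vCPU cloud box
hours_per_month = 8 * 22     # one user's active browsing hours per month
oversubscription = 4         # assumed users sharing an instance via idle time

cost_per_user = hourly_instance_cost * hours_per_month / oversubscription
print(f"${cost_per_user:.2f}/user/month")  # → $30.80/user/month
```

Under these made-up numbers the infrastructure alone eats the $30 tier; doubling the oversubscription factor halves the per-user cost, which is one lever for the price drops predicted above.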
Maybe I'm biased (I certainly use powerful-ish machines, so maybe I'm not the target market), but I genuinely can't relate when people on here talk about the web being slow as a category.
Sometimes a heavy web-app like Twitter will be slow on first load, but Mighty wouldn't help with network speed, right?
Slack is slow because it's slow to load actual conversation data; the iOS app is just as slow as the Electron app. This is not UI jank; it's a slow API and/or insufficient prefetching.
Jira is slow because it sucks; I've used native apps that are slow because they suck.
Other than that, I don't have many relevant experiences to point to. I'm sure for people running older machines the picture is different (the state of web engineering as a whole could certainly be improved on several dimensions), but I also doubt people stuck using slow computers can afford to spend $30-$50/month on something like this.
I'm genuinely asking: what things are slow for you? Is it just the fact that the code has to load before it can request the data (or render anything) that makes it feel slow? Or is there genuine sluggishness? What web apps are you using that I'm not?
On my nearly maxed out 16” MBP, on day one, web pages with lots of ads and fonts (typically news sites and content aggregators) were noticeably slow. On my totally maxed out iPhone 8 Plus, same experience day one. Scrolling past one of those dumb sticky videos can cause everything to jump into place, then back out, then back again. Navigating back in history can be so slow just hitting the cache that the previous (now forward entry) page shows up again and blocks rendering of the navigation.
Same with electron apps. VSCode is generally among the best. I currently have 9-10 projects open. If I accidentally trigger a font resize by missing cmd-backspace and hit + instead, I’m sitting around for 1-5 minutes waiting for everything to settle. I’ve even hit bugs where trying something app-wide then reverting hit a very slow race condition and just completely deleted my settings.json. Restart to update can take a couple minutes too, and that’s to restore visible functionality while waiting for the changelog tab to randomly show up.
Slack on iOS isn’t nearly as bad as on macOS. But that’s not Electron. I’m in 8 Slack orgs, not a large number compared to some people I know. Refreshing the window takes long enough I just go take a break.
This is on a machine with 64GB RAM, even when it’s not swapping from a couple Chrome windows.
Alright, so there are several different things here:
> web pages with lots of ads and fonts
Ad blockers are a thing. I use them on every browser (it's even possible to do on iOS). It makes a big difference.
> Scrolling past one of those dumb sticky videos can cause everything to jump into place, then back out, then back again
There are plenty of annoying dark patterns (and simply poor UX) out there being used, but what I'm trying to get at here is specifically the perception of slowness for the web as a platform. UX problems can exist in any software.
> Navigating back in history can be so slow just hitting the cache that the previous (now forward entry) page shows up again and blocks rendering of the navigation
I think you may be misunderstanding here... some sites - especially news sites - use a dark pattern where they override the back button behavior to prevent you from going back (presumably to increase "engagement", or whatever). You could argue the web platform shouldn't let them do this, but that still wouldn't have to do with "slowness" (and wouldn't be solved by the OP).
> I’ve even hit bugs where trying something app-wide then reverting hit a very slow race condition and just completely deleted my settings.json
This just sounds like a logic bug; bugs exist regardless of platform
> If I accidentally trigger a font resize by missing cmd-backspace and hit + instead, I’m sitting around for 1-5 minutes waiting for everything to settle
> Restart to update can take a couple minutes too, and that’s to restore visible functionality while waiting for the changelog tab to randomly show up
This is absolutely insane to me. I just tried changing the font size in a very large VSCode project with 10 files open and it took 1 second to change the font size for the whole app. Killing the entire app (Cmd+Q) and restarting it took 4-5 seconds.
How many files do you have open? Are you using some crazy extensions that could be poorly-written or interacting badly?
> I’m in 8 Slack orgs, not a large number compared to some people I know. Refreshing the window takes long enough I just go take a break.
Again, totally crazy compared to my experience. I just refreshed the full window for the medium-sized org I'm in and it took 2 seconds for the UI to come back, and another 5 seconds to load the conversation text (the latter is pretty bad, but as I noted in my original post, not related to performance of the actual web platform)
> Ad blockers are a thing. I use them on every browser (it's even possible to do on iOS). It makes a big difference.
True. They also break things, which I’m not a fan of for personal use. They also make manual testing of web work less consistent with what normal users experience, which I avoid.
> There are plenty of annoying dark patterns (and simply poor UX) out there being used, but what I'm trying to get at here is specifically the perception of slowness for the web as a platform. UX problems can exist in any software.
What I’m describing though is sites using common patterns having such poor performance that I can literally watch a sequence of state changes take place and categorize them as they happen. Forget the ad experience. Common tech oriented sites linked on HN which make it to the front page will frequently show me three to four layout shifts as their fonts load.
> I think you may be misunderstanding here... some sites - especially news sites - use a dark pattern where they override the back button behavior to prevent you from going back (presumably to increase "engagement", or whatever). You could argue the web platform shouldn't let them do this, but that still wouldn't have to do with "slowness" (and wouldn't be solved by the OP).
This wasn’t some back button hijack, I double checked. It was a slow website meeting what I assume is a race condition in the browser, where the state change on load coincided with my decision to stop waiting. And it happens a lot on iOS Safari on perfectly trustworthy sites.
> This just sounds like a logic bug; bugs exist regardless of platform
Sure, like I said, race condition. But exacerbated by how slowly reverting some mistake might take effect.
> This is absolutely insane to me. I just tried changing the font size in a very large VSCode project with 10 files open and it took 1 second to change the font size for the whole app. Killing the entire app (Cmd+Q) and restarting it took 4-5 seconds.
> How many files do you have open? Are you using some crazy extensions that could be poorly-written or interacting badly?
Like I said, I have 9-10 projects open. Assuming I have 10 files open in each (I have more, but that wouldn't matter if the app were using native controls), that's 9-10 times the same thing you tried. Each instance is its own process pool. But they're all responding to the same event asynchronously.
> Again, totally crazy compared to my experience. I just refreshed the full window for the medium-sized org I'm in and it took 2 seconds for the UI to come back, and another 5 seconds to load the conversation text (the latter is pretty bad, but as I noted in my original post, not related to performance of the actual web platform)
This is also not comparable to what I described, you refreshed one instance vs my 8. And again this would not be an issue using native controls, which would not be running 8 separate instances.
- - -
You seem pretty focused on defending the web and web technologies in the abstract. I’m not necessarily even disagreeing with that. Although real world usage of web tech is the reason things are so bad that I do experience the performance degradation I describe.
I’m not your typical HN anti-JS zealot. I’m just very disappointed with how bad the common web based product is.
You're mostly right that it's not the underlying tech that's bad but how it's used. But not totally. It's the only UI platform I'm aware of that developed a huge, resource-intensive multiprocess model to work around the fact that common usage routinely blocks shared resources and routinely crashes.
> Slack is slow because it's slow to load actual conversation data; the iOS app is just as slow as the Electron app. This is not UI jank; it's a slow API and/or insufficient prefetching.
This makes no sense to me. On my machine, the desktop Slack client has noticeable input lag and it takes ages to perform any action. Try installing Ripcord and compare them; they are talking to the same API, but Ripcord doesn't make me want to throw my laptop out the window.
But Mighty is not a solution, it is just a band-aid that will perpetuate the problem and make UI developers even more lazy because they can assume their crappy Electron apps are always running on a beefy machine in the cloud.
I've never once experienced input lag on Slack, on mobile or desktop (or laptop). The only thing I notice is a slow initial load and slowness to load conversations when I click between them.
Again, I'm not exactly using ancient computers, but that's been my anecdotal experience. I was working from a MacBook Pro that was 3-4 years old at one point, for what that's worth. Not maxed out, though I'll admit it was probably still not a slouch.
Agree. I also just completely fail to understand what the problem is here.
I have 16 tabs currently open in firefox on my MBP. Everything is snappy.
On my desktop (which, to be fair, is very powerful) I have maybe 40 or so tabs, the majority of which never get loaded because they are saved by the tree-style-tab extension, and I don't visit some of the subtrees often.
Literally the only webapp I use that feels slow any more (after I stopped using Gmail) is Notion, and they know they have perf problems. Like you mentioned, these things are slow (Gmail, Notion, Jira, whatever) because they... suck. Gmail is/was just as awful on my powerful desktop as it is on the laptop. I just don't get what this buys me.
> Slack is slow because it's slow to load actual conversation data; the iOS app is just as slow as the Electron app. This is not UI jank; it's a slow API and/or insufficient prefetching.
That's UI jank on top of network issues. Another commenter mentioned Ripcord, which is a good baseline for how fast Slack or Discord should be.
At least for myself, when I say web is slow as a category - including wider web technologies like Electron - I'm mostly thinking about UI performance. Any time a website takes more than 50-100ms to react to an input event, it's noticeably jarring. If it's consistent, it makes the experience of using that site painful. And unfortunately, this problem is common across the board in everything done with modern web tools and principles.
I commented under another reply, but I've never once experienced actual UI jank/input lag on Slack or Discord. Not once. I don't know what I'm doing differently.
Here's one theory: the web makes it super easy to add lots of little animations to apps. Discord in particular takes lots of advantage of this. Is it possible all the little animations are making things feel slightly less "instant", and being mistaken for input lag? That wouldn't impact typing, but
Having powerful machines would be one thing. I really implore you to try using a low-end machine for a while and see how bad the web is. I currently use a 4GB 2015 MacBook Air and I often see Discord and other websites-masquerading-as-apps hogging upwards of 2 gigs of RAM, which is inexcusable. I can hardly believe that animations would contribute to lag (or perceived lag), especially because a lot of completely native apps have these "micro-interactions" and still feel fast and responsive.
On the other hand, Ripcord, an alternate client for Slack and Discord, sits at 50MB of RAM and single-digit CPU usage.
Don't use Slack, but I have the same experience with Teams, for example. Using it is torture on my desktop (i5-8500: 6 cores, 32 GB RAM). There is very noticeable lag when typing, on the order of one second. When moving the mouse around, all the animations are laggy (they take forever to start).
Of course there are what seem to be caching / network-related issues, like switching between conversations always takes forever. But there are also clearly UI issues, like when I try to scroll up in a newly-opened conversation, it scrolls a bit, waits to load, then it sends me all the way back down again before jumping around to some random position. This happens when there are no new messages in the chat and I only try to scroll a little, not go back days.
And the crown, for me: somehow, the number of letters out of order when I type is through the roof in Teams. It happens practically on every message I send, whereas this basically never happens in Telegram (where I send the same kind of short messages) or when I write long-form emails.
I generally feel the same way and sometimes ask myself if I'm living in the same world as some people. Almost everyone complains about Gmail being slow but I don't find it unbearable. I keep it open in a Firefox tab and use it all the time. I have Slack running natively (well, as native as it can get) and it seems to work fine on both my Linux and macOS machines.
I guess I'm happy as long as the keyboard response is (very) good. The only time I notice real slowdowns are when my actions get out of sync with the system. A slow terminal or text editor drives me insane and is one reason I really can't use VSCode; Sublime Text never makes me wait.
Granted, there are things that are genuinely slow for me and drive me up a wall (i.e. most issue trackers), but overall I'd say that my daily use is pretty good. It's certainly not bad enough that I would offload my browsing to some cloud-based system. But like you, I'm probably not the target market here.
Maybe it's been a while since you've used a good native app. Sometimes we forget just how fast computers can be without any of the mountains of abstractions that we've piled on.
Possibly. I guess the "fastest" program I use on a daily basis is Sublime Text, which is native on multiple platforms. I never have a slowdown, even on very large files. It instantly responds to keyboard input, which is a must in a text editor. Basically, it's perfect (for my definition of perfect).
I've seen people say things like "Safari is much faster than Chrome" but I don't really see it. Sure, it can seem a bit quicker on some sites but most of the time I don't really notice it. I do notice things like CPU and energy usage between those two browsers, but I'm mostly plugged into power all day anyway so it doesn't make any practical difference which one I use. Perhaps when I get a new M1 machine (ie. 2021 16" MBP!!) I'll feel differently. Perhaps.
With slack though, it absolutely is UI jank. The interface is an absolute nightmare to use on phones (I'd say deliberately so, to force you to use their app).
That's not the problem that needs to be solved, though.
The web isn't slow because it takes a second longer to load a website. The web is slow because, once loaded, the website takes 100+ ms to react to a click or a keypress. Plenty of popular websites are so far off the mark that they take half a second or more to react!
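To make that 100 ms claim concrete, here's a minimal sketch (plain JavaScript, runnable in Node or a browser console) of why a heavy synchronous handler makes a page feel unresponsive. The numbers are illustrative, not measurements of any particular site:

```javascript
// A click or keypress handler that does ~150 ms of synchronous work blocks
// every other input event in the tab. This busy-loop simulates that stall;
// the same loop inside an event handler is exactly what input jank looks like.
function busyWork(ms) {
  const start = Date.now();
  while (Date.now() - start < ms) {} // nothing else can run on this thread
}

const t0 = Date.now();
busyWork(150);
const stall = Date.now() - t0;
console.log(stall >= 150); // true: any input arriving during this window waits at least that long
```

The point is that this kind of stall is entirely client-side: no amount of server bandwidth fixes a main thread that is busy for 150 ms.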
Mighty will definitely help with network speed. If your link maxes out at 60 KB per second down, while Mighty's servers have a 50 MB per second link and sit closer to the peers for the sites you visit, pages will definitely load faster there and then stream down to you.
I haven't used Mighty, but I'm basing that on my own experience with similar technology.
I think there's a window of optimal use bracketed by low and high download bandwidth. If you're faster than that, maybe the only speed-up you get is if your machine has a slow CPU. If you're slower than that, I suspect the video streaming they use will produce a lag that makes your experience worse than if you were just loading the site directly.
If Mighty wanted to push that lower bound down, instead of streaming video they might be able to stream changes to the DOM. They could compile a sort of single-file version of the page on their server, with all the requests, third-party resources, and styles inlined, and then whenever the layout changed, stream those style or DOM changes down to you. As far as I can tell, that's basically the minimum amount of information you need to replicate the experience. It might even help a little with slow-CPU machines, by tree-shaking styles and resources that are not used.
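A toy sketch of that DOM-change-streaming idea (all names here are hypothetical; this is not Mighty's or anyone's actual protocol). The page is modeled as a flat map of node objects; the server diffs successive snapshots and sends only the patches:

```javascript
// Toy "stream DOM changes instead of video": nodes are plain objects keyed by id.
function diffNodes(prev, next) {
  const patches = [];
  for (const [id, node] of Object.entries(next)) {
    const old = prev[id];
    if (!old) {
      patches.push({ op: "add", id, node });        // new element appeared
    } else if (old.text !== node.text || old.style !== node.style) {
      patches.push({ op: "update", id, node });     // content or style changed
    }
  }
  for (const id of Object.keys(prev)) {
    if (!next[id]) patches.push({ op: "remove", id }); // element went away
  }
  return patches; // this, not pixels, is what crosses the wire
}

function applyPatches(tree, patches) {
  const out = { ...tree };
  for (const p of patches) {
    if (p.op === "remove") delete out[p.id];
    else out[p.id] = p.node;
  }
  return out;
}

// Server renders two successive states; the client replays the patch stream.
const frame1 = { a: { text: "Loading", style: "dim" } };
const frame2 = { a: { text: "Hello", style: "dim" }, b: { text: "World", style: "" } };
const patches = diffNodes(frame1, frame2);
console.log(JSON.stringify(applyPatches(frame1, patches)) === JSON.stringify(frame2)); // true
```

A few KB of patches per interaction would usually be far smaller than video frames of the same viewport, which is the appeal of the approach.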
From this point of view, with enough development, Mighty could be purchased by Google as a sort of deluxe subscription model for Chrome, with bundled premium subscriptions to various streaming services and so on. Seen that way, bundling up content, delivery, and medium is not really a novel thing; similar things have happened with cable TV, magazines, and news to some extent.
As for the good point you make that people at the slow or low-spec end might often not be able to afford that kind of service: that lowest of the low end might be a real, focused niche... say, people on airplane Wi-Fi or in remote locations on a satellite link.
But from a purely product-marketing and psychological point of view, I don't think a product needs actual technical superiority or real measurable utility to become a big hit. It really only needs something that makes people want to use it. Mighty could position itself as a sort of luxury upgrade for people with already good specs.
From the point of view of people who are perhaps already of that successful and wealthy mindset, many may consider time their most valuable asset, and the accumulated frustration and annoyance of waiting for websites to load is something they are prepared to pay a service to get rid of, and to provide them an experience more in line with their station and their expectations of life in general.
But you cannot really stream a remote browser's screen on a 60 kbps download... they recommend on their website that you need 500 Mbps or something crazy like that for low lag.
Sir, I disagree. I've gotten away with streaming compressed WebP frames of a moderately sized laptop viewport, where each frame is between 10 and 34 KB, at a relatively respectable two frames per second. I'm not joking; it's still usable. You can push the frame rate even higher if you're prepared to accept higher compression or lower resolution.
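Quick arithmetic on those numbers, taking the worst-case frame size from the comment above:

```javascript
// Bandwidth needed for 2 fps of 34 KB frames (worst case cited above).
const kbPerFrame = 34;             // worst-case frame size, in kilobytes
const fps = 2;                     // frames per second
const kbps = kbPerFrame * fps * 8; // kilobits per second on the wire
console.log(kbps);                 // 544
```

So worst-case frames need roughly 544 kbps, and the 10 KB frames only 160 kbps, which is why this kind of low-frame-rate streaming can squeak through a link in the hundreds-of-kbps range.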
Oh, I believe you can ‘stream’ stuff at 2 fps over a 500 kbps line alright; the ‘not serious’ part is how anyone could find that acceptable. Even if all you have is 500 kbps...
If you would use your 2fps streaming browser to read, say, hacker news, every scroll operation would be hideously slow and pull in another ~60KB per second, even though the page data itself is only a few KB and never changes. Your ‘streaming solution’ only makes sense if the total amount of data to fetch for the page itself outweighs the total amount of data for all the frames you need to stream while you are using the page. Which is probably almost never, unless you always look at static single-page applications which continuously pull in data on the backend without presenting anything new at the front end. Highly unlikely.
Your logic is sound, just some experience seems to be missing.
> the ‘not serious’ part is how anyone could find that acceptable
I guess you don't have a bead on what everyone finds acceptable. That's normal; you can only share your perspective, not everybody's.
> every scroll operation would be hideously slow
I guess you haven't experienced it, because what you describe is not how it works.
Two frames per second doesn't mean streaming a 60 fps source down to you at two frames per second; it means capturing two frames per second at the source and sending those to you, because that's what your bandwidth will permit.
> Your ‘streaming solution’ only makes sense if... Highly unlikely.
Only if the goal is a reduction in bandwidth used viewing the page. There are many other goals where streaming the browser makes a helluvalotta sense.
I get that you had this focus on bandwidth, because I think it's the main obvious focus of this thread, but there's an expanded context in which these things operate. I'm sure you'd appreciate that if you'd experienced it.
So Figma is written in JS and C++, compiled to WebAssembly so it runs in a browser, which runs in a datacenter, with video streamed to Mighty, an Electron app where the front-end is written in JS and some C++, running inside Chromium.
I feel like we engineers are putting too many abstractions on things. It's like we are all peddling "get rich quick" schemes to people trying to weasel our way into some super popular process. This screams like an anti-direct-to-consumer model.
STOP CREATING MIDDLEMEN! It's going to cost me 30 bucks just to browse the web, on top of whatever I already spend, so that someone can collect a "handling fee". Jesus, I feel like the world is going nuts.
Aside from your HTML interpretation, there's another interesting thing that's been tried by a remote-browser startup bought by Cloudflare (called, I think, S2). They hooked into the Skia drawing instructions of Chromium's rendering engine, and instead of sending screenshots or video from the remote browser to the client, they sent the Skia drawing instructions and rebuilt the entire rendering of the HTML client-side.
Point taken. But there's a time and a place for everything: nowadays we have the tech to stream the contents of the desktop, and quite probably to do it in real time. So you could say the problem HTML was solving is gone.
Then good news: between modern software development practices, NFTs, and just plain fucking laziness and incompetence, there won't be anything left of it soon!
The fact that a web email client (gmail) can turn the fan on when it’s mostly text and runs in a VM written and published by the same company that wrote the email client just makes my head spin.
And the solution to this is to put the browser in the cloud? So what’s the desktop browser on your new $3,000 mbp now, like... a demo environment?
It boggles my mind that we’re not demanding the web bloat stop. Maybe figma just doesn’t really work as a web app! If I have to run my browser in a datacenter, I think it’s fair to say it doesn’t.
As a web dev I’m just embarrassed. How are we not saying “this is too much, stop making web apps that crash my computer it’s not worth it.”
I hear you on the mind-boggling part, but I think the web just makes bloat visible; resource bloat is present in probably most products of our developed economies. If you looked at an SUV driving down the street and could visualize the amount of raw resources (energy, air, water, minerals, labor and its costs?) that went into producing that SUV, I think you'd be demanding that we reduce the bloat of all products of our economy. I think it's a valid analogy for the sort of page weight we're talking about.
My point is not that I'm condoning it; I just happen to think it's probably inevitable.
And there's also the analogy between handmade, craft-made things (indie products built outside the system) and the movement of indie, bloat-free websites. I think they're both destined to be small slices of the eventual mainstream market.
Getting even more meta: societies tend to capture more energy over time, and more energy ends up being crystallized into more matter. So we're going to produce more things and, ignoring some inflection points in technology and efficiency, use more energy to produce more things. Things are probably going to bloat out.
Pop open developer tools - Gmail's JavaScript is heavily obfuscated, not just minified. (I think it's a custom, self-modifying VM that's written in JavaScript, and it fetches pieces of itself over the network, like ReCAPTCHA).
This "DRM" plays at least some role in making the optimizers in V8 work a lot harder to get anything reasonable out of the spaghetti.
Why Google needs DRM for a web email app is beyond me.
They're too embarrassed by all the shit code they've written that makes the app slow, so they obfuscate it to try to hide how shit it is, and in turn it becomes even slower ;)
The reason we use such tactics is to increase the barrier to reverse engineering, because our teams value their work. Some people claim that security through obscurity is bad. I challenge this view. I claim that every security defense, such as RSA, is a form of obscurity.
It's a matter of time until RSA breaks, in the same way obfuscation does.
Gmail is not your build-it-over-a-weekend kind of app. It's highly sophisticated and delivers huge value.
There are a lot of people who hate obfuscation. Some are communists and others are attackers.
My wife (she works in a fraud detection department) found an interesting attacker who masqueraded as a security researcher and student of X University, but was in fact a criminal. He had reverse engineered the anti-fraud scripts of many websites and published them on GitHub for everyone to see. His main goal was to attract malicious buyers and sell them scripts that bypass this protection. It was one heck of a marketing strategy.
First, encryption is not "obscurity" in the same way you think DRM is.
Second, several other email providers don't think they need to rely on some performance-killing DRM to "protect" their web app (oh no, what of all the value!).
Outlook has a part of their files minified, but doesn't use any obfuscation; apps like ProtonMail[0] and Tutanota[1] are even open source.
(I'm actually starting to migrate off of Gmail to Protonmail myself.)
Encryption is "obscurity". For example, Quantum computers will break RSA.
> Quantum computers will break RSA
It will just take X amount of time, the same as breaking any protection like DRM.
The goal of any security method is increasing attack time.
TLS got attacked, SSL got attacked. History repeats itself. Period.
> Oh, and there's no need to call people "communists", "attackers", or "criminal scum". Be civil.
Why? I have a right to use these terms. What should I use instead?
Would you call Osama bin Laden "His Highness bin Laden"?
The words exist for a reason. I use them in the appropriate context.
People don't understand the Russian soul. I'm very direct and speak my mind!
>> Second, several other email providers don't think they need to rely on some performance-killing DRM to "protect" their web app (oh no, what of all the value!).
>> Outlook has a part of their files minified, but doesn't use any obfuscation; apps like ProtonMail[0] and Tutanota[1] are even open source.
So? What's your point?
You have Linux, which is open source, and you have Windows (a lot of parts, including their licensing, are obfuscated).
The performance hit is minimal. ProtonMail & Tutanota are way slower than GMail and lack cutting edge features we offer.
Gmail vs Outlook is like Ferrari vs Toyota.
Gmail has great UX; even my grandmother can use it.
The point is that nobody relevant is going to get stopped by this DRM. That's because nobody relevant is likely to even try copying it in the first place, and if an economically relevant party were so unwise, I expect Google's legal resources are sufficient to discourage plain copying, even if a court case is never won. They might learn some tricks, sure, but the chances of Gmail's client-side bits doing anything that novel that's also competitively important are slim to none. (And if there really is some kind of secret sauce that needs protecting, relying on DRM seems quite... optimistic. Finally, we're only talking front-end here, not backend, and surely that's at least as important a part of the value proposition.)
While there may be a case for DRM in some places, gmail is almost certainly not it.
How exactly is obfuscating a post-login app supposed to be relevant to fraudsters who game AdWords, reCAPTCHA, etc.?
Obviously people and corporations can choose to obfuscate; their prerogative. Doesn't mean it's effective nor wise in every instance, though, does it? Gmail is entirely free to waste effort and make its app slower and less (easily) maintainable, no question there.
So your claim is that they can't automate the UI (well) via conventional browser automation tools, and can't access whatever endpoints gmail the client-side-app uses without being detected, but could if the code wasn't obfuscated?
I'll bite once again - from personal experience, I knew Gmail is slower than ProtonMail, but I tested it anyway. I loaded both Gmail and ProtonMail, using the browser's profiler.
Gmail spent 6x the time ProtonMail did in the garbage collector, and 2x the time ProtonMail spent in the JIT compiler.
6x is minimal to me, considering how complex Gmail is. It's not that slow. I can get up and running with it quickly, and it's okay for anyone, unless you're a person who can't be patient for a few seconds.
You always have the option of loading "Basic HTML", and you can get a ProtonMail-like (or Toyota-like) experience there ;)
I don't know what your agenda really is. Attacking DRM is bad.
You have issues like spammers abusing the Gmail interface to send emails using Google IPs, and there, DRM rocks.
Your point is completely valid, but companies rush their products to market; they don't necessarily do it for the experience they provide but to capture the market.
For example, until this month Notion was extremely slow and everyone complained. They fixed it recently and no one's complaining, but the important thing here is that no one left the product for being slow. Maybe there is a way to reduce bloat on the web and to ship desktop apps while keeping pace with the modern app-dev experience, but surely there isn't one right now. Maybe WebAssembly will help? Let's see.
All JavaScript apps run in the browser with its own JS engine (V8 in Chrome), garbage collection, etc. This is effectively a VM, I think. Like you deploy Java apps to the JVM, you deploy JavaScript apps to the "browser VM."
Google writes gmail but they also write chrome and V8, so they are in a unique position of writing both the application and the platform it runs upon. Presumably this would allow them to make something more performant than most, not less.
It makes me happier to see HN have fewer users who see the world in binary, as "good-guy billionaire DRM industry lords with lots of secrecy for good" versus "communists, hackers, terrorists, and bin Laden".