If Windows truly had a great UI+UX equal to macOS, the investment in native apps would be much more worthwhile. Right now, creating a web app for Windows is almost the de facto choice because there's nothing better!
They seem more like a collection of half-baked ideas than robust frameworks that we'd want to bet a lot of dev time on.
As you both mention, we might end up with a Cocoa macOS app and an Electron app for Windows & Linux.
My strategy would be Cocoa for macOS/iOS and GTK for Linux/Windows.
Although of course, it does mean you probably have to touch C++ sometimes.
I personally believe that the cause of the latter is a combination of (1) cost-saving through fewer platforms to support (2) cost-saving through a larger developer pool and (3) better branding support, but I've never worked in the business myself.
Of course this doesn't stop MS from simply removing functionality, as they have been doing for the past 10 years. Explorer has a fraction of the API surface it had in the past, as do large parts of the OS. For example, the color APIs that used to describe every little part of the UI so it could be tweaked are now basically gone.
UI is also one of the hardest things to do right when writing cross-platform applications. Business logic is usually quite portable, but UI introduces geometry and fundamental incompatibilities between widget toolkits. The web, with some exceptions, ignores native widgets, thus the problem of writing UI code mostly goes away because there's only one toolkit to work with.
On Windows, the current "native" UI toolkit is WinUI.
I quoted "native", because it's not really native. It's just what's in active development by Microsoft at this point in time.
Creating a desktop web app is just more convenient because of skill reuse, and you know it will continue to work, not only on Windows but also on Linux and macOS. The web is more mature and will continue to evolve.
On Windows, no one cares anymore if your app has a "native" UI - the OS itself has conflicting UI styles, and most mainstream apps are different anyway.
Not the way I like things, but it's the way things are.
Yeah, it looks different from what you can get from CreateWindowEx with the various control styles, but those also look different from MFC, Windows Forms, etc.
I think a real native solution would be shipped with the system, and get an updated theme and feel with an OS update. But the last version that did this was Win32/Uxtheme and to some extent WPF. Metro/UWP formally did this, but it encourages you to hardcode a lot of styles, so you have to update your app when a new Windows version comes out. But I think MS has moved away from shipping the UI library with the system.
The real native UI is what MS uses internally, and for a lot of products that is DirectUI. It is used in Explorer, the start menu used it for some time in Win10, I think the control center also used it. But also MSN messenger used it, and Office, too. Spiritually it is similar to WinUI 3 I think: implemented in native code, drawing "windowless", and using some kind of XAML.
WinUI 3 intentionally decoupled from the OS so app developers can support several versions of Windows while still using the latest GUI stuff, in the spirit of web apps.
So in a technical sense, WinUI is about as native as Qt or wxWidgets.
Just my preference on terminology, rather than anything against WinUI (I've been developing a WinUI app).
> Evergreen distribution. Rely on an up-to-date version of Chromium with regular platform updates and security patches.
It's not like Apple is maintaining iMessage anywhere except their own platforms. And even with that, until very recently the Mac version was second class compared to iOS, only catching up when Marzipan/Catalyst let them share one codebase between the two versions.
So that's not exactly Apple using Electron, but it's still a good example of "Even a giant company like Apple has trouble effectively maintaining separate versions of the same product across platforms."
I think the real reason Microsoft went Electron is because they plan on extending it to eventually take control of Electron on Windows. So they need some big Electron-based applications to justify that.
Mac Messages was comically just a plain webview for years. Again, that changed with Catalyst.
The whole chat transcript being a web view thing was one of the main reasons the app eventually got killed in favor of a Catalyst version. There was zero expertise in the team to make all the random new transcript features that the designers kept throwing on the iOS client and there was also no way to reuse the knowledge and code of the iOS team either. Various attempts to rewrite the mac client's chat transcript to native throughout the years failed due to lack of resources and/or corporate bullshit.
Yes, I overstepped, the sidebar was probably an NSTableView and the text input looked native enough. I still find it comical that the star of the show, the transcript, was a webview.
> due to lack of resources and/or corporate bullshit
I suspected as much, and again, it's comical. Trillion dollar company. Apple, of all companies, can afford to pay Meta-level comp or higher, roll out big recruiting efforts across the US and Canada and let engineers work in more cities, and yet they've only barely started in the last couple years.
The best part is that when Azure Explorer crashes (days that end in "y"), it wipes out all of my VSCode windows, too!
Because of this, I write longer emails in a text editor and paste them into Outlook when I'm done.
Try this if you haven’t already: https://superuser.com/questions/989951/is-it-possible-to-dis...
I am a noob at startups (working on some of my own ideas), but I almost always start with the simplest setups (Zapier + Google Forms, tbh) to try some process with 5-10 people. If that seems promising, I'll build an app over a weekend/5 days using Flutter, and then get that into people's hands. I've not used Electron, but my thinking is that if I can validate an idea and come up with a good business model, I can grow to a point where I can hire actually competent engineers to build the best experience for the users.
The primary goal (at least as it seems to me) is to solve some problem well enough that people are comfortable making some tradeoffs (mostly unnoticed by normal people; let's be honest, otherwise they wouldn't even want to try a bare-bones Google Forms + email setup) while providing far more value to them.
Ultimately these are all tools, use the right one where it matters until it needs to be upgraded or changed.
When I was doing some games programming, using Imgui for little widgets was actually quite nice. But I wouldn't want to use C++ generally for an application.
What do people actually recommend? Electron seems to be the go-to.
I’m a little offended by the comment about WPF; WPF is stuck in 2006, but at least it's about as polished as you can expect from a 16-year-old product. Microsoft should aspire to make their other UI offerings half as good as WPF.
UWP is like WPF, but with critical features stripped out, broken core functionality (e.g. renderers for paths/SVGs are super broken), and an 80% chance of facing a COM exception if you try to do anything remotely interesting.
I’m learning web tech so I can make a career change - I built my career on .NET client technologies and I deeply regret it.
Take it from a (former) Windows fanboy, choose Electron.
In that case it is a question of resources right? Nobody in our org knows how to do that thing well, so we could have them go learn or ... use Electron and get the app out the door...
The choice for an organization with limited resources seems obvious.
In basically every single text field/area in Windows, `CTRL Delete` deletes the current word and leaves the space before it, so if you type "Hello Hello", hit ctrl delete, and then type "Goodbye" your result will be "Hello Goodbye".
In Windows Mail, for reasons unknown, it deletes all the way to the last character of the previous word. So if you type "Hello Hello", hit ctrl delete, and type "Goodbye", you get "HelloGoodbye". It's baffling. Why make a unique UI element just for this that's inconsistent with the rest of the operating system?
You should check out Compose Multiplatform - https://www.jetbrains.com/lp/compose-mpp/
Most development time isn’t typing code but actually architecting the solution. You will go through multiple iterations to get your UX right. Lots of things like graphics, GUI mock-ups, database design, file formats can be reused.
In cross platform development one too often ends up wasting time on problems introduced by buggy layers on top of the native frameworks.
Every time I have compared my development speed using native tools vs a lot of the cross-platform web stuff, native wins hands down.
But it depends on where your skills are. If you are more skilled with web technologies then your experience will likely be different.
We want to try running small teams using the best stack for each platform.
In fact, I'll usually take reduced overall functionality for a better platform-native experience.
> You will go through multiple iterations to get your UX right.
For high-level UX flows, this is where cross-platform frameworks could have an advantage, because you can iterate without redoing the work for each platform. It might even make sense to use a web framework for the initial prototype and early user trials, and then reimplement in native for performance/details once it's finalized. As a bonus, if you're willing to maintain the web version you could use it as your browser-accessible version.
A/ Use cross-platform tooling to iterate effectively.
B/ Iterate on only one platform, then build out other platforms as your UX stabilizes.
We chose B but acknowledge it’s an unusual option, and there's a 30% chance in our minds that we’re wrong /shrug
HN loves blaming companies like Slack for shipping “crappy bloated Electron apps”, but what we should be doing is demanding more from Apple, Microsoft, and Google to ship sane APIs to make this work. Blaming vendors is essentially victim shaming.
Sadly I know that’s a naive optimistic pipe-dream because these corps would lose power over their user base.
The “one code base” is allowed to have platform ifdefs, doesn’t have to be magically perfect and handle all edge cases.
Because this is table stakes for Electron, cross-platform libs, and game engines. Getting Apple, Google, and Microsoft (because honestly, if those three can agree, Linux will just follow) to agree on something, anything, would be a huge milestone, but I’m sure they would all see it as erasing their moat.
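As a trivial sketch of what "platform ifdefs in one code base" can look like in C++ (the helper function and the notification plumbing named in the comments are made up for illustration):

```cpp
// Hypothetical helper showing per-platform branches inside one shared code base.
#include <string>

void show_notification(const std::string& message) {
#if defined(_WIN32)
    // Windows branch: call into the Win32/WinRT toast notification APIs here.
#elif defined(__APPLE__)
    // macOS branch: call into the UserNotifications framework here.
#else
    // Linux branch: fall back to libnotify / a D-Bus call here.
#endif
    (void)message;  // placeholder so the sketch compiles on any platform
}
```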
But I do agree with you. Developing backend cloud applications is just easier on Linux. You will only succeed at prying Windows + Excel from Finance's cold, dead hands. Most companies that aspire to growth will get there. Why shoot yourself in the foot for 160 MB RAM savings?
1. You can't use any specialized features or APIs in one OS unless you implement it in all others.
3. Increased test/QA surface area, which becomes difficult - especially supporting Linux, which has relatively few users but still requires a proper time investment, something startups cannot afford.
I really don't see the point of this product, but there are plenty of small to medium teams which are mac only, and it's better to be an interesting niche rather than yet another collaboration and huddle-type software.
We’re starting with a small but opinionated target audience, and building out for other platforms from there.
> just for some technical clout
I think we both know that’s oversimplifying :)
And if we wanted to do a web app (which is fantastic from a business perspective for a lot of reasons), it would require some groundwork, of course, but nothing like doing it from scratch.
I think you need to clarify what "our company" means, so we can evaluate the significance of your statement.
Is "our company" 20,000 seats in 43 countries?
Or is "our company" Jack, Joe, and Fred working in their bedrooms and collaborating over ICQ?
I worked at a 300 person startup. No interesting cross-functional team was pure-macOS (80% macOS, mix of Windows and Linux for the rest).
I worked at smaller startups (<20), and they still weren't all macOS.
Which is why we’re generally exposed to so much mediocre software. Business is king and most companies do the minimum possible to get customers and grow. And if they for some reason do engineer to higher quality than their competitors, it’s quite likely that they will be outcompeted.
For Windows, I'm not counting games because that feels like its own thing.
I'll probably catch a lot of hate for this, but Snap is a much smoother experience and is well integrated with Ubuntu, even if the Snap Store kinda sucks... GNOME Software is worse.
Needless to say, this is not a great idea in concept, practice, or its current execution.
I think a much better idea would have been to copy what NixOS does. Yes, your developers have to do a little more work, but you can finally have reliable builds, fix your dependency graph, and actually run your software natively. I'd love for a more uniform "AppImage" style of installation, but Flatpak is definitely not it. Too many sharp edges.
It’s not perfect… the iOS ecosystem is just different enough that common frameworks don’t always go up to the UI layer (UIKit and AppKit are different, although most of their dependencies are available on both platforms), and that sometimes manifests itself in slightly odd-behaving apps on macOS (Messages, News, Stocks, etc. are just UIKit apps recompiled for the Mac), but it still feels like the different stacks share enough similarities that, from a developer’s perspective, it doesn’t feel too jarring.
I think this can be attributed to the fact that Objective-C was such a perfect design choice for its time. While Microsoft was busy being ashamed of COM and spent a lost decade chasing .NET as the New One Way to create Windows apps, only to double back and shore up COM again through the WinRT stack, Apple (post-NeXTSTEP) had that problem fully solved with Objective-C. While MS’s environment needed a separate IDL and object model to allow ABI-resilient DLLs (C++ is not ABI stable; it needs COM to do that), Objective-C solves this problem naturally through its fundamentally dynamic nature.
- In C++, you can’t just construct an object that’s defined in another .dll. You may know its size now, but you can’t be sure that its size won’t change in the future, and you may not be in a position to recompile your code (say, when a new Windows version comes out and your customer tries to install your program). So to allocate an object, you have to ask COM to do it, which means following the relevant conventions and using COM API methods to ask the remote class to create an instance of itself (CoCreateInstance, etc.), and refcounting the resulting pointer with AddRef/Release (there's a rough sketch of this ceremony after these two points). This isn’t a bad design choice, but it’s just different from how C++ programmers are used to constructing objects.
Contrast this with ObjC, where `[[SomeClass alloc] init]` is always the way you construct objects, and there’s no jarring change between “how you allocate objects defined in another framework” and “how you always allocate objects”. And refcounts are automatically managed by ARC, which is part of the ObjC compiler.
- COM works by having objects query each other’s interfaces and exchange vtables around so that the function offsets are known, which requires special calling conventions that are different from how normal C++ code works… You’d need to call QueryInterface on a COM object to get a pointer with the right vtable, so that you can call the right methods on it, etc. Various high-level wrappers (MFC, ATL, etc) abstract this from you, but they’re imperfect, and the right wrapper depends on the use case. Not to mention, you only get access to the COM APIs which are actually exposed via these wrappers, so they won’t help you with “general purpose” COM code. I have been out of that game for a while but I’m assuming WinRT has made a lot of this simpler.
Contrast this with ObjC, which simply always uses dynamic dispatch with objc_msgSend to actually invoke methods on objects. It’s “ugly” (`[[[lots of] brackets] etc];`), and “slow” (because the method dispatch is essentially passing a string around and dynamically discovering which code will handle it) but it’s at least uniform.
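To make the COM side of that contrast concrete, here's a rough, non-authoritative sketch of the ceremony in C++. IExample, its CLSID, and DoSomething are hypothetical placeholders; CoCreateInstance, the IID-based interface query, and Release are the real COM calls described above:

```cpp
// Rough sketch, not production code. IExample / CLSID_Example / DoSomething are
// hypothetical placeholders (nothing is actually registered under this CLSID);
// the COM plumbing itself (CoInitializeEx, CoCreateInstance, Release) is real.
// __declspec(uuid) / __uuidof are MSVC-style.
#include <windows.h>
#include <objbase.h>

struct __declspec(uuid("11111111-1111-1111-1111-111111111111"))
IExample : public IUnknown {
    virtual HRESULT STDMETHODCALLTYPE DoSomething() = 0;  // hypothetical method
};
static const CLSID CLSID_Example =
    {0x22222222, 0x2222, 0x2222, {0x22,0x22,0x22,0x22,0x22,0x22,0x22,0x22}};

int main() {
    CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);

    // You never `new` the object yourself: its implementation lives in another
    // DLL whose layout you can't assume, so you ask COM to construct it and
    // hand you back a pointer with the right vtable for the IID you requested.
    IExample* example = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_Example, nullptr, CLSCTX_INPROC_SERVER,
                                  __uuidof(IExample),
                                  reinterpret_cast<void**>(&example));
    if (SUCCEEDED(hr)) {
        example->DoSomething();
        example->Release();  // manual refcounting; contrast with ObjC's ARC
    }

    CoUninitialize();
}
```

All of that is roughly what `[[SomeClass alloc] init]` plus ARC gives you for free on the ObjC side.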
COM was unwieldy to non-microsoft engineers, but Microsoft mostly “solved” this by creating friendlier programming interfaces on top of it. They didn’t really want engineers being confronted with COM’s complexity, and so technologies like MFC, ATL, (OG) Visual Basic, and then later .NET were touted as solutions that had a better developer UX. But that’s just the problem… there are too many solutions to this problem in Windows, and nobody could agree on the “right” way to write windows apps, so everyone either did some flavor of either raw win32 programming (if you’re up for doing a lot of work yourself), or some rotating framework du jour which would hide the complexity for you.
But on Apple’s platforms there was no such awkwardness, and Apple just kept innovating. ObjC’s internals were not some implementation detail to be hidden but the fundamental way you do all object-oriented programming on the system, so there’s no reason to have different conflicting wrappers on top of it. Developing for AppKit was just plain old ObjC, and you get ABI resilience for free because of its dynamic nature.
Anyway, this was a long rant, but I really do feel like the lowest-level technologies are to blame/praise for the situation with OS software in general, and COM was such a sore spot for Microsoft that their attempts to paper over it just made the platform worse and fragmented things, leaving developers to fend for themselves with how to develop applications.
I’m actually sorta worried about Swift as the replacement for ObjC, because it’s not as “fundamentally dynamic” as ObjC is. Apple is aware of this, and has gone to great lengths to enable library evolution/resiliency in Swift frameworks, but they’re coming at it from a totally different place from how ObjC solved it. In ObjC, it’s just `objc_msgSend` all the way down. In swift you have value witness tables, protocol witness tables, reabstraction thunks, and all this massively complex infrastructure to make it feel like objects in another Framework are the same as the objects you define locally, and it all sorta works, but… it’s just not nearly as elegant. It’s technically faster though (since method dispatch through these witness tables is likely faster than objc_msgSend), so at least it was done for a reason. But I’ll always have a special place in my heart for ObjC due to just how perfectly it fits into the problem space.
I enjoy Swift but it feels like building on quicksand.
I'm more worried about Apple's Mac approach becoming Windows circa 2005: currently you can develop a Mac app using AppKit, which is slowly rotting away, or using either Catalyst or SwiftUI, both of which are playing catch-up with iOS and not really getting there. Too many options, none of them truly polished anymore.
My experience was primarily driven by C/C++ development, though. I've heard better things about Swift, but I just can't grok how the hell Xcode projects are supposed to be used at all, and the tool is so slow it's unusable.
I'd prefer Electron even if it wasn't cross-platform. I could write the app 4 times in Electron faster than installing Xcode.
People who say such things have no idea what they are talking about.
Debugging with VSCode is a joke, and there is no GUI preview for Cocoa/UIKit/SwiftUI.
Xcode has many flaws, but it's properly integrated and helps you make beautiful native apps.
VSCode is a UX nightmare
I love this community, but the people who are comfy configuring Linux and can't understand why everyone doesn’t roll their own solutions generally don’t know shit about what good UX is.
VScode ain’t it, that’s for sure.
I'll say that working on TS/React in VSCode is head and shoulders above Swift in Xcode from a DX perspective. I had to routinely restart Xcode to verify that type errors were actually there, it was incredibly slow, and the Swift compiler would give up on highly polymorphic types and red-line them.
12 GB to download, more sluggish than the worst Electron app, terrible UX, undocumented config files messing with git, terrible error messages, app upload so unreliable that even Apple had to create a separate app for it, and it's tied to the OS version.
Developers have no choice, it's clear and they know it.
The comment about the difficulty of hiring is an interesting one. I haven’t come across a helpful, beautiful and interactive resource to teach those paradigms, as you find created for any language/framework combo used for web development. There also aren’t many huge desktop teams outside of Apple, Microsoft, AgileBits, etc., so it’s hard for someone to find a starter role where it’s okay to contribute minimally as they learn. Lots more lone wolf or tiny team programming where any kind of apprentice would just be a drag.
The problem is convincing someone to try. There's a reputational issue "AppKit is haaaaard" "AppKit is baaaaad" "AppKit is dyiiiiiing". There's also a career issue as it's felt that building native mac apps is a career dead end.
You could make a minimal native app shell for each platform, with the most relevant integration points, and then host a native webview within that shell that shares its UI across platforms.
Not quite as nice as true native, not quite as fire-and-forget as Electron, but certainly more efficient and integrated than the latter and a lot more cross-platform than the former.
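One way to sketch this "thin native shell + system webview" idea in C++ is the small open-source webview/webview library (https://github.com/webview/webview), which wraps WebView2 on Windows, WKWebView on macOS, and WebKitGTK on Linux. The URL below is a placeholder and the exact API surface can differ between library versions, so treat this as a shape rather than a recipe:

```cpp
// Minimal shell sketch using the webview/webview library: the window is native,
// the shared cross-platform UI is whatever the navigated page serves.
// The URL is a placeholder; check the docs of the library version you pin.
#include "webview.h"

int main() {
    webview::webview w(/*debug=*/false, /*parent window=*/nullptr);
    w.set_title("My App Shell");
    w.set_size(1024, 768, WEBVIEW_HINT_NONE);
    w.navigate("https://app.example.com");  // placeholder for the shared UI
    w.run();
    return 0;
}
```

The per-platform "shell" work then lives around this: menu bar, tray icon, notifications, file associations, and whichever native integration points matter to the app.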
It would have been nice to have a real cross-platform UI library that uses the native controls everywhere, kinda like what wxWidgets hoped to be. Unfortunately Microsoft, in their push to make Windows desktop development into a Lovecraftian nightmare, made that all but impossible.
> By writing code in a non-standard fashion, we took on overhead that we would have not had to worry about had we stayed with the widely used platform defaults. This overhead ended up being more expensive than just writing the code twice.
Plus, what you're talking about there is the opposite of what I'm suggesting: you wrote two UIs with shared C++ logic, whereas I'm suggesting two native platform layers with hosted webviews.
Sadly I agree. However the blog post I linked describes their mobile strategy. Sorry about the confusion.
FWIW The desktop app now uses the strategy you suggest: native with many hosted webviews.
> what you're talking about there is the opposite of what I’m talking about
Sorry, skim read and missed that! See above comment re Dropbox desktop using your approach though.
Would also love any other recentish articles about hiring macOS developers if anyone’s got em.
I’m very bearish on native macOS after the most stereotypical “worse is better” experience in the late 2010s working on a macOS (now Electron) product, but am increasingly reconsidering.
It’s too early to say and I’m too cynical to anyways, but it’s starting to feel like the release of the M1s really has injected a bit of life and taste back into the scene.
IME, the most value-generative and knowledgeable frontend/UI people I've worked with have all had really "weird" backgrounds and would fail a "standard" software engineering interview loop, and most would fail hard.
I've worked at a bunch of companies that tried to have the same hiring process for frontend and backend engineers, and it's never made sense to me.
If anything, this creates a justification for using Electron. "Macs are too fast, even when using Rosetta, so who cares if we use Electron?"
After all the worst possible dev environment will get extensively used if there's money to be made using it (I'm thinking of talking with PS2 developers 20 years ago…).
GTK is a non-starter for something professional, at least if the experiences of the GIMP and Inkscape on macOS are representative. (Also, it's C, which does avoid some ABI problems, but it's hardly suited for an object-oriented environment)
If you want to maximize code-sharing, you would use webtech for all platforms, including iOS and Android. Yet most organizations don't, because they know that that gives them a poor experience, and they're willing to expend the effort to do native ports to those platforms.
Alternatively, if I'm not making a web application, I literally don't care about the fact that I could share my desktop code with it, because it doesn't exist.
...and my point is that nothing about Electron makes it uniquely able to solve this problem. Someone who can't use my app would also appreciate GTK/Qt if it allowed them to use it.
Some people are also unable to use apps because they use Electron. My computer can't play some video games while I'm in a Discord call because of how many resources it uses, but it's perfectly fine in a Mumble call, because Mumble has a reasonably-performant, non-Electron client.
Planning to release this next week!
Replace 'macOS' with any native platform name, and you'll get the whole reason behind Electron proliferation. It's just cheaper to make something with Electron, if you are a software development business.
I've never minded the speed, expressiveness, intuition, or integration of Electron. All the value is in portability. A desktop app limited to just one operating system is near useless today.
TBH only time will tell whether our decision to go native-first, then cross-platform (vs Chromium-based cross-platform) was the right call.
Is there some tech that's making it easier? (Maybe easier memory management, big enough market on each platform, React Native on the desktop, or many native-mapping frameworks, or Swift UI on macOS) Maybe people are tired of Electron?
I'm only talking about devs because many users won't really explicitly notice.
They notice, but "devs" tell them that software just gets more complex so they need to buy beefier machines, which is actually shitty advice in a world where you can't upgrade ram or anything.
devs are lazy, and don't care about the end users. "Works on my machine (64GB RAM and a zillion cores)"
The bigger the audience, the more impact performant software has
It's not even a contest: unless you're a corporation making tons of money, you either pick an OS and write a native application ignoring other OSes, or you use Electron (or some alternative).
I have 4,000 people who purchased the app and not one complained about RAM or some other nonsense people keep hurling as insults at Electron.
https://videohubapp.com/en/ & https://github.com/whyboris/Video-Hub-App (MIT open source)
Re: Electron comments: don't take those comments on Electron personally, it is just a means to an end. Personally I avoid Electron because the JS ecosystem makes me sad every time I touch it. But that's just me. :)
TypeScript makes coding a super-pleasure, it's not your grandma's JS ;)
Just curious, how much RAM does your app typically use?
Are you measuring memory usage the correct way? I think the correct way is to measure all physical memory mapped into all of the Electron processes (there are several of them), discounting shared pages and adding swapped-out pages (because they can potentially be loaded back into main memory).
On Linux I usually add up PSS and swap usage parsed from /proc, which should give correct numbers (PSS is an additive value). For example, all processes of Skype on my system take approximately 459 MB of RAM and swap.
If you only measure one process instead of all of them, or only private memory pages (excluding shared and swapped-out pages), then you get the wrong number. I tried to search online for what the "Memory" column represents in Windows Task Manager and the Mac Activity Monitor (which people often use to estimate memory usage), but couldn't find any information. So I cannot be sure that they show true values, or that those values can be added up to find the total memory usage of several processes.
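For what it's worth, here's a rough sketch of that approach in C++ on Linux. It assumes a kernel new enough to provide /proc/&lt;pid&gt;/smaps_rollup (which aggregates Pss and Swap per process); the PIDs are hypothetical and would normally be discovered by walking /proc for the app's process tree:

```cpp
// Sketch: estimate a multi-process app's footprint by summing PSS + swap across
// its processes, using /proc/<pid>/smaps_rollup (available since Linux 4.14).
// Reading another user's processes requires appropriate permissions.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

long pss_plus_swap_kb(int pid) {
    std::ifstream rollup("/proc/" + std::to_string(pid) + "/smaps_rollup");
    std::string line;
    long total_kb = 0;
    while (std::getline(rollup, line)) {
        std::istringstream fields(line);
        std::string key;
        long value_kb = 0;
        // Lines look like "Pss:   123456 kB"; PSS is additive across processes.
        if ((fields >> key >> value_kb) && (key == "Pss:" || key == "Swap:"))
            total_kb += value_kb;
    }
    return total_kb;
}

int main() {
    std::vector<int> pids = {4242, 4243, 4244};  // hypothetical: one app's processes
    long sum_kb = 0;
    for (int pid : pids) sum_kb += pss_plus_swap_kb(pid);
    std::cout << "Approximate footprint: " << sum_kb / 1024.0 << " MB\n";
    return 0;
}
```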
Yes, I picked up SwiftUI last year initially because Electron/JS/TypeScript just never felt fun. Yet, I wanted to build out a few desktop/work-related ideas.
Then, after going through the Apple Developer tutorials I noticed how easy and satisfying working with SwiftUI actually is.
So, oddly enough, native macOS with SwiftUI is now one of my go-to for most of what I want to build (that isn't web-first).
Native apps feel and fit better and consume less resources. Unfortunately for people like me, market forces mean they'll remain niche relative to cross-platform apps.
Building for Electron / the web is so much cheaper considering the frameworks, transferable knowledge, and virtually free support for all platforms. People have always hated on Electron, though, and that's because it's also always going to be a worse experience.
I do think that we're starting to understand the costs of "write once, run anywhere" better, too.
There are endless amounts of developer chatter about using / not using Electron.
As a user ... I really don't care what they use, the app runs well or it doesn't and I don't care why.
Well, Facebook almost lost the game completely because they were using a hybrid app... Users cannot articulate why something is ruining performance (maybe not their app, because it simply has a higher priority set...).
PWAs, for example, can have downloadable bundle sizes < 5 MB, run in < 50 MB of RAM, and have plenty of headroom for lots of features. You only need Electron, with its own copy of a browser, if the browser sandbox is unacceptable for your use case.
Electron wouldn't even be able to run on it.
While I haven't used SwiftUI directly (only reviewed some code recently) it looks like it makes it simple to build good-looking native desktop apps.
Unfortunately SwiftUI is not very cross platform and the other toolkits (Gtk, Qt, Swing, etc) are lagging behind in terms of polish and controls.
My general conclusion: desktop applications are not dead yet, as shown by the popularity of Electron, and SwiftUI shows that if the toolkit is good, developers will build native desktop apps. Now it's up to the other (cross-platform) toolkits to step up their game so we can get rid of the Electron-based memory hogs.
And although you credited the macOS-specific application boom entirely to the good toolkit, I think the GTK/GNOME app resurgence comes from a synergy of three factors:
1. The rise of more ergonomic programming languages (Rust, Go, ...) that make desktop app development more fun (GTK programming in C works, but it is like pulling teeth, so much boilerplate).
2. The rise of new app distribution channels on Linux (e.g. Flathub)
3. And the improvements to the toolkit.
OTOH, Qt for example still tries to use native dialogs, so when you click "open file", you get the native OS open dialog, etc. Sure, if you're an expert you can find places where they aren't quite right, but most people won't notice the difference. Plus, if you're really OCD about it, given you're in C++ you can just add your own OS shim layer (or custom components) to correct the slight differences you might see here and there by calling the OS services natively.
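As a hedged illustration of that point, this is roughly what the Qt side looks like: the static QFileDialog helpers use the platform's native dialog by default (you have to opt out with QFileDialog::DontUseNativeDialog), so the file picker the user sees is the OS's own:

```cpp
// Minimal Qt sketch: QFileDialog::getOpenFileName delegates to the native OS
// open dialog by default, not a Qt-drawn one.
#include <QApplication>
#include <QDebug>
#include <QDir>
#include <QFileDialog>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    QString file = QFileDialog::getOpenFileName(
        nullptr, "Open File", QDir::homePath(), "All Files (*)");
    qDebug() << "Selected:" << file;
    return 0;
}
```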
Again though, serve whatever niche fulfills you. I don't even know what their app is/does after reading this article, so I'm going to assume it's not a life-changer so much as it is a native version of something we already have.
Re the article not explaining what Remotion does, we're trying to avoid plugging our app itself in our technical posts—we figure anyone who's interested can click onto the main site.
Super annoying that you want to schedule a call "or" get an invite. If an invite works, people should not need a call.
> "Want to skip the call? Ask any Remotion user for an instant invite."
I'd assume you're a user. I'm asking for an invite. Email in profile. :-)
Reasons they sometimes switch from long-lived Meet calls:
- Free up 200-300 MB of RAM when in call.
- Features specific to coworking (e.g. shared music DJing) or eng pairing (e.g. multi-way screen share).
- Presence when out of video calls. For example, you'd see if anyone on your team is in the Meet without having to ping each other. Leads to more unplanned hang outs.
I do understand that macOS APIs and libraries are easy to use, development experience is (at least in my opinion) better than with Electron + framework of your choice, and you can achieve better integration with the OS, but I don't think that saving 200-300 MB is really important to the end users.
So some people are very glad to save RAM, CPU, etc. Others don't care but love shared music DJing etc...
Can you maybe expand a bit on what differences you encountered between using Electron vs native APIs? That might also be a good post, if you are willing to share the details.
Based on my own experience it should be called the Electron status item.
If they banned Electron, the only thing that would change is you would now run the app in Chrome or not at all.
- look good
- fast by default
Microsoft, on the other hand... they still don't know which UI framework to use, and all of them are bloated AF.
Why would you do that? I have never been this pissed off at a product.
Our website does make it seem to some folks like your camera would always be on, but that’s emphatically not the case. (We need to fix this on our site.)
The only thing we show when you're not in a call is online/offline status. You can then opt in to sharing your calendar and whether you're in a Zoom/Meet call. We chose those status automations because they're things you can find on your coworker's shared calendar anyways.
Suggestions for how to clarify on the site are welcome.
The most important point of all: when you go offline, it is now amplified in terms of visibility. For example, when I work remotely, I'm not at my desk 24/7. Heck, I may not even be at my desk for hours when others are working, but I may work at night to compensate, when it's usually peaceful. But a manager's perception of me might be that I'm unavailable most of the time, when that's not true. This may affect my performance reviews, etc.
If you had pitched this as an alternative to Zoom with a privacy angle, that would be a real sweet deal for me. Perhaps you could try to have a separate sales page for it as a pro-privacy Zoom alternative; I have a feeling it may be better received by remote workers like us than what is pitched on the homepage.
I hope this feedback helps :)
> When 7/10 team mates comply and keep it on always, the pressure is on the remaining 3 to turn on their cameras as well.
This is our site being unclear again. Remotion coworking rooms default to audio and video muted. However, we struggle with how to represent Remotion on our site: when you're out of a call or your camera is off, we show users as "selfies" because we think those feel more human than just a green dot. Website viewers often interpret these selfies as live video.
> when you go offline, it is now amplified in terms of visibility [...] This may affect my performance reviews
Candidly, I think that the root symptom of what you're describing is your team/manager. Let me ask you this: Do you want to work on a team that wants you to be at your desk all the time?
Our philosophy at Remotion is to be radically transparent about breaks, to the point of celebrating them. Apart from a few hours of meeting overlap in the middle of the day, we work on our own terms, including taking breaks for walks/errands/etc in the middle of the day. It goes to the point of actively posting photos of our midday breaks in an OOTO channel. Teammates react to those, making a point of saying: "Hey, prioritizing yourself is good. Being offline is good."
> pro-privacy zoom alternative
Interesting, will bear in mind.
Even though in-app you encourage photos, maybe for the marketing website you could display this as avatars/cartoon-versions of the person (which presumably people can set, if they prefer)? It'd make clear that live video isn't mandated.
Check your employee handbook. Chances are pretty good you're not allowed to actually do this.
It’s important when applying for a remote job to figure out what kind of remote culture the team has. Some are reading and writing only, some like frequent short video interactions, some longer video meetings. This seems optimized for the second type of culture with the option to use it as the third type.
Spot on re culture:
- Teams who are strict-async don’t want Remotion.
- Teams who are distributed but still like hanging out want Remotion.
Re video on/off, the defaults in the app can influence a lot here. Our coworking rooms default to audio and video off.
Translation: "We plan to deprioritize Android in the same way that we're currently deprioritizing Windows and Linux."