When a rewrite isn’t: rebuilding Slack on the desktop (medium.com/slackeng)
289 points by kgraves on July 22, 2019 | 391 comments

"Conventional wisdom holds that you should never rewrite your code from scratch, and that’s good advice."

Speaking as someone who has done a fair number of rewrites as well as watching rewrites fail, conventional wisdom is somewhat wrong.

1. Do a rewrite. Don't try to add features, just replace the existing functionality. Avoid a moving target.

2. Rewrite the same project. Don't redesign the database schema at the same time you are rewriting. Try to keep the friction down to a manageable level.

3. Incremental rewrites are best. Pick part of the project, rewrite and release that, then get feedback while you work on rewriting the next chunk.

Your third point is arguing against the premise. A "partial rewrite" isn't a rewrite. That's the entire point.

At some point a partial rewrite might become a complete rewrite. It depends. Ship of Theseus[1], etc.

1: https://en.wikipedia.org/wiki/Ship_of_Theseus

The easy way to fund a new military plane is:

- remove one bolt and raise in the air

- slide out rev 30 "parts" from beneath bolt

- slide in rev 40 "upgrade parts" (fuselage, wings, engines, etc) underneath the bolt

- fly "upgraded plane" without a lot of pesky "new plane" studies

Additionally, in parts of California, how to build a house:

- find existing house

- pick a wall

- remodel everything except for that wall

- reap tax benefits in your 99.9% new house.

That's sometimes a good strategy, but it breaks when you do too much at once. See Boeing's recent fiasco, where they did basically just that with parts and a fuselage that weren't really compatible. The metaphor extends: replace too much at once and something will break.

I would say the Boeing incident more so favors the counterpoint: Boeing tried to keep adding features to an airframe (code base) that could not support them.

The company I work for actually did this with our main product.

Gradually re-written from Java (JBOSS) to Python over about 10 years. Basically the Python side knew what URLs it could handle and proxied the rest over to Javaland. We ended up shutting down the last Java bits early last year.
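This "strangler" style of routing is simple to sketch. A minimal, hypothetical version (the route prefixes and handler names below are made up for illustration, not taken from the actual system) might look like:

```python
# Hypothetical sketch of the routing described above: the new Python app
# serves the URLs it has reimplemented and proxies everything else to the
# legacy Java app. Prefixes and handlers are invented for illustration.

MIGRATED_PREFIXES = ("/reports", "/users")  # routes already rewritten in Python

def handle_new(path):
    return f"python handled {path}"

def proxy_to_legacy(path):
    # In production this would forward the request over HTTP (or a reverse
    # proxy rule would do it); stubbed here to keep the sketch runnable.
    return f"java handled {path}"

def dispatch(path):
    if path.startswith(MIGRATED_PREFIXES):
        return handle_new(path)
    return proxy_to_legacy(path)

print(dispatch("/reports/daily"))    # served by the new code
print(dispatch("/billing/invoice"))  # still proxied to the legacy app
```

As more endpoints get rewritten, prefixes move into the migrated set, until eventually the legacy side can be switched off.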

I've done that quite a few times back in PHP-land.

Partial rewrites split by domain/model/REST endpoint, with a proxy routing requests to whichever webapp (old or new) handled them.

No breaking changes from the outside: either share the parts of the code that are still good (most of the business logic might be) or fork them (and then refactor, and maybe fix bugs twice), and after a while you can switch off the old part.

Works like a charm with added benefits of being in the same language to avoid wasting time. But the base idea works even if you use different languages.

It's also not only for web apps. For example, Apache Storm can run Bolts (been a while) in several languages, so you can easily rewrite parts if you can serialize your data in and out of it.

Something that takes 10 years to rewrite (even gradually) has no business being written in Python. Yikes!

Why not?

I would argue that if you already have a lot of in-house knowledge with Java then you might as well stick with it (and just not make whatever mistakes you made last time around) but Python seems like a reasonable alternative to me.

At the time there was a strong internal directive that it not only not be Java, but not anything that even looked a bit like Java if you squinted at it (e.g. dotnet) - plus we've always been an "anything but Windows" shop anyway.

Sounds like a decision born of vanity and pedantry. "These VMs are too good, and the type systems too useful, to hell with them!"

There's a lot of culture that comes with a language. 10 years ago, Java and C# were associated with enterprisey complexity and interchangeable programmers working under architecture astronauts, Python was used by the cool kids. Keeping to simplicity is hard when the available libraries and people you recruit were used to https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...

How stable has Python (and its ecosystem) been over that time?

What would you write it in?

At the time the re-write started, Django hadn't yet seen a public release, Rails barely existed (and had zero traction, and Ruby-the-runtime was horrible), dotnet was barely a thing, and Clojure and Go didn't exist yet. 2005 was a weird time.

Realistically the only options at the time were Python, PHP (4, not 5), or Perl.

I didn't come in until much later, but I don't really think there was a better option at the time.

I'm guessing this is because some people only think of Python as a prototyping language, for performance reasons.


It would've been Java 5.

do you actually know what that would mean?

But that's not what Joel was arguing against when he first wrote the article against rewrites.

Literally referenced in the article

I prefer the concept of Trigger's broom. ;)

But the alternative to a rewrite is gradual refactoring, i.e. Ship of Theseus, i.e. not a rewrite.

> Don't redesign the database schema at the same time you are rewriting.

Chances are, that's the problem though. You have to re-write because your design is bad enough that it's necessary. The only time you'd re-write without changing the design is when changing languages or platforms.

You could do a rewrite that takes into account the bad old database schema, but doesn't repeat the wrong code design choices based on it.

Then, when that's ready and stable and working with the existing schema, it's way easier to do a refactor and adapt to the new schema too.

"Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they'll be obvious." -- Fred Brooks, The Mythical Man-Month

Attempting a design re-write without changing the data structures will just result in the same design. And you can't change the data design without then having to change the code to match.

I've done several huge rewrites over my sordid career, and a strategy that works well is to abstract your bad data design behind a good one. This involves some kind of transformer layer between your bad data design and your abstracted better one. There are of course performance hits of some kind no matter what, but the hope is that they're worth the benefits of the cleaner code/architecture that the better data design affords. It's difficult, I'll grant, much harder than simply abstracting bad code, but it's possible, and it also makes it that much easier to replace the actual data design at some point in the future and remove the transformer layer. Hope that makes sense!
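As a rough illustration of that transformer-layer idea (all names here are hypothetical, invented for the sketch), the new code programs against a clean model while an adapter maps it onto the legacy row layout:

```python
# Hedged sketch of a "transformer layer": the rewritten code consumes a
# clean model, and an adapter translates it to/from a (hypothetical)
# legacy schema with cryptic column names and messy values.

from dataclasses import dataclass

@dataclass
class Customer:
    """The clean model the rewritten code uses."""
    id: int
    name: str
    email: str

class LegacyCustomerAdapter:
    """Maps the clean model onto the legacy row layout."""

    def __init__(self, legacy_store):
        self.store = legacy_store  # e.g. a DB-API connection in real life

    def get(self, customer_id):
        row = self.store[customer_id]  # legacy row: cryptic keys, dirty data
        return Customer(id=row["CUST_NO"],
                        name=row["CUST_NM"].strip(),
                        email=row["EMAIL_ADDR"].lower())

# Replacing the schema later only means writing a new adapter; none of
# the code that consumes `Customer` has to change.
legacy_rows = {7: {"CUST_NO": 7, "CUST_NM": "Ada ", "EMAIL_ADDR": "ADA@X.COM"}}
repo = LegacyCustomerAdapter(legacy_rows)
print(repo.get(7))
```

The performance cost lives entirely inside the adapter, which is also the only thing you delete once the underlying schema is finally fixed.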

I've done the transformation layer in the small to keep components running during a re-write/refactor. Create a new structure and write some code (in the DB or otherwise) to make the new structure look like the old structure long enough to eventually replace that old code.

I can't imagine doing that as an independent step towards a rewrite.

Something similar I've seen is a bad data design that is embedded into a giant monolith, and has dependency tendrils everywhere. In that case it's worthwhile to factor out the logic (which includes cutting the dependencies) even leaving the schema intact. Once the problem layer is sequestered e.g. into its own microservice it is hopefully possible to release it on its own cycle much faster than the monolith, paving the way to improving the schema next.

>Attempting a design re-write without changing the data structures will just result in the same design. And you can't change the data design without then having to change the code to match.

Sure you can. You decouple your data structures from your schema with an abstraction layer, facade, etc.

Exactly. If I'm not refactoring the underlying data structures....the refactoring isn't a very interesting/difficult one at all

That would kind of be the point.

Changing languages or platforms is a frequent cause for a rewrite. (Especially if the old one is Modperl 1.)

On the other hand, if you need to refactor the database schema, don't rewrite the code. Just do the minimum modifications to keep the app working.

Just don't do both at once.

Point 3 seems to redefine “rewrite” as “partial rewrite”, which is essentially in the spirit of “never rewrite the [whole] code[base] from scratch”. Am I missing something?

Edit: Clarified quote of original rule.

Take a typical CRUD app. It'll have, say, something for inserting data, editing data, viewing data, and reporting. Pick one of those chunks (reporting is good; it's usually easy and gives insight into the data) and rewrite that. Release it (if it's a web application, modify the old app to redirect reporting requests to the new app). Lather, rinse, repeat until you have a completely rewritten app.

It is more along the line of "don't try to do a big-bang release of everything" when you are initially writing an app.

That’s exactly what they are doing, it says it in the article.


There's this classic article on the topic: https://www.joelonsoftware.com/2000/04/06/things-you-should-...

Joel's article is the famous classic, but he proved himself wrong when his company, Fog Creek, built Trello from the ground up, sharing no code with FogBugz. Trello became a huge success while FogBugz languished. https://medium.com/@herbcaudill/lessons-from-6-software-rewr...

(I think this article should be the new classic.)

> My takeaway from these stories is this: Once you’ve learned enough that there’s a certain distance between the current version of your product and the best version of that product you can imagine, then the right approach is not to replace your software with a new version, but to build something new next to it — without throwing away what you have.

Not a great example, I think.

Trello isn’t a rewrite of FogBugz. It’s a brand new product.

Wow. This article you linked seems worthy of its own HN submission. Thanks for sharing it!

Ironically, FogBugz will be rewritten now: https://www.fogbugz.com/blog/fogbugz-rebuild-spec/

FogBugz went from being ASP to ASP/PHP (Wasabi) to some version of .NET where they hosted it all.

I can’t believe FogBugz didn’t share any code with CityDesk.

+1 on all fronts. Rewrites/big refactors can be tremendously helpful 3-4 years into a big project's life. At least on the stuff I've worked on, that's around the time that accumulated tech debt has built up and the problem domain we were hoping to solve is now well understood with production traffic.

Jesus NO at no.2

Rewrites are ALL about the schema.

Look, over time, your initial requirements will change. The initial schema will 100% not be what is ideal.

On a long enough time scale EVERYTHING becomes many to many.

I've found any rewrite of software without restructuring the schema has always been a waste of time.

Yep, I've never seen a system that was a) so bad that it requires rewriting and b) had a great schema that rewrite could be done on top of.

Usually a quick glance at the schema will shine a bright light on all sorts of badness - denormalizations, excessive metadata-driven-ness etc. Rewriting code on top of crud like that would be expensive, and result in a lot of ugly code, which would need to be rewritten again.

Right, rewrites are basically invest now vs spend exponentially later.

They make little sense right after a project is completed: even if you rewrite, the software hasn't had time to point you in a direction, and a 'nice' rewrite of the same wrong solution is just as useless as a bad implementation of the wrong solution.

They may make sense after a product has matured, since the actual scope of the software will be more defined. The 'correct' schema is starting to reveal itself, but usually it's rare that business wise it's worth rewriting at this stage, adding features is the better choice.

They definitely make sense once a product is fully matured, starting to gather legacy issues - since fighting around all the cruft and incorrect schema restrictions is wasting more time than redesigning it from ground up using the existing implementation as the guide.

> Don't redesign the database schema at the same time you are rewriting.

100%. Rewriting all layers simultaneously is the first step to never finishing.

#3 is in fact what they did, and what this post is mostly about, though their releases were apparently internal.

> Incremental rewrites are best

That's perfectly compatible with the conventional wisdom you claim is wrong, which, again, was “you should never rewrite your code from scratch” [emphasis added]

I internalize this as every rewrite is a race to feature parity. Keep your races small and be prepared to call them off and devote energy elsewhere.

3) is usually the only way to go if something has a little bit of history and complexity. It’s almost impossible to capture all features and uses of a system completely and then deliver something that does all of these things. Much better to work in small increments which can be understood. It may take longer and feel less heroic but at least you always have something that works.

Incremental rewrites (aka refactoring) are best, but sometimes that's just not viable, which is where I feel conventional wisdom breaks down.

If your code just needs updating, you can usually refactor. If your code is undergoing a paradigm shift, any incremental rewrite will likely take more time and carry more duplication, without benefit for far too long to be successful.

> Incremental rewrites (aka refactoring)

Refactoring is recoding a component while maintaining its external interface, so that external interactions with the component are unaffected.

Rewrites, incremental or otherwise, do not necessarily preserve external interfaces (an incremental rewrite doesn't start with a clean slate, but instead progresses by gradual replacement of existing code).

The two ideas are not closely connected.

Incremental rewrites !== refactoring.

You can rewrite portions of the app and embed the older components in the new app until you rewrite them as well.

#1 and #2 ignore why a rewrite is happening in the first place. No one rewrites a system for fun; systems get rewritten because the current codebase has major failings that prevent new features or make it unworkable. These should be the first things addressed in the rewrite, to prove that it's not a wasted effort.

Thank you! I read the first sentence and had to say "bullshit" out loud. Sometimes rewriting something feels so good. Especially after some time has passed and you are still in the process of learning. You apply all the things you have learned and you are amazed how much better it has become.

Rewriting things always "feels so good", which is why people offer up the advice not to do it.

Depends on the size of the re-write. If it's small, no problem. If it's big, it's a death-march for a result-driven developer, because at the end of the re-write the product is still more or less the same, just with a different engine.

Some developers prefer to leave the company during the re-write (which usually happens after 3 years of exciting hyper-growth).

Re #2 - sometimes the code isn't all the problem, but it is the schema - and the two can be intertwined in such a way that you can't fix one without fixing the other.

4. If the version X team is writing version X + 1, that knowledge is gonna help a whole lot.

The reason to avoid rewrites is because they generally get cancelled, or they build the wrong thing.

In the first case, you have a system that mostly does what you want, but it has architectural issues that make it hard to change. You plan a rewrite. You get buy in from management. But the problem is that the business still needs those changes. If your rewrite takes more than a few months, then you're still going to have to do the changes.

Over time, you end up with 2 groups: legacy and greenfield. The legacy group is usually made up of crufty old salts that don't care what they work on. They just grind it out. The greenfield group usually contains younger people who want to do something new. They want to "get it right" this time. They care deeply about what they work on and so there is usually a fair amount of conflict on the team about how to approach just about anything.

The legacy team grinds along, slowly solving business problems and the greenfield team provides no business value until they are finished (while constantly promising that it will all be amazing as soon as they are done). But the business types see only that the legacy product is doing what they want and that the team is quietly plugging away. The greenfield product does not do what they want and the team seems to be doing a lot of excited talking, but it still isn't finished yet.

So they say, "Wouldn't it be better to move the greenfield team back on the legacy product so that we can get things done faster? After all, they said they couldn't move forward with the old architecture, but it's moving forward just fine. And with more resources it will go along a lot faster. Besides we have these business emergencies that we have to deal with, so let's just put the greenfield project on hold for a while until we can sort out exactly what's best". And the greenfield project is effectively buried.

Now, it doesn't have to turn out this way, but powerful forces point you in that direction and you have to be very careful not to have it happen to you.

The other main problem is that when people think about doing a rewrite the requirements document usually looks like "Do exactly the same thing as the last system". But the problem is nobody actually knows what the last system does precisely while simultaneously they all think they know precisely how the old system works.

You get a lot of push back from the business when you start gathering requirements because the only thing they really want to say is "Just do it like the old system". Only, after months and months of development you end up realising that there were a lot of corner cases that you missed in the rewrite. Oh... and it's not possible in the new architecture to do that without jumping through a lot of hoops.

So, incremental rewrites are best, but only if you can get the business to actually use the greenfield project. Normally they will completely ignore it because their paycheque and their promotion and their happiness depends on being able to do their job at least as well as they could with the old system. And the new system doesn't do everything that it needs to do (because we've released incrementally). What's more the developers keep asking stupid questions like "What do you want it to do" and the business people are responding with "Why can't you listen to me? I've told you one hundred times to do it the same as the old system!"

And so the new system is shunned by the business. It becomes radioactive. Unless some upper management type sends a mandate down to force the business to use the new system, nobody will touch it. The upper management, in the meantime, are wondering, "Why are we doing a rewrite again? The old system seems to do what we want, while everyone is complaining about the new system".

Yep, it can be done, but it requires a considerable amount of help from upper management. You should avoid it at all costs unless you are sure upper management is supporting you all the way.

But even when you are successful, it may just be the beginning of the end. You've managed to stave off cancellation. You've managed to keep user engagement and work through the requirements so that the new system is equivalent. But it turns out that there is some small detail or decision that makes the new product non-viable.

For examples of products I've been personally involved with: Word Perfect 5 was written in assembly code and was not based on Windows. Word Perfect 6 was rewritten in C++ and was based on Windows. The team that did the rewrite were very proud of their work. But they threw out all the keybindings from WP 5 (because GUI is way better!). The new rewrite was also very, very slow compared to WP 5. This was the beginning of the end for WP, even though one could definitely say that a rewrite was necessary. They just made the wrong choices.

Similarly, I once worked for Nortel and they had a telephone switch that they sold for $10 million a pop. It had 31 million lines of code (and stinky, stinky, stinky code at that). They realised that they could rewrite it in a fraction of that amount of code. They had 3000 developers working on the rewrite and after a few years they succeeded. Only... it didn't work with all of the business equipment that the old switch worked on. And it turned out that nobody wanted it unless it worked with that equipment. And.. the new architecture was not conducive to making it work with the business equipment. In the end, they gave a few switches to China before they abandoned it.

Rewrites are hard even when you've made the right choice to do it.

"incremental rewrites are best, but only if you can get the business to actually use the greenfield project. "

That completely fits my experience. If the new code is being used early and can grow iteratively while being relevant and useful, it's gonna work out fine.

Having the codebases running side by side somehow is the best choice all around, if it can be managed.

I will consider this benchmark vindication both for myself and the times I've had to argue as to why Electron isn't the issue, as well as the commentator from many moons ago who worked at Slack, dropped in here and explained why Electron wasn't the problem (rather, poor engineering on Slack's part was), and then was ripped to shreds over it.

In fact I'm sure this comment will bring out native fanatics in force.

At any rate, good for them. Seems like they reengineered themselves to a happy medium. I can definitely see myself installing it again at some point.

I’m still skeptical. The memory usage chart from the article shows only a slight (10%?) improvement for single-workspace Slack clients, now at around 250MB.

Yes, getting a 5+ workspace client down from ~800+MB to 300MB should be applauded - but the 250MB floor is still too high.

I want to know what’s in the 250MB it’s still using.

I checked in Firefox: a given Slack workspace uses 200+MB at load and then settles down to about 140MB at runtime, of which ~60MB is JavaScript and ~80MB is page elements. So if those numbers carry over to Electron-Chromium, then it's 100MB of "I'm a browser" and 140MB of "I'm a webpage full of dynamic objects". They certainly will not carry over precisely, but it puts some scope on it to consider.

(Gmail uses 250MB at runtime in the same Firefox instance, for comparison; I didn't test it in an Electron-like container.)

When comparing Electron and non-Electron, bear in mind that Chromium typically uses a fair bit more memory than Firefox; I’d estimate 20–40% as a rough figure, though it depends on all sorts of factors.

You pick Gmail as another example. Gmail is a memory hog and fairly slow. For comparison, Fastmail (the company I work for) tends to use 10–15MB in Firefox for slightly different functionality (no chat, and I estimate the Hangouts widget to be about half of Gmail’s bloat, but on the Fastmail side you can load the calendars, contacts and settings modules in the same document comfortably without breaking 20MB), and is a good deal snappier. It comes of careful engineering with performance as a priority, and a small team.

Slack is simply atrocious in these regards. Painfully slow, utterly greedy for memory. I often run it in Firefox rather than standalone to keep its memory footprint down. Whichever way I do it I regularly find it to have crept up over a gigabyte of memory, on my single workspace.

Why do Slack and Gmail use so much memory? It feels like it shouldn’t be an intensive process to display messages. Sidenote: I don’t have much front-end development experience besides basic web pages.

You can browse their memory usage in rather explicit detail in Firefox's about:memory page, but interpreting it assumes a rather high level of preexisting knowledge.

In my experience ~100–150MB will be Electron's minimum anyway. It's certainly impressive what they've done, but I think they've basically hit bedrock unless they either improve Electron itself or switch to something different.

> the 250MB floor is still too high

Too high for what? Low end desktops and laptops today come with 8GB of ram. Using ~3% of that for a chat app doesn't seem like a big deal.

Those memory-bloat apps are never, never only memory hogs. They always come with an outsize hit to general system responsiveness, always have UI lag, and so on. These days almost all of them are webtech junk. I still have a few Java programs with that problem (I've been reading "it's in your head, here's a proof that it can't be as bad as you say" for like 20 years now, and those folks continue to be wrong in practice), but webtech's even worse.

Which makes perfect sense. If you're operating on more memory than should be necessary you're gonna be burning a lot more processor/battery than is reasonable, too.

I've been using Slack on the desktop daily since the alpha version. I've never found it to have UI lag.

I honestly think that most people who use phrases like "webtech junk" are offended on principle and not because of any measurable difference that matters to regular users.

Low-latency UIs that respect user input are amazing. They're just quite rare these days. I guarantee the difference is noticeable and makes a difference—look at how everyone liked (and still likes) the UI smoothness and low UI latency of iOS, for example, and iOS, though probably the current front runner on this front for popular operating systems, is just so-so at it.

I guess if my principle is that I like my computer to respond quickly and accurately to my input then yeah, I dislike webtech junk on principle.

[EDIT] as far as it not mattering, everyone I know, including plenty of non-techies, who has used Google Docs hates it, due to the UI lag. Typing with that large a delay between keypress and the letter appearing sucks. They may use it anyway because they have to, but they all complain about how laggy the UI is, responding to clicks and keypresses.

Google Docs on what device? Their iOS apps are famously bad on slower iPads (like the original Air) but the website with a correctly configured browser on a reasonably-specced laptop or desktop is very responsive, once it loads. Of course, I'm not a fan of it myself, but the input lag isn't the reason.

Another, maybe more appropriate, comparison for UI lag might be between Visual Studio Code and Sublime Text. VS Code is based on Electron, and though it is well optimized and a fine editor, I can feel the lag when I use it to edit code. On the other hand, ST is native and editing any text is super smooth, even in files that are much too large to be reasonably edited.

I use it on the desktop too. And it is slow. Slow to start, slow to load channel data. Slow to appear after alt-tab, and so on.

I think it is because it expires caches left and right, loses track of events, or I have no idea what else could be going on on a PC with 16GB RAM (plenty of it free) and stable Internet connection.

> I've been using Slack on the desktop daily since the alpha version. I've never found it to have UI lag.

Try telegram on desktop and compare the UI performance. Slack definitely has UI lag. Not nearly as bad as Teams though.

Easy way to demonstrate the jank: switch tabs quickly on desktop Slack. The new views take a nonzero amount of time to render. Do the same thing on Sublime Text 3. Even big files with lots of syntax highlighting load nearly instantly, or at the least, much much faster.

My opinion is that there is no low threshold for RAM use. If you can go down from 200kB to 100kB, that's still a win, even in a 32GB machine.

The reasons:

- Cache. RAM may be >8GB but the CPU L1 cache is just 64kB. And L1 cache is about 100 times faster than RAM. From my experience, proper cache management is by far the most important low level optimization, and most of it can be achieved by just using less RAM.

- What are you doing with all that RAM? 250MB in absolute terms is huge. The entire Harry Potter series is around 5MB uncompressed. For that size you can also have a 1080p fullscreen RGB image, uncompressed. What it means is that unless you are working with large datasets, these 250MB represent a huge amount of stuff you don't control. Lots of moving parts with potential performance bottlenecks, security issues, etc...

- Every byte your app is using is one less byte for your system caches. Wasting RAM makes your whole system go slower.

- Finally, an ethical argument. Your chat app will run continuously on millions of PCs. On those PCs are libraries, kernel code and frameworks whose authors worked hard squeezing out every bit of performance they could, saving energy and making your apps run to their full potential. For me, a background app made by a company the size of Slack has a duty to be as light as possible, so as not to interfere with the actual work people are doing on their machine.
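The absolute-size comparisons in the second point are easy to sanity-check (the Harry Potter character count below is a rough assumption of mine, ~1.1M words at ~5.5 characters each):

```python
# Back-of-the-envelope check of the size comparisons above. The book
# character count is an assumption for illustration, not a measured value.

harry_potter_bytes = 1_100_000 * 5.5   # rough series length as ASCII/UTF-8 text
frame_1080p_bytes = 1920 * 1080 * 3    # uncompressed RGB, 3 bytes per pixel

print(round(harry_potter_bytes / 1e6, 1), "MB of text")   # ≈ 6.0 MB
print(round(frame_1080p_bytes / 1e6, 1), "MB per frame")  # ≈ 6.2 MB

# Either one fits dozens of times into a 250MB resident footprint.
print(250e6 // frame_1080p_bytes, "uncompressed 1080p frames fit in 250MB")  # → 40.0
```

Whatever the exact figures, the point stands: 250MB dwarfs any plausible amount of actual chat data the app could be holding.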

Too high compared to similarly featured (or even more full featured) programs built on other technologies.

3% on an Electron chat app here, 5% on an Electron text editor there, and pretty soon you’ve managed to replace what could’ve been several apps with small memory footprints plus several gigs of efficient file caching in a users’ RAM with just a few bloated programs crowding out the cache, and made their entire machine considerably less responsive.

If someone is running so much stuff that an app (or a few apps) using ~250MB of ram instead of ~100MB (or whatever) of ram is a huge problem they can easily spend $200 or so for an extra 8GB of ram.

RAM is cheap. Usage at this scale just doesn't matter to the vast majority of users.

Aside from the obvious issue that a ton of devices don’t have user upgradable RAM, are we really this comfortable with feeding constant consumerist abandonment of old devices through our own sheer laziness as developers?

“It only uses 8% of 8GB of RAM. If that’s an issue consumers can buy more RAM or throw away their devices and buy new ones” is soon phrased as “It only uses 8% of 16GB of RAM. If that’s an issue consumers can buy more RAM or throw away their devices and buy new ones” and soon after etc, etc

At some point, preferably before we start saying “it’s just half a terabyte of RAM to run a chat app”, we may want to step back and question this circular justification for ever increasing bloat and ask ourselves when enough is enough. There’s been no real gain in functionality in the past 10-20 years of chat app churn, just an enormous explosion in RAM and CPU requirements.

If there has been no real gain in functionality, then why is Slack hugely popular while IRC is used by almost no one?

Perhaps you aren't giving credit to functionality that is important to people other than yourself?

> If there has been no real gain in functionality, then why is Slack hugely popular while IRC is used by almost no one?

The main (really, only) features that Slack has that IRC doesn’t are easy account onboarding and (crucially, for business) offboarding, and persistent messaging. This has been discussed to death, publicly, by organizations like Mozilla. They were also smart enough to offer themselves for free to startups, which largely undermined hipchat, which had the same basic value proposition over IRC, and allowed them to build a hip reputation and network effects in the startup crowd.

Are we really going to pretend that those features are why the Slack client consumes tons of RAM and a fuckload of CPU cycles? They’re entirely server side, for christ’s sake.

ICQ could do all of the above on a 32-meg Mac 20 years ago. The bloat isn’t related to the reasons it “won”, or why it continues to be popular. Network effects, and the fact that if your company mandates it you have no alternative, simply insulate them from suffering many ill effects of their bloated, laggy client software.

Lots of different chat software had easy onboarding & offboarding as well as persistent messaging before Slack and none of it was successful in the same way.

Like I said, network effects and cultivating a hip reputation are a huge part of why I think they got big. Very little about what made them big strikes me as technical in nature.

But sure, ok, so let’s hear your theory on why they got big and how it requires all of this client side RAM usage. So what is it?

IME Slack at big companies has mostly been a ground-up initiative by engineers that were sick of important discussions happening in long email chains. It's strange to read that network effects or entrenchment made Slack so huge when 5 or 6 years ago they didn't exist :/

IME as an engineer who’s been around the block, before Slack we were using Hipchat, and before that Jabber or IRC. I have not noticed a significant change in features (gifs, bots, integrations were all there through all of those), just changing fads, but by god if the performance and responsiveness of the chat client du jour hasn’t been steadily declining over time.

Mostly what drove orgs I’ve been with from one to another has been pricing (Slack having a free tier ate Hipchat’s lunch, like I said), and “buzz”. Slack was the hip thing to get on in 2013.

Extra features aren't the reason why Slack is so huge.

What is the reason?

You could make a pile of shit popular if you had similar access to Silicon Valley funding and marketing machines.

Of course, if Slack thought this were true, they wouldn't have bothered with this rewrite.

Obviously no PM is going to tell customers, "just spend $200 for another 8GB of RAM to run our chat software."

> Usage at this scale just doesn't matter to the vast majority of users.

The usage model is that it's on all the time. So if it's draining batteries, causing paging or otherwise slowing things down, people are going to quit Slack, thus defeating the immediacy of communication, thus defeating the point of running Slack.

And Slack becomes powerful when it can be required by a workplace, which is their business model. This means the "vast majority" isn't enough at any given company.

Yeah, yeesh. So I'm in the market for an ultrabook, and even though I had 8 GB of RAM in my desktop over a frigging decade ago, there still exist laptops with less than 16 GB today. Not only that, the cheapest 16 GB model is over $1000 more expensive, because now I must have an i7 and everything else.

And no, not even with 16GB will I tolerate what slack uses even with their new version.

Yes, I do consider upgrading the RAM myself by replacing the soldered RAM chips. And do not for a second imagine I'm doing that so that I can send text messages!

The difference between an 8GB MacBook Air and a 16GB MacBook Air is $200, not $1000.

Lots more options in PCs, obviously, but 2 minutes of googling found a 16GB XPS 13 for $580 more than the 8GB version. Again, not $1000. I'm sure with slightly more searching there are cheaper options as well.

Yeah, as I said, the problem isn't just the RAM price, it's whatever else you might need to buy to get it. But $580 is also pretty darn depressing for 8GB.

For reference I bought 64GB for my workstation for less than $400 a few years ago. And that's with 25% sales tax.

This isn’t a fair comparison.

Desktop memory is cheaper, laptop manufacturers normally rent-seek on upgrades, and with soldered memory they have free rein.

It’s true that the upgrades are often bundled, i.e. you can’t get 16GiB without an i7. But I think that’s fairly rare at least.

As for 16GiB, there’s a hard limit there for LPDDR3, and intel CPUs do not support LPDDR4 yet.

Oh, and another thing regarding cost, there were some issues a long time ago with the factories producing them, which is why the price is still high, maybe artificially.


Wasn't meant to be an apples-to-apples comparison. Just a reference. But it does kind of illustrate just how bad a fit Electron apps are.

Well, the biggest point of soldered RAM has always been to recoup costs by preventing people from buying RAM from someone else.

Even in the old days it was much cheaper to buy 2 GB of ram and then upgrade to 4 GB than buying it with 4 GB in the first place.

> Ram is cheap

RAM is cheap...and not upgradeable on the majority of devices in the world.

Agree in principle, but available Ram real estate (slots) is limited, esp in mobile.

My RAM is soldered on.

RAM is cheap but slow, and CPU cache is precious.

Just because there's enough resources for it doesn't mean it was a good use of those resources.

Why should it be reasonable for displaying a few megs of text & maybe a dozen megs of images to take 250MB+ of RAM? What is the rest of that memory usage, and why is it loaded? CPUs don't like executing code that's not in cache, so hopefully it's not code. And if it's not code, what is it? Figure a few dozen MB for the graphic buffers, but where's the other 100MB+ gone?

8GB minus OS load gives you about 5-6GB to play with.

The problem isn’t with Slack itself: as more and more desktop software runs on Electron, with 8GB of RAM you can only have a handful of instances open simultaneously before you get performance-crushing disk paging.

Low end desktops come with a gig of RAM. A $35 Raspberry Pi is what "low end" really is.

While 250MB is a big chunk... I think it's actually okay, even in this extreme case.

The new RPi has like 4gb of RAM tho

At the highest price tier, yes.

This is self-deception. While technically you are right, you don't actually have the full 8GB to allocate to JS bloat apps. After the OS and all the other necessary stuff you'll always have running in the background, like a browser, you'll be left with 4-6GB.

That memory is for IntelliJ, Docker images, Photoshop, Final Cut, Blender, etc.

> Too high for what?

Too high for an app that runs continuously.

Nearly all users continuously run web browsers these days, and they all consume a lot more RAM than Slack.

Yes, and that's a problem. Additionally, if the OS and browser are both always running, then there is even less memory left for your app! So this is a stronger argument to use less, not an excuse.

iTerm2 eats 150MB+. Any Cocoa app uses that much memory. The App Store is 500MB just after launch.

> Any Cocoa app uses that much memory.

What gives you that idea? Lots of apps have cruft, but that's nothing to do with Cocoa.

A "Hello, world!" in Cocoa uses around 13 MB, and the application bundle weighs in at 79 kB. The fact that Electron uses a bare minimum of over 100 MB just to get an application running is a source of some understandable consternation.

His point is, actually, quite valid. By the time you stop and rebuild all the image caching/media pieces, you'll end up with a hefty amount of memory usage too - Telegram (not the Qt one, the Cocoa one) will regularly climb up to 200ish MB if I have a media-heavy chat going.

People have skewed ideas of what's doable in memory constraints in 2019 for the feature sets that customers tend to expect.

You're comparing the absolute minimum of Slack with a very heavily used Telegram instance. Given the similarity of the apps, that's not fair.

Compare a "media-heavy" slack with that Telegram, or compare with Telegram minimum.

No, I'm comparing the baseline: Slack's feature set.

To support that, you'd need far more than a "Hello World" stack.

The "Cocoa" Telegram is not really Cocoa at all :/

Ah yes. I remember all those 8GB NeXTStations.

And yet, the performance is still bad. And that is only 1 of many reasons why electron is bad.

A hundred times this; 500-1000mb of idle memory usage by a chat client is patently silly.

Sure, but on my MBP it's never been a problem. I've never had slack lag regularly or notice it impacting other programs. :shrug:

Depends on the problem you're talking about. Slack made their memory usage considerably worse due to poor programming so that's obviously an issue. But that also doesn't negate the other issues electron brings.

Personally I would have used those years of development time to go native but it is what it is.

you got a link to the Slack engineer's comment?

It’s little more than a simple IRC client that consumes 250-500 megs of RAM. I wouldn’t take any victory laps over the supposed “efficiency” of Electron here.

Yeah, and a Rolls Royce is little more than a Honda with worse miles per gallon.

Except in this metaphor, in terms of features, performance, and sheer quality of experience, a Rolls Royce would be something like LimeChat, and Slack is a Ford Pinto

You say this like I should prefer a Rolls Royce to a Honda.

Yeah, I'd take a Civic Type R over any other contemporary car.

I know that popular opinion in the Twitterverse is that "Redux is dead", but I note that both Twitter and Slack just released major rewrites that use Redux heavily.

I talked about Redux usage stats and comparison with other alternatives in my "State of Redux" talk at Reactathon earlier this year [0], and my post "Redux - Not Dead Yet!" also addresses some of these aspects [1].

Also, quick plug for our current focus. I'm working on some tutorials for our new Redux Starter Kit package [2], and then will be working on tackling additional functionality to push it towards a 1.0 release [3].

After that, we'll get started on a major revamp of the Redux docs [4] to improve the learning flow, better organize the content, cover more real-world usage topics, and teach using Redux Starter Kit as the "default way to write Redux".

[0] https://blog.isquaredsoftware.com/2019/03/presentation-state...

[1] https://blog.isquaredsoftware.com/2018/03/redux-not-dead-yet...

[2] https://redux-starter-kit.js.org/

[3] https://github.com/reduxjs/redux-starter-kit/issues/82

[4] https://github.com/reduxjs/redux/issues/3313#issuecomment-45...

Thanks for the articles! We've been using Redux since August of 2016 at Groove. For us, we've found that Redux makes sense when you have state that needs to be shared. If you are dealing with some specific, tightly coupled component state, it sometimes makes sense to keep that particular piece of state local to the component (at least for our use-case). The biggest issues we've run into during our time with Redux have been:

1. not being atomic enough with our connected components (sometimes passing massive objects down the component tree instead of selecting what we need at each point),

2. running into leakiness in react-redux (such as function arity changing short-circuit caching behavior in the mapping functions, forgetting to invalidate components when using select factories, etc.), and

3. not normalizing our store enough (which contributes to #1).

As we continue to mature in Redux, we are slowly fixing #1 and #3 in our codebase. We are still sometimes getting bitten by weirdness in react-redux though. I'm not sure I have a solution, but I think there is still a lot of opportunity for improvement there — and not just in react-redux, but in helping developers understand how to properly build performant Redux-driven apps in general.
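For what it's worth, a minimal plain-JS sketch of the normalization in point 3 — all names here are invented for illustration, this isn't Groove's actual code:

```javascript
// Hypothetical sketch: normalize a list of entities into byId/allIds,
// so connected components can select only the slice they need instead
// of receiving a massive object down the component tree.
function normalize(items) {
  const byId = {};
  const allIds = [];
  for (const item of items) {
    byId[item.id] = item;
    allIds.push(item.id);
  }
  return { byId, allIds };
}

const tickets = [
  { id: "t1", subject: "Login broken" },
  { id: "t2", subject: "Billing question" },
];

const state = normalize(tickets);

// A component interested in one ticket selects only that entry:
const selectTicket = (s, id) => s.byId[id];
```

With this shape, updating one ticket touches a single entry in `byId`, and components subscribed to other tickets don't re-render.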

Yep. The Redux "Three Principles" docs page [0] emphasizes the "single source of truth" aspect. It's a good selling point, but has unfortunately led to a lot of folks interpreting that as "you _must_ keep _everything_ in Redux". (Doing so is a valid design decision, and I've certainly talked to folks who deliberately did that to make everything trackable and time-travel-able. But, most UI state probably doesn't need to go in Redux, particularly form state [1].)

In reality, there should be a good balance between global and local state. I wrote a Redux FAQ entry that tries to give some rules of thumb for determining what state goes where [2].

When we do the docs revamp, hopefully we can make that kind of thinking more explicit.

I'd be very interested in hearing more details on any issues you may be having with React-Redux, and I'm happy to offer advice for you or anyone else on using it better. HN comments aren't a great place for that, but please file an issue for discussion or contact me in the Reactiflux chat channels on Discord in the #redux channel.

I'd also greatly appreciate any feedback folks would have on weaknesses with the current docs, so we can improve those issues with the docs revamp.

We've had open issues to cover writing a "Performance" docs page for a long time, but no one got around to contributing that, and there's been too many other higher priorities for us to write it. My React/Redux links list does have a section of articles on improving Redux perf [3], if that helps, and there's an FAQ section on that topic [4]. In general, connect more components [5], and use memoized selectors [6] to read data and do transforms.

[0] https://redux.js.org/introduction/three-principles

[1] https://redux.js.org/faq/organizing-state#should-i-put-form-...

[2] https://redux.js.org/faq/organizing-state#do-i-have-to-put-a...

[3] https://github.com/markerikson/react-redux-links/blob/master...

[4] https://redux.js.org/faq/performance

[5] https://redux.js.org/faq/react-redux#should-i-only-connect-m...

[6] https://blog.isquaredsoftware.com/2017/12/idiomatic-redux-us...
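As a rough illustration of the memoized-selector advice above, here's a stripped-down sketch of what a reselect-style `createSelector` does under the hood — this is the idea only, not the actual reselect implementation:

```javascript
// Recompute the derived value only when the input selectors' results
// change by reference; otherwise return the cached result so connected
// components see a stable reference and skip re-rendering.
function createSelector(inputSelectors, compute) {
  let lastArgs = null;
  let lastResult;
  return (state) => {
    const args = inputSelectors.map((sel) => sel(state));
    if (lastArgs && args.every((a, i) => a === lastArgs[i])) {
      return lastResult; // inputs unchanged: reuse cached result
    }
    lastArgs = args;
    lastResult = compute(...args);
    return lastResult;
  };
}

const selectTodos = (state) => state.todos;
const selectVisibleTodos = createSelector(
  [selectTodos],
  (todos) => todos.filter((t) => !t.done)
);

const state = { todos: [{ text: "a", done: false }, { text: "b", done: true }] };
const first = selectVisibleTodos(state);
const second = selectVisibleTodos(state); // same todos reference: cached
```

Because `first === second`, a `mapStateToProps` built from this selector returns reference-equal props across unrelated store updates.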

> The Redux "Three Principles" docs page [0] emphasizes the "single source of truth" aspect. It's a good selling point, but has unfortunately led to a lot of folks interpreting that as "you _must_ keep _everything_ in Redux".

After a couple of years working with Redux, I think my interpretation has settled down to:

Redux should be the single source of truth for the pieces of information you decide to put in there.

Exactly! I've used literally that phrase in some of the talks that I've done. We clearly will need to include that in the revamped docs.

I recently came to the conclusion that I’ve been going a bit overboard with this in re-frame, ClojureScript's (more or less) equivalent of Redux.

Rather than putting absolutely everything in the global DB, I now try to restrict that to things that are actually global, such as the user profile data and such.

Awesome, I'll hit you up in Discord.

Isn't the common sentiment now that Redux is no longer considered a must-have for all greenfield projects?

Also aren't communication apps a perfect use case for Redux due to the need to have events from multiple sources happen in a single store in a linear order?

That's part of what I was getting at, and what I talk about in the first couple links.

Lots of folks have used Redux because they were _told_ they "need" to use it. That's always been overkill. And, yes, there are plenty of other viable alternatives that overlap Redux's capabilities in various ways (Apollo, MobX, React context, etc).

So sure, it's not a "must-have", because you should _always_ evaluate tools and determine what's really appropriate for your use case rather than just blindly using something.

But, at the same time, Redux is still very widely used (~50% of React apps), and not going away. So, there's a big difference between "not an automatic must-have" (which is true) and "dead/dying" (which is not).

What is the goto for web if Redux is dead?

I am not super knowledgeable about all the exact architectures, but I thought mobile was just starting to adopt the redux architecture vs MVP, MVVM, etc

A lot of people are saying that the new-but-not-really-new React Context API and useReducer hooks are sufficient to cover redux. My team isn't convinced, we see immense value because we're big fans of using Redux Saga-- a model that isn't well covered using Context or hooks, although we do use those APIs as well when needed.

Suspense for data loading seems to cover that side of things quite well in the experiments I've tried with the current API.

I use sagas as well though, and I've found them OK: really easy to test, but they feel very heavyweight and clunky. Personally, I don't _really_ like the loading/error state to be handled by Redux: the more years I work on React/Redux apps, the more I prefer for a section of an app to be mounted -> triggering a fetch -> loading, show local loading UI -> {fails, show local UI | succeeds, dispatch success}. So Redux becomes a serializable, normalized data cache that stores the results of disparate API calls and doesn't care much about the UI.

That's what I've been trending toward as well.

There's a tremendous amount of work out there for how to get redux to completely handle async and branching methods, and I think the best solution for most cases is to not have it handle any of that.

Keep your data fetching in normal everyday functions, have react components call those and funnel the data you need in the global store back to redux, and then have the components that need that data read it from the global store.
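A hedged sketch of that pattern in plain JS — the toy `createStore` and the `fetchUser` stub are stand-ins for illustration, not real Redux or a real API:

```javascript
// Hypothetical stand-in for a real API call.
const fetchUser = async (id) => ({ id, name: "Ada" });

// A tiny store: Redux is essentially a reducer plus a state holder.
function createStore(reducer, initial) {
  let state = initial;
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); },
  };
}

// The store only learns about successful results; loading/error state
// stays local to the component that triggered the fetch.
const reducer = (state, action) =>
  action.type === "user/loaded"
    ? { ...state, users: { ...state.users, [action.user.id]: action.user } }
    : state;

const store = createStore(reducer, { users: {} });

// What a component's effect would do: an ordinary async function that
// fetches (while the component shows its own spinner) and then funnels
// only the result into the global store.
async function loadUser(id) {
  const user = await fetchUser(id);
  store.dispatch({ type: "user/loaded", user });
}
```

If the component unmounts before the fetch resolves, it simply never dispatches, and the store stays untouched.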

What do you do if multiple components/"pages" that aren't even remotely connected need access to the results of an API call?

The results are in the Redux store. I'm not sure I understand the question, why and how would multiple components be making the same fetch request at the same time? With what I'm describing, there is already a common state, if they just need access to that state they have access to it.

Edit: If you do need to synchronise multiple async updates to that state that happen in a very short space of time & in an unknown order in a controlled way, then yeah something like sagas is a solid choice to exert control over that. But very often the user of the UI is only going to make one request [or one batched set of requests] -- eg I am working on an RN app at the minute, and each screen does need to fetch data. So I fetch in the component, it shows loading etc, then populates the store. If the user cancels what they're doing (navigate away for example), the component used to fetch unloads and no dispatch is made.

I meant to reply to the guy above me who advocates component level fetch as opposed to using Redux sorry.

That was me! Component level fetch !== no Redux, that's not what I described; using Redux does not _mean_ you have to use one of the many libraries built to wrestle asynchronous fetching into Redux.

I've got nothing against any of them, I've used almost every major one in my jobs over the last few years. But I (and the person who replied to me) both seem to feel the same way, that in many cases just fetching in React, in the component, is all that is needed, let the Redux application deal with marshalling the resultant state (which it is very good at)

I have preferred this method myself when dealing with simple calls (calls that need no further external information). Jumping through 7 files just to find the API request that gets shipped is slightly insane.

but how do you put the result back in the store? how about displaying a default failure screen?

> jumping through 7 files just to find the API request that gets shipped is slightly insane.

The API request is in the file that represents the point at which the API request is made by the user (previously in a loader component or, now, using a hook, in future using suspense), so I would say it involves jumping through 0 files, as opposed to several if I move it to redux (the dispatch occurs at the UI trigger point, then I have to drill through several other layers of abstraction). In practice there's normally some facade in front of the API to prefill values and also to make it easier to mock for testing, but if anything that just makes it simpler as the facade acts as a reference.

> how about displaying a default failure screen?

   failure && <FailureComponent>{failure.message}</FailureComponent>
This is I guess down to a preferred philosophical approach but from my pov React is a UI library and it's _really_ good at showing conditional UI. Redux is a state container and is not great for this. I don't feel I want that in Redux, I want Redux purely to be a reference the state of the data in the app; loading/req/etc states are not that. Whenever I've moved error/request/loading/failure state to Redux (ie almost every app I've made over the past four years before this year) it involves a series of abstractions to track the current request. Whereas React can do this out of the box at the point the action is taken: make the request, show some state, if the request succeeds, dispatch the data to the store and [re]render the component that displays the data based on the state of the store.

> but how do you put the result back in the store?

    dispatch({ type: "foo", data: "bar"})
And have a component that renders based on props in the store, same as normal.

We started a greenfield project a few months ago and decided not to use Redux and try the context API instead. In retrospect, that was a bad idea. Context is fine for a small app, but small apps tend not to stay small... and then you're stuck with the Yugo when what you really need is a truck.

In our main app we also use advanced features like what Slack is describing in their post, like having multiple Redux stores. We also combine stores from multiple sources (e.g. appStore = [...store1, ...store2, ...storeFromALibrary]). It's incredibly flexible.

Hooks are definitely going to reduce a lot of greenfield project needs for a state library, which should be great for React ease of starting new projects.

Something like Redux Saga, Redux Thunks, Redux Observable (my personal preference here), or what have you, are the higher order "toolsets" that Hooks alone aren't likely to replace yet. It may be the case that some of them are going to start targeting Hooks/Context more directly without "needing" an intermediary state engine like Redux, but it's also entirely possible that it will still remain a good reason to make a step change to a state engine like Redux when you find yourself needing higher-order coordination between components that libraries like Sagas, Thunks, or Observables can provide.

I think once you grok sagas, they're very difficult to replace. I think it's such a neat mental model to be able to dispatch global actions and having async tasks able to respond. I think it leads to nicer decoupling in components between UI & data.

Shameless plug: https://github.com/neurosnap/use-cofx

Under the hood it uses [cofx](https://github.com/neurosnap/cofx) which is a library I wrote to solve the need to write declarative side-effects without redux.
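The core idea behind both sagas and cofx — yield *descriptions* of effects instead of performing them — can be sketched in a few lines of plain JS. The effect shape and names here are invented for illustration; they are not the real redux-saga or cofx APIs:

```javascript
// An effect is just data describing a function call to be made later.
const call = (fn, ...args) => ({ kind: "call", fn, args });

// The flow yields effect descriptions; it performs no I/O itself.
function* fetchUserFlow(api, id) {
  const user = yield call(api.getUser, id);
  return { type: "user/loaded", user };
}

// A tiny runner that actually performs the yielded effects.
async function run(gen) {
  let step = gen.next();
  while (!step.done) {
    const { fn, args } = step.value;
    step = gen.next(await fn(...args));
  }
  return step.value;
}
```

This is what makes sagas easy to test: a test never calls `run`, it just steps the generator by hand, asserts on the yielded effect descriptions, and feeds fake results back in.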

I wouldn't say Redux is dead, but there are a number of alternatives. My team has switched from Redux to MobX for all new development, as the two libraries solve the same problems, and we find MobX a bit easier to work with for our specific use cases.

There's nothing wrong with Redux; we just found that Redux based code tended to have a fair bit of boilerplate, and it wasn't as easy to maintain code using Redux as we would have liked. Every dev team has to find the right place to draw the line on implicit versus explicit, magic versus verbose, convention versus configuration. No real right or wrong answers.

> I thought mobile was just starting to adopt the redux architecture

I don't think so, no. The Redux architecture (ie, reducers, actions, etc.) is fairly specific and not universally loved. I would say that people are starting to really grasp the importance of solving the problems that Redux solves, however.

I encourage you (and everyone facing the same problems) to take a look at Rematch (https://github.com/rematch/rematch). It's Redux (the real Redux, you can even use it with the Redux-DevTools) with a MobX-like syntax. And already comes with async support.

I think it's the best of both worlds.

My only request for redux-starter-kit is more documentation on usage with TypeScript (and perhaps deeper integration with TS — not sure how well the current package integrates with a TS codebase).

RSK should be fairly usable with TS. As I understand it, it can't be as magically strict as `typesafe-actions` is or infer quite as much automatically, because of how we're generating the action types from the reducer names as strings. But, it should definitely generate correctly typed action creator functions for you.

And yes, agree on needing more documentation:


I've got an example app I put together using RSK, our new React-Redux hooks, and TS, and I'm planning to turn that into a tutorial soon:


Thank you!

"Redux" Event Sourcing for the JS hype-machine.

Nothing is dead about event sourcing, other than that the mob has moved forward in the hype cycle.

The most interesting details:

- The first iteration was largely powered by jQuery(!)

- Every workspace got its own isolated Electron process because they didn't anticipate multiple workspaces in their initial architecture

Honestly it's amazing they got as far as they did with the above limitations. Sounds like this rewrite was sorely needed after their phase of rapid growth. Glad they pulled it off.

They switched to React, which will increase their memory usage and be slower than pure js.

> be slower than pure js

There's no way of knowing, without knowing what they were doing previously. The whole point of React is to leverage the virtual DOM to avoid unnecessary (and very slow) DOM manipulations. Given the sheer number of things Slack has on any given page it isn't unreasonable to expect that React will make things quicker, even if it has a penalty at startup time.

This is great, but that ship long sailed for me. I still use Slack extensively, but for the past year+ exclusively in Safari tabs. Memory usage and Energy usage (I'm on a MacBook Pro) have both quartered (or more).

For over a year i was running 5+ Slacks in the Electron app, and continuously noticed how much memory was being hogged up. After some basic research, I realized that Electron is basically a wrapper of a Chrome instance. Suddenly i understood where my ram was disappearing to because so many "native" apps running in Electron were literally just spawning new chrome instances and gobbling up ram.

I now do my best to keep my website apps running just in Firefox containers.

Electron feels like the last rush of "write once, deploy everywhere" days of flash and Adobe air. God save us all.

How about a native client? I think it's safe to assume that Slack has the resources for this.

Nobody has the resources for that. Some companies make OSes, and some companies send rockets to space. Those feats are also powered by thin wrappers over electron. /s

People seem to forget that C++ and OpenGL are cross-platform. And there are projects far bigger than Slack, like FFmpeg and OpenCV, that have existed for decades, always had very fast development cycles with only a fraction of the funding Slack has, and stayed native and close to the metal forever.

A better answer would be that Slack has deemed the benefits that would result from this not that important, or that the current team's expertise cannot handle the task — but that they are always looking into ways of improving the core product and experience.

Slack also grew out of the team that made Glitch


That core team therefore had a lot of frontend web dev experience. It made sense that they'd stick to their strengths

The performance and resource usage of slack both on the desktop and in a browser indicates that “front end code” is not their strength.

Creating a web app with a multi-billion dollar valuation indicates it is their strength. Who cares about a little superfluous memory usage compared to that.

Drilling and refining fossil fuels are BP's strengths, who cares about a little oil spilling into the environment?

Can you see how maybe other people might have valid concerns about the effects of a product, and ask for more rigor in the company's processes?

>Drilling and refining fossil fuels are BP's strengths, who cares about a little oil spilling into the environment?

That's perhaps the most tortured analogy I've ever heard. I can't even begin to fathom how you think that's comparable or relevant.

Ah, a popular strawman tactic these days: attack the analogy.

Sounds like it got my point across. You just don’t like the point.

Slack didn't win because it was better; it won because businesses thought it was better. That's how most business software works: it creates a problem in the customer's mind in such a way that the customer naturally sees the product as the solution.

Slack is successful because of marketing, not technical superiority.

I worked at a small company that used Google Talk for office chat, and us developers complained because it was difficult to have a group conversation with. We started using Discord (we were familiar with it and liked the dark theme) and it caught on and we were quite productive with it. Our business types saw it and decided to "formalize" it, so they picked Slack and rolled it out company wide. Most people actually preferred Discord, but the business types preferred Slack (probably mostly because of marketing), and now we're all using Slack.

Now, Slack also didn't royally screw up as it scaled up (outages were rare and transparently reported), which is a credit to the developers and PR people. That being said, just because something is successful doesn't mean it's good; it could be, but that's tangential to the topic at hand.

Developers who respect their work, for one.

See the other child comment about ripcord. Made by one (smart) person

> People seem to forget that C++ and opengl is cross-platform

OpenGL is dying. It's already deprecated on MacOS.


Apple deprecates lots of things that are still alive and well (see also: headphone jacks). I wouldn't read too much into that.

Sure, but there are multiple "OpenGL on Metal/Vulkan" projects. Because it's possible to do so without all that much of a performance loss.

OpenGL effectively should die... but it's not going to be in favor of raw Metal/Vulkan, except in rare cases. Most people already use engines that abstract most of that away, and Metal/Vulkan are in basically every way better for those engines.

It is telling that the person arguing in favor of native C/C++ has to use a throwaway account for fear of going against the dominant force of opinion in our industry. We should be able to have these conversations openly without wondering if it makes us less employable.

Please. This is literally the same argument I read over and over again on HN. This IS the bastion for this kind of argument.

You shouldn't make assumptions about why someone's account is named throwaway.

Fair point. It's a thought that has gone through my head a few times. There are places that want you to conform to whatever it is we call "modern" software development, and I often find myself feeling uncomfortable about expressing an alternate viewpoint.

There's no conspiracy or anything going on here, and you're right it's wrong to assume. That's the general sentiment of what I was trying to say though.

The fact that nobody's gone out on their own and built a native client means at least one of four things is true:

1. Slack is too restrictive, and its API is too poorly supported for anyone to create a port. I find this unlikely, given that the API is extensively documented and that an Emacs port already exists, but maybe there are problems I'm not aware of.

2. Good native ports are actually a lot harder to build and maintain than typical HN posters claim.

3. For all everyone complains about Electron, maybe the current app is good enough for pretty much everyone, including the complainers, and those complaints are largely just hot air.

or 4. A few good, Open Source native ports already exist, and people are just unaware of them.

Regardless, the HN crowd is made up of developers who know how to develop things and use experimental software -- most people on here know how to extract a login token, and a lot of people on here are willing to put in a lot of effort to solve problems. If there isn't a native port already, there's very likely a good reason for that.

If there isn't a good reason for it, and it's just that somehow nobody on HN has thought to sit down and build a native app yet, then I'd welcome one, I suppose. But obviously I'm not annoyed enough by Slack to do it myself, and I suspect that most other people posting here fall into that category as well.
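On point 1, it's at least easy to probe: Slack's Web API is plain HTTPS + JSON, so the skeleton of a third-party client is small. A rough TypeScript sketch (`conversations.list` is a documented Web API method; the `Channel` shape and `channelNames` helper are simplified assumptions of mine, not Slack's official types):

```typescript
// Rough sketch of a minimal Slack Web API call.
// conversations.list is a documented method; the response shape here is
// simplified for illustration and omits pagination, rate limits, etc.

interface Channel {
  id: string;
  name: string;
}

interface ConversationsListResponse {
  ok: boolean;
  channels?: Channel[];
}

// Pure helper: pull channel names out of a conversations.list response body,
// kept separate so the parsing logic can be exercised without the network.
function channelNames(body: ConversationsListResponse): string[] {
  if (!body.ok || !body.channels) {
    return [];
  }
  return body.channels.map((c) => c.name);
}

// Network half (requires Node 18+ for global fetch and a real token).
async function listChannels(token: string): Promise<string[]> {
  const res = await fetch("https://slack.com/api/conversations.list", {
    headers: { Authorization: `Bearer ${token}` },
  });
  return channelNames((await res.json()) as ConversationsListResponse);
}
```

The hard part of a native client isn't this plumbing; it's the real-time messaging, the long tail of features, and keeping up as the API evolves.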

> 4. A few good, Open Source native ports already exist, and people are just unaware of them.

Native GUI clients:

- Ripcord: https://cancel.fm/ripcord/

- Wey: https://github.com/yue/wey ("written in Node.js with native UI powered by the Yue library")

- Volt: https://volt-app.com

IRC bridges (allow using Slack from native IRC clients):

- wee-slack: https://github.com/wee-slack/wee-slack

- irc-slack: https://github.com/insomniacslk/irc-slack

- bitlbee: http://bitlbee.org (using libpurple)

libpurple plugin (allows using Slack from Pidgin, Adium, bitlbee):

- https://github.com/dylex/slack-libpurple

- Adium (native macOS app) plugin based on it: https://github.com/victori/slack4adium

CLI clients:

- https://github.com/erroneousboat/slack-term

- https://github.com/haskellcamargo/sclack

- the emacs one you mentioned

Most of these clients don't support 100% of Slack's features, aren't as pretty, and are generally not as 'polished' as the official client. But they're also mostly written by individuals in their spare time, as opposed to a team of full-time employees. So no, I don't think that native clients are 'actually a lot harder to build and maintain'.

To be fair, if you want it to be that pretty you will have to either reinvent two-thirds of a browser or half a game engine. There's a reason "rich" UIs are a pain to build compared to native GUIs (MSFT XAML is more similar to a web browser than traditional WinForms-style UI).

The best compromise right now is to aggressively build pretty, heavily themed prebaked designs. Think Material Design, Cupertino etc. so native UI is somewhat good looking too even though it may not be as "rich" as a React one with all the drag and drop animation and flashy trinkets. Feature wise of course there's only marginal improvement to old school Java Swing, but it's certainly easier on the eyes.

Maybe on some platforms. I'm a Mac user, so speaking for that platform, AppKit has had built-in animation support for over a decade (and now with Catalyst you can also use UIKit if you want). Drag-and-drop in particular has animations out of the box when using certain built-in widgets – e.g. when you're dragging an item into an NSOutlineView, the existing item at the cursor will slide down to make room for it. (But what in Slack is even draggable? I can't seem to drag to rearrange channels, for instance.)

Exactly my point: OS X has tons of prebuilt designs with knobs for customization. But at the end of the day they can still be built rather quickly. Drag and drop the components in Xcode, tweak a few variables and animations here and there, and you have a perfect-looking app that is coherent and consistent with the rest of the system. On the other hand, a modern successful SaaS requires hundreds of man-hours of work just on design and UI alone. For the web, nobody wants consistency. People get upset if every site looks exactly like Bootstrap. Sites have to be styled, branded, made unique. Yet for the traditional desktop, i.e. not Electron, the moment a cross-platform native UI library looks vaguely off, devs throw a fit. Think Gtk, Qt, Swing, etc.

Nice -- I assumed there were at least one or two, didn't know there were so many!

Once Riot and Matrix pick up steam, I suspect the Electron debate will be even less of a real issue, since you can pretty safely build a native client for Matrix and know for sure that nobody on the corporate side is going to get annoyed and tell you stop.

5. How can I make $$$ from spending a few months writing a native client? Slack can cut me off anytime they get annoyed with me. So even if my client is better, they can decide I am a threat and cut me off.

Never build your future on someone else's platform.

I completely agree, but this would fall under point 1 for me.

See Twitter for a good example of where business interests and API stability intersect in negative ways. I would make the point that if you don't trust a business to keep their official API stable, you probably shouldn't be building a community around their tools in the first place, regardless of whether or not they have a native app.

It's absolutely #1. I tried switching to Riot (Matrix), but the integration with Slack was painful. You can get it to work well, but what's to stop Slack from breaking/changing their API periodically to make your app stop working as reliably?

If I'm going to put my resources into building a client for something, it's not going to be based on some closed-source backend that could change at any time; it's going to be based on something open and forkable (worst-case scenario, I can just maintain my own backend).

Slack bills itself as a complete solution, not a piece to a larger puzzle, so third party clients and plugins will always take the backseat. Who is going to try to build a serious competitor when they can't really rely on the backend staying consistent?

Building a decent frontend isn't particularly difficult. It's not trivial, but it has been done. For example, Pidgin, while a little ugly (I prefer "functional"), works well and has been around for years. I used weechat and irssi for IRC chat before we switched away from IRC, and it worked really well (didn't see pictures, but I think that's more a feature than anything, and links were more than sufficient).

The problem is that business types prefer hosted, "batteries included" solutions, and those types tend to not be very friendly with third party add-ons. If I end up being able to use Matrix/Riot, maybe I'll try my hand at a native, cross platform client. But until then, I'm not going to throw my effort toward something that can change at any moment.

I use wee-slack and Matterbridge. wee-slack is a Python plugin for the IRC client WeeChat. Matterbridge syncs Slack channels with other services like Telegram and IRC.

Ripcord [1] is a (native) desktop chat client for Slack and Discord written in C++ and Qt.

It was posted a few months ago [2] and I’ve been using it since then, it works great.

[1] https://cancel.fm/ripcord/

[2] https://news.ycombinator.com/item?id=19617699

Looks neat. Definitely a developer-designed UI.

Thanks, I'll take a look. I tried https://volt-app.com a while ago but it had some bugs that made it unusable (for me) with Slack.

I'm also a bit concerned of using a closed-source client with Slack.

The Volt app actually looks in your browser files to steal its Slack login session cookie: https://github.com/voltapp/volt/issues/143.

Interesting - they mention:

"Slack connectivity is now available for testing. It's still pretty rough and missing a lot of features."

Any further insights here?

I could just try it myself, but Slack is our key communication tool, so getting more input would be great.

I use ripcord for both slack and discord. Have not encountered any major slack-specific issues for the past half year I've been using ripcord

I just tried Ripcord with an old Slack account and it seemed to be working fine. I've not encountered any issues on my end so far.

I was also a little confused about this, because I vaguely remember Slack locking down its API, and people complaining about the resulting inability to switch to a different, and possibly more efficiently implemented, UI.

I can't comment on the original comment you made here https://news.ycombinator.com/item?id=20500181 because it's now flagged, so I'm commenting here to state: there is a deafening irony in deplatforming a post arguing against deplatforming as a valid tactic. I hope it is not lost on the HN mods. ;)

Bringing ideological battle into completely unrelated threads is definitely not ok. Please don't do this again. We want less ideological battle on HN, not more.

It's not an "ideological battle" if it's empirical (based on evidence and rational discourse).

The only "ideological battles" are religious in nature. I'm fine with keeping religion off HN, but you can't just call a contentious topic "ideological", especially if there is evidence and/or rational debate to be had. The fact that an insufficient number of people opt-in to discussing some topic in that way does not make it "ideological". Would you also consider conversation about climate change "ideological" and thus unworthy of HN?

In any event, the original post (interestingly) got unflagged, so this was superfluous anyway.

No matter the available resources, maintaining three separate desktop apps as well as a web app would mean fewer features and more bugs across the board. And Linux would probably be left on Electron (with less support attention), if not abandoned altogether.

Past a certain point, the downsides of Electron outweigh the downsides of native apps. We crossed that Rubicon years ago.

Pretty sure we crossed the Rubicon in the other direction, as it was basically the popularity of Electron apps that forced Microsoft to abandon their own browser rendering engine in favor of Chromium.

If you had come to me 10 years ago and said "Microsoft will drop their browser and use Google's code," I absolutely would not have believed you.

> it was basically the popularity of Electron apps that forced Microsoft to abandon their own browser rendering engine in favor of Chromium

I don't see how this has anything to do with the point here (browser engines are not normal applications…) or if it's even true.

If this was the case we wouldn't see so many Electron apps. You just don't perceive enough of the upsides to see why the decisions are being made.

What I've noticed is that much of the distaste with Electron apps has more to do with the idea of waste than with the actual practical implications. Programmers like things to be efficient and optimized. Electron sacrifices those traits in a couple of ways, in favor of actual productivity.

So "the downsides of Electron outweigh the downsides of native apps" when you're someone who patrols the task manager looking for things to be upset about. Your chat client - one of half a dozen applications you realistically have open - using 400MB of RAM on your 32GB workstation does not have a meaningful impact on your workflow. It's just offensive.
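For what it's worth, if you do want a number rather than a vibe, an Electron (or any Node) process can report its own footprint. A tiny sketch using `process.memoryUsage()`, which returns byte counts (the 400MB figure above is the commenter's, not measured here):

```typescript
// Report this process's resident set size (RSS) in megabytes.
// process.memoryUsage() is part of the Node.js API that Electron exposes;
// rss is the total memory held by the process, in bytes.
function rssMegabytes(): number {
  return process.memoryUsage().rss / (1024 * 1024);
}

console.log(`RSS: ${rssMegabytes().toFixed(1)} MB`);
```

Whether the resulting number "matters" on a 32GB workstation is exactly the disagreement in this thread.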

I think it definitely depends on your use-case. Like you wrote, if everything else you have open also churns through battery life, it's unlikely having Slack open will make a difference.

However, if your other applications are a terminal, an efficient text editor (e.g. Sublime), and other generally respectful apps, I've found that having Slack open is the difference between having enough battery life to work all day and not.

But that is just one Electron app. Imagine that every desktop app you use is written in Electron. I am pretty sure that even a 32GB workstation would be stretched to its limits.

32GB workstations are the exception, not the norm. I'd wager most people are using Slack from computers with 8GB RAM (e.g. every 13 inch Macbook Pro that isn't built to order) and at least one browser open at all times.

(I'm not against web APIs as a platform for desktop apps, but I'm happier using WebKit wrappers, and I think this would be improved, cross-platform, if Chrome could effectively host the apps that currently ship with their own instance of Electron or CEF.)

The upsides are for the companies shipping it, the downsides are for the users. So it's not a surprising outcome.

Tell that to people who want to use modern services on a Linux workstation

That's nice for them, but no matter what OS you're on, the desktop client is probably crappier than loading up the webpage, with the one upside of giving you a separate icon in the dock and app switcher.

Even with the performance improved, the "stuff the whole app in one web view" implementation is still showing. Clearly this isn't an Electron limitation; VS Code handles multiple windows just as well as any other text editor.

Slack's desktop client is basically "It's a web browser except worse and with the URL bar hidden." Since chat requires an internet connection and it's not like Slack's desktop version launches particularly fast, I'm not sure that it does anything better than using a browser.

We'll see if they support multiple instances on iOS 13. If they do, maybe we can get a Mac version based on that for something closer to native desktop behavior.

> no matter what OS you're on the desktop client is probably crappier than loading up the webpage

I don't see how it could ever be worse than the webpage. And it gives you nice perks like notification badge integration, which for me personally is essential.

My web browser has windows and tabs and lets me open Slack in multiple places if I want to see more than one view of Slack at the same time.

I usually don't need to do that, but when I do it's annoying that the desktop client prevents it.

It's fine that my phone doesn't let me open two text conversations at once since that's all the space it has, but on a computer with a 24+ inch screen and multiple virtual desktop spaces with different sets of tasks, it's a pointless limitation.

For reference - the saddest little File menu https://i.imgur.com/YPoCmbR.png

You're right, I, a user, don't perceive any of the upsides.

The upside that keeps getting paraded around is that it's a huge boon for product velocity. But that's just objectively false because Slack-the-product hasn't meaningfully changed in years.

You'd think that a software company that just went public for tens of billions would be able to produce 3 programs.

What about React Native on the desktop?

React Native isn't a zero-effort way to port a web app to a native app. It lets you share code where it makes sense, but you're still maintaining separate applications targeting different platforms that each have their own quirks and needs. It's similar to how you can't just port a regular native desktop app by using a different compiler; your business logic may be compatible, but you'll still have to do legwork on top of it.
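As a concrete (hypothetical) illustration of that split: in a React Native project the business logic lives in plain TypeScript modules shared by every target, while rendering stays per-platform (real projects use file suffixes like `MessageList.ios.tsx` vs `MessageList.web.tsx`; the types and function below are invented for the example):

```typescript
// Hypothetical shared-logic module for a chat client. This code is
// identical on iOS, Android, web, and a desktop port; only the UI layer
// that consumes it differs per platform.

interface Message {
  user: string;
  text: string;
  ts: number;
}

// Group consecutive-or-not messages by author, preserving arrival order
// within each group -- the kind of pure logic that shares cleanly.
function groupByUser(messages: Message[]): Map<string, Message[]> {
  const groups = new Map<string, Message[]>();
  for (const m of messages) {
    const existing = groups.get(m.user) ?? [];
    existing.push(m);
    groups.set(m.user, existing);
  }
  return groups;
}
```

The platform-specific files would each call `groupByUser()` but build different widget trees around it, which is why the "share code where it makes sense" framing is more accurate than "write once, run anywhere."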

I know, I'm a React Native developer.

I'm just saying that "native client" doesn't necessarily mean separate Mac, Windows and Linux ports; furthermore, for a billion dollar company, a native client of _some_ kind isn't an unrealistic demand.

That's fair; although I'm pretty sure they wouldn't be able to have such a consistent and unique look-and-feel with React Native (you can correct me if I'm wrong). It's debatable whether that's a good tradeoff, but it's totally one that some people would make.

well, consistent with what? If you mean the native Mac/Windows/Linux desktop experience, yes, it's definitely impractical to do that with any size budget. But the current Electron app isn't either. But if you want consistency with the web app, I don't see anything difficult about that.

I do wonder if the plugin/extensibility architecture — "Slack Apps" like Github and IMGUR and OpenTable — is the real answer. (Unless I'm mistaken and these run on the native mobile apps.)

Consistent across OSes; I haven't really done native desktop dev but it's my impression that the elements you're given can't be customized to nearly the same degree that they can on the web, via CSS.

Twitter has a super custom look and feel and I think that’s mostly react-native and react-native-web. I could be wrong about that though.

I think fewer features can be a very good thing actually.

No company has infinite resources. Maintaining native Mac and Windows apps to appease the few people who have an irrational hatred of Electron doesn't make good business sense when those resources could be going toward work that actually improves their product in a meaningful way.

Slack is the only web app I've seen that routinely triggers the Safari warning about its resource usage/effect on performance.

For years Slack has been a resource pig on the desktop and on the web. So what improvements are you imagining they've been working on? Chatting on the internet is hardly such a groundbreaking subject that it can't be done efficiently.

>So what improvements are you imagining they’ve been working on?

the ones that are described in the blog post that you're currently commenting on

The very problems that were caused by not having native apps.

Thank you for making my point for me.

Not irrational. The performance problems can be measured and are noticeable in everyday use.

how can you measure the performance difference between an app that exists and one that doesn't?

Chat apps have been around for literally decades. There’s plenty of prior and current competing apps that do essentially the same thing and use a fraction of the resources.

Very few prior apps did what Slack is doing. [1] And there are not that many actual competitors if you start counting them.

[1] It's still strange to me that many people compare Slack to IRC and XMPP and complain that they were doing all the same things, why have people abandoned them. Even though they (and especially the clients) were doing (or capable of doing) just a small fraction of what Slack is doing.

There are also web-based chat apps that use next to no resources. If Slack's web app is so heavy, what makes you think a native app from the same company would be considerably better?

I'm not saying I do, really: they seem to be incapable of writing anything efficient, and don't seem to be bothered by that.

They are already maintaining apps for each platform.

The question is whether the shared part is written in C++ vs. JavaScript.

>The question is whether the shared part is written in C++ vs. JavaScript.

also, whether the shared part is 98% of the code or 50% of the code.

As much as I'd love a native app, for a fast-moving company with rolling releases it's probably very hard to keep three platforms at the same level, especially if you want to run experiments too or roll out features to a small subset.

> for a fast moving company with rolling releases

Name three major feature improvements to Slack in the last three years.

I can think of one, threads, which are a fucking usability disaster and should have been killed immediately. I can't think of a single other major feature.

I like threads. Especially in busy rooms, or where you need a quick diversion of a topic that only affects some of the participants in the room.

I use it for incident updates internally too so the discussion is grouped. Much easier than paging through the channel.

Threads are my favourite feature.

It's not perfect, but we've used it extensively.

Why would they do it? I want a native Slack app, but they have no incentive to do so. Look at where Slack has grown with their current app and ecosystem. I’ve never heard a single company say they’re moving to Microsoft chat because of Electron, or going back to IRC.

It seems Slack has addressed the major issue with their current app with this rewrite — multiple workspaces.
