
Show HN: Wey – A fast, open-source Slack desktop app - zcbenz
https://github.com/yue/wey
======
polpo
Interesting to see this is from zcbenz, who was the initial creator of
Electron. Slack's desktop app, which is based on Electron, is often criticized
for its memory and CPU usage, so I'm wondering if the creation of Wey and its
underlying Yue library are an answer/atonement for that.

~~~
bb88
On the github page, he writes this:

> Normally for multiple teams with heavy traffic, Wey should not have any
> significant CPU usage, and RAM usage is usually under 100MB. However, if you
> have a team with more than 10k users in it, the memory usage may increase a
> lot.

Bloaty technology was what made MS great in the '80s and '90s, because 18
months later the speed of the CPU would have doubled.

I feel like we should think hard about optimizing for memory/cpu again.

~~~
JustSomeNobody
Now that Moore's Law is dead, that time is coming. We can't afford to be
wasteful. Not that we ever could, but we've spent over a decade catering to
"developer productivity", and now it's time to put the user back in the
forefront.

~~~
qznc
The last time I checked (roughly a year ago) Moore's law was still holding.

~~~
chacham15
Not sure why the downvotes. Moore's law[1] refers to the number of transistors
on the die. With the decreasing size of transistors, it has remained true.
Despite that, clock speeds have stopped growing, so serial performance is no
longer increasing as fast as it did in the past.

[1]
[https://en.wikipedia.org/wiki/Moore%27s_law](https://en.wikipedia.org/wiki/Moore%27s_law)

~~~
opencl
GPUs were stuck on 28nm for over 5 years before 16nm ones launched. Intel has
been stuck on 14nm for three and a half years and counting. That is nowhere
close to a doubling every 18 months.

~~~
SlowBro
To repeat: Moore’s law is about transistor count, not size. From the Wikipedia
article linked in the comment above yours: “Moore's law is the observation
that the number of transistors in a dense integrated circuit doubles
approximately every two years.”

Transistors can stay the exact same size for twenty years as long as the die
size grows, and Moore’s law still holds.

~~~
opencl
But the die size _didn't_ grow at nearly that rate; large dies remain very
expensive to make. For example, comparing the HD 7970 and R9 390X, similarly
positioned GPUs released 4 years apart in 2011 and 2015, transistor count grew
~50%, from 4.3 billion to 6.2 billion.

[https://www.anandtech.com/show/5261/amd-radeon-hd-7970-review](https://www.anandtech.com/show/5261/amd-radeon-hd-7970-review)

[https://www.anandtech.com/show/9387/amd-radeon-300-series/3](https://www.anandtech.com/show/9387/amd-radeon-300-series/3)

~~~
SlowBro
> _But the die size didn't grow at nearly that rate, large dies remain very
> expensive to make._

But again, did transistors double in count on any processor, not just GPUs? If
so, Moore held. Yes, according to this chart:

[https://en.m.wikipedia.org/wiki/Moore's_law#/media/File:Moore's_Law_Transistor_Count_1971-2016.png](https://en.m.wikipedia.org/wiki/Moore's_law#/media/File%3AMoore's_Law_Transistor_Count_1971-2016.png)

~~~
opencl
I mean, if you want to go with the highest transistor count chip currently in
existence, that would obviously be the 512Gb flash chips with ~170 billion
transistors. Or if you want to constrain it to 'processors', that would be the
30 billion transistor Altera FPGAs, which have been available for 3 years. No
CPU yet has exceeded the 10 billion transistor count of the Sparc M7, also
released 3 years ago. The Tesla P100, two years old, retains the highest GPU
transistor count at 15 billion.

It's meaningless to compare these totally different types of chips that end up
with extremely different transistor densities on the same node due to the
wiring requirements.

~~~
SlowBro
I'm just the messenger. Apparently there is some sort of criteria for what a
"dense integrated circuit" is for the chart previously pasted, and apparently
all of the listed processors passed the required criteria for charting the law
through time. What that criteria is, I couldn't tell you. Couldn't find it on
the Wikipedia page. "Moore's law is the observation that the number of
transistors in a dense integrated circuit doubles approximately every two
years." (Wikipedia)

~~~
opencl
Yes. I read the wikipedia quote the first time you posted it. No need to say
the exact same thing 5 times. The chart is clearly restricted to CPUs and ends
in 2016. It's 2018 and the top chip on that chart still has the highest
transistor count of any CPU on the market. Absolutely zero increase in
transistor count for the type of chip that would go on that chart. Every
category is seeing the exact same stagnation, Moore's law has been nowhere
close to being matched in the past few years. Every chipmaker is struggling
with their new nodes and falling way behind schedule on improvements. High-
volume chips that normal people buy are really a far more realistic and useful
barometer for Moore's law, and again you see the stagnation. Skylake and Kaby
Lake are near-identical chips. Coffee Lake is ~25% bigger. There is no
doubling of transistor counts going on in anything resembling an 18 month time
frame.

~~~
SlowBro
Okay, so your claim is that Moore's law no longer holds because for the past
two years transistor counts on CPUs have not increased. On that basis, I might
agree. Not (as you were arguing earlier) about the sizes of the transistors
themselves, or about GPUs from 2011 to 2015, or about 512Gb flash chips; none
of that applied to this conversation, hence my need to repeat myself. Thank
you for finally clearing that up.

Are there no CPUs with higher transistor counts out there? Maybe ones you are
not aware of yet?

~~~
opencl
Please show me this CPU with a higher transistor count. There is a maximum die
size that each fab is able to make, mostly related to the optics used for
their photolithography process. Most designs are nowhere near the limit
because larger sizes are uneconomical to produce, though NVIDIA for example is
on the record stating that TSMC is physically incapable of producing anything
larger than their current GPUs.

~~~
SlowBro
> _Please show me this CPU with a higher transistor count._

That was my question to you. You claimed, "It's 2018 and the top chip on that
chart still has the highest transistor count of any CPU on the market." Which
I take to mean that you don't know of any CPU with any higher transistor
counts, which is an argument from silence. There could be a CPU outside of
your knowledge.

So, we wait to see. Moore may be dead but "moore" time may be needed to hold
the funeral.

~~~
qznc
I agree with both of you that transistor count per mm² has been stagnating in
recent years. However, that has happened before: for example, 1982-1987 and
2007-2010, according to Wikipedia data.

It is too early to declare Moore's Law dead.

------
seba_dos1
Nice! However, looking at the screenshots... I wonder why developers so often
seem to get UI paddings, margins and font sizes so wrong. Does it really need
some special kind of sensitivity to notice and care about it? :)

~~~
iaml
Unfortunately, a lot of developers put aesthetics and polish somewhere deep
down on the list of priorities and don't seem to comprehend that other people
care a lot about such things.

~~~
dopamean
And a lot of developers just don't have an eye for it at all. I can look at a
UI all day and tell that something is off but I couldn't even begin to tell
you what or how to fix it. It doesn't mean I don't care.

~~~
seanp2k2
It’s not just having an eye for it; UI design is an entire field, and
human-computer interaction (HCI) has existed since the '70s. I think a large
part of the problem is PMs/managers/tech leads not understanding that design
has subjective elements but can also be objectively good or bad, and that
they can’t tell the difference between a design that is correct and works
well but might not look very flashy and something that gets lots of likes on
Dribbble but is awful to use. This feeds on itself and leads to more people
thinking that they know what good design is (pretty) and hiring other people
who call themselves designers but have no idea about the actual science
behind it.

~~~
mediocrejoker
As a developer who appreciates good design but doesn’t really know how to
create it, can you suggest a good entry level resource that summarizes what
we’ve learned over the last 50 years about HCI and how to make usable
interfaces?

~~~
brailsafe
Don't Make Me Think by Steve Krug; Grid Systems in Graphic Design by Josef
Müller-Brockmann; Designing with Grids by Mark Boulton; any of Ellen Lupton's
type design books. All light reads except Josef's.

------
kodablah
First time I'm seeing Yue [0]. Seems like C++ w/ some Lua and JS bindings. Are
there C bindings available so it can be used in other ecosystems without
writing C++ glue? Writing a lib in C++ is rough for sharing across ecosystems
and I'd say is a reason Qt and wxWidgets aren't more widespread than they
already are.

0 - [https://github.com/yue/yue](https://github.com/yue/yue)

~~~
frou_dh
Yue sounds similar to libui[0], which has numerous language bindings, but
seems to be in long term flux in terms of scope.

[0]: [https://github.com/andlabs/libui](https://github.com/andlabs/libui)

------
maxehmookau
Awesome! ...also kinda depressing that it's neccessary.

Hey Slack! How about an app for your service that doesn't grind my macbook pro
to a halt?

~~~
rplnt
Disable emojis. Seriously, the animated emojis must be implemented with
something like this:

    while (true) {
        for (const emoji of emojis) {
            emoji.refresh();
            if (emoji.sinceLastFrame() > 1/60) emoji.nextFrame();
        }
    }

Slack can easily eat up one core with just a few of them on the screen. I've
never seen anything like it.
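A less wasteful sketch (hypothetical names; the client's internals aren't
public) would advance frames only when enough time has elapsed, driven by one
shared timer rather than a busy loop:

```javascript
// Hypothetical sketch: a single shared tick advances each emoji's animation
// only when a frame interval has elapsed, instead of spinning a hot loop.
const FRAME_INTERVAL_MS = 1000 / 60;

function makeEmoji() {
  return { frame: 0, lastFrameAt: -Infinity };
}

function tick(emojis, nowMs) {
  for (const emoji of emojis) {
    if (nowMs - emoji.lastFrameAt >= FRAME_INTERVAL_MS) {
      emoji.frame += 1;          // advance to the next animation frame
      emoji.lastFrameAt = nowMs; // remember when we last advanced
    }
  }
}

// In a browser you would drive tick() from requestAnimationFrame or a single
// setInterval, so a screen full of idle emojis costs almost nothing.
const emojis = [makeEmoji(), makeEmoji()];
tick(emojis, 100); // enough time elapsed: both advance
tick(emojis, 105); // only 5ms later: neither advances again
```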

~~~
wccrawford
IIRC, the last update on OSX specifically mentioned that they were requesting
the emojis way too often, and that it's now just doing it once.

------
PrimeDirective
How is this better than the official web client and why would I use it? It's
not really "native", either.

~~~
bartread
I would guess it uses less memory: it claims generally <100MB, although in my
opinion that's still a ridiculous amount of memory for a desktop IM client to
use. Still, it's better than 800MB.

~~~
PrimeDirective
A bit off topic, but yes, these days "native" apps tend to be memory hogs.
Modern computers have tons of CPU and memory, which seems to make developers
lazy. Generally, no one optimizes their desktop clients.

~~~
corrigible
Telegram is amazingly smooth on Linux and Windows.

~~~
chocolatkey
Because it has a real native backend. It uses Qt/C++. I really wish other chat
apps would do it...

------
godot
Seems really fast and snappy when I tried it, but I already ran into multiple
bugs within minutes of using it that make it unusable for me. (Not meant as
harsh criticism; just understand that it's early software, and I hope it
improves and gets good enough for daily use.)

1. The always-visible scroll bars (both horizontal and vertical) on the text
box make it impossible to see what I am typing (the scrollbars cover it, and
the text field is not large enough to show both the scrollbars and the text)
-- this is on the Mac client. The same scrollbar issue exists on the channel
list on the side as well, but at least there it's not blocking anything.

2. When I sent a URL, the message immediately disappeared: Slack tried to
generate the link preview, which apparently isn't supported yet in this
client, so the whole message is just not shown.

~~~
philliphaydon
I absolutely hate websites and apps that hide scroll bars. I always turn that
feature off in OS X.

~~~
godot
I share that sentiment about web sites. But this is what I am talking about:
[https://imgur.com/DMOtSaO](https://imgur.com/DMOtSaO)

That input text field is where I am supposed to type my message. I can't see
what I am typing, and I can't resize it either. Doesn't that make it
instantly unusable?

~~~
philliphaydon
Ok, fair enough, that's a pretty bad design flaw.

------
m0meni
This is pretty cool. From just reading the repo, it's built on a library I'd
never heard of before called Yue[0] that seems to be similar to Electron, but
more lightweight.

[0]: [https://github.com/yue/yue](https://github.com/yue/yue)

~~~
opencl
It's not similar to Electron at all. Zero web technologies involved in it,
other than supporting JS bindings. Far more like Qt/GTK except trying to use
native widgets as much as possible. The Linux support actually _is_ GTK, but
MacOS/Windows use the vendor UI libraries.

~~~
bpicolo
What's the bit about using electron in the JS bindings docs?

[http://libyue.com/docs/v0.4.0/js/guides/getting_started.html](http://libyue.com/docs/v0.4.0/js/guides/getting_started.html)

------
staticelf
I kind of understand the need, but I would be more prone to actually use it
if it were fully native. For now, I can simply wait for Slack to update their
client.

I don't really get why I should use this instead of Slack unless I'm running
really shitty hardware, which I'm not. I had issues with Slack before, but
they fixed most of them.

Also, new features in Slack will börk this app pretty quickly, as each has to
be implemented here before you can use it.

Cool project though, even if I personally don't get the value of it. Maybe if
it added support for Gitter, Rocket.Chat, etc., you would have one client to
rule them all.

------
simon1573
How does this compare to Franz?
[https://github.com/meetfranz/franz](https://github.com/meetfranz/franz)

~~~
hs86
Franz launches a completely new instance for every workspace/network that you
add, and these are based on regular webviews. Unlike Slack, it will not
unload less frequently used workspaces from your RAM, and with 8 Slack
workspaces, Telegram, WhatsApp, and IRCCloud it managed to permanently
allocate more than 3 GiB of RAM. That may be fine on my beefy desktop PC, but
not on my MacBook Air with only 8 GiB of RAM.

In my case, just leaving the official apps of these chat networks in the
background seems to be less resource intensive than running Franz. I really
hope that [https://eul.im/](https://eul.im/) will live up to my expectations.

~~~
gmemstr
I'm still really leery of eul.im: I can't seem to find anybody reviewing or
actually using it, and it throws up a couple of detections on VirusTotal.
Would be interested to hear about your experience.

~~~
amedvednikov
Hi, developer here. I just tested it on VirusTotal and didn't get any
detections. Could you please post a screenshot of what you get, or contact me
via support@eul.im?

Thanks

~~~
gmemstr
VirusTotal link:
[https://www.virustotal.com/#/file/e88753917d7283715f0a4c633e...](https://www.virustotal.com/#/file/e88753917d7283715f0a4c633ea9f6740b38dd0fed3419a3ee03bcc84357185b/detection)

Screenshot: [https://i.imgur.com/X60nnAT.png](https://i.imgur.com/X60nnAT.png)

I'm excited to try it, but because I'd be using it for sensitive/confidential
work, I don't want to risk it, unfortunately.

------
wuliwong
I installed it, and it seemed a lot more "performant" than Slack's app on
OS X. The Wey app crashed within 5 minutes, though. :(

I noted that Wey only gives links to pictures, with no previews. Linking out
is fine, but I think inline previews would be a nice addition down the road.

------
wenbert
The beta version of the Slack app for macOS seems to be better:
[https://slack.com/beta/osx](https://slack.com/beta/osx). I can run it all
day without problems. The non-beta one consumed too much RAM.

------
rsyring
Another client built specifically for Linux:
[https://github.com/raelgc/scudcloud](https://github.com/raelgc/scudcloud)

------
zackmorris
Just to play devil's advocate, I still hope that Slack and other Electron apps
put pressure on things like DOM libraries to improve their internal
representations. The issue here is that a table or manually nested divs store
everything at once rather than memoizing.

I personally don't believe that RecyclerView/TableLayout/GridLayout and
UITableView/UICollectionView are the future because they are not declarative.
The more code we have to juggle, the more it opens us up to bugs and
pathological edge cases.

What we need is something similar to
[https://github.com/splinesoft/SSDataSources](https://github.com/splinesoft/SSDataSources)
that abstracts away the micromanagement inherent to view recycling. It would
likely need to be tied into an evented data source like Firebase for JSON or
maybe Redux or the immutable data store that Clojure uses. Then that metaphor
could be adapted to the DOM and Electron's speed/memory overhead would be
reduced substantially.

TL;DR Wey will likely succeed because it optimizes the table loading either
with a faster runtime or by implementing the view recycling manually. This is
not a long-term solution, unfortunately. But I applaud its efforts.
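For concreteness, the core of the view-recycling idea described above can be
sketched as computing only the rows that intersect the viewport (purely
illustrative; Wey's actual implementation may differ):

```javascript
// Illustrative sketch of windowed rendering: given the scroll position and a
// fixed row height, compute the only rows that need to exist on screen.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows) {
  const first = Math.floor(scrollTop / rowHeight);
  const count = Math.ceil(viewportHeight / rowHeight) + 1; // +1 for a partially visible row
  const last = Math.min(totalRows, first + count);
  return { first, last }; // render rows [first, last), recycle the rest
}

console.log(visibleRange(1000, 600, 40, 10000)); // { first: 25, last: 41 }
```

A 10,000-message channel then costs only ~16 live rows instead of 10,000 DOM
nodes, which is the saving the parent comment is pointing at.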

------
xena
I'd love it if someone made a slack desktop app using actually native code
instead of javascript. I'd pay you money. Please, someone make this so I can
pay you money.

~~~
wffurr
Did you try Wey? Is it fast like the author claims? Is JavaScript vs. native
code really the problem, or is it Slack's bloated Electron usage?

------
tuananh
* If every Electron/Yue app's performance were like this, I'd be fine with Electron/Yue.

* The app is still quite buggy, but it shows a lot of potential.

------
aphextron
Is there any possibility of getting a signed macOS package for this? I can't
install anything on my work machine that's unidentified.

------
indigodaddy
It seems not to support Azure SSO login for Slack? It says the password is
incorrect. That actually made me a bit nervous, TBH.

~~~
voxadam
Did you file a bug report?

------
tuananh
Interesting bit: the creator of Yue works at GitHub, but Atom's Xray has
already started, so there's no way this gets adopted by Atom, right?

~~~
lioeters
Perhaps the author of Yue has different opinions than the team working on
Xray?

It seems that there's duplicated effort among projects like Xray, Yue, and
libui-node/proton-native. Still, I'm glad to see people trying to solve this
problem area of cross-platform apps, taking the Electron paradigm to the next
level of efficiency.

[https://github.com/atom/xray](https://github.com/atom/xray)

[https://github.com/yue/yue](https://github.com/yue/yue)

[https://github.com/parro-it/libui-node](https://github.com/parro-it/libui-node)

[https://github.com/kusti8/proton-native](https://github.com/kusti8/proton-native)

------
faitswulff
> While Wey currently only supports Slack, it is on roadmap to add support for
> more services, and in future we will support plugins to add arbitrary
> services.

This is actually a Slack client, then?

------
grafofilia
Ay, wey; how cool

------
rplnt
It's insane that 100MB of RAM for a "chat" client is seen as a good result.
How did we get here? Is it really that much cheaper to develop desktop apps
as slow, bloated web applications?

~~~
Klonoar
Most people don't seem to grasp that in this day and age, any GUI program (at
least on OS X) is going to be minimum 20MB due to the retina display. Add in
that realistically you need to use Webkit (or some HTML renderer), at which
point... 100MB is pretty good.

For reference, an RSS reader I threw together using standard Cocoa widgets
will hit 50-60MB with no issues, and that's with my being annoyingly judicious
with how it's built.

~~~
zeveb
> Add in that realistically you need to use Webkit (or some HTML renderer), at
> which point... 100MB is pretty good.

Why do you need an HTML renderer? Slack uses Markdown.

emacs-slack is pretty lightweight …
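To illustrate the point, Slack's markup ("mrkdwn") is simple enough that even
a toy converter (purely illustrative, not Slack's real parser) can map it to
terminal attributes instead of HTML:

```javascript
// Toy sketch: map Slack-style *bold* and _italic_ to ANSI escapes, showing
// that the text format doesn't inherently require a web renderer.
// (\x1b[1m = bold, \x1b[4m = underline, since terminal italic support varies.)
function mrkdwnToAnsi(text) {
  return text
    .replace(/\*([^*]+)\*/g, '\x1b[1m$1\x1b[0m')
    .replace(/_([^_]+)_/g, '\x1b[4m$1\x1b[0m');
}

console.log(mrkdwnToAnsi('deploy *done*, see _notes_'));
```

The hard part, as the sibling comments note, is keeping up with everything
else Slack attaches to messages (previews, attachments, interactive bits),
not the base markup.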

~~~
Klonoar
Slack uses Markdown, yes, but supporting everything under the sun that Slack
does (especially as they add new things) is just much easier inside a Webview.

The job of programming isn't to reinvent the wheel everywhere; it's to
deliver a product that doesn't suck. With native code, you're implementing a
native solution for each feature Slack has or adds. Slack's content is web
content.

You're really arguing that an insane amount of extra engineering time should
be spent to conserve, at best, an extra 40-50MB of RAM... and this is
disregarding how modern RAM works anyway.

~~~
progval
> Slack's content is web content.

That's the underlying issue. If the "protocol" uses HTML, all clients will
need to embed a web view.

------
CNJ7654
Something something 'do you know da wey :DD'

------
betimsl
I was reading Slack as in Slackware. Then I opened the link; now I'm
disappointed.

