This is a hidden downside of writing code that's only just fast enough to work. It may feel fine for you, but anyone building a new computer will have to match your computer's performance, or everything will feel sluggish. We're raising the bar for new hardware way higher than it needs to be.
I'm pretty sure smartphone apps are generally tested against a decent variety of performance targets; perhaps that should happen more for desktop software.
Now that I think of it, the push for more and more Electron apps may be because all devs are living comfortably on 16GB and 32GB devices, where the voracious RAM appetite of Electron does not matter.
IMO it's generally the JS hype producing these monstrosities. Qt and Java have had fast, powerful cross-platform UIs for ages.
I'm trying to push a new paradigm at work where we write most of the logic in Rust. The UI view layer would be JavaFX for desktop and React Native for mobile. Only two UIs to target for all platforms, reuse of core code, and good performance all around.
If you use "the JS hype" as a synonym for what are considered best practices
by the folks in the NodeJS/NPM ecosystem, then you're right. (I.e., the fault
lies in the "hype" half, not in the "JS" half.) Ten to fifteen years ago, one of the
most prevalent criticisms you could routinely hear in the programming world
was about the bloat of Java.
I think that, where JS is concerned, for some reason we're seeing a regression where it's becoming
"conventional wisdom" that JS itself is slow, against the evidence to
the contrary. I've seen straightfaced comments here on HN in the last few
months, for example, that complain about the slowness of JS as a general rule.
But the reality is that the JS runtimes have billions of dollars of
engineering from top-tier teams invested in them, and today's JS engines are
by-and-large pretty fast. V8, in fact, shares parts of its implementation
with Java's HotSpot—specifically the parts that were developed by the folks
who made the StrongTalk VM and who were acquired by Sun, thus leading to that
work being incorporated into HotSpot in the first place.
So what is that reason?
There has been a noticeable shift in performance degradation that corresponds
to the rise of, for lack of a better term, "the NPM way of programming". As
with the case of Enterprise™ Java®, the problem lies in the way people in
those circles are writing their programs and what passes for an "idiomatic"
coding style. The NPM style used heavily in many Electron apps is relatively
recent, with respect to JS's lifetime. Even before JS engines were JITted, Firefox itself had hundreds of
thousands (millions?) of lines of JS doing a lot of the work both in what you
see on the screen when you're poking at your browser as well as behind the
scenes. Notably, the JS in those cases is not of the NPM style. There's
nothing in principle that means the "Emacs-like" application architecture
(compiled core, dynamic shell) needs to be slow, particularly on today's hardware.
As I've mentioned before, in the early days of Firefox, I used to use 1.0 and 1.5 on an 800 MHz
PIII with 128 MB of RAM. (For folks looking to leap in here with what they'd
like to consider a well-timed "well, actually…": yes, I'm acutely aware that
even that number is on the order of 100× or more beyond what is necessary to
get real work done with a computer—but the point is that it's nothing compared
to, say, stock 2015-era Chromebooks with 8GB of RAM, or a comparable quantity
in today's phones, for that matter.) Browser extensions are written in JS, and
my laptop now is several times over better than the laptop I used 10 years
ago—and yet... if I install any arbitrary extension today, there's a good
chance that I will encounter perceptible bloat there, too. A recent example
(within the last year) that I know of, is the WorldBrain Memex add-on, which
upon immediate use has the telltale mark of influence from the world of modern
"frontend" webdev, and the performance to match. This wasn't the case when
add-ons were authored in the sui generis style of yesteryear, before the NPM
practices leaked over and began influencing everything related to JS—and
tainting people's perceptions.
So I find the attempt to draw a contrast between JS and Java a little misplaced. Even ignoring the
common history (in both culture and provenance), there's the fact that Java
IDEs themselves have always been the poster children of bloat, second only to (or at least in the running with) Visual Studio proper. I know people
like to point to VSCode as an example of a "snappy" Electron app, and the
inevitable retort about just how lean it really is. (On my system, I don't
think it's possible to run VSCode without making sure that there's at least
350 MB of main memory to spare before launching it. Compare to the old joke
about Emacs's "bloat": that it was supposed to stand for "Eight Megabytes And
Constantly Swapping".) On the other hand, I have to recognize that the folks
calling VSCode snappy really are on to something. The previous statements
notwithstanding, the fact is that VSCode is still snappier than anything
I've ever experienced using one of the mainstream Java IDEs. If I were a
naive person, I could point to that and conclude that the problem lies with
Java-the-language. And yet I use my phone every day, and large parts of it are
written in Java. To be fair, it does impart some impression of bloat and
sluggishness, so it's prudent to keep in mind that earlier versions of
Android, on older and more limited hardware, did have a fairly snappy feel.
And those observations lead us back to the root problem, which is that if you
judged only by much of the code being written today, programmers seem to have
forgotten (or simply never learned?) how not to write code that's bloated and slow.
It's because people have plenty of experience with actual real-world JS code being very slow. Which has nothing to do with the runtime or the ability of someone purposely writing optimized JS to optimize it well.
The fact that you can write fast JS doesn't mean that the language itself or popular frameworks encourage you to do so, whereas that is the case for a lot of other languages with a reputation for being faster.
I, at least, have had the displeasure of using slow gui and other programs written in those languages.
> The fact that you can write fast JS doesn't mean that the language itself or popular frameworks encourage you to do so
One half of this sentence is bang-on, and the other half is not. That is, it conflates what is encouraged by the language with what is encouraged by the modern JS crowd, and suggests something in that vein—as if they're one tightly interconnected bundle—but in fact, they are two distinct things, and I made several remarks in my original comment alluding to this.
To successfully argue the point you're making now, you have to argue that "the NPM way" that is now prevalent is the inevitable result of merely setting out to write a program in JS. But, in fact, it's not. As I mentioned, the JS that made up a huge proportion of Firefox's codebase was written in a style that doesn't resemble the style now popular with NodeJS and NPM, but there was no big, conscious effort to do that by, say, avoiding pitfalls of the language and the things that might make it slow—it was purely a result of the lack of opportunity for being tainted by bad examples from the NodeJS/NPM world, since that world didn't exist then. The main influence on programmers writing JS for Firefox was the influence of Objective-C, C++, and Java.
You certainly can "optimize" your JS to make it faster, just like you can with any program, but that's not to say you have to—you can leave the optimization to the engine itself most of the time. All that's really needed, on average, to make sure that JS is fast, is to avoid tainting yourself with mindworms that have proliferated in the NodeJS+NPM ecosystem, that is: just don't do the things that you would have never thought of doing were it not for having seen others doing it somewhere else on GitHub or in the packages hosted on npmjs.com.
In other words, if you want to write JS that avoids being slow, then you don't have to take any special effort. Generally, you can start by opting for writing the dumbest, most boring code possible. (Indeed, I routinely come across "clever" code by self-styled NodeJS aficionados exhibiting elaborate contortions to fight against the language, when it would be far better to just do the simplest possible thing and then move on. Refactoring to eliminate these contortions can even make the code more performant and more concise.)
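To make this concrete, here's a hypothetical illustration (the task and names are mine, not from any particular package) of the kind of contortion I mean, next to the boring version:

```javascript
// Hypothetical illustration: counting word frequencies.

// "Clever" style often seen in the wild: spreading the accumulator
// into a fresh object on every iteration. Reads as fashionable,
// but re-copies the whole object each time, making the reduce
// quadratic in the number of distinct words.
const countClever = (words) =>
  words.reduce((acc, w) => ({ ...acc, [w]: (acc[w] || 0) + 1 }), {});

// The boring version: one Map, one loop. Easier to read, and the
// engine has no trouble making it fast.
function countBoring(words) {
  const counts = new Map();
  for (const w of words) counts.set(w, (counts.get(w) || 0) + 1);
  return counts;
}
```

Both produce the same counts; only one of them fights the language.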
For example, let's say you have something written in Java, or something with an analogue in C++ or Go, and your team has some motivation to either recreate it in or migrate it to JS. That's perfect—before you ever think about handing the task over to a professional NodeJS programmer, you really should give some heavy consideration to copying the architecture from the existing, non-JS implementation, down to class names and code organization, and doing a straightforward, procedure-for-procedure port to JS. (Although, in the case of migrating from Java, maybe also consider eliminating any unnecessary abstractions along the way. Or don't.) There's a good chance that this will have satisfactory results that challenge your assumptions. Even if you're creating something from scratch rather than porting an existing solution, once again, all you have to do is to not worry about being fashionable and trendy, and just do the most straightforward thing possible.
There's a recent comment here on HN that's extremely relevant and really hints at what's going on with all these slow, bloated, and messy projects from the world of modern webdev:
As far as VSCode being a "snappy" Electron app - I still vividly remember that bug when users were seeing it hog an entire CPU core for itself while not being interacted with, and it turned out that it was caused by blinking cursor in the editor. You know, the kind of problem that was already solved by the time Windows 1.0 came out? And sure, they fixed it... but every desktop Electron app is a potential minefield filled with stuff like that. Some of it is just not obvious until you run it on slower hardware. Or in remote desktop - that also flushes out a lot of "GPUs are fast enough, who cares" problems.
I disagree that the code written is slower than in the past. There's simply much more of it. People build on top of existing stuff N layers deep. Just look at how far removed Electron is from the OS. It's crazy that it even works. JS also has some unfixable limitations.
It needs to be parsed every time. Java comes packed in a very efficient bytecode format that's both several times smaller than minified JS and far faster to parse.
JS's lack of typing also means it uses way more RAM than Java in practice. Java has a lot of Object overhead, but nothing anywhere close to JS.
For UIs, JS's lack of threading is terrible. In every language that supports threading, the main way to design a UI is to have a "UI thread" that you never assign slow work to. In JS it's extremely easy to accidentally block during UI rendering. I'm assuming a majority of Slack's glitches and freezes come from the single-threaded model.
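A minimal sketch of the failure mode and the usual workaround (names here are illustrative): a long synchronous loop starves the event loop, so nothing paints until it finishes; slicing the work and yielding between slices keeps the UI responsive (a Web Worker is often the better fix for truly heavy work).

```javascript
// Blocks the UI: the event loop can't paint or handle input
// until this loop completes.
function processAllBlocking(items, processItem) {
  for (const item of items) processItem(item);
}

// Cooperative version: work in small slices, yielding back to the
// event loop between slices via setTimeout so rendering continues.
function processAllChunked(items, processItem, sliceSize = 500) {
  return new Promise((resolve) => {
    let i = 0;
    function slice() {
      const end = Math.min(i + sliceSize, items.length);
      for (; i < end; i++) processItem(items[i]);
      if (i < items.length) setTimeout(slice, 0); // yield to the event loop
      else resolve();
    }
    slice();
  });
}
```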
Modern Java apps feel native (on desktop). You probably use some that you don't realize aren't native.
JS is several times slower and uses several times as much RAM. And VM warmup is slower than Java's because the code is distributed in a very inefficient format. It's not a good language for UIs, but it's becoming the only option for mobile unless you want to write multiple implementations.
Okay. You're not actually disagreeing here. This is the antipattern popular with NPM folks (and extremely reminiscent of Enterprise Java) that I identified as the problem.
> I disagree that the code written is slower than in the past. There's simply much more of it.
Even ignoring that "there is more of it" is part of the problem, you can say "I disagree", but all that gives us is a stalemate unless you're going to introduce data into the conversation.
> And the VM warmup is slower than Java because the code is distributed in a very inefficient format.
Not even the folks who work on Java agree with that. Warmup is one of Java's weak spots. The GraalVM team can and will tell you this. It's one of the things they bring up when trying to get people to temper their expectations.
I recently ported a command-line utility more or less line-by-line from Java to JS, in an extremely naive way—the only concern was to make it work. When I finished, on a lark I checked how it compared against the Java version. Even with the JDK's wealth of specialized collections, compared to, say, the way that in the JS version all the places that expect a map got a general-purpose ES6 Map, the JS version running on NodeJS would beat the Java version every time. In this case, it doesn't actually matter because it wasn't performance-sensitive code, and in both cases, both processes would terminate in <1 second, but the fact remains that NodeJS was able to parse the JS program, compile it, execute, and then terminate faster than the java process could launch, read the bytecode, verify it, and then perform the same job.
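To illustrate what a port like this looks like (this is a hypothetical sketch, not the actual utility; the class and its names are mine), the Java shape—a class holding a `HashMap<String, List<String>>`—carries over procedure-for-procedure, with ES6 Map standing in for HashMap:

```javascript
// Hypothetical line-by-line port: the structure mirrors what the
// Java original would look like, with Map replacing HashMap.
class FileIndex {
  constructor() {
    this.byExtension = new Map(); // was HashMap<String, List<String>>
  }

  add(path) {
    const dot = path.lastIndexOf(".");
    const ext = dot === -1 ? "" : path.slice(dot + 1);
    if (!this.byExtension.has(ext)) this.byExtension.set(ext, []);
    this.byExtension.get(ext).push(path);
  }

  lookup(ext) {
    return this.byExtension.get(ext) || [];
  }
}
```

Nothing clever, nothing "idiomatic NPM"—and this is exactly the kind of code today's JS engines handle well.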
There's an interview from 2009 with Lars Bak on MSDN that you might look into, with the MS folks on the static languages side, and Lars on the other side explaining why in practice V8's performance can be comparable if not better than with "managed", bytecode-based, static languages like Java and C#.
FWIW, I'm not even a dynamic languages fanatic. Another of my big complaints about the NPM crowd is their lack of regard for making sure the "shape" of their inputs is easily decipherable. I've made money as a result of dynamic language folks thinking that using a dynamic language means you don't have to worry about types, and that attitude leading to CVE-level security problems. There are a couple Smalltalk papers I've enjoyed reading, both somewhat critical/skeptical of the promises of dynamic languages. In general, I advocate for writing code as if there is a static type system in place, even if you're in a dynamic language that doesn't require it.
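One way to do that without leaving plain JS is JSDoc type annotations, which tools like the TypeScript checker (with `checkJs`) can verify. A small sketch, with illustrative shapes of my own choosing:

```javascript
// "Write as if there's a static type system": the types live in
// comments, so this is still plain JS, but a checker can enforce them.

/**
 * @typedef {{ id: number, name: string }} User
 */

/**
 * @param {User[]} users
 * @param {number} id
 * @returns {User | undefined}
 */
function findUser(users, id) {
  return users.find((u) => u.id === id);
}
```

The annotations cost nothing at runtime, but they make the "shape" of the inputs decipherable to both tools and readers.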
Keep in mind, though, that this is all completely beside the point, because my original message was only that JS is now starting to be re-perceived as slow as a general rule because of NPM's hype-driven development, where programs are authored according to which patterns are trending at the time, or by otherwise trying to imitate the styles of NPM tastemakers and "thoughtleaders", which leads to people creating huge messes. The entire Java versus JS issue was an aside.
1. "How (and Why) Developers Use the Dynamic Features of
Programming Languages: The Case of Smalltalk"
2. "An Overview of Modular Smalltalk" by Allen Wirfs-Brock and Brian Wilkerson. (awb was the editor of ECMA-262 version 6, FWIW.)
And on that note, here are two more of my favorite programming essays of all time:
"Java for Everything". http://www.teamten.com/lawrence/writings/java-for-everything...
"Too DRY - The Grep Test". http://jamie-wong.com/2013/07/12/grep-test/
And Slack has become a bloated piece of crap. It lags like hell on everything I own. It takes a few seconds to switch between channels and workspaces on my overclocked 3700x with 32gb of ram on fiber.
It's important for sales pages and ads to be pretty. Web apps just need to work well, keep the UI guys away from me :)
Right? From the comments in the article,
>It should be much faster than a Raspberry pi 3 that do not have all those problems.
I agree. User doesn't know what he's doing yet.
One tip I would offer him is avoid Firefox. Firefox is built with Rust, which doesn't support ARM as a Tier 1 platform. He would be MUCH better off using Chromium, which supports numerous ARM Chromebooks running this exact CPU just fine.
(I use Chrome these days...)
The problem is that fast software is harder to write than slow software (there are trivial transforms from fast to slow, but not vice versa). Thus, each generation back you expect your software to run smoothly on, represents more effort (or at least, more care) on the part of the author. We should expect software to be "just fast enough for the average user" essentially by natural law.
In businesses that would actually pay slack money, they will probably pay $800 for a decent enough laptop to run slack every 4 years and employees bring their own smartphones now.
(I also don't see anything in this comment that actually addresses the accessibility metaphor raised by the person you're responding to.)
It's written in pure C by one developer, and it's super fast. Operations are on par with other software because fx libraries are usually already written in native compiled languages and optimized, but this one loads its interface in a lot less than one second, which becomes like 100 ms or less cached. Wouldn't it be wonderful if all software would at least load their interface comparatively fast? Why do i have to waste (tens? hundreds?) megabytes just to show a GUI that does absolutely nothing else than linking events to graphics elements when someone can write a full fledged software using a quite complex interface whose entire executable size is less than one megabyte?
We develop websites on $3000 MacBook Pros. The shitty $300 Windows laptop I see in most households struggles with a lot of websites. As do the $200 Android phones I see many people use on a daily basis. I rarely see any developer testing performance on a crap device or a bad connection.
The Pinebook might be an outlier device by itself. But ARM Windows laptops are pretty common.
> Alt-tabbing between Firefox and a terminal takes one second, as does switching between Firefox tabs.
This is not my experience at all. I do not notice undue delays switching between applications (do not use slack).
> The wifi is not very good, it can't connect reliably to an access point in the next room
No problems at all. This weekend I'm at my parents, I am connected to an old wrt54gl all day long. I have problems with my other laptop (XPS 13) on the same network.
> The screen size and resolution scream for fractional scaling but Manjaro does not seem to provide it.
I've been using this machine since late March, I didn't take notes when I configured it but, from what I remember, I only had to make some small adjustments to get things right.
Sure, the PBP is an under-powered machine but, given the right allowances (the software is under development, the trackpad is not the best, suspend is not working correctly, etc), I find it easy to use as a daily driver most of the time.
edit: typo and link
None of this is perfect or easy to set up and it's in no way a substitute for fractional scaling support in the toolkits of the programs you're using but it has worked for a very long time and produces appropriately sized crisp and sharp fonts on all monitors, and more importantly it should work reasonably well even on outdated programs or toolkits that have no support for fractional scaling. The Arch wiki link should explain this but it's not spelled out and there's a bit of a misconception that fractional scaling is the only way to get a blur free HiDPI experience on Linux when that just isn't the case at all.
Fractional scaling does not have to be only for super high res screens.
> This is not my experience at all. I do not notice undue delays switching between applications (do not use slack).
I do wonder how much of his experience might be Slack + GNOME or KDE, some desktop apps are just horribly bloated because they can get away with it. Also modern DEs are just massive, most people don't realise how much resources they take up because compared to 25 years ago we all have x86 supercomputers.
I find Slack to be absurdly slow for what is fundamentally just a text-based web app, and yet I'm using a 1yr old XPS with an 8th gen Intel CPU... I run i3wm and keep things very minimal, yet I still find myself waiting seconds for Slack to do stuff.
I recently trialed a 2GB memory (!) dual-core Celeron minipc as a workstation, and if I could have solved all the 4k 2x scaling issues without spending hours (more) on it or resorting to a too-heavy-for-the-hardware DE, and gotten 60hz out of it rather than 30, I'd probably have upgraded the RAM to its max of 8GB and been totally happy on it. Void Linux with suckless tools made it feel blazing fast, as long as I avoided webshit (so, Sublime over VSCode, keep Slack the hell away from it, use a real email client rather than a webpage, that kind of thing). All browsers felt too slow to even launch except Surf and qute, and the latter was a tad slower and jankier than Surf so I settled on that, but it was fine as long as I avoided the kind of pages that eat a couple hundred MB and burn cycles for no clear reason (which is lots of them, sadly). I bet I could have made it work even better if I'd looked into adding a disable/enable JS toggle, defaulting to off, and maybe some kind of click-to-load-media thing, but it was surprisingly usable as it was. FF and Chromium were far too heavy to launch with no page loaded, of course. Man I miss pre-2.0 Firefox, when it was light and fast.
As is, right now sitting at the KDE desktop with default settings, total memory usage is 613MB.
This myth that KDE is bloated and heavy really needs to die out. It may have been true in the early KDE4 versions, but that's a long time ago.
EDIT: apparently they’re shipping with plasma as the DE? Yeah that’s going to be slow.
Perhaps the video driver or compositing is misconfigured?
Firefox is my default browser, and I’ve found Firefox to be fine on low end machines as long as it can use the video card properly.
Chrome tends to have better compatibility with chomebook-class hardware for obvious reasons.
KDE Plasma feels snappy even on a Core 2 Duo with Intel graphics, in my experience.
This myth of KDE as some kind of bloated lumbering beast really needs to die out. Install KDE Neon or openSUSE and give it a spin. You'll be surprised.
KDE (and the other DEs) offer a whole lot more than that, obviously :-) Dolphin does do thumbnails and all of type of stuff, but it's not really a heavy application in itself, it just does a lot of disk-intensive tasks.
Obviously generating thumbnails for a whole folder of photos is going to take a little while on an eMMC drive, just like it takes a little while to get a list of all the shared folders on a network.
Playing around with Manjaro really makes me miss the ease of trying out kernels and patches on Gentoo, but I haven't had the time yet to put together a Gentoo cross compile environment.
I'm not having any issues with playback in browsers (fullscreen or otherwise using latest FireFox), and I use it in bed quite a lot for watching videos, so I'm not sure if there's some specific configuration issue the guy in the article is having, although this could probably be easily worked around with youtube-dl into mpv if you're having something similar happen.
I'm also not seeing the same problems on the terminal. I'm using alacritty as my terminal emulator and elv.sh as my shell, with similar customisations to display git statuses etc, and while I notice the occasional white flash (I'm not sure what this is honestly, something to do with alacritty rendering I'm assuming) there's no sluggishness.
The trackpad is admittedly horrible, you really need to buy a bluetooth mouse or separate trackpad because it really is just frustrating to use. On the other hand the keyboard is really solid, I love the feel of it and it's responsive. The monitor, well, I'm probably not the person to ask about that. I personally think it looks pretty good, but I'm really not the kind of person who cares about or notices sub-pixel perfect rendering, so YMMV with that.
The battery times are also not great, and although you can technically charge it through the USB-C port, it'll still drain power if it's turned on. This isn't helped by the fact I can't get the default brightness up and down buttons to work and have to type
sudo lxqt-backlight_backend --inc / --dec
Yes, it is kind of slow, but you really must've known that going into it considering the price and the specs. Overall, my personal perception is that the Pinebook Pro is batting well above its weight (well, mine is anyway), and I honestly expected it to be far slower than it is.
I run openSUSE with KDE on all my machines, and it's responsive and snappy even on the Thinkpad X220i (Sandy Bridge Core i3, upgraded to 8GB RAM + SSD). Firefox is what generally eats resources, but below 15 tabs it's fine.
Even on my Raspberry Pi 3, KDE is perfectly fine. It's not fast fast, but still feels responsive and usable.
KDE was kind of slow in the KDE4 days. Not so anymore.
I bought a Pinebook Pro, knowing perfectly well the performance limitations this article discusses, and actually _wanting_ to learn to live with a less powerful computer, consuming less electricity. In an era where we're roasting the Earth, this sounded almost like a duty to me.
But when I received the computer, I had a blocking problem with it : its SD card reader was faulty and would trigger I/O errors after a few writes, and the OS would remount the device as read-only. This is a major problem on that computer because it has very low internal storage space so you're supposed to use a SD card to hold data. I tried various OS, various SD cards, did my due diligence to confirm it was a hardware problem - it was.
I wasn't that annoyed at that point because pine64 sells spare parts on their web shop. So I went there to replace the SD card reader, which meant replacing the main board as well, but OK. This was all very fairly priced, so no problem. Except… the spare part was out of stock. So I waited a few weeks, and it was still out of stock.
I mailed their support presenting my problem and asking a simple question: will spare parts ever be available again?
They answered by asking me to demonstrate this was a hardware issue, which I did. They did not answer my question, so I asked it again, telling them that I did not wish to make the device travel the world and back, I just wanted to replace the faulty part. They answered me with the address where I should ship the computer to them. So I asked them again directly: will the spare parts ever be available again? They dodged the question once again and asked me to send them the computer.
I guess asking me to ship them a faulty product is totally fair, they want to inspect it. I'm not willing to do that because we fly enough products around the world as it is, and I can do any check they want me to do locally. But fair enough, they want it back to solve my problem.
What troubles me, though, and what is relevant here is that while they advertise selling spare parts for their computer, they actually don't, and they're shady when asked if they will again. Which probably means they won't. So yeah, I wouldn't bet on long term usage if you have an experimental device that you can't repair.
The only answer regarding replacement parts that I've gotten from them was that they're waiting for the manufacturer, they never provided any indication when they themselves were expecting the next shipment from their manufacturers.
The data I've found from some quick googling indicates a power consumption of around 5 watts at idle, with the LCD backlight dimmed to 40 percent brightness, ranging from there up to about 12-15 watts at load.
From experience, the 13 inch Intel laptops I've used recently burn about the same at idle, and don't consume much more for basic tasks, especially e.g. hardware accelerated video decoding.
All that to say, a slower/worse computer =/= a more energy efficient one. Although it probably does build the "software discipline" you need to use energy efficient software ;)
Also, while power usage does go up a bit when the CPU is more fully loaded, it definitely doesn't jump up to 30+ watts like my ThinkPad does.
The Rockchip SoC in the PBP isn't impressive compared to the ARM chips going in current-gen high-end smartphones, but the max TDP isn't anywhere near what an Intel or AMD laptop chip cranks up to.
That also means a truly fanless computer with a monolithic, user-replaceable battery.
There are certainly workloads where a faster, symmetric multicore x86 chip will win on compute per watt, but it's not as cut-and-dried a win in real-world usage on a laptop as it might be on (say) a heavily-loaded server.
The "pure idle" draw is around 2.47 watts drawn from the battery with the brightness set to half.
When watching a 1080p youtube video fullscreened, the power consumption levels out at around 5 watts, again at half brightness.
If your workload is very light, there won't be much of a difference. Running mostly terminal emulators and lightweight desktop software, your biggest power consumer will likely be the screen, and most LCDs will be similar.
But if you're running heavy software like Slack or Teams (which some don't have the option to avoid, sadly), then performance per watt becomes relevant, as running your 30 watt system for 0.5 seconds uses fewer joules of energy than pushing your 5 watt system for 4 seconds.
The rub is that a lot of that heavy software will eat up as many system cycles as the OS will give it -- which annoyingly means that once you have a bunch of tabs open in a browser, that 30 watt processor will probably stay nailed at its TDP.
However, you can manually set limits on the CPU. If I limit this skylake's max clock to 900MHz, I can play Minecraft at slightly reduced settings pretty smoothly while only drawing 7-8 watts.
It's all in what you need. If your primary goal is to reduce electricity usage, "cpupower frequency-set --max <x>" your CPU down to its lowest clock, and you're set.
(none of this is meant to diss Pine64 or ARM -- only I think getting one to be eco friendly/power efficient may be misguided, depending on your use case)
Thanks for your input, it's greatly appreciated. I'm quite new to considering my power consumption, so indeed I may still make naive choices. I only based it on ARM's reputation for being more energy efficient, and the observation that ARM devices do run longer on a full battery charge (but there may indeed be a lot of secondary variables explaining that).
I'm curious: how do you measure electricity consumption in watts? Is there a tool meant for that?
On my Intel laptops, I used "powertop" to measure power consumption as reported by the battery. Powertop is an Intel-specific tool, but I think most batteries will have an entry in Linux's sysfs. Here it's under "/sys/class/power_supply/BAT0", where two files, "voltage_now" and "current_now", list the voltage and current in microvolts and microamperes.
[eery@darkshire ~]$ cd /sys/class/power_supply/BAT0
[eery@darkshire BAT0]$ cat current_now
[eery@darkshire BAT0]$ cat voltage_now
Measuring while connected to mains/AC power will probably need some kind of physical device to measure the current being pulled by the charger, sadly. Only server-grade hardware seems to have built-in current meters :(
First, it's clear that several of these things are OS / software issues. Debian has issues with the screen and wifi, whereas other distros have better wifi handling on the same hardware? That's the OS.
Firefox is slow with Slack? Firefox is slow with Slack on the fastest hardware you can buy, so why state the obvious?
Firefox apparently doesn't have hardware video decoding enabled, which is, again, a software issue.
Disk operations seem sluggish? Why not say whether you're using a cheap eMMC or possibly even an SD card?
Unless you're selling your Pinebook Pro on eBay as-is, you've left out too much information.
> Some people I know wanted to know if it is usable as a daily main laptop.
I've successfully used it as my only laptop.
> I originally wanted to use stock Debian but at some point the Panfrost driver broke and the laptop could not start X.
Only in FLOSS world would someone complain about a first-time-out laptop with very early (reverse-engineered) FLOSS driver support not working on the arbitrary, non-default OS the author happened to install on it.
The Pinebook Pro ships with some kind of hybrid Debian Stretch-- it appeared to have Chrome specifically compiled for this board, and probably some other custom things (probably keyboard/touchpad driver, etc.). There's an icon in the taskbar to update this "Debianstein" using "MrFixit's repo." This is the official way to update the laptop-- it asks for your root password, downloads stuff from the repo and then runs scripts. Super shady.
That aside-- if the Pine devs weren't shipping with stock Debian when author and I bought it, there is probably a very good reason why. And we see one piece of evidence as the author tries to run "stock Debian" on the Pinebook-- shit broke!
Again, on the operating system which Pine ships with the Pinebook Pro, task-switching between Firefox and the terminal (mate terminal?) is immediate.
It's weird to feel I have to defend a laptop that has the shittiest touchpad I've used in a decade. But if you're going to do a serious review of a fledgling laptop like this, either use the default software or know what the heck you're doing wrt drivers/firmware. Otherwise it feels like the point is to get me to sympathize with Apple's "stock OS or it's a brick" strategy.
AFAICT all the relevant video drivers are upstreamed. Additionally, there has been official support in Debian since April plus an unofficial installer for Debian. Plus a pretty healthy interest in the Manjaro community and a lot of other distros. (I received my PBP in December, btw.)
What I'm saying is: if you're a hacker and want to go the route of picking your favorite distro for a first-time-out Linux laptop, you really ought to be able to differentiate between borked configuration (especially where reverse-engineered free drivers for a notoriously unfriendly GPU vendor are involved) and underpowered hardware.
A good way to ensure success is to try out the hardware with the default install the devs shipped. The author didn't bother to do that and his review is misleading for it.
The author experienced clear signs that this platform's software support is not mature. You seem to want to blame Debian rather than the new platform and its immature drivers.
The author claimed that alt-tabbing takes about one second. This was not true on default install using Mate from December.
Author claims that merely entering a directory containing a Git repo freezes the terminal. Again, clearly not true using stock Mate terminal.
Author goes on to rankly speculate that the problems with this laptop may be "CPU and disk side." Again, completely misleading as to the actual hardware as I stated above. And easily corrected by merely testing with the default OS Pine shipped.
Aside from Panfrost, I have no idea why the author ran into those problems. But he doesn't either, and he falsely attributed them to hardware limitations when it would have been easy to catch the error by running the laptop for perhaps five minutes on the default install.
That's the level of serious we're talking about here. In no way is Pinebook's hardware so underpowered that it takes 1 second to switch tasks.
It's an especially egregious oversight because mainline support in the kernel and Debian has happened and continues to be worked on since I purchased the thing. But since I actually took the time to try out the default install, you won't hear me misrepresent the state of the hardware if I happen to flash a newer distro/kernel and run into problems that I don't understand.
For this specific problem, use a prompt that asynchronously updates git status. https://github.com/sindresorhus/pure is an example, and it contains all the primitives you need to implement your own async prompt. Otherwise good luck working on a huge repo like chromium, even on a faster machine.
To my surprise, it seems the voice of reason finally prevailed, as that crap has been fixed 20 days ago: https://github.com/ohmyzsh/ohmyzsh/commit/1c58a746af7a67f311... (It's still not remotely close to being well-designed.)
Prezto's git primitives are solid, but nothing could help you when git itself is slow on a huge repo. Hence the need for async.
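If switching to an async prompt isn't an option, oh-my-zsh's git prompt can also be silenced per repository. A sketch, assuming a recent oh-my-zsh that reads the `oh-my-zsh.hide-status` git config key (demonstrated in a throwaway repo; in practice you'd run the `git config` line inside the slow repository itself):

```shell
# Create a throwaway repo just for the demonstration.
repo=$(mktemp -d)
git -C "$repo" init -q

# Tell oh-my-zsh's git prompt to skip the status check in this repo.
git -C "$repo" config oh-my-zsh.hide-status 1

# Verify the setting took effect.
git -C "$repo" config oh-my-zsh.hide-status    # prints 1
```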
Looks like the culprit is the CPU/SoC: the Rockchip RK3399. Its specs do look decent on paper, but I guess it's simply too slow and not suited for laptops due to its small cache. It looks like a mobile phone SoC.
>The screen size and resolution scream for fractional scaling but Manjaro does not seem to provide it. Scale of 1 is a bit too small and 2 is way too big. The screen is matte, which is totally awesome, but unfortunately the colors are a bit muted and for some reason it seems a bit fuzzy. This may be because I have not used sub-retina level laptop displays in years.
Screen is a deal-breaker for me. I stare at screens for hours on end, and if it's not in the same class as a Retina screen on a MacBook or iMac, I ditch it quickly. As for the fuzzy screen and muted colors, well, that's the fault of the matte layer on the LCD panel. Its purpose is to diffuse light and minimize reflections. Personally I don't like that trade-off and prefer glass surfaces on my screens. I'll make a shade and make sacrifices as to how I orient myself to enjoy crisp text and proper colors in photos/video.
As for scaling, only Cinnamon DE does it right. I've tried almost all DEs over the past 6 months or so and Cinnamon's new fractional scaling and HiDPI support is the best by far. 
>The trackpad's motion detector is rubbish at slow speeds.
None of the non-Macbook trackpads are great. There's a project that's working on a good Linux trackpad driver but it's far in the future. Don't bank on it in the short-term. 
My Dell XPS 13 Developer Edition trackpad is great. I really don't think you can apply this kind of blanket statement to all non-Macbook trackpads.
I know the GP's post is grammatically phrased as an opinion, but I think a lot of its points are more preference-based than implied.
Linux UIs tend to encourage much heavier use of keyboard shortcuts, which in some sense makes up for it.
It might also be the storage. Especially the delay when entering a git directory hints at slow storage. I had exactly the same symptoms when booting an otherwise good server from a USB drive.
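A quick way to confirm or rule out slow storage is a crude sequential-write test with dd (the hdparm device path in the comment is board-specific and just an example):

```shell
# Crude sequential-write check: write 64 MiB to a temp file with fsync.
# On a Pinebook Pro, eMMC vs SD card class storage shows up clearly here.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f "$f"

# For read speed, hdparm against the block device (path is board-specific):
#   sudo hdparm -t /dev/mmcblk2
```

Tens of MB/s suggests decent eMMC; single-digit MB/s points at an SD card or a bad one.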
That's a myth. It's just that macOS has well-integrated touchpad gesture support across the whole UI stack, unlike any other OS so far. The only real hardware differences between Apple touchpads and others are pressure sensing and haptic feedback, but unless you really need silent clicking I'd say that's just a gimmick (I own a Magic Trackpad 2 precisely because of the silent haptic feedback).
That said, Pinebook Pro's touchpad actually is very bad.
But at the same time, we need to be cognizant of the need for software optimisation on ARM devices running pure Linux operating systems. Most of the complaints levelled against the Pinebook could be made of any similarly specced ARM SBC used for GUI applications. Some performance tuning can be done at the user end (a bloat-free Arch ARM install, an SSD, Btrfs, zswap, etc.), but the performance gap with similarly specced x86_64 hardware is still clearly visible.
I have an Acer Chromebook 11 N7, running a fanless Intel Celeron N3060, 4GB RAM, MIL-STD 810G, which retailed for $250 (though Google sent me one for free). The performance enhancements Google has made to Chromium/ChromeOS are very visible; with every update it got better, and there's no sluggishness unless we go overboard with memory. I think there's a reason Google ditched ARM for Chromebooks in spite of ARM being favourable for running Android apps.
Mozilla and the other free software behemoths who build GUI applications should seriously care about the ARM ecosystem while we are voting for projects like the Pinebook/PinePhone/Librem with our wallets.
I plan to use it for SSH'ing into more powerful machines, vim, a small amount of compiling, LaTeX, a few tabs, etc. Just a machine to take into the office, into meetings, etc. Not a powerhouse photo/video editing, multimedia viewing, number crunching, compiler building beast.
As for dev'ing for the device, I'm working on upgrading wm2 to work with touch devices and to remove some of the cruft that didn't age well, whilst trying to maintain the minimalism. I'm aiming for a <8MB footprint for the window manager and toolbar combined.
The Pinebook Pro itself is probably reasonable (given its price), once you disregard the use of monstrously bloated things, like Slack.
That said, the PineTab has half the ram and a measurably slower CPU (See A64 vs RK3399). It’s basically like a PinePhone with a big screen.
So don’t get your expectations too high :)
> Slack, which is known to kill any machine it is launched on
> (even high-end developer machines) on a lower end, ultra-
> cheap ARM laptop.
Their "apps" are essentially small web browsers, no wonder!
> That said, the PineTab has half the ram and a measurably
> slower CPU (See A64 vs RK3399). It’s basically like a
> PinePhone with a big screen.
This is why I'm working on a lightweight X11 window manager; there's no reason why a UI should take up so much CPU horsepower and RAM.
I also seriously doubt the PineTab will have a slower UI than my SurfaceRT. 2-3 seconds for a tap to respond is not uncommon on it.
> movies on road-trips.
First link on DuckDuckGo is somebody really complaining about the SurfaceRT. The PineTab should have more than enough power with the Mali GPU at 720p. That USB-A port is also _really_ awesome; people seem to forget the importance of commonly used USB ports!
> 2-3 seconds for a tap to respond is not uncommon on it.
Insane, did it ship like that? Microsoft are really not so great with hardware, although I hear better things about later more powerful Surface models. If they don't care about something they really tend to neglect it.
I have an old Kindle Fire that I got the Google Play store running on, after that it was reasonably use-able, but still struggles. A zoom call for example is completely out of the question using the browser (instead of installing their crapware). Still good enough for light browsing, viewing PDFs, writing some notes, emails, etc.
Note that the VLC implementation is old because they only ever released one ARM32 version in the Windows Store, a long time ago.
I also wonder whether it would be possible to make executables run without needing to come from the Windows Store, possibly by adjusting the registry or manually installing some "trusted" certificates?
Really depends on what kinda stuff you store on your computer, no?
I don't full-disk encrypt my laptop, but everything that is actually sensitive is stored encrypted at rest.
Cue the systemd bashing, it works for me.
The typical scenario is: dev brings the laptop out of home -> laptop is stolen. In this case, encryption at rest spares the owner's sensitive information.
Actually, I personally consider any machine at risk even at home, as burglary is not so far-fetched as to be considered unlikely.
In closing, I have to say that I really appreciate what Pine64 are doing. Their SoC boards, PineTime, and PinePhone are all great fun and I hope this is just the beginning of a long series of awesome hardware to play around with together with supremely hackable open source software.
I personally think they aimed a little too low on hardware specs. Sure, the price is cool, but there is something to be said for spending just a tiny bit more to boost adoption via credible usage.
> I personally think they aimed a little too low on hardware specs.
I don't think so. The problems seem to come from more resource hungry applications (yes, that's slack) and it's not much worse than what we had a few years ago with hdd laptops. If you're poorer or in need of a cheap replacement, it's great and priced exactly right.
The number one thing to do with this device is turn off tap-to-click. With that off, the trackpad is okay-ish but still crap. I actually enjoy tap-to-click on Apple and Thinkpad trackpads. The keyboard is good enough and better than most laptops': not mechanical-keyboard good, but good in a 50g sponge-ish way. The display is a typical 1080p matte IPS screen (a minimum requirement for some, but still within range, albeit at the low end).
I will say it is valuable to have as a developer. I find it helpful to work on a slow device sometimes to test how your widget will behave, as you might run into issues with timers or other weirdness, and aarch64 is becoming more common, so it might be good to test on that platform too.
I like the concept. Just think they missed a trick there.
I wonder what the potential would be if you had more of the userspace libs optimized or a smarter compiler (I mean, LLVM is pretty smart, maybe it just needs some nudging).
Does anyone know whether Linux can migrate processes from slow cores to fast ones when the fast ones are idling?
I'm guessing this means the kernel is making the decisions, which sucks for application developers.
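The kernel does make the default placement decisions (energy-aware scheduling handles big.LITTLE migration on recent kernels), but a user or application can override them with CPU affinity. A sketch: on the RK3399 the big Cortex-A72 cores are normally exposed as cpu4 and cpu5 (verify with `lscpu`); core 0 is used below so the example runs on any Linux machine:

```shell
# Run a command restricted to a chosen core set with taskset.
# On a Pinebook Pro you'd use -c 4,5 to target the big A72 cores.
taskset -c 0 echo "pinned to core 0"

# Migrating an already-running process would look like (PID hypothetical):
#   taskset -cp 4,5 1234
```

Under the hood this is just `sched_setaffinity(2)`, so applications can do the same programmatically.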
As for encryption, I see that Architect has support for ZFS, but it doesn't appear to support installing with encryption out of the box. I suppose, if the live ISO does support encryption, it should be possible to "convert" a ZFS install to encrypt most (probably not /boot) filesystems:
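An untested sketch of how such a conversion might go: snapshot the plain dataset, create an encrypted parent, and send/receive into it. Pool and dataset names here are hypothetical:

```shell
# Run from the live ISO with the pool imported (hypothetical names).
zfs snapshot rpool/home@pre-crypt

# Create an encrypted parent dataset; you'll be prompted for a passphrase.
zfs create -o encryption=on -o keyformat=passphrase rpool/enc

# A non-raw send received under an encrypted parent is encrypted on write.
zfs send rpool/home@pre-crypt | zfs recv rpool/enc/home
```

After verifying the copy, you'd destroy the old dataset and rename the new one into place.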
I’ve recently come across a few long term reviews like this that conclude that the computer maybe wasn’t that good, due to either specs or quality of the hardware itself (keyboard, screen...). So now, I don’t know what to think. Feels like I’ve dodged a bullet.
The RAM is low, so you will need to take that into account and decide if it is enough for you.
On Linux, Firefox uses CPU for video scaling by default. You can enable GPU acceleration by enabling layers.acceleration.force-enabled in about:config
Of course the problem here is that this Rockchip's video decoder has no VAAPI driver.
That’s also not true, according to  it should work with Firefox 76 on Wayland
> 06-07 libva and libva-v4l2-request are basically broken except fro mpeg2.
Making web pages render with GPU support will barely help video decoding if at all though. For this to work well you need to tap into your hardware video decoder, which none of these settings achieve.
GLES3 support in Panfrost is experimental, wait for it to mature.
It works reasonably well for a device in this price range.
Using the OpenGL compositor (which should be the default in the latest versions of Manjaro) is a lifesaver! Web apps are and will still be slow sometimes (I don't use Slack, but Messenger can be sluggish).
> Video playback on browsers is not really nice.
Depends. YouTube is indeed quite slow (I think it became slower after some YouTube update), but if YouTube was optimized for speed there wouldn't be all those ads, right?
I like to listen to YouTube music in the background, I use https://github.com/mps-youtube/mps-youtube for that and it works like a charm, uses 10% CPU and a few MB of RAM.
I don't use mpsyt for video but I guess it would also improve your experience.
Other websites (kissanime) work like a charm, even in 1080p.
> Manjaro does not provide a way to disable tap-to-click
True, but installing `xf86-input-synaptics` (although it's deprecated) provides such an option. Also, I use ctrl+F7 to disable the touchpad while I type large chunks of text.
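For reference, with the synaptics driver installed, disabling tap-to-click looks something like this (the TapButton options come from the synaptics driver; the persistent snippet is a sketch and the file name is arbitrary):

```shell
# At runtime (lost when the X session restarts):
synclient TapButton1=0 TapButton2=0 TapButton3=0

# Persistently, via an Xorg snippet (needs root):
sudo tee /etc/X11/xorg.conf.d/30-touchpad.conf >/dev/null <<'EOF'
Section "InputClass"
    Identifier "touchpad"
    MatchIsTouchpad "on"
    Driver "synaptics"
    Option "TapButton1" "0"
    Option "TapButton2" "0"
    Option "TapButton3" "0"
EndSection
EOF
```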
From the specs, one could think that the RAM is the problem but zram totally solves that (I have dozens of tabs open in Firefox, including heavy apps like Messenger, Trello and Overleaf). The slow CPU is a larger issue, with Gmail being so slow to load it's almost unusable (loading my mailbox takes 20s and creating a new email takes 3s).
Overall, I find the machine very much usable though.
I only have two complaints:
- static noises
- under "heavy" load (e.g. HTML5 games with a compilation in the background), the power adapter supplies less than the machine consumes, which leads to the battery discharging even while plugged in.
I wrote a guide on my setup that includes other tricks. Shameless plug: https://louisabraham.github.io/articles/pinebook-pro-setup.h...
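For anyone curious what the zram setup mentioned above involves, a minimal manual version looks roughly like this (Manjaro ARM may ship a systemd service that does the equivalent; the size and algorithm are illustrative, and it all needs root):

```shell
# Load the zram module and configure a compressed swap device in RAM.
modprobe zram
echo zstd > /sys/block/zram0/comp_algorithm   # if zstd is supported
echo 2G   > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0    # higher priority than any disk swap
```

Because pages are compressed in RAM rather than written to slow eMMC, this stretches the 4GB surprisingly far.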
They go to great lengths to hide their identity or the existence of their native language, even having an About Us page that says nothing about Us.
It reminds me of how MSG manufacturers claim they only sell "flavor enhancers".
I am using manjaro with an encrypted file system. It was relatively easy to set up and even the boot loader is encrypted. Does pinebook ship with a different version?
I occasionally run E and it doesn't have a setting for this either.
As everyone has stated, Firefox, and all major browsers at this point, are resource hogs. On a low powered system, trying to run a huge and inefficient webpage like Slack will always bog down the system. It will frequently cause my browser to use upwards of 15 GB of additional RAM on my 64GB workstations, regardless of the browser and OS involved. However it sounds like it's especially bad on the author's system because he doesn't have any swap enabled (more on that later).
HiDPI is a known weakness across Linux distributions. The complaint here seems to be that the screen is too good, and the author is slightly uncomfortable with how small 1x scaling is. There are partial solutions under X11 (see other comments), but none of them exist in a form that doesn't murder your CPU.
The console problem with zsh being slow is straight out of the beginner's zsh configuration problems "handbook". Every beginner starting to play with their prompts loads up the add ons and bogs it down. Not adding asynchronous plugins, especially for git, makes the prompt on all systems slow, but is especially noticeable on low-RAM systems that can't keep the entire git folder cached for faster lookups.
This is obviously exacerbated by the fact that the author has swap disabled. In the first step of the clang build, he says to enable swap. On a low-RAM system, swap is especially important since you're going to run out of space in RAM very frequently. Swap is how you mitigate that problem, by allowing some of your disk to be used as ultra-low-speed backing storage. If the author manually disabled his swap (which would be required if he actually used the default install configuration as claimed), it's no wonder the system is slow to respond and slow for resource-intensive tasks. It's only recently, with very high memory volumes available, that there have been discussions about disabling swap completely.
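Enabling swap on such a device is only a few commands. A sketch (run as root; the 2 GiB size is just an example, and this assumes ext4 -- btrfs swapfiles need extra care):

```shell
# Create and enable a 2 GiB swap file.
fallocate -l 2G /swapfile
chmod 600 /swapfile     # swap files must not be world-readable
mkswap /swapfile
swapon /swapfile

# Persist across reboots:
echo '/swapfile none swap defaults 0 0' >> /etc/fstab
```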
Relatedly, I'm not sure if the sleep mechanism problems on the Pinebook Pro are driver problems, but if you don't have swap configured then virtually none of the Linux sleep mechanisms work. When the device sleeps, it stores the RAM contents in swap. If you don't have swap, you can't sleep.
Slow build speeds are pretty much expected. Builds of large codebases are one of the most taxing things you can do to a system. Having a slower CPU and small RAM are the worst things for build times because CPU is frequently a bottleneck on even the most powerful systems, and disk io (the other major bottleneck) is usually mitigated by RAM caching. If the project is C++ you also get crushed by the linking process on large projects, which can require huge amounts of RAM. It's no wonder the author said to add swap since a lack of swap and small RAM can cause a literal crash during linking due to failure to provide minimum required volatile storage.
The complaints about the "lack of support for..." are really trite too. It's Manjaro; even a cursory look tells you it's a more stable and slightly more user-friendly version of Arch. Both still require you to be comfortable with configuring your system manually. It doesn't come perfectly configured out of the box, and that's part of its draw. Disk encryption is a good example of something the user is expected to configure themselves. Similarly, Manjaro will perform more poorly than other distros on some hardware if you don't configure it, because unconfigured systems are always less performant than configured ones.
The author seems to have wiped and reinstalled the Manjaro OS, skipped configuring it (or explicitly configured it in a way that makes it perform worse), and then complained about the low-spec device performing badly.
On my FreeBSD desktop box right now, the main Firefox process has 1890MB RSS, and the content processes have anywhere from 248MB to 675MB. (Swap is completely unused.)
> if your don't have swap configured then virtually none of the Linux sleep mechanisms work. When the device sleeps, it stores the RAM contents in swap. If you don't have swap, you can't sleep.
Suspend-to-disk / "hibernation" (S4 in ACPI terms) is a really unpopular way of "sleeping" these days. FreeBSD outright does not support S4 (except S4BIOS).
The usual sleep is suspend-to-RAM. These days there's also S0ix which means turning as much as possible off but not changing system power state. Mobile phones are probably doing something like that.
> on low-RAM systems that can't keep the entire git folder cached for faster lookups
If you rely on caching for the git directory, the first time you navigate to a repo would be very frustrating too. You can't always rely on caching. Sometimes you have huge repos. Sometimes they are mounted over NFS and git status takes >10s. :D
IMO git status shell plugins are unnecessary and not worth it.
> HiDPI is a known weakness in Linux
Maybe stop using the ancient terrible windowing server ;) It's a non-issue with Wayland.