Pinebook Pro longer term usage report (nibblestew.blogspot.com)
116 points by ingve 12 months ago | 146 comments



> The biggest gripe is that everything feels sluggish. Alt-tabbing between Firefox and a terminal takes one second, as does switching between Firefox tabs. As an extreme example switching between channels in Slack takes five to ten seconds. It is unbearably slow.

This is a hidden downside of writing code that's only just fast enough to work. It may feel fine for you, but anyone building a new computer will have to match your computer's performance, or everything will feel sluggish. We're raising the bar for new hardware way higher than it needs to be.


I don’t particularly agree with Facebook's ethical ideologies, but one great thing they do (or used to do?) is 2G Fridays, where the devs could only test the app on a simulated 2G connection with throttled bandwidth and simulated delays / packet loss.

I’m pretty sure that in general smartphone apps are tested against a decent variety of performance targets; perhaps it should happen more for desktop software.

Now that I think of it, the push for more and more Electron apps may be because all devs are living comfortably on 16GB and 32GB devices, where the voracious RAM appetite of Electron does not matter.


> Now that I think of it, the push for more and more Electron apps may be because all devs are living comfortably on 16GB and 32GB devices, where the voracious RAM appetite of Electron does not matter.

IMO it's generally the JS hype producing these monstrosities. Qt and Java have had fast and powerful cross-platform UIs for ages.

I'm trying to push a new paradigm at work where we write most of the logic in Rust. The UI view layer would be JavaFX for desktop and React Native for mobile. Only 2 UIs to target for all platforms, reuse of core code, and good performance all around.


> IMO it's generally the JS hype producing these monstrosities.

If you use "the JS hype" as a synonym for what are considered best practices by the folks in the NodeJS/NPM ecosystem, then you're right. (I.e., the fault lies in the "hype" half, not in the "JS" half.) It was 10–15 years ago that you could routinely hear one of the most prevalent criticisms in the programming world about the bloat of Java.

I think that, where JS is concerned, for some reason we're seeing a regression where it's becoming "conventional wisdom" that JS itself is slow, against the evidence to the contrary. I've seen straightfaced comments here on HN in the last few months, for example, that complain about the slowness of JS as a general rule. But the reality is that the JS runtimes have billions of dollars of engineering from top-tier teams invested in them, and today's JS engines are by-and-large pretty fast. V8, in fact, shares parts of its implementation with Java's HotSpot—specifically the parts that were developed by the folks who made the StrongTalk VM and who were acquired by Sun, thus leading to that work being incorporated into HotSpot in the first place.

So what is that reason?

There has been a noticeable degradation in performance that corresponds to the rise of, for lack of a better term, "the NPM way of programming". As with the case of Enterprise™ Java®, the problem lies in the way people in those circles are writing their programs and what passes for an "idiomatic" coding style. The NPM style used heavily in many Electron apps is relatively recent, with respect to JS's lifetime. Even before JS engines were JITted, Firefox itself had hundreds of thousands (millions?) of lines of JS doing a lot of the work, both in what you see on the screen when you're poking at your browser and behind the scenes. Notably, the JS in those cases is not of the NPM style. There's nothing in principle that means the "Emacs-like" application architecture (compiled core, dynamic shell) needs to be slow, particularly on today's hardware.

As I've mentioned before[1], in the early days of Firefox, I used to use 1.0 and 1.5 on an 800 MHz PIII with 128 MB of RAM. (For folks looking to leap in here with what they'd like to consider a well-timed "well, actually…": yes, I'm acutely aware that even that number is on the order of 100× or more beyond what is necessary to get real work done with a computer—but the point is that it's nothing compared to, say, stock 2015-era Chromebooks with 8GB of RAM, or a comparable quantity in today's phones, for that matter.) Browser extensions are written in JS, and my laptop now is several times better than the laptop I used 10 years ago—and yet... if I install any arbitrary extension today, there's a good chance that I will encounter perceptible bloat there, too. A recent example (within the last year) that I know of is the WorldBrain Memex add-on, which upon immediate use has the telltale mark of influence from the world of modern "frontend" webdev, and the performance to match. This wasn't the case when add-ons were authored in the sui generis style of yesteryear, before the NPM practices leaked over and began influencing everything related to JS—and tainting people's perceptions.

So I find the attempt to draw a contrast between JS and Java a little misplaced. Even ignoring the common history (in both culture and provenance), there's the fact that Java IDEs themselves have always been the poster children of bloat—second only to (or somewhere in the running with) Visual Studio proper. I know people like to point to VSCode as an example of a "snappy" Electron app, along with the inevitable retort about just how lean it really is. (On my system, I don't think it's possible to run VSCode without making sure that there's at least 350 MB of main memory to spare before launching it. Compare to the old joke about Emacs's "bloat": that it was supposed to stand for "Eight Megabytes And Constantly Swapping".) On the other hand, I have to recognize that the folks calling VSCode snappy really are on to something. The previous statements notwithstanding, the fact is that VSCode is still snappier than anything I've ever experienced using one of the mainstream Java IDEs. If I were a naive person, I could point to that and conclude that the problem lies with Java-the-language. And yet every day I use my phone, large parts of which are written in Java—which, to be fair, does impart some impression of bloat and sluggishness, so it's prudent for me to keep in mind earlier versions of Android on older, more limited hardware that did have a fairly snappy feel. And those observations lead us back to the root problem, which is that, judging only by much of the code being written today, programmers seem to have forgotten (or simply never learned?) how not to write code that's bloated and slow.

1. https://news.ycombinator.com/item?id=23183770


> I think that, where JS is concerned, for some reason we're seeing a regression where it's becoming "conventional wisdom" that JS itself is slow, against the evidence to the contrary.

It's because people have plenty of experience with actual real-world JS code being very slow. Which has nothing to do with the runtime or the ability of someone purposely writing optimized JS to optimize it well.

The fact that you can write fast JS doesn't mean that the language itself or popular frameworks encourage you to do so, whereas that is the case for a lot of other languages with a reputation for being faster.


I don't think you are actually disagreeing with the author's point: you could easily replace "JS" in your statement with Java, C#, Haskell or a number of other "fast" languages and it would still be an accurate statement.

I, at least, have had the displeasure of using slow gui and other programs written in those languages.


This attitude fails to recognize the point I was making in my comment, and so it ends up fundamentally getting something wrong. It happens here in this sentence:

> The fact that you can write fast JS doesn't mean that the language itself or popular frameworks encourage you to do so

One half of this sentence is bang-on, and the other half is not. That is, it conflates what is encouraged by the language with what is encouraged by the modern JS crowd, and suggests something in that vein—as if they're one tightly interconnected bundle—but in fact, they are two distinct things, and I made several remarks in my original comment alluding to this.

To successfully argue the point you're making now, you have to argue that "the NPM way" that is now prevalent is the inevitable result of merely setting out to write a program in JS. But, in fact, it's not. As I mentioned, the JS that made up a huge proportion of Firefox's codebase was written in a style that doesn't resemble the style now popular with NodeJS and NPM, but there was no big, conscious effort to do that by, say, avoiding pitfalls of the language and the things that might make it slow—it was purely a result of the lack of opportunity for being tainted by bad examples from the NodeJS/NPM world, since that world didn't exist then. The main influence on programmers writing JS for Firefox was the influence of Objective-C, C++, and Java.

You certainly can "optimize" your JS to make it faster, just like you can with any program, but that's not to say you have to—you can leave the optimization to the engine itself most of the time. All that's really needed, on average, to make sure that JS is fast is to avoid tainting yourself with mindworms that have proliferated in the NodeJS+NPM ecosystem, that is: just don't do the things that you would have never thought of doing were it not for having seen others doing it somewhere else on GitHub or in the packages hosted on npmjs.com.

In other words, if you want to write JS that avoids being slow, then you don't have to take any special effort. Generally, you can start by opting for writing the dumbest, most boring code possible. (Indeed, I routinely come across "clever" code by self-styled NodeJS aficionados exhibiting elaborate contortions to fight against the language, when it would be far better to just do the simplest possible thing and then move on. Refactoring to eliminate these contortions can even make the code more performant and more concise.)

For example, let's say you have something written in Java, or something with an analogue in C++ or Go, and your team has some motivation to either recreate it in or migrate it to JS. That's perfect—before you ever think about handing the task over to a professional NodeJS programmer, you really should give some heavy consideration to doing your best to copy the architecture from the existing, non-JS implementation, down to class names and code organization, and doing a straightforward, procedure-for-procedure port to JS. (Although, in the case of migrating from Java, maybe also consider eliminating any unnecessary abstractions along the way—or don't.) There's a good chance that this will have satisfactory results that challenge your assumptions. Even if you're creating something from scratch rather than porting an existing solution, once again, all you have to do is to not worry about being fashionable and trendy, and just do the most straightforward thing possible.

There's a recent comment here on HN that's extremely relevant and really hints at what's going on with all these slow, bloated, and messy projects from the world of modern webdev:

https://news.ycombinator.com/item?id=23869393


The bigger problem is the use of dynamic HTML for UI everywhere. It'd be slow even if you used C to drive the dynamism, although having JS there doesn't really help.

As far as VSCode being a "snappy" Electron app - I still vividly remember that bug when users were seeing it hog an entire CPU core for itself while not being interacted with, and it turned out that it was caused by blinking cursor in the editor. You know, the kind of problem that was already solved by the time Windows 1.0 came out? And sure, they fixed it... but every desktop Electron app is a potential minefield filled with stuff like that. Some of it is just not obvious until you run it on slower hardware. Or in remote desktop - that also flushes out a lot of "GPUs are fast enough, who cares" problems.


JS is faster than most scripting languages, but slow for a VM-based language. The NPM way is to pull in thousands of dependencies, when the running code only uses a small fraction of them.

I disagree that the code written is slower than in the past. There's simply much more of it. People build on top of existing stuff N layers deep. Just look at how far removed Electron is from the OS. It's crazy that it even works. JS also has some unfixable limitations.

It needs to be parsed every time. Java comes packed in a very efficient bytecode format that's both several times smaller than minified JS and far faster to parse.

JS's lack of typing also means it uses way more RAM than Java in practice. Java has a lot of Object overhead, but nothing anywhere close to JS.

For UIs, JS's lack of threading is terrible. In every language that supports threading, the main way to design a UI is to have a "UI thread" that you never assign to do slow things. In JS it's extremely easy to accidentally block during UI render. I'm assuming a majority of Slack's glitches and freezes are from the single-threaded model.

Modern Java apps feel native (on desktop). You probably use some that you don't realize aren't native.

JS is several times slower and uses several times as much RAM. And the VM warmup is slower than Java because the code is distributed in a very inefficient format. It's not a good language for UIs, but it's becoming the only option for mobile unless you want to write multiple implementations.


> People build on top of existing stuff N layers deep.

Okay. You're not actually disagreeing here. This is the antipattern popular with NPM folks (and extremely reminiscent of Enterprise Java) that I identified as the problem.

> I disagree that the code written is slower than in the past. There's simply much more of it.

Even ignoring that "there is more of it" is part of the problem, you can say "I disagree", but all that gives us is a stalemate unless you're going to introduce data into the conversation.

> And the VM warmup is slower than Java because the code is distributed in a very inefficient format.

Not even the Java folks who work on Java agree with that. Warmup is one of Java's weak spots. The GraalVM team can and will tell you this. It's one of the things they bring up with people when trying to get them to temper their expectations.

I recently ported a command-line utility more or less line-by-line from Java to JS, in an extremely naive way—the only concern was to make it work. When I finished, on a lark I checked how it compared against the Java version. Even with the JDK's wealth of specialized collections, compared to, say, the way that in the JS version all the places that expect a map got a general-purpose ES6 Map, the JS version running on NodeJS would beat the Java version every time. In this case, it doesn't actually matter because it wasn't performance-sensitive code, and in both cases, both processes would terminate in <1 second, but the fact remains that NodeJS was able to parse the JS program, compile it, execute, and then terminate faster than the java process could launch, read the bytecode, verify it, and then perform the same job.

There's an interview from 2009 with Lars Bak on MSDN that you might look into, with the MS folks on the static languages side, and Lars on the other side explaining why in practice V8's performance can be comparable to, if not better than, "managed", bytecode-based, static languages like Java and C#.

FWIW, I'm not even a dynamic languages fanatic. Another of my big complaints about the NPM crowd is their lack of regard for making sure the "shape" of their inputs is easily decipherable. I've made money as a result of dynamic language folks thinking that using a dynamic language means you don't have to worry about types, and that attitude leading to CVE-level security problems. There are a couple Smalltalk papers I've enjoyed reading[1][2], both somewhat critical/skeptical of the promises of dynamic languages. In general, I advocate for writing code as if there is a static type system in place, even if you're in a dynamic language that doesn't require it.

Keep in mind, though, that this is all completely beside the point, because my original message was only that JS is now starting to be re-perceived as slow as a general rule because of NPM's hype-driven development, where programs are authored according to whichever patterns are trending at the time or otherwise by trying to imitate the styles of NPM tastemakers and "thoughtleaders", which leads to people creating huge messes. The entire Java versus JS issue was an aside.

1. "How (and Why) Developers Use the Dynamic Features of Programming Languages: The Case of Smalltalk"

2. "An Overview of Modular Smalltalk" by Allen Wirfs-Brock and Brian Wilkerson. (awb was the editor of ECMA-262 version 6, FWIW.)

And on that note, here are two more of my favorite programming essays of all time:

"Java for Everything". http://www.teamten.com/lawrence/writings/java-for-everything...

"Too DRY - The Grep Test". http://jamie-wong.com/2013/07/12/grep-test/


What is this user doing lol? I've run Linux on an absolutely ancient Thinkpad because we had some stuff in the field that needed serial/parallel port for comms. It ran fine as long as you didn't have 10 tabs open.

And Slack has become a bloated piece of crap. It lags like hell on everything I own. It takes a few seconds to switch between channels and workspaces on my overclocked 3700x with 32gb of ram on fiber.


Modern software is soooo terrible. It's not just bloated; the animation of every action, the "smooth" opening of a menu or a window, is so slow. Everything is so slow these days. When you're just in an office and click around a bit, it is all nice, fine and shiny. But as soon as you are in a hurry, these animations feel like hours .. you click 20 times on the wrong place because animations are so slow .. one day I will bend my pretty fast work ZBook over my knee because I'm f*ing angry.


Oh God I hate the animations! Sometimes I wonder if "UI designers" are a net negative on many projects. Most software is functional machinery, not art.

It's important for sales pages and ads to be pretty. Web apps just need to work well, keep the UI guys away from me :)


"Ancient" doesn't really work as a measure of performance anymore. Sandy Bridge is almost ten years old but it's still about half as fast as the fastest x64 processors with the same number of cores and quite a bit faster than a lot of "modern" low-end ARM processors.


Is the performance of the app the same as the website? I’ve only ever run Slack in a browser tab, and haven't had much in the way of performance problems.


Ironically, I've found it to be worse. No idea why


>What is this user doing lol?

Right? From the comments in the article,

>It should be much faster than a Raspberry pi 3 that do not have all those problems.

I agree. User doesn't know what he's doing yet.

One tip I would offer him is to avoid Firefox. Firefox is built with Rust[1], which doesn't support ARM as a Tier 1 platform. He would be MUCH better off using Chromium, which supports numerous ARM Chromebooks running this exact CPU just fine.

[1] https://4e6.github.io/firefox-lang-stats/


It's funny you should mention that; first of all, Chromebooks also run Rust code, at a really low layer. (Also, it appears the Chrome team is experimenting with putting Rust in Chrome too; no clue how serious or how likely though). The hypervisor stuff is written in it. Secondly, ARM is on its way to Tier 1, supported by the company itself.

(I use Chrome these days...)


Rust benefits from optimizations in LLVM, the same compiler Chrome uses. You are right that they might be missing some hand-optimized ASM for some routines, but I don't think that's enough that performance would be a lot worse.


I think something is up with your slack instance or computer. I have a 2017 15" macbook pro and never get channel changes taking seconds.


It's a chat program. It should run just fine on 2007 laptops. Shitty engineering is what's up with it.


I don’t understand your comment. When would taking 1 second to switch tabs ever feel normal? Are you saying all computers should be that slow?


Their point is that software is written so that it's only barely fast enough on machines much more powerful than a PineBook, which makes it more difficult than necessary to use perfectly functional machines that happen to be slower than average.

The problem is that fast software is harder to write than slow software (there are trivial transforms from fast to slow, but not vice versa). Thus, each generation back you expect your software to run smoothly on, represents more effort (or at least, more care) on the part of the author. We should expect software to be "just fast enough for the average user" essentially by natural law.


Software should be optimized for weaker machines so that it's not this slow even on machines this cheap.


Why bother to support something that 1 in 1,000 people (high estimate) is going to be using?


The same reason you use accessibility in general: there might be 1 in 1,000 people who might permanently need it, but considerably more who may want it at certain times (running a compile job in the background, on a business trip and only have low internet speeds, underclocking to save battery, …) and that can be the difference between "I want to use your software because it always works for me" and "I tried it and it works 90% of the time for me, so it's not reliable enough for me to pick it". Plus there's the usual feel-good "I'm helping more people using my software, even the (often) lesser-privileged" rather than "I shouldn't care about these people".


Those people don't pay you $$$, so in exchange for cheaper / faster eng costs, you pay via lower quality software.

In businesses that would actually pay slack money, they will probably pay $800 for a decent enough laptop to run slack every 4 years and employees bring their own smartphones now.


Capitalists also have financial incentives to deprioritize environmental concerns, and yet criticizing a business's practices for neglecting environmental impact isn't unreasonable out of hand just because addressing it would mean higher costs to the business. It's a reasonable criticism that reasonable people can have opposing perspectives on and find worthwhile to talk about, rather than one to be immediately dismissed.

(I also don't see anything in this comment that actually addresses the accessibility metaphor raised by the person you're responding to.)


It's not just about supporting them; it's about not wasting resources that could be used productively for other things. Take a look at this paint software:

https://github.com/Symbian9/azpainter

https://tipsonubuntu.com/2020/05/19/azpainter-full-color-ill...

It's written in pure C by one developer, and it's super fast. Operations are on par with other software because fx libraries are usually already written in native compiled languages and optimized, but this one loads its interface in a lot less than one second, which becomes like 100 ms or less when cached. Wouldn't it be wonderful if all software would at least load its interface comparatively fast? Why do I have to waste (tens? hundreds?) of megabytes just to show a GUI that does absolutely nothing other than linking events to graphics elements, when someone can write full-fledged software with a quite complex interface whose entire executable size is less than one megabyte?


Just to humor you.

We develop websites on $3000 MacBook Pros. The shitty $300 Windows laptop I see in most households is struggling with a lot of websites. As are the $200 Android phones I see many people use on a daily basis. I rarely see any developer testing performance on a crap device or bad connection.

The Pinebook might be an outlier device by itself. But ARM Windows laptops are pretty common.


If it can run well on bad hardware then it will run even better on good hardware.


Unless it's a videogame where framerate and physics are inherently linked.

[1] https://www.youtube.com/watch?v=AqDOefJc7a4


Typing this on a PBP, Manjaro 20.07 i3.

> Alt-tabbing between Firefox and a terminal takes one second, as does switching between Firefox tabs.

This is not my experience at all. I do not notice undue delays switching between applications (I do not use Slack).

> The wifi is not very good, it can't connect reliably to an access point in the next room

No problems at all. This weekend I'm at my parents'; I am connected to an old WRT54GL all day long. I have problems with my other laptop (XPS 13) on the same network.

> The screen size and resolution scream for fractional scaling but Manjaro does not seem to provide it.

I've been using this machine since late March, I didn't take notes when I configured it but, from what I remember, I only had to make some small adjustments [1] to get things right.

Sure, the PBP is an under-powered machine but, given the right allowances (the software is under development, the trackpad is not the best, suspend is not working correctly, etc), I find it easy to use as a daily driver most of the time.

[1] https://wiki.archlinux.org/index.php/HiDPI

edit: typo and link


A note on HiDPI, since a lack of fractional scaling is always brought up as a frustration point: with Xresources you can set 'Xft.dpi' to an appropriate value (like 144) and restart your X session, and most programs/toolkits should scale fonts appropriately (Java is occasionally an exception, but the same is true on Windows). It won't scale things like icons, which can make the experience frustrating if your program has no other means to scale interfaces, but this experience is about what you'd get on something like Windows 7 or 8. Additionally, if you need multi-monitor support, you can use display scaling (also known as upscaling) with RandR 1.3 (released around a decade ago) to upscale low-DPI screens so that the fonts will be equal sizes on both monitors, e.g., 96 dpi -> 144 is 1.5x, so a 1920x1080 monitor will become 2880x1620.
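
As a rough sketch of what that looks like in practice (the dpi value and the HDMI-1 output name are just examples; adjust for your panel and check `xrandr` for your actual output names):

    # Tell Xft-aware toolkits to render fonts at 144 dpi (1.5x of 96)
    echo "Xft.dpi: 144" >> ~/.Xresources
    xrdb -merge ~/.Xresources   # then restart the X session

    # Upscale a secondary low-DPI 1920x1080 monitor by 1.5x so font sizes match
    # (its effective framebuffer becomes 2880x1620)
    xrandr --output HDMI-1 --mode 1920x1080 --scale 1.5x1.5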

None of this is perfect or easy to set up, and it's in no way a substitute for fractional scaling support in the toolkits of the programs you're using, but it has worked for a very long time and produces appropriately sized, crisp and sharp fonts on all monitors. More importantly, it should work reasonably well even on outdated programs or toolkits that have no support for fractional scaling. The Arch wiki link should explain this, but it's not spelled out, and there's a bit of a misconception that fractional scaling is the only way to get a blur-free HiDPI experience on Linux when that just isn't the case at all.


I'm not sure what the note about fractional scaling is about. Pinebook Pro's screen is not hidpi - it's just a regular 1920x1080 13" screen.


On Windows, that would render at 125% or 150% scaling. 1080p on a 13" screen is actually quite small. Even on my 15" laptop with a 4K screen at 200%, which is equivalent to 1080p at 100% in terms of scale, I find everything feels tiny.

Fractional scaling does not have to be only for super high res screens.


> > Alt-tabbing between Firefox and a terminal takes one second, as does switching between Firefox tabs.

> This is not my experience at all. I do not notice undue delays switching between applications (do not use slack).

I do wonder how much of his experience might be Slack + GNOME or KDE; some desktop apps are just horribly bloated because they can get away with it. Also, modern DEs are just massive; most people don't realise how many resources they take up because, compared to 25 years ago, we all have x86 supercomputers.

I find Slack to be absurdly slow for what is fundamentally just a text-based web app, and yet I'm using a 1yr old XPS with an 8th gen Intel CPU... I run i3wm and keep things very minimal, yet I still find myself waiting seconds for Slack to do stuff.


Gnome is crashy, gobbles resources, incredibly laggy, and drops frames on my beast of a newish hex-core machine. I've switched to KDE in a VM on Win10 (some of the crashes were Linux graphics drivers, not entirely Gnome's fault, though some were Gnome—running in a VM makes them go away entirely, and also all my bluetooth stuff stays paired much better since Linux doesn't even know it's bluetooth) and it feels 4x as responsive on literally half the hardware, and under virtualization, as Gnome did.

I recently trialed a 2GB memory (!) dual-core Celeron minipc as a workstation, and if I could have solved all the 4k 2x scaling issues without spending hours (more) on it or resorting to a too-heavy-for-the-hardware DE, and gotten 60hz out of it rather than 30, I'd probably have upgraded the RAM to its max of 8GB and been totally happy on it. Void Linux with suckless tools made it feel blazing fast, as long as I avoided webshit (so, Sublime over VSCode, keep Slack the hell away from it, use a real email client rather than a webpage, that kind of thing). All browsers felt too slow to even launch except Surf and qute, and the latter was a tad slower and jankier than Surf so I settled on that, but it was fine as long as I avoided the kind of pages that eat a couple hundred MB and burn cycles for no clear reason (which is lots of them, sadly). I bet I could have made it work even better if I'd looked into adding a disable/enable JS toggle, defaulting to off, and maybe some kind of click-to-load-media thing, but it was surprisingly usable as it was. FF and Chromium were far too heavy to launch with no page loaded, of course. Man I miss pre-2.0 Firefox, when it was light and fast.


That's a big part of the issue. Just running a desktop browser like Firefox on these ARM devices is an exercise in pain. While they technically run, they would be described as sluggish on a good day. Then trying to visit a 'heavy' desktop-oriented website will just bring everything to a grinding halt.


Exactly. IME HN or text.npr.org will fly, old reddit / readthedocs / other light js does fine, and SPAs like new reddit and Slack crawl.


I recently discovered that i.reddit.com or reddit.com/.compact is the "old mobile interface" and a lot lighter than even old Reddit.


I recently set up a new HTPC/server/NAS for the living room. It runs FTP, SMB/NFS shares, a DLNA server and a few other assorted things, and a Btrfs RAID1 storage pool. For HTPC duties, I just use the standard KDE desktop in openSUSE and SMPlayer/mpv with some tweaks (I'm not a huge fan of how Kodi works).

As is, right now sitting at the KDE desktop with default settings, total memory usage is 613MB.

This myth that KDE is bloated and heavy really needs to die out. It may have been true in the early KDE4 versions, but that's a long time ago.


Wifi was a serious problem on my PBP, but Firefox and KDE have been impressively smooth. I could use it as a main development computer after I eschewed NetworkManager for dhcpd + wpasupplicant


Firefox is definitely slow on some of these smaller machines. Although a delay between app switches sounds like some virtual memory got paged out (either swap or executable pages got evicted to make room for FS cache.)

EDIT: apparently they’re shipping with plasma as the DE? Yeah that’s going to be slow.


Firefox is unbearably slow on everything including high-end Xeons if you don’t have a video accelerator (think headless VNC).

Perhaps the video driver or compositing is misconfigured?

Firefox is my default browser, and I’ve found Firefox to be fine on low end machines as long as it can use the video card properly.

Chrome tends to have better compatibility with Chromebook-class hardware for obvious reasons.


The experience is a few years old now but I spent a year in college running Firefox on a celeron netbook with just the EFI framebuffer (anything else would crash on boot) and it was totally usable unless you opened google sheets. On my ryzen desktop that I built during the lockdown (after baking some bread heh) it’s very snappy. IME chrome (on GNU/Linux anyway) is usually much worse than Firefox unless you’re in the habit of opening hundreds (at least) of tabs.


>EDIT: apparently they’re shipping with plasma as the DE? Yeah that’s going to be slow.

KDE Plasma feels snappy even on a Core 2 Duo with Intel graphics, in my experience.

This myth of KDE as some kind of bloated lumbering beast really needs to die out. Install KDE Neon or openSUSE and give it a spin. You'll be surprised.


I think most people who say it’s slow are comparing it to xterm+cwm (or xterm+fvwm or whatever) which it will probably never beat. I installed dolphin the other day so my girlfriend can use my computers and that still does a lot of stuff which can be pretty slow depending on the hardware.


Anything's going to be slow in comparison to a minimalist X for people who really just want to have a bunch of terminals on screen at once and barely anything else.

KDE (and the other DEs) offer a whole lot more than that, obviously :-) Dolphin does do thumbnails and all that type of stuff, but it's not really a heavy application in itself, it just does a lot of disk-intensive tasks.

Obviously generating thumbnails for a whole folder of photos is going to take a little while on an eMMC drive, just like it takes a little while to get a list of all the shared folders on a network.


I hope the YouTube experience gets better after the hardware video decoder drivers land in kernel 5.8.

Playing around with Manjaro really makes me miss the ease of trying out kernels and patches on Gentoo, but I haven't had the time yet to put together a Gentoo cross compile environment.


For an alternative view to the article, I bought one of the first batches as well, and I couldn't be happier. I don't know what the stock DE is like, as the first thing I did was install arch and lxqt, since I knew from the get-go it was a low powered machine and Plasma was just going to chug on it.

I'm not having any issues with playback in browsers (fullscreen or otherwise, using the latest Firefox), and I use it in bed quite a lot for watching videos, so I'm not sure if there's some specific configuration issue the guy in the article is having, although this could probably be easily worked around with youtube-dl into mpv if you're having something similar happen.

I'm also not seeing the same problems on the terminal. I'm using alacritty as my terminal emulator and elv.sh as my shell, with similar customisations to display git statuses etc, and while I notice the occasional white flash (I'm not sure what this is honestly, something to do with alacritty rendering I'm assuming) there's no sluggishness.

The trackpad is admittedly horrible; you really need to buy a Bluetooth mouse or separate trackpad because it really is just frustrating to use. On the other hand, the keyboard is really solid; I love the feel of it and it's responsive. The monitor, well, I'm probably not the person to ask about that. I personally think it looks pretty good, but I'm really not the kind of person who cares about or notices sub-pixel-perfect rendering, so YMMV with that.

The battery times are also not great, and although you can technically charge it through the USB-C port, it'll still drain power if it's turned on. This isn't helped by the fact I can't get the default brightness up and down buttons to work and have to type

    sudo lxqt-backlight_backend --inc / --dec
to change the backlight.

Yes, it is kind of slow, but you really must've known that going into it considering the price and the specs. Overall, my personal perception is that the pinebook pro is batting well above its weight (well, mine is anyway), and I honestly expected it to be far slower than it is.


>since I knew from the get-go it was a low powered machine and Plasma was just going to chug on it.

I run openSUSE with KDE on all my machines, and it's responsive and snappy even on the Thinkpad X220i (Sandy Bridge Core i3, upgraded to 8GB RAM + SSD). Firefox is what generally eats resources, but below 15 tabs it's fine.

Even on my Raspberry Pi 3, KDE is perfectly fine. It's not fast fast, but still feels responsive and usable.

KDE was kind of slow in the KDE4 days. Not so anymore.


Well, I can certainly provide an argument against expectations of long term usage as well.

I bought a pinebook pro, perfectly knowing the performance limitations this article discusses, and actually _wanting_ to learn to live with a less powerful computer, consuming less electricity. In an era where we're roasting Earth, this sounded almost like a duty to me.

But when I received the computer, I had a blocking problem with it: its SD card reader was faulty and would trigger I/O errors after a few writes, and the OS would remount the device as read-only. This is a major problem on that computer because it has very low internal storage space, so you're supposed to use an SD card to hold data. I tried various OSes and various SD cards, and did my due diligence to confirm it was a hardware problem - it was.

I wasn't that annoyed at that point because pine64 sells spare parts on their web shop. So I went there to replace the SD card reader, which meant replacing the main board as well, but OK. This was all very fairly priced, so no problem. Except… the spare part was out of stock. So I waited a few weeks, and it was still out of stock.

I mailed their support presenting my problem and asking a simple question: will spare parts ever be available again?

They answered by asking me to demonstrate this was a hardware issue, which I did. They did not answer my question, so I asked it again, telling them that I did not wish to make the device travel the world and back, I just wanted to replace the faulty part. They answered me with the address where I should ship the computer to them. So I asked them again directly: will the spare parts ever be available again? They dodged the question once again and asked me to send them the computer.

I guess asking me to ship them a faulty product is totally fair, they want to inspect it. I'm not willing to do that because we fly more than enough products around the world as is, and I can do any check they want me to do locally. But fair enough, they want it back to solve my problem.

What troubles me, though, and what is relevant here is that while they advertise selling spare parts for their computer, they actually don't, and they're shady when asked if they will again. Which probably means they won't. So yeah, I wouldn't bet on long term usage if you have an experimental device that you can't repair.


I think it's because they're all English-as-second-language support staff. I've had the same issues as you and ultimately gave up trying to send them money for spare parts they offered because we kept talking over each other.


It could be. I would say it's unlikely, given they had no problem handling a technical discussion when asking me to demonstrate it was a hardware problem (it was done in several mails, during which they asked relevant questions after each of my mails). I guess it may have been handled by a different team, though, sending the ticket back when confident it was indeed related to hardware.


I've had similar issues with their support and only managed to have the issues resolved this week, almost two months after receiving the device.

The only answer regarding replacement parts that I've gotten from them was that they're waiting for the manufacturer, they never provided any indication when they themselves were expecting the next shipment from their manufacturers.


Thanks for letting us know. Hopefully, this is just terrible communication on their part and we will see spare parts again.


Unrelated to your hardware troubles, but I think it's relevant to mention that the pinebook pro probably uses about the same amount of electricity as a (contemporary) Intel/AMD laptop.

The data I've found from some quick googling [1] indicates a power consumption of around 5 watts at idle, with the LCD backlight dimmed to 40 percent brightness, with power ranging from there up to about 12-15 watts at load.

From experience, the 13 inch Intel laptops I've used recently burn about the same at idle, and don't consume much more for basic tasks, especially e.g. hardware accelerated video decoding.

The other thing to consider is that they're much faster than the pinebook while consuming equal amounts of power, therefore using less electricity overall since you're waiting less time for the CPU to idle down (assuming you're not pegging the CPU at full load, which is harder and harder to avoid with JavaScript event loops running everywhere...)

All that to say, a slower/worse computer =/= a more energy efficient one. Although it probably does build the "software discipline" you need to use energy efficient software ;)

[1] https://gist.github.com/ayufan/1075c3c48ad3e9c7928334c99f7b3...


FWIW I'm regularly able to get ~12 hours on a charge on my PBP with normal terminal and qutebrowser or epiphany sessions running. That's closer to 3W under load, not 5W at idle.

Also, while power usage does go up a bit when the CPU is more fully loaded, it definitely doesn't jump up to 30+ watts like my ThinkPad does.

The Rockchip SoC in the PBP isn't impressive compared to the ARM chips going in current-gen high-end smartphones, but the max TDP isn't anywhere near what an Intel or AMD laptop chip cranks up to.

That also means a truly fanless computer with a monolithic, user-replaceable battery.

There are certainly workloads where a faster, symmetric multicore x86 chip will win on compute per watt, but it's not as cut-and-dried a win in real-world usage on a laptop as it might be on (say) a heavily-loaded server.


So, this got me curious. I pulled up my now 5 year old Skylake dell laptop, and did some quick measurements.

The "pure idle" draw is around 2.47 watts drawn from the battery with the brightness set to half.

When watching a 1080p youtube video fullscreened, the power consumption levels out at around 5 watts, again at half brightness.

If your workload is very light, there won't be much of a difference. Running mostly terminal emulators and lightweight desktop software, your biggest power consumer will likely be the screen, and most LCDs will be similar.

But if you're running heavy software like Slack or Teams (which some don't have the option to avoid, sadly), then performance per watt becomes relevant, as running your 30 watt system for 0.5 seconds uses fewer joules of energy than pushing your 5 watt system for 4 seconds.

The rub is that a lot of that heavy software will eat up as many system cycles as the OS will give it -- which annoyingly means that once you have a bunch of tabs open in a browser, that 30 watt processor will probably stay nailed at its TDP.

However, you can manually set limits on the CPU. If I limit this skylake's max clock to 900MHz, I can play Minecraft at slightly reduced settings pretty smoothly while only drawing 7-8 watts.

It's all in what you need. If your primary goal is to reduce electricity usage, "cpupower frequency-set --max <x>" your CPU down to its lowest clock, and you're set.
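
For example (the exact frequencies depend on your CPU; 900MHz here just mirrors the Skylake example above):

    cpupower frequency-info                      # shows the supported frequency range
    sudo cpupower frequency-set --max 900MHz     # cap the clock to save power
    sudo cpupower frequency-set --max 3.1GHz     # later, raise the cap back to your CPU's real max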

(none of this is meant to diss Pine64 or ARM -- only I think getting one to be eco friendly/power efficient may be misguided, depending on your use case)


Hi,

thanks for your input, it's greatly appreciated. I'm quite new to considering my power consumption, so indeed I may still make naive choices. I based it only on the reputation of ARM for being more energy efficient, and the observation that ARM devices indeed run longer on a full battery charge (but there may indeed be a lot of secondary variables explaining it).

I'm curious: how do you measure the consumption of electricity in watts? Is there a tool meant for that?


Looking back, I was being a bit too cynical and dismissive. There are of course other factors in a device being eco-friendly, and in practice hotter chips will tend to use more power unless restrained.

On my intel laptops, I used "powertop" to measure power consumption as reported by the battery. Powertop is an intel specific tool, but I think most batteries will have an entry in linux's sysfs. Here it's under "/sys/class/power_supply/BAT0", where two files "voltage_now" and "current_now" list the voltage and current in microvolts and microamperes.

  [eery@darkshire ~]$ cd /sys/class/power_supply/BAT0
  [eery@darkshire BAT0]$ cat current_now
  269000
  [eery@darkshire BAT0]$ cat voltage_now
  8423000
So 269,000 µA x 8,423,000 µV (i.e. 0.269 A x 8.423 V) is about 2.266 watts.
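
If you want the same calculation as a one-liner (assuming the same BAT0 paths as above; note that some batteries expose a power_now file in microwatts directly instead):

    awk 'NR==1{i=$1} NR==2{printf "%.2f W\n", i*$1/1e12}' \
        /sys/class/power_supply/BAT0/current_now \
        /sys/class/power_supply/BAT0/voltage_now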

Measuring while connected to mains/AC power will probably need some kind of physical device to measure the current being pulled by the charger, sadly. Only server grade hardware seem to have built in current meters :(


Actually, you're better off with a modern AMD processor and artificially throttling it to a lower TDP. Or you can go with Ryzen APUs that are available for laptops with TDPs as low as 15W for 8 cores. The secret is that the all core frequency is 1.8GHz.


Some elaboration would help.

First, it's clear that several of these things are OS / software issues. Debian has issues with the screen and wifi, whereas other distros have better wifi handling on the same hardware? That's the OS.

Firefox is slow with Slack? Firefox is slow with Slack on the fastest hardware you can buy, so why state the obvious?

Firefox apparently doesn't have hardware video decoding enabled, which is, again, a software issue.

Disk operations seem sluggish? Why not say whether you're using a cheap eMMC or possibly even an SD card?

Unless you're selling your Pinebook Pro on eBay as-is, you've left out too much information.


> I bought a Pinebook Pro in the first batch, and have been using it on and off for several months now.

Me, too.

> Some people I know wanted to know if it is usable as a daily main laptop.

I've successfully used it as my only laptop.

> I originally wanted to use stock Debian but at some point the Panfrost driver broke and the laptop could not start X.

Only in FLOSS world would someone complain about a first-time-out laptop with very early (reverse-engineered) FLOSS driver support not working on the arbitrary, non-default OS the author happened to install on it.

The Pinebook Pro ships with some kind of hybrid Debian Stretch-- it appeared to have Chrome specifically compiled for this board, and probably some other custom things (probably keyboard/touchpad driver, etc.). There's an icon in the taskbar to update this "Debianstein" using "MrFixit's repo." This is the official way to update the laptop-- it asks for your root password, downloads stuff from the repo and then runs scripts. Super shady.

That aside-- if the Pine devs weren't shipping with stock Debian when author and I bought it, there is probably a very good reason why. And we see one piece of evidence as the author tries to run "stock Debian" on the Pinebook-- shit broke!

> Alt-tabbing between Firefox and a terminal takes one second, as does switching between Firefox tabs.

Again, on the operating system which Pine ships with the Pinebook Pro, task-switching between Firefox and the terminal (mate terminal?) is immediate.

It's weird to feel I have to defend a laptop that has the shittiest touchpad I've used in a decade. But if you're going to do a serious review of a fledgling laptop like this, either use the default software or know what the heck you're doing wrt drivers/firmware. Otherwise it feels like the point is to get me to sympathize with Apple's "stock OS or it's a brick" strategy.


Running various distributions and having mainline support was part of the Pinebook Pro's value proposition, so while it's worth pointing out, I wouldn't blame the author for it that much. There are plenty of people who got or considered getting a Pinebook Pro to run things like "stock Debian" on it and aren't much interested in such "Debiansteins".


When it comes to hardware purchasing decisions, I don't really consider a device to be open-source friendly unless the drivers are upstreamed. Out-of-tree drivers that aren't at least on track for upstreaming in the near future will in the long run be just as much of a hassle as proprietary drivers.


That's a great philosophy to have and I haven't a clue what it has to do with the Pinebook Pro.

AFAICT all the relevant video drivers are upstreamed. Additionally, there has been official support in Debian since April plus an unofficial installer for Debian. Plus a pretty healthy interest in the Manjaro community and a lot of other distros. (I received my PBP in December, btw.)

What I'm saying is if you're a hacker and want to go the route of picking your favorite distro for a first-time-out Linux laptop, you really ought to be able to differentiate between borked configuration (esp. where reverse-engineered free drivers for a notoriously unfriendly GPU company are involved) and underpowered hardware.

A good way to ensure success is to try out the hardware with the default install the devs shipped. The author didn't bother to do that and his review is misleading for it.


You're the one who said that it's unreasonable to expect a run of the mill distro shipping an upstream kernel to work on this platform. But now you're trying to talk about underpowered hardware as if that has something to do with being unable to start X in the first place?

The author experienced clear signs that this platform's software support is not mature. You seem to want to blame Debian rather than the new platform and its immature drivers.


> The author experienced clear signs that this platform's software support is not mature.

The author claimed that alt-tabbing takes about one second. This was not true on the default install using Mate from December.

Author claims that merely entering a directory containing a Git repo freezes the terminal. Again, clearly not true using stock Mate terminal.

Author goes on to rankly speculate that the problems with this laptop may be "CPU and disk side." Again, completely misleading as to the actual hardware as I stated above. And easily corrected by merely testing with the default OS Pine shipped.

Aside from Panfrost I have no idea why the author ran into those problems. But he doesn't either, and he falsely attributed them to hardware limitations when it was facile to check his error by running the laptop for perhaps five minutes in the default install.


Again, I come back to the Apple comparison: imagine someone claiming the original iPhone wasn't ready for primetime because it didn't zoom very well when trying to use a recently merged, reverse-engineered driver with FreeBSD. Plus that same reviewer didn't even bother to check usability on the default iOS installed with the system.

That's the level of serious we're talking about here. In no way is Pinebook's hardware so underpowered that it takes 1 second to switch tasks.

It's an especially egregious oversight because mainline support in the kernel and Debian has happened and continues to be worked on since I purchased the thing. But since I actually took the time to try out the default install, you won't hear me misrepresent the state of the hardware if I happen to flash a newer distro/kernel and run into problems that I don't understand.


> I have a ZSH prompt that shows the Git status of the current directory. Entering in a directory that has a Git repo freezes the terminal for several seconds.

For this specific problem, use a prompt that asynchronously updates git status. https://github.com/sindresorhus/pure is an example, and it contains all the primitives you need to implement your own async prompt. Otherwise good luck working on a huge repo like chromium, even on a faster machine.
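
If you'd rather roll your own, here's a rough sketch of the underlying mechanism (a background process watched with zle -F), not pure's actual implementation; the function names are made up, and a real prompt would also need to handle overlapping updates:

    setopt prompt_subst
    autoload -Uz add-zsh-hook

    typeset -g _git_info='' _git_fd=0

    _git_prompt_refresh() {
      _git_info=''
      # Run the git query in a background process, connected to a new fd.
      exec {_git_fd}< <(
        branch=$(command git symbolic-ref --short HEAD 2>/dev/null) || exit
        command git diff --quiet --ignore-submodules 2>/dev/null || branch+='*'
        print -r -- " ($branch)"
      )
      # Ask zle to call the handler when the fd has data, instead of blocking here.
      zle -F $_git_fd _git_prompt_done
    }

    _git_prompt_done() {
      local line
      read -r -u $1 line && _git_info=$line
      zle -F $1              # uninstall the handler
      exec {_git_fd}<&-      # close the descriptor
      zle && zle reset-prompt
    }

    add-zsh-hook precmd _git_prompt_refresh
    PROMPT='%~${_git_info} %# '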


Yep, agreed. I found that OhMyZsh and Prezto etc. slowed down significantly in larger repos.


ohmyzsh's git status function is utter garbage and has been that way for at least a decade: https://github.com/ohmyzsh/ohmyzsh/commit/8059c0727a09257dc3... (committed 2010): the function forks 32 times just to do some pattern matching that's built into the shell! Needless to say, any theme using that crap is criminally slow. (Edit: It started out forking only 14 times; the number gradually expanded to 32: https://github.com/ohmyzsh/ohmyzsh/blob/d0d01c0bbf32ffe1dc22...)

To my surprise, it seems the voice of reason finally prevailed, as that crap has been fixed 20 days ago: https://github.com/ohmyzsh/ohmyzsh/commit/1c58a746af7a67f311... (It's still not remotely close to being well-designed.)

Prezto's git primitives are solid, but nothing could help you when git itself is slow on a huge repo. Hence the need for async.


I thought this must be the case. Every time I enable the oh-my-zsh git plugin, new terminals open slower by orders of magnitude.


Prezto's default prompt (sorin) is async.


>The biggest gripe is that everything feels sluggish. Alt-tabbing between Firefox and a terminal takes one second, as does switching between Firefox tabs.

Looks like the culprit is the CPU/SoC: Rockchip RK3399. The specs for it do look decent on paper [1] but I guess it's simply too slow and not suited for laptops due to a small cache. It looks like a mobile phone SoC.

>The screen size and resolution scream for fractional scaling but Manjaro does not seem to provide it. Scale of 1 is a bit too small and 2 is way too big. The screen is matte, which is totally awesome, but unfortunately the colors are a bit muted and for some reason it seems a bit fuzzy. This may be because I have not used a sub-retina level laptop displays in years.

Screen is a deal-breaker for me. I stare at screens for hours on end and if it's not something in the same class as a retina screen on a MacBook or iMac, I just ditch it quickly. As for the fuzzy screen and muted colors, well, that's the fault of the matte layer on the LCD panel. Its purpose is to diffuse light and minimize reflections. Personally I don't like to make that trade-off and prefer glass surfaces on my screens. I'll make a shade and make sacrifices as to how I orient myself to enjoy crisp text and proper colors in photos/video.

As for scaling, only Cinnamon DE does it right. I've tried almost all DEs over the past 6 months or so and Cinnamon's new fractional scaling and HiDPI support is the best by far. [2]

>The trackpad's motion detector is rubbish at slow speeds.

None of the non-Macbook trackpads are great. There's a project that's working on a good Linux trackpad driver but it's far in the future. Don't bank on it in the short-term. [3]

[1] http://rockchip.wikidot.com/rk3399

[2] https://www.omgubuntu.co.uk/2020/02/cinnamon-desktop-fractio...

[3] https://news.ycombinator.com/item?id=23235609


>None of the non-Macbook trackpads are great. There's a project that's working on a good Linux trackpad driver but it's far in the future. Don't bank on it in the short-term.

My Dell XPS 13 Developer Edition trackpad is great. I really don't think you can apply this kind of blanket statement to all non-Macbook trackpads.


Agreed. My XPS 13 also has a 1080p Matte display, and I love it more than my old MBP's Retina display.

I know the GP's post is grammatically phrased as an opinion, but I think a lot of its points are more preference-based than implied.


It's obviously a question of preference but Linux trackpad drivers (eg libinput) are less capable than Apple's in terms of gesture/palm detection etc.


I haven't noticed much of a difference when using a Mac trackpad, but I am not much of a gesture user, so that's probably why.

Linux UIs tend to encourage much heavier use of keyboard shortcuts, which in some sense makes up for it.


Drivers are (usually) fine these days, it's the higher parts of the stack (toolkits, DEs and applications) that have neglected their touchpad driver support on GNU/Linux.


I have to disagree. I used to have a MacBook, and the XPS 13's keyboard and mouse are not very enjoyable to use. They are fine, but I have considered getting a new computer for the last year because of it.


> Looks like the culprit is the CPU/SoC: Rockchip RK3399.

It might also be the storage. Especially the delay when entering a git directory hints at slow storage. I had exactly the same symptoms when booting an otherwise good server from a USB drive.


I've had this issue running Oh My ZSH on a Chromebook several years ago reading git branch info. Running a leaner config than omzsh would likely have been better, but I decided to just fully switch to fish.


Why would slow storage affect "alt-tabbing between Firefox and terminal"?


I'm not sure he was remarking on that one but it is possible. FF uses a lot of disk cache. You'll find this out if you ever run out of space. Lots of random crashes that are hard to diagnose.


That one could be RAM.


Aren’t eMMC drives known to be slow?


> None of the non-Macbook trackpads are great.

That's a myth. It's just macOS that has well-integrated touchpad gesture support across the whole UI stack, unlike any other OS so far. The only real hardware differences between Apple touchpads and others are pressure sensing and haptic feedback, but unless you really need silent clicking I'd say it's just a gimmick (I own a Magic Trackpad 2 precisely because of the silent haptic feedback feature).

That said, Pinebook Pro's touchpad actually is very bad.


I'm all in for Pinebook Pro, other open-source hardware projects and ARM ecosystem in general (I even run my production application on an ARM server).

But at the same time, we need to be cognizant of the need for software optimisations on ARM devices running pure Linux operating systems. Many of the complaints levelled against the Pinebook can be made against any other similar-spec ARM SBC used for GUI applications. Some performance tuning can be done on the user's end, such as a bloat-free Arch ARM install, an SSD, Btrfs, zswap etc. [1], but the performance gap with similar-spec x86_64 is still clearly visible.
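
For reference, a minimal sketch of the zswap part of that tuning (the parameter names are standard; where you put them depends on the bootloader your image uses, e.g. extlinux.conf or the U-Boot environment on the Pinebook Pro):

    # Append to the kernel command line, then reboot
    zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=25

    # Verify after boot
    grep -r . /sys/module/zswap/parameters/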

I have an Acer Chromebook 11 N7, running a fanless Intel Celeron N3060, 4GB RAM, MIL-STD 810G, which retailed for $250 (Google sent one to me for free)[2]. The performance enhancements Google has made with Chromium/ChromeOS are very visible; with every update it got better, and there's no sluggishness unless we go overboard with memory. I think there's a reason Google ditched ARM for Chromebooks in spite of it being favourable for running Android apps.

Mozilla & other free software behemoths who build GUI applications should seriously care about the ARM ecosystem when we are voting for projects like the Pinebook/PinePhone/Librem with our wallets.

[1] https://abishekmuthian.com/getting-smoother-desktop-experien...

[2] https://abishekmuthian.com/reviewing-the-chromebook-google-s...


Personally I purchased a PineTab [1]. I hope it doesn't feel as sluggish. I suspect most of the lag is the UI they are running. Apparently it purposely runs at a resolution of 720p so as not to tax the GPU too much. The plan is to use the tablet mostly with an external keyboard; I suspect the keyboard I purchased with it will not be so useful outside of some odd occasions.

I plan to use it for SSH'ing into more powerful machines, vim, a small amount of compiling, LaTeX, a few tabs, etc. Just a machine to take into the office, into meetings, etc. Not a powerhouse photo/video editing, multimedia viewing, number crunching, compiler building beast.

As for dev'ing for the device, I'm working on upgrading wm2 [2] to work with touch devices and to remove some of the cruft that didn't age well, whilst trying to maintain the minimalism. I'm aiming for a < 8MB footprint for the window manager and toolbar collectively.

[1] https://www.pine64.org/pinetab/

[2] http://www.all-day-breakfast.com/wm2/


I think this guy is having issues because he is running Slack, which is known to kill any machine it is launched on (even high-end developer machines), on a lower-end, ultra-cheap ARM laptop.

The Pinebook Pro itself is probably reasonable (given its price), once you disregard the use of monstrously bloated things, like Slack.

That said, the PineTab has half the ram and a measurably slower CPU[1] (See A64 vs RK3399). It’s basically like a PinePhone with a big screen.

So don’t get your expectations too high :)

[1] https://gist.github.com/ayufan/ce5dc9e501e1b720c2afc31c3ed51...


> I think this guy is having issues because he is running Slack, which is known to kill any machine it is launched on (even high-end developer machines), on a lower-end, ultra-cheap ARM laptop.

Their "apps" are essentially small web browsers, no wonder!

> That said, the PineTab has half the ram and a measurably slower CPU[1] (See A64 vs RK3399). It's basically like a PinePhone with a big screen.

This is why I'm working on a lightweight X11 window manager; there's no reason why a UI should take up so much of the CPU horsepower and RAM.


I'm planning on using the PineTab to replace my SurfaceRT for movies on road-trips. A 10" tablet with a 16:9 aspect ratio and a USB-A port that can run VLC (or equivalent) is what I need, and it checks all of those boxes. My SurfaceRT served me well, but it needs regular rebooting now, so it will be retired.

I also seriously doubt the PineTab will have a slower UI than my SurfaceRT. 2-3 seconds for a tap to respond is not uncommon on it.


> I'm planning on using the PineTab to replace my SurfaceRT for movies on road-trips.

First link on DuckDuckGo is somebody really complaining about the SurfaceRT [1]. The PineTab should have more than enough power with the Mali GPU at 720p. That USB-A port is also _really_ awesome; people seem to forget the importance of commonly used USB ports!

> 2-3 seconds for a tap to respond is not uncommon on it.

Insane, did it ship like that? Microsoft are really not so great with hardware, although I hear better things about the later, more powerful Surface models. If they don't care about something they really tend to neglect it.

I have an old Kindle Fire that I got the Google Play store running on; after that it was reasonably usable, but it still struggles. A Zoom call, for example, is completely out of the question using the browser (instead of installing their crapware). Still good enough for light browsing, viewing PDFs, writing some notes, emails, etc.

[1] https://answers.microsoft.com/en-us/surface/forum/all/surfac...


The MS software is more responsive than third-party software, but I use VLC to play movies because even h264 baseline at 240p is unreliable with HW decoding. VLC will do software decoding, but the UI is super laggy.

Note that the VLC implementation is old, because they only released one version for ARM32 in the Windows Store a long time ago.


Purely out of interest, is there not some possibility to compile code on the device? Of course it would take some time, but you should be able to get at least some percentage of speed-up with the right compile settings?

I also wonder whether it would be possible to make executables run without the need to come from the Windows Store, possibly by adjusting some registry keys or manually inserting some "trusted" certificates?


> A laptop without an encrypted disk is not really usable as a laptop as you can't take it out of your house.

Really depends on what kinda stuff you store on your computer, no?

I don't full-disk encrypt my laptop, but everything that is actually sensitive is stored encrypted at rest.


Yeah, this line I thought was hilarious because most Windows machines don't have an encrypted disk by default. (It makes data recovery and repair work harder for users who largely don't care about security.)


Even with the Manjaro installations you can use systemd-homed to store your home dir in a LUKS-encrypted container.

Cue the systemd bashing, it works for me.
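
For anyone curious, a minimal sketch of that setup (the user name and sizes are made up; it needs systemd 245 or newer with homed available):

    # Enable the service once
    sudo systemctl enable --now systemd-homed.service

    # Create a user whose home directory lives in a LUKS-encrypted loopback file
    sudo homectl create alice --storage=luks --fs-type=ext4 --disk-size=50G

    # Inspect the record
    homectl inspect alice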


I'd be willing to bet most people including devs don't bother to encrypt their laptops storage.


If someone gets their hands on it, they can install a keylogger, and then they know the password for the encrypted part...


I think this is not the scenario the parent poster refers to (that is, a "three-letter agency attack").

The typical scenario is: dev brings the laptop out of home -> laptop is stolen. In this case, encryption at rest spares the owner's sensitive information.

Actually, I personally consider any machine at risk even when it's at home, as burglary is not so far-fetched as to be considered unlikely.


TBF, this is still possible if the boot partition is unencrypted (which is the only way I've ever seen FDE done on Linux in practice).


I don’t actually think that’s true for my MacBook.


I really appreciate write-ups like this. While I had considered taking the Pinebook Pro for a spin, the storage and limited RAM situation is ultimately what broke the deal for me, as I could not think of a use-case where it would not fall short of what I need from a laptop. Hopefully I can muster the energy to write something similar after I get my PineTab, for which I hopefully have more realistic expectations: to use it as an e-mail terminal (mutt and Syncthing) and for the occasional "browser peek" while jumping between meetings where I do not feel like dragging a laptop along. My only concern now is whether I can have it fully encrypted and what state video (not browser video) playback will be in.

In closing, I have to say that I really appreciate what Pine64 are doing. Their SoC boards, PineTime, and PinePhone are all great fun and I hope this is just the beginning of a long series of awesome hardware to play around with together with supremely hackable open source software.


What is your issue with storage? It can take M.2 cards.


Thank you, that was a mistake on my part; it was just the RAM then. The mistake is particularly embarrassing as I am very well aware of the M.2 expansion board for the PineTab, so I really should have remembered. Now if only that expansion board would support both the M.2 and the SIM at the same time, I would even have hopes for the PineTab to finally drive my cellphone out of my life…


So frankly about as expected given pricing and nature of this particular beast.

I personally think they aimed a little too low on hardware specs. Sure, the price is cool, but there is something to be said for spending just a tiny bit more to boost adoption via credible usage.


It's great for 200 USD. Of course it's not going to keep up with a 2000 USD MacBook Pro, but I doubt that's what they aimed at, and the mid-size laptop market is pretty saturated.

> I personally think they aimed a little too low on hardware specs.

I don't think so. The problems seem to come from more resource-hungry applications (yes, that's Slack), and it's not much worse than what we had a few years ago with HDD laptops. If you're poorer or in need of a cheap replacement, it's great and priced exactly right.


The target audience is developers though (for now), not poor people. I don't see developers sticking with it long term (and, say, contributing fixes) if they get frustrated by everything being annoyingly slow (e.g. browser tab switching).


While I do enjoy using my Pinebook Pro, I usually grab the iPad or Thinkpad, but the Pinebook does get a good amount of use. The Pinebook is faster than I figured, but it is slow, although not annoyingly slow. Storage is slow, and they have M.2 adapters if you are cool with more power usage.

The ultimate number one thing to do with this device is to turn off tap-to-click. With that off the trackpad is okayish but still crap. I actually enjoy tap-to-click on Apple and Thinkpad trackpads. The keyboard is good enough and better than most laptops: not mechanical-keyboard good, but good in a 50g sponge-ish way. The display is a typical 1080p matte IPS screen (aka a minimum requirement for display specs for some, but still within the range, although at the low end).

I will say it is valuable to have as a developer. I find it helpful to work off of a slow device sometimes to test how your widget will behave, as you might run into issues with timers or other weirdness, and aarch64 is becoming more common, so it might be good to test on that platform too.


Yes, as a platform to develop. Think of it like a dev board with a permanently attached display and keyboard.


Yeah I get that. To iron out bugs you really want devs using it though.

I like the concept. Just think they missed a trick there.


In addition to what I mentioned, the pbp is faster than a pi4, so there's that too.


Which SoC do you think they should have used? There are not very many that are more powerful; the one in the RPi is unusual in the amount of RAM that it can address.


Thought it was odd that the author said it's close to being usable, then lists a large number of huge performance problems. That doesn't really sound all that close, though I suppose you could say, "well they just need to slap in a more powerful chip and it'd be fine".


I am not sure, but from what I've heard ARM has a lot of silicon acceleration, yet not much software (or many compilers) manages to use it (Chrome being one notable exception): https://phoronix.com/scan.php?page=news_item&px=Arm-Faster-C...

I wonder what the potential would be if more of the userspace libs were optimized, or with a smarter compiler (I mean, LLVM is pretty smart; maybe it just needs some nudging).
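
As a small example of the nudging that is already possible, the compilers can at least be told about the RK3399's cores (standard GCC/Clang AArch64 flags; whether they help a given workload is another matter):

    # Tune code generation for the RK3399's big.LITTLE pair instead of generic ARMv8
    gcc -O2 -mcpu=cortex-a72.cortex-a53 -o app app.c

    # Or let the compiler detect the host CPU when building on the device itself
    gcc -O2 -mcpu=native -o app app.c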


The performance issues reported suggest the software treats all cores as equal and, with that, when you start multiple processes or threads that need to complete for something to happen, you'll end up waiting for the slowest one. That might be bearable on symmetric multi-core chips but is an issue with asymmetric ones.

Does anyone know whether Linux can migrate processes from slow cores to fast ones if the fast ones are idling?


I've been wondering this too. To my knowledge there is no language/framework threading implementation that lets you run certain functions in a "fast core thread" and others in a "slow core thread".

I'm guessing this means the kernel is making the decisions, which sucks for application developers.
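
A crude userspace workaround is pinning with CPU affinity; this sketch assumes the RK3399's usual numbering, where the two Cortex-A72 "big" cores show up as CPUs 4 and 5 (check lscpu or /proc/cpuinfo first):

    # Start a process pinned to the fast cores only
    taskset -c 4,5 firefox

    # Or move an already running process onto them by PID
    taskset -cp 4,5 12345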


How does Android (Linux) handle big.LITTLE chipsets? From what I understand they've had to handle this for years, so there must be some software they're using to decide what process goes to which core. Is that not something they've upstreamed to the main Linux codebase?


Depends on the phone. Most 2015-2019 Qualcomm phones use niceness to determine CPU allocation, whereas Android 10 has more complete cgroup support, where different groups are assigned to different cores and the APIs categorize tasks/threads/processes by what they are doing into different execution profiles. Pre-10 had some fixed cgroups available, but it wasn't as comprehensive.


Anyone know if there's support for Wayland? Fractional scaling works pretty well out of the box for me on Ubuntu 20.04 (and also did on 18.04).

As for encryption, I see that Architect has support for ZFS, but it doesn't appear to support installing with encryption out of the box. I suppose, if the live ISO does support encryption, it should be possible to "convert" a ZFS install to encrypt most (probably not /boot) filesystems:

https://unix.stackexchange.com/questions/532619/encryption-o...
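
If the install itself can't be encrypted, native ZFS encryption can at least be added per dataset afterwards; a minimal sketch assuming OpenZFS 0.8+ and a pool called rpool (the pool and dataset names are just placeholders):

    # Create an encrypted dataset and copy the sensitive data into it
    sudo zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/home-enc
    sudo rsync -a /home/ /rpool/home-enc/   # adjust to the dataset's actual mountpoint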


I am using the pbp with sway (if you don't know it, an i3-like wm for Wayland); it works well for me. Hoping for GLES 3.3 support in Mesa (3.0 is experimental but there) to be able to run alacritty :)


Also for WebRender in Firefox.


I remember being a little too hyped over the Pinebook Pro a few months ago and almost getting one, as it looked like a cheap and promising device I could use to test the state of GNU/Linux desktops again, after switching to macOS 6 years ago. But they went out of stock, and then corona happened, which complicated logistics and made these laptops impossible to get for a few months.

I’ve recently come across a few long term reviews like this that conclude that the computer maybe wasn’t that good, due to either specs or quality of the hardware itself (keyboard, screen...). So now, I don’t know what to think. Feels like I’ve dodged a bullet.


Keyboard and screen are great IMO, but I am coming from a Macbook Pro (butterfly keyboard) and Thinkpad T430.

The RAM is low, so you will need to take that into account and decide if it is enough for you.


>Video playback on browsers is not really nice. Youtube works in the default size, but fullscreen causes a massive frame rate drop. Fullscreen video playback in e.g. VLC is smooth.

On Linux, Firefox uses the CPU for video scaling by default. You can enable GPU acceleration by setting layers.acceleration.force-enabled in about:config.
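
If you prefer setting it from a file, the same pref can go in user.js in your profile directory (a sketch; note that a reply further down recommends the newer gfx.webrender.all flag instead of this older setting):

    // user.js in the Firefox profile directory (about:support shows the path)
    user_pref("layers.acceleration.force-enabled", true);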


Last I checked, there wasn't a single official browser release on Linux that did GPU-accelerated video decode, which is the norm on Windows of course. So expect playing a 1080p video to hog two CPUs on a real laptop, or all of them and then some on this mobile chip. It's not even the scaling; if you are full screen the compositor should do that.


Check again: Firefox will do this for you now using VAAPI on Wayland, as of a couple of weeks ago. If you have an AMD or Intel GPU it should just work on e.g. Fedora, plus (not sure if this is explicitly needed) setting a Firefox feature flag.

Of course the problem here is that this Rockchip's video decoder has no VAAPI driver.
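
For anyone wanting to try it, this is roughly how it is switched on (the environment variable is standard; the pref name is my best guess for Firefox 75+, so double-check it in about:config):

    # Run Firefox as a native Wayland client
    MOZ_ENABLE_WAYLAND=1 firefox

    # Then flip the VAAPI decode pref in about:config:
    #   media.ffmpeg.vaapi.enabled = true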


> Of course the problem here is that this Rockchip's video decoder has no VAAPI driver.

That's also not true; according to [1] it should work with Firefox 76 on Wayland.

[1] https://forum.pine64.org/showthread.php?tid=9171


Although it's great news this is being worked on, that thread strongly suggests that this is still very much WIP. The only thing they state is that Firefox supports VAAPI in general since 75/76 (which is what I said), not that the RK3399 can actually use that functionality.

> 06-07 libva and libva-v4l2-request are basically broken except fro mpeg2.


Don't recommend this old setting. If you need GPU acceleration, force enable WebRender support in Firefox (gfx.webrender.all=1).

Making web pages render with GPU support will barely help video decoding, if at all. For this to work well you need to tap into your hardware video decoder, which none of these settings achieve.


WebRender won't work on a GLES2-only system! The Pinebook Pro is one, for now.

GLES3 support in Panfrost is experimental, wait for it to mature.


Looks like a combination of slow CPU and slow storage. eMMC can often be quite a bottleneck; maybe they could get a faster storage controller, like the newer UFS or something.


PineBook Pro can support an NVMe SSD with a cheap adapter that fits inside the case.


I've got the adapter and an Intel NVMe SSD installed; the eMMC is only being used to boot the system.

It works reasonably well for a device in this price range.


Slack is unusably slow on any machine, so there's nothing new about that.


I have used the Pinebook Pro as my main driver for nearly 6 months because my main machine died before isolation.

> Alt-tabbing between Firefox and a terminal takes one second, as does switching between Firefox tabs.

Using the OpenGL compositor (which should be the default in the latest versions of Manjaro) is a lifesaver! Web apps are and will still be slow sometimes (I don't use Slack, but Messenger can be sluggish).

> Video playback on browsers is not really nice.

Depends. YouTube is indeed quite slow (I think it became slower after some YouTube update), but if YouTube were optimized for speed there wouldn't be all those ads, right? I like to listen to YouTube music in the background; I use https://github.com/mps-youtube/mps-youtube for that and it works like a charm, uses 10% CPU and a few MB of RAM. I don't use mpsyt for video but I guess it would also improve your experience. Other websites (kissanime) work like a charm, even in 1080p.

> Manjaro does not provide a way to disable tap-to-click

True, but installing `xf86-input-synaptics` (although it's deprecated) provides such an option. Also, I use Ctrl+F7 to disable the touchpad while I type large chunks of text.

From the specs, one could think that the RAM is the problem, but zram totally solves that (I have dozens of tabs open in Firefox, including heavy apps like Messenger, Trello and Overleaf). The slow CPU is a larger issue, with Gmail being so slow to load it's almost unusable (loading my mailbox takes 20s and creating a new email takes 3s).
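
In case it helps anyone, a minimal sketch of setting up a zram swap device by hand (Manjaro ARM also ships a zswap/zram service, so this is just the generic approach; the size is arbitrary):

    sudo modprobe zram num_devices=1
    echo lz4 | sudo tee /sys/block/zram0/comp_algorithm
    echo 2G | sudo tee /sys/block/zram0/disksize
    sudo mkswap /dev/zram0
    sudo swapon -p 100 /dev/zram0   # high priority so it is preferred over any disk swap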

Overall, I find the machine very much usable though. I only have two complaints:

- static noises

- under "heavy" load (e.g. HTML5 games with compilation in the background), the power supply delivers less than what the machine consumes, which leads to the battery discharging.

I wrote a guide on my setup that includes other tricks. Shameless plug: https://louisabraham.github.io/articles/pinebook-pro-setup.h...


Who makes Pinebook? Why do they seem to be maintaining anonymity?

They go to great lengths to hide their identity or the existence of their native language, even having an About Us page that says nothing about Us.

https://store.pine64.org/about-us/

It reminds me of how MSG manufacturers claim they only sell "flavor enhancers".


> Eventually I gave up and switched to the default Manjaro. Its installer does not support an encrypted root file system

I am using Manjaro with an encrypted file system. It was relatively easy to set up and even the boot loader is encrypted. Does the Pinebook ship with a different version?


Quick tip: you can use xinput to disable tap-to-click on most touchpads, though the exact command is different depending on the touchpad driver used.

I occasionally run E and it doesn't have a setting for this either.
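
For the common libinput case, a sketch (the device name is a placeholder; grab the real one from xinput list first):

    # List input devices and find the touchpad's name or id
    xinput list

    # Disable tap-to-click for that device
    xinput set-prop "Your Touchpad Name" "libinput Tapping Enabled" 0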


The review itself speaks volumes about the author's misconfigurations and lack of Linux experience.

As everyone has stated, Firefox, and all major browsers at this point, are resource hogs. On a low-powered system, trying to run a huge and inefficient web page like Slack will always bog down the system. It will frequently cause my browser to use upwards of 15 GB of additional RAM on my 64GB workstations, regardless of the browser and OS involved. However, it sounds like it's especially bad on the author's system because he doesn't have any swap enabled (more on that later).

HiDPI is a known weakness across all versions of Linux. The complaint here seems to be that the screen is too good, and the author is slightly uncomfortable with how small 1x scaling is. There are X-based workarounds that partially solve this (see other comments), but none that don't murder your CPU.

The console problem with zsh being slow is straight out of the beginner's zsh configuration problems "handbook". Every beginner starting to play with their prompt loads it up with add-ons and bogs it down. Not using asynchronous plugins, especially for git, makes the prompt slow on any system, but it is especially noticeable on low-RAM systems that can't keep the entire git folder cached for faster lookups.

This is obviously exacerbated by the fact that the author has swap disabled. In his first step of the clang build, he says to enable swap. On a low-RAM system, swap is especially important since you're going to run out of space in RAM very frequently. Swap is how you mitigate that problem, by allowing some of your disk to be used as ultra-low-speed overflow memory. If the author manually disabled his swap (which would be required if he actually used the default install configuration he claims), it's no wonder the system is slow to respond and slow for resource-intensive tasks. It's only recently, with the very high memory volumes now available, that there have been discussions about disabling swap completely.
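
For completeness, a minimal sketch of adding a swap file after the fact (the size is arbitrary; fallocate may not work on every filesystem, in which case dd does the job):

    sudo fallocate -l 4G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile

    # Make it persistent across reboots
    echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab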

Related: I'm not sure if the sleep mechanism problems on the Pinebook Pro are driver problems, but if you don't have swap configured then virtually none of the Linux sleep mechanisms work. When the device sleeps, it stores the RAM contents in swap. If you don't have swap, you can't sleep.

Slow build speeds are pretty much expected. Builds of large codebases are one of the most taxing things you can do to a system. A slower CPU and little RAM are the worst things for build times, because the CPU is frequently a bottleneck even on the most powerful systems, and disk I/O (the other major bottleneck) is usually mitigated by RAM caching. If the project is C++ you also get crushed by the linking process on large projects, which can require huge amounts of RAM. It's no wonder the author said to add swap, since a lack of swap and little RAM can cause a literal crash during linking due to failure to provide the minimum required memory.

The complaints about the "lack of support for..." are really trite too. It's Manjaro; even a cursory look tells you it's a more stable and slightly more user-friendly version of Arch. Both still require you to be comfortable with configuring your system manually; it doesn't come perfectly configured out of the box, and that's part of its draw. Something like disk encryption is a good example where the user is expected to configure their own system. Similarly, Manjaro will perform more poorly than other distros on some hardware if you don't configure it, because unconfigured systems are always less performant than configured ones.

tl;dr

The author seems to have wiped and reinstalled the Manjaro OS, skipped configuring it or explicitly configured it in a way that makes it perform worse, and then complained about the low-spec device performing badly.


How are you measuring the memory usage? 15 GB sounds like a count of virtual memory that includes lots of mappings that aren't physical memory.

On my FreeBSD desktop box right now, the main Firefox process has 1890MB RSS, and the content processes have anywhere from 248MB to 675MB. (Swap is completely unused.)
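
For the comparison, something like this shows resident vs. virtual size per process (this is the Linux/procps syntax; BSD ps spells the selection a bit differently):

    # RSS and VSZ are reported in KiB here
    ps -o pid,rss,vsz,comm -C firefox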

> if you don't have swap configured then virtually none of the Linux sleep mechanisms work. When the device sleeps, it stores the RAM contents in swap. If you don't have swap, you can't sleep.

???

Suspend-to-disk / "hibernation" (S4 in ACPI terms) is a really unpopular way of "sleeping" these days. FreeBSD outright does not support S4 (except S4BIOS).

The usual sleep is suspend-to-RAM. These days there's also S0ix which means turning as much as possible off but not changing system power state. Mobile phones are probably doing something like that.

> on low-RAM systems that can't keep the entire git folder cached for faster lookups

If you rely on caching for the git directory, the first time you navigate to a repo would be very frustrating too. You can't always rely on caching. Sometimes you have huge repos. Sometimes they are mounted over NFS and git status takes >10s. :D

IMO git status shell plugins are unnecessary and not worth it.

> HiDPI is a known weakness in Linux

Maybe stop using the ancient terrible windowing server ;) It's a non-issue with Wayland.


use Ubuntu...



