
Most laptops are severely limited by heat dissipation, so it's normal that performance is much worse. The CPU cannot stay in turbo as long and must drop to lower frequencies sooner. On longer benchmarks the CPU starts throttling due to heat and becomes even slower.


The container security boundary can be much stronger if one wants it to be.

One can use something like https://github.com/google/gvisor as a container runtime for podman or docker. It's a good hybrid between VMs and containers. The container is put into a sort of VM via KVM, but it does not supply a kernel and instead talks to a fake one. This means that the security boundary is almost as strong as a VM's, while mostly everything works like in a normal container.

E.g. here's how I can read the host filesystem even though uname says weird things about the kernel the container is running in:

  $ sudo podman run -it --runtime=/usr/bin/runsc_wrap -v /:/app debian:bookworm  /bin/bash
  root@7862d7c432b4:/# ls /app
  bin   home            lib32       mnt   run   tmp      vmlinuz.old
  boot  initrd.img      lib64       opt   sbin  usr
  dev   initrd.img.old  lost+found  proc  srv   var
  etc   lib             media       root  sys   vmlinuz
  root@7862d7c432b4:/# uname -a
  Linux 7862d7c432b4 4.4.0 #1 SMP Sun Jan 10 15:06:54 PST 2016 x86_64 GNU/Linux


gVisor is solid but it comes with a perf hit. Plus, it does not work on every image.


FWIW the performance loss got a lot better in ~2023 when the open source gVisor switched away from ptrace. (Google had an internal non-published faster variant from the start.)


gVisor lets one have a strong sandbox without resorting to WASM.


Meanwhile, Google moved away from gVisor, because they had too much trouble trying to make it look like actual Linux :-(

https://cloud.google.com/blog/products/serverless/cloud-run-...

Between this and WSL1, trying to reimplement all Linux syscalls might not lead to a good experience for running preexisting software.



It's complicated; memory accesses can really block for relatively long periods of time.

Consider that a regular memory access that hits the cache takes around 1 nanosecond.

If the data is not in the top-level cache, we're looking at roughly 10 nanoseconds of access latency.

If the data is not in cache at all, we're looking at 50-150 nanoseconds of access latency.

If the data is in memory, but that memory is attached to another CPU socket, the latency is even higher.

Finally, if the data is accessed via an atomic instruction and many other CPUs are accessing the same memory location, the latency can be as high as 3000 nanoseconds.

It's not very hard to find NVMe attached storage that has latencies of tens of microseconds, which is not very far off memory access speeds.
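
A rough way to observe the cached vs. uncached gap on your own machine is a dependent pointer-chasing loop over a working set that either fits in cache or is far larger than it. This is only a sketch (the buffer sizes, iteration counts and exact figures are assumptions and vary by machine); build with something like cc -O2 latency_probe.c:

  /* latency_probe.c - rough pointer-chasing latency probe */
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  /* Walk one big random cycle of indices: every load depends on the previous
     one, so the hardware prefetcher cannot hide the access latency. */
  static double ns_per_load(size_t n, size_t steps) {
      size_t *next = malloc(n * sizeof *next);
      if (!next) { perror("malloc"); exit(1); }
      for (size_t i = 0; i < n; i++) next[i] = i;
      for (size_t i = n - 1; i > 0; i--) {       /* Sattolo's shuffle: single cycle */
          size_t j = (size_t)rand() % i;
          size_t t = next[i]; next[i] = next[j]; next[j] = t;
      }
      struct timespec a, b;
      size_t p = 0;
      clock_gettime(CLOCK_MONOTONIC, &a);
      for (size_t s = 0; s < steps; s++) p = next[p];
      clock_gettime(CLOCK_MONOTONIC, &b);
      volatile size_t sink = p; (void)sink;      /* keep the loop from being optimized away */
      free(next);
      return ((b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec)) / (double)steps;
  }

  int main(void) {
      /* 4 Ki entries (32 KiB) stays cache-resident; 64 Mi entries (512 MiB) forces DRAM access. */
      printf("cache-resident: %.1f ns/load\n", ns_per_load(4 * 1024, 20 * 1000 * 1000));
      printf("DRAM-bound:     %.1f ns/load\n", ns_per_load(64 * 1024 * 1024, 20 * 1000 * 1000));
      return 0;
  }

On a typical desktop the small case lands near the ~1 ns figure above and the large case somewhere in the 50-150 ns range; your numbers will differ.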


I just want to add to your explanation that even in the absence of hard paging from disk, you can have soft page faults, where the kernel modifies the page table entries, assigns a memory page, copies a copy-on-write page, etc.

In addition to the cache misses you mention, there are also TLB misses.

Memory is not actually random access; locality matters a lot. SSD reads, on the other hand, are much closer to random access, but much more expensive.
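
To see the soft faults mentioned above without any disk involved, you can watch the process's minor-fault counter climb as freshly mapped anonymous pages are touched for the first time. A minimal Linux-only sketch (the 64 MiB size is arbitrary, and with transparent huge pages the count will be far lower than one fault per 4 KiB page):

  /* soft_faults.c - count minor (soft) page faults while touching fresh anonymous memory */
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <sys/resource.h>

  static long minor_faults(void) {
      struct rusage ru;
      getrusage(RUSAGE_SELF, &ru);
      return ru.ru_minflt;                       /* faults serviced without any I/O */
  }

  int main(void) {
      const size_t len = 64 * 1024 * 1024;       /* 64 MiB of anonymous memory */
      char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (p == MAP_FAILED) { perror("mmap"); return 1; }

      long before = minor_faults();
      memset(p, 1, len);                         /* first write to each page takes a soft fault */
      long after = minor_faults();

      printf("soft page faults while touching %zu MiB: %ld\n", len >> 20, after - before);
      munmap(p, len);
      return 0;
  }

Each of those minor faults is serviced entirely in the kernel with no I/O, which is exactly the "soft" case described above.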


That would be a significant downgrade. Threadripper CPUs top out at 64 cores / 128 threads.


Due to thermal constraints, a 64-core Threadripper 3990X sustains lower total performance under full load than four separate 16-core Ryzen 9 3950X chips combined.


It's strange that monitor costs are being debated at all when monitors are the primary tool for software development. If a software developer produces $100k/year of value, then it's worth spending several thousand dollars for every percent of increased effectiveness. This can buy almost any monitor on the market.

I've found the following worth considering:

- It makes sense to maximize usable screen real estate, both in terms of pixels and area. Complex problems involve a lot of information, and being able to view everything at the same time makes them easier to understand. Personally I've found that three 4K 32-inch monitors work well.

- OLED allows working comfortably in low lighting conditions.

- Many larger higher-end monitors support picture-by-picture, which allows splitting the monitor into multiple virtual monitors for separate inputs (2x or 4x per monitor). This is useful when there are multiple external test devices, which can then be integrated into the existing display setup.


I don't understand this consumerist mindset. A monitor is a monitor. Programmers don't need color accuracy so any 4k will suffice. Your thinking just incentivizes "Programming grade" monitors that are just overpriced regular ones. I don't see how you could make a several thousand dollar 4k monitor if you tried.


> Programmers don't need color accuracy so any 4k will suffice.

Surely it depends on what we’re programming? Color accuracy (and inaccuracy for accessibility testing) is important for most anything we produce graphically that’s user-facing.

> A monitor is a monitor.

Color accuracy isn’t the only reason to invest more in display quality. Viewing angle and refresh rate are important factors as well. Their relative importance varies by use case: of course refresh rate being more important for game development; viewing angle and even color accuracy can be important for developer accessibility (example: I have a lot of sensory sensitivities, and color warp/washout at angles is a constant cognitive burden even if I’m not directly looking at it).

> I don't understand this consumerist mindset.

Indiscriminately buying cheap goods tends to also be a consumerist mindset and behavior. It often leads to more frequent purchases and disposal, producing both greater cost and waste. Which isn’t to say all pricing—in technology or otherwise—reflects value or longevity. But blanket rejecting the possibility isn’t helpful in distinguishing that.

> I don't see how you could make a several thousand dollar 4k monitor if you tried.

Even without addressing specialized use cases, it’s easy: low volume + high cost bill of materials. This used to be the case with IPS panels, and continues to be the case with newer/less pervasive panel details (eg OLED or higher pixel density displays). And again, whether those are requisite for a given programmer will depend on their work product and their own individual needs.

- - -

I won’t go so far as to say that everyone should arrive at the same cost/benefit analysis. But I will say that, in terms of tools to do our jobs, displays are very close to directly analogous to mattresses, chairs and shoes: oftentimes, skimping on cost is more costly than spending more upfront.


> is important for most anything we produce graphically that’s user-facing.

I can see the need for cross-domain (e.g. UX or graphics) specialists to have high-accuracy monitors but for most devs I don't see it as necessary, as generally speaking you have an IDE with a bunch of code (text) and probably another monitor with your terminals, web browser with documentation, instant messaging app, etc. pretty much all of which is text-based.

>refresh rate... viewing angle and even colour accuracy can be important for developer accessibility

With regards to viewing angle, I have a nice dual-arm monitor mount system that allows it to be easily repositioned (limited 6DOF) as I change sitting posture throughout the day. Maybe something like this would help you?

I will say I usually score badly on colour perception tests (not colourblind, rather the tests where you have to order a number of very similar hues) so I could be missing a whole bunch of subtle colour errors that would irritate regular users :)


> I can see the need for cross-domain (e.g. UX or graphics) specialists to have high-accuracy monitors but for most devs I don't see it as necessary, as generally speaking you have an IDE with a bunch of code (text) and probably another monitor with your terminals, web browser with documentation, instant messaging app, etc. pretty much all of which is text-based.

I was speaking specifically to the domains that serve use cases where display quality matters. Quite a lot of us at least overlap with that. Basically all FE web or GUI dev, any end-user image/video processing, pretty much anything that puts graphics on a screen that aren’t UI SDK builtins.

> With regards to viewing angle, I have a nice dual-arm monitor mount system that allows it to be easily repositioned (limited 6DOF) as I change sitting posture throughout the day. Maybe something like this would help you?

I have challenges with hyperfocus which include uncomfortable stillness for prolonged periods of time. I also have a puppy who reacts to very small motion adjustments during work hours in a way which becomes a huge ordeal. This is another reason my Comically Large Display (described in another comment in thread) works well for me. I think adjusting a mounting arm would be counterproductive for me.

> I will say I usually score badly on colour perception tests (not colourblind, rather the tests where you have to order a number of very similar hues) so I could be missing a whole bunch of subtle colour errors that would irritate regular users :)

I don’t do a lot of color accurate work but color/luminance wash is a huge problem for me if I have to deal with it. Like a background task that never stops until my brain is depleted. Having a panel that doesn’t distort that way in my peripheral vision is essential for me to be able to work.


> Quite a lot of us at least overlap with that. Basically all FE web or GUI dev, any end-user image/video processing, pretty much anything that puts graphics on a screen that aren’t UI SDK builtins.

I completely disagree, except for high end picture or video production. I have used way too many websites that clearly only work well on large, nice displays. Most end users of anything will not have the same high quality expensive monitor/computer that's being suggested.

I remember reading some article about someone who used an old i3 processor with a 4:3 laptop screen or something, knowing that if his code is slow for him, it's slow for his users. I think this mindset is genius and should be more common. Take MS Teams for example. It's like their devs have only ever tested it on an M1 with a gigabit link, and it's painful for the other 99.9% of people who use it.

So if you want to get a nice setup for yourself, then you should. But you should not do it for your clients, and if you do, you should understand your users will not have machines like yours.


Upthread I also stressed the importance of using lower quality displays to account for that aspect of real world usage. The point of having high quality displays to support users who don’t is to be able to reliably understand what’s being displayed in the first place. If you’re working at low fidelity you can only address the users who have the same system flaws you do.


This comment is the equivalent of a developer with a redundant 10 GbE connection building a mobile app for users who will have spotty 3G.


I don’t see a problem with that? Artificially limiting my access to data doesn’t help me better serve people who have concrete limited access to it. It just limits my access to information and my ability to assess what their limits are. I agree that we should also test and experience the things we build the way our users do. But I don’t agree that we should build things by imposing those experiences on ourselves without exception.


By and large, modern mass produced monitors are more than good enough. In fact, the best programmers in history had no problems running big CRTs.

These people don't need any fluff, and don't blame underperformance on lack of technology. People nowadays are so spoiled.

https://www.doomworld.com/lordflathead/id_carmack.jpg


Big CRTs didn’t have nearly the viewing angle problems that TN panels tend to have. And having, and addressing, sensory issues isn’t spoiled. It’s just taking care of oneself.


I agree that just buying the most expensive monitor is a waste of resources. On the other hand, I think price shouldn't even come into the picture when deciding on basic things, such as the number of monitors or whether to choose a 4K monitor or just FHD.

I guess the most reasonable approach is to select a set of models that pass the requirements and only then think about the price.

Some requirements are really expensive though. If one insists on OLED and wants to fit three 4K monitors side by side, the full setup costs around $10k.


> If a software developer produces $100k/year of value, then it's worth spending several thousand dollars for every percent of increased effectiveness. This can buy almost any monitor on the market.

This doesn't add up: 1% of $100K = $1000. You would spend _several_ $1000 for every $1000 of produced value per year.

EDIT: Added "per year" at the end, thanks to Arcuru's question.


Thank you for noticing this as well! Further, if a software developer produces $100k per year in value, the company is likely underwater. Very few developers cost less than $100,000, especially when taxes, benefits, tooling, etc are all factored in.


How often do you buy a new monitor?


Every 3-4 years.


Most applications use either the Gtk or Qt widget libraries, so a lot of similarity in how applications behave already exists.

I don't think it's possible to make Gtk and Qt themselves behave identically at this point. For example Qt is a commercial project and an effort that would break backwards compatibility is not worth it. Gtk on the other hand has strong opinions about how things should work, so it would be hard to change that too.


There was a Gtk backend for Qt which would have solved this: https://github.com/CrimsonAS/gtkplatform


> Qt is a commercial project and an effort that would break backwards compatibility

Not sure how that would break backwards compatibility. Moreover, Qt has always tried to plug into native libraries and integrate as well as possible with the host platform.

You have a fair point about GTK. Any implementation should at least cater exactly to their need or they will make their own.


You need to wait up to a year for the distributions to pick up the code into their default installs. Installing bleeding edge window managers, widget toolkits and similar software yourself is more difficult than it's worth for a normal user.


Hi, I'm the developer behind this effort. I can answer any questions you have.


> Bill believes that the biggest opportunity to improve Linux touchpads is to adapt their acceleration curve to better match the profile of a macOS touchpad. How do you feel about the acceleration and precision that your Linux touchpad offers?

Is this work only going to be for touchpads? I personally hate the X11 curves with mice and vastly prefer Apple’s. It seems to be hard coded last I checked and not easily modifiable (there are two parameters now but it’s still a very different curve). If trackpad curves also benefited mice (particularly those of us who use Apple’s Magic Mouse on Linux), that’d be amazing!

Thank you for your efforts to improve these ergonomics — it’s thankless and hard work but benefits many.


For now we only focus on touchpads. I think that if we're successful in delivering touchpad improvements then we will gain credibility and trust that could be useful when working on other input devices.


Just want to echo the above sentiment—mouse-based acceleration curves are in need of just as much love as touchpads, and I'd love to see both!


Just want to thank the developers behind this; I've joined as a GitHub sponsor. Happy to see this also entering the Qt framework.


Yes, it's incredible what they're getting done with a shoestring budget.

From the announcement "The number of people keeping this project going is tiny (currently just 121 supporters), but this small group of passionate Linux users are creating meaningful forward progress to improve the touchpad ecosystem for hundreds of thousands of Linux touchpad users. For those who don't want to rely on a future beholden to Apple, we hope that you'll consider supporting us? We could be getting more done if we had 250 supporters. "


Thanks for your work on this!

Your report says Firefox gestures are working on Wayland, and two finger swiping left/right appears to be configured in the Firefox prefs to go back/forward in history:

  browser.gesture.swipe.left = Browser:BackOrBackDuplicate 
  browser.gesture.swipe.right = Browser:ForwardOrForwardDuplicate
However, Firefox doesn't respond to these gestures. Do you know what's up with that? (I'm on Fedora 35, if that's relevant.) Two finger scrolling up/down works just fine.


Interesting, this feature may only be for touchscreens because two-finger swipes are registered as scrolls on Wayland. This will indeed need further work.

What does work though is two-finger pinch gesture to zoom in/out of a web page.



IIRC these weren't for two finger swipes at least on macOS (which is the only place I saw this working). I think these were handling 3-finger swipes. Unfortunately now GNOME chomps 3-finger swipes so IDK if that is where the problem is (I guess someone without GNOME can try and see).


How does this differ from what Fedora is shipping in Gnome? Touchpad gestures, sans acceleration, are flawless.


Fedora with Gnome is only one desktop environment and one widget toolkit. For example, Qt-based apps didn't have touchpad gestures at all anywhere until my work on Wayland gestures landed this year.

It is true, though, that if one limits oneself to Wayland and only to Gtk-based applications, touchpad gestures mostly worked before. We have now switched focus to adding touchpad gesture support to more applications, so there will be measurable progress for this case too.


Are there binaries I can install or do I need to compile something? I have no idea where the code is.


In short - there are no binaries and it's relatively hard to compile these manually, so I recommend waiting until the Linux distributions pick these projects up. This usually takes around 6 to 12 months.


Holy cow, that was a LOL from hell. Are you serious? 6-12 months is the biggest tease. "Here's this really cool thing, but maybe, if you're lucky, you'll be able to use it in a year or so." That's the quarter super glued to the floor kind of frustrating.


It's not super glued to the floor. It's on a train, on its way to your station. How far away your station is from the train entirely depends on the length of the release cycle of your Linux distribution, and is completely outwith the control of this developer. That's how Linux distros work. If you want it sooner you can always use a rolling release, such as Manjaro.


The new X server is already in Debian experimental [0], with a bit of luck it trickles down to unstable just in time for Ubuntu to pick it up for the 22.04 release.

If you are running Arch you already have it [1].

[0] https://packages.debian.org/experimental/xorg-server-source [1] https://archlinux.org/packages/extra/x86_64/xorg-server/


What versions of the mentioned packages is this feature included in? I.e. what should I be looking for in my distro updates?


Touchpad gesture implementation in X server has been released in version 21.1

Wayland touchpad gesture implementation in Qt widget framework has been released in version 6.2.0

X11 touchpad gesture implementation in Gtk widget framework has been released in version 4.5.0

X11 touchpad gesture implementation in Qt widget framework will be released in version 6.3.0 (March 2022)

Touchpad gesture implementation for XWayland will be released in version 22.1 (early 2022)


I'm curious - what is needed for LibreOffice to work?


It uses a custom widget toolkit. Adding touchpad gesture support is certainly doable, but it would benefit only a single application, so we haven't prioritized that so far.


I work on the VCL component of LibreOffice. Where are the technical docs on how gestures work?

