
> there is little reason to run native apps,

There are plenty. I like both worlds. Telegram is a good example of an app that doesn't need a desktop client and lives fine in the browser (web.telegram.org): multiple versions, regular updates, platform independent. On the other side there is Signal, which forces you to use a very shitty desktop app (or maybe I just haven't found a better one yet). It just sucks.

On Linux I have no issues installing "native" apps whatsoever. My editor (Emacs), CAD software, music player (!) - sure, Spotify works, but I like my network-transparent MPD way more. I could go a lot further.

I am curious about (cloud) gaming, since I was actually very surprised by how well it can work.

Edit: Why is this downvoted? What am I doing wrong?




> On Linux I have no issues installing "native" apps whatsoever

You should. Linux provides pretty much zero protection for your data. Any app you install can spy on the data of any other app you're using, and all your personal files.

Other OSes are slowly introducing some limitations and protections here, but Linux is really not doing much at all.


Well, there are AppArmor and SELinux, which AFAIU can fence in individual apps. It's just not exactly trivial to set them up without breaking the app.


On Linux there are Flatpak and Wayland, which aim to introduce more protection.


Not meaningfully. If you have to install an app, only putting it in a VM can ever help much.

If you run X (except on Qubes!) any program can see everything every other X program is doing -- all keystrokes, all mouse clicks, all pixels.
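You can see this for yourself from any unprivileged account (a rough sketch; assumes the `xinput` and ImageMagick `import` tools are installed, nothing distro-specific):

    # Raw input events for the whole session, delivered even while you type
    # into some other application's window:
    xinput list
    xinput test-xi2 --root        # prints RawKeyPress / RawKeyRelease for every keystroke

    # Any ordinary X client can also grab every pixel on the screen:
    import -window root /tmp/everything.png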


This is not true. You can write AppArmor rules which can restrict pretty much everything. SELinux is also a thing, and introduces a lot of features that you won't find in Windows, for example.

> X program is doing -- all keystrokes, all mouse clicks, all pixels.

Parent was mentioning Wayland specifically to remove this threat.


On Linux, just run the app as a separate user. Linux is a multi-tenant OS, so users are well protected from each other. If you need to share files with the program, share them via a shared folder, e.g. `/tmp`.
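A minimal sketch of that approach, with a made-up `untrusted` account and app name (on X you also have to grant the second user access to your display, which is exactly the weakness discussed further down):

    # One-time setup: dedicated account plus a shared drop folder.
    sudo useradd --create-home untrusted
    mkdir -p /tmp/shared && chmod 1777 /tmp/shared

    # Let that user talk to your X display, then run the app as them.
    xhost +si:localuser:untrusted
    sudo -u untrusted -i someapp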


Doesn't sound very practical, especially for graphical programs.


Security is a compromise. You can share your whole home directory if you trust the software, but that makes any kind of protection useless. Or you can write a helper tool that grants access to a single selected file via a hard link (an alias for the file's content), or synchronize files between directories, or mount a shared directory into both the container and your home directory, or use SELinux to grant access to a selected directory only. Choose your own compromise.
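For instance, the hard-link variant could look roughly like this (the `/srv/drop` path and `untrusted` account are made up; hard links only work within a single filesystem):

    # Expose exactly one file instead of your whole home directory.
    sudo install -d -o "$USER" -g untrusted -m 0750 /srv/drop
    ln ~/Documents/report.odt /srv/drop/report.odt   # same inode, so both sides see edits
    chmod g+rw /srv/drop/report.odt                  # the inode's mode decides what 'untrusted' may do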


But it doesn't have to be such a shitty compromise. Most desktop applications could be built so that you get both the security and the convenience.


Separate user for untrusted apps is the proven way to do that in UNIX. :-/


That can happen under the hood, but the user should not have to deal with it.


This is how Android does security AFAIK — every app is run as a different user.


If they connect to the same X display, it doesn't matter what user they run as.


So give each app a separate X server (Xephyr, Xnest, Xvnc) for increased security. They will be isolated from the clipboard, window titles, and broadcast key events.
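Roughly like this (display number and app name are arbitrary; needs the Xephyr package):

    # Nested X server in its own window; clients inside it can't see the
    # outer display's clipboard, windows, or key events.
    Xephyr :2 -screen 1280x800 &

    # Point the untrusted program at the nested display only.
    DISPLAY=:2 someapp &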


Yes, but in my experience at least, Firefox is not usable running under Xephyr. It's simply too slow for regular web browsing; forget about trying to watch any kind of video.

In theory, the X11 Security Extension would seem to provide a middle ground. On the plus side, I don't notice any performance impact when running Firefox as an untrusted client. However, most programs aren't coded correctly to coexist with it. For example, Firefox crashes regularly when running as such (via SIGSEGV no less, which is its own yikes). Not only that but many programs that are themselves trusted (i.e. normal/default X11 clients) will misbehave if they are simply near an untrusted client: LibreOffice Calc, for example, will lock up hard if the untrusted clipboard is in use.
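For reference, this is roughly how an untrusted client gets launched (cookie file name and timeout are arbitrary):

    # Generate an authorization cookie marked "untrusted", valid for one hour.
    xauth -f /tmp/untrusted.xauth generate :0 . untrusted timeout 3600

    # Start the client with only that cookie; the SECURITY extension then keeps it
    # from snooping on trusted clients' input, windows, and selections.
    XAUTHORITY=/tmp/untrusted.xauth firefox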


That is nowhere near practical.


All system daemons run this way. It's the standard way to isolate networking services on UNIX.


And no user-facing UI apps do this or are built for it, and no desktop environment has any kind of support for doing it.

It just doesn't work in practice.


Desktop environments have nothing to do with that: it's the job of the distribution. `gksu`, for example, was popular for running graphical apps as root or another user, until it was removed from distributions because they don't want to allow users to run untrusted apps as root.


You can run each program in full screen on a separate instance of Xephyr or Xnest, so the program will have a whole separate X server to play with in isolation. Good for running a Raspberry Pi desktop via `ssh -X` or `ssh -Y`.


Linux distributions are FOSS. They provide as much protection as I need. I can run untrusted applications in a container, if I WANT that level of protection. For FOSS software, I don't want it.


The standard (and ubiquitous) way on UNIX/Linux is to run each program as a different dedicated user. Complete separation.


Even if you do that, they all connect to the same X server.

There's the X11 security extension, which offers the concept of "untrusted" clients, but many programs won't work with it. For example, Firefox segfaults regularly if run as an untrusted client.


The Telegram desktop app is awesome. For me it is a great example of a native app. I had zero troubles with it. It is fast, it doesn't consume a lot of RAM, and no Electron is bundled. I am glad to see someone is still developing native apps.


It's built with Qt :)


> The Telegram desktop app is awesome.

I think I tried it at one point and didn't dislike it. I multiboot Linux and Windows, and from day one it felt very comfortable to have a sticky Telegram tab in my eternal browser session on both OSes that behaves the same.


> On the other side there is Signal, which forces you to use a very shitty desktop app (or maybe I just haven't found a better one yet). It just sucks.

Usability is often the enemy of security. Signal is fully E2EE, including metadata. Using it in a browser would compromise security in many ways by sharing keys that were originally meant for a single sender and receiver (e.g. malicious browser extensions could access the data).

Signal has chosen to implement only their own desktop app. And as their server side is kinda closed and not self-hostable, it is unlikely that we'll see other clients for a while.


I don't see what the difference is in security architecture between a desktop app and a web app implementing the same scheme?

My assumption is that you've got an E2EE link between the Signal app on the phone and the desktop app (with the messages decrypted on your phone in the middle). Why can't you do exactly the same thing with a web app?


> I don't see what the difference is in security architecture between a desktop app and a web app implementing the same scheme?

I just gave an example: the execution environment is accessible to browser extensions. All the code and runtime data is visible to them, given certain permissions.


How is that different from the desktop app? All the code and runtime data is visible to anyone running under the same uid, or the superuser.


No, they aren't visible by default. The kernel isolates the memory space of every process. That's one of the basic fundamentals.

Browser extensions can additionally modify code on the fly. On the desktop that is really difficult, since you would need to inject code into memory.

You are doing something wrong if you run your app as the superuser or grant too many permissions by default.


That is false: you can read the raw memory of any other process running under your UID via /proc/<PID>/mem.

You can also ptrace the process and completely control its execution.
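Concretely, on a setup without the Yama restriction discussed below, something like this works against any process owned by the same user (`<PID>` is a placeholder):

    # Attach to the process and take control of it.
    gdb --batch -p <PID> -ex 'info registers'

    # Dump its entire address space to a core file (gcore ships with gdb).
    gcore -o /tmp/dump <PID>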


Reading /proc/<PID>/mem requires the PTRACE_MODE_ATTACH_FSCREDS access mode, which is fundamentally the same as access via ptrace. Thus, the kernel is indeed isolating process memory by default.

It is true that you often get PTRACE_MODE_ATTACH_FSCREDS with the same UID/GID, but most production systems have ptrace disabled, or there are extra AppArmor rules to prevent its use. In most cases it is recommended to disable it.

For example, the latest Ubuntu by default only allows ptracing your own child processes (https://wiki.ubuntu.com/Security/Features#ptrace).

You can also set up your apps so that they can't be ptraced; for example, ssh-agent does this with the PR_SET_DUMPABLE attribute.


Thanks for the details. I didn't know Ubuntu was restricting ptrace by default now. Now I need to figure out how to do that on my Debian system -- it definitely allows me to gdb attach to an unrelated process presently.

Even that protection doesn't seem to make it safe to run untrusted programs under the same UID, though? If nothing else, there's always the classic "modify user's rc files to put my malicious program first in $PATH." Similarly you could modify them to increase the core file size rlimit, then send SIGSEGV to the process later and collect the core file.


> Now I need to figure out how to do that on my Debian system -- it definitely allows me to gdb attach to an unrelated process presently.

You could try to set it up similarly to what Ubuntu is doing. See the Yama kernel module [1], and set mode 1 (restricted).

> Even that protection doesn't seem to make it safe to run untrusted programs under the same UID, though? If nothing else, there's always the classic "modify user's rc files to put my malicious program first in $PATH." Similarly you could modify them to increase the core file size rlimit, then send SIGSEGV to the process later and collect the core file.

AppArmor [2] is useful for this: you can define a profile for the untrusted app so that it cannot access any files other than the ones you allow (a sketch follows below the links).

[1]: https://www.kernel.org/doc/html/latest/admin-guide/LSM/Yama....

[2]: https://www.kernel.org/doc/html/latest/admin-guide/LSM/appar...
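A sketch of both suggestions (the profile path, binary name, and data directory are made up):

    # Yama mode 1: only child processes may be ptraced. Ubuntu ships this default
    # in /etc/sysctl.d/10-ptrace.conf; on Debian you can add it yourself.
    echo 'kernel.yama.ptrace_scope = 1' | sudo tee /etc/sysctl.d/10-ptrace.conf
    sudo sysctl -p /etc/sysctl.d/10-ptrace.conf

    # Minimal AppArmor profile confining a hypothetical /usr/bin/untrustedapp
    # to a single data directory; everything else is denied.
    sudo tee /etc/apparmor.d/usr.bin.untrustedapp >/dev/null <<'EOF'
    #include <tunables/global>
    /usr/bin/untrustedapp {
      #include <abstractions/base>
      /usr/bin/untrustedapp mr,
      owner @{HOME}/untrusted-data/ rw,
      owner @{HOME}/untrusted-data/** rw,
    }
    EOF
    sudo apparmor_parser -r /etc/apparmor.d/usr.bin.untrustedapp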


I've been using GeForce Now quite a bit to play games that do not run on my laptop. The required internet speed is probably out of reach for most non-urbanites at the moment, but the concept does not really require a native app, since it is basically video streaming with interactivity, and could probably easily move to the browser in the future.

Cloud gaming is one of the technologies I'm quite unsure about, it could become the de-facto standard in the coming decade, or it could remain a niche, all depending on consumer preferences and network infrastructure.


GeForce Now already works in browsers. It requires Chrome or iOS Safari (I suspect the latter was the driving force behind it, due to Apple's App Store rules). Just go to https://play.geforcenow.com/ and it shows you what to do...


In my experience it works for mostly static games - most others are not fun due to the added latency. If you're close enough to the server and not too sensitive, it might be enjoyable, but some genres are just unplayable IMO (e.g. FPS, roguelikes, racing games).


For some games it might be reasonable, but the ones I tried with my 100/100 Mbit connection are miles away from the native experience.



