
I feel like the NT kernel is in maintenance-only mode and will eventually be replaced by the Linux kernel. I submitted a Windows kernel bug to Microsoft a few years ago, and even though they acknowledged the bug, the issue was closed as a "won't fix" because fixing it would require backwards-incompatible changes.

Windows currently has a significant scaling issue because of its Processor Groups design, which is really an ugly hack that was added in Windows 7 to support more than 64 hardware threads. Everyone makes bad decisions when developing a kernel; the difference between the Windows NT kernel and the Linux kernel is that fundamental design flaws tend to eventually get fixed in the Linux kernel, while they rarely get fixed in the NT kernel.


The NT kernel still gets improvements. Think IoRing (a copy of io_uring, but for file reads only), which is a new feature.

I think things like Credential Guard and the various virtualization features (security-related, not VM-related) are also relatively new kernel-integrated additions.

Kernel bugs that need to exist because of backwards compat are going to continue to exist, since backwards compat is a design goal of Windows.


I have the same feeling

Windows is more and more based on virtualization

On the other hand, more and more Microsoft stuff is Linux-native.

It would not surprise me if, somewhere deep down, Linux runs under every Windows within the next few decades.

More hybridizations are probably coming, but where will it stop? And why?


I think rumours of NT's terminal illness have been greatly exaggerated. I keep hearing about new developments in it, like the adoption of RCU and memory partitions.

It's not clear to me how processor groups inhibit scaling. In a lot of cases it's even sensible to keep threads from moving willy-nilly between cores (because of NUMA locality, caches, etc.). And it looks like there's an option to not confine your program to a single processor group, too.


Running all NT applications in a virtualization layer on top of the Linux kernel would surely impose a performance penalty, and for what, so that someone can run high-performance Linux applications on Windows? It's a bewildering line of reasoning, to be sure.


Microsoft's %AppData% directory is a security nightmare in my opinion. Ideally, applications should only have access to their own directories in %AppData% by default. I recently came across a Python script on GitHub that decrypts the passwords browsers store locally in their %AppData% directories. Many attacks could be prevented if access to %AppData% were more restricted.

I also found a post from an admin a few days ago asking whether there was a Windows setting to disallow all access to %AppData%. The response was that if access to %AppData% is completely blocked, Windows won't work anymore.


"AppData" is where user specific application data is supposed to be stored.

"The Registry" is where application configuration is supposed to be stored.

"ProgramData" is where application specific data is supposed to be stored.

"Program Files" is where read-only application binaries and code is supposed to be stored.

It really is a simple concept from a Windows perspective. What ruins everything is overzealous and/or ignorant programmers who don't take any pride in their work, or lack all respect for the user's environment. For example, an .ini file should not be a thing on Windows; that is what the registry is for. But the programmer writes the code for Linux, half-ass ports it to Windows, and leaves the .ini file because his code is more important to him than the end-user's operating system.

There is nothing wrong with AppData permissions. The problem is with the user's understanding of what it is for, and the developer's understanding of how it should be used.


As a Windows sysadmin, I've found AppData to be an unmitigated shit show forever.

Developers (including those inside Microsoft) don't give a damn about how Microsoft intends anything to work, and AppData has become a dumping ground of software installs to end-run IT departments. A lot of malware dumps into there but good luck limiting execution from that directory hierarchy because all your business-critical end user communication apps live there now too.

The functionality of roaming users profiles (i.e. registry settings "following" you to a different computer, which gives a really slick user experience when it works) was completely ruined by devs dumping piles of small files into "AppData\Roaming" (and completely not understanding that "AppData\Local" even exists, let alone what it's for).

In Windows 2000-land you could redirect AppData to a UNC path and mostly get around this behavior. That's not really "a thing" anymore because you've got apps like Microsoft Teams storing sizable databases in these locations and getting really, really cranky if network connectivity is interrupted.

Windows development betrays its legacy DOS parentage, even for devs who never lived through that era. There were no rules. There was no adult supervision. There was poor documentation of APIs, so you just hacked something together that worked well enough. Periodically Microsoft tries to start over (all the APIs w/ "2" at the end, et al.) and the cycle repeats.


> was completely ruined by devs dumping piles of small files into "AppData\Roaming" (and completely not understanding that "AppData\Local" even exists, let alone what it's for)

As someone who only occasionally uses Windows, I think `%AppData%` sending you to `~\AppData\Roaming` doesn't help.


And conversely, on Linux it's so hard to get every shitty tool to put its files in the XDG dirs instead of spewing them all over ~.


A maintainer can patch the package to store its configuration files in an XDG directory, then upstream the patch. If none of the maintainers has done this, then the problem is deeper.


https://wiki.archlinux.org/title/XDG_Base_Directory#Hardcode... Upstreams all too often reject such patches or let them languish.


>an .ini file should not be a thing in Windows

Hard, hard disagree there. Having config files available is vastly preferable to using the unmitigated shitshow that is the Windows registry. A config file at least gives users a prayer of being able to provide some sort of troubleshooting information, and provides savvy users with a way to actually solve problems on their own.

>half ass ports it to windows

Redmond themselves do all sorts of seemingly 'wrong' things with their directory structure, which tells me the 'free for all' nature of it is intentional, and not wrong at all. It is a terrible structure, and it does cause problems, but those are the conditions you work under while using Windows. It's mostly OK in practice, but as Bitwarden found out, there are conditions developers have to account for if you require security and safety.

And factually, your presumed solution of "put things in the right place" is doubly broken, because if one acquires the correct privileges, there is no location on a Windows machine where cleartext data is safe. The solution is not "store it in the correct location"; the solution is to encrypt sensitive data at rest, regardless of location, which is more or less what Bitwarden did. That's the correct strategy, and it's operating-system agnostic.


Agreed. The Windows registry needs to be killed with fire.

There's no appreciable difference between the registry and a directory of config files, except that instead of an INI parser you have to use the much, much worse Win32 API.

Editing config files is fairly safe and user-intuitive. Sure you can break something by writing the wrong config file, but you do not risk breaking everything. But clumsy use of regedit does have a chance of totally borking the entire system.

And then you have maniacs who store user data in the registry. I know of at least one game which stores save files in the registry.

I get the intention of the registry, but it's just not fit for purpose. Maybe it was better back in the 90s, but it's just a hellscape now.


There are real integration challenges with the "simple file approach":

  - File locking and concurrency
  - Atomic writes / moves
  - Realtime change observations
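
For instance, the usual plain-file workaround for the atomic-write problem is the write-to-temp-then-rename dance; a hedged sketch (save_config is just an illustrative name, error handling trimmed):

  #include <stdio.h>

  /* Atomically replace a config file by writing a temp file and renaming it.
     rename() replaces the target atomically on POSIX; on Windows you would
     use MoveFileEx with MOVEFILE_REPLACE_EXISTING instead. */
  int save_config(const char *path, const char *contents) {
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);
    FILE *f = fopen(tmp, "w");
    if (!f) return -1;
    fputs(contents, f);
    if (fclose(f) != 0) return -1;  /* make sure the data reached the OS */
    return rename(tmp, path);
  }
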
> clumsy use of regedit does have a chance of totally borking the entire system.

So does a clumsy rm -rf, which shows up in stories here far more often than stories of people breaking their registry.

Can you provide a recent reference to someone bricking their system with regedit?


I think you could even make the argument that nobody breaks their registry because nobody wants to mess with something so user-unfriendly. Even the developers making applications tend to stick all their config in .ini files, because files are easier for everyone to work with.


Then use SQLite.


That solves 1 out of the 3 issues... But at that point, why bother? The registry is a database already.


But it's so easy to export all my PuTTY profiles from the SimonTatham registry folder to a .reg file and use on the next computer...


The point is that, nowadays, apps should by default be isolated from each other, rather than AppData and HKCU being a free-for-all.

Windows makes it hard to whitelist known-safe apps (there’s WDAC but it’s poorly documented and a PITA) and every program you run has access to everything of importance on your system.

Imagine how upset people would be if it turned out TikTok on your phone can access your entire iCloud Drive and Keychain. Yet we accept this security model on our desktops.


We accept it on the desktop because the desktop app model predates the internet. There were only 'trusted' applications that had access to all the user's data (and really, most of the time, the entire machine), and there wasn't even the idea of a built-in internet connection at all. In addition, desktop applications are built around the ability to read the user's data files. Desktop users typically want all their Excel files accessible, along with any embedded images, from anywhere in their user directory.

For the most part, the changes you'd want to implement for security would ruin the productivity of most of the workflows desktop users have these days, and would take a massive amount of refactoring to get anywhere close to what they do now.


There’s a difference between reading user data (i.e. “My Documents”) and reading other apps’ application data (e.g. Firefox’s cookie jar).

macOS has started disallowing the latter (i.e. restricting access to other sandboxed apps’ files from both sandboxed and unsandboxed apps) more than a decade after the OS was introduced, yet I don’t feel like my productivity has been ruined.


Older desktop apps also tended to be more trustworthy.

There's so much commodity garbage out there now (e.g. I find it near impossible to find quality ad-free apps on Google Play).


Microsoft themselves don't understand that. Teams installs itself to AppData in its entirety, one full install of Teams for each user profile. Keeping it updated across one machine is impossible. How can we expect anyone else to do it right when Microsoft allows its own employees to abuse it?


The original Teams was an Electron app and was stuck with Google's methods.

The new Teams is based on WebView2 and runs from C:\Program Files\WindowsApps\


Teams was kept in AppData, like Chrome, so that these programs can update themselves without admin privileges, and I suppose that is how they keep users on a recent version.


Except it completely backfires when you have a workstation with multiple users who use it infrequently. Existing patching solutions, like Microsoft's own System Center, have a hard time coping with applications that live in AppData. So you end up with 8 instances of Teams on a system, 6 of which are months out of date.


I just hope they kill Teams with fire. It is hot garbage.


So which category do stored browser passwords fall into? Because it sounds like "user specific application data", which is in AppData, which is the issue. But if that's not correct, which of those locations is?


It should be in AppData. GP is just a really weird unrelated rant.

GGP: unsandboxed AppData (unsandboxed filesystem in general, really) allowing everyone to read everyone else’s stuff is a security nightmare.

GP: stupid programmers don’t respect Windows’ simple scheme to place data in four different places!

What? Even if everyone places data correctly, they can still read everyone else’s stuff, as long as they belong to the same user. That’s the problem.


They're tied to the user's encrypted credentials. https://support.microsoft.com/en-us/windows/accessing-creden....


"programmers won't use our poorly designed system therefore the programmers are wrong"

Windows registry is in itself insecure. Applications can't own perms to their own entries.

Look at what people are using and optimize for that. Clearly the intended system is wrong, and ego death is necessary to create real fixes.

The easy and expected fix being that applications get perms for their own folder, rejecting 3rd party by default.

The proper larger solution being open code signing. But MS and friends are making big cash so they don't care.


> Windows registry is in itself insecure. Applications can't own perms to their own entries.

I think registry entries support DACLs, and permissions can be restricted to SIDs or user accounts. I have no first-hand experience with this though; YMMV.
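
For what it's worth, here is an untested sketch of how that might look with the Win32 ACL APIs (the account name and access mask are purely illustrative):

  #include <windows.h>
  #include <aclapi.h>

  /* Replace a registry key's DACL so that only the named account gets read
     access. Untested sketch; real code should check every return value. */
  void restrict_key(HKEY key, wchar_t *account) {
    EXPLICIT_ACCESS_W ea = {0};
    ea.grfAccessPermissions = KEY_READ;
    ea.grfAccessMode = SET_ACCESS;
    ea.grfInheritance = NO_INHERITANCE;
    ea.Trustee.TrusteeForm = TRUSTEE_IS_NAME;
    ea.Trustee.ptstrName = account;
    PACL dacl = NULL;
    if (SetEntriesInAclW(1, &ea, NULL, &dacl) == ERROR_SUCCESS) {
      SetSecurityInfo((HANDLE)key, SE_REGISTRY_KEY,
                      DACL_SECURITY_INFORMATION, NULL, NULL, dacl, NULL);
      LocalFree(dacl);
    }
  }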

> The easy and expected fix being that applications get perms for their own folder, rejecting 3rd party by default.

Back in Windows 8, they launched an app model called UWP or something which does exactly this. It met with a lukewarm reception from the industry because (you guessed it!) back compat.


UWP wasn't just a lack of back compat; it enforced things like apps sleeping on minimize, which is nuts. This was an attempt to make Windows a universal OS that's tablet- and phone-worthy.


They absolutely support DACLs. For the longest time I prohibited my own user account from modifying a certain registry key, to prevent Dropbox from constantly reinstalling unwanted green checkmark overlays.


Restricting to user accounts is useless. Malware runs as your user.

https://xkcd.com/1200/


> "AppData" is where user specific application data is supposed to be stored.

> "ProgramData" is where application specific data is supposed to be stored.

Simple maybe. Coherent, no.


It was confusing for me too.

Basically:

- AppData = user (interactive) application storage
- ProgramData = service / background (non-interactive) application storage


It's not really any worse than the *nix mess of /bin, /usr, /usr/bin, /usr/local/bin and /opt ... and probably a couple of others I missed.


I'm not so sure I completely agree with you about .ini files. I rather miss them. Some people have regarded the registry as a mistake, or at least an over-reach. I like the ability to edit .ini files and make them understandable.

Maybe the compromise solution is to put the user-relevant portion of the .ini file in %AppData%.


Please don't use INI files. The registry is infinitely more manageable for sysadmins than INI files. I hate it when your app makes me write scripts to manage settings versus just using the built-in tooling in Group Policy for dealing with the registry. (Yes, yes-- there is tooling in Group Policy Preferences for dealing with INI files. It fails spectacularly on malformed INI files. It has never been reliable in my experience.)

The idea of a centralized, programmatically accessible configuration store was a good idea (albeit this isn't what the registry was "for" originally; it was just a file-type registry at first). GConf was a similar idea.

Devs misusing the registry to store opaque binary values (especially gigantic ones), accessing it with too high a velocity, and having a less-than-stellar file format have hurt it, for sure. Having few good schema rules or APIs that limited arbitrary developer access didn't help either.


Okay, so that's the sysadmin perspective. Tell me about the user perspective.

Then, we should talk about, when they are in conflict, which one comes first.


A dev is going to include UI to manage the settings if non-technical users are expected to modify them. Whether those settings go in an INI or the registry doesn't matter at all for that UI.

Having said that, the level of technical skill required to edit an INI file or the registry is about the same. Either way you're talking about a non-technical user descending through a hierarchy of strangely named containers to get to an arcane-looking location where settings are saved.

The user is going to call me when they have problems. It's easier for everybody if I can just administer the software centrally so they don't have problems to begin with.


How is the registry going to make that administration any easier? The registry is its own microcosm; it doesn't matter whether some setting is in an INI file somewhere on the filesystem or somewhere in the registry.


Sysadmins have great tooling to deal with the registry (Group Policy, Local Group Policy for non-domain machines). The tooling for INI files isn't very good.


I don't know one sysadmin that likes how the registry does things. INI files for configuration are vastly easier to understand and edit. Use the registry for permissions and keep your tooling.


You are speaking orthogonally to the topic you replied to. The parent wants sandboxing between different programs so that one cannot read another’s data without explicit configuration and consent.


But why split up the application like that? Why not have a folder for each application that just contains everything?

Everything being in one place by default also means that a user can just copy the entire application folder as a backup.


Because it'd be really nice to have a place with just application data to back up: no configs, no application state. Or alternatively, it's nice to have a place to just factory-reset all your app config but keep all the data.


But would you trust that you would actually get all the app's configs? I.e. window position and all that.

And would you trust that you would only affect that one application and not any others?

And wouldn't you just uninstall and reinstall the application anyway?


Making it easier / less work for devs to do the right thing doesn't seem like an inappropriate request. If users are misusing your system, there are other solutions than RTFM.


Thank you! And Program Files is for 64-bit Windows apps, Program Files (x86) is for 32-bit apps, but vendors use both interchangeably and sometimes use both for the same app!


Don't forget about WOW64 and redirection ;-).


You forgot about "my documents", which is of course a great catch-all location for all four types of data you mentioned.


I actively avoid that dumpster fire. None of my actual documents live in any portion of My Documents.


I agree in a perfect world, but I believe the OS should have a design that “forces” the programmer to maintain the correct abstraction.

Or at least have the override for such abstractions be blatant and explicit if the programmer wants to circumvent them.

And of course, given the age of the Windows OS/ecosystem, it’s a pipe dream to have a redesign that isn’t backwards compatible.


How do you tell the program what belongs where? How does the OS know that the application is reading a file full of configuration entries that should be in the registry? What is the difference between reading a file full of data and reading a file containing your own configuration?

How does the OS know that the file you're writing to belongs in AppData or not?

To create the system calls for this, you would break everything about Windows file permissions. Currently, you interact as a user account. In order to accomplish the real-time heuristics you're proposing, you would also need an application account in addition to the user's account.

At what point does the responsibility for knowing how to code fall on the programmer? How much capability are you willing to take away from effective programmers, to artificially protect the ineffective ones from themselves?


>What ruins everything is overzealous and/or ignorant programmers who don't take any pride in their work

uh you mean overzealous product managers and business owners who never let programmers take their time on anything because quality doesn't matter?

why would I take pride in my employer's property? lol if the code he buys from me is bad, that's his problem, especially since I have to stick to his timelines and am not given sufficient equity and agency to feel ownership over the project.

you know what makes programmers lose their desire to take pride in their work? getting blamed when we're ordered to cut corners, or implement bad designs. fuck right off with that, we're not the ones in power.


Is this post sarcastic and I’m just missing it?

4 different locations to store program data, some of which are hidden, is freaking stupid design. Like, beyond moronic design.

Everything, and I mean everything, about a program should be in a single folder structure, and the OS should by default lock that application to only accessing its own folder unless otherwise granted permission (in a centrally auditable/revocable location).

Applications/ExampleApp/

Should contain everything, and deleting it there should clean it as if it was never installed. If it needs to access something in documents/desktop/etc, the OS should ideally present a file picker to pass in a copy, but applications could request access to a specific path if absolutely necessary. You should also be able to “save to desktop” without the application having read/write access to the desktop/documents.

“Exporting” is the application taking the local copy nested in Applications/ExampleApp/ and passing it to a system save dialog; the OS can then store the file (therefore having permissions) wherever the user wishes, in a context menu that’s outside the application’s control (it’s the OS).

The idea that every installed application has wide-open filesystem access to say, all my documents, by default is pure insanity.


That makes managing a user's application-specific data difficult, though. For one, you have different users' data intermingling, which potentially causes new problems. But on top of that, you make managing and backing up that data more difficult. As it works now with AppData, you can back up a user's profile folder under C:\Users and get everything they have, assuming they haven't gone out of their way to save data to a strange place. If all data for an app lived in Program Files, then backing up and restoring that data would become much harder.


Ideally a new instance of the application is installed for each user. This also provides better isolation if one user upgrades/removes/breaks their application instance. I, for one, have really come around to the AppImage model [0] in the last couple of years.

[0] https://appimage.org/


I don't like the solution being to just make containers out of everything. That introduces its own problems and lets developers be lazy in other ways.


I guess the OS keeping track of .../programs/NameOfProgram/user settings/NameOfUser is just impossible? Or having an app install create a link in /users/NameOfUser/program-config/NameOfProgram to the config folder is equally impossible magic ...?


That's asking a lot of Windows. But as a former sysadmin, that sounds like it would make things harder to manage. So it's linked, but it's not really there, and existing user-data backup automation wouldn't catch it. Sorry, your Outlook PSTs are gone. User data should live with users. The problem isn't with that paradigm; it's that it's abused and wide open.


Can't blame the programmer for that; Windows shouldn't allow the programmer to do stupid shit.


The sheer volume of legacy software prevents this from being realistic. Microsoft's commitment to backwards compatibility has reaped rewards for them. Any restrictions would have a user-controllable toggle.

If APIs prevent programmers from stupid shit the devs would encourage the end users to blame Windows and, more than likely, turn off the restrictions. (Case in point: User Account Control and making users non-Administrator by default. I've dealt with so much shitty software that opens its install instructions up w/ "Disable UAC and make sure the user has admin rights.")

There has to be a point you draw the line and say "Dev, grow up and learn about the platform you're using." An app that required users to be root on a Linux machine wouldn't survive community outrage. Windows doesn't have that kind of community. (Try arguing with a vendor about idiot practices in their app and watch their sales gerbil attempt to end-run you to your manager...)


My Steam Microsoft Flight Sim requires admin rights, so clearly this is a lost battle. We just need to have containers for every app.


We may just get that. Microsoft's attempt to introduce sandboxing with UWP/MSIX was ignored by developers. Since then, MS has added Windows Sandbox to Win 10 Pro and up, essentially disposable VMs for running sketchy software. I wouldn't be surprised if a couple of versions down the line we get the option for more permanent app-specific VMs, with integration into the window manager similar to QubesOS. A lot of the groundwork for that already exists for WSL2, like more efficient memory use between VMs and shared GPU access.


What if Microsoft limited these APIs to programs with "Compatibility Mode" enabled? (And—this may already be the case, I'm not sure—made it impossible to enable compatibility mode programmatically?)

I feel like this would create a strong incentive for modern software to do things "properly", while still allowing legacy software to run (albeit with a couple of extra clicks).


Look at how long we've been dealing with software that requires Java 6/7/8, and all the security issues that come with it. Servers/appliances with IPMI remote consoles that do not support HTML5. It's easy to say "replace the equipment", but our budgets don't always allow for that.


I think Microsoft's commitment to backwards compatibility is awesome. But it would still be better to at least get newer apps working the right way. Even in the event those legacy apps remain in use for ~forever, at least there would be fewer of them.


See, I disagree with that. The computer is an arbitrary command execution machine. It does what you tell it to do. Don't tell the computer to do stupid shit and it won't. There are plenty of valid use cases where you want to use the capability of the computer without some arbitrary OS policy preventing you from doing it "because some programmers are irresponsible."


In a world of various medium-trusted apps that I don’t love but still have to use to get my job (or a bank transfer etc. done), that model doesn’t really work for me anymore.

Users aren’t “telling the computer what to do” anymore for the most part, third party app developers are; this puts a lot of responsibility on the OS for protecting the interests of its user against that of a malicious or careless app developer.

Of course I want to be able to fine-tune that protection, but restrictive defaults make sense.


I don't think this is fair. Linux and Mac used to operate in generally the same fashion. Only recently have they started sandboxing stuff.

Windows doesn't have the same luxury, because they are forced to maintain backwards compatibility.


It's just as bad there with everyone randomly shoving dot-files in my home directory instead of using ~/.config, ~/.local, ~/.cache, and friends.

Just to name a few in my home dir ... aws, cargo, dotnet, yarn, vscode...

All of these narcissistic tools are pretty annoying.


40% of those tools are majority controlled by Microsoft...


No way is that simple.

Your rules state that "application specific data" does not reside in AppData, even though those exact words are in the name. It's the opposite of self-documenting.


> It really is a simple concept

At first I thought you meant that sarcastically.

Microsoft got overzealous showing off their long file names back when that capability was introduced to their filesystem, and any sense of organization in the OS fell apart after that.

I actually miss .ini files. It was nice being able to keep your software's data alongside it (in a simple folder like C:\Programs\3DS), and it made it easier to clean up remnants. I understand what drove the design, but a more sparing and opinionated approach could have produced a much more elegant outcome.

Incidentally, even Microsoft software is wildly inconsistent in how it uses the registry.


Makes /bin/, /usr/bin/ and /opt/ seem simple


Microsoft is trying to do that with MSIX and a new filesystem driver that transparently restricts file system access per app. It should land in Windows 11 this year. See https://youtu.be/8T6ClX-y2AE for an explanation of the functionality.


The MSIX story is really weird/incomplete so far. Let me just leave it at: creating services is part of MSIX on Windows 11, but not Windows Server. Maybe it will be more than a toy in a few years, but we'll still have to wait for old server versions to get replaced.


And App Store-distributed applications, which are isolated by default, lose some features too (custom shortcuts, for example).


>python script on GitHub that allows to decrypt passwords the browser stores locally in their %Appdata% directory.

Yes, otherwise known as "if you run code on your computer, it can run code on your computer".

If a random Python program can "decrypt" the passwords, that's not encryption. And browser password management isn't about security, but convenience.


>if you run code on your computer, it can run code on your computer

For the love of God, will someone please just make a web browser that isn't a web browser: just a cross-platform multimedia sandbox with a couple of APIs in it, where you can run programs written in Rust or something, and that doesn't let programs touch your file system without explicit permission? That would solve 99% of the application use cases. That's literally everything I want. I want the safety of the browser, outside the hell that is web development.


The JVM did that many years ago and nobody liked it. I can't help but think wasm is just the same idea but worse.


Outside of web applets, set-top boxes, and DVD players, JVM didn't really do much sandboxing. On the desktop or server, it did practically none.


I think the rest of your sentence was "by default", which is the same thing the comment you're replying to said: "security gets in the way of everything".

One could always launch any java process with java -Djava.security.manager -Djava.security.policy=someURL and it would sandbox a huge number of things (see: https://docs.oracle.com/en/java/javase/17/security/permissio... )

The problem is that defining a reasonable policy for any modern app is a gargantuan pain -- as is the case with any security policy language -- so as the GP said people hated it and now it's dead https://openjdk.org/jeps/411


I think a key part of solving that is by not thinking of it as a set of security enforcement rules on top of the preexisting platform, but as a new platform (that just runs everywhere). So, instead of ACL listing what files can be accessed, shove it in a sandbox where the app has its own files, and the platform open file dialog enables the user to authorize one-time access to individual files.

You basically can't take a complex thing and write complex security rules for it and expect success & real world adoption.


That's pretty much Wasm with WASI (minus multimedia right now, though).


It's called iOS. Browsers are also NOT safe. You know what was safe? Not letting random endpoints ship you code to run. HTML was safe, though implementations at the time likely had security flaws.

You cannot make a Turing-complete language that JIT compiles into machine code and verify it as "safe". Machine code is not safe, so anything that lets you generate arbitrary machine code cannot be proven to be safe. If you take away arbitrary machine code generation from JavaScript, it's too slow to run the modern web.


Then don't compile it into machine code? The problem is in application development, not low-level programming. If a random person on the internet makes an application, there's a non 0% chance it's malware if you try to run it. It shouldn't be that dangerous. It's ridiculous that it still is that dangerous after decades of desktop computing and the only way to avoid this is anti-virus heuristics.

All we want is to get rid of the possibility of an application developer including evil code.

We could have a fully interpreted language layer running on a platform that never lets application code touch the file system. How do applications do fast stuff like GUI then? You just have a package manager with libraries that can do low-level stuff but are vetted so they don't expose APIs that let application code interact with the file system. That way in order to exploit an user's computer you need to exploit a flaw in a library thousands of other programmers use instead of just importing std io.

A lot of security seems geared toward server environments where you are only dealing with code you fully trust, like the left-pad library. If bad code broke your server, you could just load a backup. But most people using computers are on their personal machines, a majority of them have no backup, and they are downloading and running random programs all the time. It makes things harder for both desktop application developers and their users if there isn't a sandboxing layer in the middle. It's probably one of the factors killing desktop apps in the first place, since most users will trust a website that is an image editor, but fewer would install an image editor, because it could contain a cryptominer, or ransomware, or a virus, or whatever.


You're skipping over a lot of pragmatic middle ground between "full hardware access" and "verifiably safe" (i.e. formally proven?) here.

An absence of Turing completeness and JIT compilation is neither necessary (see sandboxing) nor sufficient (see various exploits against media codecs, PDF parsers, etc.) to ensure safe processing of untrusted data, whether that data happens to be "actual data" or code.

You can make your own life easier or harder with your choice of sandboxing target, though: x86 Win32 binaries are probably harder to sandbox in a working and secure way than e.g. WASM/WASI.


I still can't use a password manager to keep my Apple account secure. You must memorize your password, and be able to type ... uh, I mean, draw, no, write? your password on a watch as well (if you get one of those).

iOS is not exactly safe until I can use it without knowing my Apple password.


I don't know my Apple password, it's in 1password. I don't use it on my watch though, I have a PIN there.


My watch somehow became unpaired from my phone and needs my password. I just ignore the prompt because all attempts to enter the password fail for one reason or another. Even moving my wrist too much or taking too long clears the prompt.


On a related note, I appreciate the ability to specifically disable JavaScript JIT in GrapheneOS' browser, Vanadium. Theoretically, it's a nice balance of maintaining site compatibility (as opposed to disabling JS entirely) and reducing one's attack surface.


What part of "cross platform" does iOS match?


Full unrestricted disk access for all users and code isn’t the only way an OS can be designed.


AppData is specifically where apps store data, and there are and were plenty of legitimate examples where you want some code to access data from an app in there.

The entire point is that it is not meant to be a secure location, was never meant to be a secure location, has no intended security features etc. If you store your passwords in a text file on the desktop, that is also insecure but you would be wrong to say Notepad has a security vulnerability. Similarly, if you stored your passwords in the Windows registry unencrypted, that would also be insecure, but does not demonstrate a flaw in the Windows registry.

If you want to be able to leave your secrets in the open without them being compromised, then you encrypt them.

Browser password managers are not secure. That is not Windows' fault.


Regardless, full unrestricted disk access for all users and code is insecure.


It isn't full unrestricted disk access for all users and all code. Any OTHER user, or code running with that other user's permissions, cannot access YOUR AppData directory. The AppData stuff here was the running user's AppData; the attacker already had total control of the user's machine, and in fact had control of that user's domain administrator! This attack is only possible if you have control of the user's domain administrator AND data access to the user's machine, so that you can use both the locally stored Bitwarden data AND the domain's backup decryption keys. The phone OS model wouldn't help here. The security compromise happened when the domain administrator account was breached.


I tell myself and other people: if you have a password saved in your browser, are you okay with bad people knowing it? It also makes it easy for the authorities to get to that password with a simple court order.


Most average people are not sure of password managers because the idea of losing the god password and losing access to EVERYTHING is terrifying, and there is mathematically no way to recover your secrets. Most normal people have lost a password before, so that's something they think about.

Also, for most normal people, an unencrypted note on their desktop with plaintext passwords that are DIFFERENT FOR EVERY SITE is STILL more secure than the SOP of using one strong password for everything. For that to be compromised, someone needs to be able to run code on my local machine, in which case they can just install a keylogger, so encrypted passwords are no increase in security. I genuinely don't care if App1 on my computer can fiddle with App2's bits, because I chose to run App1 and App2; they are trusted.


Passwords saved in browsers for most users only protect access to accounts that are also accessible with a simple court order, though.


AppData is the Windows equivalent of Linux home-directory dotfiles.

> Ideally applications should only have access to their own directories

This happens for Windows Store apps, which are sandboxed similarly to mobile phone apps.


There's probably nothing in programming that I hate more than having full access to the file system. Any time I write a program that has to delete a file, I just make it move the file into a trash folder instead, just in case I mess up somewhere and accidentally delete the entire file system.



Is there even a way to opt in to having a secret be accessible only to your process? Like, a way to sign your executable and then use a Windows API that goes "oh, this process is made by the same vendor that created this secret, so it’ll be allowed access".

It’s just ridiculous that the most trivial, unprivileged process can just steal any file and any secret accessible by the user it’s run as. Unless that secret is protected with a key derived from a separate password the user has to put in.


I don't think it's possible on Windows.

It's trivial on Unix: just make the program setgid and change the folder permissions to only allow the group. This can be nested, though that requires the relevant program to be aware of the need to walk through several levels; often a symlink can hide that.

Note that when creating such a directory setup, `chown`ing away the user requires a privileged helper utility. But you need to make such utilities anyway so the user can delete such directories.

***

Important note: most other "solutions" only protect you from apps that opt in to security. A proper solution, like this one, protects from all processes running as the user, except the process of note.


Or use SELinux/AppArmor; those have supported app sandboxing without group tricks for a long time.


Those are useless because they're opt-in, and we can't expect malicious programs to opt in.

There's probably some mandatory mode but since it breaks all sorts of programs nobody can afford to use it.


AppArmor is opt-in, so it mostly protects against exploitation, but SELinux can definitely cover the whole system by default. It's not trivial, but you can at least prevent apps from accessing personal information unless explicitly allowed. I've been using it for years without issues. It really requires only a minimal amount of learning, and you don't need to turn it off.


So - the moral of the story is to never use Windows?


Or don't use their "security" features. AFAICT everything would have been fine if they had used a hardware key as a second factor.


How do you know which app is accessing your hardware key in the absence of any OS feature mediating access to it?


> The response was that if access to %Appdata% is completely blocked Windows won't work anymore.

Yikes. I really wish that instead of wasting resources on telemetry nonsense, Microsoft would focus on optimizing their OS and modernizing away some of these blatant security issues.

I guess it won't happen until we have another wave of ransomware or something of the sort.


That's like saying Linux doesn't have sandboxed apps because chmod -R 000 /var will break your system. Technically sort of right, but not a useful or interesting observation.


The difference being that Windows is a much bigger target for abuse, since it's the most commonly used desktop OS, with as much as 70% market share or higher depending on where you get the numbers. It's also used a lot in corporate environments. Linux usage of /var and /etc differs depending on various factors too... developers and distro maintainers put files in different places.


I believe the following is the solution. https://learn.microsoft.com/en-us/windows/win32/secauthz/app... No?


Essentially yeah but it’s currently opt-in from the app developer. I believe (but may be wrong) that an app which doesn’t implement AppContainer isolation can currently access the data written by apps that do implement it. I think the intention is for it to become the default one day.


> not sure if this is still the case though

No, this is not the case anymore. Nowadays support for unaligned memory accesses is very good on ARM and most other CPU architectures. On x86, aligned memory used to be very important for SIMD, but now there are even special SIMD instructions for unaligned data, and the performance overhead of unaligned memory accesses is generally very small in my experience.


> it’s good for software robustness that file formats and hardware use different endianness, as it forces you to read things byte-by-byte rather than lazily assuming you can just read 4 bytes and cast them directly to an int32.

Except that it is very bad for performance. As far as CPUs are concerned, little-endian has definitely won; most CPU architectures that were big endian in the past (e.g. PowerPC) are now little endian by default.

If all new CPU architectures are little endian, then within a decade or two there won't be any operating systems that support big endian anymore.
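
To make the trade-off concrete, here is the classic byte-by-byte read of a big-endian 32-bit field; it is portable to any host, but costs several shifts and ORs where a little-endian format on a little-endian CPU could use a single load:

  #include <stdint.h>

  /* Portable big-endian read: correct regardless of host endianness. */
  uint32_t read_be32(const unsigned char *p) {
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
  }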


Power ISA is actually still big endian by default, and even little-endian-capable systems start out big endian until changed by the OS. Low-level OPAL calls on OpenPOWER systems are made big endian, even if the OS is little endian (the OS has to switch the processor mode).


For performance, can’t you “just” have a swap-endianness instruction in your CPU, and have the compiler use it when it detects byte-shuffling code?

(That may even happen already on some architectures for all I know)


> For performance, can’t you “just” have a swap-endianness instruction in your CPU

Yes, most CPUs have special instructions for swapping between little- and big-endian byte order; GCC exposes this via the __builtin_bswap64(x) builtin. However, this is an additional instruction that needs to be executed for each read of a 64-bit word that needs converting; in some workloads this can double the number of executed instructions and hence add significant overhead.
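
A minimal sketch of how that builtin typically gets used (assuming GCC or Clang; load_be64 is an illustrative name):

  #include <stdint.h>
  #include <string.h>

  /* Load a big-endian 64-bit field. The bswap compiles to one instruction
     on x86-64 and ARM64, but it is still one extra instruction per load. */
  uint64_t load_be64(const void *p) {
    uint64_t v;
    memcpy(&v, p, sizeof v);  /* safe for unaligned input */
  #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    v = __builtin_bswap64(v);
  #endif
    return v;
  }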

Supporting big-endian CPUs in systems programming sucks beyond imagination. There are virtually no big-endian users anymore, yet making sure your software works on big endian still requires testing it on a big-endian CPU, and there are no consumer big-endian CPUs left to buy. For this reason I still have a PowerPC Mac from 2003 at home running an ancient version of Mac OS X. But over the last 2 years I have stopped testing my software on big endian; I just don't care about big endian anymore...


> C compilers have to assume that pointers to memory locations can overlap, unless you mark them __restrict...

What I don't fully understand is: "GCC has the option -fstrict-aliasing which enables aliasing optimizations globally and expects you to ensure that nothing gets illegally aliased. This optimization is enabled for -O2 and -O3 I believe." (source: https://stackoverflow.com/a/7298596)

Doesn't this mean that C++ programs compiled in release mode behave as if all pointers are marked with __restrict?


restrict and strict aliasing have to do with the same general concept, but aren't the same. They both allow the compiler to optimize around the assumption that writes through one pointer won't be visible through reads of another. As a concrete example, can the following branches be merged?

  #include <stdbool.h>
  #include <stdio.h>

  void foo(/*restrict*/ bool* x, int* y) {
    if (*x) {
      printf("foo\n");
      *y = 0;  /* may or may not write to *x, depending on aliasing rules */
    }
    if (*x) {
      printf("bar\n");
    }
  }
Enabling strict aliasing is effectively an assertion that pointers of incompatible types never point to the same data, so a write to *y will never touch *x. restrict is an assertion on a specific pointer that no other pointer aliases it.
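
With either guarantee in place, the transformation the compiler is permitted to make looks roughly like this:

  void foo_merged(/*restrict*/ bool* x, int* y) {
    if (*x) {  /* *y = 0 cannot change *x, so one test suffices */
      printf("foo\n");
      *y = 0;
      printf("bar\n");
    }
  }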


OK thanks, indeed Clang is able to generate better assembly using __restrict__. And -O3 generates the same assembly as -O3 -fstrict-aliasing (which is not as good as __restrict__).

I wish there were a C/C++ compiler flag for treating all pointers as __restrict__. However, I guess the C/C++ standard libraries wouldn't work with such a compiler option (and therefore it wouldn't be useful in practice).
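
In the meantime, annotating the hot loops by hand is the usual workaround; a minimal sketch (the function is just an example): with both pointers marked restrict, the compiler can vectorize the loop without emitting runtime overlap checks.

  void saxpy(int n, float a, float *restrict y, const float *restrict x) {
    for (int i = 0; i < n; i++)
      y[i] += a * x[i];  /* no aliasing possible, so SIMD codegen is easy */
  }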


What's interesting to note though is that I tried marking pointers __restrict__ in the performance critical sections in 2 of my C++ projects and the assembly generated by Clang was identical in all cases!

So while it is true that by default Rust has a theoretical performance advantage over C/C++ because it forbids aliasing mutable references, I wonder (doubt) whether this will cause Rust binaries to generally run faster than C/C++ binaries.

On the other hand, Rust puts security first, so there are lots of array bounds checks in Rust programs (and not all of them can be eliminated). Personally I think this feature hurts the performance of Rust programs more than forbidding aliasing helps it.


> so there are lots of array bounds checks in Rust programs

Depends on how those programs were written. Iterators should avoid bounds checking for example.


Likely most C code, particularly data structure code, would break if compiled with a setting that treats all pointers as restrict.


I think C and C++ have enough problems with accidental undefined behavior already without making aliased pointers into UB.


Not for char: the compiler always assumes char pointers may alias (non-restrict), which is important to remember if you're ever operating on an RGB or YCbCr matrix or something.


Huh.

Does that also hold for uint8_t, which is often just a renamed unsigned char rather than a genuine type of its own?


Not according to the standard, but in practice yes, currently. There are arguments that it should not be treated as a char type, for optimization reasons, but there is likely too much existing code relying on the current behavior to change it (see the related LLVM and GCC bugs).


Yes


> Has anyone tried machine learning on this or the Graviton2?

I have not done any machine learning on AWS Graviton2 CPUs, but I have run many other CPU benchmarks on them, and overall I have been disappointed by their performance. They are still much slower than current x64 CPUs (x64 CPUs are up to 2x faster single-threaded).

According to the benchmarks from AnandTech, the Ampere Altra should have much better performance than Graviton2 CPUs, as it is neck and neck with the fastest x64 CPUs.


I have personally used a large number of cloud compute services: AWS, Packet, Alibaba, DigitalOcean, Nimbix, Azure, Oracle, ... and I still use all of these services from time to time. If you look carefully at my list of cloud vendors, you'll notice that only one of the big cloud vendors is missing: GCP. Why?

When GCP came out, I wanted to switch from AWS to GCP, because AWS was costing me a lot of money and GCP was marketed as being cheaper. So I went to their website and signed up. Or at least I tried to sign up, because GCP did not care about individuals: you had to be a company! No other cloud vendor I tested had this requirement. So to this date I have never tried to sign up with GCP again.


Let me guess, you're based somewhere in Europe? I don't remember the specifics, but the "business only in Europe" thing was 100% the fault of the EU's (tax?) laws. Not sure if that ever got resolved, actually.


Yes, I am based in Europe. If this was the EU's fault, then why didn't any of the other cloud vendors have this requirement?


I guess it must have been resolved, as I'm based in the EU and was signed up for a personal GCP account at one point.


That was silly, because they only asked for a company name; you could've entered "Individual" in the box and they would not have cared!


That's a bit risky. Google is well known to close accounts and have no humans you can contact.


This is one of the many things which prevents me from building on Google.

1) I don't want them closing my account. Doing development is the sort of thing which looks A LOT like suspicious activity to Google's buggy anomalous-behavior detectors.

2) They close accounts on working businesses all the time.


They also asked for the VAT number of the company...


> The facts: we gain almost nothing by having tiktok around. We lose nothing by banning it, and gain a little bit of buffer against possible threats like election meddling, data mining for nefarious purposes and other things. Completely leaving politics aside, I basically support this.

What if all non-US countries start reasoning like you and ban YouTube, Facebook, ...?


If country X believes that Facebook is being used as an intelligence or opinion-shaping tool by the United States government, then country X would be justified in trying to do something about it, in my opinion. And the loss of Facebook and its toxic effect on its users wouldn't make me lose any sleep if I were a citizen of country X. And I think this is probably true. I know Facebook is insanely popular in Myanmar, and I personally knew people very high up in the government who used Facebook, as did their whole families. I'm confident that Facebook could use their records to effect almost any kind of change imaginable in Myanmar. Their access is mind-blowing. Maybe I'm being a bit hyperbolic, but the point gets across.


As an EU citizen, I hope the European Union (or the biggest governments that are part of it) will start seriously thinking about "nationalizing" the parts of Google and FB that operate in Europe, the same way MS is rumored to be doing with TikTok.


China has been doing that for over a decade and I don't know of anyplace where WeChat is banned.


China has also put dissident and unwanted people in 're-education' camps. Should we do that next?


> It's worth pointing out how extremely far ahead Apple seems to be in terms of CPU power...

I agree that Apple's ARM CPUs are very competitive on simple scalar instructions and memory latency/bandwidth. However, x86/x64 CPUs have up to 512-bit-wide vector instructions, and many programs use vector instructions somewhere deep down in the stack. I guess that the first generation of Apple ARM64 CPUs will offer only ARM NEON vector instructions, which are 128 bits wide and honestly a little pathetic at this point in time. But on the other hand, I am very excited about this new competition for x86 CPUs, and I will for sure buy one of these new Macs in order to optimize my software for ARM64.
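
For a sense of scale, a minimal NEON sketch (illustrative only): each instruction processes 128 bits, i.e. four floats at a time, where a single AVX-512 instruction handles sixteen.

  #include <arm_neon.h>

  /* One 128-bit NEON lane's worth of work: add four floats per instruction. */
  void add4(float *dst, const float *a, const float *b) {
    float32x4_t va = vld1q_f32(a);
    float32x4_t vb = vld1q_f32(b);
    vst1q_f32(dst, vaddq_f32(va, vb));
  }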


Also, vector instructions don't do that well on laptops: they get thermally throttled, making them less useful. https://amp.reddit.com/r/hardware/comments/6mt6nx/why_does_s...


I am more than a little naive on the subject, but is it possible that the vector instructions could be farmed out to a co-processor dedicated to that kind of workload? I suspect that the rich instruction set leads to a higher transistor count and density (true?) and thus a higher TDP?

Would love to learn more from sources if people might provide a newb an intro.


The vector instructions can't really be farmed out, because they can be scattered inline with regular scalar code. A memcpy of a small- to medium-sized struct might be compiled into a bunch of 128-bit movs, for example, with the code immediately working on the moved struct afterwards. If you were to offload that to a different processor, waiting on that work to finish would stall the entire pipeline.


Could the compiler create a binary that had those instructions running on multiple processors? I see now I have some googling/reading to do about how you even use multiple processors (not cores) in a program.


That's what we call the magic impossible holy grail parallelizing compiler.


Good to know before I run off looking for the answer :)


The technological knowledge to do this is years and years away.


> The vector instructions can't really be farmed out because they can be scattered inline with regular scalar code.

If you believe this, you won't believe what's in this box[1].

[1]: https://www.sonnettech.com/product/egfx-breakaway-puck.html

> A memcpy of a small- to medium-sized struct might be compiled into a bunch of 128-bit movs, for example, with the code immediately working on the moved struct afterwards

I'm not sure that's true: rep movs is pretty fast these days.


> If you believe this, you won't believe what's in this box[1].

There's a fundamental difference between GPU code and vector CPU instructions, though. GPU shader instructions aren't interwoven with the CPU instructions.

Yes, if you restrict yourself to not arbitrarily mixing the vector code with the non-vector code, you can put the vector code off in a dedicated processor (GPU in this case). The GP explicitly stated that a lack of this restriction prevents efficiently farming it off to a coprocessor.


> I'm not sure that's true: rep movs is pretty fast these days.

That's only true if you target Skylake and newer. If you target generic x86_64, compilers will only emit rep movs for long copies, due to some CPUs having a high baseline cost for it. There's some linker magic that might get you an optimized version when you callq memcpy, but that doesn't help with inlined copies.


I think people with computers more than five years old already know that their computer is slow.

Why exactly do you think seven years old is too old, but five years old isn't?


That is irrelevant. The default target of compilers is some conservative minimum profile. Any binary you download is compiled for wide compatibility, not to run on your computer only.


That’s different. Rendering happens entirely on the GPU, so the only data transfer is a one-way DMA stream containing scene primitives and instructions.


There's absolutely no reason it _has_ to be one-way: It's not like the CPU intrinsically speaks x86_64 or is directly attached to memory anyway. When inventing a new ISA we can do anything.

And if we're talking about memcpy over (small) ranges that are likely still in L1 you're definitely not going to notice the difference.


By definition a co-processor won't share the L1 cache with another processor.


Exactly.


Then you will face the same problems that GPUs suffer from: extremely high latency and constrained memory bandwidth. Sending an array with 100 elements to the GPU is rarely worth it, whereas processing that array with vector instructions on the CPU gives you exactly the speedup you need, because you can trivially mix and match scalar and vector instructions. I personally dislike GPU programming because GPUs are simply not flexible enough; either it runs on a GPU or it doesn't. ML runs well on GPUs because graphics and ML both process big matrices. It's not like someone had an epiphany and somehow made a GPU-incompatible algorithm run on a GPU (say, deserializing JSON objects); they were a perfect match from the beginning.


This is not an area of expertise for me, so is there a reason to not offload vector processing to the GPU and devote the CPU silicon to what it's good at, which is scalar instructions?


There are many reasons. The latency of getting data back and forth to the GPU is a pretty high threshold to cross before you even see benefits, and many tasks are still CPU bound because they have data dependencies and logic that benefit from good branch prediction and deep pipelines.

Many high compute tasks are CPU bound. GPUs are only good for lots of dumb math that doesn't change a lot. Turns out that only applies to a small set of problems, so you need to put in lots of effort to turn your problem into lots of dumb math instead of a little bit of smart math and justify the penalty for leaving L1.


Yes, communications overhead. SIMD instructions in the CPU have direct access to all the same registers and data as regular instructions. Moving data to a GPU and back is a very expensive operation relative to that. The chips are just physically further away and have to communicate mostly via memory.

Consider a typical use case for SIMD instructions - you just decrypted an image or bit of audio downloaded over SSL and want to process it for rendering. The data is in the CPU caches already. SIMD will munch it.


For certain professions, like media editing, vector instructions help. But for your average Facebook / Netflix / Microsoft Word user, the kind of user that 95% of users are, vector instructions bring fewer benefits.


Are you saying Facebook, Netflix and Microsoft Word don't require media processing? Pretty sure you'd see plenty of SIMD instructions being executed in libraries called by those applications.


AVX is widely used in things as basic as string parsing. Does your application touch XML or JSON? Odds are good that it probably uses AVX.

Does your game use Denuvo? Then it straight-up won't run without AVX.

People are stuck in a 2012 mindset where AVX is some newfangled thing. It's not; it's used everywhere now. And it will be even more widely used once AVX-512 hits the market: even if you are not using the full 512-bit width, AVX-512 adds a bunch of new instruction types that fill gaps in the existing sets and extend them with GPU-like features (lane masking).
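
As a hedged illustration of the string-parsing case (a simplified version of the trick SIMD JSON parsers use; find_quote32 is an illustrative name), here is a scan of 32 bytes at once for a quote character with AVX2 intrinsics:

  #include <immintrin.h>

  /* Index of the first '"' within the 32 bytes at p, or -1 if none.
     Assumes 32 readable bytes; real parsers handle the tail carefully. */
  int find_quote32(const char *p) {
    __m256i chunk  = _mm256_loadu_si256((const __m256i *)p);
    __m256i quotes = _mm256_cmpeq_epi8(chunk, _mm256_set1_epi8('"'));
    unsigned mask  = (unsigned)_mm256_movemask_epi8(quotes);
    return mask ? __builtin_ctz(mask) : -1;
  }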


Are you saying that iPhones and iPads are bad at Facebook, Netflix, and Microsoft Word? If they are, the end user certainly can’t tell. If they aren’t, then it doesn’t really matter does it?


Phones are much more reliant on having hardware decoders for things like video while desktops can usually get away with a CPU-based implementation, yes.


Sure but the same is true about performance in general.


That's not really true. Single-threaded scalar performance is still super important for the everyday responsiveness of laptop/desktop systems. Especially for applications like web browsing which run JavaScript.


Your UI is slow because of IO and RAM and O(n^2) code, not CPU. Look at your activity monitor.


> Net result: they do not. For my use case (Clojure/JVM and ClojureScript compilation), compile times did not get shorter. There seemed to be a slight improvement, but it was below the level of measuring noise (which was around 8%).

I think the level of measuring noise is 0.5%; at least, that is what low-level systems programmers generally consider noise...


I don't think anyone else gets to say how much noise the GP saw in their test.


How can you know that information?


Compile times for my project are around 2m15s to 2m30s. Since this work involves lots of CPU (some single-threaded, some multi-threaded work) and I/O, the spread mentioned above is what I get with multiple measurements.

While it's true that without the mitigations, times were much closer to the 2:15 mark, there were still outliers at 2:24, which means it's hard to draw any meaningful conclusions.

Not sure where you got the "0.5%" figure from.

