"Compromised" meaning that malware hasn't been installed or that it's not being accessed by malicious third parties. This could be at the BIOS, firmware, OS, app or any other other level.
Android and ChromiumOS are likely the most trustable computing platforms out there; doubly so for Android running on Pixels. If you'd rather not run the ROM Google ships, you can flash GrapheneOS or CalyxOS and re-lock the bootloader.
Pixels have several protections in place (a quick way to spot-check a couple of them follows this list):
- Hardware root of trust: This is the anchor on which the entire TCB (trusted computing base) is built.
- Cryptographic verification (verified boot) of all the bootloaders (IPL, SPL), the kernels (Linux and LittleKernel), and the device tree.
- Integrity verification (dm-verity) of the contents of the ROM (/system partition which contains privileged OEM software).
- File-based Encryption (fscrypt) of user data (/data partition where installed apps and data go) and adopted external storage (/sdcard); decrypted only with user credentials.
- Running blobs that traditionally ran at higher exception levels (like ARM EL2) inside restricted, mutually untrusted VMs.
- Continued modularization of core ROM components so that they can be updated just like any other Android app, i.e. without having to update the entire OS.
- Heavily sandboxed userspace, where each app has a very limited view of the rest of the system, typically gated by Android-enforced permissions, seccomp filters, SELinux policies, POSIX ACLs, and Linux capabilities.
- Private Compute Core for PII (personally identifiable information) workloads, and the Trusty TEE (trusted execution environment) for high-trust workloads.
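A minimal sketch of that spot-check from a connected device, assuming adb is installed and the phone is authorized for debugging (these are the standard AOSP property names; vendor builds may differ):

    import subprocess

    def getprop(name: str) -> str:
        # Read an Android system property over adb.
        out = subprocess.run(["adb", "shell", "getprop", name],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    # Verified boot state: "green" means the whole chain verified against the
    # OEM (or user-set) key; "orange" means the bootloader is unlocked.
    print("verified boot:", getprop("ro.boot.verifiedbootstate"))

    # File-based encryption of /data: expect "file" and "encrypted" on a Pixel.
    print("crypto type:  ", getprop("ro.crypto.type"))
    print("crypto state: ", getprop("ro.crypto.state"))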
This is not to say Android is without exploits, but it seems to be the furthest ahead of the mainstream OSes. This is not a particularly high bar because of closed-source firmware and baseband, but this ties in generally with the need to trust the hardware vendors themselves (see point #1).
IMO we have to step back and be honest that the Linux kernel is simply not equipped to run trusted code and untrusted code in the same memory. New bugs are found every few weeks.
If history is any guide, Android and ChromiumOS likely still have many critical bugs the public does not know about yet.
Sadly the only choice is to burn extra RAM to give every security context a dedicated kernel and virtual machine. Hypervisors are the best sandbox that exists, anchored to a hardware IOMMU.
QubesOS, being VM based, is thus the best-effort secure workstation OS that exists at the moment. SpectrumOS looks promising as a potential next gen too.
I've never used Qubes. Rather I heavily segment with manually configured VMs. The ones that run proprietary software (eg webbrowsing, MSWin, etc) generally run on a different machine than my main desktop. It's quite convenient as I can go from my office to the couch, and I just open up the same VMs there and continue doing what I was doing.
I define the network access for each VM in a spreadsheet (local services and Internet horizon), which then gets translated into firewall rules. I can simultaneously display multiple web browsers, each with a different network nym (casual browsing, commercial VPN'd, Tor, etc.).
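For flavor, a minimal sketch of that translation step, assuming an nftables backend; the VM names, addresses and destinations here are made up, and the real version reads the spreadsheet export instead of a hardcoded dict:

    # Hypothetical per-VM "network horizon" table -> nftables rules.
    vms = {
        "browse-casual": {"addr": "10.0.10.11", "allow": ["0.0.0.0/0"]},
        "banking":       {"addr": "10.0.10.12", "allow": ["192.0.2.10/32"]},
        "tor-gw":        {"addr": "10.0.10.13", "allow": ["0.0.0.0/0"]},
    }

    rules = [
        "add table inet vmfw",
        "add chain inet vmfw forward "
        "{ type filter hook forward priority 0; policy drop; }",
    ]
    for name, cfg in vms.items():
        for dst in cfg["allow"]:
            rules.append(f'add rule inet vmfw forward '
                         f'ip saddr {cfg["addr"]} ip daddr {dst} '
                         f'accept comment "{name}"')

    print("\n".join(rules))  # pipe into `nft -f -` on the VM host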
The downsides include needing an ethernet cable on my laptop (latency), and that this setup isn't great at going mobile. Eventually I'll get around to setting up a medium-trust laptop that runs a web browser and whatnot directly (while not having access to any keys to the kingdom), one of these days real soon now.
Which brings me to the real downside: the work required to administer it - you already have to be in the self-hosting game. This is where an out-of-the-box solution could excel. Having recently become a NixOS convert, SpectrumOS looks very interesting!
Thinking of my kids' future has also made me much more energy-conscious. Meaning I've stopped running my VM host 24/7 like I was, because neither ESX nor Proxmox is really set up for saving energy easily (automated suspending and waking, etc). Which is a shame, since I'm actually finding that with gigabit fiber at home, even on mobile connections I can work pretty decently on homelab VMs.
Running something like it on a laptop directly makes sense, but I worry about bringing some workloads back to my laptop that I really prefer to keep off it. In terms of raw performance my laptop isn't even close, especially with heavy graphic workloads. And then there's heat, etc.
I feel like this is the all too common pattern of individuals taking environmental responsibility to absurd levels, while corporations dgaf. How much electricity is burned in datacenters, especially doing zero-sum surveillance tasks?
My Ryzen 5700G ("router") idles around 20-25W, which seems like a small price to pay to not be at the mercy of the cloud. That's around 60 miles of driving per month (gas or electric), which seems quite easy to waste other ways.
My Libreboot KGPE ("desktop/server") burns about 160W. This is much higher than a contemporary computer should be, but that's the price of freedom. I could replace it with a Talos II (~65W from quick research), but the payback for electricity saved would take several decades.
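Back-of-the-envelope numbers behind those two comparisons, using assumptions of my own (roughly $0.30/kWh, ~0.28 kWh per EV mile, and a Talos II build somewhere around $7k), so treat the outputs as rough:

    # Rough arithmetic only; prices and consumption figures are assumptions.
    KWH_PRICE   = 0.30     # $/kWh
    EV_KWH_MILE = 0.28     # kWh per mile for a typical EV

    router_w = 22.5        # 20-25 W idle
    router_kwh_month = router_w / 1000 * 24 * 30          # ~16 kWh/month
    print(f"router ~= {router_kwh_month / EV_KWH_MILE:.0f} EV-miles/month")

    saved_w = 160 - 65     # Libreboot KGPE vs. Talos II
    saved_kwh_year = saved_w / 1000 * 24 * 365            # ~830 kWh/year
    payback_years = 7000 / (saved_kwh_year * KWH_PRICE)
    print(f"Talos II payback ~= {payback_years:.0f} years")  # decades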
To cut back on the environmental impact, I do plan to install solar panels with battery storage, which will also replace the need for UPSes. I've got another KGPE board which it's interesting to think about setting up as a parallel build host, running only on sunny days rather than adding to the electricity storage requirements.
I totally agree, but I do what I can. And while setup takes me some extra effort, I can now run my 24/7 workloads on a small box and run the big box only when needed.
I used to leverage VMs more (and still do in certain cases) but I've moved to disposable/containerized by leveraging Kasm [0]. There are other ways to stream environments, but it's another option. Definitely check it out if you're looking for other options.
i think tanenbaum will be vindicated in the end. monolithic kernels are like 90s computer networks with perimeter security. if i were to guess, i'd guess that the future is microkernels with some sort of hardware accelerated secure message passing facility. zero-trust at kernel design scale.
Obligatory reminder of the human angle here: we need also a way for untrusted code to not be able to nag the user into granting them extra permissions.
That's not capability-based security, though everyone seems to think it is. (Perhaps it's just survivor bias affecting replies?)
You never get nagged to take $5 out of your wallet to hand to an untrusted person, so why should you get nagged to drop a file into an application you don't trust?
"You never get nagged to take $5 out of your wallet to hand to an untrusted person"
I suppose you've never been to a significantly poorer country (outside of the protected tourist areas)?
Or, well, spent time with kids who really want something.
Begging can get very intense.
And about file permissions, well - are you aware of what kind of permissions the standard free app on the Google Play Store will ask of people? And yes, I won't use them. But I use WhatsApp. I did not want to give it permission to read contacts, or wide file access. But denying that means it is almost unusable, so I also eventually gave in ..
The operating system should allow you to make the choice, then enforce it. Open file X, save file Y.... the user should make those choices (via the OS) and the OS should enforce those decisions... the way applications are currently run, that's not true.
The application still needs to communicate the things it needs, the things on which the OS/the user should make choices. And if the application can communicate this, it can communicate it again. And again. And again. Or flat out refuse to work with "incorrect" choices, and bully the users into compliance.
You'd think that would be really rude of the app. That may have mattered 20-30 years ago. Today, most consumer-facing tech companies - big corporations and small startups alike - have adopted "being a rude, obnoxious asshole" as a business model.
Note that this includes all the major commercial OS vendors too - i.e. Apple, Google and Microsoft. This creates a new challenge: how do we design secure systems when neither the apps nor the OS itself are trusted parties? How do we develop this security framework, when untrusted parties are the ones gatekeeping adoption, and also most likely to be developing it?
In other words: how do we maintain security for hens, when the foxes are guarding the hen house?
To be fair, Pixels (and all modern Android phones by my understanding) use some kind of trusted execution environment. So if you have a Pixel 6 or later you're using Trusty to perform some trusted actions, which is not Linux and gets some of its own SoC die space. That doesn't mean you can't get kernel pwned and lose private info.
Linux is a security shit show, but it is at least publicly auditable, which is a prerequisite to form reasonable confidence in the security of software, or to rapidly correct mistakes found.
OpenBSD by contrast has dual auditing and a stellar security reputation, but development is much slower and compatibility is very low.
seL4 as an extreme is a micro-kernel with mathematically provable security by design, but no workstation software runs on it yet.
MacOS, iOS, and Windows are proprietary so they are dramatically worse off than Linux in security out of the gate. No one should use these that desires to maximize freedom, security and privacy.
In the case of seL4, don't confuse formal verification with security. The code matches the spec, and security properties can be extracted very precisely, but the spec might contain oversights/bugs which would allow an attacker to perform unexpected behaviors.
If you define security as a "lack of exploitable bugs", then security can never be proven, because it's impossible to prove a negative. Also, many formally verified systems have had critical bugs discovered, like the KRACK attacks against WPA2. The formal verification wasn't wrong, just incomplete, because modeling complex systems is inherently an intractable problem.
The fact that seL4 doesn't even offer bug bounties should be a huge red flag that this is still very much an academic exercise, and should not be used in places where security actually matters.
Besides spec bugs, the seL4 threat model is focused on making sure components are kept isolated. It does not deal with most of what we understand as attacks on a workstation at all.
In fact, in a seL4 system, most of the vulnerabilities we find on Linux wouldn't even be on the kernel, and their verification logic can't test something that isn't there.
That said, the seL4 model does probably lead to much better security than the Linux one. It's just not as good as the OP's one-liner implies.
> The formal verification wasn't wrong, just incomplete, because modeling complex systems is inherently an intractable problem.
I’m not involved in this kind of research or low level auditing, but I have some mathematical training and fascination with the idea of formal verification.
I ran across this thing called “K-Framework” that seems to have invested a lot in making formal semantics approachable (to the extent that’s possible). It’s striving to bridge that gap between academia and practicality, and the creator seems to really “get” both the academic and practical challenges of something like that.
The clarity of Grigore’s explanations and the quality of what I’ve found here: https://kframework.org/ makes me think K has a lot of potential, but again, this is not my direct area of expertise, and I haven’t been able to justify a deep dive to judge beyond impressions.
You’re correct in pointing out that complex systems are inevitably difficult to verify, but I think stuff like K could help provably minimize surface area a lot.
It sure makes auditing that code conforms to an expected design a lot easier, which is where most security bugs live. This is a fantastic design choice for a security-focused kernel.
I will grant that proving something was implemented as designed does not rule out design flaws so, fair enough.
I find it hard to believe that the Linux codebase being auditable makes Linux more secure by default than MacOS, iOS, and Windows. I doubt it is humanly feasible to fully read and grok the several million LOC running within Linux.
I would, however, trust a default MacOS/iOS/Windows system over a default Linux system. The Linux community has a track record of being hostile to the security community - for their own good reasons. Whereas Apple and Microsoft pay teams to secure their OS by default.
If you install something like grsecurity or use SELinux policies, I could buy the argument. I have yet to see these used in production though.
Also, seL4 is not mathematically proven to be secure; it is formally verified, which means it does what it says it does. That spec may still permit exploitable behavior.
I regret my poor description of seL4. It has proofs for how it functions, and how code execution is isolated, etc. That is not -every- security issue by any means but reviewing a spec is easier than reviewing code, and a small code footprint that forces things out of the kernel that do not need to be there is a major win. I hope more projects follow their lead.
As for Linux, piles of companies pay for Linux kernel security, though many bugs are found by academics and unpaid independent security researchers. Linux is one of the best examples of many-eyes security. None of those brilliant and motivated researchers are allowed to look at the inner workings of MacOS or Windows, though Darwin is at least partly open source, so I put it way ahead of Windows here.
On system-call firewalling tactics like SELinux, it is true very few use these in practice, as most devs have no idea what a system call is, let alone how to restrict it. That said, kernel namespacing features have come into very wide mainstream use thanks to Docker and similar containerization frameworks, which cover much of the scope of things like SELinux while being much easier to use.
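As a concrete illustration of how low that barrier has gotten, here's a minimal sketch (assuming util-linux's unshare(1) is present and unprivileged user namespaces are enabled) that drops a command into its own user/mount/PID/network namespaces with no container runtime at all:

    import subprocess

    def run_isolated(cmd):
        # Fresh user, mount, PID and network namespaces for the child.
        # --map-root-user maps our UID to root *inside* the new user namespace;
        # --net leaves the child with an empty network stack (just a down lo).
        return subprocess.run(["unshare", "--user", "--map-root-user",
                               "--mount", "--pid", "--fork", "--net", *cmd])

    # The command sees no real network interfaces and believes it is PID 1:
    run_isolated(["sh", "-c", "ip link; echo pid=$$"])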
As for most Linux /distributions/, I sadly must agree they favor compatibility and ease of use over security basically always. I will grant that Windows/MacOS enable basic sandboxing features that, while proprietary, are likely superior to nothing at all like most Linux distros. Other choices exist though.
QubesOS is the Linux distro for those that want high security. It is what I run on all my workstations and I would trust it out of the box over anything else that exists today as hardware access and application workflows are isolated by virtual machines and the base OS is offline.
> I would, however, trust a default MacOS/iOS/Windows system over a default Linux system. The Linux community has a track record of being hostile to the security community - for their own good reasons. Whereas Apple and Microsoft pay teams to secure their OS by default.
I think we can have the best of both worlds here: OS distributions that are being maintained by paid teams of security experts, and that can be audited by anybody.
What are the major ones? Android, Chromium OS, RedHat (Fedora, CentOS), and SUSE.
seL4 actually makes proofs for some core isolation promises, like realtime-ness and data flow adhering to capabilities (though with neglect of side channels for that aspect, which can be corrected for by also verifying the code that runs on top to not do shady stuff to probe side channels).
> MacOS, iOS, and Windows are proprietary so they are dramatically worse off than Linux in security out of the gate. No one should use these that desires to maximize freedom, security and privacy.
Not sure how fair this is, even though I agree with you with regards to Linux being auditable and the others not.
Windows and macOS these days have vendor-provided code signing authorities that can be leveraged (and are by default), which provides at least some protection against malware at the macro level (in that the certificates can be revoked if something nefarious is identified). This doesn't exist at all in Linux, although third party products are in the early stages.
Windows 11 and macOS have hardware-backed root-of-trust. In Windows the root of trust is the TPM, on macOS it's the T2 (Intel) or the chip package (ARM).
Any of these features could be compromised without your knowing, but at least where you have control authorities for these systems you can draw some comfort in knowing that once new malware has been identified spreading on pretty much any machine, it can be stopped quite rapidly on all machines by revocation until the bug can be patched.
> Windows and macOS these days have vendor-provided code signing authorities that can be leveraged (and are by default), which provides at least some protection against malware at the macro level
The code-signing is relatively trivial to work around: you just have to get users to run xattr(1) to remove the quarantine attribute.
Because even for FOSS stuff, unless you are an expert on all levels of the stack, you will not be able to assert that there are no hidden exploits disguised as perfectly safe code.
And if you have to rely on third parties to assert that for you, then you will have to trust their honesty and technical skills to be able to assert such statements.
So there is only the hope that all players are experts, don't make any mistakes, keep being honest, and exercise their certification for every new release of every product that makes up a standard installation.
It's... complicated. Linux is just the kernel, but good modern OS security requires the kernel, the userspace, and the kernel/userspace boundary to all be hardened a significant amount. This means defense in depth, exploit mitigation, careful security and API boundaries put in place to separate components, etc.
Until pretty recently (~3-4 years ago), Linux the kernel was actually pretty far behind in most respects versus competitors, including Windows and mac/iOS. I say this as someone who used to write a bunch of exploits as a hobby (mainly for Windows based systems and Windows apps). But there's been a big increase in the amount of mitigations going into the kernel these days. Most of the state-of-the-art stuff was pioneered outside of upstream, but Linux does adopt more and more of it these days.
The userspace story is more of a mixed bag. Like, in reality, mobile platforms are far ahead here because they tend to enforce rigorous sandboxing far beyond the typical access control model in Unix or Windows. This is really important when you're running code under the same user. For example just because you run a browser and SSH as $USER doesn't mean your browser should access your SSH keys! But the unix model isn't very flexible for use cases like this unless you segregate every application into its own user namespace, which can come with other awkward consequences. In something like iOS for example, when an application needs a file and asks the user to pick one, the operating system will actually open a privileged file picker with elevated permissions, which can see all files, then only delegate those files the user selects to the app. Otherwise they simply can't see them. So there is a permission model here, and a delegation of permissions, that requires a significant amount of userspace plumbing. Things like FlatPak are improving the situation here (e.g XDG Portal APIs for file pickers, etc.) Userspace on general desktop platforms is moving very, very slowly here.
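To make the "privileged picker hands over only the chosen file" idea concrete: the underlying mechanism is basically file-descriptor delegation. This is a minimal sketch of that mechanism in isolation (not Apple's or the XDG portal's actual code), where a trusted broker opens the file and passes just the open descriptor over a Unix socket to a process that never sees the path:

    import os, socket

    # Broker <-> "app" over a socketpair; the app side only ever receives the
    # single descriptor the broker chooses to delegate (SCM_RIGHTS, Python 3.9+).
    broker_sock, app_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

    if os.fork() == 0:                        # sandboxed "app" side
        broker_sock.close()
        msg, fds, _, _ = socket.recv_fds(app_sock, 1024, 1)
        with os.fdopen(fds[0]) as delegated:
            print("app got:", msg.decode(), "->", delegated.readline().strip())
        os._exit(0)

    # Privileged "picker" side: pretend the user chose /etc/hostname.
    app_sock.close()
    chosen = open("/etc/hostname", "rb")
    socket.send_fds(broker_sock, [b"user picked a file"], [chosen.fileno()])
    os.wait()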
If you want my honest opinion as someone who did security work and wrote exploits a lot: pretty much all of the modern systems are fundamentally flawed at the design level. They are composed of millions of lines of unsafe code that is incredibly difficult to audit and fix. Linux, the kernel, might actually be the worst offender in this case because while systems like iOS continue to move things out of the kernel (e.g. the iOS WiFi stack is now in userspace as of iOS 16 and the modem is behind an IOMMU) Linux doesn't really seem to be moving in this direction, and it increases in scope and features rapidly, so you need to be careful what you expose. It might actually be that the Linux kernel is possibly the weakest part of Android security these days for those reasons (just my speculation.) I mean you can basically just throw shit at the system call interface and find crashes, this is not a joke. Windows seems to be middle of the pack in this regard, but they do invest a lot in exploit mitigation and security, in no small part due to the notoriety of Windows insecurity in the XP days. Userspace is improving on all systems, in my experience, but it's a shitload of work to introduce new secure APIs and migrate things to use them, etc.
Mobile platforms, both Android and iOS, are in general significantly further ahead here in terms of "what kind of blast radius can some application have if it is compromised", largely because the userspace was co-designed along with the security model. ChromeOS also qualifies IMO. So just pick your poison, and it's probably a step up over the average today. But they are still composed of the same fundamental building blocks built on lots of unsafe code and dated APIs and assumptions. So there's an upper limit here on what you can do, I think. But we can still do a lot better even today.
If you want something more open in the mobile sector, then probably the only one I would actually trust is probably GrapheneOS since its author (Daniel Micay) actually knows what he's doing when it comes to security mitigation and secure design. The FOSS world has a big problem IMO where people just think "security" means enabling some compiler flags and here's a dump of the source code, when that's barely the starting point -- and outside of some of the most scrutinized projects in the entire world, I would say FOSS security is often very very bad, and in my experience there's no indication FOSS actually generally improves security outside of those exceptional cases, but people hate hearing it. I suspect Daniel would agree with my assessment most of the fundamentals today are fatally flawed (including Linux) but, it is what it is.
I keep thinking about this too: everything should be containerised so you can control access to shared resources like the clipboard or file system with absolute certainty, even stopping certain apps from accessing anything.
I can’t discuss my former role in too much detail, but it has convinced me that all the above is insufficient in a number of very realistic threat models.
One issue is that software has vulnerabilities and bugs. I’m not talking about the software that users run in sandboxed environments. I’m talking about the sandboxed environments themselves. I’m talking about cryptography implementations. I’m talking about the firmware running in the “trusted” hardware.
The other major issue is as you alluded to: the need to trust vendors and hardware. Without protection and monitoring at the physical level, the user has no way to verify the operation of the giant stack of technology designed to “protect them”. Without the ability to verify operations, how is the user to trust anything? Why do companies tell users to “trust them” without any proof they are trustworthy?
This may seem like a minor point, but this is really the crux of the issue. Building this giant house of cards on top of a (potentially) untrustworthy hardware root of trust does not buy anyone anything. Certainly it does not buy “security”.
Large companies and nation states are the most likely adversaries one wants to be wary of these days (e.g. journalists, whistleblowers, etc.). What good does the technology do them if the supply chain is compromised or vendors are coerced to insert backdoors? These are the threats that actually face people concerned about security, not whether their executables are run in a sandboxed VM. Great, you’ve stopped the adversary from inserting malicious code into your device after purchase. Good thing for them, they did it prior to or during manufacture.
The technology you alluded to above is mainly useful for protecting company IP from end users, IMO. That’s how I’ve mainly seen it used, and the marketing of “security for the user” is a gimmick to justify the process.
EDIT: I forgot to mention this entire class of security issue since I am used to working on air gapped systems. I don’t care if you are operating in a sandboxed VM with a randomized MAC over a VPN over Tor. If you’re communicating with any other device over the Internet, you have to trust every single other machine along the way. And you shouldn’t.
> Why do companies tell users to “trust them” without any proof they are trustworthy?
You know the answer here, they are not to be trusted.
Samsung phones, for example, have a gpsd, which phones home at random times. This runs as root, ignores vpn settings (so no netguard for you!), and if it is just getting updated agps info, it sure seems to send a lot of data for that.
So no, they don't want a legitimately auditable device. Too many questions, you see.
It is also worth mentioning, since I didn’t realize it until I worked in depth in the space: your CPU is not the only place to execute code, or the only place with access to hardware.
> The other major issue is as you alluded to: the need to trust vendors and hardware. Without protection and monitoring at the physical level, the user has no way to verify the operation of the giant stack of technology designed to “protect them”. Without the ability to verify operations, how is the user to trust anything? Why do companies tell users to “trust them” without any proof they are trustworthy?
At the end of the day you need to trust Qualcomm or MediaTek. The Oracle of the hardware world, with more lawyers than engineers, or... MediaTek.
AFAIK there are no phones on the market with open-source baseband firmware, so you have to trust one of Qualcomm, Broadcom et al with access to all cellular communication. Do you have a best of breed supplier you’ve vetted?
And even if you _could_ trust the baseband on your device, there’s the problem that the cell tower is running software you have no visibility of.
If I were the NSA, that’s where I’d be focussing at least some of the attention of the “exploit people’s phones” department. If you want cellular connectivity, the cellular provider needs a real time way to identify your device and its location (at least down to nearest-few-cell-towers accuracy).
(And once I had some capability there, the people running the “most secure” basebands/devices would be the ones I kept the closest eye on. I’ve heard my local intelligence service is very interested in phone switch on/off events, because they are a signal that someone might be attempting to evade surveillance, and the rarity of “normal people” switching their phone off (or disconnecting from the cellular network) makes it worthwhile collecting all that “metadata” so they can search it for the “interesting” cases.)
I don't think you can trust any commercial baseband, period. They all have to adhere to complex radio standards, nobody wants to implement them because their design-by-committee stuff is boring/lame/hard/painful, so what you get is a few stacks that pass the tests and everyone builds on top of that. Same goes for any other RTOS-style firmware, it's really hard to get right, and because most of them are built by/for the 'device' world, they often have very long release cycles, slow development etc. just like say, head units in cars or painfully slow touch screens on devices that should just have buttons (like office style coffee machines, ATMs etc).
WiFi is in a similar position, but at least the diversity is a bit better causing integration tests to fail better when too many bad implementations try to talk to each other.
That leaves all the other chips, which I think are best trusted in a divide and conquer setup where they all have to independently verify their blobs, and not be allowed to mess with each others memory/internal state. It can make them more expensive, but it also compartmentalises them in a way that side-channel attacks within CPU cores are completely mitigated.
Only a very, very, very small number of companies in the world have the capital, expertise and manpower to make devices where enough of the stack can be trusted, and out of all of those, only some actually seem to try:
- Google (mostly for UX, but also to make fat stacks of cash via ads)
- Apple (mostly for UX, but also to make fat stacks of cash via ecosystem)
- Microsoft (console, ARM windows, mostly for DRM, but also UX)
- Sony (console, mostly for DRM, but also UX)
- Nintendo (console, mostly for DRM)
Besides the vertical integration they can achieve, there is the problem of their 'personalities' usually not being a good fit for people who want to go off the deep end in terms of security, privacy, control, feelings etc. But anyone and everyone else simply cannot do to silicon what needs to be done, even if just because of the lack of IP.
The more a company does _not_ want to get burned on their security/privacy positions and keeps iterating making better designs, the more you _could_ trust them. The only realistic alternative is going back to 80's computing, and nobody has time for that.
I agree with your point about how the personality of (technology) companies doesn’t fit the security minded person well. As a software engineer, I feel I am in the small minority whenever I mention security or privacy concerns at work. I feel that most technology is evil at this point because of the immense ability of it to violate people’s privacy. It is sad to me that this is the state of the modern tech ecosystem.
It's just back to the worldview of the 1960's. The only computers were mainframes and were seen as the tools of "The Man", conspiring in the shadows against a free humanity. PCs were a liberating force, making computing accessible to the common man. Modern total reliance on global networks reversed the power balance back to central instances.
There are open source LTE stacks; they just suffer from a lack of power efficiency. Not too bad in a gaming laptop form factor, but quite bad in a smartphone.
While respecting your “I can’t go into details” comment, I’m curious to hear whatever you _can_ comment on about what sort of adversary has the capabilities you describe, and do you have an opinion on whether they use those in tightly targeted attacks only, or do they compromise the entire hardware/software supply chain in a way that they can do “full take surveillance” using it?
If I’m not a terrorist/sex-trafficker/investigative-journalist, can I reasonably ignore those threats even if I, say, occasionally buy personal use quantities of illegal drugs or down/upload copyright material? (With, I guess, the caveat that I’d need to assume the dealer/torrent site at the other end of those connections isn’t under active surveillance…)
All of this is public knowledge and has nothing to do with my role:
Nation states, especially the US, should be suspected of having compromised everything. Look at all the things Edward Snowden released. Look at the way the NSA has corrupted cryptographic standards in the past (e.g. Dual EC DRBG). There are countless instances of similar situations.
All that looks good on paper, but a lot of apps require full disk access and can easily run in the background, so how "trustable" can that really be in practice?
With iOS at least I know that apps really are sandboxed and cannot access anything unless I grant permission. No app can ever attempt to access my photos unless I explicitly pick a photo or grant partial/total access. Even then it's read-only or "write with confirmation, every time"
Well, both of your complaints have already been addressed. Android introduced the scoped storage system to remove and fix abuse of "full" disk access, and it also added the foreground notification system, which forces a system notification to be displayed if any app is doing work in the background, so that you know about it.
Right, but if the average real-world Android experience lags behind say iOS in terms of security, then the point, even if outdated, still serves to disprove the parent’s premise that AOSP is the most secure.
On GrapheneOS you can choose specific storage scopes, even if the app is requesting full user storage access.
And you can deny the file access permission like any normal permission; most modern apps request music or videos and photos, and rarely does an app request full file access.
Hasn’t this all been long true for iOS as well? There are reasons to hate it, but a walled garden is safer in many ways (as long as you trust Apple). You mention baseband - Android hardware comes with max 3 years of baseband support, compared to 7-9 years on iPhone. The story is similar when it comes to stock OS support. So from my pov, iPhones can be a comparable value (security and otherwise) to the best Android has to offer, specifically because of their (usable to me and the next guy to own my phone) 7-9 years of life, compared to 3 years max with a Pixel. What am I missing here?
Accuracy. You're missing "making accurate statements" here.
And context - neither Apple's proprietary practices nor "trust" of them are applicable solutions to the problem domain here.
A 7-9 year old iOS device is crippled today, with non-user-replaceable parts (both software and hardware). A 3-year-old Pixel is newer than the one I use, running sshd and accepting connections only from the bearer of my private key.
The market for old iPhones drops sharply, and is just for folks with an iCloud account. The market for old Pixels allows a Pixel 6 to be sold for >$1000 less than half a year ago (with a more trustworthy third party OS installed), ready for the user to choose whether this trust model of a pre-installed OS is even sufficient.
> A 7-9 year old iOS device is crippled today, with non-user-replaceable parts (both software and hardware).
A 7-9 year old iPhone has user replaceable parts (hardware), as does the equivalent Pixel. It also runs perfectly well.
> A 3-year-old Pixel is newer than the one I use, running sshd and accepting connections only from the bearer of my private key.
That's a lovely thought. The baseband manufacturers with their own blobs disagree though.
> The market for old iPhones drops sharply, and is just for folks with an iCloud account. The market for old Pixels allows a Pixel 6 to be sold for >$1000 less than half a year ago (with a more trustworthy third party OS installed), ready for the user to choose whether this trust model of a pre-installed OS is even sufficient.
This is simply untrue. The Pixel 6 is available for $900 AUD new and about $500-600 used. The iPhone 13 (same year) starts at $1200 AUD new and about $800-900 used. The value held is extremely similar, and in fact drops less for older Apple phones than Android ones likely - as the GP mentioned - due to more than double the years of OS support from the vendor.
It's fine for you to decide that the fruit company is less trustworthy than the advertising company, but don't spread nonsense to try and sell your point.
My impression is that Android is compromised by design because it so easily leaks data that any defense against it already became futile. Apps might not have permissions to do anything, but the same is true for users.
Not that a desktop with Windows would be any better. But I don't trust the OS itself, no matter how many layers of virtualization you put between it and the code I choose to execute. The weak link is already provided by design. This isn't a technical criticism of Android, but of the whole platform being opaque and paternalistic.
Not a fan of trusted computing because I doubt it will ever be used in the interest of users. It will be a requirement for some services, which I don't want to further support in their endeavours.
> Android and ChromiumOS are likely the most trustable computing platforms out there
I talked to a security researcher specializing on Android at a conference and he didn't sound like he'd agree.
While I personally think ChromiumOS does a good job, I think a huge problem is how liberally complexity is added. And complexity is typically where security issues lurk. This has been seen again and again.
It's also why I think projects such as OpenBSD do such a great job. Their main focus seems to be reducing complexity (which they are sometimes criticized for). A lot of the security seems to come from the reduced attack surface you get. And then the security mechanisms they build on top, which are typically more easily implemented because of said simplicity, are the next layer.
And I think OpenBSD has reached a sweet spot there, where it's not some obscure research OS, but an OS that you can install on your server or desktop, run actual workloads on, heck, even play Stardew Valley or a shooter on, but still have all the benefits in terms of security and simplicity that you get from research OSes, Plan 9, etc.
So maybe not mainstream, but mainstream enough to actually work with. There are sadly many projects that completely ignore the reality around them, also because their goal is to simply be research projects and nothing more. Then we have those papers that hardly anyone ever looks at, on how in a perfect world all those big security topics could be solved. Unless some big company comes along and puts it into some milestone.
With ChromiumOS, the limitations you get seem similar, probably even more severe, compared to OpenBSD's flexibility. That's something many de-Google projects struggle with as well. At the same time the complexity remains a whole lot bigger. Of course goals and target groups are hugely different.
I think both Android and ChromiumOS used to put more emphasis on simplicity, but gave it up at some point. I am not sure why, but would assume that many decisions are simply company decisions. After all the eventual goal is economic growth.
That locking down of mobile devices is not just there to increase security; it has the beneficial side effect of controlling the platform. This might not even be directly intended by the security-focused developers, but it is a side effect.
So "most trustable" in that scenario comes with "most gate-keeping", "least ownership", etc., which we are kind of used to on smartphones, tablets and Chromebooks. So I think comparing it with other kinds of mainstream OSs isn't really leading to much.
This page has stuck with me since I read it regarding openbsd. It's a bit mean spirited, but I think openbsd mostly benefits from its own obscurity.
https://isopenbsdsecu.re/
But the nice parts of ChromeOS, as far as security properties go, are the way it can be "power washed" between usages, along with a desktop Linux base that has fewer binaries installed than most. And things that are built in are typically built atop Chrome's sandbox.
I used to joke with my friends who ran TAILS Linux that my grandma with her Chromebook had the same threat model.
No other general-purpose OS that runs on my laptop has the track record of OpenBSD: only 2 remotely exploitable security holes in the default installation since ~1996. And then the other mitigations let you control carefully what more attack surface to expose--those mitigations dramatically reduce it. I appreciate the general lack of privilege escalation 0-day exploits, as seen over time.
Serious question: How big is OpenBSD as a target for malware, exploits, viruses, etc. ?
OpenBSD's track record is impressive but is it a significant target compared to Windows, MacOS, and Linux?
It is easy to say "only two bullets have ever penetrated my armor" when hardly anyone is shooting at you. I do not know if this is the case because I have never used OpenBSD and I do not know how widely it is used (headless servers, embedded devices, etc.).
If that were true you might also expect FreeBSD and NetBSD to have similarly low levels of exploits, and my general impression (not research, just reading around) is that they have had more exploits in that timeframe (privilege escalation bugs, whatever).
You can read more about OBSD's security by going to https://openbsd.org then clicking the "Security" link near the top left.
ps: FreeBSD seems to have more users than OpenBSD, and NetBSD seems to have fewer users than OpenBSD.
On OpenBSD do you even need to compromise the kernel? Can't your normal user account install a backdoored browser, steal your ssh keys, DDoS people, start a VNC server, keylog, etc.?
I guess that is possible if you try to do so -- compile your own stuff, or download and run it. But if you act normally instead, installing just what you actually need, from the package repository (where most things are "pledged" and "unveiled" which adds some impressive protections), I think the chances are much lower than with other OSes.
Also I separate things by user account, so I don't do my general browsing as the same user that does my programming, which is again separate from bank access, which is separate from .... So the kernel is providing a lot of protections.
(And I usually browse without images and javascript, which is not OS-specific but a suggestion. Many things I use don't require it, and I can flip it on or configure it specifically for those that do.)
There is a package repository where many of the packages have been "pledged" and "unveiled", meaning that they execute with fewer unneeded privileges. And a general lack of privilege escalation exploits in base. And privilege separation by running things as distinct users. So jails might be less needed or less helpful by comparison, overall. There are chroot jails though; not sure why anyone would think they are not available in OBSD.
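For anyone unfamiliar with what "pledged and unveiled" buys you: pledge(2) and unveil(2) are plain C calls a program makes on itself early in main(). A minimal sketch of the idea, OpenBSD-only and called here through ctypes just for brevity:

    import ctypes

    libc = ctypes.CDLL(None, use_errno=True)

    # unveil(2): after the list is locked, only the unveiled paths are visible
    # to this process (with the given permissions); everything else is ENOENT.
    libc.unveil(b"/etc/resolv.conf", b"r")
    libc.unveil(None, None)            # lock the unveil list

    # pledge(2): from here on only the promised syscall groups are allowed;
    # anything outside "stdio rpath" (e.g. exec, sockets) kills the process.
    libc.pledge(b"stdio rpath", None)

    print(open("/etc/resolv.conf").readline().strip())   # still allowed
    open("/etc/passwd")                # now fails: hidden by unveil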
OpenBSD doesn't have jails. Jails take effort to setup. It's much easier to just run the malware instead of going through the effort of making a jail for it.
PRISM revelations showed that the state worked together with large companies like Google and Apple, MS, Facebook etc to gather information. The CIA also had backdoors into popular mobile and desktop OSes. I'm afraid we can't trust any device right now.
Law enforcement has never had problems with iPhones. iPhones in the default configuration back up all data to iCloud with Apple keys, allowing Apple and the FBI to read all of the photos and messages on a device at any time, without the device.
The "Apple vs FBI" thing was a coordinated PR campaign following the Snowden leaks to salvage Apple's reputation.
The price of an Android zero day has been slightly higher than the price of an iOS zero day since late 2019. Make of that what you like
https://zerodium.com/program.html
I suspect remarkably few people are qualified to objectively say which is more secure. If you're an expert on one, you're unlikely to be an expert on the other.
Security is multidimensional. It's unlikely there will be a platform that's more secure from every possible angle. What's secure for you might not be secure for a less technical person, or a world traveler.
In terms of privacy, Android is "compromised" by default, i.e. Google collects and stores a ton of private information about you. I believe Apple used to be much better, and still is, but getting worse.
> In terms of privacy, Android is "compromised" by default, i.e. Google collects and stores a ton of private information about you.
Apple also does the same. Also, this only applies if you're running an Android device "out-of-the-box". Fortunately, there exist AOSP forks that mitigate this type of intrusion (e.g. GrapheneOS).
iOS is a proprietary OS making security research unreasonably difficult with new setbacks on every new version. It can only be regarded as reasonably private and secure if you trust the Apple marketing team.
My understanding is that Apple has gotten a lot better about this with their bug bounty payouts and providing debug hardware to researchers, and it’s not like there’s not a ton of proprietary code running on most consumer android devices.
I would also assume the fact that their vertical integration all the way down to silicon is an advantage here as well.
In Android you at least have the choice to run a fully open source OS and open source apps, albeit with some driver blobs.
With the exception of the blobs, everything on Android is auditable.
Meanwhile very little of MacOS or iOS is auditable.
Personally I do not use or trust any of the above, but if forced to choose Android is worlds ahead of iOS in terms of publicly auditable privacy and security.
You cannot form reasonable confidence something is secure unless it can be readily audited by yourself or capable unbiased third parties of your choosing. This means source code availability is a hard requirement for any security claims. Even if you had teams de-compile everything you could never keep up with updates.
Not all open source code is secure, but all secure code is open source.
The bug bounty is pretty hard to actually get access to, there’s still no source outside of the kernel, and the Security Research Devices are really hard to get access to. You have to be someone they’ve heard of, in a country they approve of, you can’t move the device around, and you have to sign your life away to get it for 12* months.
What @Irvick is talking about is the fact that you have more freedom to test the security in an Android than in iOS, such as being able to flash other systems.
Isn’t most of the value here in not allowing sideloading? In iOS your grandma/child cannot be tricked into clicking “allow apps from untrusted sources”, which is how most breaches happen.
If the apps are sandboxed, how can installing an app cause breaches? As far as I know, iOS apps are sandboxed.
Either the sandbox is very weak and Apple instead relies on App Store audits, or they disallow users installing apps outside the app store to protect their 30% tax that makes them a LOT of money.
The malicious app, even signed and from the App Store, could also exploit unpublished vulnerabilities to gain elevated access and not require asking for permission, even or especially if it's not a full sandbox escape.
The problem is that third-party OEMs don’t have to run AOSP, they can easily replace any and all code with malicious call-home backdoors, and still pass CTS tests.
As far as I can tell, there is no meaningful protection in place to prevent OEMs from poisoning the Android well (and the Android brand), even without considering the black box firmware running on wifi/BT/LTE/5G modems.
Most of your bullet points are reinventions of standard technology or incidental complexity.
Cryptographic verification of the boot chain with a hardware root of trust is real. Heavily sandboxed userspace is real. Everything else would seem to be a reimplementation of common best practices (disk encryption), or a mitigation of a self-created problem (there shouldn't be binary driver blobs running on the main CPU to begin with).
And from what I remember, a plain AOSP install seemed to still phone home to Google to check for Internet connectivity and whatnot. It's awfully hard to put my faith in an operating system primarily developed by a surveillance company, as the working assumptions are a drastic departure from individualist computing. And trying to question those assumptions with independent devs is often dismissed (for a particularly striking example, see LineageOS/"Safetynet").
> Everything else would seem to be a reimplementation of common best practices...
True, but those protections are enabled by default (on Pixels at least). Users don't have to do anything here.
> And from what I remember, a plain AOSP install seemed to still phone home to Google to check for Internet connectivity and whatnot.
You're not wrong, but GrapheneOS and CalyxOS are valid options, if you don't trust the ROM Pixel ships with. Even with a custom ROM you're left trusting the OEM. It'd be nice if we could have an open hardware / open firmware Android, but it hasn't happened, yet.
Sure, but full disk encryption was also enabled on my Mom's Ubuntu laptop 15 years ago, because I chose the correct options when I set it up. What commercial vendors offer out of the box has never been a good yardstick for talking about security features, and it's only gotten worse with the rise of the surveillance economy.
My fundamental problem with Graphene/Calyx is that I don't trust the devs have enough bandwidth and resources to catch all the vulnerabilities created upstream, especially with the moving target created by rapid version churn. For example, Android is finally getting the ability to grant apps scoped capabilities rather than blanket full access permissions, which is actually coming from upstream - the Libre forks should have had these features a decade ago, but for their limited resources.
Concretely, what discourages me from going Pixel is the Qualcomm integrated baseband/application chipsets. I've heard that Qualcomm has worked on segmenting the two with memory isolation and whatnot, but their history plus the closed design doesn't instill confidence. Yet again it's the difference between the corporate perspective of providing top-down relativist "security" and the individualist stance of hardline securing the AP against attacks from the BB.
Pragmatically, I know I should get over that and stop letting the perfect be the enemy of the good (I'm currently using a proprietary trash-Android my carrier sent me. The early 4G shutdown obsoleted my previous Lineage/microG). But every time I look at Pixels it seems there's so damn many "current" models, none stand out as the best but rather it's a continuum of expensive versus older ones (destined to become e-waste even sooner due to the shameless software churn). And so I punt.
> What commercial vendors offer out of the box has never been a good yardstick for talking about security features
It absolutely is. Default settings matter a lot!
It's great to have extra security features too. But even experienced users won't change defaults if they have too much cost. If things are turned on by default then those costs diminish because other software has to work within them.
My point was that when talking about commercial security offerings, the security models generally end up relying on "trust the company", which has never worked out well. So corporate offerings finally coming around to having full disk encryption is more catching up with something they lacked, rather than advancing the state of the art. (Contrast with Android's process sandboxing, which seems like a genuine advancement and could be worthwhile to port to desktop Linux.)
In the context of talking about individual actions one can take to trust their personal machine, it's reasonable to assume this involves appropriately configuring your software environment. If this was instead a thread about what products would be good to recommend to your parents, then what was commercially available off the shelf would be relevant.
These are all part of the operating system and unrelated to the biggest attack surface: apps.
Users install apps and grant them full access all the time. Android phones are wide open no matter how secure the operating system itself is, because the security model for apps is so weak from a user experience point of view.
Windows does all of those, in addition to fine grained access controls. I would go so far as to say that the Chromium sandbox implementation is better than on Android because of the ability to completely de-privilege processes.
> Windows does all of those, in addition to fine grained access controls.
Aren't those still user-based? So, in the typical case of a single-user computer, a rogue app that I run with my account would be able to access my data.
There's the protected folders thing (not sure about the name) since a few versions ago, which attempts to block random apps from accessing random folders.
But it's opt-in (folders are not protected by default, you have to add them one by one). It's also all or nothing: either a given app is "untrusted" and it can't access any of the protected folders, or it's "trusted" and it can access all of them.
I can't say I trust Photoshop to only touch my pictures folder, but not my .ssh folder.
Windows struggles with feature adoption though. Win11 helped with the TPM requirement and features on by default, but MSIX apps are still underrepresented so userspace sandboxing is weaker. Windows virtualization-based security is great though, imo it's a significant advantage over Android
MSIX doesn't implement sandboxing. Apps can opt in to being sandboxed via that tech, but you can also write totally unsandboxed apps. There's also a very light weight app container mode called (internally) Helium which just redirects some filesystem and registry stuff, but the goal is to make uninstalls clean, not security.
The Windows kernel does offer an impressive number of options to lock down processes. Look at the Chrome sandbox code some time. The Windows API is huge but you can really lock it down a lot. The macOS sandbox architecture is, however, the best. The Linux approach is sadly in third place.
App silos are pretty neat, I also find it kind of shameful that server silos are locked down to server SKUs. I would figure there could be some significant benefit to using them in browser renderer processes.
I think they recently added a way to opt out of it entirely. That said, it does very little and because it's not a sandbox it's easy to 'escape':
• Copy an EXE to %TEMP% and run it from there.
• Use a Win32 API flag to start a process (of any path) outside the app container.
There are only a few cases where Helium is an issue and they're all easy to work around. One example is writing a log file to your app private UserData directory and then opening Notepad on it. If you do it the 'naive' way then Notepad won't be able to find it, because you'll pass a redirected path which it can't see. The fix is simply to resolve the redirect before passing the argument to Notepad using a Win32 API call.
I know we talked about Conveyor a few days ago so I'll note here that if you package a JVM app with it then the %LOCALAPPDATA% and %APPDATA% environment variables are rewritten to their target locations automatically, so as long as you use them to decide where to write app private files then their paths will be visible to other apps, avoiding the Notepad issue. This doesn't apply to native or Electron apps though, at least not at this time.
Does LineageOS have a place alongside GrapheneOS and CalyxOS?
Asking because I base hardware purchases on whether LineageOS is available for the device and wondering whether I should restrict further to GrapheneOS or CalyxOS.
There's overlap between CalyxOS and LineageOS. CalyxOS (privacy-focused ROM, currently Fairphone and Pixel-only) has different goals to LineageOS (Android ROMs for as many phones as possible); while GrapheneOS is a security-focused distribution. DivestOS is another credible alternative.
I didn't know that Pixels allowed custom ROMs to re-lock the bootloader with custom keys. Digging into it, I found that even OnePlus and Fairphone might have that feature.
What do you want to verify exactly? Do you think Apple is lying about what lockdown mode does? Why would they do that?
Could you at least say what your opinion is based on?
But it is possible to verify what it does, the same way you would for an android phone (I.e. not just look at the source code and hope that it matches what’s running on your device).
https://youtu.be/8mQAYeozl5I
At 26:42 he talks about lockdown mode. Would be a bit weird if he lied about the impact lockdown mode has.
What if he’s wrong? Computers do things their programmers don’t expect them to literally all the time. Security bugs generally come from a mistaken assumption about how something behaves.
He doesn’t have to be a liar to be telling you untruths about how it works.
My android comment was taking yours, turning it around and taking it to the extreme to illustrate a point.
And no, I never asked why we would need to verify the security researcher’s claims (but sure, you should).
1. Dma54rhs says Apple’s (!) claims supposedly can’t be verified and that you need to take Apple’s word for it
2. I ask why not, provide a link to a talk about iOS security by a renowned security researcher, as both an example of how to verify Apple’s claims (reverse engineer iOS) and to lend some credence to the point that they are likely to be true
3. You talk about the researcher and/or programmers being wrong by replying with an “orthogonal” comment containing “whataboutism”.
Edit: Could we please talk about the actual topic? Do you or someone else know about instances where Apple lied about mitigations like lockdown mode before? Maybe there’s a long history of it and I just don’t know. Or is there some other flaw in my logic?
There is always the argument about hidden bugdoors, backdoored compilers or whatnot. But that’s not practical, by then you might as well stop using technology.
If Apple can’t be trusted then why can you trust google? Or Qualcomm?
You’re the one derailing from the actual topic, which was broadly can we trust our devices and specifically can we trust iOS, by muddying the water with what-about-android. The question wasn’t which we can trust more, the question was whether and how much we can trust Apple.
You can’t verify that iOS is doing what Apple says that it’s doing, because you can’t read the code. You can’t trust that Apple perfectly understands their product, because it’s extremely complicated, and therefore you can’t just take their word for it. I’ll state that the check here, although it’s painfully obvious, is that exploits happen. Researcher opinions are fine, but facts are better.
None of this is in any way contentious or new, it’s the exact debate about open-vs-closed that we’ve been having since the beginning of software.
Considering the security UI layers control device access, I don't think AOSP or Android in general (with its many vendor customisations) is the most trustable.
Knowing a PUK code for a SIM card you own (and can insert/hotswap) was all you needed to unlock practically any Android phone until recently. Granted, this got reported and then fixed, but it doesn't matter how good the TCB is if the front door is wide open.
I'd say that actual trust is hard to come by because you cannot trust what you see, since what you see is merely what is 'presented'. If a device says something like "the only Root CA I trust to sign my stage 1 boot loader is X", I still don't know if it is lying or not. I also can't do something like replace a SoC BROM since it's fused RO or simply a ROM (and not EPROM or Flash), or because the sources for that are owned by the SoC manufacturer, which isn't AOSP or Google, and I cannot inspect, build and run them. Hell, we can make this worse: even if I could flash it, who's to say that the memory I flashed is also the memory that is read when the SoC comes out of reset? What if there is a separate area on the die that has a different ROM that nobody told us about?
So trust isn't going to be purely based on "because this is the design we present you", but has to be based on non-technical factors and independent research. The former is mainly based on soft factors, and the latter is hard to come by and often just based on individual devices, not even an entire SKU release.
Architecturally, it seems to me that Apple with their own SoCs, bootrom, RTKit, iBoot etc. has a stronger platform trust case because they actually own the stack all the way with nobody else having a say about it. Especially with the spreading around of individually signed and verified hardware ROMs that don't even run on the same chips in the same device, a compromise would be very limited in scope. The only other hardware/software combination that would come close is the aforementioned Pixel devices since Google has almost all of the stack there as well.
On the x86 side it's a mess that will never be resolved; nearly every technology that was supposed to make it more trustworthy has been used to reduce trust and install persistent access outside the view of the OS. AGESA, IME, TXT, SGX, even the SMM implementations before any of those came along had problems that essentially circumvent any trust that was built up by other means. Even the hardcoded certificate signature hashes in the CPUs are coming in range of easy brute forcing (SHA1 mostly), which means that entire decades of systems can now think they are running trusted software from the reset vector all the way to the OS, just because a signature hash used a crappy algo that was never intended to be used that way.
Windows is probably only ever going to be boot-trustable (but not OS-trustable) on ARM, just like macOS root-of-trust is pointless on anything before the T2 chip (and M1 later on). For Linux, it's about as trustworthy as you want to make it, but putting in the work is a PITA, so unless a distro or derivative (Qubes, ChromeOS etc.) does it for you, most users leave it as-is (untrusted).
> Architecturally, it seems to me that Apple with their own SoCs, bootrom, RTKit, iBoot etc. has a stronger platform trust case because they actually own the stack all the way with nobody else having a say about it.
That's why I specifically and only mention Pixel in my comment. Google's been doing their own hardware since Pixel 6. Daniel Micay, creator of GrapheneOS, once said (iirc) that Google shares firmware / proprietary code for the Pixel hardware "if you ask nicely enough".
Like you point out, eventually one is left trusting a BigCo or, worse, assuming a flawed implementation is secure. Though it isn't for want of trying. Or, to put it another way, "the best among the rest".
Great question. I don't anymore. Decades ago when I had a 286 and knew what each file did and what all the software was, and threats were limited and crude, I had good confidence of controlling my machine. Today, when my laptop has millions of files and each website - even hacker news - could inject something malicious and my surface is so broad (browsers applications extensions libraries everything) and virtually anything I do involves network connections... I just don't have the confidence.
FWIW, I try to segregate my machines for different categories of behaviour - this laptop is for work, this one is for photos and personal documents, this one is for porn, this one is if I want to try something. But even still, my trust in e.g. software VLANs on my router and access controls on my NAS etc. is limited in this day and age.
I feel today it's not about striving for zero risk (for 99.99% of people), but picking the ratio of overhead and risk you're ok with. And backups. (Bonus question - how to make backups safe in the age of encrypting ransomware?)
I have a backup NAS that's normally powered off, but it's scheduled to turn on, perform backup, shut down.
It doesn't wake on LAN and there should be no way of knowing it exists outside of checking DHCP static addresses reservations - and now that I mention it, maybe I should remove it from there too.
This minimises the size of the window, and network-snoopable information, required to compromise this set of backups.
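For anyone curious, the moving parts are simple enough to script; a rough sketch for a Linux-based box (paths, hostnames and the 03:00 schedule are just placeholders, and many NAS OSes expose the same thing through their own scheduler UI):
#!/bin/sh
# Runs as root from cron shortly after the RTC alarm powers the box on:
# pull the data, arm tomorrow's wake-up, then power off again.
rsync -a backupuser@primary-nas:/volume1/documents/ /volume1/backup/documents/
rtcwake -m no -t "$(date -d 'tomorrow 03:00' +%s)"   # set the RTC alarm only, no suspend
poweroff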
I think, perhaps ignorantly, that may prevent some human being or intelligent agent from specifically targeting your NAS.
I don't think it would help against situations where your primary system is being encrypted for a while, and thus your backups eventually get overwritten with bad stuff.
The data being backed up is in tiers of importance or 'frequency of change', and based on this the backups are staggered, some daily, some weekly.
I wouldn't often go a full week without checking some file or other, so I think I'd know pretty swiftly if I got infected with an encrypting ransomware virus - hopefully quickly enough to minimise damage.
I also do off-site backups on occasion, so I could roll back to the last full set of off-site backups - so long as I remember where/when the most recent one is.
My main adversary I'm protecting against is hardware failure, however. I think I run a pretty tight ship in restricting access to data that I'd rather not lose.
Maybe some hubris payback will come visit one day though...
>The data being backed up is in tiers of importance or 'frequency of change', and based on this the backups are staggered, some daily, some weekly.
This right here is the key to backups: triage your data to understand what is truly important. I roughly put things into three buckets
- Priority 1 - potentially disastrous if lost. Financials, taxes, legal documents, and password vaults. The nice thing about most of these documents is that they are immutable and likely append-only (eg you only have one set of 2020 taxes). For most people, this amount of data should be well under 1GB and require only sporadic backups. Which means you can purchase a bolus of $10 thumb drives, encrypt the collection, and leave them everywhere. Mail an annual copy to mom, leave one in your bag, at the office -wherever.
- Priority 2 - anything you created which does not fall into Priority 1. Home pictures, videos, your 1000 half-baked programming projects, etc. Potentially a much larger collection for which a real backup system becomes necessary.
- Priority 3 - everything else which is theoretically replaceable. The archive of music you "acquired", backups of youtube videos, personally ripped DVD collection, etc
It's a bit complex and almost always in the midst of a structural change due to new learnings[0][1][2], so describing it in text will be a bit difficult both to read and write. I am planning to do a nice write-up with diagrams once I get myself a proper platform to broadcast myself (which it's taken me ten years to continue to fail to do, so don't hold your breath).
Firstly, as fbdab103 said:
> This right here is the key to backups: triage your data to understand what is truly important.
Know your data, know how often it changes, know its level of importance to you, know what duration of loss you're willing to incur (or are likely to incur based on hardware cost limitations).
Hardware:
- Primary NAS with large internal storage and matched external USB storage
- Secondary NAS with same storage size as Primary NAS
- 2x Servers running Proxmox, each with enough internal storage for their VMs' data
- Spare older hardware from past upgrades (not live, but as redundancy in case of disaster)
Software:
- Proxmox
- Syncthing
- rsync
- NAS software (QNAP, Synology, TrueNAS, or whatever else)
Configuration:
I have a central store of documents on the Primary NAS, protected by username / password access. Each family member has their own directory.
Mobiles backup photos and videos to a temporary / throw away location on the Primary NAS via a Syncthing docker instance hosted on one of the VMs. This destination is then backed up (twice?) daily using a couple of rsync scripts to separate photos and videos into a permanent location on the Primary NAS.
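(For the curious, the sorting part is nothing fancy; roughly something like this, with made-up paths and only a few extensions shown:)
# Move photos out of the Syncthing drop area into the permanent location;
# a second pass with video extensions targets the videos location instead.
rsync -a --remove-source-files \
  --include='*/' --include='*.jpg' --include='*.jpeg' --include='*.heic' \
  --exclude='*' \
  /share/incoming/ /share/photos/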
The secondary NAS has a daily wake up and backup of these 'home directories', photos and videos to both its internal and external storage.
Each VM is hosted on the Proxmox server's internal storage (I used to have all the VM disks hosted on the NAS and connected to Proxmox via iSCSI, but the centralisation to the NAS became an issue when the NAS was lost - see New Learnings 1[0]).
Each Proxmox Server has a weekly scheduled backup of all the VMs to both the Primary and Secondary NAS's, and at least 4 historic backups, in addition to the current, are kept, which Proxmox is configured to manage.
(This is actually against what I said earlier about the Secondary NAS only 'fetching' backups - that's how it used to do it, but this went against the 'purity' of having a direct backup rather than a copy of a backup, which is its own interesting topic: a direct backup is more trustworthy than a copy of a backup because there's less opportunity for data corruption, so the theory goes.)
The primary NAS has a weekly backup of these VM backups from its internal storage to external storage.
The secondary NAS has a weekly wake up and backup of these VM backups from its internal storage to external storage.
The secondary NAS also has a weekly wake up and backup of other media stored on the primary NAS, which gets copied to both internal and external storage.
The external storage of either or both of the NAS's can be disconnected and stored elsewhere if I go on holiday, then retrieved and reconnected upon return.
In addition, I do ad-hoc backups to older HDDs and SSDs of different tiers of data, where Family Photos and Videos are probably the most important sentimentally, and financial / household-management documents are the most important practically. These get spread amongst friends and family and are poorly documented (which I need to improve, lest they find said 'spare' hard drive or USB and decide to use it for something far less important).
(I have also recently setup backups for docker instance configuration data, but they're a bit redundant given that these are encapsulated by the VM backups, but it helps me sleep easier to have them backed up separately - these are a weekly script, similarly copied weekly from Primary to Secondary NAS internal and external storage)
[0]: New learnings 1: In mid-August last year my primary NAS became unavailable suddenly and irretrievably with no notice - an incident I'll be documenting in detail some time in the future
[1]: New learnings 2: Due to some power issues late December 2022 I'm rearranging the setup to ride through such things a bit better. This doesn't directly relate to the backups, but some of the hardware / configuration setup has changed, which has a bearing on the backup configuration.
[2]: New learnings 3: Stagger some of the backups in case of the source getting corrupted / encrypted maliciously (may require a manual final step for a 'known good' backup to a more-secure destination)
> and thus your backups eventually get overwritten with bad stuff.
I use rsync's --link-dest to get Time Machine-like backups so they can't be overwritten through the backup system itself. I don't take the "shut it down" precautions with this system the other person does, though I do have a separate in-place one that's strictly manual (as in I have to physically plug it in to update).
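A minimal sketch of the pattern, assuming a destination with dated directories and a 'latest' symlink (paths are made up; the very first run just warns that the link-dest doesn't exist yet and does a full copy):
DATE=$(date +%F)
# Unchanged files become hard links into the previous snapshot, so each dated
# directory is a complete, independent point-in-time copy.
rsync -a --link-dest=/mnt/backup/latest /home/me/ /mnt/backup/"$DATE"/
ln -sfn "$DATE" /mnt/backup/latest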
I would agree, but I feel the GP specifically talked about "Backup NAS", which is where my question pertains.
I do have a set of hard drives I exchange with my friend as "off site" backup. Neither my cloud service nor NAS keep copies/snapshots of previous versions of files; and I have too much stuff to create Blu-ray, let alone DVD, one-off backups :0/
If your main computer has had all its files encrypted by ransomware, will the backup NAS know not to replace the good backup with a bad one? i.e. hopefully it's not doing something like `rsync --delete`.
That's not hard. On my backup server, I have an account with a restricted shell that my backed-up machine has an ssh key for. That restricted shell is simply the following script:
#!/bin/bash
# Restricted login shell for the backup account: the ssh session's stdin is
# unpacked as a tar stream into ~/backups and nothing else is possible.
cd ~/backups || exit 1
# --restrict disables risky tar options; --keep-old-files never overwrites; -f - reads stdin
tar --restrict --keep-old-files -x -f -
So the only thing that the backed-up machine (or an attacker) can do with the ssh key is push new files onto the backup server as a tar stream - it can't overwrite any files, and it can't put any files anywhere except the correct directory.
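From the machine being backed up it's then just (key path and hostname being whatever you set up):
# Stream a tar archive over ssh; the restricted shell on the other end
# unpacks it from stdin and can do nothing else with the connection.
tar -cf - /home/me/documents | ssh -i ~/.ssh/backup_key backup@backupserver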
For my server I went one step further and set it up so that the server being backed up has no access to the backup system at all and instead only prepares encrypted (differential) tar archives that the backup system then grabs.
Restic is my current backup solution of choice and that takes snapshots by default. If you took a backup on Monday, ransomwared on Tuesday, your backup volume would double in size (file de-duplication is going to fail spectacularly), but the Monday backup will be unimpacted until a prune operation is run. Suggested prune workflow is to maintain fairly staggered snapshots eg: 1x six-months ago, 1x three-months ago, 3x from last month, whatever makes you comfortable. Which should give a pretty comfortable margin on retaining your files.
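For reference, the staggering is just flags on restic forget; something like this (the retention numbers are only an example, and the repo obviously needs an initial 'restic init'):
export RESTIC_REPOSITORY=/mnt/backup/restic-repo
export RESTIC_PASSWORD_FILE=~/.restic-pass
restic backup ~/documents
# Keep a staggered set of snapshots and drop (then prune) the rest.
restic forget --keep-daily 7 --keep-weekly 5 --keep-monthly 6 --keep-yearly 2 --prune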
> (Bonus question - how to make backups safe in the age of encrypting ransomware?)
This is actually a solved problem, with many solutions. In a nutshell, you need a system that has enough space to keep enough copies without overwriting the ones that are too fresh. It also must not be controllable by the host that you back up, but this is kind of obvious.
> Today, when my laptop has millions of files and each website - even hacker news - could inject something malicious and my surface is so broad (browsers applications extensions libraries everything) and virtually anything I do involves network connections... I just don't have the confidence.
It doesn't matter how many files your computer has and how many millions of lines of code it runs. There is a concept of Trusted Computing Base (TCB), which is the part of the code that you have to trust.
In Qubes OS it's only about a hundred thousand lines, and doesn't include any browser. The key is security through isolation.
You run your browser and network in virtual machines and assume that they are compromised. You keep your sensitive files in an offline VM.
> Give the backup process write only (I.e. no delete permissions) to a GCP account.
I've looked into this before, and it is just not that easy. "Write" is delete, for most cloud storage systems, for the practical purposes of trying to keep a backup safe. (I.e., you might not be able to delete a blob in some bucket, but if you can write to it, you can just overwrite it with 0s.)
"WORM" (write-once read-many) tends to be the term to search / gets the right documentation from most providers. In GCP's case, it appears to be "set up a retention policy", and that's similar to my experience with other providers. These bring their own set of problems.
That said, encrypting ransomware isn't going to magically determine where your backups are, and for most orgs, having the backup at all (and having it tested) is the priority, not the whole WORM thing.
(Orgs, IMO, also tend to get really uppity about having "database" backups, where "database" == {MySQL, Postgres, etc.}. But then there will be an S3 bucket that also has a bunch of data in it, and that never gets backed up, and nobody even questions that. And half the time it seems impractical to back up, too, due to a mix of cost and S3's design.)
You mean the case when ransomware encrypts your backups themselves? Normally they skip encryption of executable files, so you can hide backups like that.
My even shorter (and incomplete) summary of the document would be: configure your router and firewall; remove default passwords and crapware from your devices; use a lock screen; don't run as root; use a password manager and decent passwords; enable 2FA everywhere you can; enable anti-malware if your OS has it built in; don't run software from untrusted sources; patch regularly.
There are also other controls that you can choose to impose on yourself. For example, I require full-disk encryption, and I will only use mobile devices which get regular updates. Would be interested in hearing other things that HN'ers do to limit risk.
> enable anti-malware if your OS has it . . . Would be interested in hearing other things
Given the most common network activity is web browsing, it seems like enabling protections in the browser is becoming mandatory for the security-conscious.
For me this amounts to enabling NoScript and uBlock[edit: [0]] plugins in Firefox, desktop and mobile versions, and disabling or locking down various "features".
An additional step I take is to use several browser profiles for different purposes (mail, banking, shopping, default, to name four) so that Firefox always asks me to pick a profile on startup. As well as reducing the possibility of XSS, this lets me relax the settings for some profiles where I restrict myself to a small number of trusted sites. (This may well be overkill!)
0: uBlock Origin, that is, as per instructive comment below[1]
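(The profiles above can also be created and launched from the command line; the profile names here are just examples:)
firefox --ProfileManager            # create "banking", "shopping", etc.
firefox -P banking --no-remote      # launch a specific profile as its own instance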
Firefox multi-account containers can be a more convenient way to isolate things, especially now that the container can be limited to only the allowed sites.
I also made an app and extensions to help me use multiple browsers, one per site. (Browsr Router)
Regarding the containers, I started my profile compartmentalization practice before they arrived, so I never explored them. But I'm curious whether each container comes with a complete set of browser permissions, like profiles do, which would enable you (for example) to have a location-enabled container specifically for Google Maps (in reference to[0]) while disabling it on the container used for search, as you can quite easily do with profile compartmentalization? (Which, tbf, is a privacy rather than a security concern.)
>Given the most common network activity is web browsing, it seems like enabling protections in the browser is becoming mandatory for the security-conscious.
What I am looking for is an easy way to run something like a LiveCD OS in a VM for browsing. The problem is that I have never found a decent LiveCD that has Firefox with all of the mandatory extensions (uBlock Origin, etc...). I guess I could customize my own LiveCD, but last I looked into it, doing so seemed complex and too time consuming to figure out.
Posting this in the hopes of being steered towards a simple solution or to inspire someone to create one.
You should be safe to avoid this unless your threat model includes a trusting trust type exploit on Nix generating the ISO.
Also, just realized it'll be a little more complex because you'll want to use home-manager to install Firefox plugins and do about:config configuration.
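One shortcut that might help, whatever distro the live image is based on: Firefox's enterprise policies can force-install extensions, so baking a policies.json into the image avoids per-profile setup. A rough sketch (the /etc/firefox path works on many Linux builds, others want distribution/policies.json next to the binary, and the AMO URL is the usual "latest xpi" pattern):
mkdir -p /etc/firefox/policies
cat > /etc/firefox/policies/policies.json <<'EOF'
{
  "policies": {
    "ExtensionSettings": {
      "uBlock0@raymondhill.net": {
        "installation_mode": "force_installed",
        "install_url": "https://addons.mozilla.org/firefox/downloads/latest/ublock-origin/latest.xpi"
      }
    },
    "DisableTelemetry": true
  }
}
EOF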
Here are some examples of that in various contexts:
Windows AppGuard is close to this, although it's a Hyper-V silo, not a full VM. Edge can open links in AppGuard (which is what this technology is called) right from the context menu, super convenient.
That sounds interesting and I like that Edge is one of the few browsers that has vertical tabs built in. The thing is that I don't trust MS and assume that they are thieving my data in any way they can...
MS AppGuard is a clone of Bromium, now branded as "SureClick" for Chromium and bundled with HP business laptops/PCs, with a security co-processor ("SureStart") monitoring firmware integrity.
> Would be interested in hearing other things that HN'ers do to limit risk.
Mostly the same basics as you. The document you linked is a good starting point.
I'd add extensive use of virtualisation and sandboxing. I run less and less software as native, installed applications on any device I use personally or professionally. Instead it tends to run inside things like VMs or Docker containers or cloud-hosted platforms now.
My basic policy is to try and make every device and installed application expendable/replaceable in case anything breaks or gets compromised and then focus on the data. I apply the principle of least privilege for access to any sensitive data, try to keep all important data in standardised formats and avoid lock-in effects as much as reasonably possible, and keep good back-ups under my own control with the ability to redeploy/restore anything quickly and as automatically as possible.
100% with you on the entirety of your last paragraph.
I generally use a device as an access mechanism; a configured window into the data. This configuration is the only thing lost when a device is lost. No data, no function, no service. Configure the replacement device and continue as you were.
Virtualisation and Docker-isation makes backups and restores almost enjoyable.
Not sure I fully understand the context of the question, but for a device to gain access it needs to be allowed into the local network by being in wifi range or authorised on the VPN, plus have username/password access to the service/data.
If, however, there's a backdoor I don't know about, then I'm pwned.
Since you mention routers, I’m curious what brand you use.
Since Ubiquiti started down the cloud-first path I've switched to Mikrotik. While they do seem to have regular CVEs (which is good, I think?), they also don't seem to have a public bug bounty program.
I bought a Turris Omnia used on ebay for about a hundred bucks. Very powerful, open source hardware & software, good community, adblock server built in if you're too lazy to set up a pihole, enough compute power to host lots of neat services on it.
> Since Ubiquiti started down the cloud-first path I've switched to Mikrotik
I was thinking about getting a Ubiquiti router because it has good support for setting up wired VLANs without needing to go down the path of finding a solid OpenWrt router.
Is it really true that you can't access the router's dashboard and configure things without associating an online account to your router?
> Is it really true that you can't access the router's dashboard and configure things without associating an online account to your router?
That might be true for their UniFi line, but EdgeMAX devices work fine without an online account. EdgeRouters run a fork of Vyatta, with configuration files and command line operations that are fairly easy to work with. They also have a web UI for common configurations, though I have little experience with it.
A bit of warning: EdgeMAX support has been declining in recent years. Security updates are still published, but bugs don't seem to be addressed as quickly or consistently as they were in the past, and some forum users have expressed doubts about what kind of support will exist in the future. That said, my equipment is still doing fine.
I think OpenWRT has been ported to at least one Ubiquiti router, so that might be a good fallback if EdgeOS support ever ends. I wonder if anyone here has tried it.
They did add back non-cloud support a while ago after the cloud forcing didn't go over well, but it is a second-class experience.
The day after I set my UDM Pro up as non-cloud, it corrupted the login somehow and I had to factory reset it and redo all settings (as it hadn't been running long enough to run any automatic backups first). I capitulated and just set it up again as a cloud login to avoid having the same fiasco at some random point in the future again.
I immediately regretted going with the UDM to save (quite) a few bucks over an OPNsense/pfSense appliance to fit my requirement of handling a full 10G of WAN.
I will admit having the app to monitor usage remotely is a somewhat neat trick. I find the firewall configuration in GUI obtuse enough to be next to unusable.
> I immediately regretted going with the UDM to save (quite) a few bucks over an OPNsense/pfSense appliance to fit my requirement of handling a full 10G of WAN.
As someone who bit the bullet and sold all their UniFi gear on eBay and switched to OPNsense and Ruckus, I'll tell you that it's been absolutely worth it.
It's definitely still possible to use without cloud now, what I meant by "started down the path" is that it seems like the direction of the company is not aligned with what I want.
I was in the market for more hardware and I had to decide whether to increase my investment in Ubiquiti or make a change, and I chose the latter.
I like your shorter list - but I'd suggest concerned people also avoid the obvious attack surfaces:
* Windows
* storing data on other people's computers (cloud apps)
(Yes, they're all safe under the right circumstances. But the right circumstances are far from universal.)
And losing access to important stuff can be worse than other people seeing the stuff - anything behind a Google (etc.) account can be lost in a moment, due to their mechanised decision making and inability to meaningfully contact humans.
If you can't always easily contact a helpful and authorised human, the continuing existence of your stuff is a gamble. And no, setting off a twitter storm and hoping that the company will be embarrassed into restoring your data is not a suitable contact method!
> Do you lock your computer every time you leave your desk?
This was a corporate requirement where I used to work, unofficially reinforced by the local jokers who would rotate the screen and / or send prank messages if you didn't.
> unofficially reinforced by the local jokers who would rotate the screen and / or send prank messages if you didn't.
Same here. The all time favorite is sending a resignation notice to the person's manager (the manager usually gets a fair warning first and plays along with it).
Check for keylogger thumbdrives: I use a laptop so it would be immediately obvious. But now that you say it I haven't checked the charger USB-outlet on the back of my cabled keyboard.
[1]: It has happened that I have failed. Once a year or something.
[2]: I sometimes try to allow myself to go downstairs in my own house to fetch a cup of coffee without locking when I am alone, but I find it so stressful in practice I always lock it. I know I don't need to, but it is a good habit. I'm otherwise normal :-)
What bugs me is when this is applied to remote workers in a way that seems optimized for in-office environments.
For example IT enforces that your screen becomes locked after 15 minutes of inactivity and also ties your local computer's user login password to your SSO login to access everything. It's a contradiction around password best practices: if you force people to input their password multiple times a day then naturally people will gravitate towards easier-to-type passwords.
If the idea is "but what if you go AFK in a public place and forget to lock your screen?!?", that's not a valid reason. If you were working in a coffee shop and went to use the restroom for 4 minutes or turned your back for 2 minutes then your machine could be compromised (or even worse stolen). It's extremely reckless to leave your gear unattended in a public place.
It can really break morale to input your password and MFA half a dozen times a day, especially when you're alone in a locked apartment where the laptop hasn't left that location in a year.
> What bugs me is when this is applied to remote workers in a way that seems optimized for in-office environments.
> For example IT enforces that your screen becomes locked after 15 minutes of inactivity
If your OS is MS-Win, try playing an audio file when you don't want the auto-lock to go off. Provided IT's "checkbox security" parameters [1] did not include turning this off, MS-Win does not lock the screen on inactivity timeout while an audio file is playing, so looping an audio file is a way to prevent the auto-lock from triggering. Note that this won't help with any 'presence' indicators that go "idle" or "away" with no activity for some time.
If this works, then you can create an audio file of 'silence' with sox to use to play back when you don't want the auto-lock to trigger:
sox -n silence.wav trim 0 10:0.0
Creates a ten minute long wav of 'silence'. If you want it smaller, compress the wav with lame into an mp3 or fdkaac into an aac file. Then launch playback of the silence file, and set windows media player to "loop" when it reaches the end of the file.
[1] Much corporate/govt. IT "security" is "checkbox security". It is the equivalent of IT having a "compliance form" with a long list of "configured settings" with check-boxes next to each, and so long as they can go down the form and "check all the boxes" they deem their setup "secure". Whether it is actually secure is not important, just that it "checks all the boxes" on the "compliance form".
That's really clever. This is a company issued Macbook (macOS is a requirement not a choice btw) but I'm guessing there will be something similar that you could do.
This puts you into a grey area though no? You could make a case this is willingly trying to circumvent security protocols which could be grounds for being fired.
Oh yes. Having my machine lock very fast when I wfh is really annoying. Past that, with so many systems to log into, I sometimes feel like all I do is authenticate and authenticate all day long. SSO is probably saving a lot of this, but at my megacorp it still sucks.
The issue is more other members of your household. Your roommate, kids, spouse, etc. Policy and regulatory requirements don’t allow incidental disclosure to people like that, and the company has no relationship with them.
I dealt with this as a policy issue recently. Controls like aggressive screen lockouts are one of the few options available to allow some categories of workers to work outside of a company controlled premises.
The argument that you live alone etc is irrelevant as I have no idea (and don’t want to know) whether that’s true. I can tell you that people have done shockingly dumb things with remote work and the company has to try to control risk as best it can.
> Controls like aggressive screen lockouts are one of the few options available to allow some categories of workers to work outside of a company controlled premises.
What does the policy really protect against?
If it's being locked out after 15 minutes of inactivity because of roommates or kids it doesn't protect you against anything in the grand scheme of things. For example if I leave my office for lunch and you step in 10 minutes later then you have a solid 40-50 minutes to do whatever damage you plan to do while I'm gone.
The only time it makes a difference is if it locks really fast, such as 30 seconds but then using the computer naturally would be ridiculous because you couldn't stop touching the keyboard or mouse without being locked out.
Also, what if your room mate planted cameras in your office that let them see exactly what keys you're pressing on what screens without ever compromising the machine itself? Now everything is compromised and they have full rein to do whatever they intend to do.
> The argument that you live alone etc is irrelevant as I have no idea (and don’t want to know) whether that’s true
This is the real problem. Everyone gets treated like an equal criminal when in reality none of the measures taken really do anything to provide the security they were designed to do. It reminds me a lot of "for the children" but applied to corporations for "compliance reasons".
I'd be more ok with the precautions if they worked.
It reduces risk by minimizing disruption if you’re following other work rules. If we locked it in 30s, people wouldn’t be able to work.
Re: the “treat people like a criminal” take. A common approach organizations are taking is employee surveillance. I don’t want to know that your girlfriend has a conviction or that your kid sits next to you with sensitive data on your screen, etc. And I don’t want to force you into an office.
There’s a difference between “security” and risk management. If the discussion was pure security with low/no risk tolerance, you’d be working on a locked down terminal server in an office.
> For example IT enforces that your screen becomes locked after 15 minutes of inactivity
If you're on windows, there's a powertoy[1] called "Awake" that can keep your screen on indefinitely despite IT rules. I liberally use this on machines I'm remotely connected to because there's 0 reason why a remote session should lock if I'm active on the computer looking at a different window.
My Windows Surface unlocks with their depth-camera. I don't know if big corporations allow this sort of thing on their laptops, but it's very handy for us in our small consulting business.
I've worked in places where that once a year slip-up would mean you sent an email offering to buy lunch for the team or get your background changed to a David Hasselhoff pinup picture from the 80's. I do feel weird locking my computer when I'm alone though.
Screenshot of the desktop, rotate that 180 degrees and set as background. Hide all icons and the taskbar, then rotate the whole screen 180 degrees. Maybe a bit of tape underneath the optical mouse.
I work from home and still lock my screen because I have cats that will walk on my desk if I'm not looking. 4 paws and an open vim or slack window are a dangerous combination!
Yeah, colleagues have happened once, then never again.
I use a laptop with a docking station for work and take the laptop home with me every time I leave, so I would have noticed if this had happened.
When home, I always have to lock or my cat would typeeeeeeeeawww
> Would be interested in hearing other things that HN'ers do to limit risk.
We have a pretty standard setup at work: screen times out after 15 minutes, and co-workers teach you pretty fast to lock your machine.
At home, where I use a MacBook Air, I work in a physically unstable situation...in a rocking chair, on my lap. Whenever I stand up, I close the machine, which locks it immediately. If not, I risk the machine sliding to the floor.
For the rest, I run a pretty esoteric setup (compiled-from-source custom configured linux kernel with no binary blobs; all software compiled from source, with no exceptions; aggressive, burdensome-to-me privilege separation; chroots and VMs for various degrees of potential threat; etc). I have no illusions that it is perfectly safe. What I am comfortable with is that, in order to compromise me, you would have to know a lot about what I run and how I run it. I believe that I would have to be nearly individually targeted to extract any useful data from my machine, and that I am not nearly a valuable enough target for anyone to do so. I think you would have to be a state-level actor or someone with similar capabilities to compromise me, and none of them would care enough.
My security paranoia stems from extremely sensitive work I did as a lawyer long ago, but I am now so used to it that I carry on as a scientist, even though my current work is not nearly so sensitive (if at all). I give up a lot of convenience and some functionality to operate this way, so it is not for everyone. I am not an adversary to anyone, so outside state actors surely don't care about me. And my own government can just get a warrant and knock on my door, so they don't care about me either.
Embedded device firmware besides the bios is probably my main vulnerability, but if you're successfully getting at me through my hard drives or mouse, then I was surely an incidental rather than actual target.
I'm genuinely curious: Do you check/audit the code you compile and run on your machine? Going with the assumption of "no": How is it then different than downloading a prebuilt version from an official source?
I should say, I will run binaries on VMs and feel very little threat from doing so. The "with no exceptions" referred to the main host OS. I should have been more clear about that.
To answer your questions, oh hell no; definitely I do not audit source code myself. Though I have rarely. I do it this way, and it is different enough for me, because someone could audit the source in theory. If someone did audit and found a security problem, then I could check to see if my source was also compromised. If I install binaries, then I might not ever be able to know if my binary was compromised. Maybe someday if reproducible builds are guaranteed to be bit-perfect, then I would use binaries from reputable sources, but that would only happen in the case where third parties are compiling from source and affirming the reproduction. In that case, why not just compile it myself?
Developers who publish compromised source are going to get burned. Developers who publish compromised binaries are going to say, "omg we must have been compromised by someone else." Obviously it is possible for third-parties to compromise source, but I'll go with what I see as the lesser threat.
If the cost of compiling was high, then that might make a difference. For me, the cost is negligible, which makes it a no-brainer for me.
If anything I think this underscores the parent comment - open source is not inherently more secure than closed, it just adds another potential avenue (source code audit) to ensure security.
If nobody actually audits the source, and the closed-source binary has had other types of testing done on it, it's likely that the closed source binary will be more secure.
I think for hardware-level code and things like the BIOS, the only way is to trust the manufacturer. You could do that, but if they are not large enough to have fully vetted and trusted vendors then you're back to square one. So I think only in this sense does it say something about the high degree of security of devices made by Apple.
I assume it is, per Intel ME / AMD PSP's ability to read everything - memory, CPU registers, disk, inspect all network traffic, directly utilize onboard GbE for bidirectional communication.
For adversaries below the level of the US intelligence agencies, I run everything virtualized and compartmentalized with Qubes, the installation image for which I verified against the dev-provided cryptographic signature. I try to rigorously avoid any software operated by Google, Amazon, Microsoft, Apple, Facebook, disable all JS by default in my LibreWolf browser, refuse to connect directly to websites protected by Cloudflare, audit source code for almost everything I run in userland, etc etc etc.
This is all for my personal machine. For work devices, I assume they're pwned even worse and I do nothing but actual work on them.
On the mobile side, GrapheneOS on a Pixel for my first phone, and a linux phone with hardware killswitches for bt/wifi, cam/mic, and baseband for my second phone.
All of this in addition to solid fundamentals like network traffic monitoring, very restrictive firewall, offline encrypted hardware password manager with no password reuse, etc.
There are many ways to do this, most utilizing some kind of proxy-like architecture for all requests, or just to retrieve cookies. My personal favorite for retrieving cookies is FlareSolverr.
For strictly reading public webpages, public paywall bypass tools and archive sites work pretty well.
Since this is a ridiculous troll, generally speaking the only way to address this is to visit the site from a local library, then immolate the computer with thermite and explode the remainder with TNT.
Not trolling, just extremely conscientious about privacy and security, far outside cultural norms / the Overton window, though frankly, that's not saying much in an age where almost everyone has smart assistants all over their house, insurance companies tracking accelerometer/gyro/gps data from every drive they take, and people exclusively chatting on systems produced by major corporations that are logging everything indefinitely and eagerly cooperating with law enforcement / the intelligence community without even requiring warrants for data requests - almost all closed source and with a client-server architecture with no way of even auditing how much is being recorded, whether any of these companies are actually honoring their privacy policies or not, etc.
I understand that trust by default is an enormous aspect of our economy, and trust may be characterized as the single most valuable commodity, but my work involves adopting a strict zero trust mindset, and it can be difficult to turn that off after so many years.
Isn't a (non-Android) Linux phone a security downgrade unless you get Qubes running on it or something? (And secure boot, and an HSM of some sort, and...)
They also have a "made in USA" version (though the case is made in China and the wifi chip is made in India). It's even more expensive, and I certainly don't trust that USG/IC hasn't attempted to backdoor this either - we know a lot of ARM processors have the TrustZone, which is very similar to ME/PSP (I believe an ARM TrustZone core is actually how PSP is implemented), but the lead time is much shorter.
Like others here are saying, you can never be 100% sure. But that doesn’t mean there’s nothing you can do.
If you’re worried about the impact to your broader organization (which is what most of the sophisticated threats tend to target), you should think about risk mitigation through the Swiss Cheese defense model. Each system is inevitably going to have holes, but layering them on top of one another will incrementally improve your coverage.
For instance:
- Your team should be trained about phishing attacks. But inevitably some will get through, so…
- You should implement 2FA in case a password is compromised. But a threat actor may be able to capture a 2FA-passed SSO session token, so…
- Production access should be limited to a small number of individuals. But even they might get compromised, so…
- You should programmatically rotate credentials to make old leaked credentials useless. But a newer one might be captured, so…
- Data should be sufficiently encrypted at rest and in transit, and…
- Your team should have an incident management system and culture in place to quickly respond to customer reported incidents and escalate it to the right level and…
- Audit logs should be tracked to understand the blast radius in case of compromise
- and so forth
When you look at incidents like CircleCI and LastPass, a good security organization will understand that there was more than just one point of failure and should talk in detail about how they are shoring up each level.
Exactly this. Security is more about defense-in-depth, incident response and recovery planning.
Personally, I assume the hardware is already compromised and plan for recovery accordingly, starting with the worst-case scenario. Then, I ask myself "If this thing isn't compromised yet, how can I help it stay that way?", starting probably with network access, through firmware, all the way to the browser.
You really can't, anymore. You can watch traffic and hope that anything nasty isn't communicating with the outside world, but then there's all sorts of side channels that you may not know to watch.
At some point you just have to admit there are limits to privacy and work with them. Your paper journal could be stolen and read / rewritten too, y'know? It's not a new problem, it's just in a new context.
I only have limited trust. Between 3D printing slicers from Chinese companies, many packages from PyPI and Rust crates, there is always a danger that something is compromised somewhere.
I try to limit attack surface in the following ways:
- I only use M1 Macs as desktops. This reduces the attack surface in various ways. M1 Macs do not have anything like UEFI firmware; it all starts from the iBoot ROM and the whole chain is verified with signatures. The OS is on a sealed system volume that is read-only and signed. Altogether, this limits firmware/OS attacks.
- I use a U2F key and/or the Secure Enclave of the Mac for credentials (SSH keys, 2FA). They are set up to require user confirmation (see the sketch after this list).
- When possible, I will install applications from the Mac App Store, since they are sandboxed by default.
- I use separate work and private Macs.
- I clean and factory restore my Macs every few months.
- I use some tools like KnockKnock to see if there is anything suspicious.
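The SSH-key part of the U2F bullet above is roughly this (key path is arbitrary; needs OpenSSH 8.2+ and a FIDO2 token):
# The private key only works with the token plugged in and touched;
# verify-required additionally demands the token's PIN on every use.
# Use -t ecdsa-sk instead for older tokens that don't support ed25519-sk.
ssh-keygen -t ed25519-sk -O verify-required -f ~/.ssh/id_ed25519_sk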
Compromise is obviously possible, but I try to push it into 'mostly state actor' territory, because I am not interesting to most state actors.
I run QubesOS which compartmentalizes your usb ports, network card, and all your various application workflows into separate virtual machines. It is literally designed to protect you even if part of your system is compromised.
I don’t have ultimate trust in any software or hardware, but I get to “good enough” by deciding which providers I trust:
* Software: Canonical, Google, Microsoft, Valve, Oracle, Dropbox. I install software from their official repos and keep it up to date. Anything 3rd-party/unofficial/experimental/GitHub goes in a VM.
* Hardware: I built my main PC from mainstream commodity components. I have no way of knowing if there are secret backdoors but I consider it unlikely.
I use a password manager, I enable 2FA, I turn off things I don't use, and generally have a low-risk hygienic approach to computing.
I’m also privileged enough to not be a “person of interest” so don’t feel the need to take any extraordinary precautions.
Yes, I’m aware of VM escapes. Yes, I’ve read Reflections on Trusting Trust. I choose to trust regardless because life’s too short for paranoia. As Frank Drebin said:
“You take a chance getting up in the morning, crossing the street, or sticking your face in a fan.”
I don't consider it practical to take any countermeasures to the possibility of this threat. I think there's a ~10% chance it's a backdoor, and if it is, there's a 98% chance it would be at the behest of a branch of the US government, and I'm not currently an adversary of theirs.
(This is not an argument for mass surveillance, it's just a practical assessment of the risk).
Just to be clear, the question of whether it's a backdoor or not doesn't matter for whether bad actors use it. The capabilities are rather well known. Its vulnerabilities aren't, but vulnerabilities do not a backdoor make.
I trust that Google and Microsoft won't hack into my bank account and steal money, even though they could, but otherwise I assume they collect anything they want and can.
I caught Windows Defender "automatic sample submission" silently uploading places.sqlite out of my Firefox directory despite the assurance in the control panel that "We'll prompt you if the file we need is likely to contain personal information".
So now I disable automatic sample submission via group policy but Microsoft definitely can and will access files that they really have no business accessing.
They (hopefully) won't steal your ip or personal pictures, but there is so much telemetry and other phoning home going on, and without transparency or the option to really turn it off.
I have several layers of security, including an infosec mindset that comes naturally, but at the end of the day I don't really know. I have faith that if I were to be infected statistically it would be by some malware that would give itself away by mining crypto or doing something else very loud and disruptive.
Fun story but my laptop was actually hacked remotely once, without me knowing.
It was almost 20 years ago, some would call me a script kiddie. Just trying to be bad ass, trying to live the movie Hackers. Had a stolen laptop running FreeBSD, with a wicked bootsplash just like the kids in the movie.
So you can imagine I was moving with the wrong crowds online, having little defacing wars with other groups and shit like that. Caught the wrong kind of attention.
I say that infosec comes naturally to me now but pobody's nerfect and back then I had re-used a password in a weakly encrypted service database, someone hacked this service, found my password, found my ssh logins to the servers, and traced backwards to my laptop.
I don't remember the details but somehow working back from one server, perhaps to another jumpserver, they were able to get the IP for my laptop and actually login to it.
Fortunately for me they didn't do anything but gather data, they posted this on a wall of shame saying "another hacker down". I say fortunately for me because I had thousands of customers' data on that laptop, including CC#'s for the business I was running at the time. They missed all this, and the very next day I reinstalled my laptop and reset all passwords by pure coincidence. I had no idea I had been hacked, I just felt like reinstalling for some other reason.
Found their wall of shame posting later and felt very much ashamed.
This thread has inspired me to setup a tripwire for my workstation. It's something I used to use many years ago but I think it's a good setup to have some sort of alerting if files start changing.
Be warned that even 20 years ago when I was doing research on this stuff, that it was standard procedure to exploit the kernel. This included things like read or stat showing a valid file, but exec would run an exploited file. It also allowed hidden directories and files. So you couldn't find a sub directory with ls, but you could cd into it.
So you really have to boot from trusted media with a trusted kernel and no untrusted modules to be sure what you are seeing. Generally this involves rebooting onto trusted readonly media, doing a scan, then rebooting back into production. The HOWTO mentioned finding an unused kernel module like floppy.ko and replacing it with a malicious payload and ensuring it loaded on boot.
Also keep in mind that attackers are well aware of tripwire, and some attack kits I saw specifically looked for tripwire-like approaches and would hook into the checksum-update process after patching, so their exploited binaries would look just like valid binaries.
> This thread has inspired me to setup a tripwire for my workstation. It's something I used to use many years ago but I think it's a good setup to have some sort of alerting if files start changing.
I'm looking at current options, this[1] for example is packaged for Fedora, which is my daily driver.
But then I got to thinking, if I'm going to do a clean Fedora install for the tripwire (it's best practice) I might as well try Fedora Silverblue[2]. Silverblue is an immutable system so it kinda makes a tripwire less useful because no one can change any system files. Only files in your home directory and /etc can be modified statefully.
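If it helps anyone, the general shape of this with AIDE (a similar file-integrity checker packaged in Fedora; tool choice and paths are just the Fedora defaults as I understand them, not necessarily the tool linked above) is:
sudo dnf install -y aide
sudo aide --init                                          # build the baseline database
sudo mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
sudo aide --check                                         # report anything changed since the baseline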
The biggest thing is being deliberate about your threat model. Who would want to get onto your systems, and how much do they care about you in particular?
From there, take appropriate actions. For the vast, vast majority of us, that means using good passwords, updating software, and not running weird things from the internet.
If you’re worried about 0 click RCE in Chrome/Windows/iOS, you either should be getting better advice from folks outside of HN, or are being unrealistic about who is coming after you.
I worry so much more about the dumb hardware locks and secure enclaves, OS features etc. I find the risk of a compromised machine to be so much less of an impact on my life than my computer telling me I am not allowed to do something.
This is my computer, let me tell it what to do. I hate how much of my time is wasted by all this security stuff. Infinitely more so than had been wasted by actual malware over the last decade or so.
I don't want to have to spend 10hrs figuring out how to hide root from Android pay every time something upgrades. Please just let me have root on devices I own.
Ever since I started doing a lot of work in C where all the foot guns are intentionally left in I've had my eyes opened to how beautiful and fun computers can be when they aren't your fucking adversary.
"Security" that can't be disabled by the device owner is tyranny.
Root detection is about reducing risk to companies, presumably sideloading results in substantially increased risk. It’s not about you, it’s about their bottom line, but it can also help prevent grandma from losing their life savings.
I’m a security engineer and know what I’m doing and agree there’s some level of security theatre, but if you’re not worried about losing the Crown Jewels from a compromise you’re most probably uneducated or arrogant.
Of course I'm worried, I just attempt to understand my threat model, my vulnerability surface and act accordingly. I understand why security exists. I just want to be able to turn it off without having to resort to hours of research of security tech in order to control the devices I own. Obviously I wouldn't give access to an account with my life savings to my rooted phone.
It is about their bottom line, but largely not to protect me or grandma, it's about justifying control and divorcing people from the power of the supercomputer in their pocket. There's more money in making it hard for me to edit my hosts file etc. than there is in preventing the potential loss of my money/data through these restrictions.
Before defining any strategy, you need to define an appropriate threat model. What kind of information are you storing on your device, and who might target you?
If you are a standard person and not doing anything illegal, the information you need to protect is mostly financial and personal. So you need to protect your bank/credit card/crypto wallet with encryption and/or MFA. For personal information, use the same criteria, according to the level of confidentiality you want to achieve: it's stupid to encrypt your cat pictures, it may be worth encrypting your son's pictures, and it's mandatory to protect your health-related files, also with MFA.
This is just to give an idea; you should do this exercise frequently (let's say every 6 months) and verify whether the security controls are in place or need to be updated.
For my own devices, I am using this approach:
* Infrastructure: I am using a password manager for all my accounts and, where possible, I have enabled MFA. I have Cloudflare ZT on my home network, so I am somewhat protected against web threats. Moreover, I have a script that downloads phishing and malicious-IP feeds every day and updates my router's ACLs (see the sketch after this list). I am not exposing anything publicly; all the services inside my house are accessible through VPN. My Chinese cameras are heavily firewalled in a separate VLAN and reachable only from specific hosts. Every device is upgraded to the latest version and has no default passwords.
* Main laptop: runs Linux, so I feel a bit safer during web surfing. Anyway, I have an encrypted backup of important data in the cloud, just to ensure disaster recovery.
* Secondary laptop: runs Windows; I keep it regularly updated with scheduled MS Defender scans. My wife mainly uses it, but she does not install anything without my approval (I am the admin of the laptop).
* Phone: storage encrypted, access protected by a strong PIN and no biometrics. Applications are installed only from official stores, and I use a DNS blocklist. My phone has a native feature that reduces and audits app permissions on a schedule, and I also do it manually from time to time. If I have to connect to an unencrypted public network, I use a WireGuard VPN client.
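The feed-to-ACL script mentioned under "Infrastructure" could look something like the sketch below. The feed URL and output path are placeholders, and turning the resulting host list into actual router ACLs depends entirely on the router in question:

    #!/usr/bin/env python3
    """Sketch: pull public blocklist feeds and emit a deduplicated host list.

    Feed URLs and output path are placeholders; a separate job on the router
    side has to translate the list into its own ACL format.
    """
    import urllib.request

    FEED_URLS = [
        "https://example.org/phishing-domains.txt",  # placeholder feed
    ]
    OUTPUT = "/var/lib/blocklist/hosts.txt"          # placeholder output path

    def fetch(url: str) -> set[str]:
        with urllib.request.urlopen(url, timeout=30) as resp:
            lines = resp.read().decode("utf-8", "replace").splitlines()
        # Drop comments and blanks; keep one host or IP per line.
        return {ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")}

    def main() -> None:
        hosts: set[str] = set()
        for url in FEED_URLS:
            hosts |= fetch(url)
        with open(OUTPUT, "w") as fh:
            fh.write("\n".join(sorted(hosts)) + "\n")

    if __name__ == "__main__":
        main()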
Just my 2 cents. I hope I didn't forget anything and that this is helpful.
> "Compromised" meaning that malware hasn't been installed or that it's not being accessed by malicious third parties. This could be at the BIOS, firmware, OS, app or any other other level.
I don't believe there is a way to be 100% certain, but if I had to go to a store and pick a new device with the lowest likelihood of being compromised, it would be a desktop, a laptop, or a tablet running ChromeOS[1].
If you're just a rando who isn't likely to get specific attention from someone like the NSA or other state-backed threat agents, then the answer is: if everything is behaving normally, you're probably not compromised. For the bulk of people, if someone breaks into your personal device they're going to start using it for something. You'll see unusual utilization, your browser's proxy settings will get changed, or you'll just be hit by a ransomware attack, your drive will be encrypted, and you'll be locked out, etc. Attackers are after the bulk of the users out there, and they don't need to be particularly stealthy about anything.
Of course, if you have large quantities of BTC or something, then the answer is to get it off of your personal machine, set up a cold wallet that cannot be hacked, and stop installing clever-looking crypto shit on your machine.
Surprised not to see a mention of the Talos II system, based on IBM POWER9 technology, which is open spec and otherwise a very competent build, with a fully open-hardware FPGA mainboard and features like physical trip-jumper protection, plus the potential for customised security measures via the BMC, Arctic Tern, et cetera. IBM is notoriously good at virtualisation, and POWER9 is very competent for machine learning workloads; the 2U and 4U systems they offer can go up to something ridiculous like 176 threads in a two-socket configuration, and there are plenty of lanes. You can reprogram the firmware, too; it's all out there in the open, and you normally wouldn't need special hardware.
This is like the third time I'm hearing about these raptors in the past month and I want one. I want the $10,000 one. Your comment is the most constructive one in this thread, because it sounds like the solution for all the concerns expressed above has finally arrived. Who here is willing to put their money where their values are?
In reality, none of the "privacy freaks" will ever purchase one. I've recommended these to various privacy-minded people I know, and the majority either (1) can't see past the fact that it's IBM hardware (not hip), or (2) make up excuses like "DO YOU REALLY believe IBM would do a system without a backdoor? A system with a starting price of $5,500 must be a rip-off, not good enough a deal for me!" I know there are plenty of nice and horrible people in every crowd, but honestly, if I got two bob apiece for every FOSS cheapskate I've encountered I'd be well off; they're too similar to one another for me _not_ to attribute this to a cheapskate mindset. People say things like "Mac is a rip-off!!! Bad deal, Intel has better performance" while being clueless about power consumption, for example. It's much cheaper to say something seemingly profound like "There is no true privacy, we're all pwned by the NSA!!" than to purchase an expensive RISC ISA system based on free hardware and actively integrate it into your workflow... At the end of the day, we only have ourselves to blame.
"Ask HN: How do you trust that your personal machine is not compromised?"
The pivotal word in this question is "you". If you allow a third party, e.g., Google, Apple, Microsoft, a "Certificate Authority", etc., to decide "trust" on your behalf, then it is the third party that controls "trust", not "you".
A third party can tell "you" that "your personal machine" has or has not been "compromised". The third party can decide who to trust.
However, this is quite different than you deciding who to trust.
Under the trust models promoted by "tech" companies like the ones mentioned above, ultimately "you" are not supposed to be the one deciding trust. They want to do this for you.
Unfortunately, "tech" companies are themselves third parties and they may have commercial interests counter to yours.
Just some generic things that should help avoid or clean up after a compromise.
- Clean reinstall every month; I just pick a new flavor of Linux to try out. (This also helps ensure I have proper backups and scripts for setting up my environment.)
- Dev work I usually do in Docker containers; easy to set up and nuke environments (see the sketch after this list).
- Open source router with open source BIOS (APU2), firewall on it, usually reinstalled once in a while.
- Spin up VMs via scripts for anything else (e.g. games in a Windows VM with a passthrough GPU).
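As an illustration of the disposable-container idea above (not the commenter's actual scripts), a throwaway dev environment can be a few lines; the image name and project path here are placeholders:

    #!/usr/bin/env python3
    """Sketch: launch a throwaway dev container that is deleted on exit.

    Image and project path are placeholders; add "--network", "none" or
    resource limits for code you trust even less.
    """
    import subprocess

    IMAGE = "python:3.12-slim"    # placeholder base image
    PROJECT = "/home/me/project"  # placeholder source tree

    subprocess.run([
        "docker", "run", "--rm", "-it",
        "--mount", f"type=bind,src={PROJECT},dst=/work",
        "--workdir", "/work",
        IMAGE, "bash",
    ], check=False)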
From a more security-research point of view, the paper “Bootstrapping Trust in Commodity Computers”[1] is a very good overview, although it would need a bit of an update for more recent developments with e.g. dm-verity.
You generally don’t. It all depends on your attack hypothesis. Are you a Mossad target or a non-Mossad target? The best you can do if you are a non-Mossad target is to anonymously/pseudonymously periodically purchase new hardware and do a fresh OS install. Be minimalistic. If you can’t trust your wifi-enabled printer, disable its wifi connectivity and use it only over USB. If you still can’t trust it, don’t use printers to begin with.
Security is at odds with usability, so you can't run critical apps on your personal machine. For me, stuff that needs to be secure, runs on a separate machine that has nothing else installed.
The Librem 14 has a neutered Intel chip (no ME) among other things. My favorite privacy/freedom-respecting laptop. https://shop.puri.sm/shop/librem-14/
I'm reasonably sure that my personal machine is less compromised than average, but I can't, and will never be able to, ensure that it is not compromised, because I have no way to know everything the machine is trying to do. This remains true even with entirely free and directly inspectable hardware; you simply don't have the knowledge and time to verify everything. Just keep a reasonable amount of precaution and skepticism.
USA banks seem to have found the 2FA PowerPoint presentation and are forcing accounts to use SMS 2FA, with no ability to use something like an authenticator app. Nothing about SMS is secure, so their IT is taking a step backward.
SMS is still better than no 2FA, but yes, it’s not secure at all.
What really bugs me is that some systems rely only on the 2nd factor, which replaces the password completely. Some even do that with SMS: you put in your user name and then the SMS code. That's really bad. A lot of services also disguise this method in the "I forgot my password" function, where you can reset the password with just an SMS code.
Yes, that's true. It should really only be used as a second factor and not as a single factor for password resets. I've seen this too, and it's awful. If SMS is used for password resets, there should at least be an email notification and a waiting period of a few days.
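For reference, the authenticator-app alternative is just TOTP; a minimal sketch using the third-party pyotp library (the secret here is generated locally as a stand-in for whatever a real service would issue at enrollment):

    #!/usr/bin/env python3
    """Sketch: generate and verify TOTP codes (the 'authenticator app' factor).

    Uses the third-party pyotp package; the secret is a locally generated
    placeholder, whereas a real service issues it at enrollment time.
    """
    import pyotp

    secret = pyotp.random_base32()         # placeholder secret
    totp = pyotp.TOTP(secret)

    code = totp.now()                      # 6-digit code, rotates every 30 seconds
    print("current code:", code)
    print("verifies:", totp.verify(code))  # True within the allowed time window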
Are there any somewhat easy-to-use solutions to isolate a development environment? Preventing or at least decreasing the damage malicious packages could do? Like deleting files or uploading a private ssh key/keychain to a 3rd party server?
I was looking into things like GitHub Codespaces, I believe they're isolated per repository and integrated into VS Code, but I'd like something I could run on my machine or a server of mine.
Seriously: make multiple user accounts on your computer. That's the traditional UNIX way of enforcing isolation, and it goes back to the days of hundreds of people sharing one single UNIX machine.
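A minimal sketch of that approach, assuming a Linux box with sudo and a dedicated account created beforehand; the account name and the wrapped command are placeholders:

    #!/usr/bin/env python3
    """Sketch: run an untrusted build step as a dedicated low-privilege user.

    Assumes an account created beforehand, e.g. 'useradd --create-home sandbox-dev',
    and that your own home directory is not world-readable. The wrapped command
    is a placeholder. Run this from a directory the sandbox user can read.
    """
    import subprocess

    UNTRUSTED_CMD = ["npm", "install"]  # placeholder for whatever you don't fully trust

    # sudo -u switches user; -H sets HOME so your primary account's dotfiles
    # and keys are never within reach of the untrusted process.
    subprocess.run(["sudo", "-H", "-u", "sandbox-dev", "--"] + UNTRUSTED_CMD,
                   check=False)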
I don't trust that my machines are not compromised.
All I can do is to start with a machine I believe to be "clean", and take measures to keep it that way (others have suggested suitable measures). But even a brand-new machine might have a compromised BIOS, or compromised firmware in some peripheral processor like the Wifi adaptor.
I don't know how to guarantee that a machine is "clean" to begin with, and I doubt anyone else does.
The same way you can't be sure that when you drive to work today you are not going to die, you can't be sure that your machine isn't compromised somewhere. That's just how reality is: safety is an illusion.
Like with driving, make an effort to lower the probability to wherever makes you comfortable, then just accept that there's a non-zero chance it wasn't good enough.
Perhaps it's because I have a cold right now, but I interpreted "personal machine" to mean my body. And what is meant by "trust"? The word is really synonymous with "faith". That's why the correct phrase is "trust, but verify".
I understand that my "personal machine" - my body - is always compromised. I also have faith that no heinous actors are likely to try to compromise my body. But that is only because I am a nobody and have the good fortune to live in a safe place.
As for computers, I think the same logic applies. I have faith that no nefarious actors are striving to compromise my own machine specifically. But for many high-value targets, this would be a bad assumption. Witness the crypto thefts that have occurred by hacking individuals' computers.
I am no expert in counter-cyber-espionage, but my understanding is that it boils down to a) reducing attack surface, b) using trusted hardware, and c) using ephemeral "machines".
Is there any way to actually monitor traffic at the PC or network level for an individual user? I see a lot of odd behavior from TVs, IoT devices, connected bulbs and such; none of it seems necessary to me. I feel Windows could do a lot better at reporting, or at allowing users to clearly see, what is currently communicating across the network.
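At the network level, one low-effort option is a small DNS sniffer run from a vantage point that actually sees the IoT traffic (the router itself, a mirrored switch port, or the same Wi-Fi segment); a sketch using the third-party scapy library:

    #!/usr/bin/env python3
    """Sketch: log DNS queries seen on the local network segment.

    Needs root for raw sockets and a vantage point that sees the IoT traffic.
    Uses the third-party scapy package (pip install scapy).
    """
    from scapy.all import sniff, DNS, DNSQR, IP

    def log_query(pkt):
        # Only report packets that carry a DNS question section.
        if pkt.haslayer(DNS) and pkt.haslayer(DNSQR) and pkt.haslayer(IP):
            qname = pkt[DNSQR].qname.decode(errors="replace")
            print(f"{pkt[IP].src} asked for {qname}")

    sniff(filter="udp port 53", prn=log_query, store=False)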
"Assume breach" was the phrase they taught us at Microsoft (at least in 2016). I assume everything is compromised. So I make public and distribute/decentralize as much as possible.
I #BuildInPublic as much as possible on GitHub and GitLab and dedicate everything to public domain (http://pledge.pub/).
I have a number of computers and can be up and running on a new Macbook in under an hour.
I run multiple mirrored web sites.
I distribute crypto keys across ledgers and safety deposit boxes in multiple states.
Most importantly: I don't pay for insurance (except for mandated auto and homeowners). Instead, everyday I go out there and try to deliver as much good to as many people as possible, knowing that the best insurance when bad luck strikes isn't some check from some corporation, but the helping hands from your fellow neighbors.
> "Assume breach" was the phrase they taught us at Microsoft (at least in 2016)
The security(7) man pages on FreeBSD and DragonFly (I think originally written by Matt Dillon) also tell you to assume breach, for example for the root password, which is why you shouldn't allow password-based logins over SSH, etc.
This is the earliest I could find, and it already contains assuming breach in 1998.
Does anyone have an idea how to see the file and its history from where it was moved? I checked 4.4 BSD, because the copyright mentions Berkeley, but I failed to find anything in the man1 directory.
I daily drive an M1 Mac. I can't even change the initial boot screen wallpaper because it's on a sealed partition. The previous Intel/T2 approach of modifying and then blessing the modified partition doesn't work on M1. When I dug into the depths of this simple problem (changing the boot screen wallpaper before login), I came away impressed.
So I feel fairly confident about the machine firmware and OS. Less so about my keyboard, for example. Also, because I opt out of a lot of the security features (e.g. I download from Homebrew rather than using App Store apps), I can't be sure I'm not being compromised.
Of course, a nation state is unlikely to 'tip their hand' so to speak.
If the U.S. has backdoors on every PC, they're not going to bother draining the wallets of "small fish"; they need to keep these things secret so they can go after terrorists
Reminds me of the time I was watching a creepypasta horror movie about some guy who gets strange phone calls and my phone rang.
I think this guy had gotten my phone number from my HN profile and he thought I might be able to help him. He thought his android phone was infected by malware and he knew who did it. I told him the people who repair cell phones at the mall could do a system reset on his phone…. Unless he was dealing with state-level actors in which case it might be an advanced persistent threat and it might be permanent.
I don't. Every now and then I access gross stuff online to make uncle FBI or cousins Hacker man puke their meals and lose the will to snoop on my boring Facebook account.
And since the video card is old and internet is spotty, brother Bitcoin miner and ransomware delivery man won't have much to win either.
The rest of automated attacks have to go through basic PC os protection (firewall, antivirus, hardware locks for unwanted code execution, etc).
My threat profile doesn't include Mossad or NKVD, if they are out to get me then I accept that they will. Especially at the BIOS/firmware/other levels.
For some excellent advice on security and privacy based on thoroughly researched technical concerns rather than speculation or blind trust in any particular organization (e.g. Apple or Google or Mozilla), see here: https://madaidans-insecurities.github.io/ I found the Android and Firefox/Chromium evaluations particularly interesting.
Could you expand on this? How would I securely communicate from a device that, say, has a kernel level implant? This is one of those cases where SGX/TrustZone would be immensely helpful but nobody has built a messenger that actually somehow fully lives in an enclave.
If you assume every device you use is compromised, how can you possibly use any encryption?
I don't think so. While I am not sure about what "devices" means it's common practice for example to assume your root password is always compromised. The consequence here is that you don't allow remote, password-authenticated root.
On a similar note services and networks should be treated as compromised as well, meaning you must use encryption, authentication and in general make sure to limit attack surface.
And all of that boils down to making sure you don't rely on services, users, etc. not accessing personal information they are not supposed to access.
After all, the problem with things like ransomware is exactly that this isn't assumed.
I believe my most vulnerable environment is the development environment: 3rd-party code being updated almost constantly, combined with fairly standardized CLIs to cloud environments (kubectl, az/aws/Heroku, gh, etc.).
And I don’t just have to be vigilant about what I do, but also about what my team has done. It terrifies me, and it’s a sad reality that my personal risk is reduced by the fact that if I fall victim, countless other teams will as well.
I think you are correct. The latest news of unsafe PyPI packages worried me.
Not always, but often I prefer development on remote VPSes. For anything deep learning, I pay Google a little bit of money every month for Colab notebooks, save a ton of my own time, and don't worry about trying random 3rd-party libraries. I don't have this use case anymore, but I used to use very-large-memory VPSes with SBCL Common Lisp and Emacs to work on an old project that required lots of data in memory; VPSes are really cheap if you turn them off when not in use.
In the 1980s and a bit into the 1990s, I did most things in X Windows - I have thought about how good that would be for secure remote development, but text with tmux, Emacs, etc. is so much less hassle.
Second, unless you're in a situation where you've pissed off/threatened some rather large actors, you should be fine assuming you follow best practices for backup, software, update and password management and you avoid using things like cheap IoT devices to connect to your cloud services.
Third, when disaster strikes, keep calm, rely on backups, change affected passwords and notify others who might get affected.
Also, I have Google Authenticator as a fall-back in case the first YubiKey PAM check fails or I have no access to the internet to contact the Yubico servers for token validation.
Perhaps somebody can confirm whether it's a good idea or not, but I like to generate only one emergency code for Authenticator (the minimum) and then delete that line in ~/.google-authenticator. Also, permissions of 400.
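If it helps, here is a quick sanity-check sketch for that setup. It assumes the usual ~/.google-authenticator layout (base32 secret on the first line, option lines starting with a double quote, then one scratch code per line), which is worth double-checking against your own file:

    #!/usr/bin/env python3
    """Sketch: check ~/.google-authenticator permissions and count scratch codes.

    Assumes the common layout: secret on line 1, option lines starting with '"',
    and the remaining lines being emergency scratch codes.
    """
    import os, stat

    path = os.path.expanduser("~/.google-authenticator")
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode != 0o400:
        print(f"warning: permissions are {oct(mode)}, expected 0o400")

    with open(path) as fh:
        lines = [ln.strip() for ln in fh if ln.strip()]
    scratch = [ln for ln in lines[1:] if not ln.startswith('"')]
    print(f"{len(scratch)} emergency scratch code(s) remaining")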
I use Windows as my main OS. In Windows 10 and 11, there is a feature called Unified Write Filter that essentially resets your computer after every reboot.
When I first got my laptop, I installed a fresh copy of Windows 10, installed all my commonly used applications, configured all my settings, and then enabled UWF. On every reboot, it goes back to this clean snapshot, no matter what I do - And reboots are quick too (~10 seconds).
I'm never worried about making changes to my laptop to try them out (installing a new program, configuring obscure settings, etc). If I don't like it, I can get back to my clean state with a simple reboot.
Still vulnerable to BIOS-level malware though, I suppose.
(Note: I repeated most of this from a previous hackernews comment of mine)
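For anyone wanting to reproduce this, UWF is controlled from an elevated prompt with the built-in uwfmgr tool; a minimal sketch of the documented verbs, assuming the UWF optional feature is already installed, with a reboot required before the filter takes effect:

    #!/usr/bin/env python3
    """Sketch: enable Unified Write Filter protection for C: on Windows 10/11.

    Run from an elevated prompt; assumes the UWF optional feature is installed.
    A reboot is required before writes actually start being discarded.
    """
    import subprocess

    for cmd in (
        ["uwfmgr", "filter", "enable"],         # turn the write filter on
        ["uwfmgr", "volume", "protect", "C:"],  # protect the system volume
        ["uwfmgr", "get-config"],               # show the resulting configuration
    ):
        subprocess.run(cmd, check=True)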
You can play with this in that situation. I assume all my cloud and local data is accessible to someone, and I just keep that in mind. But I also assume there are layers of access... so not everyone who can access it has access to every part of it. One group maybe can access DNS queries; one group can access cellphone metadata and SMS; one group maybe can access unencrypted iCloud/Google/OneDrive data; one group needs warrants to access and creates lies to fraudulently obtain/fake-justify those; some other group doesn't need warrants and just has access, either through agreement or covert access.
Once Advanced Data Protection switches on globally "in early 2023" I'll have another compartment. But I assume that someone can access basically everything. You can have fun with it.
I also think what's happening on my devices is some of the least interesting parts of life, so, yeah, there's that, too. :)
I keep a modest Bitcoin wallet as a canary on the computer and run an activity alert from elsewhere. If I'm compromised, one of the first things they'll do is steal that money. It's not perfect, but it's a data point that gives me some confidence I haven't been compromised by a petty thief.
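The "activity alert from elsewhere" half can be a tiny watch-only poller on a separate machine. Here is a sketch against an Esplora-style block explorer API; the address and poll interval are placeholders, and the exact endpoint and response shape should be checked against whichever explorer you actually use:

    #!/usr/bin/env python3
    """Sketch: watch-only canary alert for a Bitcoin address.

    Polls an Esplora-style explorer API and alerts when the transaction count
    changes. Address, API URL, and interval are placeholders; run this on a
    machine other than the one holding the canary wallet.
    """
    import json, time, urllib.request

    ADDRESS = "bc1q-placeholder-canary-address"
    API = f"https://blockstream.info/api/address/{ADDRESS}"

    def tx_count() -> int:
        with urllib.request.urlopen(API, timeout=30) as resp:
            stats = json.load(resp)
        return stats["chain_stats"]["tx_count"] + stats["mempool_stats"]["tx_count"]

    baseline = tx_count()
    while True:
        time.sleep(600)  # poll every 10 minutes
        current = tx_count()
        if current != baseline:
            print("ALERT: canary address saw new activity -- assume compromise")
            baseline = current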
Malware that was downloaded through my own activity, even if it required no clicks, usually presents itself as slower performance or other quirks. Maybe one has eluded discovery, but I'd never know.
I'm much more wary of systemic malware at lower levels that I don't have an opportunity to detect. There's not so much I can do about that other than try to use devices from vendors I trust (or distrust less) that have the least preinstalled software. Lobbying for open firmware or hardware is the long-term strategy.
I also use multiple machines: work, personal, gaming, and utility (Surface Go). E.g. I use the mouse configuration software on the Surface Go and only the mouse hardware with its configured profile on the other machines.
Ultimately I can't know I'm not compromised but don't lose sleep over something I don't have more control over.
As far as I know I did everything right, and someone called my bank with info we both believe they got by stealing my paper mail, and they got access because they convinced some human at the bank's call center that they were me.
Don't make it easy for people (rng passwords + password manager, 2FA, don't run as su, whole-disk encryption, don't leave your computer unlocked, don't log into your bank on rando computers you don't control, don't use untrusted wifi). However, assume you already are compromised and will have to deal with it some day.
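On the "rng passwords" point, the standard library is enough if you ever need to generate one outside the password manager; a minimal sketch using Python's secrets module (length and alphabet are arbitrary choices):

    #!/usr/bin/env python3
    """Sketch: generate a random password with the cryptographically secure
    secrets module. Length and alphabet are arbitrary choices."""
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate(length: int = 24) -> str:
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    if __name__ == "__main__":
        print(generate())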
Once you think that way, you don't need to stress that much about getting the perfect hardware solution or being super paranoid. Buy a device you like and enjoy your digital life. Stuff happens sometimes, and if it does you can deal with it ¯\_(ツ)_/¯.
I use locked-down Apple devices and WireGuard to a remote server to do work, so all my actually sensitive data resides on the remote server, which is reasonably hardened. I believe I would hear fairly quickly if iPadOS were compromised to an extent that I need to worry about; I hope so, at least.
I don't know, really. I have a ton of "personal machines" when it gets right down to it, but I'll think client wise.
I distro hop chronically on most of my machines. Sometimes multiple OS reinstalls across machines per week. Some installs have lasted a few months but it's rare.
I try to stick to official repos when I do reinstall, so I'm outsourcing that trust to the distro maintainers.
If it's on the disk, it's gone except for a few important files I keep in a self-hosted Nextcloud sync folder.
I use LUKS encryption to ensure leaving the laptop on the bus is a non-event. If it was ever in somebody's possession for very long (border, police, lost and found) I'd just put it in the garage and never touch it again.
Firmware malware is pretty uncommon, still, so I'm just hoping for the best there.
Run Linux. Install only trusted software. Don’t do sketchy (from a security perspective) things like look at porn or use torrent sites on your main computer. Backup often. Reinstall OS annually. Don’t run Windows. Don’t worry too much. Be happy.
I don’t. Not sure why I’d do that. I find the risk acceptably low that it is. But more importantly, I don’t have any reason to fear that it is.
So I trust that regular caution and OS security reduces the risk to an acceptable level but mostly I don’t fear anyone reading or destroying my data because I have backups and it’s not sensitive. Sure it would be scary from an integrity perspective, but not in any other sense. Even constant access to my machine and everything I do wouldn’t be a big risk.
So if I’m affected by a ransom Trojan (most likely scenario), I’m happy to just wipe my machine.
I can't tell whether my hardware has been compromised (most likely it has not), but the data I store on my SSD I keep fully encrypted, with the boot partition stored on a hardware AES-256-encrypted USB drive. The laptop won't boot without the USB plugged in, and the USB won't unlock unless the correct key is provided. Not too sophisticated, but enough for my humble needs (that is, I'm the only one who can boot the laptop, and I know my data will be safely forgotten if my flat gets robbed or if I forget my laptop on the train, etc.).
You can try to "snoop" on the virus: for example, collect all the internet packets and see if any ports are open that shouldn't be, or collect logs on which apps are eating up the battery. These steps are not perfect by any means, but you can catch some noisy viruses this way. If your virus is very stealthy, you can only hope your passwords show up in haveibeenpwned.
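A quick first pass at the "any ports open" check, using the third-party psutil library (run it elevated to see every process; anything unfamiliar is a starting point for digging, not proof of compromise):

    #!/usr/bin/env python3
    """Sketch: list processes that are listening on network ports.

    Uses the third-party psutil package; run elevated to see all processes.
    """
    import psutil

    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_LISTEN:
            continue
        try:
            name = psutil.Process(conn.pid).name() if conn.pid else "?"
        except psutil.NoSuchProcess:
            name = "?"
        print(f"{conn.laddr.ip}:{conn.laddr.port}  pid={conn.pid}  {name}")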
This is also why using an open source OS is so important. At least you can investigate why something is happening in the OS. Without the source you can only guess at what is happening.
Open-source is de facto closed source if you don't build your own stuff (and know how to debug it). That's the status most OSS users are in, I suspect. I run Linux but I've never compiled a kernel and I've never run a native debugger. It's nice that I could, but this is just a platitude.
But anyone this paranoid will obviously build from source? Most OSS users don't build from source because they don't care to look in their internet packets for viruses.
BTW, it is not that hard either. You can even have multiple Linux kernels installed at the same time. Same with Android ROMs, just checkout the code, build it and flash using ADB. It is about as difficult as dual booting Windows and Ubuntu.
Assuming that you have no other option but to use a computer, there's some good advice here[1] for securing a Linux system. Then you can run regular scans using security tools like the ones listed here[2].
Separation of concerns is a good idea. Don't run everything together: e.g. multi-boot OSes, or nested OSes (Windows with several WSL setups for different work, untrusted Windows apps tested first in a Windows development VM, etc.). If you have a server, run dedicated VMs and work on them remotely; these days you can even stream your games from a dedicated VM. If a game is compromised, it will at most compromise other games on that VM, but not your important work on another VM.
A workflow that involves multiple VMs is usually very cumbersome.
I feel that our OSes should solve this problem. Unix was built with the mindset that other users cannot be trusted, but they forgot that applications can also be malicious. There is a huge opportunity here for better OSes.
Untrusted office apps and browsers have been solved on Windows for ~10 years by Bromium (now HP Sure Click) and for ~5 years by Microsoft cloning Bromium as Application Guard. For example, every tab of the browser runs in an isolated VM, seamlessly composited into the main desktop, with no special action needed by the user.
Ergonomics is part of the equation and Snap is definitely not there yet, and in fact its poor quality may be driving users to other solutions.
Snap has no GUI, whereas a solution that provides a general sandbox clearly should have one. When you open an app and click "Open File", the sandbox GUI should intervene and ask the user where the app is allowed to look.
Also, the idea of "security bolted on top" of an existing OS doesn't seem very trustworthy.
In short:
1) Secure boot-up by locking down the BIOS and encrypting your drive.
2) Set User Account Control to the highest level.
3) Install an up-to-date browser with appropriate add-ons (uBlock).
My TL;DR summary: with time I have moved further and further away from using the computer for important things, and I believe that tech has long since exceeded George Orwell's wildest fever dreams.
I assume a freshly installed OS is compromised [1] and that the hardware it runs on is also compromised in the BIOS and firmware, at the very least by state actors. But I also assume those state actors have poorly vetted contractors who may themselves be compromised by other nations, i.e. whoever pays the most gets access. I would not be surprised for a moment if they have competing backdoors that try to block one another. Since I cannot control any of this, I just imagine the national actors of the world are watching my screen and yawning. More likely, the latest iteration of ECHELON AI is yawning. I instead focus on securing important externalities: making bank accounts read-only from the web (not all banks will do this). I also diversify where my assets are stored and make a best effort to require physical access.
Beyond that layer I do all the usual hardening practices but that only goes so far as every browser likely also has intentional weaknesses in them. Even FireJail and SELinux/AppArmor will likely just happily relay malicious instructions. Addons may raise the bar keeping some script-kiddies off my machine but I never for a moment assume that it stops government contractors from relaying instructions to the backdoors in the hardware and/or OS and ultimately to the hidden CPU instructions that likely take multiple layers of obfuscated instructions to tickle meaning SandSifter will never find them.
The above is for PCs. For cell phones, I assume FAANG are interactively on my phone, especially since most of them were initially funded by the government, so I do not use it for anything sensitive. I also assume that all cell phones have backdoors added by their manufacturer. Each one does seem to dial home to different places and make unique DNS requests. Putting phones into developer/debug mode does seem to quiet them down, which is the opposite of what I would have expected, so maybe they know someone may be watching, i.e. the malware knows it's in a sandbox.
Wi-Fi Access Points are a story in and of themselves.
Why should I care about state actors? That one's easy. The best contractors will have leaks in their OpSec, and for-profit companies will acquire the weaknesses and use them to do illegal and unethical things to citizens, for a price or for political, economic, and a myriad of other motivations. I would not be surprised if some government actors sell off access and end up working for said companies.
Well, presumably it would stop yawning if you were, like, part of an armed rebellion. The perspective I'd like to hear would be that of the Ukrainian civil and military resistance to Russia's invasion. How do they know their systems aren't compromised? Because, yeah, in their case, being compromised means getting killed.
> in their case, being compromised means getting killed.
In fact there have recently been articles about this and each side ordering their troops to stop using their cell phones. Both sides have attributed several mass casualties to cell phones. This is probably harder to enforce with conscripts and military contractors. Many of the first wave of troops thought they were just going on a training exercise.
Same here. GPS tagging in social media is certainly a thing. There are programs and groups of people that can identify locations from a single obscure photo. Then there is GPS data in some photo apps that add exif data. So many options one must assume phones are just loose lips [1] even without factoring in App, OS, Firmware and Baseband Modem backdoors not to mention instant location from satellites.
In my opinion soldiers should be given instructions how to back up their phone contents then hand all tech devices to a range safety officer to put into a belt fed target practice system to give them some closure from their dopamine devices.
For my part, after considering this very question in the past, the answer is that the question is wrong.
The question is: is there some reason to trust, and the answer is: no.
In my opinion, any and all general computing devices sold to the mass consumer market are already compromised in some shape or form as they roll out from the factories -- otherwise such things would simply not be sold in large quantities.
"How do you trust that your personal machine is not compromised?"
You don't. They are all likely "compromised" to some extent. The vast majority likely have asymptomatic/latent state-sponsored vulnerabilities, if not on the machine itself, then in the network infrastructure it uses. For the most part, people might not consider them "malicious third parties".
As with anything, there needs to be some evidence to believe something, and if there’s evidence, you can follow that to figure out if it’s real or just anomalous.
Generally, it’s a bad idea to believe things without evidence, so I guess you can trust your computer isn’t compromised the same way you can trust no unicorns exist; there’s not any credible evidence to suggest it.
I run the latest betas of macOS and iOS which means I get exploit breaking changes as soon as possible. I keep all the security mitigations on my mac enabled (SIP, secure boot, etc.) which helps makes a variety of exploit flows and persistent compromise difficult.
But random malicious code in user space? Well, I really just hope for the best :)
Ugh, I suppose you're right. They do seem to skip the beta pipeline for ITWs. I wonder if this has changed now that they're leaning on "rapid security response" patches for critical vulnerabilities, which betas do get, and so hopefully there's parity now. I should dig into this, pretty sure there were a few RSRs recently and it'd be neat to verify that a) betas got the same bugs patched and b) the RSR went out to beta and release at the same time.
Hmm, actually, my aging Mac always asks me to install something whenever I connect my newer iPhone. I don't like that at all; it's not what I'd expect from an Apple device. But I keep coming to realise that Apple devices really aren't what they used to be. Quite sad.
There are two levels here: compromised by some national agency vs. compromised by anyone else.
For the former, I don’t assume anything especially since I’m not an American citizen. I still believe with some certainty that my iPhone is safe from the government but not 100%
I don't. I am really picky about downloading software for my personal machine, I sometimes explore with Process Explorer, and I run sketchy stuff in a sandbox, but I don't trust that my personal machine is not compromised.
I never trust a computer to not be compromised, if there’s any information so critical that it must be secured at all costs then I simply never let it near a digital device.
You simply cannot. Unless you want to go down the Richard Stallman path and work on a laptop from 2008, with only open source software, unable to use Netflix, banking sites, etc.
I try to follow what others already mentioned, but still, for any personal high-security stuff I use a device whose OS puts strong limits on apps, like an iPad.
I use an adblocker to prevent (malicious) ads from running. The browser's malicious-download warnings are enabled, and it is configured to show a full-screen warning if HTTP is used instead of HTTPS. Opening links from email requires manually copy-pasting them, forcing an extra look at the links.
Generally I don't install random software outside the official repos or AUR, but I do blindly trust those repos to not be compromised.
That being said, I don't think I could 100% trust a modern computing device to not be compromised, but since that isn't possible I also don't see it as actionable information.
The best advice I have, for what it's worth, is to wipe and reformat from a known clean image regularly. If you haven't been hacked yet, it stands to reason you won't be hacked going forward.
That said, I often install packages I don't fully vet, and grant permissions I probably shouldn't, either in the name of curiosity (how else do we learn and experiment), or necessity.
I use Linux with lots of distributed sync and backups, e.g. Syncthing (plus copies of stuff NOT on Syncthing).
Now, I'm aware many reading this are going to nerd out hard (like how the top comment now is "Android/Chromium" which I'm skeptical of but haven't done much homework on? Maybe?)
But because you said "personal machine"-- I'm thinking about my own threat model and my years of experience.
Thus, I'm not going to worry much about, say, some obscure Linux-Stuxnet thing, which not only is overwhelmingly unlikely, but is also something I can't do much about beyond the solutions I mentioned above.
More likely, I can avoid the stupid Windows, stupid Mac, and often stupid Web mess by doing what I'm doing now.