Pretty light on specifics. The article has no details showing why FreeBSD is better than macOS for the author's workflow, except that there are a lot of processes running and setup apparently takes longer.
> This is where many people will tell me “Okay but not everything works outside the box”, true! but which OS works outside the box these days anyway? Windows is still a nightmare, setting up macOS took me 3 days the last time, Linux takes way more if you’re building it from scratch. Setting up FreeBSD took me 3 days, however this meant that I will NOT need to change it again for a very, very, VERY long time.
Why is Windows a nightmare? I use Windows 10 on a PC I built and it's fine. I also use it for dev using the Linux subsystem with WSL2 and PowerShell.
Why did setting up macOS take 3 days? I don't even think it would take 1 day if I set up a new machine from scratch.
Why are you building Linux from scratch for comparison??? It took me two hours to set up Debian by hand yesterday.
And then they say FreeBSD also took 3 days of setup (?!), but that's alright because they won't have to change it going forward. Leaving aside my skepticism of that last part, the same probably applies to macOS.
EDIT: I think the author actually is better served by FreeBSD given the fact that they were hand-modifying persistent packet-filter rules. That is not a thing I would suggest someone do on a macOS machine. But the head-scratcher for me is why they would try to do this on macOS (or Windows) in the first place, and why setup times took so long besides the breaking updates.
> Why is Windows a nightmare? I use Windows 10 on a PC I built and it's fine.
I paid $200 for a Windows 10 pro license because I needed to run a Windows application in a virtual machine. I was very unsatisfied with my purchase. I would describe Windows as “a nightmare”. Here are my top three complaints about Windows.
* There are advertisements built into the operating system.
* The operating system often restarts itself without the user's permission. It will restart itself even if a user-launched application is running.
* There is still no central repository for useful applications. Managing and keeping all software updated takes a lot of manual work.
> It will restart itself even if a user-launched application is running.
This was a huge problem for me. Some multiaxis CNC toolpaths require days to generate. I missed a critical deadline as a result of Windows 10 deciding to update itself, which finally prompted me to install ShutUp10 from O&O.
I use the maximum shit-disabling mode in ShutUp10. It works. Once or twice a year, I run Windows Update and get updates, then I run ShutUp10 again and disable all the trash Microsoft re-enables.
There's a free tool called Windows Update MiniTool that works very well for disabling auto-updates and then allowing you to update manually every so often. That is what we install on our CNC machines that have a Windows 10 backend.
I'd rather describe this as a mail-in-rebate for the costs of running Windows 10.
We sort of implicitly know there is this need to make the Windows lifestyle more sane, and can rely on tools like these existing.
But the time it takes to research this and track down the best binary for the job (examining the source code is not always easy) makes me conclude that there's a real hidden cost to being a Windows user.
It's not so hidden. It honestly astounds me that users are willing to put up with this bullshit.
The most impressive thing about Microsoft as a company is the way it single-handedly lowered user expectations to the point where ads, security issues, and critical time-wasting failures are somehow considered an acceptable price of entry, and not evidence of an unacceptably shoddy, incompetent, and user-hostile product culture.
People who run Windows aren't the users - they're the product, with Microsoft either selling their eyes to advertisers, harvesting their usage data through telemetry for a profit, using their numbers to push paid development software, or using their ubiquity to be the basis for computer literacy materials in public schools paid for by taxpayers.
I think the user vs. product dichotomy is not right in this case. Microsoft really does make its money on products and support. You can see this in their public filings.
The relevant dichotomy is more like: people who run Windows aren't the buyers. One-off personal licenses for home PCs are more than a rounding error but are certainly not what made Microsoft what it is.
Governments and F500 companies buy Windows and Office for X00,000 machines for X0 years of support at a time. Enterprise procurement teams are the actual buyers whose opinions matter to product managers.
Remember that time Microsoft let FTDI brick a bunch of knockoffs through the first party update mechanism? Remember when they locked up a bunch of embedded devices with Windows 7 support nags? Remember when they dropped Candy Crush in your start menu, when they decided local accounts now had to be cloud linked, and when they enabled Cortana by default and made it increasingly difficult to opt-out? When they decided to take 30 minutes of your morning without asking (hope you weren't planning on using the computer for anything important)?
When it comes to high-reliability embedded OSes, Microsoft is a case study in inept paternalism. Updates regularly cause problems. Between updates and malware spreading behind a NAT, I'm not at all convinced updates are the lesser of two evils. Ideally, these applications wouldn't run Windows, but since they often do, IMO the best approach is to isolate them to the greatest degree possible, which includes blocking auto-update (note: not turning it off, blocking it, along with everything else you can get away with).
Local accounts don't have to be cloud-linked! Microsoft just employs dark patterns to make it hard to find how to opt out, and prompts you to "finish setting up your PC" by creating a Microsoft account, after every reboot.
I've also rebooted to find out about Microsoft Edge, the new browser that's now pinned to my taskbar, even after removing Edge several times.
MS finished making Windows into a stable, usable OS and then promptly began turning it into adware.
I just installed a new Windows 10 VM last week and there was no option to use a local account anymore. None. No dark pattern menu link somewhere hidden in a corner of the screen with low contrast, only the choice between logging in or creating a new Microsoft account. Which took me longer than the whole rest of the installation, because as it turns out generic addresses like "fk_u_ms@outlook.com" are already taken and Microsoft seems very anal about certain choice words.

But at the same time they don't seem to mind if I enter a date from 2018 as my birthdate, and the calendar dialog includes decades of future dates. But at least they got the forced online account working...
Offline accounts are only available if you are not connected to the internet - or at least that's when you get a visible "create offline user" button/link. Last month I did a fresh install with the latest ISO and I had to unplug my Ethernet cable.
I installed Windows a few weeks ago and found the option to "set up later," but the ISO is several months old and the computer wasn't connected to network at install time.
Sorry, but are we talking about desktop operating systems? Because if that's the topic, Linux and BSD have much to learn from Microsoft, including stability. I can tell you, because it's my job: if you don't want to use outdated software, you should go with rolling distributions, and those only became stable around 2016. Before that, every update was a Schrödinger's update. Windows has been able to roll back updates since Windows 2003.
> Between updates and malware spreading behind a NAT, I'm not at all convinced updates are the lesser of two evils.
You can get your answer looking back to events like Nimda and Code Red.
I would have agreed with you before Windows 10 came out. Afterwards I had to regularly reinstall Windows on most of my machines as well as laptops of close family members, because whether it was a boot loop, a BSOD every 10 minutes, random switching of keyboard layouts, or just performance issues to the point the mouse lagged, there was a whole wave of new bugs suddenly appearing after some forced update.

I've run a rolling distro since 2018. Development got a whole lot simpler, everything is more enjoyable, and to date there's been exactly one noteworthy issue, which was resolved after a quick search and 10 minutes.

And since then I've never seen a "new update available" popup dialog, no ads, no Cortana, no "smart" features, and most definitely no need to install third-party software to actually have a usable file search. I even update much more frequently than Windows 10 ever forced me to, because for some reason I just never have any issues and I can just do it in the background.
I used to say the same, but my perception - and I know it's not reality - is that every day with it feels like this:
* I log in.
* I start setting things up so I can start work.
* I'm notified about critical updates. There will be a forced reboot soon...
* I'm watching out in case something pops up while I'm typing and whatever key I was about to press answers the 'reboot' question for me - and I wait several minutes and lose flow entirely.
* I log in.
* I start setting things up again as they were so I can start work again.
* There are more critical updates...
Turning off auto-update is not an option, because there are so many security holes and I don't want to end up a victim. I also don't like having to fight to keep the OS from doing something it will push back against. So trying to sneak in updates when I'm not busy, and reboots when I want them - waiting to be told rather than asking - is hard work and stressful.
Windows is an amazing piece of tech and gets better all the time, but I find myself much more able to stay in the 'flow' and avoid stress in macOS or Linux.
To be fair, I get notifications of "important updates" on my Fedora Workstation more often than I get on Windows. (Or at least I think I do; I don't use Windows that often.) Fedora also wants to reboot for nearly every bunch of updated packages that doesn't consist entirely of top-level applications, which seems to mean at least 95% of the times it wants to update.
I think I get it why it wants to reboot: not only is it probably somewhat safer in terms of not screwing anything up in running session, but you can't be sure some running application or service isn't still using an outdated version of a library until you've restarted it. (Not to mention that the Linux kernel updates every week or so nowadays, but those are definitely not the only ones that trigger an update [edit: by which I of course mean "trigger a reboot"].)
I tend to not use the graphical software updater and just ignore its notifications instead, and just update from the command line and reboot when it suits me. That does allow me to not have my workflow interrupted but it doesn't change the fact that I still do get notified of updates that require some kind of action pretty often.
On the other hand, checking for those updates doesn't burn minutes on end of CPU time every time the OS is booted, as it seems to do on Windows 10.
I don't use Windows, but from the comments it seems that Windows just forcibly reboots after some time.
I'm sure your Fedora workstation will ask you to reboot, but not just do it without you triggering the action.
AFAIK in Windows there's generally a prompt for rebooting either now or after a delay that can be selected from given options. (I'm not a heavy Windows user either so I might not be right.) I've sometimes found Windows to have rebooted by itself, but that might have been because I wasn't there to react to the prompt.
But no, obviously Fedora doesn't force a reboot, nor does it give a prompt you can accidentally reboot through.
However, a part of the complaints (which I fully understand) seemed to be about the frequency of the updates, and in the name of honesty I just wanted to point out that's not really just a Windows issue.
Or join your machine to a domain. No ads, managed updates.
I have said it before, but around half of the machines in my estate are macs, and I have many more reported update problems from them. I think a lot of it is caused by inconsistent updates.
Holy hell, the fact this is even a thing boggles my mind. I get that MS wants vital (to them) updates. But oof... your use case is exactly the one that will cause a shitload of pain.
Windows update does give warning and allows you to re-schedule updates to a different day and time. You can also push back on when you start getting certain kinds of updates.
It may not be a perfect situation but I don't find it to be quite as dreadful as many make it out to be. FreeBSD is my other OS and I'll be honest, I dread doing those updates much more.
If you're like me, you've been conditioned to click the ok button on windows to get dialog boxes to go away. Never really reading what/when something is going to happen.
Windows 10 hasn't always warned you about an update. Once I lost a half day's work while visiting family. I had been doing some genealogy work when some small kids came over, so I put my laptop to sleep (shut the lid) and went about my day, not thinking to save. The next day I opened my laptop and was greeted with the "Hi! We are setting things up for you" screen. I was very upset to have to redo the previous day's work.
Next time, order "KW4-00190" from any Microsoft reseller who will sell it to you. You might need to add 4 additional dummy licenses to your cart (Identity Manager is popular). This gets you access to the volume licensing portal so you can get MAK activation for LTSC editions of Windows 10.
No store, no advertisements, no telemetry, Edge optional, all Enterprise features enabled, security updates only (defer fully configurable), out of the box.
I agree, this was sort of annoying. Though when you reason about it in a systems way, not having a default of "reboot" would be problematic from a security PoV.
Of course, you're not the only one with this issue. There used to be an option in "gpedit" to disable this, but I can't find it anymore. I suspect that this is because there is now an option in the "advanced options" of the "Windows update" menu, called "restart this device as soon as possible".
> when you reason about it in a systems way, not having a default of "reboot" would be problematic from a security PoV.
It's very SV-bubble to assume that Microsoft's security needs are more important than him getting his work done.
As he stated, he missed a critical deadline. What if the client cancelled the contract because of that, causing him to go out of business? How does trying to sympathize with Microsoft feed his family?
No, the tech bubble is not more important than things that happen in the real world. Windows has security problems? Sure, all operating systems do. But Microsoft shouldn't shift its problems onto the users. That's just bad business, and bad ethics.
I don't know that it's a bubble thing. I've never been even near SV (although I am a tech worker, so perhaps in some kind of a bubble), and I can see how there could be rational reasons for implementing automatic reboots. Some people might put off rebooting indefinitely, for instance, and not all of those would be doing it because they know what they're doing. If you're MS, you have to take into account a lot more than just users who know what they're doing.
I don't like the automatic reboots, and I'd agree it's not a good solution and I probably wouldn't have implemented it if I were making the decision, but I don't really see how finding reasons for why MS might reasonably want to do it is a SV bubble thing.
Again, all addressable through Windows Update -> Advanced and some very simple once monthly manual update schedule on your work calendar. It is what I do, and it is really not hard at all. This is all discoverable on the web through a generic "control windows update" search. No special tools needed.
Once a month I update my work PC and three home PCs, all on the same day. Easy to remember, saves surprise updates and reboots.
It's a bit more complicated than that, I think. A common function of malware is to create "zombie" computers that are controllable en masse to, say, DDoS a website. This makes the prevalence of malware-ridden Windows machines a _common_ concern, rather than one specific to Microsoft.
Exactly this is what I was getting at. 98% of Windows users don't care if their machine reboots, provided they don't lose work. This is precisely the 98% percent that are likely to get infected, and will cause havoc for all of us (not just themselves). Spam, DDoS, etc.
Similar to how getting a vaccination is not only good for you, but also for the rest of society.
Mind you, I'm saying this from the perspective of someone who uses his machine to perform nightly jobs (learning models, ontology alignment, etc.). So it's not like I haven't shared your pain =)
> A common function of malware is to create "zombie" computers that are controllable en masse to, say, DDoS a website.
Thanks for the explanation, but I'm well aware of malware. I wrote my own boot sector viruses back in 8-bit days.
But the point still stands: Microsoft's practices stop him from doing his work, and could cost him his livelihood because it assumes that fixing its failures are more important than his work. What's the point of even having the computer if it can't be relied upon to do complex tasks?
Nice! Apologies if I sounded condescending, you clearly are much more familiar than me :).
I tend to agree, but I have difficulty when you scale the problem. No individual person benefits from having their computer rebooted enough to counteract the issues, but might everyone collectively? The utilitarian calculus becomes difficult.
Though parallel solutions - better security testing or a capacity for live updates without a reboot - may be an answer.
There's also the option (I think it's Wi-Fi only) where you can mark the connection as a "metered connection" and it turns off a whole bunch of automated nonsense.
Same as you, I periodically turn off the metered connection toggle and then run updates. It saves me a heap of angst.
> There is still no central repository for useful applications. Managing and keeping all software updated takes a lot of manual work.
And this is why I tend to ignore most commentary on Operating Systems. You know the moment that Windows releases a centralized repository, there will be a thousand cries of "walled garden!" and "embrace, extend, and extinguish". It's literally impossible to satisfy everyone, given how many mutually exclusive needs exist out there.
Not if it were a repository à la Linux distributions, where:
- You can still install applications manually, and there are no special permissions lost for doing so.
- You can access it using your choice of application, instead of only the MS-approved tool
- You can add 3rd party repositories
If that's what the "Windows store" was, I'd absolutely use it (if I still used Windows). If it comes along with a bunch of other crap, then yeah, there will be pitchforks.
There is a Windows store. It kind of failed, mostly because using it requires using gimped APIs. If they got rid of that more people would use it.
What does seem to work is the certificate and code reputation. It does actually prevent users from running things but is easy enough to click through if you really know better.
> mostly because using it requires using gimped APIs
Yes, that's the "other crap" I was referring to [edit: specifically "special permissions lost for doing so". I was a little vague because I knew there were some additional restrictions for using the Windows store but wasn't sure what they were.]
Didn't know this existed. I'll definitely consider it if I ever go back to using Windows.
I didn't spend enough time to verify if it satisfies the "can use other clients" and "can add 3rd party repositories" criteria. But assuming it does, I bet tons of people would happily use it, if Microsoft blessed it as the official standard for managing Windows applications / installed it by default.
I didn't use Windows for a few years, but last time I tried, everything I needed was in the main repo - it feels similar to brew but less popular among Windows users.
In combination with the ChocolateyGet provider for Powershell this works pretty well actually. It's not apt-get or similar, but it does get rid of the point-click installation for quite a lot of common software.
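If it helps, a minimal sketch of that setup, assuming the provider is still published under the ChocolateyGet name (`Find-PackageProvider` will confirm):

    # one-time: register the provider with PackageManagement
    Install-PackageProvider -Name ChocolateyGet -Force
    # then search and install through it like any other provider
    Find-Package -ProviderName ChocolateyGet -Name git
    Install-Package -ProviderName ChocolateyGet -Name git -Force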
I think that is because Microsoft would probably only create a walled garden. What's the chance they will allow a system where anyone can host their own repository, and add/remove them as a user? EEE doesn't seem applicable here, because there's nothing to embrace and extend. It's a problem that has to be solved from scratch.
"Centralized" does not imply "only". Debian or CentOS do have centralized repos, but that does not prevent others having repos too (i.e. proxmox for Debian or EPEL for CentOS).
Not true. There is one central repository for Windows that manages all the useful applications that I use on Windows. That central repository is called Steam and no one really complains about it.
I've been having to use Windows a lot more lately (after barely touching it for 15 years) and the word "nightmare" is an apt description.
I think, though, I've come to realize why: backwards compatibility. I've seen videos where people have installed Windows 3.1 and then upgraded the OS from there and pretty much everything continues to work. I think in one video I saw, Quake or Doom stopped working after one upgrade (probably Vista) and then started working again after a patch release or something.
I have to hand it to Microsoft that they honor backwards compatibility as well as they do. However, at some point, you have to just make a clean break. You have to acknowledge that what you have is bad and broken and should no longer exist in this world. Microsoft finally did just that with Internet Explorer and then (to a lesser degree) with replacing Edge's rendering engine.
For Apple, backwards compatibility isn't a sacred cow. They're more than willing to throw something out that is old and/or doesn't work as well as it should. They handle breaking that fairly well, all things considered. They give plenty of notice (sometimes not as much as they probably should have -- https://arstechnica.com/gadgets/2018/01/apple-prepares-macos...) so average and power users alike have the time to transition. Sure, it's never as smooth as we would like but what transition ever is? Still, Apple users don't have to deal with things like Windows users do.
Microsoft needs to start making a clean cut with Windows, probably starting with the Explorer shell. It's old. It's obnoxious. It actively gets in the way and makes me less productive. They can do this if they just put their shoulders into it a little. (Christ almighty, why does the scroll bar have to jump back to where it was if my mouse veers away from it while scrolling? "Because that's how we've always done it!" is easily the worst answer to any question ever asked)
Also, for the love of all that is holy they need to reverse the decision to have ads built into the OS. How can they not see how bad an ad-centric model is becoming for Google and Facebook? Having a whole OS where the user is the product? That's a privacy nightmare that makes Google and Facebook look like little league.
I don't know much about the NT kernel. Is it really as bad as the shell such that they should throw it away? I kinda don't want to see that happen for the same reason I lamented them switching Edge's rendering engine the same one Chrome uses. I feel like not everybody should build on top of Linux/Unix. It's good that there are other options.
Not at all. NT is a fantastic kernel. In many ways the core system of Windows is much better than Linux, largely because you just don't have the pointless fights about systemd vs. not, or X vs. Wayland, or foo vs. bar - you have a cohesively designed, unified base platform instead. Lots of very fantastic edge-case scenarios are trivial & seamless, with functional GUIs & tooling so you can discover things on your own more easily. Run out of disk space? Pop in another drive and just say "extend" and your existing partition is instantly expanded across both drives - no reconfiguration, no broken fstabs, no "well since you didn't anticipate this originally you now just can't do that", etc... Hell, Windows can handle a GPU driver crashing so quickly & seamlessly that you wouldn't even realize it happened if it didn't give you a little notification saying "hey, something might be wrong with your GPU since it just crashed, take a look maybe?" It won't even kick you out of a game when it happens. Similarly you typically don't need to reboot to update drivers. Heck, converting from a "bare metal" install to a hypervisor is a matter of checking a box & rebooting.
But Microsoft then throws a bunch of stupid crap on top of it like Candy Crush ads for a piece of software that costs $200 and kinda ruins it. Or with all the deep hooks for Windows Defender & friends that just cripple process launching & file creation performance.
> Heck, converting from a "bare metal" install to a hypervisor is a matter of checking a box & rebooting.
Can't the same be said of Linux too? Major distros ship Linux with the KVM kernel module built in. I never had to do any rocket science to run KVM, for example.
I mean the existing install is converted to a VM in the hypervisor, too. I'm unaware of a way in Linux to convert your install to being a VM under KVM.
It’s not, the problem is all the legacy compat stuff they are forced to support forever now. NT and Windows without legacy would be an amazing operating system but also nobody would use it.
They already have a great WSL going.
I run Ubuntu 18.04 under Windows 10 all day long.
I use it to model and test code that will take hours to fold into a build for our embedded target. On the last feature I completed, I wrote 2500 lines of pthread / Unix domain socket server code, and it ran on my WSL/Ubuntu target with only lightweight text file based shims to replace drivers on the embedded target. When I slid it over into the embedded build flow, it ran on the first try, no changes.
Compiled as a unit test harness in about 3 seconds on my WSL instance. Our full embedded target takes 30 minutes to build. So the WSL/Ubuntu instance is a huge time saver.
Advertising?
Part of my job is managing about 60 Windows pro Desktops/Laptops. I have yet see any advertising on them.
On Windows Home, however, it comes with Candy Crush and a bunch of nonsense that shows up in the Start menu tiles. I just uninstalled them and called it a day.
On your last point, I have been using Chocolatey[0] for several years now and it does a pretty good job for stuff that isn't on the Windows Store or Steam.
While I just turned to Ubuntu again, because these days I feel more productive in Linux, all these complaints can be easily mended.
You can turn off advertisements during installation, and set up privacy measures like not giving up your hardware ID for advertising.
I had no problem with Windows restarting itself during work hours. After you install all drivers, it would only restart during the night while I was sleeping.
About the central repository, it's true. You have the Windows Store, but it's very incomplete. The best tool I found to help me was Chocolatey, and it saved me a few hours.
https://chocolatey.org/
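For anyone who hasn't tried it, day-to-day use is a couple of commands from an elevated shell (the package names here are just common examples from the community repo):

    # install a few packages in one go
    choco install git 7zip -y
    # upgrade everything chocolatey manages
    choco upgrade all -y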
Don't forget to scroll down on that toggle flipping screen -- when you start toggling the top ones, their description text gets longer and (on my screen anyway) pushes some of the toggles below the visible viewport.
I never use quick setup; I flip every single available ad and privacy toggle, and yet Candy Crush shows up as a to-be-installed game in the Start menu anyway. I have installed Windows 10 a lot of times as well.
Edit: If you don't mind a little up front prep time, you could probably create install media for windows that would run one of the above programs immediately after finishing the install. If you have to do it multiple times quickly that may save some time.
The fact that these are required at all is ridiculous, and (part of) why I run Linux/BSD these days.
Shoot them an email explaining what you want and they'll have you meet the 5-license minimum for volume licensing by buying a Windows Pro license, a Windows Enterprise Upgrade license, and 3 user CALs, which are cheap. They'll put your name down as the business and then you're done.
Why would you want a long term support contract for Linux? How is that comparable? Do you contact Microsoft for support often? And if so, how many of those things could you resolve yourself on Linux in 10 minutes via StackOverflow?
LTSC is the long-term support enterprise version of Windows. Last I checked, it's stripped down without the Windows Store/Cortana/Candy Crush nonsense that a typical install includes.
I interpreted the OP as suggesting that the proper analog for a $200 (plus ads) Windows Pro license was a Linux long term support contract, suggesting that $200 (plus ads) for Windows Pro is a bargain compared to a Linux support contract (e.g., from redhat or similar). In this case, I'm not sure why a Linux support contract is an appropriate comparison operand.
If the OP indeed meant "Windows LTSC", then I'm not following along with their point.
Here's the thing about disabling/turning off settings in Windows 10 (at least from my experience):
- half of the things are for stuff you shouldn't have to opt out of in the first place in any reasonable OS
- that's if you can even opt out of them
- the settings/ways to do so are often obscure/obscured
- come next update, your settings are never safe, and you can't have faith they didn't add more bullshit that you have to research and find ways of turning off
I was unsuccessful turning off advertisements personally. I also find wsl2 onerous, as I have run into a bunch of strange edge conditions trying to use the same directory with linux programs and windows programs.
I gave up and went back to Linux. I had crazy issues with simple things - like not being able to _rename files_ if git under WSL had cloned them (but it would work if Windows git had cloned them). Death by hundreds of little cuts like this for me. Visual Studio Code worked just as well if not better on a Mac or Linux.
I've taken to installing Windows Server on personal desktop machines just because Pro is so bad.
Server does require initial tweaks to run smoothly on a desktop machine. Some security and telemetry policies need to be relaxed, the audio service doesn't run by default, Internet Explorer Enhanced Security Configuration needs to be disabled so you can grab an installer for your preferred browser, and other minor things like disabling Ctrl+Alt+Del on the login screen.
The result, though, is that you get an OS that acts like regular old Windows 10, with _zero_ ads and (I think?) less telemetry. I have not seen anything Pro can do that Server cannot, even gaming (Steam, AMD graphics drivers) works fine. As a bonus, you're not subject to any of the restrictions that Pro has for advanced workloads like virtualization. Server Manager along with all the extra utilities not found on Pro are also nice to have.
Updates are still downloaded/installed automatically, but servers can't reboot whenever they feel like it, so I'm not ever interrupted by unwanted reboots like on Pro.
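For example, the audio service tweak mentioned above is a one-liner from an elevated PowerShell; a sketch, using the stock Windows Audio service name:

    # make the audio service start automatically, and start it now
    Set-Service -Name Audiosrv -StartupType Automatic
    Start-Service -Name Audiosrv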
Somehow I found it easy to set up Windows 10 Home edition to only "tell" me about updates, with no forced reboots. I didn't use any add-ons.
Just set 'active hours', turn off everything under 'Advanced', and if you really want to not do any updates for a while, you can select a date 35 days in the future to restart updates.
Three very simple things built into the update tool.
If you are on a corporate machine, they may limit what you can block based on their internal rules, but that is an IT problem, not Microsoft.
Windows 10 never updates or reboots during my active hours.
I always defer updates to a long time out, then manually update at a point where I want to. Problem solved.
In my experience Windows will warn you when it plans to schedule a restart for an update. That's when I always reschedule the update to push it as far out as possible. Windows also warns you one hour before a scheduled restart, and you can reschedule again at that point.
I use chocolatey[1] and winget[2] will be officially supported when it's finished. Microsoft is a step ahead of Apple here, who still refuse to officially support a package manager, much as I love macOS.
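Even in preview, winget covers the basics; a rough sketch of what it looks like today (the package ID is an example - `winget search` shows the real ones):

    # find a package, then install it by ID
    winget search powertoys
    winget install Microsoft.PowerToys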
1. Can't argue there, although once I "uninstall" the advertised apps, I've never had them return
2. There's an option to disable automatic restart for updates
3. There's the winget package manager, but it's still in preview. There's also chocolatey and scoop, which are community run, but that's similar to homebrew on macOS.
> 2. There's an option to disable automatic restart for updates
Where? I've been able to defer reboot, but not disable it completely. ShutUp10 seems to (mostly) work, but automatic updates/reboots somehow get re-enabled sometimes.
Group policy does not stop it from eventually rebooting. Though it will stop the "rebooting right after applying an update" horror show.
With Group Policy: if you are logged in, Windows will pop a modal dialog in your face and you can preempt it there, but just leaving an application running and being AFK will not prevent reboot if Windows has decided it has waited long enough. The amount of time I can ignore the "restart" request in Windows Updates varies from days to weeks for me.
I hate the behavior with a passion, but I can see why Microsoft wants to force reboots eventually. Having un-updated Windows is not good for 90% of the people out there.
Honestly speaking, Windows 10 is as far from the spirit of the "I'm a PC" and "Windows 7 was my idea" ad campaigns as you can get. Now it's just "You will bend over and take what we give you, for the greater good"
If you want full control of your OS, you need to switch to a BSD or Linux, with appropriate expectations about the applications you will or won't find there.
That is not true. There is a group policy to disable updates. I know, because I have to run updates by hand on my work computer, otherwise I stay months behind on security patches.
Stop spreading lies. It does work. (I also have not had ads in Windows on any of the Windows machines I own. But this is a Microsoft product, which always raises these long lists of myths.)
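For reference, the policy in question is "Configure Automatic Updates" under Computer Configuration > Administrative Templates > Windows Components > Windows Update. A sketch of the equivalent registry change from an elevated prompt - worth double-checking the value names against current docs before relying on it:

    # NoAutoUpdate=1 disables automatic updates entirely;
    # alternatively, AUOptions=2 means "notify before download"
    reg.exe add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoUpdate /t REG_DWORD /d 1 /f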
Updates requiring reboot are not applied unless you reboot... so don't apply updates requiring reboot? It's useless to install them if you refuse to reboot. You can't have it both ways.
So the solution, as I said, is to not install these updates. (Or reboot, but you say you don't want reboots.)
Yes, besides paying $200 for a Windows license, you have to spend many hours removing all the annoyances you mentioned. I boot into Windows sometimes for certain apps and I'm still struggling with these annoyances after several years of using Windows. This effectively raises the cost of Windows far beyond the license cost.
There are advertisements built in because no consumer buys Windows. Rather the OEM does through their bulk license or it gets pirated. Enterprises can pay to remove ads.
It still requires a specific license to use virtualization (Hyper-V). As far as I am aware, a Windows 10 Home license cannot run Hyper-V. It needs a Pro, Education, or Enterprise license to use the feature (mobile licenses don't have it). Likely nix23 upgraded to Win10 Pro from Win10 Home for this reason.
You can run WSL on any version of Windows (including Home Edition). Docker can also use WSL2 as its backend, so I rarely need to run VMs on my Windows dev machines anymore.
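On recent Windows 10 builds the whole WSL setup is one command (older builds need the manual feature-enable dance instead); a sketch:

    # installs the WSL plumbing plus a default Ubuntu distro; needs a reboot
    wsl --install
    # make sure new distros get the WSL2 backend
    wsl --set-default-version 2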
I take a snapshot immediately then load from that when it expires. Or there is some arcane, verbose powershell incantation that has also worked for me.
This works for my web testing purposes, but probably isn't practical for a CNC machine.
If I understand correctly, the evaluation licence is very restrictive. You essentially aren't allowed to use it for any purpose other than, well, evaluation.
> Start building Windows applications quickly by using a virtual machine with the latest versions of Windows, the developer tools, SDKs, and samples ready to go
You aren't allowed to use the evaluation licence for production work. The licence document is clear on this:
> You may use the software in the virtual hard disk image only to demonstrate and internally evaluate it. You may not use the software for commercial purposes. You may not use the software in a live operating environment.
I think the heading "Get a Windows 10 development environment" is referring to downloading the image, rather than to licensing. I agree it's not very clear.
How long was the postponement message up? I have left windows machines doing background processes for weeks to find them interrupted by a restart. Sometimes those interruptions have resulted in data loss.
I've also had forced restarts while a DirectX application was in full screen and I didn't see the Windows popup.
I think the one thing he said that resonated is: "Doing forensics is almost impossible." Holy hell, yes. I use a lot of weird USB hardware, and debugging USB serial-port issues is impossible (even with a Beagle USB debugger, it only gets you half way; system logs on macOS are not particularly helpful).
Why FreeBSD over Linux?
This is the most perplexing decision in the article. Why would OP do this? I have FreeBSD running on an old laptop, alongside an Ubuntu laptop. There's more software ported for Ubuntu than FreeBSD and I don't want to rebuild everything for BSD; I have too much work to do. Nobody targets BSD but BSD devs.
I'd like to know though, OP aside, besides the BSD networking speed and extensions, why FreeBSD over Linux when the latter is more widely supported for GP dev?
I feel the author's frustration, though. macOS is really diverging[1] from BSD and become even more obfuscated, and as a power user who spent over two decades working in Linux, it is very hard for me to treat macOS and Win10 seriously as development platforms (to my hireability's detriment, I suppose) for anything other than Node/Electron and playing around for my own projects.
> I use Windows 10 on a PC I built and it's fine.
Because you probably don't use Win10 the way other people use it. If you've spent any time developing in Linux, Windows is a pathetic development system once you step outside Visual Studio. The Unix command line is one of the most insanely flexible tools ever built, which is why the philosophy has changed so little in 40+ years. (Plus I don't like ads popping up in the toolbar or start menu when I'm trying to work.)
[1] Case in point: I tried setting up iPerf on a new MBP today. port/brew both failed to install correctly with lib dependencies broken. `iperf`? Seriously? /smh/
> Linux has systemd, not my favorite thing out there, Windows is privacy nightmare. That left me with 2 major options: Linuxes without systemd (Gentoo, in my case) or BSDs.
> Since I run FreeBSD servers anyway, I just migrated to FreeBSD.
Basically because the author hates systemd (not an invalid point, but that ship has sailed) and has familiarity with FreeBSD. This isn’t really an article that’s trying to convince people to run FreeBSD. It’s trying to explain why they left macOS. Where they ended up is less consequential, IMO.
Yes, I read that. I should have rephrased because I was posing a more general question, as it seems like OP is headed directly for another problem which is the BSD v. Linux package support deficit, which IMO is far more painful than macOS+VNC vs BSD.
I don’t disagree with you. But I don’t think that the author was all that interested in getting into Linux vs BSD. Instead, they were more interested in pointing out the problems they had with macOS and how it is moving more away from its UNIX roots. This is a fair point.
The larger question of “if not macOS, then what” is almost a throw away in this post. Which, I don’t blame the author for at all... it’s a prickly issue and includes a lot of background context that dilutes their primary point of macOS issues.
I don't like systemd (or redhat) either, it's also one of the reasons why I moved to FreeBSD. And on servers I use Alpine. I don't hate systemd as such but I definitely don't like it. I think OpenRC on Alpine is much simpler.
The ship has only sailed for the mainstream Linux OSes. With Ubuntu for example other things bother me too, like snap, I don't want all this overhead. But I see more and more people moving away from those things.
"Giving in and going with the flow" is not necessary with open OSes. It's exactly why I'm moving off macOS too, because I'll no longer be locked into some company that decides what's best for them (and not me)
There are many linux distros without systemd too. Artix (arch without systemd), Devuan (debian without systemd), and Void are the big ones off the top of my head.
> I'd like to know though, besides the BSD networking speed and extensions, why FreeBSD over Linux when the latter is more widely supported for GP dev?
Considering macOS and Windows have much more software and driver support than Linux, it's only natural to choose to give up a little more for better docs, userland, and all the other benefits of FreeBSD.
It's the exact same argument the "works for me" Linux folks use, taken to its full conclusion.
I mean, Windows has better driver support, but macOS doesn't really, unless we're strictly talking about external peripherals. In terms of internal components Linux absolutely slaughters macOS for raw number of supported drivers, by design, since macOS is only intended to run on Apple-approved hardware.
Sorry, I know it's nit-picking, I just had to say something.
> This is the most perplexing decision in the article. Why would OP do this? I have FreeBSD running on an old laptop, along side an Ubuntu laptop. There's more software ported for Ubuntu than FreeBSD and I don't want to rebuild everything for BSD, I have too much work to do. Nobody targets BSD but BSD devs.
For instance, I've used Arch+KDE for years on the same laptop and I've never had problems with updates, thanks to Arch's rolling release approach. That's why, for example, I don't get why FreeBSD over Linux in that particular regard.
Like "Windows New Technology Technology (Windows NT Technology)" Or "The The Tar Tar Pits" (The La Brea Tar Pits). :)
I should be more specific: I meant serial-port. Lots of USB devices communicate through a USB serial-port driver, but not every USB device appears as a serial port (e.g., all USB devices expose endpoints for bulk or interrupt transfers, but whether that maps to a COM: port or /dev/tty* depends on the driver).
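A quick sanity check on macOS for this kind of thing is to compare what the USB stack enumerated against what actually got a serial driver bound; a sketch using stock tools:

    # list every USB device the system sees, with vendor/product IDs
    system_profiler SPUSBDataType
    # list the serial device nodes a driver actually created
    ls /dev/cu.* /dev/tty.*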
I'll mention two pain points as a former windows developer.
I used to develop windows shareware, all the way back to Windows 95. Back then you could write to the installation folder "c:\program files\myapp" and life was simple.
At some point MS figured this was bad security, and you needed to write to the Windows Registry (HKLM / HKCU) to store your app settings, and write to the C:\Users\<username>\AppData\ folder to store your data files.
So all of this resulted in 3 different places you needed to write your data to now. But then it gets even more split because, in some cases you needed to run as an admin during the installation, and your appdata will be stored under the admin account.
Also, at some point c:\program files\ was split and they added c:\program files (x86), and I believe c:\program files\ is virtualized. And the Windows registry is also split into local user, machine, and admin settings.
Two issues I ran into:
1. c:\program files\myapp\ - I could not trust what I wrote there. I believe it's virtualized. So I ran into issues where I would delete a file in my installation folder, and an old version would be put back by Windows in the background.
2. When a regular user installed my app and it needed to write to HKLM, the app needed to be elevated to admin to do that. This also resulted in another split of settings, because there are per-user settings and per-machine settings.
In conclusion, app installation became a confusing mess!
> In conclusion, app installation became a confusing mess!
No, app installation was a confusing mess and now it's been cleaned up. There's nothing wrong with an app separating user-specific data from system-global data. Running an app as a regular user shouldn't require any writes to HKLM!
> Why is Windows a nightmare? I use Windows 10 on a PC I built and it's fine.
Buy any consumer machine with Windows 10 and it comes with so much pre-loaded doodoo, that it takes a couple hours to purge it down to the basics. In fact, so much so, that there are a myriad of scripts to help do this [0].
What's good about modern Windows is that after activation the license binds to the hardware. So you run it once connected to the Internet to make sure it's activated, then download a Windows ISO from microsoft.com and reinstall it. It'll just work.
Another good thing about modern Windows is that it downloads all the necessary drivers after the first boot and Internet connection. Gamers might want to install latest GPU drivers, but that's not necessary for most people. And those drivers usually do not come with bloatware.
At least that was my experience with a few computers. Reinstalling takes 3-5 minutes and downloading drivers takes 5-10 minutes depending on hardware and connection speed. Definitely faster than a couple of hours.
The only thing that needs to be uninstalled is the ads from Microsoft, but that's another 5 minutes: just navigate to Settings > Apps and uninstall the unwanted games. They do not reappear, in my experience.
So I haven't tried this lately, but the last time I did, resetting windows this way did reset it... to the bloated crapware fest that the vendor had delivered it as. Did that change?
> Why did setting up macOS take 3 days? I don't even think it would take 1 day if I set up a new machine from scratch.
I think it might depend on what "setting up" means. For example, "logging into my main day-to-day accounts" might take a couple of hours, max, allowing for even an annoying system-level issue.
But installing, configuring, and gaining access to all one's apps might add anywhere from another few hours to another few days, depending on the number of apps, the location of the files, the version delta between new and old systems, and so on. If you have to do some online research and you hit StackOverflow more than a couple of times as you search around, you may be looking at a serious time bump.
And if you have to contact the software vendor...well, yikes, in a lot of cases. Days, understandable. Weeks of suffering--possible.
This is not even getting into things like working with local network services and local network utilities, configuring old hardware on the new machine, or even changing or disabling new Mac OS or Mac features to suit one's working style.
> It took me two hours to set up Debian by hand yesterday.
Congrats, :) but this does complicate the question further because now it depends on what is involved in this "by hand" thing. It took me 20 minutes to set up Haiku OS by hand...10 minutes to set up Puppy Linux by hand...what does it mean? That I used my hands? That I plugged in a USB drive and hit "Next" a bunch of times? That I configured and compiled all sources from scratch?
Trying to build everything from scratch is actually the only time it's ever taken me several days to setup an OS. If that's what the author has to do for whatever reason, I can sympathize with that. I'm probably never going to take the Gentoo approach again, but I can't knock its particular advantages.
One of the reasons I have stuck with MacOS is that with Time Machine, I don't have to do any install/config when I get a new machine. Just point it at my backup drive and I'm good to go after a restore. It's not a 100% solution for everyone (e.g. multi machine usage) but it's very good at the common 90% use case.
There are similar solutions on all platforms now, aren't there? I think you could safely contemplate the use of just about any platform, if full backup & restore is holding you back.
macOS has a lot of faults but I agree, 3 days to set it up is an odd statement. I have a reasonably convoluted setup with VMware Fusion Windows and Linux guests, a few different developer toolchains (vanilla clang, rust, go, etc) and two dozen developer tools along with all the usual applications and it took me a little over two hours to do a clean install of Big Sur on my MacBook Pro the other week.
Homebrew manages 90+% of installations, with the few that it doesn't handle coming from the App Store or manual download and install. The rest of the work is simply restoring my config files and copying over data, all of which is done via a shell script.
Windows and Linux (Fedora in my case) are mostly the same, with just a little extra work on Windows to manually change some settings that never seem to stick if done another way.
Not saying switching to BSD is the wrong thing for this person, but having used BSD in the past I wouldn't say it is any quicker to set up than macOS or pretty much any other OS these days. In fact, with the hardware issues you are likely to run into, it can take a lot longer to get an equivalently working system. They even say how WiFi and Bluetooth still aren't as functional or performant as they are elsewhere, so who knows how much time they have spent trying to sort that.
But as long as the author finds their environment works better for them that is all that really matters. However I feel when people throw out things like 'macOS took 3 days to setup' it weakens their argument as I know that is not "normal" so I wonder if perhaps they are just making up things to justify (to themselves) the switch. I see it a lot when people switch from Windows to macOS or Linux. People come up with all kinds of "reasons" as to why they are doing it and a lot of the time it is more emotional than factual which is fine providing you admit that rather than come up with nonsense claims.
Anecdotes and all, but yes this has been my experience too. macOS is by far the fastest to get up and running on.
Linux distros probably come in second, with the caveat that it's easy to get sucked into a time-burning black hole of fine-grained tweaks if I don't make a point of keeping things close to default.
Windows is downright frustrating because even after spending considerable time on setup, there are things that just don't work correctly — for example, changing virtual desktops independent of monitor is not possible with the built-in implementation.
The idea of running BSD and it going well is one thing, but in practice you have to really want it. FreeBSD and OpenBSD are better (and easier) than ever, but you still give up a lot in comparison with the average Linux distribution, where more things just work.
The BSDs are often great if you stick to the base system, but once you start adding software, things break way more often than with Debian, Fedora, Ubuntu, etc.
This is especially true if your hardware is not-a-Thinkpad.
I always wanted to run OpenBSD as my main OS, but for "production" desktop use, I can't deny how much easier/faster it is to get and stay running with Debian, Ubuntu or Fedora.
I thought he was pretty clear about it: hardware not being serviceable, a distracting UX, and lots of extra cruft. He was also pretty clear about not wanting systemd, and, well, Gentoo isn't famous for being something that can be configured quickly. macOS can be difficult, especially if you are installing lots of non-standard servers and scripting languages and having to configure them. FreeBSD is quite stable, so there is a big expectation that updates don't break things the way they tend to with Linux and, well, WSL for Windows (which doubles the surface for breaking changes... the Linux distro updates + Windows updates).
> Why is Windows a nightmare? I use Windows 10 on a PC I built and it's fine
It's fine, for you. I spent a day convincing Windows 10 to install on my NVME drive. It didn't like something or other. Brand new, blank drive.
Linux installed and worked flawlessly. I actually used it to troubleshoot and mess with partition schemes until Windows was happy. The Windows 'repair' and diagnostic tools are laughable.
It is running ok so far. The Linux subsystem and the new shell make it not entirely terrible.
Some people care a lot about Open Source Ideology. The BSDs are free in a different way than Linux is Free, GPL vs BSD licensing.
Separately, the source code for FreeBSD is pretty clean and easy to read. I needed to modify the behaviour of one of the core utils a few years ago, and I found it very easy to do. The C library that it uses is also pretty well written. It won't be quite as clean as OpenBSD, but it is a lot simpler than trying to figure out glibc (which was possibly intentionally obfuscated to avoid lawsuits).
The build system for FreeBSD is also pretty simple to figure out. You can download the source, modify the kernel or userland, and rebuild world in at most 30 or so commands. I haven't tried with Debian or any Linux distro, but I would imagine it would be harder given the separation of the kernel and userland.
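For anyone curious, the handbook flow is roughly this; a sketch against the stock GENERIC kernel config (the exact etcupdate/mergemaster step varies by release):

    # from /usr/src after fetching the source tree
    make -j4 buildworld
    make -j4 buildkernel KERNCONF=GENERIC
    make installkernel KERNCONF=GENERIC
    shutdown -r now      # reboot onto the new kernel first
    make installworld    # then install the new userland
    etcupdate            # and merge config changes under /etc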
If you've experienced computer use for any length of time, that is the root of everything.
like for instance:
> Every time Apple pushed an updated, my pf.conf and automount configs got broken on macOS. They either got deleted or they moved somewhere. Well, the last 2 times it just got deleted.
I mean, wtf? With open source, there is at least discussion and education about something like this.
> I mean, wtf? With open source, there is at least discussion and education about something like this.
You'd be surprised -- unless you follow dev forums for various elements of your DE/distro.
macOS does that because it updates into a clean system as far as those configs are concerned.
In any case, it takes like 2 minutes to write a shell script to deploy your pf.conf/automount config after every such upgrade, and 1 second to use it. Or to look for the proper, permanent way to do it: https://blog.scottlowe.org/2013/05/15/using-pf-on-os-x-mount...
With brew and brew mas you can also automate command line and GUI app installations, even custom font installs, among other things.
I can get up and running on a new machine in ~2 hours from a clean install with an automated shell script, and get all my apps and CLI tools, custom shell aliases/scripts, configuration for dev tools like VSCode and IntelliJ, documents (from TM/Dropbox), and so on.
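Concretely, most of that can hang off a Brewfile kept in a dotfiles repo; a small sketch, with the package names and path as placeholders:

    # Brewfile - consumed by `brew bundle`
    brew "git"                    # CLI tools
    cask "visual-studio-code"     # GUI apps
    mas "Xcode", id: 497799835    # Mac App Store apps via mas

    # on the new machine:
    brew bundle --file=~/dotfiles/Brewfile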
> I mean, wtf? With open source, there is at least discussion and education about something like this.
I mean Gnome3 has removed features pretty regularly that people liked without any real discussion. I don't think that is a closed/open source thing, more of a project communication thing.
I won't call Windows a "nightmare" overall, but I had a nightmarish experience trying to install it last night.
It turns out that creating a bootable USB stick that can install Windows 10 is absurdly difficult. You can download an ISO from Microsoft, but there is literally no way I could find to successfully write that ISO to a USB stick from an OS other than Windows.
If you Google "write windows 10 iso to usb linux" and click on a result like https://itsfoss.com/bootable-windows-usb-linux/, the instructions will create a USB stick that will successfully boot to the installer, but will fail shortly thereafter with "A media driver your computer needs is missing".
If you Google this error and try to find a solution, it feels like you just asked a witch doctor for medical advice. You'll find advice to try plugging the USB drive into a different USB slot, try using a USB 2.0 drive instead of 3.0, etc. This Stack Overflow answer swears that the cause of this error is that Microsoft's servers corrupt the download of the ISO file in transit by aborting the transfer early: https://superuser.com/a/964362 (I verified the SHA of my file and it was fine).
Many answers suggest that using the Windows Media Creation Tool from Windows is the most reliable way to do it. I managed to resurrect my old Windows partition and boot into it, and indeed the Windows Media Creation Tool burned a USB stick that successfully booted the installer. But when I tried to select the partition I wanted to install onto, I got the error "Windows cannot be installed to this disk. The selected disk is of the gpt partition style."
If you Google this error, it sounds like when you create the bootable USB, you create it as either a BIOS-booting or a UEFI-booting variety of boot disk, and once created, it can only install onto an MBR or GPT partition, respectively. The Windows Media Creation Tool did not ask me which kind I wanted; it (apparently) just creates a BIOS-booting USB by default.
The only way I was ultimately able to install onto my GPT partition was to use Rufus, which explicitly lets you choose between UEFI and BIOS when you create the USB. It is amusing that Rufus runs on Windows only (so you need to have Windows to install Windows). Luckily I was able to use the VirtualBox images provided by Microsoft for Microsoft Edge testing to run Rufus and create the USB.
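(For completeness: on the Linux side, WoeUSB exists precisely because plain dd can't handle the Windows ISO's quirks, such as the >4 GB install.wim vs. FAT32 problem. I haven't verified it against this exact UEFI/GPT failure, so treat this as a pointer rather than a guaranteed fix; the ISO filename and /dev/sdX are placeholders:)

    sudo woeusb --device Win10.iso /dev/sdX --target-filesystem NTFS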
Once Windows is installed it's pretty reliable. But it is a nightmare to install.
Wait, there is an automatic tool to create the USB stick. I use it regularly.
I think it is interesting that you can still buy old Windows 7 hardware, and just stick the Windows 10 install USB into it, reboot, and be on your way.
Yeah, I was a little confused by the 3 day comment. When I get a new Mac I use Apple's Migration Assistant to move my environment from the old machine to the new one. That brings over all of my customizations and I am ready to go. I'll go through my change list and make sure everything came over. Sometimes something in /etc needs to be updated or perhaps a license key for an app. It is a simple process, at least for me.
This just happened. (It’s the reason I’m on HN rather than working right now.)
- I get up at 5am to do some early work.
- Work laptop (Win10) is still connected to the VPN from yesterday. Cool.
- Try to connect to drive. No dice. Try same drive in SharePoint. Failed auth. Reconnect VPN. Same. Oh well, guess I’ll reboot.
- Reboot. Go to make coffee.
- Come back to black screen. Ah yes, that’s right. If I reboot and McAfee drive encryption has kicked in, and I’m plugged in to my USB-C monitor, all screens are just black. The solution is to...
- ...pull the monitor cable, power down the laptop, power it back up, authenticate with McAfee, plug the monitor in, let Windows start, check HN, reply to this post.
I know this latter half isn’t Windows’ fault, strictly speaking. But it’s the Wintel laptop’s fault and that’s all the same to me. I hate using Windows. It’s easily the thing I dislike most about my job.
Let’s just try my Mac over here, hang on. I’ll open the lid. Yep, worked.
Corporate IT software on Mac is terrible. I'm happy that my employer has sane Mac policies and management software, but it is due to the large fleet of Apple devices. My wife's employer tries to manage their Macs like it is Windows and it is terrible, IT doesn't test anything on Macs and they don't know how to troubleshoot issues.
I run Windows 10 everywhere, too. I stay on the "happy path" hardware-wise. Everything "just works." Really. I code all day long (Erlang, F#, C++/CUDA) and I deploy code on Linux and/or Windows servers on Azure.
Without first-hand macOS 'power user' experience I can understand why it might be difficult to wrap your head around. Sometimes things that might take you 5 seconds on Linux take you 8 hours for some reason. Like installing Python. And then a version change breaks something and you need to reinstall Python, but everything is broken and nothing works. And then all of a sudden Homebrew doesn't work and you need to spend a week researching why, and it turns out you need to boot into single-user mode and chmod every folder in some directory for everything to work. Pure nightmare.
I agree 100%. While it does take me a day or two to set up a new Mac, 99% of that time is downloading IDEs and configuring them the bizarro way I like them configured.
Based on the tone of where it was introduced in the article, I would say the author means that a normal Windows install has a lot of unnecessary stuff running, and that it is entirely fragmented in both the location of files and the location of configuration.
One thing that keeps me somewhat sane in Windows are those heroes who spend hundreds of hours figuring out what each system service does, whether it's needed, and how to disable it if it's not.
Well sure I guess you can live in WSL and use Powershell (not sure how the two are related?), but what about the rest of the shit. How do you put up with random reboots from updates, demands from Edge to update, all the privacy-invading telemetry, and random shit like Candy Crush installed from the Windows Store?
Last time I had to switch between my Mac laptops I wrote up an Ansible playbook that does (almost) all the configuration on the new machine: it pulls the dotfiles from a git repo and installs everything I need. It takes me less than an hour to move to a new system now.
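A stripped-down sketch of what such a playbook can look like (the repo URL and package names are placeholders):

    # setup.yml -- run with: ansible-playbook setup.yml
    - hosts: localhost
      connection: local
      tasks:
        - name: Pull dotfiles
          ansible.builtin.git:
            repo: https://github.com/you/dotfiles.git
            dest: "{{ ansible_env.HOME }}/.dotfiles"
        - name: Install CLI tools via Homebrew
          community.general.homebrew:
            name: [git, tmux, ripgrep]
            state: present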
1) 3 days to set up macOS?
Yes, it took me at least 3 days. Keep in mind that a setup is not just installing software; it's also dotfiles, the shell environment, automount (I use NFS a lot), PGP/GPG keychains, the OS keychain, the firewall (pf in my case), privacy settings, company-related software, etc. So yes, it takes time, which I am okay with. My problem with macOS is that updating/upgrading the system breaks a lot of that configuration.
2) Why FreeBSD?
Because I love it :) my company's product is based on FreeBSD, my servers are FreeBSD, my operating system of choice for teaching is FreeBSD. The handbook is there, all man pages are well written, pkg is easy to use, it's a whole system. Also: ZFS and DTrace makes your life easier. Sure, I can have ZFS on Linux and eBPF, but why learn a new technology when DTrace is rock-solid. FreeBSD is not "just" an OS, it's a complete self-hosted development ecosystem.
3) WiFi?
Yes, WiFi is not the best, but not everyone needs a 100 Mbps connection. I have a wired connection at home for streaming movies to my PS4 (also a FreeBSD-based system), but other than that, it's fine. I will still donate every year so the devs can improve it.
Apologies for the bad English, it's not my native language.
> 3 days to set up macOS? Yes, it took me at least 3 days. Keep in mind that a setup is not just installing software; it's also dotfiles, the shell environment, automount (I use NFS a lot), PGP/GPG keychains, the OS keychain, the firewall (pf in my case), privacy settings, company-related software, etc. So yes, it takes time, which I am okay with. My problem with macOS is that updating/upgrading the system breaks a lot of that configuration.
I do get frustrated when other people jump in to in effect say "jeeze, three days, what are you doing wrong?"
I sent my work MacBook into Apple for a keyboard replacement, which naturally means they have to wipe the SSD, as one does with keyboard replacements. Setting it up again meant replicating three years of cruft that I had long since forgotten about. It's been a month since I did this, and I'm still not up to the level it was before.
Password manager, check. Both the native 1Password and the browser extensions. Speaking of browsers, need to install both Firefox and Chrome for testing. Brew? OK. AWS, gotta configure new access credentials there, now let's install the aws-cli, oh, it's not available in a package manager, cool. Node, Go, Rust, Elixir, OK, now maybe my git repositories? Oh, git isn't installed, let's install Xcode, and there's a system update, that'll take about 25 minutes. Didn't I have a command to quickly switch kubernetes contexts? Let's see if I snippeted that somewhere, actually I guess I need to install eksctl and kubectl now. Don't forget email sign-in, calendar sign-in, gotta install Slack, iTerm, VSCode, jeez, I remember VSCode being a lot more productive, yup, I'm missing about twenty extensions.
This stuff is really, really hard to automate; not because it's technically hard to automate, though in some cases it is, but because it's shit I do, like, once every three years. No one automates things they do three times a decade. Cloud or local server system-image backups can help, but I'm not giving Apple a full system image for Time Machine to use; there's too much sensitive data on this machine. It's just hard! And that's OK.
1) Before sending your Mac in for repair, use Carbon Copy Cloner [1] or SuperDuper! [2] to make a clone of your system drive to a spare SSD.
2) When your Mac is returned to you, if the system drive has been wiped, then use the same software to restore your backup.
Both these programs are free (gratis) for the described use, and have a reputation for reliability. The spare SSD drive will cost about $80 (how much is your time worth?).
I'll definitely do something like this next time. This one just caught me by surprise; I should have done the research, or listened when the technician asked if I'd backed up my data (they always ask that, even if there's little chance of an SSD wipe), but apparently the SSD encryption module (Touch ID) and keyboard are all one unit, so replacing the known-defective 2016-2018 keyboards wipes everything. True world-class engineering from Apple, but what can you do; there's plenty of blame on my side as well.
Is it really that hard to automate things on macOS? I would have thought you could put most of the install instructions for everything you listed into a bash script. Is macOS locked down in some way?
Why is logging into things an issue if you have a password manager to autofill them?
On Linux I just make a backup of the Firefox folder so I don't have to reinstall the extensions on my new computer.
All the settings files I want to keep live in a Syncthing folder, and then I have a bash script that creates symlinks where those settings are supposed to be (see the sketch below).
Doing all that manually every 3 years would be my idea of "really, really hard". With a script you add the instructions once and then keep reusing them without any effort.
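A minimal sketch of that symlink step, assuming the synced settings live under a hypothetical ~/Sync/dotfiles directory:

    #!/bin/sh
    # link every synced dotfile into $HOME, replacing stale links
    for f in "$HOME"/Sync/dotfiles/*; do
        ln -sfn "$f" "$HOME/.$(basename "$f")"
    done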
I admire people who write and publish in non-native languages, I don't have the guts to publish in my bad German.
It's clear you're non-native, but it's not "bad English". :) Thanks for sharing it with us! I like hearing about the range of experiences that highly technical people have with macOS. I'm still trying to use it as my daily driver, even though it takes a few days of setup (compared to the old days of building a preconfigured image), because my alternate option (Linux, for me) also takes a similar amount of time. You're right about Windows being a trash fire.
PS: "outside of the box" is the end of the idiom referring to "thinking outside of the box" (thinking differently about a problem). I think the one you meant to use in your article was "out of the box", which means the first experience with a product when it's opened or unwrapped (think: "taken out of the box"). I hadn't even noticed how similar these two are until today.
He says better Wifi and Bluetooth are the only things he's missing. That's kind of a big deal, though, right? Your wifi speeds touch literally every part of using the system, from downloading packages to browsing the web to watching YouTube or listening to Spotify. Bluetooth is also significant quality of life, from keyboards and mice to headphones and automatic syncing with devices.
I understand his concerns, and it seems like everyone has their "why I left macOS" hot-take cocked and loaded these days, but this sort of sounds like "I left macOS and the only thing I miss are the things that makes a laptop useful in 2020."
Anecdata: I don't use wifi on my work computer. Ethernet is faster and more reliable, and since my desk/monitors/mouse/keyboard stay in the same place it's easy to drag another wire over there too.
Yeah, WiFi is funny like that. I can go long long periods of time without it. But then when you don’t have it, it can really suck. But as long as it’s there in some capacity, nbd.
The same can be said about Bluetooth. I don't use it very often, but whenever I really need it it either doesn't work at all or takes quite some time to get right.
Wireless headphones seem silly until you try them. I was in the "airpods are dumb" camp until I got the pros, and now I exclusively use them except when doing something latency dependent. (Counter-Strike)
I've now got a requirement for solid bluetooth (5) in every pc I own. I don't "need" it, but I also don't need most of the things that make my life more pleasant.
I love my AirPods. There are still a few rough edges, but the experience is vastly superior to anything else I’ve tried.
One important thing to be aware of: because of their size, you can only fit a single battery cell inside each bud; as I understand it, this prevents any ability to specially manage the lifetime of the battery. I remember hearing something like “expect about 380 full recharges until the battery is no good.”
Anyway. Switching between devices is so seamlessly awesome. “Transparency” mode works well and is easy to switch in and out of. They do a great job of detecting if you just pulled them out of your ear and manage stop/start. Combined with a cellular Apple Watch, it totally rocks, giving you complete communications hook-in and letting you keep your phone at home. I assume they make you buy the phone not only because it’s technically easier to implement the full experience, but also because the Watch would totally cannibalize iPhone sales. The iPhone suddenly feels like a niche device when you have the AirPod/Watch combo.
My primary complaint with the AirPod Pros is that no matter which size insert I use or how tightly I secure them, they tend to fall out of my ears when I smile. I wish you could get them in the “peg” form-factor.
You don’t remember when laptops didn’t come with wireless, I suppose — I think it’s possible that there are other distinguishing features of laptops and desktops.
I use Ubuntu which mostly works in these regards. When I first switched from Mac to Linux on a MBP, which has spotty Bluetooth support, the solution was pretty easy--I bought a low-profile USB Bluetooth adapter with strong Linux compatibility, plugged it in, and never thought about it again.
Haven't needed it, but I do recognize that may not be the case for everyone, hence my use of "IME".
I've since switched away from Mac hardware and don't have this problem anymore, along with a myriad of other problems that Apple created for me (I can actually upgrade my RAM to be >16 GB, I don't need to carry around a bag of dongles, etc.).
The whole "have to carry around a bag of dongles" thing to me has always been so disingenuous. I'm kind of convinced that anyone who says it has never actually used one of the recent MacBooks as a daily driver.
You throw one of those little "7 in 1" USB-C hubs in your bag. That's it. That's all you need.
Sorry if I came across as a little aggro--I'm definitely projecting the typical anti-Mac person onto you. It's generally step one of the hate on the MacBook playbook. "Psssht if you buy a MacBook you're living the dongle life."
And it's just not been the case for me with the recent MacBook Pros. Like your experience, with past laptops I've used I've always had to carry around different adapters, regardless of whether the machine was Windows or Mac. No matter how many ports the vendor tries to shoe-horn into the laptop, they can never quite get them all in there, and so you end up with a dongle or series of dongles.
With the MacBook implementation of USB-C, I carry one small USB-C hub that gives me pretty much everything from ethernet to microSD. The infamous "dongle book pro" actually has made my life less dongle-y.
And that's where I see the whole "bag of dongles" as a disingenuous line of attack from people who don't actually use the machines for their real lives. In practice, it's the least dongle-y laptop I've ever used.
I agree that the article is light on workflow details, making it difficult for the reader to sympathize with the author's arguments.
As I use a laptop for my main machine, I don't use any Bluetooth devices with it, and I connect it to a Thunderbolt dock (the Lenovo one) with Ethernet. I find wired devices more reliable, and wireless devices more expensive, heavier, and usually slower.
I love my little Apple pen and ear pods for my phone and tablet though. It’s just a different "workflow".
> Your wifi speeds touch literally every part of using the system, from downloading packages to browsing the web to watching YouTube or listening to Spotify
Why this obsession with Wifi? If I can I always use a wired connection. It's faster, more reliable and removes a major failure mode.
If wireless is so awesome, how come WiFi routers have wires inside them!
I'm only being a little hyperbolic here--there are a lot of reasons people prefer or even require WiFi.
If I have gigabit ethernet at every place I use a computer, then sure, I'm not saying no. But even in a work from home environment where I have a little more control, it's not always an option.
We had a once-in-a-century ice storm a few years ago. We were without power for four days in the winter.
They re-established power gradually and I still remember the crowds at Starbucks. It was all people waiting to use the outlets to recharge their phones.
Meanwhile, while the power was out and we had no heat, my wall phone worked fine throughout.
Landline phones have an independent power supply over the phone line itself, so as long as you don't use a cordless handset, they work fine even with the electricity out.
I also have the last version of the phone book that was printed in my city, and I was making calls all week and finding people and resources to help me deal with the emergency.
I'm a huge FreeBSD fan because of what the author is hinting at; it's easier to grok because of its stricter separation of user space and system space, and because it has great documentation. I try to use it for any server project, and I ran it as a desktop for a little bit.
That being said, the reason I don't go 100% all in on it and use Ubuntu is the upgrade process. The article says they were successful upgrading minor versions, but major version upgrades are a real pain. Last time I did it it was showing me diffs of system scripts and asking me to make calls on what changes to accept; I chose "use latest" for everything, and ended up breaking "sudo" and effectively lost control of my cloud instance.
If there's a better way to perform system upgrades, I'd love to hear it, because I think the OS is beautifully minimalistic and closer to Unix philosophy than Ubuntu/OSX.
At some point, pkg is slated to take over updates of the base system, kernel, etc. (the "pkgbase" effort).
That isn't to say that the current freebsd-update isn't any good, just that it needs someone who knows what they are doing. And even experienced people make mistakes. Case in point: you tried to upgrade, you were given a locally-modified file under /etc/ to merge with the new version, and you made a small mistake; that shouldn't result in losing the system.
If I'm not mistaken, when Debian is confronted with the same problem it usually asks whether to keep the old file or replace it with the new one, which basically removes the ability to merge the two in many cases. FreeBSD is actually slightly smarter there: it can detect when the comment header is the only thing that changed and merge that on its own. But if there are real differences (e.g. you modified /etc/ssh/sshd_config), it asks you to merge the incoming version with the current one. In any case, I think FreeBSD is extremely stable in what goes into /etc and in how stable the file formats of the individual files there are, so it would be perfectly reasonable to simplify the choices presented to the user.
In my previous job we actually had an exhaustive list of IgnorePaths entries for freebsd-update, just to make sure that nothing would get clobbered by a major or even minor upgrade, and so that freebsd-update would practically run to completion without any interaction of this sort.
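That's the IgnorePaths knob in freebsd-update.conf(5); an excerpt of the kind of thing meant here (the paths are examples):

    # /etc/freebsd-update.conf
    IgnorePaths /etc/pf.conf /etc/ssh/sshd_config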
I highly recommend ZFS with beinstall (/usr/src/tools/build/beinstall.sh). It automatically creates a new ZFS boot environment, installs world & kernel to it, and upgrades your packages. If there is an issue, you can select the old boot environment from the boot loader prompt and be back in your old, pre-update OS with nothing at all changed.
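Even without beinstall, roughly the same safety net is available by hand via bectl(8); a sketch (the boot-environment name and release number are examples):

    bectl create pre-13.2                   # checkpoint the current system
    freebsd-update -r 13.2-RELEASE upgrade
    freebsd-update install                  # run again after the reboot it requests
    # if it goes sideways: pick the old BE in the loader menu, or
    bectl activate pre-13.2 && shutdown -r now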
I find that the order from most user-friendly to least, in terms of the "big 3" BSDs, is FreeBSD, OpenBSD, and NetBSD. But a lot of people love OpenBSD; personally I like it because it ships the boggle command-line game.
I have OpenBSD on my ThinkPad T480s and pretty much everything worked great out of the box. Even things like the volume buttons, which on Linux with dwm I had to map to amixer manually, just work. Of course there are some trade-offs; Bluetooth is a big one that might not work for people. Overall, compared to FreeBSD the setup was dead simple with hardly any work. I know people complain about speed on OpenBSD, but I have not noticed it being that much slower. I am in love and don't see myself hopping again for a while (famous last words, obviously).
OpenBSD feels more cohesively designed, in my experience. If you have an older ThinkPad lying around, it should be a dead-easy installation and a pretty easy setup process. Back when I used it on a laptop, I liked its design but I didn't find it practical for my workflow compared to Linux and macOS.
No, at least not for security updates. There is syspatch for the base system, and "pkg_add -u" now also pulls from a package collection called "packages-stable".
The regular 6-month base system updates have been automated as well with "sysupgrade". It doesn't work with whole-disk encryption if you have required networking firmware that can't be distributed (I'm looking at you, Intel); for that particular case you have to copy the new ramdisk kernel (bsd.rd) to the root of the disk. Regular updates are super easy now.
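So the whole OpenBSD routine nowadays is roughly:

    syspatch        # binary security patches for the base system
    pkg_add -u      # package updates, now from packages-stable
    sysupgrade      # every six months, jump to the next release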
FreeBSD is my second OS and has been for over 10 years. Prior to that I was a Slackware adherent. I do dread the FreeBSD update/upgrade process. Beyond the need to deal with the diff editing, I often need to rebuild/reinstall packages. Maybe I'm doing it all wrong (likely!), but I can't recall the last time I needed an update to an application because of changes to Windows.
You may want to look at the compat packages (misc/compat12x and friends, or the similar options in the config if you're building your own kernel/world).
If you install those, software built for older versions of FreeBSD will mostly continue to work (lsof seems to need a recompile almost always, though).
Fresh rebuilds are probably 'better', but there are lots of reasons you might need to run something built on an earlier version, and it should work (if not, and there's no clear reason, it's a bug).
This is what happened to me. I did the diffs, and it said "if you built packages from ports, rebuild them now". I did everything with pkg (though in retrospect I probably had to reinstall those too), but I guess sudo had been built from ports by DigitalOcean (FreeBSD droplets there come with sudo and a freebsd user preinstalled).
Yeah, my current process is to assume that the upgrade will fail, and spin up a second instance with backups restored. This is for the cloud, where I have less root access (DigitalOcean doesn't give you the root password, and you can't change it in recovery).
For a box you have full control of, the good news is I can almost always fix it in single user mode.
Increasingly these days I'm thinking: the M1 MacBook Air holds up very well, but I also want to use Linux: why not both?
My biggest problem with Linux / BSD is the GUIs. I don't know why but whenever I use a Linux desktop, the mouse either moves too fast or too slow and there's no sweet spot.
The GUI keyboards shortcuts are all messed up: I love the "⌘" (super) key because then ⌘C is copy everywhere, and ^C is an interrupt in the terminal. I have ⌘← , ⌥← , ⌘⌫ , ⌥⌫ , ⇧⌘← , ⇧⌥← ingrained in my memory (for moving, deleting, and selecting by line or by word). I have fussed with remapping keys but there's always that one app that won't cooperate.
So why not both: shift down from a single top-of-the-line 16" MacBook Pro to a MacBook Air, and pair it with a desktop running Linux that I SSH into over the LAN.
- LAN speeds are fast enough that I don't notice any latency.
- I'll have convenient access to both Linux and macOS, for testing.
- I can spec up the Linux desktop as much as I want. It could even have a Threadripper.
- The MacBook Air will have better battery life than any Linux or BSD laptop.
- I can use Tailscale[1] to set up a no-fuss VPN, so that I can still SSH to it even when I'm not on the LAN (see the config sketch after this list).
- All the GUIs will still be macOS, so things like mouse speed and keyboard shortcuts will all be familiar.
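The Tailscale piece really is close to zero-config: after running `tailscale up` on both ends, the laptop side is just an ordinary SSH host entry (the host and tailnet names below are placeholders):

    # ~/.ssh/config on the MacBook
    Host buildbox
        HostName buildbox.tailnet-name.ts.net   # Tailscale MagicDNS name
        User me
        ForwardAgent yes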
It might even net out to be around the same price. My 16" MacBook Pro was like $3,000. An M1 MacBook Air would be $1,500 (with some storage and memory), and the remaining $1,500 is more than enough to build a beefy desktop.
I used to do this until I just gave in and installed Linux as my daily driver. The separation between the machine I was using and the Linux install I was interacting with was too great, and was more frustrating than anything. Also using modern Linux features locally was a big draw, as cgroups and native Docker are great.
Also, try adaptive acceleration with your touchpad or mouse.
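On X11 with libinput, the acceleration profile is switchable per device; a sketch (the device name is a placeholder, taken from `xinput list`):

    xinput list                          # find your mouse/touchpad name or id
    xinput set-prop "My Mouse" \
        "libinput Accel Profile Enabled" 1 0   # booleans are (adaptive, flat); 0 1 picks flat
    xinput set-prop "My Mouse" \
        "libinput Accel Speed" -0.3            # tune speed in the range -1..1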
> Windows is still a nightmare, setting up macOS took me 3 days the last time, Linux takes way more if you’re building it from scratch. Setting up FreeBSD took me 3 days, however this meant that I will NOT need to change it again for a very, very, VERY long time.
What exactly is involved with "Setting up macOS" to take you 3 days?
I just pulled out my old 2015 15-inch MacBook Pro, reformatted the machine, installed Big Sur on it, and got it into the exact condition needed to continue the work that's on my daily driver... took me 2 hours tops.
I keep all documents in iCloud Documents, so they sync to all my machines, and I install everything through Homebrew, so once I pull that in, I run `brew bundle install` and let it do its thing...
> Every time Apple pushed an updated, my pf.conf and automount configs got broken on macOS. They either got deleted or they moved somewhere. Well, the last 2 times it just got deleted.
I think the author was really, really integrated into the Darwin part, and not using the mac as just a throwaway VSCode machine.
From that perspective, sure macOS is far from being as well maintained and easy to integrate as it was on the 10.3 days for instance. That seems valid to completely move to FreeBSD from this point of view.
> What exactly is involved with "Setting up macOS" to take you 3 days?
Different needs for different people?
I mean, every time I reinstall Windows it takes me a few days to set it up too (removing all the crapware/ads, changing settings in places that don't make sense to me, installing all my programs without a properly supported package manager [1], etc. etc.). On my NixOS setup it takes literally minutes if you exclude the download/build phase.
[1]: Yeah, I know about and use Chocolatey. It still isn't the same, since for example most programs still use their own update mechanism, installation of packages fails constantly (especially for more complicated packages like NVIDIA drivers), and there are other headaches.
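(The NixOS comparison makes more sense once you see that the whole setup is one declarative file; a toy excerpt, with the package names as examples:)

    # /etc/nixos/configuration.nix (excerpt)
    environment.systemPackages = with pkgs; [
      git firefox vscode
    ];
    # then apply with: sudo nixos-rebuild switch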
Yep. I just migrated from a mishmash of windows and Linux to macOS and it took less than a day to sort it out including loading 50 gig of photos and videos into Photos app, migrating a couple of medium size spreadsheets to Numbers, getting set up on AWS, migrating credentials over, everything.
And with Catalina Apple has made it stupidly easy to install fresh now.
You used to have to create a bootable USB; now you can boot into recovery mode, reformat the drive, and install any of 3 versions of macOS: the version that came with your machine, the latest version it is compatible with, or the currently installed version (the Shift-Option-Command-R, Option-Command-R, and Command-R boot combos, respectively).
They are from the platforms I was coming from. It was a shit show. All my VMs are in AWS. I don't do any dev directly on this computer. I really hate doing that. All it takes is one slip up with some tooling and your entire control surface is covered with body parts and shit. It's like operating a lathe with dreadlocks and floaty sleeves on. I'm not living that life any longer. Nor am I lugging around 500gb of VMs everywhere. This is a very nice terminal node and I like that.
Which constant Apple news? You'll find if you look back through my post history I was very critical of Big Sur and the M1 but after taking time to understand it, measure it and use it I'm less concerned. When I put all the vendors on the table, including Linux, the tradeoffs were still in my favour with Apple.
Honestly, I don't get all the hate systemd receives. It works well enough, creating launchers for services is super easy, and in all these years it has never given me a headache.
There was misunderstanding of the technical details early on. I think poor communication and empathy exacerbated those early difficulties (e.g. the kernel cmdline debug fiasco).
Now that it's a little more stable, I'm pretty happy with it and I especially appreciate having 'systemctl --user' to manage user services, which didn't really exist before (ssh/gpg-agent, etc).
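For example (assuming your distro ships a user unit for gpg-agent, as most do):

    systemctl --user enable --now gpg-agent.socket   # per-user daemon, no root needed
    systemctl --user status gpg-agent.socket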
I see... well, to be honest, the change to systemd was a big one, so some problems were to be expected. It took me a while to flock to Ubuntu, so probably by the time I first started using it, it was already stable.
And yes, it supports good tools for diagnostics, like journalctl.
>man, I did not realize that macOS is that complicated. Why is there a "studentd" running? I don't even use Classrooms :/
I find myself running into the same thing whenever I am poking around my running processes. There are tons of daemons running as root that are not easy to identify, e.g. "What the hell is eoshostd?", and googling isn't super helpful because Apple doesn't seem to have public docs for this stuff.
"Linux has systemd" and that's why the author didn't select it supposedly. I hear this line of thought from people that use FreeBSD often, although I suspect if you ask many of those people why systemd is so horrible you won't actually hear a legitimate (and still relevant) reason from them. I suggest anyone that thinks systemd is terrible and doesn't spend lots of time in Linux to watch this presentation by Benno Rice.
I'm not against the "concept" of systemd, I think the BSDs need a "system" layer as well, just not systemd. The ideas are amazing, the implementation is the problem.
> Linux has systemd, not my favorite thing out there, Windows is privacy nightmare. That left me with 2 major options: Linuxes without systemd (Gentoo, in my case) or BSDs.
Even if you don't like systemd (I don't but I live with it because I never have to interact with it anyway), this is such a weird way to phrase it. Imagine saying "Linux has `apt`, not my favorite thing out there. That left me with two major options: Linuxes without apt (Arch, in my case) or BSDs." Just... change the program. Apparently Arch can switch over to sysvinit just by installing two packages from the AUR (see https://wiki.archlinux.org/index.php/SysVinit), Devuan exists, et cetera.
Of course, the author was sold on BSD before making this argument - and fair enough, at least it's a free system. But in my opinion it's ridiculous to discount a family of systems because of one common program.
My basic dislike of systemd is that it's a large change to the overall system that doesn't provide any benefits I can tell. Beyond that, I've experienced or read about many negative things:
a) I ran into an issue with changes in startup scripts in Debian which meant I could no longer hit ctrl-c to stop network initialization on a laptop when it couldn't get a DHCP lease (it was either not connecting well to wifi, or trying to get a lease on a disconnected wired NIC; it was a while ago, I don't quite remember).
b) There have been many security issues in systemd and the systemd-* utilities, quite a few of which were repeats of issues existing daemons had already been through and that shouldn't have been repeated.
c) I have read that in default configurations a user's programs will be terminated after the user logs out; that's not acceptable to me, and a large change in default behavior.
For me, systemd is yet another churny subsystem that drives aggravation, so since I was already exposed to FreeBSD through work, it made sense to go in that direction at home instead of sticking with Debian and accepting systemd.
Ever used systemd-timesyncd or the systemd-dhcp thing, oh and systemd-resolved ?
All these inferior implementations are barely configurable and lack features for use-cases outside of desktop users.
The init system is nice, but why the hell does it re-implement svchost.exe
I did. I've been maintaining a python+ncurses tool for configuring the system: network, NTP, etc. If you don't have a DHCP server running on your network, ifup will block until it resolves an address (UI freeze), and then you have to kill both it and the DHCP client. After switching to systemd-networkd it just works in the background.
All configuration is done with .ini-style files, so there's no need for a special parser for /etc/network/interfaces.
Disclaimer: I jumped on Linux a few years ago when systemd already had some momentum, so I wasn't used to either sysvinit or systemd. I found the latter easier to pick up.
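For reference, a whole per-interface config in systemd-networkd is a tiny INI file like this (the interface name is an example):

    # /etc/systemd/network/20-wired.network
    [Match]
    Name=enp3s0

    [Network]
    DHCP=yes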
systemd-resolved is inferior to what? It's the only DNS system on Linux that can handle LLMNR + split DNS (i.e. a VPN and your LAN running alongside each other) + DNS-over-TLS correctly. Sure, it's a pain in the arse to configure correctly, but it's more featureful than NetworkManager.
systemd-timesyncd is inferior to what? A fully-fledged ntpd? Go ahead, use that if you really need it, but on 99% of Linux machines systemd-timesyncd is enough.
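If you want to kick the tyres on either claim:

    resolvectl status             # per-link DNS servers and split-DNS routing
    timedatectl set-ntp true      # hand timekeeping to systemd-timesyncd
    timedatectl timesync-status   # which NTP server, offset, jitter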
I'm doing exactly the same thing right now, but with a desktop...
macOS is really pissing me off: half the file system is locked down, Apple makes exceptions for their own stuff (like the recent thing where they exempted their own apps from network filtering), every app has to be sent to Apple for notarisation, and much more.
I got into macOS because it was a great POSIX system with great desktop apps and a consistent UI. But Apple is turning it into an iPad with a keyboard now. The UI is getting more and more bulky, apps are being dumbed down (while even dumber iOS ports are promoted), and the NeXT underpinnings are neglected. The benefit is gone.
I chose FreeBSD over Linux because most mainstream distros are doing the same thing: Ubuntu is pushing snap too hard, Red Hat is pushing systemd into everything. I don't want that commercial control again, even though it's not as bad as on Windows. Windows 10 has similar crap to Apple's: forced updates, telemetry, etc.
Also I like FreeBSD's consistent environment and excellent documentation among many other things.
I've been using a work laptop as my 'primary' machine for years, and it was a Mac. I was using Macs pretty much since 2005 for life and work.
As more of my life moved into the browser, I was curious to try a Linux desktop again; I hadn't done so since KDE on Debian in 2004.
I've been on Linux Mint since the summer, and I have been really astonished at how easily everything just works. For me, it's a combination of things: 90% of what I do is in the browser; all the software development tools I use are cross-platform; and the Cinnamon desktop environment is really just good enough for anything I find myself doing.
If you are ideologically opposed to using macOS, then it's never going to satisfy you. This is fine! Figure out what matters to you, and select for that.
Same, the only drawback I see is if you make an app full size (not full screen) the app shell rounds corners but the topbar is flat so there's a small gap in the corner.
Yeah, Big Sur is the first OS X change that has felt somewhat better in a while. It's a stupid simple thing, but man, the new default system sounds are nice. The old bell was so triggering, and yet I never bothered to change it.
The argument of "you have to put a non-trivial amount of time into setting up any OS, so you might as well use X" is so disingenuous. It's easy to forget the day-to-day conveniences when you've been using Windows or macOS every day for a few years. If you could accurately sum up the time spent fiddling with your *nix desktop OS over the lifetime of the install, the "area under the curve" so to speak, Windows and macOS are going to absolutely blow any *nix distro out of the water. It is parasitic and insidious how many things don't "just work". You will invariably end up wasting much of your life getting software, printers, scanners, webcams, monitors, or whatever to work the way you want with your *nix distro. And even when it does "work", it's not going to be as good as on a real desktop OS. I ran FreeBSD on my Toshiba Satellite 1905-S301 for 4 years before getting a MBP, and I have zero temptation to go back.
I've moved to Ubuntu this week, as my new Dell XPS arrived.
The final straw was macOS opening Apple Music every time I plugged my Bluetooth headphones in, with no way to disable it even after turning off the System Integrity Protection thing.
Apart from re-learning some of the hotkeys, everything* just works. Font rendering is sharp, Gnome feels snappy enough for the kind of work I do. It feels good to be treated like an adult by my work-tool.
* had to install a driver to get the fingerprint scanner to work
PSA for MacOS denizens:
Are you annoyed enough at iTunes popping up every time the in-line button on your headset is clicked to actually want to do something about it?
Me too. (Finally.)
In the "works for me"™ category, from a terminal window:
sudo chmod a-x /System/Library/CoreServices/rcd.app/Contents/MacOS/rcd
(When the sudo command asks you for a password, use the normal password for your logged-in acct.)
You are welcome.
I am getting to that point too, except that for iOS/macOS development I will use a MacBook Air. After the Catalina upgrade my scanner was disallowed by macOS: "HPScanner.app will damage your computer"... "Report malware to Apple"... So I said let me try my old MBP running Ubuntu... boom, DocumentScanner could recognize the scanner and off I went!
I've got to wonder: I've used FreeBSD and OpenBSD in the distant past, and I can't see a good reason for running a BSD as a desktop system. I know it works, but many Linux distros provide a much less painful experience, especially if you're coming from macOS.
The author stated in the article that he stayed away from Linux because most distros run systemd, and since his servers run BSD he figured his desktop might as well too.
Depends on the distro. I get frustrated with Ubuntu all the time because sometimes I want the latest package now, not in several months; FreeBSD updates packages fast when they change. I use Manjaro on one machine and it works well this way too.
I haven't seriously used the Start menu since Vista, only ever opening it when the only copy of a file is saved to %AppData%\Microsoft\Windows\Start Menu\Programs. I just hit Win+keyword and hit Enter to navigate around. I've only ever used the app store for upgrading from Win10 S to Win10, and I rarely even notice the ads are there.
I don't have any issue with the things you raised because they're just not serious issues. I'm more concerned with other problems like forced updates & workstation windows getting cranky without reboots & the batshit Frankenstein of new+old control panels in win10 & the fact that most computers in the world use unix-like operating systems.
So he doesn't like Linux just because of systemd? In my opinion, that's pretty ignorant. If you look beyond all the drama of systemd vs. no-systemd, systemd is actually a very useful, powerful, and conceptually simple service manager. Yes, it's a "monolith" and "not Unix-y", but from a user perspective that doesn't matter. All that matters is that it's simple to use and reliable. Having become accustomed to systems with systemd, I often struggle to manage Linux systems without it as efficiently. And I don't even have deep knowledge of systemd, just the basics of the service-management commands and the structure of unit files.
I've installed Ubuntu, Mint and Debian in many different ways, but never did it take three days, though I did end up tweaking a bunch of stuff in the following days. I guess if you take that time into account, then the "installation time" becomes a bit longer, but... not really. Or are you building them for very specific (or odd) hardware?
Personally I'd never dream of setting up any serious production environment on a Mac or Windows PC, outside of those specifically geared towards it, like .NET. But then, using the latter kind of narrows down your options. I mean, isn't that just common sense these days?
I'd be interested in hearing your experience. As a layman workstation user, sometimes I think about not using anymore a Linux kernel but I'd for sure miss the granular control of Portage.
The most important thing for me was that I always have the newest software (like a rolling distro) but my system stays stable, in my case 12.2-RELEASE. That's the biggest difference from Linux: you have a full base system (kernel, NFS, etc.) and then everything else, like Firefox, comes as pkgs (binaries) or ports (builds). So I have everything: I can move my system between RELEASE, STABLE and DEV, and choose my packages from rolling ("latest") to quarterly (kind of a point release for the packages), or build every package myself from ports.
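Concretely, switching the package branch is one small repo override file, using the stock pkg mechanism:

    # /usr/local/etc/pkg/repos/FreeBSD.conf
    # follow "latest" instead of the default quarterly branch
    FreeBSD: {
      url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest"
    }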
I find it interesting how definitions of words drift away from their original meaning.
> is becoming less Unix-y every year
And the examples given are: no package manager, outdated packages, not free and open software.
Doesn't this exactly describe the original UNIX OSes? Solaris, IRIX, HP-UX and so on?
It sounds to me like a self fulfilling prophecy - he models his expectations on a modern Linux or BSD distribution, and of course at the end they will match what he wants.
>Linux has systemd, not my favorite thing out there, Windows is privacy nightmare.
Two questions; I'm not that educated in this field:
1. Is Windows LTSC not an option for people like this?
2. I'm running it on my laptop and am seeing better performance with telemetry turned off, but I'm unsure whether it's still a "privacy nightmare". What tools can I use to verify that my OS is not calling home?
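(Not the asker, but: no single tool proves a negative. One blunt instrument is to watch the traffic from outside the OS, e.g. run Windows in a VM and capture on the host's bridge; the interface name here is a placeholder:)

    sudo tcpdump -ni br0 -w win-idle.pcap   # capture while Windows sits idle
    # then open win-idle.pcap in Wireshark and see who it talks to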
I just find them so tiring. I'm sure it says more about me getting old than anything about the content authors, but I've just had enough decades of it. It's the same reason I could never get into Reddit. I don't need the same OS / atheism / whatever flamewars from 1995.
If you're older than me, you probably thought it was old and busted in 1995 already, and if you're younger, maybe that time was 2000 or 2005 or 2010 or whatever. But, sooner or later, it happens to all of us.
Quote: "This is where many people will tell me “Okay but not everything works outside the box”, true! but which OS works outside the box these days anyway? Windows is still a nightmare,..."
Well, Windows pretty much works out of the box. I mean, that's its main selling point, this and backward compatibility. Which tells me the article's author relied on rumor instead of getting the information first-hand. That's a shame. IMO it's better to say it outright, "I hate Windows and I boycott Microsoft", rather than spreading falsehoods.
I disagree. If you install Windows as a system builder, or on a system from a manufacturer without their specific build, it definitely doesn't work out of the box. There is a lot of driver discovery and installation that needs to be done. That's not to say that this hasn't improved over the years; if you ever built NT/2000, you'd know what an utter nightmare it could be. It is also highly dependent on your own definition of "working out of the box"...
Are we talking ancient history here or current version? Because OP was not satisfied with latest versions of MacOS, hence his migration.
I build my own computers, I deal with friends'/relatives' computers too, and the latest Win10 versions definitely work out of the box. Sure, it has crap telemetry, but that can be turned off. I use a nice utility called WPD; in 20 seconds I am done turning off Windows' crappy ads and telemetry with it.
"dependant on you own definition of "working out of the box" is..."
My definition of working out of the box is that I put Win disc installer, it will installs then once I am pass initial questions it will do its thing on its own and at the end I am at a nice desktop where I can do anything. And Win has recognized all of computer's internals and it's a usable PC. Rest is optimization for specific tastes, like getting Firefox and having uBlock Origins to have a smoother web surfing experience. What's your definition?
Quote in context; "That's not say that this hasn't improved over the years. If you ever built NT/2000, you'd know what an utter nightmare it could be."
I acknowledge that things have improved, but speaking as someone who deployed PCs to be used as CAD workstations in large batches at a time, Windows 10 was a marginal improvement over Windows 7. Yes, they were imaged, but building and testing that image, even with Microsoft's tools, is not a trivial task.
Having been a professional open-source developer since 2010 and a Linux user since 1995, I can only answer: nope, the most important thing is to get things done. I love using Linux as a desktop, but sometimes it's just impossible to use, and not everything there is great. The printing experience on Linux/BSD, for instance, is pretty bad.
I've recently moved away from Linux (to Windows) because of bad hardware support (with Ubuntu, Bluetooth audio was a PITA, graphics were buggy... and the whole lot of things I know and expect from 20 years of using Linux on the desktop most of that time).
I started using Windows only because I kept putting off installing Linux on the laptop... and I wanted to give WSL a shot. With WSL 2 and Docker Desktop working so well with it, I don't miss Ubuntu at all.
When I did boot Ubuntu from a live disk to try something out, I found it felt much faster than my Windows install... but I still don't see myself switching back anytime soon.
I don't think FreeBSD as a desktop (or server, for that matter) is usable beyond hobby projects. Luckily HN doesn't have many experienced techies, so this post won't get downvoted much, and some poor soul out there will install this relic thinking it's better than macOS.
Yes, but it's not configured automatically, and things get messed up when you switch from your monitor to your laptop. Also, GTK3 apps sometimes look blurry when upscaled.
Not fractional scaling, though. If the correct scaling is 200% or 300%, then yes, it works; if it's 140%, you're out of luck as far as automatic configuration goes. Also, when you unplug the monitor, the windows get messed up.
On top of that, a lot of GTK3 apps look blurry with fractional scaling (including Chrome).
And in about 3 months he will switch back. At this point you could just install ChromeOS from Neverware and have a better experience; you can even install third-party apps with Flatpaks.
I am sympathetic, use whatever works for you and I'm happy when open source works for people, but:
> but which OS works outside the box these days anyway? Windows is still a nightmare, setting up macOS took me 3 days the last time, Linux takes way more if you’re building it from scratch. Setting up FreeBSD took me 3 days, however this meant that I will NOT need to change it again for a very, very, VERY long time.
(I think he means "out of the box", ie, without spending lots of time configuring or fighting with it to get it set up.)
I think Mac OS works for most people without spending 3 days to set it up. For OP, he spent the same amount of time setting up FreeBSD as he did MacOS... I'm not confident this is typical.
> Every time Apple pushed an updated, my pf.conf and automount configs got broken on macOS. They either got deleted or they moved somewhere. Well, the last 2 times it just got deleted.
Aha. OK, I don't even know what this is... but OK, I can believe that if you are using features that are more unixy, they will work better on FreeBSD with less tinkering. Which doesn't mean they work "out of the box", especially for non-technical users; just that they aren't any better, and are sometimes worse, on macOS.
> Unix is outdated and Apple does not care about it.
I think this is probably accurate, Apple is no longer interested in maintaining the OS (and installed/supported packages) as an up to date unix competitive with other unixes.