My Hackintosh Hardware Spec (infinitediaries.net)
319 points by morid1n on Apr 26, 2017 | 271 comments

It's sad to say but this is the only way to get a decently powerful desktop Mac right now (that's not using server grade Xenon chips, I don't need ECC).

One thing that's unfortunate about a Hackintosh, though I think it's about to change (maybe it already has), is having to use an R9 280X due to driver issues in OS X (i.e. there are no drivers for newer GPUs). I think you can now use, or soon will be able to use, Pascal-based GPUs, which would be great for deep learning/CUDA. It's a shame Apple has been relentless in their quest to ship inferior GPUs in Macs.

The biggest beef I have with this Touch Bar MBP I've got is that the GPU is pretty crap compared to Nvidia's offerings, and I think it's somewhere Apple could dominate. Sure, it'd take partnering with GPU manufacturers, innovation, and money in the space, but that's exactly what the Mac needs right now and Apple is fully capable of it... or, god forbid, sacrificing a small bit of portability for the power.

I really want to love the Mac, but outside of iOS development I don't have much of a reason to run it these days. Windows + the Linux subsystem is pretty great, or just booting directly into Linux.

To be honest, developing in native Linux lately has been wonderful: it's incredibly fast, driver support is pretty damn good, and the desktop is attractive enough.

I even have Thunderbolt 3 ports on my desktop, which have been good. I just wish VNC clients were faster so I could remote into my MacBook Pro's screen more comfortably. The latency, even over Thunderbolt 3, is pretty atrocious.

Edit: Just so I don't spam another comment, as the nice folks below have pointed out... Pascal is supported. Upvoted you all because I'm so happy. Gonna get my hackintosh back in action tomorrow.

Edit 2: I'd also like to say that NVMe is mind-blowingly fast on the desktop I've got, and something sorely lacking from Apple's offerings, adding another downside to the current Mac desktops.

Edit 3: Cleared up note about thunderbolt 3 and VNC.

> I think you can now use, or soon will be able to use, Pascal-based GPUs

They are supported! I'm writing this comment on a hackintosh running a GTX 1060. Nvidia released Pascal-compatible Mac drivers 2 weeks ago (April 11).

Interesting but unverified side story, the drivers were apparently released because of an email from a redditor: https://www.reddit.com/r/apple/comments/65hpc8/funny_story_a...

> Windows + the Linux subsystem is pretty great, or just booting directly into Linux.

I built this new computer recently and gave Windows 10 and Ubuntu 16.04 both a shot. I decided I didn't want to relearn how to be productive with them and turned it into a hackintosh, and I'm very happy with it so far.

> They are supported! I'm writing this comment on a hackintosh running a GTX 1060. Nvidia released Pascal-compatible Mac drivers 2 weeks ago (April 11).

Same! I was waiting for the Pascal drivers to take the 1080 Ti plunge, and I'm super happy I did (literally installed it 48h ago, upgrading from a 980). I have a 6700K "mac" that flips into a top-of-the-line gaming PC when I'm in the mood for that.

Glorious: http://fred.d.pr/FQLuV

I'm looking to build something like that. Did you use a specific guide/golden build?

There's the base guide at tonymacx86 [0], which got me 95% of the way there. Then it was a matter of learning about specific things (like installing and enabling the Nvidia drivers, etc.).

It doesn't take a ton of time. I kept great notes throughout the process in hopes I'd have time to write an extensive guide, but I still haven't quite been able to find the time to write it.

[0] https://www.tonymacx86.com/threads/unibeast-install-macos-si...

Please share those notes if you don't find the time for a full blog post.

2018 is still a long way off, and who knows how that new Mac Pro will be.

Seconding the other commenter... please scan them at the very least if you can.

Try running Instruments on it, or opening an iBook. Won't work. There are issues (driver code-signing related, I believe) that appear when trying to run apps that use Metal.

Not sure I agree that the overhead of running Windows with the Linux subsystem is greater than keeping a Hackintosh running. I ran a Hackintosh for several years, but finally got tired of the update process, waiting for the right video drivers to be released, maintaining all this context, and then still hitting quirks every now and then.

The only solution for me would be running OS X on my own desktop hardware. I'd gladly pay Apple a nice premium for this.

I get that, and I haven't done any updates yet, but from the research I've done I don't expect much trouble until the next major version of macOS. We'll see how it stands up over time.

It's actually really unfortunate that this is the only way to get power/adaptability. I had a hackintosh for a couple of years, and while it ran great, I was always nervous the next OS upgrade would render it useless. I got a MacBook Pro and retired it (installed Linux and sold it to a friend).

Recently I went to a projection mapping seminar (it's kind of like DJing but using video loops); everyone who attended had older MacBook Pros (with HDMI to power the projectors). The production machines they use to run the projections live are all Microsoft Windows machines, as it's the only way to get enough video horsepower to drive the multiple displays.

The artists running the seminar said they couldn't recommend Macs anymore, except for some backend asset creation. Since most software is Mac/Windows right now it's not a big deal, but I worry that if Mac demand goes south, the software won't be ported.

I'm hoping Linux starts becoming a more viable solution for content creation. It seems like it's heading that way.

I've been VJing professionally for ~12 years with some friends, and we have always exclusively used Windows machines because our internal testing showed that Macs were rarely powerful enough to handle high-quality multi-screen projections properly. For the same price as a top-of-the-line Mac, we were able to build a PC with roughly 2x the specs (and always a better GPU). We did, however, rely on Macs for QT encoding (usually PhotoJPG for the best quality/size ratio), because even our fastest Windows boxes couldn't keep up with our slowest Mac (our conspiracy theory is that Apple intentionally crippled QT encoding on Windows). Over the last couple of years we have switched purely to content production, so we let another external company do the projections + mapping, and we don't care what they use because it's mostly just video playback.

Do you have a writeup on your content production pipeline? I'm terribly interested. I used to use VVVV for sound-reactive video that I would project at parties and small DJ shows around town, but then I spent 8 years living out of a suitcase and couldn't keep up with the hardware demands.

Edit: low res demo from 2008: https://vimeo.com/2577056

We started out in a similar way, doing parties and shows, barely surviving. Then HD came along when all our hardware was PAL (720x576) i.e. useless. We couldn't afford to buy replacements for our MX50 (never mind that such a beast didn't and probably still doesn't exist), EDIROL V-4s, reliable DVD-players, projectors, multi-channel MPEG decoder cards, etc. We have several rooms full of PAL hardware we will probably never touch again, and which nobody would probably buy except for nostalgic purposes.

I continually experimented with automating sound-reactive video, but it was unpredictable and difficult to get a pleasing, "fluid" result which didn't cause seizures, so we always did it by hand using hardware such as EDIROL V-4 or the MX50. It was also a lot more fun to punch buttons and slam sliders to the beat of the music ;)

Do you mean current or previous pipeline? Unfortunately most projects are somewhat unique and require at least some variation, if not a complete overhaul in workflow, depending on what the customer wants. I never got a chance to use VVVV in a commercial capacity, although there is an upcoming project where we are seriously considering it. Generally we used custom in-house software (e.g. for 360 projections using 2x 4-channel MPEG-2 decoder cards) or VJ software such as ArKaos and Resolume. Like I said before, these days it's purely content production, so the pipeline is relatively similar to a game studio. We still encode everything into various QT formats and we rent hardware for shows because as you said, it's hard to keep up with ever-changing hardware demands.

If you are interested in our projects, check out http://www.motionlab.at (go to the old site for now, the new site is a WIP).

Some of that gear may be worth more than you might expect.

The V4s still sell quite well for some reason, some of the other analog gear is still in demand for circuit bending.

If the projectors are decent they can still hire pretty well too.

The seizure-inducing aspects are my favorite part! One of the DJs once complained that every time the video came on people would stop dancing and stare at the screen with dazed expressions. :)

I'd probably take a look at Touch Designer or Unity for sound reactive stuff now.

Keijiro Takahashi at Unity Japan has a great github repository full of relevant libraries.


Given that you're mostly doing synchronized video on the projectors, I'm curious: if you did this setup today, would you still attempt to use a multi-GPU setup? Or would you prefer something like a bunch of cheap video-decoding-capable chip PCs attached to each projector/display and then networked to a computer that 1. sent (MIDI?) control cues and 2. acted as a NAS for the devices to stream from?

We have worked with both, but I would vote for the former - there are fewer points of failure and you don't need an IT person to make sure every node is running smoothly. We built the latter using multiple 4-channel MPEG-2 streaming cards + MIDI racks and used it for all sorts of events (the biggest was a 360-degree one back in 2006: http://www.motionlab.at/projects/content-production/urban-ar...) but there were always problems, and I often had to rip hardware/software apart to figure out what was wrong. That's why I ended up writing a minimalistic tool for controlling the cards; the vendor's included software wasn't meant for our type of work - thankfully there was a decent C++ API available. Workflow-wise it's also much easier/faster to render one big video than a bunch of smaller videos, not to mention you can check the final output on one screen. We wired up 8 mini-screens in the rack to preview the streams, but they were quite small, making it difficult to determine if the footage was ok.

This is exactly where I'm stuck right now.

Using a hackintosh for mapping/VJing, as the off-the-shelf hardware isn't at all practical and I want to keep using OSX-only software.

Honestly it's a nightmare.

I want as many video outputs as possible, enough power to run multiple layers of video, be running game engines at the same time, transcode a video in a pinch and more.

Switching to windows is probably the only option going forward.

>I don't need ECC

We all need ECC. It should be a standard feature by now.

While I agree, can you explain more about why you think we all need ECC?

https://danluu.com/why-ecc/ is a decent read.

I'd also be curious about numbers from people who DO run ECC, on how many times it's saved them. Some things really need it (financial transactions come to mind). That said, it should be much easier to get ECC in consumer hardware. Major props to AMD for their recent chips that allow it. Hopefully Intel follows suit.

I run ECC everywhere possible. I know of two instances when it mattered (detected a failing chip), and suspect it was correcting single-bit errors for a while before that in those cases. I've also resurrected someone's laptop by determining it failed due to bad RAM and replacing that. (Easy enough diagnosis - intermittent, random-seeming hard-lockups and corrupted data on disk.)

ECC also thwarts Rowhammer and similar attacks, if that matters to you.

Note that it isn't enough for AMD to restore ECC support in consumer kit; you also need motherboard support, and the MB makers are also complicit in raising ECC costs.

The problem is that it is unsupported, so getting a board where the BIOS can enable it & you can count on it working is a bit of a crap shoot. I had the same problem ~8 years ago when a friend and I built new desktops. You've got to do a lot of manual reading, forum reading and review reading before buying a board. Or buy it locally from a place with a good return policy. This is quite a bit different from server grade kit where ECC is fully supported.

Also, it's quite depressing when you need to change the motherboard, especially if you're obsessed with cable management and have invested so much time ensuring great airflow.

Haven't AMD CPUs already supported ECC for many years?

I had a K6/2 with ECC back in the late 90's. So, yes.

But for whatever reason, AMD never seems to put ECC in the bullet lists for why you should buy their parts. I guess, as someone else mentioned, it's because the motherboard manufacturers don't enable/qualify it, even though it's likely just a matter of firmware tweaks ever since the DRAM controller moved on-chip. I got it working on a cheap Phenom II/Gigabyte motherboard (IIRC) some time ago as well. In that case I don't think the motherboard even advertised it, but I had some unbuffered ECC DIMMs lying around, plugged them in, and they worked. Of course, the only real indication besides the machine booting that it was actually working was a kernel blurb during boot. I don't think I got the EDAC reporting to give me soft error rates at the time.

I don't have the numbers handy, but here's a basic explanation: given the amount of RAM we all run today, the probability of having a RAM error is surprisingly high. IIRC it's at least once per year.

I feel like it would have to be much more frequent than that (at least monthly with perceivable consequences) to get a typical user to care.

The Wikipedia [0] page for ECC RAM states that the Cassini-Huygens spacecraft had a fairly static count of 280 errors/day.

[0] https://en.wikipedia.org/wiki/ECC_memory

Most desktop computers aren't in space.

That same wiki article cites Google's observed numbers, with a high end of "about 5 single bit errors in 8 Gigabytes of RAM per hour".
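As a back-of-the-envelope sketch, it's worth scaling that high-end Google figure down to a desktop; the 16 GB machine size here is just an assumed example, not from the thread:

```python
# Scale Google's reported high-end rate of 5 single-bit errors
# per 8 GB of RAM per hour to a hypothetical 16 GB desktop.
ERRORS_PER_GB_HOUR = 5 / 8

def expected_errors(ram_gb: float, hours: float) -> float:
    """Expected single-bit errors for ram_gb of RAM over the given hours."""
    return ERRORS_PER_GB_HOUR * ram_gb * hours

per_hour = expected_errors(16, 1)         # 10.0 errors/hour at the high end
per_year = expected_errors(16, 24 * 365)  # 87,600 errors/year at that rate
```

Real-world rates are presumably orders of magnitude lower (closer to the once-per-year estimate above), but the spread between the two figures is the point: without ECC there is no way to measure where on that spectrum a given machine sits.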

Google at the time was buying memory chips that had failed manufacturer QA, stuffing them on to DIMMs themselves, and then running whatever seemed to pass.

That number was consistent on-the-ground pre-launch and post-launch (with the exception of a short period of higher error instances due to a solar flare).

Eeek, at those rates surely there are some undetected triple flips.

There are lots of alternatives to VNC that are much more efficient and depending on your connection can even stream HD video. Unfortunately many of the alternatives are poorly documented.

RDP will give better results and there are tons of implementations.

NoMachine is probably the best documented and supported. However, it is proprietary, though they have a free version. I believe it can do streaming video.

Spice is poorly documented and seems to be unmaintained; however, it is very efficient and can stream video, if somewhat jankily.

Have a look at XRDP with libfxcodec for streaming video (needs lots of bandwidth), or without libfxcodec for decent low-bandwidth performance.

I've done some research into streaming video on a remote desktop for a project before and I've never heard of libfxcodec, can you go into more detail about it? I can't find anything about it.

I didn't believe you until this brought up the parent comment as the only result:


Shortest Googlewhack(-ish) I've ever accidentally stumbled upon myself, for sure!

try: https://github.com/neutrinolabs/librfxcodec


Oops yep, thanks for the correction j_s

I can really only use Spice on Linux. The Mac client is unusable. Haven't tried the Windows build.

It would be a nice protocol if there were functional clients, but I fear that ship has sailed.

Mac OS (as it's now called) is a compelling offering because of third-party commercial software support, including Microsoft and Adobe. Linux is great for some developers who work with stacks that live entirely in Linux. There are still areas, though, where it doesn't stack up well against the software available for Windows or Mac.

As someone who does front-end graphics work to mock up apps before building them as part of my workflow, I couldn't see myself switching off Mac OS unless Microsoft drops the registry, replaces NTFS, stops with the spyware being built in, disables the forced updates and finally goes with one UI/UX for everything.

> I couldn't see myself switching off Mac OS unless Microsoft drops the registry, replaces NTFS, stops with the spyware being built in, disables the forced updates and finally goes with one UI/UX for everything.

Why do you care about the registry and NTFS?

Why do you care about registry and NTFS if you are a frontend guy?

Desktop developer could care, but a web one?

Drops the registry? What does that even mean?

The Registry is the centralised configuration store which Windows introduced in Windows 95. Lots of people still don't like it.

...because it's an impenetrable tangle of GUIDs, deep hierarchies, and keys left behind by apps long uninstalled.

And for all intents and purposes you can pretend it doesn't exist, at this point.

I've opened up Regedit in the past year, but I work with a piece of benighted software that supports a weird but useful feature designed fifteen years ago when doing unfathomable things with the registry was something people did.

That was my point. The fact that you can fiddle with it to change undocumented stuff should not be brought up as something negative.

What are the alternatives? The UNIX way of dumping hidden config files in $HOME doesn't look any better.

These days, per the XDG spec, most apps put things in ~/.config/, either as a single file or in their own subdirectory. It's easy to navigate and search, and apps can use whatever config format they want.

I'd argue that's preferable to a complicated hierarchy of obscurely named keys.
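A minimal sketch of how that XDG lookup works in practice ($XDG_CONFIG_HOME if set, otherwise ~/.config; the "myapp" name is a made-up example):

```python
import os
from pathlib import Path

def xdg_config_dir(app_name: str) -> Path:
    """Resolve an app's config directory per the XDG Base Directory spec:
    $XDG_CONFIG_HOME if set, falling back to ~/.config."""
    base = os.environ.get("XDG_CONFIG_HOME") or str(Path.home() / ".config")
    return Path(base) / app_name

# With XDG_CONFIG_HOME unset this yields ~/.config/myapp
config_dir = xdg_config_dir("myapp")
```

(The full spec also says a non-absolute $XDG_CONFIG_HOME should be ignored; this sketch skips that detail.)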

Might not look better, but I can copy all my settings from computer to computer, unlike the registry.

Also, the registry is used for a lot of stuff besides configs. So my experience on Windows is that you have to run the installers (=slow), rather than simply copying all the files to clone a system.

Plus, it tends to fragment all across the disk, or something like that, and your Windows system inevitably slows down over time. Maybe SSDs eliminate that problem; I (fortunately) haven't had to use Windows in quite a few years.

It's straightforward to copy the per-user software settings from one registry to another.

You can't copy full software installs like that, but it's not the fault of the registry. Installing typical software means affecting dozens of system settings all in different places. You can't easily copy linux software along with every setting it affects either. You're best off either reinstalling or copying the entire drive, both of which work on Windows.

Text files are for the most part readable. The registry can be ugly to work with, if you have to (which seems to be needed less and less these days).

The last time I opened regedit was 2014.

No, that's not true--I did once in 2015 to make a game written in 1998 work.

I think we can lay this one to rest.

Capslock as Ctrl? That's a regedit.

Copying putty settings from one host to another? That's a regedit.

(Semi-) permanently disabling live scans from Windows Defender? That's a regedit.
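For the curious, the Caps Lock remap mentioned above is a single "Scancode Map" binary value under HKLM\SYSTEM\CurrentControlSet\Control\Keyboard Layout. Here's a hedged sketch that only generates the .reg file text (importing it and rebooting on Windows is left to the reader); the byte layout follows the widely documented format of two zero header DWORDs, an entry count, the remap DWORDs, and a zero terminator:

```python
import struct

# Scancodes for the classic Caps Lock -> Left Ctrl remap.
CAPS_LOCK = 0x3A
LEFT_CTRL = 0x1D

def scancode_map(remaps):
    """Build a Scancode Map blob from (key_pressed, behaves_as) pairs.
    Each remap DWORD carries the remapped key in its high word and the
    replacement scancode in its low word."""
    blob = struct.pack("<II", 0, 0)             # version and flags, both zero
    blob += struct.pack("<I", len(remaps) + 1)  # entry count incl. terminator
    for pressed, becomes in remaps:
        blob += struct.pack("<I", (pressed << 16) | becomes)
    blob += struct.pack("<I", 0)                # null terminator
    return blob

def as_reg_file(blob):
    """Render the blob as a .reg file that regedit could import."""
    hex_bytes = ",".join(f"{b:02x}" for b in blob)
    return (
        "Windows Registry Editor Version 5.00\n\n"
        r"[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]"
        + f'\n"Scancode Map"=hex:{hex_bytes}\n'
    )

reg_text = as_reg_file(scancode_map([(CAPS_LOCK, LEFT_CTRL)]))
```

The remap takes effect after a reboot, which is also why tools like SharpKeys exist: they write exactly this value for you.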

I use SharpKeys to automate the keymapping process on Windows.


One of the first things I install on any new system. Gotta have my Caps Lock->Escape.

I did it a few weeks ago - Windows still doesn't like having either IPSec endpoint behind NAT by default.

I'm not a fan of the registry but what's wrong with NTFS?

Fragmentation is rather annoying, snapshots and copy-on-write would be nice to have.

Snapshots have been there for a while now, through VSS ;-)

Well there's good news: nvidia recently announced beta Mac drivers with support for their Pascal video cards[1]

That means you can do a Hackintosh build with their latest video cards (GTX1080 or Titan) with full driver support.

[1]: https://9to5mac.com/2017/04/11/nvidia-releases-pascal-web-dr...

I believe there are sleep issues with the new Pascal cards, or has this been fixed?

Last update said that was fixed.

I haven't done any VNC client comparison, and don't use it regularly anymore, but JollysFastVNC[1] was originally written to be a much faster VNC client than the others that were available. Still works on macOS 10.12.

[1] https://www.jinx.de/JollysFastVNC.html

> a decently powerful desktop Mac

That's to be defined. I, for example, use an MBP with an eGPU setup (Sonnet Echo Express III with a Pascal GPU inside). I put the MBP in my docking station and I have a decently powerful desktop Mac. I love it, and I hope that's the future.

I thought about this but read that the E-GPU setup can have some performance penalties-- though this was with the Razer One and not a MBP.

Any real world issues or hiccups, or has it been as good as you make it sound?

The penalty is about 25% for my setup, but I'm still using a TB2 MBP and an external SSD on the same TB port. I think that's fair, and of course with TB3 the performance penalty should be smaller.

But there are unfortunately other things to consider. In my case the Echo Express enclosure is not made for graphics cards and you have to modify it to make it work - doable in about an hour if you're really careful. The wait for Pascal drivers was sad. Other than that, the only other thing that comes to mind is that you cannot remove the MBP from the docking station before shutting it down. (I have read that sleep should also work, but it does not for me.)

Edit: Whether things work really depends on the exact setup. I have just seen the video posted in another comment (https://www.youtube.com/watch?v=yho3rCNfzGE) and it looks like I have just been really lucky with my setup.

> I think you can now use, or soon will be able to use, Pascal-based GPUs, which would be great for deep learning/CUDA.

You can use them right now but the driver is still in beta. See https://arstechnica.com/apple/2017/04/nvidia-releases-beta-g...

"I'd also like to say that NVMe is mind-blowingly fast on the desktop I've got, and something sorely lacking from Apple's offerings, adding another downside to the current Mac desktops."

I see that folks are putting nvme cards into their older mac pros:


Is there a recipe that allows one to boot from an nvme add-in card on an older mac pro ? (in my case, early 2009)


I would suggest this card; it should work. It is SATA-based as opposed to NVMe. I have installed a few of these on several older systems (X58-based and others) and saw a very large performance bump, even versus SSDs connected via an aftermarket PCIe SATA-3 controller (running at 500-550 MB/s). YMMV if attempting to clone an existing drive - I have had this problem with add-in PCIe cards, older motherboards, and various operating systems - a fresh install to one of these drives will likely be easier.

I don't think so; the Mac Pro's firmware doesn't understand NVMe, so you'd need to use an AHCI-based M.2 SSD with the PCIe card. (So many acronyms!)

Xenon is a gas, and is also a chip used in the XBox 360 [1]. The server chip model from Intel is called Xeon.

Sorry to be a name nazi, but some people may not be aware of the difference. It's hard to tell online whether people didn't notice the error or don't know they've made one.

[1] https://en.wikipedia.org/wiki/Xenon_(processor)

Yes, you can run Pascal: https://www.youtube.com/watch?v=yho3rCNfzGE (comparison with AMD, using external GPU enclosure)

...but you might not like the results under MacOS

(Edited for typos)

I have a hackintosh with dual 280x's running in crossfire. Not sure what exactly you meant by that sentence.

Nice build, but I'm a little surprised to see it get this much traction at HN. The hackintosh subreddit has builds like these for days.

Mine is a dual boot setup on an i7-6700k on Z170, 64GB RAM, and a very old and well-supported Radeon 5770 GPU. I will admit mine sits mostly in Windows with a number of Linux VMs running for development, which is why I maxed out the RAM.

Previous build was an LGA775 build. More specifically, an LGA771 hack. Again, ancient Gigabyte P35DS3L motherboard, 6GB RAM, and a modified Xeon X5470. It was extremely stable. I may bring it back, but I used the SSDs and PSU in my new build.

Not a troll attempt, but I want to know why people use Hackintosh over, say, Ubuntu or Fedora?

What can you do with Hackintosh that you still can't with a Linux distro?

Edit: Possibly a bit late to ask, but I should have asked if you had ever seriously tried Linux and if so what your experiences were like?

Just another data point: I used Linux as my main desktop from about 1999 to 2008. I'm familiar with installing it from 80 floppies (before anyone points it out, that was before 1999, more like 1994) and recompiling <1.0 kernels to get the drivers I wanted installed. I actually make money off customizing embedded Linuxes (ARM boards) and writing software for them. I prefer the command line to mouse clicks and "wizards".

I moved to OS X because I want a desktop that actually works. I have several Linux VMs - one per project - which I use mostly from the command line.

At the moment, OS X and iOS are the least annoying desktop and mobile operating systems. Note that I said least annoying. They all suck in different ways and I wouldn't use OS X as a server. I also think Apple has no idea how to do online services; for example I don't use iCloud for anything. Still, the operating systems beat the alternatives.

Only problem is, you have to use OS X for a few months to realize how much invisible polish has been put in. I first bought a MacBook, and it took me a year or more to decide to switch my desktop to a Hackintosh as well.

Hello, unrelated to the OP, but concerning the Linux and floppy experience: I installed Slackware 3.6 (with a 2.0.36 Linux kernel) from floppies in 1998. This was a great experience because Slackware at the time was actually distributed on floppies (!)

Note that I had downloaded these floppies over a 55.6 kbps dialup line (!!) from the Slackware FTP server (!!!)

The great thing about the Slackware distribution was that it was separated into "subsystems", each one more or less standalone and able to be downloaded and installed separately. So there was the "base" subsystem on N floppies (I can't remember, probably around 6) that contained the Linux kernel, libraries, bash, and a couple of other useful utilities; once downloaded and installed, you could start using Linux! And then there were various extra subsystems, for example the "development" subsystem that contained gcc, the "X" subsystem (with the FVWM window manager, IIRC), etc. This made it really great to download over a dialup line.

I can remember various great stories from my installation and configuration process: for example, becoming an expert in how disk partitioning works; configuring the modem by giving it AT commands (to dial my ISP's number I needed to issue the ATX3DT command followed by the number); making my (ISA) sound card work by first booting into Windows 95 so that its IRQ and memory address were configured properly, and only then (cold) rebooting into Linux so it would work (!); using autoconf/make/gcc to compile stuff (this is actually still needed in 2017); configuring X by editing text files and playing with my monitor's resolution and refresh rate; etc.

Happy days !

Very interesting! My question is: how did you find a solution when you got stuck? Ask a friend? Trial and error? And what about things like needing a specific command to partition disks a certain way? Books?

I had a dialup connection, so I could search the internet! There were a bunch of mailing lists back then where you could ask questions; I also sometimes checked USENET (alt.os.linux.*), and there were some excellent HOWTOs... It seems there still are some (I'm not sure if they're the same as they were back then):

http://www.tldp.org/HOWTO/Sound-HOWTO/x320.html http://www.tldp.org/LDP/lame/LAME/linux-admin-made-easy/inst...

Also the Slackware linux had a nice installation guide:


Finally, some configurations were explained in the corresponding man pages of each util.

I still remember installing Slack around the year 2000 and completely borking Xorg.

Luckily I had ethernet and a cable modem - Lynx to the rescue!!! A lot of the linux forums were very 'text' friendly in those days - I dread to think what it would be like nowadays trying to navigate a forum from the cli.

I left slackware about 5 years ago for Xubuntu, mainly because I needed to use my laptop for work and not just having fun tweaking and learning.

The whole "Apt get into it!" just really works for me now. I think the final straw for me was last minute trying to compile a video editing suite (Cinelerra?) on slack, getting frustrated and then finding out it was much quicker for me to nuke the HDD, install Ubuntu Studio and an hour or so later I was happily video editing.

Now nostalgia is knocking at the door so maybe I will make a new partition and see how slackware has developed in the last few years...

man pages and the Internet. Yes, the internet was around in 1998 :) . Newsgroups, IRC, mailing lists... all existed. Trial and error as well, of course.

Being from that time period myself (18 years old then) and having installed Slackware as well as RedHat (and even Caldera... anyone remember that?), I can say there was plenty of documentation.

However, unlike the parent poster, I got a bunch of Linux distributions by either mail ordering or going to conferences. In fact I went to a Linux conference in Atlanta circa 1997 and met Linus himself... but more importantly picked up some distribution CDs/disks.

It was the good ole days for sure.

Oh man, the days of needing to go to a conference to pick up Linux CDs. Yes, I definitely remember Caldera. Only thing from that list I haven't done is meet Linus though I guess I will get around to it at some point.

Actually, come to think of it, Ubuntu really hit the CD nail on the head with ShipIt; those of you with 28.8 modems will know this feeling. I still have my 7.04 CD.

I also remember volunteers at my university who were organizing sessions where you could go with a blank CD and they would copy over a Linux distro!

And of course there were computer magazines that offered Linux distros in their CDs!

Ditto, BTDT, got the t-shirt. Slackware diskset downloads over 14.4k in '95, used Linux exclusively until 2002, bought a Mac, haven't looked back.

I use Linux ALL DAY EVERY DAY at work, but IMO the desktop environments still suck, and the Mac is my desktop.


The Linux desktop experience has come a long way since then. Try Ubuntu 16.04.

This is a single data point, and I am a senior systems engineer using a lot of Linux at work, but its desktop experience is not even remotely close to being as good as macOS. It has reached the level where I can give it to people who don't work in IT, though.

Depends. In some regards I like the GNOME 3 UI as it comes with Fedora more than Finder. And some details under the covers (it doesn't have trouble browsing Samba shares after connecting to another network, and per-network-interface DNS means you can resolve hostnames on the internal network even when connected to a VPN) are things you get used to really fast.

I have a VM of it. Unity is still as annoying as it was in 2008.

I've heard a lot of reported display/performance issues when running it in a VM. Try installing it on bare metal.

I mean design, not performance...

I enjoy my own tweaked GNOME3 more than OSX.

I'm sorry people keep recommending vanilla Ubuntu to you. Unity is awful, and Ubuntu will be rid of it soon anyway. Until then, use Kubuntu, or Ubuntu GNOME.

> OS X and iOS are the least annoying desktop and mobile operating systems

I assume you're not including Windows Phone in that, because it's surely less annoying than either iOS or Android (apart from the lack of apps, which I consider to be a blessing in disguise).

No, I'm afraid I stopped caring about what Microsoft does long ago. I have a Windows 7 install on my desktop, but it only has games on it and I almost never boot it any more.

Unfortunately I am cursed with a long memory and I keep grudges forever, and I'm old enough to remember the old Microsoft with the patent threats against Linux and the illegal monopoly abuse practices.

The "new" Microsoft still abuses patents against Linux and Android, so you don't have that long memory for that.

Oh, but they started the "intellectual property" threats way before Android was even an internal alpha.

Oh dear. Is that the new benchmark for "old" now? Are there people here who don't remember that??

I think that most people would answer your question by saying "because I need Mac app XYZ and Linux doesn't have a nice equivalent yet". I used to have a MacbookPro. I'm now 100% Linux (Debian). In my particular case, the only reason I would go back to using a Mac is for Logic Pro X (music creation software). I've used Ardour and Bitwig on Linux, but Logic is still superior in many ways. Having said that, it's not enough to drag me back to Mac, and there's no way in hell I'll ever use Windows again on any of my own systems. Generally speaking, I'm now 100% FOSS, and plan to be so for the rest of my tech days.

I use macOS for, wait for it, MS Office.

Yes, LibreOffice on Ubuntu or Fedora is good enough technically, and there are the same problems when exchanging documents with Office for Mac as there are with LibreOffice.

However, deflecting blame works. When exchanging files with someone and something is broken in LO, it is automatically the fault of LO. When something is broken in Office for Mac, it is an uh-oh moment, as it is obviously the latest product from the same company, so the effort to resolve the issue is much more constructive.

Another reason is that both Mail.app and Outlook are usable as Exchange clients (yes, I'm stuck with that, with IMAP disabled there :(). Evolution is a trainwreck. There is a plugin for Thunderbird, but it is a no-go, as it is a yearly subscription. A one-time price would be fine, but there's no way I'm going to pay yearly for using Exchange mail in Thunderbird.

> Exchange [...] with IMAP disabled there

This saved my bacon in a similar situation back in the day: http://davmail.sourceforge.net
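
For context on how that helps: DavMail runs locally as a gateway, speaking EWS/OWA to the Exchange server and exposing plain IMAP/SMTP on localhost, so Thunderbird works unmodified even with server-side IMAP disabled. A minimal sketch of a `davmail.properties` (property names from memory, so verify them against the DavMail docs; the server URL is a placeholder):

```properties
# Gateway mode: talk EWS to the Exchange server (assumes the server exposes EWS/OWA)
davmail.mode=EWS
# Placeholder: point this at your organization's OWA URL
davmail.url=https://mail.example.com/owa
# Local ports that Thunderbird (or any IMAP/SMTP client) connects to
davmail.imapPort=1143
davmail.smtpPort=1025
```

Thunderbird is then configured against localhost:1143/1025 instead of the Exchange server directly.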

Context: Linux user since 1994, also uses Windows often, software developer (anything from low-level embedded to Clojure and ClojureScript).

It's all about Mac OS. It lets me get things done without dealing with silly things.

Let's quickly list what I sorely miss when switching to my Linux machine:

* Multiple monitor support: connect any number of monitors, any time, and have them Just Work. HiDPI or not, doesn't matter. The OS even remembers which of your windows were placed on which monitor at which size, and will do its best to move them when you connect a monitor. No other OS even comes close.

* Consistent keybindings (Emacs-style) in all windows and dialogs. Control-a always gets me to the beginning of the line, whether in a text editor or in a file-open dialog box.

* Reasonably consistent keybindings in apps. Can expect Cmd-q to quit every app.

* Flawlessly working suspend/resume on laptops.

* Full-screen any app and it works fine.

* Apps like LaunchBar (I think quicksilver used to be a free alternative).

* Spotlight, which finds everything.

* Ability to remap any key to whatever I want and have it work everywhere.

* Drag & drop everywhere. And if you laugh at this, consider that this is coming from a command-line guy with 25 years of experience with computers. The way the Mac does drag & drop is faster and more convenient than fiddling with the command line. For example, did you know you can drop a file or a directory from anywhere into any file/open dialog box?

* Apps like Simplenote, Bear, Ulysses: excellent tools for specific purposes.

* TextExpander.

* Predictable, ubiquitous, working clipboard.

For me, switching over to my Ubuntu machine is an exercise in frustration. I can't redefine my keys, there is nothing like TextExpander, multiple monitors just don't work unless the stars align just right and you have all the monitors in just the right order at boot time, and you'd better not mix HiDPI with normal. Drag and drop is nonexistent. Copy/paste is a free-for-all where each app does things differently and you have multiple clipboards (Ctrl-V vs. middle mouse click).

Basically, in order to get things done, I'd much rather work on a Mac.

A side note: to really understand why the Mac is so good, you have to work on it for a while, with someone showing you things (like dragging files into file/open dialogs). People seem to think this is about superficial things like aesthetics, or Adobe software. It's not.

Great response. Did you manage to post any/all of this to the feedback-request thread by the guy from Ubuntu about 3 weeks ago? If not, I think you should. I might even do so on your behalf as I've definitely felt some of these pain-points myself.

Ubuntu tried to make a decent desktop with Unity and failed. Unity was the trigger for me for switching to OS X.

The problem was, they need to put a LOT of polish in to reach OS X levels of usability, and they didn't look like they were doing that, they had some nebulous dreams of making a unified mobile/desktop interface instead.

There isn't anything in particular I'd suggest to them, because they need improvements just about everywhere. Say, is the network manager actually usable these days? When I switched it was easier to just disable it and move along.

And the main elephant in the Linux room right now is systemd. Low level Linux is basically being taken over by an incompetent developer who doesn't understand the Unix philosophy of software design and can't deliver working software anyway. He's the author of the abomination called pulseaudio - back then it was required to uninstall/disable it to have sound again. Now, software written by him is becoming a dependency of almost everything - he is trying to replace all the Linux infrastructure layer with his ideas. Sorry, I see no reason to take a look at Linux on the desktop again.

Pulseaudio in Linux has the same role as Core Audio in macOS.

Systemd in Linux has the same role as launchd in macOS. Some aspects of systemd were inspired by launchd and SMF.

Interestingly, it is a good, useful thing that contributes to a polished experience when it comes to macOS, but for some reason, when someone does the equivalent for Linux, it is suddenly a bad thing, just because it is different than in the past and moves the polish and experience with Linux to a higher level?

Pray tell, what "polish" is contributed by systemd's binary logs that get lost in case of a crash?

Unix philosophy aside, Poettering can't be trusted to provide working software...

There are pros and cons for binary logs, they get you something (https://docs.google.com/document/pub?id=1IC9yOXj7j6cdLLxWEBA...), and you may lose something.

If you have such problems that your machine crashes and loses filesystem data, just redirect it to syslogd and go on.

Ubuntu took Linux from relative obscurity to being a relevant player on the computer desktop.

Before Ubuntu, if you expected to get a build of a typical popular Windows program/game you would get laughed at. Now it's virtually expected. That's not failure.

systemd is not as much a matter of controversy as people want it to be. If it was, you would see people like Linus rejecting it, but they haven't because it's not a major factor right now.

I know, I've used Ubuntu since it came out. I was very happy that I got a polished Debian. Until they dropped KDE from the mainline and started putting all their resources into Unity...

Some of us don't think init is great either: a mess of shell scripts that individually reinvent the wheel.

Irrelevant. Ubuntu, which the other people here are discussing, used upstart before it switched to systemd. For about a decade.

You've just made one of the errors discussed in http://uselessd.darknedgy.net/ProSystemdAntiSystemd/ .

I'm not interested in your appeal to authority, especially as it claims comparing systemd with init is pointless (why?) and contains massive flaws like "Proponents (of systemd) are usually part of the modern Desktop Linux bandwagon"

I only posted about multiple monitor support, I wanted to focus on one specific pain point, and this is I think the worst.

I agree. I've run Linux at home for ages, just not on the desktop. Recently I dabbled with switching over to it on the desktop again to see how far it's come along and was shocked to see the state of multi-monitor and HiDPI support in Ubuntu. I couldn't care less about whether it's using systemd or what format the logs are in. How can we still be struggling with multiple monitors and getting the size of text and mouse cursors wrong?

Don't forget that CMD-C, CMD-X, and CMD-V, the system clipboard shortcuts, work in both desktop apps and the terminal. Whenever I'm using Linux it drives me mad that terminals have different shortcuts. Yes, I know it's mostly for historical reasons, but that doesn't make it any less annoying.

Really, consistency is something that macOS holds over other desktop operating systems in a lot of ways, and that holds true even for third party apps for the platform. After having used macOS for many years, using other operating systems where most never gave much thought to or saw any value in getting on the same page and following a set of conventions is jarring.

Just be wary when drag & dropping a folder onto an existing folder, as the destination folder and all its contents will be nuked before the copy starts! This is IMHO unexpected, especially if you're coming from Linux/Windows, where folders/files are merged (with subsequent pop-up dialogs asking whether to overwrite).

What? You must be doing something wrong. I would say that is unexpected and definitely doesn't match my experience.

Here it is explained in more detail with screenshots:


A dialog pops up with 2 choices: Stop or Replace. Replace will delete everything in the destination folder before copying over the source. If you hold the Option key while drag&dropping, you can get one of multiple different pop-up dialogs, depending on the circumstances... and the results are not very predictable.

> there is nothing like TextExpander

We use Ubuntu at work and I was VERY worried about this. But AutoKey is actually good enough. The UI is not quite as slick but the rest of it is a little better. The exception is the little pop-up form-fill things, which are cool but which I found I did not really miss. https://github.com/autokey-py3/autokey

Well... This actually reinforces my point. I tried it. Installed, ran it, saw the weird warning from gtk. Then tried to use the ",gd" example shortcut in the terminal. This did not work, then froze my keyboard, and after some frantic desktop switching using the mouse, autokey finally crashed.

I'm describing this experience because it's a common one: it's symptomatic of the "Linux desktop" experience and is the reason why I would much rather use Mac OS. It's not that I can't make things work -- I probably could, but I want to spend my time on other things.

I had an MBP for a little over a year, and after it was stolen I switched back to Linux - Xubuntu to be precise. I never liked Unity, and I tend to go for a more minimal desktop on any system, and XFCE has more than enough customization for my needs.

The 2 major issues for me with OS X were:

1: Selecting icons to copy. I don't know if this has been fixed, but if you have a file browser window open and you want to click then shift-click to select a range of files, it would always just make a rectangle (graphically) between the 2 click points, not run left to right through the window and select everything in between (I don't know if I am explaining this clearly).

To select the range you wanted, you would have to either switch to list view or Cmd-click any files which got left out. I really thought it was a bug until I found this forum post https://forums.macrumors.com/threads/shift-doesnt-select-ran... where they defend the behavior.

2: And kind of a silly one, but when you plug in an HDMI monitor, you can only control the volume on the monitor itself. And as with the previous issue, I was surprised at the responses I found in the forums. Complete finger-in-ears-lalalalala "It's better this way, and impossible the way you say, even though Linux and Windows can do it that way".

Anyway, for my personal use case, Linux trumps both Windows and Apple, and, in opposition to most comments I see on this thread, I run Win7 and OS X in VMs if I need specific software (SketchUp on Windows and nothing really on OS X, I just have the VM set up in case I need something).

>like dragging files into file/open dialogs

You mentioned this twice, and now I'm intrigued. What exactly do you mean by this?

Drag a file/folder into an Open dialog and the dialog will switch to that location.

Similarly drag a folder into a Save dialog and it will switch to that folder.

Drag a file onto the "Open..." button on a web page to skip the open dialog entirely.

Drag a file into Terminal to paste that full path into the terminal...

Exactly. It's hard to believe how much time that saves. Also, you can drag the little file icon from an application title bar.

And recent apps consistently let you rename the file in place from that same title bar.

And the much-decried HFS+ file system had a feature I hope they will maintain in APFS. When you moved or renamed a file, most of the time even third-party apps managed to figure out the new location of the file when looking for it, instead of just displaying a broken-path dialog.

And this lookup only kicked in if the resource was not present at the original path. Meaning that moving the file and replacing it with a similarly named one resulted in the similar file being used, while just moving it resulted in the new location being suggested.

I don't know what the behavior is as of today, and maybe by now Linux also provides this reliably, but the first time I saw this was even pre-OS X, I believe. And maybe there is a simple trick, but it looked like black magic to the 20-year-old me.

My guess is a Carbon API which uses inodes instead of absolute path.

That was the case (and created a huge debate) back in the old Carbon vs Cocoa days

Infamous tech note 2034: https://mjtsai.com/blog/2014/10/08/the-source-of-technote-20...
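
The idea of tracking a file by identity rather than by path can be illustrated on any Unix system with inodes (this is only an analogy; the old Carbon machinery used HFS+ file IDs, not raw inode scans). A toy sketch, with all file names hypothetical:

```python
import os
import tempfile

d = tempfile.mkdtemp()
old_path = os.path.join(d, "report.txt")
with open(old_path, "w") as f:
    f.write("data")

ino = os.stat(old_path).st_ino  # remember the file's identity, not its path

new_path = os.path.join(d, "renamed.txt")
os.rename(new_path := new_path, new_path) if False else os.rename(old_path, new_path)
# the old path is now broken, but the inode is unchanged

# Re-resolve the file by scanning the directory for the saved inode
matches = [name for name in os.listdir(d)
           if os.stat(os.path.join(d, name)).st_ino == ino]
print(matches)  # ['renamed.txt']
```

A real implementation would of course use an indexed lookup rather than a directory scan, which is roughly what a filesystem-level file ID gives you for free.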

Just tried it on Linux, and… all these features work pretty much exactly the same way.

I'll mix it in with my other favourite feature so it's more fun. Say you can't be bothered navigating to find & open a file in App B you already have open in some other app (A).

Get to the Open dialog, activate App A, click-hold on the little file icon next to the file name on the top bar of App A and drag it onto the Open dialog over at B. Suddenly, the dialog has the appropriate file selected, wherever it happens to reside on your file system. Magic!

Edit: literally everyone beat me to it.

Final Cut Pro, encoding/decoding videos in ProRes, the Sketch app, Xcode, Photoshop & Lightroom (don't even mention Gimp for Linux, I don't know any professional who'd touch it), and generally a much less frustrating and more productive UI (better support for external monitors, better system fonts, etc.)

Exactly. I actually switched from Mac to Windows last year and I do miss Sketch and Xcode. I imagine on Linux, Adobe tools would also be something that many would miss.

A whole bunch of things for me. Many already listed here, but I'd add two: 1) macOS's high quality text rendering. I keep checking in on Linux and despite installing all manner of weird hacks over the years, I've never got text looking as good as on the Mac in every app. 2) I've not found anything vaguely as good as Screenflow on Linux either for recording the screen and audio live and editing all in one app.

Because Linux (regardless of which distribution) does not provide the same desktop experience as macOS. This is a subjective topic; all I am saying is that the experiences are different.

Personally I prefer MacOS over Linux because it just works from the UX point of view and I can run Linux on it pretty easily with Vagrant.

One reason: audio applications (and associated hardware). I run Linux for the majority of my development work, but Mac just owns for audio.

That said, I have 4 hard drives, with 3 OS's in my main computer. So, I can boot into whatever, depending on my needs.

I ran Linux on the desktop from ~1996 until about 2008 when I switched to OS X. The things that keep me from going back is a much shorter list now than it was back then, but for me it boils down to:

1: Lightroom/Photoshop. I'm a hobbyist photographer, and the tools just aren't as good in Linux. Gimp, Darktable etc are interesting, but they just aren't up to par yet with Adobe's offerings. And basically everyone in photography seems to be using Lightroom + Photoshop.

2: iMessage - being able to have my SMS and iMessage messages on the desktop (and synced back and forth with my iPhone) is just way too useful every single day. Yes, there are apps that provide similar functionality, but the beauty of iMessage is that nobody has to use an app. They just send texts like normal. Maybe there is a solution to this that works in Linux but I haven't seen it.

Mostly those doing iOS/macOS development & using the creative suite of software from Adobe & Apple.

The rest might do it for the heck of it too: slick UX & the benefit of doing ML/DL on one single OS & machine, instead of context switching between a Linux machine & a macOS laptop for the above reason.

I triple boot my i7-4770K+GTX780 with Windows 10, MacOS and Ubuntu. While I alternate between MacOS and Windows 10 every couple of months I rarely jump into Ubuntu unless it is to repair/recover something.

I've tried using it more. I recall using Ubuntu full time for about 2 months a couple of years ago.

But I'm a designer and use various CAD apps which are mostly available in Windows, some in MacOS and none in Ubuntu. I also recall breaking a lot of things when tinkering with Ubuntu.

So while I constantly have this drive to try a Linux and open-source workflow, I just never manage to make it work. I break things too easily and many trivial tasks are too much of a "hassle" (like getting proper CUDA support to use in Blender), meaning the chances of breaking things while tinkering are higher.

The two main things I haven't found viable alternatives for under Linux are iTerm2, and reliable + configurable touchpad gestures (with features comparable to BetterTouchTool + ControllerMate).

It's slightly amusing to me that by far and away the best terminal emulator that I've ever encountered is OSX-only, and it's embedded in my workflow enough that it would be a real pain to do without. I was going to note that FinalTerm inspired some of those features but was now dead, but going by [1] it appears it may be resurrected.

[1] https://github.com/RedHatter/finalterm-reborn

The only valid reason is software you need or if you want to do native apps for iOS.

I'm pretty happy with my laptop, which has the high-end MacBook specs (besides the display) and costs less than 50% of it.

I run Xubuntu on it and it basically has everything I need, except Xcode.

Xcode (and by extension iOS development) is one of those things that makes you wonder if it ultimately does more damage to Apple in the long run than good in keeping it Mac-only. What is the purpose in limiting the number of possible developers of apps/programs for your ecosystem by their brand loyalty?

So your laptop's build quality, trackpad, battery life, weight are on par with MBP?

Same question. I am pretty sure such a thing does not exist at any price.

Reasons why I want a Hackintosh:

- Apps that I've grown used to that are macOS-only, like 1Password, Photoshop, and Quiver
- Integration with iMessage - I really enjoy being able to send messages from my laptop
- Avoiding weird edge cases where shit just plain doesn't work on Linux. I have a lot of issues with a HiDPI display on Linux that I don't ever have to deal with on Mac.

I know Linux is about configurability, and that's why I'm running Elementary OS at home, but these are the reasons why I would consider a Hackintosh now.

The only thing stopping me from ditching OS X for Ubuntu entirely is I still use Adobe apps for work. I dual boot for now.

Compile software for iOS.

This is the only reason I have considered a Hackintosh in the past, actually got one but didn't get around to writing code for it because I got busy with other things. I wish it was less of a closed garden. I'm not sure how it helps their cause by setting the entry bar tied to hardware costs.

Just adding to the other responses: one thing I miss whenever I am not on a Mac is Quick Look/Preview. Finder can preview most files you browse instantly, be it music, images (including Photoshop and raw camera files), video, text, PDF, spreadsheets, docs, etc. Preview is a really amazing and simple app.

There was an Ubuntu thread some weeks ago that seemed to paint a pretty good picture of Linux pain points.

Run macOS software.

Which software in particular?

Off the top of my head, apps I use more or less daily: Xcode, Hazel, DragonDrop, 1Password, DayOne, Alfred, Keyboard Maestro, LINE, DaisyDisk, Sketch, Hopper, Fantastical 2, SnappyApp, BetterTouchTool, and so on.

Some may have equivalents (but I'm sure nothing as polished) or will run under Wine but I have bad experiences with both.

There are also workflows that take advantage of macOS features (e.g. location service) that wouldn't work on Linux I guess.

Adobe, Avid, Scrivener, Scapple, MS Office (yes), Movie Magic Scheduling, MM Budgeting to name just a few among many.

Not Mac OS specific, but Photoshop or the Unity engine are good examples.

I would miss Xcode for sure. Linux unfortunately has nothing that compares, unless you think a bunch of xterms running vi is a development environment. Don't even get me started on Eclipse (yuck!)

Visual Studio Code is available for Linux. I've never used it, but I'd think it would compare favorably to Xcode (especially since everyone seems to complain loudly about Xcode).


Thanks, I'll have to give it a shot. Always thought VS Code was more geared toward JS, but it looks like they have cobbled together extensions to support C++ [1].

I used Visual C++ long ago, in a past life when I used to develop on Windows, and it was pretty good. Played around with more recent versions of it (2010 - 2015) and boy has it gone downhill! Very slow and clunky compared to what I remember.

Have you used an IDE besides Xcode? I've found Xcode to be one of the worst IDEs I've ever used. And in fact, a bunch of xterms is even better.

macOS is easier to use in pretty much every way, and I write Mac and iOS software which you can't do on Linux.

Install git without requiring root privileges... Also, git is a 3rd-party application, yet somehow all distros treat git as part of the base system/OS bundle/whatever, so Fedora 25 has a pinned version of 2.9, 24 of 2.7, etc. WHY. WHY. It's third-party software, not part of your god damn OS - there should be only one version of git in the repos for ALL FEDORA VERSIONS - and that would be the latest... Come the fuck on.

Replace "git" with every other package....

Linux is so amateur hour with all plainly stupid decisions, I'd rather pay good money for proper unix.

If you want the latest and greatest of everything, you could use Arch Linux.

Git today could in many ways be seen as a fundamental component of many tools. Many packaging and build tools use it to fetch data (like Homebrew, the plugin systems of many text editors, etc.)

Also, the git version of your distribution IS relevant because other packages depend on it. For example, on my Ubuntu system the git package is a dependency of over 170 other packages. If you could install a newer version, a lot of these other packages might break.

The recommended way of installing git on macOS is via Apple's git variant by installing Xcode, which also requires root privileges, btw.

If you grab git from Homebrew, it doesn't overwrite the Xcode one, so if you depend on that you're good. How hard could this possibly be for apt-get and the like?
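
The reason the two coexist peacefully is just PATH precedence: Homebrew installs into /usr/local/bin, Apple's git lives in /usr/bin, and whichever directory comes first on PATH wins. A small self-contained sketch of that mechanism (directory names here are stand-ins, not the real paths):

```python
import os
import shutil
import tempfile

base = tempfile.mkdtemp()
local_bin = os.path.join(base, "local")    # stands in for /usr/local/bin (Homebrew's git)
system_bin = os.path.join(base, "system")  # stands in for /usr/bin (Apple's git)

# Create a fake "git" executable in each directory
for d, label in ((local_bin, "brew"), (system_bin, "apple")):
    os.makedirs(d)
    exe = os.path.join(d, "git")
    with open(exe, "w") as f:
        f.write("#!/bin/sh\necho %s-git\n" % label)
    os.chmod(exe, 0o755)

# Both copies coexist untouched; PATH order alone decides which one resolves
os.environ["PATH"] = local_bin + os.pathsep + system_bin
print(shutil.which("git"))  # resolves to the copy in local_bin
```

Package managers like apt could do the same by shipping newer versions under a separate prefix instead of replacing the system package in place.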

To be fair, there is Linuxbrew [1]; also, apparently you can use Nix to the same effect.

I do believe that Linux-based desktop OSs should separate the base system from user software, kind of like the *BSDs have been doing; actually, I'd really like distros to embrace something like Homebrew, where packages are installed per-user.

[1] http://linuxbrew.sh/

> If you want the latest and greatest of everything, you could use Arch Linux.

You're kidding me, right? https://git.archlinux.org/svntogit/community.git/log/trunk?h...

It's sometimes weeks before node is updated.

Updating PostgreSQL from 9.6.1 to 9.6.2 took 4 months.

And right now .net core build is failing on my archlinux test server.


If you want latest - use macOS with brew

If you need the latest versions so badly, why don't you just compile from source instead of waiting 4 months?

thanks package manager, you're useless

Your criticisms of Linux are basically that there is a proper security/stability process in place.

0.2% of desktop users agree. Continue living in your strange bubble, just stop mentioning Linux to normal folk.

I didn't realise normal folk came to HN, but your advice is noted.

You know who wrote git, right?

Please stop posting like this here.


I just broke down and finally got a 2015 5k iMac. At some point I just need to get work done instead of waiting for the latest and greatest and the 2015 iMac is powerful enough. I got a good deal on a refurbished one too, so I'm pretty happy with it.

There's only so much of the hardware rat race worth keeping up with anymore at the desktop level. I just pray that external GPUs will become supported on MacOS. That's the final piece for many people, I think.

> There's only so much of the hardware rat race worth keeping up with anymore at the desktop level.

I believe there's only so much of the hardware rat race worth keeping up with at all. Assuming most of Hacker News is coding, we're spending a lot of time in text editors; even if you are constantly recompiling code, CPU-wise, is anything giving you that much of a boost over anything past Haswell or the PCIe/NVMe drives in the current MBP?

However, my work machines are a 2014 MacBook Pro 15 with a GTX 960 eGPU / TB2 when I need mobile hashcat, a T440s, and a Surface Pro, all pretty old hardware. 90% of the time I don't notice RAM pressure or that I am CPU bound, even when running a few VMs (say, Kali + Win10 on my MBP, or a few Win10 images in Hyper-V on the T440.) When I need more power than that, I can usually just rent it out of AWS/Azure.

I think sometimes the hardware game - especially on the Apple front - is about keeping up the cycle and appearances of getting the new hotness. Yes, the exhilaration of having the next big thing is great, but functionally I'm inclined to believe that having the old thing is just fine for 99% of users, and probably 80% of HN readers.

> Yes, the exhilaration of having the next big thing is great, but functionally I'm inclined to believe that having the old thing is just fine 99% of users, and probably 80% of HN readers.

A couple of months ago I dusted off an Acer Aspire One netbook (released in 2010, Atom N450 processor, 2GiB RAM) to install OpenBSD 6.0, after a few years with Linux and Mac OS X on other systems. Surprisingly it is my primary machine now.

Today I wanted to see if there were even smaller netbooks with usable keyboards around and came across this video comparison of the Sony Vaio P and Fujitsu UH900: https://www.youtube.com/watch?v=szbfvV4vwEI

The video was posted in February 2017 and the thing to notice are the complaints about the mini netbooks being unusable for web surfing and viewing videos in 2017. 99% of users are absolutely convinced that the reason web pages load slowly is not, in fact, horrible garbage JavaScript bloatware, but that their computers are too old and slow. HN users do things like uMatrix filtering and `youtube-dl -f`. I think the 99% of users are much more demanding of their hardware than HN readers, unfortunately for all the wrong reasons.

If not constantly, then really, really frequently, I recompile Android projects.

Before buying a new machine recently, I wanted to know what benefit a fast processor would offer for my specific usage pattern. So I repeatedly recompiled a representative Android project at different CPU speeds on my existing (overclockable) machine.

The machine needed about 15% less time to recompile at 4.1 GHz than it did at 3.4 GHz.
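
As a rough sanity check on that number: if a CPU-bound compile scaled perfectly with clock speed, the expected time reduction going from 3.4 GHz to 4.1 GHz would be about 17%, so the observed 15% is close to linear scaling (assuming the workload really was CPU-bound):

```python
# Ideal time reduction if a CPU-bound task scales linearly with clock speed
base_ghz, overclocked_ghz = 3.4, 4.1
ideal_reduction = 1 - base_ghz / overclocked_ghz
print(f"{ideal_reduction:.1%}")  # 17.1%; the observed 15% is close to linear scaling
```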

Well, I replaced my 2010 MBP with a 2015 MBA, just because I got bored with it, and was also bored with waiting for a 'proper' MBP update. I also got a good deal on it, so I haven't had to touch my piggybank, which has had £1500 in it since 2013, waiting for Apple to hear the voice of developers who are busy with more advanced things than selecting the perfect emoji on the Touch Bar at a Starbucks.

The home user problem is the GPUs in iMacs are shit for gaming.

The work user problem is the iMac isn't designed for 100% CPU usage for extended amounts of time.

I work from home as a consultant, so I have both problems :) Thus, hackintosh...

Yup. I gave up and replaced my 2006 Mac Pro with a 2016 5k iMac once I realized the prices were sub-$2k instead of the $3k I was expecting. I've always hated all-in-ones but lately have found displays and computers last roughly as long as each other so it's less of a concern. That GORGEOUS 5k display sure doesn't hurt, either.

> There's only so much of the hardware rat race worth keeping up with anymore

My personal data point on Moore's Law slowing down: my 8-year-old Core 2 Quad is still half as fast as a modern i5. For it to still be usable would have been impossible in previous decades. With an SSD and a modern video card, I still game on it.

External thunderbolt GPUs are already supported (with a little bit of hacking). Check out https://egpu.io

I ran a Hackintosh laptop for a couple of years - a Dell mini 9, probably the best supported laptop for Mac OS ever.

It was great. Basically everything worked, and it never crashed.

Well, almost. Sometimes sound would stop working until reboot. Sometimes wifi would not work after wake from sleep. I couldn't upgrade Mac OS. The keyboard was garbage, as was the screen. Ironically, a friend tripped over the cord and smashed the screen - if it were a real Mac, MagSafe would have saved it. (I replaced the screen for $35, which was nice.)

All in all it was great, and after it I saved and saved and bought a real MacBook Air 13 in 2012. I could not be happier. I have used it for tens of thousands of hours and it's still going strong, taking a lot of abuse in West Africa.

At the end of the day, a Hackintosh just approaches the Apple experience, but I don't think it will ever get there.

The sleep/wake problems you're describing here also happen quite often on "real" Macs.

I have a fairly large sample size, and I've had zero issues with sleep/wake, except one: rarely, when I used to use a 2014 MBP as my desktop (external monitor, keyboard, mouse), about 1 in 30 times I would disconnect it to go to a meeting, it wouldn't wake from sleep and required a hard reboot. It seems to be related to losing the monitor/keyboard, as it never happened in other situations.

I unfortunately buy a new MBP every year (except 2017 because I've got a 2016 and the upgrade didn't have the 32gb I expected). I have had problems with sleep/wake on probably every single machine.

Did you leave it on to where it got to 0-5% battery and shut off? When I do that it takes me about 10 minutes to start actually using the thing again it's so throttled regardless of the fact that it's now plugged into an outlet. Even restarting it at that point doesn't help much. I have to just sit there waiting for it to hit 8% or 10% or whatever it hits to stop throttling everything.

More often than not when I plug in my two monitors one won't come on until I do it again. Or maybe the third time.

Then about 30% of the time when I close it and unplug everything and remove my headphones the thing is still running and my music blasts through the office.

No manufacturer is infallible but this is the kind of stuff I would've quickly replaced my laptop and gone to another brand over if I wasn't so tied into OSX.

Sleep/wake kills my webcam sometimes, but it's been blamed on Android file transfer utility. Other than that, no issues yet for me!



if I go to activity monitor and find the "android file transfer agent" process and close it, the camera starts working again

I'd say sleep/wake is vastly better on "real" Macs than any other platform I've seen. Though I think I've heard reports of issues on the latest MBPs, is that what you refer to?

My 2007 MacBook Pro had insomnia. It would wake up randomly and refuse to go to sleep. What killed it was it woke up in a plane's overhead bin causing it die from overheating in my laptop bag.

My 2012 MacBook Pro would just refuse to wake up sometimes.

My 2016 MacBook Pro is sometimes slow to wake from sleep, but no major issues yet.

My 2007 Mac Pro would cause a kernel panic when putting it sleep.

I did the same with a MSI Wind U100 - absolutely fantastic machine, worked pretty much seamlessly on the distro I got.

I had a hackintosh build that was working really well for two years. At the time I would never have considered Windows. But I made the switch to Windows about 2 months ago (using a Linux VM as a dev backend). And I surprisingly had zero issues.

The hackintosh can be a good way to taste the hardware diversity (even if the article doesn't go very far on the hardware side). Once you do, you realise how locked in you are by Apple.

Does Windows' support of Bash help at all? I am curious if anyone finds it useful.

It is getting more and more useful. Microsoft is really working on it. I am using a VM for now to be sure I have a 100% working Linux environment. But I hope to be able to do all my dev work in Windows in a few months.

There is one thing still missing, something like iTerm 2. I am currently running terminator on Linux via X11. Having a nice native Windows terminal emulator would be a big plus. There is ConEmu, but after a lot of tinkering I was unable to get proper color and mouse support working.

MacOS is a very nice system for developers, and I am very sad to move away from it. I wrote about it here: https://medium.com/the-missing-bit/leaving-macos-part-1-moti...

There was an interesting interview on Windows Weekly podcast eps 514.

Q & A with Special guest Rich Turner, Senior Project Manager, Bash on Windows and Windows Console.

He talks about future bash/Linux support.

I use it daily. It's not quite as good as the Mac's terminal in many ways, but in a few ways it's even better (apt > homebrew, for example). If all you need to do is ssh into a VM or server, it's awesome. If you need to run a specific version of a legacy Fortran compiler, it might not work for you.

I switch between all three systems over the course of a day, but in terms of (A) system consistency, and (B) well-designed (third-party) applications, Mac still wins.

Yes, the Mac has good consistency. But I am very pleased with Windows' window management and multi-monitor support. Being able to resize multiple adjacent windows with a single drag is really nice. The Win+Tab and Alt+Tab shortcuts are complementary and better than their Mac counterparts.

Kind of hijacking the thread, but what's the best software for running graphical VMs on top of a Linux desktop? (i.e. a Parallels equivalent)

I tried setting up MacOS through Qemu/KVM (using these instructions [0]) and it installed pretty much fine. I don't have a spare graphics card or monitor for VFIO, so I tried running it in a window, but the mouse and keyboard capturing was really finicky, so it was completely unusable.

[0] https://github.com/kholia/OSX-KVM

Do you have a spare M/KB or a KVM switch of some kind? Then what you should do is pass a USB hub (most MBs have more than one built in anyway) to the VM and connect a second pair of M/KB there.

Yes, but I'd prefer not to do that. I have a 27" UHD screen, so it's big enough to display both the VM and my desktop. I'd like to display the VM in a window, like this:


While I agree CPU perf improvements have been quite incremental, PCI Express SSDs and 64GB of RAM make Hackintoshes worth it.

I almost wish Apple would just quietly make this easier for folks, blessing it without explicitly blessing it.

Just use Hackintosh approved components[1] and you'll have an easy ride.

[1] https://www.tonymacx86.com/buyersguide/april/2017

I can't overstate how common parts are for this. My motherboard wasn't officially on their list, yet the only difficulty I had was that my USB keyboard's windows/alt/ctrl keys wouldn't work properly. I had to drop $50 on a Mac keyboard.

It was about on par with the difficulty of installing Ubuntu nowadays.

They basically have blessed it as much as you could reasonably expect them to. With careful hardware selection you will have no issues. My only recommendation, in general, is to use a USB audio device rather than integrated audio.

Just as with any Hackintosh setup: the guy who builds the system seems to have "no" knowledge about hardware and thinks he can write about it. He says it's an eight-core CPU, which is just wrong. The power supply is WAY oversized, resulting in worse efficiency and thus higher power consumption. He also states he needs no special kexts. Well.. that's because he's using Clover! God damn.. there is so much wrong with >90% of those online articles on how to build a Hackintosh... it's amazing.

It's a great PSU, with over 90% efficiency at just 20% load, plus being completely silent at that load. The GPU alone needs more power than a GTX 1080. At 90% CPU TDP utilization, the setup will pull more than 450W. Add some overclocking and the total power consumption easily goes over 500W. How is that oversized?

- The system won't pull more than 350 to 400W
- A GTX 1080 needs less power than the R9 280X; AnandTech and others have shown that
- System idle is around 80W (thanks AMD ^^); 20% load would be around twice that: ~150W. So the PSU runs at about 10% load in idle, where it has worse efficiency
- Why the fuck is he buying an AMD GPU? Ahh.. Hackintosh BS :/ ("no real NV drivers...")
- Even if OC'd, the system will not pull more than 400W (look at AT and their OC'd X99 system). Maybe 420W.

The way I'm reading it here, he's referring to the Mac Pro he didn't buy:

"The spec I was going to go for was an 8-core model, with 32 gigs of RAM, a 1 TB flash drive, additional external storage, two D700s"

Unless I missed something.

Also, I've often wondered who needs a 700+ watt power supply? How many video cards, drives, CPUs, etc etc could that handle?

Electronics that are running at capacity tend to have more issues. If a 700 watt power supply wastes a couple bucks worth of power a year, but lasts twice as long, who cares about the power?

Power supply longevity is already extremely high compared to any other component, if you're looking mostly at high-quality brands like Seasonic. Their top tier product line (with 650W models and up) has a 12-year warranty, and the next tier down (with 550W models and up) has a 10-year warranty. If you want a long-lasting power supply, you can just buy one directly.

When troubleshooting electronics, in general, power supplies/converters are always the first thing to check.

While I believe that power supply longevity is better than for hard disks, fans, and other mechanical components, power supplies are still prone to failure and to damage caused by line-voltage irregularities. That's part of the reason serious servers have two redundant power supplies; it's not just for battery backup. Power supplies do fail, and that's why they're FRUs (field-replaceable units) in datacenter equipment.

Warranties are more about getting people to buy stuff than they are about actual longevity. Very few consumers will bother to exercise a warranty on a power supply from a 10-year-old gaming rig, for example. Few people exercise warranties, period. But they do create a warm fuzzy feeling for buying decisions, and if the margins are high enough for the manufacturer it's not a big risk anyway.

What's more, a failing power supply can cause all manner of weird symptoms that don't immediately trigger the "bad psu" thought in most people's heads.

If I can avoid that by not pushing the limits, it's worth doing.

The longevity is usually great but if the PSU is low quality and it ends up malfunctioning, it tends to kill other components as well.

That's why I always invest in a great PSU.

So you can over-spec your quality, which is expensive, or over spec your wattage using average quality gear, which is cheap.

I could suspend that sign overhead with one extremely high quality aerospace serial numbered M2 bolt or six M4 bolts from the hardware store, financially I'm better off with the hardware store bolts.

> Electronics that are running at capacity tend to have more issues.

Do you have any statistics or other sources on that? (honestly curious)

No, I don't. Just anecdotes, conjecture, and common sense.

I'd like to see the numbers too.

A 700W 80+ power supply will generally be more efficient pushing 450W than a 500W PSU because the wattage output is in the mid-range where peak efficiency generally occurs.

Yes, but without knowing that the 4770K is a 4-core chip I was expecting it to be an 8-core, because the rest of the specs (2× 500GB SSD and 32GB RAM) are the same as the Mac Pro's.

It's not well written that's for sure.

The way I read English the "was going to go for was" part acts as a conclusive delimiter.

As in: I was going to go for an 8-core CPU but went with an i7 4770K instead.

Not many. The OP's build can't run two R9 280Xs in CrossFire mode, not even if everything else is at stock speeds.

I have a 1500W supply in one machine. It draws around 60% of that at peak load, putting it near the peak of the efficiency curve.

The machine has two older graphics cards, 16 drives, an E5-1650 and a fairly power-hungry motherboard (X99-E WS). Also, lots of 200mm fans; it is nearly silent at full load.

> The power supply is WAY too oversized resulting in bad efficiency and thus a bigger power consumption

...but who gives a f? It's a desktop, no battery to drain. And of all the appliances in your home, the computers surely don't mean that much as a percentage of power usage. Yeah, other things are probably wrong too, but I think you picked the wrong detail to care about here :) Better to have a bigger power supply and be sure it'll handle whatever you might add to the machine in the future than to care about replacing it. And from my (and everyone else I talked to) experience, inefficient (power-wise) electronics tend to last way longer than efficient ones, so I prefer them (since my time wasted finding a replacement or a repair shop is worth more than the price of the extra power), even when it comes to fridges and washing machines.

(Commenting mostly because all this insane obsession with energy efficiency is getting on my nerves.)

I give a fuck :p And even if he overclocks his CPU and GPU it's still oversized. It just doesn't make any sense. I think a good system is well balanced, with room for improvement. But we are talking about a 760W PSU! This system consumes 300W to 350W under load, so the PSU is rated at twice the needed load! A 600W PSU would have also done the job; this thing is just pure overkill.

It is better to have spare wattage than to run it at full load. It's not like the PSU uses power by itself, and a larger one will last longer and be more future-proof...

He's saying he may upgrade to a GTX 1080 Ti; that's an extra 250W.

It's the first time I've heard that an oversized PSU leads to more power consumption. How come?

Somewhere in the power supply, power is converted via a switched inductance; the power transfer is controlled by the switching frequency, with roughly the same energy transferred each cycle (constant I_peak). Losses are roughly f*I_peak^2. There are several factors limiting the maximum f, so to get a bigger maximum power, bigger power supplies have a bigger I_peak. Thus, for the same transferred power, a bigger supply will have a bigger I_peak and a smaller f, but since I_peak appears squared, the loss is worse. (This is the rough explanation; the devil is in the details.)
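A toy calculation following that rough model (assuming energy per cycle scales linearly with I_peak; this is a caricature of the argument, not a real switch-mode supply simulation):

```python
# Caricature of the loss argument above, not a real SMPS model.
# Assume energy per cycle E ~ k * I_peak, so for output power P the
# switching frequency is f = P / (k * I_peak), and loss ~ c * f * I_peak**2.
def switching_loss(power_out, i_peak, c=1e-3, k=1.0):
    f = power_out / (k * i_peak)   # cycles per second needed for this load
    return c * f * i_peak ** 2     # simplifies to (c / k) * power_out * i_peak

# Same 300W load; a bigger supply means a bigger I_peak:
print(switching_loss(300, 20) < switching_loss(300, 35))  # True
```

Under these assumptions, at the same load the supply with the bigger I_peak dissipates more, which is the claim above; whether real supplies behave this way depends on the details.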

Additionally, many power supplies are worse at load regulation if their load is too low.

I just looked at Corsair's efficiency graph and it seems that the best efficiency is at around 50% load on their PSUs.

Additionally, up to around 35% load the fan is off; passive cooling is enough.

From what I'm reading, the 1080 Ti and 280X both fall just under 250W, so the change shouldn't be significant.

Also, PSUs have an efficiency, i.e. only ~80% of the consumed energy goes to the components; the rest goes to heat, toaster/heater style I guess. There are different certification tiers, it seems: Gold, Platinum, etc.

You seem to assume that a big power supply somehow draws more power than a small power supply given the same workload. That is not the case.

Yeah, there's no efficiency curve for that power supply, but the fan is off under 30% load, and there's this: "The 80 PLUS Platinum certification guarantees greater than 90%, 92%, and 89% efficiency at 20%, 50%, and 100% operating loads."

It's probably neither here nor there; shame about the missing curve, though. Could be crapola at 10%, who knows.
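To put those certification points in watts: wall draw is just DC load divided by efficiency. A quick sketch using the 760W rating and the Platinum percentages quoted above (arithmetic only, not measured data):

```python
# 80 PLUS Platinum minimums: load fraction -> efficiency
platinum = {0.20: 0.90, 0.50: 0.92, 1.00: 0.89}

def wall_draw(rated_watts, load_fraction, efficiency):
    """AC power pulled from the wall for a given DC load on the PSU."""
    dc_load = rated_watts * load_fraction
    return dc_load / efficiency

for frac, eff in platinum.items():
    print(f"{frac:.0%} of 760W: {wall_draw(760, frac, eff):.0f}W at the wall")
```

At 20% load (152W DC) that works out to about 169W at the wall; at full load, about 854W. Below 20% the certification guarantees nothing, hence "could be crapola at 10%".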

All else being equal that's usually true though - a 750W model of the same PSU will usually draw more at 75W output than the 650W model of the same PSU. But with 80+ gold and above the difference will be negligible.

It is the case in most common usage scenarios. Power supplies tend to reach peak efficiency at around half of their rated maximum load. If your system spends most of its time below half the PSU's maximum, then a smaller PSU can be more efficient. If your PSU is oversized by something like 250W or more, you'll have trouble getting it to hit its peak efficiency even while gaming.

> Power supplies tend to reach peak efficiency at around half of their rated maximum load.

You aren't the only one to post something like this. Where are y'all getting this idea from? Efficiency is very flat in switching power supplies once load gets above about 10% of rating. Plus, computer power supplies have multiple voltage outputs, so in theory one output could be running near full load while another is basically idle[1]. The power output rating is calculated by increasing load current until the voltage drops unacceptably, then calculating the power output at that point. A 750W power supply is just as efficient running at 250W as it is at 750W.

[1] Not that this happens much at all in practice, but it could.

> The power output rating is calculated by increasing load current until the voltage drops unacceptably, then calculating the power output at that point.

It really isn't. Sure, the engineers who design the power supply do that test, but that is not at all what determines the advertised rating of the final product. At 100% of the advertised load, most computer power supplies are still delivering nominal voltage or slightly above, not 5% below as allowed by the ATX spec.

> Efficiency is very flat in switching power supplies once load gets above about 10% of rating. [...] A 750W power supply is just as efficient running at 250W as it is at 750W.

Yes, on either side of the peak efficiency, you'll have points of equal efficiency. And in the middle, you'll have a few percentage points higher efficiency. But more importantly, at the ~30W a typical desktop will actually be drawing most of the time, a power supply with a smaller rating will be substantially more efficient.

Buying 750W and larger power supplies just doesn't make sense for single-CPU, single-GPU systems. Power supplies with lower ratings already have plenty of headroom, both built in to their rating and in the difference between 500W and what a real desktop actually uses on real workloads. To the extent that having excess capacity helps longevity, a 550W or 650W model is already well past the point of diminishing returns, and going up to 750W is pure vanity. If you want reliability, shop for PSUs that use high-quality fans and capacitors; don't just stupidly add an extra 30% on top of what's already more PSU than you really need.

> It really isn't. Sure, the engineers who design the power supply do that test, but that is not at all what determines the advertised rating of the final product. At 100% of the advertised load, most computer power supplies are still delivering nominal voltage or slightly above, not 5% below as allowed by the ATX spec.

I assumed power supply manufacturers would want to slap the peak power number on their supplies for marketing purposes. If they are actually being conservative, then you are correct.

> Yes, on either side of the peak efficiency, you'll have points of equal efficiency. And in the middle, you'll have a few percentage points higher efficiency.

My point is that efficiency is a plateau, not a "peak". Once in the plateau the fluctuations of efficiency from one load point to another are not significant. Below a certain minimum load and past the peak power "knee" is a different story, but that plateau is very large.

> But more importantly, at the ~30W a typical desktop will actually be drawing most of the time, a power supply with a smaller rating will be substantially more efficient.

I never disputed this. I disputed the nonsense that power supplies have a meaningful "peak" efficiency, and that it is a function of the rating. 30W is probably not enough of a base load for efficient operation of larger computer power supplies, but once that point is hit it no longer matters what the actual load is.

I'm also not suggesting it is a good idea to waste money on a larger supply than you really need.

> The power supply is WAY too oversized

The fan is off if the load is below 30% which is neat if you are into silent builds.

I have a 450W PSU and its fan runs at maybe 300rpm and is absolutely inaudible inside the case (even at night). So I won't count this as an argument.

~230W = 30% load (760W PSU)
~135W = 30% load (450W PSU)

The idle draw of such a system is below 100W (okok.. max 100W ;) ) so you won't hear any fan even with a smaller PSU.

That's true! And the system has 4 Noctua fans and a Corsair H80i water cooler, which will all make more noise than the PSU anyway.

Can you give a good resource that's not wrong? I'm really curious.


This is something to start with. But I don't know why they don't go with an NVMe SSD; Clover supports it and it's way faster. But overall these are relatively good guides. I would still make some minor adjustments, but that doesn't matter.

Quick and dirty calculations of power usage (min - max):

  Motherboard: Asus Maximus VI Gene                   - 57 - 123
  CPU: Intel Core i7 4770K 3.5 GHz Haswell            - 70 - 80
  RAM: Kingston HyperX Beast 4 × 8 GB DDR3 2400 MHz   - 12
  SSD: Kingston HyperX 3K 480 GB (×2) (Striped)       - 1 - 4
  HDD: Western Digital WD Black 3 TB (×2) (Mirrored)  - 16 - 20
  GPU: Radeon R9 280X                                 - 15 - 257
  Cooling: Corsair H80i                               - 4
  Fans: Noctua NF-S12A (×2) & Noctua NF-F12 (×2)      - 4
  Sound: Bowers & Wilkins MM-1                        - 12
  Other: Bluetooth LE & Wi-Fi PCIe Module             - 2
Final output: 193 - 518 watts. A 750W PSU is not huge overkill, especially if he wants to add more drives and/or a bigger GPU. Also, as mentioned by others here, PSUs generally run quieter at lower loads.
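Summing the min and max columns of the table reproduces that range (figures copied from the parts list above; just a sanity check, not a measurement):

```python
# (min, max) draw in watts, copied from the parts list above
parts = {
    "motherboard": (57, 123),
    "cpu":         (70, 80),
    "ram":         (12, 12),
    "ssd_x2":      (1, 4),
    "hdd_x2":      (16, 20),
    "gpu":         (15, 257),
    "cooling":     (4, 4),
    "fans":        (4, 4),
    "sound":       (12, 12),
    "other":       (2, 2),
}

low = sum(lo for lo, hi in parts.values())
high = sum(hi for lo, hi in parts.values())
print(low, high)  # 193 518
```

A 518W worst case on a 750W unit lands at roughly 69% load, well within the efficient part of a typical curve.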

He said he was going to buy the 8-core Mac Pro. What's wrong about that?

> This guy says it's a eight core CPU which is just wrong.

Actually it has 8 logical cores and 4 physical cores. For all practical purposes, you have 8 cores at your disposal. I build workstations for 3D rendering and I have yet to see a tool that cares whether the cores are physical or logical; as long as you have enough RAM for each core, you can use each one to its fullest extent.

Unless the parallel operations you're running interleave nicely with the CPU pipeline, there are many instances where a hyperthreading-enabled 4-core CPU performs no better than a 4-core CPU without hyperthreading.

On average, a setup with 4 physical cores and hyperthreading enabled (8 logical cores) performs 15%-30% faster than a setup with just the 4 physical cores.
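That 15-30% figure converts to "effective cores" like this (a back-of-the-envelope conversion, using the article's i7-4770K with 4 physical cores):

```python
def effective_cores(physical, ht_speedup):
    """Rough throughput in physical-core equivalents with hyperthreading on."""
    return physical * (1 + ht_speedup)

# 4 physical cores with a 15%-30% hyperthreading gain:
print(effective_cores(4, 0.15))  # 4.6
print(effective_cores(4, 0.30))  # 5.2
```

So 8 logical cores behave more like 4.6-5.2 physical cores, nowhere near a true 8-core chip.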

Fair enough, I wasn't aware performance was that bad.

Regardless, my point is that the article didn't specify whether he meant logical or physical cores, so saying the article is wrong is partially incorrect (and seems like nitpicking for the sake of nitpicking).

X-core CPU basically always means physical cores, even the marketing departments have gotten that message. Logical cores are called "threads" in that context.

>He also states he needs no special kexts. Well.. that's because you are using Clover!

All Hackintoshes require the FakeSMC kext (to bypass DSMOS), that's it. Some systems require more kexts, for example, to have networking, etc.

Clover doesn't patch or install kexts, but it patches the ACPI tables (especially DSDT). Maybe that's what you mean?

Clover does more than just patching ACPI tables. But when you change some ACPI tables and other registers (CFG...) via Clover you don't need those hacked kexts. It's as simple as that.

Lol. He doesn't grok that hyperthreading is stealing idle execution units in a real core to make it seem like there are more cores.

FWIW I use a MBP 13 non-Retina mid-2012 with 16 GiB and two SSDs. Good enough and no weird compatibility issues.

I built a hackintosh the other day using Clover. Overall it went very smoothly. The only thing that was tricky was audio: the bundled Realtek drivers didn't work and I had to find some GitHub repo with a modified version (can't recall what it was now).

Specs off hand: Asus Z170-A, Intel 6700K (overclocked to 4.2GHz), 32 gigs of RAM, 1TB SSD, Nvidia 1060 powering dual 4K screens

What amazed me was how easy it was compared to years ago the last time I tried.

Did you use those new drivers nVidia just released for 10xx GPUs? I'll be building my first hackintosh soon and that's got me worried.

They will be, the 10XX series literally doesn't work on OS X otherwise.

What kind of power supply would you use with a 1080 Ti in the box?

Yep. You'll only get fairly low resolution until you install.
