New Mac Pro (apple.com)
539 points by fumar on June 3, 2019 | 707 comments

I know being expandable is important here but I am stunned by the fact that an iMac Pro actually comes with better SSD and Video Card for less money and you get a screen to boot!

256GB SSD is shameful, I don't care what they were thinking. No Nvidia support is mentioned, so I know a few people who wrote it off for that oversight alone. They will probably hold out just to be sure, but that was the hope for them and others.

While I was never in the market for a Mac Pro (though an iMac Pro is not out of the question as a jump over the top-tier iMac), I never expected such low starting specs for SSD and video at this price point. The 580X is old, 2017 old. If this is what they are stuck with by not using Nvidia, they have done everyone a disservice.

* did they spec this out in 2017 when they announced it and just lock it in?

* edit: Wrong on price, the iMac Pro is $1,000 less than the base Mac Pro

The first step up from 256GB is 2x512GB, and there are configurations up to 2x2TB. My guess is that the 256GB configuration is meant for users who work primarily with data that's kept on network storage.

There's no professional that would buy a machine with 256GB of storage. Let's call it what it is - a throwaway configuration meant to hit a price point.

If there's a single truth of Hacker News Apple threads, it's that someone will eventually say "no professional would ever..." and then immediately be rebutted by multiple professionals claiming they indeed would "..." and not only would they "...", not doing "..." would be unusual and even undesirable.

Every machine I've worked on, bought, or built has had 256 or 500GB. We only run the OS/apps on the main drive. Then a second drive for scratch disk, and then SAN/NAS storage for the media. It's VERY normal.

I've found that 256G is 'not quite enough', so my old MacBook has 256G and I moved my media folder to one of those flush-mount microSD card adapters. It's read-mostly, so who cares about write cycles?

We only store the OS and apps, so anytime the storage is full it's usually because someone hasn't emptied their downloads folder or they're incorrectly caching to the OS drive.

My experience in personal use is that data loss with microSD is extremely common, much worse than, say, magnetic disks (and certainly much worse than SSD).

I'm sure you're right, but with a read-mostly workload and backups this is less of an issue.

I mean for one thing I carry a backup of that data with me anyway. It's on my laptop and my phone.

Use industrial cards.

I used to think that was the problem too. But then I had this issue with cards that weren't cheap.

Multimedia professionals do not store their data in internal/primary drives.

They generally have RAID boxes with either Ethernet or thunderbolt connection.

Most video editing/rendering is done on big storage arrays anyway. So 256GB storage is more than enough for most serious multimedia professionals.

I'm not entirely sure my Lightroom cache would fit on this drive...

That's what a scratch disk is for

I'm not sure a Mac Pro is the best use-case for Lightroom users, if you're working on something that requires that kind of speed/power you're probably on something like Photo Mechanic Plus and/or Capture One Pro.

Lightroom allows you to move the cache to another drive...

Besides, perhaps you're the user that needs the multi-TB version. Others, e.g. video professionals, won't have huge tens-of-TB source files on their main system drive.

I guess the point is for $6000 it should have another drive.

It should, but with a 24MP camera you can create ~50GB of compressed RAW files per session pretty easily, so a TB of storage won't do.

If you add a high-performance spinning disk, its performance won't be enough and it will be noisy even when idle.

In the end, it’s better to not add anything and let professionals bring their own storage into the mix.

At the end of the day this won’t be sold to new beginners with no files. The buyers will have their resilient storage and the internal drive won’t be used much anyway.

Unless it's configured as a developer workstation, but that's generally the MacBook Pro's role.

Among other things, it has a special video decoding card. Some of those on the market go for $3000-$5000 alone.

So I'm a web dev, but in a previous role we bought a 2-man company that made medical renders. I ended up making them a mini-SAS RAID 5 made out of SSDs. This was before M.2 started taking off. That thing did 1.1 gigabyte/sec write on a little tabletop with terabytes of storage. It was over 20x faster than what they were used to.

The monitor stand costs $1000 so it still seems like they would splurge a bit on the base built in drive.

You would if you had a storage appliance on a SAN, as the SAN is almost certainly faster than a single SSD.

This is an extremely common setup for video production, so it’s not out of the target market at all.

There are plenty of cheap NVMe SSDs that can push 3.5GB/s (Samsung 970, ADATA XPG). Even with dual 10GbE NICs, you can't match that, nor the low access times of local storage.

But I do agree that SAN storage is the norm here.

Lots of animation / multimedia houses use workstations with 40Gbps+ adapters (40GbE or Infiniband) connecting to network storage.

It's done this way so people can connect to the storage that's appropriate for the task at hand.

Different projects can be stored on different SAN/NAS arrays, each spec'd out according to the size/needs of the project.

e.g. a short-run animation doesn't need the same storage capacity as a full-length feature film. They may have similar throughput needs though. (Summarising here, but the general concept is ok)

So, lets say someone is a Flame editor (Flame generally has high bandwidth needs). They're working on project A for today. So they connect to the storage being used for that project from their workstation. The next day, they might be working on a different project, so will be connecting to a different set of storage.

Other people using different software will connect to the same storage for their tasks, but can have different latency/throughput needs.

Obviously, this isn't the approach taken by single person multimedia er... "houses". ;)

It's usually direct-attached storage, not network-attached. They use USB-C or Thunderbolt cables to join to a RAID storage device, and then backup/archive to a network based storage pool later.

I rarely see direct attached storage anymore. It's too cost effective from a media management standpoint to just go right to 10GbE RJ45 or fiber network storage. The only direct storage I see is when speed is absolutely critical, and that's very rare, mostly just high end 3D stuff.

Got it. I'm a few years removed, but fast DAS RAID boxes for each workstation were common, with work product being synced to a network share. Looks like the NICs and SANs are fast enough now to run everything off the network.

We are past that now; new PCIe 4.0 SSDs have just been showcased along with the new AMD chips and they can do 5GB/s read and a bit above 4GB/s write (AMD is rumored to have invested in the R&D of the controller). You'd need 40GbE to match one -- and EPYC Rome, also scheduled for this fall, will have 160 lanes allowing for dozens of them. You could very easily reach 100 GByte/s read which no network will match.

> You could very easily reach 100 GByte/s read which no network will match.

High end networking gear already has higher throughput:



A SAN/NAS using the same PCIe 4.0 SSD's you mention could probably fill the pipes too.

... and it would probably need a bunch of network stack tuning. ;)

>> You could very easily reach 100 GByte/s read which no network will match.

> High end networking gear already has higher throughput:

100GB/s > 200Gbps

You would need 4x 200Gbps ports to reach 100GB/s, so 2x MCX653105A-ECAT (each 2x 16-lanes) at >$700 each, and pay for 1/10th of a ~$30,000 switch. IOW, 100GB/s would cost you ~$4,400, before paying for the storage.

Sure, it could be done, but it wouldn't be cheap, and you'll have used most of the PCIe lanes.
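Putting rough numbers on that in one place — a sketch mirroring the comment's own math (the ~$700/NIC and ~$30,000 switch figures are the ballpark prices quoted above, not vendor quotes, and it assumes dual-port cards as the comment does):

```python
# Sanity check on the 100GB/s-over-the-network cost figures above.
target_GBps = 100                    # goal: 100 GByte/s of reads
port_GBps = 200 / 8                  # one 200Gb/s port = 25 GByte/s

ports = target_GBps / port_GBps      # ports needed -> 4
nics = ports / 2                     # dual-port cards -> 2 NICs

# ~$700 per NIC plus 1/10th of a ~$30,000 switch (rough figures from
# the comment, not list prices).
cost = nics * 700 + 30_000 / 10
print(int(ports), int(nics), int(cost))   # -> 4 2 4400
```

Which lines up with the ~$4,400 figure, before storage.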

Agreed. Higher end network gear is $$$. :(

EPYC servers (128 PCIe lanes) would probably be the go-to, not Xeons.

This is just Imagineering though. ;)

With the specifics, wouldn't it be 4 cards needed? Each card has 2x 100Gb/s ports, so 8 ports in total.

The 200Gb/s network adapters that you linked are 4 times slower than 100GB/s. The parent comment explicitly wrote 100GByte/s.

Oops. Didn't spot that, sorry. :)

That being said, after re-reading the comment they're talking about adding multiple PCIe cards to a box to achieve 100GB/s of local total throughput.

That would be achievable over a network by adding multiple PCIe 200Gb/s network cards too. :)

Nah, a motherboard with enough M.2 connectors could easily exist. Or, U.2 or OCuLink. We have already seen 1P EPYC servers with six OCULink connectors...

Sure. My point is just that whatever bandwidth you can do locally, you can also do over the network.

As a sibling comment mentions though... the cost difference would be substantial. :(

Twin ConnectX6 adaptors give you 800Gbps, or ~1GB/s, at an absolute theoretical max.

It's good to see that local storage has finally returned to the reasonable state of being faster than network storage. SATA / SAS was a long, slow period ...

If it’s 800 gbps then it’s 100GB/s, not 1...

You're right. Brainfade.

So, even with protocol overhead from all the stack layers chewing up maybe an order of magnitude, that'd still leave 10GB/s.

So .. I guess it's still possible, if impractical, to outperform a good PCIe SSD with the latest network interface.

... ~10GB/s can be done by a single 100Gb/s adapter.

More 0's needed? :)

cough 1GB/s can be done by 10GbE.

Maybe a slight typo there? Need to add a few zeros? :)

You're right, doh. See above.

PCIe 3 was not the bottleneck for SSDs. They typically use only 4 PCIe lanes, when they could go up to 16 for 4x the bandwidth.

But the standard M.2 NVMe interface happens to only have 4 lanes. PCIe4 will double the available bandwidth for these very common SSDs.

The new X570 motherboards will have PCIe 4.0 soon.

Thunderbolt 3 can beat that though

A typical product aimed at mid-range video producers, the G-Speed Shuttle SSD, can do up to 2800MB/s. That's 32TB of local Thunderbolt 3 attached SSD storage.

Mind you, you'll pay $15K for it, but if you're in that business you can well afford it even if you're not a top tier Hollywood production shop.

Given that your storage array costs that much, the fully loaded Mac Pro price (somewhere in the $20K range?) is not that outrageous. The people who use Red cameras and G-Tech storage arrays are the Mac Pro demographic Apple is going for here.

Disclaimer: I used to work with G-Tech but no longer there.

Prices have dropped then, because the G-Speed Shuttle I use is 96TB and doesn't cost half that much. I've also used almost every model, and in the real world you don't ever get close to the advertised R/W speeds on those. Plus when the volume gets full it will drop to <100MB/s write.

They are popular though. I see them a lot, but I have had very little success with them over the years.

Thunderbolt 3 basically just provides 4 lanes of PCIe 3.0

I know nothing about SAN, but I'm still surprised that it would be faster than a PCIe connected local SSD?

I'm just speculating here, but I assume that online storage isn't faster, but it may be bigger, online, easier for IT to manage, and fast enough. It also seems likely that some employers are worried about engineers or creatives just walking out with commercially sensitive data.

This. Have a look at the various Linus Tech Tips videos on their ingestion and editing workflow. There's a reason they upgraded their network to 10GigE, and it wasn't for fun.

Decent network cards (40GbE, etc.) with reasonable NAS/SAN storage can return data in time frames similar to a local PCIe SSD.

That's for storage physically located nearby though, e.g. same building, no more than one or two network hops (roughly).

Higher end networking from the last decade or so has become really fast. :)

40GbE is indeed crazy fast. Didn’t realize that you can have PCIe cards that do this for less than $500.

For a home lab or dev/test gear, where getting stuff off Ebay is an ok thing to do, older Mellanox cards can commonly be found for under $100. Sometimes much cheaper, depending on what the seller is doing. :)

Looking quickly at the US Ebay just now, here are some older single port ConnectX-2 cards:


US $23.00 each, free shipping. Note - I don't know the seller at all, this is just from a quick look on Ebay.

There's a tonne of stuff on there. ConnectX-2 is no longer supported by Mellanox, though the cards themselves are generally pretty bullet proof.

Drivers for Linux and FreeBSD come with the OS. :)

Drivers for Windows are a bit more complicated. Mellanox has an archive section with the ConnectX-2 drivers, which work for most people.

Cabling directly between two cards - instead of going via a switch - is pretty common for people just trying out the tech. It lets them plug one card into (say) a FreeNAS server, with the other card in their desktop or workstation. Removes the ~120MB/s limit of 1GbE, assuming any kind of reasonable disks in the connected NAS. :)
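A naive way to sanity-check throughput over such a direct link is to just push bytes over a socket (the 10.0.0.1 address and port here are hypothetical — you'd assign your own addresses to the two cards; run receive() on the NAS side first, then send() from the desktop):

```python
# Naive throughput test between two directly cabled machines.
# Not a real benchmark (iperf is the proper tool), just a rough check.
import socket
import time

CHUNK = 1 << 20            # 1 MiB per write
TOTAL = 64 << 20           # move 64 MiB total

def receive(host="10.0.0.1", port=5001):
    """Accept one connection and drain TOTAL bytes from it."""
    srv = socket.create_server((host, port))
    conn, _ = srv.accept()
    got = 0
    while got < TOTAL:
        data = conn.recv(CHUNK)
        if not data:
            break
        got += len(data)
    conn.close()
    srv.close()
    return got

def send(host="10.0.0.1", port=5001):
    """Push TOTAL bytes and return a rough send-side rate in MB/s."""
    s = socket.create_connection((host, port))
    buf = b"\0" * CHUNK
    start = time.monotonic()
    sent = 0
    while sent < TOTAL:
        s.sendall(buf)
        sent += len(buf)
    s.close()
    return sent / (time.monotonic() - start) / 1e6
```

The send-side timing includes kernel buffering, so treat the number as optimistic; it's still enough to see whether you've escaped the ~120MB/s ceiling of 1GbE.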

If this kind of thing is of interest, probably the best place on the whole internet (not joking) for this stuff is Serve The Home's Networking forum:


Plenty of advice and stuff to read there, and a good place to ask questions from people that have a clue. :)

Latency will be higher, but throughput can be nearly the same with a fast enough network.

It's more about size and stability.

I can have a 100TB volume for under $10,000 that multiple editors can R/W to over 10GbE. We probably average about 400-600MB/s to a client in a simple real-world setup, which is fast enough for a few editors to work on multiple streams, depending on the server. It's easier to backup/archive from an IT standpoint.

Most people working solo don't need to access hundreds of TBs though. They're fine with local or thunderbolt attached storage devices.

My OS drive is for the OS and applications I don't care about. 128GB is plenty tbh.

I really can't imagine this world where "professionals" keep everything on a single drive and are primarily concerned with capacity of that single drive.

My gaming computer has three SSDs, including a 120GB for the OS, and a pair of magnetic drives.

My 2011 Air has 128GB and I'm always having to clean it up to install updates. Yes, it has years of cruft, and some userland stuff, but it's really just a netbook for me, so I don't keep much on it.

I find 256GB fine for writing apps. I think more space would be a waste compared to a faster processor to speed up my compile times. If I had infinite money I'd probably put any additional space on extra drives as a RAID array to help with IO read bottlenecks during compiling since I don't have any use for more.

I totally understand gamers need more space, people who collect movies and videos need space, graphic designers need more space. Maybe even Android OS developers need it due to the Android Open Source Project, AOSP, having like 100 repos you need to clone all at once. I don't need it as an app developer, though.

Apple has a history of offering small base storage/memory options, and then charging more for the upgrade versions than they physically cost to produce. It's how they make different price level products when the products are all fundamentally the same. This lets them cater to poorer customers than they could otherwise since the rich ones buy the overpriced upgrades and subsidize the poorer customers' hardware. So it isn't all bad.

Most often (depending on where we are in the SSD tech generation cycle) larger SSDs are faster (see the new PCIe 4.0 controllers, which need a number of NAND chips for best performance). A 256GB drive has a high probability of not being the best choice for compilation. A 2018 MacRumors thread shows:

    - 256GB SSD: 1400 MB/s write, 2200 MB/s read
    - 512GB SSD: 1846 MB/s write, 2500 MB/s read
    - 1TB SSD: 2500 MB/s write, 2600 MB/s read
    - 2TB SSD: 2600 MB/s write, 2700 MB/s read
So the difference is mostly in write speed; the iMac Pro I'm writing this on with 1TB is in line with that, at ~2800 MB/s write.

But of course this could be very different with the new iMac which might have a very fast write SSD in the 256 GB model.

Well, I know plenty of developers who opted for the 256GB MacBook Pro models and they're fine with it, myself included. Sure, I'd prefer more storage if I can have it, but I can definitely live with this and still have plenty of space.

Saying no one would buy it is an exaggeration.

I consider myself a professional (at least in that I get paid to do what I do), and I don't think my work hard drive has more than probably 40GB used on it. So 256GB seems like excessively plenty.

I never use the OS drive for storage, and unless it's a requirement to run, I don't load apps on the system drive either. I normally have at least 3 drives in my computer: OS/Apps/Data. With imaging software, replacing one piece of the 3 is damn simple, and there is a better chance of ransomware not taking out your data. So I would definitely use and keep the 256GB drive.

A lot of people are disagreeing with you but I certainly concur. Sure, there are ways to work around it, but this is a $6,000 computer and you could get more storage than this in the 2004 iMac.

I refuse to believe that meticulously emptying out your trash and downloads folder so that you can continue to squeeze by with 256GB is now acceptable for ultra high end, "I don't know a single person who can afford this" computing.

I am a professional and ironically enough work with big data. 256GB is fine for me.

If you're still relying on local storage in 2019 for development, analytics, video/image work, etc. you are simply doing it wrong

I have petabyte network storage, so don't disparage me my 256GB SSD.

Though it is a bit slim if you want to store lots of multimedia or host multiple VMs.

Really? For software? I hardly use 200GB and that's including a lot of extra, personal stuff on my drive.

Or even more accurately, swap "price point" for "profit margin".

Unless these cases are manufactured in remote monasteries, the costs don't add up and price point wasn't a concern.

It's probably there for people who are going to swap out for their own SSD anyways.

I'd buy one in a heartbeat, I don't honestly give a shit what storage comes with the system. I've got a separate, dedicated system for storage that's got all kinds of backup/redundancy/failover/etc. configured.

If you're a developer on a large (multi-GB) codebase, a beefy multicore system can be great for quick compile times, but hefty storage may not be a necessity.

Or is this meant to be the "I just want to buy my own disks separately model", with 256GB drive just there to allow OS to be pre-installed?

EDIT: I see now there are only two internal drive bays, so that would mean tossing the 256GB that comes with it if you wanted to max out the internal storage.

> I see now there are only two internal drive bays, so that would mean tossing the 256GB that comes with it if you wanted to max out the internal storage.

Or using some of the several PCIe slots to add lots more SSDs. It looks like the default configuration has enough spare PCIe slots and lanes to accommodate 8 M.2 SSDs in suitable risers. That can get you another 16TB for ~$1.8k (consumer-grade SSDs).
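The capacity/cost arithmetic behind that figure is simple enough to sketch (the ~$225-per-2TB street price is an assumption for consumer drives, not a quote):

```python
# Rough check of the "another 16TB for ~$1.8k" figure above,
# assuming 8 consumer 2TB NVMe drives at ~$225 each (street price).
drives = 8
capacity_tb = drives * 2        # total added capacity
cost = drives * 225             # total drive cost, USD
print(capacity_tb, cost)        # -> 16 1800
```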

You can't buy drives separately. Storage is almost certainly raw flash, exposed as NVMe through the Apple T2 controller.

They didn't mention storage expansion. This was a big red flag. While 256GB is fine for an OS drive, should I ever want to upgrade, what is the catch?

The 256GB model is an old Apple hack to show a lower starting price (which hardly anyone would buy at this point) and which quickly goes up as you upgrade the components.

Right. If you're working with multi-terabyte files that live on a NAS or an external directly-attached storage, the local drive is basically a cache of the OS plus whatever you're physically working on at that moment.

And it seems to come with 10GbE by default, which is consistent with that.

2x 10GbE actually. Plus four 40Gbps Thunderbolt ports.

I find that link aggregation is pretty much useless and/or unstable. Not sure what good two 10GbE ports do. But seeing 10GbE becoming part of the standard spec is a really welcome improvement, not least because it will drive prices down.

I'm not mad that it's 10GBASE-T, but I'd personally rather 10G over SFP+. The sorts of shops that can afford these probably already have some pretty burly networking infrastructure, and 10GBASE-T is both expensive and (IME) less reliable.

The encoding/decoding capacity of Ethernet-over-twisted-pair seems to be reaching a point of sharply diminishing returns and the hardware for 10GBASE-T is really expensive for what's been, in my experience, a less reliable experience than SFP+. I'd rather tell somebody who needs 10GBASE-T to use an RJ-45 adapter than to have to go grab yet another Mellanox card to stick in a computer this expensive.

But isn't SFP+ incompatible with 1G (i.e. it can only be used with 10G network equipment)? If you are selling a machine that people will plug into either a 1GbE or 10GbE network, you'd rather have an interface that can downgrade.

You can use SFP in an SFP+ socket, but not the other way around. The downgrade to 1Gb would work fine. Since this is still a desk-focused machine vs. datacenter, I'd expect most offices to be wired with Cat6 Ethernet, which can do 10Gb if needed, vs. SFP+ cables or SFP+ to fiber.

That, and I'd still expect an RJ-45 port somewhere.

Not always true, it needs explicit support for SFP as it's not backwards compatible.

You can find SFP+ modules that support both 10G and 1G.

>but I'd personally rather 10G over SFP+.

Is that the wrong way around? I'm struggling to parse otherwise.

Another way to say 10G via an SFP+ transceiver, as opposed to 10G via twisted pair.

1) https://en.wikipedia.org/wiki/Small_form-factor_pluggable_tr...

2) https://en.wikipedia.org/wiki/Ethernet_over_twisted_pair

Read as s/over/on/

> Not sure what good two 10gbe ports do.

1 x 10 GbE for your regular Ethernet/IP traffic and 1 x 10 GbE for your storage traffic. You do run your network storage over a physically separate network, right?

I don't know. Intuitively I would have thought that a NAS that updates itself automatically, with SMB1 disabled, is less dangerous than a NAS that is in theory insulated from the internet but not patched for security vulnerabilities (since it's not connected) and connected to machines that are on the internet and could be infected.

I am not sure what "a physically separate network" means when it is used by the same computer. (Also, what about accessing your storage over WiFi? Or do you have to have cables strewn all over the place?)

For a lot of "pro" usage scenarios, the local environment will have one "LAN" that has Internet gateway, and various local services on it, plus a dedicated storage network using a separate switching infrastructure (and thus, separate NIC port on your machine).

WiFi is not really an option, for performance but also often for security reasons.

So yes, you'll have two cables from wall-jacks to the machine for networking.

> you have to have cables strewn all over the place?

Of course. WiFi isn't going to work in shared user environments that chew bandwidth for breakfast.

Not having to worry about the security issues of WiFi too can be helpful (depending on your security requirements).

I'm curious what your experiences are here. With enterprise-level networking equipment and LACP (802.3ad), I've never run into any weird issues.

I tried between two enterprise D-Link switches, and also between a Synology NAS and a D-Link switch.

Link aggregation in general only provides a performance benefit with multiple connections to multiple machines, so using link aggregation between one client (Mac) and one NAS will likely result in zero performance improvement (the packets will only use one of the cables). It only makes sense if you have two or more NASes that you want to access simultaneously (or two or more clients accessing the same NAS, but that wouldn't be a use case for 2x 10GbE ports on a client).
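A toy model of why a single connection can't use both links: the switch hashes each flow's identifiers, and the hash picks one member link. (crc32 here is just a stand-in for whatever hash a real switch uses; the addresses and ports are made up.)

```python
# Toy model of LACP member-link selection: one flow always hashes to
# the same physical link, so a single client<->NAS connection rides one
# cable no matter how many links are in the bond.
import zlib

def pick_link(src, dst, sport, dport, n_links=2):
    key = f"{src}|{dst}|{sport}|{dport}".encode()
    return zlib.crc32(key) % n_links

# One Mac talking to one NAS over one connection: always the same link.
same = {pick_link("mac", "nas", 52000, 445) for _ in range(100)}

# Many distinct flows (different source ports) can spread across links.
spread = {pick_link("mac", "nas", 52000 + p, 445) for p in range(100)}
print(same, spread)
```

That's why aggregation helps a busy office full of clients but does nothing for a single editor pulling one big stream.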

Synology also supports balance-slb bonding, which in theory gets around this single-connection restriction. However, I ran into some connection problems with some Windows clients. Never got to the bottom of them, but they went away when I disabled the bonding.

In any case, it is hard to saturate a 10GbE connection with a single NAS unless it is packed with SSDs, which I wouldn't assume for mass storage. So I am not sure there is much value in aggregating the links in the first place.

D-Link gear isn't exactly what I'd judge networking standards on; they work, I suppose, but they're hardly what I would install in even a small business office.

I have multiple LACP bonds on my Juniper EX2200 at home working without issue, though the single stream limits you mentioned are the one thing LACP can’t fix.

> enterprise dlink switches

"enterprise" DLink switches aren't really a thing yet, regardless of what their marketing team wants to brand them as. :(

Cisco, HPE, etc have "enterprise" switches. DLink might be in a decade.

> it is hard to saturate a 10gbe connection with a single NAS, unless it is packed with SSDs

No, it's just a matter of having enough spindles behind it.

As a rough guide, with a (say) average spinning rust HDD able to push out 100MB/s when reading, you'd only need 10 such drives to push out 1000MB/s (raw).

In the real world, you need extra spindles as some of the data being pushed out is just internal checksum/redundancy, and doesn't go over the network.

But for reading back large files in mostly sequential access, you'll hit 1GB/s from about 10 drives onwards pretty easily. More drives, more throughput.
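A quick back-of-envelope for the spindle math above (the 20% redundancy overhead is an illustrative assumption, not a measurement):

```python
# Aggregate sequential-read throughput from spinning disks.
drive_MBps = 100                  # one average "spinning rust" HDD
drives = 10

raw = drives * drive_MBps         # 1000 MB/s of raw disk reads
usable = raw * (1 - 0.2)          # assume ~20% lost to parity/checksum
print(raw, usable)                # -> 1000 800.0
```

Hence "extra spindles" in practice: a few more drives cover the redundancy traffic and you're back over the 1GB/s mark on the wire.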

I would defer to people with more enterprise hardware experience than me for serious NAS setups, but my experience with various generations of 12-disk Synology NASes is that you lose a lot of performance to disk vibrations / inefficiencies of the RAID implementation / sync between drives / TCP, etc. So I don't think it scales linearly. With a Synology DS3615 and 12 HGST helium drives in RAID 5, I barely get over 1GB/s locally, while each drive individually is capable of over 200MB/s sustained.

Yeah, no idea with Synology. When I was originally looking at NAS solutions, they (and QNAP) just seemed expensive for not much product.

Went with FreeNAS instead, as I was already very familiar with building systems, it's based on FreeBSD (OSS), and it gives better tuning on the higher end.

Doesn't have as pretty a GUI though. ;)

LACP does generally "just work"; the problem is when you want one machine or session to be able to max multiple links. The solution here remains the same as it always has: multipath. I hope Apple has added SMB multichannel support for these users since I last checked.

Even without link aggregation (which does generally work quite well), this would allow you to have a dedicated SAN + network solution, which is often very useful.

Would the second interface be for a storage network?

And it's funny how many people here who claim to use a SAN are for sure actually using a NAS.

From maintaining large image-storage FC SANs with Apple Xserve and IBM DS for some years: this is no fun to administrate for peak performance, and I would assume most smaller shops have NAS instead of SANs (not disputing that larger shops have SANs, as we did driving ten photo studios).

That just shows that they put the tiny, essentially useless drive in to keep the base price artificially low.

Isn't providing a lower cost option a good thing?

Considering what a price tax Apple has had on internal storage, "lower cost" is a bit far-fetched.

Even the old 5400 RPM HDDs cost an arm and a leg if you bought them pre-installed in your MacBook. This was back in the days when you could unscrew the lid on your MacBook Pro with a standard screwdriver and replace the internal memory.

Bait and switch is a dark pattern, even if it's a common one.

It's a product with a price tag and spec sheet anyone can find. Can't really see how they can be any more transparent.

It's not a transparency issue, this dark pattern exists everywhere. You advertise "New X starting at $x" which is low enough to attract customers. But the thing you sell for $x is actually lacking in some small but important capacity making it unattractive for actual use. Then you sell an upgrade for the target price and customers are more willing to pay because it feels like they're paying a small premium for the thing they want.

It's a neat psychology trick. Customers are far more receptive to upsells when they're the one doing the upselling.

Where do you think the 'switch' is in that? At what point do they switch something from what was previously offered to something not expected? I don't think there is any switch. The specs are presented as up-front as any hardware spec is.

The switch is that you're some way through the "funnel" before you discover that to get a decent spec machine you need to spend a lot of money. It's the "power of commitment" sales trick.

I don't agree - it says how much storage each version comes with up front in large font on the marketing page https://www.apple.com/mac-pro/specs/. It even lists the 256GB SSD option first, as the default option.

Your assumption requires that no one actually wants or will buy the 256GB option. I for one would definitely consider it; all my current computers have only 256GB and I haven't had issues (external hard drives are cheap).

> low


For development, I'm actually perfectly fine with 256GB SSD. And I don't care about the video card much at all.

Still, I don't see this as a developer machine — I think an iMac Pro is a much better value proposition for developers, and in my case, where I really mostly care about single-threaded performance for interactive development in Clojure and ClojureScript, I'm looking at the iMac.

On the flip side over in Linux land I built an 8 core 16T desktop with 64GB of RAM and two 27” 4K HDR monitors for less than half the price of the base Mac Pro.

Oh and it has an RTX2080 which beats an RX580 so badly it’s practically attempted murder.

I get the Apple tax, I really do (typing this on a 5th gen mini), but the base model is hilariously expensive in a world where I can buy a 12C/24T for $499 that beats a $1,200 Intel CPU that's obsolete before it's released (and that 12C will drop straight in).

They'll sell, Apple stuff always does, but unless you need macOS for some reason I don't see how.

More like Xeon tax.

The new Mac Pro has a ton more PCIe lanes and DDR4 channels. It's a completely different league, it can't be directly compared to mainstream platforms that only have dual-channel and 16-24 lanes.

EPYC would've been a better value, but it's still not cheap. Big computers with lots of memory and I/O capacity will always look ridiculously expensive next to mainstream desktop.

Threadripper exists for PCIe + DDR4 (less than the reported 2TiB because of no RDIMM support, but it could theoretically hit 2TiB if people start producing 256GB UDIMMs).

But... 64 lanes of PCIe 3.0. 32 cores. 64 threads with SMT. 80MB of cache (think of all the locality!). 256GB of RAM support (if you're using more, you're probably doing scientific compute and you're probably better off on Linux anyway, I'd assume). ECC support. 1700 USD. Quad channel RAM.

Xeon W-3175X? 48 lanes of PCIe 3.0. 28 cores. Probably no more Hyperthreading after Zombieload. 512GB RAM support. ECC support. $3000 USD. Hexa channel RAM.

And better yet? Competent TR motherboards with all the RAM/GPU/whatever support you need go as low as 400 CAD.

Also, if you're less Mr. Moneybags, the 2920X exists. 12C/24T, same memory and IO capacity. 650USD.

But wait, there's more! TR 1900X is older, but: 8C/16T, 64 PCIe 3.0 lanes, quad channel RAM. Same kind of memory support: 256GB. 300USD.

A far shot from "ridiculously expensive" considering 300 USD (or even 650 USD) is less than some mainstream desktop CPUs. The 9900K is 490USD. The 9980XE is 2000USD.

Also, TR3 with PCIe 4.0 is on the horizon, and Zen 2 with PCIe 4.0 is here. 24 lanes of PCIe 4.0 has equivalent bandwidth to 48 lanes of PCIe 3.0: same as the Xeon W-3175X.
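A quick sanity check of that equivalence (a minimal sketch; the per-lane transfer rates are the spec values, and both generations use the same 128b/130b line coding, so the factor cancels):

```python
# Per-direction bandwidth of a PCIe link in GB/s.
# Gen3 signals at 8 GT/s per lane, Gen4 at 16 GT/s; both use 128b/130b coding.
def pcie_gbps(lanes, gigatransfers_per_s):
    encoding = 128 / 130  # 128b/130b line-code efficiency
    return lanes * gigatransfers_per_s * encoding / 8  # 8 bits per byte

gen3_x48 = pcie_gbps(48, 8)    # 48 lanes of PCIe 3.0
gen4_x24 = pcie_gbps(24, 16)   # 24 lanes of PCIe 4.0
print(round(gen3_x48, 2), round(gen4_x24, 2))  # both ~47.26 GB/s
```

Doubling the per-lane rate while halving the lane count comes out exactly even, which is why 24 Gen4 lanes match the 48 Gen3 lanes of a Xeon W-3175X in aggregate bandwidth.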

Big computers with lots of memory and IO capacity can be decently cost effective. You just can't ask Intel.

It will be ironically funny when people start posting benchmarks of macOS on their (Hackintosh) Threadripper based workstations. ;)

Don't worry, by the time this baby ships, you'll be able to get Ryzen 3900X's that are PCIe 4.0 instead of the Mac Pro's PCIe 3.0. Only 24 lanes, but that's all you'll need.

New Ryzens have 24 lanes and PCI Express 4.0.

They support ECC as well.

No, even with Xeon and buying from big OEMs, it will still be half the price.

Do you have a quote for that? It was true for a while on cost/performance because the previous Mac Pro was never upgraded, but outside of that anomalous period the major players tend to be fairly competitive if you’re comparing equivalent parts.

Dell Precision 5820 Tower - Intel Xeon W-2145 (8C, HT), 32GB ECC, Quadro P4000, 256GB NVMe - $3,600

Well, that's cool, but that's comparing Apples to Linux-Oranges, I guess.

I do have a Linux box with a fast Intel CPU and it's nice and fast, but a Mac it ain't. I'm happy to pay more for a Mac machine, if only for the fact that I get an OS with working copy/paste in all applications. I won't be buying the Mac Pro anytime soon, though, I'm not willing to pay that much. But there are people who are and I'm glad there is something being offered there.

>if only for the fact that I get an OS with working copy/paste in all applications

That's such an oddly specific reason to require MacOS. Can you say more? I can't say I've ever had a problem with copy/paste in Linux, in any application.

If this issue were serious enough to me to consider doubling my computer's price tag, I'd at least first look at Hackintoshing, which tends to be pretty solid these days when you can pick out specific hardware in advance.

I suspect the issue is that in Linux the copy/paste commands are specific to each application, which can be annoying when copy/pasting between multiple applications without a lot of configuring.

Hmm, that sounds like a possibility, though as a practical matter every UI application I can remember using on Linux uses Ctrl-C / Ctrl-V, and terminal applications use Ctrl-Shift-C / Ctrl-Shift-V - because Ctrl is for sending signals to applications (like Ctrl-C). That's universal enough that I don't see any issue. (Actually I prefer it to the way Apple separates Cmd and Ctrl, which I find infuriating because of the awkward finger positioning it requires.)

Ctrl-Y, a la emacs. E.g. Matlab uses that by default.

It’s funny you bring up working copy/paste, because lately i’ve been having problems with ‘pboard’ just randomly dying on my Macbook.

Not to come out over-defensive, but your base configuration (I guess?) doesn't include a GPU, I bet your caches are significantly smaller, and also no ECC ram. I also bet your storage is considerably slower.

No, that price included an RTX 2080, which is a pretty decent card and better than the base config in the new Mac Pro.

The platform supports ECC (all Ryzens do), though it’s on the board vendors to support it officially.

Cache size, no idea, but I will note that the new 3900X would obliterate the Skylake Xeon (12 cores vs. 8, and comparable or better IPC) in the Mac Pro base config. It’s an older 8-core architecture against AMD’s best new consumer processor, and it’s $499 vs. Intel’s nearest equivalent at $1,100. It’s not even remotely close at the moment.

My point was that ECC ram is twice as expensive. Cache is super expensive (your consumer chip will have significantly less), so are PCI lanes (again, consumer chips have significantly fewer of these), and also your Ryzen chip doesn't have AVX-512 support which is important for e.g. video work.

ECC ram can actually be a negative in certain use cases. Recording studios don't like the extra latency the ECC adds for MIDI for example.

ECC does not add latency, and MIDI isn’t even sensitive to additional clock cycles in buffered memory (we are talking nanoseconds here).

Audioland is a magic place where even different brands of hard drives can "affect the sound quality."

I'm not even a bit surprised by these outlandish claims of ECC RAM latency affecting MIDI timing.


>On the flip side over in Linux land I built an 8 core 16T desktop with 64GB of RAM and two 27” 4K HDR monitors for less than half the price of the base Mac Pro.

I am betting that at half the price of the Mac Pro, it doesn't include ECC support, and hence the memory isn't ECC, the CPU isn't server grade (even on the AMD side you would need at least EPYC), the motherboard isn't server grade, it has fewer PCIe slots, and it lacks a decent case with a decent power supply. Yes, Ryzen supports ECC, but it is not validated as such, and board vendors have to do their own testing as well. And testing is expensive (hence why server-grade CPUs are expensive).

Honestly, I love AMD and loathe Intel. But these kinds of comparisons all over the internet are like saying I could get a 500hp Nissan GT-R over a 500hp Ferrari at half the price; yet why do we never see that argument in car forums, only on tech forums?

And what 27" 4K HDR monitors are you using? I've been looking for something similar for a while and haven't seen much with those base specs...

LG 27UK650’s, good for the price.

Why is iMac better than iMac pro for single threaded?

Because the Xeon-W CPUs reduce frequency per core as you increase your usage to more and more cores. The 18-core version still has 4.3 GHz single-core turbo frequency.

My workflow is mostly multicore so to me the iMac Pro is clearly superior.

What's the general split between laptops and desktops for developer use? Certainly I've migrated to 100% laptop which gets regularly used in a variety of locations.

Everything about the base specs is just good enough. I’m not defending it per se, but if you spec’ed the machine for any given purpose, whether software dev or rendering, it’d probably raise the cost by at least $1-2k, and then you’d have the “real” machine. So the reason the price looks egregious is that most of it is just base cost.

This is a computer made for the engineers who’ve already “made it,” who are making that $500k a year and are looking for a Porsche over a Corvette.

It's designed to be useful for more than one kind of "Pro". The old Mac Pro, for instance, seemed to completely forget about music studios and their professional requirements for Macs. This new machine may seem like overkill to software developers, but as an audio engineer, it's perfect.

I don't even want a 1TB SSD in it, the 256 is perfect to hold the OS, a few DAWs, and all the plugins I could ever want. Everything else gets saved to drives in a toaster anyway. A rackmountable unit with a ton of PCI slots for HDX/Dante cards was on my Christmas list, and I'm not alone- there's a reason they made a point of showing how many HDX cards it can fit in their presentation.

It also looks like an amazing workstation for video editors. I really don't think it's designed for software engineers who make 500k a year.

256 is not enough for a serious main drive in a DAW. Sample libraries should all be on the fastest drive; there are single instruments that take up 50GB. And consider that most studios are recording in 24 or 32 bits at frequencies higher than 44.1kHz. 1TB is probably enough for a music production system, although I'd personally prefer larger so that I don't have to swap things around all the time.
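To put rough numbers on the recording side, here's a back-of-the-envelope sketch (illustrative figures only, assuming uncompressed PCM and a hypothetical 64-track session):

```python
# Raw (uncompressed) PCM data rate for one mono audio track, per hour.
def track_bytes_per_hour(sample_rate_hz, bit_depth_bits):
    return sample_rate_hz * (bit_depth_bits // 8) * 3600

cd = track_bytes_per_hour(44_100, 16)      # ~0.32 GB per track-hour
studio = track_bytes_per_hour(96_000, 24)  # ~1.04 GB per track-hour

# A 64-track session at 24-bit/96kHz:
session_gb = 64 * studio / 1e9
print(f"{session_gb:.1f} GB per hour of session")  # ~66.4 GB
```

Even before sample libraries, a few large sessions at those rates make a meaningful dent in a 256GB drive.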

I don't know anyone that keeps their samples / data on their main drive. Most have external thunderbolt enclosures.

Maybe you haven’t looked around in a while; the toasters are Thunderbolt-attached now, and they take (en-cartridged) NVMe SSDs. There’s nothing slow or high-latency about that. Copy your assets over to your project disk from your NAS at the start of a new project, and then forget about it.

Alternatively, forget hotswap and use a Thunderbolt DAS with RAID6. Burn your projects from your DAS to a portable SSD when you want to pass them over. Only takes a minute or two.

In addition to computing I also dabble in woodworking, where there are tools in the 'corvette or porsche' classification that everyone drools over, and those guys can spend way more money on tools at a lower salary than I do for 'fancy' Macbooks. Of course, their tools last 3x longer if taken care of, but the outlay can still be breathtaking.

"A good craftsman doesn't blame his tools" isn't a warning against complaining. It's a warning about picking bad tools in the first place and scapegoating them instead of accepting that it was your decision all along.

Who and/or how many engineers are making $500k a year!?

Download and install the app "Blind" to be shamefully informed of how many there are. Seemingly there is either a very large number of them, or there are lots of SWEs who like to lie. A lot of it seems to be stock options from FAANG and Uber.

I've been on the internet. I know people lie. The OKCupid blog found that "There are consistently 4x the number of people making $100K a year than there should be" [1]. They also lie about their height, and what they look like.

What reason should I have for trusting anonymous self-reported data in a category where people are known to exaggerate?

[1]: https://theblog.okcupid.com/the-big-lies-people-tell-in-onli...

Yeah, so... plenty of software engineers are making $500k a year. That is total compensation; you should expect half of that to come from non-salary things like stock options and bonuses.

Maybe some people are lying, but that seems about right to me for actual senior people (leading projects, maybe managing people).

My last year at Google, my W2 income was in the area of $300,000. I was a "level 5" with good performance reviews, and the scale goes up to 9. I sold all my stock the second it was issued ("autosale"), so the W2 income is pretty close to the amount of cash I got.

Programmers focused on the right task are worth their weight in gold. There are very few fields where an hour of time put in can save society as a whole thousands of hours. Software engineering is one of those, and we get to skim off a little bit of that value we created in the form of cash.

Unfortunately, the 'right' tasks for high TC are too often things like Google+

There is also much software that is complete garbage. If the ones with $500k TC are writing decent software, then those making $100k may be writing the garbage.

Depends a lot on where they're working. I know more than one person with 15+ years of experience who was making around 100k/year, then moved to a FAANG and hit > 500k total compensation in just 2-3 years.

For OkCupid there's a clear motivation for exaggerating things like height, looks, and income. What motivation does one have for anonymously posting an exaggerated income? Are there really that many trolls who want to depress those making less than $500k/yr?

People lie in anonymous communities all the time. One motivation would be to impress other (perceived-to-be) successful members so as to be asked for stories to tell, for advice, etc.

I don’t know why people are like that, but anonymity thins out the middle group of semi-/socially-truthful people by providing an opportunity to be much more honest or much less honest than is normally possible.

$500k is achievable total compensation at a few companies for some people, though.

s/SWEs/Anonymous chat participants/

Some Senior/Most Staff SWE and above at FAANG, Airbnb, Uber, Lyft, Dropbox, LinkedIn, etc.

Any FANG company

> This is a computer made for the engineers who’ve already “made it,” who are making that $500k a year and are looking for a Porsche over a Corvette.

Is this what "Pro" means now?

Yes. Move to Vermont or Michigan, get 3 of those new monitors, plus this machine, and bask in the glory as they warm your room in the winter with the heat turned off.

> This is a computer made for the engineers who’ve already “made it,” who are making that $500k a year and are looking for a Porsche over a Corvette.

What about an EV such as a Tesla?

I was tempted to make a joke about the Porsche or Corvette being luxury vehicles, but I've noticed we spend way too much time nitpicking over fine details. It deflects from the thesis and I'm not a fan. I can understand GP's point just fine without getting into quibbles over that.

Instead I took umbrage with the idea that a $3000 laptop which is our primary tool is a luxury item. I think it's one of many signs that we're a bunch of cheapskates. Other industries have different perspectives on this.

While I tend towards being a cheapskate on many physical things I also understand diminishing returns, and to me once you hop to the other side of the price-performance curve I define that as a luxury. For me, I get a lot more value out of a laptop and accompanying software ecosystem that helps me be more productive for my typical development cycle, and a lot of the stuff at the OS and above is pretty darn subjective and context-sensitive to the kind of development cycle.

From an overall productivity standpoint, because the biggest bottleneck to programmer productivity is mental and physical health, the data would imply that I should spend more on exercise equipment and a better chair that keeps me from getting injured than on a laptop that compiles maybe 10% faster when builds already take less than 30 seconds (incremental compilation, anyone?), for the $1k difference between a 15" Macbook Pro and a 13" Macbook Air. No amount of money I dump into hardware or software will make AWS provision its resources any faster either, and that's what I sit and wait on the most for feedback, rather than direct code compiles. And a fat CI/CD server is not run on my laptop unless I'm running Jenkins locally, or Concourse.

Am I really a cheapskate if I can do my job just as well on a $1k machine as on a $6k one? The new Mac Pro is overkill for many development workflows.

From a great craftsman, I expect them to get more out of any tool than a lesser person could. They can make a good tool do things I wouldn't think of, but can also work around the limitations of a lesser tool.

So if you couldn't get anything out of a better tool, I'd start asking uncomfortable questions about you.

Do I think I as a developer could leverage a Mac Pro to speed up my code-build-test cycle? If the whole team had them, then I'd absolutely tune our tools to use the extra cores, monitors, etc. We are better at troubleshooting when the feedback loop is shorter.

But to me the Mac Pro is more of a tool for designers. If a designer is turning in the same work on a 1k machine I would ask about our process first, the designer second, and the tool third.

They run a network of containerized linux machines.

On a serious note: why doesn't Apple do any modern container/virtualization?

For example, why can't you set up containers for different versions of xcode and build your code in the appropriate one? or farm it out?

Then they're waiting for a Taycan.

Most of the people buying this will attach very large external storage, as it is most likely going to be used as a video editing workstation.

Is video editing external storage more likely to be shared 10GbE NAS or direct attached Thunderbolt 3 with software RAID?

NAS in any reasonably sized shop.

Is there a prevalent vendor/stack, e.g. Netapp, QNAP, FreeNAS, Unraid?

We use a system from https://www.editshare.com

Stornext. The poster above mentioned NAS, but I reckon it would more often than not be connected to a SAN.

No prevalent stack AFAIK, but add Synology and WD Nas to the list.

The kind of places I see these going in would not touch either of those products. They are both seen as the most entry-level of storage devices.

are we talking about workstation nas here? we have nothing but praise for synology performance and reliability. software is stable, tiering works flawlessly, expansion is inexpensive, web ui is good, ssh gives you a proper shell - do whatever operations on your nas locally, faster.

Editing workstation NAS, different IOPS use case. I ran a Synology as a test with VMware and even though the unit was certified, it was nightmarishly sluggish. That said, they're perfect for most businesses not in the editing space.

I was talking storage solutions for video editing. They typically use vendors building solutions to cater for that market.

> 580X is old, 2017 old

Is it? This site claims it came out in March 2019 but I definitely can't keep the various graphics card lines separate in my head: https://www.techpowerup.com/gpu-specs/radeon-pro-580x.c3398

Yes, the 580X is old (it's the same hardware as the RX 480 from 2016). In fact, everything in that machine is old: the Xeons they use are rocking the Skylake core from 2015, with all of the recently discovered side-channel vulnerabilities.

The upcoming AMD Threadripper 3 CPUs with PCIe4 would be a much better Pro offering. Clock-for-clock they are beating Skylake now, rumored to have up to 64 cores. Also, PCIe4-based RAID 0 arrays are pushing 15GB/sec transfer speeds IIRC.

It’s more likely the new Mac Pro uses the as-of-yet unannounced (but leaked [1]) Xeon W-3xxx CPUs. The current Xeon W fare doesn’t have the core counts Apple is advertising.

[1] https://www.tomshardware.com/news/intel-xeon-cascade-lake-w-...

Xeon W-3175X is 28 cores (and 250W)..

But yes, I would expect a dual die 64 core monster like the 92xx series they recently announced. Of course the 300W power/thermal solution apple was talking about seems a bit limiting in that case.

Yes, but Apple lists 12, 16, and 24 core Xeon W options, none of which are currently available from Intel but do match those leaked for Cascade Lake Xeon W. So it’s quite doubtful Apple is using Skylake Xeon W since the majority of the specs don’t match.

Don't forget that Apple can see AMD and Intel's future roadmaps.

AMD definitely has a superior offering with Threadripper 3, but we haven't seen what Intel is going to offer in comparison. And switching CPU manufacturers is never trivial.

Intel’s public roadmap has been highly unreliable. They’ve been delayed by years, and that was before the recent vulnerabilities were discovered. I’m not convinced about the value of their roadmaps.

That being said, if you were only allowed to pick one platform to sell to your customers, in a machine which should last years, then Intel would definitely be the safer bet. It’s also the platform most of the software providers are likely gonna optimize first.

I'm not sure why they'd consider Threadripper for the Mac Pro. This machine fits into EPYC's target use cases and could possibly live up to the asking price of the whole package at that point. Given how the rest of the Mac ecosystem is going, I'd only expect the most stubborn of professional communities to pick this up (film editing folks still seem to buy the Apple hype).

I’d consider this a “v1” Apple product. Everyone knows you wait for v2.

Film editing kinda makes sense because there is probably a lot of lock in on the software for people who don't like the Adobe monthly charge BS.

I'd be surprised if the monthly charge is a problem here. We're talking about a machine that costs more than 6 years worth of subscriptions (for the entire suite, business price, ~1/3 less for just premiere pro, business price) for the lowest tier machine (which isn't all that great for the stated use case).

Maybe there is a technical reason, but the story I've gotten from people I know doing this is that they are looked down on if they're PC users. It could be that Apple's software is that much better, but it seems much more likely that they're charging what they know they can based on the general stubbornness folks have around their software and workflow. I partly can't blame them; if your job is in a creative space, the last thing I'd want to do is constantly rework a workflow and deal with the machinery itself rather than the content and output.

Do these roadmaps include running at half speed after a couple of months due to (arguably purposeful) design flaws?

So Threadripper 3 has not been (effectively) cancelled? (Disappeared from all roadmaps.) Any links?

I so want to buy that CPU, but I'm starting to suspect it might never be released.

Great, so it changed recently. Before that, the silence was deafening for a long time. Can't wait for my future Threadripper system.

The 580X is just an RX580 that has been clocked a few percent higher.

But it gets worse, the RX580 is an identical design to the RX480, with just a few silicon tweaks to clock slightly higher.

Clock for clock, the 580X performs identically to the RX 480 that launched in August 2016.

The 580X is technically "new" but it's the same Polaris 20 GPU chip that has been around since the RX480 in 2016 and has seen several rebadges accompanied by slight clockspeed bumps. This particular rebadge is Apple-specific and debuted with the recent iMac refresh.

Ah, okay, that makes sense. Thanks.

That was a sub $300 GPU back in 2016 (the RX480). There is nothing "pro" about it in 2019.

Sure maybe that mobile-workstation card came out this year but the tech behind it is very old (GCN 4.0). In fact all of AMD's current offerings are fairly old now. Navi is the first GPU architecture since 2012 that's not based on GCN.

My understanding is that a Pro 580X is just a rebadging of the Pro 580, which is from mid-to-early 2017, and all the 5xx series is just minor spec bumps on top of the 4xx series.

The Radeon Pro 580x is the same hardware as the Radeon RX 580, except the Pro version is underclocked

>I know being expandable is important here but I am stunned by the fact that an iMac Pro actually comes with better SSD and Video Card for less money and you get a screen to boot!

I'm guessing it's just price discrimination. People who are in the market for a Mac Pro (or rather, their employers) are probably not price sensitive.

The branding distinction between "iMac Pro" and "Mac Pro" is just incredibly confusing.

I think the Macbook and Macbook Air is much worse, but I suppose it's unclear if both are currently active lines.

Agreed, and the solution is stupidly simple -- just swap the names! The whole thing is so un-Apple.

I partly understood it with the "first"-gen New MacBook; it was experimental, and they wanted to give it this kind of minimalist-mystique of flipping the naming convention where this forward-looking device was now simply "The MacBook". But why does it even still exist now that the other ones have adopted most of its features?

At least in that case there are three letters of difference, instead of one :P

> 256g SSD is shameful

If you don’t want it, don’t buy it.

There’s a pretty obvious general pro workflow for a configuration like this, where OS and apps are stored on the main drive and data is stored elsewhere.

Where’s the shame in that?

The entry-level Macbook Pro is not the best bang for the buck. For about as long as I can remember, the best value has been in the neighborhood of $2,950. So the 'good' Macbook gets cheaper due to inflation and not much else.

Right now you can get 32 gig of RAM and the 512 SSD for around that mark. I'd probably throw in the video card upgrade for a personal computer, maybe not for a work one unless you're doing AI.

I'm less concerned about the 256GB SSD. If this is a true Pro set up, then there's going to be very large storage media attached. The internal SSD is just to store the OS and apps. My main concern is the lack of a mention of Nvidia support. I'm very curious of the apparent bad blood between Apple and Nvidia. Were the GPU issues from the 2011 MacBook Pros to blame, or just one of the pieces of straw?

Honestly, I think the SSD size is okay. 512 would have been better, but considering this is fully expandable, it's much cheaper to upgrade the SSD yourself with whatever aftermarket drive you like vs. Apple charging more for it. Also, the people looking to buy such a beast aren't going to be price sensitive, so this lets them price it at $5k.

256 gig is an embarrassment. My phone has 256 gigs. My laptop from 2011 had 256 gigs...

Neat. My new MacBook Pro has 256GB. I'm using ~133GB. That's Xcode and the Xcode beta, the full Office suite, Logic Pro, Firefox, Chrome, Safari, IntelliJ, GoLand, Photoshop, Illustrator, a bunch of other Adobe apps, and on and on. When I'm at the desktop I use an external drive for dumping junk to, and for Time Machine backups.

There is a point of diminishing returns, and given that most people don't have a giant Steam library on their Mac, 256GB is more than sufficient for a large number of users. And as many others have said, when you're working on professional data like video and audio it is almost always on an external array.

You are right. It is sufficient for a large number of users. However, the Mac Pro is a high end workstation used by professionals, targeted for applications like high resolution video editing that require insane amounts of storage.

Relatively speaking, storage is cheap. A high performance, 1 TB NVME drive can be had for under $350. This should be the base configuration on a high end workstation...

What benefit does a 1TB SSD have over 256GB for a video workstation that needs to connect to a 100+ TB array for the real work? If your video editing task can get by with even the 4TB maximum build-to-order option, then you're not in the target market.

I don't buy it. Lots of "professionals" edit hour long 4K video that would easily fit in under a TB. Then they archive that work.

Even if this is not the case, applications alone are getting larger all the time. 256G is ridiculous as a base config.

> targeted for applications like high resolution video editing that require insane amounts of storage

None of which is local. People have RAID boxes for that.

My disk array has 256 bigger units.

256GB SSD and base video card is fine.

I would definitely look at a Mac Pro for software development (every minute counts) and if you're in that group there is simply no need for more storage or decent graphics.

That's silly, to be honest. I am a developer myself and I can tell you a Mac Pro or iMac Pro is a waste of money in every way imaginable.

Look up the barefeats benchmarks. Unless you have very specialised needs (like video editing), there is a visible penalty when you increase the number of cores. For everyday usage, 6-8 cores is what you should be aiming at, preferably with the highest clock speed possible.

The more cores you have, the lower clocks get and the harder it is to sync cache between them. If your software cannot really use that much parallelisation it will be slower, not faster.

For 99.99% of devs, spending cash on an iMac Pro over an iMac with an i9 is a waste of money and probably degraded performance. The Mac Pro is even worse, as you need to buy an external monitor too.

My compile times being shorter by seconds isn't important. Having lots of cores and memory is. There are many types of software to develop and mine is best developed and tested on a bevy of VMs or containers.

That said, I could still avoid the Apple tax with a Precision or something from System76. My employer tends toward Apple on the desktop and Dell in the datacenter, but $dayjob will more likely refresh with a MBP than one of these behemoths.

I build my own whitebox systems at home mostly, and AMD's been good to me on price/performance. I have a few Macs. Wouldn't mind a Talos II from Raptor Engineering or one of these newest Mac Pro machines if I could justify the cost.

There are edge cases for sure, never said there aren't any. If you need to run several VMs, each with a few cores and few gigs of memory, and all of those under heavy load at the same time, then apple "Pro" machine might be just what you need.

What you say is true, but Apple has generally been better at selecting processors from Intel that aren't just re-badged low-clock Xeons. It wouldn't surprise me if this thing is 5GHz turbo on a couple of cores. There was that 28-core 5GHz beast Intel was talking about (definitely not 300W though) and the rumored W-3175X follow-on.

All of the Xeons in Mac Pro say 4.4Ghz turbo from what I saw at apple.com.

So the 2019 iMac i9 @ 5GHz on 4 cores will beat the crap out of them for software development purposes, and from the benchmarks so far it seems there is very little (or no) throttling on the new iMacs.

I disagree. Having more cores is becoming more and more relevant to software engineers these days. The power wall is forcing application scaling to happen through core count, so a lot of the software development in the world is focused on that. Having a beefy multicore machine is becoming more and more relevant to devs.

I agree. Just wanted to point out that Ryzen provides it at a much better price point.

It depends on your needs. With Scala/Hydra I routinely run my 8-core i9 MacBook Pro to 100% for quite a few minutes.

So I could definitely see more cores improving build times.

I also see my 6 core MBP under 100% load at times, and I work with more "standard" stack of ruby/python and swift.

But what I read (on barefeats benchmarks, no less) clearly shows that the Xeon CPU is inferior for those use cases to the fastest Core iX CPU.

To put it in simple terms, for peak loads choose Core iX. For heavy, long term loads choose Xeon.

"100%" CPU utilization means very little with modern Intel CPUs and TurboBoost. Intel CPUs have the headroom to clock much faster than base clocks but often can't sustain due to thermal constraints.

Using a desktop processor with more thermal headroom (with adequate cooling) would presumably be much better bang for your buck than increasing core counts in an already thermally-challenged laptop.

iMac Pro is super silent, and I like it like that. I heard the iMac is a bit noisy ...

Your imagination is probably just not that big ;-)

meh, i'm just starting to fool around with music editing and my 250 GB laptop is out of storage. Also, it's a three year old laptop and that's what it came with. They could have kept the stainless steel and given me a bigger drive.

Who exactly is the intended user for this ultra-expensive, non-portable, box if it's too lame for hobbyists?

This is just apple sticking a vacuum into their customers pockets, applying the same over-pricing policy that is now standard on their phones.

What's wrong with a 256gb ssd? More than enough for plenty of use cases.

It's almost like they assume everyone's going to upgrade the drive. Might as well just ship without any at all.

$5,000 for an iMac Pro (+ AppleCare)? That is bonkers. $6,000 is even crazier. You can get more storage, a better GPU, and more network options for much less. And you won't need to deal with Apple Geniuses to fix the issues.

Those alternatives don’t run macOS.

It's funny this just came out when it did. I was just looking at getting an older Mac Pro 5,1 and upgrading all the components in it like a few friends have done.

For under $2K you can have a machine that competes with newer machines that cost twice that much. If you're a professional creative type, these machines are the real deal:

The Most Powerful Mac Is 6 Years Old and Not Sold By Apple - https://www.vice.com/en_us/article/8xkq8k/mac-pro-upgrade-co...

the Mac Pro 5.1 was designed to accommodate up to 12 cores: “Even though a single core isn’t fast, imagine having 12 of them for video editing and audio—those cores together are faster than my brand-new MacBook,” Mazzarolo said. The new iMac Pro can have up to 18 cores; new MacBook Pros max out at four cores.

The 5.1 can take a whopping 128 GB of RAM, which is equal to what a fully upgraded iMac Pro can take and double what Apple says the trash can Mac Pro maxes out at (it’s worth noting that the RAM used in newer Mac computers is usually faster)

The 5.1 can be modified to use modern SSDs, which Mazzarolo said are in some cases faster than the ones used in the new iMac Pro

The 5.1 can use almost any brand-new graphics card from most manufacturers, which is the main reason why a fully souped-up, old Mac Pro can outperform new computers. “With some rendering engines, the AMD cards that Apple uses [in new Mac Pros] don’t even work,” he said. “In general, even mid-level graphics cards we put in are as fast as those in the iMac Pro. We can put in better cards and we can put in two of those.”

On the Facebook group, Mazzarolo posted benchmarks of one of his custom-built rigs playing 5K, 6K and 8K RED RAW video clips against current-model Apple computers. A new, 15-inch MacBook pro and a recent “trash can” Mac Pro weren’t capable of playing the video at more than 8 frames-per-second. His custom-built model was able to get 24 fps in each case.

The big issue with older Mac Pros: for how long will they be able to install the latest versions of macOS?

There are ways around the official support drop as well; I run macOS Mojave fine on a Mac Pro 3,1. These original cheese grater Macs are truly still the best desktop tower they'll ever make at this point. It'd be fun to trick one out with a 3rd-gen Threadripper, which is probably faster than the rumored 28-core, $35,000 Mac Pro.

Here's the patcher: http://dosdude1.com/mojave/

There are caveats, but I've found it to work well. Of course it's totally unsupported so here be dragons, etc.

I get where you're coming from but the whole point of getting a Mac is "it just works". I just don't have the time to deal with a Hackintosh. Also from past experience, there's a reason why they sunset older machines. At some point the OS updates are just no longer compatible with the old hardware. The Mojave patch notes that you linked further confirms this.

Another issue is that Windows 10 is now at the point of working decently and it has Ubuntu baked in. Apple also has iCloud sync software for it. I will miss Mac Apps, but given economics, most people just crank out electron apps anyways.

I give up on Apple computers. IMO there's no viable alternative for iPhone or iPad though, so I'll still be with Apple on some level.

I'm not sure you can say Macs "just work" anymore, and obviously this is the realm of people who like to tinker or make things work for the sake of the process. If only there was a word for it... Oh wait, I think it's up there next to that Y :)

The old hardware is pretty well supported in the Mac Pro because of how modular it remains. The majority of the incompatibility comes from not having a GPU that works with Metal. Those patch notes are mostly for devices with soldered components, like the GPU.

They don't just work any more; I have an older cheese grater Mac Pro, and new video cards for it are a dismal lottery. You could say this is why a modular Mac is a bad idea. My Apple Mac SE still just works; it runs Word 5, which is all it ever did.

macOS 10.15 drops support for the MacPro5,1 (and for no other Macs).

dang. that is a bummer. i picked up a MacPro5,1 from a govdeals auction and it is a pretty nice computer. it even runs dark mode in macOS Mojave. guess its days are numbered =(

i do have to say though, the single core performance of my MacPro5,1 is pretty abysmal. even with similar clock speeds, my 2015 mac mini is 50% faster in single core performance.

Which cpu model do you have in the MacPro5,1?

Intel(R) Xeon(R) CPU E5620 @ 2.40GHz quad core, dual cpu. so 8 cores total. multi-core performance is great!

we use it as a Unity build box and it functions great since we're normally doing multiple builds at a time, and image compression & shader compilation are multi-threaded. but it's definitely slower if you are just doing 1 build at a time.

sucks that in 2 years when Apple starts requiring the latest Xcode for iOS build submissions we will have to retire this guy =(

As an upgrade thought, that box should be able to handle dual X5675's, which are fairly cheap on Ebay.

There's a very noticeable performance improvement from (eg) the E5645's, let alone the E5620's.

Saying that as I used to use a self build box (Supermicro motherboard) with dual E5645's as my desktop, but it always "felt" a bit slow. :/

Had the opportunity to upgrade to X5675's, and now it "feels" fine interactively due to the higher base and boost clock.

The highest model, X5690's, from all reports are literally no faster in use than the X5675's (maybe thermal throttling?). So not worth the extra spend. ;)

These alternatives can run nvidia gpu :)

You can get iCloud on Windows now https://support.apple.com/en-us/HT204301

It's not a great solution, but I'm not paying over $6000 for a Mac Pro.

Pro machines for A/V production are routinely $7-10k and have been for as long as I can recall, right back to the 90s. You’re probably just not in the market. These are for commercial buyers who make lots more money from these machines than they spend.

Also, six months ago everyone was bellyaching about how Apple has no serious pro devices and was abandoning the desktop. Now it’s too expensive, and my eyes can’t roll back far enough.

Arguably, the whole idea of a modular machine is that the base price can be lower, and then you can spend the money on the things that are valuable to you.

I've owned the last two Mac Pros. I won't be able to afford this one. But fair enough: in this case it may be that they can't raise the ceiling without raising the floor.

I am all set to buy one second-hand in five years' time. But only if the design looks less like a tea trolley in real life than it does on the video.

Can't both of those things be true?

Those machines tend to have way better specs.

Apple just isn’t listening.

I guess you're not very familiar with the Mac Pro line. The ones Apple let die in 2012. You know, the ones with starting prices of $2000-$3000 adjusted for inflation.

Apple leadership also deems developers "pros", at least publicly. I don't know what they were thinking.

Makes me wonder. If you're a software engineer at Apple, are you given a $6k Mac Pro and $5k display to program with?

out of the box maybe

Yep. Ubuntu lost by several orders of magnitude. They lost me with Unity, which seemed like something overly fancy that wasn't required. Had they gone for stability instead of UI, they might have kept the users and developed a more suitable alternative to macOS.

It really feels like a lost battle. I'd pay $200 a year for an OS, especially an open-source one that doesn't send my data to Amazon.

Ubuntu does not send your data to Amazon, and has not for years now; the search plugin was pulled almost immediately.

If you don't like Ubuntu's GUI, it's baked into Windows 10 now. It's not ideal since you're missing out on the remaining good Mac apps (the ones that aren't Electron-based), but it's a viable alternative now.

Apple is only thinking of video editing for this box.

Well of course, they've done the marketing work and know who has the money to buy it. People complaining about GPUs in this thread are missing the point.

The LG 5K by itself is $1300.

4k gaming displays are $800, plus there are some bonuses with gaming displays: high refresh rates and FreeSync.

None of those things really matter to the target audience though. FreeSync doesn't have much use when you're working with static images or videos with a fixed frame rate. When you're editing video, you want your refresh rate to equal the frame rate of your video (or a multiple of it) to avoid judder and interpolated frames. A refresh rate higher than your frame rate, but not an even multiple of it, would be worse than running at the same rate as the video. Running it at an even multiple of the frame rate may provide some benefit, but it is marginal.
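The judder point is easy to see with a little arithmetic. This sketch (a hypothetical helper, just counting display refreshes per video frame) maps 24 fps video onto a 60 Hz display versus a 72 Hz one:

```python
# Each source frame stays on screen for however many display refreshes
# elapse before the next frame is due. On 60 Hz the pattern for 24 fps is
# uneven (classic 3:2 pulldown); on 72 Hz, an even multiple of 24, every
# frame is shown for exactly 3 refreshes.

def refreshes_per_frame(fps, refresh_hz, frames=6):
    """How many refreshes each of the first `frames` video frames gets."""
    counts = []
    for i in range(frames):
        start = -(-i * refresh_hz // fps)        # ceil division
        end = -(-(i + 1) * refresh_hz // fps)
        counts.append(end - start)
    return counts

print(refreshes_per_frame(24, 60))  # [3, 2, 3, 2, 3, 2] -> judder
print(refreshes_per_frame(24, 72))  # [3, 3, 3, 3, 3, 3] -> smooth
```

The alternating 3-refresh / 2-refresh pattern on 60 Hz is exactly the judder the comment describes; an even multiple makes every frame duration identical.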

The pros are mainly concerned with color accuracy and gamut, neither of which gaming displays are known for.

These aren’t for gamers....

Yeah the pricing for the base model is insane. You can get a more powerful PC with 4x the storage for half the price.

And you're right about Nvidia. I also stream on Twitch, was hoping I might be able to get one of these to replace my current PC. Without Nvidia there's no way.

If you spec something with similar workstation grade hardware, you'll be in the same ballpark. Gaming PC hardware isn't comparable.

Prices from newegg

    SuperMicro 5039A-i        730
    Xeon W-2145              1175
    4x8GB ECC RAM $74 x 4 =   296
    AMD Radeon RX 480         415
    SAMSUNG 970 PRO 512GB     160
    Total:                   2776
So Apple charges more than 2x. And if one used a Ryzen and a consumer GPU, the price would be massively reduced.

The CPUs they're using are Cascade Lake, not Skylake. So I'd add at least about $500 there. You'll also need to add ~$300 for the giant 1400 watt PSU, and at least $500 for the custom cooling solution and case. Most PC cases are really not built that well.

Where did you get that $500 increase? Intel doesn't increase prices between generations, at least not that much. If anything, they could even reduce the price to have a chance to compete with new Threadrippers.

The $500 came from the new chips, plus the motherboard Apple is using is much higher spec than the SuperMicro.

It's got 8 full length PCIe slots, and two of those are the MPX slots so it's effectively 10 if you use MPX cards. It has dual 10gb NICs, and two TB3 ports.

Have a look here https://zworkstations.com/products/z840/

I tried coming up with something similar to the base spec MacPro but got larger storage, more cores and an Nvidia card with a decent saving.

It is $3000, but

>Quad-Core 2.6GHz Intel Xeon Broadwell E5-2623 v4 [4-cores / 8-threads]

We are well into the Skylake era; a Broadwell is ancient, comparatively speaking. The Mac Pro will be Cascade Lake, not state of the art like 7nm EPYC, but still a very large improvement.

>HP Z Turbo Drive

Even if we choose a 256GB version of the Z Turbo instead of the SATA drive as you configured, the Turbo Drive will still be a lot slower.

>HP Thunderbolt 2 (20GB/s; one port) +$200

Yes, Thunderbolt ports are expensive. And this is only TB2, not even TB3, and it is still missing 3 ports.

>Intel X540-T2 | Dual-Port | RJ45 | PCIe 2.0 | 10GbE Network Adapter

Even this 2013 Dual Port 10Gbps Ethernet is a $250 upgrade.

>Wireless Mouse and Keyboard, and WiFi

Added those.

So a config that isn't even equivalent, with a very old Intel CPU, costs $3475. I would imagine that if you spec it exactly the same as the Mac Pro, with TB3 and all the same hardware and ports, the HP would cost $4K+, with an ugly design. Quite a bit of saving, but not as dramatic as some would imagine.

Plus, those machines are frigging awesome.

I assume this uses ECC DDR4, which would at a minimum explain some of the price tag. That will set you back $500+ retail for 32GB alone. I still don't understand where the rest is going with an 8-core CPU and an RX 580X though.

It's an 8 core Xeon, plus a custom motherboard with dual 10gb, I would think that alone is probably about $2000.

I guess an 8-core Xeon W costs a cool $1K by itself, and 10Gb networking isn't tremendously expensive these days, so it probably contributes $300 to the cost at MSRP. Still, $6K is too much; $4-5K would be less outrageous and still turn a healthy markup.

Yeah, I was expecting it to be either significantly more powerful than the iMac Pro or to have a lower cost for the base model. Neither of those things happened. I was willing to go out of my way to get hardware that's not quite ideal for my use case and still pay extra for it just to have macOS, but $6k is just too much. It might be expandable but each generation of Intel CPUs is incompatible with the last generations motherboards. If you care about single-core performance then it just doesn't make sense.

Maybe the case makes up most of the price?

How many people actually need an 8 core Xeon vs an i9? Almost none. And that's without taking into account how much extra you're paying just for the hardware you're getting on the base model. I was willing to compromise and pay for workstation hardware just to have macOS, but not at a 2x markup.

256g of SSD would be appropriate under the conditions that a) this is a very basic and affordable configuration, so less than 3k and b) the machine would be user-expandable with NVMe drives or similar. Then you would get the minimal SSD just good enough to install the OS and put in third party storage to save money. And of course, you would want to be able to put in more than 4T of storage.

256G of SSD is also fine if you store all of your data on an enterprise file array. Then the 256G is only used for the OS, applications, and caching.

Why do you think that it's not user expandable? Surely you can just put a PCIe SSD card in the new Mac Pro and use that for storage?
