A 256GB SSD is shameful; I don't care what they were thinking. No Nvidia support was mentioned, so I know a few people who wrote it off for that oversight alone. They will probably hold out just to be sure, but that was the hope for them and others.
While I was never in the market for a Mac Pro (though the iMac Pro is not out of the question as a jump over the top-tier iMac), I never expected such low starting specs for SSD and video at this price point. The 580X is old, 2017 old. If this is what they are stuck with by not using Nvidia, they have done everyone a disservice.
* Did they spec this out in 2017 when they announced it and just lock it in?
* edit: Wrong on price; the iMac Pro is $1,000 less than the base Mac Pro.
I mean for one thing I carry a backup of that data with me anyway. It's on my laptop and my phone.
They generally have RAID boxes with either Ethernet or thunderbolt connection.
Most video editing/rendering is done on big storage arrays anyway. So 256GB storage is more than enough for most serious multimedia professionals.
Besides, perhaps you're the user that needs the multi-TB version. Others, e.g. video professionals, won't keep huge tens-of-TB source files on their main system drive.
If you add a high-performance spinning disk, its performance won't be enough, and it will be noisy even when idle.
In the end, it’s better to not add anything and let professionals bring their own storage into the mix.
At the end of the day this won’t be sold to new beginners with no files. The buyers will have their resilient storage and the internal drive won’t be used much anyway.
Unless it's configured as a developer workstation, but that's generally the MacBook Pro's role.
This is an extremely common setup for video production, so it’s not out of the target market at all.
But I do agree that SAN storage is the norm here.
It's done this way so people can connect to the storage that's appropriate for the task at hand.
Different projects can be stored on different SAN/NAS arrays, each spec'd out according to the size/needs of the project.
eg a short-run animation doesn't need the same storage capacity as a full-length feature film. They may have similar throughput needs though. (Summarising here, but the general concept is ok)
So, lets say someone is a Flame editor (Flame generally has high bandwidth needs). They're working on project A for today. So they connect to the storage being used for that project from their workstation. The next day, they might be working on a different project, so will be connecting to a different set of storage.
Other people using different software will connect to the same storage for their tasks, but can have different latency/throughput needs.
Obviously, this isn't the approach taken by single person multimedia er... "houses". ;)
High end networking gear already has higher throughput:
A SAN/NAS using the same PCIe 4.0 SSDs you mention could probably fill the pipes too.
... and it would probably need a bunch of network stack tuning. ;)
> High end networking gear already has higher throughput:
100GB/s > 200Gbps
You would need 4x 200Gbps ports to reach 100GB/s, so 2x MCX653105A-ECAT (each 2x 16 lanes) at >$700 each, and pay for 1/10th of a ~$30,000 switch. IOW, 100GB/s would cost you ~$4,400, before paying for the storage.
Sure, it could be done, but it wouldn't be cheap, and you'll have used most of the PCIe lanes.
EPYC servers (128 PCIe lanes) would probably be the go too, not Xeons.
This is just Imagineering though. ;)
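For anyone following along, a quick sketch of that back-of-envelope arithmetic, using the figures quoted above (assumed prices, not vendor quotes):

```python
# Rough cost of ~100 GB/s of network bandwidth, using the
# assumed figures from the comment above (not vendor quotes).
target_GBps = 100                            # desired throughput, gigabytes/s
port_Gbps = 200                              # per-port line rate, gigabits/s
ports_needed = target_GBps * 8 / port_Gbps   # 800 Gb/s / 200 Gb/s = 4 ports

cards = 2                                    # 2x MCX653105A-ECAT
card_price = 700                             # ~$700 each
switch_share = 30_000 / 10                   # ~1/10th of a ~$30,000 switch
total_cost = cards * card_price + switch_share

print(ports_needed)   # 4.0
print(total_cost)     # 4400.0
```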
With the specifics, wouldn't it be 4 cards needed? Each card has 2x 100Gb/s ports, so 8 ports in total.
That being said, after re-reading the comment they're talking about adding multiple PCIe cards to a box to achieve 100GB/s of local total throughput.
That would be achievable over a network by adding multiple PCIe 200Gb/s network cards too. :)
As a sibling comment mentions though... the cost difference would be substantial. :(
It's good to see that local storage has finally returned to the reasonable state of being faster than network storage. SATA / SAS was a long, slow period ...
So, even with protocol overhead from all the stack layers chewing up maybe an order of magnitude, that'd still leave 10GBps.
So .. I guess it's still possible, if impractical, to outperform a good PCIe SSD with the latest network interface.
More 0's needed? :)
Maybe a slight typo there? Need to add a few zeros? :)
Mind you, you'll pay $15K for it, but if you're in that business you can well afford it even if you're not a top tier Hollywood production shop.
Given that your storage array costs that much, the fully loaded Mac Pro price (somewhere in the $20K range?) is not that outrageous. The people who use Red cameras and G-Tech storage arrays are the Mac Pro demographic Apple is going for here.
Disclaimer: I used to work with G-Tech but no longer there.
They are popular though. I see them a lot, but I have had very little success with them over the years.
That's for storage physically located nearby though. eg same building, no more than 1 or two network hops (roughly).
Higher end networking from the last decade or so has become really fast. :)
Looking quickly at the US Ebay just now, here are some older single port ConnectX-2 cards:
US $23.00 each, free shipping. Note - I don't know the seller at all, this is just from a quick look on Ebay.
There's a tonne of stuff on there. ConnectX-2 is no longer supported by Mellanox, though the cards themselves are generally pretty bullet proof.
Drivers for Linux and FreeBSD come with the OS. :)
Drivers for Windows are a bit more complicated. Mellanox has an archive section with the ConnectX-2 drivers, which work for most people.
Cabling directly between two cards - instead of going via a switch - is pretty common for people just trying out the tech. It lets them plug one card into (say) a FreeNAS server, with the other card in their desktop or workstation. Removes the ~120MB/s limit of 1GbE, assuming any kind of reasonable disks in the connected NAS. :)
If this kind of thing is of interest, probably the best place on the whole internet (not joking) for this stuff is Serve The Home's networking forum:
Plenty of advice and stuff to read there, and a good place to ask questions from people that have a clue. :)
I can have a 100TB volume for under $10,000 that multiple editors can read/write to over 10GbE. We probably average about 400-600MB/s to a client on a simple setup in real-world situations, which is fast enough for a few editors to work on multiple streams, depending on the server. It's also easier to backup/archive from an IT standpoint.
Most people working solo don't need to access hundreds of TBs though. They're fine with local or thunderbolt attached storage devices.
I really can't imagine this world where "professionals" keep everything on a single drive and are primarily concerned with capacity of that single drive.
My gaming computer has three SSDs, including a 120GB for the OS, and a pair of magnetic drives.
I totally understand gamers need more space, people who collect movies and videos need space, graphic designers need more space. Maybe even Android OS developers need it due to the Android Open Source Project, AOSP, having like 100 repos you need to clone all at once. I don't need it as an app developer, though.
Apple has a history of offering small base storage/memory options, and then charging more for the upgrade versions than they physically cost to produce. It's how they make different price level products when the products are all fundamentally the same. This lets them cater to poorer customers than they could otherwise since the rich ones buy the overpriced upgrades and subsidize the poorer customers' hardware. So it isn't all bad.
- 256GB SSD: 1400 MB/s write, 2200 MB/s read
- 512GB SSD: 1846 MB/s write, 2500 MB/s read
- 1TB SSD: 2500 MB/s write, 2600 MB/s read
- 2TB SSD: 2600 MB/s write, 2700 MB/s read
But of course this could be very different with the new iMac which might have a very fast write SSD in the 256 GB model.
Saying no one would buy it is an exaggeration.
I refuse to believe that meticulously emptying out your trash and downloads folder so that you can continue to squeeze by with 256GB is now acceptable for ultra high end, "I don't know a single person who can afford this" computing.
If you're still relying on local storage in 2019 for development, analytics, video/image work, etc. you are simply doing it wrong
Though it is a bit slim if you want to store lots of multimedia or host multiple VMs.
Unless these cases are manufactured in remote monasteries, the costs don't add up; price point clearly wasn't a concern.
EDIT: I see now there are only two internal drive bays, so that would mean tossing the 256GB that comes with it if you wanted to max out the internal storage.
Or using some of the several PCIe slots to add lots more SSDs. It looks like the default configuration has enough spare PCIe slots and lanes to accommodate 8 M.2 SSDs in suitable risers. That can get you another 16TB for ~$1.8k (consumer-grade SSDs).
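The arithmetic behind that riser idea, with the per-drive price as an assumption:

```python
# Cost sketch: filling spare PCIe slots with M.2 risers, assuming
# ~$225 per 2TB consumer-grade SSD (an assumed street price).
ssds = 8
tb_per_ssd = 2
price_per_ssd = 225

total_tb = ssds * tb_per_ssd
total_cost = ssds * price_per_ssd
print(total_tb)     # 16 TB
print(total_cost)   # $1800
```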
The encoding/decoding capacity of Ethernet-over-twisted-pair seems to be reaching a point of sharply diminishing returns and the hardware for 10GBASE-T is really expensive for what's been, in my experience, a less reliable experience than SFP+. I'd rather tell somebody who needs 10GBASE-T to use an RJ-45 adapter than to have to go grab yet another Mellanox card to stick in a computer this expensive.
Is that the wrong way around? I'm struggling to parse otherwise.
1 x 10 GbE for your regular Ethernet/IP traffic and 1 x 10 GbE for your storage traffic. You do run your network storage over a physically separate network, right?
WiFi is not really an option, for performance but also often for security reasons.
So yes, you'll have two cables from wall-jacks to the machine for networking.
Of course. WiFi isn't going to work in shared user environments that chew bandwidth for breakfast.
Not having to worry about the security issues of WiFi too can be helpful (depending on your security requirements).
Link aggregation in general only provides a performance benefit with multiple connections to multiple machines, so using link aggregation between one client (Mac) and one NAS will likely result in zero performance improvement (the packets will only use one of the cables). It only makes sense if you have two or more NASes that you want to access simultaneously (or two or more clients accessing the same NAS, but that wouldn't be a use case for 2x 10GbE ports on a client).
Synology also supports balance-slb bonding, which in theory gets around this single-connection restriction. However, I ran into some connection problems with some Windows clients. Never got to the bottom of them, but they went away when I disabled the bonding.
In any case, it is hard to saturate a 10GbE connection with a single NAS unless it is packed with SSDs, which I wouldn't assume for mass storage. So I am not sure there is much value in aggregating the links in the first place.
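The single-connection limit comes from how the bond picks a member link: a hash of the frame's addresses selects one port, so every packet of a given client-to-NAS conversation rides the same cable. A toy sketch of a layer-2 transmit hash (hypothetical MAC addresses; real bonding drivers use similar but more elaborate policies):

```python
# Toy model of an LACP-style layer-2 transmit hash: the bond maps
# (src MAC, dst MAC) to one member link, so a single client<->NAS
# conversation can never exceed one link's bandwidth.
def xmit_hash_l2(src_mac: str, dst_mac: str, n_links: int) -> int:
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % n_links  # same pair -> same link, always

client = "aa:bb:cc:00:00:01"  # hypothetical addresses
nas = "aa:bb:cc:00:00:02"
# Every call for this pair returns the same link index:
print(xmit_hash_l2(client, nas, 2))  # 1
```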
I have multiple LACP bonds on my Juniper EX2200 at home working without issue, though the single stream limits you mentioned are the one thing LACP can’t fix.
"enterprise" DLink switches aren't really a thing yet, regardless of what their marketing team wants to brand them as. :(
Cisco, HPE, etc have "enterprise" switches. DLink might be in a decade.
> it is hard to saturate a 10gbe connection with a single NAS, unless it is packed with SSDs
No, it's just a matter of having enough spindles behind it.
As a rough guide, with (say) an average spinning-rust HDD able to push out 100MB/s when reading, you'd only need 10 such drives to push out 1000MB/s (raw).
In the real world, you need extra spindles as some of the data being pushed out is just internal checksum/redundancy, and doesn't go over the network.
But for reading back large files in mostly sequential access, you'll hit 1GB/s from about 10 drives onwards pretty easily. More drives, more throughput.
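A sanity check of that spindle math, taking ~100 MB/s per drive as the assumed average and a made-up 25% checksum/redundancy overhead:

```python
import math

# How many spinning drives to saturate a network link, assuming
# ~100 MB/s sequential read per drive plus some checksum/redundancy
# overhead that never crosses the wire (both figures are assumptions).
def drives_to_fill(link_gbps: float, per_drive_mbps: float = 100.0,
                   overhead: float = 1.25) -> int:
    link_mbps = link_gbps * 1000 / 8   # 10 Gb/s -> 1250 MB/s
    return math.ceil(link_mbps * overhead / per_drive_mbps)

print(drives_to_fill(10))                 # 16 drives for 10GbE
print(drives_to_fill(10, overhead=1.0))   # 13 drives, raw (no overhead)
```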
Went with FreeNAS instead, as I was already very familiar with building systems, it's based on FreeBSD (OSS), and it gives better tuning on the higher end.
Doesn't have as pretty a GUI though. ;)
From maintaining large image-storage FC SANs with Apple Xserve and IBM DS arrays for some years: this is no fun to administer for peak performance, and I would assume most smaller shops have NAS rather than SANs (no arguing that larger shops have SANs, as we did driving ten photo studios).
Even the old 5400 RPM HDDs cost an arm and a leg if you bought it pre-installed into your MacBook. This was back in the days where you could unscrew the lid on your MacBook Pro with a standard screwdriver and replace the internal memory.
It's a neat psychology trick. Customers are far more receptive to upsells when they're the one doing the upselling.
Still, I don't see this as a developer machine — I think an iMac Pro is a much better value proposition for developers, and in my case, where I really mostly care about single-threaded performance for interactive development in Clojure and ClojureScript, I'm looking at the iMac.
Oh and it has an RTX2080 which beats an RX580 so badly it’s practically attempted murder.
I get the Apple tax, I really do (typing this on a 5th-gen mini), but the base model is hilariously expensive in a world where I can buy a 12-core for $499 that beats a $1,200 Intel CPU that's obsolete before it's released (and that 12-core will drop straight in).
They'll sell, Apple stuff always does, but unless you need macOS for some reason, I don't see how.
The new Mac Pro has a ton more PCIe lanes and DDR4 channels. It's a completely different league, it can't be directly compared to mainstream platforms that only have dual-channel and 16-24 lanes.
EPYC would've been a better value, but it's still not cheap. Big computers with lots of memory and I/O capacity will always look ridiculously expensive next to mainstream desktop.
But... 64 lanes of PCIe 3.0. 32 cores. 64 threads with SMT. 80MB of cache (think of all the locality!). 256GB of RAM support (if you're using more, you're probably doing scientific compute and you're probably better off on Linux anyway, I'd assume). ECC support. 1700 USD. Quad channel RAM.
Xeon W-3175X? 48 lanes of PCIe 3.0. 28 cores. Probably no more Hyperthreading after Zombieload. 512GB RAM support. ECC support. $3000 USD. Hexa channel RAM.
And better yet? Competent TR motherboards with all the RAM/GPU/whatever support you need go as low as 400 CAD.
Also, if you're less Mr. Moneybags, the 2920X exists. 12C/24T, same memory and IO capacity. 650USD.
But wait, there's more! TR 1900X is older, but: 8C/16T, 64 PCIe 3.0 lanes, quad channel RAM. Same kind of memory support: 256GB. 300USD.
A far shot from "ridiculously expensive" considering 300 USD (or even 650 USD) is less than some mainstream desktop CPUs. The 9900K is 490USD. The 9980XE is 2000USD.
Also, TR3 with PCIe 4.0 is on the horizon, and Zen 2 with PCIe 4.0 is here. 24 lanes of PCIe 4.0 has equivalent bandwidth to 48 lanes of PCIe 3.0: same as the Xeon W-3175X.
Big computers with lots of memory and IO capacity can be decently cost effective. You just can't ask Intel.
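The lane math behind that equivalence, using approximate per-lane rates after 128b/130b encoding:

```python
# Approximate per-lane PCIe throughput after 128b/130b encoding.
gen3_gbps = 8.0 * 128 / 130    # ~7.88 Gb/s per PCIe 3.0 lane
gen4_gbps = 16.0 * 128 / 130   # ~15.75 Gb/s per PCIe 4.0 lane

# 24 lanes of gen4 carry the same bandwidth as 48 lanes of gen3:
print(24 * gen4_gbps)  # ~378 Gb/s
print(48 * gen3_gbps)  # ~378 Gb/s
```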
Supports ECC as well.
I do have a Linux box with a fast Intel CPU and it's nice and fast, but a Mac it ain't. I'm happy to pay more for a Mac machine, if only for the fact that I get an OS with working copy/paste in all applications. I won't be buying the Mac Pro anytime soon, though, I'm not willing to pay that much. But there are people who are and I'm glad there is something being offered there.
That's such an oddly specific reason to require MacOS. Can you say more? I can't say I've ever had a problem with copy/paste in Linux, in any application.
If this issue were serious enough to me to consider doubling my computer's price tag, I'd at least first look at Hackintoshing, which tends to be pretty solid these days when you can pick out specific hardware in advance.
The platform supports ECC, as all Ryzens do, though it's on the board vendors to support it officially.
Cache size, no idea, but I will note that the new 3900X would obliterate the Skylake Xeon in the Mac Pro base config (12 cores vs 8, and comparable or better IPC). It's an 8-core older architecture against AMD's new best consumer processor, and it's $499 vs Intel's nearest equivalent at $1,100. It's not even remotely close at the moment.
I'm not even a bit surprised by these outlandish claims of ECC RAM latency affecting MIDI timing.
I am betting that at half the price of the Mac Pro, it doesn't include support for ECC, and hence the memory isn't ECC; the CPU isn't server grade (even on the AMD side you would need at least EPYC); the motherboard isn't server grade, with fewer PCIe slots; and there's not a decent case with a decent power supply. Yes, Ryzen supports ECC, but it is not validated as such, and board vendors have to do their own testing as well. And testing is expensive (hence why server-grade CPUs are expensive).
Honestly, I love AMD and loathe Intel. But this kind of comparison all over the internet is like saying I could get a 500hp Nissan GTR over a 500hp Ferrari at half the price. Why do we never see that argument on car forums, and only on tech forums?
My workflow is mostly multicore so to me the iMac Pro is clearly superior.
This is a computer made for the engineers who’ve already “made it,” who are making that $500k a year and are looking for a Porsche over a Corvette.
I don't even want a 1TB SSD in it, the 256 is perfect to hold the OS, a few DAWs, and all the plugins I could ever want. Everything else gets saved to drives in a toaster anyway. A rackmountable unit with a ton of PCI slots for HDX/Dante cards was on my Christmas list, and I'm not alone- there's a reason they made a point of showing how many HDX cards it can fit in their presentation.
It also looks like an amazing workstation for video editors. I really don't think it's designed for software engineers who make 500k a year.
Alternatively, forget hotswap and use a Thunderbolt DAS with RAID6. Burn your projects from your DAS to a portable SSD when you want to pass them over. Only takes a minute or two.
"A good craftsman doesn't blame his tools" isn't a warning against complaining. It's a warning about picking bad tools in the first place and scapegoating them instead of accepting that it was your decision all along.
Sort by comp
What reason should I have for trusting anonymous self-reported data in a category where people are known to exaggerate?
Maybe some people are lying, but that seems about right to me for actual senior people (leading projects, maybe managing people).
My last year at Google, my W2 income was in the area of $300,000. I was a "level 5" with good performance reviews, and the scale goes up to 9. I sold all my stock the second it was issued ("autosale"), so the W2 income is pretty close to the amount of cash I got.
Programmers focused on the right task are worth their weight in gold. There are very few fields where an hour of time put in can save society as a whole thousands of hours. Software engineering is one of those, and we get to skim off a little bit of that value we created in the form of cash.
I don’t know why people are like that, but anonymity thins out the middle group of semi-/socially-truthful people by providing an opportunity to be much more honest or much less honest than is normally possible.
$500k is achievable total compensation at a few companies for some people, though.
Is this what "Pro" means now?
What about an EV such as a Tesla?
Instead I took umbrage with the idea that a $3000 laptop which is our primary tool is a luxury item. I think it's one of many signs that we're a bunch of cheapskates. Other industries have different perspectives on this.
From an overall productivity standpoint, because the biggest bottleneck to programmer productivity is mental and physical health, the data would imply that I should spend more on exercise equipment and a better chair that keeps me from getting injured than on a laptop that compiles maybe 10% faster when compiles are already under 30 seconds (incremental compilation, anyone?), for the $1k difference between a 15" MacBook Pro and a 13" MacBook Air. No amount of money I dump into hardware or software will make AWS provision its resources any faster either, and that's what I sit and wait on the most for feedback, rather than direct code compiles. And a fat CI/CD server is not run on my laptop unless I'm running Jenkins locally, or Concourse.
So if you couldn't get anything out of a better tool, I'd start asking uncomfortable questions about you.
Do I think I as a developer could leverage a Mac Pro to speed up my code-build-test cycle? If the whole team had them, then I'd absolutely tune our tools to use the extra cores, monitors, etc. We are better at troubleshooting when the feedback loop is shorter.
But to me the Mac Pro is more of a tool for designers. If a designer is turning in the same work on a 1k machine I would ask about our process first, the designer second, and the tool third.
On a serious note: why doesn't Apple do any modern container/virtualization?
For example, why can't you set up containers for different versions of xcode and build your code in the appropriate one? or farm it out?
Is it? This site claims it came out in March 2019 but I definitely can't keep the various graphics card lines separate in my head: https://www.techpowerup.com/gpu-specs/radeon-pro-580x.c3398
The upcoming AMD Threadripper 3 CPUs with PCIe4 would be a much better Pro offering. Clock-for-clock they are beating Skylake now, rumored to have up to 64 cores. Also, PCIe4-based RAID 0 arrays are pushing 15GB/sec transfer speeds IIRC.
But yes, I would expect a dual die 64 core monster like the 92xx series they recently announced. Of course the 300W power/thermal solution apple was talking about seems a bit limiting in that case.
AMD definitely has a superior offering with Threadripper 3, but we haven't seen what Intel is going to offer in comparison. And switching CPU manufacturers is never trivial.
That being said, if you were only allowed to pick one platform to sell to your customers, in a machine which should last years, then Intel would definitely be the safer bet. It’s also the platform most of the software providers are likely gonna optimize first.
Maybe there is a technical reason, but the story I've gotten from people I know doing this is that they are looked down on if they're PC users. It could be that Apple's software is that much better, but it seems much more likely that Apple is charging what they know they can, based on the general stubbornness folks have around their software and workflow. I partly can't blame them: if your job is in a creative space, the last thing I'd want to do is constantly rework a workflow and deal with the machinery itself rather than the content and output.
I so want to buy that CPU, but I'm starting to suspect it might never be released.
But it gets worse: the RX580 is an identical design to the RX480, with just a few silicon tweaks to clock slightly higher.
Clock for clock, the 580X performs identically to the RX480 that launched in August 2016.
I'm guessing it's just price discrimination. People who are in the market for a Mac Pro (or rather, their employers) are probably not price sensitive.
If you don’t want it, don’t buy it.
There’s a pretty obvious general pro workflow for a configuration like this, where OS and apps are stored on the main drive and data is stored elsewhere.
Where’s the shame in that?
Right now you can get 32 gig of RAM and the 512 SSD for around that mark. I'd probably throw in the video card upgrade for a personal computer, maybe not for a work one unless you're doing AI.
There is a point of diminishing returns, and given that most people don't have a giant Steam library on their Mac, 256GB is more than sufficient for a large number of users. And as many others have said, when you're working on professional data like video and audio it is almost always on an external array.
Relatively speaking, storage is cheap. A high performance, 1 TB NVME drive can be had for under $350. This should be the base configuration on a high end workstation...
Even if this is not the case, applications alone are getting larger all the time. 256G is ridiculous as a base config.
None of which is local. People have RAID boxes for that.
I would definitely look at a Mac Pro for software development (every minute counts) and if you're in that group there is simply no need for more storage or decent graphics.
Look up Barefeats benchmarks. Unless you have very specialised needs (like video editing), there is a visible penalty as you increase the number of cores. For everyday usage, 6-8 cores is what you should be aiming at, preferably with the highest clock speed possible.
The more cores you have, the lower the clocks get and the harder it is to sync cache between them. If your software cannot really use that much parallelisation, it will be slower, not faster.
For 99.99% of devs, spending cash on an iMac Pro over an iMac with an i9 is a waste of money and probably degraded performance. The Mac Pro is even worse, as you need to buy an external monitor too.
That said, I could still avoid the Apple tax with a Precision or something from System76. My employer tends toward Apple on the desktop and Dell in the datacenter, but $dayjob will more likely refresh with a MBP than one of these behemoths.
I build my own whitebox systems at home mostly, and AMD's been good to me on price/performance. I have a few Macs. Wouldn't mind a Talos II from Raptor Engineering or one of these newest Mac Pro machines if I could justify the cost.
So the 2019 iMac i9 @ 5Ghz on 4 cores will beat the crap out of them for software development purposes, and from the benchmarks so far it seems there is very little (or none at all) throttling on new iMacs.
So I could definitely see more cores improving build times.
But what I read (on Barefeats benchmarks, no less) clearly shows that the Xeon CPU is inferior to the fastest Core iX CPU for those use cases.
To put it in simple terms, for peak loads choose Core iX. For heavy, long term loads choose Xeon.
Using a desktop processor with more thermal headroom (with adequate cooling) would presumably be much better bang for your buck than increasing core counts in an already thermally-challenged laptop.
Your imagination is probably just not that big ;-)
Who exactly is the intended user for this ultra-expensive, non-portable, box if it's too lame for hobbyists?
This is just apple sticking a vacuum into their customers pockets, applying the same over-pricing policy that is now standard on their phones.
For under $2K you can have a machine that competes with newer machines that cost twice that much. If you're a professional creative type, these machines are the real deal:
The Most Powerful Mac Is 6 Years Old and Not Sold By Apple - https://www.vice.com/en_us/article/8xkq8k/mac-pro-upgrade-co...
the Mac Pro 5.1 was designed to accommodate up to 12 cores: “Even though a single core isn’t fast, imagine having 12 of them for video editing and audio—those cores together are faster than my brand-new MacBook,” Mazzarolo said. The new iMac Pro can have up to 18 cores; new MacBook Pros max out at four cores.
The 5.1 can take a whopping 128 GB of RAM, which is equal to what a fully upgraded iMac Pro can take and double what Apple says the trash can Mac Pro maxes out at (it’s worth noting that the RAM used in newer Mac computers is usually faster)
The 5.1 can be modified to use modern SSDs, which Mazzarolo said are in some cases faster than the ones used in the new iMac Pro
The 5.1 can use almost any brand-new graphics card from most manufacturers, which is the main reason why a fully souped-up, old Mac Pro can outperform new computers. “With some rendering engines, the AMD cards that Apple uses [in new Mac Pros] don’t even work,” he said. “In general, even mid-level graphics cards we put in are as fast as those in the iMac Pro. We can put in better cards and we can put in two of those.”
On the Facebook group, Mazzarolo posted benchmarks of one of his custom-built rigs playing 5K, 6K and 8K RED RAW video clips against current-model Apple computers. A new, 15-inch MacBook pro and a recent “trash can” Mac Pro weren’t capable of playing the video at more than 8 frames-per-second. His custom-built model was able to get 24 fps in each case.
Here's the patcher: http://dosdude1.com/mojave/
There are caveats, but I've found it to work well. Of course it's totally unsupported so here be dragons, etc.
Another issue is that Windows 10 is now at the point of working decently and it has Ubuntu baked in. Apple also has iCloud sync software for it. I will miss Mac Apps, but given economics, most people just crank out electron apps anyways.
I give up on Apple computers. IMO there's no viable option for iPhone or iPad though, so I'll still be with Apple on some level.
The old hardware is pretty well supported in the Mac Pro because of how modular it remains. The majority of the incompatibility comes from not having a GPU that works with Metal. Those patch notes are mostly for devices with soldered components, like the GPU.
i do have to say though, the single core performance of my MacPro5,1 is pretty abysmal. even with similar clock speeds, my 2015 mac mini is 50% faster in single core performance.
we use it as a Unity build box and it functions great since we're normally doing multiple builds at a time, and image compression & shader compilation are multi-threaded. but it's definitely slower if you are just doing 1 build at a time.
sucks that in 2 years when Apple starts requiring the latest Xcode for iOS build submissions we will have to retire this guy =(
There's a very noticeable performance improvement from (eg) the E5645's, let alone the E5620's.
Saying that as I used to use a self build box (Supermicro motherboard) with dual E5645's as my desktop, but it always "felt" a bit slow. :/
Had the opportunity to upgrade to X5675's, and now it "feels" fine interactively due to the higher base and boost clock.
The highest model, the X5690, is from all reports literally no faster in use than the X5675 (maybe thermal throttling?). So not worth the extra spend. ;)
It's not a great solution, but I'm not paying over $6000 for a Mac Pro.
Also, six months ago everyone was bellyaching about how Apple has no serious pro devices and was abandoning the desktop. Now it’s too expensive, and my eyes can’t roll back far enough.
I've owned the last two Mac Pros. I won't be able to afford this one. But fair enough: in this case it may be that they can't raise the ceiling without raising the floor.
Apple just isn’t listening.
I guess you're not very familiar with the Mac Pro line. The ones Apple let die in 2012. You know, the ones with starting prices at $2,000-$3,000 adjusted for inflation.
Apple leadership also publicly deems developers "pros" as well. I don't know what they were thinking.
It really feels like a lost battle. I'd pay $200 a year for an OS, especially an open-source one that doesn't send my data to Amazon.
The pros are mainly concerned with color accuracy and gamut, neither of which gaming displays are known for.
And you're right about Nvidia. I also stream on Twitch, was hoping I might be able to get one of these to replace my current PC. Without Nvidia there's no way.
- SuperMicro 5039A-i: $730
- Xeon W-2145: $1,175
- 4x 8GB ECC RAM: $74 x 4 = $296
- AMD Radeon RX 480: $415
- Samsung 970 PRO 512GB: $160
It's got 8 full length PCIe slots, and two of those are the MPX slots so it's effectively 10 if you use MPX cards. It has dual 10gb NICs, and two TB3 ports.
I tried coming up with something similar to the base spec MacPro but got larger storage, more cores and an Nvidia card with a decent saving.
>Quad-Core 2.6GHz Intel Xeon Broadwell E5-2623 v4 [4-cores / 8-threads]
We are well into the Skylake era; a Broadwell is ancient, comparatively speaking. The Mac Pro will be Cascade Lake, not state of the art like 7nm EPYC, but still a very large improvement.
>HP Z Turbo Drive
Even if we choose a 256GB version of the Z Turbo instead of SATA as you configured, the Turbo Drive will still be a lot slower.
>HP Thunderbolt 2 (20GB/s; one port) +$200
Yes, Thunderbolt ports are expensive. And these are only TB2, not even TB3, and it's still missing 3 ports.
>Intel X540-T2 | Dual-Port | RJ45 | PCIe 2.0 | 10GbE Network Adapter
Even this 2013 Dual Port 10Gbps Ethernet is a $250 upgrade.
>Wireless Mouse and Keyboard, and WiFi
Added those.
So a roughly comparable config with a very old Intel CPU costs $3,475. I would imagine if you spec'd it exactly the same as the Mac Pro, with TB3 and every bit of hardware and all the ports, the HP would cost $4K+, with an ugly design. Quite a bit of saving, but not as dramatic as some would imagine.