Power9 to the People (nextplatform.com)
125 points by ajdlinux 8 months ago | 89 comments



The business model for these sorts of things should be "target ubiquity". Think GPUs: huge computational power, low cost per unit of computational power.

The problem is that Power9 is now effectively a substrate machine for NVidia GPUs, more than it is a competitive offering in its own right. In my previous life, building accelerated computing platforms a decade or more ago, a rule of thumb from customers was that they needed to see at least a 5x performance delta to justify serious consideration. And that you couldn't charge 5x the price for that performance delta.

So if Power9 is 5x the speed of a Xeon in a set of relevant test cases (ignoring GPUs), at a comparable system price, sure, it has a good chance of being successful. If not, it is questionable at best.

As others have noted, cheap ... as in really dirt cheap ... development/test boxen for devs to build at home are critical for this. Back in my SGI/Cray days, I advocated for, though failed to convince, management to let us sell older Indys, Indigos, etc. with IRIX and dev bundles for home/app-dev use at a very low price. My argument (back then) was that nothing could touch Irix for ease of use, and that it made sense to seed developers with it without worrying about making money on them, instead having them create the content that people demanded, which would help us sell machines. Management was worried it would decimate our developer revenue stream. Rather shortsighted, I think, in retrospect.

Power9 looks poised to suffer the same fate, though not because of OS/compiler costs, but because the hardware will be unaffordable for pretty much everyone.

This is why you have to target ubiquity. You need those developers. If they can't afford your boxes, or your stack, then you aren't going to get them.


> our developer revenue stream.

This sounds terrible to me. From a POV of a platform vendor, the only "developer revenue stream" I'd care about would be popularity among developers. Give away the basic tools, sell slightly more specialized tools for the price of a pizza, get as many people hooked up as possible. Send your machines to universities at a discount. Sell older machines to hobbyists, and support the community. Help open-source projects run on your hardware and take advantage of its unique features (if any).

Of course, there is a niche to actually sell top-notch enterprise-grade tools for your systems, too, because corporations gladly buy support and stability. But this is usually a tiny trickle compared to the hardware sales, and its very existence is entirely dependent on your system being widely popular and ubiquitous.

As a bottom line, it's much better to get a 5% margin from a $100M market than a 25% margin from a $10M market.


Exactly


I think this is exactly correct. It's why the Raspberry Pi is so brilliant for the ARM ecosystem. It gives devs an easy way to get exposed to ARM and paves the way for higher end ARM servers, laptops, desktops, etc.

Power9 would really have to double down to have any chance of beating x64 and ARM, especially with RISC-V also about to enter the mix with its own advantages (openness). I think their only chance would be to actually sell it at cost or at a loss, at least at the entry level.


"nothing could touch Irix for ease of use"

Love my Fuel. Wish I could still use it as a daily driver (too expensive to run electrically). Management really destroyed you guys, eh?


There is a $400 4-core part that Raptor Engineering is using in the Talos II. The top-bin SO (scale-out) chip is only $5k; go look at Skylake pricing. This is dirt cheap.

Hint, without specifics due to NDA, go talk to Supermicro sales directly about POWER systems.


> So if Power9 is 5x the speed of a Xeon

Is Xeon 5x the speed of Xeon? How do those institutions ever upgrade then?

I think what you're really saying is Power9 needs to be 5x the speed compared to Xeons from 5 years ago or whatever. Because otherwise new Xeons would never replace older Xeons either, following that logic.

Power9 and other Intel alternatives just need to show 1.5x-2x the value compared to Intel's latest.


I read it as the following: "If you offer a machine that has N times the performance of the current industry standard machine, it should cost kN times the current industry standard machine, where k < 1".

That is, there's no incentive to pay the cost of switching if you don't save any money in the longer term. And that "longer term" is usually but a few years.


Yes, this was what I was trying to convey. Customers (when I ran my company) explained it to me in terms of Moore's law. They could wait 5 years for 1 order of magnitude (back then). Or I could deliver that (near) order of magnitude now, but the economics still had to work out. So kN, where the TCO was not tremendously different from existing solutions.

kN might be 1.25x existing costs for 5x performance today. This was an actual example given to me. Customer then indicated that they'd be quite interested in performance at that price.

If k*N was >> 2x, it got problematic. So I had my range to work with.
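
To make that concrete, here is a minimal back-of-the-envelope sketch (hypothetical numbers, not the actual figures from back then) of how customers seemed to run the math:

    # Hypothetical back-of-the-envelope version of the k*N rule of thumb:
    # a candidate machine is N times faster and costs k*N times the incumbent.
    # required_margin stands in for switching costs and risk.
    def cost_per_perf(price, perf):
        return price / perf

    def worth_switching(incumbent_price, speedup_n, k, required_margin=2.0):
        """True if the candidate beats the incumbent's cost/perf by the margin."""
        candidate_price = k * speedup_n * incumbent_price
        return cost_per_perf(incumbent_price, 1.0) >= (
            required_margin * cost_per_perf(candidate_price, speedup_n))

    # The example above: 5x the performance for 1.25x the price (k = 0.25).
    print(worth_switching(20_000, speedup_n=5, k=0.25))  # True: 4x better cost/perf
    # The same speedup at 2.5x the price (k = 0.5) is only 2x better cost/perf,
    # which is roughly where things got problematic.
    print(worth_switching(20_000, speedup_n=5, k=0.5))   # True, but right at the edge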


I think POWER9 is not radically different from a Skylake Xeon, from a performance perspective.

AFAICS, the selling point of this thing is that it has a higher-bandwidth, cache-coherent connection between the GPUs and CPUs.

Of course, some users like national labs, Google etc., might want to invest in this as a strategic investment, to keep Intel from becoming even more of a monopoly than they already are.


Good example with IRIX. I liked it as well, but I liked HP-UX too. So, YMMV.


“If this doesn’t work for IBM, if this doesn’t give Big Blue a chance to really capture a bigger slice of HPC and take some aggressive share in machine learning and accelerated databases, it is hard to imagine what could.”

No it isn't; why did all non-intel processors fade into obscurity, and why does intel architecture with all of its Zilog 80 baggage dominate? Because one can't get a dirt cheap (under $500) POWER9 desktop or server; same goes for MIPS, UltraSPARC, and anything else that's modern and non-intel... why is ARM so popular? Because one can buy it dirt cheap for tinkering with at home. That is what propelled ARM and intel: people would install a Linux ISO on their parents' old PC and it built up familiarity.

Until that happens with POWER (or any other non-ARM, non-intel architecture), history has taught me that it will fail. Hardware has to be easily accessible and dirt cheap, things like software mirroring or compilers must come with the OS, and all the software has to be gratis. Otherwise, history says: fail! Sometimes of epic proportions (hp, sgi, Sun, IBM).

Sun Microsystems, for example, eventually realized that the software has to be gratis and open source, but they lived in a fantasy world where they thought they could charge anything they wanted for hardware, and would indignantly argue when it was pointed out to them that it was far too expensive. That cost them familiarity with the hardware and with the OS, because for most people a second-hand PC-bucket with GNU/Linux was good enough, and the target audience wouldn't have known or cared about all the advanced features anyway. We all know how that ended.


Yes, as an HPC admin, let me foretell how it will go here at our research group.

Me: Oh look this is fascinating! We have a worthy competitor to intel processors and we can do our work way faster. Let's get this!

Boss: Ok, tell me why one of these costs more than all of our other servers put together?

Me: But, it is so much better, it does blah, blah and more blah.

Boss: Do we really need it? What would we save by getting this instead of the commodity servers we normally buy? Maybe we should look into making our code work better with GPUs instead?

Me: ..........

Don't get me wrong, I think this really is fascinating. It's just for the HFT traders and the top 1% of buyers of exotic hardware. Not to an average HPC shop like ours, until economies of scale and the premium catch up to the value.


>> Not to an average HPC shop like ours...

And there lies the problem. HPC used to be a tiny market with huge budgets where expensive hardware was the norm. It's just not that big a deal any more. It's commodity hardware now with lots of cores and GPUs running OSS. They won't make it trying to charge a premium just because it's Power9.


I wouldn't call Cray computers with x86 Xeon Phis commodity hardware though.


Cray CPUs are commodity tho'. The secret sauce is Aries, the interconnect. I ported code to the XC30 in a previous job.


I agree with your points [2][3]. Why is this so hard for vendors to understand?

It is an old idea called "mind share" [1], or in this case technical mind share: a vendor should want the largest possible community of technical people interacting with their technology and attempting to use it in as many niches as possible.

I'm not sure it even has to be uniformly cheap, as long as there is a low-cost entry point to hook people. If the price ratchets up from that low entry point but it is a compelling solution people will want it in their technical lives and get it funded somehow.

[1] https://en.wikipedia.org/wiki/Mind_share
[2] I would point to RISC-V as another example of a newly popular machine architecture due to its initial low cost.
[3] Noting that the preferred technology must meet the minimal acceptable threshold for all the usual dimensions.


As a corollary: see Microsoft, Adobe, etc. turning a blind eye to casual home users pirating their software. IIRC Microsoft even offered a free upgrade from pirated Windows 7 to a legit Windows 10 Home.


The M68K and PowerPC architectures died out even though they were available in consumer systems (Macintosh) and, while not "dirt" cheap, still in the ballpark of affordable. And you could find cheap old ones and run Linux on them too.

The real reason Intel/AMD64 is popular is the same reason Windows is popular -- mindshare and economies of scale. IBM and the x86 platform won the business desktop in the 1980s and that has been propelling everything on that architecture ever since.


Dunno, not sure I buy it. From a software development point of view it's "just" Linux. The full ecosystem is there: clang/gcc, ruby, perl, python, javascript, apache, mysql, nginx, postgres, etc. About the only thing I can think of that might be different is if you are using cycle counters for profiling things, but it's rare to need to get to that level these days.

I think the real problem is everyone likes to target the top-end Intel systems that sell in very low quantities. Sure, HFT folks can make huge $$$ trading just a bit faster. But the bulk of Intel server sales are the low/mid range: things like the E5-2620/E5-2630, which cost $300-$500 per CPU and go into nice, capable servers starting around $2k.

Sure, a $65k POWER system with 4 Voltas sounds great. But few will drop over $50k on a server without having had a few cheaper servers in production for a while. Beating the collective performance of ten $5k servers is also going to be tough. Amusingly, Intel fails that same test too: rarely are the top chips the best price/performance, unless buying more of the cheaper servers would mean buying a new building.

Not to mention that if IBM starts getting traction, you can bet those $5k-per-CPU Intel chips will start getting discounted rather quickly.


There’s a certain amount of laziness that prompts people to get bigger servers. I’ve been at a few companies where the solution to performance issues (as in, the thing that makes us money isn’t keeping up with customers and therefore we can’t grow our business) is to make the server bigger.

Switching to a distributed system is sometimes too far off the radar and costs precious engineering time. So the story is “just buy a bigger and faster computer, we’ll figure out how to shard this later.” The company drops more cash on IBM or whatever, and the engineers focus on features that close sales deals.

And then there are the larger companies which buy some expensive IBM servers and port their software in order to squeeze discounts out of Intel.


It's a difficult balance to get right. A $2k server as the GP describes costs less than a week of a fully loaded engineer. If that $2k buys you another X of time (for any X >> a week) of the application working, then it's not at all obvious that it's the lazy or otherwise wrong step to take.

(Lots of handwaving in the above, main takeaway is that it's the expected and rational path in a world where engineers are expensive and scarce and servers are cheap)


I'm in this position.

Inherited a complete mess of a system (response time measured in minutes) running on a Xeon E3 w/8GB of RAM and 500GB of spinny rust.

It's going to take months, if not years, to unscrew some of the mess, so I suggested to the boss that the quickest way of improving things was to upgrade the server. We ordered a dual-processor Xeon E5 system with 32GB of DDR4 and a 1TB DC-grade SSD.

Once I've ported the system from PHP 5 to 7 on that and moved to a recent MySQL version, we'll get a very nice perf improvement basically for free. Then I can refactor the mess, and at the end we'll have a better-optimised system running on much better hardware.

Hardware is cheap, people's time isn't.


> Hardware is cheap, people's time isn't.

Sadly it's not always that simple.

People's time, salaries, falls into the base expenditure.

Hardware acquisition is a discretionary cost, which is usually squeezed, particularly in a cost centre like IT support. Or the existing hardware is under a multi-year lease from a vendor like Dell or HP and has to be paid for whether it's obsolete or not.

Where I've worked (big non-IT-oriented companies) it's more typical to throw a couple of people onto a project to debug and refactor an application than to buy better hardware. Or even worse, to implement a new project on existing shared hardware.


Yes that can happen if people are already on staff and can be assigned to such an effort. But if the choice is spend $50K on new hardware, vs. hire three new devs to rewrite/optimize the software, the decision is easy.


Yep, there's only me, and the company doesn't want to hire a new dev, so hardware is the cheapest way to speed things up now, and it'll still be there when we're finished.


Salaries are taxed, hardware is deducted and amortized.


But the reason Intel is and has been cheap is the scale that their monopoly on the desktop has created. This position has given them an opportunity to develop fabs for node sizes that are out of reach for any competitor while keeping a competitive price point. At the moment it seems like Moore's law is slowing down, and ARM on mobile has given independent fabs the scale to catch up with Intel. So the situation may actually be a bit different this time.


It isn't about the cost of the CPU. Yes, Intel eventually won on market scale. But the real problem was that you couldn't get a SPARC or PowerPC or whatever on a standard PC motherboard for a reasonable price. From the mid to late '90s, lots of the Linux community would gladly have bought a non-x86 CPU/motherboard combo. The RISC processors of that time still had an edge over Intel. Even later, with the G5 processor, they could have been competitive if there had been a CPU/motherboard combo available.


Exactly this! Linus himself wrote that the only reason Linux exists is because a Sun workstation was too expensive.


I wonder if Moore's law was the real reason why all the RISC vendors failed while Intel succeeded.

Let's say IBM sells server hardware for 60K USD per machine versus Intel's 20K (I made the numbers up) and shows, through extensive benchmarking, that despite the higher initial cost, their machine's performance/dollar is better than Intel's.

Then 2 years later, Moore's law happens. Intel's new 20K offering now outperforms IBM's old machine on every metric. In hindsight, the IBM purchase didn't make sense financially.

With Moore's law slowing down, maybe now is the right time for IBM to make a move.
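
As a rough sketch of that dynamic (using the made-up $60k/$20k prices above plus an assumed 3.5x performance advantage, so the expensive box does win on perf/$ today): if commodity performance doubles every couple of years, the lead erodes within a normal server lifetime.

    import math

    # Hypothetical: a $60k box that is 3.5x as fast as a $20k box wins on
    # perf/$ today (3.5/60000 > 1/20000), but commodity performance roughly
    # doubles every 2 years.
    expensive_perf_advantage = 3.5
    doubling_period_years = 2.0

    years_to_catch_up = doubling_period_years * math.log2(expensive_perf_advantage)
    print(round(years_to_catch_up, 1))  # ~3.6 years, inside a typical server lifetime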


Intel chips are more RISC-like on the “inside” after the instructions are decoded, and in that sense RISC is successful. ARM is RISC and very successful. But most RISC and CISC vendors alike are failures by comparison to Intel and ARM, and the IBM Power processors are just not that performant per watt. You don’t even have to wait 2 years, they’re already not as good.

Part of this may be due to Intel's lead in manufacturing technology, and may have nothing to do with architecture.

(Historically, RISC was much more competitive. Talking about current state of affairs.)


> ARM is RISC and very successful.

Actually, ARM is more CISC-y than MIPS, POWER, SPARC, etc. and recent cores break more complex instructions into uops just like x86.

If you consider uops RISC, then just about every architecture is "RISC-like on the inside". RISC is about a simple and restrictive user-facing ISA, not the internal microarchitecture.

In the long term, CISC makes a lot of sense --- it saves on fetch bandwidth and instruction cache to essentially have the CPU "decompress" complex instructions to execute internally, and the core is many times faster than memory.

IMHO "pure RISC" was an academic exercise, and the only reason why early CISCs were easily beaten was because they were sequential/in-order, and memory bandwidth wasn't a bottleneck at the time. With the growing core speeds, memory becoming a bottleneck, and invention of parallel uop decoding/execution, CISC could do more per clock and with less instructions. You can see this trend here:

https://en.wikipedia.org/wiki/Million_instructions_per_secon...

The best ARM core on that list can do 3.5 DMIPS/MHz, and the best MIPS at 2.3 DMIPS/MHz, while the best x86 core is at >10 DMIPS/MHz.


Have a look at slide 43 here:

https://riscv.org/wp-content/uploads/2016/01/Wed1345-RISCV-W...

This is from a couple of years ago. Even more important (I can't find it right now) is that RISC-V compiles code to fewer bytes than x86, and fewer micro-ops as well. With the smaller/simpler instruction set it also requires less area and power, so it may be just a matter of time now.


I guess the "even more important" thing you're thinking of is https://arxiv.org/abs/1607.02318 ?

So with RISC-V GC + macro fusion of a few common idioms, you get slightly fewer micro-ops than x86-64, armv7, or armv8.

That being said, for high-performance cores such as Skylake, POWER9, or the latest aarch64 server cores the ISA probably doesn't matter that much.


Yes, that's what I meant. But the small number of instructions in RISC-V means less hardware to implement it. If it wins on code size and micro-ops while having far fewer instructions to deal with, it should result in smaller circuitry. So far it has been implemented with less area and power consumption than ARM cores of similar performance, and there doesn't seem to be a reason it can't scale to bigger/faster cores the same way x86 has.


I think the usual argument is that the decoder is a small fraction of the total transistor count in a high-performance core.

For microcontroller class HW, the x86 ISA might be a crippling disadvantage compared to RISC-V. For a high-performance core, which AFAIK is something like 20-30 million transistors, not so much. The bigger the core, the smaller the advantage of the ISA.

But yes, I don't think there is any reason why RISC-V couldn't be used to create a high-performance core competitive with the x86, POWER, or ARM of the day. It's just a hugely expensive affair.


While I agree that RISC vs. CISC is not much of a distinction in 2017, it's worth noting that the Cortex A15 is very far from the state of the art for ARM chips. The Cortex A75 would be about twice as fast, and Apple makes an ARM core that beats Intel's best (per MHz) in certain single-threaded artificial benchmarks.


The big advantage that RISC had back when it came out was that you could fit an entire RISC core on a single piece of silicon, something that isn't really applicable in the modern day.

Right now the big advantage of RISC in the high end is ease of parallel decode thanks to instructions always starting on 32 bit boundaries. Which is maybe a 5% power advantage at most. 64-bit x86 has lots of historical baggage and the people doing 64-bit ARM were clever so they're actually about equal in instruction density. It was an advantage to have fewer instructions back in the heyday of RISC but it isn't any longer and both ARM and x86 have tons of different instructions.

When you move down from high power OoO cores to in order ones then having 32 registers instead of 16 becomes more of an advantage and the benefits of simpler decode become more significant too.


Agreed. Moore's law doesn't happen on its own and Intel has been leading the way in fabrication technology.

As for ARM, I think their success has more to do with their business model (licensing instead of fabricating) than with their architecture.


ARM is the future. Take Apple's A11, for example: on single-threaded performance (which is what matters most in 90% of today's use cases; side note: can that stop being true by 2020?) it can compete with desktop processors.

Apple A11: 4217 single-core, 10164 multi-core (https://browser.geekbench.com/ios-benchmarks)

Intel i7-8700K: 6089 single-core, 26654 multi-core (https://browser.geekbench.com/processor-benchmarks)

It appears that we are near the end of physically shrinking to smaller nanometers, and we are back to co-processors (like the old Amiga days). Co-processors and some kind of assembly (a Lisp that compiles to ARM assembly; too much for one to ask for???) are going to be the way we improve speed in the future.


Coincidentally (as no doubt you are aware), yesterday Microsoft released Windows 10 for ARM devices with built-in emulation, and a couple of hardware partners launched suitable systems based on the Snapdragon 835 SoC. The future is arriving very fast.


Softbank's purchase of ARM is looking like a genius move even more now.


Yeah, but beyond that, I'm still of the generation (born 1981) that grew up hearing about amazing IBM RISC RS/6000 workstations, and so I'm still inexplicably drawn to any mention of this kind of technology as a “shiny thing”. I know it's mundane and broadly available in the lowest common-denominator hardware but it still attracts me like a moth to a flame. I know I'm a sucker.


ARM is the past, as pretty much every prominent chipmaker currently using ARM is in the RISC-V consortium today, and will be selling chips based on RISC-V tomorrow.


For the past year and a half, there has been no hardware for RISC-V. Where can one buy a cheap RISC-V desktop or a 19” rack mountable 1U server?

Furthermore, RISC-V is one of the most awful designs I have ever seen in a processor internally. The assembler mnemonics are idiotic (compare and contrast with the MC68000), the assembler is backwards (Intel syntax of dst, src instead of src, dst)... it's just an awful, non-orthogonal, non-intuitive design. It's an imaginary processor for imaginary hardware.


You can definitely get cheap MIPS machines, I have a MIPS board in my robot that only costs $5 in bulk.


Do you have a link to which MIPS machine it is?


Not the OP, but here are a few MIPS boards I'm familiar with in the under-$100 range.

Microchip makes a couple low-end MIPS boards:

http://www.microchipdirect.com/product/search/all/DM320103

http://www.microchipdirect.com/product/search/all/DM320004-2

When you outgrow those, you can move up to the Creator Ci20 from Imagination: https://www.mouser.com/search/ProductDetail.aspx?R=0virtualk...


> Zilog 80 baggage

8080, please. The 8086 was designed to be assembly-level source compatible with 8080's so porting software (here's to you, Gary) from CP/M would be easy.


> 8080, please. The 8086 was designed to be assembly-level source compatible with 8080's so porting software (here's to you, Gary) from CP/M would be easy.

1. The 8086 was not designed to be assembly-level source compatible, but it was nevertheless intended that assembly code for the 8080 could easily be ported to the 8086 (mostly find & replace).

2. Zilog developed a different assembly language for the Z80 than Intel used for the 8080 (for copyright reasons, I think - though nearly everybody would agree that Zilog's assembly syntax is better; compare for yourself: http://nemesis.lonestar.org/computers/tandy/software/apps/m4... ). The Z80 assembly language was a strong inspiration for the x86 (8086) assembly language (Intel syntax).

3. The Z80 introduced two index registers (IX, IY) over the 8080. A strange coincidence that Intel also introduced two index registers in the 8086 (si, di).

4. Do the INI/INIR/IND/INDR/OUTI/OTIR/OUTD/OTDR instructions that Zilog introduced for the Z80 strongly remind you of the INS/REP INS/OUTS/REP OUTS instructions of the 8086 (just consider that the 8086 uses the direction flag)?

etc.

TLDR: Intel took more "inspiration" from the Z80 than is generally acknowledged.


The 8080 and 8086 designs draw directly from Z80. You can see it in the assembler mnemonics and how the operands are handled. And segmented memory access, oh my!


The 8080 predates the Z-80 by about two years.


He likely meant the 8086 and 8088.


That's possible, but, still, even though I find the Z-80 a rather kludgy and inefficient thing (compared to the simplicity of the 6502 and the 6502 machines we had at the time), it felt much neater than the 8086 family.


Actually, it is possible to get cheap MIPS stuff, not only in anonymous Android devices but also in the (m)ATX boards Lemote produces for the CCP (for obvious reasons).


Desktop is what buys mindshare. A Raspberry Pi is an embedded device, a desktop, and a very entry-level server all in one, and exactly that is the appeal. Where can one buy a dirt-cheap, modern MIPS 1U server or desktop today?


Can they be purchased in US or Europe?


Sure: https://rwmj.wordpress.com/2015/04/29/mips-creator-32-bit-du...

They have abysmal performance however. An RPi 3 (or just about any 64 bit ARM board) will perform better and cost less.


> why does intel architecture with all of its Zilog 80 baggage dominate

The Z80 was a clone of the 8080, not the other way around.


You do understand that this is server tech, not PC tech? The mainframes and minicomputers that service hundreds of users at once? You remember the old client-server model? IBM System/370, IBM AS/400, etc., before they became System z and so on?


But the machine shown here is neither a mainframe nor an AS/400 or its successor. It does not run AIX, OS/400 (pardon me: IBM i) or any of their mainframe OSes. It runs Linux.

And while I am sure it is a sweet machine that offers great performance and reliability, it is rather expensive. If you want to - especially if you mostly care about the GPU part anyway - you can get an Intel or AMD based machine for a fraction of the price.

IBM still sells the large POWER servers that run AIX or IBM i (in addition to Linux), and they still make their zSeries mainframes. But at least the latter are even more expensive.

All of these are really great machines; IBM has decades of experience building computers for some fairly demanding customers, and from what I know about these machines, it shows. But do not confuse the machine described in the article with IBM's top-shelf machines.


As someone who literally grew up on HPC (sgi Origin 2000 and 3800’s to be precise), yes, I completely understand.


Can I rent this by the hour in Bluemix?

Just due to volume and cost, I've basically given up on the idea of owning my own POWER system in my home rack. If IBM got aggressive with cloud pricing and put together some dev-friendly packages that were all tooled up and ready to go, I'd be more than willing to play with it and see how it performs for my tasks. In spirit, I'd love to see a competitive alternative to AWS and Intel; we need it, but I have a hard time paying a premium just to experiment and find out whether it can actually outperform Main Street.

Tooling is huge here too; there are very material advantages to the incredible hardware optimization done in Node.js, PyPy, LLVM, etc. Since Apple transitioned off of PowerPC, I don't think this platform has had nearly the same amount of attention. I'd love to see IBM giving OSS developers free access to their version of a "micro" instance.


For those complaining about the price, here are some comparisons:

"two Power9 chips, four Volta GPUs accelerators, and 256 GB of memory" =~$65,000

The NVidia DGX-1 (8 Tesla GPUs, 2 Xeons, so much more GPU, less CPU) is $125,000 [1]

Configuring up a SuperMicro GPU server looks to be around $50,000[2]

[1] https://www.engadget.com/2016/04/05/nvidia-dgx-1-deep-learni...

[2] https://www.thinkmate.com/system/superserver-1029gq-txrt


It's still an entry barrier. I can develop the software that'll run on the NVidia box on my $2000 Xeon workstation and have a reasonable expectation the $125K box will behave as expected.

While I trust POWER9 to be very fast, I have had odd performance differences (both positive - meaning I overspent in hardware and had some explaining to do - and negative - meaning I was too optimistic and had some explaining to do) when moving from x86 to SPARC, Itanium and POWER (and MIPS - I'm that old).

The Talon are good entry level POWER9 systems for reasonable prices, but I'd love to see parts trickling down to Xeon E5 prices.

If the barrier of entry is too high, only what already runs on POWER8 will move to POWER9. Xeons have good cost/performance and are a safer bet.


I really badly want to get myself a Talon POWER9 system, but I’m scared of being stuck with another “exotic” overpriced machine (in my time I’ve owned BeBoxes and NeXTStations). Do you have one? Do you use it as a daily workstation? Can I ask you some questions about it?


>Do you have one? Do you use it as a daily workstation?

Nobody does, they are still in the pre-order stage.


Debian is available for POWER9.

This means you'll have at least KVM, so you can run nearly anything on top of that, including Windows VMs with PCI passthrough for gaming, etc.

The cost is non-trivial but I've been working up my nerve to pull the trigger and go this route.


KVM + Windows requires x86(-64) hardware. POWER9 and Windows means QEMU, which is emulation and slow.



> in my time I’ve owned BeBoxes and NeXTStations

Now I search eBay for one. I have a couple Suns ;-)

> Do you have one? Do you use it as a daily workstation?

I wish. Can't justify the price tag for a novelty item.


Me neither. I mean... it's comparable in price to my mainline alternative (a "new 2018" Mac Pro, whatever form it may take and whenever it may be released), but the mainline alternative is sure to be a well-integrated system, fully supported by the producer and software houses alike, whereas the fascinating novelty... <shrug>


Are you both talking about the Talos POWER9 or is a Talon I can't Google for?


Talos II


Sorry. Spell checker.


Yeah that’s absolutely true.

The ecosystem of tuned software just isn’t as big on Power.


Power9 might be a better fit as the invisible hardware behind the #serverless services today (think BigQuery, etc...)

Lots of good points in this thread, but Power is likely going to go for the invisible de-facto cloud data center hardware route.


I am not sure why water cooling. It is just so error prone.


Data centers. More space efficient and less hot air to deal with.


Too little, too late. There's RISC-V now.


POWER9 represents a colossal investment in high-performance processors. It'd take a couple dozen billion dollars for RISC-V to get there.


RISC-V is an ISA, while these high-performance processors are implementations of an ISA.


This was about the RISC-V ecosystem.


If it were IBM, it could get up there in performance real fast by reusing POWER. Unfortunately, they're having issues letting go of the POWER ISA.


That's exactly what I said when someone wanted to know how to get RISC-V way up there in clock rate or single-threaded performance. The other possibility was a semi-custom job at AMD or something: just implement it in microcode with a little extra hardware if necessary, and reuse tens to hundreds of millions of dollars of silicon development. If high single-threaded performance isn't needed but openness is, I suggested using the GPL'd Leon3 for fast development or OpenSPARC for lots of cores. OpenPITON has already taped out a 25-core chip on 32nm SOI based on OpenSPARC. Many academics have used Leon for designs, whether taping out or not.

IBM is different, though, in that they have a legacy-system effect. Their locked-in customers are where this money is coming from; that's why they keep making the stuff to sell at exorbitant rates. They really need to create different tiers of pricing, justifying it by saying the firmware or whole stack is optimized for this, that, etc. If it's gravy-train business, that optimized stack is priced at the fortune the marketing team says it's worth. If it's not (e.g. Raptor or RISC-V), it gets a steep discount just to encourage more adoption and the associated software porting. They seem too dumb to do this.


In principle, yes. In practice, it's still at least a few years out in the space they're targeting. At least.


I think it might be there sooner than we expect. The WD announcement (completely different market) came as a great surprise, at least to me. I bet we can be surprised by RISC-V being used in many different ways relatively soon.


"It's just an ISA" though. Processor design is hard and takes time, yo. OTOH, I hope too!




