
Mono on AIX and IBM i - lewurm
http://www.mono-project.com/news/2018/05/29/mono-on-aix-and-ibm-i/
======
lambda
Seems like IBM is really increasing its investment in open source software and
hardware for PowerPC recently. I've noticed them offering bounties for PowerPC
related work on Rust, there's this recent work on Mono, and OpenPOWER has led
to the development of truly open servers and workstations like the Talos II.

~~~
jrs95
It does seem kind of odd though. The price/performance of PowerPC compared to
Intel is still quite bad. To me it seems they'd be better off giving up on
this front and focusing on other things. Maybe they could start with making
Watson an actual thing instead of just a marketing term for consulting
services.

~~~
dragontamer
The $1300 18-core / 72-thread Power9 seems like a good deal IMO. That's
severely undercutting Intel's i9-7980XE (18-core / 36-thread), and offering
grossly superior specs to boot, like wtf 90MB L3 cache, 8x DDR4 controllers,
and dual-socket support.

In contrast, the i9-7980XE has an MSRP of $2000, only has 4x DDR4 controllers,
and is single-socket only.

The 18-core Xeon Gold 6140 is $2400. Only 6x DDR4 controllers (although it
seems to go up to quad-socket, for what it's worth).

AMD EPYC offers the single-socket 24c / 48t 7401p at around $1300 (I'm
seeing it ~$1100 and $1200). 8x DDR4, so it's far more comparable to the Power9
machine. But Power9 is likely faster on a single-core basis, and has a unified
mesh instead of the 4x NUMA configuration of EPYC. EPYC doesn't have a unified
L3 cache: tasks only effectively have 8MB of L3 (but there are lots of 8MB L3
caches scattered throughout EPYC).

So EPYC vs Power9 is a fair comparison at ~$1300, but Intel is severely
overpriced in comparison. Actually, I'm not liking any of Intel's higher end
options this generation, EPYC and Power9 seem superior on paper... as long as
you don't need AVX-512.

~~~
Narishma
I think the main problem is that there are no low-cost POWER systems for
people to tinker with like there are for x86 and ARM. You are limited to
expensive high-end servers, and that leads to a lack of software support.

~~~
dragontamer
[https://www.digikey.com/product-detail/en/nxp-usa-inc/TWR-P1025/TWR-P1025-ND/3198171](https://www.digikey.com/product-detail/en/nxp-usa-inc/TWR-P1025/TWR-P1025-ND/3198171)

Power certainly exists, it's just less popular. The above board is under $250
and seems to be good enough for tinkering usage.

Software support is certainly weaker. I think Power9 primarily exists for
people who are planning to write their own application servers, or otherwise
are willing to spend time recompiling OSS over to the Power9 architecture
(Gentoo style or similar).

Anyone building a "real" server application will want to buy a Talos II
Workstation, at $3000+. If you're worried about performance, you have to
develop and test on the big stuff.

If you're planning high-performance database optimizations (i.e., let's say
you wanted to improve PostgreSQL's performance), you wouldn't test on a Rasp.
Pi or Intel i7. You'd buy a ThunderX2 ARM, Intel Gold, or AMD EPYC to test on.
The architecture of "big boy" chips is grossly different from the smaller
scale stuff.

----------------

And the way you "test" on big-boy machines is through cloud services. You
don't necessarily have to buy a full machine if you're testing (although local
access is certainly useful).

~~~
lambda
Raptor does have a sub-$2000 machine on pre-order:
[https://www.raptorcs.com/content/TL1BC1/intro.html](https://www.raptorcs.com/content/TL1BC1/intro.html)

That is starting to approach the price point where it's worth it if you want
to tinker, but with something that is a lot closer to a real machine than a
$250 embedded board.

I tried looking for cloud services that offer POWER machines, but couldn't
find anything that I could just sign up for and try relatively cheaply. There
are a couple of universities that have free POWER clusters that you can
request access to, but for tinkering I'd rather pay for what I need than have
to manually request access.

Do you have references to cloud services that would allow you to easily get
access to POWER9 systems by the hour? IBM's own cloud services only seem to
offer bare-metal POWER8 servers by the month; as far as I can tell (from their
fairly confusing pricing page and docs), all of their virtual servers or
hourly servers are Intel.

~~~
dragontamer
I remember hearing of some Power cloud offerings, but unfortunately I can't
find any these days.

I can find old press releases pointing to broken web pages:
[https://www.ovh.com/world/news/cp1606.world_exclusive_ovhcom...](https://www.ovh.com/world/news/cp1606.world_exclusive_ovhcom_offers_ibms_power8_on_runabove_the_public_cloud_instance_up_to_100_times_faster)

But that's not really helpful. Hmm, I guess Power8 / Power9 cloud systems seem
to have disappeared. IBM really needs to work on getting cloud-instances ready
for tinkering.

> Raptor does have a sub-$2000 machine on pre-order:
> [https://www.raptorcs.com/content/TL1BC1/intro.html](https://www.raptorcs.com/content/TL1BC1/intro.html)

With RAM and storage, it will be $3000+.

With "price/performance" parts, like the $1300 18-core CPU (something that
would make the entire purchase worth it IMO), you'd be above $4k.

~~~
lambda
Oh, yeah, looks like you also have to add the CPU, that's not included in the
base price.

Although for RAM and storage, you can purchase those separately at cheaper
prices.

I just tried to sign up for IBM's cloud, to see if I could use the limited
free trial or $200 credit to try out one of the bare-metal systems.

Nope. The limited free trial only seems to apply to a few of their SaaS
offerings; and the $200 credit also doesn't apply to their "infrastructure
services" like VMs, bare-metal servers, storage, etc.

Additionally, it looks like you need to get manual approval for your account
to be able to use the VMs; even once I added my credit card info, they tell me
my account needs to be reviewed by someone before I can actually rent server
time.

On AWS or GCP, you just sign up and spin up VMs. I've used that many times for
quick compat testing against different operating systems, some quick extra CPU
cycles, deploying test servers, and so on.

But on IBM's cloud, you need someone to actually manually approve your account
before you can do anything but use their SaaS offerings.

They really don't seem to want to attract people casually trying things out.

 _edit_ : after a bit, they did approve my account for infrastructure access.
But the bare-metal POWER servers can't be rented by the hour, only by the
month, and they start at $1000 per month. Still not really helpful for casual
exploration. They also have such wonderful usability features as requiring you
to allow popups in order to configure and order servers or VMs.

------
apaprocki
IBM people, if you're reading this and ported Mono to work with XCOFF, how
about adding an XCOFF back-end to LLVM? Hell, I'd even take being able to
bootstrap GCC 8 from source using your compiler.

------
azinman2
Wow, seems like a lot of work for something that won't run fast (until PASE is
cut out of the picture), and likely has a very small user base. Who is likely
to use it?

~~~
pjmlp
Actual IBM users, according to the post.

------
falcon620
Huh. I honestly thought AIX was dead by now. I remember porting something to
AIX 20 years ago, it wasn't fun. I guess IBM stuff doesn't die, it just gets
more expensive.

~~~
freehunter
I mean... it's Unix. That's like saying "I ported something to Linux 20 years
ago, shouldn't it be dead by now?" and also like saying "I ported something to
OSX or Solaris, it wasn't fun" because AIX is Unix exactly like OSX or
Solaris.

~~~
falcon620
Because every Unix-like OS is exactly the same?

At the end of the 90s they had spent like 15+ years diverging from each other.
Some more gracefully than others.

~~~
freehunter
But it's not Unix-like. It's Unix. It's _actually_ Unix, as in UNIX™.

But at any rate, your comment implied it should be dead because it's so old.
Which is a silly statement, considering just how old UNIX™/Unix systems
actually are.

~~~
hapless
I think the implication was that he thought it sucked 20 years ago, so it is
surprising that it has continued to suck for 20 more years

~~~
pjmlp
Actually, it was a much better experience than HP-UX, where aCC was stuck
between K&R and ANSI C, during the time I happened to do heavy UNIX
development.

