Methinks the commenters here are having too much fun contesting the amd64/ARM claim because it's in the headline and it's fun to call articles stupid. But that detail is only incidental to the message of the statement. And at this moment, I don't see anyone discussing the meat of this resignation letter.
> So, today, we're building disposable systems, without any consideration to future products, because when the time will come, we'll pick the next trendy SoC. No need to have what makes an architecture: a vision towards the future allowing different hardware generations to share hardware and software; a design allowing today's software to be able to run on tomorrow's hardware as long as the changes are not too drastic; consistency in choices of busses, key chips, address maps; a real firmware, not something utterly disgusting as EFI which can't even compete with > 20 years old Open Firmware (IEEE 1275); and reliably enumerable busses. No, sir, we'll take today's latest SoC, wire a few extra devices wherever we can find some hole in the address map, put a castrated u-boot on it, and today's stable Linux tree, and here you go, here's your ``platform''. And people cheer.
The issue at hand is that a minority of OSS developers sometimes fight very hard to keep a long-term architectural vision maintained, while an overwhelming majority just swoops in and does the minimal work necessary to get what they need running today, with little consideration for tomorrow. Eventually the long-term minority gets frustrated by being eternally overwhelmed and resigns. This is such a case.
So the questions seem to be: Is this really inevitable, or can something realistically be done to improve the situation? Is it going to bite us in the ass, or is it actually "better" in a worse-is-better kind of way?
I think that statement is missing an "and." So he's disgusted by EFI. What impact does that have five layers up the stack?
Computing has moved from being bespoke to commodity. In a very real sense, cheaper is better - more people can afford it, and the people who could always afford it can afford a lot more of it. I suspect that the benefits of cheaper-is-better vastly outstrip this guy's grievances about EFI booting.
But what is his main grievance? He talks about how systems aren't designed anymore, they're just assembled from commodity SoCs and parts. So hardware isn't as reusable across generations, but hardware is cheap, so who cares? Cheap solves lots of problems in the end. Compilation is cheap. VMs are cheap. Containers are cheap. All kinds of abstractions to let 99.99999% of people ignore the problems he talks about are cheap. What problem can bespoke computers solve that can't be solved by enough cheap computers?
As one of the people who has done work to keep Linux running on a neglected hardware platform (OpenSPARC T1 with an FPGA target), I agree with this article.
The patches supporting various functions in the FPGA version of the T1 chip were never mainlined and have diverged from the current version of the kernel. The old kernel version they originally applied to no longer compiles without a number of changes. Automake doesn't work with the old makefiles anymore, so if you type make, you have to rewrite the makefile. Even once you do that, the sources still don't compile because of behavior changes in gcc, along with a few other things that fail. Some of these failures are the fault of the kernel developers making bad decisions, like including both -Wall and -Werror in their flags, which causes compilation to fail with newer compilers when new warnings are introduced. (Compilers are always allowed to add warnings later, and indeed they do.)
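To make the -Wall/-Werror point concrete, here's a toy sketch (not actual kernel code) of how an unchanged source file starts failing to build. -Wmaybe-uninitialized, for example, was added in gcc 4.7 and is enabled by -Wall (it only fires with optimization on), so a build using -Wall -Werror -O2 turns it into a hard error:

    /* Compiles cleanly with older gcc; a newer gcc rejects it under
     * -Wall -Werror -O2 because of a warning that didn't exist yet.
     * Try: gcc -c -O2 -Wall -Werror warn.c */
    int checked_value(int have_default)
    {
        int x;             /* assigned on only one path */
        if (have_default)
            x = 42;
        return x;          /* newer gcc: "'x' may be used uninitialized" */
    }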
Since I didn't want to develop on a years-old kernel anyway, I forward-ported Sun's old patches to a newer version of the kernel and successfully moved the newest kernel code available for the platform forward a few years. But I was never able to get the latest kernels working, because the newest kernels all fail on that platform during initialization.
I'm sure it's something I could fix if I needed to, but tracking down why a kernel crashes on an entire platform before it starts init is just no fun.
Maybe someone else fixed that bug and pushed it forward again, but as far as I know the current status of running Linux on an FPGA with the OpenSPARC T1 core is that the latest kernel available is 3.9 from my tree on github. [1]
Which is a shame; this is one of the very few open cores available anywhere for a truly fast, well-designed processor.
> I'm sure it's something I could fix if I needed to, but ...
So in other words, no demand, no work. Had your job depended on it, you'd have done so. For everything else, if no hobbyist is willing to work on it, then it gets de facto deprecated at some point down the road.
Plus, OpenSPARC T1 has been around for what... a decade? For all of its technical merits, that's about as old as dinosaurs in IT time. Deep Blue defeated Kasparov in 1997 and was the size of an oversized US fridge. Smart phones packed the same amount of computing power 13 years later. :-)
I mean... I'm not sure what your point is in trying to explain my own motivations to me. It was part of what I was getting paid to do at the time. But I didn't need the newest kernel to run on it, so I never ported beyond 3.9. Arguably, had it been a passionate side project, I might have gone further. Trying to use this as some kind of capitalistic example of how market forces just didn't motivate it enough is... missing a lot of the point about why so much of this work happens and why a lot of the people who do it continue to do it.
Also, while the T1 core may be somewhat dated, it is still the newest general-purpose design ever fabricated as a major CPU for which you can actually download the Verilog and put it on an FPGA.
The advances in computing power haven't been driven by tons of changes in the core units since the gigahertz wars. They are by no means static, but an OpenSPARC T1 core has a surprising amount of similarity to a SPARC core shipping in new designs today. It's an extremely valuable model and opportunity to play with hardware modifications that is otherwise unparalleled outside the few organizations that regularly fabricate new ASICs.
First sentence should be, "Today's GUI-based end user world is amd64, armv7, and soon aarch64." There are tens of millions of systems where the ability to run a web browser, PDF viewer, or office suite is irrelevant. I'm not talking about bare-metal embedded, either. I mean server and backend systems, HPC systems, RTOS-based systems, and soft real-time processing systems.
I get the impression that the author of this is completely focused on desktop and mobile software and has written an editorial based on the perspective that those are the only realms that exist. I agree with his conclusions with respect to consumer and office platforms, but think he makes the gross error of then projecting them onto the rest of the computing world. Many platforms will stay alive outside of this space with good toolchain support and poor GUI application support.
Yeah. But the author seems to have gotten some good money from AMD to write about them, rather than going out and saying that AMD is also dying, much like the others.
I highly doubt ARM will ever become a serious contender in the desktop computing market, let alone the HPC market, which is even harder to crack given what Intel can guarantee.
> I'm not talking about bare-metal embedded, either. I mean server and backend systems, HPC systems, RTOS-based systems, and soft real-time processing systems.
What architectures would you consider competitive in each of these areas? As far as I know, most of the areas you listed are migrating to ARM and AMD64 for new designs/installations at a pretty quick pace.
"Have a look at all free operating system projects: they really only support amd64, armv7 and aarch64. Gee! Sometimes they pretend to support a few other experimental platforms. Or other platforms which have not been tested in years and are only cross-built because they are not self-hosting anymore."
I'm pretty sure the NetBSD[1] and OpenBSD projects would dispute that statement. OpenBSD even compiles its ports natively (not cross-compiled). Both projects benefit from these architectures by finding bugs.
Also, although not a popular company, Oracle just introduced a new SPARC, and given Oracle's behavior, they wouldn't do it unless it had a profit and a future.
[1] Look at one of their exhibits; these folks believe in running on anything, including toasters.
This was written by Miod Vallat a couple days ago. He was the main nonstandard-arch developer for OpenBSD and resigned last week. So the context is that the author was one of the few leaders of the Luddites and considers it a lost cause.
Kind of figured, and for a lot of stuff I don't disagree with the sentiment, but I still think it has value.
My biggest frustration with the alternate chip vendors is the total lack of a simple ITX/ATX motherboard with a chip, expandable memory (preferably ECC), and a standard set of ports. Even ARM is problematic here.
I am mystified at how hard it is for a chip vendor to produce a motherboard.
They're all at least in the same order of magnitude. And in my experience, arm-none-eabi-gcc is a way nicer development experience than MPLAB or AVR Studio. avr-gcc is decent once you get it set up properly.
There are way more MIPS systems than aarch64 systems, and even more Power8 systems than aarch64. RISC-V is interesting as an open-source design with commercial interest. GPUs are not any of those three. There is quite a bit of work going on; it may just seem a bit niche to some people.
Can you really count the iPhones? They are not open in any sense of the word and you would have to do some significant contortions to run anything but Apple's OS on them.
Yes, because they are significant in taking away budget, resources and brain power. And they make people believe that other platforms are becoming less significant.
I'm personally just going for a simple count of how many things (servers, phones, remote controls...) use a particular CPU architecture. It probably makes the 8051 rank quite highly (and I bet lots of them have hard- or impossible-to-replace ROM for software).
It's easier than trying to come up with some sort of definition for an "open system" or a "real computer".
The post is about ecosystems and what people work on/for: is a library maintainer more likely to spend effort on, or merge fixes for, Android/iOS support or Power8 support?
The vast majority of MIPS systems are bargain-basement embedded systems, though. Also, the ISA is pretty messy (especially at the kernel layer) and doesn't support many modern features such as per-page W^X or crypto acceleration.
I dislike monopolies more than most, but MIPS isn't currently a very strong contender for ARM's market share.
Right now MIPS rules the television and set-top-box world because there are, I think, two Chinese companies that both own the IP for a MIPS core and all the digital TV decoder logic, which means they can produce a complete end-to-end TV chipset and not have to pay anyone anything. So they're vastly cheaper than anything else.
Also, don't dismiss them. My last company was a startup producing a portable native gaming platform. I did a couple of ports to some MIPS-based smart TVs, and it totally rocked; I think we ended up with Lego Batman running on it in HD with an Xbox 360 controller plugged into the USB diagnostics port on the back.
Of course, at the userland level all these platforms are horrible piles of fail. The ones I've seen all run badly-ported Linux kernels, and on at least one they hadn't bothered to write device drivers, and as a result the TV UI stack did audio by talking via pipes to a standalone executable, running as root, which fiddled with the hardware registers. Unsurprisingly we had latency problems...
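For flavor, the audio "driver" pattern looked roughly like the sketch below - reconstructed from memory with hypothetical addresses and register offsets, not the vendor's actual code. A root process maps the audio block through /dev/mem and pokes registers in response to text commands read from a pipe; every command paying for a pipe write plus a context switch is where the latency went.

    /* Sketch of the register-poking audio helper described above.
     * AUDIO_BASE and the register offsets are hypothetical. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>

    #define AUDIO_BASE 0x1f000000u   /* hypothetical SoC audio block */
    #define AUDIO_SIZE 0x1000u

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        volatile uint32_t *regs = mmap(NULL, AUDIO_SIZE,
                                       PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, AUDIO_BASE);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        /* Commands arrive one per line over the pipe, e.g. "vol 128". */
        char line[64];
        while (fgets(line, sizeof line, stdin)) {
            unsigned val;
            if (sscanf(line, "vol %u", &val) == 1)
                regs[0x10 / 4] = val;  /* hypothetical volume register */
        }
        return 0;
    }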
We didn't have control over the base OS --- we just had to port to whatever foul junk the manufacturers had put on it, and usually had to link to their UI libraries, as our platform was intended to run alongside the vendor UI.
I vaguely remember Linaro, but that might have been ARM (which we also supported). I don't recall Prpl.
Well, there's still some x86-32 around (some not too old Atoms), and armv6 (Raspberry Pi). And I'm hoping OpenRISC or RISC-V will be successful eventually.
I don't really get the point about how open software lost. Sure, the hardware is disposable, but consider that the amd64 platform is only a relatively minor modification of the x86-32 platform, which in turn has seen only relatively small changes since the 386 - which is what the first Linux was based on, 24 years ago. The 32-bit ARM architecture, just like the 386, came out in the mid-eighties.
32-bit x86 even got some new devices last year through the Intel Quark family. As these devices don't have MMX and other extensions, they have an even older instruction set than the Atoms. And yet they are brand new.
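If you want to check a given x86 for this sort of thing yourself, gcc's <cpuid.h> helper makes it a one-liner; on a Quark the MMX bit (EDX bit 23 of CPUID leaf 1) should come back clear - assuming, as I believe, that Quark implements CPUID at all:

    /* Print whether the CPU advertises MMX, via gcc's <cpuid.h>.
     * __get_cpuid() returns 0 if the requested leaf is unsupported. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID leaf 1 not supported");
            return 1;
        }
        printf("MMX: %s\n", (edx & bit_MMX) ? "yes" : "no");
        return 0;
    }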
The last time I tried running Linux on OpenRISC, it was... interesting. There's upstream kernel support, but it's not compatible with the version of libc I could find - fails to boot due to syscalls that were removed when the code was pushed upstream.
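The mechanism, for anyone who hasn't hit it: when libc's syscall numbers don't match the kernel's table, every call returns ENOSYS, and at boot that just looks like a silent hang. A generic illustration (using a deliberately bogus number rather than the actual or1k ones, which I don't have at hand):

    /* A syscall number the kernel doesn't know yields ENOSYS; a libc
     * built against a different syscall table hits this on its very
     * first system call. */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long ret = syscall(1000000);  /* deliberately unknown number */
        if (ret == -1 && errno == ENOSYS)
            puts("ENOSYS: kernel has no such syscall");
        return 0;
    }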
> No, sir, we'll take today's latest SoC, wire a few extra devices wherever we can find some hole in the address map, put a castrated u-boot on it, and today's stable Linux tree, and here you go, here's your ``platform''. And people cheer.
Today's stable Linux tree? Shit, people still ship 2.6.x kernels these days because they don't want to invest the engineering resources to keep the vendor-specific customizations they've accumulated over YEARS in the same processor family up to date.
I'd like to note that Google recently invested significant effort in Power8, including building their own servers based on it.
I hope these guys have enough motivation and resources to keep the platform alive. They use a huge amount of open-source software, so I suppose the support will not be locked up in-house.
I think it is merely an effort to avoid handing Intel a monopoly over Google's datacenters, because otherwise Google would have no negotiating power over chip prices.
I think power density and performance per watt are more important to Google than chip price. Power8 has greater density than Xeons, with comparable MIPS/W or MFLOPS/W.
This. I think a side-benefit is that it lets them evaluate alternatives to x86 on a larger scale, and that if anything, the effort will lead to more portable code.
Old news? When even game consoles, the standard-bearers for going proprietary for performance, went amd64/ARM, the consolidation of platforms was a foregone conclusion.
> "aarch64" is the new 64-bit ARMv8, I had to Google that name.
It's not exactly new though, it first shipped in the iPhone 5S a bit more than 2 years ago.
> There are various other minor and not so minor platforms like Power, Mips, OpenSparc, etc.
Which are exactly the platforms the author (Miod Vallat, who recently resigned as OpenBSD nurturer/implementer/maintainer for many of these) is talking about, considering he explicitly lists them in the first paragraph:
> Only a few people actually know about the rest of the ecosystem (mips, power8...), and no one gives a shit anyway.
I can relate to the stuff about free software losing the battle against commercial software. I've maintained two open-source projects over the past 10 years, and in both cases I saw an 'as-a-service' alternative which did the same thing come up and get a massive amount of traction.
The worst part is that the attitude towards open-source projects has degraded. One time I mentioned one of my (somewhat popular) open-source projects in a public chat room (just a single post) and I was accused of spamming! Even though the project was relevant to the chat and it was completely free and open source.
People don't value free software anymore. It's all about ridiculously expensive SaaS services with massive lock-in nowadays.
So I've had enough. I'm joining the dark side. I'm going to start building services and help make the world a worse place.
I'm not going to be one of these poor old FSF guys who keep fighting for software freedom their whole lives. Things are getting worse - you just have to accept it and move with it.
Mill Computing will release a new CPU with much potential since it will be fast and consume 1/10 of the energy of an Intel CPU. I hope that they will be successful and I plan to buy a Mill computer one day. BSD is a better OS than Linux so it will be my preferred choice.
I wish the author were able to accept the world as it is: like a priest, one does a lot of good work and receives little appreciation. That is how it is. If you are able to get satisfaction from the fact that you do something good, you keep on doing good for the rest of your life.
"Mill Computing will release a new CPU with much..." (insert recycled hype of a product that has yet to have a HDL implementation, let alone silicon)
"BSD is a better OS than Linux..."
I think you've built your world out of dogma where it's a lot more fun to have "an opinion" and express it than to deal with the nuance involved in the real world.
Their CPU is years away. There isn't even an FPGA prototype of it anywhere. And once it reaches the prototype stage, it will take around 5 to 10 years to reach consumers.
> I wish the author were able to accept the world as it is: like a priest, one does a lot of good work and receives little appreciation. That is how it is. If you are able to get satisfaction from the fact that you do something good, you keep on doing good for the rest of your life.
I think there is lots of inertia in staying with existing platforms (x86, ARM). Even Intel got their ass kicked with Itanium, even though they tried to undo a lot of mistakes there.
Any platform that's not on enough programmers' desks, being actively used, becomes legacy. Once it becomes legacy, it dies. It can survive long periods as a niche product, but, then, it won't run new software, the compilers won't target it, new applications won't be written for it and, ultimately, the last machine of its kind will be turned off, replaced by the current incarnation of something that is actually used.
With fabs costing multiple billions to build and the consolidation of semiconductor manufacturers, the reduction in the number of architectures isn't surprising.
I think this is what we get because we've successfully evolved the business model up the stack, and the bottom is largely solved. This is a good thing, except for those who enjoy innovating (or reinventing) at the bottom layer.
Replicating content here since it's impossible to read it on mobile at pastebin.com
Today's world is amd64, armv7, and soon aarch64. Everything else is dead, Jim. No one is investing enough money and brain power in the other architectures. Only a few people actually know about the rest of the ecosystem (mips, power8...), and no one gives a shit anyway.
Keeping an obsolete platform alive is fun because it reminds you of the '90s, when there was a large choice of hardware platforms with roughly similar cost/power ratios. Eventually, the cheap PC killed almost all competition, and the smartphone market gave ARM an unhealthy market share in the embedded systems world.
Then, after a while, it's not fun anymore, because no one is writing code with your platform in mind, because it's not deemed powerful enough, because modern compilers no longer support your platform (or they produce broken code for it, which is even worse). Does your platform have a hardware limit of a few hundred MB of physical memory? You won't be able to run a web browser or even a PDF viewer on it. Does someone still run today's gcc testsuite on your platform? No? Sorry dude, here's a nickel...
Keeping a platform alive is real team work. It requires a serious commitment from all the ``building block'' projects: a not-too-bug-ridden toolchain (as/ld/gcc), as well as support in the flagship projects (emacs, python, X11, mozilla, libreoffice...), and accurate, up-to-date documentation available free of charge.
None of this is true of any platform except amd64, armv7 and aarch64.
Because of this, trying to keep a platform alive is really going against the tide.
Have a look at all free operating system projects: they really only support amd64, armv7 and aarch64. Gee! Sometimes they pretend to support a few other experimental platforms. Or other platforms which have not been tested in years and are only cross-built because they are not self-hosting anymore.
Of course, you'll still find a few loonies who will do an incredible amount of work to prevent the decay of their platform of choice, and give the impression that these platforms are still first-class citizens. But these guys are the same as ten years ago. And eventually, they get tired and give up. Just like me.
The worst part in this is to look back and realize that, after all those years, free software has lost. Companies working on non-free software have been smart enough to harness the momentum of free software developers working on embedded platforms in order to tremendously shrink their software engineering costs, while still getting positive press.
So, today, we're building disposable systems, without any consideration to future products, because when the time will come, we'll pick the next trendy SoC. No need to have what makes an architecture: a vision towards the future allowing different hardware generations to share hardware and software; a design allowing today's software to be able to run on tomorrow's hardware as long as the changes are not too drastic; consistency in choices of busses, key chips, address maps; a real firmware, not something utterly disgusting as EFI which can't even compete with > 20 years old Open Firmware (IEEE 1275); and reliably enumerable busses. No, sir, we'll take today's latest SoC, wire a few extra devices wherever we can find some hole in the address map, put a castrated u-boot on it, and today's stable Linux tree, and here you go, here's your ``platform''. And people cheer.
I've spent 20 years of my life trying to promote a certain view of free software, its values and its ethic, fighting for it, getting involved, trying to lead by example, and on this day, I get the feeling that all I did was waste my time and that nothing I did has been useful.
It's a hard pill to swallow.
Better resign now than keep trying and only get bitterer in the process.