> So, today, we're building disposable systems, without any consideration for future products, because when the time comes, we'll pick the next trendy SoC. No need to have what makes an architecture: a vision towards the future allowing different hardware generations to share hardware and software; a design allowing today's software to run on tomorrow's hardware as long as the changes are not too drastic; consistency in choices of busses, key chips, address maps; a real firmware, not something as utterly disgusting as EFI, which can't even compete with the 20-plus-year-old Open Firmware (IEEE 1275); and reliably enumerable busses. No, sir, we'll take today's latest SoC, wire a few extra devices wherever we can find some hole in the address map, put a castrated u-boot on it, and today's stable Linux tree, and here you go, here's your ``platform''. And people cheer.
The issue at hand is that a minority of OSS developers sometimes fight very hard to maintain a long-term architectural vision. But an overwhelming majority just swoops in and does the minimal work necessary to get what they need running today, with little consideration for tomorrow. Eventually the long-term minority gets frustrated by being eternally overwhelmed and resigns. This is such a case.
So, questions seem to be: Is this really inevitable or can something realistically be done to improve the situation? Is it going to bite us in the ass or is it actually "better" in a worse-is-better kind of way?
Computing has moved from being bespoke to commodity. In a very real sense, cheaper is better - more people can afford it, and the people who could always afford it can afford a lot more of it. I suspect that the benefits of cheaper-is-better vastly outstrip this guy's grievances about EFI booting.
Isn't this the point where you fork the OSS project? Then you get to be the one who maintains the long-term architectural vision.
The patches for supporting various functions in the FPGA version of the T1 chip were never mainlined and have diverged from the current version of the kernel. The old version of the kernel they originally applied to no longer compiles without a number of changes. Automake doesn't work with the old makefiles anymore, so if you type make, you have to rewrite the makefile. Then, even if you do that, things still don't compile because of behavior changes in gcc. Some of these are the fault of the kernel developers making bad decisions, like including both -Wall and -Werror in their flags, which causes compilation to fail with newer compilers when new warnings are introduced. (Compilers are always allowed to add warnings later, and indeed they do.) Along with a few other things that fail.
Since I didn't want to develop on a years-old kernel anyway, I forward-ported Sun's old patches to a newer version of the kernel and successfully moved the newest kernel code available for the platform forward a few years. But I was never able to get the latest kernels working, because they all fail on that platform during initialization.
I'm sure it's something I could fix if I needed to, but tracking down why a kernel crashes on an entire platform before it starts init is just no fun.
Maybe someone else fixed that bug and pushed it forward again, but as far as I know, the current status of running Linux on an FPGA with the OpenSPARC T1 core is that the latest kernel available is 3.9, from my tree on GitHub.
Which is a shame; this is one of the very few open cores for a truly fast, well-designed processor available in the world.
So in other words: no demand, no work. Had your job depended on it, you'd have done it. For everything else, if no hobbyist is willing to work on it, then it gets de facto deprecated at some point down the road.
Plus, OpenSPARC T1 has been around for what... a decade? For all of its technical merits, that's about as old as dinosaurs in IT time. Deep Blue defeated Kasparov in 1997 and was the size of an oversized US fridge. Smart phones packed the same amount of computing power 13 years later. :-)
Also, while the T1 core may be somewhat dated, it is still the newest general purpose ASIC design that's ever been fabricated as a major CPU that you can actually download the verilog for and put on an FPGA.
The advances in computing power haven't been driven by sweeping changes to the core units since the gigahertz wars. Cores are by no means static, but an OpenSPARC T1 core has a surprising amount of similarity to a SPARC core shipping in new designs today. It's an extremely valuable model, and an opportunity to play with hardware modifications that is otherwise unavailable outside the few organizations that regularly fabricate new ASICs.
I get the impression that the author of this is completely focused on desktop and mobile software and has written an editorial based on the perspective that those are the only realms that exist. I agree with her conclusions with respect to consumer and office platforms, but think she makes the gross error of then projecting that onto the rest of the computing world. Many platforms will stay alive outside of this space with good toolchain support and poor GUI application support.
In HPC x86-64 (in fact, Intel x86-64) is the only thriving architecture: here is the breakdown from the last Top500:
- 445 systems are Intel x86-64
- 21 systems are AMD x86-64
- 19 systems are IBM Blue Gene/Q. IBM discontinued the Blue Gene line, so these systems would be replaced with something else as they age.
- 7 systems are IBM Power 7. There are no IBM Power 8 systems on the list.
- 7 systems are SPARC-based. AFAIK all of them are Fujitsu systems installed in Japan, and they are not accessible to researchers outside of Japan.
- 1 system is based on Chinese ShenWei architecture.
- There are no ARM (neither ARMv7 nor ARM64) systems on the list.
I highly doubt that ARM will ever become a serious contender in the desktop computing market, let alone the HPC market, which is even more difficult given what Intel can guarantee.
What architectures would you consider competitive in each of these areas? As far as I know, most of the areas you listed are migrating to ARM and AMD64 for new designs/installations at a pretty quick pace.
I'm pretty sure the NetBSD and OpenBSD projects would dispute that statement. OpenBSD even compiles its ports natively (not cross-compiled). Both projects benefit from these architectures by finding bugs.
Also, although not a popular company, Oracle just introduced a new SPARC chip, and given Oracle's behavior, they wouldn't do it unless it was profitable and had a future.
1) look at one of their exhibits, these folks believe in running on anything including toasters
My biggest frustration with the alternate chip vendors is the total lack of a simple ITX / ATX motherboard with a chip, expandable memory (preferably ECC), and a standard set of ports. Even ARM is problematic at this.
I am mystified at how hard it is for a chip vendor to produce a motherboard.
E.g. Freescale Cortex-M0+ (Kinetis KL03) for $1.09 in singles: http://www.digikey.ca/product-detail/en/MKL03Z8VFG4/MKL03Z8V...
Cheapest AVR (ATTiny4 for $1.36) http://www.digikey.ca/product-detail/en/ATTINY4-TSHR/ATTINY4...
Cheapest PIC (PIC10F200 for $0.60) http://www.digikey.ca/product-detail/en/PIC10F200T-I%2FOT/PI...
They're all at least in the same order of magnitude. And in my experience, arm-none-eabi-gcc is a way nicer development experience than MPLAB or AVR Studio. avr-gcc is decent once you get it set up properly.
Why do you think so?
It's easier than trying to come up with some sort of definition for an "open system" or a "real computer".
I dislike monopolies more than most, but MIPS isn't currently a very strong contender for ARM's market share.
Also, don't dismiss them. My last company was a startup producing a portable native gaming platform. I did a couple of ports to some MIPS-based smart TVs, and it totally rocked; I think we ended up with Lego Batman running on it in HD with an Xbox 360 controller plugged into the USB diagnostics port on the back.
Of course, at the userland level all these platforms are horrible piles of fail. The ones I've seen all run badly-ported Linux kernels, and on at least one they hadn't bothered to write device drivers, and as a result the TV UI stack did audio by talking via pipes to a standalone executable, running as root, which fiddled with the hardware registers. Unsurprisingly we had latency problems...
Out of interest, what did you use for your base OS when you were doing MIPS development? Did you reuse an existing distro? Does Linaro support MIPS?
EDIT: Just found out there's a Linaro-like organisation for MIPS called Prpl. Is that what you used?
I vaguely remember Linaro, but that might have been ARM (which we also supported). I don't recall Prpl.
I don't really get the point about how open software lost. Sure, the hardware is disposable, but consider that the amd64 platform is only a relatively minor modification of the x86-32 platform, which in turn has seen only small changes since the 386 - the processor the first Linux was based on 24 years ago. The 32-bit ARM architecture, just like the 386, came out in the mid-eighties.
Today's stable Linux tree? Shit, people still ship 2.6.x kernels these days because they don't want to invest the engineering resources to keep their vendor-specific customizations, accumulated over YEARS in the same processor family, up to date.
I hope these guys have enough motivation and resources to keep the platform alive. They use a whole lot of open-source software, so I suppose support will not be locked up in-house.
There are various other minor and not so minor platforms like Power, Mips, OpenSparc, etc.
It's not exactly new though, it first shipped in the iPhone 5S a bit more than 2 years ago.
> There are various other minor and not so minor platforms like Power, Mips, OpenSparc, etc.
Which is exactly the set of platforms the author (Miod Vallat, who recently resigned as OpenBSD nurturer/implementer/maintainer for many of these) is talking about, considering he explicitly lists them in the first paragraph:
> Only a few people actually know about the rest of the ecosystem (mips, power8...), and no one gives a shit anyway.
The worst part is that the attitude towards open source projects has degraded. One time I mentioned one of my (somewhat popular) open source projects in a public chat room (just a single post) and I was accused of spamming! Even though the project was relevant to the chat and it was completely free and open source.
People don't value free software anymore. It's all about ridiculously expensive SaaS services with massive lock-in nowadays.
So I've had enough. I'm joining the dark side. I'm going to start building services and help make the world a worse place.
I'm not going to be one of these poor old FSF guys who keep fighting for software freedom their whole lives. Things are getting worse - you just have to accept it and move with it.
And we're likely to see the rise of RISC-V soon.
> Only a few people actually know about the rest of the ecosystem (mips, power8...), and no one gives a shit anyway.
I wish the author were able to accept the world as it is: like a priest, one does a lot of good work and receives little appreciation. That is how it is. If you are able to get satisfaction from the fact that you do something good, you keep on doing good for the rest of your life.
"BSD is a better OS than Linux..."
I think you've built your world out of dogma where it's a lot more fun to have "an opinion" and express it than to deal with the nuance involved in the real world.
Their CPU is years away. There isn't even an FPGA prototype of it anywhere. And once they reach prototype stage, it will take around 5 to 10 years for it to reach consumers.
Mill is cool and all but it's probably going to end up like countless other fascinating designs, which is to say quite irrelevant.
Doing good things usually does not pay bills. :-(
Today's world is amd64, armv7, and soon aarch64. Everything else is dead, Jim. No one is investing enough money and brain power in the other architectures. Only a few people actually know about the rest of the ecosystem (mips, power8...), and no one gives a shit anyway.
Keeping an obsolete platform alive is fun because it reminds you of the '90s, when there was a large choice of hardware platforms, with roughly similar cost/power ratios. Eventually, the cheap PC killed almost all competition, and the smartphone market gave ARM an unhealthy market share in the embedded systems world.
Then, after a while, it's not fun anymore, because no one is writing code with your platform in mind, because it's not deemed powerful enough, because modern compilers no longer support your platform (or they produce broken code for it, which is even worse). Does your platform have a hardware limit of a few hundred MB of physical memory? You won't be able to run a web browser or even a PDF viewer on it. Does someone still run today's gcc testsuite on your platform? No? Sorry dude, here's a nickel...
Keeping a platform alive is a real team effort. It requires a serious commitment from all the ``building block'' projects: a not-too-bug-ridden toolchain (as/ld/gcc), as well as support in the flagship projects (emacs, python, X11, mozilla, libreoffice...), and accurate, up-to-date documentation available free of charge.
None of this is true of any platform except amd64, armv7 and aarch64.
Because of this, trying to keep a platform alive is really going against the tide.
Have a look at all free operating system projects: they really only support amd64, armv7 and aarch64. Gee! Sometimes they pretend to support a few other experimental platforms. Or other platforms which have not been tested in years and are only cross-built because they are not self-hosting anymore.
Of course, you'll still find a few loonies who will do an incredible amount of work to prevent the decay of their platform of choice, and give the impression that these platforms are still first-class citizens. But these guys are the same as ten years ago. And eventually, they get tired and give up. Just like me.
The worst part in this is to look back and realize that, after all those years, free software has lost. Companies working on non-free software have been smart enough to harness the momentum of free software developers working on embedded platforms, in order to tremendously shrink their software engineering costs while getting positive press.
So, today, we're building disposable systems, without any consideration for future products, because when the time comes, we'll pick the next trendy SoC. No need to have what makes an architecture: a vision towards the future allowing different hardware generations to share hardware and software; a design allowing today's software to run on tomorrow's hardware as long as the changes are not too drastic; consistency in choices of busses, key chips, address maps; a real firmware, not something as utterly disgusting as EFI, which can't even compete with the 20-plus-year-old Open Firmware (IEEE 1275); and reliably enumerable busses. No, sir, we'll take today's latest SoC, wire a few extra devices wherever we can find some hole in the address map, put a castrated u-boot on it, and today's stable Linux tree, and here you go, here's your ``platform''. And people cheer.
I've spent 20 years of my life trying to promote a certain view of free software, its values and its ethic, fighting for it, getting involved, trying to lead by example, and on this day, I get the feeling that all I did was waste my time and that nothing I did has been useful.
It's a hard pill to swallow.
Better to resign now than to keep trying and only grow more bitter in the process.
From the (ex-)OpenBSD developer Miod Vallat (ex according to https://www.reddit.com/r/linux/comments/3u7z6c/todays_world_...):