Quite so; hardware is the holy grail. (Alan Kay: "People who are really serious about software should make their own hardware.") I dream of a golden age of experimentation in vertical stacks: specialized hardware designed for specialized classes of application, with only so much OS as is needed to support them. Perhaps if the cost of developing hardware falls the way the cost of developing software did, we might see something. Why not an Erlang machine? A Lua machine? A spreadsheet machine?
Enough to bother with?
Order of magnitude is table stakes for interesting, wouldn't you say? Radical experiments demand radical gains. Surely there is room for an order of magnitude if one is willing to sacrifice general-purpose computing.
Order of magnitude performance improvement isn't going to be possible. That basically requires that over 90% of your cycles are currently being wasted by the OS somehow. Maybe this project could get a 20% improvement.
You're talking about the OP's project and I was not – at least not when I brought up orders of magnitude. The confusion is my fault. I implicitly changed the subject to my own fantasy tangent.
My point is that if one is going to build a narrow vertical stack up from specialized hardware, there had better be a 10x advantage over running the application the ordinary way or the experiment becomes a why-bother. Also, the application had better be valuable enough to justify the effort.
This vision of systems design has been alive in the Forth community for a long time – maybe not the "iterating on hardware as part of application development" part, but certainly the specialized vertical stack idea, just in a very austere form. They make the tradeoff of dramatically reducing what the software will do in order to make it feasible to develop that way. That's a tradeoff most of us aren't willing to make. But I have a feeling there are more options if one is talking strictly about servers.
> Order of magnitude performance improvement isn't going to be possible.
I think the term 'order of magnitude' has started taking on a connotation of essentially meaning 'a lot'. It's a fair observation, but I hear the term bandied about so often that I rarely think the parties are actually using it literally.
It depends on how efficient or inefficient the OS's network stack and its data transfer to user space are. For managed runtimes in a VM, taking advantage of zero-copy APIs is a challenge. I don't think 'order of magnitude' is possible, but clearly there are a lot of cases where, implemented correctly, this idea could dramatically improve performance.
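To make the zero-copy point concrete: on the JVM the usual escape hatch is FileChannel.transferTo, which lets the kernel move file-backed bytes straight to the socket (sendfile on Linux) instead of staging them through a heap buffer. A minimal sketch, with a made-up host, port, and file name:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.FileChannel;
    import java.nio.channels.SocketChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class ZeroCopySend {
        public static void main(String[] args) throws IOException {
            // Hypothetical endpoint and file, for illustration only.
            try (SocketChannel socket = SocketChannel.open(new InetSocketAddress("example.com", 9000));
                 FileChannel file = FileChannel.open(Path.of("payload.bin"), StandardOpenOption.READ)) {
                long position = 0;
                long remaining = file.size();
                // transferTo may delegate the copy to the kernel (e.g. sendfile),
                // avoiding a round trip through a user-space buffer on the JVM heap.
                while (remaining > 0) {
                    long sent = file.transferTo(position, remaining, socket);
                    position += sent;
                    remaining -= sent;
                }
            }
        }
    }

Even then you only get the fast path when the data is already in a form the kernel can hand off; computed responses still have to cross the user/kernel boundary, which is exactly where a specialized stack could plausibly win.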
A company called Wang took this approach with their word processing workstations. It might have been before your time, but anyway, this approach has been tried before and it didn't really work out. General purpose hardware running a general purpose operating system that abstracts away that hardware's peculiarities won the day for a variety of reasons.
Pendulums swing back the other way, though, when there's a game-changing advantage to be had. And server software that only has to produce well-formed output to be sent over the wire has considerable leeway in how those well-formed outputs get produced. We've seen that leeway be exploited in a major way at the programming language level, not so much at the OS level and not at all at the hardware level, yet. The question is what hidden advantages one might uncover by doing so.
Wow, feeling old. Wang was a major supplier of purpose-built word processors for offices in the 1970s. Prior to that they made sophisticated calculators for science and engineering, and later finance.
Executives and most managers still had secretaries and dictated letters and memos. The Wang system was revolutionary. A multiuser, networkable word processing system that completely changed the game in terms of the time and effort necessary to produce typewritten documents.
They were supplanted in the 1980s by the more general purpose PC but definitely hold a significant place in the history of business computing.
Are general purpose operating systems really that inefficient?
I've thought about "boot into JVM" before, and I think it's enticing for technologists since it's so "clean", but all the projects aiming for this seem to have died from lack of interest (e.g. BEA Virtual JVM/JRockit Virtual Edition).
I'm not asking whether the general-purpose stacks are that inefficient at general computing, but whether there are classes of applications that could gain from a much more specialized stack. "Order of magnitude" comes in only as a way of saying that the gain would have to be large to justify the effort.
Edit: Perhaps I should explain where I'm coming from. I work on a high-performance spreadsheet system. One of the things that makes spreadsheets interesting is that their computational model is powerful enough to be valuable, yet not so powerful as to amount to general-purpose computing. Think of a server that doesn't need to do anything but access spreadsheet data, perform spreadsheet calculations, and serve them over the network to some client. Such a server's responsibilities are so specialized that one can't help but wonder how far down the stack one might push them and what one might gain by doing so. I daydream about this sort of thing.
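To make "not general-purpose" concrete, here's a toy of the computational core I have in mind, with every name invented: cells are constants or pure functions of other cells, and evaluation is a memoized walk of the dependency graph (cycle handling, incremental recalc, and everything else a real engine needs are omitted):

    import java.util.*;

    public class MiniSheet {
        // A cell's formula is a pure function of already-computed cell values.
        interface Formula { double eval(Map<String, Double> values); }

        private final Map<String, Formula> formulas = new HashMap<>();
        private final Map<String, List<String>> deps = new HashMap<>();

        void set(String cell, List<String> inputs, Formula f) {
            formulas.put(cell, f);
            deps.put(cell, inputs);
        }

        // Depth-first evaluation with memoization; no cycle detection here.
        double eval(String cell, Map<String, Double> cache) {
            Double cached = cache.get(cell);
            if (cached != null) return cached;
            for (String in : deps.getOrDefault(cell, List.of())) eval(in, cache);
            double v = formulas.get(cell).eval(cache);
            cache.put(cell, v);
            return v;
        }

        public static void main(String[] args) {
            MiniSheet s = new MiniSheet();
            s.set("A1", List.of(), vals -> 2.0);
            s.set("A2", List.of(), vals -> 3.0);
            s.set("B1", List.of("A1", "A2"), vals -> vals.get("A1") * vals.get("A2"));
            System.out.println(s.eval("B1", new HashMap<>())); // prints 6.0
        }
    }

The point is that the whole evaluation discipline is a dataflow graph of pure functions, a far narrower contract than "run arbitrary code", and narrow contracts are what a specialized stack could exploit all the way down.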
There are many examples of hardware currently in use that can be programmed using software rather than a soldering iron (or its more modern equivalents), but they tend to be within the realm of electronic rather than software engineering.
At a previous job I wrote software for a manufacturing company, and it was a real eye-opener to see one of the head engineers there - who had never in his life written a program, as we would understand it - modifying the complex ladder logic of a PLC[1] that operated parts of the factory, while I made changes to the software on the controlling PC. I realised that we were doing essentially the same thing, just in completely different spheres of operation.
Another example would be FPGAs: one can relatively cheaply get a board with such a chip on it and prototype all sorts of hardware designs essentially by writing software (in VHDL or Verilog). Again, I've not done it personally, but a friend of mine in smartcard research does this all the time, and he doesn't call himself a software developer either, even though it's really the same thing, just applied to something other than the usual general-purpose machine.