The big thing here seems to be defining domain-specific languages (they don't use that term) which allow certain classes of problems to be written concisely. The examples given of those languages look rather conventional:
this.textContents(if b then b.printString() else "")
^ KSSimpleLayout new
keep: #topLeft of: 'titleBar' to: 0@0;
keep: #right of: 'titleBar' to: #right offset: 0;
keep: #height of: 'titleBar' to: 25;
They created about 100 domain-specific languages, and in each, the demo programs are short, because the languages are well-matched to the specific problem. But they didn't get to the point of putting users on these languages. That's needed to find out whether the languages are as concise as claimed, or whether users end up writing complex code to get around their limitations.
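To make the flavor concrete, here is a hypothetical Python sketch of a layout-constraint mini-DSL in the spirit of the KSSimpleLayout snippet above (the `Layout` class and `keep` method are invented for illustration; they are not from the STEPS code):

```python
# Hypothetical sketch of a tiny layout-constraint DSL, loosely echoing the
# KSSimpleLayout example above. All names here are invented for illustration.

class Layout:
    def __init__(self):
        self.constraints = []  # list of (attribute, widget, value) triples

    def keep(self, attribute, widget, value):
        """Record a constraint like "keep #height of 'titleBar' at 25"."""
        self.constraints.append((attribute, widget, value))
        return self  # allow chaining, like Smalltalk cascades

layout = (Layout()
          .keep("topLeft", "titleBar", (0, 0))
          .keep("right", "titleBar", "right")
          .keep("height", "titleBar", 25))

for attr, widget, value in layout.constraints:
    print(f"keep {attr} of {widget} -> {value}")
```

The point of such a DSL is that the layout intent is stated declaratively; a separate solver would turn the recorded constraints into actual widget geometry.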
There are widely used domain-specific languages, such as Matlab and Excel. In each of those, stuff the language designer thought of is easy, and hard stuff is really ugly. That's what usually happens.
Not seeing the big breakthrough here.
I find writing the anchors in QML (Qt Quick's layout language) to be quite pleasant and easy.
width: parent.width / 2
There is much, much more, a lot of which makes it even more concise, but that's the basics.
It appears as an example in the cola source: https://github.com/robertpfeiffer/cola/tree/master/function/...
In the last phase of the STEPS project, sub-systems like the graphics system were close to running fast enough on commodity hardware, so they changed course: they abandoned time slotted for design iterations and exploration to focus on optimization instead.
IMHO it is rather because in research projects at universities there is no position (in the sense that money can be spent on it) for people who are neither researchers nor administration. UX people surely have their importance, but they neither produce research results nor are they a "necessary evil" (administration - which is no contradiction to the fact that good administrative personnel can be really helpful).
Abstraction leaks, and I don't believe in any silver bullet.
Right now we live in a world where everything we do depends on multiple layers of 10s-100s of millions of lines of code each for the compilers, the operating systems, the browsers, or platform libraries (e.g. Qt). It's physically impossible to read a significant amount of the code in any of these layers, much less understand it. Then we get to add in modules and libraries like Boost or Js-Framework-Melange-Of-The-Week, and we don't even know how anything we do at the topmost platform level works. We've gone from Fortran to this in 60 years. What is this going to look like in another 60 years?
An alternative future is something like Nile. Reading Nile's code requires some up-front study. That sounds like work compared to "I can look at a for-loop in any package and know instantly that it is for-looping er... something... for some reason..." But it's the kind of up-front study that is reasonable for a smart high schooler to pull off. With reasonable effort, that kid could understand literally every character of the Nile source and start making major, sweeping modifications. Contrast that with Cairo -- which at a high level is not much more complex than Nile, but I'll be damned if anyone but its maintainers can claim to understand every line and make major changes.
Good to see some work in a different direction.
This also improves security: correctness can be audited, and mandatory access control policies can be written in another DSL.
I seem to recall they support full Unicode, by the way (which makes sense; today there are not many good reasons not to aim for full UTF-8 compliance, unless you want to use some entirely different encoding (like a simpler, prefix-based one) and then do conversion at the edges of your system).
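As a sketch of what "conversion at the edges" can look like in practice, here is a small Python example that decodes incoming bytes strictly at the system boundary, so everything past that point only ever sees validated Unicode text (the function name is my own, purely illustrative):

```python
# Decode bytes strictly at the system boundary, so the interior of the
# system works only with validated Unicode text.

def read_text_at_edge(raw: bytes) -> str:
    """Strictly decode UTF-8 input; reject malformed bytes at the edge."""
    try:
        return raw.decode("utf-8", errors="strict")
    except UnicodeDecodeError as exc:
        raise ValueError(f"invalid UTF-8 at byte offset {exc.start}") from exc

# Well-formed input passes through unchanged; malformed bytes are rejected
# here instead of propagating mojibake into the rest of the system.
print(read_text_at_edge("héllo".encode("utf-8")))
```

The design choice is that decoding errors surface at one well-defined place, rather than as corrupted strings deep inside the system.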
Where is the integration build of Frank? Just one binary with source that works - Alan Kay did run some demos on a Mac. It's too bad that this great software is now starving in some researchers' archives :(
I call for reproducible science:
Where is the CI Server of the project? It can't be that hard.
Everything that we're doing currently points to a huge scattering of knowledge but little-to-no understanding. So that's what Alan was shooting for: trying to express everything with as little as possible.
The reason we can't do it yet is that, while we think we know what we're doing, we still don't understand what we're doing.
You may not believe in a silver bullet but very smart people believe we can do vastly better. I think you may agree that we need to at least try.
It's kind of odd that Alan Kay, the visionary of the Dynabook, who advocates the importance of exchanging ideas, believes that a single person should be able to handle something at the scale of an industry. Karl Marx believed in the social division of labour, and I think he was right.
I am personally quite sure that software construction will one day be a form of mathematics with symmetry being found in everything from biology and neurology to physics.
If doing the first in Scheme, methods like this tie it to hardware:
My only counter might be that Ulysses is a monumental work of art. We don't have anything like that in software -- and not just because we don't have a Joyce. We lack that standardized alphabet. That we compile/interpret programs, that we have a completely different language for data access, that we have in-memory data structures, I could go on and on -- it's all getting in the way of creating something monumental. (I feel like I'm cheating a bit; I've seen the other side and it's beautiful.)
As far as symmetry goes, there's potential there, given that some work is reducing it to math and simple constructs. Hardware too. I'll try to remember to give you a link when I'm off work.
As this process unfolds, what once was a simple system that could be fully understood and reliably reasoned about balloons into a monstrosity with 20 layers and thousands of complicated moving parts which no person has any hope of reasoning about fully. The abstractions start “leaking” because the people working on them have gaps in understanding which leads to interface mismatches all over the place, and all kinds of bugs and other software flaws.
But in general, most of the complexity is incidental, created out of the goop of the program itself and the institutional structures of the programmers working on it rather than the inherent challenges of the problems it tackles.
Alan Kay’s idea is: what if we focus really really hard on cutting that complexity down, and spend years of disciplined effort simplifying our abstractions until a full usable computer system can have its source code printed at normal size in one medium-length book, instead of needing a whole library of books. Maybe such a system wouldn’t be used “in production”, day to day, but it might inspire us to simplify and refactor our foundations elsewhere.
OK, let's say you are right, and Alan Kay has come up with a 20 kLOC functional system. Everyone starts coding on it and quickly produces 100+ distributions and millions of apps. Surprise, the old problem comes back!
It's an ecosystem/social/management problem that is about humans, not technology.
When you have reduced what took 200 million LOC (Windows + Office) to 20k, everybody would need to write tens of thousands of apps to be in the same mess we are in, not hundreds.
Old-school coders lament that the last completely understandable computer was some model by Commodore -- the C64 or Amiga, depending on whom you ask. Computers used to ship with complete schematic diagrams and enough documentation that you could know what every byte of system code did and where every I/O register was and what they did. These days, not so much -- before the massive, proprietary OS kernel even gets loaded there are many hardware and firmware interfaces to be navigated, many of them protected by patent IP laws and only available under NDA, if at all.
The intent of the STEPS paper is to explore possible answers to the question "how powerful a system can we build with modern tech, that is still completely comprehensible by a single person?"
But: where is the source? Some subsystems' sources are available (Nile, Gezira, COLA, OMeta, ...), but the state of the projects varies strongly.
Also, for such a system you need something that works; you need continuous integration.
You need reproducible science.
Where is a reproducible build of the "Frank" system that STEPS created?
Reproducible science and builds should NOT be a big problem IMO.
As an interesting note on this, my personal take on the matter is not to reinvent programming - that is too hard - but to reinforce it with a graphical user interface. A programming GUI will simplify programming in a similar way that an icon-based file manager simplifies file manipulation. The main benefit is to free users from having to memorize textual command patterns. I haven't done a good job of explaining my vision of it, but here it is anyway: https://nzonbi.github.io/xoL/
That may be the case on average, but I know of at least one counterexample for certain.
Software development is a hard problem. Even a small system or ruleset easily explodes into incomprehensible complexity, a phenomenon called emergence.
There is also a certain fragility in how computers and computer programs work. Even the slightest error or failure can stop a system completely.
But then you lost me.
I do not think that graphical programming is the future. Next to nothing of the essential complexity of developing anything, from small algorithms to large systems, comes from the syntax, although syntax can contribute to the accidental complexity, together with the tools used, lack of time, and other factors.
Replacing a formal written language with symbols or flowcharts has been tried and retried, and it never really took off. I believe there are good reasons for this, and one of them is that you cannot simplify a complex problem by using a tool that has less power - or expressiveness. These tools typically make simple problems simpler and harder problems next to impossible.
Another problem is that visual tools, and especially flowchart-based tools, actually tend to be harder to comprehend as the graphs and visualisations get larger, compared to their equivalent textual representation.
Anyway, while selecting a good language is important and can improve or reduce the accidental complexity, the accidental complexity is just a small fraction compared to the essential complexity in larger systems, as argued by Fred Brooks in 1975.
But I agree that graphical symbols and drawings, pictograms and other visual elements could possibly enhance the readability and comprehension of written text, just as a small graph can help in natural language texts.
But then it is perhaps more along the lines of Knuth's literate programming ideas, where the program is also the documentation. Knuth didn't quite get it right though, as described here: http://akkartik.name/post/literate-programming . Not really his fault, as he tried to build on top of existing languages, and just about all of them have a rather backwards structure for technical reasons.
However, that link also quotes Fred Brooks and this nugget of wisdom from 1975 (or is it '85?):
“Show me your flowcharts [code] and conceal your tables [data types],
and I shall continue to be mystified.
Show me your tables, and I won’t usually need your flowcharts;
they’ll be obvious.”
To sum up: The essential complexity is more or less impossible to get rid of, at least without AI (and even then it still exists; we can just spend more artificial brainpower on it while we are slacking off - and hope we are not killed and used as raw material for neural compute machines somewhere in the process).
The accidental complexity, on the other hand, could possibly be reduced with better languages and editors that format and display the code in the most visually effective way for our poor minds. I'm all for editors where the comments would be nicely typeset and where I could insert images and graphs, and for all other visual improvements that could enhance our awareness of the global structures.
"Literate Programming in the Large":
which is a bit of a wandering talk, but well worth watching: it talks about real, huge, complex systems, and how literate programming can help bring them back under control.
2) The other approach is to shift your perspective, so you don't have a complex solution to a complex problem, but rather a few small and lightly connected straightforward solutions to simple problems. Now it doesn't matter if you're bad at communicating, because the ideas you try to convey are simple ideas.
I think there are a few reasons that STEPS appears to work: what we're doing with our complex software is simple stuff. If we know what we want to do (the hard part), doing it in a straightforward way becomes easy - provided we can make hierarchies of abstraction and isolated sub-systems.
Note that simple doesn't mean trivial. These solutions often require quite a lot of theoretical education, but not an unreasonable amount.
"Clasp: Common Lisp using LLVM and C++ for Molecular Metaprogramming":
"Using Common Lisp to refactor C++":
"Clasp — Bringing Common Lisp and C++ Together":
"Last week VPRI published the final "STEPS Toward the Reinvention of Programming" paper [this submission]. Although it is from 2012, it had been unavailable publicly until now. A great way to catch up on how FONC ended. It was worked on by many individuals cited as part of HARC."
The three/four pillars of reproducible research
Often in the discussions around open and reproducible science, we focus on these main types of research products:
1. the paper: the traditional narrative document
2. the data: increasingly appreciated as a first-class citizen of the scientific record, with data repositories providing DOIs
3. the code: recognized as a research product, but the last to be integrated into the literature system and given DOIs
4. the environment: a critical and under-appreciated ingredient for computational reproducibility
THAT should be the minimum for all science projects IMHO. What do you think?
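As one minimal sketch of capturing point 4, "the environment", a Python script could record the interpreter and installed package versions alongside the code (this is purely my illustration of the idea, not anything VPRI did; the function name is invented):

```python
# Record the interpreter version and the installed packages, so that a
# computation can later be re-run in an equivalent environment.
import sys
import importlib.metadata

def environment_snapshot() -> dict:
    """Return the Python version and a {package: version} mapping."""
    packages = {dist.metadata["Name"]: dist.version
                for dist in importlib.metadata.distributions()}
    return {"python": sys.version.split()[0], "packages": packages}

snap = environment_snapshot()
print(f"python {snap['python']}, {len(snap['packages'])} packages recorded")
```

Committing such a snapshot next to the paper, data, and code is a cheap first step toward the computational reproducibility the fourth pillar asks for.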
On the other hand, I believe the paper(s) might very well be more important: exploring the ideas. I think it's the difference between a research project in computer science and an engineering project. The first Lisp was just a paper design - they thought they'd need years to get the thing running. And then someone just coded up eval() in machine code and they were bootstrapped.
I too want the implementation of eval(), but I'm not unhappy that they've shared their ideas.
But I'm a huge fan of everyone involved in this, so that's not criticism -- just speculation.
Some more links on the WP page for VPRI:
And VPRI's own wiki:
As far as I've been able to gather, the most immediately useful stuff is the ideas and concepts in OMeta, as well as the various implementations of OMeta.
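For a taste of the idea, here is a deliberately tiny Python sketch of OMeta-style, PEG-like rule matching over a character stream (real OMeta adds parameterized rules, rule inheritance, and matching over arbitrary objects, none of which this toy shows; the grammar and function names are invented):

```python
# Toy recursive-descent matcher in the spirit of OMeta's PEG-like rules.
# Illustrative grammar: digits := digit+ ; digit := '0'..'9'

def match_digit(s: str, i: int):
    """Match one digit at position i; return the new position, or None."""
    if i < len(s) and s[i].isdigit():
        return i + 1
    return None

def match_digits(s: str, i: int):
    """Match one or more digits (PEG-style greedy repetition)."""
    j = match_digit(s, i)
    if j is None:
        return None  # rule fails if not even one digit matches
    while True:
        k = match_digit(s, j)
        if k is None:
            return j  # stop at the first non-digit, keep what we matched
        j = k

print(match_digits("123abc", 0))  # matches "123", prints 3
```

Each rule consumes input and either fails or returns how far it got; OMeta's contribution is letting you write such rules declaratively and compose them into whole language pipelines.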
... but people wanted their rounded corners.
... so now it's not concise.