
What's wrong with Software nowadays and how we can fix it - rtens
https://github.com/zells/core/blob/master/manifesto.md
======
skywhopper
Anyone who says this:

    in the virtual world of computers,
    everything can be replicated, from complex
    physical phenomena to abstract ideas,
    and even your own mind.

Does not understand computers well enough to be lecturing the world about the
need for software "literacy". The assertion that computers can represent any
of those things is a complete falsehood.

Computers are great with complex mathematical models of physical phenomena,
and in some limited cases this is extremely useful. But smart engineers know
the limits of their tools, and computer models are not an exception.

But to assert that abstract ideas or the human mind can be replicated in a
computer only shows that the author either has no idea of the actual state of
the art, or has no idea what he means when he says this is possible.

One of the first things you learn in Computer Science is that not all problems
can be solved with a computer. It's amazing that people so enamored with math
that they want to believe they can model every phenomenon in the universe with
a computer apparently disbelieve the math that proves computers can't
correctly model everything.

Weird. Please don't try to teach the world that computers can do everything.
They can't. And that should be lesson one in any plan to increase software
literacy. Starting, apparently, with its advocates.

~~~
rtens
I remember a draft of the manifesto that did indeed have far more qualifiers
in it. I guess removing almost all of them for better readability was maybe
not the best idea.

DecoPerson and noxToken are right about this being a general statement about
computing and not present day technology. Whether or not the human brain can
be completely simulated by a machine is still an unanswered question but
nothing I came across so far convinced me that it's not possible.

While it's true that not all mathematical problems can be solved
algorithmically, that doesn't at all limit the kinds of things a computer can
simulate. Most physical phenomena don't have a mathematically precise answer,
and approximations are usually good enough. I hope I understood you correctly
there.

If you say "computers can't correctly model everything" then I probably
disagree with your definition of a "model". In my understanding of the word,
the concept of correctness doesn't even make sense. I'm convinced that models
should first and foremost be judged by their "usefulness".

To make this discussion more concrete, it would be very helpful if you could
provide an example of something that cannot be modeled with a computer.

~~~
BuuQu9hu
Sure! Most real numbers are uncomputable, for starters. "approximations are
usually good enough" is a smoke screen that assumes that it's always possible
to make approximations and iteratively improve their accuracy, something which
generally is only true for computable numbers!
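The distinction can be made concrete with a small sketch (the function name
and setup are mine, purely for illustration): a computable real like the
square root of 2 comes with a procedure that provably tightens the
approximation on every step, which is exactly what an uncomputable real lacks.

```python
from fractions import Fraction

def sqrt2_approx(n_bits):
    """Approximate sqrt(2) by bisecting the interval [1, 2].

    Each iteration provably halves the interval, so the error
    after n_bits steps is below 2**-n_bits. This guaranteed,
    mechanical refinement is what makes sqrt(2) *computable*;
    no such procedure exists for, say, Chaitin's constant.
    """
    lo, hi = Fraction(1), Fraction(2)
    for _ in range(n_bits):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo

print(float(sqrt2_approx(50)))  # within 2**-50 of sqrt(2)
```

"Approximations are usually good enough" quietly assumes a loop like this one
exists, and for almost all real numbers it does not.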

Another example, something I've been studying recently, is the phenomenon of
patterns. Patterns are very abstract, and it's not at all clear how a computer
can model a pattern without resorting to some concrete instantiation of it. At
best, we have blueprints, models, code, designs, and instances; these are all
themselves occurrences of patterns, but they fail to embody the pattern
itself.

While I've got your attention, "Cryptography also enables a high level of
security." is a ridiculous line. Cryptography, by itself, is only a building
block. Security is _structural_. Check out object capabilities:
[http://srl.cs.jhu.edu/pubs/SRL2003-02.pdf](http://srl.cs.jhu.edu/pubs/SRL2003-02.pdf)
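For readers who haven't met the term, here is a minimal, hypothetical sketch
of the object-capability idea (all names are mine, not from the linked paper):
authority travels only with object references, and a "facet" can attenuate
that authority structurally, with no ambient permission checks.

```python
class Counter:
    """Full authority: whoever holds this object can read and reset it."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1
    def reset(self):
        self.value = 0

def increment_facet(counter):
    # Attenuation: hand out a capability that can only bump the
    # counter, never read or reset it. There is no ACL lookup;
    # holding the reference *is* the permission.
    return counter.increment

c = Counter()
bump = increment_facet(c)
bump()
bump()
print(c.value)  # 2
```

Cryptography can protect such references in transit, but the security property
itself comes from the structure of who holds what.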

Edit: Scott Aaronson discusses limits of quantum computation in this talk:
[http://www.scottaaronson.com/blog/?p=2903](http://www.scottaaronson.com/blog/?p=2903)
Basically it is totally possible that minds _cannot_ be simulated by computers
without destroying the mind in question!

~~~
rtens
I must admit that I have only a very limited understanding of numerics and
cryptography. But I do find these discussion points very interesting.

The uncomputable real numbers are the irrational ones, no? But those you can
compute to arbitrary precision; it's just that for most applications, you
don't need much precision. But again, I haven't done very much simulation, so
I might be missing something.

Patterns are something I haven't thought about in this context. I'm not
entirely sure what kind of patterns you're talking about; I'm guessing
software patterns. I was thinking about abstract patterns and had to think of
DeepMind and how it learned to play Go by recognizing patterns.

I stand by my statement about security. I never said that cryptography is
sufficient, only that it's necessary. I think a cryptography-enabled
capability-based security model could be very interesting.

Will definitely read Aaronson's talk. That's a very interesting subject.

~~~
inimino
Simplify your message. What is the core, key idea of what you want to do? If
you can't explain it in a sentence, keep trying.

~~~
rtens
I want to "increase the number of people who are able to write, modify,
combine and share software by building a tool that makes writing software easy
and fun, and accessible to everyone." I even have a slogan: "Enabling
Software Literacy".

That said, I'm highly skeptical towards the "good ideas can be stated in a
sentence" heuristic.

~~~
inimino
If you want anybody to listen, you're going to have to keep repeating it.

It's a good start. I can support software literacy. Now you need to state your
plan to get there just as concisely. Probably not as easy.

------
coderjames
> To fix this, we need to increase the currently tiny number of people who are
> able to write, modify, combine and share software. We need a tool that makes
> writing software easy and fun, and accessible to everyone. A tool that
> enables software literacy.

Global access to niche knowledge doesn't seem like an effective use of the
world's time to me. Let's look at this same idea, but replace "develop
software" with "perform brain surgery."

> To fix this, we need to increase the currently tiny number of people who are
> able to perform brain surgery. We need a tool that makes performing brain
> surgery easy and fun, and accessible to everyone. A tool that enables
> neurosurgical literacy.

Just like Software Engineers don't need to know how to perform neurosurgery,
accountants, marketers, burger flippers, and salespeople don't need to know
how to write software. I'd rather the CEO of the company I work for spend time
on growing the business, not learning how to write a "Hello, world!" program.

~~~
FroshKiller
I disagree with your analogy. In my opinion, programming is more analogous to
composition. Most courses of study require at least one composition course,
because no matter what discipline you're studying, it's important that you be
able to organize your thoughts and express them clearly as well as understand
the formal expressions of others.

I personally experience a huge overlap between ordering my thoughts for
writing and making a mental model of a problem and its programmatic solution.
Yet it's easier for most people to pick up a pen or open a word processor and
express themselves than it is to write a working program.

Why should it be so? Apart from implementation details of hardware platforms
and runtimes, you're really talking about expression in different units:
subroutines rather than paragraphs, APIs rather than outlines, user stories
rather than theses.

To bring it back to your analogy, consider what a tool that lowered the
barrier to entry to brain surgery might do. It might certainly do things like
list the particular skills, tools, and medications required to perform certain
procedures. It might provide a check list of crucial steps and a sort of
troubleshooting tool. It isn't going to make an expert out of an unskilled
user, and a particularly skillful surgeon might not benefit from the tool at
all, but on balance, it would raise the baseline competency of many users and
help ensure that certain critical requirements were met.

A good assisted programming tool would do the same. It would help the user
accurately model the problem, it would advise the user of certain best
practices, and it would certainly warn the user of common mistakes. This is
all very much in line with a good word processor, with features like
spellcheck and document templates.

So I disagree that programming is analogous to brain surgery. Programming
itself is a tool for the expression of an idea and much more akin to
composition, and there is no good argument in my opinion against tools to
assist with either. Remember that for a large part of our history, literacy
was considered a niche skill, and widespread literacy led to explosive growth
of our knowledge of the world. I'm not one of those "literally everyone should
learn to code" types, but there is quite a lot of potential in spreading
programming literacy, even if you feel most people have no business practicing
it.

~~~
bluetomcat
Why don't we (software developers) take the route of improving end-user
software and letting people easily accomplish their mundane everyday tasks,
instead of having them program as a remedy for the sorry state of software?
Have you looked at it that way? Would people buy a car that required them to
tweak and service it right after it left the showroom?

~~~
FroshKiller
I think you misunderstood me. I'm not saying that people should become
literate in programming so they can program their own solutions, although
that'd obviously be good. I'm saying that we should encourage people to become
literate in programming if only so they could express their needs more clearly
and evaluate solutions better.

I likened programming to composition. Most people who took composition courses
don't necessarily need to write papers on a daily basis, but they certainly
benefit from being able to interpret and evaluate material. If I could get
people to understand the general concept of a data type, I'd certainly have an
easier time of communicating with them and satisfying their needs.

------
CJefferson
I usually try hard not to be cynical, but in this case I can't stop myself.

This seems to be another attempt at a grand, unified packaging and IPC /
linking method. There have been hundreds of these, and there is no attempt at
all to discuss where all the others went wrong, and how this will improve upon
them.

~~~
mooreds
Obligatory xkcd cartoon: [https://xkcd.com/927/](https://xkcd.com/927/)

~~~
euske
I tend to think that these xkcd URLs are becoming like bible verses for geeks.
I find myself memorizing those numbers. (e.g. "Ugh, he's getting like xkcd/386
again.")

~~~
ninju
386 is a funny one!

~~~
kabdib
But with 698, it all hinges on the delivery. Get the timing wrong and watch
the rotten vegetables fly.

------
pttrsmrt
At first glance, it seemed like yet another Silicon-Valley-neoliberalist-style
pile of words and ideologies, but no code or practicalities. But it seems
like OP's thesis has a more hands-on approach:

[http://zells.org/res/Zells_DiplomaThesis.pdf](http://zells.org/res/Zells_DiplomaThesis.pdf)

~~~
airesQ
From the 10 minutes I spent looking at it:

- It seems to be some kind of language/computation model, loosely based on an
"everything is an actor" model.

- It did look goodish. I tried following the Fibonacci example, which sort of
made sense (I got the impression that recursion is handled by creating new
nodes/zells). The discussion chapter also seemed interesting.

- It has that abstruse academic feel, where sometimes it is hard to assess
whether the problem is my ignorance, or just vagueness of the publication.

- Motivation sections usually have an exaggerated tone to them (i.e. this
will totally change everything), but this one, and the article above, are a
bit over the top.

- There are also some over-the-top statements sprinkled throughout the thesis
(e.g. "a model of virtual objects which exist independent of any hardware").

~~~
rtens
- That's exactly what it is. And the way I see it, actors are an
implementation of OOP (in its original meaning), so I would stick with
"everything is an object".

- Thanks =)

- I had that same feeling while writing it, constantly balancing my own
ignorance with an acceptable level of vagueness.

- Call me the over-the-top guy. But the way I see it, your vision has to be
grand, if not megalomaniac, if you want the essence to survive the collision
with reality.

- Seamlessly migrating networked objects is one of the lesser over-the-top
ideas, though.

~~~
inimino
If the thesis is relevant at all to what you're doing now, might I suggest
linking to it directly from the manifesto? Otherwise it sounds mostly like you
are very excited about something, but I got almost no idea what that something
is.

~~~
rtens
That's probably the biggest weak point of the manifesto, but also kind of on
purpose. While the thesis was all about "let's build something", the manifesto
is a fresh start trying to answer the question "what is the problem?". The
thesis is still relevant but a bit outdated, and I'll probably rewrite it in
multiple blog articles.

What I'm planning to use as an information starting point is zells.org, but
it's all still in an early stage. At the moment I'm working on creating a
document that visualizes the "end product" that I have in mind.

~~~
inimino
After looking at the thesis and the manifesto, I think your best bet is to
create one worked example, in pseudo-code or whatever, and then see if you can
get anyone on board with that.

------
gmluke
> Just as knowing why apples fall down and aeroplanes fly up, the citizens of
> the 21st century need to know that computers are not magical boxes but
> composed of dynamic models.

In all honesty, I don't know why apples fall down and aeroplanes fly up. I
just know that they do. No doubt that improving software literacy is a
worthwhile goal, but I think the author hasn't made a strong case for it in
the opening paragraphs.

~~~
Waterluvian
One of the most common challenges that I see engineers face is to effectively
empathize with the user. When you know a lot about something, you just see it
in a fundamentally different way. This makes it difficult to focus on making
it easy for the user to do what they want, not what you think they should
want.

When I drive my car, I just want to get from point A to point B while
listening to a podcast. I don't care to know how they work. This doesn't make
me enfeebled or ignorant; I would just rather commit my learning cycles
elsewhere.

Computers need to be the same. Why would we be so foolish as to think that
it's a wrongdoing that people are able to effectively use computers without
having any clue about how they function?

Literacy isn't about forcing people to learn things, it's about ensuring that
everyone has the baseline exposure to help them discover _if_ they have an
interest in a topic, and then the resources to explore that topic if they
choose.

~~~
marcosdumay
> When I drive my car, I just want to get from point A to point B while
> listening to a podcast. I don't care to know how they work.

Yet, you've spent months explicitly training for that task.

~~~
dreta
And that’s just learning how to use the car. We’re talking about _building_
one.

~~~
panic
I think the car analogy kind of falls apart at this point. Using a computer
and writing programs for a computer are both interacting with software. Take
spreadsheets, for example: is making a spreadsheet analogous to building a car
or to driving one? What about drag-and-drop visual programming?

------
jdavis703
> Just as knowing why apples fall down and aeroplanes fly up, the citizens of
> the 21st century need to know that computers are not magical boxes but
> composed of dynamic models.

I'm not sure most of the public could explain why gravity works, or how an
airplane actually gets into the sky. That's because most people are not
inquisitive by nature; they generally take things at face value without
questioning why. This kind of attitude does not work if you want to build
something complex, whether it be software, a car, or a bridge. I think the
first step is figuring out how we as a society can encourage a generation of
thinkers and tinkerers.

~~~
sbuttgereit
> This is because most people are not inquisitive by nature, they generally
> take things at face value without questioning why.

I have to disagree with this statement. Almost all the people I meet, across
pretty much every group boundary you can imagine, I find to be inquisitive.
Very inquisitive, in fact.

What I think you may be observing is that the depth of the questions many
people are interested in isn't great... gossip magazines, for example, serve
to give answers to an inquisitive populace. We can question the value of such
questions, but that is still a drive to know something.

Also, I find that when people do ask deeper questions, they can be simply
overwhelmed by the answers. I'm not particularly good at mathematics, but I
often work with people who are in the very top tier in that realm: I will ask
certain questions for which I'm simply not prepared to hear the answer... the
answer requires background that I simply don't have... I am genuinely
interested, but the required prep work is simply out of the question. One
could argue that the answers can be simplified as well, but that's not always
true.

------
dgreensp
I'm going to buck the trend of cynicism and say this is beautiful and matches
my own ideas closely, though I have not read the paper yet.

How many of us seasoned programmers came to an understanding with computers by
playing in a "toy world" of comprehensible, somewhat visible, documented,
predictable abstractions, such as Logo, HyperCard, Excel, BASIC, or even
assembly code, and now perform bizarre ceremonial rites on a daily basis to
get a teetering stack to perform our bidding as part of "real" programming?
There's a vicious interplay between how "bad" and complex software is and how
arcane and unapproachable it is, even to experts, driving away all but the
most determined to crack the code, who then work together to try to build
quality components and applications against great odds.

~~~
dgreensp
After reading the thesis, the proposed model of computing is like a concurrent
Smalltalk where everything is completely mutable, even an object's code and
prototype chain. The author then writes a function to calculate the Fibonacci
sequence by turning it into a distributed system, with some effort, and then
runs the code and talks about its performance!

At first glance, there seems to be a lot of incidental complexity and creative
choice in expressing a function from integer to integer in this system, which
goes against the ideas of code reuse and separating meaning from optimization
-- i.e. that there is one global Fibonacci function that we can inspect and
understand and don't have to rewrite for performance, or in another language,
or to run in a distributed fashion.
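The contrast can be sketched in a few lines (this is my own illustrative
Python, not the thesis's actual notation or API): the plain function states
the meaning once, while an actor-flavoured version of the very same
integer-to-integer function drags in mailboxes, spawning, and message
plumbing.

```python
import queue
import threading

def fib(n):
    # The "one global Fibonacci function": pure meaning, no plumbing.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def fib_cell(n, reply):
    # Actor-flavoured version: each call spawns child "cells" and
    # collects their answers from a mailbox. Same function, far
    # more incidental machinery.
    if n < 2:
        reply.put(n)
        return
    mailbox = queue.Queue()
    for m in (n - 1, n - 2):
        threading.Thread(target=fib_cell, args=(m, mailbox)).start()
    reply.put(mailbox.get() + mailbox.get())

out = queue.Queue()
fib_cell(10, out)
print(out.get() == fib(10))  # True (both yield 55)
```

The second version also happens to run concurrently, but that optimization is
now fused into the definition rather than separated from the meaning.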

------
sickbeard
> More and more, users are put into - sometimes golden - cages, and forced to
> hand over their ideas, personal information, and even identities to
> international quasi-monopolies that put everything into walled gardens which
> the creator can only access through tiny keyholes.

I don't think this has anything to do with software; it seems more like an
attempt to educate the masses on the evils of Facebook and Twitter, but then
again that isn't a software problem. You can see this kind of lock-in in
financial businesses like loans/mortgages, or even benign ones like the
eyeglasses business.

~~~
rtens
They might not be as benign as you think:
[https://www.youtube.com/watch?v=CAeHuDcy_bY](https://www.youtube.com/watch?v=CAeHuDcy_bY)

But I agree that this is not a software problem but a business model problem.
The solution however, can be software.

------
aethertron
Pleased to see more efforts to fix fundamental problems with computing. It's
certainly needed, this won't get fixed by piling more crap on top of the
existing stacks (including: building OSS versions of proprietary things.)

This one seems to address some of the values I think are important, so, neat.

But I'd argue ordinary people just need to be able to use software, not
necessarily build it. So, available software should be high-quality. There
should be meaningful choices between alternatives. I think that means: no
lock-in.

------
eternalban
Whenever the physical world intrudes on the illusory "virtual world of
computers", the man behind the curtain is observed in its less than
superlative aspects and we're less prone to attribute OZ powers to "software".

Software /is/ magical in many aspects. But its own inherent magic has never
been anything other than sleight of hand sort of magic. Even then, software
magic borrows from the glory of /insanely/ complex physical machines, and
various wondrous features of Nature itself.

One case in point was when Moore's Law and the economy collided and the
practice had to ramp up on concurrency and deal with SMP. A bit later (and
still ongoing), we're ramping up on distribution and dealing with CAP. In the
former case, the illusory 'indivisible' platform was seen to have a sort of
geography. In the latter case, the illusory notions of linear 'time' and
smooth 'space' were smashed.

OP's remedy for a software-driven world's ailments is software literacy. But
the physical head poking through the curtain should make it clear a large
subset of these ailments have to do with physical things, such as computing
devices, infrastructure, access to energy, and even softer concerns such as
legal and political cover for making, providing, and operating software.

Imagine if every single person was a world class chemist and biologist. Would
we be able to do away with drug companies? You have that amazing molecule all
figured out -- do you have the physical capability for actually realizing it?

------
Gravityloss
I'm taking the opposite opinion for the sake of argument: The more you
understand about software, the more frustrated and disappointed you will be,
as you see all the easily avoidable flaws around you.

Web pages that have a few lines of text are unreadable on mobile devices and
drain your battery.

Setting the spin speed on a washing machine takes ages because the computer
polls for input only about once per second, so it misses most of your button
presses. How do you reconcile that with a clock speed of thousands to millions
of cycles per second, when the whole system has only a handful of inputs and
outputs?
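The arithmetic is easy to check with a toy simulation (the function and
numbers are mine, purely illustrative): a 100 ms button tap registers only if
a poll instant happens to land inside it, so a 1-second poll period drops most
taps while even a modest 50 Hz poll catches them all.

```python
import math

def taps_registered(tap_times, hold, poll_period):
    # A tap held down for `hold` seconds is seen only if some poll
    # instant k * poll_period falls inside [t, t + hold].
    seen = 0
    for t in tap_times:
        next_poll = math.ceil(t / poll_period) * poll_period
        if next_poll <= t + hold:
            seen += 1
    return seen

taps = [0.3, 1.7, 2.25, 3.6, 4.45]       # five 100 ms button taps
print(taps_registered(taps, 0.1, 1.0))   # 0 -- 1 Hz polling misses every tap
print(taps_registered(taps, 0.1, 0.02))  # 5 -- 50 Hz polling sees them all
```

Nothing about the hardware forces the slow rate; it is a pure firmware choice.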

These are not tough technological limitations, but easily avoidable human
failures. There can be nothing but negligence, cynicism and depression to
explain them.

People would be a lot happier to just say "I guess it must be like this", or
"My device is getting old and slow". A bit like "God works in mysterious ways"
or "it must be fate" can often feel better than "our government is corrupt" or
"I don't have any kind of plan".

I guess it's the same about any thing you feel passionate about. If you cared
about clothes or food, seeing all the junk out there might make you less
happy. A friend of mine was a barista. He wasn't happy that people were paying
the same amount for worse coffee.

~~~
rtens
Wow that's depressing. I suddenly feel the impulse to give you a hug and tell
you that it's gonna be alright. I wish I was more sure that it actually will
be.

------
barnacs
I haven't read the paper yet but the idea looks promising.

I've had this itch myself for a long time. I can't quite put my finger on it,
but we really need something like a high-level assembly language. One that
incorporates high-level concepts like networked computers, cryptographic
identities for users, and access to a global, shared pool of data and
algorithmic primitives, regardless of which application they were originally
written for, or what domain-specific higher-level language they were created
in.

Like, once a user has entered their address into a computer, they shouldn't
ever have to enter it again in different software. Once someone has written
an algorithm which takes input x and produces output y, no one should ever
have to rewrite it, but everyone should be able to reuse it. Applications
shouldn't all individually handle transferring data between devices of the
same user or even between users; this should already be built into our
programming model. And so on.

If this sounds like some utopian dream or incoherent babble, that's because it
probably is at this point. But I'm convinced this is the future of computing
we should be aiming for, not piling up more stuff onto our existing stack
that's just barely held together by ~50-year-old technologies built for that
age.

~~~
rtens
Wow that's exactly how I feel. I couldn't even formulate it that well. But I'm
very happy to know that other people feel this itch as well.

Your sentiment of "aiming for the future we want" instead of "piling up more
stuff on our existing stack" reminded me very much of this talk:
[https://www.youtube.com/watch?v=gTAghAJcO1o](https://www.youtube.com/watch?v=gTAghAJcO1o)

------
exelius
IMO we're kind of headed that way now. I always saw this as the point of
Google's combined interest in TensorFlow and Kubernetes.

Containers are the piece of the toolchain we've been missing. Now we actually
have some feasible logical methods (deep neural networks + gradient descent)
that can be used to structure existing computational tools into deeper, more
intelligent systems.

Think of it this way: what's the difference between (an ideal) container and
an artificial neuron? Structurally, they are nearly identical: they both have
collectors and emitters, and perform some non-linear action in concert with
other similarly-structured systems. Containers can also help with some of the
"trust" problems: if we're shipping around trained data models (or container
images) rather than the actual training data, we can push storage out to the
edge of the network and run the models there. Containers provide a common
computing language that enables you to do that.

This platform is not a leap forward for the theoretical capabilities of AI;
but it is a shortcut that should eventually make it easy for AI researchers to
leverage vast existing libraries of software written in any one of hundreds of
programming languages.

I actually suspect that this is just the first generation; there are a number
of software problems that currently can't be solved easily in a multithreaded
manner. However, if you can build a container that does what you need, you can
eventually train a model to replicate the container -- it may be less
computationally efficient, but with future orchestration platforms it may end
up being more time-efficient.

------
dreta
This industry is in its infancy. We have some of the greatest minds working in
it, and yet we barely know how to make things work right, and OP’s suggesting
we try and design tools that an average person can use to create powerful
software. How about we let highly-trained specialists figure out how to write
complex software first, before we try to teach average programmers how to do
it, and perhaps then we can turn our heads towards the masses.

~~~
sowbug
Making computers incredibly simple to use seems to have worked out well in the
case of the iPhone. Lots of common usability problems in computer UI simply
went away with a much better design. It's hard to say where the mobile-phone
industry would be today if that shift hadn't happened in 2007.

Perhaps the same could happen for computer programming. For example, in the
1990s, HTML introduced coding to lots of people who otherwise probably
wouldn't ever have considered it. A forgiving declarative interface was a lot
more appealing than learning what compilers, linkers, functions, parameters,
and APIs were.

------
rtens
Since this is my first submission on hacker news, I'm a bit overwhelmed by the
amount and quality of the responses. I'll try to address critiques
individually but in order to not have to repeat myself continuously I wanted
to thank everybody for their time and energy this way. Your feedback is much
appreciated and I'll use it to debug the idea and the manifesto.

------
ysavir
I think this misses the problem completely. The problem is not that people
that trust computers need to put more effort into understanding computers. The
problem is that people keep trusting computers.

What people really need to be educated on is "Why computers are not and never
will be trustworthy". And they don't need the details of why, just the bottom
line.

~~~
rtens
Why are computers not trustworthy, and why will they never be?

------
orclev
Can we __please__ just let this idea die. Almost since the day programming was
invented there has been at least one person out there trying to dumb it down
to work for the average person, and it __always__ fails. Computers are
__hard__ to program because they're __complicated__. Programming languages
strike a balance between simplifying some aspects of that complexity and
exposing enough of the inner workings to efficiently implement algorithms.
Different languages strike different balances, but ultimately they all are
more complicated than the average person can handle, because at the end of
the day the computer itself is more complicated than the average person can
handle. No amount of dumbing down or simplifying things is going to create a
programming language that you're going to want to write serious programs in
but that the average person is going to be able to understand.

~~~
inimino
It sounds like you really want programming to continue to be difficult.

~~~
orclev
On the contrary, I want programming to be easier where possible, but the goals
of this manifesto (and all the similar ones before it) are trying to fix the
wrong thing. Better and more powerful abstractions are always good; I'll
always welcome a new tool that makes my job easier. However, the goal of this
project is to make a tool that's explicitly designed to make code that's
easier to read for the average person, not easier to program in. These are in
many ways opposing goals. Things that are easier to understand tend to lead
to more verbose code, and vice versa. Look for instance at assembler.
Conceptually assembler is __very simple__, but because of that simplicity
implementing anything non-trivial in assembly tends to be very verbose. At
the other extreme, languages like APL and Haskell are __very concise__, but
they're hard to understand because they're very complex and leverage a lot of
very powerful concepts.

What I want are new tools that leverage powerful concepts to allow me to
efficiently express complex ideas. These tools by their very nature are hard
for the non-programmer to understand. Look at what's being proposed in this
manifesto. They're advocating for what is in essence internet enabled Excel.
Do we really want to start trying to write applications in Excel? Can you
imagine the horrible spaghetti code that would result in? Imagine how much
code you're going to have to write to express even mildly complicated
concepts? Imagine trying to maintain one of these beasts.

Furthermore, for the idea they propose to get off the ground, the vast
majority of programs would need to be written in this new language they're
proposing, which is, quite frankly, not going to happen anytime in the next 20
years at least. We're __still__ using C and assembly, and they don't look to
be going anywhere anytime soon (although _maybe_ Rust can dislodge a little of
the C). I __do__ think it would be good for more of the population to at
least have a passing understanding of basic programming concepts; at the very
least, learning how to solve problems using abstract thinking would be
useful. But I don't think trying to create some kind of "simple" programming
language is the way to do it. There are "simple" programming languages out
there; just look at any of the toy languages designed to teach children how
to program. But that's the thing: they'll always just be toys. They expose a
very limited subset of capabilities for solving a limited subset of problems
(mostly they focus on making it easy to draw on the screen, since that allows
for simple games that provide positive feedback for children learning).

~~~
rtens
If I understand you correctly you're probably thinking about how more concise
notation systems make things easier for experts but harder for novices, e.g.
maths and music scores, and probably Haskell as well.

So I suppose you mean "Things that are easier to understand _for novices_
[...]".

But that's not always the case, which is incidentally proven by your examples.
Assembly language is a lot easier to write _and_ understand for novices than
machine code, and Fortran even more so. Bret Victor illustrates this very
nicely in the video I linked to.

I'm convinced that the power that makes something more concise and at the same
time easier to write and understand is abstraction. Or more precisely, the
_right_ abstraction, since a bad one can have the opposite effect. So I agree
with you that better and more powerful abstractions are always good.
Spreadsheets, for example, are a great abstraction for some problems but
terrible for others.

So in order to "leverage powerful concepts" and "express complex ideas" you
need to be able to create your own abstractions. In this way my proposal is
very different from Excel, which barely allows you to create any new
abstractions.

What the manifesto apparently doesn't make clear is that I'm not proposing a
new programming language but a programming _model_. The language part is more
like byte-code than Java but it's really a protocol. The result is something
that would serve a similar purpose as HTTP but would be vastly more powerful.

I think the mistake of these "languages designed for children" that you have
in mind is that they add training wheels instead of removing the pedals. The
latter does a much better job of teaching the basic concepts (steering,
balancing) first; once you've mastered those, pedals and gears increase your
efficiency.

When it comes to software, I would take a similar approach and create a
software authoring/execution environment that can be used by novices and
experts alike. Novices would learn the concepts (message passing, abstraction)
without actually writing code. The coding would then be added to increase the
user's efficiency by providing a more concise notation. The actual language
used would best be domain-specific to achieve maximum efficiency.

~~~
orclev
Ah, now this is a very interesting conversation and you're right, the
manifesto doesn't get any of that across.

Abstractions are the key to powerful programming languages, but they're also
the thing that people struggle with the most when learning programming. It
appears a certain segment of the population is just not capable of thinking in
abstractions in the way that programming requires (or at least they do not
care to invest the effort to learn how). Math has a similar problem, with high
level maths requiring a level of abstract thinking most people are not
comfortable with.

The flip side of that, though, is that all abstractions leak. Abstractions are
convenient shorthand, but to properly use them you still need to understand
the thing they're abstracting over, and that becomes more true the larger and
more powerful the abstraction is. The leap from machine code to assembly
language, for instance, is a very small abstraction; it's mostly a matter of
mechanical symbol replacement, with very nearly a 1-to-1 mapping from assembly
mnemonic to machine op (there's some small nuance around single- vs. multi-byte
ops, as well as register vs. immediate vs. address operands). Because the
abstraction over machine code is so thin, it's not terribly important to
understand the actual machine code: when the abstraction does leak, it does so
in only small and minor ways, and it's usually easy to figure out what's going
on. Once you start working in larger and more powerful abstractions, it
becomes a lot more important to be able to peek behind the curtain, as it
were. Understanding classes, for instance, and the nuance involved in dynamic
dispatch in something like C++, is very important to properly understanding
their behavior, limitations, and tradeoffs.
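That near-mechanical symbol replacement can be sketched as a toy assembler: little more than a lookup table. The two opcodes below are real 6502 immediate-mode encodings, but the assembler itself is a hypothetical two-instruction sketch, not any real tool.

```python
# A toy assembler for two 6502 immediate-mode instructions.
# The mnemonic -> opcode mapping is a straight table lookup,
# which is why the abstraction over machine code leaks so rarely.
OPCODES = {"LDA": 0xA9, "ADC": 0x69}  # load accumulator, add with carry

def assemble(lines):
    code = []
    for line in lines:
        mnemonic, operand = line.split()
        code += [OPCODES[mnemonic], int(operand, 0)]
    return bytes(code)

print(assemble(["LDA 0x01", "ADC 0x02"]).hex())  # a9016902
```

Each source line becomes exactly one opcode byte plus one operand byte; nothing is reordered or synthesized, so reading the machine code back is trivial.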

When you talk about using this new Excel like model as a generic programming
model I'm immediately reminded of the actor model. It's a very similar design,
but it's also not really a good fit for certain kinds of tasks. That's part of
the problem when you start talking about some kind of generic model (or
abstraction as the case may be), it likely will work quite well for certain
classes of problem but be wholly unsuitable for others. I also don't see it
actually addressing the issue the manifesto brings up, which is to promote
programming literacy. The vast majority of people are going to have no
interest in learning about the various abstractions programmers employ. It
could be argued that they would benefit from doing so, and that some classes
teaching basic programming abstractions should be mandatory at either the
high school or college level, but I suspect we'd see as much success there as
we do with higher-level maths like statistics and calculus.
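For reference, the actor model mentioned above can be sketched minimally: each actor is a mailbox plus a loop that handles one message at a time, and all interaction happens by sending messages. A Python sketch under those assumptions (not part of the manifesto's proposal):

```python
import queue
import threading

class Actor:
    """Minimal actor: a mailbox and a thread that processes one message at a time."""

    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, msg):
        # The only way to interact with an actor: drop a message in its mailbox.
        self.mailbox.put(msg)

    def stop(self):
        # None is used as a shutdown sentinel.
        self.mailbox.put(None)
        self.thread.join()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:
                return
            self.handler(msg)

# Usage: an actor that accumulates the numbers it receives.
received = []
adder = Actor(received.append)
for n in (1, 2, 3):
    adder.send(n)
adder.stop()
print(sum(received))  # 6
```

Because each actor processes its mailbox sequentially, there's no shared-state locking inside a handler, which is what makes the model attractive for concurrency yet awkward for tasks that want synchronous, fine-grained data sharing.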

As for the toy languages, most of them aren't too terrible, but in the
interest of not getting bogged down in a study of advanced topics they tend to
force certain abstractions, such as being loosely typed or having no
namespacing. Most real languages provide various escape hatches to let you
bypass the abstractions when necessary (in extreme cases FFI allows for
escaping the language entirely), but all those escape hatches tend to
introduce dark and mysterious corners in the language that are _very_ hard to
explain to novices, because they tend to strip away all the layers of
abstraction, exposing the dangerous moving parts of the language. In order to
avoid confusing novices with features it's assumed they'll never use, most toy
languages elide the escape hatches you'd expect of a real language and instead
simply say "No, you can't do that" when the student wants to do something that
isn't built into the language. In particular, being
able to link to and utilize C libraries is a hugely powerful feature that most
real languages support, but which tends to introduce all kinds of
complications not only at the language level but also at the tooling and build
level that most toy languages want to avoid.
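As an aside, the FFI escape hatch looks like this in practice, e.g. Python's `ctypes` calling straight into libc. A minimal sketch, assuming a Unix-like system where `find_library` can locate the C library:

```python
import ctypes
import ctypes.util

# Locate and load the system C library (the ultimate escape hatch:
# we've left Python's abstractions entirely and are calling raw C).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C signature: size_t strlen(const char *s);
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello"))  # 5
```

Note how much machinery leaks through even for one call: byte strings vs. Python strings, C type declarations, platform library lookup. This is exactly the "dangerous moving parts" territory that toy languages wall off.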

------
justaman
I think there is room for a "global librarian". Moreover, when AI starts to
program itself, having a logically linked system that can be queried for
potential implementation strategies will be beneficial for integrating new
features. However, I think Google can already do this to some degree. Perhaps
it's time for something like "site: github, lang: python, tags: [csv, excel,
graph]". Then add some local db that can receive notifications from GitHub on
changes?

------
nradov
Viewpoints Research Institute (Alan Kay's group) is working on solving some of
those problems. They have published some interesting papers although I don't
know how practical their ideas are.

[http://www.vpri.org/html/work/ifnct.htm](http://www.vpri.org/html/work/ifnct.htm)

------
oZe
I propose capital punishment for people who do not put enough effort into
optimizing memory usage and performance. I know we have more than 64 KB of RAM
now. That does not mean we have to add useless shit that just bloats
everything.

~~~
rtens
What should be the punishment for premature optimization?

------
jrochkind1
Yet another "If only Hypercard had been the future" dream.

I agree it would have been nice if it had been. But there are Reasons it
didn't work out that way, and probably isn't gonna go there.

~~~
rtens
What do you think those reasons are? In the case of HyperCard I think it's
that there wasn't any inter-stack linking, let alone networking. What I'm
crying about is that Smalltalk was not the future.

~~~
jrochkind1
It's too hard/expensive to make that kind of abstracted GUI that lets you make
real software, and too hard to keep it updated as the environments/contexts
change. As in your example: networking became important, it wasn't part of
HyperCard, and it would have taken a lot of time and skill to make it so in a
useful way.

It's just too hard. You can make that kind of layer for a special-purpose
domain (say, making simple games, or sound engineering, or what have you), but
it's hard even to do that right, let alone a general-purpose universal
environment. Software is expensive enough to develop with quality the
'ordinary way', let alone trying to develop and maintain some kind of layer on
top that lets people do it without... without what? Without knowing what
they're doing? At some point, if you are able to make it actually powerful
enough, it's going to be just as 'hard' as anything else, isn't it?

~~~
inimino
HyperCard failed, but not necessarily for technical reasons.

It might be worth making some hypotheses and testable predictions here about
how much complexity is needed for various tasks, and how well current tools
are doing at avoiding unnecessary complexity.

I think surveying the current landscape and past efforts, and getting a dialog
going along the way, would be a lot more valuable than immediately starting to
design a solution.

I'm very confident there are opportunities here, but not so confident about
the chances of any particular approach.

------
dustingetz
Hi rtens, I would like a concrete (not abstract) explanation of what a cell is
and how to compose an application with them; your docs are very abstract.
Perhaps a hello world application?

~~~
rtens
The thesis mentioned elsewhere contains some concrete examples but my goal
with the manifesto was to get to the root of the problem I'm trying to solve
which is quite abstract indeed. I do not have a document yet that visualizes
what I want to build but I'm working on that. I'll post updates on twitter.

------
z3t4
I'm also working on solving this problem, but my solution is to teach vanilla
HTML, CSS and JavaScript by making an IDE with WYSIWYG, real time analysis and
other goodies.

~~~
rtens
That's great. What's your target group?

~~~
z3t4
Trying to get people who make Word documents to make web documents instead,
using HTML. I think it's time to bring HTML and the web to the mainstream.
Companies are currently paying hundreds of thousands of dollars to copy/paste
from Word into a CMS that stores the document in a database blob, when they
could make HTML documents themselves for peanuts. Once you have the documents
in HTML format instead of a closed binary format, it's possible to use all the
nice tools we developers use, like git.

------
alpos
The cells concept seems like a global git repo atop an NFS. While such a thing
might help the people who choose to use it, I think you will have to be very
careful to avoid the competing-standards trap:
[https://xkcd.com/927/](https://xkcd.com/927/)

If you are truly passionate about bringing programming to more people, then I
suggest starting by trying to teach each existing language to a different
person who doesn't know programming. The intro experiences of each of those
people should confirm or refine your ideas about what needs to be built in
order to get the maximum number of people programming.

Alternately, you could start from the observation that even most programmers
don't build most of their own solutions. If that is true, then a good thing to
build might be a general programming language that would get most programmers
to start building most of their own solutions.

Either way, the result should be a language so much simpler to write in, and
with such a short time-to-useful-solution, that not only will most programmers
choose to use it to solve their own problems, but most programming instructors
will choose to use it as a first language.

