
Study of non-programmers' solutions to programming problems [pdf] - yxlx
http://www.cs.ucr.edu/~ratana/PaneRatanamahatanaMyers00.pdf
======
jtolmar
Top three takeaways for me: event-based logic, sets instead of loops, and
using past tense instead of state. Events and LINQ-like queries are popular
enough; that last one is interesting.

Especially in an environment where you mostly interact with objects via
events, I think querying an object's past sounds pretty doable. Naively we
could hold on to all events ever and query for one that matches what we're
talking about. Less stupidly, these past tenses are usually in forms like "if
this happened recently" or "if this happened ever" which a compiler could
rewrite into a variable that the relevant event sets.

So, the compiler sees "if Pacman ate a power pellet within the last ten
seconds" in whatever syntax it accepts. It goes to Pacman's "eat power pellet"
function and appends code to set Pacman's last-power-pellet-eaten variable,
which it has to introduce. The original conditional gets rewritten in terms of
timestamp comparison.
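
A minimal Python sketch of that transformation (the `Pacman` class and its names are hypothetical, standing in for compiler-generated code):

```python
import time

class Pacman:
    def __init__(self):
        # Compiler-introduced variable: when the event last fired.
        self._last_power_pellet_eaten = None

    def eat_power_pellet(self):
        # The original "eat power pellet" handler, with the
        # compiler-appended timestamp bookkeeping.
        self._last_power_pellet_eaten = time.monotonic()

    def ate_power_pellet_within(self, seconds):
        # "if Pacman ate a power pellet within the last ten seconds",
        # rewritten as a timestamp comparison.
        if self._last_power_pellet_eaten is None:
            return False
        return time.monotonic() - self._last_power_pellet_eaten <= seconds

pacman = Pacman()
pacman.eat_power_pellet()
assert pacman.ate_power_pellet_within(10)
```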

~~~
zyxley
The "sets instead of loops" one makes me think of Ruby, where operations on
sets are a basic part of the philosophy in a way they aren't in a bunch of
languages.

Even the basic "do this 5 times" is expressed in the same way as a set
comprehension:

    
    
        this_set_of_items.each do |item|
          puts "hey, look at #{item}"
        end
     
        5.times do |index|
          puts "look, a line!"
        end

~~~
DanitaBaires
I took it more like the way you specify certain jQuery operations such as:

    
    
        $('.card').hide();
    

to mean "hide all cards", or

    
    
        $('.card.green').show();
    

to mean "show all green cards". It's really comfortable to think about
operating on the set like this and not having to worry about looping on the
individual items.

~~~
buro9
In fact, thinking of it as a set frees the computer to do things concurrently,
invisibly.

A lot of other comments here say "ah, but that's just JS iteration still"...
but there's really nothing to prevent it being the equivalent of a Go
routine... no order implied, no iteration implied, but all the things in the
set will have the functions hide() or show() called.
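
One way to picture this, sketched in Python with a hypothetical `Card` sprite standing in for the jQuery example: the operation is stated over the whole set, and a thread pool is free to run the calls in any order, or in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

class Card:
    def __init__(self, name):
        self.name = name
        self.visible = True

    def hide(self):
        self.visible = False

cards = {Card("red"), Card("green"), Card("blue")}

# "Hide all cards" as one set-level operation: no order implied,
# no explicit iteration, just hide() applied to every member.
with ThreadPoolExecutor() as pool:
    list(pool.map(Card.hide, cards))

assert all(not card.visible for card in cards)
```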

------
ZanyProgrammer
My perspective, as someone who came relatively later to programming as a
career, is that the skill of "thinking like a programmer" isn't too hard to
acquire once you've taken a few college CS classes and started to simply
program a lot.

Rather, what I'm constantly amazed at (and maybe it's because I'm working in
stereotypical enterprisey environments?) is how complex, convoluted, long, and
hard to follow a lot of code is. So much code goes into what would seem like a
fairly simple task to a non-programmer or novice. And so much code is really
horribly written, designed, and implemented. It's very much true what your
professors say about how you'll spend more time reading code than writing
it. Anyways, I'm sure my perspective as both a junior dev and a late career
switcher is somewhat biased, but there you have it.

~~~
twa927
Why do you hold the assumption that code can be made much simpler than it is?

I've done many years of programming and wrote some bigger programs from
scratch by myself. Almost always the actual code is far more complicated than
one would assume it should be. And it's not because it's of low quality. It
happens that writing something nontrivial that works in the real world
requires all the details, checks and abstractions it contains.

If you were in school recently, maybe you were led to think that programming
is an extension of math, where everything is pure, simple, and provable. But it
isn't - programming is much messier.

~~~
gambler
I'm not OP, but I can answer the question as if it was addressed to me. I
believe that a lot of "real life" code can be simplified because I've had
_way_ too many cases where I open some program and remove 60 to 80 percent of
its code without affecting the overall logic. (This usually includes obvious
duplication, needless abstraction layers and things that reimplement
functionality of standard libraries.)

The difficult part usually isn't the restructuring itself, but understanding
what the program is supposed to be doing in the first place. Fortunately,
deleting code is one of the best ways to learn about what it's doing. You need
some good tooling to do this safely, though.

~~~
cesarbs
> needless abstraction layers

In my experience this is one of the worst offenders in large code bases.
I've worked on code that does what it's supposed to do, but you can't see that
it's doing it because the solution to the initial problem statement is
completely diluted in a mess of factories, abstract classes, gigantic class
hierarchies, delegates, and so many other abused patterns.

In some cases the extreme level of abstraction can be justified by the need to
have a system that's extensible in many different places. But more often
that's not it, it's really just patterns being overly used and abused. You can
rewrite the code to be much clearer with less than half the original size and
have no negative impact from it.

~~~
Sacho
And after you've "fixed" this "problem" for the current state of the app, you
happily leave, and the next person is tasked with adding new features, and
suddenly they find the app incredibly rigid and impossible to modify, so they
add some abstractions like factories, delegates, etc...

I mean, anecdotes. I find it funny that in a profession allegedly heavily
influenced by science, we keep trading anecdotes ("in my experience", "more
often than not", etc.) instead of having any hard data, tests, and experiments
to compare.

~~~
cesarbs
I wasn't advocating writing extremely rigid, non-extensible code. I was
alluding to unnecessarily abstract code bases where e.g. there's a class
hierarchy with 7 classes when in fact 3 would properly describe the problem
domain. Some people get _really_ carried away coming up with abstractions and
in the end they just write a lot of meaningless or purposeless code.

------
jbclements
As someone with a definite and well-established viewpoint on CS education, I
sort of expected to hate this paper... but I didn't. I thought it made some
excellent points, and I very much liked the idea that people not trained as
programmers might be able to point us toward new paradigms.

At this point in the development of programming languages, the problem is not
really that we can't build languages that do what you want; by and large, for
unambiguous specifications (and yes, that is a big qualifier), we can.

At this point, then, the conversation shifts from "how can I meet the
machine's needs?" to "how can the machine meet my (programming) needs?"
Another analogy: we're no longer just stone-age people looking for a rock that
doesn't shatter when we hit things with it; we can shape our rocks now, and
we're trying to figure out what shape allows us to hit things hard without
cutting our hands.

Go, functional and declarative programming! Oops, I gave it away. Sorry.

------
Deregibus
I feel that at its core programming is about taking a conceptual idea (e.g.
pac-man moving around a maze) and determining the unambiguous logic that
describes it. The language used to express that description has a significant
effect on the end result, but it's the ability to develop the logic in the
first place that really separates "programmers" from "non-programmers".

"Non-programmer" isn't meant as a slight. This style of problem solving works
great a significant portion of the time. Natural languages can describe a
solution to a lot of problems very concisely because a) there's a lot of
implicit context that clears up many of the potential ambiguities, and b)
you're typically present and available to handle any unexpected situations
that may arise. For many problems, the best solution is one that can be
specified quickly, will work 90% of the time, and can be easily adjusted for
most of the other 10% of the time. Natural languages and fuzzier thinking work
great for this.

But this approach doesn't work well for problems where the solution is either
too complex to be easily described using natural language, or situations
where data sizes or time constraints make it unfeasible for you to be
available to handle unexpected situations. In this case the solution requires
all of the logic to be precise, unambiguous, and developed up front. It's a
different way of thinking than what has typically been asked of humanity, and
natural languages are pretty poor at expressing that logic.

I think the paper has some good points, but I'm not sure how much you can
really draw from it other than verification that natural-style problem solving
doesn't work well for the type of problems that are typically solved by
programming. If you asked a bunch of experienced programmers to write programs
that will tell you how to "go to the store and buy me some milk", you'd
probably get similar results about how the programs didn't handle the many
different unexpected situations that might occur in such a simple task.

~~~
petra
>> But this approach doesn't work well for problems where the solution is
either too complex to be easily described using natural language,

For such solutions - let's say in the business domain - the programmer could
work with the domain experts on the general structure of the domain model -
business objects, members, and methods - while the domain experts design and
fill in the methods, maybe validation rules (with some tool), and all the
small code sections (so it might be easier to think unambiguously) - and then
the model will be fed to an automated system like naked-objects/ISIS that will
handle all the technical stuff automatically.

Then, if the system detects an ambiguity, it will offer debug info (and a code
view) in a format that domain experts understand, and let them fix it - or ask
for help. And of course you could add testing and code review with programmers
and domain experts (who can now read the code) to the mix.

And yes, sure, this won't fit every system. But it may extend the power of
the domain expert.

>> or situations where data sizes or time constraints make it unfeasible for
you to be available to handle unexpected situations.

For such situations, a search engine with access to the full code, specified
in an unambiguous language (not necessarily natural), could help tools find or
build code containing an optimized form, and maybe offer help about how to
integrate it.

~~~
njharman
It's fundamentally wrong to have non-domain-expert, "generic" devs. To be
effective, your devs need to become domain experts in whatever they are
developing.

------
dsjoerg
This is a wonderful idea. There are few programming environments that are
useful to non-programmers, with the possible exception of Excel.

A study like this helps us gain inspiration and to remind ourselves how non-
programmers think about programming problems.

In the words of the linked pdf: "Programming may be more difficult than
necessary because it requires solutions to be expressed in ways that are not
familiar or natural for beginners."

~~~
mcguire
Medicine may be more difficult than necessary because it requires solutions to
be expressed in ways that are not familiar or natural for beginners.

Law may be...

Physics may be...

Getting the point? Beginners may not always be the best yardstick of
everything.

~~~
justinlardinois
I think you're missing the point. We don't have any control over the fact that
medicine and physics are complicated because they reflect nature. Law is
complicated because it's an attempt to create a set of rules that apply to all
possible human scenarios.

This study, and the person you're replying to, aren't saying that all
programming languages are unnecessarily complicated and should be changed.
They're just saying that they're inaccessible to beginners. The authors of the
study note that they're designing a programming language for beginners, so
these things are useful for them to know.

~~~
_asummers
One of Rich Hickey's talks talks about this (I think it was 'Are We There
Yet?'). In it, he references musical instruments, which are absolutely NOT
designed for beginners. Why should they be? Tools shouldn't be designed
specifically to PRECLUDE beginners, but why should any of them be optimized
for beginners?

A good tool should allow someone to progress from beginner to novice to
expert, but at the end of the day, the expert is who the tool is really
designed for. Anything else is just bonus.

~~~
TheOtherHobbes
>In it, he references musical instruments, which are absolutely NOT designed
for beginners.

Musical instruments increasingly are designed for beginners. A typical
software suite makes it easy to make decent music by tapping a few buttons
more or less in time.

 _Classical_ instruments aren't designed for beginners, because they weren't
designed at all - they evolved from crude and simple historical originals.

But they're a subset of music as a whole.

Arguably the problem is that computer languages are NOT designed for anyone.
This is why so many software products and services are significantly broken so
much of the time - to an extent that would be ridiculous and completely
unacceptable for hardware objects.

I don't see a problem with at least exploring new language models that have
roots in perceptual psychology instead of in hardware design.

~~~
echlebek
Classical instruments are designed, just as much as any other artifact created
by humans. If pianos were not designed, then neither were bicycles, looms or
printing presses.

------
Kenji
The argument that we have to take people who are unfamiliar with the work and
look at how they approach it is alienating me. Would you design mathematical
notations based on the opinion of 5th graders? Would you build a skyscraper
based on how 5th graders feel about it because it's more natural?

EDIT: For those who didn't read the article, I say 5th grader here because a
large part of the study is actually about them. Not because I arrogantly
compare people from other fields to children.

~~~
vidarh
Yes, except where what the 5th graders come up with fails to meet other
important criteria.

A lot of notation is the way it is because of history and inertia, rather than
because practical considerations or requirements mean it needs to be that
way.

If there are changes we can make to make languages more approachable without
making them worse in other ways, it makes sense to opt for making them more
approachable.

~~~
wrp
> A lot of notation is the way it is because of history and inertia...

Less than you may think. Leibniz and other mathematicians spent years debating
notational forms in mathematics before settling on what we have. See Florian
Cajori's _A History of Mathematical Notations_.

~~~
vidarh
So history and inertia by now, in other words. The point is not that people
didn't think about them when they first came up with them, but that they are
not generally regularly revised based on e.g. practical teaching experience or
research.

------
Too
When was this written? They quote an article from 1985 saying that programming
languages have not been designed with human interaction in mind. I think quite
a lot has happened since then.

I also think the conclusion that people prefer sets over loops is biased
because the problem domain is a database table where it is more natural to
work with sets.

In any case I find the study interesting to compare with how people write
software requirements, and not with how languages are designed. Requirements are
often written in natural language form and are often written by product
managers that are non-programmers. I found the answers to be extremely similar
to user stories seen in requirements, in particular the use of end user
perspective when describing how pac man should move.

~~~
machinelearning
2001 src:
[http://www.cs.cmu.edu/~NatProg/publications.html](http://www.cs.cmu.edu/~NatProg/publications.html)

------
jeletonskelly
Of course non-programmers aren't used to being extremely precise with their
grammar to express solutions to logic problems. We have the capability of
getting the "gist" of what someone is trying to express that machines currently
don't. Expressing those thoughts to a machine requires precise syntax and
well-formed flow control. I honestly don't find much in this paper that's very
surprising, but I think the participants did pretty well given that they
aren't expected to be extremely precise on a day-to-day basis.

~~~
masterzora
It's not just about non-programmers being insufficiently precise. It's also
about the different ways they express precision compared to how you have to do
it for existing programming languages. For example, the paper talked about how
the participants tended to talk about doing something to everything in a set
in vectorised terms whereas scalar languages tend to be more common outside of
scientific settings.
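
A toy Python illustration of the two styles (the score-curving rule itself is made up): the scalar version walks the items one by one, while the vectorised version states a single operation over the whole collection.

```python
scores = [72, 95, 88, 61]

# Scalar style: an explicit loop over individual items.
curved_loop = []
for score in scores:
    curved_loop.append(min(score + 5, 100))

# Vectorised style: one operation applied to the collection as a whole.
curved_vector = [min(score + 5, 100) for score in scores]

assert curved_loop == curved_vector == [77, 100, 93, 66]
```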

------
capote
Is it really necessary to make programming more accessible to non-programmers?
(This is how I interpret some of the introduction about making programming
easier for a 'beginner'.)

How is it different from making structural engineering more accessible to non-
structural engineers, dentistry more accessible to non-dentists, etc?

Take my latter question not so literally—I mean to ask what is wrong with
everyone having their own profession as a result of their passions/natural
talent?

~~~
Splines
I don't think this is a matter of making programming easier for people who are
not interested in programming - I look at this as a study of how "normal"
programming could be made to model how the human mind might think naturally.

Non-programmers often need to "program". For example, consider writing email
filtering rules. The wording and flow could be improved with the learnings
from this study. "Apply this label to this mail and all the ones like it, then
archive all of those mails".
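
A rough Python sketch of such a rule (the mail records and the "same sender" reading of "all the ones like it" are assumptions for illustration, not any real mail API):

```python
# Toy mailbox: each mail is a plain dict.
mails = [
    {"sender": "deals@shop.example", "labels": set(), "archived": False},
    {"sender": "deals@shop.example", "labels": set(), "archived": False},
    {"sender": "friend@example.org", "labels": set(), "archived": False},
]

def like(example, mail):
    # One possible notion of "like it": same sender.
    return mail["sender"] == example["sender"]

# "Apply this label to this mail and all the ones like it,
# then archive all of those mails."
selected = [m for m in mails if like(mails[0], m)]
for m in selected:
    m["labels"].add("shopping")
    m["archived"] = True

assert [m["archived"] for m in mails] == [True, True, False]
```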

~~~
bordercases
You would be surprised how much you can get away with by learning to work with
the machine directly. The way that the human mind works without training is
naturally ambiguous and chaotic. There are of course benefits to offloading
some of the thinking and parsing to the compiler or interpreter or whatever;
that's why we have Python, because C and Assembly weren't enough.

But human beings have been inventing formal notations because they end up
being the right tool for the job, most of the time, to guide thought, even if
the up-front cost of using them takes some work. I suspect that being afraid of
formality will gradually make you lose power over the computer, and will cause
unpredictable results when it isn't crystal clear where the impedance to
communication and understanding with the machine lies.

I'm all for the more declarative style of programming though, even if it
doesn't resemble any particular natural language specifically (remember that
papers like these still have a grand Anglo bias in their implementations –
imagine the sparsity of a Chinese version of these studies, and the potential
difference of the results!).

------
c3534l
I wouldn't know how to write something that "summarizes how I (as the
computer) should move Pacman in relation to the presence or absence of other
thing." I can't figure out what that's supposed to mean. Given the image
shown, I would have said "if PacMan reaches a wall, the computer should not
continue moving PacMan in that direction."

Also, why anthropomorphize the computer? And why are we "summarizing" instead
of instructing the computer? The question asked for a declarative solution and
then is surprised that the children gave declarative answers. [edit: I misread
what was being said in that portion of the paper, and that portion of this
comment has been replaced by this bit in square brackets]

The experiment seems extremely sloppy. It can't possibly show what it purports
to show: there's way too much in the design of the study that could have
biased the results, the sample sizes are tiny, the task is unclear, and the
quantitative analysis is subjective.

~~~
masterzora
> _The list of things they taught kids for the PacMan study also seemed
> very... well, technical._

What list? Perhaps I'm just missing it or there's another link I haven't seen
but they don't seem to mention having taught the kids anything--especially
anything technical--before the Pac-Man study.

~~~
c3534l
My mistake, that list was something the researchers had _before_ the task was
conducted, not part of it.

------
wslh
Not exactly the same, but many years ago I did a small experiment: I gave a few
relatively easy logical puzzles to my family (including my grandmother, who
only finished elementary school) and to friends studying CS and math. In
general, my family solved the puzzles faster, making very basic (but useful)
representations, than my friends who followed more formal thinking.

Obviously I can't extrapolate or make big assumptions from this tiny
experiment, but I had underestimated my family members' capabilities. Also, it
is obvious that in complex areas of study it's very difficult to come up with a
solution if you are a novice.

------
kriro
Note: This was published in 2001. Here's a Scholar link of other papers citing
it (none of the top ones are very recent) in case anyone wants to dig deeper:

[https://scholar.google.de/scholar?cites=9767640341703630929&...](https://scholar.google.de/scholar?cites=9767640341703630929&as_sdt=2005&sciodt=0,5)

A good search term is "computational thinking" which probably needs to be
combined with a couple of synonyms for "non-programmer".

------
caseyf7
Reminds me how non-programmers quickly pick up the vectorization of R and
programmers keep writing loops.

------
z1mm32m4n
It feels a little bit disingenuous to say that you can use a spreadsheet's
builtin "sum" function to compute a sum. It honestly sounds like an argument
in favor of functional languages; here's a Haskell program that's just as
simple:

    
    
        Prelude> sum [1, 2, 3]
        6
    

Hey, you could even search for a library that sums lists for you in C, then
include and call that function.

~~~
eru
Yes, it's a simple first order function on some kind of collection.

A sum function is not very 'functional' at all. (It sure feels more at home in
functional languages than in imperative, bit-by-bit piecemeal programming,
though.)

~~~
z1mm32m4n
I think the idea that the author was trying to express was that in a
spreadsheet language, you select a data range and then choose a function to
perform on that data. That's the "functional" aspect I took away from my
reading of it.

~~~
the_french
It's no huge surprise that spreadsheets implement a form of FRP. Specifically,
you can look at loeb's function [1] as an example of the relation. Being able
to focus on the data itself while easily composing operations is definitely
functional in nature.

However, FP is not a silver bullet either. In practice I think humans care a
bit less about correctness than compilers do, and would like some aspects of
the language to be 'fuzzy', for lack of a better word. To most people "1" and 1
are the same thing, so why shouldn't the compiler understand that? There could
be potential for a dynamic-FP language of sorts.

[1] [http://blog.sigfpe.com/2006/11/from-l-theorem-to-spreadsheet...](http://blog.sigfpe.com/2006/11/from-l-theorem-to-spreadsheet.html)

~~~
eru
'Fuzziness' and FRP are pretty orthogonal.

I.e. even in Haskell as it is, it's pretty easy to add an 'if' that takes
'truthy' and 'falsy' values like in Python. Just use a type class 'Booly' that
includes a conversion function toBool.

------
grapevines
_For example, a typical C program to compute the sum of a list of numbers
includes three kinds of parentheses and three kinds of assignment operators in
five lines of code_

Let me take the opportunity to plug a new language which I have spent the last
5 months designing: github.com/jbodeen/ava

An ava solution -- 9 lines of code, and 1 set of parentheses -- would look
like this:

    
    
      let rec sum list = 
        let are_we_at_the_end = 0 in
        let take_a_number_and_the_rest_of_the_list n list =
          add n ( sum list )
        in
        list 
          are_we_at_the_end
          take_a_number_and_the_rest_of_the_list 
        in
    

Maybe we need to _zoom out_ of ancient languages into more intuitive paradigms
if programming is to become accessible to more people.

~~~
im_down_w_otp

      -module(math).
      -export([sum/1]).
    
      sum(ListOfNumbers) ->
          sum(ListOfNumbers, 0).
      sum([Number | RestOfList], Subtotal) ->
          sum(RestOfList, Subtotal + Number);
      sum([], Total) ->
          Total.
    

edit: blargh... sorry @simoncion, I didn't see your reply. It apparently takes
me longer than 14min to type that out on my phone without typos + proper
spaces to treat it like code :-)

~~~
simoncion
> sorry @simoncion...

No worries. It's astonishing how absolutely awful on-screen phone keyboards
_still_ are for doing anything more involved than writing a brief human-
language message.

~~~
eru
That's because they are optimized for that.

No reason, apart from economics, why even with current IDE technology we
couldn't make one that works well for specific programming languages.

~~~
simoncion
> That's because they are optimized for that.

That's like a big chunk of my point. :) Phone/tablet keyboards _really_ suck
for anything other than _short_ human-language text entry. You wanna write a
five-page paper? A code snippet to demonstrate a problem? Forget about it.

It's astonishing that these devices have been around for _at least_ seven
years and their packed-in keyboards still fail at these tasks.

------
wrp
There are two aspects of this study that I think nullify its value.

First, they take naive user reasoning as normative. Nobody remains a naive
user for long. When I first started learning to program, shifting from
set-based to iteration-based reasoning about collections was a bit of a jolt,
but it didn't take me long to become comfortable with it.

Second, they ignore that programming is an activity requiring much more
precise reasoning than typical daily life. You must learn to think differently
and it is beneficial for the notation to enforce this.

I will also point out that English-like programming languages have been
promoted for decades and they haven't caught on.

------
mikehollinger
Essentially, isn't Excel "programming for non-programmers"?

------
petra
So what's the closest language/language subset that fits this ?

~~~
jeletonskelly
SQL

~~~
justinlardinois
This is definitely true, and why I had more than a few stumbles learning it
after I was already an experienced programmer.

Case insensitivity in string comparisons is a good example. It makes perfect
sense to a non-programmer but is not at all what a programmer would expect.

~~~
alayne
What case insensitivity are you referring to?

~~~
defen
MySQL with default collation / locale / operators does case insensitive string
comparison IIRC

~~~
justinlardinois
Exactly. Something like 'P' = 'p' evaluates to true.

------
erez
I always tell non-programmers (and sometimes programmers interfacing with the
systems I maintain):

Don't tell me what I need to do, tell me what You want to do. Mostly clients
seem to think they need to come up with the way to accomplish stuff, rather
than express the need and let the programmer figure out how to meet that need.

------
todd8
The problem with programming isn't solving simple problems. The hard part is
dealing with the hard problems. An important contribution of Computer Science
is the recognition that abstractions (functional, procedural, object-oriented,
relational, ...) are necessary when writing software of any significance. The
paper's simple problems perhaps give insight into how a programming language
for children should be designed, but for the most part its recommendations
should be ignored.

The paper points out that non-programmers (5th graders) have trouble with NOT,
AND, and OR, and suggests in a separate paper that table-based queries can
avoid some confusion with these Boolean operators. I'm sorry, but a programming
language without Boolean operators is going to be worthless. Just because 5th
graders haven't learned De Morgan's Laws doesn't mean that we should throw out
Boolean operators. What about lambda expressions, functions as first class
elements, higher dimensional arrays, recursion, complex numbers, binary and
decimal internal integer representations, floating point with exponents, built
in log functions, setjump/longjump, call-with-continuation, threads,
concurrency, interrupt handlers, atomic locking, streams, files, relational
data bases, the list goes on and on.

Programming in a programming language for kids tends to be tedious and very
concrete. Scratch bored me to tears. Perhaps it's a good fit for kids, but it's
not going to be used to write a web server. I just don't see how these
"experiments" give us any insight into non-toy programming.

In the 1970's there were still plenty of professional programmers and fellow
grad students who felt that programming in assembly language was the highest
form of programming. It was challenging, I did my fair share, but it was also
brutish and nasty. There were no powerful abstractions to facilitate one's
programming. Everything was concrete and explicit and terrible. The history of
programming languages has been to build a tower of increasingly powerful
abstractions over the hardware below. C++ templates, Haskell's type system,
Scheme's call-with-continuation, SQL, these are so far removed from the simple
little operations being performed by the processor, but they give us the power
to write the programs that we do.

The use of abstraction in programming isn't limited to programming languages.
Libraries supporting matrix operations won't make sense to a 5th grader or
anyone else that hasn't studied matrices. So how is a 5th grader going to
describe rotation of a graphical element? They don't know matrix math or
trigonometric functions? Should these be eliminated from programming
languages? Operating systems also insulate us from the hardware through
abstractions not present in the physical hardware: processes, scheduling,
virtual memory, files, abstract sockets, networks, threads. What about the
other tools we use like relational data bases, source code control systems
like git, and bug trackers? How about pseudo-random numbers and encryption?
What do 5th graders know of these?

Finally, some professional programmers have to understand deeper issues,
programming complexity, turing incompleteness, regular expressions, context-
free grammars, LR parsing, performance of algorithms, correctness arguments.
All of these issues have some impact on programming. Are we really going to
throw all of this out because it is confusing to 5th graders? or even adults
that haven't studied these issues?

~~~
CJefferson
Firstly, no one is suggesting we get rid of the hard programming. But
actually, I disagree with your point of view.

I do research into A.I., and often work with companies. I find problems fall
into four categories. Approximately:

50% of problems are extremely trivial problems which we've known how to solve
for at least 10 years; we just need to help the companies use the existing
techniques.

15% of problems require techniques from the last 10 years or so, so require
extensive up-to-date expertise but aren't interesting research.

5% of problems are interesting, hard problems which lead to interesting
research problems.

30% of problems are so far beyond the state of the art they are impossible.

Helping people in that first 50% solve their problems without having to talk
to us is an interesting area. At the moment they use tools like Excel and
Microsoft Access. I believe there must be better tools for those problems,
which aren't any more complicated. Why can't my mother easily (for a concrete
example) create and update a schedule for her darts league, without needing me
to help?

~~~
soared
I took a business analytics class which taught SQL and basic machine learning.
We exclusively used an excel plugin, and it was exceptionally easy to get
great results. This sounds like something that would help that 50%.

------
jakelarkin
TL;DR

non-programmers tended to define/use

- declarative event-based rules over imperative flow

- set manipulations instead of one-by-one iterative changes

- list collections instead of arrays, with the ability to sort implicit

- rule-based exclusions for control flow instead of complex conditionals with
NOTs

- object-oriented state, but no inheritance

- abstract past/future tense to describe information changing over time
instead of defining state variables

other issues:

- not-well-specified mathematical operations

- AND used as logical OR, e.g. "90 and above"

- life-like motion/action assumed instead of defined; e.g. not defining x,y
location and frame-by-frame delta

~~~
pierrec
In the paper, they dismiss the example of "if you score 90 and above" as
incorrect use of "and" (or too vague to turn into any formal logic).

However, looking at your summary, it suddenly sticks out that this issue could
actually be connected with the tendency to use set manipulation. "If you score
90 and above", and I suspect many other seemingly abusive uses of "and", can
turn out to be perfectly valid if you consider them as (infinite) set
manipulations. However, I'm not sure which of the two explanations is closer
to the actual cognitive processes behind such a phrase. Seems to me that
humans are naturally comfortable with many set manipulations, while current
computers require fairly elaborate abstractions in order to deal with them as
sets, especially infinite. This might be one of the gnarly parts of human ->
machine translation.

~~~
dkbrk
To elaborate on this point, "if you score 90 and above" could be parsed in two
different ways:

    
    
      1. "if [you score 90] and [you score above 90]"
      2. "if you score in [{90} 'and' {x: x > 90}]"
    

[1] is unsatisfiable. [2] is still ambiguous, as it's unclear in natural
language whether 'and' is a set union or intersection.

In mathematical terminology, 'and' in this context would mean set
intersection, but I don't think it's necessarily "incorrect" to have this mean
set union in natural language.

To elaborate, take: C = A union B. Here are two propositions about C:

    
    
      I. forall c in C. (c in A) OR (c in B)
      II. (forall a in A. a in C) AND (forall b in B. b in C)
    

These propositions are not equivalent. [I] actually implies C is a subset of
(A union B), and [II] implies that it's a superset. Note that set builder
notation for C, {c: (c in A) OR (c in B)} is structurally very similar to [I].

I think [II] is the interpretation of 'and' that is intended through the
natural language use. It's essentially a form of set construction: I am
constructing a set; it contains 90, and it contains the numbers above 90. As a
set construction it also adds an implicit constraint that the new set can't
contain anything not in the operands, so that resolves the superset ambiguity
(it would be patently absurd in natural language to claim that 55 could be in
the set "90 and above").
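
That union reading is easy to make concrete. A small Python sketch, representing the infinite sets as membership predicates:

```python
def exactly_90(x):
    return x == 90

def above_90(x):
    return x > 90

def ninety_and_above(x):
    # "90 and above" read as the union {90} + {x : x > 90}:
    # the natural-language 'and' builds the set up from both parts.
    return exactly_90(x) or above_90(x)

assert ninety_and_above(90)
assert ninety_and_above(95)
assert not ninety_and_above(55)
```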

~~~
coldtea
I don't read it that way. To me the use of the word "AND" is a red herring, as
it wasn't meant to imply a logical operation, but rather in the usual sense
(and meaning "plus that") to denote the range as half open.

    
    
      [90, infinity)
    

as opposed to:

    
    
      (90, infinity)
    

[https://en.wikipedia.org/wiki/Interval_%28mathematics%29#Inc...](https://en.wikipedia.org/wiki/Interval_%28mathematics%29#Including_or_excluding_endpoints)

