
TL;DR

non-programmers tended to define/use

- declarative event-based rules over imperative flow

- set manipulations instead of 1by1 iterative changes

- list collections instead of arrays, with the ability to sort left implicit

- rule-based exclusions for control flow instead of complex conditionals with NOTs

- object-oriented state but no inheritance

- abstract past/future tense to describe information changing over time instead of defining state-variables

other issues

- under-specified mathematical operations

- AND used as logical OR e.g. "90 and above"

- life-like motion/action assumed instead of defined; e.g. not defining x,y location and frame-by-frame delta
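To make the second finding concrete, here's a minimal sketch (names and data made up) contrasting the element-by-element iteration programmers reach for with the whole-set rule non-programmers tend to state:

```python
# Hypothetical task: give every student scoring 90 or above an "A".
scores = {"ann": 95, "bob": 88, "cat": 90}

# Imperative, one-by-one iteration (the "programmer" style):
grades_imperative = {}
for name, score in scores.items():
    if score >= 90:
        grades_imperative[name] = "A"

# Declarative, whole-set rule (closer to the non-programmer style):
grades_declarative = {name: "A" for name, score in scores.items() if score >= 90}

assert grades_imperative == grades_declarative == {"ann": "A", "cat": "A"}
```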




In the paper, they dismiss the example of "if you score 90 and above" as incorrect use of "and" (or too vague to turn into any formal logic).

However, looking at your summary, it suddenly sticks out that this issue could actually be connected with the tendency to use set manipulation. "If you score 90 and above", and I suspect many other seemingly abusive uses of "and", can turn out to be perfectly valid if you consider them as (infinite) set manipulations. However, I'm not sure which of the two explanations is closer to the actual cognitive processes behind such a phrase. Seems to me that humans are naturally comfortable with many set manipulations, while current computers require fairly elaborate abstractions in order to deal with them as sets, especially infinite. This might be one of the gnarly parts of human -> machine translation.


To elaborate on this point, "if you score 90 and above" could be parsed in two different ways:

  1. "if [you score 90] and [you score above 90]"
  2. "if you score in [{90} 'and' {x: x > 90}]"
[1] is unsatisfiable. [2] is still ambiguous, as it's unclear in natural language whether 'and' is a set union or intersection.

In mathematical terminology, 'and' in this context would mean set intersection, but I don't think it's necessarily "incorrect" to have this mean set union in natural language.

To elaborate, take: C = A union B. Here are two propositions about C:

  I. forall c in C. (c in A) OR (c in B)
  II. (forall a in A. a in C) AND (forall b in B. b in C)
These propositions are not equivalent. [I] actually implies C is a subset of (A union B), and [II] implies that it's a superset. Note that set builder notation for C, {c: (c in A) OR (c in B)} is structurally very similar to [I].

I think [II] is the interpretation of 'and' that is intended through the natural language use. It's essentially a form of set construction: I am constructing a set; it contains 90, and it contains the numbers above 90. As a set construction it also adds an implicit constraint that the new set can't contain anything not in the operands, so that resolves the superset ambiguity (it would be patently absurd in natural language to claim that 55 could be in the set "90 and above").
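The two parses can be sketched over a finite sample of scores (the function names here are illustrative, not from the paper):

```python
# Reading [1]: 'and' as logical conjunction -- unsatisfiable.
def parse_1(score):
    return score == 90 and score > 90  # no score satisfies both

# Reading [II]: 'and' as set construction, i.e. score in ({90} union {x: x > 90}).
def parse_2(score):
    return score == 90 or score > 90

assert not any(parse_1(s) for s in range(0, 101))
assert [s for s in range(85, 95) if parse_2(s)] == [90, 91, 92, 93, 94]
```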


I don't read it that way. To me the use of the word "AND" is a red herring, as it wasn't meant to imply a logical operation, but rather in the usual sense (and meaning "plus that") to denote the range as half open.

  [90, infinity)
as opposed to:

  (90, infinity)
https://en.wikipedia.org/wiki/Interval_%28mathematics%29#Inc...
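In code, the distinction between the two interval readings is just whether 90 itself passes the test (a trivial sketch, function names made up):

```python
def in_closed_interval(score):
    return score >= 90   # [90, infinity): 90 included

def in_open_interval(score):
    return score > 90    # (90, infinity): 90 excluded

assert in_closed_interval(90) and not in_open_interval(90)
assert in_closed_interval(91) and in_open_interval(91)
```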


The ACTUAL meaning:

3. "if [you score 90] do [x] and if you score [above 90] also do [x]"


I think about it as:

"90 and above" is the smallest set X satisfying both:

* 90 is in X

AND

* Above(90) is a subset of X

Another description of this set is:

An element x is in the set "90 and above" if x is 90 OR if x is in Above(90).

AND/OR are dual to each other, and it's just a matter of perspective on whether you're building up the set (OR) or constraining the set (AND).
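Both perspectives yield the same set, which can be checked on a finite stand-in for the infinite range (the cap at 200 is arbitrary, purely for illustration):

```python
UNIVERSE = range(0, 201)             # finite stand-in for the naturals
above_90 = {x for x in UNIVERSE if x > 90}

# "Building up" view (OR): x is in the set if x is 90 OR x is in Above(90).
built_up = {x for x in UNIVERSE if x == 90 or x in above_90}

# "Constraining" view (AND): the smallest set that contains 90 AND
# contains Above(90) as a subset.
constrained = {90} | above_90

assert built_up == constrained
```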


Analysis: non-programmers don't naturally code to the Von Neumann machine architecture. They instead declaratively define higher-level rules and operations with a naive grammar strongly influenced by human language and experience.


I don't think "programmer" and "non-programmer" is a binary distinction. Most programmers don't think twice when iterating over an array, but even experienced programmers sometimes take a non-programmer-like approach when they see an unfamiliar problem.

In other words, if we could make a programming language that's more approachable to non-programmers, we might benefit everyone.


I thought the most interesting one was the use of then as a temporal construct. First this, then this, then that. That's the common use in literature, but that sense of then is implied by statement sequence or function composition in programming. When you are programming the use of then as conditional consequence is so natural you don't think of it as different to the way most people (66%) use it.
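The two senses of "then" can be put side by side in a few lines (a toy sketch, variable names invented):

```python
log = []

# Temporal "then": implicit in statement sequence -- no keyword needed.
log.append("first this")
log.append("then this")
log.append("then that")

# Conditional "then": the programmer's if/then, a consequence rather
# than a point in time.
score = 95
if score >= 90:  # "if you score 90 and above, THEN..."
    log.append("grade A")

assert log == ["first this", "then this", "then that", "grade A"]
```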


This is something I'm having trouble explaining to Business Analysts: they tend to think about and define systems declaratively, whereas when we receive a set of requirements we prefer to work from an imperative set of instructions. I've not succeeded in conveying this (they either think devs are just being difficult/lazy, or they fail to grasp what we need). Has anyone else run into this? If so, any tips on how it could be better conveyed?


They are telling you what to build, you should be deciding how to build it. A business analyst who writes requirements as imperative pseudo code is doing most of your thinking for you.


That is like saying that the person who came up with the conjecture did more work than the person who discovers the proof...


Maybe you could use a more declarative language? Or build a domain-specific language yourself?


Indeed, if they give you declarative rules, use prolog or some expert system engine.
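Even without Prolog, the shape of "evaluate the declarative rules directly" can be sketched in a few lines of Python. The (condition, action) rule format here is a made-up illustration, not any real engine's API:

```python
# Toy forward evaluation: run every rule whose condition matches the fact,
# instead of hand-translating the rules into imperative control flow.
def apply_rules(rules, fact):
    return [action(fact) for condition, action in rules if condition(fact)]

rules = [
    (lambda score: score >= 90, lambda score: "award grade A"),
    (lambda score: score < 50,  lambda score: "flag for review"),
]

assert apply_rules(rules, 95) == ["award grade A"]
assert apply_rules(rules, 40) == ["flag for review"]
assert apply_rules(rules, 70) == []
```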



