
The Essence of Event-Driven Programming [pdf] - michaelsbradley
http://www.cis.upenn.edu/~jpaykin/papers/pkz_CONCUR_2016.pdf
======
lunchladydoris
Since the content is way, way above my head, I'll only comment by saying that
I find it quite annoying how many academic papers don't have a date on the
front page. It's a small thing, but can help to establish context.

~~~
brudgers
One trick I picked up somewhere back in the days when access to papers was by
xerox'd copy (which is to say, the post-mimeograph era) is to look at the
references. The approximate date of the paper is that of the most recent
reference (looks like 2015 here). Because journals want current research,
authors have an incentive to cite the most recent work practical.

Not that I disagree that dates on academic articles would be helpful...though
I think they would tend to reflect publication rather than authorship, which
could differ by a couple of years.

------
Dowwie
Why is it that authors of academic papers refuse to share the publication
date? If it weren't for the file name, I wouldn't have even a remote clue.

~~~
flor1s
I think academics might only consider their work published once it is accepted
by a conference or journal.

If it's a preprint, it is not published yet, hence it has no publication date.
Even if you submit a paper today, it might not be published this year. Some
journal papers take years to revise, and some people get early versions of
their conference paper rejected. For every paper I have sent to a conference,
the conference itself added the conference information (including the year and
conference dates) to the paper.

~~~
eternalban
You have a point, but the publication date, especially in this fast-moving
field, is a low-grade data point. As another commenter wrote in this thread,
we're reduced to deducing a date range by looking at referenced prior work.

I for one am grateful to Cornell for making arXiv happen. They stamp the date
on the preprint, and their URL encodes the general date [/<yy><mm>.<sernum>]
as well:

[https://arxiv.org/pdf/1701.06020.pdf](https://arxiv.org/pdf/1701.06020.pdf)

------
jitl
Can someone explain like I'm 5?

~~~
lstyls
What I got from this paper is that the authors are using a flavor of symbolic
logic called "temporal logic" to formalize concepts fundamental to the event-
driven programming paradigm. The notation seems to make intuitive sense as an
extension to the formal logic I learned in my undergraduate discrete math
course.

I'm not all the way through the paper but I actually find it quite helpful in
explaining the essential meaning of these concepts. I haven't worked with
these concepts much, and the tutorials I've read in my free time have focused
more on practical use of various implementations of eg futures rather than
explaining their actual abstract meaning.

Caveat: I am not familiar with this academic domain so this is my own
interpretation. If someone who is better versed can elaborate or correct me,
please, please do so.

~~~
neel_k
That's pretty accurate! (I'm one of the authors.)

In event-based programming, one of the basic abstractions is something
variously called "events", "futures", or "promises". The idea of a future is
that it is a data structure that can yield a value at some point in the
future, but might not be ready to produce one right away. This idea dates back
to (at least) MultiLisp and the Connection Machine, but many modern languages
support this primitive -- for example, JavaScript, Rust, OCaml, C#, Java and
Scala all have good library support for it.
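As a concrete sketch of that idea, here is a future built with Python's
standard `concurrent.futures` module -- one of the many implementations the
comment mentions; this snippet is an editorial illustration, not the paper's
calculus:

```python
# A minimal illustration of a "future": a handle on a value that may
# not be ready yet, obtained via Python's concurrent.futures module.
from concurrent.futures import ThreadPoolExecutor
import time

def slow_answer() -> int:
    time.sleep(0.1)  # simulate work that is not ready right away
    return 42

with ThreadPoolExecutor() as pool:
    fut = pool.submit(slow_answer)  # returns immediately: a value "eventually"
    print(fut.done())    # may be False: the value might not exist yet
    print(fut.result())  # blocks until the value is available
```

Calling `result()` is the point where "eventually a value" is forced into
"a value now".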

Our basic idea is that futures correspond very closely to the interpretation
of the "eventually" operator in temporal logic. So if we could give a typed
lambda calculus for temporal logic, then we could implement the eventually
operator with futures. The benefit of doing this is that we could design
languages which combine the easiness of reasoning featured by purely
functional programming, while still offering good support for event-based
programming (eg, for GUIs). In addition, if we are coming from a typed
language that has enough invariants, we can also simplify the implementation
of futures (since we know statically that certain problems like deadlock can't
arise).
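The "futures correspond to the eventually operator" reading can be sketched as
a toy monadic structure on futures. The names `Future`, `pure`, and `bind`
below are illustrative assumptions of mine, not the paper's notation or its
typed lambda calculus:

```python
# Toy sketch: "eventually A" modelled as Future[A], a wrapped thunk,
# with pure (now implies eventually) and bind (sequencing of eventuallys).
from typing import Callable, Generic, TypeVar

A = TypeVar("A")
B = TypeVar("B")

class Future(Generic[A]):
    def __init__(self, thunk: Callable[[], A]):
        self._thunk = thunk

    def run(self) -> A:
        return self._thunk()

def pure(x: A) -> "Future[A]":
    # A value available now is, in particular, available eventually.
    return Future(lambda: x)

def bind(fa: "Future[A]", f: Callable[[A], "Future[B]"]) -> "Future[B]":
    # Sequencing: once fa yields a value, feed it to the next step.
    return Future(lambda: f(fa.run()).run())

# (eventually int) chained through (int -> eventually int):
result = bind(pure(20), lambda n: pure(n * 2 + 2))
print(result.run())  # 42
```

This is only the shape of the correspondence; the paper's contribution is
giving it a proper type theory with temporal-logic semantics.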

My personal goal is to figure out the fundamental primitives of interactive
programming. Ultimately I'd like to be able to go from a framebuffer to a
comfortable GUI toolkit in a few hundred lines of code, so that teaching the
principles of how to implement these things can fit into a semester-long
course. (Since there is a fairly small limit on how much code a student can
reasonably write/comprehend in a term, the better we understand a problem the
more we can teach.)

~~~
chriswarbo
Very nice explanation.

> we could design languages which combine the easiness of reasoning featured
> by purely functional programming, while still offering good support for
> event-based programming (eg, for GUIs).

Are there any particular difficulties (or niceties!) when these are combined?

For example, does this cause implementation details to leak, like requiring
the programmer to choose between multiple definitions of things like function
types: "a -> b", "a -> eventually b", "eventually a -> b", "eventually a ->
eventually b", "a eventually-> b", "a eventually-> eventually b", ....?

Does it make reasoning easier or harder, e.g. regarding resource usage (space
leaks, lazy IO, etc.)? How does it compare to lazy evaluation?

If someone uses events _internally_ to a library, e.g. to read from some
stream, could they encapsulate that such that users of their API wouldn't have
to notice/care about the events? I'm picturing something like exceptions, or
lazily-evaluated IO values, which can trigger errors inside seemingly pure
code, e.g. like in Haskell where the `readFile` call in `do { content <-
readFile "foo.txt"; return (toUpperCase content); }` might work fine, but then
an IO error is triggered inside the pure `toUpperCase` function. Would such
things cause reasoning problems about futures?

~~~
jpfed
Well, one can imagine that a smart enough language could perform some trivial
mappings to reduce the zoo of potential function types: a -> b can be
trivially mapped to both (a -> eventually b) and (eventually a -> eventually
b).
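Those two mappings can be sketched in a few lines, modelling "eventually a" as
a zero-argument thunk -- an assumption made purely for illustration:

```python
# Lifting a plain function a -> b into the "eventually" forms,
# where "eventually A" is modelled as a zero-argument thunk.
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def delay(x):
    # A value now is also a value eventually.
    return lambda: x

def lift_result(f: Callable[[A], B]):
    # a -> b  becomes  a -> eventually b
    return lambda a: delay(f(a))

def lift_both(f: Callable[[A], B]):
    # a -> b  becomes  eventually a -> eventually b
    return lambda fa: (lambda: f(fa()))

double = lambda n: n * 2
print(lift_result(double)(21)())       # 42
print(lift_both(double)(delay(21))())  # 42
```

The reverse direction (eventually b -> b) is the one that cannot be supplied
for free, which is why the distinction matters in the types.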

------
jraedisch
∆⊢e:A

Can someone explain this (from p. 3) to me, or point me to an explanation? I
am especially unsure about the usage of ∆ and : in this context.

~~~
mafribe
As this paper is about a Curry-Howard correspondence, the statement has two
(equivalent) readings.

Reading 1. It's a typing judgement. You can read ∆⊢e:A as "the program e has
type A, assuming the free variables in e are typed as described by ∆".

Reading 2. It's a logical judgement. You can read ∆⊢e:A as "the proof e is a
proof of the logical formula A, assuming the free variables in the proof e
stand in for proofs as described by ∆".
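For a concrete instance of Reading 1, here is a toy type checker where
`delta` plays the role of ∆. The two-constructor language is invented for
illustration and is far smaller than the paper's calculus:

```python
# A tiny checker for the judgement  ∆ ⊢ e : A.
# delta maps free variable names to their types; e is a nested tuple.

def check(delta, e):
    """Return the type A such that delta ⊢ e : A, or raise."""
    kind = e[0]
    if kind == "var":  # ∆ ⊢ x : ∆(x)
        return delta[e[1]]
    if kind == "add":  # ∆ ⊢ e1 + e2 : int, if both operands are int
        assert check(delta, e[1]) == "int"
        assert check(delta, e[2]) == "int"
        return "int"
    raise ValueError("unknown expression")

# {x : int} ⊢ x + x : int
print(check({"x": "int"}, ("add", ("var", "x"), ("var", "x"))))  # int
```

Under the Curry-Howard reading, the same `delta` would list the assumed
proofs available while checking that `e` proves `A`.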

~~~
jraedisch
Thanks! I took a look at the Wiki page
[https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...](https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence)
but didn't see any Deltas there.

~~~
s_ngularity
A great overview of this stuff is
[http://homepages.inf.ed.ac.uk/wadler/papers/frege/frege.pdf](http://homepages.inf.ed.ac.uk/wadler/papers/frege/frege.pdf)

~~~
jraedisch
And this one actually has Deltas in it. Thanks!

