

Call by Meaning [pdf] - da02
http://www.vpri.org/pdf/tr2014003_callbymeaning.pdf

======
namin
The challenge they envision to address reminds me of a section in Bret
Victor's talk, The Future of Programming.

"[13:54]

So, say you’ve got this network of computers, and you’ve got some program out
here that was written by somebody at some time in some language; it speaks
some protocol. You’ve got another program over here written by somebody else
some other time, speaks a totally different language, written in a totally
different language. These two programs know nothing about each other. But at
some point, this program will figure out that there’s a service it needs from
that program, that they have to talk to each other. So you’ve got these two
programs—don’t know anything about each other—written in totally different
times, and now they need to be able to communicate. So how are they going to
do that? Well, there’s only one real answer to that that scales, that’s
actually going to work, which is they have to figure out how to talk to each
other. Right? They need to negotiate with each other. They have to probe each
other. They have to dynamically figure out a common language so they can
exchange information and fulfill the goals that the human programmer gave to
them. So that’s why this goal-directed stuff is going to be so important when
we have this internet—is because you can’t write a procedure because we won’t
know the procedures for talking to these remote programs. These programs
themselves have to figure out procedures for talking to each other and fulfill
higher-level goals. So if we have this worldwide network, I think that this is
the only model that’s going to scale. What won’t work, what would be a total
disaster, is—I’m going to make up a term here, API [Application Programming
Interface]—this notion that you have a human programmer that writes against a
fixed interface that’s exposed by some remote program. First of all, this
requires the programs to already know about each other, right? And when you’re
writing this program in this one’s language, now they’re tied together so the
first program can’t go out and hunt and find other programs that implement the
same service. They’re tied together. If this one’s language changes, it breaks
this one, it’s really brutal, it doesn’t scale. And, worst of all, you
have—it’s basically the machine code problem. You have a human doing low-level
details that should be taken care of by the machine. So I’m pretty confident
this is never going to happen. We’re not going to have API’s in the future.
What we are going to have are programs that know how to figure out how to talk
to each other, and that’s going to require programming in goals."

http://worrydream.com/dbx/ (quote taken from this helpful transcription of
the talk at http://glamour-and-discourse.blogspot.ch/p/the-future-of-programming-bret-victor.html)

~~~
JoeAltmaier
It can still be an API, but it's a 'meaning API', where you probe for common
ground and negotiate formats.
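
Such a handshake might look something like this minimal sketch. Everything here is invented for illustration: the paper doesn't prescribe this protocol, and the service, capability, and format names are hypothetical:

```javascript
// Hypothetical "meaning API": instead of calling a fixed method name,
// a client probes a service for a capability, intersects the offered
// formats with the ones it understands, and calls using the agreement.
const clockService = {
  capabilities: {
    "current-time": { formats: ["unix-epoch-seconds", "iso-8601"] }
  },
  invoke(capability, format) {
    const now = new Date();
    if (format === "unix-epoch-seconds") return Math.floor(now.getTime() / 1000);
    if (format === "iso-8601") return now.toISOString();
    throw new Error("unsupported format: " + format);
  }
};

function negotiate(service, capability, acceptedFormats) {
  const offer = service.capabilities[capability];
  if (!offer) return null; // service doesn't provide this capability
  const common = offer.formats.find(f => acceptedFormats.includes(f));
  return common ? service.invoke(capability, common) : null;
}

const epoch = negotiate(clockService, "current-time", ["unix-epoch-seconds"]);
```

The point is that the calling code names a goal ("current-time") rather than a concrete method, and fails gracefully to `null` when no common ground exists.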

------
wyc
I typically code with the manual alongside my editor so syntax lookups are
cheap. I prefer dumb and honest APIs coupled with good type systems so that
understandable semantics are expressed within the routines themselves.

Therefore, I'm pretty apprehensive about using a system that seemingly
encourages ambiguity, e.g. on page 6:

    
    var epochMethodName = K.ask(clock, "what is the name of that's
        property (methodName) which represents the current time in
        Unix epoch format?").methodName
    

If the system is rigid and expects a structurally-similar sentence, then isn't
that specification more difficult to remember than simple method names? This
could form an unnecessary implicit schema, basically the same as long-winded
function calls.

If the system is flexible enough to understand a variety of structures
conveying the same meaning, what guarantees do I have of a consistent result?
We developed a separate language for mathematics to (among other reasons)
eliminate ambiguity: the explicit "exclusive or" exists precisely because the
English word "or" is ambiguous. This reminds me of the Dijkstra paper on
natural language programming that resurfaced on HN recently:

[https://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/E...](https://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EWD667.html)

I don't think within the code itself is the right place to help the computer
make sense of my meaning. Such a complex system may mislead the user into
thinking that the system is smarter or more capable than it really is, e.g.
you can't tell a toaster to K.ask("what's your property (methodName) to bake a
cake?"). How is natural language-based service discovery different than a
lookup for all APIs implementing the cake-baking interface? I don't think I
understand the bridge that the paper makes from natural-seeming language to
formal logic.

Instead, I think something along the lines of this proposed system could be
useful while programming. While writing code, I would enjoy a way to
look up "a method for this object that does something along the lines of XYZ".
I would be interested in a more powerful search engine for documentation that
leverages type systems and comments more usefully than plain word matching.
Right now, the closest thing is Stack Overflow.
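
A documentation search that weighs declared types alongside comment keywords might be sketched like this. The index format, scoring weights, and entry names are all invented for illustration, not taken from the paper:

```javascript
// Hypothetical sketch: rank documented functions by how well a query
// matches their doc-comment keywords, boosting entries whose declared
// return type matches what the caller wants.
const docIndex = [
  { name: "getEpochSeconds", returns: "number",
    doc: "current time in unix epoch seconds" },
  { name: "formatDate", returns: "string",
    doc: "format a date as a human readable string" }
];

function searchDocs(index, query, wantedReturnType) {
  const words = query.toLowerCase().split(/\s+/);
  return index
    .map(entry => {
      let score = words.filter(w => entry.doc.includes(w)).length;
      if (entry.returns === wantedReturnType) score += 2; // type info outweighs word hits
      return { entry, score };
    })
    .filter(r => r.score > 0)
    .sort((a, b) => b.score - a.score)
    .map(r => r.entry.name);
}
```

Even this toy version shows how a type system gives the search signal that plain word matching lacks.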

------
andor
_" In this paper we discussed the large shortcomings in today’s programming
practices to support the growing need of harnessing data and software
components found on the Internet. The tasks of discovery and ensuring
compatibility between components are for the most part done manually, making
existing systems hard-to-manage and fragile and building new systems a real
challenge."_

If a developer wants to understand their system as a whole, they need to
understand all included libraries as well. Automatic checks can help a lot,
but they must work _really_ well before they can replace manual work. Their
proposal, in particular, adds another language for metadata on top of the
existing code, which was self-describing in the first place. When the actual
code changes, the metadata description must be changed as well. How do you
ensure that the description stays accurate? Use yet another specification
language and model to check it?

~~~
tree_of_item
How was the existing code "self describing"? That sounds like what people who
don't want to write comments or documentation say, that their code is
magically "self describing".

~~~
andor
The code is self-describing because it's written in a formal language with
precisely defined semantics. By reading it, you should be able to understand
what it does.

Here's an "obfuscated" version of the code from the paper:

    
    a = {
        b: function() {
            return Math.floor(new Date().getTime()/1000);
        }
    };
    

This function takes the number of milliseconds since 1970, divides it by 1000
to get the number of seconds, and rounds that value down. There is no need for
an additional explanation of _what_ the function is doing.

Granted, this is a toy example, and I'm certainly not against comments that
describe what the next x lines of complex code are doing. But the paper is
about using metadata to help a computer, not a person, to understand what's
going on.

------
vinceguidry
I've noticed in my own coding the problem that the first paragraph outlines:
having to manually discover the most useful libraries, then maintain the code
gluing each of those libraries to the rest of the codebase.

It's a pain in the neck. Doing it once or twice is fine, but once you get to
around half a dozen projects in various states of development or production,
going back to change the code on all of them when something changes gets
pretty onerous.

I've started to recognize that I basically need to either vendor dependencies
or build interface classes to separate responsibilities.

So now I maintain an 'extractions' gem that implements common patterns as
modules. For example, right now I'm building a Persistence module that I can
include to persist data: point it at the instance variable containing the data
(stored as a Set of value class instances), tell it which engine to use and
the name of the value class it's persisting (so it knows which columns to make
in the table), and it handles all the dirty work automatically, presenting a
clean interface to the calling library.

So now I can change or update the persistence library used in the extractions
gem, say, by changing ActiveRecord to Sequel or even just separate DB
libraries and templated queries, and all of my projects can use the new code
with a simple 'bundle update'. Shared conventions, made easier by other
extractions, make everything work together fluidly.
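
The idea of a stable interface hiding a swappable engine, which the comment above describes in Ruby terms (ActiveRecord vs. Sequel behind an extractions gem), can be sketched abstractly in JavaScript. All names and the engine contract here are made up to illustrate the pattern, not taken from the commenter's code:

```javascript
// Sketch of the "interface class" idea: callers depend only on the
// Persistence wrapper, so the backing engine can be swapped without
// touching any calling code.
class Persistence {
  constructor(engine) { this.engine = engine; }
  save(record) { return this.engine.insert(record); }
  all() { return this.engine.query(); }
}

// One interchangeable engine honoring the minimal insert/query contract.
// A real project would have another engine wrapping an actual database.
function memoryEngine() {
  const rows = [];
  return {
    insert(r) { rows.push(r); return r; },
    query() { return rows.slice(); }
  };
}

const store = new Persistence(memoryEngine());
store.save({ id: 1 });
```

Swapping engines then means changing one constructor argument in one place, which is what makes the 'bundle update' workflow above safe for the callers.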

Eventually I'd love to get to the point where I don't even have to 'bundle
update': running production code could poll the git server daily to see if the
extractions gem has changed, then pull the repo and restart the server. The
code for that can be stored, again, within the extractions gem.

This could get a little dangerous, so I'm eventually going to build a testing
apparatus to test the effects of updating the extractions gem. Spin up a bunch
of VMs, provision them, pull the repos, run the test suites and finally start
the servers. Any errors get reported.

------
tbrownaw
Precision has to come from somewhere.

Having your actual production code read "call the function that does <this>",
with the interpreter/compiler looking it up based on NLP of the documentation,
seems very silly, fragile, and hard to debug.

Having an IDE that can look things up for you (whether by full NLP or just
keywords) in the API reference and give you a relevance-sorted list to pick
from seems like a great idea. (Some already match what you type as a substring
anywhere in the method name, rather than just as a prefix, which is already
pretty good.)
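
That substring-anywhere lookup is easy to sketch. This is a toy version with an invented ranking rule (earlier match position sorts first, so prefix matches win), not how any particular IDE implements it:

```javascript
// Sketch of substring completion: match the typed fragment anywhere in
// a method name, ranking matches that start earlier in the name higher.
function lookup(methodNames, typed) {
  const q = typed.toLowerCase();
  return methodNames
    .map(name => ({ name, pos: name.toLowerCase().indexOf(q) }))
    .filter(m => m.pos >= 0)                 // keep only names containing the fragment
    .sort((a, b) => a.pos - b.pos)           // prefix matches sort to the top
    .map(m => m.name);
}

const api = ["getTime", "setTimeout", "timestamp", "getTimezoneOffset"];
```

Typing "time" surfaces `timestamp` first (the fragment is its prefix) while still listing `getTime` and friends, which a prefix-only lookup would miss entirely.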

