Hacker News
Eve: Programming designed for humans (witheve.com)
1070 points by ibdknox on Oct 28, 2016 | 374 comments

I think that Eve is tackling the wrong problem.

Allow me an analogy: "Bronk, the math designed for humans." Instead of dense algebraic expressions like "3x+49", you get to write "thrice the value of x plus 49." You may consider this a straw man, but I think that if you look hard at existing programming languages, you'll see that they are all designed for humans, and that the challenge in programming is in formulating your thoughts in a precise fashion. Should languages create higher-level abstractions to allow humans to reason about programs more efficiently? Yes! But that's not what this environment is about.

I do see one possible rebuttal to this, which would be an entirely different form of programming that is to traditional programming what google search is to the semantic web; that is, rather than specify programs precisely, we give examples to an approximate system and hope for the best. In many ways, that's how our biological systems work, and they've gotten us a long way. I don't see that happening in Eve, though.

This seems to always happen when people try to make programming "more human". Programming languages succeed by walling off ambiguity. The better and faster they do it, the stronger the language, even if the syntax looks ugly.

Even your example shows it instantly. We know how to read 3x+49, but of "thrice the value of x plus 49" we would have to ask: did you mean 3 times what you get from adding 49 and x, or 49 more than 3 x's?

Projects to humanize programming always seem to suffer from "magic genie" syndrome. When you ask a genie for $1mm you don't expect him to go rob a bank or kill your dad for his life insurance to get it. Human language makes tons of implicit assumptions about the recipient. It would take a general (strong) AI to make that work with a computer.

The ultimate expression of success would have your "program" simply read "I'd like a game just like Flappy Bird but with my logo instead of the bird".

Actually, we agree completely with this view. We tried going down this path [1], and ultimately concluded it was the wrong direction, for many of the reasons you point out here.

But Eve is a full programming language. The "humane" aspects are not about making the language more ambiguous, but about changing the focus of tooling from the machine to the human. It's about things as simple as Unicode support for language localization. Or rendering headers in your code. It's about making the syntax and semantics very small and easy to understand. It's about closing the write-compile-test loop, by showing the result of code /in/ the actual code. It's about debugging tools that work with you to solve a problem.

We're just saying we don't want to compromise. We want a beautiful, concise, easy to understand language AND we want humane tooling. In order to get that, we had to abandon the traditional imperative programming model, and that comes at a cost. But I think in the long term it will be worth it.

[1]: http://incidentalcomplexity.com/2016/06/14/nlqp/

I was initially confused by the "human" line too, but once I looked at the video and the examples I understood what you guys are trying to do.

There are a lot of really interesting approaches there. I really like the ideas!

I think a lot of service devs (read: stateless) won't get it. But people who code front end and deal with business logic daily might find something to love in here.

Worth discussing if nothing else! You got my star!

>changing the focus of tooling from the machine to the human

>It's about debugging tools that work with you to solve a problem.

>we want humane tooling

I know how hard it is to explain what exactly your product does. I myself constantly struggle with this. Eve is an interesting concept that is worth a closer look and discussion. But you guys need to learn how to explain its benefits in less ambiguous language.

Well, the linked page and videos explain the benefits of the current release pretty well. They should take less than 30 minutes to view. I think part of the reason they use ambiguous language is that they see this programming language as the first step in a series of more ambitious goals. For the last two years they've also been building prototypes that they believe work toward those goals.

...So, literate Smalltalk. Or literate Lisp.

so basically Java?

No, basically COBOL, which supposedly was a programming language so easy to use that managers could code the problems themselves, without those pesky programmers that slow everything down all the time.

Good tooling, easy to understand, unambiguous? Sounds like Java.

Why can't people just use Java?

Must be a server-side dev. Java client-side trainwrecks include: Struts, Tapestry, JavaServer Faces, AWT, Swing.

True it's unfair to blame Java alone for these, because client side is more difficult-- we've been searching for solutions for decades in a variety of languages.

Modern front-end is a mush of dozens of tools/contexts, etc. Way more complex than TeX, and way less reliable and with minimal debugging context. But as the Eve guys have said, we're using the same layering of precompilers and interpreters we've always done. It's a mess because it's not just Java (or Python or Ruby) it's CSS HTML JS XML JSON, and the whole rest of the alphabet soup.

It's about time we reconsider platform as a whole again rather than merely a sum of fragments. That's why I find Eve interesting.

there are quite a lot of bad practices surrounding the language, dating back to when the enterprise tooling was a mess and XML was all the rage. We're ten years past that, but the language hate has never ceased (see my comment: -1)

I get the hate from the frameworks craze, but if one is forced into a crazy bloated framework without recourse, that's not Java's fault.

apart from that, you can debug servers on the other side of the world with ease, hot-replace code as you execute it, and walk the heap with a lot of different views available; it has recently gotten top-notch profiling, a sane dependency injection and inversion-of-control framework which you can still debug at runtime, all the libraries you can dream of, and bridges to hardware for the most demanding tasks.

I have held out since I love the language. I started using Java when it was new and actually very pragmatic and useful. Hated the enterprise phase, got bogged down by it, but rose from the ashes. Now I'm a strong proponent of simplicity and libs over frameworks. And Java has great speed and simplicity for most tasks when memory management isn't a problem. But it's a constant struggle against old values in the business. And it doesn't take many conservative and bureaucratic people to ruin a creative crowd. So I am looking to jump ship to what I consider in many ways inferior languages, just to be able to be part of a modern culture.

To be clear, I still work at a company which is extremely forward (in an "everything in beta" sort of way). But still these old values hold everyone back and cause endless discussions of frameworks, processes and control structures vs creativity and speed.

What do you think of Scala?

Java? What a horrible language.

How so? (Just genuinely curious, as I always hear this.) I have asked the Java developers at my job if they have this common mindset and they don't seem to agree. I don't work with Java that much so I'm just honestly curious.

Java is a great virtual machine, a good collection of open source libraries, an ok language, and a nightmare collection of best practices and people who enforce them.

I don't have a beef with the language, it is the developers who have never programmed in another language and still think that FactoryBuilderImpls are a good idea. They have never ventured outside of an IDE, but insist that Python isn't a real programming language. I can program Java but I never want to program with "Java developers" if I can help it.

That in a nutshell is why I said Java is a horrible language.

That said, people with a strong preference for a particular language are comparable to religious fanatics. A good engineer doesn't let language preference and emotional attachment to something get in the way of building something better.

If learning something else is too uncomfortable or seems unnecessary to you then you are no better than a religious person.

btw, before I get downvoted the "religious person" analogy was what a senior developer said of me when I was a .NET guy a long time ago.

note: the same applies to the religious sects who worship libraries and frameworks (mostly front-end JavaScript web developers)

Your message could still be effective (or even more effective) without putting down "religious people".

He said religious fanatics, who are people too, but a niche inside a religion. What is wrong with that, as it rings through? Both defend something mostly without basis, with extreme vigor and energy.

Yes! I program PHP because I feel it fills a spiritual void for me.

I'm a Zend-Buddhist.

Cannot edit the post for some reason, but I was sleepy this morning and am not a native English speaker; "rings true", of course.

There's certainly no shortage of crummy developers layering on useless abstractions for no reason other than dogma. Don't worry, soon all those young and thoughtless developers will be coding in nothing but JavaScript, and the crufty Java die-hards will be pumping prime contracts to migrate old Java business engines well into retirement.

Java is incredibly verbose and does not support many modern paradigms very well (see Rx and streams). For that sort of OOP language, C# is far better. The JVM is excellent, however.

> Or rendering headers in your code

Sorry, you lost me right there. If you need headers, you've already run off the rails (no pun intended) IMHO.

Headers as in document headlines and literate programming not as in header files.

Thanks for pointing that out. I was wondering why I was getting so many downvotes.

You seem to conflate "allowing ambiguity" with "not requiring specification of extraneous detail."

It is not a new mistake. Many people thought that programming with GC, or with high-level languages, or with generics, was just a kind of magic that couldn't lead to understandable programs. But these things succeed because, while they allow you to stop worrying about certain details, they do so while still remaining perfectly well-defined and unambiguous.

The fact that you no longer can know how the low-level details will be handled can be very uncomfortable. If that is most of what you have spent your programming career doing, it seems like it must produce ambiguity and inefficiency. But this isn't generally the case. Optimizing OCaml compilers can often produce code faster than C simply because the higher level of expression expected from the programmer leaves more room for the implementation to generate the right low-level code.

I won't say that I know Eve will be successful, but it certainly is possible. It produces unambiguous behavior which just happens to no longer specify lots of details most programming environments make you think about. When I think about the work I do, most of which involves providing live representations of data, and executing certain rules when it changes, I would guess that most of the details I worry about are extraneous and could be dealt with by a sufficiently-smart environment that encompassed both the database and the program.

I would say that GC, generics, and maybe even HLLs are useful because they are tools to constrain the set of possible states the program can possibly enter.

When your language and runtime provide good tooling for constraining the state space, then that allows you to elide specification of those extraneous details.

E.g., some form of GC is a basic runtime requirement for most of the HLLs today. Most of them could hardly exist without it.

Absolutely agree. Similarly, I think we are finding that if you want to program reasonably on distributed hardware or over state spaces too large to hold in memory, you need a programming environment that constrains your ability to specify that things happen in sequential order, or to specify details about how you listen for events or get access to data directly.

Eve may not turn out to be the right abstraction away from details. But the parent comment complaining about the details it takes away from the programmer is probably on the wrong side of history.

I'd say that the details still exist but they've been handled with a very powerful idea which is sane defaults triumphing over endless required specification. In most cases you still can reach the minutia, you just usually don't need to.

There's a difference between producing unambiguous behavior given a specific input and making it easy to create that input in the first place.

I'm with you along the lines of embedding defaults to reduce the boilerplate code needed to get a minimum working app, but there comes a point where this may end up requiring advanced coders to learn how to go deeper in order to override those defaults to get novel results. This could result in a language that's easy for a beginner to get started in but difficult for the intermediate to progress any further.

That sounds like a good description of operating systems, programming languages, databases, or any of dozens of other abstractions that work great, enable people to use them as black boxes without worrying about the details, and also are rewarding to customize or hack on internally for the small subset so inclined.

If that's what this is, a new field outside "programming" that allows an order of magnitude more people to author behavior for simple systems, that'd be amazing.

There are improvements that can be made though - I think one of the best articles I've read about this was here: http://worrydream.com/#!/LearnableProgramming.

It shows lots of tools and ideas that would make learning programming a lot easier.

> When you ask a genie for $1mm you don't expect him to go rob a bank or kill your dad for his life insurance to get it.

Well, maybe not the first time. But players in my campaigns tend to learn quickly.

It's unfortunate the parent is the top comment here. There's a common thing that happens when a new idea shows up that doesn't easily fit into existing categories: (most) people give it a cursory look over, and then decide it's just another instance of boring category X.

This is especially common in discussions about humanizing programming. I think it's partly because people are invested in the current way of doing things, having spent so much time developing their particular skills; and partly because our attempts at serious alternatives have largely been failures, so far. That makes it easy to see any new idea in this space and automatically class it as already understood. But there is room between C++ and toy visual languages, and someone may find something good there yet—and this will illuminate things just that much more if nothing else.

Look into the lineage of ideas the Eve team have moved through to get where they are today, and you've got to admire their search process even if you don't like the results.

In any case, it's patently false that Eve's innovation is analogous to the algebra example given in the parent.

After spending half an hour reading the article I feared the comments would be like this.

The concepts here are fantastic. Unfortunately, as intelligent as they are, many in this industry seem to have difficulty grasping the benefit of user-focused design, abstraction and simplicity.

UI design by CS engineers has always been terrible; programming languages are no exception to that.

It isn't virtuous to suffer unnecessary complexity. And programming shouldn't be complex just because we can manage it in spite of the complexity.

There's still code here, the text is just for people, so I wonder if this is maybe a misunderstanding.

The language presented is a variant of datalog and is as formal as any other language. If you're curious about the semantics, they boil down to Dedalus [1].

As a simple example of that, here's a clock: http://play.witheve.com/#/examples/clock.eve

[1]: https://www2.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-...

For an introduction to Dedalus with less reading, Peter Alvaro's 2015 Strange Loop talk was fantastic.



Edit: Actually Chris Granger gave an Eve talk at the same conference. It's interesting to see how Eve has evolved since.


I think the intent is clear to people with preexisting familiarity with the same ideas: a Clojure dev with some Datomic experience, say. I read 'declarative, set-oriented, database of facts, pattern matching' and thought 'the revenge of Prolog, then'.

This is really nice work. I mean this in the best possible way: I think I could teach my children to use this.

Actually Chris Granger was heavily involved with Clojure. David Nolen gives a great shoutout to Chris Granger and Eve's predecessor, Light Table, at minute 17:00 of his talk at the Clojure TRE conference a couple of weeks ago:


Another implementation of the clock demo


"You may consider this a straw man, but I think that if you look hard at existing programming languages, you'll see that they are all designed for humans, and that the challenge in programming is in formulating your thoughts in a precise fashion."

Non-programmer: OK, Eve sounds great, so, I want to search for a Slack message. How would I do that?

Eve-programmer: Obviously, it's just:

    search @slack
       [#message from body]
Non-programmer: Ummm...OK, great. So now if I wanted to, say, send an email.

Eve-programmer: Easy! In the same way, you just:

    commit @email
      [#email to: "corey@kodowa.com"
              subject: "It's party time!"
              body: "Hey Corey, the party starts this Friday."]
Non-programmer: ...

The challenge of programming has always been wrapping your head around very formal abstractions and "thinking like the machine". These Eve snippets still look very much like a programming language to me. I don't think they will mean anything to a non-Eve-programmer without training in the semantics of how Eve programs work, the syntax of how to build the expressions, and the model over which they operate.

Eve very well may be a big productivity advance over current development environments, but I don't see it eliminating programming as a profession anytime soon.

This milestone is very much about making a programming environment, so you're right, that's still code. Though to address this straw man, compare it to what you would write in Java or even Python. Some of the best feedback we've gotten has come when we've shown people Eve code and told them to ignore the symbols and just read the words. Their eyes really open up and they tell us almost exactly what the block is doing. No, it's not all the way there, but that's a big step forward.

In any case, check out our followup on what Eve is and isn't [1] - we're under no delusion that this is the end user story... yet :)

[1]: http://programming.witheve.com/deepdives/whateveis.html

The "ignore the symbols and just read the words" idea is part of Ruby's promise as well. In practice, I don't personally find it very satisfying, because it's easy to write things that read like they do one thing, but actually do something else. So you have to train yourself to ignore what it says until you figure out what it does.

I think the better metric to optimize for is how easy it is to go from seeing a piece of code for the first time to having a mental model for what its runtime behavior will be. Do you think Eve is strong on this in addition to its "readability"?

> I think the better metric to optimize for is how easy it is to go from seeing a piece of code for the first time to having a mental model for what its runtime behavior will be

Actually yes, we've seen some evidence of this. In one instance, I was demonstrating the syntax of Eve to someone who really only had experience programming HTML, and he was able to point out a bug in my code without even running the program.

I believe this is because the syntax is very minimal, there are very few core language operations (only 9), and the underlying data model is completely consistent throughout the language (everything is a record). You can only query and update records. When you only have a few consistent tools, you can wield each one with greater confidence.
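As a rough illustration of that data model in Python (plain dicts standing in for records; this is my analogy, not Eve's actual engine), "query and update records" can be sketched as:

```python
# Rough analogy only: Eve records modeled as Python dicts in a list.
# "Query" filters records by their attributes; "update" changes them.
db = [
    {"tag": "message", "from": "corey", "body": "party time"},
    {"tag": "message", "from": "chris", "body": "shipping eve"},
    {"tag": "user", "name": "corey"},
]

def search(**pattern):
    """Find every record whose attributes match the pattern."""
    return [r for r in db if all(r.get(k) == v for k, v in pattern.items())]

def update(record, **attrs):
    """Change attributes on a record."""
    record.update(attrs)

messages = search(tag="message")      # query records by tag
update(messages[0], read=True)        # update one record in place
```

The point of the analogy: there is only one kind of thing (a record) and only two things you can do to it, which is what keeps the mental model small.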

Also, in Eve if you want to know what a block does you can just ask for the output right next to the block, so there's no guessing. You can look at it as text, a table, a chart, or any number of visualizations you can come up with. When you have the output and the source code right next to each other, updating in real time, and recompiling automatically as your code changes, you can get a better sense for what each individual part does in a block of code. You can even point at a specific element and ask "What draws this". We hope that all of these features will encourage new users to feel the freedom to explore, make mistakes, and build their own mental model of how Eve works, without having to understand set theory.

Thanks! I hoped that was the case.

Fair enough, but I thought an earlier goal of Eve was to bring programming to non-programmers? Did that change as Eve evolved, or am I just mis-remembering?

It is, but it's a multi-step process. :) Check out the three milestones at the bottom of the post: http://programming.witheve.com/#justataste

If we can eliminate software engineering as a profession our work will be done. There will be nothing left for humans to do, because at that point we will have invented a General AI.

Up until that point, software engineering will be a well paid job. I really don't understand this attitude that everyone should be a programmer or that programming should be easy. It isn't easy. Obviously, we should remove unnecessary friction from the process, but framing your thoughts precisely takes discipline. Building maintainable systems takes practice and understanding. It takes years of study and work to become a good engineer.

I definitely think people should be exposed to programming in school, but we study physics and chemistry there too and nobody expects everyone to be a physicist or chemical engineer. Specialization is a good thing.

That said, I'm down to compete with whatever you people think can replace me. I love a good challenge.

> nobody expects everyone to be a physicist or chemical engineer. Specialization is a good thing.

The difference with software engineers is that they have the power to build their own tools; physicists and chemical engineers largely don't, unless they themselves are also software engineers. You aren't going to use knowledge of chemistry to build general software, but you can always find a use for software engineering in any domain. This puts it in the same category as literacy and mathematics, rather than strictly a specialization pursued as an end in itself.

Don't quote me out of context

> I definitely think people should be exposed to programming in school, but we study physics and chemistry there too and nobody expects everyone to be a physicist or chemical engineer. Specialization is a good thing.

I am saying that people should be taught programming in school, like literacy and mathematics, but that doing so is not going to prepare students to make production-grade software. We need actual engineers for that.

It's not unthinkable that a language, by design, facilitates or enforces precise definitions.

As a far-fetched example, think of Lego. You can't "fail to compile" your Lego bricks. You have a finite selection of bricks (all clearly visible and usually available within arm's reach), and your job is to just lay one at a time. Given any brick, it's "obvious" to a human how it fits with other bricks. The worst you can do is essentially create a crappy Lego design. As you noted, going from a rough thought of

"Uhm I want to build a Dragon of about this size..."

to a finished build requires a powerful AI. But we don't need to go that far to be better than

"Place $brick1 in p=(32,17) at orientation o=(0,pi); place $brick2 in p=(38,15) at o=(-pi,0); ..."

The rigidity of the bricks inherently prevents you from overlapping them; the declaration above does not :)

Not at all. Good tools are very important, and we shouldn't stop working to make them better. What I disagree with is the idea that good enough tools can replace engineers without them becoming General AIs. Until we get to that point, there will be systems that need to be built that only professionals are qualified to work on. I'm not saying that we need licensing boards or any of that nonsense, just that nobody is going to merge poorly constructed code into an important project.

I don't think it was implied good enough tools can replace qualified engineers. But I feel sometimes engineers don't realize that tools can not only be valuable for beginners (itself worthwhile, imo) but actually make experienced professionals more reliable (less error-prone) and work faster.

Taking the Lego analogy further, even if I, an experienced Lego builder, can construct a set entirely in my head, and write a correct assembly script, building it physically is likely faster and certainly less error-prone.

Those kinds of goals are hard to achieve with software (especially considering what we have is already pretty good!), but I think they're a worthy pursuit.

Examples of language/interface design choices that seem strictly beneficial (regardless of experience or project):

- Showing the result of [valid] code changes as soon as possible (a feedback problem);

- Disallowing invalid expressions (a consistency problem);

- Displaying relevant components (functions, libraries, APIs, variables, etc) (a visibility problem);

- Bringing documentation closer to code (making functionality obvious)

Eve tries to tackle some of those challenges, especially the documentation problem and the visibility problem.

Solving those issues can both improve productivity and bring more people into programming.

Okay, but who is your customer, the engineer or the layperson? At a certain point, our needs are going to pull in separate directions. Simple is good, but certain problems have a baseline of complexity that can't be eliminated. Who are you planning to side with?

You would be surprised how non obvious Legos actually are to people not familiar with them already.

This is akin to thinking doors are intrinsically intuitive. There is actually a ton of training that kids go through for that "intuition."

I don't understand your example. If you abstract away the hard part of searching a database and sending an email (as most programming languages do), then every language is as easy as the snippets you posted. In python:

    messages = slack.search()
    email.send(to='corey@kodowa.com', subject='hi', body='yo')
Are you saying that Eve removes the difficulty of interfacing with external programming systems? If so, I haven't seen that in their documentation... For one, you still need to know HTML to do anything useful.

On my first admittedly unfair impression I feel the same way, but I'd love to be proved wrong. Chris Granger is a serious programmer whose work deserves careful consideration.

This is just a DSL; you can easily write something equivalent or even better in other languages, but it doesn't really tell you anything about the underlying language. What is the difference with:

    let messages = search slack_message body
to me this version seems more readable without the square-bracket, hash, and @ distractions. Why, as a non-programmer, would I ever want to know when to use @ instead of # and when to put square brackets? I think the F# version that I posted is easier to read and to write for a non-programmer.

No, it's more than just saying let messages = search, as in Eve this is re-run whenever the data it searches over changes, whereas in other languages it runs only when you explicitly call it.

Now this sounds very dangerous - one of the big problems of declarative style is that it's too easy to accidentally build a system that does much more than you intended, totally killing performance.

There is a very, very big difference between running a search once and re-running it whenever search changes; if you want it one way then you definitely don't want it the other way and that would be a serious problem.

That's not an "in this language" issue; you can do it both ways in any language. But this choice you made should be (a) explicit and (b) obvious, which it doesn't seem to be in this case, where a reader can reasonably expect the search to be run once.

Actually in Eve it is explicit: there is bind, which keeps its results up to date as the data changes, and then there is commit, which applies a one-time change.
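Roughly, in Python terms (a loose spreadsheet-style analogy of mine, not Eve's actual engine), the distinction looks like this:

```python
# Toy analogy: "commit" runs once; "bind" keeps its output in sync with
# the data, like a spreadsheet cell. Not Eve's real implementation.

class Store:
    def __init__(self):
        self.records = []
        self.bindings = []          # views re-run on every change

    def commit(self, record):
        """One-time change: add a record, then refresh all bound views."""
        self.records.append(record)
        for fn in self.bindings:
            fn(self.records)

    def bind(self, fn):
        """Continuously maintained view: re-runs whenever data changes."""
        self.bindings.append(fn)
        fn(self.records)

store = Store()
seen = []
store.bind(lambda recs: seen.append(len(recs)))   # re-runs on each commit
store.commit({"tag": "message", "body": "hi"})
store.commit({"tag": "message", "body": "bye"})
# seen is now [0, 1, 2]: the bound view ran once at bind time,
# then again after each commit.
```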


Did we read the same article? Were you somehow mistaking the comments for code? The actual code is pretty clearly based around some well-defined deterministic semantics and still demands precision and consistency from the programmer.

And moreover its promise appears to be exactly "higher-level abstractions to allow humans to reason about programs more efficiently", in direct opposition to what you wrote here.

EDIT: Of course programming languages are designed for humans... they wouldn't exist otherwise. But they also tend to be pretty strongly influenced by the imperative nature of assembly language. The Eve team seems to be asking: What if we ignored that entirely? In a way this seems a lot like a spreadsheet - you can write a lot of little code blocks that aren't executed in any particular order and yet produce deterministic results.
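To illustrate that spreadsheet-like quality with a toy sketch of my own (not Eve's actual evaluator): if blocks are re-run until nothing changes, the final state comes out the same no matter what order they run in.

```python
# Toy illustration: run blocks in any order, repeating until a fixpoint
# is reached, so the final state is deterministic and order-independent.
import itertools

def run_to_fixpoint(blocks, state):
    changed = True
    while changed:
        changed = False
        for block in blocks:
            new = block(dict(state))     # each block maps state -> state
            if new != state:
                state, changed = new, True
    return state

def double(s):                           # block: y = 2 * x
    s["y"] = 2 * s["x"]
    return s

def plus_one(s):                         # block: z = y + 1 (once y exists)
    if "y" in s:
        s["z"] = s["y"] + 1
    return s

# Every ordering of the blocks converges to the same result.
for order in itertools.permutations([double, plus_one]):
    assert run_to_fixpoint(list(order), {"x": 3}) == {"x": 3, "y": 6, "z": 7}
```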

Is this a response to the literate programming aspect of Eve that's presented? The actual datalog code that is written seems quite succinct for what it accomplishes.

The math example is more informative when it's not something trivial. For example, take the following statement about a sequence X_n:

∀ ε > 0, ∀ δ > 0, ∃ N : ∀ n > N, P(|X_n - Y| > ε) < δ.

A more human way to say this is that eventually, we will become arbitrarily confident that X is arbitrarily close to Y. This is the notion of convergence in probability, and the formalism of that concept is way easier to process with a little human explanation. Density of notation helps sometimes, but not always. When the math isn't trivial, it's the case here too.
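A quick simulation makes the statement concrete (a standalone Python sketch, unrelated to Eve): take X_n to be the mean of n fair coin flips and Y = 0.5. For a fixed tolerance, the chance that X_n strays far from Y shrinks as n grows.

```python
# Empirical look at convergence in probability: X_n = mean of n fair
# coin flips, Y = 0.5. We estimate P(|X_n - Y| > e) for growing n.
import random

def stray_probability(n, e=0.1, trials=2000, seed=42):
    """Fraction of trials where the n-flip mean lands outside Y +/- e."""
    rng = random.Random(seed)
    strays = 0
    for _ in range(trials):
        xn = sum(rng.random() < 0.5 for _ in range(n)) / n
        if abs(xn - 0.5) > e:
            strays += 1
    return strays / trials

probs = [stray_probability(n) for n in (10, 100, 1000)]
# probs decreases toward 0 as n grows: however small e is, a large
# enough n makes straying by more than e as unlikely as you want.
```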

"..rather than specify programs precisely, we give examples to an approximate system and hope for the best."

I agree with this, as it has the most potential to bring an entirely new paradigm to software development.

I've been working on a research project like this, using genetic algorithms to write programs. Unit tests are used to guide the fitness. The AI discovers a solution program that satisfies the test cases.

Using Artificial Intelligence to Write Self-Modifying/Improving Programs
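The idea can be sketched minimally (illustrative code of mine, not taken from the linked project): candidate programs are mutated at random, and fitness counts how many unit tests pass.

```python
# Toy sketch of test-driven program synthesis: candidate "programs" are
# f(x) = a*x + b, the unit tests define the target behavior, and a simple
# mutate-and-keep loop stands in for a full genetic algorithm.
import random

TESTS = [(0, 1), (1, 3), (2, 5), (3, 7)]     # target behavior: f(x) = 2x + 1

def fitness(a, b):
    """How many unit tests the candidate program f(x) = a*x + b passes."""
    return sum(1 for x, want in TESTS if a * x + b == want)

def evolve(seed=0, steps=3000):
    """Mutate the current candidate, keeping mutations that do no worse."""
    rng = random.Random(seed)
    a, b = rng.randint(-5, 5), rng.randint(-5, 5)
    for _ in range(steps):
        na = a + rng.choice((-1, 0, 1))      # point mutations
        nb = b + rng.choice((-1, 0, 1))
        if fitness(na, nb) >= fitness(a, b):
            a, b = na, nb
        if fitness(a, b) == len(TESTS):
            break                            # all unit tests pass
    return a, b

a, b = evolve()   # usually converges to a = 2, b = 1
```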


Eve's goal (or more correctly, "vision") is to "bring programming to the masses". [1]

>Should languages create higher-level abstractions to allow humans to reason about programs more efficiently? Yes! But that's not what this environment is about.

Uh, ok? It's certainly about more than that, but the language is built on high-level abstractions, making at least certain class of problems very efficient to reason about.

> ... that the challenge in programming is in formulating your thoughts in a precise fashion

Yes, one might say that the real problem[1] is how to teach people programming quickly. I'd say that a good environment is likely to be an important part of any solution to that problem, though.

It will not be the whole solution though. People will have to put in some work, and at some point improvements in environment/language will see diminishing returns. At that point you'll need better/new pedagogic techniques to reduce the learning time.

When that point is reached is unknown. I don't think it's there yet, but maybe it's not so distant.

If we restate the problem as "how to make as many people as possible learn programming", some type of gamification is probably an efficient solution.

> Should languages create higher-level abstractions to allow humans to reason about programs more efficiently? Yes! But that's not what this environment is about.

I think it actually is. Under the hood, when you look beyond the "Literate programming is awesome" it seems to be more along the lines of terse reactive prolog?

I am not sure if they managed to do what they envisioned, but this might be the "next spread-sheet".

On the other hand, if density were no problem at all, everybody would be using APL/J/Kx, and that isn't the case either.

I don't think that's a fair characterization at all. This isn't just a wordier way to say the same thing. It's a much more declarative way of expressing what code should do.

The global invariant example is a good one. That's not just expressing the same thing less tersely.

> that the challenge in programming is in formulating your thoughts in a precise fashion

Were you thinking of Notation as a Tool of Thought?


This is exactly the argument I give when people ask why programs can't be written in plain English. The first reason is obviously ambiguity - programming languages let us express computation precisely. But the other reason is also for general productivity in being a programmer.

There was a not-so-distant time when mathematical theorems were expressed in plain English, without any precise notation. With the introduction of more precise ways to express mathematical statements and logic, mathematics has proliferated, since there is a standard to express problems and results.

While I love toy models as toys, I don't think there is a real use case for toy models in a professional development workflow.

I'm sorry, but your algebra-as-English-prose analogy makes me think you haven't read the article. I could be wrong of course, but I don't see how you could draw that analogy after seeing the Eve code snippets on the page.

You may consider this a straw man, but I think that if you look hard at existing programming languages, you'll see that they are all designed for humans, and that the challenge in programming is in formulating your thoughts in a precise fashion. Should languages create higher-level abstractions to allow humans to reason about programs more efficiently? Yes! But that's not what this environment is about.

I think you're on the right track when you said "thoughts in a precise fashion"

Humans don't think in a precise fashion. And that's why I think pattern-matching with "give examples and hope for the best" is a way forward.

Think about how we program. We are given requirements and we noodle about it for a bit and start writing some code that is nowhere close to what the final source will be.

When I think about pattern-matching within the context of Eve, I think of automating the way we, as humans, go about our programming. We have some idea of what we want to do, and typically we need to google stuff for APIs and examples. Let's automate that to a certain extent. Let's use the machine learning advances that have happened in the past decade to automate much of the "research" that we all do when we program.

As a programmer, I'm still surprised how luddite-like we are when it comes to our tools. Many of us are still in the "programming language + text editor" = "programming" mindset.

But most of us use tools to help us look for definitions, to do refactoring, to explore our code base. We need to take that to the next level, where some fuzzy logic is used by our programming systems to say "yeah, I think I know what you mean, are any of these examples close to what you're getting at?"

I think it's all about the tooling these days. We need much more advanced tooling to help us out.

>Think about how we program. We are given requirements and we noodle about it for a bit and start writing some code that is nowhere close to what the final source will be.

That is not how one writes software...

That's exactly how I do it. Play around with the problem, and once I've messed with it a bit, take a step back and plan out the right way to do it. Sometimes my "proof of concept" doodling will have salvageable bits. Very rarely (or if it was an easy requirement) the play code will be close to ready. Sometimes, I have to throw it all away. Such is life.

OTOH, it is how one prototypes software. "Build one to throw away" is totally a thing.

The comments don't define the syntax, so your example doesn't provide much oomph. I don't consider it a strawman, I consider your argument to be based on a misapprehension.

I agree. I also think something like Unity3D came a lot closer to solving "programming for non-programmers". You can create quite complex projects without writing a single line of code.

Hi All!

Many of the folks here have been following us for a long time and we're really excited to finally pull everything together to show you all where our research has taken us. Eve is still very early [1], but it shows a lot of promise and I think this community especially will be interested in the ideas we've put together. As many of you were also big Light Table supporters, we wanted to talk about Eve's relationship to it as well [2].

We know that traditionally literate programming has gotten a bad rap and so we laid out our reasoning for it here. [3]

Beyond that, we expect there will be a lot of questions, so we'll be around all day to try and answer them. We'll also be doing a bunch of deep dives over the next several weeks talking about the research that went into what you're seeing here, how we arrived at these designs, and what the implications are. There was just way too much content to try to squeeze it all into this page.

Also, a neat fact that HN might be interested in:

Eve's language runtime includes a parser, a compiler, an incremental fixpointer, database indexes, and a full relational query engine with joins, negation, ordered choices, and aggregates. You might expect such a thing would amount to 10s or maybe 100s of thousands of lines of code, but our entire runtime is currently ~6500 lines of code. That's about 10% the size of React's source folder. :)

[1]: http://programming.witheve.com/deepdives/whateveis.html

[2]: http://programming.witheve.com/deepdives/lighttable.html

[3]: http://programming.witheve.com/deepdives/literate.html

So your defense of literate programming gives, as a defense, what I would think of as an attack on literate programming:

  Take this strawman for instance, how could you find the bug in 
  the following code without the accompanying comment?

  // Print every other line of the array to the console
  for (var i = 0; i < myStringArray.length; i++) {

  This shows how important intent is to a program. Yes, the  
  example is trivial, but without that comment, how would we know
  that it's wrong?

Because the problem with that is that now you have two competing sources of authority -- the comment, and the code. One of them is right and one of them is wrong. But which is the right one?

In the real world, the answer is almost always "the code that's actually been executing, rather than the comment that hasn't," which is why it's a good idea to minimize commentary of that type, to prevent confusion on the part of the reader.

So if you're doubling-down on literate programming, how do you address that problem? That article says, "Inconsistencies between the code and the prose are themselves errors in the program and they should be treated as seriously as a buffer overflow," but does Eve actually do that? It's hard for me to see how it could, in the examples given.

(If the idea is simply that this is extra work for the programmer to do -- that they need to describe every implementation twice, once in code and once in English -- I submit that it's not very human-focused at all, if those humans are programmers.)

This is why comments that specify what the code does are a "code smell".

Comments should explain things that are not obvious from the code. Comments should be things like

// Note: PCI-DSS requirements apply below

// Must check status register and FIFO to determine completion due to flaky hardware

// Algorithm below is modified Knuth-Morris-Pratt

// This is O(scary), but seems quick enough in practice.

(Now, if someone lets me put images in comments, that would be amazing. Good for state transition diagrams and random scribbles)

A state transition diagram would probably be more useful as a view on the code (similar to how the Smalltalk browser is a simple graphical representation/layout of code), or indeed any IDE with advanced code folding (show me the class name and public methods only).

But there are ways to mix the two: Python doctests is one. The LP approach is to "escape" the code, not the comments. I really do think some richer data structure than flat text files is needed; simple multi-/hyper-text seems like a minimum, not only links (most IDEs have this: hover to display help, click to go to definition, etc.).
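To illustrate the doctest idea with a sketch of my own (riffing on the thread's every-other-line example): the prose example is executable, so if the implementation drifts from the description, the test fails instead of the comment silently rotting.

```python
def every_other_line(lines):
    """Return every other line, starting with the first.

    The example below is executable documentation: if the implementation
    drifts from this prose, running doctest fails loudly.

    >>> every_other_line(["a", "b", "c", "d"])
    ['a', 'c']
    """
    return lines[::2]

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```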

Why so few seem to successfully add vector graphics and diagrams I don't know. I guess Jupyter/IPython might be a rare breath of fresh air. But who really wants to maintain 10k LOC of code and 15k LOC of tests in IPython notebooks?

We need something more (Lively Kernel might be more interesting here: WebDAV for persistence, the web browser for run/edit/view). I'm not sure if the required power can be realised any simpler.

I personally only see the utility of the notebook as a demonstration tool of a finished product. Kinda goes to your comment on the notebooks and code maintenance. It's a good teaching or presentation tool but not (IMO) a development tool.

The main question is if there's any fundamental reason for why we can't have both: a rich, versioned, distributed data store for our logic and data - and a number of views that allow us to inspect and modify it.

The best example I'm aware of is probably Smalltalk coupled with monticello version control and a solid object database like gemstone/s (I'm not aware of a real open/free workalike for the last part).

That's not to say relational or logic databases aren't useful - but it's more difficult to manage the data, query code and program code in one integrated system if the database takes the form of an external server. That said, MS Visual Studio and the SQL db GUI do a pretty good job of presenting a somewhat coherent environment.

I used to be strongly in the Linux/Unix camp - thinking that the datastore we call the filesystem is a good abstraction. But I've come to believe that even the strong legacy of user familiarity isn't a good enough reason to stick with it. Even if we were a little bit more serious and at least went all Plan 9 - rather than the half-hearted state of Linux/Unix (everything is a file, or a database file, or a binary file or an archive of a filesystem or...).

That said, a "filesystem" might very well still be a decent building block for a higher level system (with decent search, for example).

But yeah, I don't think jupyter is the be all, end all of development. But it is an interesting way to move legacy programming environments forward in a low friction manner.

For the OODB, perhaps Squeak's Magma? http://wiki.squeak.org/squeak/2665

I think the literate in literate programming is too formal. Maybe it should have been conversational programming.

The comments should be the things that you would tell a new team member during a pair programming session. You assume she knows what the syntax of the language is, but you don't assume she knows the business intent behind the code or the history of trial and error that led to its current state.

> Now, if someone lets me put images in comments, that would be amazing. Good for state transition diagrams and random scribbles

That's exactly what JetBrains MPS (https://www.jetbrains.com/mps/) does. Even better: the state transition diagram is the code. Some of my colleagues are playing with it - not sure if it's used in production yet.

You didn't solve the OPs problem, you just moved it around.

In his case, his algo said X, and his code did Y. Very easy to see the mistake.

In your case, let's add a comment:

// Note: PCI-DSS requirements apply below

Now 3 years later, the law changes and the requirements do not apply. So you are at the same situation. Code does X, comments say Y. Which is right?

It's not very easy to keep comments and code in sync.

That's sort of a different problem. I often think of comments as "why" and code as "how" (or "what" if you're declarative). If how breaks, then the code breaks, so you have immediate feedback. But if why breaks - like if your reasons for writing it that way no longer apply - then it's impossible to recognize immediately. If there were instead a way to codify the assumptions behind why, such that the why statements would break when the assumptions become false, that would be interesting.

But at any rate, why is relevant, and code doesn't express the why.

My approach to this is that the comment can almost always be turned into a test. A test class called FooPCIDSSCompatibility with tests for all the bits both defines things more strongly and will start failing if you ever break it (either accidentally or because it is no longer required). Either way you need to update either code or test to get through the build process.

Comments can be useful for annotating algorithms or referencing stack overflow though.
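A minimal sketch of that idea (the function and the "requirement" here are invented for illustration, not actual PCI-DSS text):

```python
# Instead of a bare "// PCI-DSS requirements apply" comment, encode the
# checkable part of the requirement as a test that can fail the build.

def mask_card_number(pan):
    """Show only the last four digits of a card number."""
    return "*" * (len(pan) - 4) + pan[-4:]

def test_only_last_four_digits_visible():
    masked = mask_card_number("4111111111111111")
    assert masked == "************1111"
    assert not any(ch.isdigit() for ch in masked[:-4])

# If the requirement is dropped or the code drifts, this fails loudly
# instead of silently rotting like a comment would.
test_only_last_four_digits_visible()
```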

That comment I'd interpret as the requirements must be considered in this region; so if the law changes, the requirements still apply, it's just that the code needs to be updated.

Requirements are usually exogenous to the code.

"PCI-DSS requirements apply" isn't adequate in a literate program for the same reason that "must work correctly" isn't adequate. Explaining the relevant requirements in detail is a crucial goal.

What about this:

1. Specs&design docs and code should be in separate files, because I believe the separation of concerns should be applied there. That's indeed the opposite of literate programming.

2. There should be two-way links between documentation and code: in the code, one should have links to the spec; and from the spec, one should have links to the code.

3. If the specs or the code changes, those links should be displayed in a different way to warn the reader that things are potentially not in sync. How to do that: check if the links point to the latest version. The maintainers have to update the links to remove the warnings.

With respect, I disagree.

Specification and design/implementation are not separate concerns. They are dual.

A sufficiently detailed specification is an implementation. Prolog does this (and Eve has a very similar feel).

As engineers, we traditionally work declaratively at the top of the "V" and imperatively at the bottom of the "V" -- but the reasons for this are largely historical/cultural.

We could (in theory) carry out the analysis/refinement process using entirely declarative notation.

The problem domain has primacy. Analysis separates problem-domain concerns and the duality takes care of the translation between problem and solution domains.

(OK -- so this is basically just a reiteration of the thesis of good old-fashioned AI -- that with a sufficiently powerful theorem prover and a sufficiently large and detailed knowledge base -- solutions will just pop out of a largely mechanical analysis process -- and I'm pretty sure this isn't at all trendy right now ... so I should relegate this to the "thinking out loud" bucket ...)

I'm fairly sure he was attempting to agree with the OPs problem.

Most code editors (even Vim, "deprecated since the '70s") will let you follow links you put in comments. For someone who likes to have as much code as possible on a single screen, this is the best option.

+1 for images. It's been my main reason for looking into tools like doxygen

Agreed, so then most programming systems are wrong if things are not obvious?

Well, it's just a code smell. Smelly code isn't necessarily wrong, it's just more likely to be wrong.

Sometimes a piece of code needs to be highly optimized to the point of being basically unreadable, at that point it's probably worth adding explanatory comments that you would normally avoid.

Wait, what? The code is a translation of a requirement to an implementation. The comment describes the requirement. The only problem with competing sources of authority is when the comment disagrees with the real requirements of the programmer/business/whatever.

As another example, what if it were a method name instead of a comment?

  function printEveryOtherLine(myStringArray) {
    for (var i = 0; i < myStringArray.length; i++)
      console.log(myStringArray[i]);
  }

Is your argument that the method name is incorrect because the code dictates a different behavior?

This is almost the same example. Function names also don't execute. The parent's point was that if the code has been tested or was considered working, and then you noticed this in the code, you should think twice before "fixing" the behavior to match the comment or function name.

Then why even name anything? Are you saying I should just name my functions and variables a, b, c, d, etc.?

That's an extreme position. Function names are a valuable hint of what the function is supposed to do. But if the name doesn't match the implementation, which one is wrong? We don't know.

But you do know something might be off, which is better than not knowing when something is off.

I was thinking the same from the root of this argument: "redundant encoding" isn't a way to automatically fix errors, but rather only a way to detect errors. Like a one-bit Error Correcting Code: the fact that the parity bit is wrong tells you something is corrupt, but it doesn't let you know what the right value should be. There's one useful thing you can avoid doing in response to such an error being detected: not rely blindly on either the implementation or the specification being correct, but instead check them both.
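A tiny sketch of the parity analogy (my own example):

```python
def with_parity(bits):
    """Append an even-parity bit: total number of 1s becomes even."""
    return bits + [sum(bits) % 2]

def looks_corrupt(word):
    """Parity detects a single flipped bit but cannot say which one."""
    return sum(word) % 2 != 0

word = with_parity([1, 0, 1, 1])
assert not looks_corrupt(word)
word[2] ^= 1                 # one-bit corruption
assert looks_corrupt(word)   # detected...
# ...but every single-bit flip produces the same symptom, so we can't
# repair it automatically; we can only go back and check both "copies"
# of the information -- just like a comment/code mismatch.
```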

In fact, I was reading about the bitcoin redemption vulnerability in Stripe, and how oftentimes security bugs are discovered by "huh, that's weird" rather than by eureka, and this ECC (error-correcting code, see the pun?) seems like it would help provoke that, and likely validates the amount of effort it takes.

Fully agreed. That's why I consider descriptive identifiers a valuable hint.

> We don't know.

I don't know about you but I certainly know that if a printLine function accidentally does something else, it's definitely the code that's wrong and not the function name.

Even if the product has been running flawlessly in production for 5+ years?

"Proven in use" and backwards compatibility is a bitch.

Well, if it's a library function like printLine and it does something else instead of printing to the console, that's probably a mistake (though if it does something so wildly different it cannot be simply a bug, for example logging to a file, I'd still wonder if the name is wrong). The problem is that you can't generalize this reasoning. For example:

  def sumAllNumbers(nums: List[Int]): Int = {
    nums.filter(_ % 2 == 0).sum
  }

Is the function name wrong or is the implementation? We can't tell without looking at how the function is used, and maybe not even then! Maybe we have to ask the person who made the last change. Maybe they changed the implementation for a valid reason and forgot to change the name. It happens!

So the name definitely helps, but it's not conclusive.

Of course names are important to us, human programmers, but the compiler doesn't give an 'f'.

We use mnemonics, because we can't remember numbers as well, but to the compiler, addresses are pretty much like names.

It's a bit more than that. Function names are precisely an encapsulation of the intent of the function. And unlike comments, function names are not subject to rot.

Also addresses are not like names. As with names in general, a name can refer to more than one object. (Consider modules, for example.)

> function names are not subject to rot

I don't agree with this. I think often a function will drift from its name if functionality is added to an aspect of its implementation or because of refactoring.

And thus it's important to also rename things when refactoring, or else you get awful confused when your Foobinator(x) function returns you a Splunkinated value, or your FrobbleTheFribbets() call also woggles the wiggles.

>Function names are precisely encapsulation of the intent of the function

that's a rosy way to say, names entail a message.

An Address can also be computed, your argument is invalid.

Well no, I think the argument is that this kind of comment gets outdated easily, so if the program works as expected, the comment is probably outdated (new requirement, the program was changed to print every line, forgetting about the comment). If the program does not work as expected, then it's a bug and the comment is still valid.

This does not happen in your function name example: if the program requirement changed to print every line, no sane programmer would change the code in this function, they'd write a new function printEveryLine() instead and use that. So you can be pretty sure this is a bug.

> if the program requirement changed to print every line, no sane programmer would change the code in this function they'd write a new function printEveryLine() instead and use that.

I'm not sure I understand the distinction you are making here, so I'd like to try and understand. From my perspective, function names are just another form of comment. After all, in many languages, as soon as you press "compile", your function names are mangled into something a machine can understand.

So you feel that programmers are more inclined to treat the function name as an authority, as opposed to a comment. Is that accurate? I'm curious what led you to this conclusion.

>So you feel that programmers are more inclined to treat the function name as an authority, as opposed to a comment. Is that an accurate? I'm curious what lead you to this conclusion.

I think this is a reasonable claim, even though as you've said there's nothing strictly enforcing this behavior. Function names are generally highlighted by the editor/IDE, and the developer is forced to stare at them and at least begin to type them out while calling them. They're brief and repeated over and over again in the source code. They're just harder to ignore. Comments on the other hand only appear once per comment, tend to be italicized and greyed out by IDEs, are generally at least one sentence in length and can span multiple paragraphs, and are never going to be directly referenced from within the code itself. It's still very possible to change a function without updating its name to suit its new purpose. But I know for me personally I'm much more prone to missing comments that I need to now delete or reword. I have to acknowledge the function name when I write code calling it or read code referencing it, but comments are just sort of there, hovering in the background. It's easier to read them once and then tune them out and forget that they're there.

People have to call a function, using its name. The function name is the mental-model handle you have on the function. If the name is a bad representation of what the function does, then the function probably just won't ever get called, because nobody will be able to build a mental model of it.

Indeed, someone else will probably end up duplicating the effort of writing the same function over again—but with an accurate name this time—because they aren't aware that the functionality they want exists in the codebase already.

And this is all an implicit consideration made by every programmer, every time they define—or later modify—a function. We all know that we'll "lose track of" functions if we don't call them something memorable for their purpose. So we put thought into naming functions, and put effort into renaming functions when they change. (Or, with library code, to copy-pasting functions to create new names for the new variant of the functionality; and then factoring out the commonalities into even more functions. We go to that effort because the alternative—a misnamed function—is almost effectively beyond contemplation.)

A function can be called from different places, maybe it will be used in code not yet written. So a programmer has a need to print every line. The code to be changed calls printEveryOtherLine. They won't change that function to print every line, because they know that would break things elsewhere.

> If the program requirement changed to print every line, no sane programmer would change the code in this function, they'd write a new function printEveryLine() instead and use that. So you can be pretty sure this is a bug.

Unfortunately many programmers seem not to be sane.

no sane programmer would

If this were only true.

It is possible to make the text a test spec that is then verified. See: http://jenisys.github.io/behave.example/tutorials/tutorial01...

His complaint is that it isn't being done here, so all the prose is just going to get out of date and incorrect, and meanwhile you have to write everything twice for no benefit.

The problem I see with verifying comments in English (that is, using English as a specification language) is that it's impossible in the general case. So you must, in practice, use a subset of English grammar/nouns which turns into a formal specification language. In turn, this becomes a programming language of sorts, and then we stray away from the goal of "programming languages for humans".

I'd love to be proven wrong, but I think the problem is that this goal itself is mistaken. If we can formally specify something using a language X, this necessarily becomes a language not "for humans" -- i.e. a formal language that doesn't follow the rules of natural language and that we must be trained in. Natural language is notoriously bad at unambiguous specification.

I think there's one kind of "specification language" that works completely unlike programming itself—and that's the interactive form of specification known as requirements-gathering that occurs in the conversation between a contract programmer and their client.

I could see potential in a function-level spec in the form of an append-only transcript, where only one line can be added at a time. So you (with your requirements-specifier hat on) write a line of spec; and then you (as the programmer) write a function that only does what the spec says, in the stupidest way possible; and then you put on the other hat and add another line to the spec to constrain yourself from doing things so stupidly; and on and on. The same sort of granularity as happens when doing TDD by creating unit tests that fail; but these "tests" don't need to execute, human eyes just need to check the function against them.
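That line-at-a-time transcript could be sketched with plain assertions (everything here is made up for illustration):

```python
# An append-only "spec transcript": each line is a human-readable
# description paired with a checkable predicate over the implementation.
spec = []

def satisfies(fn):
    return all(pred(fn) for _, pred in spec)

# Spec line 1: "returns an integer"
spec.append(("returns an integer", lambda f: isinstance(f(3), int)))
impl = lambda x: 0              # the stupidest thing that passes so far
assert satisfies(impl)

# Spec line 2: "result is larger than the input" -- rules out the stupid impl
spec.append(("result is larger than the input", lambda f: f(3) > 3))
assert not satisfies(impl)      # old impl now fails the transcript
impl = lambda x: x + 1          # revise until the transcript passes again
assert satisfies(impl)
```

Each new spec line constrains the previous "stupidest possible" implementation, which is the same rhythm as TDD's failing test, just at the granularity of single sentences.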


On another tangent: I've never seen anyone just "bite the bullet" on this, acknowledge that what they want is redundant effort, and explicitly design a programming language that forces you to program everything twice in two separate ways. It seems to me that it'd actually be a useful thing to have around, especially if it enforced two very different programming paradigms for the two formalisms—one functional and one procedural, say; or one dataflow-oriented ala APL or J, and the other actor-modelled ala Erlang.

Seems like something NASA would want to use for high-assurance systems, come to think of it. They get part-way there by making several teams write several isolated implementations of their high-assurance code—but it will likely end up having the same flaws in the same places, given that there's nothing forcing them to use separate abstractions.

And then you'd need to describe the specification language and how would that be verified?

I suppose that English is imperfect and could use improvements. Ideally those two would converge. That would solve problems about the interpretation of laws, as well. How would that convergence work? To answer that you'd have to know how language is acquired in the first place. I suppose some form of self-reference in the language that mirrors some of Chomsky's stipulated universal grammar, if it exists.

Right, this is exactly the argument we are making. Given just the block of code above, we don't know enough to make the determination. Who is right? We can't possibly know without more. That's why commenting code is not enough, and that's why literate programming is important.

Imagine if you came across this in actual code. You might first check if the code was changed recently and by whom, and whether the comment or the code came first. But that still wouldn't tell the whole story. To get it all, you would find out who checked in the code, and what their original intent was.

What we're advocating is to treat source code more like a design document. It's a collection of thousands of design decisions all informing the specification of the final product, the program.

> If the idea is simply that this is extra work for the programmer to do -- that they need to describe every implementation twice, once in code and once in English

Right, so the point is that prose is not redundant. As Knuth put it literate programming is "a mixture of formal and informal methods that reinforce each other." When we communicate to one another face-to-face, we use gestures, expressions, intonation, etc. to articulate an idea. On the internet, when we communicate with just text, much of my meaning is lost forever and never apparent to anyone who reads this.

Programming is much the same way. The code only tells you so much about the program. Without any context, readers are often disoriented and confused by new codebases. If no one is willing to read your code, no one will ever contribute to it. GitHub and SourceForge are veritable graveyards of amazing yet abandoned projects that will never see another contribution, because their codebases are so inscrutable. I know I'm guilty of this as well; when I revisit old codebases, I hardly know even what I was thinking.

Anyway, I think there's a lot to explore here. I think what you raise is an issue, and it will always be important to address, but I really believe that literate programming is important and can lead to better code. So we're going to see where we can take it with Eve, and I hope you'll give it a shot!

Well, I did skim a little bit through Eve's source code, specifically the files in https://github.com/witheve/Eve/tree/master/src and subs, and the first thing that struck me was: it's not literate. As a matter of fact, it doesn't have more comments than an average code base, maybe even less.

Why? Where is the prose? What makes programs supposed to be written in Eve different from Eve itself?

Trying to do literate programming in traditional languages is pretty crappy [1]. Some heroic effort ended up having to go into this release as well so things aren't as commented as we wanted, but that will improve. There are a few files that are pretty good though [2].

Our Eve source on the other hand is wonderfully literate and it's been a really great experience. Here's the code that makes the inspector work [3].


[2]: https://github.com/witheve/Eve/blob/master/src/runtime/join....

[3]: http://play.witheve.com/#/examples/editor.eve

I only looked at [2], but honestly, that looked like the kind of code I got delivered from offshore teams: 3 lines of comments saying what a for loop will do, then 1 line to do the for loop. I would -1 most of that in a code review, and ask for the comments that just describe what the code is doing to be removed.

I can show you plenty of crappy code delivered from local teams too, especially myself!

Good software can be written with bad code.

Python is just "ugly" C underneath, containing for example a 1500-line switch statement, but this does not in any way change the fact that it's a great piece of software.

You didn't address at all what forces the programmer to keep the prose and code in sync. If you still have code that's read and translated by the compiler, and prose that's ignored by the compiler completely, how have you actually changed anything?

Well, I don't think anything can force the programmer to keep the prose and code in sync. The best we can do is give you an incentive to do so. The argument that was presented is that comments will get out of sync with code, but we're trying to provide tooling that will incentivize you to keep code and comments in sync.

I'd also like to say that if code is easier to read in the first place, there will be more eyes on it, and discrepancies will be resolved sooner. In most programming environments, a comment might describe a part of a much larger function. But the output of this code is far removed from the actual source. So discrepancies between comment and source are harder to spot.

In Eve, we intend these documents to be for more than just programmers, who, again, might not understand your code, but would understand the written prose and see the output right there in the document that contradicts it. A bug like this would be brought to your attention much sooner than if your code were simply hosted on GitHub.

Actual thing that will happen, if you're lucky: the dev changes the code to fix a bug, tests it, modifies it, etc., and finally gets it done. Then they delete the lovingly written paragraphs of now-obsolete text and write in a one-liner description ("handle gravity effects").

Actual thing that will happen, if you're not lucky: Same as above, but they don't delete the obsolete paragraphs.

You can simply use BDD to keep the requirements in sync with the code. But BDD is not just prose; it is much, much more powerful and useful.

Every company needs a human-readable definition of what a product does. There is no way to eliminate the problem of keeping prose and code in sync. Traditionally, we just hide the problem by keeping the prose and code entirely separate. Keeping them together makes it harder for them to drift apart.

Your example doesn't make sense to me. If I see a comment that says something different from what the code does, I assume the comment is wrong, and a simple blame will show me the Jira ticket that was implemented in that code so I can double-check. I certainly don't need to read everything specified in the Jiras whenever I look at a piece of code. The code explains itself quite well. If you write code that is not understandable, then that is the problem, and adding thousands of comments in the form of documents will only make things worse instead of solving it.

I don't know that I really agree with this criticism. I don't think having the additional context gives you two authorities which you are helpless to reason about. The code will tell you what is happening regardless of what is intended; the comments will tell you what was intended regardless of what is happening. More often than not, having both of these pieces can tell you where inconsistencies in the larger picture have arisen, and you can use your own judgement to figure out what it all means and what you might do next. Maybe I'm biased because I'm pretty proficient in the business domains I specialize in, so I can understand, conceptually, what the application is supposed to support and consider the validity of the programmer's intent as well as the code's function.

But I'm not sure why it would be considered beneficial to withhold knowledge of intent in favor of purely what's happening; except maybe to junior/support staff that aren't likely to understand the motivation behind a developer's intent. Perhaps, because of the concern that code gets maintained and related commentary doesn't?

>The code will tell you what is happening regardless of what is intended, the comments will tell you what was intended regardless of what is happening.

The comments will tell you what was intended at some point regardless of what is happening. In any mature code base that includes these "what not why" comments, the comments will inevitably document old intents that have since changed.

> Maybe I'm biased because I'm pretty proficient in the business domains I specialize in, so I can understand, conceptually, what the application is supposed to support and consider the validity of the programmer's intent as well as the code's function.

Given that, why do you need these kinds of comments at all? You already know what the code is supposed to do anyway, why introduce a comment that you might later forget to update, leaving less knowledgeable colleagues confused?

> Given that, why do you need these kinds of comments at all?

Because in large, logically complex systems, especially large complex systems that are sold to many customers, as in my area of expertise, ERP systems implementation & development, there are many, many right answers (and many wrong answers, too). Take inventory costing: my current project is adding an industry-specific inventory handling methodology to an existing ERP system. Consider inventory unit costing: there are multiple methods (standard, weighted average, FIFO, retail method, etc.), there can be different derivations and components (purchase cost, landed costs, material costs, direct overhead costs, indirect overhead costs), and to top things off, not only do systems implement more than one of these at a time, sometimes several of them are useful concurrently depending on context: if I'm analyzing my vendor costs per item, I might not want my overhead costs in the picture, but I still need overheads in the capitalized cost of the item when determining Cost of Goods Sold when I ship product out, for example. In software, if I see a function like (not real, way oversimplified pseudo-code):

  function calc_item_trans_cost() {
    total_item_trans_cost = item_cost + item_freight_cost
    return total_item_trans_cost
  }

I might not know exactly where or how to use this properly, even having some expertise. Was the developer trying to get at a landed cost? Are they expecting that this would only be used in the invoice matching process and shouldn't really be used outside of that? I might get that contextually from surrounding code or from the (various) contexts that the function is called in, but now I have a research project trying to figure out why they created this so that I don't use it in inappropriate contexts... or I can re-invent the wheel (not a good strategy). My professional knowledge will tell me where to look for other telltale signs to figure out the answer, but not immediately just from looking at the code since there are multiple valid answers. If I had:

  // Function for determining item costs for three-way matching.
  // We record freight costs separately from item material costs,
  // but invoices have no such breakout, so we do the math here
  // to facilitate the matching process.

  function calc_item_purch_cost() {
    total_item_purch_cost = item_cost + item_freight_cost
    return total_item_purch_cost
  }

now I get it. It's not that the system has a simplified view of landed costing, but there is a specific purpose that was trying to be accomplished by this function.

> You already know what the code is supposed to do anyway, why introduce a comment that you might later forget to update, leaving less knowledgeable colleagues confused?

Most of the time the comments in the code I work with really don't change much in regard to intent. If the comments get too close to describing the logic, then yes, the comments don't age well as the logic that achieves that intent is improved, but those are the comments that are probably not needed. The contextual comments, however, are important (even for me to understand what I wrote after some time has passed).

Even so, I was speculating/rationalizing on why people might hold that somehow "ignorance is bliss" when I spoke of junior staff. I don't hold that belief myself. I haven't found that withholding information, even from junior/support staff, is better than giving a more complete picture. I find the informed person much more useful than the person guessing at what it could all possibly mean and how it fits together.

> The code will tell you what is happening regardless of what is intended, the comments will tell you what was intended regardless of what is happening.

Unless you write the comment, then the code, then change the code and don't update the comment. Now if the code is wrong, both the comment and code are wrong.

In current languages letting code handle the 'what', and comments handle the 'why' works well enough. Not saying it is a perfect system, but the idea of the comment being expanded to include the 'what' as well doesn't seem like progress in the right direction to me.

I agree with your point in some cases and not others. I would characterize my position as: you should comment just enough that someone other than you (or your distant, future self) can understand what the goal is; why being more important than what, what being more important than how.

If you're working with a relatively simple application where the information the application deals with is conceptually simple, you're absolutely right: explaining the what, when the code has a finite set of possible intents, would just be wordy. On the other hand, I gave an example of ERP system inventory costing algorithms in another answer in this thread where the complexity is different: cost can mean different things under different circumstances and in different processes. We try to generalize these algorithms into lower-level functions, but you have to understand the what to know whether it's appropriate to use the function... or whether it is actually achieving its goal: it may be coded in a way that is valid in all but the targeted use cases. There simply is no one-size-fits-all answer to how to appropriately comment code (or whether to do it at all).

As for bad comments and comment rot: any code, commented or not, can be written poorly or not maintained properly. Yes, it's an extra moving part, and if you're not going to be as diligent about the documentation as the code then, yes, oftentimes you're better off without it. I think, however, it's better to force the issue. I don't think you need freestanding documents, but by god, if I see code in a code review where the comments have been neglected: reject that submission.

The criticism is valid. I have yet to encounter a comment in production code that projects future intent for a piece of code, but comments lagging behind revised code are all too common. Comment rot is one of the motivating reasons behind the 'sparse comments' school of thought.

Eve places the source of truth clearly in the document surrounding the code. If the code disagrees with the description, the code should be changed, unless there is some evidence in git that the code functions as intended.

I agree; if I'm writing a paragraph of English followed by a paragraph of code... what's the point? Comments at the moment are generally one-liners and can quickly get outdated if not maintained.

Rule of thumb: If comment and code disagree, usually both are wrong.

> Print every other line of the array to the console

The odd or even lines? I don't think English is going to replace programming languages anytime soon.
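The ambiguity is easy to make concrete; a programming language forces you to commit to one reading up front (Python slicing used here purely for illustration):

```python
lines = ["alpha", "beta", "gamma", "delta", "epsilon"]

# Reading 1: "every other line" starting from the first (indices 0, 2, 4, ...)
first_interpretation = lines[::2]

# Reading 2: "every other line" starting from the second (indices 1, 3, ...)
second_interpretation = lines[1::2]

for line in first_interpretation:
    print(line)
```

The English instruction permits both; `lines[::2]` and `lines[1::2]` each permit exactly one.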

I've been following Eve for awhile and I'm impressed with the progress!

A bug: that flappy bird program listing caused the page to flicker for me. It seems like the content is being repainted over and over. I'm on a Nexus 5x using the stock Chrome. Scrolling that block a ways off the screen made the flickering stop, but it came back when I scrolled back up. Hope this is helpful!

Thanks yeah, that helps. It's going to be a little shaky on mobile right now, but we'll be making the experience better in future builds.

I really enjoyed your demo.

Every time I delve into Xerox PARC and ETHZ papers, alongside my own experience, I envision the day when our programming environments will all be like this, even for systems-programming-like tasks.

All the best for the road ahead.

I don't know if you got this far, but what would deployment look like? You showed a flappy bird clone, does that mean this will produce mobile apps?

Hi, I apologize if this is off topic but I just thought you should know the site does not display properly in my browser.

Maybe this is just an unsupported use case but I think it's because the browser window isn't wide enough so the text on the left gets cut off. I have my browser window filling up half of the screen.

EDIT: works now!

It certainly isn't off-topic. There's a lot of UX work to do now that we've gotten a solid foundation out of the way. Finding a good way to pack a solid Eve editing experience into smaller screens is high on that list.

I can't scroll down currently.

Firefox 49.0.2

Makes it very hard to read.

I'm also seeing an issue with no scrollbars on the tutorial. Firefox 49.0.2 Win8.

Awesome stuff though, excited to see where it goes.

Same, Firefox 49.0.2 on Windows 7.

I opened the quickstart in Chrome and went all the way through without a hitch until the very end - the last code block doesn't appear to do anything. Clicking submit doesn't update the visible output, nor does it clear the form as the code shows it should.

I'm thoroughly liking the environment other than the hiccups though. I'm not familiar with datalog, but have dabbled a bit in prolog and always liked the way unification felt.

Same thing, Firefox with 960px width (half a 1080p monitor)

Chrome here 1080p at 960px. Ubuntu. http://i.imgur.com/PxLlDkP.png

EDIT: Also: base.js:7035 Blocked a frame with origin "https://www.youtube.com" from accessing a frame with origin "http://programming.witheve.com". The frame requesting access has a protocol of "https", the frame being accessed has a protocol of "http". Protocols must match.

EDIT2: Why does opening the chrome console fix the text cut off issue.... this is so weird.

But did your finger get unstuck? Did it get stuck during installing Gentoo? These things happen. Careful.

:o how did you know about my stuck finger! I need to learn to hack better to counter-hack people like you! ;)

Fix confirmed!

:) . Could have been worse! Rather than Gentoo you could have attempted an OpenBSD install, and then https://xkcd.com/349/

Should be fixed, try hard refreshing.

Fix confirmed!

I at least fixed the main webpage's issue with not-quite phone sized screens.

Are values/records in Eve immutable?

They are immutable after a search action, but you can mutate records and values after a bind or commit action. Here is one of the docs on the "add operator", which adds values to a record: http://docs.witheve.com/handbook/add/

The set operator sets a value on a record: http://docs.witheve.com/handbook/set/

These values can be literals, or even other records.
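A rough mental model of those two operators (sketched in Python; the record-as-dict shape is my own invention, not Eve's actual implementation): each attribute of a record holds a set of values, so add contributes an extra value while set replaces whatever is there:

```python
# Hypothetical mental model, NOT Eve's real data structures:
# a record maps each attribute to a set of values.
record = {"tag": {"person"}, "nickname": {"Pete"}}

def add(rec, attr, value):
    # Like Eve's add operator: contribute a value, keeping existing ones.
    rec.setdefault(attr, set()).add(value)

def set_attr(rec, attr, value):
    # Like Eve's set operator: replace all existing values for the attribute.
    rec[attr] = {value}

add(record, "nickname", "Petey")   # nickname now holds both values
set_attr(record, "age", 30)        # age holds exactly one value
```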

How does this relate to Jetbrains' MPS (meta programming system)?

I think there's potentially a lot of ways we can improve our programming environments. I like moon shots, people willing to explore new ways that are unconventional, approaching problems in different ways. I've followed Light Table and have been looking forward to seeing what Chris and friends come up with for Eve.

That said, this is feeling very grandiose. I'd like to understand more clearly where they see Eve being useful — and where Eve would not be useful. For example:

- how does one implement new algorithms? A simple example, how do I write Quicksort?

- can Eve be written in Eve?

Admittedly, I haven't thought about this nearly as much as the Eve team has. And perhaps I'm just lacking the required imagination at this point. That said, I'd like to see these types of questions addressed. There's nothing wrong with a tool that's useful in a particular set of circumstances. I'd like to know what the Eve team thinks those circumstances are.
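For concreteness, the quicksort question above is asking about this kind of strictly sequential, recursive algorithm (plain Python for illustration), which is exactly the style that declarative, set-oriented languages tend to express awkwardly:

```python
def quicksort(xs):
    """Textbook quicksort: explicit recursion and an explicit order of
    steps, rather than a declarative statement about the result."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)
```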

Check out our followup on that : http://programming.witheve.com/deepdives/whateveis.html

1) Eve is amazing at graph algorithms, not so much at writing quicksort. At the same time, you don't need to write quicksort in Eve - it has all of the things you'd get from something like a SQL database. In general, anything requiring strict sequential order will be just ok right now, but it turns out actually very few things do.

2) sure and we have written some of it in Eve, check some of my other comments for links :)

Thanks for the follow-up. Indeed, the link you provided is exactly what I was asking for. (And I see that it's there in your initial post as well.) Cheers!

Is there a more precise description of the runtime semantics anywhere? “A variant of Datalog” is helpful but doesn’t say much.

For instance, it looks like every “commit” is essentially a sequence point that produces a step in the “fixpointer”—and from that terminology I guess Eve is Turing-complete, unlike Datalog? Do you test all patterns at every commit, or only those whose dependencies have changed? And if the latter, then how do you track such dependencies? Are they fully specified declaratively by queries? And what is the granularity—per term, per database?
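As a rough sketch of what running rules to a fixpoint means in a Datalog-like system (a hand-rolled Python illustration, not Eve's actual engine), here is naive evaluation of a transitive-closure rule: the rule is re-applied until no new facts are derived.

```python
def fixpoint_reachability(edges):
    """Naive Datalog-style evaluation of:
         reach(x, y) :- edge(x, y).
         reach(x, z) :- reach(x, y), edge(y, z).
    Facts are re-derived until the set stops growing (the fixpoint)."""
    facts = set(edges)
    while True:
        new = {(x, z) for (x, y) in facts for (y2, z) in edges if y == y2}
        if new <= facts:   # no new facts derived: fixpoint reached
            return facts
        facts |= new

# edge relation: a -> b -> c
print(fixpoint_reachability({("a", "b"), ("b", "c")}))
```

A smarter engine would only re-evaluate rules whose dependencies changed (semi-naive evaluation), which is essentially the dependency-tracking question being asked here.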

The semantics are similar to Dedalus, which is compared to Datalog here: https://www.youtube.com/watch?v=R2Aa4PivG0g

That was an excellent talk, thanks. It cleared up a lot of details about the paradigm for me. That said, now I’m more interested in Dedalus/bloom than Eve. :P

Seeing Eve's development out in the open is inspiring. Many of the commenters are coming from positions of extreme skepticism towards new approaches to programming, probably justified by the history of the field, but I'd recommend spending some more time looking into the research ibdknox and his team have been conducting (see https://www.youtube.com/watch?v=VZQoAKJPbh8, for instance) and how much they have iterated based on user experimentation.

A few questions come to mind now: 1 - This new demo, with the messaging app sample and references to "my PM wants this or that", seems to point to a change in positioning, from targeting non-programmers to professional programmers. Is this accurate?

2 - Is there a plan to integrate the grid-style or the adlibs style UI into this new iteration?

3 - If non-programmers are still a target, it seems to me that the ease of importing data from external sources would be important to reach broad usage. Any research here?

4 - HTML and CSS might be hurdles for new users; is there any way Eve will help on this front?

(edited for formatting)

Solid questions!

1. We could be more clear about this, but expanding to suit the needs of non-programmers is still very much in the cards. It falls under the umbrella of milestone 3 [1].

2. We're very interested in improving the process of generating UI. To start with, that will mean expanding our library of views (essentially Eve's version of web components) to cover a wider range of common use cases. Beyond that though, HTML is pretty crufty, with layers upon layers of additions and legacy support; it would be very interesting to explore alternative markup models.

3. It certainly is very important. Our model for interacting with the external world is pretty simple. For the most part, you can use our first-party modules (like `@http` for web requests) to pull the requisite data into Eve, where you can transform it into whatever shape is appropriate. For cases where you need entirely new transport systems (like connecting Eve to USB devices), you can create a new module just like the first-party ones and side-load it in. Since the runtime is currently written in TypeScript, that's the language of choice for these extensions. We may support others in the future.

4. For users unfamiliar with HTML and CSS, the best we can do is insulate them from it, as in #2. If it's a subject that personally interests you, I'd love to strike up a conversation on the mailing list about how we can improve the markup process [2].

[1] http://programming.witheve.com/#justataste [2] https://groups.google.com/forum/#!forum/eve-talk

How is this significantly different from something like Light Table? (edit: didn't realize it was built by the people who built Light Table). It feels like a rehash of that + Python notebooks, with a bit of Xcode's playgrounds thrown in.

Problem is that despite those tools being available, I literally never use them. Ever. Nor do I have a need for them.

I kind of like the idea of being able to find bits of code in a larger codebase in a document-like format. That's actually a pretty neat innovation here.

But beyond that, I don't think I'd use this (edit: per the comment thread conversation below, I'm upgrading this to not being sure whether I'd use it, but it's a maybe). I need an IDE that will help me through the Battle of Stalingrad, not a basic case like a police officer pulling someone over for a speeding ticket, which is what many of these kinds of next-gen UIs always seem to show.

Basic stuff is easy enough to accomplish right now. I don't need a new IDE to help me add a button to each row in a list any quicker than I currently can with existing tools, and I feel like a core problem with these types of efforts is that they start with basic cases and never really progress from there - many engineering problems simply do not reduce down to adding a button to a screen.

That all said - I think if you pivoted and built a wiki-like overlay that could be dropped over an arbitrary codebase (e.g. extrapolate out the document-like overlay over code to document and organize a codebase), holy crap, I would instantly pay money for that, especially if it was distributed team-friendly.

That's fair. We don't talk about it in here because there was already way too much to discuss, but we've done a few non-trivial things in Eve. All of the inspector interactions are actually written in Eve [1], as is the program analyzer that provides the intelligence to make it work [2]. An important thing to keep in mind as well is that Eve does a lot with very little. 10kLoC of Eve would implement most systems twice over. So the programs will always be pretty short.

The intent is to handle anything you could imagine writing in Python. As this is 0.2.0 of a language pretty unlike anything currently mainstream, we're nowhere near there, but with engineering there's no reason we can't be. The IDE is only one portion of the story here; the language is another. It's a variant of Datalog that has far-reaching implications for how we can work in teams (global correctness), how software evolves (complete compositionality), and how we think about larger-scale systems (orderless and distributed). The latter will be the highlight of our second milestone, where we'll do another big release like this one, but for the rest we'll be doing deep dives over the next couple of weeks about how we believe this language scales much better to the realities of software today.

More is coming. :)

[1]: https://github.com/witheve/Eve/blob/master/examples/editor.e... [2]: https://github.com/witheve/Eve/blob/master/examples/analyzer...

And that's fair too. Honestly, I'm happy to see where you guys go and to reevaluate whether or not I'd use it as you make progress. There are definitely some very interesting innovations in what you're doing, and hopefully that point came across in my comments (despite some initial scoffing on my part).

> How is this significantly different from something like Light Table?

Well, it's effectively 'Light Table version 2'.

> How is this significantly different from something like Light Table?

It's written by the same people that wrote Light Table.

Well, if version 0.2.0 doesn't fit your enterprise software needs it probably isn't very interesting.

Actually, I can think of many enterprise uses for this... most users in an enterprise just want to build small apps which manipulate shared data.

Parent is being sarcastic.

Agree completely, although I’m a little less excited about documenting code structure than about the use of real-time feedback for code.

I’m a firm believer that minimizing the feedback loop is directly correlated with productivity and code quality. Just taking the time to set up robust debugging tools (breakpoints, data inspection/manipulation) in my projects has given me more than one “wow, you work fast” in my short career.

Honestly I think this looks excellent for exposing children to programming.

I humbly disagree. My kid is going to get something totally straightforward, imperative, and easy to begin with, something in the realm of «one print "I am the best", two goto one». I strongly believe these are the basic building blocks which make everything else easy to understand, and they give you a good feeling for how things work at the bare metal.

I think it's a shame that QBasic isn't bundled with Windows anymore. That was how I got started, on some old 486s in my chemistry/programming teacher's classroom, and it was really a fantastic integrated programming experience, very simple, but with room to do some really cool things. I remember that by the end of a year, a friend and I made a really terrible, but mostly working, version of Slime Volleyball, with graphics and even some basic MIDI sound effects.

Doesn't come with Windows, but here is Microsoft Small Basic

I think "looks excellent for children" is a sign that it might be excellent for experts. We want something that can represent a system in the clearest, simplest way possible: so simple a child could see how it works, and so simple that mistakes are obvious. The question becomes "Will Eve scale to systems that solve real-world problems and not just toy examples for kids?" Will an Eve solution for a real-world problem look simpler and easier to understand than it does now in the languages we're using today? One data point in its favor is how much they've accomplished in so few lines of code: the IDE, the compiler, and a relational database are implemented in ~6500 lines of code.

  Eve's language runtime includes a parser, a compiler, an incremental fixpointer, database indexes, and a full relational query engine with joins, negation, ordered choices, and aggregates. You might expect such a thing would amount to 10s or maybe 100s of thousands of lines of code, but our entire runtime is currently ~6500 lines of code. That's about 10% the size of React's source folder. :)

Yeah, this kind of short feedback loop where mistakes are easy to find is perfect for a teaching environment.

iPython, tonicdev... and now this.

I actually totally agree. Yes. Or even for teaching in high school/college.

I hate being preachy, but: "Problem is that despite those tools being available, I literally never use them. Ever. Nor do I have a need for them."

You don't know what you don't know.

I never said I've never used them. I've used them and decided they were not useful for me based on those experiences. So no, in this case, I do in fact know.

Except you describe the kinds of problems you typically solve (placing a button in a row), which are the smallest/simplest case of the problems that we're talking about, so it seems that you might not know about the problems other devs face in composing business logic, layering complex build chains, etc., or see a reason why solutions to these problems would ever be useful. Fair enough, but then you probably aren't the target market, so it doesn't make much sense to ask for a pivot to something else.

I really think you guys are misunderstanding my comment. Placing a button in a row is not a complex thing and frankly not something I spend a lot of time doing or worrying about. Look, it's just not. Does it require a new type of tool? IMO, no, it really doesn't.

I'm quite aware of the kinds of problems "other devs face", and they are far more complex than plopping a glorified view inside of another view and hooking up an action when someone taps it.

I simply do not see enough in Eve to convince me that it will significantly help with much more complicated things - things like what you describe - composition of complex business logic, layering complex build chains, etc - and here's a few other common things I don't see it providing significantly new and interesting support for:

- creation and management of complex data models and how they map down to SQL or NoSQL persistence

- orchestrating complex, multithreaded processes

- API design and management

- cloud infrastructure architecture, design and management

- container design and management

- transaction processing

- analytics


Could something like Eve progress to encompass those types of activities? Possibly (and frankly I'd love to see it - imagine something like Eve's UI for seamlessly managing infrastructure across Azure, AWS, Google Cloud and on-prem infrastructure - that would be amazing). But IMO, it's starting by solving the wrong kinds of problems by reinventing the wheel (new language).

Like I've said before, I don't need or want a new IDE or language to help me with trivial problems. I want tools that help with the harder stuff.

Actually, it is the other way around. Apple's Playgrounds in Xcode and Swift were inspired by Light Table. Chris Lattner, the creator of Swift, even publicly acknowledges Light Table as an inspiration on his blog:


I think this is awesome and I encourage this extraordinary effort.

Here are my reasons:

1. I've actually coded something similar a long time ago (I think Eve is even better than my solution) and it worked. My team and I were able to make entire apps in a heartbeat.

2. The reason it works is that, as pg famously wrote, programmers think in the language they use. Our cognitive load and power are a function of the language we think in. The key to Eve in this respect is that you can program not only your app, but the development organisation that goes with it. Also, it is beginner friendly.

3. With 1. and 2., you get that Eve is related to reflectivity and homoiconicity. Maybe it is what's behind homoiconicity: your code and your organisation share the same language.

I wish the Eve team the best.

First impressions: I like the look of this language/IDE. It's not general purpose but it seems to be good at what it does. I think live programming - where you make changes to your code and immediately see the effect without having to recompile and restart your system - is the way forward. The adoption of literate programming is interesting - I think commenting code is unsatisfactory for a variety of reasons and am looking for an alternative, and Knuth's idea might be worth building on. The other idea I like is the ability to point at a widget and find out the code that draws it - that makes debugging a lot easier.

  Instead of thousands of debugger options, let's have a magic tool where you click on the thing that's going wrong.
There are reasons for the thousands of options though! Like, "debug this specific object" or "breakpoint when this condition is satisfied" exist as primitives already; there's more complexity there than just "what is the data backing this UI element". If Eve has indeed managed to get away from needing them, that is laudable, but it would be achieved by the full set of things you might want to do being blindingly obvious, not "magic" special-cases for "I was expecting something here and I don't see it"

This is a neat idea, but I'm wary of the pitch, at least a little. I like to know what my code is doing, at least in general. So when they said "programming designed for humans," that worried me: I don't like magic, and that's usually what "programming for humans" entails.

Seems to me it will have some good uses. The promo of it being "more human" is a stretch, yes, esp. given the actual programming language didn't look any friendlier than any other language imo (e.g. world<-[#div style: [user-select: "none" -webkit-user-select: "none" -moz-user-select: "none" children ...) But it's nice of them to make a valiant effort at trying to create a new method of doing things. Reading the "What Eve Is" page is helpful:

http://programming.witheve.com/deepdives/whateveis.html "Being explicitly designed for data transformation, there are somethings that Eve will particularly excel at..." (after beta stage, of course)

That said, it'll be interesting to see what software people decide to create with it and where this system excels.

Heh, as another commenter said below, "i see the first problem is that you're generating HTML". The nice thing is, that is in no way intrinsic to Eve. The @browser DB is a regular DB like any other that we happen to sync to the client, and the client has a bit of code (which users will be able to extend in the future with other functionality) that takes HTML-shaped records and turns them into actual DOM on the screen.
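To illustrate the shape of that client-side pass, here's a loose Python sketch (not Eve's actual client code; the record shapes and the render function are made up for the example):

```python
# Loose sketch: walk "HTML-shaped" records (here, plain dicts) and emit
# markup. Swapping this one function out is what makes the HTML target
# non-intrinsic: the same records could drive a different renderer.

def render(record):
    tag = record.get("tag", "div")
    style = record.get("style", {})
    style_attr = "".join(f"{k}: {v};" for k, v in style.items())
    attrs = f' style="{style_attr}"' if style_attr else ""
    children = "".join(render(c) for c in record.get("children", []))
    return f"<{tag}{attrs}>{record.get('text', '')}{children}</{tag}>"

page = {"tag": "div",
        "style": {"user-select": "none"},
        "children": [{"tag": "span", "text": "hello"}]}
```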

Since we are targeting programmers first right now, it seemed better to focus on other aspects of the editing experience than to try and re-invent document markup to start with. That said, we're not opposed to exploring that possibility down the road. If you have any thoughts on the subject I'd love to strike up a conversation on the mailing list [1].

[1] https://groups.google.com/forum/#!forum/eve-talk

I like the look of this project, and it's in many ways inspiring, but here's my cynical take:

I think that Eve won't be conducive to creating applications beyond a few hundred lines of code -- after that, the "human-friendly" programming paradigm becomes an obstacle to production. Once people actually understand how the code works, the document style becomes superfluous.

I suspect that Eve will be a great learning tool and pique the interest of those who would otherwise never program, but Eve will be an introductory tool that users will inevitably graduate from to other languages that are perhaps more powerful, concise, and scale better.

A ton of programming is done in Access and Excel. Perhaps most business calculation and presentation formulas are done in those "integrated" environments, and not in languages as we "programmers" know them.

I can see this fitting a niche between Excel and "normal" programming. That niche might be a lot bigger than we think.

I think we may be a bit blinded as regular programmers to who is actually a programmer, and where programming happens.

Read up on the Dedalus language and Datalog that Eve is based on. It's pretty powerful. You just haven't looked deep enough.

It seems like you are aiming for two main things at once, 1) to create a "literate programming" environment (document structure, real-time visualization, etc) and (in service of that?) to use this "world as data" model. As a non-expert programmer, I have to say I don't quite understand the implications of the search/bind/commit approach, specifically whether it was necessary in order to implement the features of the environment that make it human-friendly, or whether you consider it to be, in itself, a human-friendly approach.
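For what it's worth, the analogy I've settled on for search/bind/commit is roughly the following, sketched in Python over a bag of records (my own mental model, not Eve's documented semantics; all names and record shapes are made up):

```python
# Analogy: the database is a collection of records. "search" finds matches,
# "bind" derives records that exist only while their support exists (they
# would be recomputed whenever the db changes), and "commit" is a permanent,
# one-time addition.

db = [{"tag": "student", "name": "Ada", "grade": 93},
      {"tag": "student", "name": "Alan", "grade": 87}]

def search(db, **pattern):
    return [r for r in db if all(r.get(k) == v for k, v in pattern.items())]

def bind(db, derive):
    # Derived view: one output record per matching input record.
    return db + [derive(r) for r in search(db, tag="student")]

def commit(db, record):
    db.append(record)
    return db

view = bind(db, lambda r: {"tag": "div", "text": r["name"]})
```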

My feeling is that the language itself is not really any more human-friendly than any other. You say in the "what Eve isn't" section that it's not for non-programmers-- but if the environment and language were both truly human-friendly, one benchmark of that would likely be a lower barrier to entry for non-programmers.

That said, again as a non-expert programmer, I see massive value particularly in the "document" approach-- though I see it less as human-friendly, and more as human(s)-friendly. Most of my programming experience has been as a graduate student, either scientific programming or (small) app and website development, and it is often the case that code is passed down in time from person to person. Each time you get someone else's project you initially have to rely on code organization conventions, file names and comments to figure out how the program really fits together, before you can even start working with it (and the tendency, for small-ish projects, is to want to avoid this work and just rewrite it yourself). The code-as-document approach seems wildly better for these particular use cases. I want to echo king_magic's comment, that if a wiki-like overlay could be used on top of an arbitrary codebase, it would go a long way toward human(s)-friendly programming, and I'd use the heck out of it.

> wiki-like overlay

so just a nice rendering of comment? sounds like it might be possible with syntax highlighting and markdown-enabled comments

I've just started following along with the quick-start tutorial, and I have to say that the presentation is quite fantastic.

Combining the source, tutorial documentation and program output along with the ability to selectively enable blocks of code makes for a unique experience that I haven't seen elsewhere (Jupyter Notebook comes quite close, however). I recommend anyone reading this to give it a shot: http://play.witheve.com/#/examples/quickstart.eve

It's a shame the majority of the posters here seem to be preoccupied with the tagline, rather than the actual project.

I was just listening to DHH's interview [0] on Tim Ferriss's podcast. On the question of beautiful code DHH said among other things "I open up any piece of code in Basecamp and it kind of reads like a great table of contents..." That immediately made me think of the work being done with Eve based on recent screenshots. Can't wait to try it out.

[0] http://fourhourworkweek.com/2016/10/27/david-heinemeier-hans...

So my immediate thoughts are this looks very nice for designing websites with. It does look like the ideas all rely on a static webpage though (not many moving parts). As I understand it, your page is built from a set of tagged content in a DB, and then further queries can access data via those tags. You've fit together a DB + real-time feedback + visualisations in an appealing way that actually makes creating webpages look fun. Something that is otherwise a terribly monotonous task in the current JavaScript climate.

I wonder how well these ideas work as you ramp up to complex algorithms though. For example, 5 nested for loops over 10,000 records of data would likely choke your visualisation to death. Also, the choice of data structures (list vs hash table vs concurrent queue vs ...) is often paramount to the performance of the application. I can't imagine a single DB always being the best approach, but one idea could be to measure the frequency of the data passing through and optimise for the best structure, perhaps? Similar to how SQL operates at the moment.
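On the data-structure point, the gap between "one generic store" and "the right structure" is easy to make concrete with membership tests (a toy Python timing sketch; the sizes are arbitrary):

```python
import timeit

# Same data, two structures: the choice dominates lookup cost.
data = list(range(100_000))
as_list, as_set = data, set(data)

needle = 99_999  # worst case for the list: a full scan

t_list = timeit.timeit(lambda: needle in as_list, number=100)  # O(n) per lookup
t_set = timeit.timeit(lambda: needle in as_set, number=100)    # O(1) per lookup
```

Both answer the same question, but the list pays a linear scan on every lookup; an engine that picked structures from observed access patterns would be doing exactly this comparison for you.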

The idea of splitting functionality up into blocks is interesting, though I think you are focusing too much on the literate side of things. I think I would just keep the English text to a minimum, only explaining the 'why' and let the code explain the 'what/how'. But forgetting that, the block separation idea is nice enough on its own, especially with the table of contents.

I've not looked very in-depth at the language, but what you did in the video did seem a bit like magic at times, and the simplicity seemed to hint at a lot of code hidden away behind simple looking APIs. Meaning doing anything out of the norm would find yourself having to roll a lot of your own code, but I could be assuming wrongly here so won't dwell on it.

I did notice that there were no autocomplete popups in the video. Does this mean you've forgone a type system of any sort? I would hope not, as TypeScript has shown the productivity hike that adding a few simple type annotations can give. Fully 'dynamic' code bases tend to be nightmares after a certain LOC threshold.

All in all congrats for giving a new outlook on programming by combining a set of old ideas in a new streamlined way, and giving HN something else to grumble about for a while!

In all honesty I'm more impressed by this [1], but I will take a look.

[1] http://www.red-lang.org/2016/07/eve-style-clock-demo-in-red-...

Pretty neat (understatement). From a quick glance, blend of notebook/literate with smalltalkish introspection.

Kudos for keeping at the idea for long (I remember when you left LT and talked about possible ideas a while back) and delivering something that simple yet inspiring.

ps: the team bug fixing interactions reminds me of IBM Jazz days, it seemed so heavy, and here it seems so light.

"An IDE like Medium, not Vim"

What's wrong with Vim? I think they're trying to cater to "the non-programmer crowd" with this, but by trying so hard it's alienating real programmers. I love vim, and I hate Medium with passion. I'm sure there are a lot of other HN people who are not so much a fan of the types of people who just write meta posts, rant posts, listicle posts, self help posts, growth hacking posts on Medium and call themselves a "maker" without actually building anything meaningful. So, you've lost me there Eve. Anyway, that aside, looking more into how this tool really works, this is NOT something an everyday Joe will use like they speak English. This is an actual programming language. The language itself is no easier than python or ruby (actually subjectively speaking the syntax is less intuitive than python for example).

Also I think they're imagining that Excel users will just eat this up, since it's kind of like how people run pseudo-programs on Excel documents. But people used those because Excel was the most popular app that let them work with numbers back then. Now there are many ways to achieve what Eve is trying to do, plus Excel. Why would any lay person jump on an obscure technology that doesn't do anything significantly new?

Overall I think there's a problem with assuming that this is for "humans" just because it's "literate programming". "Humans" don't want all this literate programming stuff exposed. They just want to push a button and take care of things. On the other hand, programmers (I guess we should call them "non-humans") don't need all this literate programming stuff. They are trained to understand how programming languages work. They may think this is neat at first, but very soon will start to think all these comments are getting in the way.

I have personally never seen any programming related technology titled "... for humans" actually work for "humans". Perhaps because these tend to be built by extremely talented programmers who are so talented that they are out of touch with how their grandma uses computers; they probably think their creations are easy enough for "humans".

Sorry if this sounded too negative. I am willing to discuss and correct my comments if I am wrong about anything.

I think what they mean by "human" is non-professional.

All programming tools are for humans. But they're also almost exclusively designed with the expectation that the user will be able to devote afternoons, weeks, months, years to learning the tool. That's a fair expectation if your target audience is programming professionals... If it's your job, you can amortize those learning costs over some months of paychecks.

But that a) creates a high barrier to entry for people to get into the biz. And b) mostly locks out people who want to do just a little programming. I.e. the Excel crowd.

The field Eve is in, so called "end-user programming" is still trying to advance beyond spreadsheets, which were the last big win for non-(professional programmers). Well, HTML was probably in there too. WordPress templates, etc. But spreadsheets were probably the last quantum leap in the field.

We're overdue for another.

> But spreadsheets were probably the last quantum leap in the field.

> We're overdue for another.

I'm working on it, and do think I have a solution. Unlike Eve, for example, you can (and we do) create (almost said "write", but there's no textual code) an OS kernel in it.

And it's nothing like the approach Eve is taking, which I consider to be nothing like why spreadsheets were (and are) successful.

To me, Eve is nothing more than textual datalog programming + a high-level stdlib + a UI wrapper.

The only area we agree is that logic programming is an underutilized paradigm, and in my visual programming environment, I do use it for business rules (where it absolutely excels).

I tend to agree that Eve isn't quite the right direction. The REPL is important, so that's good. And one-click deployment, one-click forking of development environment, those things are critically important. But from there...

I agree the high-level stdlib is a problem. I think what we need is an easy-to-dip-your-toes-into environment that provides a path down into "real programming" and ideally down to metal.

I think the big problem isn't exactly UI or language or high enough level APIs, it's the way the APIs are structured. The two big issues I see with most tools are:

1) a tendency to layer opaque frameworks on top of opaque frameworks, which makes debugging down into the system difficult, if not impossible (e.g. React). In this sense I think it's interesting that Eve is trying to build on top of itself. Anywhere there's an opaque layer, you are putting a massive learning curve in front of anyone who is grappling with the finer details of that API.

2) declarative interfaces (e.g. Ember). These require the programmer to have detailed knowledge of the inner workings of the runtime. It feels like we need tools made of primitives that the consumers of the tools have a clear path to understanding. There is this idea that the user will never have to understand how the tool works, but my experience is that you always eventually do. You always need to understand the layer below you, so the task is to make those layers easy to dip into... not to try to insulate your users from the details.

Eve seems to partly fail on both of these.

What does your approach look like?

I agree with you that APIs are where things fail, and without a good solution for that, you're not going to make much progress. I've spent a ridiculous amount of time working on that problem (and do believe I have a solution).

I'll be sure to post to HN when my visual programming environment is ready to use.

Today is Eve's day. :)

That's what I was thinking. I'm a human, I like my Vim and C.

The main limitation of a language like that is that, if you care about performance at all, you have to program for the machine first, not for humans.

You're always writing for both to some extent. For me this is the main challenge of programming; writing code that executes efficiently and correctly, and that is legible to humans.

I love Python and other high concept languages but if I want to code for performance I need to understand what the machine is doing. The tower of abstraction is so high with these languages, at the end of the day it's easier to go, eff it, I'll do it in C, than to try to understand exactly how this code will be executed down at the bottom of the tower.

Yup, and even if you don't care much about performance, you still have higher level languages which are good enough.

That's why I wonder where Eve fits in. Most people don't want to program. And this is NOT because they're stupid. It's because they have tons of other important things to worry about. Most people don't want to build their own TV set. They buy it because their expertise is in other things. Maybe they're a doctor. Maybe they're a fields medal winning mathematician. Heck, I'm a programmer and deal with all kinds of programming languages but don't want to know what's going on inside my TV set.

I haven't looked in depth at Eve, but it sounds like it's a bridge for the people that _do_ want to program but don't want (or even need) to understand the underlying complexity. As a close to home example, I've had several game ideas I've wanted to program for a while. Simple things, I've had notebooks describing behaviors and what should happen when in the game for years. My day job is software development, but I haven't been able to grok a game engine enough to implement playable versions of these games.

In other words, I don't want to build my own TV set but I do want the favorite channels menu on my TV set to behave slightly differently.... perhaps it has profiles for each family member. Conceptually simple but hard as hell to execute from scratch.

I've seen a few of the "excel crowd" comments so far. Could you imagine the performance of this system if they really let a few of the "excel people" make their own charts and graphs like are in that video? It would destroy the system in short order, I'm guessing.

It could be useable, I don't want to dismiss it entirely without even trying it, but honestly the website's text has really turned me off, so I probably won't.

I'm not so sure. Granted, I might write html in vim, but I'd prefer writing Markdown or ReStructuredText. We could have a (subset of) full html in the hn comments - but it'd likely get in the way, rather than be helpful. Medium is a nice way to publish some text with images. Note the "with images" part. Vim is a much better text editor, but I don't think it's a very good hypertext editor. I don't necessarily need WYSIWYG all the time, but an object oriented/rich graphical editing environment certainly has its place.

I'm reminded of the original MVC paper[1] - about presenting an interface that matches the user's mental model, through the interactive graphics and back into the computer's internal data model.

Earlier it occurred to me how crazy it is that so many of our popular languages still work on such primitive data structures. The most obvious being null-terminated ASCII strings in a globalized world - where ASCII is effectively always the wrong choice (or at least, "not best"). But I'm thinking more about using any other structure than images, 3d models, databases. Why do we work so often with bare variables? I mean, in embedded programming, I guess it's interesting to think about two's complement, overflow etc - but really. If we can't even handle numbers and vectors in a natural way - aren't we doing something terribly wrong?
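The string complaint is easy to make concrete; a small Python sketch (c_strlen here is my toy stand-in for C's strlen, not a real API):

```python
def c_strlen(buf: bytes) -> int:
    """Count bytes up to the first NUL, the way C's strlen does."""
    n = 0
    for b in buf:
        if b == 0:
            break
        n += 1
    return n

s = "naïve"               # 5 characters to a human...
utf8 = s.encode("utf-8")  # ...but 6 bytes once encoded

embedded = b"abc\x00def"  # a perfectly good 7-byte string,
                          # yet strlen would report 3 and lose "def"
```

Length-in-bytes, length-in-characters, and length-up-to-NUL are three different questions, and a null-terminated ASCII model conflates them all.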

The more I think about it, the more I come to believe that many of the ideas of hypertext, Smalltalk/Object Orientation, HyperCard, CAD/3D programs and 2d drawing packages (as well as decent DTP programs) are on the right track: let us use type information and other meta-data to manipulate structures - both "code" and "data" - in richer ways. Let me see my code as a graphical finite state machine/diagram. Allow me to work with my generated matrix as a heightfield in 3d and as a greyscale image. Let me visualize the steps of my sort algorithm, not just look at assembler in a debugger. Why not show me my stack(s) as 2d or 3d stack(s)? Not all the time, but when I want to?

I do like vim, for editing text, and working with our rather dumb programming languages, that might be considered "text+". But they're not really rich, they are hard to visualize, listen to, paint on. And for some things, that's fine. All we need is some text.

But I would argue most good history books have a few images, and maps. Rich media. Even math books do. Maybe the problem with fourth-generation languages has been that they wanted to "fix" programming, rather than just fix some parts - like Elm, with its natural integration of stepping back and forth in time.

Vim is a great tool - but it may be overspecialized on manipulating text: text objects, lines and letters. Those are not generally the semantic objects of programming (excepting perhaps those that make a living coding in brainfuck). I view lisp as a way to simplify "down" to text. And something like hypercard, smalltalk or the lively kernel project as attempts at simplifying "upward" -- to work with the structure. But that requires something beyond simple, human-readable text. You need to save a workspace, and a memory image. The systems acquire a lot of complexity in order to realize their simplicity.

But in the case of lisp, the system is generally rather complex underneath. There's still compilation to machine code. There's still some kind of graphical interface, font rendering. Maybe what amounts to a different operating system whose only job is to boot your operating system that runs your lisp environment...

Personally, I don't really think C is particularly simple. I still have to look at the assembly to know what's going on. And if the code ends up running on an ARM, or a CPU not yet invented, I'm still kind of lost. I don't really understand the details of how the assembly I can read interacts with hyperthreading. In the end, what I really care about is the effect of the code - and that is really something I need to test and measure in the context of a running system. And the complexity of that is such that I likely end up sampling under various conditions, and doing some timing runs.

This "eagle view" approach doesn't mean I don't ever think about memory layout, algorithm complexity and such. But it does mean I rather rarely think about what actually happens deep inside the machine and the three cache layers.

[1] http://heim.ifi.uio.no/~trygver/themes/mvc/mvc-index.html

God bless you for being the only one on this comments page (as of right now) to mention HyperCard.

> What's wrong with Vim?

Emacs is better ;) But the comparison is a bit disingenuous. Medium is a blogging platform, VIM is an extensible text editor.

I think they were trying to emphasize that their strength is "readability", hence Medium.

I watched the video, and immediately realized I would never want to program in that kind of environment. The "literate programming" would get in my way.

It would be like having someone follow me all the time and translate my words in realtime right next to me whenever I speak something. (Actually it's more like me saying something in English AND having to translate after every sentence) It's super distracting because I can't focus on what I'm saying because I lose my train of thought.

I would say it's closer to the reason we include emoticons in text messages. Text alone loses something from the original intent, so we add emoticons as a way to bring some of that back. Plain code likewise cannot express softer yet still crucial aspects of software design, like the "intent" or "audience" of the subsequent code.

Emacs vs Vim in an unrelated thread, obligatory on HN :) // I concur

Sorry, but what about this is "designed for humans"?

What do the keywords mean? What's the language paradigm? Why do I want this when it's essentially coalescing a lot of APIs into a language that you've provided no spec for?

Why would I want my language to work with slack?!

I'm not impressed. It just looks like another functional language with a bunch of addons tacked on to make things "easier" or "for humans".

Drop the buzzwords and get to the meat please.

There's plenty of meat in the blog post, on the Github etc. I suspect it's for "humans" because it's literate, functional programming with a REPL and a natural approach to how data is managed. It's a lot more than just a functional programming language. When did it become cool to make uninformed claims about how worthless other people's work is?

So if this is "programming for humans", all of us are just writing in some alien language because we're aliens?

When did it become fashionable to call everybody else some variant of insane or delusioned and tout your own solution?

Sorry, we're not trying to call you an alien, quite the contrary! We appreciate that you are a human, and we feel a lot of our tools aren't designed for people like you and me. Our argument is that even our best tools today are built with constraints imposed by the earliest computers (isolated, single core). We've moved past these limitations technically, but our ability to program to the best of our ability on these new, more capable machines (connected, multi-core) is hamstrung by restrictions still imposed by even the newest languages. Look at Swift, a brand new language that can only offer time-travel debugging in a special playground context.

What we're saying is that it's time to move past writing code for the consumption of the machine. If we want the power of computing (not necessarily programming as it exists today) to be more collaborative, and more inclusive; and to reach a wider, more diverse audience, then our tools need to start being designed for humans. Right now Eve is focusing on a table of contents, readable and relevant error messages, and clean, consistent, simple semantics. It's a modest start, but that's where we are. :)

So again, I'm very sorry we offended you, and I hope this explanation helps make our view clearer.

The vitriol you're bringing to this thread is far beyond what I'd expect considering the topic. What has angered you so greatly?

I agree and hate the "for humans" meme. It's insulting and doesn't help your case.

I think "for humans" is just intended to convey that the language and environment are based on research into human cognitive traits: how we think and learn.

"for humans" doesn't imply that the rest are insane or delusional. It's the same context as "Wow Usain Bolt is not human".

As a counterpoint: the example of the document containing blocks of code inside what looked more like a word document, to me, exactly explains visually what "designed for humans" means in this context, and I think it's well put.

That's not revolutionary at all. You can do that already with tons of environments. Here's one for Javascript, based on Markdown: https://github.com/jostylr/litpro
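The core "tangle" step such tools perform really is small; here's a toy Python sketch (my own illustration, not litpro's actual implementation - the FENCE indirection just avoids writing literal code fences inside a document):

```python
import re

FENCE = "`" * 3  # a markdown code fence, built indirectly

def tangle(markdown: str) -> str:
    """Concatenate the contents of fenced code blocks, dropping the prose."""
    pattern = re.escape(FENCE) + r"[^\n]*\n(.*?)" + re.escape(FENCE)
    blocks = re.findall(pattern, markdown, flags=re.DOTALL)
    return "\n".join(b.rstrip("\n") for b in blocks)

doc = (
    "Prose explaining the idea.\n\n"
    f"{FENCE}js\nconsole.log(\"hello\");\n{FENCE}\n\n"
    "More prose, then another block.\n\n"
    f"{FENCE}js\nconsole.log(\"world\");\n{FENCE}\n"
)
```

The prose is for the reader; tangle recovers the program. Everything beyond this in literate tools is block naming, reordering, and cross-referencing.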

Did someone make a rule that everything needs to be a revolution?

Yes, it's called advertising.

Can you post a link so that I can try litpro? Or does it need a download and installation?

While I like the idea, the problem with the demo is that it has about 10 files, with a bunch of small code blocks. What does it look like in a real-world application with 1000s of lines in 100s of files?

I'm sorry, but that doesn't give the connotation that it's "designed for humans". Perhaps if they could give a clear indication of what that means other than "it's in a word document", I'd be more inclined to look at it a little harder.

Watching the video is probably the best way to get an understanding of what it's about. It's hard to describe an "experience" with text. Their video made more sense to me than most of the written explanation.

> It's hard to describe an "experience" with text.

That's kind of the thing that literate programming is trying to solve[0]. But if the experts aren't able to do it for their own product, what chance does a random programmer have with their own code?

[0] - https://news.ycombinator.com/item?id=12818653

"When we communicate to one another face-to-face, we use gestures, expressions, intonation, etc. to articulate an idea. On the internet, when we communicate with just text, much of my meaning is lost forever and never apparent to anyone who reads this. Programming is much the same way."


No, he's absolutely right; it's not clear at all how this is supposed to be human-friendly. I think it's incredibly hard to read and looks like it's going to be an unmaintainable mess by throwing random APIs like Slack into the standard library. I wouldn't say I'm salty, but I'm certainly confused as to why this is what it claims to be, rather than just a very specific beginner web dev language.


>I'm sorry, but that doesn't give the connotation that it's "designed for humans". Perhaps if they could give a clear indication of what that means other than "it's in a word document", I'd be more inclined to look at it a little harder.

To be fair, this can easily be read as bitter. "I'm sorry, but", "Perhaps if they could ... I'd be more inclined to look" both come off that way.

You can do this with Jupyter notebooks.

I just tried Jupyter... I couldn't figure it out. Could you post a link to a tutorial that I could use?

The easiest way (and the method I use) is to just use the version packaged with Anaconda. See: http://jupyter.readthedocs.io/en/latest/install.html#id3 .

There are also instructions on that page you can follow (installation via pip) if you already have a reasonably modern version of Python installed, and you have an appropriate C compiler available. This is a pain to configure if you are using a Windows machine.

Assuming the 'jupyter notebook' command succeeds, a browser window should pop up, displaying a UI for manipulating individual notebooks.

If you have already successfully completed the installation, and are instead looking for guidance on using Jupyter Notebook itself, then your best bet would be to look at some of the examples: https://try.jupyter.org/

I tried https://try.jupyter.org/ and that was what I could not figure out. It looks like some kind of CSV editor once I got to see some output.

Don't think of it as a replacement for javascript. Think of it as a replacement for excel.

Yes, I totally agree. I work with data scientists and doctors needing small workflow applications and this would be perfect for them.

Hey! That "for humans" crap worked really well for making my grid system popular! :^)

Next thing you know, we'll be marketing it as a "fresh, humanely designed, organic, farm-raised" programming language. ;)

"Because it will pull investors" is a perfectly valid response around here. :)

I thought the video was a good example of "designed for humans". Did you watch it?

Here is a link you may find helpful: http://docs.witheve.com/handbook/literate-programming/

The basic premise is that we humans are cognitively built to remember and follow stories. Eve supports Literate Programming, so you can write code like a story.

The meat is that most programming languages are not designed for humans. Many weren't designed at all so much as hacked together for the context they were originally used in, with terrible consequences for learning or effective use by the average person. C and early PHP probably led that. Many others were affected by committee thinking, backward compatibility, the preferences of their designer, or the biases of people who previously used hacked-together languages. There are few languages where someone or a group sat down to carefully engineer them to meet specific objectives with proven success in the field at doing those. Examples include ALGOL, COBOL, Ada, ML for theorem proving, BASIC/Pascal for learning imperative programming, and some LISPs.

So, if we're designing for humans, what does that mean? It means the language should easily map to how humans solve problems, so they can think like humans instead of bit movers (aka machines). BASIC, some 4GLs and COBOL were early attempts at this that largely succeeded because they looked like the pseudocode users started with. Eve appears to take it further in a few ways.

They notice people write descriptions of the problem and solution, then must code them. The coding style follows the same pattern. They notice people have a hard time managing formal specs, the specifics of computation, error handling, etc. So, they change the style to eliminate as much of that as possible while keeping the rest high-level. They noticed declarative, data-oriented stuff like SQL or systems for business records was simple enough that laypeople picked it up and used it daily without huge problems. They built language primitives that were similarly declarative, simple, and composable. They noticed people like What You See Is What You Get, so they did that. Getting the code/data for something onscreen, being able to instantly go to it, modify it with the computer doing the bookkeeping of effects throughout the program, and seeing results onscreen is something developers consistently love. Makes iterations & maintenance easier. They added it. Their video illustrated that changes on simple apps are painless and intuitive compared to text-based environments with limited GUI support. Their last piece of evidence was writing their toolchain in Eve, with it being a few thousand lines of code. That shows a great size vs functionality ratio, with the quip about JavaScript libraries being bigger illustrating it further.
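The SQL observation is worth making concrete: the declarative version states the what and leaves the how to the engine. A minimal Python/sqlite3 sketch (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("ann", 10.0), ("bob", 5.0), ("ann", 7.5)])

# Declarative: say *what* you want; grouping, summing, and ordering
# strategy are the engine's problem.
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders "
    "GROUP BY customer ORDER BY customer"
).fetchall()

# The imperative equivalent a layperson would otherwise have to write:
by_customer = {}
for customer, total in conn.execute("SELECT customer, total FROM orders"):
    by_customer[customer] = by_customer.get(customer, 0) + total
```

Same answer both ways, but the first version reads like the business question itself, which is much of why laypeople could pick it up.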

So, there's your meat. Years of doing it the common way, which wasn't empirically justified to begin with, showed that people just don't get it without too much effort, and then they keep spending too much effort maintaining stuff. Languages like COBOL, BASIC, and Python showed that changes to syntax & structure could dramatically improve learning or maintenance, even by laypersons. Visual programming systems, even ones for kids like Scratch, did even better at rapidly getting people productive. The closer a tool matches the human mind, the easier it is for humans to work with. (Shocker.) So they're just doing similar stuff with a mix of old and new techniques, aiming for even more effective use of the tool with the least mental effort by as many people as possible, plus better results in the maintenance phase thanks to literate programming.

That's my take as a skeptic about their method who just read the article, watched the video, and saw the comment here by one of the project's people. I may be off, but it all at least looks sensible after studying the field for over a decade.

Smalltalk was very consciously, and conscientiously, designed for humans.

I did Smalltalk in school, and once you peel away a few layers it doesn't feel like it was designed for humans at all. Perl is also said to be designed for humans, but it takes a very different approach.

What did you feel was bad about Smalltalk?

As for Perl, it was hacked together to automate some work Larry Wall was doing on the BLACKER VPN at TRW.

That's a good point. It will be on the next version of that comment.

I guess if you approach it as a "programming language" it's odd. I understood neither the syntax nor the semantics. But you have to admit that the way it relates different dimensions of a "program" is much more approachable than any IDE out there, in a way that is more "humane", I guess.

In what way is this more "humane"?

Programming languages by design are for human consumption. In what way is this "special" or "designed" for them?

Here, I think you are wrong. I strongly suspect that while the high-level features of programming languages are chosen for human consumption, the implementation details and tooling are often chosen arbitrarily, or for machine-convenience. For example, I don't generally consider language environments where the leading white space on a line matters, or languages where trailing white space matters to be "designed for humans." Computers might be good at counting spaces, but we're mediocre at visually estimating the width of a blank area in text. Such an environment asks the user to develop more machine-like skills rather than attempting to accommodate their weaknesses.
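A quick Python illustration of the point: two functions whose tokens are identical except for the indentation of one line, yet which compute different things. The reader has to visually measure whitespace to tell them apart.

```python
# Same statements; only the indentation of `return t` differs.
def total_a(xs):
    t = 0
    for x in xs:
        t += x
    return t        # after the loop: sums every item

def total_b(xs):
    t = 0
    for x in xs:
        t += x
        return t    # inside the loop: returns after the first item

print(total_a([1, 2, 3]))  # 6
print(total_b([1, 2, 3]))  # 1
```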

Or it asks the user to use tabs? Or an editor that inserts spaces when you press tab? Indentation has never been a stumbling block for anyone but the most junior of programmers.

Sure, there are all sorts of ways to lessen the pain of syntactically significant indentation. Indeed, I'd say that the annoyance can be made sufficiently small that by the time someone is an experienced enough coder to recognize that the pain has always been communally self-inflicted, they're too used to it to care.

Do also note that I'm not saying that indentation is by itself bad.


It does more of the heavy lifting automatically. For example, rather than having to explicitly build a data structure to keep track of events that have happened, or build some message bus to receive and react to them, Eve allows you to express the fact that you want to react to them, and its runtime takes care of the rest.

How well this scales and remains available, well, that's an implementation challenge, but the user interface looks very convenient. It is potentially a higher level of abstraction over current "high level programming", just as high level programming was over assembly.
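A rough Python sketch of the ceremony being eliminated. The `EventBus` class and its method names are hypothetical, not any real framework's API; the point is that the conventional approach makes you build and wire this machinery yourself, whereas in Eve's model (as I understand the video) the storage, the bus, and the subscription all disappear into the runtime.

```python
# Conventional approach: explicit bus, explicit storage, explicit wiring.
class EventBus:
    def __init__(self):
        self.handlers = {}  # event name -> list of callbacks

    def subscribe(self, name, fn):
        self.handlers.setdefault(name, []).append(fn)

    def emit(self, name, **data):
        for fn in self.handlers.get(name, []):
            fn(data)

bus = EventBus()
clicks = []                           # hand-built event history
bus.subscribe("click", clicks.append) # hand-built wiring
bus.emit("click", x=10, y=20)
print(clicks)  # [{'x': 10, 'y': 20}]

# In Eve, a block instead declares "when a click exists, react", and
# the runtime does the bookkeeping that the code above does manually.
```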

What you describe is normally handled by any number of perfectly good libraries. The selling point here seems to be "we've thrown a bunch of random libraries together in the standard library", which isn't a super compelling argument to me.

Could you share an example library or framework that allows a programming model similar to what they demonstrate in the video? I'd like to learn more.

I am aware of frameworks that can pass information around in similar ways, but only with a lot more ceremony of the sort that's not necessarily a benefit. Observe specifically the code blocks with "search" that react to events without needing to be connected in any direct way with event production; and event production doesn't need to involve data structures or storage.

Maybe this model won't work in applications of meaningful scale, but maybe it will.

The way I see it, languages are only one projection of what a program is. The missing parts are what confuse "normal" people (unlike people who spent long hours learning how to map things in their heads). This project reminds me of AOP and IBM's old multi-dimensional separation of concerns projects. Being able to go back and forth between different views of the program helps tremendously.

It's a high-level language, so it's closer to how people think than to how machines operate.

EDIT: all a program has to do is 1) get data, 2) transform data, 3) output data.

Anything that lets you express these three steps with fewer idioms is a more "human" language, IMO.
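The three steps can be sketched as a tiny pipeline (plain Python, nothing Eve-specific; the function names are just illustrative):

```python
def get_data():                 # 1) get data
    return [1, 2, 3]

def transform(xs):              # 2) transform data
    return [x * x for x in xs]

def output(xs):                 # 3) output data
    print(xs)

output(transform(get_data()))   # the whole "program" in one line
```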

Fortunately for those of us who are interested in this, impressing you was never the point.
