Lamdu - towards the next generation IDE (peaker.github.io)
200 points by mjn | 88 comments



FWIW, the main goal of Light Table is to provide a platform for people to experiment with and build these kinds of experiences without having to worry about all the other standard editor-y stuff. By using a browser for the UI, we have the opportunity to very easily play with all sorts of really interesting display and editing paradigms.

Unfortunately, the notion of editing an AST is by definition language specific, so it's unlikely that someone could create the structural editor and just have it work for everybody. Moreover, as they mention here for Lamdu, it often requires looking at the language slightly differently and enforcing some rules that wouldn't normally exist in the code. But, with a decent foundation we can at least make writing the necessarily language specific parts relatively straightforward.

For those new to this world and wondering why this stuff doesn't seem to exist, one of the biggest problems with projectional editors is handling the translation problem. How do you reconcile changes in your representation with changes in the underlying code? Can you reliably parse handwritten code into your AST representation, write it back, and so on without any loss? What happens with handwritten styles that maybe don't fit into the projection's way of viewing the world? What if I know a better way to output the AST than you? It's also particularly difficult dealing with change over time, since there are no true unique identifiers for bits of code.

The approach they're taking is likely the correct approach for the "future" - we should be designing languages and approaches to coincide with the ability to tool them. Even better would be to never have a handwritten format at all: the canonical representation is always the AST. And though you might be able to edit a projection of it as text, you're never at a loss for how to get back to your "good" representation. This is the world I think we ultimately need to get to and you'll be seeing some really cool stuff from us in that vein early in 2014.
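
To make the "canonical representation is always the AST" idea concrete, here's a minimal sketch (my own illustration, not Lamdu's actual types; all names are made up): every node carries a stable identifier, and text is just one projection rendered from the tree.

    newtype NodeId = NodeId Int deriving (Eq, Ord, Show)

    -- The canonical program: a tree whose nodes have stable identities,
    -- so edits and history can refer to "this node" rather than "line 42".
    data Expr
      = Var   NodeId String
      | Lit   NodeId Int
      | Apply NodeId Expr Expr
      | Lam   NodeId String Expr

    -- One possible projection of the tree into text; a structural editor
    -- could render the very same nodes quite differently.
    render :: Expr -> String
    render (Var _ name)  = name
    render (Lit _ n)     = show n
    render (Apply _ f x) = "(" ++ render f ++ " " ++ render x ++ ")"
    render (Lam _ v b)   = "(\\" ++ v ++ " -> " ++ render b ++ ")"

With identities like these, "what changed" can be answered per node instead of by diffing lines of text.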


The main problem with structured editing is that it requires programmers to write code in a certain order that often goes against their "flow"; enforcing a rigid flow is a huge usability negative. Even without structured editing, we see cases where increased flexibility improves flow, e.g. in dynamically typed languages. Unfortunately, improvements in flow often come at the cost of reduced feedback (e.g. the feedback provided by static typing).

There are things we can do to have our cake (flow) and eat it (feedback) too. Code completion was one of the great boons of structured editing (introduced in Alice Pascal circa 1985), and by the late 90s we learned how to create "language aware" editors that could leverage this feature without flow-disrupting structured editing. The same goes for static typing: we can, through some heavy type inference, infer semantic information responsively while the user is typing, and use that to provide responsive feedback.

I'm in the camp where the programming experience should be considered holistically. The IDE is a part of that experience, and so language design should occur concurrently with IDE design. With some smart incremental compilation magic along with language-specific rendering in the IDE, we can build programming experiences that provide the benefits of structured (and projectional) editing without the flow costs. Or at least, that is the premise of my research :)


There's a structured editing mode for Emacs called Paredit, for use with s-expression languages like Lisp, Racket, and Clojure.

Since the AST for such languages is s-expressions, and some people struggle with parens, this is an interesting "fun size" example of editing the AST.

The thing is, paredit is quite challenging for most people to adopt, due to what you mentioned about the "flow" they've already learned. Magnar Sveen has a great video about this at http://www.youtube.com/watch?v=D6h5dFyyUX0

Best quote: "If you think paredit is not for you, then you need to become the kind of person that paredit is for."

EDIT: To clarify, Magnar is quoting technomancy a.k.a. Phil Hagelberg.


Yes, Paredit's not really a straitjacket; it's more like your editor gaining new concepts — verbs which operate on code units.

It overrides some deletion commands to keep your code well-formed, but there are simple ways (like cutting text) to break those rules and leave it ill-formed. (Currently, paredit won't start if it sees you're editing ill-formed text. But if it's already started, it'll continue running.)
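
As a toy illustration of the difference (my own sketch, not paredit's implementation), a structural "kill" that operates on the s-expression tree simply cannot produce unbalanced parens, whereas a plain text cut easily can:

    data SExpr = Atom String | List [SExpr]
      deriving (Show)

    -- A structural "kill": remove the nth child of a list node.
    -- The result is always a well-formed tree; there is no way to
    -- delete half a parenthesis.
    killChild :: Int -> SExpr -> SExpr
    killChild _ atom@(Atom _) = atom
    killChild n (List xs)     = List (take n xs ++ drop (n + 1) xs)

    -- killChild 1 (List [Atom "foo", List [Atom "bar"], Atom "baz"])
    --   ==> List [Atom "foo", Atom "baz"]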


Right, I realize it's not an inescapable straitjacket, and I'm definitely not putting down paredit; upvoted your comment.

I mention it because it's an example of structured editing of an extremely simple AST. And even people who want to use it (who want to leverage the so-called "straitjacket"), often find it quite hard to change their flow and adapt it.

That seems not to bode well for this being enthusiastically used generally. OTOH I suppose you could argue that a lot of people like auto-completing IDEs, and/or hate to type, so who knows.


Available for vim as well. This is one of the things that consistently makes my life better. http://www.vim.org/scripts/script.php?script_id=3998


The main problem with structured editing is that it requires programmers to write code in a certain order that often goes against their "flow"; enforcing a rigid flow is a huge usability negative.

I agree that structured and visual editing paradigms have failed. But ultimately all you're saying here is that structured approaches haven't offered enough positives to offset the cost of learning them. That may be true, but it would be good to know why.

My guess is that the data structures generated by the text of programs are just too varied and complex for any visual representation system to be of use. The programmer is interested in the final structure of their application and every intermediate one. Creating a program to represent those visually would likely be harder than implementing the application. Multiply that by every application that could exist.


> But ultimately all you're saying here is that structured approaches haven't offered enough positives to offset the cost of learning them. That may be true, but it would be good to know why.

No, I'm claiming that bad flow, even after you've learned and adapted to a structured paradigm, needs to be solved directly. Even if you can provide really great feedback, the bad flow will still exist and cause inefficiencies not offset by enhanced feedback, which is aimed at an orthogonal set of problems.

Edit: a good analogy might be between agility and armor in a tank. You can obviously make the tank more survivable by improving either one, but its overall usage becomes more limited when emphasizing one feature at the expense of the other.


"Unfortunately, the notion of editing an AST is by definition language specific, so it's unlikely that someone could create the structural editor and just have it work for everybody. "

This is, well, false. See, for example, the Harmonia project at Berkeley (http://harmonia.cs.berkeley.edu/harmonia/index.html), and Ensemble before that (http://harmonia.cs.berkeley.edu/harmonia/publications/ensemb...)

This stuff has been done forever, and it's been "the next generation" forever. Things like separating structural editing from the language-specificness of an AST while still providing a sane and useful editor are paths that have been trodden before.

Not that this means people shouldn't try, but as you can see from just these two links, there has been a lot of research and effort put into this area in general. This is an area where I'd read a lot more than I'd sit down and code, because a lot of smart people have been this way before.

Case in point: Susan Graham (and maybe Tim Wagner), who led the Harmonia project (and many projects before that), is basically the go-to person in the world if you ever wanted to know anything about incremental analysis of programming languages (she also helped write gprof, among other things).


> This is, well, false.

Harmonia was more about language-aware editing, Ensemble and Pan were more about structured editing.

> This stuff has been done forever, and it's been "the next generation" forever.

The Harmonia stuff never worked that well when I tried it. I've also read many of Graham and Wagner's papers on their tooling, and you know what....

> This is an area where i'd read a lot more than i'd sit down and code, because a lot of smart people have been this way before.

...it turns out we can do much better than what they did given per-language consideration. Once I decided not to use Harmonia for the Scala IDE, I came up with a way to incrementalize Martin's compiler fairly easily. It wasn't that hard in hindsight, but none of those fancy algorithms were really necessary at all! Instead, we just needed an incremental computation framework like Glitch [1] to do the heavy lifting transparently, which eventually led to my work in live programming.
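
For the curious, the core idea is roughly this (a toy sketch of my own, not Glitch itself and not the actual Scala or Lamdu machinery; all names are illustrative): cache per-node results, figure out which nodes an edit dirties, and reuse the cache for everything that stayed clean.

    import qualified Data.Map as M
    import qualified Data.Set as S

    type NodeId = Int
    data Node = Lit Int | Add NodeId NodeId   -- a trivially small "language"

    -- Evaluate a node, reusing cached results for nodes untouched by the edit.
    -- The dirty set is assumed to already contain the edited nodes plus their
    -- transitive users (computed elsewhere from a dependency graph).
    eval :: M.Map NodeId Node -> M.Map NodeId Int -> S.Set NodeId -> NodeId -> Int
    eval prog cache dirty nid
      | nid `S.notMember` dirty
      , Just v <- M.lookup nid cache = v      -- clean node: no recomputation
      | otherwise = case M.lookup nid prog of
          Just (Lit n)   -> n
          Just (Add a b) -> eval prog cache dirty a + eval prog cache dirty b
          Nothing        -> error "unknown node"

    -- After an edit, refresh the cache only for the dirtied nodes.
    refresh :: M.Map NodeId Node -> M.Map NodeId Int -> S.Set NodeId
            -> M.Map NodeId Int
    refresh prog cache dirty =
      foldr (\n acc -> M.insert n (eval prog cache dirty n) acc)
            cache (S.toList dirty)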

So the moral of this story: don't trust everything you read from an academic conference. Often work is incomplete, not appropriate for your context, outdated, or even unworkable. Usually, it's incomplete and there is lots of room for improvement.

[1] http://research.microsoft.com/apps/pubs/default.aspx?id=2013...


>This is, well, false.

I don't think it can be called true or false that easily. Sure you can translate multiple languages into one AST representation (maybe embellished to allow round-tripping). But is it still the same language? I would say no. But I would go further and say that not even a language specific AST is the same language.

I think the idea that we can separate programming languages from their surface syntax and still claim they are the same languages is misguided. Yes, programming languages have semantics, but the user interface aspect is an inherent part of their identity, because it is an inherent part of what it means for humans to use language. There is no such thing as syntactic sugar.


There's been some interesting work by Conor McBride on Epigram where he uses structural editing to help communicate the proof burdens that the compiler can infer and what remains for the user to do. Epigram is long gone, but there are still papers describing how it worked.


Also, Elliott's Tangible Functional Programming:

http://conal.net/papers/Eros/


the canonical representation is always the AST

This is what a next-generation programming environment must have. Primarily, one manipulates AST nodes, projected visually into lines of code. But, to preserve the comfort and flexibility of free-form text editing, the editor knows about non-AST text fragments, tree "holes", and other representations of intermediary editing state. (A big UI problem for sure!)
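
A rough sketch of what that could mean in the data model (my own guess at the shape of it, not Lamdu's design): the tree stays canonical, but a position may temporarily hold a hole or a raw text fragment that hasn't been structured yet.

    -- Positions in the tree may hold a finished node, an empty hole,
    -- or free-form text awaiting structuring.
    data Edit a
      = Done a
      | Hole
      | Fragment String
      deriving (Show)

    data Expr
      = Var String
      | App (Edit Expr) (Edit Expr)
      | Lam String (Edit Expr)
      deriving (Show)

    -- e.g. the user has picked the function but not the argument yet:
    example :: Expr
    example = App (Done (Var "sum")) Hole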

An editor that can deal elegantly with the ways in which code changes over time is what we really need. Since editing a plaintext projection discards code movement metadata, editing the AST directly is paramount. Without that metadata, the programming environment can't version-control structured code automatically.
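
As a toy illustration of what that metadata buys you (my own sketch, not Lamdu's versioning): once every node has a stable id, a diff can be computed per node instead of per line of text.

    import qualified Data.Map as M

    type NodeId = Int

    data Change a = Added a | Removed a | Changed a a
      deriving (Show)

    -- Compare two versions of a program, both keyed by stable node ids.
    structuralDiff :: Eq a
                   => M.Map NodeId a -> M.Map NodeId a -> M.Map NodeId (Change a)
    structuralDiff old new = M.unions
      [ M.map Removed (M.difference old new)                -- ids only in old
      , M.map Added   (M.difference new old)                -- ids only in new
      , M.mapMaybe id (M.intersectionWith changed old new)  -- ids in both
      ]
      where
        changed o n
          | o == n    = Nothing
          | otherwise = Just (Changed o n)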

Looking forward to 2014!


An approach that might work with existing languages is:

To use a traditional flat editor (not a structured editor), but show inferred information around the code. I am thinking of visuals that are akin to http://explainshell.com/ but a lot more compact.

The inferred information need not be shown always; it could be shown dynamically based on the user's context (current line, current function, etc)


>> Can you reliably parse handwritten code into your AST representation, write it back, and so on without any loss? What happens with handwritten styles that maybe don't fit into the projection's way of viewing the world?

Genuinely asking, is this really a problem? It sounds solvable to me.


The approach they're taking is likely the correct approach for the "future" - we should be designing languages and approaches to coincide with the ability to tool them.

Unfortunately, we still have many in the PL community that don't think that way.


This 2004 post by Oliver Steele remains one of the best things I've read on the topic: http://www.osteele.com/posts/2004/11/ides

key quote: "In fact, the most powerful languages may initially have the least powerful tool support. The reason for this is that the language developer, like the language adopter, has to make a choice: whether to dedicate limited development resources towards language features, or towards tool support."


There are other efforts to create structural editing:

* JetBrains MPS: http://www.jetbrains.com/mps/

* The project I currently work on at JetBrains, jetpad-projectional: https://github.com/JetBrains/jetpad-projectional. It allows you to have structural editing on the web with the help of GWT.


MPS is really amazing (or crazy, depending on your perspective). It's been in development for years and is perhaps best described as an IDE for building languages and IDEs. So there's a structurally/projectionally edited language for defining structurally/projectionally edited languages (yes, really), another one for type systems, another one for converting such languages into other languages. Eventually things compile down to Java (textually) so you can use all the standard libraries and so on. There are even languages for defining IntelliSense-style IDE features.

The main problem with these kinds of things is the incredibly sharp learning curve. It's more of a learning wall, really. It's a lot of effort to learn, and then it turns out that it's really hard to get in the mindset of defining new languages to solve problems, even when it's been made a lot easier than using yacc/lex/writing your own compiler backend/etc. I never really found a use for it, but I keep it filed away in the back of my mind in case one day I do.


An example structured editor based on MPS:

http://mbeddr.com/

It allows state machines to be coded in an enhanced C-like language with nice visual representations.


If the author is reading, consider throwing this link into the Similar efforts section:

http://research.microsoft.com/en-us/people/smcdirm/liveprogr...


The author is "Peaker" on HN, Reddit, and maybe also SO.


There is also a text editor called [Yi](http://www.haskell.org/haskellwiki/Yi) written in Haskell, which seems to pursue a similar idea of continually parsing a program as it is written. Is there any similarity or relationship?


« The project is not open source and is planned to be a commercial IDE. »

Not saying there's anything wrong with that, but it's worth emphasizing, imho.


That refers to Projucer, not Lamdu. Lamdu is open source.


That was a misleading sentence in the "Projucer" section, referring to Projucer. Fixed.

Lamdu is open source.


You're talking about Projucer, not Lamdu, right?


The real gem in Lamdu compared to LT, Subtext, MPS, etc. is the integrated versioning, IMO. Far too long have we been doing snapshot-based development. I hope this is a language feature and not just an IDE feature, however, so that the evolution of the program is expressed within itself.


Reminds me of Intentional Programming http://en.wikipedia.org/wiki/Intentional_programming which I think is the furthest along in this area.


One of the things that worries me about IDEs these days is how they make learning syntax obsolete and promote an attitude of avoiding documentation. You still need to have some idea of the syntax, but you don't have to know every little syntax detail of a language. This is largely true of the Eclipse + Java combination. I've recently had problems during interviews. The interviewer asks questions related to Java syntax, and even though I spend a good deal of time coding in Java, my mind is totally unable to think outside the IDE. The interviewer thinks I might simply be lying about the Java experience I claim to have.

The same is not true for C and Perl, where I use simple editors. I can think problems out on paper, or even build an abstract version of a whole complex app with just pen and paper.

The best kind of IDEs are the ones I've seen in the embedded domain, which assist you during work; you still have to read and internalize the documentation well. But the IDE will suggest improvements, let you probe ports and registers, and let you put breakpoints to see if things are actually going on as you think they should. Above all, you still have to learn the best practices the hard way; the IDE simply assists you along the way.


I've had exactly the experience you describe, although it's not from IDE use in my case -- it's from coffeescript and livescript. I bombed a JS interview about six months ago because they wanted to sit in the room and watch my screen while I took their test (already problematic, in my opinion, but let's pretend I would have taken the job.) What clinched the deal I'm sure was when I couldn't remember if it was .length or .length(), started to Google it, realized I looked like an idiot, wasting time because none of my bookmarks were on the laptop I'd been handed, etc. etc.

You see, I usually write in the functional style, so my code seldom has to ask arrays about their sizes, and when it does, it uses '[*]'. When I was back in for-loop land, I spoke like a for-eigner...


It sounds like you have a lot of experience. Why do you waste time with interviewers who are interested in whether you know a particular syntax or not?


Interesting project. Still, if you end up with a document that is this rich, wouldn't the Smalltalk/Self approach be better? As I understand it, if you work in the "Lamdu" realm, you've already departed from the notion of programs as text files that are "lifted" into code and data (objects) by the compiler/interpreter system - and you still don't have real interaction with your smart data (you can't drag the scales of a graph, or have a live preview of an animation to see what framerate would be best, etc. - saving the changes back into your VM image/document)?


This looks awesome and interesting. On another note it looks like the rendering is all done in OpenGL, which I have always wondered about - how often are IDEs written from scratch using OpenGL? Is it commonplace?


Why do you think it's being rendered in OpenGL? If you render to a 3D device directly, you'll at least need to work with an appropriate 2D graphics layer on top of it (e.g. Quartz, Cairo, or Direct2D) to help you with font rendering or even basic line rendering! Bezier curves, font vectors, and anti-aliasing ain't easy.


It is using OpenGL indirectly via the graphics-drawingcombinators library (http://hackage.haskell.org/package/graphics-drawingcombinato...).

Indeed, the fonts may not be as pretty as with Cairo etc., but on the other hand there are very nice animations when edits make stuff move around.


Ah! I hand-rolled my own code editor using WPF (mostly canvas and font rendering) for my live programming work. It's amazing what can be done when not using inflexible high-level components.


I just looked at the cabal file, which lists the build dependencies. Also, there's a lot of OpenGL code in the repo :)


Very cool but I am afraid this is geek candy that would not fly for mere mortals. When you say next-gen IDE do you mean for a certain class of developers?

The problem with these types of development tools is that they move your brain from thinking in human terms to thinking in a very structured way more attuned to machines.

This is a problem with functional programming in general, it is fundamentally anti-human, people don't think functionally but rather procedurally.

Full disclosure here, IDE maker so I have skin in this game :)


I think that's a broad mischaracterization of functional programming. It is certainly not obvious a priori that people think procedurally. It certainly does not match my experience teaching programming to complete beginners--even concepts like mutable variables and loops are not particularly intuitive.

People really like to think by analogy and think based on relations. Functional programming makes this much simpler by giving you simple abstractions and, crucially, letting you not worry about extraneous machine details. In a functional language, even the order your code gets evaluated is below your level of abstraction.

Ultimately, functional programming lets you talk about the what, where imperative languages force you to talk about the how. That's pro-human. It's exposing the underlying machine and computation--imperative programming, in a word--that's anti-human!
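
A small, hand-wavy illustration of the what/how distinction, in Haskell (my example, nothing to do with Lamdu): the first version states what the result is, the second spells out how to walk the list and update an accumulator.

    -- "What": the sum of the squares of the even numbers.
    sumEvenSquares :: [Int] -> Int
    sumEvenSquares xs = sum [x * x | x <- xs, even x]

    -- "How": traverse the list, test each element, thread an accumulator.
    sumEvenSquares' :: [Int] -> Int
    sumEvenSquares' = go 0
      where
        go acc []       = acc
        go acc (x : xs)
          | even x      = go (acc + x * x) xs
          | otherwise   = go acc xs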


> People really like to think by analogy and think based on relations.

People really like to think by analogy and metaphor. Go objects!


Go doesn't have objects.


Actually, machines are attuned to flat, unstructured lists of instructions, the polar opposite of functional programming. But when people wanted to think clearly, they invented math, of which FP is nearly the computational manifestation. You have the human-vs-machine thing exactly backwards. The reason we tend to think in imperative terms is one of familiarity: most of our first programming experiences are in languages that are (sometimes high-level, but still) descendants of assembly, because at first that was the only programming that existed: flat lists of instructions designed for a machine.

Granted, if a human wants to describe a physical process, they will use procedural language. But computers are primarily about information, and we think about information in terms of relationships. That's exactly what FP is about: expressing computation in terms of relationships.


I agree that a Turing machine is imperative...when I refer to machines I am generally referring to an inflexible approach to solving problems, not so much how computers execute instructions.

As humans I think we need the ability to be messy and imprecise in order to be creative.

>The reason we tend to think in imperative terms is one of familiarity

That's debatable given that both approaches have been around for about the same length of time with different outcomes in terms of adoption.


I'm writing OCaml now. I'm being messy and creative all over the place, sketching out ideas, and simply not trying to compile them until I think it'll work. I do this precisely because OCaml maps naturally onto how I think about the problems I'm working on right now. It's pseudocode until I run make, at which point the computer is running very useful checks on my reasoning.

The "inflexible approach to solving problems" you refer to has, as far as I can tell, nothing at all to do with "machines", except perhaps in an exceptionally abstract sense, in the form of virtual machines designed for thinking about. On the other hand, problem solving doesn't get much more flexible than raw assembly. My point stands.

I think the early adoption of imperative languages was more due to pragmatism than elegance[0]: you had to stay close to the machine to get anything done, at first because that's all that existed (Fortran beat Lisp into existence by a year) and then for performance. Remember, Fortran was still basically a shortcut for assembly, while Lisp started as a purely mathematical abstraction, designed expressly for thinking about, that some goofball wrote an interpreter for.

[0] This principle is still clearly in force today. We wouldn't bother with C++ and JVM languages and Unix so much if pragmatism wasn't paramount.

I'm using Fortran as my example of an early imperative language. There might have been earlier ones (maybe a version of COBOL?), but since they were imperative it doesn't materially affect my point.


> That's debatable given that both approaches have been around for about the same length of time with different outcomes in terms of adoption.

If Unix had been created in LISP or ML, I believe we might be in the opposite position. Though that hinges on a LISP Unix being as successful as the C one.


But before UNIX there was Symbolics, and Symbolics failed in the market whereas UNIX succeeded.


Was that because C was better than Lisp though, because of timing, or because the people working on UNIX's goals were more closely aligned with the industry as a whole?

There are of course many other things it could have been, but these are the first that come to my mind.


>But when people wanted to think clearly, they invented math,

No, they didn't. They invented a syntax two millennia ago, optimized for the use cases of those ages, using alphabets of those times. Thinking in Greek symbols is not helping anyone think clearly. It's just more confusion, because now suddenly I can't type or phrase a relevant question out loud until I get a Greek keyboard and learn the proper pronunciations of the Greek alphabet.

The notion that we should just stick to a notation and conventions optimized for a different era, a different culture, a different alphabet and a different writing tool (paper), is ridiculous. Do NASA scientists calculate speed using knots? Or power using horsepower? They do not.

Math is to computer science, what classical music is to pop music. A historical relic that stopped having economic and cultural value beyond being a mere status symbol.

>of which FP is nearly the computational manifestation.

No, it's not. One could argue the same thing for logic programming. Computation is not the manifestation of math. It's a strategy to answer a mathematical question. The original strategy was to just 'try things and explore', and when you found (guessed?) the answer, you would prove it to be sure.

Functional programming, in the religious Haskell sense, is just a term rewriter. That's not the manifestation of computation. It's just one way to specify a strategy. In the case of Haskell, which is a term rewriter with a bible full of fine print and exceptions, a very sadomasochistic one.

>But computers are primarily about information, and we think about information in terms of relationships

No, that would be Prolog. A different default strategy. Less fine print, but not the cure-all either.

>That's exactly what FP is about: expressing computation in terms of relationships.

Nope. A relationship doesn't have a computational direction. In math, all these statements express the same relationship:

    square x / x = x
    square x = x * x
    x * x = square x
In Haskell, only one of them happens to be legal. Because you are not writing a mathematical equation. You are specifying a computation. The fact that this isn't even obvious makes matters worse.

> But computers are primarily about information, and we think about information in terms of relationships

Yes, and the biggest challenge, the pain everyone tries to lessen, is managing the coordination and standardisation of changes (mutations) to that information. Wrapping every computation with the same type of state-monad actually helps a lot with this, and has been the most popular strategy to deal with this problem in the last 20 years.


Relationships can be directional if you want them to be. Also, I wish people would stop conflating "religious" functional programming with pragmatically expressing algorithms as compositions of functions. That said, I don't think you and I really disagree much on the big picture.


>Also, I wish people would stop conflating "religious" functional programming with pragmatically expressing algorithms as compositions of functions.

Yes, me too. And that type of functional programming is very popular and very successful. Every programmer uses it often, when they touch SQL, jQuery, LINQ, etc.

>I don't think you and I really disagree much on the big picture.

I love functional programming.

But I consider languages like Haskell and their derivatives to do a lot of harm to the reputation of functional programming. Lazy evaluation and the whole pretending math == computation. It's borderline harmful to the development of a programmer to even be exposed to it. The last thing you want a programmer to believe is that there is some intrinsic order of execution that is magically correct and optimal, and can easily be derived. There isn't. The correct order of execution is not even objective (in a GUI one would trade throughput for lower latency, for example), so the notion that we can just skip that whole part and have the 'compiler take care of it' seems damaging to me. Languages that allow you to specify these things manually are considered ugly mutations of some kind of pure math. Sinners that need to return to the one true god, which is "pure" math, masquerading as a term rewriter with a bible full of fine print, and a zero tolerance on maintaining global state. Yuck.


I've never heard anyone refer to Haskell's lazy evaluation as The One True Evaluation Order. It's just one interesting way of doing things. I think there's a useful place for Haskell in the space of programming languages, I just don't want it to be the default example of a functional language.


> This is a problem with functional programming in general, it is fundamentally anti-human

I would argue that the "anything goes" dynamic procedural languages ( like Ruby, Python ) are far more anti-human, in that human ability to reason about large masses of code scales orders of magnitude worse than in languages that provide a strong theoretic framework for reasoning about code.


You should read up on flow and how it applies to programming, start here:

http://en.wikipedia.org/wiki/Flow_(psychology)

The reason straitjackets lead to poor usability is that human minds are fairly diverse in the way they solve problems, and rigid constraints imposed by a tool are likely to trip up your thinking.

There is a tension between flow and feedback, but it's not clear at all that one dominates the other as you've stated.


I find dynamic languages break my flow badly by eventually losing clarity on what things are supposed to do. Haskell is annoying as hell to begin with, but eventually those constraints give very tight, very fast feedback that's very flow-compatible.


Were you at the social innovation lab kickoff at Hopkins a few weeks back? I thought I saw a booth for Reify Health.


I wasn't there personally, but another guy on the team made it. Let me know if you'd like to say hi and grab a beer!


sure, will be in touch!


Constraints are what prevent fallacious reasoning. It's better to have a tool/framework tell you you're wrong than not knowing about problems or believing falsehoods.


Again, there is tension between flow and feedback. The ability to express and tolerate incomplete and incorrect solutions when trying to solve, and more importantly, understand a problem, is very important. Bondage and discipline languages typically lack that capability, which is why many programmers are still flocking to dynamic languages. For this reason, even Haskell is trying to get into the hybrid typing game (type inference also helps of course, without which Haskell would be unusable).

Ideally, we could achieve flow without sacrificing semantic feedback (or vice versa). It is definitely a worthy goal.


I've been fiddling with a System F language for a bit and find the types not as burdensome as I had originally thought. I couldn't imagine learning to use it from scratch though. I do think the loss of magic inference is a boon though.


You are repeating what appears to be a completely baseless claim all over this discussion. What evidence do you have to support the notion that languages you happen to dislike decrease flow?


>I would argue that the "anything goes" dynamic procedural languages ( like Ruby, Python ) are far more anti-human,

Is it anti-human to be able to interact with the code, and explore what it actually does? A run-time type error is one that has real example data.

The assumption that we can write perfect code immediately and easily, or that a type analysis can just guide us through, is not the case. And it breaks down even worse when 99% of your code is interacting with systems outside of the scope of the typing system (database servers, client-side browsers, network connections, file systems). Dynamically typed languages are a good fit when the code is mostly glue code between multiple systems outside the scope of any type analysis.

But let's not equate functional programming with static typing.

>in languages that provide a strong theoretic framework for reasoning about code.

You act like many of us are ever writing complicated algorithms. We're not. We're writing simple algorithms that deal with complex structures of information. And in the few cases where the algorithms get so complicated that you want your invariants to be formally proven, anything less than a full-blown theorem prover will be insufficient anyway.


Sure, if it were easy for people to think functionally, then functional programming would without a doubt be more elegant. But elegance doesn't negate the fact that the reasoning model demanded by functional programming isn't friendly to the way we think. The elegance argument is something proponents always bring up, but it misses the point.

While dynamic procedural languages have their problems, I think they work for the most part, hence their success vis-à-vis functional languages. As for the issue of scale, that's why modular constructs are added to manage large code bases.


I don't know how you're coming to this result, but it /is/ easy for people to think functionally. You can teach intro-CS classes in both functional and imperative styles and generally they pick up both styles at the same rate. I've seen it year after year of teaching these courses. The myth that functional programming doesn't match human thought is just flat out FUD, there's no evidence to support it.

The adoption of programming languages is more influenced by economic factors than informed by solid engineering.


I won't argue about your experience in the classroom, but both styles of programming have been around for about the same length of time and yet their adoption rates speak volumes.

We can debate this ad infinitum and probably won't come to an agreement.


The adoption rates do say a lot of things, but not about the claims you're making about human reasoning.


> This is a problem with functional programming in general, it is fundamentally anti-human, people don't think functionally but rather procedurally.

There was a study a while back[1] that showed non-programmers a series of statements of the sort,

   int a = 10;
   int b = 20;
   a = b;
and asked them the value of a. It seemed that some people could not grasp even very basic fundamentals.

The thing is, if you'd never programmed before but you HAD basic algebra knowledge, the third line above would have broken your brain. 10 =/= 20. You can't change the value of a, it's against everything you know!

The point is, people don't think functionally or procedurally. Both of these programming styles are just learned behaviours, not basic human nature.

[1] http://www.codinghorror.com/blog/2006/07/separating-programm...


What IDE are you working on?

I agree with the argument against functional programming, though I would qualify it as being about purity (doing everything functionally) rather than just the use of functions in general: functions are sometimes the most natural way to do something. Haskell is good for what it is, an experiment in pure functional programming, which has taught us a lot about programming in general.


I agree, that's why I was asking which class of developers they are targeting...

My company is building an IDE that is purely about productivity, specifically for building business web applications. Check my profile for link.


Cool. I checked out your page; it seems your tool's logo is very similar to ours (MS) :).

I'm not sure if it is appropriate to think about specific groups of developers for a general-purpose IDE. You really can't guess how they will use your tool, which applies to PL design in general. Personally speaking, there are functional programming enthusiasts who think wildly differently from the way I do, and I guess they would sort of select themselves out, given my lack of understanding of their psychology as reflected in my design.


Haha...the logo similarity is pure coincidence...

Incidentally some time back I posted about the similarities between Google and Microsoft logos: https://news.ycombinator.com/item?id=6678308 :)


With four colored boxes...I've seen our logo all over the place even before it was our logo :)


Looks cool! The screenshots weren't really clear to someone not familiar with Haskell. Is there any video available?


A true HN Christmas present. If nothing else it reminded me to take time to watch the Subtext videos. :)

I noticed that the (single) argument to sum in most of the screenshots doesn't have a name? Is that special-cased in the language or the editor, or am I missing something bigger?


I might be interpreting your question wrong, but that is specific to Haskell (which he uses as the foundation of Lamdu). It looks like sum is being composed with filter, which is given the iterable and a predicate.


They mention that they modified Haskell to use explicitly named parameters. You can see that in the filter call but not the sum call.


Our current thought is that single-parameter functions do not need an argument name.

We might change a lot of our decisions later as we tune it for working with more realistic code.


I hope you expose the color coding as a config file, so folks with an eye for design can contribute.


https://github.com/Peaker/lamdu/blob/master/config.json

You can change it while Lamdu is running for immediate effect :-)


Subtext website, for the other curious http://www.subtext-lang.org/ .



There are 3 things that will kill a project almost instantly at the beginning: bad design, bad implementation, bad license.

I don't know about the first two but GPL is not a good choice.


I think GPL is fine for a full application, as long as the reusable parts are made into a library under a different permissive license.


We're probably going to switch to a BSD license.


Not a big GPL fan, but why would that matter in this case...plugins?


Because this IDE is built on top of the Haskell ecosystem, I have a serious problem seeing a bright future for it, regardless of its license.

I mean:

- IDE should have an integrated compiler.

- IDE should download only necessary libraries or functions needed by a program (from a central repository, local or remote).

- IDE should catch a bug and allow me to fix it while the program is running.

- IDE where I am able to fix a bug on my client's phone or his wrist watch remotely without the need to download the whole Haskell Platform there. No JS/HTML5 BS.



