The web is so broken. The whole "web app" concept is just a giant hack.
The job of us web developers today consists of employing a never-ending pile of hacks (e.g. AJAX, long-polling, semi-broken languages and implementations, non-standard vendor APIs) to fight a browser into submission so it can be used to run a general application instead of simply navigating hypertext, its original purpose. All that because there's incentive in keeping users inside walled gardens and holding their data ransom.
It's unbelievable that we take for granted reinventing the wheel over and over again, dealing with weak standards and APIs designed by committee, fighting browsers into rendering interfaces and invoking the right callbacks.
I would rather take a web based on open APIs and rich clients (running native code) than the kludges we have today. The trend around mobile apps and some very successful native apps on the Mac App Store that consume web services seems to me like a best-of-both-worlds approach.
> I have a hunch that WebDAV combined with standard HTTP authentication could be the answer. I’m not 100% sure on it, but I can easily envision a world where you fix bugs in your website by opening it up in a browser, reading a stack trace, fixing the JS in that same browser and persisting your changes back to the server.
> I dream of the days when the Web truly does resemble SmallTalk.
This would be more like a world with 5 different SmallTalk implementations, each slightly different, and all with a crappy/non-existent standard library and security model.
But that exists. It's called native programs, which are free to access web APIs and whatnot just as much as any browser is.
The whole point of the web is you don't have to download software, you don't have to store your data locally. Sure, it's a pain to support the quirks of different browsers and some badly designed web standards, but I'll take that any day over having to program for entirely different operating systems. I fail to see how web apps are a hack at all. Sure, they're not the "original purpose" of the web, but they work awfully well for things like GMail, GDocs, GMaps, etc. -- I'm awfully happy to have all these inside of browser tabs, instead of cluttering up my machine with more installed apps.
Well, you realize all the products you mention were only possible because someone figured out a hack (XMLHttpRequest), and now everybody relies on something that wasn't thought out in any way, shape, or form? Can you see how fragile a software stack the entire web industry is basing itself on?
Also, I don't find GMail, GDocs or GMaps the epitome of good UI and interaction models; I can point out a handful of quirks (GDocs is particularly bad). I don't think these products would withstand a round of user testing; people just take them for granted because they are provided for free. When a better alternative shows up (Sparrow), people flock to it even if it's a paid app. Of course, Google killed it; they are web app zealots after all and need the revenue from the ads.
Unix was also a hack. C was also a hack. Windows 95 was also a hack. Windows NT had better design, but with hacks bolted on to make it compatible with Win 9X. Java was a hack. Linux was a hack. The Internet itself is the biggest hack of all.
In fact I dare you to name one successful platform that wasn't a hack. Because from where I'm standing, the platforms that weren't hacks failed, in addition to being on the whole horrible.
> I don't find GMail, GDocs or GMaps the epitome of good UI and interaction models, I can point out a handful of quirks
With all those quirks, GMail killed the desktop email client for me. For the last 4 years I haven't been able to use any other client. Even on my iPad, before the latest "native" GMail on iOS I was using the web version, simply because Apple's Mail app wasn't doing threaded conversations well.
For all the quirks of the web version of Google Maps, at least it is available everywhere. Just ask the poor schmucks that were in a hurry to upgrade to iOS 6.
Same argument goes for GDocs. It's available everywhere and it allows for efficient collaborative editing. You just open a browser and you're good to go. It doesn't even have to be your browser. I do value this a lot.
> I don't think these products withstand a round of user testing, people just take it for granted because they are provided for free.
I'm a Google Apps customer and I also pay for Google Drive, if it matters. And quite the contrary, people receiving stuff for free tend to be more self-entitled and critical. That people don't voice too many negative opinions on these products is kind of shocking, because either Google has a really good PR department or these products do in fact satisfy most users.
> When a better alternative shows up (Sparrow), people flock to it even if it's a paid app
For what it's worth, I didn't.
I believe you tried to correct me, but you proved my point even further.
Desktop-like functionality in software meant for navigating between linked documents is simply a broken interaction model.
Sparrow? It existed for a couple of years before being bought by Google and people hardly "flocked" to it. It was mostly a marginal app. The overwhelming majority of OS X users either used Mail.app or Gmail on the web.
Google just bought it for the talent and to get some better native app tech, not because people were ...flocking to Sparrow and lessening Google's revenue. I doubt there was even a dent in their revenue.
How is that different from anything else, really? The first wheel was also a hack. Fire was also a hack (hey, let's bang these two stones together).
XMLHttpRequest was not a hack. It was a feature engineered by Microsoft that people found another use for. After that the hack status was gone: it was standardised, documented, best practices were written, an interchange format was invented for it (JSON), etc.
Don't conflate the origin with the result.
On the other hand, the platforms we /do/ download software onto (in real time) are disgusting.
(I wrote PHP for the first time last week. Oh. My. God. Why does this crap exist? WHY IN GOD'S NAME (yes I am shouting) DO WE PUT UP WITH THIS UNBELIEVABLY MISERABLE CRAPFEST WE CALL MODERN WEB DEVELOPMENT!?)
Um . . . breathe.
I don't have any answers today that don't involve burning something to the ground, public shaming, or retreating to a code monastery and crafting a diamond (which /never/ works).
How do we get unstuck?
Oh yeah, and it's not all that hard. After all, it's a well-defined, text-based protocol. It's sweet to implement if you're not using C. That is, until you start supporting persistent connections or interacting with the content.
In any case, it's loads better than FTP and SMTP (at least HTTP spells things out completely), and loads easier to use than binary protocols.
Of course, there's nothing truly "evolved" in software and nothing designed that didn't evolve; it's just two ends of an axis measuring a weird thing like "control and unitary vision". One extreme of "evolved" technology is things like the browsers and PHP, the templating language that was never intended to be a programming language and somehow mutated into one (and I attribute most of its success to being attuned to the "evolvability spirit" of our web). Most "well designed" or "well engineered" software that could have powered the web either failed miserably, never reached the point of being usable, or just wasn't there at the right time.
I think there's a third path somewhere in this chaos, a path of software "designed/engineered to evolve", as opposed to "evolved" and "designed" software, a "nirvana" we should look for in this darkness. But in the meantime we can either embrace the "twisted creatures of webvolution"... or choose a different line of work :)
I would define the former as evolved and the latter as a hack.
Web technology seems to have developed from a little of each. People were using iframes to do HTTP requests, so XMLHttpRequest was added to IE; people were using long polling, so HTML5 now supports WebSockets, etc.
I think the point of the grandparent is that the original standards were never meant to power applications; the web has changed, but due to backwards compatibility or old implementations the technology still has some warts.
An example of this is CSS, it's a fantastic system if you want to lay some static text on a page interleaved with some static images and style the whole lot. If you are trying to create a semi-traditional GUI or a page where you can't know the sizes of elements in advance then the shortcomings of CSS become abundantly clear.
...in my view, if you can clearly see "the distinction" between authors adding functionality and users abusing existing functionality for new purposes, it means that one party is clearly moving too slowly: either the users are too slow to discover new ways to adopt and (ab)use the new software, or the developers are working too slowly and the user-developers need to resort to too many ugly hacks because the features they need take too long to be released and standardized (unfortunately the current state of web development seems closer to this state of affairs).
But there are possible solutions: I think things like standardization could be accelerated by having "tzars" in all committees (people that can just choose by themselves how some things over which they have authority get to work, without needing a consensus or having to justify to others why they chose one way). I'm using the word "tzar" as in Haskell's "syntax tzar", the guy who could just choose what syntax a certain language feature needs and end the discussion right there, because nobody had the right to contradict him (I consider Haskell's design-by-committee the only example where a committee did something mostly right, and I wish similar "committees" would work on things like CSS and new ECMAScript versions...). But again, if standards evolve too fast, implementors will have to play catch-up and end up with half-done implementations, which today happens even for sloth-slow evolving things like CSS.
...but of course, the real "root of all evil" is the backwards compatibility requirement, but we can't get away from this.
...and I know, some people will want to burn me at the stake for screaming the "we're not going fast enough" heresy about web tech :)
Side Note: I also just really, really hate app stores. I find them to be much more restrictive than just throwing a website up and allowing anyone to access it. The idea of having to pay someone to allow users to be able to use my stuff is way too restrictive to me.
> On the web, where the user must constantly accept and run code from unknown and untrusted developers, code must run in a sandbox to minimize risk of unwanted access to the user's computer.
First, sandboxed environments are not limited to browsers (e.g., OS X 10.8). Second, browsers are often the attack vector, because they were never meant to run applications, and have broken security models (e.g., no application signing). So you end up in a sandboxed environment with terribly limited APIs in the name of security, and on the other hand have hard to trace security holes, like XSS.
The browser should be left for documents.
The only way to provide the best user experience and operating system integration is via native applications, regardless of the environment, which communicate via network protocols.
I won't. Installing native apps for everything I do? When I can just go to an app's website and use it? I have no intention of going back to the '80s.
Of course current operating systems have poor support for such kind of behavior, but that could be remedied. I think the "activities" concept in Android already is first steps towards that goal.
If you have different permissions and a complex sandbox with a runtime that supports classloaders, you run into the same security problems that Java currently enjoys for applets.
People want to build software from a small kernel that scaffolds itself. They want to immediately switch between "testing the app" mode and "building the app" mode. They want something more than a text file, a tool pipeline, and an executable at the other end. On the other hand, they don't want their code to turn into an opaque binary blob that can't be diffed, can't be read by text tools, and can't be shared with people who don't feel like using your sweet "Invented on Principle" tool.
It feels like we're converging on the ideal solution, but it's happening more slowly than I think many of us predicted.
I don't exactly know what I'm creating yet (maybe there aren't terms for it yet), because I change things as I go and it's not done yet. But the current vision is a sustainable automation platform, where you can add/change/build/do anything (because it's open source) with what you're working with. So you can create yourself a "testing the app" button and "building the app" button, and that will become something that's available to you. Actually, I'm trying to make it so that those buttons will be automatically available for you as you naturally do the things you'd normally do, but I haven't gotten to this part yet. (Oh, and perhaps instead of buttons you have to press, new output can simply appear on your screen right away.)
In short, my vision agrees very much with what you're saying, but I still have a lot of work to make my project a viable building tool, and it is happening very slowly indeed.
John Maloney got it with Morphic, and even coined the term "liveness" at about the same time Tanimoto did. However, this was originally in Self and relatively independent from Self's hot-swapping capabilities. It also didn't really map back to code very well (it was very dependent on direct manipulation via Morphic's edit menu).
For the last four decades, Smalltalk has been providing a glimpse of the future. That future is still waiting to happen.
Smalltalk was crazy innovative, while Self (Smalltalk's only real successor) gave us the first live graphics toolkit (Morphic). But the future is still being invented, and it will be a much better experience than Smalltalk ever was.
If I'm missing something, can you please provide some more detail?
Hancock defines the term "live programming" in his thesis, and it's where I get my definition from (before Hancock, the term didn't exist, though liveness was defined by Tanimoto and Maloney in the 90s). Basically, live programming is about continuous feedback, for which hot swapping might be useful (though it must be said it's neither strictly necessary nor sufficient). But there is much more to it: you want continuous feedback about the code you are editing, not just some idea that the code will run sometime in the future in a running program. You also want to observe the behavior of this code in a way that is comprehensible, and map this behavior back to your code.
I wrote and presented a paper on live programming back in 2007. Ralph Johnson (a big Smalltalker) was in my audience and had the same complaint: he only saw hot swapping and not the experience I was presenting. To him, it was mechanism, not experience. I wonder if this is a problem with Smalltalkers in general.
Some of the basis for my assertion came from this article: http://liveprogramming.github.com/liveblog/2013/01/13/a-hist...
I wrote a lengthy post in the history article you linked to. It's just not the live programming history that I'm familiar with; they seem to be falling into the same Smalltalk mechanism trap that I was talking about in this thread.
With the kind of live programming that you mean there is some meta system that is monitoring your code and continuously giving you feedback on it. Perhaps it does this by just running the code and showing the output, perhaps it displays the execution trace in some way, perhaps it displays a visualization of a data structure over time. In a way, live feedback on static type errors could also be considered a limited form of live programming. Maybe it's a good idea to adopt a new term for this kind of live programming? It would also help from a marketing perspective I think, to have a new thing that people can be excited about rather than a term that they associate with a boring, limited and old fashioned feature (i.e. hotswapping).
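To make that concrete, here's a toy sketch (in Python, with entirely made-up names, not modeled on any real tool) of such a meta system: every edit re-executes the code, and either the result or the error becomes the feedback the programmer sees.

```python
# Toy "meta system": each edit re-runs the snippet and reports feedback.
# All names here are illustrative assumptions, not from any real editor.

class LiveSession:
    """Re-runs a snippet on every edit and records the feedback shown."""

    def __init__(self):
        self.feedback = None  # last thing the programmer would see

    def edit(self, source):
        # In a real editor this would fire on every keystroke;
        # here the "edit event" is an explicit call.
        env = {}
        try:
            exec(source, env)
            self.feedback = ("ok", env.get("result"))
        except Exception as e:
            self.feedback = ("error", type(e).__name__)
        return self.feedback

session = LiveSession()
print(session.edit("result = sum([3, 1, 2])"))    # ('ok', 6)
print(session.edit("result = sum(['3', 1, 2])"))  # ('error', 'TypeError')
```

Even this trivial loop shows the shape of the idea: feedback arrives as a side effect of editing, not as a separate "run" phase.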
Even with this second notion of live programming, the question of updating running code does not go away. If you are developing a game, you may want to do live programming by running the game next to the code and having it be updated whenever the code changes. But a game has state, and how do you transport that state to the next version of the code? Hot swapping code by blindly mutating a function pointer in the running game is obviously not the answer. That's just a hack that works some of the time: it doesn't work when updating code while the running game is still in the middle of something, it corrupts the state when there is a bug in the code, and it doesn't work at all when the structure of a data structure changes. The perspective "how do I transport the state to the next version of the code" is much better than "how do I shove new code into the running system with the old state". The same issue comes up with most programs, not just games. This is still an open problem as far as I know. For live programming we also need tools to manage and reset the state. When you have corrupted your state with a bug in your code, you want to be able to quickly go back to a previous non-corrupt state. Even if you change the entire programming model, you'll still have to address this state update problem in some way.
REPLs and interactive programming existed long before the "live programming" experience was defined (by Hancock), and I only use the term to describe what Bret was showing off in his IoP talk as well as the experience the Light Table people seem to be striving for. I might be a bit pedantic, but there are plenty of other terms to describe the older less live experiences! Hot swapping is just some mechanism to achieve some undefined experience; "I changed my code while my program is running" is vague enough. It typically has to be coupled with some other refresh mechanism (e.g. stack unwinding) to be useful, and even then it typically doesn't do more than it advertises (func pointer f was pointing to c_0 and now points to c_1).
Now live coding...is completely different and has an independent origin from live programming. Whereas "live coding" is about some programmer coding "live" in front of an audience, live programming is about receiving continuous comprehensible feedback about your program edits in the context of a running program. Quite a huge difference in meaning with very different goals!
> With the kind of live programming that you mean there is some meta system that is monitoring your code and continuously giving you feedback on it.
It's coding with a water hose vs. a bow and arrow. Debugging is not a separate experience and happens continuously while editing; if you can't provide enough continuous feedback to get rid of a separate debugging phase, then it's not really live programming.
> Maybe it's a good idea to adopt a new term for this kind of live programming? It would also help from a marketing perspective I think, to have a new thing that people can be excited about rather than a term that they associate with a boring, limited and old fashioned feature (i.e. hotswapping).
But the new term was co-opted to describe an old experience! Hancock's definition is unique (no one used this term before 2003), fairly complete, and it's very compatible with what Bret Victor was showing off in his IoP work. Why should we back off and invent yet another new term to describe the new experience whose original term was hijacked to describe old experiences because people couldn't understand the new one? Crazy!
> But a game has state, and how do you transport that state to the next version of the code?
Today this is framework specific, and all major game engines have a way of doing this, as they want to allow designers to script levels in real time without losing their context. It doesn't even require language support necessarily, but it's not something you ever get "for free"; it's something that is baked explicitly into your framework.
> The same issue comes up with most programs, not just games. This is still an open problem as far as I know. For live programming we also need tools to manage and reset the state. When you have corrupted your state with a bug in your code, you want to be able to quickly go back to a previous non-corrupt state. Even if you change the entire programming model, you'll still have to address this state update problem in some way.
No one has yet figured out how to come up with an expressive general programming model that achieves this efficiently, but you can always "record" the input event history of your program and re-exec the entire program on a code change; i.e. there is an inefficient baseline. You still have problems with causality between program output and input; e.g. consider the user clicking a button that no longer exists or has moved! Lots of work still to do... just don't take away my term please!
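That inefficient baseline is easy to sketch. Here is a minimal Python illustration (the event names and the step-function shape are made-up assumptions): record every input event, and on a code change replay the whole history through the new version of the code instead of patching the old state.

```python
# Inefficient-but-general baseline: record the input history, and on a
# code change re-run the *new* program over the whole history to rebuild
# the state. Event names and functions are illustrative assumptions.

events = []  # recorded input history

def record(event):
    events.append(event)

def replay(step, initial_state):
    """Re-execute a (possibly new) step function over the full history."""
    state = initial_state
    for e in events:
        state = step(state, e)
    return state

# Version 1 of the "program": a counter that only counts clicks.
def step_v1(state, event):
    return state + 1 if event == "click" else state

record("click"); record("key"); record("click")
assert replay(step_v1, 0) == 2

# Code change: version 2 also counts key presses. The old state (2) is
# simply discarded; replaying the history yields the state the new code
# would have produced, with no ad-hoc state patching.
def step_v2(state, event):
    return state + 1 if event in ("click", "key") else state

assert replay(step_v2, 0) == 3
```

Note that this baseline cannot resolve the causality problem mentioned above: a recorded "click" on a button that the new code removed is still just a raw event with no obvious meaning.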
Yes that's what I mean! A tiny difference in the terms we use: live coding vs live programming. That's why it's confusing to people.
> Why should we back off and invent yet another new term to describe the new experience whose original term was hijacked to describe old experiences because people couldn't understand the new one? Crazy!
Sometimes you have to cut your losses ;-) Another reason why I dislike the term "live programming" is because it confuses two separate concepts: continuous feedback and rich feedback. Conventional debugging is pressing a button to run your code and see what the result is. Instead of just displaying the result, you could display the entire execution trace (time traveling debuggers). You could write unit tests and display which passed and which failed. You could output some visualization of some data structure in the program. For a game you could output a series of frames overlaid on each other (like Bret Victor does). Then you have type checking, for numerical code sensitivity to floating point bit width, performance profiling, etc. This is all about giving different kinds of feedback. Continuous feedback is about getting feedback without having to press a button. Classical live programming is running the program continuously and continuously displaying its output. This is the continuous feedback version of ordinary debugging. Automated background unit test runners are the continuous version of unit testing. In the same way you have a continuous version of the other debugging techniques. Both continuous feedback and rich feedback are very valuable, and although they are stronger together they are separate concepts. Perhaps it would be a good idea to have separate words for them, that would certainly greatly clarify "live programming".
> but you can always "record" the input event history of your program and re-exec the entire program on a code change; i.e. there is an inefficient baseline. You still have problems with causality between program output and input; e.g. consider the user clicking a button that no longer exists or moved!
Yes, this is robust to internal data structure changes but no longer robust to UI changes. Viewing a program as a series of event stream transformers and time varying values as in FRP may help a bit. At the lowest level you have a stream of mouse clicks on pixel (x,y) and keyboard events with keycode k. Then the UI toolkit transforms that stream of events to event streams on UI elements: click on button "delete", text input to textfield "email address". Then that gets transformed to logical operations and data: delete_address_book_entry(...) and email_address. Then that gets transformed to the complete time varying high level state of the entire program (address_book_database). You can try to transport the state on each of the different levels, but in the end I think a completely automated solution is impossible. You are going to need domain specific info on how to do schema migration in the general case. For live programming that may not be worth it because you can just start over with a fresh state, but for things like web site databases you don't want to lose data so you have to manually migrate. [tangent: Currently there are a lot of ad-hoc solutions to this e.g. never remove an attribute from your data model, and when you add new attributes make sure all code works even if that attribute is missing. Reddit even goes so far as to structure its entire database as "key,attribute,value" triples instead of using a structured schema so that the schema never needs to change, but of course this just moves the problem from the database into the code that talks to the database. A principled approach where you write an explicit function to migrate your data from schema version n to schema version n+1 would work better. That migration function takes the entire state/database with schema n as input and produces an entire new state/database with schema n+1. 
When the state/database is large this would take too long to do it in one pass, but with laziness that can be done on-demand.]
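The migration-function approach from the tangent can be sketched like this (Python; the schemas, field names, and function names are all made up for illustration): each function maps the whole state from schema n to schema n+1, and upgrading is just composing them in order.

```python
# Sketch of explicit schema migration: one function per version step,
# composed to upgrade across versions. All schemas here are invented.

def migrate_1_to_2(db):
    # Schema 2 splits "name" into first/last.
    out = []
    for row in db:
        first, _, last = row["name"].partition(" ")
        out.append({"first": first, "last": last, "email": row["email"]})
    return out

def migrate_2_to_3(db):
    # Schema 3 adds a default "verified" flag.
    return [dict(row, verified=False) for row in db]

MIGRATIONS = {1: migrate_1_to_2, 2: migrate_2_to_3}

def upgrade(db, have, want):
    while have < want:
        db = MIGRATIONS[have](db)
        have += 1
    return db

v1 = [{"name": "Ada Lovelace", "email": "ada@example.org"}]
v3 = upgrade(v1, 1, 3)
assert v3 == [{"first": "Ada", "last": "Lovelace",
               "email": "ada@example.org", "verified": False}]
```

The whole-state-in, whole-state-out shape is what makes the lazy, on-demand variant mentioned above possible: each migration step can in principle be applied per-record as records are touched.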
You don't need to limit yourself to running one instance of the program. You could record multiple input sequences representing multiple testing scenarios, and display the results of running each of them, or even display each of them being continuously performed so that you can see all the steps in between. In any case as you say there is lots of work still to be done.
Again, if we go back to Hancock's thesis, it's all there! It's not just about continuous feedback; it's about feedback with respect to a steady frame, feedback that is relevant to your programming task, feedback that is comprehensible. Hancock got it right the first time; there is no classical live programming (though there were other forms of liveness before). Actually, this is something I didn't get myself in my 2007 paper.
I don't think I need to abandon my word, especially since the standard bearers are Bret's demos; people want "that", not some sort of vaguely defined Smalltalk hot-swapping experience. The community I'm fighting with for the word is small and insignificant vs. the Bret fans :).
As for the rest of your post, explicit state migration is a big deal for deployment hot swapping (Erlang?) but ultimately a nuance during a debugging session. A "best" effort with reset as a back up is more usable.
But maybe take a look at our UIST déjà Vu paper: here the input is defined as a recorded video that undergoes Kinect processing, and we are primarily interested in the intermediate output frames, not just the last one. So the primary problems are ones of visualization, while we ignore the hard problem of memoization and just replay the whole program. We even have the possibility to manage multiple input streams and switch between them.
Kinect programs are good examples of extremely stateful programs with well defined inputs. One of the next problems I'm trying to solve is how to memoize between the frames of such programs to make the feedback more lively.
Yes, the problem is not with the definition of the term, but with the term "live programming" itself! It is too vague and can apply to too many concepts, and hence we're seeing people use it and interpret it for many different concepts. Nobody will go read a thesis to learn what a term means. But then again "object oriented programming" is vague as well. The notion of "steady frame" does seem oddly domain specific. In the words of that thesis: water hosing your way towards the correct floating point cutoff value or towards the value of a parameter in a formula that produces an aesthetically pleasing result works great, but I'm not convinced that you can "water hose" your way to a correct sorting algorithm for example. Perhaps I have misunderstood what he meant though.
> A "best" effort with reset as a back up is more usable.
Yeah, I agree. I think the same primitives that can be used for building good explicit state migration tools, like saving the entire state and recording input sequences or recording and replaying higher level internal program events, can also be used for building good custom live programming experiences. So they are not two entirely disjoint problems.
> But maybe take a look at our UIST déjà Vu paper 
That's very interesting and looks like an area where live programming can work particularly well! A meta toolkit for building such domain specific live programming environments may be very useful if live programming is to take off in the mainstream. Of course LightTable is trying to do some of that, but while it started out in a quite exciting way they seem to be going back to being a traditional editor more and more (albeit extensible).
> One of the next problems I'm trying to solve is how to memoize between the frames of such programs to make the feedback more lively.
Probably you've seen that already, but have you looked at self adjusting computation? http://www.umut-acar.org/self-adjusting-computation
That's exactly what we're trying to do with LT; see my "The Future is Specific" post.
> they seem to be going back to being a traditional editor more and more (albeit extensible)
This is a necessary detour as we build a foundation that actually works and allows us to really make the more interesting stuff. If we can't even deal with files, what good are we going to be at dealing with the much more complicated scenario of groups of portions of files? :)
True. But I think the word has worked well until recently.
> The notion of "steady frame" does seem oddly domain specific.
Not really, but please wait for a better explanation until my next paper. One of Bret's examples in his IoP video is actually a correct sorting algorithm, programmed with live feedback. I also mentioned a return to printf debugging on LtU before, and it's basically the direction I'm taking right now (the UI represented by the steady frame is probably not the GUI that is used by an end user).
> Probably you've seen that already, but have you looked at self adjusting computation? http://www.umut-acar.org/self-adjusting-computation
Their work doesn't seem to scale yet (all examples seem to be small algorithmic functions) while I'm already writing complete programs, compilers even, with my own methods, which are based more on invalidate/recompute rather than computing exact repair functions. I'll be able to relate to this work better when they start dealing with bigger programs and state.
I just saw this: http://www.infoq.com/presentations/Live-Programming
"Sam Aaron promotes the benefits of Live Programming using interactive editors, REPL sessions, real-time visuals and sound, live documentation and on-the-fly-compilation." :D
> One of Bret's examples in his IoP video is a correct sorting algorithm, actually, programmed with live feedback. I also mentioned a return to printf debugging on LtU before, and it's basically the direction I'm taking right now
Yea, this interpretation of 'steady frame' is fully general, I think: the ability to compare feedback of version n with feedback of version n+1 without getting lost. My interpretation was more specific because of the water hose vs. bow and arrow analogy: continuously twiddling knobs until you get the result you want vs. discrete aim-and-shoot. For example, picking the color of a UI widget with a continuous slider vs. entering the RGB value and reloading. Since a sorting algorithm is not a continuous quantity, aim-and-shoot is inevitable, though you can still make it a bit more "continuous" by making the aim-and-shoot cycle more rapid.
> which are based more on invalidate/recompute rather than computing exact repair functions
You can do this in their framework; you can specify at which granularity you want to have the 'repair functions' and at which granularity you just want to recompute. For example if you have a List<Changeable<T>> then each item in the list can be repaired independently, if you have Changeable<List<T>> the whole list will be recomputed. Although you probably want to automatically find the right granularity rather than force the user to specify it?
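A toy model of that granularity trade-off (this is my own illustration, not the real self-adjusting computation machinery): with per-item caching, the analogue of List&lt;Changeable&lt;T&gt;&gt;, changing one item repairs only that item's derived value, while the coarse Changeable&lt;List&lt;T&gt;&gt; analogue recomputes everything.

```python
# Toy illustration of repair granularity: per-item memoization vs.
# whole-list recompute. We count how often the "expensive" derived
# computation actually runs.

recomputes = 0

def expensive(x):
    global recomputes
    recomputes += 1
    return x * x

cache = {}  # index -> (input seen, derived output)

def derived_fine(items):
    """Fine granularity (per-item cells): repair only changed items."""
    out = []
    for i, x in enumerate(items):
        if i not in cache or cache[i][0] != x:
            cache[i] = (x, expensive(x))  # repair just this cell
        out.append(cache[i][1])
    return out

def derived_coarse(items):
    """Coarse granularity (one cell): any change recomputes everything."""
    return [expensive(x) for x in items]

xs = [1, 2, 3, 4]
derived_fine(xs)        # 4 computations on the first run
before = recomputes
xs[0] = 9
derived_fine(xs)        # only index 0 is recomputed
assert recomputes == before + 1
```

With `derived_coarse`, the same single-item change would have cost four recomputations, which is the difference the granularity choice controls.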
Ya, I saw it too. I haven't watched the talk, but I expect it to be more of the same promotion of live coding as somehow actually being live programming (programming is like playing music! Ya...).
> Since a sorting algorithm is not a continuous quantity, aim-and-shoot is inevitable, though you can still make it a bit more "continuous" by making the aim-and-shoot cycle more rapid.
A sorting algorithm can be fudged as a continuous function. But here "continuous" means "continuous feedback," not "data with continuous values." The point is not that the code can be manipulated via knob, but that as I edit the code (usually with discrete keystrokes and edits), I can observe the results of those edits continuously.
> You can do this in their framework; you can specify at which granularity you want to have the 'repair functions' and at which granularity you just want to recompute.
I'll have to look at this work more closely; the fact that we need custom repair functions at all bothers me (repair should simply be defined as undo-replay). The granularity of memoization is an issue that has to be under programmer control, I think.
I wrote this really bad unpublished paper once that described the abstracting-over-space problem as a dual and complement of the abstracting-over-time problem. It turns out, for simple scalar (non-list) signal (reactive) values, the best thing to do was to simply recompute. However, for non-scalar signals (lists and sets), life gets much more complicated: it makes no sense to rebuild an entire UI table whenever one row is added or removed, and so we want change notifications that tell us what elements have been added and removed. However, I've changed my mind since: it is actually not bad to redo an entire table just to add or remove a row, as long as you can reuse the old row objects for persisting elements. If my UI gets too big, I can create sub-components that memoize renderings unaffected by the change (basically partial aggregation).
Now how does that relate to the theList.toMultiSet example? Well, the implementation of toMultiSet could be reduced to partially aggregated pieces very easily (many computations can, actually), which could then be recombined in much the same way as rendering my UI! Yes, the solution that decrements/increments the count on a specific insertion/deletion is going to be "better", but a tree of partially aggregated memoizations works more often in general; it's easier to do with minimal effort on behalf of the programmer.
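That "tree of partially aggregated memoizations" can be sketched as a binary tree of memoized sums (my own illustration, not from either paper): changing one element only recombines the O(log n) aggregates on its path to the root, instead of re-summing the whole list.

```python
# Partial aggregation as a flat binary tree of memoized sums.
# nodes[n:] are the leaves; nodes[1] is the root aggregate.
class SumTree:
    def __init__(self, values):
        self.n = len(values)
        self.nodes = [0] * (2 * self.n)
        self.nodes[self.n:] = values
        for i in range(self.n - 1, 0, -1):      # build parents bottom-up
            self.nodes[i] = self.nodes[2 * i] + self.nodes[2 * i + 1]

    def update(self, i, value):
        i += self.n
        self.nodes[i] = value
        while i > 1:                            # recombine ancestors only
            i //= 2
            self.nodes[i] = self.nodes[2 * i] + self.nodes[2 * i + 1]

    def total(self):
        return self.nodes[1]

t = SumTree([3, 1, 4, 1, 5, 9, 2, 6])
t.update(2, 40)      # touches only ~log2(8) = 3 internal nodes
```

The same shape works for any associative combining function (min, max, multiset union), which is why it needs so little effort from the programmer.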
I still need to understand their work better, but I approached my work from a direction opposite of algorithms (FRP signals, imperative constraints). I have a lot of catching up to do.
Yes, that's exactly what you get if you do not implement a custom traceable data type (their terminology for a data type that supports repair), provided you write your code in such a way that the memoization is effective. Note that traceable data types do not necessarily need to be compound data structures; an integer works as well. For example, when summing a list of integers, if one of the integers in the list changes, you do not need to recompute the entire sum, or even a logarithmically sized part of it: you can just subtract the original int and add the new int back in.
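The subtract-then-add repair is simple enough to sketch (plain Python, not their framework): the traceable value is the running total, and an update repairs it in O(1) instead of re-summing the list.

```python
# Constant-time repair of an aggregated sum:
# subtract the old element, add the new one.
class RepairableSum:
    def __init__(self, xs):
        self.xs = list(xs)
        self.total = sum(self.xs)

    def update(self, i, new):
        old = self.xs[i]
        self.xs[i] = new
        self.total += new - old   # repair: no full recomputation
```

For instance, `RepairableSum([10, 20, 30])` starts with total 60; after `update(1, 25)` the total is repaired to 65 without touching the other elements.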
Here is also an article that does something related but in an imperative context rather than a functional one: http://research.microsoft.com/pubs/150180/oopsla065-burckhar...
I can't help but smile while I watch.
Smalltalk is an uncommon language these days, aside from its half-descendant, Objective-C.
Ruby borrows Smalltalk's OO semantics pretty much completely. Just sayin'
My favorite system that supports hot-swapping is Linux.
It works with every language, there is an excellent scheduler, processes are protected from each other, there are all kinds of interprocess communication techniques available, you can choose between files and databases for persistence.
I think the lame workflows people have while working within a conventional OS are a failure of imagination and a lack of understanding about how operating systems work. In particular, I suspect that a lot of people overestimate the cost of exec and IPC. Or, for example, do you really appreciate the fact that the operating system keeps file system data in RAM even if you exit the program and start up another one accessing the same file a microsecond later? Image-based persistence is not necessary for responsiveness.
Use exec(), use a database, create the editors that make you productive. I've used Lisp and Smalltalk and I've created a miniature VM/OS running in the browser with processes and persistence and all that. These days I'm more excited about old fashioned Unix techniques.
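As a toy illustration of the database-for-persistence point (SQLite standing in for "a database"; the function and table names are made up): state lives outside the process, so a program can exit and a fresh process picks up exactly where it left off, with no Smalltalk-style image required.

```python
# Persistence via a database instead of an image: each call simulates a
# short-lived process that opens the store, bumps a counter, and exits.
import os
import sqlite3
import tempfile

def run_once(path):
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, val INTEGER)")
    con.execute("INSERT OR IGNORE INTO kv VALUES ('runs', 0)")
    con.execute("UPDATE kv SET val = val + 1 WHERE key = 'runs'")
    con.commit()
    (runs,) = con.execute("SELECT val FROM kv WHERE key = 'runs'").fetchone()
    con.close()       # "process" exits; state survives on disk
    return runs

db_path = os.path.join(tempfile.mkdtemp(), "state.db")
run_once(db_path)            # first "process"
runs = run_once(db_path)     # second "process" sees the persisted state
```

The OS page cache mentioned above means the second open is usually served from RAM anyway, which is the point: responsiveness without image-based persistence.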
Indeed. There are all the Lisp dialects to start with.
And it's no surprise that Smalltalk had this feature too: Alan Kay (+), one of the key creators of Smalltalk, acknowledged the heavy influence of Lisp on Smalltalk:
(+) Famous quote attributed to Alan Kay: "I coined the term "object oriented". I can tell you I didn't have C++ in mind."
The old page concept is plenty fine in many contexts IMO.
The Browser — A Lament
*Binstock:* Still, you can't argue with the Web's success.
*Kay:* I think you can.
*Binstock:* Well, look at Wikipedia — it's a tremendous collaboration.
*Kay:* It is, but go to the article on Logo, can you write and execute Logo programs? Are there examples? No. The Wikipedia people didn't even imagine that, in spite of the fact that they're on a computer. That's why I never use PowerPoint. PowerPoint is just simulated acetate overhead slides, and to me, that is a kind of a moral crime. That's why I always do, not just dynamic stuff when I give a talk, but I do stuff that I'm interacting with on-the-fly. Because that is what the computer is for. People who don't do that either don't understand that or don't respect it.

The marketing people are not there to teach people, so probably one of the most disastrous interactions with computing was the fact that you could make money selling simulations of old, familiar media, and these apps just swamped most of the ideas of Doug Engelbart, for example. The Web browser, for many, many years, and still, even though it's running on a computer that can do X, Y, and Z, it's now up to about X and 1/2 of Y.
Treat every web site or address or endpoint as an object. Allow message passing as though the web were simply a single object-oriented system resident on the same computer. The definitions of the APIs are kept as standard, so the system "just works" with any site.
This is, of course, the vision of APIs. But could we do better? Could we make it closer to a true object-oriented system, even simpler to use? Not sure if it would make a big difference, but as we know, sometimes a simple thing that makes connections easier can make a huge difference (e.g. XMLHttpRequest).
Make the web as easy as smalltalk. For every API, and eventually, every site. Not just the ones we decide to make language-specific libraries for. One library per language just interfaces with "the web" as a fully-generic API you can access as easily as any object.
Just thought I'd share this thought. Any thoughts or ideas?
In particular, check out the tutorial page (Smalltalk in a browser!): http://amber-lang.net/learn.html
It's a little surprising how much we keep reinventing.
Rack <-> WSGI
Werkzeug is a utility library on top of WSGI that happens to be very nice, with features like those described above.
We are currently experimenting with distributing LK runtimes in nodejs instances so that basically a network of active runtimes can be created that communicate / exchange objects.
Note that the core development moved to GitHub.
(it's a semi-port of MIT's Scratch language).
It was very neat, and so easy to try out. You essentially download it, run it, and it's like a mini-operating system running. You can inspect the system itself and make changes, and then save/load these images. Basically what the article described. There are definitely some very good ideas there. Unfortunately, it wasn't easy for me to figure out how to write some simple programs that print stuff to standard output, and I couldn't really find any samples online. It really makes me appreciate golang.org's Hello World sample right on the front page.
And I'd also recommend Pharo over Squeak at this point: they forked Squeak and have been aggressively improving it into their own clean, modern Smalltalk.
If you want to use Smalltalk for UNIX-style scripting, I'd suggest GNU Smalltalk.
So I was surprised but also disappointed: to anyone who has worked with a REPL, for instance, it's not a big deal to hot-swap a piece of code. Many programmers use Emacs and do that almost daily in their text editor… And that's how the web was first designed (the first web browser, WorldWideWeb, let you edit the page you were visiting, much like giant and distributed wikis or, with the dynamism of today's web pages and services, much like giant and distributed Lisp machines or Smalltalk images).
Your web pages become live views of the underlying controllers. Modifying state in one client automatically modifies it in another. And I'm pretty sure it supports hot-swapping.
I'm working on a WordPress editor called WPide; live CSS/LESS editing is something I have planned. When your code editor is part of your website/app, I think there is a lot more scope for improving the editing process.
In the current code on GitHub I'm in the middle of implementing Git functionality; I'm just struggling with the push/pull side of things, SSH keys specifically.
I did have a concept in place for live editing of CSS without constant round trips to the server, by passing data over cookies. That was some time back; I've since realised HTML5 has much better ways to achieve that communication. At the time I planned to have the CSS editor in a different browser window, but it might be easier to move the CSS editor panel onto the front end as a small panel, like Firebug or the inspector. I've got code completion for PHP and WordPress, so I don't see why CSS can't be auto-completed as well.
Where does version control fit in this vision?
If you don't, a fork is automatically created.
Too much effort is wasted working around old obstacles.
- write snippets in a workspace, as vincie mentioned.
- write code against an API that doesn't exist, run it, have it fail, and then fill in the API and implementation.
- write stubs that fail automatically, and then fill them out (similar to above).
- you get to play with the live objects in a work space.
- fill in missing implementation.
- fix any errors.
Be warned, you might hate your so-called modern environments once you get back to them.
For Objective-C coders there is hope: http://injectionforxcode.com
Heaven help us…
WebDAV had varying levels of support in the Windows and Mac desktops. But AFAIK the Windows support got broken; not sure about the Mac. I think that access from a GUI file browser was one of its main use cases in the '90s. If nobody does that anymore, and I don't think they do, then the protocol will need a new app or it will continue to die.