The Web Is Becoming Smalltalk (zacharyvoase.com)
111 points by zacharyvoase on Feb 10, 2013 | 109 comments



> Now that conventions are moving towards ‘single-page’ web apps, the concept of a ‘page’ is losing its special meaning.

The web is so broken. The whole "web app" concept is just a giant hack.

The job of us web developers today consists of employing a never-ending pile of hacks (e.g. AJAX, long-polling, semi-broken languages and implementations, non-standard vendor APIs) to fight a browser into submission so it can be used to run a general application instead of simply navigating hypertext, its original purpose. All that because there's an incentive in keeping users inside walled gardens and holding their data ransom.
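To make just one of those hacks concrete, long-polling looks roughly like this (a sketch; the /events endpoint and the handleEvent callback are stand-ins): hold a request open until the server has something to say, then immediately reconnect, a loop pretending to be a push channel.

    // Minimal long-polling sketch. /events and handleEvent are hypothetical.
    function poll() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/events', true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
          if (xhr.status === 200) handleEvent(JSON.parse(xhr.responseText));
          poll(); // reconnect immediately, whether we got data or timed out
        }
      };
      xhr.send();
    }
    poll();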

It's unbelievable that we take for granted reinventing the wheel over and over again, dealing with weak standards and APIs designed by committee, fighting browsers into rendering interfaces and invoking the right callbacks.

I would rather take a web based on open APIs and rich clients (running native code) than the kludges we have today. The trend around mobile apps and some very successful native apps on the Mac App Store that consume web services seems to me like a best-of-both-worlds approach.

> I have a hunch that WebDAV combined with standard HTTP authentication could be the answer. I’m not 100% sure on it, but I can easily envision a world where you fix bugs in your website by opening it up in a browser, reading a stack trace, fixing the JS in that same browser and persisting your changes back to the server.

> I dream of the days when the Web truly does resemble SmallTalk.
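(Concretely, the quoted workflow boils down to an authenticated PUT of the edited resource back to the origin; a sketch, where the URL, the credentials and the editedSource variable are placeholders:)

    // Sketch: persist an in-browser JS fix back to the server over
    // WebDAV/HTTP. URL, credentials and editedSource are placeholders.
    var xhr = new XMLHttpRequest();
    xhr.open('PUT', 'https://example.com/static/app.js', true, 'user', 'password');
    xhr.setRequestHeader('Content-Type', 'application/javascript');
    xhr.send(editedSource); // the fixed JS, straight from the dev tools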

This would be more like a world with 5 different SmallTalk implementations, each slightly different, and all with a crappy/non-existent standard library and security model.


> I would rather take a web based on open APIs and rich clients (running native code) than the kludges we have today.

But that exists. It's called native programs, which are free to access web APIs and whatnot just as much as any browser is.

The whole point of the web is you don't have to download software, you don't have to store your data locally. Sure, it's a pain to support the quirks of different browsers and some badly designed web standards, but I'll take that any day over having to program for entirely different operating systems. I fail to see how web apps are a hack at all. Sure, they're not the "original purpose" of the web, but they work awfully well for things like GMail, GDocs, GMaps, etc. -- I'm awfully happy to have all these inside of browser tabs, instead of cluttering up my machine with more installed apps.


> I fail to see how web apps are a hack at all. (...) Sure, they're not the "original purpose" of the web, but they work awfully well for things like GMail, GDocs, GMaps, etc.

Well, you realize all the products you mention were only possible because someone figured out a hack (XMLHttpRequest), and now everybody relies on something that wasn't thought out in any way, shape or form? Can you see how fragile the software stack is that an entire web industry is basing itself on?

Also, I don't find GMail, GDocs or GMaps the epitome of good UI and interaction models; I can point out a handful of quirks (GDocs is particularly bad). I don't think these products would withstand a round of user testing; people just take them for granted because they are provided for free. When a better alternative shows up (Sparrow), people flock to it even if it's a paid app. Of course, Google killed it; they are web app zealots after all and need the revenue from the ads.


Arguments about the hackiness and fragility of web apps are always extremely silly.

Unix was also a hack. C was also a hack. Windows 95 was also a hack. Windows NT had a better design, but with hacks bolted on to make it compatible with Win 9x. Java was a hack. Linux was a hack. The Internet itself is the biggest hack of all.

In fact I dare you to name one successful platform that wasn't a hack. Because from where I'm standing, the platforms that weren't hacks failed, in addition to being on the whole horrible.

> I don't find GMail, GDocs or GMaps the epitome of good UI and interaction models; I can point out a handful of quirks

With all those quirks, GMail killed the desktop email client for me. For the last 4 years I haven't been able to use any other client. Even on my iPad, before the latest "native" GMail on iOS I was using the web version, simply because Apple's Mail app wasn't doing threaded conversations well.

For all the quirks of the web version of Google Maps, at least it is available everywhere. Just ask the poor schmucks that were in a hurry to upgrade to iOS 6.

Same argument goes for GDocs. It's available everywhere and it allows for efficient collaborative editing. You just open a browser and you're good to go. It doesn't even have to be your browser. I do value this a lot.

> I don't think these products would withstand a round of user testing; people just take them for granted because they are provided for free.

I'm a Google Apps customer and I also pay for Google Drive, if it matters. And quite the contrary: people receiving stuff for free tend to be more self-entitled and critical. That people don't voice too many negative opinions on these products is kind of shocking; either Google has a really good PR department or these products do in fact satisfy most users.

> When a better alternative shows up (Sparrow), people flock to it even if it's a paid app

For what it's worth, I didn't.



XMLHttpRequest is not a hack. It was deliberately developed by Microsoft as part of Internet Explorer to enable desktop-like functionality in Outlook Web Access. [http://www.alexhopmann.com/xmlhttp.htm]. It was then copied by Netscape, and it spread from there.


> XMLHttpRequest is not a hack. It was deliberately developed by Microsoft as part of Internet Explorer to enable desktop-like functionality in Outlook Web Access.

I believe you tried to correct me, but you proved my point even further.

Desktop-like functionality in software meant for navigating between linked documents is simply a broken interaction model.


Or you could see it as finishing off the implementation of the hypermedia model's original idea of transclusion.


> When a better alternative shows up (Sparrow), people flock to it even if it's a paid app. Of course, Google killed it; they are web app zealots after all and need the revenue from the ads.

Sparrow? It existed for a couple of years before being bought by Google and people hardly "flocked" to it. It was mostly a marginal app. The overwhelming majority of OS X users either used Mail.app or Gmail on the web.

Google just bought it for the talent and to get some better native app tech, not because people were ...flocking to Sparrow and lessening Google's revenue. I doubt there was even a dent in their revenue.


> Well, you realize all the products you mention were only possible because someone figured out a hack (XMLHttpRequest), and now everybody relies on something that wasn't thought out in any way, shape or form? Can you see how fragile the software stack is that an entire web industry is basing itself on?

How is that different from anything else, really? The first wheel was also a hack. Fire was also a hack (hey, let's bang these two stones together).

XMLHttpRequest was not a hack. It was a feature engineered by Microsoft that people found another use for. After that, the hack status was gone: it was standardised, documented, best practices were written, an interchange format was invented for it (JSON), etc.

Don't conflate the origin with the result.


Sure, you don't have to download software.

On the other hand, the platforms we /do/ download software onto (in real time) are disgusting.

(I wrote PHP for the first time last week. Oh. My. God. Why does this crap exist? WHY IN GOD'S NAME (yes I am shouting) DO WE PUT UP WITH THIS UNBELIEVABLY MISERABLE CRAPFEST WE CALL MODERN WEB DEVELOPMENT!?)

Um . . . breathe.

I don't have any answers today that don't involve burning something to the ground, public shaming, or retreating to a code monastery and crafting a diamond (which /never/ works).

How do we get unstuck?


PHP has nothing to do with the browser (i.e. the "platform you download software onto"). If you don't like programming in PHP, then pick a different server-side language instead. Even JavaScript can mostly be avoided, if you want, by using one of the higher-level languages that compile down to JavaScript.


My objections don't start (or end) with PHP. Browsers are terrible platforms, and the protocols we have to deal with are likewise terrible (ever written an HTTP proxy? Oh boy).


> (ever written an HTTP proxy? Oh boy)

Oh yeah, and it's actually not that hard. After all, it's a well-defined, text-based protocol. It's sweet to implement if you're not using C. That is, until you start supporting persistent connections or interacting with the content.

In any case, it's loads better than FTP and SMTP (at least HTTP spells things out completely), and loads easier to use than binary protocols.
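To back that up, a bare-bones forward proxy in Node.js covers the easy 90% in about fifteen lines (a sketch: no keep-alive, no CONNECT/TLS tunnelling, no content rewriting; exactly the parts where it stops being sweet).

    // Minimal HTTP forward proxy sketch in Node.js.
    var http = require('http');
    var url = require('url');

    http.createServer(function (clientReq, clientRes) {
      var target = url.parse(clientReq.url); // proxies receive absolute URLs
      var proxyReq = http.request({
        hostname: target.hostname,
        port: target.port || 80,
        path: target.path,
        method: clientReq.method,
        headers: clientReq.headers
      }, function (proxyRes) {
        clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
        proxyRes.pipe(clientRes); // stream the response straight through
      });
      clientReq.pipe(proxyReq);   // and the request body the other way
    }).listen(8080);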


"A giant hack" is your name for evolved technology, as opposed to designed technology.

Of course, there's nothing truly "evolved" in software and nothing designed that didn't evolve; it's just two ends of an axis measuring a weird thing like "control and unitary vision". At the "evolved" extreme are things like browsers and PHP - the templating language that was never intended to be a programming language and somehow mutated into one (and I attribute most of its success to being attuned to the "evolvability spirit" of our web). Most "well designed" or "well engineered" software that could have powered the web either failed miserably, or never reached the point of being usable, or just wasn't there at the right time.

I think there's a third path somewhere in this chaos, a path of software "designed/engineered to evolve", as opposed to "evolved" and "designed" software, a "nirvana" we should look for in this darkness. But in the meantime we can either embrace the "twisted creatures of webvolution"... or choose a different line of work :)


I think there is a distinction between a platform's authors adding more functionality and the platform's users abusing existing functionality to make it do something it was never meant to.

I would define the former as evolved and the latter as a hack.

Web technology seems to have developed from a little of each: people were using iframes to do HTTP requests, so XMLHttpRequest was added to IE; people were using long polling, so HTML5 now supports WebSockets; etc.
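Side by side, the difference shows in the code: the long-polling hack is a reconnect loop over XHR (see the sketch earlier in the thread), while the standardized successor is a single persistent channel (a sketch; the endpoint and handleEvent are stand-ins).

    // WebSockets: the standardized replacement for the long-polling hack.
    var ws = new WebSocket('ws://example.com/events'); // hypothetical endpoint
    ws.onopen = function () { ws.send('subscribe'); };
    ws.onmessage = function (e) { handleEvent(JSON.parse(e.data)); };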

I think the point of the grandparent is that the original standards were never meant to power applications; the web has changed, but due to backwards compatibility or old implementations the technology still has some warts.

An example of this is CSS: it's a fantastic system if you want to lay out some static text on a page interleaved with some static images and style the whole lot. If you are trying to create a semi-traditional GUI, or a page where you can't know the sizes of elements in advance, then the shortcomings of CSS become abundantly clear.


I don't like the distinction, because for software whose developers respond to what the users are doing, after every hackish new way the users find to (ab)use it, the developers come and add things that make it easier to do the things done by "hacks" (but at the same time they also need to keep the hacks working). And if the "users" of software like browsers are actually developers, it's obvious that they are not going to wait for features to be properly implemented; they will just (ab)use whatever half-assed features they can find.

...in my view, if you can clearly see "the distinction" between authors adding functionality and users abusing existing functionality for new purposes, it means that one party is clearly moving too slowly: either the "users" are too slow to discover new ways to adopt and (ab)use the new software, or the developers are working too slowly and the user-developers need to do too many ugly hacks because the features they need take too long to be released and standardized (unfortunately the current state of web development seems closer to this state of affairs).

But there are possible solutions: I think things like standardization could be accelerated by having "tzars" in all committees (people that can just choose by themselves how some things over which they have authority get to work, without needing a consensus or having to justify to others why they chose one way). I'm using the word "tzar" as in Haskell's "syntax tzar", the guy who could just choose what syntax a certain language feature needed and end the discussion right there, because nobody had the right to contradict him [1] (I consider Haskell's design-by-committee the only example where a committee did something mostly right, and I wish similar "committees" would work on things like CSS and new ECMAScript versions...). But again, if standards evolve too fast, implementors will have to play catch-up and end up with half-done implementations, which today happens even for sloth-slow evolving things like CSS.

...but of course, the real "root of all evil" is the backwards compatibility requirement, but we can't get away from this.

...and I know, some people will want to burn me at the stake for screaming the "we're not going fast enough" heresy about web tech :)

[1] http://my.safaribooksonline.com/book/software-engineering-an...


I disagree with your idea of running native code on the client as a replacement for the web in the context of laptops and desktops. The App Store works because Apple curates it. In the Android marketplace, many applications are viruses (http://www.androidguys.com/2012/12/14/lookout-18-million-inf...). If more than 9-10% of laptops and desktops had access to the App Store, I would imagine that malware would be a more significant problem. Therefore, while I agree with the rest of your complaints, I disagree with the idea of making the web run using native code. I think that an interpreted environment will become good enough to do everything a user wants as computers get faster. In fact, if we can assume the standard user watches YouTube videos and sends and receives images and text through Facebook/Instagram/..., then interpreted code is fast enough today. On the web, where the user must constantly accept and run code from unknown and untrusted developers, code must run in a sandbox to minimize the risk of unwanted access to the user's computer.

Side Note: I also just really, really hate app stores. I find them to be much more restrictive than just throwing a website up and allowing anyone to access it. The idea of having to pay someone to allow users to be able to use my stuff is way too restrictive to me.


Strictly speaking, native clients have nothing to do with app stores or speed. I just mentioned mobile apps and some software available in the Mac App Store as success cases because native clients backed by HTTP APIs seem to be a much saner development environment than shoehorning an app to run inside a browser, which means being limited to one language, crappy APIs, broken interaction models and a lack of standards.

> On the web, where the user must constantly accept and run code from unknown and untrusted developers, code must run in a sandbox to minimize risk of unwanted access to the user's computer.

First, sandboxed environments are not limited to browsers (e.g., OS X 10.8). Second, browsers are often the attack vector, because they were never meant to run applications, and have broken security models (e.g., no application signing). So you end up in a sandboxed environment with terribly limited APIs in the name of security, and on the other hand have hard-to-trace security holes, like XSS.


This is what I, after years of web development consulting, now advocate.

The browser should be left for documents.

The only way to provide the best user experience and operating system integration, regardless of the environment, is via native applications that communicate via network protocols.


> I would rather take a web based on open APIs and rich clients (running native code) than the kludges we have today.

I wouldn't. Installing native apps for everything I do, when I could just go to an app's website and use it? I have no intention of going back to the 80s.


There is no dichotomy between "the web" and "native applications", like the rhetoric here seems to imply. The line is much more blurred. You can buy "apps" on the Chrome Web Store that are launched from outside of the browser, render their own windows without Chrome's... chrome, etc. You can write native code that runs in a sandbox (NaCl) that gets updated whenever you hit Ctrl+R.

And combining the two, you can "install" webapps, which happen to be launched like native apps, which happen to be written and updated like webapps, which happen to have all the performance and access-to-resources of native apps. Photoshop--the real one, not a janky Javascript clone--running "in" your web browser. Though it's not really a web browser any more, at that point: it's just a platform, like Silverlight, Adobe Air, etc. It just happens to be one for which there is this default "hypermedia viewer" app that comes with it.


Ideally, native apps wouldn't require installation any more than a web app does. There is no reason why native apps couldn't be loaded dynamically, on demand, from the internet. Java Web Start did something like that, but like all Java it was a bit clunky.

Of course, current operating systems have poor support for this kind of behavior, but that could be remedied. I think the "activities" concept in Android is already a first step towards that goal.


There are inherent problems which are not Java-specific. For starters, your app would need to have very few rights; the sandbox should be as small as possible (like the browser/JS one).

If you have different permissions and a complex sandbox with a runtime that supports classloaders, you run into the same security problems that Java currently enjoys for applets.


Security, permission management, and sandboxing would be critical concepts in a system running code from potentially untrusted sources. But I don't think the security problems often associated with Java and Flash are inherent to such sandboxes. Web browsers themselves are an excellent example of sandboxing that seems to have stood up fairly well against attacks. And NaCl demonstrates that the security model of web browsers is extensible to native code.


I don't disagree, but the web has a huge advantage over native applications: deployment. Installation is a no-op. Every user has the latest version. The same website will (probably) work on an iPad and a Windows PC. Agile companies like Facebook roll out multiple releases each day. Imagine if Microsoft Word wanted to install an update every day! :)


...what would be so wrong with something like Word installing even a few updates a day, if it did so "stealthily" and without disrupting my workflow? It can be done. People writing auto-upgradeable software should learn a thing or two from good spyware writers...


Applets were always the answer. Amazing how badly a technology can fail because of start-up time.


Chris Granger's Light Table is also trying to do it. Of course, Bret Victor's talk breathed new life into this whole thing. But everything old is new again.

People want to build software from a small kernel that scaffolds itself. They want to immediately switch between "testing the app" mode and "building the app" mode. They want something more than a text file, a tool pipeline, and an executable at the other end. On the other hand, they don't want their code to turn into an opaque binary blob that can't be diffed, can't be read by text tools, and can't be shared with people who don't feel like using your sweet "Invented on Principle" tool.

It feels like we're converging on the ideal solution, but it's happening more slowly than I think many of us predicted.


Exactly. I'd love more interactivity, but you can pry my plain-text source files from my cold dead hands. The Smalltalk environment was way too magical and too easy to mess up when I tried it.


Add me to the list of people dedicating themselves to trying something in this very area.

I don't exactly know what I'm creating yet (maybe there aren't terms for it yet), because I change things as I go and it's not done yet. But the current vision is a sustainable automation platform, where you can add/change/build/do anything (because it's open source) with what you're working with. So you can create yourself a "testing the app" button and "building the app" button, and that will become something that's available to you. Actually, I'm trying to make it so that those buttons will be automatically available for you as you naturally do the things you'd normally do, but I haven't gotten to this part yet. (Oh, and perhaps instead of buttons you have to press, new output can simply appear on your screen right away.)

In short, my vision agrees very much with what you're saying, but I still have a lot of work to make my project a viable building tool, and it is happening very slowly indeed.


It's not the same. Light Table and Bret's demos go way beyond what Smalltalk ever did. It's not just about hot swapping, but liveness, and the Smalltalk community never got that [1]. But it doesn't stop them from saying they've already done it, because they don't understand what they are seeing, and thinking is hard.

[1] John Maloney got it with Morphic, and even coined the term "liveness" at about the same time Tanimoto did. However, this was originally in Self and relatively independent from Self's hot swapping capabilities. It also didn't really map back to code very well (it was very dependent on direct manipulation via Morphic's edit menu).


More than (almost) any other language/environment, Smalltalk is capable of this kind of 'liveness'. That it hasn't been implemented in the way you describe is due more to a lack of development effort than anything else.

For the last four decades, Smalltalk has been providing a glimpse of the future. That future is still waiting to happen.


That's a vacuous statement if I've ever heard one. Nothing in Smalltalk makes achieving liveness easier than, say, Java or, more obviously, a language with very encapsulated state like Erlang, and definitely not various visual languages where you get liveness for free (Quartz Composer!). It's telling that Granger et al. are basing Light Table on Clojure/Lisp rather than Smalltalk (it will be interesting to see what Bracha does with Newspeak, however). Also consider various game editing engines (Unreal, Unity) that offer live scene scripting capabilities in whatever scripting language and C++ they support.

Smalltalk was crazy innovative, while Self (Smalltalk's only real successor) gave us the first live graphics toolkit (Morphic). But the future is still being invented, and it will be a much better experience than Smalltalk ever was.


Smalltalk is capable of evaluating statements as they're entered, and it's a relatively small step to reflect those changes immediately on compilation - thus, 'liveness', as in Light Table. I can't imagine being able to do that in Java.

If I'm missing something, can you please provide some more detail?


Liveness is an experience; hot swapping is a mechanism that gets you 5% of the way there. It's like saying "we've done 5% of the work, the rest of the 95% should be easy, right?" Actually, just figuring out what the experience should be is difficult. So, more details...

Hancock defines the term "live programming" in his thesis [1], and it's where I get my definition from (before Hancock, the term doesn't exist, though liveness was defined by Tanimoto and Maloney in the 90s). Basically, live programming is about continuous feedback, for which hot swapping might be useful (though it must be said it's not exactly necessary, nor is it sufficient). But there is much more to it: you want continuous feedback about the code you are editing, not just some idea that the code will run sometime in the future in a running program. You also want to observe the behavior of this code in a way that is comprehensible, and map this behavior back to your code.

I wrote and presented a paper [2] on live programming back in 2007. Ralph Johnson (a big Smalltalker) was in my audience and had the same complaint: he only saw hot swapping and not the experience I was presenting. To him, it was mechanism, not experience. I wonder if this is a problem with Smalltalkers in general.

[1] http://llk.media.mit.edu/papers/ch-phd.pdf

[2] http://research.microsoft.com/apps/pubs/default.aspx?id=1793...


I understand the difference between 'hot swapping' and 'liveness'. However (admittedly, I may be mistaken), I believe the Smalltalk architecture already has the requisite functionality (eval and reflection - just like Lisp) to support this, although no one's actually implemented it yet. (And it ought to be more straightforward than building it in a Lisp-to-Javascript compiler on top of a Lisp on top of the JVM!)

Some of the basis for my assertion came from this article: http://liveprogramming.github.com/liveblog/2013/01/13/a-hist...


Any Turing-complete language has the requisite functionality to support liveness, but something like Time Warp support by the OS/VM would make it easier. But honestly, at this point, even designing the experience (vs. implementing it) is hard enough, and we owe a lot to Bret Victor's talk here. Hancock's thesis sets high standards on how the feedback must be comprehensible (as opposed to some random lights flashing on and off).

I wrote a lengthy post in the history article you linked to. It's just not the live programming history that I'm familiar with; they seem to be falling into the same Smalltalk mechanism trap that I was talking about in this thread.


Similar to Self, Io would be a strong place to start for getting cheap liveness.


I don't quite understand this distinction between liveness and hotswapping, as all the examples of liveness that I've seen involve hotswapping code that causes graphic or audio side effects, and clearly that sort of thing has very real practical limitations.


There is one demo in Bret Victor's IoP talk where he is live programming a sorting algorithm and something non-graphical is visualized (in this case, control flow and local variable states). The hot swapping really isn't the focus at all; it's the live feedback that is important.


You might be fighting a battle that's already lost. For most people, live programming is having a running system with a REPL or equivalent attached so that you can run & update code inside that system. For example a running web server with some mechanism to add a new request handler while the server is running. Or a program that's playing programmatic music or some kind of graphical demonstration where you can add and redefine functions to change the music/graphics. This kind of "live programming" is basically the same as hot swapping, but some people also associate it explicitly with a live stage performance (e.g. music or visual).

With the kind of live programming that you mean there is some meta system that is monitoring your code and continuously giving you feedback on it. Perhaps it does this by just running the code and showing the output, perhaps it displays the execution trace in some way, perhaps it displays a visualization of a data structure over time. In a way, live feedback on static type errors could also be considered a limited form of live programming. Maybe it's a good idea to adopt a new term for this kind of live programming? It would also help from a marketing perspective I think, to have a new thing that people can be excited about rather than a term that they associate with a boring, limited and old fashioned feature (i.e. hotswapping).

Even with this second notion of live programming, the question of updating running code does not go away. If you are developing a game, you may want to do live programming by running the game next to the code and have it be updated whenever the code changes. But a game has state, and how do you transport that state to the next version of the code? Hot swapping code by blindly mutating a function pointer in the running game is obviously not the answer. That's just a hack that works some of the time: it doesn't work when updating code while the running game is still in the middle of something, it corrupts the state when there is a bug in the code, and it doesn't work at all when the structure of a data structure changes. The perspective "how do I transport the state to the next version of the code" is much better than "how do I shove new code into the running system with the old state". The same issue comes up with most programs, not just games. This is still an open problem as far as I know. For live programming we also need tools to manage and reset the state. When you have corrupted your state with a bug in your code, you want to be able to quickly go back to a previous non-corrupt state. Even if you change the entire programming model, you'll still have to address this state update problem in some way.
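A toy illustration of why blind pointer-swapping fails (a sketch; the state shape and render functions are made up):

    // Naive hot swapping: mutate the "function pointer" and hope.
    var state = { items: [1, 2, 3] };                   // live, mutable state
    var render = function (s) { return s.items.join(', '); };

    // v2 of the code assumes a different state shape:
    var renderV2 = function (s) { return s.entries.join(', '); };

    render = renderV2; // the code is new...
    render(state);     // ...but the old state wasn't migrated: s.entries is
                       // undefined, so this throws. The state has to be
                       // transported, not just the code.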


> You might be fighting a battle that's already lost.

REPLs and interactive programming existed long before the "live programming" experience was defined (by Hancock), and I only use the term to describe what Bret was showing off in his IoP talk, as well as the experience the Light Table people seem to be striving for. I might be a bit pedantic, but there are plenty of other terms to describe the older, less live experiences! Hot swapping is just some mechanism to achieve some undefined experience; "I changed my code while my program is running" is vague enough. It typically has to be coupled with some other refresh mechanism (e.g. stack unwinding) to be useful, and even then it typically doesn't do more than it advertises (func pointer f was pointing to c_0 and now points to c_1).

Now live coding...is completely different and has an independent origin from live programming. Whereas "live coding" is about some programmer coding "live" in front of an audience, live programming is about receiving continuous comprehensible feedback about your program edits in the context of a running program. Quite a huge difference in meaning with very different goals!

> With the kind of live programming that you mean there is some meta system that is monitoring your code and continuously giving you feedback on it.

It's coding with a water hose vs. a bow and arrow. Debugging is not a separate experience; it happens continuously while editing. If you can't provide enough continuous feedback to get rid of a separate debugging phase, then it's not really live programming.

> Maybe it's a good idea to adopt a new term for this kind of live programming? It would also help from a marketing perspective I think, to have a new thing that people can be excited about rather than a term that they associate with a boring, limited and old fashioned feature (i.e. hotswapping).

But the new term was co-opted to describe an old experience! Hancock's definition is unique (no one used this term before 2003), fairly complete, and very compatible with what Bret Victor was showing off in his IoP work. Why should we back off and invent yet another new term to describe the new experience, whose original term was hijacked to describe old experiences because people couldn't understand the new one? Crazy!

> But a game has state, and how do you transport that state to the next version of the code?

Today this is framework-specific, and all major game engines have a way of doing this, as they want to allow designers to script levels in real time without losing their context. It doesn't even require language support necessarily, but it's not something you ever get "for free"; it's something that is baked explicitly into your framework.

> The same issue comes up with most programs, not just games. This is still an open problem as far as I know. For live programming we also need tools to manage and reset the state. When you have corrupted your state with a bug in your code, you want to be able to quickly go back to a previous non-corrupt state. Even if you change the entire programming model, you'll still have to address this state update problem in some way.

No one has yet figured out how to come up with an expressive, general programming model that achieves this efficiently, but you can always "record" the input event history of your program and re-exec the entire program on a code change; i.e. there is an inefficient baseline. You still have problems with causality between program output and input; e.g. consider the user clicking a button that no longer exists or moved! Lots of work still to do... just don't take away my term, please!
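The baseline looks something like this (a sketch; makeProgram, step and initialCode are hypothetical stand-ins for whatever your program is):

    // Inefficient baseline: record the input history, re-execute the whole
    // program whenever the code changes.
    var program = makeProgram(initialCode);
    var eventLog = [];

    function onInput(ev) {
      eventLog.push(ev);                // remember everything the user did
      program.step(ev);
    }

    function onCodeChange(newCode) {
      program = makeProgram(newCode);   // fresh state, nothing to migrate
      eventLog.forEach(function (ev) {
        program.step(ev);               // replay history against the new
      });                               // code; a recorded click may now
    }                                   // target a button that moved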


> Whereas "live coding" is about some programmer coding "live" in front of an audience, live programming is about receiving continuous comprehensible feedback about your program edits in the context of a running program. Quite a huge difference in meaning with very different goals!

Yes that's what I mean! A tiny difference in the terms we use: live coding vs live programming. That's why it's confusing to people.

> Why should we back off and invent yet another new term to describe the new experience, whose original term was hijacked to describe old experiences because people couldn't understand the new one? Crazy!

Sometimes you have to cut your losses ;-) Another reason why I dislike the term "live programming" is because it confuses two separate concepts: continuous feedback and rich feedback.

Conventional debugging is pressing a button to run your code and seeing what the result is. Instead of just displaying the result, you could display the entire execution trace (time traveling debuggers). You could write unit tests and display which passed and which failed. You could output some visualization of some data structure in the program. For a game you could output a series of frames overlaid on each other (like Bret Victor does). Then you have type checking, sensitivity to floating point bit width for numerical code, performance profiling, etc. This is all about giving different kinds of feedback.

Continuous feedback is about getting feedback without having to press a button. Classical live programming is running the program continuously and continuously displaying its output; this is the continuous feedback version of ordinary debugging. Automated background unit test runners are the continuous version of unit testing. In the same way you have a continuous version of the other debugging techniques. In a way, live feedback on static type errors could also be considered a limited form of live programming. Both continuous feedback and rich feedback are very valuable, and although they are stronger together, they are separate concepts. Perhaps it would be a good idea to have separate words for them; that would certainly greatly clarify "live programming".

> but you can always "record" the input event history of your program and re-exec the entire program on a code change; i.e. there is an inefficient baseline. You still have problems with causality between program output and input; e.g. consider the user clicking a button that no longer exists or moved!

Yes, this is robust to internal data structure changes but no longer robust to UI changes. Viewing a program as a series of event stream transformers and time-varying values, as in FRP, may help a bit. At the lowest level you have a stream of mouse clicks on pixel (x,y) and keyboard events with keycode k. Then the UI toolkit transforms that stream of events into event streams on UI elements: click on button "delete", text input to textfield "email address". Then that gets transformed into logical operations and data: delete_address_book_entry(...) and email_address. Then that gets transformed into the complete time-varying high-level state of the entire program (address_book_database). You can try to transport the state at each of the different levels, but in the end I think a completely automated solution is impossible. You are going to need domain-specific info on how to do schema migration in the general case. For live programming that may not be worth it, because you can just start over with a fresh state, but for things like web site databases you don't want to lose data, so you have to migrate manually.

[tangent: Currently there are a lot of ad-hoc solutions to this, e.g. never remove an attribute from your data model, and when you add new attributes make sure all code works even if that attribute is missing. Reddit even goes so far as to structure its entire database as "key,attribute,value" triples instead of using a structured schema, so that the schema never needs to change, but of course this just moves the problem from the database into the code that talks to the database. A principled approach where you write an explicit function to migrate your data from schema version n to schema version n+1 would work better. That migration function takes the entire state/database with schema n as input and produces an entire new state/database with schema n+1. When the state/database is large this would take too long to do in one pass, but with laziness it can be done on demand.]
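The principled approach from the tangent might look like this (a sketch; the v1/v2 address book schema is invented): one explicit function per schema step, applied eagerly here, though a lazy version could migrate entries on demand.

    // Explicit migration from schema v1 to v2 (invented example: a single
    // phone field becomes a list of phones).
    function migrateV1toV2(db) {
      return db.map(function (entry) {
        return {
          name: entry.name,
          email: entry.email,
          phones: entry.phone ? [entry.phone] : [] // v1 scalar -> v2 list
        };
      });
    }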

You don't need to limit yourself to running one instance of the program. You could record multiple input sequences representing multiple testing scenarios, and display the results of running each of them, or even display each of them being continuously performed so that you can see all the steps in between. In any case as you say there is lots of work still to be done.


> Sometimes you have to cut your losses ;-) Another reason why I dislike the term "live programming" is because it confuses two separate concepts: continuous feedback and rich feedback.

Again if we go back to Hancock's thesis, it's all there! It's not just about continuous feedback; it's about feedback with respect to a steady frame, it's about feedback that is relevant to your programming task, it's about feedback that is comprehensible. Hancock got it right the first time; there is no classical live programming (though there were other forms of liveness before). Actually, this is something I didn't get myself in my 2007 paper.

I don't think I need to abandon my word, especially since the standard bearers are Bret's demos; people want "that", not some sort of vaguely defined Smalltalk hot swapping experience. The community I'm fighting with over the word is small and insignificant vs. the Bret fans :).

As for the rest of your post, explicit state migration is a big deal for deployment hot swapping (Erlang?) but ultimately a nuance during a debugging session. A "best" effort with reset as a backup is more usable.

But maybe take a look at our UIST déjà Vu paper [1]: here the input is defined as a recorded video that undergoes Kinect processing, and we are primarily interested in the intermediate output frames, not just the last one. So the primary problems are ones of visualization, while we ignore the hard problem of memoization and just replay the whole program. We can even manage multiple input streams and switch between them.

Kinect programs are good examples of extremely stateful programs with well defined inputs. One of the next problems I'm trying to solve is how to memoize between the frames of such programs to make the feedback more lively.

[1] http://research.microsoft.com/apps/pubs/default.aspx?id=1793...


> Again if we go back to Hancock's thesis, it's all there!

Yes, the problem is not with the definition of the term, but with the term "live programming" itself! It is too vague and can apply to too many concepts, and hence we're seeing people use it and interpret it for many different concepts. Nobody will go read a thesis to learn what a term means. But then again, "object oriented programming" is vague as well. The notion of "steady frame" does seem oddly domain specific. In the words of that thesis: water-hosing your way towards the correct floating point cutoff value, or towards the value of a parameter in a formula that produces an aesthetically pleasing result, works great, but I'm not convinced that you can "water hose" your way to a correct sorting algorithm, for example. Perhaps I have misunderstood what he meant, though.

> A "best" effort with reset as a back up is more usable.

Yeah, I agree. I think the same primitives that can be used for building good explicit state migration tools, like saving the entire state and recording input sequences or recording and replaying higher level internal program events, can also be used for building good custom live programming experiences. So they are not two entirely disjoint problems.

> But maybe take a look at our UIST déjà Vu paper [1]

That's very interesting and looks like an area where live programming can work particularly well! A meta toolkit for building such domain specific live programming environments may be very useful if live programming is to take off in the mainstream. Of course LightTable is trying to do some of that, but while it started out in a quite exciting way they seem to be going back to being a traditional editor more and more (albeit extensible).

> One of the next problems I'm trying to solve is how to memoize between the frames of such programs to make the feedback more lively.

Probably you've seen that already, but have you looked at self adjusting computation? http://www.umut-acar.org/self-adjusting-computation


> A meta toolkit for building such domain specific live programming environments may be very useful if live programming is to take off in the mainstream.

That's exactly what we're trying to do with LT; see my "The Future is Specific" post [1].

> they seem to be going back to being a traditional editor more and more (albeit extensible)

This is a necessary detour as we build a foundation that actually works and allows us to really make the more interesting stuff. If we can't even deal with files, what good are we going to be at dealing with the much more complicated scenario of groups of portions of files? :)

[1]: http://www.chris-granger.com/2012/05/21/the-future-is-specif...


That's great to hear! I really hope LightTable works out.


> Yes, the problem is not with the definition of the term, but with the term "live programming" itself!

True. But I think the word has worked well until recently.

> The notion of "steady frame" does seem oddly domain specific.

Not really, but please wait for a better explanation until my next paper. One of Bret's examples in his IoP video is a correct sorting algorithm, actually, programmed with live feedback. I also mentioned a return to printf debugging on LtU before, and it's basically the direction I'm taking right now (the UI represented by the steady frame is probably not the GUI that is used by an end user).

> Probably you've seen that already, but have you looked at self adjusting computation? http://www.umut-acar.org/self-adjusting-computation

Their work doesn't seem to scale yet (all examples seem to be small algorithmic functions), while I'm already writing complete programs, compilers even, with my own methods, which are based more on invalidate/recompute than on computing exact repair functions. I'll be able to relate to this work better when they start dealing with bigger programs and state.


> True. But I think the word has worked well until recently.

I just saw this: http://www.infoq.com/presentations/Live-Programming

"Sam Aaron promotes the benefits of Live Programming using interactive editors, REPL sessions, real-time visuals and sound, live documentation and on-the-fly-compilation." :D

> One of Bret's examples in his IoP video is a correct sorting algorithm, actually, programmed with live feedback. I also mentioned a return to printf debugging on LtU before, and it's basically the direction I'm taking right now

Yeah, this interpretation of 'steady frame' is fully general, I think: the ability to compare feedback of version n with feedback of version n+1 without getting lost. My interpretation was more specific because of the water hose vs. bow and arrow analogy: continuously twiddling knobs until you get the result you want vs. discrete aim-and-shoot. For example, picking the color of a UI widget with a continuous slider vs. entering the RGB value and reloading. Since a sorting algorithm is not a continuous quantity, aim-and-shoot is inevitable, though you can still make it a bit more "continuous" by making the aim-and-shoot cycle more rapid.

> which are based more on invalidate/recompute than on computing exact repair functions

You can do this in their framework; you can specify at which granularity you want to have the 'repair functions' and at which granularity you just want to recompute. For example, if you have a List<Changeable<T>>, then each item in the list can be repaired independently; if you have a Changeable<List<T>>, the whole list will be recomputed. Although you probably want to automatically find the right granularity rather than force the user to specify it?


> I just saw this: http://www.infoq.com/presentations/Live-Programming

Yeah, I saw it too. I haven't watched the talk, though, but I expect it to be more of the same promotion of live coding as somehow actually being live programming (programming is like playing music! Ya...).

> Since a sorting algorithm is not a continuous quantity, aim-and-shoot is inevitable, though you can still make it a bit more "continuous" by making the aim-and-shoot cycle more rapid.

A sorting algorithm can be fudged as a continuous function. But here "continuous" means "continuous feedback", not "data with continuous values". The point is not that the code can be manipulated via a knob, but that as I edit the code (usually with discrete keystrokes and edits), I can observe the results of those edits continuously.

> You can do this in their framework; you can specify at which granularity you want to have the 'repair functions' and at which granularity you just want to recompute.

I'll have to look at this work more closely; the fact that we need custom repair functions at all bothers me (repair should just be defined simply as undo-replay). The granularity of memoization is an issue that has to be under programmer control, I think.


You don't need custom repair functions; the default is undo-replay, but in some cases it helps performance to have custom repair functions. For example, suppose you have a changeable list, and a changeable multiset (a set that also keeps a count for each element). Now you do theList.toMultiSet(). If the list changes, then the multiset has to change as well. If you applied their framework all the way down, this might be reasonably efficient. But with custom repair functions it can be more efficient: if an element gets added to the list, just increment the count of that element in the multiset. If an element gets deleted, decrement the count of that element in the multiset.
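In code, the custom repair functions amount to something like this (a sketch; the Multiset class and hook names are invented): O(1) work per list change instead of undo-replay.

    // Keep the multiset in sync with the list without recomputing it.
    function Multiset() { this.counts = {}; }
    Multiset.prototype.inc = function (x) {
      this.counts[x] = (this.counts[x] || 0) + 1;
    };
    Multiset.prototype.dec = function (x) {            // assumes x is present
      if (--this.counts[x] === 0) delete this.counts[x];
    };

    // The repair functions: run on each change to the list.
    function onListInsert(multiset, x) { multiset.inc(x); }
    function onListDelete(multiset, x) { multiset.dec(x); }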


I feel like we are turning hackernews into lambda-the-ultimate :)

I wrote this really bad unpublished paper once [1] that described the abstracting-over-space problem as a dual and complement of the abstracting-over-time problem. It turns out that, for simple scalar (non-list) signal (reactive) values, the best thing to do is to simply recompute. However, for non-scalar signals (lists and sets), life gets much more complicated: it makes no sense to rebuild an entire UI table whenever one row is added or removed, and so we want change notifications that tell us what elements have been added and removed. However, I've changed my mind since: it is actually not bad to redo an entire table just to add or remove a row, as long as you can reuse the old row objects for persisting elements. If my UI gets too big, I can create sub-components that memoize renderings unaffected by the change (basically partial aggregation).

Now how does that relate to the theList.toMultiSet example? Well, the implementation of toMultiSet could be reduced to partially aggregated pieces very easily (many computations can be, actually), which could then be recombined in much the same way as rendering my UI! Yes, the solution that decrements/increments the count on a specific insertion/deletion is going to be "better", but a tree of partially aggregated memoizations works more often in general; it's easier to do with minimal effort on behalf of the programmer.

I still need to understand their work better, but I approached my work from a direction opposite of algorithms (FRP signals, imperative constraints). I have a lot of catching up to do.

[1] http://lampwww.epfl.ch/~mcdirmid/papers/mcdirmid06turing.pdf


> but a tree of partially aggregated memoizations works more often in general; it's easier to do with minimal effort on behalf of the programmer.

Yes, that's exactly what you get if you do not implement a custom traceable data type (their terminology for a data type that supports repair), provided you write your code in such a way that the memoization is effective. Note that traceable data types do not necessarily need to be compound data structures; they can be e.g. integers as well. E.g. when summing a list of integers to a single integer, if one of the integers in that list gets changed, you do not need to recompute the entire sum, or even a logarithmically sized part of it: you can just subtract the original int and add the new int back in.

Here is also an article that does something related but in an imperative context rather than a functional one: http://research.microsoft.com/pubs/150180/oopsla065-burckhar...


The Bret Victor presentation for those interested: http://vimeo.com/36579366

I can't help but smile while I watch.


This kind of hot-swapping can be achieved in quite a few languages; certainly Perl and Ruby, in addition to JS, and I'm sure there are a dozen more languages that allow you to redefine things at runtime. The difference is that Smalltalk made hot-editing the default.
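In JS it is as simple as pasting this into the console of a running page (greet is a hypothetical function the page already defines); the next call picks up the new definition, and all existing state survives:

    // Runtime redefinition: wrap the old function with new behavior.
    var oldGreet = greet;
    greet = function (name) {
      return oldGreet(name).toUpperCase(); // new behavior, no restart
    };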

Smalltalk is an uncommon language these days, with the exception of its half-descendant, Objective-C.

Ruby borrows Smalltalk's OO semantics pretty much completely. Just sayin'


Small rant ahead. You have been warned.

My favorite system that supports hot-swapping is Linux.

It works with every language, there is an excellent scheduler, processes are protected from each other, there are all kinds of interprocess communication techniques available, you can choose between files and databases for persistence.

I think the lame workflows people have while working within a conventional OS are a failure of imagination and a lack of understanding about how operating systems work. In particular, I suspect that a lot of people overestimate the cost of exec and IPC. Or, for example, do you really appreciate the fact that the operating system keeps file system data in RAM even if you exit the program and start up another one accessing the same file a microsecond later? Image-based persistence is not necessary for responsiveness.

Use exec(), use a database, create the editors that make you productive. I've used Lisp and Smalltalk and I've created a miniature VM/OS running in the browser with processes and persistence and all that. These days I'm more excited about old fashioned Unix techniques.


You can even do it in Java with Eclipse (Notch did it during his screencast), although I don't know the specifics.


Notch used a special programming framework to get there. Many game engines support similar features; the ability to modify the game world while it's running, and you can even do that with C++. The problem is you don't get it for free (framework-transparent hot swapping isn't good enough), and it's only because of special compromises made in the framework that you get liveness at all.


IntelliJ keeps telling me it has that feature, although I've never managed to use it on purpose.


"...I'm sure there's a dozen more languages that allow you to redefine at runtime"

Indeed. There are all the Lisp dialects to start with.

And it's no surprise that Smalltalk had this feature too: Alan Kay (+), one of the key creators of Smalltalk, acknowledged the heavy influence of Lisp on Smalltalk:

http://www.franz.com/services/conferences_seminars/lisp_50th...

(+) Famous quote attributed to Alan Kay: "I coined the term "object oriented". I can tell you I didn't have C++ in mind."


This exactly. Tim Berners-Lee and Roy Fielding both conceived of the web as a giant Lisp machine oriented around the URL / hypermedia concept. HTML, and the subsequent XML, is a crippled form of an s-expression: change the tags into parens, map the functions onto the visual structure, and you have a Lisp. TBL's first web browser in 1991 was a two-way client: you could edit the page and submit it back to the server, and it would publish it. REST is how distributed computing was meant to work in a homoiconic language. And JSON? C'mon, change the colon, bang, it's a Lisp. Lisp is fundamental; the AST is the avatar of all languages.


It only makes sense if you talk about utility webapps (like Twitter). Not everything is a webapp, though. Wikipedia, for example, is just a display of documents. Should it become a fancy webapp? I don't see much benefit in it.

The old page concept is plenty fine in many contexts IMO.


    The Browser — A Lament

    *Binstock:* Still, you can't argue with the Web's success.

    *Kay:* I think you can.

    *Binstock:* Well, look at Wikipedia — it's a tremendous collaboration.

    *Kay:* It is, but go to the article on Logo, can you write and execute Logo programs? 
    Are there examples? No. The Wikipedia people didn't even imagine that, in spite of the 
    fact that they're on a computer. That's why I never use PowerPoint. PowerPoint is just 
    simulated acetate overhead slides, and to me, that is a kind of a moral crime. That's 
    why I always do, not just dynamic stuff when I give a talk, but I do stuff that I'm 
    interacting with on-the-fly. Because that is what the computer is for. People who 
    don't do that either don't understand that or don't respect it.

    The marketing people are not there to teach people, so probably one of the most 
    disastrous interactions with computing was the fact that you could make money selling 
    simulations of old, familiar media, and these apps just swamped most of the ideas of 
    Doug Engelbart, for example. The Web browser, for many, many years, and still, even 
    though it's running on a computer that can do X, Y, and Z, it's now up to about X and 
    1/2 of Y.
ref: http://www.drdobbs.com/article/print?articleId=240003442&...


Wikipedia is a pretty bad example IMO, as it is actually quite an advanced webapp of sorts. More specifically, it's a webapp for authoring, collaborating on, searching and viewing documents.


I can't believe that the fact that Chrome can hot-swap code is surprising to anyone... Have you never been working on an app, opened up the console, changed a variable or function definition and seen the effect in real time?


I'm with you on that. It's surprising how many web developers are just now finding out about this, which leads me to another question: what tools do they use, if not the Chrome Developer Tools?


I imagine a text editor, terminal, browser, and a lot of Alt/Cmd+Tab, F5/Cmd+R, Ctrl+C+up+enter. But I can't say for sure.


Those are general tools. I want to know what tools people use for debugging Javascript that are similar to the Chrome Developer Tools or Firefox's Firebug.


I believe the implication there was that the debugging process is "try, try again".


Haha ah thanks for the clarification cdcarter. That is a process I know all too well when debugging code in Internet Explorer 8.


I read the title and had some first thoughts spring to mind, and didn't reach the same conclusion as the author. While what the article shows is cool, it is limited in reach, and I think you could go further with this analogy.

Treat every web site or address or endpoint as an object. Allow message passing as though the web were simply a single object-oriented system resident on the same computer. The definitions of the APIs are kept as standard, so the system "just works" with any site.

This is, of course, the vision of APIs. But could we do better? Could we make it closer to a true object-oriented system, even simpler to use? Not sure if it would make a big difference, but as we know, sometimes a simple thing that makes connections easier can make a huge difference (e.g. XMLHttpRequest).

Make the web as easy as Smalltalk. For every API, and eventually, every site. Not just the ones we decide to make language-specific libraries for. One library per language just interfaces with "the web" as a fully generic API you can access as easily as any object.
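A sketch of what that could look like (everything here is hypothetical; the names and endpoint are invented): one tiny wrapper per language that turns message sends into HTTP requests, so any site becomes an object you can talk to.

    // "The web as one generic object": message sends become HTTP requests.
    function remote(baseUrl) {
      return {
        send: function (message, args, callback) {
          var xhr = new XMLHttpRequest();
          xhr.open('POST', baseUrl + '/' + message, true);
          xhr.setRequestHeader('Content-Type', 'application/json');
          xhr.onload = function () { callback(JSON.parse(xhr.responseText)); };
          xhr.send(JSON.stringify(args));
        }
      };
    }

    var site = remote('https://api.example.com'); // hypothetical endpoint
    site.send('statuses.update', { text: 'hello' }, function (reply) {
      console.log(reply);
    });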

Just thought I'd share this thought. Any thoughts or ideas?


You just described HTTP. Every URL is an object, and POST, PUT, et al. are the messages.


Check out Seaside, the Smalltalk web framework. It has an edit-in-the-browser mode, where you can edit pretty much anything to make your web app work. Seaside itself is a component-centric framework, not page-centric like most web frameworks. You build independent components, which you place here and there to make your app.


Clamato, also from Avi Bryant, is an interesting approach to in browser development: https://bitbucket.org/avibryant/clamato/wiki/Home


Clamato has been superseded by Amber: http://amber-lang.net/

In particular, check out the tutorial page (Smalltalk in a browser!): http://amber-lang.net/learn.html


Oh, very nice, thanks!


The language is written as 'Smalltalk', not 'SmallTalk'.


I felt this way when I first saw better_errors: https://github.com/charliesome/better_errors

It's a little surprising how much we keep reinventing.


Stuff like that has been in Python for a long time. (Werkzeug)


That sounds like a clone of Rack, one that doesn't support the latest version of Python.


Few that aren't running a charity use Python 3 in production web apps.

Rack <-> WSGI

Werkzeug is just a utility library on top of WSGI that happens to be very nice, with features like those described above.


Another approach to this type of thing worth checking out is Dan Ingalls's Lively Kernel http://lively-kernel.org/


Great to see LK come up here. Yes, the approach of Lively goes towards what is mentioned in the article; right now, however, the client session (what runs in the JS VM in your browser) plays the role that the image does in Smalltalk. We developed a persistence mechanism that will read/write a JS object graph so that the working state of your session can be saved and resumed, just like a Smalltalk image [1].
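
A toy illustration of the serialization idea in plain JS (the actual mechanism in Lively handles far more: classes, DOM state, and so on):

    // snapshot an object graph, cycles and all, to JSON
    function snapshot(root) {
      var objs = [], ids = new Map();
      function ref(v) {
        if (v === null || typeof v !== 'object') return v;
        if (!ids.has(v)) {
          ids.set(v, objs.length);
          var copy = Array.isArray(v) ? [] : {};
          objs.push(copy);
          Object.keys(v).forEach(function (k) { copy[k] = ref(v[k]); });
        }
        return {$ref: ids.get(v)};
      }
      return JSON.stringify({root: ref(root), objs: objs});
    }

    // rebuild the graph later, e.g. in a fresh session
    function restore(json) {
      var data = JSON.parse(json);
      function deref(v) {
        return (v && v.$ref !== undefined) ? data.objs[v.$ref] : v;
      }
      data.objs.forEach(function (o) {
        Object.keys(o).forEach(function (k) { o[k] = deref(o[k]); });
      });
      return deref(data.root);
    }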

We are currently experimenting with distributing LK runtimes across Node.js instances so that a network of active runtimes can be created that communicate and exchange objects.

Note that the core development moved to github [2].

[1] http://www.hpi.uni-potsdam.de/hirschfeld/publications/media/...

[2] https://github.com/LivelyKernel/LivelyKernel


Snap is also worth a look:

http://snap.berkeley.edu/init

(it's a semi-port of MIT's Scratch language).


I recently tried out the binary at squeak.org. It was recommended to me when I described to some people what I'm trying to create (a system for building software that can be used to improve itself).

It was very neat, and so easy to try out. You essentially download it, run it, and it's like a mini-operating system running. You can inspect the system itself and make changes, and then save/load these images. Basically what the article described. There are definitely some very good ideas there. Unfortunately, it wasn't easy for me to figure out how to write some simple programs that print stuff to standard output, and I couldn't really find any samples online. It really makes me appreciate golang.org's Hello World sample right on the front page.


Yeah, Smalltalk and stdin/stdout don't usually go together, unfortunately. Smalltalk is just not designed to be a part of the UNIX ecosystem, since it's a big environment and not a small purpose-built program.

And I'd also recommend Pharo over Squeak at this point, since they have been aggressively improving the Squeak project and creating their own clean modern Smalltalk.

If you want to use Smalltalk for UNIX-style scripting, I'd suggest GNU Smalltalk.


It's not the web. The web is presentation-independent semantic information rooted at stable URLs. Single-page JavaScript is a poorly planned delivery platform for applications with essentially siloed state, and it is steadily displacing the web; no amount of grafted-on functionality can enable the kinds of repurposing and unanticipated reuse we started building before writing off the web as insufficiently shiny.


> Well, the way I (and Roy Fielding) see it, the browser is a stateful agent that just renders HTML to a viewport, and executes JavaScript.

I cringed.


I hate to be picky, but it's Smalltalk, not SmallTalk.


I was expecting to see a comparison between stateful objects and servers, with method calls (message passing) mapping to API calls.

So I was surprised but also disappointed: to anyone who has worked with a REPL, for instance, it's not a big deal to hot-swap pieces of code. Many programmers use Emacs and do that almost daily in their text editor… And that's how the web was first designed (the first web browser, Tim Berners-Lee's WorldWideWeb, let you edit the pages you were visiting, much like a giant, distributed wiki or, given the dynamism of today's web pages and services, much like giant, distributed Lisp machines or Smalltalk images).


An editor that might not be too well known in the web world, Unity3D's editor, has an awesome way to edit code on the fly. Public variables are represented as generalized form controls like sliders and dropdowns. Very nice for the later phases of development. Tweaking the timing and speed of an animation, or the effect of some physics value, can be a pain without being able to change it on the fly. I wish the web tools had a similar setup.

http://docs.unity3d.com/Documentation/ScriptReference/Editor...


And this makes cross-site scripting even more fun than it already is! We can't even make the browser limit sharing across websites securely; what makes you think this edit mode won't be a target?


I'm surprised nobody has mentioned Meteor yet. As far as I understand, developing for Meteor means having a single codebase that erases the distinction between server and client.

Your web pages become live views of the underlying controllers. Modifying state in one client automatically modifies it in another. And I'm pretty sure it supports hot-swapping.

http://meteor.com/
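
A minimal sketch of the shared-codebase idea (the collection and template names here are made up):

    // this file runs on BOTH the client and the server
    Messages = new Meteor.Collection('messages');

    if (Meteor.isClient) {
      // a reactive helper: re-runs whenever the data changes,
      // even when the change came from another client
      Template.chat.messages = function () {
        return Messages.find({}, {sort: {createdAt: -1}});
      };
    }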


While a lot of the comments here are focusing on other aspects of the article, I think the most basic actionable takeaway is that we need a simple method of persisting changes made in Firebug/Developer Tools back to the code. Once mentioned, it seems like a no-brainer: why doesn't this already exist?
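
The minimal version seems almost trivial, assuming the server is willing to accept a plain HTTP/WebDAV PUT for the edited file (the URL and variable name here are hypothetical):

    // push the source you just edited in the devtools back to the server
    fetch('https://example.com/assets/app.js', {
      method: 'PUT',
      headers: {'Content-Type': 'application/javascript'},
      body: editedSource  // the fixed-up code from the browser
    }).then(function (res) {
      console.log(res.ok ? 'saved' : 'save failed: ' + res.status);
    });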


There is something called Firefile that will save your CSS changes made in Firebug, but when I used it a year or so ago it was a little buggy and stripped comments out of the CSS (breaking WP sites).

I'm working on a WordPress editor called WPide; live CSS/LESS editing is something I have planned. When your code editor is part of your website/app, I think there is a lot more scope for improving the editing process.

With the current code on GitHub, I'm in the middle of implementing Git functionality; I'm just struggling with the push/pull side of things, SSH keys specifically.

I did have a concept in place for live editing of CSS without constant round trips to the server, by passing data over cookies. That was some time back; I've since realised HTML5 has much better ways to achieve that communication. Back then I planned to have the CSS editor in a different browser window, but it might be easier to move the CSS editor panel onto the front end as a small panel, like Firebug or the inspector. I've got code completion for PHP and WordPress, so I don't see why CSS can't be auto-completed as well.

I kind of thought creating an editor built with PHP, HTML, JavaScript and CSS would make it easier for people to get involved and create a really good editor that was moldable by any developer, but as yet no one really seems to get excited about a web-based editor; people probably think I'm wasting my time, but I'll plod on…


>I’m not 100% sure on it, but I can easily envision a world where you fix bugs in your website by opening it up in a browser, reading a stack trace, fixing the JS in that same browser and persisting your changes back to the server.

Where does version control fit in this vision?


Assuming you have access to the repo, it can be pushed either to master or another branch of your choice.

If you don't, a fork is automatically created.


Web development is nuts! We need new web tools, and we need to stop trying to fit round pegs into square holes.

Too much effort is wasted working around old obstacles.

JavaScript was meant as a bridge to a Java-based browser. That never happened, and we're just left with the afterbirth.


Tim Felgentreff did some cool work on remote debugging with MagLev. The video is pretty sick: http://blog.bithug.org/2011/09/maglev-debug


You might try looking at http://amber-lang.net/ as a test for your ideas. It's a Smalltalk implementation running in JavaScript.


So you want your live edits to be persistent? What about testing? I think you should first write a test that fails before attempting to fix it...


Writing a test that fails is a subset of what you can do. Pharo (and other Smalltalks) support writing traditional test cases and running them in a test runner. However you can also:

  - write snippets in a workspace, as vincie mentioned.  
  - write code against an API that doesn't exist, run it, have it fail, and then fill in the API and implementation.  
  - write stubs that fail automatically, and then fill them out (similar to above).
The difference is, when things fail:

  - you get to play with the live objects in a workspace.  
  - fill in missing implementation.  
  - fix any errors.
And then much of the time continue execution from the point of failure. No need to restart anything. I simply could not appreciate the variety of options available in a Smalltalk environment until I used it for some fun projects over a few months.

Be warned: you might hate your so-called modern environments once you get back to them. For Objective-C coders there is hope: http://injectionforxcode.com


Test your code in a Workspace before editing your classes. That depends on the complexity of the change you are making, of course. If it is complex, clone the image and test on the clone.


Are you looking for http://amber-lang.net/ by chance?


WebDAV?

Heaven help us…


Why the dislike? It's not a bad protocol.


From what I remember, it's pretty chatty. It appears to be on its way out. Is there any new software that's using it?

WebDAV had varying levels of support in Windows and Mac desktops, but AFAIK the Windows stuff got broken; I'm not sure about the Mac. I think access from a GUI file browser was one of its main use cases in the '90s. If nobody does that anymore, and I don't think they do, then the protocol will need a new app or it will continue to die.


I've used it to bolt file storage on a REST API for enterprise stuff. It seems to be a good fit. I'm even using the Windows WebDAV implementation... working around its quirks is madness: http://www.greenbytes.de/tech/webdav/webdav-redirector-list.... Note that XP SP3 has the port-handling bug even though it's not on that list.



