Hacker News new | comments | show | ask | jobs | submit login
The Erlang Shell (medium.com)
127 points by strmpnk 1101 days ago | hide | past | web | 60 comments | favorite



> When systems have faults due to concurrency and distribution, debuggers will not work.

Those are very tricky indeed; mix threads with pointers and a system becomes haunted. "This one customer noticed a crash, on this hardware, after running for 10 weeks, but... we couldn't reproduce it." "Oh, I suspect there is a memory overwrite or a use-after-free that corrupted the data." People start doubting their sanity -- "I swear I saw it crash, that one time, or... did I? Maybe I was tired..."

Someone (it could even have been Jesper Andersen, the author) said that the biggest performance gain an application gets is when it goes from crashing, or not working at all, to working. And the biggest slowdown is when it goes from working to crashing unexpectedly.

There was talk of 60-hour weeks here before; one of the things that happens at 8pm is people huddled over a keyboard debugging these hard-to-track bugs. Managers and some programmers see it as great heroism, pats on the back for everyone; others see it as reaping the fruits of previous bad decisions, and it is a rather sad sight. It all depends on the perspective.

I guess the point is that one of the main qualities of Erlang is not concurrency but fault tolerance. Many systems copy the actor concurrency patterns and claim "we have Erlang now, but faster!", and that is a good thing, but I haven't heard many claim "We copied Erlang's fault tolerance, and it is even better than Erlang's!". [+]

[+] you can do the same pattern up to a point using OS processes, LXC containers, separate servers, having watchdog processes restart workers, for example.


Nitpick: mixing threads with pointers isn't really the problem; it's mixing threads with shared mutable state.

A concurrency idiom I've gotten a lot of mileage out of is to use threads, without shared state, but with pointers. Instead of shared state, you use a message passing style where you pass pointers over queues. Threads only mutate data that they are explicitly handed via a queue. After putting an object on a queue, they NULL out their local reference.

It's basically like Go's goroutine idiom, but with C/C++/Python, real OS threads, and a thread safe queue (which is all a channel is). There is also some relation to the SEDA architecture.
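A minimal Python sketch of that idiom, with `queue.Queue` standing in for the channel (the producer/consumer names are illustrative, not from any library):

```python
import threading
import queue

# "Threads + queues, no shared state": each message has exactly one owner.
def producer(out_q):
    for i in range(3):
        msg = {"id": i, "payload": f"item-{i}"}
        out_q.put(msg)
        msg = None  # drop our reference: ownership has moved to the consumer
    out_q.put(None)  # sentinel: no more work

def consumer(in_q, results):
    while True:
        msg = in_q.get()
        if msg is None:
            break
        msg["payload"] = msg["payload"].upper()  # safe: we are the sole owner
        results.append(msg)

q = queue.Queue()
results = []
t1 = threading.Thread(target=producer, args=(q,))
t2 = threading.Thread(target=consumer, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
print([m["payload"] for m in results])  # → ['ITEM-0', 'ITEM-1', 'ITEM-2']
```

In C/C++ the `msg = None` line would be the "NULL out your local pointer" step; the structure is the same.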

The logical structure of such a program is not really that different than an Erlang program, I imagine. It just works a little better with existing C/C++ codebases, and is potentially faster in some cases.

It has a completely different character than spaghetti written using threads and shared state, and can be made extremely reliable. Especially because the explicit channels provide a hook for testing black box behavior.


> Instead of shared state, you use a message passing style where you pass pointers over queues.

If you don't have a queue library and don't want to risk getting the locking wrong, you can get one from pipe(2). While simultaneous writes/reads are in theory permitted to be interleaved, the implementation would have to be actively insane to interleave anywhere other than page boundaries.

...I actually did this when I was at university, for the first class that covered locking and concurrency. Amazingly enough, I didn't fail that assignment for delegating what we were supposed to be learning to the OS.

> The logical structure of such a program is not really that different than an Erlang program, I imagine. It just works a little better with existing C/C++ codebases, and is potentially faster in some cases.

> It has a completely different character than spaghetti written using threads and shared state, and can be made extremely reliable.

Maybe this is why I don't get all the C++-hate? The things that make code unreliable or thread-unsafe tend to also make it hard to read and think about (even without threads), so I just find some other way that doesn't give me a headache.

Interestingly code that doesn't give me a headache tends to have a strong resemblance to functional-style code, even tho actual pure functional code also tends to be mildly headache-inducing.


Yup, pipe() is actually a great, minimal solution. POSIX guarantees PIPE_BUF to be at least 512 bytes, so you are guaranteed atomicity for any size pointer. I think on Linux the unit of atomicity is 4096 bytes, which is a page.
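A hedged sketch of the pipe-as-queue trick (POSIX-specific; list indices stand in for the pointers, since writes of up to PIPE_BUF bytes never interleave):

```python
import os
import struct
import threading

# pipe(2) as a lock-free queue of fixed-size tokens. An 8-byte write is
# far below PIPE_BUF (>= 512 on POSIX), so it is atomic even with
# multiple writers.
r, w = os.pipe()
objects = ["page-a", "page-b", "page-c"]  # the "pointers" are indices here

def worker():
    for i in range(len(objects)):
        os.write(w, struct.pack("Q", i))  # one atomic 8-byte token

t = threading.Thread(target=worker)
t.start()
received = []
for _ in objects:
    (i,) = struct.unpack("Q", os.read(r, 8))
    received.append(objects[i])
t.join()
print(received)  # → ['page-a', 'page-b', 'page-c']
```

Because every write is an atomic multiple of the token size, reads always land on token boundaries and no explicit locking is needed.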

Yeah, there are a million ways to use C++. To be fair, most people have their architecture forced upon them, rather than creating it from scratch. And the C++ world does have a culture of spaghetti with shared mutable state and threads. In Erlang you have the opposite culture. So I can understand the association that people make with a language, even though it's more a matter of culture.

In addition, C++ doesn't even have a thread safe queue in the STL (although a coworker pointed me to a draft of one for a future version of C++). pipe() is a good solution I would use, but not portable to other OSes.


That is true. One can build an almost shared nothing architecture in any of the languages supporting thread safe queues and threads.

It is a continuum. One could write it with shared mutable state too, using locks. The problem is it is easy to mess up.

In C, a large program could load modules, some of which are thread safe and some of which are not. One module could call an initialization routine (say curl_global_init()), and then another part of the system could end up calling it as well -- but it is not thread safe and should only be called once. The sharing is not intentional; it just happens because everything runs in the same memory space.

Someone likened that to running your code on Windows 3.1, where your Command & Conquer game would crash and take down the word processor with it. Neither intended that; it happened by accident, as a bug. Sometimes running your application like a Windows 3.1 system is acceptable, sometimes it isn't. It depends on the problem.


Yeah, but the beauty of this approach is that it always makes architectural sense to put a given library in a single stage, and in that stage only. That library is enclosed in an "actor", and it communicates with the other stages/threads via an application-specific message (not using data types from the library; you want the loose coupling).

If the library has non-reentrant code, then you can't have more than one thread for the stage -- you can't parallelize it. But it generally won't lead to correctness problems.

curl is actually a great example, because it has an event loop for parallelism (rather than threads). So you would run a single thread for curl, because you wouldn't want more than one thread anyway.

Say you are writing a web crawler. From the network/curl stage, you just pass off pointers to blocks of memory to parsing threads. Parsing threads will be CPU bound so you will likely want to run instances of that loop in multiple threads, and you will be able to do it with no problem, since they don't depend on the curl library. They just take in blocks of memory and output some data structure to another queue.
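A rough Python sketch of that staged layout, with stand-in fetch/parse functions instead of curl and a real parser (all names here are illustrative):

```python
import threading
import queue

def fetch(url):          # stand-in for the single-threaded curl stage
    return f"<html>{url}</html>"

def parse(body):         # stand-in for CPU-bound parsing
    return body[6:-7]    # strip "<html>" / "</html>"

def network_stage(urls, out_q, n_parsers):
    for url in urls:
        out_q.put(fetch(url))   # hand the block of memory off
    for _ in range(n_parsers):
        out_q.put(None)         # one shutdown sentinel per parser

def parser_stage(in_q, results):
    while True:
        body = in_q.get()
        if body is None:
            break
        results.put(parse(body))

urls = ["a.example", "b.example", "c.example"]
bodies, results = queue.Queue(), queue.Queue()
n = 2  # parsing is CPU bound, so run several parser threads
threads = [threading.Thread(target=network_stage, args=(urls, bodies, n))]
threads += [threading.Thread(target=parser_stage, args=(bodies, results))
            for _ in range(n)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(results.queue))  # → ['a.example', 'b.example', 'c.example']
```

The network stage stays single-threaded (matching a non-reentrant library), while the parser stage scales to as many threads as you like.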

This is also a good way to compose say the curl event loop with event loops from other libraries (GUI libraries, perhaps). Hence the relation to SEDA (http://en.wikipedia.org/wiki/Staged_event-driven_architectur...).


I read your discussion and really get all the points, and still I just think "don't make me think." The thing with Erlang is, I don't have to care about threads, pointers, libraries, re-entrant code, thread safety etc. I can just write a couple of actors, who do one thing each (in parallel, because Erlang) and they can talk to each other, share some data if they want. The only thing I need to keep in mind is, who has what data (if I need it).

I can hook up to the Erlang runtime (in local, staging, production, wherever) and talk to these actors. I can ask them "hey, what's your state now," "who are you talking to?" Sure, Erlang is slower in some cases, but for the value you get, I think it is priceless in many cases.


Sure, I wasn't saying not to use Erlang. Just saying that it is very possible to use threads and pointers in a sane (and Erlang-style) way.

I am a big fan of interpreted languages and use them as my default. The situation I was describing was basically the only time I've ever rewritten in C++ for speed! It just doesn't happen that often.

But I do wish the interpreted languages like Python, node.js, and Erlang all had better and more consistent C APIs (more like Lua's). C itself is not that hard, but the C APIs definitely put people off.

Actually I wonder for Erlang, with the web crawler example, would you have to copy entire web pages if you wanted to pass them off from a "network actor" to a "parsing actor"? The threads + pointers solution easily avoids that, while retaining modularity.


Yes, Erlang has a "share nothing" concurrency model. You would need to send the data over to another process. There is one exception, which would come into play here, and that is binary data. Normal Erlang data ("Erlang terms") such as lists, tuples, strings, integers etc. are always copied, but large binaries (raw binary data, often representing string data) are just referenced. If you download a web page, you'd save the whole content as a binary and send it over to the other actor, and it would feel just as if it were copied, because there is no difference compared to the other data types. Under the hood, Erlang just passes the reference and handles everything for you (reference counting, garbage collection etc.).

Also, when you split, or reference sub binaries, these become just pointers into the original binary data.
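A loose Python analogy for the zero-copy referencing only (Erlang binaries are immutable, unlike a bytearray): memoryview slices point into the original buffer rather than copying it, much like sub-binaries:

```python
# A memoryview slice references the underlying buffer; no bytes are copied.
page = bytearray(b"<html><body>hello</body></html>")
body = memoryview(page)[12:17]        # "sub-binary" into the page
print(bytes(body))                    # → b'hello'
page[12:17] = b"HELLO"                # mutating the buffer is visible
print(bytes(body))                    # → b'HELLO'
```

The second print shows the slice is a view, not a copy; in Erlang the same sharing happens, but immutability makes it invisible to the program.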


It wouldn't make sense to have a network actor and a parsing actor. Because these two tasks (downloading the data and parsing it) are not concurrent, you should instead have a crawler actor that downloads -> parses -> stores.


I'm another recent Erlang convert (though I'm still trying to get used to the paradigm and ecosystem). There are a few concepts that haven't quite clicked for me.

I come from a Rails background. While Erlang's hot code reloading and the ability to attach to running nodes is cool, I am not quite able to connect the dots on how this is useful. In Rails, a process is short-lived by design (the lifecycle of a request). Attaching to a process is useless, but the Rails console essentially gives you an isolated process to inspect production code and data. I don't see how extending that to an existing process would be useful.

So, challenging my assumptions, perhaps short-lived processes aren't the best approach for a web service. Perhaps that's the paradigm in Rails because Ruby lacks the tooling to make a long-lived process easy to maintain. If a long-lived process is as easy to maintain as short-lived processes, then suddenly new architectures open up. How are these architectures different? What are the benefits? What are some examples?


> In Rails, a process is short-lived by design (the lifecycle of a request)

Yes and no. Rails processes are so huge that they are loaded up and kept around between requests. Erlang processes (which are not unix processes!) are much smaller and lighter. One Erlang program can handle tons of concurrent, always open requests, whereas Rails just isn't cut out for that.

As to what Erlang is good for, I think its sweet spot with web stuff is where you need a lot of always open connections, such as with web sockets.

I'm a fan of Erlang, but for most SaaS kinds of startuppy things, am not sure it's a good fit: you get so many more tools packaged up and easy to use for you with Rails that unless you know Rails does not work for you, I would almost always choose Rails.


When you are running in production and something is not right, you can attach to the console and see what is going on. This is of course not something that you should do often, but can be a real life saver.

Simulating something that is designed to work with thousands or even millions of users at the same time is pretty much impossible. Hot code loading and attaching to console gives you the option to see what is going on in real time and even fix it if you have to.


I get that. For persistent connections, it's a life saver. But for a synchronous web service, every connection comes and goes quickly. What do you gain from attaching to the console in that situation that you wouldn't get from the rails console?


If you only have a socket dispatcher and a quick request processor that serves a page and dies, then that code path is quite minimal.

Now you should probably have a supervisor. Your connections could be handled by a connection or socket pool. You could have a database driver. There could be a background processing queue. Now things get interesting, not all those things get torn down and recreated on each request.

Even for short-lived requests, you could attach to a live system and trace one of them, or set a breakpoint to see what it does in "slow motion", so to speak.

One can use Erlang for quite a bit without ever touching distribution (as in distributed nodes across many systems forming a single cluster). But once you do use it, the shell can help you log in to any other node, transparently.

The ability to reload code at runtime is not something many systems can boast. It's something you might only reach for once you've found the problem, but it is one more thing you can do.

You can diagnose load and system resource utilization as well. If you have a fully-featured install (say, downloaded from Erlang Solutions), open the shell and type observer:start(). to start the observer. It will launch a GUI window (make sure to have X forwarding if running via SSH). There you can see runtime stats, the application tree, the process list, and ETS tables. For each application in the tree you can click on a process and see its state.


Thanks! That makes sense. I see it now. For the benefit of other Rails devs looking in: think about debugging issues with unicorn or passenger on a live system without interrupting service. Then think about being able to fix the code without interrupting service. That's powerful. That's not something I do every day, but I can understand why that's powerful.


Why should you not do it often?


If you find yourself often having to manually access the console on a production system, clearly something in your testing and deployment methodology is horribly wrong.


In Erlang it's not idiomatic to do it super often, but when a bug does pop up in production, as they tend to do in every known program, it's pretty sweet to be able to look at it and fix it without dropping state or connections.


If you want to develop web pages, don't do Erlang. If you want to do web services, do Erlang. I've used both Ruby (Rails) and Python (trying to do concurrency with Tornado, greenlets, gevent) and they just fell flat. You can do it, but it is incredibly messy and unstable (because there's so much work involved).

In Erlang, you would typically answer each web request in a new actor, just as Rails runs, say, N threads serving one request each. Of course, in Erlang the number of actors can vary, which means your system will be more responsive and you'll have lower average latency. What differs most, however, is that in Erlang all those actors live inside one VM, and you can have central actors managing things, collecting statistics, keeping global state, talking to databases / other services. This, I think, is the core strength of Erlang compared to conventional single-threaded / GIL systems.

A more concrete example would be dynamically showing the current request rate on a web page. You can just create an actor that gets a message from every other actor whenever they handle a request, and keeps track of how many such messages arrive per second. Then, when rendering that web page, you just ask that actor for its current value. Simple, beautiful, dynamic. No need for a database or any other central storage.
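That stats-actor idea can be sketched with a thread and a mailbox queue in Python (the message shapes here are made up for illustration):

```python
import threading
import queue

# One thread owns the counter; everyone else talks to it via messages.
def stats_actor(inbox):
    count = 0
    while True:
        msg = inbox.get()
        if msg == "hit":
            count += 1
        elif msg == "stop":
            return
        else:                      # ("get", reply_queue)
            _, reply_q = msg
            reply_q.put(count)

inbox = queue.Queue()
actor = threading.Thread(target=stats_actor, args=(inbox,))
actor.start()

for _ in range(5):                 # request handlers report in
    inbox.put("hit")

reply = queue.Queue()
inbox.put(("get", reply))          # the page render asks for the value
value = reply.get()
print(value)                       # → 5
inbox.put("stop")
actor.join()
```

Because the mailbox is FIFO, the "get" is processed after all prior "hit" messages, with no locks anywhere in sight.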


> If you want to develop web pages, don't do Erlang.

I'm not sure I agree. There's https://github.com/ChicagoBoss/ChicagoBoss and quite a few other frameworks (https://github.com/ChicagoBoss/ChicagoBoss/wiki/Comparison-o...) and there's also Elixir and some frameworks for it (https://github.com/dynamo/dynamo).

It's of course nowhere near the ease of development of simple web pages in Django or Rails, but it's not like you need to reinvent everything from scratch either. And you gain the ability to scale almost without any effort. It's a tradeoff, of course, and using Erlang for every web page you build from now on would probably be overkill. But there are web sites which are not services yet, which could still benefit from Erlang greatly. And I think (and hope) that the initial overhead of Erlang will get smaller and smaller over time as new frameworks emerge, and that someday deploying your blog on top of Erlang will become a viable choice :)


> it's not like you need to reinvent everything from scratch either

No, not everything, but lots of stuff.

Recent example: I had to fix the Postgres database driver to better handle queries like

    "select * from foobar where id = any($1)",
      [uuid1, uuid2, uuid3]
Also, ChicagoBoss is a bit rudderless at this point in time - Evan Miller has moved on to other things, and the guy who briefly took his place has as well. My own branch of it on github seems to be the only one getting any attention lately, and that's just small fixes here and there:

https://github.com/davidw/ChicagoBoss


>In Rails, a process is short-lived by design (the lifecycle of a request).

I'm not sure if this is by design, or a limitation of Ruby.

>short-lived processes aren't the best approach for a web service.

Not for me - I'm building all of my web services in Erlang these days, and there's little to no relationship between the life of most processes in the service and the life of requests. Although each request is usually serviced by its own process, that process may interact with other processes that have been alive for as long as the application has been running. For example, my RAM caching strategies are usually written as part of the application, not put in front of it. Makes pseudo-ESI easy to hand-roll.

Also, short lived processes may have been spawned by long lived processes, so changing how they're created can be done without taking anything down.

I find this style so much more performant than the one thread per request thing from Ruby, Perl, PHP et al. It's very easy and natural to minimize the amount of time spent in the db (which is where I'm usually bound.)


This is exactly the type of response I was looking for.

A few questions and comments:

1. RAM caching - I can see a big benefit to keeping cache in Erlang. No marshaling of data to and from memcached, for example. How do you handle eviction?

2. Avoiding the db - I'm going to guess this ties in with RAM caching. Are you doing a write through cache or something like that?

Thanks for your response, it's helping it all to come together for me.


Since you recently started out, may I ask you what good resources you found to get started with Erlang ?


http://learnyousomeerlang.com/ seems to be the standard reference. I find it most amusing in both style and name (Learn You Some Erlang -> LYSE -> Lies) as well as informative.


Besides 'Learn You Some Erlang', Joe Armstrong's new 'Programming Erlang' is very good, too: www.amazon.com/dp/B00I9GR4TW/


This is a good book to start with:

http://learnyousomeerlang.com/

Free on the web, but buy a copy - it's worth it.


curious, which type of projects do you plan to develop in Erlang? especially after rails.


That's the question I can't answer. I have some problems I want to solve. If I build them in Erlang the way I would on Rails, then there's not much gain. I'm trying to figure out how Erlang will allow me to solve those problems differently. One problem is how to handle authentication across our private and public APIs. With Rails, I've got a db with credentials and tons of memcached running to minimize latency (since every endpoint needs to check auth). How would a long running application in Erlang handle that differently? How is that better than what we currently have?


> As an Erlang programmer I often claim that “You can’t pick parts of Erlang and then claim you have the same functionality. It all comes together as a whole”.

I am guilty of this. I am relatively new to Erlang programming and I have skipped over some of the features that make the language great. For instance, I have neglected to learn both supervision and hot code reloading.


Hot code reloading I'd personally grant you as an exception; it's a maintenance tool, not a development tool, and you can largely ignore it until you want it, if you're adhering to OTP principles.

Ignoring supervision, however, is...not wise, if you have -any- sort of reliability considerations (even ones as weak as "I'd prefer it to remain up"; if it's just a one off thing that you'll manually execute again if it fails, meh, fine).

Supervision is really, really easy to use, at least for the base use case (creating a good hierarchy, so that processes restart the processes that should be restarted, is an architectural consideration that can be a bit hairy), and it's truly heartening to grep the log of something you've been running for a while, find crashes that point at issues you want to fix (even simple stuff, like sanitizing inputs), and see that the system is still running as though nothing happened.


Well you're relatively new :). It takes time to absorb an entire language's ecosystem. As long as you take it upon yourself to acquaint yourself with those features, things will be fine.

The biggest thing to get on your initial survey is minimal awareness, so that you know a feature exists when you encounter the problem it was originally built to solve.

Supervision though, is fundamental to OTP and Erlang. It's certainly easy to learn so I would consider that a low hanging fruit for you.

Hot code reloading is a consequence of the continuation passing style that is prevalent in every Erlang process and easy to understand too (although employing it in production can be a bit trickier).


@lostcolony @banachtarski thanks guys. I will start with supervision first then. See if I can make use of it in some of my existing code.


I find it funny that the author willingly gave up static typing to get Erlang's process control -- I made the opposite switch as I grew tired of the overhead and attrition of modeling a complex domain in Erlang. I think the Erlang VM is easily my favorite place to live as a programmer, but I do wish Erlang-the-language gave more.


What were your thoughts re: Dialyzer?


It's nice and once I learned how to use it I tried to be very consistent with it. The success types provide good documentation, but I found that the types still felt very flat. The fundamental component of Erlang abstraction is the process and the process tree, so your types reflect that and don't end up composing well.


(To clarify: rather your types end up composing much like messages between actors. That's a very fine way to compose things, but it's still just one way)


> And all this without service interruption. And you get all this for free, just because you picked Erlang as the implementation language.

GDB or even VS's debugger can do a lot of magic even if you use C. Until absolutely necessary, leave debugging information reasonably high and optimization reasonably low or off and you'll get an amazing, dynamic environment to inspect a live C system.


Reading about "correct software" as if it were a realistic goal makes me itch. Sure, we should be striving for as few bugs as possible, but any non-trivial application will never be made "correct", if only because it'd be impossible to write the full specification to compare it to.


Oh nice. Now they almost have a Common Lisp environment. Getting closer to Greenspun's tenth rule.


Is it actually possible to do hot code reloading while preserving the state of the running system in Common Lisp? I'm genuinely curious, I kind of had this impression that it was a feature fairly unique to Erlang.


> Is it actually possible to do hot code reloading while preserving the state of the running system in Common Lisp?

Yes.

> I'm genuinely curious, I kind of had this impression that it was a feature fairly unique to Erlang.

It's not. IIRC there's even a version of the JVM with code reloading.

What's interesting about Common Lisp is that not only can code be dynamically compiled into the running image but class definitions can be changed and live instances will be updated without stopping the program or reloading anything. There are a tonne of features in Common Lisp like this that are geared towards robust, maintainable software.


> It's not. IIRC there's even a version of the JVM with code reloading.

I've seen some info about the JVM implementation, but I could've sworn it wasn't able to preserve state while hot swapping. I haven't dug in to it though, so it's totally possible it handles that.

> There are a tonne of features in Common Lisp like this that are geared towards robust, maintainable software.

I'm a fan of Lisp in general, though my experience is limited to a tiny bit of Scheme and experimenting with Clojure. I'm glad to see that stuff like this is available in at least one of the Lisp variants! Question for you (if you program in Common Lisp): What are the main selling points? I spend a lot of time already with a couple of languages, Elixir/Erlang (for fun), C#/F# (for work) - what is it that would make me want to switch from one of those to Common Lisp?

Thanks!


BTW, a very Erlangish actor system, complete with hot code swapping (while preserving state), is available for Clojure and is called Pulsar (all open-source): http://docs.paralleluniverse.co/pulsar/


In no specific order:

1. Configurable compiler. You can tell the compiler where it's safe to ignore types and where to be strict about types. For the battle-hardened critical sections of code turning off run-time type checking can be a pretty big win performance-wise.

2. Extensible compiler. Macros are like tiny compilers. It's possible to create DSLs for the programmer to work with a problem in terms of nouns and verbs that make sense for their task while still compiling to VHDL for your application. Common Lisp becomes the substrate on which you build the language you need to solve your problem.

3. The best error handling system I've seen in any language. Signals, handlers, and restarts allow higher-level code to choose an appropriate strategy for recovering from errors under various conditions without losing state and restarting the computation.

4. Top-class introspection tools. The JVM has been able to do some of this stuff for a while but with CLOS and the error handling facilities available Common Lisp makes this so much better. You can inspect any object, change values on slots, trace any function, step through the entire stack, and recompile code into the running image. There are nice tools written on top of these facilities which provide some of the most elegant debugging sessions I have ever encountered. Stop your program? Why? I can connect to a running image, inspect the running program interactively, and compile the fix into it without restarting. Even over SSH directly from my IDE.

And that's just the stuff in the specification. I'm a fan of the newer ASDF systems for defining systems and their dependencies (implicit file-system structure for defining module composition? No thank you). There are some really solid libraries available (iolib, bordeaux-threads) and a very nice FFI for interfacing with alien code. And various implementations provide interesting extensions depending on what you need. Clozure Common Lisp, for example, has really good native Cocoa bindings. SBCL has a very slick compiler for generating fast code. Most of them provide threading extensions and there are libraries which abstract away the differences if you're writing a library that depends on threads.
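Point 3 above can only be loosely imitated outside CL (Python unwinds the stack on exceptions, so this sketch fakes the protocol with a handler registry rather than real restarts), but it shows the shape: low-level code offers named restarts and a higher-level handler picks one, and the computation continues in place:

```python
# All names here are invented for illustration; this is not CL's actual
# condition system, just the handler-picks-a-restart protocol.
handler_stack = []

def signal_condition(condition, restarts):
    choice = handler_stack[-1](condition)   # higher-level policy decides
    return restarts[choice]()               # run the chosen restart here

def parse_entry(raw):
    if not raw.isdigit():
        return signal_condition(
            f"bad entry: {raw!r}",
            {"use-zero": lambda: 0, "skip": lambda: None},
        )
    return int(raw)

handler_stack.append(lambda cond: "use-zero")   # policy installed up high
parsed = [parse_entry(x) for x in ["1", "oops", "3"]]
print(parsed)  # → [1, 0, 3]
```

The crucial CL property this can't reproduce is that the restart runs in the dynamic context where the condition was signaled, so no partial work is thrown away.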


Thanks for breaking it down! I'll have to investigate further, but there is a lot here to like.


As someone who's written a bit of Common Lisp, I can say that the macros are a pretty awesome feature to use. Common Lisp has the advantage over Clojure and Scheme that the SBCL compiler is incredibly fast; it can come close to C#/F# on Mono for some tasks, in spite of Lisp being a dynamic language. It also compiles code and expands macros much faster. It's got more power than the other lisps, but it's also less elegant, so make of that what you will.


> I can say that the macros are a pretty awesome feature to use

Erlang has them in the form of parse transforms and Elixir has them too and makes them easy to use. Not as easy as sexp based languages, but almost.


There is also LFE (Lisp Flavored Erlang) [0] which gives lisp style macros and some other goodies to Erlang

[0] http://lfe.github.io/


I use Common Lisp and Scheme at work, and have written and maintained several systems that relied heavily on class redefinition and run-time code loading. They're great. OTOH, I feel Common Lisp's concurrent process handling is rather weak, and I envy Erlang for that. (Not that CL can't do the trick, but what implementations offer is usually fairly low-level, and I had to roll my own higher-level IPC.)

For example, how well do CL's class redefinition and instance reinitialization work with preemptive threads? It isn't obvious how each implementation addresses this issue from skimming the docs, so I'm curious whether anybody has direct experience. When I implemented CLOS/MOP-style OO in my Scheme, I ended up using a global lock during class redefinition, since it needs to modify lots of places. It's MT-safe now, but I'm still wary of running class redefinition on a busy system written in my Scheme while it is handling requests.


> there's even a version of the JVM with code reloading.

Yes; because of Java's extensible classloading mechanism, all you have to do is drop a war/ear into the 'deploy' folder of your application server. New files are deployed, rewritten files are redeployed and deleted files are undeployed.


Sure; any existing connections continue to be serviced by the old code, and new connections are picked up by the new code.

If you need more elaborate lifecycle management of components as they are upgraded, then maybe OSGi could do that.

Erlang's approach really appeals to me though -- not only does the code get upgraded, but it happens in cooperation with the message loop so you don't miss a step, and you also get a callback to migrate your actor state.


Is there a mechanism to migrate existing data to the new code?


If you have external state stored in say a database, I think you have to control your deployments to migrate the schema before the model objects can be deployed.


So no.

One of the beauties of Erlang's hot code loading is the ability to migrate internal application state. An example:

I'm storing a lookup table going from key to value as a tuple {key, value}. With a change going out, I'm going to start doing {key, {value, other-value}}. In Erlang there's an easy pattern in place to do that. Just provide a function that goes from {key, value} to {key, {value, sensible-default}} and as part of the code reloading process that function will be run before the new code starts executing.
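In Python terms (purely for illustration; Erlang's actual mechanism is the code_change callback run during the upgrade), that migration function is just:

```python
# Upgrade hook sketch: rewrite old-format state before new code runs.
# {key: value}  ->  {key: (value, sensible_default)}
def code_change(old_state, sensible_default="none"):
    return {k: (v, sensible_default) for k, v in old_state.items()}

old = {"a": 1, "b": 2}
new_state = code_change(old)
print(new_state)  # → {'a': (1, 'none'), 'b': (2, 'none')}
```

The point is that the transform runs at the moment of the swap, so every process wakes up holding state in the format the new code expects.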


You may have misunderstood. An ear file can contain multiple deployment units. You can have a 'model' deployment unit that depends on a 'migrate' deployment unit, which forces the app server to deploy the migrate unit first. You can have code there that migrates the tables and fills in default values before your model entities become available for use. From your comment this seems to be the same as what Erlang does in your example, unless you are claiming that Erlang's implementation is superior -- which could be true, since I have never written any non-trivial programs in it.


I think the difference here is that the migration of internal state happens the moment the module is swapped out in memory. Erlang doesn't have mutable state, so state is instead passed around within a module. When a fix needs to be deployed, the two versions (old and new) run simultaneously, any new processes immediately run the new code, old processes call a `code_change` function which takes the current state within that process, performs any transformations required for the new code, and then the old module code is swapped out for the new.

That's not at all a perfect explanation of the process, but it's more or less useful enough to describe the difference between the two platforms.


An interesting example of Erlang hot code reloading in embedded system: http://www.youtube.com/watch?v=96UzSHyp0F8


One loads and replaces incrementally in Lisp.



