Hacker News
Clojure and the technology adoption curve (juxt.pro)
98 points by Ernestas on Nov 10, 2015 | 62 comments



It is a myth that you have to be a genius programmer to pick up Lisp dialects like Clojure. I think it's the opposite: languages that are supposedly "easier to learn" than Clojure (Scala, or even Java itself, for instance) carry a higher cognitive load, and that complexity is accidental rather than essential to whatever problem you are solving.

The hard part of Clojure really boils down to one thing: you do not have an assignment operator. Once you realize you must program without one, the question becomes: how? Answering that question in concrete cases is your only real problem.
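A minimal sketch of one common answer (the function name is illustrative): where an imperative loop would mutate an accumulator variable, the evolving value is threaded through reduce instead.

```clojure
;; No assignment anywhere: the accumulator is a value passed
;; from one step of reduce to the next.
(defn sum-of-squares
  "Sums the squares of a collection without mutating anything."
  [xs]
  (reduce (fn [acc x] (+ acc (* x x))) 0 xs))

(sum-of-squares [1 2 3])  ;; => 14
```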

Other things, like the parens, well, you stop even noticing them after your first few hours. When you first start, just move the paren one word to the left from where you're used to and you're good to go. The syntax after that is so simple, you will find it liberating.


For myself, there are a couple things that make Clojure difficult. I believe they all stem in part from the language being simple.

First, there seem to be at least two ways to do things: the verbose way and the succinct way. This creates more things beginners have to memorize.

Second, so many :keywords. Presumably some function/macro needed to be made more flexible and/or more succinct for certain circumstances, so magical (from this beginner's POV) :keywords were added that change how it interprets its arguments. Clojure functions seem more like Unix commands than like functions in most programming languages I'm used to.
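A hypothetical sketch (the function and its flags are made up) of the style this describes: destructured keyword arguments make a function behave like a Unix command with flags.

```clojure
(require '[clojure.string :as str])

;; :upper? and :prefix act like command-line flags with a default.
(defn render
  [text & {:keys [upper? prefix] :or {prefix ">"}}]
  (str prefix " " (if upper? (str/upper-case text) text)))

(render "hello")                           ;; => "> hello"
(render "hello" :upper? true :prefix "$")  ;; => "$ HELLO"
```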

And finally, an imperative programmer's tendency to create long functions. As a beginner, my brain starts to shut off once I encounter a Clojure function that reaches 5-10 lines. But 5-10 lines in an imperative language is pretty short.

Clojure is a simpler language, but because of that it doesn't benefit from the syntax highlighting more complex languages may have. This makes it more difficult (for me) to parse Clojure programs of similar length. Of course a Clojure program of similar length can do more than an imperative one, so it should probably be split into more functions, which would make it easier for me, as a beginner, to read.

I feel like people tout the succinctness of Clojure as a benefit because you can write shorter programs. I've come to think it's a benefit because you can section off more of your code into descriptive functions without exploding your code length, which makes the program easier to read. When I start to use that succinctness to write less code, it quickly becomes unreadable if I revisit it in a month.
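One way to read that point (all names here are illustrative): the same logic, first as one dense expression, then sectioned into descriptive functions.

```clojure
(def users [{:first "Ada"  :last "Lovelace" :status :active}
            {:first "Alan" :last "Turing"   :status :inactive}])

;; Dense one-liner: short, but opaque a month later.
(map #(str (:first %) " " (:last %))
     (filter #(= :active (:status %)) users))

;; Sectioned version: barely longer, and each name documents itself.
(defn active?   [user] (= :active (:status user)))
(defn full-name [user] (str (:first user) " " (:last user)))
(defn active-names [users] (map full-name (filter active? users)))

(active-names users)  ;; => ("Ada Lovelace")
```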


Well I think you're absolutely right! There was a post not too long ago about creating "friendlier Clojure" that was all about addressing this problem. I find that I write the code first with pure "intellectual will" until all my tests pass. Then I go back and try to simplify, pull out functions into smaller functions and remove more complex logic.

As far as syntax highlighting is concerned, I find that using emacs helps a lot here. This is my `emacs.d` config.

https://github.com/nickbauman/emacs-dot-dee

This allows structural navigation for Clojure code. Good luck.


> you do not have an assignment operator

Except that you do have assignment. Even more so: every def is a Var (defn definitions themselves are Vars that can be redefined, in true Lisp tradition), you also have atoms, and you can use mutable arrays or any mutable collections you like, with people doing plenty of that. The emphasis on its simplicity is also misleading. For example, Clojure developers pride themselves on how Clojure does not do OOP, except when it does, of course; there are plenty of examples in Clojure's standard library, starting from really basic things such as ISeq.
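A small sketch of the mutability being pointed to here: Vars can be re-defined and atoms updated in place.

```clojure
;; Re-defining a Var is permitted; the second def wins.
(def counter 0)
(def counter 1)

;; Atoms are explicit, in-place mutable references.
(def state (atom {:hits 0}))
(swap! state update :hits inc)
@state  ;; => {:hits 1}
```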

And this actually confuses beginners who read introductions such as yours, and I think it's doing Clojure a slight disservice, especially because this isn't defining what Clojure is or explaining why it is awesome.

I do agree with the sentiment. Just the other day I was trying to understand a piece of C# code that was written in a classic style, mutating variables and arrays in place to calculate something that could have been described as pure and very understandable expressions. And oh my god, it's as if I forgot how that was and I hated every minute of it.


Not in any way that matters to people who use the assignment operator in other languages. Of course what you're saying is correct but pointing this out to beginners is like explaining that the steering wheel of the car can be removed and used as a lethal weapon. It's pedantic and misleading.

I don't say and I don't see others saying "Clojure does not do OOP". If you notice this feel free to point this out; even ask me for support. Our community should be exemplary of keeping these things consistent. We all need to do our part.


The tradeoff for anything "simple" is that you are then left to pick up the pieces (with libraries or your own code) if you want to do anything "complex".

As an example: C doesn't have the complication to the language of having "built in" types for hashes or lists. This makes the language easier, since you don't have to learn the extra syntax and grow the mental model of how they were implemented. On the other hand, if you need a hash or list - you need to create your own (or adopt someone else's implementation via a library).

C is a simple and powerful language, but there's so much you have to do yourself, that it's easy to get something wrong.

Clojure (and lisps in general) hide their complexity in macros. This level of abstraction can be great, until you have to dive into the macro and see exactly how it's manipulating your code. Or write your own and troubleshoot the resulting dynamic code.
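When that dive is needed, macroexpand is the standard tool for seeing exactly what code a macro produces; for example, it shows that `when` is just sugar over `if` and `do` (the symbols here are placeholders).

```clojure
;; Expand the macro call without evaluating it.
(macroexpand '(when ready? (launch!)))
;; => (if ready? (do (launch!)))
```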


The tradeoff you're referring to is summarized "Powerful but doesn't provide much." JavaScript comes to mind. This is not the case with Clojure. Most macros are basic and straightforward. Their power is, of course, unparalleled in other languages' feature sets. With great power comes great responsibility, eh? And you don't have to use macros. Not that much is implemented in macros. I think you have some valid insights, but you may be overstating your case.


> This is not the case with Clojure.

Well, it actually is the case, it's just that they've built out the core library with a ton of functionality; intermingling the actual Clojure keywords (there's only ~17 of them) with the convenience macros and functions built up from those two primitives. Someone wrote all of those functions and macros. Hundreds of functions and macros were built by the language creators to give the functionality of a complex language.

> Not that much is implemented in macros.

74 of the functions exposed in the core api alone are macros. Perhaps the most often used one is "defn" (and somewhat ironically "defmacro").


    intermingling the actual Clojure keywords (there's only ~17 of them) with the convenience macros and functions built up from those two primitives.
It sounds like you are drawing a distinction between the core Clojure language and the macros and functions in the core Clojure namespace. This is not really a valid distinction. That's the thing about Lisps: much of the language is implemented in the language itself. Macros make the language easier to extend, by both the language designer and random developers and users in appropriate cases. You can keep special forms to a minimum. Those functions and macros are considered part of the core language even if they aren't special forms.


In case you're interested, Lisp can be implemented with 7 primitives (some say 5). So in a sense Clojure went overboard to increase the verbosity of Lisp to make it easier for newcomers: http://stackoverflow.com/questions/3482389/how-many-primitiv...


... so it sounds like we're in violent agreement.


> Not that much is implemented in macros.

And this is exactly what is wrong with Clojure and its community. An unexplainable lack of ability to embrace macros and all the power they can bring.

I could never understand a single anti-macro argument, they are all too detached from the reality.


Is it really lack of ability to embrace, or a desire for simplicity of understanding? (which is also reflected in working with plain data structures in all kinds of libs). I think core.async is a great use of macros and suggests the Clojure community has ability to use it where it makes sense.


Nothing can improve simplicity and readability more than macros. They make sense most of the time, almost always, not just in some edge cases.


A very simple way to put it. That's probably why people argue it's a question of nurture: once introduced to the (at first) bliss of mutable memory, it's a bit complicated to ask people to twist their brains back, until late down the road. Taught without that unfair comparison point, people will quickly find the mutation-free style easy to deal with, given proper basic principles.


I'm not familiar with the myth of having to be a genius to learn clojure. Not having an assignment operator is the same problem anyone would have in trying to move from an imperative to a functional language.

You can be productive in clojure, I think, without being extremely fluent. But Clojure does tend toward some Perl-esque terseness. Browsing down this page (http://clojure.org/reader) everything is fine until you start getting into macro characters, and then the dispatch macro, and then the regex dispatch macro. I think you could write perfectly fine clojure code without knowing the details of all of these, but it can be frustrating to not know if your code is idiomatic clojure or merely badly translated from another language.
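For reference, a few of the dispatch-macro forms that page describes, in everyday use:

```clojure
(#(* % %) 5)            ;; #( )  anonymous-function shorthand => 25
(re-find #"\d+" "a42")  ;; #""   regex literal              => "42"
(contains? #{1 2 3} 2)  ;; #{}   set literal                => true
```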


I've definitely been experiencing the frustration of not knowing whether I'm writing idiomatic Clojure or not. Looking through the actual Clojure + other's source code and occasionally asking on IRC when I feel like I'm doing something wrong has been a big help.


> It is a myth that you have to be a genius programmer to pick up Lisp dialects like Clojure. I think it's the opposite:

People should learn first through Racket and its awesome DrRacket. I wish I first learned programming in Racket and not Basic and Assembly.

My kids will go through Racket in a few years.


I agree. In our Programming Languages class the professor used Racket when introducing functional programming. It worked very well.


> The hard part of Clojure really boils down to one thing: you do not have an assignment operator.

Nah. You have `let` which is exactly that.
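For illustration, `let` in action; note the second binding shadows the first rather than mutating it:

```clojure
;; Whether one calls this assignment or binding, nothing is mutated:
;; the second x is a new binding that shadows the first.
(let [x 1
      x (inc x)]
  x)
;; => 2
```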


Probably worth noting that a lot of the companies with an interest in Clojure cited as Early Majority are simply non-Clojure companies that picked up an Early Adopter startup as an acquisition and need to continue its development. That does not necessarily represent greenfield Clojure development at those companies (I know this for a fact regarding one of the companies cited on the list, and suspect the same for a few of the others).

While Clojure foundered a bit after its start in the race to become Java.next I think that what has ended up saving it, or at least given it new life that it really needed, is a particular combination that is not even mentioned anywhere in the article: Clojurescript and React wrappers like Om and Reagent. I know more people considering Clojure(script) as a path to a combined web app and mobile app (via React-native) than I do people looking at Clojure to power the back-end.


I think (and have seen from personal experience) that some of those groups are really just individual teams that have adopted clojure on their own but are relatively niche and isolated within the larger companies. I don't think it's mostly made up of acquisitions. Teams in some large tech companies are free to choose the language that best fits their needs, and so you can find pockets of clojure, go, etc. around. Also, the ones I've had personal experience with were all backend teams. I think clojurescript is still lagging backend clojure in tech companies, but all I have is anecdotal evidence.


Here in Germany I only see Clojure ads in relation to big data startups[0].

Very seldom do classic companies ask for anything other than JavaScript on the browser or Java on the JVM.

[0] Same applies to other FP languages on the JVM.


It's the same in the US outside of SF and NYC.


A long while ago I wrote a Java book [1] (my usual cookbook style) and decided to also support Scala and Clojure. It was so very much easier writing Clojure wrappers than Scala wrappers.

Java and Clojure mix very well in projects: set up a separate Java source path and let lein build everything.

[1] you can grab a free copy at http://markwatson.com/opencontent_data/book_java.pdf - it is my Java Semantic Web book.


In my mind you are not leaving OO behind when you get into Clojure, but the big conceptual challenges revolve around doing things in an immutable way. It is obvious how to do some things and not others.


As an example there's an essay at

http://www.lispcast.com/solid-principles-in-clojure

which elaborates on SOLID OO design in Clojure and provides some code examples. For example there's a pretty clear example of the right way to apply SRP to a functional paradigm.

I'd interpret it as: you need SOLID to do successful OO (necessary but not sufficient), but applying as much of SOLID as reasonably possible under any paradigm also happens to result in good or better software. In that way there can be a lot of confusion about what is OO and what is merely usually found near OO but isn't OO-specific.


(Disclaimer: I'm a huge fan of Eric Normand, and had a couple of dinners with him at Clojure Conj last year.)

I think the value of SOLID is that it gives you tools for thinking about your code.

I think there is lots of room for a critique and even a debunking of SOLID. Indeed, Kevlin Henney has done just that. http://yowconference.com.au/slides/yow2013/Henney-SOLIDDecon...


Reading the books such as "The Joy of Clojure" and "Clojure Applied" helped me to leave the OO behind when getting into Clojure.


I've been going through "Clojure for the Brave and True". It's been great so far; it does a good job of explaining functional programming.

Though I already had one foot in that realm, having played around with Racket and learned Scheme in college.


One cannot leave OO behind in a language that by definition, supports OO via protocols and multi-methods.


OO is such an overloaded term, it's not possible to really draw the line in any concrete way. Much in the way that "functional programming" tends to mean different things to different people... but OO is even more varied in its actual meaning, in general.


How are protocols and multi-methods OO?


Multi-methods are how Lisp based languages have done OO since the early days.

LOOPS in Interlisp-D; here's a time travel to Xerox PARC:

http://www.softwarepreservation.org/projects/LISP/interlisp_...

Check the "LOOPS, A Friendly Primer" book.

Multi-methods are at the core of CLOS, the Common Lisp Object System, made famous by "The Art of the Metaobject Protocol" book.

http://www.amazon.de/The-Metaobject-Protocol-Gregor-Kiczales...

They are also used by Dylan, the Lisp with Algol-like syntax developed by Apple.

Protocols provide the same type of polymorphism offered by Objective-C protocols, Java/C# interfaces, Go interfaces, ...

Many mainstream developers might only know one way of doing OO, but back in the day we could choose between Smalltalk, Lisp, Beta, Eiffel, Sather, C++, Modula-3, Oberon, Component Pascal, SELF, .....

Each had their own view how encapsulation, polymorphism, message dispatch, type extensions should take place.

So it is kind of funny to have some on the FP side bashing OO while successful FP languages are actually hybrid, and at the same time people on the OO side bashing FP while their languages keep absorbing FP concepts.

Eventually we will get back to Smalltalk and Lisp, which already provided both ways in the late 70's.


But protocols and multimethods are different from OO in the sense that the functions are decoupled from the state. You don't store state in a protocol, you just define an interface. That's pretty different from the way most people think about OO in C++, Java, Swift, Objective-C, etc. In Clojure you have records and maps, which hold your "state" or your values, and you have protocols, which define your functions, and the two are isolated from each other and not attached in any way. That's quite different from OO in general, don't you think?


No, because there isn't one way of doing OO.

Just go read the Xerox PARC papers on how to do OO in Lisp, for example.

Back when OO was new there were multiple ways of how to do OO.

Some languages used the Smalltalk approach.

Others took the Simula approach where objects were an evolution of modules that could be extend and manipulated.

And there were lots of other options scattered around OOPSLA papers.

What happens is that there are now a couple of generations of new developers that didn't live through the procedural to object oriented programming revolution, nor were doing their CS degree in those days, so many understand OO as C++, Java et al do it and think no other way is possible.

The way Lisp does it, is quite common in the OO languages that offer multi-dispatch in method binding.

Since all method arguments types are used in the method resolution, it doesn't make sense to bind the methods to a specific object.

LOOPS and CLOS books/papers are pretty clear that they are doing OOP.


According to Alan Kay, who invented the term:

> OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them. [1]

You can do this easily with multimethods, in fact they allow for significantly "later" binding than traditional OO languages like Java or C++.

[1] http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay...


multi-methods at least are isa?-based. This implies that they obey ad-hoc hierarchies created via derive as well as traditional java inheritance hierarchies.

protocols are little more than open-ended interfaces (i.e. I can extend them at run-time to my things and to other things).
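A sketch of that open-endedness (the protocol name is made up): extending a protocol at run time to types you don't own.

```clojure
(defprotocol Describable
  (describe [x]))

;; Extend the protocol to existing JVM types after the fact.
(extend-protocol Describable
  String
  (describe [s] (str "a string of length " (count s)))
  Long
  (describe [n] (str "the number " n)))

(describe "hi")  ;; => "a string of length 2"
(describe 7)     ;; => "the number 7"
```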


Just a slight addendum: Clojure multimethods can resolve to a concrete method implementation based on any function of their parameters.

So, in addition to single dispatch based on class (a la Java), you could also dispatch based on the classes of multiple parameters or on the value of the field 3 objects deep.
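A sketch of that "several objects deep" case (the message shape and method names are hypothetical):

```clojure
;; Dispatch on a value nested inside the argument.
(defmulti route (fn [msg] (get-in msg [:meta :header :kind])))

(defmethod route :ping    [_] "pong")
(defmethod route :default [_] "unknown")

(route {:meta {:header {:kind :ping}}})  ;; => "pong"
(route {:meta {}})                       ;; => "unknown"
```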


kinda. The dispatch is on a single value and isa? isn't mapped over that single value if it happens to be a collection. This means you can dispatch on multiple concrete values in a collection or the isa? hierarchies of a single thing, but you don't get to isa? everything in the collection (unless you do it yourself over some limited set of things you care about; like you said, method dispatch is over ANY function).

This means that you can do something like

    (defmulti cares-about-a-and-c 
      "multimethod that cares about the first and third args" 
      (fn [a b c] [a c]))
    
    (defmethod cares-about-a-and-c [:alpha :gamma]
      [a b c]
      (prn "got :alpha and :gamma"))
    
    (defmethod cares-about-a-and-c [1 3]
      [a b c]
      (prn "got 1 and 3"))
but the following won't really work how you want it to:

    (defmethod cares-about-a-and-c [String String]
      [a b c]
      (prn "Got two things that match (isa? String)"))

   
    (cares-about-a-and-c "foo" nil "bar") ;; doesn't call our last method
You could, however, define something based on class and not isa? via your dispatch function:

    (defmulti cares-about-class-of-a-and-c
      ""
      (fn [a b c] (mapv class [a c])))

    (defmethod cares-about-class-of-a-and-c [String String]
      [a _ c]
      (println "Got the strings: " a " and " c))


I am having a lot of trouble trying to understand the hype behind functional programming. I have read McCarthy's paper on Lisp, completed Odersky's course on Scala, etc. No revelation so far (yes, maybe I am stupid, but I won't admit it). Is it only useful for study, as a model that inspired modern programming languages? For example:

"obvious power of code becoming data."

Many languages have eval() where data can be treated as code.


Traverse a <String, Int> hashmap, remove all strings with an even key string size and sort it by the last character in the string, then add the string size to the integer (here you have a list of integers) and sum the product of the numbers at index 0 and n-1, 1 and n-2, ...

Compare the code size in Java and Clojure.

If that is not enough, do the same thing but start with java objects (and their clojure equivalent: a hashmap) for some given object/map property.

EDIT: And if that is not enough, use as property of the object a string, defined only during run-time by a user input.


Exactly this. The amount of code to do things is shockingly small.

When I was first learning Clojure I stumbled on a "lack of good documentation" in third party libraries. I'd google around for something that solved the problem I was trying to solve so I wouldn't reinvent the wheel. I'd find, say, a github repo that had a couple of sentences in its README that purported to solve my problem and then nothing else. What the hell, I thought; why no decent doc? Then I realized: the solution was implemented in 40 lines of Clojure. Reading the source was the fastest way to figure it out.

It still happens to me. Every time it does, I have this warm feeling of investing in something of great value.


I also use several libraries that don't have a decent (or any) doc, just docstrings in source code and I kind of got used to it now, or, as Emacs puts it:

Code never lies, comments sometimes do (Charles de Gaulle)


I'd love to see some example code for doing this.

I'm trying to implement it in Kotlin and am curious how it would compare.


  (def mymap {"asd" 1 "qwe" 2 "rtz" 3 "foo" 4 "bar" 5 "quz" 6 "bnm" 7})

  (let [ints (->> (filter (comp odd? count key) mymap)
                  (sort-by (comp int last first))
                  (map (fn [[s i]]
                         (+ (count s) i))))]
    (reduce + 0 (map * ints (reverse ints))))
Gives me 341


As other commenters said, you're confusing hype behind Lisp with hype behind FP. They may be related, but are not the same thing.

Almost all currently used languages have some functional programming support. Many "modern" languages are functional in nature, even if it's well hidden behind syntax sugar.

Lisps, and code-as-data, is different, in that it's about compile-time meta-programming and not runtime execution model. There are non-lisp languages which offer similar capabilities, see Elixir and Nimrod for some recent examples.

Anyway, if you want to understand practical advantages of FP you should just use it in practice. It shouldn't really come as a surprise that you can't see practical benefits if you only studied a bit of theory...


I think you're confusing two concepts:

- functional programming (Haskell, Scheme, Clojure) which favors recursion and stateless functions

- homoiconicity (code=data) (all Lisps)

These two concepts are completely orthogonal, and there are many Lisps such as Common Lisp, Emacs Lisp where functional programming style is not encouraged. Most of the code in these languages can hardly be called functional.

Conversely, not all functional programming languages are Lisps or homoiconic, in fact most of them aren't.


> Many languages have eval() where data can be treated as code.

I will just address this point, which IMHO is orthogonal to your question about the benefits of functional programming. Let me give you a simple example of macro: a macro print-variable that takes a variable and prints: "$variable = $value", where $variable is the variable name and $value its corresponding value. I wrote it on my Clojure REPL, and run it that way:

    user> (def x 4)
    #'user/x
    user> (print-variable x)
    x = 4
    nil 
In a language without macros, the function print-variable wouldn't have access to the string "x" once x has been defined. In a language with eval, you could do something like:

    def print-variable(variable_name):
        value = eval(variable_name)
        eval("print("+variable_name+"="+value+")")
But you would have to call that function with a string, which is not as elegant and can lead to errors as the string could be misspelled.

Now, I will show you the code of the macro in Clojure:

    (defmacro print-variable [variable]  
         `(println '~variable "=" ~variable))
What it does is take the variable you want to print and defer its evaluation (~variable) until you actually want to know what value is assigned to it. In the meantime, it can get the name of the variable ('~variable).

Here is what it does when I expand the macro:

    user> (macroexpand '(print-variable x))
    (clojure.core/println (quote x) "=" x)
The clojure compiler transforms the macro into the expanded code, and then injects it into the rest of the code.

Hope this helps.


Your Python example wouldn't work. eval is still scoped lexically. You'd need to get current stack frame, get locals from the previous frame and only eval within these locals. That's what TCL `upvar` does, BTW.

Macros let you avoid this kind of problems when meta-programming (but then they bring their own problems: hygiene, for example).


Good point, I haven't really thought the Python example through. I can't find a quick fix now, so I leave it as an illustrative code (even that failing code is an inferior version of the macro, so...).


I can't seem to find it now, but I remember reading an essay, probably by Joel Spolsky -- on mentoring a newly hired junior C programmer. The programmer had implemented some functionality in C, and had written lots of tiny function calls, with descriptive names, that composed to form the solution. The author called the programmer into the office and berated the coding style, citing "efficiency" as a reason why the program was an example of poor coding style.

The main point of the article was that the junior programmer was right: well-structured, self-documenting code is always better than fast code. If it ends up being too slow, and (in the case of C) can't be fixed with "inline" and "-O2", then, and only then, move towards a different, hopefully more efficient solution.

Now, obviously, writing functional-style C isn't the only (or perhaps even the best) way of writing structural, clean, C code.

But functional composition, especially with help from the underlying programming language, can be compact, and very easy to reason about.

Never mind the fact that approximately half of Haskell (and some clojure?) programmers appear to think that "f a b" is more readable than "scaleVector Vector Factor". Conciseness can be a weakness as well as a strength.


One main goal of functional programming is writing less error-prone code. When mapping a function over a sequence there's no way to get an off-by-one error, or any other bug associated with a typical for loop.

Not a lot of aspects of functional programming are tied to the code-is-data idea (formally called homoiconicity). However, when writing Clojure, the user is actually inputting Clojure data structures. Something I've noticed is that structural navigation inside the editor has been beneficial. I can edit Clojure code so much faster than any Algol-family (C-like) language I've used. And while I find it important to spend more time thinking than typing, it's easier for me to stay in flow when the code is so malleable.
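The off-by-one point above, concretely: the sequence operation carries its own bounds, so there is no index to get wrong.

```clojure
(map inc [1 2 3])         ;; => (2 3 4), no loop counter anywhere
(take 3 (iterate inc 0))  ;; => (0 1 2), bounds come from take, not an index
```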


A dense functional language like Clojure or Haskell has about a 10x reduction in lines of code once you get rid of all the OO boilerplate and replace it with composable, higher-order functions. I used to write c++ for a living and when I switched to Clojure, my days are still full of the same level of development work, but the amount of code that I actually write is dramatically less. In a functional language, a lot more goes into the thought process of how to construct algorithms rather than in meeting the demands of the OO environment you work in. In some languages you have no choice but to make classes for even the tiniest of tasks, which is a limiting requirement to put in the hands of someone trying to creatively solve a problem from any angle without preconceived design patterns influencing the approach. FP instead lets you concentrate on other matters and usually use your brain for the fun stuff more often.


Have you read "Why Functional Programming Matters"? https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.p...


Rich Hickey had a talk (not sure which one it is at the moment) about how we used to refer to programmers as data processors, because the core of the job was reading data from somewhere, processing that data, then moving it somewhere else. We've moved away from that definition, but the fact is that, at its core, we are still data processors.

Once you've convinced yourself of that and you look at Clojure through that lens you will find immense power in that language, as everything is built around data transformations.

I do think the move away from data processors as a title is more of an ego thing...none of us probably wants to be called a data processor.


In a nutshell:

- Functional programming style tends to make your programs easier to reason about, as it avoids state/mutation. This also makes testing your programs much easier, and can also make concurrency easier.

- It's a different way of thinking. Some problems are much easier to solve in a functional fashion, some are much easier in an imperative fashion.

- Functional code can increase code re-usability as it decouples data from its operations

- A lot of functional languages have very advanced type systems compared to say, Java or C++. I'm personally a big fan of static typing and after learning F# I am sorely missing its type system in C#
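A tiny illustration of the first point (names are illustrative): a pure function needs no mocks or setup to test.

```clojure
(defn apply-discount
  "Returns price reduced by rate, a ratio between 0 and 1."
  [price rate]
  (* price (- 1 rate)))

;; No state to arrange: call it, check the value.
(apply-discount 100 1/10)  ;; => 90
```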


I had many revelations while reading and doing the exercises of the SICP book.


I think the idea behind homoiconicity is that you can generate and modify code using the same tools you would use on data; not string functions and regular expressions to manipulate the text you would call eval() on. Trying to understand semantics of code blocks this way is like parsing html with regular expressions. I used to modify built-in functions in emacs lisp by calling a modifying function (though later I was told the canonical way to do it was just to copy the function elsewhere, modify it, and evaluate it after the built-in was loaded).


> Many languages have eval() where data can be treated as code.

That's significantly less powerful if you lack structured ways to compose things together before evaling them.


Let's say I'm sold on Clojure (I am; I think it's awesome) and I want to convince my boss to convert a crufty Java 7 enterprise app to a better alternative. What is the argument for Clojure compared to, say, Kotlin? How difficult is a complete conversion to Clojure compared to other JVM alternatives. My guess, in terms of ease of transition:

  Java 8 > Kotlin > Scala > Clojure
That's just my impression, especially if you're risk averse and want to make improvements as incrementally as possible. But I could be wrong. Anyone have a good counterexample?


Clojure's strongest selling point as compared to other JVM languages is probably its concurrency story. If your enterprise app is of any sort of size, you've probably run into concurrency related bugs.

Another strong advantage is the development lifecycle. It's much tighter thanks to the live coding built right in. This should make it much faster to catch bugs or feel out new code. You can also use this to implement zero-downtime deploys and patching.
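A hedged sketch of the concurrency point above: many threads hammering one atom with swap! cannot lose an update, where the equivalent unsynchronized counter in Java could.

```clojure
(def hits (atom 0))

;; Eight futures each increment the atom 1000 times.
(let [workers (doall (repeatedly 8 #(future (dotimes [_ 1000] (swap! hits inc)))))]
  (run! deref workers))  ;; block until every future finishes

@hits  ;; => 8000, never a lost update

(shutdown-agents)  ;; let the JVM exit promptly when run as a script
```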



