We bought the drone.io domain name back before drones were all over the news. We thought the definition of a drone as "a unit which takes commands from a primary source" made sense for a cloud-based CI solution.
Follow you around like a drone, eh? Look, we're not the ones who politicized the term. A robotic bee mascot is not going to dispel people's discomfort. Which is unfortunate, because it looks like a really promising tool for CI...
The workflow is pretty basic right now, but we plan on adding matrix and parallel builds in the near future. Could you elaborate a bit more on your workflow? I definitely want to make sure Drone supports more than just simple use cases.
From my experience with Jenkins as a build/deployment/release engineer for the past 6 years, you probably want to:
- chain jobs - needed for larger projects; ideally this should even allow composing jobs, so you end up with nice, modular jobs which can be launched standalone or chained
- some kind of powerful templating system - needed to reduce configuration duplication; ideally this would keep track of all the "children" in case of updates
- you also probably need enterprisey features later on, like SSO using AD/LDAP, fine-grained ACLs based on groups, etc.
But job chaining and job templating should be the higher priorities for the workflow, since they affect the overall architecture. Jenkins has been struggling for a while to re-architect itself to allow this, not entirely successfully.
You also want a plugin system if you don't have one, especially one with dependencies (i.e. the Git plugin can serve as a dependency for the GitHub plugin).
Chaining jobs and parallel builds are both very important, especially the latter, since it saves you a lot of time waiting for the tests to complete. Another big plus is being able to run a certain set of tests only when a specific event fires, e.g. run test A when somebody pushes to branch X.
This is pretty neat. Functions that read from or write to http.Request.Body should be able to accept a Context object, assuming they take a Reader/Writer. If they take an http.Request, you're going to pass in Context.req, similar to how custom.Request would contain a custom.req field.
Have you considered using OverlayFS in a future version of Docker, instead of AUFS? It comes bundled with Ubuntu 12.04 and higher, and my understanding is that it could possibly get merged with kernel 3.10.
In the first test, Pat is almost twice as fast as the Gorilla framework. In the second test, when we added a bit more logic to the handler (marshaling a struct to JSON), Pat was only about 18% faster than Gorilla. In fact, it turns out it takes longer to serialize to JSON (8000ns) than it does for Pat to route and serve the request (6000ns).
Now, imagine I created a third benchmark that did something more complex, like executing a database query and serving the results using the html/template package. There would be a negligible difference in performance across frameworks because routing is not going to be your bottleneck.
I would personally choose my framework based not just on performance but also on productivity: one that helps me write code that is easier to test and easier to maintain in the long run.
rorr, you appear to be hellbanned. Here's your comment, since it seemed like a reasonable one:
> Now, imagine I created a third benchmark that did something more complex, like executing a database query and serving the results using the html/template package. There would be a negligible difference in performance across frameworks because routing is not going to be your bottleneck.
If you're performing a DB query on every request, you're doing something wrong. In the real world your app will check Memcached first and, if there's a cached response, return it. That makes framework performance quite important.
Notice that the top 3 frameworks (pat, routes, and Gorilla) have almost identical performance results. The point is that routing and string manipulation are relatively inexpensive compared to even the most lightweight TCP request, in this case to the memcache server.