Hacker News
Iris: Fast back-end web framework for Go (iris-go.com)
179 points by chenzhekl on June 21, 2016 | 120 comments



Any time I see a chart about web server performance that doesn't show the corresponding error rate and latency numbers, I start to think the author doesn't entirely understand the performance characteristics of HTTP services.


Error rate? I believe when doing HTTP benchmarks it is assumed to be zero.


The performance comes from using https://github.com/valyala/fasthttp instead of the stdlib net/http. From this project's FAQ:

>Why creating yet another http package instead of optimizing net/http?

Because net/http API limits many optimization opportunities.


And then you add your database, and your speed is comparable to the rest of the world :)

What I think could work is a pure in-memory database combined with the server. Something like Redis for websites.


I use Redis as the primary and only database. This results in quite large savings in server infrastructure, since memory has never been cheaper than it is today and Redis consumes a minimal amount of resources relative to the massive throughput it provides. Additionally, it has data persistence reliability on par with Postgres and other battle-tested databases[0], so I couldn't be happier about that.

The selling point of Redis for me has been its simplicity. It has a fairly small feature set in comparison to other databases, yet it is extremely powerful.

[0] - http://oldblog.antirez.com/post/redis-persistence-demystifie...


"it has data persistence reliability on par with Postgres" if we pretend transactions do not exist and postgres is running in some non-default config then maybe, but we don't need to pretend.


Redis has transactions as well.


Yes, and if you set appendonly yes and appendfsync always, you will get roughly PG-level write performance without all the features. That said, I know of zero production projects using these Redis settings, and I would argue it's not the intended use of Redis, in the same way that running PG with fsync = off is not generally what you want.
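(For reference, the two settings being discussed are real redis.conf directives; a minimal fragment would look like this:)

```
# redis.conf: keep an append-only file and fsync it on every write,
# trading throughput for durability
appendonly yes
appendfsync always
```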


My experience is actually that you'll often get a lot worse performance with redis if you enable durability. The reason being that postgres can batch several connection's/session's transaction commit fsyncs together, which IIRC redis can't.

(Disclaimer: Postgres guy, so I'm likely not impartial)


Why isn't it intended use? And are there actual problems with it even if it isn't intended use?


It's not the intended use because fsyncing every write to disk makes it inferior to every datastore designed to fsync every write. See, for example, PostgreSQL.

For read performance if your dataset fits in memory PG is just as fast as Redis.

The main attraction of Redis is in-memory writes with async persistence. Trading durability and availability for increased performance. If you give that up, then there is precisely zero reason to choose Redis over PostgreSQL.


> The main attraction of Redis is in-memory writes with async persistence.

FWIW, you can have that in postgres as well, cf. synchronous_commit = off

> Trading durability and availability for increased performance. If you give that up, then there is precisely zero reason to choose Redis over PostgreSQL.

As a PostgreSQL developer, I'd say that performance / latency / jitter can still be reasons to choose redis over postgres. The per-read overhead in redis is often a good bit lower than in postgres, and there's less variability in response times.


https://muut.com/blog/news/april-2014-service-failure.html

https://muut.com/blog/technology/redis-as-primary-datastore-...

A counterpoint from a real-world implementation where they tried to make it the primary data store.


Have you even read the article? This was caused by Mongo running with a wrong flag, and the authors explicitly said that their backup script kept only ONE backup. That's bad engineering, not a fault of Redis. Redis is extremely safe if you configure it correctly.


Yes, on a small number of nodes it works in a handful of use cases. That isn't the same as it being equivalent in terms of reliability at scale.

They resorted to weird hack-y shit for something I use MySQL or Postgres for on a regular basis. So have most people who built stuff on Redis like this:

> I use Redis as the primary and only database.

> Additionally, it has data persistence reliability on par with Postgres and other battle-tested databases[0], so couldn't be happier about that.


Redis by default does not fsync every change; instead it syncs every second. So no, its data reliability is not on par with Postgres in the default configuration.


MemSQL (http://www.memsql.com/community/) might also be an interesting option to try.


Wow, that looks cool. Will look into it after work :)

Thanks


The world is not only about CRUD apps with a database backend.


A surprisingly large part of the world is about CRUD apps. And among the non-CRUD apps a surprisingly large part of them is still CRUD.


There are plenty of in-process 'databases' for Go, starting from a simple built-in map and ending with something like https://github.com/boltdb/bolt or even https://github.com/hashicorp/go-memdb .


Like ETS/DETS for Elixir/Erlang?


I've been using a local key/value store as my backing database and it works great. Primary key access is usually less than 1µs and net/http is typically my bottleneck.

That being said, unless you're getting a massive number of requests, using a specialized HTTP library seems like overkill.


What do you mean by usually? What do the "bad" cases look like and how often do they happen? How are you measuring that key access and is it happening across cores? Are you setting any OS level settings around isolation?


It's worth noting that fasthttp doesn't support HTTP/2.0.

According to the README, there are plans for it in the future.


Hmm, thus not so fast after all ...



Who's deploying a go web server without putting it behind nginx? Not many people.


The graph on the homepage says echo2-fasthttp processed almost half the requests that iris processed, but in half the time. Wouldn't that make them approximately equal? I think what it's trying to communicate is the number of requests each framework managed in 500ms, but it isn't clear.


Wouldn't trust programmers who make illogical graphs like that one :D


I wouldn't trust any web framework that isn't yet on https://www.techempower.com/benchmarks

It might not be perfect, but it's common ground.


Cool site. FWIW, fasthttp powers iris, and it's in these benchmarks (and doing very well at that!).


That graph is really confusing. Is the x-axis a progression of processing times from 0 up to ~1000ms? If so, wouldn't it mean that echo2-fasthttp has nearly as good a requests/second figure?


It is confusing. They meant the benchmark used a 500ms "workload" (sleep) per request handler -- x isn't really an axis.


I don't want to interrupt you guys, just adding this info. I won't criticize the other comments; you're free to say anything about the project, but I think you should read the README first and look at the benchmark suite (which is not mine): https://github.com/smallnest/go-web-framework-benchmark . He explains why this is the most realistic benchmark suite so far. Thank you for using (or talking about) Iris!


The graph at their GitHub page might be a bit easier to understand: https://github.com/kataras/iris

There's a bit more detailed bench here: https://github.com/smallnest/go-web-framework-benchmark


Features: https://kataras.gitbooks.io/iris/content/features.html

(seems to be missing a data/SQL/ORM layer)

Usage docs for rest api: https://kataras.gitbooks.io/iris/content/render_rest.html#us...


Not including an ORM layer does not decrease the "value" of this framework, because you can attach such a library on your own. In Go you are very flexible in setting up "a place where data will be stored/queried": there is the fabulous database/sql (or additional layers like lib/pq [1]), which is great for many use cases; there are also ORM-like solutions like gocraft/dbr [2]; oh, and I should also mention Gorm [3] and Bolt [4]. Setting them up is really easy in any Go web framework because (almost) all database modules rely on the abstraction from the stdlib.

[1] https://github.com/lib/pq

[2] https://github.com/gocraft/dbr

[3] https://github.com/jinzhu/gorm

[4] https://github.com/boltdb/bolt


That's true; the same could be said for many other parts of the stack, including several features that were included, like TypeScript compilation.

Given that interacting with a database is at least as common a webdev task as, say, i18n, I thought it was worth noting that this framework does not have a db solution of choice.


Isn't lib/pq more like a driver that's used through database/sql than an additional layer ?


Indeed. Thank you for pointing this out.


Given the state of the art of golang ORMs, I guess this might be a plus.

I have tried xorm and gorm (besides database/sql) and while I settled on gorm, they both have some fundamental design flaws:

* xorm onInsert/onUpdate hooks are designed the wrong way. Last time I tried, hooks received object instances by value and not by reference, meaning I could not actually update things before hitting the database.

* both xorm and gorm have schema-generation capabilities, but have you looked at the schema they generate? Last time I checked, gorm-generated schemas had basically no referential integrity whatsoever, and no foreign key was being generated.

But I guess it's only a matter of time before a well-done golang ORM shows up.


Yeah, with GORM you have to generate the relationships manually.


I have had good results with Beego/Beego ORM lately.


Could someone clarify: why is Go faster than say Python? And if the answer is just that it's a lower level language and therefore has less overhead, why not just use C?


Former C++ dev, current Python dev here.

Go is somewhere between Python and C. Go has a garbage collector, scheduler, and runtime type information (aka reflection, aka introspection). Like C, Go has "value types", whereas everything in Python (or even Java/C#) is a reference (this gives more control over memory layout, generally less indirection, and generally less work for the garbage collector). In this sense, Go performs similarly to Java for serial tasks.

For parallel and concurrent tasks (e.g., web servers), things get more interesting. Efficient concurrency in C is hard, and efficient parallelism in Python is hard (async IO makes efficient concurrency easier, but it's not widely used as far as I can tell). Go's goroutines solve both of these problems by providing a lightweight threading mechanism that abstracts over both OS threads and async IO (I/O is always async in Go, but there are no callbacks, promises, or async/await). These lightweight threads (goroutines) can be dispatched and moved across thread boundaries, and there is no Global Interpreter Lock (unlike Python) so shared memory parallelism is easy.

Basically, Go is as easy as Python (even easier for nontrivial applications in my opinion), about 20-30 times faster than Python (or about half as fast as C or on par with Java/C#), and much much nicer for concurrent and parallel tasks than all of the above.


Nitpick: C# has value types.


In C#, can you create an array of (x, y, z) tuples, where x, y, and z are floats, and the whole array is stored contiguously, without any indirection? Last time I checked the C# documentation it wasn't possible, but maybe I missed something.

Edit:

OK, I definitely missed something ;-) I checked the C# documentation again, and kasey_junk is right. The tuple can be implemented as a struct, which is a value type (not a reference type), and array items are stored in contiguous memory. This is a big advantage of C# over Java (value types are planned for a future version of Java).


I know. Java and C# have primitives as well, which are also values and not references. I was speaking broadly. :)


Go has high level constructs like python, C# or Java. However, it compiles down to machine code directly, like C. This is much faster than C# or Java, which compile to an intermediate interface (CLR and java bytecode respectively, which then run in a VM), and miles ahead of Python, which interprets from the source every single time the program is run.

The closest language to Go would probably be C++, and the language designers have been quoted as saying the main driving force behind Go was to replace C++ due to its complexity.


> This is much faster than C# or Java, which compile to an intermediate interface (CLR and java bytecode respectively, which then run in a VM)

No. Both C# and Java use JITs, and their JITs have been carefully tuned to focus on hot spots. By not having a JIT, Go (in the 6g/8g/gccgo implementations) loses some important optimizations that are very helpful for making virtual-heavy languages like Go faster. JITs enable on-stack replacement, bailouts, and self-modifying code, which enable speculative devirtualization and (polymorphic) inline caching, just to name two techniques. These are very important optimizations that generally you can only get reliably in an ahead-of-time setting with profile-guided optimization, which Go 6g/8g don't have (and PGO is kind of a worse version of what good JITs do anyway).

The true benefits of AOT are reduced startup time and a greater tolerance for slower compilation, allowing for more sophisticated compiler optimizations—instruction scheduling, alias analysis, range analysis, etc. But Go 6g/8g are focused on fast compilation and omit most of those optimizations anyway.


> By not having a JIT, Go loses some important optimizations that are very helpful for making virtual-heavy languages like Go faster.

How does Rust work in this regard? By avoiding virtualization, and compiling each function to many specialized variants corresponding to each possible permutation of input and output types?


With generics, yes, like C++ templates: each combination of parameters results in one instance.

However, you can opt into using trait objects instead, which do use virtual calls.


Is pcwalton saying that a JIT compiler is more adequate for a "virtual-heavy" language like Go (to use devirtualization with inline caching and bailout), and an AOT compiler is more adequate for languages like Rust and C++ that mostly use code specialization?


It's not quite as clean-cut as that, though it is self-evident that optimizing at runtime (e.g. JIT) is the only recourse for optimizations that are otherwise reliant on runtime information (e.g. dynamic dispatch/virtual calls). Go does have an advantage over Java in that it prefers static semantics (pcwalton's assertion that Go is "virtual heavy" should likely be taken as relative to Rust/C++), so has less to benefit from devirtualization, and thus less of an argument to go the JIT route (which would take monumental engineering effort).

I believe pcwalton's points are that 1) running in a virtual machine does not imply that a language is slow (though I bet he'd agree with you that JIT introduces substantial memory overhead), and 2) Java, specifically, is not slow, thanks to a highly-tuned JIT.


Thanks! That's exactly what I wanted to know.


You forgot to mention another significant drawback of most JIT implementations compared to AOT: the bigger memory footprint.


Compiling Python down to machine code and eliminating the VM interpreter loop does not speed it up dramatically (iirc ~30%), it has been tried. It's largely because Python is a dynamically typed language with a lot of runtime dispatch, and Go is a statically typed language with less runtime dispatch.


> This is much faster than C# or Java, which compile to an intermediate interface (CLR and java bytecode respectively, which then run in a VM).

Except there are things like ExcelsiorJET, CodenameONE, J9, .NET Native, CoreRT, Mono -AOT, IL2CPP.

As usual, language and implementation aren't the same thing.


IIRC, Go code compiled with the gc compiler has about the same performance as Java running on the JVM. Surely Go isn't much faster than Java.


You are mostly correct. There are some things Go can do to perform better than Java: Go arrays have one less step of indirection compared to Java arrays, and Go allows for more control over the memory layout of structs/classes and arrays.


But it also doesn't perform as well with large data sets, and it has fewer compiler optimizations and no JIT.

Saying Go is much faster than Java is nonsense.


The speed at which a big data set is processed has nothing to do with the language or its compiler; it's about the way the data set is streamed through memory by a particular program. Go offers two standardized interfaces for this, `io.Reader` and `io.Writer`. Also, if the data set can be batched, it's trivial to parallelize processing in Go, while it's a big hurdle in Java.

A JIT is not a performance feature per se, either. It can be used for runtime code optimization and specialization, which can improve performance. Some JVM implementations try to do this as well as they can, automatically. The stuff they optimize, though, is exactly the kind of indirection that doesn't exist in Go in the first place.

The optimizations `javac` does are one of the few things that allows Java code to run at a competitive speed. And they're mostly trading space for performance, hence the unusually large memory footprint of Java applications.


The big data set stuff I generalized around has more to do with the runtimes involved. Most JVM gc systems for instance are moving collectors while the golang one isn't (or wasn't last I looked). This can cause slow downs in comparison to a comparable GC environment especially around large data sets in long running environments.

"And Go allows for more control about the memory layout of structs/classes and arrays." Is an absolutely true statement, and it means that for some classes of problems (namely systems that prioritize GC latency for throughput) golang might be faster. The opposite is also true, especially with regard to large interdependent data sets, the golang runtime handles those worse than most JVMs.

I'd also completely disagree with your statement "Also, if the data set can be batched, it's trivial to parallelize processing in Go, while it's a big hurdle in Java." Especially with regard to "fast". There are more & better high performance concurrency libraries in Java than there are in go.

All that said, I was really reacting to this line "This is much faster than C# or Java, which compile to an intermediate interface" which is utter nonsense.

In the end, talking about performance in such coarse-grained ways is usually not valuable, but I'm willing to say that golang performs in the same category as most JVMs, and that calling Go faster than Java is nonsense unless you're speaking about very specific cases.


There are two things here -- how fast can a language be in theory vs how fast a particular implementation is.

Interpreters are typically slower (though simple to write and portable). The standard Python/Ruby/PHP implementations are interpreters. (The Python interpreter doesn't interpret from source every time, though; it usually uses the bytecode from previous runs.)

Implementations that generate machine code are more complicated and less portable, but have the potential to be faster. However, while compiling "down to machine code directly" can help startup time, it has little to do with what allows a language to be fast. What matters is whether it's possible to generate efficient machine code.

Static typing, for example, allows a compiler (ahead-of-time or JIT/VM) to generate more efficient machine code [1]. The ability to monkey-patch code -- which is common in many dynamic languages -- makes things harder for a compiler.

That said, there are compilation techniques to make dynamic languages pretty fast -- much faster than the standard Python/Ruby/PHP interpreters. For example, Javascript VMs have gotten 10x (or more?) faster over the last 5 years. (One of the big ideas: <https://en.wikipedia.org/wiki/Inline_caching>.)

A great example of all this is Facebook's PHP compiler. The first version would compile to machine code ahead-of-time, but it wasn't that much faster than the standard PHP interpreter (and definitely slower than the Firefox/Chrome Javascript VMs). They eventually switched to a JIT compiler, which is a better strategy for a dynamic language. Later, they transitioned to statically-typed sorta-PHP-compatible language called Hack, which allows for even more efficient machine code.

[1] There are different levels of static typing, too. For example, Go is statically typed but doesn't have generics. A C# generic data structure can be converted to more efficient code than a Go "interface{}"-based data structure.


> The ability to monkey-patch code

"The liability to monkey-patch" in my opinion. :p


The early version of HipHop (Facebook's PHP compiler) would compile PHP into C++. It was really interesting how they went from C++ to a full-featured JIT.


Right. But just to be clear, C++ was an implementation detail. You could treat the entire toolchain as a black box that read in PHP code and produced a single executable binary.


Go actually lets you work at a quite high level, much like C# or Java (and the performance is in the same ballpark). I think a better question to ask is: why is Python so slow?


Alex Gaynor had an interesting take on this - https://speakerdeck.com/alex/why-python-ruby-and-javascript-...

From what I remember - even leaving aside core language speed - the idioms of certain languages lead people to write slow data structures and algorithms. You can write (much) faster Python but it might start to look un-Pythonic.

But of course there is the counter-argument - if your web framework is your bottleneck then you're doing something weird. Personally I don't work on high-traffic sites and I'll choose expressivity over speed any day.


There are tons of blog posts about companies that have switched from Ruby or Python to Go and have been able to massively scale down their amount of servers while handling the same load. Here's one example: https://www.iron.io/how-we-went-from-30-servers-to-2-go/

I don't think it's weird that the web server could be a bottleneck in a web application (even if it is a database-driven app, as most are).

Of course, if you're making an internal CRUD app that two people are going to use, it's unlikely to matter which stack you use.


I suppose 'weird' wasn't the right word but the vast majority of web development probably isn't constrained by the performance of the language.

> if you're making an internal CRUD app that two people are going to use

I know you're exaggerating for comic effect but still. You can run sites that have millions of visits a day on a $20/month VPS and still not have to worry about performance - unless you're doing something completely resistant to caching. I don't personally know anyone that has to handle more traffic than that but if you believed the general chatter on HN then that segment of the market doesn't even exist. Going from 30 servers to 2? I might be able to cut my overall hosting bill by a few hundred dollars a year but it's not top of my list of concerns.


>There are tons of blog posts about companies that have switched from Ruby or Python to Go and have been able to massively scale down their amount of servers while handling the same load. Here's one example

FWIW, these cases don't come down to raw rq/s performance, but are more due to RAM usage.


In the iron.io blog post, they write that their CPU load was drastically reduced as well.


To make Python fast in that regard, you'll have to rely on C-based libraries (C as in Cython) like httptools or libuv. http://magic.io/blog/uvloop-blazing-fast-python-networking/


OK, so why is Python so slow (comparatively)? I had understood that most of the speed-critical tasks in Python were done by wrappers around lower-level C code. Example: numpy. Is this not correct?


You can indeed do that, and there are other ways to speed up Python (PyPy, Cython, or even just writing more efficient Python code). However, the problem is that you now have to write C.

A lot of the slowness in Python probably comes from the many memory allocations all over the place. Go gives you a lot of control over allocations even though it is a garbage collected language.

These slides posted by another commenter have some good points about why Python is so slow: https://speakerdeck.com/alex/why-python-ruby-and-javascript-...

Interestingly, there was actually a talk at PyCon US 2016 about using Go's http server in Python via C. https://www.youtube.com/watch?v=CkDwb5koRTc


The reason it's slow is that Python interprets its input. This essentially introduces major indirection for the instructions to be executed.

So for '1 + 1', you can't just output some fast x86 instruction and inline it. You'll usually need to jump to where the python vm will do 'a + b' in C. You're having to do a bunch of redundant work at a higher level to emulate the CPU (sorta).

That's why you have JIT compilers that will read '1 + 1' and compile the appropriate machine code, store that in an executable memory region and jump to it. First time might be slow, but after that it's pretty fast. Because you need to jump to the dynamically generated code, you usually compile whole functions at a time.

This topic is super complicated but also super interesting; I've tried to simplify it a bit here.


FWIW: There is a new set of async / await bits in CPython 3.5 that are allowing some really interesting new performance profiles.

http://magic.io/blog/uvloop-blazing-fast-python-networking/

This is almost exactly on par with golang's net/http for a simple echo server, yet it is written in python.


Python is not slow; CPython is slow. PyPy is pretty fast.


PyPy is pretty fast... for a Python runtime. It's not "fast" without qualifiers. The only 1990s-style dynamic language runtime that's fast-without-qualifiers is LuaJIT.


From my understanding, the main reasons are that python is interpreted and has the global interpreter lock: https://wiki.python.org/moin/GlobalInterpreterLock.


The GIL makes Python faster than a Python without it would be.


Yeah, the GIL only slows down threaded code. The big reason the GIL hasn't been removed is that every patch removing it slows down single-threaded performance.


The GIL massively slows down threaded code. To the point where (almost?) all multithreaded cases are actually slower than single threaded cases. :(


C is hard.

You're right though: why not use C? It's a good language, and it's hard to beat for its low-level powers, portability, and speed.

What Go gives you is high-level productivity, testing, a solution for package management (albeit a rubbish one), and a good ecosystem of 3rd-party libraries for things like AWS.

The things that suck about C:

- It's hard to do right. There's a great book on this topic, 'Expert C Programming: Deep C Secrets' by Peter van der Linden. If you haven't read it, I recommend against writing a large project in C until you have.

- C has no package management solution at all. Go doesn't have a great one, but at least it has some kind of high-level management for this. Working with C dependencies and the various C build tools for them is a nightmare.

- C has no memory safety, which means if you do screw up, the 'things that can go wrong' are much, much worse than if you screw up in a relatively safe language like Python or Java.

- C (and even 'modern C++') suffer from major portability problems. Not that it's not portable; it is, but in order to be portable, you have to write weird, arcane and terrible code. It's entirely common to see code littered with `#ifdef WIN32 ...` or a typedef for every primitive type (eg. mInt32) to abstract across compiler differences etc. This means any code coverage you get is probably going to poorly represent the actual code in the library. Oh, did I mention C has no test runner? (although to be fair, CMake helps).

On the other hand, it is extremely embedable, and if you know what you're doing, it is the right choice. Have a look at this excellent highly portable IPC library: https://github.com/saprykin/plibsys <-- That's the right choice for the right job.

It's also the right choice, arguably, for a low-level component that might be imported into some other slow-as-balls language like Python. I'd argue Rust is a better choice, but hey, it's much of a muchness.

...but for a web service or web framework?

nah.

Go was written specifically for those purposes, with high throughput performance as its goal, and a significant amount of effort devoted to optimizing that.

It's not suitable for something like plibsys either.


Besides, keep in mind that the C language has some "weird corners", and in some conditions you might encounter an "unspecified behaviour" scenario.

In short, there are some cases in which the C language specification does not tell you what to do or how to interpret code, and in these cases the choice is left to the compiler.

This is not trivial as it might look.

For example, what happens when you omit the return statement at the end of a function? GCC will automatically insert a return statement for you, but Clang will not. Both are perfectly fine behaviours... as long as you are aware of it.

If you always compiled your code with GCC and suddenly switch to clang you might see weird shit, because the processor reached the end of your function, found no return instruction, and just went ahead and executed whatever code was put after your function (and it might not be so obvious what code is there).

(But in a way, this is part of the beauty of the C language: there is very little abstraction, and it is easy to understand what will happen when you run C code.)


>Have a look at this excellent highly portable IPC library: https://github.com/saprykin/plibsys <-- That's the right choice for the right job.

>[...]

>...but for a web service or web framework?

>nah.

>Go was written specifically for those purposes, with high throughput performance as its goal, and a significant amount of effort devoted to optimizing that.

>It's not suitable for something like plibsys either.

I found this part of your comment difficult to understand. Is C not suitable for plibsys after all, or is it Go which wouldn't be suitable, or something else?


Sorry, I was referring to go at the end there.

The point I was making is:

Don't pick go. Or C. Or Rust. ...unless it's the right tool for the job. Or at least the right sort of tool; there's plenty of cross over.

In this case (web framework), C isn't the right tool for the job. ...but, to be fair, C is the right tool for some jobs.


> C has no package management solution at all.

Curiously, I just found this today:

https://conan.io/

It seems to be a full featured package manager to use with C and C++.


Thanks for this answer. After reading it it seems to me that every other answer is just saying "C is hard" with a lot of words.


I have to write a whole bunch of

  if Sys.info()["sysname"] == "Windows"
even in R!


Discussed on reddit before and I found that Iris uses caches to speed up.

https://www.reddit.com/r/golang/comments/4a8yit/is_this_the_...


There was a discussion about Iris on the github issue tracker of Gin, another framework, dated March 13 2016:

   https://github.com/gin-gonic/gin/issues/560
Some benchmarks were made too.


Thanks for posting. I was wondering how this compared to Gin.


Why (and how) would you compare web frameworks to httprouter? It is just a router.


That's the wrong type of graph for that data... should clearly be a line graph.


Nice site for getting to know the other frameworks.


"It’s gonna work good on all devices."

Don't they mean work well?


The project creator is not a native English speaker. Let it go.


If I were a non-native speaker, I'd be happy to have feedback on my English.

Just sayin'


Why do you take the time to make empty comments? Think about that.

Anyway, it's acceptable American English.

Those comments are empty and wrong.


Not only is this page riddled with typos that lead me to doubt the quality of the code, but the concept of a framework is fundamentally complex and at odds with the goals of Go.


> Not only is this page riddled with typos

I'd argue that at least the author tried to write real documentation, which 95% of Go library authors don't do.

> the concept of a framework is fundamentally complex and at odds with the goals of Go

The famous "You don't need that with Go ™ ".

It's more like Gophers hate the words "framework", "dependency injection", and "ORM". It's hardly a framework; it's a router and a middleware stack.

Instead of being enthusiastic about others using their language, like in any other community, Gophers like to hate, shame, and mock other people because they didn't do things "the Go way", whatever they think that is. It's one of the most toxic communities I have ever seen.


I didn't mean to hate, shame, or mock anyone. I would rather that programmers who create complex antipatterns stay in other languages that encourage them, and if light critique has that side effect, so be it.


I'm sure the author would love your help fixing up the typos:

https://github.com/iris-contrib/website

I just submitted a PR to fix a few minor issues. In general, the grammar issues here are really minimal and the page seems to communicate the project and its goals very clearly without being too distracting.

I didn't find any explicit typos, like misspelled words, but I didn't look that hard either, hopefully you can help out!


Really, the typo thing was not the main point of my original comment, it was just an aside. I'm sorry to anyone I have offended.

I mainly take issue with the proliferation of Go frameworks, especially HTTP frameworks. Frameworks are an antipattern and there are already a huge number of HTTP frameworks that are all incompatible and reimplement the same functionality. I don't think this is useful for the language and in fact think it hurts it.


I'm curious, what makes you think frameworks are an antipattern in Go?

What is fundamentally wrong with creating a framework with specific goals around performance and API?

If Go's stdlib were so comprehensive such that frameworks were not necessary, I would get your point, but it's my understanding Go exists as a simple language on which to build larger programs, not as a "one true way" language with everything included.


> it's my understanding Go exists as a simple language on which to build larger programs, not as a "one true way" language with everything included.

I agree. However, there is a big difference between libraries and frameworks. I believe frameworks are an antipattern in any language, not just Go, but Go's emphasis on simplicity makes frameworks for it especially grating to me.

Rather than poorly summarize, I'll link two of my favorite articles criticizing frameworks; one is humorous [1] and the other is more serious [2].

[1] http://discuss.joelonsoftware.com/?joel.3.219431.12

[2] http://tomasp.net/blog/2015/library-frameworks/


Thanks for the links. I misunderstood your displeasure with frameworks as an issue with Go, it makes more sense that you (and many others) see frameworks as bad. The arguments against them are pretty compelling, but I can't fault someone for building one :P


You're missing the distinction between frameworks and libraries. Libraries are the preferred way to reuse code in Go. Generally, libraries have a "do one thing well" approach and can be easily swapped out; frameworks try to do everything, and they tend to try to integrate more deeply into your application (which makes them harder to swap out).


I enjoyed coding in Go, but the "community" killed all enjoyment of the language for me.

I took off my big boy trousers, put on my human being shorts, and decided to take my enthusiasm elsewhere.

Shame, because the language does have a lot going for it.


The project creator is from Greece and is not a native English speaker. How about cutting him some slack?


Sure, so long as he's open to incorporating corrections.


Because being a native English speaker has a lot to do with the quality of code? How rude...


Typos are easily checked with a dictionary - no need to be a native speaker for that :)


Checking for typos using a dictionary makes little sense. Let's take the example that was pointed out in another comment: "It’s gonna work good on all devices." According to a dictionary (Oxford American, but use whichever you prefer), good can, for example, stand for "having the required qualities, of a high standard", which can easily sound fitting when you're not a native speaker.


English is a language whose grammar has a Stack Exchange site and can be Googled. The typos on the page are common for native English speakers who don't take the time or effort to check their writing for correctness. The author did not grow up from babyhood writing Go programs, but we expect them to be correct and adhere to known idioms.


The same can, for example, be said about Russian, but I doubt you would expect anyone with minimal experience in Russian to write without any errors despite that fact.


Being a learner is laudable. However, that means being open to constructive criticism.



