After having consumed thousands of tutorials, I found this one exceptionally lucid and instructive. Kudos to the author!
His approach of stepwise expansion and refinement lets him introduce feature after feature without the whole thing looking stilted or shopping-list cluttered. For each feature, he shares an itch with the reader and then scratches it. This keeps the reader interested and motivated.
Here, at least in my opinion, is a style and an approach that other tutorials would do well to emulate.
I've tried a few of the Go web frameworks, and it feels like they often add a lot of indirection. That's appropriate for Python or Ruby, but it feels out of place in Go. Yes, you might cut out a little repetitious code, but DRY doesn't seem to be a key pillar of the Go language.
You might find https://github.com/relops/sqlc really nice from an API design point of view (inspired by jOOQ, a Java SQL library).
Sqlc is no longer under development, but you can easily use it as a reference to adapt other libraries to have a similar API.
Any other good SQL libs I should know about? My coworkers mostly like SQL by hand, so no magic ORMs, preferably. But unmarshalling rows seems like an absolute requirement... it's just anarchy without it.
This is why I like CGI. I can make an effort to understand it.
...but, this 'don't use anything but the standard library' movement that keeps popping up here and on Reddit is just ridiculous.
It's just fallout from people who can't be bothered to figure out how to use 3rd party dependencies.
No matter how amazing and excellent your language's development team is, the manpower in the wider ecosystem is significantly larger, and avoiding the work done by others so you can 'hand roll' your own solution is not the answer.
Yes; learning frameworks is tedious.
Yes... sometimes someone else's 'style' of code might not match yours...
...but, you have to ask yourself: are you writing code as an intellectual exercise, or trying to build something?
Because, if you're trying to build something, and you want to write every little piece of it yourself; you'll never build it.
This is the same (daft) argument as people suggesting they build their own game engines.
Building websites isn't trivial, and there are parts you need that aren't in the standard library and never will be (e.g. Redis, memcache, Elasticsearch, AWS, gRPC; the list is endless).
I appreciate the simplicity of 'standard library only' to get started, but realistically, is it even meaningful to talk about?
I doubt you'll ever write a service that is 'standard library' only.
The Go community does have a tendency to avoid building or looking for all-inclusive frameworks, but, e.g., gokit.io, gorillatoolkit.org, and various third-party database tools (sqlx, gorm) are still popular.
The Go community is anti-framework, not anti-library.
And what's the problem with frameworks? Well, remember up until now most Go shops (including my own), are micro-service orientated. You want small, lightweight, high-performance services. Why would you want or need a large monolithic bag of tricks in there? If you need complex routing, are you sure it's a micro-service? If you can't keep track of the datastore and need an ORM, are you sure it's a micro-service?
Also, it's a discipline, not a requirement. In the same way other languages have their idioms (I'm a Ruby dev of 10+ years experience too - those idioms are baked in _hard_ for me), Go's idioms are around structure, performance, and what is reasonable to write at a larger level.
Should you use libraries in Go? Yes. Should you use frameworks? Nobody is stopping you, but are you sure Go is the right place for you and your monolithic application?
Also, I don't see a problem with the official Go website having a tutorial in Go using the standard library. Why would they not?
1) It enforces appropriate design patterns for the task at hand.
2) It produces code structures that are standardized rather than idiomatic.
3) It provides a superstructure that allows the integration of a variety of different plugins without having to write a lot of boilerplate.
I've no doubt this applies to Go just as much as it applies to any other language.
That said, bad frameworks (which is most of them) are often worse than none at all.
Micro-services are themselves the appropriate design pattern for the task at hand, which makes frameworks redundant in a micro-service architecture.
Code structures do not need to be standardised in a service whose entire source code you can read in 20 minutes and replace in a few days if you need to.
Micro-services do not want or need the integration of a variety of plugins, and little to no boilerplate is required.
Many projects will only need a few of these, some will need all, and almost all non-trivial code needs third-party libraries of some kind. Using Go just for services and writing your UI in JS just means you're shifting those problems into JS libraries, and now you have 350,000 problems, including left-pad.
I agree this hostility to a 'framework' (a.k.a. a bunch of libraries), while containing some truth, has become a damaging religion. Library authors in Go tend to stress they are not writing a framework to appease the anti-framework gods, and new users are told to just use the stdlib without the caveat that larger projects will require more structure and pkgs from elsewhere.
The main reasons new users and in particular beginners look for a framework to get started with are quite simple - it provides guidance as to best practices, provides reassurance when they face unfamiliar problems, and gives them a community to ask questions in if they get stuck. None of these are unhealthy reasons.
I do understand and sympathise with the hostility in some quarters to frameworks as monolithic bags of code which everyone imports and uses when they don't have to, but feel this hostility is being cargo-culted on places like reddit rather than understood as an admonition not to unthinkingly import lots of code.
Unlikely; I myself slowly came to arrive at this philosophy after 15 years in programming. Nothing lasts forever except bit-rot! Look, if I want to look back at my youth in old age, that'll mean looking at, running, and playing with old code.
Do I want my time spent discovering and documenting obscure bugs in rare contexts/situations of other people's code, or my own? Guess which.
Has a 3rd-party framework/lib been fine-tuned and painfully optimized for many months for my actual use-case, or their authors'/audience's? Guess which.
Is the stuff these actually offer any actual rocket science, or just pretty basic, easily grokked stuff? Except that 80% of it I don't even need, and the 10% I really do need (and that nobody covers) turns out to be nigh on impossible to integrate with their very own peculiar idiomatic ways of describing things, which I wouldn't ever have any trouble parsing had they used my own peculiarities instead. Guess.
Now don't get me wrong: am I going to reinvent CUDA, OpenGL, .NET/JVM or other crucial infrastructure layers? Of course not.
But most helper libraries and bootstrapping frameworks are at best fluff for an MVP; after that, get that timebomb the hell out of your built-to-last-the-ages codebase. Do it for your golden years.
(I make an exception for 3rd-party Haskell code as of now. First, you usually just need functions together with the insights that drove their design and workings, not as copy-paste but to adopt and adapt, and thus most efficiently learn from. Secondly, the core language is just 6 primitives or so; everything else in Haskell is technically syntactic sugar and language extensions, so writing "pure" Haskell would probably be as nightmarish as coding in Lisp s-expressions (then again, I sure know some delight in this). Thirdly, with equational reasoning and referential transparency guaranteed for every piece of Haskell code that compiles, the fragility is a lot less, and mostly comes from IO interactions with the outside/real world, which I'm ultimately going to recoup control over myself anyway. Further, most Haskellers are as of now way more seasoned, and I can only learn from their work. Furthermore, even much/most of the built-ins are written for learners or for clarity, not efficiency/robustness/all-edge-cases-correctness/etc., and will need custom replacements with time, on a case-by-case basis. Lastly, it'll be a breeze (in comparison) to just keep various older compiler versions around; it's all quite self-contained for the most part. Stack excels at this, but one could decidedly do so manually in case Stack gets stuck decades later.)
The problem with frameworks is that they often turn out to be cancerous: one type of idea metastasises all over your code and then forces you to express everything in a philosophy that might get in the way over time.
In my experience: this eagerness to get entangled in frameworks tends to be inversely proportional to experience.
This gets you code generating HTML pages. It doesn't store anything properly. It doesn't do anything. It doesn't create any relation between any data and what is on screen. What it does store, it stores extremely insecurely (it can probably overwrite the application serving this with random binary data; privilege escalation is trivial, since you can just overwrite the application itself). It's slow: no caching layer, nothing. It can't really work without tons of extra code. And it doesn't support all sorts of necessities of today's web apps: no auth, no users, no client identification, not even bloody cookies (although yes, all of those are doable with extra code). There's nothing in front of the application allowing you to serve multiple applications.
filename := title + ".txt"
body, err := ioutil.ReadFile(filename)
(I love how the code, while making the extreme security error of using unchecked pathnames, includes a discussion about exactly which permissions to use in the call that opens that file. Talk about missing the forest for the trees.)
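The fix for the unchecked pathname is small; a later section of the official tutorial does add a validPath regexp along these lines. A minimal sketch of the same idea (function and variable names here are illustrative, not from the tutorial):

```go
package main

import (
	"fmt"
	"regexp"
)

// validTitle whitelists simple page names, so a request title like
// "../../etc/passwd" can never be turned into a filename.
var validTitle = regexp.MustCompile(`^[a-zA-Z0-9]+$`)

// safeFilename maps a page title to a data file, rejecting anything
// that isn't a plain alphanumeric title.
func safeFilename(title string) (string, error) {
	if !validTitle.MatchString(title) {
		return "", fmt.Errorf("invalid page title %q", title)
	}
	return title + ".txt", nil
}

func main() {
	fmt.Println(safeFilename("FrontPage"))        // FrontPage.txt <nil>
	fmt.Println(safeFilename("../../etc/passwd")) // rejected with an error
}
```

Whitelisting valid names is generally safer than trying to blacklist ".." sequences, which can be smuggled in via encodings.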
Just found a way to crash the application: just upload an invalid template. The error is ignored during template parsing, which leaves the template pointer nil, and the app panics when it renders the template. Hell, you're pretty much bound to do that by accident.
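The failure mode is easy to demonstrate: html/template's Parse returns (nil, err) for a malformed template, so discarding the error leaves a nil *Template that panics on first use. A small sketch of checking the error instead (renderPage is an illustrative helper, not code from the tutorial):

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
)

// renderPage parses and executes a template source, returning the error
// instead of discarding it the way the criticized code does.
func renderPage(src string, data interface{}) (string, error) {
	t, err := template.New("page").Parse(src)
	if err != nil {
		return "", err // malformed template: reported, not a nil pointer
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// Ignoring the error, as criticized above: t is nil and any method
	// call on it would panic at request time.
	t, _ := template.New("page").Parse("{{.Title") // unterminated action
	fmt.Println("nil template:", t == nil)

	out, err := renderPage("<h1>{{.Title}}</h1>", map[string]string{"Title": "hi"})
	fmt.Println(out, err)
}
```

For templates loaded at startup, template.Must is the idiomatic stdlib shortcut: it panics immediately at boot rather than on the first unlucky request.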
Please, people: DO NOT run code like this, unless you want to run a bitcoin miner for someone else, an illegal file mirror, or worse. For the love of God, do NOT give code on the same server any payment details for your web app, and do NOT run this behind your firewall or within your Amazon virtual private cloud.
This brings back the very bad old days of C code serving webpages: so exploitable you're bound to exploit it by accident (i.e. crash it). And it doesn't even have the good parts of the old C code: this thing uses reflection, which will make rendering templates about as fast as in Python or Ruby.
I think the stdlib is awesome, but the biggest gripe I have with the standard net/http library would be the multiplexer (ServeMux). Out of the box, if you register a handler for a pattern like /foo and the user making a request adds a trailing slash (e.g. GET /foo/), it returns a 404, compared to other libraries like julienschmidt/httprouter which will handle cases like this for you.
For example: https://play.golang.org/p/Q96EulBUfI
Other than that I think the stdlib is bang-on for simple HTTP servers. Wouldn't recommend it if you're planning on writing a web server backed by a database, however. Compared to other web frameworks like Django, the stdlib is a great library for working at a lower level in the stack, whereas Django is more of a content-oriented framework built around higher-level concepts like basic CRUD applications.
These libraries also support middleware wrapping, a technique that the default mux permits but doesn't help you with: for example, adding CORS headers or logging the request.
> Be conservative in what you do, be liberal in what you accept from others
Good read, not a ton of practical advice though.
I got a good laugh, honestly. Nice article, filled with interesting, though not necessarily useful, performance ideas. Their site does load impressively fast, at ~10ms page-load as reported by Chrome.
Unfortunately, as a professional web developer, my clients never need such optimizations. They want new features long before they ever get upset about pages being sluggish in the seconds range. I'd love to be required to do more stuff like this, and I'm sure some people are, but the vast majority of the suggestions chase performance at all costs.
In the end I think I'll use a few external packages and just expose the licenses in the app, but I'm doing this primarily for myself. If I were making something I expected to be in wide use I would seriously consider sticking to the standard library just for that clarity.
These frameworks are generally met with acclaim on HN. (Followed by a lot of replies defending the larger frameworks, which then devolve into a couple of threads about how hard it is to pack resources together, the importance of a "blessed" ORM, and the security implications of requiring people to assemble their own security stack for things like CSRF, etc.)
Are they less worthy if they're simply built into the language from day one? A lot of recent languages are taking such a tack, after all.
A safer road for a webapp is to start from a net.TCPConn kind of server with your own tiny HTTP/1.0 parser and a tiny templating engine, your own or a 3rd-party one (it's absolutely not hard). Pay attention to synchronization; packages that do implicit synchronization are better left to their owners. Avoid unnecessary accidental complexity like that.
(Go user since Go 1.0, speaking from experience)
Yet you complain Go lacks "production ready defaults"? Surely the built-in server would be more battle-tested here, and you can use its existing configurability, rather than having to enjoy the pain of implementing timeouts yourself after your server dies.
Networking packages in Go's standard library are of too poor quality to be worth arguing about. And it's not like they are different here; most languages don't have decent built-in networking libraries either. They're usually developed by third parties as the language matures.
Sure, but from what I know, Go actually supports them now. As problems are discovered, they will be dealt with. It saves you reinventing the wheel.
You had suggested writing your own simple HTTP/1.0 implementation. HTTP/1.0 is not enough for talking to the outside world. It lacks vhosts, it lacks keepalive. You lack HTTPS support too without adding your own TLS layer. You would have to use nginx or something in front for it to be ready for the outside world. By your logic, wouldn't Go's own HTTP server be fine, then? It'd certainly be easier.
Web applications are the task Go was directly intended for. It was intended for some other ones as well, but Web applications were the main one.