Go-bootstrap: Generates a lean and mean Go web project (go-bootstrap.io)
150 points by didip 807 days ago | 58 comments



Excellent overview. I'm sick to death of landing pages that are vague, hard to navigate, and leave you wondering what the project is all about; you avoided all of that. In particular, the "Decisions made for you" section clearly answers many of the questions someone will have when they investigate a project like this. Kudos.


"Decisions [It/We] Made for You" is brilliant. 10/10 will use on my next README, landing page, etc.


I can't agree enough! More projects need this "decisions made for you" section in them.

Hmm, maybe I should go through the common Ruby frameworks and add a section like this in a pull request...


Indeed, it is one of the best one-page introductions for a project I have ever seen. It's making me go back to some of my projects and redo the READMEs.


Thanks for the feedback!

The project is aimed at getting you up to speed as fast as possible, and the docs are geared towards that.


> It does not use ORM nor installs one.

Take a look at [1]. Congratulations, you've written an ORM.

The belief that ORMs are evil is precisely the belief that this sort of code should be repeated everywhere database access is performed. If you have generalized routines for interacting with the database with more comfortable abstractions than string concatenation, you are using an ORM, but possibly a poorly tested, poorly documented homegrown one instead of a generally accepted solution that has more eyes on it. You are what you claim not to be above.

Which is not bad; a lightweight ORM is awesome. You could also debate the terminology, since these are not really objects, but the spirit is still pretty similar to ActiveRecord and SQLAlchemy.

[1] https://github.com/go-bootstrap/go-bootstrap/blob/master/bla...
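
To make the claim concrete: a "generalized routine" of the kind being described might look like the following minimal sketch, which builds an INSERT from a column-to-value map over database/sql. This is illustrative only (the helper name and PostgreSQL placeholders are assumptions), not the project's actual code from the truncated link above:

    package dal

    import (
    	"database/sql"
    	"fmt"
    	"strings"
    )

    // InsertIntoTable builds and executes an INSERT from a column->value map,
    // using placeholders rather than concatenating values into the SQL string.
    // Hypothetical helper illustrating the "homegrown ORM" pattern under discussion.
    func InsertIntoTable(db *sql.DB, table string, data map[string]interface{}) (sql.Result, error) {
    	cols := make([]string, 0, len(data))
    	args := make([]interface{}, 0, len(data))
    	for col, val := range data {
    		cols = append(cols, col)
    		args = append(args, val)
    	}
    	placeholders := make([]string, len(cols))
    	for i := range cols {
    		placeholders[i] = fmt.Sprintf("$%d", i+1) // PostgreSQL-style placeholders
    	}
    	query := fmt.Sprintf(
    		"INSERT INTO %s (%s) VALUES (%s)",
    		table, strings.Join(cols, ", "), strings.Join(placeholders, ", "),
    	)
    	return db.Exec(query, args...)
    }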


> The belief that ORMs are evil is precisely the belief that this sort of code should be repeated everywhere database access is performed. If you have generalized routines for interacting with the database with more comfortable abstractions than string concatenation, you are using an ORM

This is incorrect.

The sqlx library, included in OP, has generalized routines for interacting with the database, but is not an ORM. The squirrel[1] library for Go lets you produce queries without "string concatenation", but is also very much not an ORM.

An ORM is a specific style of library that attempts to map object oriented data patterns to relational concepts. That's why it's called an Object-Relational Mapping. There are good reasons[2] why people find this approach problematic, which aren't down to cargo culting them as "evil" or believing that everyone should repeat data access code in all projects.

[1] http://github.com/lann/squirrel [2] http://en.wikipedia.org/wiki/Object-relational_impedance_mis...
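
A minimal sketch of what squirrel-style query building looks like, for contrast with an ORM: programmatic composition and placeholders instead of string concatenation, but no model layer. The table and column names are hypothetical:

    package main

    import (
    	"fmt"

    	sq "github.com/lann/squirrel"
    )

    func main() {
    	// Build a query programmatically; no ORM, no string concatenation.
    	query, args, err := sq.
    		Select("id", "email").
    		From("users").
    		Where(sq.Eq{"status": "active"}).
    		OrderBy("id").
    		ToSql()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(query) // SELECT id, email FROM users WHERE status = ? ORDER BY id
    	fmt.Println(args)  // [active]
    }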


The simple/typical use case of Django's ORM, sqlalchemy, Rails ActiveRecord, etc. is not much more than a native API for composing SQL queries, and the automated mapping of table rows from the RDBMS into native data structures in the application (which happen to be objects because of the language).

When you claim that you are not using an ORM, a reasonable person would take it to mean that you are forgoing the use of query builders and automated mapping from SQL results to application data structures.

So, it may be true that classifying these libraries as ORMs is incorrect, but the "user experience" as a developer between these libraries and typical ORMs appears to be pretty much the same. Or is that unfair?


I have a lot of experience with Django's ORM and I would say that the essential character of these heavier ORMs is missing from the aforementioned libraries. They can be used in the way you describe, but that's not their typical usage.

AR/Django encourage you to describe your entire schema, _including_ the relationships between tables, as attributes on your model objects. Upon doing this, you get simple and reliable programmatic access to some basic access and storage patterns.

Using this knowledge of your data model, the ORM can now provide you with more advanced tools: it can automatically join across tables (`select_related`), lazily load dependent data[1], generate SQL schema for you, transparently cache queries[2], automatically provide HTML form validators, and even automate database schema migration for you. The more completely you model the system, the more it can do for you.

In these systems, the database is subservient to the model. This is a problem, because the database is reality and the model is a model.

Dropping to "opaque SQL strings" is discouraged, both because it's considered error prone ("You should leave SQL to the professionals! There are lots of eyes on this!") and because there is often no graceful way to integrate custom query code with your model layer; instead, you investigate how to do so within the confines of the ORM. For every case where you can eventually find what you need (`select_related("user__friends__email")`), there are a dozen where you can't.

People start writing things in application code that could be handled easily and more efficiently by the database: aggregations are a classic, since ORM support for them is either missing or incredibly complex. As soon as the application becomes non-trivial[3], the problems magnify. They don't do this because they are stupid or bad developers; they do it because it's what the tools encourage: simplistic CRUD access and mistrust/suspicion/fear of SQL.

SQLAlchemy is quite different from this, because its primary focus is to model databases, not to provide some kind of declarative object language with its own set of semantics that do not exist in SQL.

Because Go can't do things like metaclasses or creating/modifying types at runtime, a lot of this simply isn't there, even though people want it.

Sqlx does like 2 things: it adds named query parameter support, and it marshals rows into structs. Squirrel is just a query builder. The whole philosophy that the database must somehow be modeled, and that access to the database is done via that model, is absent.

[1] This is especially durable tarmac with which to pave your road to hell.

[2] http://github.com/jmoiron/johnny-cache

[3] This can mean "lots of requests", or "complex schema", or "complex reporting requirements"... all sorts of things.
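
To illustrate the "2 things" description of sqlx above, here is a minimal sketch (the DSN, table, and columns are hypothetical) showing named query parameters and row-to-struct marshaling:

    package main

    import (
    	"log"

    	"github.com/jmoiron/sqlx"
    	_ "github.com/lib/pq"
    )

    type User struct {
    	ID    int    `db:"id"`
    	Email string `db:"email"`
    }

    func main() {
    	db, err := sqlx.Connect("postgres", "dbname=app sslmode=disable") // hypothetical DSN
    	if err != nil {
    		log.Fatal(err)
    	}

    	// 1. Named query parameters.
    	_, err = db.NamedExec(
    		`INSERT INTO users (email) VALUES (:email)`,
    		map[string]interface{}{"email": "alice@example.com"},
    	)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// 2. Marshaling rows into structs.
    	var u User
    	if err := db.Get(&u, `SELECT id, email FROM users WHERE email = $1`, "alice@example.com"); err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("%+v", u)
    }

Note there is still no model of the schema anywhere: the struct tags just name columns for scanning, and every query is plain SQL.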


Most ORMs are evil, SQLAlchemy being the only one I can think of that is not (assuming you don't use the object-mapping feature). I have found time and again that ORMs create unintended consequences in production. Generation after generation of ORMs I have dealt with have had the same problems: they create N+1 queries, bad queries, and memory bloat, and they are horrible to deal with in production. Once almost all the database operations are replaced with hand-written queries, the application performs as expected. It is amazing to me that experienced engineers who have had ORM failures continue to use them. Some of us have found that the cost of an ORM is greater than the benefit and prefer simple libraries.
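
For readers unfamiliar with the N+1 problem mentioned above, a minimal sketch with hypothetical posts/users tables: the first loop issues one extra query per row, while the single JOIN after it fetches the same data in one round trip.

    package main

    import (
    	"database/sql"
    	"log"

    	_ "github.com/lib/pq"
    )

    // Hypothetical schema used for illustration.
    type Post struct {
    	ID, AuthorID      int
    	Title, AuthorName string
    }

    func main() {
    	db, err := sql.Open("postgres", "dbname=app sslmode=disable")
    	if err != nil {
    		log.Fatal(err)
    	}

    	// N+1: one query for the posts, then one more query per post.
    	rows, err := db.Query(`SELECT id, author_id, title FROM posts`)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer rows.Close()
    	for rows.Next() {
    		var p Post
    		if err := rows.Scan(&p.ID, &p.AuthorID, &p.Title); err != nil {
    			log.Fatal(err)
    		}
    		// This runs once per post: the "+1" repeated N times.
    		db.QueryRow(`SELECT name FROM users WHERE id = $1`, p.AuthorID).Scan(&p.AuthorName)
    	}

    	// Same data in one round trip, letting the database do the join.
    	joined, err := db.Query(`SELECT p.id, p.title, u.name
    	                         FROM posts p JOIN users u ON u.id = p.author_id`)
    	if err != nil {
    		log.Fatal(err)
    	}
    	joined.Close()
    }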


I think the real reason is that if you are using an ORM, then you should have probably used a non-relational database.


And if you look at dal/README.md https://github.com/go-bootstrap/go-bootstrap/tree/master/bla... , they say that they got the "data access layer" definition from Wikipedia http://en.wikipedia.org/wiki/Data_access_layer , and the last line of that wiki page says:

> Object-Relational Mapping tools provide data layers in this fashion, following the active record model. The ORM/active-record model is popular with web frameworks.

:D


It may be technically true that this DAL is not really an ORM, but saying that the project skeleton does not contain an ORM is misleading at best. It would be simplest to just rework the language to something like "a lightweight ORM" or "a database access layer like what's found in traditional ORMs" or something.


I don't really see any object-relational mapping there. I see a set of helper functions for writing basic insert/delete/update queries. Calling that an ORM (even a minimal one) is a bit of a stretch.


I like the project. You integrated a lot of well-known and standard packages that people writing Go web apps would want, and didn't make the project super heavyweight. Very useful and still very light. I will definitely be using it, since one of my biggest problems with setting up new Go machines is going out and finding all the packages I have used in the past.

On top of that, you got me another default Gorilla SecureCookie integrity key to add to my project for attacking Gorilla SecureCookies.


This project generates scaffolding code; that key should be generated too.


Excellent feedback! I've updated the code to randomly generate the key during bootstrap.
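
The updated commit isn't shown here, but generating such a key at bootstrap time is a few lines with crypto/rand. A minimal sketch (the function name and key length are illustrative, not necessarily the project's actual implementation):

    package main

    import (
    	"crypto/rand"
    	"encoding/hex"
    	"fmt"
    	"log"
    )

    // generateRandomKey returns n cryptographically random bytes, hex-encoded,
    // suitable for writing into a generated config file as a SecureCookie key.
    func generateRandomKey(n int) string {
    	b := make([]byte, n)
    	if _, err := rand.Read(b); err != nil {
    		log.Fatal(err) // a failing CSPRNG is unrecoverable
    	}
    	return hex.EncodeToString(b)
    }

    func main() {
    	fmt.Println(generateRandomKey(32)) // a fresh 32-byte key per generated project
    }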


Would love to see something like this for RESTful Web Services built on Go with /users, auth, rate-limiting, etc already working out of the box.


I wrote something like that a while back: https://github.com/iangudger/basicGoAPI

No rate limiting, but it has pretty much everything else. If you want to add it I would welcome a pull request :)


Your project is GPL licensed. Is that your idea of a joke?

There is a place for the GPL, but this is not it.


I have found the GPL is workable in web projects:

https://programmers.stackexchange.com/questions/132485/does-...

I understand why people don't like the GPL, but it's not a showstopper for most business applications.


You are grossly misguided and should consult a lawyer.

Linking to a random programmers stackexchange question is an unwise way to make licensing decisions.

Per GNU's own faq at https://www.gnu.org/licenses/gpl-faq.html#UnreleasedMods:

  A company is running a modified version of a GPL'ed program on a web site. Does the GPL say they must release their modified sources?

  The GPL permits anyone to make a modified version and use it without ever distributing it to others. What this company is doing is a special case of that. Therefore, the company does not have to release the modified sources.

  It is essential for people to have the freedom to make modifications and use them privately, without ever publishing those modifications. However, putting the program on a server machine for the public to talk to is hardly “private” use, so it would be legitimate to require release of the source code in that special case. Developers who wish to address this might want to use the GNU Affero GPL for programs designed for network server use.

In case you missed it: However, putting the program on a server machine for the public to talk to is hardly “private” use, so it would be legitimate to require release of the source code in that special case.

There are numerous other potential consequences to the GPL and the question of when propagation occurs is ambiguous.


In fact, you are actually the one who is grossly misguided. Per that exact quote,

  The GPL permits anyone to make a modified version and use it without ever distributing it to others. What this company is doing is a special case of that. Therefore, the company does not have to release the modified sources.

Specifically

  Therefore, the company does not have to release the modified sources.
The FAQ is explicitly stating that the GPL does not require releasing modified sources, and that if you want to force the release of modified sources, you should use the AGPL, as it has a clause covering network server software.


As your quote says, it would be a legitimate requirement... but not one made by the GPL. Instead, if you wanted to require this of your users, you would have to use the Affero GPL, which was written for this very purpose (i.e. to close the "application server loophole" in the normal GPL).


Are there any precedents for this side of the GPL being enforced?

How would you even know, or prove, in the first place that a company was running a modified version of the GPL'd source on their servers?


You probably are thinking of the AGPL, which would make that requirement. Normal GPL doesn't.


I could also see this implemented as a standalone proxy which would forward requests to an app server (possibly adding custom headers like `X-User: someusername`). I was actually thinking of going with exactly this kind of architecture for my next project but wasn't sure if the re-usability benefits would be worth the extra work. Does anyone have experience with this kind of setup?
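
A minimal sketch of the proposed setup using net/http/httputil: the proxy authenticates the request, then forwards it to the app server with an identity header. The X-User name comes from the comment above; the backend address and the authenticate/lookupUser helpers are hypothetical stand-ins:

    package main

    import (
    	"log"
    	"net/http"
    	"net/http/httputil"
    	"net/url"
    )

    func main() {
    	backend, err := url.Parse("http://localhost:8080") // hypothetical app server
    	if err != nil {
    		log.Fatal(err)
    	}
    	proxy := httputil.NewSingleHostReverseProxy(backend)

    	// Authenticate at the proxy, then forward with an identity header.
    	auth := func(w http.ResponseWriter, r *http.Request) {
    		user, ok := authenticate(r)
    		if !ok {
    			http.Error(w, "unauthorized", http.StatusUnauthorized)
    			return
    		}
    		r.Header.Set("X-User", user) // overwrites anything the client sent
    		proxy.ServeHTTP(w, r)
    	}

    	log.Fatal(http.ListenAndServe(":8000", http.HandlerFunc(auth)))
    }

    // authenticate is a stand-in for real session validation.
    func authenticate(r *http.Request) (string, bool) {
    	c, err := r.Cookie("session_id")
    	if err != nil {
    		return "", false
    	}
    	return lookupUser(c.Value)
    }

    // lookupUser is a stand-in for a session-store lookup.
    func lookupUser(sessionID string) (string, bool) {
    	return "someusername", sessionID != ""
    }

The app server behind the proxy then only needs to trust the X-User header, which keeps auth logic reusable across services (provided the app server is unreachable except through the proxy).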


Rate limiting isn't something your app should be concerned about. That should be handled a layer up, e.g. nginx. Chances are it does a much better job than whatever you could come up with.


Maybe if it's trivial blanket rate-limiting on a single server.

For anything more complicated that needs synchronization or database access, I'd rather just read and maintain application-level middleware. It's not rocket science.
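
A minimal sketch of such application-level middleware, assuming golang.org/x/time/rate for the limiter and a per-IP map for the state (a multi-server deployment would keep counters in Redis or similar, per the synchronization point above):

    package main

    import (
    	"net/http"
    	"sync"

    	"golang.org/x/time/rate"
    )

    // visitors maps client IPs to their limiters; a real deployment would
    // also expire idle entries to bound memory use.
    var (
    	mu       sync.Mutex
    	visitors = make(map[string]*rate.Limiter)
    )

    func getLimiter(ip string) *rate.Limiter {
    	mu.Lock()
    	defer mu.Unlock()
    	l, ok := visitors[ip]
    	if !ok {
    		l = rate.NewLimiter(rate.Limit(5), 10) // 5 req/s, burst of 10
    		visitors[ip] = l
    	}
    	return l
    }

    // rateLimit wraps a handler and rejects clients over their limit.
    func rateLimit(next http.Handler) http.Handler {
    	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		// r.RemoteAddr includes the port; real code would use
    		// net.SplitHostPort (or X-Forwarded-For behind a proxy).
    		if !getLimiter(r.RemoteAddr).Allow() {
    			http.Error(w, "too many requests", http.StatusTooManyRequests)
    			return
    		}
    		next.ServeHTTP(w, r)
    	})
    }

    func main() {
    	mux := http.NewServeMux()
    	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    		w.Write([]byte("ok"))
    	})
    	http.ListenAndServe(":8080", rateLimit(mux))
    }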


Look at Tyk for this; it should be handled at a layer higher than any individual API, IMHO.


After fiddling with the initial rev and having issues, I pulled your May 8 commits, and this is very nice and working well! Thank you so much! I've been using a "book in progress" on better practices for Go web programming (http://www.manning.com/chang/), but this was much better for getting me going! Cheers.


Glad to hear that the project helps!


I'm a bit confused as to why the initial step is a "git clone". Why not "go get"?


I agree. I thought it would be a blank project skeleton you are supposed to clone, but it’s not. So `go get` would make much more sense on the landing page.


Great point! I've updated the instruction.


Looking better, but since the 'go get' step compiles your main.go and puts it in $GOPATH/bin/go-bootstrap, the instructions could be simpler still:

    go get github.com/go-bootstrap/go-bootstrap
    $GOPATH/bin/go-bootstrap -dir github.com/$GIT_USER/$PROJECT_NAME
    cd $GOPATH/src/github.com/$GIT_USER/$PROJECT_NAME && go run main.go


Just got this up and running; this is fantastic! Great work.


This is really exciting. Thank you very much.

Anyone with a 'large' web project written in Go want to chime in?


IMHO, once you've got a db attached, "secure cookies" are a bad idea.


Without reference to this project:

Perhaps you meant that storing all session data in the cookie is a bad idea, versus just an ID? (If so, I'm with you.)

If not - how would you identify an authenticated user? Or, how would you look up all their relevant session data in the DB?


"securecookies" is a term used, at least in the context of github.com/gorilla/sessions, to refer to a session storage based on encrypting all of the session data and sending it as a cookie. That means all of your session data, including if the user is authenticated and even which user it is, is sent to the browser and back to the server on the next (and subsequent) request(s). This is an interesting concept, but IMHO, rather flawed. About the only valid use is for small micro-apps that don't have any server side persistent storage.

A db based session, which really wouldn't be that hard to set up with github.com/gorilla/sessions, would just send a randomly generated session id to the client in a cookie, save the data in the db, then read that data back out of the db on the next request.
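
A minimal sketch of the db-backed approach described above: only an opaque random ID travels in the cookie, the data stays server-side, and revocation is a simple delete. An in-memory map stands in for the database here, and the handlers are hypothetical:

    package main

    import (
    	"crypto/rand"
    	"encoding/base64"
    	"net/http"
    	"sync"
    )

    // sessions stands in for a database table keyed by session ID.
    var (
    	mu       sync.Mutex
    	sessions = make(map[string]map[string]string)
    )

    func newSessionID() string {
    	b := make([]byte, 32) // 256 bits of CSPRNG output
    	if _, err := rand.Read(b); err != nil {
    		panic(err)
    	}
    	return base64.URLEncoding.EncodeToString(b)
    }

    func login(w http.ResponseWriter, r *http.Request) {
    	id := newSessionID()
    	mu.Lock()
    	sessions[id] = map[string]string{"user": "someusername"} // data stays server-side
    	mu.Unlock()
    	http.SetCookie(w, &http.Cookie{
    		Name:     "session_id",
    		Value:    id, // only the opaque ID travels to the browser
    		HttpOnly: true,
    		Secure:   true,
    	})
    }

    func logout(w http.ResponseWriter, r *http.Request) {
    	if c, err := r.Cookie("session_id"); err == nil {
    		mu.Lock()
    		delete(sessions, c.Value) // revocation: the cookie is now worthless
    		mu.Unlock()
    	}
    }

    func main() {
    	http.HandleFunc("/login", login)
    	http.HandleFunc("/logout", logout)
    	http.ListenAndServe(":8080", nil)
    }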


> That means all of your session data, including if the user is authenticated and even which user it is, is sent to the browser and back to the server on the next (and subsequent) request(s). This is an interesting concept, but IMHO, rather flawed. About the only valid use is for small micro-apps that don't have any server side persistent storage.

I'm curious: what do you consider particularly flawed? DB backed sessions with simple ID-storing cookies suffer many of the same problems, with the primary issue being that you can MitM the cookie and masquerade as another user if not served over HTTPS.

DB-backed (SQL, Redis, et al.) sessions are nice if you are storing genuine data (i.e. form data), because cookies typically have a 4KB-per-domain limit in most browsers.

If you are just storing a user ID, email address and/or admin flag, the cookie is authenticated (to prevent modification of those values) and served over HTTPS (only, ever) then there isn't an immediate problem there. You also don't have to worry about hitting your DB for each request - Redis is real quick, but (without hard numbers) I don't expect that sending 1KB of cookie header data would be slower either.


Keep in mind, your authenticated cookie is exactly as easy to MITM as an ID-storing cookie, and potentially more dangerous. A session can be deleted from the db, and then that session cookie is dead. An authenticated cookie is good forever, unless you start adding expiration times or something similar. The issue with that is that now you have to make sure it's all done correctly, with no bugs that make the cookie good forever or impossible to revoke. Whereas you get that kind of thing for free with a db-backed session: just delete the session and they are logged out, period.


You could always, you know, change the code (which is simple) so that an "infinite" expiry date is no longer valid. Line 215.

https://github.com/gorilla/securecookie/blob/master/secureco...

As for "impossible to revoke" well if you have control over your server you can do whatever you like so this falls into the very not-at-all-impossible category. As a baseline as long as there is no personal information in the "secure cookie" there really is no issue at all.


So you just removed the ability to have a "Log me in forever" checkbox.

An expiry date system is literally impossible to revoke without somehow maintaining a list of valid or invalid cookies, and by that point, you are hitting a database for each cookie.

So, one way, you don't have as much control, and you can't revoke a stolen cookie with potentially high-level access rights. The other way, you are replicating a db-backed session and heaping complexity on top of it.


If we are dealing with a subset of bad cookies, then I suppose it becomes a hard question of what to do. We could put something into the code to distinguish them and then require a minimum last-good date of some sort. It is messy and I wouldn't want to do that, but really I just want to point out that your insistence on it being "impossible" just isn't true. If we just let ourselves revoke everything, then it is much simpler. I wonder how the "known subset of bad cookies" situation arises. I suppose it could, but you could in turn release an announcement telling your users that for their protection you are requiring that they log in again, insert a minimum required date, and move on with life. There are plenty of ways to deal with the situation and none of them are impossible.


Really? Could you try actually reading what I've written?

    impossible to revoke without somehow maintaining a list of valid or invalid cookies
Completely true. A minimum date like you are saying is still "some kind of list of valid or invalid cookies". Or are you going to get into the semantics of "that's not a list"? As an example of something that you can't do without tying a database into it, I've seen sites that let you log individual computers out of the system from your account page. No way to do that without a valid/not valid check in the DB.


Values are values whether or not they are in a db. It is situation-dependent and probably not (happily) manageable in a large-scale application. Saying things are "impossible" with such a heavy hand is a bit over the top. If my cookies have a date that is valid, then I can easily keep that bit of nasty logic in the code. I wouldn't do that personally, but it is doable. Take it easy :)


Wouldn't we be able to avoid this if we did away with the secure cookie and replaced it with a JWT (JSON Web Token)? This way, there is no state to maintain in the database, and authorization can expire.


There isn't much difference between a secure cookie and a JWT. Well, except that a JWT is just signed, not encrypted, so your cookie contents are visible. Also, JWT has issues: https://auth0.com/blog/2015/03/31/critical-vulnerabilities-i.... The main thing is that you have added nothing by using JWT, because you still can't expire a specific token without storing some kind of "token status" in the database.
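
A minimal sketch of the JWT flow being compared here, using github.com/dgrijalva/jwt-go (an assumption; the thread doesn't name a library). The exp claim makes the token self-expiring, and the key-function check guards against the algorithm-confusion vulnerability in the linked post. But, as noted, nothing here lets you revoke one specific token before it expires without server-side state:

    package main

    import (
    	"fmt"
    	"time"

    	jwt "github.com/dgrijalva/jwt-go"
    )

    var secret = []byte("placeholder-signing-key") // hypothetical; keep out of source control

    func main() {
    	// The exp claim makes the token self-expiring...
    	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
    		"sub": "someusername",
    		"exp": time.Now().Add(time.Hour).Unix(),
    	})
    	signed, err := token.SignedString(secret)
    	if err != nil {
    		panic(err)
    	}

    	// ...but the payload is only base64-encoded, not encrypted.
    	// Always verify the signing method before returning the key,
    	// to avoid accepting alg=none or RSA-as-HMAC tokens.
    	parsed, err := jwt.Parse(signed, func(t *jwt.Token) (interface{}, error) {
    		if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
    			return nil, fmt.Errorf("unexpected signing method: %v", t.Header["alg"])
    		}
    		return secret, nil
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(parsed.Claims.(jwt.MapClaims)["sub"])
    }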


Ah, yep, I've betrayed that I'm not as familiar with Gorilla as I might like to be. Yet.

The way you've described things is how apps I'm familiar with do it (the latter way.) Thanks for clarifying.


Also, you don't need to sign or encrypt the cookie if it's just a securely random session_id.


> Also, you don't need to sign or encrypt the cookie if it's just a securely random session_id.

You should authenticate it anyway, to prevent someone from trying to brute force ID generation (and therefore masquerading as another user). Otherwise all hope rides on you using a sufficiently long (CSPRNG-sourced) ID. Authenticating it is good practice.


Authentication really doesn't do anything that extending your ID key by that number of bits wouldn't do; i.e., a 32-byte random ID is just as hard to collide as a 16-byte random ID plus a 16-byte signature. Technically, if there are any weaknesses in your signature, they may end up making your ID+signature easier to collide. Just going with a pure random ID means one less key that you have to keep out of source control.


If you don't trust your random numbers to begin with, signing one with another doesn't help you. As Vendan points out, it's worse. Just keep it simple.

    > to prevent someone from trying to brute force ID generation
That's the point of using a large securely-random number.

It's beautifully simple to give the client nothing more than a large random number to authenticate them.


You can also check out this one: http://defaultproject.com/ Based on Goji and Mongo.

This site uses it http://gifuk.com/


I wrote a simple one of these for Rust using nickel.rs a while ago.

https://news.ycombinator.com/item?id=9519642


Great initiative! Thanks a lot!



