The title here is a bit misleading: the first few paragraphs of this are a really enlightened description of the challenges of engineering for a startup regardless of what framework you are using.
Even if you have no interest in Django at all I recommend reading the start of this.
> I wrote this guide to explain how to write software in a way that maximizes the number of chances your startup has to succeed — by making it easy to maintain development velocity regardless of the inevitable-but-unknowable future changes to team size, developer competence and experience, product functionality, etc. The idea is that, given the inherent uncertainty, startups can massively increase their odds of success by putting some basic systems in place to help maximize the number of ideas, features, and hypotheses they can test; in other words, maximizing "lead bullets," to borrow the phrase from this blog post by Ben Horowitz.
Thanks, Simon! I thought a lot about how to position this piece, in terms of whether it was really about Django, or about Python, or about software architecture more broadly, or software architecture specifically for startups, or whether it's really primarily a business book that just happens to contain code snippets.
In some ways it's all of these things. But I think for people who aren't already senior developers, it's much easier to understand the advice if it comes with some specific context and functioning code snippets. My hope is that by making it Creative Commons (including for commercial use), people will remix the book to make it work for their specific framework, startup, language, etc.
This way people can also incorporate the sections they like into their internal style guides, change them to say whatever they wish I'd said instead, make online courses based on the content, sell their derivative works, etc.
One of the main reasons it's about Python and Django (and not Flask, FastAPI, Express, etc.) is because Python and Django are both transparently run by non-profit foundations, and development is done in the open by large communities of contributors. This is a significant enough advantage that even if, say, FastAPI has some technical advantages over Django on any given day, over the long run it's hard to see any other stack becoming a better choice for startups until both the language and the framework are managed in a similar manner.
That makes a lot of sense, especially given your decision to use Django for code examples. I wonder if it would be worth extracting out those first few paragraphs into a separate article? They're really good, and applicable to way more than just Django.
It's not a bad idea. It would be cool to make a single serving page for this and give it some more structure, break out recommendations specific to different technologies, incorporate content written by others, get testimonials, etc. But it's also evergreen content, so if I wait a year or two to incorporate feedback then I think it will still be just as valuable.
Exactly. FastAPI's maintainer was so unprepared he had to stir up a storm to prevent a legitimate Python feature from coming in. One of the benefits of a larger and more involved community process is the fact that each person exerts less individual control.
They threatened the Python language committee if they didn't get their way wrt PEP 576. Their library also contains C extensions with undefined behavior, and telemetry that phones home.
I'm a big Django fan and a former (unsuccessful) startup founder. I only took a quick look at this but wanted to leave a comment for anyone else sifting through new. My initial impression:
1. There's a bunch of things I disagree with in here.
2. This is very well written.
3. It's well organized.
4. It's both concrete and high-level.
5. There's a ton of info here.
I'm so burnt out on shallow content, it's really refreshing to see something well argued, whether I agree with every conclusion or not.
> There's a bunch of things I disagree with in here.
This is largely how I feel about things like Two Scoops of Django or Effective Python -- I disagree with a lot of the specific advice, but I still learned a ton from them. That might just be the best you can hope for with this genre of writing.
> It's both concrete and high-level.
Thanks! I was trying to really push the idea of having it be almost a business book with code snippets. When I went through college, you could either study business or study software. But these days being good at both is kind of table stakes, so I really feel like technical books should reflect that. But there also isn't a lot of precedent here, so it was a struggle trying to figure out how to actually make it work. The closest pre-existing example is probably something like Clean Code, but the goal of that book (as well as Pragmatic Programmer) is very explicitly to make developers better at programming, rather than making your business more successful. Which is a subtle difference, but it actually leads to both the specific advice and the larger structure of the work being very different.
Same on Two Scoops. It was incredibly useful when I was a junior dev in my first job doing Django. So often it's like "there's five ways I can imagine doing this. What's one that at least doesn't totally suck?". It was nice having an opinionated guide for that. I thought I put it in the giant beginner Python/Django syllabus I made[0], but I guess not.
Reading your guide more in depth, I really wish I could send it back in time to myself. A weekend going through this would have up-leveled me by a year. I don't know, maybe it would have just caused me more headache. The app I was working on was definitely following a lot of not best practices, and seeing even more might have just been too much!
> That might just be the best you can hope for with this genre of writing.
I’m a sample of one, so file this away appropriately, but that's the primary way I evaluate technical books. If I agree with everything, it means I'm not learning anything. When I don't agree, either I'm wrong, which rules because I get better, or I'm correct but had to think more deeply to make sure. Either one is a profound win (for me).
> Thanks! I was trying to really push the idea of having it be almost a business book with code snippets. When I went through college, you could either study business or study software. But these days being good at both is kind of table stakes
1. I will buy a business book with code snippets
2. I am biased about 1, because I also agree with the premise that being good at both (or at least passable) is table stakes these days
Agreed. The most important take, IMHO, is that you have a structured/disciplined way to do things, and that you do it everywhere.
It could lead to a lot of code (the part about all that validation upfront), but having a distinct and regular style surely is a benefit for a large-ish team, which looks like the focus here.
For a smaller one, some of this stuff can be different, but the point is well argued.
The core thing is that I haven't been a fan of the services layer described here. I'm more of the opinion that if you're going to use Django, use it all the way. Fat models and all that.
But, when I read more thoroughly, the guide makes a really compelling case. I've generally scoffed at the idea that you shouldn't couple your business logic to Django in case you want to swap out Django for something else. It's hard to imagine a realistic situation where that doesn't lead to a total rewrite anyway. But putting business logic in functions for readability and testability makes sense to me.
Other disagreements flow from that same one. Not testing your models much being an obvious one.
Not a fan of Hungarian notation. I've had a better experience with type hinting than the author.
I don't share the disdain for URL parameters.
From my first quick skim, that felt like a lot. But I really do agree with much more in this than I disagree with (one big app to start, small serializers, good function calls, unique names and lots more). And the things I do disagree with, this guide has me reconsidering (except type hinting. I love that shit). I mostly wanted to point out that even if I don't agree with everything written there, I think it's very well written.
I'm founder/CTO of a successful SaaS company with 10+ years of code evolution in the same Django codebase. I tried and tried for years to make fat models/model managers work. I feel like it just doesn't work.
It's just not composable enough. You can't reuse business logic in web views, forms, admin views, API, background processing, etc... time and time again the Django tooling gets you 90% of the way there, then leaves you with no way to do what you need done cleanly, so you one-off an ugly solution to it. Soon your code is littered with them.
We ended up with a solution much like this article and despite it being a bit of a pain it always works in all contexts.
I generally use "services" when there's a glob of logic that works with a number of models and other things. For example, you might have some order checkout logic that has to write to a number of database tables, send email and other notifications, and ping a third party service or two. You don't want to tie all that logic to a single model as it crosses a number of boundaries, and you don't necessarily want it inside a view, as it might be run in some other context such as the command line, or perhaps run from different views.
On the other hand, if there is logic that is neatly encapsulated within a model instance or queryset, I don't see a good reason to artificially make a service for it. Generally refactoring into services is something I do when it's obvious what their responsibilities and boundaries are.
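As a minimal sketch of what I mean by the checkout example (all app, model, and helper names here are hypothetical, not from the article):

    # services.py -- hypothetical names throughout
    from django.db import transaction

    from shop.models import Order, Payment              # assumed models
    from shop import notifications, payments_gateway    # assumed helper modules

    def checkout_order(order_id: int) -> Order:
        order = Order.objects.get(pk=order_id)
        payment_ref = payments_gateway.charge(order)     # third-party call
        with transaction.atomic():                       # group the DB writes
            Payment.objects.create(order=order, reference=payment_ref)
            order.mark_paid()                            # a thin model method is still fine
        notifications.send_order_confirmation(order)     # email / other notifications
        return order

The view, the admin action, a celery task, and a management command can all call checkout_order() without caring where the logic lives.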
For the Rails developers like me who are thinking the Django crowd have lost their minds for even considering such a thing (and who went straight to the comments instead of reading the article), I just learnt that Django views are roughly equivalent to Rails controllers.
Yes this is a good compromise and something I do also - services for cross-model business logic that can be called from multiple entry points, e.g. views, admin, management command, celery tasks, etc.
I've gone back and forth on fat models vs service issues myself - I've landed on a bit of both in the end, but it's made harder by DRF not really recommending the fat model approach (with respect to validation).
Isn't it amazing we're still all poking around how to best represent business logic in software when this has been the whole damn point of what we're doing for the last 30 years at least? We got pretty good at the pure computing stuff pretty quick, but like hell we can all agree on how to represent that a food purchase over $100 by a user in Wyoming needs a 5% tax added.
The problem is that different levels of granularity imply enormous differences in representation.
If you operate in a single jurisdiction and sell just a couple of SKUs, handling tax can be a 20-line affair. On the other end you have SAP multi-million-line ERPs that handle tax as far as the eye can see, and you need literal specialists just for that.
That has touched on something deeply important. I wish I knew what :-)
My 2c is "software literacy". We live in a literate world so we all more or less can read the sentence "food purchase over $100 by a user in Wyoming needs a 5% tax added." and understand it.
But 98% of people could not scan the same in C, C++, Python etc.
And 98% of those of us that could won't understand the context the code sits in (what's the variable name 'WY_TAX_PCAGE'?)
I think your comment emphasises the importance of good function and variable naming. I've read some incredibly clear and obvious C code and some incredibly hard to understand Python code that both do the same thing - it basically all comes down to the naming choices used. `TAX_RATE_WY` would be far easier to understand and maintain (especially when seeing other state abbreviation suffixes nearby) and has the added benefit of using the same `TAX_RATE_` prefix for _all_ similar values so is much easier to pick up on when scanning through the code.
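As a toy illustration of that naming point (the rates are made up, not real tax rules):

    TAX_RATE_WY = 0.05    # Wyoming
    TAX_RATE_CA = 0.0725  # California

    def food_tax(amount: float, rate: float) -> float:
        """Tax added to food purchases over $100."""
        return amount * rate if amount > 100 else 0.0

The shared TAX_RATE_ prefix means the related constants sort and scan together, which is most of the battle.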
The science of reading and literacy would likely be a very valuable course to teach in CS programs.
As a startup CTO and heavy django user, I'd agree with the general assessment - there are some things I don't like stylistically... but that's style. Everything in here is well reasoned and systematically covered. Kudos to you and your team for thinking this through and sharing it.
As an example of some nitpicks:
* I never interact with the raw request data, I pass that through to a form or serializer for validation. It handles the null/blank string based on my configuration. In fairness to your approach, your code is much more readable and will be substantially easier for someone who is new to Django to pick up.
* I use mixins to handle fat models, while also pushing validation logic into serializers.
* I separate out all my files and prefer to have small, atomic files that are easily testable (e.g. models, views, etc. are all micro-sized files).
* args/kwargs is super handy when doing something really simple, then calling super (see the sketch after this list). I feel like this is an exception that should always be allowed, but otherwise strongly agree.
* I bias towards class use, even though I agree there's no real need. I find this to be a good convenience for new developers coming aboard.
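For the args/kwargs point, this is roughly the pattern I mean (a self-contained toy, not code from the guide):

    class BaseForm:                                # stand-in base class for the sketch
        def save(self, *args, **kwargs):
            ...

    class AuditedForm(BaseForm):
        def __init__(self, user):
            self.user = user

        def save(self, *args, **kwargs):
            self.audited_by = self.user            # the one simple extra step
            return super().save(*args, **kwargs)   # then pass everything through untouched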
I want to reiterate that I feel like these are nitpicks / minor disagreements of compromises. Using a document like this as a guidepost is awesome for a new team kickstarting with Django - better to have a reasonable, well thought out set of rules codified than to have a whole hodgepodge of design decisions.
On the point of classes and functions, it doesn't specifically address "function based views" and "class based views" of Django - it's mostly a tirade on OOP in Python. The majority of the Django world has long switched to class based views.
Yeah it definitely feels like it's from a different era, I got nostalgia just hovering over the links in Arial bold and thinking of all the hours I'd spend trying to come up with the perfect :hover color. Now it's just lighten 10%, and that's if you even bother since everyone is on their phone.
Not mentioned here but important, I strongly prefer and recommend an opinionated framework like django, rails, (possibly ember) over mix-and-match frameworks (express, flask) because they have already thought more about most design decisions than you want to (or your engineers). Especially for an early stage startup, the performance, scalability, or reliability of your stack isn't going to kill you. A long time to market will kill you, reinventing the wheel will kill you. When I see django apps built and developed, a shockingly high amount of effort is dedicated to actually solving business problems vs other less opinionated frameworks.
I've come full circle on this one and arrived at -- it depends on the team. Specifically, if you have teammates who know the purpose of these frameworks and all their parts, they might do well with a leaner library based approach. But I've met many devs now -- some much stronger in general than myself -- who I later realized don't actually know why these things exist. So they go without a validation or jobs layer from the start, and then we have to do a bunch of refactoring after a few months. My sniff test is to ask people about ORMs. If the person dislikes ORMs because "I know SQL, I don't need help with it" -- they probably don't know that ORMs _mostly_ exist to provide the common abstractions over SQL you'll inevitably need (or make) in a business logic heavy web app. A more experienced (with web frameworks) dev should use (or not) an ORM for more thoughtful reasons than "I know SQL" or "I used X and it sucked so I'd rather use nothing".
In short I generally agree, but I think some teams (in some contexts) can thoughtfully choose a more lightweight approach.
You are correct about time to market, but in my experience the more a framework does for you, the more friction it will give when you have market validation and need to build out. Keep it simple, use something like next.js, postgres, perhaps a query builder like knex.js, and start building. Easy to reason about, massive community to get developers and support. On top of it you can build it all using TypeScript and quickly refactor and iterate with confidence.
By the time you get to that point, you've won and can afford to hire a team to handle the friction to build out. People scale Rails and Django apps to large customer bases before having to start thinking about how to design for "web scale".
Problem is, you need to figure out a way to send and receive data from the backend (an API), a way to show validation errors, a way to do i18n and translations, a way to access your database (an ORM or similar... no, no raw sql, thanks), a way to do authentication and authorization, an admin or a cms to browse your database, a way to do background jobs or long running tasks, a way to send emails, a way to write CLI maintenance tools, a migrations system, and so many other things I could spend all day talking about.
Of course for a todo list app or a landing page it is an easier solution, but real world applications need a lot more things you'll have to figure out by yourself. I'm telling you this from my own experience with Next. What we used to take for granted with Django, we suddenly found out we had to rewrite a TON of code for ourselves, and glue together many libraries of varying quality and maintenance, and the result wasn't better documented, more performant, or more enjoyable than what Django used to provide us with.
Another huge advantage of opinionated frameworks is that someone who knows the framework will know how to navigate a new codebase day one. As a consultant, I've seen this countless times. So much so that unless I'm working on a pet project I like to stick to as much of a 'plain vanilla', idiomatic use of an opinionated framework like Rails or Django as possible.
The article is very nice, some great advice, and like others said it's not just relevant to Django.
One part I don't agree with is about frontend. Frameworks have stabilised quite a bit, pick React or Angular and you can expect support for years to come. I still maintain a 5+ year old React frontend with lots of logic, updating hasn't been an issue at all (codemods), since we moved it to CRA it has been even easier.
Things like formatting dates: you can wrap it in your own function then it takes a few minutes to swap the library, but you likely won't have to change it if you follow the tips in this article about dependencies, which are very good!
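The wrap-the-dependency idea, sketched in Python for brevity (the function name is mine; the same shape works in JS):

    import datetime

    def format_date(value: datetime.date) -> str:
        # Today this is the stdlib; if we ever switch to another date library,
        # only this function changes.
        return value.strftime("%d %b %Y")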
Anyway, great article, very good advice. I think it does a good job of explaining that you should think about why you do things and focus on what matters and makes a difference.
I love this guide, because the company I'm working for is seriously suffering software quality issues due to lack of proper /modern/ doctrine. The project was driven by a Java guy who didn't study Python+Django even the slightest. I think I should really share this with my colleagues.
> Rule #11: URL parameters are a scam
I think this section is going a bit too far. The examples are inappropriate.
First of all, you should not use hierarchical URLs for APIs. There's a good API design guide from MS[1], which discourages embedding resource hierarchy into API URL structure. Always design APIs around resources, and resource hierarchy is not a part of resource obviously.
Also, `GET /clothing/shirt/<str:shirt_id>/pants/<str:pants_id>/` is a completely invalid example for bashing URL parameters. This is clearly a search operation, and search queries SHOULD be passed as GET query parameters in the first place.
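To illustrate, a flat resource with the search criteria in query parameters might look like this in Django (names are made up, not from the article):

    from django.http import JsonResponse
    from django.urls import path

    def shirt_search(request):
        # GET /shirts/?color=blue&size=m  -- filters arrive as query params
        color = request.GET.get("color")
        size = request.GET.get("size")
        return JsonResponse({"color": color, "size": size, "results": []})

    urlpatterns = [
        path("shirts/", shirt_search),
    ]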
I also personally discourage toggling output using GET flags, because it kills caching. Just try to minimize the number of representations for each resource, and this will naturally reduce the number of endpoints and optional flags. If an attribute is big, remove it from the main representation and serve it through a dedicated endpoint.
Previously, our company tried to migrate an old service to Django + Google Datastore (w/ 3rd party library) + external access control framework. You see, it didn't survive long, and we had to start a new project. The attempt only increased the number of legacy systems, lol.
...
Apart from what's written here, there are a lot more ways to f** up the system. I know, one guide simply can't cover all possible cases, but I want you to know that the world is big and the possibilities are, of course, infinite.
> The project was driven by a Java guy who didn't study Python+Django even the slightest.
Ultimately it's not the language or the framework that matters but rather how fast you can learn what the market needs. A Java dev will be able to be productive in Java immediately, as will node dev in node, as in ruby dev in ruby.
This is great stuff! after working w/ various Django apps for years (anywhere from 3 dev teams to 200 dev teams), it's great to read stuff that confirms my biases :D
Regarding services, I'll go as far as to say adding ANY method on models instead of handling logic in services is a recipe for disaster. How many times have you seen:
    class GodModel:
        @property
        def status(self):
            # 1 million lines of logic and who knows how many queries
I've actually seen this pattern in every Django project :(
Regarding urls, instead of enforcing a flat file, I'd highly recommend always using django_extensions[0]. You'll get `shell_plus`, which auto-imports your models, and `show_urls`, which you can grep for an endpoint to find its handler.
Are there any good articles or examples you can share that elaborate on why using services is best? Writing a custom model manager method for these sorts of operations seems to work best. For instance, the create_account service could easily be part of the User.objects manager:
    class UserManager(models.Manager):
        def create_account(self, sanitized_username: str, ...):
            # the rest of the code in this method is the same as the example.
            ...
            return user_model, auth_token

    class User(models.Model):
        ...
        objects = UserManager()

    >>> User.objects.create_account(sanitized_username="blackrobot", ...)
    (<User: blackrobot>, 'fake-auth-token:12345')
The benefit here is that other parts of your code only need to import the User model to access the manager methods. It also allows for the User.objects.create_account(...) method to be used by related models, without risking a circular import, by using the fk model's Model._meta.get_field(...) method.
I'm not opposed to services, I just don't see when they'd be particularly useful.
I like your approach and I think what you’re proposing can also be fine in many situations. Managers are not the same as models and using them here is not drastically different than using a separate service class/function. Managers can be accessed through the model and they have “enough” exposure to table wide operation (querysets). I usually start with managers in a separate file (managers.py) for my business logic and when the project grows, I extract the logic into services in a way that only queryset definitions remain in the manager. You can mock manager’s methods for tests (get_queryset) and the business logic code in them can be written in a relatively portable manner.
It might be a little bit more convenient, but really, models are central to everything else. You're spamming your most central code with arbitrary crap that you are only interested in perhaps 0.1% of the time.
Once you get out of the OOP mentality, it's much easier to shuffle code around, and keeping things that logically belong together close to each other in separate files. Move the crap out of the way and enjoy the cleanliness. Less mental overhead helps you make better decisions faster.
And yes, sometimes you have to deal with a circular import, but it's not the end of the world, just decide which file is the most basic, and don't let that import other less basic files at the top level, but only inside functions. Or try to decouple the logic.
Isn't a mix of fat models and services best? Say for a user model you have first name, middle, and last name. You add a property "full_name" that joins those 3. Putting that logic in a service feels confusing and unintuitive.
On the other end, if you have a complex auth mechanism that needs to talk to several external APIs, putting that in a service feels natural. You're making remote API calls, possibly pulling in other models, and it's a clearly defined "business area".
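Roughly the split I mean, as a sketch (the field and function names are invented):

    # models.py
    from django.db import models

    class User(models.Model):
        first_name = models.CharField(max_length=100)
        middle_name = models.CharField(max_length=100, blank=True)
        last_name = models.CharField(max_length=100)

        @property
        def full_name(self) -> str:
            parts = [self.first_name, self.middle_name, self.last_name]
            return " ".join(p for p in parts if p)

    # services.py -- anything that crosses boundaries (external APIs, other models)
    def authenticate_user(user: User, token: str) -> bool:
        ...  # talk to the identity provider, write audit records, etc.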
In my opinion and experience, treating the model as anything but a way to talk to the database behind a service interface is a very slippery slope.
My service methods receive and return pure objects (pydantic or attrs) that I serialize from the models. No other part of the app gets to pass around that service’s model, updating it willy nilly, maybe saving the updates, maybe not.
The service completely hides the model and all corresponding persistence logic behind its interface.
The decoupling you achieve is worth the extra boilerplate. It’s the only way I have ever seen Django apps not become giant balls of mud.
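A minimal sketch of that boundary, assuming pydantic and a hypothetical Account model (none of this is from the article):

    from pydantic import BaseModel

    class AccountDTO(BaseModel):
        id: int
        email: str
        is_active: bool

    # services.py -- the only place allowed to touch the Django model
    def get_account(account_id: int) -> AccountDTO:
        from billing.models import Account               # assumed app/model
        account = Account.objects.get(pk=account_id)
        return AccountDTO(id=account.id, email=account.email, is_active=account.is_active)

    def deactivate_account(account_id: int) -> AccountDTO:
        from billing.models import Account
        Account.objects.filter(pk=account_id).update(is_active=False)
        return get_account(account_id)

Callers only ever see AccountDTO, so they can't half-save a model instance behind the service's back.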
Reading your example code and explanation already makes me hope I never have to open my debugger on this code. :)
A simple service that I explicitly import and call methods on is so much easier to understand. Hell, even if all services were global, singleton, objects with static methods that'd even be preferable.
That logic has to go somewhere right? I'm not sure what the issue is.
There's the "heuristic" of expecting a property to not be expensive to access (which django already kind of throws out the window depending on how you fetch the model), but otherwise I don't see how services fixes this. Is copy and pasting that code over to a services.py file better?
Services are very nice for dealing with python's import issue. Accessing other models from a method / property on a model is ugly. But it's very hard to structure and name services in a logical way, especially when the lines start to cross. You end up with a "shared_services.py".
Keeping the business logic in a service (aka Anemic Model in Domain Driven Design terms) allows you to change the business logic using different versions of the same service. You can then inject the appropriate implementation for a given context using IoC (Inversion of Control).
I'm at a crossroads for a hobby project I'm working on. I have a model for youtube video metadata. I have functions to do things like transform the metadata into display values to be rendered to template.
Should that logic be a function that takes a youtube metadata model object, or a class method on the model to return display values?
There are more functions of increasing complexity after this display function.
If you're not able to quickly answer it, I'd put it where you feel like you'd first look for it (where it feels natural) and move on - don't try to optimize too early. The correct answer is going to depend heavily on what your app is doing, how it's structured, etc, so providing a good answer from an HN comment is going to be really hard. It's ok to be "wrong", just make sure you're consistently wrong so when you need to refactor it's not so bad. Eventually the answer will become clear...and if it never does, you probably made the right choice and saved a lot of time :)
You didn't fake it, you're making it. Whether you learned along the way or had book knowledge or experience from before isn't something anyone will care about.
One option would be to define this View Model that takes your Model in the constructor, then make this view model object available in the template context.
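Something like this, roughly (the metadata field names are assumptions):

    class VideoViewModel:
        def __init__(self, video):                  # `video` is your metadata model instance
            self._video = video

        @property
        def display_title(self) -> str:
            return self._video.title.strip()

        @property
        def display_duration(self) -> str:
            minutes, seconds = divmod(self._video.duration_seconds, 60)
            return f"{minutes}:{seconds:02d}"

    # in the view: context = {"video": VideoViewModel(video)}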
I wrote my company’s giant backend in Django, and while I don’t regret it, five years in the ORM still makes no sense to me for more complex cases, and I really, really miss languages with better type checking. The current options are not good enough or have no library adoption.
The article makes a lot of sense to me. My main gripe is with the choice of Django (which is a given, I understand that).
Anno 2021 I'd never choose a lang+FW that does not bring strong types, and type safety over several system barriers.
Like using Kotlin+jOOQ for the backend. Or use no backend at all and use Hasura with a generated client lib in Elm. Or Rust with Yew.
I know these are not as much used as Django/Rails/Express/Spring+Hibernate, but I'm just not ready to walk into the no-type-safety swamp again out of my free will.
It's hard to do static typing with DRF, even with the drf-stubs package. Good luck statically typing your managers; it's impossible due to circular imports. It's easier if you add a services layer, but not many people like it. There's also a lib which allows you to convert db entities into dataclasses so it's easier to throw types around, but it's not a "Djangoese" approach so I'd be very careful with that.
With microframeworks it's easier, as you can structure your project as you like and add more layers of abstraction so it's not that tightly coupled, e.g. SQLAlchemy -> Pydantic schema.
As I'm reading the article and the discussion here, most of the talk centers around REST. How are folks supporting a GraphQL interface with Django? Also,
> Rule #17: Keep logic out of the front end
I'm curious how folks who use a db-to-api tool (e.g. Hasura) think of this? I'm currently working on a codebase that has Hasura and we're doing, what seems to me, a massive amount of logic in React. But as far as I know, there's no alternative under that scheme.
I'm of the view that most of the DB-to-API tools are about empowering front-end developers to build full-stack applications. If you're already perfectly happy writing full-stack code, you're much less likely to be the target audience of such products. By essentially exposing your ORM over the wire, all you're doing is punting the responsibility of making sense of that data into the client. Increasingly i've been finding that the shape of persisted data isn't the best shape for _understanding_ that data, so I nearly always want some business logic layer that acts as a translator between the database and API domains.
Most of my work over the last few years (especially with using Relay + GraphQL) has been about pushing business logic back out of the client and into the server where it belongs for most apps (this isn't a universal rule). Client code can still do lightweight transformation of data, but i'm a huge proponent of keeping client codebases as lean as possible.
We are doing the same, and are quite happy with it.
This means that very often, implementing a new feature can be done exclusively on the frontend, as opposed to spending time on both the frontend and the backend as we would with the old-school approach.
With Hasura, if you want some server-side logic exposed as an API, you could try 2 approaches:
1. Write a REST/GraphQL API in your favourite framework and bring them in as Actions/Remote-Schemas in Hasura. Hasura will add them to the GraphQL API.
2. If you like, and if it's possible, you can also abstract away logic in database functions and then Hasura will expose them.
I use Elm on the frontend with a fully type-safe generated client lib (based on the GraphQL schema Hasura provides). Full type safety over the API barrier and in the frontend.
Having worked at two unicorn-ish startups that were founded and grew up on Python stacks I specifically want to work on a statically typed backend for my next job. I'm tired of the mishmash that is a large mature Python codebase.
> But even if we don't get the full benefits of static typing (more useful IDEs, better performance, eliminating certain classes of errors, etc.), at least we mostly don't have to deal with the insane errors that you get with implicit type coercion in JavaScript.
I find vscode + pyright useful for static typing. My only complaint is that pylance is closed source.
Better performance - transpile to a statically typed language. Forces you to stick to a small/sane subset of python.
I'm in a similar boat coming from a Rails mishmash and looking at our next product, particularly debating JS vs Kotlin. While the upsides of static typing are pretty clear, here's my problem. All our app really does is receive a JSON request, read some data from a database, do some computation (this is the step where static would be nice), write some data to a database, and then respond with more JSON. So the inputs and outputs of the system are collections of heterogeneous maps that static typing systems don't handle well. So if I want those nice static guarantees in my code, I need to write layers of ORM and serializer code to get data in and out of my system. I'm not sure the static-typing benefits outweigh the cost of maintaining this additional layer of data transformation.
Given that you're coming from Rails you may be pleasantly surprised by DotNet Core 5 (C#).
In addition to the static typing, by choosing an MVC project with an Entity Framework (EF) backend you get something similar to active record, with full-stack views where your database can be connected to your frontend if you like, scaffolding, migrations, seeding, relationships with navigational properties defined in your models, routing by discovery/convention, validation by model attributes, strongly-typed views, out-of-the-box client-side validation based on the same attributes, and server hot reload ('dotnet watch').
Through generic types and EF you don't need to write ORM or serialiser/mapping stuff, and at the same time you gain self-contained releases (no framework installations needed on the server) and a very performant stack with great language features.
Core may come from Microsoft, but it's free, open source, cross-platform, fast, and supported (plus Visual Studio has a free community edition). It's seen as an enterprise thing, but is a fantastic option for small projects too.
I must admit that despite using C# since around 2001 I've never done more than glance at F#, so can't answer from the language perspective.
However as all the DotNet languages use the same DotNet Core framework (which despite the name is not necessary on the deploy servers) they use the same libraries and modules. So whilst I don't know how seamlessly they integrate into the F# code, they should certainly be available.
Yep, this is where you can end up with tons of mapping boilerplate and potential bugs. I tend to reach for code-generation tools, schema based protocols like Protocol Buffers, and prefer code simplicity over performance concerns. The hope is that the price you pay for building around strict consistent data models across all your systems will pay off in maintainability and fewer bugs.
A well written and interesting article, but I strongly disagree with
> There are many reasons why, when it comes to SaaS startups, having a SPA front end that's powered by a REST API is better than having a monolith where the front end is a mix of templates and javascript.
If your SaaS isn't something that depends on highly dynamic data, like a stock trading application, then it often isn't worth it.
I'm firmly in the DHH camp, that a "monolith, with Javascript sprinkles" is easier to begin with, especially considering testing (UI testing is more brittle, compared to asserting against templates) and validation (SPA introduces the need to validate both in the frontend and server side).
I've seen projects get completely lost in the overhead of Javascript tooling, adding no value over what can be achieved using server-side generated HTML templates.
I would, however, advocate keeping controllers "thin", passing responsibility to a service layer, so that if you need a JSON API in the future, you can build this out using existing services.
I agreed. Then did a project[1] with Hasura and a generated client lib in Elm and I'm no longer looking back. If I can get away with "no backend code" I'll do it again in a heartbeat.
> Most existing advice on software architecture is written for $10B+ companies, and as such tends to focus on maximizing things like performance, scalability, availability, reliability, etc.
Definitely.
I'd argue GraphQL is a good example of this as it solves issues that only very large teams/apps have.
> most software architecture advice is for $10B+ companies
I agree with this point and pointed out in a previous comment[0] that as an industry, small companies lose out because they're chasing a platform. And are doing things the FAANG way, which they don't need.
> the other way, big tech suppresses innovation is by brain-washing or propaganda. look at all the companies formed by former faang engineers. majority of them go for vc-funding, use complicated tools at the onset. not because it's a need but because that's what they're used to. what would've been a simple frontend is now a monorepo monster, a simple backend a mesh of microservices. lastly, big tech suppresses competition by open sourcing tools that are technically excellent but not needed by 90% of the companies out there. instead of companies innovating the companies are now chasing the platform i.e trying to keep up with the tools released by big tech. this is a microsoft playbook.
Like others have commented, I might not agree with everything, but that was a good read.
I'm a Node user and agree with many of the points here.
For the REST API, my work is 99% writing queries and business logic. I use Fastify because it includes sanitization/validation out of the box. I use Fauna as the data layer because it solves authentication/authorization out of the box too.
> URL parameters are a scam
I agree with the author that URL params do not solve all problems, but OTOH they do cover +90% of use cases for CRUD stuff.
Also, most routers I know of specify routes with URLs. Defaulting to using URLs with multiple query params can get pretty messy.
2. doing real world situations with slightly weird scenarios: This is very confusing. Why did they make this so complicated?
3. after digging into the source code and reading the docs: Oh this makes sense after all. I'm going to extend it where need be but overall stick to the pattern
4. after hiring 5 engineers to work on the same code base: DRF is the bible and we shall heap scorn on all MRs that deviate from its mantras.
I have started building a new product. I used standard serializers for everything I can get away with.
If I hit a pain point with a serializer that I can't solve in 10 minutes I drop back to manual serialization methods and leave it at that.
When I have to tidy up or bring on more people, we try to do everything by the book because, as you said, there are a lot of very good reasons why REST framework is structured the way it is, and the predictability it brings is worth the cost of dealing with edge-case quirkiness.
It is pretty confusing, which is why I advocate using only the core pieces of it. This way you get basically all of the benefits, but without each new developer needing to read through hundreds of pages of extra documentation to try to understand lots of random design patterns that might not even be a good idea in the first place.
When I've created style guides for startups, I've been pretty explicit about saying "We use REST framework, but here are all the pieces we don't use, and these are all of the sections of documentation that you don't need to read or understand."
In my experience, most CRUD flask apps end up replicating 80-90% of django projects with 200-300% of the effort. Heck, getting pytest working with flask and sqlalchemy is still a struggle.
ORM, IAM, admin interface, are all included or supported at a decent enterprise-level by Django and will save a TON of early dev hours and rookie mistakes. Remember, this advice is geared at small organizations (startups), which means there is plenty of backend and internal infrastructure required.
Yes, I also found the Django ORM to be absolutely horrendous. The need for using a serializer was also incredibly off-putting, and the default behavior of said serializers was almost always unhelpful and in fact caused more work than just crafting a JSON (or XML) object as I would do in Flask. Why some developers prefer these awful abstractions is beyond me.
I actually like the Django ORM. Something that I learned working with Django for 8+ years is that to use its ORM effectively, you have to play Django's game. Sometimes just denormalizing a few fields will make some queries/relationships way easier.
At work we have a marketplace engine and a digital wallet/financial app built entirely with Django (100k+ loc) and we don't have a single raw SQL query, everything is done with the Django ORM.
The ORM works pretty well as long as you only need to do pretty basic CRUD queries, though I do agree that it gets much worse as your queries get more complex.
Serializers are definitely one of the worst parts of DRF. Anecdotally, I used Pydantic (instead of Marshmallow as the author recommends) to get around this.
I use pydantic for serializers in Django too- and recently started experimenting with Django Ninja https://django-ninja.rest-framework.com/ if you haven't seen it already.
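For anyone curious what that looks like in practice, a rough sketch of Pydantic doing the serializer's job in a plain Django view (field names are made up; this uses the classic parse_raw/dict API):

    from django.http import JsonResponse
    from pydantic import BaseModel, ValidationError

    class SignupPayload(BaseModel):
        username: str
        email: str

    def signup(request):
        try:
            payload = SignupPayload.parse_raw(request.body)
        except ValidationError as exc:
            return JsonResponse({"errors": exc.errors()}, status=400)
        # ... create the user from payload.username / payload.email ...
        return JsonResponse(payload.dict(), status=201)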
The learning curve is steeper than a lot of stuff in the Django ecosystem but I felt the payoff was well worth it. It’s a bit like Django itself: maximally decomposable and expressive, so much so that the “right” way to do something can be unclear at first. I found it hard to wrap my head around the division of labor between serializers, views and viewsets, but it eventually made sense and now I see the elegance in it.
IMO, Django Rest Framework has too much magic for my tastes. I haven't spent much time with it (which is maybe why it's so confusing), but it always seems like I need to set certain properties and then it "just works"; I haven't found a good list of those magic properties.
The reality is that DRF documentation can fail at the edges because it's a big framework; however, it's ridiculously well documented compared to 95% of frameworks or libraries I've used.
For the edge cases, you can read the source code; it's very clear and well commented. And if you fail to understand a feature, it's relatively easy to drop to a lower level of abstraction without losing too much.
Ex: don't like nested serializers in a read-only method? Just write a serializer method and return your own JSON. Can't grok ViewSets with mixins? Just use a regular API view and override what you need. Don't like routers? Use regular views, and so on.
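The "drop down a level" move is usually this small: a plain APIView that returns exactly the JSON you want (the endpoint here is invented):

    from rest_framework.response import Response
    from rest_framework.views import APIView

    class OrderSummaryView(APIView):
        def get(self, request, order_id):
            summary = {"id": order_id, "status": "shipped"}   # stand-in for the real lookup
            return Response({"order": summary})

    # urls.py: path("orders/<int:order_id>/summary/", OrderSummaryView.as_view())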
I want to agree and disagree with a minor point. The argument that error codes should be digits[0] is misguided. Errors should be named. E.g.: {"invalid_username": "An invalid username was provided."} is better than {"40001": "An invalid username was provided."}.
Why? Because you'll often add coded errors far into the future, or multiple at the same time on different branches. When you merge, two errors will have the same code but totally different messages. This can lead to regressions in userspace or annoying merges where an error code has to be manually looked up and changed.
It's unlikely that you'll define the "invalid_username" error code more than once, and if you do it will likely have a similar error message. Plus, in your test suite `assert error.code == "invalid_username"` is much more explanatory than `assert error.code == 40001`.
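To make it concrete, a tiny sketch of the named-codes approach (codes and messages are made up):

    ERRORS = {
        "invalid_username": "An invalid username was provided.",
        "account_deactivated": "This account has been deactivated.",
    }

    def error_response(code: str) -> dict:
        return {"code": code, "message": ERRORS[code]}

    assert error_response("invalid_username")["code"] == "invalid_username"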
Merge conflicts come with helpful characters ("<<<<<<<<") to show where the conflict originated. If you merge two branches (especially if they are large or reviewed on separate days), it's pretty easy to overlook the fact that they share an error code. This will not raise a conflict in git. But it is in fact a merge error, and users will experience a regression because of it.
> Including a unique code with an error message does help with SEO when users search for a solution.
Maybe. But I don't see how Googling "Product name 40001 error" is better than the user reading "account_deactivated", understanding intuitively what happened and Googling "Product name reactivate account" or "Product name account deactivated".
It sometimes depends if you are relying on first party documentation or not.
I have used systems with full and proper documentation and searching for an error code will give an exact match and as much detail as I want, sometimes in offline documentation and sometimes online. While error messages often cause problems on Google.
For your example, try selecting it and searching 'account_deactivated' in Google.
    Showing results for account deactivated
    Search instead for account_deactivated
Click that 'search instead' link. Google clearly ignores the underscore in the results so it's not doing what it says it will. The way around this is to quote the text but IMO searching something like '<productname> error 0x5678' is clear, reliable, and easy. These will often be paired with a token like your example anyway, such as IRQL_NOT_LESS_OR_EQUAL, so you get both.
I agree with most of what the author advocates for. However, I'd like to point out that the first priority while building a startup is to get everything working fast.
If your idea works, there'd be a lot of users/clients, you'll probably get funding and you can go ahead and re-write your code to your specific use cases with a larger and better team.
The author mentions some really great and valid points on improving code quality; the one I disagree with is using FBVs over CBVs. Maintaining class-based views is inherently easier than maintaining FBVs. CBVs are well-defined and can be modified more easily in less time, essentially performing the same tasks as FBVs for a particular piece of logic.
However, when using APIs, I do think that FBVs are more clear in some cases and have better readability. At the same time though, CBVs make sense when you're performing only one of GET, POST, or DELETE operations.
Inheritance is not as hard to figure out in these classes as the author claims it is. In some cases, it might become overwhelming but that is not going to happen at all if your app never grows which leads me to my original point, you need to build whatever you are working on fast.
CBVs and fat models (which I also suggest using) are easier to implement than writing more code in scattered FBVs.
Why do I suggest fat models?
Adding well defined methods that relate to the context of your model inside your model, in my opinion, increases readability and makes testing easier.
Again, the goal is to build and grow a startup. You can always implement an entirely different approach that works specifically for your use case later. Even then, I think CBVs would be useful.
The issue with fat models is that if you use DB models and then put business logic in there (assuming that's what you mean by fat models), things fail pretty quickly.
But like you rightly point out here that is not a good reason to avoid class based views. They can be simple to understand and maintain.
What I've found useful is to maintain a two level hierarchy. I've used the term "view models" to describe the classes that feed into CBVs and "db models" for django models.
They could both be dataclasses with declarative mapping between the two, so you don't write the tedious code of parsing database results into view models.
Your dismissal of both the DRF tutorial example and generics as a whole is entirely unconvincing. For a very simple example such as the one provided the suggestion to not use them appears reasonable, only because the accompanying serializers which answer your questions in an even more clear and declarative manner are not included. For more complicated endpoints your suggestion becomes an absolute nightmare.
> I was surprised that you didn't advocate for using traditional server-rendered HTML for views that don't require significant frontend interactivity/client-side state management?
I don't know that there's one best choice for how to render the front end. I generally like Angular because it's strongly opinionated, and in general, strong opinions in software leads to faster velocity and lower TCO. But I don't think that Angular is the universal best choice.
I do however think having the front end powered by a REST API is 100% the right move, because it draws a line in the sand where if private data or incorrect data are getting returned then there is clearly an issue. Having a specific place (the JSON response) where you can write tests against is a huge win, one big enough that imho it outweighs every possible disadvantage of REST.
To be honest, I was surprised that you advocated writing so many tests for each view, easily 10-20 per view.
Part of the beauty of Django's various class based interfaces is that you can be confident that if you add a validator to a model field, then it will be validated by the corresponding ModelForm in the corresponding CreateView.
But then again, you advocate against OOP in python including CBVs. If you do write all your endpoints as functions, it makes more sense you need to test it because it's easy to forget to include a line of validation or whatever it is.
FWIW I love the fact that various Django classes are like DSLs. DSLs are less powerful by construction, so less buggy. It is almost like using low-code. But I do see your point about tracking inheritance and control flow, it can be a challenge at scale.
Just started building my SaaS startup with Python (FastAPI, Pydantic, postgres). Any tips on alternative stacks if they can massively impact productivity?
The biggest productivity impact is not switching frameworks. That out of the way, Django is the OG batteries included (DRF is top API choice), Flask is the OG micro-framework, FastAPI is the new hotness, Starlette is in FastAPI (and maybe better?), Pyramid has some cool ideas, and then there are more that I can't say anything on (Bottle, Falcon, Sanic, Hug).
I'm using Starlette on its own, with a few other deps like Marshmallow, and it's been enjoyable and hassle-free. In the past I've done significant django/flask/DRF work and really feel like Starlette is the successor of all of them.
I'm under the impression there's only a handful of people who use Starlette/FastAPI AND have proper experience with DRF. I'm in a similar boat: I have experience with Django/Flask/DRF and was thinking about starting a project with Starlette (I used it before and I prefer it over FastAPI), but I've decided to continue my DRF journey. Would you mind sharing why, in your opinion, Starlette might be the successor of DRF?
I would say my favorite things about Starlette are:
- refined - it feels like all the lessons from DRF were put into Starlette.
- lightweight - minimal dependencies, isn't built specifically for relational data (no orm or other constructs from django that were originally built around models etc)
- flexible - has just enough to build the basics without being too opinionated (app/request/response), lets you choose how to handle things like serialization (e.g. add marshmallow)
- documentation - like DRF, has very solid documentation.
- asgi - the successor to wsgi in many ways, allows things like background tasks so I don't need to set up celery.
- authentication/middleware - easy to set up and customize, again clear documentation and api surface
There's probably more, such as its support for websockets, push, and more, but for me I'm just running a standard http api (sqwok.im).
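For anyone who hasn't tried it, the hello-world shape of a Starlette app is about this small (not my actual app code; run it with uvicorn):

    from starlette.applications import Starlette
    from starlette.responses import JSONResponse
    from starlette.routing import Route

    async def homepage(request):
        return JSONResponse({"hello": "world"})

    app = Starlette(routes=[Route("/", homepage)])
    # run with: uvicorn module_name:app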
Assuming you are a Python developer: That's a great stack. I work at a recently minted unicorn and that's what a big chunk of our service layer is written in.
Only caveat is that it's not as opinionated as Django, so you'll need to document some rules as to how you structure your code. For what it's worth, this article gives some good advice you can apply.
Best productivity hack I can recommend is that it's worth investing time into observability once you reach product/market fit: you will slowly have to invest more labor into troubleshooting your application rather than merely adding more product features. And depending on your market, some customers might be very sensitive to that.
We recently started working with "traditional" Django + Hotwire + React wrapped into web components for interactivity.
We've previously tried React+Hasura/Postgraphile and React+DRF/GraphQL, but this is next level productive and a joy to use, and bugs are rare. Particular love for Django CBVs.
I understand rule #11. The question that occurred to me was:
How do the various approaches (and the recommended approach) affect SEO?
Is a rule like this more about creating URLs that make sense to a human? If so, how important is that really?
No opinion at this point, just wondering. Cognitive load (one of the arguments in support of the suggested approach) isn't all that meaningful when it comes to SEO. Right?
This is really amazing. As a Rails person who enjoyed having people much more experienced than I am hand me Rails best practices, Django always felt a bit daunting the few times I tried it because there weren't as many best practices to follow.
Very much enjoyed this post and it gives me confidence to try my next project in django
The up-front costs are certainly higher in terms of the learning curve for the average dev, but it sure beats rewriting the whole thing every few years. reagent dates back to 2013 and has been mostly unscathed by the JS ecosystem churn. I'm under the impression that some (JS-based) React devs catch on to reagent pretty quickly, so the up-front costs might not even be that high.
All this, and if you have really complicated state, check out Fulcro, which gives you a normalized database in your browser, among other things: https://book.fulcrologic.com/
But isn't Fulcro based on Om, not Reagent? How is that an option for React devs? I was surprised when Fulcro went with Om as it's generally inferior to Reagent/re-frame. Om was heavily hyped by David Nolen when Clojurescript first appeared but Reagent soon became the better option for SPAs and re-frame followed suit as Clojurescript's Redux.
Why can't React devs use Om or Fulcro? Lots of Fulcro was inspired by Om, but it's at least mostly, if not entirely, a rewrite of David Nolen's code, and the ideas and capabilities of Fulcro now go well beyond the scope of Om.
Personally I think the MVC framework causes more overhead for startups. It's like trying to shoehorn your use case into complexity for nothing. Classic example of a best practice that needs to be revisited
Lots of good wisdom in here, which is clearly hard-won. I'm seeing a lot of common themes with the pain points that I've found working with Django over the years, and while I agree with (or at least don't disagree with) the majority of these, in some cases I've gone with different solutions:
1. I agree with the recommendation to decouple serializers from models, although I haven't finished this migration in my current codebase. In practice all of our ModelSerializers override many field definitions, and have as much boilerplate as if we built them from scratch. However the ModelSerializer approach is really productive for the first stages of your product when you're just cranking out CRUD views. (Maybe this is a trap.)
2. I've also found "Fat models" to be quite painful in Django. Fat services is probably the most obvious approach to solving this, and it's what we evolved into. Another approach I've been exploring is going full DDD (i.e. your Domain Models are not ActiveRecord / Django DB models).
A common concern I have with both of these approaches is: if you're eschewing the Model<>Serializer linkage, and you're also trying to treat your ORM models as dumb data mappers instead of fat models with logic, then what are you gaining from a heavyweight framework like Django? Wouldn't you be better with FastAPI/Flask using Marshmallow for the API, and SQLAlchemy for the ORM? The main answer I come back to is "Django Admin" but with things like Forest Admin and hopefully one day an OpenAPI admin from FastAPI, this benefit may evaporate too.
3. Regarding "Hungarian notation", I'm a bit dubious about using `_list`, `_set`, etc. in variable names; type hints are working out better for me in this area. Prior to adding type hints I did have some coding standards around explicitly suffixing functions that return QuerySets, because this is the one case I think you actually need to care about when reading code -- if you mistake a string for a list, your tests will fail. If you mistake a QS for a list, your code will pass, but you might accidentally trigger O(N) DB lookups where you should have O(1). (There's a small sketch of this after the list.)
4. RE: integration tests on views, my codebase is structured similarly, and I prefer to target services. You should have tests on your views too of course, but I think it's more ergonomic to drive your business logic tests at the service layer, without the DRF API machinery in between you and your code under test (i.e. so you see the actual domain exceptions, and not just the API response codes). We get view coverage by layering some happy-path API tests on top, and a small number of long-form scenario-based integration tests for full workflows using a discursive black-box API testing style.
If your domain is simple, then view-based integration tests probably suffice. It's probably where I'd start for a small project now. Operating in a fairly complex domain model, we write lots of UTs around complex business domain rules. "Test at the lowest level that lets you clearly describe the feature you are testing" is my general approach. In my experience the view layer is pretty boring, and we seldom find any interesting bugs there, so I don't feel the need to spend much time testing it (it's just boilerplate). The domain/service layer is where all our bugs tend to be. If you trust that your service layer is well-tested, then your view-tests can be quite simplistic.
I'd note that this point is also vulnerable to the fuzziness of what actually demarcates UT vs. IT, and I think that definition probably drives a lot of the differences in which slogan folks advocate here. I definitely don't go with the "UTs cannot touch the DB" approach, though I'd like to experiment in that direction.
Perhaps from the author's rigorous definition of API error codes, the response to this point would be something like "your API error responses should be rich enough that you can debug all your integration tests with them". I can see that might be a good line to draw in the sand.
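To make points 1 and 2 concrete, here's a minimal sketch of the shape I mean: an explicit (non-Model) serializer for input, a service function that owns the business rule, and a thin DRF view gluing them together. The `Invoice` model, the `billing` app, the field names, and the credit-limit rule are all hypothetical, not taken from the article.

```python
from django.db import transaction
from rest_framework import serializers, status
from rest_framework.response import Response
from rest_framework.views import APIView

from billing.models import Invoice  # hypothetical app and model


# Point 1: an explicit serializer -- every field spelled out, nothing inferred
# from the model, so the API contract is decoupled from the DB schema.
class InvoiceInputSerializer(serializers.Serializer):
    customer_id = serializers.IntegerField()
    amount_cents = serializers.IntegerField(min_value=0)
    memo = serializers.CharField(max_length=200, allow_blank=True, default="")


# Point 2: the service layer owns the business rules; the model stays a dumb
# data mapper and the view never touches the ORM directly.
class CreditLimitExceeded(Exception):
    pass


@transaction.atomic
def create_invoice(*, customer_id: int, amount_cents: int, memo: str = "") -> Invoice:
    if amount_cents > 1_000_000:  # stand-in business rule
        raise CreditLimitExceeded(customer_id)
    return Invoice.objects.create(
        customer_id=customer_id, amount_cents=amount_cents, memo=memo
    )


# The view just validates input, calls the service, and maps domain errors
# to HTTP responses.
class InvoiceCreateView(APIView):
    def post(self, request):
        serializer = InvoiceInputSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        try:
            invoice = create_invoice(**serializer.validated_data)
        except CreditLimitExceeded:
            return Response({"detail": "Credit limit exceeded."},
                            status=status.HTTP_409_CONFLICT)
        return Response({"id": invoice.id}, status=status.HTTP_201_CREATED)
```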
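And continuing the same sketch for points 3 and 4: a type-hinted helper that makes the QuerySet-vs-list distinction visible without a Hungarian suffix, plus a service-level test that asserts on the domain exception rather than an HTTP status code. This assumes Django 3.1+ (for the subscriptable `QuerySet`) and pytest-django; again, all names are hypothetical.

```python
import pytest
from django.db.models import QuerySet

# Invoice, create_invoice and CreditLimitExceeded come from the sketch above.


# Point 3: the return annotation, not a `_qs` suffix, tells the reader this is
# a lazy QuerySet rather than a materialized list.
def unpaid_invoices(customer_id: int) -> QuerySet[Invoice]:
    return Invoice.objects.filter(customer_id=customer_id, paid=False)


# Point 4: drive the business-rule test at the service layer, so the failure
# surfaces as a domain exception instead of a 4xx response.
@pytest.mark.django_db
def test_create_invoice_rejects_amounts_over_the_limit():
    with pytest.raises(CreditLimitExceeded):
        create_invoice(customer_id=1, amount_cents=2_000_000)
```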
Excellent article -- despite (like others) disagreeing with several of his specific points, I think his overall advice to optimize for predictability, readability, and simplicity is great.
A few years ago I took over as tech lead at a previous workplace where the only tool they had used was C++ (really more like C with classes) with no use of the STL or other libraries they didn't build in-house. And this was for a hotel reviews and photos startup! At one point, in a few weeks we replaced 20,000 lines of horribly low-level C++ custom client and server code with 1000 lines of Ansible scripts. https://benhoyt.com/writings/using-ansible-to-restore-develo...
I disagree with his point about foo_list and bar_dict suffixes. In Python it goes against the duck-typing philosophy (and in statically typed languages you definitely don't need them). You should be able to write a function that takes an iterable sequence, whether it's a list, set, or other container type. I think it's simpler to name "user_list" just "users", and if it's a dict, name it for what the keys represent, for example "users_by_id".
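A tiny illustration of that naming convention; the `User` type, its `deactivate` method, and its `id` attribute are hypothetical:

```python
from collections.abc import Iterable


def deactivate(users: Iterable["User"]) -> None:
    # "users", not "user_list": any iterable of users will do (list, set, QuerySet).
    for user in users:
        user.deactivate()


def index_users(users: Iterable["User"]) -> dict[int, "User"]:
    # A dict named for what its keys represent: users_by_id = index_users(...)
    return {user.id: user for user in users}
```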
As the author of the article he linked to about escaping outputs rather than sanitizing inputs (https://benhoyt.com/writings/dont-sanitize-do-escape/), thanks for linking it. :-) The OP does provide some counterpoint to my article, but I think he glosses over the real issues here.
I still believe it makes little sense to "sanitize" input generally (you actually can't), and it gives a false sense of security -- what characters you need to sanitize depends entirely on what destination you're outputting to (and often there are multiple targets, like HTML and SQL). As just one example, if you're sanitizing a name field for "Conan O'Brien", do you strip the ' character? For SQL you definitely have to. For HTML attributes you need to as well, since sometimes those are single-quoted. But you can't actually strip it, because it's part of his name! (See what I did there.) And even if you do sanitize, you still need to escape output correctly to avoid security holes. My answer is to use tools that handle this correctly on the output side (a good DB library, an ORM, an HTML template library with auto-escaping, and so on). Django's ORM and template libraries auto-escape already, so why risk mangling people's names, email addresses, or other input? (Another example: so many systems mangle the "+" in email addresses: https://gmail.googleblog.com/2008/03/2-hidden-ways-to-get-mo...)
That said, this is debating the individual points ... again, I really appreciate his thoroughness and his overall point.
> My answer is to use tools that handle this correctly on the output side (a good db library, an ORM, an HTML template library with auto-escaping, and so on).
Came here to say exactly this! Unless you're building raw SQL queries out of the strings, or assigning them to element.innerHTML, you shouldn't have to worry about it. Django ORM / Django templates / React / Vue / ... will correctly escape things for you.
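For a concrete picture, here's a minimal sketch of storing the raw value and letting each output layer escape it. It assumes a configured Django project, and the `people` table is hypothetical.

```python
from django.db import connection
from django.template import Context, Template

name = "Conan O'Brien"  # stored exactly as entered, apostrophe and all

# SQL: the parameter is bound by the driver, so no quoting or stripping is
# needed in application code.
with connection.cursor() as cursor:
    cursor.execute("SELECT id FROM people WHERE name = %s", [name])

# HTML: Django templates auto-escape by default, so the quote can't break out
# of a single-quoted attribute.
html = Template("<a title='{{ name }}'>profile</a>").render(Context({"name": name}))
# -> <a title='Conan O&#x27;Brien'>profile</a>
```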
Excellent post, thanks for sharing! A couple of comments from a person not in the Django camp.
First of all, if we're talking about APIs I wouldn't go with Django in the first place. Of course, familiarity matters, but Django requires a person to pick up quite a lot of concepts (forms, models, serializers, views, etc.) together with a rather confusing multi-app structure, which takes time to get used to; that's clearly visible from this post, given the number of things to be aware of.
Second, the rule on types clearly highlights how much pain it brings to do a big project in a dynamic language. I mean, just stop for a second and think about it: you need special tricks just to tell whether a variable is a list or a scalar value, and I'm not even talking about mutable classes, where you can't be sure which fields are legitimate and which are accidental or monkey-patched.
After doing a smaller project in Django, I decided to try doing one in Go, only to be amazed at how much faster it was, precisely because of the absence of the problems mentioned above. It's not much harder to code in (about the same, actually), the type system doesn't stand in the way all the time, and at the same time you're in total control of your code.
There are a few more points I would like to add:
- Control the inputs and outputs of your API. Sometimes I see people just passing an object with input data down the stack without even enumerating its fields, for no good reason. Aside from the obvious security risk, it also brings a lot of uncertainty about what the downstream code expects. It's the same antipattern as in JavaScript, where functions are often defined as `function abc(params)` and params is just a hash of unknown shape. Why is that bad? Because it's impossible to say what the code expects to find in that object without reading all of it. The same goes for output: sometimes people simply dump a model to JSON and assume that's fine. There the trouble is reversed; maybe you know what's being sent, but you have no idea whether it's being used, and you can't deprecate or remove fields without checking all the client code. (There's a small sketch of this after the list.)
- Sometimes you can write the code in a way that prevents you from messing it up, e.g. sqlboiler in Go land. It generates the models against a database instance, which means it's not possible to add business logic to them, and that limitation makes them completely safe to work with from anywhere in the code, without fear that side effects are hiding in them.
- Be suspicious of the REST paradigm. It sounds nice in theory, like many other things do, but in practice it's really limiting, especially when you're building an API for an SPA. You'll very soon need an endpoint that doesn't map onto any resource, or that should be a combination of several of them, and by sticking strictly to REST you can quickly find it really hard to implement even innocent requests from the frontend.
- Use a minimal number of dependencies. This is particularly painful in Node land, where you get 300 MB of node_modules just to get started. Every new dependency is something that can potentially break, bring security risks, or get abandoned. If some functionality can be written in a couple of hours, it's worth just doing it rather than depending on a random thing from GitHub.
- Write your decisions down as comments in the code. The code should be self-descriptive about what it does, but it won't tell you why it does it, and in many cases that's the most important thing to know.
- Be wary of the SSR + SPA combo. It's pretty popular nowadays, and for a reason, but it also brings a load of complexity: SSR is done with libraries like React, which means that in addition to your Django application you'll have to run a Node app somewhere and think about syncing state back and forth, or duplicate your data access layer. The same goes for SPAs in general; it's worth thinking hard about whether there's a real need for one, since it immediately adds complexity.
- Maybe I've misread it, but I strongly disagree on saving sanitized input in the database. Any modern ORM will make sure you don't get an SQL injection, and most templating systems either escape the data by default or can be configured to do so. In return you get the flexibility of adapting the output as you need it in each specific case. Also, just think about the case where you sanitize user input only to realize months later that you need to do it differently. What will you do then?
- Something not mentioned there: I think as a general rule developers should try hard to avoid taking on more external dependencies like queues, storage systems and so on. For a really long time a single Postgres instance can easily cover all your needs, and it's super robust. With every new external dependency, things quickly get more and more complicated.
- Structure your deployments to make it easy to spin up new services / cron jobs on the same code base. Complexity lies in centralization. If there is one huge app that does it all and is deployed as a single unit, it'll very quickly become quite scary to deploy. If it's possible to separate individual chunks of work so they run independently, it's almost always beneficial to do so. If the code is independent and can be deployed in separate units, it's a million times safer to develop and deploy than one monolithic superservice.
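Going back to the first point about controlling inputs and outputs, here's the kind of thing I mean, sketched in Python with hypothetical names: convert the raw request payload into an explicit structure at the boundary, and return an explicit structure, so both downstream code and clients know exactly which fields exist.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CreateOrderInput:      # every accepted field is enumerated here
    customer_id: int
    sku: str
    quantity: int


@dataclass(frozen=True)
class CreateOrderOutput:     # only the fields clients are meant to rely on
    order_id: int
    status: str


def create_order(data: CreateOrderInput) -> CreateOrderOutput:
    # ... business logic; nothing downstream ever sees the raw request dict ...
    return CreateOrderOutput(order_id=42, status="pending")
```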
Found these points to be very well thought out, especially as they relate to the impact of type safety upstream and downstream and to minimizing extensive dependency hierarchies.
Can you point me to some resources on using go to design services, best practices, etc? Are there other approaches that are possibly better than go which you are considering? What would you suggest as an alternative to REST api?