Always define the API first. Figure out the information necessary, and make sure it meets both front-end constraints (e.g. no information is missing, does it need to be one call or multiple calls, etc.) and back-end constraints (can this be efficient to retrieve from the database, does one server have all the info, etc).
Once you've verified that the API design works for the needs of both ends, then you lock that down and let front-end and back-end teams work independently.
And even if you're coding on your own, it forces you to think about the information architecture first, which is a good habit to get into so you don't paint yourself into a corner in either direction.
Front-first risks building an interface that requires data which is too complex to retrieve. Back-first risks building the wrong endpoints. API-first requires you to design for both before you start building.
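A hypothetical sketch of what "locking down the contract" can look like in practice (the endpoint, field names, and stub data here are all invented): the response shape is written down as types, the front end codes against a stub, and the back end later swaps in a real implementation behind the same shape.

```python
from typing import List, Optional, TypedDict

# Hypothetical contract for GET /api/orders?user_id=...
# Both teams agree on this shape before either starts building.
class OrderSummary(TypedDict):
    order_id: str
    placed_at: str      # ISO 8601 timestamp
    total_cents: int    # integer cents avoid floating-point money bugs
    item_count: int

class OrdersResponse(TypedDict):
    orders: List[OrderSummary]
    next_cursor: Optional[str]  # pagination is decided at contract time

def fake_orders_endpoint(user_id: str) -> OrdersResponse:
    """Front-end stub; the back end replaces this with a real query."""
    return {
        "orders": [
            {"order_id": "o-1", "placed_at": "2024-01-05T12:00:00Z",
             "total_cents": 4200, "item_count": 3},
        ],
        "next_cursor": None,
    }
```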
Outside-in drives a much more effective requirements discussion as you work your way down the stack. If the front end asks for data that's "too complex to retrieve" then that's usually obvious from the mock-up, and it is cheaper to change it at that point, before code lower down the stack has been written.
It leads to much cleaner API design and better negotiations about what should go where since the written code of the layer above effectively drives the requirements for the layer below.
An architect working in a vacuum is only going to result in unmitigated disaster in my experience. :)
But I disagree where you say that it's usually obvious from a mock-up that the backend is going to be too complex. In my experience that never occurs to people at all. All too often I've seen designers spend months mocking up fantasy functionality while the backend team is drawing up tables, shards and microservices, only for it to eventually collide in total incompatibility, when the front-end developers say "but the back end doesn't let us do three-quarters of this!" and the back-end developers say "but obviously you can't get the data that way, what ever made you think you could?"
Forcing the API conversation/specification to happen at the start, rather than in the middle, forces extreme clarity for both sides, so that necessary critical compromises are hashed out at the beginning, instead of discovering a horrible incompatible collision midway through.
That's because this methodology is called waterfall, and it has proven not to work, time and time again.
The project should not start with full requirements - it should start with a single requirement/feature (even if that feature by itself doesn't fulfil the purpose of the software). What is the topmost important feature? Do that part first, and have it working end to end. It should take as little time as possible and be done as fast as possible. It should allow the design team to create as simplified a design as possible, and allow the backend to use as simplified a backend as possible. It should allow testing to be done well (good coverage, good unit testing, etc.).
Then once this is done, the client will judge it, and say what can be improved. Do that one improvement, only, before going back to the client and repeat.
This is how most successful projects have been done.
If you're planning your sprint, the point is to figure out the API call first together, before work starts on either front end or back end.
Real-world projects exist on a wide continuum between "maximum agile" and "maximum waterfall", and usually fall somewhere in between. You can't build AWS or Google Search purely out of 2-week sprints with no further advance planning or specification. :P
Which was easy to do in retrospect or halfway through the task, but doing that verification step up front didn't work.
For instance, if on the frontend you need to display an infinite list of items, how you get your data can be heavily influenced by your interface. If you plan on having a super fast scroll with the user flying through the items, you might want to do it differently than for a page-by-page site.
If you design API first you must make these choices up front, and you paint yourself into a corner if you change your mind when the resulting UX is in your hands and you're playing with it.
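To make that scroll example concrete, here is a minimal sketch (names and data invented, an in-memory list standing in for a database table) of the two pagination styles the choice forces you to pick between: offset paging fits a page-by-page site, while cursor/keyset paging fits fast infinite scroll because it doesn't slow down as the user scrolls deeper.

```python
from typing import Optional

# Invented in-memory stand-in for a table of 100 items.
ITEMS = [{"id": i, "name": f"item-{i}"} for i in range(1, 101)]

def page_by_offset(page: int, per_page: int = 10):
    """Offset paging: simple, but deep pages get slower in a real DB."""
    start = (page - 1) * per_page
    return ITEMS[start:start + per_page]

def page_by_cursor(after_id: Optional[int], limit: int = 10):
    """Keyset paging: resume after the last seen id; stays fast at depth."""
    rows = [it for it in ITEMS if after_id is None or it["id"] > after_id]
    page = rows[:limit]
    next_cursor = page[-1]["id"] if len(page) == limit else None
    return page, next_cursor
```

Switching from one style to the other after the API is frozen is exactly the kind of corner the comment above describes.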
Impossible. What will the API serve? A list of horses? Wouldn't work if the user needs a mortgage calculator. There is no choice but to define the front-end first. The question is whether it's defined nebulously or with thought - it's rare for thoughtlessness to be the optimal strategy.
I say this as a backend engineer. The backend genius shines when we figure out how to serve the frontend efficiently (low compute cost) and adapt to rapidly changing frontend requirements without delaying releases.
One of the biggest benefits of starting with the API is that both the backend and frontend devs can start playing with that contract, and discovering the nitty-gritty ways where it's less ideal than you'd hoped when you designed it pre-code.
So you get to pressure test it, and iterate on it, from both sides, instead of just from one of them.
Seems to be a glaring contradiction. How would you know if it met front-end constraints if you haven't defined the front-end yet?
My experience is that front-end is king. It defines what the app must do. The API and back-end are the how. Obviously, you need the what before the how.
If you run into a constraint, then you figure out how to work around it, or what can be sacrificed while still having the front-end deliver the what.
Sometimes that approach cements the API early, and new requirements that arise from real-world use become even harder to implement as a result.
So I guess I’m not arguing, but agreeing while adding that the approach needs a couple qualifiers.
I'm a backend dev who does quite a bit of frontend, though I deeply dislike the current state of frontend (bring back WebForms).
For me it all starts with the database, always. Code, ux, network is all ephemeral. The schema, the invariants and the data are forever. (this is also why I think microservices are often the wrong approach, you're solving a code problem at the expense of the data)
I've been thinking about the concept of code as pipes for data and the (incorrect) emphasis we place on code. This is why something like MongoDB never clicked for me, code is worthless without the data it operates on and ensuring the quality of that data takes priority over anything.
Of course you need to know roughly what the data will be used for and approaching it database first can result in some nasty surprises if the requirements change, but the rest of the code will rot and be replaced far sooner than the database. Given the churn on frontend the schema in the database will probably see 5 front end frameworks come and go.
Edit: having written this I realised this is also why I don't like or understand letting your ORM handle database migrations. The code is built on top of a dependable database, letting it inform the design of the database is completely the wrong way round, in my opinion.
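One way to keep that "database first, code on top" sequence is to write migrations as explicit SQL that the application merely applies in order, rather than letting an ORM generate them. A minimal sketch (table names and the migration runner are invented; SQLite used for brevity):

```python
import sqlite3
from typing import List

# Schema-first migrations: the database design is hand-written SQL,
# versioned and applied in order, not generated from model classes.
MIGRATIONS = [
    ("0001_create_users",
     "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL UNIQUE)"),
    ("0002_add_created_at",
     "ALTER TABLE users ADD COLUMN created_at TEXT"),
]

def migrate(conn: sqlite3.Connection) -> List[str]:
    """Apply any migrations not yet recorded; return the ones that ran."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
    applied = {r[0] for r in conn.execute("SELECT name FROM schema_migrations")}
    ran = []
    for name, sql in MIGRATIONS:
        if name not in applied:
            conn.execute(sql)
            conn.execute(
                "INSERT INTO schema_migrations (name) VALUES (?)", (name,))
            ran.append(name)
    return ran
```

Running it twice is a no-op the second time, which is the property that makes the migration files, not the ORM, the source of truth.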
Here's a really interesting discussion about this quote and the idea behind the above: https://softwareengineering.stackexchange.com/questions/1631...
As a backend dev myself, I totally agree with it, but I think when we're developing full blown apps we should have the end user in mind and focus on their experience using our app more than on adopting data-driven UI design.
If you want to get fancy, you can use Mirage JS to emulate network requests with mock data.
First design the data. This will tell you how this part of the solution integrates with what everybody else is doing, and will make very obvious some problems that otherwise you may only discover after the system is mature into usage.
Then design the UX and users processes. Programmers get to design only half of those, but both have to be set at the same time.
Then you can do everything else.
And yes, letting an ORM dictate migrations guides people into a completely wrong sequence.
The frontend requirements can help define what api's are required but if you treat your database as dumping tables you limit what your data can do.
You can add and remove indexes at any point. If you made a bad decision or no decision, you would add those when more information is available.
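For example, in most relational databases adding an index later is a routine, non-breaking change. A small sketch (table and index names invented; SQLite syntax):

```python
import sqlite3

# Indexes are not a day-one commitment; add them once query patterns
# become clear.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE event (id INTEGER PRIMARY KEY, user_id INTEGER, kind TEXT)")

# Later, once "events by user" turns out to be the hot query:
conn.execute("CREATE INDEX idx_event_user ON event (user_id)")

# The planner now uses the index for that lookup.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM event WHERE user_id = ?", (1,)).fetchone()
```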
With regard to migrations though, I tend to disagree. Code is very easy to manage in version control, so the migration files are merely the data structure represented in code. I love migrations and they've vastly changed the way I work.
The data model is the thing that stays more invariant in an app, and you can start with that:
But a centralized database is not the future. Client-first web apps are:
So your data will be stored client-side and synced, with Dat or MaidSAFE or IPFS etc.
I agree, though I generally reduce this question to, "what is the shape of my data?".
That's how I know I'm on the right path. I start to organize the data, and new things that appear fit naturally into what I've already discovered.
This is the epitome of organic design because you are simply the conduit between the truth and a schema definition. If you force yourself to not be clever and just interpret what you've found, your database model will be solid.
Can you talk about how you figure this part out? As you imply, this seems like it is your actual first step, not designing the database.
There's all the stuff that comes before actually doing that. Generally, though not always, the feature is driven by some front-end requirement and may have an accompanying product specification and possibly even UX designs. But I try to take those and think about what domain concept is being addressed, rather than the shape of the data the front-end requires. Almost always the front-end and requirements will change through iteration, so I find that purely mapping the FE concepts directly to the data structure is less robust than taking a step back and thinking about the actual data without consideration for the proposed front-end design. Though the front-end can still inform aspects of the database at this stage, it should only provide minor optimizations to the schema through indexes or normalization choices.
I suppose in part it's taking a domain-driven design approach and making the first step encoding the domain in the database. It has been my experience that many headaches in applications I encounter could have been avoided if the constraints had been enforced against the data at the persistence level rather than assuming the application level code will handle it correctly.
I definitely agree that when possible it is very useful to enforce important constraints in the database.
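A small sketch of that idea (invented table; SQLite syntax): a CHECK constraint rejects bad rows at the persistence level even when application code forgets to validate.

```python
import sqlite3

# Domain invariants enforced in the database itself, not just in app code.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE payment (
    id INTEGER PRIMARY KEY,
    amount_cents INTEGER NOT NULL CHECK (amount_cents > 0),
    currency TEXT NOT NULL CHECK (length(currency) = 3)
)""")

try:
    # Negative amount: no code path can sneak this in past the schema.
    conn.execute(
        "INSERT INTO payment (amount_cents, currency) VALUES (-5, 'USD')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # the constraint caught it, not application code
```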
Anything else is ephemeral.
History is littered with millions of enterprise software systems that can't beat the simplicity of a spreadsheet or the flexibility and efficiency of a Unix command line.
When you write front-to-back, you don't get an immediate sense of what the deeper relationships and nuances of your data models are going to look like. You're designing for what you immediately think you want to see.
This approach can work well, until you run into things that end up being a lot more difficult to implement behind-the-scenes than the data you just expect to appear on the page in front of you.
Personal experience/preference, but the thing that has (almost) never led me wrong was doing a walkthrough of the domain of the application, and then thinking about what the client-side pages and functionality are going to look like (page-by-page, user stories), and then implementing the backend. Finally, wire up the backend to the frontend mockups/design.
For more complex systems, there are things that need to be ironed out in their entirety and have their modeling/relationships, and functionality proven on the API side of things before you decide to start writing UI elements for them.
On the flipside, having worked on projects where there have been major, multi-month setbacks, it was nearly always because the models/domain weren't well thought out and it needed major components to be re-worked (either UI or backend) before we could proceed.
- Assuming PRD and UX are ready, the Frontend team takes a look and comes up with the Frontend data model they want. The focus is on making it easiest for them to maintain and develop the client side.
- The Backend team takes a look and comes up with a Storage data model that is flexible enough for the short-term future. The Backend should anticipate the growth of the product and try not to be put in a situation where they have to redesign the database/infrastructure.
- Then both teams meet to negotiate the "Contract", which is the API layer. In some cases, the API is closer to the Frontend data model; in others, closer to the Storage data model. Sometimes there is a translation layer between Storage <-> API <-> Frontend (not as desirable, but not the end of the world). There are standards for how APIs should be designed (https://aip.dev/).
- When the contract is established, both teams can run more independently of each other, release at different times, and only need to sync when the contract needs to change. The API is designed to be easily extended, but not changed or deleted. The Backend team doesn't care if the Frontend data model exists. The Frontend team doesn't care what kind of Storage the Backend uses. It's an abstraction.
The complexity of the project can't be entirely eliminated. A good TL makes explainable trade-offs on which parts of the system bear the complexity. It's their job to think not only about the architecture, but also about how the development can be sustained in the future. This is especially true when the product is developed by a medium-to-large team. The focus is no longer on having the leanest/most optimal code base. The focus is on making sure many parts of the system can move more independently, in parallel, and that it's scalable to the growing business needs. It's a classic case of Amdahl's Law applied to real life.
Of course, this is not always applicable to small projects that are never intended to have more than a couple of engineers. Even then, when I work on solo full-stack projects, I still unconsciously do this contract-first approach.
What you describe is pretty much front-end first with the tweak that when the teams negotiate, the burden of proof lies on the back-end. The back-end should contort as much as possible to satisfy the needs of the front-end, allowing it to remain as clean and nimble as possible. Only when it matters should the back-end be able to materially affect the shape of the api. In most cases "mattering" means balancing the needs of this front-end with another that also consumes the back-end.
I'm not totally clear on what you disagree with. It seems like you're suggesting roughly what the article is saying -- figure out the front-end and then implement the backend.
what is the diff between what this article is saying and what you're saying?
Here's an example: Recurring events.
Sounds simple, right? Well, that's what I thought too, until I spent two months learning about what the "rrule" and "rruleset" RFC specs are, and how nightmarishly complex it is to implement a mixture of single and recurring events with granular CRUD and associated records.
You need to allow updating or deleting all recurring events from a master record, or just a single instance of the recurring event, and to be able to attach DB records to specific virtually-generated dates of the recurrence pattern (this one was hell, since databases don't allow foreign-keys on views). Even had to submit a PR for a Postgres library for parsing rrule's to add the proper support that we needed.
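For readers who haven't hit this: the hard part is that recurring events are usually stored as a pattern and expanded on the fly, with per-instance edits and deletes layered on top. A toy, stdlib-only sketch of what rrule/rruleset formalize (function name and semantics invented here; real RFC 5545 recurrences are far richer):

```python
from datetime import date, timedelta
from typing import Dict, List, Optional, Set

def expand_weekly(start: date, count: int,
                  exdates: Optional[Set[date]] = None,
                  overrides: Optional[Dict[date, date]] = None) -> List[date]:
    """Expand a weekly recurrence into concrete dates.

    exdates   = "delete just this one occurrence"
    overrides = "move just this one occurrence to a new date"
    """
    exdates = exdates or set()
    overrides = overrides or {}
    out = []
    for i in range(count):
        d = start + timedelta(weeks=i)
        if d in exdates:
            continue                      # single occurrence deleted
        out.append(overrides.get(d, d))   # possibly moved
    return out
```

Even in this toy, attaching database records to a "virtually generated" date like `overrides[d]` is awkward, which hints at why foreign keys to expanded occurrences were the painful part described above.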
In retrospect, what should have happened was to try to alter the business requirements so that the functionality was slightly different, but you'd never have known that if you went front-to-back.
Any advice on how to do this without a domain expert to hand?
Books, videos, and watching people work are all good. You could also learn to do the thing yourself, if that's possible.
Almost nothing else about an application matters besides your data models. The UI and other niceties are just a (very replaceable) skin around CRUD-And-CRUD-Accessories on data models. If you screw those up, you have a nice looking, useless theme. Which is pretty ironic given how much disproportionate value non-technical businesspeople place on the UI of things because they can see it/interact with it.
(Not that the UI isn't massively important, but you can still interact with an API minus a frontend. The opposite scenario gives you nice-looking pictures.)
To make it not become a tour-de-force, I start with a golden path, then various aspects (validation, additional operations on data besides rendering, etc). If the data doesn't exist yet, it's mocked. Then later, it's implemented (or made 'live') according to the mocked "schema".
It's not exactly TDD (I rarely write a test except for integration tests or the occasional "let's make sure it's not off-by-one").
It's not top down either, exactly. Perhaps you could call it "needs driven development" or "user driven development". You need it from the start and you use it as soon as possible. From then on, it's just improvements.
If you work back-to-front, you won't have anything working until late in the game. You're constantly imagining all the non-consequential "what-ifs".
But in a way, front-to-back is like TDD except you're not testing via unit tests. You're testing by using. And you will be using it so much that it will be tested thoroughly. You iterate continuously while the top layer / front end keeps working.
Sure, there can be an overarching design. There can be an architecture. There should be. But after that, it's about slowly molding the system towards that design. If you have understood the design, you know when you're veering off course. For sure, it's not like building a bridge.
With front-to-back, you mock out data. With back-to-front, you mock out interfaces. This is how I write Django projects. I write models, which I can start using right away. Sure it's in the REPL, tests, or admin pages. But I'm using real functionality with real data from the very beginning. Then the custom views and templates get added on top.
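A framework-free analogue of that flow (schema and helper names invented; SQLite standing in for Django's ORM): define the data layer first and exercise it with real data from a REPL or test, long before any view or template exists.

```python
import sqlite3
from typing import List

# Models first: a real table with real data, usable immediately.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE article (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    published INTEGER NOT NULL DEFAULT 0)""")

def publish(conn: sqlite3.Connection, title: str) -> int:
    """Insert a published article; returns its id."""
    cur = conn.execute(
        "INSERT INTO article (title, published) VALUES (?, 1)", (title,))
    return cur.lastrowid

def published_titles(conn: sqlite3.Connection) -> List[str]:
    """The query a future view/template would eventually render."""
    return [r[0] for r in conn.execute(
        "SELECT title FROM article WHERE published = 1 ORDER BY id")]
```

Everything above is usable from a REPL on day one; custom views and templates are just a presentation layer added later.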
> For sure, it's not like building a bridge.
It's super interesting how many processes are inherited from traditional engineering.
"For example, I previously tended to use a bottom-up approach to development. However, SICP showed me the benefits of what it calls "wishful thinking" with a more top-down approach."
Some projects I spent more time thinking on, but especially where specs and design are more vague, I approach it front to back.
Start with the UX and work your way backwards. I don't like it from an engineering perspective, but it's the only way to go to build products.
Users just do not care what kind of horrible kludge is running behind the slick UI.
Since neither the user nor the programmer knows the right answer, we have to work iterations in. Each iteration is followed by feedback. Given how often we end up being completely wrong, it is best to get to the first feedback as quickly as possible.
This! Back-to-front is a good way to build a scalable, performant, cleanly designed software. Front-to-back is the only viable way to build a usable product.
> Users just do not care what kind of horrible kludge is running behind the slick UI.
For 99.9(9)% of users, the UI is the product. Even those who do know what "back end" means, do not care.
Front-end is pretty much as close to the user as you can get.
For instance, if your product has a lot of UX nuances, and you aren't artistically inclined, it can make sense to build out the interface first, work out the UX, and then proceed to the backend from there. However, someone with the ability to build wireframes should probably do that first, work on the API, and come back to the frontend.
I'm good at backend stuff, but have a hard time keeping interest in a project if I spend all my time in the backend. The frontend keeps me interested because it's visual and helps me see what my goal is.
Plenty of projects I have worked on will sacrifice anything to have a frontend match a drawing. A friend currently works on a project where finding the purchase button on a standard 15-inch laptop requires you to scroll two inches in a widget where they hid the scroll bar (the mockup was built on a wide monitor).
On my current project, if there is so much as a semi-colon out of place, communications will see it. Forget a background process? Nobody notices that.
I would then mock these out so I could start prototyping the rest of the frontend. This usually involved setting up UI components, pages, and linking them up with actions. The goal of the prototype was to demonstrate a few end-to-end (albeit mocked) flows to the user. With the flows and data types in place, it was trivial to design a set of API endpoints and build out the backend.
I've worked on projects that were very front-to-back and without having a constant eye towards the backend, the UI requirements would have ended up dictating a non-composable, non-extensible backend. But I guess applying taste and knowledge to these problems is kind of the point of being "full stack"
The trouble comes in keeping a clear idea of what an iteration is supposed to test or accomplish beyond "makes it better" - and an end-to-end iteration is expensive. That is where the idea of putting the data model or UI "first" develops, since it lets you build from the technical details towards the concept, as a way of constraining and filtering your thinking.
Something I want to try but have not gotten around to willfully doing, is to treat the initial design stages as an exercise circuit of focusing on each layer for a limited period of time each day, rotating from one to the next.
I'm pretty sure that you could get some great results if a work week were spent doing a four hour cycle, with one hour each thinking and researching data, interfaces, UI, and premise/market.
And the risk of using mock data to populate a UI (to build it without any backend) is that we don't have good tools for tracking all of the places that we are still using mock data. This can lead to blind spots that make estimating delivery schedules very difficult. Because you may discover late in the process that some corner cases are still using fake data. And, it's a miserable way to work, trying to push through to a milestone when you have no idea how much further you have to go.
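One low-tech way to mitigate that blind spot is to force every mock through a single registry, so "what still returns fake data?" stays answerable at any point. A sketch with invented function names:

```python
from typing import Set

# Every mock must pass through this decorator, so leftover fakes are
# enumerable instead of scattered and forgotten.
REMAINING_MOCKS: Set[str] = set()

def mock(fn):
    """Mark a function as still returning canned data."""
    REMAINING_MOCKS.add(fn.__name__)
    return fn

@mock
def fetch_invoices(user_id: str):
    return [{"id": "fake-1", "amount_cents": 999}]  # canned data

def fetch_profile(user_id: str):
    # a real implementation would call the backend here
    return {"user_id": user_id}

# At any point, sorted(REMAINING_MOCKS) is the list of blind spots left.
```

When a mock is replaced by a real call, dropping the decorator removes it from the list, so the estimate of remaining work is always visible.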
This is a huge timesaver, because if product decides to change the frontend, you haven't committed huge resources to the backend. Building software front-to-back helps people agree on what needs to get built.
You first establish the purpose of your solution, thinking about the job to be done, the users, the features..
Then you do a design. Likely you have a design team, that does research and creates wireframes or high fidelity mock ups.
Then you establish requirements for the various layers in your stack.
So what I described is a fairly typical process. It is front to back.
I can't imagine someone building a product, starting with a myopic focus on the database, followed by rest contracts, etc...
For example, your process puts great emphasis on the visual design. To me, that aspect is almost entirely orthogonal to the software design. That's not to say it isn't important to getting a good end result, but once you know what interactions and information architecture you need, how you draw things on a screen doesn't affect anything else much.
I suspect a lot of teams would start with the external models like interactions and information architecture, then move to internal models like database schemas or REST APIs, and then implement the former in terms of the latter.
I consider this to be a very hard thing to accomplish. There are generally too many unknown unknowns. I almost always need exploratory work to figure things out.
1. High-level, low fidelity sketch of the user stories and the UI
2. High-level, low fidelity sketch of a possible database schema
3. Refinement of UI and user stories
4. Mid-level sketch of possible API
5. Refinement of DB schema and backend models
6. First pass implementation the UI
7. First pass implementation of the backend (db + models + API)
... A few more iterations, bug testing, and voila! Of course, minor variations of this could also work well. For example, 6 and 7 could probably be easily switched.
After all an interface is, by definition, where two things meet and interact. There are too many unknowns otherwise.
Recently I started learning ASP.NET Core Razor Pages and found it quite refreshing that the line between front-end and back-end is very much blurred; somehow the cognitive load is much lower than with, say, an Angular + API stack.
I felt the same for some of the projects I developed using Django, though, Razor Pages is even more merged when it comes to front-end and back-end.
I'm involved with the two biggest types of software in general computing: control systems and large complex commercial transactional systems.
The former don't feature a UI or a database. While the latter have the primary issue of modeling a complex problem domain.
That complex model is by necessity produced without regard to any particular human or system interface, or data persistence. Are such concepts so alien to audiences here?
If you're prototyping a greenfield product, it'll morph many times as you build it. Once you're happy where it is, and you've stabilized on data structure (not database schemas), then just start replacing mocks with your API calls. Acceptance tests will tell you if you've done it right.
If your product or service doesn't exist without the UI then why bother with a REST API at all? Write a nice stateful application that is rendered server side. There are great frameworks for this in almost every language.
A good REST API is consumed by applications and developers and they expect you to stick to the conventions or they will turn to another provider if they can.
I also like to think in terms of denotational semantics which works rather well with REST APIs. The entities represent the domain objects which model what the system does. The state transfer operations become the game between the client and the system. If the domain maps cleanly the client and the server never have to guess or assume the state of the other. You can follow the operations from the URLs (HATEOAS, etc).
What I find happens to some systems where the design is driven by the UI are domain models that span both the client and the server. This leads to routes that fill data for specific UIs, entities that have different representations based on which route they're fetched from, and the dreaded "RPC over HTTP," that the OP seems aware of.
A lot of this can be avoided if you never intend to expose your REST API to external developers. Just don't have one. Less to worry about.
Designing front-to-back that way makes a lot of sense. I just find that if you do that from a UI through a REST API that a lot of teams and developers skip on maintaining the REST conventions and then end up sad when their project becomes difficult to maintain and onboard new developers to.
Also found there to be a motivational aspect to this. Since I can't push the buttons or turn the knobs because it's just UI at this point, I can't help but breathe life into it so that I can actually use it.
Personally I think the approach is very similar to TDD and so for some of us it's easier to dive first on the front-end. It's like translating user specifications to tests and then writing the code for the tests to pass.
So, sure: It's useful to be aware that you can work front-to-back, back-to-front, front-and-back-to-middle, etc. But you still can't make this choice without consideration and one size absolutely does not fit all.
1. Data Models
2. Data Stores
3. Business Logic
6. Application Interfaces (resource level)
7. Presentation Interfaces
If this is the case then you should develop the plugins first.
For what it's worth I am on your side given a bunch of constraints that match my prior experience. So I have typically worked on small scrappy IT teams at companies that are working on something else -- whether maintaining a Roku channel or putting together the heating ducts and plumbing assemblies for a building or making sure your marketing companies are not making illegal promises to customers that you can't fulfill. Inside those teams there is some sort of pain that we are meant to alleviate -- say, executives are spending too much time telling accounting in slipshod fashion their estimates for how much money they anticipate will be coming in from various prospective contracts, and roughly when it will come. The idea is born: an app which accounting can grab information from, automatic emails to the executives, and a user interface that fits the executives like a glove so that this is a better-than-painless experience for the people who actually use it.
Given this context, I agree that you want to repeatedly iterate on the frontend of the product over and over until the actual users start to change the nature of their complaints. Their complaints started with: "I need to put in a Flotsam which is also a Jetsam, how do I do that?" / "What do you mean, I thought flotsam and jetsam were different things?" / "Well for established Wakes they are, but not when we are looking at a new Surf. Then sometimes they are different but sometimes something is both, until that Surf becomes a Wake." So it is a failure of your domain model to match theirs.
And you are right, being able to have that domain model in a couple of JSON structures really helps with your ability to refactor everything around, especially if you have a type system which can just tell you "look that function is still designed around the last iteration and it's gonna break now with this new structure."
Those problems are really hard to fix when that data structure has already been broken apart into tables to be stored in a relational database and then ossified into various Data Access Objects and API calls which perform the updates.
By contrast with this approach you have a time where the product has succeeded in matching the user's domain model, which comes when they say something like "hey I changed this in this other system but I am not seeing the change in your app" or maybe just "hey I ran into a bug where it looks like the software is no longer saving my changes." Something that reveals that they think this software is in beta rather than pre-alpha development. You have a conversation like, "It was never saving your changes, that's a forthcoming feature." / "Uh, how was this ever supposed to work if it didn't save my changes?" / "No, like, that has been part of the design from day one, but that turns out to be a really costly part of the development effort so we wanted to get you to sign off that this is exactly the app you want before we start to build that layer of persistence." / "Oh. Well where can I sign up? I really want this app!"
Also really good for the idea of "build one to throw away", once you have fixed up your domain model then it can be nice to start from a clean slate.
Let me also say where you are limited: sometimes you do have a reasonably good guess at the domain model. For example you might be doing anthropological work as a developer, sitting in on meetings and getting a sense for how people talk about a system, before building the app. Or, you are interfacing with an external API and its domain model is presented to you directly in its documentation. Or, you are designing a game engine and the decisions are up to you -- the "users" are people who will have to play the game later.
Here it is helpful to build back-to-front. In fact there is a really nice REST principle which you may wish to emulate if you can, called HATEOAS. If you can do it, it makes things so much simpler. The basic idea about HATEOAS is the exact inversion of what you are saying: a polymorphic frontend, rather than a frontend which knows what API calls to make and in what sequence.

So the idea is to have an intentionally crappy frontend -- it is crappy because it is generic: the backend tells it "here are the many different sorts of objects tracked by the system, and here are the things you can do with them," and it configures up a crappy UI based on this skeleton which the backend gives it. UIs created procedurally by robots reading data structure descriptions will never be as pretty and fits-your-hands-exactly as ones you create yourself. And the key technology which enables this front-end polymorphism is precisely linking: a web browser is able to show all web sites and is agnostic about how exactly everything connects because the web server tells it how everything connects.

So if you have a survey app, when you get a survey from the backend the backend also tells you something about "to submit a new response to this survey, POST it to this URL, and validate the contents against this schema first." The crappy UI probably shows you the 10 questions for the survey up above, and then down by the "submit new response" form it contains 10 answer fields and you have to scroll between the top and the bottom for each question. Very inconvenient, because it is generic.
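A sketch of what such a backend response might look like (the URLs, field names, and `_links` convention here are illustrative, loosely in the style of HAL): the payload tells the generic client what it can do next and how, so the client needs no hard-coded routes.

```python
# Hypothetical HATEOAS-style survey resource. A generic frontend walks
# the _links section to discover available actions and build its UI.
def survey_resource(survey_id: int) -> dict:
    return {
        "id": survey_id,
        "title": "Customer satisfaction",
        "questions": ["How did we do?", "Would you return?"],
        "_links": {
            "self": {"href": f"/surveys/{survey_id}"},
            "responses": {"href": f"/surveys/{survey_id}/responses"},
            "submit": {
                "href": f"/surveys/{survey_id}/responses",
                "method": "POST",
                # schema the generic UI uses to render a submission form
                "schema": {"answers": {"type": "array", "minItems": 2}},
            },
        },
    }
```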
But on the flip-side, you usually get a good-enough-for-developers-for-now UI on the frontend, and now you can modify the domain model and services purely on the backend.
My white whale is probably to combine both of these together someday. :)
> nice REST principle which you may wish to emulate if you can, called HATEOAS. If you can do it, it makes things so much simpler. The basic idea about HATEOAS is the exact inversion of what you are saying: a polymorphic frontend
Huh. I'd never really thought of HATEOAS in this way. I've used it before to build an API that can be consumed by another service in a flexible way, but have never really reached for it when building a UI. I guess this would solve the problem by only guaranteeing relationships rather than guaranteeing the full API?
This would definitely protect against churn when the structure of the URLs changes, but doesn't this still depend on having a solid understanding of the domain from the get-go?
Is it, I suspect, mostly because we failed DRY and repeated ourselves at the various layers? (i.e. our domain model is echoed in our DB, Redis cache, API structure, App structure, stylesheets...)
If that's the core problem, it's solvable two ways... an app with no backend, or a front-end that is fully generic serving up a backend that calls the domain model shots.
This means understanding the problem domain. What does the app solve? Better yet, how does the app either (1) save money/resources for the problem, or (2) generate revenue streams?
When you start from this approach, things become much more clear. Generally speaking, start with the UX low fidelity wireframes first. What are the user-stories for the user? Can they log in? Can they do some CRUD functionality relative to the problem domain?
Okay, what data does this user need for these pages? What sort of user activity are they engaging in? Create a set of data tables describing the problem domain.
It needs a user table? Check. It needs a license table? Just map out all the data needed, and group them into appropriate tables.
From there, map out the relationships in a relational format to get a better perspective on the bigger picture.
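As a sketch of that mapping step, here are the user and license tables from the example above expressed as TypeScript types, with the foreign-key relationship made explicit. All table and field names are invented for illustration:

```typescript
// Hypothetical user and license tables, sketched as types so the
// relational structure (License.userId -> User.id) is visible in code.
interface User {
  id: number;
  email: string;
}

interface License {
  id: number;
  userId: number;    // foreign key referencing User.id
  expiresAt: string; // ISO date string
}

// Sample rows for the two tables.
const users: User[] = [{ id: 1, email: "alice@example.com" }];
const licenses: License[] = [
  { id: 10, userId: 1, expiresAt: "2030-01-01" },
  { id: 11, userId: 2, expiresAt: "2030-01-01" },
];

// Resolving the relationship in code, as a SQL join would.
function licensesFor(userId: number, rows: License[]): License[] {
  return rows.filter((row) => row.userId === userId);
}
```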
Are these features even needed according to the frontend? Okay, go from there. What is the cost to implement each feature, and how does this impact the development velocity / budget of the project? Go and find the sweet spot of what's defined as an MVP, and then proceed to figure out what a stage 2 looks like.
Does the original data model have a migration upgrade path? Imagine you're developing the backend. Do you foresee problems running the migrations?
Now you need to think about scalability and whether that actually matters here. Chances are it doesn't for the vast majority of apps.
Do you need subscribers to aggregate bulk insertions to the database? Do you need a Redis cache to prevent unnecessary calls to the database? Or do you need something more complex, because the end user is a developer and you're providing a highly scalable Web API service?
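A cache like the Redis one mentioned above is usually wired up cache-aside: check the cache first, and only hit the database on a miss. Here is a minimal sketch, with a plain `Map` standing in for Redis so the example stays self-contained (in practice you'd swap in a real client such as ioredis, plus a TTL and invalidation strategy):

```typescript
// Cache-aside sketch. The Map is a stand-in for Redis; the dbCalls
// counter exists only to demonstrate that repeated reads skip the DB.
const cache = new Map<string, string>();

let dbCalls = 0;
function queryDatabase(key: string): string {
  dbCalls++; // count round-trips to show the cache preventing them
  return `value-for-${key}`;
}

function getWithCache(key: string): string {
  const hit = cache.get(key);
  if (hit !== undefined) return hit; // cache hit: no database call
  const value = queryDatabase(key);  // cache miss: fetch from DB
  cache.set(key, value);             // remember for next time
  return value;
}
```

Cache-aside keeps the database as the source of truth; the cost is that the first read of each key still pays the full database round-trip.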
Just keep cycling outside-in until you've distilled all the results into a proper MVP database and a pen-and-paper MVP UX design.
At this point, you'll want to define the API and how the frontend will consume it. How will the backend handle it? For instance, if you spec it as GraphQL, prepare for a world of pain on the backend and an easy life on the frontend.
You should think about your API design, and use design patterns to keep the frontend as simple as possible. The source of truth of an application should live closest to its data source(s), so tread with that in mind. Sometimes the frontend does the same work (e.g. financial calculations) to reduce the total number of calls to the backend. You might also want to consider everything else at this point, e.g. whether websockets are needed, and other factors in the application such as third-party providers.
There are many right solutions to a problem domain, but there are just as many wrong solutions as well. Go from the business side first, and work outside-in. The best answer is the simplest one that satisfies all criteria, both in UX and in how scalable it needs to be.
- CRUD on table records, which makes the data schema independent
- Actions, which can update multiple tables, call third parties, etc.
- Queries, which can do more complex things than GET or GET list, such as search, filtering, or sorting
- Selections, which are gets or queries run over multiple tables and combining the results
In addition, each endpoint returns JSON, but can have a suffix attached to template that into HTML.
I find this organizes well and covers all use cases.
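As a rough sketch of how those four categories plus the HTML suffix could be wired up, with all paths, handler names, and the templating stand-in invented for illustration:

```typescript
// Sketch of the four endpoint categories. Handlers take string params
// and return plain data; a real app would use Express or similar.
type Handler = (params: Record<string, string>) => unknown;

const routes: Record<string, Handler> = {
  // CRUD on table records (schema-independent: table name is a parameter)
  "GET /records/:table/:id": ({ table, id }) => ({ table, id }),
  // Actions, which may update multiple tables or call third parties
  "POST /actions/closeAccount": ({ userId }) => ({ closed: userId }),
  // Queries: search / filter / sort, beyond plain GET or GET list
  "GET /queries/search": ({ q }) => ({ matches: [q] }),
  // Selections: gets/queries run over multiple tables, results combined
  "GET /selections/userWithLicenses": ({ userId }) => ({ userId, licenses: [] }),
};

// Every endpoint returns JSON by default; a ".html" suffix templates the
// same result into HTML (a <pre> dump stands in for a real template).
function render(result: unknown, suffix: string): string {
  return suffix === ".html"
    ? `<pre>${JSON.stringify(result)}</pre>`
    : JSON.stringify(result);
}
```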
If anyone wants to see a Node template of this, it's