
I remember that all the web shops in my town that did Ruby on Rails sites efficiently felt they had to switch to Angular at about the same time, and they never regained their footing in the Angular age, although it seems they can finally get things sorta kinda done with React.

Client-side validation is used as an excuse for React, but we were doing client-side validation in 1999 with plain ordinary JavaScript. If the real problem was "not writing the validation code twice," surely the answer would have been some kind of DSL that code-generated or interpreted the validation rules for the back end and front end, not the fantastically complex Rube Goldberg machine of modern JavaScript: wait wait wait wait and wait some more for the build machine, then users wait wait wait wait wait for React and 60,000 files' worth of library code to load, and then wait wait wait wait even more for completely inscrutable reasons later on. (e.g. it's amazing how long you have to wait for Windows to delete the files in your node_modules directory)
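For what it's worth, the kind of DSL gestured at above can be tiny: a plain data object interpreted once as HTML attributes for the front end and once as a checking function for the back end. Everything below is a hypothetical sketch, not any real library's API:

```javascript
// Hypothetical shared-rules sketch: define validation rules once as data,
// interpret them twice -- as HTML5 attributes and as a server-side check.
const rules = {
  email:    { required: true, maxLength: 254, pattern: '[^@\\s]+@[^@\\s]+' },
  nickname: { required: false, maxLength: 40 },
};

// Client side: turn a rule into HTML5 validation attributes.
function toHtmlAttrs(rule) {
  const attrs = [];
  if (rule.required) attrs.push('required');
  if (rule.maxLength) attrs.push(`maxlength="${rule.maxLength}"`);
  if (rule.pattern) attrs.push(`pattern="${rule.pattern}"`);
  return attrs.join(' ');
}

// Server side: interpret the very same rule as a validator.
function check(rule, value) {
  if (rule.required && !value) return 'required';
  if (value && rule.maxLength && value.length > rule.maxLength) return 'too long';
  if (value && rule.pattern && !new RegExp(`^(?:${rule.pattern})$`).test(value)) return 'bad format';
  return null; // valid
}
```

The point is only that the rules live in one place; either side is free to interpret them however it likes.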




Even worse: Client-side validation and server-side validation (and database integrity validation) are all their own domains! I call all of these "domain logic" or domain validation just to be sure.

Yes, they overlap. Sure, you'll need some repetition and maybe, indeed, some DSL or tooling to share some of the overlapping ones across the boundaries.

But no! They are not the same. A "this email is already in use" is server-side (depending on the case). A "this doesn't look like an email address, did you mean gmail.com instead of gamil.com?" is client-side. And a "unique-key-constraint: contactemail already used" lives even further down.

My point is that the more you sit down (with customers! domain experts!) and talk or think all this through, the less it's a technical problem that has to be solved with DSLs, SPAs, MPAs or "same language for backend and UI". And the more you (I) realize it really often hardly matters.

You quite probably don't even need that email-uniqueness validation at all. In any layer. If you just care to speak to the business.


> A "this doesn't look like an email-address"

Unfortunately this also needs to be done server side, unless you're trusting the client to send you information that is what you're expecting?

client side validation makes for a good user experience, but it does not replace the requirement to validate things server side, and many times you will end up doing the same validations for different reasons.


"It depends".

If it's merely a hint for the user (did you make a typo?) there's no need to ensure "this is a valid email address". In fact, foo@gamil.com is a perfectly valid email address, but quite likely (though not certainly!) not what the user meant.

I've seen hundreds of email-address-format validations in my career, server-side. The most horrible regexes, the most naïve assumptions[1]. But to what end?

Why (this is a question that a domain expert or the business should answer) does it matter if an email is valid? trump@whitehouse.gov is probably valid. As is i@i.pm[2]. What your business expert quite likely will answer is something along the lines of "we need to be sure we can send stuff so that the recipient can read it", which is a good business constraint, but one that cannot be solved by validating the format of an email. One possible correct "validation" is to send some token to the address, and when that token is then entered, you (the business) can be sure that, at least at this point in time, the user can read mail at that address.

[1] A recent gig was a SaaS where a naïve implementor, years ago, decided that email addresses always had a TLD of two or three letters: .com or .us and such. Many of their customers now have .shop or somesuch.

[2] Many devs don't realize that `foo` is a valid email address. That's foo without any @ or whatever. It's a local one, so rather niche and hardly used in practice; but if you decide "I'll just validate using the RFC", you'll be letting through such addresses too! Another reason not to validate the format of an email: it's arbitrary, and you'll end up with lots of emails that are formatted correctly but cannot be used anyway.


Just because some places implemented the validation wrong does not mean the validation should not occur.

The validation is there to catch user mistakes before sending a validation email and ending up with an unusable account.


You are missing the point. Sorry for that.

It doesn't matter if an email has a valid format: that says almost nothing about its validity. The only way you can be sure an address can receive mail (today) is by sending mail to it. All the rest is theatre.

And all this only matters if the business cares about deliverability in the first place.


No, I understood your point and I agree sending the email and getting some type of read receipt is necessary.

You seem to think that because of this, client-side validation should be skipped. On that point I disagree. If you can tell that it's not a valid email address (bigtunacan@goog is obviously invalid since it's missing a TLD), then no email should be sent. Good UX is to let the customer/user know there is a mistake in the email address.


I think the main concern about frontend validation dates from before HTML5 came along with validation attributes. You can easily produce HTML input validation attributes from a Yup schema, for example, by using its serialization feature (https://github.com/jquense/yup#schemadescribeoptions-resolve...). Here is an example from some silly code I wrote a while back testing htmx with Deno: https://github.com/dsego/ssr-playground/
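As a rough illustration of that idea, here is a sketch that walks a hand-written, describe()-shaped object and emits HTML attributes. The object's shape only approximates what yup's `schema.describe()` actually returns (treat it as an assumption and check the linked docs):

```javascript
// Hand-written approximation of a Yup schema.describe() result.
// The real shape may differ; this is only to show the mapping idea.
const described = {
  type: 'string',
  tests: [
    { name: 'required', params: undefined },
    { name: 'email',    params: undefined },
    { name: 'max',      params: { max: 254 } },
  ],
};

// Map the serialized tests to HTML5 constraint-validation attributes.
function attrsFromDescription(desc) {
  const attrs = {};
  for (const t of desc.tests) {
    if (t.name === 'required') attrs.required = true;
    if (t.name === 'email')    attrs.type = 'email';
    if (t.name === 'max')      attrs.maxlength = t.params.max;
  }
  return attrs;
}
```

The attributes object can then be spread onto an `<input>` by whatever templating layer you use.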


I always saw client side validation for improving UX and server side validation for improving security.


I once needed to order something in the company's ordering system, but for some reason my manager wasn't set as an approver, by virtue of some glitch (it had worked a few weeks before). And if you wanted to change approvers, you'd need the current approver to approve. But that wasn't set. A classic chicken-and-egg situation.

The button for changing approvers was greyed out, so out of boredom I changed it to active in the client-side code. Lo and behold after clicking the "active" button I got a box for selecting the approver.

I could select any user in the company. Even the CEO or myself.

I did the right thing and mentioned this to our IT Security department. Since obviously this could be used to order really expensive stuff in the name of the CEO or whoever.

They came back to me and told me that the vendor (I'm not sure I want to mention them here because they're happy to sue) has known about this for 3 years and won't fix it.


Oracle. Must be.


ServiceNow


Indeed, ServiceNow is the clunkiest and saddest software in use today. Unbelievably terrible. You have to see it to believe it.


Even worse^2: client-side validation may differ from server-side validation and from database-side validation. I cannot imagine client-side checking of a phone number's validity against a freshly downloaded database of current carriers and assignment rules in different countries; I prefer to maintain that server-side, even though it would be possible (thanks to the folks at Google and their libphonenumber). But again, I don't trust the client, so it needs to be re-validated later. Then it will be converted to some native data structure in order to make things faster and unified, and later it will go to a database with its own coercion and validation routines just before the application does a query. This validation trusts the format, so it will just make sure the result of the conversion is correct. But then the query itself carries a validation aspect: when the number must be unique in some context, it will return an error, which will bubble up to the user.


> A "this doesn't look like an email-address, did you mean...

Stop right there.

I'm tired of receiving mail from people that gave my email address as if it was their own.

Never ever accept an email address unless you can instantly confirm it's valid by sending an email and waiting for an answer. If the user can't access their email on the spot, just leave it blank and use some other data as the key.

I wish they included that in GDPR or something.


I think this is the point of the client side check though - if the user makes a typo (e.g. gamil.com) then the client side validation can prompt them to check, before the server sends the validation email and annoys the owner of the typoed address.
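The typo hint described here can be sketched with a tiny edit-distance check against a list of common providers. The list and the threshold are arbitrary choices, and this is purely a UX nudge; the server still has to confirm the address for real:

```javascript
// Purely client-side "did you mean" nudge: if the domain is one or two
// edits away from a common provider, suggest it.
const commonDomains = ['gmail.com', 'hotmail.com', 'yahoo.com', 'outlook.com'];

// Plain Levenshtein distance between two strings.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

function suggestDomain(email) {
  const domain = email.split('@')[1];
  if (!domain || commonDomains.includes(domain)) return null;
  for (const d of commonDomains) {
    if (editDistance(domain, d) <= 2) return d; // close enough to be a likely typo
  }
  return null;
}
```

`suggestDomain('user@gamil.com')` suggests `gmail.com`, while an exact match or an unrelated domain yields no suggestion.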


My point is that it doesn't matter if some arbitrary string looks like an email address, you need to check.

If it isn't valid the server won't annoy anyone. The problem is that the address is valid. And not theirs, it's mine.

The moment the users need to be careful, they will. Make the problem theirs, not mine.

"Sorry sir, the address you provided returns an error" or "haven't you received the confirmation email YET? really? there are other customers in line", and see how soon they remember the right address, perfectly spelled.

Even big-ass companies like PayPal, which have no problem freezing your money, allow their customers to provide unchecked email addresses and send income reports there. (here)


You can (and should) definitely do both. But needing to validate that a user has access to the entered email address doesn't mean you should do away with client-side validation entirely.


You missed my point, I'm afraid.

I meant that it very much depends on the business-case (and hence laws and regulations) what exactly you'll have to verify, and therefore where you verify and validate it.

Do you need an address to contact people on? Then you must make sure that the user can read the emails you send to it. Do you merely use it as a login handle? Then it probably only has to be guaranteed unique. Do you need to just store it in some address book? Then just checking roughly the format is probably enough. "It depends".


> Do you need an address to contact people on? Then you must make sure that the user can read the emails you send to it. Do you merely use it as a login handle?

Pretty humongous dick move to use someone else's email address as one's own login for some website, wouldn't you agree? What if it's a popular website, and the owner of the address would like to use it for their id; why should anyone else be able to deprive them of that?

And thus it's also a dick move from the site operator to allow those dicks to do that. So no, it doesn't depend: Just don't accept untested email addresses for anything.


Again: this depends on the business case.

Not all web-applications with a login are open for registration. Not all are public. Not all are "landgrab". Not all have thousands of users or hundreds of registrations a week. Not all are web applications and not all require email validation.

Some do. But, like your niche example proves: the business-case and constraints matter. There's no one size fits all.


> I'm tired of receiving mail from people that gave my email address as if it was their own.

Did you mean “receiving mail intended for people that gave my email address”? Because that's how I usually notice that they did.


It really wasn't about client-side validation or UX at all. You can have great UX with an MPA or SPA. Although I do think it's slightly easier in an SPA if you have a complex client like a customizable dashboard.

Ultimately it's about splitting your app into a server and a client with a clear API boundary. Decoupling the client and server means they can be separate teams with clearly defined roles and responsibilities. This may be worse for small teams but is significantly better for large teams (like Facebook and Google, who started these trends).

One example is your iOS app can hit the same API as your web app, since your server is no longer tightly coupled to html views. You can version your backend and upgrade your clients on their own timelines.


Ouch!

I’ve worked in two kinds of organizations. In one of them when there is a ‘small’ ticket from the viewpoint of management, one programmer is responsible for implementation but might get some help from a specialist (DBA, CSS god, …)

In the other, a small ticket gets partitioned to two, three or more sub-teams, and productivity is usually reduced by a factor greater than any concurrency you might gain, because of overhead from meetings, documentation, and tickets that take 3 sprints to finish because a subtask that was one day late cost the team a whole sprint, etc.

People will defend #2 by saying that's how Google does it or that's how Facebook does it, but those monopolists get monopoly rents that subsidize wasteful practices, and if Wall Street ever asks for "M0R M0NEY!" they can just jack up the ad load. People think they want to work there, but you'll just get a masterclass in "How to kill your startup."


I’ve worked at the same company for a long time. For about 15 years, my team was embedded in a business team and we managed things however we wanted. We could move very quickly. Then, about 5 years ago, we were moved into the tech organization. We were forced to adopt agile, sprints, scrum masters, jira, stand ups, etc. It probably takes 10 times longer to get the same amount of work done, with no improvement in quality. The amount of meetings is truly astonishing. I’m convinced the tech org mainly exists to hold meetings and any development work that occurs is purely accidental.


But is your loss from adopting those tech standards, or from being un-embedded from the business team?

Tech orgs and those standards exist because:

- tech generally doesn't understand business
- the business struggles to express its needs to tech

Embedding worked for you, but how big was your team? Could that scale?

I'm not questioning your success or your frustrations, but how unique was the situation for your success?


Same experience as me. Scrum is a disease in this industry.


What you may not see is quality-of-life improvements for executive management, planning, and scheduling. Communication and alignment can be both more important and more profitable than just velocity alone.


I work at a company that makes a very clear distinction between API and View layer. Our API spans 200+ endpoints. We have 6 backend and 6 frontend developers.

As far as iterations go it's very rapid. Our work teams are split into 1 backend and 1 frontend developer. They agree on an API spec for the project. This is the contract between them, and the frontend starts working immediately against a mock or very minimal version of the API. Iterate from there.
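A minimal sketch of that contract-first workflow, with made-up names throughout: the agreed response shape lives in one place, the frontend develops against a mock that honours it, and a cheap shape check can run in both teams' CI:

```javascript
// Hypothetical agreed contract: endpoint -> expected field types.
const contract = {
  'GET /users/:id': { id: 'number', name: 'string' },
};

// Mock the frontend can build against before the real backend lands.
function mockGetUser(id) {
  return { id, name: 'Jane Placeholder' };
}

// Minimal contract check: every agreed field exists with the agreed type.
function matchesContract(spec, payload) {
  return Object.entries(spec).every(([field, type]) => typeof payload[field] === type);
}
```

When the real endpoint ships, the same `matchesContract` check runs against its responses, so drift between the two sides gets caught early.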


This is a pretty popular approach, and I use it sometimes, but "agree on an API spec for the project" does gloss over how challenging and time consuming this can be. How many people here have ever gotten their API wrong? (raises hand). There's still a lot of ongoing coordination and communication.


Oh certainly. It's pretty rare to get things exactly right on the first try. For this reason we hide new endpoints from our public OpenAPI spec and documentation until we are satisfied that some dust has settled on them a little bit.

Still, you only have to get it mostly right. Enough to get started. This only starts to become a huge problem when the endpoint is a dependency of another team. When you're in constant communication between the developer building the API and the developer building the client, it's easy to adjust as you go.

I find a key part of a workflow like this though especially if you have multiple teams is to have a lead/architect/staff developer or whatever you may call it be the product owner of the API.

You need someone to ensure consistency and norms, and when you have an API covering functionality as broad and deep as the one I work on, it's important to keep in mind each user story of the API:

- The in-house client using the API. This generally means some mechanism to join or expand related records efficiently and easily, and APIs providing a clear abstraction over multiple database tables when necessary.

- The external client, used by a third party or by the customer directly for automation or custom workflows. The biggest thing I've found that helps these use cases is being able to query records by a related field. For example, if you have some endpoint that allows querying by a userID, being able to also query by a name or foreignID passed over SSO can help immensely.


Yep. I was in a type 1 startup. Stuff got done fast.

Company forced us to type 2 using Angular. Projects that used to take a couple of days for one person became multi-month efforts for a dozen developers across three teams.


Sounds like the problem is having "sprints". As far as I know, most teams at Google and Meta don't.


They need scaled agile, where every 5 or 6 sprints you group them into a program increment, with extra overhead and even more ridiculous symbolic rituals. Your team is held to an arbitrary commitment months out, then executives shift the ground under your feet and make everything irrelevant. Dev teams love it!

</s>


It's remarkable that non-tech enterprises need all this agile for poor internal CRUD apps, but FAANG-scale product development somehow does not.


What do teams at google and meta practice?


it's called resume-driven development


Speaks some truth to Graeber's Bullshit Jobs thesis


Generally you don't want to reuse the same API for different types of clients; you want backends for frontends (BFF) that are specialized for each use and can be moved forward at their own pace. The needs and the requirements differ a lot between a browser, an app, and a server-to-server call.

And just because you serve HTML doesn't necessarily mean that your backend code is tightly coupled with the view code; HTML is just one adapter of many.

A boundary doesn't get better just because you slip an HTTP barrier in between; this is the same type of misconception that has driven the microservice hysteria.


> you want backends for frontends (BFF) that are specialized for each use

third time I've heard this thing and the reasoning still escapes me.

First there's ownership. Backend team owns API. Frontend teams own clients (web/android/ios/cli) etc. Do you now have a BFF for each client type? Who owns it then ? Don't you now need more fullstacks ?

there's confusion. Now you have 2 sets of contracts (API-BFF, BFF-clientIOS, BFF-clientAndroid, ...). You now have more human synchronization overhead. Changes take longer to percolate throughout. More scope for inconsistencies.

And there's performance. Adding more hops isn't making it faster, simpler or cheaper.

Isn't it better to have the API team own the single source of ownership?


(not the op so this is jme...)

  > Do you now have a BFF for each client type? Who owns it then ? Don't you now need more fullstacks ?
everyone has an opinion, but ime ideally you'd have 1 bff for all clients from the start

  > there's confusion. Now you have 2 sets of contracts (API-BFF, BFF-clientIOS, BFF-clientAndroid, ...). You now have more human synchronization overhead. Changes take longer to percolate throughout. More scope for inconsistencies.
yep, i have literally experienced the chaos this can cause, including the endless busywork to unify them later (usually it's unify behind the web/html bff, which breaks all kinds of frontend assumptions)

  > Isn't it better to have the API team own the single source of ownership?
it depends on what 'api team' means... but ideally the bff has its ownership separate from the 'backend'; whether that sits inside the 'api team' or outside is less important ime

but... ideally this separation of ownership (the backend proper, and the backend for the frontend) allows each to focus on its own domain better, without mixing up, say, localization in the lower-level apis etc

iow having a bff is sort of like having the view model as a server... that way multiple clients can be dead simple and just map the bff response to a ui and be done with it

(thats the ideal as i understand it anyways)


Companies do this, but it is really hard to support. I prefer teams that own an entire vertical slice. Then they know their API and, more importantly, the WHY: why their API does what it does, the way it does. A BE team can never know the entire context without exposure to the end use, IME, and there is far less ownership. YMMV, and it will ultimately come down to how your company is organized.


> Don't you now need more fullstacks ?

Yes. I’m generally against specialization and splitting teams. This of course depends on what type of organization you have and how complex the frontend is. iOS and Android is usually complex as it is so they are typically specialized but I would still keep them in the team.

Specialized teams not only creates synchronization issues between teams but also creates different team cultures.

What this does is that it induces a constant time delay for everything the organization does. Because teams no longer can solve an entire feature the organization instead spends more time on moving cards around in the planning tool of choice. The tiniest thing can require massive bureaucratic overhead.

Solutions also has a tendency to become suboptimal because no technician has an general overview of the problem from start to finish. And it also quite common that the same problem is solved multiple times, for each team.

By making BFFs specialized, instead of the teams, you don’t need to spend time to create and design a generalized API. How many hours hasn’t been wasted on API design? It adds nothing to customer satisfaction.

This also means that you separate public and private APIs. External consumers should not use the API as your own web client.

Specialized BFFs is not only to have a good fit for the client consuming it but it also about giving different views of the same underlying data.

E.g assume we have an article with multiple revisions (edits). Handling revisions is important for the Admin API but for the web client that serves the final version of the article not at all, it shouldn’t even be aware of that the concepts of revisions exists.

Creating a new a BFF is as easy as copy&paste an existing one. Then you add and remove what you need.

The differences between BFFs is usually how you view your schema (GET). Writing to your model (POST) is likely shared because of constraints.

What, then, is a different view of the same data? An SQL query (or VIEW). Too many APIs just map a database table to an endpoint 1:1; those APIs are badly designed, because the consequence is that the client needs to do an asynchronous HTTP JOIN to get the data it needs, which is very inefficient.
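The "asynchronous HTTP JOIN" in question can be sketched like this, with made-up endpoints and `api` standing for any fetch-like function:

```javascript
// A 1:1 table-to-endpoint API forces the client to fan out one request
// per related record -- the "asynchronous HTTP JOIN".
async function fetchOrderNPlusOne(api, orderId) {
  const order = await api(`/orders/${orderId}`);
  // one extra round trip per related record
  order.items = await Promise.all(order.itemIds.map((id) => api(`/items/${id}`)));
  return order;
}

// A BFF-style endpoint hands back the joined view in a single round trip.
async function fetchOrderJoined(api, orderId) {
  return api(`/orders/${orderId}?expand=items`);
}
```

For an order with N line items, the first version costs N+1 round trips where the second costs one.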

By writing SQL to fit your BFFs you will then realize that the ORM is the main problem of your architecture, it usually the ORM that creates the idea that you only have one view of the same data, one table to one entity. But SQL is a relationship model, you can’t realistically express that with 1:1 only.

By removing the ORM you will also solve the majority of your performance issues, two birds one stone scenario.

Ownership of a BFF should ideally be by the ones consuming it.

iOS and Android can usually use the same BFF, they don’t differ that much to warrant a new BFF. If there are any differences between the two, give them different endpoints within the same BFF for that specific use case. When designing APIs one should be pragmatic, not religious.

BFF is nothing more than an adapter in hexagonal architecture.


> Yes. I’m generally against specialization and splitting teams. This of course depends on what type of organization you have and how complex the frontend is. iOS and Android is usually complex as it is so they are typically specialized but I would still keep them in the team.

Right, why have someone _good_ at a particular domain who can lead design on a team, when you can have a bunch of folks who are just OK at it, and then lack leadership?

> Specialized teams not only creates synchronization issues between teams but also creates different team cultures.

Difference in culture can be cultivated as a benefit. It can allow folks to move between teams in an org and feel different, and it can allow for different experimentation to find success.

> What this does is that it induces a constant time delay for everything the organization does. Because teams no longer can solve an entire feature the organization instead spends more time on moving cards around in the planning tool of choice. The tiniest thing can require massive bureaucratic overhead.

I've seen this be true when I was by myself, doing everything from project management to development, testing, and deployment. Orgs can have multiple stakeholders who might throw a flag at any moment or force inefficient processes.

> Solutions also has a tendency to become suboptimal because no technician has an general overview of the problem from start to finish. And it also quite common that the same problem is solved multiple times, for each team.

Generalists can also produce suboptimal solutions because they lack deeper knowledge and XP in a particular domain, like DBs, so they tend to reach for an ORM because that's a tool for generalists.

> By making BFFs specialized, instead of the teams, you don’t need to spend time to create and design a generalized API. How many hours hasn’t been wasted on API design? It adds nothing to customer satisfaction.

Idk what you're trying to claim, but API design should reflect a customer's workflow. If it doesn't, you are doing it wrong. This requires both gathering of info and design planning.

> This also means that you separate public and private APIs. External consumers should not use the API as your own web client.

Internal and external APIs are OK, this is just a feature of _composability_ in your API stack.

> Specialized BFFs is not only to have a good fit for the client consuming it but it also about giving different views of the same underlying data.

If the workflow is the same, you're basically duplicating more effort than if you just had a thin client for each platform.

> E.g assume we have an article with multiple revisions (edits). Handling revisions is important for the Admin API but for the web client that serves the final version of the article not at all, it shouldn’t even be aware of that the concepts of revisions exists.

Based on what? Many comment systems or articles use an edit notification or similar for correcting info. This is a case by case basis on the product.

> Creating a new a BFF is as easy as copy&paste an existing one. Then you add and remove what you need.

That sounds terrible, and very OO. I'd rather generate another client for my openapi documented API, in whatever language is most appropriate for that client.

> The differences between BFFs is usually how you view your schema (GET). Writing to your model (POST) is likely shared because of constraints.

That's a stretch, if I need a form, I likely need the same data if I'm on iOS, Android, native, or web. Again it's about execution of a workflow.

> What, then, is a different view of the same data? An SQL query (or VIEW). Too many APIs just map a database table to an endpoint 1:1; those APIs are badly designed, because the consequence is that the client needs to do an asynchronous HTTP JOIN to get the data it needs, which is very inefficient.

Yes, those APIs are not being designed correctly, but I think you said folks are wasting too much time on design, so I'm not sure what you're arguing for here, other than that you shouldn't force your clients to do excessive business logic.

> By writing SQL to fit your BFFs you will then realize that the ORM is the main problem of your architecture, it usually the ORM that creates the idea that you only have one view of the same data, one table to one entity. But SQL is a relationship model, you can’t realistically express that with 1:1 only.

Yet ORMs are tools of generalists. I agree they are generally something that can get in the way of a complex data model, but they are fine for like a user management system, or anything else that is easily normalized.

> By removing the ORM you will also solve the majority of your performance issues, two birds one stone scenario.

That depends a lot on how the ORM is being used.

> Ownership of a BFF should ideally be by the ones consuming it.

Why? We literally write clients for APIs we don't own all the time, whenever we call out to an external/3p service. Treat your client teams like a client! Make API contracts, version things correctly, communicate.

> iOS and Android can usually use the same BFF, they don’t differ that much to warrant a new BFF. If there are any differences between the two, give them different endpoints within the same BFF for that specific use case. When designing APIs one should be pragmatic, not religious.

The workflows should be the same. The main difference between any clients is the inputs available to the user to interact with.

> BFF is nothing more than an adapter in hexagonal architecture.

That's what a client is...


You are comparing apples with oranges. I'm talking about organization, you about individual developers.

I can have a fullstack who is better than a specialist. Specialist only means that they have specialized in one part of the architecture; it doesn't necessarily mean that they solve problems particularly well. That depends on the skill of the developer.

And the point is that even if they do have more skill within that domain, the overall result can still suffer. Many SPAs suffer from this: each part can be well engineered, but the user experience is still crap.

If your developers is lacking in skill, then you should definitely not split them up into multiple teams. But again I'm talking about organization in general, that splitting teams has a devastating effect on organization output. Difference in culture will make it harder to move between teams, thus the organization will have much more difficult time planning resources effectively.

BFF is all about reflecting the needs of the client, but the argument was that a generalized API is better because of re-usability. The reason why you split into multiple BFFs is that the workflow isn't the same; it differs a lot between a web client and a typical app. If the workflow is the same, you don't split. That is why I wrote BFF per client type: a type that has a specific workflow (need & requirement).

> This is a case by case basis on the product.

Of course, it was an example.

> That sounds terrible, and very OO. I'd rather generate another client for my openapi documented API, in whatever language is most appropriate for that client

I'm talking about the server here, not the client.

> That's a stretch, if I need a form, I likely need the same data if I'm on iOS, Android, native, or web. Again it's about execution of a workflow.

But the authentication and redirects will probably be different, so you can reuse a service (class) for updating the model, but have different controllers (endpoints).

> Yes, those APIs are not being designed correctly

Every generalized API will have that problem to varying degrees, thus BFF.

> Yet ORMs are tools of generalists.

Oh, you think a fullstack is a generalist and thus doesn't know SQL. Why do you believe that?

> That depends a lot on how the orm is being used.

Most ORMs, especially if they are of type active record, just misses that mark entirely when it comes to relationship based data. Just the idea that one class maps to a table is wrong on so many levels (data mappers are better at this).

ORM entities will eventually infect every part of your system; there will be view code holding entities with a save method on them, so the model gets changed from almost everywhere. Impossible to track and refactor.

Performance is generally bad, thus most ORMs has an opaque caching layer that will come back and bite you.

And typically you need to adapt your database schema to what the ORM manages to handle.

> We literally write clients for APIs we don't own all the time,

The topic here is APIs you control yourself within the team/organization. External APIs, either ones you consume or ones you need to expose, are a different topic; they need to be designed (more). The point is that internal APIs can be treated differently than external ones: no need to follow the holy grail of REST for your internal APIs. Waste of time.

But even external APIs that you need to expose can be subdivided into different BFFs; no need to squeeze them into one. This has the benefit that you can spend less time on the overall design of each API, because the API is smaller (fewer endpoints).

> That's what a client is...

I'm specifically talking about server architecture here; the client uses the adapter.


> If your developers is lacking in skill

Are. One developer is, several developers are.

> Most ORMs, especially if they are of type active record, just misses that mark entirely

Miss. One ORM misses, several ORMs miss. (You did fix "are"!)

> Performance is generally bad, thus most ORMs has

Have. One ORM has, several ORMs have.

Come on, it's not that damn hard.


Agreed! There are many things in the IT industry that are prone to this kind of almost magical thinking, and "boundaries" / "tight coupling" is one of them. I realized that when I tried to actually compare some of the work I had done over the years while being fascinated with decoupling things. If you start measuring it, even at the top level (time, people, money spent), it becomes clear that there are obvious tight couplings at the architecture level (like data on the wire containing some structure, or transferring application state), and it is very tempting to remove them. But then we may find ourselves with a subtle tight coupling, totally non-obvious, that results in needing two teams or even two tech stacks, and a budget more than twice the size because of communication/coordination costs.


This development style might be a better DX for the teams. But Facebook on the web is an absolute dumpster fire if you use it in a professional capacity.

You can't trust it to actually save changes you've made, it might just fail without an error message or sometimes it soft-locks until you reload the page. Even on a reliable connection. Error handling in SPAs is just lacking in general, and a big part of that is that they can't automatically fall back to simple browser error pages.

Google seems to be one of the few that do pretty well on that front, but they also seem to be more deliberate about which products they build as SPAs.


> Decoupling the client and server means they can be separate teams with clearly defined roles and responsibilities

How desirable this is depends on the UI complexity.

Complex UIs such as the ones built by Google and Facebook will most likely benefit from that.

Small shops building CRUD applications probably won't. On the contrary: user requirements often cross-cut client and server-side code, and separating these into two teams adds communication overhead, in the best case.

Moreover, experience shows that such separation/specialization leads to bloated UIs in what would otherwise be simple applications -- too many solutions looking for problems in the client-side space.


There is no reason, other than poorly thought out convenience, to make the web browser/web server interface the location of the frontend/backend interface. You can have a backend service that the web server and mobile apps all get their data from.


When a company gets to the stage where they actually need a mobile app, it is pretty easy to add API endpoints in many/most/all? major web frameworks. Starting out with the FE/BE split slows you down immensely.
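One common way frameworks make this easy is content negotiation: the existing route keeps serving HTML to browsers, and a later mobile app asks for JSON via the `Accept` header. A minimal, framework-agnostic sketch (data shape and helper name are invented for illustration):

```javascript
// Render one resource as either HTML (existing web view) or JSON
// (later mobile/API client), based on the request's Accept header.
function renderOrder(order, acceptHeader) {
  if (acceptHeader && acceptHeader.includes('application/json')) {
    return { contentType: 'application/json', body: JSON.stringify(order) };
  }
  // Default: keep serving the server-rendered HTML the site started with.
  return {
    contentType: 'text/html',
    body: `<h1>Order ${order.id}</h1><p>Total: ${order.total}</p>`,
  };
}
```

Because the mobile endpoint reuses the same data-fetching code path, "adding an API" is mostly a matter of adding a second serialization, not a rewrite.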


IMHO it is completely doable to do a state transfer with HTML to a mobile device instead of writing a separate application using a separate technology. Then we can deal with coupling server-side, e.g. the "view's team" can use some templating system and the "core team" can play with logic using the JSP-Model2 architecture or something similar.


There is a third option, which is that FE-facing teams maintain a small server side application that talks to other services. That way the API boundary is clearly defined by one team.

It sounds a lot more annoying to have to manage one client and many servers instead.


Or even skip the DSL and use JS for both client and server, just independently. Validation functions can/should be simple, pure JS that can be imported from both.
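A minimal sketch of what "simple, pure JS imported from both" could look like (module and function names are made up for the example; the email regex is deliberately loose):

```javascript
// validators.js — a pure, dependency-free module that both the browser
// bundle and the Node server can import.
// CommonJS: module.exports = { validateEmail };  ESM: export { validateEmail };
function validateEmail(value) {
  const errors = [];
  // Loose format check: something@something.tld
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value)) {
    errors.push("This doesn't look like an email address.");
  }
  // Client-side nicety: catch the common gamil.com typo.
  if (/@gamil\.com$/i.test(value)) {
    errors.push("Did you mean gmail.com instead of gamil.com?");
  }
  return errors; // empty array means the format looks OK
}
```

Because the function is pure and has no DOM or server dependencies, the same file runs unchanged in both environments; only the "this email is already in use" kind of check has to stay server-side.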


Validation logic is surprisingly simple but almost always lives in different domains. Unique columns are a great example: the validation has to happen at the database layer itself, and whatever language is used to call it will just be surfacing the error.
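"Surfacing the error" often amounts to a small translation layer. For instance, Postgres reports a unique violation as SQLSTATE `23505`, which drivers such as node-postgres expose as `err.code`; the application only rephrases it. A sketch, with the error object faked since no database is attached (field names mirror the node-postgres style but the messages are invented):

```javascript
// Translate a low-level database error into a user-facing validation
// message. The database enforces uniqueness; the app just rephrases it.
function toUserMessage(err) {
  if (err && err.code === '23505') { // Postgres unique_violation SQLSTATE
    return 'This email is already in use.';
  }
  return 'Something went wrong, please try again.';
}

// A driver-style error object, faked for the example:
const fakeDbError = { code: '23505', constraint: 'users_email_key' };
```

The uniqueness check itself never leaves the database, so it stays race-free even with many app servers; the surface layer only decides how to present the failure.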

Language and runtime decisions really need more context to be useful. JS everywhere can work well early on, when making a small number of devs as productive as possible is the goal. When a project scales, different priorities take over in parts of the stack, and those can make JS a worse fit.


> felt they had to switch to Angular about the same time and they never regained their footing in the Angular age

And in this case what actually happened is exactly what we had expected would happen: tons of badly-written Angular apps that need to be maintained for the foreseeable future, because at this point nobody wants to rewrite them, so they become Frankensteins nobody wants to deal with.


And people want to complain about COBOL.


> then wait wait wait wait even more for completely inscrutable reasons later on. (e.g. amazing how long you have to wait for Windows to delete the files in your node_modules directory)

As far as I know, Windows Explorer has been extremely slow at this kind of operation for ages. It's not even explainable by it requiring a file list before starting the operation; I have no idea what it is about Windows Explorer, it's just broken for such use cases.

Just recently, I had to look up how to write a robocopy script because simply copying a 60GB folder with many files from a local network drive was unbelievably slow (not to mention resuming failed operations). The purpose was exactly what I wrote: copying a folder in Windows Explorer.

What does this have to do with React or JavaScript?


Can you argue a bit more genuinely and not pick on such a minor point as validation? I think the parent mentioned other points. How about the logical shift to let the client do client things, and the server do server things? A server concatting HTML strings for billions of users over and over again seems pretty stupid.


No more stupid than concatting json for those same users


Why not stress the argument further and say server "concats" http strings or sql strings? It's because of the nonsense from the web platform that inefficient text-based transports such as json became prevalent in the back-end btw.


But you can download another node package from npm to delete those other npm packages: npkill. For whatever reason, this is, as they say in the JavaScript world, "blazingly fast".


Wasn't Ember the idiomatic choice before React? I don't remember Angular being that popular with Rails devs generally.


I definitely associate it (Angular) with Python BE devs for some reason.


It was/is quite popular with .NET developers due to TypeScript being very similar to C# and implementing similar patterns like dependency injection (I know dependency injection/IOC isn't .NET specific).


This was my experience as well. Angular became really popular in enterprise teams that were already full of devs with a lot of C# and MVC experience.



