Even when I knew very little about web development, the principle of not accessing the database on the client side seemed obvious and very important.
This trick is already in use in a lot of places. If you click an upvote on Reddit, it doesn't do a complete round-trip; it just increments the count in place, then issues a command to the server to do the "real" increment. If, in the meantime, someone disabled your account, the two views disagree, and obviously the client's view loses.
Eventual consistency is already mainstream on the server side, in systems like Cassandra. This is just a generalization of that to the web client.
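The upvote pattern described above can be sketched as an optimistic update with rollback. This is a minimal illustration, not Reddit's actual code: the `sendUpvote` callback and the `Vote` shape are hypothetical stand-ins for whatever API call and state the real client uses.

```typescript
// Hypothetical client-side state for a single vote counter.
type Vote = { count: number; pending: boolean };

// Optimistically increment, then reconcile with the server's verdict.
async function upvote(
  vote: Vote,
  sendUpvote: () => Promise<boolean> // resolves true if the server accepted it
): Promise<Vote> {
  // 1. Update the UI state immediately (the optimistic step).
  const optimistic: Vote = { count: vote.count + 1, pending: true };
  try {
    const accepted = await sendUpvote();
    if (accepted) {
      // 2a. Server agreed: keep the incremented count.
      return { ...optimistic, pending: false };
    }
    // 2b. Server disagreed (e.g. the account was disabled): roll back.
    return { ...vote, pending: false };
  } catch {
    // 2c. Network failure: also roll back to the server-authoritative value.
    return { ...vote, pending: false };
  }
}
```

The key property is that the client's incremented value is only ever provisional: whenever the server disagrees, the rollback silently restores the authoritative count, which is exactly the "client's view loses" behavior described above.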
I understand the argument that this type of lying is "ok" for the sake of responsiveness, since 99% of the time the data will be accepted.
That argument doesn't hold up for me.
When I update the user's screen to reflect the data they entered, that is, to me, an indication that I accepted the data. If I first show the data as accepted and then show an error message when something goes wrong, I am undermining the trust the user places in my software.
It makes the software seem confused and inconsistent, like a person who says one thing and then says something entirely different a few seconds later.
With every piece of software there is an implicit promise that it won't try to mislead me. I don't use software that lies to me, and I don't expect others to either.