The M in MVC has come to mean “data model,” but it originally referred to the “mental model” of the user. What kind of thing are we trying to manipulate and what is the user’s mental model of such a thing?
How about a bank account? A mental model of a bank account would include useful operations like deposit, withdraw, transfer, checkBalance, and these would be the methods on the object. The data schema and the persistence would be necessary, of course, but an implementation detail. Models were smart and mapped to human perceptions of a thing, rather than dumb data persistence layers.
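A minimal sketch of what such a smart model might look like (names and rules are illustrative; persistence is deliberately elided because it is an implementation detail):

```python
class InsufficientFunds(Exception):
    pass

class BankAccount:
    """A 'smart' model: the methods mirror the user's mental model of an
    account. How the balance is stored or persisted is an internal detail."""

    def __init__(self, balance=0):
        self._balance = balance  # internal schema, not part of the interface

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise InsufficientFunds("balance too low")
        self._balance -= amount

    def transfer(self, other, amount):
        # one user-level operation, composed from two internal ones
        self.withdraw(amount)
        other.deposit(amount)

    def check_balance(self):
        return self._balance
```

Callers work entirely in the user's vocabulary: `a.transfer(b, 40)` rather than reading and writing balance columns.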
These kinds of business logic operations have often been moved into controllers, which couldn’t be further from the original intention.
MVC, Smalltalk, and OOP were all about stopping and thinking about the way humans think while interacting with computers. It was about designing nice interfaces for interaction based on human expectations, not database requirements. Internal object schemas and data persistence were implementation details of an object that could, if you did it right, be easily changed without changing the interface.
But we can’t help ourselves, and instead OOP today is a world of getters and setters with a little bit of data validation (if we’re lucky), and models are just a schema plus a generic data persistence interface (maybe an ORM). And the business logic lives in the controller, the least important, least reusable component of the architecture.
> it originally referred to the “mental model” of the user.
Do you have a source for that? I agree that that’s a useful way to think about MVC (somewhere downthread someone wrote that the model should be like a headless version of the application, which is similar), but I’m curious about the original expressions of that idea.
Since the link won't open for some, here's the relevant bit (the first two lines of the abstract of the paper linked to):
"MVC was conceived in 1978 as the design solution to a particular problem. The top level goal was to support the user's mental model of the relevant information space and to enable the user to inspect and edit this information."
I agree, and I wonder whether this is just an accident of history, or so much a tendency of human nature that we couldn’t have done it any other way. Maybe simple data models are the mental model of computing that couldn’t be easily changed.
I feel interactions between multiple objects are more complicated than looping over SQL query results, ORM logic, or object graph traversal logic.
Arbitrary message passing between objects is like an N-way communication network. The number of participants in an object graph increases the complexity of an OOP system; it is kind of like a distributed system, since every object is in a different state.
Yes, absolutely. Reenskaug has stated that MVC was designed for simple operations (and he co-designed DCI as an architecture for more complicated ones). And a number of the early OOP people, including Alan Kay, have said something to the effect of “Erlang is the only true OOP language.”
I did a deep dive reading the early papers and watching the lectures from the 70s, 80s, and 90s on this a few months ago. The early Xerox employees developing Smalltalk seem to have originally thought the idea of encapsulation would compose at all levels. As object interaction got more complicated, you would simply group a few related objects together inside a larger object, and the rest of the application would use that encapsulating object’s interface; you could go infinitely deep that way while managing the complexity. Later, in the 80s, Kay would talk about writing objects in Smalltalk and then gluing them together with a glue language (usually Mesa C), because he felt Smalltalk worked well for programming in the small but not in the large.
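The grouping idea above can be sketched in a few lines (Python standing in for Smalltalk; the domain and names are invented for illustration): two collaborating objects hidden behind one encapsulating object, so the rest of the application only ever sees the outer interface.

```python
class Inventory:
    """Inner object: knows only about stock levels."""
    def __init__(self):
        self._stock = {"widget": 5}

    def reserve(self, item):
        if self._stock.get(item, 0) <= 0:
            raise LookupError(f"{item} out of stock")
        self._stock[item] -= 1

class Payments:
    """Inner object: knows only about charging."""
    def charge(self, amount):
        return f"charged {amount}"

class OrderDesk:
    """Encapsulating object: groups Inventory and Payments behind one
    interface. The rest of the application talks only to OrderDesk and
    never learns the inner objects exist."""
    def __init__(self):
        self._inventory = Inventory()
        self._payments = Payments()

    def place_order(self, item, price):
        self._inventory.reserve(item)
        return self._payments.charge(price)
```

In principle `OrderDesk` could itself become an inner object of some still-larger object, which is exactly the "compose at all levels" hope.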
Again, I think Erlang got a lot right here, using a different model for programming in the small (functional) vs. programming in the large (actors/OTP).
But to hear the OOP pioneers talk about objects, they consistently describe the objects not in terms of data but in terms of behavior, similar to Erlang actors being processes. Each object is supposed to represent a “computer” and the network of objects is supposed to work like a distributed system.
I know which mental model I like better, but objectively, it’s a very different concept from what most developers think of as OOP today (although very similar to microservices).
This is basically Task Driven Development. You start from a list of user affordances and workflows, try to make them as clear, predictable, and robust as possible, and work backwards. It's top down from the user perspective, not bottom up from the developer/library perspective.
Apple's take on this is the reverse. It enforces UI conformity across apps because there are only so many UI objects in the library. You can build your own, but it's much harder than bolting together what's there already.
This is good for a unified look and feel, and fine for many common applications. But IMO it's not really MVC.
On the web you regularly see applications which are half task driven but not very robust, and break if the user does something a little unexpected.
Example: I got a 2FA code from Namecheap yesterday on my laptop, didn't have my phone next to me, closed the laptop, found my phone in the main office, logged in on the desktop, and it let me right in without the code.
TDD is really a kind of behavioural programming. Instead of tracing code paths you're tracking user behaviours and making sure the paths through the app match behavioural expectations with some sane leeway.
The original conception of MVC fits that nicely. What we have today - not so much.
This idea of “multiple objects” operating as “one object together” is something I really like too.
a) A good object-oriented API is enjoyable to use if it maps well to what you want it to do. Look at the developer productivity of ActiveRecord, the Django ORM, SQLAlchemy, or Hibernate. The object graph model is kind of fun to work with, and many developers prefer it to working directly in SQL.
b) Where object-oriented APIs fall down is where you want behaviour that the underlying data/graph model does not support. I am thinking of the OpenGL rendering pipeline, or operating system APIs such as POSIX.
c) The Document Object Model in web browsers and the Component Object Model (Microsoft Windows, Word, Visual Basic, the Office suite, etc.) are both dreams that everything on the screen and on the computer could be object-oriented and interacted with through a simple API. Most cross-platform GUI frameworks are object-oriented even if the underlying graphical APIs are procedural. For example, the Win32 API is procedural.
d) There is an impedance mismatch between object orientation, procedural code (C programming), and data-structure-driven approaches (including data-driven or data-oriented design, relational tables, or matrices).
e) UML and entity-relationship diagrams are another dream people had for modelling objects and relationships in computer systems, and one that didn’t pan out completely.
I have a number of ideas in this space. I think graphical user interface development is still in its infancy, and all the approaches we use have shortcomings of some sort. I say this as a devops/software engineer who only did a small amount of frontend development in previous roles. I’ve been loosely following the progress of Rust desktop development.
I desire system behaviour to be trivially easy to transform from one architecture to another architecture. This is my dream.
Take for example Postgres’ process-oriented model, or an imaginary system that uses a thread per network socket that you want to refactor to multiple sockets per thread. The idea of “late architecture” means we should be capable of transforming one such model into a slightly different one without dramatic, destructive code changes.
a) How do you model behaviour without tying it to a mechanism, so that it can be refactored easily? In Java we have interfaces; in Rust we have traits.
b) If you have an extremely rich data model structure, is it flexible enough for future behaviour to be supportable? I feel that introducing plurality (one-to-many, many-to-many) is a pain point.
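Point (a) can be sketched with Python’s closest analogue to Java interfaces and Rust traits, `typing.Protocol`: the behaviour contract is stated once, and the thread-per-socket vs. multiplexed mechanisms become interchangeable implementations (all names here are invented for illustration).

```python
from typing import Protocol

class Socket(Protocol):
    """The behaviour contract; no mechanism implied."""
    def send(self, data: bytes) -> None: ...
    def recv(self) -> bytes: ...

class ThreadPerSocket:
    """One mechanism: imagine a dedicated thread behind this buffer."""
    def __init__(self) -> None:
        self._buf = b""

    def send(self, data: bytes) -> None:
        self._buf += data

    def recv(self) -> bytes:
        return self._buf

class MultiplexedSocket:
    """Another mechanism: imagine many sockets sharing one event loop."""
    def __init__(self) -> None:
        self._queue: list[bytes] = []

    def send(self, data: bytes) -> None:
        self._queue.append(data)

    def recv(self) -> bytes:
        return self._queue.pop(0)

def echo(sock: Socket, payload: bytes) -> bytes:
    # behaviour expressed against the contract, not the mechanism
    sock.send(payload)
    return sock.recv()
```

Swapping one mechanism for the other never touches `echo`, which is the "late architecture" property in miniature.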
One of my ideas is that if you were to log the behaviour of a program with timestamps, and then write a second program that produces the same log, the two programs’ behaviours are identical.
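A toy version of that idea (my construction, not anything standard): wrap each operation so calls are appended to a log, then compare the logs of two programs built on different mechanisms. Timestamps are omitted here so the comparison stays deterministic.

```python
def traced(fn, log, name):
    # wrap fn so each call appends (operation, args, result) to the log
    def wrapped(*args):
        result = fn(*args)
        log.append((name, args, result))
        return result
    return wrapped

def run_program(double_impl):
    # a tiny 'program': drive one operation over a fixed input sequence
    log = []
    double = traced(double_impl, log, "double")
    for n in range(3):
        double(n)
    return log

def double_by_multiply(n):
    return n * 2

def double_by_add(n):
    return n + n  # different mechanism, same observable behaviour
```

Both implementations emit identical logs, so by this (admittedly coarse) criterion their behaviours are the same.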
> Most cross-platform GUI frameworks are object-oriented even if the underlying graphical APIs are procedural. For example, the Win32 API is procedural.
Well, Win32 is kind of object-oriented, in a way, even though it doesn't always map cleanly to OOP languages.
Windows are basically objects whose methods you call through SendMessage. There is even inheritance by replacing the window procedure and delegating to the parent procedure. There is polymorphism in that many kinds of windows support the same messages (e.g. WM_PAINT) and can decide how to handle them.
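That pattern can be sketched outside C (Python here; the message IDs and signatures are simplified stand-ins, not real Win32 constants or the real `SendMessage` signature):

```python
WM_PAINT, WM_CLICK = 1, 2  # illustrative message IDs, not real Win32 values

def default_proc(window, msg, data):
    """Stand-in for DefWindowProc: handles messages the window ignores."""
    if msg == WM_PAINT:
        return f"paint {window['title']}"
    return None

def send_message(window, msg, data=None):
    # the 'method call': dispatch through whatever proc the window holds
    return window["proc"](window, msg, data)

def make_button(window):
    """'Subclassing' in the Win32 sense: replace the window procedure
    and delegate unhandled messages to the previous one."""
    old_proc = window["proc"]
    def button_proc(w, msg, data):
        if msg == WM_CLICK:
            return "clicked"
        return old_proc(w, msg, data)  # delegate to the 'parent' procedure
    window["proc"] = button_proc
    return window

win = make_button({"title": "ok", "proc": default_proc})
```

The dispatch-through-a-replaceable-procedure step is where the inheritance and polymorphism live, even though no OOP language is involved.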
>> These kind of business logic operations have often been moved into controllers, which couldn’t have been further from the original intention.
Moving them out of the object model without putting them in the controller is actually a good thing, IMO. I don't want to test controller plumbing or data persistence; I want to focus on the business logic, so simple controllers that route to smart objects with dumb data models help. I agree the smart parts don't belong in the controller, but I don't think it's as bad as you make it sound.
There’s no reason your model can’t have an abstraction layer that contains the business logic and a concrete layer that handles the persistence (indeed, if it’s at all complex, it should).
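A minimal sketch of that layering, assuming invented names throughout: the business logic talks to an abstract persistence boundary, and the concrete store (in-memory here, but it could be ORM-backed) is swappable without touching the logic.

```python
from abc import ABC, abstractmethod

class AccountStore(ABC):
    """Persistence boundary: the business logic never sees the schema."""
    @abstractmethod
    def load_balance(self, account_id): ...
    @abstractmethod
    def save_balance(self, account_id, balance): ...

class InMemoryStore(AccountStore):
    """Concrete layer: trivially replaceable by an ORM-backed store."""
    def __init__(self):
        self._rows = {}

    def load_balance(self, account_id):
        return self._rows.get(account_id, 0)

    def save_balance(self, account_id, balance):
        self._rows[account_id] = balance

class Account:
    """Abstraction layer: all business rules live here, testable
    without a controller or a database."""
    def __init__(self, account_id, store: AccountStore):
        self._id, self._store = account_id, store

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        current = self._store.load_balance(self._id)
        self._store.save_balance(self._id, current + amount)

    def balance(self):
        return self._store.load_balance(self._id)
```

The controller then only routes requests to `Account`, which keeps it as thin as the parent comment wants.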