Hacker News
Why I No Longer Use MVC Frameworks (infoq.com)
461 points by talles on Feb 15, 2016 | hide | past | favorite | 194 comments

I feel like this is one of those cases where someone rejects currently existing tools, but then goes on to throw out the baby with the bath water when trying to come up with an alternative.

Figure 7 (http://cdn.infoq.com/statics_s2_20160209-0057/resource/artic...) looks to me like the sort of MVC you would do with vanilla PHP or friends back in the day.

In figure 9 (http://cdn.infoq.com/statics_s2_20160209-0057/resource/artic...): he handwaves saying that you can compose things, but nothing in the article suggests that SAM provides better mechanisms to sync data between client and server. E.g. what if the view is a table, and the action is deleting an item? Does an update consist of re-downloading a whole new table worth of data? Would it also do that if I deleted that same item from some other view? Or does it require 2 http requests in series (a DELETE followed by a GET)? What would an undo action look like in terms of HTTP requests and responses? The graphs feel like they're largely meaningless. Having arrows in a diagram pointing from one word to another doesn't say anything about the complexities of the system.

Hey Leo,

I must say, I've also become disillusioned with MVC for front-end frameworks, actually after using Mithril [1]. I'm sure you've seen me lurking and asking questions in the repo since before Mithril got big. I used it on a few projects and decided the controller was not an abstraction that made much sense. I looked elsewhere and stumbled upon domchanger [2] but after trying to use it seriously and forking some changes also ran into many impasses. This is what inspired me to develop domvm [3], a simple, pure-js composable vdom view layer for plain models. I took a few of Mithril's good ideas, removed globals, made the magic as opt-in modules (so a bit more wiring is needed) and ended up with something extremely fast (at least 2.5x Mithril), composable and mixed imperative/declarative. It's been a good journey and I'm very happy where everything ended up. Thanks for some inspiration :)

[1] http://mithril.js.org/

[2] https://github.com/creationix/domchanger

[3] https://github.com/leeoniya/domvm

Hi leeoniya. domvm is an interesting project that I've been keeping my eyes on. I'm thinking of adapting some of your ideas for the rewrite of Mithril that I'm working on, particularly APIs related to re-rendering control. Stay tuned :)

Wow, what a gracious reply. Best of luck to you both!

Indeed, I have no intention of marginalizing the excellent engineering that Leo has put into Mithril. It is a solid lib and if it "clicks" for you then by all means use it.

I would be delighted if my projects had a positive effect on a Mithril rewrite. I originally wanted to improve Mithril but alas it was not architecturally possible without large BC breakage and alienating the existing userbase.

Actually, as a proof of concept I wrote an adapter for domvm that allows Mithril's link rotator demo to work unmodified: https://github.com/leeoniya/domvm/tree/master/demos/mithril-...

In the end everything in software is an iteration of something else.

It goes both ways. I never expected that Mithril would inspire new projects, and I've come to really appreciate the great insights that come from these projects. At the end of day, we all want better software, and having more hands on deck (even if each person is doing their own thing) is a net positive in my opinion. JS fatigue critics can say all they want, but this is how innovation happens :)

You're a classy guy, Leo.

You're the kind of programmer I aspire to be like and be surrounded with.

> I feel like this is one of those cases where someone rejects currently existing tools, but then goes on to throw out the baby w/ the bath water when trying to come up w/ an alternative.

Humorously, this is exactly what the current set of client side MVC libraries did.

No offense, but I think it's disingenuous to compare the SPA movement with a guy with vague graphs that look like they suggest going back to what people used to do 5-10 years ago.

The SPA movement "threw out" the notion of ajaxing HTML snippets in favor of ajaxing structured data for many reasons: better separation of concerns, better asset cacheability, better defaults against XSS, better infrastructure for multi-client architectures, the list goes on. I'd argue that security w/ data endpoints is far easier to audit and reason about than the old school RPC-style send-me-html-when-I-do-X server interfaces.

AJAXing data is great (It should be AJAJ btw :)

But when do you update the view? Angular dirty-checks the model on each $digest cycle. React dirty-checks the view.

Why not simply require the component to call a function to indicate it's changed a couple variables in its state? Simply keep references to the DOM elements in your component and update them. It's much faster and gives you more control, and a programmer who forgets to write the function call will realize it as soon as they don't see the update. The IDE or linter can even have a static analyzer that flags a missing call to the function.

I think that a lot of times these attempts at convenience (such as Angular's two way binding, or the virtual DOM) just throw more layers of complexity and slow things down from a relatively simple straightforward approach, while providing little other than saving keystrokes.
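
The parent comment's approach can be sketched in a few lines of plain JS (all names here are illustrative, not from any real framework): the component keeps a direct reference to the node it owns and exposes one method that callers must invoke after mutating state.

```javascript
function makeCounter(el) {
  const state = { count: 0 };
  return {
    state,
    // the one function the component must call after changing its state
    stateChanged() {
      el.textContent = "Count: " + state.count; // direct DOM write, no diffing
    },
  };
}

// usage: `el` can be any object with a textContent property (a real DOM
// node in a browser; a plain object here for illustration)
const el = { textContent: "" };
const counter = makeCounter(el);
counter.state.count = 5;
counter.stateChanged();   // forgetting this call means no visible update
```

This is the failure mode the comment mentions: a missed `stateChanged()` call shows up immediately as a stale view, and a linter could in principle flag it.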

> Why not simply require the component to call a function to indicate it's changed a couple variables in its state

This is actually roughly what knockout and ember (pre-glimmer) do. They are known as KVO (key-value observer) systems and have implementation challenges of their own: knowing when to batch operations, dealing w/ computed properties and rx glitches (in reactive systems, a "glitch" is the name given to temporary inconsistencies that occur between stable states), and added complexity in terms of requiring the model layer to be observable-based (as opposed to POJOs in Angular/React/friends).

Also, high quality KVO systems are far more difficult to implement. To my knowledge, Vue is currently the fastest KVO-based javascript library in existence, and in order to support its POJO-like model API, it's significantly larger than Snabbdom (which is one of the fastest vdom implementations currently, despite clocking in at a mere 200-300 LOC).

AFAIK, most of the challenges faced by KVO systems have not been as extensively explored (at least by the javascript community) as virtual dom algorithm optimizations have, so currently I believe high quality virtual dom libraries are likely to perform better in various real life scenarios than current state-of-the-art KVO systems.
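
For reference, the KVO pattern under discussion boils down to an observable cell that notifies subscribers on every write. A minimal sketch (illustrative only, not knockout's or Vue's actual API):

```javascript
function observable(initial) {
  let value = initial;
  const subscribers = [];
  // read with cell(), write with cell(next) -- the knockout-style convention
  function cell(next) {
    if (arguments.length === 0) return value;   // read
    value = next;                               // write...
    subscribers.forEach(fn => fn(value));       // ...notifies every observer
    return value;
  }
  cell.subscribe = fn => subscribers.push(fn);
  return cell;
}

const price = observable(10);
const log = [];
price.subscribe(v => log.push(v));  // a view layer would hang DOM updates here
price(12);
price(15);
// log is now [12, 15]
```

The implementation challenges mentioned above (batching, computed properties, glitches) all live in how and when those subscriber callbacks fire.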

Right. But why not just make the app developer explicitly specify that some variables in the state have changed, and let the developer and requestAnimationFrame do the batching? After all, they are the best positioned to know when a batched update has been done.

Letting the developer tell the engine when to batch is actually not the hard part. React works exactly like that, for example. Mithril defaults to most-common-scenario call profile, but mostly as a matter of convenience, not because there's anything inherently difficult about exposing that flexibility to the developer.

The pain point that templating engines address is automating the process of figuring out what DOM changes are caused by what state changes. In order to do that, you have to either dirty check the state tree (as Angular 1 does), dirty check the template tree (as React/Mithril/vdom does), or have an observable state tree (as Knockout does). If the templating engine defers its responsibility to the developer, then if, for example, you do a `dataList.pop()`, you are responsible for writing out the code to remove the last item in a DOMNodeList, or updating some count label, or whatever else the view may be doing. This works ok in a small app, but it tends to become hard to maintain as a codebase grows in size (due to requirement changes or whatever).
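
To make that `dataList.pop()` example concrete, here is a sketch with plain objects standing in for DOM nodes (all names illustrative): the manual approach needs one hand-written routine per kind of mutation, while a templating engine collapses them all into a single re-render.

```javascript
const dataList = ["a", "b", "c"];
const listNode = { children: ["<li>a</li>", "<li>b</li>", "<li>c</li>"] };
const countLabel = { textContent: "3 items" };

// manual approach: the developer mirrors *each* mutation by hand
function popItem() {
  dataList.pop();
  listNode.children.pop();                             // remove the last <li>
  countLabel.textContent = dataList.length + " items"; // and fix the count label
}

// templating-engine approach: one render handles every permutation
function render() {
  listNode.children = dataList.map(x => "<li>" + x + "</li>");
  countLabel.textContent = dataList.length + " items";
}

popItem();              // hand-written routine for this one mutation
dataList.push("d");
render();               // the single render covers the new mutation too
```

Every new kind of mutation (sort, splice, filter) adds another hand-written routine to the first style, which is exactly the maintenance cost being described.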

In our framework, we just give a standard way for the developers to refresh tools and make incremental updates.


Remember, tools are reusable and the tool's developer is the one who has to write that code. The app developer just plops the tool on a page and it renders itself.

Most tools don't need 60fps efficiency for animations. They would implement a simple .refresh() method (similar to React's render method, except without the virtual DOM). When the tool is first activated, it typically renders any HTML it needs to inside this method, unless the HTML was already there (because eg it was rendered server-side). It typically renders a template with some fields from the state, just like in Ember.

Right after this, the tool usually just stores references to elements it wants to update. For example,

  tool.$foo = tool.$(".someNode");
And then when it comes time to update, you just do:

  someTool.state.x = 5;
  someTool.state.y = 8;
  someTool.state.z = "moo";
  someTool.stateChanged("x", "y", "z");
And the tool's constructor method would have done this:

  .set(function (oldValues) {
      var sum = this.state.x + this.state.y;
      // ...update the tool's DOM with the new sum...
  });
This event occurs whenever either x or y was reported as changed. Our framework would make sure these onStateChanged events are triggered at most once per animation frame.
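
A hedged sketch of the batching described above, with a manual queue standing in for requestAnimationFrame (the names and structure are guesses at the shape of the idea, not the framework's real API): `stateChanged()` only marks keys dirty, and the handlers run at most once per frame.

```javascript
function makeTool(schedule) {          // schedule ~ requestAnimationFrame
  const state = {};
  const dirty = new Set();
  const handlers = [];                 // [{ keys, fn }]
  let scheduled = false;
  return {
    state,
    onStateChanged(keys, fn) { handlers.push({ keys, fn }); },
    stateChanged(...keys) {
      keys.forEach(k => dirty.add(k));
      if (scheduled) return;           // coalesce: at most one flush per frame
      scheduled = true;
      schedule(() => {
        scheduled = false;
        handlers.forEach(h => {
          if (h.keys.some(k => dirty.has(k))) h.fn();
        });
        dirty.clear();
      });
    },
  };
}

// usage, with an array as a hand-cranked "frame" queue:
const queue = [];
const tool = makeTool(fn => queue.push(fn));
let runs = 0;
tool.onStateChanged(["x", "y"], () => runs++);
tool.state.x = 5;
tool.stateChanged("x");
tool.state.y = 8;
tool.stateChanged("y");   // same frame: no second flush gets scheduled
queue.shift()();          // the "frame" fires: the handler runs exactly once
```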

In your example, I'd signal that some array changed, and the tool would figure out what changed via dirty checking. But why do that crap? Why dirty-check at all? In our framework, we have streams and messages posted to the streams which are supposed to say what changed. A move was made in a chess game. A person wrote a chat message. These things are updates, which are hard to represent in Angular and React as mere maps from plain data to the DOM.

What's wrong with that? Any tool developer can add event listeners for when something changes, and do whatever update they have to do. The app developer just updates a tool's state and it just works. If you need 60fps or just want to render 1,000 constantly updating tools on a page (BAD IDEA) then you can do it.

I guess I should have said that the rest of our framework uses the same concepts. Instead of syncing data like Firebase or Parse, we treat data as streams onto which one or more collaborating users post messages. We take care of making sure the order of the messages is the same everywhere, and we take care of a ton more things such as pushing realtime updates, managing subscriptions, access control etc. All you have to do is implement the visual change when the server says a chess move has been made, etc.

We even have a convention for "optimistic" changes that assume that your POST succeeded by simulating messages that should have come in. Once in a while, the assumption is violated, eg if another user posts another message in the meantime, or the server becomes unreachable. Then we .refresh() the stream to its last confirmed state and all tools automatically refresh also because they have event listeners on that Q.Stream object's onRefresh event. Then your tool may want to retry the pending actions with OT, ask the user, or whatever. And so forth.
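
The optimistic-message convention might be sketched like this (Q.Stream's real API is not reproduced here; this only shows the shape of the idea): pending messages render immediately, and a refresh rolls the stream back to its last server-confirmed state.

```javascript
function makeStream() {
  const confirmed = [];           // messages the server has ordered and acked
  let pending = [];               // optimistic messages, simulated locally
  return {
    messages: () => confirmed.concat(pending),  // what the tools render
    postOptimistic: msg => pending.push(msg),   // assume the POST will succeed
    confirm: msg => {                           // server acked: promote it
      confirmed.push(msg);
      pending = pending.filter(m => m !== msg);
    },
    refresh: () => { pending = []; },           // assumption violated: roll back
  };
}

const stream = makeStream();
stream.postOptimistic({ type: "chess/move", san: "e4" });
const shownWhilePending = stream.messages().length;  // 1: move shows right away
stream.refresh();    // server unreachable: tools re-render the confirmed state
```

After the rollback, the tool can retry the pending action, apply OT, or ask the user, as the comment describes.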

Have you seen any framework with this straightforward model?

> In your example, I'd signal that some array changed, and the tool would figure out what changed via dirty checking. But why do that crap?

The example I usually use is a data table (sortable columns, filtering, batch delete, pagination, etc.; you know, the usual suspects). The main benefit of a templating engine is that you don't need to write various routines to do each variation of DOM manipulation and worry about the various permutations, you just write the template once.
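
The "write the template once" point, sketched with a hyperscript-style vnode function (illustrative, not any particular library's API): sorting, filtering and pagination all reduce to re-running one render over derived data, and the vdom engine diffs the result.

```javascript
const h = (tag, children) => ({ tag, children });  // minimal vnode constructor

// one template covers every permutation of the table's state
function renderTable(rows, { sortKey, filter, page, pageSize }) {
  const visible = rows
    .filter(r => r.name.includes(filter))
    .sort((a, b) => (a[sortKey] < b[sortKey] ? -1 : 1))
    .slice(page * pageSize, (page + 1) * pageSize);
  return h("table", visible.map(r => h("tr", [r.name, r.qty])));
}

const rows = [
  { name: "bolt", qty: 9 },
  { name: "nut", qty: 4 },
  { name: "washer", qty: 7 },
];
const vtree = renderTable(rows, { sortKey: "qty", filter: "", page: 0, pageSize: 2 });
// vtree.children holds 2 rows (nut, then washer); re-sorting or paging is
// just another renderTable call, not a new hand-written DOM routine
```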

Personally, I prefer to not rely on querySelector if possible in order to avoid code smells related to identifying elements in a page with reused components, and I prefer to avoid observables because I think that their "come-from" quality makes them harder to debug.

In addition, I feel the declarative nature of virtual dom enables a level of expressiveness and refactorability (particularly wrt composable components) that is difficult to convey to someone who's more used to procedural view manipulation.

> Have you seen any framework with this straightforward model?

Yes, I believe Flight is similarly event-based, and Backbone can be used like that, too, pretty much out of the box.

Being a framework author, I take great interest in improvements in framework design, but to be honest, I haven't gotten much out of event-based systems and I generally feel like they are a step backwards from virtual dom in a number of areas. Mind you, I'm not saying that event-based frameworks are bad. Plenty of Backbone codebases work just fine, and if your framework works for you, then that's great.

What is really the difference between this vaunted "declarative" syntax (something along these lines):

  <div class="foo">{{x}}</div>
Which is then picked up by the framework to do two way databinding, dirty checking and other inefficient stuff it assumes you want, vs the equally declarative:

  <div class="foo"></div>
And then have the component's code look for ".foo", save the reference and update it when the state changes? It's easy for the component developer to know what to do, after all, and it might involve more than just text replacement. Angular has filters as a poor man's declarative version of functions.

The trick to fast rendering is: don't read from the DOM, throttle DOM updates to requestAnimationFrame, and turn off fancy css when animating.
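
The requestAnimationFrame throttling mentioned here can be sketched as a tiny helper (illustrative; a manual queue stands in for the real rAF so the behavior is visible):

```javascript
function rafThrottle(schedule, write) {
  let pending, scheduled = false;
  return value => {
    pending = value;              // keep only the newest value
    if (scheduled) return;        // a frame callback is already queued
    scheduled = true;
    schedule(() => { scheduled = false; write(pending); });
  };
}

const frames = [];                // manual stand-in for requestAnimationFrame
const writes = [];                // records every actual "DOM write"
const setWidth = rafThrottle(fn => frames.push(fn), w => writes.push(w));
setWidth(10);
setWidth(20);
setWidth(30);                     // three state changes within one frame...
frames.shift()();                 // ...yield exactly one write, with 30
```

In a browser you would pass `requestAnimationFrame` as `schedule` and a real DOM mutation as `write`.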

It's true that without two-way databinding, you have to read from the DOM. Maybe in that sense two-way databinding is good, but when should the update happen? Onchange?

> What is really the difference

Well, the difference has already been explained to some extent in other comments. In the first snippet using a templating engine, the engine automates the DOM manipulation. There is no procedural "and-then-have-the-component's-code-do-X" step.

In the second snippet you're responsible for writing that code and making sure that your `.foo` query didn't unintentionally pick up some other random DOM element, that you don't have logical conflicts in the case of some code relying on the class name's presence and other code toggling it, etc.

Re: performance, I think today it would be wise to start questioning the idea of hand-optimized DOM manipulation being faster than libraries, because most people aren't templating engine authors and don't know the tricks that those engines use, the algorithms to deal w/ the hairier cases, or what triggers deoptimizations in js JIT engines, whereas library authors are actively dealing with those concerns on an ongoing basis.

Two-way data binding is somewhat orthogonal to the templating engines' primary goals. All a 2-way data binding does is wrap a helper around an event handler (e.g. oninput) and a setAttribute call. But as I mentioned above, you don't need to use the bidirectional binding abstraction; you can have explicit re-render calls instead.
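
That claim can be made concrete in a few lines (sketch only; `input` here is a stand-in object with a `value` field, not a real DOM node): a two-way binding is just sugar over an event handler plus a property write, with an explicit re-render call available for frameworks that skip the abstraction.

```javascript
function bindValue(input, state, key) {
  input.value = state[key];                            // model -> view, once
  input.oninput = () => { state[key] = input.value; }; // view -> model, per event
  return {
    update() { input.value = state[key]; },            // explicit re-render call
  };
}

const input = { value: "", oninput: null };
const state = { name: "Leo" };
const binding = bindValue(input, state, "name");       // input.value is "Leo"
input.value = "leeoniya";
input.oninput();                                       // state.name is "leeoniya"
binding.update();                  // no-op here: the view already matches
```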

> But when do you update the view? Angular dirty-checks the model on each $digest cycle. React dirty-checks the view.

The answer is in those statements: Angular and React do the checking, not you.

> Simply keep references to the DOM elements in your component and (you) update them. It's much faster and gives you more control...

The point of React and Angular is that you don't have to think about updating the DOM.

So they run into the typical problem of handling the "general case" in their own special way, and leaving you out in the cold for the other cases. Someone made a chess move? Someone queued a new song? Someone wrote a new chat message?

I agree with you. One point I'd like to make: there is a big difference between white papers and real applications running in production. Architectures are just general guidelines and won't perfectly fit every real world problem. Moreover, part of why software engineers are in high demand and well paid is because we have to deal with the complexities that real world problems bring with them.

> baby w/ the bath water when trying to come up w/ an … w/

Would you mind writing "with"? It's two extra characters, but it makes the sentence much more readable. (Words are recognized by shape, especially the shortest ones.)

Thank you. Abbreviations tend to physically shorten text at the cost of cognitively lengthening it. They're useful in cases where space is more constrained than usual, such as on an advertising placard, an axis label on a chart, or the virtual string-space of a tweet. They are also useful in cases such as telegraphy, texting, and some handwriting, where writers are constrained by input difficulty. Where writers are not so constrained, abbreviations act as small disfluencies in otherwise smoothly flowing text. They burden the reader unnecessarily and should be avoided, except in cases such as "etc.", where the abbreviation is the more common form.

Professional writers and editors, and I've been both, have considered this one of the standard "UI design" principles of the field for generations.

(And no, I'm not talking about formal versus informal writing. It's a more general principle, such as using mixed case or putting spaces between words.)

It's easier for me to parse "w/", for the record. You're right that words are recognizable by shape, and "w/" looks really distinct to me.

c_ b/ g% h) u; q| z* f< h\ i. f% v< d- o# c" a" t/ m~ y+ t] o& o/ r\ c~ o* p( j] j^ l& u@ d} o^ m& i)

OP has since reedited their comment --- thank you very much for that, OP!

> Words are recognized by shape

w/ is both incredibly short and has a very unique shape.

Many people – including people speaking English as a second language – are unfamiliar with it.

I, for example, didn’t know w/ meant with until now.

It's pretty common. Now that you know what it means, you'll grow to like it.

And to think, there's an alternate 'verse where the above comment was s/w\//with/, you'd still not know...

People have a hard time with MVC because frameworks that use the phrase MVC are always more complex than necessary.

There's a simple set of separations that you need to pay attention to for your code to be what I like to call "not stupid".

1. Separate your code from your data. You shouldn't be kicking out HTML with your code, like this guy does in figure 1. When you combine HTML or any display information with code that means your designer is your coder and your coder is your designer. Designers suck at code. Coders suck at design. Keep that shit separate.

2. Separate your logic code from your code that changes your data. You shouldn't be running updates/saves/inserts from 30 different locations in your code. Define your interface, expect that interface to work. When you need to shift from Oracle to Redis to RDS, you shouldn't have to refactor 80% of your application. You refactor your data update code and leave everything else that works alone.
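
Point 2 can be sketched as a single data interface that every caller goes through, so moving between stores means swapping one object rather than refactoring the application (store and method names here are illustrative):

```javascript
// the one interface the rest of the app is allowed to talk to
function makeUserRepo(store) {
  return {
    get: id => store.read("user:" + id),
    save: user => store.write("user:" + user.id, user),
  };
}

// an in-memory store; a Redis- or Oracle-backed one would expose the same shape
function memoryStore() {
  const data = {};
  return { read: k => data[k], write: (k, v) => { data[k] = v; } };
}

const repo = makeUserRepo(memoryStore());
repo.save({ id: 1, name: "ada" });
// repo.get(1).name is "ada"; swapping stores touches zero call sites
```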

There you have it. Model, View, Controller. You have a data model that can be displayed in any number of ways, and you control CRUD on that Model via the controller. It's not a religion. Just make sure there's a logical separation within your code so you can delegate as your team and application grows.

Architect things logically and call it whatever you want.

I agree with your big picture, but I think point 1 is giving too much importance to HTML. You can have proper separation of M, V and C whether your HTML output is defined in a dedicated template file or in a .js file. At a high enough level of abstraction, they're both just different ways of defining a function from an app state to UI, so in terms of overall app architecture, there is no difference.

The only material difference I've ever seen is that when people define their HTML output in template files, they have to invent a whole separate language for doing basic control structures like conditionals and looping. Why re-invent the wheel when you can use the same language that the rest of your app is programmed in?
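
The contrast being described, sketched side by side (hyperscript-style vnodes, illustrative): the template DSL invents its own `each`/`if` constructs, while the JS view reuses the language's own `map` and ternaries.

```javascript
const h = (tag, children) => ({ tag, children });  // minimal vnode constructor

// template-DSL flavor (a separate mini-language the engine must parse):
//   {{#each items}}<li>{{this}}</li>{{/each}}
// host-language flavor, using plain JavaScript:
const itemList = items => h("ul", items.map(i => h("li", [i])));

const vnode = itemList(["a", "b"]);
// conditionals become ternaries, loops become map; no new syntax to learn
```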

The point above, that the designer does not know how to code, is one I agree with 100%. So any JS implies that either the programmers are doing the design or they extract the GUI from the design, wasting time. In fact even templates can be too high a barrier to ask a designer to use, but at least with templates the design -> working site conversion is simpler, and any competent programmer should be able to learn the templates.

I also find that asking the designers to put keywords in the site mockup and using that as ad hoc templates can work rather well.

This argument doesn't make sense to me. Regardless of how the design is implemented (JS, HTML) - the designer shouldn't care because they are handing off the design to the developer as a pdf, sketch, etc. file. If he/she is both a designer and a front-end developer then great - he/she knows how to code. What's the problem?

A big benefit of an existing MVC framework, if you follow 1 and 2 above, is that when you need to bring new people onto a project (or move between parts of a large project) the quirks, usage paradigms and whatnot are all known quantities with expert advice written by a community, all free at your disposal. Home-baked MVC implementations can be simpler and more direct, but often suffer from a much taller learning curve.

I guess this works well for body shops (though I'm not sure that body shops work well for anyone), but I've never seen this theory play out well in practice.

Often I find it the other way round. You hire an Angular expert and, if he's junior, you spend the next 3 months unteaching him all the bad habits he's learnt, or, if he's a senior, spend the next 3 months arguing with him about who knows Angular better, all the while kissing goodbye to any maintainability your code base once had.

If you have simple, well factored code, a good coder can learn it quickly. In fact, juniors often get it faster because they don't have to map their limited understanding of "patterns" to the documentation on the website or the behaviour they see on the ground.

What gets me about all this is that web app development is just about the simplest programming you can do. It always staggers me the lengths otherwise intelligent developers (sorry, "software engineers") go to make their architectures as complicated as possible. I sometimes think it's because they're intelligent that they do this, perhaps out of fear of not being challenged?

As an aside, if you're spending three months arguing about who knows something better instead of putting the ideas under discussion into practice to see which one works better, you're either doing your hiring wrong or your team dynamics suck (or both).

You're right about the complexity, but I never did get the whole "separation of concerns" thing with respect to web applications. Perhaps it makes sense for large applications, but in my experience with smaller, simpler ones, it just means making more little pieces that have to work together correctly for everything to work, and one small change means having to go through several files and make tiny edits in a bunch of places instead of having it all together. IMHO you're separating something that just isn't very suitable to separation.

Generating HTML directly in the code makes it very easy to figure out how everything works, and thus modify it, since you see the whole process in one place.

Speaking as someone who recently turned a ~5 kLoC, ostensibly MVC application, consisting of over a dozen files (mostly empty or nearly so) but quite trivial, into a single file of less than 100 lines: it's amazing how much complexity some people can introduce.

The problem lies when your application grows. Remember that one of the "rules" in computer science is modularity.

Here's a couple of examples:

For example, let's say you have a data set that returned user information. Combining it directly with the HTML may be easier. The issue happens when you want to use that same data (or portions of that data) elsewhere. From there you would have to rewrite the query. Having the data set separated (either in a class or a function that returns raw data), then having that data parsed and placed within HTML, works much better. That way you don't have to rewrite anything (and have the satisfaction that the data "works").
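
That example might look like this (names are illustrative): one function returns raw data, another turns it into HTML, so a second consumer reuses the query untouched.

```javascript
function getUserInfo(db, id) {      // data layer: returns a plain object
  return db.users.find(u => u.id === id);
}

function userCard(user) {           // presentation layer: data in, HTML out
  return "<div class='card'>" + user.name + " (" + user.email + ")</div>";
}

const db = { users: [{ id: 1, name: "ada", email: "ada@example.com" }] };
const html = userCard(getUserInfo(db, 1));
// a CSV export or JSON endpoint would reuse getUserInfo without touching it
```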

On the other side, separating the HTML will be helpful as well. Say that the interface has now changed: you now need to remove a column in a table and reorganize other components. Instead of having to look through the code where you may have placed the HTML with the data, all you need to do is make your changes accordingly in your presentation layer.

All in all, it may seem like a lot more work to get things started (I don't use MVC in the traditional sense, and instead have my own programming rig). The reward lies when you can separate duties among team members. Someone can work on the presentation (and the browser compatibility issues that it includes) while another one can work on the business logic. So on and so forth.

So refactor the code when you find yourself needing that separation. You'll find yourself with a much more understandable code base.

You can still have the separation of duties that you describe because, in the case of web app dev, HTML and CSS are already separate languages.

> Designers suck at code. Coders suck at design. Keep that shit separate.

I currently work in a small-ish company, where I am the ONLY person designing/developing multiple systems. (I do get some input now and then, but it's mostly all down to me.)

I like this idea in principle, but you don't always have the resources to achieve this!

> 1. Separate your code from your data. You shouldn't be kicking out HTML with your code, like this guy does in figure 1. When you combine HTML or any display information with code that means your designer is your coder and your coder is your designer. Designers suck at code. Coders suck at design. Keep that shit separate.

How do you feel about the common case where there is one person for both coder and designer roles? This is extremely common. Your comment makes it sound like the coder and designer are always separate people when in reality, for most websites, the front end coder is the designer.

I think I was pretty plain. Designers suck at code. Coders suck at design.

That use case is the "we should only have to pay one guy and he should know everything" use case. Or the "I have an idea for an app. I'm a company now." use case.

Know why so many bootstrap sites look pretty much exactly the same?

If someone wants to change the color of the header on a website, should it require a code change?

If there's a table that someone wants to change the border on, should that require a person who knows three languages? Or should that require a guy who read an html book last week?

Conversely, what if there's a bug report about an element that is not displaying on a Nook Color? Should that require a developer to look at? Or wouldn't his time be better used working on an actual programming problem?

I interviewed a developer recently and asked him about a particularly difficult challenge that he worked his way through. Usually people respond with a problem that manifested itself in multiple ways or tracking down an important bug in a widely used library. This particular answer was about an errant price change on a major retail web site that took 6 days to implement. A price change should be a zero down time absolutely no-brainer data change. But some idiot along the way decided that he should embed html AND javascript in his java class and it took a team of people several days to track down where exactly the error was that produced the errant price. There's stupid code like that all over the place.

With experience, you learn that you can avoid those kinds of problems completely with very little overhead early on.

> But some idiot along the way decided that he should embed html AND javascript in his java class and it took a team of people several days to track down where exactly the error was that produced the errant price.

A team of people, several days, and no one thought to just grep the codebase to find the relevant pieces? If that "idiot" didn't conveniently put the pieces all in one place, maybe it would've taken even longer to find? I wouldn't blame that on being "stupid code"...

I had a little trouble with this one myself. Apparently there was a lot of '<' + elementType + '>' + value + doClosingTag(); kind of junk all over the place. Add to the mix javascript that called a web service and got pricing and another piece of javascript that called a different web service that adjusted pricing based on sales.

I'm not going to defend this practice or design. I would never have let something like that get anywhere near production myself.

You see a lot of people in this thread who don't feel that separation of code and display information is important. Imagine how that looks in an agile environment where nobody steps back to look at the big picture and developer turnover is high - like maybe a model that included churning offshore resources in and out as requirements demanded it. It's bad.

I've seen this kind of thing in action. One particular case that sticks out in my memory involved function calls that didn't actually exist but were caught by magic methods and created on the fly. That one took me and another dev a few days to track down. If you find it hard to believe, you're either lucky or are severely underestimating the power of stupid.

Designers coding is fine. Designers programming is not.

Careful on that line as some of us become programmers and are quite apt at playing both roles when needed.

In reality separation of concerns is a luxury only afforded to larger teams and orgs.

If you have the people, do it. If you don't, you're in for a bad time concerning yourself with it.

Granted the html in Java is just straight idiocy, and my point refers to far less insidious mixings.

separation of concerns in your organization is a luxury.

separation of concerns in your codebase is a necessity.

I was about to write the same thing, so true. Even if you are working alone at the moment, doesn't mean that you should glue everything together and end up with spaghetti code.

I was referring more to the level of separation.

I feel like a designer would have the opposite sentiment.

*Take with a pinch of salt

Often it makes sense to have the code and the data together. I am thinking about OOP, where your classes define the data and the methods that act upon it.

Having inherited a new project recently, I was disappointed to see the documentation talking about following the philosophy of thin models and fat controllers (the opposite of what is considered best practice for Django, which the project is using). It means that there isn't one obvious place to look for the expected functionality, there is unnecessary repetition of code, and it's way less efficient than it could be if it was done the other way - models with data definitions and related methods in the same place.

I am all for separation of concerns, but I don't think splitting along the lines of "data" and "code" is the best way to achieve that.

You say HTML is data? That's funny, I always thought it was code.

Regarding 1. You need to build your architecture with your (current and future) organization in mind. If you don't have people in your project/company who only know one particular technology ("web designers" writing HTML/CSS) there is no need to separate that particular technology from the rest of your "code". If that fits your framework better, of course you could separate those technologies, but I don't see why you should build it like that just because.

It's not just because.

It is a foreseeable problem that you will have to support multiple display layouts with the same set of data. It is a foreseeable problem that you will need to train and delegate.

You can be the full stack guy today and learn everything you can about all aspects that you are interested in. It's a great way to learn. But if there is no logical separation in place, you are stuck where you are until you either refactor the entire thing or you find somebody who went down the same path and has the exact same skill set as you.

Not segmenting your code is stupid. Plain and simple.

If you want to see code that doesn't separate things logically, have a look at old vbulletin or phpbb code. Look at what happened historically in both of those projects. Think about what it's like to add a new feature when you have 40 php pages that you have to touch, each written by a different developer, and each with html mixed in with the php. You have to know the whole set of code in order to do anything productive. The end result for both was security problem after security problem and a design that has not transformed significantly in 10+ years.

You can buy yourself job security by building a project that requires the developer to know a specific language, an MVC library, a specific version of HTML, a particular javascript library, and how to interact with web apis from two vendors from both server side and client side code.

You can earn yourself a promotion and new interesting things to work on by building projects that you can walk away from.

> Not segmenting your code is stupid. Plain and simple.

And overly broad, generalizing statements are not?

Remember to be respectful towards your peers.

That was not a comment directed at jacobr. It was a statement about code.

There's stupid code all over the place. Twenty years ago, you had to know networking, hardware, relational databases, and programming to get anything done. Nobody did QA testing, AB testing, or even had a development environment separate from production.

We're slowly growing up and realizing our mistakes. If we can't call our own code stupid, then we've achieved a level in our political correctness culture that ... well, that's just plain impressive.

If anyone felt personally attacked, I certainly did not mean for it to come across that way. I am in a bit of a mood tonight...

You're thinking too simplistically. As a project grows you actually need to separate things out, and now you have a problem: either you group per component (something like an address handler), so that you have

  component/Controller
  component/Model
  component/View

Or you group by entity, something like customer_management, where you put all your customer-related controllers:

  customer_management/View1
  customer_management/View2
  customer_management/View3
  customer_management/Controller1
  customer_management/Controller2
  customer_management/Controller3

Now you introduce subfolders:

  customer_management/Controller/Controller1
  customer_management/Controller/Controller2
  customer_management/Controller/Controller3
  customer_management/Model/Model1
  customer_management/Model/Model2
  customer_management/Model/Model3

Both have their ups and downs, and both are still recognizably MVC. However, you've outgrown MVC a little bit, since now you're so big that some things no longer make sense, so you add a data layer on top of your models, or a service layer, etc. Things get really messy when you don't change. That's why programming is hard. The correct or logical separation of code changes as your application evolves, but people don't change it, so they blame the separation concept.

Sorry, but you just don't get the fact that SAM is reactive/functional and what benefits you can drive from it.

I am certainly reactive, but also functional - so thanks for that.

It took me 2 hours to realize how funny this comment was.

I didn't address SAM at all, other than pointing out that the author's figure did not have a separation of code and display.

Can you please re-read the paper, the whole point of the pattern like Cycle.js is to separate logic from effects. Logic != Code

Oh boy...

I take it that you are the author and I've completely derailed your thread by going off on my own tangent.


I will give it a second read and see if I can't come up with some more relevant feedback for you.

Can you expand on the benefits you can drive from it? Just getting a snark reminder about how someone doesn't "get" something is not helpful.

>Designers suck at code. Coders suck at design. Keep that shit separate.

As a developer, it's pretty easy to pick up design basics when you've been coding frontends for a few years. On the other hand, I regularly see professional designers come up with... very impractical ideas, to put it nicely.

Anyway, your statement is not necessarily always true.

I believe another major issue is that these new APIs are not designed using the historical knowledge that got us here.

It seems to me at least that every 10 years or so a new group of fresh grads enter the workforce and by sheer will (and all nighters) recreate different forms of the same solutions to familiar problems.

Meanwhile, the previous designers and implementors seem to fade (who knows where they end up?) and the normal evolution we should be seeing in our API's, frameworks, and collective knowledge simply inches (is that too generous?) along.

We are giving different names to original ideas. For example, I have read (sorry, I can't find the links) recently where others have published on interesting new design patterns only to find out they are essentially renamed versions of the original GoF patterns.

The same challenges exist and we are seeing someone else's take on the solution and it feels too familiar because they look at what's out there and provide only a marginal variation on a theme.

I believe this partly explains this "been there, done that" that we're seeing these days.

Anyone around over 20 years or more will likely agree.

It's not a bad thing in my view. Watching an inexperienced and enthusiastic team lurch forward with some new hotness, often different angles on old problems, is one of the things that keeps me interested.

That pioneering community matures and starts recognising the similarity of its deeper issues to more classical computation problems. Then they start to investigate earlier thinking and solutions, or engage older developers with wider knowledge.

But they almost always add something in the early stages as they were not prejudiced by legacy solutions.

Rails was a good example of this - it massively changed a lot of thinking in good ways. Many people I met in the early days of Rails went on to need outputs from earlier generations - niche languages, classical data structures, lexers, low level debuggers etc. - the very things they thought they were disrupting.

Older developers are not immune to this - I've sketched out some complicated (to me) data requirement on a whiteboard, with some loose thinking on how we might solve it, only to be informed it's a classical problem with a standard solution pattern developed by someone in Greece 2000 years ago.

> someone in Greece 2000 years ago

How the Ancient Greeks Invented Programming http://www.infoq.com/presentations/Philosophy-Programming


Matt Butcher explores the philosophical systems devised by Plato and Aristotle, showing how Plato laid the foundations for what is now OOP, while Aristotle’s dynamic model is at the core of FP.

> someone in Greece 2000 years ago

not sure if it's a hyperbole, but it gave me a good laugh :)

curious. What was the classical problem?

I've been in a situation that could have been described exactly that way, and in my case it was the Sieve of Eratosthenes.
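For reference, that classical solution is a few lines in any language; a sketch in plain JS:

```javascript
// Sieve of Eratosthenes: all primes up to n, the ~2200-year-old standard solution.
function sieve(n) {
  const isPrime = new Array(n + 1).fill(true);
  isPrime[0] = isPrime[1] = false;
  for (let p = 2; p * p <= n; p++) {
    if (isPrime[p]) {
      // Cross off every multiple of p, starting at p*p (smaller ones are done).
      for (let m = p * p; m <= n; m += p) isPrime[m] = false;
    }
  }
  return isPrime.reduce((primes, flag, i) => (flag ? primes.concat(i) : primes), []);
}

sieve(30); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```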

Traditional OOP/imperative programming tends to encourage this "reinvention". Where does the old guard go? I think programmers age out when they see the same "trend" for the 3rd time and decide they need to get off the hamster wheel and go into management. GoF patterns make sense if your language is statically typed and OO and you don't have real data abstraction.

To be fair, most OOP practitioners don't even know what "data abstraction" means: this is what is meant by "it's better to have 100 functions that work on one data structure than 10 functions that work on 10 different data structures". The latter is OO, the former is FP.
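To illustrate the quote (a toy example; `pick`, `user`, and `order` are invented names): in FP style everything is a plain object or array, so one set of generic functions serves every "type".

```javascript
// FP style: values are plain data, so one generic function works on all of them.
const pick = (obj, keys) =>
  Object.fromEntries(Object.entries(obj).filter(([k]) => keys.includes(k)));

const user = { id: 1, name: 'Ada', email: 'ada@example.com' };
const order = { id: 7, total: 40, userId: 1 };

// The same function serves both shapes -- no per-class getters needed.
const userView = pick(user, ['id', 'name']);    // { id: 1, name: 'Ada' }
const orderView = pick(order, ['id', 'total']); // { id: 7, total: 40 }
```

The OO equivalent would be a getter method on each class, i.e. 10 functions for 10 data structures.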

Your first paragraph makes some sense, but I truly disagree with your second.

Good OOP design is very similar, if not identical, to good FP design. For example:

- The iterator pattern is essential to more abstract FP; see for instance 'The Essence of the Iterator Pattern': https://www.cs.ox.ac.uk/jeremy.gibbons/publications/iterator.... The iterator as a stream is a coinductive data structure, and as such a fundamental structure in FP. Also note that transformations on iterators are applicative.

Then, look at many of the other GoF structures: the visitor pattern, which is very similar to recursive algorithms on trees. The adapter is basically applicative, the Factory is difficult, but could be seen as a coinductive structure.

Also, one should note that in (modern) proper OOP, composition is your most important tool. This is very similar to composition of functions.

You picked an especially bad pattern for your proof. Iterators aren't inherently functional: in the imperative world they needlessly maintain state and are by nature one giant side effect. Data abstraction doesn't even require an iterator "pattern" to achieve iteration; that's the whole point of data abstraction: a "pattern" isn't required, merely a "concept". In Clojure, for example, the concept is a sequence, which has stateless traversal. The paper you quote even makes the distinction between the functional and imperative approaches.
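The contrast is easy to see in JS (a sketch; Clojure's actual sequences are lazier and more general than this):

```javascript
// Stateful iterator: each next() call mutates a hidden cursor.
function makeIterator(arr) {
  let i = 0; // hidden mutable state
  return {
    next: () => (i < arr.length ? { value: arr[i++], done: false } : { done: true }),
  };
}

// Stateless, sequence-style traversal: "first" and "rest" are pure views,
// so traversal never mutates anything and consumers can't interfere.
const first = (arr) => arr[0];
const rest = (arr) => arr.slice(1);

function sum(arr) {
  return arr.length === 0 ? 0 : first(arr) + sum(rest(arr));
}

sum([1, 2, 3]); // 6, and [1, 2, 3] is untouched
```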

Think C++ iterators. Those maintain state, but not mutable state.

You have full control over the iteration, but everything is immutable and thread-safe.

> Meanwhile, the previous designers and implementors seem to fade (who knows where they end up?)

'Promoted' to management, due to the all-pervasive and asinine perception that the only way to recognize a highly skilled technical person is to waste their time with bossing other people around.

I think this is also due to the fact that, no matter what is taught in coding schools or universities, you cannot fully understand a pattern until you've seen it working in real apps, with real use cases, and your mind grasped the whys on top of the hows.

I think part of the problem is that MVC is pretty heavyweight. Most UI doesn't need that kind of flexibility, but when you want it, you want it. So you need a way to make it simple most of the time and still have access to the details.

In web development it is probably complicated by that fact that, in my opinion, declarative positioning and sizing of elements is a pipe dream. It looks simple until you try to actually implement it, and HTML/CSS has only a rudimentary implementation. (As far as I know, Motif and Apple's constraints are the only UI toolkits that have a solid implementation) Given what we want to do with the web these days, I think we would be better off with programming the web page declaratively. Something like what Qt does. I've never found an easier way to write a UI than Qt.

You nailed it.

MVC was popularized because after enough people went and just threw together random things for enough projects, turnover, learning curve, switching between projects, etc just became a huge burden. The big sales pitch for MVC on the web (largely influenced via Rails) was a good enough common structure for web apps that developers could learn and apply across projects.

It worked in that regard, but as usage became more popular we ended up adding on more and more and more and more to try to make the "common" solve everything. Constant changes screw the entire point of the "we all learned and know this" benefit.

Flexibility of your approach is probably more important in the world today than following a rigid structure in an attempt to let the structure solve everything.

I like the approach outlined here...but I'd have a hard time using a framework based on it. I'd rather write my own code using a simplified process and minimizing dependencies if I'm going outside of something that's clearly a major player that can be counted on to stick around.

You just mentioned something very important: minimizing dependencies. Open source frameworks are sold as a way to minimize development complexity, but they add the big pain of depending on external packages that frequently change without notice. If you develop your own code, at least you know what and how things changed.

>frequently change without notice

Unless you are doing something so spectacularly, mind-blowingly irresponsible in your dependency management that changes just come in of their own accord (i.e. Go's defaults), no they don't. They change when you choose to vendor the new version or change the pinned version number.

It's true that your dependencies won't get security updates until you decide to upgrade, but other people are certainly not writing security updates for your in-house code.

You should care about the code quality of the 3rd party libraries you use, with the understanding that you might one day take over maintenance of them, at least for internal purposes. That's still not a reason to duplicate effort.

Not trying to be argumentative, but MVC was popular in web development well before RoR came on the scene (Struts for example). When RoR came on the scene, it was during a time when there was an explosion of MVC frameworks in languages other than Java (Django, CodeIgniter, Zend Framework, etc). I would argue that RoR's popularity stems more from its use of Active Record and all the buzz surrounding that at the time.

I remember. I started out with Struts too but Rails was the first framework that got things to a mode of operation that people started trying to copy in every other language.

That's the main thing that I meant. Rails was the first that did things in a way that everyone else tried to emulate.

That's what I was thinking; the boom in web frameworks started right after RoR was released. I still consider RoR to be the framework that popularised the MVC paradigm in web applications (but I might be wrong / have an incorrect point of view)

He nails the problem: API bloat due to chaotic front end needs, but there is a much, much simpler solution: move the UI/DOM building back to the server side.

This gives you a nice, secure environment to build your UI in with the full power of the query language your datastore provides, and it doesn't require any particularly complex architecture or discipline to maintain.

And, of course, I would be remiss if I didn't mention my baby, http://intercoolerjs.org/, as a tool to help you do it.

Yes, yes, yes.

Javascript is useful for web apps.

When a blog has time to display a load animation in 2016 someone, somewhere should have their client side JS privileges withdrawn ;-)

> When a blog has time to display a load animation in 2016 someone,

Using JavaScript client-side doesn't necessitate this. You could inline the data into the page and you would not be able to tell if it was client-side rendered.

Only when one of the scripts is blocked and I get placeholders sprinkled from title and down, main content was readable though IIRC. Happened to me on a MSFT site recently :-]

Which is exactly why sites like blogspot make me do a WTF.

I remember the days of 3-tier client-server computing. This exact problem. How much knowledge does your middle tier need to have about your presentation tier? Do you shape your database schema to your business logic, or provide stored procs that can translate your "pure" schema into the structures required by the middle tier? All of those questions.

The key one we're asking here is "how much business logic do you need in your presentation layer?" Too little then you're round-tripping for simple form validation and your UI is unresponsive. Too much and your UI becomes tightly coupled to your business rules (and you start exposing too much attack surface).

Modern SPA web apps are making a deliberate trade-off, moving more logic into the client so the app is more responsive.

The problem of making the API schema "pure" or tightly coupling it to the UI is the same old "how much business logic sits in the database" debate. Ideally, of course, all endpoints would relate directly to application entities so that changes to the UI don't change the API. But this increases the number of round-trips and reduces responsiveness. Same for rendering HTML on the server - increased round-trips and reduced responsiveness.

The answer depends on the trade-offs that the project needs. Is the API complex and used by lots of clients? Keep it pure. Is the API small and/or used by only one client? Tailor it to the client. Is responsiveness paramount? Put as much as possible client-side. Is security paramount? Put as little as possible client-side.

IMHO there is no right answer that works across all situations, just as there wasn't back in the 3-tier day.

Yeah, but I see them making a lot of the same mistakes that were made back then, and I can't help but notice that, in the context of web applications, you have an opportunity to render the UI in a trusted environment where the code being executed is guaranteed to be written by a non-hostile party, just by rendering/executing server-side.

If you throw HATEOAS-without-thinking-or-arguing-about-it on top of that, it seems like a no brainer to bias toward that approach.

I used to think that web apps were dumb and thick apps were obviously better in most cases, but I've changed my tune in the last few years as I've come to understand HATEOAS and disentangle it from the JSON API quagmire it got into:


Client side logic and rendering have a HUGE number of advantages when done correctly. Most sites don't, but it is possible. API churn is simple to solve (GraphQL, the MVVM pattern, or a proxy API wrapper can all be used quite effectively to solve it), and if you build a universal (meaning progressive-enhancement-enabled) application you can pick up the advantages of an SPA without losing the very limited advantages that a true server-rendered app gives you.


If you want to realize those advantages, you will end up introducing a "simple" solution like GraphQL. In an insecure environment.

Read, man. Don't just react.

I think you misunderstand what GraphQL is. It's not direct access to the database. It is a view model abstraction on top of your database which includes permissions checks for each node and edge, so in many ways it's actually more secure than the alternative of securing each endpoint adhoc.

GraphQL is a copy of OData / IQueryable, which is over 5 years old. It's not new. Just as Flux is Event Sourcing rebadged.

If you search for these technologies, which are quite old, you can find the positive and negative consequences of their implementations in the wild.

i.e. exposing IQueryable<T> is an anti-pattern and a security vulnerability. As soon as your API is public and you try to lock it down against simple DoS attacks, you're not going to end up back where you began. The only place these technologies belong is in your trusted environment - API <=> DataSource, not Client <=> API.

I understand what GraphQL is: it's a step towards full datastore access on the client side. It exposes some security issues (what if you don't want a user to have access to some data, you miss a security constraint, as you will, and they modify your GraphQL query to see it?) but it isn't the full security clusterfuck.


Again, the tradeoff is touching your API (or security model) every time UX needs change, or exposing more and more security issues client side.

That's absolutely not true. For each node type and edge type in the graph, you provide a required canSee() function which controls its visibility. This granular access control leads to a more secure system than the alternative, which is doing access control at the endpoint level and then hoping that the endpoint doesn't fetch data it isn't supposed to.
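A sketch of what that per-node check could look like in a hand-rolled resolver. Note these are invented names (`canSee`, `nodeTypes`, `resolveNode`) for illustration only; as discussed below, nothing like this appears in the public GraphQL spec or graphql-js:

```javascript
// Hypothetical per-node access control: each node type declares a canSee()
// predicate, and the resolver consults it before exposing any field.
const nodeTypes = {
  User: {
    canSee: (viewer, node) => viewer.id === node.id || viewer.isAdmin,
    fields: ['id', 'name', 'email'], // password etc. never listed, never exposed
  },
};

function resolveNode(viewer, typeName, node) {
  const type = nodeTypes[typeName];
  if (!type.canSee(viewer, node)) return null; // invisible, not an error
  return Object.fromEntries(type.fields.map((f) => [f, node[f]]));
}
```

The granularity is the point: the check lives on the node type, so every query path through the graph hits it, instead of each endpoint re-implementing access control ad hoc.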

Is canSee() in the spec? Looks useful, but I haven't seen it before (heh).

Great question!

No, no it is not in the spec, as of February 15th, 2016:


Update: I think Peter Hunt is talking about Facebook's internal server-side implementation of GraphQL. That's the only way his comments make any sense given what's publicly available.

It doesn't belong in the spec, it belongs in the implementation. But yes, the reference implementation (graphql-js) should probably be updated to demonstrate access control.

> I think you misunderstand what GraphQL is.

> For each node type and edge type in the graph, you provide a required canSee() function which controls its visibility.

> It doesn't belong in the spec


Am I on candid camera?

The contents of the GraphQL working draft[0] do not include any of the following words:





[0] - https://facebook.github.io/graphql/

GraphQL itself has no position on any of those things, it's out of scope and up to the developer to handle.

You can return null if someone requests an object they're not allowed to access, or return an error, or whatever it is that you're currently doing.

I've been playing around with GraphQL, and my approach has just been to include a permissions object for the actions a user can take on the resource:

  { permissions: { destroy: false, update: true, ... } }

You SHOULD be checking permissions on each node and edge, but the details are entirely in your hands.

You SHOULD be checking permissions on each node and edge, but the details are entirely in your hands.

I agree 100%.

However, that's not what the GP said above, which was "It is a view model abstraction on top of your database which includes permissions checks for each node and edge, so in many ways it's actually more secure than the alternative of securing each endpoint adhoc.". I contest the use of "includes permissions checks" and "actually more secure" for a system that does not at any point specify any type of security at all. It's just as secure as any random REST API or route (in other words, as secure as you make it, and not any more).

See below: I think Peter Hunt is talking about Facebook's internal (or maybe public?) server-side implementation of GraphQL. That's the only way his comments make any sense given what's publicly available.

That's certainly possible, but it cuts against his argument which is that "It is a view model abstraction on top of your database which includes permissions checks for each node and edge".

That's maybe what FB's internal vision of GraphQL is, but that's not what we have available out here in non-FB land. If carsongross's argument is that GraphQL is moving the database to the client side, with all of the security issues that go with it, then a rebuttal that says 'Nope, GraphQL is more secure' isn't going to hold water if the only specification of security is locked away. Particularly since people may be accidentally exposing their GraphQL client side not realizing that there are security issues involved at all (since Peter Hunt says it's more secure, for example).

> move the UI/DOM building back to the server side

> Intercooler responses are HTML fragments.

I'd argue that, just like described at the end of the article, it's just shifting the burden onto the server. It's a trade off.

IMHO React gets a lot of things right :

- DOM diffing

- Components

- 1 model as a unique source of truth

And when one thinks about it, IT IS MVC in its purest form, since there is only one model, one big view(the tree of components) and one controller(using event delegation). What React nailed is an efficient way to rerender the view.

React doesn't solve the fundamental problem of either API churn or security issues in browser client code. It may be elegant (I don't think it is) but it is orthogonal to the issue at hand.

Server side MVC and HTML as a transport have a huge number of advantages, not least that HATEOAS "Just Works" without anyone having to think about it, but even if it didn't, I would think that the API churn/security tradeoff would give the development community pause about heavy client-side logic.

React doesn't solve the fundamental problem of either API churn or security issues in browser client code.

I'm not sure what problems you are referring to here, since you don't seem to have defined them anywhere. They sound like they'd be well outside the scope of a library like React, though, which is essentially just a declarative UI rendering tool.

And rendering on the server solves API churn by... having no API at all? Then what do you do once you'd like to add non-browser clients like native apps or an actual API for third parties?

Additionally, I don't see how rendering on the server frees you from thinking about security or setting (and enforcing) proper permissions. All you gain is that the problem is less visible and your entities are obfuscated in chunks of redundant HTML.

In fact, now you may open up new security holes like XSS that you can avoid easily with proper client-side widgets.

> What React nailed is an efficient way to rerender the view.

Well, it’s not really efficient.

A more efficient solution wouldn't need DOM diffing at all: every module would be based on a reactive model, and we wouldn't have to deal with diffing anymore.

I'm not sure how it would work since the DOM is not reactive. At the end of the day, any "view" library for the browser is a DOM builder. It cannot be reactive, by nature.

edit: s/nature/definition

> It cannot be reactive, by nature.

Indeed it can't. But you can get close. The key is that the side-effect has to be much more granular. Instead of re-rendering a virtual dom for an entire component as a side-effect, the side-effect is changing a single attribute on an element, or updating its list of children.

This means using a template language where one can make these associations between variables and DOM. JSX doesn't work here because its output could be anything.

I am toying with an experimental framework that implements these ideas. I am convinced we can do better than diffing vdoms.
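A toy sketch of that idea (not the framework itself; `cell` and the fake `element` are invented for illustration): the side effect is bound to a single attribute, so an update touches exactly that attribute, with no tree diffing.

```javascript
// Minimal granular reactivity: a cell tracks subscribers, and each binding
// is one tiny side effect (set one attribute) instead of a vdom re-render.
function cell(initial) {
  let value = initial;
  const subs = [];
  return {
    get: () => value,
    set: (v) => { value = v; subs.forEach((fn) => fn(v)); },
    subscribe: (fn) => { subs.push(fn); fn(value); }, // run once on bind
  };
}

// "element" stands in for a real DOM node.
const element = { attributes: {} };
const title = cell('Hello');
title.subscribe((v) => { element.attributes.title = v; }); // the only side effect

title.set('Goodbye'); // updates exactly one attribute; nothing is diffed
```

This is also why JSX is a poor fit here: the association between one variable and one attribute has to be visible to the framework, and JSX output can be anything.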

mmm... Instead of a dom it will be called dam (domain actor model)... look ma' I can also write cool stuffs...

>>with the full power of the query language your datastore provides

If your datastore exists in your server side rendering application. I would say that "chaotic front end needs" exist in applications that require data from multiple data sources: many different APIs and datastores that need to be independently queried. If that is true, is there really a big difference between having a server side or client side view model that wraps that complexity?

I don't know if I completely understand what you are saying, my comment doesn't rely on multiple data stores. I'm talking about a web application with a single (say, SQL-based) datastore.

I am saying that without the full power of that data store on the client side (that is, an optimizable query language, update and insert statements) you will always be thrashing your API around to deal with chaotic UI needs.

If you do expose the full datastore functionality on the client side (which GraphQL is a move towards) then you have a different problem: you are exposing your datastore in a fundamentally insecure environment.

So you are screwed either way.

That's exactly what I'm saying. If you have a web application with only one data source built in, then GraphQL is not for you. In fact, that whole post wasn't for you. You probably should stick with a monolith until you can't.

If you have a large, complex web application that integrates data from many different sources, then an API aggregator or view model is essential.

GraphQL is not an attempt to move the datastore clientside.

Yes in that instance you are correct that server-side rendering is easier. It doesn't make it any easier to build a mash-up application in which you are relying on multiple sources of data that you do not control.

I really like a middle-tier functional API (ideally, Node) that can make requests as needed to a dumb backend that is really just a DB endpoint and then runs functional computations (map/reduce) to marshall the data as configured for a view to a client.

You don't have to hassle backend for expensive API updates on a heavyweight language that makes exposing an endpoint a pain in the ass (sorry, Java), and you get exactly the data you need in exactly the way you need.
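A sketch of that middle-tier marshalling (hypothetical data and names; the "dumb backend" is simulated by a raw row array): raw rows in, map/reduce in the Node tier, exactly the view's shape out.

```javascript
// Raw rows as a dumb DB endpoint might return them.
const rawRows = [
  { orderId: 1, item: 'widget', qty: 2, price: 5 },
  { orderId: 1, item: 'gadget', qty: 1, price: 20 },
  { orderId: 2, item: 'widget', qty: 3, price: 5 },
];

// Middle tier: reduce rows into per-order summaries shaped for one view.
function orderSummaries(rows) {
  const byOrder = rows.reduce((acc, r) => {
    const entry = acc[r.orderId] || { orderId: r.orderId, total: 0, items: 0 };
    entry.total += r.qty * r.price;
    entry.items += 1;
    acc[r.orderId] = entry;
    return acc;
  }, {});
  return Object.values(byOrder);
}

orderSummaries(rawRows);
// [ { orderId: 1, total: 30, items: 2 }, { orderId: 2, total: 15, items: 1 } ]
```

When a view's needs change, only this marshalling function changes; the backend endpoint stays a dumb row source.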

You don't even have to go that far. Just let the 'front end' developer write JSON APIs.

intercoolerjs just seems to be an inlined version of jquery .load()? (Which I use a lot)

Or am I missing something?

At some level, no. But, at another level, yes: REST-ful dependencies, progress indicators, CSS transitions, history support, custom HTTP response headers for meta-actions and so on.

You can check out the docs here:


He starts with a negative tone when describing React & Redux without backing up his negativity with proper & convincing arguments, just so that later he can throw in his alternative solution.

I don't know why we keep doing it but often times we cover our eyes on purpose and brush off existing solutions backed by thousands of engineers so that it's easier to make our points.

Especially considering his proposed solutions look an awful lot like not-very-fleshed-out remakes of React and Redux.

This is not true, please visit the SAM room on Gitter https://gitter.im/jdubray/sam

In a nutshell:

In Redux the reducer updates the model as:

  state.counter += 1 ;

All SAM suggests is that this is an unfortunate coupling; you should write it as:

  // action
  function increment(value) { return value + 1 ; }

and then present the output of the action to the model:

  state.counter = data.counter ;

Redux does not have any next-action-predicate either.
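A minimal runnable sketch of that separation (plain JS; `present` and the model shape are just for illustration): the action is a pure function of a value, and the model, not the action, decides how to accept its output.

```javascript
// SAM-style loop as suggested above: pure action, model accepts the proposal.
const model = { counter: 0 };

function present(data) {
  // Only the model mutates state; it can reject or reshape the proposal.
  if (typeof data.counter === 'number') model.counter = data.counter;
}

// Action: pure, knows nothing about the model's internals.
const increment = (value) => value + 1;

present({ counter: increment(model.counter) }); // model.counter is now 1
```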

That's not true either. In Redux the reducer is always pure and never modifies its arguments, at least in principle. It returns a new copy representing the new state. Dan Abramov relies on deep-freeze to always assert this fact in all of his tutorials.

Regarding next-action, in Redux subscribers are called after the root reducer returns the new state. You can always call dispatch from a subscriber.

You and Sam are proving my point by not actually taking the time to understand the existing solutions.

I'm extremely open to finding a better solution to React/Redux, but your example makes very little sense.

> in redux the reducer updates the model as: state.counter += 1 ;

That's not how Redux code would look, nor is that how Redux state works.

> you should write it as: [...]

That actually looks very much like how you'd actually write a reducer. Can you please clarify how you think this differs from idiomatic Redux code?

> Redux does not have any next-action-predicate

Inventing terms does not aid in advancing a discussion, especially if you do not define them. But reading through the OP, it appears that a next-action-predicate is just a selector, or possibly a selector used to implement what in Redux would be solved by middleware? The description is quite cryptic, but it at least claims to solve a problem which is already solved cleanly in Redux, which rather raises the question of what advantage it yields.

I mean this nicely, but: You're not really convincing me that SAM is a different pattern. Perhaps a simple example app? Ideally some small idiomatic redux todo app ported to SAM; that should show the differences (and benefits) very clearly.

Take a look at this boilerplate project (https://github.com/erikras/react-redux-universal-hot-example)... I think Redux with the multireducer lib removes that coupling you speak of.

Isn't GraphQL/Relay (or similar tools) the solution here? Has the best of custom endpoints (can be conceptualized as a single, infinitely customizable custom endpoint) without the drawbacks (there's only one endpoint, and backend engineers just need to expose a schema).

It was kinda disappointing to see him fail to properly address tools that were designed to resolve his original concern. He basically decides that GraphQL forces you to "shape your models to your views" (which is false, GraphQL/Relay just collates and trims your models before delivering them to your views). In a sense, it allows him to continue to say "yes" to his frontend developers (which is good from a product standpoint) without adding a ton of custom endpoints.

I'm not sure why author dismisses graphql. It greatly simplifies server-side work. You just need to implement one endpoint that responds accurately and quickly to a composed query in accordance to what rights the current user has. It's not as easy as "serve result of SELECT *" but it's well defined and flexible for consumers.

It can be as easy as "serve result of SELECT *". We're working on it with Nodal. http://graphql.nodaljs.com/ - some pretty neat progress so far.

Agree. His opening complaint is from the Front End saying "I need x, y, and z so make me an API that returns {x, y, z}" which he wouldn't need to do for them if they were using GraphQL.

IQueryable. Go look it up. MS did GraphQL years ago.

If you got out of your insular JS/FB/Twitter bubble, you would see the anti-pattern that IQueryable became.

I don't think IQueryable is very composable.

It goes a bit beyond a simple query builder because it can automatically translate more expressive queries into SQL for you (http://www.infoq.com/presentations/theory-language-integrate... ). But I don't think people could define parts of an IQueryable in .ascx files and compose it into a single IQueryable, the result of which feeds the components. That's the main idea of GraphQL.

GraphQL is "structurally typed" where IQueryable is nominally typed by its generic argument. This makes GraphQL more easily composable, but IQueryable is still composable with functions.

I always find it interesting how something seemingly as simple (conceptually speaking) as building a web app can in truth turn out pretty darn complicated.

I'm not sure all the complexity we see here or in other frameworks is warranted, but clearly something is going on that makes this quite hard.

I'd be interested to see fundamental research (and popularization thereof) on this.

Maybe because there is more to creating a usable interface that can have multiple points of operation from a user perspective than the underlying data storage or API interfaces. Developers that concentrate heavily on backend data interfaces think their worlds are so important and have spent so much time looking down on the front end that now, when better front ends are demanded, they fall apart trying to create them... The same does happen in reverse, designers tend to want certain types of interfaces and workflows that are simply hard to manage in any practical way.

The difference is that now, more people on both sides of the spectrum are starting to gain insight into the rest, and starting to realize that good "full stack" development is hard. There are also nearly infinite options, and even more opinions.

Changing the API design because a frontend dev doesn't like the style is definitely an anti-pattern. Customising APIs for every imaginable use case will get painful fast!

Angular does have powerful services to transform objects. For example last week a library (ngTagInput) was expecting a list of objects with {id, name} however the server was returning a list of integer id's. It was trivial to replace the id with the {id, name} object in the service, without requiring changes to either the library or the back end - the API is still generic.
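The kind of service-layer transform described above can be sketched in a few lines of plain JS (the lookup table and function names here are illustrative, not the actual ngTagInput integration):

```javascript
// The server returns a list of integer ids; the UI library wants {id, name} objects.
// A service-layer transform bridges the two without changing either side.
const tagNames = { 1: 'javascript', 2: 'angular', 3: 'mvc' }; // e.g. a cached lookup

function toTagObjects(ids) {
  return ids.map((id) => ({ id: id, name: tagNames[id] }));
}

console.log(toTagObjects([1, 3]));
// [ { id: 1, name: 'javascript' }, { id: 3, name: 'mvc' } ]
```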

APIs should be designed such that the total effort to build both sides of the equation is minimised. APIs are, after all, designed for consumption - you can return data in any number of formats - eg a proprietary binary format, or whatever, but if your consumers find it hard to use, then the API sucks. End of story.

Curious to what extent you stand by this statement -- as an exploration, not a refutation ;) So let's stretch it to the limits. Let's create an API for a non-programmer. For example, let's allow him to make a survey in iOS.

You could, for example, create an Excel file -- let's assume these non-programmers know Excel a bit. One column could designate a template, another could be used for the question. Perhaps a column for materials (e.g. video files or images). I guess the point is clear: you could make columns that do something specific in the app. Add some documentation and presto, you have an API for non-programmers :)

The Excel file itself could be given to a programmer who implements it or it could be uploaded to some automated process that spits out an iOS app. A rudimentary version could for example attach some pre-specified views with some questions and images.

What do you think, is it possible to create an API for non-programmers via some software that they do know?

The reason I choose to work with MVC frameworks is because more often than not other people work on my code with me.

If I would create my own abstractions it would be good for me but very, very bad for others.

If you are serious about front-end Javascript, you owe it to yourself to check out the ClojureScript libraries Reagent (and re-frame) and Om.Next.

Aside from the pithy syntax compared to even the most futuristic ECMAScript, meaning frontend code is faster and easier to write (and throw away, if need be), there's another key element Clojure brings to solving this API proliferation problem, which is just how easy it is to do data transformations. The Clojure standard lib makes it trivial (dare I say, fun?) to carve up and reshape whatever data you're consuming to fit the model your application needs.

Yeah, I feel that the real innovations in UI programming are happening in ClojureScript right now, based on top of React, which is really a very low-level dependency. Javascript is mainly copying what's happening there with libraries that allow you to do reactive programming (like RxJS), have immutability, etc.

First he dismisses React, then he says his code is looking a lot like React without the ceremony, then he shows code samples that look a lot like React except they use string concatenation (hello XSS) to build the DOM (goodbye DOM diffing) and instead of following the modern React idea of small composable components they are huge and bloated.

Is this satire?

I thought he was quite positive about React, but he seemed to be uncomfortable with their lack of provision for the model layer.

From one of the screenshots:

    div class="opacity-'+slider.maskOpacity
Even better once you notice the actual inline `style` attribute two lines up. headdesk

It's almost as if just about anyone can write a blog post on the internet :)

WTF? Blindly concatenating strings... hello XSS..

It's a typical Hacker News cliche: "Why I No Longer Use [commonly accepted standard]" but I am honestly glad that the author put a lot of effort into explaining the theoretical AND practical concepts behind his stance. So in case you skipped the article because it fell through your "textbook HN hit" filter, please take my word for it that it's worth reading.

Can't really see how the proposed solution is different from React + Redux.

Not using an MVC framework is often fine if you're working alone on a project but you might just end up reinventing angular...

Then when new people come into your project, the learning curve for them will be steeper than Angular's - also you won't be able to hire any 'batteries-included' programmers, because nobody will know your framework when they join your company.

Worse, engineers might refuse to join your company if they see that you've built a custom solution from scratch... Chances are that your framework won't be as good as Angular or React.

Maybe you should try Google's Polymer framework? I've tried all the major ones and Polymer is the least 'frameworky' one.

Also, you won't benefit from the ready-made modules that are available out there.

I feel like part of the "problem" of API churn is because we are writing APIs that live in the controller layer. Just because it runs on the back end doesn't mean it lies within the model layer.

Beyond that, it certainly makes for less code to make the model directly line up with the view, but you create coupling between the two. This seems like a major difference between MVC I've experienced in the web vs desktop apps back in the late 90's - in a desktop app my view didn't rely on the model's code, just on the model's data. But nowadays with rails and spring and django the view is coupled directly to model code.

Reading this I am kind of happy we rolled our own framework and dogfooded it for the past 5 years.

It has only a few concepts:

  * Pages (webpages)
  * Tools (components)
  * Events
  * Users (accounts)
  * Streams (data)
Everything is accomplished around these basic concepts.


That's always going to lead to a solution you like (because you built it) but it loses the advantages of open source in both "many eyes" where the code gets massive amounts of testing and "prior knowledge" where you can recruit new people who already have some knowledge of the technology you use. Neither of those are necessities, of course, but they're definitely useful advantages.

The problem with MVC is that there is no clear definition of what it is, so what you're left with is people putting code in the View, code in the Model, and code in the Controller. Then go try and debug that. You wind up having to log basename(__FILE__) to understand which file is responsible for what. It is a nightmare for debugging. Then you have to talk about the elephant in the room -- performance. Performance using MVC frameworks is orders of magnitude slower than straight procedural or functional development. It should not take a rocket scientist to understand that calling a function directly is faster than instantiating an object and then abstracting an abstraction of an abstraction of an abstraction just to return a result. I don't know why people do it, but I find myself cleaning up the mess more often than not. You cannot convince me that a 10x decrease in performance and a 10x increase in code complexity is a good thing.

Searching through the comments here, only one of them mentions SAM, which is the point of the article. Some nuanced thought about what's going on here is therefore warranted. So let's start with old-school Smalltalk's MVC: this was a message-passing system consisting of three types of concurrent processes:

    * models maintain a list of "subscribers" that they regularly send certain 
      messages to, in response to the messages that they receive from the outside 
      world. This can, as always, involve the internal state that each process 
      maintains.
    * views subscribe to models and then present a graphical description of the 
      messages they receive. 
    * controllers listen to user-interface events from the view, and use them to send
      messages to models.
These have transformed mightily in the intervening years; a model these days often refers to some form of schema for data structures which are returned by services; controllers often query those services and provide them to the view; views may define their own service-queries. Often there is no clear notion of subscription except possibly subscribing to events from the GUI, which is the responsibility of the controller; the controller basically handles everything which isn't either the structuring of data or the organization of the display components.
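The original subscription triangle can be sketched in a few lines of plain JS (names here are hypothetical; old-school MVC used concurrent message-passing processes, which this flattens into synchronous method calls):

```javascript
// Classic MVC triangle: the model notifies subscribed views;
// the controller turns UI events into messages sent to the model.
function makeModel(initial) {
  let state = initial;
  const subscribers = [];
  return {
    subscribe: (view) => subscribers.push(view),
    send: (message) => {
      if (message.type === 'set') state = message.value;
      subscribers.forEach((view) => view.receive(state));
    },
  };
}

const rendered = [];
const view = { receive: (state) => rendered.push(`value is ${state}`) }; // "presents" what it receives
const model = makeModel(0);
model.subscribe(view);

// Controller: listens to a user-interface event and messages the model.
const controller = { onClick: () => model.send({ type: 'set', value: 42 }) };
controller.onClick();

console.log(rendered); // [ 'value is 42' ]
```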

SAM appears to be coming from the latter developments and is concerned with an associated explosion: Every view and interaction more or less gets its own service on the back-end, providing a sprawling codebase which resists refactors; they also get their own controllers which maintain them.

In the SAM idea, the model now reprises its earlier notion of managing subscribers and containing business logic: however it is now "dumbed down" from its potentially-very-general Smalltalk definition: the model is instead meant to hold and maintain a single data-structure. (Can there be multiple concurrent models?) The model apparently only receives requests for an all-at-once update, but it may decide to transform its state `new_state = f(old_state, proposed_state)`. Presumably then it again tells its subscribers about the new state if it is not identical to the old state. (Each view is expected to compute the diff itself, probably?)

A "state" in SAM appears to be identified with a "view": a "state-representation" is a function from a model to a DOM. Your GUI consists of a bunch of these, and hypothetically we can diff the DOM trees to better understand what needs to change on an update of the related model properties; the "view" is basically just a bunch of representations of the underlying state with some "actions." These "actions" are not actually parallel processes at all but do take the classical responsibility of the "controller", routing user-interface events to messages to send to the model. The apparent point here is that they should correspondingly be very "dumb" controllers: they just create a transformed version of the data that they received from the model and then send it back to the model as a "nomination" for a possible change.

Finally there appears to be a strange next-action callback which may be part of every "view update." (It's not clear where this lives -- in the "action"?) I am not entirely sure why this exists, but it may be that the algebra of actions without this callback is merely an applicative functor, not a full monad. The essential idea here is that the function apparently can compute a follow-up action if such needs to happen.
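If that reading is right, the whole loop fits in a few lines. This is a toy interpretation of the pattern as described, not the author's library; the acceptor logic and the placement of the next-action predicate inside the state are assumptions.

```javascript
// Toy SAM loop: an action proposes data, the model accepts or rejects it,
// the state renders a representation, and nap() may trigger a follow-up action.
const model = {
  counter: 0,
  present(proposal) {
    // acceptor: new_state = f(old_state, proposed_state)
    if (Number.isInteger(proposal.counter)) this.counter = proposal.counter;
    state.render(this);
  },
};

const state = {
  representation(m) {
    return `<p>count: ${m.counter}</p>`; // state-representation: model -> DOM (here a string)
  },
  render(m) {
    output = this.representation(m);
    this.nap(m); // next-action predicate: may compute a follow-up action
  },
  nap(m) {
    if (m.counter < 2) actions.increment({}, m); // automatic follow-up until the counter reaches 2
  },
};

const actions = {
  // "dumb" controller: transforms event data into a proposal nominated to the model
  increment(event, m) {
    model.present({ counter: m.counter + 1 });
  },
};

let output = state.representation(model);
actions.increment({}, model);
console.log(output); // <p>count: 2</p>
```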

If I'm understanding this correctly, then a simple app which seems relatively hard to structure this way would contain:

    * a counter stored in a database,
    * a button which should ideally increment that counter,
    * a display showing the current value of the counter,
    * a notification window telling you when your click has updated the counter.
I'm using a counter since it's got a nice algebra for dealing with coincident updates; if someone else updates the counter then your update commutes with theirs, saving the client-side complexity.

Without a framework, we would simply define two endpoints: GET /count (or so) gets the current count; POST /increment will increment the current counter and will say "Success! The current count is now: ____", telling you what you changed the count to.
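The framework-free version really is just two handlers over shared state. Sketched here as plain functions rather than wired-up HTTP routes:

```javascript
// The two endpoints as plain handlers over an in-memory counter.
let count = 0;

function getCount() {          // GET /count
  return { count: count };
}

function postIncrement() {     // POST /increment: always succeeds
  count += 1;
  return { message: `Success! The current count is now: ${count}` };
}

console.log(postIncrement().message); // "Success! The current count is now: 1"
console.log(getCount().count);        // 1
```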

Here it seems like you need three models. First we have the server-side model of what's going on:

    server model Counter:
        private count, Subscribers
        message increment(intended_count): 
            if intended_count == count + 1:
                count += 1
                return ('success', count)
            else:
                return ('failure', count)
        message count():
            return count
The requirement that we only nominate new versions of the underlying data-structure means that we cannot just define the appropriate increment service which always succeeds, but must instead tell the client that the request has failed sometimes. Then there are two models on the client side: one holds a list of outstanding requests to increment (updated by actions, retrying any failures) and the other one holds the currently-cached value of the server-side data (because we need to update this by both the former client-model's update events as well as by some automatic process). You would probably condense these in practice into one model, however they are different concerns. The former model, however, is absolutely required, as it provides a way for the "notification window view" to appear and disappear when one of the requests has succeeded.

This seems unnecessarily complicated given the simple solution in terms of two API endpoints -- however it does indeed fulfill its desire for lower API bloat and some separation of concerns.

I was intrigued enough to look at the source of his Star library (https://bitbucket.org/jdubray/star-javascript/src/5806219be6...), and got very little out of it. Aside from a few truly strange things (https://bitbucket.org/jdubray/star-javascript/src/5806219be6...), it's just generally not commented or documented, and I have a hard time figuring out how it serves the goals of that article.

I'll take another look later, but I'm curious if anyone else got much out of it?

I don't think MVC belongs on server-side web frameworks at all. It's not a continuously updating interactive display, it's a one-off thing. You don't have a model object sitting around sending messages to the view when it updates. This doesn't happen. There is no triangle of messaging. What really happens is a request comes in, and your "controller" is the one doing all the calling.

The HTTP server is by its very nature a pipelined or layered application. Requests are turned into responses through a series of transformations. They go down the layers as they work their way to your database, and back up the layers as they are built into responses. This is, incidentally, why functional programming is such a great fit for web servers.
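That layered request-to-response flow can be expressed directly as function composition. A sketch with made-up layer names, standing in for parsing, data access, and rendering:

```javascript
// Request -> response as a pipeline of transformations (layer names are illustrative).
const pipeline = (...layers) => (req) => layers.reduce((acc, layer) => layer(acc), req);

const parse = (req) => ({ ...req, params: { id: Number(req.url.split('/').pop()) } });
const fetchRecord = (req) => ({ ...req, record: { id: req.params.id, name: 'widget' } }); // stand-in for a DB call
const render = (req) => ({ status: 200, body: JSON.stringify(req.record) });

const handle = pipeline(parse, fetchRecord, render);
console.log(handle({ url: '/items/7' }));
// { status: 200, body: '{"id":7,"name":"widget"}' }
```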

<< As a veteran MDE practitioner, I can assure you that you are infinitely better off writing code than metadata, be it as a template or a complex query language like GraphQL. >>

Reading it is worthwhile if only for this phrase.

I'm not super convinced OP's SAM architecture is that revolutionary. It will take something waay better to displace the current champion: React+Redux.

And speaking of the champion, here's a good write up[1] about tfb's helper library 'react-redux-provide' that automatically matches names in this.props with Redux actions and reducer outputs. It's a simple thing, but tremendously reduces the wiring boilerplate for React+Redux apps.

[1] https://news.ycombinator.com/item?id=11098269

Controllers are important for being gatekeepers, which can be adjusted (think middlewares which add security, logging or other changes uniformly) and inspected outside of the MVC code, and separates outside from inside.

It might be worth the tradeoff in the short-term, but it's the lifecycle TCO of code counting support and modifications over many projects which will validate or invalidate a particular approach (there is also a cost in terms of hiring and learning curve for inventing an alternate convention).

I hope it works out.

It happens time and again, another pattern from the programmers perspective, when MVC really is much more than a pattern, and it includes the users perspective. It's not about separation primarily, that's the nerd's view.

I've made an attempt to clarify what MVC is really about. It was posted here before, I appreciate if you read it: https://news.ycombinator.com/item?id=10730047

First, his rocket example won't even work (in the actions.decrement function, "'model' has not been declared"), but what he's talking about isn't actually new in React land.


Not that every idea has to be novel, but I think the code in this repo provides a far better example of FRP (which SAM is) than that article.

Please colleges, please make computer science/software engineering students take at least one course where they have to write essays to pass the course.

They don't, unfortunately. In my case I was very lucky. One of the first things I did was Google about what programmers would do -- as opposed to programming which I was scared of in the beginning days -- and I found out about Joel Spolsky very early on in my studies. Getting a bit of breadth about the culture and more about the philosophies and theoretical underpinning, from various writers, has been quite helpful to me.

This is what he said: http://www.joelonsoftware.com/articles/CollegeAdvice.html

You may guess what his number one tip is ;)

My number 1 'hack' that I learned from these classes is to eliminate every unnecessary word and be as short and concise as possible. That way, I make fewer mistakes, get my point faster across and I communicate more clearly in general since there's no risk of being long winded. I never do it in HN comments though, it takes a lot of time.

My number one hack is to take the conclusion from the third paragraph/wherever and, since it's the most important thing to communicate, move it to be the first part (in front of all the tedious "here's the background on the problem we were trying to solve").

I've worked on proprietary frameworks very similar to what the author describes. They are great for the web and, in my opinion, allow for tighter code. The issue is that I haven't found a good open-source one for the web.

Also interesting is the fact that computer-engineering "coding" is much more into the state concepts brought up in the article.

There's an excellent tool to avoid the default value mess shown there: defaults


It will fill your option object with the data if it's undefined

Also, if he ever needs to check if a variable is a number, there is another one-line-of-JS project on npm that does just that!

But it isn't one line:

    var clone = require('clone');
    module.exports = function(options, defaults) {
      options = options || {};
      Object.keys(defaults).forEach(function(key) {
        if (typeof options[key] === 'undefined') {
          options[key] = clone(defaults[key]);
        }
      });
      return options;
    };
Besides the tests and the fact that someone is maintaining it. It's not much, but it's code that doesn't have to be maintained by you

Yeah even _.extend
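For what it's worth, the whole module collapses to a shallow one-liner with Object.assign (modulo the deep clone the `defaults` package performs on default values):

```javascript
// Shallow equivalent of the defaults/_.extend pattern; note this does NOT
// deep-clone the default values the way the `defaults` package does.
const withDefaults = (options, defaults) => Object.assign({}, defaults, options);

console.log(withDefaults({ retries: 5 }, { retries: 3, timeout: 100 }));
// { retries: 5, timeout: 100 }
```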

MVC, in a strict sense, as in Gang of Four, almost does not exist in JavaScript frameworks. What we see is MVP (Backbone.js, Riot.js) or MVVM (KnockoutJS, AngularJS). ReactJS provides just a View.

I think the author might like the simplicity of something like Vue.js

Cool abstraction, but would love to see actual code implementation.

https://bitbucket.org/snippets/jdubray/ Can also share more code privately

pub/sub is the best, call it command bus, call it Flux, Redux,...

Ah, but my version of pub/sub is much better than the others, because they don't adhere to the fundamental principles of pub/sub which I have derived[1].

[1]: I have discovered a truly marvellous proof of these principles, which this comment is too small to contain.

This was a sobering read.

"Premature optimization is the root of all evil" -- Knuth

Because you have time?

I prefer the "VC" style (MVC minus the M). It is way too much work to have to redefine your models on both the client side and server side, when you can simply have all the data in temporary dictionary structures. You define your models just once in the database itself. I don't know if anyone else does this, but it keeps the amount of code to a minimum.

I use Django on the server, with Ember on the client-side. Without a concrete model layer in the client that has computed properties and, to a lesser extent, observers, things that were trivial to implement would become much more difficult.

Of course, I look forward to the day when we can write our models once for both the server and the client, but I've yet to see a "full-stack" solution compelling enough to get me to abandon both Django and Ember.

To be fair, and if I'm understanding you correctly, you can use ember-data alongside Ember to get your model layer (or even just use plain Ember.Objects as needed).

Ember fastboot seems like it'll address some of your concerns too - it'll allow for initial rendering of a page on the server with the intention of the client taking over after that.

I use Ember Data along with Ember. Ember Data is my model layer. I should have made that more clear.

As for FastBoot, correct me if I'm wrong, but right now it supports rendering static pages for SEO. This is not useful for business apps. Rehydration on the client-side (what you seem to be referring to) is still being worked on. But it does not replace the server-side application.

At any rate, eventually someone will bring together the best of server and client technologies in one full-stack package. With all due respect to what's out there now, I haven't seen anything approaching the elegance of Ember and Django.

You can do this with any Javascript-on-the-backend solution today. Not saying it's better, but we're doing it. Node is the obvious one, but we're running V8 in Java on the backend so we can use Spring-boot and a bunch of other Java stuff we get "for free" at the company.

If you use a framework like Meteor, a small version of your database runs on the client, so you don't have to keep re-implementing various DB functions/features to display data to users or waiting for a round trip to run a trivial query. The server code turns into a set of replication rules and various functions that should never run on the client (security, touches a lot of data, etc).

This way is good for CRUD apps, but may not be the best approach for others.

I build thick controllers and rely on the database schema + DAL to handle the "model". 'course one could argue the array the data is dumped to is a "model", but it's clearly not an object like MVC normally enforces.
