

Today, Web Development Sucks - hbrundage
http://harry.me/2011/01/27/today-web-development-sucks/

======
mechanical_fish
_Write once, run on the continuum between the server and browser, and forget
that there's actually a difference between the two. Validations can be shared
and server dependent ones can be run there..._

This might be an interesting goal to work towards, but I'm not convinced that
one actually wants to achieve it. I'm skeptical that abstracting away the
boundary between client and server is a good idea. Unless you're a DRM true
believer, there will always be an essential difference: The server is (more-
or-less) guaranteed to be running the code you wrote, and the client _is not_.
In the end, unless you are comfortable with allowing an AI to dynamically
adjust your application's attack surface for you, you'll always want
visibility and control of what gets done where.

 _SEO works fine because the page can be rendered entirely server side._

Are we missing the point that web-based applications and web pages have
totally different semantics? The major difficulty with getting Google to index
my single-page application is not the need to run multiple rendering engines.
It's that Google indexes web pages, and my application is probably not built
out of web pages -- not without a great deal of creative thought, anyway. Try
to imagine an iOS application that could be fully rendered for Google. How
would you do that? Does each possible window and window state get a URL? How
should Google index the little popups that appear when the user executes a
three-fingered leftward swipe with a twist?

The reason why you're writing your views and rendering twice is probably that
you need to design them twice. You can either make your users view your app as
Google does, as a series of HTML pages at distinct sensible URLs with a
minimal amount of Javascript sprinkled on top (which was good enough for 1998,
and even 2008, but perhaps not good enough to compete with iOS long-term), or
you can design a glorious GUI experience for your users that is largely opaque
to Google, or you can do the work twice: Figure out one view of your data that
appeals to humans and another that appeals to indexing bots. And that job
probably can't be done for you by some magic framework. Figuring out sensible
views of your data is design work, for humans.

 _A departure from the routing paradigm found in Rails, Sinatra, Sammy, and
Backbone. The traditional one URL maps to one controller action routing table
no longer applies._

We didn't converge on this routing paradigm arbitrarily. Among other
considerations, it is very strongly influenced by Google, a company that
literally _pays you money_ if you design a URL scheme that can be usefully
interpreted by Googlebots.

~~~
hbrundage
Great points.

The attack surface is something I didn't consider, and presents a tough
challenge. If the framework were to implement the transport layer in a
predictable way, I think it still might be a net win. It could build in all
sorts of automatic good protection against XSS and CSRF and have even better
control than today's frameworks since it can validate on both sides and know
what to expect. The ever-changing attack surface is a problem without a doubt,
but the usual vectors can be better protected against, and the visibility and
control issues can be mitigated with inline directives.

With regards to SEO, I disagree. There are two issues at heart here. Firstly,
some web applications are single user apps with no publicly indexable data (ex
Mockingbird, Basecamp), and optimize landing pages to direct users to use the
application and explain why it is worthwhile. That's the data they want in
Google, not the data from within the application. For these types of apps the
SEO issue isn't that big. The issue is big with apps like Hunch, where, as you
say, there are many states and non-page-like semantics. Take a moment to
examine the data that these kinds of apps want indexed and searchable. It's
usually central to the app, the meat of the whole thing, and in the case of
Hunch, available as a discrete page because it makes sense. This leads me to
believe that you can, without too much difficulty, come up with a URL scheme for
representing it that either does or doesn't have an anchor in it. That's the
central idea: the routing table is the same on both sides of the wire.
The first page they visit can be rendered server side and all ensuing pages
can be rendered client side using fragments after the "#", and the Googlebot
can index the pages as they are all renderable server side. This also ties
into your third point, that the routing paradigm used by Google and everyone
now is the only way to go. I really don't know how to solve the multiple state
vs url segments problem while remaining indexable, but I believe it can be
done. Do you disagree that the paradigm is no longer as useful?
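
A minimal sketch of that central idea (all names here are hypothetical, not any framework's API): one routing table consumed on both sides of the wire. The server resolves the path for the first page load and for the Googlebot; the client resolves the same path taken from the "#" fragment for every page after that.

```javascript
// One routing table, shared by server and client.
var routes = [
  { pattern: /^\/users\/(\d+)$/, action: 'showUser' },
  { pattern: /^\/questions\/(\d+)$/, action: 'showQuestion' }
];

function resolve(path) {
  for (var i = 0; i < routes.length; i++) {
    var match = routes[i].pattern.exec(path);
    if (match) return { action: routes[i].action, params: match.slice(1) };
  }
  return null;
}

// Server side: resolve(request.path)
// Client side: resolve(location.hash.slice(1))
```

Because both environments dispatch through the same table, "/users/42" and "#/users/42" reach the same action, which is the property that keeps the app indexable.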

~~~
boucher
New Twitter is really the example to look at here (and they aren't the first).
New Twitter doesn't have unique pages anymore for tweets, and everything is
happening in a single page. But each of those tweets also has a real HTML
version generated separately for SEO. That is a perfectly valid way to do it,
especially when you think of New Twitter as a content creation app, and each
individual tweet as its own random piece of content.

Granted, the simplicity of Twitter's content makes that choice easier than it
might be for a lot of people, and I agree that hopefully we'll converge on
tools and frameworks that will make this more automatic. But in a world where
your app is largely running on the client, and data exchange is done largely
via some kind of simple REST AJAX API, writing an additional (likely quite
simple) HTML template for the same data doesn't seem like an impossible
challenge for most web apps.
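
The "additional (likely quite simple) HTML template" step might look like this sketch (record shape hypothetical): the same data serialized once as JSON for the client-side app and once as static HTML for crawlers.

```javascript
// Illustrative only: one record, two renderings.
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

// The client-side app consumes this.
function tweetToJson(tweet) {
  return JSON.stringify(tweet);
}

// The crawler gets this: the simple extra template for SEO.
function tweetToHtml(tweet) {
  return '<article><p>' + escapeHtml(tweet.text) + '</p>' +
         '<a href="/' + tweet.user + '">@' + escapeHtml(tweet.user) + '</a>' +
         '</article>';
}
```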

~~~
joe_the_user
I'm not sure New Twitter is a good model in its implementation, however. I don't
think I'm the only one who's noticed that the new interface is a client-side
pig, taking _all_ the CPU time it can grab.

------
asnyder
Yes, we thought this in early 2005 and created NOLOH (<http://www.noloh.com>)
as a solution. You write your app in a single tier, rather than having to
worry about all the plumbing (Cross-browser issues, AJAX, client-server
interaction, Comet, etc). You can even use your existing HTML, JS, CSS, if you
like, and NOLOH then renders a version of your app targeted specifically
towards the end user, whether it's IE, Chrome, Mac, Windows, etc. or a search
engine robot, in a "lightweight" (only the correct and highly optimized code
is loaded for a specific user) and "On-demand" (only loads the resources that
are absolutely necessary at a given point) manner, thus allowing rich web
apps like Gmail to load instantly and as needed, rather than as a traditional
web fat-client.

Every few months someone will write a post like this, and I wince. We've
written several extensive articles for php|architect magazine and have
presented NOLOH at several major web development conferences around the world.
The fact of the matter is that tools like NOLOH exist, there are others, and
they can be used now. Today, web development doesn't need to suck.

If you're interested in the specifics of the above-stated "lightweight" and
"On-demand" behavior, see the article "Lightweight, On-demand, and Beyond"
in <http://www.phparch.com/magazine/2010-2/november/>.

[edit] Link to free December issue of php|architect article "NOLOH's Notables"
so that you can more easily see what I mean without the November issue
paywall. <http://beta.phparch.com/wp-content/uploads/2010/12/PHPA-DEC-12-noloh.pdf>

Disclaimer: I'm a co-founder of NOLOH

~~~
rufugee
Have you considered that you could potentially make a lot more money by open-
sourcing NOLOH and building a support infrastructure around it? It seems like
the closed source aspect of it might be holding you back. If NOLOH truly is
the Rails of the future, it might be wise to unleash it.

I've always thought this was part of what kept REBOL from catching on. Nice
little language, but its closed nature hampered its adoption.

~~~
asnyder
We have. The problem of course stems from the more established brands picking
apart our tech and incorporating it into their own, thus making our tech
obsolete. There's also the issue regarding our existing customers that paid
for Professional and Enterprise licenses. We currently offer free licenses to
open source projects, but we do understand that there are those that are
turned off by any proprietary tech, even if they'll never actually go under
the covers.

We'll likely revisit this issue later in the year after we make a series of
major product announcements. Those announcements may make it unnecessary for
the tech to continue to be closed. Until then, everything we do other than the
core is open source and available on GitHub, including numerous modules.
I'll likely blog about this in the next month or so.

------
DjDarkman
This whole article complains about something simple. Web development sucks,
but not because of form validations; it sucks because of IE. But that's
another story.

> Services must be adapted to spit out JSON data for interpretation and
> rendering client side, or have their view code refactored to be accessible by
> fragment for use in AJAX calls.

It must be real hard to call your JSON encode function on your data set; that
would make it one line longer. :)
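
For the simple case, the snark holds up. A sketch of the entire adaptation (data hypothetical):

```javascript
// The one extra line in question: the same data the view code already
// had, serialized for the client instead of interpolated into HTML.
var users = [{ id: 1, name: 'alice' }, { id: 2, name: 'bob' }];
var body = JSON.stringify(users);  // the whole "adaptation"
```

As the replies below this comment note, the picture gets murkier once the objects carry ORM relationships.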

> A radical departure from the jQuery mindset of DOM querying and
> manipulation, and use a UI kit instead. We aren’t in Kansas any more folks,
> it's time to go where every other platform has gone and use the language to
> its fullest.

People wouldn't query the DOM if that wasn't the most effective way of getting
stuff done. jQuery has a UI kit; it's called jQuery UI. And how does using
jQuery equal not using JavaScript to the fullest?

> The DOM should become an implementation detail which is touched as little as
> possible, and developers should work with virtual, extendable view classes
> as they do in Cocoa,QtGui, or Swing.

The author said "using JavaScript to the fullest" and now the author tells us
to create "extendable view classes" in JavaScript. JavaScript is a
prototype-based language; you can't use it to the "fullest" if you force
classical OOP on it.

> If we want to build desktop class applications we need to adopt the similar
> and proven paradigms from the desktop world. Sproutcore, Cappuccino, Uki, and
> Qooxdoo have realized this and applied these successfully.

"proven paradigms" is a ridiculous term by itself in software development,
putting it in a web development context just makes it dumber. If desktop apps
are so good, then how come web apps are still around? There is a reason why
people still prefer jQuery over many of those.

Overall I feel this article is highly biased and the author doesn't really
understand web development. The author's complaints are invalid because the
things he says are missing already exist.

~~~
markkanof
I agree with a lot of your points, except where it comes to JSON encoding. In
many of my apps I use an ORM, NHibernate to be specific. I've found that this
makes a lot of operations on the server side much simpler as there is less
code for me to write and debug.

The problem comes though when I'm ready to encode to JSON to send to the
client. You're correct that it is just one additional line to encode to JSON,
but NHibernate creates objects that have relationships to other objects, and
serializing those relationships can be very tricky. I have not found a great
way to do this and often end up writing simplified versions of the server side
classes and code to map from a server side class to a client side class.

Now you could argue that my framework or language sucks, but I think this goes
to the point of the article. I feel the code I am writing on the server side
is pretty solid and I am happy with my productivity. But as soon as I
introduce any complex behavior on the client side, I end up with a lot of
duplication and the entire code base gets much harder to manage.

~~~
famousactress
I feel your pain, but it's not a new problem.. I spent years building server
side apps with Hibernate, and before there was JSON, there was XML, or
transfer-objects, or even hibernate entities that were disconnected from the
backend (and would throw arbitrary exceptions when calling across un-fetched
relationships). There's always a need to sort out how to represent these
models across tiers.. In general, I favor only serializing relationships to
things that are completely dependent (i.e. dependent children whose lifecycles
are married to the parent).. I try to never serialize relationships to
independent entities.. let those get fetched by id if the client needs them.
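
The rule described above might be sketched like this (entities hypothetical): inline the fully dependent children, and reduce independent entities to their ids.

```javascript
// Sketch: serialize an order. Line items live and die with the order,
// so they're inlined; the customer is an independent entity, so only
// its id crosses the wire -- the client fetches it by id if needed.
function serializeOrder(order) {
  return {
    id: order.id,
    items: order.items.map(function (item) {
      return { sku: item.sku, qty: item.qty };
    }),
    customerId: order.customer.id
  };
}
```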

------
gfodor
There are two frameworks that come close to this; both are rarely mentioned,
and I am not really sure why, other than the fact that they aren't "sexy".

First is SmartClient:

<http://www.smartclient.com/>

SmartClient has the best databinding support I've seen in a JS lib. Binding is
automatic between server and client, between controls, and validation code can
be written once. It also has the richest UI control library I've seen, full of
controls that actually work and aren't just a shiny layer over simple code.

Second up is OpenLaszlo:

<http://www.openlaszlo.org/>

OpenLaszlo provides a much more raw layer for creating UX on the web, similar
to the OP's complaints about not having a UIKit like API. Additionally, it
provides a declarative language for laying out controls, and also provides
expression binding, so you can say "the width of this element is always 2x the
height of this other one" and the binding and event handlers are created
automatically. Its declarative language is XML, so there are some nice
homoiconic properties you get by making it possible to return XML from a web
service to generate UI elements. As a bonus, it can generate Flash or DHTML.
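
A toy version of that expression binding (not OpenLaszlo's actual API, just a sketch of the idea) can be written in plain JavaScript: a property holder that notifies listeners on change, so a rule like "width is always 2x the other's height" becomes a listener installed once.

```javascript
// Minimal observable property holder.
function Observable(props) {
  this.props = props;
  this.listeners = [];
}
Observable.prototype.get = function (key) { return this.props[key]; };
Observable.prototype.set = function (key, value) {
  this.props[key] = value;
  this.listeners.forEach(function (fn) { fn(); });
};

var panel = new Observable({ height: 50 });
var sidebar = new Observable({ width: 0 });

// The binding: re-evaluated whenever panel changes.
panel.listeners.push(function () {
  sidebar.set('width', 2 * panel.get('height'));
});
panel.set('height', 100);  // sidebar's width becomes 200
```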

Edit: Of course, both of these frameworks have their downsides. They're
horribly ugly to look at (out of the box). The declarative language for
SmartClient is really ugly Javascript, and the language for Laszlo is really
ugly XML.

However, they've solved the hard problems and left the "easy" ones. They're
both open source. If someone were to come along and clean up some of the
syntax and add some real polish to either of these, I think they'd really be
remarkable technologies.

~~~
clojurerocks
The problem with OpenLaszlo is that while it's apparently actively developed,
I rarely hear about anybody using it. It seems to have had its heyday years
ago and is now used by only a very few projects.

~~~
gfodor
My entire point here is that these frameworks are not sexy or popular but have
very strong technical foundations and it would be a noble goal to fork either
of them to polish them into something better.

------
steverb
We've taken the view that the server side code and the client side UI are two
different applications and should be developed as such.

We write all the server side stuff as "REST-like" web services and then use
whatever makes sense for the UI, whether that is javascript, html emitted from
the server, action-script or native binaries.

Separation of concerns.

~~~
hbrundage
The concerns aren't separate, that's the whole point! The validation and view
logic is shared in "both" applications, as you put it, so we either have to
duplicate code or try to put it in only one place. Neither works without
monumental effort, hence the whole post.

~~~
steverb
They are separate. Yes, the UI and the service both validate the data, but
they do it for different reasons. One is concerned with the user experience,
and the other is concerned with data validity. There may be some functional
duplication there, but they are separate concerns.

The service validates to make sure that the data it is dealing with is safe
and isn't going to corrupt something.

The UI validates input to make sure that the service won't reject it so that
the user doesn't have to deal with the inconvenience of making a round trip to the
server, or having to remember what a valid value for a field is.

If you're not feeling up to making things easy for your users you can always
throw all the data at the service and wait for it to tell you why the data is
invalid. You don't HAVE to validate the data twice.

~~~
jamesgeck0
As a user interface shishya, I agree completely.

As a developer, I still have to write code that validates the data twice and
possibly deal with validation failures in two different ways. Further, I have
to make sure that both checks are using the same criteria for validation and
keep them in sync if requirements change. This seems less than ideal.
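
One common way out of the sync problem (a sketch, not any particular framework's API) is to define the criteria once as data and run the identical rules on both sides of the wire:

```javascript
// Each rule is a plain predicate. The same object can be required by
// the server and shipped to the browser, so the criteria can't drift.
var rules = {
  email: function (v) { return /^[^@\s]+@[^@\s]+$/.test(v); },
  name:  function (v) { return typeof v === 'string' && v.length > 0; }
};

function validate(record) {
  var errors = {};
  for (var field in rules) {
    if (!rules[field](record[field])) errors[field] = 'invalid ' + field;
  }
  return errors;
}
```

The failure handling still differs per side (inline warning vs. rejected request), but the criteria themselves live in one place.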

------
gcv
You can still handle your validations server-side in a single-page
application. The client JavaScript code sends a JSON packet to the server
containing whatever data it needs to process. The server responds with a JSON
packet which looks like

    
    
        {"status": "ok", "data": ...}
    

to signify success, or

    
    
        {"status": "validation_failed", "details": {"email": "malformed email"}}
    

to signify validation failure. Then the client-side code updates the UI
appropriately. The client JS code does not bother with validation at all. If
this seems wasteful in terms of hitting the server, remember that you have to
talk to the server for this task anyway.
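
The client side of that protocol reduces to a dispatch on the status field (the `ui` object here is hypothetical):

```javascript
// No client-side validation at all: just act on the server's verdict.
function handleReply(reply, ui) {
  if (reply.status === 'ok') {
    ui.render(reply.data);
  } else if (reply.status === 'validation_failed') {
    ui.showErrors(reply.details);  // e.g. {"email": "malformed email"}
  }
}
```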

~~~
Joakal
A user makes a typo in their email, fills in the rest of the form, presses
submit. The form takes 3s to send, throws a validation error, and highlights
the email field. The user corrects it and sends it again (hopefully you're not
punishing your users by clearing the form).

Or, you could present client-side validation which alerts user on focus out.

Other uses for client-side validation:

+ Multiple emails (Gonna verify them all in many calls?)

+ Prerequisite for further steps

Of course, have server-side validation. Client-side validation is convenience
for the user.
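
That focus-out convenience can be a thin layer over a pure check. A sketch (the regex is illustrative, nowhere near a full RFC-compliant email test):

```javascript
// A pure predicate, so the very same check can also run server side.
function emailLooksValid(value) {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}

// Browser wiring (illustrative): warn the moment the field loses focus,
// long before any multi-second round trip to the server.
// emailInput.addEventListener('blur', function () {
//   if (!emailLooksValid(emailInput.value)) showInlineWarning(emailInput);
// });
```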

------
bdclimber14
This could be extended to say "Today, Development Sucks." Today, not only do
you need a web-accessible application to stay competitive, you best develop
native iPhone and Android apps. Even better, whip together a native Mac app.

You think both client-side and server-side validation is bad? Try developing 4
different client-side applications on different languages and frameworks.

Evernote came out and said it attributed part of its success to developing
native apps for Android, iPhone and now Mac. Adobe Air and HTML provide an
inferior user experience on respective devices.

This is also the reason I feel 37Signals is falling into obsolescence. They
just launched a mobile "site" for Basecamp. Not an app, but an HTML, mobile-
optimized site. Their blog admits to simply wanting to focus on "what they are
good at," which is the first foot in the obsolescence coffin. They hired an iOS
developer for their Highrise iPhone app, but said they felt the talent should
be in-house for future projects. I agree, but their decision, again, was to
keep doing the same old thing. Not exactly an innovative, hacker mindset in my
opinion.

~~~
Joeri
> Try developing 4 different client-side applications on different languages
> and frameworks.

People aren't going to keep doing that. As soon as there's a web app store
that allows them to achieve a better cost/revenue ratio, they'll move away
from native apps. Sure, web apps have inferior user experiences, but the
difference won't be big enough to keep them on the native platforms, just like
how it didn't keep them on the native desktop.

------
swah
Strangely he didn't mention server-side javascript, which would be the obvious
way to share code between client and server.

~~~
foobarbazoo
This is how SproutCore validation logic is used on the server, BTW.

It's easy to use SproutCore's (very) powerful model layer on both the client
and the server.

------
Robin_Message
Just a small note, but despite the common belief, Gmail isn't written with
GWT. Serious web app development seems to result in creating a framework of
your own, as your examples show, so I'd agree some useful frameworks would be
good, but we have not yet worked out what they should do.

~~~
clojurerocks
What is Gmail written in?

~~~
bokchoi
Closure

<http://code.google.com/closure/>

~~~
clojurerocks
Any idea why that's being used as opposed to GWT?

~~~
gcv
Gmail predates GWT by several years. I'm also not entirely sure Gmail is
written in Java to begin with.

------
julianb
_Google built the GWT so they didn’t have to write code twice, but I don’t
want to be stuck in the Java world or be forced to learn the whole GWT and
make any open source buddies of mine learn it too._

Groovy/Grails works well with GWT. Idiomatic Groovy looks more like Python
than Java. Another possibility is Vaadin, which is built on GWT.

~~~
lukesandberg
I don't have a lot of experience with vanilla GWT, but I have been using
Vaadin a lot lately, and it is very frustrating to look at the (bloated) HTML
that Vaadin generates and not be able to do anything about it (without
designing new widgetsets).

Also, I have found that with Vaadin the client-side rendering times can be a
problem. (Though my whole team is new to Vaadin, so there's a good chance
that that is our fault.)

I definitely see the appeal of Vaadin, but I think that ultimately it's not
the paradigm you want for web application development. I think that in many
cases you really need to know exactly what will run on the client side and
what will run on the server.

Back when I was doing lots of Silverlight development there was a very clear
division between the client-side and server-side code. However, because they
were both implemented in the same language, I was able to factor out shared
components (models, mostly) into separate projects that I could individually
compile for both sides (for smaller chunks of functionality, such as some
socket protocol validation code, I just linked the files into multiple
projects). This allowed for good reuse as well as good separation. I think
this is ultimately what you want: the ability to share some code between both
sides, not to blur the line completely.

------
cgbystrom
Agree with you on most points.

However on some points I'm not as convinced. Take the DOM abstraction, it
works and is implemented by several SPA frameworks. You can design good
looking, "desktop class" applications pretty fast. But problems arise quickly
when your graphic designers send you those PSD mockups, full of great-looking
artwork waiting to come to life.

With a normal DOM approach you've always been able to solve this: with some
HTML hacking, CSS tuning, and a lot of swearing, you pull through. But with SPA
frameworks that favor "components" over low-level, raw DOM elements, things
usually aren't as straightforward. Very often you need to start picking apart
the provided ready-made components to get any work done. It ends up being very
counterproductive and usually takes much longer.

It has happened to me numerous times before with both GWT and Adobe Flex. Nice
and shiny, as long as you don't try changing the layouts too much. I'd be
one happy camper if this weren't the case; web development needs to move
forward. And I hope the goals proclaimed by both Cappuccino and Sproutcore
will work in practice some day.

Regarding your other point about routing, departing from the route paradigm
will only be true if what you're designing is not a document-centric
application. In my world, an SPA can be either document-centric or desktop-
like (containing a lot of UI state, as you mention). I think the answer to
that is the boring "it depends".

Worth mentioning: many of these issues are things we've been trying to resolve
with the Planet Framework (<http://www.planetframework.com>), essentially
bridging the gap between client and server.

~~~
hbrundage
Great points. The design issue is a big one, and I think the main reason is
that PSDs can be translated to HTML and CSS easily, but extracting a theme for
a set of widgets is much harder. The widgets give you rich interaction and an
easy way to set up complex layouts, but as you said can be very rigid and not
allow easy modification of their behaviour. And I agree, I end up longing for
the simple world of HTML and CSS where I know exactly how to change something
instead of fighting the framework. I think that's something that will be
remedied if a framework like Planet or my hypothetical one reaches ubiquity
however. If developers learn the ways the framework works, just like how they
learned the ways HTML, CSS, or Rails works, they'll be ok with extending the
widgets to do what they want.

Another good point about routing; this one I'm admittedly fuzzy on. Above,
mechanical_fish mentioned some serious SEO issues that arise from moving away
from the standard paradigms, so I'm not quite sure how it should be solved.

All in all, Planet looks neat, thanks for sharing!

------
AndrewO
I'm with him on most of this, but I just can't agree that frameworks like
Sproutcore or Uki are good fits for all or even most of the web apps out
there. We've spent years showing off the beautiful things you can do with HTML
& CSS and most web users have come to expect that.

Sure, the desktop-in-browser approach works in some places, but ignoring
standard elements and replacing them with non-semantic, inline-styled <div>
elements, and script handlers (e.g. inspect elements on
<http://ukijs.org/examples/core-examples/controls/>) strikes me as cavalier
and reminiscent of how SOAP treated HTTP.

The DOM is not something to be coerced and abstracted upon: it is the
presentation structure for the largest aggregation of human knowledge ever. It
deserves some respect! :-)

~~~
foobarbazoo
SproutCore loves the DOM. You're thinking of Cappuccino.

------
rst
For validations, at least, there are Rails plugins for checking model
validations in the browser (by augmenting the form helpers to generate
Javascript which does the checks). For example,
<https://github.com/dnclabs/client_side_validations>

This won't work to forward more complicated logic, though. For those who are
really feeling the pain, the best cure might be writing the back end using
server-side JS, so you can run that code in either environment.

------
Niten
I'm surprised there was no mention of WebSharper in here; it's almost exactly
what the author is asking for. And it's implemented in a functional
programming language, too...

