
Ask HN: What is the future of back-end development? - johnnydoebk
Apparently, WebAssembly will revolutionize client-side development. Web apps will be very much like desktop ones: high-performing, and writable in any language. At the same time they are cross-platform, can be easily distributed, and do not require installation.

Meanwhile, every "builder of web3.0" who's trying to "fix the Internet", or "make the web faster, safer, and more open" is doing it in such a way that back-end development is not required. I.e. they are working on decentralized (either p2p or federated) universal back-ends that they assume should be used by everyone.

Do you believe that in the future back-end development as we know it today will be obsolete, everybody will use some p2p BaaS platform, and web development will be all about implementing client-side apps?
======
mabbo
Some people don't like the word much, but "serverless" is going to become a
bigger deal.

You'll write your code, complex or simple; you'll hand it off to some cloud
system; you'll write a bit of configuration; you're done. Likely the
configuration part will become less and less required.

You won't think about hardware, scaling, load balancing, and so on; it will just
happen for you. The data store being used will be abstracted away, so that you
don't know or care much about who provides it. It becomes a configuration problem.

Containers, AWS Lambda, these are just the first course of this meal.

Why? Because developers are expensive. Time spent doing anything other than
the critical piece of writing business logic code is time wasted, and longer
time to market.

That's how I see the future of the server side.
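
The idea above can be sketched as an AWS-Lambda-style Python handler: the developer writes only the business logic, and the platform handles invocation, scaling, and load balancing. The event fields and response shape here are illustrative, not tied to any real deployment.

```python
# A "serverless" function: pure business logic, no infrastructure code.
# The (event, context) signature mirrors AWS Lambda's Python convention.

def handler(event, context=None):
    """Greet a user by name; the platform invokes this per request."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally it's just a function call; in the cloud, the platform wires it
# to an HTTP gateway, queue, or schedule via configuration.
print(handler({"name": "HN"})["body"])
```

Everything else (instances, routing, retries) would live in the "bit of configuration" handed to the cloud system.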

~~~
scrollaway
This model isn't "serverless", it's PaaS, and it's not much of a future. Heroku
is useful for prototyping, not scale. (Edit: that doesn't mean it can't be more
popular though - the web is full of prototypes.)

"Serverless" as in AWS Lambda has very specific use cases. The people who are
serving websites on Lambda or something are not only doing it wrong, they are
wasting money while locking themselves in to an architecture that doesn't fit
their app.

I say this as a heavy Lambda user ([https://hearthsim.info/blog/how-we-process-replays/](https://hearthsim.info/blog/how-we-process-replays/)).
Lambda is super useful if you have to scale CPU-heavy tasks. I do think that
model will play a big role in the future, but I _definitely_ do not think
it'll be the go-to model for backend. It's a complete paradigm change.

~~~
mabbo
Consider the problems that both Lambda and Containers/PaaS solve for you:
ignoring infrastructure. It's there, sure, but I don't care much about it.

Now take that idea to the absolute extreme. I write code, I push it into the
cloudy-cloud machine, and it's serving clients immediately.

~~~
collyw
As someone who knows how to set up a server, I found it a lot more difficult to
get a Django app running on Heroku than on a "normal" server. And a lot more of
a PITA to debug.

~~~
aptwebapps
What about the second time you ran a Django app on Heroku? But Heroku is not
even in mabbo's first course (maybe the appetizer?), so that's a bit of a straw
man.

~~~
StavrosK
I don't know how valid that argument is, because the second time I set up a
server was just tweaking two Ansible variable files and running the
"provision" command.

~~~
aptwebapps
Sure, but what I was getting at was that the parent was just having to learn how
to use Heroku. And while it's true that automation is something you can do
yourself, what about keeping the server up-to-date and patched and so forth?
Also, repeating myself here, Lambda et al. are a step further than Heroku.

~~~
StavrosK
There are tradeoffs. You don't have to keep Heroku patched, but you also can't
tune it if you need something specific.

------
british_india
Given the complex services we already have, services that access complex,
multi-source databases of the relational and NoSQL kind, I find it amusing
that UI developers are so unaware of the complexities of the back end--
complexities that are normally hidden from them--that they think they can be
reduced to so many calls. WebSockets don't magically assemble complex back-end
data. Amusing in the extreme.

~~~
douche
It's the classic "Can't you just add another checkbox here?" situation. Yes, I
can add a checkbox there. But I have to add the code to include that setting
in the API that the front-end calls, to give you a way to retrieve it and
change it. Then I have to add code to the data access layer for this new
setting. Then I have to write the SQL scripts to add columns/tables for this
new setting, as well as change scripts to change the schema. Then I have to
test all the pieces involved, test that upgrading the database from N-3 recent
releases to the new schema works correctly.

And this is for a dead simple little setting.

~~~
caseymarquis
Sounds like your framework sucks. With an ORM and some sort of JSON
serialization, that's literally just:

Put bagel in toaster.

Update the db model.

Add a migration.

Update the viewmodel <==> db model translation in relevant requests.

Get bagel and butter it.

Commit.

Test the migration on a staging db if feeling paranoid, cause it's just adding
a column.

Push to master.

Update production.

Take last bite of bagel.

If you aren't also doing the frontend, tell the guy who is to check out the
updated swagger docs.
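
The bagel-adjacent steps above, sketched with Python's stdlib sqlite3 standing in for a real ORM and migration tool. The table, column, and "viewmodel" names are invented for illustration.

```python
import sqlite3

# Existing schema, in place of the ORM's db model.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE settings (id INTEGER PRIMARY KEY, name TEXT)")

# "Add a migration": the new column behind the checkbox.
db.execute("ALTER TABLE settings ADD COLUMN notify_by_email INTEGER DEFAULT 0")

db.execute(
    "INSERT INTO settings (name, notify_by_email) VALUES (?, ?)",
    ("alice", 1),
)

# "Update the viewmodel <==> db model translation": expose the new field.
def to_viewmodel(row):
    return {"name": row[1], "notifyByEmail": bool(row[2])}

row = db.execute("SELECT id, name, notify_by_email FROM settings").fetchone()
print(to_viewmodel(row))
```

A real project would wrap the ALTER TABLE in a versioned migration file, which is exactly the "test it on staging if feeling paranoid" step.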

~~~
grosbisou
> cause it's just adding a column

This kind of mentality is why most applications are shit and broken. And why
most hires are worthless.

And not everyone is working on a cookie-cutter, over-bloated Rails website.
Sorry, but this kind of remark simplifying our job always pisses me off.

~~~
prplhaz4
"can't we just" is top of the list as a trigger for my bullshit detector.

In a system of any complexity, changing your data model is an activity that
requires due diligence to mitigate any downstream risk.

~~~
collyw
Me too. When my previous team leader said that, I knew he hadn't bothered to
think it through.

------
mrweasel
The ability to build a solid backend platform and infrastructure is going to
be increasingly important in the future. The "backend" is becoming
increasingly anonymous (as in not seen, not as in privacy), but it has never
been more important.

If nothing else, the increasing focus on data collection and processing will
push the need for custom backends forward for many years. And every application
that needs to store or share data via an API needs a backend that understands
the data it receives, so that it can validate, process, and distribute that data
correctly.
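
As a toy sketch of that validate-before-process step (the payload shape and field names here are invented, not from any real API):

```python
# A backend that "understands the data it receives": check an incoming
# API payload against the shape the application expects before storing it.

def validate_reading(payload):
    """Return a list of validation errors; empty means the payload is OK."""
    errors = []
    if not isinstance(payload.get("device_id"), str):
        errors.append("device_id must be a string")
    temp = payload.get("temperature")
    if not isinstance(temp, (int, float)) or not -100 <= temp <= 100:
        errors.append("temperature must be a number in [-100, 100]")
    return errors

good = {"device_id": "sensor-1", "temperature": 21.5}
bad = {"temperature": 9000}
print(validate_reading(good))
print(validate_reading(bad))
```

A generic BaaS can store anything, but only a backend with domain knowledge can reject the `bad` payload.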

Backends (custom backends) aren't going away any time soon.

Your question reads as if you believe that "backends" equals desktop
applications. Even in that case, it's still hard for web applications to compete
with desktop applications. It's a matter of personal preference, of course, but
something like Google Docs feels weirdly restricted, confined, and cramped
somehow, simply because it lives in the browser.

~~~
digitalarborist
I don't think custom backends are going away either, but I do think they will
become far less prevalent than they are today, similar to how all programs used
to do manual memory management; that hasn't gone away, but most programmers now
rely on garbage collection.

So much of backend work is redundant: login, user info management, allowing the
creation of groups of users, user following, user data uploading or sharing. Not
to mention all the DevOps: managing CloudFormation scripts, VPCs, proxies, load
balancing, DNS, SSL, CDNs, database schemas; the list goes on, and that's before
there has been any original development at all.

With React Native and Electron (I haven't found the Atom editor or VS Code
weirdly restricted), the web stack is moving into desktop territory, and with
serverless frameworks becoming more prevalent, hopefully this will make writing
original software easier, with less redundant work.

As to your point about data processing needing a custom backend, I see the
exact opposite. Generally all this data is just funneled into a generic Hadoop
cluster and can be manipulated any which way from there using Spark or any
number of data analysis tools.

~~~
rimantas
With React Native and Electron, the web stack is running in circles.

How many times does this have to be repeated: the web stack is easier to write
for only if you don't know shit about the native SDKs. Otherwise it doesn't even
compare.

~~~
sotojuan
Well yeah... that's the point of React Native, isn't it? It's for people who
don't know Swift or Objective-C and want an iPhone app. React Native apps are
faster for them to write and perform well.

I can't comment on how they compare with native iOS apps.

------
jalfresi
Personally I don't see any of that happening - client-side app development on
the web is a chaotic mess with no "official" (whatever that means) way of doing
things. Throw into that mix the constant reinvention that's going on (yeah,
can't wait to see everything get reinvented AGAIN when WebAssembly comes along),
all for what constitute incredibly marginal gains in the UI space (the web UI
was already fast enough 5 years ago, without introducing mind-bending paradigms
like React).

On top of all that, the very concept of web apps (in my opinion) is
fundamentally flawed in that they break the web (flawed in a similar way that
Flash as an interactive platform was flawed, e.g. user interaction along a
timeline is an inherent conflict).

So personally I expect all the client-side web app stuff to collapse under its
own weight at some point. As an example, I've been a web developer for 20
years, and for my own projects at home I don't even bother with web front
ends; I use QML and Qt and bang out a great UI in a fraction of the time it
takes to produce a web UI. It's not worth the hassle anymore, but then I have
the luxury of my own projects not having many (any?) users besides myself.

As for the back-end, I've actually seen a shift more towards "systems", e.g. a
set of isolated sub-systems interacting via event buses or queues over
HTTP/JSON, either as micro-services or something close to that. A separate "web
front-end" app then pulls data for output/display, almost as a client to that
system. So there's a cleaner separation of The Real System from the Web
Application Back-end. And in this space, the means of communication between
parts seem to be becoming much more standardised, whilst at the same time there
is a Cambrian explosion of new tools, languages, etc. which can be exploited.
There is very much an idea that separate sub-systems can be written in whichever
language best suits the problem space.

It's very interesting to see the differences in the way the front-end and
back-end communities are approaching progress/innovation.

Anyway, that's my opinion/view of it all. Sorry for the wall of text.

~~~
lj3
What do you think about a browser plugin that runs arbitrary .exe code? It
would have to be sandboxed, of course, but it would give you all of the
benefits of the browser distribution model without having to deal with the
DOM.

~~~
jalfresi
Why not just make QML a media type with a spec that browsers can implement?
Then we can do away with all this HTML-Page-Is-An-App nonsense.

~~~
lj3
Why limit ourselves to a single UI format? Forget the part I said about a
plugin. What if browsers were able to load and execute LLVM bitcode natively
in a sandbox? You could get native performance in the browser using any
language you'd like.

~~~
yencabulator
Please get non-Chrome browsers on board: [https://developer.chrome.com/native-client](https://developer.chrome.com/native-client)

~~~
lj3
Please tell Google to publish a spec for the Pepper API and to rewrite it so it
doesn't require the Blink rendering engine. As much as I dislike Mozilla, I
give them props for trying to reverse-engineer the Pepper API from what was
implemented in Chrome. But expecting them to change their rendering engine to
support an unpublished spec is ridiculous.

------
sktrdie
I think backend will move more towards streaming endpoints (WebSockets).
Already with concepts such as Observables we're building apps entirely around
streams of events. So to me the connection between frontend and backend will
just be what you already use to communicate between your app components
(streams). With regards to "where the backend is", it doesn't really matter.
It can be on N different servers. What's important is that we'll be able to
generalize lots of logic and automate most of the things (things like auth,
database writes/reads, etc). Other more specific things we'll always have to
write ourselves, but the management of an "actual server" will be more and
more abstracted, and we'll eventually deal with it entirely on the app-code
side of things.
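
The stream-centric model above can be sketched with an async generator standing in for a WebSocket endpoint; the client code consumes events the same way it would consume a stream between its own components. The event payloads here are invented.

```python
import asyncio

# A streaming "endpoint": an async generator pushes events as they happen,
# playing the role of a WebSocket feed from the backend.

async def event_stream():
    for i in range(3):
        await asyncio.sleep(0)  # yield control, as real network I/O would
        yield {"type": "comment_added", "id": i}

async def client():
    received = []
    async for event in event_stream():  # the client just consumes the stream
        received.append(event)
    return received

events = asyncio.run(client())
print(events)
```

Whether the generator runs in-process or on N servers behind a socket is exactly the detail the comment argues will stop mattering.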

------
DoubleGlazing
The more people try to build simple and "elegant" front ends, the more us
backend developers have to do to support them.

I seem to be doing a lot more "mashup" work than ever before, joining up
disparate back-end systems in order to hide them all behind a simple, modern
front end. In the past, if I was developing an order processing system, I would
work on the view/UI side of things as well; now I expose the functionality via
an API and let the front-end guys develop a nice interface for it all.

Also, let's not forget you don't actually want any mission-critical,
confidential, or security-sensitive logic happening on the front end, no matter
how fast WebAssembly gets.

There will always be a backend.

------
jbb555
WebAssembly will not revolutionize anything. It's like Java, where people claim
it's as fast as native code, and yet every single program runs like a dog and
takes 100 times the resources. Only worse, because it runs in a browser.

~~~
brianwawok
The financial exchanges running in Java would like to disagree with your gross
generalization.

~~~
gwbas1c
Yes, but: this is highly-tuned code that starts infrequently and doesn't use
GC. VMs still have a startup cost, although one could argue that it's something
that can be optimized away if there's enough desire.

~~~
John23832
Come on man. "Yes, but:", you were wrong. No need to save face.

------
emilsedgh
I don't think WebAssembly is going to revolutionize front end development.

What you can do with WebAssembly regarding front-end development is run your
Qt/GTK+ type of programs on the web.

But no one is interested in that. As a matter of fact, the current trend seems
to be ditching that sort of program on desktops and using the HTML/JS/CSS stack
for desktops as well (Atom, Slack, etc.), because it's much easier to create a
top-notch UX using the HTML stack.

HTML stack is very resilient. It evolves at a fast pace. Don't underestimate
it.

WebAssembly will definitely have its own use cases in the future (games, apps
with specific performance requirements), but it's not replacing frontend
development.

Regarding the backend, it's not going away. Serverless doesn't mean
backend-less. It means a backend deployed in a very distributed fashion.

I personally think backends will get thinner and thinner as databases gain more
responsibility and functionality.

~~~
T-A
> because its much easier to create a top-notch UX using HTML stack

No, because it's what the hordes of inexpensive web developers know how to
use.

~~~
awolden
> inexpensive web developers

Ask most companies how much they are spending on their web-dev teams and they
might disagree with you on that one

~~~
T-A
Web Developer Salary Range: [http://www.itcareerfinder.com/brain-food/it-salaries/web-dev...](http://www.itcareerfinder.com/brain-food/it-salaries/web-developer-salary-range.html)

Software Engineer Salary Range: [http://www.itcareerfinder.com/brain-food/it-salaries/compute...](http://www.itcareerfinder.com/brain-food/it-salaries/computer-software-engineer-salary-range.html)

------
otobrglez
- More "reactive" frameworks / languages / tools.

- More functional principles and more resilient implementations.

- Complete decoupling from lower levels, meaning that you don't care about
servers anymore. Apps are Dockerized and resources are allocated automatically.

- No more human-designed APIs. Why write APIs if modern machine learning
principles can replace the whole concept?

- More lambda / serverless concepts.

- APIs and integrations will be automated and auto-discovered.

- More (micro)services, mixing of different languages and stacks.

- More streams, more events, ...

~~~
rimantas
It's funny how the future is described by many in terms of whatever is hot that
day. If you had asked a similar question ten years ago, the future would have
looked wildly different (the iPhone was only announced in 2007).

------
pauljaworski
When I started building out the back-end of my latest project, I realized its
only purpose was to persist data to a NoSQL database. I had a small epiphany
and decided I didn't need to build a server at all - I could just use
something like Firebase to accomplish that task.

Now that I'm getting deeper into the project, I need to add in features like
PDF generation. No problem! I'll use microservices for that. Well, that means
I need to build a Docker container, deploy it to AWS ECS, and configure AWS
API Gateway to talk to it. None of these things are what I would consider
"front-end" dev work.

Back-end is surely changing, and many of the needs can be shifted to the
front-end/BaaS platforms, but there's still plenty of work left! I think we'll
see the back-end role transition more toward building and maintaining these
microservices and possibly the business logic portions of the front-end.

~~~
fermuch
An easier approach IMO would be to create an Amazon Lambda script to process
the PDF and have it run when you upload a file to an Amazon S3 bucket.

It's easy to do and you don't need to mess with anything more than two simple
APIs.
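
A sketch of what such a handler could look like in Python. The event layout follows the documented shape of S3 bucket notifications, but the bucket name, object key, and the PDF step itself are placeholders; a real function would fetch the object (e.g. via boto3) and render the PDF.

```python
# Lambda handler fired by an S3 upload notification: pull the bucket and
# key out of each record and hand them to the (stubbed) PDF step.

def process_pdf(bucket, key):
    # Placeholder: a real implementation would download and render here.
    return f"processed s3://{bucket}/{key}"

def handler(event, context=None):
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append(process_pdf(bucket, key))
    return results

# A minimal S3-notification-shaped event for local testing.
sample_event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                                    "object": {"key": "order-42.pdf"}}}]}
print(handler(sample_event))
```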

~~~
pauljaworski
I initially planned on using Lambda for things like this, but in this
scenario, it does not seem appropriate. I'm a total noob to Lambda, so correct
me if I'm wrong, but I need a system with Qt installed to use my PDF library,
and it doesn't seem like I can accomplish that with Lambda.

------
donatj
Simple client/server isn't going anywhere anytime soon. It's too simple to
implement, understand, and use to die. Good enough almost always wins.

------
tinganho
I think the future of back-end development will be heavily decided by future
programming languages. We are still missing a programming language that is
safe, fast, and productive: safe meaning type-safe and memory-safe (also
memory-leak safe); fast meaning near-zero abstraction costs; and productive
meaning productive syntax and good tooling support. It also has to be built on
a modern compiler architecture, meaning the compiler exposes an API for tooling
(symbol lookup, refactoring, etc.), so people around the world don't need to
reinvent the wheel.

I personally don't think any programming language fits the above description.
And we have yet to invent it.

------
tonyedgecombe
My hope is it will get a lot simpler; even trivial work seems far more complex
than it needs to be. I'm pretty sure I was more productive writing VB apps
twenty years ago than I am with current web development.

~~~
sotojuan
Yep, I hope for the same but I don't think it'll happen :-(

~~~
tim333
You'd think there would be a way and there might be some money in it for the
work it would save. Perhaps something like a boilerplate app that does the
usual stuff you need, like [https://github.com/sahat/hackathon-starter](https://github.com/sahat/hackathon-starter)

But perhaps less hackathon orientated?

------
mempko
I would love the future to be P2P. However, I don't see it happening unless we
have an economic revolution. There is just too much money to be made from
collecting data. Data is king.

My guess is that the future of the back-end is tighter languages like C++ and
Rust. Why? Because energy will become more expensive and we will need to run
applications on fewer machines. These apps will run on single-purpose kernels
like exokernels and unikernels, where the OS and app are married like in the
"good old days".

------
samblr
I tend to agree that development as we largely see it will become obsolete, or
in other words that development will/should become really easy. That's
primarily because designing a backend is very similar to designing a
deterministic state machine, albeit a large one. And strangely enough, that
state machine gets re-invented across different projects by different sets of
people using different frameworks.

And going a step further - the relationships between tables in the database
should reflect how an _operational_ /working UI will look. E.g. say _blog_ and
_comments_ are two tables related by a foreign key. In an _operational_ front
end, it then makes sense to see them linked together on, say, the _blog_ page.

Now sprinkle machine learning concepts onto it - a relationship between two
interlinked tables, with 'p' columns in the _blog_ table and 'q' columns in the
_comment_ table, can be represented in 'n' different ways across, say, 'm'
different types of clients (web, mobile, etc.).

Now, is it not possible to link the design of the whole web app to our voice
commands and let some _deep learning_ algorithm figure out the best schemas,
front end, and frameworks, and even show a demo MVP?

------
weddpros
Many/most backends are stateless... In a way, they push the complexity of
distributed systems to the db. They don't deal with sharding, load balancing,
request routing... so they leave some performance on the table.

They are still built as if it's normal to split the backend in two: app server
+ db.

Maybe backends will learn from databases, and become sharded, redundant, and
stateful. The border between app server and db could become fuzzy...

------
SFJulie
It depends on QE and the next central bank announcements.

If liquidity on the market is cheap, innovation will tend towards CAPEX=0, so
more SaaS and the like will be produced, backed by heavy marketing disguised as
technical confs/blogs, and corps will have the money to kill competition by
buying it. Hence it will not be a truly competitive market.

If there is a contraction of liquidity, however, only the fittest (low OPEX)
will survive, and then SaaS, p2p, and BaaS will die in favour of more «old
school» development based on costing/pricing. Clearly this would mean the
return of GUIs and all kinds of transactional databases and small, meaningful
data.

The problem with financial markets is they are like the weather: hard to
forecast.

And just as we know there is global warming, there is clearly a tech bubble.
The longer we wait to address it, the more it will hurt.

An industry's future is always tied to the confidence and money investors are
willing to put into it expecting ROI.

With the paranoia going on all around the world, expect security to be the
next money maker.

------
p4wnc6
There are many software jobs in which the ultimate output of your work is not
something that interacts directly with a user or client. In my line of work,
machine learning and statistical modeling, the true output of the work is most
often "answers to questions." You still need to write systematic code to be
able to handle common questions, adjust for new questions, add new
capabilities, etc.

The only way that such a thing could be commoditized into a front-end
interface is if you could devise an interface that allowed for all the
different kinds of questions that will be asked, the ways they will change,
the new features that will be wanted -- because most of the work is taking
some backend pipeline that is already optimized for being able to answer
questions of type X, and then figuring out how to generalize it without losing
any performance in order to also answer questions of type Y.

It's almost always highly specialized to the specific company and line of
business involved, so consulting companies can pop up to take away some of the
in-house work, but in general there is no conceivable "as-a-service" offering
that could do the same.

Thus, you're left with needing to manage your own backend, probably for quite
a long time to come.

Finance, ML for search interfaces, small-data statistics consulting (like
political statistics, ecology, and other fields), education analytics, and
many other fields offer work that falls into these categories.

Basically, anywhere there is a business or domain-science researcher who needs
ad hoc computer programs whose lives will generally only serve to answer
scientific questions for that researcher. The ad hoc way the questions change
most often means that no service that pretends to put a front-end API on top of
the science questions can adequately capture the variety of things that are
needed, especially once the further need to heavily optimize them is added in.

------
brad0
I think as standards become more adopted it will become easier to use tooling
for the majority of backend development.

What is it that keeps backend developers in a job? We write code to give
access to data in a way a consumer can use.

ie: we have input, we process the input and give the output (CS 101)

Every time we write something new one or more of those three elements has
changed. Input, Processing or Output.

eg:

Input: The DB has a new table with data you need to expose on the website

Processing: The list of comments need to be ordered by upvotes rather than
chronologically

Output: The existing JSON output also needs to be output as protobuf
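
The "Processing" example above is a one-line difference in practice; the point is that once input and output are standardised, a change like this is the only code that remains. A sketch with invented comment records:

```python
# brad0's "Processing" change: switch comment ordering from chronological
# to by-upvotes. Input (the records) and output (a list) stay the same.

comments = [
    {"text": "first!", "posted": 1, "upvotes": 2},
    {"text": "insightful take", "posted": 2, "upvotes": 40},
    {"text": "me too", "posted": 3, "upvotes": 5},
]

chronological = sorted(comments, key=lambda c: c["posted"])
by_upvotes = sorted(comments, key=lambda c: c["upvotes"], reverse=True)

print([c["text"] for c in by_upvotes])
```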

Whenever any one of these elements is standardised it reduces our workload.
Usually as a result of standardisation tools and frameworks are developed to
take advantage of this.

If you want to know the future of any technology keep on top of their
standards.

EDIT:

As well as keeping track of standards you should extrapolate what would happen
if everybody adopted that standard. Who knows, maybe you could build a tool
everyone uses and start a business around it.

------
LukeB42
Hardware: Better "development boards". Fanless, slimmer server hardware. TPUs
in the typical datacenter within fifteen years.

Frontend: WebAssembly for augmented reality: HoloLens or whichever Linux-
compatible equivalent comes out on top.

Backend: Evolution as we know it appears to be an optimization function that
balances between diversity and fitness, so with that in mind I'll hypothesize
a healthy mix of RESTful HTTP/2 with optional streaming. Static resources
should live in decentralised overlay networks guarded by frequent trust
computation. See
[https://github.com/Psybernetics/Synchrony](https://github.com/Psybernetics/Synchrony)
and [https://github.com/Psybernetics/Trust-Toolkit](https://github.com/Psybernetics/Trust-Toolkit) for PoCs of this last part.

------
tzakrajs
I had this long rant, then I deleted it because I realized I misunderstood
you. Another attempt: the vast majority of compute will live in the datacenter
for the next 30 years, because quantum computers require superconductivity.

That compute will become like a common utility and will most certainly be used
to power the APIs that consumer electronics call. The P2P adds unnecessary
complexity when client/server, a simpler solution, works well to bootstrap an
application from a trusted source without the need for homomorphic encryption
or other exotic schemes.

Bandwidth, compute, and storage will continue to increase faster in the center
than in our consumer electronics on the edge. The backbones, internet
exchanges, and datacenters are where density equals economy of scale.

And webframeworks will come and go without a revolution, just another leaf in
the wind.

------
cweagans
I always thought that backend development would converge around a Parse-style
API -- basically just CRUD as a service. There's always going to be special
bits of logic that need to be executed in certain conditions, so probably some
straightforward way of handling that (like Parse's Cloud Code feature) is
needed + some way to handle recurring jobs/background workers.
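
A toy sketch of that Parse-style CRUD-as-a-service idea; the class and method names here are invented, and a real service would add auth, persistence, and a network API on top of this generic store:

```python
import itertools

# Generic CRUD store: no per-application backend code, just four verbs
# over opaque objects, in the spirit of Parse's data API.

class CrudStore:
    def __init__(self):
        self._rows = {}
        self._ids = itertools.count(1)

    def create(self, obj):
        oid = next(self._ids)
        self._rows[oid] = dict(obj)
        return oid

    def read(self, oid):
        return self._rows[oid]

    def update(self, oid, changes):
        self._rows[oid].update(changes)

    def delete(self, oid):
        del self._rows[oid]

store = CrudStore()
oid = store.create({"title": "hello"})
store.update(oid, {"title": "hello world"})
print(store.read(oid))
```

The "special bits of logic" the comment mentions would then be the only custom code: hooks that run around these four verbs, plus background jobs.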

IMO, right now, web development is just an exercise in how complex we can make
string concatenation. Getting away from that would be really nice.

Things I hope _don't_ happen: everyone uses Lambda for everything, everything
happens on a blockchain-style network, etc. Those technologies are cool, but I
feel like a lot of people view them as a golden hammer.

------
iElectric2
When large code bases have sane(!) static typing and functional programming
principles. And no, slamming types onto Python won't help much.

~~~
polotics
Hi, why the anger? What's the difference between "slamming types onto Python"
and adding support for types? Do you mean we should all learn Haskell now?

------
timwaagh
I think the backend will remain pretty much the same as it always has been.
Decentralized networks will have their role; however, they will be too
complicated to be cost-efficient for small-to-medium businesses to develop
for. The trend towards high-level development platforms that take care of a
lot of things will continue, as will the trend towards generic backends that
require little programming at all (think CMS).

------
falcolas
I picture the easy (CRUD, blogs, etc) backend development somewhat continuing
as it has for years, frankly. The easy use cases are going to continue to rely
on PAAS offerings; those offerings will simply go from "colocated web hosting"
to Heroku, Lambda, and GAE. As easy edge caching from CDNs continues to
proliferate, we'll see more and more use of those as well.

Backends which are expected to do more, however, are going to be different.
With the end of free growth in computing speed from faster silicon, I think the
growing complexity of our software is going to force the pendulum to swing back
away from "fast enough" and towards "optimize everything". We simply won't be
able to rely on faster silicon to handle the greater complexity without further
thought on our part.

Pushing the additional complexity to the client will cease to be enough, since
the capability to do heavy client-side processing is becoming less reliable.
The reason is simple: personal computing devices are smaller, lighter, and have
increasingly unpredictable levels of computing power (as they throttle to
conserve battery and minimize heat). This leads me to speculate that more and
more work is going to be pushed back towards the server, which has fewer
restrictions. I do think that communication between the server and client will
grow in complexity as well, which will make us look back at the days of
long-polling HTTP and websockets as "the good old days".

All that to say, the future of backend development is much like the past.
Supporting thin clients with unpredictable computing environments by making
everything run as quickly as possible in a large, distributed datacenter.

------
sontek
I don't believe the backend development we know today is obsolete or will
change much in the upcoming years. In every project I've worked on, the
front-end has always been the slower/buggier part of the stack, because the
front-end is so complex, what with device/browser compatibility and
responsiveness.

What I do believe is that we are seeing a game changing revolution in backend
development with RethinkDB ( [http://rethinkdb.com/](http://rethinkdb.com/) ).
It is an extremely fast distributed database with the ability to get a real-
time feed of data as it changes for a query.

I've done two projects with RethinkDB and I can't imagine going back to
operating postgresql/mysql/mongo, because of the joy it is to scale out
RethinkDB and to use it as an application developer.

They've also created a new service that abstracts it away even more and allows
front-end developers to be extremely productive in their prototypes:
[http://horizon.io/](http://horizon.io/). Although it's not for me, I see a lot
of value in it for people like you who want the backend to get out of their
way.

------
ninjakeyboard
This is totally dependent on the domain. If you're working in digital video,
for example, much of the work is moving away from the client toward server
side solutions. I don't think you can make any sweeping generalizations -
there are just more options that suit different needs. The need to scale is an
increasingly important and popular topic and that won't go away any time soon.

------
eyan
How far into the future are you looking? There's got to be a server
somewhere. Somebody has to write code for those, aside from their care and
feeding.

Smaller applications, in the Big Data sense, will still be used on site. At
least for, say, 10 years down the road (yeah, pulled that figure from my ass).

So no. I don't believe that back-end development will be obsolete. Middle is
the new back.

------
shams93
At my work we still run all our own iron. Many classes of applications will
continue to be way too complex to run entirely on a mobile device. Web
assembly will enable things like Blender, Photoshop, and Ableton Live to run
within a browser. It's not going to entirely wipe out the back end; rather,
the desktop and local applications will go away.

------
k__
My take:

FaaS will become big.

Back-end development with this will become so easy that a front-end dev can
cobble together a pretty performant and secure back-end.

Only FaaS providers will hire back-end devs.

</joke>

I think since much is transitioning to realtime data (which FaaS can't handle
well ATM), there is still huge demand for back-end devs in the coming years.
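
As a sketch of why FaaS lowers the bar: an AWS-Lambda-style function reduces the "back-end" for an endpoint to the business logic alone, with hosting, scaling, and routing left to the platform. The handler name and event shape below are illustrative, not any specific provider's exact contract:

```python
import json

def handler(event, context=None):
    """Illustrative Lambda-style handler: the entire back-end for this
    endpoint is the function body; the platform supplies everything else."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }
```

A front-end dev can write and deploy something like this without ever thinking about servers, which is exactly the "cobble together a back-end" scenario above.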

------
Swennemans
What about OCaml/Reason?

OCaml/Reason can compile to JavaScript, making it possible to write everything
in one very powerful language. In the near future we can probably also use
JSX.

Seems like an improvement over using node and React.

~~~
miguelrochefort
Same with F# and Fable.

------
strictnein
Java. It's the past, present, and future. We'll never rid ourselves of it.

------
the_arun
A back-end paradigm shift is coming, similar to what is happening on the
front-end. Serverless architectures are the hosting & platform part. But
someone has to write business logic for the backend. Correct?

------
ganarajpr
Potentially GraphQL.

------
verdverm
Kubernetes

------
alex_duf
serverless is the next big buzzword for back-end architecture

~~~
juandazapata
Serverless has been around for about a decade, since Heroku was founded in
2007.

~~~
kossae
I believe a lot of people are confusing "serverless" (e.g. Heroku; see
[http://justserverless.com/blog/what-is-serverless-
com/](http://justserverless.com/blog/what-is-serverless-com/)) with "FaaS"
(e.g. AWS Lambda). However, I feel like the latter term is the more
appropriate one, given the rise of FaaS.

------
i336_
> _Apparently, WebAssembly will revolutionize the client-side development. The
> web apps will be very much like desktop ones: high performing, can be
> written in any language._

I unfortunately don't have the specifics myself, but this would be the perfect
point for someone else to chime in about DOM manipulation speed and the other
various sources of browser overhead. I know they're nonzero myself.

WebAssembly won't turn the browser into a "perfect" runtime. Things will still
be glitchy and slow and buggy and stuff, like they are today. Except now there
will be multiple WA implementations, with low-level bugs 99% of webdevs won't
be able to debug... :D

It may be helpful to see WA as a W3C-ratified JVM (Java VM) for your browser:
like the JVM, WA is bytecode-based, and like the JVM, WA will be targetable
from many languages.

You might even compare it to JVM + Swing - where WA is the JVM, and the DOM
(and all other related bits) are like Swing. (Swing is notorious for being
slow, although it has recently gotten a lot faster.)

I think client-side will "break through" in the way you describe when there's
something that bridges the (relative) ease-of-use of the DOM and the
performance/accelerability of WebGL. I doubt that will happen soon though,
considering the cementedness of HTML+CSS and the associated investment (eg
full-stack CSS3 hardware accel).

-

I don't have a clear picture of the backend side of things at the moment, but
I can offer these comments:

You've probably heard of WebTorrent. WebTorrent is not BitTorrent, because
WebRTC P2P channels use SCTP on top of DTLS, which is effectively UDP on the
wire. Besides being a headache for sysadmins who manage deep-packet-inspecting
proxies, this essentially isolates the Web as its own "network" in practice -
the only way you can talk to servers is via TCP+HTTP or UDP+DTLS+SCTP+<your
protocol>. This makes things somewhat difficult. WebTorrent is actually a
fully independent network that uses SCTP instead of TCP or UDP - the protocol
is the same, it's just tacked on top of SCTP (& co). I expect (or at least
hope!) that in a couple of years most BitTorrent clients will have
WebTorrent support.

There's also the fact that mobile networks are notoriously bad at handling
"unusual" data, because cellular traffic gets mangled by lots of pesky/fussy
infrastructure
as it bounces between your device and what might be considered the
"traditional" Internet backbone. I've heard UDP is touch-and-go, for example,
or even access to unusual ports. There's also the well-known fact that radio
dropouts are still common and fundamentally hard to fix, even in 1st-world
areas. This makes peer-to-peer networking incredibly hard.

A relay-as-a-service system to handle issues like these (SCTP->TCP gateway;
connection persistence (very hard); etc) would make for an excellent DDoS
generator, so at this point these types of things would need to be fixed at
the application level, on a case-by-case basis. Unfortunately.

For another example consider Skype, which recently switched to an entirely
client-server model, wherein the client talks solely to Microsoft servers; in
the old days if the Skype client decided you had an awesome CPU/RAM and
network and your NAT config was sane, it made you a supernode for clients who
didn't have working NAT. Yup. Ref: "skype supernode", also
[http://www.zdnet.com/article/skype-ditched-peer-to-peer-
supe...](http://www.zdnet.com/article/skype-ditched-peer-to-peer-supernodes-
for-scalability-not-surveillance/)

There's also the fact that it's not (yet) possible to cleanly and reliably
detect what kind of connection you're using on all OS platforms, so the old
supernode system might decide your flaky home 3Mbps DSL connection isn't
supernode material, but once you switched your laptop from Wi-Fi to your $5/MB
100Mbps 4GX connection, suddenly your uplink would look perfect for servicing
all 100 clients that appeared to be geographically nearby... :D - and all P2P
systems suffer from issues like these.

In short, federated, distributed peer-to-peer applications aren't quite there
yet, and I expect WebAssembly is one of those technologies that will probably
take enough time to mature that you'll have plenty of time to figure out where
it's headed and position yourself appropriately.

~~~
andoma
Might be worth to clarify that WebRTC encapsulates the SCTP data streams in
DTLS which is just normal UDP packets. So there are no "real" SCTP packets on
the network as a result of WebRTC.

~~~
i336_
_Oh._

Thanks. Fixed!

------
dan31
Many web apps are indeed well suited to being a bit of logic sprinkled atop
several *aaS endpoints: music streaming, social networking, collaboration and
such.

This client-side logic can be made to run really fast with the evolution of JS
engines and now also WebAssembly. Meanwhile, communication cost is always
bounded by the signalling time over the wire, multiplied by the number of
endpoints. More time is wasted on the data redundancy caused by physically
separating services (how severe this is depends on the nature of the app).
Even with the fastest possible client side, overall latency will be capped by
messaging.

Still, all is well while you have 3-4 endpoints to mash up: you probably stay
within the 1-2 seconds of psychologically acceptable latency, so your user
won't switch to something else too fast.

Things stop being funny once you need to integrate more than 5 endpoints:
messaging overhead turns into seconds and tens of seconds. A good example
comes from business software. Parts of a typical ERP suite are tightly
coupled on data: if you have a person, it is the same person for accounting,
CRM, BPM, project planning, etc. Integrate these parts the SOA way and you
obtain irreducible chatter between endpoints and thus slow UX.
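
The arithmetic above can be sketched with a toy model (the numbers are illustrative assumptions, not measurements): if each call to an endpoint costs one round trip and the calls run sequentially, latency grows linearly with endpoints times chattiness.

```python
def total_latency(n_endpoints, rtt_s, calls_per_endpoint=1):
    """Toy model: sequential calls, one round trip (rtt_s seconds) each."""
    return n_endpoints * calls_per_endpoint * rtt_s

# 4 endpoints at one 150 ms call each: ~0.6 s, still tolerable.
# 8 tightly coupled endpoints making 5 calls each: ~6 s, painful UX.
```

Real systems can parallelize and cache some of this, but transactionally coupled data (the ERP case) is exactly where those tricks stop working.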

It was observed that up to 40% of total system workload in typical SOA-style
ERP suite is in data exchange between the core ERP system and the satellites.
One has to control the flow between endpoints transactionally, hence
aggressive caching is not only very complex there, but would not actually
work.

With that in mind, and thinking of enterprise software, I am personally very
much in favour of the approach the SAP HANA guys took, where the business re-
unites all the apps on a single platform and lets them share data
transactionally. This is contrary to the widespread belief that future
enterprise systems should be a collection of SaaS components integrated
through standard interfaces.

The management of redundancy is prohibitive, and the extra workload from
shipping data back and forth creates a bottleneck. And, by the way, there is
no need to implement physically disjoint microservices to achieve great
system modularity, nor should you build a monolith anymore to address
performance problems. Leverage the capabilities of modern software platforms
and "do microservices the right way".

In the world of enterprise software, the understanding is emerging right now
that data integration always beats messaging in total cost of ownership.

Surely there is no silver bullet; what is good for enterprise software might
not suit other domains. But just imagine a game engine where polygons are
rendered on one machine, physics calculations are done on another, multiplayer
logic is on a third, and everything flows and mixes through the user's
machine. Wouldn't it be kinda slow? But enterprise software done via
messaging-based integration isn't really far from that.

