

One Web App = One Language - hbbio
http://blog.opalang.org/2012/04/one-web-app-one-language.html

======
tqs
The innovation here is using the type system to determine whether code should
be executed on the client or the server. Contrast this with writing code
manually for Node and the browser: though it's all JavaScript, the programmer
must explicitly divide the code between server and client. Or contrast with
Meteor. There is a bit more code sharing there, but the programmer still needs
to think about client vs. server (by checking `Meteor.is_client` and
`Meteor.is_server` where appropriate).

In Opa (as I understand it--I haven't used it), the programmer can be explicit
about where code execution should occur, but the compiler can also infer where
it should occur, determine the best place to put the "data boundary", and warn
the programmer if code is executed in an unsafe context.

For "traditional" next-gen web apps, it's a tough call whether this is enough
of a win to warrant learning a new language. Further, because this style of
server-client programming is still pretty immature, programmers may want
tight control over where code runs.

However, I think this is very important work for the future of distributed
computing. In particular, consider if we are ever to move from the web's
dominant server-client architecture to a more fluid peer-to-peer model, with
code moving from client to client where appropriate (i.e. mobile agents). For
example, say my phone needs to grab some data from my laptop, but rather than
have the laptop send all the raw data to my phone and have my phone process
it, the system figures out that it would be more efficient for my phone to
send code to my laptop, have the laptop process the data locally, then send
the processed data back to my phone.

To be able to make inferences like the one in that example, I think we will
need approaches like Opa's.

~~~
Animus7
> For example, say my phone needs to grab some data from my laptop, but rather
> than have the laptop send all the raw data to my phone and have my phone
> process it, the system figures out that it would be more efficient for my
> phone to send code to my laptop, have the laptop process the data locally,
> then send the processed data back to my phone.

I've been romanticizing this kind of dynamic client-hopping execution in my
head for a long time, and I even tried to implement it once. The biggest
problem is that it's provably impossible to solve in general: if your language
is Turing-complete, it reduces to the halting problem.

You can make guesses (as Opa does) and cover the simple cases with code
analysis, but in practice such attempts have devolved into inefficiency and
synchronization problems once you start implementing nontrivial systems over
fallible interfaces such as the web. It all brings to mind FGCS[1], and how
that just didn't work.

[1] <http://en.wikipedia.org/wiki/Fifth_generation_computer>

~~~
koper
What exactly is the problem you are referring to as unsolvable?

~~~
javajosh
The problem of deciding where best to execute code.

------
Animus7
We already have this. Node.js.

So I'm confused as to why we need another language that is like JS but _not
really_, and that compiles to Node (which it doesn't yet, but the author wants
to give it a shot).

Sharing server and client code in Node is already trivial. So are RPCs (dnode)
and realtime streaming (socket.io). And for the sake of security/perf, this
separation works very well.

Not to detract from the accomplishments here -- the type inference looks
awesome, and I started off as a "compilers guy". But... why?

~~~
mabdt
As a member of the Opa team, I can think of at least three problems that Opa
solves for you compared to Node.js frameworks.

1) Asynchronous (a.k.a. event-driven) programming. Lightweight threads are a
very efficient paradigm, but dealing with callbacks manually is cumbersome and
sometimes very difficult to get right. (Exercise: try to code a solver for the
"Tower of Hanoi" in Node.js that is really non-blocking :-))

The Opa compiler cuts your code into chunks and makes it non-blocking by
managing all the callbacks for you. (Technically, Opa compiles functions using
a continuation-passing style.)
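For comparison, here is roughly what doing that CPS transformation by hand looks like in plain Node -- a sketch of the shape of code such a compiler produces, not Opa's actual output:

```javascript
// Hand-written continuation-passing Tower of Hanoi. Each recursive step
// yields to the event loop via setImmediate, so the solver never blocks it.
function hanoi(n, from, to, via, moves, done) {
  if (n === 0) return done(moves);
  setImmediate(function () {
    hanoi(n - 1, from, via, to, moves, function (moves) {
      moves.push([from, to]);                   // move disk n
      hanoi(n - 1, via, to, from, moves, done); // continue with the rest
    });
  });
}

hanoi(3, "A", "C", "B", [], function (moves) {
  console.log(moves.length + " moves"); // 2^3 - 1 = 7
});
```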

2) Consistency between code and data (using static types). Opa features a
static type checker with (almost complete) type inference. Programmers can
also declare typed MongoDB collections and build typesafe DB queries in the
Opa syntax. In the end, consistency between code and data is statically
ensured in the entire application (client code, server code, and database
queries).

Needless to say, there is no such thing as script or database code injection
in Opa programs :)

3) Transparent protection against XSS attacks. Most frameworks are based on a
templating system that sees XHTML values as a "flat string with holes". In
this setup, the problem of choosing the right escaping function at insertion
points is called "context sensitivity" and is generally considered very
difficult to solve without hints from the programmer. In Opa (as in Scala),
XHTML and XML values are first-class tree-based data structures. Inserting a
string into an XHTML value generates an automatic typecast and triggers the
proper escaping function by default, in a very predictable way.
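A toy JavaScript sketch of the contrast (`escapeHtml` and `el` are made-up names, not Opa's machinery):

```javascript
// "Flat string with holes" templating: the programmer must remember to
// call the escaper at every insertion point.
function escapeHtml(s) {
  return String(s).replace(/&/g, "&amp;").replace(/</g, "&lt;")
                  .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
}

// Tree-based values escape at construction time, so forgetting is
// impossible. A toy element constructor:
function el(tag, text) {
  return { tag: tag, toHtml: function () {
    return "<" + tag + ">" + escapeHtml(text) + "</" + tag + ">";
  }};
}

var unsafe = "<script>alert(1)</script>";
console.log(el("p", unsafe).toHtml());
// <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```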

In the end, I believe that Opa has the potential to be to V8+Node.js what
Scala+(Lift or Play) is to the JVM -- while at the same time being arguably
simpler to use.

How does this all sound to you?

~~~
nemoniac
These are all potential wins. However, having tried the tour and the example
code, a concern is the memory usage on the client side. Even for simple
examples, it's very high. Prohibitively high for mobile usage. Would you care
to comment on this? Is it something that can be improved upon in future
versions or is it inherent to the approach?

~~~
hbbio
So far, we haven't worked much on reducing the size of the generated
JavaScript code or the memory usage. There are definitely huge potential
gains in that space, and we plan to focus on them once we have the Node.js
backend: the benefits will be doubled.

------
macrael
Are there good examples of this kind of coding? I'm excited about the idea of
using the same language and maybe some of the same models on the client and
the server, but the idea of just writing one application and having pieces of
it execute on the server and pieces of it execute on the client is not
clicking for me yet. Is it a good idea to abstract away the communication
between the client and the server? It seems to me that designing that is one
of the important parts of writing a good client/server app. How does that work
when you are running the same code on both sides?

~~~
_delirium
I've played with toy versions I've hacked up, and I think the idea is
promising, but the analysis is difficult. The idea is that you should be able
to write, say, an HTML canvas game as if you were running code on one machine,
writing to a 2d framebuffer. But then how do you deploy it? If the clients are
powerful enough (and there are no dependencies on server-side state), you can
run it all client-side, as JS that implements the game and writes to the
canvas.

Alternately, you can run everything server-side, except every time a
framebuffer update is needed, you generate an AJAX call which triggers the
appropriate canvas update, treating the client as just a remote framebuffer.
But that can be way too high-latency, and not run _enough_ on the client side.
So instead you might want to put the data boundary somewhere between those two
extremes. Ideally the compiler would figure out where, based on some kind of
analysis (either static or from profiling data, or both) that tries to
minimize latency problems and/or data transfer, and takes into account
capabilities of the client and server.
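As a toy sketch of what such an analysis might look like (all numbers, names, and the greedy rule itself are made up for illustration):

```javascript
// Cost-based boundary placement: keep a step on the server only when
// shipping its output over the link is cheaper than shipping its input
// down to the client.
function placeBoundary(steps, linkLatencyMs) {
  return steps.map(function (s) {
    var serverCost = s.outputKb + linkLatencyMs; // result crosses the link
    var clientCost = s.inputKb;                  // input crosses the link
    return { name: s.name,
             runOn: serverCost < clientCost ? "server" : "client" };
  });
}

var plan = placeBoundary([
  { name: "drawFrame",     inputKb: 4,     outputKb: 300 }, // cheap client-side
  { name: "aggregateLogs", inputKb: 10000, outputKb: 5   }  // cheap server-side
], 50);
```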

------
sciurus
Some previous discussions:

<http://news.ycombinator.com/item?id=2925609>
<http://news.ycombinator.com/item?id=2802736>

