

The Smartphone as Application Server - mnemonik
http://vijayan.ca/blog/2015/03/17/the-smartphone-as-application-server/

======
quanticle
What I don't understand is why the smartphone is the _server_. Naïvely, I'd
expect the server to be the Raspberry Pi, and for clients to be the
smartphones. Clients connect to the server and push content which the server
displays on the screen. Something like X11, in other words. Looking at what
the application does, I don't even think it would be that difficult to convert
it to an architecture like what I described above - in essence you'd be
reversing the arrows on your interaction diagram.

Can you elaborate as to why you built the server on the smartphone rather than
Raspberry Pi? What does that get you? How do you get around the limitations
that phone operating systems (especially iOS) impose on background processes?
What's the impact on battery life?

~~~
kannanvijayan
Author here. The point of the design is that I'm treating the Pi as the
display server (i.e. the client in this model, playing the role an X11
display server does). The smartphone is the app server because it hosts the
relevant state.

The reason it's structured this way is that we want the ability to push code,
logic, and data to the endpoint and have the endpoint execute it.

Consider a traditional usage scenario: your computer is the display server
(web client), and the app server sits somewhere out on the cloud. The app
server holds the application and the state, and you download it on the fly.
The client (browser on your PC) is largely stateless, and the cloud-based
server is the state carrier that ships both logic (in the form of a webpage +
js) and data to the client on an as-needed basis.

In this design, the thing that's stateless is the endpoint (the thing
connected to the TV). The state-carrier is the phone. The state-carrier needs
to send logic and data to the client so that the client can execute it
locally. This approach is in fact very natural.

Now, as you note, the actual transport details can be flexible. We _could_
design it so that the transport is handled by a small proxy that runs on the
Pi. The proxy would be started when the browser session is initiated, and both
the browser and the phone would connect to the proxy, and the proxy would act
as mediator. But that would be an implementation detail. In the abstract, the
phone would still be the server and the endpoint (the pi) would still be the
client.
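The pairing step such a proxy would need can be sketched in a few lines. The `SessionBroker` name, the session ids, and the peer objects below are illustrative assumptions, not details from the post:

```javascript
// Sketch of the pairing logic a small proxy on the Pi might use.
// Both the browser and the phone connect and call join() with the
// same session id; the proxy then just shuttles bytes between them.
class SessionBroker {
  constructor() {
    this.waiting = new Map(); // sessionId -> first peer to arrive
  }

  // Returns [firstPeer, secondPeer] once both sides have arrived,
  // or null if this peer is the first one and must wait.
  join(sessionId, peer) {
    const other = this.waiting.get(sessionId);
    if (other !== undefined) {
      this.waiting.delete(sessionId);
      return [other, peer];
    }
    this.waiting.set(sessionId, peer);
    return null;
  }
}
```

Once a pair is matched, the proxy can pipe the two sockets together; the phone remains the logical server either way.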

Fundamentally, we're starting an application on the phone, and sending parts
of it to the pi for the pi to execute. The most natural model for that is for
the phone to be the HTTP server (the state and logic originator) and for the
pi to be the web client (the stateless device).

It's a bit counterintuitive, but if you think about it for a bit, it starts
fitting together really well.

~~~
quanticle
Thanks for the clarification. I think the point of confusion for me was around
the word "endpoint". As a web developer, I'm so used to thinking of the phone
as the "endpoint" that the notion of the phone as the app server threw me a
little.

One other concern I have is input lag. You mention gaming, which can require
some pretty precise input timing. Have you run into any issues around that? Or
are modern wifi networks generally Good Enough that lag isn't a problem?

~~~
kannanvijayan
I only had enough time to whip up the quick demo here. The responsiveness is
pretty tight as far as latency goes - it "feels" like the photo on screen
moves in tandem with the one under my fingers. No perceptible lag.

I wanted to try out some more interesting demos. One idea I had was to run the
HexGL racing game ([http://hexgl.bkcore.com/](http://hexgl.bkcore.com/)) on
the endpoint, and then have the app on the phone interpret the gyro as
steering and send it over to the endpoint.
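The phone-side mapping from tilt to steering could be as small as this sketch. The 45-degree full-lock angle and the WebSocket wiring are my assumptions, not details from the demo:

```javascript
// Sketch: map a DeviceOrientation tilt angle (gamma, in degrees) to a
// steering value in [-1, 1]. maxTiltDeg is an assumed full-lock angle.
function tiltToSteering(gammaDegrees, maxTiltDeg = 45) {
  const clamped = Math.min(maxTiltDeg, Math.max(-maxTiltDeg, gammaDegrees));
  return clamped / maxTiltDeg;
}

// In the phone's browser, the gyro readings would then be pushed to the
// endpoint, e.g. over a WebSocket:
//   window.addEventListener('deviceorientation', (e) => {
//     socket.send(JSON.stringify({ steer: tiltToSteering(e.gamma) }));
//   });
```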

However, I found that WebGL on the pi is slooooow, in both Chrome and
Firefox. I don't think this is a raw power issue so much as running desktop
browsers on mobile-class hardware, where the browsers haven't been compiled
with driver support for the graphics chipset (the Pi can run Minecraft just
fine, so it should be able to handle a simple racing game).

My educated guess (from just the photo-viewing demo) is that the latency is
good enough to support responsive gaming input.

------
endergen
We explored ideas around this space at Emotely and Brass Monkey, and there is
still much left unexplored. This approach basically gets you a WiiU/Chromecast
second-screen platform:
[https://www.youtube.com/watch?v=NE8-TntjYB4](https://www.youtube.com/watch?v=NE8-TntjYB4)
[http://francoislaberge.com/blog/my-time-at-brass-monkey/](http://francoislaberge.com/blog/my-time-at-brass-monkey/)

~~~
kannanvijayan
This is sweet. I knew this was obvious enough of an idea that other people
would have had it, and potentially taken it further.

One of my longer-term thoughts on this approach is that it may extend beyond
simply a "controller" notion. Ultimately, I was thinking the phone could
become the sole state carrier, and all other devices (workstations, laptops,
game consoles, projectors) could become stateless endpoints which are
ephemerally controlled as needed.

For example, you could arrive at home, connect your phone to your home
computer, and initiate a "home session" which kicks off all the 'webapps' you
have defined for your home-computer environment, on your home computer. The
apps are structured client/server, and they save their state back to the phone
(but use the computer's processing, display, audio, etc.). When you're done,
you disconnect and leave. All your state is saved to your phone, and any other
computer you come across can temporarily become your "home computer".

Likewise, any computer can be your "work computer". And any game console can
be "your" game console.

The phone, in this context, seems like the ultimate proxy for the individual.
It travels with a person. It has access to the inputs and data in the
vicinity of the individual. It can be controlled by the individual. What
better place to put all of your computational state than in there?

Anyway, this was some far-fetched, pie-in-the-sky thinking I had about the
long-term implications of such an architecture.

Happy to see other people working on the concept.

~~~
endergen
I agree. We focused on the controller use case as it made for a simpler API,
but it was always my interest to make it more general, perhaps through a
different product that was more open.

Things like having your phone cache webapps that could be cast over LAN (thus
not requiring a persistent connection); this would be useful for playing, say,
games/apps on planes or in log cabins, etc.

Or having the phone create a metascreen: synchronizing multiple monitors, each
with a fullscreen webpage open, and splitting up the graphics of a game or
movie to run across all the screens, thus making one bigger screen.
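One hypothetical way to do that splitting: each endpoint renders the full virtual canvas and translates it so only its own tile is visible. The grid layout and all names here are illustrative assumptions:

```javascript
// Sketch: compute the translation and virtual canvas size for one monitor
// in a cols x rows grid of identical screens. The endpoint at `index`
// renders the whole virtualW x virtualH page and shifts it by (x, y).
function tileOffset(index, cols, rows, screenW, screenH) {
  const col = index % cols;
  const row = Math.floor(index / cols);
  return {
    x: -col * screenW,          // e.g. applied as a CSS translate
    y: -row * screenH,
    virtualW: cols * screenW,   // size the page is laid out at
    virtualH: rows * screenH,
  };
}
```

For example, endpoint 3 in a 2x2 grid of 1920x1080 screens would render at 3840x2160 and shift by (-1920, -1080) to show the bottom-right tile.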

So many possibilities. Ultimately though, I'm still on the fence as to whether
this is the best approach, versus a mixed approach using Bluetooth, ZigBee,
WiFi, HTTP, WebSockets, raw sockets, etc., all abstracted to use whatever each
device has to offer.

------
gfosco
Built something like this at YC Hacks, except any device or even a webpage
could function as the server and provide a real-time or REST interface. We had
5 demo apps showing off various examples, including a super cool two-factor
auth demo.

Trying to explain the concept to people in 30 seconds was non-trivial.

------
dropit_sphere
This is neat. I have a hard time thinking of uses off the top of my head, but
the idea of a no-installation phone-paired TV app has potential.

Thoughts of
[https://youtu.be/_mjxd47OVUI?t=57s](https://youtu.be/_mjxd47OVUI?t=57s) go
through my head.

