

Passing Pointers - brettcvz
http://blog.filepicker.io/post/27855218260/passing-pointers

======
fzzzy
Yes! I really hope more people catch on to why this is an excellent idea.

URLs as pointers are not needed in a world where Facebook controls everything
and has access to everything by id in its backend.

A world where services provide and consume urls is a world where it doesn't
matter what server something is on, everyone can participate.

------
icebraining
It's always refreshing when people building web services actually _understand_
the web. (Note: not sarcasm, unfortunately this is rare).

~~~
ChuckMcM
Except that if you understand the web, you understand that a URL is worthless.
Worse than worthless, it's downright dangerous. The moment you write down a
URL, the clock starts ticking on the data it refers to remaining the data the
URL described. This can fail in spectacular ways. A loooong time ago there was
a web site that hosted comments on golf courses, which linked out to a hosted
community site. The front side of the application would show a snippet of each
comment and a link to the full comment. The service passed away into web
zombie land (the site was still serving pages, the links still pointed at the
comment site, and nobody was home). The comment site got sold or acquired, and
someone put up malware behind every single inlink. Blammo: armed and
dangerous.

The concept the OP is going for is 'deferred work', which is to say, don't
pass around data that isn't going to be used. That is indeed a noble goal, but
you must have a way to vet that the pointer you passed still points to the
thing you thought it did, or you will find out what so many C programmers have
discovered about caching pointers: bad, bad, bad idea.

~~~
brettcvz
The central issue here is trust. If you trust the provider of the URL to
maintain the link - to ensure that it stays alive and points to what you
expect - then it's fine.

In the C pointer-caching case, the issue is that the system makes no
guarantees about the liveness of any prior pointers, whereas on the web this
is entirely possible and encouraged (see the oft-cited post, "Cool URIs don't
change").

~~~
BHSPitMonkey
I don't think I trust any web site in that regard, though. I could even see
Google yanking the floor out from under me as a consumer of one of their URL
schemas.

~~~
icebraining
If you didn't trust any web site, you wouldn't click on any link.

Trusting a website doesn't mean you have to trust it _indefinitely_. You can
trust that a URL will be kept alive for a certain length of time - minutes,
hours, days, etc. - and deal with it accordingly.

------
throwaway54-762
Aw, and I expected a C/C++ article! Still, this is good stuff: "zero copy" for
web content.

~~~
brettcvz
Ha good point - Same concept, way different level of abstraction

------
liyanchang
"my data stores and yours are practically collocated".

Truth. Every time I ping from my server and get ridiculously low response
times, I have to pause before thinking "Thank you, AWS".

~~~
brettcvz
Until everywhere has fiber to the curb, the internet backbone is going to be
orders of magnitude faster than the last-mile speeds

~~~
ori_b
At which point the demand for bandwidth will go up, the internet backbone
will have to grow, and it will _still_ be orders of magnitude faster than
last-mile speeds. The backbone will always be faster than the last mile, by
necessity.

~~~
brettcvz
Fair point. It will be interesting to see in what areas the demand for
bandwidth will go up. At some point doubling resolution is no longer
noticeable.

~~~
ukd1
Games with massive textures (retina?), streaming lossless music, 1080p / 4k
&&|| 3D 'netflix'?

Bandwidth will always get used up. "64k is enough"...LOL.

------
kstenerud
This will only work so long as services copy the contents of the URL you give
them, which opens up a whole slew of security and permission issues.
Otherwise, the original link becomes the weak link in a potentially long
chain of links.

A pointer is handy and convenient until the resource it points to disappears.

~~~
joshma
I'd also point out that URLs aren't exactly pointers, as they don't (all)
support writes in addition to reads. It'd be interesting to see a web service
support locking, reading, and writing through URLs as pointers.
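A toy sketch of what those pointer semantics might look like. An in-memory
dict stands in for the remote service's storage; in a real service, read
would be an HTTP GET, write a PUT, and the lock operations some hypothetical
extension - none of this is an existing API.

```python
class UrlPointerStore:
    """Toy model of URLs as read/write/lockable pointers."""

    def __init__(self):
        self._data = {}      # url -> bytes (stands in for remote storage)
        self._locks = set()  # urls currently locked against writes

    def read(self, url):
        """Dereference the pointer: fetch the bytes the URL points at (GET)."""
        return self._data[url]

    def write(self, url, payload):
        """Write through the pointer (PUT), unless the URL is locked."""
        if url in self._locks:
            raise PermissionError("locked: " + url)
        self._data[url] = payload

    def lock(self, url):
        self._locks.add(url)

    def unlock(self, url):
        self._locks.discard(url)
```

Usage: `write(url, data)` then `read(url)` round-trips the bytes, and a
`write` to a locked URL raises, which is where a real service would return
an error code (or queue a callback, per the sibling comment).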

~~~
joshma
Nice, didn't notice that! (For those interested, it's actually documented
here: <https://developers.filepicker.io/docs/web/#fpurl-contents>)

A 501 sounds like the closest error code, and I'd say having both synchronous
and asynchronous modes of locking might be useful. Synchronous just holds the
connection open (certain frameworks don't mind long-lived connections), while
an asynchronous method might pass a callback_url in the request to be hit
when the file is ready, in case it's locked.

(NB: to be honest, I'm not too sold on the demand for locking vs.
overwriting; I guess I threw it in the list of [things that files can do].
Might be interesting to see this need evolve as files move to the "cloud",
though.)

EDIT: While I'm at it, a PUT method for creating files could be cool too, to
let people use filepicker without the JS widget.

// oops, missed the link. meant to reply to sibling comment

------
stuffihavemade
I'm running into this problem right now with S3. I have a bunch of files on a
CDN that I want to store in my bucket, but (as far as I know) I have to
download them all to my machine before storing them. I'd love for the API to
accept a URL.
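A partial workaround, short of S3 accepting URLs directly: stream each file
from the CDN into S3 without staging it on disk. boto3's `upload_fileobj`
accepts any file-like object, and `urllib.request.urlopen` returns one, so
the bytes only transit memory. The bucket and URL below are hypothetical,
and this still copies through your machine - it just avoids the
download-to-disk-then-upload round trip.

```python
import urllib.request

def copy_url_to_s3(s3_client, src_url, bucket, key):
    """Stream src_url into s3://bucket/key without touching local disk."""
    # urlopen returns a file-like HTTP response; upload_fileobj reads it
    # in chunks rather than buffering the whole file.
    with urllib.request.urlopen(src_url) as body:
        s3_client.upload_fileobj(body, bucket, key)

# Usage (requires boto3 and AWS credentials; not run here):
#   import boto3
#   copy_url_to_s3(boto3.client("s3"),
#                  "https://cdn.example.com/f.bin", "my-bucket", "f.bin")
```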

~~~
brettcvz
If you can convince S3 to integrate with Filepicker.io we'd put a statue of
you in our office.

------
bjornsing
So true. For example, I love gmail but can't believe how many times a day I
download an attachment just to upload it as an attachment to another email.
It's ridiculous.

------
ukd1
This is cool, but I wonder when it will actually be done by reference rather
than reference-then-copy.

When will I be able to keep a "this is in use here" record for a file that is
stored elsewhere? While this is way better than download-then-upload over
broadband or cell, it still seems dumb to copy in the first place, even if
the copy happens over the internet backbone.

~~~
clord
This would require some sort of single-sign-on, or capabilities system. Would
love to see capabilities-based security for web services, actually. "Here is a
token that grants permission to service X to perform action Y for duration Z."
Can OpenID and its ilk do this?

Anyway, apps that don't need security should be as you describe already.
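A minimal sketch of the capability token described above: an HMAC-signed
grant of (service X, action Y, duration Z). The field layout and signing
scheme here are purely illustrative, not any existing standard (real-world
analogues include S3 presigned URLs and signed JWTs).

```python
import hashlib
import hmac
import time

SECRET = b"issuer-signing-key"  # hypothetical secret held by the issuer

def mint(service, action, ttl_seconds):
    """Issue a token granting `service` permission for `action` until expiry."""
    expires = str(int(time.time()) + ttl_seconds)
    payload = "|".join((service, action, expires))
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "|" + sig

def verify(token):
    """Accept the token only if the signature matches and it hasn't expired."""
    payload, _, sig = token.rpartition("|")
    want = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, want):
        return False  # tampered or forged
    service, action, expires = payload.split("|")
    return int(expires) > time.time()
```

The receiving service needs no per-user session state: possession of a valid,
unexpired token _is_ the permission, which is the core of the capability model.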

~~~
Kevin_Marks
This is pretty much exactly what OAuth does - enables the user to authorize a
web service to access another one on their behalf, with per-app, per-user
constraints.

------
Scene_Cast2
Yay, garbage collection and memory management! How do you know whether a URL
has expired or not? I'm assuming temp URLs, which is quite reasonable in a lot
of cases.

~~~
brettcvz
HEAD requests?
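That check might look something like this: before dereferencing a stored URL,
issue a HEAD request and compare what comes back against what was recorded
when the URL was handed over. This is a sketch, assuming the provider returns
meaningful status codes and, optionally, a stable ETag.

```python
import urllib.error
import urllib.request

def url_still_live(url, expected_etag=None):
    """HEAD the URL; optionally confirm it still carries the expected ETag."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            if expected_etag is not None:
                return resp.headers.get("ETag") == expected_etag
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False  # dead link, DNS failure, timeout, non-2xx, etc.
```

A cheap liveness probe, though it can't catch ChuckMcM's zombie-site case,
where the URL still resolves but the content behind it has changed hands.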

