This looks very nice. It'll need lots of docs with examples to understand how to put together the nicely flexible building blocks you provide.
That will make Refile trigger the specified processing at the moment the URL is requested. You should then put a CDN or a reverse proxy in front to cache these processed files, so that they aren't reprocessed on every request.
About the documentation: I wrote documentation for each of the plugins and linked them all on http://shrinerb.com. The plugin names should hopefully indicate which feature they provide, and since they're all independent, it's up to the user to choose which features they want (i.e. which combination of plugins).
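To illustrate the plugin composition, here's a minimal sketch of an uploader built from individual plugins; the storage paths are just examples, and this requires the shrine gem:

```ruby
require "shrine"
require "shrine/storage/file_system"

# two storages: "cache" for freshly uploaded files, "store" for permanent ones
Shrine.storages = {
  cache: Shrine::Storage::FileSystem.new("uploads", prefix: "cache"),
  store: Shrine::Storage::FileSystem.new("uploads", prefix: "store"),
}

class ImageUploader < Shrine
  plugin :determine_mime_type    # extract MIME type from file content
  plugin :cached_attachment_data # retain the cached file across form redisplays
end
```

Each feature lives in its own plugin, so an uploader only carries the behavior it opted into.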
"upfront" but async background does seem to be the best way to approach it to me, at least if you had to pick one -- that's what Shrine does?
Yes, I was inspired by CarrierWave's uploaders; the goal of Shrine is to have all of the uploading logic in the uploader. It even allows you to hook into the ".included" hook of the model that receives the attachment methods, from inside the uploader (http://shrinerb.com/rdoc/classes/Shrine/Plugins/Included.htm...).
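For illustration, a rough sketch of that hook (assuming an ActiveRecord model; the callback body is made up):

```ruby
class ImageUploader < Shrine
  # the `included` plugin's block is evaluated in the context of the
  # model class that includes the attachment module
  plugin :included do |name|
    # `self` is the model class, `name` is the attachment name (:image here)
    before_save { puts "saving the #{name} attachment" }  # illustrative callback
  end
end

class Photo < ActiveRecord::Base
  include ImageUploader[:image]  # triggers the block above on Photo
end
```

So model-side behavior can be defined entirely from inside the uploader, keeping the model clean.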
This minimizes the time spent processing a request and keeps the front-end snappy.
Another thing I care about is the user experience. OK, both of these gems let you know when the background job is in progress, so you can display a placeholder image until the processing is done. But that's not a really nice user experience. In Shrine the user immediately sees the image they uploaded, because that image is already cached (usually on the filesystem), so from the user's point of view the upload is finished before the background job has even started. CarrierWave also has the cached image, but by design it's not possible to use it, while Paperclip doesn't cache images at all.
TLDR: forget shrine too, use an image server
I noticed that many of the problems you mention with other uploading gems are actually solved by Shrine:
- "Transformation juggling" -- Versions are processed at instance-level, so you don't need to remember a class-level DSL.
- "File path config" -- If you change the location where your files are stored, the existing files on old locations will still work normally
- "Form validation dance" -- Very simple with Shrine, you just add an "<attachment>" hidden field.
- "Schema pollution" -- Shrine stores uploaded files exactly in the way that you described, in a single column as JSON
The idea of an image server indeed looks very interesting, but I think you're forgetting that this solves only one part of file uploads (although I agree it's a big part, and imgix looks really sexy). If you're using an image server, you still need to do:
- Caching the file (this still needs a direct upload endpoint)
- File validations (most notably preventing big files)
- Storing the file to the image server (this still needs a background job)
- Model attachment logic (all of the assignment/callback logic)
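As a sketch of what the validation piece looks like in Shrine, via the validation_helpers plugin (the limits and types here are illustrative):

```ruby
class ImageUploader < Shrine
  plugin :validation_helpers

  Attacher.validate do
    validate_max_size 10 * 1024 * 1024  # reject files over 10 MB (example limit)
    validate_mime_type_inclusion ["image/jpeg", "image/png"]
  end
end
```

These run on assignment, so an oversized file is rejected before it ever reaches the store (or the image server).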
One problem I've always had with on-the-fly processing: if you have a page with a lot of photos and you change the URLs so that they're processed differently, how will that page look for the first person who visits it? I think it will be horrible; that visitor will have to wait a really long time to actually see the images, which is not a nice user experience.
Also, the "image server" strategy can be used only for images, so if you want to upload documents, audio or video, you still need do regular processing on the application side. What's nice about uploading gems is that they provide a general solution.
> If you're using an image server, you still need to do...
yes, and we should move towards solving these issues in the image server, not in yet another client gem
> File validations (most notably preventing big files)
big file uploads can be prevented on an image server just as on a rails server; the limits should be set in the webserver fronting it.
validations are "easy" since you're only validating json attributes, but i guess as you mentioned, shrine has this benefit too.
> Storing the file to image server (this still needs a background job)
even better than a background job: we always direct upload from the browser/client (regardless of whether we are using s3 or not).
> Model attachment logic (all of the assignment/callback logic)
a client lib can do this for your rails app. and it isn't much https://github.com/choonkeat/a...
> One problem that I always had with on-the-fly processing is, if you have a page with a lot of photos, and you want to change the URL so that they're processed differently, how will that page look when a first person visits it? I think it will be horrible, the user will have to wait for a really long time to actually see the images, which is not nice user experience.
we can still "pregenerate" after the author uploads his images: the browser just has to make the requests (not necessarily visibly), and the image server can cache those generated images for a long, long time. then when the "first other person" views the page, those images are as fast as anything.
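a tiny sketch of that pregeneration idea in plain ruby: build each derivative url at publish time, then fetch each one once (e.g. with Net::HTTP) so the image server generates and caches the results. the /geometry/path url scheme here is made up for illustration:

```ruby
# Build the derivative URLs an image server would be asked for
# (the base host and /geometry/path scheme are made up).
def derivative_urls(base, path, geometries)
  geometries.map { |g| "#{base}/#{g}/#{path}" }
end

urls = derivative_urls("https://img.example.com", "posts/1/photo.jpg",
                       ["100x100", "640x480"])
# each URL would then be requested once to warm the image server's cache
urls.first  # => "https://img.example.com/100x100/posts/1/photo.jpg"
```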
storing the generated image or not is still an image server's decision. my own implementation chose not to store because i find those generated files more trouble to carry around.
again, these are things for different image servers to iron out and one-up each other with.
> Also, the "image server" strategy can be used only for images, so if you want to upload documents, audio or video, you'll need to use something else. What's nice about uploading gems is that they provide a general solution.
actually, there's nothing in an image server that makes it inherently image-only. i'd say file servers provide as general a solution as those uploading gems
for example, attache server handles pdf uploads well and even renders the thumbnail when requested via <img src="">. the same applies to any other file type - the goal is actually to render thumbnails for everything, just as your desktop renders the same files as thumbnails.
again, these are things for different image servers to one-up each other with.