This post is a timely reminder for me, and I will be taking another look at Uppy with keen interest. Thanks for carrying on the good work and getting to 1.0!
That could make our public coding efforts worthwhile financially. It’s a bit of a gamble. We’re believers (and biased) partly because we just enjoy working on OSS, and partly because we’re quite a lean company and don’t need to make boatloads of money to keep investors happy (we’ve been bootstrapped since 2009, so no investors, and ramen profitable since 2012). My cofounder and I are both devs, and we’re not in it to get yachts but to have nice and rewarding work, basically.
Whether it also becomes a financial success remains to be seen. Three years of dev time isn’t cheap, so it’ll be a bit sad if there is no ROI at all, but we’ll comfort ourselves with the thought that we can provide a better and more reliable uploader to Transloadit’s existing encoding customers (going open has made these projects better thanks to exposure to more minds and environments) and make them very happy, even if there are 100x more people who don’t make us money.
If you are a service selling a product targeted towards developers, I can't think of better marketing than something like this, where tons of users will use it for free, but many users (like me) who hadn't previously heard about your core product will find out about it through this.
EDIT: One thing to add about the marketing angle: another commenter mentioned that this was posted multiple times before reaching the front page, and some of the generic "Awesome! Will definitely try this!" comments here by low-karma users make me think there is some astroturfing going on. That said, I don't really mind it. The author created a useful tool, open sourced it, and I'm now glad I know about it. Kudos to him, and if it helps more people become aware of his business (which obviously funded his creation of this open source tool), more power to him.
As for astroturfing: I am certainly guilty of self-promotion. When this post didn’t get traction, after a few days I thought maybe Show HN would find it interesting. In addition, my team of ~5 will have upvoted this post (although that probably works counterproductively with HN's algorithms, I couldn’t/wouldn't stop them). Other than that we don’t deploy any schemes here; what you likely see is that, because we exploded on Reddit over the weekend, its users are posting it to HN too.
It is for finding _actionable_ content like this that I come to HN. Thank you OP for posting.
If you want to save money then yes, you could configure direct S3 uploading with Uppy; we have better examples for that these days (e.g. we now also show back-end code for signing the requests).
However, it seems you've got it working now, so maybe there's no reason to change?
Am I supposed to be running my own companion server, or is the server shown in the examples supposed to work for my project (I'm seeing CORS errors)?
The Companion server used in the examples (companion.uppy.io) is really only meant for demo purposes, and hence throws CORS errors if you try to use it for your own website.
You have three options:
1. Disable Instagram/Dropbox/Google Drive. Then you can use Uppy without any special server components (it'll just upload to your Apache server or S3 bucket).
2. Enable Instagram and friends, but run Companion on your own server. It can be installed as middleware into an existing Express server, or run as a standalone server on a different port than your webserver.
3. Use Transloadit's hosted Companion server. This requires a paid subscription (but also gets you hosted tus servers for upload handling, and our encoding platform, all of which are globally distributed).
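For option 2, the shape of a self-hosted Companion configuration looks roughly like the object below. This is a sketch from memory of the 1.0-era docs, so treat every key name as something to double-check against uppy.io; all values are placeholders:

```javascript
// Options you'd hand to a self-hosted Companion instance (sketch; verify
// key names against the Companion docs -- values here are placeholders).
const companionOptions = {
  providerOptions: {
    drive:   { key: 'GOOGLE_KEY',  secret: 'GOOGLE_SECRET' },
    dropbox: { key: 'DROPBOX_KEY', secret: 'DROPBOX_SECRET' },
  },
  server: { host: 'uploads.example.com', protocol: 'https' },
  filePath: '/tmp',                       // where Companion buffers provider downloads
  secret: 'REPLACE_WITH_RANDOM_SECRET',   // used to sign auth tokens
};

module.exports = companionOptions;
```

You'd pass this to Companion's Express middleware (or to the standalone server via environment variables) so the OAuth keys and buffering directory are yours rather than the demo server's.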
When I see sample code, I generally assume that I can copy and paste it onto my own site. So a comment "You can try this code on OUR site, but if you want to use it on YOURS, you need to take care of your own companion hosting" in your sample code would be helpful.
I realize that this sentence (while clear, accurate, and quite reasonable) takes a bit of polish to turn into a positive message which won't scare away potential users ;-)
Alternatively, maybe you could set up your demo server to accept requests from randos for exploratory purposes, but with a quota set low enough that it won't be abused for production?
We’re not using much of that part yet, but it’s good to know that this is being actively developed
Really loving Shrine too!
Uppy looks like a great replacement.
I now open up Waterfox on those occasions when I need to use FireFTP.
I'll definitely give this a try if it can do the job instead.
I don't know if GCS is built into uppy at present (contrary to another comment, I don't believe GCS could be called "S3-compatible"), but I suspect there's a way to use uppy hooks to add it. As long as GCS also allows storage locations that allow upload only to signed time-limited URLs, the same approach could be used.
Where you put the file on the cloud storage and what you do with it is, I believe, not uppy's concern. But if you are for instance using the Ruby shrine file attachment library (which is built out with examples to support uppy, and direct-to-S3, as a use case): shrine strongly encourages a two-stage/two-location flow, where (e.g.) any front-end-uploaded files go into a temporary 'cache' storage, which on S3 you might pair with lifecycle rules that automatically delete anything older than X. The files are only moved to more permanent storage on some other event.
Once you get into it, it turns out all the concerns of file handling can get pretty complicated. But having the front end upload directly to cloud storage can be a pretty great thing, depending on your back-end architecture: it prevents your actual app 'worker' processes/threads from being tied up handling a file upload, dealing with slow clients, etc. It can make proper sizing and scaling of your back end a lot more straightforward and resource-limited.
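The "delete anything older than X from the cache storage" step mentioned above maps directly onto an S3 lifecycle rule. A sketch of the lifecycle configuration JSON (the rule ID, `cache/` prefix, and 1-day expiry are illustrative choices, not anything shrine mandates):

```json
{
  "Rules": [
    {
      "ID": "expire-cache",
      "Filter": { "Prefix": "cache/" },
      "Status": "Enabled",
      "Expiration": { "Days": 1 }
    }
  ]
}
```

Applied to the bucket (e.g. via `aws s3api put-bucket-lifecycle-configuration`), S3 then garbage-collects abandoned front-end uploads on its own, and only files your app explicitly promotes out of `cache/` survive.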
1) You allow append-only access for the world, maybe in combination with an expiry policy. Indeed only useful for a few use cases I'd say
2) You deploy signing of requests, and you only sign for users who are logged in or otherwise match criteria important to your app. A bit more hassle, and it still requires server-side code (whether traditionally hosted or 'serverless'), but at least your servers aren't receiving the actual uploads, which removes a potential SPOF and bottleneck.
That said, I'm not sure how serious you are about handling file uploads, but uploading directly to buckets often means uploading to a single region (on AWS, a bucket may be hosted in us-east-1, for instance, meaning high latency for folks in e.g. Australia). This may or may not be problematic for your use case, but it did bring us complaints when we had that.
You can use https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acc...
S3 Transfer Acceleration uses CloudFront's distributed edge locations. As data arrives at an edge location, it is routed to Amazon S3 over an optimized network path. This costs more money, though.
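For the record, turning this on is a one-time bucket setting plus a different endpoint hostname on the client side. A sketch (bucket name is a placeholder):

```shell
# Enable transfer acceleration on the bucket (one-time setup).
aws s3api put-bucket-accelerate-configuration \
  --bucket my-example-bucket \
  --accelerate-configuration Status=Enabled

# Clients then upload via the edge-optimized hostname instead of the
# regional one:
#   https://my-example-bucket.s3-accelerate.amazonaws.com
```

In the JS SDK this corresponds to constructing the S3 client with `useAccelerateEndpoint: true`.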
It's probably a happy problem if you end up worrying about S3 as a very DDoS-able part of your system.
Running up hosting bills is a scenario that can be addressed with various technical means (like a sibling comment explains). Many people seem to judge the damage × probability as too small to put a lot of preemptive effort into it. It's basically a question of how much damage would be done before your monitoring catches it. AWS has also been known to "forgive" bills caused by malicious attackers in some situations.
But maybe you're talking about something else? Happy to dive in deeper
It would be nice if this can function as a point and click upload point and also a scriptable upload point for savvier users.
Just a thought!
(and yes, we'll get to all of them :)
I see there's ability to upload directly to S3, any ability to upload direct to Azure Blob store?
npm i && npm run bootstrap && npm run dev:with-companion
One big differentiator from other open source uploaders is that we go above and beyond to get higher degrees of reliability, to the point of ridiculousness, maybe:
- We use https://tus.io under the hood for resumability to make file uploads survive bad network conditions (train enters a tunnel, you walk to the basement, share something from a club, switch cell towers, walk in range of wifi, have spotty wifi, are in rural areas). Tus is an open standard with many implementations.
- Our 'Golden Retriever' plugin can recover files after a browser crash or accidental navigate-away (full post + video from our hacking trip where we built this: https://uppy.io/blog/2017/07/golden-retriever/)
The reason we obsess over this is that Transloadit (our company) was getting complaints about files not making it to our encoding platform, even though the platform was stable. We realized one out of maybe every thousand uploads just fails due to bad network conditions. Something you don't notice when you either have a very stable connection or don't upload many files. But it's wild out there, and if you handle 150,000 uploads a day, you can see how many complaints you might end up receiving. So we got a bit frustrated with the state of uploading (downloading had been resumable since HTTP/1.1), and that's how we ended up creating https://tus.io (and then Uppy).
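The core idea behind tus resumability can be sketched in a few lines: the server tracks a byte offset per upload, and after a dropped connection the client asks where it left off instead of re-sending the whole file (in the real protocol that's a HEAD request returning an `Upload-Offset` header, answered with a PATCH from that offset). A toy in-memory simulation of that logic, not the real protocol stack:

```javascript
// Toy sketch of tus-style resumption: the server remembers how many bytes
// of each upload it has; the client resumes from that offset on reconnect.
class TusLikeServer {
  constructor() { this.uploads = new Map(); }
  create(id, length) { this.uploads.set(id, { length, data: Buffer.alloc(0) }); }
  head(id) { return this.uploads.get(id).data.length; }          // current offset
  patch(id, offset, chunk) {
    const u = this.uploads.get(id);
    if (offset !== u.data.length) throw new Error('409: offset mismatch');
    u.data = Buffer.concat([u.data, chunk]);
    return u.data.length;
  }
}

const server = new TusLikeServer();
const file = Buffer.from('hello resumable world');
server.create('upload-1', file.length);

// First attempt: the connection "drops" after 5 bytes.
server.patch('upload-1', 0, file.slice(0, 5));

// On reconnect, the client asks where to resume instead of starting over.
const offset = server.head('upload-1');
server.patch('upload-1', offset, file.slice(offset));

console.log(server.head('upload-1') === file.length); // true: upload complete
```

The real protocol adds checksums, expiration, and parallel-chunk extensions on top, but the offset handshake above is the part that makes a tunnel or tower switch a pause rather than a restart.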
Some differences between the uploader widgets themselves:
- Uppy is open source and can be used with your own back-end. Free as in liberty & pizza
- Uppy is vanilla JS with support for React/Native
- Uppy has resumability (via the open standard https://tus.io) and can recover from browser crashes / accidental navigate-aways. Uploads will just continue where they broke off
- Uppy supports _fewer_ external sources (e.g. 1.0 comes with support for Dropbox, Instagram, Google Drive, but we don't yet have support for e.g. Facebook, or Google Photos)
Some differences between the companies/back-ends (should you optionally use Uppy+Transloadit to handle the uploading, fetching from e.g. Instagram, and encoding):
- Transloadit offers more encoding features (Uploadcare is making good progress tho, they recently added video encoding for instance). Those features can be combined in workflows. So you leave a JSON recipe ("Template") with us that says: for every video, take out thumbnails, watermark some of those, detect faces in others, store those separately, all in one 'job', or as we like to call it, an "Assembly", because it can be a chain of virtually infinite jobs that take the output of other jobs as input for their own
- Transloadit does _not_ offer a CDN, instead only exports results to storage/buckets you own (not sure if we'll add this)
Just a _few_ differences; there are more. But it seems they are getting fewer, and maybe in 5 years we'll be identical companies : ) but yeah, there's some time left until then in which we can still afford pleasantries :D
- https://news.ycombinator.com/item?id=19756159 (same ID as prev. post by same user)
The restriction lasts for about a year, as the FAQ explains. Then it's ok for it to appear again.
This post was actually pretty interesting to read and I'm glad it finally made it to the front page.