

Announcing my bootstrapped startup: RestBackup - mleonhard
http://www.restbackup.com/

======
nroach
It looks interesting, but at this point it feels more like a "web app" than a
"startup". Granted, the two are often interchangeable these days, but to me
the difference is one of formality and commitment.

For example:

What happens to my data if you decide that this isn't fun anymore? What
happens if you get sign-ups that store a bunch of data but don't pay their
bills? What happens if you get sued, are you personally liable or is this an
entity? What are the terms of use? What kind of support is available?

From your site, it appears that you are in the process of solving these
questions, but there's likely to be a lot more road to cover before it goes
from web app to startup in my mind.

Good luck!

~~~
mleonhard
Yes, these are details that every new company must attend to. I will
accomplish these in the coming months. Let me answer a couple of your
questions:

1\. I'm doing this to make money. I also want the satisfaction of creating
something of value to folks all over the world. It's high time that our
software backed itself up automatically.

2\. When a customer uploads a file to RestBackup, their account is billed for
the cost of storing the file for the full retention period. If they don't pay
their bill, their account will be deactivated, but their paid data will be
kept. Unpaid data will be handled on a case-by-case basis.

3\. I will choose a lawyer this month to iron out the terms of service.
Incorporation will occur before the paid Beta-test program begins.

There's an even more important question: Am I building the right product? I
need early adopters to help answer this question. Thanks for your feedback and
I hope to see you in the RestBackup alpha test!

------
dedward
From a technical/startup point of view - what value are you adding, and what
are you doing that someone else can't duplicate with a minimum of effort?

Right now this is a fairly transparent re-use of S3 with some clever features
on top - I can see the appeal - but I'm thinking "Yeah, we could rig that up
over the weekend and do the same thing here.... no need to pay double the
price for S3 storage when a few hours of in-house work will take care of it."

Also - as to "never delete": "Never" is an awfully long time. Mistakes are
made, and while "never" might seem like a safety net, there also might be a
time when I _need_ to delete data that's been stored offsite - for legal
reasons or because someone screwed up.

It looks like something targeted at technically-savvy users, and those users
are probably equally capable of doing this themselves with S3 or otherwise.

~~~
ryanhuff
Regarding your point about this being something easily duplicated, I think the
point of this service is that it frees you of the need to worry about building
and maintaining a non-core activity. People could also easily clean their own
house and cut their own grass, but millions of people hire others to do the
work.

~~~
robryan
I don't think this applies in this case; those activities take time at
constant intervals. This saves some time once, in implementation, for a
constant additional cost.

------
jluxenberg
What does this service do that can't be easily done by a client-side library?
Also, S3 already has a REST API (see
<http://mashupguide.net/1.0/html/ch16s05.xhtml> ), you've just eliminated the
URL signing bit.

~~~
mleonhard
RestBackup keeps every application's data separate. Data uploaded through an
access url can be downloaded only through the same access url. This is super
useful for commercial software, since each user and license can have its own
access url. To accomplish the same thing with S3, one would need to run a
request signing service.
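To make the access-url idea concrete, here is a minimal stdlib sketch (not
RestBackup's actual scheme; the secret, domain, and license ids are all
hypothetical) of deriving a stable per-license capability url - roughly the
piece a vendor would otherwise have to build as a request-signing service in
front of S3:

```python
import hmac
import hashlib

SERVICE_SECRET = b"example-secret"  # hypothetical; held only by the service


def access_url(license_id: str) -> str:
    """Derive a per-license access url. Data uploaded through one url can
    only be read back through that same url, so each license is isolated."""
    token = hmac.new(SERVICE_SECRET, license_id.encode(), hashlib.sha256).hexdigest()
    return f"https://backup.example.com/{token}/"


# Each license gets its own stable, unguessable namespace:
print(access_url("license-0001") != access_url("license-0002"))  # True
print(access_url("license-0001") == access_url("license-0001"))  # True
```

The point of the sketch is only that the token is deterministic per license
and unguessable without the secret; a real scheme would also handle
revocation and per-url limits.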

RestBackup also prevents accidental and malicious deletions. The service keeps
your data for the specified period of time no matter what.

~~~
d2viant
Interesting points, so I would suggest emphasizing those more. Right now your
page jumps right into API-level stuff like URLs and curl commands. If I'm
just glancing at your site, nothing stands out as to why it's different from
S3. I need to be sold on it before I ever get to the point of using those
commands.

------
teoruiz
Would you tell us more on how you are going to bootstrap the company? I mean,
with no VC money involved, are you investing yourself in the EC2
instances/space that you need in advance? Do you have a day job?

Good luck!

~~~
mleonhard
I worked at Amazon as an SDE on the SimpleDB team for a few years and saved my
runway money. At Amazon, I learned a lot about running highly scalable web
services. I left Amazon a month ago to build RestBackup full-time. I'm taking
full advantage of AWS to keep expenses low and grow with revenue.

Now I need early adopters to help me turn this into a killer product!

Thanks!

------
DenisM
This is excellent. I love the simplicity of it.

I also love it that files can't be overwritten - clearly you thought this
through! I am currently appending large random strings to my S3 uploads to
avoid overwrites.

However, I would prefer to have the files stored in my own AWS account. I'm
thinking I would create a special bucket, give you the credentials for it and
you would run the EC2 instances required to accept the files and store them
there. You would bill me per URL and for the bandwidth, while Amazon will bill
me for the storage directly.

~~~
mleonhard
I'm glad that you like the API. :) I did think a lot about it, but it still
needs a lot of work.

Why would you like to use your own S3 bucket as the backing store? I would
love to talk with you more about this.

~~~
DenisM
Because I want to retain ownership of the data. Transmission is transient and
I'm ok with outsourcing that to a little-known company, however storage is
lasting and I want to retain meaningful control over it.

I understand that ties you to AWS and restricts your future options, but this
might be a good place for you to start - targeting existing AWS users who need
a little bit of extra automation.

------
IgorPartola
Not sure if I missed it in the API documentation but what happens if I PUT a
file with a file name that already exists?

One thing that drives me nuts about a lot of backup services is that they say
that the data is encrypted. It is important to recognize the two places where
data can be intercepted:

  * In transport, unless it's done over TLS/SSL. Most services provide this.

  * Once stored. This is done very rarely and, if it is done, the provider is
the one that has the encryption key.

With your service you can actually take care of both by simply instructing
your users to encrypt the file before uploading it.
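The principle is simple enough to show in a few lines. This is an
illustration only - a one-time pad with a locally kept random key - not
something to use for real backups, where you'd want an authenticated cipher
(e.g. AES-GCM) from a real crypto library. It just shows that the provider
only ever sees ciphertext:

```python
# Illustration of encrypt-before-upload: the key never leaves the user's
# machine, so the storage provider only receives ciphertext.
import secrets


def encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    key = secrets.token_bytes(len(plaintext))  # key stays with the user
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext  # only the ciphertext is uploaded


def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))


key, blob = encrypt(b"backup contents")
print(decrypt(key, blob))  # b'backup contents'
```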

Another bit of functionality I'd say you are missing is "diff" backups. I
don't necessarily want to upload my entire 300GB multimedia library each day
when I want to back it up (especially if I'm being charged for traffic). I'd
rather upload a small diff.

A very cool use of this technology would be a FUSE plugin for browsing your
storage area.

~~~
mleonhard
Hi Igor, thanks for your comment. If you PUT a file that already exists,
you'll get a 405 Method Not Allowed error. With the right software, you could
keep track of what is already uploaded and compute a diff locally on your
computer. When uploading a diff, you would want to extend the expiration of
the previously uploaded files. I'm open to suggestions for how to add this to
the API.

The httpfs FUSE plugin should work with RestBackup. I can fix it if it doesn't
work. Would you like to sign up for the alpha-test and give it a try?

------
delano
\- I love clean REST APIs so I can clearly see the value in your service over
others (e.g. using S3 directly). But the benefits probably aren't immediately
obvious to most potential customers. Maybe consider a side-by-side comparison
of steps involved.

\- Do you encrypt the data before storing it to S3 or store as-is? I need to
think about it more but assuming you store as-is, I think I prefer that over
Tarsnap's approach. You may want to consider a comparison between this service
as well. (As a side note, I would be using Tarsnap right now but I'm in
Canada)

\- What is the need for charging for Request Processing? I guess that would
prevent storing many small files, but it complicates the pricing model. It
will be easier to communicate your offering if all costs are included in the
data transfer and storage prices.

~~~
mleonhard
I'm glad that you like the API. I'm working on a side-by-side comparison of
RestBackup and other services.

RestBackup receives the data from the customer via HTTPS and submits it to S3
also over HTTPS. So the data is encrypted in-transit but remains in S3 as-is.
This is the minimum viable product. I'm considering encrypted storage as a
feature for the public Beta.

RestBackup has per-request charges for the same reason that S3 and CloudFiles
have them: to avoid losing money on high-volume request loads on small files.
If you need to store many small files, it might be more efficient to package
them up in a tar archive.
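For instance, a client could bundle the small files in memory and upload the
archive as a single request (a sketch; file names and contents here are made
up):

```python
# Sketch: pack many small files into one gzipped tar archive in memory, so
# a backup costs one upload request instead of thousands.
import io
import tarfile


def bundle(files: dict) -> bytes:
    """Pack a {name: content-bytes} mapping into a single .tar.gz blob."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, content in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(content)
            tar.addfile(info, io.BytesIO(content))
    return buf.getvalue()


archive = bundle({"a.txt": b"hello", "b.txt": b"world"})
```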

Thanks for the great comment and the tweet! I hope to see you in the alpha-
test!

------
blasdel
_> Use the HTTP PUT verb to upload your files to the service. Since this is a
REST interface, the file will live at the path you specify. Retrieve it with
an HTTP GET. You cannot replace existing files. PUTs to existing files will
fail with "405 Method Not Allowed"._

PUT does not mean what you think it means. The whole point of PUT is that
you're replacing an existing resource, which gets you idempotence. You're not
supposed to use it to create a resource at the supplied URL, and the whole
point is that you can PUT to the same URL repeatedly.

You want POST. Using PUT is making your service _less_ restful.

~~~
mleonhard
I'm pretty sure I got it right. The HTTP 1.1 RFC says:

> _The PUT method requests that the enclosed entity be stored under the
> supplied Request-URI._

<http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6>

I interpret this to mean that if you PUT to /some/file then you should be able
to GET /some/file and get back the same thing. RestBackup access urls provide
this behavior.

~~~
blasdel
Sorry, you're right about using PUT to create a resource at the supplied URL
-- but you're still breaking the expectation of idempotence: repeating the
same request should have the same result, not 201 the first time and 405
thereafter. If you really want those semantics use POST.

If you're really set on using PUT, you could change your semantics a bit so
that PUTting the original file again results in 200 OK but attempts to
replace it with a different file return 409 Conflict or something.
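Those semantics are easy to pin down in a few lines (a sketch of the
suggested behavior, with an in-memory dict standing in for the real store):

```python
# Sketch of idempotent PUT semantics: re-PUTting identical content succeeds
# with 200, a different body at the same path is refused with 409.
store = {}  # path -> stored bytes; stands in for the real backing store


def put(path: str, body: bytes) -> int:
    if path not in store:
        store[path] = body
        return 201  # Created
    if store[path] == body:
        return 200  # OK: repeating the same request gives the same result
    return 409      # Conflict: refusing to overwrite with different content


print(put("/2011/backup.tar", b"v1"))  # 201
print(put("/2011/backup.tar", b"v1"))  # 200
print(put("/2011/backup.tar", b"v2"))  # 409
```

Comparing a content hash rather than the full body would avoid keeping the
original bytes around just for the equality check.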

------
lsc
what are your thoughts on the high cost of S3 vs. the cost of the
infrastructure involved? Is your plan to just target customers for whom the
premium doesn't matter? or do you plan on using your own infrastructure at
some point?

~~~
mleonhard
I'm starting out using my Amazon work experience to make efficient use of S3,
SimpleDB, and EC2. But I am definitely open to lower-priced products built on
other services and our own infrastructure. I'd love to talk with you about
what your customers need.

~~~
lsc
my customers need

1\. cheap. (these are my customers, right?)

and

2\. Not managed or controlled by me. (the whole /point/ of backups is to cover
them if I really screw it up, so backing up on to my own equipment is of
limited utility.)

Actually, an S3 clone at half the price would be about perfect.

------
zmmmmm
How are you going to compete when there are so many free storage services? A
company I work for is contemplating how to provide this kind of in-application
backup, but with so many free options - Google Docs storage, SkyDrive, DropBox
etc., I wouldn't be likely to opt for something that required payment. What's
going to be your differentiator?

~~~
mleonhard
Thanks a lot for your question. I would love to hear more about your company's
needs.

Free services have their limitations. If you want to ship a product that can
back up out of the box, without any customer configuration, good luck doing
that for free. Save yourself the pain and use RestBackup. My management API
lets you create and manage thousands of access urls, with per-url limits. A
software vendor can use this API to generate an access url for each
application license that they sell. This way each customer's data is separate
and secure and the vendor maintains control of the charges incurred by each
customer. Paid services like RestBackup can offer these kinds of management
features while still remaining economical for commercial software vendors.

I'm not interested in competing with free services. I want to build a product
that folks will pay for.

------
mitjak
Also, what's this with the width of the page being > 1280 pixels? I thought
1024 was still the common standard.

~~~
mleonhard
Can you recommend a good website designer who can fix it for me and make the
site look great?

~~~
mitjak
I found the site design quite OK. The logo needed a bit of spicing up to
convey what you do, but other than that the site was clear in laying out what
your service was all about. The width was the only issue that caught/poked me
in the eye.

------
steveklabnik
I like the MVP feel. No delete is a great example of "we'll build it later,"
in my mind.

~~~
mleonhard
Yeah, my first draft of the API was really complicated. Then I decided to cut
it down to the minimum-viable-product and later add features that my customers
need. I spent a little too much time building the prototype. Looking back on
it, I could have spent more time on this announcement page.

------
zackattack
Cool, way to go. Is this much different from the free open source tool s3cmd?
<http://s3tools.org/s3cmd>

Tell you what I _do_ have a burning need for: automatic backup of my free,
CPanel-based shell account. This includes my databases, my crons, my home
directory... I want to be able to jump webhosts in a jiffy if necessary. I
would definitely pay for that.

I would also just be comforted in knowing that ALL my data is backed up off
site! I'm sure other people have similar concerns. I'd be delighted to pay
$5-$10/month for such a service, and I would hardly use any download
bandwidth, so your variable costs would be low.

Consider it.. please?

~~~
mleonhard
s3cmd is a command-line tool for working with S3. You can use curl, wget, and
other standard HTTP clients to work with RestBackup.

I would love to offer a tool that can automatically back up CPanel to
RestBackup. Ideally, CPanel Inc. would add this feature into CPanel. Can you
run a cron job in your shell account to do the backups? Please join the alpha-
test and I can help you set this up. Submit the feedback form to request your
invitation: <http://www.restbackup.com/#feedback>
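For the cron route, an entry along these lines would do it (the schedule,
paths, and access url below are all placeholders; adjust for whatever your
host actually allows):

```shell
# Hypothetical crontab entry: at 02:30 nightly, dump all databases, then
# stream a gzipped tar of the home directory (including the dump) straight
# to a RestBackup access url with curl. ACCESS_URL is a placeholder.
30 2 * * * mysqldump --all-databases > "$HOME/db.sql" && tar czf - "$HOME" | curl -sS -T - "https://ACCESS_URL.example/backup-$(date +\%F).tar.gz"
```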

