I've used Google Spreadsheet as an "API" before (manually, not using this wrapper), and it is definitely not made to be used as such. Under heavy load, it slows down or times out, and even under the best circumstances it's not exactly fast.
It's great in a pinch. My use-case was a spreadsheet that others in my organization wanted to edit, then we'd pull the data onto a web page and display nicely. It worked, but eventually we had to set up a secondary server to cache the spreadsheet result because pulling directly from Google in realtime was too unreliable.
That'll let you use a Google spreadsheet as a datasource and expose the data to your liquid templates.
Combine it with something like https://www.netlify.com that'll run the builds for you and let you trigger builds via webhooks, and you have a Google spreadsheet based publishing engine where the final sites are completely static and live straight on a CDN.
Used it for one site where we used the scripting options of Google spreadsheets to add a "Publish Now" image button to the sheet, so in the end the user could edit the sheet and then press "Publish" to trigger a build+deploy.
If the users are already used to spreadsheets, it can be a really handy little anti-CMS :)
Admittedly naive question: if the advantage of having Google Sheets store the data is that you can use the Sheets editor when you want to, could something like Sheetsu store the data separately, much faster, for the API, and then just sync any changes between the Google Spreadsheet and their data? So you could still access via Google when needed, but these problems don't matter?
I made a node.js package that does exactly this: a local cache in JSON with a few hooks for data modification, and you can control when to update it to avoid timeouts [1]
Then we used it as the backend for a blog for my University team, as we were already using Drive, with each entry being a Google Docs document [2]
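Not the package from [1] itself, but a rough sketch of the general idea, assuming a hypothetical fetchSheetRows() that wraps whatever Sheets/Drive client you use:

    // Rough sketch of the idea, not the package linked above.
    // fetchSheetRows is a placeholder for whatever Sheets/Drive client you use.
    const fs = require('fs');

    const CACHE_FILE = 'sheet-cache.json';
    const hooks = []; // functions that transform rows before they are cached

    function addHook(fn) { hooks.push(fn); }

    // Called explicitly (cron job, admin route, deploy script), so you control
    // when Google actually gets hit and avoid timeouts on normal requests.
    async function refreshCache(fetchSheetRows) {
      let rows = await fetchSheetRows();
      for (const hook of hooks) rows = hook(rows);
      fs.writeFileSync(CACHE_FILE, JSON.stringify(rows, null, 2));
      return rows;
    }

    // Everything else reads the local file and never talks to Google directly.
    function readCache() {
      return JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8'));
    }

    module.exports = { addHook, refreshCache, readCache };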
Thanks for sharing. It's clear this approach works for small datasets; if you want something more reliable under heavy load, just go for a proper database. It might work well for prototypes, though.
I also implemented something like this very recently [1], and I would recommend building in caching from the start, as I was consistently getting throttled by the Google API even in development (just from live-reloading the page as I styled it).
Secondly, while the load-the-page-then-fetch-the-data-with-JS approach definitely works, I found that just rendering the data into the HTML on the server side was easier to cache and faster to load.
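A minimal sketch of that server-side approach (the Express app, the fetchSheetRows stub, and the column names are all assumptions, not anything from [1]):

    // Minimal sketch with Express; fetchSheetRows() below is a stub standing in
    // for however you actually pull the sheet (column names are made up too).
    const express = require('express');
    const app = express();

    async function fetchSheetRows() {
      // placeholder: replace with your real Google Sheets fetch
      return [{ name: 'example', value: '42' }];
    }

    let cache = { html: null, fetchedAt: 0 };
    const TTL_MS = 5 * 60 * 1000; // hit Google at most every 5 minutes

    app.get('/', async (req, res) => {
      if (!cache.html || Date.now() - cache.fetchedAt > TTL_MS) {
        const rows = await fetchSheetRows();
        const body = rows
          .map(r => `<tr><td>${r.name}</td><td>${r.value}</td></tr>`)
          .join('');
        cache = { html: `<table>${body}</table>`, fetchedAt: Date.now() };
      }
      res.send(cache.html); // data is baked into the HTML, nothing to fetch client-side
    });

    app.listen(3000);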
There are a bunch of 'fork-n-go' GitHub Jekyll sample applications that wrap a webpage around a Google Spreadsheet datasource.
Here is a sample (from a GitHub employee) that I've used in the past. Source: https://github.com/jlord/hack-spots and live site: jlord.us/hack-spots/#info - not sure how they are caching the spreadsheet, but I've never seen it fail to perform (not sure about concurrent users or whatever, though).
I've been looking at workflows like this, but the problem is being completely cut off from editing or testing locally when offline.
That is, it would be great to keep my data in a Google sheet and transform it into a static JSON file that lives in my app source, but then if I need to work offline, all I can do is edit the JSON and remember to copy the changes back into the sheet later.
I don't suppose anyone has found a workflow that solves this?
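A sketch of the build-step half of that workflow, with made-up names (fetchSheetRows is a placeholder); it doesn't solve the sync-back problem, it just keeps offline builds working by falling back to the committed JSON:

    // build-data.js -- run before the static build.
    const fs = require('fs');

    // Placeholder: replace with however you pull the sheet (Drive API, Tabletop, ...).
    async function fetchSheetRows() {
      throw new Error('fetchSheetRows not wired up yet');
    }

    async function main() {
      try {
        const rows = await fetchSheetRows();
        fs.writeFileSync('src/data.json', JSON.stringify(rows, null, 2));
        console.log('src/data.json refreshed from the sheet');
      } catch (err) {
        // Offline or throttled: the build keeps using the last committed data.json.
        console.warn('Could not reach the sheet, using committed data.json:', err.message);
      }
    }

    main();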
Google Spreadsheets is awesome as a quick-and-dirty CMS. I use it all the time at the newspaper I work for, in combination with Tabletop (https://github.com/jsoma/tabletop). The only thing I lack right now is some kind of library that could cache those sheets using Redis for speed.
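A sketch of what that caching layer could look like with node-redis (v4-style API assumed; fetchSheet stands in for the Tabletop call):

    const { createClient } = require('redis');

    const redis = createClient();
    const TTL_SECONDS = 300; // serve the cached copy for up to 5 minutes

    // fetchSheet(key) is a placeholder for the Tabletop (or other) fetch.
    async function getSheetCached(sheetKey, fetchSheet) {
      if (!redis.isOpen) await redis.connect();

      const cached = await redis.get(`sheet:${sheetKey}`);
      if (cached) return JSON.parse(cached);

      const data = await fetchSheet(sheetKey);
      await redis.set(`sheet:${sheetKey}`, JSON.stringify(data), { EX: TTL_SECONDS });
      return data;
    }

    module.exports = { getSheetCached };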
That's nice - I recently made a small project that wraps an open API around GitHub Gists [1] for the same use case (a small, easily browsable open database), but your project seems much more interesting - spreadsheets are a very interesting way to solve this. Good job.
I would really prefer something like this to be a local library or similar; adding another remote service in between negatively affects security and reliability, especially in this case, where there is no HTTPS yet.
Used this approach at a previous company and was amazed at first. Wondered why it wasn't more commonly used. A poor man's API! What's not to like? Then I found out why: when put under any kind of load it completely falls over and stops sending data at all. It would also randomly throw security/authentication errors that none of our devs noticed, because they only happened on Windows and older versions of OS X (we were a Linux shop). It's a fun and novel approach, but it should never be used in a production application that needs to scale.
You don't have to implement the Google API or use any lib to talk to it. Just paste a link and get an API endpoint with docs, which is easier to use. Much easier than creating your app in the Google Console, connecting with it, etc. Especially if you are not a programmer.
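For illustration, reading from the generated endpoint is just a plain GET (the API id below is made up; the sort/order parameters are the ones shown elsewhere in this thread, and the exact response shape is whatever the generated docs say):

    // Works in the browser or in any Node version with a global fetch.
    const url = 'http://sheetsu.com/apis/<your-api-id>?sort=Description&order=asc';

    fetch(url)
      .then(res => res.json())
      .then(data => console.log(data))   // rows from the sheet, per the generated docs
      .catch(err => console.error(err));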
The top part of the code is separated from the bottom plumbing, and is sprinkled with comments in Dutch for my students to edit ('8th grade', Dutch 2VWO).
I actually used it more as an example and inspiration to talk about databases in general. The group is a mix of students from the previous year; by setting the bar high and encouraging them to change the parts they knew something about, I could gauge their individual skill level a bit. Pink product listings ensued. JSBin really is an awesome tool I couldn't do without.
Great! My blog [1] uses a static site generator called Middleman [2], and I was looking for a way to share 'quotes' in the website's subtitle. This seems to be the best solution! I was about to write a simple Sinatra API to run on localhost, but this is a way better solution for my use case!
UPDATE: A good use case is using the HTML form to store subscribe/notify-me emails for landing pages. I was about to build a YAML file or use SQLite3 to store emails submitted via a form, but this is so much more convenient.
Ah man, this is perfect! Finally a way to get simple servers running with very little hassle.
One major feature I would like is the ability to specify cells for I/O. E.g. in some sort of "my api" console I could say "/custom_api, {stuff:A5, things:A6}, B2". Then requests to GET /custom_api would plug the value of "stuff" into A5 and "things" into A6, and then respond with whatever is in B2.
Once more useful features are up, the ability to disable parts of the API (e.g. dumping the whole spreadsheet) would be necessary.
Obviously you wouldn't build a large app with this, but I could see many traditional businesses using it for more than just prototyping.
You can do that with SpreadServe[1] now. Instead of Google docs it uses real Excel .xls spreadsheets and supports VBA and XLL addins. It turns your spreadsheets into async scriptable servers. There are Python & Java APIs.
Google Spreadsheets already has a native REST API. I use it to feed the charts in http://pittsburghtoday.org/
The key to getting good performance is to not put too much data into the sheets. Low thousands of rows is not a problem.
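For reference, a sketch of the kind of call this involves for a sheet that has been published to the web; the legacy list-feed URL shape and the gsx$ field prefix below are from memory, so double-check them against Google's docs before relying on them:

    // Assumption: the spreadsheet is "published to the web" and the key is public.
    const key = '<spreadsheet-key>'; // placeholder
    const url = `https://spreadsheets.google.com/feeds/list/${key}/od6/public/values?alt=json`;

    fetch(url)
      .then(res => res.json())
      .then(feed => {
        // Each entry is a row; column values live under gsx$<lowercased header>.
        const rows = feed.feed.entry;
        console.log(rows.length, 'rows loaded');
      });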
Nice way to bypass zapier.com for some simple stuff, because it can get expensive quickly. I just started using it to store a log of supersaas.com appointments in a Google spreadsheet; works great.
I see that you request access to 'view and manage' the spreadsheets and files in my Google Drive? Is this for each spreadsheet/file or just the one I've linked to in the URL?
This is great! I want to build an app that can insert transactions on the go into my existing budgeting spreadsheet, and this really takes away a lot of the dirty work.
So I threw this into a website to try and load some data into a table, and get:
XMLHttpRequest cannot load http://sheetsu.com/apis/d54d6315?sort=Description&order=asc. No 'Access-Control-Allow-Origin' header is present on the requested resource. How do I handle that?
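One common workaround, sketched here under the assumption that you can run a small Node proxy of your own, is to fetch the endpoint server-side and re-serve it with the CORS header added:

    const http = require('http');

    // Tiny proxy: fetch the Sheetsu endpoint server-side, then re-serve the body
    // with an Access-Control-Allow-Origin header so the browser accepts it.
    http.createServer(async (req, res) => {
      try {
        const upstream = await fetch('http://sheetsu.com/apis/d54d6315?sort=Description&order=asc');
        const body = await upstream.text();
        res.writeHead(200, {
          'Content-Type': 'application/json',
          'Access-Control-Allow-Origin': '*',
        });
        res.end(body);
      } catch (err) {
        res.writeHead(502);
        res.end('upstream error');
      }
    }).listen(8080);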
An alternative is also http://calcfusion.com/ (disclaimer: I work there and we just launched). It uses xls spreadsheets and is more oriented towards business rules and computations. There are some nifty features like auditing and versioning, automatic HTML form building, etc.
I haven't thought about it yet, but right now when you put =SUM(A1:A10) into a cell and GET (or POST) it via the API, you get the result (the sum of rows 1-10 in column A). So it'll work with sheets the same way.
E.g. when you put =Sheet2!A1 into a cell and get that via the API, you get the value of cell A1 in Sheet2.
Slightly unrelated, but isn't there a super cost-effective way of quickly whipping up a CRUD API with minimal configuration? Often I implement fairly simple backends in Java or Go that might as well have been "configured" from the beginning, instead of implemented each time.
If you're into node, you might want to check out json-server. A single json file with one sample entity is the basis of a full API. Check it out at https://github.com/typicode/json-server
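From memory, the programmatic usage is roughly the snippet below, where db.json is just a JSON file with a sample entity (e.g. { "posts": [{ "id": 1, "title": "hello" }] }); double-check against the README:

    const jsonServer = require('json-server');

    const server = jsonServer.create();
    const router = jsonServer.router('db.json');   // your single JSON file
    const middlewares = jsonServer.defaults();

    server.use(middlewares);
    server.use(router);
    server.listen(3000, () => {
      console.log('JSON Server is running on http://localhost:3000');
    });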
Whoa, looks neat and honestly sounds too good to be true.
>Intercept XMLHttpRequest to fake a REST server based on JSON data. Use it on top of Sinon.js to test JavaScript REST clients on the browser side (e.g. single page apps) without a server.
I'll check it out this weekend, but man... thanks for sharing!
Is there a way to get data from each sheet in a workbook? I'd love to use this, it is so simple. But it only seems to get the first sheet in my workbook.
Awesome! FWIW, my needs don't require any sort of "relational" concept as some comments have suggested, I just need to be able to do all existing operations on each sheet in the workbook. I guess being able to get a list of the sheets in the workbook might be useful, as well, or get a sheet by name, but for my use case I already know the name.
But Google throttles requests to popular docs, right? I mean, the "Wow, this file seems very popular" thingy... Is there any way to calculate it? Maybe you could run JMeter [0] or bench-rest [1] against it so we can have some numbers for orientation?
Anyway, it's really handy for simple and non-traffic-intensive projects. Well done!
The API doesn't look very RESTful to me. It is basically just using HTTP as a JSON transport. The JSON contains a "status" code which appears to loosely follow the HTTP status code numbering. But if that was the intent, then why not just use HTTP status codes like a true REST endpoint?
Otherwise, a neat idea. It needs an Office 365 connector too.