Nikolai asked yesterday, “Is Google Spreadsheets becoming the new Microsoft Access?” You know, the visual database and form designer. Well, for the web and for pure mashup apps, which are built mainly or entirely on the web’s infrastructure (cloud services, so to speak), one could answer: “Yes, it can cover at least the database part.” It may be poor in features, but it’s a nice start.
Well, all this buzzy stuff aside, read on to see how to use JSONP with a Google spreadsheet, even though it’s not officially provided.

As a front-end engineer and JavaScript lover, I always think twice before setting up a server system with a database and some Python or PHP code in between that provides me with the data I need. That is painful, actually too much work for a lazy front-end dev like me; I would also have to do the permission handling, let alone the maintenance. Of course there are projects where it is not possible to store everything in the cloud, which is the solution I would favor over the aforementioned server infrastructure. (Let me just get this straight: by storing in the cloud I mean storing my data on mashable online services, which I can access via JavaScript, as you will see in a second.)
Ideally JSONP is the solution of choice for getting the data, but sometimes you don’t have that choice and have to look around for other ways.


The easiest and probably most relevant reason to use JSONP (read here how it started) is that it is native JavaScript. There is no mapping from one format to another required, so you get native performance. Second, it is just a simple technique for cross-domain communication. And you don’t have to set up any complicated event handling to get informed when the data has loaded; it’s all handled for you.
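To make that concrete, here is a minimal sketch of the mechanism (the function and data names are made up for illustration): a JSONP response is not plain data but JavaScript that calls a callback you defined beforehand. In a browser the response would arrive via a dynamically injected script tag; here it is simulated as a string and evaluated.

```javascript
// A JSONP response is code, not data: it calls a callback you define up front.
// In the browser the string below would arrive via an injected <script> tag;
// evaluating it here just demonstrates the mechanism.
var received = null;
function handleData(data) {   // hypothetical callback name
    received = data;
}

var jsonpResponse = 'handleData({"projects": ["dojango", "dools"]})';
eval(jsonpResponse);          // the "load" step: the script runs and calls us

console.log(received.projects.join(", ")); // → dojango, dools
```

No XMLHttpRequest, no same-origin restriction, no readyState polling: the moment the script executes, your callback has the data.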

Google spreadsheet

Let’s take this tiny spreadsheet; it contains the data I want to request, e.g. for showing on a website.

This spreadsheet is really easy to maintain, no geek knowledge required, and that is the great thing about it. If it can serve as the data source for a web service or widget, it makes maintenance so much easier, and you can tell your customer: “just change the data as you like”. Not only is changing the data easy; the permission handling is also just a couple of clicks away. The data is hosted on one of the best-working networks, and unless you run a high-traffic site you can be pretty sure it will always be served at awesome speed. Enough reasons? I think so.

How to get the data

Google Docs unfortunately offers nothing really nice in terms of JSONP. Yes, you can have the Atom feed of the spreadsheet served as JSONP, but look at the data format (better don’t :-)); it really is not what you expect when working with JSON. The data is flattened in there, so you have to reapply the logic the spreadsheet already contains in order to reformat the data, and maybe convert it and take it apart again. Just believe me, it’s not what one expects. If you like the pain, try it out.
And then there is the CSV export of the data, which I have always loved for its simplicity. Just add &output=csv to the URL and you get your spreadsheet as CSV. Very nice. Unfortunately, trying to load that data via a script tag of course fails:

CSV loading fails

Well, trying doesn’t cost a dime. Of course I tried this first, even though it was clear that it wouldn’t work.
I didn’t want to give up on this nice format: it has all the goodies, the rows and columns of the spreadsheet, no overhead, it is easily human-readable (which is always nice for finding errors in the parsing process), and it just maps so well to JSON. That’s the key: it maps to JSON, so I just needed to get that done. And since I didn’t want to add the overhead of passing the data through a Yahoo! Pipe just to get a JSONP format, I thought: why not make the CSV itself look like JSONP? The two formats are close enough.


After trying a little, looking at the result Google spat out and fixing it again, I had the solution (the spreadsheet below). It’s pretty easy, actually.

The CSV format is already comma-separated, which is kind of halfway to JSON. At the least, the commas make sure that we can distinguish the columns from one another. I ended up adding the callback name to the first cell on the left, like this: __callbackFunctionName__(' . The opening parenthesis and the apostrophe start the function call that will be made inside the script tag you load it into. You might now guess that every cell has to be surrounded by apostrophes, but that would just be too much hassle and too error-prone, so I decided to simply add one column before and one behind the data; when parsing the CSV those are dismissed, since they contain no headline and therefore have no key to map the data to.
The last line is a little special: it contains an extra cell with 0); which closes the function call. This cell is actually only there to create an extra column, in which every other line holds just a space, so that we get the comma at the end of every line. In the end our JSONPed CSV (the human-readable version here) looks just like this:

__cfn__(',name,url,status,irc, ',  
 ',dojango,,released,, ',  
 ',dojodocs,,preview,, ',  
 ',dools,,in progress,, ',  
 ',xray,,alpha,, ',  
 ',d.js,,unreleased,, ',0);
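Just to illustrate what happens when the browser executes that payload: every line arrives as one string argument to the callback, and the trailing 0 is an extra last argument that can be ignored. A sketch, with eval standing in for the script tag:

```javascript
// Each CSV line becomes one string argument; the trailing 0 just closes the call.
var rows = null;
function __cfn__() {
    // Drop the trailing 0 argument, keep only the row strings.
    rows = Array.prototype.slice.call(arguments, 0, arguments.length - 1);
}

// A shortened version of the payload above, as it would execute in a <script> tag:
eval("__cfn__(',name,url,status,irc, '," +
     "',dojango,,released,, '," +
     "',dools,,in progress,, ',0);");

console.log(rows.length); // → 3 (one header row plus two data rows)
```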

The client

Every line is now a string parameter for our callback function. Simply let dojo’s script I/O load the URL and hook onto the callback function, and we have our spreadsheet data available in JavaScript on any site. A little after-parsing and we’re done.
The dojo source code may look like this:


    dojo.require("dojo.io.script");

    var url = ""; // the spreadsheet's CSV export URL goes here
    // Define the callback that the JSONPed CSV will call.
    window.__cfn__ = function(){
        var args = dojo._toArray(arguments),
            colNames = args[0].split(",").slice(1, -1),
            data = [];
        for (var i=1, l=args.length-1, d; i<l; i++){
            // Convert each row into an object, keyed by the column names.
            // (Might be an esoteric step, but correct this way.)
            d = {};
            dojo.forEach(args[i].split(",").slice(1, -1), function(item, index){
                // Do some CSV unescaping.
                item = item.replace(/""/g, '"');
                if (item.charAt(0)=='"' && item.charAt(item.length-1)=='"'){
                    item = item.substr(1, item.length-2);
                }
                d[colNames[index]] = item;
            });
            data.push(d);
        }
        dojo.query(".theProjects")[0].innerHTML = dojo.toJson(data, true);
    };
    dojo.io.script.get({url: url});

You can download and try out a test page here.
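For illustration, here is the same after-parsing sketched without dojo, which also makes it easy to try outside the browser (parseRows is a made-up name; the input matches the JSONPed CSV rows shown above):

```javascript
// Framework-free sketch of the callback's parsing logic: the first argument
// holds the column names, the rest hold the rows, and the first/last dummy
// columns are sliced away.
function parseRows(args) {
    var colNames = args[0].split(",").slice(1, -1),
        data = [];
    for (var i = 1; i < args.length; i++) {
        var d = {};
        args[i].split(",").slice(1, -1).forEach(function (item, index) {
            // Undo the CSV quoting: "" -> ", then strip surrounding quotes.
            item = item.replace(/""/g, '"');
            if (item.charAt(0) === '"' && item.charAt(item.length - 1) === '"') {
                item = item.substr(1, item.length - 2);
            }
            d[colNames[index]] = item;
        });
        data.push(d);
    }
    return data;
}

var data = parseRows([",name,url,status,irc, ", ",dojango,,released,, "]);
console.log(data[0].status); // → released
```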

Of course, there are a couple of drawbacks.

  1. You have to make the spreadsheet adhere to a certain layout (the first column, the last column, etc.), and one error in there may break it entirely, so there is not much fault tolerance. Additionally, you must not use newlines inside the spreadsheet’s cells!
  2. The callback name is fixed, though if you are using this sheet in only one app, that should be OK.

But look at the advantages: you get permission handling for editors for free, your customer can edit the data without your involvement, and so on. As always, when a use case calls for this solution you can pull it out of your sleeve.
Oh, and the source code above still doesn’t handle escaped commas, I guess, so there are places where this can be improved.
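If you wanted to close that gap, the naive split(",") would need to be replaced by a small tokenizer that respects quoted cells. A rough sketch (splitCsvLine is a made-up helper, not part of the code above):

```javascript
// Split one CSV line while respecting quoted cells ("a, b" stays one cell).
function splitCsvLine(line) {
    var cells = [], cell = "", inQuotes = false;
    for (var i = 0; i < line.length; i++) {
        var c = line.charAt(i);
        if (c === '"') {
            inQuotes = !inQuotes; // an escaped "" toggles twice: no net change
            cell += c;            // keep the quote for the later unescaping step
        } else if (c === "," && !inQuotes) {
            cells.push(cell);     // cell boundary only outside of quotes
            cell = "";
        } else {
            cell += c;
        }
    }
    cells.push(cell);
    return cells;
}

console.log(splitCsvLine(',dojango,"released, stable", ')[2]); // → "released, stable"
```

The quotes are deliberately left in place, so the existing unquoting code (the ""-replace and quote-stripping above) keeps working unchanged.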

Further thinking

Another way I thought about was simply surrounding the CSV with /* and */, which would relieve us of any JavaScript parsing errors, since everything would be a comment. But my quick test using innerHTML didn’t work, and innerHTML is not that reliable anyway, so it’s probably better I didn’t waste more time on it. Maybe loading it into a CSS tag would enable us to read the data back … I don’t know, those are just random thoughts.
Another thought Nikolai had was “A spreadsheet-based CMS”. Of course, why not. You can basically store any kind of data in the spreadsheet, even HTML if you really would want to.

This can spin off more discussions and interesting ideas; let’s see what the future brings.
The web is becoming more fun every day!