Scraping, scripting, hacking

I just finished my talk at Mashed Library 2009 – an event for librarians wanting to mash and mix their data. My talk was almost definitely a bit overwhelming, judging by the backchannel, so I thought I’d bang out a quick blog post to try and help those I managed to confuse.

My talk was entitled “Scraping, Scripting and Hacking your way to API-less data”, and was intended to give a high-level overview of some of the techniques that can be used to “get at data” on the web when the “nice” options of feeds and APIs aren’t available to you.

The context of the talk was this: almost everything we’re talking about with regard to mashups, visualisations and so on relies on data being available to us. At the cutting edge of Web2 apps, everything has an API, a feed, a developer community. In the world of museums, libraries and government, this just isn’t the case. Data is usually held on-page as html (xhtml if we’re lucky), and programmatic access is nowhere to be found. If we want to use that data, we need to find other ways to get at it.

My slides are here:

[slideshare id=1690990&doc=scrapingscriptinghacking-090707060418-phpapp02]

A few people asked that I provide the URLs I mentioned together with a bit of context. Many of the slides above have links to examples, but here’s a simple list for those who’d prefer that:

Phew. Now I can see why it was slightly overwhelming 🙂

If you love something, set it free

Last week, I had the privilege of being asked to be one of the keynote speakers at a conference in Amsterdam called Kom je ook?. This translates as “Are you coming too?”, and the event describes itself as “a symposium for cultural heritage institutions, theatres and museums”.

I was particularly excited about this one: firstly, my fellow keynoters were Nina Simon (Museum 2.0) and Shelley Bernstein (Community Manager at the Brooklyn Museum) – both very well known and very well respected museum and social web people. Second (if I’m allowed to generalise): “I like the Dutch” – I like their attitude to new media, to innovation and to culture in general. And third: it looked like fun.

Nina talked about “The Participatory Museum” – in particular she focussed on an oft-forgotten point: the web isn’t social technology per se; it is just a particularly good tool for making social technology happen. The fact that the online medium allows you to track, access, publish and distribute is a good reason for using the web, BUT the fact that this all happens to populate one space shouldn’t limit your thinking to that space, and shouldn’t alter the fact that this is always, always about people and the ways in which they come together. The changing focus of the museum, moving from being a content provider to being a platform provider, also rang true with me in so many ways. Nina rounded off with “ten tips for social technology” (slide 12 and onwards).

Shelley gave another excellent talk on the incredible work she is doing at the Brooklyn Museum. She and I shared a session on Web2 at Museums and the Web 2007, and once again it is the genuine enthusiasm and authenticity permeating everything she does that really comes across. This isn’t “web2 for web2’s sake” – this is genuine, pithy, risky, real content from enthused audiences who really want to take part in the life of the museum.

My session was on setting your data and content free:

[slideshare id=768086&doc=mikeellisifyoulovesomethingsetitfreefinal-1227110930707512-9&w=425]

Hopefully the slides speak for themselves, but in a nutshell my argument is that although we’ve focussed heavily on the social aspects of Web2.0 from a user perspective, it is the stuff going on under the hood which really pushes the social web into new and exciting territory. It is the data sharing, the mashing, the APIs and the feeds which are at the heart of this new generation of web tools. We can pretend that people use the web (and our sites) in a linear, controlled way, but the reality is we have fickle and intelligent users who will get to our content any which way. Given this, we can either push back against freer content by pretending we can lock it down, or – as I advocate – do what we can to give users access to it.

hoard.it : bootstrapping the NAW

What seems like a looong time ago I came up with an idea for “bootstrapping” the Non API Web (NAW), particularly around extracting unstructured content from (museum) collections pages.

The idea of scraping pages when there’s no data-access API isn’t new: Dapper launched a couple of years ago with a model for mapping and extracting from ‘ordinary’ html into a more programmatically useful format like RSS, JSON or XML. Before that, numerous projects did the same (Piggy Bank, Solvent, etc.); Dapper is about the friendliest web2y interface so far, but IMHO it still fails in a number of ways.

Of course, there’s always the alternative approach, which Frankie Roberto outlined in his paper at Museums and the Web this year: don’t worry about the technology; instead approach the institution for data via an FOI request…

The original prototype I developed was based around a bookmarklet: the idea was that a user would navigate to an object page (although any templated “collection” or “catalogue” page is essentially treated the same). If they wanted to “collect” the object on that page they’d click the bookmarklet, a script would look for data “shapes” against a pre-defined store, and then extract the data. Here are some screen grabs of the process:

- An object page on the Science Museum website
- The user clicks the bookmarklet and a popup tells them that this page has been “collected” before; the data is separated by the template and “structured”
- Here, the object hasn’t been collected, but the tech spots that the template is the same, so it knows how to deal with the “data shape”
- The hoard.it interface, showing how the fields are defined
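
The matching idea is simple enough to sketch. Here’s a minimal illustration in Python of the “data shape” concept – not the actual hoard.it code; the shape store, URL-prefix lookup and CSS selectors are all invented for the example:

```python
# A minimal sketch of the "data shape" idea, not the real hoard.it code.
# Assumes requests and BeautifulSoup are installed; the shape store and
# all selectors below are invented for illustration.
import requests
from bs4 import BeautifulSoup

# One "shape" per site template: field name -> CSS selector
SHAPE_STORE = {
    "http://www.sciencemuseum.org.uk/": {
        "title": "h1",
        "description": "div.object-description p",
        "date.made": "td.date-made",
    },
}

def find_shape(url):
    """Return the stored shape whose URL prefix matches, if this template is known."""
    for prefix, shape in SHAPE_STORE.items():
        if url.startswith(prefix):
            return shape
    return None

def collect(url):
    """Fetch a page and extract fields according to its template's shape."""
    shape = find_shape(url)
    if shape is None:
        return None  # unknown template: a real system might ask the user to define one
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    record = {}
    for field, selector in shape.items():
        node = soup.select_one(selector)
        if node is not None:
            record[field] = node.get_text(strip=True)
    return record
```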

I got talking to Dan Zambonini a while ago and showed him this first-pass prototype; he got excited about the potential straight away. Since then we’ve met a couple of times and exchanged ideas about what to do with the system, which we code-named “hoard.it”.

One of the ideas we batted about early on was the concept of building web spidering into the system: instead of relying primarily on end-users as the “data triggers”, it should – we reasoned – be reasonably straightforward to define templates and then send a spider off to do the scraping instead.

The hoard.it spider

Dan has taken that idea and run with it. He built a spider in PHP, gave it a set of rules for templates and link-navigation and set it going. A couple of days ago he sent me a link to the data he’s collected – at time of writing, over 44,000 museum objects from 7 museums.
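
I haven’t seen inside Dan’s PHP spider, but the general shape of such a rules-driven crawler is easy to sketch. Here’s a rough Python version under my own assumptions – the rule format, and the idea of plugging in a shape-matching extractor like the collect() sketch above, are mine, not his:

```python
# A rough sketch of a rules-driven spider of the kind described above.
# Dan's real spider is PHP; the rule format here is invented.
from collections import deque
from urllib.parse import urljoin
import re

import requests
from bs4 import BeautifulSoup

def spider(seed_url, follow_pattern, extract, max_pages=500):
    """Breadth-first crawl from seed_url, following only links whose URL
    matches follow_pattern, collecting whatever extract() returns."""
    seen, queue, records = {seed_url}, deque([seed_url]), []
    while queue:
        url = queue.popleft()
        html = requests.get(url).text
        record = extract(url, html)  # e.g. a shape-matching extractor as sketched earlier
        if record:
            records.append(record)
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"])
            if re.search(follow_pattern, link) and link not in seen and len(seen) < max_pages:
                seen.add(link)
                queue.append(link)
    return records
```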

Dan has put together a REST-like querying method for getting at this data. Queries are passed in via URL and constructed in the form attribute/value – the query can be as long as you like, allowing fine-grained data access.

Data is returned as XML – there isn’t a schema right now, but that can follow in further prototypes. Dan has done quite a lot of munging to normalise dates and locations and then squeezed results into a simplified Dublin Core format.
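
I don’t know exactly what Dan’s munging looks like, but here’s an illustration of the kind of thing involved – mapping a museum’s field names onto simplified Dublin Core terms and tidying free-text dates. The mappings and the regex are my guesses at the sort of normalisation described, not his code:

```python
# Illustrative only: map scraped field names to simplified Dublin Core
# and pull a four-digit year out of free-text dates. Mappings and regex
# are guesses at the kind of munging described, not Dan's actual code.
import re

FIELD_TO_DC = {
    "title": "dc.title",
    "description": "dc.description",
    "date made": "dc.date",
    "place made": "location.made",
}

def normalise(record):
    out = {}
    for field, value in record.items():
        dc_field = FIELD_TO_DC.get(field.lower().strip(), field)
        if dc_field == "dc.date":
            match = re.search(r"\b(\d{4})\b", value)  # "made circa 1854" -> "1854"
            if match:
                value = match.group(1)
        out[dc_field] = value
    return out
```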

Here’s an example query:

http://feeds.boxuk.com/museums/xmlfeed/location.made/Japan/

So this means “show me everything where location.made=Japan”.

Getting more fine-grained:

http://feeds.boxuk.com/museums/xmlfeed/location.made/Japan/dc.subject/weapons,entertainment

Yes, you guessed it – this is “things where location.made=Japan and dc.subject is weapons or entertainment” (the comma in the URL acts as an OR).
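
If you want to get at the feed programmatically, a query is just attribute/value pairs joined into the URL path, and what comes back is XML. Something like this Python sketch would do it – note that the element names in the parsing step (“item”, “dc.title”) are assumptions about the feed’s markup, since there’s no schema yet:

```python
# Build a hoard.it-style query URL from attribute/value pairs and parse
# the XML that comes back. The element names below are assumptions, not
# a documented schema.
import requests
import xml.etree.ElementTree as ET

BASE = "http://feeds.boxuk.com/museums/xmlfeed"

def query(pairs):
    """pairs: dict of attribute -> value, e.g. {"location.made": "Japan"}."""
    path = "/".join(f"{attr}/{value}" for attr, value in pairs.items())
    return requests.get(f"{BASE}/{path}/").text

root = ET.fromstring(query({"location.made": "Japan"}))
for item in root.iter("item"):            # "item" is an assumed element name
    print(item.findtext("dc.title", ""))  # "dc.title" likewise assumed
```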

Dan has done some lovely first-pass displays of ways in which this data could be used:

Also, any query can be appended with “/format/html” to show a simple html rendering of the request:

http://feeds.boxuk.com/museums/xmlfeed/location.made/exeter/format/html

What does this all mean?

The exposing of museum data in “machine-useful” form is a topic about which you’ll have noticed I’m pretty passionate. It’s a hard call, though – and one I’m working on with a number of other museum enthusiasts – to get museums to understand the value of exposing data in this way.

The hoard.it method is a lovely workaround for those who don’t have, can’t afford or don’t understand why machine-accessible object data is important. On the one hand, it’s a hack – screenscraping is by definition a “dirty” method for getting at data. We’d all much prefer it if there was a better way – preferably, that all museums everywhere exposed their data properly anyway. But the reality is very, very different. Most museums are still in the land of the NAW. I should also add that some (including the initial 7 museums spidered for the purposes of this prototype) have APIs that they haven’t exposed. Hoard.it can help those who have already done the work of digitising but haven’t exposed the data in a machine-readable format.

Now that we’ve got this kind of data returned, we can of course parse it and deliver back…pretty much anything, from mobile-formatted results to ecards to kiosks to…well, use your imagination…
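
To make that concrete in the smallest way I can think of, here’s a toy re-rendering of feed results as a bare-bones HTML list – the sort of starting point a mobile view, ecard or kiosk front-end might use. It builds on the query() sketch above, and the element names are still assumptions:

```python
# Toy re-rendering of feed results as a minimal HTML list. Element
# names ("item", "dc.title") remain assumptions about the feed markup.
import xml.etree.ElementTree as ET
from html import escape

def to_simple_html(xml_text):
    root = ET.fromstring(xml_text)
    rows = "".join(
        f"<li>{escape(item.findtext('dc.title', ''))}</li>"
        for item in root.iter("item")
    )
    return f"<ul>{rows}</ul>"
```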

What next?

I’m running another mashed museum day the day before the annual Museums Computer Group conference in Leicester, and this data will be made available to anyone who wants to use it to build applications, visualisations or whatever around museum objects. Dan has got a bunch of ideas about how to extend the application, as have I – but I guess the main thing is that now it’s exposed, you can get into it and start playing!

How can I find out more?

We’re just in the process of putting together a simple series of wiki pages with some FAQs. Please use those, or the comments on this post, to get in touch. I look forward to hearing from you!