Introducing Mobile Museum

I’m ramping up to launch a sister site to this one. It’s called Mobile Museum and will be a series of semi-structured written interviews with people who have developed, authored or project-managed mobile solutions. Some of these people will be museum people, others won’t…

If you’re interested you can find out more over on http://mobilemuseum.org.uk/ where there’s a link to a signup form. Expected launch date – end September 2011.

Streetmuseum: Q&A with Museum of London

Streetmuseum – a rather lovely iPhone app by the Museum of London – launched a few weeks ago, and almost immediately began to cause a bit of a buzz across Twitter and other social networks. It’s hardly surprising that people have responded so positively to it – the app takes the simplicity of the Looking Into the Past Flickr group and combines it with cutting-edge stuff like AR and location-based services (think Layar++) to bring historical London into a modern-day context.

I caught up with Vicky Lee last week and asked her a bunch of questions about the app. Here’s what she had to say:

Q: Please introduce yourself, and tell us about your involvement with the Museum of London iPhone app project

I’m Vicky Lee, Marketing Manager for the Museum of London. As part of the launch campaign for the new Galleries of Modern London I’ve been working with creative agency Brothers and Sisters to develop a free iPhone app – Streetmuseum – that brings the Museum to the streets.

Q: Tell us about the app – what it does, how you’re hoping people will use it, and how successful it has been

Streetmuseum uses augmented reality to give you a unique perspective of old and new London. The app guides users to sites across London where over 200 images of the capital, from the Museum of London’s art and photographic collections, can be viewed in situ, essentially offering you a window through time. If you have an iPhone 3GS these images can be viewed in 2D and also in 3D, as a ghostly overlay on the present-day scene. The AR function cannot be offered on the iPhone 3G, but users can still locate the images via GPS and view them in 2D, with the ability to zoom in and see detail. To engage with as many Londoners as possible, images cover almost all London boroughs. Each image also comes with a little information about the scene to give the user some historical context.

What we bet on from the start was that users would enjoy finding images of the street they live or work on, and would be quick to demonstrate this to their friends and colleagues – helping to spread the word about Streetmuseum but also raising the profile of the Museum itself, particularly among young Londoners, who we have previously struggled to reach. We hoped that the app would spread virally in this way within days, and it certainly seems to have worked: in just over two weeks the app has had over 50,000 downloads. It’s just been released in all international iTunes stores, so we’re expecting this figure to rocket over the coming weeks.

Q: Why did you choose to build an iPhone app as opposed to something else (Android, web, etc.)?

When I wrote the brief for a viral campaign to promote the new galleries and reposition the Museum of London, I had no idea we would end up launching an app. I hadn’t for one moment considered that we could afford to develop an app but Brothers and Sisters’ instinct from the start was that this was what we needed to change perceptions about the Museum. As soon as we understood how the concept fitted in with the overall marketing campaign (which also uses images from the Museum’s collections) it was the only option we wanted to pursue.
As with most museum projects, we were limited by budget, so it was a case of either iPhone or Android but not both. To launch with maximum impact, our feeling was that we had to go out with an iPhone app, thereby benefiting from the positive associations of the Apple brand and securing the interest of the media. We hope now to be able to secure funding to develop an Android version of the app, in response to the many requests we have received.

Q: Can you tell us a bit about the financial model? Did you build it in partnership with someone else?

As a free museum reliant on funding, we would not have been able to create this app without collaborating with Brothers and Sisters. The partnership was mutually beneficial, generating media coverage for both parties and new business leads for the agency. Using images from the Museum’s collections meant that all the content was readily available, which kept costs down. Licensing agreements on certain images made it complicated to charge for the app; however, it was always our intention to launch it free in order to reach the widest possible audience.

Q: Overall, what have you learnt about the process so far?

Simple works best. We originally planned to include user generated content but dropped this idea to ensure we stuck to our budget and timescale. Ultimately the idea is not that original but its simplicity has made the app an easy sell, both nationally and internationally.
I’d certainly give myself more time in future – we delivered the app in an incredibly short amount of time which gave little opportunity to review how it worked in practice. With more time we could have carried out user testing and refined the concept further to end up with an even slicker product.

Q: What else have you got planned for mobile at the Museum of London in the future?

We’re keen to keep the momentum going and stay ahead of the field, so, together with Brothers and Sisters, we are already looking at how we can develop this concept further. If we can secure additional funding we’d like to explore different subject areas and tie-in with future exhibitions and gallery redevelopments. Most importantly though we need to build upon what we have already achieved and keep evolving to ensure that any new apps continue to be newsworthy. We are also looking into the possibility of adding more images to the current Streetmuseum app and developing a version for Android phones.

Crowdsourcing photosynth

I wrote about Photosynth when it first came out as a plugin back in August 2007. Back then I wasn’t sure, and felt that it was a technology looking for a reason. Since then, Microsoft have done a few very, very cool things with it. The most important of these is that anyone can now create Photosynths (essentially, think image stitching, but in all dimensions…).

All you have to do is go to the Photosynth site, download the app and chuck some photos at it. It munges away for a while, then uploads them all to the Photosynth site and gives you a link. When you’re taking the photos, it helps a lot to remember that you want them to be connected: they obviously have to be of the same scene, and I’ve found that standing reasonably still and shooting all around you tends to work reasonably well.

A “good synth” (the software tells you how “synthy” your selection is once it’s uploaded – presumably a measure of how well it has managed to stitch everything together) is pretty satisfying, although some obvious winning features are missing. The single most obvious of these is that you can’t add links or hotspots to the synth you create. For museums particularly, I think this’ll be a problem.

I did a synth a while back of the Boxkite at Bristol Museum. It’s a nice object to use (or so I thought) – it’s up in the rafters and you can walk all around it, taking photos from 360 degrees. As it happens, the result is pretty good, but not great. I’m wondering whether the software might have confused one side of the object with the other. Either way, it gives an insight into how museums could start using Photosynth to enhance collections online. More interestingly, perhaps (given the fair size of the Photosynth plugin), it could be used in-gallery (maybe with a Microsoft Surface…) to let audiences really engage with objects. Have a poke around the Photosynth site to get a feel for other museum stuff.

Extending Photosynth a bit further is what this post is all about, though.

When I saw the astonishing CNN Photosynth from Obama’s inauguration I started thinking about how else you could use it to enhance online experiences. I had what I thought at the time was an original idea (looking now, I realise that Nick Poole had commented on my original post suggesting exactly this!) – how about using Flickr as a source for building a Photosynth?

Apollo 10 Command Module – thanks to Gaetan Lee

I needed an iconic object with plenty of Creative Commons licensed photos on Flickr. Apollo 10 turned out to be a good one – I ran a search on Flickr and found 40 CC photos I could use, all taken in the Making the Modern World gallery of the Science Museum, my old stomping ground.

There’s no API I’m aware of for Photosynth yet. This is another missing trick – imagine if you could step straight from Flickr to a 3D synthed view of any search… – so for my experiment I had to download the entire set of search results. For this, I used a cunning app called Downloadr, which lets you automatically download all the Flickr pics that match a given search. Then it was just a matter of re-uploading the images via Photosynth.
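Just for illustration, here’s a minimal sketch of what that download step looks like done programmatically against the Flickr API rather than via Downloadr – the API key is a placeholder you’d register for yourself, and licence ids 1–7 are the various Creative Commons flavours:

```python
# A rough sketch of what Downloadr does under the hood: search Flickr
# for Creative Commons photos and save them locally, ready to re-upload
# to Photosynth. API_KEY is a placeholder you'd register for yourself.
import requests

API_KEY = "your-flickr-api-key"  # placeholder
REST = "https://api.flickr.com/services/rest/"

def cc_photos(query, count=40):
    """Return (title, url) pairs for CC-licensed photos matching query."""
    params = {
        "method": "flickr.photos.search",
        "api_key": API_KEY,
        "text": query,
        "license": "1,2,3,4,5,6,7",  # the Creative Commons licence ids
        "extras": "url_m",           # ask for a direct medium-size image URL
        "per_page": count,
        "format": "json",
        "nojsoncallback": 1,
    }
    photos = requests.get(REST, params=params).json()["photos"]["photo"]
    return [(p["title"], p["url_m"]) for p in photos if "url_m" in p]

# Grab everything and save it, ready to feed to Photosynth
for i, (title, url) in enumerate(cc_photos("Apollo 10 command module")):
    with open(f"apollo10_{i:02d}.jpg", "wb") as f:
        f.write(requests.get(url).content)
```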

The result is here. Given that this is entirely made up of images taken at completely different times and by different people, I think it works pretty well. The crowdsourcing element adds a lot to Photosynth. It’s still a shame that it isn’t possible to add links or otherwise play with the resulting synth – I think that would add a lot too.

Let me know if you think of other objects that could be synthed in this way and I’ll give it a go…

For the webs2, please follow the crowd

The last talk I gave – in December 2008 – was at Online Information and titled “What does Web2.0 DO for us?”.

Here are the slides (my third slide deck to get “homepaged” on slideshare…yay…):

[slideshare id=812457&doc=whatdoesweb2doforusmikeellisv12-1228296734998366-8&w=425]

This one was attempting to focus on Web2.0 in the Enterprise. Frankly, “The Enterprise” is a subject which fills me with fear, dread and trepidation, but the movement of Web2.0 into that space is probably inevitable as sales teams around the world spot another opportunity and sell it out to cash-rich bods wanting to “be innovative” in the name of their behemoth of a company. It’ll be interesting to watch.

The talk was popular, which I’m pleased about. Online Information is a funny old conference – the halls are stacked with basically the same company replicated about 200 times: reasonably bad CMS systems with reasonably bad sales people trying to sell to a reasonably badly informed market of people. I sound over-rude, but I have to be honest – I last went in about 2003 and absolutely nothing has changed. Which can’t be good in the tech field, right?

My slides were supposed to be about one thing (why the social web is important in “The Enterprise”, and why “The Enterprise” should take it seriously) – in the end, I actually focused on why “web2” is important to people, rather than as a “thing” in the abstract. I see the connecting of people with other people as the reason for believing in the social web as a sound platform upon which to build any content. I believe this engagement is key to bringing (heritage) content to the foreground; furthermore, I think that even though web2.0 has been hyped to death, we should continue to believe in what “the social web” means. Mainly, we should believe this because the social web is about people and connections, and as such it has enormous importance to us as social, connected animals.

One of the problems with talking about “Web2.0” is that the phrase carries an implicit weight with it: as soon as there is a count attached, you’re naturally looking for the current one to expire – for “Web2” to be replaced by “Web3” and shortly after that, “Web4”. Useful though “Web2.0” is as a phrase, I’m with the commentators now who suggest we talk about “the web”, or – my preference – “the social web”. Not because it is any less important, but because it is more so.

Incidentally, earlier today I was researching some stuff for a keynote I’m due to give in The Hague later in February (more details soon…) and used Google Trends to check on the phrase “web2.0”. It’s interesting to note that it reached its peak during Q4 2007, and has since dropped off in popularity:


Web2.0 on Google Trends

You’ll see immediately that this follows the Gartner Hype Cycle (or at least the beginning of it) – it’ll be interesting to watch over the coming months and years how the curve settles into a dampened “plateau of productivity”. (I’d also be interested if anyone can figure out why there is a gap between 2004, when O’Reilly first mentioned the phrase, and mid-2005…)

For the graph junkies, here’s the same period for the phrase “social web”:


"Social Web" on Google Trends

So. That’s the hype. Maybe now we can get on with producing some astonishing, user-focused content…

Mashed Museum 2008

On June 18th 2008 (the day before UK Museums on the Web conference) a bunch of us met in a room at Leicester University to do some museum mashing. Our aim was:

…to give ourselves an environment free from political or monetary constraints. The focus of the day is not IPR, copyright, funding or museum politics. Our energies will be channeled into embracing the “new web”: envisaging, demonstrating and (hopefully) building some lightweight distributed applications.

I thought I’d borrow Matt’s N95 and do a quick “interview” (loose term) with everyone who wanted to say something about what they’d done. Finally, last night, I got round to chucking it all into Windows Movie Maker and doing some editing, and I’ve uploaded the result to http://blip.tv/file/1029060. It’s around 12 minutes long, so grab a cuppa and a comfy chair…

Finished?

The following day I did a presentation at the conference itself. This is now on Slideshare (and embedded below):

[slideshare id=488768&doc=ukmw08mashedmuseum-1214573625065381-9&w=425]

I’d just like to say thanks loads to everyone who attended – I know giving up a day is always problematic (even if it involves beer at the end). I hope you had fun. I know I did. Also enormous thanks to Ross Parry and the MCG for giving us the opportunity to do this.

Several people who attended have written about / linked to the things we built:

(I’ll also be continuing to update www.mashedmuseum.org.uk with future museummashingmoments…)

The message? Well, Lee Iverson from the University of British Columbia used a phrase during his presentation the following day that beautifully encapsulates what I’ve posted about so, so often – and the one thing that makes any mashups possible:

If you expose data, you lose control but give it life

And that pretty much sums it up.

hoard.it: bootstrapping the NAW

What seems like a looong time ago, I came up with an idea for “bootstrapping” the Non-API Web (NAW), particularly around extracting unstructured content from (museum) collections pages.

The idea of scraping pages when there’s no data-access API isn’t new: Dapper launched a couple of years ago with a model for mapping and extracting ‘ordinary’ HTML into more programmatically useful formats like RSS, JSON or XML. Before that there were numerous projects that did the same (PiggyBank, Solvent, etc.); Dapper has about the friendliest web2y interface so far, but IMHO it still fails in a number of ways.

Of course, there’s always the alternative approach, which Frankie Roberto outlined in his paper at Museums and the Web this year: don’t worry about the technology; instead approach the institution for data via an FOI request…

The original prototype I developed was based around a bookmarklet: the idea was that a user would navigate to an object page (although any templated “collection” or “catalogue” page is essentially treated the same). If they wanted to “collect” the object on that page, they’d click the bookmarklet; a script would match the page against a pre-defined store of data “shapes” and then extract the data. Here are some screen grabs of the process (click for bigger):

Science Museum object page An object page on the Science Museum website
Bookmarklet pop-up User clicks on the bookmarklet and a popup tells them that this page has been “collected” before. Data is separated by the template and “structured”
Bookmarklet pop-up Here, the object hasn’t been collected but the tech spots that the template is the same, so knows how to deal with the “data shape”
Defining fields in the hoard.it interface The hoard.it interface, showing how the fields are defined
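To make the “data shape” idea a bit more concrete, here’s a minimal sketch of the matching step, assuming templates are stored as CSS selectors keyed by a URL pattern. The pattern, selectors and field names are invented for illustration – they’re not the actual Science Museum markup:

```python
# A minimal sketch of the "data shape" store: per-template field
# selectors, looked up by URL pattern and applied to the page.
# The pattern and selectors are invented for illustration.
import re

import requests
from bs4 import BeautifulSoup

# One entry per known page template: URL pattern -> named field selectors
TEMPLATE_STORE = {
    r"sciencemuseum\.org\.uk/objects/": {
        "title":       "h1.object-title",
        "description": "div.object-description p",
        "date":        "span.object-date",
    },
}

def collect(url):
    """Extract structured data if the page matches a known data shape."""
    for pattern, fields in TEMPLATE_STORE.items():
        if re.search(pattern, url):
            soup = BeautifulSoup(requests.get(url).text, "html.parser")
            return {
                name: el.get_text(strip=True)
                for name, sel in fields.items()
                if (el := soup.select_one(sel)) is not None
            }
    return None  # unknown shape - this is where you'd ask the user to define fields
```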

I got talking to Dan Zambonini a while ago and showed him this first-pass prototype and he got excited about the potential straight away. Since then we’ve met a couple of times and exchanged ideas about what to do with the system, which we code-named “hoard.it”.

One of the ideas we pushed about early on was the concept of building web spidering into the system: instead of primarily having end-users as the “data triggers”, it should – we reasoned – be reasonably straightforward to define templates and then send a spider off to do the scraping instead.

The hoard.it spider

Dan has taken that idea and run with it. He built a spider in PHP, gave it a set of rules for templates and link-navigation and set it going. A couple of days ago he sent me a link to the data he’s collected – at time of writing, over 44,000 museum objects from 7 museums.
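The real spider is Dan’s PHP, which I haven’t seen the insides of – but purely as a sketch of the idea (template rules plus link navigation), a toy crawl loop might look something like this, reusing the collect() function from the data-shape sketch above:

```python
# Toy version of the crawl loop: follow links within a site, run every
# page past the template store, and hoard whatever matches. This is a
# guess at the approach, not Dan's actual PHP implementation.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def spider(start_url, stay_within, max_pages=500):
    queue, seen, hoard = deque([start_url]), {start_url}, []
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        record = collect(url)  # known data shape? (re-fetches; fine for a sketch)
        if record:
            hoard.append(record)
        # Link navigation: queue every in-scope link we haven't seen yet
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            if stay_within in link and link not in seen:
                seen.add(link)
                queue.append(link)
    return hoard
```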

Dan has put together a REST-like querying method for getting at this data. Queries are passed in via URL and constructed in the form attribute/value – the query can be as long as you like, allowing fine-grained data access.

Data is returned as XML – there isn’t a schema right now, but that can follow in further prototypes. Dan has done quite a lot of munging to normalise dates and locations and then squeezed results into a simplified Dublin Core format.
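I don’t know what Dan’s actual normalisation rules look like, but as an illustration of the sort of munging involved, turning free-text museum dates into a comparable (start, end) range might go something like this (the patterns are invented examples):

```python
# Illustrative date munging: coerce free-text museum dates into a
# (start_year, end_year) range. These patterns are invented examples,
# not the actual rules used by hoard.it.
import re

def normalise_date(text):
    text = text.lower().strip()
    if m := re.fullmatch(r"(?:c\.?|circa)\s*(\d{4})", text):         # "c. 1850"
        year = int(m.group(1))
        return (year - 5, year + 5)                                   # fuzzy window
    if m := re.fullmatch(r"(\d{4})\s*[-–]\s*(\d{4})", text):          # "1914-1918"
        return (int(m.group(1)), int(m.group(2)))
    if m := re.fullmatch(r"(\d{1,2})(?:st|nd|rd|th) century", text):  # "19th century"
        c = int(m.group(1))
        return ((c - 1) * 100 + 1, c * 100)
    if re.fullmatch(r"\d{4}", text):                                  # plain "1969"
        return (int(text), int(text))
    return None  # couldn't parse - leave it for a human

assert normalise_date("19th century") == (1801, 1900)
assert normalise_date("c. 1850") == (1845, 1855)
```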

Here’s an example query (click to see results – opens new window):

http://feeds.boxuk.com/museums/xmlfeed/location.made/Japan/

So this means “show me everything where location.made=Japan”

Getting more fine-grained:

http://feeds.boxuk.com/museums/xmlfeed/location.made/Japan/dc.subject/weapons,entertainment

Yes, you guessed it – this is “things where location.made=Japan and dc.subject=weapons or entertainment”
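If you want to play with the feed from code, here’s a sketch of a tiny client – the endpoint is the one above, but the shape of the returned XML is my guess, since there’s no published schema yet:

```python
# Sketch of a client for the hoard.it feed: build an attribute/value
# query URL and walk the returned XML. The element names are guesses -
# there's no published schema yet, so inspect the real feed and adjust.
import requests
import xml.etree.ElementTree as ET

BASE = "http://feeds.boxuk.com/museums/xmlfeed"

def query(pairs):
    """pairs maps attribute -> value, e.g.
    {"location.made": "Japan", "dc.subject": "weapons,entertainment"}"""
    path = "/".join(f"{attr}/{value}" for attr, value in pairs.items())
    return ET.fromstring(requests.get(f"{BASE}/{path}/").content)

root = query({"location.made": "Japan"})
for obj in root:                   # assuming one child element per object
    title = obj.findtext("title")  # element name is an assumption
    if title:
        print(title)
```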

Dan has done some lovely first-pass displays of ways in which this data could be used:

Also, any query can be appended with “/format/html” to show a simple HTML rendering of the request:

http://feeds.boxuk.com/museums/xmlfeed/location.made/exeter/format/html

What does this all mean?

The exposing of museum data in “machine-useful” form is a topic about which, as you’ll have noticed, I’m pretty passionate. It’s a hard call, though – and one I’m working on with a number of other museum enthusiasts – to get museums to understand the value of exposing data in this way.

The hoard.it method is a lovely workaround for those who don’t have an API, can’t afford one, or don’t understand why machine-accessible object data is important. On the one hand, it’s a hack – screenscraping is by definition a “dirty” method for getting at data. We’d all much prefer it if there were a better way – preferably, that all museums everywhere exposed their data properly anyway. But the reality is very, very different. Most museums are still in the land of the NAW. I should also add that some (including the initial 7 museums spidered for the purposes of this prototype) have APIs that they haven’t exposed. Hoard.it can help those who have already done the work of digitising but haven’t exposed the data in a machine-readable format.

Now that we’ve got this kind of data returned, we can of course parse it and deliver back… pretty much anything, from mobile-formatted results to ecards to kiosks to… well, use your imagination…

What next?

I’m running another mashed museum day the day before the annual Museums Computer Group conference in Leicester, and this data will be made available to anyone who wants to use it to build applications, visualisations or whatever around museum objects. Dan has got a bunch of ideas about how to extend the application, as have I – but I guess the main thing is that now it’s exposed, you can get into it and start playing!

How can I find out more?

We’re just in the process of putting together a simple series of wiki pages with some FAQs. Please use those, or the comments on this post, to get in touch. Looking forward to hearing from you!

Lights, bushels.

Brian has written a short post about universities actively trying to stop promotional material (yes – promotional material) finding freedom on the web. How funny is that?

On a related note, Sarah Perez from ReadWriteWeb did a post a couple of days ago about hidden image resources in the so-called “deep web”. The list of links is great – I particularly like Calisphere and this collection of the 1906 SF earthquake. Lovely.

A couple of things, though. First, surely Perez is wrong to suggest that these images are “the deep web”? I did a couple of tests looking for images via Google and they all seemed to be spidered OK. This one, for instance, was found via a Google search for the image title. It also appears on Google Image Search. Granted, you’d likely not find it given the quantity of other stuff, but it is definitely being spidered, so to me that means it’s not deep web. I may have missed something…

The finer point – about what these institutions have done (or not) to promote these exceptionally fine collections – is more interesting. I haven’t looked into it any further in these cases, but it’s familiar territory (you know, the whole open content, CC licensing, Flickr usage, watermarking, marketing gubbins).

That’s where it comes back to Brian’s post – the content is great, the hard work has been done: the digitisation, the cataloguing, the site design. Then at the last hurdle, fear seems to strike. Better hide the content, you know, in case someone – like – uses it.

Go figure.

Domain names. They aren’t really important.

A long thread has broken out (interesting phrase, “broken out” – it implies a viral, “can’t stop it once it’s started” kind of feel… which is strangely apt…) on the Museums Computer Group email list about domain names. Fundamentally it started as a “should we move to .com?” question and then, as is the way with these kinds of discussions, moved on to pastures new.

I thought I’d just clarify my position on this…

1. It really doesn’t matter what TLD (that’s the bit after the name, e.g. .co.uk, .com, .org) you use. There is one known exception: .ac.uk domains are often used to verify that your institution is academic, so having this TLD can in some circumstances get you into resources that you otherwise wouldn’t have been able to access. The reason the .ac.uk domain has this kudos is that it is very difficult to get one, requiring agreement from JANET. They also have a pretty strict naming convention – I’ve tried before to buy names from JANET which are non-institution related, for instance ingenious.ac.uk, and they were having none of it.

Pretty much every other TLD (with the exception of the obvious ones like .mil, .gov.uk, etc.) can be bought by anyone. They prove nothing about you or your institution and are therefore useless as an indicator of institutional status.

2. The legacy position of “.com means it’s US-based and probably sells stuff” is so far out of date I can’t even believe anyone is still having the conversation. I can buy a .com domain (provided it’s available, which many aren’t…), and I’m UK-based and nothing whatsoever to do with ecommerce.

3. Your users (and last time I looked they were the important people in this equation..?) really, really won’t care. At all.

4. The actual domain name itself is reasonably important, but not hugely so provided your search engine optimisation is good. Most of your users will probably have come to you by going to Google, searching for your institution name and then clicking on the first (or most useful-looking) result they find. Again, they probably won’t even look at the domain name. See 3…

5. Once you’ve decided on your domain name, buy as many of the important TLDs as you can. My list is generally .com, .co.uk, .org and .org.uk – I tend to avoid the likes of .me.uk and .biz, but given how cheap they are you might as well get the lot if you feel strongly about them.

6. When you’ve done that, though, choose a primary domain – i.e. the one you will promote, use in your signature and put on marketing literature. Do nothing whatsoever with your secondary names (all the ones you bought in step 5!) apart from redirecting them to your primary domain (there’s a toy sketch of this after the list).

7. Finally – and this is a matter of minor preference rather than anything else – make sure your email address has the same suffix as your domain name. So if you decide to use www.someotherdomain.com then you should be mike.ellis@someotherdomain.com. This is, as I say, reasonably unimportant, but I (and therefore probably some others) quite often look at an unknown person’s email address and use the suffix to locate their website.
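To spell out the redirect logic from point 6: in real life it’s usually a one-liner in your web server config, but here’s a toy sketch of what it does (the domain name is the made-up one from point 7):

```python
# Toy illustration of point 6: permanently redirect every secondary
# domain to the primary one. In practice you'd do this in your web
# server config; the domain name here is made up.
from flask import Flask, redirect, request

PRIMARY = "www.someotherdomain.com"  # the one domain you promote

app = Flask(__name__)

@app.before_request
def canonicalise():
    # Any request arriving on a secondary domain gets a permanent redirect
    if request.host != PRIMARY:
        return redirect(f"http://{PRIMARY}{request.path}", code=301)

@app.route("/")
def home():
    return "You are on the primary domain."
```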

Whadda you reckon?