WordPress culture hackday, anyone?

A long and interesting thread on the MCG list prompted a thought – how about a WordPress hack-and-knowledge-sharing-kinda-day for culture?

We could talk about stuff like:

  • best approaches and favourite plugins
  • ways to hook into existing systems like Omeka
  • building some simple plugins to interface with CH open data
  • building some simple plugins to interface with other CH systems (collections management, library systems, etc)
  • how to deploy, move and scale sites

Dunno – I’m making this up as I go along. But if you’re interested in this totally vague proposal, chuck your name and thoughts into this form * and we’ll see if it goes anywhere.

(* If you’re in one of those institutions who have a short-sighted IT department that doesn’t let you look at the whole internet, just add your thoughts in a comment on this post….)

QR isn’t an end, it’s a means

QR seems to have taken on a bit of a life of its own over the past few weeks. Not only have I seen far more of the codes in the wild, but there seem to be many more people writing about it, many more news articles – and (which is nice) lots of people emailing me to ask how they can “do QR”.

Google Trends graph for "QR"

QR is a great technology. Actually, no – it’s an ok-ish technology. The more important thing is that growing awareness brings gains in popularity, which in turn means more people will know what a QR code is, how to use it – and will also be aware of some of its foibles. As with anything, this isn’t about how awesome the technology is. Many, many geeky people will tell you QR is crap – which, taken on its own, in some ways it is – but the important things are market penetration, expectation, device support – and (most importantly) the content experiences which underlie it.

Underlying the concept of QR, though, is something rather more important, which I think many people miss in their rush to play with the latest and greatest thing. The important thing is this: QR is a way of poking the digital world into the real world. In a way, QR is simply one technology in a line of technologies that do this. Remember the first time you saw a URL on a piece of print advertising? That was digital poking into real, albeit in a slightly crap way. Then Bluetooth. Now QR.

Ultimately, the concept is the same in each of these cases: put a marker in the real world which allows your audiences to connect with content in the virtual world.

You can be agnostic about the technology with which you do this. This year it might be QR. Next year it might be NFC or AR. The year after – who knows: image recognition, hyper-accurate GPS, whatever. The facts remain the same:

First: People have to have a desire to engage with the marker in the first place. Why would you go to the effort of scanning a QR code with no knowledge of what that code might provide for you? Nina Simon recently blogged about QR Codes and Visitor Motivation, which asks exactly this question. The cost curve – as always – has to balance: the value that your user gets out must be greater than the effort they have to put in – and (almost more importantly), you have to make this value clear before they scan.

Second: A proportion of people will never take part – or will never have the technology to take part. QR scanning (or – even more so – NFC or whatever the next big thing is) will be a niche activity for the foreseeable future. Bear in mind that not only does your user have to have a QR code reader installed, they also need the right kind of phone, an internet connection at the point of scan AND a contract with their provider that lets them use this connection. These things are becoming more common, but none of them is a given yet.

Third – and possibly the most important – the content that you deliver should add something significant to their experience. This is tied to the first point. Here’s a banner I snapped when I was in London recently:

UCL zoology QR code

If you scan this you get a link to the UCL Zoology Museum (and ironically, out of shot to the left is the URL that the QR code sends you to…). From a user experience perspective, I bet you 50p I can get my smartphone out, type in the URL and be looking at the relevant content quicker than you can boot up a QR app, scan and open.

In this instance, you do actually end up at a mobile-friendly site and some interesting links to QR technologies in use at UCL – which is fantastic. But the use case and motivation aren’t really articulated in the physical world.

Finally – you can easily put some measures in place to track usage, and use this to inform future activity. Here’s another example, this time from the British Library:

British Library QR

If you follow this link, you’ll find it goes to http://www.bl.uk/sciencefiction. The problem with this is that the URL is the same one as is being used on the poster, around the web and in all their other marketing. So when it comes to evaluating the use of QR – and whether it has been successful as a means to pull in new visitors – my suspicion is the BL won’t have any idea how to separate out these clicks from any of the others.

The simple solution to this is to use something like bit.ly and create a unique URL which is specifically for this QR code. More advanced techniques might include things like appending a string to the end of the URL (for example www.bl.uk/sciencefiction?source=qr) – or using Google Analytics “campaigns” to track these.

(Note that you could also get even more clever by having separate unique QR codes for separate advertising zones or even for separate posters – imagine the impact of being able to track which posters or areas have been most successful…now that’s cool use of a technology…)
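To make that concrete, here’s a minimal sketch (in Python, with a hypothetical set of poster locations – the bl.uk address is just the example from above) of generating one campaign-tagged URL per poster, each of which would then be turned into its own QR code:

```python
from urllib.parse import urlencode

# Hypothetical campaign setup - swap in your own landing page and locations.
BASE_URL = "http://www.bl.uk/sciencefiction"
POSTER_LOCATIONS = ["kings-cross", "euston", "piccadilly"]

def tagged_url(base, location):
    """Append Google Analytics campaign parameters so QR scans can be separated
    out from ordinary web and print traffic in your reports."""
    params = {
        "utm_source": "qr",
        "utm_medium": "poster",
        "utm_campaign": "sciencefiction",
        "utm_content": location,  # distinguishes one poster (or zone) from another
    }
    return base + "?" + urlencode(params)

for location in POSTER_LOCATIONS:
    # Each URL below gets its own QR code (shorten it with bit.ly first if you
    # want something less unwieldy) and goes on the matching poster.
    print(location, "->", tagged_url(BASE_URL, location))
```

Scans from each poster then show up as their own campaign segment in Google Analytics, which is exactly the kind of poster-by-poster comparison described above.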

Coming back to the beginning of this post – the overriding point here is that QR, and many other technologies similar to it, provide a very exciting way of bringing digital content into the real world. With some upfront thinking, genuinely interesting content can be delivered in this way and users can be made to engage. As ever, though, it isn’t about the technology but about the use, motivation and content which lie behind the technology. These are the things that count.

Strategic digital marketing: don’t be dis(integrated)

I was asked to speak at At-Bristol recently at a gathering of marketing people from the UK Association for Science and Discovery Centres.

The topic of choice was strategic marketing. Now, as I made clear on the day, I’m not – officially, at least – a “marketing person”. Nonetheless, I’ve spent more than a decade working with content-rich organisations on the web, and a core part of my role has been about getting people to stuff. And if that isn’t – at some level at least – about “marketing”, then I don’t know what is.

Rather than doing anything too fluffy and high-level, I thought I’d focus on ten practical activities which ultimately help pull together strategic ways of thinking about digital marketing. The list certainly isn’t definitive, by the way, but it should help…

#1: Develop a Shared Vision

This sounds obvious, but it is actually one of the hardest things to do. When you’re working with cross-departmental teams such as IT, web, marketing, a clearly defined strategy is a difficult thing to agree on. One of the best tricks I’ve found for doing this is to map (visually, if you can!) your high-level organisational strategy to your web and marketing strategies and look for common ground. It helps keep you and your team heading in the same direction, but is also useful for “justifying” digital activity.

#2: Decide What “Success” Is

Too often, organisations have badly-thought-out notions of “success”. Measuring success is easy in a profit-making organisation: leads, conversions, sales – etc. For everyone else, it’s often much harder. Strangely, our organisations often then fall back on “virtual visits” as the metric of choice, ignoring things which can be better indicators of engagement and success.

#3: Use Google Analytics

The emphasis on “use” in this one is there for a reason. Lots of organisations have installed GA and use it a bit – but few actually use it properly to try to understand how users are engaging with their content. This is hardly surprising given the huge and sometimes baffling amount of information the system offers you, but it is nonetheless something to focus on.

#4: Have a Social Media Strategy, Not Just A Presence

In the particular context of this conference, almost all of the organisations represented had a fairly strong presence on sites like Facebook, Twitter and so on. But few of them (and this is very common) had a sense of why. Social Media needs thinking about strategically in order for it to succeed in the longer term – and it needs to fit with your strategy and purpose. Sometimes this means not doing it!

#5: Be Aware Of How Your Organisation Fits

This one covers a whole range of stuff, from user testing to things like keyword monitoring and feed-reading. You can’t hope to market your content if you don’t understand the trends, people and technologies of your environment.

#6: Use A Dashboard

This one is for all the “I’m too busy to do all this stuff” people out there. Using a dashboard (for me, it’s a combination of Netvibes and Google Reader) saves a huge amount of time when it comes to monitoring all this activity. The Google Analytics dashboard is the same – use these tools to radically reduce the noise and replace it with signal.

#7: Build Internal Knowledge

Building knowledge within your organisation is often forgotten. Let people know what you’re doing – whether you’re talking about marketing activities, ways of measuring success or wider strategic goals. Send a monthly “KPI” emailing, have a “lunch and learn” session – do whatever it takes to keep people in the loop and break down those organisational silos. If you do this regularly you’ll start to understand what the barriers are and how to remove them – and you’ll probably get some interesting ideas from others about how to improve what you do as well.

#8: Fail Quickly: Be Iterative

It’s as true in marketing as in anything else: try stuff, see what works – build on what does, kill off what doesn’t. Use things like multivariate testing to tweak rapidly on the fly, and then use that knowledge the next time you launch a campaign, send a mailshot or whatever.
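To make “see what works” a little less hand-wavy, here’s a minimal sketch (plain Python, hypothetical numbers) of checking whether the difference in click-through between two mailshot subject lines is big enough to act on:

```python
from math import sqrt

def z_score(clicks_a, sent_a, clicks_b, sent_b):
    """Two-proportion z-test: is variant B's click rate genuinely different from A's?"""
    p_a, p_b = clicks_a / sent_a, clicks_b / sent_b
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
    standard_error = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    return (p_b - p_a) / standard_error

# Hypothetical mailshot: subject line A vs subject line B, 5,000 recipients each.
z = z_score(clicks_a=120, sent_a=5000, clicks_b=165, sent_b=5000)

# As a rule of thumb, |z| > 1.96 is roughly "significant at the 95% level":
# keep the winner and iterate. Anything smaller and you probably need more
# data before killing either variant off.
print(round(z, 2))
```

The same logic applies whether you’re testing subject lines, landing pages or calls to action – the point is to let the numbers, not hunches, decide what you build on.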

#9: Understand Search

Search is a powerful web traffic driver, but it needs to be understood in the context of SEO, “Search Intention” and other factors. Do what you can to get up to speed with how content and links can improve your search engine rankings, and what this means for your traffic and audiences.

#10: Share!

Talk to people at other similar organisations and ask them what they’re doing. Find out what works, what doesn’t, and why. Set up a monthly meeting to discuss your web stats and campaigns, or put together a discussion mailing list. Your peers are probably going to be the single best source of information – use them!

That’s it.

What do you think? How do you join up your digital activity in strategic ways in your organisations?

[slideshare id=6090989&doc=developingyourdigitalstrategyppteduformatting-101209093311-phpapp02]

Managing and growing a cultural heritage web presence

I’m absolutely delighted (and only slightly scared) to announce that I’ve been commissioned to write a book for Facet Publishing.

Ever since I started working with museums online, I’ve felt that there is a need for strategic advice to help managers of cultural heritage web presences. There are of course hundreds of thousands of resources if you’ve got technical questions, but not many places where you can ask things like “how should I build my web team and structure my budget?” or “how do I write a strategy or business plan?”.

Facet approached me in July asking whether I’d be interested in authoring something for them, and this seemed like the ideal opportunity to try and answer some of these questions.

My (draft) synopsis is as follows:

This book will provide a guide for anyone looking to build or maintain a cultural heritage web presence. It will aim to cater both to those who are single-handedly trying to keep their site running on limited budget and time, and to those who have big teams, large budgets and time to spend.

As well as describing the strategic approaches which are required to develop a successful online presence, the book will contain data and case studies on current practice from large and small cultural heritage institutions. This research will help give the reader an insight into how these institutions manage their websites as well as providing hints and tips on best practice. It will have an accompanying web presence which will provide template downloads and other up-to-date information including links and white papers.

As you’ll see, I have no intention of trying to do this all by myself – over the coming year I’m going to be on the phone to many of you (hide now!) asking how you do what you do, and compiling this into what I hope will be a useful guide.

If you have any ideas about what I should include, or the questions I should be asking – please do get in touch either via this blog or on Twitter at @m1ke_ellis!

Dear DCMS. Please find our stats.

* An open letter to whoever it may concern at the Department for Culture, Media and Sport *

Dear Sir/Madam

My attention was drawn recently to a Freedom of Information request which was made to you regarding museum web statistics.

The request was made by an ex-colleague and friend of mine, Frankie Roberto, who used the rather lovely WhatDoTheyKnow website to submit it. For those who don’t know, this site lets anyone submit and track FOI requests publicly.

As you’ll of course know, the original request went something like this:

Please could you send me the monthly website statistics for all of the museums which you hold data on, for as far back as the data is held?

Please also specify which metrics (eg hits, visits, unique visitors) are used, and which software is used to measure the statistics (if available).

Frankie submitted this on 15th April 2008.

A fair amount of correspondence seemed to go on between you and Frankie. I won’t repeat the content here. Instead I’d like to jump straight to the last letter on the thread, dated 6th June 2008. Here, you say this:

Following a search of our paper and electronic records, I have established that the information you requested is not held by this Department. We would advise you to seek the information from the website managers for the individual museums in which you have a particular interest.

This caught me slightly by surprise.

For seven years as Head of Web at NMSI I used to gather and coordinate web statistics (twice yearly, if I recall) for our three national museums (the Science Museum, London; the Railway Museum, York; and the Media Museum, Bradford). And (again, providing my memory hasn’t gone really badly wrong) I seem to remember that it was DCMS who asked for, and received, these stats.

I had a fabulous time working for NMSI, but I can say without hesitation that these six-monthly forays into the depths of log files and Excel spreadsheets were consistently the least pleasant bit of my job. It was important, however: DCMS web stats were, and probably still are, one of the measures by which funding was distributed to national museums. So we knuckled down and got on with it, painful though it was.

It was therefore with a certain amount of concern that I read your letter to Frankie.

Now – I do fully understand that organisations are big and that processes change. I also understand that things get lost. So I’m not going to get hysterical – but I do think it’s important that you maybe go have another look for them. Frankie tells me he doesn’t need the stats any more, but the more I think about it, the more I think it’s important that these are made public and available. At the very least, it’ll make me feel better for those dark Excel days.

I look forward to hearing from you

Regards

Mike Ellis

(Selling) content in a networked age

I’m just back from Torquay where I’d been asked to speak at the 32nd annual UKSG conference. I first came across UKSG more than a year ago when they asked me to speak at a London workshop they were hosting. Back then, I did a general overview of API’s from a non-technical perspective.

This time around, my presentation was about opening up access to content: the title “If you love your content, set it free?” builds on some previous themes I’ve talked and written about. Presenting on “setting content free” to a room of librarians and publishers is always likely to be difficult. Both groups are – either directly or indirectly – dependent on income from published works. I’m also neither publisher nor librarian, and although I spent some time working for Waterstone’s Online and know bits about the book trade, my knowledge is undoubtedly hopelessly out of date.

Actually, I had two very receptive crowds (thank you for coming if you were there!) and some really interesting debate around the whole notion of value, scarcity and network effects.

[slideshare id=1228656&doc=settingcontentfreeuksg2009final-090331123331-phpapp01]

Like any sector, publishers and librarians have their own language, their own agendas and their own histories of successes and failures. Also like any sector, they are often challenged to spend time thinking about the bigger picture. Day jobs are about rights and DRM, OPAC and tenure. They aren’t (usually) about user experience, big-picture strategy or considering and comparing approaches from other sectors.

What I wanted to do with the presentation was to look at some of the big challenges which face (commercial) material in the networked world by thinking a bit more holistically about people’s relationship with that content, and the modes of use that they apply to the stuff that they acquire via this networked environment.

The – granted, rather challenging – title of the presentation is actually a question cunningly disguised as a statement. Or maybe it’s a statement cunningly disguised as a question. I lost track. The thing I was trying to do with this questatement (and some people missed this, more fool me for being too subtle) was to say: “Look, here’s how many people are talking about content now: they’re making it free and available; they’re encouraging re-use; they’re providing free and open API’s. They’re understanding that users are fickle, content-hungry and often unfussy about the origin of that content. What, exactly, do we do in an environment like this? What are the strategies that might serve us best? Can we still sell stuff, and if so, how?”

The wider proposition (that content fares rather better when it is freed on the network than when it is tethered and locked down) is a source of fairly passionate debate. I’ve written extensively about Paulo Coelho’s experiments in freeing his books, about API’s, about “copywrong“, about value, authority and authenticity. The suggestion that if you free it up you will see more cultural capital is starting to be established in museums and galleries. The suggestion that you might, just might, increase your financial capital by opening up is for the most part considered PREPOSTEROUS by publishers. Giving away PDF’s increases book sales? Outrageous. Apart from the only example I’ve actually seen documented, of course, which is Coelho’s – and that seems to indicate a completely different story.

There are fine – and all the finer the closer you examine them – levels of detail. Yes, an academic market is vastly different from a popular one: you don’t have the scale of the crowd, the articles are used in different ways, the works are generally shorter, the audiences worlds apart. But nonetheless, Clay Shirky’s robust (if deeply depressing) angle on the future – sorry, lack of future – of the newspaper industry needs close examination in any content-rich sector. I don’t think anyone can deny the core proposition he holds up: that the problems which (newspaper) publishing solves – printing, marketing and distribution – are no longer problems in the networked age. I don’t think that what he’s saying is that we won’t have newspapers in the future, and he’s definitely not saying that we won’t need journalists. What he is saying – and this was the angle I focused on in my slides – is that this change is akin to living through a revolution. And with this revolution need to come revolutionary responses, and an understanding that the change is far bigger and more profound than almost anyone can anticipate. The open API is one such response (The Guardian “Open Platform” being an apposite example). Free PDF’s / paid books is another. Music streaming and the killing of DRM is another.

Revolutions are uncomfortable. The wholesale examination of an entire industry is horrifically uncomfortable. Just take a look at the music business and you’ll see a group of deeply unhappy executives sitting around the ashes of a big pile of CD’s as they mourn the good ‘ole times. But over there with music, new business models are also beginning to evolve and emerge from these ashes. Spotify is based on streaming, Last.fm is based on social, Seeqpod is a lightweight wrapper for Google searches, The Pirate Bay ignores everyone else and provides stuff for free.

Which ones are going to work? Which ones will make money? Which ones will work but displace the money-making somewhere else? The simple answer, of course, is that no-one really knows. Some models will thrive, others will fail. Some will pave a new direction for the industry; others we’ll laugh at in five years’ time.

So where can the answers be found? Predictably for me, I think all sectors (including academic publishing!) need to take a punt and do some lightweight experimentation. I think they need to be trying new models of access based around personalisation, attention data and identity. They need to examine who gets paid, how much and when. They need to be setting stuff free in an environment where they can measure – effectively – the impact of this freedom across a range of returns, from marketing to cultural to financial. If they do this then they’re at least going to have some solid intelligence to use when deciding which models to take forward. And it may be that this particular industry isn’t as challenged as most people assume, and that the existing models can carry on – lock it down, slap on some DRM, charge for access. It’d be far less uncomfortable if this were the case. But at least that decision would be made with some solid knowledge backing it up.

Open Access is one clear way of forging this debate ahead. But once you get under the apparently simple hood of the OA proposition, it actually turns out that not only are many institutions simply ignoring guidelines to produce OA versions of published works but that the payment models are complicated and based on a historical backdrop which to many seems inherently broken. I’d be interested to hear from someone with way more knowledge than me on the successes and failures or market research done on setting content free in this way.

It was clear to me in talking to a range of people at UKSG – librarians, publishers, content providers – that there are huge swathes of evidence missing: surprising, perhaps, from sectors which pride themselves on accuracy and academic rigour. When I asked “how many people aren’t coming to your site because search engines can’t see your content?” or “what is your e-commerce drop-out rate?” or “how much of your stuff do you estimate is illegally pirated?”, very few had coherent (or even vague, or indeed any!) answers.

More telling, perhaps, is that the informal straw-poll question I posed to various people during the conference – “Do you feel that this is a healthy industry?” – was almost always answered in the negative. And when I asked why, the near-consistent reply was: “It’s too complicated; too political; too entangled” – or, from one person: “the internet has killed us”.

I’m really not as naive as I sometimes appear 🙂 I know how terribly, terribly hard it is to unpick enormous, political and emotive histories. When I suggest that “we need to start again”, I’m obviously not suggesting that we can wipe the slate clean and redefine the entire value proposition across a multi-billion dollar, multi-faceted industry. But I think – simply – that awareness of the networked environment, a knowledge of how people really use the web today and an open mind that things might need to change in profound ways are very powerful starting points in what will clearly be an ongoing, fraught and fascinating discussion.

Creative Spaces – just…why?

There’s been a fair bit of buzz around the launch of the NMOLP (National Museums Online Learning Project) – now apparently renamed as “Creative Spaces” for launch.

I’ve known about this project for a long while – when I was at the Science Museum, very early discussions were taking place at the V&A about how to search and display collections results from more than one institution. The Science Museum were invited to take part in the project, but in the end didn’t because of resourcing and budgetary issues.

My second touch on the project was from the agency end – the ITT briefly crossed my desk at my current employer, Eduserv. We considered bidding, but in the end decided that it wasn’t a project we could deliver satisfactorily given the particulars of the scope and budget.

Back then – and I think now, although someone from NMOLP will have to confirm – the project was divided into two main sections: a series of “webquests” (online learning experiences, essentially) and a cross-museum collections search. The webquests can be seen here, but I’m not going to consider them in this post because I haven’t spent enough time playing with them to have an opinion yet.

The Creative Spaces site is at http://bm.nmolp.org/creativespaces/ – at first glance, it’s clean and nicely designed, with a bit of a web2.0 bevel thing going on. It’s certainly visually more pleasing than many museum web projects I’ve seen. The search is quick, and there’s at least a surface appearance of “real people” on the site. I hesitate to use the word “community” for reasons that I’ll highlight in a minute.

Design aside, I have some fairly big issues with the approach that is being taken here:

Firstly, this site, much like Europeana (which I’ll get my teeth into in a future post…) seemingly fails to grasp what it is about the web that makes people want to engage. I’m very surprised that we’re this many years into the social web and haven’t learnt about the basic building blocks for online communities, and are apparently unable to take a step back from our institutional viewpoint and think like a REAL user, not a museum one. Try looking at this site with a “normal person” hat on. Now ask yourself: “what do I want to DO here?” or “how can this benefit me?” or “how can I have fun”? Sure, you can create a “notebook” or a “group” (once you’ve logged in, obviously..). The “why” is unclear.

I’m also struck by how underwhelming the technology is. Take a look at www.ingenious.org.uk – a NOF-digitise project which I worked on maybe 5-6 years ago. Now, I’m not over-proud of this site – it took ages, nearly killed a few people from stress, and the end result could be better – but hey: it has cross-collections search, you can send an e-card, you can save things to your lightbox, you can create a web gallery. And this was more than five years ago. Even then, I was underwhelmed by what we managed to do. NMOLP doesn’t seem to have pushed the boundary beyond this at all, and as museums I think we should always be looking to drive innovation forward.

Secondly, I’m not sure that there is a reason why. Why would I possibly want to create a profile? Where is my incentive? Here’s Wikipedia talking about the Network Effect:

“A more natural strategy is to build a system that has enough value without network effects, at least to early adopters. Then, as the number of users increases, the system becomes even more valuable and is able to attract a wider user base. Joshua Schachter has explained that he built Del.icio.us along these lines – he built an online system where he could keep bookmarks for himself, such that even if no other user joined, it would still be valuable to him.”

The other day, I had a Twitter conversation with Giv Parvaneh, the Technical Manager at NMOLP, regarding this post, which talks about “monetizing” media. He blogged his response here. Now, we had a minor crossed-wires moment (it’s hard to discuss things in 140 characters) – but my point was not that museums should “monetize” everything (although I DO think that museums should learn about real business practices, but that’s another post altogether). My point was that users need to feel special to take part. They need to be part of a tribe, a trusted group who can do and say things that they find personally useful. They need experiences with integrity. If you’re not sure what I mean, just spend some time on the Brooklyn Museum collections pages. These guys get it – the “posse“, the “tag game“, the openness. Compare this back to the rather shallow-feeling experience you get on NMOLP. Now ask yourself: “where would I spend MY time?”.

The next major reason is that, once again, we’re failing to take our content to our users. This is a huge shortcoming of Europeana. People want experiences on their own terms, not on ours. Let’s not have another collections portal. Spend your social media money adding and updating entries on Wikipedia, or create an object-sharing Facebook application. Or just put everything on Flickr. And, please, please create an API or at the very least an OpenSearch feed. If the issue is something around copyright – go back to your funders and content providers and sit them down in front of Google Images for an hour so they can begin to understand how the internet works, before renegotiating terms with them!
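To show how small the ask is, here’s a minimal sketch (Python, standard library only) of how a third party could consume an OpenSearch feed if one were published – the description-document URL below is entirely hypothetical:

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OPENSEARCH_NS = "{http://a9.com/-/spec/opensearch/1.1/}"

def build_search_url(description_url, query):
    """Fetch an OpenSearch description document and fill in its URL template."""
    with urllib.request.urlopen(description_url) as response:
        root = ET.parse(response).getroot()
    template = root.find(OPENSEARCH_NS + "Url").get("template")
    # Substitute the search term; this sketch only handles the simplest
    # optional parameter and ignores the rest.
    url = template.replace("{searchTerms}", urllib.parse.quote(query))
    return url.replace("{startPage?}", "1")

# Hypothetical endpoint - no such document actually lives at this address:
# print(build_search_url("http://example.org/collections/opensearch.xml", "steam engine"))
```

Once something like this exists, anyone – Wikipedia editors, Facebook app builders, browser search boxes – can take the collections to wherever the users already are.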

The final reason hangs off the search facility. My vested interest here is of course hoard.it – and if you want to hear our rantings about the money spent on big, bad technology projects, then keep an eye out for our Museums and the Web paper. We aren’t necessarily suggesting that the hoard.it approach should be the technology behind cross-collections searching. But we are suggesting that the approach that NMOLP have taken is expensive, old, clunky and ultimately flawed. Although it is a trifle over-simplistic as a response: why not just spend £20-30k on a Google Search Appliance and simply spider the sites? Why reinvent the wheel and build search from scratch?

If I was less of a grumpy old man, I’d feel bad about being this negative – I like the people involved, I like the institutions, and I understand the reasons why (museum) projects spiral off in directions you probably wouldn’t ever choose. But then I remember that this site cost taxpayers just short of £2 million, and that Europeana will cost €120 million. And then I realise that we have an obligation to keep badgering, nagging and criticising until we start to get these things right.

At the end of the day, Frankie sums it all up much more succinctly in his email to the MCG list than I do in this post. He simply asks: why?

Where the F have you been?

It’s been a long while (possibly the biggest gap since the launch of this blog..) since my last post – over a month.

This is unprecedented for me, and I’ve had four or five emails (thanks!) asking me why. I’ve always dodged around with an answer, not because I was trying to avoid some horrific truth but because until the last couple of days I simply haven’t had the brain time to devote to the reasons.

The first part of the answer to “Mike, where the F have you been?” is this: I’ve been busy keeping balls in the air: another presentation (What does Web 2.0 DO for us?) which I delivered to a roomful at Online Information 2008 on 4th Dec…the beginning stages of writing a module for the new Digital Heritage MA/MSc at Leicester University – an opportunity which I’m hugely excited about, and not a little bit scared of, too…continuing work on three side-projects, none of which I can talk about just yet…development and writing for a corporate blog for internal comms…a desktop notification app…not to mention the hectic craziness of helping look after a 2-boy young family. Etcetefuckinra.

All of which is terribly boring, TBH, because if there’s one thing we all know about each other it is this: we’re all much too busy. In fact a corporate stat somewhere a while ago said that everyone believes themselves to be busier than 90% of everyone else. This is, of course, also true for me.

This leads to the second part of the answer: I’ve felt for a long time that the landscape of blogging has been changing considerably, particularly with lifestreaming now a part of our daily diet. I’ve blogged about noise on various occasions, and I’ve also noticed a huge shift in my own reading habits – a shift which has an obvious effect on my writing habits, too. I’m less interested in “blog post as news”, instead preferring longer, deeper, better written pieces like the beautifully-crafted Business Requirements Are Bullshit. I’m me – you’re you – but the important thing for me is that I write in a way which complements the medium and as much as possible brings some kind of value to those of you who have given up some of your valuable time to read what I have to say.

This brings me neatly on to the third part, which was summed up in a conversation with Brian Kelly and Paul Walk over a post-work pint recently: why the F do we all blog, anyway? We were talking at the time about Paul’s much-commented post on blog awards. Paul is similar to me – and different to Brian – in that he blogs as a hobby and not as a job. Paul runs his blog under his own name; Brian runs his (albeit not “officially”) under “UKWebFocus”. Brian has a series of blog policies and sticks closely to his particular topics; Paul could write about his washing powder if he so chose. I’ve always been clear (both to my readers and employers) that this isn’t a “work blog” – but it isn’t a “personal” one, either.

I started Electronic Museum as a way of reflecting on technology in the museum space. More than a year on and I’m interested in innovation, in technology ubiquity, in sharing data, in real people, in the value of attention data, in the user as focus. All of these call back to what makes museums unique, in my opinion, and it is in these arenas that I personally feel the battles for online content will be (or are being) fought and won. The point is it isn’t just a conversation about museums any more. And really, it never has been, in this always-on, radically-connected crazy internetwebthing we spend so much time staring at and talking about.

Much as I’ve carved a niche here with museum professionals who seem to value what I have to say, I’m also fascinated by the irony that nowadays it isn’t niche professionals that we need any more. Curators (museum and otherwise) – IMO – aren’t anything at all without the vision to see that what they know needs communicating in new, challenging ways; ways that may well undermine their professionalism purely because the social network they engage with has dug up someone who knows better than them. Content owners need to start to understand that value simply can’t be measured by “visits” when many people are out there having experiences with their content and not within the walled garden of their site. Technologists have got to stop hiding behind PEBCAC and start engaging with the people that are currently alienated by technology.

So what – exactly – am I saying?

I guess it is this: you’ll notice a shift over the coming weeks and months as I write about more of the things I’m doing outside of the museum space: my dabblings with the Arduino, for instance, the various other projects I’m continuously working on, a secretish partnership I’ll be able to talk about in January, and so on. I hope I won’t break the niche I’ve created – I hope that if you are a “museum professional” then you’ll continue to hang out here – I think what I have to say will be interesting, or at least mildly entertaining, whoever you are.

Webnoise

A lot of rumbling about the noise created by the (social) web has been reaching our ears recently. I’m not in this instance talking about the management of “outgoing” social media but more about how people deal with the sheer quantity of stuff which is arriving through various channels. The news feeds, tweets, emails, IM – all are part of the incoming stream. Then of course there are conversations with people in the real world (gasp!), paper-based print, TV and so on.

Fundamental, of course, to any conversation about technology is that you are ultimately destined to fail if you’re hoping to know everything. I’ve been following the conversation at a sprint for more than ten years now and like to think that I’ve got a reasonably good grasp of the web technologies out there, but it doesn’t take a genius to recognise that the speed of change is so intense that we’re all going to get left behind sooner or later. Those who have tried particularly hard to keep up have suffered because of it – Om Malik’s heart attack and the death of Russell Shaw are pretty well publicised. While much of the media swings off in obviously ridiculous “blogging kills you” type directions, there are still some lessons here. We’re all getting older (goddamit) and sooner or later we’ll be that “back in the days of X command-line interface, when the world was rosy” IT bod in meetings. Get used to it. I’m almost there already – remember the…NO, STOP…

There’s a tendency I’ve noticed when some are faced with this craziness: ostrich the problem. The argument is articulated like this: “With so much noise, maybe we’d be better off just not doing anything“. It’s either a conscious decision, or a rabbit-stuck-in-headlights paralysis. Either way, to me it’s always been the most spurious of positions to take. To steal and adapt (CC-styley!) a well-known phrase:

“where there is noise, there is signal”

Choosing to actively run away from the noise – to “not do the social web because it’s too noisy” – is a hugely perverse argument. Yes, there is noise and hype… no, Twitter probably won’t last… no, you shouldn’t be on Facebook just because you can… but the point, as far as I see it, is this: the social web has signal far above the hype – signal far stronger than the noise – provided you can take a step backwards and look at the direction of travel rather than the individual paths being walked. The social web is important because it lets us connect, not because it lets us tweet.

There’s no doubt that the noise is intense – unfiltered, it is way more than most of us can cope with. Here’s a (probably incomplete) list of my current inputs. Every one of them is a stream of information but also a potential distraction, red herring, attention-grabber, too:

email (Outlook), email (Gmail), twitter (via twhirl), IM (Google Talk), IM (MSN), IM (Skype), phone (mobile), phone (desk), phone (skype), feeds (google reader), “the web”, …not forgetting conversations with real people…

I may be in the upper quartile of “wiredness” but I’ll bet most of you are exposed to these, and some possibly more.

As many commentators have pointed out, as the noise continues to grow (which it will), the signal-to-noise ratio drops and the need for us to find mediated experiences becomes ever more important. My good friend Dan Zambonini pointed me to this excellent blog post by Kevin Kelly. Here’s a quote:

“I have tried to temper my celebration of the bottom with my belief that the bottom is not enough for what we really want. To get to the best we need some top down intelligence, too. I have always claimed that nuanced view. And now that crowd-sourcing and social webs are all the rage, it’s worth repeating: the bottom is not enough. You need a bit of top-down as well.”

He’s right, of course – the lesson we all take away is that although the technologies get more “intelligent” (dare I say, “Semantic”…?), the noise is probably increasing at a far greater rate. The net result: at the very least a cancelling-out of the “filtered benefit”, and more likely just more and more noise.

The human author – the top-down influence in Kelly’s post – is the conduit by which everything is managed. This role isn’t going anywhere, but it’s easy to forget this when we’re all getting excited about the machine-processable web, the API, Twitter and so on.

The human element is always going to be the single most important thing in the equation, which is exactly why the social web is so important, and can’t – or won’t – be ostriched.

Museums and the Web – Tuesday

So here I am in Montreal for Museums and the Web 2008. The journey was ok apart from the obligatory 2 hour delay out of Heathrow. Someone apparently spotted a snowflake on the runway so everything ground to a halt while they dispatched the emergency extreme weather squad to sort it out.

They know how to do weather over here. It obviously hasn’t snowed for a while, but there are still remnant piles, 6-7 feet thick, just knocking around the town. Show that to anyone in the UK and the transport infrastructure would fall apart in seconds.

So – this week at Museums and the Web: today, the pre-conference Semantic Web workshop. Wednesday, I’m running a blogging workshop with Brian Kelly in which I’ll be talking about this blog: why I do it, how it’s going, what I’ve learnt. The afternoon is my workshop on mashups. Slides and stuff for all of the above coming shortly.

Then on Thursday the conference sessions start. On Friday I’m back in front of people with Brian for our paper, ‘What does openness mean to museums?’.

Meanwhile, I’ve provided Jennifer and David with OneTag for the week – the aim, in a nutshell, is to try and capture the ‘buzz’ around the conference by aggregating any blog posts and tweets tagged ‘mw2008’ and doing stuff with this content. J + D have found a bunch of willing volunteers to blog alongside the people like me who’d be doing it anyway. Basically, everyone is being encouraged to tag and post as much as possible.
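For the curious, the aggregation side of something like this is lightweight stuff. Here’s a minimal sketch (Python, using the third-party feedparser package; the feed URLs are hypothetical) of pulling together anything tagged ‘mw2008’ from a list of feeds:

```python
import feedparser  # third-party: pip install feedparser

# Hypothetical feeds to watch - in practice, the volunteers' blogs plus
# a search feed for the conference tag.
FEEDS = [
    "http://example.org/a-museum-blog/feed",
    "http://example.com/another-blog/rss",
]
TAG = "mw2008"

def tagged_entries(feed_urls, tag):
    """Yield (title, link) for every entry carrying the conference tag."""
    for url in feed_urls:
        for entry in feedparser.parse(url).entries:
            tags = {t.get("term", "").lower() for t in entry.get("tags", [])}
            if tag in tags:
                yield entry.get("title", "(untitled)"), entry.get("link", "")

for title, link in tagged_entries(FEEDS, TAG):
    print(title, "-", link)
```

Everything tagged consistently then rolls up into one stream that can be re-published, shown on screens at the venue, or just archived as a record of the conference buzz.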

Have a look at:

More later.