Chris Dent, whose blog is a treasure trove of interesting thinking about wikis, asks:
Even if you have everyone writing in blogs every day, how do you ensure that all those stories are distilled for information that is useful tomorrow, next month, next year and five years down the road?
This is a great question, and one that I have also been puzzling over a lot in my current employ. When we took over the running of Ireland’s oldest ISP a couple of years ago, there was a huge information-loss problem. So within the first week or so we set up RT to track customer enquiries, a blog for each member of staff to narrate their work, and a wiki to act as a basic customer management system and repository of useful information (it has since grown into much more, including our accounts package, but that’s a different story).
Now, we often have the opposite problem. I remember seeing the information somewhere, but can’t remember where it is – is it buried in a customer dialogue within RT? Did someone write it up on their blog? Was it added to the wiki?
In this version of the problem, ‘search’ is probably the simplest answer. But as Chris points out, search isn’t always the right answer – and I’d go further than he does in explaining why.
Search is useful when you’re looking for something, and you know what it is. Often you don’t have both halves of that. When a customer contacts you, for example, you should be able to pull up a single page of details about them where all the important facts are listed: what level of support they have, what services they have, major problems on their account, key personnel, and so on. If a customer has recently had significant problems with their email, but this hasn’t been recorded on the wiki, then the person dealing with the customer now doesn’t know about it – and so they’re not even going to consider searching for information about it. Even if they have a vague memory of overhearing someone mention some sort of problem with the customer’s account a month or so ago, they probably don’t have enough information to search with.
It wouldn’t matter how great a search appliance we had, normal search just wouldn’t help here. This is where, as Chris points out, the process of wiki gardening comes in. Someone needs to tend to the wiki, carefully pruning back the less relevant information, and reshaping each page into its most useful form.
But this is a time-consuming operation, and most people simply don’t have that sort of time. It’s hard enough trying to ensure that a summary of the key facts from each customer interaction gets copied over onto their wiki page, without also needing to spend another five minutes tidying that page up. In larger organisations, where call-centre staff are measured on how many queries they can handle per hour, the disincentive is much, much stronger.
And, of course, any time a human is copying information between two different computer systems, a giant red flag should pop up and scream that something really bad is going on.
I don’t have any great answers to this problem, but I wholeheartedly agree that enterprise wikis need to provide better tools for dealing with information stored outside themselves.
JotSpot did quite a lot of work in this area, providing two-way integration with Salesforce.com and the like, but although that made for a cool demo, I don’t think it’s really what’s needed. Rather than just replicating the data stored elsewhere, the wiki needs to let you summarise what’s there, and then direct you off to view the full detail in situ. (Ideally, searching within the wiki should still pick up the full content.) But, crucially, there needs to be a way for the wiki to know when there’s un-summarised data needing to be handled, rather than relying on users to remember to copy the data across.
Perhaps in the first instance it’s as simple as being able to add a little gizmo to a page that tells it how to find, for example, all RT tickets for this customer. The page would then automatically list each ticket – initially in a default manner, but with each entry editable in the traditional wiki way. As we move further into a world where different pieces of software can talk to each other via web services, it should become easier and easier for this sort of information to be pulled across.
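By way of illustration, here’s a minimal sketch of that gizmo in Python. The fetch step is a stub – a real version would call RT’s web-service interface – and every function name and data shape here is invented for the example; the point is just the split between a machine-generated default listing and hand-edited overrides:

```python
# Sketch of a wiki "gizmo" that lists RT tickets for a customer.
# fetch_tickets() is a stand-in for a real web-service call to RT;
# the ticket fields below are an assumed shape, not RT's actual schema.

def fetch_tickets(customer):
    """Stub: pretend we queried RT for this customer's tickets."""
    return [
        {"id": 1042, "subject": "Mail relay rejecting messages", "status": "resolved"},
        {"id": 1107, "subject": "ADSL line drops each evening", "status": "open"},
    ]

def default_summary(ticket):
    """Machine-generated one-line wiki rendering of a ticket."""
    return f"* [RT #{ticket['id']}] {ticket['subject']} ({ticket['status']})"

def render_gizmo(customer, overrides=None):
    """List each ticket, preferring any hand-edited summary over the default."""
    overrides = overrides or {}
    return "\n".join(
        overrides.get(t["id"]) or default_summary(t)
        for t in fetch_tickets(customer)
    )

# A wiki editor has rewritten the entry for ticket 1042 in place:
print(render_gizmo("ExampleCo",
                   overrides={1042: "* [RT #1042] ''Resolved'': mail relay had a misconfigured DNS entry"}))
```

The key property is that un-edited tickets keep tracking the external system automatically, while anything a human has summarised stays as written.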
Semantic MediaWiki already provides a syntax for querying information within the wiki (although there’s no way yet to manually manipulate the results), so something like this could probably be repurposed to query information outside the wiki in a similar manner. Time to go ask on the semantic wiki list whether anyone’s working on anything like this. And I’ll certainly be paying close attention to how Socialtext (or anyone else, for that matter) tries to solve this issue.
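For concreteness, a Semantic MediaWiki inline query looks something like the following (the category and property names are invented for the example). The imagined extension would be an analogous construct whose results come from an external system such as RT rather than from the wiki’s own pages:

```
{{#ask: [[Category:Ticket]] [[Belongs to customer::ExampleCo]]
 | ?Status
 | ?Subject
 | format=table
}}
```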