We talk a lot about metrics. And when you do that, there’s always the risk that what you’re measuring, or why, will become unclear. So this is worth repeating, as I was reminded in a nice conversation with Anurag Acharya of Google Scholar (thanks, Anurag!).
Metrics are no good by themselves. They are, however, quite useful when they inform filters. In fact, filtering by definition requires measurement or assessment of some sort. If we find new, relevant things to measure, we can build new filters along those dimensions. That’s what we’re excited about, not measuring for its own sake.
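To make that concrete, here’s a toy sketch of the idea in Python. Every field name, number, and threshold below is invented for illustration; the point is just that each measurable dimension gives you a new predicate to filter on:

```python
# Toy sketch: a "filter" is just a predicate over measured dimensions.
# All field names and thresholds here are made up for illustration.

articles = [
    {"title": "A", "citations": 40, "mendeley_saves": 12, "tweets": 3},
    {"title": "B", "citations": 2,  "mendeley_saves": 90, "tweets": 150},
]

def heavily_saved(article, min_saves=50):
    """One new measurable dimension -> one new filter."""
    return article["mendeley_saves"] >= min_saves

print([a["title"] for a in articles if heavily_saved(a)])  # -> ['B']
```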
These filters can mediate search for literature. They can also filter other things, like job applicants or grant applications. But they’re all based on some kind of measurement. And expanding our set of relevant features (and perhaps a machine-learning context is more useful here than the mechanical filter metaphor) is likely to improve the validity and responsiveness of all sorts of scholarly assessment.
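In that machine-learning framing, each new metric is simply another feature column. Here’s a hypothetical sketch; the data is synthetic and the model choice is arbitrary, but it shows what “expanding the feature set” means in practice:

```python
# Hypothetical sketch: treat relevance as a learning problem, where each
# new metric is just another feature column. All data below is synthetic.
from sklearn.linear_model import LogisticRegression

# Columns: [citations, mendeley_saves, tweets] -- an expanded feature set.
X = [[40, 12, 3], [2, 90, 150], [5, 4, 1], [60, 70, 20]]
y = [1, 1, 0, 1]  # 1 = judged relevant by some (synthetic) gold standard

model = LogisticRegression().fit(X, y)

# Estimated relevance for a new article, using all three dimensions at once.
print(model.predict_proba([[10, 80, 40]]))
```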
The big question, of course, is whether altmetrics like tweets, Mendeley saves, and so on are actually relevant features. We can’t yet prove that one way or the other, although we’re working on it. I do know that they’re relevant sometimes, and I suspect they’ll become more relevant as more scholars move their professional networks online (another assumption, but I think a safe one).
And of course, measuring and filtering are only half the game. You also have to aggregate, to pull the conversation together. Back when citation was the only visible edge in the network, we used ISI et al. to do this. Of course the underlying network was always richer than that, but the citation graph was the best trace we had. Now the underlying processes (conversations, reads, saves, and so on) are becoming visible as well, and there’s even more value in pulling together these latent, disconnected conversations. But that’s another post 🙂
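As a small teaser for that post, though, here’s one toy way to picture aggregation: merging per-source event streams keyed by an article identifier. All the sources, DOIs, and events below are invented:

```python
# Toy sketch: aggregation as merging per-source event streams keyed by
# article identifier (DOI). Sources and events are invented for illustration.
from collections import defaultdict

twitter_events  = [("10.1234/abc", "tweet"), ("10.1234/abc", "tweet")]
mendeley_events = [("10.1234/abc", "save"), ("10.5678/xyz", "save")]
citations       = [("10.5678/xyz", "citation")]

conversation = defaultdict(list)
for source in (twitter_events, mendeley_events, citations):
    for doi, event in source:
        conversation[doi].append(event)

# Each DOI now gathers its formerly disconnected traces in one place.
print(dict(conversation))
```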