Our growing user base is pretty excited about using altmetrics to tell better stories about their impact, and we're passionate about helping them do it. So while we both love discussing altmetrics' pros and cons, we prefer to err on the side of doing over talking, and as a result we don't blog about it much.
But we appreciated David Colquhoun's effort to spark a discussion with his recent blog post, so we're jotting down a few quick thoughts here in response. It was an interesting read, in part because David seems to imagine we disagree a lot more than we in fact do.
We agree that bibliometrics is a tricky and complicated topic; folks have been arguing about the applicability and validity of citation mining for decades now [paywall], in far more detail than either David or we have time to cover completely. What is certain, though, is that the use of citation-based metrics like the Impact Factor has become deeply pathological.
That’s why we’re excited to be promoting a conversation reexamining the metrics of science, a conversation asking whether academia as an institution is really measuring what’s meaningful. And of course the answer is: no. Not yet. So, as an institution, we need to (1) stop pretending we are and (2) start finding ways to do better. At its core, this is what altmetrics is all about: not Twitter or any other particular platform. And we’re just getting started.
We couldn’t agree more that post-publication peer review is the future of scholarly communication, and we think altmetrics will be an important part of that future. Scientists won’t have time to Read All The Things in the future, any more than they do now. Future altmetrics systems, especially as we begin to track who discusses papers in various environments and what they’ve said, will help digest, report, flag, and attract expert assessments, making a publish-then-review ecosystem practical. Even today, lists like the Altmetric Top 100 can help attract expert review like David’s to the highly shared papers where it’s particularly needed.
We agree that a TL;DR culture does science no favors. That’s why we’re enthusiastic about the potential of social media and open review platforms to help science move beyond the formalized swap meet of journal publishing, on to actual in-depth conversations. It’s why we’re excited about making research conversation, data, analysis, and code first-class scholarly objects that fit into the academic reward system. It’s time to move beyond the TL;DR of the article, and start telling the whole research story.
So we’re happy that David agrees we must “give credit for all forms of research outputs, not only papers.” Of course, not everyone agrees with David or Jason or Heather. We hear from lots of researchers that it’s an uphill battle to argue that their datasets, blog posts, code, and other products are really making an impact. We also hear that Impactstory’s normalized usage, download, and other data helps them make that case, and we’re pretty happy about that. Our data could be a lot more effective here (stay tuned, we’ve got some features rolling out for this…), but it’s a start. And starts are important.
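For readers curious what “normalized” means in practice, here’s a minimal sketch of one common approach: expressing a raw count as a percentile against a reference sample of similar products. The function name and the reference numbers below are made up for illustration; this isn’t Impactstory’s actual pipeline, just the general idea.

```python
# Hypothetical sketch of percentile normalization for a usage metric.
# The reference sample and function name are illustrative only.
from bisect import bisect_left

def percentile_rank(value: int, reference_counts: list[int]) -> float:
    """Return the percentage of reference items whose count falls below `value`."""
    ordered = sorted(reference_counts)
    below = bisect_left(ordered, value)  # items strictly less than `value`
    return 100.0 * below / len(ordered)

# e.g., download counts of other datasets deposited around the same time (made-up numbers)
reference = [3, 7, 12, 18, 25, 40, 66, 90, 150, 410]

print(percentile_rank(66, reference))  # 60.0
```

The payoff is rhetorical as much as statistical: “more downloads than 60% of similar datasets” is a far easier claim to put on a CV than a raw count with no context.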
So are discussions. Thanks, David, for sharing your thoughts on this, and sorry we don’t have time to engage more deeply. If you’re ever in Vancouver, drop us a line and we’ll buy you a beer and have a Proper Talk :). And thanks to everyone else in this growing community for keeping the great discussions on open science, web-native scholarship, and altmetrics going!