Impact Challenge Day 3: Create a Google Scholar Profile

We’ve covered two of academia’s most popular social networks for the November Impact Challenge so far. Let’s now dig into the research platform that’s most often used by researchers: Google Scholar.

Google Scholar offers a popular way to create a profile that showcases your own papers and the citations they’ve received. It also calculates a platform-dependent h-index, which many researchers love to track (for better or for worse).

In today’s challenge, we’re going to get you onto Google Scholar, so you can up your scholarly SEO (aka “googleability”), more easily share your publications with new readers, and discover new citations to your work from across the entire scholarly web.

Step 1: Create your basic profile

Log on to scholar.google.com and click the “My Citations” link at the top of the page to get your account setup started.

On the first screen, add your affiliation information and university email address, so Google Scholar can confirm your account. Add keywords that are relevant to your research interests, so others can find you when browsing a subject area. Provide a link to your university homepage, if you have one. Click “Next Step,” and–that’s it! Your basic profile is done. Now, let’s add some publications to it.

Step 2: Add publications

Google has likely already been indexing your work for some time now as part of their mission as a scholarly search engine. So, this step is pretty easy compared to what it takes to get your work onto Academia.edu or ResearchGate.

Google Scholar will provide you with a list of publications it thinks belong to you. Read through the suggestions and select which ones you want to add to your profile. Beware: if you have a common name, some publications in this list probably aren’t yours. There may also be content you don’t want on your profile because it isn’t a scholarly article, isn’t representative of your current research path, and so on. Deselect anything you don’t want to add–say, a newsletter item that Google Scholar mistakes for a scholarly article–then click the grey “Add” button at the top of your profile.

Next, confirm you want Google to automatically add new publications to your profile in the future. Note that this might add publications you didn’t author to your profile if you’ve got a very common name, but can be worth it for the time it saves you approving new articles every month.

Your profile is now almost complete! Two more steps: add a photo by clicking the “Change Photo” link on your profile homepage, and set your private profile to “Public.”

Step 3: Make your profile public

Your profile is private if you’ve just created it. Change your profile visibility by clicking “Edit” next to “My profile is private” and then selecting “My profile is public” in the drop-down box.

Bonus: Add co-authors

While your profile is technically complete, you’ll want to take advantage of Google Scholar’s built-in co-authorship network. Adding co-authors is a good way to let others know you’re now on Google Scholar, and will be useful later on in the Challenge, when we set up automatic alerts that can help you stay on top of new research in your field.

To add a suggested co-author, find the “Add Co-authors” section on the top right-hand section of your profile, then click the plus-sign next to each co-author you want to add.

That’s it! Now you’ve got a Google Scholar profile that helps you track when your work has been cited both in the peer-reviewed literature and elsewhere (more on that in a moment), and is yet another scholarly landing page that’ll connect others with your publications. The best part? Google Scholar’s pretty good at automatically adding new stuff to your profile, meaning you won’t have to do a lot of work to keep it up.

Limitations

Dirty data in the form of incorrect publications isn’t the only limitation of Google Scholar you should be aware of. The quality of Google Scholar citations has also been questioned, because they’re different from what scholars have traditionally considered to be a citation worth counting: a citation in the peer-reviewed literature.

Google Scholar counts citations from pretty much anywhere they can find them. That means their citation count often includes citations from online undergraduate papers, slides, white papers and similar sources. Because of this, Google Scholar citation counts are much higher than those from competitors like Scopus and Web of Science.

That can be a good thing. But you can also argue it’s “inflating” citation counts unfairly. It also makes Google Scholar’s citation counts quite susceptible to gaming techniques like using fake publications to fraudulently raise the numbers. We’ve not heard many evaluators complaining about these issues so far, but it’s good to be aware of.

Google Scholar also shares a limitation with ResearchGate and Academia.edu: it’s somewhat of an information silo. You cannot export your citation data, meaning that even if you were to amass very impressive citation statistics on the platform, the only way to get them onto your website, CV, or an annual report is to copy and paste them–way too much tedium for most scientists to endure. Their siloed approach to platform building definitely contributes to researchers’ profile fatigue.

Its final major limitation? There’s no telling if it will be around tomorrow. Remember Google Reader? Google has a history of killing beloved products when the bottom line is in question. It’s no exaggeration to say that Google Scholar Profiles could go away at any moment.

That said, the benefits of the platform outweigh the downsides for many. And we’re going to give you a way to beat part of the “information silo” problem in today’s homework.

Homework

Google Scholar can only automate so much. To fully complete your Google Scholar profile, let’s manually add any missing articles. And let’s also teach you how to export your publication information from Google Scholar, because you’ll want to reuse it on other platforms.

1. Add missing articles

You might have an article or two that Google Scholar didn’t automatically add to your profile. If that’s the case, you’ll need to add it manually.

Click the “Add” button in the grey toolbar in the top of your profile.

On the next page, click the “Add articles manually” link in the left-hand toolbar. A manual entry form will appear, where you can add new papers to your profile. Include as much descriptive information as possible–it makes it easier for Google Scholar to find citations to your work. Click “Save” after you’ve finished adding your article metadata, and repeat as necessary until all of your publications are on Google Scholar.

2. Clean up your Google Scholar Profile data

Thanks to Google Scholar Profiles’ “auto add” functionality, your Profile might include some articles you didn’t author.

If that’s the case, you can remove them in one of two ways:

  • clicking on the title of each offending article to get to the article’s page, and then clicking the “Delete” button in the top green bar

  • from the main Profile page, ticking the boxes next to each incorrect article and selecting “Delete” from the drop-down menu in the top grey bar

If you want to prevent incorrect articles from appearing on your profile in the first place, you can change your Profile settings to require Google Scholar to email you for approval before adding anything. To make this change, from your main Profile page, click the drop-down menu in the top grey bar and select “Profile updates.” On the next page, change the setting to “Don’t automatically update my profile.”

Prefer to roll the dice? You can keep a close eye on what articles are automatically added to your profile by signing up for alerts and manually removing any incorrect additions that appear. Here’s how to sign up for alerts: click the blue “Follow” button at the top of your profile, select “Follow new articles,” enter your email address, and click “Create alert.”

3. Learn how to export your publications list in BibTeX format

There will likely be a time when you’ll want to export your Google Scholar publications to another service like Impactstory or Mendeley. Here’s how to do that.

Tick the box next to each article whose details you want to export, or tick the top left-hand box to select all articles on your profile. With the relevant articles selected, click the “Export” button in the grey toolbar and choose BibTeX. Next, choose to export either the selected articles or all articles from your profile, then click the final “Export” button to download your “citations.bib” file.
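
To give you a sense of what that export gets you, here’s a minimal sketch of reusing the file programmatically. It assumes the third-party bibtexparser Python package (pip install bibtexparser), and the sample entry is a made-up illustration of the kind of record Google Scholar exports:

```python
# A sketch of reusing the exported citations.bib file, assuming the
# third-party "bibtexparser" package. The entry below is a made-up
# example; the real export contains one such entry per article.
import bibtexparser

sample = """
@article{smith2014example,
  title   = {An example article},
  author  = {Smith, Jane and Doe, John},
  journal = {Journal of Examples},
  volume  = {12},
  pages   = {34--56},
  year    = {2014}
}
"""

db = bibtexparser.loads(sample)
# For the real file: db = bibtexparser.load(open("citations.bib"))
for entry in db.entries:
    # Each entry is a plain dict, easy to reformat for a CV or website.
    print(entry["year"], "-", entry["title"])
```

Because BibTeX is plain text, the same file can also be imported directly into services like Mendeley without any scripting at all.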

4. Explore your citations

Your final bit of homework is to enjoy learning about all the different places you’ve been cited. Because Google Scholar indexes citations from across the entire scholarly web, you’re likely being cited in many places around the world, in many different publication formats.

Take some time to look not only at the numbers Google Scholar provides, but to also click through the numbers to see the actual citing publications themselves. Read them. See if you can’t connect with the authors on ResearchGate or Academia.edu, if you’re so inclined.

And if you haven’t yet been cited, don’t despair! Now that more people have the opportunity to find your research on Google Scholar and elsewhere, the citations likely aren’t far away.

In coming days, we’ll cover how to use Google Scholar to stay abreast of new research in your field and new citations to your work. Stay tuned!

Impact Challenge Day 2: Make a ResearchGate profile

Yesterday, you used Academia.edu to make new connections, find new readers for your work, and track how often your work is being read.

Today, we’ll help you master the other major player in the scholarly social network space, ResearchGate. ResearchGate, which claims 5 million scientists as users, will help you connect with many researchers who aren’t on Academia.edu (especially those outside North America). It can also help you understand your readers through platform-specific metrics, and confirm your status as a helpful expert in your field with their “Q&A” feature.

Given ResearchGate’s similarity to Academia.edu, I won’t rehash the basics of setting up a profile and getting your publications online. Go ahead and sign up, set up your account (remember to add detailed affiliation information and a photo), and add a publication or two.

Got your basic profile up and running? Great! Let’s drill down into those three unique features of ResearchGate that you’re going to explore for your Day 2 Challenge.

Finding other researchers & publications

Finding other researchers and publications on ResearchGate works a bit differently than on Academia.edu. Rather than allow you to specify “research interests” and find other researchers that way, ResearchGate automatically creates a network for you based on who you’ve cited, who you follow and what discipline you selected when setting up your profile.

So, key to creating a robust network is uploading papers with citations to be text-mined, and searching for and following other researchers in your field.

Searching for other researchers in your field is easy: using the search bar at the top of the screen, type in your colleague’s name. If they’re on the site, they’ll appear in the dynamic search results, as we see below with Impactstory Advisor Lorena Barba:

Click on their name in the search results to be taken to their page, where you can explore their publications, co-authors, and so on, and also follow them to receive updates.

ResearchGate also text-mines the publications you’ve uploaded to find out who you’ve cited; it adds both researchers you’ve cited and researchers who have cited you to your network, as well as colleagues from your department and institution.

Here’s how to explore your network: click the “Publications” tab at the top of your screen to begin exploring the publications that are in your network. You can browse the most recent publications in your area of interest, your network, and so on, using the navigation bar at the top of the page.

If you find an interesting publication, you can click the paper title to read the paper or click on the author’s name to be taken to their profile, where you can explore their other publications or choose to follow them, adding a new colleague to your network in a snap.

ResearchGate Score & Stats

If you’re into metrics, the ResearchGate score and stats offer lots to explore. The ResearchGate score is an indicator of your popularity and engagement on the site: the more publications and followers you have, and the more questions you ask and answer, the higher your score. Christoph Lutz’s ResearchGate score is one of the more diversely-sourced scores I’ve seen to date.

ResearchGate also helpfully provides a percentile, so you know how a score stacks up against other users on the site. The score isn’t normalized by field, though, so using it to compare yourself to others isn’t recommended.

Some other downsides to be aware of: ResearchGate scores don’t take into account whether you’re first author on a paper, they weigh site participation much more highly than other (more important) indicators of your scientific prowess, and they don’t reflect the reality of who’s a high-impact scientist in many fields. So, caveat emptor.

All that said, ResearchGate scores are fun to play around with and explore. Just be sure not to take them too seriously.

The stats are also illuminating: they tell you how often your publications have been viewed and cited on ResearchGate both recently and over time, what your top publications are, and the popularity of your profile and any questions you may have asked in the site’s Q&A section. On your profile page, you’ll see a summary of your stats.

If you click on those stats, you’ll be taken to your stats page, which breaks down all of your metrics with nice visualizations.

A caveat: like Academia.edu stats, ResearchGate stats cover only content hosted on ResearchGate, so they can’t tell you much about readership or citations of your work hosted on other platforms.

Q&A

Now that we’ve made some passive connections by following other researchers, let’s build some relationships by contributing to the Q&A section of the site.

In the Q&A section, anyone can pose a question, and if it’s related to your area of expertise, ResearchGate will give you the opportunity to answer. We’ll talk more about the benefits of participating in the Q&A section in the coming days, but basically it’s a good opportunity to help other researchers and get your name out there.

Click on “Q&A” at the top of your screen and explore the various questions that have been posed in your discipline in recent weeks. You can also search for other topics, and pose questions yourself.

Two more cool ResearchGate features worth mentioning: they mint DOIs, meaning that if you need a permanent identifier for an unpublished work, you can get one for free (though keep in mind that they haven’t announced a preservation plan, meaning their DOIs might be less stable over time than DOIs issued by a CLOCKSS-backed repository like Figshare). And you can also request Open Reviews of your work, which allows anyone on ResearchGate who’s in your area of expertise to give you feedback–a useful mechanism for inviting others to read your paper. It’s a feature that hasn’t seen much uptake, but is full of possibilities in terms of connecting other researchers to your work.

Limitations

Several readers have pointed out that Academia.edu and ResearchGate are information silos–you put information and effort into the site, and can’t easily extract and reuse it later. And they’re absolutely correct. That’s a big downside of these services and a great reason to check out open alternatives like PeerLibrary, ORCID, and Impactstory (more on the latter two services in the days to come).

Some other drawbacks to both Academia.edu and ResearchGate: they’re both for-profit, venture capital funded platforms, meaning that their responsibility isn’t to academics but to investors. And sure, they’re both free, which seems like an advantage until you remember that it means that you are the product, not the customer.

One solution to these drawbacks is to limit the amount of time you spend adding new content to your profiles on these sites, and instead use them as a kind of “landing page” that can simply help others find you and your three or four most important publications. Even if you don’t have all your publications on either site, their social networking features are still useful to make connections and increase readership for your most important work.

In the coming days, we’ll cover other web services that offer auto-updates and data portability, so you don’t end up suffering from Profile Fatigue.

Two more things:

  1. Be sure to check your ResearchGate notification settings to cut down on spam. They send more emails than most email-fatigued academics care to receive.
  2. Make sure you’ve opted out of sending invitations, so you don’t accidentally contribute to spamming others.

Homework

Set up your ResearchGate profile and add at least three publications you think deserve attention. Next, search for at least five colleagues or well-known researchers in your field and follow each of them. Once you’ve established a network, take 10 minutes to explore the “Publications” tab of ResearchGate, browsing publications that have been recently published in your network.

In the coming days, take another 10 minutes to explore your ResearchGate score and stats. Are there any that surprise you, in terms of what’s getting a lot of readers? How might you incorporate this information into your professional life outside of ResearchGate: would you put it on your CV or website, into an annual review or grant application in order to showcase your “broader impacts”? It’s ok if you say “no” to these ideas–the point is to get you thinking about what these metrics mean, and if and when you might use them professionally.

As for the Q&A section of ResearchGate–we’ll cover that soon. Stay tuned!

Day 2: Nailed it.

 


Now you’ve got connections on two of academia’s biggest social networks, and you’ve increased potential exposure for your publications, to boot. You’ve also got two new sources of metrics that’ll show how often you’re read and cited.

Are you ready for Day 3? We’re going to cover Google Scholar Profiles–a great tool for finding citations, upping your “googleability” even further, and staying on top of new publications in your field.

Until then, we welcome bragging about your ResearchGate mastery in the comments below! Questions also welcome. 🙂

PS It’s Day 2 and the November Impact Challenge is in full swing. It’s work, right? But stick with it–the work is worth it!

In 28 more days, your network and professional visibility will be in a place many scholars take years to reach, and ready to grow even more.

And today, we’d like to give you an extra little incentive. Here’s a deal: if you can finish all 30 days, we’ll hook you up with this free t-shirt to show off your achievement!

Screencap of the "Finisher" t-shirt, showing a boxer in silhouette with the words "Finisher: November Impact Challenge" on it.

More info to come!

November Impact Challenge Day 1: Make a profile on Academia.edu

Welcome to the November Impact Challenge!

Over the next 30 days, we’re going to work together to supercharge your research impact. You’ll:

  • upgrade your professional visibility by conquering social media,

  • boost your readership and citations by getting your work online,

  • stay atop your field’s latest developments with automated alerting,

  • lock in the key connections with colleagues that’ll boost your career, and

  • dazzle evaluators with comprehensive tracking and reporting on your own impacts.

Each day’s challenge will look like this: we’ll describe that day’s important principle–why it’s important, how you can get started, and some resources to help you excel–and then share a homework assignment, where you’ll apply the concepts we cover in that day’s post.

Are you ready? Let’s dive in, starting with scholarly social media.

Make a profile on Academia.edu

You know all those things you wish your CV was smart enough to do–embed your papers, automatically give you readership statistics, and so on? Academia.edu and ResearchGate (which we’ll cover in tomorrow’s challenge) are two academic social networks that allow you to do these things and a lot more.

Perhaps more importantly, they’re places where your colleagues are spending a lot of their time. Actively participating on one or both networks will give you ample opportunities to connect with other researchers. And getting your publications and presentations onto these sites will make it easier for others to encounter your work, not only through the social network they help you build, but also by improving the search engine optimization (SEO) of your research, making you much more “googleable.”

Generally speaking, both platforms allow you to do the following:

  • Create a profile that summarizes your research

  • Upload your publications, so others can find them

  • Find and follow other researchers, so you can receive automatic updates on their new publications

  • Find and read others’ publications

  • See platform-specific metrics that indicate the readership and reach you have on those sites

Today, we’ll cover getting started with Academia.edu. Let’s dig into the basics of setting up a profile and uploading your work.

Basic profile setup

Log on to Academia.edu. If you’re a firm believer in keeping your professional online presence separate from your personal one, you’ll likely want to sign up using your university email address. Otherwise, you can sign up using your Facebook or Google profile.

From here, you’ll be directed through the basic signup process.

Post a publication

How do you choose what to share? If you’re an established researcher, this will be easy: just choose your most “famous” (read: highly cited) paper. If you’re a junior researcher or a student, choosing might be tougher. A peer-reviewed paper is always a good bet, as is a preprint or a presentation that’s closely related to your most current topic of research.

Got a paper in mind? Now comes the not-as-fun-but-incredibly-necessary part: making sure you’ve got the rights to post it. Most academics don’t realize that they generally sign away their copyright when publishing an article with a traditional publisher. And that means you may not have the rights to post the publisher’s version of your article on Academia.edu. (If you negotiated to keep your copyright or published with an authors’ rights-respecting journal like PLOS Biology, give yourself a pat on the back and skip the following paragraph.)

If you don’t have copyright for your paper, all hope is not lost! You likely have the right to post your version of the article (often the unedited, unformatted version). Head over to Sherpa/Romeo and look up the journal you published in. You’ll see any and all restrictions that the publisher has placed on how you can share your article.

If you can post your article, let’s upload it to Academia.edu. Click the green “Upload a paper” button and, on your computer, find the publication you want to upload. Click “Open” and watch as Academia.edu begins to upload your paper.

Once it’s uploaded, the title of your publication will be automatically extracted. Make any corrections necessary to the title, then click in the “Find a Research Interest” box below the title. Add some keywords that will help others find your publication. Click “Save.”

Add your affiliation and interests to your profile

Adding an affiliation is important because it will add you to a subdomain of Academia.edu built for your university, and that will allow you to more easily find your colleagues. The site will try to guess your affiliation based on your email address or IP address; make any corrections needed and add your department information and title. Click “Save & Continue,” then add your research interests on the following page. These are also important; they’ll help others find you and your work.

Connect with colleagues

In this final step, you’ll be prompted to either connect your Facebook account or an email account to Academia.edu, which will search your contacts and suggest connections. Select and confirm anyone you want to follow on the site. I recommend starting out small, to keep from being overwhelmed by updates.

Congrats, you’ve now got an Academia.edu profile!

You can continue to spruce it up by adding a photo of yourself, more publications, and more research interests, and by connecting your Academia.edu profile to other services like Twitter and LinkedIn, if you’re already on ‘em. (If not, don’t worry–we’ll cover that soon.)

Homework

Now that you have a profile, set aside half an hour to explore three important uses of Academia.edu: exploring “research interests” in order to discover other researchers and publications; getting more of your most important publications online; and using the Analytics feature to discover who’s following you, how often others are reading and downloading your work, and in which countries your work is most popular.

Research interests: To get started exploring, click on the research interests in your profile:

Screencap of Jonathan Eisen's profile, highlighting his research interests

For the search results that appear, take some time to explore the profiles of others who share your interest(s) and follow anyone who looks interesting. Click on the Documents tab of the search results and explore relevant papers and presentations; I’m willing to bet you’ll find many papers and connections that you weren’t aware of before.

You can also search for other research interests using the search bar at the top of the screen.

Upload more papers & presentations: click the “Upload papers” tab at the top-right corner of your screen and upload at least two more papers or presentations that you think are worthy of attention. Remember to abide by any copyright restrictions that might exist, and also be sure to add as much descriptive information as possible–the complete title, co-authors, and research interests for your paper–all of which will make it easier for others to find.

Analytics: click the “Analytics” tab at the top of your screen and poke around a bit. Because you just created your profile, it’s possible you won’t yet have any metrics. But in as little as a few days, you’ll begin to see download and pageview statistics for your profile and your publications, and other interesting information like maps, all of which can help you better understand the use your work is getting from other researchers!

So–you’ve claimed your professional presence on one of academia’s biggest social networks and learned how to use it to find other researchers and publications. More importantly, you’ve optimized your profile so others can find you and your research much more easily.

Congrats! Day 1 Challenge: achievement unlocked!

Let’s see your results

Post a link to your profile in the comments, and let us know if you have any questions or tips on how to use Academia.edu.

See you tomorrow for our Day 2 challenge: mastering ResearchGate!

Is ResearchGate’s new DOI feature a game-changer?

ResearchGate is academia’s most popular social network, with good reason. While some decry the platform for questionable user recruitment tactics, others love to use it to freely share their articles, write post-publication peer reviews, and pose questions to other researchers in their area.

ResearchGate quietly launched a feature recently, one that we think could be a big deal. It may have huge upsides for research–especially for tracking altmetrics for work–but it also highlights how some of the problems of scholarly communication aren’t easily solved, especially when digital persistence is involved.

The feature in question? ResearchGate is now generating DOIs for content. And that’s started to generate interesting conversations among those in the know.

Here’s why: DOIs are unique, persistent identifiers that publishers and repositories issue for their content, with the understanding that URLs break all the time. A preservation strategy is expected when one starts issuing DOIs, and yet ResearchGate hasn’t announced one, nor has DataCite (which issues ResearchGate’s DOIs).

Some other interesting questions: what happens when users decide to delete content, or leave the site altogether? Will ResearchGate force content to remain online, or allow DOIs to redirect to broken URLs?

And what if a publication already has a DOI? ResearchGate does prompt users to provide a DOI if one is available, but there are no automated checks (as far as we can tell). That may leave room for omission or error. And a DOI that can potentially resolve to more than one place will introduce confusion for those searching for an article.

As a librarian, I’m also curious about the implications for repositories. IRs’ main selling point is digital persistence and preservation. So, if ResearchGate does eventually put a preservation policy in place, repositories may have lost their edge.

We’ll be watching future developments with interest. There’s great potential here, and how ResearchGate grows and matures this feature in the future will likely have an influence on how researchers share their work and, quite possibly, what it means to be a “publisher.”

Open Science & Altmetrics Monthly Roundup (October 2014)

Open Access Week dominated Open Science conversation this month, along with interesting UK debates on metrics and several valuable studies being released. Read on for more on all of it!

UK debates use of metrics in research evaluation

Academia’s biggest proponents and critics of altmetrics descended on the University of Sussex on October 7 for the event, “In Metrics We Trust?”.

Some of the most interesting finds shared at the meeting?

  • REF peer reviewers admitted they spend less than 15 minutes reviewing papers for quality, due to the sheer volume of products that need evaluation,

  • Departmental h-indices tend to correlate with REF/RAE evaluations, leading some to argue that time and money could be saved by replacing future REF exercises with metrics, and

  • Leading bibliometrics researchers disagree on whether altmetrics could be used for evaluation. Some said they cannot, no matter what; others said that they can, because altmetrics measure different impacts than citations measure.

The meeting ended with no clear answer as to whether metrics are definitely right (or wrong) for use in the next REF. We’ll have to wait for the HEFCE metrics committee’s recommendations when they issue their report in June.

Until then, check out the full debate in our Storify of the event, as well as Ernesto Priego’s archive of related tweets.

OA Week 2014 recap

The Impactstory team is still recovering from Open Access Week 2014, which saw us talking to over 100 researchers and librarians in 9 countries over 5 days. A full recap of our talks can be found on the Impactstory blog, along with “The Right Metrics for Generation Open: a Guide to getting credit for Open Science”, based on our most popular webinar from the week.

Interest in Open Access and Open Science has risen over the past year, making this year’s Open Access Week successful according to all reports. As Heather Morrison has documented on her blog, the past year has seen an increase in the availability of OA documents and data–ArXiv.org alone has grown by 11%!

Other altmetrics & Open Science news

Did we miss anything?

What was your favorite event or new study released this month? Share it in the comments below, or on Twitter (you can find us @Impactstory).

Are you ready to take the November Impact Challenge?

Graphic for the challenge shows a boxer poised, ready to bout.

In a hugely competitive research landscape, scientists can no longer afford to just publish and hope for the best. To leave a mark, researchers have to take their impact into their own hands.

But where do you start? There are so many ways to share, promote, and discuss your research, especially online. It’s tough to know where to begin.

Luckily, we’ve got your back.

Drawing on years of experience measuring and studying research impact, we’ve created a list of 30 effective steps for you to make sure your hard work gets out there, gets attention, and makes a difference–in your field and with the public.

We’ll share one of these a day in November, and we challenge you to follow along and give each one a try.

If you’re up to the challenge, we guarantee that by the end of the month, your research will get a boost in exposure and you’ll also have made important connections with other scientists around the world.

Join us here on our blog on Monday, November 3rd for the Impact Challenge kickoff, or follow along via email.

Tracking the impacts of data – beyond citations

This post was originally published on the e-Science Community Blog, a great resource for data management librarians.

"How to find and use altmetrics for research data" text in front of a beaker filled with green liquid

How can you tell if data has been useful to other researchers?

Tracking how often data has been cited (and by whom) is one way, but data citations only tell part of the story, part of the time. (The part that gets published in academic journals, if and when those data are cited correctly.) What about the impact that data has elsewhere?

We’re now able to mine the Web for evidence of diverse impacts (bookmarks, shares, discussions, citations, and so on) for diverse scholarly outputs, including data sets. And that’s great news, because it means that we now can track who’s reusing our data, and how.

All of this is still fairly new, however, which means that you likely need a primer on data metrics beyond citations. So, here you go.

In this post, I’ll give an overview of the different types of data metrics (including citations and altmetrics), the “flavors” of data impact, and specific examples of data metric indicators.

What do data metrics look like?

There are two main types of data metrics: data citations and altmetrics for data. Each of these types of metrics is important for its own reasons, and each offers the ability to understand different dimensions of impact.

Data citations

Much like traditional, publication-based citations, data citations are an attempt to track data’s influence and reuse in scholarly literature.

The reason why we want to track scholarly data influence and reuse? Because “rewards” in academia are traditionally counted in the form of formal citations to works, printed in the reference list of a publication.

Data is often cited in two ways: by citing the data package directly (often by pointing to where the data is hosted in a repository), and by citing a “data paper” that describes the dataset, functioning primarily as detailed metadata, and offering the added benefit of being in a format that’s much more appealing to many publishers.

In the rest of this post, I’m going to mostly focus on metrics other than citations, which are being written about extensively elsewhere. But first, here’s some basic information on data citations that can help you understand how data’s scholarly impacts can be tracked.

How data packages are cited

Much like how citations to publications differ depending on whether you’re using Chicago style or APA style formatting, citations to data tend to differ according to the community of practice and the recommended citation style of the repository that hosts the data. But there is a core set of minimum elements that should be included in a citation. Jon Kratz has compiled these “core elements” (as well as “common elements”) over on the DataPub blog. The core elements include:

  • Creator(s): Essential, of course, to publicly credit the researchers who did the work. One complication here is that datasets can have large (into the hundreds) numbers of authors, in which case an organizational name might be used.

  • Date: The year of publication or, occasionally, when the dataset was finalized.

  • Title: As is the case with articles, the title of a dataset should help the reader decide whether your dataset is potentially of interest. The title might contain the name of the organization responsible, or information such as the date range covered.

  • Publisher: Many standards split the publisher into separate producer and distributor fields. Sometimes the physical location (City, State) of the organization is included.

  • Identifier: A Digital Object Identifier (DOI), Archival Resource Key (ARK), or other unique and unambiguous label for the dataset.

Arguably the most important principle? The use of a persistent identifier like a DOI, ARK, or Handle. They’re important for two reasons: even if the data’s URL changes, others will still be able to access it; and PIDs provide citation aggregators like the Data Citation Index and Impactstory.org an easy, unambiguous way to parse out “mentions” in online forums and journals.
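
A quick illustration of what that persistence buys you in practice: a DOI can still be resolved after the underlying URL changes, and the resolver can return structured citation metadata for the same identifier via content negotiation. Here’s a minimal sketch using Python’s third-party requests package; the DOI shown is hypothetical, so substitute a real dataset DOI to try it:

```python
# A sketch of resolving a persistent identifier, assuming the
# third-party "requests" package. The DOI below is hypothetical.
import requests

doi = "10.5061/dryad.example"  # hypothetical Dryad-style dataset DOI

# 1) Resolve the DOI to wherever the dataset currently lives.
landing = requests.get(f"https://doi.org/{doi}", allow_redirects=True)
print("Current landing page:", landing.url)

# 2) Ask the resolver for machine-readable citation metadata instead;
#    CrossRef and DataCite both support this content negotiation.
meta = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
)
if meta.ok:
    record = meta.json()
    print(record.get("title"), "-", record.get("publisher"))
```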

It’s worth noting, however, that as few as 25% of journal articles tend to formally cite data. (Sad, considering that so many major publishers have signed on to FORCE11’s data citation principles, which include the need to cite data packages in the same manner as publications.) Instead, many scholars reference data packages in their Methods section, forgoing formal citations, making text mining necessary to retrieve mentions of those data.

How to track citations to data packages

When you want to track citations to your data packages, the best option is the Data Citation Index. The DCI functions similarly to Web of Science. If your institution has a subscription, you can search the Index for citations that occur in the literature that reference data from a number of well-known repositories, including ICPSR, ANDS, and PANGEA.

Here’s how: log in to the DCI, then head to the home screen. In the Search box, type in your name or the dataset’s DOI. Find the dataset in the search results, then click on it to be taken to the item record page. On the item record, find and click the “Create Citation Alert” button on the right-hand side of the page, where you’ll also find a list of articles that reference that dataset. Now you have a list of the articles that reference your data to date, and you’ll also receive automated email alerts whenever someone new references your data.

Another option comes from CrossRef Search. This experimental search tool works for any dataset that has a DataCite DOI and is referenced in the scholarly literature that’s indexed by CrossRef. (DataCite issues DOIs for Figshare, Dryad, and a number of other repositories.) Right now, the search is a very rough one: you’ll need to view the entire list of DOIs, then use your browser search (often accessed by hitting Ctrl+F or Command+F) to check the list for your specific DOI. It’s not perfect–in fact, sometimes it’s entirely broken–but it does provide a view into your data citations not entirely available elsewhere.

How data papers are cited

Data papers tend to be cited like any other paper: by recording the authors, title, journal of publication, and any other information that’s required by the citation style you’re using. Data papers are also often cited using permanent identifiers like DOIs, which are assigned by publishers.

How to find citations for data papers

To find citations to data papers, search databases like Scopus and Web of Science like you’d search for any traditional publication. Here’s how to track citations in Scopus and Web of Science.

There’s no guarantee that your data paper is included in their database, though, since data paper journals are still a niche publication type in some fields, and thus aren’t tracked by some major databases. You’ll be smart to follow up your database search with a Google Scholar search, too.

Altmetrics for data

Citations are good for tracking the impact of your data in the scholarly literature, but what about other types of impact, among other audiences like the public and practitioners?

Altmetrics are indicators of the reuse, discussion, sharing, and other interactions humans can have with a scholarly object. These interactions tend to leave traces on the scholarly web.

Altmetrics are so broadly defined that they include pretty much any type of indicator sourced from a web service. For the purposes of this post, we’ll separate out citations from our definition of altmetrics, but note that many altmetrics aggregators tend to include citation data.

There are two main types of altmetrics for data: repository-sourced metrics (which often measure not only researchers’ impacts, but also repositories’ and curators’ impacts), and social web metrics (which more often measure other scholars’ and the public’s use and other interactions with data).

First, let’s discuss the nuts and bolts of data altmetrics. Then, we’ll talk about services you can use to find altmetrics for data.

Altmetrics for how data is used on the social web

Data packages can be shared, discussed, bookmarked, viewed, and reused using many of the same services that researchers use for journal articles: blogs, Twitter, social bookmarking sites like Mendeley and CiteULike, and so on. There are also a number of services that are specific to data, and these tend to be repositories with altmetric “indicators” particular to that platform.

For an in-depth look into data metrics and altmetrics, I recommend that you read Costas et al.’s report, “The Value of Research Data” (2013). Below, I’ve created a basic chart of various altmetrics for data and what they can likely tell us about the use of data.

Quick caveat: there’s been little research done into altmetrics for data. (DataONE, PLOS, and California Digital Library are in fact the first organizations to do major work in this area, and they were recently awarded a grant to do proper research that will likely confirm or negate much of the below list. Keep an eye out for future news from them.) The metrics and their meanings listed below are, at best, estimations based on experience with both research data and altmetrics.

Repository- and publisher-based indicators

Note that some of the repositories below are primarily used for software, but can sometimes be used to host data, as well.

| Web Service | Indicator | What it might tell us | Reported on |
|---|---|---|---|
| GitHub | Stars | Akin to “favoriting” a tweet or underlining a favorite passage in a book, GitHub stars may indicate that someone who has viewed your dataset wants to remember it for later reference. | GitHub, Impactstory |
| GitHub | Watched repositories | A user is interested enough in your dataset (stored in a “repository” on GitHub) that they want to be informed of any updates. | GitHub, PlumX |
| GitHub | Forks | A user has adapted your code for their own uses, meaning they likely find it useful or interesting. | GitHub, Impactstory, PlumX |
| SourceForge | Ratings & Recommendations | What do others think of your data? And do they like it enough to recommend it to others? | SourceForge, PlumX |
| Dryad, Figshare, and most institutional and subject repositories | Views & Downloads | Is there interest in your work, such that others are searching for and viewing descriptions of it? And are they interested enough to download it for further examination and possible future use? | Dryad, Figshare, and IR platforms; Impactstory (for Dryad & Figshare); PlumX (for Dryad, Figshare, and some IRs) |
| Figshare | Shares | Implicit endorsement. Do others like your data enough to share it with others? | Figshare, Impactstory, PlumX |
| PLOS | Supplemental data views, figure views | Are readers of your article interested in the underlying data? | PLOS, Impactstory, PlumX |
| Bitbucket | Watchers | A user is interested enough in your dataset that they want to be informed of any updates. | Bitbucket |

Social web-based indicators

| Web Service | Indicator | What it might tell us | Reported on |
|---|---|---|---|
| Twitter | Tweets that include links to your product | Others are discussing your data–maybe for good reasons, maybe for bad ones. (You’ll have to read the tweets to find out.) | PlumX, Altmetric.com, Impactstory |
| Delicious, CiteULike, Mendeley | Bookmarks | Bookmarks may indicate that someone who has viewed your dataset wants to remember it for later reference. Mendeley bookmarks may be an indicator for later citations (similar to articles). | Impactstory, PlumX; Altmetric.com (CiteULike & Mendeley only) |
| Wikipedia | Mentions (sometimes also called “citations”) | Do others think your data is relevant enough to include it in Wikipedia encyclopedia articles? | Impactstory, PlumX |
| ResearchBlogging, Science Seeker | Blog post mentions | Is your data being discussed in your community? | Altmetric.com, PlumX, Impactstory |

How to find altmetrics for data packages and papers

Aside from looking at each platform that offers altmetrics indicators, consider using an aggregator, which will compile them from across the web. Most altmetrics aggregators can track altmetrics for any dataset that’s either got a DOI or is included in a repository that’s connected to the aggregator. Each aggregator tracks slightly different metrics, as we discussed above. For a full list of metrics, visit each aggregator’s site.

Impactstory easily tracks altmetrics for data uploaded to Figshare, GitHub, Dryad, and PLOS journals. Connect your Impactstory account to Figshare and GitHub and it will auto-import your products stored there and find altmetrics for them. To find metrics for Dryad datasets and PLOS supplementary data, provide DOIs when adding products one-by-one to your profile, and the associated altmetrics will be imported. Here’s an example of what altmetrics for a dataset stored on Dryad look like on Impactstory.

PlumX tracks similar metrics, and offers the added benefit of tracking altmetrics for data stored in institutional repositories, as well. If your university subscribes to PlumX, contact the PlumX team about getting your data included in your researcher profile. Here’s what altmetrics for a dataset stored on Figshare look like on PlumX.

Altmetric.com can track metrics for any dataset that has a DOI or Handle. To track metrics for your dataset, you’ll either need an institutional subscription to Altmetric or the Altmetric bookmarklet, which you can use when on the item page for your dataset on a website like Figshare or in your institutional repository. Here’s what altmetrics for a dataset stored on Figshare look like on Altmetric.com.
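
If you’d rather check an aggregator programmatically, here’s a minimal sketch against Altmetric.com’s free, rate-limited public REST endpoint, using the Piwowar & Vision article DOI from the recommended reading below as a stand-in. The JSON field names printed here are illustrative, so inspect the response for the full set:

```python
# A sketch of querying Altmetric.com's public endpoint for one DOI,
# assuming the third-party "requests" package.
import requests

doi = "10.7717/peerj.175"  # Piwowar & Vision (2013), cited below
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")

if resp.status_code == 200:
    data = resp.json()
    print("Altmetric score:", data.get("score"))
    print("Tweeters:", data.get("cited_by_tweeters_count"))
else:
    # A 404 simply means Altmetric hasn't seen any mentions of this DOI.
    print("No altmetrics found for", doi)
```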

Flavors of data impact

While scholarly impact is very important, it’s far from the only type of impact one’s research can have. Both data citations and altmetrics can be useful in illustrating these flavors. Take the following scenarios for example.

Useful for teaching

What if your field notebook data was used to teach undergraduates how to use and maintain their own field notebooks, and use them to collect data? Or if a longitudinal dataset you created were used to help graduate students learn the programming language, R? These examples are fairly common in practice, and yet they’re often not counted when considering impacts. Potential impact metrics could include full-text mentions in syllabi, views & downloads in Open Educational Resource repositories, and GitHub forks.

Reuse for new discoveries

Researcher, open data advocate, and Impactstory co-founder Heather Piwowar once noted, “the potential benefits of data sharing are impressive:  less money spent on duplicate data collection, reduced fraud, diverse contributions, better tuned methods, training, and tools, and more efficient and effective research progress.” If those outcomes aren’t indicative of impact, I don’t know what is! Potential impact metrics could include data citations in the scholarly literature, GitHub forks, and blog post and Wikipedia mentions.

Curator-related metrics

Could a view-to-download ratio be an indicator of how well a dataset has been described and how usable a repository’s UI is? Or of the overall appropriateness of the dataset for inclusion in the repository? Weber et al (2013) recently proposed a number of indicators that could get at these and other curatorial impacts upon research data, indicators that are closely related to previously-proposed indicators by Ingwersen and Chavan (2011) at the GBIF repository. Potential impact metrics could include those proposed by Weber et al and Ingwersen & Chavan, as well as a repository-based view-to-download ratio.
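
To make the idea concrete, here’s a minimal sketch of the proposed view-to-download ratio. The counts and the interpretation are hypothetical, since none of the cited proposals fix exact values:

```python
# A sketch of a view-to-download ratio as a crude curation signal.
# All numbers here are hypothetical.
def view_to_download_ratio(views: int, downloads: int) -> float:
    """Views per download: higher means more browsing, less uptake."""
    return views / downloads if downloads else float("inf")

ratio = view_to_download_ratio(views=1200, downloads=150)
print(f"{ratio:.1f} views per download")
# If many visitors view a dataset's record but few download it, the
# description, the repository UI, or the dataset's fit for the
# repository may deserve a second look.
```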

Ultimately, more research is needed into altmetrics for datasets before these flavors–and others–are accurately captured.

Now that you know about data metrics, how will you use them?

Some options include: in grant applications, your tenure and promotion dossier, and to demonstrate the impacts of your repository to administrators and funders. I’d love to talk more about this on Twitter or in the comments below.

Recommended reading

  • Piwowar HA, Vision TJ. (2013) Data reuse and the open data citation advantage. PeerJ 1:e175 doi: 10.7717/peerj.175

  • CODATA-ICSTI Task Group. (2013). Out of Cite, Out of Mind: The current state of practice, policy, and technology for the citation of data [report]. doi:10.2481/dsj.OSOM13-043

  • Costas, R., Meijer, I., Zahedi, Z., & Wouters, P. (2013). The Value of research data: Metrics for datasets from a cultural and technical point of view. Copenhagen, Denmark. Knowledge Exchange. www.knowledge-exchange.info/datametrics

Open Access Week 2014 – a look back and ahead

Much like Lil Bub, we’re bushed. Our Open Access Week 2014 was very eventful–we spoke with more than 100 researchers and librarians in 9 countries over 5 days. Here’s how we spent our time.

Throughout the week, Stacy hosted several sessions of “The Right Metrics for Generation Open: a guide to getting credit for Open Science,” where she talked about how Generation Open’s needs are evolving beyond that of previous generations of scientists. Altmetrics are particularly well-suited to meet those needs. You can view her slides on Slideshare, and read a long-form blogpost based on the presentation here on our blog.

Tuesday saw Stacy talking with faculty and librarians at the University of Alberta and the University of Memphis, where she explained “Why Open Research is Critical to your career”. (tl;dr: Change is coming in scholarly communication–so you should get on board and start making the most of the great opportunities that Open Science and altmetrics can offer you.) Check out her slides on Google Docs.

On Wednesday, Stacy had the pleasure of hangin’ virtually with librarians and library students at the University of Wisconsin, where they talked about the fact that “Altmetrics are here: are you ready to help your faculty?” After all, who’s a better neutral third-party to help faculty navigate this new world of altmetrics than librarians? Slides from that presentation are available on Google Docs.

Jason gave his popular talk, “Altmetrics & Revolutions: How the web is transforming the measure and practice of science” to researchers at the University of New Brunswick on Thursday. His slides are available on Google Docs.

Stacy rounded out the week by chatting with researchers and librarians at the University of Cape Town on Friday. Her presentation on the basics of altmetrics and how to use them–”Altmetrics 101: How to make the most of supplementary impact metrics”–is available for viewing on Google Docs.

Heather’s going to be the one in need of a nap over the next two weeks–she’ll be presenting on open data and altmetrics throughout Australia. Here are the events ahead:

  • Melbourne, Mon, 27 Oct, 9am–12.30pm:  Creating your research impact story:  Workshop at eResearch Australasia. Also featuring: Pat Loria, CSU, and Natasha Simons, ANDS. (sold out)

  • Melbourne, Wed, 29 Oct, 10–11am: Keynote presentation at eResearch Australasia. Register for conference

  • Brisbane, Mon, 3 Nov, 1–4.30pm: Uncovering the Impact Story of Open Research and Data. Also featuring Paula Callan, QUT, and Ginny Barbour, PLOS. QUT: Owen J Wordsworth Room. Level 12, S Block, QUT Gardens Point. (sold out)

  • Sydney, Wed, 5 Nov, 1.30–4.30pm: An afternoon of talks featuring Heather Piwowar. Also featuring Maude Frances, UNSW, and Susan Robbins, UWS. ACU MacKillop Campus, North Sydney: The Peter Cosgrove Centre, Tenison Woods House, 8-20 Napier Street, North Sydney. (sold out)

Are you ready, Oz?!

The Right Metrics for Generation Open: a guide to getting credit for Open Science

You’re not getting all the credit you should be for your research.

As an early career researcher, you’re likely publishing open access journal articles, sharing your research data and software code on GitHub, posting slides and figures on Slideshare and Figshare, and “opening up” your research in many other ways.

Yet these Open Science products and their impacts (on other scholars, the public, policymakers, and other stakeholders) are rarely mentioned when applying for jobs, tenure and promotion, and grants.

The traditional means of sharing your impact–citation counts–don’t meet the needs of today’s researchers. What you and the rest of Generation Open need is altmetrics.

In this post, I’ll describe what altmetrics are and the types of altmetrics you can expect to receive as someone who practices Open Science. We’ll also cover real life examples of scientists who used altmetrics to get grants and tenure–and how you can do the same.

Altmetrics 101

Altmetrics measure the attention your scholarly work receives online, from a variety of audiences.

As a scientist, you create research data, analyses, research narratives, and scholarly conversations on a daily basis. Altmetrics–measures of use sourced from the social web– can account for the uses of all of these varied output types.

Nearly everything that can be measured online has the potential to be an altmetric indicator. Here are just a few examples of the types of information that can be tracked for research articles alone:

| | Scholarly | Public |
|---|---|---|
| Recommended | Faculty of 1000 | popular press |
| Cited | traditional citation | Wikipedia |
| Discussed | scholarly blogs | blogs, Twitter |
| Saved | Mendeley, CiteULike | Delicious |
| Read | PDF views | HTML views |

When you add research software, data, slides, posters, and other scholarly outputs to the equation, the list of metrics you can use to understand the reception to your work grows exponentially.

And altmetrics can also help you understand the interest in your work from those both inside and outside of the Ivory Tower. For example, what are members of the public saying about your climate change research? How has it affected the decisions and debates among policy makers? Has it led to the adoption of new technologies in the private sector?

The days when your research only mattered to other academics are gone. And with them also goes the idea that there’s only one type of impact.

Flavors of impact

There are many flavors of impact that altmetrics can illuminate for you, beyond the traditional scholarly impact that’s measured by citations.

This 2012 study was the first to showcase the concept of flavors of impact via altmetrics. These flavors are found by examining the correlations between different altmetric indicators; how does a Mendeley bookmark correlate to a citation, or to a Facebook share? (And so on.) What can groups of correlations tell us about the uses of scholarship?

Among the flavors the researchers identified were a “popular hit” flavor (where scholarship is highly tweeted and shared on Facebook, but not seen much on scholarly sites like Mendeley or in citations) and an “expert pick” flavor (evidenced by F1000 Prime ratings and later citations, but few social shares or mentions). Lutz Bornmann’s 2014 study built upon that work, documenting that articles that are tagged on F1000 Prime as being “good for teaching” had more shares on Twitter–uncovering possible uses among educational audiences.

The correlation that’s on everyone’s mind? How do social media (and other indicators) correlate with citations? Mendeley bookmarks show the strongest correlation with citations of any altmetric indicator; this points to Mendeley’s use as a leading indicator (that is, if something is bookmarked on Mendeley today, it’s got a better chance of being cited down the road than something that’s not bookmarked).
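If you’re curious how this kind of analysis works under the hood, its core is just a correlation matrix over per-article indicator counts. Here’s a minimal sketch in Python with pandas; the column names and numbers are made-up toy data for illustration, not figures from the studies above:

```python
# A minimal sketch of a "flavors of impact" analysis: compute pairwise
# Spearman correlations between altmetric indicators across articles.
# All numbers below are hypothetical toy data, purely for illustration.
import pandas as pd

# One row per article; each column is an indicator count.
articles = pd.DataFrame({
    "tweets":           [120, 4, 15, 300, 2, 45],
    "facebook_shares":  [80, 1, 9, 210, 0, 30],
    "mendeley_readers": [12, 95, 40, 8, 110, 22],
    "citations":        [3, 41, 18, 1, 52, 9],
})

# Spearman (rank) correlation is a sensible default here, since
# altmetric counts tend to be heavily skewed.
print(articles.corr(method="spearman").round(2))
```

Indicators that rise and fall together form a cluster: strong tweet/Facebook correlations with weak Mendeley/citation correlations would suggest a “popular hit” flavor, while the reverse pattern suggests an “expert pick.”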

Correlations with citations aren’t the only correlations we should pay attention to, though. They only tell one part of an impact story–an important part, to be sure, but not the only part.

Altmetrics data includes qualitative data, too

Many don’t realize that altmetrics data isn’t only about the numbers. An important function of altmetrics aggregators like Altmetric.com and Impactstory (which we describe in more detail below) is to gather qualitative data from across the web into a single place, making it easy to read exactly what others are saying about your scholarship. Altmetric.com does this by including snippets of the blogs, tweets, and other mentions your work receives online. Impactstory links out to the data providers themselves, allowing you to more easily find and read the full-length mentions from across the web.

Altmetrics for Open Science

Now that you have an understanding of how altmetrics work in general, let’s talk about how they work for you as an Open Scientist. Below, we’ve listed some of the basic metrics you can expect to see on the scholarship that you make Open Access. We’ll discuss how to find these metrics in the next section.

Metrics for all products

Any scholarly object that’s got a URL or other permanent identifier like a DOI–which, if you’re practicing Open Science, would be all of them–can be shared and discussed online.

So, for any of your scholarly outputs that have been discussed online, you can expect to find Twitter mentions, blog posts and blog comments, Facebook and Google+ shares and comments, mainstream media mentions, and Wikipedia mentions.
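If you’d like to pull those counts yourself, Altmetric.com offers a free, rate-limited public API for looking up a single DOI. Here’s a minimal sketch in Python; the response keys shown are read defensively with defaults, since exact keys can vary and not every paper will have every field:

```python
# Look up online attention for one DOI via Altmetric.com's public API.
# The endpoint returns 404 if no attention has been tracked yet.
import requests

doi = "10.1371/journal.pone.0000308"  # substitute any DOI of yours
resp = requests.get("https://api.altmetric.com/v1/doi/" + doi)

if resp.status_code == 404:
    print("No online attention tracked for this DOI (yet).")
else:
    resp.raise_for_status()
    data = resp.json()
    # Read counts defensively; a key is absent when a source has no mentions.
    for key in ("cited_by_tweeters_count", "cited_by_fbwalls_count",
                "cited_by_feeds_count", "cited_by_wikipedia_count"):
        print(key, data.get(key, 0))
```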

Open Access Publications

Your open access publications will likely accrue citations just as your publications in subscription journals do, with two key differences: you can track citations to work that isn’t formally published (but has instead been shared on a preprint server like arXiv or another repository), and you can track citations to your work that appear in the non-peer-reviewed literature. Citation indices like Scopus and Web of Science can help you track the former; Google Scholar is a good way to find the latter.

Views and downloads can be found on some journal websites, and often on repositories–whether your university’s institutional repository, a subject repository like bioRxiv, or a general-purpose repository like Figshare.

Screen Shot 2014-10-22 at 4.16.36 PM.png

Bookmarks on reference management services like Mendeley and CiteULike can give you a sense of how widely your work is being read, and by what audiences. Mendeley, in particular, offers excellent demographic information for publications bookmarked in the service.

Software & code

Software & code, like other non-paper scholarly products, are often shared on specialized platforms, and the metrics your work can accrue depend on the platform you share it on.

SourceForge blazed the trail for software metrics by allowing others to review and rate code–useful, crowd-sourced quality indicators.

On GitHub, you can expect your work to receive forks (which signal adaptations of your code), stars (a bookmark or virtual fistbump that lets others tell you, “I like this”), pull requests (which can get at others’ engagement with your work, as well as the degree to which you tend to collaborate), and downloads (which may signal software installations or code use). One big advantage of using GitHub to share your code is that it allows you to mint DOIs–making it much easier to track mentions and shares of your code in the scholarly literature and across general-purpose platforms, like those outlined above.
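Conveniently, most of these GitHub indicators are a single unauthenticated API call away. A minimal sketch (the repository name is just an example, and unauthenticated requests are rate-limited):

```python
# Pull basic reuse indicators for a repository from GitHub's REST API.
import requests

repo = "twbs/bootstrap"  # replace with your own "owner/repo"
resp = requests.get("https://api.github.com/repos/" + repo)
resp.raise_for_status()
data = resp.json()

print("Stars:   ", data["stargazers_count"])   # bookmarks / "I like this"
print("Forks:   ", data["forks_count"])        # adaptations of your code
print("Watchers:", data["subscribers_count"])  # people following your work
```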

Data

Data is often cited in one of two ways: citations to data packages (the dataset itself, stored on a website or repository) and citations to data papers (publications that describe the dataset in detail and link out to it). You can often track the former using an altmetrics aggregator (more on that in a moment) or the Data Citation Index, a Web of Science-like database that searches for mentions of your dataset in the scholarly literature. Citations to data papers can sometimes be found in traditional citation indices like Scopus and Web of Science.

Interest in datasets can also be measured by tracking views and downloads. Often, these metrics are shared on repositories where datasets are stored.

Where data is shared on GitHub, forks and stars (described above) can give an indication of that data’s reuse.

More info on metrics for data can be found on my post for the e-Science Portal Blog, “Tracking the Impacts of Data–Beyond Citations”.

Videos

Many researchers create videos to summarize a study for generalist audiences. Other times, videos are themselves a type of data.

YouTube tracks the most varied metrics: views, likes, dislikes, and comments are all reported. On Vimeo and other video sharing sites, likes and views are the most often reported metrics.
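If your videos live on YouTube, those statistics are exposed through the YouTube Data API (v3). A minimal sketch, assuming you’ve created an API key in the Google Developers Console (the video ID and key below are placeholders):

```python
# Fetch per-video statistics (views, likes, comments) from the
# YouTube Data API v3. The video ID and API key are placeholders.
import requests

params = {
    "part": "statistics",
    "id": "YOUR_VIDEO_ID",
    "key": "YOUR_API_KEY",
}
resp = requests.get("https://www.googleapis.com/youtube/v3/videos",
                    params=params)
resp.raise_for_status()
items = resp.json().get("items", [])
if items:
    print(items[0]["statistics"])  # e.g. viewCount, likeCount, commentCount
```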

Slide decks & posters

Slide decks and posters are among the scholarly outputs that get the least amount of love. Once you’ve returned from your conference, you tend to shelve and forget about the poster that you (or your grad students) put hours’ worth of work into–and the same goes for the slide decks you use when presenting.

If you make these “forgotten” products available online, on the other hand, you can expect to see some of the following indicators of interest in your work: views, favorites (sometimes used as a bookmark, other times as a way of saying “job well done!”), downloads, comments, and embeds (which can show you how often–and by whom–your work is being shared and in some cases blogged about).

How to collect your metrics from across the Web

We just covered a heck of a lot of metrics, huh? Luckily, altmetrics aggregators are designed to collect these far-flung data points from across the web and deliver them to you in a single report.

There are three main independent altmetrics aggregators: Impactstory.org, PlumX, and Altmetric.com. Here’s the scoop:

  • Impactstory.org: we’re a non-profit altmetrics service that collects metrics for all scholarly outputs. Impactstory profiles are designed to meet the needs of individual scientists. We regularly introduce new features based on user demand. You can sign up for a 30-day free trial on our website; after that, subscriptions are $10/month or $60/year.

  • PlumX: a commercial service that is designed to meet the needs of administrators and funding agencies. Like Impactstory, PlumX also collects metrics for all scholarly outputs. PlumX boasts the largest data coverage of all altmetrics aggregators.

  • Altmetric.com: a commercial service that collects metrics primarily for publishers and institutions. Altmetric can track any scholarly output with a DOI, PubMed ID, ArXiv ID, or Handle, but it handles publications best. Uniquely, they can find mentions of your scholarship in mainstream media and policy documents–two notoriously hard-to-mine sources.

Once you’ve collected your metrics from across the web, what do you do with them? We suggest experimenting with using them in your CV, year-end reporting, grant applications, and even tenure & promotion dossiers.

Skeptical? You needn’t be. An increasing number of scientists are using altmetrics for these purposes.

Researchers who have used altmetrics for tenure & grants

Each of the following researchers used altmetrics, alongside traditional metrics like citation counts and journal impact factors, to document the impact of their work.

Tenure: Dr. Steven Roberts, University of Washington

Steven-Roberts1-528x528.jpg

Steven is an Associate Professor in the School of Aquatic & Fishery Sciences at the University of Washington. He decided to use altmetrics data in his tenure dossier to two ends: to showcase his public engagement and to document interest in his work.

To showcase public engagement, Steven included this table in the Education and Outreach section of his dossier, illustrating the effects his various outreach channels (blog, Facebook, Flickr, etc.) have had to date:

Screen Shot 2014-10-20 at 2.19.52 PM.png

For evidence of the impact of specific products, he incorporated metrics into his CV like this:

Screen Shot 2014-10-20 at 2.24.04 PM.png

Screen Shot 2014-10-20 at 2.25.35 PM.png

Steven’s bid for tenure was successful.

Want to see more? You can download Steven’s full tenure dossier here.

Tenure: Dr. Ahmed Moustafa, American University in Cairo

ahmed.jpg

Ahmed’s an Associate Professor in the Department of Biology at the American University in Cairo, Egypt.

He used altmetrics data in his tenure dossier in two interesting ways. First, he included a screenshot of his most important scholarly products, as they appear on his Impactstory profile, to summarize the overall impacts of his work:

Screen Shot 2014-10-20 at 2.52.15 PM.png

Note the badges that summarize at a glance the relative impacts of his work among both the public and other scholars. He also includes a link to his full profile, so his reviewers can drill down into the impact details of all his works and review them for themselves.

Ahmed also showcased the impact of a particular software package he created, JAligner, by including a link to a Google Scholar search that showcases all the scholarship that cites his software:

As of August 2013, JAligner has been cited in more than 150 publications, including journal articles, books, and patents, (http://tinyurl.com/jalignercitations) covering a wide range of topics in biomedical and computational research areas and downloaded almost 20,000 times (Figure 6). It is somehow noteworthy that JAligner has claimed its own Wikipedia entry (http://en.wikipedia.org/wiki/JAligner)!

Ahmed received tenure at AUC in 2013.

Grant Reporting: Dr. Holly Bik, University of Birmingham

0167.png

Holly was awarded a major grant from the Alfred P. Sloan Foundation to develop a bioinformatics data visualization tool called Phinch.

When reporting back to Sloan on the success of her project, she included metrics like the Figshare views that related posters and talks received, Github statistics for the Phinch software, and other altmetrics related to the varied outputs that the project created over the last few years.

Holly’s hopeful that these metrics, in addition to the traditional metrics she’s reported to Sloan, will make a great case for renewal funding, so they can continue their work on Phinch.

Will altmetrics work for you?

The remarkable thing about each of these researchers is that their circumstances aren’t extraordinary. The organizations they work for and receive funding from are fairly traditional ones. It follows that you, too, may be able to use altmetrics to document the impacts of your Open Science, no matter where you work or are applying for funding. After all, more and more institutions are starting to incorporate recognition of non-traditional scholarship into their tenure & promotion guidelines. You’ll need non-traditional ways like altmetrics to showcase the impacts of that scholarship.

3 important steps to getting more credit for your peer reviews

A few years back, Scholarly Kitchen editor-in-chief David Crotty informally polled a dozen biologists about the burden of peer review. He found that most review around 3 papers per month. For senior scientists, that number can reach 15 papers per month.

And yet, no matter how much time they spend reviewing, the credit they get is the same, and it looks like this on their CV:

“Service: Reviewer for Physical Review B and PLOS ONE.”

What if your work could be counted as more than just “service”? After all, peer review is dependent upon scientists doing a lot of intellectual heavy lifting for the benefit of their discipline.

And what if you could track the impacts your peer reviews have had on your field? Credit–in the form of citations and altmetrics–could be included in your CV to show the many ways that you’ve contributed intellectually to your discipline.

The good news? You can get credit for your peer reviews. By participating in Open Peer Review and making reviews discoverable and citable, researchers across the world have begun to get the credit they deserve for improving science.

But this practice isn’t yet widespread. So, we’ve compiled a short guide to getting started with getting credit for your peer reviews.

1. Participate in Open Peer Review

Open Peer Review is a radical notion predicated on a simple idea: that by making author and reviewer identities public, more civil and constructive peer reviews will be submitted, and peer reviews can be put into context.

Here’s how it works, more or less: reviewers are assigned to a paper, and they know the author’s identity. They review the paper and sign their name. The reviews are then submitted to the editor and author (who now knows their reviewers’ identities, thanks to the signed reviews). When the paper is published, the signed reviews are published alongside it.

Sounds simple enough, but if you’re reviewing for a traditional journal, this might be a challenge: Open Peer Review is still rare among traditional publishers.

For a very long time, publishers favored private, anonymous (‘blinded’) peer review, under the assumptions that it would reduce bias and that authors would prefer criticisms of their work to remain private. It turns out those assumptions weren’t backed up by evidence.

Blinded peer review is argued to be beneficial for early career researchers, who might find themselves in a position where they’re required to give honest feedback to a scientist who’s influential in their field. Anonymity would protect these ECR-reviewers from their colleagues, who could theoretically retaliate for receiving critical reviews.

Yet many have pointed out that it can be easy for authors to guess the identities of their reviewers (especially in small fields, where everyone tends to know what their colleagues/competitors are working on, or in lax peer review environments, where all one has to do is ask!). And as Mick Watson argues, any retaliation that could theoretically occur would be considered a form of scientific misconduct, on par with plagiarism–and therefore off-limits to scientists with any sense.

In any event, a consequence of this anonymous legacy system is that you, as a reviewer, can’t take credit for your work. Sure, you can say you’re a reviewer for Physical Review B, but you’re unable to point to specific reviews or discuss how your feedback made a difference. (Your peer reviews go into the garbage can of oblivion once the article’s been published, as illustrated below.) That means that others can’t read your reviews to understand your intellectual contributions to your field, which–in the case of some reviews–can be enormous.

Image CC-BY Kriegeskorte N from “Open evaluation: a vision for entirely transparent post-publication peer review and rating for science,” Front. Comput. Neurosci., 2012

So, if you want to get credit for your work, you can choose to review for journals that already offer Open Peer Review. A number of forward-thinking journals allow it (BMJ, PeerJ, and F1000 Research, among others).

To find others, use Cofactor’s excellent journal selector tool:

  • Head over to the Cofactor journal selector tool

  • Click “Peer review,”

  • Select “Fully Open,” and

  • Click “Search” to see a full list of Open Peer Review journals

Some stand-alone peer review platforms also allow Open Peer Review. Faculty of 1000 Prime is probably the best-known example; Publons is the largest platform that offers Open Peer Review. Dozens of other platforms offer it, too.

Once your reviews are attributable to you, the next step is making sure others can read them.

2. Make your reviews (and references to them) discoverable

You might think that discoverability goes hand in hand with Open Peer Review, but you’d only be half-right. Thing is: URLs break every day. Persistent access to an article over time, on the other hand, will help ensure that those who seek out your work can find it, years from now.

Persistent access often comes in the form of identifiers like DOIs. Having a DOI associated with your review means that, even if your review’s URL were to change in the future, others can still find your work. That’s because DOIs are set up to resolve to an active URL when other URLs break.

Persistent IDs also have another major benefit: they make it easy to track citations, mentions on scholarly blogs, or new Mendeley readers for your reviews. Tracking citations and altmetrics (social web indicators that tell you when others are sharing, discussing, saving, and reusing your work online) can help you better understand how your work is having an impact, and with whom. It also means you can share those impacts with others when applying for jobs, tenure, grants, and so on.

There are two main ways you can get a DOI for your reviews:

  • Review for a journal like PeerJ or peer review platform like Publons that issues DOIs automatically

  • Archive your review in a repository that issues DOIs, like Figshare

Once you have a DOI, use it! Include it on your CV (more on that below), as a link when sharing your reviews with others, and so on. And encourage others to always link to your review using the DOI resolver link (these are created by putting “http://dx.doi.org/” in front of your DOI; here’s an example of what one looks like: http://dx.doi.org/10.7287/peerj.603v0.1/reviews/2).
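Because resolver links are ordinary URLs, it takes only a few lines to sanity-check that a DOI resolves before you put it on your CV. A quick sketch (some publisher sites reject HEAD requests, so fall back to a GET if needed):

```python
# Verify that a DOI resolver link resolves to a live page.
import requests

doi = "10.7287/peerj.603v0.1/reviews/2"  # the example review above
url = "http://dx.doi.org/" + doi

# Follow redirects from the resolver through to the final page.
resp = requests.head(url, allow_redirects=True)
print("Resolved to:", resp.url)
print("HTTP status:", resp.status_code)  # 200 means the link is healthy
```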

DOIs and other unique, persistent identifiers help altmetrics aggregators like Impactstory and PlumX pick up mentions of your reviews in the literature and on the social web. And when we’re able to report on your citations and altmetrics, you can start to get credit for them!

3. Help shape a system that values peer review as a scholarly output

Peer review may be viewed primarily as a “service” activity, but things are changing–and you can help change ‘em even more quickly. Here’s how.

As a reviewer, raise awareness by listing and linking to your reviews on your CV, adjacent to any mentions of the journals you review for. By linking to your specific reviews (using the DOI resolver link we talked about above), anyone looking at your CV can easily read the reviews themselves.

You can also illustrate the impacts of Open Peer Review for others by including citations and altmetrics for your reviews on your CV. An easy way to do that is to include on your CV a link to the review on your Impactstory or PlumX profile. You can also include other quantitative measures of your reviews’ quality, like Peerage of Science’s Peerage Essay Quality scores, Publons’ merit scores, or a number of other quantitative indicators of peer-review quality. Just be sure to provide context to any numbers you include.

If you’re a decision-maker, you can “shape the system” by making sure that tenure & promotion and grant award guidelines at your organization acknowledge peer review as a scholarly output. Actively encouraging early career researchers and students in your lab to participate in Open Peer Review can also go a long way. The biggest thing you can do? Educate other decision-makers so they, too, respect peer review as a standalone scholarly output.

Finally, if you’re a publisher or altmetrics aggregator, you can help “shape the system” by building products that accommodate and reward new modes of peer review.

Publishers can partner with standalone peer review platforms to accept their “portable peer reviews” as a substitute for (or addition to) in-house peer reviews.

Altmetrics aggregators can build systems that better track mentions of peer reviews online, or–as we’ve recently done at Impactstory–connect directly with peer review platforms like Publons to import both the reviews and metrics related to the reviews. (See our “PS” below for more info on this new feature!)

How will you take credit for your peer review work?

Do you plan to participate in Open Peer Review and start using persistent identifiers to link to and showcase your contributions to your field? Will you start advocating for peer review as a standalone scholarly product to your colleagues? Or do you disagree with our premise, believing instead that traditional, blinded peer review–and our means of recognizing it as service–are just fine as-is?

We want to hear your thoughts in the comments below!


PS: Impactstory now showcases your open peer reviews!

Starting today, there’s one more great way to get credit for your peer reviews, in addition to those above: on your Impactstory profile!

We’re partnering with Publons, a startup that aggregates Open and anonymous peer reviews written for PeerJ, GigaScience, Biology Direct, F1000 Research, and many other journals.

Have you written Open reviews in these places? Want to feature them on your Impactstory profile, complete with viewership stats? Just sign up for a Publons account and then connect it to your Impactstory profile to start showing off your peer-reviewing awesomeness :).