Tracking the impacts of data – beyond citations

This post was originally published on the e-Science Community Blog, a great resource for data management librarians.

"How to find and use altmetrics for research data" text in front of a beaker filled with green liquid

How can you tell if data has been useful to other researchers?

Tracking how often data has been cited (and by whom) is one way, but data citations only tell part of the story, part of the time. (The part that gets published in academic journals, if and when those data are cited correctly.) What about the impact that data has elsewhere?

We’re now able to mine the Web for evidence of diverse impacts (bookmarks, shares, discussions, citations, and so on) for diverse scholarly outputs, including data sets. And that’s great news, because it means that we now can track who’s reusing our data, and how.

All of this is still fairly new, however, which means that you likely need a primer on data metrics beyond citations. So, here you go.

In this post, I’ll give an overview of the different types of data metrics (including citations and altmetrics), the “flavors” of data impact, and specific examples of data metric indicators.

What do data metrics look like?

There are two main types of data metrics: data citations and altmetrics for data. Each type of metric is important in its own right, and each offers a way to understand a different dimension of impact.

Data citations

Much like traditional, publication-based citations, data citations are an attempt to track data’s influence and reuse in scholarly literature.

The reason why we want to track scholarly data influence and reuse? Because “rewards” in academia are traditionally counted in the form of formal citations to works, printed in the reference list of a publication.

Data is often cited in two ways: by citing the data package directly (often by pointing to where the data is hosted in a repository), and by citing a “data paper” that describes the dataset, functioning primarily as detailed metadata, and offering the added benefit of being in a format that’s much more appealing to many publishers.

In the rest of this post, I’m going to mostly focus on metrics other than citations, which are being written about extensively elsewhere. But first, here’s some basic information on data citations that can help you understand how data’s scholarly impacts can be tracked.

How data packages are cited

Much like how citations to publications differ depending on whether you’re using Chicago style or APA style formatting, citations to data tend to differ according to the community of practice and the recommended citation style of the repository that hosts the data. But there is a core set of minimum elements that should be included in any data citation. Jon Kratz has compiled these “core elements” (as well as “common elements”) over on the DataPub blog. The core elements include:

  • Creator(s): Essential, of course, to publicly credit the researchers who did the work. One complication here is that datasets can have large (into the hundreds) numbers of authors, in which case an organizational name might be used.

  • Date: The year of publication or, occasionally, when the dataset was finalized.

  • Title: As is the case with articles, the title of a dataset should help the reader decide whether your dataset is potentially of interest. The title might contain the name of the organization responsible, or information such as the date range covered.

  • Publisher: Many standards split the publisher into separate producer and distributor fields. Sometimes the physical location (City, State) of the organization is included.

  • Identifier: A Digital Object Identifier (DOI), Archival Resource Key (ARK), or other unique and unambiguous label for the dataset.
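
To see how these elements fit together in practice, here’s a minimal sketch that assembles a citation string from the core elements above. Every value below is hypothetical, and the exact ordering and punctuation will vary by repository and community citation style.

```python
# Assembling the core elements into a data citation string. The exact
# order and punctuation vary by repository and community style; every
# value below is hypothetical.
citation = "{creators} ({date}). {title} [Data set]. {publisher}. {identifier}".format(
    creators="Smith J, Garcia M",
    date=2014,
    title="Long-term field observations of coastal erosion",
    publisher="Example Data Repository",
    identifier="http://dx.doi.org/10.5061/dryad.example123",
)
print(citation)
# Smith J, Garcia M (2014). Long-term field observations of coastal
# erosion [Data set]. Example Data Repository. http://dx.doi.org/10.5061/dryad.example123
```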

Arguably the most important principle? The use of a persistent identifier (PID) like a DOI, ARK, or Handle. PIDs are important for two reasons: even if the data’s URL changes, others will still be able to access it; and they give citation aggregators like the Data Citation Index and Impactstory.org an easy, unambiguous way to parse out “mentions” in online forums and journals.

It’s worth noting, however, that as few as 25% of journal articles tend to formally cite data. (Sad, considering that so many major publishers have signed on to FORCE11’s data citation principles, which include the need to cite data packages in the same manner as publications.) Instead, many scholars reference data packages in their Methods section, forgoing formal citations, making text mining necessary to retrieve mentions of those data.
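
Here’s a minimal sketch of what that text mining might look like, assuming mentions appear as bare DOIs in the Methods text. The pattern is a permissive version of the regex Crossref recommends for matching modern DOIs, the example DOI is made up, and real aggregators use far more sophisticated matching and disambiguation.

```python
import re

# Permissive pattern for modern DOIs, adapted from the regex Crossref
# recommends; real text-mining pipelines are more careful than this.
DOI_PATTERN = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

methods_text = (
    "Sequence data were obtained from Dryad "
    "(doi:10.5061/dryad.example123) and reanalyzed as described."
)

# Find candidate DOIs, then trim punctuation that trails them in prose.
mentions = [m.rstrip(").,;") for m in DOI_PATTERN.findall(methods_text)]
print(mentions)  # ['10.5061/dryad.example123']
```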

How to track citations to data packages

When you want to track citations to your data packages, the best option is the Data Citation Index (DCI). The DCI functions similarly to Web of Science. If your institution has a subscription, you can search the Index for citations that occur in the literature referencing data from a number of well-known repositories, including ICPSR, ANDS, and PANGAEA.

Here’s how: log in to the DCI, then head to the home screen. In the Search box, type your name or the dataset’s DOI. Find the dataset in the search results, then click on it to be taken to the item record page. On the item record, find and click the “Create Citation Alert” button on the right-hand side of the page, where you’ll also find a list of articles that reference that dataset. Now you have a list of the articles that reference your data to date, and you’ll also receive automated email alerts whenever someone new references your data.

Another option comes from CrossRef Search. This experimental search tool works for any dataset that has a DataCite DOI and is referenced in the scholarly literature indexed by CrossRef. (DataCite issues DOIs for Figshare, Dryad, and a number of other repositories.) Right now, the search is a very rough one: you’ll need to view the entire list of DOIs, then use your browser search (usually Ctrl+F or Command+F) to check the list for your specific DOI. It’s not perfect–in fact, sometimes it’s entirely broken–but it does provide a view into your data citations that isn’t entirely available elsewhere.
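
If you’d rather not eyeball the list, a few lines of code can do the Ctrl+F for you. This is a sketch only: LIST_URL is a hypothetical placeholder for wherever the DOI list lives, and the DOI is made up.

```python
import requests

# Hypothetical placeholder URL; substitute the address of the CrossRef
# Search DOI list you are actually viewing.
LIST_URL = "https://example.org/crossref-datacite-doi-list"
MY_DOI = "10.5061/dryad.example123"  # hypothetical dataset DOI

# Fetch the page once, then search its text for your DOI.
page_text = requests.get(LIST_URL, timeout=30).text
if MY_DOI in page_text:
    print(f"{MY_DOI} appears in the citation list")
else:
    print(f"No citations found for {MY_DOI} (yet)")
```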

How data papers are cited

Data papers tend to be cited like any other paper: by recording the authors, title, journal of publication, and any other information that’s required by the citation style you’re using. Data papers are also often cited using permanent identifiers like DOIs, which are assigned by publishers.

How to find citations for data papers

To find citations to data papers, search databases such as Scopus and Web of Science just as you’d search for any traditional publication. Here’s how to track citations in Scopus and Web of Science.

There’s no guarantee that your data paper is included in their databases, though, since data paper journals are still a niche publication type in some fields, and thus aren’t tracked by some major databases. It’s smart to follow up your database search with a Google Scholar search, too.

Altmetrics for data

Citations are good for tracking the impact of your data in the scholarly literature, but what about other types of impact, among other audiences like the public and practitioners?

Altmetrics are indicators of the reuse, discussion, sharing, and other interactions humans can have with a scholarly object. These interactions tend to leave traces on the scholarly web.

Altmetrics are so broadly defined that they include pretty much any type of indicator sourced from a web service. For the purposes of this post, we’ll separate out citations from our definition of altmetrics, but note that many altmetrics aggregators tend to include citation data.

There are two main types of altmetrics for data: repository-sourced metrics (which often measure not only researchers’ impacts, but also repositories’ and curators’ impacts), and social web metrics (which more often measure other scholars’ and the public’s use and other interactions with data).

First, let’s discuss the nuts and bolts of data altmetrics. Then, we’ll talk about services you can use to find altmetrics for data.

Altmetrics for how data is used on the social web

Data packages can be shared, discussed, bookmarked, viewed, and reused using many of the same services that researchers use for journal articles: blogs, Twitter, social bookmarking sites like Mendeley and CiteULike, and so on. There are also a number of services that are specific to data, and these tend to be repositories with altmetric “indicators” particular to that platform.

For an in-depth look at data metrics and altmetrics, I recommend reading Costas et al.’s report, “The Value of Research Data” (2013). Below, I’ve created a basic chart of various altmetrics for data and what they can likely tell us about the use of data.

Quick caveat: there’s been little research done into altmetrics for data. (DataONE, PLOS, and the California Digital Library are in fact the first organizations to do major work in this area, and they were recently awarded a grant to do proper research that will likely confirm or refute much of the list below. Keep an eye out for future news from them.) The metrics and their meanings listed below are, at best, estimations based on experience with both research data and altmetrics.

Repository- and publisher-based indicators

Note that some of the repositories below are primarily used for software, but can sometimes be used to host data, as well.

| Web Service | Indicator | What it might tell us | Reported on |
|---|---|---|---|
| GitHub | Stars | Akin to “favoriting” a tweet or underlining a favorite passage in a book, GitHub stars may indicate that someone who has viewed your dataset wants to remember it for later reference. | GitHub, Impactstory |
| GitHub | Watched repositories | A user is interested enough in your dataset (stored in a “repository” on GitHub) that they want to be informed of any updates. | GitHub, PlumX |
| GitHub | Forks | A user has adapted your code for their own uses, meaning they likely find it useful or interesting. | GitHub, Impactstory, PlumX |
| SourceForge | Ratings & recommendations | What do others think of your data? And do they like it enough to recommend it to others? | SourceForge, PlumX |
| Dryad, Figshare, and most institutional and subject repositories | Views & downloads | Is there interest in your work, such that others are searching for and viewing descriptions of it? And are they interested enough to download it for further examination and possible future use? | Dryad, Figshare, and IR platforms; Impactstory (for Dryad & Figshare); PlumX (for Dryad, Figshare, and some IRs) |
| Figshare | Shares | Implicit endorsement. Do others like your data enough to share it with others? | Figshare, Impactstory, PlumX |
| PLOS | Supplemental data views, figure views | Are readers of your article interested in the underlying data? | PLOS, Impactstory, PlumX |
| Bitbucket | Watchers | A user is interested enough in your dataset that they want to be informed of any updates. | Bitbucket |

Social web-based indicators

| Web Service | Indicator | What it might tell us | Reported on |
|---|---|---|---|
| Twitter | Tweets that include links to your product | Others are discussing your data–maybe for good reasons, maybe for bad ones. (You’ll have to read the tweets to find out.) | PlumX, Altmetric.com, Impactstory |
| Delicious, CiteULike, Mendeley | Bookmarks | Bookmarks may indicate that someone who has viewed your dataset wants to remember it for later reference. Mendeley bookmarks may also be a leading indicator of later citations, as they are for articles. | Impactstory, PlumX; Altmetric.com (CiteULike & Mendeley only) |
| Wikipedia | Mentions (sometimes also called “citations”) | Do others think your data is relevant enough to include in Wikipedia encyclopedia articles? | Impactstory, PlumX |
| ResearchBlogging, Science Seeker | Blog post mentions | Is your data being discussed in your community? | Altmetric.com, PlumX, Impactstory |

How to find altmetrics for data packages and papers

Aside from looking at each platform that offers altmetrics indicators, consider using an aggregator, which will compile them from across the web. Most altmetrics aggregators can track altmetrics for any dataset that either has a DOI or is included in a repository that’s connected to the aggregator. Each aggregator tracks slightly different metrics, as discussed above. For a full list of metrics, visit each aggregator’s site.

Impactstory easily tracks altmetrics for data uploaded to Figshare, GitHub, Dryad, and PLOS journals. Connect your Impactstory account to Figshare and GitHub and it will auto-import your products stored there and find altmetrics for them. To find metrics for Dryad datasets and PLOS supplementary data, provide DOIs when adding products one by one to your profile, and the associated altmetrics will be imported. Here’s an example of what altmetrics for a dataset stored on Dryad look like on Impactstory.

PlumX tracks similar metrics, and offers the added benefit of tracking altmetrics for data stored in institutional repositories, as well. If your university subscribes to PlumX, contact the PlumX team about getting your data included in your researcher profile. Here’s what altmetrics for a dataset stored on Figshare look like on PlumX.

Altmetric.com can track metrics for any dataset that has a DOI or Handle. To track metrics for your dataset, you’ll either need an institutional subscription to Altmetric or the Altmetric bookmarklet, which you can use on the item page for your dataset on a website like Figshare or in your institutional repository. Here’s what altmetrics for a dataset stored on Figshare look like on Altmetric.com.
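
Altmetric also offers a free, rate-limited public API, which is handy if you’d rather check a dataset’s metrics programmatically than via the bookmarklet. Here’s a minimal sketch; the DOI is hypothetical, and the fields returned vary with the attention an item has actually received, so treat the key names as assumptions to verify against the API documentation.

```python
import requests

doi = "10.5061/dryad.example123"  # hypothetical dataset DOI

# Altmetric's public v1 API returns HTTP 404 when it has tracked no
# online attention for the given DOI.
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=30)

if resp.status_code == 404:
    print("No online attention tracked for this DOI yet.")
else:
    resp.raise_for_status()
    data = resp.json()
    # Use .get() because the set of keys varies by item.
    print("Altmetric score:", data.get("score"))
    print("Tweeters:", data.get("cited_by_tweeters_count"))
```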

Flavors of data impact

While scholarly impact is very important, it’s far from the only type of impact one’s research can have. Both data citations and altmetrics can be useful in illustrating these flavors. Take the following scenarios for example.

Useful for teaching

What if your field notebook data was used to teach undergraduates how to use and maintain their own field notebooks, and to collect data with them? Or if a longitudinal dataset you created were used to help graduate students learn the programming language R? These examples are fairly common in practice, and yet they’re often not counted when considering impacts. Potential impact metrics could include full-text mentions in syllabi, views & downloads in Open Educational Resource repositories, and GitHub forks.

Reuse for new discoveries

Researcher, open data advocate, and Impactstory co-founder Heather Piwowar once noted, “the potential benefits of data sharing are impressive: less money spent on duplicate data collection, reduced fraud, diverse contributions, better tuned methods, training, and tools, and more efficient and effective research progress.” If those outcomes aren’t indicative of impact, I don’t know what is! Potential impact metrics could include data citations in the scholarly literature, GitHub forks, and blog post and Wikipedia mentions.

Curator-related metrics

Could a view-to-download ratio be an indicator of how well a dataset has been described and how usable a repository’s UI is? Or of the overall appropriateness of the dataset for inclusion in the repository? Weber et al. (2013) recently proposed a number of indicators that could get at these and other curatorial impacts upon research data, indicators closely related to those previously proposed by Ingwersen and Chavan (2011) for the GBIF repository. Potential impact metrics could include those proposed by Weber et al. and Ingwersen & Chavan, as well as a repository-based view-to-download ratio.
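
As a back-of-the-envelope illustration of that view-to-download idea, here’s a minimal sketch with made-up numbers; a real analysis would pull these counts from your repository platform’s usage reports.

```python
# Toy view-to-download ratio; both counts are invented for illustration.
views = 1800      # item-page views for a dataset
downloads = 450   # completed downloads of the same dataset

ratio = downloads / views if views else 0.0
print(f"View-to-download conversion: {ratio:.1%}")  # 25.0%
# A low conversion might suggest that the metadata draws people in but
# the dataset (or the repository UI) doesn't meet their needs.
```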

Ultimately, more research is needed into altmetrics for datasets before these flavors–and others–are accurately captured.

Now that you know about data metrics, how will you use them?

Some options include: in grant applications, your tenure and promotion dossier, and to demonstrate the impacts of your repository to administrators and funders. I’d love to talk more about this on Twitter or in the comments below.

Recommended reading

  • Piwowar HA, Vision TJ. (2013) Data reuse and the open data citation advantage. PeerJ 1:e175 doi: 10.7717/peerj.175

  • CODATA-ICSTI Task Group. (2013). Out of Cite, Out of Mind: The current state of practice, policy, and technology for the citation of data [report]. doi:10.2481/dsj.OSOM13-043

  • Costas, R., Meijer, I., Zahedi, Z., & Wouters, P. (2013). The value of research data: Metrics for datasets from a cultural and technical point of view. Copenhagen, Denmark: Knowledge Exchange. www.knowledge-exchange.info/datametrics

3 important steps to getting more credit for your peer reviews

A few years back, Scholarly Kitchen editor-in-chief David Crotty informally polled a dozen biologists about the burden of peer review. He found that most review around 3 papers per month. For senior scientists, that number can reach 15 papers per month.

And yet, no matter how much time they spend reviewing, the credit they get is the same, and it looks like this on their CV:

“Service: Reviewer for Physical Review B and PLOS ONE.”

What if your work could be counted as more than just “service”? After all, peer review is dependent upon scientists doing a lot of intellectual heavy lifting for the benefit of their discipline.

And what if you could track the impacts your peer reviews have had on your field? Credit–in the form of citations and altmetrics–could be included in your CV to show the many ways that you’ve contributed intellectually to your discipline.

The good news? You can get credit for your peer reviews. By participating in Open Peer Review and making reviews discoverable and citable, researchers across the world have begun to get the credit they deserve for improving science for the better.

But this practice isn’t yet widespread. So, we’ve compiled a short guide to getting started with getting credit for your peer reviews.

1. Participate in Open Peer Review

Open Peer Review is a radical notion predicated on a simple idea: that by making author and reviewer identities public, more civil and constructive peer reviews will be submitted, and peer reviews can be put into context.

Here’s how it works, more or less: reviewers are assigned to a paper, and they know the author’s identity. They review the paper and sign their name. The reviews are then submitted to the editor and author (who now knows their reviewers’ identities, thanks to the signed reviews). When the paper is published, the signed reviews are published alongside it.

Sounds simple enough, but if you’re reviewing for a traditional journal, this might be a challenge. Open Peer Review is still rarely practiced by most traditional publishers.

For a very long time, publishers favored private, anonymous (‘blinded’) peer review, under the assumption that it would reduce bias and that authors would prefer for criticisms of their work to remain private. Turns out, their assumptions weren’t backed up by evidence.

Blinded peer review is argued to be beneficial for early career researchers, who might find themselves in a position where they’re required to give honest feedback to a scientist who’s influential in their field. Anonymity would protect these ECR-reviewers from their colleagues, who could theoretically retaliate for receiving critical reviews.

Yet many have pointed out that it can be easy for authors to guess the identities of their reviewers (especially in small fields, where everyone tends to know what their colleagues/competitors are working on, or in lax peer review environments, where all one has to do is ask!). And as Mick Watson argues, any retaliation that could theoretically occur would be considered a form of scientific misconduct, on par with plagiarism–and therefore off-limits to scientists with any sense.

In any event, a consequence of this anonymous legacy system is that you, as a reviewer, can’t take credit for your work. Sure, you can say you’re a reviewer for Physical Review B, but you’re unable to point to specific reviews or discuss how your feedback made a difference. (Your peer reviews go into the garbage can of oblivion once the article’s been published, as illustrated below.) That means that others can’t read your reviews to understand your intellectual contributions to your field, which–in the case of some reviews–can be enormous.

Image CC-BY Kriegeskorte N from “Open evaluation: a vision for entirely transparent post-publication peer review and rating for science” Front. Comput. Neurosci., 2012

So, if you want to get credit for your work, you can choose to review for journals that already offer Open Peer Review. A number of forward-thinking journals allow it (BMJ, PeerJ, and F1000 Research, among others).

To find others, use Cofactor’s excellent journal selector tool:

  • Head over to the Cofactor journal selector tool

  • Click “Peer review,”

  • Select “Fully Open,” and

  • Click “Search” to see a full list of Open Peer Review journals

Some stand-alone peer review platforms also allow Open Peer Review. Faculty of 1000 Prime is probably the best-known example. Publons is the largest platform that offers Open Peer Review. Dozens of other platforms offer it, too.

Once your reviews are attributable to you, the next step is making sure others can read them.

2. Make your reviews (and references to them) discoverable

You might think that discoverability goes hand in hand with Open Peer Review, but you’d only be half-right. Thing is: URLs break every day. Persistent access to an article over time, on the other hand, will help ensure that those who seek out your work can find it, years from now.

Persistent access often comes in the form of identifiers like DOIs. Having a DOI associated with your review means that, even if your review’s URL were to change in the future, others can still find your work. That’s because DOIs are set up to resolve to an active URL when other URLs break.

Persistent IDs also have another major benefit: they make it easy to track citations, mentions on scholarly blogs, or new Mendeley readers for your reviews. Tracking citations and altmetrics (social web indicators that tell you when others are sharing, discussing, saving, and reusing your work online) can help you better understand how your work is having an impact, and with whom. It also means you can share those impacts with others when applying for jobs, tenure, grants, and so on.

There are two main ways you can get a DOI for your reviews:

  • Review for a journal like PeerJ or peer review platform like Publons that issues DOIs automatically

  • Archive your review in a repository that issues DOIs, like Figshare

Once you have a DOI, use it! Include it on your CV (more on that below), as a link when sharing your reviews with others, and so on. And encourage others to always link to your review using the DOI resolver link (these are created by putting “http://dx.doi.org/” in front of your DOI; here’s an example of what one looks like: http://dx.doi.org/10.7287/peerj.603v0.1/reviews/2).
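
If you’re scripting your CV or website, building these resolver links is a one-liner. A minimal sketch, using the example review DOI above (dx.doi.org was the standard resolver at the time; https://doi.org/ resolves the same way):

```python
# Turn a bare DOI into a resolver link, as described above.
def doi_to_link(doi: str) -> str:
    return f"http://dx.doi.org/{doi}"

print(doi_to_link("10.7287/peerj.603v0.1/reviews/2"))
# http://dx.doi.org/10.7287/peerj.603v0.1/reviews/2
```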

DOIs and other unique, persistent identifiers help altmetrics aggregators like Impactstory and PlumX pick up mentions of your reviews in the literature and on the social web. And when we’re able to report on your citations and altmetrics, you can start to get credit for them!

3. Help shape a system that values peer review as a scholarly output

Peer review may be viewed primarily as a “service” activity, but things are changing–and you can help change ‘em even more quickly. Here’s how.

As a reviewer, raise awareness by listing and linking to your reviews on your CV, adjacent to any mentions of the journals you review for. By linking to your specific reviews (using the DOI resolver link we talked about above), anyone looking at your CV can easily read the reviews themselves.

You can also illustrate the impacts of Open Peer Review for others by including citations and altmetrics for your reviews on your CV. An easy way to do that is to include on your CV a link to the review on your Impactstory or PlumX profile. You can also include other quantitative measures of your reviews’ quality, like Peerage of Science’s Peerage Essay Quality scores, Publons’ merit scores, or a number of other quantitative indicators of peer-review quality. Just be sure to provide context to any numbers you include.

If you’re a decision-maker, you can “shape the system” by making sure that tenure & promotion and grant award guidelines at your organization acknowledge peer review as a scholarly output. Actively encouraging early career researchers and students in your lab to participate in Open Peer Review can also go a long way. The biggest thing you can do? Educate other decision-makers so they, too, respect peer review as a standalone scholarly output.

Finally, if you’re a publisher or altmetrics aggregator, you can help “shape the system” by building products that accommodate and reward new modes of peer review.

Publishers can partner with standalone peer review platforms to accept their “portable peer reviews” as a substitute (or addition to) in-house peer reviews.

Altmetrics aggregators can build systems that better track mentions of peer reviews online, or–as we’ve recently done at Impactstory–connect directly with peer review platforms like Publons to import both the reviews and metrics related to the reviews. (See our “PS” below for more info on this new feature!)

How will you take credit for your peer review work?

Do you plan to participate in Open Peer Review and start using persistent identifiers to link to and showcase your contributions to your field? Will you start advocating for peer review as a standalone scholarly product to your colleagues? Or do you disagree with our premise, believing instead that traditional, blinded peer review–and our means of recognizing it as service–are just fine as-is?

We want to hear your thoughts in the comments below!

P.S. Impactstory now showcases your open peer reviews!

Starting today, there’s one more great way to get credit for your peer reviews, in addition to those above: on your Impactstory profile!

We’re partnering with Publons, a startup that aggregates Open and anonymous peer reviews written for PeerJ, GigaScience, Biology Direct, F1000 Research, and many other journals.

Have you written Open reviews in these places? Want to feature them on your Impactstory profile, complete with viewership stats? Just sign up for a Publons account and then connect it to your Impactstory profile to start showing off your peer reviewing awesomeness :).

7 ways to make your Google Scholar Profile better

Google Scholar Profiles are useful, but are not as good as they could be. In our last post, we identified their limitations: dirty data, a closed platform, and a narrow understanding of what constitutes scholarly impact.

That said, Google Scholar Profiles are still an important tool for thousands of academics worldwide. So, how can researchers overcome Google Scholar Profiles’ weaknesses?

In this post, we share 7 essential tips for your Google Scholar Profile. They’ll keep your citation data clean, help you keep tabs on colleagues and competitors, increase your “Googleability,” and more. Read on!

1. Clean up your Google Scholar Profile data

Thanks to Google Scholar Profiles’ “auto add” functionality, your Profile might include some articles you didn’t author.

If that’s the case, you can remove them in one of two ways:

  1. clicking on the title of each offending article to get to the article’s page, and then clicking the trashcan/“Delete” button in the top green bar

  2. from the main Profile page, ticking the boxes next to each incorrect article and selecting “Delete” from the drop-down menu in the top green bar

If you want to prevent incorrect articles from appearing on your profile in the first place, you can change your Profile settings to require Google Scholar to email you for approval before adding anything. To make this change, from your main Profile page, click the “More” button that appears in the top grey bar. Select “Profile updates” and change the setting to “Don’t automatically update my profile.”

Prefer to roll the dice? You can keep a close eye on what articles are automatically added to your profile by signing up for alerts (more info about how to do that below) and manually removing any incorrect additions that appear.

2. Add missing publications to your Profile

Google Scholar is pretty good at adding new papers to your profile automatically, but sometimes articles can fall through the cracks.

To add an article, click “Add” in the top grey bar on the main Profile page. Then, you can add your missing articles in one of three ways:

  1. Click the “Add article manually” link in the left-hand navigation bar. On the next page, add as much descriptive information about your article, book, thesis, patent, or other publication as possible. The more metadata you add, the better the chance Google Scholar has of finding citations to your work.

  2. Click “Add articles” in the left-hand navigation bar to get a list of articles that Google Scholar thinks you may have authored. Select the ones you’ve actually authored and add them to your profile by clicking the “Add” button at the top.

  3. Select “Add article groups” from the left-hand navigation bar to review groups of articles that Scholar thinks you may have authored under another name. This is a new feature that’s less than perfect–hence we’ve listed it as a last choice for ways to add stuff to your profile.

Got all your publications added to your Profile? Good, now let’s move on.

3. Increase your “Googleability”

One benefit to Google Scholar Profiles is that they function as a landing page for your publications. But that functionality only works if your profile is set to “public.”

Double-check your profile visibility by loading your profile and, at the top of the main page, confirming that it reads, “My profile is public” beneath your affiliation information.

If it’s not already public, change your profile visibility by clicking the “Edit” button at the top of your profile, selecting “My profile is public”, and then clicking “Save”.

4. Use your Google Scholar Profile data to get ahead

Though Google Scholar Profiles’ limitations mean you can’t use them to completely replace your CV, you can use your Profile data to enhance your CV. You can also use your Profile data in annual reports, grant applications, and other instances where you want to document the impact of your publications.

Google Scholar doesn’t allow users to download a copy of their citation data, unfortunately. Any reuse of Google Scholar Profile data has to be done the old-fashioned way: copying and pasting.

That said, a benefit of regularly updating your CV to include copied-and-pasted Google Scholar Profile citations is that it’s a low-tech backup of your Google Scholar Profile data–essential in case Google Scholar is ever deprecated.

5. Stay up-to-date when you’ve been cited

One benefit to Google Scholar Profiles is that you can “Follow” yourself to get alerts whenever you’re cited. As we described in our Ultimate guide to staying up-to-date on your articles’ impact:

Visit your profile page and click the blue “Follow” button at the top of your profile. Enter your preferred email address in the box that appears, then click “Create alert.” You’ll now get an alert anytime you receive a new citation.

Easy, right?

You can also click “Follow new articles” on your own profile to be emailed every time a new article is added automatically–key to making sure the data in your Profile is clean, as we discussed in #1 above.

6. …and stay up-to-date on your colleagues and competitors, too

Similarly, you can sign up to receive an email every time someone else receives a new citation or publishes a new article. (I like to think of it as “business intelligence” for busy academics.) It’s as easy as searching for them by name and, on their profile page, clicking “Follow new articles” or “Follow new citations.”

7. Tell Google Scholar how it can improve

Finally, Google Scholar–like most services–relies on your feedback in order to improve. Get in touch with them via this Contact Us link to let them know how they can better their platform. (Be sure to mention that an open API is key to letting others fill the service gaps Google Scholar doesn’t cover, especially with respect to altmetrics!)

Do you have Google Scholar Profile hacks that you use to get around the service’s limitations? Leave them in the comments below or join the conversation on Twitter @impactstory!

Updated 12/19/2014 to reflect changes in the Google Scholar profile redesign.

4 reasons why Google Scholar isn’t as great as you think it is

These days, you’d be hard-pressed to find an academic who doesn’t think that Google Scholar Profiles are the greatest thing since sliced bread. Some days, I agree.

Why? Because my Google Scholar Profile captures more citations to my work than Web of Knowledge or Scopus, automatically adds (and tracks citations for) new papers I’ve published, is better at finding citations that appear in non-English language publications, and gives me a nice fat h-index. I’m sure you find it valuable for similar reasons.

And yet, Google Scholar is still deeply flawed. It has some key disadvantages that keep it from being as awesome as most imagine that it is.

In this post, I’m going to do some good ol’ fashioned consciousness-raising and describe Google Scholar Profiles’ limitations. And in our next post, I’ll share tips I’ve learned for getting the most out of your Google Scholar Profile, limitations be darned.

1. Google Scholar Profiles include dirty data

Let’s begin with the most basic element of your Profile: your name. If your name includes diacritics, ligatures, or even apostrophes, Google Scholar may be missing citations to your work. (Sorry, O’Connor!) And if you have a common name, it’s likely you’ll end up with others’ publications in your Profile, which you are unfortunately responsible for identifying and removing. (We’ll cover how to do that in our next post.)

Now, what about the quality of citations? Google Scholar claims to pull citations from anywhere on the scholarly web into your Profile, but their definition of “the scholarly web” is less rigorous than many people realize. For example, our co-founder, Heather, has citations on her Google Scholar Profile for a Friendfeed post. And others have found Google Scholar citations to their work in student handbooks and LibGuides–not the worst places you can get a cite from, but still: Nature they ain’t.

Google Scholar citations are also, like any metric, susceptible to gaming. But whereas organizations like PLOS and Thomson Reuters (with its Journal Citation Reports) will flag and ban those found to be gaming the system, Google Scholar does not respond quickly (if at all) to reports of gaming. And as researchers point out, Google’s lack of transparency with respect to how data is collected means that gaming is all the more difficult to discover.

The service also misses citations in a treasure-trove of scholarly material that’s stored in institutional repositories. Why? Because Google Scholar won’t harvest information from repositories in the format that repositories across the world tend to use (Dublin Core).

Google Scholar Profile data is far from perfect, but that’s a small problem compared to the next issue.

2. Google Scholar Profiles may not last

Remember Google Reader? Google has a history of killing beloved products when the bottom line is in question. It’s no exaggeration to say that Google Scholar Profiles could go away at any moment.

To me, it’s not unlike the problem of monoculture in agriculture. Monoculture can be a good thing. For those unfamiliar with the term, monoculture is when farmers identify the most powerful species of a crop–the one that is easiest to grow and yields the best harvest year after year–and then grow that crop exclusively. Google Scholar Profiles were, for a long time, the easiest-to-use and most powerful citation reports available to scholars, and so Google Scholar has become one of the most-used platforms in academia.

But monoculture is also risky. Growing only one species of a crop can be catastrophic to a nation’s food supply if, for example, that species were wiped out by blight one year. Similarly, academia’s near-singular dependence on Google Scholar Profile data could be harmful to many if Google Scholar were to be shelved.

3. Google Scholar won’t allow itself to be improved upon

Other issues aside, it’s worth acknowledging that Google Scholar Profiles are very good at doing one thing: finding citations on the scholarly web. But that’s pretty much all they do, and Google is actively preventing anyone else from improving upon their service.

It’s been pointed out before that the lack of a Google Scholar API means that no one can add value to or improve the tool. That means that services like Impactstory cannot include citations from Google Scholar on Impactstory, nor can we build upon Google Scholar Profiles to find and display metrics beyond citations or automatically push new publications to Profiles. Based on the number of Google Scholar-related help tickets we receive, this lack of interoperability is a major pain point for researchers.

4. Google Scholar Profiles only measure a narrow kind of scholarly impact

Google Scholar Profiles aren’t designed to meet the needs of web-native scholarship. These days, researchers are putting their software, data, posters, and other scholarly products online alongside their papers. Yet Google Scholar Profiles don’t allow them to track citations–nor any other type of impact indicator, including altmetrics–to those outputs.

Google Scholar Profiles also promote a much-maligned One Metric to Rule Them All: the h-index. We’ve already talked about the many reasons why scholars should stop caring about the h-index; most of those reasons stem from the fact that h-indices, like Google Scholar Profiles, aren’t designed with web-native scholarship in mind.

Now that we’re clear on the limitations of Google Scholar Profiles, we’ll help you overcome ‘em by sharing 7 essential workarounds for your Google Scholar Profile in tomorrow’s post. Stay tuned!

4 things every librarian should do with altmetrics

Researchers are starting to use altmetrics to understand and promote their academic contributions. At the same time, administrators and funders are exploring them to evaluate researchers’ impact.

In light of these changes, how can you, as a librarian, stay relevant by supporting researchers’ fast-changing altmetrics needs?

In this post, we’ll give you four ways to stay relevant: staying up-to-date with the latest altmetrics research, experimenting with altmetrics tools, engaging in early altmetrics education and outreach, and defining what altmetrics mean to you as a librarian.

1. Know the literature

Faculty won’t come to you for help navigating the altmetrics landscape if they can tell you don’t know the area very well, will they?

To get familiar with discussions around altmetrics, start with the recent SPARC report on article-level metrics, this excellent overview that appeared in Serials Review (paywall), and the recent ASIS&T Bulletin special issue on altmetrics.

Then, check out this list of “17 Essential Altmetrics Resources” aimed at librarians, this recent article on collection development and altmetrics from Against the Grain, and presentations from Heather and Stacy on why it’s important for librarians to be involved in altmetrics discussions on their campuses.

There’s also a growing body of peer-reviewed research on altmetrics. One important concept from this literature is the idea of “impact flavors”–a way to understand distinctive patterns in the impacts of scholarly products.

For example, an article featured in mainstream media stories, blogged about, and downloaded by the public has a very different flavor of impact than a dataset heavily saved and discussed by scholars, which is in turn different from software that’s highly cited in research papers. Altmetrics can help researchers, funders, and administrators optimize for the mix of flavors that best fits their particular goals.

There have also been many studies on correlations (or the lack thereof) between altmetrics and traditional citations. Some have shown that selected altmetrics sources (Mendeley in particular) are significantly correlated with citations (1, 2, 3), while other sources, like Facebook bookmarks, have only slight correlations with citations. These studies show that different types of altmetrics capture different types of impact, beyond just scholarly impact.
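
For the curious, these correlation studies typically boil down to a rank correlation between an altmetrics source and citation counts. Here’s a toy sketch of that calculation; the numbers are invented, whereas the published studies use large samples of real articles.

```python
# Rank correlation between Mendeley readership and citations, the kind
# of test the studies above run. All numbers here are made up.
from scipy.stats import spearmanr

mendeley_readers = [120, 45, 8, 230, 15, 60, 3, 90]
citations        = [34, 12, 2, 80, 5, 20, 1, 25]

rho, p_value = spearmanr(mendeley_readers, citations)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```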

Other early touchstones include studies exploring the predictive potential of altmetrics, growing adoption of social media tools that inform altmetrics, and insights from article readership patterns.

But these are far from the only studies to be aware of! Stay abreast of new research by reading through the PLOS Altmetrics Collection, joining the Altmetrics Mendeley group, and following the #altmetrics hashtag on Twitter.

2. Know the tools

There are now several tools that allow scholars to collect and share the broad impact of their research portfolios.

In the same way you’d experiment with new features added to Web of Science, you can play around with altmetrics tools and add them to your bibliographic instruction repertoire (more on that in the following section). Familiarity will enable you to do easy demonstrations, discuss strengths and weaknesses, contribute to product development, and serve as a resource for campus scholars and administration.

Here are some of the most popular altmetrics tools:

Impactstory

If you’re reading this post, chances are that you’re already familiar with Impactstory, a nonprofit Web application supported by the Alfred P. Sloan Foundation and NSF.

If you’re a newcomer, here’s the scoop: scholars create a free Impactstory profile and then upload their articles, datasets, software, and other products using Google Scholar, ORCID, or lists of permanent identifiers like DOIs, PubMed IDs, and so on. Impactstory then gathers and reports altmetrics and traditional citations for each product. As shown above, metrics are displayed as percentiles relative to similar products. Profile data can be exported for further analysis, and users can receive alerts about new impacts.

Impactstory is built on open-source code, offers open data, and is free to use. Our robust community of users helps us think up new features and prioritize development via our Feedback forum; once you’re familiar with our site, we encourage you to sign up and start contributing, too!

PlumX

PlumX is another web application that displays metrics for a wide range of scholarly outputs. The metrics can be viewed and analyzed at any user-defined level, including at the researcher, department, institution, journal, grant, and research topic levels. PlumX reports some metrics that are unique from other altmetrics services, like WorldCat holdings and downloads and pageviews from some publishers, institutional repositories, and EBSCO databases. PlumX is developed and marketed by Plum Analytics, an EBSCO company.

The service is available via a subscription. Individuals who are curious can experiment with the free demo version.

Altmetric

The third tool that librarians should know about is Altmetric.com. Originally developed to provide altmetrics for publishers, the tool primarily tracks journal articles and ArXiv.org preprints. In recent years, the service has expanded to include a subscription-based institutional edition, aimed at university administrators.

Altmetric.com offers unique features, including the Altmetric score (a single-number summary of the attention an article has received online) and the Altmetric bookmarklet (a browser widget that allows you to look up altmetrics for any journal article or ArXiv.org preprint with a unique identifier). Sources tracked for mentions of articles include social and traditional media outlets from around the world, post-publication peer-review sites, reference managers like Mendeley, and public policy documents.

Librarians can get free access to the Altmetric Explorer and free services for institutional repositories. You can also request trial access to Altmetric for Institutions.

3. Integrate altmetrics into library outreach and education

Librarians are often asked to describe Open Access publishing choices to both faculty and students and teach how to gather evidence of impact for hiring, promotion, and tenure. These opportunities–whether one on one or in group settings like faculty meetings–can allow librarians to introduce altmetrics.

Discussing altmetrics in the context of Open Access publishing helps “sell” the benefits of OA. Altmetrics like download counts, which appear in PLOS journals and institutional repositories, can highlight those benefits directly. They can also demonstrate that “impact” is tied more closely to an individual’s scholarship than to a journal’s impact factor.

Similarly, researchers often use an author’s h-index for hiring, tenure, and promotion, conflating the h-index with the quality of an individual’s work. Librarians are often asked to teach and provide assistance calculating an h-index within various databases (Web of Science, Scopus, etc.). Integrating altmetrics into these instruction sessions is akin to providing researchers with additional primary resource choices on a research project. Librarians need to make researchers aware of the many tools they can use to evaluate the impact of scholarship, and of the relevant research–including the benefits of and drawbacks to different altmetrics.

So, what does altmetrics outreach look like on the ground? To start, check out these great presentations that librarians around the world have given on the benefits of using altmetrics (and particular altmetrics tools) in research and promotion.

Another great way to stay relevant on this subject is to find and recommend to your grad students and faculty readings on ways they can use altmetrics in their career, like this one from our blog on the benefits of including altmetrics on your CV.

4. Discover the benefits that altmetrics offer librarians

There are reasons to learn about altmetrics beyond serving faculty and students. A major one is that many librarians are scholars themselves, and can use altmetrics to better understand the diverse impact of their articles, presentations, and white papers. Consider putting altmetrics on your own CV, and advocating the use of altmetrics among library faculty who are assembling tenure and promotion packages.

Librarians also produce and support terabytes’ worth of scholarly content that’s intended for others’ use, usually in the form of digital special collections and institutional repository holdings. Altmetrics can help librarians understand the impacts of these non-traditional scholarly outputs, and provide hard evidence of their use beyond ‘hits’ and downloads–evidence that’s especially useful when making arguments for increased budgetary and administrative support.

It’s important that librarians explore the unique ways they can apply altmetrics to their own research and jobs, especially in light of recent initiatives to create recommended practices for the collection and use of altmetrics. What is useful to a computational biologist may not be useful for a librarian (and vice versa). Get to know the research and tools and figure out ways to use them to your own ends.

There’s a lot happening right now in the altmetrics space, and it can sometimes be overwhelming for librarians to keep up with and understand. By following the steps outlined above, you’ll be well positioned to inform and support researchers, administrators, and library decision makers in their use. And in doing so, you’ll be indispensable in this new era of web-native research.

Are you a librarian that’s using altmetrics? Share your experiences in the comments below!

This post has been adapted from the 2013 C&RL News article, “Riding the crest of the altmetrics wave: How librarians can help prepare faculty for the next generation of research impact metrics” by Lapinski, Piwowar, and Priem.

Ten reasons you should put altmetrics on your CV right now

If you don’t include altmetrics on your CV, you’re missing out in a big way.

There are many benefits to scholars and scholarship when altmetrics are embedded in a CV.

Altmetrics can:

  1. provide additional information;
  2. de-emphasize inappropriate metrics;
  3. uncover the impact of just-published work;
  4. legitimize all types of scholarly products;
  5. recognize diverse impact flavors;
  6. reward effective efforts to facilitate reuse;
  7. encourage a focus on public engagement;
  8. facilitate qualitative exploration;
  9. empower publication choice; and
  10. spur innovation in research evaluation.

In this post, we’ll detail why these benefits are important to your career, and also recommend the ways you should–and shouldn’t–include altmetrics in your CV.

1. Altmetrics provide additional information

The most obvious benefit of including altmetrics on a CV is that you’re giving readers more information than a traditional CV provides. Readers can still assess the CV items just as they’ve always done: based on title, journal, and author list, and maybe–if they’re motivated–by reading or reviewing the research product itself. Altmetrics add the ability to dig into the post-publication impact of your work.

2. Altmetrics de-emphasize inappropriate metrics

It’s generally regarded as poor form to evaluate an article based on its journal’s title or impact factor. Why? Because impact factors vary widely across fields, and an individual article often receives far more or less attention than its journal container suggests.

But what else are readers of a CV to do? Most of us don’t have enough domain expertise to dig into each item and assess its merits based on a careful reading, even if we did have time. We need help, but traditional CVs don’t provide enough information to assess the work on anything but journal title.

Providing article-level citations and altmetrics in a CV gives readers more information, thereby de-emphasizing evaluation based on journal rank.

3. Altmetrics uncover the impact of just-published work

Why not suggest that we include citation counts in CVs, and leave it at that? Why go so far as altmetrics? The reason is that altmetrics have benefits that complement the weaknesses of a citation-based solution.

Timeliness is the most obvious benefit of altmetrics. Citations take years to accrue, which can be a problem for graduate students who are applying for jobs soon after publishing their first papers, and for promotion candidates whose most profound work is published only shortly before review.

Multiple research studies have found that counts of downloads, bookmarks and tweets correlate with citations, yet accrue much more quickly, often in weeks or months rather than years. Using timely metrics allows researchers to showcase the impact of their most recent work.

4. Altmetrics legitimize all types of scholarly products

How can readers of a CV know if your included dataset, software project, or technical report is any good?

You can’t judge its quality and impact based on the reputation of the journal that published it, since datasets and software aren’t published in journals. And even if they were, we wouldn’t want to promote the poor practice of judging the impact of an item by the impact of its container.

How, then, can alternative scholarly products be more than just space-filler on a CV?

The answer is product-level metrics. Like article-level metrics do for journal articles, product-level metrics provide the needed evidence to convince evaluators that a dataset or software package or white paper has made a difference. These types of products often make impacts in ways that aren’t captured by standard attribution mechanisms like citations. Altmetrics are key to communicating the full picture of how a product has influenced a field.

5. Altmetrics recognize diverse impact flavors

The impact of a research paper has a flavor. There are scholarly flavors (a great methods section bookmarked for later reference, or controversial claims that change a field), public flavors (“sexy” research that captures the imagination, or data from a paper that’s used in the classroom), and flavors that fall into the area in between (research that informs public policy, or a paper that’s widely used in clinical practice).

We don’t yet know how many flavors of impact there are, but it would be a safe bet that scholarship and society need them all. The goal isn’t to compare flavors: one flavor isn’t objectively better than another. They each have to be appreciated on their own merits for the needs they meet.

To appreciate the impact flavor of items on a CV, we need to be able to tell the flavors apart. (Citations alone can’t fully inform what kind of difference a research paper has made on the world. They are important, but not enough.) This is where altmetrics come in. By analyzing patterns in what people are reading, bookmarking, sharing, discussing and citing online we can start to figure out what kind – what flavor – of impact a research output is making.

More research is needed to understand the flavor palette, how to classify impact flavor and what it means. In the meantime, exposing raw information about downloads, shares, bookmarks and the like starts to give a peek into impact flavor beyond just citations.

6. Altmetrics reward efforts to facilitate reuse

Reusing research – for replication, follow-up studies and entirely new purposes – reduces waste and spurs innovation. But it does take a bit of work to make your research reusable, and that work should be recognized using altmetrics.

There are a number of ways authors can make their research easier to reuse. They can make article text available for free with broad reuse rights. They can choose to publish in places with liberal text-mining policies, that invest in disseminating machine-friendly versions of articles and figures.

Authors can write detailed descriptions of their methods, materials, datasets and software and make them openly available for reuse. They can even go further, experimenting with executable papers, versioned papers, open peer review, semantic markup and so on.

When these additional steps result in increased reuse, it will likely be reflected in downloads, bookmarks, discussions and possibly citations. Including altmetrics in CVs will reward investigators who have invested their time to make their research reusable, and will encourage others to do so in the future.

7. Altmetrics can encourage a focus on public engagement

The research community, as well as society as a whole, benefits when research results are discussed outside the Ivory Tower. Engaging the public is essential for future funding, recruitment and accountability.

Today, however, researchers have little incentive to engage in outreach or make their research accessible to the public. By highlighting evidence of public engagement like tweets, blog posts and mainstream media coverage, altmetrics on a CV can reward researchers who choose to invest in public engagement activities.

8. Altmetrics facilitate qualitative exploration

Including altmetrics in a CV isn’t all about the numbers! Just as we hope many people who skim our CVs will stop to read our papers and explore our software packages, so too we can hope that interested parties will click through to explore the details of altmetrics engagement for themselves.

Who is discussing an article? What are they saying? Who has bookmarked a dataset? What are they using it for? As we discuss at the end of this post, including provenance information is crucial for trustworthy altmetrics. It also provides great information that helps CV readers move beyond the numbers and jump into qualitative exploration of impact.

9. Altmetrics empower publication choice

Publishing in a new or innovative journal can be risky. Many authors are hesitant to publish their best work somewhere new or with a relatively-low impact factor. Altmetrics can remedy this by highlighting work based on its post-publication impact, rather than the title of the journal it was published in. Authors will be empowered to choose publication venues they feel are most appropriate, leveling the playing field for what might otherwise be considered risky choices.

Successful publishing innovators will also benefit. New journals won’t have to wait two years to get an impact factor before they can compete. Publishing venues that increase access and reuse will be particularly attractive. This change will spur innovation and support the many publishing options that have recently debuted, such as eLife, PeerJ, F1000 Research and others.

10. Altmetrics spur innovation in research evaluation

Finally, including altmetrics on CVs will engage researchers directly in research evaluation. Researchers are evaluated all the time, but often behind closed doors, using data and tools they don’t have access to. Encouraging researchers to tell their own impact stories on their CVs, using broad sources of data, will help spur a much-needed conversation about how research evaluation is done and should be done in the future.

OK, so how can you do it right?

There can be risks to including altmetrics data on a CV, particularly if the data is presented or interpreted without due care or common sense.

Altmetrics data should be presented in a way that is accurate, auditable and meaningful:

  • Accurate data is up-to-date, well-described, and has been filtered to remove attempts at deceitful gaming.
  • Auditable data implies completely open and transparent calculation formulas for aggregation, navigable links to original sources, and access by anyone without a subscription.
  • Meaningful data needs context and reference. Categorizing online activity into an engagement framework helps readers understand the metrics without becoming overwhelmed. Reference is also crucial. How many tweets is a lot? What percentage of papers are cited in Wikipedia? Representing raw counts as statistically rigorous percentiles, localized to domain or type of product, makes it easy to interpret the data responsibly (see the sketch after this list).
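
To make the percentile idea concrete, here is a minimal sketch of how a raw count can be converted into a percentile rank against a reference sample of comparable products. The reference counts below are invented purely for illustration.

```python
# Convert a raw metric count into a percentile rank against a
# reference sample (e.g., tweet counts for papers from the same
# field and year). All numbers here are invented for illustration.

def percentile_rank(value, reference):
    """Percent of reference items with a count below `value`."""
    below = sum(1 for r in reference if r < value)
    return 100.0 * below / len(reference)

# Hypothetical tweet counts for ten comparable papers
reference_tweets = [0, 0, 1, 1, 2, 3, 5, 8, 13, 40]

# A paper with 4 tweets was tweeted more than 60% of its peers
print(percentile_rank(4, reference_tweets))  # 60.0
```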

Assuming these presentation requirements are met, how should the data be interpreted? We strongly recommend that altmetrics be considered not as a replacement for careful expert evaluation but as a supplement. Because they are still in their infancy, we should view altmetrics as a way to ground subjective assessment in real data; a way to start conversations, not end them.

Given this approach, at least three varieties of interpretation are appropriate: signaling, highlighting and discovery. A CV with altmetrics clearly signals that a scholar is abreast of innovations in scholarly communication and serious about communicating the impact of scholarship in meaningful ways. Altmetrics can also be used to highlight research products that might otherwise go unnoticed: a highly downloaded dataset or a track record of F1000-reviewed papers suggests work worthy of a second look. Finally, as we described above, auditable altmetrics data can be used by evaluators as a jumping off point for discovery about who is interested in the research, what they are doing with it, and how they are using it.

How to Get Started

How can you add altmetrics to your own CV or, if you are a librarian, empower scholars to add altmetrics to theirs?

Start by experimenting with altmetrics for yourself. Play with the tools, explore and suggest improvements. Librarians can also spread the word on their campuses and beyond through writing, teaching and outreach. Finally, if you’re in a position to hire, promote, or review grant applications, explicitly welcome diverse evidence of impact when you solicit CVs.

What are your thoughts on using altmetrics on a CV? Would you welcome them as a reviewer, or choose to ignore them? Tell us in the comments section below.

This post has been adapted from “The Power of Altmetrics on a CV,” which appeared in the April/May 2013 issue of ASIS&T Bulletin.

The ultimate guide to staying up-to-date on the impact of your data, software, white papers, slide decks, and conference posters

Getting impact alerts for your papers was pretty simple to set up, but what about tracking real-time citations, downloads, and social media activity for your other research outputs?

There are so many types of outputs to track–datasets, software, slide decks, and more. Plus, there seem to be dozens of websites for hosting them! How can you easily keep track of your diverse impacts, as they happen?

Don’t worry–it’s literally our job to stay on top of this stuff! Below, we’ve compiled the very best services that send impact alerts for your research data, software, slide decks, conference posters, technical reports, and white papers.

Research data

Specific data repositories gather and display metrics on use. Here, we go into detail on the metrics offered by GitHub, Figshare, and Dryad, and then talk about how you can track citations via the Data Citation Index.

GitHub


If you use the collaborative coding website GitHub to store and work with research data, you can enable email alerts for certain types of activities. That way, you’re notified any time someone comments on your data or wants to modify it using a “pull request.”

First, you’ll need to “watch” whatever repositories you want to get notifications for. To do that, visit the repository page for the dataset you want to track, and then click the “Watch” button in the upper right-hand corner and select “Watching” from the drop-down list, so you’ll get a notification when changes are made.

Then, you need to enable notification emails. To do that, log into GitHub and click the “Account Settings” icon in the upper right-hand corner. Then, go to “Notification center” on the left-hand navigation bar. Under “Watching,” make sure the “Email” box is ticked.

Other GitHub metrics are also useful to researchers: “stars” tell you if others have bookmarked your repository, and “forks”–a precursor to a pull request–indicate if others have adapted some of your code for their own uses. Impactstory notification emails (covered in more detail below) include both of these metrics.
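
If you’d rather pull these numbers yourself, GitHub also exposes them through its public REST API. Here’s a minimal sketch using Python’s requests library; the owner and repository names are placeholders for your own.

```python
# Fetch bookmark- and reuse-style metrics for a GitHub repository via
# GitHub's public REST API. "your-username/your-dataset-repo" is a
# placeholder; unauthenticated requests are rate-limited.
import requests

resp = requests.get("https://api.github.com/repos/your-username/your-dataset-repo")
repo = resp.json()

print("stars (bookmarks):", repo["stargazers_count"])
print("forks (adaptations):", repo["forks_count"])
print("watchers (subscribers):", repo["subscribers_count"])
```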

GitHub, Dryad and Figshare metrics via Impactstory


Dryad data repository and Figshare both display download information on their web sites, but they don’t send notification emails when new downloads happen. And GitHub tracks stars and forks, but doesn’t include them in their alert emails. Luckily, Impactstory alerts notify you when your data stored on these sites receives the following types of new metrics:

| Metric              | Dryad | Figshare | GitHub |
|---------------------|-------|----------|--------|
| pageviews           | X     | X        |        |
| downloads           | X     | X        |        |
| shares              |       | X        |        |
| stars (bookmarks)   |       |          | X      |
| forks (adaptations) |       |          | X      |

Types of data metrics reported by Impactstory

To set up alerts, create an Impactstory profile and connect your profile to ORCID, Figshare, and GitHub using the “Import from accounts” button at the top of your profile. (If you already have an Impactstory profile, this button will appear as a blue “Connect more accounts” button instead.) This will allow you to auto-import many of your datasets. If any of your datasets are missing, you can add them one by one by clicking the “Import individual products” icon and providing links and DOIs. Once your profile is set up, you’ll start to receive a notification email once every 1-2 weeks.

Data Citation Index

If you’ve deposited your data into a repository that assigns a DOI, the Data Citation Index (DCI) is often the best way to learn if your dataset has been cited in the literature.

To create an alert, you’ll need a subscription to the service, so check with your institution to see if you have access. If you do, you can set up an alert by first creating a personal registration with the Data Citation Index; click the “Sign In” button at the top right of the screen, then select “Register”. (If you’re already registered with Web of Knowledge to get citation alerts for your articles, there’s no need to set up a separate registration.)

Then, set your preferred database to the Data Citation Index by clicking the orange arrow next to “All Databases” to the right of “Search” in the top-left corner. You’ll get a drop-down list of databases; select “Data Citation Index.”

Now you’re ready to create an alert. On the Basic Search screen, search for your dataset by its title. Click on the appropriate title to get to the dataset’s item record. In the upper right hand corner of the record, you’ll find the Citation Network box. Click “Create citation alert.” Let the Data Citation Index know your preferred email address, then save your alert.

Software

The same GitHub metrics you can track for data can be used to track software impact, too. To receive alerts about comments on your code and pull requests, follow the notification sign-up instructions outlined under Research Data > GitHub, above. To receive alerts when your software gets stars or forks, sign up for Impactstory alerts according to the instructions under Research Data > GitHub, Dryad, and Figshare.

Impactstory and others are working on ways to track software impact better–stay tuned!

Technical reports, working papers, conference slides & posters

Slideshare sends alerts for metrics your slide decks and posters receive. Impactstory includes some of these metrics from Slideshare in our alert emails.  Impactstory alerts also include metrics for technical reports, working papers, conference slides, and posters hosted on Figshare.

Slideshare


Though Slideshare is best known for allowing users to view and share slide decks, some researchers also use it to share conference posters. The platform sends users detailed weekly alert emails about new metrics their slide decks and posters have received, including the number of total views, downloads, comments, favorites, tweets, and Facebook likes.

To receive notification emails, go to Slideshare.net and click the profile icon in the upper right-hand corner of the page. Then, click “Email” in the left-hand navigation bar, and check the “With the statistics of my content” box to start receiving your weekly notification emails.

Figshare and Slideshare metrics via Impactstory

You can use Impactstory to receive notifications for downloads, shares, and views for anything you’ve uploaded to Figshare, and for the downloads, comments, favorites, and views for slide decks and posters uploaded to Slideshare.

First, create an Impactstory profile and connect your profile to Figshare and Slideshare using the “Import from accounts” button at the top of your profile. (If you already have an Impactstory profile, this button will appear as a “Connect more accounts” button instead.) For both services, click the appropriate button, then provide your profile URL when prompted. Your content will then auto-import.

If any Figshare or Slideshare uploads are missing–which might be the case if your collaborators have uploaded content on your behalf–you can add them one by one by clicking the “Import stuff” icon at the upper right-hand corner of your profile, clicking the “Import individual products” link, and then providing the Figshare DOIs and Slideshare URLs. Once your profile is set up, you’ll start to receive a notification email once every 1-2 weeks.

Videos

Vimeo and YouTube both provide a solid suite of statistics for videos hosted on their sites, and you can use those metrics to track the impact of your video research outputs. To get alerts for these metrics, though, you’ll need to sign up for Impactstory alerts.

Vimeo and YouTube metrics via Impactstory

Vimeo tracks likes, comments, and plays for videos hosted on its platform; YouTube reports the same, plus dislikes and favorites. To get metrics notifications for your videos hosted on either of these sites, you’ll need to add links to your videos to your Impactstory profile.

Once you’ve signed up for an Impactstory profile, click the “Import stuff” icon at the upper right-hand corner of your profile, then click the “Import individual products” link. There, add URLs for each of the videos and click “Import”. Once they’re imported to your profile, you’ll start to receive notifications for new video metrics once every 1-2 weeks.
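
If you like checking numbers on your own schedule, YouTube’s public statistics are also available programmatically via the YouTube Data API; Vimeo offers a comparable API. Below is a minimal sketch in which VIDEO_ID and YOUR_API_KEY are placeholders (a free API key is available through the Google Cloud console).

```python
# Look up view/like/comment counts for a video using the YouTube
# Data API v3. VIDEO_ID and YOUR_API_KEY are placeholders.
import requests

params = {
    "part": "statistics",
    "id": "VIDEO_ID",
    "key": "YOUR_API_KEY",
}
resp = requests.get("https://www.googleapis.com/youtube/v3/videos", params=params)
stats = resp.json()["items"][0]["statistics"]

print("views:", stats.get("viewCount"))
print("likes:", stats.get("likeCount"))
print("comments:", stats.get("commentCount"))
```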

Are we missing anything? We’ve managed to cover the most popular platforms in this post, but we’d love to get your tips on niche data repositories, video platforms, and coding sites that keep you up to date on your impact by sending alerts. Leave them in the comments below!

Bookmark this guide. This post–and our other Ultimate Guide for articles–will be updated over time, as services change.

The ultimate guide to staying up-to-date on your articles’ impact

You published a paper–congrats!  Has anyone read it?  Cited it?  Talked about it on Twitter?  How can you find out–as it happens?

Automated alerts!  Email updates that matter come right to you.

We’ve compiled a two-part primer on the services that deliver essential research impact metrics straight to your inbox, so you can stay up to date without having to do a lot of work.

In this post, we’ll share tips for how to automagically track citations, altmetrics and downloads for your publications; in our next post, we’ll share strategies for tracking similar metrics for your data, code, slides, and social media outreach.

Citations

Let’s start with citations: the “coin of the realm” to track scholarly impact. You can get citation alerts in two main ways: from Google Scholar or from traditional citation indices.

Google Scholar Citations alerts

Google Scholar citations track any citations to your work that occur on the scholarly web. These citations can appear in any type of scholarly document (white papers, slide decks, and of course journal articles are all fair game) and in documents of any language. Naturally, this means that your citation count on Google Scholar may be larger than on other citation services.

To get Google Scholar alerts, first sign up for a Google Scholar Citations account and add all the documents you want to track citations for. Then, visit your profile page and click the blue “Follow” button at the top of your profile to open a drop-down.


Enter your preferred email address in the box that appears, then click “Create alert.” You’ll now get an alert anytime you’ve received a citation.

Citation alerts via Scopus & Web of Knowledge

Traditional citation indices like Scopus and Web of Knowledge are another good way to get citation alerts delivered to your inbox. These services are more selective in scope, so you’ll be notified only when your work is cited by vetted, peer-reviewed publications. However, they only track citations for select journal articles and book chapters–a far cry from the diverse citations that are available from Google Scholar. Another drawback: you have to have subscription access to set alerts.

Web of Knowledge

Web of Knowledge offers article-level citation alerts. To create an alert, you first have to register with Web of Knowledge by clicking the “Sign In” button at the top right of the screen, then selecting “Register”.


Then, set your preferred database to the Web of Science Core Collection (alerts cannot be set up across all databases at once). To do that, click the orange arrow next to “All Databases” to the right of “Search” in the top-left corner. You’ll get a drop-down list of databases, from which you should select “Web of Science Core Collection.”

Now you’re ready to create an alert. On the Basic Search screen, search for your article by its title. Click on the appropriate title to get to the article page. In the upper right hand corner of the record, you’ll find the Citation Network box. Click “Create citation alert.” Let Web of Knowledge know your preferred email address, then save your alert.

Scopus

In Scopus, you can set up alerts for both articles and authors. To create an alert for an article, search for it and then click on the title in your search results. Once you’re on the Article Abstract screen, you will see a list of papers that cite your article on the right-hand side. To set your alert, click “Set alert” under “Inform me when this document is cited in Scopus.”

To set an author-level alert, click the Author Search tab on the Scopus homepage and run a search for your name. If multiple results are returned, check the author affiliation and subjects listed to find your correct author profile. Next, click on your author profile link. On your author details page, follow the “Get citation alerts” link, then name your saved alert, set an email address, and select your preferred frequency of alerts. Once you’re finished, save your alert.

With alerts set for all three of these services, you’ll now be notified when your work is cited in virtually any publication in the world! But citations only capture a very specific form of scholarly impact. How do we learn about other uses of your articles?

Tracking article pageviews & downloads

How many people are reading your work? While you can’t be certain that article pageviews and full-text downloads mean people are reading your articles,  many scientists still find these measures to be a good proxy. A number of services can send you this information via email notifications for content hosted on their sites. Impactstory can send you pageview and download information for some content hosted elsewhere.

Publisher notifications

Publishers like PeerJ and Frontiers send notification emails as a service to their authors.

If you’re a PeerJ author, you should receive notification emails by default once your article is published. But if you want to check if your notifications are enabled, sign into PeerJ.com, and click your name in the upper right hand corner. Select “Settings.” Choose “Notification Settings” on the left nav bar, and then select the “Summary” tab. You can then choose to receive daily or weekly summary emails for articles you’re following.

In Frontiers journals, it works like this: once logged in, click the arrow next to your name on the upper left-hand side and select “Settings.” On the left-hand nav bar, choose “Messages,” and under the “Other emails” section, check the box next to “Frontiers monthly impact digest.”

Both publishers aggregate activity for all of the publications you’ve published with them, so no need to worry about multiple emails crowding your inbox at once.

Not a PeerJ or Frontiers author? Contact your publisher to find out if they offer notifications for metrics related to articles you’ve published. If they do, let us know by leaving a comment below, and we’ll update this guide!

ResearchGate & Academia.edu


Some places where you upload free-to-read versions of your papers, like ResearchGate and Academia.edu, will report how many people have viewed your paper on their site.

On both sites, click the triangle in the upper right-hand corner of your screen and visit “Settings.” Then, click on the “Notifications” tab in the sidebar menu and check off the types of emails you want to receive. On Academia.edu, the options for new metrics notifications (pageviews, downloads, and bookmarks) are under “Analytics” and “Papers”; on ResearchGate, where you can also turn on notifications for comments and citations by other papers, they’re under “Your publications” and “Scheduled updates.”

PLOS article metrics via Impactstory

Impactstory now offers alerts, so you’re notified any time your articles get new metrics, including pageviews and downloads. However, we currently only offer these metrics for articles published in PLOS journals. (If you’d like to see us add similar notifications for other publishers, submit an idea to our Feedback site!) We describe how to get Impactstory notifications for the articles that matter to you in the Social Media section below.

Post-publication peer review

Some articles garner comments as a form of post-publication peer review. PeerJ authors are notified any time their articles get a comment, and any work that’s uploaded to ResearchGate can be commented upon, too. Reviews can also be tracked via Altmetric.com alerts.

PeerJ

To make sure you’re notified when you receive new PeerJ comments, log in to PeerJ and go to “Settings” > “Notification Settings” and then click on the “Email” tab. There, check the box next to “Someone posts feedback on an article I wrote.”

ResearchGate

To set your ResearchGate notifications, login to the site and navigate to “Settings” > “Notifications.” Check the boxes next to “One of my publications is rated, bookmarked or commented on” and “Someone reviews my publication”.

Altmetric.com

Post-publication peer reviews from Publons and PubPeer are included in Altmetric.com notification emails, and will be included in Impactstory emails in the near future. Instructions for signing up for Altmetric and Impactstory notifications can be found below.

PubChase

Article recommendation platform PubChase can also be used to set up notifications for PubPeer comments and reviews that your articles receive. To set it up, first add your articles to your PubChase library (either by searching and adding papers one-by-one, or by syncing PubChase with your Mendeley account). Then, hover over the Account icon in the upper-right hand corner, and select “My Account.” Click “Email Settings” on the left-hand navigation bar, and then check the box next to “PubPeer comments” to get your alerts.

Social media metrics

What are other researchers saying about your articles around the water cooler? It used to be that we couldn’t track these informal conversations, but now we’re able to listen in using social media sites like Twitter and on blogs. Here’s how.

Social media metrics via Altmetric.com

Altmetric.com allows you to track altmetrics and receive notifications for any article that you have published, no matter the publisher.


First, install the Altmetric.com browser bookmarklet (visit this page and drag the “Altmetric It!” button into your browser menu bar). Then, find your article on the publisher’s website and click the “Altmetric it!” button. The altmetrics for your article will appear in a pop-up box in the upper right-hand side of your browser window.

Next, follow the “Click for more details” link in the Altmetric pop-up. You’ll be taken to a drill-down view of the metrics. At the bottom left-hand corner of the page, you can sign up to receive notifications whenever someone mentions your article online.

The only drawback of these notification emails is that you have to sign up to track each of your articles individually, which can cause inbox mayhem if you are tracking many publications.
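
If per-article signups become unwieldy, one workaround is to poll Altmetric’s free public API yourself; basic DOI lookups have historically not required an API key, though they are rate-limited. A minimal sketch follows, with a placeholder DOI.

```python
# Look up Altmetric.com metrics for one article by DOI, using the
# free public API. The DOI here is a placeholder; swap in your own.
import requests

doi = "10.1371/journal.pone.0000000"  # placeholder
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")

if resp.ok:
    data = resp.json()
    print("Altmetric score:", data.get("score"))
    print("tweeters:", data.get("cited_by_tweeters_count"))
    print("news stories:", data.get("cited_by_msm_count"))
else:
    print("No Altmetric data found for this DOI.")
```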

Social media metrics via Impactstory


Here at Impactstory, we recently launched similar notification emails. Our emails differ in that they alert you to new social media metrics, bookmarks, and citations for all of your articles, aggregated into a single report.

To get started, create an Impactstory profile and connect your profile to ORCID, Google Scholar, and other third-party services. This will allow you to auto-import your articles. If a few of your articles are missing, you can add them one by one by clicking the “Import stuff” icon, clicking the “Import individual products” link on the next page, and then providing links and DOIs. Once your profile is set up, you’ll start to receive your notification emails once every 1-2 weeks.

When you get your first email, take a look at your “cards”. Each card highlights something unique about your new metrics for that week or month: say, if you’re in a top percentile relative to other papers published that year, or if your PLOS paper has topped 1,000 views or gained new Mendeley readers. You’ll get a card for each type of new metric one of your articles receives.

Note that Impactstory notification emails also contain alerts for metrics that your other types of outputs–including data, code and slide decks–receive, but we’ll cover that in more detail in our next post.

Now you’ve got more time for the things that matter

No more wasting your days scouring 10+ websites for evidence of your articles’ impact; it’s now delivered to your inbox, as new impacts accumulate.

Do you have more types of research outputs, beyond journal articles? In our next post, we’ll tell you how to set up similar notifications to track the impact of your data, software, and more.

Updates:
12/17/2014: Updated to describe the revamped Impactstory interface and new notification options for ResearchGate and Academia.edu.
5/27/2014: Added information about PubChase notification emails.

How to become an academic networking pro on LinkedIn

You now have a solid LinkedIn profile, but you don’t quite know what to do with it.

After all, it’s difficult for scientists to self-promote. To many, it just feels unnatural. Plus, your contacts are out of date, and LinkedIn functionalities like Endorsements seem not quite right for you as an academic.

Given that, how exactly are you supposed to use LinkedIn appropriately to connect with other scientists and find job opportunities?

You’re in luck. On top of the tips we compiled for our last post, we’ve found the best strategies for using LinkedIn to network in academia.

In this post, we’ll tell you the keys to networking for academics on LinkedIn: how to find and sustain a professional relationship with colleagues and experts in your field, get others to Endorse and Recommend you in the right ways, and connect LinkedIn to the rest of your professional life.

1. Get connected to your existing web of co-workers and advisors


It’s surprisingly easy to find people you already know and add them to your network on LinkedIn.

Use the Add Connections tab in the top right corner of your profile to connect LinkedIn to your email account.

LinkedIn then suggests Connections based on your contacts. A rule to follow for LinkedIn, as opposed to Twitter and Facebook, is that you should only select Connections you actually know and feel comfortable asking to keep in touch (former collaborators, co-workers, and friends are good choices).

When Connecting, it’s a nice touch to send a message saying hello. Networking is all about building meaningful relationships, not how many people you have in your virtual Rolodex.

2. Request introductions to new contacts

If you want a good way to meet potential collaborators or get an “in” for a job, Connecting with strangers can be useful.

But how do you get around the awkwardness of asking strangers to Connect? The answer: ask a current contact for an introduction.

Here’s an example of how that would work: I’m not currently Connected to genomics researcher Mike Eisen on LinkedIn, but let’s say I want to collaborate with him to do some research on a great idea I have.


The first thing I need to do to connect with him is find a contact that we have in common. So, I visit Mike’s profile. On the left-hand side is a “How You’re Connected” graphic. I can scroll through the list of contacts we have in common to find a suitable middleman–Mendeley’s William Gunn.

Next, I would click on the “Ask William about Mike” link. In the dialog box that appears, I’d write my request for an introduction and send it to William. The request should follow three key rules:

Be specific

William might take 10 minutes out of his day to write an introduction for me, so I shouldn’t waste his time. That means telling him exactly why I want to meet Mike: what Mike does that interests me (he’s a genomics researcher), and what I’m looking to get out of an introduction (an opportunity to tell him about my great research idea: widgets for genomics researchers).

Include a “pitch” as to why an introduction would be valuable

Likewise, I should make it clear what Mike would get out of meeting me. What do I bring to the table? In this case, it’d be the chance to learn about a well-received new widget, and a future NSF grant opportunity.

Show appreciation, and also provide William with an “easy out”

William’s time is valuable, so I should make it clear that I’m thankful that he’s considering writing an Introduction. A good way to do that in addition to saying thanks is to give him a way to beg off without feeling too guilty.

Two additional rules for special scenarios are: 1) If we didn’t know each other well, I’d want to remind William how we met, and 2) If William does introduce Mike and me, I should follow up with an update and thanks.

Using these rules, here’s how my request for an Introduction reads:

Hi William,

I’m writing to ask if you’d be kind enough to introduce me to Mike (if, of course, you feel you know him well enough to do so). As you know, I’ve been toying with a new idea for widgets for genetics researchers. The prototype has been very well received by our initial user group; I think it has the potential to be a success, with the right stewardship.

It’s for that reason I want to connect with Mike. Being a well-known genomicist, Mike might be interested in the widget and, eventually, collaborating with me to go after a round of NSF funding. I hear there’s an upcoming “Dear Colleagues” letter that may be specifically related to genetics research widget design.

Thanks very much for taking the time to read this and considering my request. Feel free to decline if you don’t have the bandwidth to make the Introduction right now, I completely understand.

Best,
Stacy

One final note: keep your requests for introductions to “2nd degree connections”–that is, friends of friends–because your chances of getting a meaningful introduction to a stranger through a friend of a friend of a friend depend on too many variables to be successful.

3. “Cold call” people you want to get to know

This strategy is one of the most risky, but can also be rewarding if it helps you move beyond your existing network and break into new areas–especially important for those seeking jobs.

You can use LinkedIn messaging to send a short note to introduce yourself to and ask advice of individuals who have a job similar to the one you’re aiming for, or to get in touch with recruiters (if you’re looking for a job in industry). You might also consider writing messages to people you don’t know that have viewed your profile, if you think it’d have a payoff (i.e. a connection or, better yet, a lead on a job).

4. Boost your discoverability with the help of your network


Let’s be clear: Endorsements can be totally useless when not done right. In the past, I’ve been endorsed for “Library”. And I’ve seen Endorsements on others’ profiles for even more mundane things.

But Endorsements can be useful for academics, if done with care. The more people Endorse you for a skill or knowledge area (like “Grant writing”), the more you are associated with that skill by LinkedIn and search engines–thereby upping your appearance in search results, surfacing you to potential collaborators or future employers.

Here’s how to keep from getting Endorsed for something too vague to be useful. You can control what others are able to Endorse you for by editing the Skills & Endorsements section of your profile. Delete any skills that don’t apply or aren’t relevant. You can also reorder how those skills appear on your profile–helpful for breaking out of the feedback loop in which the skills you’re already most endorsed for keep attracting new endorsements.

 If you choose to Endorse others, be sure to only do so for people you know, and for skills you actually think they possess. Otherwise, it comes off as spammy.

5. Land at least one Recommendation


Recommendations can help you network passively using your profile. Having at least one Recommendation on your profile makes it clear what type of an employee or collaborator you are, which builds trust in your personal brand.

Asking others to write Recommendations for you doesn’t have to be awkward. Offer to write a Recommendation for them, and let them know you’d welcome a Recommendation in return. Just be sure to make it clear that reciprocation is by no means required.

When writing a Recommendation, make it clear how you know him or her. Did you serve as co-chairs for a professional society? Did she supervise you at your last job? Give specifics about what makes him or her a solid co-worker, and let the reader know what types of jobs you think she or he could excel at.

6. Let others know you’re here and ready to dance

Now it’s time to connect your LinkedIn presence to the rest of your professional life.

Make new LinkedIn Connections in your offline life by advertising that you are on the network. One way to do that is to create a memorable LinkedIn URL and include that URL on your business card. You can also put your custom URL or a LinkedIn badge prominently on your professional website or blog.

LinkedIn should be just one piece of your online identity. Academia.edu, Mendeley, and Impactstory all have functionalities that LinkedIn lacks; use those sites to host your publications, find new collaborators, and track impact metrics for your work.

7. Boost the signals and cut the noise from LinkedIn Notifications

LinkedIn’s Notification emails can be both a blessing and a curse.

Notifications about your Connections–which include information about their new jobs, promotions, and requests for Recommendations–can be a nice way to stay abreast of what your colleagues are up to, and a reminder to check in with former coworkers to say hello.

However, all the Notifications can sometimes be too much. (Do you really need to know about your LinkedIn Connections’ work anniversaries?) You can reduce the “noise” if you are sure to only connect with people you know, and review your Communications settings to make sure you’re getting the types of email you’d prefer to see.

You’ll also want to pay close attention to what sort of Notifications you’re sending out. Job seekers especially should make sure their “Activity broadcasts” are set up correctly (go to Privacy & Settings > “Turn on/off your activity broadcasts”), so current employers don’t get emails letting them know you’re on the job hunt.

Are you ready to rumble?

By now, you’ve reconnected with coworkers and friends to build a meaningful network. And you’ve learned how to hack some of LinkedIn’s more annoying features–Endorsements and Notifications chief among them–to build your brand as a scientist, making new contacts and uncovering professional opportunities along the way.

Do you have other tips for networking using LinkedIn? Want to share a story about a time you triumphed–or failed–to make new Connections or get a Recommendation on the site? Leave them in the comments section below!

7 tips to supercharge your academic LinkedIn profile

Like 1.9 million other academics, you’ve got a LinkedIn profile. Along with the rest of us, you set it up to improve your visibility and to network with other researchers.

Well, we’ve got some bad news for you: your LinkedIn profile probably isn’t doing either of those things right now. Or at least, not very well.

The problem is that LinkedIn is built for businesspeople, not scientists; it’s tough to translate the traditional scholarly CV into the business-friendly format imposed by LinkedIn. So most scientists’ profiles are dull and lack focus on their most important accomplishments, and their networking attempts are limited to “friending” co-workers.

We’re going to fix that by giving you seven easy hacks to turn LinkedIn into a powerful tool for scholarly visibility and networking. Today, we’ll help you supercharge your profile; then in our next post, we’ll show you how to leverage that profile to build a powerful professional network.

1: Bust down barriers to finding your profile


What good is a killer LinkedIn profile if no one can find it, or if your profile is so locked down they can only see your name?

 Your first job is to check your “public profile” settings (go to Privacy & Settings > Edit your public profile) to make sure people can see what you want them to.

What might others want to see? Your past experience, summary, and education, for starters; also include your best awards, patents, and publications. But don’t worry if you haven’t got the right content in place yet; we’ll fix that soon.

Next, double-check your settings by signing out of LinkedIn completely and searching for yourself on both LinkedIn and Google.

Are you findable now? Great, let’s move on.

2: Make your Headline into an ‘elevator pitch’

LinkedIn includes a short text blurb next to each person’s name in search results. They call this your “Headline,” and just like a newspaper headline, it’s meant to stimulate enough interest to make the reader want more.

Here are some keys to writing a great LinkedIn headline:

  1. Describe yourself with the right words: Brainstorm a few keywords that are relevant to the field you’re targeting. Spend a few minutes searching for others in your field, and borrow from keywords found in their profiles and Headlines. For instance, check out Arianna C’s Headline: “Conceptual Modelling, Facilitation, Research Management, Research Networking and Matching”. Right away, the viewer knows what Arianna is an expert at. Your headline should do the same.

  2. Be succinct: Never use two words when one will do. (Hard for academics, I know. 🙂 ) Barbara K., who works in biotech, has a great Headline that follows this rule: “Microbiologist with R & D experience.”

  3. Show your expert status: What makes you stand out as a chemical engineer/genomics researcher/neuroscientist? Do you put in the most hours, score the biggest grants, or get the best instructor evaluations from students? This is your value proposition–what makes you great. Those with less experience, like recent graduates, can supplement this section by showing their passion for a topic. (E.g., “Computer scientist with a passion for undergraduate education.”)

  4. Use a tried-and-true formula for writing your headline: 3 keywords + 1 value proposition = Headline success, according to career coach Diana YK Chan. So what does that look like? Taking the keywords from (1) and the value proposition from (3) above, we can create a Headline that reads, “Computer scientist with a passion for undergraduate education and experience in conceptual modelling and research management.” Cool, huh?

Well-written headlines are also key to making you more findable online–important for those of us who need to disambiguate ourselves from similarly named researchers, beyond what ORCID provides.

3: Make yourself approachable with a photo

The next step to making yourself memorable is to get a good photo on your profile.


4: Hook ‘em with your Summary section

Now it’s time to encourage viewers of your profile to learn about you in more detail. That’s where the Summary section comes in.

Your Summary is an opportunity to provide a 50,000 foot view into your career and studies to date. Don’t just use this section to repeat information found elsewhere on your profile. Instead, write a short narrative of your professional life and career aspirations, using some of the keywords left over from writing your Headline. Here are three tips to help:

Be specific

Don’t use technical jargon, but do provide concrete details about your research and why it matters. Make yourself a person, not just another name in a discipline. Anthropologist Jason Baird Jackson does a great job of this:

“I have collaborated with Native American communities in Oklahoma since 1993, when I began a lifelong personal and research relationship with the Euchee/Yuchi people.”

Be up-front about what you want

Don’t beat around the bush when it comes to your professional goals. If you’ve done your job right, future employers, reviewers, students, and collaborators are probably reading your profile. Great. Now, what do you want to do with them? Let them know what you’re after, like scientist CW Hooker does in his Summary:

“I am always interested in discussing collaborations and future opportunities.”

Prove your value

Finally, use your Summary section to describe what you’ve done and why it matters. Elizabeth Iorns, breast cancer researcher and entrepreneur, explains to profile viewers that,

“Based on her own experiences as a young investigator seeking expert collaborations, Dr. Iorns co-founded Science Exchange. In 2012, after recognizing the need to create a positive incentive system that rewards independent validation of results, Dr. Iorns created the Reproducibility Initiative.”

 Right there is proof that she gets stuff done: she’s created solutions in response to service gaps for scientists. Impressive!

5: Give the scoop on your best work

If you’re a recent graduate or junior academic, it can be tempting to put all of your work experience on your LinkedIn profile.

Don’t do it!

Putting all of your positions on your profile can trivialize the more important work that you’ve done and make you look scattered.

Remember, your LinkedIn profile fills a different role than your CV–it’s more of a trailer than a feature film. So include only the jobs that are relevant to your career goals. Mention a few specifics about your most important responsibilities and what you learned at those jobs, and save the gory details about your day-to-day work for your full CV.

A good rule for more senior researchers is to talk mostly about the last 10-15 years of experience. Listing all of your past institutions will make for a monster profile that will turn readers off with too much detail.

After all, why would someone care if you were a lab assistant for Dr. Obscure at Wichita State University in 1985, when the more compelling story is that you’ve had your own lab since 2006?

6: Brag about your best awards and publications

Keeping it short and sweet also extends to discussing awards and publications on your LinkedIn profile. Highlight your best publications (especially those where you’re a lead author) and most prestigious awards (i.e., skip the $500 undergraduate scholarship from your local Elks club).

If you’re seeking an industry job, keep in mind that publications and awards don’t mean nearly as much outside of academia. In fact, you might want to leave those sections off of your LinkedIn profile altogether, replacing them with patents you’ve filed or projects you’ve led.

7. Add some eye-catching content


If LinkedIn were designed for scientists, it’d be much easier to import information from our CVs. Too bad it’s not. Nonetheless, with a little ingenuity you can make the site great for showcasing what scientists have a lot of: posters, slide decks, and figures for manuscripts.

If you’ve ever given a talk at a conference, or submitted a figure with a manuscript for publication, you can upload it here, giving viewers a better taste of your work. Add links, photos, slideshows, and videos directly to your profile using the Upload icon on your profile’s Summary and Experience sections. Consider also adding a link to your Impactstory profile, so you can show readers your larger body of work and its popular and scholarly impact.

Want some inspiration? Neuroscientist Bradley Voytek has added a Wow Factor to his profile with a link to a TEDx talk he gave on his research. Pharmacology professor Ramy Aziz showcases his best conference talks using links to Slideshare slide decks. And GitHub repositories make an appearance alongside slide decks on PhD student Cristhian Parra’s profile.

You too can upload links to your best–and most visually stimulating–work for a slick-looking profile that sets you apart from others.

If you’ve followed our steps to hacking LinkedIn’s limitations for scientists, that drab old profile is spiffed up and ready to share. Now you’re poised to make lasting connections with your colleagues via LinkedIn, and hook potential collaborators.

But! You haven’t even scratched the surface of LinkedIn’s value until you use it to network. We’ll show you how to do that in the second part of our series. Stay tuned!

Do you have tips for crafting great LinkedIn profiles, or what you–as an employer–look for in a LinkedIn profile? Leave them in the comments below!