What’s our impact? (November 2014)

As a platform for users interested in data, we want to share some stats about our successes (and challenges) in spreading the word about Impactstory.

Here are our outreach numbers for November 2014.

impactstory.org traffic

  • Visitors: 4,361 total; 2,754 unique
  • New Users: 247
  • Conversion rate: 8.9% (% of visitors who signed up for a trial account)

Blog stats

  • Unique visitors: 9,443 (31% growth from October)
  • Clickthrough rate: 0.75% (% of people who visited Impactstory.org from the blog)
  • Conversion rate: 19.7% (% of visitors to impactstory.org from blog who went on to sign up for a trial Impactstory account)
  • Percent of new user signups: 5.7% (share of all new Impactstory users who arrived via the blog)

Twitter stats

  • New followers: 318 in November
  • Increase in followers from October: 6.7%
  • Mentions: 380 (We’re tracking this to answer the question, “How engaged are our followers?”)
  • Tweet reach: 840,174 (We’re tracking this–the number of people who potentially saw a tweet mentioning Impactstory or our blog–to understand our brand awareness)
  • Clickthroughs: 180
  • Conversions: 5

What does it all mean?

impactstory.org: Overall traffic to the site was down, consistent with patterns of use we’ve seen in years past. (An end-of-the-semester dip in traffic is common for academic sites.) Conversion rates on impactstory.org went slightly down from October. We’re confident that new landing pages and general homepage changes we make in the coming months will improve conversion rates.

Blog: November saw an increase in unique visitors (another month of double-digit growth!), but what does that mean for our organization? Conversion rates actually went down from October, as did the blog’s share of new user signups for Impactstory. This points to a need to share more Impactstory-related content on the blog, and experiment with unobtrusive sidebars, slide-ins, and other ways that can point people to our main website.

That said, blogging doesn’t always result in direct signups, nor is it meant to. The primary aim of blogging is to educate people about open science and altmetrics (as a non-profit, we’re big on advocacy). And it helps familiarize people with our organization, too, which can result in indirect signups (i.e., readers might come back later and sign up for Impactstory).

Twitter: Our Twitter followers and mentions increased from October by about 6.7% and 25%, respectively. We’ll aim to continue that growth throughout December. (After all, we’re active on Twitter for the same reason we blog–as a form of outreach and advocacy.) We also passed an exciting benchmark: 5,000 Twitter followers!

We’ll continue to blog our progress, while also thinking about ways to share this data in a more automated fashion. If you have questions or feedback, we welcome them in the comments below.

Updated 12/31/2014 to fix error in reporting conversion rates of impactstory.org visitors from blog.

Impactstory Advisor of the Month: Lorena Barba (December 2014)

[Photograph of Impactstory Advisor Lorena Barba]

2014’s final Impactstory Advisor of the Month is Lorena Barba. Lorena is an associate professor of mechanical and aerospace engineering at the George Washington University in Washington, DC, and an advocate for open source, open science, and open education initiatives.

We recently interviewed Lorena to learn more about her lab’s Open Science manifesto, her research in computational methods in aeronautics and biophysics, and George Washington University’s first Massive Open Online Course, “Practical Numerical Methods with Python” (aka “Numerical MOOC”).

Tell us a bit about your research.

I have a PhD in Aeronautics from Caltech and I specialized in computational fluid dynamics. From that launching pad, I have veered dangerously into applied mathematics (working on what we call fast algorithms), supercomputing (which gets you into really techy stuff like cache-aware and memory-avoiding computations, high-throughput and many-core computing), and various application cases for computer simulation.

Fluid dynamics and aerodynamics are mature fields and it’s hard to make new contributions that have impact. So I look for new problems where we can use our skills as computational scientists to advance a field. That’s how we got into biophysics: there are models that apply to interactions of proteins that use electrostatic theory and can be solved computationally with methods similar to ones used in aeronautics, believe it or not.

We have been developing models and software to compute electrostatic interactions between bio-molecules, first, and between bio-molecules and nano-surfaces, more recently. Our goal is to contribute simulation power for aiding in the design of efficient biosensors. And going back to my original passion, aerodynamics, we found an area where there is still much to be discovered: the aerodynamics of flying and gliding animals (like flying snakes).

Why did you initially decide to join Impactstory?

For a long time, I’ve been thinking that science and scientists need to take control of their communication channels and use the web deliberately to convey and increase our impact. I have been sharing the research and educational products of my group online for years, and we have a consistent publication policy that includes, for example, always uploading a preprint to the arXiv repository at the time of submitting a paper for publication. If a journal does not have an arXiv-friendly policy, we don’t submit there and look for another appropriate journal. We have been uploading data sets, figures, posters, and other research objects to the figshare repository since its beginning, and I’m also a figshare advisor.

Impactstory became part of my communications and impact arsenal immediately, because it aggregates links, views and mentions of our products. And with the latest profile changes, it also offers an elegant online presence.

Why did you decide to become an Advisor?

So many of my colleagues are apathetic about the control they hand over to for-profit publishers, and simply accept the status quo. I want to be an agent of change in how we measure and communicate the importance of what we do. Part of it is simply being willing to do it yourself, and to show by example how these new tools can work for us.

What’s your favorite Impactstory feature?

The automatic aggregation of research objects using my various online IDs, like ORCID, Google Scholar and GitHub. The map is pretty cool, too!

You’ve done a lot to “open up” education in computational methods to the public, in particular via your Numerical MOOC and YouTube video lectures. What have been your biggest successes and challenges in getting these courses online and accessible to all?

In my opinion, the biggest success is doing these things at the grassroots level, with hardly any funding (I had some seed funding for #numericalmooc, but none of the previous efforts had any) or institutional involvement. When I think of how the university, in each case, has been involved in my open education efforts, the most appropriate way to characterize it is that they have let me do what I wanted to do, staying out of the way. There have not been technologists or instructional designers or any of that involved; I just did it all myself.

The biggest challenge? Resources, I guess—time and money. My scarcest resource is time, and when I work to create open educational resources, I’m stealing time away from research. This gets me disapproving looks, thoughts and comments from my peers. Why am I spending time in open education? “This won’t get you promoted.” SIGH. As for money, I raised some funds for #numericalmooc, but it’s not a lot: merely to cover the course platform and the salary of my teaching assistants. Funding efforts in open education—as an independent educator, rather than a Silicon Valley start-up—is really tough.

Thanks, Lorena!

As a token of our appreciation for Lorena’s outreach efforts, we’re sending her an Impactstory item of her choice from our Zazzle store.

Lorena is just one part of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!

Open Science & Altmetrics Monthly Roundup (November 2014)

In this month’s roundup: OpenCon 2014, “the crappy Gabor paper” snafu, sexier scientific graphics and 7 other ways November was a big month for Open Science and altmetrics. Read on!

“Generation Open” organizes itself at OpenCon 2014

An international group of students and early career researchers met at the first annual OpenCon meeting in Washington DC. While we couldn’t attend in person, we followed the conference via Twitter and livestream, as did many others from around the world. It was excellent.

Among the participants were Patrick Brown (PLOS), Victoria Stodden (Stanford), Erin McKiernan (Wilfrid Laurier University), and Pete Binfield (PeerJ), all of whom shared two important messages with the attendees: 1) even as a junior scientist, you can make choices that support open access, but 2) don’t feel bad if you’re not yet an OA revolutionary–it’s the responsibility of more senior scientists to help pave the way and align your career incentives with the OA movement’s aims.

You can read a great summary of OpenCon on the Absolutely Maybe blog, and watch OpenCon sessions for yourself on YouTube.

“The crappy Gabor paper” citation snafu

A recently published journal article gained some unwanted attention in November thanks to a copyediting error: the authors’ note-to-self to cite “the crappy Gabor paper” was left in the version of the article that made it to press, which someone found and shared on Twitter.

Because of the increased social media attention the article received, its altmetrics shot through the roof, prompting cynics to argue that altmetrics must be flawed: they were measuring attention paid to a silly citation mistake, not to the paper’s quality.

We have a different perspective. Altmetrics aggregators like Altmetric.com are useful precisely because they can capture and share data about this silly mistake. After all, we expose the underlying, qualitative data that shows exactly what people are saying when they mention a paper:

[Screenshot: the individual tweets mentioning the paper, as shown by Altmetric.com]

That exposure is crucial to allowing viewers to better understand why a paper’s been mentioned. Compare that to how traditional citation indices tend to compile citation numbers of a paper: simple numbers, zero context.

Towards sustainable software at WSSSPE2

The 2nd Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE2) met in New Orleans to discuss what barriers exist to the long-term sustainability of code. Turns out, there are many, and they’re mostly rooted in a lack of proper incentives for software creators.

Many interesting papers were presented at the meeting. Links to all conference papers and detailed notes from the meeting can be found on the WSSSPE2 website.

Other altmetrics & open science news

  • Are you a science blogger? Paige needs you! Paige Brown Jarreau (also known as Twitter’s @fromthelabbench) is studying science blogging practices for her PhD research. Respond to her “Survey for Science Bloggers” here.

  • Major UK research funder describes how they use altmetrics to gauge public engagement for the studies they fund: the Wellcome Trust recently published an article in PLOS Biology that outlines how the group uses altmetrics to learn about the societal impacts of the work they support. They also suggest that altmetrics may play an important role in research evaluation in the years to come. Read the full article here.

  • Meet Impactstory’s Advisor of the Month, Open Access superadvocate Keita Bando:  We chatted with Keita about his work founding MyOpenArchive, a precursor to Figshare. Keita also shared his perspective on librarians’ role in helping scientists navigate the often-complicated options they have for practicing Open Science, and gave us a look at his app that makes it much easier for researchers to sync their Mendeley and ORCID profiles. Read the full interview here on the Impactstory blog.

  • How a rip-off website is polluting Google Scholar profiles with spam citations: the Retraction Watch blog reports that a website created to illegally resell content to Chinese academics is inadvertently adding fake publications to some scientists’ Google Scholar profiles. Individual scholars can remove spam citations from their profiles themselves. Read more on Retraction Watch.

  • You can now batch upload publications to your ORCID profile: researcher identification service ORCID is great at finding your publications and other scholarly outputs automatically, but sometimes you need to add content manually. Luckily, ORCID just added a BibTeX import option, meaning you can upload many documents at once using a file that’s easily exported from Mendeley, ADS, and many other reference management applications.

  • Standards for scientific graphics from 1914 could make modern Open Science better: Jure Triglav recaps some important reports from the early 1900s that have largely been forgotten. In them, some of science’s brightest minds set rules for creating scientific graphics, including no-brainers like “Display the data alongside the graph”–which for some reason we don’t regularly practice today. Triglav also offers a proof-of-concept for how open data can be used to create better, more informative graphics in the modern era. Read the full article here.

  • Older articles more likely to be cited, now that they’re increasingly online: Google Scholar reports that there’s been a 28% growth in citations to older articles between 1990 and 2013 due to their increased availability online. It’s a great argument for “opening up” your older scholarship, isn’t it?

What was your favorite open science or altmetrics happening from November?

Ours was probably the Impactstory November Impact Challenge, but we couldn’t fit it into this roundup! (See what we did there?) Share your favorite news in the comments below.

Why Nature’s “SciShare” experiment is bad for altmetrics

Early last week, Nature Publishing Group announced that 49 titles on Nature.com will be made free to read for the next year. They’re calling this experiment “SciShare” on social media; we’ll use the term as a shorthand for their initiative throughout this post.

Some have credited Nature for taking an incremental step toward embracing Open Access. Others have criticized the company for diluting true Open Access and for encouraging scientists to share DRM-crippled PDFs.

As staunch Open Access advocates ourselves, we agree with our board member John Wilbanks: this ain’t OA. “Open” means open to anyone, including laypeople searching Google, who don’t have access to Nature’s Magic URL. “Open” also means open for all types of reuse, including tools to mine and build next-generation value from the scholarly literature.

But there’s another interesting angle here, beyond the OA issue: this move has real implications for the altmetrics landscape. Since we live and breathe altmetrics here at Impactstory, we thought it’d be a great time to raise some of these issues.

Some smart people have asked, “Is SciShare an attempt by Nature to ‘game’ their altmetrics?” That is, is SciShare an attempt to force readers to view content on Nature.com, thereby increasing total pageview statistics for the company and their authors?

Postdoc Ross Mounce explains:

If [SciShare] converts some dark social sharing of PDFs into public, trackable, traceable sharing of research via non-dark social means (e.g. Twitter, Facebook, Google+ …) this will increase the altmetrics of Nature relative to other journals and that may in-turn be something that benefits Altmetric.com [a company in which Macmillan, Nature’s parent company, is an investor].

No matter Nature’s motivations, SciShare, as it’s implemented now, will have some unexpected negative effects on researchers’ ability to track altmetrics for their work. Below, we describe why, and point to some ways that Nature could improve their SciShare technology to better meet researchers’ needs.

How SciShare works

SciShare is powered by ReadCube, a reference manager and article rental platform that’s funded by Macmillan via their science start-up investment imprint, Digital Science.

Researchers with subscription access to an article on Nature.com copy and paste a special, shortened URL (e.g., http://rdcu.be/bKwJ) into email, Twitter, or anywhere else on the Web.

Readers who click on the link are directed to a version of the article that they can freely read and annotate in their browser, thanks to ReadCube. Readers cannot download, print, or copy from the ReadCube PDF.

The ReadCube-shortened URL resolves to a Nature-branded, hashed URL that looks like this:

[Screenshot: the resolved, Nature.com-branded hashed URL]

The resolved URL doesn’t include a DOI or other permanent identifier.

In the ReadCube interface, users who click on the “Share” icon see a panel that includes a summary of Altmetric.com-powered altmetrics (seen here in the lower-left corner of the screen):

[Screenshot: the ReadCube Share panel, with Altmetric.com metrics in the lower-left corner]

The ReadCube-based Altmetric.com metrics do not include pageview numbers. And because ReadCube doesn’t work with assistive technology like screen readers, it also can’t capture the small share of traffic that comes from visually impaired readers.

That said, the potential for tracking new, ReadCube-powered metrics is interesting. ReadCube allows annotations and highlighting of content, and could potentially report both raw numbers and also describe the contents of the annotations themselves.

The number of redirects from the ReadCube-branded, shortened URLs could also be illuminating, especially when reported alongside direct traffic to the Nature.com-hosted version of the article. (Such numbers could provide hard evidence of the proportion of OA vs. toll-access use of Nature journal articles.) And sources of Web traffic give a lot of context to the raw pageview numbers, as we’ve seen from publishers like PeerJ:

[Screenshot: a PeerJ article’s referral-traffic breakdown]

After all, referrals from Reddit usually means something very different than referrals from PubMed.

Digital Science’s Timo Hannay hints that Nature will eventually report download metrics for their authors. There’s no indication as to whether Nature intends to disclose any of the potential altmetrics described above, however.

So, now that we know how SciShare works and the basics of how they’ve integrated altmetrics, let’s talk about the bigger picture. What does SciShare mean for researchers’ altmetrics?

How will SciShare affect researchers’ altmetrics?

Let’s start with the good stuff.

Nature authors will probably reap a big benefit thanks to SciShare: they’ll likely see higher pageview counts for the Nature.com-hosted version of their articles.

Another positive aspect of SciShare is that it provides easy access to Altmetric.com data. That’s a big win in a world where not all researchers are aware of altmetrics. Thanks to ReadCube’s integration of Altmetric.com, now more researchers can find their article’s impact metrics. (We’re also pleased that Altmetric.com will get a boost in visibility. We’re big fans of their platform, as well as customers–Impactstory’s Twitter data comes from Altmetric.com).

SciShare’s also been implemented in such a way that the ReadCube DRM technology doesn’t affect researchers’ ability to bookmark SciShare’d articles in reference managers like Mendeley. Quick tests with the Pocket and Delicious bookmarking services also suggest they work well. That means that social bookmarking counts for an author’s work will likely not decline. (I point this out because when I attempted to bookmark a ReadCube.com-hosted article using my Mendeley browser bookmarklet on Thursday, Dec. 4th, I was prevented from doing so, and was actually redirected to a ReadCube advertisement. I’m glad to say this no longer seems to be the case.)

Those are the good things. But there are also a few issues to be concerned about.

SciShare makes your research metrics harder to track

The premise of SciShare is that you’ll no longer copy and paste an article’s URL when sharing content. Instead, they encourage you to share the ReadCube-shortened URL. That can be a problem.

In general, URLs are difficult to track: they contain weird characters that sometimes break altmetrics aggregators’ search systems, and they often go dead. In fact, there’s no guarantee that these links will be live past the next 12 months, when the SciShare pilot is set to end.

Moreover, neither the ReadCube URL nor the long, hashed, Nature.com-hosted URL that it resolves to contains the article’s DOI. DOIs are one of the main ways that altmetrics tracking services like ours at Impactstory find mentions of your work online. They’re also preferable when sharing links because they’ll always resolve to the right place.

So what SciShare essentially does is introduce two new messy URLs that will be shared online, and that have a high likelihood of breaking in the future. That means there’s a bigger potential for messy data to appear in altmetrics reports.
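To make the contrast concrete, here’s a minimal sketch in Python of why stable identifiers are easier to mine from free text than hashed or shortened URLs. (The DOI and the regex are illustrative only–real DOI-matching rules are more involved.)

```python
import re

# A common (not exhaustive) pattern for CrossRef-style DOIs.
DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/[^\s"<>]+')

tweet = ('Great paper! http://rdcu.be/bKwJ - also discussed at '
         'doi:10.1234/example.doi yesterday.')

print(DOI_PATTERN.findall(tweet))  # ['10.1234/example.doi']
# The shortened link matches no identifier scheme at all, so an
# aggregator can only learn what it points to by following the
# redirect, and only for as long as the link stays alive.
```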

SciShare’s metrics aren’t as detailed as they could be

The Altmetric.com-powered altmetrics that ReadCube exposes are fantastic, but they lack two important metrics that other data providers expose: citations and pageviews.

On a standard article page on Nature.com, there’s an Article Metrics tab. That Metrics page includes not only Altmetric.com data but also citation counts from CrossRef, Web of Science, and Scopus, plus pageview counts. And completely separate systems like Impactstory.org and PlumX expose still more citation data, sourced from Wikipedia and PubMed. (We’d provide pageview data if we could, but that’s currently not possible. More on that in a minute.)

ReadCube’s deployment of Altmetric.com data also decontextualizes articles’ metrics. They have chosen only to show the summary view of the metrics, with a link out to the full Altmetric.com report:

[Screenshot: the summary-only Altmetric.com view in ReadCube, with a link to the full report]

Compare that to what’s available on Nature.com, where the Metrics page showcases the Altmetric.com summary metrics plus Altmetric.com-sourced Context statements (“This article is in the 98th percentile compared to articles published in the same journal”), snippets of news articles and blog posts that mention the article, a graph of the growth in pageviews over time, and a map that points to where your work was shared internationally:

[Screenshot: the Nature.com Metrics page, with Altmetric.com context statements, mention snippets, a pageview graph, and a sharing map]

More data and more context are very valuable to have when presenting metrics. So, we think this is a missed opportunity for the SciShare pilot.

SciShare isn’t interoperable with all altmetrics systems

Let’s assume that the SciShare experiment results in a boom in traffic to your article on Nature.com. What can you do with those pageview metrics?

Nature.com–like most publishers–doesn’t share their pageview metrics via API. That means you have to manually look up and copy and paste those numbers each time you want to record them. Not an insurmountable barrier to data reuse, but still–it’s a pain.

Compare that to PLOS. They freely share article view and download data via API, so you can easily import those numbers to your profile on Impactstory or PlumX, or export them to your lab website, or parse them into your CV, and so on. (Oh, the things you can do with open altmetrics data!)
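If you like to script things, here’s a rough sketch of what pulling those numbers from PLOS might look like, written in Python against the v5 ALM (Lagotto) API. The endpoint, parameters, and field names below are our reading of the ALM docs at the time of writing–double-check them (and request a free API key) before relying on this:

```python
import requests

# Placeholder DOI and API key; ALM issues API keys for free.
DOI = "10.1371/journal.pone.0000000"
API_KEY = "YOUR_API_KEY"

# Ask the PLOS ALM server for all metrics sources for one article.
resp = requests.get(
    "http://alm.plos.org/api/v5/articles",
    params={"ids": DOI, "api_key": API_KEY},
)
resp.raise_for_status()

# Print the total reported by each source (views, citations, etc.).
for source in resp.json()["data"][0]["sources"]:
    print(source["name"], source["metrics"]["total"])
```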

You also cannot use the ReadCube or hashed URLs to embed the article full-text into your Impactstory profile or share it on ResearchGate, meaning that it’s as difficult as ever to share the publisher’s version of your paper in an automated fashion. It’s also unclear whether the “personal use” restriction on SciShare links means that researchers will be prohibited from saving links publicly on Delicious, posting them to their websites, and so on.

How to improve SciShare to benefit altmetrics

We want to reiterate that we think SciShare’s great for our friends at Altmetric.com, due to their integration with ReadCube. And the greater visibility that their integration brings to altmetrics overall is important.

That said, there’s a lot that Nature can do to improve SciShare for altmetrics. The biggest and most obvious idea is to do away with SciShare altogether and simply make their entire catalogue Open Access. But it looks like Nature (discouragingly) is not ready to do this, and we’re realists. So, what can Nature do to improve matters?

  • Open up their pageview metrics via API to make it easier for researchers to reuse their impact metrics however they want
  • Release ReadCube resolution, referral traffic and annotation metrics via API, adding new metrics that can tell us more about how content is being shared and what readers have to say about articles
  • Add more context to the altmetrics data they display, so viewers have a better sense of what the numbers actually mean
  • Do away with hashed URLs and link shorteners, especially the latter, which make it difficult to track all mentions of an article on social media

We’re hopeful that SciShare overall is an incremental step towards full OA for Nature. And we’ll be watching how the SciShare pilot changes over time, especially with respect to altmetrics.

Update: Digital Science reports that the ReadCube implementation has been tested to ensure compatibility with most screen readers.

Impact Challenge Day 30: Create a comprehensive impact profile at Impactstory.org

Yesterday, we covered all the ways that you can dig up evidence of your impacts online. You learned that metrics for your research exist across more than 18 platforms all around the Web. That’s a lot of data to manage.

What you need now is a single place to view your metrics (and the underlying qualitative data). You also need a way to share your metrics with others. That’s where Impactstory comes in.

Impactstory is a non-profit webapp that compiles data from across the Web on how often (and by whom) your research is being shared, saved, discussed, cited and more.

We automate much of the work of collecting impact metrics, so you don’t have to. And we provide rich, contextualized, Open metrics alongside the underlying data, so you can learn a lot in one place (and reuse most of the metrics however you want).

In today’s challenge, you’ll explore creating a comprehensive impact profile on Impactstory.org. Let’s get started!

Step 1. Explore an Impactstory profile

[Screenshot: Holly Bik’s Impactstory profile]

One of our favorite Impactstory profiles belongs to genomics researcher Holly Bik. Her profile epitomizes all of the cool things you can do on Impactstory:

  • Discover metrics for your work from scholarly and popular social media
  • Import all of your papers, datasets, software, slide decks, and other scholarly products into a single profile
  • Highlight the scholarship and metrics you’re most proud of in your “Selected Works” and “Key Metrics” sections of your profile homepage
  • Learn who’s talking about your work and what they’re saying by drilling down into the metrics and underlying data
  • Connect your account to third-party services like Figshare, ORCID, and GitHub to get automatic updates & import your new research

Poke around a bit on Holly’s profile and take 5 minutes or so to explore. Go ahead, we’ll wait here.

Not everyone’s profile will look like Holly’s, to be sure. But no matter your career stage, chances are that an Impactstory profile will give you a lot of insight into your many research impacts.

Step 2. Sign up for Impactstory

Now let’s get you set up with a free Impactstory trial.

You might have heard: we’re a subscription-based service ($60/year or $10/month). But we’re not going to make a hard pitch for you to subscribe.

Instead, you’re going to sign up for a free, 30-day trial at impactstory.org, during which you’ll get a better chance to decide if Impactstory is right for you (and worth paying for*).

That’s it! Easy, huh?

Next, let’s walk through the simple steps it takes to get your scholarship onto Impactstory.

* We also offer fee waivers for anyone who can’t afford a subscription.

Step 3. Automate your Impactstory profile

[Screenshot: the Master Import Controls page]

You’re now on the “Master Import Controls” page.

Next, you’ll be prompted to connect your accounts from across the Web. This will allow you to batch import many of your publications, software, data, and other scholarship that’s hosted elsewhere. And, once connected, we’ll automatically import your new scholarship, as it’s created.

As of this writing, you can connect Figshare, ORCID, GitHub, Publons, Slideshare, and Twitter for auto-importing of data and scholarship. You can also add a link to your Google Scholar profile and import those publications all at once using BibTeX.
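If you haven’t used BibTeX before, it’s a plain-text bibliography format that Google Scholar (and most reference managers) can export. Here’s a minimal, entirely hypothetical entry, just to show the shape of the file you’d upload:

```bibtex
@article{yourname2014example,
  author  = {Your Name and A. Coauthor},
  title   = {An Example Article Title},
  journal = {Journal of Examples},
  year    = {2014},
  volume  = {1},
  pages   = {1--10},
  doi     = {10.1234/example.doi}
}
```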

We’ll use Figshare as an example for how to connect your Impactstory account to other services. To get started:

  • Click on the tile for the service you want to connect (in this case, Figshare)
  • Open a new browser window and get your Figshare author page URL (log in to Figshare, click on your name and photo in the upper right-hand corner, click “My profile,” and then copy the URL that appears in your browser’s address bar)
  • Switch back to the Impactstory browser window. In the Figshare pop-up, paste your Figshare author page URL into the box under “figshare author page URL”
  • Click the green “Connect to Figshare” button
  • You’re now connected!

Impactstory will then auto-import all of your public Figshare products and their metrics, and also update your Impactstory profile weekly with any new Figshare products and metrics.

The instructions above work for ORCID, GitHub, Publons, Slideshare, and Twitter, too. Just log in to the appropriate web service to get your URL, username, or ORCID iD, and click the corresponding tile on Impactstory’s “Master Import Controls” page to insert it.

Step 4. Import your other scholarship to Impactstory

[Screenshot: the “Add products individually by ID” page]

It’s possible that you’ve got scholarly products squirreled away in places we can’t automatically import from. Maybe you’ve contributed to a GitHub repository that you don’t own, have a standalone website devoted to a research project, or have a video abstract for one of your articles.

No matter what you want to add to your profile as an individual product, here’s how to do it.

From the Master Import Controls page:

  • Click the “Add products individually by ID” link
  • On the next page, paste the identifier(s) for the product(s) you want to track (see the example below). If you are adding more than one individual product at a time, be sure to add only one identifier per line.
  • Once you’ve added the identifiers for all the products you want to track, click the blue “Import” button. The products will be added to your profile.
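For example, a batch of three identifiers (all hypothetical here) would look like this, one per line:

```
10.1234/example.doi
10.5678/another.example
http://example.com/my-project-website
```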

Step 5. Step back and admire your profile so far

Now you’ve got all your scholarly products on Impactstory. Let’s see how they look on the genre pages.

From your main profile page, click on the links in the left-hand navigation bar that correspond with the scholarly genre you want to explore.

For example, if you’ve got articles on your profile, go ahead and click on the “articles” link. Here’s what Holly Bik’s Articles page looks like:

[Screenshot: Holly Bik’s Articles page]

 You can hover over any of the blue or green badges to see the underlying data that document your scholarly and public impacts:

[Screenshot: hovering over a badge to reveal the underlying data]

 Or you can click on any title to see an in-depth description of the article and a summary of metrics. We auto-import as much information as possible, including your full citation and your abstract:

[Screenshot: an article’s page, with its full citation and abstract]

 Click on the “Full-text” icon to see an embedded version of your paper (and you can add a link to the full-text, Open Access version of your paper, if we didn’t auto-import it for you–more on that below).

Click on the “Metrics” icon to see a drill-down view of your paper’s metrics, along with important context that we provide in percentiles:

[Screenshot: the Metrics view of a paper, with percentile context]

 And you can click through any of the specific metrics to go to the data provider website, where you can explore the underlying data:

[Screenshot: clicking through a metric to explore the underlying data on the provider’s site]

 Back on your profile, you can also click the “Map” icon to learn about where in the world your paper has been bookmarked on Mendeley, tweeted about, or viewed on Impactstory.org:

[Screenshot: the Map view of where a paper has been bookmarked, tweeted, and viewed]

 Hovering over any country gives you more information about the impacts that have happened in that country; you can also drill down into each country’s activity using the handy table at the lower-left of the page.

Step 6. Add links to your open access work

Now that you’ve seen all the ways your Open Access work is being reused online, let’s get more of your OA work onto your Impactstory profile.

For any article, dataset, or other scholarly product that’s not already embedded in your Impactstory profile:

  • Go to the main item page
  • Click on the “Full-text” icon
  • You’ll see an option to “Share your article” by uploading a full-text copy of your work or providing a URL.
  • Upload your article or provide your URL, and you’re done!

Step 7. Pretty up your profile

Now it’s time to put the finishing touches on your entire profile.

On your main profile page, add a short bio and a photo of yourself.

On your product pages for your most important research, add keywords and abstracts that’ll help others find your work more easily.

To add the bio, keywords, and abstract, just click on the field you want to edit, type in what you want to add, and then click the blue checkmark icon to save it to your profile.

That’s it! You now have a beautiful, complete Impactstory profile! Congrats!

Step 8. Dig into your metrics & notification emails

Now that your profile is complete, you’ll have 30 days’ worth of free trial to discover new metrics that your work has received.

Impactstory updates your profile with new metrics (and imports new products) on a weekly basis. Any new metrics will appear highlighted on your badges.

We’ll also send you weekly notification emails that highlight your top 10 “greatest hits” metrics for the week.

[Screenshot: a weekly notification email]

Your notification emails will usually include milestone metrics (“You’ve just passed 2,000 views on your Slideshare slides!”) and will sometimes include incremental metrics for your less popular research materials (“You got 1 Figshare view for your 2001 dataset, ‘Datum Obscurus.’ That brings your total views up to 7.”)

These notifications include contextual information, such as your total number of metrics to date for that item, and what percentile your item’s in, relative to other research products created in the same year or published in the same discipline.

If you’d rather receive your Notification emails more frequently, less frequently, or not at all, you can change your settings at impactstory.org/settings/notifications.

Step 9. Share your success far and wide

Now that you’ve got your Open impact data, how will you use it?

Well, some researchers use altmetrics to document their impact for grant applications and tenure. We’ve also heard of scientists using them for promotions and annual reviews. Consider whether these scenarios would work for you. The latter scenario in particular is a great way to test the waters, to see if your supervisors and colleagues are amenable to altmetrics.

You can also share altmetrics-inspired warm and fuzzies with your collaborators. Email your co-authors with a link to your articles on Impactstory, so they can check out the data for themselves. It’s a great feeling when you see in black and white the effect your work’s having on others. Share it! 🙂

We also suggest putting a link to your Impactstory profile on your website or blog, and in your email signature. These are all super-effective ways to quickly share both your research and your impact with your colleagues.

When sharing your Impactstory data and profile, keep in mind that numbers are only one useful part of the data. You can print out your impact map and include it in an annual review; quote from open peer reviews that praise the quality of your research in your tenure dossier; and learn who’s sharing your work so you can connect with them via social media.

But ultimately, it’s up to you to decide what uses will be the best for you, depending upon your academic environment. Once you decide, let us know! We love to hear how scientists are using their Impactstory profiles.

Limitations

Many popular data providers including Google Scholar, Academia.edu, and ResearchGate won’t share their data with us (or anyone else) via Open API. So, we unfortunately can’t import metrics from those profiles to your Impactstory account.

It’s also hard for us (and all other altmetrics aggregators) to track scholarly products by URL alone. There simply haven’t been great data sources for doing so since Topsy was bought by Apple. We’re continuing to look for ways to get you this data. But in the meantime, we encourage you to mint DOIs for your work, so we can track it.

Homework

Now that you’ve got an Impactstory profile, make it awesome! Fill in the gaps in your publication history, add your most impactful work, connect your accounts, and so on. At the very least, information for all of your most important research products should be in your profile.

For your five most important products, add links to the Open Access versions of those works, if they’re available and you have the rights to post them. (If you remember, publishers’ restrictions might prohibit you from posting certain versions of your articles online.)

Once everything’s imported, it’s time to clean up your profile data. We import and clean up a lot of dirty and duplicate data for you, but some things might fall through the cracks. Here’s what to look for:

  • Mislabeled products: add missing descriptive information (journal names, authors, abstracts, and keywords that can help others find your work). It’s as easy as clicking in the area that needs to be updated, adding the info, and then clicking the blue checkmark button to save it.

  • Duplicate products: choose which version you’d like to delete, tick the box next to it, and click the trashcan icon at the top of your profile to get rid of it.

  • Miscategorized products: sometimes, products will end up in the Webpages genre or in other inappropriate places on your profile, due to incomplete descriptive information. To move a product from one genre page to another, check the box next to the item(s) to be moved, then click the “Move” folder icon at the top of your profile, select the appropriate genre from the drop-down menu, and you’re done!

Your final, enjoyable task is to now dig into the data that your Impactstory profile provides. Find unexpected mentions or reuse of your work online. Think about how you might use that data in a professional context. And give yourself a big pat on the back for completing the final Impact Challenge.

Congratulations!


You’ve successfully made it through all 30 days of the Impact Challenge! We’re proud of you!

You’re now an Open, web-savvy scientist who’s made valuable connections online and in real life. You’re sharing more of your work than you were before, and have found many new ways to get your work to those who are interested. And you’re able to track the success of your efforts, and the real-time impact of your scholarship.

We’ve had a lot of fun writing these Impact Challenges and talking with all of you who’ve participated. Thanks for joining in! And feel free to reach out if you’ve got ideas for future Impact Challenges.

PS

If you’ve accomplished all 30 Impact Challenges, we’ve got a gift for you and all other FINISHERs!

[Image: the Impact Challenge FINISHER t-shirt]

 The full rules for claiming your shirt can be found here.

Impact Challenge Day 29: Discover when your work is discussed & shared online

You’re engaging other scholars online; they’re discussing your open access work with you and other scientists; and you’ve minted identifiers that’ll let you track your work’s reach on the Web.

Now comes the fun part: measuring your research’s many impacts.

In today’s challenge, we’ll explore how the services you’ve signed up for–Academia.edu, Slideshare, Figshare, and so on–and others can be used to track the impacts of all of your research outputs.

Then tomorrow, we’ll cover our webapp, Impactstory, which brings together many of these metrics into a single, comprehensive impact profile.

Let’s dig in!

Citations

Citations are the “coin of the realm” for tracking scholarly impact, not only for your articles but for your research data, too. You can get citation alerts in three main ways: from Google Scholar, from traditional citation indices, and from newer databases like the Data Citation Index.

Google Scholar Citations alerts

Your Google Scholar profile can be used to alert you whenever your articles receive new citations online. It tracks any citations to your publications that occur on the scholarly web.

If you haven’t already signed up for citation alerts, visit your profile page and click the blue “Follow” button at the top of your profile. Select “Follow new citations” link and enter your preferred email address, then click “Create alert.” Notifications will arrive in your inbox when you receive new citations.

If you want to explore who has already cited you, visit your profile page, and click on the number of citations to the right of the article you want to track citations for:

[Screenshot: the citation counts listed on a Google Scholar profile]

On the next page, you’ll see a list of all the papers that have cited you, some of which you’ll be able to click through and read:

[Screenshot: a list of citing papers on Google Scholar]

Remember: Google Scholar indexes citations it finds in a wide range of scholarly documents (white papers, slide decks, and of course journal articles are all fair game) and in documents of any language. The data pool is also mixed with respect to peer-review status: some of these citations will be from the peer-reviewed literature, some will not. This means that your citation count on Google Scholar may be larger than on other citation services.

Web of Knowledge

Traditional citation indices like Scopus and Web of Knowledge are another good way to get citation alerts delivered to your inbox. These services are more selective in scope, so you’ll be notified only when your work is cited by vetted, peer-reviewed publications.

However, they only track citations for select journal articles and book chapters–a far cry from the diverse citations that are available from Google Scholar. Another drawback: your institution must have a subscription for you to set alerts.

Web of Knowledge offers article-level citation alerts. To create an alert, you first have to register with Web of Knowledge by clicking the “Sign In” button at the top right of the screen, then selecting “Register”.

[Screenshot: the Web of Knowledge “Sign In” menu, with the “Register” option]

Then, set your preferred database to the Web of Science Core Collection (alerts cannot be set up across all databases at once). To do that, click the orange arrow next to “All Databases” to the right of “Search” in the top-left corner. You’ll get a drop-down list of databases, from which you should select “Web of Science Core Collection.”

Now you’re ready to create an alert. On the Basic Search screen, search for your article by its title. Click on the appropriate title to get to the article page. In the upper right hand corner of the record, you’ll find the Citation Network box. Click “Create citation alert.” Let Web of Knowledge know your preferred email address, then save your alert.

Scopus

In Scopus, you can set up alerts for both articles and authors. To create an alert for an article, search for it and then click on the title in your search results. Once you’re on the Article Abstract screen, you’ll see a list of papers that cite your article on the right-hand side. To set your alert, click “Set alert” under “Inform me when this document is cited in Scopus.”

To set an author-level alert, click the Author Search tab on the Scopus homepage and run a search for your name. If multiple results are returned, check the author affiliations and subjects listed to find your correct author profile. Next, click on your author profile link. On your author details page, follow the “Get citation alerts” link, name your saved alert, set an email address, and select your preferred frequency of alerts. Once you’re finished, save your alert.

With alerts set for all three of these services, you’ll now be notified when your work is cited in virtually any publication in the world! But citations only capture a very specific form of scholarly impact. How do we learn about other uses of your articles?

Data Citation Index

If you’ve deposited your data into a repository that assigns a DOI, the Data Citation Index (DCI) is often the best way to learn if your dataset has been cited in the literature.

To create an alert, you’ll need a subscription to the service, so check with your institution to see if you have access. If you do, you can set up an alert by first creating a personal registration with the Data Citation Index; click the “Sign In” button at the top right of the screen, then select “Register”. (If you’re already registered with Web of Knowledge to get citation alerts for your articles, there’s no need to set up a separate registration.)

Then, set your preferred database to the Data Citation Index by clicking the orange arrow next to “All Databases” to the right of “Search” in the top-left corner. You’ll get a drop-down list of databases; select “Data Citation Index.”

Now you’re ready to create an alert. On the Basic Search screen, search for your dataset by its title. Click on the appropriate title to get to the dataset’s item record. In the upper right hand corner of the record, you’ll find the Citation Network box. Click “Create citation alert.” Let the Data Citation Index know your preferred email address, then save your alert.

Pageviews & downloads

How many people are reading your work? While you can’t be certain that article pageviews and full-text downloads mean people are reading your articles, many scientists still find these measures to be a good proxy. And some repositories like Dryad and Figshare provide this information, too, so you can track the interest in the datasets, slides, and other content you upload.

Publisher websites

Publishers like PLOS display pageview and download information for individual articles on their website, alongside other data like citations and altmetrics.

Let’s take a closer look at PLOS’s pageview & download metrics. PLOS combines pageviews that happen on their website with pageviews and downloads the article receives on PubMed Central in a single view on the top of the article’s page:

[Screenshot: combined pageview and download counts at the top of a PLOS article page]

If you click on the metrics tab of the article page, you get more useful information: total views and download numbers by source, over time; a basic impact graph; and a graph of the relative popularity of this article, compared to articles in the same discipline that are published in PLOS:

[Screenshot: the PLOS Metrics tab]

Here’s a closer look at the views and downloads grid and graph:

[Screenshot: the PLOS views and downloads grid and graph]

On articles’ Metrics pages, PLOS also provides other data, including citations from a variety of sources, social media and scholarly bookmarking services.

For PLOS and many other publishers, these metrics are only available on their websites. Some pioneering publishers go one step further, sending you an email when you’ve got new pageviews and downloads on their site.

Publisher notifications

In addition to displaying pageviews and downloads on their websites, publishers like PeerJ and Frontiers send notification emails as a service to their authors.

If you’re a PeerJ author, you should receive notification emails by default once your article is published. But if you want to check if your notifications are enabled, sign into PeerJ.com, and click your name in the upper right hand corner. Select “Settings.” Choose “Notification Settings” on the left nav bar, and then select the “Summary” tab. You can then choose to receive daily or weekly summary emails for articles you’re following.

In Frontiers journals, it works like this: once logged in, click the arrow next to your name on the upper left-hand side and select “Settings.” On the left-hand nav bar, choose “Messages,” and under the “Other emails” section, check the box next to “Frontiers monthly impact digest.”

Both publishers aggregate activity for all of the publications you’ve published with them, so no need to worry about multiple emails crowding your inbox at once.

Not a PeerJ or Frontiers author? Contact your publisher to find out if they offer notifications for metrics related to articles you’ve published.

Impactstory also offers alerts that include this data for PLOS articles, so you’re notified any time your articles get new metrics, including pageviews and downloads. (We’ll talk more about all the data we provide in tomorrow’s challenge.)

ResearchGate & Academia.edu

[Screenshot: ResearchGate and Academia.edu stats]

Both ResearchGate and Academia.edu will report how many people have viewed and downloaded your paper on their site.

You can turn on email notifications for pageviews and downloads by visiting “Settings” (on both sites, click the triangle in the upper right-hand corner of your screen). Then, click on the “Notifications” tab in the sidebar menu, and check off the types of emails you want to receive.

On Academia.edu, the option to receive pageview & download notifications is described as “There’s new activity in my analytics (includes ‘Analytics Snapshot’)”; on ResearchGate, it’s under Scheduled Emails > “Weekly update about my personal stats and RG Score.”

Dryad and Figshare

Dryad data repository and Figshare both display page view and download information on their web sites, but they don’t send notification emails when new downloads happen. You can import your Dryad and Figshare-hosted metrics to Impactstory to get notification emails; more on that tomorrow.

Post-publication peer review

Some articles garner comments as a form of post-publication peer review.

PeerJ

PeerJ authors are notified any time their articles get a comment. To make sure you’re notified when you receive new PeerJ comments, log in to PeerJ and go to “Settings” > “Notification Settings” and then click on the “Email” tab. There, check the box next to “Someone posts feedback on an article I wrote” and select all the options under the “Activity on my articles” section, too.

ResearchGate

Any work that’s uploaded to ResearchGate can be commented upon. To set your ResearchGate notifications, log in to the site and navigate to “Settings” > “Notifications.” Check the boxes next to “Someone reviews one of my publications” and “Someone bookmarks or comments on my publication.” (While you’re there, you can also check off “One of my publications was cited”–it’ll alert you any time another ResearchGate document cites one of your papers that’s on ResearchGate.)

Altmetric.com

Reviews can also be tracked via Altmetric.com alerts. Post-publication peer reviews from Publons and PubPeer are included in Altmetric.com reports and notification emails. Instructions for signing up for Altmetric.com notifications can be found below.

PubChase

Article recommendation platform PubChase can also be used to set up notifications for PubPeer comments and reviews that your articles receive. To set it up, first add your articles to your PubChase library (either by searching and adding papers one-by-one, or by syncing PubChase with your Mendeley account). Then, hover over the Account icon in the upper-right hand corner, and select “My Account.” Click “Email Settings” on the left-hand navigation bar, and then check the box next to “PubPeer comments” to get your alerts.

Social media metrics via Altmetric.com

What are other researchers saying about your articles around the water cooler? It used to be that we couldn’t track these informal conversations, but now we’re able to listen in using social media sites like Twitter and on blogs. Here’s how.

Altmetric.com allows you to track altmetrics and receive notifications for any article you’ve published that has a DOI, PubMed ID, arXiv ID, or Handle. It’s an altmetrics aggregator, very similar to Impactstory and PlumX.

[Screenshot: the Altmetric.com bookmarklet pop-up]

First, install the Altmetric.com browser bookmarklet (visit this page and drag the “Altmetric It!” button into your browser menu bar). Then, find your article on the publisher’s website and click the “Altmetric it!” button. The altmetrics for your article will appear in the upper right-hand side of your browser window, in a pop-up box similar to the one at right.

Next, follow the “Click for more details” link in the Altmetric pop-up. You’ll be taken to a detailed report of your metrics and the underlying qualitative data.

This report (seen below) shows you not only the numbers, but also lets you read the individual blogs, policy documents, newspapers, and other online outlets that mention your article. The donut visualization at the top-left of the report includes a single, weighted score that attempts to sum up the attention that your work has received. Below the visualization is contextual information that shows you how the article’s metrics compare to those of articles published in the same year, journal, and so on.

[Screenshot: a detailed Altmetric.com report]

At the bottom left-hand corner of the page, you can sign up to receive notifications whenever someone mentions your article online.

The only drawback of Altmetric.com’s notification emails is that you have to sign up for a new notification for each article. This can cause inbox mayhem if you are tracking many publications.
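If you’d rather pull these numbers yourself than wait for emails, Altmetric.com also offers a free public API. Here’s a minimal sketch in Python–the DOI is a placeholder, and the response fields are our reading of the v1 API docs, so verify them before building on this:

```python
import requests

DOI = "10.1234/example.doi"  # placeholder DOI

# The v1 endpoint returns a JSON summary for a single DOI,
# or HTTP 404 if Altmetric hasn't seen any mentions of it yet.
resp = requests.get("http://api.altmetric.com/v1/doi/" + DOI)

if resp.status_code == 404:
    print("No attention data tracked for this DOI yet.")
else:
    resp.raise_for_status()
    data = resp.json()
    print("Altmetric score:", data.get("score"))
    print("Tweeters:", data.get("cited_by_tweeters_count"))
    print("Full report:", data.get("details_url"))
```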

Social media metrics via Impactstory

Impactstory provides many of the same metrics as Altmetric.com, rolled up into a single profile. (In fact, Altmetric’s such an ace data source that we use some of their data in our reports.) More on that tomorrow!

Software metrics via GitHub

If you use the collaborative coding website GitHub to store and work with research data or software, you can see metrics and enable email alerts for certain types of activities.

As we discussed in our GitHub challenge, GitHub has some good metrics that can tell you how your code is being reused, commented upon, and so on–in real time. Some GitHub metrics that you’ll find on individual repository pages include:

  • Stars: some GitHub users “star” repositories as a means of showing appreciation for your work; others use them as a bookmark, so they can find and revisit your code more easily.
  • Forks: a “fork” is created when another user copies one of your repositories so they can explore and experiment without affecting your original code. It’s a good signal of reuse.
  • Pull requests: when a user wants to suggest changes to your code, they’ll issue a pull request. The number of pull requests and the identities of contributors can be good indicators of how collaborative your work is and who your high-profile collaborators are.

To enable notifications for your stars and forks, you’ll need to connect your GitHub account to Impactstory–GitHub itself doesn’t report on that just yet.
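That said, if you just want to check the raw star and fork counts yourself, GitHub’s public API exposes them for any public repository. A quick sketch in Python (the repository path is a placeholder–swap in your own):

```python
import requests

# Placeholder repository path; replace with your own "owner/repo".
REPO = "your-username/your-repo"

# Unauthenticated requests work for public repos (rate limits apply).
resp = requests.get("https://api.github.com/repos/" + REPO)
resp.raise_for_status()
repo = resp.json()

print("Stars:", repo["stargazers_count"])
print("Forks:", repo["forks_count"])
print("Open issues:", repo["open_issues_count"])
```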

Slideshare

[Screenshot: Slideshare]

Though Slideshare is best known for allowing users to view and share slide decks, some researchers also use it to share conference posters. The platform sends users detailed weekly alert emails about new metrics their slide decks and posters have received, including the number of total views, downloads, comments, favorites, tweets, and likes.

Here’s how to view your Slideshare metrics on the Web: on your slide deck’s page, scroll down to find the “Statistics” tab under the description section, then click on it. There you’ll find all the metrics related to others’ interest in your slides.

Some metrics you might accumulate include:

  • Views on both Slideshare and other websites
  • Embeds, which can tell you how many times and where others have shared your slides
  • Downloads, which can tell you if others have liked your slides enough to save them to their computer
  • Comments, which can tell you directly what others think of your slides
  • Likes, which, as you might guess, can tell you whether others appreciate your work

To receive notification emails, go to Slideshare.net and click the profile icon in the upper right-hand corner of the page. Then, click “Email” in the left-hand navigation bar, and check the “With the statistics of my content” box to start receiving your weekly notification emails.

Vimeo and Youtube metrics

Vimeo and Youtube both provide a solid suite of statistics for videos hosted on their sites, and you can use those metrics to track the impact of your video research outputs (like your video abstracts).

Vimeo tracks likes, comments, and plays for videos hosted on their platform; Youtube reports the same, plus dislikes and favorites. You can view these metrics beneath your videos on each platform.

To get metrics notifications for your videos hosted on either of these sites, you’ll need to add links to your videos to your Impactstory profile. More on that tomorrow!

Limitations

There are so many ways to collect metrics for your work, it’s hard to keep up. And even aggregators that attempt to collect these metrics for you into a single place–like Impactstory, Altmetric.com, and PlumX–don’t collect everything.

We recommend taking a hybrid approach to staying on top of your impacts: sign up for an aggregator that can collect your Twitter, blog, Slideshare, Figshare, and other metrics into one place for you, then supplement any metrics they can’t track (for example, Web of Knowledge or Data Citation Index citations) with email notifications from specific services.

Homework

Do some serious thinking about what metrics mean the most to you. And with those metrics in mind, sign up for the appropriate notification emails that’ll keep you up-to-date on your impacts.

Tomorrow is the final day of the Impact Challenge, and we’re covering the subject we know the best: Impactstory! See you then!

Day 28: Make your work more permanent and trackable with DOIs

Throughout the Impact Challenge, we’ve touched on the importance of having persistent identifiers like DOIs for your research.

DOIs–digital object identifiers–make it easy for others to find your work by providing a permanent, unique identifier for each research output. That identifier will always redirect to where your work is stored, even if the URL changes, the journal you were published in disappears, and so on. All you have to do to make a DOI linkable is prepend “http://doi.org/” to it, like “http://doi.org/10.5061/dryad.585t4” for doi:10.5061/dryad.585t4.
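
That rule is simple enough to automate. Here’s a trivial sketch in Python, using the same example DOI:

```python
# A tiny sketch of the rule above: make a DOI clickable by prefixing it
# with the doi.org resolver (stripping any "doi:" prefix first).
def doi_to_url(doi):
    return "http://doi.org/" + doi.replace("doi:", "", 1)

print(doi_to_url("doi:10.5061/dryad.585t4"))
# http://doi.org/10.5061/dryad.585t4
```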

DOIs also make it easy to track when and where your research is cited, discussed, shared, bookmarked, or otherwise used across the Internet. DOIs are widely used, understood by most researchers, and well-supported by platforms that track impacts across the Web.

Let’s dig into how you can get DOIs for articles, data, software, and other types of research outputs. It will set you up well for tomorrow’s Challenge, which will cover services you can use to track the impacts of your work using DOIs and other permanent identifiers.

DOIs for articles & preprints

Many journals issue DOIs for journal articles automatically. So, getting a DOI for your articles can be as easy as publishing with a journal that issues them.

If you’re planning to publish (or have already published) in a journal that doesn’t offer DOIs, that’s okay! You can archive a preprint or publisher-accepted postprint (the peer-reviewed final draft of the article, without the publisher’s formatting) of your article on a platform that issues DOIs, like Figshare, Zenodo, BioRxiv, or ResearchGate. Some institutional repositories can also mint DOIs. Here’s how.

Figshare, Zenodo, BioRxiv & some institutional repositories

All of these services work pretty much the same for issuing DOIs: you upload an article and a DOI is assigned automatically. We’ll briefly walk you through the process here using Figshare as an example.

  • Login to Figshare and click the “Upload” link in the upper-right corner.
  • Upload the article and click the “Add info” link.
  • Add a description of the file (metadata). Be as thorough as possible when describing it; rich descriptions can make it easier to find your article using search engines.
  • Some journals require that you add a statement to the archived preprint. It’s usually something along the lines of: “This is a pre-print version of the following article: [full citation pointing to publisher’s website]. It is posted here with the publisher’s permission.” You can usually find the statement in the “Author’s Rights” section of your journal’s website, and some relevant policies can be found on Sherpa/Romeo.
  • Make the article “Public” (select the radio button for Public immediately to the left of the “Save changes” button).
  • On the item record that’s now live on the Web, you’ll see your DOI:

    [Screenshot: the DOI displayed on a Figshare item record]

The placement of the DOI will vary depending upon what platform you’ve uploaded it to, but the result will be the same: as soon as you’ve completed the upload process, a DOI will be automatically generated.

ResearchGate

ResearchGate recently started allowing users to mint DOIs for articles that don’t yet have one, but it’s not done automatically:

  • Login to ResearchGate
  • On your profile page, click “Add your publications”.
  • Select “All other research” in the pop-up box.
  • Upload your article, add descriptive information, and click “Save”.
  • On your item record, click the “Generate a DOI” button at the top-right of the page.
  • Confirm your publication details are correct and that the article doesn’t already have a DOI. Click “Generate a DOI” again.
  • You’ll now see your DOI:

    [Screenshot: the DOI displayed on a ResearchGate item record]

DOIs for data

You can also get DOIs for research data thanks to disciplinary data repositories like Dryad, KNB, and many others found on re3data.org. Plus, some national data repositories like ANDS will issue DOIs for data, too.

When should you mint a DOI for your data? Natasha Simons of ANDS says a DOI should be applied when:

  • the data will be exposed and forms part of the scholarly record [this can be when you’re publishing supplementary data alongside a paper, “opening up” unpublished datasets, or otherwise making your data available to others];
  • the data can be kept persistent [it won’t have to be removed from the repository];
  • and the minimum DataCite metadata schema requirements can be met [you’ll need to provide information on the dataset’s Creator, Title, Publisher, and Publication Year; the Publisher information is communicated by your repository]

Getting a DOI for your data is usually as easy as just depositing your data. Nearly all data repositories that issue DOIs mint them automatically for new deposits.

Many repositories only issue a single DOI for a dataset, even if “versioning” (uploading of newer datasets, with the history of changes to the files preserved on the repository) is allowed.

But if you’ve got data that will be updated over time, you might need to use a repository that will issue a versioned DOI. Versioned DOIs can reflect what version of the data others are citing, making references to older versions of the dataset possible.

Dryad is just one repository that issues versioned DOIs. Here’s how it works (with a small sketch after the list):

  • You upload your data to the repository and get your base DOI (e.g. doi:10.5061/dryad.585t4)
  • When you upload a new version, Dryad will create a new suffix to the DOI that points to that particular version of the dataset (e.g. doi:10.5061/dryad.585t4.v2)
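
Here’s that pattern as a tiny sketch (the “.v2”-style suffix is Dryad’s convention; other repositories may version differently):

```python
# A sketch of Dryad-style DOI versioning: the base DOI identifies the
# dataset, and a ".v<N>" suffix pins one specific version.
def versioned_doi(base_doi, version=None):
    if version is None:
        return base_doi  # the base record
    return "{}.v{}".format(base_doi, version)

print(versioned_doi("10.5061/dryad.585t4"))     # 10.5061/dryad.585t4
print(versioned_doi("10.5061/dryad.585t4", 2))  # 10.5061/dryad.585t4.v2
```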

If you don’t have a national, disciplinary, or other specialized repository available to share your data, you can always deposit it to Figshare or Zenodo, where a DOI will be minted automatically.

DOIs for software

It’s easy to mint a DOI for your research software if you use GitHub to host your software and then connect it to your Figshare or Zenodo account. Here’s how it works:

  • Choose the public GitHub repository* you want a DOI for
  • Login to Figshare or Zenodo
  • Connect your Figshare or Zenodo account to GitHub
  • Use Figshare or Zenodo to select the GitHub repository you want a DOI for
  • Check that your GitHub repository is set to communicate with Figshare or Zenodo
  • Create a new “release” of your GitHub repository
  • Head back over to Figshare or Zenodo and make sure the full description of your software package appears in the Uploads section, then submit your software
  • An automatically-minted DOI will appear on the item record page. Here’s what that looks like on Zenodo:

    [Screenshot: an automatically minted DOI on a Zenodo item record]

In-depth instructions for minting DOIs for software can be found on GitHub.

If you’re not on GitHub but want to mint DOIs for your software, you can upload your software and the accompanying documentation as binary files to Figshare or Zenodo, as you did with research data above.

* Confusingly (for those of us more accustomed to data repositories and institutional repositories), GitHub calls specific software packages “repositories”.

DOIs for open peer reviews

An increasing number of journals and peer review platforms are issuing DOIs for open peer reviews.

If you’ve openly reviewed a journal article, there are two main ways you can get a DOI for your reviews:

  • Review for a journal like PeerJ or peer review platform like Publons that issues DOIs automatically
  • Archive your review in a repository that issues DOIs, like Figshare or Zenodo

DOIs will allow others to easily find your open peer reviews and also allow you to track discussions and reuse of your peer reviews across the Web, like you can with other scholarly outputs. That’s a major advantage over private, anonymous peer reviews, which are never seen beyond your editor and the article’s author, and which rarely earn credit for the enormous amount of intellectual work they require.

DOIs for everything else

You can easily mint DOIs for your slide decks, posters, and even your blog posts if you upload them to Zenodo or Figshare, following the instructions outlined above.

Limitations

Many of the limitations of DOIs are caused by human error. For example, though it’s ideal for links to your work to use the DOI (more on that below), you can’t control whether others actually will: research is often shared online using regular, easy-to-copy URLs instead of DOI links.

The best you can do is provide the DOI on the same page where the research output is shared. List it front and center, along with a preferred citation, so that anyone who shares your work will hopefully see it and follow your instructions.

It’s also bad form to create more than one DOI for a research output. So don’t mint a DOI for anything that’s already got one.

The final limitation is that we’re all counting on the publisher or service provider to keep the DOI record up to date with the DOI registration agency (most commonly Crossref or DataCite). And keeping records up-to-date is what ensures that DOIs point to the correct place on the Web (which you’ll remember is useful if URLs change or journals fold).

Most reputable publishers do this, but some publishers and repositories may not be as responsible (for example, as of this writing, as far as we know, ResearchGate doesn’t have a documented preservation policy). If you aren’t sure if the publisher’s archiving policy is up to snuff, ask them about it.
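
And if you’re ever unsure which agency a given DOI is registered with, you can ask the DOI resolver itself. A minimal sketch, assuming doi.org’s “/ra/” lookup endpoint behaves as documented at the time of writing:

```python
# A minimal sketch: asking the doi.org resolver which registration agency
# a DOI belongs to. The /ra/ endpoint returns a JSON list like
# [{"DOI": "...", "RA": "DataCite"}] -- verify it still behaves this way.
import requests

def registration_agency(doi):
    resp = requests.get("https://doi.org/ra/" + doi)
    resp.raise_for_status()
    return resp.json()[0].get("RA")  # e.g. "Crossref" or "DataCite"

print(registration_agency("10.5061/dryad.585t4"))
```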

Homework

First, mint DOIs for your 5 most important research outputs that don’t already have them. Bonus points if some of those outputs are not articles.

Once you have your DOIs, use them:

  • Put them onto your CV alongside your research products;
  • Update your arXiv preprint metadata to point to them;
  • Put clearly-labeled preferred citations that include DOIs into your dataset or software documentation; and
  • Encourage others to always link to your work using the DOI resolver link (created by putting “http://doi.org/” in front of your DOI; here’s an example of what one looks like: http://doi.org/10.7287/peerj.603v0.1/reviews/2).

Now that you’ve got DOIs for your most important research outputs, we’ll explore how you can use altmetrics and impact tracking services like Altmetric.com and Impactstory to discover how often they’re cited, saved, shared, discussed, and otherwise reused online. Stay tuned!

Impact Challenge Day 27: Track your scholarly social media and website impacts with Twitter, Sumall, and Google Analytics

Throughout this Impact Challenge, we’ve explored many ways for you to get your work to other researchers, the public, and other audiences via the Internet, by making connections at conferences, and other means.

To close out the Challenge, we’ll share four techniques for measuring the success of your ongoing efforts, starting with basic social media and website analytics.

Social media and website analytics like those provided by Twitter, Sumall, and Google Analytics can tell you a lot about who’s following your work, the potential exposure your work has received, and some limited bits about the diverse uses of your work, beyond simple pageviews and download counts.

Let’s dig into four easy ways to explore the metrics behind your website and social media accounts.

Twitter Analytics

Twitter recently rolled out an Analytics feature, which can tell you not only how many followers you have, but also their demographics and how others are using your tweets. Are your tweets being retweeted or favorited very often? If so, what are the characteristics of those tweets with high engagement rates?

The wealth of data that Twitter provides can help you learn more about the audiences you’re having an impact with (Is your work resonating in the countries whose populations you’re studying? What subjects do your followers care most about? and so on). Here’s how to get started with Twitter Analytics:

  • Login to Twitter
  • Click on your picture in the upper right-hand corner and then select “Analytics” from the drop-down menu
  • You’ll see three tabs:
    • Tweet Activity: shows the exposure your tweets have received and the general rates at which others have engaged with them, and lets you explore the activity on individual tweets.
    • Followers: breaks down the demographics of your followers and charts your follower growth over time.
    • Twitter Cards: most useful for advanced academic users who want to promote blog content and rich media. We won’t talk much about Twitter Cards in this post; check out this guide for more information.

Let’s dig into the Tweet Activity and Followers pages.

Tweet Activity

[Screenshots: the Tweet Activity dashboard–a 28-day impressions chart and engagement summaries]

The first thing you’ll see on this page is a bar chart of the number of Twitter impressions your tweets have received over the past 28 days. Twitter impressions are the number of times your tweets have appeared in someone else’s timeline. You can think about this metric as being akin to the circulation statistics of a journal you’re published in–it’s not the same as readership, but it gives a sense of your overall exposure.

You’ll also see summaries of your average Engagements on the right-hand side of the screen. How often have others clicked on your links, retweeted and favorited your tweets, and replied to you over the past 28 days? And how many of each of these actions have you received per day, on average?

In the middle of the screen, you’ll see a list of your tweets in reverse chronological order, along with their individual number of impressions, engagements, and engagement rate.

[Screenshot: the tweet list, with per-tweet impressions, engagements, and engagement rate]

You can click on “View Tweet details” for any individual tweet to get a drill down view of the metrics:

[Screenshot: the Tweet details view for a single tweet]

And this is where the good stuff lives. The chart at the top of the Tweet details page tells you the times when your tweet was most popular, and below it are the types of actions others took to engage with or share your tweet with others.

Over time, you can use this specific information, as well as more general information about your overall tweet activity, to learn when your tweets get the most impressions and engagement. That way, you can schedule your future tweets to post during similar times when sharing links to your blog posts, journal articles, and other scholarly products, so as many people see your work as possible.

Consider doing an informal analysis of your most popular tweets on a monthly basis. It’ll allow you to see what types of tweets are the most popular with your followers, and you can use that insight to share future links in a similar way.

An easy way to do this informal analysis is to export your Tweet Activity data as a CSV file. Open it up in Excel and use the Sort function to see which of your tweets have the most impressions, retweets, and other types of engagement.
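
If you’d rather script that analysis than sort in Excel, here’s a minimal sketch using Python’s pandas library; the filename and column names are assumptions to check against your own export:

```python
# A sketch of the same informal analysis in Python rather than Excel.
# The column names below match Twitter's Tweet Activity CSV export at the
# time of writing -- check your own file's header row before running.
import pandas as pd

tweets = pd.read_csv("tweet_activity_metrics.csv")  # your exported file
top = tweets.sort_values("impressions", ascending=False)

# Your ten highest-exposure tweets, with their engagement counts.
print(top[["Tweet text", "impressions", "engagements"]].head(10))
```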

Beyond Tweet Activity, your Followers data is a great way to learn the demographics of your audience–including audiences you didn’t expect to be reaching via Twitter.

Followers

[Screenshot: the Followers dashboard]

Much of your Followers page is self-explanatory: How many followers do you have overall, and when did you experience a spike in follower growth? What are your followers most interested in? Where are they located? Who else do they follow? And what’s their gender?

You can compare information about your follower rate to information on your Tweet Activity page to see if any particular tweets or mentions can account for a dip or rise in follower growth.

And demographic information can be useful in other ways. For example, if you’re a public health researcher studying drug use among teens in northern Europe, one way to prove that you’re successful at reaching out to that group would be to dig into your Followers data and see where your followers live; who else they’re following and their interests could give you insight as to their age and other demographic information.

Twitter Analytics gives you rich data on your specific impacts on Twitter. Sumall, on the other hand, can give you a 50,000-foot view of your impacts across Twitter and other platforms.

Sumall

[Screenshot: the Sumall dashboard]

Sumall is a popular analytics platform that allows you to dig into your Twitter, Facebook, and other social media metrics. For the purposes of this challenge, we’ll explore only the most revealing Twitter and Facebook metrics that Sumall provides, which are:

  • Twitter
    • Mentions: How often did others use your handle to reply to you or comment about you?
    • Mention Reach: How many people saw your name in their timeline?
    • Retweet Reach: How many people saw a retweeted tweet of yours in their timeline?
  • Facebook
    • Post Likes: How often are others “liking” your post? This can give a big boost to your posts’ visibility among others’ friend networks.
    • Post Comments: How often are others engaging with your posts by commenting upon them?
    • Post Shares: How often have others reshared your posts?

Here’s how to explore these (and other) metrics: sign up for a free Sumall account using your Twitter or Facebook login, or by signing up with your email.

You’ll be prompted to connect other social media accounts; I suggest starting with Twitter and Facebook. Google+ and WordPress.com statistics are also available, but not detailed enough to be useful, in my opinion.

Once your social media accounts are hooked up, you’ll see the main Sumall interface. The Sumall interface is a bit buggy and suffers from some usability issues, but it is nonetheless illuminating for gaining quick and dirty insights into your metrics via charts and summaries.

On the left hand side of the screen are different metrics you can click on to add to the chart. The chart itself takes up most of the middle of the screen.

The chart lacks labeled X and Y axes; you have to hover over individual data points to see the dates at which particular metrics occurred and what those metric counts were:

[Screenshot: hovering over a data point to reveal its date and count]

Below the chart are summaries of the data points you’ve added to the chart for the specified date range:

[Screenshot: summaries of the selected metrics]

At the top of the screen, you can set date ranges by clicking on the underlined dates. This allows you to compare data over certain periods:

[Screenshot: setting a custom date range for comparison]

All of the metrics that Sumall provides give you a good overview of the reach your work has had, and how engaged others are with you in general on various platforms. Sumall isn’t as good as Twitter or the next two types of metrics providers at telling you about the performance of your specific posts.

Google Analytics

Google Analytics is a powerful platform that can tell you a lot about the traffic that your professional website and blog have received.

To get started, sign up for a free Google Analytics account, then add a small snippet of tracking code to your website. It records your site’s traffic: how many people are visiting, where they’re coming from, how long they’re staying, what the most popular content on your site is, and so on.

Hooking Google Analytics up to your blog is very easy if you’re running a WordPress blog: here’s a tutorial on how to do it in under 60 seconds.

Google Analytics provides a number of out-of-the-box reports that can be useful for learning about your site’s visitors and the content that’s most popular, as summarized by the University of Minnesota’s Academic Health Center:

  • Audience overview report provides an at-a-glance overview of all the key visitor metrics for your site.
  • Acquisition overview report provides an at-a-glance overview of visitor-source metrics for your site.
  • Behavior overview report provides an at-a-glance overview of the key pageview metrics for your site.

Let’s take a closer look at each report.

Audience Overview Report

[Screenshot: the Audience Overview report]

How many visitors have you received, and where do they hail from? Do visitors from certain countries stay longer on your website? How about visitors using a mobile browser versus a desktop browser? Knowing more about your visitors’ demographics can tell you how good a job you’re doing at engaging certain communities, and can also surface clues like “Are mobile visitors leaving my site because they’re having a hard time reading it on their phones?”

Acquisition Overview Report

[Screenshot: the Acquisition Overview report]

Are more people finding your site through search than through referrals from Twitter and Facebook? Which social networks send the most traffic your way? Digging into this report, as well as the drill-down views beneath the “Acquisition” section of the left-hand toolbar, can give you insight into how you might better promote your website or blog using social media.

Behavior Overview Report

[Screenshot: the Behavior Overview report, showing a month of traffic to our blog]

What are the most popular pages on your website or blog? Above, we’ve screencapped traffic for our blog over the past month; the most popular pages appear at the bottom right, with a summary of traffic just below the overall traffic chart. This report not only tells you which content is most eligible for resharing on social media as “evergreen content,” but also whether blog posts aimed at engaging the public are working.

For a comprehensive list of Google Analytics resources, check out KissMetrics’ link roundup.

What these platforms can’t tell you

None of these platforms expose much of the underlying, qualitative data like, “In what context was I ‘mentioned’ on Twitter?” or “What did all those Facebook comments actually say?”

So, be sure to use the data you’re gathering carefully!

Homework

Explore your Twitter Analytics data and sign up for Sumall or Google Analytics. After a few weeks’ worth of metrics have accumulated, dig into the data with these questions in mind:

  • Have there been spikes in engagement or traffic after I shared certain types of content?
  • What do these services tell me about the demographics of my readers, visitors, and followers?
  • How do those demographics differ from what I expected? How are they similar?
  • How might I use the data these sites provide to document my engagement efforts for professional purposes?

Tomorrow, we’ll dig into a key way to make your academic work trackable across the Web: minting permanent identifiers.

Impact Challenge Day 26: Expand your co-authorship base

In today’s challenge, we’ll share another way to increase your impacts beyond the Internet: co-authoring with a diverse group of colleagues.

Co-authoring is becoming increasingly common in many fields, for good reason: co-authoring “makes research more fun, productive, and efficient,” helps researchers “develop new ideas, extend our methodological toolkit, and share the workload,” allows senior researchers to share their expertise with younger scientists, and results in papers that some say contain stronger ideas and writing.

Co-authorship is also about bringing your own expertise to the table. Working with diverse co-authors can gain you a wider network of colleagues and increased connections in your field. And, if it’s done well, it can secure you important allies at all career stages. After all, you never know where your grad students or postdocs will end up some day!

Plus, when you publish with a broad group of people, you help break down the “old boys network” while increasing the reach of your work — citation counts are higher for papers with gender and ethnically diverse co-authors.

Let’s learn more about the types of co-authors who can make up a more diverse group of collaborators, how to work well with others, and some of the benefits and drawbacks of co-authorship in general.

What to look for in a co-author

In general, there are some things you should look for when recruiting co-authors outside of your own research group:

Complementary strengths

Are your potential collaborators strong on theory, whereas you’re the computational methods wiz? Does a postdoc in your group know the ins and outs of R, while a PhD student you mentor can bang out a top-rate literature review in 24 hours? Collaborators whose strengths complement your own make it easy to divide and conquer, writing a better paper in less time.

Philosophy

Does this person respond to emails in a timely manner and deliver on promises? Knowing up front when you can count on someone takes a lot of the headaches out of collaboration.

And does this person’s working style jibe well with your own? C. Titus Brown points out that he often ends up collaborating with others who aren’t big on computational biology, but that their shared, relaxed approach to writing is what makes their partnerships successful.

Challenge

Good co-authors are also those who challenge you to do your best work. Researcher Bob Hinings describes his best and longest-lasting collaborator this way: “[I find] that other people are interesting and usually have better ideas than I do so I can build on their contributions and get great satisfaction from the process, even though at times it can be challenging. Royston is always full of ideas and it is a challenge to keep up with him.”

Collaborators with these characteristics can be found not only in your lab or university, but in other countries, different disciplines, and at many stages of their career. Let’s now dive into some ways you can look to diversify your group of collaborators.

Types of co-author diversity

Career stage

You can choose to co-author with scholars at your same career stage, with more senior scholars, or with scholars who are junior to you. Each has its advantages and disadvantages, as the Crooked Timber blog documents:

It is important for a junior scholar to show clearly his or her distinct contributions to a field and by co-authoring with senior scholars, some will be inclined to dismiss the work as that of the senior researcher…[When working with students] the junior scholar becomes the senior author due to his or her seniority as compared to the student co-author(s).

Co-authoring with junior scientists allows you to also mentor those with less experience. Consider giving full co-authorship credit to students who’ve helped on a project, rather than relegating their credit to the Acknowledgement section of your paper. It’s an easy way to diversify your co-author list while giving students a major leg-up.

That said, don’t make someone an author just to be nice. Respect the norms for your field and its written ethical guidelines. Many junior scholars bring their own strengths to the table. Ask them to take the lead on recording a video abstract, blogging about your study, or drafting a press release–your paper may be stronger for it!

Discipline

There are many good reasons to co-author with scientists from outside of your field (and even outside of academia): they can help your work reach different audiences, give an outside perspective on your field of study, and find ways to apply research in a clinical setting, among others.

For example, studies on sustainability science and data curation by hydrologist Praveen Kumar and information scientists Beth Plale and Margaret Hedstrom have been published from different perspectives in different venues. (Their work was both presented at the American Geophysical Union Fall Meeting in 2013 and published in the International Journal of Digital Curation.) Some may worry that this constitutes “double dipping” (publishing the same work twice), but if done properly, the focus and content of the two products are very different, and each gets disciplinary information to its community of interest.

And as the Dean of Drexel University’s College of Nursing and Health Professions points out, collaboration can move research into practice, developing clinical technology and saving lives.

Gender & ethnic diversity

National Cancer Institute’s Kenneth Gibbs Jr eloquently explains the argument for diverse research teams on the Voices blog:

[W]hen trying to solve complex problems (i.e., the sort of thing scientists are paid to do), progress often results from diverse perspectives. That is, the ability to see the problem differently, not simply “being smart,” often is the key to a breakthrough. As a result, when groups of intelligent individuals are working to solve hard problems, the diversity of the problem solvers matters more than their individual ability. Thus, diversity is not distinct from enhancing overall quality—it is integral to achieving it.

And the literature backs him up: one recent study found that gender diversity on research teams leads to better-quality publications. Another found that ethnically diverse teams are more creative and produce higher-quality ideas than ethnically homogeneous groups (albeit among a sample population of undergraduates). Papers with ethnically diverse co-authors also tend to get more citations.

But perhaps the best argument for having a gender- and ethnically-diverse group of collaborators is summed up in this tweet:

[Embedded tweet]

Output

A final way to consider diversity is in the context of research outputs. You can “co-author” not only journal articles, but also presentations, software, and other types of research outputs.

Impactstory co-founder Heather Piwowar once found a diverse group of collaborators by putting out a call on Twitter for others interested in organizing a panel for the ASIS&T Annual Meeting in 2011. The panel was fun, very successful, and allowed her to work with a more diverse group of researchers than she had anticipated.

And collaborators on genomics researcher Holly Bik’s Phinch project are industry software developers, not other researchers, which has led to the development of a beautiful data visualization app for large biological datasets.

So how do you find diverse co-authors? Let’s explore some strategies.

How to find diverse co-authors

Mentors

Communications researcher Philip N. Howard suggests tapping your mentors for co-authorship opportunities:

The first step in finding opportunities to co-publish is to let your faculty mentors know that you are available to help if they ever get such invitations. Faculty sometimes receive unsolicited invitations to write an article or contribute a book chapter. Since faculty often plan long-term writing agendas, they may decline an unexpected invitation. They may be more likely to accept such an invitation if they know they can share the research and writing tasks with a co-author.

Mentors may also be able to connect you with colleagues who are interested in a similar subject who might be in need of a collaborator with whatever skills you possess (computational methods, quick-but-thorough literature review writing, mastery of Stata, and so on).

Conference buddies

Remember all those interesting researchers whose work you admire that you met while hustling at conferences? They can make great collaborators. Shoot them an email to say hello, and share an idea or two you’ve been thinking on to see if they want to collaborate.

Social networks

Take a look at your social networks on Twitter, ResearchGate, and LinkedIn. After being on social media for a few weeks or months you’ll have met scientists in your network whose skills complement your own. Don’t be afraid to reach out to potential co-authors with an idea for a paper or project.

Cold-call

The final–and most challenging–way to find co-authors is to “cold call” a researcher that you want to collaborate with but haven’t met yet. Reach out to them via email or phone, send them an idea for a paper or two, and ask if they’d like to collaborate.

As a PhD student, Impactstory co-founder Jason Priem once emailed a researcher he admired with a request to co-author, offering to do the grunt work of writing a literature review. He was accepted onto the paper and now has a co-authorship credit with a respected researcher, broadening his co-authorship base and experience.

If you’ve got something to offer–a great idea, a complementary skill, or the ability to do something the lead author doesn’t want to do–you can find opportunities that aren’t readily apparent.

Making co-authorship work

So–you’ve got your co-authors lined up and ready to write. Now what?

Tseen Khoo of the Research Whisperer blog says all of the following are required for a successful co-authoring experience:

  • A feasible, agreed-upon schedule for drafting and deadline for completion.
  • A strong leader for the paper, someone who takes final responsibility for its proofing and submission (even though the actual tasks may be devolved to someone else…).
  • Proper version control. That’s why I emphasise the serial process of sending it around the team. When X has done their bit, they send it to Y (cc’ing the others), who then sends it to Z (cc’ing the others). Don’t fiddle with the writing till you are the one the document is sent to.
  • All members of the team to be committed to adding value to the publication, and doing their bit.

In the next section, we discuss co-author agreements, which can help you articulate the schedule and responsibilities that Tseen describes. Version control can be managed via email and Microsoft Word as described above, or by writing your paper on GitHub, WriteLaTeX, or Authorea.

Be sure to also avoid gift authorship (giving authorship credit to people who didn’t contribute to the paper) and ghost authorship (leaving people who made substantial contributions off the author list)–both are still practiced by some academics but are heavily frowned upon by publishers.

The tricky bits

There’s no shortage of screeds outlining the many potential drawbacks to co-authoring papers–chief among them, disputes over who gets credit.

Credit for authorship is starting to see some progress: some journals require specific articulation of author contributions (like this statement for this paper), and the recently released CRediT taxonomy may fix the problem altogether once widely adopted.

And it may sound hokey, but the near-magical fix for most of these problems is simple: create a co-author agreement that puts into writing the roles, division of labor, and a set of standards that everyone agrees to abide by (like “responding to an email within 48 hours”, and so on). The Elsevier Connect blog has posted a co-author agreement template, if you want to give it a try.

Co-authorship agreements are generally not legally-binding contracts, but instead ways for everyone to clarify the “rules of engagement” before a major writing project begins.

Homework

Brainstorm ideas for writing projects and a list of potential co-authors. If you want, you can divide the list into “low hanging fruit” and “dream co-authors” to make it easier to write.

If you’ve got the bandwidth to take on a new writing project right now, reach out to your potential co-authors in one of the ways described above and propose a collaboration. Otherwise, keep your list handy for a rainy day, when you’ll have the time to take on a new project.

And if you don’t have a diverse network of colleagues on your scholarly social media sites, you can start to fix that right now–start following 10 new people today.

Impact Challenge Day 25: Mentor other scientists

Even if you’re at the beginning of your research career, you can be a mentor.

Mentoring is a wonderful way to pay it forward, passing on knowledge and skills to younger generations of scientists. Mentors can help other researchers navigate tricky grant application processes, handle complex political situations in the lab, and connect with diverse colleagues and potential collaborators.

How does mentoring affect your impact? Well, impact isn’t all about citations and prestige–it’s about the effect you have on others, too.

And mentoring isn’t always the “wise professor helps student” scenario that many imagine it to be. PhD students can be mentors to other students, researchers can “peer mentor” other researchers, and increasingly scientists at all stages in their career are using the Web to mentor each other.

In today’s challenge, we’ll mostly tackle the latter type of mentoring: leveraging social media to advise and support other researchers.

First, let’s define mentoring.

Mentoring, loosely defined

Mentoring is often defined along the lines of “train[ing] or advis[ing] the mentee…so that they can work more effectively and progress,” but it’s so much more than that. And mentoring also no longer fits the rigid “wise professor helps student” scenario that I mentioned above.

In general, mentoring is about:

  • Listening carefully and giving impartial advice
  • Connecting junior researchers with opportunities
  • Helping others without the expectation of anything in return

And there are a number of specific activities that mentors tend to offer. National Center for Faculty Development & Diversity’s Kerry Ann Rockquemore defines those as:

  • Professional development (time management, conflict resolution, project planning, grant writing, basic organizational and management skills).
  • Access to opportunities and networks (research collaborations, funding, etc.).
  • Emotional support (to deal with the stress and pressure of the tenure track and life in a new location).
  • A sense of community (both intellectual and social).
  • Accountability (for research and writing).
  • Institutional/political sponsorship (someone to advocate their best interest behind closed doors).
  • Role models (who are navigating the academy in a way they aspire to).
  • Safe space (to discuss and process their experiences without being invalidated, questioned, devalued and/or disrespected).

Did you notice how most of these activities can be done by anyone, at nearly any stage of their career?

If you’re a graduate student, you can mentor undergraduates. And if you’re an early career researcher, you can do the same for graduate students, and senior researchers can do the same for you. Plus, researchers of similar standing with differing backgrounds can “peer mentor” one another. It’s all about paying it forward.

We tend to think about mentoring as face-to-face rap sessions, but the truth is that the Internet allows us to mentor people we’ve never met through a variety of means–the first of which is “distributed mentoring.”

Getting started with ‘distributed mentoring’

Distributed mentoring is a movement started by Diana Kimball to open up the practice of mentoring beyond the confines imposed by physical location. According to Diana, you can be a “distributed mentor” by creating a space on your website where you proclaim your interest in mentoring others over the Internet on a variety of topics.

Those who are interested in being mentored can read through your list and contact you via email to begin the process. You can “meet” via video chat or over the phone, as often as you’d like.

But distributed mentoring isn’t done in just one way. You don’t have to join Kimball’s movement to be a distributed mentor in essence. Instead, you can seek out others on social media who are in need of help.

There are many places on the Web where you can find junior researchers hungry for guidance. We’ll highlight three: Academia Stack Exchange, ResearchGate, and Twitter. Let’s break down how you can use each platform to help others.

Academia Stack Exchange

Academia Stack Exchange is part of the Stack Exchange network of Q&A sites, which grew out of the popular programming site Stack Overflow. On Academia Stack Exchange (Academia.SE), users can ask about most aspects of academia: how to format a CV, the etiquette of handling a reference request from someone who never showed up for class, where to find certain types of data or articles, and so on–the sorts of questions a mentee will often ask.

But there’s more to Academia.SE than that. Basically, the site works like this: someone posts a question and others answer it. Members of the Academia.SE community can vote answers up or down, based on quality. Points are assigned based on both what you contribute (questions, answers, edits, and so on) and whether others have voted your content up or down. And you accumulate points over time, gaining reputation, badges, and the ability to do more things on the site as your points increase.

Here’s how to use Academia.SE for distributed mentoring: browse Academia Stack Exchange by topic (and also wander over to other Stack Exchanges, like this one for Chemistry or this one for Math) to find questions that match your expertise. Once you’ve signed up for an account, you can begin to answer questions.

If you’ve chosen to use your real name when signing up–which I recommend–others will be able to recognize your contributions. But whether pseudonymous or not, you’re still helping others, which is the whole point of mentoring.

ResearchGate

Until now, we’ve mostly talked about ResearchGate as a platform to share your scholarship. But it also can be used to reach out to and help other scientists.

ResearchGate’s Q&A feature allows scientists to pose questions to others who have listed relevant skills and expertise in their profiles, and anyone matching those skills can answer.

Here’s how it works: under the “Topics” section of your profile, you can add and edit subject areas you’ve got expertise in. Then, on the Q&A section of the site, ResearchGate will prompt you with questions it thinks you can answer, based on the Topics you’ve listed in your profile.

Because ResearchGate is closely linked with your scholarly identity, it’s easy to get recognition for your contributions. Points are also added to your RG score based on the number of questions you answer, which gamifies the experience for a bit of fun.

Some have praised ResearchGate’s Q&A feature over that of similar services, but others criticize the site for the “useless” questions posed in the Q&A. You’ll have to judge for yourself whether the questions posed in your area of expertise are worth answering, and what value you can get out of engaging others on the site.

Twitter

Twitter can be used for all kinds of mentoring and support activities, especially when using and following hashtags.

Hashtags like #madwriting can be used for accountability: many researchers share their writing schedules with others–much like you’d share “days since my last cigarette” with friends–to hold themselves to a promise of productivity and responsibility.

General hashtags like #phdchat, #gradchat, and #ecrchat are often used by students and early career researchers to pose questions and ask advice, as are hashtags for disciplines. Check in on these hashtags regularly and answer any questions that arise or offer to share your experience and advice. Not everyone will be interested, but many will appreciate your willingness to take a few minutes out of your day to help them.

The same goes for those you’re already following on Twitter. Read through your Twitter stream each time you log in to see if anyone you follow could use advice or support, and help them in any way you can, to the extent that you’re comfortable doing so.

One downside of using Twitter to mentor can be the sheer number of unrelated tweets you have to sift through to find the conversations worth joining. Hashtags are a partial answer to that problem, but beyond them, there’s not much you can do to fully solve it.

Limitations of distributed mentoring

It can be hard to create a safe space for others using very public forums like those mentioned above. Similarly, it’s difficult–and potentially risky–to offer access to opportunities and networks to someone you don’t know very well.

One way around these problems is to make initial connections on public-facing social media sites, if you want to, then exchange private contact information and continue mentoring via email, videochat, or telephone.

If you’re a stranger to a potential mentee, go slowly–mentoring can get complicated fast, and overcommitment and overinvolvement help no one. To start with, it’s better to offer too little of yourself than too much.

Homework

Choose two platforms to experiment with as a distributed mentor. “Lurk” for a while, reading previous Q&As to get a feel for how each platform works, then answer at least one question on each. Additionally, consider setting up a “/mentor” section of your website and formally joining Diana Kimball’s Distributed Mentoring movement.