Open Access Week 2014 – a look back and ahead


Much like Lil Bub, we’re bushed. Our Open Access Week 2014 was very eventful–we spoke with more than 100 researchers and librarians in 9 countries over 5 days. Here’s how we spent our time.

Throughout the week, Stacy hosted several sessions of “The Right Metrics for Generation Open: a guide to getting credit for Open Science,” where she talked about how Generation Open’s needs are evolving beyond those of previous generations of scientists. Altmetrics are particularly well-suited to meet those needs. You can view her slides on Slideshare, and read a long-form blogpost based on the presentation here on our blog.

Tuesday saw Stacy talking with faculty and librarians at the University of Alberta and the University of Memphis, where she explained “Why Open Research is Critical to your career”. (tl;dr: Change is coming in scholarly communication–so you should get on board and start making the most of the great opportunities that Open Science and altmetrics can offer you.) Check out her slides on Google Docs.

On Wednesday, Stacy had the pleasure of hangin’ virtually with librarians and library students at the University of Wisconsin, where they talked about the fact that “Altmetrics are here: are you ready to help your faculty?” After all, who’s a better neutral third-party to help faculty navigate this new world of altmetrics than librarians? Slides from that presentation are available on Google Docs.

Jason gave his popular talk, “Altmetrics & Revolutions: How the web is transforming the measure and practice of science” to researchers at the University of New Brunswick on Thursday. His slides are available on Google Docs.

Stacy rounded out the week by chatting with researchers and librarians at the University of Cape Town on Friday. Her presentation on the basics of altmetrics and how to use them–”Altmetrics 101: How to make the most of supplementary impact metrics”–is available for viewing on Google Docs.

Heather’s going to be the one in need of a nap over the next two weeks–she’ll be presenting on open data and altmetrics throughout Australia. Here are the events ahead:

  • Melbourne, Mon, 27 Oct, 9am–12.30pm:  Creating your research impact story:  Workshop at eResearch Australasia. Also featuring: Pat Loria, CSU, and Natasha Simons, ANDS. (sold out)

  • Melbourne, Wed, 29 Oct, 10–11am: Keynote presentation at eResearch Australasia. Register for conference

  • Brisbane, Mon, 3 Nov, 1–4.30pm: Uncovering the Impact Story of Open Research and Data. Also featuring Paula Callan, QUT, and Ginny Barbour, PLOS. QUT: Owen J Wordsworth Room. Level 12, S Block, QUT Gardens Point. (sold out)

  • Sydney, Wed, 5 Nov, 1.30–4.30pm: An afternoon of talks featuring Heather Piwowar. Also featuring Maude Frances, UNSW, and Susan Robbins, UWS. ACU MacKillop Campus, North Sydney: The Peter Cosgrove Centre, Tenison Woods House, 8-20 Napier Street, North Sydney. (sold out)

Are you ready, Oz?!


The Right Metrics for Generation Open: a guide to getting credit for Open Science

You’re not getting all the credit you should be for your research.

As an early career researcher, you’re likely publishing open access journal articles, sharing your research data and software code on GitHub, posting slides and figures on Slideshare and Figshare, and “opening up” your research in many other ways.

Yet these Open Science products and their impacts (on other scholars, the public, policymakers, and other stakeholders) are rarely mentioned when applying for jobs, tenure and promotion, and grants.

The traditional means of sharing your impact–citation counts–don’t meet the needs of today’s researchers. What you and the rest of Generation Open need is altmetrics.

In this post, I’ll describe what altmetrics are and the types of altmetrics you can expect to receive as someone who practices Open Science. We’ll also cover real life examples of scientists who used altmetrics to get grants and tenure–and how you can do the same.

Altmetrics 101

Altmetrics measure the attention your scholarly work receives online, from a variety of audiences.

As a scientist, you create research data, analyses, research narratives, and scholarly conversations on a daily basis. Altmetrics–measures of use sourced from the social web–can account for the uses of all of these varied output types.

Nearly everything that can be measured online has the potential to be an altmetric indicator. Here are just a few examples of the types of information that can be tracked for research articles alone:

  • recommended: Faculty of 1000 (scholarly); popular press (public)

  • cited: traditional citations (scholarly); Wikipedia (public)

  • discussed: scholarly blogs (scholarly); blogs and Twitter (public)

  • saved: Mendeley and CiteULike (scholarly); Delicious (public)

  • read: PDF views (scholarly); HTML views (public)

When you add research software, data, slides, posters, and other scholarly outputs to the equation, the list of metrics you can use to understand the reception of your work grows exponentially.

And altmetrics can also help you understand the interest in your work from those both inside and outside of the Ivory Tower. For example, what are members of the public saying about your climate change research? How has it affected the decisions and debates among policy makers? Has it led to the adoption of new technologies in the private sector?

The days when your research only mattered to other academics are gone. And with them also goes the idea that there’s only one type of impact.

Flavors of impact

There are many flavors of impact that altmetrics can illuminate for you, beyond the traditional scholarly impact that’s measured by citations.

This 2012 study was the first to showcase the concept of flavors of impact via altmetrics. These flavors are found by examining the correlations between different altmetric indicators; how does a Mendeley bookmark correlate to a citation, or to a Facebook share? (And so on.) What can groups of correlations tell us about the uses of scholarship?

Among the flavors the researchers identified were a “popular hit” flavor (where scholarship is highly tweeted and shared on Facebook, but not seen much on scholarly sites like Mendeley or in citations) and an “expert pick” flavor (evidenced by F1000 Prime ratings and later citations, but few social shares or mentions). Lutz Bornmann’s 2014 study built upon that work, documenting that articles that are tagged on F1000 Prime as being “good for teaching” had more shares on Twitter–uncovering possible uses among educational audiences.

The correlation that’s on everyone’s mind? How do social media (and other indicators) correlate with citations? Mendeley bookmarks have been found to have the strongest correlation with citations; this points to Mendeley’s use as a leading indicator (that is, if something is bookmarked on Mendeley today, it’s got a better chance of being cited down the road than something that’s not bookmarked).

Correlations with citations aren’t the only correlations we should pay attention to, though. They only tell one part of an impact story–an important part, to be sure, but not the only part.
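
If you want to play with this idea yourself, here’s a minimal sketch of the kind of correlation analysis those studies run. The counts below are made-up toy data for a handful of papers, purely for illustration:

```python
# Toy example of the "flavors of impact" idea: correlate pairs of indicators
# across a set of papers. These counts are invented for illustration only.
from scipy.stats import spearmanr

# One entry per (hypothetical) paper.
mendeley_readers = [12, 45, 3, 60, 22, 8, 31, 5]
citations        = [4, 20, 1, 25, 10, 2, 12, 0]
tweets           = [150, 3, 40, 7, 90, 300, 2, 60]

rho_mendeley, _ = spearmanr(mendeley_readers, citations)
rho_twitter, _ = spearmanr(tweets, citations)

print(f"Mendeley readers vs. citations: rho = {rho_mendeley:.2f}")
print(f"Tweets vs. citations:           rho = {rho_twitter:.2f}")
```

A strong Mendeley–citation correlation alongside a weak Twitter–citation correlation would look like the “expert pick” flavor; the reverse pattern would look more like a “popular hit.”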

Altmetrics data includes qualitative data, too

Many don’t realize that altmetrics data isn’t only about the numbers. An important function of altmetrics aggregators like Altmetric.com and Impactstory (which we describe in more detail below) is to gather qualitative data from across the web into a single place, making it easy to read exactly what others are saying about your scholarship. Altmetric.com does this by including snippets of the blogs, tweets, and other mentions your work receives online. Impactstory links out to the data providers themselves, allowing you to more easily find and read the full-length mentions from across the web.

Altmetrics for Open Science

Now that you have an understanding of how altmetrics work in general, let’s talk about how they work for you as an Open Scientist. Below, we’ve listed some of the basic metrics you can expect to see on the scholarship that you make Open Access. We’ll discuss how to find these metrics in the next section.

Metrics for all products

Any scholarly object that’s got a URL or other permanent identifier like a DOI–which, if you’re practicing Open Science, would be all of them–can be shared and discussed online.

So, for any of your scholarly outputs that have been discussed online, you can expect to find Twitter mentions, blog posts and blog comments, Facebook and Google+ shares and comments, mainstream media mentions, and Wikipedia mentions.

Open Access Publications

Your open access publications will likely accrue citations just as your publications in subscription journals do, with two key differences: you can track citations to work that isn’t formally published (but has instead been shared on a preprint server like arXiv or another repository), and you can track citations to work that appears in the non-peer-reviewed literature. Citation indices like Scopus and Web of Science can help you track the former. Google Scholar is a good way to find citations in the non-peer-reviewed literature.

Views and downloads can be found on some journal websites, and often on repositories–whether your university’s institutional repository, a subject repository like bioRxiv, or a general-purpose repository like Figshare.

Screen Shot 2014-10-22 at 4.16.36 PM.png

Bookmarks on reference management services like Mendeley and CiteULike can give you a sense of how widely your work is being read, and by what audiences. Mendeley, in particular, offers excellent demographic information for publications bookmarked in the service.

Software & code

Software & code, like other non-paper scholarly products, are often shared on specialized platforms. On these platforms, the type of metrics your work receives is often linked to the platform itself.

SourceForge blazed the trail for software metrics by allowing others to review and rate code–useful, crowd-sourced quality indicators.

On GitHub, you can expect your work to receive forks (which signal adaptations of your code), stars (a bookmark or virtual fistbump that lets others tell you, “I like this”), pull requests (which get at others’ engagement with your work, as well as the degree to which you tend to collaborate), and downloads (which may signal software installations or code use). One big advantage of sharing your code on GitHub is that you can mint DOIs for your repositories (via integrations with services like Zenodo and figshare)–making it much easier to track mentions and shares of your code in the scholarly literature and across general purpose platforms, like those outlined above.
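
If you’d like to pull these numbers yourself, here’s a minimal sketch using GitHub’s public REST API (the repository name is a placeholder, and unauthenticated requests are rate-limited):

```python
# Fetch a few of the GitHub indicators mentioned above for one repository.
import requests

repo = "your-username/your-repo"  # placeholder: use your own repository

resp = requests.get(f"https://api.github.com/repos/{repo}")
resp.raise_for_status()
data = resp.json()

print("Stars:      ", data.get("stargazers_count"))
print("Forks:      ", data.get("forks_count"))
print("Watchers:   ", data.get("subscribers_count"))
print("Open issues:", data.get("open_issues_count"))
```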

Data

Data is often cited in one of two ways: citations to data packages (the dataset itself, stored on a website or repository) and citations to data papers (publications that describe the dataset in detail and link out to it). You can often track the former using an altmetrics aggregator (more on that in a moment) or the Data Citation Index, a Web of Science-like database that searches for mentions of your dataset in the scholarly literature. Citations to data papers can sometimes be found in traditional citation indices like Scopus and Web of Science.

Interest in datasets can also be measured by tracking views and downloads. Often, these metrics are shared on repositories where datasets are stored.

Where data is shared on GitHub, forks and stars (described above) can give an indication of that data’s reuse.

More info on metrics for data can be found on my post for the e-Science Portal Blog, “Tracking the Impacts of Data–Beyond Citations”.

Videos

Many researchers create videos to summarize a study for generalist audiences; other times, videos are themselves a type of research data.

YouTube tracks the most varied metrics: views, likes, dislikes, and comments are all reported. On Vimeo and other video sharing sites, likes and views are the most often reported metrics.
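
If you want to collect these video metrics programmatically, here’s a minimal sketch against the YouTube Data API v3; the API key and video ID below are placeholders you’d supply yourself:

```python
# Retrieve the view, like, and comment counts YouTube reports for one video.
import requests

API_KEY = "YOUR_API_KEY"    # placeholder: create a key in the Google Developers Console
VIDEO_ID = "YOUR_VIDEO_ID"  # placeholder: the ID from the video's URL

resp = requests.get(
    "https://www.googleapis.com/youtube/v3/videos",
    params={"part": "statistics", "id": VIDEO_ID, "key": API_KEY},
)
resp.raise_for_status()
items = resp.json().get("items", [])

if items:
    stats = items[0]["statistics"]
    print("Views:   ", stats.get("viewCount"))
    print("Likes:   ", stats.get("likeCount"))
    print("Comments:", stats.get("commentCount"))
else:
    print("No video found for that ID.")
```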

Slide decks & posters

Slide decks and posters are among the scholarly outputs that get the least amount of love. Once you’ve returned from a conference, you tend to shelve and forget about the poster that you (or your grad students) put hours’ worth of work into–and the same goes for the slide decks you use when presenting.

If you make these “forgotten” products available online, on the other hand, you can expect to see some of the following indicators of interest in your work: views, favorites (sometimes used as a bookmark, other times as a way of saying “job well done!”), downloads, comments, and embeds (which can show you how often–and by whom–your work is being shared and in some cases blogged about).

How to collect your metrics from across the Web

We just covered a heck of a lot of metrics, huh? Luckily, altmetrics aggregators are designed to collect these far-flung data points from across the web and deliver them to you in a single report.

There are three main independent altmetrics aggregators: Impactstory.org, PlumX, and Altmetric.com. Here’s the scoop:

  • Impactstory.org: we’re a non-profit altmetrics service that collects metrics for all scholarly outputs. Impactstory profiles are designed to meet the needs of individual scientists. We regularly introduce new features based on user demand. You can sign up for a 30-day free trial on our website; after that, subscriptions are $10/month or $60/year.

  • PlumX: a commercial service that is designed to meet the needs of administrators and funding agencies. Like Impactstory, PlumX also collects metrics for all scholarly outputs. PlumX boasts the largest data coverage of all altmetrics aggregators.

  • Altmetric.com: a commercial service that collects metrics primarily for publishers and institutions. Altmetric can track any scholarly output with a DOI, PubMed ID, arXiv ID, or Handle, but it handles publications best. Uniquely, it can find mentions of your scholarship in mainstream media and policy documents–two notoriously hard-to-mine sources.

Once you’ve collected your metrics from across the web, what do you do with them? We suggest experimenting with using them in your CV, year-end reporting, grant applications, and even tenure & promotion dossiers.

Skeptical? You needn’t be. An increasing number of scientists are using altmetrics for these purposes.

Researchers who have used altmetrics for tenure & grants

Each of the following researchers used altmetrics, alongside traditional metrics like citation counts and journal impact factors, to document the impact of their work.

Tenure: Dr. Steven Roberts, University of Washington

Steven is an Associate Professor in the School of Aquatic & Fishery Sciences at the University of Washington. He decided to use altmetrics data in his tenure dossier to two ends: to showcase his public engagement and to document interest in his work.

To showcase public engagement, Steven included a table in the Education and Outreach section of his dossier, illustrating the effects his various outreach channels (blog, Facebook, Flickr, etc.) have had to date.


For evidence of the impact of specific products, he incorporated metrics directly into the relevant entries of his CV.



Steven’s bid for tenure was successful.

Want to see more? You can download Steven’s full tenure dossier here.

Tenure: Dr. Ahmed Moustafa, American University in Cairo

Ahmed’s an Associate Professor in the Department of Biology at American University in Cairo, Egypt.

He used altmetrics data in his tenure dossier in two interesting ways. First, he included a screenshot of his most important scholarly products, as they appear on his Impactstory profile, to summarize the overall impacts of his work.


The profile badges summarize at a glance the relative impacts of his work among both the public and other scholars. Ahmed also includes a link to his full profile, so his reviewers can drill down into the impact details of all his works and review them for themselves.

Ahmed also showcased the impact of a particular software package he created, JAligner, by including a link to a Google Scholar search that showcases all the scholarship that cites his software:

As of August 2013, JAligner has been cited in more than 150 publications, including journal articles, books, and patents, (http://tinyurl.com/jalignercitations) covering a wide range of topics in biomedical and computational research areas and downloaded almost 20,000 times (Figure 6). It is somehow noteworthy that JAligner has claimed its own Wikipedia entry (http://en.wikipedia.org/wiki/JAligner)!

Ahmed received tenure with AUC in 2013.

Grant Reporting: Dr. Holly Bik, University of Birmingham

Holly was awarded a major grant from the Alfred P. Sloan Foundation to develop a bioinformatics data visualization tool called Phinch.

When reporting back to Sloan on the success of her project, she included metrics like the Figshare views that related posters and talks received, GitHub statistics for the Phinch software, and other altmetrics related to the varied outputs the project has created over the past few years.

Holly’s hopeful that these metrics, in addition to the traditional metrics she’s reported to Sloan, will make a great case for renewal funding, so they can continue their work on Phinch.

Will altmetrics work for you?

The remarkable thing about each of these researchers is that their circumstances aren’t extraordinary. The organizations they work for and receive funding from are fairly traditional ones. It follows that you, too, may be able to use altmetrics to document the impacts of your Open Science, no matter where you work or are applying for funding. After all, more and more institutions are starting to incorporate recognition of non-traditional scholarship into their tenure & promotion guidelines. You’ll need non-traditional ways like altmetrics to showcase the impacts of that scholarship.

3 important steps to getting more credit for your peer reviews

A few years back, Scholarly Kitchen editor-in-chief David Crotty informally polled a dozen biologists about the burden of peer review. He found that most review around 3 papers per month. For senior scientists, that number can reach 15 papers per month.

And yet, no matter how much time they spend reviewing, the credit they get is the same, and it looks like this on their CV:

“Service: Reviewer for Physical Review B and PLOS ONE.”

What if your work could be counted as more than just “service”? After all, peer review is dependent upon scientists doing a lot of intellectual heavy lifting for the benefit of their discipline.

And what if you could track the impacts your peer reviews have had on your field? Credit–in the form of citations and altmetrics–could be included in your CV to show the many ways that you’ve contributed intellectually to your discipline.

The good news? You can get credit for your peer reviews. By participating in Open Peer Review and making reviews discoverable and citable, researchers across the world have begun to get the credit they deserve for making science better.

But this practice isn’t yet widespread. So, we’ve compiled a short guide to getting started with getting credit for your peer reviews.

1. Participate in Open Peer Review

Open Peer Review is a radical notion predicated on a simple idea: that by making author and reviewer identities public, more civil and constructive peer reviews will be submitted, and peer reviews can be put into context.

Here’s how it works, more or less: reviewers are assigned to a paper, and they know the author’s identity. They review the paper and sign their name. The reviews are then submitted to the editor and author (who now knows their reviewers’ identities, thanks to the signed reviews). When the paper is published, the signed reviews are published alongside it.

Sounds simple enough, but if you’re reviewing for a traditional journal, this might be a challenge: Open Peer Review is still rare among traditional publishers.

For a very long time, publishers favored private, anonymous (‘blinded’) peer review, under the assumption that it would reduce bias and that authors would prefer for criticisms of their work to remain private. Turns out, their assumptions weren’t backed up by evidence.

Blinded peer review is argued to be beneficial for early career researchers, who might find themselves in a position where they’re required to give honest feedback to a scientist who’s influential in their field. Anonymity would protect these ECR-reviewers from their colleagues, who could theoretically retaliate for receiving critical reviews.

Yet many have pointed out that it can be easy for authors to guess the identities of their reviewers (especially in small fields, where everyone tends to know what their colleagues/competitors are working on, or in lax peer review environments, where all one has to do is ask!). And as Mick Watson argues, any retaliation that could theoretically occur would be considered a form of scientific misconduct, on par with plagiarism–and therefore off-limits to scientists with any sense.

In any event, a consequence of this anonymous legacy system is that you, as a reviewer, can’t take credit for your work. Sure, you can say you’re a reviewer for Physical Review B, but you’re unable to point to specific reviews or discuss how your feedback made a difference. (Your peer reviews go into the garbage can of oblivion once the article’s been published, as illustrated below.) That means that others can’t read your reviews to understand your intellectual contributions to your field, which–in the case of some reviews–can be enormous.

Image CC-BY Kriegeskorte N from “Open evaluation: a vision for entirely transparent post-publication peer review and rating for science” Front. Comput. Neurosci., 2012

So, if you want to get credit for your work, you can choose to review for journals that already offer Open Peer Review. A number of forward-thinking journals allow it (BMJ, PeerJ, and F1000 Research, among others).

To find others, use Cofactor’s excellent journal selector tool:

  • Head over to the Cofactor journal selector tool

  • Click “Peer review,”

  • Select “Fully Open,” and

  • Click “Search” to see a full list of Open Peer Review journals

Some stand-alone peer review platforms also allow Open Peer Review. Faculty of 1000 Prime is probably the best known example. Publons is the largest platform that offers Open peer review. Dozens of other platforms offer it, too.

Once your reviews are attributable to you, the next step is making sure others can read them.

2. Make your reviews (and references to them) discoverable

You might think that discoverability goes hand in hand with Open Peer Review, but you’d only be half-right. Thing is: URLs break every day. Persistent access to an article over time, on the other hand, will help ensure that those who seek out your work can find it, years from now.

Persistent access often comes in the form of identifiers like DOIs. Having a DOI associated with your review means that, even if your review’s URL were to change in the future, others can still find your work. That’s because DOIs are set up to resolve to an active URL when other URLs break.

Persistent IDs also have another major benefit: they make it easy to track citations, mentions on scholarly blogs, or new Mendeley readers for your reviews. Tracking citations and altmetrics (social web indicators that tell you when others are sharing, discussing, saving, and reusing your work online) can help you better understand how your work is having an impact, and with whom. It also means you can share those impacts with others when applying for jobs, tenure, grants, and so on.

There are two main ways you can get a DOI for your reviews:

  • Review for a journal like PeerJ or peer review platform like Publons that issues DOIs automatically

  • Archive your review in a repository that issues DOIs, like Figshare

Once you have a DOI, use it! Include it on your CV (more on that below), as a link when sharing your reviews with others, and so on. And encourage others to always link to your review using the DOI resolver link (these are created by putting “http://dx.doi.org/” in front of your DOI; here’s an example of what one looks like: http://dx.doi.org/10.7287/peerj.603v0.1/reviews/2).

DOIs and other unique, persistent identifiers help altmetrics aggregators like Impactstory and PlumX pick up mentions of your reviews in the literature and on the social web. And when we’re able to report on your citations and altmetrics, you can start to get credit for them!
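
To see what a persistent identifier buys you in practice, here’s a minimal sketch that asks Altmetric.com’s free public API what it knows about a given DOI. We reuse the example review DOI from above; the response field names are illustrative, so check the API documentation for the full format:

```python
# Look up the online attention recorded for one DOI via Altmetric.com's
# public API (rate-limited; no key required for basic lookups).
import requests

doi = "10.7287/peerj.603v0.1/reviews/2"  # the example review DOI mentioned above
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")

if resp.status_code == 404:
    print("No attention data recorded for this DOI (yet).")
else:
    resp.raise_for_status()
    data = resp.json()
    # Field names below are illustrative; inspect the JSON for what's available.
    print("Altmetric score:", data.get("score"))
    print("Tweeters:       ", data.get("cited_by_tweeters_count"))
    print("Blog posts:     ", data.get("cited_by_feeds_count"))
```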

3. Help shape a system that values peer review as a scholarly output

Peer review may be viewed primarily as a “service” activity, but things are changing–and you can help change ‘em even more quickly. Here’s how.

As a reviewer, raise awareness by listing and linking to your reviews on your CV, adjacent to any mentions of the journals you review for. By linking to your specific reviews (using the DOI resolver link we talked about above), anyone looking at your CV can easily read the reviews themselves.

You can also illustrate the impacts of Open Peer Review for others by including citations and altmetrics for your reviews on your CV. An easy way to do that is to include on your CV a link to the review on your Impactstory or PlumX profile. You can also include other quantitative measures of your reviews’ quality, like Peerage of Science’s Peerage Essay Quality scores, Publons’ merit scores, or a number of other quantitative indicators of peer-review quality. Just be sure to provide context to any numbers you include.

If you’re a decision-maker, you can “shape the system” by making sure that tenure & promotion and grant award guidelines at your organization acknowledge peer review as a scholarly output. Actively encouraging early career researchers and students in your lab to participate in Open Peer Review can also go a long way. The biggest thing you can do? Educate other decision-makers so they, too, respect peer review as a standalone scholarly output.

Finally, if you’re a publisher or altmetrics aggregator, you can help “shape the system” by building products that accommodate and reward new modes of peer review.

Publishers can partner with standalone peer review platforms to accept their “portable peer reviews” as a substitute (or addition to) in-house peer reviews.

Altmetrics aggregators can build systems that better track mentions of peer reviews online, or–as we’ve recently done at Impactstory–connect directly with peer review platforms like Publons to import both the reviews and metrics related to the reviews. (See our “PS” below for more info on this new feature!)

How will you take credit for your peer review work?

Do you plan to participate in Open Peer Review and start using persistent identifiers to link to and showcase your contributions to your field? Will you start advocating for peer review as a standalone scholarly product to your colleagues? Or do you disagree with our premise, believing instead that traditional, blinded peer review–and our means of recognizing it as service–are just fine as-is?

We want to hear your thoughts in the comments below!


ps.  Impactstory now showcases your open peer reviews!

 

Starting today, there is one more great way to get credit for your peer reviews, in addition to those above: on your Impactstory profile!

We’re partnering with Publons, a startup that aggregates Open and anonymous peer reviews written for PeerJ, GigaScience, Biology Direct, F1000 Research, and many other journals.

Have you written Open reviews in these places? Want to feature them on your Impactstory profile, complete with viewership stats? Just sign up for a Publons account and then connect it to your Impactstory profile to start showing off your peer reviewing awesomeness :).

Open Science & Altmetrics Monthly Roundup (September 2014)

September 2014 saw Elsevier staking its claim in altmetrics research, one scientist’s calculations of the “opportunity cost” of practicing Open Science, and a whole lot more. Read on!

Hundreds attend 1am:London Conference

PLOS’s Jennifer Lin presents at 1am:London (photo courtesy of Mary Ann Zimmerman)

Researchers, administrators, and librarians from around the world convened in London on September 25 and 26 to debate and discover at 1am:London, a conference devoted exclusively to altmetrics.

Some highlights: Sarah Callaghan (British Atmospheric Data Centre), Salvatore Mele (CERN) and Daniel Katz (NSF) discussed the challenges of tracking impacts for data and software; Dan O’Connor (Wellcome Trust) outlined the ethical implications of performing altmetrics research on social media, and our Director of Marketing & Research, Stacy Konkiel, shared where Impactstory has been in the past year, and where we’re headed in the next (check out her slides here).

As you might expect, 1am:London got a lot of social media coverage! Check out the Twitter archive here, watch videos of all the sessions here, and read recaps of the entire meeting over on the conference blog.

Elsevier announces increased focus on altmetrics

Elsevier is pledging increased organizational support for altmetrics research initiatives across the company in the coming year. According to their Editors Update newsletter, the publishing monolith will begin experimenting with the display of Altmetric.com data on journal websites. (Likely related: this altmetrics usability study, for which Elsevier is offering participants $100 USD honoraria; sign up here to participate.) The company also recently announced that Mendeley will soon integrate readership data into authors’ dashboards.

NISO survey results reveal more concern with definitions than gaming

The American information standards organization, NISO, surveyed researchers to determine the most important “next steps” for altmetrics standards and definitions development. Interestingly, one of the most common concerns related to the use of altmetrics in assessment–gaming–ranked lower than setting definitions. Promoting the use of persistent identifiers and determining the types of research outputs that are best to track altmetrics for also ranked highly. Check out the full results over on the NISO site.

Other Open Science & Altmetrics news

  • California becomes first US state to pass an Open Access bill: The California Taxpayer Access to Publicly Funded Research Act (AB609) was signed into law by Gov. Jerry Brown in late September, making California the first state in the nation to mandate Open Access for state-funded research. Specifically, the bill requires researchers funded by the CA Department of Public Health to make copies of resulting articles available in a publicly accessible online database. Let’s hope the saying, “As California goes, so goes the nation” proves true with respect to Open Access! Read more about the bill and related news coverage on the SPARC website.

  • Nature Communications is going 100% Open Access: the third-most cited multidisciplinary journal in the world will go fully Open Access in October 2014. Scientists around the world cheered the news on Twitter, noting that Nature Communications will offer CC-BY as the default license for articles. Read more over on Wired UK.

  • “Science” track proposals announced for Mozilla Festival 2014: The proposals include killer Open Science events like “Open Science Badges for Contributorship,” “Curriculum Mapping for Open Science,” and “Intro to IPython Notebook.” The Festival will occur in London on October 24-26. To see the full list of proposed Science sessions and to register, visit the Mozilla Festival website.

  • Impactstory launches new features, sleek new look: last month, we unveiled cool new functionalities for Impactstory profiles, including the ability to add new publications to your profile just by sending an email. The redesigned site also better showcases the works and metrics you’re most proud of, with new “Selected Works” and “Key Metrics” sections on your profile’s homepage. Check out our blog for more information, or login to your Impactstory profile to discover our new look.

  • Research uncovers a new public impact altmetrics flavor–“good for teaching”: bibliometrician Lutz Bornmann has shown that papers tagged on F1000 as being “good for teaching” tend to have higher instances of Facebook and Twitter metrics–types of metrics long assumed to relate more to “public” impacts. Read the full study on ArXiv.

  • PLOS Labs announces Citation Hackathon: citations aren’t as good as they could be: they lack the structure needed to be machine-readable, making them less-than-useful for web-native publishing and citation tracking. PLOS is working to change that. Their San Francisco-based hackathon will happen on Saturday, October 18. Visit the PLOS Labs website for more information.

  • What’s the opportunity cost of Open Science? According to Emilio Bruna, it’s 35 hours and $690. In a recent blog post, Bruna calculates the cost–both in hours and in cash–of making his research data, code, and papers Open Access. Read his full account on the Bruna Lab blog.

What was your favorite Open Science or altmetrics happening from September?

We couldn’t cover everything in this roundup. Share your news in the comments below!

Join Impactstory for Open Access Week 2014!

This year, we’re talking Open Science and altmetrics in an Open Access Week 2014 webinar, “The right metrics for Generation Open: a guide to getting credit for practicing Open Science.” We’re also scheduling a limited number of customizable presentations for universities around the world–read on to learn more!

Register for “The Right Metrics for Generation Open”

The traditional way to understand and demonstrate your impact–through citation counts–doesn’t meet the needs of today’s researchers. What Generation Open needs is altmetrics.

In this presentation, we’ll cover:

  • what altmetrics are and the types of altmetrics today’s researchers can expect to receive,
  • how you can track and share those metrics to get all the credit you deserve, and
  • real life examples of scientists who used altmetrics to get grants and tenure

Scientists and librarians across all time zones can attend, because we’re offering it throughout the week, at times convenient for you:

Learn more and register here!

Schedule a customizable presentation on Open Science and altmetrics for your university

We’re offering a limited number of customizable, virtual presentations for researchers at institutions around the world on the following topics during Open Access Week 2014 (Oct. 20-26, 2014):

  • The right metrics for Generation Open: a guide to getting credit for practicing Open Science
  • Altmetrics 101: how to make the most of supplementary impact metrics
  • Why Open Research is critical to your career

Learn more about our webinars and schedule one for your department or university here.

What are you doing for Open Access Week?

Will you be attending one of our webinars? Presenting to your department or lab on your own Open Science practices? Organizing a showing of The Internet’s Own Boy with students? Leave your event announcements in the comments below, and over at OpenAccessWeek.org, if you haven’t already!

What Open Science Framework and Impactstory mean to these scientists’ careers

Yesterday, we announced three winners in the Center for Open Science’s random drawing to win a year’s subscription to Impactstory for users that connected their Impactstory profile to their Open Science Framework (OSF) profile: Leonardo Candela (OSF, Impactstory), Rebecca Dore (OSF, Impactstory), and Calvin Lai (OSF, Impactstory). Congrats, all!

We know our users would be interested to hear from other researchers practicing Open Science, especially how and why they use the tools they use. So, we emailed our winners who graciously agreed to share their experiences using the OSF (a platform that supports project management with collaborators and project sharing with the public) and Impactstory (a webapp that helps researchers discover and share the impacts of all their research outputs). Read on!

What’s your research focus?

Leonardo: I’m a computer science researcher. My research interests include Data Infrastructures, Virtual Research Environments, Data Publication, Open Science, Digital Library [Management] Systems and Architectures, Digital Libraries Models, Distributed Information Retrieval, and Grid and Cloud Computing.

Rebecca: I am a PhD student in Developmental Psychology. Broadly, my research focuses on children’s experiences in pretense, fiction and fantasy. How do children understand these experiences? How might these experiences affect children’s behaviors, beliefs and abilities?

Calvin: I’m a doctoral student in Social Psychology studying how to change unconscious or automatic biases. In their most insidious forms, unconscious biases lead to discrepancies between what people value (e.g., egalitarianism) and how people act (e.g., discriminating based on race). My interest is in understanding how to change these unconscious thoughts so that they’re aligned with our conscious values and behavior.

How do you use the Open Science Framework in the course of your research?

Leonardo: Rather than being an end user of the system to support my research tasks, I’m interested in analysing and comparing the facilities offered by such an environment with the concept of Virtual Research Environments.

Rebecca: At this stage, I use the OSF to keep all of the information about my various projects in one place and to easily make that information available to my collaborators–it is much more efficient to stay organized than constantly exchanging and keeping track of emails. I use the wiki feature to keep notes on what decisions were made and when and store files with drafts of materials and writing related to each project. Version control of everything is very convenient.

Calvin: For me, the OSF encompasses all aspects of the research process – from study inception to publication. I use the OSF as a staging ground in the early stages for plotting out potential study designs and analysis plans. I will then register my study shortly before data collection to gain the advantage of pre-registered confirmatory testing. After data collection, I will often refer back to the OSF as a reminder of what I did and as a guide for analyses and manuscript-writing. Finally, after publication, I use the OSF as a repository for public access to my data and study materials.

What’s your favorite Impactstory feature? Why?

Leonardo: I really appreciate the effort Impactstory is putting into collecting metrics on the impact my research products have on the web. I like its integration with ORCID and the recently added “Key profile metrics,” since it gives a nice overview of a researcher’s impact.

Rebecca: I had never heard of ImpactStory before this promotion, and it has been really neat to start testing out. It took me 2 minutes to copy my publication DOIs into the system, and I got really useful information that shows the reach of my work that I hadn’t considered before, for example shares on Twitter and where the reach of each article falls relative to other psychology publications. I’m on the job market this year and can see this being potentially useful as supplementary information on my CV.

Calvin: Citation metrics can only tell us so much about the reach of a particular publication. For me, Impactstory’s alternative metrics have been important for figuring out where else my publications are having impact across the internet. It has been particularly valuable for pointing out connections that my research is making that I wasn’t aware of before.

Thanks to all our users who participated in the drawing by connecting their OSF and Impactstory profiles! Both of our organizations are proud to be working to support the needs of researchers practicing Open Science, and thereby changing science for the better.

To learn more about our open source non-profits, visit the Impactstory and Open Science Framework websites.

What’s our impact? (August 2014)

You may have noticed a change in our blog in recent months: we’ve added a number of editorial, how-to, and opinion posts, in addition to “behind the scenes” Impactstory updates.

Posts on our blog and commentary on Twitter serve two purposes for us. First, they promote our nonprofit goals of education and awareness. Second, they serve as “content marketing,” a great way to raise awareness of Impactstory among a broader audience.

We’ve been tracking the efficacy of this new strategy for a while now, and thought we’d begin to share the numbers with you in the spirit of making Impactstory more transparent. After all, if you’re an Impactstory fan, you’re likely interested in metrics of all stripes.

Here are our numbers for August 2014.

Organic site traffic stats

  • Unique visitors to impactstory.org: 3,429
  • New users: 378
  • Conversion rate: 11.3% (% of visitors who signed up for an Impactstory.org account)

Blog stats

  • Unique visitors: 4,381
  • Pageviews: 6,431
  • Clickthrough rate (% of blog visitors who went on to visit impactstory.org): 1.6%
  • Conversion rate (% of impactstory.org visitors referred by the blog who went on to sign up for an Impactstory.org account): 9.8%
  • Percent of new user signups: 1.8%

Overall: Our blog traffic has been steadily increasing from May onward, from 3,896 to 6,431 pageviews per month. The number of unique visitors to our blog has increased, too, from 2,311 to 4,381 per month. We published four blog posts in August, two of which could be considered “content marketing”: an interview with Impactstory Advisor Megan O’Donnell, and our monthly Open Science and Altmetrics Roundup.

What about clickthrough and conversion rates? On the one hand, it’d be helpful to compare these rates against industry norms; on the other hand, which “industry norms” would those be? Startup norms? Non-profit norms? Academic norms? In the end, I’ve decided it’s best to just use these numbers as a benchmark and forget about comparisons.

Twitter stats

  • New followers: 215
  • Increase in followers over previous month: 5.11%
  • Mentions: 346 (We’re tracking this to answer the question, “How engaged are our followers?”)
  • Tweet reach: 3,543,827 (We’re tracking this–the number of people who potentially saw a tweet mentioning Impactstory or our blog–to understand our brand awareness)
  • Referrals to impactstory.org: 271 users
  • Signups: 32

Overall: Our Twitter follower growth rate actually went down from May, from around 8% new followers to around 5%. I did not (and still have not) cross the 5,000-follower threshold, a milestone I’d intended to hit around August 20th. That said, engagement was up from the previous month by ~23%, a change that reflects conscious effort.

What does it all mean?

Our August numbers were no doubt affected by our subscription announcements and the new Impactstory features. I’m interested to see how these statistics change through September, which has seen an end to the “early adopter” 30 day free trial, and the debut of all the features we deployed during the 5 Meter sprint.

Our blog receives more unique visitors than our website, at this point, so increasing the number of blog-referred signups is a priority.

We could also stand to improve our conversion rates from organic website traffic. Our rates are lower than average when compared to other non-profits, publishing-related organizations, and IT companies.

Looking ahead

Given our findings from this month’s stats, here are our goals for September (already half-over, I know) and October:

  • Website: Jason and Heather will be working in the coming months to improve conversion rates by introducing new features that drive signups and subscriptions.
  • Blog: Increase unique visitors and the conversion rate for new signups–the former to continue building brand awareness by publishing blogposts that resonate with scientists, and the latter for obvious reasons. 🙂 One tactic could be to begin offering at least 1 content marketing post per week–a challenging task.
  • Twitter: Increase our growth rate for Twitter followers, pass the 5,000 follower mark, and continue to engage with our audience in ways that provide value–whether by sharing Open Science and altmetrics news and research, answering a question they have about Impactstory, or connecting them with other scientists and resources.
  • In general: Listen to (and act upon) feedback we get via social media. Continue to create useful blog content that meets the needs of practicing scientists, and to scour the web for the most interesting and relevant Open Science and Altmetrics news and research to share with our audience.

Questions?

Are there statistics you’re curious about, or do you have questions about our new approach to marketing? I’m happy to answer them in the comments below. Cheers!

Updated Dec. 31 2014 to reflect more accurate calculation for conversion rates from blog traffic.

Impactstory Advisor of the Month: Guillaume Lobet (September 2014)

September’s Impactstory Advisor of the Month is (drumroll please!)
Guillaume Lobet!


Guillaume is a post-doc researcher at the Université de Liège in Belgium, in the plant physiology lab of Prof. Claire Perilleux. He’s also a dedicated practitioner of open, web-native science, creating awesome tools ranging from a plant image analysis software finder to an image analysis toolbox that allows the quantitative analysis of root system architecture. He’s even created an open source webapp that uses Impactstory’s open profile data to automatically create CVs in LaTeX, HTML, and PDF formats. (More on that below.)

I had the pleasure of corresponding with Guillaume this week to talk about his research, what he enjoys about practicing web native science, and his approach to being an Impactstory Advisor.

Tell us a bit about your current research.

I am a plant physiologist. My current work focuses on how the growth and development of different plant organs (e.g. the root and the shoot) are coordinated, and how modifications in one organ affect the others. The project is fascinating because, so far, the majority of plant research has focused on one specific organ or process, and little has been done to understand how the different parts communicate.

Why did you initially decide to join Impactstory?

A couple of years ago, I created a website referencing the existing plant image analysis software tools (www.plant-image-analysis.org). I wanted to help users understand how well the tools (or more specifically, the scientific papers describing the tools) have been received by the community. At that time, an article-level Impactstory widget was available, and I chose to use it. It was a great addition to the website!

At the same time, I created an Impactstory profile and I’ve used it ever since. (A quick word about the new profiles: they look fantastic!)

Why did you decide to become an Advisor?

Mainly because the ideas promoted by the Impactstory team are in line with my own. Researchers’ contributions to the scientific community (or even to society in general) are not made only by publishing peer-reviewed papers (even though that is still a very important way to disseminate our findings). The Web 2.0 brought us a large array of means to contribute to the scientific debate, and it would be restrictive not to consider those while evaluating one’s work.

How have you been spreading the word about Impactstory?

I started by talking about it with my direct colleagues. Then, I noticed that science valorisation in general was not well known, so I made a presentation about it and shared it on Figshare. To my great surprise, it became my most viewed item (I guess people liked the Lord of the Rings / Impactstory mash-up :)). In addition, I also created a small widget to convert any Impactstory online profile into a resume. And of course, I proudly wear my Impactstory t-shirt whenever I go to conferences, which always brings questions such as “I heard of that, what is it exactly?”.

You’re a web-native scientist (as evidenced by your active presence on sites like Figshare, Github, and Mendeley). When did you start practicing web-native science? What do you like about it? Are there drawbacks?

It really started a couple of years ago, by the end of my PhD. At that time, I needed to apply for a new position, so I set up a webpage, Mendeley account, and so on. I quickly found it to be a great way to get in touch with other researchers.

What I like the most about web-native science is that boundaries are disappearing! You do not need to meet people in person to build a new project or start a new collaboration. It brings together all the researchers of the same fields who are scattered around the globe, into a small digital community where they can easily interact!

As for the drawbacks, I am still looking for them 🙂

Tell us about your “Impact CV” webapp, which converts anyone’s Impactstory profile data into PDF, Markdown, LaTeX, or HTML format. Why’d you create it and how’d you do it?

A few months ago, I needed to update my resume, and my Impactstory profile already contained all my research outputs. So I thought it would be nice to be able to reuse this information, not only for me, but for everyone who has an Impactstory profile. So instead of copying and pasting my online profile into my resume, I took advantage of the openness of Impactstory to automatically retrieve the data contained in my profile (everything is stored in a JSON file that is readily available from any profile) and re-use it locally. I wrapped it up in a webpage (http://www.guillaumelobet.be/impact) and voilà!
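
If you’d like to try something similar, here’s a rough sketch of the same idea in Python: fetch a profile’s JSON and turn it into a simple Markdown publication list. The profile URL pattern and field names below are assumptions for illustration only, not a documented Impactstory API; inspect the JSON your own profile actually exposes and adjust accordingly.

```python
# Rough sketch, in the spirit of Guillaume's Impact CV tool: reuse a profile's
# JSON to build a plain Markdown resume. URL and field names are assumptions.
import requests

PROFILE_JSON_URL = "https://impactstory.org/YourProfileName.json"  # hypothetical

resp = requests.get(PROFILE_JSON_URL)
resp.raise_for_status()
profile = resp.json()

lines = ["# Research outputs", ""]
for product in profile.get("products", []):   # "products" is an assumed field name
    title = product.get("title", "Untitled")
    year = product.get("year", "n.d.")
    lines.append(f"- {title} ({year})")

print("\n".join(lines))
```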

What’s the best part about your work as a post-doc researcher at the Université de Liège?

Academic freedom is definitely the best part about working in a University. It gives us the latitude to explore unexpected paths. And I work with great people!

Thanks, Guillaume!

As a token of our appreciation for Guillaume’s hard work, we’re sending him an Impactstory t-shirt of his choice from our Zazzle store.

Guillaume is just one part of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!

Your new Impactstory

Today, it’s yours: the way to showcase your research online.

You’re proud of your research.  You want people to read your papers, download your slide decks, and talk about your datasets.  You want to learn when they do, and you want to make it easy for others to learn about it too, so everyone can understand your impact. We know, because as scientists, that’s how we feel, too.

The new Impactstory design is built around researchers. You and your research are at the center: you decide how you want to tell the story of your research impact.

What does that mean?  Here’s a sampling of what’s new in today’s release:


A streamlined front page showcases Selected Publications and Key Metrics that you select and arrange from your full list of publications.  There’s a spot for a bio so people learn about your research passion and approach.

Reading your research has become an easy and natural part of learning about your work: your publications are directly embedded on the site!  Everyone can read as they browse your profile.  We automatically embed all the free online versions we can find — uploading everything else only takes a few clicks.


None of this is any good if your publication list gets stale, so keeping it current is easier than ever: just email a link to publications@impactstory.org whenever you publish something new, and poof: it’ll appear in your profile, just like that.

Want to learn things you didn’t know before?  Your papers now include Twitter Impressions — the number of times your publication has been mentioned in someone’s twitter timeline.  You may be surprised how much exposure your research has had…we’re discovering many articles reaching tens of thousands of potential readers.

We could talk about the dozens of other features in this release. But instead: go check out your new profile. Make it yours.  We’re extending the free trial for all users for two more days — subscribe before your trial expires and it is just $45/year.

As of today, the three of us have taken down our old-fashioned academic websites. Impactstory is our online research home, and we’re glad it’ll be yours too.

 

Sincerely,
Jason, Heather and Stacy

What Jeffrey Beall gets wrong about altmetrics

Not long ago, Jason received an email from an Impactstory user, asking him to respond to the anti-altmetrics claims raised by librarian Jeffrey Beall in a blogpost titled, “Article-Level Metrics: An Ill-Conceived and Meretricious Idea.”

Beall is well-known for his blog, which he uses to expose predatory journals and publishers that abuse Open Access publishing. This has been valuable to the OA community, and we commend Beall’s efforts. But we think his post on altmetrics was not quite so well-grounded.

In the post, Beall claims that altmetrics don’t measure anything of quality. That they don’t measure the impact that matters. That they can be easily gamed.

He’s not alone in making these criticisms; they’re common. But they’re also ill-informed. So, we thought that we’d make our responses public, because if one person is emailing to ask us about them, others must have questions, too.

Citations and the journal impact factor are a better measure of quality than altmetrics

Actually, citations and impact factors don’t measure quality.

Did I just blow your mind?

What citations actually measure

Although early theorists emphasized citation as a dispassionate connector of ideas, more recent research has repeatedly demonstrated that citation actually has more complex motivations, including often as a rhetorical tool or a way to satisfy social obligations (just ask a student who’s failed to cite their advisor). In fact, Simkin and Roychowdhury (2002) estimate that as few as 20% of citers even read the paper they’re citing. That’s before we even start talking about the dramatic disciplinary differences in citation behavior.

When it comes down to it, because we can’t identify citer motivations by looking at a citation count alone (and, to date, efforts to use sentiment analysis to understand citation motivations haven’t been widely adopted), the only bulletproof way to understand the intent behind a citation is to read the citing paper.

It’s true that some studies have shown that citations correlate with other measures of scientific quality like awards, grant funding, and peer evaluation. We’re not saying they’re not useful. But citations do not directly measure quality, which is something that some scientists seem to forget.

What journal impact factors actually measure

We were surprised that Beall holds up the journal impact factor as a superior way to understand the quality of individual papers. The journal impact factor has been repeatedly criticized throughout the years, and one issue above all others renders Beall’s argument moot: the impact factor is a journal-level measure of impact, and therefore irrelevant to the measure of article-level impact.

What altmetrics actually measure

The point of altmetrics isn’t to measure quality. It’s to better understand impact: both the quantity of impact and the diverse types of impact.

And when we supplement traditional measures of impact like citations with newer, altmetrics-based measures like post-publication peer review counts, scholarly bookmarks, and so on, we have a better picture of the full extent of impact. Not the only picture. But a better picture.

Altmetrics advocates aim to make everything a number. Only peer review will accurately get at quality.

This criticism is only half-wrong. We agree that informed, impartial expert consensus remains the gold standard for scientific quality. (Though traditional peer-review is certainly far from bullet-proof when it comes to finding this.)

But we take exception to the charge that we’re only interested in quantifying impact. In fact, we think that the compelling thing about altmetrics services is that they bring together important qualitative data (like post-publication peer reviews, mainstream media coverage, who’s bookmarking what on Mendeley, and so on) that can’t be summed up in a number.

The scholarly literature on altmetrics is growing fast, but it’s still early. And altmetrics reporting services can only improve over time, as we discover more and better data and ways to analyze it. Until then, using an altmetrics reporting service like our own (Impactstory), Altmetric.com or PlumX is the best way to discover the qualitative data at the heart of diverse impacts. (More on that below.)

There’s only one type of important impact: scholarly impact. And that’s already quantified in the impact factor and citations.

The idea that “the true impact of science is measured by its influence on subsequent scholarship” would likely be news to patients’ rights advocates, practitioners, educators, and everyone else that isn’t an academic but still uses research findings. And the assertion that laypeople aren’t able to understand scholarship is not only condescending, it’s wrong: cf. Kim Goodsell, Jack Andraka, and others.

Moreover, who are the people and groups that argue in favor of One Impact Above All Others, measured only through the impact factor and citations? Often, it’s the established class of scholars, most of whom have benefited from being good at attaining a very particular type of impact and who have no interest in changing the system to recognize and reward diverse impacts.


Even if we were to agree that scholarly impact were of paramount importance, let’s be real: the impact factor and citations alone aren’t sufficient to measure and understand scholarly impact in the 21st century.

Why? Because science is moving online. Mendeley and CiteULike bookmarks, Google Scholar citations, ResearchGate and Academia.edu pageviews and downloads, dataset citations, and other measures of scholarly attention have the potential to help us define and better understand new flavors of scholarly attention. Citations and impact factors by themselves just don’t cut the mustard.

I heard you can buy tweets. That proves that altmetrics can be gamed very easily.

There’s no denying that “gaming” happens, and it’s not limited to altmetrics. In fact, journals have recently been banned from Thomson Reuters’ Journal Citation Reports due to impact factor manipulation, and papers have been retracted after a “citation ring” was busted. And researchers have proven just how easy it is to game Google Scholar citations.

Most players in the altmetrics world are pretty vigilant about staying one step ahead of the cheaters. (Though, to be clear, there’s not much evidence that scientists are gaming their altmetrics, since altmetrics aren’t yet central to the review and rewards systems in science.) Some good examples are SSRN’s means for finding and banning fraudulent downloaders, PLOS’s “Case Study in Anti-Gaming Mechanisms for Altmetrics,” and Altmetric.com’s thoughts on the complications of rooting out spammers and gamers. And we’re seeing new technology debut monthly that helps us uncover bots on Twitter and Wikipedia, fake reviews and social bookmarking spam.

Crucially, altmetrics reporting services make it easier than ever to sniff out gamed metrics by exposing the underlying data. Now, you can read all the tweets about a paper in one place, for example, or see who’s bookmarking a dataset on Delicious. And by bringing together that data, we help users decide for themselves whether that paper’s altmetrics have been gamed. (Not dissimilar from Beall’s other blog posts, which bring together information on predatory OA publishers in one place for others to easily access and use!)

Altmetrics advocates just want to bring down The Man

We’re not sure what that means. But we sure are interested in bringing down barriers that keep science from being as efficient, productive, and open as it should be. One of those barriers is the current incentive system for science, which is heavily dependent upon proprietary, opaque metrics such as the journal impact factor.

Our true endgame is to make all metrics–including those pushed by The Man–accurate, auditable, and meaningful. As Heather and Jason explain in their “Power of Altmetrics on a CV” article in the ASIS&T Bulletin:

Accurate data is up-to-date, well-described and has been filtered to remove attempts at deceitful gaming. Auditable data implies completely open and transparent calculation formulas for aggregation, navigable links to original sources and access by anyone without a subscription. Meaningful data needs context and reference. Categorizing online activity into an engagement framework helps readers understand the metrics without becoming overwhelmed. Reference is also crucial. How many tweets is a lot? What percentage of papers are cited in Wikipedia? Representing raw counts as statistically rigorous percentiles, ideally localized to domain or type of product, makes it easy to interpret the data responsibly.

That’s why we incorporated as a non-profit: to make sure that our goal of building an Open altmetrics infrastructure–which would help make altmetrics accurate, auditable, and meaningful–isn’t corrupted by commercial interests.

Do you have questions related to Beall’s–or others’–claims about altmetrics? Leave them in the comments below.