Impactstory Advisor of the Month: Keith Bradnam (July 2014)

Headshot of Keith Bradnam

Meet our Advisor of the Month for July, Keith Bradnam! Keith is an Associate Project Scientist with the Korf Lab at UC Davis and an active science communicator (read his blog, ACGT, and follow him on Twitter at @kbradnam).

Why is Keith our Advisor of the Month? Because he shared his strategies for success as a scientist at a well-attended Impactstory info session he organized at UC Davis earlier this month. Plus, he’s helping us to improve Impactstory every day, submitting bug reports and ideas for new features on our Feedback forum.

We recently emailed Keith to learn more about why he decided to become an Advisor, what made his recent workshop so great, and his thoughts on using blogging to become a more successful scientist.

Why did you initially decide to join Impactstory?

When I first heard about Impactstory, it just seemed like such an incredibly intuitive and useful concept. Publications should not be seen as the only form of scientific ‘output’, and having a simple way to gather together the different aspects of my academic life seemed like such a no-brainer.

In the past, I have worked in positions where I helped develop database resources for other scientists. These types of non-research positions often provide an opportunity for only one formal publication a year (e.g. a paper in the annual Nucleic Acids Research ‘Database’ issue). This is a really poor reflection of the contributions that many bioinformaticians (and web programmers, database administrators, etc.) make to the wider scientific community. In the past we didn’t have tools like GitHub to easily show the world what software we were helping to develop.

Why did you decide to become an Advisor?

Impactstory is a great service and the more people that get to know about it and use it, the better it will become. I want to be part of that process, particularly because I still think that there are many people who are stuck in the mindset that a CV or résumé is the only way to list what you have done in your career.

I’m really hopeful that tools like Impactstory will forever change how people assess the academic achievements of others.

How have you been spreading the word about Impactstory in your first month as an Advisor?

I’ve mainly been passing on useful tweets from the @Impactstory Twitter account and keeping an eye on the Impactstory Feedback Forums where I’ve been adding some suggestions of my own and replying to questions from others. Beyond that, I’ve evangelized about Impactstory to my lab, and I gave a talk on campus to Grad students and Postdocs earlier this month.

How did your workshop go?

Well, perhaps I’m biased 🙂 but I think it was well-received. There was a good mix of Grad students, Postdocs, and some other staff, and I think people were very receptive to hearing about the ways that Impactstory could be beneficial to them. They also asked lots of pertinent questions, which have led to some new feature requests for the Impactstory team to consider. [You can view a video of Keith’s presentation over at his blog.]

You run a great blog about bioinformatics–ACGT. Why do you blog, and would you recommend it to others?

Blogging is such an incredibly easy way to share useful information with your peers. Sometimes that information can be succinct, factual material (these are the steps that I took to install software ‘X’), sometimes it can be opinion or commentary (this is why I think software ‘X’ will change the world), and sometimes it can just be entertainment or fun (how I used software ‘X’ to propose to my wife).

I think we’re currently in a transition period where people no longer see ‘blogging’ as being an overly geeky activity. Instead, I think that many people now appreciate that blogging is just a simple tool for quickly disseminating information.

I particularly recommend blogging to scientists. Having trouble following a scientific protocol and need some help? Blog about it. Think you have made an improvement on an existing protocol? Blog about it. Have some interesting thoughts about a cool paper that you have just read? Blog about it. There are a million and one topics that will never be suitable for a formal peer-reviewed publication, but which would make fantastic ideas for a blog post.

Blogging may be beneficial for your career by increasing your visibility amongst your peers, but more importantly I think it really improves your writing skills and — depending on what you blog about — lets you give something back to the community.

What’s the best part about your current gig as an Associate Project Scientist with the Korf Lab at UC Davis?

I think that most people would agree that if you work on a campus where you get to walk past a herd of cows every day, then that’s pretty hard to beat! However, the best part of my job is that I get to spend time mentoring others in the lab (students, not cows), and I like to think that I’m helping them become better scientists, and better communicators of science in particular.

Thanks, Keith!

As a token of our appreciation for Keith’s hard work, we’re sending him an Impactstory t-shirt of his choice from our Zazzle store.

Keith is just one part of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!

Open Science & Altmetrics Monthly Roundup (June 2014)

Don’t have time to stay on top of the most important Open Science and Altmetrics news? We’ve gathered the very best of the month in this post. Read on!

UK researchers speak out on assessment metrics

There are few issues more polarizing in academia right now than research assessment metrics. A few months back, the Higher Education Funding Council for England (HEFCE) asked researchers to submit their evidence and views on the issue, and to date many well-reasoned responses have been shared.

Some of the highlights include Ernesto Priego’s thoughtful look at the evidence for and against; this forceful critique of the practice, penned by Sabaratnam and Kirby; a call to accept free market forces “into the internal dynamics of academic knowledge production” by Steve Fuller; and this post by Stephen Curry, who shares his thoughts as a member of the review’s steering group.

Also worth a look is Digital Science’s “Evidence for excellence: has the signal overtaken the substance?”, which studies the unintended effects that past UK assessment initiatives have had on researchers’ publishing habits.

Though the HEFCE’s recommendations will mainly affect UK researchers, the steering group’s findings may set a precedent for academics worldwide.

Altmetrics researchers agree: we know how many, now we need to know why

Researchers gathered in Bloomington, Indiana on June 23 to share cutting-edge bibliometrics and altmetrics research at the ACM WebScience Altmetrics14 workshop.

Some of the highlights include a new study that finds that only 6% of articles that appear in Brazilian journals have 1 or more altmetrics (compared with ~20% of articles published in the “global North”); findings that use of Twitter to share scholarly articles grew by more than 90% from 2012 to 2013; a study that found that most sharing of research articles on Twitter occurs in original tweets, not retweets; and a discovery that more biomedical and “layman” terms appear in the titles of research shared on social media than in titles of highly-cited research articles.

Throughout the day, presenters repeatedly emphasized one point: high-quality qualitative research is now needed to understand what motivates individuals to share, bookmark, recommend, and cite research outputs. In other words, we increasingly know how many altmetrics research outputs tend to accumulate and what those metrics’ correlations are–now we need to know why research is shared on the social Web in the first place, and how those motivations influence various flavors of impact.

Librarians promoting altmetrics like never before

This month’s Impactstory blog post, “4 things every librarian should do with altmetrics,” has generated a lot of buzz and some great feedback from the library community. But it’s just one part of a month filled with librarians doin’ altmetrics!

To start with, College & Research Libraries News named altmetrics a research library trend for 2014, and based on the explosion of librarian-created presentations on altmetrics in the last 30 days alone, we’re inclined to agree! Plus, there were librarians repping altmetrics at AAUP’s Annual Meeting and the American Library Association Annual Meeting (here and here), and the Special Libraries Association Annual Meeting featured our co-founder, Heather Piwowar, in two great sessions and Impactstory board member, John Wilbanks, as the keynote speaker.

More Open Science & Altmetrics news

Stay connected

We share altmetrics and Open Science news as it happens on our Twitter, Google+, Facebook, and LinkedIn pages. And if you don’t want to miss next month’s news roundup, remember that you can sign up to get these posts and other Impactstory news delivered straight to your inbox.

4 things every librarian should do with altmetrics

Researchers are starting to use altmetrics to understand and promote their academic contributions. At the same time, administrators and funders are exploring them to evaluate researchers’ impact.

In light of these changes, how can you, as a librarian, stay relevant by supporting researchers’ fast-changing altmetrics needs?

In this post, we’ll give you four ways to stay relevant: staying up-to-date with the latest altmetrics research, experimenting with altmetrics tools, engaging in early altmetrics education and outreach, and defining what altmetrics mean to you as a librarian.

1. Know the literature

Faculty won’t come to you for help navigating the altmetrics landscape if they can tell you don’t know the area very well, will they?

To get familiar with discussions around altmetrics, start with the recent SPARC report on article-level metrics, this excellent overview that appeared in Serials Review (paywall), and the recent ASIS&T Bulletin special issue on altmetrics.

Then, check out this list of “17 Essential Altmetrics Resources” aimed at librarians, this recent article on collection development and altmetrics from Against the Grain, and presentations from Heather and Stacy on why it’s important for librarians to be involved in altmetrics discussions on their campuses.

There’s also a growing body of peer-reviewed research on altmetrics. One important concept from this literature is the idea of “impact flavors”–a way to understand distinctive patterns in the impacts of scholarly products.

For example, an article featured in mainstream media stories, blogged about, and downloaded by the public has a very different flavor of impact than a dataset heavily saved and discussed by scholars, which is in turn different from software that’s highly cited in research papers. Altmetrics can help researchers, funders, and administrators optimize for the mix of flavors that best fits their particular goals.

There have also been a lot of studies on correlations (or lack thereof) between altmetrics and traditional citations. Some have shown that selected altmetrics sources (Mendeley in particular) are significantly correlated with citations (1, 2, 3), while other sources, like Facebook bookmarks, have only slight correlations with citations. These studies show that different types of altmetrics are capturing different types of impact, beyond just scholarly impact.

Other early touchstones include studies exploring the predictive potential of altmetrics, growing adoption of social media tools that inform altmetrics, and insights from article readership patterns.

But these are far from the only studies to be aware of! Stay abreast of new research by reading through the PLOS Altmetrics Collection, joining the Altmetrics Mendeley group, and following the #altmetrics hashtag on Twitter.

2. Know the tools

There are now several tools that allow scholars to collect and share the broad impact of their research portfolios.

In the same way you’d experiment with new features added to Web of Science, you can play around with altmetrics tools and add them to your bibliographic instruction repertoire (more on that in the following section). Familiarity will enable you to do easy demonstrations, discuss strengths and weaknesses, contribute to product development, and serve as a resource for campus scholars and administration.

Here are some of the most popular altmetrics tools:

Impactstory


If you’re reading this post, chances are that you’re already familiar with Impactstory, a nonprofit Web application supported by the Alfred P. Sloan Foundation and NSF.

If you’re a newcomer, here’s the scoop: scholars create a free Impactstory profile and then upload their articles, datasets, software, and other products using Google Scholar, ORCID, or lists of permanent identifiers like DOIs, PubMed IDs, and so on. Impactstory then gathers and reports altmetrics and traditional citations for each product. As shown above, metrics are displayed as percentiles relative to similar products. Profile data can be exported for further analysis, and users can receive alerts about new impacts.
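
If you’re curious what “percentiles relative to similar products” means in practice, here’s a minimal sketch of the general idea in Python. It is not Impactstory’s actual normalization code–the reference numbers and the `percentile_rank` helper are made up for illustration–but it shows how a raw count (say, tweets) can be re-expressed as “better than X% of comparable products”:

```python
from bisect import bisect_left

def percentile_rank(value, reference_sample):
    """Rough percentile of `value` against a reference sample of similar products.

    Illustration only: Impactstory's real normalization (by product type,
    publication year, and so on) is more involved than this.
    """
    ordered = sorted(reference_sample)
    below = bisect_left(ordered, value)  # count of reference values strictly below `value`
    return 100.0 * below / len(ordered)

# Hypothetical tweet counts for a sample of comparable articles.
reference_tweets = [0, 0, 0, 1, 1, 2, 2, 3, 5, 8, 13, 40]
print(percentile_rank(5, reference_tweets))  # ~66.7 -> "tweeted more than ~67% of similar articles"
```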

Impactstory is built on open-source code, offers open data, and is free to use. Our robust community of users helps us think up new features and prioritize development via our Feedback forum; once you’re familiar with our site, we encourage you to sign up and start contributing, too!

PlumX

PlumX is another web application that displays metrics for a wide range of scholarly outputs. The metrics can be viewed and analyzed at any user-defined level, including at the researcher, department, institution, journal, grant, and research topic levels. PlumX reports some metrics not offered by other altmetrics services, like WorldCat holdings and downloads and pageviews from some publishers, institutional repositories, and EBSCO databases. PlumX is developed and marketed by Plum Analytics, an EBSCO company.

The service is available via a subscription. Individuals who are curious can experiment with the free demo version.

Altmetric

The third tool that librarians should know about is Altmetric.com. Originally developed to provide altmetrics for publishers, the tool primarily tracks journal articles and ArXiv.org preprints. In recent years, the service has expanded to include a subscription-based institutional edition, aimed at university administrators.

Altmetric.com offers unique features, including the Altmetric score (a single-number summary of the attention an article has received online) and the Altmetric bookmarklet (a browser widget that allows you to look up altmetrics for any journal article or ArXiv.org preprint with a unique identifier). Sources tracked for mentions of articles include social and traditional media outlets from around the world, post-publication peer-review sites, reference managers like Mendeley, and public policy documents.
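
If you’d rather script these lookups than use the bookmarklet, Altmetric also exposes a free, rate-limited public API. Here’s a minimal sketch in Python that fetches the attention data for a single DOI; the endpoint and field names are as documented at the time of writing, so check Altmetric’s API documentation before relying on them:

```python
import requests  # third-party library: pip install requests

doi = "10.1038/news.2011.490"  # example DOI used in Altmetric's own API documentation

# Free, unauthenticated "details by DOI" endpoint (rate-limited).
response = requests.get("https://api.altmetric.com/v1/doi/" + doi)

if response.status_code == 404:
    print("Altmetric has no recorded attention for this DOI yet.")
else:
    response.raise_for_status()
    data = response.json()
    # Field names may change; .get() avoids crashing if one is missing.
    print("Altmetric score:", data.get("score"))
    print("Tweeters:", data.get("cited_by_tweeters_count"))
    print("Details page:", data.get("details_url"))
```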

Librarians can get free access to the Altmetric Explorer and free services for institutional repositories. You can also request trial access to Altmetric for Institutions.

3. Integrate altmetrics into library outreach and education

Librarians are often asked to describe Open Access publishing choices to both faculty and students and teach how to gather evidence of impact for hiring, promotion, and tenure. These opportunities–whether one on one or in group settings like faculty meetings–can allow librarians to introduce altmetrics.

Discussing altmetrics in the context of Open Access publishing helps “sell” the benefits of OA: metrics like the download counts that appear in PLOS journals and institutional repositories make those benefits concrete. They can also demonstrate that “impact” is tied more closely to an individual’s scholarship than to a journal’s impact factor.

Similarly, researchers often use an author’s h-index for hiring, tenure, and promotion, conflating the h-index with the quality of an individual’s work. Librarians are often asked to teach and provide assistance in calculating an h-index within various databases (Web of Science, Scopus, etc.). Integrating altmetrics into these instruction sessions is akin to providing researchers with additional primary resource choices on a research project. Librarians need to make researchers aware of the many tools they can use to evaluate the impact of scholarship, and of the relevant research–including the benefits of and drawbacks to different altmetrics.

So, what does altmetrics outreach look like on the ground? To start, check out these great presentations that librarians around the world have given on the benefits of using altmetrics (and particular altmetrics tools) in research and promotion.

Another great way to stay relevant on this subject is to find readings on ways grad students and faculty can use altmetrics in their careers–like this one from our blog on the benefits of including altmetrics on your CV–and recommend them.

4. Discover the benefits that altmetrics offer librarians

There are reasons to learn about altmetrics beyond serving faculty and students. A major one is that many librarians are scholars themselves, and can use altmetrics to better understand the diverse impact of their articles, presentations, and white papers. Consider putting altmetrics on your own CV, and advocating the use of altmetrics among library faculty who are assembling tenure and promotion packages.

Librarians also produce and support terabytes’ worth of scholarly content that’s intended for others’ use, usually in the form of digital special collections and institutional repository holdings. Altmetrics can help librarians understand the impacts of these non-traditional scholarly outputs, and provide hard evidence of their use beyond ‘hits’ and downloads–evidence that’s especially useful when making arguments for increased budgetary and administrative support.

It’s important that librarians explore the unique ways they can apply altmetrics to their own research and jobs, especially in light of recent initiatives to create recommended practices for the collection and use of altmetrics. What is useful to a computational biologist may not be useful for a librarian (and vice versa). Get to know the research and tools and figure out ways to use them to your own ends.

There’s a lot happening right now in the altmetrics space, and it can sometimes be overwhelming for librarians to keep up with and understand. By following the steps outlined above, you’ll be well positioned to inform and support researchers, administrators, and library decision makers in their use. And in doing so, you’ll be indispensable in this new era of web-native research.

Are you a librarian who’s using altmetrics? Share your experiences in the comments below!

This post has been adapted from the 2013 C&RL News article, “Riding the crest of the altmetrics wave: How librarians can help prepare faculty for the next generation of research impact metrics” by Lapinski, Piwowar, and Priem.

Ten reasons you should put altmetrics on your CV right now

If you don’t include altmetrics on your CV, you’re missing out in a big way.

There are many benefits to scholars and scholarship when altmetrics are embedded in a CV.

Altmetrics can:

  1. provide additional information;
  2. de-emphasize inappropriate metrics;
  3. uncover the impact of just-published work;
  4. legitimize all types of scholarly products;
  5. recognize diverse impact flavors;
  6. reward effective efforts to facilitate reuse;
  7. encourage a focus on public engagement;
  8. facilitate qualitative exploration;
  9. empower publication choice; and
  10. spur innovation in research evaluation.

In this post, we’ll detail why these benefits are important to your career, and also recommend the ways you should–and shouldn’t–include altmetrics in your CV.

1. Altmetrics provide additional information

The most obvious benefit of including altmetrics on a CV is that you’re providing more information than your CV’s readers already have.  Readers can still assess the CV items just as they’ve always done: based on title, journal and author list, and maybe–if they’re motivated–by reading or reviewing the research product itself. Altmetrics have the added benefit of allowing readers to dig into post-publication impact of your work.

2. Altmetrics de-emphasize inappropriate metrics

It’s generally regarded as poor form to evaluate an article based on a journal title or impact factor. Why? Because journal impact factors vary across fields, and an article often receives more or less attention than its journal container suggests.

But what else are readers of a CV to do? Most of us don’t have enough domain expertise to dig into each item and assess its merits based on a careful reading, even if we did have time. We need help, but traditional CVs don’t provide enough information to assess the work on anything but journal title.

Providing article-level citations and altmetrics in a CV gives readers more information, thereby de-emphasizing evaluation based on journal rank.

3. Altmetrics uncover the impact of just-published work

Why not suggest that we include citation counts in CVs, and leave it at that? Why go so far as altmetrics? The reason is that altmetrics have benefits that complement the weaknesses of a citation-based solution.

Timeliness is the most obvious benefit of altmetrics. Citations take years to accrue, which can be a problem for graduate students who are applying for jobs soon after publishing their first papers and for promotion candidates whose most profound work is published only shortly before review.

Multiple research studies have found that counts of downloads, bookmarks and tweets correlate with citations, yet accrue much more quickly, often in weeks or months rather than years. Using timely metrics allows researchers to showcase the impact of their most recent work.

4. Altmetrics legitimize all types of scholarly products

How can readers of a CV know if your included dataset, software project, or technical report is any good?

You can’t judge its quality and impact based on the reputation of the journal that published it, since datasets and software aren’t published in journals. And even if they were, we wouldn’t want to promote the poor practice of judging the impact of an item by the impact of its container.

How, then, can alternative scholarly products be more than just space-filler on a CV?

The answer is product-level metrics. Like article-level metrics do for journal articles, product-level metrics provide the needed evidence to convince evaluators that a dataset or software package or white paper has made a difference. These types of products often make impacts in ways that aren’t captured by standard attribution mechanisms like citations. Altmetrics are key to communicating the full picture of how a product has influenced a field.

5. Altmetrics recognize diverse impact flavors

The impact of a research paper has a flavor. There are scholarly flavors (a great methods section bookmarked for later reference or controversial claims that change a field), public flavors (“sexy” research that captures the imagination or data from a paper that’s used in the classroom), and flavors that fall into the area in between (research that informs public policy or a paper that’s widely used in clinical practice).

We don’t yet know how many flavors of impact there are, but it would be a safe bet that scholarship and society need them all. The goal isn’t to compare flavors: one flavor isn’t objectively better than another. They each have to be appreciated on their own merits for the needs they meet.

To appreciate the impact flavor of items on a CV, we need to be able to tell the flavors apart. (Citations alone can’t fully inform what kind of difference a research paper has made on the world. They are important, but not enough.) This is where altmetrics come in. By analyzing patterns in what people are reading, bookmarking, sharing, discussing and citing online we can start to figure out what kind – what flavor – of impact a research output is making.

More research is needed to understand the flavor palette, how to classify impact flavor and what it means. In the meantime, exposing raw information about downloads, shares, bookmarks and the like starts to give a peek into impact flavor beyond just citations.

6. Altmetrics reward efforts to facilitate reuse

Reusing research – for replication, follow-up studies and entirely new purposes – reduces waste and spurs innovation. But it does take a bit of work to make your research reusable, and that work should be recognized using altmetrics.

There are a number of ways authors can make their research easier to reuse. They can make article text available for free with broad reuse rights. They can choose to publish in places with liberal text-mining policies that invest in disseminating machine-friendly versions of articles and figures.

Authors can write detailed descriptions of their methods, materials, datasets and software and make them openly available for reuse. They can even go further, experimenting with executable papers, versioned papers, open peer review, semantic markup and so on.

When these additional steps result in increased reuse, it will likely be reflected in downloads, bookmarks, discussions and possibly citations. Including altmetrics in CVs will reward investigators who have invested their time to make their research reusable, and will encourage others to do so in the future.

7. Altmetrics can encourage a focus on public engagement

The research community, as well as society as a whole, benefits when research results are discussed outside the Ivory Tower. Engaging the public is essential for future funding, recruitment and accountability.

Today, however, researchers have little incentive to engage in outreach or make their research accessible to the public. By highlighting evidence of public engagement like tweets, blog posts and mainstream media coverage, altmetrics on a CV can reward researchers who choose to invest in public engagement activities.

8. Altmetrics facilitate qualitative exploration

Including altmetrics in a CV isn’t all about the numbers! Just as we hope many people who skim our CVs will stop to read our papers and explore our software packages, so too we can hope that interested parties will click through to explore the details of altmetrics engagement for themselves.

Who is discussing an article? What are they saying? Who has bookmarked a dataset? What are they using it for? As we discuss at the end of this post, including provenance information is crucial for trustworthy altmetrics. It also provides great information that helps CV readers move beyond the numbers and jump into qualitative exploration of impact.

9. Altmetrics empower publication choice

Publishing in a new or innovative journal can be risky. Many authors are hesitant to publish their best work somewhere new or with a relatively low impact factor. Altmetrics can remedy this by highlighting work based on its post-publication impact, rather than the title of the journal it was published in. Authors will be empowered to choose publication venues they feel are most appropriate, leveling the playing field for what might otherwise be considered risky choices.

Successful publishing innovators will also benefit. New journals won’t have to wait two years to get an impact factor before they can compete. Publishing venues that increase access and reuse will be particularly attractive. This change will spur innovation and support the many publishing options that have recently debuted, such as eLife, PeerJ, F1000 Research and others.

10. Altmetrics spur innovation in research evaluation

Finally, including altmetrics on CVs will engage researchers directly in research evaluation. Researchers are evaluated all the time, but often behind closed doors, using data and tools they don’t have access to. Encouraging researchers to tell their own impact stories on their CVs, using broad sources of data, will help spur a much-needed conversation about how research evaluation is done and should be done in the future.

OK, so how can you do it right?

There can be risks to including altmetrics data on a CV, particularly if the data is presented or interpreted without due care or common sense.

Altmetrics data should be presented in a way that is accurate, auditable and meaningful:

  • Accurate data is up-to-date, well-described, and has been filtered to remove attempts at deceitful gaming.
  • Auditable data implies completely open and transparent calculation formulas for aggregation, navigable links to original sources and access by anyone without a subscription.
  • Meaningful data needs context and reference. Categorizing online activity into an engagement framework helps readers understand the metrics without becoming overwhelmed. Reference is also crucial. How many tweets is a lot? What percentage of papers are cited in Wikipedia? Representing raw counts as statistically rigorous percentiles, localized to domain or type of product, makes it easy to interpret the data responsibly.

Assuming these presentation requirements are met, how should the data be interpreted? We strongly recommend that altmetrics be considered not as a replacement for careful expert evaluation but as a supplement. Because they are still in their infancy, we should view altmetrics as a way to ground subjective assessment in real data; a way to start conversations, not end them.

Given this approach, at least three varieties of interpretation are appropriate: signaling, highlighting and discovery. A CV with altmetrics clearly signals that a scholar is abreast of innovations in scholarly communication and serious about communicating the impact of scholarship in meaningful ways. Altmetrics can also be used to highlight research products that might otherwise go unnoticed: a highly downloaded dataset or a track record of F1000-reviewed papers suggests work worthy of a second look. Finally, as we described above, auditable altmetrics data can be used by evaluators as a jumping off point for discovery about who is interested in the research, what they are doing with it, and how they are using it.

How to Get Started

How can you add altmetrics to your own CV or, if you are a librarian, empower scholars to add altmetrics to theirs?

Start by experimenting with altmetrics for yourself. Play with the tools, explore and suggest improvements. Librarians can also spread the word on their campuses and beyond through writing, teaching and outreach. Finally, if you’re in a position to hire, promote, or review grant applications, explicitly welcome diverse evidence of impact when you solicit CVs.

What are your thoughts on using altmetrics on a CV? Would you welcome them as a reviewer, or choose to ignore them? Tell us in the comments section below.

This post has been adapted from “The Power of Altmetrics on a CV,” which appeared in the April/May 2013 issue of ASIS&T Bulletin.

Impactstory Advisor of the Month: Jon Tennant (June 2014)

Jon Tennant (blog, Twitter), a PhD candidate studying tetrapod biodiversity and extinction at Imperial College London, was one of the first scientists to join our recently launched Advisor program.


Within minutes of receiving his acceptance into the program, Jon was pounding the virtual pavement to let others know about Impactstory and the benefits it brings to scientists. For this reason–and the fact that Jon has done some cool stuff in addition to his research, like write a children’s book!–Jon’s our first Impactstory Advisor of the Month.

We chatted with Jon to learn more about how he uses Impactstory, what it’s like being an Advisor, and what he’s doing in other areas of his professional life.

Why did you initially decide to create an Impactstory profile?

A couple of years ago, I immersed myself into social media and the whole concept of ‘Web 2.0’. It was clear that the internet was capable of changing many aspects of the way in which we practice, communicate, and assess scientific research. There were so many tools though, and so much diversity, it was all a bit daunting, especially as someone so junior in their career. Although I guess that’s one of the advantages of being at this stage – I wasn’t tied down to any particular way of ‘doing science’ yet, and free to experiment.

Having followed the discussions on alternative and article-level metrics, when ImpactStory was released it seemed like a tool that could really make a difference for myself and the broader research community. At the time, it made no sense to me how the outputs of research were assessed – the name or the impact factor of a journal was given far too much meaning, and did nothing to really encapsulate the diversity of ways in which quality or impact, or putative pathways to impact, could be measured. ImpactStory seemed to offer a decent alternative, and hey look – it does! Actually, it’s not an alternative, but a complementary tool for a range of methods in assessing how research is used.

Why did you decide to become an Advisor?

Pretty much for the reasons above! One thing I’m learning as a young scientist is that it’s easy to be part of an echo chamber on social media, advocating altmetrics and all the jazzy new aspects of research, but many scientists aren’t online. Getting those people involved in conversations, and alerting them to cool new tools is made a lot easier as an Advisor.

I reckon this type of community engagement is pretty important, especially in what appears to be such a crucial transitional phase for researchers, including things like open access and data, and the way in which research is assessed (e.g., through the REF here in the UK). ImpactStory obviously has a role in making this much easier for academics.

How have you been spreading the word about Impactstory in your first month as an Advisor?

Mostly sharing stickers! They actually work really well in getting people’s attention. They’re doubly useful when people ask things like “What’s an h-index?”, so you can actually use them as a basis for further discussion. But yeah, I don’t really go out of my way to preach to people about altmetrics and ImpactStory – academics really don’t like being told what they should be doing and things, especially at my university. I prefer to kind of hang back, wait for discussions, and inject that things like altmetrics exist, and could be really useful when combined with things like a social media presence, or an ORCID, and that they are one of an integrated set of tools that can be really useful for assessing how your research is being used, as well as a kind of personal tracking device. I’d love to hold an ImpactStory/altmetrics Q and A or workshop at some point in the future.

You just wrote a children’s book about dinosaurs–tell us about it!

Let it be known that you brought this up, not me 😉

So, pretty much just by having a social media presence (mostly through blogging), I was asked to write a kids’ book on dinosaurs! Of course I said yes, and along with a talented artist, we created a book with pop-out dinosaurs that you can reconstruct into your very own little models! You can pre-order it here.* I think it’s out in October in the UK and USA. Is there an ImpactStory bit for that…? [ed: Not yet! Perhaps add it as a feature request on our Feedback forum? :)]

* (I don’t get royalties, so it’s not as bad promoting it…)

What’s the best part about your current gig as a PhD student at Imperial College London?

The freedom. I have an excellent supervisor who is happy to let me blog, tweet, attend science communication conferences and a whole range of activities that are complementary to my PhD, as long as the research gets done. So there’s a real diversity of things to do, and being in London there’s always something science-related going on, and there’s a great community vibe too, with people who work within the broader scope of science always coming together and interacting. Of course, the research itself is amazing – I work with a completely open database called the Palaeobiology Database/Fossilworks, where even the methods are open so anyone can play with science if they wish!

Thanks, Jon!

Jon is just one of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!

The ultimate guide for staying up-to-date on your data, software, white papers, slide decks and conference posters’ impact

Getting impact alerts for your papers was pretty simple to set up, but what about tracking real-time citations, downloads, and social media activity for your other research outputs?

There are so many types of outputs to track–datasets, software, slide decks, and more. Plus, there seem to be dozens of websites for hosting them! How can you easily keep track of your diverse impacts, as they happen?

Don’t worry–it’s literally our job to stay on top of this stuff! Below, we’ve compiled the very best services that send impact alerts for your research data, software, slide decks, conference posters, technical reports, and white papers.

Research data

Specific data repositories gather and display metrics on use. Here, we go into detail on the metrics offered by GitHub, Figshare, and Dryad, and then talk about how you can track citations via the Data Citation Index.

GitHub


If you use the collaborative coding website GitHub to store and work with research data, you can enable email alerts for certain types of activities. That way, you’re notified any time someone comments on your data or wants to modify it using a “pull request.”

First, you’ll need to “watch” whatever repositories you want to get notifications for. To do that, visit the repository page for the dataset you want to track, and then click the “Watch” button in the upper right-hand corner and select “Watching” from the drop-down list, so you’ll get a notification when changes are made.

Then, you need to enable notification emails. To do that, log into GitHub and click the “Account Settings” icon in the upper right-hand corner. Then, go to “Notification center” on the left-hand navigation bar. Under “Watching,” make sure the “Email” box is ticked.

Other GitHub metrics are also useful to researchers: “stars” tell you if others have bookmarked your repository, and “forks”–a precursor to a pull request–indicate if others have adapted some of your code for their own uses. Impactstory notification emails (covered in more detail below) include both of these metrics.
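
If you’d rather pull these numbers yourself (for a lab dashboard, say) than wait for an alert email, GitHub’s public REST API reports stars and forks for any public repository. Here’s a minimal sketch in Python; the repository name is a placeholder, and unauthenticated requests are subject to a low hourly rate limit:

```python
import requests  # third-party library: pip install requests

# Placeholder: substitute your own "owner/repository-name".
repo = "your-username/your-dataset-repo"

response = requests.get("https://api.github.com/repos/" + repo)
response.raise_for_status()
info = response.json()

# Standard fields in GitHub's repository payload.
print("Stars (bookmarks):", info["stargazers_count"])
print("Forks (adaptations):", info["forks_count"])
print("Watchers (notification subscribers):", info.get("subscribers_count"))
```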

GitHub, Dryad and Figshare metrics via Impactstory


Dryad data repository and Figshare both display download information on their web sites, but they don’t send notification emails when new downloads happen. And GitHub tracks stars and forks, but doesn’t include them in their alert emails. Luckily, Impactstory alerts notify you when your data stored on these sites receives the following types of new metrics:

  • Dryad: pageviews and downloads
  • Figshare: pageviews, downloads, and shares
  • GitHub: stars (bookmarks) and forks (adaptations)

To set up alerts, create an Impactstory profile and connect your profile to ORCID, Figshare, and GitHub using the “Import from accounts” button at the top of your profile. (If you already have an Impactstory profile, this button will appear as a blue “Connect more accounts” button instead.) This will allow you to auto-import many of your datasets. If any of your datasets are missing, you can add them one by one by clicking the “Import individual products” icon and providing links and DOIs. Once your profile is set up, you’ll start to receive a notification email once every 1-2 weeks.

Data Citation Index

If you’ve deposited your data into a repository that assigns a DOI, the Data Citation Index (DCI) is often the best way to learn if your dataset has been cited in the literature.

To create an alert, you’ll need a subscription to the service, so check with your institution to see if you have access. If you do, you can set up an alert by first creating a personal registration with the Data Citation Index; click the “Sign In” button at the top right of the screen, then select “Register”. (If you’re already registered with Web of Knowledge to get citation alerts for your articles, there’s no need to set up a separate registration.)

Then, set your preferred database to the Data Citation Index by clicking the orange arrow next to “All Databases” to the right of “Search” in the top-left corner. You’ll get a drop-down list of databases; select “Data Citation Index.”

Now you’re ready to create an alert. On the Basic Search screen, search for your dataset by its title. Click on the appropriate title to get to the dataset’s item record. In the upper right hand corner of the record, you’ll find the Citation Network box. Click “Create citation alert.” Let the Data Citation Index know your preferred email address, then save your alert.

Software

The same GitHub metrics you can track for data can be used to track software impact, too. To receive alerts about comments on your code and pull requests, follow the notification sign-up instructions outlined under Research Data > GitHub, above. To receive alerts when your software gets stars or forks, sign up for Impactstory alerts according to the instructions under Research Data > GitHub, Dryad, and Figshare.

Impactstory and others are working on ways to track software impact better–stay tuned!

Technical reports, working papers, conference slides & posters

Slideshare sends alerts for metrics your slide decks and posters receive. Impactstory includes some of these metrics from Slideshare in our alert emails.  Impactstory alerts also include metrics for technical reports, working papers, conference slides, and posters hosted on Figshare.

Slideshare


Though Slideshare is best known for allowing users to view and share slide decks, some researchers also use it to share conference posters. The platform sends users detailed weekly alert emails about new metrics their slide decks and posters have received, including the number of total views, downloads, comments, favorites, tweets, and Facebook likes.

To receive notification emails, go to Slideshare.net and click the profile icon in the upper right-hand corner of the page. Then, click “Email” in the left-hand navigation bar, and check the “With the statistics of my content” box to start receiving your weekly notification emails.

Figshare and Slideshare metrics via Impactstory

You can use Impactstory to receive notifications for downloads, shares, and views for anything you’ve uploaded to Figshare, and for the downloads, comments, favorites, and views for slide decks and posters uploaded to Slideshare.

First, create an Impactstory profile and connect your profile to Figshare and Slideshare using the “Import from accounts” button at the top of your profile. (If you already have an Impactstory profile, this button will appear as a “Connect more accounts” button instead.) For both services, click the appropriate button, then provide your profile URL when prompted. Your content will then auto-import.

If any Figshare or Slideshare uploads are missing–which might be the case if your collaborators have uploaded content on your behalf–you can add them one by one by clicking the “Import stuff” icon at the upper right-hand corner of your profile, clicking the “Import individual products” link, and then providing the Figshare DOIs and Slideshare URLs. Once your profile is set up, you’ll start to receive a notification email once every 1-2 weeks.

Videos

Vimeo and YouTube both provide a solid suite of statistics for videos hosted on their sites, and you can use those metrics to track the impact of your video research outputs. To get alerts for these metrics, though, you’ll need to sign up for Impactstory alerts.

Vimeo and YouTube metrics via Impactstory

Vimeo tracks likes, comments, and plays for videos hosted on their platform; YouTube reports the same, plus dislikes and favorites. To get metrics notifications for your videos hosted on either of these sites, you’ll need to add links to your videos to your Impactstory profile.

Once you’ve signed up for an Impactstory profile, click the “Import stuff” icon at the upper right-hand corner of your profile, then click the “Import individual products” link. There, add URLs for each of the videos and click “Import”. Once they’re imported to your profile, you’ll start to receive notifications for new video metrics once every 1-2 weeks.

Are we missing anything? We’ve managed to cover the most popular platforms in this post, but we’d love to get your tips on niche data repositories, video platforms, and coding sites that keep you up to date on your impact by sending alerts. Leave them in the comments below!

Bookmark this guide. This post–and our other Ultimate Guide for articles–will be updated over time, as services change.

Open Science & Altmetrics Monthly Roundup (May 2014)

Don’t have time to stay on top of the most important Open Science and Altmetrics news? We’ve gathered the very best of the month in this post. Read on!

GitHub & co. continue working to incentivize open science software

This month, collaborative coding site GitHub updated the public on their work with Figshare, Zenodo, and Mozilla Science to create citable code for academic software. Now, you can make any GitHub repository more citable–and accessible over time–by minting a DOI for it.

Researchers at the SciForge project responded to the announcement with a list of “10 non-trivial things GitHub & friends can do for science.” In their post, they pointed out that minting DOIs for software code is just the tip of the iceberg. Other challenges include reconciling GitHub’s commercial interests with what’s best for the scientific community, maintaining metadata quality for metadata submitted to DOI registries via Figshare and Zenodo, and optimizing how DOIs are issued for software that has multiple versions.

Of course, not everyone uses GitHub to manage their research software to begin with. If you’re a GitHub beginner, check out Carly Strasser’s “GitHub: a primer for researchers” and the GitHub guide to getting started.

Originator of Open Notebook Science, Jean-Claude Bradley, Dies

Chemist and Open Science advocate Jean-Claude Bradley passed away this month. Bradley is most famous for coining the term Open Notebook Science, which he used to describe his practice of “making all your research freely available to the public, and in real time”. His lab did its work this way for years. The Open Science community has lost a giant. Jean-Claude will be greatly missed.

How many scholarly documents are on the Web?

According to research published this month in PLOS ONE, “the [lower bound] number of scholarly documents, published in English, available on the web is roughly 114 million.”

Why is this important? Well, with the large number of scholarly documents on the web, we can text- and data-mine at scale–so long as these documents are all Open Access. But as @openscience pointed out on Twitter, 3 in 4 scholarly documents on the Web aren’t Open Access–which brings us to our next news item.

Are most researchers Open Access poseurs?

A recent publisher survey of Canadian authors found that while 83% agreed that Open Access to scholarship is important, less than 10% of authors considered OA when deciding where to publish. And a recently tweeted JASIST article from 2013 shows that only around 36% of European authors are taking advantage of publishers’ permissions to post OA copies of otherwise paywalled scholarship.

Why the disconnect between beliefs and practice? It’s not clear from these sources, but we hope that the numbers continue to increase over time, so we end up in a fully Open Access future.

Other recent altmetrics news

  • PeerJ makes peer-reviews more citable: the publisher now issues DOIs for open peer-reviews of its articles, making it possible to cite peer reviews using a permanent identifier. In doing so, peer-review contributions will remain accessible over time, even as URLs change, and reviewers will now be able to more easily track citations to their reviews (thereby incentivizing open peer-review).

  • Altmetrics-themed workshop at SSP 2014 Meeting: some of the area’s brightest minds–including Euan Adie (Altmetric.com) and William Gunn (Mendeley.com)–participated yesterday in the “21st Century Research Assessment” panel at this year’s Society for Scholarly Publishing annual meeting. As you might expect, the event was highly tweeted: check out the #sspboston hashtag on Twitter to witness the debate.

  • Australian and New Zealander librarians sought for altmetrics survey: a team of researchers seeks participants for a survey on support for altmetrics at Australian and New Zealand academic libraries. Respond to the survey on SurveyMonkey before it closes on June 7, 2014.

  • Impactstory launches notification emails, Advisors program: Now, you no longer have to visit impactstory.org to find out when your research has received new citations, downloads, or tweets. Instead, we’ll send you an email alert. We’re really excited about this new feature and also about another big launch that happened this month: our Advisors program!

    Impactstory users have been asking us for months how they can help spread the word. So, in addition to launching a Spread the Word resources page, we’ve started an Advisors program, so motivated advocates can better host Impactstory workshops, help us understand their needs, and advocate for altmetrics at their institution.  To learn more–and apply!–visit our website.

Upcoming events you can’t miss

Two great events are happening in June: the Altmetrics14 workshop in Bloomington, Indiana and the Special Library Association 2014 Annual Meeting in Vancouver, British Columbia. Heather will appear on an altmetrics panel and at the closing session of SLA ‘14, and Stacy will be in attendance at Altmetrics14. We hope to see you at both events! But if you can’t make ‘em, follow along on Twitter at #sla2014 and #altmetrics14.

Stay connected

We share altmetrics and Open Science news as it happens on our Twitter, Google+, Facebook, and LinkedIn pages. And if you don’t want to miss next month’s news roundup, remember that you can sign up to get these posts and other Impactstory news delivered straight to your inbox.

The ultimate guide to staying up-to-date on your articles’ impact

You published a paper–congrats!  Has anyone read it?  Cited it?  Talked about it on Twitter?  How can you find out–as it happens?

Automated alerts!  Email updates that matter come right to you.

We’ve compiled a two-part primer on the services that deliver essential research impact metrics straight to your inbox, so you can stay up to date without having to do a lot of work.

In this post, we’ll share tips for how to automagically track citations, altmetrics and downloads for your publications; in our next post, we’ll share strategies for tracking similar metrics for your data, code, slides, and social media outreach.

Citations

Let’s start with citations: the “coin of the realm” to track scholarly impact. You can get citation alerts in two main ways: from Google Scholar or from traditional citation indices.

Google Scholar Citations alerts

Google Scholar citations track any citations to your work that occur on the scholarly web. These citations can appear in any type of scholarly document (white papers, slide decks, and of course journal articles are all fair game) and in documents of any language. Naturally, this means that your citation count on Google Scholar may be larger than on other citation services.

To get Google Scholar alerts, first sign up for a Google Scholar Citations account and add all the documents you want to track citations for. Then, visit your profile page and click the blue “Follow” button at the top of your profile. You’ll see a drop-down like this:

Screenshot of a Google Scholar profile, showing the blue “Follow” button and its email alerts drop-down

Enter your preferred email address in the box that appears, then click “Create alert.” You’ll now get an alert anytime you’ve received a citation.

Citation alerts via Scopus & Web of Knowledge

Traditional citation indices like Scopus and Web of Knowledge are another good way to get citation alerts delivered to your inbox. These services are more selective in scope, so you’ll be notified only when your work is cited by vetted, peer-reviewed publications. However, they only track citations for select journal articles and book chapters–a far cry from the diverse citations that are available from Google Scholar. Another drawback: you have to have subscription access to set alerts.

Web of Knowledge

Web of Knowledge offers article-level citation alerts. To create an alert, you first have to register with Web of Knowledge by clicking the “Sign In” button at the top right of the screen, then selecting “Register”.


Then, set your preferred database to the Web of Science Core Collection (alerts cannot be set up across all databases at once). To do that, click the orange arrow next to “All Databases” to the right of “Search” in the top-left corner. You’ll get a drop-down list of databases, from which you should select “Web of Science Core Collection.”

Now you’re ready to create an alert. On the Basic Search screen, search for your article by its title. Click on the appropriate title to get to the article page. In the upper right hand corner of the record, you’ll find the Citation Network box. Click “Create citation alert.” Let Web of Knowledge know your preferred email address, then save your alert.

Scopus

In Scopus, you can set up alerts for both articles and authors. To create an alert for an article, search for it and then click on the title in your search results. Once you’re on the Article Abstract screen, you will see a list of papers that cite your article on the right-hand side. To set your alert, click “Set alert” under “Inform me when this document is cited in Scopus.”

To set an author-level alert, click the Author Search tab on the Scopus homepage and run a search for your name. If multiple results are returned, check the author affiliation and subjects listed to find your correct author profile. Next, click on your author profile link. On your author details page, follow the “Get citation alerts” link; name your saved alert, set an email address, and select your preferred frequency of alerts. Once you’re finished, save your alert.

With alerts set for all three of these services, you’ll now be notified when your work is cited in virtually any publication in the world! But citations only capture a very specific form of scholarly impact. How do we learn about other uses of your articles?

Tracking article pageviews & downloads

How many people are reading your work? While you can’t be certain that article pageviews and full-text downloads mean people are reading your articles,  many scientists still find these measures to be a good proxy. A number of services can send you this information via email notifications for content hosted on their sites. Impactstory can send you pageview and download information for some content hosted elsewhere.

Publisher notifications

Publishers like PeerJ and Frontiers send notification emails as a service to their authors.

If you’re a PeerJ author, you should receive notification emails by default once your article is published. But if you want to check if your notifications are enabled, sign into PeerJ.com, and click your name in the upper right hand corner. Select “Settings.” Choose “Notification Settings” on the left nav bar, and then select the “Summary” tab. You can then choose to receive daily or weekly summary emails for articles you’re following.

In Frontiers journals, it works like this: once logged in, click the arrow next to your name on the upper left-hand side and select “Settings.” On the left-hand nav bar, choose “Messages,” and under the “Other emails” section, check the box next to “Frontiers monthly impact digest.”

Both publishers aggregate activity for all of the publications you’ve published with them, so no need to worry about multiple emails crowding your inbox at once.

Not a PeerJ or Frontiers author? Contact your publisher to find out if they offer notifications for metrics related to articles you’ve published. If they do, let us know by leaving a comment below, and we’ll update this guide!

ResearchGate & Academia.edu


Some places where you upload free-to-read versions of your papers, like ResearchGate and Academia.edu, will report how many people have viewed your paper on their site.

On both sites, you can turn on email notifications for pageviews, downloads, comments, bookmarks, and citations by other papers. Click the triangle in the upper right-hand corner of the screen and visit “Settings,” then click the “Notifications” tab in the sidebar menu and check off the types of emails you want to receive. On Academia.edu, the options for new metrics notifications (pageviews, downloads, and bookmarks) are under “Analytics” and “Papers”; on ResearchGate, they’re under “Your publications” and “Scheduled updates.”

PLOS article metrics via Impactstory

Impactstory now offers alerts, so you’re notified any time your articles get new metrics, including pageviews and downloads. However, we currently only offer these metrics for articles published in PLOS journals. (If you’d like to see us add similar notifications for other publishers, submit an idea to our Feedback site!) We describe how to get Impactstory notifications for the articles that matter to you in the Social Media section below.

Post-publication peer review

Some articles garner comments as a form of post-publication peer review. PeerJ authors are notified any time their articles get a comment, and any work that’s uploaded to ResearchGate can be commented upon, too. Reviews can also be tracked via Altmetric.com alerts.

PeerJ

To make sure you’re notified with you receive new PeerJ comments, login to PeerJ and go to “Settings” > “Notification Settings”  and then click on the “Email” tab. There, check the box next to “Someone posts feedback on an article I wrote.”

ResearchGate

To set your ResearchGate notifications, log in to the site and navigate to “Settings” > “Notifications.” Check the boxes next to “One of my publications is rated, bookmarked or commented on” and “Someone reviews my publication.”

Altmetric.com

Post-publication peer reviews from Publons and PubPeer are included in Altmetric.com notification emails, and will be included in Impactstory emails in the near future. Instructions for signing up for Altmetric and Impactstory notifications can be found below.

PubChase

Article recommendation platform PubChase can also be used to set up notifications for PubPeer comments and reviews that your articles receive. To set it up, first add your articles to your PubChase library (either by searching and adding papers one-by-one, or by syncing PubChase with your Mendeley account). Then, hover over the Account icon in the upper-right hand corner, and select “My Account.” Click “Email Settings” on the left-hand navigation bar, and then check the box next to “PubPeer comments” to get your alerts.

Social media metrics

What are other researchers saying about your articles around the water cooler? It used to be that we couldn’t track these informal conversations, but now we’re able to listen in using social media sites like Twitter and on blogs. Here’s how.

Social media metrics via Altmetric.com

Altmetric.com allows you to track altmetrics and receive notifications for any article that you have published, no matter the publisher.

[Screenshot: the Altmetric.com pop-up box showing metrics for an article]

First, install the Altmetric.com browser bookmarklet (visit this page and drag the “Altmetric it!” button into your browser’s bookmarks bar). Then, find your article on the publisher’s website and click the “Altmetric it!” button. The altmetrics for your article will appear in the upper right-hand corner of your browser window, in a pop-up box like the one shown above.

Next, follow the “Click for more details” link in the Altmetric pop-up. You’ll be taken to a drill-down view of the metrics. At the bottom left-hand corner of the page, you can sign up to receive notifications whenever someone mentions your article online.

The only drawback of these notification emails is that you have to sign up to track each of your articles individually, which can cause inbox mayhem if you are tracking many publications.
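If signing up article-by-article sounds like too much, one workaround is to check several DOIs in a single pass against Altmetric’s free public API. The sketch below is our own illustration, not an official Altmetric tool; the exact response field names may differ from what’s shown, and the free endpoint is rate-limited, so be gentle with it.

```python
# Minimal sketch: check a handful of DOIs against the free Altmetric API in
# one pass, rather than setting up a separate email alert per article. The
# response field names used here are a best guess and may differ; the free
# endpoint is rate-limited.
import urllib.request
import urllib.error
import json

dois = [
    "10.1371/journal.pmed.0020124",  # example DOI; replace with your own articles'
]

for doi in dois:
    url = f"https://api.altmetric.com/v1/doi/{doi}"
    try:
        with urllib.request.urlopen(url) as response:
            data = json.load(response)
    except urllib.error.HTTPError:
        print(f"{doi}: no online attention tracked yet")
        continue
    print(f"{doi}: Altmetric score {data.get('score')}, "
          f"{data.get('cited_by_tweeters_count', 0)} tweeters, "
          f"{data.get('cited_by_posts_count', 0)} other posts")
```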

Social media metrics via Impactstory


Here at Impactstory, we recently launched similar notification emails. Our emails differ in that they alert you to new social media metrics, bookmarks, and citations for all of your articles, aggregated into a single report.

To get started, create an Impactstory profile and connect it to ORCID, Google Scholar, and other third-party services; this lets Impactstory auto-import your articles. If a few of your articles are missing, you can add them one by one by clicking the “Import stuff” icon, following the “Import individual products” link on the next page, and then providing links or DOIs. Once your profile is set up, you’ll start receiving notification emails every 1-2 weeks.
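Curious what that ORCID connection actually hands over? The sketch below (our own illustration, not part of Impactstory) lists the works attached to an ORCID iD via ORCID’s public API. The iD shown belongs to ORCID’s fictional test researcher, Josiah Carberry; the JSON structure can vary between API versions, so treat the parsing as a rough guide.

```python
# Minimal sketch: list the works attached to an ORCID iD via ORCID's public
# API. The iD below is ORCID's fictional test researcher (Josiah Carberry);
# replace it with your own. The JSON structure may differ between API
# versions, so the parsing here is a best guess.
import urllib.request
import json

orcid_id = "0000-0002-1825-0097"  # replace with your own ORCID iD
request = urllib.request.Request(
    f"https://pub.orcid.org/v3.0/{orcid_id}/works",
    headers={"Accept": "application/json"},
)

with urllib.request.urlopen(request) as response:
    works = json.load(response)

for group in works.get("group", []):
    summary = group["work-summary"][0]
    print(summary["title"]["title"]["value"])
```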

When you get your first email, take a look at your “cards”. Each card highlights something notable about your new metrics for that week or month: for example, that an article is in a top percentile relative to other papers published that year, that your PLOS paper has topped 1,000 views, or that you’ve gotten new Mendeley readers. You’ll get a card for each type of new metric one of your articles receives.

Note that Impactstory notification emails also contain alerts for metrics that your other types of outputs–including data, code and slide decks–receive, but we’ll cover that in more detail in our next post.

Now you’ve got more time for the things that matter

No more wasting your days scouring 10+ websites for evidence of your articles’ impact; it’s now delivered to your inbox, as new impacts accumulate.

Do you have more types of research outputs, beyond journal articles? In our next post, we’ll tell you how to set up similar notifications to track the impact of your data, software, and more.

Updates:
12/17/2014: Updated to describe the revamped Impactstory interface and new notification options for ResearchGate and Academia.edu.
5/27/2014: Added information about PubChase notification emails.

Do you have what it takes to be an Impactstory Advisor?

Help us spread the word! (Photo licensed CC-BY-SA by Vacant Fever)


You’ve been asking for an opportunity to help spread the word about Impactstory. Here it is.

We’re recruiting a select group of researchers and librarians to become Impactstory Advisors!

Our advisors will:

  • Invite friends and colleagues to try out Impactstory

  • Give us feedback on features and report bugs

  • Host brown bag lunches and presentations on Impactstory at their school or library

  • Spread the word locally by hanging up our (soon to be released) cool new posters

  • Connect Impactstory to the rest of their online life–linking to their profile from their Twitter bio, Facebook page, lab website, and anywhere else they can!

In return, we’ll foot the pizza bill for Impactstory workshops, give our Advisors access to Impactstory Premium (details coming soon!), send awesome swag, and share hot off the press news on planned features and other company developments.

The best benefit of all? Our community of like-minded, cutting edge Advisors will get the satisfaction of knowing they’re helping to change research evaluation for the better.

Think you have what it takes? Apply to be an Impactstory Advisor today!

Open Science & Altmetrics Monthly Roundup (April 2014)

Don’t have time to stay on top of the most important Open Science and Altmetrics news? We’ve gathered the very best of the month in this post. Read on!

Funding agencies denying payments to scientists in violation of Open Access mandates

Want to actually get paid from those grants you won? If you haven’t made publications about your grant-funded research Open Access, it’s possible you could be in violation of funders’ public access mandates–and may lose funding because of it.

Richard Van Noorden of Nature News reports,

The London-based Wellcome Trust says that it has withheld grant payments on 63 occasions in the past year because papers resulting from the funding were not open access. And the NIH…says that it has delayed some continuing grant awards since July 2013 because of non-compliance with open-access policies, although the agency does not know the exact numbers.

Post-enforcement, compliance rates increased 14% at the Wellcome Trust and 7% at the NIH. However, both are still a ways from full compliance with the mandates.

And that’s not the only shakeup happening in the UK: the higher ed funding bodies warned researchers that any article or conference paper accepted after April 1, 2016 that doesn’t comply with their Open Access policy can’t be used for the UK Research Excellence Framework, by which universities’ worthiness to receive funding is determined.

That means institutions now have a big incentive to make sure their researchers are following the rules–if their researchers are found out of compliance, the institutions’ funding will be in jeopardy.

Post-publication peer review getting a lot of comments

Post-publication peer review via social media was the topic of Dr. Zen Faulkes’ “The Vacuum Shouts Back” editorial, published in Neuron earlier this month. In it, he points out:

Postpublication peer review can’t do the entire job of filtering the scientific literature right now; it’s too far from being a standard practice….[it’s] an extraordinarily valuable addition to, not a substitute for, the familiar peer review process that journals use before publication. My model is one of continuous evaluation: “filter, publish, and keep filtering.”

So what does that filtering look like? Comments on journal and funder websites, publisher-hosted social networks, and post-pub peer review websites, to start with. But Faulkes argues that “none of these efforts to formalize and centralize postpublication peer review have come close to the effectiveness of social media.” To learn why, check out his article on Neuron’s website.

New evidence supports Faulkes’ claim that post-publication peer review via social media can be very effective. A study by Paul S. Brookes, published this month in PeerJ, found that post-publication peer review on blogs makes corrections to the literature an astounding eight times more likely to happen than corrections reported to journal editors in the traditional (private) manner.

For more on post-publication peer review, check out this classic Frontiers in Computational Neuroscience special issue, Tim Gowers’ influential blog post, “How might we get to a new model of mathematical publishing?,” or Faculty of 1000 Prime, the highly respected post-pub peer review platform.

Recent altmetrics-related studies of interest

  • Scholarly blog mentions relate to later citations: A recent study published in JASIST (green OA version here) found that mentions of articles on scholarly blogs correlate to later citations.

  • What disciplines have the highest presence of altmetrics? Hint: it’s not the ones you think. Turns out, a higher percentage of humanities and social science articles have altmetrics than of articles in the biomedical and life sciences. Researchers also found that only 7% of all papers found in Web of Science had Altmetric.com data.

  • Video abstracts lead to more readers: For articles in the New Journal of Physics, video abstract views correlate to increased article usage counts, according to a study published this month in the Journal of Librarianship and Scholarly Communication.

New data sources available for Impactstory & Altmetric.com

Altmetric.com has added new data sources, including the post-publication peer review sites Publons and PubPeer and the microblogging site Sina Weibo (the “Chinese Twitter”). Since we get data from Altmetric, Impactstory will be reporting this data soon, too!

And another highly requested data source will be opening up in the near future: Zotero. The Sloan Foundation has backed research and development on the open-source reference management software that will eventually help Zotero build “a preliminary public API that returns anonymous readership counts when fed universal identifiers (e.g. ISBN, DOI).” So, some day soon, we’ll be able to report Zotero readership information alongside Mendeley stats in your profile–a feature many of you have been asking us about for a long time.

Altmetric.com offering new badges

Altmetric.com founder Euan Adie announced that, for those who want to de-emphasize numeric scores on content, the famous “donut” badges will now be available sans Altmetric score–a change heralded by many in the altmetrics research community as a welcome step away from “one score to rule them all.”

Must-read blog posts about ORCID and megajournals

We’ve been on a tear publishing about innovations in Open Science and altmetrics on the Impactstory blog. Here are two of our most popular posts for the month:

Stay connected

Do you blog on altmetrics or Open Science and want to share your posts with us? Let us know on our Twitter, Google+, Facebook, or LinkedIn pages. We might just feature your work in next month’s roundup!

And if you don’t want to miss next month’s news, remember that you can sign up to get these posts and other Impactstory news delivered straight to your inbox.