Open Science & Altmetrics Monthly Roundup (September 2014)

September 2014 saw Elsevier staking its claim in altmetrics research, one scientist’s calculations of the “opportunity cost” of practicing Open Science, and a whole lot more. Read on!

Hundreds attend 1am:London Conference

PLOS’s Jennifer Lin presents at 1am:London (photo courtesy of Mary Ann Zimmerman)

Researchers, administrators, and librarians from around the world convened in London on September 25 and 26 to debate and discover at 1am:London, a conference devoted exclusively to altmetrics.

Some highlights: Sarah Callaghan (British Atmospheric Data Centre), Salvatore Mele (CERN), and Daniel Katz (NSF) discussed the challenges of tracking impacts for data and software; Dan O’Connor (Wellcome Trust) outlined the ethical implications of performing altmetrics research on social media; and our Director of Marketing & Research, Stacy Konkiel, shared where Impactstory has been in the past year and where we’re headed in the next (check out her slides here).

As you might expect, 1am:London got a lot of social media coverage! Check out the Twitter archive here, watch videos of all the sessions here, and read recaps of the entire meeting over on the conference blog.

Elsevier announces increased focus on altmetrics

Elsevier is pledging increased organizational support for altmetrics research initiatives across the company in the coming year. According to their Editors Update newsletter, the publishing monolith will begin experimenting with the display of Altmetric.com data on journal websites. (Likely related: this altmetrics usability study, for which Elsevier is offering participants $100 USD honoraria; sign up here to participate.) The company also recently announced that Mendeley will soon integrate readership data into authors’ dashboards.

NISO survey results reveal more concern with definitions than gaming

NISO, the US-based National Information Standards Organization, surveyed researchers to determine the most important “next steps” for developing altmetrics standards and definitions. Interestingly, gaming–one of the most commonly voiced concerns about using altmetrics in assessment–ranked lower than setting definitions. Promoting the use of persistent identifiers and determining which types of research outputs are best suited to altmetrics tracking also ranked highly. Check out the full results over on the NISO site.

Other Open Science & Altmetrics news

  • California becomes first US state to pass an Open Access bill: The California Taxpayer Access to Publicly Funded Research Act (AB609) was signed into law by Gov. Jerry Brown in late September, making California the first state in the nation to mandate Open Access for state-funded research. Specifically, the bill requires researchers funded by the CA Department of Public Health to make copies of resulting articles available in a publicly accessible online database. Let’s hope the saying “As California goes, so goes the nation” proves true with respect to Open Access! Read more about the bill and related news coverage on the SPARC website.

  • Nature Communications is going 100% Open Access: the third-most cited multidisciplinary journal in the world will go fully Open Access in October 2014. Scientists around the world cheered the news on Twitter, noting that Nature Communications will offer CC-BY as the default license for articles. Read more over on Wired UK.

  • “Science” track proposals announced for Mozilla Festival 2014: The proposals include killer Open Science events like “Open Science Badges for Contributorship,” “Curriculum Mapping for Open Science,” and “Intro to IPython Notebook.” The Festival will occur in London on October 24-26. To see the full list of proposed Science sessions and to register, visit the Mozilla Festival website.

  • Impactstory launches new features, sleek new look: last month, we unveiled cool new functionalities for Impactstory profiles, including the ability to add new publications to your profile just by sending an email. The redesigned site also better showcases the works and metrics you’re most proud of, with new “Selected Works” and “Key Metrics” sections on your profile’s homepage. Check out our blog for more information, or log in to your Impactstory profile to discover our new look.

  • Research uncovers a new public impact altmetrics flavor–“good for teaching”: bibliometrician Lutz Bornmann has shown that papers tagged on F1000 as being “good for teaching” tend to receive more Facebook and Twitter mentions–types of metrics long assumed to relate more to “public” impacts. Read the full study on arXiv.

  • PLOS Labs announces Citation Hackathon: citations aren’t as good as they could be: they lack the structure needed to be machine-readable, making them less-than-useful for web-native publishing and citation tracking. PLOS is working to change that. Their San Francisco-based hackathon will happen on Saturday, October 18. Visit the PLOS Labs website for more information.

  • What’s the opportunity cost of Open Science? According to Emilio Bruna, it’s 35 hours and $690. In a recent blog post, Bruna calculates the cost–in both hours and cash–of making his research data, code, and papers Open Access. Read his full account on the Bruna Lab blog.

What was your favorite Open Science or altmetrics happening from September?

We couldn’t cover everything in this roundup. Share your news in the comments below!

Join Impactstory for Open Access Week 2014!

This year, we’re talking Open Science and altmetrics in an Open Access Week 2014 webinar, “The right metrics for Generation Open: a guide to getting credit for practicing Open Science.” We’re also scheduling a limited number of customizable presentations for universities around the world–read on to learn more!

Register for “The Right Metrics for Generation Open”

The traditional way to understand and demonstrate your impact–through citation counts–doesn’t meet the needs of today’s researchers. What Generation Open needs is altmetrics.

In this presentation, we’ll cover:

  • what altmetrics are and the types of altmetrics today’s researchers can expect to receive,
  • how you can track and share those metrics to get all the credit you deserve, and
  • real life examples of scientists who used altmetrics to get grants and tenure

Scientists and librarians across all time zones can attend, because we’re offering it throughout the week, at times convenient for you.

Learn more and register here!

Schedule a customizable presentation on Open Science and altmetrics for your university

We’re offering a limited number of customizable, virtual presentations for researchers at institutions around the world on the following topics during Open Access Week 2014 (Oct. 20-26, 2014):

  • The right metrics for Generation Open: a guide to getting credit for practicing Open Science
  • Altmetrics 101: how to make the most of supplementary impact metrics
  • Why Open Research is critical to your career

Learn more about our webinars and schedule one for your department or university here.

What are you doing for Open Access Week?

Will you be attending one of our webinars? Presenting to your department or lab on your own Open Science practices? Organizing a showing of The Internet’s Own Boy with students? Leave your event announcements in the comments below, and over at OpenAccessWeek.org, if you haven’t already!

What Open Science Framework and Impactstory mean to these scientists’ careers

Yesterday, we announced the three winners of the Center for Open Science’s random drawing, open to users who connected their Impactstory profile to their Open Science Framework (OSF) profile. Each wins a year’s subscription to Impactstory: Leonardo Candela (OSF, Impactstory), Rebecca Dore (OSF, Impactstory), and Calvin Lai (OSF, Impactstory). Congrats, all!

We know our users would be interested to hear from other researchers practicing Open Science, especially how and why they use the tools they use. So, we emailed our winners who graciously agreed to share their experiences using the OSF (a platform that supports project management with collaborators and project sharing with the public) and Impactstory (a webapp that helps researchers discover and share the impacts of all their research outputs). Read on!

What’s your research focus?

Leonardo: I’m a computer science researcher. My research interests include Data Infrastructures, Virtual Research Environments, Data Publication, Open Science, Digital Library [Management] Systems and Architectures, Digital Libraries Models, Distributed Information Retrieval, and Grid and Cloud Computing.

Rebecca: I am a PhD student in Developmental Psychology. Broadly, my research focuses on children’s experiences in pretense, fiction and fantasy. How do children understand these experiences? How might these experiences affect children’s behaviors, beliefs and abilities?

Calvin: I’m a doctoral student in Social Psychology studying how to change unconscious or automatic biases. In their most insidious forms, unconscious biases lead to discrepancies between what people value (e.g., egalitarianism) and how people act (e.g., discriminating based on race). My interest is in understanding how to change these unconscious thoughts so that they’re aligned with our conscious values and behavior.

How do you use the Open Science Framework in the course of your research?

Leonardo: Rather than being an end user of the system for supporting my research tasks, I’m interested in analysing and comparing the facilities offered by such an environment with the concept of Virtual Research Environments.

Rebecca: At this stage, I use the OSF to keep all of the information about my various projects in one place and to easily make that information available to my collaborators–it is much more efficient to stay organized than constantly exchanging and keeping track of emails. I use the wiki feature to keep notes on what decisions were made and when and store files with drafts of materials and writing related to each project. Version control of everything is very convenient.

Calvin: For me, the OSF encompasses all aspects of the research process – from study inception to publication. I use the OSF as a staging ground in the early stages for plotting out potential study designs and analysis plans. I will then register my study shortly before data collection to gain the advantage of pre-registered confirmatory testing. After data collection, I will often refer back to the OSF as a reminder of what I did and as a guide for analyses and manuscript-writing. Finally, after publication, I use the OSF as a repository for public access to my data and study materials.

What’s your favorite Impactstory feature? Why?

Leonardo: I really appreciate the effort Impactstory is putting into collecting metrics on the impact my research products have on the web. I like its integration with ORCID and the recently added “Key profile metrics,” since it gives a nice overview of a researcher’s impact.

Rebecca: I had never heard of Impactstory before this promotion, and it has been really neat to start testing out. It took me 2 minutes to copy my publication DOIs into the system, and I got really useful information about the reach of my work that I hadn’t considered before–for example, shares on Twitter and where the reach of each article falls relative to other psychology publications. I’m on the job market this year and can see this being potentially useful as supplementary information on my CV.

Calvin: Citation metrics can only tell us so much about the reach of a particular publication. For me, Impactstory’s alternative metrics have been important for figuring out where else my publications are having impact across the internet. It has been particularly valuable for pointing out connections that my research is making that I wasn’t aware of before.

Thanks to all our users who participated in the drawing by connecting their OSF and Impactstory profiles! Both of our organizations are proud to be working to support the needs of researchers practicing Open Science, and thereby changing science for the better.

To learn more about our open source non-profits, visit the Impactstory and Open Science Framework websites.

What’s our impact? (August 2014)

You may have noticed a change in our blog in recent months: we’ve added a number of editorial, how-to, and opinion posts, in addition to “behind the scenes” Impactstory updates.

Posts on our blog and commentary on Twitter serve two purposes for us. First, they promote our nonprofit goals of education and awareness. Second, they serve as “content marketing”–a great way to raise awareness of Impactstory among a broader audience.

We’ve been tracking the efficacy of this new strategy for a while now, and thought we’d begin to share the numbers with you in the spirit of making Impactstory more transparent. After all, if you’re an Impactstory fan, you’re likely interested in metrics of all stripes.

Here are our numbers for August 2014.

Organic site traffic stats

  • Unique visitors to impactstory.org: 3,429
  • New users: 378
  • Conversion rate (% of visitors who signed up for an Impactstory.org account): 11.3%

Blog stats

  • Unique visitors: 4,381
  • Pageviews: 6,431
  • Clickthrough rate (% of blog visitors who clicked through to impactstory.org): 1.6%
  • Conversion rate (% of those blog-referred visitors who went on to sign up for an Impactstory.org account): 9.8%
  • Percent of all new user signups that came via the blog: 1.8%
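For those who like to see how these numbers hang together, here’s a quick back-of-the-envelope check of the blog stats above. It’s a sketch, not our analytics pipeline, and it assumes the rates compose multiplicatively and that the 1.8% figure is blog-referred signups as a share of all new signups:

```python
# Rough check of how the August blog numbers relate to each other.
# Assumption: "percent of new user signups" = blog-referred signups / all new users.
blog_visitors = 4381           # unique visitors to the blog
clickthrough_rate = 0.016      # share of blog visitors who clicked through to impactstory.org
conversion_rate = 0.098        # share of those blog-referred visitors who signed up
total_new_users = 378          # all new impactstory.org users in August

referred_visitors = blog_visitors * clickthrough_rate   # ~70 visitors
blog_signups = referred_visitors * conversion_rate       # ~7 signups
blog_share = blog_signups / total_new_users              # ~0.018, i.e. ~1.8%

print(round(referred_visitors), round(blog_signups), f"{blog_share:.1%}")
```

Roughly 70 blog-referred visitors and 7 signups, which lines up with the 1.8% figure reported above.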

Overall: Our blog traffic has been steadily increasing since May: from 3,896 to 6,431 pageviews per month. And the number of unique visitors to our blog has increased, too: from 2,311 to 4,381 per month. We published four blog posts in August, two of which could be considered “content marketing”: an interview with Impactstory Advisor Megan O’Donnell, and our monthly Open Science and Altmetrics Roundup.

What about clickthrough and conversion rates? On the one hand, it’d be helpful to compare these rates against industry norms; on the other hand, which “industry norms” would those be? Startup norms? Non-profit norms? Academic norms? In the end, I’ve decided it’s best to just use these numbers as a benchmark and forget about comparisons.

Twitter stats

  • New followers: 215
  • Increase in followers over previous month: 5.11%
  • Mentions: 346 (We’re tracking this to answer the question, “How engaged are our followers?”)
  • Tweet reach: 3,543,827 (We’re tracking this–the number of people who potentially saw a tweet mentioning Impactstory or our blog–to understand our brand awareness)
  • Referrals to impactstory.org: 271 users
  • Signups: 32

Overall: Our Twitter follower growth rate actually went down from May, from roughly 8% new followers per month to roughly 5%. I did not cross (and still haven’t crossed) the 5,000-follower threshold, a milestone I intended to hit around August 20th. That said, engagement was up from the previous month by ~23%, a change that reflects conscious effort.

What does it all mean?

Our August numbers were no doubt affected by our subscription announcements and the new Impactstory features. I’m interested to see how these statistics change through September, which has seen an end to the “early adopter” 30-day free trial and the debut of all the features we deployed during the Five Meter sprint.

Our blog receives more unique visitors than our website, at this point, so increasing the number of blog-referred signups is a priority.

We could stand to improve our conversion rates from organic website traffic, too. Our rates are lower than average compared to those of other non-profits, publishing-related organizations, and IT companies.

Looking ahead

Given our findings from this month’s stats, here are our goals for September (already half-over, I know) and October:

  • Website: Jason and Heather will be working in the coming months to improve conversion rates by introducing new features that drive signups and subscriptions.
  • Blog: Increase unique visitors and the conversion rate for new signups–the former to continue building brand awareness by publishing blog posts that resonate with scientists, and the latter for obvious reasons. 🙂 One tactic could be to begin offering at least one content marketing post per week–a challenging task.
  • Twitter: Increase our growth rate for Twitter followers, pass the 5,000 follower mark, and continue to engage with our audience in ways that provide value–whether by sharing Open Science and altmetrics news and research, answering a question they have about Impactstory, or connecting them with other scientists and resources.
  • In general: Listen to (and act upon) feedback we get via social media. Continue to create useful blog content that meets the needs of practicing scientists, and to scour the web for the most interesting and relevant Open Science and Altmetrics news and research to share with our audience.

Questions?

Are there statistics you’re curious about, or do you have questions about our new approach to marketing? I’m happy to answer them in the comments below. Cheers!

Updated Dec. 31, 2014 to reflect a more accurate calculation of conversion rates from blog traffic.

Impactstory Advisor of the Month: Guillaume Lobet (September 2014)

September’s Impactstory Advisor of the Month is (drumroll please!)
Guillaume Lobet!

Guillaume is a post-doc researcher at the Université de Liège in Belgium, in the Plant Physiology lab of Prof. Claire Perilleux. He’s also a dedicated practitioner of open, web-native science, creating awesome tools ranging from a Plant Image Analysis software finder to an image analysis toolbox that allows the quantitative analysis of root system architecture. He’s even created an open source webapp that uses Impactstory’s open profile data to automatically create CVs in LaTeX, HTML, and PDF formats. (More on that below.)

I had the pleasure of corresponding with Guillaume this week to talk about his research, what he enjoys about practicing web native science, and his approach to being an Impactstory Advisor.

Tell us a bit about your current research.

I am a plant physiologist. My current work focuses on how the growth and development of different plant organs (e.g. the root and the shoot) are coordinated, and how modifications in one organ affect the others. The project is fascinating, because so far the majority of plant research has focused on one specific organ or process, and little has been done to try to understand how the different parts communicate.

Why did you initially decide to join Impactstory?

A couple of years ago, I created a website referencing the existing plant image analysis software tools (www.plant-image-analysis.org). I wanted to help users understand how well the tools (or more specifically, the scientific papers describing the tools) have been received by the community. At that time, an article-level Impactstory widget was available, and I chose to use it. It was a great addition to the website!

At the same time, I created an Impactstory profile and I’ve used it ever since. (A quick word about the new profiles: they look fantastic!)

Why did you decide to become an Advisor?

Mainly because the ideas promoted by the Impactstory team are in line with my own. Researchers’ contributions to the scientific community (or even to society in general) are not made only by publishing peer-reviewed papers (even though that is still a very important way to disseminate our findings). Web 2.0 brought us a large array of means to contribute to the scientific debate, and it would be restrictive not to consider those while evaluating one’s work.

How have you been spreading the word about Impactstory?

I started by talking about it with my direct colleagues. Then, I noticed that science valorisation in general was not well known, so I made a presentation about it and shared it on Figshare. To my great surprise, it became my most viewed item (I guess people liked the Lord of the Rings / Impactstory mash up :)). I also created a small widget to convert any Impactstory online profile into a resume. And of course, I proudly wear my Impactstory t-shirt whenever I go to conferences, which always brings questions such as “I heard of that, what is it exactly?”.

You’re a web-native scientist (as evidenced by your active presence on sites like Figshare, Github, and Mendeley). When did you start practicing web-native science? What do you like about it? Are there drawbacks?

It really started a couple of years ago, by the end of my PhD. At that time, I needed to apply for a new position, so I set up a webpage, Mendeley account, and so on. I quickly found it to be a great way to get in touch with other researchers.

What I like the most about web-native science is that boundaries are disappearing! You do not need to meet people in person to build a new project or start a new collaboration. It brings together researchers in the same field who are scattered around the globe into a small digital community where they can easily interact!

As for the drawbacks, I am still looking for them 🙂

Tell us about your “Impact CV” webapp, which converts anyone’s Impactstory profile data into PDF, Markdown, LaTeX, or HTML format. Why’d you create it and how’d you do it?

A few months ago, I needed to update my resume, and my Impactstory profile already contained all my research outputs. So I thought it would be nice to be able to reuse this information, not only for me, but for everyone who has an Impactstory profile. Instead of copying and pasting my online profile into my resume, I took advantage of the openness of Impactstory to automatically retrieve the data contained in my profile (everything is stored in a JSON file that is readily available from any profile) and re-use it locally. I wrapped it up in a webpage (http://www.guillaumelobet.be/impact) and voilà!
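If you’re curious how little code the core of an “Impact CV” like Guillaume’s needs, here’s a minimal sketch of the idea in Python. The profile URL and field names below are placeholders for illustration only–they are not the actual Impactstory API or the code behind his app:

```python
# Minimal sketch: fetch a profile's JSON and render a bare-bones Markdown CV.
# The URL and field names are hypothetical; inspect your own profile's JSON
# for the real structure before adapting this.
import requests

PROFILE_JSON_URL = "https://impactstory.org/some-researcher.json"  # placeholder

def profile_to_markdown(url):
    profile = requests.get(url, timeout=30).json()
    lines = [f"# {profile.get('name', 'Researcher')}", "", "## Products", ""]
    for product in profile.get("products", []):   # field name assumed
        title = product.get("title", "Untitled")
        year = product.get("year", "n.d.")
        lines.append(f"- {title} ({year})")
    return "\n".join(lines)

if __name__ == "__main__":
    print(profile_to_markdown(PROFILE_JSON_URL))
```

From there, a converter like pandoc can turn the Markdown into HTML, LaTeX, or PDF–roughly the set of formats Guillaume’s app produces.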

What’s the best part about your work as a post-doc researcher at the Université de Liège?

Academic freedom is definitely the best part about working in a University. It gives us the latitude to explore unexpected paths. And I work with great people!

Thanks, Guillaume!

As a token of our appreciation for Guillaume’s hard work, we’re sending him an Impactstory t-shirt of his choice from our Zazzle store.

Guillaume is just one part of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!

Your new Impactstory

Today, it’s yours: the way to showcase your research online.

You’re proud of your research.  You want people to read your papers, download your slide decks, and talk about your datasets.  You want to learn when they do, and you want to make it easy for others to learn about it too, so everyone can understand your impact. We know, because as scientists, that’s how we feel, too.

The new Impactstory design is built around researchers. You and your research are at the center: you decide how you want to tell the story of your research impact.

What does that mean?  Here’s a sampling of what’s new in today’s release:

A streamlined front page showcases Selected Publications and Key Metrics that you select and arrange from your full list of publications.  There’s a spot for a bio so people learn about your research passion and approach.

Reading your research has become an easy and natural part of learning about your work: your publications are directly embedded on the site!  Everyone can read as they browse your profile.  We automatically embed all the free online versions we can find — uploading everything else only takes a few clicks.

None of this is any good if your publication list gets stale, so keeping your publication list current is easier than ever: zoom an email to publications@impactstory.org with a link whenever you publish something new, and poof: it’ll appear in your profile, just like that.

Want to learn things you didn’t know before?  Your papers now include Twitter Impressions — the number of times your publication has been mentioned in someone’s twitter timeline.  You may be surprised how much exposure your research has had…we’re discovering many articles reaching tens of thousands of potential readers.

We could talk about the dozens of other features in this release. But instead: go check out your new profile. Make it yours.  We’re extending the free trial for all users for two more days — subscribe before your trial expires and it is just $45/year.

As of today, the three of us have taken down our old-fashioned academic websites. Impactstory is our online research home, and we’re glad it’ll be yours too.

 

Sincerely,
Jason, Heather and Stacy

What Jeffrey Beall gets wrong about altmetrics

Not long ago, Jason received an email from an Impactstory user, asking him to respond to the anti-altmetrics claims raised by librarian Jeffrey Beall in a blogpost titled, “Article-Level Metrics: An Ill-Conceived and Meretricious Idea.”

Beall is well-known for his blog, which he uses to expose predatory journals and publishers that abuse Open Access publishing. This has been valuable to the OA community, and we commend Beall’s efforts. But we think his post on altmetrics was not quite so well-grounded.

In the post, Beall claims that altmetrics don’t measure anything of quality. That they don’t measure the impact that matters. That they can be easily gamed.

He’s not alone in making these criticisms; they’re common. But they’re also ill-informed. So, we thought that we’d make our responses public, because if one person is emailing to ask us about them, others must have questions, too.

Citations and the journal impact factor are a better measure of quality than altmetrics

Actually, citations and impact factors don’t measure quality.

Did I just blow your mind?

What citations actually measure

Although early theorists emphasized citation as a dispassionate connector of ideas, more recent research has repeatedly demonstrated that citation actually has more complex motivations, including often as a rhetorical tool or a way to satisfy social obligations (just ask a student who’s failed to cite their advisor). In fact, Simkin and Roychowdhury (2002) estimate that as few as 20% of citers even read the paper they’re citing. That’s before we even start talking about the dramatic disciplinary differences in citation behavior.

When it comes down to it, because we can’t identify citer motivations by looking at a citation count alone (and, to date, efforts to use sentiment analysis to understand citation motivations have failed to be widely adopted), the only bulletproof way to understand the intent behind citations is to read the citing paper itself.

It’s true that some studies have shown that citations correlate with other measures of scientific quality like awards, grant funding, and peer evaluation. We’re not saying they’re not useful. But citations do not directly measure quality, which is something that some scientists seem to forget.

What journal impact factors actually measure

We were surprised that Beall holds up the journal impact factor as a superior way to understand the quality of individual papers. The journal impact factor has been repeatedly criticized throughout the years, and one issue above all others renders Beall’s argument moot: the impact factor is a journal-level measure of impact, and therefore irrelevant to the measure of article-level impact.

What altmetrics actually measure

The point of altmetrics isn’t to measure quality. It’s to better understand impact: both the quantity of impact and the diverse types of impact.

And when we supplement traditional measures of impact like citations with newer, altmetrics-based measures like post-publication peer review counts, scholarly bookmarks, and so on, we have a better picture of the full extent of impact. Not the only picture. But a better picture.

Altmetrics advocates aim to make everything a number. Only peer review will accurately get at quality.

This criticism is only half-wrong. We agree that informed, impartial expert consensus remains the gold standard for scientific quality. (Though traditional peer-review is certainly far from bullet-proof when it comes to finding this.)

But we take exception to the charge that we’re only interested in quantifying impact. In fact, we think that the compelling thing about altmetrics services is that they bring together important qualitative data (like post-publication peer reviews, mainstream media coverage, who’s bookmarking what on Mendeley, and so on) that can’t be summed up in a number.

The scholarly literature on altmetrics is growing fast, but it’s still early. And altmetrics reporting services can only improve over time, as we discover more and better data and ways to analyze it. Until then, using an altmetrics reporting service like our own (Impactstory), Altmetric.com or PlumX is the best way to discover the qualitative data at the heart of diverse impacts. (More on that below.)

There’s only one type of important impact: scholarly impact. And that’s already quantified in the impact factor and citations.

The idea that “the true impact of science is measured by its influence on subsequent scholarship” would likely be news to patients’ rights advocates, practitioners, educators, and everyone else that isn’t an academic but still uses research findings. And the assertion that laypeople aren’t able to understand scholarship is not only condescending, it’s wrong: cf. Kim Goodsell, Jack Andraka, and others.

Moreover, who are the people and groups that argue in favor of One Impact Above All Others, measured only through the impact factor and citations? Often, it’s the established class of scholars, most of whom have benefited from being good at attaining a very particular type of impact and who have no interest in changing the system to recognize and reward diverse impacts.

Even if we were to agree that scholarly impact were of paramount importance, let’s be real: the impact factor and citations alone aren’t sufficient to measure and understand scholarly impact in the 21st century.

Why? Because science is moving online. Mendeley and CiteULike bookmarks, Google Scholar citations, ResearchGate and Academia.edu pageviews and downloads, dataset citations, and other measures of scholarly attention have the potential to help us define and better understand new flavors of scholarly attention. Citations and impact factors by themselves just don’t cut the mustard.

I heard you can buy tweets. That proves that altmetrics can be gamed very easily.

There’s no denying that “gaming” happens, and it’s not limited to altmetrics. In fact, journals have recently been banned from Thomson Reuters’ Journal Citation Reports due to impact factor manipulation, and papers have been retracted after a “citation ring” was busted. And researchers have proven just how easy it is to game Google Scholar citations.

Most players in the altmetrics world are pretty vigilant about staying one step ahead of the cheaters. (Though, to be clear, there’s not much evidence that scientists are gaming their altmetrics, since altmetrics aren’t yet central to the review and rewards systems in science.) Some good examples are SSRN’s means for finding and banning fraudulent downloaders, PLOS’s “Case Study in Anti-Gaming Mechanisms for Altmetrics,” and Altmetric.com’s thoughts on the complications of rooting out spammers and gamers. And we’re seeing new technology debut monthly that helps us uncover bots on Twitter and Wikipedia, fake reviews and social bookmarking spam.

Crucially, altmetrics reporting services make it easier than ever to sniff out gamed metrics by exposing the underlying data. Now, you can read all the tweets about a paper in one place, for example, or see who’s bookmarking a dataset on Delicious. And by bringing together that data, we help users decide for themselves whether that paper’s altmetrics have been gamed. (Not dissimilar from Beall’s other blog posts, which bring together information on predatory OA publishers in one place for others to easily access and use!)

Altmetrics advocates just want to bring down The Man

We’re not sure what that means. But we sure are interested in bringing down barriers that keep science from being as efficient, productive, and open as it should be. One of those barriers is the current incentive system for science, which is heavily dependent upon proprietary, opaque metrics such as the journal impact factor.

Our true endgame is to make all metrics–including those pushed by The Man–accurate, auditable, and meaningful. As Heather and Jason explain in their “Power of Altmetrics on a CV” article in the ASIS&T Bulletin:

Accurate data is up-to-date, well-described and has been filtered to remove attempts at deceitful gaming. Auditable data implies completely open and transparent calculation formulas for aggregation, navigable links to original sources and access by anyone without a subscription. Meaningful data needs context and reference. Categorizing online activity into an engagement framework helps readers understand the metrics without becoming overwhelmed. Reference is also crucial. How many tweets is a lot? What percentage of papers are cited in Wikipedia? Representing raw counts as statistically rigorous percentiles, ideally localized to domain or type of product, makes it easy to interpret the data responsibly.
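To make the “meaningful” point concrete, here’s a toy illustration of percentile ranking–turning a raw count into a position relative to a reference sample. The numbers are made up; real reference sets would be matched by discipline, year, and product type:

```python
# Toy percentile ranking: where does a paper's tweet count fall relative to
# a (hypothetical) reference sample of similar papers?
def percentile_rank(value, reference):
    """Percent of reference values at or below `value`."""
    at_or_below = sum(1 for r in reference if r <= value)
    return 100.0 * at_or_below / len(reference)

reference_tweet_counts = [0, 0, 1, 1, 2, 3, 5, 8, 13, 40]  # made-up sample
print(percentile_rank(8, reference_tweet_counts))  # 80.0 -> "80th percentile"
```

Saying “this paper is at the 80th percentile of tweets for its field and year” is far easier to interpret responsibly than “this paper has 8 tweets.”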

That’s why we incorporated as a non-profit: to make sure that our goal of building an Open altmetrics infrastructure–which would help make altmetrics accurate, auditable, and meaningful–isn’t corrupted by commercial interests.

Do you have questions related to Beall’s–or others’–claims about altmetrics? Leave them in the comments below.

Open Science & Altmetrics Monthly Roundup (August 2014)

August was a huge month for open science and altmetrics. Here are some of the highlights:

AAAS shrugs off scientists voicing concern

More than 100 scientists signed a letter of concern addressed to AAAS regarding their new “open access” journal, Science Advances–specifically, the journal’s exorbitant publication fees and restrictive licensing requirements.

As Liz Allen over on the ScienceOpen blog reports, the AAAS issued a “classic PR” piece in response. AAAS’s post doesn’t directly address the letter, and doubles down on their commitment to keeping Science Advances prohibitively expensive to publish in and difficult to remix and reuse for the benefit of science.

After a private phone call between AAAS’s Marcia McNutt and Jon Tennant and no indication that AAAS would reconsider their stance, Erin McKiernan and Jon Tennant penned a final article expressing their disappointment in the organization.

Be sure to follow Jon Tennant and Erin McKiernan, who spearheaded the effort to write the letters of concern and are talking candidly on Twitter about further developments (or lack thereof).

International Research, Science and Education Organizations tell STM Publishers: No New Licenses!

In early August, a similar kerfuffle emerged over the issue of content licensing for scientific publications. Creative Commons licenses have been the de facto standard for scientific publishing for years due to their simplicity of use and recognition, but the Association of Scientific, Technical, and Medical Publishers has released a suite of specialized licenses that some say intentionally confuse authors.

From the PLOS website:

The Association of Scientific, Technical and Medical Publishers has recently released a set of model licenses for research articles. In their current formulation, these licenses would limit the use, reuse and exploitation of research. They would make it difficult, confusing or impossible to combine these research outputs with other public resources and sources of knowledge to the benefit of both science and society. There are many issues with these licenses, but the most important is that they are not compatible with any of the globally used Creative Commons licenses. For this reason, we call on the STM Association to withdraw them and commit to working within the Creative Commons framework. [Click to read the full letter.]

The Association of STM Publishers issued a response, which unfortunately dismissed the  concerns raised. Catriona MacCallum and Cameron Neylon at PLOS continue to coordinate outreach on the issue; check out the PLOS Opens blog for the most up-to-date information, and consider contacting your favorite journals’ editorial boards to voice support for Creative Commons licenses.

Other Altmetrics and Open Science News

  • Shape the way publishers credit academic labor and expertise in scientific author lists: What can we do about honorary authorships and uncredited work in academia? CASRAI and ORCID have an idea: create a taxonomy for scientific author roles to help clarify who gets credited (and for what) on the byline of academic articles. Head over to the F1000Research blog to learn more and offer your feedback before Sept. 12.

  • Some scientists don’t find the Kardashian Index very funny: Since we covered a parody impact measure called the Kardashian Index in last month’s roundup, many have weighed in. Turns out, not everyone thought it was very funny, and many (rightly) called out the article for its belittling of scientists who engage others via social media. To read the responses and share your thoughts, visit the LSE Impact Blog.

  • More scholars post, discuss, and comment on research on Twitter than academic social networks like ResearchGate: Nature News surveyed more than 3,000 scientists on their use of social networks, and some of the results were surprising. For example, scientists are primarily on Academia.edu and ResearchGate for the same reason they’re on LinkedIn: just in case they’re contacted. And they more often share their work and follow conversations on Twitter than academic social networking sites. Check out the rest of the reported results over on Nature News.

  • Impactstory, PlumX, and Altmetric add new altmetric indicators: August saw an increase in the types of metrics reported by altmetrics aggregators. Impactstory recently added viewership statistics letting users know how often their embedded content has been viewed on Impactstory.org. PlumX rolled out GoodReads metrics, increasing altmetrics coverage for books. And Altmetric.com now tracks mentions of research articles in policy documents–a big win for understanding how academic research influences public policy.

  • “GitHub for research” raising $1.5 million in funding: The creators of PubChase, ZappyLab, are seeking funding for Protocols.io, a scientific protocols sharing and reuse repository. In addition to the private, “angel” funding they’ve raised to date, they’re also pursuing crowdfunding via Kickstarter. Check it out today.

  • You can now share your work directly on Impactstory: we’ve gotten a lot of love this week for our newest feature: embeddable content. It’s just one of the many features we’re rolling out before September 15. Here’s how embedded articles, slides, and code look on others’ profiles; log in to your profile and start sharing your work!

Stay connected

Do you love these updates? You should follow us!

We share altmetrics and Open Science news as-it-happens on our Twitter, Google+, Facebook, or LinkedIn pages. And if you don’t want to miss next month’s news roundup, remember that you can sign up to get these posts and other Impactstory news delivered straight to your inbox.

One month, three exciting new Impactstory features

In our last post, we hinted at the cool new set of features we’re rolling out over the next month as part of our Five Meter release.   We wanted to give you the inside scoop on these features before their debut and get your feedback!

Easier import to Impactstory, and keeping your profile more up-to-date

We know how much of a pain it is to keep your CV up-to-date, so we’re going to make Impactstory that much better at keeping it current, without the need for you to do much (if anything at all).

We’re currently exploring routes to implementation that include:

  • Increasing the speed with which we sample third-party sites like Figshare and ORCID, so there’s less of a lag between when new products are added to those sites and when they appear on Impactstory. (That lag is currently one week, which is awesome for many of our users, but could be improved upon.)

  • Allowing you to email us a link or citation to a new product

  • Allowing you to tweet at us with certain hashtags and links to new products

Assuming you have to do anything at all to update your Impactstory profile, how would you prefer to do it? Forwarding manuscript acceptance emails? A bookmarklet a la Mendeley? What’s the easiest and least-hassle way we could do this for you?

Upload OA versions of your papers directly to Impactstory

This was one of the most wanted features mentioned in recent user interviews. And since we’re aiming to make Impactstory a solid replacement for scientists’ online web presence and CV, it follows that we should debut a feature that will allow researchers to share their work like they would on their website, but with less hassle.

What we’re most excited about for this feature debut is the ability to now track pageview and download counts for content that previously couldn’t be easily tracked on scientists’ websites.

The feature won’t provide permanent IDs like DOIs for uploaded content, nor will it provide full-scale archival preservation for content for now (like Figshare and many institutional repositories currently do, thanks to partnerships with CLOCKSS, etc). But we (like many of you) believe in the importance of permanent IDs and digital preservation. We’ll be keeping those issues in mind for future improvements and listening to see how much demand there is from users like you.

Ability to customize your profile’s appearance

You’ll soon be able to prioritize content and choose what people can see on your Impactstory profile, including current profile content and also new types of content that we’re calling widgets (think WordPress widgets).

Some widgets we’re aiming to debut include: the ability to feature a paper or product you’re proud of (as well as their metrics), a “bio” section, a research interests section, and integration with your blog.

Are there other uses for a customizable UI or types of widgets you’d love to see?

We’re also going to reformat profile badges to make ‘em more informative: the reformat will include the actual metrics themselves, percentile information, and possibly other information.

The customizable UI feature debut, as a whole, will set the stage for an oft-requested feature: the ability to group products into research packages.

Cool, so what’s next?

We’re going to start rolling these features out ASAP–the upload feature will likely be the first to happen, and it might happen later this week. We’re aiming to have all of these implemented by September 15.

We’d love to get your feedback in the comments on the questions we pose here, and welcome your thoughts over on the Feedback Forum on new features to consider implementing in our next sprints.

New pricing and new features, coming Sept 15th

It’s been an active couple of weeks at Impactstory. We’ve been thrilled at all the feedback we’ve received on our sustainability plan announcement, and we really appreciate the time many of you have put into sharing your thoughts with us.

Inspired by some of this feedback, we’ve made some new plans. To continue furthering our vision of Impactstory as a professional-grade scholarly tool, in one month we’ll be adjusting the subscription price for new subscribers and, to go with it, launching an exciting new set of features. Read on!

The suggestions

Many have suggested we go back to a free or freemium model, or find someone to charge other than our core users. And though we understand the appeal of these approaches (they were actually our Plan A for a long time), we won’t be going down those paths in the foreseeable future.  We’ve written about why elsewhere, as have some of our users and other folks around the web (Stefan’s post on the Paperpile blog was particularly good).

There was also a second set of suggestions, from folks who argued we should be charging more for Impactstory. Now that caught us by surprise.

To let you in on some of the background for why we chose our current price, we actually started with the idea of two bucks monthly. We knew the jump from free to subscription would sting, so we wanted to make it small. And we knew that we still have a ways to go before we deliver really compelling value for many users, so we wanted to ask for as little as we could. After a lot of discussion and some interviews, we eventually dared to push a bit higher, but drew the line at five dollars.

Undercharging? Seriously?

To hear that we might be undercharging was a bit of a shock. But when we examined the arguments for a higher price point, they made a lot of sense:

  • Your price establishes the perceived value of your product.

  • Your price only makes sense in relation to your market. Impactstory doesn’t have direct competitors, but we can look at the market for generally similar services. When we do, we see clusters around two price points: (1) free, like ResearchGate, Facebook, and so on, and (2) about $10/mo, like GitHub or Spotify or Netflix. Crucially, there’s almost no one charging $5 monthly.

  • If we’re the cheapest thing people pay for, we’re establishing our value as the least important thing they pay for. That’s not the niche we’re shooting for.

  • And worse, people always assume you’re worth a bit less than you charge. So if our cost is “cheapest thing that’s not free,” then people assume our real value is: free. Nothing, no value.

This last point was particularly compelling when we read it, because it gets to the heart of why we’re charging in the first place: if we’re going to change researcher behavior and change the world, we have to establish ourselves as a professional-grade tool.

We can’t afford to be just something fun and cheap. And so we need to set a price that says that, loud and clear. It looks like we got that price a little wrong with our first shot, and so we’re going to adjust it.

So we’re making a change

We’re raising our subscription price to $60/year or $10/month, effective September 15th (one month hence).

Anyone who subscribes between now and September 15 will lock in their subscription at $5/month.  Everyone’s free trial will be extended till then, and new users will receive a 30 day trial.  And of course the no-questions-asked waiver will still be available.

But there’s a second part of this, too. Because raising the price can’t be the whole plan.

We get that some have been hesitant to use Impactstory for free. Part of the issue is that altmetrics aren’t widely accepted yet. We also know that if we want to sell Impactstory as a professional-grade tool with practical value for cutting-edge researchers, we’re going to need some very significant upgrades to what Impactstory does. It’s got to be worth the high price. That’s the whole point.

And so we’re going to be worth it

That’s why September 15th will also mark the completion of a huge new set of Impactstory features, collectively code-named Five Meter. We’ll be rolling these out over the course of the next month. It’s going to be one of our biggest feature pushes ever, and it’s going to be awesome.

The Five Meter spec isn’t 100% decided yet, but it’ll include a new more customizable profile page, stats on your twitter account and blog, support for your own domain name, new badges, and more.  Once these new features ship on September 15, our entire team is going to delete our professional webpages and online CVs, because at that point, Impactstory will be doing everything our webpages and online CVs do but better.

We think that’s something a lot of other researchers will want too, and want hard. And after a lot of conversation with the vanguard of web-native scientists–the folks we’re focused on right now–we’re convinced that’s an Impactstory they’ll gladly pay for. An Impactstory they’ll use, in earnest. And an Impactstory that’s way closer to transforming the way science is evaluated and shared.

As always, we’d love to hear questions or feedback! Email us at team@impactstory.org or tweet us at @impactstory.

 

All our best,

The Impactstory Team

P.S. Want to lock down that $45/year rate we talk about above? Log in to your Impactstory profile, then head to Settings > Subscription. And if you aren’t already an Impactstory user but want to check out all the awesome new features we’ll be rolling out this month, sign up for a 30-day free trial now. Cheers!