Impactstory Advisor of the Month: Megan O’Donnell (August 2014)

We’re pleased to announce the August Impactstory Advisor of the Month, Megan O’Donnell!

Headshot of Megan O’Donnell

As a Scholarly Communication Librarian at Iowa State University, Megan’s a campus expert on altmetrics and Open Science. Since joining the Advisors program, Megan has educated other campus librarians on altmetrics and Impactstory, and is currently hard at work planning an “intro to altmetrics” faculty workshop for the Fall semester.

We recently chatted with Megan about her job as a Scholarly Communication Librarian, how Impactstory benefits her scholarly activities, and how the new Impactstory subscription model has affected her outreach efforts.

Why did you initially decide to join Impactstory?

I’m still a new librarian in many ways. I just passed my one-year anniversary as a full-time librarian this spring, and my coauthors and I are finishing up what will be my first peer-reviewed work. Impactstory appealed to me because it was a way to showcase and track the work I have been doing outside of traditional publications. Without Impactstory, I would never have known that one of my slideshows is considered “highly viewed” and continues to be viewed every week.

Why did you decide to become an Advisor?

A coworker suggested it to me. At first I was uncertain and I found myself thinking “But my profile is so empty! I haven’t ‘published’ anything yet! This won’t work.” In the end I decided that it was an important thing to do as a campus advocate for open access and altmetrics. There are many people who will be in the same position as me, wondering if Impactstory is worth it when they have so little to showcase. All I can say is that I can’t wait to fill my Impactstory profile up.

How have you been spreading the word about Impactstory in your first two months as an Advisor?

There’s not a lot of activity on campus during the summer. Most of our students are gone and many researchers and faculty are away on vacation, field work, or conferences, so the majority of my time has been spent planning an altmetrics workshop for fall. The one thing I did do this summer was set up the chair of one of my departments with a profile. Impactstory provided a nice way to start a conversation about faculty and department work that tends to be left out by traditional metrics (such as the materials that her department produces for ISU’s extension program). I don’t think she’s completely convinced about the value of altmetrics, but she was open to creating an account to see what it could do, and now she’s aware that there are other tools and measurements.

Once I got my Advisor package I visited other librarians in my department. We have a mix of faculty and academic professionals but everyone, no matter their rank, wanted one of the “I am more than my H-Index” stickers. I ran out within a week. The slogan speaks to everyone: no one wants to be judged solely on their citation numbers.

How has Impactstory’s new subscription model impacted your work as an Advisor?

A couple of my coworkers asked me about the change since I’m an Advisor. I spent a lot of time thinking about this and how it changed my feelings about Impactstory. After the initial knee-jerk reaction to having something “taken away”, I’ve come to the conclusion that it’s an acceptable change. The Paperpile blog post has already outlined many of the possible benefits, so I won’t repeat them here. The bottom line is I feel that I can recommend Impactstory because there’s nothing else like it.

Tell us about the workshops you’re planning on Impactstory for the Fall semester.

Iowa State University only began having conversations around open access with the launch of our institutional repository, Digital Repository @ Iowa State University, in 2012. While the University Library has been very proactive in helping faculty prepare promotion and tenure cases, much of that support has revolved around those dreaded numbers: citations, Journal Impact Factor, and the H-Index. The workshop I am designing will be an introduction to altmetrics with hands-on activities. It will likely end with all participants creating a trial Impactstory account, so that each gets an altmetrics experience tailored just for them.

What’s the best part about your work as a Scholarly Communication Librarian at Iowa State University?

There are huge opportunities on this campus. If you’ve looked at my profile you’ll see that most of my recent work has been on data management planning. That really took off. We got support from the Office of the Vice President of Research, which is also sponsoring a panel discussion planned for Open Access Week, and from other campus units. Everyone is excited about the future of scholarly communications at Iowa State.

What advice would you give other librarians who want to do outreach on altmetrics to their colleagues and faculty?

I think it’s important to frame discussion about altmetrics as part of a larger picture. For example, NSF research grant proposals are judged on something called “broader impacts,” which, in brief, is “the potential of the proposed activity to benefit society and contribute to the achievement of specific, desired societal outcomes” (NSF Proposal Preparation Instructions). Altmetrics could give us some insight into whether a grant has met its broader impacts goals. How many views did the grant-funded video receive? Was it picked up by a news outlet? Does anyone listen to the podcast? These types of activities are not captured any other way, but they are important. Altmetrics can show the reach of research beyond the academy, which is becoming increasingly important as research dollars are spread thinner and thinner.

Thanks, Megan!

As a token of our appreciation for Megan’s hard work, we’re sending her an Impactstory t-shirt of her choice from our Zazzle store.

Megan is just one part of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!

Open Science & Altmetrics Monthly Roundup (July 2014)

Don’t have time to stay on top of the most important Open Science and Altmetrics news? We’ve gathered the very best of the month in this post. Read on!

Impactstory announces a new sustainability model: $5/month subscriptions

Last week, we announced that we’re switching our non-profit sustainability model to a subscription plan: $5 per month after a free, 14-day trial period. From the Impactstory blog:

Our goal has always been for Impactstory to support a second scientific revolution, transforming how academia finds, shares, understands, and rewards research impact. Today we believe in that goal more than ever. That’s why we’re a nonprofit, and always will be. But this transformation is not going to happen overnight. We need a sustainability model that can grow with us, beyond our next year of Sloan and NSF funding. This is that model.

So what does five bucks a month buy you? It buys you the best place in the world to learn and share your scholarly impact. It buys you a profile not built on selling your personal data, or cramming your page with ads, or our ability to hustle up more funding.

Five bucks buys you a profile built on a simple premise: we’ll deliver real, practical value to researchers, every day. And we’ll do it staying a nonprofit that’s fiercely committed to independence, openness, and transparency.

To read the full announcement, check out last Thursday’s post.

The K(ardashian)-Index debuts

Neil Hall has caused a stir with his paper, “The Kardashian index: a measure of discrepant social media profile for scientists” published last week in Genome Biology. The tongue-in-cheek article outlines Hall’s idea for a metric that identifies scientists whose presence on Twitter isn’t matched by a record of scholarly impact, evidenced by many citations to their work. Here’s how the index works:

K-index = F(a) / F(c)

Where “F(a) is the actual number of twitter followers of researcher X and F(c) is the number researcher X should have given their citations.”
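To make the arithmetic concrete, here’s a minimal sketch in Python. The constant and exponent in F(c) (43.3 × C^0.32) come from the curve Hall fits in the paper; the function names are ours:

```python
def expected_followers(citations):
    """F(c): the Twitter followers a researcher 'should' have, per Hall's fitted curve."""
    return 43.3 * citations ** 0.32

def kardashian_index(followers, citations):
    """K-index = F(a) / F(c): actual followers divided by expected followers."""
    return followers / expected_followers(citations)

# A researcher with 1,000 citations and 2,000 followers scores just over 5 --
# the threshold above which Hall jokingly labels someone a "Science Kardashian".
print(round(kardashian_index(2000, 1000), 2))
```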

While many viewed Hall’s paper as being all in good fun, some are concerned that by denigrating those with more Twitter followers than would be “appropriate” given their number of citations, it reinforces the idea that a very narrow type of scholarly impact is the type of impact that matters most, above and beyond one’s ability to communicate with others about the work they’re doing.

And by making fun of the idea that there might be more flavors of impact than traditionally assumed, we disincentivize researchers from ever breaking from the conservative approaches to measuring impact–approaches that no longer fully reflect reality for those practicing web-native science.

Huge progress made on 20+ Open Science projects at Mozilla Science Global Sprint

On July 22 New Zealand Standard Time, an international team of coders and scientists began a 52-hour sprint to improve Open Science lessons and learning materials, teaching tools, and software and standards for better science. The sprint was organized by Mozilla Science and coordinated virtually across the world using collaborative notepads, video conferencing software, and GitHub. Among the improvements made to Open Science software and standards was work done on Scholarly Markdown, the Open Access Button, and reproducible research guidelines. Improvements to teaching materials included bioinformatics, medical imaging, and oceanography capstone examples for Software Carpentry courses; Data Carpentry training materials like social science examples and lessons on Excel; and a great guide to using Excel for science. For more info, including can’t-miss links to other great Open Science projects, check out the Mozilla Science blog.

Other Open Science & Altmetrics News

  • Open Notebook Science marches on at the Jean Claude Bradley Memorial Symposium: In early July, Open Science advocates gathered for a one-day symposium celebrating the life and work of Jean Claude Bradley, Open Notebook Science pioneer. Some of Open Science’s finest minds presented at the meeting, including Antony Williams (Royal Society of Chemistry) and Peter Murray-Rust (Cambridge University). For more info, including links to the presentations, visit the JCBMS wiki.

  • 1:am altmetrics conference dates announced: The organizers of London’s first altmetrics conference released meeting dates and a preliminary lineup. 1:am will be held September 25-26, 2014 at the Wellcome Collection in London. Speaking will be publisher, researcher, and institutional representatives including Jennifer Lin of PLOS, Mike Thelwall of the University of Wolverhampton, Arfon Smith of GitHub, and Sarah Callaghan of the Research Data Alliance’s Metrics working group. Impactstory will also be in (virtual) attendance, outlining our non-profit’s vision for an Open altmetrics infrastructure. Sound interesting? Check out the 1:am website for more information and to purchase tickets.

  • Digital Science-backed startups had a big month: The innovative Macmillan Publishing subsidiary, Digital Science, had two cool announcements for the Open Science community in July: they invested in Write LaTeX, the startup responsible for Overleaf, a real-time, collaborative word processing environment for authoring scientific publications; and Figshare (who Digital Science also backs) was named Wired UK’s Startup of the Week. Congrats!

  • As WSSSPE2 approaches, killer papers on software sustainability and impacts are going online: The second Working towards Sustainable Software for Science: Practice and Experiences (WSSSPE) workshop is still months away, but we’re already seeing awesome papers like this one by Dan Katz (NSF) and Arfon Smith (GitHub) on creating mechanisms for assigning credit to software creators, and this one by James Howison (University of Texas at Austin) that proposes retracting bit-rotten publications in order to incentivize researchers to keep their research software accessible and usable. It’s obvious that excellent research will be shared at WSSSPE2 in November; for more information on the conference, check out the WSSSPE2 website.

  • The 2014 Open Knowledge Festival was a resounding success: Reports from the 2014 Open Knowledge Festival came streaming in across the Internet not long after the meeting ended in mid-July. Some highlights of the coverage: the OKFestival’s own Storify feeds describe the wealth of activities that happened at the Fest; festival goers were treated to excellent company and conversation at the ScienceOpen-sponsored ice cream break; and Lou Woodley’s apt write-up of the entire Festival, which drove home the point that in-person meetings are important–they bring like-minded people together and create opportunities for collaboration that you don’t often get by watching a meeting’s livestream.

Stay connected

Speaking of “bringing like-minded people together”: we share altmetrics and Open Science news as it happens on our Twitter, Google+, Facebook, and LinkedIn pages. And if you don’t want to miss next month’s news roundup, remember that you can sign up to get these posts and other Impactstory news delivered straight to your inbox.

Starting today, Impactstory profiles will cost $5/month. Here’s why that’s a good thing.

Starting today, Impactstory profiles cost $5 per month.

Why? Because our goal has always been for Impactstory to support a second scientific revolution, transforming how academia finds, shares, understands, and rewards research impact. That’s why we’re a nonprofit, and always will be. But (news flash), that transformation is not going to happen overnight. We need a sustainability model that can grow with us, beyond our next year of Sloan and NSF funding. This is that model.

So what does five bucks a month buy you? It buys you the best place in the world to learn and share your scholarly impact. It buys you a profile not built on selling your personal data, or cramming your page with ads, our ability to hustle up more funding, or a hope that Elsevier buys us (nonprofits don’t get acquired).

Five bucks buys you a profile built on a simple premise: we’ll deliver real, practical value to real researchers, every day. And we’ll do it staying a nonprofit that’s fiercely committed to independence, openness, and transparency. Want to fork our app and build a better one? Awesome, here’s all our code. Want access to the data behind your profile? Of course: it’s one click away, in JSON or CSV, as open as we can make it. And that ain’t changing. It’s who we are.

We’ve talked to a lot of users that feel $5/month is a fair deal. Which is great; we agree. But we know some folks may feel differently, and that’s great too. Because if you’re in that second group, we want to hear from you. We’re passionate about building the online profile you do think is worth $5 a month. In fact, we’re doing a huge round of interviews right now…if you’ve got ideas, drop us a line at team@impactstory.org and we’ll schedule a chat. Let’s change the world, together.

New signups will get a 14-day free trial. If you’re a user now, you’ll also get a 14-day trial; plus if you subscribe you’ll get a cool  “Impactstory: Early Adopter” sticker for your laptop. If you’re in a spot where you can’t afford five bucks a month, we understand.  We’ve got a no-questions-asked waiver; just drop us a line showing us how you’re linking to your Impactstory profile in your email signature and we’ll send you a coupon for a free account.

We’re nervous about this change in some ways; it’s not exactly what we’d imagined for Impactstory from the beginning. But we’re confident it’s the right call, and we’re excited about the future. We’re changing the world. And we’re delivering concrete value to users. And we’re not gonna stop.

Your questions, answered: introducing the Impactstory Knowledge Base

We’re launching a new feature today to make it even easier to use Impactstory: the Impactstory Knowledge Base.

We’ve seeded the Knowledge Base with answers to users’ frequently asked questions: how to create, populate and update your Impactstory profile, embed your Impactstory profile in other websites, and more. And we’ll be adding more articles–particularly those aimed at “power users”–in the coming months.

Head over to the Knowledge Base now to check it out!

Got a “how to” you want us to add in our next round of edits to the Knowledge Base? Email us at team@impactstory.org to share it.

7 ways to make your Google Scholar Profile better

Albert Einstein's Google Scholar profile

Google Scholar Profiles are useful, but are not as good as they could be. In our last post, we identified their limitations: dirty data, a closed platform, and a narrow understanding of what constitutes scholarly impact.

That said, Google Scholar Profiles are still an important tool for thousands of academics worldwide. So, how can researchers overcome Google Scholar Profiles’ weaknesses?

In this post, we share 7 essential tips for your Google Scholar Profile. They’ll keep your citation data clean, help you keep tabs on colleagues and competitors, increase your “Googleability,” and more. Read on!

1. Clean up your Google Scholar Profile data

Thanks to Google Scholar Profiles’ “auto add” functionality, your Profile might include some articles you didn’t author.

If that’s the case, you can remove them in one of two ways:

  1. clicking on the title of each offending article to get to the article’s page, and then clicking the trashcan/“Delete” button in the top green bar

  2. from the main Profile page, ticking the boxes next to each incorrect article and selecting “Delete” from the drop-down menu in the top green bar

If you want to prevent incorrect articles from appearing on your profile in the first place, you can change your Profile settings to require Google Scholar to email you for approval before adding anything. To make this change, from your main Profile page, click the “More” button that appears in the top grey bar. Select “Profile updates” and change the setting to “Don’t automatically update my profile.”

Prefer to roll the dice? You can keep a close eye on what articles are automatically added to your profile by signing up for alerts (more info about how to do that below) and manually removing any incorrect additions that appear.

2. Add missing publications to your Profile

Google Scholar is pretty good at adding new papers to your profile automatically, but sometimes articles can fall through the cracks.

To add an article, click “Add” in the top grey bar on the main Profile page. Then, you can add your missing articles in one of three ways:

  1. Click the “Add article manually” link in the left-hand navigation bar. On the next page, add as much descriptive information about your article, book, thesis, patent, or other publication as possible. The more metadata you add, the better a chance Google Scholar has of finding citations to your work.

  2. Click “Add articles” in the left-hand navigation bar to get a list of articles that Google Scholar thinks you may have authored. Select the ones you’ve actually authored and add them to your profile by clicking the “Add” button at the top.
  3. Select “Add article groups” from the left-hand navigation bar to review groups of articles that Scholar thinks you may have authored under another name. This is a new feature that’s less than perfect–hence we’ve listed it as a last choice for ways to add stuff to your profile.

Got all your publications added to your Profile? Good, now let’s move on.

3. Increase your “Googleability”

One benefit to Google Scholar Profiles is that they function as a landing page for your publications. But that functionality only works if your profile is set to “public.”

Double-check your profile visibility by loading your profile and, at the top of the main page, confirming that it reads, “My profile is public” beneath your affiliation information.

If it’s not already public, change your profile visibility by clicking the “Edit” button at the top of your profile, selecting “My profile is public”, and then clicking “Save”.

4. Use your Google Scholar Profile data to get ahead

Though Google Scholar Profiles’ limitations mean you can’t use them to completely replace your CV, you can use your Profile data to enhance your CV. You can also use your Profile data in annual reports, grant applications, and other instances where you want to document the impact of your publications.

Google Scholar doesn’t allow users to download a copy of their citation data, unfortunately. Any reuse of Google Scholar Profile data has to be done the old-fashioned way: copying and pasting.

That said, a benefit of regularly updating your CV to include copied-and-pasted Google Scholar Profile citations is that it’s a low-tech backup of your Google Scholar Profile data–essential in case Google Scholar is ever deprecated.
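Since there’s no export feature, even a tiny script can turn those pasted rows into a dated CSV backup. This is just a sketch under an assumed paste format (tab-separated title, citation count, and year); what your browser actually produces when you copy from the Profile table will vary, so adapt the parsing accordingly:

```python
import csv
import datetime

# Assumed format: one publication per line, tab-separated as
# "Title<TAB>Cited by<TAB>Year". Adjust to match what your browser pastes.
pasted = """Altmetrics in the wild\t42\t2013
Scholarly metrics redux\t7\t2014"""

rows = [line.split("\t") for line in pasted.splitlines()]

# A date-stamped filename makes it easy to keep periodic snapshots.
backup_name = f"scholar-backup-{datetime.date.today()}.csv"
with open(backup_name, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "cited_by", "year"])
    writer.writerows(rows)
```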

5. Stay up-to-date when you’ve been cited

One benefit to Google Scholar Profiles is that you can “Follow” yourself to get alerts whenever you’re cited. As we described in our Ultimate guide to staying up-to-date on your articles’ impact:

Visit your profile page and find the blue “Follow” button at the top of your profile. Click it. Enter your preferred email address in the box that appears, then click “Create alert.” You’ll now get an alert anytime you receive a citation.

Easy, right?

You can also click “Follow new articles” on your own profile to be emailed every time a new article is added automatically–key to making sure the data in your Profile is clean, as we discussed in #1 above.

6. …and stay up-to-date on your colleagues and competitors, too

Similarly, you can sign up to receive an email every time someone else receives a new citation or publishes a new article. (I like to think of it as “business intelligence” for busy academics.) It’s as easy as searching for them by name and, on their profile page, clicking “Follow new articles” or “Follow new citations.”

7. Tell Google Scholar how it can improve

Finally, Google Scholar–like most services–relies on your feedback in order to improve. Get in touch with them via this Contact Us link to let them know how they can better their platform. (Be sure to mention that an open API is key to letting other services fill the gaps Google Scholar can’t, especially with respect to altmetrics!)

Do you have Google Scholar Profile hacks that you use to get around your Profile’s limitations? Leave them in the comments below or join the conversation on Twitter @impactstory!

Updated 12/19/2014 to reflect changes in the Google Scholar profile redesign.

4 reasons why Google Scholar isn’t as great as you think it is

These days, you’d be hard-pressed to find an academic who doesn’t think that Google Scholar Profiles are the greatest thing since sliced bread. Some days, I agree.

Why? Because my Google Scholar Profile captures more citations to my work than Web of Knowledge or Scopus, automatically adds (and tracks citations for) new papers I’ve published, is better at finding citations that appear in non-English language publications, and gives me a nice fat h-index. I’m sure you find it valuable for similar reasons.

And yet, Google Scholar is still deeply flawed. It has some key disadvantages that keep it from being as awesome as most imagine that it is.

In this post, I’m going to do some good ol’ fashioned consciousness-raising and describe Google Scholar Profiles’ limitations. And in our next post, I’ll share tips I’ve learned for getting the most out of your Google Scholar Profile, limitations be darned.

1. Google Scholar Profiles include dirty data

Let’s begin with the most basic element of your Profile: your name. If your name includes diacritics, ligatures, or even apostrophes, Google Scholar may be missing citations to your work. (Sorry, O’Connor!) And if you have a common name, it’s likely you’ll end up with others’ publications in your Profile, which you are unfortunately responsible for identifying and removing. (We’ll cover how to do that in our next post.)

Now, what about the quality of citations? Google Scholar claims to pull citations from anywhere on the scholarly web into your Profile, but their definition of “the scholarly web” is less rigorous than many people realize. For example, our co-founder, Heather, has citations on her Google Scholar Profile for a Friendfeed post. And others have found Google Scholar citations to their work in student handbooks and LibGuides–not the worst places you can get a cite from, but still: Nature they ain’t.

Google Scholar citations are also, like any metric, susceptible to gaming. But whereas organizations like PLOS and Thomson Reuters’ Journal Citation Reports will flag and ban those found to be gaming the system, Google Scholar does not respond quickly (if at all) to reports of gaming. And as researchers point out, Google’s lack of transparency with respect to how data is collected makes gaming all the more difficult to discover.

The service also misses citations in a treasure-trove of scholarly material that’s stored in institutional repositories. Why? Because Google Scholar won’t harvest information from repositories in the format that repositories across the world tend to use (Dublin Core).

Google Scholar Profile data is far from perfect, but that’s a small problem compared to the next issue.

2. Google Scholar Profiles may not last

Remember Google Reader? Google has a history of killing beloved products when the bottom line is in question. It’s no exaggeration to say that Google Scholar Profiles could go away at any moment.

To me, it’s not unlike the problem of monoculture in agriculture. Monoculture can be a good thing. For those unfamiliar with the term, monoculture is when farmers identify the most powerful species of a crop–the one that is easiest to grow and yields the best harvest year after year–and then grow that crop exclusively. Google Scholar Profiles were, for a long time, the easiest-to-use and most powerful citation reports available to scholars, and so Google Scholar has become one of the most-used platforms in academia.

But monoculture is also risky. Growing only one species of a crop can be catastrophic to a nation’s food supply if, for example, that species were wiped out by blight one year. Similarly, academia’s near-singular dependence on Google Scholar Profile data could be harmful to many if Google Scholar were to be shelved.

3. Google Scholar won’t allow its Profiles to be improved upon

Other issues aside, it’s worth acknowledging that Google Scholar Profiles are very good at doing one thing: finding citations on the scholarly web. But that’s pretty much all they do, and Google is actively preventing anyone else from improving upon their service.

It’s been pointed out before that the lack of a Google Scholar API means that no one can add value to or improve the tool. Services like Impactstory cannot include citations from Google Scholar in our profiles, nor can we build upon Google Scholar Profiles to find and display metrics beyond citations or automatically push new publications to Profiles. Based on the number of Google Scholar-related help tickets we receive, this lack of interoperability is a major pain point for researchers.

4. Google Scholar Profiles only measure a narrow kind of scholarly impact

Google Scholar Profiles aren’t designed to meet the needs of web-native scholarship. These days, researchers are putting their software, data, posters, and other scholarly products online alongside their papers. Yet Google Scholar Profiles don’t allow them to track citations–nor any other type of impact indicator, including altmetrics–to those outputs.

Google Scholar Profiles also promote a much-maligned One Metric to Rule Them All: the h-index. We’ve already talked about the many reasons why scholars should stop caring about the h-index; most of those reasons stem from the fact that h-indices, like Google Scholar Profiles, aren’t designed with web-native scholarship in mind.

Now that we’re clear on the limitations of Google Scholar Profiles, we’ll help you overcome ‘em by sharing 7 essential workarounds for your Google Scholar Profile in tomorrow’s post. Stay tuned!

Impactstory Advisor of the Month: Keith Bradnam (July 2014)

Headshot of Keith Bradnam

Meet our Advisor of the Month for July, Keith Bradnam! Keith is an Associate Project Scientist with the Korf Lab at UC Davis and active science communicator (read his blog, ACGT, and follow him on Twitter at @kbradnam).

Why is Keith our Advisor of the Month? Because he shared his strategies for success as a scientist at a well-attended Impactstory info session he organized at UC Davis earlier this month. Plus, he’s helping us to improve Impactstory every day, submitting bug reports and ideas for new features on our Feedback forum.

We recently emailed Keith to learn more about why he decided to become an Advisor, what made his recent workshop so great, and his thoughts on using blogging to become a more successful scientist.

Why did you initially decide to join Impactstory?

When I first heard about Impactstory, it just seemed like such an incredibly intuitive and useful concept. Publications should not be seen as the only form of scientific ‘output’, and having a simple way to gather together the different aspects of my academic life seemed like such a no-brainer.

In the past, I have worked in positions where I helped develop database resources for other scientists. These types of non-research positions often provide an opportunity for only one formal publication a year (e.g. a paper in the annual Nucleic Acids Research ‘Database’ issue). This is a really poor reflection of the contributions that many bioinformaticians (and web programmers, database administrators, etc.) make to the wider scientific community. In the past we didn’t have tools like GitHub to easily show the world what software we were helping develop.

Why did you decide to become an Advisor?

Impactstory is a great service and the more people that get to know about it and use it, the better it will become. I want to be part of that process, particularly because I still think that there are many people who are stuck in the mindset that a CV or résumé is the only way to list what you have done in your career.

I’m really hopeful that tools like Impactstory will forever change how people assess the academic achievements of others.

How have you been spreading the word about Impactstory in your first month as an Advisor?

I’ve mainly been passing on useful tweets from the @Impactstory Twitter account and keeping an eye on the Impactstory Feedback Forums where I’ve been adding some suggestions of my own and replying to questions from others. Beyond that, I’ve evangelized about Impactstory to my lab, and I gave a talk on campus to Grad students and Postdocs earlier this month.

How did your workshop go?

Well, perhaps I’m biased 🙂 but I think it was well-received. There was a good mix of Grad students, Postdocs, and some other staff, and I think people were very receptive to hearing about the ways that Impactstory could be beneficial to them. They also asked lots of pertinent questions, which have led to some new feature requests for the Impactstory team to consider. [You can view a video of Keith’s presentation over at his blog.]

You run a great blog about bioinformatics–ACGT. Why do you blog, and would you recommend it to others?

Blogging is such an incredibly easy way to share useful information with your peers. Sometimes that information can be succinct, factual material (these are the steps that I took to install software ‘X’), sometimes it can be opinion or commentary (this is why I think software ‘X’ will change the world), and sometimes it can just be entertainment or fun (how I used software ‘X’ to propose to my wife).

I think we’re currently in a transition period where people no longer see ‘blogging’ as being an overly geeky activity. Instead, I think that many people now appreciate that blogging is just a simple tool for quickly disseminating information.

I particularly recommend blogging to scientists. Having trouble following a scientific protocol and need some help? Blog about it. Think you have made an improvement on an existing protocol? Blog about it. Have some interesting thoughts about a cool paper that you have just read? Blog about it. There are a million and one topics that will never be suitable for a formal peer-reviewed publication, but which would make fantastic ideas for a blog post.

Blogging may be beneficial for your career by increasing your visibility amongst your peers, but more importantly I think it really improves your writing skills and — depending on what you blog about — you are giving something back to the community.

What’s the best part about your current gig as an Associate Project Scientist with the Korf Lab at UC Davis?

I think that most people would agree that if you work on a campus where you get to walk past a herd of cows every day, then that’s pretty hard to beat! However, the best part of my job is that I get to spend time mentoring others in the lab (students, not cows), and I like to think that I’m helping them become better scientists, and better communicators of science in particular.

Thanks, Keith!

As a token of our appreciation for Keith’s hard work, we’re sending him an Impactstory t-shirt of his choice from our Zazzle store.

Keith is just one part of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!

Open Science & Altmetrics Monthly Roundup (June 2014)

Don’t have time to stay on top of the most important Open Science and Altmetrics news? We’ve gathered the very best of the month in this post. Read on!

UK researchers speak out on assessment metrics

There are few issues more polarizing in academia right now than research assessment metrics. A few months back, the Higher Education Funding Council for England (HEFCE) asked researchers to submit their evidence and views on the issue, and to date many well-reasoned responses have been shared.

Some of the highlights include Ernesto Priego’s thoughtful look at the evidence for and against; this forceful critique of the practice, penned by Sabaratnam and Kirby; a call to accept free market forces “into the internal dynamics of academic knowledge production” by Steve Fuller; and this post by Stephen Curry, who shares his thoughts as a member of the review’s steering group.

Also worth a look is Digital Science’s “Evidence for excellence: has the signal overtaken the substance?”, which studies the unintended effects that past UK assessment initiatives have had on researchers’ publishing habits.

Though the HEFCE’s recommendations will mainly affect UK researchers, the steering group’s findings may set a precedent for academics worldwide.

Altmetrics researchers agree: we know how many, now we need to know why

Researchers gathered in Bloomington, Indiana on June 23 to share cutting-edge bibliometrics and altmetrics research at the ACM WebScience Altmetrics14 workshop.

Some of the highlights include a new study that finds that only 6% of articles that appear in Brazilian journals have 1 or more altmetrics (compared with ~20% of articles published in the “global North”); findings that use of Twitter to share scholarly articles grew by more than 90% from 2012 to 2013; a study that found that most sharing of research articles on Twitter occurs in original tweets, not retweets; and a discovery that more biomedical and “layman” terms appear in the titles of research shared on social media than in titles of highly-cited research articles.

Throughout the day, presenters repeatedly emphasized one point: high-quality qualitative research is now needed to understand what motivates individuals to share, bookmark, recommend, and cite research outputs. In other words, we increasingly know how many altmetrics research outputs tend to accumulate and what those metrics’ correlations are–now we need to know why research is shared on the social Web in the first place, and how those motivations influence various flavors of impact.

Librarians promoting altmetrics like never before

This month’s Impactstory blog post, “4 things every librarian should do with altmetrics,” has generated a lot of buzz and some great feedback from the library community. But it’s just one part of a month filled with librarians doin’ altmetrics!

To start with, College & Research Libraries News named altmetrics a research library trend for 2014, and based on the explosion of librarian-created presentations on altmetrics in the last 30 days alone, we’re inclined to agree! Plus, there were librarians repping altmetrics at AAUP’s Annual Meeting and the American Library Association Annual Meeting (here and here), and the Special Libraries Association Annual Meeting featured our co-founder, Heather Piwowar, in two great sessions and Impactstory board member, John Wilbanks, as the keynote speaker.

More Open Science & Altmetrics news

Stay connected

We share altmetrics and Open Science news as-it-happens on our Twitter, Google+, Facebook, or LinkedIn pages. And if you don’t want to miss next month’s news roundup, remember that you can sign up to get these posts and other Impactstory news delivered straight to your inbox.

Impactstory Advisor of the Month: Jon Tennant (June 2014)

Jon Tennant (blogTwitter), a PhD candidate studying tetrapod biodiversity and extinction at Imperial College London, was one of the first scientists to join our recently launched Advisor program.

jon.jpeg

Within minutes of receiving his acceptance into the program, Jon was pounding the virtual pavement to let others know about Impactstory and the benefits it brings to scientists. For this reason–and the fact that Jon has done some cool stuff in addition to his research, like write a children’s book!–Jon’s our first Impactstory Advisor of the Month.

We chatted with Jon to learn more about how he uses Impactstory, what it’s like being an Advisor, and what he’s doing in other areas of his professional life.

Why did you initially decide to create an Impactstory profile?

A couple of years ago, I immersed myself in social media and the whole concept of ‘Web 2.0’. It was clear that the internet was capable of changing many aspects of the way in which we practice, communicate, and assess scientific research. There were so many tools, though, and so much diversity, it was all a bit daunting, especially as someone so junior in their career. Although I guess that’s one of the advantages of being at this stage – I wasn’t tied down to any particular way of ‘doing science’ yet, and was free to experiment.

Having followed the discussions on alternative and article-level metrics, when ImpactStory was released it seemed like a tool that could really make a difference for myself and the broader research community. At the time, it made no sense to me how the outputs of research were assessed – the name or the impact factor of a journal was given far too much meaning, and did nothing to really encapsulate the diversity of ways in which quality or impact, or putative pathways to impact, could be measured. ImpactStory seemed to offer a decent alternative, and hey look – it does! Actually, it’s not an alternative, but a complementary tool for a range of methods in assessing how research is used.

Why did you decide to become an Advisor?

Pretty much for the reasons above! One thing I’m learning as a young scientist is that it’s easy to be part of an echo chamber on social media, advocating altmetrics and all the jazzy new aspects of research, but many scientists aren’t online. Getting those people involved in conversations, and alerting them to cool new tools is made a lot easier as an Advisor.

I reckon this type of community engagement is pretty important, especially in what appears to be such a crucial transitional phase for researchers, including things like open access and data, and the way in which research is assessed (e.g., through the REF here in the UK). ImpactStory obviously has a role in making this much easier for academics.

How have you been spreading the word about Impactstory in your first month as an Advisor?

Mostly sharing stickers! They actually work really well in getting people’s attention. They’re doubly useful when people ask things like “What’s an h-index?”, so you can actually use them as a basis for further discussion. But yeah, I don’t really go out of my way to preach to people about altmetrics and ImpactStory – academics really don’t like being told what they should be doing, especially at my university. I prefer to kind of hang back, wait for discussions, and inject that things like altmetrics exist, and could be really useful when combined with things like a social media presence, or an ORCID, and that they are one of an integrated set of tools that can be really useful for assessing how your research is being used, as well as a kind of personal tracking device. I’d love to hold an ImpactStory/altmetrics Q and A or workshop at some point in the future.

You just wrote a children’s book about dinosaurs–tell us about it!

Let it be known that you brought this up, not me 😉

So, pretty much just by having a social media presence (mostly through blogging), I was asked to write a kids’ book on dinosaurs! Of course I said yes, and along with a talented artist, we created a book with pop-out dinosaurs that you can reconstruct into your very own little models! You can pre-order it here.* I think it’s out in October in the UK and USA. Is there an ImpactStory bit for that…? [ed: Not yet! Perhaps add it as a feature request on our Feedback forum? :)]

* (I don’t get royalties, so it’s not as bad promoting it…)

What’s the best part about your current gig as a PhD student at Imperial College London?

The freedom. I have an excellent supervisor who is happy to let me blog, tweet, attend science communication conferences and a whole range of activities that are complementary to my PhD, as long as the research gets done. So there’s a real diversity of things to do, and being in London there’s always something science-related going on, and there’s a great community vibe too, with people who work within the broader scope of science always coming together and interacting. Of course, the research itself is amazing – I work with a completely open database called the Palaeobiology Database/Fossilworks, where even the methods are open so anyone can play with science if they wish!

Thanks, Jon!

Jon is just one of a growing community of Impactstory Advisors. Want to join the ranks of some of the Web’s most cutting-edge researchers and librarians? Apply to be an Advisor today!

The ultimate guide for staying up-to-date on your data, software, white papers, slide decks and conference posters’ impact

Getting impact alerts for your papers was pretty simple to set up, but what about tracking real-time citations, downloads, and social media activity for your other research outputs?

There are so many types of outputs to track–datasets, software, slide decks, and more. Plus, there seem to be dozens of websites for hosting them! How can you easily keep track of your diverse impacts, as they happen?

Don’t worry–it’s literally our job to stay on top of this stuff! Below, we’ve compiled the very best services that send impact alerts for your research data, software, slide decks, conference posters, technical reports, and white papers.

Research data

Specific data repositories gather and display metrics on use. Here, we go into details on metrics offered by GitHub, Figshare, and Dryad, and then talk about how you can track citations via the Data Citation Index.

GitHub

github_logo.jpg

If you use the collaborative coding website GitHub to store and work with research data, you can enable email alerts for certain types of activities. That way, you’re notified any time someone comments on your data or wants to modify it using a “pull request.”

First, you’ll need to “watch” whatever repositories you want to get notifications for. To do that, visit the repository page for the dataset you want to track, and then click the “Watch” button in the upper right-hand corner and select “Watching” from the drop-down list, so you’ll get a notification when changes are made.

Then, you need to enable notification emails. To do that, log into GitHub and click the “Account Settings” icon in the upper right-hand corner. Then, go to “Notification center” on the left-hand navigation bar. Under “Watching,” make sure the “Email” box is ticked.

Other GitHub metrics are also useful to researchers: “stars” tell you if others have bookmarked your repository, and “forks”–a precursor to a pull request–indicate if others have adapted some of your code for their own uses. Impactstory notification emails (covered in more detail below) include both of these metrics.
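If you’d rather check these numbers yourself between notification emails, GitHub’s public REST API exposes them for any public repository. Here’s a minimal sketch in Python–the helper function names and the example repository are our own, illustrative choices, not part of any official client:

```python
import json
import urllib.request


def extract_repo_metrics(repo_info):
    """Pull the star, fork, and watcher counts out of the JSON dict
    returned by GitHub's repository endpoint
    (GET https://api.github.com/repos/{owner}/{repo})."""
    return {
        "stars": repo_info["stargazers_count"],
        "forks": repo_info["forks_count"],
        "watchers": repo_info["subscribers_count"],
    }


def fetch_repo_metrics(owner, repo):
    """Fetch a public repository's metadata from the GitHub API.
    No authentication is needed for public repos, though unauthenticated
    requests are rate-limited."""
    url = f"https://api.github.com/repos/{owner}/{repo}"
    with urllib.request.urlopen(url) as resp:
        return extract_repo_metrics(json.load(resp))


if __name__ == "__main__":
    # Example: check the metrics for a (hypothetical) data repository.
    print(fetch_repo_metrics("octocat", "Hello-World"))
```

You could run a script like this on a schedule (say, via cron) and alert yourself when the counts change, though for most people the Impactstory digest emails described below are less work.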

GitHub, Dryad and Figshare metrics via Impactstory

Screen Shot 2014-06-06 at 953.png

Dryad data repository and Figshare both display download information on their web sites, but they don’t send notification emails when new downloads happen. And GitHub tracks stars and forks, but doesn’t include them in their alert emails. Luckily, Impactstory alerts notify you when your data stored on these sites receives the following types of new metrics:

                      Dryad   Figshare   GitHub
pageviews               X        X
downloads               X        X
shares                           X
stars (bookmarks)                           X
forks (adaptations)                         X

Types of data metrics reported by Impactstory

To set up alerts, create an Impactstory profile and connect your profile to ORCID, Figshare, and GitHub using the “Import from accounts” button at the top of your profile. (If you already have an Impactstory profile, this button will appear as a blue “Connect more accounts” button instead.) This will allow you to auto-import many of your datasets. If any of your datasets are missing, you can add them one by one by clicking the “Import individual products” icon and providing links and DOIs. Once your profile is set up, you’ll start to receive a notification email once every 1-2 weeks.

Data Citation Index

If you’ve deposited your data into a repository that assigns a DOI, the Data Citation Index (DCI) is often the best way to learn if your dataset has been cited in the literature.

To create an alert, you’ll need a subscription to the service, so check with your institution to see if you have access. If you do, you can set up an alert by first creating a personal registration with the Data Citation Index; click the “Sign In” button at the top right of the screen, then select “Register”. (If you’re already registered with Web of Knowledge to get citation alerts for your articles, there’s no need to set up a separate registration.)

Then, set your preferred database to the Data Citation Index by clicking the orange arrow next to “All Databases” to the right of “Search” in the top-left corner. You’ll get a drop-down list of databases; select “Data Citation Index.”

Now you’re ready to create an alert. On the Basic Search screen, search for your dataset by its title. Click on the appropriate title to get to the dataset’s item record. In the upper right hand corner of the record, you’ll find the Citation Network box. Click “Create citation alert.” Let the Data Citation Index know your preferred email address, then save your alert.

Software

The same GitHub metrics you can track for data can be used to track software impact, too. To receive alerts about comments on your code and pull requests, follow the notification sign-up instructions outlined under Research Data > GitHub, above. To receive alerts when your software gets stars or forks, sign up for Impactstory alerts according to the instructions under Research Data > GitHub, Dryad, and Figshare.

Impactstory and others are working on ways to track software impact better–stay tuned!

Technical reports, working papers, conference slides & posters

Slideshare sends alerts for metrics your slide decks and posters receive. Impactstory includes some of these metrics from Slideshare in our alert emails. Impactstory alerts also include metrics for technical reports, working papers, conference slides, and posters hosted on Figshare.

Slideshare

w8Zu8Ow.png

Though Slideshare is best known for allowing users to view and share slide decks, some researchers also use it to share conference posters. The platform sends users detailed weekly alert emails about new metrics their slide decks and posters have received, including the number of total views, downloads, comments, favorites, tweets, and Facebook likes.

To receive notification emails, go to Slideshare.net and click the profile icon in the upper right-hand corner of the page. Then, click “Email” in the left-hand navigation bar, and check the “With the statistics of my content” box to start receiving your weekly notification emails.

Figshare and Slideshare metrics via Impactstory

You can use Impactstory to receive notifications for downloads, shares, and views for anything you’ve uploaded to Figshare, and for the downloads, comments, favorites, and views for slide decks and posters uploaded to Slideshare.

First, create an Impactstory profile and connect your profile to Figshare and Slideshare using the “Import from accounts” button at the top of your profile. (If you already have an Impactstory profile, this button will appear as a “Connect more accounts” button instead.) For both services, click the appropriate button, then provide your profile URL when prompted. Your content will then auto-import.

If any Figshare or Slideshare uploads are missing–which might be the case if your collaborators have uploaded content on your behalf–you can add them one by one by clicking the “Import stuff” icon at the upper right-hand corner of your profile, clicking the “Import individual products” link, and then providing the Figshare DOIs and Slideshare URLs. Once your profile is set up, you’ll start to receive a notification email once every 1-2 weeks.

Videos

Vimeo and Youtube both provide a solid suite of statistics for videos hosted on their sites, and you can use those metrics to track the impact of your video research outputs. To get alerts for these metrics, though, you’ll need to sign up for Impactstory alerts.

Vimeo and Youtube metrics via Impactstory

Vimeo tracks likes, comments, and plays for videos hosted on their platform; Youtube reports the same, plus dislikes and favorites. To get metrics notifications for your videos hosted on either of these sites, you’ll need to add links to your videos to your Impactstory profile.
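For Vimeo, the per-video numbers are also available programmatically through its public “simple API,” which returns JSON for any public video. A minimal sketch in Python–assuming that endpoint and its `stats_number_of_*` field names; the helper function is ours:

```python
import json
import urllib.request


def extract_video_stats(video_info):
    """Pick the play, like, and comment counts out of the JSON dict
    returned by Vimeo's simple API
    (GET https://vimeo.com/api/v2/video/{video_id}.json)."""
    return {
        "plays": video_info["stats_number_of_plays"],
        "likes": video_info["stats_number_of_likes"],
        "comments": video_info["stats_number_of_comments"],
    }


def fetch_video_stats(video_id):
    """Fetch stats for a public Vimeo video. The endpoint returns a
    one-element JSON list, so we unwrap the first item."""
    url = f"https://vimeo.com/api/v2/video/{video_id}.json"
    with urllib.request.urlopen(url) as resp:
        return extract_video_stats(json.load(resp)[0])
```

As with the GitHub sketch above, this is only useful if you want raw numbers on your own schedule; for email alerts, the Impactstory setup below is the simpler route.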

Once you’ve signed up for an Impactstory profile, click the “Import stuff” icon at the upper right-hand corner of your profile, then click the “Import individual products” link. There, add URLs for each video and click “Import”. Once they’re imported to your profile, you’ll start to receive notifications for new video metrics once every 1-2 weeks.

Are we missing anything? We’ve managed to cover the most popular platforms in this post, but we’d love to get your tips on niche data repositories, video platforms, and coding sites that keep you up to date on your impact by sending alerts. Leave them in the comments below!

Bookmark this guide. This post–and our other Ultimate Guide for articles–will be updated over time, as services change.