Fetch multiple DOIs in one OpenAlex API request

Did you know that you can request up to 50 DOIs in a single API call? That’s possible thanks to OR queries in the OpenAlex API, and it looks like this:

https://api.openalex.org/works?filter=doi:10.3322/caac.21660|https://doi.org/10.1136/bmj.n71|10.3322/caac.21654&mailto=support@openalex.org

We simply separate the DOIs with the pipe symbol ‘|’. That query returns the three works associated with the three DOIs we entered. As you can see in the query, both the short form DOI and the long form DOI (as a URL) are supported.

This saves time and resources when requesting many records. The technique works with all external IDs in OpenAlex, including OpenAlex IDs, PubMed IDs (PMIDs), and PubMed Central IDs (PMCIDs).

Example with Python requests

Let’s write an example Python script that shows how to fetch works for DOIs in batches of 50 using requests:

import requests

# DOIs can be given in short form or as full https://doi.org/ URLs
dois = ["10.3322/caac.21660", "https://doi.org/10.1136/bmj.n71", "10.3322/caac.21654"]

# join the DOIs with the pipe symbol to build a single OR filter
pipe_separated_dois = "|".join(dois)
r = requests.get(f"https://api.openalex.org/works?filter=doi:{pipe_separated_dois}&per-page=50&mailto=support@openalex.org")
works = r.json()["results"]

for work in works:
    print(work["doi"], work["display_name"])

# results
https://doi.org/10.3322/caac.21660 Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries
https://doi.org/10.1136/bmj.n71 The PRISMA 2020 statement: an updated guideline for reporting systematic reviews
https://doi.org/10.3322/caac.21654 Cancer Statistics, 2021
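
If you have more than 50 DOIs, just split the list into chunks of 50 and make one request per chunk. Here is a minimal sketch of that idea (the fetch_works_by_doi helper is ours, not part of the API):

import requests

def fetch_works_by_doi(dois, mailto="support@openalex.org", batch_size=50):
    """Yield OpenAlex works for any number of DOIs, at most 50 per request."""
    for i in range(0, len(dois), batch_size):
        batch = dois[i:i + batch_size]
        params = {
            "filter": "doi:" + "|".join(batch),  # pipe-separated OR filter
            "per-page": batch_size,
            "mailto": mailto,
        }
        r = requests.get("https://api.openalex.org/works", params=params)
        r.raise_for_status()
        for work in r.json()["results"]:
            yield work

Calling fetch_works_by_doi(my_dois) keeps you to one API call per 50 DOIs, no matter how long the list is.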

Hope this is helpful!

New OpenAlex API features – continents, regions, and more!

You can now use the OpenAlex API to filter and group by continents and large geographic regions, such as the Global South. The full documentation is here.

To see a list of institutions in Europe you can do:

https://api.openalex.org/institutions?filter=continent:europe

So simple! You can group by continent as well. This returns counts of works, grouped by the continent of the institutions the works’ authors are affiliated with:

https://api.openalex.org/works?group-by=institutions.continent

{
  "key": "Q46",
  "key_display_name": "Europe",
  "count": 26968686
},
{
  "key": "Q49",
  "key_display_name": "North America",
  "count": 25175848
},
{
  "key": "Q48",
  "key_display_name": "Asia",
  "count": 24805214
}...

The key field is the Wikidata identifier for the continent, such as South America (Q18).
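
Here’s a quick Python sketch that prints those continent counts, assuming the grouped results are returned under a top-level "group_by" key:

import requests

r = requests.get(
    "https://api.openalex.org/works",
    params={"group-by": "institutions.continent", "mailto": "support@openalex.org"},
)
# each group carries the key, key_display_name, and count fields shown above
for group in r.json()["group_by"]:
    print(group["key"], group["key_display_name"], group["count"])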

Querying the Global South

The Global South is a term used to identify regions within Latin America, Asia, Africa, and Oceania. We used data from the United Nations to build a list of countries associated with the Global South. It’s available as a boolean filter like:

https://api.openalex.org/institutions?filter=is_global_south:true

This allows for some very cool groupings, such as “show me authors associated with the Global South, grouped by country”:

https://api.openalex.org/authors?filter=last_known_institution.is_global_south:true&group-by=last_known_institution.country_code

New API Filters

We’ve added new filters for works:

  • has_pmid – works that have a PubMed identifier
  • has_pmcid – works that have a PubMed Central identifier
  • repository – works that can be found at the given repository, based on venue ID
  • version – works where the given version is available, such as acceptedVersion
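
These filters can be combined in a single query; commas between filters act as a logical AND. For example, to find works that have a PubMed Central identifier and an accepted version available:

https://api.openalex.org/works?filter=has_pmcid:true,version:acceptedVersion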

Concepts Improvements

As requested by OpenAlex users, we modified the concepts tree so that it is a true hierarchy. This means that when you search for works with the concept Computer Science, you also get works tagged with its sub-concepts, such as Artificial Intelligence.

Meet Casey – Now full time with OurResearch

Hi, I’m Casey. I am excited to announce that I am now full time with OurResearch as a software engineer working on OpenAlex and Unpaywall!

My Journey

I freelanced for OurResearch prior to joining full time this summer. With Jason and Heather’s help I maintained Paperbuzz and Cite-As, and built out a project to catalog academic journal pricing. Freelancing let me improve my Python and data management skills so I could tackle bigger projects.

Prior to freelancing I enjoyed a career in the US Air Force, which I am proud of. I’m fortunate to have hundreds of hours as aircrew on multiple aircraft, as well as a variety of technical and leadership assignments. So if you ever want to talk airplanes be ready because I might talk your ear off!

My academic experience comes from my time in university pursuing advanced education.

My Vision with OurResearch

In December I helped build the API and set up Elasticsearch for a project called OpenAlex. That project has continued to grow and I love to see how many people are using it. My core job with OpenAlex is to provide front-line customer support, as well as maintain and improve the API and search infrastructure. I’m also working on several parts of Unpaywall.

It’s incredible that OurResearch tools are free and openly available. I find OurResearch shares core values with my time in the Air Force: small teams empowered to make decisions, humble and accepting of feedback in order to make things better. That’s why we believe our community of users is invaluable in keeping those tools free, open, and easy to use.

So we will listen to your feedback, fix bugs and implement features quickly, and continue to maintain our documentation so the dataset and APIs are as frictionless as they can be. We welcome and need your help with this mission! So do not hesitate to contact me or the team.

I look forward to improving OpenAlex and Unpaywall, and to meeting those of you using OurResearch products!

– Casey

Fulltext search in OpenAlex

We’re excited to announce that we’ve added fulltext search to 57 million articles in OpenAlex, based on data from the General Index. This feature moves OpenAlex’s search function beyond title and abstract, covering the full text of 57 million documents, resulting in ~30 times more search results for many keyword searches!

What is the General Index?

The General Index is a very large database of n-grams that were extracted from 107 million journal articles. It’s openly available without restrictions, and is supported by 100 prominent professors and researchers.

An n-gram is a contiguous sequence of words from a document. For example, in the sentence “the quick brown fox jumped”, “quick brown fox” is a 3-gram and “brown fox” is a bigram (2-gram).

The n-grams from the General Index look like this:

{
    "ngram": "sheet of cellulose nitrate",
    "ngram_tokens": 4,
    "ngram_count": 4
},
{
    "ngram": "high than the diameter",
    "ngram_tokens": 4,
    "ngram_count": 1
}

So we know that the phrase “sheet of cellulose nitrate” occurred in the document four times. The General Index used a tool called spaCy to extract n-grams from articles, capturing from 5-grams down to unigrams from each document.

You cannot recreate a document from these n-grams due to the way that the text was processed (we checked this carefully). However, with the n-grams we know which phrases exist in each document and how many times each was mentioned… which is great for search!

Enabling fulltext search

We matched the n-grams that had metadata to records in OpenAlex, then loaded the n-grams into Elasticsearch. The result is fine-grained, fulltext search across many articles in OpenAlex. This allows you to find words and phrases deep within a document. This feature is ready to use today!

Fulltext search is integrated into the main search feature, with priority given to title, then abstract, then fulltext: https://api.openalex.org/works?search=dna.

You can filter records to see those that have fulltext available, and you can search fulltext only.
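
For example, assuming the has_fulltext and fulltext.search filter names (check the docs for the authoritative list):

https://api.openalex.org/works?filter=has_fulltext:true
https://api.openalex.org/works?filter=fulltext.search:cellulose%20nitrate

The first returns only works with fulltext available; the second searches the fulltext n-grams alone, skipping titles and abstracts.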

Can I see the n-grams?

Yes you can! Each Work object in OpenAlex now includes an ngrams_url field; the URL there points to a list of that work’s n-grams.

You can also access a work’s ngrams directly via DOI, by using this REST API endpoint:

/works/:doi/ngrams

So for example, to get the ngrams for the work with DOI 10.1016/s0022-2836(75)80083-0, you can call https://api.openalex.org/works/10.1016/s0022-2836(75)80083-0/ngrams.

And the best part: because these API queries are cached, they can be served even more quickly than the rest of our REST API… so feel free to work through thousands or even millions of DOIs using this endpoint.
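
Here’s a minimal Python sketch for pulling a work’s n-grams via that endpoint (we’re assuming the response lists them under an "ngrams" key, with the fields shown earlier):

import requests

doi = "10.1016/s0022-2836(75)80083-0"
r = requests.get(f"https://api.openalex.org/works/{doi}/ngrams",
                 params={"mailto": "support@openalex.org"})
r.raise_for_status()

# print the ten most-repeated phrases in this document
ngrams = r.json()["ngrams"]
for ngram in sorted(ngrams, key=lambda n: n["ngram_count"], reverse=True)[:10]:
    print(ngram["ngram_count"], ngram["ngram"])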

Exploring the data

Looking across the OpenAlex data set, about 32% of articles published before 2000 have fulltext, and about 25% of articles published between 2000 and 2020 have fulltext. Coverage rises above 50% when a record has a DOI, and above 70% (up to 80% in some years) for articles with more than 50 incoming citations.

We hope you enjoy this new feature! We’re thankful to The General Index project for making this incredible data set available, and we’re proud to be one of the first organizations to host it in an easy-to-use manner.

Unsub – All Publishers Supported

Unsub is a dashboard that helps you reevaluate your big deal’s value and understand your cancellation options.

For the last few years we’ve supported a small set of very large publishers.

One of the most requested features has been support for more publishers.

As of today – right now – we support all publishers.

We heard you, and we’re super excited to get this into your hands. Here are some important details:

  • All publishers are supported. We no longer support specific publishers, but rather we support any publisher.
  • A mix of publishers is supported. This was another oft-requested feature, mostly related to aggregators, and it arose naturally out of our change to support all publishers. Unsub dashboards no longer filter the titles in your dashboard by publisher – so it’s just as easy for a dashboard to have titles from one publisher as from 20 publishers.
  • Title prices are now required. Now that we support all publishers, it’s not feasible for us to collect and update title prices for every title. For existing Unsub packages created before today, we’ve incorporated the public prices we had (for the big 5 we supported: Elsevier, Springer Nature, Wiley, Taylor & Francis, SAGE) into your packages. For new packages moving forward, you’ll have to upload your own title prices. We’ve updated the documentation accordingly.
  • APC report has moved from package to institution level. We have APC data for the big 5 publishers, but now that we’re moving to any publisher, we can no longer provide publisher specific APC reports. However, you can now get an APC report for your institution that includes an estimate of your APC spend for the big 5 publishers (Elsevier, Springer Nature, Wiley, Taylor & Francis, SAGE). See the APC Report documentation page for more.

But we didn’t stop there. Here are some additional features you can use today that we think you’ll enjoy:

  • Packages now have Descriptions. When you log in to Unsub you’ll see evidence of this change straight away. You can use this package attribute to record important details about your package for your future self and others. See the docs for more information.
  • Package views now have an Edit Details tab. In this tab you can change the package name and description. See the docs for more information.
  • Packages have an optional filter setup step. This can be used for a variety of use cases, but first and foremost it lets you get back to the state of your package before today’s changes. That is, we no longer filter by publisher: if you had a Wiley package before today, you would only have seen titles published by Wiley in your dashboard, but moving forward that same Wiley package may include titles from other publishers that appeared in your COUNTER reports. You can use this new feature to limit the set of titles that appear in your dashboard. See the Upload journal filter documentation page to learn more.

Notes:

  • During testing, we heard that aggregators may not provide a COUNTER 5 TR_J2 file. As we require a TR_J2 file if you choose COUNTER 5 in Unsub, we provide a fake TR_J2 file. Let us know if you run into any issues with this! See the docs page for more info.
  • As we support more publishers, we’ll run into more edge cases. We’ve heard that some publishers only provide a COUNTER 5 TR_J1 file – and do not provide TR_J2, TR_J3, and TR_J4 files. We don’t currently support the COUNTER 5 TR_J1 file. Get in touch if this is something you need.
  • There may be “growing pains” moving from support for 5 publishers to all publishers. For example, journal metadata that’s crucial to Unsub may not be complete for some journals. Please do get in touch if you run into any issues. We’ll be keeping an eye on things and will address problems as they come up.

If you are not a current Unsub subscriber and you’re interested in learning more, schedule a demo or go ahead and purchase.

If you are a current Unsub subscriber, log in, kick the tires, and let us know what you think.

To learn more about all the new features head over to our documentation.

In an upcoming webinar (date to be announced soon) I’ll dive into all the new features and answer any questions.

OurResearch news: Heather stepping down

Hi everybody, this is Heather. I wanted to let you know I’m stepping down from OurResearch, effective mid-June 2022.

I’m so proud of what we’ve built over the last 10 years. I firmly believe the team will keep doing great things to advance open infrastructure in scholarly communications. My departure is on the most amicable of terms, and I will remain on the Board of Directors and OurResearch’s biggest fan.

Why leave? I’m ready for a change. This move has been in the works for some time. To start with I’ll take a few months off to rest and spend time with my family (and cycle, read, and eat cookies), and after that I’m not sure!

Will keep this short and sweet because otherwise I’ll probably cry — building these ideas and tools with Jason has always been a labour of love. Wishing everyone the best. 

Rooting for the openiest of science ASAP,

Heather


Hey, this is Jason. This post is tough to write because I’d really like to say something profound and moving, something that expresses how much the last eleven years working with Heather have meant to me. Something that expresses how much I admire, respect, and love her. Something that conveys how OurResearch will always be incomplete without her–but how, at the same time, I’m 100% sure that we’ll continue to grow and prosper, thanks to the work she’s put in.

Now, I know that y’all know Heather is amazing. You know she’s smart and tough and kind and pragmatic and idealistic and authentic and clever and relentless and funny. You know that she’s put her heart and soul and love and self into Open Science and into OurResearch, and you know that she’s got a bigger heart and soul and love and self than just about anyone.

But y’all don’t know it like I know it.  I’ve seen it, up close, for eleven years. I’ve seen her on sleepless nights, when we had no money, when people were being mean, when servers were down, in the darkest and toughest of times. And I’ve never stopped being inspired by her. I’ve seen her perform code miracles and budget miracles and admin miracles and everything in between. And more than that: I’ve seen her do it with unflagging kindness, humility, and integrity. I’ve seen her as few have.

And I’m forever, deeply grateful for that: that I got to see her in action, be on her team, experience all the crazy highs and lows and sidewayses of cofounderdom with her. It’s been a profound honor.

So even more than I’ll miss Heather, I’m grateful for Heather. And I’ll be trying very hard to live up to her example, to practice all I’ve learned from her. Which means I’ll be working my guts out for OurResearch, because I believe in it with all my heart. We’ve got a great product in OpenAlex, a great team, a great board (including Heather still, huzzah!) and we’re going to be doing great things. I know that’s what Heather wants, and it’s what I want, and by golly we’ll do it. 

I’ll miss you, Heath. Thanks for a great decade. We won’t let you down.

j

New OpenAlex API features!

We’ve got a ton of great API improvements to report! If you’re an API user, there’s a good chance there’s something in here you’re gonna love.

Search

You can now search both titles and abstracts. We’ve also implemented stemming, so a search for “frogs” now automatically gets you results mentioning “frog,” too. Thanks to these changes, searches for works now deliver around 10x more results. This can all be accessed using the new search query parameter.
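
For example, this returns works whose titles or abstracts mention “frogs” (or, thanks to stemming, “frog”):

https://api.openalex.org/works?search=frogs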

New entity filters

We’ve added support for tons of new filters, which are documented here. You can now:

  • get all of a work’s outgoing citations (i.e., its references section) with a single query.
  • search within each work’s raw affiliation data to find an arbitrary string (e.g., a specific department within an organization)
  • filter on whether or not an entity has a canonical external ID (works: has_doi, authors: has_orcid, etc)
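
For instance, using the canonical-ID filters named above:

https://api.openalex.org/authors?filter=has_orcid:true
https://api.openalex.org/works?filter=has_doi:false

The first returns authors that have an ORCID; the second returns works with no DOI.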

Request multiple records by ID at once

This has been our most-requested feature and we’re super excited to roll it out! By using the new OR operator, you can request up to 50 entities in a single API call. You can use any ID we support–DOI, ISSN, OpenAlex ID, etc.

Deep paging

Using cursor-based paging, you can now page through arbitrarily large result sets (it used to be capped at the top 10,000 results). But remember: if you want to download the entire dataset, please use the snapshot, not the API! The snapshot is the exact same data in the exact same format, but much much faster and cheaper for you and us.
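
Here’s a rough sketch of cursor paging in Python: start with cursor=* and keep following meta.next_cursor until it comes back empty (the filter below is just an example):

import requests

url = "https://api.openalex.org/works"
params = {
    "filter": "publication_year:2021",  # just an example filter
    "per-page": 200,
    "cursor": "*",                      # ask for the first cursor page
    "mailto": "support@openalex.org",
}

while True:
    page = requests.get(url, params=params).json()
    for work in page["results"]:
        print(work["id"])
    next_cursor = page["meta"]["next_cursor"]
    if not next_cursor:  # no more pages left
        break
    params["cursor"] = next_cursor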

More groups in group_by queries

We now return the top 200 groups (it used to be just the top 50).

New Autocomplete endpoint

Our new autocomplete endpoint makes it dead easy to use our data to power an autocomplete/typeahead widget in your own projects. It works for any of our five entity types (works, authors, venues, institutions, or concepts). If you’ve got users inputting the names of journals, institutions, or other entities, now you can easily let them choose an entity instead of entering free text – and then you can store the ID (ISSN, ROR, whatever) instead of passing strings around everywhere.
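
For example, assuming the per-entity autocomplete paths, a query like this returns a short list of matching institutions (q carries the user’s partial input):

https://api.openalex.org/autocomplete/institutions?q=univer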

Better docs

In addition to documenting the new features above, we’ve also added lots of new documentation for existing features, addressing our most frequent questions and requests.

Thanks to everyone who’s been in touch to ask for new features, report bugs, and tell us where we can improve (also where we’re doing well, we’re ok with that too).
We’ll continue improving the API and the docs. We’re also putting tons of work into improving the underlying dataset’s accuracy and coverage, and we’re happy to report that we’ve improved a lot on what we inherited from MAG, with more improvements to come. We’ve delayed the launch of the full web UI, but expect that in the summer…we are so excited about all the possibilities that’s going to open up.

Unsub Webinar Series

We’re starting an Unsub (https://unsub.org/) webinar series next week!

Why would you want to attend? These webinars should help you get better value from Unsub regardless of whether you want to just understand your options, get a better deal on your big deal, or cancel your big deal. 

Every two weeks we’ll cover a new topic, with two time slots for each topic to serve a wider array of time zones: morning and afternoon Pacific Time (PST).

If our webinar times don’t work for you, we are planning to record webinars and upload them for anyone to watch on Vimeo (https://vimeo.com/unsub).

Here are the first three topics we’ll cover:

  • Feb 8 & 10: Unsub demo – an overview of the product
  • Feb 22 & 24: Eric Schares demoing Unsub Extender
  • Mar 8 & 10: Deep dive on Unsub scenarios

Other topics are in the works – we’ll announce them soon. Let us know here, elsewhere, or email me (scott@ourresearch.org) if there are any topics you’d like covered in our webinar series.

The webinar series is free. However, we will require registration so we know how many people are coming and to make it easier for you to remember to attend (e.g., Zoom email confirmation, add to your calendar, etc.).

Our first webinar is titled Unsub Demo – An Overview of the Product, on Feb 8 and 10.

We’ll put out registration links soon for subsequent webinar topics.

OpenAlex Update: Jan 24 2022

The OpenAlex launch is going well!  Thanks for all of your feedback, comments, questions, and help spreading the word.  A few updates for you below.

Snapshot updates

There is a new native-format snapshot, with the following updates:

  • includes “abstract_inverted_index” in works (see the sketch after this list for turning it back into plain text)
  • includes “raw_affiliation_string” in works.authorships (thanks for requesting this!)
  • “cited_by_api_url” in works is now a string, not a list (sorry! the list was a bug)
  • corrected the spelling of institution.associated_institutions
  • “ids” dict doesn’t include entries for empty ids anymore (simplifies the data)
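
The abstract_inverted_index maps each word in the abstract to the positions where it occurs, so you can rebuild the plain-text abstract yourself. A minimal sketch (the helper name is ours):

def uninvert_abstract(abstract_inverted_index):
    """Rebuild a plain-text abstract from OpenAlex's abstract_inverted_index."""
    if not abstract_inverted_index:
        return None
    # pair every word with each position it occupies, then join in position order
    positions = [
        (position, word)
        for word, indexes in abstract_inverted_index.items()
        for position in indexes
    ]
    return " ".join(word for _, word in sorted(positions))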

This new snapshot doesn’t have additional new works since the previous one, but we expect new works to be added in the next week, and approximately every 2 weeks after that.  A new MAG-format snapshot including new works will also be released at that time.  Each new snapshot will contain articles published up to just a few days before the snapshot release (rather than several weeks old, as was the case with MAG).

API updates

The API includes the same changes described above for the snapshot, most importantly “abstract_inverted_index” in the list and filter endpoints.

Nature write-up

The OpenAlex launch was covered in Nature this week!  You can read about it here:  https://doi.org/10.1038/d41586-022-00138-y  
We are really happy to hear that people are finding it easy to use!

OpenAlex Tips of the Day

We have been posting tips for using OpenAlex on Twitter every weekday.  

You can see past tips at this search link (whether you have a Twitter account or not), and you can follow us on Twitter here: @openalex_org

Questions?

We’d love to hear from you: team@ourresearch.org

OpenAlex launch!

OpenAlex launched this week! (January 3rd 2022 for those reading from the future 🙂 )

As expected:

We’re now pulling in new content on our own. Until now, we’ve been getting new works, authors, and other entities from MAG. Now that MAG is gone, we’re gathering all of our own data from the big wide internet.

The new REST API is launched! This is a much faster and easier way to access the OpenAlex database than downloading and installing the snapshot. It’s completely open and free–you don’t even need a user account or token.

We’ve now got oodles of new documentation here: https://docs.openalex.org/

Slight change of plan:

The MAG Format snapshot is now hosted for free, thanks to the AWS Open Data program. This will cover the data transfer fees (which turned out to be $70!) so you don’t have to. Here are the new instructions on how to download the MAG format snapshot to your machine.


We are extending the beta period for OpenAlex; we’ll emerge from beta in February. This is mostly in response to discovering issues with the coverage and structure of existing data sources including MAG. Extending the beta reflects the fact that the data will improve significantly between now and February.

Huge exciting news:

OpenAlex was built to offer a drop-in replacement for MAG. We’re doing that. But today, we’re also unveiling some moves toward a more innovative future for OpenAlex:

We’ve now built around a simple new five-entity model: works, authors, venues (journals and repositories), institutions, and concepts. Everything in OpenAlex is one of these entities, or a connection between them. Each type of entity has its own API endpoint.
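
Those five endpoints are:

https://api.openalex.org/works
https://api.openalex.org/authors
https://api.openalex.org/venues
https://api.openalex.org/institutions
https://api.openalex.org/concepts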

We’ve got a new Standard Format for the snapshot, one that’s closely tied to both the five-entity model and the API. In the future, this will become the only supported format. The MAG format is now deprecated and will go away on July 1, 2022.

In conclusion:

Thanks for your support, and please send us any feedback! In particular, let us know about any bugs you find… it’s early days, and there will be plenty. We’re currently fixing these very quickly. Happy New Year, and happy OpenAlexing!

Best,
Jason and Heather