Wednesday, May 27, 2009

On PLoS' article-level metrics



As some of you may know, in March this year PLoS started an "article-level metrics" campaign [See Article redesign on PLoS Journals]. This is a very interesting topic and I'll try to break it down for you here.

This discussion is centered around a webinar recently given by Pete Binfield, the Managing Editor of PLoS ONE [Article-Level Metrics (at PLoS and beyond)], so if you want more info, do not hesitate to listen to it. [Hat tip to A Blog Around The Clock for the link].

An increasing problem put forward by many people is that there are simply too many articles being published and it's basically impossible for us to read them all. Imagine you work in colorectal cancer. Of course you'll have to read some of the new articles on that topic, as they may be useful for your research. But what about new findings in other types of cancer? OK, so you start adding thousands of RSS feeds to your Google Reader, trying to cover the journals where cancer articles may be published. Now you have >1000 new items in your Google Reader each morning and little time to read them.

Do you see my point?

One approach to this problem (if you want to call it that), and the idea behind, for example, F1000, is to read (or read more thoroughly) only the papers that are actually worth your while. For this, you'll have to be able to assess whether the article you've just found through your weekly (daily?) PubMed search fits within this category.

If you are anything like me, you'll read the abstract and, if you are still interested, download the article and flip through the figures and the Discussion to "evaluate" whether you are going to read the whole thing: you've just made your own analysis of the article.
However, it may still be interesting (as complementary info) to know, for example, how many times the article has been downloaded, or whether it has been extensively commented on in the blogosphere. That may mean something, or at least suggest that, if it's getting discussed a lot, maybe you should read it soon.

These last measurements are "article-level metrics" (ALMs), as opposed to "journal metrics" such as the IF, because they refer specifically to a certain article and not to the journal where it was published.
Generally (or up to now, maybe), I've been comfortable with one ALM: article citations. I don't actually use it, nor does it have any influence on my decision to read a certain article, but I'm "comfortable" with it in the sense that people who are actually interested in the "impact" (in its classical ISI meaning) of a particular article can just take a look at this little number. You can get it through Web of Science. However, this may not be very useful with a new article, as its citation count will be zero.

Interestingly, there are several other useful pieces of data that can be added to compile a whole list of an article’s related metrics, which can give a more complete view of the article’s “social impact”.

For example, article usage. This generally refers to the number of downloads of a particular article, or the number of views. I'm not sure whether there is a strong correlation between downloads and citations (I know I've downloaded hundreds of articles I've never read), but many journals are now implementing this sort of metric (although some just list the "most read" or "most downloaded" articles, without numbers).
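Just to make that idea concrete (and purely as an illustration with made-up numbers, not any real PLoS data), here's a tiny sketch of how one could check whether downloads and citations move together for a handful of articles:

```python
# Hypothetical example: do download counts track citation counts?
# The numbers below are invented purely for illustration.
downloads = [1200, 350, 4800, 90, 2300]
citations = [15, 2, 40, 0, 12]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(f"downloads vs. citations: r = {pearson(downloads, citations):.2f}")
```

An r close to 1 would suggest downloads really do track citations; a value near 0 would back up my suspicion that plenty of downloaded papers never get cited (or read).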

And what about media coverage? Or blog coverage? As I mentioned, it may be interesting to know that a particular article has been commented on many times in the blogosphere, for example at Researchblogging.org (of which we are members).

PLoS has started the article-level metrics program to include all these "types of measures" for its articles, in order to "implement new approaches to the evaluation and filtering of journal articles", which it hopes other publishers will follow.

The idea is to integrate info on citations, usage, media coverage, blog coverage, expert ratings (for example F1000), social bookmarking activity (for example Connotea), etc., and display it right in the article’s web page!
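To picture what that integration might look like under the hood (this is just my own sketch; the field names and the DOI are invented and are not PLoS's actual schema), imagine each article carrying a little record that bundles all of these numbers together:

```python
from dataclasses import dataclass, field

# Rough sketch of an aggregated article-level-metrics record.
# Field names and values are hypothetical, not taken from PLoS.
@dataclass
class ArticleMetrics:
    doi: str
    citations: int = 0             # e.g. from a citation index
    html_views: int = 0            # usage: page views
    pdf_downloads: int = 0         # usage: downloads
    blog_posts: int = 0            # e.g. Researchblogging.org coverage
    bookmarks: int = 0             # e.g. Connotea
    expert_ratings: list = field(default_factory=list)  # e.g. F1000 reviews

    def summary(self) -> str:
        return (f"{self.doi}: {self.citations} citations, "
                f"{self.html_views + self.pdf_downloads} total uses, "
                f"{self.blog_posts} blog posts, {self.bookmarks} bookmarks")

# Made-up example article
paper = ArticleMetrics("10.1371/journal.pone.0000000",
                       citations=3, html_views=2500, pdf_downloads=800,
                       blog_posts=4, bookmarks=12)
print(paper.summary())
```

Something along these lines, rendered on the article's own page, is basically what the "display it right there" idea boils down to.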

Indeed, in every PLoS article you’ll find a new tab entitled “Related content”, where some of this info can be found. The idea is to have a complete picture of the article’s impact and not just its citation numbers. As we now have the technology to follow these other numbers and include them in the article’s web page, I think it’s a great idea to put them up there. Also, as is now typical at PLoS, you can rate the articles and leave comments, completing the scene.

The expert ratings I was talking about haven't been implemented yet, and they "could be" coupled to F1000 some day (in the sense that if a particular PLoS paper has been reviewed at F1000, that review will be displayed on the article's web page), but this hasn't been settled yet. Anyway, for some people it would be nice to know whether the particular article they are deciding to read (or not) has been reviewed at F1000.

In conclusion, ALMs can be very useful and are a great addition to the "classic" assessment tools we've been using. This is a great idea from PLoS, which is always trying new things to improve scholarly communication.

And who knows? They may also be used as a not-so-serious tool: maybe down the line we'll be compulsively checking the stats of our own articles and betting a beer on who got more downloads in a particular week.

Mmm…actually, that’s not a bad idea…

