As some of you may know, since March this year, PLoS has started an "article-level metrics" campaign [See Article redesign on PLoS Journals]. This is a very interesting topic and I'll try to break it down for you here.
This discussion is centered around a webinar recently given by Pete Binfield, the Managing Editor of PLoS ONE [Article-Level Metrics (at PLoS and beyond)], so if you want more info, do not hesitate to listen to it. [Hat tip to A Blog Around The Clock for the link].
Do you see my point?
One approach to this problem (if you want to call it that), and the idea behind, for example, F1000, is to read (or read more thoroughly) only the papers that are actually worth your while. For this, you'll have to be able to assess whether the article you've just found through your weekly (daily?) PubMed search fits within this category.
If you are anything like me, you'll read the abstract and, if you are still interested, download the article and flip through the figures and Discussion to “evaluate” if you are going to read this thing completely: you've just made your own analysis of the article.
However, it may still be interesting (as complementary info) to know, for example, how many times the article has been downloaded, or if it has been extensively commented on in the blogosphere. That may mean something, or at least suggest that, if it's getting reviewed a lot, maybe you should read it soon.
These last measurements are "article-level metrics" (ALMs), as opposed to "journal metrics" such as the IF, as they refer specifically to a certain article and not to the journal where it was published.
Generally (or up to now, maybe), I've been comfortable with one ALM: article citations. I don't actually use it, nor does it have any influence on my decision to read a certain article, but I'm "comfortable" with it in the sense that people who are actually interested in the "impact" (in its classical ISI meaning) of a particular article can just take a look at this little number. You can get this through Web of Science. However, this may not be very useful with a new article, as its citation count will be zero.
Interestingly, there are several other useful pieces of data that can be added to compile a whole list of an article’s related metrics, which can give a more complete view of the article’s “social impact”.
For example, article usage. This generally refers to the number of downloads of a particular article, or the number of views. I'm not sure if there is a strong correlation between downloads and citations (I know I've downloaded hundreds of articles I've never read), but many journals are now implementing these sorts of metrics (although some just list the "most read" or "most downloaded" articles, without numbers).
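That download-citation question can at least be posed concretely. Here is a toy sketch (the per-article numbers below are invented, not real journal data; this is not any journal's actual analysis) of checking whether downloads track citations with a rank correlation:

```python
# Toy sketch: do downloads track citations? The counts below are
# invented placeholders, not real PLoS or Web of Science data.

def rank(values):
    """Return the rank (1 = smallest) of each value; ties broken by position."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation (simple formula, assumes no ties)."""
    n = len(xs)
    rx, ry = rank(xs), rank(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Invented example: per-article (downloads, citations)
downloads = [5200, 310, 980, 12000, 75]
citations = [40, 2, 9, 55, 0]
print(spearman(downloads, citations))  # 1.0 here, since the ranks coincide
```

Of course, real usage data is much messier than this (those hundreds of downloaded-but-unread PDFs would drag the correlation down), which is exactly why publishing the raw numbers alongside each article would be informative.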
And what about media coverage? Or blog coverage? As I mentioned, it may be interesting to know that a particular article has been commented on many times in the blogosphere, for example at Researchblogging.org (of which we are members).
PLoS has started the article-level metrics program to include all these “types of measures” for its articles to “implement new approaches to the evaluation and filtering of journal articles”, which they hope other publishers will follow.
The idea is to integrate info on citations, usage, media coverage, blog coverage, expert ratings (for example F1000), social bookmarking activity (for example Connotea), etc., and display it right in the article’s web page!
Indeed, in every PLoS article you’ll find a new tab entitled “Related content”, where some of this info can be found. The idea is to have a complete picture of the article’s impact and not just its citation numbers. As we now have the technology to follow these other numbers and include them in the article’s web page, I think it’s a great idea to put them up there. Also, as is now typical at PLoS, you can rate the articles and leave comments, completing the scene.
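To make the aggregation idea concrete, here's a minimal sketch (pure invention on my part; this is not PLoS's actual code or API, and the DOI, source names, and counts are all placeholders) of how per-article numbers from several sources could be merged into one record for display:

```python
# Hypothetical sketch of ALM aggregation. Source names, DOI, and
# counts are invented placeholders, not the real PLoS implementation.
from dataclasses import dataclass, field

@dataclass
class ArticleMetrics:
    doi: str
    counts: dict = field(default_factory=dict)

    def add(self, source, value):
        # Each source (citations, downloads, blog coverage, bookmarks...)
        # contributes one running count per article.
        self.counts[source] = self.counts.get(source, 0) + value

    def summary(self):
        """One flat record, ready to show on the article's web page."""
        return {"doi": self.doi, **self.counts}

alm = ArticleMetrics("10.1371/journal.pone.0000000")  # placeholder DOI
alm.add("citations", 3)
alm.add("downloads", 1200)
alm.add("blog_posts", 4)
print(alm.summary())
```

The point of the sketch is simply that once each source exposes a number, combining them per article is trivial; the hard part is gathering trustworthy numbers in the first place.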
The expert ratings I was talking about haven't been implemented yet, and they "could be" coupled to F1000 someday (in the sense that if a particular PLoS paper has been reviewed at F1000, that review will be displayed on the article's web page), but this hasn't been settled yet. Anyway, for some, it would be nice to know that the particular article they are deciding whether or not to read has been reviewed at F1000.
In conclusion, ALMs can be very useful and they are a great addition to the "classic" assessment tools we've been using. This is a great idea by PLoS, which is always trying new things to improve scholarly communication.
And who knows? It may also be used as a not-so-serious tool: maybe down the line we’ll be compulsively looking at the stats of our own articles and betting a beer over who got more downloads during a particular week.
Mmm…actually, that’s not a bad idea…
Wednesday, May 27, 2009
Posted by Alejandro Montenegro-Montero at 8:37 PM
Labels: Trends and metrics
Posted by Alejandro Montenegro-Montero at 9:43 AM
* Josefowicz and Rudensky
Regulatory T cell development and control of Foxp3
* Curotto de Lafaille and Lafaille
Natural versus induced regulatory T cells
Mechanism of action of regulatory T cells
* Zhou, Chong, and Littman
Plasticity of regulatory and other T cells
* Riley, June, and Blazar
Regulatory T cells in therapy
Tuesday, May 26, 2009
Posted by Alejandro Montenegro-Montero at 11:41 AM
In the latest Q&A article from the Journal of Biology, Peter Doherty and Stephen Turner discuss what we know about pathogenicity and transmissibility of influenza viruses [See also A couple of Q&A papers, for other recently highlighted Q&A articles from JBiol].
Doherty PC & Turner SJ (2009) Q&A: What do we know about influenza and what can we do about it? Journal of Biology 8:46
Sunday, May 24, 2009
Posted by Alejandro Montenegro-Montero at 10:31 AM
The question I want to throw out there today is the following (which also gave the post its name): should we have disclaimers on our science blogs?
Maybe something along the lines of "The views and opinions expressed here represent only my own and not those of my employer" (or PI, etc.), or maybe something like "these are just opinions and may contain errors"?
I’ve yet to see this sort of disclaimer on the science-related blogs I follow, although, as I mentioned, I’ve seen them around other blogs.
Sure, we like to rant a little (some bloggers more than others... yes, you know who you are!) about lab work, the university, the grant-awarding institution, or even some annoying labmate or professor, but we never use their names (or just use pretend names, along the lines of "Colonel Mustard"). Further, some bloggers don't even use their own names. So, should we be worrying about this to the extent of having a disclaimer?
What about the science facts we give? Of course we think of them as just opinions, and we also allow comments: people can then express themselves, which leads to discussion, which is nothing less than the basis of science, and a great advantage of blogging. Should we also have a disclaimer for these facts?
So, here's the question I put out there today, in the open, for all science-related bloggers: should we have a disclaimer?
Friday, May 22, 2009
Posted by Alejandro Montenegro-Montero at 4:45 PM
The Plant Cell (and I can only guess it's because they are tired of receiving crappy, unreproducible, or badly analyzed qPCR results or, even worse, "semiquantitative" RT-PCRs; see below) has recently published yet another article along the same lines, which "provides guidelines for the experimental design and statistical analysis of qRT-PCR data from the statistician's perspective"1.
From The Plant Cell's Editor-in-Chief2:
these are guidelines; any attempt to impose such analysis as standard while we are still struggling to persuade authors of the deficiencies of "semiquantitative" RT-PCR would be a difficult, if not impossible, task.
She also commented (although on a previous Editorial)3:
Over the past 2 years, The Plant Cell has taken steps to remove "semiquantitative" RT-PCR from the pages of the journal.
Good for The Plant Cell.
Please make the Editors' physical pain (which I assume they feel when they see authors drawing quantitative conclusions, in PCR analyses, from gels stained with EtBr) stop and follow their not-yet-imposed guidelines.
1 Rieu I, Powers SJ (2009) Real-Time Quantitative RT-PCR: Design, Calculations, and Statistics. The Plant Cell 21:1031-1033
2 Martin C (2009) Guidelines for Quantitative RT-PCR. The Plant Cell 21:1023
3 Martin C (2008) Refining Our Standards. The Plant Cell 20:1727
Posted by Alejandro Montenegro-Montero at 11:52 AM
Recently, two interesting articles, in a Q&A format, have been published in the high-tier Open Access Journal of Biology.
More and more, high-throughput technologies and the various -omics and imaging techniques call for some degree of mathematical competence in the life sciences.
Systems biology is an area which undoubtedly calls for these skills.
So, what exactly is Systems Biology? This takes us to the first of the two articles I wanted to share with you: Jim Ferrell at Stanford completed a Q&A on Systems Biology.1
The second article is related to epistasis [Q&A: Epistasis], which, as Roth et al.2 point out, can mean different things to different people.
1 Ferrell JE Jr: Q&A: Systems Biology. J Biol 2009, 8:2.
2 Roth FP et al: Q&A: Epistasis. J Biol 2009, 8:35.
Credit: image is from here.
Thursday, May 21, 2009
Posted by Alejandro Montenegro-Montero at 3:31 PM
I'll leave the description of the discovery and its implications to an expert like PZ Myers at Pharyngula [Darwinius masillae].
I will, however, say just one thing: the media has ridiculously exaggerated this finding. "Missing link"? Of course not. Nobody uses that "concept" in evolution; in fact, I bet that only (some) "science journalists" do, which is just misleading to the general public.
Evolution is not a simple chain.
Notably, the article has generated a lot of controversy and discussion regarding its quality, and has again put PLoS ONE's 'peer review' system up for questioning [see Brian Switek's review: Poor, poor Ida, Or: "Overselling an Adapid", especially the comments section].
Anyway, here's the original reference of the finding:
Franzen et al. (2009) Complete Primate Skeleton from the Middle Eocene of Messel in Germany: Morphology and Paleobiology. PLoS ONE 4(5): e5723 (image is from this article).
Ok, back to preparing my qualification exam.
Tuesday, May 19, 2009
Posted by Alejandro Montenegro-Montero at 10:11 AM
I've just learned about Cell Press LabLinks (and apparently it has been around since '06) [From Cell Press online]:
Cell Press LabLinks are FREE, one-day symposia organized by local scientists in conjunction with Cell Press editors. Each LabLinks features local and keynote speakers discussing a unified topic in order to foster interactions between colleagues working on related questions – colleagues across town, across the street or even across the hall.
The upcoming meeting, on "Molecular Pathogenesis of Leukemia and Lymphoma", will take place on Friday June 5, 2009, at the Dana-Farber Cancer Institute, Boston, MA.
The only thing you have to do is register, as seating is limited.
Also (and this is just awesome), by registering you enter into a contest (10 registrants will be randomly chosen) in which a one-year personal subscription to the Cell Press journal of your choice will be awarded!
Check the upcoming meetings here and see if there is a Lablink symposium coming your way.
Posted by Alejandro Montenegro-Montero at 6:27 AM
I ended that post by stating,
Is it possible that other 'supposedly' legitimate journals exist, which are in fact being supported, without disclosure, by pharmaceutical companies?
Apparently they do. And many are from Elsevier, The Scientist reports [Elsevier published 6 fake journals] (you need to register to read the full article; it's free).
Scientific publishing giant Elsevier put out a total of six publications between 2000 and 2005 that were sponsored by unnamed pharmaceutical companies and looked like peer reviewed medical journals, but did not disclose sponsorship, the company has admitted.
A total of six titles in a "series of sponsored article publications" were published by Elsevier's Australian office and bore the Excerpta Medica imprint, from 2000 to 2005.
"It has recently come to my attention that from 2000 to 2005, our Australia office published a series of sponsored article compilation publications, on behalf of pharmaceutical clients, that were made to look like journals and lacked the proper disclosures," said Michael Hansen, CEO of Elsevier's Health Sciences Division (See statement issued by Elsevier).
"We are currently conducting an internal review but believe this was an isolated practice from a past period in time (...) and it does not reflect the way we operate today (...) I can assure all that the integrity of Elsevier's publications and business practices remains intact," he added.
What can I say... I am, to say the least, suspicious.
Saturday, May 16, 2009
Posted by Alejandro Montenegro-Montero at 9:47 AM
Recently I ran into a nice little story/advertisement video from Analtech, an American company that develops thin layer chromatography equipment. Interestingly (to some), some of their products have even been featured in CSI.
I thought it was really funny, and it proves the point that more and more companies are getting into the "advertisement through web videos" trend and putting more money into it, seeing it as a nice investment considering the wide dissemination they can aspire to (I'd never heard of Analtech before today, for example).
So, here it is: The Adventures of Ana L'Tech (first one of a series?). Enjoy!
Thursday, May 14, 2009
Posted by Alejandro Montenegro-Montero at 7:59 PM
Scientwist: A Twitter user working in or around science (Source: http://tagdef.com/scientwist)
Do you know what Twitter is? If you don't, check the Wikipedia entry to get an idea and then join! (remember to follow me ;-) There's also a link in the right sidebar).
Posted by Alejandro Montenegro-Montero at 6:20 PM
While some of the rules within each list are fairly obvious and thus not so helpful, and others are downright arguable, in general this is a good series to pass around in your lab and read, for example, during downtime at the bench or during your daily commute. I'm sure they will lead to deep conversations and analyses (even if they take place at a bar on a Friday night)1, and just by doing so, they are of great use.
Recently, other authors, besides Bourne (and collaborators), have decided to contribute to this series (see list below).
Articles like these are always welcome, especially by grad students [see also the "How to succeed in science: a concise guide for young biomedical scientists" series by Jonathan W. Yewdell].
I know some other fellow bloggers have, in the past, talked about this series, but I wanted to share it with you anyway, as it is being continuously expanded.
So here's the list of articles, to all of which (as this is a PLoS journal) you have open access.
(You can download the whole series (except number 13, which is expected in the May 26th issue) in just one PDF file, here).
01. Ten Simple Rules for Getting Published
02. Ten Simple Rules for Getting Grants
03. Ten Simple Rules for Reviewers
04. Ten Simple Rules for Selecting a Postdoctoral Position
05. Ten Simple Rules for a Successful Collaboration
06. Ten Simple Rules for Making Good Oral Presentations
07. Ten Simple Rules for a Good Poster Presentation
08. Ten Simple Rules for Doing Your Best Research, According to Hamming
09. Ten Simple Rules for Graduate Students
10. Ten Simple Rules for Organizing a Scientific Meeting
11. Ten Simple Rules To Combine Teaching and Research
12. Ten simple rules for aspiring scientists in a low-income country
13. Ten Simple Rules for Choosing between Industry and Academia.
1 Of course, you can only discuss this at a bar on a Friday night if you party with fellow scientists... (which I do :-) )
Tuesday, May 12, 2009
Posted by Alejandro Montenegro-Montero at 6:35 PM
Indeed, depending on their targets, there are, in principle, several possible mechanisms through which miRNAs could affect tumorigenesis2.
For example, genetic alterations (in tumors) that lead to overexpression, amplification, or changes in the silencing chromatin structure of a gene encoding an miRNA that targets one or more tumor suppressor genes could inhibit an anti-oncogenic pathway. On the other hand, disruption (or silencing) of a gene encoding an miRNA that normally represses the expression of an oncogene could lead to enhanced oncogenesis3. This is, of course, an oversimplification: the functional consequences of altered patterns of miRNA expression are just beginning to be understood and may not be as straightforward as this.
The latest Snapshot from Cell is entitled: MicroRNAs in Cancer, and lists several miRNAs that are deregulated in certain types of cancer, their molecular mechanisms, targets and their use in diagnosis.
While on the matter, let me also direct your attention to a Web Focus on MicroRNAs and Cancer (although it was freely available only until last September), which includes "original Research and Review articles, as well as Research Highlights from Nature Genetics, Nature Reviews Genetics and Nature Reviews Cancer".
1 I won myself a drink for giving the shortest definition I know.
2 Esquela-Kerscher A, Slack FJ (2006) Oncomirs - microRNAs with a role in cancer. Nat Rev Cancer. 2006 Apr;6(4):259-69.
3 Ventura A, Jacks T (2009) MicroRNAs and cancer: short RNAs go a long way. Cell. 2009 Feb 20;136(4):586-91.
Monday, May 11, 2009
Posted by Alejandro Montenegro-Montero at 11:21 AM
Everyone will (should, must) remember talking about bacterial growth phases in their undergrad Microbiology courses.
All of these phases have typically been explained on the basis of adaptation to new media, growth conditions, nutrient deprivation, etc.
In fact, a quick search on the web1 will give you the following explanation for the lag phase (which is similar to what is generally found in textbooks):
During lag phase, bacteria adapt themselves to growth conditions.
This adaptation is evident when cells are transferred from one type of medium to another, and this 'lag' in growth is associated with a physiological adaptation to the new medium (synthesis of enzymes that will be useful given the new substrates, metabolic precursors, etc.) before the onset of exponential growth. A good example of this 'adaptation' is diauxie (a word coined by Jacques Monod).
Well, I don't exactly remember how I came across this paper, but it sounds interesting (or at least it made me save it to my computer):
Contact-mediated cell-assisted cell proliferation in a model eukaryotic single-cell organism: an explanation for the lag phase in shaken cell culture
Franck C, Ip W, Bae A, Franck N, Bogart E, Le TT.
Laboratory of Atomic and Solid State Physics, Cornell University, Ithaca, NY 14853, USA.
In cell culture, when cells are inoculated into fresh media, there can be a period of slow (or lag phase) growth followed by a transition to exponential growth. This period of slow growth is usually attributed to the cells' adaptation to a new environment. However, we argue that, based on observations of shaken suspension culture of Dictyostelium discoideum, a model single-cell eukaryote, this transition is due to a density effect. Attempts to demonstrate the existence of implicit cell signaling via long-range diffusible messengers (i.e., soluble growth factors) through cell-medium separation and microfluidic flow perturbation experiments produced negative results. This, in turn, led to the development of a signaling model based on direct cell-to-cell contacts. Employing a scaling argument for the collision rate due to fluid shear, we reasonably estimate the crossover density for the transition into the exponential phase and fit the observed growth kinetics.
Phys Rev E Stat Nonlin Soft Matter Phys. 2008 Apr;77(4 Pt 1):041905. (Now do you understand the name of this post?)
Why were these guys (at the Laboratory of Atomic and Solid State Physics) working on this? I wonder...
1 Ok, on Wikipedia. Although not a reliable source for many things, it helps me on the point I'm trying to make here.
Labels: bacterial genetics
Friday, May 8, 2009
Posted by Alejandro Montenegro-Montero at 9:01 AM
I recently came across VADLO, a rapidly growing search engine, created by two biologists and geared towards biomedical researchers.
"VADLO is a life sciences search engine, privately owned by Life in Research, LLC., based in Illinois, USA. VADLO caters to life sciences and biomedical researchers, educators, students, clinicians and reference librarians. In addition to providing focused search on biology research methods, databases, online tools and software, VADLO is also a resource for powerpoints on biomedical topics, mainly for which VADLO was named one of the top 10 Health Search Engines of 2008 by AltSearchEngines."
I haven't tested it yet, but it seems worth the time to at least take a look (maybe in your downtime).
Make sure you let us know what you think of it.
I don't remember exactly how I found this site, but the thing that has kept me coming back is their daily comic strip, "Life in Research" (which is focused on biomedical research, as opposed to PhD Comics which was created by an engineer).
Ok, so it's not as well drawn as its engineering counterpart, but it has some good cartoons that are worth sharing.
Here are some of my picks (I deliberately left out some that deserve their own post; I'll post them later):
Wednesday, May 6, 2009
Posted by Alejandro Montenegro-Montero at 4:55 PM
Ok, maybe they have, but they don't know what it is.
Anyway, (from Wikipedia):
Faculty of 1000 is a website for scientists, that provides rankings and commentary on current scientific research papers. The service is designed to act as a filter, highlighting the most significant research along with evaluations of the research written by other scientists that emphasize why a particular research paper is interesting or important.
Faculty of 1000 currently exists as two sister sites:
Faculty of 1000 Biology is an online awareness service for biologists. It is produced by a panel of over 2000 biological researchers, who regularly identify and evaluate the research articles that they have found most interesting in the recently published literature.
I usually keep an eye on Faculty of 1000 Biology, which is right up my alley. It is neatly organized into areas such as Biochemistry, Bioinformatics, Biotechnology, Cancer Biology, Cardiovascular Biology, Cell Biology, Chemical Biology, Developmental Biology, etc. (I just chose the first few of the alphabetically arranged list of areas), so you can search within your field of interest for what renowned scientists have selected as great papers.
So, how does it work? Scientists from all over the world, experts in particular areas of biology, were invited to participate and to select papers they consider of interest1. In fact, "Faculty members are asked to evaluate and comment on the most interesting papers they read each month".
First, they write a short comment about it, mainly explaining why they selected the paper and summarizing its main findings. They then rate the article (as Recommended, Must Read or Exceptional)2 and classify it into any of seven paper types:
Novel Drug Target
Finally, they classify it into one of the categories, like the ones I just mentioned (Biochemistry, Bioinformatics, etc) and the article is posted at the F1000 website.
Charles F Stevens at The Salk Institute, California said:
"The Faculty of 1000 Biology is the most radical publishing idea of recent times. It is much more rational to judge papers individually than to judge them by the impact factor of the journal in which they are published."
Ok, so this doesn't say much, anyway. Anyone who rates an article (before reading it) based on the impact factor of the journal where it was published is downright silly (not to say anything else).
As I mentioned a few weeks ago [Are we training pit bulls to review our manuscripts?]:
"(...) This will also teach them (if you haven't told your students already) two things: 1) not everything you find in CNS journals (Cell, Nature, Science and in other one-word-title journals) is true, and 2) just because a journal has a low impact factor does not mean that articles published there are weak and should not be considered in your research (...)"
Anyway, the idea behind Faculty of 1000 Biology is good, and you should check it out sometime.
For me, it has become one of my go-to sites during my downtime at the lab [Are you looking for something to do during your downtime? Check Tid Bits, Downtime Edition]
1 Sort of what we do here at MolBio Research Highlights.... or maybe, (and more accurately), we sort of do what they do :-P
2 For more info on the rating system check this.
Icons are from the F1000 website.