Eight years post-PhD and now fairly well versed in the art of research, I have been very fortunate to witness a renaissance in publishing in two respects. First, I remember quite well from my PhD training (over ten years ago) the process of preparing a manuscript for the highest-ranked journal: submit, reject, reformat and submit to the next journal, reject, submit... you get the story. Published manuscripts usually appeared in print form. The second way embraces the Internet and open access publishing.
During that time, the impact factor was the key metric by which a publishing house was measured (The agony and ecstasy of the impact factor). This quickly evolved into a measure of a researcher’s performance: there was an active push to get manuscripts into journals with a high impact factor, as it reflected positively on the authors involved. Over the course of my career, the merit of using the impact factor to judge a study has been questioned (Porta 2006). These days, it is almost a profanity to even consider the impact factor as a measure of a manuscript’s value (see Randy Schekman’s piece on Nature, Cell and Science, and Occam’s Typewriter on the impact factor). Believe it or not, I have heard that a swear jar was enforced at one grant review panel, where whoever mentioned “Impact Factor” had to contribute to it.
Okay, so the impact factor is not relevant; what, then, is a good metric? What is a good measure of a published manuscript by which the likes of major granting bodies and prospective employers can judge your performance? Having spoken to many in the field and to many of my mentors, I would say the jury is still out; there really isn’t a good metric to go by, but nonetheless there should be one. In preparing this little piece I came across Eugene Garfield; it seems he is the pioneer