I have been only partially active in publishing through traditional peer-review channels. I have published perhaps a dozen articles and book chapters through this process. I am active as a reviewer for about 10 different journals and conferences. Additionally, I’ve served as special editor and invited (non-peer review) author for several journals. As conference chair and co-chair I have also been involved in selection of papers, outstanding papers and posters, etc. I understand the review process as an author, reviewer, and editor.
But I’m dissatisfied, and growing more so, with the process for the following reasons:
- The process takes a long time (anywhere from about eight months to several years – depending on the field). By the time an article is finally in print format, it’s often partly obsolete, especially in the educational technology field.
- The process is not about quality. I’ll get into this a bit more later in this post, but from my experience, many, many good articles are poorly reviewed simply because the reviewer is not well informed in the area. I frequently turn down review requests when I feel I am not capable of serving the process well; I’m not convinced most reviewers do the same. At several recent conferences, I explored the poster sessions (often composed of papers that are “downgraded” to poster sessions at research-focused conferences). I was surprised at the exceptional quality of several posters. Excellent research-based papers were not receiving the attention they deserved, especially when accepted papers were of noticeably poorer quality. I can only conclude that reviewers failed to understand the research they were reviewing.
- The process is not developmental. With few exceptions, journals and conferences run on tight time lines. A paper that shows promise is often not given time to be rewritten due to time constraints. Peer review should be a developmental process (I threw out a few ideas on this process in Scholarship in an Age of Participation). Journals should not be knowledge declaration spaces. Journals should be concerned with knowledge growth as a process in service of a field of inquiry.
What then does a “good” review look like?
Let’s say it takes 40-80 hours to write a 5,000-7,000 word paper. A reviewer, working in a timely manner (at most two weeks from initial assignment of the review), needs to:
- Read the article for general coherence
- Map out (mentally at minimum) the core arguments and support provided
- Evaluate the suitability of research methodology to the questions being considered in the paper
- Decide if the conclusions drawn by the researchers/authors are warranted by the research conducted, paying particular attention to common research errors (such as confusing causation with correlation, generalizing from too limited a sample, etc.)
- Validate the quality and appropriate use of references, noting any significant gaps in existing literature
- Determine if the paper advances some aspect of knowledge in the field (i.e. does the paper say something new? Does it draw novel connections between disparate research? Does it debunk existing views held by researchers in the field, etc.).
- Finally, based on literature, methodology, conclusions, and original contribution to the field, determine if the article is suitable for publication. If the article is not suitable for publication, the reviewer should recommend improvements to bring the article up to high standards or explain why it is not suitable for amending (i.e. outright rejection). If the paper is submitted for a conference, the reviewer may recommend downgrading it to a poster session.
How long should this process take?
From my experience, reviewing an article is at minimum a three to four hour task if the reviewer is familiar with the citations and methods utilized by the author(s). In many instances reviewers will require more time. For example, I’ve encountered articles that address a core subject that I am familiar with (learning technology or something similar) and then utilize a framework from sociology or psychology to express a viewpoint. If I’m not familiar with the core topic, declining to conduct the review is the only sensible response. Assuming I am familiar with the core concepts, I then need to take time to research the peripheral topics in order to effectively review the paper. This alone can add hours to a review.
The problem of being current in a diverse field…
In the field of emerging technologies, too many reviewers are not current and as a consequence should not be reviewing papers. If a person has not blogged, taught using Second Life, experimented with Twitter, or is not aware of the development of open educational resources, social learning theory, or personal learning environments and learning management systems, then they have no business conducting a review. Keep in mind, peer review is about subjecting your work to experts in the field. Because the emerging technology field is young, many reviewers are simply not competent to be conducting the breadth of reviews that they conduct.
Complicating these concerns is the diversity of our field. Educational technology is an aggregate field. We can just as soon discuss Vygotsky as we discuss XML, motivation theory as cloud computing, and social networks as systemic transformation. Even when journals are focused on a particular subset of this complex field, articles and references will require reviewers to devote significant time to effectively review an article.
Why bother reviewing papers if it’s so difficult? Well, it’s difficult because it’s important. The quality of thinking of the educational technology field is influenced by the quality of the papers being published. As such, peer review should be far more iterative than it currently is. The best journal I have come across in this regard is Innovate (James Morrison is the editor). Dr. Morrison provides a review process that is personal and developmental. I recall reviewing one article four times over a short period of time. The final product hardly resembled the original paper (I still suggested rejecting the final article, but I was “out voted” by the other two reviewers). In this instance, the paper quality was substantially improved through review, recommendation, and rewriting.
Peer review is also a personal learning process. Reviewing an article forces a person (at least it does for me) into a critical state of mind. Reviewing articles is a rich thinking and learning process. The reviewer, as much as the reviewed, benefits in the experience.
Why I’m frustrated
I recently submitted an abstract, which was accepted, for a special edition of a well known journal.
About four months after submission, I received the following response:
While a well-written paper, it appears to be a cut-and-paste from someone’s thesis or dissertation. I do not see how the history of the university is relevant for [deleted to preserve anonymity]. Some of it (The Contemporary University) might be of value to the reader, but I don’t believe the majority would hold the reader’s interest. The pages and pages of references are also a dead give-a-way that this is someone trying to get their graduate work published – which is appropriate. But it doesn’t appear to me that the writer took enough time to tweak the writing such that it would be appropriate for this journal.
(For what it’s worth, it was not a cut-and-paste article; it was written specifically for this journal submission.)
The reviewer also selected a few responses about suitability of the article, relevance to journal theme (which in my eyes was moot as the editor had already accepted the abstract, confirming journal theme relevance), with the letter ‘S’ or ‘U’ posted beside each category. What does that mean?? Uber-fantastic? Stunningly Sucky? I don’t know. I suspect probably some variant of “satisfactory” or “unsatisfactory”.
This single review is all we (it was a co-authored paper) were given for the rejection. No indication of ways to improve the article or suggestions for resubmission were offered. I was irritated (and still am). So I sent the editor the following email:
I find the quality of the feedback unacceptable, however. Based on what you provided, it appears that the reviewer paid scant attention to the article and its relevance for publication. The core assertion Dr. [deleted for anonymity] make is: information creation/dissemination patterns of an era is reflected in the design of a society’s knowledge institutions. [more deletions for anonymity purposes]. What we do around information is (more so than web 2.0 and technologies) foundational to how higher education will be transformed.
I fully understand if you and [name deleted for anonymity] as editors feel the article was not of sufficient quality to warrant publication. However, if your decision is based on the single review you provided below (by an individual who spent precious little time on the article it appears and whose most substantial comment is to state that it was cut and paste from a masters project due to number of references) it seems peer review was not well attended in this rejection.
I then received a response saying “We’re currently chasing down the second review and trying to understand why it wasn’t sent to you automatically as it should have been”. I have tremendous respect for the editor that composed this response (I’m not being sarcastic – I know the individual and would classify this person as a friend). I assume therefore that some type of software glitch occurred, which in itself raises concerns about how rejections are handled. But even then, my core concerns above – journal review as a knowledge growth and idea development process – are not addressed. And it’s not unique to this one journal. I think it’s endemic to the educational technology field.
Peer review via blogs
In contrast to the rather feeble review our article received, consider the quality and diversity of comments on this article I posted on this site last week. I do almost all of my article publishing on my elearnspace or connectivism site. It is very rare that I receive a similar quality of feedback from an academic journal. What is the future of peer review if its value to the author and the field is reduced by the time and quality of reviews? Is it any wonder that NBER is questioning the decline of peer review?
How do we develop reviewers?
How did you learn to do reviews? From informal discussion with peers, it seems that most people learn to do reviews by being thrown into the process. It might have started with reviewing a few papers for a conference or by being asked to sit on a journal editorial board. Regardless, it appears that most reviewers do not have formal “training” in conducting reviews. It’s a trial-and-error process, which places great responsibility on a journal editor to ensure reviews are well conducted.
It is both a privilege and a responsibility to review the best ideas of another member of the field. But it’s also a matter of personal reputation. Generally, depending on the review software, the editor will know who submitted the review. I find it personally satisfying to be invited back for repeat conference and journal reviews based on the effort put into previous reviews. I know of many others who share these views. My views of peer review have been heavily shaped by “old timers” who insist on high-quality review processes for journals and conferences. I just wish there were more editors who saw scholarship as iterative and developmental and held journal reviewers to high standards. I also wish we had more reviewers who recognized the opportunity they have to advance quality within the educational technology field. After all, we jointly hold each other’s success in the balance each time we sit down and start typing out a review.
What are your experiences? Misery, of course, appreciates company. Do you have any particularly nightmarish journal experiences (as author, editor, reviewer)? Or do you agree with my assertion that journals should serve to develop ideas, not solely evaluate?