It’s not peer review if you aren’t familiar with the subject

I have been only partially active in publishing through traditional peer-review channels. I have published perhaps a dozen articles and book chapters through this process. I am active as a reviewer for about 10 different journals and conferences. Additionally, I’ve served as special editor and invited (non-peer review) author for several journals. As conference chair and co-chair I have also been involved in selection of papers, outstanding papers and posters, etc. I understand the review process as an author, reviewer, and editor.

But I’m dissatisfied, and growing more so, with the process for the following reasons:

  1. The process takes a long time (anywhere from about eight months to several years – depending on the field). By the time an article is finally in print format, it’s often partly obsolete, especially in the educational technology field.
  2. The process is not about quality. I’ll get into this a bit more later in this post, but from my experience, many, many good articles are poorly reviewed simply because the reviewer is not well informed in the area. I frequently turn down review requests when I feel I am not capable of serving the process well. I’m not convinced this practice is common among reviewers. At several recent conferences, I was exploring the poster sessions (often comprised of articles that are “downgraded” to poster sessions at research-focused conferences). I was surprised at the exceptional quality of several posters. Inexplicably, excellent research-based papers were not receiving the attention they deserved (especially when accepted papers were of noticeably poorer quality). I can only conclude that reviewers failed to understand the research they were reviewing.
  3. The process is not developmental. With few exceptions, journals and conferences run on tight timelines. A paper that shows promise is often not given time to be rewritten due to time constraints. Peer review should be a developmental process (I threw out a few ideas on this process in Scholarship in an Age of Participation). Journals should not be knowledge declaration spaces. Journals should be concerned with knowledge growth as a process in service of a field of inquiry.

What then does a “good” review look like?

Let’s say it takes 40-80 hours to write a 5,000-7,000 word paper. A reviewer, working to a timely deadline of at most two weeks from initial assignment of the review, needs to:

  • Read the article for general coherence
  • Map out (mentally at minimum) the core arguments and support provided
  • Evaluate the suitability of research methodology to the questions being considered in the paper
  • Decide if the conclusions drawn by the researchers/authors are warranted by the research conducted, paying particular attention to common research errors (such as causation/correlation, generalization based on too limited a sample, etc.).
  • Validate the quality and appropriate use of references, noting any significant gaps in existing literature
  • Determine if the paper advances some aspect of knowledge in the field (i.e. does the paper say something new? Does it draw novel connections between disparate research? Does it debunk existing views held by researchers in the field, etc.).
  • Finally, based on literature, methodology, conclusions, and original contribution to the field, determine if the article is suitable for publication. If the article is not suitable for publication, the reviewer should recommend improvements to bring the article up to high standards or explain why it is not suitable for amending (i.e. outright rejection). If the paper is submitted for a conference, the reviewer may recommend downgrading it to a poster session.

How long should this process take?

From my experience, reviewing an article is at minimum a three to four hour task if the reviewer is familiar with the citations and methods utilized by the author(s). In many instances reviewers will require more time. For example, I’ve encountered articles that address a core subject that I am familiar with (learning technology or something similar) and then utilize a framework from sociology or psychology to express a viewpoint. If I’m not familiar with the core topic, declining to conduct the review is the only sensible response. Assuming I am familiar with the core concepts, I then need to take time to research the peripheral topics in order to effectively review the paper. This alone can add hours to a review.

The problem of being current in a diverse field…

In the field of emerging technologies, too many reviewers are not current and as a consequence should not be reviewing papers. If a person has not blogged, taught using Second Life, experimented with Twitter, or is not aware of the development of open educational resources, social learning theory, or personal learning environments and learning management systems, then they have no business conducting a review. Keep in mind, peer review is about subjecting your work to experts in the field. Because the emerging technology field is young, many reviewers are simply not competent to be conducting the breadth of reviews that they conduct.

Complicating this concern is the diversity of our field. Educational technology is an aggregate field. We can just as soon discuss Vygotsky as we discuss XML, motivation theory as cloud computing, and social networks as systemic transformation. Even when journals are focused on a particular subset of this complex field, articles and references will require reviewers to devote significant time to effectively review an article.

Why bother reviewing papers if it’s so difficult? Well, it’s difficult because it’s important. The quality of thinking of the educational technology field is influenced by the quality of the papers being published. As such, peer review should be far more iterative than it currently is. The best journal I have come across in this regard is Innovate (James Morrison is the editor). Dr. Morrison provides a review process that is personal and developmental. I recall reviewing one article four times over a short period of time. The final product hardly resembled the original paper (I still suggested rejecting the final article, but I was “outvoted” by the other two reviewers). In this instance, the paper quality was substantially improved through review, recommendation, and rewriting.

Peer review is also a personal learning process. Reviewing an article forces a person (at least it does for me) into a critical state of mind. Reviewing articles is a rich thinking and learning process. The reviewer, as much as the reviewed, benefits in the experience.

Why I’m frustrated

I recently submitted an abstract, which was accepted, for a special edition of a well known journal.

About four months after submission, I received the following response:

While a well-written paper, it appears to be a cut-and-paste from someone’s thesis or dissertation. I do not see how the history of the university is relevant for [deleted to preserve anonymity]. Some of it (The Contemporary University) might be of value to the reader, but I don’t believe the majority would hold the reader’s interest. The pages and pages of references are also a dead give-a-way that this is someone trying to get their graduate work published – which is appropriate. But it doesn’t appear to me that the writer took enough time to tweak the writing such that it would be appropriate for this journal.

(for what it’s worth, it was not a cut-and-paste article; it was written specifically for this journal submission)

The reviewer also selected a few responses about suitability of the article, relevance to journal theme (which in my eyes was moot as the editor had already accepted the abstract, confirming journal theme relevance), with the letter ‘S’ or ‘U’ posted beside each category. What does that mean?? Uber-fantastic? Stunningly Sucky? I don’t know. I suspect probably some variant of “satisfactory” or “unsatisfactory”.

This single review is what we (it was a co-authored paper) were given for rejection. No indication of ways to improve the article or suggestions for resubmission were offered. I was irritated (and still am). So I sent the editor the following email:

I find the quality of the feedback unacceptable, however. Based on what you provided, it appears that the reviewer paid scant attention to the article and its relevance for publication. The core assertion Dr. [deleted for anonymity] makes is: information creation/dissemination patterns of an era are reflected in the design of a society’s knowledge institutions. [more deletions for anonymity purposes]. What we do around information is (more so than web 2.0 and technologies) foundational to how higher education will be transformed.

I fully understand if you and [name deleted for anonymity] as editors feel the article was not of sufficient quality to warrant publication. However, if your decision is based on the single review you provided below (by an individual who, it appears, spent precious little time on the article and whose most substantial comment is to state that it was cut and paste from a masters project due to the number of references), it seems peer review was not well attended to in this rejection.

I then received a response saying “We’re currently chasing down the second review and trying to understand why it wasn’t sent to you automatically as it should have been”. I have tremendous respect for the editor that composed this response (I’m not being sarcastic – I know the individual and would classify this person as a friend). I assume therefore that some type of software glitch occurred, which in itself raises concerns about how rejections are handled. But even then, my core concerns above – journal review as a knowledge growth and idea development process – are not addressed. And it’s not unique to this one journal. I think it’s endemic to the educational technology field.

Peer review via blogs

In contrast to the rather feeble review our article received, consider the quality and diversity of comments on this article I posted on this site last week. I do almost all of my article publishing on my elearnspace or connectivism site. It is very rare that I receive a similar quality of feedback from an academic journal. What is the future of peer review if its value to the author and the field is reduced due to the time and quality of reviews? Is it any wonder that NBER is questioning peer review decline?

How do we develop reviewers?

How did you learn to do reviews? From informal discussion with peers, it seems that most people learn to do reviews by being thrown into the process. It might have started with reviewing a few papers for a conference or by being asked to sit on a journal editorial board. Regardless, it appears that most reviewers do not have formal “training” in conducting reviews. It’s a trial and error process, which places great responsibility on a journal editor to ensure reviews are well conducted.

It is both a privilege and a responsibility to review the best ideas of another member of the field. But it’s also a matter of personal reputation. Generally, depending on the review software, the editor will know who submitted the review. I find it personally satisfying to be invited to repeat conference and journal reviews based on effort put into previous reviews. I know of many others who share these views. My views of peer review have been heavily shaped by “old timers” who insist on high quality paper review processes for journals and conferences. I just wish there were more editors who saw scholarship as iterative and developmental and held journal reviewers to high standards. I also wish we had more reviewers who recognized the opportunity they have to advance quality within the educational technology field. After all, we jointly hold each other’s success in balance each time we sit down and start typing out a review.

What are your experiences? Misery, of course, appreciates company. Do you have any particularly nightmarish journal experiences (as author, editor, reviewer)? Or do you agree with my assertion that journals should serve to develop ideas, not solely evaluate?

18 Responses to “It’s not peer review if you aren’t familiar with the subject”

  1. Ed Webb says:

    George – I’m not in your field, exactly, although as an educator who uses digital technology I’m a consumer of what you do. The model you propose of journals as incubators and developers of ideas through a productive and iterative review process doesn’t sound like anything I have heard of in my own areas of research (political science, international and area studies), but it does sound attractive. I guess for the moment we have to stick to the blog and comment format if we want real time input from our peers. Also, you may find this of interest:

  2. Mark Bullen says:

    As somebody who has been involved in the peer review process as an author, a reviewer and a journal editor, I understand your frustration but I don’t think it is fair to condemn the traditional peer review process based on one unsatisfactory experience with one journal. Nor do I think it is appropriate to set up blogging as the alternative to traditional peer review. Yes, blogging can generate very thoughtful feedback but the blogosphere is also responsible for perpetuating ideas that have no basis in fact. One of my frustrations with blogs is that too often the identity of the blogger is either difficult to find or not provided. The reader has no way of assessing the qualifications of the author. Yet these blogs will be cited and reposted to other blogs. One of the most egregious examples of this use of social media to perpetuate unsubstantiated claims occurred last week. Don Tapscott tweeted a link to a blog that made a claim about the percentage of generation Y using social media. Within minutes this was retweeted by at least 10 people. Clearly none of these people had even read the blog because it was incomprehensible, had no author identification, and appears to have been autogenerated based on Google AdWords. Despite this kind of misuse of social media, I think they do play a valuable role in generating discourse, and disseminating ideas and information quickly, but I think educators have been far too uncritical in how we make use of information in the blogosphere.

    Clearly academic journal publishing needs to catch up with the changes in technology and methods of communication. Many are doing this by moving to fully online publication and open access, eliminating arbitrary publication deadlines, and moving towards a continual publication model. This won’t do anything to help the review process but I think the problem you faced is a problem with how that journal manages its review process, not an inherent problem with peer review.

    At the Journal of Distance Education, which I edit, I try to ensure that the reviewers have expertise in the content of the articles they are being asked to review, and I review the reviews before the comments are sent to authors to ensure they are appropriate. Unless an article is completely unsuitable, I encourage the authors to make necessary revisions and resubmit. We publish an issue as soon as we have a reasonable collection of articles, our turnaround time from submission to publication is usually less than 12 months, and all articles are professionally edited. We have sections for peer-reviewed empirical research and sections for non-peer reviewed articles.

    There are probably many more things that academic journals could be doing to provide more developmental support and to generate discussion of the ideas presented in their articles but these things take time and most academic journals are being run by volunteers with minimal support.


  3. admin says:

    Hi Ed – while journals might not actively serve a development role in your field, I would suggest that an effectively completed review is developmental for the author…that is, if I submit an article, you provide a critique (reasoned and with support), I learn as an author due to your consideration. Peer review, for me at least, should be about more than simply saying “accept” or “reject”.

    …and yes, I guess blogs will work for now :) .


  4. admin says:

    Hi Mark,

    Thanks for your comments. My experience was not confined to this one instance. I’ve had other rejection experiences that have been almost as bad…and then others that have been very well handled with effective feedback.

    As soon as I read your mention of blog identification, I checked my about page. D’oh! On this site I still had the blank template that is included in a WP install. Oops. Now updated :) .

    Your comments about Tapscott-like mentality are, sadly, reflective of my own experience as well. Which is why I appreciate the peer review process. I’m not in favour of “throwing out” peer review and replacing it with blogs, though in this journal submission, that may be accurate.

    I’m not suggesting that we dismiss peer review…but I want to ensure that it is working well for authors and reviewers alike. My post above was an attempt to address the value of review and the need for reviewers and editors to devote needed time.


  5. Mark Bullen says:

    Hi George:

    I didn’t really take your comments as a rejection of the peer review process but I fear others will. There seems to be a growing anti-gatekeeping movement, a (naive) view that all information should be left unfiltered and that only the end user should make decisions about its value… or, worse, that value should be left up to the “wisdom of the crowd”. That’s fine for the few of us who have the time and inclination to submit everything we read to this kind of critical scrutiny. Most people don’t and rely on others to do some of this for them. Not that that absolves people of the need to be critical, but it does hopefully make the process more manageable.


  6. The frustration I’m hearing sounds similar to mine about the issue that causes it: establishment of expertise. You seem to be saying that only those with certain expertise in the field of the subject should do peer reviews. How do you establish the standard for this expertise? As with my concern that crowds tend to choose their own experts, they may in fact be wrong. That is, the person identified as an expert may have been chosen not through real expertise but through political clout, charisma, etc., and the choice could be disastrous in the wrong field.

    Your statement: “Journals should not be knowledge declaration spaces. Journals should be concerned with knowledge growth as a process in service of a field of inquiry.” seems to contradict your first point (expertise needed to review) and to make the second point I mention.

    In our (where I work) peer review processes we have a mix of assumed expertise through experience/exposure to a field as well as others who are simply reactionaries, so to speak. Real expertise isn’t needed as they will develop it through exposure & learning the material being reviewed & the discussions. Otherwise it is accepted as the thought of another perspective which has value in itself.

    This balance will always be a tension in learning, who do you trust & why & for what. Ultimately I think it will depend on the individual situation & personalities involved, for good or ill.

  7. Scott Wilson says:

    I don’t think the problem is with peer review itself; the problem is one of volume. There are far too many papers produced in ed tech based on very, very little actual research work. As a result, reviewers are stretched very thin, comprehensive peer review is hard to make time for, and paper quality is generally poor. (Conference papers can be especially bad.)

    I’ve also noted how often the quality of posters far exceeds that of papers – I think partly this is due to much work in our field better suiting poster format (we made this, it looks like this, this is what some people said about it…), and is partly down to the sort of writing skills & experience people working in this field possess.

  8. Larry Cuffe says:

    Within online communities it’s easy to develop insular groups with little contact with, or reference to, similar work, the historical context of their field, or similar fields which may have traveled the same way. The time component of the peer review process allows for a much more reasoned and researched response to new work than does the blogosphere.
    At the moment I see the following hierarchy:
    1) informal discussion offline
    2) blogs and online discussions
    3) peer-reviewed journals
    4) highly cited papers in peer-reviewed journals.
    You can make an argument that 2) should become more important than 3); however, I feel each has an important role to play in generating and stockpiling wisdom.

  9. Lanny Arvan says:


    We are a practitioner field. Where does scholarly communication fit in and journal publishing in particular? Is it part of the regular work? Or done as overload? In an academic discipline where it clearly is part of the regular work, one might look at other causes for journal quality decline (like the aging of the professoriate). For our field, however, I believe you can find the causes for the problems you identify in the nature of the work itself.

    The related issue you might want to consider is the audience for these journal publications. Are they for learning technologists and their professional development? Faculty who might be encouraged to embrace new approaches? Some other population? My view is that practitioner to practitioner, informal communication is great. For other audiences, I’m less sure. There is a separating the chaff from the wheat problem that is, in my view, significant and perhaps too difficult to solve.



  10. gsiemens says:

    Hi Ariel – I’ve had many debates with colleagues about expertise. I often encounter comments that are outright dismissive of “experts”. But we need to be thoughtful (careful?) in how we approach this. Simply because tools give everyone a voice does not mean that every voice is equally valid on all subjects. For example, I may be informed about educational applications of certain technologies, and in that area, I would suggest my opinion could be counted as valuable. But if the conversation turns to sociology and affinity groups – areas in which I have limited experience – then I need to acknowledge that limitation. I’ve used the term “connected specialization” to address this. It’s not that one person’s voice is always more valid than someone else’s. But, in context, based on experience and expertise, some opinions are to be valued above others.

    I don’t think acknowledging expertise negates the value of “growing a field”…though you do raise a good point. If expertise is to be desired, how do we develop people who are capable of serving as expert reviewers? The review process can partly serve this (I learned to review in stages – small tasks on conference committees…then more involved in journals…etc). But it’s the entire field (conferences, conversations, collaboration, mentoring) that serves this development role. Reviewers need to self-declare their lack of familiarity in an area. If I’m not familiar with technical details of XML, for example, I may still be able to see how separating content/presentation can benefit educators. In this instance, my review is driven by my area of expertise…and my comments should reflect this. It would be silly of me to state “the technical XML infrastructure suggested in this article is ______” if I don’t understand this dimension.

  11. gsiemens says:

    Hi Scott – agreed – conference papers can be bad. But, if a paper is exceptionally poor (in a journal) the editor should remove it from the review cycle – assuming I buy into the idea that we have too much volume. If an article is at the stage of peer review, it requires the full focus of the reviewer. Some filtering will generally have occurred by this stage. To reject an article that I don’t understand is not helpful to the author.

    The other option – and one that hasn’t been substantially explored by peer review – is to adopt a slashdot-style filtering approach. A community or network can provide an informal review through skimming, but at the point of consideration for formal publication, the article is subject to more rigor. The first stage is a loose filtering process, theoretically removing the worst articles from more thoughtful review. The model here is in line with “publish everything, let the network sort it out”. I’m inclined to favour this approach for filtering (rather than an editor), but new problems arise in this model too. Innovation, for example, will often be rejected by the existing mindset of a field. How far is crowdsourcing removed from group-think?

  12. gsiemens says:

    Larry – agreed. Blogs and journals serve different roles. They are not exclusive and we would not be well served if we sought to have either one replace the other. I introduced the blog example above in order to highlight the value of feedback when people who are commenting have familiarity with the subject. The main difference between blogs/journals *should be* the depth of review. Blogs are more informal. However, in the example I provided, the opposite occurred…simply because the journal review process was not well managed.

  13. gsiemens says:

    Hi Lanny,

    Yes, ours is a field of practitioners…but all practice has at least some theory (whether explicit or embedded). Journals that emphasize practice may not readily accept more philosophical pieces. Journals that emphasize “scientific” research may be less inclined to accept qualitative research summaries. As you note, audience influences what’s published.

    With tools like OJS enabling almost anyone to set up and run a journal, we could end up with thousands of micro-focused journals. But I’m not convinced this is a great idea either. Given the aggregate nature of our field, we need (as I mentioned above to Ariel) to focus on connecting specialized elements. It’s tough to get the mix between “entire field” and “specialized subset” right…

  14. Nicola says:

    How about suggesting suitable reviewers at time of submission, making it an essential requirement of the submission process?

    You may know many people who might be able to take on, or develop into, a reviewer role within the relevant field – good reviewers and experts are difficult to find, as you have mentioned.

    If each person who submitted to a journal or conference could suggest suitable people, it would immediately indicate their depth of experience in the field without the editor even reading a word of the paper. It might also help the editors reach a wider range of potential reviewers.

  15. How about shortening conference submissions? I think one of the major problems in conference proceedings is that authors try to fill 8 to 15 pages, but their core statement could be made in one or two pages. If we limited some conferences to a shorter number of allowed pages (or used a two-stage review process: first a shorter abstract, afterwards a longer article) AND allowed a real peer review (the submitters to the conference would peer-review the papers jointly with the committee), we should get a higher quality review process.

    As you mentioned, the members of the review board are often neither current on the topics nor willing to invest a large amount of time. So why don’t we go back to the basics of p e e r review and let the authors review what other conference authors have to say?

    @nicola: I also love your approach of let the authors suggest reviewers…

  16. Howard says:

    I like your suggested process and the developmental perspective; how authors and reviewers become almost co-creators within the process. Much more in line with what a socially constituted knowledge process would look like. I think that many journals still reflect the residual processes that originally had discovery as the root metaphor for knowledge creation instead of development.
    I believe the expansion and fragmentation of knowledge might also be at issue, at least in some journals. I remember Noam Chomsky commenting that when he completed his dissertation, pretty much any professor in his department could have sat on any student’s committee, but that knowledge specialization had made that no longer the case. Personally, I spent much effort searching for a dissertation topic that could fit a potential committee. There were many potential topics that I simply could not pursue and expect to find a knowledgeable committee. If you’re talking specific technologies it might be easy to say someone is or is not qualified, but someone’s broader knowledge base is much harder to qualify, for editors or for themselves, politically and otherwise. Overall I think there must be better processes, and Wolfgang might be onto something.

  17. Jonathon Richter says:

    Slow Peer Review vs. Open faced Blog as means to vet scholarly work? — seems to be two ends of a wider spectrum in the age of Digital Scholarship. Perhaps “flagged protections” or “patrolled revisions” re: wikipedia would be a better, more suitable 3rd alternative

  18. [...] are clearly some problems with the current peer review model and I’m interested in exploring some of the alternatives. [...]