“Aca 2.0 Q&A”
Usage scenarios and incentive systems
for a distributed academic publication model
R. Poss, S. Altmeyer, M. Thompson, R. Jelier
arXiv:1404.7753v2 [cs.DL] 10 May 2014
May 15, 2014
Abstract
“Academia 2.0” is a proposal to organize scientific publishing around
true peer-to-peer distributed dissemination channels and to eliminate the
traditional role of the academic publisher. This model will first be presented
at the 2014 workshop on Reproducible Research Methodologies and New
Publication Models in Computer Engineering (TRUST’14) in the form of
a high-level overview, so as to stimulate discussion and gather feedback
on its merits and feasibility. This report complements the 6-page introductory
article presented at TRUST by detailing the review processes and some
usage scenarios, and by answering the reviewers’ comments in detail.
Contents
Prologue
1 Introduction
2 Peer review
2.1 Characteristics of the review process
2.2 Anonymous reviews
2.3 Organization of the review process
2.4 Double blind reviews
2.5 Previous work on open peer review
3 Scenarios and incentives
3.1 Fear of credit loss
3.2 Early publication
3.3 File drawer effect
3.4 Next-generation journals
4 Questions and Answers
Acknowledgements
References
Prologue
The present report is intended to be extended over time, as additional use
cases, issues, or opportunities are envisioned. Suggestions for improvements or
new material are welcome, and the contributor list can be adjusted accordingly.
The latest version of this document can be retrieved from
http://arxiv.org/abs/1404.7753.
1 Introduction
Supposing the publishing industry did not exist, what could we do to optimize
the dissemination of scientific knowledge and progress, short of recreating the
publishing industry? This is the question the “Academia 2.0” proposal [17] addresses.
The proposal can be summarized as follows:
• every researcher can self-publish online, including reviews of other works,
whenever they wish;
• works are securely timestamped and identified by title and author list and,
where applicable, content hash;
• public organizations are responsible for publishing reviews that reviewers
wish to keep anonymous while retaining accountability;
• a new semantic object, the “post-hoc citation”, can be used to assert prior
work, influence or plagiarism relationships when they are discovered only
after publication;
• a new distributed infrastructure serves for document indexing and lookup;
it also provides public and free interfaces for search and syndication, to
aggregate and distribute, on-demand, relevant documents: per field of
expertise, geographical area, social affinity, or relevance to a topic.
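To make the second point above concrete, a document handle could be derived from a work’s title, author list, and a hash of its content. The following Python sketch is our own illustration, not part of the proposal’s specification; the function and field names are hypothetical.

```python
import hashlib
import json

def make_document_handle(title, authors, content):
    """Build a self-describing document handle from a title, an author
    list, and the document bytes; the content hash is recorded in the
    form "sha256/<hex digest>" so the hash algorithm is explicit."""
    digest = hashlib.sha256(content).hexdigest()
    return {
        "title": title,
        "authors": list(authors),
        "content_hash": "sha256/" + digest,
    }

# Example: the handle is deterministic, so any byte-identical copy of
# a document yields the same handle, independent of where it is hosted.
handle = make_document_handle(
    "Academia 2.0: removing the publisher middle-man while retaining impact",
    ["Raphael Poss", "Sebastian Altmeyer", "Mark Thompson", "Rob Jelier"],
    b"... full document bytes ...",
)
print(json.dumps(handle, indent=2))
```

Because the hash covers the content rather than a storage location, the handle remains valid across mirrors of the distributed infrastructure.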
Following this proposal, a large number of questions have been raised by
the TRUST review committee. The process of answering these questions also
contributes to the elaboration of our model; a synthetic answer suitable for a
summary would fail to convey the more nuanced aspects of our proposal.
For example, many questions have arisen regarding the organization of peer
review: how to annotate works with measures of their appreciation by other
members of the scientific community. We develop a more complete, structured
answer in section 2.
In particular, the reviewers have highlighted that there have already been
multiple efforts to make the review process of scientific works more
transparent. Our position is that the “Academia 2.0” proposal is complementary
to these previous efforts, as it focuses on a different set of goals (cf. the first
question in section 4 and section 2 of the TRUST article). Nevertheless, we also
review the relationship between our work and contemporary efforts in open peer
review in section 2.5 below.
Similarly, the proposal enables a new incentive system, which in turn enables
new scenarios for the way scientists interact with each other. We highlighted
two such scenarios in the TRUST submission; the reviewers have asked for more
details and alternate scenarios, which we provide in section 3. We then address
the remaining questions in section 4.
Table 1: Example review object: a TRUST’14 review
Author “Anonymous reviewer 12 mandated by the TRUST’14 program committee”
Title “Review of submission 1”
Target document handle { “Academia 2.0: removing the publisher middle-man while retaining impact”,
[ “Raphael Poss”, “Sebastian Altmeyer”, “Mark Thompson”, “Rob Jelier” ],
“sha256/e83b0a9861eec4906f52d269056925bd0692c77882ee54d0a62eb876cc61be69” }
Overall evaluation 2/3, higher is better
Reviewer’s confidence 4/5, higher is better
Reviewer comments The paper presents the idea to have an open publication model. The reviewing,
indexing etc will be done in a crowd fashion using semantic technology. The paper
is well written and presents novel ideas. My major concern for now is that the value
of semantic technology is overestimated. It is also not clear how it can be made sure
that not only too few reviews will be available and thus a wrong impression about
the work is made visible.
Review process start date 2014-03-14
Review process end date 2014-04-14
Review process characteristics
• Author identity known to reviewer at start of process
• Review committee known to author before start of process
• Review object not released publicly before end of process
• Reviewed object not released publicly before end of process
Review process coordinators { “Grigori Fursin” (INRIA, France), “Bruce Childers” (University of Pittsburgh,
USA), “Alex K. Jones” (University of Pittsburgh, USA), “Daniel Mosse” (University
of Pittsburgh, USA) }
Reviewer identity escrow { “Jose Nelson Amaral” (University of Alberta, Canada), “Calin Cascaval” (Qual-
comm, USA), “Jack Davidson” (University of Virginia, USA), “Evelyn Duesterwald”
(IBM, USA), “Lieven Eeckhout” (Ghent University, Belgium), “Eric Eide” (Univer-
sity of Utah, USA), “Sebastian Fischmeister” (University of Waterloo, Canada),
“Michael Gerndt” (TU Munich, Germany), “Christophe Guillon” (STMicroelectron-
ics, France), “Shriram Krishnamurthi” (Brown University, USA), “Hugh Leather”
(University of Edinburgh, UK), “Anton Lokhmotov” (ARM, UK), “Mikel Lujan”
(University of Manchester, UK), “David Padua” (University of Illinois at Urbana-
Champaign, USA), “Christoph Reichenbach” (Johann-Wolfgang Goethe Universitat
Frankfurt, Germany), “Arun Rodrigues” (Sandia National Laboratories, USA), “Reiji
Suda” (University of Tokyo, Japan), “Sid Touati” (INRIA, France), “Jesper Larsson
Traff” (Vienna University of Technology, Austria), “Petr Tuma” (Charles Univer-
sity, Czech Republic), “Jan Vitek” (Purdue University, USA), “Vladimir Voevodin”
(Moscow State University, Russia), “Vittorio Zaccaria” (Politecnico di Milano, Italy),
“Xiaoyun Zhu” (VMware, USA) }
(Note: this example reflects a real review)
2 Peer review
The proposal promotes open peer review, where the review process is transparent
and the text of reviews is public. If so desired, the identity of reviewers
may be hidden by an identity escrow service.
The basic “building block” is the review object: a document with semantic
fields that separately identify the work(s) being reviewed, via their document
handles, the body of the review (text and/or grades), the author of the review,
and the review process used to produce the review object.
An example is given in table 1. This review object represents a possible
encoding, in our proposal, of an actual review produced by the blind review
process applied to our submission to the TRUST workshop.
As the example illustrates, review objects must be self-contained: the
semantic data for evaluation scores embeds information on how to interpret the
scores; the characteristics of the review process are embedded in the review
object; and the full list of individuals who serve as escrow for the identity
of the reviewer is given explicitly, for accountability. Although not present
in this example, we suggest also including contact information for the people
involved, and optionally digital signatures to disambiguate homonyms.
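To illustrate the self-containment requirement, the review object of table 1 could be encoded as a structured record in which each score carries its own grading scale. The Python sketch below is our own illustration; the field names and the JSON encoding are assumptions, not a fixed schema from the proposal, and the escrow list is deliberately truncated.

```python
import json

# Illustrative encoding of (part of) the review object in table 1.
# Each score embeds its scale and direction, so the object can be
# interpreted without consulting any external document.
review_object = {
    "author": "Anonymous reviewer 12 mandated by the TRUST'14 program committee",
    "title": "Review of submission 1",
    "target": {
        "title": "Academia 2.0: removing the publisher middle-man while retaining impact",
        "authors": ["Raphael Poss", "Sebastian Altmeyer", "Mark Thompson", "Rob Jelier"],
        "content_hash": "sha256/e83b0a9861eec4906f52d269056925bd0692c77882ee54d0a62eb876cc61be69",
    },
    "scores": {
        "overall_evaluation": {"value": 2, "scale_max": 3, "higher_is_better": True},
        "reviewer_confidence": {"value": 4, "scale_max": 5, "higher_is_better": True},
    },
    "process": {
        "start_date": "2014-03-14",
        "end_date": "2014-04-14",
        "characteristics": [
            "author identity known to reviewer at start of process",
            "review committee known to author before start of process",
            "review object not released publicly before end of process",
            "reviewed object not released publicly before end of process",
        ],
    },
    # Truncated for brevity; a real review object lists every escrow
    # member explicitly, for accountability.
    "identity_escrow": ["Jose Nelson Amaral", "Calin Cascaval"],
}

print(json.dumps(review_object, indent=2))
```

A search or syndication service can then aggregate such objects mechanically, since the interpretation of every field travels with the object itself.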