-rw-r--r--  paper.tex                 |  68
-rw-r--r--  peer-review/1-answer.txt  |  80
-rwxr-xr-x  project                   |   4
-rw-r--r--  tex/src/references.tex    |   9
4 files changed, 82 insertions(+), 79 deletions(-)
diff --git a/paper.tex b/paper.tex
index 070223e..086e620 100644
--- a/paper.tex
+++ b/paper.tex
@@ -150,22 +150,24 @@ To highlight the necessity, a short review of commonly-used tools is provided be
\fi%
}
-To isolate the environment, VMs have sometimes been used, e.g., in \href{https://is.ieis.tue.nl/staff/pvgorp/share}{SHARE} (which was awarded second prize in the Elsevier Executable Paper Grand Challenge of 2011 but was discontinued in 2019).
-However, containers (in particular, Docker, and to a lesser degree, Singularity) are currently the most widely-used solution.
-We will thus focus on Docker here.
+To isolate the environment, VMs have sometimes been used, e.g., in \href{https://is.ieis.tue.nl/staff/pvgorp/share}{SHARE} (awarded second prize in the Elsevier Executable Paper Grand Challenge of 2011, but discontinued in 2019).
+However, containers (e.g., Docker or Singularity) are currently the most widely-used solution.
+We will focus on Docker here because it is currently the most common.
+\new{It is hypothetically possible to precisely identify the Docker ``images'' that are used, via their checksums (or ``digests''), and thereby re-create an identical OS image later.
However, that is rarely done.}
-Usually images are imported with generic operating system (OS) names; e.g., \cite{mesnard20} uses `\inlinecode{FROM ubuntu:16.04}'
- \ifdefined\noappendix
- \new{(more examples in the \href{https://doi.org/10.5281/zenodo.\projectzenodoid}{appendices})}%
- \else%
- \new{(more examples: see the appendices (\ref{appendix:existingtools}))}%
- \fi%
-. The extracted tarball (from \url{https://partner-images.canonical.com/core/xenial}) is updated almost monthly, and only the most recent five are archived there.
- Hence, if the image is built in different months, its output image will contain different OS components.
+Usually images are imported with operating system (OS) names; e.g., \cite{mesnard20}
+\ifdefined\noappendix
+\new{(more examples in the \href{https://doi.org/10.5281/zenodo.\projectzenodoid}{appendices})}%
+\else%
+\new{(more examples: see the appendices (\ref{appendix:existingtools}))}%
+\fi%
+{ }imports `\inlinecode{FROM ubuntu:16.04}'.
+The extracted tarball (from \url{https://partner-images.canonical.com/core/xenial}) is updated almost monthly, and only the most recent five are archived there.
+Hence, if the image is built in different months, it will contain different OS components.
% CentOS announcement: https://blog.centos.org/2020/12/future-is-centos-stream
-In the year 2024, when long-term support (LTS) for this version of Ubuntu expires, the image will be unavailable at the expected URL \new{(if not abruptly aborted earlier, like CentOS 8 which will be terminated 8 years early).}
+In the year 2024, when long-term support (LTS) for this version of Ubuntu expires, the image will be unavailable at the expected URL \new{(if not aborted earlier, like CentOS 8 which will be terminated 8 years early).}
+
Generally, \new{pre-built} binary files (like Docker images) are large and expensive to maintain and archive.
%% This URL: https://www.docker.com/blog/docker-hub-image-retention-policy-delayed-and-subscription-updates}
\new{Because of this, DockerHub (where many reproducible workflows are archived) announced that inactive images (older than 6 months) will be deleted in free accounts from mid 2021.}
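
A minimal sketch of such digest-based pinning (not taken from the paper or from Maneage; the commands only assume a working Docker installation):

    # Record the exact, content-addressed identifier of the image that was pulled.
    docker pull ubuntu:16.04
    docker inspect --format '{{index .RepoDigests 0}}' ubuntu:16.04
    # The printed "ubuntu@sha256:..." string can then replace the mutable
    # "FROM ubuntu:16.04" line of a Dockerfile, so that a later build
    # re-creates the identical OS image (as long as the image stays archived).
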
@@ -200,11 +202,6 @@ However, since they are not part of the core, their longevity can be assumed to
Therefore, the core Jupyter framework leaves very few options for project management, especially as the project grows beyond a small test or tutorial.}
In summary, notebooks can rarely deliver their promised potential \cite{rule18} and may even hamper reproducibility \cite{pimentel19}.
-An exceptional solution we encountered was the Image Processing Online Journal (IPOL, \href{https://www.ipol.im}{ipol.im}).
-Submitted papers must be accompanied by an ISO C implementation of their algorithm (which is buildable on any widely used OS) with example images/data that can also be executed on their webpage.
-This is possible owing to the focus on low-level algorithms with no dependencies beyond an ISO C compiler.
-However, many data-intensive projects commonly involve dozens of high-level dependencies, with large and complex data formats and analysis, so this solution is not scalable.
-
@@ -250,7 +247,7 @@ More stable/basic tools can be used with less long-term maintenance costs.
\textbf{Criterion 4: Scalability.}
A scalable project can easily be used in arbitrarily large and/or complex projects.
-On a small scale, the criteria here are trivial to implement, but can rapidly become unsustainable (see IPOL example above).
+On a small scale, the criteria here are trivial to implement, but can rapidly become unsustainable.
\textbf{Criterion 5: Verifiable inputs and outputs.}
The project should automatically verify its inputs (software source code and data) \emph{and} outputs, without needing any expert knowledge.
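
A minimal sketch of such automatic verification (this is an illustration, not Maneage's actual implementation; the file name and recorded checksum are placeholders):

    # Verify a downloaded input file against a SHA-256 checksum recorded in the
    # project source; abort the build on any mismatch.
    expected="<checksum recorded in the project source>"   # placeholder
    actual=$(sha256sum input.dat | awk '{print $1}')
    if [ "$actual" != "$expected" ]; then
        echo "input.dat: checksum mismatch" >&2
        exit 1
    fi
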
@@ -351,8 +348,9 @@ For Windows-native software that can be run in batch-mode, evolving technologies
The analysis phase of the project, however, naturally differs from one project to another at a low level.
It was thus necessary to design a generic framework to comfortably host any project, while still satisfying the criteria of modularity, scalability, and minimal complexity.
-We demonstrate this design by replicating Figure 1C of \cite{menke20} in Figure \ref{fig:datalineage} (left).
-Figure \ref{fig:datalineage} (right) is the data lineage graph that produced it (including this complete paper).
+This design is demonstrated with the example of Figure \ref{fig:datalineage} (left).
+It is an enhanced replication of the ``tool'' curve of Figure 1C in \cite{menke20}.
+Figure \ref{fig:datalineage} (right) is the data lineage that produced it.
\begin{figure*}[t]
\begin{center}
@@ -455,14 +453,6 @@ There is a \new{thoroughly elaborated} customization checklist in \inlinecode{RE
The current project's Git hash is provided to the authors as a \LaTeX{} macro (shown here at the end of the abstract), as well as the Git hash of the last commit in the Maneage branch (shown here in the acknowledgements).
These macros are created in \inlinecode{initialize.mk}, with \new{other basic information from the running system like the CPU architecture, byte order or address sizes (shown here in the acknowledgements)}.
-The branch-based design of Figure \ref{fig:branching} allows projects to re-import Maneage at a later time (technically: \emph{merge}), thus improving its low-level infrastructure: in (a) authors do the merge during an ongoing project;
-in (b) readers do it after publication; e.g., the project remains reproducible but the infrastructure is outdated, or a bug is fixed in Maneage.
-\new{Generally, any git flow (branching strategy) can be used by the high-level project authors or future readers.}
-Low-level improvements in Maneage can thus propagate to all projects, greatly reducing the cost of curation and maintenance of each individual project, before \emph{and} after publication.
-
-Finally, the complete project source is usually $\sim100$ kilo-bytes.
-It can thus easily be published or archived in many servers, for example it can be uploaded to arXiv (with the \LaTeX{} source, see the arXiv source in \cite{akhlaghi19, infante20, akhlaghi15}), published on Zenodo and archived in SoftwareHeritage.
-
\begin{lstlisting}[
label=code:branching,
caption={Starting a new project with Maneage, and building it},
@@ -483,6 +473,15 @@ $ ./project make # Re-build to see effect.
$ git add -u && git commit # Commit changes.
\end{lstlisting}
+The branch-based design of Figure \ref{fig:branching} allows projects to re-import Maneage at a later time (technically: \emph{merge}), thus improving its low-level infrastructure: in (a) authors do the merge during an ongoing project;
+in (b) readers do it after publication, e.g., when the project remains reproducible but its infrastructure is outdated, or when a bug has been fixed in Maneage.
+\new{Generally, any git flow (branching strategy) can be used by the high-level project authors or future readers.}
+Low-level improvements in Maneage can thus propagate to all projects, greatly reducing the cost of curation and maintenance of each individual project, before \emph{and} after publication.
+
+Finally, the complete project source is usually $\sim100$ kilobytes.
+It can thus easily be published or archived on many servers; for example, it can be uploaded to arXiv (with the \LaTeX{} source, see the arXiv source in \cite{akhlaghi19, infante20, akhlaghi15}), published on Zenodo and archived in SoftwareHeritage.
+
+
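
A minimal sketch of such a re-import (assuming the Maneage repository was added as a remote named "maneage" when the project was created; the remote name is an assumption, while the branch name "maneage" is the one referred to above):

    # Merge the latest low-level Maneage infrastructure into an ongoing project.
    git checkout master        # the project's own branch
    git fetch maneage          # fetch the latest commits of the Maneage repository
    git merge maneage/maneage  # merge its 'maneage' branch (resolve conflicts if any)
    ./project make             # re-build to confirm the project still works
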
@@ -1487,6 +1486,7 @@ Besides some small differences Galaxy seems very similar to GenePattern (Appendi
\subsection{Image Processing On Line journal, IPOL (2010)}
+\label{appendix:ipol}
The IPOL journal\footnote{\inlinecode{\url{https://www.ipol.im}}} \citeappendix{limare11} (first published article in July 2010) publishes papers on image processing algorithms as well as the full code of the proposed algorithm.
An IPOL paper is a traditional research paper, but with a focus on implementation.
The published narrative description of the algorithm must be detailed enough that any specialist can implement it in their own programming language.
@@ -1495,12 +1495,16 @@ The authors must also submit several example datasets/scenarios.
The referee is expected to inspect the code and narrative, confirming that they match with each other, and with the stated conclusions of the published paper.
After publication, each paper also has a ``demo'' button on its webpage, allowing readers to try the algorithm on a web-interface and even provide their own input.
-The IPOL model is the single most robust model of peer review and publishing computational research methods/implementations that we have seen in this survey.
-It has grown steadily over the last 10 years, publishing 23 research articles in 2019 alone.
+IPOL has grown steadily over the last 10 years, publishing 23 research articles in 2019 alone.
We encourage the reader to visit its webpage and see some of its recent papers and their demos.
-The reason it can be so thorough and complete is its very narrow scope (image processing algorithms), where the published algorithms are highly atomic, not needing significant dependencies (beyond input/output), allowing the referees and readers to go deeply into each implemented algorithm.
+The reason it can be so thorough and complete is its very narrow scope (low-level image processing algorithms), where the published algorithms are highly atomic, not needing significant dependencies (beyond input/output of well-known formats), allowing the referees and readers to go deeply into each implemented algorithm.
In fact, high-level languages like Perl, Python or Java are not acceptable in IPOL precisely because of the additional complexities, such as dependencies, that they require.
-If any referee or reader were inclined to do so, a paper written in Maneage (the proof-of-concept solution presented in this paper) could be scrutinised at a similar detailed level, but for much more complex research scenarios, involving hundreds of dependencies and complex processing of the data.
+However, many data-intensive projects commonly involve dozens of high-level dependencies, with large and complex data formats and analysis, so this solution is not scalable.
+
+IPOL thus fails to satisfy our Scalability criterion (Criterion 4).
+Furthermore, by not publishing/archiving each paper's version-control history or directly linking the analysis to the produced paper, it fails Criteria 6 and 7.
+Note that on the IPOL webpage it is possible to change parameters, but that will not affect the produced PDF.
+A paper written in Maneage (the proof-of-concept solution presented in this paper) could be scrutinised at a similarly detailed level to IPOL, but for much more complex research scenarios, involving hundreds of dependencies and complex processing of the data.
diff --git a/peer-review/1-answer.txt b/peer-review/1-answer.txt
index ae28c5f..55be70a 100644
--- a/peer-review/1-answer.txt
+++ b/peer-review/1-answer.txt
@@ -7,17 +7,18 @@ already done a very comprehensive review of the tools (as you may notice
from the Git repository[1]). However, the CiSE Author Information
explicitly states: "The introduction should provide a modicum of background
in one or two paragraphs, but should not attempt to give a literature
-review". This is the usual practice in previously published papers at CiSE and
-is in line with the very limited word count and maximum of 12 references to
-be used in bibliography.
+review". This is the usual practice in previously published papers at CiSE
+and is in line with the maximum word count of 6250 and the maximum of 12
+references allowed in the bibliography.
We agree with the need for this extensive review to be on the public record
-(creating the review took a lot of time and effort; most of the tools were run and
-tested). We discussed this with the editors and the following
-solution was agreed upon: we include the extended review as a set of appendices in
-the arXiv[2] and Zenodo[3] pre-prints of this paper and mention these
-publicly available appendices in the submitted paper so that any interested
-reader can easily access them.
+(creating the review took a lot of time and effort; most of the tools were
+run and tested). We discussed this with the editors and the following
+solution was agreed upon: the extended reviews will be published as a set
+of appendices in the arXiv[2] and Zenodo[3] pre-prints of this paper. These
+publicly available appendices are also mentioned in the submitted paper so
+that any interested reader of the final paper published by CiSE can easily
+access them.
[1] https://gitlab.com/makhlaghi/maneage-paper/-/blob/master/tex/src/paper-long.tex#L1579
[2] https://arxiv.org/abs/2006.03018
@@ -205,24 +206,24 @@ ANSWER:
large. However, the 6250-word limit is very tight, and if we added more on
this topic, we would have to remove points of higher priority.
Hopefully this can be the subject of a follow-up paper.
-3. A review of ReproZip is in Appendix B.
-4. A review of Occam is in Appendix B.
-5. A review of Popper is in Appendix B.
-6. A review of Whole Tale is in Appendix B.
-7. A review of Snakemake is in Appendix A.
-8. CWL and WDL are described in Appendix A (Job management).
-9. Nextflow is described in Appendix A (Job management).
-10. Sumatra is described in Appendix B.
-11. Podman is mentioned in Appendix A (Containers).
-12. AppImage is mentioned in Appendix A (Package management).
-13. Flatpak is mentioned in Appendix A (Package management).
-14. Snap is mentioned in Appendix A (Package management).
+3. A review of ReproZip is in Appendix C.
+4. A review of Occam is in Appendix C.
+5. A review of Popper is in Appendix C.
+6. A review of Whole Tale is in Appendix C.
+7. A review of Snakemake is in Appendix B.
+8. CWL and WDL are described in Appendix B (Job management).
+9. Nextflow is described in Appendix B (Job management).
+10. Sumatra is described in Appendix C.
+11. Podman is mentioned in Appendix B (Containers).
+12. AppImage is mentioned in Appendix B (Package management).
+13. Flatpak is mentioned in Appendix B (Package management).
+14. Snap is mentioned in Appendix B (Package management).
15. nbdev and jupytext are high-level tools to generate documentation and
    package custom code for Conda or PyPI. High-level package managers
    like Conda and PyPI have already been thoroughly reviewed in Appendix A
for their longevity issues, so we feel that there is no need to
include these.
-16. Bazel is mentioned in Appendix A (job management).
+16. Bazel is mentioned in Appendix B (job management).
17. Debian's reproducible builds are only designed for ensuring that software
packaged for Debian is bitwise reproducible. As mentioned in the
discussion section of this paper, the bitwise reproducibility of software is
@@ -244,12 +245,12 @@ ANSWER:
* A model project for reproducible papers: https://arxiv.org/abs/1401.2000
* Executable/reproducible paper articles and original concepts
-ANSWER: Thank you for highlighting these points. Appendix B starts with a
+ANSWER: Thank you for highlighting these points. Appendix C starts with a
subsection titled "suggested rules, checklists or criteria" with a review of
existing sets of criteria. This subsection includes the sources proposed
by the reviewer [Sandve et al; Rule et al; Nust et al] (and others).
-ArXiv:1401.2000 has been added in Appendix A as an example paper using
+ArXiv:1401.2000 has been added in Appendix B as an example paper using
virtual machines. We thank the referee for bringing up this paper, because
the link to the VM provided in the paper no longer works (the URL
http://archive.comp-phys.org/provenance_challenge/provenance_machine.ova
@@ -348,7 +349,7 @@ FreeBSD (despite having bit-wise different executables).
provides little novelty (see comments below).
ANSWER: The previously suggested sets of criteria that were listed by
-Reviewer 1 are reviewed by us in the newly added Appendix B, and the
+Reviewer 1 are reviewed by us in the newly added Appendix C, and the
novelty and advantages of our proposed criteria are contrasted there
with the earlier sets of criteria.
@@ -541,7 +542,7 @@ ANSWER: Thank you very much for pointing out the works by Thain. We
couldn't find any first-author papers in 2015, but found Meng & Thain
(https://doi.org/10.1016/j.procs.2017.05.116) which had a related
discussion of why they didn't use Docker containers in their work. That
-paper is now cited in the discussion of Containers in Appendix A.
+paper is now cited in the discussion of Containers in Appendix B.
------------------------------
@@ -554,7 +555,7 @@ paper is now cited in the discussion of Containers in Appendix A.
ANSWER: Thank you for the reference. We are restricted in the main
body of the paper due to the strict bibliography limit of 12
-references; we have included Kurtzer et al 2017 in Appendix A (where
+references; we have included Kurtzer et al 2017 in Appendix B (where
we discuss Singularity).
------------------------------
@@ -569,7 +570,7 @@ we discuss Singularity).
ANSWER: The FAIR principles have been mentioned in the main body of the
paper, but unfortunately we had to remove its citation in the main paper (like
many others) to keep to the maximum of 12 references. We have cited it in
-Appendix B.
+Appendix C.
------------------------------
@@ -583,15 +584,10 @@ Appendix B.
further enrich the tool presented.
-ANSWER: Our section II discussing existing tools seems to be the most
-appropriate place to mention IPOL, so we have retained its position at
-the end of this section.
-
-We have indeed included an in-depth discussion of IPOL in Appendix B.
-We recommend it to the reader for any project written uniquely in C,
-and we comment on the readiness of Maneage'd projects for a similar
-level of peer-review control.
-
+ANSWER: We agree and have removed the IPOL example from that section.
+We have included an in-depth discussion of IPOL in Appendix C and we
+comment on the readiness of Maneage'd projects for a similar level of
+peer-review control.
------------------------------
@@ -657,7 +653,7 @@ Within the constraints of space (the limit is 6500 words), we don't
see how we could add more discussion of the history of our choice of
criteria or more anecdotal examples of their relevance.
-We do discuss some alternatives lists of criteria in Appendix B.A,
+We do discuss some alternative lists of criteria in Appendix C.A,
without debating the wider perspective of which criteria are the
most desirable.
@@ -1027,7 +1023,7 @@ ANSWER: This work and the proposed criteria are very different from
Popper. We agree that VMs and containers are an important component
of this field, and the appendices add depth to our discussion of this.
However, these do not appear to satisfy all our proposed criteria.
-A detailed review of Popper, in particular, is given in Appendix B.
+A detailed review of Popper, in particular, is given in Appendix C.
------------------------------
@@ -1041,7 +1037,7 @@ A detailed review of Popper, in particular, is given in Appendix B.
most promising tools for offering true reproducibility.
ANSWER: Containers and VMs have been more thoroughly discussed in
-the main body and also extensively discussed in appendix A (that are
+the main body and also extensively discussed in Appendix B (which is
now available in the arXiv and Zenodo versions of this paper). As
discussed (with many cited examples), Containers and VMs are only
appropriate when they are themselves reproducible (for example, if
@@ -1088,7 +1084,7 @@ of each step (why and how the analysis was done).
additional work highly relevant to this paper.
ANSWER: Thank you for the interesting paper by Lofstead+2019 on Data
-pallets. We have cited it in Appendix A as an example of how generic the
+pallets. We have cited it in Appendix B as an example of how generic the
concept of containers is.
The topic of linking data to analysis is also a core result of the criteria
@@ -1126,7 +1122,7 @@ ANSWER: All these tools have been reviewed in the newly added appendices.
ANSWER: A thorough review of current low-level tools and high-level
reproducible workflow management systems has been added in the extended
-Appendix.
+Appendices.
------------------------------
diff --git a/project b/project
index cdace62..e53bcf0 100755
--- a/project
+++ b/project
@@ -507,7 +507,9 @@ case $operation in
numwords=$(pdftotext paper.pdf && cat paper.txt | wc -w)
numeff=$(echo $numwords | awk '{print $1-850+500}')
echo; echo "Number of words in full PDF: $numwords"
- echo "No abstract, and captions (250 for each figure): $numeff"
+ if [ $noappendix = 1 ]; then
+ echo "No abstract, and captions (250 for each figure): $numeff"
+ fi
rm paper.txt
fi
fi
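
A self-contained sketch of the word-count logic in the hunk above (variables quoted for robustness; the 850 and 500 offsets are taken unchanged from the script, and $noappendix is assumed to be set earlier in it):

    # Count the words of the built PDF and print the estimate without the
    # abstract and figure captions (250 words per figure, as stated above).
    pdftotext paper.pdf paper.txt
    numwords=$(wc -w < paper.txt)
    numeff=$((numwords - 850 + 500))
    echo; echo "Number of words in full PDF: $numwords"
    if [ "$noappendix" = 1 ]; then
        echo "No abstract, and captions (250 for each figure): $numeff"
    fi
    rm paper.txt
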
diff --git a/tex/src/references.tex b/tex/src/references.tex
index ad38508..027fcc8 100644
--- a/tex/src/references.tex
+++ b/tex/src/references.tex
@@ -103,10 +103,11 @@ archivePrefix = {arXiv},
author = {Joe Menke and Martijn Roelandse and Burak Ozyurt and Maryann Martone and Anita Bandrowski},
title = {Rigor and Transparency Index, a new metric of quality for assessing biological and medical science methods},
year = {2020},
- journal = {bioRxiv},
- volume = {},
- pages = {2020.01.15.908111},
- doi = {10.1101/2020.01.15.908111},
+ journal = {iScience},
+ volume = {23},
+ issue = {11},
+ pages = {101698},
+ doi = {10.1016/j.isci.2020.101698},
}