author    Mohammad Akhlaghi <mohammad@akhlaghi.org>  2020-05-22 02:15:06 +0100
committer Mohammad Akhlaghi <mohammad@akhlaghi.org>  2020-05-22 02:18:42 +0100
commit    7b008dfbb9b2f6a1f5145e3841464e723f590feb (patch)
tree      4d4608d851de6130d5ab9506eddabc5435bf3033
parent    2bfa3a043dcf394492a33bbcb16121dcb227b5ed (diff)
Re-write of the paper to fit in ~6000 words and IEEE format
Following the DSJ editor's decision that this paper does not fit into their scope, we decided to submit it to IEEE's Computing in Science and Engineering (CiSE). With this commit, the text was re-written to fit their style and word-count limitations.
-rw-r--r--  paper.tex  456
-rwxr-xr-x  project  5
-rw-r--r--  reproduce/analysis/config/demo-year.conf (renamed from reproduce/analysis/config/menke-demo-year.conf)  0
-rw-r--r--  reproduce/analysis/make/demo-plot.mk  4
-rw-r--r--  reproduce/analysis/make/format.mk  2
-rw-r--r--  reproduce/analysis/make/paper.mk  11
-rw-r--r--  reproduce/software/config/texlive-packages.conf  4
-rw-r--r--  tex/src/figure-branching.tex  6
-rw-r--r--  tex/src/figure-data-lineage.tex  18
-rw-r--r--  tex/src/figure-tools-per-year.tex  2
-rw-r--r--  tex/src/preamble-project.tex  37
-rw-r--r--  tex/src/references.tex (renamed from tex/src/references.bib)  42
12 files changed, 428 insertions, 159 deletions
diff --git a/paper.tex b/paper.tex
index 8d7bde9..8ef2095 100644
--- a/paper.tex
+++ b/paper.tex
@@ -60,20 +60,16 @@
% in the abstract or keywords.
\begin{abstract}
%% CONTEXT
- Many reproducible workflow solutions have been proposed over recent decades.
- Most use the high-level technologies that were popular when they were created, providing an immediate solution that is unlikely to be sustainable in the long term.
- Decades later, scientists lack the resources to rewrite their projects, while still being accountable for their results.
- This creates generational gaps, which, together with technological obsolescence, impede reproducibility and building upon previous work.
+ Reproducible workflow solutions commonly use the high-level technologies that were popular when they were created, providing an immediate solution that is unlikely to be sustainable in the long term.
%% AIM
We aim to introduce a set of criteria to address this problem and to demonstrate their practicality.
%% METHOD
The criteria have been tested in several research publications and can be summarized as: completeness (no dependency beyond a POSIX-compatible operating system, no administrator privileges, no network connection and storage primarily in plain-text); modular design; linking analysis with narrative; temporal provenance; scalability; and free-and-open-source software.
%% RESULTS
- Through an implementation, called ``Maneage'' (managing+lineage), we find that storing the project in machine-actionable and human-readable plain-text, enables version-control, cheap archiving, automatic parsing to extract data provenance, and peer-reviewable verification.
- Furthermore, we show that these criteria are not limited to long-term reproducibility but also provide immediate, fast short-term reproducibility.
+  Through an implementation, called ``Maneage'' (managing+lineage), we find that storing the project in machine-actionable and human-readable plain-text enables version-control, cheap archiving, automatic parsing to extract data provenance, and peer-review-able verification.
+ Furthermore, we find that these criteria are not limited to long-term reproducibility but also provide immediate, fast short-term reproducibility benefits.
%% CONCLUSION
- We conclude that requiring longevity from solutions is realistic.
- We discuss the benefits of these criteria for scientific progress.
+ We conclude that requiring longevity from solutions is realistic and discuss the benefits of these criteria for scientific progress.
\end{abstract}
% Note that keywords are not normally used for peerreview papers.
@@ -101,130 +97,348 @@ Data Lineage, Provenance, Reproducibility, Scientific Pipelines, Workflows
\section{Introduction}
% The very first letter is a 2 line initial drop letter followed
% by the rest of the first word in caps.
-\IEEEPARstart{T}{his} demo file is intended to serve as a ``starter file''
-for IEEE journal papers produced under \LaTeX\ using
-IEEEtran.cls version 1.8b and later.
-% You must have at least 2 lines in the paragraph with the drop letter
-% (should never be an issue)
-Here is an example citation \cite{akhlaghi19}.
-
-
-
-
-\section{Principles}
-\label{sec:principles}
-
-The core principle of Maneage is simple: science is defined primarily by its method, not its result.
-As \cite{buckheit1995} describe it, modern scientific papers are merely advertisements of scholarship, while the actual scholarship is the coding behind the plots/results.
-Many solutions have been proposed in the last decades, including (but not limited to)
-1992: \href{https://sep.stanford.edu/doku.php?id=sep:research:reproducible}{RED},
-2003: \href{https://taverna.incubator.apache.org}{Apache Taverna},
-2004: \href{https://www.genepattern.org}{GenePattern},
-2010: \href{https://wings-workflows.org}{WINGS},
-2011: \href{https://www.ipol.im}{Image Processing On Line journal} (IPOL),
- \href{https://www.activepapers.org}{Active papers},
- \href{https://is.ieis.tue.nl/staff/pvgorp/share}{SHARE},
-2015: \href{https://sciunit.run}{Sciunit};
-2017: \href{https://falsifiable.us}{Popper};
-2019: \href{https://wholetale.org}{WholeTale}.
-To help in the comparison, the founding principles of Maneage are listed below.
-
-
-\begin{enumerate}%[label={\bf P\arabic*]
-\item \label{principle:complete}\textbf{Completeness:}
- A project that is complete, or self-contained,
- (P1.1) has no dependency beyond the Port\-able Operating System (OS) Interface, or POSIX, or a minimal Unix-like environment.
- A consequence of this is that the project itself must be stored in plain-text: not needing any specialized software to open, parse or execute.
- (P1.2) does not affect the host,
- (P1.3) does not require root, or administrator, privileges,
- (P1.4) builds its software for an independent environment,
- (P1.5) can be run locally (without internet connection),
- (P1.6) contains the full project's analysis, visualization \emph{and} narrative, from access to raw inputs to producing final published format (e.g., PDF or HTML),
- (P1.7) requires no manual/human interaction and can run automatically \cite[according to][``\emph{a clerk can do it}'']{claerbout1992}.
-
- \emph{Comparison with existing:} with many dependencies beyond POSIX, except for IPOL, none of the tools above are complete.
- For example, the workflow of most recent solutions need Python or Jupyter notebooks.
- Because of their complexity (see \ref{principle:complexity}), pre-built binary blobs like containers or virtual machines are the chosen storage format, which are large (Giga-bytes) and expensive to archive.
- Furthermore, third-party package managers setup the environment, like Conda, or the OS's, like apt or yum.
- However, exact versions of \emph{every software} are rarely included, and the servers remove old binaries, hence blobs are hard to recreate.
- Blobs also have a short lifespan, e.g., Docker containers made today, may not be operable with future versions of Docker or Linux (currently Linux 3.2.x is the earliest supported version, released in 2012).
- In general they mostly aim for short-term reproducibility.
- A plain-text project is readable by humans and machines (even if it can't be executed) and consumes no less than a megabyte.
-
-\item \label{principle:modularity}\textbf{Modularity:}
-A project should be compartmentalized into independent modules with well-defined inputs/outputs having no side effects.
-Communication between the independent modules should be explicit, providing several optimizations:
-(1) independent modules can run in parallel.
-Modules that do not need to be run (because their dependencies have not changed) will not be re-run.
-(2) Data provenance extraction (recording any dataset's origins).
-(3) Citation: others can credit specific parts of a project.
-(4) Usage in other projects.
-(5) Most importantly: they are easy to debug and improve.
-
-\emph{Comparison with existing:} Visual workflow tools like Apache Taverna, GenePattern, Kepler or VisTrails encourage this, but the more recent tools (mostly written in Python) leave this to project authors.
-However, designing a modular project needs to be encouraged and facilitated.
-Otherwise, scientists, who are not usually trained in data management, will rarely design a modular project, leading to great inefficiencies in terms of project cost and/or scientific accuracy (testing/validating will be expensive).
-
-\item \label{principle:complexity}\textbf{Minimal complexity:}
- This is Ockham's razor extrapolated to project management \cite[``\emph{Never posit pluralities without necessity}''][]{schaffer15}:
- 1) avoid complex relations between analysis steps (related to \ref{principle:modularity}).
- 2) avoid the programming language that is currently in vogue, because it is going to fall out of fashion soon and require significant resources to translate or rewrite it every few years (to stay fashionable).
- The same job can be done with more stable/basic tools, requiring less long-term effort.
-
- \emph{Comparison with existing:} IPOL stands out here too (requiring only ISO C), however most others are written in Python, and use Conda or Jupyter (see \ref{principle:complete}).
- Besides being incomplete (\ref{principle:complete}), these tools have short lifespans and evolve fast (e.g., Python 2 code cannot run with Python 3, causing disruption in many projects).
- Their complex dependency trees also making them hard to maintain, for example, see the dependency tree of Matlplotlib in \cite[][Figure 1]{alliez19}, its one of the simpler Jupyter dependencies.
- The longevity of a workflow is determined by its shortest-lived dependency.
-
-\item \label{principle:verify}\textbf{Verifiable inputs and outputs:}
-The project should automatically verify its inputs (software source code and data) \emph{and} outputs, not needing expert knowledge to confirm a reproduction.
-
-\emph{Comparison with existing:} Such verification is usually possible in most systems, but as a responsibility of the project authors.
-As with \ref{principle:modularity}, due to lack of training, if not actively encouraged and facilitated, it will not be implemented.
-
-\item \label{principle:history}\textbf{History and temporal provenance:}
+%\IEEEPARstart{F}{irst} word
+
+Reproducible research has been discussed in the sciences for about 30 years \cite{claerbout1992, fineberg19}.
+Many solutions have been proposed, mostly relying on the common technology of their day: Make and Matlab libraries in the 1990s, Java in the 2000s, and mostly Python in the last decade.
+Recently, controlling the environment has been facilitated through generic package managers (PMs) and containers.
+
+However, because of their high-level nature, such third-party tools for the workflow (not the analysis) evolve very fast, e.g., Python 2 code cannot run with Python 3, disrupting many projects.
+Containers (in custom binary formats) have also come into heavy use recently, but they are large (gigabytes) and expensive to archive.
+Also, once the binary format is obsolete, reading or parsing the project is no longer possible.
+
+The cost of staying up to date with this evolving landscape is high.
+Scientific projects in particular suffer the most: scientists have to focus on their own research domain, but they also need to understand the technology they use to a certain level, because it determines their results and interpretations.
+Decades later, they are also still held accountable for their results.
+Hence, the evolving technology creates generational gaps in the scientific community, preventing previous generations from sharing valuable lessons that are too low-level to be published in a traditional scientific paper.
+As a solution to this problem, here we introduce a set of criteria that can guarantee the longevity of a project, based on our experiences with existing solutions.
+
+
+
+
+
+\section{Commonly used tools and their longevity}
+To highlight the necessity of the proposed criteria, some of the most commonly used tools are reviewed here from the perspective of long-term usability.
+We recall that while longevity is important in some fields (like the sciences), it is not necessarily of interest in others (e.g., short-term commercial projects), hence the wide usage of tools that evolve very fast.
+Most existing reproducible workflows use a common set of third-party tools that can be categorized as:
+(1) environment isolators, like virtual machines or containers;
+(2) PMs, like Conda, Nix, or Spack;
+(3) job orchestrators, like scripts, Make, SCons, and CGAT-core;
+(4) notebooks, like Jupyter.
+
+To isolate the environment, virtual machines (VMs) have sometimes been used, e.g., in \href{https://is.ieis.tue.nl/staff/pvgorp/share}{SHARE} (which was awarded second prize in the Elsevier Executable Paper Grand Challenge of 2011 and discontinued in 2019).
+However, containers (in particular Docker and, to a lesser extent, Singularity) are by far the most widely used solution today, so we focus on Docker here.
+
+%% Note that L. Barba (second author of this paper) is the editor in chief of CiSE.
+Ideally, it is possible to precisely version/tag the images that are imported into a Docker container.
+However, that is rarely practiced in the solutions we have studied.
+Usually, images are imported with generic operating system names, e.g., `\inlinecode{FROM ubuntu:16.04}' \cite{mesnard20}.
+The extracted tarball (from \url{https://partner-images.canonical.com/core/xenial}) is updated almost monthly with different software versions, and the server only archives the last five images.
+Hence, if the Dockerfile is run in different months, it will contain different core operating system components.
+Furthermore, in 2024, when this Ubuntu version's long-term support expires, it will be removed entirely.
+The situation is similar for other OSs: pre-built binary files are large and expensive to maintain and archive.
+Furthermore, Docker requires root permissions and only supports recent (``long-term support'') versions of the host kernel, so older Docker images may not be executable.
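+
+A more robust habit, though rarely seen in the solutions above, is to import the image by its immutable content digest instead of a mutable tag; a minimal sketch (the digest is a placeholder):
+
+\begin{lstlisting}[
+    label=code:digest,
+    caption={A mutable tag versus an immutable digest},
+  ]
+# Mutable tag: the referenced image changes over the months.
+FROM ubuntu:16.04
+
+# Immutable digest: always the same image, for as long as the
+# registry archives it (the digest below is a placeholder).
+FROM ubuntu@sha256:<digest-of-archived-image>
+\end{lstlisting}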
+
+Once the host OS is ready, PMs are used to install the software environment.
+Usually the OS's PM, like `\inlinecode{apt}' or `\inlinecode{yum}', is used first, and higher-level software is built with more generic PMs like Conda, Nix, GNU Guix or Spack.
+The OS's PM suffers from the same longevity problem as the OS itself.
+Some third-party tools like Conda and Spack are written in high-level languages like Python, so the PM itself depends on the host's Python installation.
+Nix and GNU Guix do not have any dependencies and produce bit-wise identical programs; however, they need root permissions.
+Generally, the exact versions of a software's dependencies are not precisely identified in the build instructions (although this is possible).
+Therefore, unless the precise version of \emph{every software} is stored, the build will use the most recent versions.
+Furthermore, because each third-party PM introduces its own language and framework, it increases the project's complexity.
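+
+For illustration, precise pinning is supported by most PMs but is rarely the default; a hypothetical Conda sketch:
+
+\begin{lstlisting}[
+    label=code:pinning,
+    caption={Loose versus precise version pinning in a PM},
+  ]
+# Loose: resolves to the newest compatible versions, so the
+# same command builds different environments over time.
+$ conda create --name paper python numpy
+
+# Precise: exact versions are requested (build strings would
+# also be needed for bit-wise identical environments).
+$ conda create --name paper python=3.8.2 numpy=1.18.4
+\end{lstlisting}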
+
+With the software environment built, job management is the next component of a workflow.
+Visual workflow tools like Apache Taverna, GenePattern, Kepler or VisTrails (mostly introduced in the 2000s and using Java) do encourage modularity and robust job management, but the more recent tools (mostly in Python) leave this to project authors.
+Designing a modular project needs to be encouraged and facilitated because scientists (who are not usually trained in data management) will rarely apply best practices in project management and data carpentry.
+This includes automatic verification: while it is possible in many solutions, it is rarely practiced.
+This leads to many inefficiencies in project cost and/or scientific accuracy (reusing, expanding or validating will be expensive).
+
+Finally, to add narrative, computational notebooks \cite{rule18}, like Jupyter, are increasingly used in many solutions.
+However, the complex dependency trees of such web-based tools make them very vulnerable to the passage of time; e.g., see Figure 1 of \cite{alliez19} for the dependencies of Matplotlib, one of the simpler Jupyter dependencies.
+The longevity of a project is determined by its shortest-lived dependency.
+Furthermore, similar to the point above on job management, by not actively encouraging good practices in programming or project management, such tools can rarely deliver their promised potential \cite{rule18} or can even hamper reproducibility \cite{pimentel19}.
+
+An exceptional solution we encountered was the Image Processing On Line journal (IPOL, \href{https://www.ipol.im}{ipol.im}).
+Submitted papers must be accompanied by an ISO C implementation of their algorithm (which is buildable on all operating systems) with example images/data that can also be executed on their webpage.
+This is possible due to the focus on low-level algorithms that do not need any dependencies beyond an ISO C compiler.
+However, many data-intensive projects commonly involve dozens of high-level dependencies, with large and complex data formats and analysis, so this solution is not scalable to them.
+
+
+
+
+
+\section{Proposed criteria for longevity}
+
+The main premise is that starting a project with a robust data management strategy (or tools that provide it) is much more effective, for the researchers and the community, than imposing it at the end \cite{austin17,fineberg19}.
+Researchers play a critical role \cite{austin17} in making their research more Findable, Accessible, Interoperable, and Reusable (the FAIR principles).
+Active curation of workflows by repositories alone, to keep up with evolving technologies, is not practically feasible or scalable.
+In this paper we argue that workflows that satisfy the criteria below can reduce the cost of curation for repositories, while maximizing the FAIRness of the deliverables for future researchers.
+
+\textbf{Criterion 1: Completeness.}
+A project that is complete, or self-contained, has the following properties:
+(1) has no dependency beyond the Portable Operating System (OS) Interface, or POSIX.
+IEEE defined POSIX (a minimal Unix-like environment) and many OSs have complied.
+It is thus a sufficiently reliable foundation for longevity in execution.
+(2) No dependency implies that the project itself must be primarily stored in plain-text: not needing specialized software to open, parse or execute.
+(3) Does not affect the host OS (its libraries, programs, or environment).
+(4) Does not require root or administrator privileges.
+(5) Builds its own controlled software for an independent environment.
+(6) Can run locally (without internet connection).
+(7) Contains the full project's analysis, visualization \emph{and} narrative: from access to raw inputs to doing the analysis, producing final data products \emph{and} its final published report with figures, e.g., PDF or HTML.
+(8) Can run automatically, with no human interaction.
+
+\textbf{Criterion 2: Modularity.}
+A modular project enables and encourages the analysis to be broken into independent modules with well-defined inputs/outputs and minimal side effects.
+Explicit communication between various modules enables optimizations on many levels:
+(1) Execution in parallel and avoiding redundancies (when a dependency of a module has not changed, it will not be re-run).
+(2) Usage in other projects.
+(3) Easy to debug and improve.
+(4) Citation of specific parts.
+(5) Provenance extraction.
+
+\textbf{Criterion 3: Minimal complexity.}
+Minimal complexity can be interpreted as:
+(1) Avoiding the language or framework that is currently in vogue (for the workflow, not necessarily the high-level analysis), because it will soon fall out of fashion and require significant resources to translate or rewrite every few years.
+More stable/basic tools can do the same job with less long-term maintenance.
+(2) Avoiding too many different languages and frameworks; e.g., when the workflow's PM and analysis are orchestrated in the same framework, it becomes easier to adopt and encourages good practices.
+
+\textbf{Criterion 4: Scalability.}
+A scalable project can easily be used in arbitrarily large and/or complex projects.
+On a small scale, the criteria here are trivial to implement, but as projects get more complex, implementing them can become unsustainable.
+
+\textbf{Criterion 5: Verifiable inputs and outputs.}
+The project should automatically verify its inputs (software source code and data) \emph{and} outputs.
+Expert knowledge should not be required to confirm a reproduction, such that ``\emph{a clerk can do it}''\cite{claerbout1992}.
+
+\textbf{Criterion 6: History and temporal provenance.}
No project is done in a single/first attempt.
Projects evolve as they are being completed.
It is natural that earlier phases of a project are redesigned/optimized only after later phases have been completed.
This is often seen in exploratory research papers, with statements like ``\emph{we [first] tried method [or parameter] X, but Y is used here because it gave lower random error}''.
-A project's ``history'' is thus as scientifically relevant as the final, or published version.
+The ``history'' is thus as valuable as the final/published version.
-\emph{Comparison with existing:} The solutions above that implement version control usually support this principle.
-However, because the systems as a whole are rarely complete (see \ref{principle:complete}), their histories are also incomplete.
-IPOL fails here, because only the final snapshot is published.
+\textbf{Criterion 7: Including narrative, linked to analysis.}
+A project is not just its computational analysis.
+A raw plot, figure or table is hardly meaningful alone, even when accompanied by the code that generated it.
+A narrative description is also part of the deliverables (defined as ``data article'' in \cite{austin17}): describing the purpose of the computations, and interpretations of the result, possibly with respect to other projects/papers.
+This is related to longevity, because if a workflow only contains the steps to do the analysis or generate the plots, it may in time be separated from its accompanying published paper.
+A raw analysis workflow with no context is hardly useful.
-\item \label{principle:scalable}\textbf{Scalability:}
-A project should be scalable to arbitrarily large and/or complex projects.
+\textbf{Criterion 8: Free and open-source software.}
+Technically, reproducibility (as defined in \cite{fineberg19}) is possible with non-free or non-open-source software (a black box).
+This criterion is thus necessary to complement that definition (nature is already a black box).
+As free software, others can learn from, modify, and build upon a project.
+When the software used is also free:
+(1) The lineage can be traced down to the implemented algorithms, possibly enabling optimizations at that level.
+(2) It can be modified by others to work on future hardware.
+(3) In contrast, non-free software typically cannot be redistributed by others, making it reliant on a single server (even without payments).
-\emph{Comparison with existing:}
-Most of the more recent solutions above are scalable.
-However, IPOL, which uniquely stands out in satisfying most principles, fails here: IPOL is devoted to low-level image processing algorithms that \emph{can be} done with no dependencies beyond an ISO C compiler.
-IPOL is thus not scalable to large projects, which commonly involve dozens of high-level dependencies, with complex data formats and analysis.
+
+
+
+
+
+
+
+
+
+\section{Proof of concept: Maneage}
+
+Given the limitations of existing tools with respect to the proposed criteria, it is necessary to show a proof of concept.
+The proof presented here has already been tested in previously published papers \cite{akhlaghi19, infante20} and was recently awarded a Research Data Alliance (RDA) adoption grant for implementing the recommendations of the joint RDA and World Data System (WDS) working group on Publishing Data Workflows\cite{austin17} from the researcher perspective to ensure longevity.
+
+The proof of concept is called Maneage (Managing+Lineage; its ending is pronounced like ``Lineage'').
+It was developed along with the criteria, as a parallel research project over five years, to publish reproducible workflows alongside our research.
+Its primordial form was implemented in \cite{akhlaghi15} and later evolved in \href{http://doi.org/10.5281/zenodo.1163746}{zenodo.1163746} and \href{http://doi.org/10.5281/zenodo.1164774}{zenodo.1164774}.
+
+Technically, the hardest criterion to implement was completeness (in particular, no dependency beyond POSIX), blended with minimal complexity.
+One proposed solution was the Guix Workflow Language (GWL) which is written in the same framework (GNU Guile, an implementation of Scheme) as GNU Guix (a PM).
+But as natural scientists (astronomers), our background was with languages like Shell, Python, C or Fortran.
+Not having any exposure to Lisp/Scheme and their fundamentally different style made it very hard for us to adopt GWL.
+Furthermore, the desired solution was meant to be easily understandable/usable by fellow scientists, who generally also have not had exposure to Lisp/Scheme.
+
+Inspired by GWL+Guix, a single job management tool was chosen for both installing the software \emph{and} running the analysis workflow: Make.
+Make is not an analysis language; it is a job manager, deciding when to call analysis programs (written in any language, like Shell, Python, Julia or C).
+Make is standardized in POSIX and is used in almost all core OS components.
+It is thus mature, actively maintained and highly optimized.
+Make was recommended by the pioneers of reproducible research\cite{claerbout1992,schwab2000} and many researchers have already had a minimal exposure to it (when building research software).
+%However, because they didn't attempt to build the software environment, in 2006 they moved to SCons (Make-simulator in Python which also attempts to manage software dependencies) in a project called Madagascar (\url{http://ahay.org}), which is highly tailored to Geophysics.
+
+Linking the analysis and narrative was another major design choice.
+Literate programming, implemented as Computational Notebooks like Jupyter, is a common solution these days.
+However, due to the problems above, our implementation follows a more abstract design: providing a more direct and precise, but modular (not in the same file), connection.
+
+Assuming that the narrative is typeset in \LaTeX{}, the connection between the analysis and narrative (usually as numbers) is through \LaTeX{} macros, which are automatically defined during the analysis.
+For example, in the abstract of \cite{akhlaghi19} we say `\emph{... detect the outer wings of M51 down to S/N of 0.25 ...}'.
+The \LaTeX{} source of the quote above is: `\inlinecode{\small detect the outer wings of M51 down to S/N of \$\textbackslash{}demo\-sf\-optimized\-sn\$}'.
+The macro `\inlinecode{\small\textbackslash{}demosfoptimizedsn}' is set during the analysis, and expands to the value `\inlinecode{0.25}' when the PDF output is built.
+Such values also depend on the analysis, hence just as plots, figures or tables they should also be reproduced.
+As a side-effect, these macros act as a quantifiable link between the narrative and analysis, with the granularity of a word in a sentence and an exact analysis command.
+This allows accurate provenance tracking \emph{and} automatic updates to the text when any part of the analysis is changed.
+Manually typing such numbers in the narrative is prone to errors and discourages experimentation after the first writing of the project.
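+
+A minimal sketch of this mechanism in a subMakefile is shown below (the file names and analysis command here are hypothetical):
+
+\begin{lstlisting}[
+    label=code:macro,
+    caption={Sketch of a recipe defining a narrative macro},
+  ]
+# Hypothetical sketch: compute the value, then write it as a
+# LaTeX macro that the narrative expands at build time.
+$(mtexdir)/demo-sn.tex: $(bdir)/optimized-catalog.txt
+	sn=$$(awk 'NR==1{print $$2}' $<); \
+	echo "\newcommand{\demosfoptimizedsn}{$$sn}" > $@
+\end{lstlisting}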
+
+The ultimate aim of any project is to produce a report accompanying a dataset with some visualizations, or a research article in a journal; let us call it \inlinecode{paper.pdf}.
+Hence, the files with the relevant macros of each (modular) step build the core structure (skeleton) of Maneage.
+During the software building (configuration) phase, each package is identified by a \LaTeX{} file, containing its official name, version and possible citation.
+In the end, they are combined to enable precise software acknowledgement and citation (see the appendices of \cite{akhlaghi19, infante20}, not included here due to the word-limit).
+Simultaneously, they act as Make \emph{targets} and \emph{prerequisite}s to allow accurate dependency tracking and optimized execution (parallel, no redundancies), for any complexity (e.g., Maneage also builds Matplotlib if requested, see Figure 1 of \cite{alliez19}).
+Dependencies go down to precise versions of the shell, C compiler, and the C library (task 15390) for an exactly reproducible environment.
+To enable easy and fast relocation of the project without building from source, it is possible to build it in any existing container/VM.
+The important factor is that the precise environment isolator is irrelevant: it can always be rebuilt.
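+
+As a sketch of the software version files mentioned above (all names and the build command here are hypothetical):
+
+\begin{lstlisting}[
+    label=code:software,
+    caption={Sketch of a software's version file as a target},
+  ]
+# Hypothetical sketch: the version file marks that the software
+# was built, and is a prerequisite of rules that need it.
+$(stexdir)/gsl.tex: $(tardir)/gsl-$(gsl-version).tar.gz
+	./reproduce/software/build.sh gsl $(gsl-version) && \
+	echo "GSL $(gsl-version) \cite{gsl}" > $@
+\end{lstlisting}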
+
+During configuration, only the very high-level choice of which software to build differs between projects.
+The Makefiles containing each software's build recipe generally do not change.
+However, the analysis will naturally be different from one project to another.
+Therefore a design was necessary to satisfy the modularity, scalability and minimal complexity criteria.
+To avoid getting too abstract, we will demonstrate it by replicating Figure 1C of \cite{menke20} in Figure \ref{fig:datalineage} (top).
+Figure \ref{fig:datalineage} (bottom) is the data lineage graph that produced it (with this whole paper).
+
+\begin{figure*}[t]
+ \begin{center}
+ \includetikz{figure-tools-per-year}
+ \includetikz{figure-data-lineage}
+ \end{center}
+ \vspace{-3mm}
+ \caption{\label{fig:datalineage}
+ Top: an enhanced replica of figure 1C in \cite{menke20}, shown here for demonstrating Maneage.
+ It shows the ratio of papers mentioning software tools (green line, left vertical axis) to total number of papers studied in that year (light red bars, right vertical axis in log-scale).
+ Bottom: Schematic representation of the data lineage, or workflow, to generate the plot above.
+ Each colored box is a file in the project and the arrows show the dependencies between them.
+ Green files/boxes are plain-text files that are under version control and in the project source directory.
+ Blue files/boxes are output files in the build-directory, shown within the Makefile (\inlinecode{*.mk}) where they are defined as a \emph{target}.
+ For example, \inlinecode{paper.pdf} depends on \inlinecode{project.tex} (in the build directory; generated automatically) and \inlinecode{paper.tex} (in the source directory; written manually).
+ The solid arrows and full-opacity built boxes are included with this paper's source.
+ The dashed arrows and low-opacity built boxes show the scalability by adding hypothetical steps to the project.
+ }
+\end{figure*}
+
+Analysis is orchestrated in a single point of entry (the Makefile \inlinecode{top-make.mk}).
+It is only responsible for \inlinecode{include}-ing the modular \emph{subMakefiles} of the analysis, in the desired order, not doing any analysis itself.
+This is shown in Figure \ref{fig:datalineage} (bottom) where all the built/blue files are placed over subMakefiles.
+A reader new to the project will be able to understand its high-level logic (irrespective of the low-level implementation details) with a simple visual inspection of this file, provided that the subMakefile names are descriptive.
+A human-friendly design (that is also optimized for execution) is a critical component of publishing reproducible workflows.
+
+In all projects \inlinecode{top-make.mk} will first load the subMakefiles \inlinecode{initialize.mk} and \inlinecode{download.mk}, while concluding with \inlinecode{verify.mk} and \inlinecode{paper.mk}.
+Project authors add their modular subMakefiles in between (after \inlinecode{download.mk} and before \inlinecode{verify.mk}); in Figure \ref{fig:datalineage} (bottom), the project-specific subMakefiles are \inlinecode{format.mk} \& \inlinecode{demo-plot.mk}.
+Except for \inlinecode{paper.mk}, which builds the ultimate target \inlinecode{paper.pdf}, all subMakefiles build at least one file: a \LaTeX{} macro file with the same base-name, see the \inlinecode{.tex} files in each subMakefile of Figure \ref{fig:datalineage}.
+The other built files will ultimately (through other files) lead to one of the macro files.
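+
+The ordering described above can thus be sketched as follows (a simplification of the real file):
+
+\begin{lstlisting}[
+    label=code:topmake,
+    caption={Simplified sketch of \inlinecode{top-make.mk}},
+  ]
+# Ordered list of subMakefiles; a project only customizes the
+# middle entries (here: format and demo-plot).
+makesrc = initialize \
+          download \
+          format \
+          demo-plot \
+          verify \
+          paper
+
+# Include them in order; no analysis is done in this file.
+include $(foreach s, $(makesrc), reproduce/analysis/make/$(s).mk)
+\end{lstlisting}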
+
+Irrespective of the number of subMakefiles, the lineage reaches a bottleneck in \inlinecode{verify.mk}, to satisfy the verification criterion.
+All the macro files, plot information and published datasets of the project are verified there with their checksums, to automatically ensure exact reproducibility.
+Where exact reproducibility is not possible, values can be verified by any statistical means (specified by the project authors).
+Finally, having verified quantitative results, the project builds the ultimate target in \inlinecode{paper.mk}.
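+
+A minimal sketch of such a verification recipe is shown below (the checksum value is a placeholder):
+
+\begin{lstlisting}[
+    label=code:verify,
+    caption={Sketch of a checksum test in \inlinecode{verify.mk}},
+  ]
+# Recompute an output's checksum and compare it with the
+# recorded value ('abc1234' is a placeholder), failing
+# loudly on any mismatch.
+$(mtexdir)/verify.tex: $(a2dir)/columns.txt
+	if [ "$$(sha256sum $< | awk '{print $$1}')" = "abc1234" ]; \
+	then echo "%% Verified." > $@; \
+	else echo "Verification of $< failed"; exit 1; fi
+\end{lstlisting}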
\begin{figure*}[t]
\begin{center} \includetikz{figure-branching}\end{center}
\vspace{-3mm}
- \caption{\label{fig:branching} Harvesting the power of version-control in project management with Maneage.
- Maneage is maintained as a core branch, with projects created by branching off it.
- (a) shows how projects evolve on their own branch, but can always update their low-level structure by merging with the core branch
+  \caption{\label{fig:branching} Maneage is a Git branch; projects using Maneage branch off of it and apply their customizations.
+ (a) shows a hypothetical project's history prior to publication.
+ The low-level structure (in Maneage, shared between all projects) can be updated by merging with Maneage.
(b) shows how a finished/published project can be revitalized for new technologies simply by merging with the core branch.
- Each Git ``commit'' is shown on their branches as colored ellipses, with their hash printed in them.
- The commits are colored based on the team that is working on that branch.
- The collaboration and paper icons are respectively made by `mynamepong' and `iconixar' and downloaded from \url{www.flaticon.com}.
+  Each Git ``commit'' is shown on its branch as a colored ellipse, with its hash printed inside it.
+  The commits are colored based on their branch.
+ The collaboration and two paper icons are respectively made by `mynamepong' and `iconixar' from \url{www.flaticon.com}.
}
\end{figure*}
-\item \label{principle:freesoftware}\textbf{Free and open source software:}
- Technically, reproducibility \cite{fineberg19} is possible with non-free or non-open-source software (a black box).
- This principle is thus necessary to complement that definition (nature is already a black box, we don't need another one):
- (1) As a free software, others can learn from, modify, and build upon it.
- (2) The lineage can be traced to free software's implemented algorithms, also enabling optimizations on that level.
- (3) A free-software package that does not execute on particular hardware can be modified to work on it.
- (4) A non-free software project typically cannot be distributed by others, making the whole community reliant on the owner's server (even if the owner does not ask for payments).
+To further minimize complexity, the low-level implementation can be separated from the high-level execution through configuration files.
+By convention in Maneage, the subMakefiles (and the Python, Julia, C or Fortran programs they call for the number crunching) only organize the analysis; they do not contain any fixed numbers, settings or parameters.
+Parameters are set as Make variables in ``configuration files'' and passed to the respective program (\inlinecode{.conf} files in Figure \ref{fig:datalineage}).
+In the demo lineage, \inlinecode{INPUTS.conf} contains URLs and checksums for all imported datasets, enabling exact verification before usage.
+As another demo, we report that \cite{menke20} studied $\menkenumpapersdemocount$ papers in $\menkenumpapersdemoyear$ (which isn't in their original plot).
+The number \inlinecode{\menkenumpapersdemoyear} is stored in \inlinecode{demo-year.conf}.
+The result \inlinecode{\menkenumpapersdemocount} was calculated after generating \inlinecode{columns.txt}.
+Both are expanded in the PDF as \LaTeX{} macros.
+This enables the reader to change the value in \inlinecode{demo-year.conf} and have the result update automatically, without necessarily knowing how it was generated.
+Since a configuration file is a prerequisite of the target that uses it, if it is changed, Make will re-execute the recipe and its descendants.
+This encourages testing (without necessarily knowing the implementation details, e.g., by co-authors or future readers), and ensures self-consistency.
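+
+A sketch of this convention is shown below (the variable name and year value are illustrative):
+
+\begin{lstlisting}[
+    label=code:config,
+    caption={Sketch of a configuration file and the rule using it},
+  ]
+# demo-year.conf: the single place this parameter is defined
+# (the variable name and value here are illustrative).
+menke-demo-year = 1996
+
+# demo-plot.mk: the .conf file is a prerequisite, so changing
+# the year automatically re-runs this recipe.
+$(mtexdir)/demo-plot.tex: $(a2mk20f1c) $(pconfdir)/demo-year.conf
+	n=$$(awk '$$1==$(menke-demo-year){print $$2}' $(a2mk20f1c)); \
+	echo "\newcommand{\menkenumpapersdemocount}{$$n}" > $@
+\end{lstlisting}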
+
+Finally, to satisfy the temporal provenance criterion, version control (currently implemented in Git) plays a defining role in Maneage, as shown in Figure \ref{fig:branching}.
+In practice, Maneage is a Git branch that contains the shared components, or infrastructure, of all projects (e.g., software tarball URLs, build recipes, common subMakefiles and the interface script).
+Every project starts by branching off the Maneage branch and customizing it (adding a title, input data links, the narrative, and subMakefiles for its analysis), see Listing \ref{code:branching}.
+
+\begin{lstlisting}[
+ label=code:branching,
+ caption={Starting new project with Maneage, and building it},
+ ]
+# Cloning main Maneage branch and branching-off of it.
+$ git clone https://git.maneage.org/project.git
+$ cd project
+$ git remote rename origin origin-maneage
+$ git checkout -b master
+
+# Build the project in two phases:
+$ ./project configure # Build software environment.
+$ ./project make # Do analysis, build PDF paper.
+\end{lstlisting}
+
+As Figure \ref{fig:branching} shows, due to this architecture, it is always possible to import, or merge, Maneage into the project to improve the low-level infrastructure:
+in (a) the authors merge Maneage into their project while it is ongoing;
+in (b) readers can do it after the paper's publication, even when the authors cannot be reached and the project's infrastructure is outdated or no longer builds.
+Low-level improvements in Maneage are thus automatically propagated to all projects.
+This greatly reduces the cost of curation, or maintenance, of each individual project, before and after publication.
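+
+With the remote configured as in Listing \ref{code:branching}, such an update is a standard fetch and merge; a sketch (the upstream branch name is an assumption here):
+
+\begin{lstlisting}[
+    label=code:merge,
+    caption={Importing Maneage improvements into a project},
+  ]
+# Fetch the latest Maneage commits and merge them (the
+# upstream branch name `maneage' is an assumption here).
+$ git fetch origin-maneage
+$ git merge origin-maneage/maneage
+\end{lstlisting}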
+
+
+
+
+
+
+\section{Discussion}
+
+%% It should provide some insight or lessons learned.
+%% What is the message we should take from the experience?
+%% Are there clear demonstrated design principles that can be reapplied elsewhere?
+%% Are there roadblocks or bottlenecks that others might avoid?
+%% Are there suggested community or work practices that can make things smoother?
+%% Attempt to generalise the significance.
+%% should not just present a solution or an enquiry into a unitary problem but make an effort to demonstrate wider significance and application and say something more about the ‘science of data’ more generally.
+
+As shown in the proof of concept above, it is possible to define a workflow that satisfies the criteria presented in this paper.
+Here we review the lessons learnt and insights gained, while sharing the experience of implementing the RDA recommendations.
+We also discuss the design principles, and how they may be generalized and used in other projects.
+
+With the support of RDA, the user base and development of the criteria and Maneage grew phenomenally, highlighting some difficulties for the widespread adoption of these criteria.
+Firstly, the low-level tools are not widely used by many scientists, e.g., Git, \LaTeX, the command-line and Make.
+This is primarily because of a lack of exposure: we noticed that, after witnessing the improvements in their research, many (especially early-career researchers) started mastering these tools.
+Fortunately, many research institutes now offer courses on these generic tools, and we will also be adding more tutorials and demonstration videos to Maneage's documentation.
+
+Secondly, to satisfy the completeness criterion, all the necessary software of the project must be built on various POSIX-compatible systems (we actively test Maneage on several GNU/Linux distributions and macOS).
+This requires maintenance by our core team and consumes time and energy.
+However, due to the minimal complexity criterion, the PM and analysis share the same job manager.
+Our experience has shown that users' experience with the analysis empowers some of them to add or fix their required software on their own systems and share those commits on the core branch, thus propagating the fix to all derived projects.
+This has already happened in multiple cases.
+
+Thirdly, publishing a project's reproducible data lineage immediately after publication enables others to continue with follow-up papers, in competition with the original authors.
+We propose these solutions:
+1) Through the Git history, the work added by another team at any phase of the project can be quantified, contributing to a new concept of authorship in scientific projects and helping to quantify Newton's famous ``\emph{standing on the shoulders of giants}'' quote.
+This is a long-term goal and requires major changes to academic value systems.
+2) Authors can be given a grace period where the journal or a third party embargoes the source, keeping it private for the embargo period and then publishing it.
+
+Other implementations of the criteria, or future improvements in Maneage, may solve the caveats above.
+However, the proof of concept already shows many advantages to adopting the criteria.
+Above, the benefits for researchers were the main focus, but these criteria also help data centers, for example with regard to the challenges mentioned in \cite{austin17}:
+(1) The burden of curation is shared among all project authors and/or readers (who may find a bug and fix it), not just database curators, improving the sustainability of data centers.
+(2) Automated and persistent bi-directional linking of data and publication can be established through the published \& \emph{complete} data lineage that is version controlled.
+(3) Software management.
+With these criteria, each project's unique and complete software management is included: it is not a third-party PM that needs to be maintained by the data center's employees.
+This enables easy management, preservation, publishing and citation of used software.
+For example see \href{https://doi.org/10.5281/zenodo.3524937}{zenodo.3524937}, \href{https://doi.org/10.5281/zenodo.3408481}{zenodo.3408481}, \href{https://doi.org/10.5281/zenodo.1163746}{zenodo.1163746} where we have exploited the free software criteria to distribute all the used software tarballs with the other project files.
+(4) ``Linkages between documentation, code, data, and journal articles in an integrated environment'', which results from the criteria.
+
+Generally, scientists are rarely trained sufficiently in data management or software development, and the plethora of high-level tools that change every few years does not help.
+Such high-level tools are primarily targeted at software developers, who are paid to learn and use them effectively for short-term projects.
+Scientists, on the other hand, need to focus on their own research fields, and need to think about longevity.
+Hence, arguably the most important feature is that the un-customized project is already a fully working template, blending version control, the paper's narrative, software management \emph{and} a modular analysis lineage with mature tools, allowing scientists to learn them in practice, not abstractly.
- \emph{Comparison with existing:} The existing solutions listed above are all free software.
- Based on this principle, we do not consider non-free solutions.
-\end{enumerate}
+Publication of projects with these criteria on a wide scale allows automatic workflow generation, optimized for desired characteristics of the results (for example via machine learning).
+Because the project is complete, algorithms and data selection methods can be similarly optimized.
+Furthermore, through elements like the macros, natural language processing can also be included, allowing a direct connection between an analysis and the resulting narrative \emph{and} history of that narrative.
+Parsers can be written over projects for meta-research and data provenance studies, for example to generate ``research objects''.
+As another example, when a bug is found in one software package, all affected projects can be found and the scale of the effect can be measured.
+Combined with SoftwareHeritage, precise high-level science parts of Maneage projects can be accurately cited (e.g., failed/abandoned tests at any historical point).
+Many components of ``machine-actionable'' data management plans can be automatically filled out by Maneage, which is useful for project PIs and grant funders.
@@ -277,14 +491,14 @@ The Pozna\'n Supercomputing and Networking Center (PSNC) computational grant 314
%% Bibliography
\bibliographystyle{IEEEtran}
-\bibliography{IEEEabrv,/home/mohammad/documents/personal/professional/data-science/maneage/paper/source/tex/src/references}
+\bibliography{IEEEabrv,references}
%% Biography
\begin{IEEEbiographynophoto}{Mohammad Akhlaghi}
- is currently a big data postdoctoral researcher at the Instituto de Astrof\'isica de Canarias, Tenerife, Spain.
- His main scientific interest is in early galaxy evolution, but to extract information from the modern complex datasets, he has been involved in image processing and reproducible workflow management where he has founded GNU Astronomy Utilities (Gnuastro) and Maneage.
- He received his PhD in astronomy from Tohoku University, Sendai Japan, and also held a postdoc position at the Centre de Recherche Astrophysique de Lyon (CRAL).
- Contact him at mohammad@akhlaghi.org and find his website at https://akhlaghi.org.
+ is currently a postdoctoral researcher at the Instituto de Astrof\'isica de Canarias, Tenerife, Spain.
+ His main scientific interest is in early galaxy evolution, but to extract information from the modern complex datasets, he has been involved in image processing and reproducible workflow management where he has founded GNU Astronomy Utilities (Gnuastro) and Maneage (introduced here).
+ He received his PhD in astronomy from Tohoku University, Sendai Japan, and before coming to Tenerife, held a CNRS postdoc position at the Centre de Recherche Astrophysique de Lyon (CRAL).
+ Contact him at mohammad@akhlaghi.org and find his website at \url{https://akhlaghi.org}.
\end{IEEEbiographynophoto}
\begin{IEEEbiographynophoto}{Ra\'ul Infante-Sainz}
diff --git a/project b/project
index efbd266..47cb5ae 100755
--- a/project
+++ b/project
@@ -406,6 +406,11 @@ EOF
# Run the actual project.
controlled_env reproduce/analysis/make/top-make.mk
+
+ # Print the number of words
+ numwords=$(/usr/bin/pdftotext paper.pdf && cat paper.txt | wc -w)
+ echo; echo "Number of words in full PDF: $numwords"
+ rm paper.txt
;;
diff --git a/reproduce/analysis/config/menke-demo-year.conf b/reproduce/analysis/config/demo-year.conf
index 429b220..429b220 100644
--- a/reproduce/analysis/config/menke-demo-year.conf
+++ b/reproduce/analysis/config/demo-year.conf
diff --git a/reproduce/analysis/make/demo-plot.mk b/reproduce/analysis/make/demo-plot.mk
index ac05776..c14b83d 100644
--- a/reproduce/analysis/make/demo-plot.mk
+++ b/reproduce/analysis/make/demo-plot.mk
@@ -27,7 +27,7 @@ $(a2dir):; mkdir $@
# Table for Figure 1C of Menke+20
# -------------------------------
-a2mk20f1c = $(a2dir)/tools-per-year.txt
+a2mk20f1c = $(a2dir)/columns.txt
$(a2mk20f1c): $(mk20tab3) | $(a2dir)
# Remove the (possibly) produced figure that is created from this
@@ -47,7 +47,7 @@ $(a2mk20f1c): $(mk20tab3) | $(a2dir)
# Final LaTeX macro
-$(mtexdir)/demo-plot.tex: $(a2mk20f1c) $(pconfdir)/menke-demo-year.conf
+$(mtexdir)/demo-plot.tex: $(a2mk20f1c) $(pconfdir)/demo-year.conf
# Find the first year (first column of first row) of data.
v=$$(awk 'NR==1{print $$1}' $(a2mk20f1c))
diff --git a/reproduce/analysis/make/format.mk b/reproduce/analysis/make/format.mk
index d10034d..3070e6a 100644
--- a/reproduce/analysis/make/format.mk
+++ b/reproduce/analysis/make/format.mk
@@ -24,7 +24,7 @@
# Save the "Table 3" spreadsheet from the downloaded `.xlsx' file into a
# simple plain-text file that is easy to use.
a1dir = $(BDIR)/analysis1
-mk20tab3 = $(a1dir)/menke20-table-3.txt
+mk20tab3 = $(a1dir)/table-3.txt
$(a1dir):; mkdir $@
$(mk20tab3): $(indir)/menke20.xlsx | $(a1dir)
diff --git a/reproduce/analysis/make/paper.mk b/reproduce/analysis/make/paper.mk
index 4f2088b..a216370 100644
--- a/reproduce/analysis/make/paper.mk
+++ b/reproduce/analysis/make/paper.mk
@@ -86,15 +86,21 @@ $(mtexdir)/project.tex: $(mtexdir)/verify.tex
# recipe and the `paper.pdf' recipe. But if `tex/src/references.bib' hasn't
# been modified, we don't want to re-build the bibliography, only the final
# PDF.
-$(texbdir)/paper.bbl: tex/src/references.bib $(mtexdir)/dependencies-bib.tex \
+$(texbdir)/paper.bbl: tex/src/references.tex $(mtexdir)/dependencies-bib.tex \
| $(mtexdir)/project.tex
# If `$(mtexdir)/project.tex' is empty, don't build PDF.
@macros=$$(cat $(mtexdir)/project.tex)
if [ x"$$macros" != x ]; then
+ # Unfortunately I can't get bibtex to look into a special
+	        # directory for the references, so we'll link it here.
+ p=$$(pwd)
+ if ! [ -L $(texbdir)/references.bib ]; then
+ ln -s $$p/tex/src/references.tex $(texbdir)/references.bib
+ fi
+
# We'll run LaTeX first to generate the `.bcf' file (necessary
# for `biber') and then run `biber' to generate the `.bbl' file.
- p=$$(pwd)
export TEXINPUTS=$$p:
cd $(texbdir);
latex -shell-escape -halt-on-error $$p/paper.tex
@@ -137,5 +143,4 @@ paper.pdf: $(mtexdir)/project.tex paper.tex $(texbdir)/paper.bbl
# file here.
cd $$p
cp $(texbdir)/$@ $(final-paper)
-
fi
diff --git a/reproduce/software/config/texlive-packages.conf b/reproduce/software/config/texlive-packages.conf
index 70f246e..7dac084 100644
--- a/reproduce/software/config/texlive-packages.conf
+++ b/reproduce/software/config/texlive-packages.conf
@@ -16,4 +16,6 @@
# the basic installation scheme that we used to install tlmgr, they will be
# ignored in the `tlmgr install' command, but will be used later when we
# want their versions.
-texlive-packages = times IEEEtran cite xcolor pgfplots courier ps2eps
+texlive-typewriter-pkgs = courier inconsolata xkeyval upquote
+texlive-packages = times IEEEtran cite xcolor pgfplots ps2eps \
+ listing etoolbox $(texlive-typewriter-pkgs)
diff --git a/tex/src/figure-branching.tex b/tex/src/figure-branching.tex
index 52a6303..7259f7d 100644
--- a/tex/src/figure-branching.tex
+++ b/tex/src/figure-branching.tex
@@ -120,9 +120,9 @@
\draw [->, black!40!white, rounded corners, line width=2mm]
(11cm,4.5cm) -- (12.5cm,5cm) -- (12.5cm,7.9cm);
\draw [black!40!white, line width=2mm] (9.5cm,6cm) -- (12.5cm,7cm);
- \draw [anchor=north, black!40!white] (12.7cm,4.8cm) node [scale=1.5]
- {\bf Derivative};
- \draw [anchor=north, black!40!white] (12.7cm,4.4cm) node [scale=1.5]
+ \draw [anchor=north, black!40!white] (12.6cm,4.8cm) node [scale=1.5]
+ {\bf Derived};
+ \draw [anchor=north, black!40!white] (12.6cm,4.4cm) node [scale=1.5]
{\bf project};
%% Maneage commits.
diff --git a/tex/src/figure-data-lineage.tex b/tex/src/figure-data-lineage.tex
index 146a833..fcc52d9 100644
--- a/tex/src/figure-data-lineage.tex
+++ b/tex/src/figure-data-lineage.tex
@@ -46,9 +46,9 @@
text centered,
font=\ttfamily,
text width=2.8cm,
+ minimum width=15cm,
minimum height=7.8cm,
draw=green!50!black!50,
- minimum width=\linewidth,
fill=black!10!green!2!white,
label={[shift={(0,-5mm)}]\texttt{top-make.mk}}] {};
@@ -62,7 +62,7 @@
\node (analysis2mk) [node-makefile, at={(2.67cm,-1.3cm)},
label={[shift={(0,-5mm)}]\texttt{demo-plot.mk}}] {};
\node [opacity=0.6] (analysis3mk) [node-makefile, at={(5.47cm,-1.3cm)},
- label={[shift={(0,-5mm)}, opacity=0.6]\texttt{another-step.mk}}] {};
+ label={[shift={(0,-5mm)}, opacity=0.6]\texttt{next-step.mk}}] {};
%% verify.mk
\node [at={(-5.3cm,-2.8cm)},
@@ -137,14 +137,16 @@
%% input-2.dat
\ifdefined\inputtwo
\node (input2) [node-terminal, at={(-2.93cm,1.9cm)}] {menke20.xlsx};
- \draw [->] (input2) -- (downloadtex);
\fi
%% INPUTS.conf
\ifdefined\inputsconf
\node (INPUTS) [node-nonterminal, at={(-2.93cm,4.6cm)}] {INPUTS.conf};
\node (input2-west) [node-point, at={(-4.33cm,1.9cm)}] {};
+ \node (downloadtex-west) [node-point, at={(-4.33cm,-0.8cm)}] {};
\draw [->,rounded corners] (INPUTS.west) -| (input2-west) |- (input2);
+ \draw [->,rounded corners] (INPUTS.west) -| (downloadtex-west)
+ |- (downloadtex);
\fi
%% analysis1.tex
@@ -155,7 +157,7 @@
%% out1b.dat
\ifdefined\outoneb
- \node (out1b) [node-terminal, at={(-0.13cm,1.1cm)}] {menke20-table-3.txt};
+ \node (out1b) [node-terminal, at={(-0.13cm,1.1cm)}] {table-3.txt};
\draw [->] (out1b) -- (a1tex);
\fi
@@ -173,9 +175,9 @@
%% out-2b.dat
\ifdefined\outtwob
- \node (menkedemoyear) [node-nonterminal, at={(2.67cm,4.6cm)}] {menke-demo-year.conf};
+ \node (menkedemoyear) [node-nonterminal, at={(2.67cm,4.6cm)}] {demo-year.conf};
\node (a2tex-west) [node-point, at={(1.27cm,-0.8cm)}] {};
- \node (out2b) [node-terminal, at={(2.67cm,0.3cm)}] {tools-per-year.txt};
+ \node (out2b) [node-terminal, at={(2.67cm,0.3cm)}] {columns.txt};
\draw [->] (out2b) -- (a2tex);
\draw [->,rounded corners] (menkedemoyear.west) -| (a2tex-west) |- (a2tex);
\fi
@@ -187,7 +189,7 @@
%% analysis3.tex
\ifdefined\analysisthreetex
- \node [opacity=0.6] (a3tex) [node-terminal, at={(5.47cm,-0.8cm)}] {another-step.tex};
+ \node [opacity=0.6] (a3tex) [node-terminal, at={(5.47cm,-0.8cm)}] {next-step.tex};
\draw [opacity=0.6, rounded corners, -, dashed] (a3tex) |- (initialize-south);
\fi
@@ -216,7 +218,7 @@
\ifdefined\outthreeadep
\node [opacity=0.6] (out3a-west) [node-point, at={(4.07cm,2.7cm)}] {};
\draw [opacity=0.6, ->,rounded corners, dashed] (input2) |- (out3a);
- \node [opacity=0.6] (a3conf1) [node-nonterminal, at={(5.47cm,4.6cm)}] {param-3.conf};
+ \node [opacity=0.6] (a3conf1) [node-nonterminal, at={(5.47cm,4.6cm)}] {param.conf};
\draw [opacity=0.6, rounded corners, dashed] (a3conf1.west) -| (out3a-west) |- (out3a);
\fi
\end{tikzpicture}
diff --git a/tex/src/figure-tools-per-year.tex b/tex/src/figure-tools-per-year.tex
index 75557ac..f82402f 100644
--- a/tex/src/figure-tools-per-year.tex
+++ b/tex/src/figure-tools-per-year.tex
@@ -1,4 +1,4 @@
-\begin{tikzpicture}
+\begin{tikzpicture}[scale=0.9]
\begin{axis}[
ymin=0,
ymax=100,
diff --git a/tex/src/preamble-project.tex b/tex/src/preamble-project.tex
index c4d7feb..9b956cf 100644
--- a/tex/src/preamble-project.tex
+++ b/tex/src/preamble-project.tex
@@ -8,11 +8,9 @@
%% For the `\url' command.
\usepackage{url}
-% correct bad hyphenation here
-\hyphenation{op-tical net-works semi-conduc-tor}
-
-%% To use colors.
-\usepackage{xcolor}
+%% No need to load xcolor; it is included by others below (it conflicts
+%% with the listings package).
+%\usepackage{xcolor}
%% To have links.
\usepackage[
@@ -25,3 +23,32 @@
%% To have typewriter font
\usepackage{courier}
+
+%% To have bold monospace
+%\usepackage[scaled=0.85]{beramono}
+\usepackage{inconsolata}
+
+%% To display codes.
+\usepackage{listings}
+\usepackage{etoolbox}
+\input{listings-bash.prf}
+\lstset{
+ frame=lines,
+ numbers=none,
+ language=bash,
+ commentstyle=\color{gray},
+ abovecaptionskip=0mm,
+ belowcaptionskip=0mm,
+ keywordstyle=\mdseries,
+ basicstyle=\small\ttfamily\color{blue!35!black},
+}
+\makeatletter
+\preto\lstlisting{\def\@captype{table}}
+\pretocmd\lst@makecaption{\noindent{\rule{\linewidth}{1pt}}}{}{}
+\makeatother
+
+
+
+
+%% Custom macros
+\newcommand{\inlinecode}[1]{\textcolor{blue!35!black}{\texttt{#1}}}
diff --git a/tex/src/references.bib b/tex/src/references.tex
index e19ec16..f355bba 100644
--- a/tex/src/references.bib
+++ b/tex/src/references.tex
@@ -1,10 +1,11 @@
-@ARTICLE{clement19,
- author = {Cl\'ement-Fontaine, M\'elanie and Di Cosmo, Roberto and Guerry, Bastien and MOREAU, Patrick and Pellegrini, Fran\c cois},
- title = {Encouraging a wider usage of software derived from research},
- year = {2019},
- journal = {Archives ouvertes HAL},
- volume = {},
- pages = {\href{https://hal.archives-ouvertes.fr/hal-02545142}{hal-02545142}},
+@ARTICLE{mesnard20,
+ author = {Olivier Mesnard and Lorena A. Barba},
+ title = {Reproducible Workflow on a Public Cloud for Computational Fluid Dynamics},
+ year = {2020},
+ journal = {Computing in Science \& Engineering},
+ volume = {22},
+ pages = {102-116},
+ doi = {10.1109/MCSE.2019.2941702},
}
@@ -73,7 +74,7 @@ archivePrefix = {arXiv},
month = "Feb",
volume = {491},
number = {4},
- pages = {5317},
+ pages = {5317-5329},
doi = {10.1093/mnras/stz3111},
archivePrefix = {arXiv},
eprint = {1911.01430},
@@ -100,13 +101,26 @@ archivePrefix = {arXiv},
+@ARTICLE{clement19,
+  author = {Cl\'ement-Fontaine, M\'elanie and Di Cosmo, Roberto and Guerry, Bastien and Moreau, Patrick and Pellegrini, Fran\c cois},
+ title = {Encouraging a wider usage of software derived from research},
+ year = {2019},
+ journal = {Archives ouvertes HAL},
+ volume = {},
+ pages = {\href{https://hal.archives-ouvertes.fr/hal-02545142}{hal-02545142}},
+}
+
+
+
+
+
@ARTICLE{pimentel19,
author = {{Jo\~ao Felipe} Pimentel and Leonardo Murta and Vanessa Braganholo and Juliana Freire},
title = {A large-scale study about quality and reproducibility of jupyter notebooks},
year = {2019},
journal = {Proceedings of the 16th International Conference on Mining Software Repositories},
volume = {1},
- pages = {507},
+ pages = {507-517},
doi = {10.1109/MSR.2019.00077},
}
@@ -234,7 +248,7 @@ archivePrefix = {arXiv},
title = {Reproducibility and Replicability in Science},
journal = {The National Academies Press},
year = 2019,
- pages = {1},
+ pages = {1-256},
doi = {10.17226/25303},
}
@@ -316,7 +330,7 @@ author = "Adam Brinckman and Kyle Chard and Niall Gaffney and Mihael Hategan and
keywords = {Computer Science - Digital Libraries, Computer Science - Software Engineering},
year = "2019",
month = "May",
- pages = {39},
+ pages = {39-52},
archivePrefix = {arXiv},
eprint = {1905.11123},
primaryClass = {cs.DL},
@@ -742,7 +756,7 @@ archivePrefix = {arXiv},
journal = {International Journal on Digital Libraries},
volume = {18},
year = {2017},
- pages = {77},
+ pages = {77-92},
doi = {10.1007/s00799-016-0178-2},
}
@@ -982,7 +996,7 @@ archivePrefix = {arXiv},
month = sep,
volume = 220,
eid = {1},
- pages = {1},
+ pages = {1-33},
doi = {10.1088/0067-0049/220/1/1},
adsurl = {https://ui.adsabs.harvard.edu/abs/2015ApJS..220....1A},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
@@ -1647,7 +1661,7 @@ Reproducible Research in Image Processing},
journal = {SEG Technical Program Expanded Abstracts},
year = {1992},
volume = {1},
- pages = {601},
+ pages = {601-604},
doi = {10.1190/1.1822162},
}