-rw-r--r--  .gitattributes  5
-rw-r--r--  README.md  147
-rw-r--r--  paper.tex  910
-rw-r--r--  peer-review/1-answer.txt  1216
-rw-r--r--  peer-review/1-review.txt  788
-rw-r--r--  peer-review/2-review.txt  147
-rwxr-xr-x  project  26
-rw-r--r--  reproduce/analysis/config/INPUTS.conf  8
-rw-r--r--  reproduce/analysis/config/delete-me-squared-num.conf  9
-rw-r--r--  reproduce/analysis/config/demo-year.conf  3
-rw-r--r--  reproduce/analysis/config/metadata.conf  12
-rw-r--r--  reproduce/analysis/make/delete-me.mk  169
-rw-r--r--  reproduce/analysis/make/demo-plot.mk  80
-rw-r--r--  reproduce/analysis/make/download.mk  7
-rw-r--r--  reproduce/analysis/make/format.mk  86
-rw-r--r--  reproduce/analysis/make/initialize.mk  36
-rw-r--r--  reproduce/analysis/make/paper.mk  102
-rw-r--r--  reproduce/analysis/make/top-make.mk  10
-rw-r--r--  reproduce/analysis/make/verify.mk  11
-rw-r--r--  reproduce/software/config/TARGETS.conf  5
-rwxr-xr-x  reproduce/software/config/software_acknowledge_context.sh  4
-rw-r--r--  reproduce/software/config/texlive-packages.conf  12
-rw-r--r--  reproduce/software/config/versions.conf  5
-rw-r--r--  tex/img/icon-collaboration.eps  1150
-rw-r--r--  tex/img/icon-complete.eps  630
-rw-r--r--  tex/img/icon-processing.eps  698
-rw-r--r--  tex/src/IEEEtran_openaccess.bst  2484
-rw-r--r--  tex/src/appendix-existing-solutions.tex  532
-rw-r--r--  tex/src/appendix-existing-tools.tex  661
-rw-r--r--  tex/src/appendix-necessity.tex  97
-rw-r--r--  tex/src/appendix-used-software.tex  4
-rw-r--r--  tex/src/delete-me-image-histogram.tex  51
-rw-r--r--  tex/src/delete-me-squared.tex  32
-rw-r--r--  tex/src/figure-branching.tex  156
-rw-r--r--  tex/src/figure-data-lineage.tex  209
-rw-r--r--  tex/src/figure-file-architecture.tex  165
-rw-r--r--  tex/src/figure-project-outline.tex  229
-rw-r--r--  tex/src/figure-src-demoplot.tex  32
-rw-r--r--  tex/src/figure-src-download.tex  8
-rw-r--r--  tex/src/figure-src-format.tex  59
-rw-r--r--  tex/src/figure-src-inputconf.tex  7
-rw-r--r--  tex/src/figure-src-topmake.tex  24
-rw-r--r--  tex/src/figure-tools-per-year.tex  258
-rw-r--r--  tex/src/paper-long.tex  2591
-rw-r--r--  tex/src/preamble-biblatex.tex  147
-rw-r--r--  tex/src/preamble-maneage-default-style.tex  161
-rw-r--r--  tex/src/preamble-pgfplots.tex  155
-rw-r--r--  tex/src/preamble-project.tex  109
-rw-r--r--  tex/src/references.tex  2031
-rw-r--r--  tex/src/supplement.tex  99
50 files changed, 15594 insertions, 983 deletions
diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..1fab0e0
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,5 @@
+paper.tex merge=ours
+tex/src/*.tex merge=ours
+reproduce/analysis/config/*.conf merge=ours
+reproduce/software/config/TARGETS.conf merge=ours
+reproduce/software/config/texlive-packages.conf merge=ours
\ No newline at end of file
diff --git a/README.md b/README.md
index 250f807..7ef3e08 100644
--- a/README.md
+++ b/README.md
@@ -1,35 +1,46 @@
-Reproducible source for XXXXXXXXXXXXXXXXX
--------------------------------------------------------------------------
+Reproducible source for Akhlaghi et al. (2021, arXiv:2006.03018)
+----------------------------------------------------------------
Copyright (C) 2018-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>\
See the end of the file for license conditions.
-This is the reproducible project source for the paper titled "**XXX XXXXX
-XXXXXX**", by XXXXX XXXXXX, YYYYYY YYYYY and ZZZZZZ ZZZZZ that is published
-in XXXXX XXXXX.
-
-To reproduce the results and final paper, the only dependency is a minimal
-Unix-based building environment including a C and C++ compiler (already
-available on your system if you have ever built and installed a software
-from source) and a downloader (Wget or cURL). Note that **Git is not
-mandatory**: if you don't have Git to run the first command below, go to
-the URL given in the command on your browser, and download the project's
-source (there is a button to download a compressed tarball of the
-project). If you have received this source from arXiv or Zenodo (without
-any `.git` directory inside), please see the "Building project tarball"
-section below.
+This is the reproducible project source for the paper titled "**Toward
+Long-Term and Archivable Reproducibility**", by Mohammad Akhlaghi, Raúl
+Infante-Sainz, Boudewijn F. Roukema, Mohammadreza Khellat, David
+Valls-Gabaud and Roberto Baena-Gallé; see
+[DOI:10.1109/MCSE.2021.3072860](https://doi.org/10.1109/MCSE.2021.3072860),
+[arXiv:2006.03018](https://arxiv.org/abs/2006.03018) or
+[zenodo.3872247](https://doi.org/10.5281/zenodo.3872247).
+
+To learn more about the purpose, principles and technicalities of this
+reproducible paper, please see `README-hacking.md`. The "Quick start"
+section below shows a minimal set of commands to clone and reproduce the
+full project using Git. The next section explains those commands in more
+detail. The section after that describes how to deal with a tarball of the
+project's source (not using Git), and the last section describes how to
+build the project within a Docker container.
+
+
+
+
+
+### Quick start (using Git, with internet access)
+
+Run these commands to clone this project's history, enter it, configure it
+(let it build and install its own software) and "make" it (let it
+reproduce its analysis). If you already have the project on your system,
+you can ignore the first step (cloning). As in the core Maneage branch, all
+operations will be done in the build directory that you specify at
+configure time; no root permissions are required and no other part of your
+filesystem is affected.
```shell
-$ git clone XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-$ cd XXXXXXXXXXXXXXXXXX
+$ git clone http://git.maneage.org/paper-concept.git
+$ cd paper-concept
$ ./project configure
$ ./project make
```
-This paper is made reproducible using Maneage (MANaging data linEAGE). To
-learn more about its purpose, principles and technicalities, please see
-`README-hacking.md`, or the Maneage webpage at https://maneage.org.
-
@@ -41,25 +52,25 @@ requiring root/administrator permissions.
1. Necessary dependencies:
- 1.1: Minimal software building tools like C compiler, Make, and other
- tools found on any Unix-like operating system (GNU/Linux, BSD, Mac
- OS, and others). All necessary dependencies will be built from
- source (for use only within this project) by the `./project
- configure` script (next step).
+ 1.1: Minimal software building tools like a C compiler and other very
+ basic POSIX tools found on any Unix-like operating system
+ (GNU/Linux, BSD, Mac OS, and others). All necessary dependencies
+ will be built from source (for use only within this project) by the
+ `./project configure` script (next step). Note that **Git is not
+ mandatory**: if you don't have Git to run the first command above,
+ go to the URL given in the command on your browser, and download
+ the project's source (there is a button to download a compressed
+      tarball of the project). You can also get the project's source as a
+ tarball from arXiv or Zenodo.
1.2: (OPTIONAL) Tarball of dependencies. If they are already present (in
a directory given at configuration time), they will be
used. Otherwise, a downloader (`wget` or `curl`) will be necessary
to download any necessary tarball. The necessary tarballs are also
- collected in the archived project on
- [https://doi.org/10.5281/zenodo.XXXXXXX](XXXXXXX). Just unpack that
- tarball and you should see all the tarballs of this project's
- software. When `./project configure` asks for the "software tarball
- directory", give the address of the unpacked directory that has all
- the tarballs. [[TO AUTHORS: UPLOAD THE SOFTWARE TARBALLS WITH YOUR
- DATA AND PROJECT SOURCE TO ZENODO OR OTHER SIMILAR SERVICES. THEN
- ADD THE DOI/LINK HERE. DON'T FORGET THAT THE SOFTWARE ARE A
- CRITICAL PART OF YOUR WORK'S REPRODUCIBILITY.]]
+   collected in the archived project on Zenodo:
+   https://doi.org/10.5281/zenodo.3911395. Just unpack that tarball, and
+   when `./project configure` asks for the "software tarball directory",
+   give the address of the unpacked directory.
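+
+   As a sketch (the tarball's file name below is a hypothetical
+   placeholder for the archive published on Zenodo):
+
+   ```shell
+   $ tar -xf software-tarballs.tar.gz   # Unpack the Zenodo archive.
+   $ ./project configure                # At the "software tarball
+                                        # directory" prompt, give the
+                                        # unpacked directory's path.
+   ```
+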
2. Configure the environment (top-level directories in particular) and
build all the necessary software for use in the next step. It is
@@ -91,30 +102,26 @@ requiring root/administrator permissions.
-### Building project tarball (possibly from arXiv)
+### Building project tarball (without Git)
If the paper is also published on arXiv, it is highly likely that the
authors also uploaded/published the full project there along with the LaTeX
sources. If you have downloaded (or plan to download) this source from
arXiv, some minor extra steps are necessary as listed below. This is
because this tarball is mainly tailored to automatic creation of the final
-PDF without using Maneage (only calling LaTeX, not using the './project'
-command)!
-
-You can directly run 'latex' on this directory and the paper will be built
-with no analysis (all necessary built products are already included in the
-tarball). One important feature of the tarball is that it has an extra
-`Makefile` to allow easy building of the PDF paper without worring about
-the exact LaTeX and bibliography software commands.
+PDF without actually using the './project' command! You can directly run
+'latex' on this directory and the paper will be built with no analysis (all
+necessary built products are already included).
#### Only building PDF using tarball (no analysis)
-1. If you got the tarball from arXiv and the arXiv code for the paper is
- 1234.56789, then the downloaded source will be called `1234.56789` (no
- suffix). However, it is actually a `.tar.gz` file. So take these steps
- to unpack it to see its contents.
+1. If you got the tarball from arXiv and the arXiv code for the paper
+ is 1234.56789, then the downloaded source will be called
+ `1234.56789` (no special identification suffix). However, it is
+ actually a `.tar.gz` file. So take these steps to unpack it to see
+ its contents.
```shell
$ arxiv=1234.56789
@@ -125,11 +132,10 @@ the exact LaTeX and bibliography software commands.
```
2. No matter how you got the tarball, if you just want to build the PDF
- paper, simply run the command below. Note that this won't actually
- install any software or do any analysis, it will just use your host
- operating system (assuming you already have a LaTeX installation and all
- the necessary LaTeX packages) to build the PDF using the already-present
- plots data.
+ paper from the tarball, simply run the command below. Note that this
+ won't actually install any software or do any analysis, it will just use
+ your host operating system to build the PDF and assumes you already have
+ all the necessary LaTeX packages.
```shell
$ make # Build PDF in tarball without doing analysis
@@ -137,35 +143,36 @@ the exact LaTeX and bibliography software commands.
3. If you want to re-build the figures from scratch, you need to make the
following corrections to the paper's main LaTeX source (`paper.tex`):
- uncomment (remove the starting `%`) the line containing
- `\newcommand{\makepdf}{}`, see the comments above it for more.
+   uncomment (remove the starting `%`) the line containing
+   `\newcommand{\makepdf}{}`. See the comments above it for more
+   information.
#### Building full project from tarball (custom software and analysis)
-As described above, the tarball is mainly geared to only building the final
-PDF. A few small tweaks are necessary to build the full project from
-scratch (download necessary software and data, build them and run the
-analysis and finally create the final paper).
+Since the tarball is mainly geared to building only the final PDF, a
+few small tweaks are necessary to build the full project from scratch
+(download necessary software and data, build them and run the analysis and
+finally create the final paper).
1. If you got the tarball from arXiv, before following the standard
procedure of projects described at the top of the file above (using the
- `./project` script), its necessary to set its executable flag because
- arXiv removes the executable flag from the files (for its own security).
+   './project' script), it is necessary to set its executable flag. arXiv
+ removes the executable flag from the files (for its own security).
```shell
$ chmod +x project
```
-2. Make the following changes in two of the LaTeX files so LaTeX attempts
- to build the figures from scratch (to make the tarball; it was
- configured to avoid building the figures, just using the ones that came
- with the tarball).
+2. Make the following changes in two of the LaTeX files so LaTeX attempts to
+ build the figures from scratch (to make the tarball, it was configured
+ to avoid building the figures, just using the ones that came with the
+ tarball).
- `paper.tex`: uncomment (remove the starting `%`) the line
- containing `\newcommand{\makepdf}{}`, see the comments above it for
- more.
+ containing `\newcommand{\makepdf}{}`. See the comments above it for
+ more information.
- `tex/src/preamble-pgfplots.tex`: set the `tikzsetexternalprefix`
variable value to `tikz/`, so it looks like this:
@@ -181,7 +188,7 @@ analysis and finally create the final paper).
```shell
$ ls
- COPYING paper.tex project README-hacking.md README.md reproduce/ tex/
+ COPYING paper.tex project README-hacking.md README.md reproduce tex
```
@@ -767,6 +774,8 @@ docker images -a -q | xargs docker rmi -f
+
+
### Copyright information
This file and `.file-metadata` (a binary file, used by Metastore to store
diff --git a/paper.tex b/paper.tex
index b57bb30..f17f083 100644
--- a/paper.tex
+++ b/paper.tex
@@ -1,15 +1,28 @@
-%% Copyright (C) 2018-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
-%% See the end of the file for license conditions.
-\documentclass[10pt, twocolumn]{article}
-
-%% (OPTIONAL) CONVENIENCE VARIABLE: Only relevant when you use Maneage's
-%% '\includetikz' macro to build plots/figures within LaTeX using TikZ or
-%% PGFPlots. If so, when the Figure files (PDFs) are already built, you can
-%% avoid TikZ or PGFPlots completely by commenting/removing the definition
-%% of '\makepdf' below. This is useful when you don't want to slow-down a
-%% LaTeX-only build of the project (for example this happens when you run
-%% './project make dist'). See the definition of '\includetikz' in
-%% 'tex/preamble-pgfplots.tex' for more.
+%% Main LaTeX source of project's paper.
+%
+%% Copyright (C) 2020-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
+%% Copyright (C) 2020-2022 Raúl Infante-Sainz <infantesainz@gmail.com>
+%% Copyright (C) 2020-2022 Boudewijn F. Roukema <boud@astro.uni.torun.pl>
+%% Copyright (C) 2020-2022 Mohammadreza Khellat <mkhellat@ideal-information.com>
+%% Copyright (C) 2020-2022 David Valls-Gabaud <david.valls-gabaud@obspm.fr>
+%% Copyright (C) 2020-2022 Roberto Baena-Gallé <roberto.baena@unir.net>
+%
+%% This file is free software: you can redistribute it and/or modify it
+%% under the terms of the GNU General Public License as published by the
+%% Free Software Foundation, either version 3 of the License, or (at your
+%% option) any later version.
+%
+%% This file is distributed in the hope that it will be useful, but WITHOUT
+%% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+%% FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+%% for more details. See <http://www.gnu.org/licenses/>.
+\documentclass[journal]{IEEEtran}
+
+%% This is a convenience variable if you are using PGFPlots to build plots
+%% within LaTeX. If you want to import PDF files for figures directly, you
+%% can use the standard `\includegraphics' command. See the definition of
+%% `\includetikz' in `tex/preamble-pgfplots.tex' for where the files are
+%% assumed to be if you use `\includetikz' when `\makepdf' is not defined.
\newcommand{\makepdf}{}
%% VALUES FROM ANALYSIS (NUMBERS AND STRINGS): this file is automatically
@@ -17,207 +30,790 @@
%% (defined with '\newcommand') for various processing outputs to be used
%% within the paper.
\input{tex/build/macros/project.tex}
-
-%% MANEAGE-ONLY PREAMBLE: this file contains LaTeX constructs that are
-%% provided by Maneage (for example enabling or disabling of highlighting
-%% from the './project' script). They are not style-related.
\input{tex/src/preamble-maneage.tex}
-%% PROJECT-SPECIFIC PREAMBLE: This is where you can include any LaTeX
-%% setting for customizing your project.
+%% Import the other necessary TeX files for this particular project.
\input{tex/src/preamble-project.tex}
+%% Title and author names.
+\title{\projecttitle}
+\author{
+ Mohammad Akhlaghi,
+ Ra\'ul Infante-Sainz,
+ Boudewijn F. Roukema,
+ Mohammadreza Khellat,\\
+ David Valls-Gabaud,
+ Roberto Baena-Gall\'e\\
+ \footnotesize{Manuscript received June 5th, 2020; accepted April 7th, 2021; first published by CiSE April 13th, 2021}
+}
+%% The paper headers
+\markboth{Computing in Science and Engineering, Vol. 23, Issue 3, Pages 82--91, 2021: \href{https://doi.org/10.1109/MCSE.2021.3072860}{DOI:10.1109/MCSE.2021.3072860}, arXiv:2006.03018, \href{https://doi.org/10.5281/zenodo.\projectzenodoid}{zenodo.\projectzenodoid}}%
+{Akhlaghi \MakeLowercase{\textit{et al.}}: \projecttitle}
-%% PROJECT TITLE: The project title should also be printed as metadata in
-%% all output files. To avoid inconsistancy caused by manually typing it,
-%% the title is defined with other core project metadata in
-%% 'reproduce/analysis/config/metadata.conf'. That value is then written in
-%% the '\projectitle' LaTeX macro which is available in 'project.tex' (that
-%% was loaded above).
-%
-%% Please set your project's title in 'metadata.conf' (ideally with other
-%% basic project information) and re-run the project to have your new
-%% title. If you later use a different LaTeX style, please use the same
-%% '\projectitle' in it (after importing 'tex/build/macros/project.tex'
-%% like above), don't type it by hand.
-\title{\large \uppercase{\projecttitle}}
-%% AUTHOR INFORMATION: For a more fine-grained control of the headers
-%% including author name, or paper info, see
-%% 'tex/src/preamble-header.tex'. Note that if you plan to use a journal's
-%% LaTeX style file, you will probably set the authors in a different way,
-%% feel free to change them here, this is just basic style and varies from
-%% project to project.
-\author[1]{Your name}
-\author[2]{Coauthor one}
-\author[1,3]{Coauthor two}
-\affil[1]{The first affiliation in the list.; \url{your@email.address}}
-\affil[2]{Another affilation can be put here.}
-\affil[3]{And generally as many affiliations as you like.
-\par \emph{Received YYYY MM DD; accepted YYYY MM DD; published YYYY MM DD}}
-\date{}
+%% Start the paper.
+\begin{document}
+% make the title area
+\maketitle
+
+% As a general rule, do not put math, special symbols or citations
+% in the abstract or keywords.
+\begin{abstract}
+ %% CONTEXT
+ Analysis pipelines commonly use high-level technologies that are popular when created, but are unlikely to be readable, executable, or sustainable in the long term.
+ %% AIM
+ A set of criteria is introduced to address this problem:
+ %% METHOD
+ Completeness (no execution requirement beyond a minimal Unix-like operating system, no administrator privileges, no network connection, and storage primarily in plain text); modular design; minimal complexity; scalability; verifiable inputs and outputs; version control; linking analysis with narrative; and free and open source software.
+ %% RESULTS
+  As a proof of concept, we introduce ``Maneage'' (Managing data lineage), which enables cheap archiving, provenance extraction, and peer verification, and which has been tested in several research publications.
+ %% CONCLUSION
+ We show that longevity is a realistic requirement that does not sacrifice immediate or short-term reproducibility.
+ The caveats (with proposed solutions) are then discussed and we conclude with the benefits for the various stakeholders.
+ This article is itself a \emph{Maneage'd} project (project commit \projectversion).
+
+ \vspace{2.5mm}
+ \emph{Appendices} ---
+  Two comprehensive appendices review the longevity of existing solutions; they are available
+\ifdefined\separatesupplement
+as supplementary ``Web extras'' on the journal web page.
+\else
+after the main body of this paper (Appendices \ref{appendix:existingtools} and \ref{appendix:existingsolutions}).
+\fi
+
+ \vspace{2.5mm}
+ \emph{Reproducibility} ---
+ Products available in \href{https://doi.org/10.5281/zenodo.\projectzenodoid}{\texttt{zenodo.\projectzenodoid}}.
+ Git history of this paper is at \href{http://git.maneage.org/paper-concept.git}{\texttt{git.maneage.org/paper-concept.git}},
+  which is also archived in Software Heritage\footnote{\inlinecode{\href{https://archive.softwareheritage.org/swh:1:dir:c6b928d31e43980e5f23472995dbbad7846bc522;origin=http://git.maneage.org/paper-concept.git/;visit=swh:1:snp:8b2dddede6359a155b618ce48059b83c078a593d;anchor=swh:1:rev:54d994d4aedae5f0222ce2c967bb884bc19d1d64}{swh:1:dir:c6b928d31e43980e5f23472995dbbad7846bc522}}\\Software Heritage identifiers (SWHIDs) can be used with resolvers like \inlinecode{http://n2t.net/} (e.g., \inlinecode{http://n2t.net/swh:1:...}). Clicking on the SWHIDs in the digital format will provide more ``context'' for the same content.}.
+\end{abstract}
+
+% Note that keywords are not normally used for peer-review papers.
+\begin{IEEEkeywords}
+Data Lineage, Provenance, Reproducibility, Scientific Pipelines, Workflows
+\end{IEEEkeywords}
+
+
+
+
+
+
+
+% For peer review papers, you can put extra information on the cover
+% page as needed:
+% \ifCLASSOPTIONpeerreview
+% \begin{center} \bfseries EDICS Category: 3-BBND \end{center}
+% \fi
+%
+% For peerreview papers, this IEEEtran command inserts a page break and
+% creates the second title. It will be ignored for other modes.
+\IEEEpeerreviewmaketitle
+
+
+
+\section{Introduction}
+% The very first letter is a 2 line initial drop letter followed
+% by the rest of the first word in caps.
+%\IEEEPARstart{F}{irst} word
+
+Reproducible research has been discussed in the sciences for at least 30 years\cite{claerbout1992, fineberg19}.
+Many reproducible workflow solutions (hereafter, ``solutions'') have been proposed, which mostly rely on the common technology of the day, starting with Make and Matlab libraries in the 1990s, Java in the 2000s, and mostly shifting to Python during the last decade.
+
+However, these technologies develop fast, e.g., code written in Python 2 (which is no longer officially maintained) often cannot run with Python 3.
+The cost of staying up to date within this rapidly-evolving landscape is high.
+Scientific projects, in particular, suffer the most: scientists have to focus on their own research domain, but to some degree, they need to understand the technology of their tools because it determines their results and interpretations.
+Decades later, scientists are still held accountable for their results and therefore the evolving technology landscape creates generational gaps in the scientific community, preventing previous generations from sharing valuable experience.
+
+
+
+
+
+\section{Longevity of existing tools}
+\label{sec:longevityofexisting}
+Reproducibility is defined as ``obtaining consistent results using the same input data; computational steps, methods, and code; and conditions of analysis''\cite{fineberg19}.
+Longevity is defined as the length of time that a project remains \emph{functional} after its creation.
+Functionality is defined as \emph{human readability} of the source and its \emph{execution possibility} (when necessary).
+Many usage contexts of a project do not involve execution: for example, checking the configuration parameter of a single step of the analysis to \emph{reuse} in another project, or checking the version of used software, or the source of the input data.
+Extracting these from execution outputs is not always possible.
+A basic review of the longevity of commonly used tools is provided here (for a more comprehensive review, see
+ \ifdefined\separatesupplement
+ the supplementary appendices, available online%
+ \else%
+ appendices \ref{appendix:existingtools} and \ref{appendix:existingsolutions}%
+ \fi%
+ ).
+
+To isolate the environment, virtual machines (VMs) have sometimes been used, e.g., in SHARE\footnote{\inlinecode{\url{https://is.ieis.tue.nl/staff/pvgorp/share}}} (awarded second prize in the Elsevier Executable Paper Grand Challenge of 2011, discontinued in 2019).
+However, containers (e.g., Docker or Singularity) are currently more widely used.
+We will focus on Docker here because it is currently the most common.
+
+It is possible to precisely identify the used Docker ``images'' with their checksums (or ``digest'') to recreate an identical operating system (OS) image later.
+However, that is rarely done.
+Usually images are imported with OS names; e.g., Mesnard \& Barba\cite{mesnard20} use ``\inlinecode{FROM ubuntu:16.04}''.
+The extracted tarball URL\footnote{\inlinecode{\url{https://partner-images.canonical.com/core/xenial}}} is updated almost monthly, and only the most recent five are archived.
+Hence, if the image is built in different months, it will contain different OS components.
+In the year 2024, when this version's long-term support (LTS) expires (if not earlier, like CentOS 8, which will terminate 8 years early\footnote{\inlinecode{\url{https://blog.centos.org/2020/12/future-is-centos-stream}}}), the image will not be available at the expected URL.
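+
+As a hedged sketch (the digest below is a truncated, hypothetical placeholder, not a real Ubuntu digest), pinning by digest rather than by tag looks like:
+
+\begin{lstlisting}[caption={Tag versus digest image retrieval (hypothetical digest).}]
+# Mutable tag: contents drift as it is updated.
+$ docker pull ubuntu:16.04
+# Immutable digest: always the same image
+# (for as long as it remains hosted).
+$ docker pull ubuntu@sha256:9a0b...e4f7
+\end{lstlisting}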
+
+Generally, prebuilt binary files (like Docker images) are large and expensive to maintain, distribute, and archive.
+Because of this, in October 2020, Docker Hub (where many workflows are archived) announced\footnote{\inlinecode{\href{https://www.docker.com/blog/docker-hub-image-retention-policy-delayed-and-subscription-updates}{https://www.docker.com/blog/docker-hub-image-retention}\\\href{https://www.docker.com/blog/docker-hub-image-retention-policy-delayed-and-subscription-updates}{-policy-delayed-and-subscription-updates}}} a new consumption-based payment model.
+Furthermore, Docker requires root permissions, and only supports recent (LTS) versions of the host kernel.
+Hence older Docker images may not be executable: their longevity is determined by OS kernels, typically a decade.
+
+Once the host OS is ready, package managers (PMs) are used to install the software or environment.
+Usually the PM of the OS, such as `\inlinecode{apt}' or `\inlinecode{yum}', is used first and higher-level software are built with generic PMs.
+The former has the same longevity as the OS while some of the latter (such as Conda and Spack) are written in high-level languages like Python; so, the PM itself depends on the host's Python installation with a typical longevity of a few years.
+Nix and GNU Guix produce bitwise identical programs with considerably better longevity; that of their supported CPU architectures.
+However, they need root permissions and are primarily targeted at the Linux kernel.
+Generally, in all the PMs, the exact version of each software (and its dependencies) is not precisely identified by default, although an advanced user can, indeed, fix them.
+Unless precise version identifiers of \emph{every software package} are stored by project authors, a third-party PM will use the most recent version.
+Furthermore, because third-party PMs introduce their own language, framework, and version history (the PM itself may evolve) and are maintained by an external team, they increase a project's complexity.
+With the software environment built, job management is the next component of a workflow.
+Visual/GUI tools (written in Java or Python 2) such as Taverna (deprecated), GenePattern (deprecated), Kepler, or VisTrails (deprecated), which were mostly introduced in the 2000s, encourage modularity and robust job management.
+However, a GUI environment is tailored to specific applications and is hard to generalize while being hard to reproduce once the required Java VM (JVM) is deprecated.
+These tools' data formats are complex (designed for computers to read) and hard for humans to read without the GUI.
+The more recent solutions (mostly non-GUI, written in Python) leave this to the project authors.
+Designing a robust project needs to be encouraged and facilitated because scientists (who are not usually trained in project or data management) will rarely apply best practices.
+This includes automatic verification, which is possible in many solutions, but is rarely practiced.
+Besides non-reproducibility, weak project management leads to many inefficiencies in project cost and/or scientific accuracy (reusing, expanding, or validating will be expensive).
+
+Finally, to blend narrative and analysis, computational notebooks (CNs) \cite{rule18}, such as Jupyter, are currently gaining popularity.
+However, because of their complex dependency trees, their build is vulnerable to the passage of time; e.g., see Figure 1 in the work of Alliez et al.\cite{alliez19} for the dependencies of Matplotlib, one of the simpler Jupyter dependencies.
+It is important to remember that the longevity of a project is determined by its shortest-lived dependency.
+Furthermore, as with job management, CNs do not actively encourage good practices in programming or project management.
+The ``cells'' in a Jupyter notebook can either be run sequentially (from top to bottom, one after the other) or by manually selecting the cell to run.
+By default, there is no support for cell dependencies (e.g., automatically running some cells only after certain others), parallel execution, or the usage of more than one language.
+There are third-party add-ons like \inlinecode{sos} or \inlinecode{nbextensions} (both written in Python) for some of these.
+However, since they are not part of the core, a shorter longevity can be assumed.
+The core Jupyter framework has few options for project management, especially as the project grows beyond a small test or tutorial.
+Notebooks can therefore rarely deliver their promised potential\cite{rule18} and may even hamper reproducibility\cite{pimentel19}.
-%% Start creating the paper.
-\begin{document}
-%% Project abstract and keywords.
-\includeabstract{
- Welcome to Maneage (\emph{Man}aging data lin\emph{eage}) and reproducible papers/projects, for a review of the basics of this system, please see \citet{maneage}.
- You are now ready to configure Maneage and implement your own research in this framework.
- Maneage contains almost all the elements that you will need in a research project, and adding any missing parts is very easy once you become familiar with it.
- For example it already has steps to downloading of raw data and necessary software (while verifying them with their checksums), building the software, and processing the data with the software in a highly-controlled environment.
- But Maneage is not just for the analysis of your project, you will also write your paper in it (by replacing this text in \texttt{paper.tex}): including this abstract, figures and bibliography.
- If you design your project with Maneage's infra-structure, don't forget to add a notice and clearly let the readers know that your work is reproducible, we should spread the word and show the world how useful reproducible research is for the sciences, also don't forget to cite and acknowledge it so we can continue developing it.
- This PDF was made with Maneage, commit \projectversion{}.
- \vspace{0.25cm}
- \textsl{Keywords}: Add some keywords for your research here.
- \textsl{Reproducible paper}: All quantitave results (numbers and plots)
- in this paper are exactly reproducible with Maneage
- (\url{https://maneage.org}). }
+\section{Proposed criteria for longevity}
+\label{criteria}
+The main premise here is that starting a project with a robust data management strategy (or tools that provide it) is more effective, for researchers and the community, than imposing it just before publication\cite{austin17,fineberg19}.
+In this context, researchers play a critical role\cite{austin17} in making their research more Findable, Accessible, Interoperable, and Reusable (the FAIR principles\footnote{FAIR originally targeted data. Work is ongoing to adopt it for software through initiatives like FAIR4RS (FAIR for Research Software).}).
+Simply archiving a project workflow in a repository after the project is finished is, on its own, insufficient, and maintaining it by repository staff is often either practically infeasible or unscalable.
+We argue and propose that workflows satisfying the following criteria can not only improve researcher flexibility during a research project, but can also increase the FAIRness of the deliverables for future researchers.
-%% To add the first page's headers.
-\thispagestyle{firststyle}
+\textbf{Criterion 1: Completeness.}
+A project that is complete (self-contained) has the following properties.
+(1) No \emph{execution requirements} apart from a minimal Unix-like operating system.
+Fewer explicit execution requirements would mean larger \emph{execution possibility} and consequently better \emph{longevity}.
+(2) Primarily stored as plain text (encoded in ASCII/Unicode), not needing specialized software to open, parse, or execute.
+(3) No impact on the host OS libraries, programs, and environment variables.
+(4) No root privileges to run (during development or postpublication).
+(5) Builds its own controlled software with independent environment variables.
+(6) Can run locally (without an internet connection).
+(7) Contains the full project's analysis, visualization \emph{and} narrative: including instructions to automatically access/download raw inputs, build necessary software, do the analysis, produce final data products \emph{and} final published report with figures \emph{as output}, e.g., PDF or HTML.
+(8) It can run automatically, without human interaction.
+
+\textbf{Criterion 2: Modularity.}
+A modular project enables and encourages independent modules with well-defined inputs/outputs and minimal side effects.
+In terms of file management, a modular project will \emph{only} contain the hand-written project source of that particular high-level project: no automatically generated files (e.g., software binaries or figures), software source code, or data should be included.
+The latter two (developing low-level software, collecting data, or the publishing and archival of both) are separate projects in themselves because they can be used in other independent projects.
+This optimizes the storage, archival/mirroring, and publication costs (which are critical to longevity): a snapshot of a project's hand-written source will usually be on the scale of $\sim100$ kilobytes, and the version controlled history may become a few megabytes.
+
+In terms of the analysis workflow, explicit communication between various modules enables optimizations on many levels:
+(1) Modular analysis components can be executed in parallel and avoid redundancies (when a dependency of a module has not changed, the latter will not be rerun).
+(2) Usage in other projects.
+(3) Debugging and adding improvements (possibly by future researchers).
+(4) Citation of specific parts.
+(5) Provenance extraction.
+
+\textbf{Criterion 3: Minimal complexity.}
+Minimal complexity can be interpreted as:
+(1) Avoiding the language or framework that is currently in vogue (for the workflow, not necessarily the high-level analysis).
+A popular framework typically falls out of fashion and requires significant resources to translate or rewrite every few years (for example, Python 2, which is no longer supported).
+More stable/basic tools can be used with lower long-term maintenance costs.
+(2) Avoiding too many different languages and frameworks; e.g., when the workflow's PM and analysis are orchestrated in the same framework, it becomes easier to maintain in the long term.
+
+\textbf{Criterion 4: Scalability.}
+A scalable project can easily be used in arbitrarily large and/or complex projects.
+On a small scale, the criteria here are trivial to implement, but can rapidly become unsustainable.
+
+\textbf{Criterion 5: Verifiable inputs and outputs.}
+The project should automatically verify its inputs (software source code and data) \emph{and} outputs, not needing any expert knowledge.
+
+\textbf{Criterion 6: Recorded history.}
+No exploratory research is done in a single, first attempt.
+Projects evolve as they are being completed.
+Naturally, earlier phases of a project are redesigned/optimized only after later phases have been completed.
+Research papers often report this with statements such as ``\emph{we [first] tried method [or parameter] X, but Y is used here because it gave lower random error}''.
+The derivation ``history'' of a result is often as valuable as the result itself.
+
+\textbf{Criterion 7: Including narrative that is linked to analysis.}
+A project is not just its computational analysis.
+A raw plot, figure, or table is hardly meaningful alone, even when accompanied by the code that generated it.
+A narrative description is also a deliverable (defined as ``data article''\cite{austin17}): describing the purpose of the computations, interpretations of the result, and the context in relation to other projects/papers.
+This is related to longevity, because if a workflow contains only the steps to do the analysis or generate the plots, in time it may get separated from its accompanying published paper.
+
+\textbf{Criterion 8: Free and open-source software (FOSS):}
+Non-FOSS software typically cannot be distributed, inspected, or modified by others.
+They are, thus, reliant on a single supplier (even without payments) and prone to \emph{proprietary obsolescence}\footnote{\inlinecode{\url{https://www.gnu.org/proprietary/proprietary-obsolescence.html}}}.
+A project that is \emph{free software} (as formally defined by GNU\footnote{\inlinecode{\url{https://www.gnu.org/philosophy/free-sw.en.html}}}), allows others to run, learn from, distribute, build upon (modify), and publish their modified versions.
+When the software used by the high-level project is also free, the lineage can be traced to the core algorithms, possibly enabling optimizations on that level and it can be modified for future hardware.
+
+Proprietary software may be necessary to read proprietary data formats produced by data collection hardware (for example, microarrays in genetics).
+In such cases, it is best to immediately convert the data to free formats upon collection and safely use or archive the data in free formats.
+
+
+
+
+
+
+
+
+
+
+\section{Proof of concept: Maneage}
+
+With the longevity problems of existing tools outlined earlier, a proof-of-concept solution is presented here via an implementation that has been tested in published papers\cite{akhlaghi19, infante20}.
+Since the initial submission of this article, it has also been used in \href{https://doi.org/10.5281/zenodo.3951151}{zenodo.3951151} (on the COVID-19 pandemic) and \href{https://doi.org/10.5281/zenodo.4062460}{zenodo.4062460} (on galaxy evolution).
+It was also awarded a Research Data Alliance (RDA) adoption grant for implementing the recommendations of the joint RDA and World Data System (WDS) working group on Publishing Data Workflows\cite{austin17}, from the researchers' perspective.
+
+It is called Maneage, for \emph{Man}aging data Lin\emph{eage} (the ending is pronounced as in ``lineage''), hosted at \inlinecode{\url{https://maneage.org}}.
+It was developed as a parallel research project over five years of publishing reproducible workflows of our research.
+Its primordial implementation was used in Akhlaghi and Ichikawa\cite{akhlaghi15}, which evolved in \href{http://doi.org/10.5281/zenodo.1163746}{zenodo.1163746} and \href{http://doi.org/10.5281/zenodo.1164774}{zenodo.1164774}.
+
+Technically, the hardest criterion to implement was the first (completeness); in particular, restricting execution requirements to only a minimal Unix-like operating system.
+One solution we considered was GNU Guix and Guix Workflow Language (GWL).
+However, because Guix requires root access to install, and only works with the Linux kernel, it failed the completeness criterion.
+Inspired by GWL+Guix, a single job management tool was implemented for both installing software \emph{and} the analysis workflow: Make.
+
+Make is not an analysis language; it is a job manager.
+Make decides when and how to call analysis steps/programs (in any language such as Python, R, Julia, Shell, or C).
+Make has been available since 1977; it is still heavily used in almost all components of modern Unix-like OSs and is standardized in POSIX.
+It is thus mature, actively maintained, highly optimized, efficient in managing provenance, and recommended by the pioneers of reproducible research\cite{claerbout1992,schwab2000}.
+Moreover, researchers using FOSS have already had some exposure to Make (most FOSS are built with Make).
+
+Linking the analysis and narrative (criterion 7) was historically our first design element.
+To avoid the problems with computational notebooks mentioned before, we adopt a more abstract linkage, providing a more direct and traceable connection.
+Assuming that the narrative is typeset in \LaTeX{}, the connection between the analysis and narrative (usually as numbers) is through automatically created \LaTeX{} macros, during the analysis.
+For example, Akhlaghi writes\cite{akhlaghi19} ``\emph{... detect the outer wings of M51 down to S/N of 0.25 ...}''.
+The \LaTeX{} source of the quote above is: ``\inlinecode{\small detect the outer wings of M51 down to S/N of \$\textbackslash{}demo\-sf\-optimized\-sn\$}''.
+The macro ``\inlinecode{\small\textbackslash{}demosfoptimizedsn}'' is automatically generated after the analysis and expands to the value ``\inlinecode{0.25}'' upon creation of the PDF.
+Since values like this depend on the analysis, they should \emph{also} be reproducible, along with figures and tables.
+
+These macros act as a quantifiable link between the narrative and analysis, with the granularity of a word in a sentence and a particular analysis command.
+This allows automatic updates to the embedded numbers during the experimentation phase of a project \emph{and} accurate postpublication provenance.
+Through the former, manual updates by authors (which are prone to errors and discourage improvements or experimentation after writing the first draft) are bypassed.
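+
+As a minimal sketch of this mechanism (the directory variables and file names are illustrative, not Maneage's exact ones), a subMakefile rule can write the macro as soon as the value is computed:
+
+\begin{lstlisting}[caption={Illustrative Make rule writing a narrative-linked \LaTeX{} macro.}]
+# Write the measured S/N into a LaTeX macro.
+$(texdir)/demo.tex: $(datadir)/sn.txt
+	printf '\\newcommand{\\demosfoptimizedsn}{%s}\n' \
+	       "$$(cat $<)" > $@
+\end{lstlisting}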
+
+Acting as a link, the macro files build the core skeleton of Maneage.
+For example, during the software building phase, each software package is identified by a \LaTeX{} file, containing its official name, version, and possible citation.
+These are combined at the end to generate precise software acknowledgment and citation that is shown in the
+\ifdefined\separatesupplement%
+appendices, available online, %
+\else%
+appendices (\ref{appendix:software}), %
+\fi%
+other examples have also been published\cite{akhlaghi19, infante20}.
+Furthermore, the machine-related specifications of the running system (including CPU architecture and byte-order) are also collected to report in the paper (they are reported for this article in the section ``Acknowledgments'').
+These can help in \emph{root cause analysis} of observed differences/issues in the execution of the workflow on different machines.
+
+The macro files also act as Make \emph{targets} and \emph{prerequisites} to allow accurate dependency tracking and optimized execution (in parallel, no redundancies), for any level of complexity (e.g., Maneage builds Matplotlib if requested; see Figure~1 in the work by Alliez et al.\cite{alliez19}).
+All software dependencies are built down to precise versions of every tool, including the shell, important low-level application programs (e.g., GNU Coreutils) and of course, the high-level science software.
+The source code of all the FOSS software used in Maneage is archived in, and downloaded from, \href{https://doi.org/10.5281/zenodo.3883409}{zenodo.3883409}.
+Zenodo promises long-term archival and also provides persistent identifiers for the files, which are sometimes unavailable at a software package's web page.
+
+On GNU/Linux distributions, even the GNU Compiler Collection (GCC) and GNU Binutils are built from source and the GNU C library (glibc) is being added\footnote{\inlinecode{\url{http://savannah.nongnu.org/task/?15390}}}.
+Currently, {\TeX}Live is also being added\footnote{\inlinecode{\url{http://savannah.nongnu.org/task/?15267}}}, but that is only for building the final PDF, not affecting the analysis or verification.
+
+Building the core Maneage software environment on an 8-core CPU takes about 1.5 hours (GCC consumes more than half of the time).
+However, this is only necessary once in a project: the analysis (which usually takes months to write/mature for a normal project) will only use the built environment.
+Hence the few hours of initial software building is negligible compared to a project's life span.
+To facilitate moving to another computer in the short term, Maneage'd projects can be built in a container or VM.
+The \inlinecode{README.md}\footnote{\inlinecode{\label{maneageatswh}\href{https://archive.softwareheritage.org/swh:1:cnt:66c1d53b2860a40aa9d350048f6b02c73c3b46c8;origin=http://git.maneage.org/project.git}{swh:1:cnt:66c1d53b2860a40aa9d350048f6b02c73c3b46c8}}} file has thorough instructions on building in Docker.
+Through containers or VMs, users on non-Unix-like OSs (like Microsoft Windows) can use Maneage.
+For Windows-native software that can be run in batch-mode, evolving technologies like Windows Subsystem for Linux may be usable.
+
+The analysis phase of the project, however, is naturally different from one project to another at a low level.
+It was, thus, necessary to design a generic framework to comfortably host any project while still satisfying the criteria of modularity, scalability, and minimal complexity.
+This design is demonstrated with the example of Figure \ref{fig:datalineage} (left) which is an enhanced replication of the ``tool'' curve of Figure 1C in the work by Menke et al.\cite{menke20}.
+Figure \ref{fig:datalineage} (right) shows the data lineage that produced it.
+
+\begin{figure*}[t]
+ \begin{center}
+ \includetikz{figure-tools-per-year}{width=0.95\linewidth}
+% \includetikz{figure-data-lineage}{width=0.85\linewidth}
+ \end{center}
+ \vspace{-3mm}
+ \caption{\label{fig:datalineage}
+ Left: an enhanced replica of Figure 1C in the work by Menke et al.\cite{menke20}, shown here for demonstrating Maneage.
+  It shows the fraction of papers mentioning software tools (green line, left vertical axis) in each year, and the total number of studied papers in that year (red bars, right vertical axis on a log scale).
+ Right: Schematic representation of the data lineage, or workflow, to generate the plot on the left.
+ Each colored box is a file in the project and arrows show the operation of various software: linking input file(s) to the output file(s).
+ Green files/boxes are plain-text files that are under version control and in the project source directory.
+ Blue files/boxes are output files in the build directory, shown within the Makefile (\inlinecode{*.mk}) where they are defined as a \emph{target}.
+ For example, \inlinecode{paper.pdf} is created by running \LaTeX{} on \inlinecode{project.tex} (in the build directory; generated automatically) and \inlinecode{paper.tex} (in the source directory; written manually).
+ Other software is used in other steps.
+ The solid arrows and full-opacity built boxes correspond to the lineage of this paper.
+ The dotted arrows and built boxes show the scalability of Maneage (ease of adding hypothetical steps to the project as it evolves).
+ The underlying data of the left plot is available at
+ \href{https://zenodo.org/record/\projectzenodoid/files/tools-per-year.txt}{zenodo.\projectzenodoid/tools-per-year.txt}.
+ }
+\end{figure*}
+
+The analysis is orchestrated through a single point of entry (\inlinecode{top-make.mk}, which is a Makefile; see Listing \ref{code:topmake}).
+It is only responsible for \inlinecode{include}-ing the modular \emph{subMakefiles} of the analysis, in the desired order, without doing any analysis itself.
+This is visualized in Figure \ref{fig:datalineage} (right) where no built (blue) file is placed directly over \inlinecode{top-make.mk}.
+A visual inspection of this file is sufficient for a non-expert to understand the high-level steps of the project (irrespective of the low-level implementation details), provided that the subMakefile names are descriptive (thus encouraging good practice).
+A human-friendly design that is also optimized for execution is a critical component for the FAIRness of reproducible research.
+
+All projects first load \inlinecode{initialize.mk} and \inlinecode{download.mk}, and finish with \inlinecode{verify.mk} and \inlinecode{paper.mk} (see Listing \ref{code:topmake}).
+Project authors add their modular subMakefiles in between.
+Except for \inlinecode{paper.mk} (which builds the ultimate target: \inlinecode{paper.pdf}), all subMakefiles build a macro file with the same base-name (the \inlinecode{.tex} file at the bottom of each subMakefile in Figure \ref{fig:datalineage}).
+Other built files (``targets'' in intermediate analysis steps) cascade down in the lineage to one of these macro files, possibly through other files.
+
+\begin{lstlisting}[
+ label=code:topmake,
+ caption={This project's simplified \inlinecode{top-make.mk}, also see Figure \ref{fig:datalineage}.\\
+ {\footnotesize (\inlinecode{\href{https://archive.softwareheritage.org/swh:1:cnt:6b055f75fa8050bbb4dee868ef1fb01e1725407d;origin=http://git.maneage.org/paper-concept.git/;visit=swh:1:snp:01ad46a4f2cb90c2998df83dc0f2d9bd3e233710;anchor=swh:1:rev:e4a5566861bb7b639624c50be45b2a04d0ce9197;path=/reproduce/analysis/make/top-make.mk}{swh:1:cnt:6b055f75fa8050bbb4dee868ef1fb01e1725407d}})}}
+ ]
+# Default target/goal of project.
+all: paper.pdf
+
+# Define subMakefiles to load in order.
+# ('format' and 'demo-plot' are project-
+# specific; the others are general.)
+makesrc = initialize \
+          download \
+          format \
+          demo-plot \
+          verify \
+          paper
+
+# Load all the configuration files.
+include reproduce/analysis/config/*.conf
+
+# Load the subMakefiles in the defined order.
+include $(foreach s,$(makesrc), \
+ reproduce/analysis/make/$(s).mk)
+\end{lstlisting}
+
+Just before reaching the ultimate target (\inlinecode{paper.pdf}), the lineage reaches a bottleneck in \inlinecode{verify.mk} to satisfy the verification criteria.
+All project deliverables (macro files, plot or table data, and other datasets) are verified at this stage, with their checksums, to automatically ensure exact reproducibility.
+Where exact reproducibility is not possible (for example, due to parallelization), values can be verified by the project authors.
+For example, see \inlinecode{\small verify-parameter-statistically.sh}\footnote{\inlinecode{\href{https://archive.softwareheritage.org/swh:1:cnt:dae4e6de5399a061ab4df01ea51f4757fd7e293a;origin=https://codeberg.org/boud/elaphrocentre.git;visit=swh:1:snp:54f00113661ea30c800b406eee55ea7a7ea35279;anchor=swh:1:rev:a029edd32d5cd41dbdac145189d9b1a08421114e;path=/reproduce/analysis/bash/verify-parameter-statistically.sh}{swh:1:cnt:dae4e6de5399a061ab4df01ea51f4757fd7e293a}}} of \href{https://doi.org/10.5281/zenodo.4062460}{zenodo.4062460}.
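+
+As a hedged sketch (the directory variables, file name, and checksum below are illustrative placeholders, not the project's actual values), such a rule compares a deliverable's checksum with its recorded value and halts the lineage on any mismatch:
+
+\begin{lstlisting}[caption={Illustrative checksum verification rule.}]
+# Halt the lineage if a deliverable changed.
+$(texdir)/verify.tex: $(datadir)/tools-per-year.txt
+	sum=$$(md5sum $< | awk '{print $$1}'); \
+	[ "$$sum" = "ad14e8d2..." ] \
+	  || { echo "$< changed"; exit 1; }
+	echo "%% Verified." > $@
+\end{lstlisting}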
+
+\begin{figure*}[t]
+ \begin{center} \includetikz{figure-branching}{scale=1}\end{center}
+ \vspace{-3mm}
+ \caption{\label{fig:branching} Maneage is a Git branch.
+ Projects using Maneage are branched off it and apply their customizations.
+ (a) Hypothetical project's history before publication.
+ The low-level structure (in Maneage, shared between all projects) can be updated by merging with Maneage.
+ (b) Finished/published project can be revitalized for new technologies by merging with the core branch.
+ Each Git ``commit'' is shown on its branch as a colored ellipse, with its commit hash shown and colored to identify the team that is/was working on the branch.
+ Briefly, Git is a version control system, allowing a structured backup of project files, for more see
+ \ifdefined\separatesupplement%
+ supplementary appendices available online (section on version control)%
+ \else%
+ Appendix \ref{appendix:versioncontrol}%
+ \fi%
+ . Each Git ``commit'' effectively contains a copy of all the project's files at the moment it was made.
+ The upward arrows at the branch-tops are, therefore, in the direction of time.
+ }
+\end{figure*}
+
+To minimize complexity, the low-level implementation can be further separated from the high-level execution through configuration files.
+By convention in Maneage, the subMakefiles (and the programs they call for number crunching) do not contain any fixed numbers, settings, or parameters.
+Parameters are set as Make variables in ``configuration files'' (with a \inlinecode{.conf} suffix) and passed to the respective program by Make.
+For example, in Figure \ref{fig:datalineage} (right), \inlinecode{INPUTS.conf} contains URLs and checksums for all imported datasets, thereby enabling exact verification before usage.
+To illustrate this, we report that Menke et al.\cite{menke20} studied $\menkenumpapersdemocount$ papers in $\menkenumpapersdemoyear$ (which is not in their original plot).
+The number \inlinecode{\menkenumpapersdemoyear} is stored in \inlinecode{demo-year.conf} and the result (\inlinecode{\menkenumpapersdemocount}) was calculated after generating \inlinecode{tools-per-year.txt}.
+Both numbers are expanded as \LaTeX{} macros when creating this PDF file.
+An interested reader can change the value in \inlinecode{demo-year.conf} to automatically update the result in the PDF, without knowing the underlying low-level implementation.
+Furthermore, the configuration files are a prerequisite of the targets that use them.
+If changed, Make will \emph{only} re-execute the dependent recipe and all its descendants, with no modification to the project's source or other built products.
+This fast and cheap testing encourages experimentation (without necessarily knowing the implementation details; e.g., by co-authors or future readers), and ensures self-consistency.
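+
+As an illustrative sketch of this convention (the variable name, its value, and the target names are hypothetical, not the project's actual contents):
+
+\begin{lstlisting}[caption={Illustrative configuration file used as a verified prerequisite.}]
+# demo-year.conf: the only place the year is set.
+menke-demo-year = 1996
+
+# In a subMakefile: re-run only when the table
+# or the configuration changes.
+$(datadir)/subset.txt: $(datadir)/tools-per-year.txt \
+            reproduce/analysis/config/demo-year.conf
+	awk '$$1==$(menke-demo-year)' $< > $@
+\end{lstlisting}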
+
+In contrast to notebooks like Jupyter, the analysis scripts, configuration parameters, and paper's narrative are, therefore, not blended into a single file, and do not require a unique editor.
+To satisfy the modularity criterion, the analysis steps and narrative are written and run in their own files (in different languages) and the files can be viewed or manipulated with any text editor that the authors prefer.
+The analysis can benefit from the powerful and portable job management features of Make and communicates with the narrative text through \LaTeX{} macros, enabling much better-formatted output that blends analysis outputs in the narrative sentences and enables direct provenance tracking.
+
+To satisfy the recorded history criterion, version control (currently implemented in Git) is another component of Maneage (see Figure \ref{fig:branching}).
+Maneage is a Git branch that contains the shared components (infrastructure) of all projects (e.g., software tarball URLs, build recipes, common subMakefiles, and the interface script).
+The core Maneage git repository is hosted at \inlinecode{\href{http://git.maneage.org/project.git}{git.maneage.org/project.git}} (archived at Software Heritage\footnote{\inlinecode{\href{https://archive.softwareheritage.org/swh:1:dir:45a9e282a86145fe9babef529c8fce52ffe8d717;origin=http://git.maneage.org/paper-concept.git/;visit=swh:1:snp:33d24ae2107e25c734067d704cdad9d33013588a;anchor=swh:1:rev:b858c601613d620f5cf4501816e161a2f8f2e100}{swh:1:dir:45a9e282a86145fe9babef529c8fce52ffe8d717}}}).
+Derived projects start by creating a branch and customizing it (e.g., adding a title, data links, narrative, and subMakefiles for its particular analysis, see Listing \ref{code:branching}).
+There is a thoroughly elaborated customization checklist in \inlinecode{README-hacking.md}.
+
+The current project's Git hash is provided to the authors as a \LaTeX{} macro (shown here in the sections ``Abstract'' and ``Acknowledgments''), as well as the Git hash of the last commit in the Maneage branch (shown here in the section ``Acknowledgments'').
+These macros are created in \inlinecode{initialize.mk}, with other basic information from the running system like the CPU details (shown in the section ``Acknowledgments'').
+As opposed to Git ``tag''s, the hash is a core concept in the Git paradigm and is immutable and always present in a given history, which is why it is the recommended version identifier.
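+
+As a sketch of how such a macro can be recorded (the exact implementation in \inlinecode{initialize.mk} may differ), shell code run at build time captures the hash from Git:
+
+\begin{lstlisting}[caption={Recording the commit hash as a \LaTeX{} macro (sketch).}]
+# Commit hash; marked when the tree is dirty.
+v=$(git describe --dirty --always --long)
+printf '\\newcommand{\\projectversion}{%s}\n' "$v"
+\end{lstlisting}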
+
+Figure \ref{fig:branching} shows how projects can reimport Maneage at a later time (technically: \emph{merge}), thus improving their low-level infrastructure: in (a), authors do the merge during an ongoing project;
+in (b), readers do it after publication, e.g., when the project remains reproducible but its infrastructure is outdated, or when a bug is fixed in Maneage.
+Generally, any Git flow (branching strategy) can be used by the high-level project authors or future readers.
+Low-level improvements in Maneage can, thus, propagate to all projects, greatly reducing the cost of project curation and maintenance, before \emph{and} after publication.
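+
+Using the remote name chosen in Listing \ref{code:branching}, such a re-import can be sketched as follows (conflicts, if any, are resolved as in any Git merge):
+\begin{lstlisting}[
+    label=code:merge,
+    caption={Re-importing Maneage into an existing project},
+  ]
+# Fetch and merge the core 'maneage' branch.
+$ git fetch origin-maneage
+$ git merge origin-maneage/maneage
+
+# Rebuild to confirm the project still works
+# (configure again in case software was updated).
+$ ./project configure
+$ ./project make
+\end{lstlisting}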
+
+Finally, a snapshot of the complete project source is usually $\sim100$ kilobytes.
+It can, thus, easily be published or archived on many servers: for example, it can be uploaded to arXiv (with the \LaTeX{} source \cite{akhlaghi19, infante20, akhlaghi15}), published on Zenodo, and archived in Software Heritage.
+
+\begin{lstlisting}[
+ label=code:branching,
+ caption={Starting a new project with Maneage, and building it},
+ ]
+# Cloning Maneage and branching off of it.
+$ git clone https://git.maneage.org/project.git
+$ cd project
+$ git remote rename origin origin-maneage
+$ git checkout -b main
+
+# Build the raw Maneage skeleton in two phases.
+$ ./project configure # Build software environment.
+$ ./project make # Do analysis, build PDF paper.
+
+# Start editing, test-building and committing.
+$ emacs paper.tex # Set your name as author.
+$ ./project make # Rebuild to see effect.
+$ git add -u && git commit # Commit changes.
+\end{lstlisting}
+
+
+
+
+
+
+
+
+
+
+\section{Discussion}
+\label{discussion}
+%% It should provide some insight or 'lessons learned', where 'lessons learned' is jargon for 'informal hypotheses motivated by experience', reworded to make the hypotheses sound more scientific (if it's a 'lesson', then it sounds like knowledge, when in fact it's really only a hypothesis).
+%% What is the message we should take from the experience?
+%% Are there clear demonstrated design principles that can be reapplied elsewhere?
+%% Are there roadblocks or bottlenecks that others might avoid?
+%% Are there suggested community or work practices that can make things smoother?
+%% Attempt to generalise the significance.
+%% should not just present a solution or an enquiry into a unitary problem but make an effort to demonstrate wider significance and application and say something more about the ‘science of data’ more generally.
+
+We have shown that it is possible to build workflows satisfying all the proposed criteria.
+Here we comment on our experience in testing them through Maneage and its increasing user base (thanks to the support of RDA).
+
+First, while most researchers are generally familiar with the necessary low-level tools (e.g., Git, \LaTeX, the command line, and Make), these tools are not widely used in research workflows.
+Fortunately, we have noticed that many researchers, especially early-career ones, start mastering these tools after witnessing the improvements in their own research.
+Scientists are rarely trained sufficiently in data management or software development, and the plethora of high-level tools that change every few years discourages them.
+Indeed, the fast-evolving tools are primarily targeted at software developers, who are paid to learn and use them effectively for short-term projects before moving on to the next technology.
+
+Scientists, on the other hand, need to focus on their own research fields and must consider longevity.
+Hence, arguably the most important feature of these criteria (as implemented in Maneage) is that they provide a fully working template that functions immediately out of the box, producing a paper with an example calculation that authors only need to start customizing.
+It does so using mature and time-tested tools, blending version control, the research paper's narrative, software management, \emph{and} a robust data management strategy.
+We have noticed that providing a clear checklist of the initial customizations is much more effective in encouraging mastery of these core analysis tools than abstract, isolated tutorials on each tool individually.
+
+Second, to satisfy the completeness criterion, all the required software of the project must be built on various Unix-like OSs (Maneage is actively tested on different GNU/Linux distributions and macOS, and is being ported to FreeBSD as well).
+This requires maintenance by our core team and consumes time and energy.
+However, because the PM and analysis components share the same job manager (Make) and design principles, we have already noticed some early users adding, or fixing, their required software on their own.
+They later share their low-level commits on the core branch, thus propagating them to all derived projects.
+
+Third, Unix-like OSs are a very large and diverse group (mostly conforming with POSIX), so our completeness condition does not guarantee bitwise reproducibility of the software, even when built on the same hardware.
+However, our focus is on reproducing results (output of software), not the software itself.
+Well-written software internally corrects for differences in OS or hardware that may affect its output (through tools like the GNU Portability Library, or Gnulib).
+
+On GNU/Linux hosts, Maneage builds precise versions of the compilation tool chain.
+However, the GNU C Library (glibc) cannot be installed on some Unix-like OSs (e.g., macOS), and all programs link with the host's C library.
+This may hypothetically hinder the exact reproducibility \emph{of results} on non-GNU/Linux systems, but we have not encountered this in our research so far.
+With everything else under precise control in Maneage, the effect of differing hardware, kernel, and C libraries on high-level science can now be systematically studied in follow-up research (including floating-point arithmetic or optimization differences).
+Using continuous integration (CI) is one way to precisely identify breaking points on multiple systems.
+
+% DVG: It is a pity that the following paragraph cannot be included, as it is really important but perhaps goes beyond the intended goal.
+%Thirdly, publishing a project's reproducible data lineage immediately after publication enables others to continue with follow-up papers, which may provide unwanted competition against the original authors.
+%We propose these solutions:
+%1) Through the Git history, the work added by another team at any phase of the project can be quantified, contributing to a new concept of authorship in scientific projects and helping to quantify Newton's famous ``\emph{standing on the shoulders of giants}'' quote.
+%This is a long-term goal and would require major changes to academic value systems.
+%2) Authors can be given a grace period where the journal or a third party embargoes the source, keeping it private for the embargo period and then publishing it.
+
+Other implementations of the criteria, or future improvements in Maneage, may solve some of the caveats, but this proof of concept already shows many advantages.
+For example, the publication of projects meeting these criteria on a wide scale will allow automatic workflow generation, optimized for desired characteristics of the results (e.g., via machine learning).
+The completeness criterion implies that algorithms and data selection can be included in the optimizations.
+
+Furthermore, through elements like the macros, natural language processing can also be included, automatically analyzing the connection between an analysis and its resulting narrative, \emph{and} the history of that analysis+narrative.
+Parsers can be written over projects for metaresearch and provenance studies, e.g., to generate Research Objects
+\ifdefined\separatesupplement
+(see supplement appendix B, available online)
+\else
+(see Appendix \ref{appendix:researchobject})
+\fi
+or allow interoperability with the Common Workflow Language (CWL) or higher-level concepts like the Canonical Workflow Framework for Research (CWFR)
+\ifdefined\separatesupplement
+(see supplement appendix A, available online).
+\else
+(see Appendix \ref{appendix:genericworkflows}).
+\fi
+
+Likewise, when a bug is found in one scientific software package, affected projects can be detected and the scale of the effect can be measured.
+Combined with Software Heritage, precise high-level science components of the analysis can be accurately cited (e.g., even failed/abandoned tests at any historical point).
+Many components of ``machine-actionable'' data management plans can also be automatically completed as a byproduct, useful for project PIs and grant funders.
+
+From the data repository perspective, these criteria can also be useful, e.g., in addressing the challenges mentioned in the work by Austin et al.\cite{austin17}:
+(1) The burden of curation is shared among all project authors and readers (the latter may find a bug and fix it), not just by database curators, thereby improving sustainability.
+(2) Automated and persistent bidirectional linking of data and publication can be established through the published \emph{and complete} data lineage that is under version control.
+(3) Software management: with these criteria, each project comes with its unique and complete software management.
+It does not use a third-party PM that needs to be maintained by the data center (and the many versions of the PM), hence enabling robust software management, preservation, publishing, and citation.
+For example, see \href{https://doi.org/10.5281/zenodo.1163746}{zenodo.1163746}, \href{https://doi.org/10.5281/zenodo.3408481}{zenodo.3408481}, \href{https://doi.org/10.5281/zenodo.3524937}{zenodo.3524937}, \href{https://doi.org/10.5281/zenodo.3951151}{zenodo.3951151} or \href{https://doi.org/10.5281/zenodo.4062460}{zenodo.4062460}, where we distribute the source code of all (FOSS) software used in each project, as deliverables.
+(4) ``Linkages between documentation, code, data, and journal articles in an integrated environment'', which effectively summarizes the whole purpose of these criteria.
+
+
+
+
+
+% use section* for acknowledgment
+\section*{Acknowledgment}
+
+This project (commit \inlinecode{\projectversion}) is maintained in Maneage (\emph{Man}aging data lin\emph{eage}).
+The latest merged Maneage branch commit was \inlinecode{\maneageversion} (\maneagedate).
+This project was built on an \inlinecode{\machinearchitecture} machine with {\machinebyteorder} byte-order and address sizes {\machineaddresssizes}.
+
+The authors wish to thank (sorted alphabetically)
+Julia Aguilar-Cabello,
+Dylan A\"issi,
+Marjan Akbari,
+Alice Allen,
+Pedram Ashofteh Ardakani,
+Roland Bacon,
+Michael R. Crusoe,
+Roberto Di Cosmo,
+Antonio D\'iaz D\'iaz,
+Surena Fatemi,
+Fabrizio Gagliardi,
+Konrad Hinsen,
+Marios Karouzos,
+Johan Knapen,
+Florian Kohrt,
+Tamara Kovazh,
+Sebastian Luna Valero,
+Terry Mahoney,
+Javier Mold\'on,
+Ryan O'Connor,
+Mervyn O'Luing,
+Simon Portegies Zwart,
+R\'emi Rampin,
+Vicky Rampin,
+Susana Sanchez Exposito,
+Idafen Santana-P\'erez,
+Elham Saremi,
+Yahya Sefidbakht,
+Zahra Sharbaf,
+Nadia Tonello,
+Ignacio Trujillo,
+Lourdes Verdes-Montenegro
+and Peter Wittenburg
+for their useful help, suggestions, and feedback on Maneage and this paper.
+The five referees and editors of CiSE (Lorena Barba and George Thiruvathukal) provided many points that greatly helped to clarify this paper.
+
+Work on Maneage, and this paper, has been partially funded/supported by the following institutions:
+The Japanese MEXT PhD scholarship to M.A. and its Grant-in-Aid for Scientific Research (21244012, 24253003).
+The European Research Council (ERC) advanced grant 339659-MUSICOS.
+The European Union (EU) Horizon 2020 (H2020) research and innovation programmes No 777388 under RDA EU 4.0 project, and Marie Sk\l{}odowska-Curie grant agreement No 721463 to the SUNDIAL ITN.
+The State Research Agency (AEI-MCINN) of the Spanish Ministry of Science and Innovation (SMSI) under the grant ``The structure and evolution of galaxies and their central regions'' with reference PID2019-105602GB-I00/10.13039/501100011033.
+The IAC project P/300724, financed by the SMSI, through the Canary Islands Department of Economy, Knowledge and Employment.
+The ``A next-generation worldwide quantum sensor network with optical atomic clocks'' project of the TEAM IV programme of the
+Foundation for Polish Science co-financed by the EU under ERDF.
+The Polish MNiSW grant DIR/WK/2018/12.
+The Pozna\'n Supercomputing and Networking Center (PSNC) computational grant 314.
-%% Start of main body.
-\section{Congratulations!}
-Congratulations on running the raw template project! You can now follow the ``Customization checklist'' in the \texttt{README-hacking.md} file, customize this template and start your exciting research project over it.
-You can always merge Maneage back into your project to improve its infra-structure and leaving your own project intact.
-If you haven't already read \citet{maneage}, please do so before continuing, it isn't long (just 7 pages).
-While you are writing your paper, just don't forget to \emph{not} use numbers or fixed strings (for example database urls like \url{\wfpctwourl}) directly within your \LaTeX{} source.
-Put them in configuration files and after using them in the analysis, pass them into the \LaTeX{} source through macros in the same subMakefile that used them.
-For some already published examples, please see \citet{maneage}\footnote{\url{https://gitlab.com/makhlaghi/maneage-paper}}, \citet{infantesainz20}\footnote{\url{https://gitlab.com/infantesainz/sdss-extended-psfs-paper}} and \citet{akhlaghi19}\footnote{\url{https://gitlab.com/makhlaghi/iau-symposium-355}}.
-Working in this way, will let you focus clearly on your science and not have to worry about fixing this or that number/name in the text.
-Once your project is ready for publication, there is also a ``Publication checklist'' in \texttt{README-hacking.md} that will guide you in the steps to do for making your project as FAIR as possible (Findable, Accessibile, Interoperable, and Reusable).
-The default \LaTeX{} structure within Maneage also has two \LaTeX{} macros for easy marking of text within your document as \emph{new} and \emph{notes}.
-To activate them, please use the \texttt{-{}-highlight-new} or \texttt{-{}-highlight-notes} options with \texttt{./project make}.
-For example if you run \texttt{./project make -{}-highlight-new}, then \new{this text (that has been marked as \texttt{new}) will show up as green in the final PDF}.
-If you run \texttt{./project make -{}-highlight-notes} then you will see a note following this sentence that is written in red and has square brackets around it (it is invisible without this option).
-\tonote{This text is written within a \texttt{tonote} and is invisible without \texttt{-{}-highlight-notes}.}
-You can also use these two options together to both highlight the new parts and add notes within the text.
+%% Bibliography of main body
+\bibliographystyle{IEEEtran_openaccess}
+\bibliography{IEEEabrv,references}
-Another thing you may notice from the \LaTeX{} source of this default paper is there is one line per sentence (and one sentence in a line).
-Of course, as with everything else in Maneage, you are free to use any format that you are most comfortable with.
-The reason behind this choice is that this source is under Git version control and that Git also primarily works on lines.
-In this way, when a change in a setence is made, git will only highlight/color that line/sentence we have found that this helps a lot in viewing the changes.
-Also, this format helps in reminding the author when the sentence is getting too long!
-Here is a tip when looking at the changes of narrative document in Git: use the \texttt{-{}-word-diff} option (for example \texttt{git diff -{}-word-diff}, you can also use it with \texttt{git log}).
+%% Biography
+\begin{IEEEbiographynophoto}{Mohammad Akhlaghi}
+ is currently a Postdoctoral Researcher with the Instituto de Astrof\'isica de Canarias (IAC), Santa Cruz de Tenerife, Spain.
+ Prior to this, he was a CNRS postdoc in Lyon, France.
+ He received the Ph.D. degree from Tohoku University, Sendai, Japan.
+ He is the corresponding author of this article.
+ His ORCID ID is \href{https://orcid.org/0000-0003-1710-6613}{0000-0003-1710-6613}.
+ For this article he is affiliated with:
+ 1) Instituto de Astrof\'isica de Canarias, C/V\'ia L\'actea, 38205. La Laguna, Tenerife, Spain.
+ 2) Facultad de F\'isica, Universidad de La Laguna, Avda. Astrofísico Fco. S\'anchez s/n, 38205. La Laguna, Tenerife, Spain.
+ 3) Univ. Lyon, Ens de Lyon, Univ Lyon1, CNRS, Centre de Recherche Astrophysique de Lyon UMR5574, 69007, Lyon, France.
+ For more details visit \url{https://akhlaghi.org}.
+ Contact him at mohammad@akhlaghi.org.
+\end{IEEEbiographynophoto}
-Figure \ref{squared} shows a simple plot as a demonstration of creating plots within \LaTeX{} (using the {\small PGFP}lots package).
-The minimum value in this distribution is $\deletememin$, and $\deletememax$ is the maximum.
-Take a look into the \LaTeX{} source and you'll see these numbers are actually macros that were calculated from the same dataset (they will change if the dataset, or function that produced it, changes).
+\begin{IEEEbiographynophoto}{Ra\'ul Infante-Sainz}
+ is currently a Doctoral student at IAC, Spain.
+ He received the M.Sc. degree from the University of Granada, Granada, Spain.
+ His ORCID ID is \href{https://orcid.org/0000-0002-6220-7133}{0000-0002-6220-7133}.
+ For this article he is affiliated with:
+ 1) Instituto de Astrof\'isica de Canarias, C/V\'ia L\'actea, 38205. La Laguna, Tenerife, Spain.
+ 2) Facultad de F\'isica, Universidad de La Laguna, Avda. Astrofísico Fco. S\'anchez s/n, 38205. La Laguna, Tenerife, Spain.
+ For more details visit \url{https://infantesainz.org}.
+ Contact him at infantesainz@gmail.com.
+\end{IEEEbiographynophoto}
-The individual {\small PDF} file of Figure \ref{squared} is available under the \texttt{tex/tikz/} directory of your build directory.
-You can use this PDF file in other contexts (for example in slides showing your progress or after publishing the work).
-If you want to directly use the {\small PDF} file in the figure without having to let {\small T}i{\small KZ} decide if it should be remade or not, you can also comment the \texttt{makepdf} macro at the top of this \LaTeX{} source file.
+\begin{IEEEbiographynophoto}{Boudewijn F. Roukema}
+ is a professor of cosmology with the Institute of Astronomy, Faculty of Physics, Astronomy and Informatics, Nicolaus Copernicus University, Toru\'n, Poland.
+ He received the Ph.D. degree from Australian National University, Canberra, ACT, Australia.
+ His ORCID ID is \href{https://orcid.org/0000-0002-3772-0250}{0000-0002-3772-0250}.
+ For this article he is affiliated with:
+ 1) Institute of Astronomy, Faculty of Physics, Astronomy and Informatics, Nicolaus Copernicus University, Grudziadzka 5, 87-100 Torun, Poland.
+ 2) Univ. Lyon, Ens de Lyon, Univ Lyon1, CNRS, Centre de Recherche Astrophysique de Lyon UMR5574, 69007, Lyon, France.
+ Contact him at boud@astro.uni.torun.pl.
+\end{IEEEbiographynophoto}
-\begin{figure}[t]
- \includetikz{delete-me-squared}{width=\linewidth}
+\begin{IEEEbiographynophoto}{Mohammadreza Khellat}
+ is currently the Backend Technical Services Manager at Ideal-Information, Muscat, Oman.
+ He received the M.Sc. degree in theoretical particle physics from Yazd University, Yazd, Iran.
+ His ORCID ID is \href{https://orcid.org/0000-0002-8236-809X}{0000-0002-8236-809X}.
+ For this article he is affiliated with
+ Ideal-Information, PC 133 Al Khuwair, PO Box 1886, Muscat, Oman.
+ Contact him at mkhellat@ideal-information.com.
+\end{IEEEbiographynophoto}
- \captionof{figure}{\label{squared} A very basic $X^2$ plot for
- demonstration.}
-\end{figure}
+\begin{IEEEbiographynophoto}{David Valls-Gabaud}
+ is a CNRS Research Director at LERMA, Observatoire de Paris, France.
+ He studied at the Universities of Madrid, Paris and Cambridge, and obtained his Ph.D. degree in 1991.
+ For this article, he is affiliated with
+ LERMA, CNRS UMR 8122, Paris Observatory, 75014 Paris, France.
+ Contact him at david.valls-gabaud@observatoiredeparis.psl.eu.
+\end{IEEEbiographynophoto}
+
+\begin{IEEEbiographynophoto}{Roberto Baena-Gall\'e}
+ is a professor at the Universidad Internacional de La Rioja, La Rioja, Spain.
+ He was a Postdoc with Instituto de Astrof\'isica de Canarias (IAC), Spain.
+ He received a degree from the University of Seville, Seville, Spain and a Ph.D. degree from the University of Barcelona, Barcelona, Spain.
+ His ORCID id is \href{https://orcid.org/0000-0001-5214-7408}{0000-0001-5214-7408}.
+ For this article, he is affiliated with
+ Universidad Internacional de La Rioja (UNIR), Gran V\'ia Rey Juan Carlos I, 41. 26002 Logro\~no, La Rioja, Spain.
+ Contact him at roberto.baena@unir.net.
+\end{IEEEbiographynophoto}
+\vfill
-Figure \ref{image-histogram} is another demonstration of showing images (datasets) using PGFPlots.
-It shows a small crop of an image from the Wide-Field Planetary Camera 2 (that was installed on the Hubble Space Telescope from 1993 to 2009).
-As another more realistic demonstration of reporting results with Maneage, here we report that the mean pixel value in that image is $\deletemewfpctwomean$ and the median is $\deletemewfpctwomedian$.
-The skewness in the histogram of Figure \ref{image-histogram}(b) explains this difference between the mean and median.
-The dataset is visualized here as a black and white image using the \textsf{Convert\-Type} program of GNU Astronomy Utilities (Gnuastro).
-The histogram and basic statstics were generated with Gnuastro's \textsf{Statistics} program.
-{\small PGFP}lots\footnote{\url{https://ctan.org/pkg/pgfplots}} is a great tool to build the plots within \LaTeX{} and removes the necessity to add further dependencies, just to create the plots.
-There are high-level libraries like Matplotlib which also generate plots.
-However, the problem is that they require \emph{many} dependencies, for example see Figure 1 of \citet{alliez19}.
-Installing these dependencies from source, is not easy and will harm the reproducibility of your paper in the future.
-Furthermore, since {\small PGFP}lots builds the plots within \LaTeX{}, it respects all the properties of your text (for example line width and fonts and etc).
-Therefore the final plot blends in your paper much more nicely.
-It also has a wonderful manual\footnote{\url{http://mirrors.ctan.org/graphics/pgf/contrib/pgfplots/doc/pgfplots.pdf}}.
-\begin{figure}[t]
- \includetikz{delete-me-image-histogram}{width=\linewidth}
- \captionof{figure}{\label{image-histogram} (a) An example image of the Wide-Field Planetary Camera 2, on board the Hubble Space Telescope from 1993 to 2009.
- This is one of the sample images from the FITS standard webpage, kept as examples for this file format.
- (b) Histogram of pixel values in (a).}
-\end{figure}
-\section{Notice and citations}
-To encourage other scientists to publish similarly reproducible papers, please add a notice close to the start of your paper or in the end of the abstract clearly mentioning that your work is fully reproducible.
-One convention we have adopted until now is to put the Git checkum of the project as the last word of the abstract, for example see \citet{akhlaghi19}, \citet{infantesainz20} and \citet{maneage}
-Finally, don't forget to cite \citet{maneage} and acknowledge the funders mentioned below.
-Otherwise we won't be able to continue working on Maneage.
-Also, just as another reminder, before publication, don't forget to follow the ``Publication checklist'' of \texttt{README-hacking.md}.
-%% End of main body.
-\section{Acknowledgments}
-\new{Please include the following paragraph in the Acknowledgement section of your paper.
- In order to get more funding to continue working on Maneage, we need to to cite it and its funding institutions in your papers.
- Also note that at the start, it includes version and date information for the most recent Maneage commit you merged with (which can be very helpful for others) as well as very basic information about your CPU architecture (which was extracted during configuration).
- This CPU information is very important for reproducibility because some software may not be buildable on other CPU architectures, so it is necessary to publish CPU information with the results and software versions.}
-This project was developed in the reproducible framework of Maneage \citep[\emph{Man}aging data lin\emph{eage},][latest Maneage commit \maneageversion{}, from \maneagedate]{maneage}.
-The project was built on an {\machinearchitecture} machine with {\machinebyteorder} byte-order, see Appendix \ref{appendix:software} for the used software and their versions.
-Maneage has been funded partially by the following grants: Japanese Ministry of Education, Culture, Sports, Science, and Technology (MEXT) PhD scholarship to M. Akhlaghi and its Grant-in-Aid for Scientific Research (21244012, 24253003).
-The European Research Council (ERC) advanced grant 339659-MUSICOS.
-The European Union (EU) Horizon 2020 (H2020) research and innovation programmes No 777388 under RDA EU 4.0 project, and Marie Sk\l{}odowska-Curie grant agreement No 721463 to the SUNDIAL ITN.
-The State Research Agency (AEI) of the Spanish Ministry of Science, Innovation and Universities (MCIU) and the European Regional Development Fund (ERDF) under the grant AYA2016-76219-P.
-The IAC project P/300724, financed by the MCIU, through the Canary Islands Department of Economy, Knowledge and Employment.
-%% Tell BibLaTeX to put the bibliography list here.
-\printbibliography
-%% Start appendix.
-\appendix
-%% Mention all used software in an appendix.
-\section{Software acknowledgement}
-\label{appendix:software}
-\input{tex/build/macros/dependencies.tex}
-%% Finish LaTeX
+
+%% Appendix (only build if 'separatesupplement' has not been given): by
+%% default, the appendices are built.
+\ifdefined\separatesupplement
+\else
+\clearpage
+\appendices
+\input{tex/src/appendix-existing-tools.tex}
+\input{tex/src/appendix-existing-solutions.tex}
+\input{tex/src/appendix-used-software.tex}
+%\input{tex/src/appendix-necessity.tex}
+\bibliographystyleappendix{IEEEtran_openaccess}
+\bibliographyappendix{IEEEabrv,references}
+\fi
\end{document}
-%% This file is part of Maneage (https://maneage.org).
-%
-%% This file is free software: you can redistribute it and/or modify it
-%% under the terms of the GNU General Public License as published by the
-%% Free Software Foundation, either version 3 of the License, or (at your
-%% option) any later version.
-%
-%% This file is distributed in the hope that it will be useful, but WITHOUT
-%% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
-%% FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
-%% for more details.
-%
-%% You should have received a copy of the GNU General Public License along
-%% with this file. If not, see <http://www.gnu.org/licenses/>.
+
+
+
+
+
+
+
+
+
+%%\newpage
+%%\section{Things remaining to add}
+%%\begin{itemize}
+%%\item \url{https://sites.nationalacademies.org/cs/groups/pgasite/documents/webpage/pga_180684.pdf}, does the following classification of tools:
+%% \begin{itemize}
+%% \item Research environments: \href{http://vcr.stanford.edu}{Verifiable computational research} (discussed above), \href{http://www.sciencedirect.com/science/article/pii/S1877050911001207}{SHARE} (a Virtual Machine), \href{http://www.codeocean.com}{Code Ocean} (discussed above), \href{http://jupyter.org}{Jupyter} (discussed above), \href{https://yihui.name/knitr}{knitR} (based on Sweave, dynamic report generation with R), \href{https://cran.r-project.org}{Sweave} (Function in R, for putting R code within \LaTeX), \href{http://www.cyverse.org}{Cyverse} (proprietary web tool with servers for bioinformatics), \href{https://nanohub.org}{NanoHUB} (collection of Simulation Programs for nanoscale phenomena that run in the cloud), \href{https://www.elsevier.com/about/press-releases/research-and-journals/special-issue-computers-and-graphics-incorporates-executable-paper-grand-challenge-winner-collage-authoring-environment}{Collage Authoring Environment} (discussed above), \href{https://osf.io/ns2m3}{SOLE} (discussed above), \href{https://osf.io}{Open Science framework} (a hosting webpage), \href{https://www.vistrails.org}{VisTrails} (discussed above), \href{https://pypi.python.org/pypi/Sumatra}{Sumatra} (discussed above), \href{http://software.broadinstitute.org/cancer/software/genepattern}{GenePattern} (reviewed above), Image Processing On Line (\href{http://www.ipol.im}{IPOL}) journal (publishes full analysis scripts, but does not deal with dependencies), \href{https://github.com/systemslab/popper}{Popper} (reviewed above), \href{https://galaxyproject.org}{Galaxy} (reviewed above), \href{http://torch.ch}{Torch.ch} (finished project for neural networks on images), \href{http://wholetale.org/}{Whole Tale} (discussed above).
+%% \item Workflow systems: \href{http://www.taverna.org.uk}{Taverna}, \href{http://www.wings-workflows.org}{Wings}, \href{https://pegasus.isi.edu}{Pegasus}, \href{http://www.pgbovine.net/cde.html}{CDE}, \href{http://binder.org}{Binder}, \href{http://wiki.datakurator.org/wiki}{Kurator}, \href{https://kepler-project.org}{Kepler}, \href{https://github.com/everware}{Everware}, \href{http://cds.nyu.edu/projects/reprozip}{Reprozip}.
+%% \item Dissemination platforms: \href{http://researchcompendia.org}{ResearchCompendia}, \href{https://datacenterhub.org/about}{DataCenterHub}, \href{http://runmycode.org}, \href{https://www.chameleoncloud.org}{ChameleonCloud}, \href{https://occam.cs.pitt.edu}{Occam}, \href{http://rcloud.social/index.html}{RCloud}, \href{http://thedatahub.org}{TheDataHub}, \href{http://www.ahay.org/wiki/Package_overview}{Madagascar}.
+%% \end{itemize}
+%%\item Special volume on ``Reproducible research'' in the Computing in Science Engineering \citeappendix{fomel09}.
+%%\item ``I’ve learned that interactive programs are slavery (unless they include the ability to arrive in any previous state by means of a script).'' \citeappendix{fomel09}.
+%%\item \citeappendix{fomel09} discuss the ``versioning problem'': on different systems, programs have different versions.
+%%\item \citeappendix{fomel09}: a C program written 20 years ago was still usable.
+%%\item \citeappendix{fomel09}: ``in an attempt to increase the size of the community, Matthias Schwab and I submitted a paper to Computers in Physics, one of CiSE’s forerunners. It was rejected. The editors said if everyone used Microsoft computers, everything would be easily reproducible. They also predicted the imminent demise of Fortran''.
+%%\item \citeappendix{alliez19}: Software citation, with a nice dependency plot for matplotlib.
+%% \item SC \href{https://sc19.supercomputing.org/submit/reproducibility-initiative}{Reproducibility Initiative} for mandatory Artifact Description (AD).
+%% \item \href{https://www.acm.org/publications/policies/artifact-review-badging}{Artifact review badging} by the Association of computing machinery (ACM).
+%% \item eLife journal \href{https://elifesciences.org/labs/b521cf4d/reproducible-document-stack-towards-a-scalable-solution-for-reproducible-articles}{announcement} on reproducible papers. \citeappendix{lewis18} is their first reproducible paper.
+%% \item The \href{https://www.scientificpaperofthefuture.org}{Scientific paper of the future initiative} encourages geoscientists to include associate metadata with scientific papers \citeappendix{gil16}.
+%% \item Digital objects: \url{http://doi.org/10.23728/b2share.b605d85809ca45679b110719b6c6cb11} and \url{http://doi.org/10.23728/b2share.4e8ac36c0dd343da81fd9e83e72805a0}
+%% \item \citeappendix{mesirov10}, \citeappendix{casadevall10}, \citeappendix{peng11}: Importance of reproducible research.
+%% \item \citeappendix{sandve13} is an editorial recommendation to publish reproducible results.
+%% \item \citeappendix{easterbrook14} Free/open software for open science.
+%% \item \citeappendix{peng15}: Importance of better statistical education.
+%% \item \citeappendix{topalidou16}: Failed attempt to reproduce a result.
+%% \item \citeappendix{hutton16} reproducibility in hydrology, criticized in \citeappendix{melson17}.
+%% \item \citeappendix{fomel09}: Editorial on reproducible research.
+%% \item \citeappendix{munafo17}: Reproducibility in social sciences.
+%% \item \citeappendix{stodden18}: Effectiveness of journal policy on computational reproducibility.
+%% \item \citeappendix{fanelli18} is critical of the narrative that there is a ``reproducibility crisis'', and argues that it is important to empower scientists.
+%% \item \citeappendix{burrell18} open software (in particular Python) in heliophysics.
+%% \item \citeappendix{allen18} show that many papers do not cite software.
+%% \item \citeappendix{zhang18} explicitly say that they will not release their code: ``We opt not to make the code used for the chemical evolution modeling publicly available because it is an important asset of the researchers’ toolkits''
+%% \item \citeappendix{jones19} make a genuine effort at reproducing every number in the paper (using Docker, Conda, CGAT-core, and Binder), but they can ultimately only release scripts. They claim that this level of reproducibility is not possible, but here we show it is.
+%% \item LSST uses Kubernetes and docker for reproducibility \citeappendix{banek19}.
+%% \item Interesting survey/paper on the importance of coding in science \citeappendix{merali10}.
+%% \item Discuss the Provenance challenge \citeappendix{moreau08}, showing the importance of metadata and provenance tracking.
+%% Especially that it is organized by the medical scientists.
+%% Its webpage (for latest challenge) has a nice intro: \url{https://www.cccinnovationcenter.com/challenges/provenance-challenge}.
+%% \item In discussion: The XML provenance system is very interesting, scripts can be written to parse the Makefiles within this template to generate such XML outputs for easy standard metadata parsing.
+%% The XML that contains a log of the outputs is also interesting.
+%% \item \citeappendix{becker17} Discuss reproducibility methods in R.
+%% \item Elsevier Executable Paper Grand Challenge\footnote{\url{https://shar.es/a3dgl2}} \citeappendix{gabriel11}.
+%% \item \citeappendix{menke20} show how software identifiability has seen the best improvement, so there is hope!
+%% \item Nature's collection on papers about reproducibility: \url{https://www.nature.com/collections/prbfkwmwvz}.
+%% \item Nice links for applying FAIR principles in research software: \url{https://www.rd-alliance.org/group/software-source-code-ig/wiki/fair4software-reading-materials}
+%% \item Jupyter Notebooks and problems with reproducibility: \citeappendix{rule18} and \citeappendix{pimentel19}.
+%% \item Reproducibility certification \url{https://www.cascad.tech}.
+%% \item \url{https://plato.stanford.edu/entries/scientific-reproducibility}.
+%% \item
+%%Modern analysis tools are almost entirely implemented as software packages.
+%%This has led many scientists to adopt solutions that software developers use for reproducing software (for example, to fix bugs or avoid security issues).
+%%These tools and how they are used are thoroughly reviewed in Appendices \ref{appendix:existingtools} and \ref{appendix:existingsolutions}.
+%%However, the problem of reproducibility in the sciences is more complicated and subtle than that of software engineering.
+%%This difference can be broken up into the following categories, which are described more fully below:
+%%1) Reading vs. executing, 2) Archiving how software is used and 3) Citation of the software/methods used for scientific credit.
+%%
+%%The first difference is because in the sciences, reproducibility is not merely a problem of re-running a research project (where a binary blob like a container or virtual machine is sufficient).
+%%For a scientist it is more important to read/study a method of a paper that is 1, 10, or 100 years old.
+%%The hardware to execute the code may have become obsolete, or it may require too much processing power, storage, or time for another random scientist to execute.
+%%Another scientist just needs to be assured that the commands they are reading are exactly those that were (and can potentially be) executed.
+%%
+%%On the second point, scientists are devoting a smaller fraction of their papers to the technical aspects of the work because these are increasingly done by pre-written software programs and libraries.
+%%Therefore, scientific papers are no longer a complete repository for preserving and archiving very important aspects of the scientific endeavor and hard gained experience.
+%%Attempts such as Software Heritage\footnote{\url{https://www.softwareheritage.org}} \citeappendix{dicosmo18} do a wonderful job at long-term preservation and archival of the software source code.
+%%However, preservation of the software's raw code is only part of the process; it is also critically important to preserve how the software was used: with what configuration or run-time options, for what kinds of problems, in conjunction with which other software tools, etc.
+%%
+%%The third major difference is scientific credit, which is measured in units of citations, not dollars.
+%%As described above, scientific software plays an increasingly important role in modern science.
+%%Because of the domain-specific knowledge necessary to produce such software, it is mostly written by scientists for scientists.
+%%Therefore a significant amount of effort and research funding has gone into producing scientific software.
+%%At least for the software that does have an accompanying paper, it is thus important that those papers be cited when the software is used.
+%%\end{itemize}
diff --git a/peer-review/1-answer.txt b/peer-review/1-answer.txt
new file mode 100644
index 0000000..dd7f272
--- /dev/null
+++ b/peer-review/1-answer.txt
@@ -0,0 +1,1216 @@
+Dear CiSE editors,
+
+Thank you very much for the very complete and useful referee reports. They
+have been fully implemented in this submission and have significantly
+improved the quality and clarity of the paper.
+
+Below all the points raised by the Editor in Chief (EiC), Associate editor,
+and the 5 referees (in the same order as the review process report) are
+addressed individually as a numbered list.
+
+Sincerely yours,
+Dr. Mohammad Akhlaghi [on behalf of the co-authors]
+Instituto de Astrofísica de Canarias, Tenerife, Spain.
+
+------------------------------
+
+
+
+
+
+1. [EiC] Some reviewers request additions, and overview of other
+ tools.
+
+ANSWER: Indeed, there is already a large body of previous work in this
+field, and we had learnt a lot from them during the creation of the
+criteria and the proof of concept tool (Maneage). Before submitting the
+paper, we had already done a very comprehensive review of the tools (as you
+may notice from the Git repository[1], where most of the tools were run and
+practically tested). However, the CiSE Author Information explicitly
+states: "The introduction should provide a modicum of background in one or
+two paragraphs, but should not attempt to give a literature review". This
+is the usual practice in previously published papers at CiSE and is in line
+with the maximum 6250 word-count and maximum of 12 references to be used in
+bibliography.
+
+We already discussed this point privately with you and we agreed upon the
+following solution: the extended reviews will be submitted as supplementary
+material, to accompany the paper as "Web extras". These appendices are also
+mentioned in the submitted paper so that any interested CiSE reader can
+easily be informed of their existence from the paper and access them.
+
+Appendix A is focused on the low-level "tools" that are commonly used in
+the reproducible workflow solutions (including Maneage). In Appendix B, we
+touch upon +25 reproducible solutions and compare them directly with our
+criteria. In particular, we also review tools that have been abandoned or
+discontinued and use the criteria to justify why this happened.
+
+[1] https://gitlab.com/makhlaghi/maneage-paper/-/blob/master/tex/src/paper-long.tex#L1579
+[2] https://arxiv.org/abs/2006.03018
+[3] https://doi.org/10.5281/zenodo.3872247
+
+------------------------------
+
+
+
+
+
+2. [Associate Editor] There are general concerns about the paper
+ lacking focus
+
+ANSWER: Thanks to all the corrections/clarifications that have been done in
+this review, the paper is much more focused and direct to the point. We are
+very grateful to the thorough listing of points by the referees that helped
+clarify points that we needed to improve.
+
+------------------------------
+
+
+
+
+
+3. [Associate Editor] Some terminology is not well-defined
+ (e.g. longevity).
+
+ANSWER: In this revision, "Reproducibility", "Longevity" and "Usage" have
+been explicitly defined in the first paragraph of Section II. With this
+definition, the main argument of the paper has become much more clear.
+Thank you (and the referees) for highlighting this.
+
+------------------------------
+
+
+
+
+
+4. [Associate Editor] The discussion of tools could benefit from some
+ categorization to characterize their longevity.
+
+ANSWER: The approximate longevity of the various tools reviewed in Section
+II is now mentioned immediately after each and highlighted in green. For
+example, we have added this after containers "(their longevity is determined
+by the host kernel, typically a decade)".
+
+------------------------------
+
+
+
+
+
+5. [Associate Editor] Background and related efforts need significant
+ improvement. (See below.)
+
+ANSWER: This has been done, as mentioned in (1.) above.
+
+------------------------------
+
+
+
+
+
+6. [Associate Editor] There is consistency among the reviews that
+ related work is particularly lacking.
+
+ANSWER: This has been done, as mentioned in (1.) above.
+
+------------------------------
+
+
+
+
+
+7. [Associate Editor] The current work needs to do a better job of
+ explaining how it deals with the nagging problem of running on CPU
+ vs. different architectures.
+
+ANSWER: The CPU architecture of the running system is now precisely
+reported in the "Acknowledgments" section (highlighted in green). Also, a
+description of dependency on hardware architecture, and how Maneage reports
+this, is also added in the "Proof of concept: Maneage" Section.
+
+------------------------------
+
+
+
+
+
+8. [Associate Editor] At least one review commented on the need to
+ include a discussion of continuous integration (CI) and its
+ potential to help identify problems running on different
+ architectures. Is CI employed in any way in the work presented in
+ this article?
+
+ANSWER: CI has been added in the "Discussion" section as one solution to
+find breaking points in operating system updates and new/different
+architectures. For the core Maneage branch, we have defined task #15741 [1]
+to add CI on many architectures in the near future.
+
+[1] http://savannah.nongnu.org/task/?15741
+
+------------------------------
+
+
+
+
+
+9. [Associate Editor] The presentation of the Maneage tool is both
+ lacking in clarity and consistency with the public
+ information/documentation about the tool. While our review focus
+ is on the article, it is important that readers not be confused
+ when they visit your site to use your tools.
+
+ANSWER: Thank you for raising this important point. We have broken down the
+very long "About" page into multiple pages to help in readability:
+
+https://maneage.org/about.html
+
+Generally, the webpage will soon undergo major improvements to be even more
+clear (as part of our RDA grant for Maneage, we have promised a clear and
+friendly webpage after the paper is published). The website is developed on a
+public git repository (https://git.maneage.org/webpage.git), so any
+specific proposals for improvements can be handled efficiently and
+transparently and we welcome any feedback in this aspect.
+
+------------------------------
+
+
+
+
+
+10. [Associate Editor] A significant question raised by one review is
+ how this work compares to "executable" papers and Jupyter
+ notebooks. Does this work embody similar/same design principles
+ or expand upon the established alternatives? In any event, a
+ discussion of this should be included in background/motivation and
+ related work to help readers understand the clear need for a new
+ approach, if this is being presented as new/novel.
+
+ANSWER: Thank you for highlighting this important point. We saw that it is
+necessary to compare and contrast our Maneage proof-of-concept
+demonstration more directly against the Jupyter notebook type of
+approach. Two paragraphs have been added in Sections II and IV to clarify
+this (our criteria require and build in more modularity and longevity than
+Jupyter). A much more extensive comparison and review is now also available
+in Appendix A.
+
+
+------------------------------
+
+
+
+
+
+11. [Reviewer 1] Adding an explicit list of contributions would make
+ it easier to the reader to appreciate these. These are not
+ mentioned/cited and are highly relevant to this paper (in no
+ particular order):
+ 1. Git flows, both in general and in particular for research.
+ 2. Provenance work, in general and with git in particular
+ 3. Reprozip: https://www.reprozip.org/
+ 4. OCCAM: https://occam.cs.pitt.edu/
+ 5. Popper: http://getpopper.io/
+ 6. Whole Tale: https://wholetale.org/
+ 7. Snakemake: https://github.com/snakemake/snakemake
+ 8. CWL https://www.commonwl.org/ and WDL https://openwdl.org/
+ 9. Nextflow: https://www.nextflow.io/
+ 10. Sumatra: https://pythonhosted.org/Sumatra/
+ 11. Podman: https://podman.io
+ 12. AppImage (https://appimage.org/)
+ 13. Flatpack (https://flatpak.org/)
+ 14. Snap (https://snapcraft.io/)
+ 15. nbdev https://github.com/fastai/nbdev and jupytext
+ 16. Bazel: https://bazel.build/
+ 17. Debian reproducible builds: https://wiki.debian.org/ReproducibleBuilds
+
+ANSWER:
+
+1. In Section IV, we have added that "Generally, any git flow (branching
+ strategies) can be used by the high-level project authors or future
+ readers."
+2. We have mentioned research objects as one mode of provenance tracking
+ and the related provenance work that has already been done and can be
+ exploited using these criteria and our proof of concept is indeed very
+ large. However, the 6250 word-count limit is very tight and if we add
+ more on it in this length, we would have to remove points of higher priority.
+ Hopefully this can be the subject of a follow-up paper.
+3. A review of ReproZip is in Appendix B.
+4. A review of Occam is in Appendix B.
+5. A review of Popper is in Appendix B.
+6. A review of Whole Tale is in Appendix B.
+7. A review of Snakemake is in Appendix A.
+8. CWL and WDL are described in Appendix A (Job management).
+9. Nextflow is described in Appendix A (Job management).
+10. Sumatra is described in Appendix B.
+11. Podman is mentioned in Appendix A (Containers).
+12. AppImage is mentioned in Appendix A (Package management).
+13. Flatpak is mentioned in Appendix A (Package management).
+14. Snap is mentioned in Appendix A (Package management).
+15. nbdev and jupytext are high-level tools to generate documentation and
+ packaging custom code in Conda or pypi. High-level package managers
+ like Conda and Pypi have already been thoroughly reviewed in Appendix A
+ for their longevity issues, so we feel that there is no need to
+ include these.
+16. Bazel is mentioned in Appendix A (job management).
+17. Debian's reproducible builds are only designed for ensuring that software
+ packaged for Debian is bitwise reproducible. As mentioned in the
+ discussion section of this paper, the bitwise reproducibility of software is
+ not an issue in the context discussed here; the reproducibility of the
+ relevant output data of the software is the main issue.
+
+------------------------------
+
+
+
+
+
+12. [Reviewer 1] Existing guidelines similar to the proposed "Criteria
+ for longevity". Many articles of these in the form "10 simple
+ rules for X", for example (not exhaustive list):
+ * https://doi.org/10.1371/journal.pcbi.1003285
+ * https://arxiv.org/abs/1810.08055
+ * https://osf.io/fsd7t/
+ * A model project for reproducible papers: https://arxiv.org/abs/1401.2000
+ * Executable/reproducible paper articles and original concepts
+
+ANSWER: Thank you for highlighting these points. Appendix B starts with a
+subsection titled "suggested rules, checklists or criteria". In this
+section, we review the existing sets of criteria. This subsection includes
+the sources proposed by the reviewer [Sandve et al; Rule et al; Nust et al]
+(and others).
+
+ArXiv:1401.2000 has been added in Appendix A as an example paper using
+virtual machines. We thank the referee for bringing up this paper, because
+the link to the VM provided in the paper no longer works (the URL
+http://archive.comp-phys.org/provenance_challenge/provenance_machine.ova
+redirects to
+https://share.phys.ethz.ch//~alpsprovenance_challenge/provenance_machine.ova
+which gives a 'Not Found' html response). Together with SHARE, this very
+nicely highlights our main issue with binary containers or VMs: their lack
+of longevity due to the high cost of long-term storage of large files.
+
+------------------------------
+
+
+
+
+
+13. [Reviewer 1] Several claims in the manuscript are not properly
+ justified, neither in the text nor via citation. Examples (not
+ exhaustive list):
+ 1. "it is possible to precisely identify the Docker “images” that
+ are imported with their checksums, but that is rarely practiced
+ in most solutions that we have surveyed [which ones?]"
+ 2. "Other OSes [which ones?] have similar issues because pre-built
+ binary files are large and expensive to maintain and archive."
+ 3. "Researchers using free software tools have also already had
+ some exposure to it"
+ 4. "A popular framework typically falls out of fashion and
+ requires significant resources to translate or rewrite every
+ few years."
+
+ANSWER: These points have been clarified in the highlighted parts of the text:
+
+1. Many examples have been given throughout the newly added
+ appendices. To avoid confusion in the main body of the paper, we
+ have removed the "we have surveyed" part. It is already mentioned
+ above this point in the text that a large survey of existing
+ methods/solutions is given in the appendices.
+
+2. Due to the thorough discussion of this issue in the appendices with
+ precise examples, this line has been removed to allow space for the
+ other points raised by the referees. The main point (high cost of
+ keeping binaries) is already abundantly clear.
+
+ On a similar topic, Dockerhub's recent announcement that inactive images
+   (for over 6 months) will be deleted has also been added. The announcement
+   URL is a hyperlink in the text (it was too long to print directly; if
+   IEEE has a special short-url format, we can add it).
+
+   Another interesting news item in relation to longevity has also been added
+ here: the decision by CentOS to abandon CentOS 8 next year. Again, the
+ URL is within a hyperlink on the text. Many scientific and industrial
+ projects have relied on CentOS for longevity over the last two decades,
+ but that didn't stop its creators from abandoning it 8 years early and
+ completely switching its release paradigm.
+
+3. A small statement has been added, reminding the readers that almost all
+ free software projects are built with Make (CMake is also used
+ sometimes, but CMake is just a high-level wrapper over Make: it finally
+ produces a 'Makefile'; practical usage of CMake generally obliges the
+ user to understand Make).
+
+4. The example of Python 2 has been added to clarify this point.
+
+
+------------------------------
+
+
+
+
+
+14. [Reviewer 1] As mentioned in the discussion by the authors, not
+ even Bash, Git or Make is reproducible, thus not even Maneage can
+ address the longevity requirements. One possible alternative is
+ the use of CI to ensure that papers are re-executable (several
+ papers have been written on this topic). Note that CI is
+ well-established technology (e.g. Jenkins is almost 10 years old).
+
+ANSWER: Thank you for raising these issues. We had initially planned to
+discuss CIs, but like many discussion points, we were forced to remove it
+before the first submission due to the very tight word-count limit. We have
+now added a sentence on CI in the discussion.
+
+On the issue of Bash/Git/Make, indeed, the built executable files of Bash,
+Git and Make are not bitwise reproducible/identical on different
+systems. However, as mentioned in the discussion, we are concerned with the
+_output_ of the software's executable file. We are not interested in the
+executable file itself (which should be different for different OSs or CPU
+architectures).
+
+The reproducibility of a binary file only becomes important for security
+purposes where binaries are downloaded. In Maneage, we download the
+software source code tarball, confirm the tarball's SHA512 checksum with
+the checksum that is recorded in Maneage [1], and build the software with
+precisely defined build environment and dependencies.
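+
+For example, the checksum verification step is conceptually equivalent to
+the following (with a hypothetical tarball name):
+
+  $ sha512sum gcc-10.2.0.tar.xz   # compare against checksums.conf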
+
+In summary, even though the compiled binary files of specific versions of
+Git, Bash or Make will not be bitwise reproducible/identical on different
+systems, their scientific outputs are exactly reproducible: 'git describe'
+or Bash's 'for' loop will have the same output on GNU/Linux, macOS/Darwin
+or FreeBSD (despite having bitwise different executables).
+
+[1] http://git.maneage.org/project.git/tree/reproduce/software/config/checksums.conf
+
+------------------------------
+
+
+
+
+
+15. [Reviewer 1] Criterion has been proposed previously. Maneage itself
+ provides little novelty (see comments below).
+
+ANSWER: The previously suggested sets of criteria that were listed by
+Reviewer 1 are reviewed by us in the newly added Appendix B, and the
+novelty and advantages of our proposed criteria are contrasted there
+with the earlier sets of criteria.
+
+------------------------------
+
+
+
+
+
+16. [Reviewer 2] Authors should add indication that using good practices it
+ is possible to use Docker or VM to obtain identical OS usable for
+ reproducible research.
+
+ANSWER: In the submitted version we had stated that "Ideally, it is
+possible to precisely identify the Docker images that are imported with
+their checksums ...". But to be more clear and go directly to the point, it
+has been edited to explicitly say "... to recreate an identical OS image
+later".
+
+------------------------------
+
+
+
+
+
+17. [Reviewer 2] The CPU architecture of the platform used to run the
+ workflow is not discussed in the manuscript. Authors should probably
+ take into account the architecture used in their workflow or at least
+ report it.
+
+ANSWER: Thank you very much for raising this important point. We hadn't
+seen other reproducibility papers mention this important point and thus
+missed it. In the acknowledgments (where we also mention the commit hashes)
+we now explicitly mention the exact CPU architecture used to build this
+paper: "This project was built on an x86_64 machine with Little Endian
+byte-order and address sizes 39 bits physical, 48 bits virtual.". This is
+because we have already seen cases where the architecture is the same, but
+programs fail because of the byte order.
+
+Generally, Maneage will now extract this information from the running
+system during its configuration phase, and provide the users with three
+different LaTeX macros that contain this information. Users can use these
+LaTeX macros anywhere in their paper.
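+
+For illustration, on our machine these three macros are defined along the
+lines of:
+
+  \newcommand{\machinearchitecture}{x86_64}
+  \newcommand{\machinebyteorder}{Little Endian}
+  \newcommand{\machineaddresssizes}{39 bits physical, 48 bits virtual}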
+
+------------------------------
+
+
+
+
+
+18. [Reviewer 2] I don’t understand the "no dependency beyond
+ POSIX". Authors should more explained what they mean by this sentence.
+
+ANSWER: This has been clarified with the short extra statement "a minimal
+Unix-like standard that is shared between many operating systems". Also in
+the appendix we now say "no execution requirement beyond a minimal
+Unix-like operating system".
+
+We would have liked to explain this more, but the word limit is very
+constraining. It is clearer in the appendices, and we will put clearer
+explanations on the webpage.
+
+------------------------------
+
+
+
+
+
+19. [Reviewer 2] Unfortunately, sometime we need proprietary or specialized
+ software to read raw data... For example in genetics, micro-array raw
+ data are stored in binary proprietary formats. To convert this data
+ into a plain text format, we need the proprietary software provided
+ with the measurement tool.
+
+ANSWER: Thank you very much for this good point. A description of a
+possible solution to this has been added after criterion 8.
+
+------------------------------
+
+
+
+
+
+20. [Reviewer 2] I was not able to properly set up a project with
+ Maneage. The configuration step failed during the download of tools
+ used in the workflow. This is probably due to a firewall/antivirus
+    restriction out of my control. How frequently does this failure
+    happen to users?
+
+ANSWER: Thank you for mentioning this. This has been fixed by archiving
+all Maneage'd software on Zenodo (https://doi.org/10.5281/zenodo.3883409)
+and downloading from there with the highest precedence.
+
+Until recently we would directly access each software's own webpage to
+download the source files, and this caused frequent problems of the type
+you mention (different servers in different ISPs/states/countries can
+behave differently). In other cases, we were very frustrated when a
+software's webpage was temporarily unavailable (e.g., for maintenance
+reasons); this was a major hindrance in building new projects.
+
+Since all the software is free-licensed, we are legally allowed to
+redistribute it (within the license conditions, such as not removing
+copyright notices), and Zenodo is designed for the long-term archival of
+academic digital objects, so we decided that a software source-code
+repository on Zenodo would be the most reliable solution. At configure
+time, Maneage now accesses Zenodo's DOI, resolves the most recent URL,
+and automatically downloads any software source code that the project
+needs from there.
+
+Generally, we also keep all software in a Git repository on our own
+webpage: http://git.maneage.org/tarballs-software.git/tree. Maneage users
+can also specify their own custom URLs for downloading software, which
+are given higher priority than Zenodo (useful when custom software is
+downloaded and built in a project branch, not the core 'maneage' branch).
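+
+Schematically (a simplified sketch, not the actual Maneage code; the
+variable names are illustrative), the download step now behaves like:
+
+    # Try each mirror in order of precedence; accept the tarball only if
+    # its SHA-256 checksum matches the one recorded in the project.
+    for url in "$custom_url" "$zenodo_url" "$maneage_url"; do
+        if wget -O "$tarball" "$url/$tarball" && \
+           echo "$known_checksum  $tarball" | sha256sum -c -; then
+            break
+        fi
+    done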
+
+------------------------------
+
+
+
+
+
+21. [Reviewer 2] The time to configure a new project is quite long because
+    everything needs to be compiled. Authors should compare the time
+    required to set up a project with Maneage versus the time used by
+    other workflows, to give an indication to the readers.
+
+ANSWER: Thank you for raising this point. It takes about 1.5 hours to
+configure the default Maneage branch on an 8-core CPU (more than half of
+this time is devoted to GCC on GNU/Linux operating systems; building GCC
+can optionally be disabled with the '--host-cc' option to significantly
+speed up the build when the host's GCC is similar). Furthermore, Maneage
+can be built within a Docker container.
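+
+Concretely (the '--host-cc' option is the one mentioned above; timings
+will of course differ between machines), the faster build is requested as:
+
+    # Use the host's C compiler instead of building GCC from source:
+    ./project configure --host-cc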
+
+A paragraph has been added in Section IV on this issue (the build time
+and building within a Docker container). We have also defined task #15818
+[1] to provide our own core Docker image that is ready to build a
+Maneage'd project, and we will be adding it shortly.
+
+[1] https://savannah.nongnu.org/task/index.php?15818
+
+------------------------------
+
+
+
+
+
+22. [Reviewer 3] Authors should define their use of the term [Replicability
+ or Reproducibility] briefly for their readers.
+
+ANSWER: "Reproducibility" has been defined along with "Longevity" and
+"usage" at the start of Section II.
+
+------------------------------
+
+
+
+
+
+23. [Reviewer 3] The introduction is consistent with the proposal of the
+ article, but deals with the tools separately, many of which can be used
+ together to minimize some of the problems presented. The use of
+ Ansible, Helm, among others, also helps in minimizing problems.
+
+ANSWER: That is correct. In the new appendices we have touched upon this,
+especially in Appendix B where we discuss the technologies used by various
+reproducible workflow solutions.
+
+Regarding Ansible and Helm: they are primarily designed for distributed
+computing. For example, Helm is just a high-level package manager for a
+Kubernetes cluster that is based on containers. A review of them could be
+added to the Appendices, but we feel this would distract somewhat from
+the main points of our current paper.
+
+------------------------------
+
+
+
+
+
+24. [Reviewer 3] When the authors use the Python example, I believe it is
+ interesting to point out that today version 2 has been discontinued by
+ the maintaining community, which creates another problem within the
+ perspective of the article.
+
+ANSWER: Thank you very much for highlighting this point. We had excluded
+this point for the sake of article length, but we have restored it in
+the introduction of the revised version.
+
+------------------------------
+
+
+
+
+
+25. [Reviewer 3] Regarding the use of VM's and containers, I believe that
+ the discussion presented by THAIN et al., 2015 is interesting to
+ increase essential points of the current work.
+
+ANSWER: Thank you very much for pointing out the works by Thain. We
+couldn't find any first-author papers in 2015, but found Meng & Thain
+(https://doi.org/10.1016/j.procs.2017.05.116) which had a relevant
+discussion of why they didn't use Docker containers in their work. That
+paper is now cited in the discussion of Containers in Appendix A.
+
+------------------------------
+
+
+
+
+
+26. [Reviewer 3] About the Singularity, the description article was missing
+ (Kurtzer GM, Sochat V, Bauer MW, 2017).
+
+ANSWER: Thank you for the reference. We are restricted in the main body
+of the paper due to the strict bibliography limit of 12 references; we
+have cited Kurtzer et al. (2017) in Appendix A (where we discuss
+Singularity).
+
+------------------------------
+
+
+
+
+
+27. [Reviewer 3] I also believe that a reference to FAIR is interesting
+ (WILKINSON et al., 2016).
+
+ANSWER: The FAIR principles have been mentioned in the main body of the
+paper, but unfortunately we had to remove its citation in the main paper (like
+many others) to keep to the maximum of 12 references. We have cited it in
+Appendix B.
+
+------------------------------
+
+
+
+
+
+28. [Reviewer 3] In my opinion, the paragraph on IPOL seems to be out of
+ context with the previous ones. This issue of end-to-end
+ reproducibility of a publication could be better explored, which would
+ further enrich the tool presented.
+
+
+ANSWER: We agree and have removed the IPOL example from that section. We
+have included an in-depth discussion of IPOL in Appendix B and we comment
+on how Maneage'd projects offer a similar level of peer-review control.
+
+------------------------------
+
+
+
+
+
+29. [Reviewer 3] On the project website, I suggest that the information
+ contained in README-hacking be presented on the same page as the
+ Tutorial. A topic breakdown is interesting, as the markdown reading may
+ be too long to find information.
+
+ANSWER: Thank you very much for this good suggestion; it has been
+implemented: https://maneage.org/about.html. The webpage will be
+continuously improved, and such feedback is always very welcome.
+
+------------------------------
+
+
+
+
+
+31. [Reviewer 3] The tool is suitable for Unix users, keeping users away
+ from Microsoft environments.
+
+ANSWER: The issue of building on Windows has been discussed in Section IV,
+either using Docker (or VMs) or using the Windows Subsystem for Linux.
+
+------------------------------
+
+
+
+
+32. [Reviewer 3] Important references are missing; more references are
+ needed
+
+ANSWER: Two comprehensive Appendices have been added to address this issue.
+
+------------------------------
+
+
+
+
+
+33. [Reviewer 4] Revisit the criteria, show how you have come to decide on
+ them, give some examples of why they are important, and address
+ potential missing criteria.
+
+ANSWER: In the new Appendix B, we have added a section reviewing some
+existing criteria. We would be very interested to discuss them further in
+the main body but, within the constraints of space (the limit is 6250
+words), it is almost impossible to discuss the history of each in detail
+or to add more anecdotal examples of their relevance.
+
+------------------------------
+
+
+
+
+
+34. [Reviewer 4] Clarify the discussion of challenges to adoption and make
+ it clearer which tradeoffs are important to practitioners.
+
+ANSWER: We discuss many of these challenges and caveats in the Discussion
+Section (V), within the existing word limit.
+
+------------------------------
+
+
+
+
+
+35. [Reviewer 4] Be clearer about which sorts of research workflow are best
+ suited to this approach.
+
+ANSWER: Maneage is flexible enough to enable a wide range of workflows to
+be implemented, by leveraging the highly modular nature of Makefiles run
+via 'Make'.
+
+GUI-based operations (those involving human interaction, which cannot be
+run in batch mode) are one type of workflow that our proof-of-concept
+does not support. But as discussed under the completeness criterion,
+human interaction is itself a form of incompleteness, dramatically
+reducing the reproducibility of a result.
+
+------------------------------
+
+
+
+
+
+36. [Reviewer 4] There is also the challenge of mathematical
+ reproducibility, particularly of the handling of floating point number,
+ which might occur because of the way the code is written, and the
+ hardware architecture (including if code is optimised / parallelised).
+
+ANSWER: Floating point errors and optimizations have been mentioned in the
+discussion (Section V). The issue with parallelization has also been
+discussed in Section IV, in the part on verification ("Where exact
+reproducibility is not possible (for example due to parallelization),
+values can be verified by a statistical method specified by the project
+authors."). We have linked keywords in the latter sentence to a Software
+Heritage URI [1] with the specific file in a Maneage'd paper that
+illustrates an example of how statistical verification of parallelised code
+can work in practice (Peper & Roukema 2020; zenodo.4062460).
+
+We would be interested to hear if any other papers already exist that use
+automatic statistical verification of parallelised code as has been done in
+this Maneage'd paper.
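+
+As a minimal illustration of the idea (a sketch only, not the script
+linked in [1]; the file name, reference value and tolerance are invented
+for this example), such a statistical check can be as simple as:
+
+    # Pass only if the mean of column 1 lies within a fixed tolerance
+    # of the reference value from the original run:
+    awk 'BEGIN { ref=1.234; tol=0.005 }
+         { sum += $1; n++ }
+         END { mean = sum/n; exit !(mean > ref-tol && mean < ref+tol) }' \
+        parallel-output.txt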
+
+[1] https://archive.softwareheritage.org/browse/origin/content/?branch=refs/heads/postreferee_corrections&origin_url=https://codeberg.org/boud/elaphrocentre.git&path=reproduce/analysis/bash/verify-parameter-statistically.sh
+
+------------------------------
+
+
+
+
+
+37. [Reviewer 4] ... the handling of floating point number
+[reproducibility] ... will come with a tradeoff against performance, which
+is never mentioned.
+
+ANSWER: The criteria we propose, and the proof-of-concept with Maneage, do
+not force a tradeoff between exact bitwise floating-point reproducibility
+and performance (e.g., speed). The specific concepts of "verification" and
+"reproducibility" will vary between domains of scientific computation, but
+we expect the criteria to accommodate this wide range.
+
+Performance is indeed an important issue for _immediate_ reproducibility
+and we would have liked to discuss it. But given the strict word count, we
+feel that adding it to the discussion points without adequate space to
+elaborate could distract readers from the focus of this paper (long-term
+usability). It has therefore not been added.
+
+------------------------------
+
+
+
+
+
+38. [Reviewer 4] Tradeoff, which might affect Criterion 3 is time to result,
+ people use popular frameworks because it is easier to use them.
+
+ANSWER: That is true. In Section IV, we state that building Maneage (done
+only once on each computer) takes around 1.5 hours on an 8-core CPU (a
+typical machine that may be used for data analysis). When the analysis is
+complex (and thus takes many hours, or even days, to complete), this time
+is negligible.
+
+But if the project's full analysis takes 10 minutes or less (like the
+extremely simple analysis done in this paper), then the 1.5-hour build
+time is indeed significant. In those cases, as discussed in the main
+body, the project can be built once in a Docker image and easily moved to
+other computers.
+
+Generally, it is true that the initial configuration time (spent only once
+per computer) of a Maneage install may discourage some scientists; but a
+serious scientific research project is never started and completed on a
+time scale of a few hours.
+
+------------------------------
+
+
+
+
+
+39. [Reviewer 4] I would liked to have seen explanation of how these
+ challenges to adoption were identified: was this anecdotal, through
+ surveys? participant observation?
+
+ANSWER: The results mentioned here are anecdotal: they are based on
+private discussions after holding multiple seminars and webinars with
+RDA's support, and also on a workshop that was planned for
+non-astronomers. With the RDA funding, we invited early-career
+researchers to the workshop. However, that workshop was cancelled due to
+the COVID-19 pandemic and we had private communications instead.
+
+We would very much like to elaborate on this experience of training new
+researchers with these tools. However, as with many of the cases above, the
+very strict word-limit doesn't allow us to elaborate beyond what we have
+already written. Hopefully in a couple of years and with the wider usage of
+Maneage or these criteria in research papers, we will be able to write a
+paper that is directly focused on this.
+
+------------------------------
+
+
+
+
+
+40. [Reviewer 4] Potentially an interesting sidebar to investigate how
+ LaTeX/TeX has ensured its longevity!
+
+ANSWER: That is indeed a very interesting subject to study (an obvious link
+is that LaTeX/TeX is very strongly based on plain-text files). We have been
+in touch with Karl Berry (one of the core people behind TeX Live, who also
+plays a prominent role in GNU) and have witnessed the TeX Live community's
+efforts to become ever more portable and longer-lived.
+
+However, as the reviewer states, this would be a sidebar, and we are
+constrained for space, so we couldn't find a place to highlight this. But
+it is indeed a subject worthy of a full paper (one that could be very
+useful for many software projects).
+
+------------------------------
+
+
+
+
+
+41. [Reviewer 4] The title is not specific enough - it should refer to the
+ reproducibility of workflows/projects.
+
+ANSWER: A problem here is that "workflow" and "project", taken in
+isolation, risk being vague for wider audiences. Also, we aim to cover a
+wider range of aspects of a project than the workflow alone; in the other
+direction, the word "project" could be seen as too broad, including the
+funding, principal investigator, and team coordination.
+
+A specific title that might be appropriate could be, for example, "Towards
+long-term and archivable reproducibility of scientific computational
+research projects". Using a term proposed by one of our reviewers, "Towards
+long-term and archivable end-to-end reproducibility of scientific
+computational research projects" might also be appropriate.
+
+Nevertheless, we feel that in the context of an article published in CiSE,
+our current short title is sufficient.
+
+------------------------------
+
+
+
+
+
+42. [Reviewer 4] Whilst the thesis stated is valid, it may not be useful to
+ practitioners of computation science and engineering as it stands.
+
+ANSWER: This point appears to refer to floating-point bitwise
+reproducibility and possibly to the conciseness of our paper. The former
+is fully allowed for (though not obligatory), as stated above: the
+"verify.mk" rule file can be used to (typically, but not necessarily)
+force bitwise reproducibility. The latter is constrained by the
+6250-word limit of CiSE; the supplementary appendices added in the
+extended version help respond to it.
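+
+For instance (a simplified sketch of the kind of check 'verify.mk' can
+implement; the file name and '$known_checksum' are placeholders), bitwise
+verification reduces to a checksum comparison:
+
+    # Fail loudly if the output is not bitwise identical to the
+    # originally published result:
+    echo "$known_checksum  out/table-3.txt" | sha256sum -c - || \
+        { echo "Verification of table-3.txt failed"; exit 1; }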
+
+------------------------------
+
+
+
+
+
+43. [Reviewer 4] Longevity is not defined.
+
+ANSWER: This has been defined now at the start of Section II.
+
+------------------------------
+
+
+
+
+
+44. [Reviewer 4] Whilst various tools are discussed and discarded, no
+ attempt is made to categorise the magnitude of longevity for which they
+ are relevant. For instance, environment isolators are regarded by the
+ software preservation community as adequate for timescale of the order
+ of years, but may not be suitable for the timescale of decades where
+ porting and emulation are used.
+
+ANSWER: Statements on quantifying the longevity of specific tools have been
+added in Section II and are highlighted in green. For example in the case
+of Docker images: "their longevity is determined by the host kernel,
+usually a decade", for Python packages: "Python installation with a usual
+longevity of a few years", for Nix/Guix: "with considerably better
+longevity; same as supported CPU architectures."
+
+------------------------------
+
+
+
+
+
+45. [Reviewer 4] The title of this section "Commonly used tools and their
+ longevity" is confusing - do you mean the longevity of the tools or the
+ longevity of the workflows that can be produced using these tools?
+ What happens if you use a combination of all four categories of tools?
+
+ANSWER: We have changed the section title to "Longevity of existing tools"
+to clarify that we refer to the longevity of the tools.
+
+If the four categories of tools were combined, the overall longevity
+would be limited to the intersection of the time spans over which the
+individual tools remain viable, i.e., by the shortest-lived tool.
+
+------------------------------
+
+
+
+
+
+46. [Reviewer 4] It wasn't clear to me if code was being run to generate
+ the results and figures in a LaTeX paper that is part of a project in
+ Maneage. It appears to be suggested this is the case, but Figure 1
+ doesn't show how this works - it just has the LaTeX files, the data
+ files and the Makefiles. Is it being suggested that LaTeX itself is the
+ programming language, using its macro functionality?
+
+ANSWER: Thank you for highlighting this point of confusion. The caption of
+Figure 1 has been edited to hopefully clarify the point. In short, the
+arrows represent the operation of software and the boxes represent files.
+In the case of generating 'paper.pdf' from its three dependencies
+('references.tex', 'paper.tex' and 'project.tex'), yes, LaTeX is used. But
+in other steps, other tools are used (depending on the analysis). For
+example, as you can see in [1], the main step of the arrow connecting
+'table-3.txt' to 'tools-per-year.txt' is an AWK command (there are also a
+few 'echo' commands for metadata and copyright in the output plain-text
+file [2]).
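+
+To give a flavor of that step (a sketch only, not the exact command in
+[1]; the column layout is assumed for the example), counting the number
+of tools per year from a plain-text table needs nothing more than:
+
+    # Column 1 is assumed to hold the year; skip comment lines:
+    awk '!/^#/ { n[$1]++ } END { for (y in n) print y, n[y] }' \
+        table-3.txt | sort -n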
+
+[1] https://gitlab.com/makhlaghi/maneage-paper/-/blob/master/reproduce/analysis/make/demo-plot.mk#L51
+[2] https://zenodo.org/record/3911395/files/tools-per-year.txt
+
+------------------------------
+
+
+
+
+
+47. [Reviewer 4] I was a bit confused on how collaboration is handled as
+ well - this appears to be using the Git branching model, and the
+ suggestion that Maneage is keeping track of all components from all
+ projects - but what happens if you are working with collaborators that
+ are using their own Maneage instance?
+
+ANSWER: Indeed, Maneage operates on the Git branching model. As mentioned
+in the text, Maneage is itself a Git branch. Researchers spin off their
+own branch from the 'maneage' branch for a specific project and start
+customizing it in their own repository. They can also use all types of
+Git-based collaboration models to work together on their branch.
+
+Figure 2 in fact explicitly shows such a case: the main project leader is
+committing on the "project" branch. But a collaborator creates a separate
+branch over commit '01dd812' and makes a couple of commits ('f69e1f4' and
+'716b56b'), and finally asks the project leader to merge them into the
+project. This can be generalized to any Git based collaboration model.
+
+Recent experience by one of us [Roukema] is that merging a Maneage-based
+cosmology simulation project (now zenodo.4062460) with the core branch,
+after separate evolution of about 30-40 commits on 'maneage' and possibly
+100 on the project, needed about one day of straightforward effort,
+without any major difficulties. So updating the low-level infrastructure
+is easy.
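+
+In terms of concrete commands (standard Git; the directory and remote
+names are illustrative), a collaborator with their own Maneage instance
+simply treats Maneage as another Git remote:
+
+    # Derive a project from the core branch ...
+    git clone http://git.maneage.org/project.git my-project
+    cd my-project
+    git checkout -b project origin/maneage
+
+    # ... and later merge in low-level Maneage improvements:
+    git fetch origin
+    git merge origin/maneage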
+
+------------------------------
+
+
+
+
+
+48. [Reviewer 4] I would also [have] liked to have seen a comparison
+ between this approach and other "executable" paper approaches
+ e.g. Jupyter notebooks, compared on completeness, time taken to
+ write a "paper", ease of depositing in a repository, and ease of
+ use by another researcher.
+
+ANSWER: This type of sociological survey will make sense once the number
+of projects run with Maneage is sufficiently high and comparable to
+Jupyter, for example. The time taken to write a paper is measurable
+automatically from the Git history. The other parameters suggested would
+require cooperation from the scientists in responding to the survey, or
+will have to be collected anecdotally in the short term. This is a good
+subject for a follow-up paper in a few years.
+
+------------------------------
+
+
+
+
+
+49. [Reviewer 4] The weakest aspect is the assumption that research can be
+ easily compartmentalized into simple and complete packages. Given that
+ so much of research involves collaboration and interaction, this is not
+ sufficiently addressed. In particular, the challenge of
+ interdisciplinary work, where there may not be common languages to
+ describe concepts and there may be different common workflow practices
+ will be a barrier to wider adoption of the primary thesis and criteria.
+
+ANSWER: Maneage was defined precisely to address the problem of
+publishing and collaborating on complete workflows by many people (on
+this paper itself, six of us have been collaborating, as can be seen in
+the Git history). Git has been exceptionally powerful in enabling
+collaboration in huge projects with thousands of contributors, like the
+Linux kernel. Exactly the same collaboration style can be implemented in
+Maneage for large scientific projects.
+
+Hopefully with the clarification to point 47 above, this should also become
+clear.
+
+------------------------------
+
+
+
+
+
+50. [Reviewer 5] Major figures currently working in this exact field do not
+ have their work acknowledged in this work.
+
+ANSWER: This was due to the strict word limit and the CiSE publication
+policy (not to include a literature review, because there is a limit of
+only 12 citations). But we had indeed already done a comprehensive
+literature review, and the editors kindly agreed to let us submit that
+review as supplementary appendices.
+
+------------------------------
+
+
+
+
+
+51. [Reviewer 5] Jimenez I et al ... 2017 "The popper convention: Making
+ reproducible systems evaluation practical ..." and the later
+ revision that uses GitHub Actions, is largely the same as this
+ work.
+
+ANSWER: This work and the proposed criteria are very different from
+Popper. A detailed review of Popper, in particular, is given in Appendix B.
+
+------------------------------
+
+
+
+
+
+52. [Reviewer 5] The lack of attention to virtual machines and containers
+ is highly problematic. While a reader cannot rely on DockerHub or a
+ generic OS version label for a VM or container, these are some of the
+ most promising tools for offering true reproducibility.
+
+ANSWER: Containers and VMs are now more thoroughly discussed in the main
+body and extensively discussed in Appendix A. As discussed (with many
+cited examples), containers and VMs are only appropriate when they are
+themselves reproducible (for example, when running the Dockerfile this
+year and next year gives the same internal environment). However, we show
+that this is not the case in most solutions (a more comprehensive review
+would require its own paper).
+
+Moreover, with complete, robust environment builders like Maneage, Nix or
+GNU Guix, the analysis environment within a container can be exactly
+reproduced later. But even so, due to their binary nature and large
+storage volume, containers and VMs are not reliable sources for the long
+term (archiving them is expensive). We show several examples in the paper
+and appendices of how projects that relied on VMs in 2011 and 2014 are no
+longer active, and how even Docker Hub will delete containers of free
+accounts that are not used for more than 6 months (due to the high
+storage costs).
+
+Furthermore, as a unique new feature, Maneage has the criterion of
+"Minimal complexity". This means that even if, for any reason, the
+project cannot be run in the future, the content, analysis scripts, etc.
+remain accessible to the interested reader as plain text (only the
+development history - the Git history - is stored in Git's binary
+format). Unlike Nix or Guix, our approach doesn't need a third-party
+package manager: the instructions for building all the software of a
+project are directly in the same project as the high-level analysis
+software. The full end-to-end process is transparent and archived in
+Maneage, and the interested scientist can follow the analysis and study
+the different decisions of each step (why and how the analysis was done).
+They can also modify it to work on future hardware that we don't know
+about today (which is not possible with a binary file like a VM or
+container).
+
+------------------------------
+
+
+
+
+
+53. [Reviewer 5] On the data side, containers have the promise to manage
+ data sets and workflows completely [Lofstead J, Baker J, Younge A. Data
+ pallets: containerizing storage for reproducibility and
+    traceability. In: International Conference on High Performance
+    Computing, 2019 Jun 16 (pp. 36-45). Springer, Cham.] Taufer has picked
+    up this work and has graduated an MS student working on this topic with a
+ published thesis. See also Jimenez's P-RECS workshop at HPDC for
+ additional work highly relevant to this paper.
+
+ANSWER: Thank you for the interesting paper by Lofstead+2019 on Data
+pallets. We have cited it in Appendix A as an example of how generic the
+concept of containers is.
+
+The topic of linking data to analysis is also a core result of the
+criteria presented here, and is also discussed briefly in our paper.
+There are indeed many very interesting works on this topic. But the
+format of CiSE is very short (a maximum of 6250 words with 12
+references), so we don't have the space to go into this any further. It
+is indeed a very interesting aspect for follow-up studies, especially as
+the usage of Maneage grows and we have more example workflows by users to
+study the linkage of data and analysis.
+
+------------------------------
+
+
+
+
+
+54. [Reviewer 5] Some other systems that do similar things include:
+ reprozip, occam, whole tale, snakemake.
+
+ANSWER: All these tools have been reviewed in the newly added appendices.
+
+------------------------------
+
+
+
+
+
+55. [Reviewer 5] the paper needs to include the context of the current
+ community development level to be a complete research paper. A revision
+ that includes evaluation of (using the criteria) and comparison with
+ the suggested systems and a related work section that seriously
+ evaluates the work of the recommended authors, among others, would make
+ this paper worthy for publication.
+
+ANSWER: A thorough review of current low-level tools and high-level
+reproducible workflow management systems has been added in the extended
+appendices.
+
+------------------------------
+
+
+
+
+
+
+56. [Reviewer 5] Yet another example of a reproducible workflows project.
+
+ANSWER: As the newly added thorough comparisons with existing systems
+show, this set of criteria and the proof-of-concept offer uniquely new
+features. As another referee summarized: "This manuscript describes a new
+reproducible workflow _which doesn't require another new trendy high-level
+software_. The proposed workflow is only based on low-level tools already
+widely known."
+
+Interestingly, the fact that we don't define yet another workflow language
+and framework is itself what makes our proof-of-concept unique. Another
+unique feature of Maneage is that it is based on time-tested solutions
+(the youngest tool we use is Git, which is already 15 years old) in a
+framework that costs only ~100 kB to archive (in contrast to multi-GB
+containers or VMs).
+
+------------------------------
+
+
+
+
+
+57. [Reviewer 5] There are numerous examples, mostly domain specific, and
+ this one is not the most advanced general solution.
+
+ANSWER: As the comparisons in the appendices and clarifications above show,
+there are many features in the proposed criteria and proof of concept that
+are new and not satisfied by the domain-specific solutions known to us.
+
+------------------------------
+
+
+
+
+
+58. [Reviewer 5] Lack of context in the field missing very relevant work
+ that eliminates much, if not all, of the novelty of this work.
+
+ANSWER: The newly added appendices thoroughly describe the context and
+previous work that has been done in this field.
+
+------------------------------
diff --git a/peer-review/1-review.txt b/peer-review/1-review.txt
new file mode 100644
index 0000000..16e227b
--- /dev/null
+++ b/peer-review/1-review.txt
@@ -0,0 +1,788 @@
+From: cise computer org
+To: mohammad akhlaghi org,
+ infantesainz gmail com,
+ boud astro uni torun pl,
+ david valls-gabaud observatoiredeparis psl eu,
+ rbaena iac es
+Received: Tue, 22 Sep 2020 15:28:21 -0400
+Subject: Computing in Science and Engineering, CiSESI-2020-06-0048
+ major revision required
+
+--------------------------------------------------
+
+Computing in Science and Engineering,CiSESI-2020-06-0048
+"Towards Long-term and Archivable Reproducibility"
+manuscript type: Reproducible Research
+
+Dear Dr. Mohammad Akhlaghi,
+
+The manuscript that you submitted to Computing in Science and Engineering
+has completed the review process. After carefully examining the manuscript
+and reviews, we have decided that the manuscript needs major revisions
+before it can be considered for a second review.
+
+Your revision is due before 22-Oct-2020. Please note that if your paper was
+submitted to a special issue, this due date may be different. Contact the
+peer review administrator, Ms. Jessica Ingle, at cise computer.org if you
+have questions.
+
+The reviewer and editor comments are attached below for your
+reference. Please maintain our 6,250-word limit as you make your revisions.
+
+To upload your revision and summary of changes, log on to
+https://mc.manuscriptcentral.com/cise-cs, click on your Author Center, then
+"Manuscripts with Decisions." Under "Actions," choose "Create a Revision"
+next to the manuscript number.
+
+Highlight the changes to your manuscript by using the track changes mode in
+MS Word, the latexdiff package if using LaTex, or by using bold or colored
+text.
+
+When submitting your revised manuscript, you will need to respond to the
+reviewer comments in the space provided.
+
+If you have questions regarding our policies or procedures, please refer to
+the magazines' Author Information page linked from the Instructions and
+Forms (top right corner of the ScholarOne Manuscripts screen) or you can
+contact me.
+
+We look forward to receiving your revised manuscript.
+
+Sincerely,
+Dr. Lorena A. Barba
+George Washington University
+Mechanical and Aerospace Engineering
+Editor-in-Chief, Computing in Science and Engineering
+
+--------------------------------------------------
+
+
+
+
+
+EiC comments:
+Some reviewers request additions, and overview of other tools, etc. In
+doing your revision, please remember space limitations: 6,250 words
+maximum, including all main body, abstract, keyword, bibliography (12
+references or less), and biography text. See "Write For Us" section of the
+website: https://www.computer.org/csdl/magazine/cs
+
+Comments of the Associate Editor: Associate Editor
+Comments to the Author: Thank to the authors for your submission to the
+Reproducible Research department.
+
+Thanks to the reviewers for your careful and thoughtful reviews. We would
+appreciate it if you can make your reports available and share the DOI as
+soon as possible, per our original invitation e-mail. We will follow up our
+original invitation to obtain your review DOI, if you have not already
+included it in your review comments.
+
+Based on the review feedback, there are a number of major issues that
+require attention and many minor ones as well. Please take these into
+account as you prepare your major revision for another round of
+review. (See the actual review reports for details.)
+
+1. In general, there are a number of presentation issues needing
+attention. There are general concerns about the paper lacking focus. Some
+terminology is not well-defined (e.g. longevity). In addition, the
+discussion of tools could benefit from some categorization to characterize
+their longevity. Background and related efforts need significant
+improvement. (See below.)
+
+2. There is consistency among the reviews that related work is particularly
+lacking and not taking into account major works that have been written on
+this topic. See the reviews for details about work that could potentially
+be included in the discussion and how the current work is positioned with
+respect to this work.
+
+3. The current work needs to do a better job of explaining how it deals
+with the nagging problem of running on CPU vs. different architectures. At
+least one review commented on the need to include a discussion of
+continuous integration (CI) and its potential to help identify problems
+running on different architectures. Is CI employed in any way in the work
+presented in this article?
+
+4. The presentation of the Maneage tool is both lacking in clarity and
+consistency with the public information/documentation about the tool. While
+our review focus is on the article, it is important that readers not be
+confused when they visit your site to use your tools.
+
+5. A significant question raised by one review is how this work compares to
+"executable" papers and Jupyter notebooks. Does this work embody
+similar/same design principles or expand upon the established alternatives?
+In any event, a discussion of this should be included in
+background/motivation and related work to help readers understand the clear
+need for a new approach, if this is being presented as new/novel.
+
+Reviews:
+
+Please note that some reviewers may have included additional comments in a
+separate file. If a review contains the note "see the attached file" under
+Section III A - Public Comments, you will need to log on to ScholarOne
+Manuscripts to view the file. After logging in, select the Author Center,
+click on the "Manuscripts with Decisions" queue and then click on the "view
+decision letter" link for this manuscript. You must scroll down to the very
+bottom of the letter to see the file(s), if any. This will open the file
+that the reviewer(s) or the Associate Editor included for you along with
+their review.
+
+--------------------------------------------------
+
+
+
+
+
+Reviewer: 1
+Recommendation: Author Should Prepare A Major Revision For A Second Review
+
+Comments:
+
+  * Adding an explicit list of contributions would make it easier for the
+    reader to appreciate them.
+
+ * These are not mentioned/cited and are highly relevant to this paper (in
+ no particular order):
+
+ * Git flows, both in general and in particular for research.
+ * Provenance work, in general and with git in particular
+ * Reprozip: https://www.reprozip.org/
+ * OCCAM: https://occam.cs.pitt.edu/
+ * Popper: http://getpopper.io/
+ * Whole Tale: https://wholetale.org/
+ * Snakemake: https://github.com/snakemake/snakemake
+ * CWL https://www.commonwl.org/ and WDL https://openwdl.org/
+ * Nextflow: https://www.nextflow.io/
+ * Sumatra: https://pythonhosted.org/Sumatra/
+ * Podman: https://podman.io
+ * AppImage (https://appimage.org/), Flatpack
+ (https://flatpak.org/), Snap (https://snapcraft.io/)
+ * nbdev https://github.com/fastai/nbdev and jupytext
+ * Bazel: https://bazel.build/
+ * Debian reproducible builds: https://wiki.debian.org/ReproducibleBuilds
+
+ * Existing guidelines similar to the proposed "Criteria for
+ longevity". Many articles of these in the form "10 simple rules for
+ X", for example (not exhaustive list):
+ * https://doi.org/10.1371/journal.pcbi.1003285
+ * https://arxiv.org/abs/1810.08055
+ * https://osf.io/fsd7t/
+
+ * A model project for reproducible papers: https://arxiv.org/abs/1401.2000
+
+ * Executable/reproducible paper articles and original concepts
+
+ * Several claims in the manuscript are not properly justified, neither in
+ the text nor via citation. Examples (not exhaustive list):
+
+ * "it is possible to precisely identify the Docker “images” that are
+ imported with their checksums, but that is rarely practiced in most
+ solutions that we have surveyed [which ones?]"
+
+ * "Other OSes [which ones?] have similar issues because pre-built
+ binary files are large and expensive to maintain and archive."
+
+ * "Researchers using free software tools have also already had some
+ exposure to it"
+
+ * "A popular framework typically falls out of fashion and requires
+ significant resources to translate or rewrite every few years."
+
+ * As mentioned in the discussion by the authors, not even Bash, Git or
+ Make is reproducible, thus not even Maneage can address the longevity
+ requirements. One possible alternative is the use of CI to ensure that
+ papers are re-executable (several papers have been written on this
+ topic). Note that CI is well-established technology (e.g. Jenkins is
+ almost 10 years old).
+
+Additional Questions:
+
+1. How relevant is this manuscript to the readers of this periodical?
+ Please explain your rating in the Detailed Comments section.: Very
+ Relevant
+
+2. To what extent is this manuscript relevant to readers around the world?:
+ The manuscript is of interest to readers throughout the world
+
+1. Please summarize what you view as the key point(s) of the manuscript and
+ the importance of the content to the readers of this periodical.: This
+ article introduces desiderata for long-term archivable reproduciblity
+ and presents Maneage, a system whose goal is to achieve these outlined
+ properties.
+
+2. Is the manuscript technically sound? Please explain your answer in the
+ Detailed Comments section.: Partially
+
+3. What do you see as this manuscript's contribution to the literature in
+ this field?: Presentation of Maneage
+
+4. What do you see as the strongest aspect of this manuscript?: A great
+   summary of Maneage, as well as its implementation.
+
+5. What do you see as the weakest aspect of this manuscript?: Criterion has
+ been proposed previously. Maneage itself provides little novelty (see
+ comments below).
+
+1. Does the manuscript contain title, abstract, and/or keywords?: Yes
+
+2. Are the title, abstract, and keywords appropriate? Please elaborate in
+ the Detailed Comments section.: Yes
+
+3. Does the manuscript contain sufficient and appropriate references
+ (maximum 12-unless the article is a survey or tutorial in scope)? Please
+ elaborate in the Detailed Comments section.: Important references are
+ missing; more references are needed
+
+4. Does the introduction clearly state a valid thesis? Please explain your
+ answer in the Detailed Comments section.: Could be improved
+
+5. How would you rate the organization of the manuscript? Please elaborate
+ in the Detailed Comments section.: Satisfactory
+
+6. Is the manuscript focused? Please elaborate in the Detailed Comments
+ section.: Satisfactory
+
+7. Is the length of the manuscript appropriate for the topic? Please
+ elaborate in the Detailed Comments section.: Satisfactory
+
+8. Please rate and comment on the readability of this manuscript in the
+ Detailed Comments section.: Easy to read
+
+9. Please rate and comment on the timeliness and long term interest of this
+ manuscript to CiSE readers in the Detailed Comments section. Select all
+ that apply.: Topic and content are of limited interest to CiSE readers.
+
+Please rate the manuscript. Explain your choice in the Detailed Comments
+section.: Good
+
+--------------------------------------------------
+
+
+
+
+
+Reviewer: 2
+Recommendation: Accept If Certain Minor Revisions Are Made
+
+Comments: https://doi.org/10.22541/au.159724632.29528907
+
+Operating System: Authors mention that Docker is usually used with an image
+of Ubuntu without specifying the version used. And even if users take
+care about the version, the image is updated monthly, thus the image used
+will have different OS components depending on the generation time. This
+difference in OS components will interfere with reproducibility. I agree
+on that, but I would like to add that it is a wrong habit of users. It is
+possible to generate reproducible Docker images by generating it from an
+ISO image of the OS. These ISO images are archived, at least for Ubuntu
+(http://old-releases.ubuntu.com/releases) and for Debian
+(https://cdimage.debian.org/mirror/cdimage/archive) thus allow users to
+generate an OS with identical components. Combined with the
+snapshot.debian.org service, it is even possible to update a Debian release
+to a specific time point up to 2005 and with a precision of six hours. With
+combination of both ISO image and snapshot.debian.org service it is
+possible to obtain an OS for Docker or for a VM with identical components
+even if users have to use the PM of the OS. Authors should add indication
+that using good practices it is possible to use Docker or VM to obtain
+identical OS usable for reproducible research.
+
+CPU architecture: The CPU architecture of the platform used to run the
+workflow is not discussed in the manuscript. During software integration in
+Debian, I have seen several software packages failing their unit tests due
+to different behavior of the software itself or of a library dependency.
+This unexpected behavior was only present on non-x86 architectures, mainly
+because developers use an x86 machine for their development and tests. Bug or
+feature? I don’t know, but nowadays, it is quite frequent to see computers
+with a non-x86 CPU. It would be annoying to fail the reproducibility step
+because of a difference in CPU architecture. Authors should probably take
+into account the architecture used in their workflow or at least report it.
+
+POSIX dependency: I don’t understand the "no dependency beyond
+POSIX". Authors should more explained what they mean by this sentence. I
+completely agree that the dependency hell must be avoided and dependencies
+should be used with parsimony. Unfortunately, sometimes we need proprietary
+or specialized software to read raw data. For example in genetics,
+micro-array raw data are stored in binary proprietary formats. To convert
+this data into a plain text format, we need the proprietary software
+provided with the measurement tool.
+
+Maneage: I was not able to properly set up a project with Maneage. The
+configuration step failed during the download of tools used in the
+workflow. This is probably due to a firewall/antivirus restriction out of
+my control. How frequently does this failure happen to users? Moreover, the
+time to configure a new project is quite long because everything needs to
+be compiled. Authors should compare the time required to set up a project
+with Maneage versus the time used by other workflows, to give an indication
+to the readers.
+
+Disclaimer: For the sake of transparency, it should be noted that I am
+involved in the development of Debian, thus my comments are probably
+biased.
+
+Additional Questions:
+
+1. How relevant is this manuscript to the readers of this periodical?
+ Please explain your rating in the Detailed Comments section.: Relevant
+
+2. To what extent is this manuscript relevant to readers around the world?:
+ The manuscript is of interest to readers throughout the world
+
+1. Please summarize what you view as the key point(s) of the manuscript and
+ the importance of the content to the readers of this periodical.: The
+ authors describe briefly the history of solutions proposed by
+ researchers to generate reproducible workflows. Then, they report the
+ problems with the current tools used to tackle the reproducible
+ problem. They propose a set of criteria to develop new reproducible
+ workflows and finally they describe their proof of concept workflow
+ called "Maneage". This manuscript could help researchers to improve
+ their workflow to obtain reproducible results.
+
+2. Is the manuscript technically sound? Please explain your answer in the
+ Detailed Comments section.: Yes
+
+3. What do you see as this manuscript's contribution to the literature in
+ this field?: The authors try to propose a simple answer to the
+ reproducibility problem by defining new criteria. They also propose a
+ proof of concept workflow which can be directly used by researchers for
+ their projects.
+
+4. What do you see as the strongest aspect of this manuscript?: This
+ manuscript describes a new reproducible workflow which doesn't require
+ another new trendy high-level software. The proposed workflow is only
+ based on low-level tools already widely known. Moreover, the workflow
+ takes into account the version of all software used in the chain of
+ dependencies.
+
+5. What do you see as the weakest aspect of this manuscript?: Authors don't
+   discuss the problem of results reproducibility when analyses are
+   performed using CPUs with different architectures. Some libraries have
+   different behaviors when they run on different architectures and it
+ could influence final results. Authors are probably talking about x86,
+ but there is no reference at all in the manuscript.
+
+1. Does the manuscript contain title, abstract, and/or keywords?: Yes
+
+2. Are the title, abstract, and keywords appropriate? Please elaborate in
+ the Detailed Comments section.: Yes
+
+3. Does the manuscript contain sufficient and appropriate references
+ (maximum 12-unless the article is a survey or tutorial in scope)? Please
+ elaborate in the Detailed Comments section.: References are sufficient
+ and appropriate
+
+4. Does the introduction clearly state a valid thesis? Please explain your
+ answer in the Detailed Comments section.: Yes
+
+5. How would you rate the organization of the manuscript? Please elaborate
+ in the Detailed Comments section.: Satisfactory
+
+6. Is the manuscript focused? Please elaborate in the Detailed Comments
+ section.: Satisfactory
+
+7. Is the length of the manuscript appropriate for the topic? Please
+ elaborate in the Detailed Comments section.: Satisfactory
+
+8. Please rate and comment on the readability of this manuscript in the
+ Detailed Comments section.: Easy to read
+
+9. Please rate and comment on the timeliness and long term interest of this
+ manuscript to CiSE readers in the Detailed Comments section. Select all
+ that apply.: Topic and content are of immediate and continuing interest
+ to CiSE readers
+
+Please rate the manuscript. Explain your choice in the Detailed Comments
+section.: Good
+
+--------------------------------------------------
+
+
+
+
+
+Reviewer: 3
+Recommendation: Accept If Certain Minor Revisions Are Made
+
+Comments: Longevity of workflows in a project is one of the problems for
+reproducibility in different fields of computational research. Therefore, a
+proposal that seeks to guarantee this longevity becomes relevant for the
+entire community, especially when it is based on free software and is easy
+to access and implement.
+
+GOODMAN et al., 2016, BARBA, 2018 and PLESSER, 2018 observed in their
+research that the terms reproducibility and replicability are frequently
+found in the scientific literature, and that their interchangeable use ends
+up generating confusion due to the authors' lack of clarity. Thus, authors
+should define their use of the term briefly for their readers.
+
+The introduction is consistent with the proposal of the article, but deals
+with the tools separately, many of which can be used together to minimize
+some of the problems presented. The use of Ansible, Helm, among others,
+also helps in minimizing problems. When the authors use the Python example,
+I believe it is interesting to point out that today version 2 has been
+discontinued by the maintaining community, which creates another problem
+within the perspective of the article. Regarding the use of VM's and
+containers, I believe that the discussion presented by THAIN et al., 2015
+is interesting to increase essential points of the current work. About the
+Singularity, the description article was missing (Kurtzer GM, Sochat V,
+Bauer MW, 2017). I also believe that a reference to FAIR is interesting
+(WILKINSON et al., 2016).
+
+In my opinion, the paragraph on IPOL seems to be out of context with the
+previous ones. This issue of end-to-end reproducibility of a publication
+could be better explored, which would further enrich the tool presented.
+
+The presentation of the longevity criteria was adequate in the context of
+the article and explored the points that were dealt with later.
+
+The presentation of the tool was consistent. On the project website, I
+suggest that the information contained in README-hacking be presented on
+the same page as the Tutorial. A topic breakdown is interesting, as the
+markdown reading may be too long to find information.
+
+Additional Questions:
+
+1. How relevant is this manuscript to the readers of this periodical?
+ Please explain your rating in the Detailed Comments section.: Relevant
+
+2. To what extent is this manuscript relevant to readers around the world?:
+ The manuscript is of interest to readers throughout the world
+
+1. Please summarize what you view as the key point(s) of the manuscript and
+ the importance of the content to the readers of this periodical.: In
+ this article, the authors discuss the problem of the longevity of
+ computational workflows, presenting what they consider to be criteria
+ for longevity and an implementation based on these criteria, called
+ Maneage, seeking to ensure a long lifespan for analysis projects.
+
+2. Is the manuscript technically sound? Please explain your answer in the
+ Detailed Comments section.: Yes
+
+3. What do you see as this manuscript's contribution to the literature in
+ this field?: In this article, the authors discuss the problem of the
+ longevity of computational workflows, presenting what they consider to
+ be criteria for longevity and an implementation based on these criteria,
+ called Maneage, seeking to ensure a long lifespan for analysis projects.
+
+ As a key point, the authors enumerate quite clear criteria that can
+ guarantee the longevity of projects and present a free software-based
+ way of achieving this objective. The method presented by the authors is
+    not easy to implement for many end users with little computing
+    knowledge, but it can easily be implemented by users with average
+    knowledge in the area.
+
+4. What do you see as the strongest aspect of this manuscript?: One of the
+ strengths of the manuscript is the implementation of Maneage entirely in
+ free software and the search for completeness presented in the
+ manuscript. The use of GNU software adds the guarantee of long
+ maintenance by one of the largest existing software communities. In
+ addition, the tool developed has already been tested in different
+ publications, showing itself consistent in different scenarios.
+
+5. What do you see as the weakest aspect of this manuscript?: For the
+ proper functioning of the proposed tool, the user needs prior knowledge
+ of LaTeX, GIT and the command line, which can keep inexperienced users
+ away. Likewise, the tool is suitable for Unix users, keeping users away
+ from Microsoft environments.
+
+ Even though Unix-like environments are the majority in the areas of
+ scientific computing, many users still perform their analysis in
+ different areas on Windows computers or servers, with the assistance of
+ package managers.
+
+1. Does the manuscript contain title, abstract, and/or keywords?: Yes
+
+2. Are the title, abstract, and keywords appropriate? Please elaborate in
+ the Detailed Comments section.: Yes
+
+3. Does the manuscript contain sufficient and appropriate references
+ (maximum 12-unless the article is a survey or tutorial in scope)? Please
+ elaborate in the Detailed Comments section.: Important references are
+ missing; more references are needed
+
+4. Does the introduction clearly state a valid thesis? Please explain your
+ answer in the Detailed Comments section.: Could be improved
+
+5. How would you rate the organization of the manuscript? Please elaborate
+ in the Detailed Comments section.: Satisfactory
+
+6. Is the manuscript focused? Please elaborate in the Detailed Comments
+ section.: Could be improved
+
+7. Is the length of the manuscript appropriate for the topic? Please
+ elaborate in the Detailed Comments section.: Satisfactory
+
+8. Please rate and comment on the readability of this manuscript in the
+ Detailed Comments section.: Easy to read
+
+9. Please rate and comment on the timeliness and long term interest of this
+ manuscript to CiSE readers in the Detailed Comments section. Select all
+ that apply.: Topic and content are of immediate and continuing interest
+ to CiSE readers
+
+Please rate the manuscript. Explain your choice in the Detailed Comments
+section.: Excellent
+
+--------------------------------------------------
+
+
+
+
+
+Reviewer: 4
+Recommendation: Author Should Prepare A Major Revision For A Second Review
+
+Comments: Overall evaluation - Good.
+
+This paper is in scope, and the topic is of interest to the readers of
+CiSE. However in its present form, I have concerns about whether the paper
+presents enough new contributions to the area in a way that can then be
+understood and reused by others. The main things I believe need addressing
+are: 1) Revisit the criteria, show how you have come to decide on them,
+give some examples of why they are important, and address potential missing
+criteria. 2) Clarify the discussion of challenges to adoption and make it
+clearer which tradeoffs are important to practitioners. 3) Be clearer about
+which sorts of research workflow are best suited to this approach.
+
+B2.Technical soundness: here I am discussing the soundness of the paper,
+rather than the soundness of the Maneage tool. There are some fundamental
+additional challenges to reproducibility that are not addressed. Although
+software library versions are addressed, there is also the challenge of
+mathematical reproducibility, particularly of the handling of floating
+point number, which might occur because of the way the code is written, and
+the hardware architecture (including if code is optimised /
+parallelised). This could obviously be addressed through a criterion around
+how code is written, but this will also come with a tradeoff against
+performance, which is never mentioned. Another tradeoff, which might affect
+Criterion 3 is time to result - people use popular frameworks because it is
+easier to use them. Regarding the discussion, I would liked to have seen
+explanation of how these challenges to adoption were identified: was this
+anecdotal, through surveys, participant observation? As a side note around
+the technical aspects of Maneage - it is using LaTeX which in turn is built
+on TeX which in turn has had many portability problems in the past due to
+being written using WEB / Tangle, though with web2c this is largely now
+resolved - potentially an interesting sidebar to investigate how LaTeX/TeX
+has ensured its longevity!
+
+C2. The title is not specific enough - it should refer to the
+reproducibility of workflows/projects.
+
+C4. As noted above, whilst the thesis stated is valid, it may not be useful
+to practitioners of computation science and engineering as it stands.
+
+C6. Manuscript focus. I would have liked a more focussed approach to the
+presentation of information in II. Longevity is not defined, and whilst
+various tools are discussed and discarded, no attempt is made to categorise
+the magnitude of longevity for which they are relevant. For instance,
+environment isolators are regarded by the software preservation community
+as adequate for timescale of the order of years, but may not be suitable
+for the timescale of decades where porting and emulation are used. The
+title of this section "Commonly used tools and their longevity" is also
+confusing - do you mean the longevity of the tools or the longevity of the
+workflows that can be produced using these tools? What happens if you use a
+combination of all four categories of tools?
+
+C8. Readability. I found it difficult to follow the description of how
+Maneage works. It wasn't clear to me if code was being run to generate the
+results and figures in a LaTeX paper that is part of a project in
+Maneage. It appears to be suggested that this is the case, but Figure 1
+doesn't show how this works - it just has the LaTeX files, the data
+files and the Makefiles. Is it being suggested that LaTeX itself is the
+programming language, using its macro functionality? I was also a bit
+confused about how collaboration is handled - this appears to use the Git
+branching model, and the suggestion that Maneage is keeping track of all
+components from all projects - but what happens if you are working with
+collaborators that are using their own Maneage instance?
+
+I would also have liked to see a comparison between this approach and
+other "executable" paper approaches, e.g. Jupyter notebooks, compared on
+completeness, time taken to write a "paper", ease of depositing in a
+repository, and ease of use by another researcher.
+
+Additional Questions:
+
+1. How relevant is this manuscript to the readers of this periodical?
+ Please explain your rating in the Detailed Comments section.: Relevant
+
+2. To what extent is this manuscript relevant to readers around the world?:
+ The manuscript is of interest to readers throughout the world
+
+1. Please summarize what you view as the key point(s) of the manuscript and
+ the importance of the content to the readers of this periodical.: This
+ manuscript discusses the challenges of reproducibility of computational
+ research workflows, suggests criteria for improving the "longevity" of
+ workflows, describes the proof-of-concept tool, Maneage, that has been
+ built to implement these criteria, and discusses the challenges to
+ adoption.
+
+ Of primary importance is the discussion of the challenges to adoption,
+ as CiSE is about computational science which does not take place in a
+ theoretical vacuum. Many of the identified challenges relate to the
+ practice of computational science and the implementation of systems in
+ the real world.
+
+2. Is the manuscript technically sound? Please explain your answer in the
+ Detailed Comments section.: Partially
+
+3. What do you see as this manuscript's contribution to the literature in
+ this field?: The manuscript makes a modest contribution to the
+ literature through the description of the proof-of-concept, in
+   particular its approach to integrating asset management, version
+   control, and build, as well as its discussion of challenges to adoption.
+
+ The proposed criteria have mostly been discussed at length in many other
+ works looking at computational reproducibility and executable papers.
+
+4. What do you see as the strongest aspect of this manuscript?: The
+ strongest aspect is the discussion of difficulties for widespread
+ adoption of this sort of approach. Because the proof-of-concept tool
+ received support through the RDA, it was possible to get feedback from
+ researchers who were likely to use it. This has highlighted and
+ reinforced a number of challenges and caveats.
+
+5. What do you see as the weakest aspect of this manuscript?: The weakest
+ aspect is the assumption that research can be easily compartmentalized
+ into simple and complete packages. Given that so much of research
+ involves collaboration and interaction, this is not sufficiently
+   addressed. In particular, the challenge of interdisciplinary work,
+   where there may not be a common language to describe concepts and
+   where common workflow practices may differ, will be a barrier to
+   wider adoption of the primary thesis and criteria.
+
+1. Does the manuscript contain title, abstract, and/or keywords?: Yes
+
+2. Are the title, abstract, and keywords appropriate? Please elaborate in
+ the Detailed Comments section.: No
+
+3. Does the manuscript contain sufficient and appropriate references
+ (maximum 12-unless the article is a survey or tutorial in scope)? Please
+ elaborate in the Detailed Comments section.: References are sufficient
+ and appropriate
+
+4. Does the introduction clearly state a valid thesis? Please explain your
+ answer in the Detailed Comments section.: Could be improved
+
+5. How would you rate the organization of the manuscript? Please elaborate
+ in the Detailed Comments section.: Satisfactory
+
+6. Is the manuscript focused? Please elaborate in the Detailed Comments
+ section.: Could be improved
+
+7. Is the length of the manuscript appropriate for the topic? Please
+ elaborate in the Detailed Comments section.: Satisfactory
+
+8. Please rate and comment on the readability of this manuscript in the
+ Detailed Comments section.: Readable - but requires some effort to
+ understand
+
+9. Please rate and comment on the timeliness and long term interest of this
+ manuscript to CiSE readers in the Detailed Comments section. Select all
+ that apply.: Topic and content are of immediate and continuing interest
+ to CiSE readers
+
+Please rate the manuscript. Explain your choice in the Detailed Comments
+section.: Good
+
+--------------------------------------------------
+
+
+
+
+
+Reviewer: 5
+Recommendation: Author Should Prepare A Major Revision For A Second Review
+
+Comments:
+
+Major figures currently working in this exact field do not have their work
+acknowledged in this work. In no particular order: Victoria Stodden,
+Michael Heroux, Michela Taufer, and Ivo Jimenez. All of these authors have
+multiple publications that are highly relevant to this paper. In the case
+of Ivo Jimenez, his Popper work [Jimenez I, Sevilla M, Watkins N, Maltzahn
+C, Lofstead J, Mohror K, Arpaci-Dusseau A, Arpaci-Dusseau R. The popper
+convention: Making reproducible systems evaluation practical. In 2017 IEEE
+International Parallel and Distributed Processing Symposium Workshops
+(IPDPSW) 2017 May 29 (pp. 1561-1570). IEEE.] and the later revision that
+uses GitHub Actions, is largely the same as this work. The lack of
+attention to virtual machines and containers is highly problematic. While a
+reader cannot rely on DockerHub or a generic OS version label for a VM or
+container, these are some of the most promising tools for offering true
+reproducibility. On the data side, containers have the promise to manage
+data sets and workflows completely [Lofstead J, Baker J, Younge A. Data
+pallets: containerizing storage for reproducibility and
+traceability. In International Conference on High Performance Computing
+2019 Jun 16 (pp. 36-45). Springer, Cham.] Taufer has picked up this work
+and has graduated an MS student working on this topic with a published
+thesis. See also Jimenez's P-RECS workshop at HPDC for additional work
+highly relevant to this paper.
+
+Some other systems that do similar things include: reprozip, occam, whole
+tale, snakemake.
+
+While the work here is a good start, the paper needs to include the
+context of the current level of community development to be a complete
+research paper. A revision that includes an evaluation of, and
+comparison with, the suggested systems (using the criteria), and a
+related work section that seriously evaluates the work of the
+recommended authors, among others, would make this paper worthy of
+publication.
+
+Additional Questions:
+
+1. How relevant is this manuscript to the readers of this periodical?
+ Please explain your rating in the Detailed Comments section.: Very
+ Relevant
+
+2. To what extent is this manuscript relevant to readers around the world?:
+ The manuscript is of interest to readers throughout the world
+
+1. Please summarize what you view as the key point(s) of the manuscript and
+ the importance of the content to the readers of this periodical.: This
+   paper describes the Maneage system for reproducible workflows. It
+   lays out a bit of the need, has very limited related work, offers
+   criteria that any system aiming at reproducibility should meet, and
+   finally describes how Maneage achieves these goals.
+
+2. Is the manuscript technically sound? Please explain your answer in the
+ Detailed Comments section.: Partially
+
+3. What do you see as this manuscript's contribution to the literature in
+ this field?: Yet another example of a reproducible workflows
+ project. There are numerous examples, mostly domain specific, and this
+ one is not the most advanced general solution.
+
+4. What do you see as the strongest aspect of this manuscript?: Working
+ code and published artifacts
+
+5. What do you see as the weakest aspect of this manuscript?: Lack of
+   context in the field, missing very relevant work that eliminates much, if
+ not all, of the novelty of this work.
+
+1. Does the manuscript contain title, abstract, and/or keywords?: Yes
+
+2. Are the title, abstract, and keywords appropriate? Please elaborate in
+ the Detailed Comments section.: Yes
+
+3. Does the manuscript contain sufficient and appropriate references
+ (maximum 12-unless the article is a survey or tutorial in scope)? Please
+ elaborate in the Detailed Comments section.: Important references are
+ missing; more references are needed
+
+4. Does the introduction clearly state a valid thesis? Please explain your
+ answer in the Detailed Comments section.: Could be improved
+
+5. How would you rate the organization of the manuscript? Please elaborate
+ in the Detailed Comments section.: Satisfactory
+
+6. Is the manuscript focused? Please elaborate in the Detailed Comments
+ section.: Could be improved
+
+7. Is the length of the manuscript appropriate for the topic? Please
+ elaborate in the Detailed Comments section.: Could be improved
+
+8. Please rate and comment on the readability of this manuscript in the
+   Detailed Comments section.: Easy to read
+
+9. Please rate and comment on the timeliness and long term interest of this
+ manuscript to CiSE readers in the Detailed Comments section. Select all
+ that apply.: Topic and content are likely to be of growing interest to
+ CiSE readers over the next 12 months
+
+Please rate the manuscript. Explain your choice in the Detailed Comments
+section.: Fair
diff --git a/peer-review/2-review.txt b/peer-review/2-review.txt
new file mode 100644
index 0000000..9f8cdd8
--- /dev/null
+++ b/peer-review/2-review.txt
@@ -0,0 +1,147 @@
+From: Computing in Science and Engineering <onbehalfof@manuscriptcentral.com>
+To: mohammad akhlaghi org,
+ infantesainz gmail com,
+ boud astro uni torun pl,
+ mkhellat ideal-information.com,
+ david.valls-gabaud observatoiredeparis psl eu,
+ rbaena iac es
+Cc: cise@computer.org,
+ cise-rr@computer.org
+Received: Wed, 7 Apr 2021 19:39:59 +0000
+Subject: Decision - Computing in Science and Engineering, CiSESI-2020-06-0048.R1
+
+--------------------------------------------------
+
+Computing in Science and Engineering, CiSESI-2020-06-0048.R1
+"Towards Long-term and Archivable Reproducibility"
+manuscript type: Reproducible Research
+
+Dear Dr. Mohammad Akhlaghi:
+
+Congratulations! Your manuscript, "Towards Long-term and Archivable
+Reproducibility," CiSESI-2020-06-0048.R1, has been accepted for publication
+in an upcoming issue of Computing in Science and Engineering, subject to a
+final light copyedit. Do note the editors' comments below.
+
+Thank you,
+Lorena A. Barba
+Editor in Chief, Computing in Science and Engineering
+labarba gwu edu
+**********
+
+Editor-in-Chief's Comments
+**********
+- I am processing this as an "accept" to expedite, but please take
+ notice/care of the following items before submitting your final files.
+- Note particularly that you have to edit any usage of a reference (like
+ [1]) as part of speech in a sentence. Use alternatives like "Smith et
+ al. [1]." The article template uses superscripts for references!
+- I strongly recommend that you deposit the appendices in arXiv, separately
+  from the main preprint. This way, all the cited works will get their
+  citations indexed by Google Scholar, which the authors will likely
+ appreciate.
+- You may add the arXiv id for the manuscript and the appendices in your
+ reproducibility statement at the end of the Abstract.
+- I have manually edited the due date for your final files to Friday
+  April 9 because we are finalizing the next issue. If we receive your
+  files ASAP, we can include your article in the next issue.
+
+Associate Editor Comments:
+**********
+(There are no comments)
+
+Reviewers' Comments
+**********
+
+Reviewer: 1
+
+Recommendation: Accept With No Changes
+
+Comments:
+A more in-depth evaluation of the different options from a technical
+standpoint, rather than just discussion, would really strengthen the
+paper, showing that the idea is good for more than generating this one
+paper, which does not contain experimental code executions or
+benchmarking demonstrating different system characteristics. A deeper
+acknowledgement of the differences each system brings, and of how they
+change over time as they are improved, could be greatly expanded. This
+is an area of extreme concern.
+
+Additional Questions:
+
+1. How relevant is this manuscript to the readers of this periodical?
+Please explain your rating in the Detailed Comments section.: Very Relevant
+
+2. To what extent is this manuscript relevant to readers around the world?:
+The manuscript is of interest to readers throughout the world
+
+1. Please summarize what you view as the key point(s) of the manuscript and
+the importance of the content to the readers of this periodical.: This
+paper presents necessary requirements for reproducible work. The aim is
+to be able to generate the same output from a workflow of inputs, test
+code, and data processing, into a text report format.
+
+The concepts presented cover a wide variety of topics, hitting the vast
+majority of cases. Some things, such as minor hardware version
+differences that are not evident except in the physical stamps on the
+part itself but that yield slightly different behavior, are only
+mentioned at a high level and left unaddressed. These concerns are
+instead part of the longevity discussion, and limit the lifetime of an
+artifact.
+
+The paper covers the topic reasonably well and offers a good guide for
+people to think about how best to approach providing a reproducible
+scientific system. Most favorably, the paper itself is offered as an
+example, with an embedded, machine-generated version ID and link to the
+source materials. This is truly putting your money where your mouth is, so
+to speak.
+
+2. Is the manuscript technically sound? Please explain your answer in the
+Detailed Comments section.: Yes
+
+3. What do you see as this manuscript's contribution to the literature in
+this field?: Clearly defining what the longevity limitations are for
+reproducible work is crucial for us as a community to have effective
+discussions about what we mean for something to be reproducible. Unless
+we can agree on what acceptable longevity is, we cannot agree on whether
+something is reproducible or not. The other factors listed for the
+different components are important things to consider and cover
+basically everything necessary.
+
+4. What do you see as the strongest aspect of this manuscript?: A simple,
+easy to follow discussion of what reproducibility really means and how to
+achieve it.
+
+5. What do you see as the weakest aspect of this manuscript?: Few examples
+of the process itself and no testing/comparison of any of the variants in
+the supplemental materials.
+
+1. Does the manuscript contain title, abstract, and/or keywords?: Yes
+
+2. Are the title, abstract, and keywords appropriate? Please elaborate in
+the Detailed Comments section.: Yes
+
+3. Does the manuscript contain sufficient and appropriate references
+(maximum 12-unless the article is a survey or tutorial in scope)? Please
+elaborate in the Detailed Comments section.: References are sufficient and
+appropriate
+
+4. Does the introduction clearly state a valid thesis? Please explain your
+answer in the Detailed Comments section.: Yes
+
+5. How would you rate the organization of the manuscript? Please elaborate
+in the Detailed Comments section.: Satisfactory
+
+6. Is the manuscript focused? Please elaborate in the Detailed Comments
+section.: Satisfactory
+
+7. Is the length of the manuscript appropriate for the topic? Please
+elaborate in the Detailed Comments section.: Satisfactory
+
+8. Please rate and comment on the readability of this manuscript in the
+Detailed Comments section.: Easy to read
+
+9. Please rate and comment on the timeliness and long term interest of this
+manuscript to CiSE readers in the Detailed Comments section. Select all
+that apply.: Topic and content are of immediate and continuing interest to
+CiSE readers
+
+Please rate the manuscript. Explain your choice in the Detailed Comments
+section.: Good
diff --git a/project b/project
index ba9e9ff..8db213a 100755
--- a/project
+++ b/project
@@ -46,6 +46,7 @@ highlightnew=0
all_highlevel=0
existing_conf=0
highlightnotes=0
+separatesupplement=0
scriptname="./project"
@@ -129,6 +130,7 @@ Make (final PDF) options:
--refresh-bib Force refresh the bibliography.
--highlight-new Highlight '\new' parts of text as green.
--highlight-notes Show '\tonote' regions as red text in PDF.
+ --supplement Build the appendices as a separate supplement PDF.
Mandatory or optional arguments to long options are also mandatory or optional
for any corresponding short options.
@@ -207,6 +209,11 @@ do
# Make options
# ------------
#
+ # '--supplement' is an on/off option: it takes no value (giving it
+ # one is an error, handled by 'on_off_option_error').
+ --supplement) separatesupplement=1; shift;;
+ --supplement=*) on_off_option_error --supplement;;
+
# Note that Make's 'debug' can take values, but when called without any
# value, it is like giving it a value of 'a'):
--refresh-bib) [ -f tex/src/references.tex ] && touch tex/src/references.tex; shift;;
@@ -387,8 +394,9 @@ controlled_env() {
# Remove all existing environment variables (with 'env -i') and only
# use some pre-defined environment variables, then build the project.
envmake=".local/bin/env -i HOME=$bdir sys_rm=$(which rm) $gopt"
+ envmake="$envmake separatesupplement=$separatesupplement "
envmake="$envmake highlightnew=$highlightnew"
- envmake="$envmake highlightnotes=$highlightnotes .local/bin/make"
+ envmake="$envmake highlightnotes=$highlightnotes .local/bin/make "
envmake="$envmake --no-builtin-rules --no-builtin-variables -f $1"
if ! [ x"$debug" = x ]; then envmake="$envmake --debug=$debug"; fi
@@ -500,6 +508,22 @@ case $operation in
# Run the actual project.
controlled_env reproduce/analysis/make/top-make.mk
+
+ # Print the number of words, but only if the user has 'pdftotext'
+ # on the host (outside of Maneage, for now!), and only if a
+ # 'paper.pdf' actually exists (for example, after running
+ # './project make clean' there is no 'paper.pdf').
+ if [ -f paper.pdf ]; then
+ if type pdftotext > /dev/null 2>/dev/null; then
+ numwords=$(pdftotext paper.pdf && wc -w < paper.txt)
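+ # Rough "effective" count (as printed below): remove about 850
+ # words (the abstract and similar), and count 250 for each of the
+ # two figure captions.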
+ numeff=$(echo $numwords | awk '{print $1-850+500}')
+ echo; echo "Number of words in full PDF: $numwords"
+ if [ $separatesupplement = 1 ]; then
+ echo "No abstract, and captions (250 for each figure): $numeff"
+ fi
+ rm paper.txt
+ fi
+ fi
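+
+ # For example (a sketch of the new option in use; without
+ # '--supplement' the appendices stay inside a single 'paper.pdf'):
+ # ./project make --supplement # builds paper.pdf and supplement.pdf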
;;
diff --git a/reproduce/analysis/config/INPUTS.conf b/reproduce/analysis/config/INPUTS.conf
index 5a58758..f3d1cd4 100644
--- a/reproduce/analysis/config/INPUTS.conf
+++ b/reproduce/analysis/config/INPUTS.conf
@@ -75,7 +75,7 @@
-# Demo dataset used in the histogram plot (remove when customizing).
-INPUT-wfpc2.fits-size = 62K
-INPUT-wfpc2.fits-url = https://fits.gsfc.nasa.gov/samples/WFPC2ASSNu5780205bx.fits
-INPUT-wfpc2.fits-sha256 = 9851bc2bf9a42008ea606ec532d04900b60865daaff2f233e5c8565dac56ad5f
+# Dataset used in this analysis and its checksum for integrity checking.
+INPUT-menke20.xlsx-size = 1.9M
+INPUT-menke20.xlsx-url = https://www.biorxiv.org/content/biorxiv/early/2020/01/18/2020.01.15.908111/DC1/embed/media-1.xlsx
+INPUT-menke20.xlsx-sha256 = 7839cdc2946134773ffc401cbcc78fb58fc489d2caad65375c85d605b2f8b13e
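+#
+# For a new input file, these values can be generated with standard
+# tools; for example (a minimal sketch, run on the downloaded file):
+#     sha256sum menke20.xlsx   # value for the '-sha256' line
+#     du -h menke20.xlsx       # approximate value for the '-size' line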
diff --git a/reproduce/analysis/config/delete-me-squared-num.conf b/reproduce/analysis/config/delete-me-squared-num.conf
deleted file mode 100644
index 4df2101..0000000
--- a/reproduce/analysis/config/delete-me-squared-num.conf
+++ /dev/null
@@ -1,9 +0,0 @@
-# Number of samples in the demonstration analysis (to be deleted).
-#
-# Copyright (C) 2019-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
-#
-# Copying and distribution of this file, with or without modification, are
-# permitted in any medium without royalty provided the copyright notice and
-# this notice are preserved. This file is offered as-is, without any
-# warranty.
-delete-me-squared-num = 50
diff --git a/reproduce/analysis/config/demo-year.conf b/reproduce/analysis/config/demo-year.conf
new file mode 100644
index 0000000..429b220
--- /dev/null
+++ b/reproduce/analysis/config/demo-year.conf
@@ -0,0 +1,3 @@
+# Demonstration year: the number of papers studied in this year (the
+# last one before 1997) is quoted in the paper.
+menke-demo-year = 1996
diff --git a/reproduce/analysis/config/metadata.conf b/reproduce/analysis/config/metadata.conf
index 0241136..f570340 100644
--- a/reproduce/analysis/config/metadata.conf
+++ b/reproduce/analysis/config/metadata.conf
@@ -23,16 +23,16 @@
# warranty.
# Project information
-metadata-title = The project title goes here
+metadata-title = Toward Long-Term and Archivable Reproducibility
# DOIs and identifiers (don't include fixed URL prefixes like
# 'https://doi.org/' or 'https://arxiv.org/abs'), they will be added
# automatically where necessary.
-metadata-arxiv =
-metadata-doi-zenodo =
-metadata-doi-journal =
-metadata-doi = $(metadata-doi-journal)
-metadata-git-repository = http://git.maneage.org/project.git
+metadata-arxiv = 2006.03018
+metadata-doi-zenodo = 10.5281/zenodo.4913277
+metadata-doi-journal = 10.1109/MCSE.2021.3072860
+metadata-doi = $(metadata-doi-journal)
+metadata-git-repository = http://git.maneage.org/paper-concept.git
# DATA Copyright owner and license information.
metadata-copyright-owner = Mohammad Akhlaghi <mohammad@akhlaghi.org>
diff --git a/reproduce/analysis/make/delete-me.mk b/reproduce/analysis/make/delete-me.mk
deleted file mode 100644
index f4c8600..0000000
--- a/reproduce/analysis/make/delete-me.mk
+++ /dev/null
@@ -1,169 +0,0 @@
-# Dummy Makefile to create a random dataset for plotting.
-#
-# Copyright (C) 2018-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
-#
-# This Makefile is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This Makefile is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this Makefile. If not, see <http://www.gnu.org/licenses/>.
-
-
-
-
-
-# Dummy dataset
-# -------------
-#
-# Just as a demonstration(!): we will use AWK to generate a table showing X
-# and X^2 and draw its plot.
-#
-# Note that this dataset is directly read by LaTeX to generate a plot, so
-# we need to put it in the $(tex-publish-dir) directory.
-dm-squared = $(tex-publish-dir)/squared.txt
-$(dm-squared): $(pconfdir)/delete-me-squared-num.conf | $(tex-publish-dir)
-
-# When the plotted values are re-made, it is necessary to also delete
-# the TiKZ externalized files so the plot is also re-made by
-# PGFPlots.
- rm -f $(tikzdir)/delete-me-squared.pdf
-
-# Write the column metadata in a temporary file name (appending
-# '.tmp' to the actual target name). Once all steps are done, it is
-# renamed to the final target. We do this because if there is an
-# error in the middle, Make will not consider the job to be complete
-# and will stop here.
- echo "# Data for demonstration plot of default Maneage (MANaging data linEAGE)." > $@.tmp
- echo "# It is a simple plot, showing the power of two: y=x^2! " >> $@.tmp
- echo "# " >> $@.tmp
- echo "# Column 1: X [arbitrary, f32] The horizontal axis numbers." \
- >> $@.tmp
- echo "# Column 2: X_POW2 [arbitrary, f32] The horizontal axis to the power of two." \
- >> $@.tmp
- echo "# " >> $@.tmp
- $(call print-general-metadata, $@.tmp)
-
-# Generate the table of random values.
- awk 'BEGIN {for(i=1;i<=$(delete-me-squared-num);i+=0.5) \
- printf("%-8.1f%.2f\n", i, i*i); }' >> $@.tmp
-
-# Write it into the final target
- mv $@.tmp $@
-
-
-
-
-
-# Demo image PDF
-# --------------
-#
-# For an example image, we'll make a PDF copy of the WFPC II image to
-# display in the paper.
-dm-histdir = $(texdir)/image-histogram
-$(dm-histdir): | $(texdir); mkdir $@
-dm-img-pdf = $(dm-histdir)/wfpc2.pdf
-$(dm-img-pdf): $(dm-histdir)/%.pdf: $(indir)/%.fits | $(dm-histdir)
-
-# When the plotted values are re-made, it is necessary to also
-# delete the TiKZ externalized files so the plot is also re-made.
- rm -f $(tikzdir)/delete-me-image-histogram.pdf
-
-# Convert the dataset to a PDF.
- astconvertt --colormap=gray --fluxhigh=4 $< -h0 -o$@
-
-
-
-
-
-# Histogram of demo image
-# -----------------------
-#
-# For an example plot, we'll show the pixel value histogram also. IMPORTANT
-# NOTE: because this histogram contains data that is included in a plot, we
-# should publish it, so it will go into the $(tex-publish-dir).
-dm-img-histogram = $(tex-publish-dir)/wfpc2-histogram.txt
-$(dm-img-histogram): $(tex-publish-dir)/%-histogram.txt: $(indir)/%.fits \
- | $(tex-publish-dir)
-
-# When the plotted values are re-made, it is necessary to also delete
-# the TiKZ externalized files so the plot is also re-made.
- rm -f $(tikzdir)/delete-me-image-histogram.pdf
-
-# Generate the pixel value histogram.
- aststatistics --lessthan=5 $< -h0 --histogram -o$@.data
-
-# Put a two-line description of the dataset, copy the column metadata
-# from '$@.data', and add copyright.
- echo "# Histogram of example image to demonstrate Maneage (MANaging data linEAGE)." \
- > $@.tmp
- echo "# Example image URL: $(DEMO-URL)" >> $@.tmp
- echo "# " >> $@.tmp
- awk '/^# Column .:/' $@.data >> $@.tmp
- echo "# " >> $@.tmp
- $(call print-general-metadata, $@.tmp)
-
-# Add the column numbers in a formatted manner, rename it to the
-# output and clean up.
- awk '!/^#/{printf("%-15.4f%d\n", $$1, $$2)}' $@.data >> $@.tmp
- mv $@.tmp $@
- rm $@.data
-
-
-
-
-
-# Basic statistics
-# ----------------
-#
-# This is just as a demonstration on how to get analysic configuration
-# parameters from variables defined in 'reproduce/analysis/config/'.
-dm-img-stats = $(dm-histdir)/wfpc2-stats.txt
-$(dm-img-stats): $(dm-histdir)/%-stats.txt: $(indir)/%.fits \
- | $(dm-histdir)
- aststatistics $< -h0 --mean --median > $@
-
-
-
-
-
-# TeX macros
-# ----------
-#
-# This is how we write the necessary parameters in the final PDF.
-#
-# NOTE: In LaTeX you cannot use any non-alphabetic character in a variable
-# name.
-$(mtexdir)/delete-me.tex: $(dm-squared) $(dm-img-pdf) $(dm-img-histogram) \
- $(dm-img-stats)
-
-# Write the number of random values used.
- echo "\newcommand{\deletemenum}{$(delete-me-squared-num)}" > $@
-
-# Note that since Make variables start with a '$(', if you want to
-# use '$' within the shell (not Make), you have to quote any
-# occurance of '$' with another '$'. That is why there are '$$' in
-# the AWK command below.
-#
-# Here, we are first using AWK to find the minimum and maximum
-# values, then using it again to read each separately to use in the
-# macro definition.
- mm=$$(awk 'BEGIN{min=99999; max=-min}
- !/^#/{if($$2>max) max=$$2; if($$2<min) min=$$2;}
- END{print min, max}' $(dm-squared));
- v=$$(echo "$$mm" | awk '{printf "%.3f", $$1}');
- echo "\newcommand{\deletememin}{$$v}" >> $@
- v=$$(echo "$$mm" | awk '{printf "%.3f", $$2}');
- echo "\newcommand{\deletememax}{$$v}" >> $@
-
-# Write the statistics of the demo image as a macro.
- mean=$$(awk '{printf("%.2f", $$1)}' $(dm-img-stats))
- echo "\newcommand{\deletemewfpctwomean}{$$mean}" >> $@
- median=$$(awk '{printf("%.2f", $$2)}' $(dm-img-stats))
- echo "\newcommand{\deletemewfpctwomedian}{$$median}" >> $@
diff --git a/reproduce/analysis/make/demo-plot.mk b/reproduce/analysis/make/demo-plot.mk
new file mode 100644
index 0000000..13b0d45
--- /dev/null
+++ b/reproduce/analysis/make/demo-plot.mk
@@ -0,0 +1,80 @@
+# Second step of analysis:
+# Data for plot of number/fraction of tools per year.
+#
+# Copyright (C) 2020-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
+#
+# This Makefile is free software: you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by the
+# Free Software Foundation, either version 3 of the License, or (at your
+# option) any later version.
+#
+# This Makefile is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details. See <http://www.gnu.org/licenses/>.
+
+
+
+
+
+# Table for Figure 1C of Menke+20
+# -------------------------------
+a2mk20f1c = $(tex-publish-dir)/tools-per-year.txt
+$(a2mk20f1c): $(mk20tab3) | $(tex-publish-dir)
+
+# Remove the (possibly) produced figure that is created from this
+# table: it is created by LaTeX's TiKZ package, and includes multiple
+# files with a fixed prefix.
+ rm -f $(tikzdir)/figure-tools-per-year*
+
+# Write the column metadata in a temporary file name (appending
+# '.tmp' to the actual target name). Once all steps are done, it is
+# renamed to the final target. We do this because if there is an
+# error in the middle, Make will not consider the job to be complete
+# and will stop here.
+ echo "# Data of plot showing fraction of papers that mentioned software tools" > $@.tmp
+ echo "# per year to demonstrate the features of Maneage (MANaging data linEAGE)." >> $@.tmp
+ echo "# " >> $@.tmp
+ echo "# Raw data taken from Menke+2020 (https://doi.org/10.1101/2020.01.15.908111)." \
+ >> $@.tmp
+ echo "# " >> $@.tmp
+ echo "# Column 1: YEAR [count, u16] Publication year of papers." \
+ >> $@.tmp
+ echo "# Column 2: WITH_TOOLS [frac, f32] Fraction of papers mentioning software tools." \
+ >> $@.tmp
+ echo "# Column 3: NUM_PAPERS [count, u32] Total number of papers studied in that year." \
+ >> $@.tmp
+ echo "# " >> $@.tmp
+ $(call print-general-metadata, $@.tmp)
+
+
+# Aggregate the per-journal rows into one row per year: the fraction
+# (in percent) of papers mentioning tools, and the total number of
+# papers studied in that year.
+ awk '!/^#/{all[$$1]+=$$2; id[$$1]+=$$3} \
+ END{ for(year in all) \
+ printf("%-7d%-10.3f%d\n", year, 100*id[year]/all[year], \
+ all[year]) \
+ }' $< \
+ >> $@.tmp
+
+# Write it into the final target
+ mv $@.tmp $@
+
+
+
+
+
+# Final LaTeX macro
+$(mtexdir)/demo-plot.tex: $(a2mk20f1c) $(pconfdir)/demo-year.conf
+
+# Find the first year (first column of first row) of data.
+ v=$$(awk '!/^#/ && c==0{c++; print $$1}' $(a2mk20f1c))
+ echo "\newcommand{\menkefirstyear}{$$v}" > $@
+
+# Find the number of rows in the plotted table.
+ v=$$(awk '!/^#/{c++} END{print c}' $(a2mk20f1c))
+ echo "\newcommand{\menkenumyears}{$$v}" >> $@
+
+# Find the number of papers in the demo year (set in 'demo-year.conf').
+ v=$$(awk '$$1==$(menke-demo-year){print $$3}' $(a2mk20f1c))
+ echo "\newcommand{\menkenumpapersdemocount}{$$v}" >> $@
+ echo "\newcommand{\menkenumpapersdemoyear}{$(menke-demo-year)}" >> $@
diff --git a/reproduce/analysis/make/download.mk b/reproduce/analysis/make/download.mk
index 6e67962..7110c8f 100644
--- a/reproduce/analysis/make/download.mk
+++ b/reproduce/analysis/make/download.mk
@@ -101,5 +101,8 @@ $(inputdatasets): $(indir)/%: | $(indir) $(lockdir)
#
# It is very important to mention the address where the data were
# downloaded in the final report.
-$(mtexdir)/download.tex: $(pconfdir)/INPUTS.conf | $(mtexdir)
- echo "\\newcommand{\\wfpctwourl}{$(INPUT-wfpc2.fits-url)}" > $@
+$(mtexdir)/download.tex: $(indir)/menke20.xlsx | $(mtexdir)
+ echo "\newcommand{\menketwentyxlsxname}{menke20.xlsx}" > $@
+ echo "\newcommand{\menketwentychecksum}{$(INPUT-menke20.xlsx-sha256)}" >> $@
+ echo "\newcommand{\menketwentybytesize}{$(INPUT-menke20.xlsx-size)}" >> $@
+ echo "\newcommand{\menketwentyurl}{$(INPUT-menke20.xlsx-url)}" >> $@
diff --git a/reproduce/analysis/make/format.mk b/reproduce/analysis/make/format.mk
new file mode 100644
index 0000000..979475f
--- /dev/null
+++ b/reproduce/analysis/make/format.mk
@@ -0,0 +1,86 @@
+# First step of analysis:
+# Prepare the data, return basic values.
+#
+# As a demonstration analysis to go with the paper, we use the data from
+# Menke 2020 (DOI:10.1101/2020.01.15.908111). This is a relevant paper
+# because it provides interesting statistics about tools and methods used
+# in scientific papers.
+#
+# Copyright (C) 2020-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
+#
+# This Makefile is free software: you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by the
+# Free Software Foundation, either version 3 of the License, or (at your
+# option) any later version.
+#
+# This Makefile is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+# Public License for more details. See <http://www.gnu.org/licenses/>.
+
+
+
+
+# Save the "Table 3" spreadsheet from the downloaded `.xlsx' file into a
+# simple plain-text file that is easy to use.
+a1dir = $(badir)/analysis1
+mk20tab3 = $(a1dir)/table-3.txt
+$(a1dir):; mkdir $@
+$(mk20tab3): $(indir)/menke20.xlsx | $(a1dir)
+
+# Set a base-name for the table-3 data.
+ base=$(basename $(notdir $<))-table-3
+
+# Unfortunately XLSX I/O only works when the input and output are in
+# the directory where it is run. So first, we need to switch to the
+# input directory, run it, then put our desired output where we want
+# it and delete the extra files.
+ topdir=$$(pwd)
+ cd $(indir)
+ xlsxio_xlsx2csv $(notdir $<)
+ cp $(notdir $<)."Table 3 All by journal by year".csv $$base.csv
+ rm $(notdir $<).*.csv
+ cd $$topdir
+
+# Read the necessary information. Note that we are dealing with a CSV
+# (comma-separated value) file. But when there are commas in a
+# string, quotation marks are put around it. The `FPAT' value is
+# fully described in the GNU AWK manual. In short, it ensures that a
+# comma inside double-quotes doesn't count as a delimiter (a short
+# demonstration is appended at the end of this file).
+ echo "# Column 1: YEAR [counter, i16] Year of journal's publication." > $@.tmp
+ echo "# Column 2: NUM_PAPERS [counter, i16] Number of studied papers in that journal." >> $@.tmp
+ echo "# Column 3: NUM_PAPERS_WITH_TOOLS [counter, i16] Number of papers with an identified tool." >> $@.tmp
+ echo "# Column 4: NUM_ID_TOOLS [counter, i16] Number of software/tools that were identified." >> $@.tmp
+ echo "# Column 5: JOURNAL_NAME [string, str150] Name of journal." >> $@.tmp
+ awk 'NR>1{printf("%-10d%-10d%-10d%-10d %s\n", $$2, $$3, $$3*$$NF, $$(NF-1), $$1)}' \
+ FPAT='([^,]+)|("[^"]+")' $(indir)/$$base.csv >> $@.tmp
+
+# Rename the temporary file to the final target. This is done so that
+# if any of the steps above crash, the final target isn't created and
+# this rule will be re-run on the next attempt.
+ mv $@.tmp $@
+
+
+
+
+
+# Main LaTeX macro file
+$(mtexdir)/format.tex: $(mk20tab3)
+
+# Count the total number of papers in their study.
+ v=$$(awk '!/^#/{c+=$$2} END{print c}' $(mk20tab3))
+ echo "\newcommand{\menkenumpapers}{$$v}" > $@
+
+# Count how many unique journals there were in the study. Note that
+# the `41' comes because we put 10 characters for each of the four
+# numeric columns and separated the last numeric column from the
+# string column with a space. If the number of numeric columns
+# changes in the future, the `41' also has to change.
+ v=$$(awk 'BEGIN{FIELDWIDTHS="41 10000"} !/^#/{print $$2}' \
+ $(mk20tab3) | uniq | wc -l)
+ echo "\newcommand{\menkenumjournals}{$$v}" >> $@
+
+# Count how many rows the original catalog has.
+ v=$$(awk '!/^#/{c++} END{print c}' $(mk20tab3))
+ echo "\newcommand{\menkenumorigrows}{$$v}" >> $@
diff --git a/reproduce/analysis/make/initialize.mk b/reproduce/analysis/make/initialize.mk
index 7e9e938..7f0c514 100644
--- a/reproduce/analysis/make/initialize.mk
+++ b/reproduce/analysis/make/initialize.mk
@@ -97,11 +97,9 @@ endif
# doesn't change anything.
ifeq (x$(GROUP-NAME),x)
texbtopdir = build
-final-paper = paper.pdf
else
user = $(shell whoami)
texbtopdir = build-$(user)
-final-paper = paper-$(user).pdf
endif
texbdir = $(texdir)/$(texbtopdir)
tikzdir = $(texbdir)/tikz
@@ -139,6 +137,7 @@ curdir := $(shell echo $$(pwd))
# we are also going to overwrite 'TEXINPUTS' just before 'pdflatex'.
.ONESHELL:
.SHELLFLAGS = -ec
+export TERM=xterm
export TEXINPUTS :=
export CCACHE_DISABLE := 1
export PATH := $(installdir)/bin
@@ -210,8 +209,11 @@ $(lockdir): | $(bsdir); mkdir $@
# Version and distribution tarball definitions
-project-commit-hash := $(shell if [ -d .git ]; then \
- echo $$(git describe --dirty --always --long); else echo NOGIT; fi)
+project-commit-hash := $(shell \
+ if [ -d .git ]; then \
+ export LD_LIBRARY_PATH="$(installdir)/lib"; \
+ echo $$($(installdir)/bin/git describe --dirty --always --long); \
+ else echo NOGIT; fi)
project-package-name := maneaged-$(project-commit-hash)
project-package-contents = $(texdir)/$(project-package-name)
@@ -287,15 +289,19 @@ $(project-package-contents): paper.pdf | $(texdir)
dir=$@
rm -rf $$dir
mkdir $$dir
+ curdir=$$(pwd)
# Build a small Makefile to help in automating the paper building
# (including the bibliography).
m=$$dir/Makefile
echo "paper.pdf: paper.tex paper.bbl" > $$m
- printf "\tpdflatex -shell-escape -halt-on-error paper\n" >> $$m
+ printf "\tlatex -shell-escape -halt-on-error paper\n" >> $$m
+ printf "\tlatex -shell-escape -halt-on-error paper\n" >> $$m
+ printf "\tdvips paper.dvi\n" >> $$m
+ printf "\tps2pdf -dNOSAFER paper.ps\n" >> $$m
echo "paper.bbl: tex/src/references.tex" >> $$m
- printf "\tpdflatex -shell-escape -halt-on-error paper\n" >> $$m
- printf "\tbiber paper\n" >> $$m
+ printf "\tlatex -shell-escape -halt-on-error paper\n" >> $$m
+ printf "\tbibtex paper\n" >> $$m
echo ".PHONY: clean" >> $$m
echo "clean:" >> $$m
printf "\trm -f *.aux *.auxlock *.bbl *.bcf\n" >> $$m
@@ -321,7 +327,7 @@ $(project-package-contents): paper.pdf | $(texdir)
# To keep the sub-directory structure, we are packaging the files
# with Tar, piping it, and unpacking it in the archive directory. So
# afterwards we need to come back to the current directory.
- tar -c -f - $$(git ls-files reproduce tex/src) \
+ tar -c -f - $$(git ls-files peer-review reproduce tex/src) \
| (cd $$dir ; tar -x -f -)
cd $(curdir)
@@ -335,6 +341,8 @@ $(project-package-contents): paper.pdf | $(texdir)
# using Bash's extended globbing ('extglob') for excluding this
# directory.
shopt -s extglob
+ cp -r tex/img $$dir/tex/img
+ cp tex/tikz/*.eps $$dir/tex/tikz
cp -r tex/build/!($(project-package-name)) $$dir/tex/build
# Clean up the $(texdir)/build* directories in the archive (when
@@ -362,8 +370,8 @@ $(project-package-contents): paper.pdf | $(texdir)
# problems on the arXiv server.
cp tex/build/build/paper.bbl $$dir/
tltopdir=.local/texlive/maneage/texmf-dist/tex/latex
- find $$tltopdir/biblatex/ -maxdepth 1 -type f -print0 \
- | xargs -0 cp -t $$dir
+ #find $$tltopdir/biblatex/ -maxdepth 1 -type f -print0 \
+ # | xargs -0 cp -t $$dir
# Just in case the package users want to rebuild some of the figures
# (manually un-comment the 'makepdf' command we commented above),
@@ -375,7 +383,8 @@ $(project-package-contents): paper.pdf | $(texdir)
# PROJECT SPECIFIC
# ----------------
# Put any project-specific distribution steps here.
-
+ cd $$curdir
+ cp tex/build/build/appendix.bbl $$dir/
# ----------------
# Clean temporary files that may have been created by text editors.
@@ -514,6 +523,11 @@ $(mtexdir)/initialize.tex: | $(mtexdir)
echo "\newcommand{\projectgitrepo}{$(metadata-git-repository)}" >> $@
echo "\newcommand{\projectcopyrightowner}{$(metadata-copyright-owner)}" >> $@
+ # arXiv/Zenodo identifier (necessary for download link):
+ echo "\newcommand{\projectarxivid}{$(metadata-arxiv)}" >> $@
+ v=$$(echo $(metadata-doi-zenodo) | sed -e's/\./ /g' | awk '{print $$NF}')
+ echo "\newcommand{\projectzenodoid}{$$v}" >> $@
+
# Calculate the latest Maneage commit used to build this project:
# - The project may not have the 'maneage' branch (for example
# after cloning from a fork that didn't include it!). In this
diff --git a/reproduce/analysis/make/paper.mk b/reproduce/analysis/make/paper.mk
index 740dc7d..da2702c 100644
--- a/reproduce/analysis/make/paper.mk
+++ b/reproduce/analysis/make/paper.mk
@@ -18,6 +18,7 @@
+
# LaTeX macros for paper
# ----------------------
#
@@ -44,7 +45,7 @@ $(mtexdir)/project.tex: $(mtexdir)/verify.tex
# If no PDF is requested, or if LaTeX isn't available, don't continue
# to building the final PDF. Otherwise, merge all the TeX macros into
# one for building the PDF.
- @if [ -f .local/bin/pdflatex ] && [ x"$(pdf-build-final)" = xyes ]; then
+ @if [ -f .local/bin/latex ] && [ x"$(pdf-build-final)" = xyes ]; then
# Put a LaTeX input command for all the necessary macro files.
# 'hardware-parameters.tex' is created in 'configure.sh'.
@@ -54,6 +55,11 @@ $(mtexdir)/project.tex: $(mtexdir)/verify.tex
echo "\input{tex/build/macros/$$t.tex}" >> $$projecttex
done
+# Possibly tell the LaTeX source (through the '\separatesupplement'
+# macro) that the appendices go into a separate supplement PDF.
+ if [ x"$(separatesupplement)" = x1 ]; then
+ echo "\newcommand{\separatesupplement}{}" >> $$projecttex
+ fi
+
# Possibly highlight the '\new' parts of the text.
if [ x"$(highlightnew)" = x1 ]; then
echo "\newcommand{\highlightnew}{}" >> $$projecttex
@@ -95,24 +101,39 @@ $(mtexdir)/project.tex: $(mtexdir)/verify.tex
# The bibliography
# ----------------
#
-# We need to run the 'biber' program on the output of LaTeX to generate the
-# necessary bibliography before making the final paper. So we'll first have
-# one run of LaTeX (similar to the 'paper.pdf' recipe), then 'biber'.
+# We need to run the 'bibtex' program on the output of LaTeX to generate
+# the necessary bibliography before making the final paper. So we'll first
+# have one run of LaTeX (similar to the 'paper.pdf' recipe), then 'bibtex'.
#
# NOTE: '$(mtexdir)/project.tex' is an order-only-prerequisite for
# 'paper.bbl'. This is because we need to run LaTeX in both the 'paper.bbl'
# recipe and the 'paper.pdf' recipe. But if 'tex/src/references.tex' hasn't
# been modified, we don't want to re-build the bibliography, only the final
# PDF.
-$(texbdir)/paper.bbl: tex/src/references.tex $(mtexdir)/dependencies-bib.tex \
- | $(mtexdir)/project.tex
+bbls = $(foreach t,$(subst .pdf,,$(top-pdfs)),$(texbdir)/$(t).bbl)
+$(bbls): $(texbdir)/%.bbl: tex/src/references.tex \
+ $(mtexdir)/dependencies-bib.tex | $(mtexdir)/project.tex
+
# If '$(mtexdir)/project.tex' is empty, don't build PDF.
@macros=$$(cat $(mtexdir)/project.tex)
if [ x"$$macros" != x ]; then
-# We'll run LaTeX first to generate the '.bcf' file (necessary for
-# 'biber') and then run 'biber' to generate the '.bbl' file.
+# Unfortunately BibTeX can't be told to look in a special directory
+# for the references, so we'll link them into the LaTeX build
+# directory.
p=$$(pwd)
+ if ! [ -L $(texbdir)/references.bib ]; then
+ ln -sf $$p/tex/src/references.tex $(texbdir)/references.bib
+ fi
+
+# Link the improved IEEE bst file into the build directory. The
+# improved bst file provides ArXiv clickable URLs and if available,
+# open-access URLs based on the DOIs, with closed-access URLs as a
+# fallback, via https://oadoi.org .
+ ln -sf $$p/tex/src/IEEEtran_openaccess.bst $(texbdir)/
+
+# We'll run LaTeX first to generate the '.aux' file (necessary for
+# 'bibtex') and then run 'bibtex' to generate the '.bbl' file.
export TEXINPUTS=$$p:
cd $(texbdir);
@@ -122,15 +143,61 @@ $(texbdir)/paper.bbl: tex/src/references.tex $(mtexdir)/dependencies-bib.tex \
# will be built anyway once this rule is done.
rm -f $@
+# Put a link to the main LaTeX source that we want to build.
+ if [ $* = paper ]; then sdir="$$p"
+ else sdir="$$p"/tex/src
+ fi
+ ln -sf "$$sdir"/$*.tex ./
+
# The '-shell-escape' option is "normally disallowed for
# security reasons" according to the 'info pdflatex' manual, but is
# enabled here in order to allow the use of PGFPlots. If you do not
# use PGFPlots, then you should remove the '-shell-escape' option
# for better security. See https://savannah.nongnu.org/task/?15694
# for details.
- pdflatex -shell-escape -halt-on-error "$$p"/paper.tex
- biber paper
+ latex -shell-escape -halt-on-error $*.tex
+
+# When we are building the main paper and the appendices are to be
+# built within the main paper's PDF, we need two bibliographies:
+# one for the main body, and one for the appendix. For this, we use
+# 'multibib'. Multibib creates a separate '.aux' file for each
+# bibliography.
+ bibtex $*
+ if [ x"$(separatesupplement)" != x1 ]; then
+ bibtex appendix
+ fi
+
+# Hack: tidy up eprint+doi style that didn't work in .bst file.
+# TODO (better): read Part 4 of
+# http://mirrors.ctan.org/info/bibtex/tamethebeast/ttb_en.pdf and
+# fix the .bst style properly.
+ cp -pv $*.bbl $*-tmp.bbl \
+ && sed -e "s/\'/EOLINE/g" $*-tmp.bbl \
+ | tr -d '\n' \
+ | sed -e 's/\([0-9]\)\( \|EOLINE\)}/\1}/g' \
+ | sed -e 's/\([^,]\) *\( \|EOLINE\) *\\eprint/\1, \\eprint/g' \
+ | sed -e 's/\([^,]\) *\( \|EOLINE\) *\\doi/\1, \\doi/g' \
+ | sed -e 's/EOLINE/\n/g' > $*.bbl
+ if [ x"$(separatesupplement)" != x1 ]; then
+ cp -pv appendix.bbl appendix-tmp.bbl \
+ && sed -e "s/\'/EOLINE/g" appendix-tmp.bbl \
+ | tr -d '\n' \
+ | sed -e 's/\([0-9]\)\( \|EOLINE\)}/\1}/g' \
+ | sed -e 's/\([^,]\) *\( \|EOLINE\) *\\eprint/\1, \\eprint/g' \
+ | sed -e 's/\([^,]\) *\( \|EOLINE\) *\\doi/\1, \\doi/g' \
+ | sed -e 's/EOLINE/\n/g' > appendix.bbl
+ fi
+
+# Paper-specific hacks for reducing very-long author lists.
+ cp -pv $*.bbl $*-tmp.bbl \
+ && sed -e "s/\'/EOLINE/g" $*-tmp.bbl \
+ | tr -d '\n' \
+ | sed -e 's;, D\..Chong[^{]*Forstag; et al.\\/;' \
+ | sed -e 's;, V\..Khodiyar[^{]*Whyte; et al.\\/;' \
+ | sed -e 's/EOLINE/\n/g' > $*.bbl
+# The pre-final run of LaTeX after 'paper.bbl' was created.
+ latex -shell-escape -halt-on-error $*.tex
fi
@@ -145,7 +212,8 @@ $(texbdir)/paper.bbl: tex/src/references.tex $(mtexdir)/dependencies-bib.tex \
# to run everything cleanly from there, it is necessary to add the current
# directory (top project directory) to the 'TEXINPUTS' environment
# variable.
-paper.pdf: $(mtexdir)/project.tex paper.tex $(texbdir)/paper.bbl
+$(top-pdfs): %.pdf: $(mtexdir)/project.tex paper.tex \
+ tex/src/appendix-*.tex $(texbdir)/%.bbl
# If '$(mtexdir)/project.tex' is empty, don't build the PDF.
@macros=$$(cat $(mtexdir)/project.tex)
@@ -158,11 +226,17 @@ paper.pdf: $(mtexdir)/project.tex paper.tex $(texbdir)/paper.bbl
# See above for a warning and brief discussion on the pdflatex
# option '-shell-escape'.
- pdflatex -shell-escape -halt-on-error "$$p"/paper.tex
+ latex -shell-escape -halt-on-error $*.tex
+
+# Convert the DVI to PostScript, and the PostScript to PDF. The
+# '-dNOSAFER' option to GhostScript allows transparencies in the
+# conversion from PostScript to PDF, see
+# https://www.ghostscript.com/doc/current/Language.htm#Transparency
+ dvips $*.dvi
+ ps2pdf -dNOSAFER $*.ps
# Come back to the top project directory and copy the built PDF
# file here.
cd "$$p"
- cp $(texbdir)/$@ $(final-paper)
-
+ cp $(texbdir)/$*.pdf $@
fi
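+
+# For reference, the separate appendix bibliography relies on the
+# 'multibib' LaTeX package, declared in the preamble presumably along
+# these lines (a sketch, not the exact preamble of this project):
+#     \usepackage{multibib}
+#     \newcites{appendix}{Appendix references}
+# after which '\citeappendix', '\bibliographystyleappendix' and
+# '\bibliographyappendix' write the separate 'appendix.aux' that the
+# 'bibtex appendix' call above processes.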
diff --git a/reproduce/analysis/make/top-make.mk b/reproduce/analysis/make/top-make.mk
index 4e95c54..7755174 100644
--- a/reproduce/analysis/make/top-make.mk
+++ b/reproduce/analysis/make/top-make.mk
@@ -64,8 +64,13 @@ include reproduce/software/config/LOCAL.conf
# If you are just interested in the processing and don't want to build the
# PDF, you can skip the creation of the final PDF by giving a value of
# 'yes' to 'pdf-build-final' in 'reproduce/analysis/config/pdf-build.conf'.
+ifeq ($(separatesupplement),0)
+top-pdfs = paper.pdf
+else
+top-pdfs = paper.pdf supplement.pdf
+endif
ifeq (x$(maneage_group_name),x$(GROUP-NAME))
-all: paper.pdf
+all: $(top-pdfs)
else
all:
@if [ "x$(GROUP-NAME)" = x ]; then \
@@ -112,7 +117,8 @@ endif
# wild-card like the configuration Makefiles).
makesrc = initialize \
download \
- delete-me \
+ format \
+ demo-plot \
verify \
paper
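+# (The order above matters: later Makefiles can use variables defined
+# in earlier ones; for example, 'demo-plot' uses the table prepared by
+# 'format'.)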
diff --git a/reproduce/analysis/make/verify.mk b/reproduce/analysis/make/verify.mk
index d3f3282..ac91089 100644
--- a/reproduce/analysis/make/verify.mk
+++ b/reproduce/analysis/make/verify.mk
@@ -131,16 +131,15 @@ $(mtexdir)/verify.tex: $(foreach s, $(verify-dep), $(mtexdir)/$(s).tex)
rm -f $@.tmp
# Verify the figure datasets.
- $(call verify-txt-no-comments-no-space, \
- $(dm-squared), 6b6d3b0f9c351de53606507b59bca5d1, $@.tmp)
- $(call verify-txt-no-comments-no-space, \
- $(dm-img-histogram), b1f9c413f915a1ad96078fee8767b16c, $@.tmp)
+ $(call verify-txt-no-comments-leading-space, \
+ $(a2mk20f1c), 76fc5b13495c4d8e8e6f8d440304cf69)
# Verify TeX macros (the values that go into the PDF text).
for m in $(verify-check); do
file=$(mtexdir)/$$m.tex
- if [ $$m == download ]; then s=49e4e9f049aa9da0453a67203d798587
- elif [ $$m == delete-me ]; then s=711e2f7fa1f16ecbeeb3df6bcb4ec705
+ if [ $$m == download ]; then s=5d0ab54ca95366d1aab12196966dd3b6
+ elif [ $$m == format ]; then s=e04d95a539b5540c940bf48994d8d45f
+ elif [ $$m == demo-plot ]; then s=48bffe6cf8db790c63a33302d20db77f
else echo; echo "'$$m' not recognized."; exit 1
fi
$(call verify-txt-no-comments-no-space, $$file, $$s, $@.tmp)
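+
+# When the analysis changes, the new checksum of a macro file can be
+# computed manually; for example (a sketch, assuming the verification
+# function strips comment lines and all white space, as its name
+# suggests):
+#     sed -e '/^#/d' -e 's/[[:space:]]//g' \
+#         tex/build/macros/format.tex | md5sum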
diff --git a/reproduce/software/config/TARGETS.conf b/reproduce/software/config/TARGETS.conf
index ef95f57..94c7e5f 100644
--- a/reproduce/software/config/TARGETS.conf
+++ b/reproduce/software/config/TARGETS.conf
@@ -36,7 +36,10 @@
# Programs and libraries (for Python or R modules, use respective variable).
-top-level-programs = gnuastro
+#
+# Ghostscript: to build PDF paper (in particular the `ps2pdf' command).
+# XLSX I/O: to read and write XLSX files.
+top-level-programs = ghostscript xlsxio
# Python libraries/modules.
top-level-python =
diff --git a/reproduce/software/config/software_acknowledge_context.sh b/reproduce/software/config/software_acknowledge_context.sh
index 4dfb407..f719d5e 100755
--- a/reproduce/software/config/software_acknowledge_context.sh
+++ b/reproduce/software/config/software_acknowledge_context.sh
@@ -10,8 +10,8 @@
# your project to make a smoothly readable English text. Afterwards, please
# feel free to modify them as you wish.
#
-# Copyright (C) 2021-2022 Boud Roukema <boud@cosmo.torun.pl>
-# Copyright (C) 2021-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
+# Copyright (C) 2020-2022 Boud Roukema <boud@cosmo.torun.pl>
+# Copyright (C) 2020-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
#
# This script is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
diff --git a/reproduce/software/config/texlive-packages.conf b/reproduce/software/config/texlive-packages.conf
index ff3dad0..05dd0e2 100644
--- a/reproduce/software/config/texlive-packages.conf
+++ b/reproduce/software/config/texlive-packages.conf
@@ -16,11 +16,7 @@
# the basic installation scheme that we used to install tlmgr, they will be
# ignored in the 'tlmgr install' command, but will be used later when we
# want their versions.
-#
-# fancyvrb: Needed by R.
-texlive-packages = tex fancyhdr ec newtx fontaxes xkeyval etoolbox xstring \
- xcolor setspace caption footmisc datetime fmtcount \
- titlesec preprint ulem biblatex biber logreq pgf pgfplots \
- fp courier tex-gyre txfonts times csquotes kastrup \
- trimspaces pdftexcmds pdfescape letltxmacro bitset \
- mweights fancyvrb
+texlive-typewriter-pkgs = courier inconsolata xkeyval upquote
+texlive-packages = times ieeetran cite xcolor pgfplots ps2eps \
+ listings ulem etoolbox multibib \
+ $(texlive-typewriter-pkgs)
diff --git a/reproduce/software/config/versions.conf b/reproduce/software/config/versions.conf
index 2a27ddd..34d074c 100644
--- a/reproduce/software/config/versions.conf
+++ b/reproduce/software/config/versions.conf
@@ -152,6 +152,7 @@ tides-version = 2.0
util-linux-version = 2.37.2
valgrind-version = 3.18.1
vim-version = 8.2
+xlsxio-version = 0.2.21
yaml-version = 0.2.5
# Xorg packages
@@ -286,10 +287,6 @@ wheel-version = 0.37.0
# it.
#healpix-version = 3.50
-# XLSX I/O (until version 0.2.29) crashes during compilation with GCC
-# 11.1.0, so we are temporarily commenting it.
-#xlsxio-version = 0.2.21
-
# Setuptools-rust crash (https://savannah.nongnu.org/bugs/index.php?61731),
# so it and its dependencies are being ignored: 'cryptography', and thus
# 'secretstorage' and thus 'keyring' and thus 'astroquery'.
diff --git a/tex/img/icon-collaboration.eps b/tex/img/icon-collaboration.eps
new file mode 100644
index 0000000..5817a32
--- /dev/null
+++ b/tex/img/icon-collaboration.eps
@@ -0,0 +1,1150 @@
+%!PS-Adobe-3.0 EPSF-3.0
+%%
+%% Copyright (C) 2020-2022 Marjan Akbari <mrjakbari@gmail.com>
+%%
+%% This image is available under Creative Commons Attribution-ShareAlike
+%% (CC BY-SA). License URL: https://creativecommons.org/licenses/by-sa/4.0
+%%
+%%Creator: cairo 1.16.0 (https://cairographics.org)
+%%CreationDate: Sat Jun 13 00:07:25 2020
+%%Pages: 1
+%%DocumentData: Clean7Bit
+%%LanguageLevel: 3
+%%BoundingBox: 0 0 186 185
+%%EndComments
+%%BeginProlog
+50 dict begin
+/q { gsave } bind def
+/Q { grestore } bind def
+/cm { 6 array astore concat } bind def
+/w { setlinewidth } bind def
+/J { setlinecap } bind def
+/j { setlinejoin } bind def
+/M { setmiterlimit } bind def
+/d { setdash } bind def
+/m { moveto } bind def
+/l { lineto } bind def
+/c { curveto } bind def
+/h { closepath } bind def
+/re { exch dup neg 3 1 roll 5 3 roll moveto 0 rlineto
+ 0 exch rlineto 0 rlineto closepath } bind def
+/S { stroke } bind def
+/f { fill } bind def
+/f* { eofill } bind def
+/n { newpath } bind def
+/W { clip } bind def
+/W* { eoclip } bind def
+/BT { } bind def
+/ET { } bind def
+/BDC { mark 3 1 roll /BDC pdfmark } bind def
+/EMC { mark /EMC pdfmark } bind def
+/cairo_store_point { /cairo_point_y exch def /cairo_point_x exch def } def
+/Tj { show currentpoint cairo_store_point } bind def
+/TJ {
+ {
+ dup
+ type /stringtype eq
+ { show } { -0.001 mul 0 cairo_font_matrix dtransform rmoveto } ifelse
+ } forall
+ currentpoint cairo_store_point
+} bind def
+/cairo_selectfont { cairo_font_matrix aload pop pop pop 0 0 6 array astore
+ cairo_font exch selectfont cairo_point_x cairo_point_y moveto } bind def
+/Tf { pop /cairo_font exch def /cairo_font_matrix where
+ { pop cairo_selectfont } if } bind def
+/Td { matrix translate cairo_font_matrix matrix concatmatrix dup
+ /cairo_font_matrix exch def dup 4 get exch 5 get cairo_store_point
+ /cairo_font where { pop cairo_selectfont } if } bind def
+/Tm { 2 copy 8 2 roll 6 array astore /cairo_font_matrix exch def
+ cairo_store_point /cairo_font where { pop cairo_selectfont } if } bind def
+/g { setgray } bind def
+/rg { setrgbcolor } bind def
+/d1 { setcachedevice } bind def
+/cairo_data_source {
+ CairoDataIndex CairoData length lt
+ { CairoData CairoDataIndex get /CairoDataIndex CairoDataIndex 1 add def }
+ { () } ifelse
+} def
+/cairo_flush_ascii85_file { cairo_ascii85_file status { cairo_ascii85_file flushfile } if } def
+/cairo_image { image cairo_flush_ascii85_file } def
+/cairo_imagemask { imagemask cairo_flush_ascii85_file } def
+%%EndProlog
+%%BeginSetup
+%%EndSetup
+%%Page: 1 1
+%%BeginPageSetup
+%%PageBoundingBox: 0 0 186 185
+%%EndPageSetup
+q 0 0 186 185 rectclip
+1 0 0 -1 0 185 cm q
+0 g
+8.503937 w
+0 J
+0 j
+[] 0.0 d
+4 M q 1 0 0 1 0 0 cm
+52.598 25.387 m 52.598 35.629 44.352 43.934 34.176 43.934 c 24 43.934 15.754
+ 35.629 15.754 25.387 c 15.754 15.145 24 6.84 34.176 6.84 c 44.352 6.84
+52.598 15.145 52.598 25.387 c S Q
+q 1 0 0 1 0 0 cm
+58.586 49.727 m 54.84 49.82 51.078 49.754 47.359 50.254 c 44.746 52.09
+43.156 55.004 40.5 56.789 c 37.953 58.488 34.684 59.215 31.684 58.453 c
+28.895 57.727 26.512 55.898 24.727 53.695 c 23.613 52.391 22.59 50.914 21.035
+ 50.09 c 19.727 49.52 18.262 49.723 16.871 49.684 c 13.691 49.695 10.512
+ 49.707 7.332 49.719 c 6.004 50.328 5.211 51.852 5.398 53.281 c 5.398 69.582
+ 5.398 85.883 5.398 102.184 c 5.852 103.473 7.289 104.293 8.637 104.109
+c 26.137 104.105 43.637 104.098 61.133 104.09 c 62.41 103.652 63.246 102.258
+ 63.066 100.934 c 63.066 84.5 63.066 68.066 63.066 51.633 c 62.691 50.359
+ 61.313 49.555 60.008 49.68 c 59.535 49.695 59.059 49.711 58.586 49.727
+c h
+58.586 49.727 m S Q
+q 1 0 0 1 0 0 cm
+170.574 100.582 m 170.574 110.828 162.324 119.133 152.152 119.133 c 141.977
+ 119.133 133.727 110.828 133.727 100.582 c 133.727 90.34 141.977 82.035
+152.152 82.035 c 162.324 82.035 170.574 90.34 170.574 100.582 c S Q
+q 1 0 0 1 0 0 cm
+176.563 124.922 m 172.816 125.016 169.055 124.953 165.336 125.453 c 162.723
+ 127.289 161.129 130.203 158.473 131.984 c 155.926 133.684 152.66 134.41
+ 149.66 133.648 c 146.871 132.922 144.488 131.094 142.699 128.891 c 141.586
+ 127.59 140.566 126.113 139.012 125.289 c 137.699 124.715 136.238 124.918
+ 134.848 124.879 c 131.668 124.891 128.488 124.902 125.305 124.918 c 123.977
+ 125.527 123.188 127.051 123.375 128.48 c 123.375 144.781 123.375 161.082
+ 123.375 177.383 c 123.828 178.668 125.266 179.488 126.613 179.309 c 144.113
+ 179.301 161.609 179.297 179.109 179.289 c 180.387 178.848 181.219 177.453
+ 181.043 176.133 c 181.043 159.699 181.043 143.262 181.043 126.828 c 180.664
+ 125.559 179.285 124.75 177.984 124.879 c 177.508 124.891 177.035 124.906
+ 176.563 124.922 c h
+176.563 124.922 m S Q
+0.92549 g
+129.355 108.895 m 129.355 111.129 127.543 112.938 125.309 112.938 c 123.078
+ 112.938 121.266 111.129 121.266 108.895 c 121.266 106.66 123.078 104.848
+ 125.309 104.848 c 127.543 104.848 129.355 106.66 129.355 108.895 c f*
+0.109804 0.12549 0 rg
+3.685039 w
+q 1 0 0 1 0 0 cm
+129.355 108.895 m 129.355 111.129 127.543 112.938 125.309 112.938 c 123.078
+ 112.938 121.266 111.129 121.266 108.895 c 121.266 106.66 123.078 104.848
+ 125.309 104.848 c 127.543 104.848 129.355 106.66 129.355 108.895 c S Q
+0.92549 g
+103.313 122.773 m 103.313 127.43 99.543 131.203 94.887 131.203 c 90.234
+ 131.203 86.461 127.43 86.461 122.773 c 86.461 118.121 90.234 114.348 94.887
+ 114.348 c 99.543 114.348 103.313 118.121 103.313 122.773 c f*
+0.109804 0.12549 0 rg
+q 1 0 0 1 0 0 cm
+103.313 122.773 m 103.313 127.43 99.543 131.203 94.887 131.203 c 90.234
+ 131.203 86.461 127.43 86.461 122.773 c 86.461 118.121 90.234 114.348 94.887
+ 114.348 c 99.543 114.348 103.313 118.121 103.313 122.773 c S Q
+q
+0 0 186 185 re W n
+% Fallback Image: x=0 y=0 w=186 h=185 res=300ppi size=1792575
+[ 186 0 0 -185.04 0 185.04 ] concat
+/cairo_ascii85_file currentfile /ASCII85Decode filter def
+/DeviceRGB setcolorspace
+<<
+ /ImageType 1
+ /Width 775
+ /Height 771
+ /Interpolate false
+ /BitsPerComponent 8
+ /Decode [ 0 1 0 1 0 1 ]
+ /DataSource cairo_ascii85_file /FlateDecode filter
+ /ImageMatrix [ 775 0 0 -771 0 771 ]
+>>
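+% 775 x 771 px, 8-bit-per-channel RGB raster; the pixel data follows as
+% Flate-compressed, ASCII85-encoded bytes read from currentfile, and the
+% ImageMatrix maps that pixel grid onto the 186 x 185.04 pt area set up by
+% the concat above.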
+cairo_image
+% [ASCII85-encoded, Flate-compressed pixel data of the fallback raster omitted]
+ j,18:XKkO%"l+]W5UK@%tA'bqJI1C/N<[.KqQR)=WpG5<m^bC@R8FbPNV"q9)E5EJi&P0-6
+ rp"8W:o!5aFo+t2O]am$s?_0F\qpOR^)`KFZ*7^V[[t_UU$d3EsqWug@;%#hR7EU<kq"XXZ
+ 0bFs.n;N3Z%j,,kHI^[#Qqe`kda?H&FQcPMIQi[H91hh1Z=SrKL'D`IpnaXVd'RLc2)R6H)
+ M[.2q(huRdA'39IktZY'YA;GbaIMd?^irjErCaBH1u5jIN89$.,):_';J1jqj]A`eQ]oH\T
+ <OrEn&AJ&[ZH33p9/nTSo.31Ksd1L?atsOdqroXa=hLKp*StaZMW\8g_5OHC:dUn"i_W>t&
+ $g(k9W&QZ@CA/PjD&,Lf&BP*-GW*t&Ga7u\EUVjf4oB[C/bq!hZs?6;C\XLTQ%,pbiC6"J,
+ ap[/L`KPhinOCN$?8g;OXq=C/ISG)F/n*0(>R=Rq3NK0#o:/3/7(&1M<'+tqVp$pX11jWDF
+ 7`E=/WY]b#QT/'L>Tl+tg.<KoQ7n$(6SaF.A>JNH]E(l@QACP-BDR7Te+C40NJi9<fJ3q0&
+ jd3#p$5[Vc:-W5kg7/gb%fnPKd93*hnIpseWmeE>Inq@aH7^T[C,>MW<7R!4L)B)LUra0?m
+ mKUr5.ZlJke@aUl%JTElm>.m9R2=?#"#.Lg.9El5c$6_7Ts3)'t0`hIe==a5aoVY^?9QD!S
+ $r&Ge3dU='F3KS1PDW2QXjT0@?X/'5jM"ZX\UTdVVY%\n>*bAK.Y$(!2,5!G?iI^3tR7d+R
+ FPk,\d1+E@K<mLVL5TW,#YLU0,Gg>9]l0'W\#n<V8j+U6K64lLY>"q^c3rP=;ldr=qSRTB2
+ jLTasUQXWqGm(^3999Ah*cUI`eH-m?-0=R_%Qui-Od;<e;+lK7,1*Bog5e*:4T&=u_\V0EL
+ R6TB\5d(h9=iZE&r?ra6t0V7M\[o.FBR$C=gA0VH6GZ'W%gG\0.AC91AJO=C0nT"eZraZC+
+ "a\Ocu"#naV1%,48k8I+ZrUU`[e$S%!CNql,3\Ssl/%5crPc[o=k>7?SFDLu@3!H$OZr]p9
+ XZkK\dmNp&u0Tr>?j/#,`=#$Z2#0&Da8.4Iii+*gXJP9s"1]QaH?;&liVc[&%]j1=]5T8Ha
+ "Ibq_kbcX[7g$Ij\rVW)Wdl_?-^edFh4F*)bY9h,c;548C4%a[h6;^/3$V#h]mCJ"u`EQ6!
+ (bpWr.3ZEIF`gV];k`B3$\F!/)3k9TmAm9=ALC7?=4=5Q*eAcV*OQSD]oLI>j^9>i'G8NS/
+ Ygd7c8.74.k3$#T$RFe$:X-jN\W(;pgnh6nE.*jRlksOoQ5(=<`YNp?6e<"_;qR_WiK`/Y?
+ s$\:gU);XA.T^\0rh,*14[!U@RkqHjRp0cCI&[k2o!+k.dQUd9<FO]U+uYW`"(-&I#X]h3+
+ %gV05%A^l]jdn\5'_T/;Z(C1,DjGmsqZU8$Z,j\rBb?gDiOHIgPN'X!Ih6XiCpOdDD)p2i@
+ OkG/S.3kTh6%XK&n#?3*\FPFeVI6'W[D=/W]B#foPWLXIQd'-rrnF1D\W=sX()jW6O*OS(6
+ 4u`5R`\#@pkpTGi6X$$"\o(lGQR>*QHRY';:dh+UV;ZJpfkK$==\MKtoE>.%.MS#3Hb>]6=
+ gP6'b;T)H>e#mLRD:,$"q?0U3jEnsr,G3Zj3cle?AdM;[s.$*3F1ql^i/@uUFo`,U*LpNL`
+ o.p8L'@]K+Kc&[R!9\U11-u;,`IR.BT:u-pipD#mj&FM@`/^8IUH7`,VfU+lqM93]G[<Qb<
+ 3DpKl0PC]D,@of7NsUF5qACH_"X:4(m2H"1ai,W>D2QK$8M\/6ZRWR[t*9]r*?P+%hs)jn\
+ m<UBMNj4%"?!(eTMIJ]>3L,3TTSiijP++O49b0#Up8e;F!>\'_\VbbBK#dV1Xr7c/)I^O'-
+ BWlTeSLbK$6IL@iB-6%()j]Bqo:%c]RKEgaP2t\7I.tF$2E!HTG3kln1rDjNOkcS#IIl8ul
+ F&:MU8"@d%2pp)#3'WEFNhkKTq4`6'c.\0\qT1g:HpV.fB)'uMZ<U<Y2*<Os0-k7^X0Z,g\
+ D5S$%2Ijiuo+"Z$>eJNB^lP,GZ.6)`MZ1T!%6##t<U3^R!E%;NCcNh6oa1Dr*TgCHtir;,P
+ ila>^j62rbjoqiSS:BmR7,hqGVfbN()sHh[2KiDme:s&77pYJ'ats-2bXIbL!-kO*15]!!4
+ fMTWlXa0R2u^SKYPOJ(k8<02d---Km>hZ.#-[O`iJVs`6Tb0.tLFED'%#"3$<CR^WU7Qm.Z
+ $QIaOpn#j3X/CP4:s)>,do)?l2fIE_C/s917>p2<Ieo3!^PT!-8cL"5HeOnJHN&#f>R3bo_
+ e[I4O:ot#`J^^nW#R(VB^54#NGj>!>M#ZgfHQe=@03b=+L_:Jqbp9;a2X+*EK:U52RR8R-9
+ Ma2kS\I(b[FE]m`_7cRsK+t7hU\KN>Z,O!t5\0U2JW#P%MILjiYd_JCn[SCf421WMbt^$Pp
+ uD^*hEGn.KTmKVot-rRJ8he(E0'(D[c%naZ/!dpa#S:_YglPOT)9Cg^o@L+TKf5Q7&'m-?r
+ *6j31InnRR'"QOi\1iD3b7GBhr$pp(B#mgq54k^Rt$c^_t=&VnNYt.2YVllW2GJ=`bn>V'j
+ Y$JY?ZXupDZ`aW-p@ah*iR[=l84rk^<[c,9^.@!=TaX98oBb8CZVmVlCl4uj'e<=4qA+e=7
+ hU^5]i!,NmPfF6.-cO=VNiZ517NUgQni4[ou(n8k9Kk8Kqd%1.C>WZarhD3_"$I**?6'Oe4
+ 5.k<2jp%clq[GOG>(q[IgV9I.s*L5FK)kMlCRJ[`kUT5?qK"W@+!)B@!Er$:W!5%a]Ao."b
+ @g@aZFciShVM7%.#_DV]Bspj4>C8LX.Creg7%DuRTke.#VI;L2t6ms;<gqf_8hj\KL>"^.P
+ _i1\u6SJ0-g'UA"@]mC>'&T"JC&])<"s8472:V@nX71J'');kCY/=T;gYQZk0oOqk.XuG=u
+ iRH?"TBtpfZ9lr7H]i&(-^6nF1/_Rj@`=3>g!kn@[C,\'8Wt2[IS=$c'9RqlVu!jAGJ=kC(
+ Jm&,!S+A\T@pmCNIujNEH*`+%!/;=+CA.t9&BOVD<HR-#Xu`pFm>3LM[Q"pOf29\(3a+0H3
+ QqhHJR9&q!OKYAheGG((EGh;d3-t'(mMkk5+c:Z9ls)XW(q8-^6nFP3'u[4!R7U'_2:[SN:
+ nV2DTE+c%Sui6da</bU-Vf7ZSTb4is4V)iB*sZC\LG65bG#':_11Z:0ZeD!K#*N>aYe',+#
+ 39<%6W!/)O:Ed)_?2X1&`[6*YFjpXQg^CZ8roC1_V\@=L[4$,@N;i-Kn9iP*AX]sf?9GO6l
+ D_a+)/M&H?9tEMk>$Bo<Nj9cAnI)&3*#TV3]mBA2ZmU(af^\n8o@,'c(daAAXBrT/fo57MV
+ G3PYVj\t=7JoJ#11*XG8jGWARkcIf9q+%F[r/#pRjnu,oOd=6U!.+RcXL?mW2H(3#Y"YX-r
+ S45SNC&aCrrQA-4:.fg:(uBFfA>n:o$%pRPeS5j5tUR[F]eRI(FIdf!B!b5<lnPN>h?[H]>
+ hV5jG?2#so6=&;l]pil$\XYV7"8)`C\RP,5!dE,b<OA&kOkUH4&P?-141d<N$3gpln@fsB"
+ p3V1ie?<XB9#moJFhUI%M;-nbNFnYFM08(Nu+V^oLTV*D[/kuf#m$:t>:7=$(o&\'E-"-qM
+ 86:t<PF5.S>h,iSP;81R=0Zr9=tTB((UL?MaN_t0HU<F;:rG#?,5'a+It,@j%D@,_Y#MA5R
+ j+2u?+Y9%FL:]:Pe%(#+Va05I!e&_l'=ba0lm!S=lV.\4[!*J+274"]2u\F'F[;?^CZ8>\9
+ ID5FFXiB4Lf_1lKVBZ@`=3*!<H,s:-F""0Ybru0G?d8Y!b`K17N1A3B9*2AnK3!QFslCk09
+ Aos6VJFHM)#1-;d;>7JoJ#@d2:87n8U]?*n9B6pu^+Vl0@"3HB(Xkh*0#PUTN@*^'5U_#ru
+ gUPD[A.iS?9Vld)gJHH-N]:?3?cT4*J;I@7]c($dmg$'.e64psDM?oK3Pbb.P,pjt!)`MZU
+ O<Hc6p?^/"K854PX]i+^<2a29AQ>R!'rabc4$-+8."s&ursT_u@+qu%)`DM'9hcAeVMe&4.
+ +/iEV_FTX,FS?qN.ciel77+^iPUF$20$K#.,l%0&VpR`83c\:o<ic!e4PuUR<duYio8sp9q
+ ,$fLLFV"@7ne8n.:KPKhqmr5q(0BeC7H0F)@_/L^<UUC\I,JeCS,2f</E(^H.oN.,l%0&Vp
+ R`83aP$hp-&@NO4',DY('to]aiSf3t&MYHRb'V\9>)r'De5OWYA&7q,pOZaht`p$1(cMi/=
+ \1f]%&C,0^GDagg(%hOWnKhqmr5q(0B3HO=:RqlMeL5%1#C$tEWe5p-FM\[n/\j53t'_)R)
+ #Xj8@U`\"3dELb,(fE"elI('-I&O6gJ0n!Z.!:*X6R^q164o"VTh%BP%j&r2Y)T]+j$ojtc
+ ,7GOqJ$GS4%7.V.,l%0&VpR`83au._O/TU0od;<WmmkZo+mA]:6G&XMYMm%"<p,[dZ<7uHS
+ _.,A)u_3-urUk:7\,n'Z9G+1ZKVmnI)#r7"S72+!DdNbiZKA[^NV:?'f/qEr5UGs';U>'HT
+ RQ^CZ8f,R?j8&JPqtCp(r]X]dTYY"P6K4d5O*gK1@i'HTRQ^CZ8f,R?j8d@io4Aoh617n6'
+ </o((Ee^TrV>`k/2MHlnt+;F>RTa4&<:o#1Kf<1[>),$8H1&m-:ZEbGi*1/<c\L((AB`9mS
+ "@<^8&;i^u#XuP.Jq=%g5<ib\f*%!I/YXj12\:lAKhqmr5pudU8VsB=QS-LJY%a&_PH!ld%
+ >5pS+V]Ju&;l!"l07H7>.%hJDG><5_?K4&E<Dnj?O0(C&qhIebElRbWQiJu5,2#P9%9]s#s
+ o6=&;i^u#XuOe6\c-T"pQheY%GQ<XCX[GiYOonL]jCWZu?!Dr?3=-()@\0euJP#%NS/I2P)
+ $*nI)#r7"S5<RT3Tt]PcOd+!2S:92n2MQS,,;+&<H*pk;NI,!dUY%D$lYlE"-plIDrUCiLr
+ mjN3V8<TFHQ&\em75U6V(TIFO!S;E,\P.DaAAZ5lHB4pKrEX&4n?O0(C&qhIULg@W9p#sf@
+ g=i"KJlL'h6%c,ZaVfaUIN]DV8.^Z/gm0NrI!pF2X[<:NCtM!Y<6cU#nI)#r7"S5<KFT(fX
+ RSf'R%S3EWeH.\e>;7864psDMEiEm76i%]5<gnY;N).0ShXWg!:&k?#GHr\#XojKKHafBn(
+ ta\Y-,ta*b3VZV_0ZQ&;p7::]qt.-j%DS/mW4)B?+i=1$7ol:0h6[+VaGSTa4&<:]r=to[-
+ 1Qef0JAoB0'(QqI]Rr'De5OWY>=1XXUtbSD.SlIDpO'GOB1R7dfSr'De5OWY>=)`)-PA@Bp
+ bbaLDI]$.fR.,l%0&VpR`#T3@1gM_fDU"Ro[e>\(-igU@MLgLEV+;0>O:g6t$Q@jelZXlTP
+ +X)EJdMtV&iYOonL]jCWeCM6UDe<dD9#lr4WQWLEKHbjg'S/ddMEk1RG3rbI^o152:uUKWP
+ LPl[)aY.O&;i^u#XtDd2r8e3f`Ip(Dr*VA133u"LgLEV+;0>O:sZt1#FV\H.\WQ0e7BBr64
+ psDMEiEm7A-7An8N!?MhI"H96PJ:FdY[K:rG#?+qP&I&ZFdE[;/dC<b8_NfR)_F=&MsqWQQ
+ guKHbjg'S/ddMEiKFJ,_[<V8Zbp$5IaPgdtd]E?Gtc$_[hoJkuS"M2[C([^6m8&s3n(W!b-
+ jKHbjg'S/ddMEiI/*ZiOk(/+App05"DV66t7&;p7::]qt.-iu8:B4kFO=YX,abe3f;X8(1<
+ %1#E#67;nt+;BAFK`"0!]=[t)e#00<.jfonIN]DV8.^YD%F_4NB==tqkg?.q:/2:A?%mu/`
+ %Qcf&.(Z8W8X:khI3jqe5ih``VK,g;ccq;#Y"YX-j#MR'S.18]Y'W6.oiumf.J?qKt<f66R
+ ^q164o"VTa90+i8DCe?)aW7eZ&um/>+Vk?6BsP$_[hoJkuT-.jugMYcqNDXqYM]B,;F'R?a
+ %+>t>u%.,l%0&VpR`#S`VgP/;$4.EflVlE.PP8kVgP/tq\U'_)R)#Xj8@"H&BiRb=E5:e4-
+ udA'42Ml0JZ7JoJ#!g+T>J`9_4>/b;T.ACCsQ909VTIu_:+VaGSTa4&<:]uZF4*N<`7:i@=
+ Pq0nq`toO.;qF_3&;p7::]qt.-j%@^QnX*EU,%LrTAM<:r'F[@p]Ehp^CZ8f,R?i3\C%'A1
+ L!g*PQ",;+Jpb+r#7s7^CZ8f,R?i3\044o0O%MRR5o=8FckTH+;F>RTa4&<:]uX`$O^IQW(
+ [aD*_ouaGSWUX^CZ8f,R?i3\DFoG1L!h!g"Y7SCS&+(n195:IN]DV8.^YDEM9)"bSHIK4aZ
+ mu/e8eL:rG#?+qP&I&L]7W,_XjFP&'$nL",2b!t^@bKhqmr5pufRFm7>.fqOYqnDV:7jT,G
+ pMYMm%"<p,[!l]<Ie;;kQP/Z//]!/E#\ia@#64psDMEiEm7A)WbL8*D0CK&gh$puH&(IA^i
+ "JH)8VqFo-jlJp:)__\oLgLEV+;0>O:n2,b;,C(#F`"oJKHbjg'S/ddMEl0M%7P^AoV&Ds(
+ ktl*08(M\#so4P::$X-^jlI8[/n/VMYMm%"<p,[!l\k3'YXPGqp`"S_[q_#Khqmr5pufRBf
+ 6Y7)`EYd2II@g&\em75U6V(TXaaQH`s+i9q=>1\i*q564psDMEiEm70#R_Tnm%]73E4r+oi
+ a@:rG#?+qP&I&HIn[.,l%0&VpR`#QVph'_)R)#Xj8@"9@KoMYMm%"<p,[!W[6H7JoJ#!g+T
+ >!<>-5,(:[LJOg?05Wc,R\ao-shp.WG-SdG43&oRBf@SL#e'6.>ET>'qhVJ86!Iah:M2@'l
+ [VXXrrVNF$PkshP`?5lb$_[hoJkuU2P*;*+'\'L(;_'@Kj-N^+b*@U2JnE5oOe/.>I0F"2K
+ Hbjg'S/ddML^QuB#]cI;l<A21Y7m+V>U;jQS2^+pQ#$&:rG#?+qP&I&[6KEe>_s,E'$-CAJ
+ ^`?Zh++jo8AK`W36_B6R^q164o"VTa4aM-n'n.M,?AF4jDO-Fj$(&e/&J_+;F>RTa4&<:^#
+ !Yh7D.ifpU)oleAo;V+[/RY8E4P:rG#?+qP&I&MR$d>WdK5GZiq!2IE"9FnY1T6[e3,'SIZ
+ Z+V]Ju&;i_i3B9*NJUu%2g!abSD<'[:W=&^f6R^q164o"VTa4V_Oce%m=YjZ*bb1>j)!WkA
+ 3=q)CKhqmr5pug-S2kZ1`PmPa!dfL7K*N2/(K8ofSdV6J?O0(C&qhH*8QJlXX/dbWeQ&c.L
+ ^<:/^iLY5nI)#r7"S5<C<4ZED:u3@Z__6/ET>&9(Bt:8.,l%0&VpR`#h75/CKG-68sY/=(K
+ /ih$pZ6#(IA^i"JH)8(Ddn.Cf/=IdRu563,jDZ6LZOZ,(:[LJOg?05ZrI.m^9I1A[\O2i8W
+ aZPES@>-hn2X7JoJ#!g+T>JV'\mc,7GOfnqQ<]C3I,>q5U[#Y"YX-j#MR'S1jJc'lUf3LRe
+ 5dW5*Mh:0u,s+aFh08(M\#so4P79W>^f4)1Ybg`^s1[-$ZXr7-V64psDMEiF4Tr>,PNUbhC
+ A`=p!L(-V/NK%MXQ5$T5`<tn!_7p^"(!6kP>E^tmG"sH<:]qsc#t;"Vm#CI_bEh2A<?0*'b
+ H"[G)RHiILaGib:mVXj/a$lWXAH9FpD2K.$"X[*5U7VKW,]/D`co1O?c5f#Cae2RH1RCT,R
+ &&?\<4@Cn;"*')6uTVrW>C5G"sH<:]qsc#t6GYC)Gk,i]io!h6<T#hVR,5(6X9\FmDM&3VY
+ sfqCf-@dEuJ$Bfd],5puesOG"hHp2uA`Ia_rh&a*>aQBl,k@DQb'.EKf[m`LeFeFeG5-W6#
+ J$:UpAO3i&F."b>!+s@7]JcG3n^AE<q4d`fh6TM#>T?_6f.;ltR"k<3t)nU93P8/s11XFHl
+ :+1`k"<nuq7=F&'0Y[DnMWdgkYb<%B\o_AhMA4[KB2@gh_2XGD<lk\;s1DK.gGtr964o$,,
+ SWZWmq[=F$glK9\m[rd\GP+8dq[C26!Z^bZ"2Nmj*O+`H<-(K5Gmh<:^3U!6p_ND-;KRV^&
+ e**eYd<;n05o6+(r$8<-%!:$0j#*h6\'I`qP-uT%.Q0-9Ma2#XgulMEYB$ebB@4<8SI8*nI
+ Xildmps67Q-EAdrktokqjthT#9ms1c1p[S:ub64o$,,SY+lHDR_3&,^JA1j.-!kF+,8#.#V
+ OOss=KgBV2Y]='Q1+bU<k7,'WYBa@-M&s?(5/EMD`:pbYkpXD>FSnlE/+j?768d5\@5[?gl
+ CXT;QnI8pDUuA[t':sqT"<nuq7G"[Wa9k@oQcu>pg$i6pqXe4)KN-C1F(abflIs48RBd_t_
+ O'f"D%iLq:]qsc#t:\eV6V.l(,[k+\o#?+EW(7^-iEo?W2lk4gQFJ<$:VRUg.L*XiFrQd+;
+ 0i6,j?/q`:tAjfVp<BgP4nVd%LQJ-)?])X]eS';:jsM:R2`QZ7_^rm$[>OKH^s6<@AbLnnm
+ E6TMp9n_SQB>c4,O(qYDsO67Q^''WISMT_Cd4mfkBR/!Q9_&GAeB#!roXKIRZf99=.s[T\/
+ C'RLX=k6AhUJqB@A+dKEKdtYGk]%4Gu*TL?iE/5<=_h>ck-j#N-Kqb3#m$/<Ck/L2ombD2P
+ BP8g#-)?]a*BR/#;87Q?HWjH#L`J^EY[PCU+V]JuW'\o!gUCoZ=W$$&qnA&"9;BA@_O)nuP
+ t8HG\r-?l@c[YoTkNiiAV:L+rX,>)6p_WG,X-\MNUMN0)Z?e:W//$7K)sBIdEIo<8tr&SW2
+ lk(OWVYh&u%5.S2imc/\rQ_+qP&h-lO$\=B76UET\,&rXB&_>-s+=&lQhaT7'177@h#=rHe
+ '9NlF!Y[jI@^4aKNH/\rQ_+qP&h./NYLYN1+ui8^N\/K>1\VNogI;T[Ll>H+[%;g\6+OOXK
+ j*Rp@S<Pn>M'S/d\6]p1$>'mZ*S[sN-!LGHH[]luD[Ybc3D/Gi-P=&VESXMN`KWHN)D#$2n
+ <XI*):]qsc$q1fd%^**@io-O(;0R"6TOG`mbZ9kc>-qs/>sJ;YOkBba3I?pb:VmJ^'S/eg6
+ d`pfLWO:f'.2ZGZ!,*=&(!N8hui9@l'1fY8(RBPk#qZQj2WEn&mbR3#XguuM@LW].B=u-n>
+ ^Kqg,8/%%aS=`@Kj)<Uq#gH[o:mSVda+jaP-W'@g8,R8U)tC+qP&t-n6Zkkh`\(XN%b%Bes
+ lbLM,`2k#SQcOkE'!:u/^*WJdZQ77BWd/RF^Mp[K5*D*fWPOjJh,aTj'IP1:BS&*ZC9cG>&
+ !h7GII8.(67Ni&tL:49g$[B,AVg;s9M^#.Le8159@&Vu*&'Q)O3;-%UWWc+AhO\X`[iE;X.
+ D.9r8P[\9<GQ.MclH8?jY<HmGg@u9KpL)l"_&*GH+;46A,ea%qCJn;9H^ltrYO8g>c-;7$>
+ #N;s_1S_&VGeomLLiX17iFt@S(Y8@0EsiULkIl)8Oebh=&s\?B`:Ui5sBEuLM&2@Y^kD117
+ t,.?!:DMFj5#C`2N[@[X@+I<?F?5caYocOV!C8,[(IB7um#$eKLcOYT1u#d%$i)U';&F0O%
+ :aMN*Ul/soCFAtF3\=@(Me+#b@+EFT.3+V]JuZk/*e92"%93m29$XkVCBo3jN?Rr?#L[[bQ
+ =^OH-5Jcb4Pku@$E8@^C*:8apBDN%scKH^ta(.+/Eo*&(Sp;nshbbsms?pM2=h3b;mdo]gY
+ -RC'9f1DkqZS'aOX!hb]G786'Z6b*-#XojKNJ+%c)qUTcm:<WI2XR()As!:/AnJdNg2"nJP
+ mO6-:0f.D)E2!Z:$e^]G5X1^/J7&1-j#NE,SWg>I5k&@[p8\s*-I;I"/Ld^ZEdrJY%6eFV$
+ hEko]ocY)7eWOEc5.CpYYEpl\AjLKH^taG!aU0F/S)R,,o!W\s,Y>l+qlf1M/=F=2sk&o#f
+ 5%V65f,%3EqbmN/"rksrsd+d`J-#XkDX:aG;uVS)f:8RpQN@FUhh$l^p1"LFq9%!Qpi9:%8
+ .=%0mtme]lL]nu$cf];/WYj$gK15$h_M5EV%gulT+qXXpCa%m\Kme:*ks8MtbXf`J(&eT"f
+ :$DHHl^`VjjmMOkdG\oq+IE8)#XkDd:a?A7eR!.Vlk\<P\@@p;l[rVC;,POQf^nn5pft@MK
+ t7o.2LjZo\bis54fcb-1)]^L0G?bbLdZ$PB^`\&Egp\ngriRLB$?W.pq?!BrVGK=7't735M
+ NF4bBTLI<!Yo8'!6j38_<C4jn;LEVH#6R3(Mg%iOPb`Y?A1YZ^(6;]tOB=7>n)t*H&M1(*n
+ )of<&\VSUFB&mK\V7-j#NE0GKrEe.69cmb7aaU1mb2<`\t_h3fk'2Jq.f4*HSZU>a+tjma[
+ 6kDpu#;nM^R;@n^N,T7([P448=',i<;>GQ])IJ9;%S"$G$be_,(ac[t"(CGX_91s-Kk\kK#
+ E^TRn`Pk9Gp,u4`64o#AS"]&ojc$c'2KF.Y]`p-dU`?6q#G<&*77CR2nV+Te]Aa4"B$HdnV
+ G4[Kod]V\_B5S1maA5&$*S6\=RrGk9b3JB:]quiAgig9qIG6cmBlg2/ibreSIlr%Dr)HP)J
+ X2/Ds"X:[^Po(pKt4:8RZ$NoejScMu@n!$!Xp!O4/^dpXSSSTK%TK&;b!ddBC=ACu3=^KVf
+ YO3!<S@i^dB^hPsSV"qDX:md=t,BkdKcW`cZ?NugIDj5VpN18li-T!`b?Ek/YoHnp0+q7V+
+ A%b-#*"<u60.$FM4A\ACE<>tD"9qa`a^_GJTffOBD=LrTgCi&Hg!<[Z@lIEFdB7GF2NZC39
+ KS9!`f5K<G@C'UmhRn,T>$;j;*t-D;m+<I_2gZ3]DT)E8<,-NJ*eY^KM3^MkA70Yd$*CU14
+ f`M8rJLG^!77L!(GjhB31I-h)`E/UYV8k4/dJCC-j#NE4;?o>AXi2T'kp2l<"C#+978u(kY
+ "3-IW1+0BlELD?Lrjh/Ip9b:]quiGUS`V8mH%5O)<arna&8`a9'/up$:5I#Ff`i%;EKNQN-
+ !cNtq,%^]6\5Ld_-6W3iq]Zk(Q[dE]4-AgSQ$$G]!ppdim-Bd8]=cHFO2]X\%^#XojKm=ul
+ WnVq>/=ur&%DuBdo042FaI9ZaXDW52*m\0/]r30bGZ>C%Q67;ntE"RsQ4*UX2Em"n%n]K+q
+ %K-:V1@rssPIG2SaEJ>7BYYZU&Vu+,:aGjIl6qi8DqFF6g"D^kNt)-jc(7#EjGiEY_=dNh>
+ =s#o+G!>l,2jpaQ'@O*HLhT0gMc"74L-U!/q3jnRPO7tPIG(T7XrV&33-"/""FRF+;0]$Ol
+ ZB36I45dRG<Jo!o,D53(JCU!/TS0p$6+^#FW4BCHG[89E&+W-3gggMEiG7Kd'+2nO$(ED,"
+ oeXh8Ei4a)q`49=AE?=*%";h#JWA5&IIf)0Cc>/2['g"Y8"qEJcHXB>o"(\.%cETS-#FtB-
+ Ur*KebUAt&\nK4Vk<iPF[qsT$e5^fX4nET#Pbc]c+S!/X4KQh_6k`I(F!6?l&70%p:.MPJ:
+ `_S@f-eELG=gDirp"9D^T>Nn^CYc.R3P2J9f9UXLhGL9f!',]V!7N>5MFpN77um"U:1om4q
+ iQ<tnoC]?)$"Q%[<>]$[u0<a\sD21nUNF3+;=jD7_qnHH2-kp^UH:nR>V,.2*LQS.I*Z45j
+ O;b$HL.SR(;%A1p;Y+Ru%BD#f&:>L*Fb.^YeG3a4JO)h6?FmCVEtEIHCNC='Ae-acC^qHD@
+ =-3Nd0'!%gGa!\hlj4aRn#O10&blJ/eDRI0jm7ql%GF/TTk!6nZ7\B*4FhTYpuhlYncA,@"
+ urh'E#70'2V8kMuk9A[=,,N/$Mc-JgqI@PNKhF;nr!GFo6k4sR\\sgC?l8<hNi1\tAm![4^
+ KH^EFCu2g-V-Wo_Y;a^UHb\.(2j>79Wdddh($afbJCqFB?T)V!c29`dg;s9/Tj9-bI*_qJ+
+ op(?rgfd]h7E9_mQK<nhnu;<Z?=F:hlNM8PQ5Be3dnYbh[YN[-+>4dO.SD5hQ@)85UZ[fc%
+ QNA<8F%Ys*hH74+W8GF(t:1m;q@P!U9aoJ=YKNX4(ZK41j2qZ@gr%<]P=!R&X@3ZiHd*^]]
+ shQM9k*=*o;N^OL1cYWb\@U:Ms80sO(u?Hf3/X8pammFns$=3PU6dQskGe&KEW1Flbur=3"
+ ]:]t0X3d'_lfqG7=V.STQqO-=-Aj9^%<-ItHl@TI0(#+ndnIK_CE6AB>GSM#2Z.\S2%\X0D
+ "TJE"/GJC1l^$,(B<>R6Dpk`@][&dXch9XA`PogRs0*ZN;c<k/M)6F5n+$;DG?Jg,TXB%7@
+ d;tWJ3VH/d>YR"JV-%R1frVu1J\PUVH'Keq_7>F#5*NO@(N),U=F%F./WF/XZq<Jq<@6cNu
+ pSu][MG_Dnl8Eo#i[bT.g883HF0dqq[^1H_jY[$Y;DLXZu9d1?uc$Nu^;MICK=!F8>CGps5
+ ja[=)?N:!p38b<(1NNr:"(/Vo@EUcr2eGiIrN8G2iLrr%1R@g4H4"?[QQSXM0*R$XG;7t&q
+ hPiL]<T14e)Ds$(bT/YWrDigB4Nb!cp(8Rk+>?G!(T/-K&(claem@IUqI[OsrF!"EO]]%99
+ Q*E%UHhQm#55,&KQ(Iuu][&c=r0+Faa2l[,Dr:AMG(Tn-J:IXbEs@.64*Kt`hQm0&n^<+d>
+ s6_")PkcX^!kKq@h0U:977G)hIU5n03]J>g8k2<cBd`IpdQ>)]k-L6lCABQnD/<G;4fWEHu
+ O.JHh?M6op5Qo_hSc"c\totR!:AiH.NrJ4_S'Jk(824lJiKKU:@`.aO(*^c5tjlfTX,qn(G
+ `?00b5<:P2\=:#="6M>FR[i4upHl_sRDIn2SKcnD&cg.0/HiTt;,]iq$?pX-/"gBn;rW/jX
+ ad#$8I9:!gCoNnIdMc\T5C=brK$"(\fjHU3g7pK:=ok=N$F`))7)Z5H[rVQ=QSb$i,(E58f
+ aHPR^htA-t>)\:ZZncXKZ"&QHM@->D;rH*WF(sN\^M_3$&)VabTQ.<</D%*_qRmPep$-,R*
+ =Q-DYCHMOG4"#5B+BHhX^Ai7HD`sY\b+@e+^U,RPd2H$1*=:-"2][<:DpS<bKmXDD,*+j1O
+ dCLAVuA&V#7Q7GLBQ.Qd(sT+nC9";Q+Kk?AWF9H8L+_e(6*C91u6C/nikNIJYQI/ZFdO!0(
+ aFY+5dS-@2-Qk=L*9Gq2CbKBBATX\%V4m@Ahi2t,5AJ'`P9,=_?m7dTok$'V);6ldLB&!1C
+ Kkcd#'%L_RU]sY%nK;P6jQ+\KU3CEX9-;KTtp",\_q\;(%)]MFunc3?#jiWjl>e*276&fpZ
+ m[^oYPNk#MM>["^^sLe[o.C"6(X>kG;;ao88Vspc2tctKTK-3QIW`3,ZS)e1q)3ktm9j=@0
+ nMGFqe8h=otCS.A7]=PY?mdr#QiA7nF1D^I;,G"b]E5Cp9*dn6%1(f?r2hAr#:%^9qoWlqZ
+ \W6GOCurUEfFt"J?(ggUFq=k770^1s;lI`s.8',8t"SD+JY[e(6*C8kQ%k+E/tY,GS;A>1C
+ S)!3p_7gAU`7+SYfKP+=IMEYQ6'KV=1K.Cd@Kh^%4L6_L)&=0CJANhC%PgU@tG&XCHk\2nd
+ /L[8pbK7UjHe>[(<ke6CSO6rk%B-=('JqBYg+R5^.J:PDL5OY55]tf7*pU;JdlM#Hlnl#r!
+ 6]od/Q/dh5R,#>jlFd@Q.kN>Y\oapb]fl$+"i,r%[<5/:beo2HWAbn,^RRI_n\ej1=ZoX`"
+ %1a=f>V?0Gjr0&Y=hK0Z!fpCdaQZ.4[#bT.fe*WCtS7Pn`/+<NjVB<*Z[5ICW*0U#*&UOI`
+ u5.(c+3qoj#5jZf2qUW-8d9eDnT12f..)iSd(NC&t9Mf%,$>nDST[hPK(DPj[>?/\UA6dLj
+ 38KV?4Zf>XWUgAtZar#?fBJTgD6[B"H9f$l&aJ7<9CW"B[VPFJ%7]sfhoUnm)7,C;YC2EL=
+ L*r%<O#46*_<TA]DDghZ'\m)ZC7D1JW$p/h0Cg:4M]6<T2\2<-1["()Pg8'q9&"2e(5Hf!"
+ g(ZVtW*_eW"=+C6UM5Wfh](9j<T:m>b]D8q!'l9jJqAZ/K]0h(GNmPP;56T6^%;s15mL5Hb
+ *=JI77Eb3J"Cu\n4.f(Ic0/0q$X*U7te8AY"=(a@L+b.2#:F;hEFl8O4gg]_ptfSWE2-2@e
+ 0-^$O[>\hRr[B_1S6HIrLTs&C7PphE6qsoD/p>Vb[825^8V*JW9/!D2`L*>o+IKp2t`UnVr
+ BRDqX],qtBL"c-%^k/Yue!!!!!+PEJi?q6XElD3_udP%Cu[B#Vc=r;P&OIJUVjf^J3fTtT0
+ &lKTf-^OV&UI@W)\)GatR9=IsM6q_%:Heh2QI<Wj+1%Q2T>J90WD96-X&\=nh+0Q4Vzzz!!
+ #9/rsK$gUXf~>
+Q
+Q Q
+showpage
+%%Trailer
+end
+%%EOF
diff --git a/tex/img/icon-complete.eps b/tex/img/icon-complete.eps
new file mode 100644
index 0000000..ef50adb
--- /dev/null
+++ b/tex/img/icon-complete.eps
@@ -0,0 +1,630 @@
+%!PS-Adobe-3.0 EPSF-3.0
+%%
+%% Copyright (C) 2020-2022 Marjan Akbari <mrjakbari@gmail.com>
+%%
+%% This image is available under Creative Commons Attribution-ShareAlike
+%% (CC BY-SA). License URL: https://creativecommons.org/licenses/by-sa/4.0
+%%
+%%Creator: cairo 1.16.0 (https://cairographics.org)
+%%CreationDate: Sat Jun 13 00:06:52 2020
+%%Pages: 1
+%%DocumentData: Clean7Bit
+%%LanguageLevel: 3
+%%BoundingBox: 0 0 129 177
+%%EndComments
+%%BeginProlog
+50 dict begin
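+% One- and two-letter shorthands that cairo's PS backend defines for the
+% standard PostScript graphics operators (gsave, grestore, moveto, lineto,
+% curveto, stroke, fill, clip, and so on).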
+/q { gsave } bind def
+/Q { grestore } bind def
+/cm { 6 array astore concat } bind def
+/w { setlinewidth } bind def
+/J { setlinecap } bind def
+/j { setlinejoin } bind def
+/M { setmiterlimit } bind def
+/d { setdash } bind def
+/m { moveto } bind def
+/l { lineto } bind def
+/c { curveto } bind def
+/h { closepath } bind def
+/re { exch dup neg 3 1 roll 5 3 roll moveto 0 rlineto
+ 0 exch rlineto 0 rlineto closepath } bind def
+/S { stroke } bind def
+/f { fill } bind def
+/f* { eofill } bind def
+/n { newpath } bind def
+/W { clip } bind def
+/W* { eoclip } bind def
+/BT { } bind def
+/ET { } bind def
+/BDC { mark 3 1 roll /BDC pdfmark } bind def
+/EMC { mark /EMC pdfmark } bind def
+/cairo_store_point { /cairo_point_y exch def /cairo_point_x exch def } def
+/Tj { show currentpoint cairo_store_point } bind def
+/TJ {
+ {
+ dup
+ type /stringtype eq
+ { show } { -0.001 mul 0 cairo_font_matrix dtransform rmoveto } ifelse
+ } forall
+ currentpoint cairo_store_point
+} bind def
+/cairo_selectfont { cairo_font_matrix aload pop pop pop 0 0 6 array astore
+ cairo_font exch selectfont cairo_point_x cairo_point_y moveto } bind def
+/Tf { pop /cairo_font exch def /cairo_font_matrix where
+ { pop cairo_selectfont } if } bind def
+/Td { matrix translate cairo_font_matrix matrix concatmatrix dup
+ /cairo_font_matrix exch def dup 4 get exch 5 get cairo_store_point
+ /cairo_font where { pop cairo_selectfont } if } bind def
+/Tm { 2 copy 8 2 roll 6 array astore /cairo_font_matrix exch def
+ cairo_store_point /cairo_font where { pop cairo_selectfont } if } bind def
+/g { setgray } bind def
+/rg { setrgbcolor } bind def
+/d1 { setcachedevice } bind def
+/cairo_data_source {
+ CairoDataIndex CairoData length lt
+ { CairoData CairoDataIndex get /CairoDataIndex CairoDataIndex 1 add def }
+ { () } ifelse
+} def
+/cairo_flush_ascii85_file { cairo_ascii85_file status { cairo_ascii85_file flushfile } if } def
+/cairo_image { image cairo_flush_ascii85_file } def
+/cairo_imagemask { imagemask cairo_flush_ascii85_file } def
+%%EndProlog
+%%BeginSetup
+%%EndSetup
+%%Page: 1 1
+%%BeginPageSetup
+%%PageBoundingBox: 0 0 129 177
+%%EndPageSetup
+q 0 0 129 177 rectclip
+1 0 0 -1 0 177 cm q
+0.92549 g
+2.25 2.25 m 2.25 173.875 l 126.703 173.875 l 126.703 29.457 l 95.609 2.25
+ l h
+2.25 2.25 m f*
+0 g
+4.498582 w
+0 J
+0 j
+[] 0.0 d
+4 M q 1 0 0 1 0 0 cm
+2.25 2.25 m 2.25 173.875 l 126.703 173.875 l 126.703 29.457 l 95.609 2.25
+ l h
+2.25 2.25 m S Q
+0.4 g
+8.503937 w
+1 J
+q 1 0 0 1 0 0 cm
+14.793 17.715 m 69.133 17.715 74.914 17.715 74.914 17.715 c S Q
+4.251969 w
+q 1 0 0 1 0 0 cm
+59.641 37.348 m 87.418 37.348 90.375 37.348 90.375 37.348 c S Q
+5.669291 w
+q 1 0 0 1 0 0 cm
+12.004 64.504 m 50.879 64.504 55.012 64.504 55.012 64.504 c S Q
+q 1 0 0 1 0 0 cm
+12.004 78.18 m 50.793 78.18 54.918 78.18 54.918 78.18 c S Q
+4.251969 w
+q 1 0 0 1 0 0 cm
+14.32 46.551 m 61.996 46.551 67.066 46.551 67.066 46.551 c S Q
+q 1 0 0 1 0 0 cm
+74.531 45.957 m 88.641 45.957 90.145 45.957 90.145 45.957 c S Q
+0.807843 0.776471 0.803922 rg
+94.934 2.699 m 94.934 22.141 l 94.934 26.754 98.117 30.469 102.066 30.469
+ c 126.508 30.469 l h
+94.934 2.699 m f*
+0 g
+4.5 w
+0 J
+1 j
+q 1 0 0 1 0 0 cm
+94.934 2.699 m 94.934 22.141 l 94.934 26.754 98.117 30.469 102.066 30.469
+ c 126.508 30.469 l h
+94.934 2.699 m S Q
+0.4 g
+4.251969 w
+1 J
+0 j
+q 1 0 0 1 0 0 cm
+14.32 37.633 m 46.785 37.633 50.242 37.633 50.242 37.633 c S Q
+5.669291 w
+q 1 0 0 1 0 0 cm
+52.766 105.809 m 55.777 105.809 56.098 105.809 56.098 105.809 c S Q
+0.8 g
+105.602 92.074 m 115.84 92.074 116.926 92.207 116.926 92.207 c f
+0.4 g
+q 1 0 0 1 0 0 cm
+105.602 92.074 m 115.84 92.074 116.926 92.207 116.926 92.207 c S Q
+q 1 0 0 1 0 0 cm
+12.004 91.852 m 51.406 91.852 55.598 91.852 55.598 91.852 c S Q
+q 1 0 0 1 0 0 cm
+75.023 77.395 m 113.867 77.395 117.996 77.395 117.996 77.395 c S Q
+q 1 0 0 1 0 0 cm
+12.004 105.527 m 40.352 105.527 43.367 105.527 43.367 105.527 c S Q
+q 1 0 0 1 0 0 cm
+12.004 146.555 m 37.043 146.555 39.707 146.555 39.707 146.555 c S Q
+q 1 0 0 1 0 0 cm
+75.023 63.852 m 97.594 63.852 99.992 63.852 99.992 63.852 c S Q
+q 1 0 0 1 0 0 cm
+49.676 146.461 m 55.684 146.461 56.324 146.461 56.324 146.461 c S Q
+q 1 0 0 1 0 0 cm
+75.023 146.422 m 114.023 146.422 118.172 146.422 118.172 146.422 c S Q
+q 1 0 0 1 0 0 cm
+12.004 160.23 m 50.289 160.23 54.363 160.23 54.363 160.23 c S Q
+q 1 0 0 1 0 0 cm
+75.023 160.23 m 113.762 160.23 117.883 160.23 117.883 160.23 c S Q
+q 1 0 0 1 0 0 cm
+75.023 118.191 m 100.063 118.191 102.723 118.191 102.723 118.191 c S Q
+q 1 0 0 1 0 0 cm
+75.023 105.004 m 113.41 105.004 117.492 105.004 117.492 105.004 c S Q
+q 1 0 0 1 0 0 cm
+110.895 118.191 m 116.902 118.191 117.543 118.191 117.543 118.191 c S Q
+q 1 0 0 1 0 0 cm
+75.023 132.617 m 113.309 132.617 117.379 132.617 117.379 132.617 c S Q
+q 1 0 0 1 0 0 cm
+75.023 92.141 m 94.914 92.141 97.027 92.141 97.027 92.141 c S Q
+q 1 0 0 1 0 0 cm
+106.531 63.586 m 116.414 63.586 117.465 63.586 117.465 63.586 c S Q
+q 1 0 0 1 0 0 cm
+12.004 119.203 m 51.359 119.203 55.547 119.203 55.547 119.203 c S Q
+q 1 0 0 1 0 0 cm
+12.004 132.879 m 51.359 132.879 55.547 132.879 55.547 132.879 c S Q
+q
+0 0 129 177 re W n
+% Fallback Image: x=0 y=0 w=129 h=177 res=300ppi size=1191132
+[ 129.12 0 0 -177.12 0 177.12 ] concat
+/cairo_ascii85_file currentfile /ASCII85Decode filter def
+/DeviceRGB setcolorspace
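+% The dictionary below parameterizes the image operator (invoked through
+% cairo_image): a 538x738, 8-bit-per-channel RGB raster whose samples are
+% read from the current file as an ASCII85-encoded, Flate-compressed stream.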
+<<
+ /ImageType 1
+ /Width 538
+ /Height 738
+ /Interpolate false
+ /BitsPerComponent 8
+ /Decode [ 0 1 0 1 0 1 ]
+ /DataSource cairo_ascii85_file /FlateDecode filter
+ /ImageMatrix [ 538 0 0 -738 0 738 ]
+>>
+cairo_image
+ Gb"-VBiJ&5\i9,mS_WkN\,3nqS\&'8bhVHA2Zq?=*2A?FNbC7hLa*Ut1'C#SMGl2OU*;Q_1
+ b,4J15Lf4Jd@61`Jbpk:^f5OomXIohKL@>CX\)Yl:fOWVuFI(I*,X/F8"JRq:DV"Fm<?5j^
+ 3H=!5OXqX&f2fs8MKe"c`<oU^7%jB%F%ds78JT6esrbT--i.J4IM4&%>`Ee/]8&!GMT8S9k
+ /,fC.V3:YeFtmuSjr^oupnIm-(]T=U*#09@L\^pW(umWU*/.FbI6O<stC4k@Xt+sNttWN/C
+ T!.[7@k:eK`#ls/nFq,0PRK.l0N#Xk+B!Y?Y<`ar2!<A\,c=%>V'*-B$ll9%JdKBPl#s3oB
+ L,8o2]u&''2'GZo5i8ee=<0pG!73mq****g"9;[-Fq-#3B`\8e!]u/P,q_pkoVXp5#69k'J
+ ,RuKaW;Et_u?dm[r54>?%!?P-D,\V8LFLg*2ZK%.XPq&!BZ&OJi!K.=t6GlDm4:b)+YD@TV
+ [++FMZ72Gp,l^J/2.Rr!#6<AVDrA!*Y%i_@.MMghO5pG@E^#RK*=>g,/U7Fq2[uB`\:AaOB
+ 3/Gil4MO<srm9+,J))seQ1$u#bCZo!pm^VS3(BojYA5YF1QdKM^f`Bfpm!%k=La`"D)=;7P
+ ;!79/%Rf!@OLRj2W!<?<W9U"@AkHL/3-3-;(FIIjeYC#urB>sha!b(8rE(CWBTHuWn$Ei2G
+ (jFmKJ,"Z+s'6c6*#8fo!W]U'Uku(lGUE71^juU,r<7bVF'&^D#lr>8QmSo;SA5!qhqf.C7
+ tee*iP$I0!.^C-1gsC']tf&l]=>U#*-Mhd!GMT8Zt)`A<lOlYVgUa'gn,2h4,9F4B`\:Aj@
+ U(d`kkWk8hY169fVJZmM4.Q!,0KA-=%@k@FI$Y4<s94$A=tf:I:KM!%lmQKN5m\K`Ch.rt+
+ p;"iARD*fq]K5YHqj@<LS-j8\fMoA]bT5hJ*P*RY&jBIT/mXKp=#^\6r.ir6H,fK%^jB`\:
+ APY2Zj2_X'1qoDLeh\QbaL]9$U"9>ALRO5-(k25u^o&o$3$/r+0(hj$O[$4=LY+<JR(,$?q
+ Y6a=N4,8qF!'j222!M$%^l[IQp:'&\/V./Pk:k<e!,/?t-EMCXM##lqIOFQU'?Fa/%O4[JJ
+ <lX<_Wk7_c[YrCT>gbe!kII=@AF20cr)ku=Ih[GoubI]M#0\rF1%RV8.turV1+m2Z?%uQpS
+ /6_')1i%k>FBsOX:&nM^2#!(#HG`G0F'.s0QaJZZ?st!W]U+dY(Wbm?4Y8k)&ueKl`BI_ak
+ F@TLG4p-L;_<&(1R\BT]2!X/PD_cTV#m\AJkKpH+.m-3-;($YdKCd(Y-^pqn2l*Jm&XhDre
+ QRPZlZ?+Y:IaH7]E;,M-j631?8VG3P'mbE*SfWj5egm46APqPBkBd?DTSNV8\Kn1n53uHrK
+ \oV!6Bk"R+Y@#$c)&[-B(q]k,=5Mfj]6<Qk-n'$o*9pdcMUG7gYe$9Ojdim;(3_mXl#PskQ
+ IFpGX]o4B@FQ)GJ9J3poR$?#WGjRA8lKH@lLp3h1%&Pi*KkC<gGu@[1hglHO,o@"^/=tk^i
+ %nVqWTAsc)0\NW@4mF$*05@HQG!+Lc[*FFLH6mK5'o7_1V3ks.9'*"lRM2D;1MKS!CT8'rA
+ `'1&^q[Hc&H/E<dh\SotTZ/PR-%)F?eYW@psGrbU5kiR+Dj$=0$VdUnqie`Q<FSpg<8EJGp
+ 'DKZM*.SX)N>OaucJ7oN,&Jg9nM/-$e>##`3qtKIWrU54hmKa!Ea3J,\Ul!"JfX5K2p#QIp
+ \&3t3bC4:X:#fs).eWT$!Ikt0mN@5DIHRhXdcBu\bL@=PJ0#C>(!1oM.SBSP_Wo"Km+5Tk?
+ /0E,a7[_dNMK-97n7N;0_JEOTEcDoCp;[]nm8#IiZ_*!j^#DWjPqEC%TPK;G7JC69enIZW%
+ UF:rQ*7.Q1kVc'JF1;l>HTlWDc2+X`0fu*7dHF3#iEUp^SFAaN-miY>D'8PU7aD*.Bq#Ul!
+ 3(U;$C&FN%WA1c.&1G`aJ5Y;GLD[6\SnIY^.24,:5*-E3uDe/'3WhW3bYaEWgsMT7P^[Hkk
+ =s7UC2-LSqYibR_](=HaVcCNHd*e<)e3cmp4Mc<QDV?s<`M8u4foCLrO?8u87%Vm4^eui:O
+ IH:Lnc1^PZA\:PK`G_3Lb"b%ul79&q4+?.ufJuWs-4YgKgY84oaog!0p**56Mc=CqV[9Dn3
+ Op5]plW'b&5N*aLWS#`@^/a:FoXQ;&.fu97B/3$dtC^uDW1Aqj2IRCE#RUo9V*R^D5f0M!$
+ )CR$Bj^!<LOEGKN7:(P501^M`VO0[(LII4A?Z"?XN(K%mY'aY?rsC7B.baUe+XA"GXGQc=&
+ <#8U,J>op#cT2pp%#11$c6dSlU=MZ+da\`]OtG=DXJF6BbCdKDe@#U$b>a,eDgOX`t9Bu90
+ !Td,'biuc72P]0g\3NRok\TmWgop#cT2n*__QLQ/!PlB*a<SGR$K3fB?m_"5%eBVJSUe*p@
+ K>Crg^A%^#_r'@dUSC2rMc<Qb9G:H.Z`h%5p,h8/9fVP,*7dHF0Mt\EeCPdmIN1Ip:u3^%;
+ 0kqM<juu]I,[=Zgt<"3Oe'B.8X%$.6L*kI5UY3TIT9C5]N;&+FIX\Z8db!3bjl<hK#251kH
+ IG(K\=(f47uSZTEfIVEO&rCMUH^;FO_;ee`*%"E9'*9[?Ds(eJ,`B![,nUeZ0W.)eHcH($,
+ d:_J8]:S"G-&=`J((&FIbOmA(&PSt8]*/HYG/qXs/H>apUkOYgKh@.4*Ps8McP:0'?SS\pm
+ g'8OJfOH;_U3<J^8`/,.">aS-)SMXbtk!\LM!<^<3YAbaKLGb>DN88BlQPffl85)]fYK%?g
+ 0GW\IBu9.U<mFof^*W33Ki<KaYh%6uo@H6!:/7\#1]b*p0GrFg_eG?W\mCi0b.jB(CrfFmB
+ .]Xr4,5\\-=-35nq)LHW@q0MX]rPhI;s83dat7g>VJlS($FKQi\Q]E/tA@lLGbPJN7i#ZYI
+ ON/rMcu%!%eNd$qs;ej6/Sn:ZImX_TGt_C09^HmbELpD<J-he"fWPC[6Z^(Et(KnIpl^N3B
+ F'FEI/?.p$hS,@+P49tG*"qMif`o8L0k&4i@__qInMa4f;&91hf/?+P.Om+GnL(W1,X/ml>
+ /1`EJsX85niU0KfT=N*bsqR:tO2fXHmSC0KS3D'qbd%tI*VMT)LGunmH#D$-&4Rq'rs7BrD
+ iujkd4,8qF!'k>fgpuEjj-B5&a\tIEb"b%ug+0@a)[bPggGt3L!<?:rQ7NX"(c,)Ha<Nt,\
+ p=,6`uf^\DD[#FB]h(t"9<+\H[C\:MFS0t+n,H:fA6?d;a-`rDErkR@coGn"9<+`pYC&Q>2
+ ltb>KNAL&lj7iAAPn^il4AGflmoN9`T]ECi$2_IV;ht+Rf6N<M4i7dLWULGNU[."9<,3V59
+ #aL?$=7-h$kgf$;?MFpB1.E.jEjk:k<e!,.X^P_a+-ci<fdnqO>irI"D(TDnI1ET8:6/3AW
+ L-3-<S@^1ag1AJDE2"HC)FKom,oYM_jj!1)dS\mu3!$M$mHgcgWJc=O*jP=-r\s@d;h07]j
+ 2akMSgc:<M!<@/7^N/oE]C1rmH?JNHNIoQQ7Z/33iM>f[7[a;W\NP<5Li._l7o,;WH?H7[G
+ 0.BWo0a+R5b`H4,>A/gh!Nk*7<cMWZfdMGGr<dcdmolhLQNn`5gJ>-]_LKEi`b5\cTRVYH&
+ 92UX'&XcDL(s/B>k=HE6dkk7[a=-(+`h,>]em#b$uK<LKD"r`Z=il+9!C/_ut*qk:k<e!4:
+ \.R5:S,hD<J?PO`c%H&#J3c6T#"3,kPS5ncfac=*LT!,-r&@eo%+s!YJC^&>FL9+(*LGBfD
+ =b5'6I`'k3*5QZ->,>A/',=Lu6m]plt$M*nnIMH*@E$+<sA__qMZ?$js!WX-YoB&ar*KP?L
+ fVi*:m9TZ?cG)I.o.QmJSIkbf*(H@sJ9D>?2G@T#92%<=hVQd9@pu?FV3dF?';Frd_akF@d
+ #Nd4"Gn/fpV6aACBb-H%;Wk1qqL"#*5I"dD<EV$"9>gH@^3T>g1q>Uh/iVmh"HP@jLq+[p-
+ eBKpOE3eM$)D6??Zl(9`RFW%NIGN9?3MVD0j%+>NG0WrAsJtn`TrM]I[RFLF79U,>A/';f"
+ AH47Bg,9?6q5kFW8l+7=_BLU5IT^\Z*RZA`36HL)V@RK.lhUnjg-f^o@X)!L;IPC>Q`]ZI3
+ W2fj"(qVWS9LTsq:G7Gil!'l0Kc'fr*lPoc]i5lW[H`ccG$2?,8oR#Cq879L\s3L_-I2I<U
+ ]Q4p(!WX-<>?e/F0A(k-If%mWVPg=:9`RGJp?p`eHf"pZr:o<58.u!M/.W7%g%ug=!GMT8>
+ %T[^jkASqMF*a3JB$EYHhTc)o06gTINU;>#lnXG7RkDdIePqW)^*3-B`\9>3-k)U^]*Q6b2
+ *4d/k?WUi5OZmLS&kS);3aea9$&W!$""?k)9X3L(5CM9`T\sH$O[f"q?a/)1k7I'*-sNd:i
+ IQj,[d?LEHcK"9<[WYuh&/!q7q%Fs'KZJ7%KcT7?N6`LjaP66Ju&J8=KujbFq,pu<Ir.!p:
+ DV]\\65R6T?lW:2lV8=n<_UBW%!'iI])B(MPhMh3C80CV,5\Fmk>A@p<o&\%"%j!9Kc\/2^
+ RK.kujlPT@h7@aFJd!6DnA01l4XYRaKu]9%!8o'&<ia>Qb0A2Ubfi1ur-3U:Hn7CrbKC9-!
+ WW35SN1`qB`\87&g9sL!'iT@#ljrrLGT#3!9oRl!!",^dKBOMUdLRN!.]/^'*&#o%VZ!kLW
+ 7?R$D:F,?Z3:(W7M+C").i+UoLGp6\c-T"pTC)2Yd(Bm+SkJTV0*ZE)<_Jkah.j`+l^BjQ*
+ Ofpt^/O;-skWK:nm1kih6sm-OH>G(g^UcThGG0Gp`a'RCQ1("Lg`Ur:c87/BXlp.?YW,_UL
+ >K-Js-_mO:3jBXGtRk5s6LAK,e+/,C%%j!ipGNqIT*N)Iu7L!1-lJ@@2-Y-I<8P)M'-No#h
+ 6q"8k8/\<HUsj+JjQ>T&YBNf7<`YM;CW9m?iT1;7daHQSR<OaN0%7aT2WOC-fAZ&]/'If2E
+ H-!JVZKU!%M$KJ029i*R\ZgX;Qk1=e^:-^-*A,de]LdHqE2PR,pakCF:=7b0%2(.4Ig_cK;
+ sNTJg>4RXfACq4pc1s>]4Y7/G,8UmHs:cpY,l%f@NYu2T<YSe7SbV4*BhQ>^Z^MdpJQne^[
+ O#E?J(]=t4+:Pa.M2T)gq=gpqJ][1/4KrmIU9Tn.NL"+g%6E[f2/[C$aY[+rdLc^1\NJ9:W
+ \':[8/j5WM?bX$*jc-6l&cIT+%WD<Sc^b0]7fdoeF!*SM=9pcq_G#5Aa:fVab\Fa?RjiS>'
+ o)L0)hnNOpUk=VJR3pOg?)(`jTW3s0%-bsTDRXQ@]UAUq5`jNs=#Vrm8=t!cOY5+9[=q>E2
+ GpZ%:s$:klRWg0/1ZK5W1]e+l4\buG^Fet',(i'AXY<\[C,AO$h^M)`VK<(QqShE=&*%,h1
+ c@XW)!emMI"hi&oH'4RP<jbj`m>qDlg5!]mEiSQ,E$3;H$NaE#u.Op%:IikRcV/dOZE:$.I
+ \bI,Md@G$dpk*Iu?B#/f<2HPLBK2OghEUa/#`4!c1!)<dSRHT!Z9Z_3taXfY=.km$DESL.V
+ \)]LG8\<*)F\80^!B]*#u>hn;ULU4:ln,a=iCKr#W!!#8u8.tt'O:Gt+!!(Fj9`P/Y3i!&^
+ !(_l%!<<,aO<sr-+<Al5!!&f^RK*><F@[&F!0@\)!WW5K+\_q:5WY\I!!#QF1][RVl&[4l!
+ $E<1"98Ft6^e#=:O@G8H$O[$6,?:@'c%RNPa"!_2H#SaoFGX;TI7^t`Og<\o(D[Prip%&Kn
+ kFrAS##H<pB^?Va?gPk_$i`ftW#$Fk9^GL(,*%WiE!f)LCQRBjric:7O9U*?Dh5j2Omq/_R
+ LF;e0EL;Gts1gUHQn]BD]/?F7&R6f%+ZY\).Ko($f=qZFo.he>te;r]'g)!F4e=BJEtOgq6
+ :q!__W_cKW)8;Pr8QFBqGF?30\Jq=(&]mC1aAhY(ISiuE+;J142m!kfpG#]PGH;o`u!E/,=
+ 9oMOdU'*H@SXo/tC6+F:V`nY!q>:&u\$mm;4[9umVMWi<%1NcC?LIk<Yno`LF`dae]>NT0Z
+ Xs[>+VVA.l^"8^"Iu=31q@Sa!qf.jOQ=3.!s2PN6ULtK_9tkK9i\Q[J?s?c-<mrsV-`BdP-
+ E[SJ`!T.W2Q@Q=;o5?*E0P5J5c*,-5MNYgY83DV#s8"1`D'!q"ji5l6E-VHP<O/?^eLk`<A
+ ;dS!tp=_;"@#It)t5K-cVGQ`"LE!GCXM9ccEA5eYABA0<R#Qp#fi:`ldmI!g>2PnU:I?^c(
+ YFPr]Bd"0Wn;LrVt\@@q&@+BGaclj^I#`,1h1h!h/fg<C)!@74.osImQ_Uj0?>TED[X!]ST
+ Yc4Tn\HW,pkV>l<ZEgF\4+JR_#mjA/q6m`?9q+%<+13#7I!kA=]rOuq!%E#X-M[9YE6[.l]
+ ;DIFfk#([S"laRWK1=f_83t^mbLg9Bn1<:9U!b/hathc-bu_*bY3$"hViG/\8``;,(fY"j;
+ /LEFRnKaI.FFOH;,&"2`DP6l2pCLUngOgr!a)E"U,&i\o@WEonnV6.?Ed"Fm(eZR**QDHm0
+ *6=M&`nIX]"G<sO8goUR>"^Y\i((De/REb,(@mF8n0"ON(ORlBfn^Rs=Sj(E+XCV!:T!!#g
+ rdKBOMUdLRN!.]/^'*&#o%VYtE!7E#b!!#8FU^7#$8;B#&!!)8F-3+#g*RY&j!2MuN!!%Ok
+ 8.tt'O:Gt+!!(Fj9`P/Y3i!&^!(_l%!<<,aO<sr-+<Al5!!&f^RK*><F@[&F!0@\)!WW5K+
+ \_q:5WY\I!!#QF1][RVl&[4l!$E<1"98Ft6^duTB\=V7bJi$OE=[P*?Pg,G3d=-5WpsY-%k
+ 4(>ETrgk`gZ$cY?8?63cJqAjIV,scF7+qYVhR`-di+t5\o_Wp>!*naG0KCQ!S@)e%:3!G^W
+ `/@e56.QWs&4%]a?bN&1P8d#JH+9m":(.HO3m9or[sS8#gj#CGo^Pc8W_8.tt'/\'PD>$Dg
+ =pan%BCN4Jn[e[M/a,`%X!2*?K;&sV*ZP"G0UD?l6jL<rBkU_fB!!#tZ*0qZcp(&t)<$=lF
+ MV\/_f>k6X>%Ko7U^7#$M1YH\R2)<nj[f-7;RRjKoqoC"K/<T/:uBSnB87F=O[McB\Wop&[
+ mg!c]3K!%1][SQPJps9>8Kku,MgPLApp_T?'b(KRK*>dVUaPK)E<"uF>&\ImlF[*oM\,G!!
+ &t8ltH:-E]&,i4B^,`<n:KO"q&c]!'g^iNqUHB60,lP[$k:bp!D88jTL\m!!#XWkd2nCrf0
+ )rqQBWoIdN.*dH?<B!<>r\`F1XRC`j5r+VesGZ_PtmmPL%Q!!(0HGM:UCT"A8#;cB^+1I[`
+ 4!%<Lha`hm^aTKDZ)I]-fmO=8h'*&"Y3'&)Gl_;JL5t&u$ne:Pq7\g=jJ.d<UY`Cm_6s!:8
+ BkPa.B`\8W],r!cP:(sH)Io;hA8cYVP`[9m!:nk"p_.om81ta7:7X8j"9:\V#9&fJK0Y_@A
+ b9s,-g$O<!!%%2DGD#<1[^^6fS<&8cQ-^$!$I3K/a!lF*F>cB[!]-t@24%`8.tt'l-?]tkU
+ ]?;.arc6i!:H"!!":eBtZOD1qV@Wr*Id8^YTt)b#S2P3?p@Y04,U9lF#qfhRYT7KArkYVb,
+ hF/HQ.XB`aql%Yqr411b!,,UOl+"0iIO&sAHBF)uD,>$Cb>mAk+",E0<($1$,o/e7bOf#Ls
+ s!U&9@ZFk\MZ_sXV]?*FuUEG[G!.&+^>$6Z7L5(#*I!g<hio8stX/hj4nGK^VXP`a8_!D4h
+ ]*0<Q$E#Z@NSLd7$^<0DGs^('dKDfoN(2k(p$8S]YAP3dL(bq0Ye5EIi5M"fZ)7=[H7sW9s
+ 2'J`j8gEufM0LY-.AGZ'7jLQd6X5.[^KMRFG[\O#S.FV!-l(tldeKn7AZlYf%/Ep]*Pee2`
+ I):KVn1IqR9tTNqW0RIRq/j;EiC5i5rSD:8=Vc?FfA#A=>'L$5BcI\P<gEfmN/=0je9plLb
+ >YM2mV!WiD4t.?_;kq_if_9Kf>WJh8$B>NMdEMk=guI(dAHC],i7_NQ(4#^H\$^M-SD;A'T
+ T6G9INA)?)!Bk_8kLWJ?WdZ2>e`mq+k;\,c6=WFrqmFi@P7LACujB3V=#%eDAn6XGIX0C+G
+ ]+-hqPPk?+gld1=&ebq(&!VM>p/-mF$X'cIfk[KT%QNqNhfh<Rd;lY0K1fVNp#`b*Vfl4Y#
+ OspRM909pP^n]j;ap]<S/0npIin-!WFi.0o"Uc<mqTD:>p%PUHR7`%+BPgMm#`X:c0>0Z7I
+ JUBC:.O&$HS-JA#%F"AQ]X<8P,Ts5?G,6DoJ3fUE?u#J_<E%msnlNkK-Xf`p3)r`k;nDp:D
+ $L99Z`i2!Qq%3ja%q<[l4Td,rTBW<^`qXRKHef!-qm>40"VP99PR=0B,u7t?QKrR=s:CcL)
+ 9;WK?&Cc5Ko:_<<*!duOI.1s6KWS$T]lj?e08tT,P*Z=icbC)0br8W8'dcHVP&KkSsF_bI\
+ 1tbpE\)5`W=.7pTF,,'EJSlb[3UuTUNG*$?TcQ,j>NP8J5uR0@[CUj;*,38kRc#c,:el7[\
+ EK2LV7F.)HJ!=WQ,3(0`2E&#IJq/5Y4>?>6bU1^[:df,lGWq<jmDGP\+fn;/PWM'CtR+2*)
+ D3bgVHI0[+t(2C"[nAGm)2j4[4X=eD$njOcbdUlO3@/Cc72XS5dAB_Pi5aV[QnLp[tlWpRU
+ '2@YaG_I&0D6<##!4"$PsrrXrQN',.@#9of]h,NCuR]B'M^n(-M\j'4gBX*_Co[hf\j&uZT
+ aka%j5+-0cd0.nlS*]Y5ZZfd6q>OK[97_-g.<QZo,4H7)\^="8i4e6naJSo>^^0UBRUpXdW
+ (@&`bQ#SNc$L,,b5,[hK(G?4FNrt6hIG;)gTDDYq>tr#Z5e#CR+F7VI.P!$eBCU#1&2cgGb
+ O+uEmB$<amcWrE4aA6-Xs\.t==pT]&@)fA8s<"to,!YMmHr`Vclcn2dRm5.V&hrdQlJ+j8B
+ W#qES-KD"a0i)T$orMg95'V2T,u+K\WCqIXXKRM?=mQ^L,?kbA!9[iP<M(&l%3I0d+rIhj4
+ /VKqV:+4o!,\'Y`Gp)K=Q8qfc\!e0T1=O@@5rRP;"OZ./(LF<rUpYdq;Iij'iM![Tum*=l<
+ cBA2Kn6DA"0cA3NK`->Ij(\r"3NS".^0:!h9>oq,UVk/&U,E6cullADn+HL5G2Oj4R@??DV
+ \+e^jG9sC`mAR%bg9k]45"9P7N/dE6p(o#+(S:P%66Z#>iT)q<a1cq4'MG/d(Pc^ufR4DpQ
+ rgJNApN"sIFd];#_?cPa#%3C)B/+A/Wc"+P0_J#QbFTMVQR'S/M/P$-AGeQ!RR_J>F3'KIb
+ ;I&+\rkH&Ctu2]0<+h[hN,MdTS70>]lR6$.IV^MP$C@.5l"p:f1(5bA%/TjgX,e2gLL/CsC
+ OaZ>VcD(4h+[Djkbs/hW0ENlHK13/,%Tcd'hE>-0P47GuJYNdL1`6qL`#oC.k2n`2i-\hl9
+ %?+Y833B9)`5(4Th;]IrHo_eOV<E3$]Icha4<SV"$CRS&UY-hLZN3hhe%ttOn510Q65e(i-
+ oq\l^jR$.2-0.!Z\jMndJ4llKekRi)9\ug-?LB/$#$[j$",eu:!7t(\DjX_BPe:(5^CE(I4
+ $0lA'*)t`&$-;>=0*ZceClR:8oL3:l!KH<RK.jgBXXDn<VVY7C@0V,j35B#&eQ@I1][U41H
+ X;+<;D_:C+`,sHX'j<-30]WXnMHW<Is4:RYt?T)r]up\UPZ7!WY8nEEKnh_oTGH(H&\=>Mo
+ T)H\F9W!52i=@]7sFhPIc]WT?e=Cm'8,#AF5>\n(9:#Mn90hPIc]M5Gu<Q7@^nN\gb:rGcp
+ ?_Wml:]/1<]U$N_nV,I/W,>A/'B3:C7XPZJ'V7ENZ`d)d^NNrsSdKBPlAk,2iH'dZfgeAi8
+ m6u%)44463oOMUP61QYQ7?'.L\^>S)`fWc[!.\r,HB^bQV7EP```X*^Dob$88.u!MLSUfrj
+ B\44)H&WV>YC+Dj,Z"t!$+&PSACX>H)&r>Yu*R2HpNT<5ahZQn']II&LBcdX`XEE7qi=T!n
+ %2O\<^^?3YYF8X:@.Z<,NnA-31ib/$4(A9XDjQ1`*R'nRk;R!bh]9SB>cbj)4E1n'YH?m)(
+ *FVZUW7!'IjCG1YQ+-kTG5)WF_`&Yo^5E2U4`1tFLU]M!<Clg(95s($sbO<stCQSM4a7+4K
+ n0$QheX4q(OFSZg1`Bfpm!(SLi23Z3"PBU^nC3=BHc(S%"F<XEZJ04N&&#PXG,Oa&cP\88;
+ %_G0Sfg2&,!#HPiNAE%bO`tLlC.7g,%Yq&]dYTd]!$3uN*mLkW,4Erbe=u6D<dVITF+c@N!
+ 0(c6[*GlG,Oa&ceD(E"8%'1rBgS6gk`s\5]mp'rd:g29ZhulgZ\hes><!MNDbVZcFT$*[^k
+ "'h/Zl+UCi4-VCV:lc!-H4s2J6W6DVr1agY:J@F";Z>]m"SR?a;jboinVtN;D+TR>=0.m^q
+ ot*BM-U7H20ORK2Q+h/:Q[aB5c\NU0rA`%#]O%h;AXn'YIjX/AH?%Tk3#^faaBXfW>Or/`6
+ *qS6u?G3mj)9,lIa:P5S"Th4<NRYuJ<Fi:">WDV*_f@Tcnp$:U<TFTDG[C,\UV+YfJET,m8
+ !<>n-(`9c&j2Ulhprf!W\$m%;GYo05&pAH(oFifa;)74JVrV-PMgFXSn8H&<<lVs7(^rDb@
+ Xa`-lSN$Od'7*$=I@5jP2H]FrV]Ym;KS*O%Yl8i3gt@kj^[cuZ0sL>0$Qfo4u[;,aC<h8^t
+ [VUUnkr^Fi\KiY`fM&$'BWQ_0(EcL/q8QaKc^u?+bEEH7\/7HM:D00$QhuG])_HlP:p+$gd
+ /_=If9f,H#7=T:_0K&s/.0U_FU]94*d6XB'+UE%prMb3"mn?]]JG/o"5Z:lk;,-or04R@-f
+ %739prA25<#5VB*=YHFk(Ogl_E?+kQLH3e(<DWc-40$Qfob]'$d^-LrI@(.LY/Of*UKY8"&
+ ^OElE_[%k@+sOm9ZUn;cKQch@9g*srpiOP-nh/0U%Erp`&O'Qm8ZA_i'##'j99Z`i1j+Nb>
+ frm9Q_Z0*6Mbqrlgnt7#%Y9,2eqa3J5!FdS*-b/02>dhI&.-r!>uRcUtS@klNdk+F#_\EI6
+ )XrFEC??S.)94nR,(S]MWIZ=Q"Y:I?'apF5k+EiN&*E1q@\jNg_1!/qq@E>tn$feAEW`oM&
+ [%7$It8P0#r4;L*GKP*1L%7_[:r,pc"A_$&BKljad_BPVCY*]?T%J,f0LR7#%72&A+nC52Q
+ uFStoRiN&*E1tf3JO8@lC0&lBZ_kL3Z'o0[=&.h)iE[&c]nF[/,L_<!2DWc,)3QfLe'##:[
+ 8s@?;8<P,"=0Qo;?I:9e&?"E;T,5(X:cb==:q?%_8i9=4UJ^eaj=<s!U8"SU'm))m5$mm(,
+ a=jj9g*Ml>Hi:mi3ZqR?(-Zi(&'`=1^+*hg[01C99HD78OuCF)HbA"UIUB&?C?K#g=l=J"T
+ ?pq9*V"WPR+9t#u`qqSl@M2R[WNk;WK?&daQ[E"r;=Lo&\&h=!fMBePd!M:^[B?IJM(mnh/
+ 0UI=,^kVh&Z]JqATuB!YG%OKUL?Y-')j>gP?\_9gie.!fIQZIs%+&J<M[fY>C@QG.Zk,^Ga
+ =U-:V#E>@;8$'UjJlg-[n.nuDURl;"@0^)MYFL79D8Znb\6/u=0?a=:sCSq^iK'N<,`ins,
+ N09AP:Pr8-gndSkIQl^5K3jE02$^7_YHJHE$_ncahC\D-rn!A%jB\44HKXIr/%eq\bnRc]G
+ P:'[a1+bU-8mcJC8aFb%NIGNED:19H[C*K@NoRne0FfcQM[f^g[W\P?(-[tqq*8HI*&*.1c
+ 2S<W(?ROj"X9PAhtJQ%!(%.Bi@MlLVu'L.5`t?8P)JlH(s+%^K%@&P':UmlHSD;bPYD>5=a
+ SehEUr-_r](.X]DKeKu[\lFT2+hZ"@q>JKka=&M5RddMJ\`3?4(5g=k;jDs,KhauTJjmZP%
+ s9gnYYA+D`[F3^dG41Vo42fIP(/M1gVD3\\SELJ3_)hEG7p/-#Yi0D,d3.1M*cL+:]`T"+J
+ eV&?UqB`0!RT@/8hY*%W`h_TQ9tD#T(IptcC[I*jcn]cO*uc/717;ZXLX:\(@IpT1-h#Jge
+ <I6be^ceH)3Y*iC!riWWTi-ECWX]?naZ/H!.(u$s8J7I'R"]XgtXIX$A@j6SbD"=gW6u`(H
+ &I6Tug2(4_+'08"'F.7Q<FLhq.6@n'YIjpp.(r/c>RIcI3n1!'i.f8g5=pP':UmC5d.=@6=
+ 5@Ep=-K!W[9:Z!fkBQfb#3)*MlG(K+%qb`F.O+H+m-8`X0GM]-<"F1n]+HFi0m1][U@%Q0e
+ &n'YIj0]U@bk3LOGV$Nk'!(_4+3SgfZ3dILOILR`sEOOsQolMVdB`\94S8-/T9p3.dCVSLM
+ SSgG`4^AC]RK.kO)(h21,"_tP]6?@5kR@-+.19hk!9.6=kDh7@-?%C=<'oVblKM1ddKBP<\
+ D!65_MBTP<-o].V4K:s+ADjO=o%H/geAjN?(-]*?6;[#>q/./!.\#9G*kHJI7<"h\Yq(jp/
+ "POHB9HR+C'c"I%4mOH'dZf6L0)BF?6@59Kbo[!"c*AUO#hk,4[A27BNPKY]ng#B1u;r'*&
+ T<elr*!q,6c9bshHYDR;E/.nVjb-3,/RS\8G]U-r+cX:FX%>Gu1n!7CuMqUT%iS5nN>hPIc
+ ]`RR7kFr1+&BVYq7oV'rUSlal<hPIc]KiJ/UNcs\*,D.fRJ>Y+`g(mZaPBU^nC2Tlo,C9gk
+ N);p1!'maA*?F$<"d=.E/o!kpWWlZPoo'JMRK1-e/.S^^D&@Pbk+]R@?Vt`YTUmla8.u!e_
+ <S9#e:s02BeF-mfog+9IHh^!"Jc>(FD('L#M!]<n'YIj7+9".9,7,YBa)of!$63'3l:0VaL
+ a8F250@UUSX-"`\[g4!WZM.P\d@\-1B8ee/75YU@qLM):K@s#lo)#nK<Cq963tu-BGh5,3n
+ F@*2El@0JO(KO`tLlC8>jpNOITke"oK('*(#a.##<,F&Bi7ZS&]CTUmcq6CIna\>6"NjB\4
+ 4)H&WV>Y>S"FD2Q>"Mg;!1hX*@8`$DR?Yks)S$T)l9@JtL!%oeXHl5d*UteS?s4h^^HFgHn
+ B`d4]cF7+!At2HLW4Ai/NQK]!`;ub@J.&s:RQ&)CS%MC8;>V8]VOfD(kTET'>$<O`\?L>Sm
+ ]`oL5<h@cJ>`#5pC@Z@gph@a_1Mu6F*"HaX1YKq$i016geAjN?(-\/I@)EoW,4FslT`kgP2
+ eSt=H4@]El?#=LUonTYC_@X1,C`>,qml=e-LuO;G^2/r:k:*.dMj3Ib<).hl`X(0";G;)<&
+ a2dcHTE?Q&P5NDJC`R.O7t[&6@7[Y?aRKE(/_UPt.jkic]g]&[/"hVR)ih#mrfLNX=P*o;=
+ >=c8;sM:o@Vhk<Dm&ebqBPESpD;q'4Efk^&NU8#*)2VD.P^M-SN>g*CNhT5iN`%J_.%j&ft
+ /BO=)AelS+eu\-/9G`)cG];B@^LVbOH<^Qt]=iB9bQl*cHUQ!P\gU(H_l:IbYA\pekpJ)ih
+ m-X!=]8F).-'%A1emQ3npGk6W@%i@j1FcoX0&Lh^+sE-qiO'K%ke@fp>n(TaLeo>CLHjt[r
+ +of*$oCd,41qEVZso3UkGio2&EAGU8"A#)W^^6(*:rCBJaalrkOlIcqsGi;O5L`6rm?S_-h
+ NAfe!P7q?ism^Nq,K/;o\A[1P<PF&?uTp2,8J;F!K4BkaR=(!'[L==fd2ldo&G5NpS1IJ]g
+ I;^AMO]@P#J/I>PQg?Fa\]palUo%LPnG)K:O?WC+fn*dI6.TM(mT3lkh\\pqX?(.9HH^9!9
+ 6-j(eOf#Fq]"1)^pg(IWq&:^;ah'AGp&7bI4ak";-Bmr8\(PR:2f;T9M(0br,c%iU<Q[A`.
+ o"[7ahCINTY'$lG>#K>8ArHb4RI6Vq60R/18705]lS#V$D1=7co,apjDYD'r;GWoM^g>TAL
+ /Zqg()A/RM6JO;3*["*.*f$nJ.M8WiE(jFtUu:<k`^(%!e3%UAf=FoJ5;=O,&b')M-oj()E
+ di_Gl/KO,oN$jS;E3P:'.(O#Q9pn4o\]a,_>2odj/?XEsX)3:\]H7D5*ORbbo)+2.IEGTh^
+ s8!J'E2)SD+;WK?&G449^4kQEWR@0J;Q`0a)+sNU@"VrEQ*C5nqWn10=ICjf.Nr]Xr$5,,h
+ g"G&Y/4oMNfs4'=;WK?&+sS>3#93=(e>ZB@Q`/&C2VXE7;ce6kIW0N5H'dZf?9LGlEPU"/Q
+ U+PK;nqAmdn`1W^E-CoQBUN&6h$?f;,L3?K+7Z)j^qk2/M/P;%c5[ml#q"YH'dZf?:c0B.'
+ WFo@B-?-qAoV2piqK/,68rBg+0nc,U"=^HDEk,/mZ%6HASZ'8"-Y1)1#[GH'dZf?911Glej
+ (;/-@&0Ep1jnKbks7d3$<uPpR@d):Tl?qH$]u-@1NKb\FP3H[Gek\.)ahp3fEeV7EP@kD%4
+ l=kra,BP_MOGPkB;;NtLIl8+pSi`7:2Bi7l.6TrkjF$W)XG.VfkeY!Wu*o;&#A+k:9hPIc]
+ (XEQhC>GVuQU*l%J<6t)gq7o2q'E!iJ_L]]`E/K^>IJ@A*cKE3j2[3R=eFt@HLrUIm'3hEa
+ La8F2(3mOa80`N>m>:gJt>Egg=iR5a3RBRp+CQ'EpM:.lL>)7o(2Gdo)=B@QqXAsdF64L1`
+ &#'_eiHL![0';5<iN5q!7:!phSKaM-c(pV+E&KFTRHZl^K5,&g*Zjkq\jo-m2,;h.D>d(U=
+ H-CtKm.h9P<Nl@42'`OV(l&1P@6a((R*c"9+%blBRE/hTooe!W$:\N/KJFSblY4r\ItNP&P
+ 4\HZTU:"qd+i(CM#kDjViOIWE9b#HWN.p&OJ7NU$3Bl%V`rql/``uak_c^m;TrLWKBQLdI9
+ B6=*`e)0`dFiC7K8XA+DgZJiI#q&N?FESk\NJn@]$Fb6J?AYk_3)('`XfW%>56:"rI=9Smd
+ ^Zn>aY[?BqZ!=dX5:o<9A[-8==d&ks"I[8@Rs0d'Xn]93]_HPgo9L(<Rga0gY"#H>a,)_8X
+ IlRd[6Dh5(7M^i+`,fr=9u6/D&48Wp+P>h?_Ts*ku41lE+8(#A+JiDr@(a#uNL3/-bgB+sS
+ ="b)6UMLQ\*pb2X6jG#okN8ilcDP:k,)br9Oq84cB5FQh(8J>Zm-,Uk1!/c29:,$D&koj)@
+ F28\!=P]YF\`D^-W2B![4FOdmgl2C<RDf>*S\@DG1n&mIJ%j!i@2`Rj>Y>ueMUg(HSXcuOb
+ 4m4+b!*mIE[_(rSO/GFgj8/9%([O>>`QCi?7h3*ibVe_8_]GMb#n%-qZ=V5QmFr9e&h[G!T
+ Z!F@X4:BO;Nd<1FlL>r6rE[HhquJ4U-3ks7!GtIa,Vb-6psFbSS<%^E'Q[=T:]JO&7PU0"_
+ _dkY.UAA,#&&^;/@[#CRX.!Pb;,qI_aiZBI_/B[l66)2`*?>G>!rQ!T6;DV.,s/!I:$oU-e
+ 0p[!2*5GT;,sRK.#.Kr*q\o#9ZqRYt?T)e!e18dHP6!&X,XmD7<CA`R-cMt3tZ[lQMGhN2U
+ t9`VQ%_:CT@@MO0Tk+YmUYNp.0W+X=s+\_qnl_1FpG,6"#Qfb"X<<]hbgSni0V'/P5[&M$l
+ qGj@FH'dZf^91_sl9LRF6^e!g:n,..8Tc_4Qfb#)._QqW>Gu0CO<ssre'Vl4>5]IeQfb$>=
+ 0-dWI7QM2n-c#uJ>QEg<^Y+9nh/0UI9-&YXgOaH`o-<I-32b.n$E>Thb;n'\Yprg9<,+:e2
+ %=/J2@;R#Sn`&pU?9/BeF.0fg;I1P'Y.+T--i.[(t)RpqCVnH'dZf4u%J.\&1TsA'bn;"@*
+ @'f>j"tBeF/-EGf33$2&C+09@L\jJ!P&OEYCkC6[*j\#ka<\tF7<'*)4@&!P)H1`&%)39u_
+ fQ[67$WK8G,KFTr;OEYCkC7'#'"nQm%l%6Vl9`V+;2lF9Q:+'u0g92Zr1X<F6Un04&#LWa*
+ `2T%Reg/?V[J#L^mYU(JlKNp"+\_sdQdKi!.&'+3PT5*nL3gEr&>TV_b&U[(N\J^)Dq&#k6
+ K<(NklRQ?-3.elg6t7CLaWj7HQ/#D:DYtkB`arrr3e]T-4fZ"bscKMhb-+<i#JO!J3&Q>AR
+ eI+ah'AG[CL`FolPc6O<srW.']"t1G'a<dTfabh2h8)48"P`duYmV8h9A/SlE.S]M!<CCMG
+ (*-(I;j!%=ku>5rmD;XuHb7JYJOZoJ[D]L&%e.&YAAn4!..nh/0UXfN$"YZn&=%)bN%!%?p
+ %X_Cgi-[D'@U)$f8[FmX?B+-%ORK+b0fl!B_iBuIu5J6a0#K9V!??;+_Pn0l65XgZlNHVqX
+ D^lr*!+]_cZn86KVb3&38uB:ECZ<N=7LdQ_ns?G=2-tc**ja(/Kbp]5gY7(uQ7]"0SRUP%b
+ HaX,!6D4sdID'R0NF@A*-pM[;"BPV=Dt"F03u8]\uiP)(+Y.N+)Ye,Ogst)O="cT/mPojHh
+ Zs4YV8:+5O+Yf?hjNq#6kBMgplmiJHNHn\Q5b2hFZ<MMo4g['1R+;o#asVnph9XYG[?UFq(
+ 06.mcQhN(0S,`Qr[qYT,ZDjG8kX&+r%M5?P4%A7X>/2S#^a1%mPaMMdR\P@J*4[l@mpp?hD
+ ?nS)]O'bqI(*K8NWo=Z5@'d\KR7Z[k.n9Qs7jPmtOe-crh$%"O9>[/h+rUZL*m&YhlK;jVJ
+ S.H8E;.!Z`bNq%OIO@9A?'[@PC"&sWr-X@AA+-c0eS_40K8ZqFP*aLTpa16BoSE\'"U"sa^
+ .:7YPJ"Oj?XI1*JHr*Xk$MD"/U[Y!<`X+jYX.,UgU1g`kdQQmSW4/%d8X!6G/5HpJ.Q$?V%
+ *GC;5.K]Oc^-/6c<!?A""6o7_H@PYI//t2E*Qf=TClZ71+H.()@Z4@XecDP"kVS<)LMId)B
+ GS)'Q4j9H-$Bm:%C(%V/O8:a"c7j=X.X(<cRIl_t+.,[kltU)-'p<L';*/_DXNCMTtLLU?)
+ -!b%=4FCR'SIAk3_YEbS=O@rPPP_a+=]/2Npp+5l-K+.2)]dmRc4$#E%GV,&>O<FI_G!.r]
+ (aU9qo81QsksOTd+\MOfmHlLb?!k=o`+H=Z"U.ba1rpdd-:YoT852fYkO;):98+A3Tr+$\;
+ 8S3KDjLmVE17_/?_+_0=VMVe3dZ/CPpSpg7FYfUndt?NLY\=#"9bZEGXB8"WTc6bP^HFcNB
+ AZdXT=i-dj!qR1N=P(XK3scq_MSnEq89_l7iLr3;Z9SDJlUla4>1]V)2fp.@ibr-6M+AJik
+ U\NKPs'bZ/N>5VB%8WQt3M^=$QdmbQ;]Bnam5q&tZ/K[9[Gl^UO2cC,;%K@Qb<1T,-\?oR+
+ QLFcTij!jq6JUX.OGL!2@:J=G]oC,k,4mbec[Tq<KP\<9U#%Y8m[V_%nZNbHlaN/>/5P(-e
+ OcbbML;hX"q,QXQ#sZ`1kjZMl:A=d:@J4I$[F6@U*;K0T*OEKZ%NFGnEoP"DVe1GUp?cT9h
+ PjGOH9I5YXk/:FB9MH!j3mLn*SD1'k*1FUDVVb\Ck(Rt9qnAVS/;`gQcJr@L@:W&nek"pS7
+ 'Iu^1Sq-SPa4[>h%6Vp!]kP1H[*8l,dGVBtZOD1n-Tn]%9TA8!9c>Y;IJLL77];nuAakF1+
+ =)PpQeD#suX?L$u$u4(c3u1.pE(j);t.J%fdF.W;s1-BIN[Co:l0/1aK>_u&\rT6_JBDHl'
+ JUBc'loVa_?`fgYJP;RO81c6WgjL&F6"qgg%Y;FABaQRs^=H8>uq/UGCJmSt9-$@/%aI1*`
+ klgFkI>mljY(H@VGO7i@a1m(F1>oeaH$j]epYDIB3E%'4bfi`mrL'iGWMukBg?Er&4?qRa8
+ )g"A_Md7F9],W1Re*R9!<E5H5,Oq@;tsfpfiZ^No&+ar]obCB4@(%?Ljbn@):Y0)OBG+F84
+ SH&?WH5=9pdM:[QJj3MZT6ZG@[6?(X@em'Cc/6iZ't!`7_,fg*'^=WLd(tLcdHhW/jQJC#i
+ ,s6haAgYNo/W-BOe7?kE'oY"fJUjOUG;4\hX*Opjr1q`=4>BuBHe?YB32qMKO-U70tGI/G7
+ p8o*hTo#9[8d;T!Xh9X2%GC=jX,,0a7,/.[[(jB]Bg$Muk.t(G-9:/'FH>GjDgXof\E#f7O
+ QBik8l;*s3V9T[=Pa&D%Y1?eVH21A/U_$Ysht`6#.4ZJ[Sq%=b\]4&0UdRd%T4qamS&<+[Q
+ MI\.pF*ki>>M;\%S^Ku7_L!*^g7G*qt5Wf?Z/_,6UR,&_YEnNp2;h@)J<)qP%TGHp#3S;80
+ .j.rfQaGc7df#X]nk+<$\_)o9^lK]a!MDBbSQ$Udr+BFPI41"A3$1@q-ha7XScYV$^"oDmV
+ Zr?/\5D%cZceU+%Wl<86X0cHg!%G[oFu.O$&\4$-[+<S<)H1jNbMRU,1N;9?1HYDjl)&6r?
+ >q=s`hqe3Ut/q)+_()ACrCKW6Lecf(QjIm^$fEK-$jfM`#hom%C>`H?Z2`X#I*I*d5PuUXW
+ d+$PjrH`jM]/3qt\jrT@pQo;A?Q#]>rV$8^1u8m-,eNU>of[E\OGpp[cW!kCaSobjR$a8,Q
+ BkQuqkE@@ngr^nfTk*"Qh3)&TI5,`M7qC+;]G_sWrDME(N1*`H6[Y#CY-V2H[PCQIWk6Ff6
+ 6ZA_5*kPFu+19p?hSIFr=BJA*i&7/M/P63*:KeCtc6F0/(Z1E--2*Q;r^%GA/[EHH"$QX4&
+ $Z5f`XlPK8K'nr$%H*I/oEG6A1^Ze:4nMGB[$Z"qso/^E"WD-L7MgtB#HbKJ)@rfkB1or8<
+ C?TDB@eFI2FD%C<X*jgZs1q@/BFXs"XIN8`r&Z45MmA7,6'e%/cYck3h+C@$3e<KMM<S%&s
+ 1Bs;c9:@]k*bkQLaA:BRjX\Q82VFu:A7tA<YH<^.?SGa7`3P(D9-s7iJJRd=-D>QHP$^^'.
+ e*EQfr#8>jlgf&XPZI\<4Jl&9OHUi2c@s=BG;7sP1O$Yd<f+]l?KXV^G#pCgXZp+^.>c4dD
+ i`+475sA(MFU$BsOrmJG`:X>pG<7mrajD0m=brcA)YH<sTU"b34uB<_,AZTP<+UPAKi5nk2
+ P`c*$PNGDLO]1FVNrdn0,<_*nelcFVMog])eUCX[a$HsO:Or+_C$)sX.4RR%MJE)R,>Wo0V
+ +)/!Qj?DjAIi./hrjAt#@nuG?b3o,0i\Tn=US8?L/pQIQoCIm]K1X-`7V[N9mnMJYB^<P'F
+ \J:DKje`Xn@^44I,l)c5I^-\"YtI5c^^Vln8\:b8h5&"siYGhkX8G$F(.BSbqNb^,3YYF8X
+ :@.ZUn03sM3cWOCs,9<$VL-Hp)m,X$&(QJC!@T*RSCL#\>]2BA(8s2SC)AG"P%B9^SYNp\'
+ jO99U!Ef[XsX]3TLQuMi)qM6p4hk+3g[>UXR'8UJqBI=O[0dCRrm0$f0WR]Ba@ae[q^$'lK
+ X0*6k?PWMP4EZ;Rik<!V.t1Al#(#]Zcupke5cnk&#=+s+\.(NS"]#$fSmRNshiP@%IMf$9#
+ &8l%&nAX9?QVM4*[bseb8gL9QhM]<:"ZY?02V^Dp&ST1J`:[.[tE>=e.alSH&kF.<4>oS@l
+ AWT6bBsP$*Pl1AU\gE[oC9TQF4iK%S``i+>fsg%KbXM?a&%=T9e2%;VdqkJsW)uK;M5a=pW
+ ^H:4JeD0Z]>tVJ?7W&-FZAkJ.$.'-C#V08E:uIXWq7jSRYS>Kfp,n)1Wr48>Q#Q`8IDcMAt
+ #rnDXsi%[2N40,HndOKt2@=/WSEX<IX#Xb!jRUn7f>#WFlW]6CV[uG-mB=BPQ+_Z6`6=Aeh
+ 1IpY+sS9<*Wd-^9P[W:\*0!uYf/9c@i-jO]jCj]`[aIJske4$(*X6BhbUR2)=G!<Vi?At$(
+ d>0/EVodfR[.2?H"Y:L0aGKomCPK61@<q\,:`W3<nAt#sL5^p\O?c.gY$egM=Qur;3GM3/e
+ D^M)B/o"#3P%mAk-pX89_MiN'4^Hn0bOQB[8"7qjXtqV$K(!V&T0"Z?U1n&HV4)_M5bBm&-
+ IXJGP$`M\'mP4ddZ^g]N_W)hI15qQED`?q^:N'-U7O*d/^:07o'3XD=!(E_Da()k]^*T"qV
+ C6(`C5b@c7<]aTq3m`1X2eR.sHg^`nC)Ib*8KD*JeGi5X&<hY7HMg^45<%\"YNORkcHYDO$
+ KPc-6U`ZL6r?W2ZanilJ1#W`iV2jZQelV?qTt:4=K%f'E_sHpbG_],Q5DT-eF+7jZX@\i'$
+ >/#*<=GPb%a`re-QApqG0$PX6c&?pNUmbHT&X48]Cg1DLNjN*IfYV;o7I-oXqLCP=S.?Wej
+ <O6C[\&1U_<RTHQ\nG6D=I0LUSKsgMD:[V8-bOD,-ju0?@Snm:I(n$'$+beEL?@[^93KmX*
+ ^#7<NE-#",9nGdh29.O;R:(Hn$J-/qjRH:Uoi+jk9\EW/gWHS]9Q,=T,k$K]-Un[,EHo!eU
+ U?&Bq+:bhjReT;Gr\60'6$6/;[E2?;5Rp5!D/s/mXW\@;7\pR+8SVjB!aHNAf(aYgc43?i<
+ "8Ou9%W#5Qb^X]Uerc5G"S8u?M!TP@1RDXZGqc3DGaldrh`]r;Ot.m6%$oX1[[1gtf"aE@.
+ 6]eA$or@:t*#@/>#ogZ3XlE(_0<dB$i0/9Sr^?M\LR?FFl@.oZu/du<Np[oJt]N[@`49afG
+ Sp)J`DNfR;76Op/mDOhp_Y^RZA7E"Pe;*r0K>A],brRGSM:p97_Do0r*+:urIgOPuh"OR??
+ =K?Wlh^i=A.F`GCtU>#[gOOFB7\!Ildiuqk;=Dt#5*)K3nBUV7\c&7B=1o%CtM"$]`/NW'P
+ '+[H*XfCe"j=r2nR]MbO1u\lF>L4bS_@*rI`VQ@1-D_O?#oSf;mLmIoJ1_4;:1bNsfCOcKW
+ j-"i&^(ICkRnQ^s[AIAB"P-R\459s1r?oQp>;f9<e\-0\`gdtTFa=A$9FaNP#t3%/(3./E`
+ GkDWfX]&^CEU=cm#rC#FbIni.4n.1eLQ2Pkd:#21Y#m^g@fpr%QB8req1T/[Jj77K!BSoGO
+ 8QWUso2ncjocg1#e_3GFVQ"PKqXTnKjO#*E?FfA#p_63b7?hIY8kM_5rN2roLADa6=&e8Q:
+ \\Wg:!U1Pm!e>L<&/=;`F7i@SYX9F:8;-)CAU&&al'Mj:%t-&j2ocdfm<QSf<BoWSd'#`d$
+ CW/W4r.5:/2;l1%t&-l-oK%'76X*DPb\>CZhl[a,tRgWZ$u`2LC&aIAqL#i3tA3)56]$G-?
+ JM6hkuBMnQXc3qA=B:.1=[d+YmCqsJCEaiW&QfXpqRGl=HFF30sLj=;t:/Cn+Aoj:2%d+#9
+ ZE7lb><K30E@P:g(1I?>7:ct!`o1+Q`]&pVtg:(\=R_G7q;Gn/#1%m4H-:X>9deWo%GJ.3?
+ RCo/Uf6^$-l2F]HOu:Eh4#N13G[J*?S3/$L(V>lW5AC$F+.]S?IYEH.o\J2\+$WMKN4muTD
+ kC@bUs@idM<V'"ElX>%F"Po:BK^j7@-Qt<Dk(oNH49[!No'.![OTo_aIddui2O/%gX^R]bK
+ @u%5/uHaR+#i>0PN,*\o1qqjVjNN2L#*YeB:PubPYDRR66p]Gi($?666e,C11+-X.qg-.Z/
+ Q#i2M_V3RW+U.kN?L5/uI8\8amVBu57DEZTKRf1<TD3g)@,l4k\Q'n34?3];_?Cu;f%.="h
+ `>B;=-+5'gY2>lGL]EIT-[G1(75sa"UfoqJXO,..-i@@X27n;"A#GS3oi:$k&*I[XmdV%+B
+ 7l9a2VY6Y>28*I1K<;EEU+i(]rr./K98c,(6\RZ"0dS-"qFGte^OO"SrleH4,U=X_*O=`0K
+ teDJ;,)q<8\OQQZ-(hmK;8h7j4RH$ht`3n0D_V]jict,%lN4(J@TjjRZOu/daAa>&3tZJs5
+ <5/>?b;^U-J77-5d[=lsqJ6c6dHh)QA:UDY)F2UgAIIXYP-MR&H8u)0"Gl4>6Q0.:l/PX`Z
+ e0$`hhSrV,2J:F#e$R$\_G_]c7jZ.u$:hL@M3p8ke'bZC5qIL<p3LF+J+<HtLp:Q8Q:[u,A
+ )8`=CQFPF!mg!oTa<@4"`1g,-SmSdCsSHdu#G?HKPSc.@&fWer98,\@U\2`FjfE5Jp_JIo2
+ Kq7dT^Ql!nVYNE-#0ZDl,jI68QGIA3^+&+FfY'-=JGSeI?Q5UM-!f/=XgMrr;U6b9D;Mq9\
+ [gDHcGk2*M%_7MO*q!,r?<<Vp_\3hEpM;YXqUgbFO@l4Q[AGde*=8VF(j1!(^@1+EM&@(?2
+ #7ZE>Wt9oNGa`o*X5FqRnr6Aoi?I]YU89bo'dVnaSVgr4(9l(#@d<oCG=96I\6j^qdaUS/f
+ q3h1`Ht@<V+rh),6IU1TM]+-"bX)+,0m/67648hh[d$hA<'?3AKbC%If@UPp6QgC&^B+Q__
+ H#7_`h7P?V&"U,o0HF1E#?=Itc//HG)oB/&Lg8*#VLP:GYV59%$&,[[odtGC7@jCKd5T%/@
+ oB@iPrUlu`8fqq75T(C0-2EAi-uD.5Gbf?)osOo6Q\jO9B<dVhS!sLPb0.u/Y-&Q==[O:"S
+ !scn$SPl$XO**e^4#n;_h\14LL<6A$.IPZfX6QIWW2Nje:#QBQ>?90.F]diaJ9h4aiCDOC%
+ Gbo_i:uTR'0/t?3!s?Bpp_mD/BRc$*6"0XT)lt-9a]jH[W/1/p14e6YdXR#A+LojsTUb8nX
+ s->a"2X!Dig3C!ribZns*VB&##`]kp)9K@5N+%/uPR0qN.F<Y+RcrsKWt^6(KXS'Q=i;0'8
+ D]Soo!JC8[<P:fSS9f)BBYg2aloeEXTFa*?oOo=C5pq?R3BD'U:?Fqt]G&Z(@H2Y`1Ds!&2
+ jp1<[j]4G@k=6`25mBSC+)/UAfc%ksI8/Q'1fE'Hifk9j.C*0]%l;KV*uQ\GJ)^k#klPgmD
+ UO3Ke=+Qee<.3A/JK.Z/Fukq``[tqD>>E8ZcRuVMq\'t..qZ_LnU_'@CFe.O`pnee6(2tC?
+ p<0L0D+bp,h[h&+Tb[U38F7Un01=O`Zm9<ekNhFWbs*cq&R`[>#-HEhb5)m>cP]7RW*+g@A
+ V^m:65gTsHRaC9@0qAN>$Kcq*PO8GQH`?:Pmq6=?q!MfNp^rs^pfe`\Y\Quqq2o&cU-)H&W
+ V8o($H"5^)!9D,*d7H>]/7BN\M[!26DZUt4$VTUf)_Wt^Q>D,I\@bJVECP)t(hAs3qr2f6h
+ noNugr+\1E*/G6cSmKmA_dSs;iD&/W1fDMs6e$MJ>A%lA>/$7!XLY9025)%6MeZ/mTH(YB#
+ uf\XNR*(rQYQCm)f1M?.Qki!Q]#$R]sa[L;$&ePj"t1>2J8+q%#rc`dM[>t77n)8Ze[aI_M
+ A&"\Yt3^!NhB*V.Tor2>U&]Ao1g!H/Ht>9ds;pA9[ktiIj!Kg(mZ`PFj0<27M#sOf_'_\H#
+ HumaIuVhD#[^]_qg#Y5)CtD5ci^Nc1tMa\S"VRSg>i_:GR'/-$70I@;i9QL^0X3TN=k,B^L
+ F7J7>%336Go@H=?+7=A=Q6N&KZ!%MrA1guQ9:6W.!NRTZBb_#Ot)9:=nYkce'PFiX-25P,G
+ 7NQaU+?LB2'4uB("F2fN851UADK`D/191%CHdM_njCcD"9f+FQZW,%%li7sb3bd#Y2rdOBO
+ "9`!(?n3Ok]DL.7?O2M(H%oQ)l'6-qNq@uQKt><kH@Z(*,_98hH2U4a.(CH),(el)s_^1T*
+ 6#g`]10#]i:k/%%pHMPpT1Pj"^,]U6-@&S6aoY&j;C>)A:+eJW"P-H)U4,*rp>aQ6tr5[I8
+ NkAGl1j?0To-kR]J-3u\Zm+$:>sdQ$DZgMPW10n9J=>+.,UD,$5;(R+S2NRW]oBP.S>d%_<
+ G_rd\J)eWGm]qN@_#1+J/9G8b^X`-hq%.\;2OZM<=?MMlUYI2/T$g_+kNcJnP5\jU\9pjRg
+ M<#+9,NWh6&@cfSA9.rQ=@f\o8o;jSH49siMT_lR^!j^.H7aEsd7AO^H?V@FH'iH[q(e[48
+ @b!\5P?L"PuS6c+YK629V&:oI+d-4bss"dXWaj.Z-"h/*Ju6W<X5*hWj[W_[)80XKXoU$b?
+ dc'pE(t$)[@,r]oLO+f";Tlg#PZE7I=mg=lhLLTFifJ-Hu2[3:[i\OW&#(s5[+So%WWCS+*
+ 1_Jh$J&'$2$rciU#"eR;Z]*D#%C1fE41X^M>&G]4#DMt-\0!ite!$3>5ZBu8A'le0I%nogg
+ kgA15Gf<>s5QnU&mCOKKQ69cFl^^Hr0QZVh1nrBN.)H&WV8o%b$V.u`=+o/rn;=!LjfK8KM
+ 9,N0rdc<,F_shZN*;L;0$P6>>pGX<h!'65hh/kV-F"=oF]+^'snuQ!O@INB+3WLc6C!p:47
+ l<JP7L?.2ea*)1asT3]h;/M."G]ADO84ka4e-?6+E(1,mFS)t^fGeDokNk7CD;!\gdt15go
+ t4>YHP,G^_;*CHo>ltpsW#'Q'@NSPq4sePEqN#$PtG^hRu!%/BlGd:\m$]h*:]g=WaD\=Tc
+ !c]Pq<Ci1(C_htZ9g_TCf3A8[+q,?p)=b?-Ac77H1oiR4n2E#kE,De>C]II2$F-7hZI*'(l
+ &<o1)eiP"N9PO@L[&#DT%\iB'DJk8^+M+=CdPJ(,%2(TuQeXD_e,k&,@0#D!MS2bLTi_9cp
+ Bi82@*#os"d+EdfZK-`3+lp<HV,"^MD=g=EdtC-2oSSnEX:JlF_"(r3&ebqRd,b6[,!J1s8
+ t^-l-@'l^9"uN33H@$'QU@`QK6'M0GjgR)UZAbeDelFsdT<Ub&ebon]c=gV,UXst?A(G<M:
+ $"$rS=S^\a?"WhW7S6e]V#`oarKKb*Q'jnIRL9-n6]EiGnp"?n`Ei92!7+6giC@@^3"A^L!
+ I^?1,2m[F6AB],XGNH8BJeY>I'Kll2^r;TcRNNnnR`$O]nrM?$_^FmE(J99HhC3ICJ$RKNF
+ ekb<VGDWG(5V+qKD*qJCh?([Lu]%^YVZ=jtQ:f*hI'`kI&h!GDTBk"F%JqASR*nUcr1W[0J
+ 9""N]c#MR7A@->o@>>8\-^D#fs*g6:#gg8WIhqW34aa(4c25B_Y7D9oOEW^L3q_L/iK'aEP
+ gL_bX\eY1<QD[nlNf*Md'_,f2iCdqI&4s,P:&rAG,bN4=3caIDfdCMWp@p4iPo#e>'!-9<H
+ VEG<g"s7oZb_.:W.#/.nt3UhG#l?*p^QYOGOKE>b^(*RPN)H1Y[:_oKlR&UIUBDXpEB5p-Q
+ aOe]:g\6q0WY%e$op0?G.R8qhT[p6;m8(IbN*24;^$_CG][b1kZT`7Gk%8s?3P.)HSi1jH`
+ aA3>.@6Rc1@ZJ9m[g*_3'Wf16SZ94TdTY$;I&.tt*Y;AgA8>%!a-#8g[PW#&Ej.R/:(+#/o
+ QrR-d-$kEJmtW!6TY2a=Ipunt74^mAdNg%^K6iouAq_mG4bp]=Tp2J^;)<TJI;Lm3kU!7C[
+ 'V[TCBF,T-RVF&;WK?&C2.L^L\Va]G>BORHRRQ_gs5/#Ed-p)-$k!&BP_Kak<VJn,$(mRh:
+ gR4@O^l5-Hqc->Gqe]CR)a-SRZ0O],sYbEo/[\CN=K;[D"0d8U0;@a=&>\7GKrb/kp6>\j\
+ d7^KA-\l/8pWj?4&#99"i+]PC\0CtN.)gSLuI''tlo84X$R'`n.B()CMeM#;[9ma@npoNN^
+ !I2VPiGoK.CY@.`94:Rlg^HOe\Q=$:0;-$_P+705geuW"Q>?`V(&b;;9B!YVb%o"(TmrS&3
+ LltXX+.D8)q"nm!4*OHZb,!"NE:rMHU_a/Q/1;pFU7sus$KNp93e.+hnb_diV]L;j<Bd"(4
+ #*2:-)/?.7u&IpqXq"E8PMpE^+@AhGjk\PL-^LVkien@-Afp,dkkLeg=jW_I_mWEr-$FIr8
+ oY^p&7bIfo#khK7ST)H$O\@X@Y!_3#q:,BV-1D]J(02/4U$Tja"3^^M2,m['VY:-AIKdiJ`
+ /uFUi\86oe99r&f.VpT!00i09H?ELS,ZYIV4>k09B0BkbZh:In8Wcd0pUBk`]LamkrL?.aj
+ Pj2MPZ)%IZJ[#`)2Zph0'F[p/\?FrD`P=oslAYcW`+ae]2&O>CKOXB/Cll@l44T"nREKir8
+ UXK%%?-_&3!71m`,*rp8q+0Ncnns<LUS_+oHsh@n]i"k&$#(TYUo)W%%28EART@/8hM*EZ:
+ ^YW=?E7idaP/',adQ,9UrHS.]fH/c$u$ohB"N>'q,6eGM<\C\b8-NtQ./G/5j+4UZ7G8i)/
+ bs5hEk(F288GC;N%>hp]>sW8/!9p1`[t4U*s&PN:ZJE)l]/+J4i.r.)F<qn@]o=hFh.e7N"
+ @,[`e7g?NB:+dR55hD,$5;(R'&BQ`ZCi3S3!mW)j"M$UL40-b$3s8EgQR^?U;-bW,3Gq)\3
+ H"1T,BM5"H$-Bbi=="BTpB3QudcTHa$6(04EhJ$>']p?=I\k]f$:-[-r:-ItEXpAa9ZoVF/
+ pp.*FgT2n>Misirc;7:rJ;Za_W8LQh/o,RibuXS(d!2(,WW??@dR5*?4P30\D"V(\+!.8MA
+ ,qsN1f9%Y@9Z\O8D\C[>2/B'W+Qs_eKIXd;erKWCRS&UGkG-B*pfSl>6$pWW%-VNRCr:ZX?
+ u6%FEBRB-Pc]D->M:;^Pf`,l_`7_7bBfie5H\S^He6dXM\J7>N&<5LNQFB[2Wrm6j\BE+AL
+ ?ehGNZXoH<VB>h@-`cjm[AH6r[N+-7fp:,ZaH2e7AQEHP*I]n:V@!T9Rm9rP.ogHj\^H)u^
+ "!NhB*lIc8iSrb9ufW4>+'qXL2RBN&`l1<!Q69TGkcjQWUTb:#o8E1]\(<oLsW*%SXCS[OY
+ !9bHOKN;""C%<n$7=A=QU>GU*HFa+7WK8IfJnhcXpM,]>Pd2Ucf)*r1"TWs:RZM<nS#;Q*-
+ 1BKTXbip/AX4h2=TX,&+AK6$<nlG=`gPn9>D"f,ll2^rP'VQWTRTKGNC[_D:>WRi<b%*[MV
+ :_&B*J>=6(2J?UXS1K5FfB;CT#r^*dR-AeC+I!66T(1j.WShEqs282Qh`PBq07>m5FWj!.Y
+ `E-CU+P<]Hms#'i#&Xl79&(X1LC#,J)lV6mTd2`EZcDHF#Jc-+9dR7nVAqKthlfk`8h.*0W
+ V9EoaQ['VZ5I6DiE5l-Ym6(5$4F5auih7IldKn]?uq0?st>g/Oaanf+l2)'Bu$Q"O-c5H0K
+ NDJC_S?,POYV85lP1/:ESs>QU&J8G*(mH(\+!7,RW2LXoDV4>;Gp4sN&U$Vn$m"QR`f1p?,
+ IL^WU)+m&Bu4oOaH7^K)c4k5S3.7PmiNECSfRd\MjK15F8JnhUEm6-qqAGn_83tQO*l1s34
+ f]aCN5MNBP?&'@u'nJ#J.0hrHcjm!P`>H>[0tNF&]ASc*E\aL6LeSOF$TRk`8?5aV#n;V:1
+ J`SiM2n0]C*"H[BC;B$.DCe]4+D5X5>.k^bq%X5N!VM]u1EVeM<\Qfm]#90MPIRMEPE5aE@
+ 7=9tnB3?r&@Pl2+g`5fns&,\*/e'k-H;3b-\<9W7W?I4_&S/c/X+cg>H1`CN1mT-M\n"*.C
+ 'E[U^/#+@Z8_h4Uc^m9m4lh**0Gk-)$RMrR279m$Y$JZ*HItJafZSF2'#ghmaBb6:/=,tpf
+ 2Ia;)HI_<h`3T\4=LEaRsF$_8'.ohLML[@20H7IRWEg;2nu)-jN%qu!<X@"Lkp"0*5^;NGA
+ 3eB=0O/V]OpEdZsi#e+f#<S&,G_O(F#Sal[ule=uGjl=*GN"),_6MC7:T[1_ms%-b"!L'##
+ (U8e[]=().HN4i"(d*q9tj;CHa9ZG%s6dHP=YC%:ojVB4un6:4+/02sg*V57$"($J\7Com-
+ &aB+fE'bsT,/qK/^4ss,5+naikiD)h`g:Bh%k:^34\8p[<obdG+]"2);>#ifX5Q!c0:7tfT
+ 5!J<j]YLAb=.5XAH'3Jq$*<^fr7d(UjC-GS%63f'HukX(m3\=_n)"o1m+_N+04,L.<n6/J3
+ ;iaRWQ,>;XKkaf6bN4ePMNip;\(I>Kn8+k3]D^S0D_gFMP^Z9P#XB5*HoWdBoRt+TcW.KV5
+ C,I?E<?FS'BH8"?Lc([a6DanWbG`Oce-\94mHi&+Zsg[tK,jIX1V?%YpnXZtf=#gW,?s(VN
+ l"n`\@Fl9PKBbKJ'B\/P08PGkE.lCk%21gr.b?M%P%5/1l1F^V8`kK_i7fN@F4m@Et>9e7%
+ focSh2h7\/:i+h='9bJ'OW/$\e$Le#HhS&nVQ(J'ss!Q%1^?DH'Ui6+(VG3i=DJ_=LGFR6m
+ Z0gh"]A7@@b=)FtOVn`ZMAQe*[]XY1,XW,9N5Ztfb,l/*/BDB&YlZ^:9Guu)qUV##YG#Z(7
+ @ZBa0Yd3@Hp=%nL?C\gSMS<-k'(1CZ:fO"2L'Z1o=$YLaj6lkQX\,k*`e?BUO0B6lg*lC]W
+ 2okr:k9fF+TSZ!eDUudB&dYG'<-lX_%fuICT5/2SFY3>1\cD;R13fqaio[mU7+Q(skFtp!(
+ h/>MdnhZ/MfGE")DG-2COtX!Vn/F1S9Mh[cc;VEGpb^0^9]os0hKR6ESP?$t=6%>VhZ]`5.
+ 9XN^70As1d3eNGBA<_A\gc)QUu=]q_N_"rJcnLa.qGk(J"HZ-$UP';9q4c[96-#8s=Rro87
+ s&jU3SJLa?h!K:NSe`:o4$.e/nJ/gLcSM^NK7effB<:"2LCW&*!m8](?2TiR_qF82?[5cPI
+ _W#QRj]j?:&_"Am5R8FAk;Up1`BGZVd&r!PNVo@LlK+NjgD63leZ@HrMfN=,0A^<Us9-c&,
+ !\XF)r]V-\Ycn@1<YT]#P"F.5*?86mAhk]]YYaP>E[GWP,U`otF/'aKPJoY2*"]``B!83jS
+ %aORAhAkAg.-]tM8tbrg+qFoM;ah8=k4\6LZ$hk00HRTL4L&."MdQKfO%IX=`^Dnhu+-sur
+ -pKgP8S!3<#7T)n$RZ<C<NF-@Jq!d6nJH>RqC8$7-f\!-<FE!2=Wi@FBSljW16YiEtXK67.
+ gJ?De4h:S#Ra+a<GCODaS)8]Lq5aDOEQ^jqCi$2SX=Wt,<:B2r5CT,gjj[s_E0)l"=Ar$mE
+ 9"rtGW]k\pWTb]Rdoq<R[P.Smq1tFQcD>cijJ5c\0L:'ADLl\^>o+3(Qgn-=7*?Gl`I:kc-
+ +;bm-,6"'f@g)W's>!52(UWhU'VGI=8fPjkA_/)G#VZ'i;Y\[r:/1?!W^HT'8+8'+>*Bqto
+ .&gXV_kB3S8V_mu=frTWW2]/K-NV7nn&h#H36h;,c\UE)RJ=M,JDp<M?%W'?4K$Oc5qrTEG
+ -oD,8]VlcN;)^H>]rW@DU<%kXt^_o"\Z5Sf%EM7Qrpmrb'$(p)+[jU&2eDIS*I3em%+DTmi
+ GJj1L9HVJt9(pM?9.lG/DuaY44Q46D`IMUQ`4cY%jNYg@e@u_i7!8OV!&UGt:,O-bT3D$=Y
+ 0"q'MX&81roRTNV#(B#?jkK&f`JFT:]2k0nbkAZT5-;K\bb=;PQ9@YW6&)%Gadi#\Q?'*ac
+ E?K@lYb%:B1=[ihbW'"5Q0N!'?o%>KoQHVDWVSp%[>ARDWr:r*k;+m%-9-_8@Lt6%<5/H5J
+ O=[QX\\MOJ@$J_6.NM`3cHZf58;GumJ4,-tMc!K[q/!W\=W-3+$+C'"BsUF#s@3GJTqOJ=-
+ &!'+cq!0Es&!!##o!W\=W-3+$+C'"BsUF#s@3GJTqOJ=-&!'+cq!0Es&!!##o!W\=W-3+$+
+ C'"BsUF#s@3GJTqOJ=-&!'+cq!0Es&!!##o!W\='5Y[Q2Qb0lACMR_ADYKhlL].99-U0i1Y
+ ct@+U>kb*bJV68nT[10lXs#"Za3`:?ELB55'H9pG':=^K)jZJb/u1^-ocPFW*5AXp@aoB)>
+ Vi$>nbbNQBmi,S/g?T#0&&?.I$6e9dCA9>!fjkq0Zi$T%,u9R-2\]1R"#dP2.q'-DR?277B
+ YVNrY,<;c=>:L!m8^p7W,PdF#du#V8n%/F#@:np;rUCNB1;h;$djh6VmQb0%kY_4Z/jD*n1
+ )W3<DV4]FDW`JYP7I3<?89dV26piHmaVb`qF7\dh;[#)6"2VmDHF*Keg$VP/<9d5^UbN7/8
+ CcI)/YEs(#9g![i+=es8pFP8O,9nG4m\+VX\T[A/mP2U>e4K)fihBs,[7_bX'bqIp#77:gH
+ 1GLG(Jr>U8LtW-/1f:6AiNCuCTHQNZiftB;YLIU9LD07.BT=I27\5:^?u$>Bo7h,NW'I+9h
+ e?n7gNb-X/em-(!Wc#Pq,pX%i>e`YcqM\-abHR.FC:C30SZ`Q-q.N<Kl.j>29:8"/U<5D]d
+ A?Pj@ej4Q16=q/%sh-IpYs\u,o@lKRQ<]T[XFK0e>2K9Q-0roCQ9XI>Y@G>@mSiBRJI".".
+ /Y>?XCH['M8Pj\#uX!X/c?.(lA37-mQW9'T".iscC:^Cm#S=H+6DXTH`-Cp]u\d&YAai_a]
+ qO(We:7]fr69%nhH$N#_BN@jik2par(f""0<FpG!m@^5L6h^lF9W:3)_Q91SQ'IWGB+.;@F
+ mG?(+gmf3TBEc$83:t>rohb)PZH2ARLBZ9l5haOUU[hd!=sY?H6Uf*_M&Asm\+VXH-0D?8=
+ t8'mKpAM%[NEH$47,^cP"NcT:[22"B+&C'5J3fL"pRTmWl]pD/F.,<O@gLOV'eG6<^<V)BK
+ s.Y_4@S6rDbk/M5L9FVr?_nJM4LXquAs?G?.=6FjY;Q>aMnZmNr<apK4,_h,P6]K:?*Uh'l
+ "ApSclG?M/om<8AS50N8)DGA]@O,qoBeF9C\,ict.,H\)MbKJ(Cgd&K.`F@ZZh'Ysfcol34
+ 7!5<+X%W4P;G=:9RU@i%hr!>JhVQo;p_iRkVP^5VQBpjgYE+[&WP>fSh`Z<Y,SIGiZRRIHO
+ /K.`:A7&)*Aoc7X7`uJo;5]V[!_0)N1'_HoV*Z4!'+cq!0Es&!!##o!W\=W-3+$+C'"BsUF
+ #s@3GJTqOJ=-&!'+cq!0Es&!!##o!W\=W-3+$+C'"BsUF#s@3GJTqOJ=-&!'+cq!0Es&!!#
+ #o!W\=W-3+$+C'"BsUF#s@3GJTqOJ=-&!'+cq!0Es&!!##o!W\=W-3+$+C'"BsUF#s@3GJT
+ qOJ=-&!'+cq!0GBO9t"on[MJr%/3A(Db]$8V!;IXN!W\=W-3+$+C'"BsUF#s@3GJTqOJ=-&
+ !'+cq!0Es&!!##o!W\=W-3+$+C'"BsUF#s@3GJTqOJ=-&!'+cq!0Es&!!##o!W\<\2>JU"!
+ &Dll4o6#qz!8n._"pMU_Fo~>
+Q
+Q Q
+showpage
+%%Trailer
+end
+%%EOF
diff --git a/tex/img/icon-processing.eps b/tex/img/icon-processing.eps
new file mode 100644
index 0000000..c0bff76
--- /dev/null
+++ b/tex/img/icon-processing.eps
@@ -0,0 +1,698 @@
+%!PS-Adobe-3.0 EPSF-3.0
+%%
+%% Copyright (C) 2020-2022 Marjan Akbari <mrjakbari@gmail.com>
+%%
+%% This image is available under Creative Commons Attribution-ShareAlike
+%% (CC BY-SA). License URL: https://creativecommons.org/licenses/by-sa/4.0
+%%
+%%Creator: cairo 1.16.0 (https://cairographics.org)
+%%CreationDate: Sat Jun 13 00:07:09 2020
+%%Pages: 1
+%%DocumentData: Clean7Bit
+%%LanguageLevel: 3
+%%BoundingBox: 0 0 129 177
+%%EndComments
+%%BeginProlog
+50 dict begin
+/q { gsave } bind def
+/Q { grestore } bind def
+/cm { 6 array astore concat } bind def
+/w { setlinewidth } bind def
+/J { setlinecap } bind def
+/j { setlinejoin } bind def
+/M { setmiterlimit } bind def
+/d { setdash } bind def
+/m { moveto } bind def
+/l { lineto } bind def
+/c { curveto } bind def
+/h { closepath } bind def
+/re { exch dup neg 3 1 roll 5 3 roll moveto 0 rlineto
+ 0 exch rlineto 0 rlineto closepath } bind def
+/S { stroke } bind def
+/f { fill } bind def
+/f* { eofill } bind def
+/n { newpath } bind def
+/W { clip } bind def
+/W* { eoclip } bind def
+/BT { } bind def
+/ET { } bind def
+/BDC { mark 3 1 roll /BDC pdfmark } bind def
+/EMC { mark /EMC pdfmark } bind def
+/cairo_store_point { /cairo_point_y exch def /cairo_point_x exch def } def
+/Tj { show currentpoint cairo_store_point } bind def
+/TJ {
+ {
+ dup
+ type /stringtype eq
+ { show } { -0.001 mul 0 cairo_font_matrix dtransform rmoveto } ifelse
+ } forall
+ currentpoint cairo_store_point
+} bind def
+/cairo_selectfont { cairo_font_matrix aload pop pop pop 0 0 6 array astore
+ cairo_font exch selectfont cairo_point_x cairo_point_y moveto } bind def
+/Tf { pop /cairo_font exch def /cairo_font_matrix where
+ { pop cairo_selectfont } if } bind def
+/Td { matrix translate cairo_font_matrix matrix concatmatrix dup
+ /cairo_font_matrix exch def dup 4 get exch 5 get cairo_store_point
+ /cairo_font where { pop cairo_selectfont } if } bind def
+/Tm { 2 copy 8 2 roll 6 array astore /cairo_font_matrix exch def
+ cairo_store_point /cairo_font where { pop cairo_selectfont } if } bind def
+/g { setgray } bind def
+/rg { setrgbcolor } bind def
+/d1 { setcachedevice } bind def
+/cairo_data_source {
+ CairoDataIndex CairoData length lt
+ { CairoData CairoDataIndex get /CairoDataIndex CairoDataIndex 1 add def }
+ { () } ifelse
+} def
+/cairo_flush_ascii85_file { cairo_ascii85_file status { cairo_ascii85_file flushfile } if } def
+/cairo_image { image cairo_flush_ascii85_file } def
+/cairo_imagemask { imagemask cairo_flush_ascii85_file } def
+%%EndProlog
+%%BeginSetup
+%%EndSetup
+%%Page: 1 1
+%%BeginPageSetup
+%%PageBoundingBox: 0 0 129 177
+%%EndPageSetup
+q 0 0 129 177 rectclip
+1 0 0 -1 0 177 cm q
+0.92549 g
+2.25 2.25 m 2.25 173.875 l 126.703 173.875 l 126.703 29.457 l 95.609 2.25
+ l h
+2.25 2.25 m f*
+0 g
+4.498583 w
+0 J
+0 j
+[] 0.0 d
+4 M q 1 0 0 1 0 0 cm
+2.25 2.25 m 2.25 173.875 l 126.703 173.875 l 126.703 29.457 l 95.609 2.25
+ l h
+2.25 2.25 m S Q
+0.4 g
+8.503937 w
+1 J
+q 1 0 0 1 0 0 cm
+14.793 17.715 m 69.133 17.715 74.914 17.715 74.914 17.715 c S Q
+4.251969 w
+q 1 0 0 1 0 0 cm
+59.637 37.348 m 87.418 37.348 90.371 37.348 90.371 37.348 c S Q
+q 1 0 0 1 0 0 cm
+14.316 46.547 m 61.992 46.547 67.066 46.547 67.066 46.547 c S Q
+q 1 0 0 1 0 0 cm
+74.531 45.957 m 88.641 45.957 90.141 45.957 90.141 45.957 c S Q
+0.807843 0.776471 0.803922 rg
+95.133 2.324 m 95.133 21.766 l 95.133 26.379 98.313 30.094 102.266 30.094
+ c 126.703 30.094 l h
+95.133 2.324 m f*
+0 g
+4.5 w
+0 J
+1 j
+q 1 0 0 1 0 0 cm
+95.133 2.324 m 95.133 21.766 l 95.133 26.379 98.313 30.094 102.266 30.094
+ c 126.703 30.094 l h
+95.133 2.324 m S Q
+0.4 g
+4.251969 w
+1 J
+0 j
+q 1 0 0 1 0 0 cm
+14.316 37.633 m 46.785 37.633 50.238 37.633 50.238 37.633 c S Q
+5.669291 w
+q 1 0 0 1 0 0 cm
+11.559 64.637 m 50.43 64.637 54.566 64.637 54.566 64.637 c S Q
+q 1 0 0 1 0 0 cm
+11.559 78.313 m 50.344 78.313 54.473 78.313 54.473 78.313 c S Q
+q 1 0 0 1 0 0 cm
+52.316 105.941 m 55.328 105.941 55.648 105.941 55.648 105.941 c S Q
+q 1 0 0 1 0 0 cm
+105.152 91.27 m 115.391 91.27 116.477 91.398 116.477 91.398 c S Q
+q 1 0 0 1 0 0 cm
+11.559 91.988 m 50.957 91.988 55.148 91.988 55.148 91.988 c S Q
+q 1 0 0 1 0 0 cm
+74.574 77.527 m 113.418 77.527 117.547 77.527 117.547 77.527 c S Q
+q 1 0 0 1 0 0 cm
+11.559 105.664 m 39.902 105.664 42.918 105.664 42.918 105.664 c S Q
+q 1 0 0 1 0 0 cm
+11.559 146.688 m 36.594 146.688 39.258 146.688 39.258 146.688 c S Q
+q 1 0 0 1 0 0 cm
+74.574 63.984 m 97.145 63.984 99.547 63.984 99.547 63.984 c S Q
+q 1 0 0 1 0 0 cm
+49.227 146.594 m 55.234 146.594 55.875 146.594 55.875 146.594 c S Q
+q 1 0 0 1 0 0 cm
+74.574 146.559 m 113.574 146.559 117.723 146.559 117.723 146.559 c S Q
+q 1 0 0 1 0 0 cm
+11.559 160.363 m 49.84 160.363 53.914 160.363 53.914 160.363 c S Q
+q 1 0 0 1 0 0 cm
+74.574 160.363 m 113.313 160.363 117.434 160.363 117.434 160.363 c S Q
+q 1 0 0 1 0 0 cm
+74.574 118.324 m 99.613 118.324 102.277 118.324 102.277 118.324 c S Q
+q 1 0 0 1 0 0 cm
+74.574 105.141 m 112.961 105.141 117.047 105.141 117.047 105.141 c S Q
+q 1 0 0 1 0 0 cm
+110.449 118.945 m 116.453 118.945 117.094 118.945 117.094 118.945 c S Q
+q 1 0 0 1 0 0 cm
+74.574 132.75 m 112.859 132.75 116.934 132.75 116.934 132.75 c S Q
+q 1 0 0 1 0 0 cm
+74.574 92.273 m 94.465 92.273 96.582 92.273 96.582 92.273 c S Q
+q 1 0 0 1 0 0 cm
+106.082 63.723 m 115.965 63.723 117.016 63.723 117.016 63.723 c S Q
+q 1 0 0 1 0 0 cm
+11.559 119.336 m 50.91 119.336 55.098 119.336 55.098 119.336 c S Q
+q 1 0 0 1 0 0 cm
+11.559 133.012 m 50.91 133.012 55.098 133.012 55.098 133.012 c S Q
+q
+0 0 129 177 re W n
+% Fallback Image: x=0 y=0 w=129 h=177 res=300ppi size=1191132
+[ 129.12 0 0 -177.12 0 177.12 ] concat
+/cairo_ascii85_file currentfile /ASCII85Decode filter def
+/DeviceRGB setcolorspace
+<<
+ /ImageType 1
+ /Width 538
+ /Height 738
+ /Interpolate false
+ /BitsPerComponent 8
+ /Decode [ 0 1 0 1 0 1 ]
+ /DataSource cairo_ascii85_file /FlateDecode filter
+ /ImageMatrix [ 538 0 0 -738 0 738 ]
+>>
+cairo_image
+ Gb"-VG?bm7\i9,"F3d#4gm`BBHVSIQVC.6Y7dLaFp9'7c`2KqkF;4s0&N,HN2q.iIJd*MN(
+ 0!O2.gc=^Jg*XZaeiQdnAkF/87(;LcCXaRRe_@rcQDn&DUcAOXL3M`p8j\X[t":a561cR_q
+ NhUXK:dgK#[^-dKBPp1N_j--Vp=1:S7,0O<stCUCJ;*jQ5K#c_0b4U^7$?lJmpi^3/`Wocs
+ mZ6^duT^YX4qN=-OT:sYp+)l*c?E:'iYrN!]J5P7t_8.u!]*h1244?TQT;kl^K!.[7@k:eK
+ `#ls/nFq,0PRK.l0FR%@)5<kcdAi1d.#ls/tFq0^$RK.m;@-=cjO<st#9@u8+18G3VkS87N
+ WsNJA!;_G@_s0LU!.aKGkHG;T-3.EVfK$#iU^7#<"Ek8*8mIh_iue<b%KIW8^]!P<Z=c\8r
+ qCpBp[[FheFTh,M+a&8?+UjQ!6F6=%mQ:A3,4&9_[8\u!!##:e)X<W3NNXb#lp&c3DIY)kH
+ MU\-3-;k,Ye8kS\qa"RK*=JW\e-FFQegQQUa1PlN,E:!6>bLae,eY==-QP!$Fe$-@`lgLZO
+ %C!<@0$C%<Y<*0q"I!W_;gO=#Ioc=(FK-3-;+,AC$c4,:d@RK*=J9r5t(Y]En\U^7$_':1'
+ GrV->@pRQLsr';L1F4a_g'*'.&/km86Df@<gh""B:&V!gell7nkdKBQkM,'9#=(3E"?LHCP
+ Rts9]"Ek7_LGT#3E`1>=<QsRg0dB/Gp`"n%c=)9c-3-;+@[`OJlR^Z>?@,Dbhbtoh*#8Ti!
+ W_:l8J>.*QEU/.?iD8/Y7^muLMMZ%!<@.\V$SYj0CSr?qgXP!gP!AfiE@?u!.^SuBu9/n:H
+ u2cQ\`CQ#j`P6$.8qVaN9nuXA0"S2J^`?`8<!HFq.R]B`\:qA5<k>Qd9"'nlZb@J#s:73F!
+ <I"9?LlOX@0(03oOej_tf]ZOkbL>RAg]!6?UR-ELZ$@Vhr:(\*h"JXs]^6CIlS3T\\b]Tt*
+ Po&i7;=i<NJ"Ek9A6^duT3Mk2Xe[CIl?@Vor&c[_"fK&kZOX:($!@jlOWrTm:p$:-EnIGqW
+ S\mu3!$J17R]nhm39]C`4r3[,$"<=o_akF@+K&D3YW<kPY$IN?,5sSGN4a3KB`\:qe4^NVO
+ Vb/1kW;&J^F4bDG7Gil!.^T^BnDnn8jJgF!'0`$UqGj$\1.%]q`bs:k:k<e!6A$6$*/4+YP
+ [)3r[-W3(W'!1LQNn`5`RNm=S_d\H/kN8M"gs:(h`"cdKBQkdA@5NTL6aar'8J=&AfPOLQN
+ n`5`Rs$=Ifg0Dr&Q6`5`G+#Wmg(-3-;+$YdI-o).%P^&!Q[,XU^Dk:k<e!6Ak"9j#A>NZKu
+ sN/.uhKYDX?o9(GEmbBgDk02,A4'/Of`=BpN!/"FeGk%4[C\I,J'-83dR@1&F!*#D1d\ON\
+ B=VL7YdlT^-3XPV@:<T9St<C![aP98,s+3#FmIUa\oapUYY^mQW&1WQ$*2T,<rM,5IhD4MK
+ [Ai+@4eD!`JYNK/6G/#Dum>(XjaIZ];EGQ<14$?K[h^DScA:Gs!-NU$),,)_qIeJS=CQP0/
+ )UQ$GZ\ND#a%G+7o*q'ciH0dq5&Q3qUn;[i;t3ij/rbZ^nuP9Tt^*PXUKG;3+QSSp^:gG8V
+ 89b)K#iBf8[fSIQ!Df!Lj$O!fUPFq/:>9i=ngkKTJd4Z!37ZZ+m?o^m6Z#,]hUS0k<iT=rb
+ IZS!FcTb+fg3NT2:aj/2OKb=C54?P`HF*!T`RnD.SR]q[`FD\tC&L'E7&Gk$>XIV0s/1b&8
+ DB8Ch"Q5c1O,rX;;[(lNPOb1$i%/ck'S?(M_qI\GOH:%=JI%nR=0>ejI<[6$QS`-YAC6Q)s
+ 7stb5-5B$mi9nLR[?dGfs><iO$Wh0=L`<Y5s^(7,?/@sV$X3iS$p$0qWf=hYRpa#*7dZL*'
+ &%mH3aT97sK9g]H^BiI&..MTD&Q>Y?,)LB,6*JS\nW%PJY1'>^Qd<Of,+TN$D:*gnb=XOCQ
+ +$/=MCifL#);F'&faErGiWlG3^u/%2mpWMujP%T9.nQmSo[LRr;5VO`N=/d-A[LUE3j!<E5
+ NK+C4cJqJ\e*2H3e1eh(@?&0KQDh*WS0*GWDLUE3jCY#S\*dIa3j6c4ciuh>?^M1P72Bp.a
+ g2mq!?Q`Zu%Vm4^gt^\`7im3;/76+r`^-U]Y1'2Z^7G;k^]42GMte&SE3XJO4A?.j$gm<IW
+ (.nZhQW2#\H]#?FH@iNeur9O\%hl9`"s"Pq`BSTdj-k2Q[79bkg7PA1Z]ON.aU\SYk$Q-pV
+ oGT&Fj%Zd.kmpnhach90MVAFNDG?7BSqV!o9XYX.Vi#5PdXhf+C?m4,:r,KQ1sMTH8@YH8C
+ 'MHl30gde&bNXNlNiCMha.`&A;qZTErMO='D`maNHl5]9],?7k1[<LV4]KN7)eX*o?h(%J,
+ Rd,rV^/Yi\ZZNq[85<h@M#H?/DQR8eC/k:sC2-.Zi&kHe"LUE3j-Vp>0*dIa3F8>B<)$d.h
+ FI=K;jZ:TX[F\^una2"VeBVJSUe,<9Wi`BTp##r!%jr_PPK>*1+B=kjdtH6icddODlX*).5
+ p<VAF'&fa_Q6qKML,1$kf42ur#kG6<LUYNK[jt4?2s]?5=Blh+Z-KG=<hMm&J7;lrW%%lXn
+ 38>''((,9%/3WYWD-#kj8RmMti`-BOU).7acrm^3hs6-j7nLc'pZ85!5*BKJ[+[-KoRCB$E
+ bl;^'5+%VmFdf\$sV49:Pbg"AA;/PYrtWAK7'$*/lgQTsro<Q(4=Jp-=]G1)$M$k/pQMZA,
+ 4$O]mBCb![hBla16cJ_X<7u@#T!`A>)el8(P()CfD\s7S$!q9snhRuMi9GJBo:!1Y%QE.O[
+ <sXt[_jX12VP'R^2_tMki9T^PcGpV;QqB/BR]ouhmF"6!q9L8!\.Iro*7e;^'YOp>IJ`_8%
+ o!%Af\5&b+iA5_'H&c&=T,ILqqqGQG7i_<#aKTDCG8K*SNhUQp?^J^_\>B%k0KYA'c!S;GZ
+ :,?dtH8tqgP\T^V5N-08,cULGbPJ9NG1046(@%EhXVY!*V3peuV5o%e4\fHVKHh-EKOpLGW
+ l'<(%*"U"[e#/Z21Nm-O'P-Veo4HK#[!!/uP__s5dQ6UO66OX&u+V$T5[-P$9hlGDfW*SRq
+ <d0N(;ZZB7GBi6*%-RUr9qUDKC#7hm2#@K6%)%mPp&fVdZQ^=WLkYk[6VIIiphtZ@Rj=)3]
+ 79t]he:4-IFq@4/S\lGD#ls0hldfP%/bPb9q4H^[+E,4g`7RhS/tsct;KV(UdKBQkFm7=C`
+ 'j7OCTg06\jd_AZI]00"j0!H?FcWE8.ttGf;DCdEH>X:U#Y3t^CmqsK@bVOf7/WmP<(/"Eu
+ 8Tm!9D.Z)^fN4r*f2?nsNjYo%W-t`]r>lf.;&B@coGn"9?NfSNBp.X>nhNNI9G]Y)XSdp>H
+ 6cG.S&(qp&WG#lp(Yo(!DP,nB3)OF5>T[*FAqCLc$e>iX=*hOVPT'*'/!iku.CoAR=M]LtH
+ I.pZ6&?0Tn7Qal`$6EZls8.ttGGO*g3d?ENts$Pi\^=!a;3Vof(+tkMS+j8/>U^7$7_$;'0
+ +DrNJZ?%4^mF<Z;=ftrb3K,cq!%f$OI(6\#d%Xqnnp+QL>.&b.?iKU4E=GnqM*$KT-30\Bb
+ Ejju4UqD#I0]R1Y?L>`h>PZLr)7]AeP=94@AF20BYg)mB11's]mKAuTB+P(_#9&GH2AW0bu
+ HjQ4,8qF!2'LLKn]!.,BANLo9fPqJ+?0;brQM[\j`q@igNM2H]$H9N&1P8NL2hRZKMVaZVI
+ -TcgWF9jg1;bIO=4QAceC4Eu8Tm!#kD!cTb'jrB-FL+J7UmY*dD[UIU9+dV-$ECZdD""9:9
+ qA2:jgLA3,prUV-24<5,.5/XF.%b'gac/_1sh`6WP!<DM+FEEa&osJPf.dB>Gq=Lh=ICFIS
+ J,I?:885Y=#i%G!1][UDLlRL]Ga^e^Md`k&1$$fqqR(b#[b"l9>tWSr6HIG\U^7$79$a(l,
+ u/mKa,W)jTEP$1W:s=UM60jC3d2&Y9`RFW!WiG!:OQRY)KP4S'`%C(IK^fVA^SRcH373Kp7
+ 9)l'*-At.k<.7;dX=`egCq4O\bH2f>Z2T3je2q]`DnHe';Bt#lnX5:f*6)oA8C)rqC]Ek8J
+ <%&p5@<;#`heZsgABh)UEN!<<SoeP!>0H@5AD8+-!)YCoE-:\&7$CdcLSDl1\-L/h703K,c
+ q!"FgEN;W!Ks!m'qp\O3Zp[-LDnUb]RmG*'YoEFg=?Cb7VGN$=:qnL>,N3B7gEu8Tm!8=f!
+ iBl3F^^@9Yp=fQBn,HmN;D(]CoWL?3OX:($9mM#0/R,ZJI0'Os<U:\pRK.kMKnkH,qg0,-J
+ M$Kq!s=Z4!2*mB2f=k.p&iurs8D[_'*-Ch#nnZ,55ToDO<st#I5s;J*BSG/I0'Nh+\_re$Y
+ a<h\0A+fhgP5>dKBQ[OIVirUe)6QVa6E`o]ahe-31i>KS5!o/X1Q!3_1uQpi*#8!-%#ESNG
+ Z>GF*_5"Gm<n*fMoV!5@OVk'm!R=0g/-+N;Ea,EX;Q^n\#9\,Q?pJ!YRa"E>D%*K_ZK^o<]
+ gR.fq%b<@Y4#bIWZ8.u!]='f4iMA7Z@q9OQ55!=L#E5)\ZkeQT?\*`SMs72Mka?E1I5Y)Og
+ d#IACY<+d+@\#geB`\8qPF\8X@Ddg0*BLk,g2""?R8=Qm#ls0h;Gtr//R*r<)=kcdDob+f6
+ 7#/9!&0!D2f9TtG]:cf,u^^:*AqS2^OH-_XK3tHMM;QZ,#&%;5bam#rY]DO!!!WT9`P/Y3i
+ !&^!(_l%!<<,aO<sr-+<Al5!!&f^RK*><F@[&F!0@\)!WW5K+\_q:5WY\I5V:F4dc,g]EcQ
+ 1d5ll+8J3"fI!3/X^8D(Y)qlEr,VLpcd3&n!D[k8jKU9Lk`\p$F4FIsn*H$R6B_/%Z*SNV<
+ Xomd#[\@B&M?G63RM%_5"8Vmq@T:`l>k>#5c>O!jKeRM*64[!)-6Osgo4&%>6]SI-H/%0CV
+ HhOPR*JU,3YfVe!RmA(Wc_#^ap2,'c4\2L21c]sY0QjQ(`VKE.bLgHTI=8*0&.%:SiPD6q`
+ ^:9bkqu2P_SW8K4/h5YM%d2eR@'?Yr4p:6,=dd8Enir5\kJ=3bflJ2B2Wqs:l0gW6Y^A9oP
+ V!Tj$r=;!bRfUB+?hGBo4Ol,Tn4[\u11\p?^HWZ-VLmjk8DC;h#<3:MajK/2ILO6C2DlE8a
+ .KD@W\84aXWU4qf"[:]2TTCdC?QZHO..$iQnnG'5cmrAGdT5B^>GHBn=GJCsu*!p#e/5W\k
+ ?8Rc-QFWUhif<8PJdVWH'A#[3DR-j+bUN4Tu+J&M[,WRlQ\rQrE?[r#S9"2pdbf#[14-qU-
+ Q>>=Wl1C;h\=@;!/-rPeZ6],`%hB39mf@$>E,`nDcB`%i!*SM=9c-Z%:[4M]W`m&[Z1MoI*
+ #qZN4hqK_hVQZk3ljjYBbD=&9&7r^<QC+?PMU-`$49lb;1B;l`7`g(/M0]n5mJ:l1/6E.nE
+ m^B;YPB3ao(u+ja,!;EcSIDM[T1oJi_Q;g#7gF#3dm;R%$$QZEkA4?4Vp7E,^VkMj(F/,UO
+ ij=Km>PCI*qlcCB_>d3cQ($$6(AFe!+0gPd"+U8"Amml;_Y.3K`50/q8sH[C)]+!:ErV-,i
+ S'.+qKA)"G/CgB^t%R=X7[^MYaDcZM8jSAWIf</Be$4?@upYQ3t%NRTPo]X[qQS.C\j=#5^
+ 5rsFl8NCrLF5k+E?XM]5ldi0BT!8E,B;`e1*D8sq;7RJD5jaMl-3+%=1o(EX!(_l%!<<,aO
+ <sr-+<Al5!!&f^RK*><F@[&F!0@\)!WW5K+\_q:5WY\I!!#QF1][RVl&[4l!$E<1"98Ft6^
+ duTJ94<q!!&)jB`\87dNM<boHu"6-S@%>d\TV"3#</=&KMmO\+anDB`aqT;,L3Veua"4E1h
+ [5ZHV+[h1L<2o()@M)]3=C1]bsYgt^\%gU9u*qT<98P)tbjh9o+_,GYWM.IZ-,Hf`D8$Pk:
+ \#G<A9QS)RK@YKl(c\2`C$47,.GB.lTmcit-4kZlEk007`naSB[%j2[:V[5GE=gi,aFnZ+P
+ AH)N&QS2\]f@="F;1Eud\@K.]B4oCL=+GBR:6Tb^cIpt,;g3Tho)o1!R+p/p?b^JoCJ4WB<
+ TWE,Vo5A3;Wou4U8"CKm-L[_4Fe9hNZBpV[sNCG;1E,a3BK<l9I>I*%hM6tBp+gL2g=[plM
+ _89&mJ9^,p`lam=g7J1M4k(rl>41s!e9]d:Jad4aN@(VZTJqD;-t"]fe51Q28DH3-=AGP-Y
+ F]9p]SRiSd)YdfC_6E,ZA5Daf&$4S=K+Z$5YCV.UDpMuM-M-RZrJ=!U(u@U`f"FSOqXAElr
+ _j5@b+MU!JT0)_WgX]r95]/bBM_Sa9FFSLH>![f?Cl46PGil-h5c6QdMrM<(`;A++2#EXMP
+ 1gr:[gdqBPe`PfJJ6PdU7Xo-(?b/KE"THhLoQ!/YonQP:YHP-mZNuA\^GICV<I!FD487%Fb
+ n]*u2fIPHrK5VdCY#SGN""ep&.h?uBn@AB^cUL:KZi2bH[##>`RLM`%4l/F>"kWh"/O:TR]
+ qu:gq=!k1@+AI$<^M7X&lL%rp7dJF`eGu"!FO9QYB?N-`[6nb^HchKu1^DZY.VLns<Z!(Dh
+ RNIp@9N1;0Ok'-83t^NTm5mN"9+5^MTh9hi<Cku+`<KS>-!a2^hu>X,/Sa(\HDnqm!3QS,,
+ +*WQ>22fBaq!7aODJ@tNF-KVC9]=)M,/l,8=l8Kpg<+FUeA_u;#kKfdHcd,[<BE6,p#\a>?
+ !(oVI!!%Ok8.tt'O:Gt+!!(Fj9`P/Y3i!&^!(_l%!<<,aO<sr-+<Al5!!&f^RK*><F@[&F!
+ 0@\)![(6M<cQ:Lie3dIOk9#,N49[MpAU2I#D\>i<[U3TYlF(fV>F=<n&8PHa8TmqC.W_@1%
+ eTKO9*c4TFt1@dIk#4Y]KQlr4g/<iOUV%(OJ?61%4?p]4XLO^Z[kkZpV%&:V4\NXh'g@),N
+ A-oS]tQT22*PCl\DAV5p2(f_V!6lc2da@*ul$Mn2mgl@:\l_oa\K7n)<Fe$[Ni8)LFtm,[k
+ ^b`q9A/EBM4\Z0M-Z@4$)Hp.G=bJL/-eirf`oB=]gqeUD-pO7W^+AKWom\CYk]6Jh7s2/EK
+ *`$j+ep$ls[NZj?#jKmBBA&VY]l7q=\oRc0I/sj3>e\7m.<T-Q&Ksk6QO4+YIYf]a#^jE'*
+ L#i^FWIF.'o,-+B3;.hlt>Xs*ZmQ8i'q/^p!Eb,5am3e]G_k*m@IJRV4Z2D;k<fPZ:JPb1=
+ eXuOqB=>#c%drX.Z+6k5)dIrXYR_55fkjYuVrEWlL4Ll*HsunU>TZdf$g@lD"W2JM[DAQu\
+ ,%cJ;jroluASZVGr4B4g"Y/`NWg4_*m$Op,b:"t>W\^kIH2-08nRr.LnOFmNBlg#9CtbE^.
+ qe-?ugK/<VRb/u>#HK)cXAP`.4ImH9I)QijqGemQI0^(f7(3YLb+@m0mR\Z12*I\MHffs9$
+ SUOI*[I$gLc1r5](uc#tW#a:h1udZ"'(9+e`nNVbr1&3C79(p_>XhJ7^<Z'7of2:FK5)N'J
+ /4WC2Jddu@cDJ"fb,UDL/3'!@I)r\D]i*=%nI?VOe,F_g]&F$M<NuSFMekuXfrOO.XhL&U^
+ =!CA1@onIc*?_C"j^$1EfAnZMaX0c!^MU+5Y7.X;Dc%"OqEGinhO`:HJ6j4be88X1kD<Q9L
+ /8M@IQpKJW]uR<"5<.4[$/(p-8&HG8/6@J3tX<=B"E!XcIZ=5pdYEL-].N#9%*qNeC_8lRX
+ 5.Y`I1!I<eg(&`O@_$!sn7SnTaGamjsJIh82(4AQ=J6]CcMnO<tl9,i>&QJGR@n>iFYK%NK
+ q"[RTO<u)Ub3`Xr9Ag.=8k<+X/R.fE'0I+=Iul!"7GP'mCh@@9mCju)X)3nQkIFBnqb'_H-
+ 6Nef#H#gnL3jG.;Mt,h]@kN57Nk_&!,2MsMN5Bjc74r$-lHg+__XCZT6Eg>&2**+EWi4<Z$
+ sYm]3)!#$Y]Z/YT.Y'6]RfR0?_\b=ANH*NOm;cfoROW];1[S\K4Q:P^&F]`VS?e,u9\&Z'U
+ h*8/%pnNi@8]pER=2(I_=EHBicYH^p@\9]6*%"KEq"-ouJ:6(_S1IkHVmH@A`&g]&G_PFun
+ d'<'tcq#0.bn6?Zl[VFIH01*PD;\'A:%FPA/ge'2=E.(>BDkKoaQ)!YDgdJ>K8/#D%o8C8*
+ i=ZHSKBggaCn92<gg',!"'$I<i;BGK6c\4"$&YW!p@X=OQgqt[[NJ;'/3TbQanJT8q\R4][
+ H^pRa8P8i2!k"c$D`Me*B<u$lTdT*(WCTH5,e,#Ym8d2$%@L`0qd%N`=B&saQER<9!33+Ma
+ [t9Ul!L''0.U8;#751K1c<!YXkI@1bfJ`o&f"f%DXAT(L*]2h8(:h#1CF4NW_LQ-ZA'u*JI
+ D&\dpDVIL:uI!_=!\1Z]B(q>L]r(e::LX?#]EF%i$)Uh"(L-6skTCBjYig"G%NHrHht^;^(
+ LJpU&t>:lqJ)n?03Hk[?fOX`t,>$?Ka7t\e!`-?WU]=PRKf;/66kO7-4M^=9d;b[kP7+_>I
+ 54$pkISN80=\[U7IfGVKqMfRQCm]pVf-mcQJ0!Y3><=pY`JYQ0q>&jVEPnkta8Yt$.p'kTq
+ ($+,VG/U&AXRM!-G#G,>s)m'0";k$=H54O@NqG[2-=_-S=DXb8]m:0@Ddi\jCBE$0-hg]_8
+ /F?WEpUOB$Io-jbi\E7gEakR@+p1)&X>2)@S:ti-VG%mt+#G*hY!_EP"`c3pJ_35Q%mUI+$
+ SdIItYSET>lO9Y!nn:>t0j[Eh:^Nutj=[?YNj<te6B9:%8&S;R,:PMFsKTV*D<%hDDGT0`a
+ 0Z'h`^aTo"-l`EY0n$OQWg-KY('%iNpDd2U1I>cj4L2N).\CacucIQa!OkAp3SXl<V"'L%'
+ 0$X>hY7aUZjMHTYDJj@Ac*&]W2KkX<*ZuQ2>h$C1o0<%E-^tc5C/mUWT);P7)^@7!im0hl?
+ :77P<h5@nEcO!`_-ORI9:&9(Hb`pU'G_JPh7K#beMV"1PaTeN?_AI?4B4P&]/j&i2!aq$]H
+ t+h:HG#84Cs92Yr67?S>nYe!s/O7*6s6&r&%8GBsMO+0'`/]S?ZFLqa$-bnj1E(^""O#pPU
+ Kh1s7+^7NUm5:ZA0T7Ub!?lD2ADk@`,4/`VX6Zspu]K;;(08-j?JCX/TH)s^_d6A`cm.hmG
+ ^o"r738i::PqShs3iWbs3Mp&09#fXa#a?-=,g"P0:\T::8Bp+U@0k5t5or0S:)R/8HL7.*)
+ PO!PpP="hF&6HAN,.TIT&+gGmoV/(S.]0)Y^d*1c26@6&0'GJnr;?JkRor*pChu1V4$l(IF
+ ]-IGEVY!2r6-H[Fm=EH`-$>`F&kt/4*HFm5uO24*''b^1s+(iT/CC"FYdgPe7jpp4.[`?L:
+ X>p5:n4C"qak]i\#N*l(_F)$CtdGhCXN;[V`(4S\2VJ'H63]%NII$mkeJ=9'S=cMTJe/2JE
+ gj4ok7Toc*(U7tZUAD`9gZP;ZJh:GLmWFoD10'sj+-YHN!,?%%9TJbi+TSc(;mg@ts^Y^K0
+ $ipg4M>E,r1gJ(K:EQcu@PD%f3`mu@LZ]f:8#F@N!p8l!WAEV7/e87ltbdbX\1ef/9FLj"E
+ 0:h%#>dDuYc#9g"2Bd]mc^m97ZVs7[K[O@;9h@oJqG%]BDf=o!-8p1Oa#d"ZabQ?lMoQc4h
+ Z'C5]\W3]ml!l[qc2I]U4e5S<Qn3`H$R]\c72Kh.WBTU(DeA;V$T3sTPe@6,=[17TQ)IW5g
+ s_pZ8Suc]W?2SddlZ]\?t.L(DYJKn8J>:]:O&?8`OuTaP0FUohNu31#j&$1*5iLlnU@Q?_*
+ j3F'&&)j2Z;7-:'p`eBL[!fLX?b(:VD@R(fV4k8I]KbVGpU(StJA5I[9VD#TSFX#<08hY$*
+ JiJ4=+hiG9nms86\c!)LY)9'.s69B7KWL3YY8LYt5Z1l*ggcp,EJ_u$6?9XENXF34_j/M^1
+ B4$,\_Q'f4f.LQ;QU:@gPE\pSQe!u=?bL,N/hJ["a[soJn$QQkgpnbq8]:O3NgDSZO,p2VB
+ c:?==C&,X*V.%CmqU4f<'E<dl;KT]ddu.MT!$i=pns`+SdE13l\uP90d3ae]>_%ig^=G#^M
+ !R6(,RSc#*4&W[`?.Y8]0-Ra]hQEeY$pILsaDH%S=)_WV2U2-Vp=5It!KAXf\_D\")rN-A@
+ /-oB+;mDJuol)7H]>76:7KdVH+r[BPU)ht?2,\#A4@pCi`_U^SPljQ@>u-^:,,\^/h^1Zpt
+ C2#@#0c#>7@WN'm!2LLa%V,@';:I!BOFL^0W/Gc6_$G</@_nI[EH`QGT&ZZHhE`\!LjY[eq
+ I0]d+HE;n0[Jf`amg2T^:R=&$#i/1eo?Gd-I0TMSms=olQEb\_NkEo8na01`QW9!e[;4CFo
+ #u9#d^sl#DRqhq,duDfHM"0.Y/jJ/QqSje/5J`q=b^]<ZIE*<ITq>EmWm`KiG5HA%_9?h^[
+ uM6.jZL<@VAH+judJ0b0TtOCXN+_hO>ib2/&L8At6p6QZk"CjlP$<36n'Va3hLo(+ri/*??
+ _)5NF4eQtt<VEW(&:JOY])JfS+E.\fS)Li.Y7Z;</\JCMNT_aDNCTRGMHg"WrPX7RH2p>&5
+ t?J_NFJ)f*]=dr50'TVLmk/fj_KX.G0^r*u`I?,I7Y2qU[ZAI)[e+<4rdo'M5<=*#H_?:)K
+ C\@bM@bc3PgfG]&hJ[k<IFmM3LM[Ut/gPU(J,AqV\V?7_`5T]@H$MD0<:e21a\ud])S2Vjf
+ 41nIaH%$*<R9[mLS#?3o%Aq-?^<%o2#GF!7RfjXA7T@eBWu*Q"C[4N]6E_dhYjrW'.,7]bg
+ Fg8mTEd.`_Vur?F*&j&]@a.r%titn"4@06(f]NhO'Up-HdO[)Re@/dfl4"!ipD;E8W(H0Pe
+ _=:;([3[_.BqW@ZaeXE57V^C`,^;E3=/6&!?9@1j4e=0Gr4d$.::i\tpBB^mVKjNL8"=@RK
+ &]B?5u"[*+N<U<[qs8Mumq>&J9+n)SSQ#,1RBZ.jPjS/?=;I>B0Oe,EDIG?Z:dkiKi)]S,'
+ j5]8KoKd=t2KbHeUK6!fVeGJLqos&$mWi4<K_ac=Yaq?@GMdgmq!d7$N"!7CTPgu9=m,I&^
+ aT1'IAqZiY^J._%R3,:3__)7AS!/5,fFciQXAam>8K3`p_6_,[I*tGEDBiUTp3&H0eJo[P_
+ @"HSI,a_c<\)tE7pH#r<;#6DqK+/p8QsXnckhL1u[8Rn"5[P&C&e9C!Z_g4u<l-OD[ahgu<
+ __&'%Ga046Zg\h/uBO-8WJ9uu_qIn2a[1g!e,b8)XW&p`jcp$jcDfO%65f0.,OV/!F'>/j<
+ XW/m"J+AHO?#^KhR3W7=O<"0];@Y`jYXDD/-e-^@MI.obg2Zd&\rh\?B-jr+\TDIJZ#h"B&
+ <:i=s##7?1Zr:J34?n5kJUDZ?JXkA43HXTg4%A<06p&@gD=Iqb1]0>1$'dU:`AlO$Y?"rPC
+ gEWd?bg@s+4u]ba,:rqnt)V1)YWi^*#$oN^Hn<M@&J9.1\lB?X:dDDiRua0>TD]uFR20R\P
+ J>:XCa&/8prNQ!GOl0h3Q+B6\)gON^R,@NpmLG;I7(a*`4!dH:S_B"eW>Dn13tVS=a(VNp$
+ a1/"#U4kd>T'mUbQ7)?$I:Ul!5,P=iCr2"#h9+_+lL#^J[`&3pi$BC?A'@,+D+h$98nV],D
+ h4OQVlVVInCh\^H.$l`uor?#L`QG0WabDn>>f&ZdhUaF_X2t@UuO="^GltOCbhV7]S>IF!@
+ p0I$30a`**fb(F[$@@ETH8"=8,EFZY:%4SJ#/"%udI);H08<g4)ch>QM#f9,[-gt<iNbbQ#
+ pN*nO`mY4`t2`4s18doofr3_&'$K9T\p'gLDm5VpD`Y>Qqq`fR]+-?odP/6=04#<03df=,W
+ !W"!t)'Na=LZih%N.-bia9hnI?8/M$iq[)!JMjI>>Qs'5mHEP#0^)9D8ngrfi%*%C^NnBqh
+ tfRjA8a(`,:0>d3i,Y-p\lE_OGF^,O1Yjg0E4M+P^u@Ycj<<'`gtGT0"d$19V9`Ta^2P!Ii
+ TceLJ#Pn]^Uc_qde<Cdclq<iMrhPF4##M3U5=Z-PPR8rT9f+;!r%J>&adR6+;WPr\]9AZ9U
+ X`ckM]Q`<@7:<R@_BRQPg@_91_hqLHJYd()N,kYH1f9;$pMgW`oAWr46+>Et/?VIu)`V$aQ
+ fdH`IR;l7.RBLOjS5\ZjP<qcR_@o6?L%U>ARuZ>\E`l:a&ZKh%'n%gBqb_gc1W0\gG:J*cc
+ `bAj4>lE9jqHBlVT]lYVo^8R<%tMl@h&l_U;GqI3)IV!MprJL,dQ^Q*KG`Nq^tO^HqnGmm<
+ JamV:67]q/dE*WP-N)ogSu%r."#V*qMR;;1[oe,@obR`On9#CZRW4N[R?*gQk"]9?S!olPW
+ =U?1=5o>dsU:#r/&Msk`BTB>962_Gr"a](*,Sj)m9L?eUF?_iO;o(.dR(oP8oqNC<>i).:_
+ -J%'TXF3e!o4>,a';"ObKlKWs)=?qBgX@/p5;1d%michcJjF8Df&<ALp''Z'5Trk_P*K=SZ
+ PT"7./E+?h=c6S!]ugtfq&s;9Bc'9-V+o]XB_s0KmB0UAVZ9`kRg1.P)tcEm+AS64*PSi"`
+ &mWmsgN#?Aem@e<l*UW#Kl;Q^=&2T3l8Fkj@-6jm%(]'3X*_/nqMa-P!QM=V_^1.,dic5s]
+ I/Ye.OEFPRi(`TFWJA\?\M9-;]WEU)U61T;>dGIFrgX&c?[MMdSR"-Fq9pi_@?DHCZ/U[2N
+ iSp-di;IX+0LPH--bbUmt6T;^3:d9NZV:U1](9NOt<jK'<V0#$EUe,01An5Et>I"DG#rm1<
+ =bh?`g-!_J2/`.r`Th5</ZG8`QS584Ur_<\Zm.Umg=b/IH8\Z+ruk]e2U!EW!a2Nhj4iHbS
+ jhJ0Pq?7N0J+kNF?ehF9lb;rK7\^-q&!>8d2mm7h%,Wl:73T<hiM]^NN&k%oB%&QE$-AM6^
+ Adl&b5D]%Lruu9rjH6"+@/hLhrp(jiR.]JKfDo_J,HD\8]:/c&K<nIktfE97;^Tnt%:'p$D
+ ]'F`XS!aiPtc0X+@i\'X3sjKOH9DGaDVla)q*UrZI,/_KX4Kt9:FZ"_Z6EP7LH1guT`hS"9
+ 8g\VMRo`u]/QEo$'^%YC5V5i+L?#)3`Ui]$Wd*u/o9uh.,aiB;je[aiDlF*d#6pu]ahT/uk
+ m+Sk*\u#E\H-.[]JjJ6@V,uD(-`9`frkVEZSt;PLq>!p7,H\&K/@NiuDFFUb2RY2!Sg>I9d
+ ]5[ogM@@nDU*!u=Kt1\LZBs[6:+#;S,$E5l*[.r6V=Oom4MUI)s)kL8pl&87JB8l_83t;m7
+ 1dR1a`[SCZJtqH?JP[LUo(<Q^!_FDMR:$9enIZB$?W.h8Xr3B'o@gdq&%+.'YsL&[0$a-*"
+ 5X[>g3,(=s_OD;)@R])=$ORQUKGjpC)n87K9am&%1u4`st0,:Ft]g^_G'ReZ^S;)e!ZkW=#
+ NGup54%2u.'T<NjW+c1JjHA_$MjQ=d\:Yg96`f+,,Y7^MU?j;"`RCXQBiT&s.DMR:!:%IoI
+ B?elU.<NI'UWND,[)s@DCcunkC-lB=U52!)cBoDB.Uc71of/+<(7%hIX]USUldqW\+aHYCW
+ $lL?pk[<Ja?-*6lI*kJ\eVX4nDV8/qBS)]gh'HrNno_WO!]]@Em-<(<ig-)g9p'd87'^i0J
+ JU"FoR/l<@E,'PZ#c\?aR'mga91mLu)N8bLP..]&bcbg=i]DgBd@)-)E/p['dZ)ft)=,\K-
+ j"1tfE\-["bS@H'5K>X*l/]k03^I-&&AA]pBrk`b(.Y?jrZEa$;6jP*Bgh@d`gP6mG[F`)0
+ ^clmHl,VNlkmHmX(\prqlbFcHE8Z9id8$&G&6>^^dG8JXSag(A2h/XDhACu4nOX7!6G5H5p
+ Cok/fP*\#YZUZOXZIuP[h56%iI8=rfotV8@g%+7J7J'/laiVZc&#n$B[U"L+a-Z#dTLHM:m
+ NA)[Dk;bT&+Pf05s[fd%K*gk8$!m&GbXNaQEentHD=q$[Vt%UU/,NT_/D?5AEl"LO'9PcO#
+ #*?@e:#m-N'EjRn\e88X&rEGj5M4?0fggaa9SgWb7GXB&'lFmAEB#:7MmC-ab'S2&6"A?Su
+ lJGPo/ZV-R.sS25ZlmC.@dGp7PZ058:h?!;JRlI9Onr&ujFC[[mogJ<aom-O/q\KdhEC.)`
+ K.!62<9?1q$jC:nhe[N'5Q<1Gqa%fBIgpeN1/+SiMb`O,?B\Rh/Dt!#F9&A[JeOM4OXDRGG
+ =#IY']Ynq`mD.iY=0Y]<pRC%_+*$;?rt$j#VINI9)&Xm7+**Sm-Bj:6H.'A(7IpF)Y(hp>8
+ _\@@A8cC9lddZ8_acd`T0A7,]INDtEO/f[Mc>ZoJ,]AU\2Yq.OJ=X@Y$JYOB4j`bg(V5@jn
+ 'qR[Tt`[H`RFV"g=0LUF@W@.;YPD3']<s<@:Q10@","N66#!m-MB7s8JOY*/Qh?Y;@\Sq%R
+ _H?'.;Hm@s#_@]D`S'\o`k/Zp0c?^%k([%g-PDV21%d@s&c%Es.piWbAuhJ/;">n%db7tpY
+ VEH1O<nCY0dgR0j1jEbY_GA4pIKnVNJ3iYF?=BJ%T7tZV,+fKioBcK2Xfsbml5$m@Z2-I?W
+ nY][8\>r3pc_t&C,TbEf8PDfWq6:Qr-I6:^;"A9R[m"Kq.k\P;ipQ4ej\d>m,5Z7tO!1@\l
+ !SC3IU8=`VjP'c`8EIHb/MB?qY9iq\[e[n"`aAsPC[*#m^D#Xebu@>RkA$[=AVg:<t>Fno/
+ lH'4<6hSs1sK?/LLbSBi6*).k;bd+^R_RD$uhUI*,M"*`E[fVeE,MKBkTk5W!ZDnAVmRjQU
+ &i3Oj,*A!]C-6trORif:_:qj=p:,31q1b5i1Oq8`[/O_AWTkZ)dbLVFqCO=!;a%)#Hn_9B$
+ DPFgeY1N4L#UU]X*C+aJ4lVZ6DIq_X-@n:=K&ML.9[N8lU^S9HVU*#HYf!d0$m.PqMb5^D;
+ 6cU6I>=$(96I4MlWQnk^IjOYl[2!c]?GasMROU>*KH61I-0.$09n[Imre.q.R9R^,5+ldO!
+ 24f7E-[#P#h=?BBi[.?6oKIJEOIZ6f[hd\0EI1J'GLusceem9,Ll][6FR(?:$V@DHjTTL27
+ EVgYqQ4(nG%@9mcuB=1O_#JRad0_1\ia_X*P1'(%j\'!+OEIT90X+<(0r=??RXcT('XQ1o\
+ sP<Ld>O=h!KIFdiDf&c53mS*onn#ZY1=Q9I%hn0(Nk1jp-8k=]VF-E!/+[i']eM`j'^H.-P
+ OmU(9%c+I?)8<Z+)F`DBOJu@*-"sDcnoO9_>OcL`OcgF<c*]p/hSU,b$;r8'JM`V]*Dr/)I
+ OcL`>(H[>rcSta:+\h64eX)a^@DR_3:]^%UnqmX4iTURJrRI<T0diegP<*^U9U4^TE;^gL+
+ =NVla._^oSEGe-[2`mYVf%-t=dqfJS$ALp5g-#cn[/0^2q>WTfo8C^gN\f,Fm;2SO%&finL
+ pi1$:$MS^YEV\NIFtIXYsu1s8*#KUeVTjSs(7*+gi)pR=N"&>__(OW`Ga#V)]mmcRq\UeB)
+ Rt;t?\7Ub5rhcWm+n\'p3pg&n+&3dO-@':Fq/FkZg>#Q#7J%E<9<clk%a%S!'SC5Se9=*,T
+ [l?)G<#7mT!l[bN@6&0K%M[#4(j2*+oe[esfFKEt62ioA/1?B9qd/;XAn2:s,X!9p_/Za1K
+ >:%0e=/:ZaG-a'>=g<D$1+Ci#!&ST!K;Jkrn'u8oBh"s.'+!#SLj];mIOMo.*2u/I!bb`t[
+ <n43dgn:K(7X2l3D;sW+gUPP>P4gWkV^o\;tuagFDdZTFpffu,:>X@CBG*q7%d!+Bs]\j*a
+ 1&Qq.p%Y8\M7j1HJ9hXnZG>ZN+\`d*5c5I?0HX*dLec!#i=d)g2V4>]U^NpHG\4+\dW"/Pg
+ if.oic+922t,rXOZjat;m?*4po9/%4#%:6fno5ahtFU:mu.J]#!,!X<]7.!?!A<^5q-qgDS
+ S6^jB2;t'7EAH/a8OLdj\eFiKV5+\7AnJX1V7:)S0LB_I%Q@]1?Cjl>b?g5g>GlKiBcIF?0
+ (7(XhSX0:%X[j+H-)[&NVgdYtX0-#b626EI%*J6I4s%YBflD%3j:4&CM$R0,n'O=7A]%Q:J
+ +*U!85`':QJ[MPZ?'a/%SnY05&Aa26(Q!oZc;AR$*P]C+'00R=.48Z>#EO+4_nL!]Y)m0?r
+ cU7CY,=4iMIe`CEQ?i3I^6ZgE&2"FlBVR\6K(>\)-F*H"p;1((K^hbBpR(Q?MPq7bRGcmac
+ <He"DQ^mA@G_[4&8K)4-k`XMdCjRYZ.R,XJ(G8cX`T5)Zhp0FdCbB$1<8A!:OqSb7I')cNX
+ pg=k8`?+]g^?[n@8T]dLm1,C^2,9oQC6\ACJonNgrY!k\.ZdR$km/stgM5O+Js"3mg,UA%c
+ Q&LY1H$^Zq3,htM:X<h/qEB0u#cp<Z2ba8Om-O*4n)%Iphi!Wl9p=Afgn:;_%hB2:Zd\/tT
+ <OtcX(\@,5_I'*YBC#0B^t*$a,V/rR7so.71E^)6u$IO]WT5m`=MjWjOFVC!ipUEB/Nsn&5
+ aNRhBU>PJ^q,E`:W;ZB?SPh2s8;i=.%Gn\T$MZS!tA=6r]GZ[=_KRK5WZl:ok14ShOk2os`
+ GE3HO>9LPJOkNS;4G"h$s/ZY#2lUEapIWaXE0V/J99'R>Ga`:Qp-Oqq)]mTsVUr\8[f#d0,
+ k?b'7mN5!$?Z?/>!d\O+OE:;Ys.X,&>`f+-WUs0X"42AI&'Di=1Wd6],1>9-O=.q7L7@cG[
+ `mrV)\iJ<lqQj*58(e,!Mo72IfUo0VEL/^2KN>#=LMCNm=\c$-9+s5W<U6UGZi+p_R`;J5b
+ *!u4qsU%9bn+NTjQ&?\UQV7An)`Y"h24$9IcFVh,#.'9X5>n7Gpfre9'5>XaZE4\-Vp.59/
+ EmF&F$rRnWbFE+X*lY;CP(p?.-IQdC5SjRsWN>C6[,?rOG3]2cs9G:k&d]3H=&3`1]SQ9`h
+ @sYGQ2%8mQfD,^mCTX&6=9J\oAQY4Cj>G^H"+(`XMHYq'jZ-5d[=1M4j-Tl\Aq`l;okW"iT
+ aq6t$eiU%s@2'(n,(?lU#;'!7h))QHqPc6cn5s94.E_8ttP;o"%Y_<g7it7bmmJ)/%)q(Y>
+ H5:]hq+U"T@hq5g)'Q48/?daZeGX/CS'iU+7?5g7cd,C@SuE^<dMt&\nZYk@^DO\.n?<Agr
+ tmcY]8>Fh&TXO1"hSsFOuSo[GNHs^l=SLur^4-:1@`*Y8YNpVZ"X#r4=5_]k(qFB$#)a3#D
+ [\@0Ab_8:"\'f2/AN+I1hL_g"BN2Xcf\eZ=F8aZY9J&-#;9s;O6RV=[tag!TRa$E8)umKfA
+ %O)`OSb7UFLg.On'$Xp@jaSa&ic9e)'XM4nf@%2h0<RHWJDYLWq1MhInJ8$,D)PG6?;co?0
+ rj>k*30KM6u8r)IR9`bD!Q0X2*jT5JbUl6XcmFeh$]"tbmdOIZ]$s_-<Y&;f;4o1SU"4eT/
+ Z?,2(KmFk,"hqDP03blD]"tc8l\T]K27b#\&f,L\A)h`naMXbGqXB\.F16lN-72@LOM%BG\
+ $rh54V49<Q^=&'dA/N6(Dl?@e""]HpIh-6S$]Br!Q?U0aRc/"qJq77qo0Wr*$d_SMjH9:U)
+ m:<A&f'.UIf$5pes,=k"M?.r*H]tM=WE*G6n#Llg:R[jH3;<[Hn`BDUY^n]2iBrj<[NPJqB
+ 6NHN:61Eo`%p!MuZ]1MaV7fs(;^fe.7o(:#jGng<=A(?sj,/24QhUd&+<QSaP[O&r(9!PBm
+ /-:[!F;UaXEMl@XhMem3G/M*Zmbf7M<MJpk[7%PE!X%P*qEuSu4Jd'RE:!ZJ$'G_J0Rl=,o
+ Xp=I.3HO?(poXi:q"XV\\!]\p+14H-<;]dlCk5G]IRcNtn&aVMeU"FPNN(Bsb\Ab7>MI6?8
+ Eh.XQS`-2*''`ihZI6r*]tj!5cO#=jic.jVHp=*cA8!M>8.?='+JjI)YiD@F5(I0rr'-E8<
+ 2-:A27S=!X`piFj8%MO_%4c-9j^:,GHO"m2!,AZWgPi7?^f2r=P$j:o\pc<paFF,6LGM_Ha
+ ]rc-FU0U$`/?`B"`@GJRA=jf-rZ53qaF\[1ffc19)<P)9^dCf*Qj-5iT!H3eF2#7$#_VbY@
+ d)u3)>-9*uPp=6jL1TWYA"U3"*TN&qV.LlA:qH?R`k59*q1Nj/4onC=c0fJLmm":Tm<)9O_
+ fM1S;!1"in5!H8:1n2*6@D]mu%.^,j:`:9RTqS0c$mdB+=]l+K/u3AUNTr=@>PN"cn2:q'e
+ Zs6F99r3=UMnT[Z05YhmFALUqliqe#Hu]+<eYnp"U2WIJ,fLrqtBF1ZcC\LQ_h]f/R-f/?-
+ CMSeurH,rl=NtnEfLX-+Y:]Bu2,B-K:6u,[.a!V8S.dC.(N\"s,*r/K5Y`L7cHk=d@&F,p]
+ n9micg4j>BZS`Q#p'jE5NbU[GJ('3A7:Df9R.DCqhdIX_:Q]mFaCHr\'SH.F#_mki3$nJYb
+ /e^3h>UqN0a*aGU7"h*&Z84\OAW"nVDR@0JKG<bIc8P/IDo:%mi2Y@ce/M0&UXYr]+D4WCi
+ #F>Y\m.pP.?10`K8.tuJ<Dl`CV+[/,iMG@ZkF8in?ZCq=U?t=!YJfFpAhA7"h;*O81][U4>
+ Is)+q6[=nFm)hkQrHE.]"^V$!.aL6B,#P-:eEqIb?c-s4eciJ59NZ#^^,=2o=\q?C@r\E_N
+ [^LXKOmX!.Y[Bk8/RBeI1pROKjCh+&Fo]^He4fJ3f\2\PF7IEn(lEB@P1/):AWfN+8;4hm\
+ o9=5+d-fAj5Yn\#"P"9=B*p6Uq#h<Erb@]`c0R(=HAJM^5&H:BX^7fY6SDccGA>IaRgd`a6
+ oc6PpoeQdsDpU/7eRGK1h!$^5jkBj]-r^6==jJj!D6ldgH^7+cNo(0/>8%#`6"2KGqs2aMD
+ iGm>RHU`lL;9)Hs5@dX.Wm@ot*0V.>W;sr#Rp.^_1k3,kf`Z?[b!#%0WtEcJ"ke*XH[T%#S
+ gLNqFo*Y=0c`@OJ34jrXT_@i:cDTqO<uGZ^?0E(Pd%IY>FM(^\_VZ67Pi#GVqo4\qj6/7e+
+ Cs)8/%i=rE@XCGM0$2_J7p?9qPnq(dZC=H`ap-a+V8!frar)1ls%VZ^LWj;O"D#e%I<lWci
+ ml">GW89DR3lr5H/^H$Ro!<>VHfU5kdl:c1kfCV7&upLXCB7-a/KX1fS?((NO'M>(H/%V_X
+ WPGr6bfG?EA$*1ialkTdJ[`)J:]7lg=C?9kG[6pB2BB-Ej[6t<OXZ]K!^jpHHM[#-gmV_"W
+ 4j.^go!)m6Bqg]bPS1%2[7*Z4H22.Par5fu*BpX^h88Kk[.m1pC*?:c12F_T3htd9mhXLHY
+ !2n2QtZ5a4?0>V?T1%k9pBVrdf%bQlAh%u>.>!QK`Quu?G"5^,C+Z9WhUqd]!R>OO^17=d>
+ 3Y*"\))ZDW*+.fI[LsCp(fWo([I;^2_NTqljgj1m-T(^PB\$UqA&GenkEZ-fqhn_Xo!L?EL
+ i)7E9FlP)2#X=*d@+kCZCO9bURJ:[tff"jj**gU+7jGhFSGIK2m,aD91I:IISWX+@$QQqZ#
+ %*l-E%\OCq%e*e(?2RkRgPFIL!'p'atMS+,99m8<h;I2TS`,8u`@Qts.h^uK"5bmecWZBMU
+ /qo!J';h7=n\&[[),QcFLVFK8!1b!G6b-EGjJ:rB"k66h$IV<p#7d5On\&Zhp>k\_;?38u)
+ F_)=\5cU)^CRAg8/#-:h_"7P9[(KM9%Be#IiCBJoOmkARbbi%4ns7I=1+,:Z<4[:Z1KFnY:
+ 5La/I[f<Vl,<*LBdqbm+AS\(IVE@EF5KlGosCE;R7q.jDNVW=4-KEGOOC:qt;$nH8IB41,i
+ :sVZ\2<V#0gS2=JGG;C=djGks71d,iH;ftau&[@IejIGSVoRgl^RXIfB?<:^JY3B?p1p`q4
+ 7bLQQ5O=$B`R4Mu<6UQK_$lbhd\D76)X%U4?'.6,F^5d:Y+X'pj?=dcjnfDJC`LT)7<j,i3
+ k$4^/e#XSHDk<V*P^M9W]!\r:qHq_.hL>M(r<74HpuaS1fJ5$,.*0<DKD%uTfVn8rCmZAmo
+ cE<ipnd&WGk#7OVD]lK,7PRqWMujO0eng->eb_N5!GS:&QsjL7f<,KY:_F[e43_u#(3[E[5
+ 8a^EY[OlAZ$a#a!mp#EoFk=2`E\4`\1tAD?&!2f9JmjGFBjQShcN+qoN@Am+H$tSs=cO?cB
+ =$omr)A2`a#^Mu-I0B$=Y4r8P\R]Z3WB;UWdW3pA-JA)"Nd<>D7(Q.6c;D`W<sW`9\P!dA?
+ V=BN(Wmp>:6s':\7]9S28o!7P#OS\jrBuc5SEm<@jY$LqugMc3M?u,iY*??`T#?+uYI_NG+
+ ]3J&2<K.p/oZ`,YMFL0#M`]rHbbtH7^Y-)7T9S3R?(1-6[>VW<IlD)IbM,`hPY73d(?sl!0
+ ;ST5Er+aM+/BnQEWH1AZN%ATrka9m%nck<l=6q>Dc8o$XZm1o1ekRTre^J%rm&g7m`(5*U3
+ e+lBbD$k6J^LJB\VXEE`#_@/N\\aInQ1M4*F<id:`CTr07(pCs=KlZ`,T*-O7D+WF4^EY4X
+ Bp'T/(JS)9Soc&qiQrkZe$96>sM-quHP11nat6/C2.)`d&m[i-\7-07"#eG%JM<DJR7N[ZD
+ +i]9>DE3Rb40/PLbgh.jOjp_3Lm0:t7QbPk$12,kUl7_lb*#os(Za7Vn8<2+ddIU8Bf\"h7
+ [_/V,Pd"otBrm:'_7i4sq"jiE3;R?'Bqd8j$ci5Qp(*#GTB]paO\.P<Paa_Yl+L[fAWN"OD
+ 'PoRfA+V:kKfb:f-,E2PPk@V=gM^;2tQM7%fSbb-n2>`K79Km`c>P3'+oR'=U(ksnTO?U.A
+ D<T=L_1/Qdmpqs,a@'Vb+n\YI(fl@fQH+i]8><BbD-q7GYtuRlk9%]?P5mR3jS2$Z"iO5$;
+ .'GgWUfT9GJV(?"J*2K<.q6A<As<3WX4[BPUS0hj]O5>;$HWorF&,HaE)OsYa2J^eQ744XY
+ dN0<c.XG<sem[/@l'-83tk/S.KdT;jM]tKlZ![G`:PA@Ys4&(nrHUHJpD;3!O87'_TkK]XJ
+ D>]lCitp:Q868#Ml<0osA4O3^[Vt&@f$WB]B9/K'O-e*sZY.TbZWVfmeL=I`2!I)H47u[Sd
+ B6ce`-"UWd_snpS%=T/e%FL*oqS=q>$7Dnrq<YN]Qphd7t8D\<VO1Cg9aF>(KKA8Pq1C$(1
+ =kJ34E4h3te=,_Xa.TNTMk-=#bKrUppoV9:Df,G$;"66T7n38U1aaV3BSbSK!r=oN>g*oAc
+ rZ]imf?<Sr?0#LmQ*.+c"BJ@%/T*'*.Cl7Z!Rd5NcMOsWI\6*r4+c._-/(@ebGU@D/#i=6k
+ hACf.8^OEkb+)g2ZN?Ys4qJP0&WFiJ%\Z+&Jheef13_e=O3D'/YC?+aM0"OF@Os]_hn[+;2
+ [J*J(;,Ua!L1p>E<oKdIU<G[p.UP'f]mI5E/kTTtRPeig%$^'>Ge:I6C!cYKUrd\V5CRI$K
+ 7cVe7UFN=Ycr*']QmI.r:S"=0-@;]T-j"a:Rj2*msUl^/hZ9MrpfVp<Nd9oKnYBIU*OX3\:
+ L2Vl?g'GpmR!ql-W9WNu>$bLPJBKI_P%&a@hsU7ueR_C?/DrB?5.iU:U!N:Xor\#RC\ipX!
+ PbB%kq;oKWC73G2o7M<gl]#fhX]%3+PGDNoOu1="TW]@$*6m!SEe9dsA1UGO]n9I.fF@A>s
+ ,'BI\i\QbG;R3p>JfsAX\"\?&!LmYf$=UVV.?u8:,[*oo!dcAR`g3kuYK7h(,.p""i6]TTj
+ kiI^S<'*T%B5MM[EQ_JT"K>A&5L[ucP_HHO[Cpt`ZFE8oR$ahhmI4q#P*D1V\pk>"$k-*\B
+ Y`45kLHmM\m)[Ua0X9f[mk7QDR$IOdQ8UZBi=To6*b7>]!l>PS`JI$D2fCG>d[%_bL*(TV&
+ ]>a#8M`:f\s`'V0kU5^]+)1b$1j]Wr(+/U84ALqY1@)W'>)UfGWE2h7H*?1q41ddY%o7$Zg
+ Dhj,F&r9Q>?i8MYR1<*L-(;&ns=B'9(ip55(b/1acmpTCW&\$CKMieJeGOH?\bhnE[FQJp@
+ A.[Vu6`]HnYl:_jc?0c$6j0#PfCY,^PG3i=CAu:N8O2j(+HMP1nlICdl#hc@r&:&$?;Z2C?
+ nN_+j:JWmf8O)E-04+tQj5ZoQL\194jo#AQ<O$J,%1:juUSqbedTW[Ji+FL<GH!ghmrEGhP
+ j<7XVFe>NfN:&@\@K.!0tiMelg\"#Jk'hfYP4J3:7\9<h88OVkD(..:u[c+CiLrb,HBC'Fn
+ Y0)ffp`!"?Qi+3r[*r?-6o^",@spIn#]BSL,)WZEi-9DnLIHq'H#uYpD!'SnbnR/\<On\!@
+ C`(@nW$X"p):Cn:"Qi?A%A)Z,.[Q7P%Fr&?@UT*NV%IJ/+a;=^"FF8gYP'hTQ^,.kMh%kU"
+ !dscH+?$>g?$$C#\@^-X]',)E.*\9uU8BA>_kL^m8l_Y<A31H>^W^[:6Tb1Ap;1CFq+AJlY
+ atE,X74EL[#V;<r:T59A_!%rAQ*fd^3DJ:q5GA']W^<.R[;>MGnr+,dXh0jh^N?*ZblE^JA
+ *&nUUSMu2d^2*Te/q:_8<ZYTJ%=16-[eIO<TBqpkdO[pD+`,&mP:HB,#*7U4f&'FC."Fh@Q
+ i`hGrga0j6R]ccn+WkO346YbHG5CG^:R;O=%.lEoI+F/r6Pl_b0C[(i*7`TJ\8d=H9UB#s2
+ e<+.nZB'hl0WrC=REcNV_(hWln='*9$,M3%ruk8SMQ%+:rgqQ2kcI'6]oH^?$*)1&bGJ/2[
+ +)Y@)4Ue/+$aX'u9kn??l\Lf`'8J(ra6^j6\EdLEM"k'Go\$F1H\m#LAZib$ZS-cr+*X$N6
+ GU#An!<BJ/Bp,)q1jfAF(Z2:E_5#X7$ik$0ReZ5u6pCXo>CD7>a>"S)3ZnE!0mK._hYup9<
+ V#rmp=cnP&TuP(J?]qR9bo:.69auk;-&*<P>e_K!<Dd03DHYGh.tK1Ica_1k'7g0q5"'VbH
+ P\Y=BICgqK)3f?b'W3BrH5+!!"UgVMY==52M&@V7sCSDIENAM)!I)qegEBTfTj]?F>dr@CI
+ ojW54j8Z=5ECO5pp["%Ku>CX3#H"U7Yapk4e?^$4^*eS;-<FpeD&iXD2$2D_mk+-u,rl<B]
+ t)nUe)5ZpQLPN/QcI34c>>4`hY$Nkpb%$`0M\OD.klKXL:!io&iZ>22I/IVUd1/s]Yg9.nT
+ EXAqk%aV)B*<;&63,+P?U5/JO8ZA@jlkTd*V&1RSAhDBIqllmG^sUdd$iimS8=H'`FMDrWq
+ Y?Yo]Q[en0E<N<#\d15:c)XUSfc'#]XeM"9tQ&6_E+$JPS>)Wm-!Q@IJ`^iTE$Vu04/)D4a
+ Zmm3-!rtN>p@P>Utbp-9M.ts%"d$jfoH1L%VFLYV7UZD!BBEdm7:-md?uJS]]:)VsXW)FIs
+ oE)9(D9)rcub!io&iZC't1/Z^l[.4NH,0;kCB%?4pk6:-9_a##G&03`>\n!*>U<hfIJEF5l
+ ppL-Xm.?<%4X=LQ\Yqp%<Un1:h\q>XQV+[/N*6OeASD`>`9q+&O\[gDIg+)_TB?ZaZc9:mi
+ hgatC/\*oNOY5'-4ZtqP+sJ9X,e+3=+"oNdJ%p!8#H18&DU[.9c)#c\TqP=\;="]chB#4GS
+ I[eB3@QfPJ@Y%6m9(MEE?+TAo+)j&jQu67nd"$U@7*pjOt9n$h_['U9ZN/6oC27J\XdQ+[C
+ u?I_[KMpqRC+<[C)QO>ujVbm-Hht4i>.R$47,Nl&(.7F*$NjT24:;`\i%aN%sA,*B)&]n?D
+ S<LMcfqpmU0:<`i.)G5HHK=iW6C-)K1l9NdC9NpK/,H>hk$:!T:k2eUF+g7)d6Q^c)ED(Sh
+ ]5sFT+<R#Q.GJAV*](FiMaPYmI\eU4D`l<2OfVP57g%S'nn9:<JeMOd"r3Tl$Qksi!TC2R?
+ iN;1(g(.Y%@Ubn7R0nro3?k^eEgJ,,KnXmIm&@=&VHH!0d,J-t)-b;'`kpdc7fT`aX]%uc3
+ [JbW;9@,/<r[pe_\tI_O]@HpmkmlT1:Z1UPhEA6+*T#iRI^9r]alnSX4a#8Q'F4FhE:'VpD
+ [=@g[O]O.34S`2#6[^+,.'qQX9R2Veb[d<KEc;m\8FU#1:PV-`-H,.NorYe;\)H,=X>)dJg
+ ehK<&GHP"N>nXSS57?h:eUlQN]RgID/?r\.j%N&u7OM!g@PV+GWFY`Ld=HK)U6NRPu3C@,(
+ cM`]rH/5Wp9C*-ZR`5]fSmZpg1Dr+1DE]Q.._Ru0T_o*"eGFRbr];)Z1T<aI2*gJtM-n[-4
+ qC>UTA&f),*Km0S@A,,h>q`bk(pH3kF@PNoab<0A(3+Qmfn'Ae'2XRGNGX`*];1s]f!EGtA
+ XEDSYqI_8\``)97K#Y*F)Z*>atdlC5m<:s`6Zb*88AVQof`*bbu_.c;n@m-iUlB`id)pdC^
+ a@c2cEtM\%<c2A@:]+gO7Of\sN+3\9IdkV/@ap:%\E;?IFLt2[[)[bM(\Lj-J<=k@_N/iMS
+ Cg2?qthr2Sl%Q\ueN<7kn1FdqXB\FGJ@r$5U7OnEFTg>:*+][IPfhmEA6Gac%KNh_YrO15\
+ A9:A7;71!Z5[Q#S*S9_3MK7?_/Me\ldX_hZ^VRUV6_U[2%]$9kZoXH2c2bB<H9YJt)Zh?me
+ 1iE&\r4d'qrqnTg8*[;WLD;=?M!L/8Ue+s%@%E92BsPq:9iB&FAfg9oAr0H`@^#%gLkl=Vj
+ +tXR<NE&;9NO]Eo8kIp=M<NIVtMr]eE1)VJ)$ds"`#'ZXAI^.h8OM-?IEBIcCK)96JF+?aJ
+ WTuX)lSh.WeXLj6PP82L)2?9KsjtEdP!XTR065eq]k5et+3(D=>79hm`S96mL;[Q7Q/_\jL
+ M#-JmV9[C<rMZ-2!=6pE'j>P0Pq`_S;b1HLO:4!nhbO[r):;XVAOL,.$-Z/MV.nd_U1*?Wn
+ )-;?QaWq8fmob"RY882kkXqfP3GtF4RmshgOZ#+<&Q$bp\]inQmWn([f.3p0,gc2%6If4TX
+ 2Ud9gT-'2>.)"*R<9L$h>BYI%ktf"C."0%t("A3tIK0<Fid)h&BbD*o<6Nn^4*Wj_<32-!2
+ Tet#5-%+,0eq\-&J_`:_9OZi^)XCMhEgd7UWhkk"MG*$mA;n/rp5E4`04Pj(]Eha?IC)_5/
+ *o_5'b;0G@g`he&Z*/H&XEJeEjd#@;g(&._eE?.jn,aUi&2Q=kbo`e1seU61hEnZ+r:-]m:
+ j9U:U'l-Ys?<GPAtk6?-:nQS`,XYbl'.Fm=FCHkP"/oc(2:jUS@(ldr<V2i,QBhnPepnT`T
+ Zhem^p:Rr<Z?`d@79C"O^Y4O[&Dk&^oOrIDR89s\L0589C4aOL#G98bm@6KNjnekP*oj77.
+ WRHg9>?KZRN\l?i;"0[8\Y`,CPjrCG[i/gYIF;VreYN/]Wde$hDKXhJTnIRSek,+Z*2Q5ED
+ Jn"dM(B0&5ZOg:^&0>U18QjDejpZK>&`fYXUQ4-@$d4!]a!hbCWO0C4L^$p;fe_0]KWbTY4
+ T3.jiO=?/h4?MJcuNM5G>+t7qoh.7n;=ggK/e$5smSo:V=7iesX:EW/5^NW5HTn,rIP[auI
+ h0jQ,BS#7mtD^"U`UmFB#S<INXX<nm[`bE.#c=nN_HZOgPJ+?E"BlD%BKQ2ZU]rH>6p'^kD
+ WBk_8oq]9*Y;Y9ON%cq>1e#XQRBHb^^FAPu<ok/jtriZE!:/9S]jT=[!>eVR?!p$I1m2rWe
+ WYg7.I<&Varm.H?Df>*7S/gcfb$CGiiSia3#3ae<-h8]fh_em8GO"g56E[Jo]YDZ,`*'gON
+ k5\LY>;dCB\9nlik#&tnI$*!r+tW/H1U1.\2_$fPfGUh/mQ!d@EZet[8HL\Kk6L/G70b0P4
+ A448!),[Z*caQQaU("4gesMhgT1ucCI%FA[Q$gNd^fu*BA-c+sK@>ouO0n9bRc`Vr_dpiMV
+ A=RbT3]GkG5B^=Yb8lcY6@aPt0sfml[i:6s`IJQ&hgR(`;frL,q9)0J$/LW;4X$+jQDfirS
+ NA]]Oe]t+\uGV(s"?7up)'D(ecA/RAB[B70(aMXatl8H/:\t.'_?(0j;(Cg7F]1Z-%eN0nS
+ Ecj8?NL4DVS:"iRWT;Q<lD?<=GA4A]7h/9DoMciXQef;S,$06r+Eo3H\4q2DOX6eU7bjbV/
+ Mlm]-E1^uSbq%4Wb%D-295ZpN*QpDf"QZijC:ei*BsIQ]!]bt=.7A+&G0L.Pe>PX>M8gFpL
+ W]6!/.4/4sM.J!c-]>&#idV$ll+cQ,n<bXAALjj/,!Vqd<8$%7s/@iFj._iJ07P5>;IlMV*
+ SOIW19O!J*?b\cJhM`GVi9g?jqXOZuRmMN@\V)SGnVIppL]Bnm![<Pp3RFAdGC0[`AO1s*r
+ kmdUH5^=1\gXcJ_1KE1;IHrXCsk,Pa_L@TH"P)D/"[T[`Je1kQXl;MjV)7tTR=**/n53uA4
+ MZ!]?K]PmkfO%8)oqXgRW:HDCJ_e2YV:;m3%__*sORR#6]e%4Bdr3F`:D8bso,46(&!+6Ce
+ +jVA\IDb'qh\hXK]gj%Afcr0acW5$jMqs7W?t<ZDIMHs?=$L$@@UVMS\O4lFgIK+b@tG./a
+ bh9O6"_unTBaB^g;KXJ$5f'7l_N">#(:S?C\,kNDhCi9t?.)9E<`['GM#-nCpX:XgUC81,=
+ -dBMtYn1o\8O6[D4A9pM5PP&RbIMS+-$8n@U;1#h'SnG>)nnP]tN)0eUC)Tp_gRFR*"c_i`
+ dRqV\Ho]-p+n<b!o7TEgo7_IqGYdC:uhM'e&^kW#Iq-056A'`iO9o:o%'.r]Ya593tj<S!g
+ !2F&eU-ltB[P2./j4P@!7%apS3P4aZ]\!RL]&`o?*B;:):7et&q7Em>Bd+*M_s-h?Im8+)q
+ :,kcC_.#uPZuXU"FgfC0?1\uDE0G5-GhA2Nm$VEoCWqu!#]YQJZ!-C<g;c./mij\1_K0*l,
+ rIikG#Ns/ks[MCi]l4FK)a-5O6AImg6)40C\UYXpR]5iGs!t3.:DYZ;k/$l2XBFm0YIq9^"
+ XH[%Mr\m6j7/R)NV[B2=L;YP,XhH[Hf`6kS4g>]9EV?I/S3->#7M_W?haQ7lU:aQX!AJ@Y7
+ BMai-4dsFbpXf,o>:]-;$AmrB;a.6Mt[<!EBa+GOOgVN!jG?.)OF1Id@*7Dc14Pn([?D$m@
+ 9hOA_'0C_I&sGqhqXSsT\(-h[RhdLs:!=@L$^\=WQC!4SIue@F&?`7ABW;`Ln$$KS]o;:K4
+ f!o?@j^gDP:WcJ!Vj>9FN]"IYJ5(^Rc;E_aWAkuKHKsZpCOlmZKbZ:j$Oacl4j-=]aha>LR
+ @;r5KNSe*n!H0oF0#O$!O;t:O\>EXan^$/j3eRa$1Ja*E^J9H[De[RY^##/XJDf<7L2-]^B
+ J=W7LMs]P`cZ02A.:ng^Gl3*2k7^(tY_np938:bJBs!)QZln:T<Jk_GFfnTZ:@qqn$.-DF5
+ %cLJ3oo&\"Ti?n:>(of?Z9TbI"-Vg0kE8\Mq"5(!B8Fql;+'TNP>D#EnWmFTArr.J"J]Eck
+ ]6*:8AnIq,*DZoSP+J6d&J5Tf&e_:KQ#!Q\DYJ\Il7pq5m!,JOm7&K\1c-[3.Q]^rhB&U9Z
+ *CR'2fEkYE0tM[e#XQ@/>M'<R)*nS+14GsG-eTa?+G";q=EbWja3aD^M1P[^f\[(G^-07'I
+ qbffRETe4F'/U=RE[@VeSIHG'8&#St9A!@0J]<ekB+FQC!pSFjmA:[S>qKjF^!42>f:cI1`
+ Ip>eYT2EcSH:SWM;W3HsoHS2kYhg:ou#r]cQi7Hr@/s%_j6UuCb0If9+cl+N3a;GtrUH1Ru
+ (PJH@ppQ?%/l.("IG1pPMG'8n)N0N00\P!UE0&*^F%rL?.Qo`Lfm3X:C]hr7PeL4qcQ^ar.
+ Op5-gS2tdbqD][@qtBF1ZY*2q,_86K2mTSZ5("X,]?P4fS`W>rQIG6gm:0U7,^)WEdcAFjJ
+ #$uWIppmfBCZN]^$8hW*]8*iF^*:u[C&/Cnta3D7&?QL@-LQ6*k2fWQe-/G7Qtde=-%gBW8
+ $le/hSao)&Z=u,@>-t*#qZ.jHWH.4I]'ajF'pP^"Y+.l\S+/[:K-W'hW:YXH[]mjlk*ee;Z
+ t+r.G7B1qc=AA>C%=(0Xh-<7nd)X-DJ^;4?:NF2uteRC^)4@FaB\H+0$75&(roBeoY;-k_F
+ R/]uDcQSOfTQ^=&Zc6oSq:2LZZERhnnQh'Fm7k6gXQXf@RUnf9`*Km0Snal@IDX?X_RdAIq
+ Yo]2YX?QLAqDV=]2aN]jIJU>7EC;!54Ij"-eFq-tQ]G@jOq/4/lKoq?;l7j5DUHW#mFs#"6
+ h6Bem(@?#baHs4>pFc&4Xi#lbA3<>BWdQ"@/9BTU;mk&D;2Kr\!a7n9Qeq&XX+_4[J)lM(9
+ N>']=[8:a\/'j77C'4F)?#?oq]6h[^eMCRNlsMWf8V%Q+0+^3GdVP\!_Q>:31=$q?m&FOK0
+ 9V:7445Ml4U=G':F,\`,Xq052%9(eEK,FjSV.KN5@e/5W]uPtFmc&J=$B&IE;\7T9Fn/2t#
+ M8j4Y6XKK88W>i67.4H_*3[?@mZ?)qpn$%.aon<3h:L:.MXfiZ1-SN4%4gDK*Vf"1GHo$&6
+ L#k2_i?2XDf/upRVeS//hA"QbGtAZfDf=o!Y$8A!HcS:K][cN#Qq<JNFD'C6A&lNI,8*Jud
+ .PWtQU\*OaTmF1eT^KneEpG/['VZ0*KlaGdE_^Z[?tRnn3Mt>anr*N`<'W12b)IY*"il.DB
+ +f.o^nqs0P*tG2#!c2f?&eug]K%W`l=%MBkuerln%unOM%A\:f%^G1CGsOkU,lg.XSkdr5R
+ Bm^85^N2+W5,KURAdkpPa#;pa1#gu!%A5i2<JY:iDIU2J]=XBDl!FGJp':#b3nWLI$5gLb:
+ UPP824X$Pd1Td)EX$Db#;[l9^KU5+p.p_acX)`DH-%\A2P[@4S:5i&ei#S%=[h'%&8'=R\q
+ jXI1(`\,ZT_s"+$4c?ZJ?YlTW>m7?;^L$OlU8#gQ]D<'.Wb6=,]Yhp!D`(-/1QLf2)FCsD\
+ E)c5M=-A9Ue)/"O!JJpL&6'He1_jt](O`Lpaub>77BY%*_02hio9=k[/2)\\SQ(u481j&hT
+ lq:p0E_V2fKe5/51r28sA&c>[E^D'>Ks1Ze`L%<\(4I9g,LO(/fT6VR:pRId4q:a5e[*53W
+ VRhRsZ(GFKmno6/([Ci(fp4cCAW#nn="c-%1*h?/2I40Wbud3mP\qKa<-)p<sfn5E7aV2QW
+ cC^/8&,?Df[=J2,gE)g*\s-7i^Z=Q$V3>/qJ?YRB1KS8"3Epu^0>I-8&*C8P;Q)f+t-L'R%
+ ]T\Nm?X((@@Y(h/mp7Wd7C@jY?a&Lt4KB7bcYN@Ni=-_a8<8oX%b%6;jeGc-$p]4Dl8EPXU
+ 3[T$GbIo-oi@grWud%tXnc'N<m0%D<?Emc/UI4D]>][E;n&]NXK<LF(XhEJQ&;U!3/t&QW*
+ ?iZDJj>I5!#1R1b!bH,4)^Qf2<]J\omE-:YqM)Y/Zc/H$mjP4-TF4p?(ZqO*+oi'UC135!J
+ T$ce&.oAc.5W3dYC0[FMsok3flo-W+*VCgj7'o"NS.dkiKi(DjPj`st"HW"2X_fVqri:\FX
+ $E-].Q,\>?LNRWq>#b,>W$2hMcqi[/,jm%WTPa@`'St;OR<PH(>m<_$6g%i1mX5ED:M2A^@
+ )uC.K=]nYr!8!7s1oZS#PB5d(NHe/YE%pC;c^qSLdXO:&ac26OTPj5P,UOif2j?S#Zoc75(
+ -K#&r;X0Kmse28GoXX"M<q7qVO$sQe&D3\0[%C==g;;Ke,][\!QE!50O%4*>?DLM6!KUK&C
+ Crk"`Y+hM%I4R@KJ#8%!-&nQsobup\8c9^+(cic^B((F.VRdIXQ_#<h8iPF$@oq0'DJaI>s
+ lLqk"3/lh2*T4OQXW&sDV*417++a5;$NaMPO%6+[%qCjolfMg7f-VijtqXP6rY8.uT6eb[8
+ .S]pu>39;rkGAErOc<UP<<X3>QD)796_7Y8T<pp8SQ*q/lRY^"pc_cP;Yj^PFeQY[nMZ.I)
+ h<H*tH)E3(h9*B7MH7H?]iV?T=gtupO=$^TGL4LFPD`\^)]ao4MAoL`>BLCpFe:X\G=@WEl
+ UViXL1^L#+r["I^NP7hL(u^?92;l$rQmB?XJ8.TSn7Tf;p^]gIZ\&6[mLJ*O="GJhJA]:?Z
+ C^Ve<1QlZi@jq0*V1AN-W6p4j6!Zpbci<X8,hNhD@WhLofN)MpDh*I@a#aD<b0[`FPgUr3L
+ *[R,Z]TpM8R.hD6J#4_mBQLbJ!3)tRsnW[>s(pV^/)9nf1OakO#.$jfLK18?VnAut:Pn#WT
+ $!_Z&tnTB$?'moLp^S\^39&4F!gQ"siM3_2[cFl,C5PAB@@>.)uoQJ=Y8/!>kk^.*E'!Eeb
+ [V!a--)]I7ha+,Ym*8m(9"K014=/hnr,%ST9EmB?UVReqKU'\0dj-6QJb$;pR6mdGG.#ag,
+ 'WJ<#[(apmm>66igek`PmZ52j`-A/$m!XX@Qk>K;R71L^AZ?7ZTBdED`7N9Nt",?TT8rC`4
+ &A\L,<=3;0u%0=h]?fKHtnm,G+dqW9P;Mk'4M4G'3#T8<7>1=JQ"4_H'0^5%qX(n&R^uj(A
+ "6ZDG22!\PKaKheO!/hmTLF=?kAp;[n^PKlqSLIX+]-Ht;[@o1IBOD`UcJ=cBa.lNK"W9q#
+ :lUVgP&Bm4g"Dd6o-T^F/+\ec*iL4c[!aLG.FA8@mg@hBAo$:F<Ll/8C2oXPATZH0Ybsee4
+ )n70j4%L$AW46M=puD4!f!7R(rbO+AI.XGY-KEr>X8b[lN'+(erEPUf35,,4Uks2`/i"sQ4
+ lUo+Bqh9nNhaY,E07the/C\lJL,Q9"$NkQ4n)Do4pgrh8/"Q<kMFDf$SOro%l\Cn-?C;=7B
+ i(Y"3KYd.*'7dljkK\jYNYjq^A$@IPdjWe0LqAG8Do@7:nl,;LLO&l_[SSDKXQjS+l?QY7>
+ VIR*GX"nPt(b;"RLf:PL]gTZ!B,'Ho>OAoGI'kFPZ-\aq.1\+p7L*W"3*k+lp3n#^Wi6!n'
+ .lYjrBcF^]M-5*-'iN5T'RQAb0(=Q"'_!;mQrSUCmNUbpk\jE1(-dTb!K2dQ6'`3P?OX>*`
+ *>R2NHj%QdAhS*,DfheN>Kb16C,P<ElXcmT<7dL.N(.aOIWH)Bc14QII+F;-DP*";P,6^o6
+ _j1?]SF\^YBO!&A&iKA&,DN[>,)M'#dTW,En*^0cBl.ZJY<RT5s/K4Rh-nEJi&TFE[%X$:W
+ \Mt^4*5829sNkfX$W^Q(BdQnI=&VD]-V\Q)m0`fP+P-N;2m[XcM_tBNQ?s:*<7VNW_sf1bO
+ '1=&[sr/IQYi`H?\%GO)M=93Jg9^<t#44sB+,f<-(75CiYfp81tWrPH98jdc;[Z9u9^r3T\
+ /AT6inKf"q+NXl"8>7`26b@a<5*<1@76dAZ?I;.TLf=(.@ouIYLV[>$=cC:kMN$NCH=q.TN
+ rE@p^Wt",SY%o^lf9e0q1)6.CF7LmZZ`RSaCYP:k-RTGJf/`"Hknu^Dr=;tMD2Pg_!gd,Q-
+ <q\,Re1IZisXD0,1/46D:^!?=/C/@\F#(b[inIjU!r6>Oq9#iPh56-*Hm-diE[:r#tIMsru
+ qb&2PO#SXIX_.MP<@,'CMZ/=g9h+$K^gKR=U>rW$"fZPZqP32LR!<#A`<<]uNoXB4*2E8<]
+ @*QAUQWU8"AK-2BIl_UtiB@1t2deaQ/=Z-u+PM*04qqdgl0TMaG+LTA*fj:<Vf`J?b%m8ha
+ #.F'*s9ERe.4T!:WVha+uoqr"S\.Pt)lO=17)&\j(OH>TeEW5n3:/<un^i,[?e`r>G:WCGq
+ Y>_G3"$V-QjOL/7DAB8X$u(&sK]\QO]=^'?P!AlT=51^l;.S6(=iic0k\J[Y#DkulN5TNq)
+ :[SA[3,?Y$gZ3a=A2Bs3ChWI4ab%ZR[cNf2C^26oatSMARO$7f.+`]J7QMEb6Q?gL1@+SO>
+ CWP;[Q)fE`ULK&HimV7-'/S)D_$WMH-$8r\#]LWO9-63GCh.1HR[fFfS/+d-C,JP=ie9L1K
+ `Ci$ggWaiPHPmDSS>YWLq<Sp;poW0_pkbbo"nlS[SZeK9RO*l68;)]RPegoQSo3H3mVlh6V
+ eeZ?:d#7M2k0R>3FaJso=Ph[o).,:q$a<=sJDVPsE/5p.&?+X.VHK7]G[VadQgY6;3-A@/-
+ XBSl>(+fI]\Q<[)4t>=NDe'@sV.Dj(4K*fU4.$7@s,/G*E.Q3gH?Rr#G8KXU4^KEd,9oS4B
+ l`OjXmkh!f$,a[*sTXm*d'p4Z2*1Um54Fnn4>P#e&riueVR>h1""=1<"O+J09,tm_gkNcO&
+ W)WWDn"_FYZ^U6\c/VDIX.Z#"=K>GZNl;9(S8`_`R=JX658=RE%^F$4@MkC\4EcT[aXUa!`
+ %DqsHPVVMS?Sh6:;ilNZLhg9mq0Z]Buh)s)bc)I`6rR&N6sS.E0?*KQ]YrNE,S[`nBe`f6/
+ \Mhm29]"&hG054<e=in6t3O<D`Ne,+6`>P[$&P+M[Qk.<5LA<t'a;R+rbWIHQ//$sd)iBOl
+ 1nnm0qh_CE,_'QDC1N_ZH$O\:S!H:>![C-\G^0!/+kolnkqrS7WTK?9dTA[(YirEb)0Ud\6
+ V5&4r;!8++C*J/U:@^8Q>)MMRE<$S3Mi0#Du\9RDg?]R$@>`sn/*2;g#eFiioj/"@F8^Zo-
+ 0lMMKJ_uEt=EW^j6F[.R-.)Q>*XCh`fUcbH^e+r7p&QH3q2?KJ]C%Vha,@hGBB;_4tc&q]L
+ I_aQ$Jj6%LT9"Rj*($4Me/5P8C?)na&=-FhJm;\FB;;ee0t8F+4Y-h*9\^!b>,=f0<?;l<_
+ VXW7*h8K'G&/%=Yp]u,Z<n2,ic9cgHg_s@8fI*Hu-ai(T&f4:9r[C*EDRoUkf<_&6-2pVum
+ !CPRoD,r=C-\T%_?NkeVr1SKb(QgR7lObHWRl>8/G=u@NdY.NNo'9C:Y^FZ%[-*/%NLh'-j
+ <RmtP_cdb"W,9igcDuQ",Afd=H"P_-fsFM\3!@TOsWSGIQm'%QBbI$Og"QrHXn\VCt461<G
+ 5"12kEF<!CL$<9ImoA`3UX3,21c)A=OtbN<[Q>O7Q`Lp#&]T1rG`1S&?+5G;>eT4SX!9XKg
+ 5Z9"l,Adcb$R]siXe(m:T0_En9%1e27H<)lqPKS6RIBt=%a:$D4.%mMl:<rp4l,?qZf7s^U
+ Bl.`>UlWqgF]h(9PRPge?7un(TV54KfTD;(mU8Y"]V8DOhSenUnG3%uWD&5bj0i*4!*FI!a
+ .<6+:'dt;658UP/l/1@jP*D1FGIQbp2bf+VdaG?R4A>/PWd+ud*SA-=o47:^m)OMD-"!)b3
+ V`Mk2=l+MgGNY4h7JT/=Uu0k$4.#!ET.,GVr;np';DJ*C"Ftbn&?l0<_#iOSb%LB?NZ%i%3
+ D8X[`r<SoucT+cb8$bjoe-P3gq)^,uqi0=@@$idH]Cnc-tQ0]_^[]XN-))Od#D8qtCKtJ1p
+ A*Eo3-4=#)Nd@UinN*t9)kYpt(teC21get.O#RAp2XRR)UJGd?U[fuH'cPa7hXH`>(.:AtS
+ r?)a#eO`egnn`H1XlH*t/5n+m1p%,BdnuDkB"-6&I9?YH!WMlalX488ZOL\,M#MRqelg*kP
+ _:)-n>[P(Z5j!5F5m_m&&VZsYT_4>f.[.nc%.FIHU^;:rM:7c7Q?hT'((0`E=r>WXiu'Z=^
+ (5*HRQu@92#>Gb]A?eG9L3s;jtH$\Q>>aOADQLd9h5r]5E[Hu7nBN8Or)7kp"%P4d_UT#TF
+ -W$9u(&U'U6XRmLnRdpQ-^EN*;uPca(jd8#DrXC>O;#fgiY83`pT>TS>9()J#CZoQ]!)!8L
+ @9\Qqj03_AMNH:0ZQ-9tA)e"8'H#B/[D>8!1qop<)3;hZ"<h%i*]B`]>tl*;aa>PInqm,om
+ `"RG_@]qg!SfAQT>Gs[sMa43$a^IYY)HQ0Fn;#d^X(V078>-Bk(=SGPZ,,>NiM-@%-/[,A[
+ a5nu9QHtM:0=-j[J=@cUJ2@:FqrN_*X(3b]PZ:ZF\[Ej=ED]A+O<u*14*M-:8%Ln\R,_K0;
+ 17$*RCs\t4-&E+e/HmY%Rc(/!Kf5t),ntco<IhP]P:k?:NEJ;^<t$5IfV>r'4@ZJ`q39:D:
+ Z>/!lf6)nTQ9)fuqOC4X2(DJ?b+ErTPM!oX=1eo+3G-eEd@A\F&uGV.T)=#S/%P5U+:^6r%
+ aDM(WrOPAL%HmNDgkKF"0sJqkLD5\`ikS3(ZCZ.h2:"5I+C7/;>^POsj:iZ]GjJgWjQJ<X'
+ c(t5gmlTT=`qQZYl'igI\]%4oDB-$5O\:tsgcn@p'i.>"1^Mq;ZS8*C8coGtjqoMmA[4n9:
+ X(9tQ;R6Rgpk"us-GZ1A^Dd)s_QrQ%qr6^#8/B>l%7gQsMk.<Sdh^B9XH+rP$u#cJ]5uS#<
+ c1Z([g$bm9R.#Tn,$,FBAH4BKHtn^JYf1X#m-]L1]c:VC@r=.9`)0.2LfUOZGJV12Z%c(,"
+ 7r0kt5UTkP!-g]?2don=hsg"4@>58u'd@X>GA*c`YUoC1"^#l0G7?PmdDmb6"5='CApm\gf
+ >$e8!P>!u(`L!Xd?.WNU4s(B)u](,,2bfiNqhjt>ogRDF@7Vls52>UtFIESKmXZ`V1mk%iT
+ ='G`BQ1]\n$CCHJ.VVD5Gl@*M8P!(Y5S".pQnTRP\AZ!6&>.W-")XS[Ie9qS(-Vdkq/t$\=
+ "!dq^!q7n'k2&G%XPB>0B1^FTmS:$Ljik`Hh.nr/4pZ2CZ`M4-)sg@ll^ILLCLKKS$C,*gC
+ cD1UP#H07LUp%)Q(Bd5f9l!7,;ha7O<stEFEZrk]&6VITru46>A(Tfa]*9-FJdPm>[f#ln8
+ ONiks:I3QimQf5E\7)?&b3O3C'#5!!#0X#ljtP6^duT!'R+a!!(A5RK*<f4W1%e91qoH77C
+ 'N;'EaD'c.Zn\od31mG"6"EY>;!l0@U#bK@t02a9jb:I"eW?!Z:6dG_p%a)@EFe`r>G:WCG
+ qY60"nPtI=/Qq5L]4?I^3W2ZaF1G^gB16QdD"!9is6hfs#F7J6YO*!T]e)]lKZVaRsX:GJR
+ B$6JlQ'IX,h+rVn:hF9M"U>:=DI_U&;GpD6F#.ATj=X0J.R#pURs!HfYK/aN0/&B,e4@id>
+ IUh-#>,R?&X;H^gni,D!s_r;3cOQ_n`)Hs:f?$$g1"Q%=gog#6"jqF<E5=BY/j\5'G_IiDJ
+ tU=Z9kOhW"@_pV1U>La[fsuS0/o9Oca1R/$([MPU>HML51PjDJYDkR$\^Js.B8*q%\/0hS"
+ 8iogXtijNsSfXIC)<jlFOI_RcI0BXQPgRl7Nh7?F4D=0DtpOM%@q))rQnCgc)=+=g[*DACM
+ 5V5(]9S95Ks4<6D#5X:+c,u#j>.p4ebaA>f6PCVL<&aAQEXP49H.69Ln5_nDH:%IlG=>qqT
+ :e=68JDM[dGUaQCVkp6/h%%P"RUg#BZ>Ns0U"nF.!PM>S+sJ45[i`[l9hgVZ*KlaGfsblAl
+ dc4Wb/qa;cl48T1c1G>qgiQh^3nVi%`qA4i;IlhLA^ss!PM>SYTF[+O+>9^GY+)tVbXL7?'
+ (cNno=cE;'NGehp/h35b:q)e_Ri]h<g<:Ue,E?/aETF\KO>;[C'""Dj1O?JH]?m(DeHl6h6
+ Be3h:0[k0:!^(h@3uNZA24U3'@9=0ld2OJcU/NX=p2]VGjrT:aH@NBpUfb0#B[7Hm?687:8
+ 2+?(Ge+i:\Qi[@MX)@_u[i$jM>M.FFQX=Hn5AS%]1V./8H+=i3X*u<EH3D'sZWiCs;-L8YU
+ mY-FO2`<Q$mY?sW=0]PnioB(@H+U;`aB+fA77BX))$:f<K]L"($T+DN;RBWpmbPKX2_)m79
+ M>l>(Ds\U';Dk@1b:2:Fi+<J+sS>opG(QcNfLKR1tGA/Hm>\TRl>96oKs\KKP5+8g9k]4lf
+ JGDO<na98rDO[!D+@nbJFJV+MdjHl]3&J7M`P!4cn,L\aR\+.iB3Bpl$Xa=^54eTgOTGF/.
+ e&aFmAf;NdQ?N:4nTT7V]J[r:/qQ^?=Tf&i+_a*!h5d,*Vc?XI.Jn9YTR%>&LIE_WI\O]<7
+ ajp1.^ET79%@cS'\OX:&.^_Y?$!!)4+9`P0D*mt/k!9bS#!!%NjdKBOMkTNY#!.Z-="98Gc
+ +\_q:J0*,A!!)4+9`P0D*mt/k!9bS#!!%NjdKBOMkTNY#!.Z-="98Gc+\_q:J0*,A!!)4+9
+ `P0D*mt/k!9bS#!!%NjdKBOMkTNY#!.Z-="98Gc+\_q:J0*,A!!)4+9`P0D*mt/k!9bS#!!
+ %NjdKBOMkTNY#!.Z-="98Gc+\_q:J0*,A!!)5>T-d>sp!c;.!0A"(?XJrE"98G[!Mg#%!74
+ *%!!!!^U^7#$cpO6%!!#0X#ljtP6^duT!'R+a!!(A5RK*<f4erAa!74*%!!!!^U^7#$cpO6
+ %!!#0X#ljtP6^duT!'R+a!!(A5RK*<f4erAa!72s)@KZZ2T]9uNrX/]+z!!",Lrr^rV9oA~>
+Q
+Q Q
+showpage
+%%Trailer
+end
+%%EOF
diff --git a/tex/src/IEEEtran_openaccess.bst b/tex/src/IEEEtran_openaccess.bst
new file mode 100644
index 0000000..d77fbb3
--- /dev/null
+++ b/tex/src/IEEEtran_openaccess.bst
@@ -0,0 +1,2484 @@
+%%
+%% IEEEtran_openaccess.bst
+%% BibTeX Bibliography Style file for IEEE Journals and Conferences (unsorted)
+%% Copyright (c) 2003-2015 Michael Shell
+%% Copyright (c) 2020-2022 Boud Roukema for additions of \eprint and \doi rules.
+%% Based on IEEEtran.bst Version 1.14 (2015/08/26).
+%%
+%% WARNING: 2020-05-24: The \eprint and \doi rules are not quite right;
+%% some sed/tr hacks are needed in post-processing for a neat result.
+%%
+%% USAGE: The \eprint and \doi LaTeX commands must be defined in the main
+%% LaTeX source, e.g. pointing to arXiv and an open-access DOI (oadoi) resolver:
+%% \newcommand\eprint[1]{\href{https://arXiv.org/abs/#1}{{arXiv:#1}}}
+%% \newcommand\doi[1]{\href{https://oadoi.org/#1}{{DOI:#1}}}
+%%
+%%
+%% IEEEtran.bst version:
+%% Version 1.14 (2015/08/26)
+%%
+%% Copyright (c) 2003-2015 Michael Shell
+%%
+%% Original starting code base and algorithms obtained from the output of
+%% Patrick W. Daly's makebst package as well as from prior versions of
+%% IEEE BibTeX styles:
+%%
+%% 1. Howard Trickey and Oren Patashnik's ieeetr.bst (1985/1988)
+%% 2. Silvano Balemi and Richard H. Roy's IEEEbib.bst (1993)
+%%
+%% Support sites:
+%% http://www.michaelshell.org/tex/ieeetran/
+%% http://www.ctan.org/pkg/ieeetran
+%% and/or
+%% http://www.ieee.org/
+%%
+%% For use with BibTeX version 0.99a or later
+%%
+%% This is a numerical citation style.
+%%
+%%*************************************************************************
+%% Legal Notice:
+%% This code is offered as-is without any warranty either expressed or
+%% implied; without even the implied warranty of MERCHANTABILITY or
+%% FITNESS FOR A PARTICULAR PURPOSE!
+%% User assumes all risk.
+%% In no event shall the IEEE or any contributor to this code be liable for
+%% any damages or losses, including, but not limited to, incidental,
+%% consequential, or any other damages, resulting from the use or misuse
+%% of any information contained here.
+%%
+%% All comments are the opinions of their respective authors and are not
+%% necessarily endorsed by the IEEE.
+%%
+%% This work is distributed under the LaTeX Project Public License (LPPL)
+%% ( http://www.latex-project.org/ ) version 1.3, and may be freely used,
+%% distributed and modified. A copy of the LPPL, version 1.3, is included
+%% in the base LaTeX documentation of all distributions of LaTeX released
+%% 2003/12/01 or later.
+%% Retain all contribution notices and credits.
+%% ** Modified files should be clearly indicated as such, including **
+%% ** renaming them and changing author support contact information. **
+%%*************************************************************************
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+%% DEFAULTS FOR THE CONTROLS OF THE BST STYLE %%
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+% These are the defaults for the user adjustable controls. The values used
+% here can be overridden by the user via the IEEEtranBSTCTL entry type.
+
+% NOTE: The recommended LaTeX command to invoke a control entry type is:
+%
+%\makeatletter
+%\def\bstctlcite{\@ifnextchar[{\@bstctlcite}{\@bstctlcite[@auxout]}}
+%\def\@bstctlcite[#1]#2{\@bsphack
+% \@for\@citeb:=#2\do{%
+% \edef\@citeb{\expandafter\@firstofone\@citeb}%
+% \if@filesw\immediate\write\csname #1\endcsname{\string\citation{\@citeb}}\fi}%
+% \@esphack}
+%\makeatother
+%
+% It is called at the start of the document, before the first \cite, like:
+% \bstctlcite{IEEEexample:BSTcontrol}
+%
+% IEEEtran.cls V1.6 and later does provide this command.
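+%
+% For illustration only (a hedged sketch: the entry key matches the
+% \bstctlcite example above, and the values are hypothetical; any of the
+% CTL... fields listed under ENTRY below can be set the same way), the
+% matching control entry in the .bib database could look like:
+%
+% @IEEEtranBSTCTL{IEEEexample:BSTcontrol,
+%   CTLuse_forced_etal       = "yes",
+%   CTLmax_names_forced_etal = "6",
+%   CTLnames_show_etal       = "1"
+% }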
+
+
+
+% #0 turns off the display of the number for articles.
+% #1 enables
+FUNCTION {default.is.use.number.for.article} { #1 }
+
+
+% #0 turns off the display of the paper and type fields in @inproceedings.
+% #1 enables
+FUNCTION {default.is.use.paper} { #1 }
+
+
+% #0 turns off the display of urls
+% #1 enables
+FUNCTION {default.is.use.url} { #1 }
+
+
+% #0 turns off the forced use of "et al."
+% #1 enables
+FUNCTION {default.is.forced.et.al} { #0 }
+
+
+% The maximum number of names that can be present beyond which an "et al."
+% usage is forced. Be sure that num.names.shown.with.forced.et.al (below)
+% is not greater than this value!
+% Note: There are many instances of references in IEEE journals which have
+% a very large number of authors as well as instances in which "et al." is
+% used profusely.
+FUNCTION {default.max.num.names.before.forced.et.al} { #10 }
+
+
+% The number of names that will be shown with a forced "et al.".
+% Must be less than or equal to max.num.names.before.forced.et.al
+FUNCTION {default.num.names.shown.with.forced.et.al} { #1 }
+
+
+% #0 turns off the alternate interword spacing for entries with URLs.
+% #1 enables
+FUNCTION {default.is.use.alt.interword.spacing} { #1 }
+
+
+% If alternate interword spacing for entries with URLs is enabled, this is
+% the interword spacing stretch factor that will be used. For example, the
+% default "4" here means that the interword spacing in entries with URLs can
+% stretch to four times normal. Does not have to be an integer. Note that
+% the value specified here can be overridden by the user in their LaTeX
+% code via a command such as:
+% "\providecommand\BIBentryALTinterwordstretchfactor{1.5}" in addition to
+% that via the IEEEtranBSTCTL entry type.
+FUNCTION {default.ALTinterwordstretchfactor} { "4" }
+
+
+% #0 turns off the "dashification" of repeated (i.e., identical to those
+% of the previous entry) names. The IEEE normally does this.
+% #1 enables
+FUNCTION {default.is.dash.repeated.names} { #1 }
+
+
+% The default name format control string.
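+% In standard BibTeX name-format syntax: {f.} gives abbreviated first
+% names, {vv} the "von" part, {ll} the last name, and {jj} the "jr" part.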
+FUNCTION {default.name.format.string}{ "{f.~}{vv~}{ll}{, jj}" }
+
+
+% The default LaTeX font command for the names.
+FUNCTION {default.name.latex.cmd}{ "" }
+
+
+% The default URL prefix.
+FUNCTION {default.name.url.prefix}{ "[Online]. Available:" }
+
+
+% Other controls that cannot be accessed via the IEEEtranBSTCTL entry type.
+
+% #0 turns off the terminal startup banner/completed message so as to
+% operate more quietly.
+% #1 enables
+FUNCTION {is.print.banners.to.terminal} { #1 }
+
+
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+%% FILE VERSION AND BANNER %%
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+FUNCTION{bst.file.version} { "1.14" }
+FUNCTION{bst.file.date} { "2015/08/26" }
+FUNCTION{bst.file.website} { "http://www.michaelshell.org/tex/ieeetran/bibtex/" }
+
+FUNCTION {banner.message}
+{ is.print.banners.to.terminal
+ { "-- IEEEtran.bst version" " " * bst.file.version *
+ " (" * bst.file.date * ") " * "by Michael Shell." *
+ top$
+ "-- " bst.file.website *
+ top$
+ "-- See the " quote$ * "IEEEtran_bst_HOWTO.pdf" * quote$ * " manual for usage information." *
+ top$
+ }
+ { skip$ }
+ if$
+}
+
+FUNCTION {completed.message}
+{ is.print.banners.to.terminal
+ { ""
+ top$
+ "Done."
+ top$
+ }
+ { skip$ }
+ if$
+}
+
+
+
+
+%%%%%%%%%%%%%%%%%%%%%%
+%% STRING CONSTANTS %%
+%%%%%%%%%%%%%%%%%%%%%%
+
+FUNCTION {bbl.and}{ "and" }
+FUNCTION {bbl.etal}{ "et~al." }
+FUNCTION {bbl.editors}{ "eds." }
+FUNCTION {bbl.editor}{ "ed." }
+FUNCTION {bbl.edition}{ "ed." }
+FUNCTION {bbl.volume}{ "vol." }
+FUNCTION {bbl.of}{ "of" }
+FUNCTION {bbl.number}{ "no." }
+FUNCTION {bbl.in}{ "in" }
+FUNCTION {bbl.pages}{ "pp." }
+FUNCTION {bbl.page}{ "p." }
+FUNCTION {bbl.chapter}{ "ch." }
+FUNCTION {bbl.paper}{ "paper" }
+FUNCTION {bbl.part}{ "pt." }
+FUNCTION {bbl.patent}{ "Patent" }
+FUNCTION {bbl.patentUS}{ "U.S." }
+FUNCTION {bbl.revision}{ "Rev." }
+FUNCTION {bbl.series}{ "ser." }
+FUNCTION {bbl.standard}{ "Std." }
+FUNCTION {bbl.techrep}{ "Tech. Rep." }
+FUNCTION {bbl.mthesis}{ "Master's thesis" }
+FUNCTION {bbl.phdthesis}{ "Ph.D. dissertation" }
+FUNCTION {bbl.st}{ "st" }
+FUNCTION {bbl.nd}{ "nd" }
+FUNCTION {bbl.rd}{ "rd" }
+FUNCTION {bbl.th}{ "th" }
+
+
+% This is the LaTeX spacer that is used when a larger than normal space
+% is called for (such as just before the address:publisher).
+FUNCTION {large.space} { "\hskip 1em plus 0.5em minus 0.4em\relax " }
+
+% The LaTeX code for dashes that are used to represent repeated names.
+% Note: Some older IEEE journals used something like
+% "\rule{0.275in}{0.5pt}\," which is fairly thick and runs right along
+% the baseline. However, the IEEE now uses a thinner, above-baseline,
+% six-dash-long sequence.
+FUNCTION {repeated.name.dashes} { "------" }
+
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+%% PREDEFINED STRING MACROS %%
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+MACRO {jan} {"Jan."}
+MACRO {feb} {"Feb."}
+MACRO {mar} {"Mar."}
+MACRO {apr} {"Apr."}
+MACRO {may} {"May"}
+MACRO {jun} {"Jun."}
+MACRO {jul} {"Jul."}
+MACRO {aug} {"Aug."}
+MACRO {sep} {"Sep."}
+MACRO {oct} {"Oct."}
+MACRO {nov} {"Nov."}
+MACRO {dec} {"Dec."}
+
+
+
+%%%%%%%%%%%%%%%%%%
+%% ENTRY FIELDS %%
+%%%%%%%%%%%%%%%%%%
+
+ENTRY
+ { address
+ archiveprefix
+ assignee
+ author
+ booktitle
+ chapter
+ day
+ dayfiled
+ doi
+ edition
+ editor
+ eprint
+ howpublished
+ institution
+ intype
+ journal
+ key
+ language
+ month
+ monthfiled
+ nationality
+ note
+ number
+ organization
+ pages
+ paper
+ publisher
+ school
+ series
+ revision
+ title
+ type
+ url
+ volume
+ year
+ yearfiled
+ CTLuse_article_number
+ CTLuse_paper
+ CTLuse_url
+ CTLuse_forced_etal
+ CTLmax_names_forced_etal
+ CTLnames_show_etal
+ CTLuse_alt_spacing
+ CTLalt_stretch_factor
+ CTLdash_repeated_names
+ CTLname_format_string
+ CTLname_latex_cmd
+ CTLname_url_prefix
+ }
+ {}
+ { label }
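+
+% Of the fields above, eprint and doi are presumably the ones consumed by
+% the \eprint and \doi rules mentioned in the header of this file. A hedged
+% sketch of a .bib entry supplying them (all values below are hypothetical
+% placeholders):
+%
+% @article{example:key,
+%   author  = "A. Author and B. Author",
+%   title   = "An example title",
+%   journal = "Example Journal",
+%   volume  = "1",
+%   pages   = "1--10",
+%   year    = "2020",
+%   eprint  = "2001.00001",
+%   doi     = "10.0000/example.doi"
+% }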
+
+
+
+
+%%%%%%%%%%%%%%%%%%%%%%%
+%% INTEGER VARIABLES %%
+%%%%%%%%%%%%%%%%%%%%%%%
+
+INTEGERS { prev.status.punct this.status.punct punct.std
+ punct.no punct.comma punct.period
+ prev.status.space this.status.space space.std
+ space.no space.normal space.large
+ prev.status.quote this.status.quote quote.std
+ quote.no quote.close
+ prev.status.nline this.status.nline nline.std
+ nline.no nline.newblock
+ status.cap cap.std
+ cap.no cap.yes}
+
+INTEGERS { longest.label.width multiresult nameptr namesleft number.label numnames }
+
+INTEGERS { is.use.number.for.article
+ is.use.paper
+ is.use.url
+ is.forced.et.al
+ max.num.names.before.forced.et.al
+ num.names.shown.with.forced.et.al
+ is.use.alt.interword.spacing
+ is.dash.repeated.names}
+
+
+%%%%%%%%%%%%%%%%%%%%%%
+%% STRING VARIABLES %%
+%%%%%%%%%%%%%%%%%%%%%%
+
+STRINGS { bibinfo
+ longest.label
+ oldname
+ s
+ t
+ ALTinterwordstretchfactor
+ name.format.string
+ name.latex.cmd
+ name.url.prefix}
+
+
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%
+%% LOW LEVEL FUNCTIONS %%
+%%%%%%%%%%%%%%%%%%%%%%%%%
+
+FUNCTION {initialize.controls}
+{ default.is.use.number.for.article 'is.use.number.for.article :=
+ default.is.use.paper 'is.use.paper :=
+ default.is.use.url 'is.use.url :=
+ default.is.forced.et.al 'is.forced.et.al :=
+ default.max.num.names.before.forced.et.al 'max.num.names.before.forced.et.al :=
+ default.num.names.shown.with.forced.et.al 'num.names.shown.with.forced.et.al :=
+ default.is.use.alt.interword.spacing 'is.use.alt.interword.spacing :=
+ default.is.dash.repeated.names 'is.dash.repeated.names :=
+ default.ALTinterwordstretchfactor 'ALTinterwordstretchfactor :=
+ default.name.format.string 'name.format.string :=
+ default.name.latex.cmd 'name.latex.cmd :=
+ default.name.url.prefix 'name.url.prefix :=
+}
+
+
+% This IEEEtran.bst features a very powerful and flexible mechanism for
+% controlling the capitalization, punctuation, spacing, quotation, and
+% newlines of the formatted entry fields. (Note: IEEEtran.bst does not need
+% or use the newline/newblock feature, but it has been implemented for
+% possible future use.) The output states of IEEEtran.bst consist of
+% multiple independent attributes and, as such, can be thought of as being
+% vectors, rather than the simple scalar values ("before.all",
+% "mid.sentence", etc.) used in most other .bst files.
+%
+% The more flexible and complex design used here was motivated in part by
+% the IEEE's rather unusual bibliography style. For example, the IEEE ends the
+% previous field item with a period and large space prior to the publisher
+% address; the @electronic entry types use periods as inter-item punctuation
+% rather than the commas used by the other entry types; and URLs are never
+% followed by periods even though they are the last item in the entry.
+% Although it is possible to accommodate these features with the conventional
+% output state system, the seemingly endless exceptions make for convoluted,
+% unreliable and difficult to maintain code.
+%
+% IEEEtran.bst's output state system can be easily understood via a simple
+% illustration of the two most recently formatted entry fields (on the stack):
+%
+% CURRENT_ITEM
+% "PREVIOUS_ITEM
+%
+% which, in this example, is to eventually appear in the bibliography as:
+%
+% "PREVIOUS_ITEM," CURRENT_ITEM
+%
+% It is the job of the output routine to take the previous item off of the
+% stack (while leaving the current item at the top of the stack), apply its
+% trailing punctuation (including closing quote marks) and spacing, and then
+% to write the result to BibTeX's output buffer:
+%
+% "PREVIOUS_ITEM,"
+%
+% Punctuation (and spacing) between items is often determined by both of the
+% items rather than just the first one. The presence of quotation marks
+% further complicates the situation because, in standard English, trailing
+% punctuation marks are supposed to be contained within the quotes.
+%
+% IEEEtran.bst maintains two output state (aka "status") vectors which
+% correspond to the previous and current (aka "this") items. Each vector
+% consists of several independent attributes which track punctuation,
+% spacing, quotation, and newlines. Capitalization status is handled by a
+% separate scalar because the format routines, not the output routine,
+% handle capitalization and, therefore, there is no need to maintain the
+% capitalization attribute for both the "previous" and "this" items.
+%
+% When a format routine adds a new item, it copies the current output status
+% vector to the previous output status vector and (usually) resets the
+% current (this) output status vector to a "standard status" vector. Using a
+% "standard status" vector in this way allows us to redefine what we mean by
+% "standard status" at the start of each entry handler and reuse the same
+% format routines under the various inter-item separation schemes. For
+% example, the standard status vector for the @book entry type may use
+% commas for item separators, while the @electronic type may use periods,
+% yet both entry handlers exploit many of the exact same format routines.
+%
+% Because format routines have write access to the output status vector of
+% the previous item, they can override the punctuation choices of the
+% previous format routine! Therefore, it becomes trivial to implement rules
+% such as "Always use a period and a large space before the publisher." By
+% pushing the generation of the closing quote mark to the output routine, we
+% avoid all the problems caused by having to close a quote before having all
+% the information required to determine what the punctuation should be.
+%
+% The IEEEtran.bst output state system can easily be expanded if needed.
+% For instance, it is easy to add a "space.tie" attribute value if the
+% bibliography rules mandate that two items have to be joined with an
+% unbreakable space.
+
+FUNCTION {initialize.status.constants}
+{ #0 'punct.no :=
+ #1 'punct.comma :=
+ #2 'punct.period :=
+ #0 'space.no :=
+ #1 'space.normal :=
+ #2 'space.large :=
+ #0 'quote.no :=
+ #1 'quote.close :=
+ #0 'cap.no :=
+ #1 'cap.yes :=
+ #0 'nline.no :=
+ #1 'nline.newblock :=
+}
+
+FUNCTION {std.status.using.comma}
+{ punct.comma 'punct.std :=
+ space.normal 'space.std :=
+ quote.no 'quote.std :=
+ nline.no 'nline.std :=
+ cap.no 'cap.std :=
+}
+
+FUNCTION {std.status.using.period}
+{ punct.period 'punct.std :=
+ space.normal 'space.std :=
+ quote.no 'quote.std :=
+ nline.no 'nline.std :=
+ cap.yes 'cap.std :=
+}
+
+FUNCTION {initialize.prev.this.status}
+{ punct.no 'prev.status.punct :=
+ space.no 'prev.status.space :=
+ quote.no 'prev.status.quote :=
+ nline.no 'prev.status.nline :=
+ punct.no 'this.status.punct :=
+ space.no 'this.status.space :=
+ quote.no 'this.status.quote :=
+ nline.no 'this.status.nline :=
+ cap.yes 'status.cap :=
+}
+
+FUNCTION {this.status.std}
+{ punct.std 'this.status.punct :=
+ space.std 'this.status.space :=
+ quote.std 'this.status.quote :=
+ nline.std 'this.status.nline :=
+}
+
+FUNCTION {cap.status.std}{ cap.std 'status.cap := }
+
+FUNCTION {this.to.prev.status}
+{ this.status.punct 'prev.status.punct :=
+ this.status.space 'prev.status.space :=
+ this.status.quote 'prev.status.quote :=
+ this.status.nline 'prev.status.nline :=
+}
+
+
+FUNCTION {not}
+{ { #0 }
+ { #1 }
+ if$
+}
+
+FUNCTION {and}
+{ { skip$ }
+ { pop$ #0 }
+ if$
+}
+
+FUNCTION {or}
+{ { pop$ #1 }
+ { skip$ }
+ if$
+}
+
+
+% convert the strings "yes" or "no" to #1 or #0 respectively
+FUNCTION {yes.no.to.int}
+{ "l" change.case$ duplicate$
+ "yes" =
+ { pop$ #1 }
+ { duplicate$ "no" =
+ { pop$ #0 }
+ { "unknown boolean " quote$ * swap$ * quote$ *
+ " in " * cite$ * warning$
+ #0
+ }
+ if$
+ }
+ if$
+}
+
+
+% pushes true if the single char string on the stack is in the
+% range of "0" to "9"
+FUNCTION {is.num}
+{ chr.to.int$
+ duplicate$ "0" chr.to.int$ < not
+ swap$ "9" chr.to.int$ > not and
+}
+
+% multiplies the integer on the stack by a factor of 10
+FUNCTION {bump.int.mag}
+{ #0 'multiresult :=
+ { duplicate$ #0 > }
+ { #1 -
+ multiresult #10 +
+ 'multiresult :=
+ }
+ while$
+pop$
+multiresult
+}
+
+% converts a single character string on the stack to an integer
+FUNCTION {char.to.integer}
+{ duplicate$
+ is.num
+ { chr.to.int$ "0" chr.to.int$ - }
+ {"noninteger character " quote$ * swap$ * quote$ *
+ " in integer field of " * cite$ * warning$
+ #0
+ }
+ if$
+}
+
+% converts a string on the stack to an integer
+FUNCTION {string.to.integer}
+{ duplicate$ text.length$ 'namesleft :=
+ #1 'nameptr :=
+ #0 'numnames :=
+ { nameptr namesleft > not }
+ { duplicate$ nameptr #1 substring$
+ char.to.integer numnames bump.int.mag +
+ 'numnames :=
+ nameptr #1 +
+ 'nameptr :=
+ }
+ while$
+pop$
+numnames
+}
+
+
+
+
+% The output routines write out the *next* to the top (previous) item on the
+% stack, adding punctuation and such as needed. Since IEEEtran.bst maintains
+% the output status for the top two items on the stack, these output
+% routines have to consider the previous output status (which corresponds to
+% the item that is being output). Full independent control of punctuation,
+% closing quote marks, spacing, and newblock is provided.
+%
+% "output.nonnull" does not check for the presence of a previous empty
+% item.
+%
+% "output" does check for the presence of a previous empty item and will
+% remove an empty item rather than outputting it.
+%
+% "output.warn" is like "output", but will issue a warning if it detects
+% an empty item.
+
+
+FUNCTION {output.nonnull}
+{ swap$
+ prev.status.punct punct.comma =
+ { "," * }
+ { skip$ }
+ if$
+ prev.status.punct punct.period =
+ { add.period$ }
+ { skip$ }
+ if$
+ prev.status.quote quote.close =
+ { "''" * }
+ { skip$ }
+ if$
+ prev.status.space space.normal =
+ { " " * }
+ { skip$ }
+ if$
+ prev.status.space space.large =
+ { large.space * }
+ { skip$ }
+ if$
+ write$
+ prev.status.nline nline.newblock =
+ { newline$ "\newblock " write$ }
+ { skip$ }
+ if$
+}
+
+FUNCTION {output.nonnull.nocomma}
+{ swap$
+ prev.status.punct punct.comma =
+ { }
+ { skip$ }
+ if$
+ prev.status.punct punct.period =
+ { add.period$ }
+ { skip$ }
+ if$
+ prev.status.quote quote.close =
+ { "''" * }
+ { skip$ }
+ if$
+ prev.status.space space.normal =
+ { " " * }
+ { skip$ }
+ if$
+ prev.status.space space.large =
+ { large.space * }
+ { skip$ }
+ if$
+ write$
+ prev.status.nline nline.newblock =
+ { newline$ "\newblock " write$ }
+ { skip$ }
+ if$
+}
+
+
+FUNCTION {output}
+{ duplicate$ empty$
+ 'pop$
+ 'output.nonnull
+ if$
+}
+
+FUNCTION {output.nocomma}
+{ duplicate$ empty$
+ 'pop$
+ 'output.nonnull.nocomma
+ if$
+}
+
+
+FUNCTION {output.warn}
+{ 't :=
+ duplicate$ empty$
+ { pop$ "empty " t * " in " * cite$ * warning$ }
+ 'output.nonnull
+ if$
+}
+
+% "fin.entry" is the output routine that handles the last item of the entry
+% (which will be on the top of the stack when "fin.entry" is called).
+
+FUNCTION {fin.entry}
+{ this.status.punct punct.no =
+ { skip$ }
+ { add.period$ }
+ if$
+ this.status.quote quote.close =
+ { "''" * }
+ { skip$ }
+ if$
+write$
+newline$
+}
+
+
+FUNCTION {is.last.char.not.punct}
+{ duplicate$
+ "}" * add.period$
+ #-1 #1 substring$ "." =
+}
+
+FUNCTION {is.multiple.pages}
+{ 't :=
+ #0 'multiresult :=
+ { multiresult not
+ t empty$ not
+ and
+ }
+ { t #1 #1 substring$
+ duplicate$ "-" =
+ swap$ duplicate$ "," =
+ swap$ "+" =
+ or or
+ { #1 'multiresult := }
+ { t #2 global.max$ substring$ 't := }
+ if$
+ }
+ while$
+ multiresult
+}
+
+FUNCTION {capitalize}{ "u" change.case$ "t" change.case$ }
+
+FUNCTION {emphasize}
+{ duplicate$ empty$
+ { pop$ "" }
+ { "\emph{" swap$ * "}" * }
+ if$
+}
+
+FUNCTION {do.name.latex.cmd}
+{ name.latex.cmd
+ empty$
+ { skip$ }
+ { name.latex.cmd "{" * swap$ * "}" * }
+ if$
+}
+
+% IEEEtran.bst uses its own \BIBforeignlanguage command which directly
+% invokes the TeX hyphenation patterns without the need of the Babel
+% package. Babel does a lot more than switch hyphenation patterns and
+% its loading can cause unintended effects in many class files (such as
+% IEEEtran.cls).
+FUNCTION {select.language}
+{ duplicate$ empty$ 'pop$
+ { language empty$ 'skip$
+ { "\BIBforeignlanguage{" language * "}{" * swap$ * "}" * }
+ if$
+ }
+ if$
+}
+
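+% Pushes a tie "~" under the string on the top of the stack if that
+% string is shorter than three characters, otherwise a normal space
+% " ".  This ties short numbers to their labels (e.g., "vol.~7") while
+% longer numbers get a normal, breakable space.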
+FUNCTION {tie.or.space.prefix}
+{ duplicate$ text.length$ #3 <
+ { "~" }
+ { " " }
+ if$
+ swap$
+}
+
+FUNCTION {get.bbl.editor}
+{ editor num.names$ #1 > 'bbl.editors 'bbl.editor if$ }
+
+FUNCTION {space.word}{ " " swap$ * " " * }
+
+
+% Field Conditioners, Converters, Checkers and External Interfaces
+
+FUNCTION {empty.field.to.null.string}
+{ duplicate$ empty$
+ { pop$ "" }
+ { skip$ }
+ if$
+}
+
+FUNCTION {either.or.check}
+{ empty$
+ { pop$ }
+ { "can't use both " swap$ * " fields in " * cite$ * warning$ }
+ if$
+}
+
+FUNCTION {empty.entry.warn}
+{ author empty$ title empty$ howpublished empty$
+ month empty$ year empty$ note empty$ url empty$
+ and and and and and and
+ { "all relevant fields are empty in " cite$ * warning$ }
+ 'skip$
+ if$
+}
+
+
+% The bibinfo system provides a way for the electronic parsing/acquisition
+% of a bibliography's contents as is done by ReVTeX. For example, a field
+% could be entered into the bibliography as:
+% \bibinfo{volume}{2}
+% Only the "2" would show up in the document, but the LaTeX \bibinfo command
+% could do additional things with the information. IEEEtran.bst does provide
+% a \bibinfo command via "\providecommand{\bibinfo}[2]{#2}". However, it is
+% currently not used as the bogus bibinfo functions defined here output the
+% entry values directly without the \bibinfo wrapper. The bibinfo functions
+% themselves (and the calls to them) are retained for possible future use.
+%
+% bibinfo.check avoids acting on missing fields while bibinfo.warn will
+% issue a warning message if a missing field is detected. Prior to calling
+% the bibinfo functions, the user should push the field value and then its
+% name string, in that order.
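+%
+% For example, the calling sequence used throughout this file,
+%
+%   month "month" bibinfo.check
+%
+% pushes the month field value and then the name string "month";
+% bibinfo.check leaves either the field value (if present) or the null
+% string "" on the stack.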
+
+FUNCTION {bibinfo.check}
+{ swap$ duplicate$ missing$
+ { pop$ pop$ "" }
+ { duplicate$ empty$
+ { swap$ pop$ }
+ { swap$ pop$ }
+ if$
+ }
+ if$
+}
+
+FUNCTION {bibinfo.warn}
+{ swap$ duplicate$ missing$
+ { swap$ "missing " swap$ * " in " * cite$ * warning$ pop$ "" }
+ { duplicate$ empty$
+ { swap$ "empty " swap$ * " in " * cite$ * warning$ }
+ { swap$ pop$ }
+ if$
+ }
+ if$
+}
+
+
+% The IEEE separates large numbers with more than 4 digits into groups of
+% three. The IEEE uses a small space to separate these number groups.
+% Typical applications include patent and page numbers.
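+% For example, with the settings below, "12345" is written as
+% "12\,345" and "1234567" as "1\,234\,567", while "1234" is left
+% untouched.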
+
+% number of consecutive digits required to trigger the group separation.
+FUNCTION {large.number.trigger}{ #5 }
+
+% For numbers longer than the trigger, this is the blocksize of the groups.
+% The blocksize must be less than the trigger threshold, and 2 * blocksize
+% must be greater than the trigger threshold (can't do more than one
+% separation on the initial trigger).
+FUNCTION {large.number.blocksize}{ #3 }
+
+% What is actually inserted between the number groups.
+FUNCTION {large.number.separator}{ "\," }
+
+% So as to save on integer variables by reusing existing ones, numnames
+% holds the current number of consecutive digits read and nameptr holds
+% the number that will trigger an inserted space.
+FUNCTION {large.number.separate}
+{ 't :=
+ ""
+ #0 'numnames :=
+ large.number.trigger 'nameptr :=
+ { t empty$ not }
+ { t #-1 #1 substring$ is.num
+ { numnames #1 + 'numnames := }
+ { #0 'numnames :=
+ large.number.trigger 'nameptr :=
+ }
+ if$
+ t #-1 #1 substring$ swap$ *
+ t #-2 global.max$ substring$ 't :=
+ numnames nameptr =
+ { duplicate$ #1 nameptr large.number.blocksize - substring$ swap$
+ nameptr large.number.blocksize - #1 + global.max$ substring$
+ large.number.separator swap$ * *
+ nameptr large.number.blocksize - 'numnames :=
+ large.number.blocksize #1 + 'nameptr :=
+ }
+ { skip$ }
+ if$
+ }
+ while$
+}
+
+% Converts all single dashes "-" to double dashes "--".
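+% For example, a page range entered as "101-110" is written as
+% "101--110", while dashes that are already repeated (e.g., "--") are
+% left unchanged.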
+FUNCTION {n.dashify}
+{ large.number.separate
+ 't :=
+ ""
+ { t empty$ not }
+ { t #1 #1 substring$ "-" =
+ { t #1 #2 substring$ "--" = not
+ { "--" *
+ t #2 global.max$ substring$ 't :=
+ }
+ { { t #1 #1 substring$ "-" = }
+ { "-" *
+ t #2 global.max$ substring$ 't :=
+ }
+ while$
+ }
+ if$
+ }
+ { t #1 #1 substring$ *
+ t #2 global.max$ substring$ 't :=
+ }
+ if$
+ }
+ while$
+}
+
+
+% This function detects entries whose name list is identical to that of
+% the previous entry and replaces the repeated names with dashes (if the
+% "is.dash.repeated.names" user control is nonzero).
+FUNCTION {name.or.dash}
+{ 's :=
+ oldname empty$
+ { s 'oldname := s }
+ { s oldname =
+ { is.dash.repeated.names
+ { repeated.name.dashes }
+ { s 'oldname := s }
+ if$
+ }
+ { s 'oldname := s }
+ if$
+ }
+ if$
+}
+
+% Converts the number string on the top of the stack to
+% "numerical ordinal form" (e.g., "7" to "7th"). There is
+% no artificial limit to the upper bound of the numbers as the
+% two least significant digits determine the ordinal form.
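+% For example, "21" becomes "21st", while "11" becomes "11th" because
+% its penultimate digit is "1".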
+FUNCTION {num.to.ordinal}
+{ duplicate$ #-2 #1 substring$ "1" =
+ { bbl.th * }
+ { duplicate$ #-1 #1 substring$ "1" =
+ { bbl.st * }
+ { duplicate$ #-1 #1 substring$ "2" =
+ { bbl.nd * }
+ { duplicate$ #-1 #1 substring$ "3" =
+ { bbl.rd * }
+ { bbl.th * }
+ if$
+ }
+ if$
+ }
+ if$
+ }
+ if$
+}
+
+% If the string on the top of the stack begins with a number,
+% (e.g., 11th) then replace the string with the leading number
+% it contains. Otherwise retain the string as-is. s holds the
+% extracted number, t holds the part of the string that remains
+% to be scanned.
+FUNCTION {extract.num}
+{ duplicate$ 't :=
+ "" 's :=
+ { t empty$ not }
+ { t #1 #1 substring$
+ t #2 global.max$ substring$ 't :=
+ duplicate$ is.num
+ { s swap$ * 's := }
+ { pop$ "" 't := }
+ if$
+ }
+ while$
+ s empty$
+ 'skip$
+ { pop$ s }
+ if$
+}
+
+% Converts a number word (e.g., "second") on the top of the stack to
+% its Arabic string form. Will be successful up to "tenth".
+FUNCTION {word.to.num}
+{ duplicate$ "l" change.case$ 's :=
+ s "first" =
+ { pop$ "1" }
+ { skip$ }
+ if$
+ s "second" =
+ { pop$ "2" }
+ { skip$ }
+ if$
+ s "third" =
+ { pop$ "3" }
+ { skip$ }
+ if$
+ s "fourth" =
+ { pop$ "4" }
+ { skip$ }
+ if$
+ s "fifth" =
+ { pop$ "5" }
+ { skip$ }
+ if$
+ s "sixth" =
+ { pop$ "6" }
+ { skip$ }
+ if$
+ s "seventh" =
+ { pop$ "7" }
+ { skip$ }
+ if$
+ s "eighth" =
+ { pop$ "8" }
+ { skip$ }
+ if$
+ s "ninth" =
+ { pop$ "9" }
+ { skip$ }
+ if$
+ s "tenth" =
+ { pop$ "10" }
+ { skip$ }
+ if$
+}
+
+
+% Converts the string on the top of the stack to numerical
+% ordinal (e.g., "11th") form.
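+% For example, an edition field given as "2", "2nd", or "Second" is
+% normalized to the same "2nd" form.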
+FUNCTION {convert.edition}
+{ duplicate$ empty$ 'skip$
+ { duplicate$ #1 #1 substring$ is.num
+ { extract.num
+ num.to.ordinal
+ }
+ { word.to.num
+ duplicate$ #1 #1 substring$ is.num
+ { num.to.ordinal }
+ { "edition ordinal word " quote$ * edition * quote$ *
+ " may be too high (or improper) for conversion" * " in " * cite$ * warning$
+ }
+ if$
+ }
+ if$
+ }
+ if$
+}
+
+
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+%% LATEX BIBLIOGRAPHY CODE %%
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+FUNCTION {start.entry}
+{ newline$
+ "\bibitem{" write$
+ cite$ write$
+ "}" write$
+ newline$
+ ""
+ initialize.prev.this.status
+}
+
+% Here we write out all the LaTeX code that we will need. The most involved
+% code sequences are those that control the alternate interword spacing and
+% foreign language hyphenation patterns. The heavy use of \providecommand
+% gives users a way to override the defaults. Special thanks to Javier Bezos,
+% Johannes Braams, Robin Fairbairns, Heiko Oberdiek, Donald Arseneau and all
+% the other gurus on comp.text.tex for their help and advice on the topic of
+% \selectlanguage, Babel and BibTeX.
+FUNCTION {begin.bib}
+{ "% Generated by IEEEtran.bst, version: " bst.file.version * " (" * bst.file.date * ")" *
+ write$ newline$
+ preamble$ empty$ 'skip$
+ { preamble$ write$ newline$ }
+ if$
+ "\begin{thebibliography}{" longest.label * "}" *
+ write$ newline$
+ "\providecommand{\url}[1]{#1}"
+ write$ newline$
+ "\csname url@samestyle\endcsname"
+ write$ newline$
+ "\providecommand{\newblock}{\relax}"
+ write$ newline$
+ "\providecommand{\bibinfo}[2]{#2}"
+ write$ newline$
+ "\providecommand{\BIBentrySTDinterwordspacing}{\spaceskip=0pt\relax}"
+ write$ newline$
+ "\providecommand{\BIBentryALTinterwordstretchfactor}{"
+ ALTinterwordstretchfactor * "}" *
+ write$ newline$
+ "\providecommand{\BIBentryALTinterwordspacing}{\spaceskip=\fontdimen2\font plus "
+ write$ newline$
+ "\BIBentryALTinterwordstretchfactor\fontdimen3\font minus \fontdimen4\font\relax}"
+ write$ newline$
+ "\providecommand{\BIBforeignlanguage}[2]{{%"
+ write$ newline$
+ "\expandafter\ifx\csname l@#1\endcsname\relax"
+ write$ newline$
+ "\typeout{** WARNING: IEEEtran.bst: No hyphenation pattern has been}%"
+ write$ newline$
+ "\typeout{** loaded for the language `#1'. Using the pattern for}%"
+ write$ newline$
+ "\typeout{** the default language instead.}%"
+ write$ newline$
+ "\else"
+ write$ newline$
+ "\language=\csname l@#1\endcsname"
+ write$ newline$
+ "\fi"
+ write$ newline$
+ "#2}}"
+ write$ newline$
+ "\providecommand{\BIBdecl}{\relax}"
+ write$ newline$
+ "\BIBdecl"
+ write$ newline$
+}
+
+FUNCTION {end.bib}
+{ newline$ "\end{thebibliography}" write$ newline$ }
+
+FUNCTION {if.url.alt.interword.spacing}
+{ is.use.alt.interword.spacing
+ { is.use.url
+ { url empty$ 'skip$ {"\BIBentryALTinterwordspacing" write$ newline$} if$ }
+ { skip$ }
+ if$
+ }
+ { skip$ }
+ if$
+}
+
+FUNCTION {if.url.std.interword.spacing}
+{ is.use.alt.interword.spacing
+ { is.use.url
+ { url empty$ 'skip$ {"\BIBentrySTDinterwordspacing" write$ newline$} if$ }
+ { skip$ }
+ if$
+ }
+ { skip$ }
+ if$
+}
+
+
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%
+%% LONGEST LABEL PASS %%
+%%%%%%%%%%%%%%%%%%%%%%%%
+
+FUNCTION {initialize.longest.label}
+{ "" 'longest.label :=
+ #1 'number.label :=
+ #0 'longest.label.width :=
+}
+
+FUNCTION {longest.label.pass}
+{ type$ "ieeetranbstctl" =
+ { skip$ }
+ { number.label int.to.str$ 'label :=
+ number.label #1 + 'number.label :=
+ label width$ longest.label.width >
+ { label 'longest.label :=
+ label width$ 'longest.label.width :=
+ }
+ { skip$ }
+ if$
+ }
+ if$
+}
+
+
+
+
+%%%%%%%%%%%%%%%%%%%%%
+%% FORMAT HANDLERS %%
+%%%%%%%%%%%%%%%%%%%%%
+
+%% Lower Level Formats (used by higher level formats)
+
+FUNCTION {format.address.org.or.pub.date}
+{ 't :=
+ ""
+ year empty$
+ { "empty year in " cite$ * warning$ }
+ { skip$ }
+ if$
+ address empty$ t empty$ and
+ year empty$ and month empty$ and
+ { skip$ }
+ { this.to.prev.status
+ this.status.std
+ cap.status.std
+ address "address" bibinfo.check *
+ t empty$
+ { skip$ }
+ { punct.period 'prev.status.punct :=
+ space.large 'prev.status.space :=
+ address empty$
+ { skip$ }
+ { ": " * }
+ if$
+ t *
+ }
+ if$
+ year empty$ month empty$ and
+ { skip$ }
+ { t empty$ address empty$ and
+ { skip$ }
+ { ", " * }
+ if$
+ month empty$
+ { year empty$
+ { skip$ }
+ { year "year" bibinfo.check * }
+ if$
+ }
+ { month "month" bibinfo.check *
+ year empty$
+ { skip$ }
+ { " " * year "year" bibinfo.check * }
+ if$
+ }
+ if$
+ }
+ if$
+ }
+ if$
+}
+
+
+FUNCTION {format.names}
+{ 'bibinfo :=
+ duplicate$ empty$ 'skip$ {
+ this.to.prev.status
+ this.status.std
+ 's :=
+ "" 't :=
+ #1 'nameptr :=
+ s num.names$ 'numnames :=
+ numnames 'namesleft :=
+ { namesleft #0 > }
+ { s nameptr
+ name.format.string
+ format.name$
+ bibinfo bibinfo.check
+ 't :=
+ nameptr #1 >
+ { nameptr num.names.shown.with.forced.et.al #1 + =
+ numnames max.num.names.before.forced.et.al >
+ is.forced.et.al and and
+ { "others" 't :=
+ #1 'namesleft :=
+ }
+ { skip$ }
+ if$
+ namesleft #1 >
+ { ", " * t do.name.latex.cmd * }
+ { s nameptr "{ll}" format.name$ duplicate$ "others" =
+ { 't := }
+ { pop$ }
+ if$
+ t "others" =
+ { " " * bbl.etal emphasize * }
+ { numnames #2 >
+ { "," * }
+ { skip$ }
+ if$
+ bbl.and
+ space.word * t do.name.latex.cmd *
+ }
+ if$
+ }
+ if$
+ }
+ { t do.name.latex.cmd }
+ if$
+ nameptr #1 + 'nameptr :=
+ namesleft #1 - 'namesleft :=
+ }
+ while$
+ cap.status.std
+ } if$
+}
+
+
+
+
+%% Higher Level Formats
+
+%% addresses/locations
+
+FUNCTION {format.address}
+{ address duplicate$ empty$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ cap.status.std
+ }
+ if$
+}
+
+
+
+%% author/editor names
+
+FUNCTION {format.authors}{ author "author" format.names }
+
+FUNCTION {format.editors}
+{ editor "editor" format.names duplicate$ empty$ 'skip$
+ { ", " *
+ get.bbl.editor
+ capitalize
+ *
+ }
+ if$
+}
+
+
+%% eprint
+FUNCTION {format.eprint}
+{ eprint empty$
+ { "" }
+ { " \eprint{" * eprint * "}" }
+ if$
+}
+
+%% doi
+FUNCTION {format.doi}
+{ doi empty$
+ { "" }
+ { " \doi{" * doi * "}" }
+ if$
+}
+
+
+
+%% date
+
+FUNCTION {format.date}
+{
+ month "month" bibinfo.check duplicate$ empty$
+ year "year" bibinfo.check duplicate$ empty$
+ { swap$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ cap.status.std
+ "there's a month but no year in " cite$ * warning$ }
+ if$
+ *
+ }
+ { this.to.prev.status
+ this.status.std
+ cap.status.std
+ swap$ 'skip$
+ {
+ swap$
+ " " * swap$
+ }
+ if$
+ *
+ }
+ if$
+}
+
+FUNCTION {format.date.electronic}
+{ month "month" bibinfo.check duplicate$ empty$
+ year "year" bibinfo.check duplicate$ empty$
+ { swap$
+ { pop$ }
+ { "there's a month but no year in " cite$ * warning$
+ pop$ ")" * "(" swap$ *
+ this.to.prev.status
+ punct.no 'this.status.punct :=
+ space.normal 'this.status.space :=
+ quote.no 'this.status.quote :=
+ cap.yes 'status.cap :=
+ }
+ if$
+ }
+ { swap$
+ { swap$ pop$ ")" * "(" swap$ * }
+ { "(" swap$ * ", " * swap$ * ")" * }
+ if$
+ this.to.prev.status
+ punct.no 'this.status.punct :=
+ space.normal 'this.status.space :=
+ quote.no 'this.status.quote :=
+ cap.yes 'status.cap :=
+ }
+ if$
+}
+
+
+
+%% edition/title
+
+% Note: The IEEE considers the edition to be closely associated with
+% the title of a book. So, in IEEEtran.bst the edition is normally handled
+% within the formatting of the title. The format.edition function is
+% retained here for possible future use.
+FUNCTION {format.edition}
+{ edition duplicate$ empty$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ convert.edition
+ status.cap
+ { "t" }
+ { "l" }
+ if$ change.case$
+ "edition" bibinfo.check
+ "~" * bbl.edition *
+ cap.status.std
+ }
+ if$
+}
+
+% This is used to format the booktitle of a conference proceedings.
+% Here we use the "intype" field to provide the user a way to
+% override the word "in" (e.g., with things like "presented at").
+% Use of intype stops the emphasis of the booktitle to indicate that
+% we no longer mean the written conference proceedings, but the
+% conference itself.
+FUNCTION {format.in.booktitle}
+{ booktitle "booktitle" bibinfo.check duplicate$ empty$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ select.language
+ intype missing$
+ { emphasize
+ bbl.in " " *
+ }
+ { intype " " * }
+ if$
+ swap$ *
+ cap.status.std
+ }
+ if$
+}
+
+% This is used to format the booktitle of a collection.
+% Here the "intype" field is not supported, but "edition" is.
+FUNCTION {format.in.booktitle.edition}
+{ booktitle "booktitle" bibinfo.check duplicate$ empty$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ select.language
+ emphasize
+ edition empty$ 'skip$
+ { ", " *
+ edition
+ convert.edition
+ "l" change.case$
+ * "~" * bbl.edition *
+ }
+ if$
+ bbl.in " " * swap$ *
+ cap.status.std
+ }
+ if$
+}
+
+FUNCTION {format.article.title}
+{ title duplicate$ empty$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ "t" change.case$
+ }
+ if$
+ "title" bibinfo.check
+ duplicate$ empty$ 'skip$
+ { quote.close 'this.status.quote :=
+ is.last.char.not.punct
+ { punct.std 'this.status.punct := }
+ { punct.no 'this.status.punct := }
+ if$
+ select.language
+ "``" swap$ *
+ cap.status.std
+ }
+ if$
+}
+
+FUNCTION {format.article.title.electronic}
+{ title duplicate$ empty$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ cap.status.std
+ "t" change.case$
+ }
+ if$
+ "title" bibinfo.check
+ duplicate$ empty$
+ { skip$ }
+ { select.language }
+ if$
+}
+
+FUNCTION {format.book.title.edition}
+{ title "title" bibinfo.check
+ duplicate$ empty$
+ { "empty title in " cite$ * warning$ }
+ { this.to.prev.status
+ this.status.std
+ select.language
+ emphasize
+ edition empty$ 'skip$
+ { ", " *
+ edition
+ convert.edition
+ status.cap
+ { "t" }
+ { "l" }
+ if$
+ change.case$
+ * "~" * bbl.edition *
+ }
+ if$
+ cap.status.std
+ }
+ if$
+}
+
+FUNCTION {format.book.title}
+{ title "title" bibinfo.check
+ duplicate$ empty$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ cap.status.std
+ select.language
+ emphasize
+ }
+ if$
+}
+
+
+
+%% journal
+
+FUNCTION {format.journal}
+{ journal duplicate$ empty$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ cap.status.std
+ select.language
+ emphasize
+ }
+ if$
+}
+
+
+
+%% how published
+
+FUNCTION {format.howpublished}
+{ howpublished duplicate$ empty$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ cap.status.std
+ }
+ if$
+}
+
+
+
+%% institutions/organization/publishers/school
+
+FUNCTION {format.institution}
+{ institution duplicate$ empty$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ cap.status.std
+ }
+ if$
+}
+
+FUNCTION {format.organization}
+{ organization duplicate$ empty$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ cap.status.std
+ }
+ if$
+}
+
+FUNCTION {format.address.publisher.date}
+{ publisher "publisher" bibinfo.warn format.address.org.or.pub.date }
+
+FUNCTION {format.address.publisher.date.nowarn}
+{ publisher "publisher" bibinfo.check format.address.org.or.pub.date }
+
+FUNCTION {format.address.organization.date}
+{ organization "organization" bibinfo.check format.address.org.or.pub.date }
+
+FUNCTION {format.school}
+{ school duplicate$ empty$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ cap.status.std
+ }
+ if$
+}
+
+
+
+%% volume/number/series/chapter/pages
+
+FUNCTION {format.volume}
+{ volume empty.field.to.null.string
+ duplicate$ empty$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ bbl.volume
+ status.cap
+ { capitalize }
+ { skip$ }
+ if$
+ swap$ tie.or.space.prefix
+ "volume" bibinfo.check
+ * *
+ cap.status.std
+ }
+ if$
+}
+
+FUNCTION {format.number}
+{ number empty.field.to.null.string
+ duplicate$ empty$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ status.cap
+ { bbl.number capitalize }
+ { bbl.number }
+ if$
+ swap$ tie.or.space.prefix
+ "number" bibinfo.check
+ * *
+ cap.status.std
+ }
+ if$
+}
+
+FUNCTION {format.number.if.use.for.article}
+{ is.use.number.for.article
+ { format.number }
+ { "" }
+ if$
+}
+
+% The IEEE does not seem to tie the series so closely with the volume
+% and number as is done in other bibliography styles. Instead the
+% series is treated somewhat like an extension of the title.
+FUNCTION {format.series}
+{ series empty$
+ { "" }
+ { this.to.prev.status
+ this.status.std
+ bbl.series " " *
+ series "series" bibinfo.check *
+ cap.status.std
+ }
+ if$
+}
+
+
+FUNCTION {format.chapter}
+{ chapter empty$
+ { "" }
+ { this.to.prev.status
+ this.status.std
+ type empty$
+ { bbl.chapter }
+ { type "l" change.case$
+ "type" bibinfo.check
+ }
+ if$
+ chapter tie.or.space.prefix
+ "chapter" bibinfo.check
+ * *
+ cap.status.std
+ }
+ if$
+}
+
+
+% The intended use of format.paper is for the paper numbers of
+% inproceedings entries.
+% The paper type can be overridden via the type field.
+% We allow the type to be displayed even if the paper number is absent,
+% for things like "postdeadline paper".
+FUNCTION {format.paper}
+{ is.use.paper
+ { paper empty$
+ { type empty$
+ { "" }
+ { this.to.prev.status
+ this.status.std
+ type "type" bibinfo.check
+ cap.status.std
+ }
+ if$
+ }
+ { this.to.prev.status
+ this.status.std
+ type empty$
+ { bbl.paper }
+ { type "type" bibinfo.check }
+ if$
+ " " * paper
+ "paper" bibinfo.check
+ *
+ cap.status.std
+ }
+ if$
+ }
+ { "" }
+ if$
+}
+
+
+FUNCTION {format.pages}
+{ pages duplicate$ empty$ 'skip$
+ { this.to.prev.status
+ this.status.std
+ duplicate$ is.multiple.pages
+ {
+ bbl.pages swap$
+ n.dashify
+ }
+ {
+ bbl.page swap$
+ }
+ if$
+ tie.or.space.prefix
+ "pages" bibinfo.check
+ * *
+ cap.status.std
+ }
+ if$
+}
+
+
+
+%% technical report number
+
+FUNCTION {format.tech.report.number}
+{ number "number" bibinfo.check
+ this.to.prev.status
+ this.status.std
+ cap.status.std
+ type duplicate$ empty$
+ { pop$
+ bbl.techrep
+ }
+ { skip$ }
+ if$
+ "type" bibinfo.check
+ swap$ duplicate$ empty$
+ { pop$ }
+ { tie.or.space.prefix * * }
+ if$
+}
+
+
+
+%% note
+
+FUNCTION {format.note}
+{ note empty$
+ { "" }
+ { this.to.prev.status
+ this.status.std
+ punct.period 'this.status.punct :=
+ note #1 #1 substring$
+ duplicate$ "{" =
+ { skip$ }
+ { status.cap
+ { "u" }
+ { "l" }
+ if$
+ change.case$
+ }
+ if$
+ note #2 global.max$ substring$ * "note" bibinfo.check
+ cap.yes 'status.cap :=
+ }
+ if$
+}
+
+
+
+%% patent
+
+FUNCTION {format.patent.date}
+{ this.to.prev.status
+ this.status.std
+ year empty$
+ { monthfiled duplicate$ empty$
+ { "monthfiled" bibinfo.check pop$ "" }
+ { "monthfiled" bibinfo.check }
+ if$
+ dayfiled duplicate$ empty$
+ { "dayfiled" bibinfo.check pop$ "" * }
+ { "dayfiled" bibinfo.check
+ monthfiled empty$
+ { "dayfiled without a monthfiled in " cite$ * warning$
+ *
+ }
+ { " " swap$ * * }
+ if$
+ }
+ if$
+ yearfiled empty$
+ { "no year or yearfiled in " cite$ * warning$ }
+ { yearfiled "yearfiled" bibinfo.check
+ swap$
+ duplicate$ empty$
+ { pop$ }
+ { ", " * swap$ * }
+ if$
+ }
+ if$
+ }
+ { month duplicate$ empty$
+ { "month" bibinfo.check pop$ "" }
+ { "month" bibinfo.check }
+ if$
+ day duplicate$ empty$
+ { "day" bibinfo.check pop$ "" * }
+ { "day" bibinfo.check
+ month empty$
+ { "day without a month in " cite$ * warning$
+ *
+ }
+ { " " swap$ * * }
+ if$
+ }
+ if$
+ year "year" bibinfo.check
+ swap$
+ duplicate$ empty$
+ { pop$ }
+ { ", " * swap$ * }
+ if$
+ }
+ if$
+ cap.status.std
+}
+
+FUNCTION {format.patent.nationality.type.number}
+{ this.to.prev.status
+ this.status.std
+ nationality duplicate$ empty$
+ { "nationality" bibinfo.warn pop$ "" }
+ { "nationality" bibinfo.check
+ duplicate$ "l" change.case$ "united states" =
+ { pop$ bbl.patentUS }
+ { skip$ }
+ if$
+ " " *
+ }
+ if$
+ type empty$
+ { bbl.patent "type" bibinfo.check }
+ { type "type" bibinfo.check }
+ if$
+ *
+ number duplicate$ empty$
+ { "number" bibinfo.warn pop$ }
+ { "number" bibinfo.check
+ large.number.separate
+ swap$ " " * swap$ *
+ }
+ if$
+ cap.status.std
+}
+
+
+
+%% standard
+
+FUNCTION {format.organization.institution.standard.type.number}
+{ this.to.prev.status
+ this.status.std
+ organization duplicate$ empty$
+ { pop$
+ institution duplicate$ empty$
+ { "institution" bibinfo.warn }
+ { "institution" bibinfo.warn " " * }
+ if$
+ }
+ { "organization" bibinfo.warn " " * }
+ if$
+ type empty$
+ { bbl.standard "type" bibinfo.check }
+ { type "type" bibinfo.check }
+ if$
+ *
+ number duplicate$ empty$
+ { "number" bibinfo.check pop$ }
+ { "number" bibinfo.check
+ large.number.separate
+ swap$ " " * swap$ *
+ }
+ if$
+ cap.status.std
+}
+
+FUNCTION {format.revision}
+{ revision empty$
+ { "" }
+ { this.to.prev.status
+ this.status.std
+ bbl.revision
+ revision tie.or.space.prefix
+ "revision" bibinfo.check
+ * *
+ cap.status.std
+ }
+ if$
+}
+
+
+%% thesis
+
+FUNCTION {format.master.thesis.type}
+{ this.to.prev.status
+ this.status.std
+ type empty$
+ {
+ bbl.mthesis
+ }
+ {
+ type "type" bibinfo.check
+ }
+ if$
+cap.status.std
+}
+
+FUNCTION {format.phd.thesis.type}
+{ this.to.prev.status
+ this.status.std
+ type empty$
+ {
+ bbl.phdthesis
+ }
+ {
+ type "type" bibinfo.check
+ }
+ if$
+cap.status.std
+}
+
+
+
+%% URL
+
+FUNCTION {format.url}
+{ is.use.url
+ { url empty$
+ { "" }
+ { this.to.prev.status
+ this.status.std
+ cap.yes 'status.cap :=
+ name.url.prefix " " *
+ "\url{" * url * "}" *
+ punct.no 'this.status.punct :=
+ punct.period 'prev.status.punct :=
+ space.normal 'this.status.space :=
+ space.normal 'prev.status.space :=
+ quote.no 'this.status.quote :=
+ }
+ if$
+ }
+ { "" }
+ if$
+}
+
+
+
+
+%%%%%%%%%%%%%%%%%%%%
+%% ENTRY HANDLERS %%
+%%%%%%%%%%%%%%%%%%%%
+
+
+% Note: In many journals, the IEEE (or the authors) tend not to show the number
+% for articles, so the display of the number is controlled here by the
+% switch "is.use.number.for.article"
+FUNCTION {article}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ format.authors "author" output.warn
+ name.or.dash
+ format.article.title "title" output.warn
+ format.journal "journal" bibinfo.check "journal" output.warn
+ format.volume output
+ format.number.if.use.for.article output
+ format.pages output
+ format.eprint output.nocomma
+ format.doi output.nocomma
+ format.date "year" output.warn
+ format.note output
+ format.url output
+ fin.entry
+ if.url.std.interword.spacing
+}
+
+FUNCTION {book}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ author empty$
+ { format.editors "author and editor" output.warn }
+ { format.authors output.nonnull }
+ if$
+ name.or.dash
+ format.book.title.edition output
+ format.series output
+ author empty$
+ { skip$ }
+ { format.editors output }
+ if$
+ format.address.publisher.date output
+ format.volume output
+ format.number output
+ format.note output
+ format.url output
+ fin.entry
+ if.url.std.interword.spacing
+}
+
+FUNCTION {booklet}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ format.authors output
+ name.or.dash
+ format.article.title "title" output.warn
+ format.howpublished "howpublished" bibinfo.check output
+ format.organization "organization" bibinfo.check output
+ format.address "address" bibinfo.check output
+ format.date output
+ format.note output
+ format.url output
+ fin.entry
+ if.url.std.interword.spacing
+}
+
+FUNCTION {electronic}
+{ std.status.using.period
+ start.entry
+ if.url.alt.interword.spacing
+ format.authors output
+ name.or.dash
+ format.date.electronic output
+ format.article.title.electronic output
+ format.howpublished "howpublished" bibinfo.check output
+ format.organization "organization" bibinfo.check output
+ format.address "address" bibinfo.check output
+ format.note output
+ format.url output
+ fin.entry
+ empty.entry.warn
+ if.url.std.interword.spacing
+}
+
+FUNCTION {inbook}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ author empty$
+ { format.editors "author and editor" output.warn }
+ { format.authors output.nonnull }
+ if$
+ name.or.dash
+ format.book.title.edition output
+ format.series output
+ format.address.publisher.date output
+ format.volume output
+ format.number output
+ format.chapter output
+ format.pages output
+ format.note output
+ format.url output
+ fin.entry
+ if.url.std.interword.spacing
+}
+
+FUNCTION {incollection}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ format.authors "author" output.warn
+ name.or.dash
+ format.article.title "title" output.warn
+ format.in.booktitle.edition "booktitle" output.warn
+ format.series output
+ format.editors output
+ format.address.publisher.date.nowarn output
+ format.volume output
+ format.number output
+ format.chapter output
+ format.pages output
+ format.note output
+ format.url output
+ fin.entry
+ if.url.std.interword.spacing
+}
+
+FUNCTION {inproceedings}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ format.authors "author" output.warn
+ name.or.dash
+ format.article.title "title" output.warn
+ format.in.booktitle "booktitle" output.warn
+ format.series output
+ format.editors output
+ format.volume output
+ format.number output
+ publisher empty$
+ { format.address.organization.date output }
+ { format.organization "organization" bibinfo.check output
+ format.address.publisher.date output
+ }
+ if$
+ format.paper output
+ format.pages output
+ format.note output
+ format.url output
+ fin.entry
+ if.url.std.interword.spacing
+}
+
+FUNCTION {manual}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ format.authors output
+ name.or.dash
+ format.book.title.edition "title" output.warn
+ format.howpublished "howpublished" bibinfo.check output
+ format.organization "organization" bibinfo.check output
+ format.address "address" bibinfo.check output
+ format.date output
+ format.note output
+ format.url output
+ fin.entry
+ if.url.std.interword.spacing
+}
+
+FUNCTION {mastersthesis}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ format.authors "author" output.warn
+ name.or.dash
+ format.article.title "title" output.warn
+ format.master.thesis.type output.nonnull
+ format.school "school" bibinfo.warn output
+ format.address "address" bibinfo.check output
+ format.date "year" output.warn
+ format.note output
+ format.url output
+ fin.entry
+ if.url.std.interword.spacing
+}
+
+FUNCTION {misc}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ format.authors output
+ name.or.dash
+ format.article.title output
+ format.howpublished "howpublished" bibinfo.check output
+ format.organization "organization" bibinfo.check output
+ format.address "address" bibinfo.check output
+ format.pages output
+ format.date output
+ format.note output
+ format.url output
+ fin.entry
+ empty.entry.warn
+ if.url.std.interword.spacing
+}
+
+FUNCTION {patent}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ format.authors output
+ name.or.dash
+ format.article.title output
+ format.patent.nationality.type.number output
+ format.patent.date output
+ format.note output
+ format.url output
+ fin.entry
+ empty.entry.warn
+ if.url.std.interword.spacing
+}
+
+FUNCTION {periodical}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ format.editors output
+ name.or.dash
+ format.book.title "title" output.warn
+ format.series output
+ format.volume output
+ format.number output
+ format.organization "organization" bibinfo.check output
+ format.date "year" output.warn
+ format.note output
+ format.url output
+ fin.entry
+ if.url.std.interword.spacing
+}
+
+FUNCTION {phdthesis}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ format.authors "author" output.warn
+ name.or.dash
+ format.article.title "title" output.warn
+ format.phd.thesis.type output.nonnull
+ format.school "school" bibinfo.warn output
+ format.address "address" bibinfo.check output
+ format.date "year" output.warn
+ format.note output
+ format.url output
+ fin.entry
+ if.url.std.interword.spacing
+}
+
+FUNCTION {proceedings}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ format.editors output
+ name.or.dash
+ format.book.title "title" output.warn
+ format.series output
+ format.volume output
+ format.number output
+ publisher empty$
+ { format.address.organization.date output }
+ { format.organization "organization" bibinfo.check output
+ format.address.publisher.date output
+ }
+ if$
+ format.note output
+ format.url output
+ fin.entry
+ if.url.std.interword.spacing
+}
+
+FUNCTION {standard}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ format.authors output
+ name.or.dash
+ format.book.title "title" output.warn
+ format.howpublished "howpublished" bibinfo.check output
+ format.organization.institution.standard.type.number output
+ format.revision output
+ format.date output
+ format.note output
+ format.url output
+ fin.entry
+ if.url.std.interword.spacing
+}
+
+FUNCTION {techreport}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ format.authors "author" output.warn
+ name.or.dash
+ format.article.title "title" output.warn
+ format.howpublished "howpublished" bibinfo.check output
+ format.institution "institution" bibinfo.warn output
+ format.address "address" bibinfo.check output
+ format.tech.report.number output.nonnull
+ format.date "year" output.warn
+ format.note output
+ format.url output
+ fin.entry
+ if.url.std.interword.spacing
+}
+
+FUNCTION {unpublished}
+{ std.status.using.comma
+ start.entry
+ if.url.alt.interword.spacing
+ format.authors "author" output.warn
+ name.or.dash
+ format.article.title "title" output.warn
+ format.date output
+ format.note "note" output.warn
+ format.url output
+ fin.entry
+ if.url.std.interword.spacing
+}
+
+
+% The special entry type which provides the user interface to the
+% BST controls
+FUNCTION {IEEEtranBSTCTL}
+{ is.print.banners.to.terminal
+ { "** IEEEtran BST control entry " quote$ * cite$ * quote$ * " detected." *
+ top$
+ }
+ { skip$ }
+ if$
+ CTLuse_article_number
+ empty$
+ { skip$ }
+ { CTLuse_article_number
+ yes.no.to.int
+ 'is.use.number.for.article :=
+ }
+ if$
+ CTLuse_paper
+ empty$
+ { skip$ }
+ { CTLuse_paper
+ yes.no.to.int
+ 'is.use.paper :=
+ }
+ if$
+ CTLuse_url
+ empty$
+ { skip$ }
+ { CTLuse_url
+ yes.no.to.int
+ 'is.use.url :=
+ }
+ if$
+ CTLuse_forced_etal
+ empty$
+ { skip$ }
+ { CTLuse_forced_etal
+ yes.no.to.int
+ 'is.forced.et.al :=
+ }
+ if$
+ CTLmax_names_forced_etal
+ empty$
+ { skip$ }
+ { CTLmax_names_forced_etal
+ string.to.integer
+ 'max.num.names.before.forced.et.al :=
+ }
+ if$
+ CTLnames_show_etal
+ empty$
+ { skip$ }
+ { CTLnames_show_etal
+ string.to.integer
+ 'num.names.shown.with.forced.et.al :=
+ }
+ if$
+ CTLuse_alt_spacing
+ empty$
+ { skip$ }
+ { CTLuse_alt_spacing
+ yes.no.to.int
+ 'is.use.alt.interword.spacing :=
+ }
+ if$
+ CTLalt_stretch_factor
+ empty$
+ { skip$ }
+ { CTLalt_stretch_factor
+ 'ALTinterwordstretchfactor :=
+ "\renewcommand{\BIBentryALTinterwordstretchfactor}{"
+ ALTinterwordstretchfactor * "}" *
+ write$ newline$
+ }
+ if$
+ CTLdash_repeated_names
+ empty$
+ { skip$ }
+ { CTLdash_repeated_names
+ yes.no.to.int
+ 'is.dash.repeated.names :=
+ }
+ if$
+ CTLname_format_string
+ empty$
+ { skip$ }
+ { CTLname_format_string
+ 'name.format.string :=
+ }
+ if$
+ CTLname_latex_cmd
+ empty$
+ { skip$ }
+ { CTLname_latex_cmd
+ 'name.latex.cmd :=
+ }
+ if$
+ CTLname_url_prefix
+ missing$
+ { skip$ }
+ { CTLname_url_prefix
+ 'name.url.prefix :=
+ }
+ if$
+
+
+ num.names.shown.with.forced.et.al max.num.names.before.forced.et.al >
+ { "CTLnames_show_etal cannot be greater than CTLmax_names_forced_etal in " cite$ * warning$
+ max.num.names.before.forced.et.al 'num.names.shown.with.forced.et.al :=
+ }
+ { skip$ }
+ if$
+}
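+
+% A sketch of how a document can use this interface (following the
+% IEEEtran documentation conventions; the entry key and the values
+% below are only illustrative).  An entry of this type is added to the
+% .bib file:
+%
+%   @IEEEtranBSTCTL{BSTcontrol,
+%     CTLuse_forced_etal       = "yes",
+%     CTLmax_names_forced_etal = "6",
+%     CTLdash_repeated_names   = "no"
+%   }
+%
+% and cited before all other citations (e.g., with IEEEtran.cls's
+% \bstctlcite{BSTcontrol} command), so that these settings override the
+% defaults set in initialize.controls above.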
+
+
+%%%%%%%%%%%%%%%%%%%
+%% ENTRY ALIASES %%
+%%%%%%%%%%%%%%%%%%%
+FUNCTION {conference}{inproceedings}
+FUNCTION {online}{electronic}
+FUNCTION {internet}{electronic}
+FUNCTION {webpage}{electronic}
+FUNCTION {www}{electronic}
+FUNCTION {default.type}{misc}
+
+
+
+%%%%%%%%%%%%%%%%%%
+%% MAIN PROGRAM %%
+%%%%%%%%%%%%%%%%%%
+
+READ
+
+EXECUTE {initialize.controls}
+EXECUTE {initialize.status.constants}
+EXECUTE {banner.message}
+
+EXECUTE {initialize.longest.label}
+ITERATE {longest.label.pass}
+
+EXECUTE {begin.bib}
+ITERATE {call.type$}
+EXECUTE {end.bib}
+
+EXECUTE{completed.message}
+
+
+%% That's all folks, mds.
diff --git a/tex/src/appendix-existing-solutions.tex b/tex/src/appendix-existing-solutions.tex
new file mode 100644
index 0000000..9a9e2a6
--- /dev/null
+++ b/tex/src/appendix-existing-solutions.tex
@@ -0,0 +1,532 @@
+%% Appendix on reviewing existing reproducible workflow solutions. This
+%% file is loaded by the project's 'paper.tex' or 'tex/src/supplement.tex',
+%% it should not be run independently.
+%
+%% Copyright (C) 2020-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
+%% Copyright (C) 2021-2022 Raúl Infante-Sainz <infantesainz@gmail.com>
+%% Copyright (C) 2021-2022 Boudewijn F. Roukema <boud@astro.uni.torun.pl>
+%
+%% This file is free software: you can redistribute it and/or modify it
+%% under the terms of the GNU General Public License as published by the
+%% Free Software Foundation, either version 3 of the License, or (at your
+%% option) any later version.
+%
+%% This file is distributed in the hope that it will be useful, but WITHOUT
+%% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+%% FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+%% for more details. See <http://www.gnu.org/licenses/>.
+
+
+
+
+
+\section{Survey of common existing reproducible workflows}
+\label{appendix:existingsolutions}
+The problem of reproducibility has received considerable attention over the last three decades and various solutions have already been proposed.
+The core principles that many of the existing solutions (including Maneage) aim to achieve are nicely summarized by the FAIR principles\citeappendix{wilkinson16}.
+In this appendix, \emph{some} of the solutions are reviewed.
+We are not only reviewing solutions that can be used today.
+Since the main focus of this paper is longevity, we also spent considerable time finding and inspecting solutions that have been aborted, discontinued, or abandoned.
+
+The solutions are based on an evolving software landscape; they are therefore ordered by date: when a project has a web page, the year of its first release is used for the sorting.
+Otherwise, the publication year of its paper is used.
+For each solution, we summarize its methodology and discuss how it relates to the criteria proposed in this paper.
+Freedom of the software/method is a core concept behind scientific reproducibility, as opposed to industrial reproducibility where a black box is acceptable/desirable.
+Therefore proprietary solutions like Code Ocean\footnote{\inlinecode{\url{https://codeocean.com}}} or Nextjournal\footnote{\inlinecode{\url{https://nextjournal.com}}} will not be reviewed here.
+Other studies have also attempted to review existing reproducible solutions, for example, see Konkol et al.\citeappendix{konkol20}.
+
+We have tried our best to test and read through the documentation of almost all the reviewed solutions to a sufficient level.
+However, due to time constraints, we may inevitably have missed some aspects of these solutions, or incorrectly interpreted their behavior and outputs.
+In such cases, please let us know and we will correct it in the text on the paper's Git repository and publish the updated (postprint) PDF on \href{https://doi.org/10.5281/zenodo.3872247}{zenodo.3872247} (this is the version-independent DOI, which always points to the most recent Zenodo upload).
+
+
+\subsection{Suggested rules, checklists, or criteria}
+Before going into the various implementations, it is useful to review some existing suggested rules, checklists, or criteria for computationally reproducible research.
+
+Sandve et al.\citeappendix{sandve13} propose ``ten simple rules for reproducible computational research'' that can be applied in any project.
+Generally, these are very similar to the criteria proposed here and follow a similar spirit, but they do not provide any actual research paper that applies all of those points, nor do they provide a proof of concept.
+The Popper convention\citeappendix{jimenez17} also provides a set of principles that are indeed generally useful, some of which are common to the criteria here (for example, automatic validation, and, as in Maneage, the authors suggest providing a template for new users).
+However, the authors do not include completeness as a criterion, nor do they pay attention to longevity: Popper has already changed its core workflow language once, and it is written in Python with many fast-evolving dependencies (see Appendix \ref{appendix:highlevelinworkflow}).
+For more on Popper, please see Section \ref{appendix:popper}.
+
+For improved reproducibility of Jupyter notebooks, Rule et al.\citeappendix{rule19} propose ten rules and also provide links to example implementations.
+These can be very useful for users of Jupyter but are not generic for non-Jupyter-based computational projects.
+Some criteria (which are indeed very good in a more general context) do not directly relate to reproducibility, for example their Rule 1: ``Tell a Story for an Audience''.
+Generally, as reviewed in
+\ifdefined\separatesupplement%
+the main body of this paper (section on the longevity of existing tools)%
+\else%
+Section \ref{sec:longevityofexisting}%
+\fi
+and Section \ref{appendix:jupyter} (below), Jupyter itself has many issues regarding reproducibility.
+To create Docker images, N\"ust et al.\citeappendix{nust20} propose ``ten simple rules''.
+They recommend practices that can indeed help increase the quality of Docker images and their production/usage, such as their rule 7 to ``mount datasets [only] at run time'' to separate the computational environment from the data.
+However, these authors do not include the long-term reproducibility of the images as a criterion.
+For example, they recommend using base operating systems, with version identification limited to a single brief identifier such as \inlinecode{ubuntu:18.04}, which raises serious longevity issues
+\ifdefined\separatesupplement%
+(as discussed in the longevity of existing tools section of the main paper)%
+\else%
+(Section \ref{sec:longevityofexisting})%
+\fi.
+Furthermore, in their proof-of-concept Dockerfile (listing 1), \inlinecode{rocker} is used with a tag (not a digest), which can be problematic due to the high risk of ambiguity (as discussed in Section \ref{appendix:containers}).
+
+Previous criteria are thus primarily targeted at immediate reproducibility and do not consider longevity.
+They therefore lack a strong/clear completeness criterion (they mainly only suggest, rather than require, the recording of versions), and their ultimate suggestion of storing the full binary OS in a binary VM or container is problematic (as mentioned in Appendix \ref{appendix:independentenvironment} and in Oliveira et al.\citeappendix{oliveira18}).
+
+
+
+
+\subsection{Reproducible Electronic Documents, RED (1992)}
+\label{appendix:red}
+RED\footnote{\inlinecode{\url{http://sep.stanford.edu/doku.php?id=sep:research:reproducible}}} is the first attempt\cite{claerbout1992,schwab2000} that we could find at doing reproducible research.
+It was developed within the Stanford Exploration Project (SEP) for Geophysics publications.
+Their introduction to the importance of reproducibility resonates strongly with today's environment in the computational sciences.
+In particular, the authors highlight the heavy investment one has to make in order to redo another scientist's work, even within the same team.
+RED also influenced other early reproducible works, for example Buckheit \& Donoho\citeappendix{buckheit1995}.
+
+To orchestrate the various figures/results of a project, from 1990 they used ``Cake''\citeappendix{somogyi87}, a dialect of Make (for more on Make, see Appendix \ref{appendix:jobmanagement}).
+As described in Schwab et al.\cite{schwab2000}, in the latter half of that decade, they moved to GNU Make, which was much more commonly used, better maintained, and came with a complete and up-to-date manual.
+The basic idea behind RED's solution was to organize the analysis as independent steps, including the generation of plots, and organizing the steps through a Makefile.
+This enabled all the results to be re-executed with a single command.
+Several basic low-level Makefiles were included in the high-level/central Makefile.
+The reader/user of a project had to manually edit the central Makefile and set the variable \inlinecode{RESDIR} (result directory), the directory where built files are kept.
+The reader could later select which figures/parts of the project to reproduce by manually adding their names to the central Makefile, and running Make.
+
+At the time, Make was already used by individual researchers and projects as a job orchestration tool, but SEP's innovation was to standardize it as an internal policy, and define conventions for the Makefiles to be consistent across projects.
+This enabled new members to benefit from the already existing work of previous team members (who had graduated or moved to other jobs).
+However, RED only used the existing software of the host system, with no means to control that software.
+Therefore, with wider adoption, they confronted a ``versioning problem'': the analysis software had different versions on different hosts, producing different results or crashing\citeappendix{fomel09}.
+Hence, in 2006, SEP moved to a new Python-based framework called Madagascar; see Appendix \ref{appendix:madagascar}.
+
+
+
+
+
+\subsection{Taverna (2003)}
+\label{appendix:taverna}
+Taverna\footnote{\inlinecode{\url{https://github.com/taverna}}}\citeappendix{oinn04} was a workflow management system written in Java with a graphical user interface.
+In 2014 it was sponsored by the Apache Incubator project and renamed ``Apache Taverna'', but its developers \href{https://lists.apache.org/thread.html/r559e0dd047103414fbf48a6ce1bac2e17e67504c546300f2751c067c\%40\%3Cdev.taverna.apache.org\%3E}{voted} to \emph{retire} it in 2020 because development had come to a standstill (as of April 2021, the latest public GitHub commit was from 2016).
+
+In Taverna, a workflow is defined as a directed graph, where nodes are called ``processors''.
+Each processor transforms a set of inputs into a set of outputs, and processors are defined in the Scufl language (an XML-based language in which each step is an atomic task).
+Other components of the workflow are ``Data links'' and ``Coordination constraints''.
+The main user interface is graphical, where users move processors in the given space and define links between their inputs and outputs (manually constructing a lineage, as in the
+\ifdefined\separatesupplement
+lineage figure of the main paper).
+\else
+Figure \ref{fig:datalineage}).
+\fi
+Taverna is only a workflow manager and is not integrated with a package manager; hence, the versions of the software used can differ between runs.
+Zhao et al.\citeappendix{zhao12} studied the problem of workflow decay in Taverna.
+
+
+
+
+
+\subsection{Madagascar (2003)}
+\label{appendix:madagascar}
+Madagascar\footnote{\inlinecode{\url{http://ahay.org}}}\citeappendix{fomel13} is a set of extensions to the SCons job management tool (reviewed in \ref{appendix:scons}).
+Madagascar is a continuation of the Reproducible Electronic Documents (RED) project that was discussed in Appendix \ref{appendix:red}.
+Madagascar has been used in the production of hundreds of research papers or book chapters\footnote{\inlinecode{\url{http://www.ahay.org/wiki/Reproducible_Documents}}}, 120 prior to Fomel et al.\citeappendix{fomel13}.
+
+Madagascar does include project management tools in the form of SCons extensions.
+However, it is not just a reproducible project management tool.
+The Regularly Sampled File (RSF) file format\footnote{\inlinecode{\url{http://www.ahay.org/wiki/Guide\_to\_RSF\_file\_format}}} is a custom plain-text file that points to the location of the actual data files on the file system and acts as the intermediary between Madagascar's analysis programs.
+Therefore, Madagascar is primarily a collection of analysis programs and tools to interact with RSF files and plotting facilities.
+For example, in our test of Madagascar 3.0.1, it installed 855 Madagascar-specific analysis programs (\inlinecode{PREFIX/bin/sf*}).
+The analysis programs mostly target geophysical data analysis, including various project-specific tools: more than half of the total built tools are under the \inlinecode{build/user} directory, which includes the names of Madagascar users.
+
+Besides the location or contents of the data, RSF files also contain name/value pairs that can be used as options to Madagascar programs, which are built with inputs and outputs of this format.
+Since RSF files also carry the program options, the inputs and outputs of Madagascar's analysis programs are read from, and written to, standard input and standard output.
+
+In terms of completeness, as long as the user only uses Madagascar's own analysis programs, it is fairly complete at a high level (though not at the lower level of OS libraries).
+However, this comes at the expense of a large amount of bloatware (programs that one project may never need, but is forced to build), thus adding complexity.
+Also, the link between the analysis programs (as used by a certain user at a certain time) and future versions of those programs (which are updated over time) is not immediately obvious.
+Furthermore, the blending of the workflow component with the low-level analysis components fails the modularity criterion.
+
+
+
+
+
+\subsection{GenePattern (2004)}
+\label{appendix:genepattern}
+GenePattern\footnote{\inlinecode{\url{https://www.genepattern.org}}}\citeappendix{reich06} (first released in 2004) is client-server software containing many common analysis functions/modules, primarily focused on gene studies.
+Although it is highly specialized to one research field, it is reviewed here because its concepts/methods are generic.
+
+Its server-side software is installed with fixed software packages that are wrapped into GenePattern modules.
+The modules are used through a web interface; the modern implementation is GenePattern Notebook\citeappendix{reich17}.
+It is an extension of the Jupyter notebook (see Appendix \ref{appendix:editors}), which also has a special ``GenePattern'' cell that will connect to GenePattern servers for doing the analysis.
+However, the wrapper modules just call an existing tool on the running system.
+Given that each server may have its own set of installed software, the analysis may differ (or crash) when run on different GenePattern servers, hampering reproducibility.
+
+%% GenePattern shutdown announcement (although as of November 2020, it does not open any more): https://www.genepattern.org/blog/2019/10/01/the-genomespace-project-is-ending-on-november-15-2019
+The primary GenePattern server had been active since 2008, with 40,000 registered users and 2000 to 5000 jobs running every week\citeappendix{reich17}.
+However, it was shut down on November 15th, 2019 due to the end of funding.
+All processing on this server has stopped, and any archived data on it has been deleted.
+Since GenePattern is free software, there are alternative public servers to use, so work on it will hopefully continue.
+However, funding is limited and those servers may face similar funding problems.
+
+This is a clear example of the fragility of solutions that depend on centrally archiving and running research codes together with high-level research products (including data and binary/compiled codes, which are expensive to keep in one place).
+The data and software may have backups elsewhere, but the high-level project-specific workflows that researchers spent most of their time on have been lost to the deletion (unless they were backed up privately by the authors!).
+
+
+
+
+
+\subsection{Kepler (2005)}
+Kepler\footnote{\inlinecode{\url{https://kepler-project.org}}}\citeappendix{ludascher05} is a Java-based workflow management tool with a graphical user interface.
+Users drag-and-drop analysis components, called ``actors'', into a visual, directional graph, which is the workflow (similar to
+\ifdefined\separatesupplement
+the lineage figure shown in the main paper).
+\else
+Figure \ref{fig:datalineage}).
+\fi
+Each actor is connected to others through Ptolemy II\footnote{\inlinecode{\url{https://ptolemy.berkeley.edu}}}\citeappendix{eker03}.
+In many aspects, the usage of Kepler and its issues for long-term reproducibility are similar to those of Taverna (see Appendix \ref{appendix:taverna}).
+
+
+
+
+
+\subsection{VisTrails (2005)}
+\label{appendix:vistrails}
+VisTrails\footnote{\inlinecode{\url{https://www.vistrails.org}}}\citeappendix{bavoil05} was a graphical workflow management system.
+According to its web page, VisTrails maintenance stopped in May 2016; its last Git commit, as of this writing, was in November 2017.
+Nevertheless, having been well maintained for over 10 years is itself an achievement.
+
+VisTrails (or ``visualization trails'') was initially designed for managing visualizations, but later grew into a generic workflow system with meta-data and provenance features.
+Each analysis step, or module, is recorded in an XML schema, which defines the operations and their dependencies.
+The XML attributes of each module can be used in any XML query language to find certain steps (for example those that used a certain command).
+Since the main goal was visualization (as images), its primary output was apparently in the form of image spreadsheets.
+Its design is based on a change-based provenance model using a custom VisTrails provenance query language (vtPQL), for more see Scheidegger et al.\citeappendix{scheidegger08}.
+As the user inspects the data and makes changes to the analysis, the changes are recorded as ``trails'' in the project's VisTrails repository, which operates much like common version control systems (see Appendix \ref{appendix:versioncontrol}).
+However, even though XML is a plain-text format, it is very hard to read/edit without the VisTrails software (which is no longer maintained).
+VisTrails, therefore, provides a graphic user interface with a visual representation of the project's inter-dependent steps (similar to
+\ifdefined\separatesupplement
+the data lineage figure of the main paper).
+\else
+Figure \ref{fig:datalineage}).
+\fi
+Besides the fact that it is no longer maintained, VisTrails did not control the software that was run; it only controlled the sequence of steps in which the software was run.
+
+
+
+
+
+\subsection{Galaxy (2010)}
+\label{appendix:galaxy}
+Galaxy\footnote{\inlinecode{\url{https://galaxyproject.org}}} is a web-based genomics workbench\citeappendix{goecks10}.
+The main user interface is the ``Galaxy Pages'', which does not require any programming: users graphically manipulate abstract ``tools'' which are wrappers over command-line programs.
+Therefore the actual running version of the program can be hard to control across different Galaxy servers.
+Besides the automatically generated metadata of a project (which includes its version-controlled history), users can also tag/annotate each analysis step, describing its intent/purpose.
+Apart from some small differences, Galaxy is very similar to GenePattern (Appendix \ref{appendix:genepattern}), so most of the points made there apply here too; for example, the very large cost of maintaining such a system, its reliance on a graphical environment, and the blending of hand-written code with (large) automatically generated files.
+
+
+
+
+
+\subsection{Image Processing On Line journal, IPOL (2010)}
+\label{appendix:ipol}
+The IPOL journal\footnote{\inlinecode{\url{https://www.ipol.im}}}\citeappendix{limare11} (first published article in July 2010) publishes papers on image processing algorithms as well as the full code of the proposed algorithm.
+An IPOL paper is a traditional research paper, but with a focus on implementation.
+The published narrative description of the algorithm must be so detailed that any specialist can implement it in their own programming language.
+The authors' own implementation of the algorithm is also published with the paper (in C, C++, MATLAB/Octave, or more recently Python); the code may only have a very limited set of external dependencies (with pre-defined versions), must be well commented, and each part of it must be linked with the relevant part of the paper.
+The authors must also submit several example datasets that show the applicability of their proposed algorithm.
+The referee is expected to inspect the code and narrative, confirming that they match with each other, and with the stated conclusions of the published paper.
+After publication, each paper also has a ``demo'' button on its web page, allowing readers to try the algorithm on a web-interface and even provide their own input.
+
+IPOL has grown steadily over the last 10 years, publishing 23 research articles in 2019.
+We encourage the reader to visit its web page and see some of its recent papers and their demos.
+The reason it can be so thorough and complete is its very narrow scope (low-level image processing algorithms), where the published algorithms are highly atomic, not needing significant dependencies (beyond input/output of well-known formats), allowing the referees and readers to go deeply into each implemented algorithm.
+However, many data-intensive projects involve dozens of high-level dependencies, with large and complex data formats and analysis; so while IPOL is modular (each published algorithm is a single module doing a very specific thing), this solution is not scalable.
+
+Furthermore, by not publishing/archiving each paper's version controlled history or directly linking the analysis and produced paper, it fails criteria 6 and 7.
+Note that on the web page, it is possible to change parameters, but that will not affect the produced PDF.
+A paper written in Maneage (the proof-of-concept solution presented in this paper) could be scrutinized at a similar detailed level to IPOL, but for much more complex research scenarios, involving hundreds of dependencies and complex processing of the data.
+
+
+
+
+
+\subsection{WINGS (2010)}
+\label{appendix:wings}
+WINGS\footnote{\inlinecode{\url{https://wings-workflows.org}}}\citeappendix{gil10} is an automatic workflow generation algorithm.
+It runs on a centralized web server, requiring many dependencies (such that it is recommended to download Docker images).
+It allows users to define various workflow components (for example, datasets and analysis components) together with high-level goals.
+It then uses selection and rejection algorithms to find the best components using a pool of analysis components that can satisfy the requested high-level constraints.
+%\tonote{Read more about this}
+
+
+
+
+
+\subsection{Active Papers (2011)}
+\label{appendix:activepapers}
+Active Papers\footnote{\inlinecode{\url{http://www.activepapers.org}}} attempts to package the code and data of a project into one file (in HDF5 format).
+It was initially written in Java because compiled Java bytecode is portable to any machine with a Java virtual machine (JVM)\citeappendix{hinsen11}.
+However, Java is no longer a commonly used platform, so Active Papers was later re-implemented in Python\citeappendix{hinsen15}.
+Dependence on high-level platforms (Java or Python) is therefore a fundamental issue.
+
+In the Python version, all processing steps and input data (or references to them) are stored in an HDF5 file.
+%However, it can only account for pure-Python packages using the host operating system's Python modules \tonote{confirm this!}.
+When a Python module contains a component written in another language (mostly C or C++), it must be treated as an external dependency of the Active Paper.
+
+As mentioned in Hinsen\citeappendix{hinsen15}, the fact that it relies on HDF5 is a caveat of Active Papers, because many tools are necessary to merely open it.
+Downloading the pre-built ``HDF View'' binaries (a GUI browser of HDF5 files that is provided by the HDF group) is not possible anonymously/automatically: as of January 2021 login is required\footnote{\inlinecode{\url{https://www.hdfgroup.org/downloads/hdfview}}} (this was not the case when Active Papers moved to HDF5).
+% From K. Hinsen in a private email to M. Akhlaghi: This is true today, but wasn't when I started ActivePapers. Otherwise I'd never have built on HDF5.
+In our trials, installing HDF View with the Debian or Arch Linux package managers also failed due to dependency problems.
+Furthermore, like most high-level tools, the HDF5 library evolves very fast: on its webpage (from April 2021), it says ``Applications that were created with earlier HDF5 releases may not compile with 1.12 by default''.
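+
+As a minimal hedged illustration (the file name is a placeholder), even just listing the contents of an HDF5 file on the command line requires tools shipped with the HDF5 library:
+\begin{verbatim}
+# 'h5ls' is distributed with the HDF5 library; without it (or a
+# similar HDF5-aware tool) the file's contents cannot be inspected.
+h5ls -r paper.h5    # recursively list the groups/datasets
+\end{verbatim}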
+
+While data and code are indeed fundamentally similar concepts technically\citeappendix{hinsen16}, they are used by humans differently.
+The hand-written code of a large project involving terabytes of data can be just 100 kilobytes.
+When the two are bundled together in one remote file, merely seeing one line of the code requires downloading terabytes of volume that are not needed; this was also acknowledged in Hinsen\citeappendix{hinsen15}.
+It may also happen that the data are proprietary (for example medical patient data).
+In such cases, the data must not be publicly released, but the methods that were applied to them can.
+
+Furthermore, since all reading and writing is currently done in the HDF5 file, it can easily bloat the file to very large sizes due to temporary files.
+These files can later be removed as part of the analysis, but this makes the code more complicated and hard to read/maintain.
+For example, the Active Papers HDF5 file of \citeappendix[in \href{https://doi.org/10.5281/zenodo.2549987}{zenodo.2549987}]{kneller19} is 1.8 gigabytes.
+This is not a fundamental feature of the approach, but rather an effect of the initial implementation; future improvements are possible.
+
+
+
+
+\subsection{Collage Authoring Environment (2011)}
+\label{appendix:collage}
+The Collage Authoring Environment\citeappendix{nowakowski11} was the winner of the Elsevier Executable Paper Grand Challenge\citeappendix{gabriel11}.
+It is based on the GridSpace2\footnote{\inlinecode{\url{http://dice.cyfronet.pl}}} distributed computing environment, which has a web-based graphic user interface.
+Through this interface, viewers of a paper can actively experiment with the parameters of its displayed outputs (for example, figures).
+In their Figure 3, the authors nicely visualize how the ``Executable Paper'' of Collage operates through two servers and a computing backend.
+
+Unfortunately, no webpage is provided in the paper for following up on the work or finding its current status.
+A web search only pointed us to its main paper\citeappendix{nowakowski11}.
+In the paper, the authors do not discuss the major issue of software versioning and its verification to ensure that future updates to the backend do not affect the result; the system apparently just assumes that the software exists on the ``Computing backend''.
+Since we could not access or test it, from the descriptions in the paper it seems to be very similar to the modern-day Jupyter notebook concept (see Appendix \ref{appendix:jupyter}), which had not yet been created in its current form in 2011.
+So we expect similar longevity issues with Collage.
+
+
+\subsection{SHARE (2011)}
+\label{appendix:SHARE}
+SHARE\footnote{\inlinecode{\url{https://is.ieis.tue.nl/staff/pvgorp/share}}}\citeappendix{vangorp11} is a web portal that hosts virtual machines (VMs) for storing the environment of a research project.
+SHARE was awarded second prize in the Elsevier Executable Paper Grand Challenge\citeappendix{gabriel11}.
+Simply put, SHARE was just a VM library that users could download or connect to, and run.
+The limitations of VMs for reproducibility were discussed in Appendix \ref{appendix:virtualmachines}, and the SHARE system does not specify any requirements or standards on making the VM itself reproducible, or enforcing common internals for its supported projects.
+As of January 2021, the top SHARE web page still works.
+However, upon selecting any operation, a notice is printed that ``SHARE is offline'' since 2019, with no reason mentioned.
+
+
+
+
+
+\subsection{Verifiable Computational Result, VCR (2011)}
+\label{appendix:verifiableidentifier}
+A ``verifiable computational result''\footnote{\inlinecode{\url{http://vcr.stanford.edu}}} is an output (table, figure, etc.) that is associated with a ``verifiable result identifier'' (VRI)\citeappendix{gavish11}.
+It was awarded the third prize in the Elsevier Executable Paper Grand Challenge\citeappendix{gabriel11}.
+
+A VRI is a hash created from tags within the program source that produced the output, which also records the source's version-control history.
+This enables the exact identification and citation of results.
+The VRIs are automatically generated web URLs that link to public VCR repositories containing the data, inputs, and scripts, which may be re-executed.
+According to Gavish \& Donoho\citeappendix{gavish11}, the VRI generation routine has been implemented in MATLAB, R, and Python, although only the MATLAB version was available on the webpage in January 2021.
+VCR also has special \LaTeX{} macros for loading the respective VRI into the generated PDF.
+In effect this is very similar to what we have done at the end of the caption of
+\ifdefined\separatesupplement
+the first figure in the main body of the paper,
+\else
+Figure \ref{fig:datalineage},
+\fi
+where you can click on the given Zenodo link and be taken to the raw data that created the plot.
+However, instead of a long and hard-to-read hash, we point to the plotted file's source as a Zenodo DOI (which has long-term funding for longevity).
+
+Unfortunately, most parts of the web page are not complete as of January 2021.
+The VCR web page contains an example PDF\footnote{\inlinecode{\url{http://vcr.stanford.edu/paper.pdf}}} that is generated with this system, but the linked VCR repository\footnote{\inlinecode{\url{http://vcr-stat.stanford.edu}}} did not exist (again, as of January 2021).
+Finally, the date of the files in the MATLAB extension tarball is set to May 2011, hinting that VCR was probably abandoned soon after the publication of Gavish \& Donoho\citeappendix{gavish11}.
+
+
+
+
+
+\subsection{SOLE (2012)}
+\label{appendix:sole}
+SOLE (Science Object Linking and Embedding) defines ``science objects'' (SOs) that can be manually linked with phrases of the published paper\citeappendix{pham12,malik13}.
+An SO is any code/content that is wrapped in begin/end tags with an associated type and name.
+For example, special commented lines in a Python, R, or C program.
+The SOLE command-line program parses the tagged file, generating metadata elements unique to the SO (including its URI).
+SOLE also supports workflows as Galaxy tools\citeappendix{goecks10}.
+
+For reproducibility, Pham et al. \citeappendix{pham12} suggest building a SOLE-based project in a virtual machine, using any custom package manager that is hosted on a private server to obtain a usable URI.
+However, as described in Appendices \ref{appendix:independentenvironment} and \ref{appendix:packagemanagement}, unless virtual machines are built with robust package managers, this is not a sustainable solution (the virtual machine itself is not reproducible).
+Also, hosting a large virtual machine server with a fixed IP address on a hosting service like Amazon (as suggested there), for every project and in perpetuity, would be very expensive.
+
+The manual/artificial definition of tags to connect parts of the paper with the analysis scripts is also a caveat due to human error and incompleteness (the authors may not consider tags as important things, but they may be useful later).
+In Maneage, instead of using artificial/commented tags, the analysis inputs and outputs are automatically linked into the paper's text through \LaTeX{} macros that are the backbone of the whole system (they are not artificial/extra features).
+
+
+
+
+
+\subsection{Sumatra (2012)}
+Sumatra\footnote{\inlinecode{\url{http://neuralensemble.org/sumatra}}}\citeappendix{davison12} attempts to capture the environment information of a running project.
+It is written in Python and is a command-line wrapper over the analysis script.
+By controlling a project at running-time, Sumatra is able to capture the environment it was run in.
+The captured environment can be viewed in plain text or a web interface.
+Sumatra also provides \LaTeX/Sphinx features, which will link the paper with the project's Sumatra database.
+This enables researchers to use a fixed version of a project's figures in the paper, even at later times (while the project is being developed).
+
+The actual code that Sumatra wraps around must itself be under version control, and it will not run if there are uncommitted changes (although it is not clear what happens if a commit is amended).
+Since information on the environment has been captured, Sumatra is able to identify if it has changed since a previous run of the project.
+Sumatra thus makes no attempt at storing the analysis environment itself, as Sciunit does (see Appendix \ref{appendix:sciunit}); it only stores information \emph{about} that environment.
+It also needs to know the language of the running program, so it is not generic.
+In short, it captures the environment but does not store \emph{how} that environment was built.
+
+
+
+
+
+\subsection{Research Object (2013)}
+\label{appendix:researchobject}
+The Research Object\footnote{\inlinecode{\url{http://www.researchobject.org}}} is a collection of metadata ontologies for describing aggregations of resources, or workflows\citeappendix{bechhofer13,belhajjame15}.
+It thus provides resources to link various workflow/analysis components (see Appendix \ref{appendix:existingtools}) into a final workflow.
+
+Bechhofer et al.\citeappendix{bechhofer13} describe how a workflow in Taverna (Appendix \ref{appendix:taverna}) can be translated into research objects.
+Importantly, the research object concept is not specific to any particular workflow system: it is just a metadata bundle/standard, which is only as robust in reproducing the result as the underlying workflow.
+Therefore if implemented over a complete workflow like Maneage, it can be very useful in analysing/optimizing the workflow, finding common components between many Maneage'd workflows, or translating to other complete workflows.
+
+
+
+
+
+\subsection{Sciunit (2015)}
+\label{appendix:sciunit}
+Sciunit\footnote{\inlinecode{\url{https://sciunit.run}}}\citeappendix{meng15} defines ``sciunits'' that keep the executed commands of an analysis, together with all the necessary programs and libraries used in those commands.
+It automatically parses all the executable files in the script and copies them, and their dependency libraries (down to the C library), into the sciunit.
+Because the sciunit contains all the programs and necessary libraries, it is possible to run it readily on other systems that have a similar CPU architecture.
+Sciunit was originally written in Python 2 (which reached its end of life on January 1st, 2020); Sciunit2 is a new implementation in Python 3.
+
+The main issue with Sciunit's approach is that the copied binaries are just black boxes: it is not possible to see how the used binaries from the initial system were built.
+This is a major problem for scientific projects, both in principle (not knowing how the programs were built) and in practice (archiving a large-volume sciunit for every step of the analysis requires a lot of storage space and archival cost).
+
+
+
+
+
+\subsection{Umbrella (2015)}
+Umbrella\citeappendix{meng15b} is a high-level wrapper script for isolating the environment of the analysis.
+The user specifies the necessary operating system and the packages needed for the analysis steps in various JSON files.
+Umbrella will then study the host operating system and the various necessary inputs (including data and software), through a process similar to the sciunits mentioned above, to find the best environment isolator (for example, Linux containers or VMs).
+We could not find a URL to the source software of Umbrella (no source code repository is mentioned in the papers we reviewed above), but from the descriptions\citeappendix{meng17}, it is written in Python 2.6 (which is now deprecated).
+
+
+
+
+
+\subsection{ReproZip (2016)}
+ReproZip\footnote{\inlinecode{\url{https://www.reprozip.org}}}\citeappendix{chirigati16} is a Python package that is designed to automatically track all the necessary data files, libraries, and environment variables of a process into a single bundle.
+The tracking is done at the kernel system-call level, so any file that is accessed during the running of the project is identified.
+The tracked files can be packaged into a \inlinecode{.rpz} bundle that can then be unpacked into another system.
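+
+As a hedged sketch of typical usage (the command and subcommand names follow the ReproZip documentation as we understand it; the script name is a placeholder):
+\begin{verbatim}
+# Record every file/library touched by the analysis (via syscalls),
+# package the trace into an .rpz bundle, and unpack it elsewhere.
+reprozip trace ./analysis.sh
+reprozip pack analysis.rpz
+reprounzip directory setup analysis.rpz ./unpacked
+\end{verbatim}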
+
+ReproZip is therefore very good for storing a ``snapshot'' of the running environment, at a single moment, into a single file.
+However, the bundle can become very large when many/large datasets are involved, or if the software environment is complex (many dependencies).
+Furthermore, since the binary software libraries are directly copied, it can only be re-run on systems with a compatible CPU architecture.
+Another problem is that ReproZip copies all files used in a project, without (by default) a way of knowing how the software was built (its provenance).
+
+As mentioned in this paper, and also Oliveira et al. \citeappendix{oliveira18}, the question of ``how'' the environment was built is critical to understanding the results; having only the binaries is not useful in many contexts.
+It is possible to include the build instructions of the software used within the project to be ReproZip'd, but this risks bloating the bundle with the many temporary files that are created during the build of the software, adding complexity and slowing down the project's running time.
+
+Similarly for the data: it is not possible to extract which data server the files came from.
+Hence two projects that each use a 1-terabyte dataset will need a full copy of that same 1-terabyte file in their bundle, making long-term preservation extremely expensive.
+Such files can be excluded from the bundle through modifications in the configuration file.
+However, this will add complexity: a higher-level script will be necessary with the ReproZip bundle, to make sure that the data and bundle are used together, or to check the integrity of the data (in case they have changed).
+
+Finally, because it is only a snapshot of one moment in a project's history, preserving the connection between the ReproZip'd bundles of various points in a project's history is likely to be difficult (for example, when software or data are updated, or when analysis methods are modified).
+In other words, a ReproZip user will have to personally define an archival method to preserve the various black boxes of the project as it evolves, and tracking what has changed between the versions is not trivial.
+
+
+
+
+
+\subsection{Binder (2017)}
+Binder\footnote{\inlinecode{\url{https://mybinder.org}}} is used to containerize already existing Jupyter-based processing steps.
+Users simply add a set of Binder-recognized configuration files to their repository, and Binder will build a Docker image and install all the dependencies inside it with Conda (the list of necessary packages is taken from those configuration files).
+One good feature of Binder is that the imported Docker image must be tagged, although as mentioned in Appendix \ref{appendix:containers}, tags do not ensure reproducibility.
+However, Binder does not ensure that the Dockerfile used by the imported Docker image follows the same convention, so users can simply use generic operating system names.
+Binder is used by Jones et al.\citeappendix{jones19}.
+
+
+
+
+
+\subsection{Gigantum (2017)}
+%% I took the date from their PyPI page, where the first version 0.1 was published in November 2016.
+Gigantum\footnote{\inlinecode{\url{https://gigantum.com}}} is a client/server system, in which the client is a web-based (graphical) interface that is installed as ``Gigantum Desktop'' within a Docker image.
+Gigantum uses Docker containers for an independent environment, Conda (or Pip) to install packages, Jupyter notebooks to edit and run code, and Git to store its history.
+The reproducibility issues with these tools have been thoroughly discussed in Appendix \ref{appendix:existingtools}.
+
+Simply put, it is a high-level wrapper for combining these components.
+Internally, a Gigantum project is organized as files in a directory that can also be opened without the Gigantum client.
+The file structure (which is under version control) includes codes, input data, and output data.
+As acknowledged on its own web page, this greatly reduces the speed of Git operations and of transmitting or archiving the project.
+Therefore, limits are imposed on the dataset/code sizes.
+However, one directory can be used to store files that must not be tracked.
+
+
+
+
+
+\subsection{Popper (2017)}
+\label{appendix:popper}
+Popper\footnote{\inlinecode{\url{https://getpopper.io}}} is a software implementation of the Popper Convention\citeappendix{jimenez17}.
+The Popper team's own solution is through a command-line program called \inlinecode{popper}.
+The \inlinecode{popper} program itself is written in Python.
+Job management was initially based on the HashiCorp configuration language (HCL), because HCL was used by ``GitHub Actions'' to manage workflows at that time.
+However, in October 2019 GitHub switched to a custom YAML-based language, so Popper also deprecated HCL.
+This is an important issue when low-level choices are based on service providers (see Appendix \ref{appendix:highlevelinworkflow}).
+
+To start a project, the \inlinecode{popper} command-line program builds a template, or ``scaffold'', which is a minimal set of files that can be run.
+By default, Popper runs in a Docker image (so root permissions are necessary; the reproducibility issues with Docker images were discussed above), but Singularity is also supported.
+See Appendix \ref{appendix:independentenvironment} for more on containers, and Appendix \ref{appendix:highlevelinworkflow} for using high-level languages in the workflow.
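+
+As a heavily hedged sketch (the \inlinecode{scaffold} subcommand is named in the text above; the \inlinecode{run} invocation and workflow file name follow the Popper documentation as we understand it, and may differ between Popper versions):
+\begin{verbatim}
+popper scaffold          # generate a minimal example workflow
+popper run -f wf.yml     # execute its steps in a container engine
+\end{verbatim}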
+
+Popper does not comply with the completeness, minimal complexity, and including-the-narrative criteria.
+Moreover, the scaffold that is provided by Popper is an output of the program that is not directly under version control.
+Hence, tracking future low-level changes in Popper and how they relate to the high-level projects that depend on it through the scaffold will be very hard.
+In Maneage, users start their projects by branching off the core \inlinecode{maneage} git branch.
+Hence any future change in the low level features will be directly propagated to all derived projects (and will appear prominently as Git conflicts if the user has customized them).
+
+
+
+
+
+\subsection{Whole Tale (2017)}
+\label{appendix:wholetale}
+Whole Tale\footnote{\inlinecode{\url{https://wholetale.org}}} is a web-based platform for managing a project and organizing data provenance\citeappendix{brinckman17}.
+It uses online editors like Jupyter or RStudio (see Appendix \ref{appendix:editors}) that are encapsulated in a Docker container (see Appendix \ref{appendix:independentenvironment}).
+
+The web-based nature of Whole Tale's approach and its dependency on many tools (which have many dependencies themselves) is a major limitation for future reproducibility.
+For example, when following their own tutorial on ``Creating a new tale'', the provided Jupyter notebook could not be executed because of a dependency problem.
+This was reported to the authors as issue 113\footnote{\inlinecode{\url{https://github.com/whole-tale/wt-design-docs/issues/113}}} and fixed.
+But as all the second-order dependencies evolve, it is not hard to envisage such dependency incompatibilities being the primary issue for older projects on Whole Tale.
+Furthermore, the fact that a Tale is stored as a binary Docker container causes two important problems:
+1) it requires a very large storage capacity for every project that is hosted there, making it very expensive to scale if demand expands.
+2) it is not possible to accurately see how the environment was built (when the Dockerfile uses operating-system package managers like \inlinecode{apt}).
+This issue with Whole Tale (and generally all other solutions that only rely on preserving a container/VM) was also mentioned in Oliveira et al.\citeappendix{oliveira18}, for more on this, please see Appendix \ref{appendix:packagemanagement}.
+
+
+
+
+
+\subsection{Occam (2018)}
+\label{appendix:occam}
+Occam\footnote{\inlinecode{\url{https://occam.cs.pitt.edu}}}\citeappendix{oliveira18} is a web-based application to preserve software and its execution.
+To achieve long-term reproducibility, Occam includes its own package manager (instructions to build software and its dependencies) in order to be in full control of the software build instructions, similarly to Maneage.
+Besides Nix and Guix (which are primarily package managers that can also do job management), Occam is the only solution in our survey that attempts to be complete in this respect.
+
+However, it is incomplete from the perspective of requirements: it works within a Docker image (which requires root permissions) and currently only runs on Debian-based, Red Hat-based, and Arch-based GNU/Linux operating systems, which respectively use the \inlinecode{apt}, \inlinecode{yum}, or \inlinecode{pacman} package managers.
+It is also itself written in Python (version 3.4 or above).
+
+Furthermore, it does not satisfy the minimal complexity criterion, because the instructions to build the software packages and their versions are not immediately viewable or modifiable by the user.
+Occam contains its own JSON database that must be parsed with Occam's own custom program.
+The analysis phase of Occam is through a drag-and-drop interface (similar to Taverna, Appendix \ref{appendix:taverna}), which is provided as a web-based graphic user interface.
+All the connections between the various phases of the analysis need to be pre-defined in a JSON file and manually linked in the GUI.
+Hence, for complex data analysis operations that involve thousands of steps, this is not scalable.
diff --git a/tex/src/appendix-existing-tools.tex b/tex/src/appendix-existing-tools.tex
new file mode 100644
index 0000000..dcedd78
--- /dev/null
+++ b/tex/src/appendix-existing-tools.tex
@@ -0,0 +1,661 @@
+%% Appendix on reviewing existing low-level tools that are used in
+%% high-level reproducible workflow solutions. This file is loaded by the
+%% project's 'paper.tex' or 'tex/src/supplement.tex', it should not be run
+%% independently.
+%
+%% Copyright (C) 2020-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
+%% Copyright (C) 2021-2022 Raúl Infante-Sainz <infantesainz@gmail.com>
+%% Copyright (C) 2021-2022 Boudewijn F. Roukema <boud@astro.uni.torun.pl>
+%
+%% This file is free software: you can redistribute it and/or modify it
+%% under the terms of the GNU General Public License as published by the
+%% Free Software Foundation, either version 3 of the License, or (at your
+%% option) any later version.
+%
+%% This file is distributed in the hope that it will be useful, but WITHOUT
+%% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+%% FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+%% for more details. See <http://www.gnu.org/licenses/>.
+
+
+
+
+
+\section{Survey of existing tools for various phases}
+\label{appendix:existingtools}
+Data analysis workflows (including those that aim for reproducibility) are commonly high-level frameworks that employ various lower-level components.
+To help in reviewing existing reproducible workflow solutions in light of the proposed criteria in Appendix \ref{appendix:existingsolutions}, we first need to survey the most commonly employed lower-level tools.
+
+
+
+
+
+\subsection{Independent environment}
+\label{appendix:independentenvironment}
+The lowest-level challenge for any reproducible solution is to control, to the desired level, the differences between various run-time environments: for example, different hardware, operating systems, and versions of existing dependencies.
+Therefore, any reasonable attempt at providing a reproducible workflow starts with isolating its running environment from the host environment.
+Three general technologies are used for this purpose and reviewed below:
+1) Virtual machines,
+2) Containers,
+3) Independent build in the host's file system.
+
+\subsubsection{Virtual machines}
+\label{appendix:virtualmachines}
+Virtual machines (VMs) host a binary copy of a full operating system that can be run on other operating systems.
+This includes the lowest-level operating system component or the kernel.
+VMs thus provide the ultimate control one can have over the run-time environment of the analysis.
+However, the VM's kernel does not talk directly to the hardware that is running the analysis; it talks to a simulated hardware layer provided by the host's kernel.
+Therefore, a process run inside a virtual machine can be much slower than one run on a native kernel.
+An advantage of VMs is that they are a single file that can be copied from one computer to another, keeping the full environment within them if the format is recognized.
+VMs are used by cloud service providers, enabling fully independent operating systems on their large servers where the customer can have root access.
+
+VMs were used in solutions like SHARE\citeappendix{vangorp11} (which was awarded second prize in the Elsevier Executable Paper Grand Challenge of 2011\citeappendix{gabriel11}), or in some suggested reproducible papers\citeappendix{dolfi14}.
+However, due to their very large size, these are expensive to maintain, thus leading SHARE to discontinue its services in 2019.
+The URL to the VM file \texttt{provenance\_machine.ova} that is mentioned in Dolfi et al.\citeappendix{dolfi14} is also not currently accessible (we suspect that this is due to size and archival costs).
+
+\subsubsection{Containers}
+\label{appendix:containers}
+Containers also host a binary copy of a running environment but do not have their own kernel.
+Through a thin layer of low-level system libraries, programs running within a container talk directly with the host operating system kernel.
+Otherwise, containers have their own independent software for everything else.
+Therefore, they have much less overhead in hardware/CPU access.
+As with VMs, users choose an operating system for the container's independent environment (most commonly a GNU/Linux distribution, which is free software).
+
+We review some of the most common container solutions: Docker, Singularity, and Podman.
+
+\begin{itemize}
+\item {\bf\small Docker containers:} Docker is one of the most popular tools nowadays for keeping an independent analysis environment.
+  It is primarily driven by software developers' need to reproduce a previous environment, where they mostly have root access on the ``cloud'' (which is usually a remote VM).
+ A Docker container is composed of independent Docker ``images'' that are built with a \inlinecode{Dockerfile}.
+ It is possible to precisely version/tag the images that are imported (to avoid downloading the latest/different version in a future build).
+  To have a reproducible Docker image, it must be ensured that all the imported Docker images pin their dependency tags, down to the initial image that contains the C library.
+
+ An important drawback of Docker for high-performance scientific needs is that it runs as a daemon (a program that is always running in the background) with root permissions.
+ This is a major security flaw that discourages many high-performance computing (HPC) facilities from providing it.
+
+\item {\bf\small Singularity:} Singularity\citeappendix{kurtzer17} is a single-image container (unlike Docker, which is composed of modular/independent images).
+ Although it needs root permissions to be installed on the system (once), it does not require root permissions every time it is run.
+ Its main program is also not a daemon, but a normal program that can be stopped.
+ These features make it much safer for HPC administrators to install compared to Docker.
+ However, the fact that it requires root access for the initial install is still a hindrance for a typical project: if Singularity is not already present on the HPC, the user's science project cannot be run by a non-root user.
+
+\item {\bf\small Podman:} Podman uses the Linux kernel containerization features to enable containers without a daemon, and without root permissions.
+  It has a command-line interface very similar to Docker's, but only works on GNU/Linux operating systems (see the short sketch just after this list).
+\end{itemize}
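+
+As a minimal hedged illustration (the image name is only an example), the rootless Podman invocation below mirrors the equivalent Docker command:
+\begin{verbatim}
+# Run an interactive Debian 11 container without a root daemon;
+# with Docker, the same line would start with 'docker' instead.
+podman run --rm -it debian:11 /bin/bash
+\end{verbatim}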
+
+Generally, VMs or containers are good solutions for reproducibly running/repeating an analysis in the short term (a couple of years).
+However, their focus is to store the already-built (binary, non-human readable) software environment.
+Because of this, they will be large (many Gigabytes) and expensive to archive, download, or access.
+Recall the two examples above for VMs in Appendix \ref{appendix:virtualmachines}; the same holds for Docker images, as is clear from Docker Hub's recent decision to move to a consumption-based payment model.
+Meng \& Thain\citeappendix{meng17} also give similar reasons on why Docker images were not suitable in their trials.
+
+On a more fundamental level, VMs or containers do not store \emph{how} the core environment was built.
+This information is usually in a third-party repository, and not necessarily inside the container or VM file, making it hard (if not impossible) to track for future users.
+This is a major problem in relation to the proposed completeness criteria and is also highlighted as an issue in terms of long term reproducibility by Oliveira et al.\citeappendix{oliveira18}.
+
+The example \inlinecode{Dockerfile} of Mesnard \& Barba\cite{mesnard20} was previously mentioned in
+\ifdefined\separatesupplement
+the main body of this paper, when discussing the criteria.
+\else
+Section \ref{criteria}.
+\fi
+Another useful example is the \inlinecode{Dockerfile}\footnote{\inlinecode{\href{https://github.com/benmarwick/1989-excavation-report-Madjedbebe/blob/master/Dockerfile}{https://github.com/benmarwick/1989-excavation-report-}\\\href{https://github.com/benmarwick/1989-excavation-report-Madjedbebe/blob/master/Dockerfile}{Madjedbebe/blob/master/Dockerfile}}} of Clarkson et al.\citeappendix{clarkso15} (published in June 2015) which starts with \inlinecode{FROM rocker/verse:3.3.2}.
+When we tried to build it (November 2020), we noticed that the core downloaded image (\inlinecode{rocker/verse:3.3.2}, with image ``digest'' \inlinecode{sha256:c136fb0dbab...}) was created in October 2018 (long after the publication of that paper).
+In principle, it is possible to investigate the difference between this new image and the old one that the authors used, but that would require a lot of effort and may not be possible when the changes are not available in a third public repository or not under version control.
+In Docker, it is possible to retrieve the precise Docker image with its digest, for example, \inlinecode{FROM ubuntu:16.04@sha256:XXXXXXX} (where \inlinecode{XXXXXXX} is the digest, uniquely identifying the core image to be used), but we have not seen this often done in existing examples of ``reproducible'' \inlinecode{Dockerfiles}.
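+
+As a hedged illustration (assuming the image has already been pulled locally), the digest of an image can be queried and then recorded in the \inlinecode{FROM image@sha256:...} syntax above:
+\begin{verbatim}
+# Print the repository digest of a local image; the output can be
+# copied into a Dockerfile FROM line to pin the base image exactly.
+docker inspect --format='{{index .RepoDigests 0}}' ubuntu:16.04
+\end{verbatim}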
+
+The ``digest'' is specific to Docker repositories.
+A more generic/long-term approach to ensure identical core OS components at a later time is to construct the containers or VMs with fixed/archived versions of the operating system ISO files.
+ISO files are pre-built binary files, hundreds of megabytes in volume, that do not contain their build instructions.
+For example, the archives of Debian\footnote{\inlinecode{\url{https://cdimage.debian.org/mirror/cdimage/archive/}}} or Ubuntu\footnote{\inlinecode{\url{http://old-releases.ubuntu.com/releases}}} provide older ISO files.
+
+The concept of containers (and the independent images that build them) can also be extended beyond just the software environment.
+For example, Lofstead et al.\citeappendix{lofstead19} propose a ``data pallet'' concept to containerize access to data and thus allow tracing data back to the application that produced them.
+
+In summary, containers or VMs are just a built product themselves.
+If they are built properly (for example building a Maneage'd project inside a Docker container), they can be useful for immediate usage and fast-moving of the project from one system to another.
+With a robust building, the container or VM can also be exactly reproduced later.
+However, attempting to archive the actual binary container or VM files as a black box (not knowing the precise versions of the software in them, and \emph{how} they were built) is expensive, and will not be able to answer the most fundamental questions.
+
+\subsubsection{Independent build in host's file system}
+\label{appendix:independentbuild}
+The virtual machine and container solutions mentioned above have their own independent file system.
+Another approach to an isolated analysis environment is to use the host's own file system, but to install the project's software in a non-standard, project-specific directory that does not interfere with the host.
+Because the environment in this approach can be built in any custom location on the host, this solution generally does not require root permissions or extra low-level layers like containers or VMs.
+However, ``moving'' the built product of such solutions from one computer to another is generally not as trivial as it is with containers or VMs.
+Examples of such third-party package managers (that are detached from the host OS's package manager) include (but are not limited to) Nix, GNU Guix, Python's Virtualenv package, and Conda.
+Because they are highly intertwined with the way software is built and installed, third-party package managers are described in more detail in Appendix \ref{appendix:packagemanagement}.
+
+Maneage (the solution proposed in this paper) also follows a similar approach of building and installing its own software environment within the host's file system, but without depending on it beyond the kernel.
+However, unlike the third-party package managers mentioned above, Maneage'd software management is not detached from the specific research/analysis project: the instructions to build the full isolated software environment are maintained with the high-level analysis steps of the project, and with the narrative paper/report of the project.
+This is fundamental to achieve the completeness criterion.
+
+
+
+
+
+\subsection{Package management}
+\label{appendix:packagemanagement}
+Package management is the process of automating the build and installation of a software environment.
+A package manager thus contains the following information on each software package, so that its build can be run automatically: the URL of the software's tarball, the other software that it may depend on, and how to configure and build it.
+Package managers can be tied to specific operating systems at a very low level (like \inlinecode{apt} in Debian-based OSs).
+Alternatively, there are third-party package managers that can be installed on many OSs.
+Both are discussed in more detail below.
+
+Package managers are the second component in any workflow that relies on containers or VMs for an independent environment, and the starting point in others that use the host's file system (as discussed above in Section \ref{appendix:independentenvironment}).
+In this section, some common package managers are reviewed, in particular those that are most used by the reviewed reproducibility solutions of Appendix \ref{appendix:existingsolutions}.
+For a more comprehensive list of existing package managers, see Wikipedia\footnote{\inlinecode{\href{https://en.wikipedia.org/wiki/List\_of\_software\_package\_management\_systems}{https://en.wikipedia.org/wiki/List\_of\_software\_package\_}\\\href{https://en.wikipedia.org/wiki/List\_of\_software\_package\_management\_systems}{management\_systems}}}.
+Note that we are not including package managers that are specific to one language, for example \inlinecode{pip} (for Python) or \inlinecode{tlmgr} (for \LaTeX).
+
+\subsubsection{Operating system's package manager}
+The most commonly used package managers are those of the host operating system, for example, \inlinecode{apt}, \inlinecode{yum} or \inlinecode{pkg} which are respectively used in Debian-based, Red Hat-based and FreeBSD-based OSs (among many other OSs).
+
+These package managers are tightly intertwined with the operating system: they also include the building and updating of the core kernel and the C library.
+Because they are part of the OS, they also commonly require root permissions.
+Also, it is usually only possible to have one version/configuration of a software package at any moment; downgrading a version for one project may conflict with other projects, or even cause problems in the OS itself.
+Hence, if two projects need different versions of a software package, it is not possible to work on both at the same time in the same OS.
+
+When a container or virtual machine (see Appendix \ref{appendix:independentenvironment}) is used for each project, it is common for projects to use the containerized operating system's package manager.
+However, it is important to remember that operating system package managers are not static: software is updated on their servers.
+Hence, simply running \inlinecode{apt install gcc} will install different versions of the GNU Compiler Collection (GCC), depending on the OS version and on when the command is run.
+Requesting a special version of that special software does not fully address the problem because the package managers also download and install its dependencies.
+Hence a fixed version of the dependencies must also be specified.
+
+In robust package managers like Debian's \inlinecode{apt} it is possible to fully control (and later reproduce) the built environment of a high-level software.
+Debian also archives all packaged high-level software in its Snapshot\footnote{\inlinecode{\url{https://snapshot.debian.org/}}} service since 2005 which can be used to build the higher-level software environment on an older OS\citeappendix{aissi20}.
+Therefore it is indeed theoretically possible to reproduce the software environment only using archived operating systems and their own package managers, but unfortunately, we have not seen it practiced in (reproducible) scientific papers/projects.
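+
+As a hedged sketch (assuming a Debian-based OS with root access; the snapshot date and package version below are purely illustrative), the archive can be pinned to a dated snapshot so that the same package versions are installed at a later time:
+\begin{verbatim}
+# Point apt at a fixed, dated snapshot of the Debian archive
+# ('check-valid-until=no' accepts the old archive signatures).
+echo "deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20200101T000000Z buster main" \
+     > /etc/apt/sources.list.d/snapshot.list
+apt-get update
+apt-get install gcc=4:8.3.0-1    # request an explicit version
+\end{verbatim}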
+
+In summary, the host OS package managers are primarily meant for the low-level operating system components.
+Hence, many robust reproducible analysis workflows (reviewed in Appendix \ref{appendix:existingsolutions}) do not use the host's package manager, but an independent package manager, like the ones discussed below.
+
+\subsubsection{Blind packaging of already built software}
+An already-built program contains links to the system libraries it uses.
+Therefore, one way of packaging software is to inspect the binary file for the libraries it uses and to bundle them into one file with the executable, so that the same set of dependencies moves around with the desired software on different systems.
+Tools like AppImage\footnote{\inlinecode{\url{https://appimage.org}}}, Flatpak\footnote{\inlinecode{\url{https://flatpak.org}}} or Snap\footnote{\inlinecode{\url{https://snapcraft.io}}} are designed for this purpose: the software's binary product and all its dependencies (not including the core C library) are packaged into one file.
+This makes it very easy to move that single software's built product and already built dependencies to different systems.
+However, because the C library is not included, it can fail on newer/older systems (depending on the system it was built on).
+We call this method ``blind'' packaging because it is agnostic to \emph{how} the software and its dependencies were built (which is important in a scientific context).
+Moreover, these types of packagers are designed for the Linux kernel (using its containerization and unique mounting features).
+They can therefore only be run on GNU/Linux operating systems.
+
+\subsubsection{Nix or GNU Guix}
+\label{appendix:nixguix}
+Nix\footnote{\inlinecode{\url{https://nixos.org}}}\citeappendix{dolstra04} and GNU Guix\footnote{\inlinecode{\url{https://guix.gnu.org}}}\citeappendix{courtes15} are independent package managers that can be installed and used on GNU/Linux operating systems, and macOS (only for Nix, prior to macOS Catalina).
+Both also have a fully functioning operating system based on their packages: NixOS and ``Guix System''.
+GNU Guix is based on the same principles as Nix but is implemented differently, so we focus the review here on Nix.
+
+The Nix approach to package management is unique in that it exactly tracks all of a package's dependencies and allows multiple versions of a software package to coexist; for more details see Dolstra et al.\citeappendix{dolstra04}.
+In summary, a unique hash is created from all the components that go into the building of the package (including the instructions on how to build the software).
+That hash is then prefixed to the software's installation directory.
+As an example from Dolstra et al.\citeappendix{dolstra04}: if a certain build of GNU C Library 2.3.2 has a hash of \inlinecode{8d013ea878d0}, then it is installed under \inlinecode{/nix/store/8d013ea878d0-glibc-2.3.2} and all software that is compiled with it (and thus need it to run) will link to this unique address.
+This allows for multiple versions of the software to co-exist on the system, while keeping an accurate dependency tree.
+
+As mentioned in Court{\'e}s \& Wurmus\citeappendix{courtes15}, one major caveat with using these package managers is that they require a daemon with root privileges (failing our completeness criterion).
+This is necessary ``to use the Linux kernel container facilities that allow it to isolate build processes and maximize build reproducibility''.
+This is because the focus of Nix or Guix is to create bitwise reproducible software binaries, which is necessary from the security and development perspectives.
+However, in a non-computer-science analysis (for example natural sciences), the main aim is reproducible \emph{results} that can also be created with the same software version that may not be bitwise identical (for example when they are installed in other locations, because the installation location is hard-coded in the software binary or for a different CPU architecture).
+
+Finally, while Guix and Nix do allow precisely reproducible environments, the inherent detachment from the high-level computational project (that uses the environment) requires extra effort to keep track of the changes in dependencies as the project evolves.
+For example, if users simply run \inlinecode{guix install gcc} (the most common way to install new software), the most recent version of GCC will be installed.
+This will differ between dates and between systems, with no record of previous runs.
+It is therefore up to the user to store the used Guix commit in their high level computation and ensure ``Reproducing a reproducible computation''\footnote{A guide/tutorial on storing the Guix environment:\\\inlinecode{\url{https://guix.gnu.org/en/blog/2020/reproducible-computations-with-guix}}}.
+Similar to the Docker digest codes mentioned in Appendix \ref{appendix:containers}, many may not know about, forget, or ignore it.
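+
+As a hedged sketch following the guide cited in the footnote above (the package name is only an example), the exact Guix commit can be stored with the project and replayed later:
+\begin{verbatim}
+# Save the exact commit(s) of the Guix in use ...
+guix describe -f channels > channels.scm
+# ... and later install with exactly that Guix:
+guix time-machine -C channels.scm -- install gcc-toolchain
+\end{verbatim}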
+
+Generally, this is a common issue with relying on detached (third party) package managers for building a high-level computational project's software (including other tools mentioned below).
+We solved this problem in Maneage by including the low-level package manager and the high-level computation in a single project with a single version-controlled history: it is simply not possible to forget to record the exact versions of the software used (or how they change as the project evolves).
+
+\subsubsection{Conda/Anaconda}
+\label{appendix:conda}
+Conda is an independent package manager that can be used on GNU/Linux, macOS, or Windows operating systems, although not all software packages are available for all operating systems.
+Conda is able to maintain an approximately independent environment on an operating system without requiring root access.
+
+Conda tracks the dependencies of a package/environment through a YAML formatted file, where the necessary software and their acceptable versions are listed.
+However, it is not possible to fix the versions of the dependencies through the YAML files alone.
+This is thoroughly discussed under issue 787 (in May 2019) of \inlinecode{conda-forge}\footnote{\inlinecode{\url{https://github.com/conda-forge/conda-forge.github.io/issues/787}}}.
+In that Github discussion, the authors of Uhse et al.\citeappendix{uhse19} report that the half-life of their environment (defined in a YAML file) is 3 months, and that at least one of their dependencies breaks shortly after this period.
+The main reply they got in the discussion is to build the Conda environment in a container, which is also the suggested solution by Gr\"uning et al.\citeappendix{gruning18}.
+However, as described in Appendix \ref{appendix:independentenvironment}, containers just hide the reproducibility problem, they do not fix it: containers are not static and need to evolve (i.e., get re-built) with the project.
+Given these limitations, Uhse et al.\citeappendix{uhse19} are forced to host their conda-packaged software as tarballs on a separate repository.
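+
+As a hedged illustration (the environment below is ours and purely illustrative), even a YAML file that pins its top-level packages leaves the transitive dependencies to be resolved at creation time:
+\begin{verbatim}
+# Write a minimal Conda environment file and build it; versions of
+# the *unlisted* transitive dependencies are chosen at run time.
+cat > environment.yml <<'EOF'
+name: myproject
+dependencies:
+  - python=3.7
+  - numpy=1.18
+EOF
+conda env create -f environment.yml
+\end{verbatim}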
+
+Conda is installed with a shell script that contains a binary blob (over 500 megabytes, embedded in the shell script).
+This is the first major issue with Conda: from the shell script, it is not clear what is in this binary blob and what it does.
+After installing Conda in any location, users can easily activate that environment by loading a special shell script.
+However, the resulting environment is not fully independent of the host operating system as described below:
+
+\begin{itemize}
+\item The Conda installation directory is present at the start of environment variables like \inlinecode{PATH} (which is used to find programs to run) and other such environment variables.
+  However, the host operating system's directories are also appended afterward.
+  Therefore, a user or script may not notice that the software being used actually comes from the operating system, and not from the controlled Conda installation (see the sketch after this list).
+
+\item Generally, by default, Conda relies heavily on the operating system and does not include core commands like \inlinecode{mkdir} (to make a directory), \inlinecode{ls} (to list files) or \inlinecode{cp} (to copy).
+  Although a minimal functionality is defined for them in POSIX, and they generally behave similarly for basic operations on different Unix-like operating systems, they do have their differences.
+  For example, GNU extensions such as \inlinecode{mkdir --parents} (the long-option form of \inlinecode{-p}) are only available in the \inlinecode{mkdir} of GNU Coreutils (the default on GNU/Linux systems and installable in almost all Unix-like OSs).
+  Running such a command within a Conda environment that does not include GNU Coreutils on macOS would crash.
+  Important packages like GNU Coreutils are available in channels like conda-forge, but they are not the default.
+  Many users may not recognize this, and failing to account for it will cause unexpected crashes when the project is run on a new system.
+
+\item Many major Conda packaging ``channels'' (for example, the core Anaconda channel or the very popular conda-forge channel) do not include the C library that a package was built with as a dependency.
+  They rely on the host operating system's C library.
+  C is the core language of modern operating systems; even higher-level languages like Python or R are written in it and need it to run.
+  Therefore, if the host operating system's C library is different from the C library that a package was built with, a Conda-packaged program will crash and the project will not be executable.
+  Theoretically, it is possible to define a new Conda ``channel'' that includes the C library as a dependency of its software packages, but it would take any individual team far too much time to implement all of their necessary packages in it, up to their high-level science software.
+
+\item Conda does allow a package to depend on a special build of its prerequisites (specified by a checksum, fixing its version and the version of its dependencies).
+  However, this is rarely practiced in the main Git repositories of channels like Anaconda and conda-forge: only the names of the high-level prerequisite packages are listed in a package's \inlinecode{meta.yaml} file, which is version-controlled.
+  Therefore, two builds of the package from the same Git repository will result in different tarballs (depending on what prerequisites were present at build time).
+  In Conda's downloaded tarball (which contains the built binaries and is not under version control), the exact versions of most build-time dependencies are listed.
+  However, because the different software packages of a project may have been built at different times, if they depend on different versions of a single package, there will be a conflict: the tarball cannot be rebuilt, or the project cannot be run.
+\end{itemize}
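+
+As a minimal sketch of the first two points above, the following shell commands check where a command actually comes from inside an activated Conda environment (\inlinecode{paper-env} is a hypothetical environment name that does not include GNU Coreutils):
+\begin{verbatim}
+conda activate paper-env
+command -v python   # from the Conda environment's 'bin/'
+command -v mkdir    # likely '/bin/mkdir': the HOST's mkdir,
+                    # not one controlled by Conda
+echo "$PATH"        # Conda's directory is prepended, but the
+                    # host's directories still follow it
+\end{verbatim}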
+
+As reviewed above, Conda's low-level dependence on the host operating system's components and on build-time conditions is the primary reason that it is very fast to install (thus making it an attractive tool for software developers who just need to reproduce a bug in a few minutes).
+However, these same factors are major caveats in a scientific scenario, where long-term archivability, readability, and usability are important.
+
+\subsubsection{Spack}
+Spack\citeappendix{gamblin15} is a package manager that is also influenced by Nix (similar to GNU Guix).
+But unlike Nix or GNU Guix, it does not aim for full bitwise reproducibility, and it can be installed without root access in any generic location.
+It relies on the host operating system for the C library.
+
+Spack is fully written in Python, where each software package is an instance of a class, which defines how it should be downloaded, configured, built, and installed.
+Therefore, if the proper version of Python is not present, Spack cannot be used, and when incompatibilities arise in future versions of Python (similar to how Python 3 is not compatible with Python 2), the software building recipes, or the whole system, have to be upgraded.
+Because of such bootstrapping problems (for example how Spack needs Python to build Python and other software), it is generally a good practice to use simpler, lower-level languages/systems for a low-level operation like package management.
+
+In conclusion, two common issues hinder the usage of generic package managers for high-level scientific projects:
+
+\begin{itemize}
+\item {\bf\small Pre-compiled/binary downloads:} Most package managers primarily download the software in a binary (pre-compiled) format.
+  This allows users to download and run them almost instantaneously.
+  However, to provide for this, servers need to keep binary files for each build of the software on different operating systems (for example, Conda needs to keep binaries for Windows, macOS, and GNU/Linux).
+  They also need to store a binary for each build variant, including builds against different versions of its dependencies.
+  Maintaining such a large binary library is expensive; therefore, once the shelf-life of a binary has expired, it will be removed, causing problems for projects that depend on it.
+
+\item {\bf\small Adding high-level software:} Packaging new software is not trivial and needs a good level of knowledge/experience with that package manager.
+ For example, each one has its own special syntax/standards/languages, with pre-defined variables that must already be known before someone can package new software for them.
+ However, in many research projects, the most high-level analysis software is written by the team that is doing the research, and they are its primary/only users, even when the software is distributed with free licenses on open repositories.
+
+  Although active package manager communities are commonly very supportive in helping to package new software, many teams may not be able to make the extra effort and time investment to package their most high-level (i.e., most relevant) software in it.
+ As a result, they manually install their high-level software in an uncontrolled, or non-standard way, thus jeopardizing the reproducibility of the whole work.
+ This is another consequence of the detachment of the package manager from the project doing the analysis.
+\end{itemize}
+
+Addressing these issues has been the basic reason behind Maneage: based on the completeness criterion, instructions to download and build the packages are included within the actual science project, and no special/new syntax/language is used.
+Software download, build, and installation are done with the same language/syntax with which researchers manage their research: the shell (by default, GNU Bash in Maneage) for low-level steps and Make (by default, GNU Make in Maneage) for job management.
+
+
+
+
+
+\subsection{Version control}
+\label{appendix:versioncontrol}
+A scientific project is not written in a day; it usually takes more than a year.
+During this time, the project evolves significantly from its starting point, and components are added or updated constantly as it approaches completion.
+Given the complexity of modern computational projects, it is not trivial to manually track this evolution and the evolution's effect on the final output: files produced in one stage of the project can mistakenly be used by an evolved analysis environment in later stages.
+
+Furthermore, scientific projects do not progress linearly: earlier stages of the analysis are often modified after later stages are written.
+This is a natural consequence of the scientific method, in which progress is defined by experimentation and the modification of hypotheses (based on results from earlier phases).
+
+It is thus very important for the integrity of a scientific project that the state/version of its processing be recorded as the project evolves, for example as better methods are found or more data arrive.
+Any intermediate dataset that is produced should also be tagged with the version of the project at the time it was created.
+In this way, later processing stages can confirm that the dataset can safely be used, i.e., that no change has been made in its processing steps (see the sketch below).
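+
+As a minimal sketch of such tagging (the file names are hypothetical), the project's Git version can be embedded in the name of every intermediate dataset:
+\begin{verbatim}
+# Get a human-readable identifier of the current commit
+# ('-dirty' is appended when there are uncommitted changes):
+version=$(git describe --dirty --always)
+
+# Embed it in the name of the produced dataset:
+cp out/table.txt out/table-$version.txt
+\end{verbatim}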
+
+Solutions to keep track of a project's history have existed since the early days of software engineering in the 1970s, and they have constantly improved over the last decades.
+Today, the distributed model of ``version control'' is the most common, where the full history of the project is stored locally on different systems and can easily be integrated.
+There are many existing version control solutions, for example, CVS, SVN, Mercurial, GNU Bazaar, or GNU Arch.
+However, currently, Git is by far the most commonly used in individual projects, such that Software Heritage\citeappendix{dicosmo18} (an archival system aiming for the long-term preservation of software) is also modeled on Git.
+Git is also the foundation upon which this paper's proof of concept (Maneage) is built.
+Hence we will just review Git here, but the general concept of version control is the same in all implementations.
+
+\subsubsection{Git}
+With Git, changes in a project's contents are accurately identified by comparing them with their previous version in the archived Git repository.
+When the user decides the changes are significant compared to the archived state, they can ``commit'' the changes into the history/repository.
+The commit involves copying the changed files into the repository and calculating a 40-character checksum/hash from the files, an accompanying ``message'' (a narrative description of the purpose/goals of the changes), and the previous commit (thus creating a ``chain'' of commits that are strongly connected to each other, as in
+\ifdefined\separatesupplement
+the figure on Git in the main body of the paper).
+\else
+Figure \ref{fig:branching}).
+\fi
+For example \inlinecode{f4953cc\-f1ca8a\-33616ad\-602ddf\-4cd189\-c2eff97b} is a commit identifier in the Git history of this project.
+Through the content-based storage concept, similar hash structures can be used to identify data\citeappendix{hinsen20}.
+Git commits are commonly summarized by the checksum's first few characters, for example, \inlinecode{f4953cc} of the example above.
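+
+As a minimal sketch of this cycle (the file name and commit message are hypothetical, and the printed checksum is illustrative), a change is staged, committed, and then summarized:
+\begin{verbatim}
+git add analysis.sh     # Stage the changed file.
+git commit -m "Clip outliers before fitting"
+git log --oneline -1    # Prints something like:
+                        #   f4953cc Clip outliers ...
+\end{verbatim}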
+
+With Git, making parallel ``branches'' (in the project's history) is very easy and its distributed nature greatly helps in the parallel development of a project by a team.
+The team can host the Git history on a web page and collaborate through that.
+There are several Git hosting services, for example, \href{https://codeberg.org}{codeberg.org}, \href{https://notabug.org}{notabug.org}, \href{https://gitlab.com}{gitlab.com}, \href{https://bitbucket.com}{bitbucket.com} or \href{https://github.com}{github.com} (among many others).
+Storing changes in binary files is also possible in Git; however, it is most useful for human-readable, plain-text sources.
+
+
+
+
+
+
+
+
+\subsection{Archiving}
+\label{appendix:archiving}
+
+Long-term, bytewise, checksummed archiving of software research projects is necessary for a project to be reproducible by a broad community (in both time and space).
+Generally, archival includes either binary or plain-text source code files.
+In some cases, specific tools have their own archival systems, such as Docker Hub\footnote{\inlinecode{\url{https://hub.docker.com}}} for Docker containers (that were discussed above in Appendix \ref{appendix:containers}, so they are not reviewed here).
+We will focus on generic archival tools in this section.
+
+The Wayback Machine (part of the Internet Archive)\footnote{\inlinecode{\url{https://archive.org}}} and similar services such as Archive Today\footnote{\inlinecode{\url{https://archive.today}}} provide on-demand long-term archiving of web pages, which is a critically important service for preserving the history of the World Wide Web.
+However, because these services are heavily tailored to the web format, they have many limitations for scientific source code or data.
+For example, the only way to archive the source code of a computational project is through its tarball\footnote{For example \inlinecode{\url{https://archive.org/details/gnuastro}}}.
+
+Through public research repositories such as Zenodo\footnote{\inlinecode{\url{https://zenodo.org}}} or Figshare\footnote{\inlinecode{\url{https://figshare.com}}} academic files (in any format and of any type of content: data, hand-written narrative or code) can be archived for the long term.
+Since they are tailored to academic files, these services mint a DOI for each package of files, and provide convenient maintenance of metadata by the uploading user, while verifying the files with MD5 checksums.
+Since these services allow large files, they are mostly useful for data (for example Zenodo currently allows a total size, for all files, of 50 GB in each upload).
+Universities now regularly provide their own repositories,\footnote{For example \inlinecode{\url{https://repozytorium.umk.pl}}} many of which are registered with the \emph{Open Archives Initiative} that aims at repository interoperability\footnote{\inlinecode{\url{https://www.openarchives.org/Register/BrowseSites}}}.
+
+However, a computational research project's source code (including instructions on how to do the research analysis, how to build the plots, blended with narrative, how to access the data, and how to prepare the software environment) is different from the data to be analyzed (which are usually just a sequence of values resulting from experiments and whose volume can be very large).
+Even though both source code and data are ultimately just sequences of bits in a file, their creation and usage are fundamentally different within a project, from both the philosophy-of-science point of view and from a practical point of view.
+Source code is often written by humans, for machines to execute \emph{and also} for humans to read/modify; it is often composed of many files and thousands of lines of (modular) code.
+Often, the fine details of the history of the changes in those lines are preserved through version control, as mentioned in Appendix \ref{appendix:versioncontrol}.
+
+Due to this fundamental difference, some services focus only on archiving the source code of a project.
+A prominent example is arXiv\footnote{\inlinecode{\url{https://arXiv.org}}}, which pioneered the archiving of research preprints.
+ArXiv uses the {\LaTeX} source of a paper (and its plots) to build the paper internally and provide users with in-house Postscript or PDF outputs: having access to the {\LaTeX} source allows it to extract metadata or contextual information, among other benefits\footnote{\inlinecode{\url{https://arxiv.org/help/faq/whytex}}}.
+However, along with the {\LaTeX} source, authors can also submit any type of plain-text file, including Shell or Python scripts for example (as long as the total volume of the upload does not exceed a certain limit).
+This feature of arXiv is heavily used by Maneage'd papers.
+For example this paper is available at \href{https://arxiv.org/abs/2006.03018}{arXiv:2006.03018}; by clicking on ``Other formats'', and then ``Download source'', the full source file that we uploaded is available to any interested reader.
+The file includes a full snapshot of this Maneage'd project, at the point the paper was submitted there, including all data and software download and build instructions, analysis commands and narrative source code.
+In fact, the \inlinecode{./project make dist} command in Maneage automatically creates the arXiv-ready tarball to help authors upload their project to arXiv.
+ArXiv provides long-term stable URIs, giving unique identifiers for each publication\footnote{\inlinecode{\url{https://arxiv.org/help/arxiv_identifier}}} and is mirrored on many servers across the globe.
+
+The granularity offered by the archival systems above is a file (which is usually a compressed package of many files in the case of source code).
+It is thus not possible to be more precise when preserving or linking to the contents of a file, or to preserve the history of changes in the file (both of which are very important in hand-written source code).
+Commonly used Git repositories (like Codeberg, Notabug, Gitlab or Github) do provide one way to access the fine details of the source files in a project.
+However, the Git history of projects on these repositories can easily be changed by the owners, or the whole site may become inactive (for association-run sites, like Codeberg or Notabug) or go bankrupt or be sold to another company (for commercial sites, like Gitlab or Github), thus changing the URL or conditions of access.
+Such repositories are thus not reliable sources in view of longevity.
+
+For preserving, and providing direct access to the fine details of a source-code file (with the granularity of a line within the file), Software Heritage is especially useful\citeappendix{abramatic18,dicosmo18}.
+Through Software Heritage, users can anonymously nominate the version-controlled repository of any publicly accessible project and request that it be archived.
+The Software Heritage scripts (themselves free-licensed) download the repository (including its full history) and preserve it.
+This allows the repository as a whole, individual files, or even certain lines within a file, to be accessed using a standard Software Heritage ID (SWHID); for more, see \citeappendix{dicosmo18}.
+In the main body of \emph{this} paper, we use this feature several times.
+Software Heritage is mirrored on international servers and is supported by major international institutions like UNESCO.
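+
+As an illustration of the general form of an SWHID with optional qualifiers (the hash, origin URL, and line range below are illustrative, not a real identifier; the line break is only typographic; see \citeappendix{dicosmo18} for the authoritative syntax):
+\begin{verbatim}
+swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2
+    ;origin=https://example.org/repo.git;lines=9-15
+\end{verbatim}
+Here, \inlinecode{cnt} identifies a file's content; other object types identify directories, revisions (commits), releases, and full repository snapshots.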
+
+An open question in archiving the full sequence of steps that go into a quantitative scientific research project is whether or how to preserve ``scholarly ephemera''.
+This refers to discussions about the project, such as bug reports or proposals for adding new features, which are usually referred to as ``issues'' or ``pull requests'' (also called ``merge requests'').
+These ephemera are not part of the Git commit history of a software project, but add wider context and understanding beyond the commit history itself, and provide a record that could be used to allocate intellectual credit.
+For these reasons, the \emph{Investigating \& Archiving the Scholarly Git Experience} (IASGE) project proposes that the ephemera should be archived along with the Git repositories themselves\footnote{\inlinecode{\href{https://investigating-archiving-git.gitlab.io/updates/define-scholarly-ephemera}{https://investigating-archiving-git.gitlab.io/updates/}}\\\inlinecode{\href{https://investigating-archiving-git.gitlab.io/updates/define-scholarly-ephemera}{define-scholarly-ephemera}}}.
+While Github is controversial for practical and ethical reasons\footnote{\inlinecode{\href{https://web.archive.org/web/20210613150610/https://git.sdf.org/humanacollaborator/humanacollabora/src/branch/master/github.md}{https://web.archive.org/web/20210613150610/https://git.sdf}\\\inlinecode{\href{https://web.archive.org/web/20210613150610/https://git.sdf.org/humanacollaborator/humanacollabora/src/branch/master/github.md}{.org/humanacollaborator/humanacollabora/src/branch/master/}}\\\inlinecode{\href{https://web.archive.org/web/20210613150610/https://git.sdf.org/humanacollaborator/humanacollabora/src/branch/master/github.md}{github.md}}}}, it is currently in wide use, and appears to be the first git repository hoster for which the ephemera are being preserved, by the GHTorrent project\footnote{\inlinecode{\url{https://ghtorrent.org}}}.
+The GHTorrent project tracks the public Github ``event timeline'', downloads all ``contents and their dependencies, exhaustively'', and provides database files of all the material.
+A particular complication that projects such as GHTorrent will need to deal with is the Git hosting service's copyright over the particular format and creative choices of style in which the ephemera are provided for download.
+
+
+
+
+
+
+\subsection{Job management}
+\label{appendix:jobmanagement}
+Any analysis will involve more than one logical step.
+For example, it is first necessary to download a dataset and do some preparation on it before applying the research software, and finally to make visualizations/tables that can be imported into the final report.
+Each one of these is a logically independent step, which needs to be run before/after the others in a specific order.
+
+Hence, job management is a critical component of a research project.
+There are many tools for managing the sequence of jobs; below, we review the most common ones, which are also used in the existing reproducibility solutions of Appendix \ref{appendix:existingsolutions} and in Maneage.
+
+\subsubsection{Manual operation with narrative}
+\label{appendix:manual}
+The most commonly used workflow system for many researchers is to run the commands, experiment with them, and keep the output when they are happy with it (thereby losing the actual command that produced it).
+As an improvement, some researchers also keep a narrative description in a text file, along with a copy of the commands they ran.
+At least in our personal experience with colleagues, this method is still heavily practiced by many researchers.
+Given that many researchers are not well trained in computational methods, this is not surprising.
+As discussed in
+\ifdefined\separatesupplement
+the discussion section of the main paper,
+\else
+Section \ref{discussion},
+\fi
+based on this observation we believe that improved literacy in computational methods is the single most important factor for the integrity/reproducibility of modern science.
+
+\subsubsection{Scripts}
+\label{appendix:scripts}
+Scripts (in any language, for example GNU Bash, or Python) are the most common ways of organizing a series of steps.
+They are primarily designed to execute each step sequentially (one after another), making them very intuitive.
+However, as the series of operations becomes large and complex, managing the workflow in a script becomes challenging.
+
+For example, if 90\% of a long project is already done and a researcher wants to add a followup step, a script will go through all the previous steps every time it is run (which can take significant time).
+In other scenarios, when a small step in the middle of the analysis has to be changed, the full analysis needs to be re-run from the start.
+Scripts have no concept of dependencies, forcing authors to ``temporarily'' comment out parts that they do not want to be re-run; forgetting to un-comment them afterwards is a common source of frustration and error.
+
+This discourages experimentation, which is a critical component of the scientific method.
+It is possible to manually add conditionals all over the script (thus manually defining dependencies, or only running certain steps at certain times), but these make the script harder to read, add logical complexity, and often introduce bugs of their own (see the sketch below).
+Parallelization is another drawback of using scripts: while not impossible, because of the high-level nature of scripts it is not trivial, and it can easily be inefficient or buggy.
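+
+As a minimal sketch of such manual dependency checks (the script and file names are hypothetical), note how easily the conditional becomes a bug:
+\begin{verbatim}
+# Run step 1 only when its output does not exist yet.
+if ! [ -f out/step1.txt ]; then
+    ./step1.sh > out/step1.txt
+fi
+
+# Bug: if 'step1.sh' (or its inputs) changed, the stale
+# 'out/step1.txt' above is silently re-used here:
+./step2.sh out/step1.txt > out/step2.txt
+\end{verbatim}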
+
+\subsubsection{Make}
+\label{appendix:make}
+Make was originally designed to address the problems mentioned above for scripts\citeappendix{feldman79}.
+In particular, it was originally designed in the context of managing the compilation of software source code that is distributed across many files.
+With Make, the source files of a program that have not been changed are not recompiled.
+Moreover, when two source files do not depend on each other, and both need to be rebuilt, they can be built in parallel.
+This was found to greatly help in debugging software projects, and in speeding up test builds, giving Make a core place in software development over the last 40 years.
+
+The most common implementation of Make, since the early 1990s, is GNU Make.
+Make was also the framework used in the first attempts at reproducible scientific papers\cite{claerbout1992,schwab2000}.
+Our proof-of-concept (Maneage) also uses Make to organize its workflow.
+Here, we complement the discussion of Make in the main body of the paper with more technical details.
+
+Usually, the top-level Make instructions are placed in a file called \inlinecode{Makefile}, but it is also common to use the \inlinecode{.mk} suffix for custom file names.
+Each stage/step in the analysis is defined through a \emph{rule}.
+Rules define \emph{recipes} to build \emph{targets} from \emph{pre-requisites}.
+In Unix-like operating systems, everything is a file, even directories and devices.
+Therefore both the targets and the prerequisites of a rule must be files on the running filesystem (the recipe is the set of commands that builds the target).
+
+To decide which operations should be re-done when executed, Make compares the timestamps of the targets and prerequisites.
+When any of the prerequisites is newer than a target, the recipe is re-run to re-build the target.
+When all the prerequisites are older than the target, that target does not need to be rebuilt.
+A recipe is just a bundle of shell commands that are executed if necessary.
+Going deeper into the syntax of Make is beyond the scope of this paper, but we recommend that interested readers consult the GNU Make manual for a very good introduction\footnote{\inlinecode{\url{http://www.gnu.org/software/make/manual/make.pdf}}}.
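+
+As a minimal sketch of these concepts (the file names are hypothetical), a two-rule Makefile is written and run from the shell:
+\begin{verbatim}
+cat > Makefile <<'EOF'
+# Format: 'target: prerequisite(s)', followed by recipe
+# lines that must start with a TAB ('$@' is the target).
+out/result.txt: out/clean.txt
+	sort out/clean.txt > $@
+
+out/clean.txt: input.txt
+	mkdir -p out
+	grep -v '^#' input.txt > $@
+EOF
+
+make out/result.txt   # Re-runs a recipe only when its
+                      # target is older than a prerequisite.
+\end{verbatim}
+If \inlinecode{input.txt} is later edited, both recipes are re-run; if nothing has changed, Make reports that the target is up to date and does nothing.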
+
+\subsubsection{Snakemake}
+\label{appendix:snakemake}
+Snakemake is a Python-based workflow management system, inspired by GNU Make (discussed above).
+It is aimed at reproducible and scalable data analysis\citeappendix{koster12}\footnote{\inlinecode{\url{https://snakemake.readthedocs.io/en/stable}}}.
+It defines its own language to implement the ``rule'' concept of Make within Python.
+Technically, calling software in other languages from each step requires complex shell scripts with a lot of quoting, which makes the code hard to read and maintain.
+It is therefore most useful for Python-based projects.
+
+Currently, Snakemake requires Python 3.5 (released in September 2015) or above, even though Snakemake was originally introduced in 2012.
+Hence, it is not clear whether older Snakemake source files can still be executed today.
+As reviewed in many tools here, depending on high-level systems for low-level project components causes a major bootstrapping problem that reduces the longevity of a project.
+
+\subsubsection{Bazel}
+Bazel\footnote{\inlinecode{\url{https://bazel.build}}} is a high-level job organizer that depends on Java and Python and is primarily tailored to software developers (with features like facilitating linking of libraries through its high-level constructs).
+
+\subsubsection{SCons}
+\label{appendix:scons}
+SCons\footnote{\inlinecode{\url{https://scons.org}}} is a Python package for managing operations outside of Python (in contrast to CGAT-core, discussed below, which only organizes Python functions).
+In many aspects it is similar to Make; for example, it is managed through a \inlinecode{SConstruct} file.
+Like a Makefile, SConstruct is also declarative: the running order is not necessarily the top-to-bottom order of the written operations within the file (unlike the imperative paradigm that is common in languages like C, Python, or FORTRAN).
+However, unlike Make, SCons does not use file modification dates to decide whether a target should be remade.
+SCons keeps the MD5 hashes of all the files in a hidden binary file and compares them to decide whether a re-run is necessary.
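+
+As a minimal sketch of such a content-based (rather than timestamp-based) up-to-date check, similar in spirit to what SCons does internally (the file names and rebuild command are hypothetical):
+\begin{verbatim}
+new=$(md5sum input.txt | cut -d' ' -f1)
+old=$(cat .input.md5 2>/dev/null)
+if [ "$new" != "$old" ]; then
+    ./rebuild.sh                # Hypothetical rebuild step.
+    echo "$new" > .input.md5    # Store hash for the next run.
+fi
+\end{verbatim}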
+
+SCons thus attempts to work on a declarative file with an imperative language (Python).
+It also goes beyond raw job management and attempts to extract information from within the files (for example to identify the libraries that must be linked while compiling a program).
+SCons is, therefore, more complex than Make, and its manual is almost double the length of GNU Make's.
+Besides the added complexity, all these ``smart'' features decrease its performance, especially as files get larger and more numerous: on every call, every file's checksum has to be calculated, and a Python system call has to be made (which is computationally expensive).
+
+Finally, it has the same drawback as any other tool that uses high-level languages, see Section \ref{appendix:highlevelinworkflow}.
+We encountered such a problem while testing SCons: on the Debian-10 testing system, the \inlinecode{python} program pointed to Python 2.
+However, since Python 2 is now obsolete, SCons was built with Python 3 and our first run crashed.
+To fix it, we had to either manually change the core operating system's path or the SCons source's hashbang.
+The former would conflict with other system tools that assume \inlinecode{python} points to Python 2; the latter may need root permissions on some systems.
+This can also be problematic when a Python analysis library requires a Python version that conflicts with the one running SCons.
+
+\subsubsection{CGAT-core}
+CGAT-Core\citeappendix{cribbs19} is a Python package for managing workflows.
+It wraps analysis steps in Python functions and uses Python decorators to track the dependencies between tasks.
+It is used in papers like Jones et al.\citeappendix{jones19}.
+However, as mentioned there, it is primarily suited to managing individual outputs (for example, separate figures/tables in the paper, when they are fully created within Python).
+Because it is primarily designed for Python tasks, managing a full workflow (which includes many more components, written in other languages) is not trivial.
+Another drawback of this workflow manager is that Python is a very high-level language: future versions of the language may no longer be compatible with Python 3, in which CGAT-core is implemented (similar to how Python 2 programs are not compatible with Python 3).
+
+\subsubsection{Guix Workflow Language (GWL)}
+GWL is based on the declarative language that GNU Guix uses for package management (see Appendix \ref{appendix:packagemanagement}), which is itself based on the general purpose Scheme language.
+It is closely linked with GNU Guix and can even install the necessary software needed for each individual process.
+Hence, in the GWL paradigm, software installation and usage do not have to be separated.
+GWL has two high-level concepts called ``processes'' and ``workflows'' where the latter defines how multiple processes should be executed together.
+
+\subsubsection{Nextflow (2013)}
+Nextflow\footnote{\inlinecode{\url{https://www.nextflow.io}}} is a workflow language\citeappendix{tommaso17} with a command-line interface, written in Java.
+
+\subsubsection{Generic workflow specifications (CWL and WDL)}
+\label{appendix:genericworkflows}
+Due to the variety of custom workflows used in existing reproducibility solutions (like those of Appendix \ref{appendix:existingsolutions}), some attempts have been made to define common workflow standards like the Common Workflow Language (CWL\footnote{\inlinecode{\url{https://www.commonwl.org}}}, with roots in Make, formatted in YAML or JSON) and the Workflow Description Language (WDL\footnote{\inlinecode{\url{https://openwdl.org}}}, formatted in JSON).
+These are primarily specifications/standards rather than software.
+At an even higher level solutions like Canonical Workflow Frameworks for Research (CWFR) are being proposed\footnote{\inlinecode{\href{https://codata.org/wp-content/uploads/2021/01/CWFR-position-paper-v3.pdf}{https://codata.org/wp-content/uploads/2021/01/}}\\\inlinecode{\href{https://codata.org/wp-content/uploads/2021/01/CWFR-position-paper-v3.pdf}{CWFR-position-paper-v3.pdf}}}.
+With these standards, ideally, translators can be written between the various workflow systems to make them more interoperable.
+
+In conclusion, shell scripts and Make are very common and extensively used by users of Unix-based OSs (which are most commonly used for computations).
+They have also existed for several decades and are robust and mature.
+Many researchers that use heavy computations are also already familiar with them and have used them (to different levels).
+As we demonstrated above in this appendix, the list of necessary tools for the various stages of a research project (an independent environment, package managers, job organizers, analysis languages, writing formats, editors, etc) is already very large.
+Each software/tool/paradigm has its own learning curve, which is not easy for a natural or social scientist, for example, who needs to put their primary focus on their own scientific domain.
+Most workflow management tools, and the reproducible workflow solutions that depend on them, are thus yet another language/paradigm that has to be mastered by researchers, and hence a heavy burden.
+
+Furthermore, as shown above (and below), high-level tools evolve very fast, causing disruptions in the reproducible framework.
+A good example is Popper\citeappendix{jimenez17}, which initially organized its workflow through the HashiCorp Configuration Language (HCL) because it was the default in GitHub.
+However, in September 2019, GitHub dropped HCL as its default configuration language, so Popper now uses its own custom YAML-based workflow language; see Appendix \ref{appendix:popper} for more on Popper.
+
+
+
+
+
+\subsection{Editing steps and viewing results}
+\label{appendix:editors}
+In order to reproduce a project, the analysis steps must be stored in files: for example, Shell, Python, or R scripts, Makefiles, Dockerfiles, or even the source files of compiled languages like C or FORTRAN.
+Given that a scientific project does not evolve linearly and many edits are needed as it evolves, it is important to be able to actively test the analysis steps while writing the project's source files.
+Here we review some common methods that are currently used.
+
+\subsubsection{Text editors}
+The most basic way to edit text files is through simple text editors which just allow viewing and editing such files, for example, \inlinecode{gedit} on the GNOME graphic user interface.
+However, working with simple plain text editors like \inlinecode{gedit} can be very frustrating since it is necessary to save the file, then go to a terminal emulator and execute the source files.
+To solve this problem there are advanced text editors like GNU Emacs that allow direct execution of the script, or access to a terminal within the text editor.
+However, editors that can execute or debug the source (like GNU Emacs) simply run external programs for these jobs (for example, GNU GCC or GNU GDB), just as if those programs were called from outside the editor.
+
+With text editors, the final edited file is independent of the actual editor and can be further edited with another editor, or executed without it.
+This is a very important feature and corresponds to the modularity criterion of this paper.
+This type of modularity is not commonly present for other solutions mentioned below (the source can only be edited/run in a specific browser).
+Another very important advantage of advanced text editors like GNU Emacs or Vi(m) is that they can also be run without a graphic user interface, directly on the command-line.
+This feature is critical when working on remote systems, in particular high performance computing (HPC) facilities that do not provide a graphic user interface.
+Also, the commonly used minimalistic containers do not include a graphic user interface.
+Hence, by default, all Maneage'd projects also build the simple GNU Nano plain-text editor as part of the project (to be able to edit the source directly within a minimal environment).
+Maneage can also optionally build GNU Emacs or Vim, but it is up to the user to build them (as with their high-level science software).
+
+\subsubsection{Integrated Development Environments (IDEs)}
+To facilitate the development of source code in special programming languages, IDEs add software building and running environments as well as debugging tools to a plain text editor.
+Many IDEs have their own compilers and debuggers, hence source files that are maintained in IDEs are not necessarily usable/portable on other systems.
+Furthermore, they usually require a graphic user interface to run.
+In summary, IDEs are generally very specialized tools, for special projects and are not a good solution when portability (the ability to run on different systems and at different times) is required.
+
+\subsubsection{Jupyter}
+\label{appendix:jupyter}
+Jupyter\citeappendix{kluyver16} (initially IPython) is an implementation of Literate Programming \citeappendix{knuth84}.
+Jupyter's name is a combination of the three main languages it was designed for: Julia, Python, and R.
+The main user interface is a web-based ``notebook'' that contains blobs of executable code and narrative.
+Jupyter uses the custom-built \inlinecode{.ipynb} format\footnote{\inlinecode{\url{https://nbformat.readthedocs.io/en/latest}}}.
+The \inlinecode{.ipynb} format is a simple, human-readable format (it can be opened in a plain-text editor), formatted in JavaScript Object Notation (JSON).
+It contains various kinds of ``cells'', or blobs, that can contain narrative description, code, or multi-media visualizations (for example, images/plots), all stored in one file.
+The cells can be in any order, allowing a graphical implementation of the literate programming style, where narrative descriptions and executable patches of code are intertwined.
+For example, a paragraph of text about a patch of code can be immediately followed by that patch, which can be run on the same page.
+
+The \inlinecode{.ipynb} format does theoretically allow dependency tracking between cells; see the IPython mailing list discussion started by Gabriel Becker in July 2013\footnote{\url{https://mail.python.org/pipermail/ipython-dev/2013-July/010725.html}}.
+Defining dependencies between the cells can allow non-linear execution which is critical for large scale (thousands of files) and complex (many dependencies between the cells) operations.
+It allows automation, run-time optimization (deciding not to run a cell if it is not necessary), and parallelization.
+However, Jupyter currently only supports a linear run of the cells: always from the start to the end.
+It is possible to manually execute only one cell, but the previous/next cells that may depend on it also have to be manually run (a common source of human error and frustration in complex operations).
+Integration of directional graph features (dependencies between the cells) into Jupyter has been discussed, but as of this publication, there is no plan to implement it (see Jupyter's GitHub issue 1175\footnote{\inlinecode{\url{https://github.com/jupyter/notebook/issues/1175}}}).
+
+The fact that the \inlinecode{.ipynb} format stores narrative text, code, and multi-media visualization of the outputs in one file is another major hurdle, and it goes against the modularity criterion proposed here.
+The files can easily become very large (in volume/bytes) and hard to read when the Jupyter web-interface is not accessible.
+Both are critical problems for scientific processing, especially the latter: a web browser with the required JavaScript features may not be available in a few years.
+This is further exacerbated by the fact that binary data (for example images) are not directly supported in JSON and have to be converted into a much less memory-efficient textual encoding.
+
+Finally, Jupyter has an extremely complex dependency graph: on a clean Debian 10 system, Pip (a Python package manager that is necessary for installing Jupyter) required 19 dependencies to install, and installing Jupyter within Pip needed 41 dependencies.
+Hinsen\citeappendix{hinsen15} reported such conflicts when building Jupyter into the Active Papers framework (see Appendix \ref{appendix:activepapers}).
+However, the dependencies above are only on the server-side.
+Since Jupyter is a web-based system, it also requires many dependencies in the viewing/running browser (for example, special JavaScript or HTML5 features, which evolve very fast).
+As discussed in Appendix \ref{appendix:highlevelinworkflow} having so many dependencies is a major caveat for any system regarding scientific/long-term reproducibility.
+In summary, Jupyter is most useful for manual, interactive, and graphical operations of a temporary nature (for example, educational tutorials).
+
+
+
+
+
+\subsection{Project management in high-level languages}
+\label{appendix:highlevelinworkflow}
+Currently, the most popular high-level data analysis language is Python.
+R is closely tracking it and has superseded Python in some fields, while Julia\citeappendix{bezanson17} is quickly gaining ground.
+These languages have themselves superseded previously popular languages for data analysis of the previous decades, for example, Java, Perl, or C++.
+All are part of the C-family programming languages.
+In many cases, this means that the languages' execution environments are themselves written in C, which is the language of modern operating systems.
+
+Scientists, or data analysts, mostly use these higher-level languages.
+Therefore, they are naturally drawn to also using these higher-level languages for lower-level project management, or for designing the various stages of their workflow.
+For example Conda or Spack (Appendix \ref{appendix:packagemanagement}), CGAT-core (Appendix \ref{appendix:jobmanagement}), Jupyter (Appendix \ref{appendix:editors}) or Popper (Appendix \ref{appendix:popper}) are written in Python.
+The discussion below applies to both the actual analysis software and project management software.
+In this context, it is more focused on the latter.
+
+Because of their nature, higher-level languages evolve very fast, creating incompatibilities along the way.
+The most prominent example is the transition from Python 2 (released in 2000) to Python 3 (released in 2008).
+Python 3 was incompatible with Python 2, and it was decided to abandon the latter by 2015.
+However, due to community pressure, this was delayed to 1 January 2020.
+The end-of-life of Python 2 caused many problems for projects that had invested heavily in Python 2: all their previous work had to be translated, for example, see Jenness\citeappendix{jenness17} or Appendix \ref{appendix:sciunit}.
+Some projects could not make this investment, and their developers decided to stop maintaining them; for example, VisTrails (see Appendix \ref{appendix:vistrails}).
+
+The problems were not just limited to translation.
+Python 2 was still being actively used during the transition period (and is still being used by some, after its end-of-life).
+Therefore, developers had to maintain (for example fix bugs in) both versions in one package.
+This is not particular to Python; a similar evolution occurred in Perl: in 2000 it was decided to improve Perl 5, but the proposed Perl 6 was incompatible with it.
+However, the Perl community decided not to abandon Perl 5, and Perl 6 was eventually defined as a new language that is now officially called ``Raku'' (\url{https://raku.org}).
+
+It is unreasonably optimistic to assume that high-level languages will not undergo similar incompatible evolutions in the (not too distant) future.
+For industrial software developers, this is not a major problem: non-scientific software, and the general population's usage of it, has a similarly fast evolution and shelf life.
+Hence, it is rarely (if ever) necessary to look into industrial/business codes that are more than a couple of years old.
+However, in the sciences (which are commonly funded by public money) this is a major caveat for the longer-term usability of solutions.
+
+In summary, in this section we are discussing the bootstrapping problem as it regards scientific projects: the workflow/pipeline can reproduce the analysis and its dependencies, but the dependencies of the workflow itself should not be ignored.
+Beyond the technical, low-level problems for developers mentioned above, this causes major problems for scientific project management, as listed below:
+
+\subsubsection{Dependency hell}
+The evolution of high-level languages is extremely fast, even within one version.
+For example, packages written in Python 3 often only work with a specific interval of Python 3 versions; Snakemake and Occam, for instance, can only be run on Python versions 3.4 and 3.5 or newer, respectively (see Appendices \ref{appendix:snakemake} and \ref{appendix:occam}).
+This is not just limited to the core language; much faster changes occur in their higher-level libraries.
+For example, version 1.9 of Numpy (Python's numerical analysis module) discontinued support for Numpy's predecessor (called Numeric), causing many problems for scientific users\citeappendix{hinsen15}.
+
+On the other hand, the dependency graph of tools written in high-level languages is often extremely complex.
+For example, Figure 1 of Alliez et al.\cite{alliez19} shows the dependencies and their inter-dependencies for Matplotlib (a popular plotting module in Python).
+The acceptable version intervals between the dependencies will cause incompatibilities within a year or two if a robust package manager is not used (see Appendix \ref{appendix:packagemanagement}).
+
+Since a domain scientist does not always have the resources/knowledge to modify the conflicting part(s), many are forced to create complex environments, with different versions of Python (sometimes on different computers), and pass the data between them (for example just to use the work of a previous PhD student in the team).
+This greatly increases the complexity/cost of the project, even for the principal author.
+A well-designed reproducible workflow like Maneage that has no dependencies beyond a C compiler in a Unix-like operating system can account for this.
+However, when the actual workflow system (not the analysis software) is written in a high-level language like the examples above, the complex dependencies of the workflow itself will inevitably cause bootstrapping problems in the future.
+
+Another relevant example of the dependency hell is the following: installing the Python package installer (\inlinecode{pip}) on a Debian system (with \inlinecode{apt install pip2} for Python 2 packages) required 32 other packages as dependencies.
+\inlinecode{pip} is necessary to install Popper and Sciunit (Appendices \ref{appendix:popper} and \ref{appendix:sciunit}).
+As of this writing, the \inlinecode{pip3 install popper} and \inlinecode{pip2 install sciunit2} commands for installing each, required 17 and 26 Python modules as dependencies.
+It is impossible to run either of these solutions if there is a single conflict in this very complex dependency graph.
+This problem actually occurred while we were testing Sciunit: even though it was installed, it could not run because of conflicts (its last commit was only 1.5 years old), for more see Appendix \ref{appendix:sciunit}.
+Hinsen\citeappendix{hinsen15} also reports a similar problem when attempting to install Jupyter (see Appendix \ref{appendix:editors}).
+Of course, this also applies to tools that these systems use, for example Conda (which is also written in Python, see Appendix \ref{appendix:packagemanagement}).
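+
+As a hedged illustration, the number of packages that such an installation pulls in can be counted directly with \inlinecode{pip}'s own download command (the package name is from the text above; the exact count will vary over time):
+\begin{verbatim}
+# Download (without installing) a package together with
+# everything it depends on, then count the archives:
+pip3 download popper -d /tmp/popper-deps
+ls /tmp/popper-deps | wc -l
+\end{verbatim}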
+
+\subsubsection{Generational gap}
+This occurs primarily for scientists in a given domain (for example, astronomers, biologists, or social scientists).
+Once they have mastered one version of a language (mostly in the early stages of their career), they tend to ignore newer versions/languages.
+The inertia of programming languages is very strong.
+This is natural because scientists have their own science field to focus on, and re-writing their high-level analysis toolkits (which they have curated over their career and is often only readable/usable by themselves) in newer languages every few years is impractical.
+
+This creates a gap between the generations of a research team: when the investment in learning a shared toolkit is not possible, either the mentee has to use the mentor's old method (and miss out on all the newly fashionable tools that many are talking about), or the mentor has to avoid implementation details in discussions with the mentee, because they do not share a common language.
+The authors of this paper have personal experience in both mentor/mentee scenarios.
+This failure to communicate about the details is a very serious problem, leading to the loss of valuable inter-generational experience.
diff --git a/tex/src/appendix-necessity.tex b/tex/src/appendix-necessity.tex
new file mode 100644
index 0000000..591a0a5
--- /dev/null
+++ b/tex/src/appendix-necessity.tex
@@ -0,0 +1,97 @@
+%% Appendix on reviewing the necessity for reproducible research
+%% papers. This file is loaded by the project's 'paper.tex' or
+%% 'tex/src/supplement.tex', it should not be run independently.
+%
+%% Copyright (C) 2020-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
+%% Copyright (C) 2021-2022 Raúl Infante-Sainz <infantesainz@gmail.com>
+%
+%% This file is free software: you can redistribute it and/or modify it
+%% under the terms of the GNU General Public License as published by the
+%% Free Software Foundation, either version 3 of the License, or (at your
+%% option) any later version.
+%
+%% This file is distributed in the hope that it will be useful, but WITHOUT
+%% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+%% FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+%% for more details. See <http://www.gnu.org/licenses/>.
+
+
+
+
+
+\section{Necessity for reproducible research\\(not part of journal article; introductory review for non-specialists)}
+\label{appendix:necessity}
+The increasing volume and complexity of data analysis has been highly productive, giving rise to a new branch of ``Big Data'' in many fields of the sciences and industry.
+However, given this inherent complexity, the results alone are barely useful.
+Questions such as these commonly follow any such result:
+What inputs were used?
+What operations were done on those inputs? How were the configurations or training data chosen?
+How did the quantitative results get visualized into the final demonstration plots, figures, or narrative/qualitative interpretation?
+Could there be a bias in the visualization?
+See Figure \ref{fig:questions} for a more detailed visual representation of such questions for various stages of the workflow.
+
+In data science and database management, this type of metadata is commonly known as \emph{data provenance} or \emph{data lineage}.
+Data lineage is increasingly being demanded for integrity checking in the scientific, industrial, and legal domains alike.
+Notable examples in each domain are respectively the ``Reproducibility crisis'' in the sciences that was reported by \emph{Nature} after a large survey \citeappendix{baker16}, and the General Data Protection Regulation (GDPR) by the European Parliament and the California Consumer Privacy Act (CCPA), implemented in 2018 and 2020, respectively.
+The former argues that reproducibility (as a test of sufficiently conveying the data lineage) is necessary for other scientists to study, check, and build upon each other's work.
+The latter requires the data-intensive industry to give individual users control over their data, effectively requiring thorough management and knowledge of the data lineage.
+Besides regulation and integrity checks, having robust data governance (management of data lineage) in a project can be very productive: it enables easy debugging, experimentation on alternative methods, or optimization of the workflow.
+
+In the sciences, the results of a project's analysis are published as scientific papers, which have traditionally been the primary conveyor of the lineage of the results: usually in narrative form, especially within the ``Methods'' section of the paper.
+From our own experience, this section is often the most intensively discussed during peer review and conference presentations, showing its importance.
+After all, a result is defined as ``scientific'' based on its \emph{method} (the ``scientific method''), or lineage in data-science terminology.
+In industry, however, data governance is usually kept as a trade secret and is not published openly or widely scrutinized.
+Therefore, the main practical focus here will be on the scientific front, which has traditionally been more open to the publication of methods and anonymous peer scrutiny.
+
+\begin{figure*}[t]
+ \begin{center}
+ \includetikz{figure-project-outline}{width=\linewidth}
+ \end{center}
+ \caption{\label{fig:questions}Graph of a generic project's workflow (connected through arrows), highlighting the various issues and questions on each step.
+ The green boxes with sharp edges are inputs and the blue boxes with rounded corners are the intermediate or final outputs.
+ The red boxes with dashed edges highlight the main questions at various stages in the work chain.
+ The orange box, surrounding the software download and build phases, lists some commonly recognized solutions to the questions in it; for more discussion, see Appendix \ref{appendix:independentenvironment}.
+ }
+\end{figure*}
+
+The traditional format of a scientific paper has been very successful in conveying the method and the results during recent centuries.
+However, the complexity mentioned above has made it impossible to describe all the analytical steps of most modern projects to a sufficient level of detail.
+Citing this difficulty, many authors limit themselves to describing the very high-level generalities of their analysis, while even the most basic calculations (such as the mean of a distribution) can depend on the software implementation.
+
+Due to the complexity of modern scientific analysis, a small deviation in any of the steps involved can lead to significant differences in the final result.
+Publishing the precise codes of the analysis is the only guarantee of allowing this to be investigated.
+For example, \citeappendix{smart18} describes how a 7-year old conflict in theoretical condensed matter physics was only identified after the different groups' codes were shared.
+Nature is already a black box that we are trying hard to unlock and understand.
+Not being able to experiment on the methods of other researchers is an artificial, self-imposed black box wrapped around the original, wasting much of researchers' time and energy.
+
+A dramatic example showing the importance of sharing code is \citeappendix{miller06}, in which a mistaken flipping of a column was discovered, leading to the retraction of five papers in major journals, including \emph{Science}.
+Ref.\/ \citeappendix{baggerly09} highlighted the inadequate narrative description of analysis in several papers and showed the prevalence of simple errors in published results, ultimately calling their work ``forensic bioinformatics''.
+References \citeappendix{herndon14} and \citeappendix{horvath15} also reported similar situations and \citeappendix{ziemann16} concluded that one-fifth of papers with supplementary Microsoft Excel gene lists contain erroneous gene name conversions.
+Such integrity checks are a critical component of the scientific method but are only possible with access to the data and codes and \emph{cannot be resolved from analyzing the published paper alone}.
+
+The completeness of a paper's published metadata (or ``Methods'' section) can be measured by a simple question: given the same input datasets (supposedly on a third-party database like \href{http://zenodo.org}{zenodo.org}), can another researcher reproduce the same result automatically, without needing to contact the authors?
+Several studies have attempted to answer this with different levels of detail.
+For example, \citeappendix{allen18} found that roughly half of the papers in astrophysics do not even mention the names of any analysis software they have used, while \cite{menke20} found that the fraction of papers explicitly mentioning their software tools has greatly improved in medical journals over the last two decades.
+
+Ref.\/ \citeappendix{ioannidis2009} attempted to reproduce 18 published results by two independent groups, but only fully succeeded in two of them and partially in six.
+Ref.\/ \citeappendix{chang15} attempted to reproduce 67 papers in well-regarded economic journals with data and code: only 22 could be reproduced without contacting authors, and more than half could not be replicated at all.
+Ref.\/ \citeappendix{stodden18} attempted to replicate the results of 204 scientific papers published in the journal Science \emph{after} that journal adopted a policy of publishing the data and code associated with the papers.
+Even though the authors were contacted, the success rate was $26\%$.
+Generally, this problem is unambiguously felt in the community: \citeappendix{baker16} surveyed 1574 researchers and found that only $3\%$ did not see a ``reproducibility crisis''.
+
+This is not a new problem in the sciences: in 2011, Elsevier conducted an ``Executable Paper Grand Challenge'' \citeappendix{gabriel11}.
+The proposed solutions were published in a special issue.
+Some of them are reviewed in Appendix \ref{appendix:existingsolutions}, but most have not been continued since then.
+In 2005, Ref.\/ \citeappendix{ioannidis05} argued that ``most claimed research findings are false''.
+Even earlier, in the 1990s, Refs.\/ \cite{schwab2000}, \citeappendix{buckheit1995} and \cite{claerbout1992} described this same problem very eloquently and provided some of the solutions that they adopted.
+While the situation has improved since the early 1990s, the problems mentioned in these papers will resonate strongly with the frustrations of today's scientists.
+Even earlier yet, through his famous quartet, Anscombe \citeappendix{anscombe73} qualitatively showed how the distancing of researchers from the intricacies of algorithms and methods can lead to misinterpretation of the results.
+One of the earliest such efforts we found was \citeappendix{roberts69}, who discussed conventions in FORTRAN programming and documentation to help in publishing research codes.
+
+From a practical point of view, for those who publish the data lineage, a major problem is the fast-evolving and diverse set of software technologies and methodologies that are used by different teams in different epochs.
+Ref.\/ \citeappendix{zhao12} describes it as ``workflow decay'' and recommends preserving these auxiliary resources.
+But in the case of software, this is not as straightforward as for data: if preserved in binary form, the software can only be run on a certain operating system and particular hardware; if kept as source code, its build dependencies and build configuration must also be preserved.
+Ref.\/ \citeappendix{gronenschild12} specifically studies the effect of software version and environment, and encourages researchers not to update their software environment.
+However, this is not a practical solution because software updates are necessary, at least to fix bugs in the research software itself.
+Generally, software is not an interchangeable component of a project: one package cannot simply be swapped for another.
+Projects are built around specific software technologies, and research in software methods and implementations is itself a vibrant research topic in many domains \citeappendix{dicosmo19}.
diff --git a/tex/src/appendix-used-software.tex b/tex/src/appendix-used-software.tex
new file mode 100644
index 0000000..aa06d45
--- /dev/null
+++ b/tex/src/appendix-used-software.tex
@@ -0,0 +1,4 @@
+%% Mention all used software in an appendix.
+\section{Software acknowledgement}
+\label{appendix:software}
+\input{tex/build/macros/dependencies.tex}
diff --git a/tex/src/delete-me-image-histogram.tex b/tex/src/delete-me-image-histogram.tex
deleted file mode 100644
index 4648fe6..0000000
--- a/tex/src/delete-me-image-histogram.tex
+++ /dev/null
@@ -1,51 +0,0 @@
-%% Plot the demonstration image and its histogram.
-%
-%% Copyright (C) 2019-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
-%
-%% This file is free software: you can redistribute it and/or modify it
-%% under the terms of the GNU General Public License as published by the
-%% Free Software Foundation, either version 3 of the License, or (at your
-%% option) any later version.
-%
-%% This file is distributed in the hope that it will be useful, but WITHOUT
-%% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
-%% FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
-%% for more details.
-%
-%% You should have received a copy of the GNU General Public License along
-%% with this file. If not, see <http://www.gnu.org/licenses/>.
-
-\begin{tikzpicture}
-
- %% Dispaly the demo image.
- \node[anchor=south west] (img) at (0,0)
- {\includegraphics[width=0.5\linewidth]
- {tex/build/image-histogram/wfpc2.pdf}};
-
- %% Its label
- \node[anchor=south west] at (0.45\linewidth,0.45\linewidth)
- {\textcolor{white}{a}};
-
- %% This histogram.
- \begin{axis}[at={(0.52\linewidth,0.1\linewidth)},
- no markers,
- axis on top,
- xmode=normal,
- ymode=normal,
- yticklabels={},
- scale only axis,
- xlabel=Pixel value,
- width=0.5\linewidth,
- height=0.412\linewidth,
- enlarge y limits=false,
- enlarge x limits=false,
- ]
- \addplot [const plot mark mid, fill=red]
- table [x index=0, y index=1]
- {tex/build/to-publish/wfpc2-histogram.txt}
- \closedcycle;
- \end{axis}
-
- %% The histogram's label
- \node[anchor=south west] at (0.95\linewidth,0.45\linewidth) {b};
-\end{tikzpicture}
diff --git a/tex/src/delete-me-squared.tex b/tex/src/delete-me-squared.tex
deleted file mode 100644
index f8e29cc..0000000
--- a/tex/src/delete-me-squared.tex
+++ /dev/null
@@ -1,32 +0,0 @@
-%% PGFPlots code to plot a random set of numbers as demo
-%%
-%% Copyright (C) 2019-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
-%
-%% This file is free software: you can redistribute it and/or modify it
-%% under the terms of the GNU General Public License as published by the
-%% Free Software Foundation, either version 3 of the License, or (at your
-%% option) any later version.
-%
-%% This file is distributed in the hope that it will be useful, but WITHOUT
-%% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
-%% FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
-%% for more details.
-%
-%% You should have received a copy of the GNU General Public License along
-%% with this file. If not, see <http://www.gnu.org/licenses/>.
-
-\begin{tikzpicture}
-
- %% Settings of the plotted axis
- \begin{axis}[
- width=\linewidth,
- xlabel=$X$,
- ylabel=$X^2$,
- ]
-
- %% A particular plot.
- \addplot+[scatter, only marks]
- table {tex/build/to-publish/squared.txt};
-
- \end{axis}
-\end{tikzpicture}
diff --git a/tex/src/figure-branching.tex b/tex/src/figure-branching.tex
new file mode 100644
index 0000000..d0a3b2e
--- /dev/null
+++ b/tex/src/figure-branching.tex
@@ -0,0 +1,156 @@
+% Copyright (C) 2020-2021 Mohammad Akhlaghi <mohammad@akhlaghi.org>
+%
+%% This LaTeX source is free software: you can redistribute it and/or
+%% modify it under the terms of the GNU General Public License as published
+%% by the Free Software Foundation, either version 3 of the License, or (at
+%% your option) any later version.
+%
+%% This LaTeX source is distributed in the hope that it will be useful, but
+%% WITHOUT ANY WARRANTY; without even the implied warranty of
+%% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+%% General Public License for more details.
+%
+%% You should have received a copy of the GNU General Public License along
+%% with this LaTeX source. If not, see <https://www.gnu.org/licenses/>.
+
+
+
+
+
+%% Basic definitions to facilitate the plot code.
+\newcommand{\branchcommit}[4]{
+ \draw [fill=#1, opacity=0.8] (#2,#3) circle [x radius=5.5mm, y radius=2.1mm];
+ \draw [anchor=center] (#2,#3) node {\textcolor{white}{\scriptsize\texttt{#4}}};
+}
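+%% Arguments: #1 = fill color, #2/#3 = center coordinates of the
+%% commit ellipse, #4 = abbreviated commit hash printed inside it.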
+\definecolor{maneagebranchcolor}{HTML}{46211A}
+\definecolor{projectbranchcolor}{HTML}{9A3324}
+\definecolor{derivedbranchcolor}{HTML}{C28556}
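+%% (Three shades of brown/red distinguish the Maneage, project and
+%% derived-project commits in the two scenarios below.)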
+
+
+
+
+
+
+
+
+
+\begin{tikzpicture}
+
+ %% Just for a reference (so the image size always remains fixed). It also
+ %% helps in defining easy coordinates for all the other elements.
+ \draw [white] (0,0) -- (0,8cm);
+ \draw [white] (0,0) -- (\linewidth,0);
+
+ %% Collaboration icon, which should be under the commit ellipses.
+ \node[inner sep=0pt] at (3.885cm,5.51cm)
+ {\includegraphics[width=6.6mm]{tex/img/icon-collaboration.eps}};
+
+ %% Maneage branch line.
+ \draw [black!40!white, dashed, line width=2mm] (1.5cm,0) -- (1.5cm,0.6cm);
+ \draw [->, black!40!white, line width=2mm] (1.5cm,0.6cm) -- (1.5cm,7.9cm);
+ \draw [anchor=north, black!40!white] (1.5cm,0.1cm) node [scale=1.5]
+ {\bf Maneage};
+
+ %% Project branch line.
+ \draw [->, black!40!white, rounded corners, line width=2mm]
+ (1.5cm,2cm) -- (3cm,2.5cm) -- (3cm,7.9cm);
+ \draw [black!40!white, line width=2mm] (1.5cm,5cm) -- (3cm,5.5cm);
+ \draw [anchor=north, black!40!white] (3cm,2.3cm) node [scale=1.5]
+ {\bf Project};
+
+ %% Derivative project
+ \draw [black!40!white, rounded corners, line width=2mm]
+ (3cm,4.5cm) -- (4.5cm,5cm) -- (4.5cm,6cm) -- (3cm,6.5cm);
+
+ %% Maneage commits.
+ \branchcommit{maneagebranchcolor}{1.5cm}{1cm}{1d72e26}
+ \branchcommit{maneagebranchcolor}{1.5cm}{2cm}{0c120cb}
+ \branchcommit{maneagebranchcolor}{1.5cm}{3cm}{5781173}
+ \branchcommit{maneagebranchcolor}{1.5cm}{4cm}{0774aac}
+ \branchcommit{maneagebranchcolor}{1.5cm}{5cm}{3c05235}
+ \branchcommit{maneagebranchcolor}{1.5cm}{6cm}{6ec4881}
+ \branchcommit{maneagebranchcolor}{1.5cm}{7cm}{852d996}
+
+ %% Project commits.
+ \branchcommit{projectbranchcolor}{3cm}{2.5cm}{4483a81}
+ \branchcommit{projectbranchcolor}{3cm}{3.5cm}{5e830f5}
+ \branchcommit{projectbranchcolor}{3cm}{4.5cm}{01dd812}
+ \branchcommit{projectbranchcolor}{3cm}{5.5cm}{2ed0c82}
+ \branchcommit{projectbranchcolor}{3cm}{6.5cm}{f62596e}
+
+  %% Derivative project commits.
+ \branchcommit{projectbranchcolor}{4.5cm}{5cm}{f69e1f4}
+ \branchcommit{projectbranchcolor}{4.5cm}{6cm}{716b56b}
+
+  %% Paper being processed icon. (A white rectangle can be drawn over it
+  %% to blend it into the background; see the commented line below.)
+ \node[anchor=south,inner sep=0pt] at (4.2cm,6.5cm)
+ {\includegraphics[width=1cm]{tex/img/icon-processing.eps}};
+% \draw[white, fill=white, opacity=0.7] (3.42cm,6.7) rectangle (5cm,7.8cm);
+
+ %% Description of this scenario:
+ \draw [rounded corners, fill=black!10!white] (3.1cm,0) rectangle (7.5cm,1.25cm);
+ \draw [anchor=west, black] (3.1cm,1.0cm) node {\textbf{(a)} pre-publication:};
+ \draw [anchor=west, black] (3.3cm,0.6cm) node {\footnotesize Collaborating on a project while};
+ \draw [anchor=west, black] (3.3cm,0.2cm) node {\footnotesize working in parallel, then merging.};
+
+
+
+
+
+
+
+
+
+
+ %% Maneage branch line.
+ \draw [black!40!white, dashed, line width=2mm] (9.5cm,0) -- (9.5cm,0.6cm);
+ \draw [black!40!white, line width=2mm] (9.5cm,0.6cm) -- (9.5cm,2.5cm);
+ \draw [black!40!white, line width=2mm, dashed] (9.5cm,2.5cm) -- (9.5cm,3.5cm);
+ \draw [->,black!40!white, line width=2mm] (9.5cm,3.5cm) -- (9.5cm,7.9cm);
+ \draw [anchor=north, black!40!white] (9.5cm,0.1cm) node [scale=1.5]
+ {\bf Maneage};
+
+ %% Project branch line.
+ \draw [black!40!white, rounded corners, line width=2mm]
+ (9.5cm,2cm) -- (11cm,2.5cm) -- (11cm,3cm);
+ \draw [black!40!white, line width=2mm, dashed] (11cm,3cm) -- (11cm,4cm);
+ \draw [black!40!white, line width=2mm, dashed] (9.5cm,3cm) -- (11cm,3.5cm);
+ \draw [black!40!white, line width=2mm] (11cm,4cm) -- (11cm,4.9cm);
+ \draw [anchor=north, black!40!white] (11cm,2.3cm) node [scale=1.5]
+ {\bf Project};
+
+ %% Derivative project
+ \draw [->, black!40!white, rounded corners, line width=2mm]
+ (11cm,4.5cm) -- (12.5cm,5cm) -- (12.5cm,7.9cm);
+ \draw [black!40!white, line width=2mm] (9.5cm,6cm) -- (12.5cm,7cm);
+ \draw [anchor=north, black!40!white] (12.6cm,4.8cm) node [scale=1.5]
+ {\bf Derived};
+ \draw [anchor=north, black!40!white] (12.6cm,4.4cm) node [scale=1.5]
+ {\bf project};
+
+ %% Maneage commits.
+ \branchcommit{maneagebranchcolor}{9.5cm}{1cm}{1d72e26}
+ \branchcommit{maneagebranchcolor}{9.5cm}{2cm}{0c120cb}
+ \branchcommit{maneagebranchcolor}{9.5cm}{4cm}{b47b2a3}
+ \branchcommit{maneagebranchcolor}{9.5cm}{5cm}{340a7ec}
+ \branchcommit{maneagebranchcolor}{9.5cm}{6cm}{a92b25a}
+ \branchcommit{maneagebranchcolor}{9.5cm}{7cm}{6e1e3ff}
+
+ %% Project commits.
+ \branchcommit{projectbranchcolor}{11cm}{2.5cm}{4483a81}
+ \branchcommit{projectbranchcolor}{11cm}{4.5cm}{\projectversion}
+ \node[anchor=south, inner sep=0pt, color=white] at (11cm,4.8cm)
+ {\includegraphics[width=1cm]{tex/img/icon-complete.eps}};
+
+  %% Derivative project commits.
+ \branchcommit{derivedbranchcolor}{12.5cm}{5cm}{b177c7e}
+ \branchcommit{derivedbranchcolor}{12.5cm}{6cm}{5ae1fdc}
+ \branchcommit{derivedbranchcolor}{12.5cm}{7cm}{bcf4512}
+
+ %% Description of this scenario:
+ \draw [rounded corners, fill=black!10!white] (11.1cm,0) rectangle (15.3cm,1.25cm);
+ \draw [anchor=west, black] (11.1cm,1.0cm) node {\textbf{(b)} post-publication:};
+ \draw [anchor=west, black] (11.3cm,0.6cm) node {\footnotesize Other researchers building upon};
+ \draw [anchor=west, black] (11.3cm,0.2cm) node {\footnotesize previously published work.};
+\end{tikzpicture}
diff --git a/tex/src/figure-data-lineage.tex b/tex/src/figure-data-lineage.tex
new file mode 100644
index 0000000..31f4380
--- /dev/null
+++ b/tex/src/figure-data-lineage.tex
@@ -0,0 +1,209 @@
+% All macros commented
+\newcommand{\paperpdf}{} % 1
+\newcommand{\papertex}{} % 2
+\newcommand{\projecttex}{} % 3
+\newcommand{\verifytex}{} % 4
+\newcommand{\initializetex}{} % 5
+\newcommand{\demoplottex}{} % 6
+\newcommand{\toolsperyear}{} % 7
+\newcommand{\tablethree}{} % 8
+\newcommand{\menkexlsx}{} % 9
+\newcommand{\inputsconf}{} % 10
+\newcommand{\downloadtex}{} % 11
+\newcommand{\formattex}{} % 12
+\newcommand{\demoyearconf}{} % 13
+\newcommand{\expandingproject}{} % 14
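+%% Each `\ifdefined' block in the figure below is drawn only when its
+%% macro above is defined; comment out a macro to hide that element
+%% (for example, when building the figure up incrementally in slides).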
+
+
+
+
+
+%% Start the TiKZ picture environment.
+\begin{tikzpicture}[
+ line width=1.5pt,
+ black!50,
+ text=black,
+]
+
+ %% Use small fonts
+ \footnotesize
+
+ %% These white lines are only relevant when we want to add boxes in
+ %% multiple figures (for example to build slides). They are used to fix
+ %% the vertical position of the boxs in the figure so it doesn't change
+ %% as we add more boxes.
+ \draw [white] (-7.6,0) -- (7.5,0);
+ \draw [white] (0,-4.5) -- (0,4.9);
+
+ %% top-make.mk
+ \node [at={(-0.05cm,2mm)},
+ rectangle,
+ very thick,
+ text centered,
+ font=\ttfamily,
+ text width=2.8cm,
+ minimum width=15cm,
+ minimum height=7.8cm,
+ draw=green!50!black!50,
+ fill=black!10!green!2!white,
+ label={[shift={(0,-5mm)}]\texttt{top-make.mk}}] {};
+
+ %% Work-horse Makefiles. -5.6 -> -5.73 = -0.13
+ \node (initializemk) [node-makefile, at={(-5.73cm,-1.3cm)},
+ label={[shift={(0,-5mm)}]\texttt{initialize.mk}}] {};
+ \node (downloadmk) [node-makefile, at={(-2.93cm,-1.3cm)},
+ label={[shift={(0,-5mm)}]\texttt{download.mk}}] {};
+ \node (analysis1mk) [node-makefile, at={(-0.13cm,-1.3cm)},
+ label={[shift={(0,-5mm)}]\texttt{format.mk}}] {};
+ \node (analysis2mk) [node-makefile, at={(2.67cm,-1.3cm)},
+ label={[shift={(0,-5mm)}]\texttt{demo-plot.mk}}] {};
+
+ %% verify.mk
+ \node [at={(-5.3cm,-2.8cm)},
+ thick,
+ rectangle,
+ text centered,
+ font=\ttfamily,
+ text width=2.45cm,
+ minimum width=3.5cm,
+ minimum height=1.3cm,
+ draw=green!50!black!50,
+ fill=black!10!green!12!white,
+ label={[shift={(1cm,-5mm)}]\texttt{verify.mk}}] {};
+
+ %% Paper.mk
+ \node [at={(2.67cm,-2.8cm)},
+ thick,
+ rectangle,
+ text centered,
+ text width=2.8cm,
+ minimum width=8.5cm,
+ minimum height=1.3cm,
+ draw=green!50!black!50,
+ fill=black!10!green!12!white,
+ font=\ttfamily,
+ label={[shift={(0,-5mm)}]\texttt{paper.mk}}] {};
+
+ %% paper.pdf
+ \ifdefined\paperpdf
+ \node (paperpdf) [node-terminal, at={(5.47cm,-2.9cm)}] {paper.pdf};
+ \fi
+
+ %% paper.tex
+ \ifdefined\papertex
+ \node (reftex) [node-nonterminal, at={(2.67cm,-4.2cm)}] {references.tex};
+ \node (papertex) [node-nonterminal, at={(5.47cm,-4.2cm)}] {paper.tex};
+ \node (papertex-north) [node-point, at={(5.47cm,-3.58cm)}] {};
+ \draw [rounded corners] (reftex) |- (papertex-north);
+ \draw [->] (papertex) -- (paperpdf);
+ \fi
+
+ %% project.tex
+ \ifdefined\projecttex
+ \node (projecttex) [node-terminal, at={(-0.13cm,-2.9cm)}] {project.tex};
+ \draw [->] (projecttex) -- (paperpdf);
+ \fi
+
+ %% verify.tex
+ \ifdefined\verifytex
+ \node (verifytex) [node-terminal, at={(-5.73cm,-2.9cm)}] {verify.tex};
+ \draw [->] (verifytex) -- (projecttex);
+ \fi
+
+ %% Initialize.tex
+ \ifdefined\initializetex
+ \node (initializetex) [node-terminal, at={(-5.73cm,-0.8cm)}] {initialize.tex};
+ \node (initialize-south) [node-point, at={(-5.73cm,-1.5cm)}] {};
+ \draw [->] (initializetex) -- (verifytex);
+ \node [anchor=west, at={(-7.05cm,2.30cm)}] {Basic project info};
+ \node [anchor=west, at={(-7.05cm,1.95cm)}] {(e.g., Git commit).};
+ \node [anchor=west, at={(-7.05cm,1.10cm)}] {Also defines};
+ \node [anchor=west, at={(-7.05cm,0.75cm)}] {project structure};
+ \node [anchor=west, at={(-7.05cm,0.40cm)}] {(for \texttt{*.mk} files).};
+ \fi
+
+ %% demo-plot.tex
+ \ifdefined\demoplottex
+ \node (dptex) [node-terminal, at={(2.67cm,-0.8cm)}] {demo-plot.tex};
+ \draw [rounded corners, -] (dptex) |- (initialize-south);
+ \fi
+
+ %% tools-per-year.txt
+ \ifdefined\toolsperyear
+ \node (tpyear) [node-terminal, at={(2.67cm,0.3cm)}] {tools-per-\\year.txt};
+ \draw [->] (tpyear) -- (dptex);
+ \fi
+
+ %% table-3.txt
+ \ifdefined\tablethree
+ \node (tabthree) [node-terminal, at={(-0.13cm,1.1cm)}] {table-3.txt};
+ \draw [rounded corners, ->] (tabthree) |- (tpyear);
+ \fi
+
+ %% menkexlsx
+ \ifdefined\menkexlsx
+ \node (xlsx) [node-terminal, at={(-2.93cm,1.9cm)}] {menke20.xlsx};
+ \draw [->, rounded corners] (xlsx) |- (tabthree);
+ \fi
+
+ %% INPUTS.conf
+ \ifdefined\inputsconf
+ \node (INPUTS) [node-nonterminal, at={(-2.93cm,4.6cm)}] {INPUTS.conf};
+ \node (xlsx-west) [node-point, at={(-4.33cm,1.9cm)}] {};
+ \draw [->,rounded corners] (INPUTS.west) -| (xlsx-west) |- (xlsx);
+ \fi
+
+ %% download.tex
+ \ifdefined\downloadtex
+ \node (downloadtex) [node-terminal, at={(-2.93cm,-0.8cm)}] {download.tex};
+ \node (downloadtex-west) [node-point, at={(-4.33cm,-0.8cm)}] {};
+ \draw [->,rounded corners] (INPUTS.west) -| (downloadtex-west)
+ |- (downloadtex);
+ \draw [rounded corners, -] (downloadtex) |- (initialize-south);
+ \fi
+
+ %% format.tex
+ \ifdefined\formattex
+ \node (fmttex) [node-terminal, at={(-0.13cm,-0.8cm)}] {format.tex};
+ \draw [->] (tabthree) -- (fmttex);
+ \draw [rounded corners, -] (fmttex) |- (initialize-south);
+ \fi
+
+ %% demo-year.conf
+ \ifdefined\demoyearconf
+ \node (dyearconf) [node-nonterminal, at={(2.67cm,4.6cm)}] {demo-year.conf};
+ \node (dptex-west) [node-point, at={(1.27cm,-0.8cm)}] {};
+ \draw [->,rounded corners] (dyearconf.west) -| (dptex-west) |- (dptex);
+ \fi
+
+ %% Expanding project
+ \ifdefined\expandingproject
+
+ %% The Makefile.
+ \node [opacity=0.7, dashed] (analysis3mk) [node-makefile, at={(5.47cm,-1.3cm)},
+ label={[shift={(0,-5mm)}, opacity=0.7]\texttt{next-step.mk}}] {};
+
+ %% next-step.tex
+ \node [opacity=0.7, dashed] (a3tex) [node-terminal, at={(5.47cm,-0.8cm)}] {next-step.tex};
+ \draw [opacity=0.7, rounded corners, -, dashed] (a3tex) |- (initialize-south);
+
+ % out-3a.dat and out-3b.dat
+ \node [opacity=0.7, dashed] (out3a) [node-terminal, at={(5.47cm,2.7cm)}] {out-a.dat};
+ \node [opacity=0.7, dashed] (out3b) [node-terminal, at={(5.47cm,1.1cm)}] {out-b.dat};
+ \node (a3tex-east) [node-point, at={(6.87cm,-0.8cm)}] {};
+ \draw [opacity=0.7, ->,rounded corners, dashed] (out3a.east) -| (a3tex-east) |- (a3tex);
+ \draw [opacity=0.7, ->, dashed] (out3b) -- (a3tex);
+
+ %% demo-out.dat
+ \node [opacity=0.7, dashed] (dout) [node-terminal, at={(2.67cm,1.9cm)}] {demo-out.dat};
+ \draw [opacity=0.7, ->, rounded corners, dashed] (dout.south) |- (out3b);
+
+ %% links
+ \node (dout-west) [node-point, at={(1.27cm,1.9cm)}] {};
+ \draw [opacity=0.7, ->, dashed] (xlsx) -- (dout);
+ \node [opacity=0.7] (out3a-west) [node-point, at={(4.07cm,2.7cm)}] {};
+ \draw [opacity=0.7, ->,rounded corners, dashed] (xlsx) |- (out3a);
+ \node [opacity=0.7, dashed] (a3conf1) [node-nonterminal, at={(5.47cm,4.6cm)}] {param.conf};
+ \draw [opacity=0.7, rounded corners, dashed] (a3conf1.west) -| (out3a-west) |- (out3a);
+ \fi
+\end{tikzpicture}
diff --git a/tex/src/figure-file-architecture.tex b/tex/src/figure-file-architecture.tex
new file mode 100644
index 0000000..c3b55ff
--- /dev/null
+++ b/tex/src/figure-file-architecture.tex
@@ -0,0 +1,165 @@
+\newcommand{\fullfilearchitecture}{}
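+%% When `\fullfilearchitecture' is defined, the optional files and
+%% directories (in the `\ifdefined' blocks below) are also drawn;
+%% undefine it to get the simplified version of this figure.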
+
+\begin{tikzpicture}[
+ line width=1.5pt,
+ black!50,
+ text=black,
+]
+
+ %% Use small fonts
+ \footnotesize
+
+ %% project/
+ \node [dirbox, at={(0,4cm)}, minimum width=15cm, minimum height=9.9cm,
+ label={[shift={(0,-5mm)}]\texttt{project/}}] {};
+ \ifdefined\fullfilearchitecture
+ \node [node-nonterminal-thin, at={(-6.0cm,3.3cm)}] {COPYING};
+ \fi
+ \node [node-nonterminal-thin, at={(-3.5cm,3.3cm)}] {paper.tex};
+ \ifdefined\fullfilearchitecture
+ \node [node-nonterminal-thin, at={(-1.0cm,3.3cm)}] {project};
+ \node [node-nonterminal-thin, at={(+1.5cm,3.3cm)}] {README.md};
+ \node [node-nonterminal-thin, at={(+4.25cm,3.3cm)},
+ text width=2.5cm, text depth=-3pt] {README-hacking.md};
+ \fi
+
+ %% reproduce/
+ \node [dirbox, at={(-1.4cm,2.6cm)}, minimum width=11.9cm, minimum height=6cm,
+ label={[shift={(0,-5mm)}]\texttt{reproduce/}}, fill=brown!15!white] {};
+
+ %% reproduce/software/
+ \node [dirbox, at={(-4.35cm,2.1cm)}, minimum width=5.7cm, minimum height=5.3cm,
+ label={[shift={(0,-5mm)}]\texttt{software/}}, fill=brown!20!white] {};
+
+ %% reproduce/software/config/
+ \node [dirbox, at={(-5.75cm,1.5cm)}, minimum width=2.6cm, minimum height=2.1cm,
+ label={[shift={(0,-5mm)}]\texttt{config/}}, fill=brown!25!white] {};
+ \ifdefined\fullfilearchitecture
+ \node [node-nonterminal-thin, at={(-5.75cm,0.8cm)}] {TARGETS.conf};
+ \fi
+ \node [node-nonterminal-thin, at={(-5.75cm,0.3cm)}] {versions.conf};
+ \ifdefined\fullfilearchitecture
+ \node [node-nonterminal-thin, at={(-5.75cm,-0.2cm)}] {checksums.conf};
+ \fi
+
+ %% reproduce/software/make/
+ \node [dirbox, at={(-2.95cm,1.5cm)}, minimum width=2.6cm, minimum height=2.1cm,
+ label={[shift={(0,-5mm)}]\texttt{make/}}, fill=brown!25!white] {};
+ \ifdefined\fullfilearchitecture
+ \node [node-nonterminal-thin, at={(-2.95cm,0.8cm)}] {basic.mk};
+ \fi
+ \node [node-nonterminal-thin, at={(-2.95cm,0.3cm)}] {high-level.mk};
+ \ifdefined\fullfilearchitecture
+ \node [node-nonterminal-thin, at={(-2.95cm,-0.2cm)}] {python.mk};
+ \fi
+
+  %% reproduce/software/shell/
+ \node [dirbox, at={(-5.75cm,-0.8cm)}, minimum width=2.6cm, minimum height=1.6cm,
+ label={[shift={(0,-5mm)}]\texttt{shell/}}, fill=brown!25!white] {};
+ \ifdefined\fullfilearchitecture
+ \node [node-nonterminal-thin, at={(-5.75cm,-1.5cm)}] {configure.sh};
+ \node [node-nonterminal-thin, at={(-5.75cm,-2.0cm)}] {bashrc.sh};
+ \fi
+
+ %% reproduce/software/bibtex/
+ \node [dirbox, at={(-2.95cm,-0.8cm)}, minimum width=2.6cm, minimum height=2.1cm,
+ label={[shift={(0,-5mm)}]\texttt{bibtex/}}, fill=brown!25!white] {};
+ \ifdefined\fullfilearchitecture
+ \node [node-nonterminal-thin, at={(-2.95cm,-1.5cm)}] {fftw.tex};
+ \node [node-nonterminal-thin, at={(-2.95cm,-2.0cm)}] {numpy.tex};
+ \node [node-nonterminal-thin, at={(-2.95cm,-2.5cm)}] {gnuastro.tex};
+ \fi
+
+ %% reproduce/analysis/
+ \node [dirbox, at={(1.55cm,2.1cm)}, minimum width=5.7cm, minimum height=5.3cm,
+ label={[shift={(0,-5mm)}]\texttt{analysis/}}, fill=brown!20!white] {};
+
+ %% reproduce/analysis/config/
+ \node [dirbox, at={(0.15cm,1.5cm)}, minimum width=2.6cm, minimum height=2.6cm,
+ label={[shift={(0,-5mm)}]\texttt{config/}}, fill=brown!25!white] {};
+ \node [node-nonterminal-thin, at={(0.15cm,0.8cm)}] {INPUTS.conf};
+ \node [node-nonterminal-thin, at={(0.15cm,0.3cm)}] {param-1.conf};
+ \node [node-nonterminal-thin, at={(0.15cm,-0.2cm)}] {param-2a.conf};
+ \node [node-nonterminal-thin, at={(0.15cm,-0.7cm)}] {param-2b.conf};
+
+ %% reproduce/analysis/make/
+ \node [dirbox, at={(2.95cm,1.5cm)}, minimum width=2.6cm, minimum height=2.6cm,
+ label={[shift={(0,-5mm)}]\texttt{make/}}, fill=brown!25!white] {};
+ \node [node-nonterminal-thin, at={(2.95cm,0.8cm)}] {top-prepare.mk};
+ \node [node-nonterminal-thin, at={(2.95cm,0.3cm)}] {top-make.mk};
+ \node [node-nonterminal-thin, at={(2.95cm,-0.2cm)}] {initialize.mk};
+ \node [node-nonterminal-thin, at={(2.95cm,-0.7cm)}] {format.mk};
+
+ %% reproduce/analysis/bash/
+ \node [dirbox, at={(0.15cm,-1.3cm)}, minimum width=2.6cm, minimum height=1.1cm,
+ label={[shift={(0,-5mm)}]\texttt{bash/}}, fill=brown!25!white] {};
+ \ifdefined\fullfilearchitecture
+ \node [node-nonterminal-thin, at={(0.15cm,-2.0cm)}] {process-A.sh};
+ \fi
+
+ %% reproduce/analysis/python/
+ \node [dirbox, at={(2.95cm,-1.3cm)}, minimum width=2.6cm, minimum height=1.6cm,
+ label={[shift={(0,-5mm)}]\texttt{python/}}, fill=brown!25!white] {};
+ \ifdefined\fullfilearchitecture
+ \node [node-nonterminal-thin, at={(2.95cm,-2.0cm)}] {operation-B.py};
+ \node [node-nonterminal-thin, at={(2.95cm,-2.5cm)}] {fitting-plot.py};
+ \fi
+
+ %% tex/
+ \node [dirbox, at={(6cm,2.6cm)}, minimum width=2.7cm, minimum height=6cm,
+ label={[shift={(0,-5mm)}]\texttt{tex/}}, fill=brown!15!white] {};
+
+ %% tex/src/
+ \node [dirbox, at={(6cm,2.1cm)}, minimum width=2.5cm, minimum height=1.6cm,
+ label={[shift={(0,-5mm)}]\texttt{src/}}, fill=brown!20!white] {};
+ \node [node-nonterminal-thin, at={(6cm,1.4cm)}] {references.tex};
+ \ifdefined\fullfilearchitecture
+ \node [node-nonterminal-thin, at={(6cm,0.9cm)}] {figure-1.tex};
+ \fi
+
+ %% tex/build/
+ \ifdefined\fullfilearchitecture
+ \node [dirbox, at={(6cm,0.1cm)}, minimum width=2.5cm, minimum height=1.3cm,
+          label={[shift={(0,-5mm)}]\texttt{build/}}, dashed, fill=brown!20!white] {};
+ \node [anchor=west, at={(4.7cm,-0.7cm)}] {\scriptsize\sf Symbolic link to};
+ \node [anchor=west, at={(4.7cm,-1.0cm)}] {\scriptsize\sf \LaTeX{} build directory.};
+ \fi
+
+ %% tex/tikz/
+ \ifdefined\fullfilearchitecture
+ \node [dirbox, at={(6cm,-1.6cm)}, minimum width=2.5cm, minimum height=1.6cm,
+ label={[shift={(0,-5mm)}]\texttt{tikz/}}, dashed, fill=brown!20!white] {};
+ \node [anchor=west, at={(4.67cm,-2.4cm)}] {\scriptsize\sf Symbolic link to TikZ};
+ \node [anchor=west, at={(4.67cm,-2.7cm)}] {\scriptsize\sf directory (figures built};
+ \node [anchor=west, at={(4.67cm,-3.0cm)}] {\scriptsize\sf by \LaTeX).};
+ \fi
+
+ %% .git/
+ \ifdefined\fullfilearchitecture
+ \node [dirbox, at={(0,-3.6cm)}, minimum width=14.2cm, minimum height=7mm,
+ label={[shift={(0,-5mm)}]\texttt{.git/}}, fill=brown!15!white] {};
+ \node [anchor=north, at={(0cm,-3.9cm)}]
+ {\scriptsize\sf Full project temporal provenance (version controlled history) in Git.};
+ \fi
+
+ %% .local/
+ \ifdefined\fullfilearchitecture
+ \node [dirbox, at={(-3.6cm,-4.5cm)}, minimum width=7cm, minimum height=1.2cm,
+ label={[shift={(0,-5mm)}]\texttt{.local/}}, dashed, fill=brown!15!white] {};
+ \node [anchor=west, at={(-7.1cm,-5.2cm)}]
+ {\scriptsize\sf Symbolic link to project's software environment, e.g., };
+ \node [anchor=west, at={(-7.1cm,-5.5cm)}]
+ {\scriptsize\sf Python or R, run `\texttt{.local/bin/python}' or `\texttt{.local/bin/R}'};
+ \fi
+
+ %% .build/
+ \ifdefined\fullfilearchitecture
+ \node [dirbox, at={(3.6cm,-4.5cm)}, minimum width=7cm, minimum height=1.2cm,
+ label={[shift={(0,-5mm)}]\texttt{.build/}}, dashed, fill=brown!15!white] {};
+ \node [anchor=west, at={(0.1cm,-5.2cm)}]
+ {\scriptsize\sf Symbolic link to project's top-level build directory.};
+ \node [anchor=west, at={(0.1cm,-5.5cm)}]
+ {\scriptsize\sf Enabling easy access to all of project's built components.};
+ \fi
+
+\end{tikzpicture}
diff --git a/tex/src/figure-project-outline.tex b/tex/src/figure-project-outline.tex
new file mode 100644
index 0000000..dbf281c
--- /dev/null
+++ b/tex/src/figure-project-outline.tex
@@ -0,0 +1,229 @@
+% Copyright (C) 2018-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
+%
+% This LaTeX source is free software: you can redistribute it and/or
+% modify it under the terms of the GNU General Public License as
+% published by the Free Software Foundation, either version 3 of the
+% License, or (at your option) any later version.
+%
+% This LaTeX source is distributed in the hope that it will be useful,
+% but WITHOUT ANY WARRANTY; without even the implied warranty of
+% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+% General Public License for more details.
+%
+% You should have received a copy of the GNU General Public License
+% along with this LaTeX source. If not, see <https://www.gnu.org/licenses/>.
+
+%% Environment variables.
+\newcommand{\allopacity}{1}
+\newcommand{\sver}{}
+\newcommand{\srep}{}
+\newcommand{\dver}{}
+\newcommand{\ddver}{}
+\newcommand{\confopt}{}
+\newcommand{\confenv}{}
+\newcommand{\containers}{}
+\newcommand{\db}{}
+\newcommand{\calib}{}
+\newcommand{\corr}{}
+\newcommand{\runord}{}
+\newcommand{\runopt}{}
+\newcommand{\humanerr}{}
+\newcommand{\confirmbias}{}
+\newcommand{\depupdate}{}
+\newcommand{\coauth}{}
+\newcommand{\varsinpaper}{}
+\newcommand{\recordinfo}{}
+\newcommand{\softcite}{}
+\newcommand{\prevchange}{}
+
+\begin{tikzpicture}[>=stealth, thick, black!50, text=black,
+ every new ->/.style={shorten >=1pt},
+ hv path/.style={to path={-| (\tikztotarget)}},
+ graphs/every graph/.style={edges=rounded corners}]
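+  %% (`hv path' makes an edge go horizontally first and then vertically,
+  %% via TikZ's `-|' path operator; it is used in the graph below.)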
+
+ \footnotesize
+
+ %% This white line is only added to fix the vertical position of the
+ %% figure so it doesn't change as we add more boxes.
+ \draw [white] (0,-4.3) -- (0,3.8);
+ \draw [white] (-0.5,0) -- (12,0);
+
+ %% Box showing containers.
+ \ifdefined\containers
+ \filldraw[orange!20!white, rounded corners=2mm] (-0.1,0.3) rectangle (5.6,3.8);
+ \draw (-0.1,3.65) node [anchor=west] {\scriptsize Existing solutions:};
+ \draw (0,3.35) node [anchor=west] {\scriptsize Virtual machines};
+ \draw (0,3.05) node [anchor=west] {\scriptsize Containers};
+ \draw (0,2.75) node [anchor=west] {\scriptsize Package managers};
+ \fi
+
+ \graph[grow right sep, simple] {
+ { [nodes={yshift=7mm}]
+ soft/Software [gbox] -> build/Build [bbox],
+ hard/Hardware/data [gbox, yshift=-0.5cm] --
+ p1 [coordinate, xshift=2cm, yshift=-0.5cm]
+ } -- [hv path]
+ p2 [coordinate] ->
+ srun/Run software on data [bbox] ->
+ paper/Paper [bbox]
+ };
+
+ \ifdefined\paperfinal
+ \node (happy) [inner sep=0pt, below=of paper, yshift=+8mm]
+ {\includegraphics[width=2cm]{img/happy-question.jpg}};
+ \node (happyurl) [below=of happy, xshift=-9.5mm, yshift=+1cm]
+ {\tiny \url{https://heywhatwhatdidyousay.wordpress.com}};
+ \node (qurl) [below=of happyurl, xshift=10.5mm, yshift=+1.2cm]
+ {\tiny \url{http://pngimages.net}};
+ \else
+ \ifdefined\paperinit
+ \node (happy) [inner sep=0pt, below=of paper, yshift=+8mm]
+ {\includegraphics[width=2cm]{img/happy.jpg}};
+ \node (happyurl) [below=of happy, xshift=-9.5mm, yshift=+1cm]
+ {\tiny \url{https://heywhatwhatdidyousay.wordpress.com}};
+ \fi
+ \fi
+
+ %% Software...
+ \let\ppopacity\undefined
+ \ifdefined\allopacity \newcommand{\ppopacity}{1}
+ \else \ifdefined\focusonpackages
+ \newcommand{\ppopacity}{1}
+ \else
+ \newcommand{\ppopacity}{0.3}
+ \fi
+ \fi
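+  %% (This pattern repeats below for each stage: the question boxes are
+  %% fully opaque when `\allopacity' or the stage's `\focuson...' macro
+  %% is defined, and dimmed otherwise.)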
+
+ \ifdefined\sver
+ \node (sver)
+ [rbox, above=of soft, yshift=-8mm, opacity=\ppopacity]
+ {What version?};
+ \fi
+ \ifdefined\srep
+ \node (srep)
+ [rbox, above=of sver, yshift=-8mm, opacity=\ppopacity]
+ {Repository?};
+ \fi
+
+ %% Build
+ \ifdefined\dver
+ \node (dver)
+ [rbox, above=of build, yshift=-8mm, opacity=\ppopacity]
+ {Dependencies?};
+ \fi
+ \ifdefined\ddver
+ \node (ddver)
+ [rbox, above=of dver, yshift=-8mm, opacity=\ppopacity]
+ {Dep. versions?};
+ \fi
+ \ifdefined\confopt
+ \node (confopt)
+ [rbox, above=of ddver, yshift=-8mm, opacity=\ppopacity]
+ {Config options?};
+ \fi
+ \ifdefined\confenv
+ \node (confenv)
+ [rbox, above=of confopt, yshift=-8mm, opacity=\ppopacity]
+ {Config environment?};
+ \fi
+
+ %% Hardware/data
+ \let\ppopacity\undefined
+ \ifdefined\allopacity \newcommand{\ppopacity}{1}
+ \else \ifdefined\focusonhardware
+ \newcommand{\ppopacity}{1}
+ \else
+ \newcommand{\ppopacity}{0.3}
+ \fi
+ \fi
+ \ifdefined\db
+ \node (db)
+ [rbox, below=of hard, yshift=+8mm, opacity=\ppopacity]
+      {Database or PID?};
+ \fi
+ \ifdefined\calib
+ \node (calib)
+ [rbox, below=of db, yshift=+8mm, opacity=\ppopacity]
+ {Calibration/version?};
+ \fi
+ \ifdefined\corr
+ \node (corr)
+ [rbox, below=of calib, yshift=+8mm, opacity=\ppopacity]
+ {Integrity?};
+ \fi
+
+ %% Run software ...
+ \let\ppopacity\undefined
+ \ifdefined\allopacity \newcommand{\ppopacity}{1}
+ \else \ifdefined\focusonrun
+ \newcommand{\ppopacity}{1}
+ \else
+ \newcommand{\ppopacity}{0.3}
+ \fi
+ \fi
+ \ifdefined\runord
+ \node (runord)
+ [rbox, above=of srun, yshift=-8mm, opacity=\ppopacity]
+ {What order?};
+ \fi
+ \ifdefined\runopt
+ \node (runopt)
+ [rbox, above=of runord, yshift=-8mm, opacity=\ppopacity]
+ {Runtime options?};
+ \fi
+ \ifdefined\humanerr
+ \node (humanerr)
+ [rbox, above=of runopt, yshift=-8mm, opacity=\ppopacity]
+ {Human error?};
+ \fi
+ \ifdefined\confirmbias
+ \node (confirmbias)
+ [rbox, above=of humanerr, yshift=-8mm, opacity=\ppopacity]
+ {Confirmation bias?};
+ \fi
+ \ifdefined\depupdate
+ \node (depupdate)
+ [rbox, below=of srun, yshift=+8mm, opacity=\ppopacity]
+ {Environment update?};
+ \fi
+ \ifdefined\coauth
+ \node (coaut)
+ [rbox, below=of depupdate, yshift=+8mm, opacity=\ppopacity]
+ {In sync with coauthors?};
+ \fi
+
+ %% Paper ...
+ \let\ppopacity\undefined
+ \ifdefined\allopacity \newcommand{\ppopacity}{1}
+ \else \ifdefined\focusonpaper
+ \newcommand{\ppopacity}{1}
+ \else
+ \newcommand{\ppopacity}{0.3}
+ \fi
+ \fi
+ \ifdefined\varsinpaper
+ \node (varsinpaper)
+ [rbox, above=of paper, xshift=-1mm, yshift=-8mm, opacity=\ppopacity]
+ {Sync with analysis?};
+ \fi
+ \ifdefined\recordinfo
+ \node (recordinfo)
+ [rbox, above=of varsinpaper, yshift=-8mm, opacity=\ppopacity]
+ {Report this info?};
+ \fi
+ \ifdefined\softcite
+ \node (softcite)
+ [rbox, above=of recordinfo, yshift=-8mm, opacity=\ppopacity]
+ {Cited software?};
+ \fi
+ \ifdefined\prevchange
+ \node (prevchange)
+ [rbox, above=of softcite, yshift=-8mm, opacity=\ppopacity]
+ {History recorded?};
+ \fi
+
+ \ifdefined\gitlogo
+ \node [inner sep=0pt, opacity=0.5] at (5.5,0)
+ {\includegraphics[width=10cm]{img/git.png}};
+ \fi
+\end{tikzpicture}
diff --git a/tex/src/figure-src-demoplot.tex b/tex/src/figure-src-demoplot.tex
new file mode 100644
index 0000000..6d788f5
--- /dev/null
+++ b/tex/src/figure-src-demoplot.tex
@@ -0,0 +1,32 @@
+\begin{tcolorbox}[title=\inlinecode{\textcolor{white}{demo-plot.mk}}\hfill(Simplified contents)]
+ \footnotesize
+ \texttt{\mkcomment{1ST MAKE RULE: build the directory hosting the converted table.}}\\
+ \texttt{\mkvar{a2dir} = \$(\mkvar{texdir})/tools-per-year}\\
+ \texttt{\mktarget{\$(a2dir)}:; \mkprog{mkdir} \$@}
+
+ \vspace{2em}
+ \texttt{\mkcomment{2ND MAKE RULE: extract necessary info from raw table.}}\\
+ \texttt{\mkvar{a2mk20f1c} = \$(\mkvar{a2dir})/tools-per-year.txt}\\
+ \texttt{\mktarget{\$(a2mk20f1c)}: \$(\mkvar{mk20tab3}) | \$(\mkvar{a2dir})}\\
+ \texttt{\mktab{}\mkprog{awk} '!/\^{}\#/ \{all[\$\$1]+=\$\$2; id[\$\$1]+=\$\$3;\}} \textbackslash\\
+  \texttt{\mktab{}{ }{ }{ }{ }{ }END\{ for(year in all) print year, 100*id[year]/all[year], all[year] \}'} \textbackslash\\
+ \texttt{\mktab{}{ }{ }{ }{ }> \$@}
+
+ \vspace{2em}
+ \texttt{\mkcomment{3RD MAKE RULE: Main LaTeX macro file for reported values in text.}}\\
+ \texttt{\mkvar{pconfdir} = reproduce/analysis/config}\\
+  \texttt{\mktarget{\$(mtexdir)/demo-plot.tex}: \$(\mkvar{a2mk20f1c}) \$(\mkvar{pconfdir})/demo-year.conf}
+
+ %% We need an empty line here for the extra space to work.
+ \texttt{\recipecomment{First year data were taken (first column of first row).}}\\
+ \texttt{\mktab{}v=\$\$(awk 'NR==1\{print \$\$1\}' \$(\mkvar{a2mk20f1c}))}\\
+ \texttt{\mktab{}\mkprog{echo} "\textbackslash{}newcommand\{\textbackslash{}menkefirstyear\}\{\$\$v\}" > \$@}
+
+ %% We need an empty line here for the extra space to work.
+ \texttt{\recipecomment{Number of papers in the demonstration year. The year is defined in}}
+
+  \texttt{\recipecomment{`\$(pconfdir)/demo-year.conf' as `menke-demo-year' and also passed on to LaTeX.}}\\
+ \texttt{\mktab{}v=\$\$(awk '\$\$1==\$(\mkvar{menke-demo-year})\{print \$\$3\}' \$(\mkvar{a2mk20f1c}))}\\
+ \texttt{\mktab{}\mkprog{echo} "\textbackslash{}newcommand\{\textbackslash{}menkenumpapersdemocount\}\{\$\$v\}"{ }>> \$@} \\
+ \texttt{\mktab{}\mkprog{echo} "\textbackslash{}newcommand\{\textbackslash{}menkenumpapersdemoyear\}\{\$(\mkvar{menke-demo-year})\}"{ }>> \$@}
+\end{tcolorbox}
diff --git a/tex/src/figure-src-download.tex b/tex/src/figure-src-download.tex
new file mode 100644
index 0000000..4d4b755
--- /dev/null
+++ b/tex/src/figure-src-download.tex
@@ -0,0 +1,8 @@
+\begin{tcolorbox}[title=\inlinecode{\textcolor{white}{download.mk}} \hfill\textcolor{white}{(only \LaTeX{} macro's rule)}]
+ \footnotesize
+ \texttt{\mkcomment{Write download URL into the paper (through a LaTeX macro).}}
+
+ \texttt{\mktarget{\$(mtexdir)/download.tex}: \$(\mkvar{indir})/menke20.xlsx}
+
+ \texttt{\mktab{}\mkprog{echo} "\textbackslash{}newcommand{\textbackslash{}menketwentyurl}\{\mktarget{\$(MK20URL)}\}" > \$@}
+\end{tcolorbox}
diff --git a/tex/src/figure-src-format.tex b/tex/src/figure-src-format.tex
new file mode 100644
index 0000000..f860d54
--- /dev/null
+++ b/tex/src/figure-src-format.tex
@@ -0,0 +1,59 @@
+\begin{tcolorbox}[title=\inlinecode{\textcolor{white}{format.mk}}\hfill(Simplified contents)]
+ \footnotesize
+ \texttt{\mkcomment{1ST MAKE RULE: build the directory hosting the converted table.}}
+
+ \texttt{\mkvar{a1dir} = \$(\mkvar{BDIR})/format}
+
+ \texttt{\mktarget{\$(a1dir)}:}
+
+ \texttt{\mktab{}\mkprog{mkdir} \$@}
+
+ \vspace{2em}
+ \texttt{\mkcomment{2ND MAKE RULE: Convert the XLSX table to a simple plain-text table.}}
+
+ \texttt{\mkvar{mk20tab3} = \$(\mkvar{a1dir})/menke20-table-3.txt}
+
+ \texttt{\mktarget{\$(mk20tab3)}: \$(\mkvar{indir})/menke20.xlsx | \$(\mkvar{a1dir})}
+
+ \texttt{\recipecomment{Call XLSX I/O to convert all the spreadsheets into different CSV files.}}
+
+ \texttt{\recipecomment{We only want the `table-3' spreadsheet, but XLSX I/O does not allow setting its}}
+
+  \texttt{\recipecomment{output filename. For simplicity, let's assume it is written to `table-3.csv'.}}
+
+ \texttt{\mktab{}\mkprog{xlsxio\_xlsx2csv} \$<}
+
+ \vspace{0.5em}
+ \texttt{\recipecomment{Use GNU AWK to keep the desired columns in space-separated, fixed-width format.}}
+
+ \texttt{\recipecomment{With `FPAT' commas within double quotes are not counted as columns.}}
+
+ \texttt{\mktab{}\mkprog{awk} 'NR>1\{printf("\%-10d\%-10d\%-10d \%s\textbackslash{}n", \$\$2, \$\$3, \$\$(NF-1)*\$\$NF, \$\$1)\}' \textbackslash}
+
+ \texttt{\mktab{}{ }{ }{ }{ }FPAT='([\^{},]+)|("[\^{}"]+")' table-3.csv > \$@}
+
+ \vspace{0.5em}
+ \texttt{\recipecomment{Delete the temporary CSV file.}}
+
+ \texttt{\mktab{}\mkprog{rm} table-3.csv}
+
+ \vspace{2em}
+ \texttt{\mkcomment{3RD MAKE RULE: Main LaTeX macro file for reported values.}}
+
+  \texttt{\mktarget{\$(mtexdir)/format.tex}: \$(\mkvar{mk20tab3})}
+
+ \texttt{\recipecomment{Count the total number of papers in their study to report in this paper.}}
+
+  \texttt{\mktab{}v=\$\$(\mkprog{awk} '!/\^{}\#/\{c+=\$\$2\} END\{print c\}' \$(\mkvar{mk20tab3}))}
+
+ \texttt{\mktab{}\mkprog{echo} "\textbackslash{}newcommand\{\textbackslash{}menkenumpapers\}\{\$\$v\}" > \$@}
+
+ \vspace{0.5em}
+ \texttt{\recipecomment{Count total number of journals in that study.}}
+
+  \texttt{\mktab{}v=\$\$(\mkprog{awk} 'BEGIN\{FIELDWIDTHS="31 10000"\} !/\^{}\#/\{print \$\$2\}' \$(\mkvar{mk20tab3}) \textbackslash}
+
+  \texttt{\mktab{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }| \mkprog{uniq} | \mkprog{wc} -l)}
+
+ \texttt{\mktab{}\mkprog{echo} "\textbackslash{}newcommand\{\textbackslash{}menkenumjournals\}\{\$\$v\}" >> \$@}
+\end{tcolorbox}
diff --git a/tex/src/figure-src-inputconf.tex b/tex/src/figure-src-inputconf.tex
new file mode 100644
index 0000000..33742df
--- /dev/null
+++ b/tex/src/figure-src-inputconf.tex
@@ -0,0 +1,7 @@
+\begin{tcolorbox}[title=\inlinecode{\textcolor{white}{INPUTS.conf}}]
+ \footnotesize
+ \texttt{\mkvar{INPUT-{\menketwentyxlsxname}-sha256} = \menketwentychecksum}\\
+ \texttt{\mkvar{INPUT-{\menketwentyxlsxname}-size} = \menketwentybytesize}\\
+ \texttt{\mkvar{INPUT-{\menketwentyxlsxname}-url} = {\scriptsize \menketwentyurl}}\\
+ \vspace{-3mm}
+\end{tcolorbox}
diff --git a/tex/src/figure-src-topmake.tex b/tex/src/figure-src-topmake.tex
new file mode 100644
index 0000000..6ed315b
--- /dev/null
+++ b/tex/src/figure-src-topmake.tex
@@ -0,0 +1,24 @@
+\begin{tcolorbox}[title=\inlinecode{\textcolor{white}{top-make.mk}}\hfill\textcolor{white}{(simplified)}]
+ \footnotesize
+
+ \texttt{\mkcomment{Ultimate target/purpose of project (`paper.pdf' is the final target of the final subMakefile}}\par
+ \texttt{\mkcomment{that is loaded/included below)}}\par
+ \texttt{\mktarget{all}: paper.pdf}
+
+ \vspace{1em}
+ \texttt{\mkcomment{List of subMakefiles to be loaded in order.}}\par
+ \texttt{\mkvar{makesrc} = initialize \textbackslash}\par
+ \texttt{{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }download \textbackslash}\par
+ \texttt{{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }format \textbackslash}\par
+ \texttt{{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }demo-plot \textbackslash}\par
+ \texttt{{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }verify \textbackslash}\par
+ \texttt{{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }paper}\par
+
+ \vspace{1em}
+ \texttt{\mkcomment{Include all the configuration files.}}\par
+ \texttt{\textcolor{purple}{include} reproduce/analysis/config/*.conf}
+
+ \vspace{1em}
+ \texttt{\mkcomment{Include the subMakefiles in the specified order.}}\par
+ \texttt{\textcolor{purple}{include} \$(\textcolor{blue}{foreach} s, \$(\mkvar{makesrc}), reproduce/analysis/make/\$(\mkvar{s}).mk)}
+\end{tcolorbox}
diff --git a/tex/src/figure-tools-per-year.tex b/tex/src/figure-tools-per-year.tex
new file mode 100644
index 0000000..738cdad
--- /dev/null
+++ b/tex/src/figure-tools-per-year.tex
@@ -0,0 +1,258 @@
+% All macros commented
+\newcommand{\paperpdf}{} % 1
+\newcommand{\papertex}{} % 2
+\newcommand{\projecttex}{} % 3
+\newcommand{\verifytex}{} % 4
+\newcommand{\demoplottex}{} % 5
+\newcommand{\toolsperyear}{} % 6
+\newcommand{\tablethree}{} % 7
+\newcommand{\menkexlsx}{} % 8
+\newcommand{\inputsconf}{} % 9
+\newcommand{\downloadtex}{} % 10
+\newcommand{\formattex}{} % 11
+\newcommand{\demoyearconf}{} % 12
+\newcommand{\initializetex}{} % 13
+\newcommand{\expandingproject}{} % 14
+
+
+
+\begin{tikzpicture}
+
+ %% These white lines are only relevant when we want to add boxes in
+ %% multiple figures (for example to build slides). They are used to fix
+  %% the vertical position of the boxes in the figure so it doesn't change
+ %% as we add more boxes.
+ \draw [white] (-9cm,0) -- (9cm,0);
+ \draw [white] (0,-5cm) -- (0,4cm);
+
+ %% Right-side Y axis (red, logarithmic histogram). This should be created
+ %% under the line, because the line doesn't interfere with interpretting
+ %% the histogram, but the inverse is not true.
+ \begin{axis}[
+ ymode=log,
+ width=5cm,
+ height=5cm,
+ axis on top,
+ axis x line=none,
+ axis y line*=right,
+ at={(-7.4cm,-1.8cm)},
+ enlarge x limits = false,
+ ylabel={Num. papers (red, log-scale)},
+ max space between ticks=20,
+ ]
+ \addplot+ [ybar, mark=none, fill=red!40!white, red!40!white]
+ table [x index=0, y index=2] {tex/build/to-publish/tools-per-year.txt};
+ \end{axis}
+
+ %% Left-side Y axis (green, linear/percent line).
+ \begin{axis}[
+ ymin=0,
+ ymax=100,
+ width=5cm,
+ height=5cm,
+ xlabel={Year},
+ axis y line*=left,
+ at={(-7.4cm,-1.8cm)},
+ enlarge x limits = false,
+ ylabel={Frac. papers with tools (green)},
+ yticklabel=\pgfmathprintnumber{\tick}\,\%,
+ x tick label style={/pgf/number format/1000 sep=},
+ ]
+
+ %% Linear plot, showing the number of papers mentioning tools.
+ \addplot+ [mark=none, ultra thick, green!60!black]
+ table {tex/build/to-publish/tools-per-year.txt};
+ \end{axis}
+
+ %% Use small fonts for the rest.
+ \scriptsize
+
+ %% top-make.mk
+ \node [rectangle,
+ very thick,
+ text centered,
+ font=\ttfamily,
+ text width=2.8cm,
+ anchor=north west,
+ at={(-2.6cm,3.5cm)},
+ minimum width=11.6cm,
+ minimum height=7.25cm,
+ draw=green!50!black!50,
+ fill=black!10!green!2!white,
+ label={[shift={(0,-5mm)}]\texttt{top-make.mk}}] {};
+
+ %% verify.mk
+ \node [at={(-0.7cm,-2.9cm)},
+ thick,
+ rectangle,
+ text centered,
+ font=\ttfamily,
+ text width=2.45cm,
+ minimum width=3.5cm,
+ minimum height=1.3cm,
+ draw=green!50!black!50,
+ fill=black!10!green!12!white,
+ label={[shift={(1cm,-5mm)}]\texttt{verify.mk}}] {};
+
+ %% Paper.mk
+ \node [at={(5.35cm,-2.9cm)},
+ thick,
+ rectangle,
+ text centered,
+ text width=2.8cm,
+ minimum width=7cm,
+ minimum height=1.3cm,
+ draw=green!50!black!50,
+ fill=black!10!green!12!white,
+ font=\ttfamily,
+ label={[shift={(0,-5mm)}]\texttt{paper.mk}}] {};
+
+ %% Work-horse Makefiles, the X axis value of the files is the same. The Y
+ %% axis values range like this:
+ %% 2.1cm
+ %% 1.3cm
+ %% 0.5cm
+ %% -0.4cm
+ %% -1.3cm
+ \node [node-makefile, at={(-1.4cm,3cm)},
+ label={[shift={(0,-5mm)}]\texttt{initialize.mk}}] {};
+ \node [node-makefile, at={(0.9cm,3cm)},
+ label={[shift={(0,-5mm)}]\texttt{download.mk}}] {};
+ \node [node-makefile, at={(3.2cm,3cm)},
+ label={[shift={(0,-5mm)}]\texttt{format.mk}}] {};
+ \node [node-makefile, at={(5.5cm,3cm)},
+ label={[shift={(0,-5mm)}]\texttt{demo-plot.mk}}] {};
+
+ %% paper.pdf
+ \ifdefined\paperpdf
+ \node (paperpdf) [node-terminal, at={(7.8cm,-3cm)}] {paper.pdf};
+ \fi
+
+ %% paper.tex and references.tex
+ \ifdefined\papertex
+ \node (reftex) [node-nonterminal, at={(5.5cm,-3.9cm)}] {references.tex};
+ \node (papertex) [node-nonterminal, at={(7.8cm,-3.9cm)}] {paper.tex};
+ \node (papertex-north) [node-point, at={(7.8cm,-3.65cm)}] {};
+ \draw [rounded corners, black!50, line width=1.5pt] (reftex) |- (papertex-north);
+ \draw [->, black!50, line width=1.5pt] (papertex) -- (paperpdf);
+ \fi
+
+ %% project.tex
+ \ifdefined\projecttex
+ \node (projecttex) [node-terminal, at={(3.2cm,-3cm)}] {project.tex};
+ \draw [->, black!50, line width=1.5pt] (projecttex) -- (paperpdf);
+ \fi
+
+ %% verify.tex
+ \ifdefined\verifytex
+ \node (verifytex) [node-terminal, at={(-1.4cm,-3cm)}] {verify.tex};
+ \draw [->, black!50, line width=1.5pt] (verifytex) -- (projecttex);
+ \fi
+
+ %% demo-plot.tex
+ \ifdefined\demoplottex
+ \node (initialize-south) [node-point, at={(-1.4cm,-2cm)}] {};
+ \node (verifytop) [node-point, at={(-1.4cm,-2.75cm)}] {};
+ \node (dptex) [node-terminal, at={(5.5cm,-1.3cm)}] {demo-plot.tex};
+ \draw [rounded corners, ->, black!50, line width=1.5pt]
+ (dptex) |- (initialize-south) |- (verifytop);
+ \fi
+
+ %% tools-per-year.txt
+ \ifdefined\toolsperyear
+ \node (tpyear) [node-terminal, at={(5.5cm,-0.4cm)}] {tools-per-\\year.txt};
+ \draw [->, black!50, line width=1.5pt] (tpyear) -- (dptex);
+ \fi
+
+ %% table-3.txt
+ \ifdefined\tablethree
+ \node (tabthree) [node-terminal, at={(3.2cm,0.5cm)}] {table-3.txt};
+ \draw [rounded corners, ->, black!50, line width=1.5pt] (tabthree) |- (tpyear);
+ \fi
+
+ %% menkexlsx
+ \ifdefined\menkexlsx
+ \node (xlsx) [node-terminal, at={(0.9cm,1.3cm)}] {menke20.xlsx};
+ \draw [->, rounded corners, black!50, line width=1.5pt] (xlsx) |- (tabthree);
+ \fi
+
+ %% INPUTS.conf
+ \ifdefined\inputsconf
+ \node (INPUTS) [node-nonterminal, at={(0.9cm,4cm)}] {INPUTS.conf};
+ \node (xlsx-west) [node-point, at={(-0.25cm,1.37cm)}] {};
+ \draw [->,rounded corners, black!50, line width=1.5pt]
+ (INPUTS.west) -| (xlsx-west) |- (xlsx);
+ \fi
+
+ %% download.tex
+ \ifdefined\downloadtex
+ \node (downloadtex) [node-terminal, at={(0.9cm,-1.3cm)}] {download.tex};
+ \node (downloadtex-west) [node-point, at={(-0.25cm,-1.25cm)}] {};
+ \draw [->,rounded corners, black!50, line width=1.5pt]
+ (INPUTS.west) -| (downloadtex-west) |- (downloadtex);
+ \draw [rounded corners, -, black!50, line width=1.5pt]
+ (downloadtex) |- (initialize-south);
+ \fi
+
+ %% format.tex
+ \ifdefined\formattex
+ \node (fmttex) [node-terminal, at={(3.2cm,-1.3cm)}] {format.tex};
+ \draw [->, black!50, line width=1.5pt] (tabthree) -- (fmttex);
+ \draw [rounded corners, -, black!50, line width=1.5pt]
+ (fmttex) |- (initialize-south);
+ \fi
+
+ %% demo-year.conf
+ \ifdefined\demoyearconf
+ \node (dyearconf) [node-nonterminal, at={(5.5cm,4cm)}] {demo-year.conf};
+ \node (dptex-west) [node-point, at={(4.35cm,-1.25cm)}] {};
+ \draw [->,rounded corners, black!50, line width=1.5pt]
+ (dyearconf.west) -| (dptex-west) |- (dptex);
+ \fi
+
+ %% Initialize.tex
+ \ifdefined\initializetex
+ \node (initializetex) [node-terminal, at={(-1.4cm,-1.3cm)}] {initialize.tex};
+ \draw [->, black!50, line width=1.5pt] (initializetex) -- (verifytex);
+ \node [anchor=west, at={(-2.4cm,1.5cm)}] {Basic project info};
+ \node [anchor=west, at={(-2.4cm,1.2cm)}] {(e.g., Git commit).};
+ \node [anchor=west, at={(-2.4cm,0.5cm)}] {Also defines};
+ \node [anchor=west, at={(-2.4cm,0.2cm)}] {project structure};
+ \node [anchor=west, at={(-2.4cm,-0.1cm)}] {(for \texttt{*.mk} files).};
+ \fi
+
+ %% Expanding project
+ \ifdefined\expandingproject
+
+ %% The Makefile.
+ \node [node-makefile, dotted, at={(7.8cm,3cm)},
+ label={[shift={(0,-5mm)}]\texttt{next-step.mk}}] {};
+
+ %% next-step.tex
+ \node [dotted] (a3tex) [node-terminal, at={(7.8cm,-1.3cm)}] {next-step.tex};
+ \draw [dotted, rounded corners, -, black!50, line width=1.4pt]
+ (a3tex) |- (initialize-south);
+
+ % out-3a.dat and out-3b.dat
+ \node [dotted] (out3a) [node-terminal, at={(7.8cm,2.1cm)}] {out-a.dat};
+ \node [dotted] (out3b) [node-terminal, at={(7.8cm,0.5cm)}] {out-b.dat};
+ \node (a3tex-east) [node-point, at={(8.93cm,-0.8cm)}] {};
+ \draw [dotted, ->, rounded corners, black!50, line width=1.5pt]
+ (out3a.east) -| (a3tex-east) |- (a3tex);
+ \draw [dotted, ->, black!50, line width=1.5pt] (out3b) -- (a3tex);
+
+ %% demo-out.dat
+ \node [dotted] (dout) [node-terminal, at={(5.5cm,1.3cm)}] {demo-out.dat};
+ \draw [dotted, rounded corners, ->, black!50, line width=1.5pt] (dout.south) |- (out3b);
+
+ %% links
+ \node (dout-west) [node-point, at={(5cm,1.3cm)}] {};
+ \draw [dotted, ->, black!50, line width=1.5pt] (xlsx) -- (dout);
+ \node [opacity=0.7] (out3a-west) [node-point, at={(6.65cm,2.1cm)}] {};
+ \draw [dotted, ->, rounded corners, black!50, line width=1.5pt] (xlsx) |- (out3a);
+ \node [dotted] (a3conf1) [node-nonterminal, at={(7.8cm,4cm)}] {param.conf};
+ \draw [dotted, rounded corners, black!50, line width=1.5pt]
+ (a3conf1.west) -| (out3a-west) |- (out3a);
+ \fi
+
+\end{tikzpicture}
diff --git a/tex/src/paper-long.tex b/tex/src/paper-long.tex
new file mode 100644
index 0000000..6d74e17
--- /dev/null
+++ b/tex/src/paper-long.tex
@@ -0,0 +1,2591 @@
+\documentclass[10.5pt]{article}
+
+%% This is a convenience variable if you are using PGFPlots to build plots
+%% within LaTeX. If you want to import PDF files for figures directly, you
+%% can use the standard `\includegraphics' command. See the definition of
+%% `\includetikz' in `tex/src/preamble-pgfplots.tex' for where the files are
+%% assumed to be if you use `\includetikz' when `\makepdf' is not defined.
+\newcommand{\makepdf}{}
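+%% For example, the project-outline figure is included later in this
+%% file with: `\includetikz{figure-project-outline}'.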
+
+%% When defined (value is irrelevant), `\highlightchanges' will cause text
+%% in `\tonote' and `\new' to become colored. This is useful when you
+%% need to distribute drafts that are undergoing revision and you want
+%% to highlight to your colleagues which parts are new and which parts
+%% are only for discussion.
+\newcommand{\highlightchanges}{}
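+%% For example (hypothetical usage, assuming each macro takes the text
+%% as its single argument):
+%%   \new{This sentence was added after the first referee report.}
+%%   \tonote{Ask the coauthors if this value needs updating.}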
+
+%% Import the necessary preambles.
+\input{tex/src/preamble-style.tex}
+\input{tex/build/macros/project.tex}
+\input{tex/src/preamble-pgfplots.tex}
+\input{tex/src/preamble-biblatex.tex}
+
+
+
+
+
+\title{Maneage: Customizable Template for Managing Data Lineage}
+\author{\large\mpregular \authoraffil{Mohammad Akhlaghi}{1,2},
+ \large\mpregular \authoraffil{Ra\'ul Infante-Sainz}{1,2}\\
+ {
+ \footnotesize\mplight
+ \textsuperscript{1} Instituto de Astrof\'isica de Canarias, C/V\'ia L\'actea, 38200 La Laguna, Tenerife, ES.\\
+ \textsuperscript{2} Facultad de F\'isica, Universidad de La Laguna, Avda. Astrof\'isico Fco. S\'anchez s/n, 38200, La Laguna, Tenerife, ES.\\
+ Corresponding author: Mohammad Akhlaghi
+ (\href{mailto:mohammad@akhlaghi.org}{\textcolor{black}{mohammad@akhlaghi.org}})
+ }}
+\date{}
+
+
+
+
+
+\begin{document}%\layout
+\thispagestyle{firstpage}
+\maketitle
+
+%% Abstract
+{\noindent\mpregular
+  The era of big data has also ushered in an era of big responsibility.
+  Without that responsibility, the integrity of the results will be a subject of perpetual debate.
+ In this paper, Maneage (management + lineage) is introduced as a low-level solution.
+  Maneage is a publishing- and archival-friendly data lineage management system (in machine-actionable plain text) for projects in the sciences or industry.
+  Its core principles include: stand-alone operation (e.g., requiring nothing beyond a POSIX-compatible system: no administrator privileges and no network connection), modularity, straightforward design, traceable inputs and outputs, temporal lineage/provenance, and free software (for scientific applications).
+  A project that uses Maneage can publish its complete data lineage, making it exactly reproducible (as a test of sufficiently conveying the data lineage).
+  The offered lineage/control is not limited to downloading the raw input data and processing them automatically; it also includes building the necessary data analysis software with fixed versions and build configurations.
+  Additionally, Maneage builds the final PDF report of the project, establishing direct links between the data analysis and the narrative (with the precision of a sentence).
+  Maneage enables incremental projects, where a new project can branch off an existing one, making only moderate changes to experiment on its methods.
+  It can also serve more ambitious goals once a sufficiently large number of projects use it, for example, automatic workflow creation through machine-learning tools, or automating data management plans.
+ As a demonstration, this paper is written using Maneage (snapshot \projectversion).
+ \horizontalline
+
+ \noindent
+ {\mpbold Keywords:} Data Lineage, Data Provenance, Reproducibility, Scientific Pipelines, Workflows
+}
+
+\horizontalline
+
+
+
+
+
+
+
+
+
+
+\section{Introduction}
+\label{sec:introduction}
+
+The increasing volume and complexity of data analysis has been highly productive, giving rise to ``Big Data'' as a new branch in many fields of the sciences and industry.
+However, given its inherent complexity, the results alone are barely useful.
+Questions such as these commonly follow any such result:
+What inputs were used?
+What operations were done on those inputs? How were the configurations or training data chosen?
+How did the quantitative results get visualized into the final demonstration plots, figures or narrative/qualitative interpretation?
+May there be a bias in the visualization?
+See Figure \ref{fig:questions} for a more detailed visual representation of such questions for various stages of the workflow.
+
+In data science and database management, such metadata are commonly known as \emph{data provenance} or \emph{data lineage}.
+Their definitions are elaborated with other basic concepts in Section \ref{sec:definitions}.
+Data lineage is being increasingly demanded for integrity checking from both the scientific and industrial/legal domains.
+Notable examples in each domain are the ``Reproducibility crisis'' in the sciences, reported by \emph{Nature} \citep{baker16}, and the General Data Protection Regulation (GDPR) by the European Parliament together with the California Consumer Privacy Act (CCPA), implemented in 2018 and 2020 respectively.
+The former argues that reproducibility (as a test of sufficiently conveying the data lineage) is necessary for other scientists to study, check and build upon each other's work.
+The latter requires the data intensive industry to give individual users control over their data, effectively requiring thorough management and knowledge of the data's lineage.
+Besides regulation and integrity checks, having robust data governance (management of data lineage) in a project can be very productive: it enables easy debugging, experimentation with alternative methods, and optimization of the workflow.
+
+In the sciences, the results of a project's analysis are published as scientific papers which have also been the primary conveyor of the result's lineage: usually in narrative form, within the ``Methods'' section of the paper.
+In our own experience, this section is usually the most discussed during peer review and conference presentations, showing its importance.
+After all, a result is defined as ``scientific'' based on its \emph{method} (the ``scientific method''), or lineage in data-science terminology.
+In the industry however, data governance is usually kept as a trade secret and isn't publicly published or scrutinized.
+Therefore, while the approach proposed in this paper (Maneage) is also useful in industrial contexts, its main practical focus is the scientific front, which has traditionally been more open to publishing methods and to anonymous peer scrutiny.
+
+\begin{figure}[t]
+ \begin{center}
+ \includetikz{figure-project-outline}
+ \end{center}
+ \vspace{-17mm}
+ \caption{\label{fig:questions}Graph of a generic project's workflow (connected through arrows), highlighting the various issues/questions on each step.
+ The green boxes with sharp edges are inputs and the blue boxes with rounded corners are the intermediate or final outputs.
+ The red boxes with dashed edges highlight the main questions on the respective stage.
+ The orange box surrounding the software download and build phases marks the various commonly recognized solutions to the questions in it; for more, see Appendix \ref{appendix:jobmanagement}.
+ }
+\end{figure}
+
+The traditional format of a scientific paper has been very successful in conveying the method along with the result over the last centuries.
+However, the complexity mentioned above has made it impossible to describe all the analytical steps of a project to a sufficient level of detail.
+Citing this difficulty, many authors settle for describing only the very high-level generalities of their analysis, even though the most basic calculations (like the mean of a distribution) can depend on the software implementation.
+
+Due to the complexity of modern scientific analysis, a small deviation in the final result can originate in any of many different steps, and may well be significant.
+Publishing the precise code of the analysis is the only guarantee of traceability.
+For example, \citet{smart18} describes how a 7-year-old conflict in theoretical condensed matter physics was only resolved after the relevant codes were shared.
+Nature is already a black box that we are trying hard to unlock and understand.
+Not being able to experiment on the methods of other researchers is an artificial, self-imposed black box wrapped over the original, which wastes much of researchers' energy.
+
+\citet{miller06} found that a mistaken column flip led to the retraction of 5 papers in major journals, including Science.
+\citet{baggerly09} highlighted the inadequate narrative description of the analysis and showed the prevalence of simple errors in published results, ultimately calling their work ``forensic bioinformatics''.
+\citet{herndon14} and \citet[a self-correction]{horvath15} also reported similar situations, and \citet{ziemann16} concluded that one-fifth of papers with supplementary Microsoft Excel gene lists contain erroneous gene name conversions.
+Such integrity checks are a critical component of the scientific method, but are only possible with access to the data and codes.
+
+The completeness of a paper's published metadata (or ``Methods'' section) can be measured by a simple question: given the same input datasets (supposedly on a third-party database like \href{http://zenodo.org}{zenodo.org}), can another researcher reproduce the exact same result automatically, without needing to contact the authors?
+Several studies have attempted to answer this with different levels of detail.
+For example \citet{allen18} found that roughly half of the papers in astrophysics don't even mention the names of any analysis software they have used, while \citet{menke20} found that the fraction of papers explicitly mentioning their tools/software has greatly improved in medical journals over the last two decades.
+
+\citet{ioannidis2009} attempted to reproduce 18 published results by two independent groups, but only fully succeeded in 2 of them and partially in 6.
+\citet{chang15} attempted to reproduce 67 papers in well-regarded economic journals with data and code: only 22 could be reproduced without contacting authors, and more than half could not be replicated at all.
+\citet{stodden18} attempted to replicate the results of 204 scientific papers published in the journal Science \emph{after} that journal adopted a policy of publishing the data and code associated with the papers.
+Even though the authors were contacted, the success rate was $26\%$.
+Generally, this problem is unambiguously felt in the community: \citet{baker16} surveyed 1574 researchers and found that only $3\%$ did not see a ``reproducibility crisis''.
+
+This is not a new problem in the sciences: in 2011, Elsevier conducted an ``Executable Paper Grand Challenge'' \citep{gabriel11}.
+The proposed solutions were published in a special edition.
+Some of them are reviewed in Appendix \ref{appendix:existingsolutions}, but most have not been continued since then.
+Before that, \citet{ioannidis05} argued that ``most claimed research findings are false''.
+In the 1990s, \citet{schwab2000, buckheit1995, claerbout1992} described this same problem very eloquently and also provided some solutions that they used.
+While the situation has improved since the early 1990s, these papers still resonate strongly with the frustrations of today's scientists.
+Even earlier, through his famous quartet, \citet{anscombe73} qualitatively showed how distancing researchers from the intricacies of algorithms/methods can lead to misinterpretation of results.
+One of the earliest such efforts we found was \citet{roberts69}, who discussed conventions in FORTRAN programming and documentation to help in publishing research codes.
+
+From a practical point of view, for those who publish the data lineage, a major problem is the fast evolving and diverse software technologies and methodologies that are used by different teams in different epochs.
+\citet{zhao12} describe it as ``workflow decay'' and recommend preserving these auxiliary resources.
+But in the case of software, it is not as straightforward as data: if preserved in binary form, software can only be run on certain hardware, and if kept as source code, its build dependencies and build configuration must also be preserved.
+\citet{gronenschild12} specifically studied the effect of software version and environment and encouraged researchers not to update their software environment.
+However, this is not a practical solution, because software updates are necessary, if only to fix bugs in the same research software.
+Generally, software is not an interchangeable component of a project, where one tool could easily be swapped with another.
+Projects are built around specific software technologies, and research in software methods and implementations is itself a vibrant research topic in many domains \citep{dicosmo19}.
+
+\tonote{add a short summary of the advantages of Maneage.}
+
+This paper introduces Maneage as a solution to these important issues.
+Section \ref{sec:definitions} defines the necessary concepts and terminology used in this paper leading to a discussion of the necessary guiding principles in Section \ref{sec:principles}.
+Section \ref{sec:maneage} introduces the implementation of Maneage, going into lower-level details in some cases.
+Finally, in Section \ref{sec:discussion}, the future prospects of using systems like this template are discussed.
+After the main body, Appendix \ref{appendix:existingtools} reviews the lower-level technologies that are most commonly used today.
+In light of the guiding principles, Appendix \ref{appendix:existingsolutions} then critically reviews many of the workflow management systems introduced over the last three decades.
+Finally, in Appendix \ref{appendix:softwareacknowledge} we acknowledge the various software (with a name and version number) that were used for this project.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Definition of important terms}
+\label{sec:definitions}
+
+The concepts and terminology of reproducibility and of project/workflow management and design are commonly used differently by different research communities and solution providers.
+As a consequence, before starting with the technical details it is important to clarify the specific terms used throughout this paper and its appendix.
+
+
+
+
+
+\subsection{Definition: input}
+\label{definition:input}
+Any computer file that may be usable in more than one project.
+The inputs of a project include data, software source code, etc. (see \citet{hinsen16} on the fundamental similarity of data and source code).
+Inputs may be encoded in plain text (for example tables of comma-separated values, CSV, or processing scripts), custom binary formats (for example JPEG images), or domain-specific data formats \citep[e.g., FITS in astronomy, see][]{pence10}.
+
+Inputs may have initially been created/written (e.g., software source code) or collected (e.g., data) for one specific project.
+However, they can, and most often will, be used in other/later projects also.
+Following the principle of modularity, it is therefore optimal to treat the inputs of any project as independent entities, not mixing them with how they are managed (how software is run on the data) within the project (see Section \ref{definition:project}).
+
+Inputs are nevertheless necessary for building and running any project.
+Some inputs may already be archived/published independently prior to the project's publication.
+In this case, they can easily be downloaded and used by independent projects.
+Otherwise, they can be published with the project, but as independent files, for example see \href{https://doi.org/10.5281/zenodo.3408481}{zenodo.3408481} \citep{akhlaghi19}.
+
+
+
+
+
+\subsection{Definition: output}
+\label{definition:output}
+Any computer file that is published at the end of the project.
+The output(s) can be datasets (terabyte-sized, small table(s) or image(s), a single number, a true/false (Boolean) outcome), automatically generated software source code, or any other file.
+The raw output files are commonly supplemented with a paper/report that summarizes them in a human-friendly readable/printable/narrative format.
+The report commonly includes highlights of the input/output datasets (or intermediate datasets) as plots, figures, tables or simple numbers blended into the text.
+
+The outputs can be published independently on data servers that assign specific persistent identifiers (PIDs), to be cited in the final report or published paper (in a journal, for example).
+Alternatively, the datasets can be published with the project source, for example \href{https://doi.org/10.5281/zenodo.1164774}{zenodo.1164774} \citep[Sections 7.3 \& 3.4]{bacon17}.
+
+
+
+
+
+\subsection{Definition: project}
+\label{definition:project}
+The most high-level series of operations that are done on input(s) to produce the output(s).
+Because the project's report is also defined as an output (see above), besides the high-level analysis, the project's source also includes scripts/commands to produce plots, figures or tables.
+
+With this definition, this concept of a ``project'' is similar to ``workflow''.
+However, it is important to emphasize that the project's source code and inputs are distinct entities.
+For example the project may be written in the same programming language as one analysis step.
+Generally, the project source is defined as the most high-level source file that is unique to that individual project (its language is irrelevant).
+The project is thus only in charge of managing the inputs and outputs of each analysis step (taking the outputs of one step and feeding them as inputs to the next), not of doing the analysis itself.
+A good project will follow the modularity principle: analysis scripts should be well-defined as an independently managed software source.
+For example modules in Python, packages in R, or libraries/programs in C/C++ that can be imported in higher-level project sources.
+
+
+
+
+\subsection{Definition: data provenance}
+\label{definition:provenance}
+
+Data provenance is a very generic term which points to slightly different technical concepts in different fields like databases, storage systems and scientific workflows.
+For example within a database, an SQL query from a relational database connects a subset of the database entries to the output (\emph{why-} provenance), their more detailed dependency (\emph{how-} provenance) and the precise location of the input sources (\emph{where-} provenance), for more see \citet{cheney09}.
+In scientific workflows, provenance goes beyond a single database and its datasets: it may include many databases that aren't directly linked, the higher-level project-specific analysis that is done on the data, and the linking of the analysis to the text of the paper; for example, see \citet{bavoil05, moreau08, malik13}.
+
+Here, we define provenance to be the common factor of the usages above: a dataset's provenance is the set of metadata (in any ontology, standard or structure) that connect it to the components (other datasets or scripts) that produced it.
+Data provenance thus provides a high-level view of the data's genealogy.
+
+\subsection{Definition: data lineage}
+\label{definition:lineage}
+
+% This definition is inspired from https://stackoverflow.com/questions/43383197/what-are-the-differences-between-data-lineage-and-data-provenance:
+
+% "data provenance includes only high level view of the system for business users, so they can roughly navigate where their data come from.
+% It's provided by variety of modeling tools or just simple custom tables and charts.
+% Data lineage is a more specific term and includes two sides - business (data) lineage and technical (data) lineage.
+% Business lineage pictures data flows on a business-term level and it's provided by solutions like Collibra, Alation and many others.
+% Technical data lineage is created from actual technical metadata and tracks data flows on the lowest level - actual tables, scripts and statements.
+% Technical data lineage is being provided by solutions such as MANTA or Informatica Metadata Manager. "
+Data lineage is commonly used interchangeably with Data provenance \citep[for example][\tonote{among many others, just search ``data lineage'' in scholar.google.com}]{cheney09}.
+However, for clarity, in this paper we use the term ``data lineage'' for a low-level and fine-grained recording of the data's source, and of the operations applied to it, down to the exact command that produced each intermediate step.
+This \emph{recording} does not necessarily have to be in a formal metadata model.
+But data lineage must be complete (see completeness principle in Section \ref{principle:complete}), and allow extraction of data provenance metadata, and thus higher-level operations like visualization of the workflow.
+
+
+\subsection{Definition: reproducibility and replicability}
+\label{definition:reproduction}
+These terms have been used in the literature with various meanings, sometimes in a contradictory way.
+It is therefore necessary to clarify the precise usage of these terms in this paper.
+But before that, it is important to highlight that in this paper we only consider computational analysis: in other words, analysis after data have been collected and stored as files on a filesystem.
+Therefore, many of the definitions reviewed in \citet{plesser18}, that are about data collection, are out of context here.
+We adopt the same definition of \citet{leek17,fineberg19}, among others:
+
+%% From Zahra Sharbaf:
+%% According to a U.S. National Science Foundation (NSF), the definition of reproducibility is “reproducibility refers to the ability of a researcher to duplicate the results of a prior study using the same materials as were used by the original investigator.
+%% That is, a second researcher might use the same raw data to build the same analysis files and implement the same statistical analysis in an attempt to yield the same results….
+%% Reproducibility is a minimum necessary condition for a finding to be believable and informative.”(K. Bollen, J. T. Cacioppo, R. Kaplan, J. Krosnick, J. L. Olds, Social, Behavioral, and Economic Sciences Perspectives on Robust and Reliable Science (National Science Foundation, Arlington, VA, 2015)).
+
+\begin{itemize}
+\item {\bf\small Reproducibility:} (same inputs $\rightarrow$ consistent result).
+ Formally: ``obtaining consistent [not necessarily identical] results using the same input data; computational steps, methods, and code; and conditions of analysis'' \citep{fineberg19}.
+ This is thus synonymous with ``computational reproducibility''.
+
+ \citet{fineberg19} allow non-bitwise or non-identical numeric outputs within their definition of reproducibility, but they also acknowledge that this flexibility can lead to complexities: what is an acceptable non-identical reproduction?
+ Exactly reproducible outputs can be precisely and automatically verified without statistical interpretations, even in a very complex analysis (involving many CPU cores, and random operations), see Section \ref{principle:verify}.
+ It also requires no expertise, as \citet{claerbout1992} put it: ``a clerk can do it''.
+ \tonote{Raul: I don't know if this is true... at least it needs a bit of training and an extra time. Maybe remove last phrase?}
+ In this paper, unless otherwise mentioned, we only consider bitwise/exact reproducibility.
+
+\item {\bf\small Replicability:} (different inputs $\rightarrow$ consistent result).
+ Formally: ``obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data'' \citep{fineberg19}.
+
+Generally, since replicability involves new data collection, it can be expensive.
+For example the ``Reproducibility Project: Cancer Biology'' initiative started in 2013 to replicate 50 high-impact papers in cancer biology\footnote{\url{https://elifesciences.org/collections/9b1e83d1/reproducibility-project-cancer-biology}}.
+Even with funding of at least \$1.3 million, it later shrank to 18 projects \citep{kaiser18} due to very high costs.
+We also note that replicability doesn't have to be limited to different input data: using the same data, but with different implementations of methods, is also a replication attempt \citep[also known as ``in silico'' experiments, see][]{stevens03}.
+\end{itemize}
+
+\tonote{Raul: put white line to separate next paragraph from the previous list?}
+Some authors have defined these terms in the opposite manner.
+Examples include \citet{hinsen15} and the policy guidelines of the Association of Computing Machinery\footnote{\url{https://www.acm.org/publications/policies/artifact-review-badging}} (ACM, dated April 2018).
+ACM has itself adopted the 2008 definitions of Vocabulaire international de m\'etrologie (VIM).
+
+Besides the two terms above, ``repeatability'' is also sometimes used in regards to the concept discussed here and must be clarified.
+For example, \citet{ioannidis2009} use ``repeatability'' to encompass both the terms above.
+However, the ACM/VIM definition for repeatability is ``a researcher can reliably repeat her own computation''.
+Hence, in the ACM terminology, the only difference between replicability and repeatability is the ``team'' that is conducting the computation.
+In the context of this paper, inputs are precisely defined (Section \ref{definition:input}): files with specific/registered checksums (see Section \ref{principle:verify}).
+Therefore our inputs are team-agnostic, allowing us to safely ignore ``repeatability'' as defined by ACM/VIM.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Principles of the proposed solution}
+\label{sec:principles}
+
+The core principle behind this solution is simple: science is defined by its method, not its result.
+Statements that convey a ``result'' abound in all aspects of human life (e.g., in fiction, religion and science).
+What distinguishes one from the other is the ``method'' by which the result was derived.
+Science is the only class that attempts to be as objective as possible through the ``scientific method''.
+\citet{buckheit1995} nicely summarize this by pointing out that modern scientific papers (narrative combined with plots, tables and figures) are merely advertisements of a scholarship; the actual scholarship is the scripts and software usage that went into doing the analysis.
+
+This paper thus proposes a framework that is optimally designed for both designing and executing a project, \emph{as well as} publication of the (computational) methods along with the published paper/result.
+However, this paper is not the first attempted solution to this fundamental problem.
+Various solutions have been proposed since the early 1990s, see Appendix \ref{appendix:existingsolutions} for a review.
+To better highlight the differences with those methods, and the foundations of this one (which help in understanding certain implementation choices), the core principle above is expanded in the sub-sections below by breaking it into logically independent sub-components.
+
+It is important to note that, based on the definition of a project (Section \ref{definition:project}) and the first principle below (modularity, Section \ref{principle:modularity}), the solution proposed in this paper is designed to be modular and thus agnostic to high-level choices.
+Examples include the choice of hardware (e.g., a high-performance computing facility or a personal computer) and of high-level interfaces (for example a webpage or a specialized graphic user interface).
+The proposed solution in this paper is a low-level skeleton that is designed to be easily adapted to any high-level, project-specific, choice.
+For example, in terms of hardware choice, a large simulation project simply cannot be run on smaller machines.
+However, when such a project is managed in the proposed system, the complete project (see Section \ref{principle:complete}) is published and readable by peers, who can be sure that what they are reading contains the full/exact environment and commands that produced the result.
+In terms of interfaces, wrappers can be written over this core skeleton for various other high-level cosmetics, for example a web interface, a graphic user interface or plugins to text editors or notebooks (see Appendix \ref{appendix:editors}).
+
+
+
+
+
+\subsection{Principle: Complete/Self-contained}
+\label{principle:complete}
+A project should be self-contained, needing no particular features from the host operating system (OS), and not affecting the host OS.
+At build-time (when the project is building its necessary tools), the project shouldn't need anything beyond a minimal POSIX environment on the host, as available in Unix-like operating systems such as GNU/Linux, BSD-based systems or macOS.
+At run-time (once its environment/software are built), it should not use or affect any host operating system programs or libraries.
+
+Generally, a project's source should include the whole project: access to the inputs (see Section \ref{sec:definitions}), building necessary software (access to tarballs and instructions on configuring, building and installing those software), doing the analysis (run the software on the data) and creating the final narrative report/paper in its final format.
+This principle has several important consequences:
+
+\begin{itemize}
+\item A complete project doesn't need any privileged/root permissions for system-wide installation, or environment preparations.
+ Even when the user does have root privileges, interfering with the host operating system for a project may lead to many conflicts with the host or other projects.
+ This principle thus allows a safe execution of the project, and will not cause any security problems.
+
+\item A complete project doesn't need an internet connection to build itself or to do its analysis and possibly make a report.
+ Of course this only holds when the analysis doesn't inherently require internet, for example needing a live data feed.
+
+\item A complete project inherently includes the complete data lineage and provenance: automatically enabling a full backtrace of the output datasets or narrative, to raw inputs: data or software source code lines.
+ This is very important because many existing data provenance solutions require manual tagging within the data workflow or connecting the data with the paper's text (Appendix \ref{appendix:existingsolutions}).
+ Manual tagging can be highly subjective, prone to many errors, and incomplete.
+
+\item A complete project will not need any user interaction and can complete itself automatically.
+ This is because manual interaction is a form of incompleteness.
+ Interactivity is also an inherently irreproducible operation, exposing the analysis to human error, and requiring expert knowledge.
+\end{itemize}
+
+The first two components are particularly important for high performance computing (HPC) facilities: because of security reasons, HPC users commonly don't have privileged permissions or internet access.
+
+A complete project as defined here is much less exposed to ``workflow decay'' as defined by \citet{zhao12} (in particular under their missing execution environment tests).
+As recommended by \citet{zhao12}, a complete project automatically builds all its necessary third-party tools, it doesn't just assume their existence.
+Ultimately, the executability of a project will decay once the host Linux kernel inevitably evolves such that the project's fixed versions of the GNU C Library and GNU Compiler Collection can't be built.
+This will happen on much longer time scales than for the high-level software mentioned in \citet{zhao12}, and can be fixed by changing the project's (GNU) C library and (GNU) C compiler to versions that are buildable with the host kernel.
+These are very low-level components and any possible change in the output should be minimal.
+Ultimately after multiple decades, even that may not be possible, but even at that point, thanks to the plain-text principle (Section \ref{principle:text}), it can still be studied, without necessarily executing it.
+
+
+
+
+
+\subsection{Principle: Modularity}
+\label{principle:modularity}
+A project should be compartmentalized or partitioned to independent modules or components with well-defined inputs/outputs having no side-effects.
+In a modular project, communication between the independent modules is explicit, providing optimizations on multiple levels:
+1) Execution: independent modules can run in parallel, or modules that don't need to be run (because their dependencies haven't changed) won't be re-done.
+2) Data lineage and data provenance extraction (recording any dataset's origins).
+3) Citation: allowing others to credit specific parts of a project.
+This principle doesn't just apply to the analysis, it also applies to the whole project, for example see the definitions of ``input'', ``output'' and ``project'' in Section \ref{sec:definitions}.
+
+Within the analysis phase, this principle can be summarized best with the Unix philosophy, best described by \citet{mcilroy78} in the ``Style'' section.
+In particular ``Make each program do one thing well.
+To do a new job, build afresh rather than complicate old programs by adding new `features'''.
+Independent parts of the analysis can be maintained as independent software (for example shell, Python, or R scripts, or programs written in C, C++ or FORTRAN, among others).
+This core aspect of the Unix philosophy has been the cause of its continued success (particularly through GNU and BSD) and development in the last half century.
+
+For the most high-level analysis/operations, the boundary between the ``analysis'' and ``project'' can become blurry.
+It is thus inevitable that some highly project-specific, and small, analysis steps are also kept within the project and not maintained as a separate software package (that is built before the project is run).
+This isn't a problem, because inputs are defined as files that are \emph{usable} by other projects (see Section \ref{definition:input}).
+If necessary, such highly project-specific software can later spin off into a separate software package.
+One example of an existing system that doesn't follow this principle is Madagascar, which builds a large number of analysis programs as part of the project (see Appendix \ref{appendix:madagascar}).
+
+%\tonote{Find a place to put this:} Note that input files are a subset of built files: they are imported/built (copied or downloaded) using the project's instructions, from an external location/repository.
+% This principle is inspired by a similar concept in the free and open source software building tools like the GNU Build system (the standard `\texttt{./configure}', `\texttt{make}' and `\texttt{make install}'), or CMake.
+% Software developers already have decades of experience with the problems of mixing hand-written source files with the automatically generated (built) files.
+% This principle has proved to be an exceptionally useful in this model, greatly
+
+
+
+\subsection{Principle: Plain text}
+\label{principle:text}
+A project's primarily stored/archived format should be plain text with human-readable encoding\footnote{Plain text format doesn't include document container formats like \inlinecode{.odf} or \inlinecode{.doc}, for software like LibreOffice or Microsoft Office.}, for example ASCII or Unicode (for the definition of a project, see Section \ref{definition:project}).
+The reason behind this principle is that opening, reading, or editing non-plain text (executable or binary) file formats needs specialized software.
+Binary formats will complicate various aspects of the project: its usage, archival, automatic parsing, or human readability.
+This is a critical principle for long term preservation and portability: when the software to read a binary format has been deprecated or become obsolete and can't be installed on the running system, the project will no longer be readable/usable.
+
+A project that is solely in plain text format can be put under version control as it evolves, with easy tracking of changed parts, using already available and mature tools in software development: software source code is also in plain text.
+After publication, independent modules of a plain-text project can be used and cited through services like Software Heritage \citep{dicosmo18,dicosmo20}, enabling future projects to easily build on top of old ones, or cite specific parts of a project.
+
+Archiving a binary version of the project is like archiving the well-cooked dish itself: it will spoil with changes in its environment (temperature, humidity, and the natural world in general).
+Archiving the dish's recipe, however (which is also in plain text!), lets you re-cook it at any time.
+When the environment is under perfect control (as in the proposed system), the binary/executable, or re-cooked, output will be verifiably identical.
+One illustrative example of the importance of source code is mentioned in \citet{smart18}: a seven-year old dispute between condensed matter scientists could only be solved when they shared the plain text source of their respective projects.
+
+This principle doesn't conflict with having an executable or immediately-runnable project\footnote{In their recommendation 4-1 on reproducibility, \citet{fineberg19} mention: ``a detailed description of the study methods (ideally in executable form)''.}, because it is trivial to build a text-based project within an executable container or virtual machine.
+For more on containers, please see Appendix \ref{appendix:independentenvironment}.
+To help contemporary researchers, this built/executable form of the project can be published as an output in respective servers like \url{http://hub.docker.com} (see Section \ref{definition:output}).
+
+Note that this principle applies to the whole project, not just the initial phase.
+Therefore a project like Conda, which currently includes a $+500$MB binary blob in a plain-text shell script (see Appendix \ref{appendix:conda}), is not acceptable under this principle.
+This failure also applies to projects that build tools to read binary sources.
+In short, the full source of a project should be in plain text.
+
+
+
+
+
+\subsection{Principle: Minimal complexity (i.e., maximal compatibility)}
+\label{principle:complexity}
+An important measure of the quality of a project is how much it avoids complexity.
+In principle this is similar to Occam's razor: ``Never posit pluralities without necessity'' \citep{schaffer15}, but extrapolated to project management.
+In this context, Occam's razor can be interpreted in cases like the following:
+minimize the number of a project's software dependencies (there are often multiple ways of doing something);
+avoid complex relations between analysis steps (not unrelated to the principle of modularity in Section \ref{principle:modularity});
+or avoid the programming language that is currently in vogue, because it will fall out of fashion soon and take the project down with it (see Appendix \ref{appendix:highlevelinworkflow}).
+This principle has several important consequences:
+\begin{itemize}
+\item Easier learning curve.
+Scientists can't adopt new tools and methods as fast as software developers.
+They have to invest the majority of their time in their own research domain.
+Because of this, researchers usually continue their careers with the languages/tools they learned when they started.
+
+\item Future usage.
+Scientific projects require longevity: unlike software engineering, there is no end-of-life in science (e.g., Aristotle's work 2.5 millennia ago is still ``science'').
+Scientific projects that depend too much on an ever-evolving, high-level software development toolchain will be harder to archive, run, or even study for their immediate and future peers.
+One recent example is the Popper software implementation: it was originally designed in the HashiCorp configuration language (HCL) because it was the default for organizing operations on GitHub.
+However, GitHub dropped HCL in October 2019; for more, see Appendix \ref{appendix:popper}.
+
+\item Compatible and extensible.
+ A project that has minimal complexity can easily adapt to any kind of data, programming language, host hardware or software, etc.
+ It can also be easily extended for new inputs and environments.
+ For example, when a project management system is designed only to manage Python functions (like CGAT-core, see Appendix \ref{appendix:jobmanagement}), it will be hard, inefficient and buggy to use for an analysis step that is written in R and another written in FORTRAN.
+\end{itemize}
+
+
+
+
+
+\subsection{Principle: Verifiable inputs and outputs}
+\label{principle:verify}
+The project should contain automatic verification checks on its inputs (software source code and data) and outputs.
+When applied, expert knowledge won't be necessary to confirm the correct reproduction.
+It is important to emphasize that, in practice, exact or bit-wise reproduction is very hard to implement at the level of a file.
+This is because much specialized scientific software commonly prints the running date on its output files (which is very useful in its own context).
+
+For example in plain text tables, such meta-data are commonly printed as commented lines (usually starting with \inlinecode{\#}).
+Therefore, when verifying such a plain-text table, the checksum that is used to validate the data can be recorded after removing all commented lines.
+Fortunately, the tools to operate on specialized data formats also usually have ways to remove requested metadata (like creation date), or ignore metadata altogether.
+For example the FITS standard in astronomy \citep{pence10} defines a special \inlinecode{DATASUM} keyword which is a checksum calculated only from the raw data, ignoring all metadata.
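+
+As a minimal sketch of this approach (the file name below is hypothetical, and the tools are standard GNU/POSIX utilities), the data-only checksum of such a table can be derived by filtering out the commented metadata lines before hashing:
+
+\begin{lstlisting}[language=bash]
+  # 'table.txt' is a hypothetical plain-text table whose commented
+  # lines (starting with '#') contain volatile metadata like the
+  # creation date; only the data rows contribute to the checksum.
+  grep -v '^#' table.txt | sha512sum
+\end{lstlisting}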
+
+
+
+
+
+\subsection{Principle: History and temporal provenance (version control)}
+\label{principle:history}
+No project is done in a single/first attempt.
+Projects evolve as they are being completed.
+It is natural that earlier phases of a project are redesigned/optimized only after later phases have been completed.
+This is often seen in scientific papers, with statements like ``we [first] tried method [or parameter] XXXX, but YYYY is used here because it was shown to have better precision [or less bias, or etc]''.
+A project's ``history'' is thus as scientifically relevant as the final, or published, snapshot of the project.
+All the outputs (datasets or narrative papers) need to contain the exact point in the project's history that produced them.
+
+For a complete project (see Section \ref{principle:complete}) that is under version control (like Git), this would be the unique commit checksum (for more on version control, see Appendix \ref{appendix:versioncontrol}).
+This principle thus benefits from the plain-text principle (Section \ref{principle:text}).
+Note that with our definition of a project (Section \ref{definition:project}), ``changes'' in the project include changes in the software building or versions, changes in the running environment, changes in the analysis, or changes in the narrative.
+After publication, the project's history can also be published on services like Software Heritage \citep{dicosmo18}, enabling precise citation and archival of all stages of the project's evolution.
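+
+For example (a sketch using standard Git commands; the exact invocation a given project uses may differ), the unique identifier of the present snapshot, which can be embedded in all outputs, is available through:
+
+\begin{lstlisting}[language=bash]
+  # Unique commit-based identifier of the current project state;
+  # '--dirty' marks the presence of uncommitted changes.
+  git describe --dirty --always
+\end{lstlisting}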
+
+Taking this principle to a higher level, newer projects are built upon the shoulders of previous projects.
+A project management system should be able to provide this temporal connection between projects.
+Quantifying how newer projects relate to older ones (for example through Git branches) will enable 1) scientists to simply use the relevant parts of an older project, and 2) the quantification of the connections between various projects, which is primarily of interest for meta-research (research on research) or historical studies.
+In data science, ``provenance'' is used to track the analysis and original datasets that were used in producing a higher-level dataset.
+A system that uses this principle will also provide ``temporal provenance'', quantifying how a certain project grew/evolved in the time dimension.
+
+
+
+
+
+\subsection{Principle: Free and open source software}
+\label{principle:freesoftware}
+Technically, as defined in Section \ref{definition:reproduction}, reproducibility is also possible with non-free and non-open-source software (a black box).
+This principle is thus necessary to complement the definition of reproducibility, because software freedom is an important pillar of the sciences, as shown below:
+\begin{itemize}
+\item Based on the completeness principle (Section \ref{principle:complete}), it is possible to trace the output's provenance back to the exact source code lines within an analysis software.
+ If the software's source code isn't available such important and useful provenance information is lost.
+\item A non-free software may not be runnable on a given hardware.
+ Since free software is modifiable, others can modify (or hire someone to modify) it and make it runnable on their particular platform.
+\item A non-free software cannot be distributed by the authors, making the whole community reliant only on the proprietary owner's server (even if the proprietary software doesn't ask for payments).
+ A project that uses free software can also release the necessary tarballs of the software it uses.
+ For example see \href{https://doi.org/10.5281/zenodo.3408481}{zenodo.3408481} \citep{akhlaghi19} or \href{https://doi.org/10.5281/zenodo.3524937}{zenodo.3524937} \citep{infante20}.
+\item A core component of reproducibility is that anonymous peers should be able to confirm the result from the same datasets with minimal effort, and this includes the financial cost beyond hardware.
+\end{itemize}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Implementation of Maneage}
+\label{sec:maneage}
+
+The proposed solution is an implementation of the principles discussed in Section \ref{sec:principles}: it is complete and automatic (Section \ref{principle:complete}), modular (Section \ref{principle:modularity}), fully in plain text (Section \ref{principle:text}), having minimal complexity (see Section \ref{principle:complexity}), with automatically verifiable inputs \& outputs (Section \ref{principle:verify}), preserving temporal provenance, or project evolution (Section \ref{principle:history}) and finally, it is free software (Section \ref{principle:freesoftware}).
+
+In practice, it is a collection of plain-text files that are distributed in pre-defined sub-directories by context, and are all under version control (currently with Git).
+In its raw form (before customizing for different projects), it is a fully working skeleton of a project without much flesh: containing all the low-level infrastructure, with just a small demonstrative ``delete-me'' analysis.
+To start a new project, users will \emph{clone}\footnote{In Git, ``cloning'' is the process of copying all the project's files and their history onto the host system.} the core skeleton, create their own Git branch, and start customizing the core files (adding their high-level analysis steps, scripts to generate figures, and narrative) within their custom branch.
+
+In this section we will review the current implementation of the reproducible paper template.
+Generally, job orchestration is implemented in Make (a POSIX tool); this choice is elaborated in Section \ref{sec:usingmake}.
+We continue with a general outline of the project's file structure in Section \ref{sec:generalimplementation}.
+As described there, we make a cosmetic distinction between ``configuration'' (building the necessary software) and execution (running the software on the data); these two phases are discussed in Sections \ref{sec:projectconfigure} \& \ref{sec:projectmake}.
+
+
+\subsection{Job orchestration with Make}
+\label{sec:usingmake}
+When non-interactive, or batch, processing is needed (see Section \ref{principle:complete}), shell scripts are usually the first solution that come to mind (see Appendix \ref{appendix:scripts}).
+However, the inherent complexity and non-linearity of progress in a scientific project (where experimentation is key) makes it hard and inefficient to manage the script(s) as the project evolves.
+For example, a script will start from the top every time it is run.
+Therefore, even if $90\%$ of a research project is done and only the newly added, final $10\%$ must be executed, a script will nevertheless redo everything from the beginning.
+
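+As an illustration (all script names here are hypothetical), consider a linear shell script for a three-step analysis:
+
+\begin{lstlisting}[language=bash]
+  # Hypothetical monolithic script: every run repeats all steps,
+  # even when only the final plotting step has been modified.
+  ./download-inputs.sh     # step 1: hours of downloading
+  ./calibrate-data.sh      # step 2: hours of processing
+  ./plot-results.sh        # step 3: seconds (the only new part)
+\end{lstlisting}
+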
+It is possible to manually ignore (by conditionals), or comment, parts of a script to only do a special part.
+However, such conditionals/comments will only add to the complexity and will discourage experimentation on an already completed part of the project.
+This is also prone to very serious bugs in the end (e.g., due to human error, some parts may be left out or not up to date) when re-running from scratch.
+Such bugs are very hard to notice during the work and frustrating to find in the end.
+These problems motivated the creation of Make in the early Unix operating system \citep{feldman79}.
+
+In the Make paradigm, process execution starts from the end: the final \emph{target}.
+Through the Make syntax, the user specifies the \emph{prerequisite}(s) of each target and a \emph{recipe} (a small shell script) to create the target from the prerequisites (for more see Appendix \ref{appendix:make}).
+With this lineage, Make is thus able to build a dependency tree internally and find the rule(s) that need to be executed on each run.
+This has many advantages:
+\begin{itemize}
+\item \textbf{\small Only executing necessary steps:} in the scenario above, a researcher who has just added the final $10\%$ of her research will only have to run those extra steps, without any modification to the previous parts.
+ With Make, it is also trivial to change the processing of any intermediate \emph{rule} (or step) in the middle of an already written analysis: the next time Make is run, only the rules that are affected by the changes/additions will be re-run, not the whole analysis/project.
+
+Most importantly, this enables full reproducibility from scratch with no changes in the project code that was working during the research.
+This will allow robust results and let scientists do what they do best: experiment, and be critical of the methods/analysis, without having to waste energy on the added complexity of experimentation in scripts.
+
+\item \textbf{\small Parallel processing:} Since the dependencies are clearly demarcated in Make, it can identify independent steps and run them in parallel.
+ This greatly speeds up the processing, with no cost in terms of complexity.
+
+\item \textbf{\small Codifying data lineage and provenance:} In many systems data provenance has to be manually added.
+ However, in Make, it is part of the design and no extra manual step is necessary to fully track (or back-track) the series of steps that generated the data.
+\end{itemize}
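+
+As a minimal sketch of this paradigm (the file names and the analysis command below are hypothetical), a rule declares a target, its prerequisite(s) and a recipe; Make only executes the recipe when a prerequisite is newer than the target:
+
+\begin{lstlisting}[language=make]
+# Hypothetical rule: 'stats.txt' (target) is only rebuilt when
+# 'input.txt' (prerequisite) has changed; the recipe computes the
+# mean of column 1. The recipe line must start with a TAB.
+stats.txt: input.txt
+	awk '{s+=$$1} END {print s/NR}' input.txt > stats.txt
+\end{lstlisting}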
+
+Make has been a fixed component of POSIX (i.e., of Unix-like operating systems including Unix, GNU/Linux, BSD and macOS, among others) since the very early days of Unix, almost 40 years ago.
+It is therefore, by far, the most reliable, commonly used, well-known and well-maintained workflow manager today.
+Because the core operating system components are built with it, Make is expected to keep this unique position into the foreseeable future.
+Make is also well known by many outside of the software development communities.
+For example, \citet{schwab2000} reported how easily geophysics students adopted it through the RED project management tool used in their lab at that time (see Appendix \ref{appendix:red} for more on RED).
+Because of its simplicity, we have also had very good feedback on using Make from the early adopters of this system during the last year, in particular graduate students and postdocs.
+
+In summary Make satisfies all our principles (see Section \ref{sec:principles}), while avoiding the well-known problems of using high-level languages for project management like a generational gap and ``dependency hell'', see Appendix \ref{appendix:highlevelinworkflow}.
+For more on Make and a discussion on some other job orchestration tools, see Appendices \ref{appendix:make} and \ref{appendix:jobmanagement} respectively.
+
+
+
+
+
+\subsection{General implementation structure}
+\label{sec:generalimplementation}
+
+As described above, a project using this template is a combination of plain-text files that are organized in various directories by context.
+Figure \ref{fig:files} shows this directory structure and some representative files in each directory.
+The top-level source only has two main directories: \inlinecode{tex/} (containing \LaTeX{} files) and \inlinecode{reproduce/} (containing all other parts of the project) as well as several high-level files.
+Most of the top project directory files are only intended for human readers (as narrative text, not scripts or programming sources):
+\inlinecode{COPYING} is the project's high-level copyright license,
+\inlinecode{README.md} is a basic introduction to the specific project, and
+\inlinecode{README-hacking.md} describes how to customize, or hack, the template for creators of new projects.
+
+In the top project directory, there are two non-narrative files: \inlinecode{project} (which should have been under \inlinecode{reproduce/}) and \inlinecode{paper.tex} (which should have been under \inlinecode{tex/}).
+The former is necessary in the top project directory because it is the high-level user interface, with the \inlinecode{./project} command.
+The latter is necessary for many web-based automatic paper generating systems like arXiv, journals, or systems like Overleaf.
+
+\begin{figure}[t]
+ \begin{center}
+ \includetikz{figure-file-architecture}
+ \end{center}
+ \vspace{-5mm}
+ \caption{\label{fig:files}
+ Directory and file structure in a hypothetical project using this solution.
+ Files are shown with small, green boxes that have a suffix in their names (for example \inlinecode{format.mk} or \inlinecode{download.tex}).
+ Directories (containing multiple files) are shown as large, brown boxes, where the name ends in a slash (\inlinecode{/}).
+ Directories with dashed lines and no files (just a description) are symbolic links that are created after building the project, pointing to commonly needed built directories.
+ Symbolic links and their contents are not considered part of the source and are not under version control.
+ Files and directories are shown within their parent directory.
+ For example the full address of \inlinecode{format.mk} from the top project directory is \inlinecode{reproduce/analysis/make/format.mk}.
+ }
+\end{figure}
+
+\inlinecode{project} is a simple executable POSIX-compliant shell script that is just a high-level wrapper to call the project's Makefiles.
+Recall that the main job orchestrator in this system is Make, see Section \ref{sec:usingmake} for why Make was chosen.
+In the current implementation, the project's execution consists of the following two calls to the \inlinecode{project} script:
+
+\begin{lstlisting}[language=bash]
+ ./project configure # Build software from source (takes around 2 hours for full build).
+ ./project make # Do the analysis (download data, run software on data, build PDF).
+\end{lstlisting}
+
+The operations of both are managed by files under the top-level \inlinecode{reproduce/} directory.
+When the first command is called, the contents of \inlinecode{reproduce\-/software} are used, while the latter uses the files under \inlinecode{reproduce\-/analysis}.
+This highlights the \emph{cosmetic} distinction we have adopted between the two main steps of a project: 1) building the project's full software environment and 2) doing the analysis (running the software).
+Technically there is no difference between the two and they could easily be merged under one directory.
+However, during a research project, researchers commonly just need to focus on their analysis steps and will rarely need to edit the software environment settings (maybe only once at the start of the project).
+Therefore, having the files mixed under the same directory can be confusing.
+
+In summary, the same structure governs both aspects of a project: software building and analysis.
+This is an important and unique feature in this template.
+A researcher who has become familiar with Makefiles for orchestrating their analysis will also easily be able to modify the Makefiles for the software that is built in their project, and feel free to customize their project's software as well.
+Most other systems use third-party package managers for their project's software, thus discouraging project-specific customization of software, for a full review of third party package managers, see Appendix \ref{appendix:packagemanagement}.
+
+
+
+
+
+\subsection{Project configuration}
+\label{sec:projectconfigure}
+A critical component of any project is the set of software used to do the analysis.
+However, verifying an already built software environment (which is critical to reproducing the research result) is very hard.
+This has forced most projects to move around the whole \emph{built} software environment (a black box) as virtual machines or containers, see Appendix \ref{appendix:independentenvironment}.
+Because these black boxes are almost impossible to reproduce themselves, they need to be archived, even though they can take gigabytes of space.
+Package managers like Nix or GNU Guix do provide a verifiable, i.e., reproducible, software building environment, but because they aim to be generic package managers, they have their own limitations on a project-specific level, see Appendix \ref{appendix:nixguix}.
+
+Based on the principles of completeness and minimal complexity (Sections \ref{principle:complete} \& \ref{principle:complexity}), a project that uses this solution also contains the full instructions to build its necessary software, in the same language in which the analysis is orchestrated: Make.
+Project configuration (building software environment) is managed by the files under \inlinecode{reproduce\-/software}.
+Project configuration involves three high-level steps which are discussed in the subsections below: setting the local directories (Section \ref{sec:localdirs}), checking a working C compiler (Section \ref{sec:ccompiler}), and the software source code download, build and install (Section \ref{sec:buildsoftware}).
+
+
+
+
+
+\subsubsection{Setting local directories}
+\label{sec:localdirs}
+All files built by the project (software or analysis) will be under a ``build directory'' (or \inlinecode{BDIR}) on the host filesystem.
+No other location on the running operating system will be affected by the project.
+Following the modularity principle (Section \ref{principle:modularity}), this directory should be separate from the source directory.
+Therefore, at configuration time, the first thing to specify is the build directory on the running system.
+The build directory can be specified in two ways: 1) on the command-line with the \inlinecode{--build-dir} option, or 2) manually giving the directory after running the configuration: it will stop with a prompt and some explanation.
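+
+For example (the directory name below is a hypothetical placeholder, and we assume the option takes the directory as its value), a non-interactive configuration could be started with:
+
+\begin{lstlisting}[language=bash]
+  # '/path/to/build' stands for any directory with enough free
+  # space; nothing outside of it will be modified by the project.
+  ./project configure --build-dir=/path/to/build
+\end{lstlisting}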
+
+Two other local directories can optionally be specified by the project when inputs are present locally (for the definition of inputs, see Section \ref{definition:input}) and don't need to be downloaded: 1) software tarball directory and 2) input data directory.
+The project just needs read permissions on these directories: when given, nothing will be written inside them.
+The project will only look into them for the necessary software tarballs and input data.
+If they are not found, the project will attempt to download any necessary file from the recorded URLs/PIDs within the project source.
+These directories are therefore primarily tailored to scenarios where the project must run offline (based on the completeness principle of Section \ref{principle:complete}).
+
+After project configuration, a symbolic link is built in the top project source directory that points to the build directory.
+The symbolic link is a hidden file named \inlinecode{.build}, see Figure \ref{fig:files}.
+With this symbolic link, it is always easy to access the built files, no matter where the build directory is actually located on the filesystem.
+
+
+
+
+
+\subsubsection{Checking for a C compiler}
+\label{sec:ccompiler}
+This template builds all its necessary software internally to avoid dependency issues with various software versions on different hosts.
+A working C compiler is thus mandatory and the configure script will abort if a working C compiler isn't found.
+In particular, on GNU/Linux systems, the project builds its own version of the GNU Compiler Collection (GCC), therefore a static C library is necessary with the compiler.
+If not found, an informative error message will be printed and the project will abort.
+
+The custom version of GCC is configured to also build FORTRAN, C++, Objective-C and Objective-C++ compilers.
+The Python and R running environments are themselves written in C; therefore, they are also automatically built afterwards if the project uses these languages.
+On macOS systems, we currently don't build a C compiler, but it is planned to do so in the future.
+
+
+
+
+
+\subsubsection{Verifying and building necessary software from source}
+\label{sec:buildsoftware}
+
+All necessary software for the project, and their dependencies, are installed from source.
+Researchers using the template only have to specify the most high-level analysis software they need in \inlinecode{reproduce\-/software\-/config\-/installation\-/TARGETS.conf} (see Figure \ref{fig:files}).
+Based on the completeness principle (Section \ref{principle:complete}), on GNU/Linux systems the dependency tree is automatically traced down to the GNU C Library and the GNU Compiler Collection (GCC), thus creating identical high-level analysis software on any system.
+When the C library and compiler can't be installed (for example on macOS systems), the users are forced to rely on the host's C compiler and library, and this may hamper the exact reproducibility of the final result: the project will abort if the final outputs have changed.
+Because the project's main output is currently a \LaTeX{}-built PDF, the project also contains an internal installation of \TeX{}Live, providing all the necessary tools to build the PDF, independent of the host operating system's \LaTeX{} version and packages.
+
+To build the software from source, the project needs access to its source tarball or zip-file.
+If the tarballs are already present on the system, the user can specify the respective directory at the start of project configuration (Section \ref{sec:localdirs}).
+If not, the software tarballs will be downloaded from pre-defined servers.
+Ultimately the origin of the tarballs is irrelevant for this project; what matters is the tarball contents, which are checked through the SHA-512 checksum \citep[part of the SHA-2 algorithms, see][]{romine15}.
+If the SHA-512 checksum of the tarball is different from the checksum stored for it in the project's source, the project will complain and abort.
+Because the server is irrelevant, one planned task is to allow users to identify the most convenient server themselves, for example to improve download speed.
+
+Software tarball access, unpacking, building and installation is managed through Makefiles, see Sections \ref{sec:usingmake} \& \ref{sec:generalimplementation}.
+The project's software are classified into two classes: 1) basic and 2) high-level.
+The former contains meta-software: software needed to build other software, for example GNU Gzip, GNU Tar, GNU Make, GNU Bash, GNU Coreutils, GNU Sed, GNU Binutils and the GNU Compiler Collection (GCC)\footnote{Note that almost all of these GNU software are also installable on non-GNU/Linux operating systems like BSD or macOS; exceptions include GNU Binutils.}.
+The basic software are built with the host operating system's tools and are installed with any project.
+The high-level software are those that are used directly in the science analysis and can differ from project to project.
+However, because the basic software have already been built by the project, the high-level software are built with them, independent of the host operating system's tools.
+
+Software building is managed by two top-level Makefiles that follow the same classification.
+Both are under the \inlinecode{reproduce\-/software\-/make/} directory (Figure \ref{fig:files}): \inlinecode{basic.mk} and \inlinecode{high-level.mk}.
+Because \inlinecode{basic.mk} can't assume anything about the host, it is written to comply with POSIX Make and POSIX shell, which are very limited compared to GNU Make and GNU Bash respectively.
+However, after it is finished, a specific version of GNU Make (among other basic software) is present, enabling us to assume the much more advanced features of GNU tools in \inlinecode{high-level.mk}.
+
+The project's software are installed under \inlinecode{BDIR/software/installed}.
+The \inlinecode{.local} symbolic link in the top project source directory points to it for easy access (see Figure \ref{fig:files}).
+It contains the project's top-level POSIX filesystem hierarchy subdirectories, including \inlinecode{bin/}, \inlinecode{lib/} and \inlinecode{include/}, among others.
+For example the custom-built GNU Make executable is placed under \inlinecode{BDIR\-/software\-/installed\-/bin\-/make} or alternatively \inlinecode{.local/bin/make}.
+
+To orchestrate software building with Make, the building of each software package should be represented as a file.
+In the Makefiles, that file is used as a \emph{target} (in the rule that builds the software) or as a \emph{prerequisite} (in the rule(s) of software that depend on it).
+For more on Make, see Appendix \ref{appendix:make}.
+Initially we tried using the actual software's built files (executable programs, libraries, etc.).
+However, given the variety of installed files, using them as the software's representative caused many complexities, confusions and bugs.
+Therefore, in the current system, once a software is built, a simple plain-text file is created under a sub-directory of \inlinecode{.local\-/version-info}.
+The representative files for C/C++ programs or libraries are placed under the \inlinecode{proglib} sub-directory.
+The Python or \TeX{}Live representative files are placed under the \inlinecode{python} and \inlinecode{tex} subdirectories respectively.
+Make uses this file to refer to the software and arrange the order of software execution.
+The contents of this plain-text file are the name and possible citation of the software, which are directly imported into the final paper in the end.
+For more on software citation, see Section \ref{sec:softwarecitation}.
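+
+As a hedged sketch (with hypothetical variable and version names, not the project's exact rule), the end of a software's build rule may look like the following:
+
+\begin{verbatim}
+# Sketch: the last step of a build rule creates the plain-text
+# "representative" file; rules of dependent software then use it
+# as a prerequisite. 'ibidir' (hypothetical name) points to a
+# sub-directory of .local/version-info.
+$(ibidir)/gzip: $(ibidir)/make
+        # ... commands to unpack, build and install GNU Gzip ...
+        echo "GNU Gzip $(gzip-version)" > $@
+\end{verbatim}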
+
+
+
+\subsubsection{Software citation}
+\label{sec:softwarecitation}
+Based on the completeness principle (Section \ref{principle:complete}), the project contains the full list of installed software, their versions and their configuration options.
+However, this information is buried deep into the project's source.
+A distilled fraction of this information must also be printed in the project's final report, blended into the narrative.
+Furthermore, when a published paper is associated with the used software, it is important to cite that paper: citations help software authors gain recognition and grants, encouraging them to develop the software further.
+This is particularly important in the case of research software, where the researcher has invested significant time in building the software and requires official citation to justify continued work on it.
+
+One notable example that nicely highlights this issue is GNU Parallel \citep{tange18}: every time it is run, it prints the citation information before it starts.
+This doesn't cause any problem in automatic scripts, but can be annoying when reading/debugging the outputs.
+Users can disable the notice with the \inlinecode{--citation} option, by agreeing to cite its paper, or support its development directly by paying $10000$ euros!
+This is justified by an uncomfortably true statement\footnote{GNU Parallel's FAQ on the need to cite software: \url{http://git.savannah.gnu.org/cgit/parallel.git/plain/doc/citation-notice-faq.txt}}: ``history has shown that researchers forget to [cite software] if they are not reminded explicitly. ... If you feel the benefit from using GNU Parallel is too small to warrant a citation, then prove that by simply using another tool''.
+In bug 905674\footnote{Debian bug on the citation notice of GNU Parallel: \url{https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=905674}}, the Debian developers argued that because of this extra condition, GNU Parallel should not be considered as free software, and they are using a patch to remove that part of the code for its build under Debian-based operating systems.
+Most other research software don't resort to such drastic measures; however, citation remains important for them.
+Given the increasing number of software used in scientific research, the only reliable solution is to automatically cite the used software in the final paper.
+
+As mentioned above in Section \ref{sec:buildsoftware}, a plain-text file is built automatically at the end of a software's successful build and installation.
+This file contains the name, version and possible citation of that software.
+At the end of the configuration phase, all these plain-text files are merged into one \LaTeX{} macro that can be imported directly into the final paper or report.
+In this paper, this macro's value is shown in Appendix \ref{appendix:softwareacknowledge}.
+The paragraph produced by this macro won't be too large, but it will greatly help in the recording of the used software environment and will automatically cite the software where necessary.
+
+In the current version of this template, it is assumed that the published report of a project is built by \LaTeX{}.
+Therefore, every software that has an associated paper, has a Bib\TeX{} file under the \inlinecode{reproduce\-/software\-/bibtex} directory.
+When the software is built for the project (possibly as a dependency of another software specified by the user), its Bib\TeX{} entry (or entries) is copied to the build directory, and the command to cite that Bib\TeX{} record is included in the \LaTeX{} macro with the name and version of the software, as shown in Appendix \ref{appendix:softwareacknowledge}.
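+
+For illustration, the merged macro may look like the following hedged sketch (the software names, versions and Bib\TeX{} key here are hypothetical):
+
+\begin{verbatim}
+% Sketch of the auto-generated acknowledgement macro:
+\newcommand{\allsoftware}{GNU Bash 5.0.11, GNU AWK 5.0.1
+  and Gnuastro 0.11 \citep{gnuastro}}
+\end{verbatim}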
+
+For a review of the necessity and basic elements in software citation, see \citet{katz14} and \citet{smith16}.
+There are ongoing projects specifically tailored to software citation, including CodeMeta (\url{https://codemeta.github.io}) and Citation file format (CFF: \url{https://citation-file-format.github.io}).
+Both are based on schema.org, but are implemented in JSON-LD and YAML respectively.
+Another robust approach is provided by Software Heritage \citep{dicosmo18}.
+A feature of Software Heritage is that a published paper isn't necessary; however, such a citation won't populate a research paper's bibliography list.
+This also makes it hard to count as academic credit.
+We are considering using these tools and exporting Bib\TeX{} entries when necessary.
+
+
+
+
+
+
+
+
+
+
+\subsection{High-level organization of analysis}
+\label{sec:highlevelanalysis}
+
+Once a project is configured (Section \ref{sec:projectconfigure}), all the necessary software, with precise versions and configurations, are built and ready to use.
+The analysis phase of the project (running the software on the data) is also orchestrated through Makefiles.
+For the unique advantages of using Make to manage a research project, see Sections \ref{sec:usingmake} \& \ref{sec:generalimplementation}.
+In order to best follow the principle of modularity (Section \ref{principle:modularity}), the analysis is not done in one phase or with a single Makefile.
+Here, the two high-level phases of the analysis are reviewed.
+The organization of the lower-level analysis, in many modular Makefiles, is discussed in Section \ref{sec:lowlevelanalysis}.
+
+After running \inlinecode{./project make}, the analysis is done in two sequential phases: 1) preparation and 2) main analysis.
+The purpose of the preparation phase is further elaborated in Section \ref{sec:prepare}.
+Technically, these two phases are managed by the two high-level Makefiles: \inlinecode{top-prepare.mk} and \inlinecode{top-make.mk}.
+Both are under \inlinecode{reproduce\-/analysis\-/make} (see Figure \ref{fig:files}) and both have an identical lower-level analysis structure.
+But before that, in Section \ref{sec:analysisenvironment} the isolation of the analysis environment from the host is discussed.
+
+
+
+
+
+\subsubsection{Isolated analysis environment}
+\label{sec:analysisenvironment}
+By default, the analysis part of the project is not exposed to any of the host's environment variables.
+This is accomplished through the `\inlinecode{env -i}' command\footnote{Note that the project's self-built \inlinecode{env} program is used, not the one provided by the host operating system.
+ Within the project, \inlinecode{env} is installed as part of GNU Coreutils and \inlinecode{-i} is short for \inlinecode{--ignore-environment}.}, which will remove the host environment.
+The project will define its own values for standard environment variables to avoid using system or user defaults.
+Combined with the fact that all the software were configured and compiled from source for each project at configuration time (Section \ref{sec:buildsoftware}), this completely isolates the analysis from the host operating system, creating an exactly reproducible result on any machine on which the project can be configured.
+
+For example, the project builds its own fixed version of GNU Bash (a command-line shell environment).
+It also has its own \inlinecode{bashrc} startup script\footnote{The project's Bash startup script is under \inlinecode{reproduce\-/software\-/bash\-/bashrc.sh}, see Figure \ref{fig:files}.}, and the \inlinecode{BASH\_ENV} environment variable is set to load this startup script.
+Furthermore, the \inlinecode{HOME} environment variable is set to \inlinecode{BDIR} to avoid the penetration of any existing Bash startup file of the user's home directory into the analysis.
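+
+A hedged sketch of such an isolated call (paths follow Figure \ref{fig:files}; the actual internal command has more settings):
+
+\begin{verbatim}
+# Sketch: run the project's Make in a clean environment, using
+# only the project's own 'env' and its own Bash startup file.
+.local/bin/env -i HOME=/path/to/BDIR \
+    BASH_ENV=reproduce/software/bash/bashrc.sh \
+    .local/bin/make -f reproduce/analysis/make/top-make.mk
+\end{verbatim}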
+
+
+
+
+
+\subsubsection{Preparation phase}
+\label{sec:prepare}
+When \inlinecode{./project make} is called, the first Makefile that is run is \inlinecode{top-prepare.mk}.
+It is designed for any selection steps that may be necessary to optimize \inlinecode{top-make.mk}, or to ``prepare'' for it.
+It is mainly useful when the research targets are more focused than the raw input and may not be necessary in many scenarios.
+Its role is described here with an example.
+
+Let's assume the raw input data (that the project received from a database) has 5000 rows (potential targets for doing the analysis on).
+However, this particular project only needs to work on 100 of them, not the full 5000.
+If the full 5000 targets are given to \inlinecode{top-make.mk}, Make will need to create a data lineage for all 5000 targets, and the project authors will have to add checks in many places to ignore those that aren't necessary.
+This will add to the project's complexity and is prone to many bugs.
+Furthermore, if the filesystem isn't fast (for example a filesystem that exists over a network), checking all the intermediate and final files over the full lineage can be slow.
+
+In this scenario, the preparation phase finds the IDs of the 100 targets of interest and saves them as a Make variable in a file under \inlinecode{BDIR}.
+Later, this file is loaded into the analysis phase, precisely identifying the project's targets-of-interest.
+This selection phase can't be done within \inlinecode{top-make.mk} because the full data lineage (all input and output files) must be known to Make before it starts to execute the necessary operations.
+It is possible for Make to call itself as another Makefile, but this practice is strongly discouraged here because it makes the flow very hard to read.
+However, if the project authors insist on calling Make within Make, it is certainly possible.
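+
+A hedged sketch of this scenario (with hypothetical file and variable names):
+
+\begin{verbatim}
+# In top-prepare.mk (sketch): save the selected IDs as a
+# Make-loadable variable under the build directory.
+$(prepdir)/targets.mk: $(indir)/full-catalog.txt
+        ids=$$(awk '$$2=="interesting" {printf "%s ", $$1}' $<); \
+        echo "targets-of-interest = $$ids" > $@
+
+# In top-make.mk (sketch): load the prepared variable.
+include $(prepdir)/targets.mk
+\end{verbatim}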
+
+The ``preparation'' phase thus allows \inlinecode{top-make.mk} to optimally organize the complex set of operations that must be run on each input and the dependencies (possibly in parallel).
+It also greatly simplifies the coding for the project authors.
+Ideally, \inlinecode{top-prepare.mk} is only for the ``preparation phase''.
+However, projects can be complex and, ultimately, the choice of which parts of an analysis count as ``preparation'' can be highly subjective.
+Generally, the internal design and concepts of \inlinecode{top-prepare.mk} are identical to \inlinecode{top-make.mk}.
+Therefore in Section \ref{sec:lowlevelanalysis}, where the lower-level management is discussed, we will only focus on the latter to avoid confusion.
+
+
+
+
+
+
+
+
+
+
+\subsection{Low-level organization of analysis}
+\label{sec:lowlevelanalysis}
+
+A project consists of many steps, including data access (possibly by downloading), running various steps of the analysis on the obtained data, and creating the necessary plots, figures or tables for a published report, or output datasets for a database.
+If all of these steps are organized in a single Makefile, it will become very long and hard to maintain, extend/grow, read, reuse, and cite.
+Generally, large files are bad practice because they go against the modularity principle (Section \ref{principle:modularity}).
+
+The proposed template is thus designed to encourage and facilitate modularity by distributing the analysis in many Makefiles that contain contextually-similar (or modular) analysis steps.
+In the rest of this paper these modular, or lower-level, Makefiles will be called \emph{subMakefiles}.
+The subMakefiles are loaded into \inlinecode{top-make.mk} in a certain order and executed in one instance of Make without recursion (see Section \ref{sec:nonrecursivemake} below).
+In other words, this modularity is just cosmetic for Make: Make ``sees'' all the subMakefiles as parts of one file.
+However, this modularity plays a critical role for the human reader/author of the project and is necessary in re-using or citing parts of the analysis in other projects.
+
+Within the project's source, the subMakefiles are placed in \inlinecode{reproduce\-/analysis\-/make} (with \inlinecode{top-make\-.mk}), see Figure \ref{fig:files}.
+Therefore by design, \inlinecode{top-make.mk} is very simple: it just defines the ultimate target (\inlinecode{paper\-.pdf}), and the name and order of the subMakefiles that should be loaded into Make.
+
+The precise organization of the analysis steps highly depends on each individual project.
+However, many aspects of project management are the same irrespective of the particular project; here we will focus on those.
+Figure \ref{fig:datalineage} is a general overview of the analysis phase in a hypothetical project using this template.
+As described above and shown in Figure \ref{fig:datalineage}, \inlinecode{top-make.mk} imports the various Makefiles under the \inlinecode{reproduce/} directory that are in charge of the different phases of the analysis.
+Each of the subMakefiles builds intermediate targets, or outputs (files), which are shown there as blue boxes.
+In the subsections below, the project's analysis is described using this graph.
+We'll follow Make's paradigm (see Section \ref{sec:usingmake}) of starting from the ultimate target in Section \ref{sec:paperpdf}, and tracing back its lineage all the way up to the inputs and configuration files.
+
+\begin{figure}[t]
+ \begin{center}
+ \includetikz{figure-data-lineage}
+ \end{center}
+ \vspace{-7mm}
+ \caption{\label{fig:datalineage}Schematic representation of data lineage in a hypothetical project/pipeline using Maneage.
+ Each colored box is a file in the project and the arrows show the dependencies between them.
+ Green files/boxes are plain text files that are under version control and in the source-directory.
+  Blue files/boxes are output files of various steps in the build-directory, shown within the box of the Makefile (\inlinecode{*.mk}) that generates them.
+ For example \inlinecode{paper.pdf} depends on \inlinecode{project.tex} (in the build directory and generated automatically) and \inlinecode{paper.tex} (in the source directory and written by hand).
+ In turn, \inlinecode{project.tex} depends on all the \inlinecode{*.tex} files at the bottom of the Makefiles above it.
+ The solid arrows and built boxes with full opacity are actually described in the context of a demonstration project in this paper.
+  The dashed arrows and lower-opacity built boxes just show how adding more elements to the lineage is also easily possible, making this a scalable tool.
+ }
+\end{figure}
+
+To avoid getting too abstract in the subsections below, where necessary, we'll do a basic analysis on the data of \citet[data were published as supplementary material on bioRxiv]{menke20} and try to replicate some of their results.
+Note that because we are not using the same software, this isn't a reproduction (see Section \ref{definition:reproduction}).
+We can't use the same software because they use Microsoft Excel for the analysis which violates several of our principles: 1) Completeness (as a graphical user interface program, it needs human interaction, Section \ref{principle:complete}), 2) Minimal complexity (even free software alternatives like LibreOffice involve many dependencies and are extremely hard to build, Section \ref{principle:complexity}) and 3) Free software (Section \ref{principle:freesoftware}).
+
+
+
+
+\subsubsection{Non-recursive Make}
+\label{sec:nonrecursivemake}
+
+It is possible to call a new instance of Make within an existing Make instance.
+This is also known as recursive Make\footnote{\url{https://www.gnu.org/software/make/manual/html_node/Recursion.html}}.
+Recursive Make is in fact used by many Make users, especially in the software development communities.
+It is also possible within a project using the proposed template.
+
+However, recursive Make is discouraged in the template, and not used in it.
+All the subMakefiles mentioned above are \emph{included}\footnote{\url{https://www.gnu.org/software/make/manual/html_node/Include.html}} into \inlinecode{top-make.mk}, i.e., their contents are read into Make as it is parsing \inlinecode{top-make.mk}.
+In Make's view, this is identical to having one long file with all the subMakefiles concatenated to each other.
+Make is only called once and there is no recursion.
+
+Furthermore, we have the convention that only \inlinecode{top-make.mk} (or \inlinecode{top-prepare.mk}) can include subMakefiles.
+SubMakefiles should not include other subMakefiles.
+The main reason behind this convention is the Minimal Complexity principle (Section \ref{principle:complexity}): a simple glance at \inlinecode{top-make.mk} will immediately show \emph{all} the project's subMakefiles \emph{and} their loading order.
+When the names of the subMakefiles are descriptive enough, this enables both the project authors, and later, project readers to get a complete view of the various stages of the project.
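+
+A hedged sketch of this convention (the subMakefile names follow this paper's demonstration project):
+
+\begin{verbatim}
+# Sketch: top-make.mk includes the subMakefiles in a fixed
+# order; Make parses them as one concatenated file.
+makesrc = initialize download format demo-plot verify paper
+include $(foreach s, $(makesrc), reproduce/analysis/make/$(s).mk)
+\end{verbatim}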
+
+
+
+\subsubsection{Ultimate target: the project's paper or report (\inlinecode{paper.pdf})}
+\label{sec:paperpdf}
+
+The ultimate purpose of a project is to report the data analysis result.
+In scientific projects, this ``report'' is the published (or draft) paper.
+In the industry, it is a quality-check and analysis of the final data product(s).
+In both cases, the report contains many visualizations of the final data product of the project, for example as a plot, figure, table, or numbers blended into the narrative description.
+
+In Figure \ref{fig:datalineage} it is shown as \inlinecode{paper.pdf}; note that it is the only built file (blue box) with no arrows leaving it.
+In other words, nothing depends on it, highlighting its unique ``ultimate target'' position in the lineage.
+The instructions to build \inlinecode{paper.pdf} are in \inlinecode{paper.mk}.
+The report's source (containing the main narrative, its typesetting as well as that of the figures or tables) is \inlinecode{paper.tex}.
+To build the final report's PDF, \inlinecode{references.tex} and \inlinecode{project.tex} are also loaded into \LaTeX{}.
+\inlinecode{references.tex} is part of the project's source and can contain the Bib\TeX{} entries for the bibliography of the final report.
+In other words, it formalizes the connections of this project with previous projects.
+
+Another class of files that may be loaded into \LaTeX{}, but are not shown to avoid complications in the figure, are the figure or plot data, or built figures.
+For example in this paper, the demonstration figure shown in Section \ref{sec:analysis} is drawn directly within \LaTeX{} (using its PGFPlots package).
+The project only needed to build the plain-text table of numbers that were fed into PGFPlots (\inlinecode{tools-per-year.txt} in Figure \ref{fig:datalineage}).
+
+However, building some plots may not be possible with PGFPlots, or the authors may prefer another tool to generate the visualization's image file \citep[for example with Python's Matplotlib, ][]{matplotlib2007}.
+For this scenario, the actual image file of the visualization can be used in the lineage, for example \inlinecode{tools-per-year.pdf} instead of \inlinecode{tools-per-year.txt}.
+See Section \ref{sec:publishing} on the project publication for special considerations regarding these files.
+
+
+
+
+
+\subsubsection{Values within text (\inlinecode{project.tex})}
+\label{sec:valuesintext}
+Figures, plots, tables and narrative aren't the only analysis output that goes into the paper.
+In many cases, quantitative values from the analysis are also blended into the sentences of the report's narration.
+For example this sentence in the abstract of \citet{akhlaghi19}: ``... detect the outer wings of M51 down to S/N of 0.25 ...''.
+The reported signal-to-noise ratio (S/N) value ``0.25'' depends on the analysis and is an output of the analysis just like the paper's figures and plots.
+Manually typing the number in the \LaTeX{} source is prone to serious bugs: the author may forget to check it after a change in the analysis (e.g., using a newer version of the software, or changing an analysis parameter for another part of the paper).
+Given the evolution of a scientific project, this type of human error is very hard to avoid when such values are manually written.
+Such values must also be automatically generated.
+
+To automatically generate and blend them in the text, we use \LaTeX{} macros.
+In the quote above, the \LaTeX{} source\footnote{\citet{akhlaghi19} uses this template to be reproducible, so its \LaTeX{} source is available in multiple ways: 1) direct download from arXiv:\href{https://arxiv.org/abs/1909.11230}{1909.11230}, by clicking on ``other formats'', or 2) the Git and \href{https://doi.org/10.5281/zenodo.3408481}{zenodo.3408481} links that are also available on arXiv.} looks like this: ``\inlinecode{\small detect the outer wings of M51 down to S/N of \$\textbackslash{}demo\-sf\-optimized\-sn\$}''.
+The \LaTeX{} macro ``\inlinecode{\small\textbackslash{}demosfoptimizedsn}'' is automatically calculated and recorded during the project's analysis and expands to the value ``\inlinecode{0.25}''.
+The automatically generated file \inlinecode{project.tex} stores all such inline output macros.
+Furthermore, Figure \ref{fig:datalineage} shows that it is a prerequisite of \inlinecode{paper.pdf} (as well as the manually written \LaTeX{} sources that are shown in green).
+Therefore \inlinecode{paper.pdf} will not be built until this file is ready and up-to-date.
+
+However, managing all the necessary \LaTeX{} macros for a full project in one file is against the modularity principle and can be frustrating and buggy.
+To address this problem, all subMakefiles \emph{must} contain a fixed target with the same base-name, but with a \inlinecode{.tex} suffix.
+For example in Figure \ref{fig:datalineage}, assume \inlinecode{out-1b.dat} is a table and the mean of its third column must be reported in the paper.
+Therefore in \inlinecode{format.mk}, a prerequisite of \inlinecode{format.tex} is \inlinecode{out-1b.dat} (as shown by the arrow in Figure \ref{fig:datalineage}).
+The recipe of this rule will calculate the mean of the column and put it in the \LaTeX{} macro which is written in \inlinecode{format.tex}.
+In a similar way, any other reported calculation from \inlinecode{format.mk} is stored as a \LaTeX{} macro in \inlinecode{format.tex}.
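+
+A hedged sketch of such a macro-writing rule (simplified names; the \inlinecode{\textbackslash{}meanofcolumnthree} macro name is hypothetical):
+
+\begin{verbatim}
+# Sketch: compute the mean of the third column of out-1b.dat and
+# write it into this subMakefile's LaTeX macro file (format.tex).
+$(mtexdir)/format.tex: $(adir)/out-1b.dat
+        v=$$(awk 'NR>1 {s+=$$3; n++} END {print s/n}' $<); \
+        printf '\\newcommand{\\meanofcolumnthree}{%s}\n' $$v > $@
+\end{verbatim}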
+
+These \LaTeX{} macro files thus form the core skeleton of the project: as shown in Figure \ref{fig:datalineage}, the outward arrows of all built files of any subMakefile ultimately lead to one of these \LaTeX{} macro files.
+Note that \emph{built} files in a subMakefile don't have to be a prerequisite of its \inlinecode{.tex} file.
+They may point to another Makefile's \LaTeX{} macro file.
+For example, even though \inlinecode{input1.dat} is a target in \inlinecode{download.mk}, it isn't a prerequisite of \inlinecode{download.tex}; it is a prerequisite of \inlinecode{out-2a.dat} (a target in \inlinecode{demo-plot.mk}).
+The lineage ultimately ends in the \LaTeX{} macro file \inlinecode{analysis3.tex}.
+
+
+
+
+
+\subsubsection{Verification of outputs (\inlinecode{verify.mk})}
+\label{sec:outputverification}
+An important principle for this template is that outputs should be automatically verified, see Section \ref{principle:verify}.
+However, simply confirming the checksum of the final PDF, or of figures and datasets, is not generally possible: as mentioned in Section \ref{principle:verify}, many tools that produce datasets or PDF files write the creation date into the produced files.
+Therefore it is necessary to verify the project's outputs before the PDF is created.
+To facilitate output verification, the project has a \inlinecode{verify.mk} Makefile, see Figure \ref{fig:datalineage}.
+It is the only prerequisite of \inlinecode{project.tex} that was described in Section \ref{sec:paperpdf}.
+Verification is therefore the connection-point, or bottleneck, between the analysis steps of the project and its final report.
+
+Prior to publication, the project authors should add the MD5 checksums of all the \LaTeX{} macro files in the recipe of \inlinecode{verify\-.tex}.
+The necessary structure is already there, so adding/changing the values is trivial.
+If any \LaTeX{} macro is different in future builds of the project, the project will abort with a warning of the problematic file.
+When projects involve other outputs (for example images, tables or datasets that will also be published), their contents should also be validated.
+To do this, prerequisites should be added to the \inlinecode{verify\-.tex} rule that automatically check the \emph{contents} of other project outputs.
+Recall that many tools print the creation date automatically when creating a file, so to verify a file, this kind of metadata must be ignored.
+The recipe of \inlinecode{verify\-.tex} already contains some Make functions to facilitate checking the contents of some file formats; others can be added easily.
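+
+For example, a hedged sketch (in plain shell, with a hypothetical expected checksum) of verifying one macro file while ignoring comment lines (where volatile metadata like dates may be written):
+
+\begin{verbatim}
+# Sketch: checksum a LaTeX macro file, ignoring comment lines;
+# a mismatch with the recorded value aborts the project.
+sum=$(sed -e '/^%/d' format.tex | md5sum | awk '{print $1}')
+[ x"$sum" = x"expected-md5" ] || exit 1
+\end{verbatim}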
+
+
+
+
+
+\subsubsection{Project initialization (\inlinecode{initialize.mk})}
+\label{sec:initialize}
+The \inlinecode{initial\-ize\-.mk} subMakefile is present in all projects and is the first subMakefile that is loaded into \inlinecode{top-make.mk} (see Figure \ref{fig:datalineage}).
+Project authors rarely need to modify/edit this file; it is part of the template's low-level infrastructure.
+Nevertheless, project authors are strongly encouraged to study it and use all the useful variables and targets that it defines.
+\inlinecode{initial\-ize\-.mk} doesn't contain any analysis or major processing steps; it just initializes the system.
+For example it sets the necessary environment variables, internal Make variables and defines generic rules like \inlinecode{./project make clean} (to clean/delete all built products, not software) or \inlinecode{./project make dist} (to package the project into a tarball for distribution) among others.
+
+It also adds one special \LaTeX{} macro in \inlinecode{initial\-ize\-.tex}: the current Git commit hash, which is generated every time the analysis is run.
+It is stored in the \inlinecode{{\footnotesize\textbackslash}projectversion} macro and can be used anywhere within the final report.
+For this PDF it has a value of \inlinecode{\projectversion}.
+One good place to put it is at the end of the abstract, enabling any reader to identify the exact point in history that the report was created.
+It also uses the \inlinecode{--dirty} option of Git's \inlinecode{describe} command: if any version-controlled file is not already committed, the value of this macro will have a \inlinecode{-dirty} suffix.
+If it's in a prominent place (like the abstract), it will always remind the author to commit their work.
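+
+A hedged sketch of how this macro may be written (simplified to plain shell; the actual rule in \inlinecode{initial\-ize\-.mk} has more checks):
+
+\begin{verbatim}
+# Sketch: store the Git commit (with a '-dirty' suffix when
+# there are uncommitted changes) as a LaTeX macro.
+v=$(git describe --dirty --always)
+printf '\\newcommand{\\projectversion}{%s}\n' "$v" > initialize.tex
+\end{verbatim}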
+
+
+
+
+
+\subsubsection{Importing and validating inputs (\inlinecode{download.mk})}
+\label{sec:download}
+The \inlinecode{download.mk} subMakefile is present in all Maneage projects and contains the common steps for importing the input dataset(s) into the project.
+All necessary input datasets to the project are imported through this subMakefile.
+This helps in modularity and minimal complexity (Sections \ref{principle:modularity} \& \ref{principle:complexity}): to see what external datasets were used in a project, this is the only necessary file to manage/read.
+Also, a simple call to a downloader (for example \inlinecode{wget}) is not usually enough.
+Irrespective of where the dataset is \emph{used} in the project's lineage, some operations are always necessary when importing datasets (see the sketch after this list):
+\begin{itemize}
+\item The file may already be present on the host, or the user may not have an internet connection.
+ Hence it is necessary to check the given \emph{input directory} on the host before attempting to download over the internet (see Section \ref{sec:localdirs}).
+\item The network might temporarily fail; even today this isn't uncommon, and an automatic re-trial is thus necessary.
+  Crashing the whole analysis because of a temporary network issue requires human intervention and is against the completeness principle (Section \ref{principle:complete}).
+\item Before it can be used, the integrity of the imported file must be confirmed with its stored checksum.
+\end{itemize}
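+
+A hedged sketch combining these operations (the directory variables are hypothetical; \inlinecode{MK20URL} and \inlinecode{MK20MD5} are defined in \inlinecode{INPUTS.conf}, see Figure \ref{fig:inputconf}):
+
+\begin{verbatim}
+# Sketch: use a local copy when available, otherwise download
+# with re-trials; in both cases, validate the checksum.
+$(indir)/menke20.xlsx: | $(indir)
+        if [ -f $(userinputdir)/menke20.xlsx ]; then \
+          cp $(userinputdir)/menke20.xlsx $@; \
+        else \
+          wget --tries=5 -O $@ "$(MK20URL)"; \
+        fi; \
+        echo "$(MK20MD5)  $@" | md5sum -c \
+          || { rm -f $@; exit 1; }
+\end{verbatim}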
+
+In some scenarios the generic download script may not be useful, for example, when the database takes queries and generates the dataset for download on the fly.
+In such cases, users can add their own Make rules in this \inlinecode{download.mk} to import the file.
+They can use its pre-defined structure to do the extra steps like validating it.
+Note that in such cases the servers often encode the creation date and version of their database system in the resulting file as metadata.
+Even when the actual data is identical, this metadata (which is in the same file) will differ based on the moment the query was done.
+Therefore a simple checksum of the whole downloaded file can't be used for validation in such scenarios, see Section \ref{principle:verify}.
+
+\begin{figure}[t]
+ \input{tex/src/figure-inputconf.tex}
+ \vspace{-3mm}
+ \caption{\label{fig:inputconf} Contents of the \inlinecode{INPUTS.conf} file for the demonstration dataset of \citet{menke20}.
+ This file contains the basic, or minimal, metadata for retrieving the required dataset(s) of a project: it can become arbitrarily long.
+  Here, \inlinecode{MK20DATA} contains the name of this dataset within this project.
+ \inlinecode{MK20MD5} contains the MD5 checksum of the dataset, in order to check the validity and integrity of the dataset before usage.
+ \inlinecode{MK20SIZE} contains the size of the dataset in human readable format.
+  \inlinecode{MK20URL} is the URL which the dataset is automatically downloaded from (only when it's not already present on the host).
+ Note that the original URL (footnote \ref{footnote:dataurl}) was too long to display properly here.
+ }
+\end{figure}
+
+Each external dataset has some basic information, including its expected name on the local system (for offline access), the necessary checksum to validate it (either the whole file or just its main ``data''), and its URL/PID.
+In this template, such information regarding a project's input dataset(s) is in the \inlinecode{INPUTS.conf} file.
+See Figures \ref{fig:files} \& \ref{fig:datalineage} for the position of \inlinecode{INPUTS.conf} in the project's file structure and data lineage respectively.
+For demonstration, in this paper, we are using the datasets of \citet{menke20} which are stored in one \inlinecode{.xlsx} file on bioRxiv\footnote{\label{footnote:dataurl}Full data URL: \url{\menketwentyurl}}.
+Figure \ref{fig:inputconf} shows the corresponding \inlinecode{INPUTS.conf}, with the necessary information stored as Make variables for this dataset.
+
+\begin{figure}[t]
+ \input{tex/src/figure-src-download.tex}
+ \vspace{-3mm}
+ \caption{\label{fig:download} Simplified Make rule, showing how the downloaded data URL is written into this paper (in Footnote \ref{footnote:dataurl}).
+ In Make, lines starting with a \inlinecode{\#} are ignored (thus used for human-readable comments, like the red line shown here).
+ The \emph{target} is placed before a colon (\inlinecode{:}) and its \emph{prerequisite(s)} is(are) after the colon.
+ Here, both the target and prerequisite can be seen in the second line.
+ The executable \emph{recipe} lines (shell commands to build the target from the prerequisite), start with a \inlinecode{TAB} (shown here with a light gray \inlinecode{\_\_\_TAB\_\_\_}).
+ A Make recipe is an independent, or containerized, shell script.
+ In the recipe, \inlinecode{\$@} is an \emph{automatic variable}, expanding to the target file's name.
+ For \inlinecode{MK20URL}, see Figure \ref{fig:inputconf}.
+ The same URL is then passed to this paper through the definition of the \LaTeX{} variable \inlinecode{\textbackslash{}menketwentyurl} that is written in \inlinecode{\$(mtexdir)/download.tex}.
+ Later, when the paper's PDF is being built, this \inlinecode{.tex} file is loaded into it.
+ \inlinecode{\$(mtexdir)} is the directory hosting all the \LaTeX{} macro files for various stages of the analysis, see Section \ref{sec:valuesintext}.
+ }
+\end{figure}
+
+If \inlinecode{menke20.xlsx} exists in the \emph{input} directory, it will just be validated and put in the \emph{build} directory.
+Otherwise, it will be downloaded from the given URL, validated, and put in the build directory.
+Recall that the input and build directories differ from system to system and are specified at project configuration time, see Section \ref{sec:localdirs}.
+In the data lineage of Figure \ref{fig:datalineage}, the arrow from \inlinecode{INPUTS.conf} to \inlinecode{menke20.xlsx} symbolizes this step.
+Note that in our notation, once an external dataset is imported, it is a \emph{built} product, it thus has a blue box in Figure \ref{fig:datalineage}.
+
+It is sometimes necessary to report basic information about external datasets in the report/paper.
+As described in Section \ref{sec:valuesintext}, here this is done with \LaTeX{} macros to avoid human error.
+For example in Footnote \ref{footnote:dataurl}, we gave the full URL that this dataset was downloaded from.
+In the \LaTeX{} source of that footnote, this URL is stored as the \inlinecode{\textbackslash{}menketwentyurl} macro, which is created with the simplified\footnote{This Make rule is simplified by removing the directory variable names to help in readability.} Make rule shown in Figure \ref{fig:download} (it is located at the bottom of \inlinecode{download.mk}).
+
+In this rule, \inlinecode{download.tex} is the \emph{target} and \inlinecode{menke20.xlsx} is its \emph{prerequisite}.
+The \emph{recipe} to build the target from the prerequisite is the \inlinecode{echo} shell command which writes the \LaTeX{} macro definition as a simple string (enclosed in double-quotes) into the \inlinecode{download.tex}.
+The target is built after the prerequisite(s) are built, or when the prerequisites are newer than the target (for more on Make, see Appendix \ref{appendix:make}).
+Note that \inlinecode{\$(MK20URL)} is a call to the variable defined above in \inlinecode{INPUTS.conf}.
+Also recall that in Make, \inlinecode{\$@} is an \emph{automatic variable}, which is expanded to the rule's target name (in this case, \inlinecode{download.tex}).
+Therefore if the dataset is re-imported (possibly with a new URL), the URL in Footnote \ref{footnote:dataurl} will also be re-created automatically.
+In the data lineage of Figure \ref{fig:datalineage}, the arrow from \inlinecode{menke20.xlsx} to \inlinecode{download.tex} symbolizes this step.
+
+
+
+
+
+\subsubsection{The analysis}
+\label{sec:analysis}
+
+The analysis subMakefile(s) are loaded into \inlinecode{top-make.mk} after the initialization and download steps (see Sections \ref{sec:download} and \ref{sec:initialize}).
+However, the analysis phase involves much more complexity.
+If done without modularity in mind from the start, research project sources can become very long, thus becoming hard to modify, debug, improve or read.
+Maneage is therefore designed to encourage and facilitate splitting the analysis into multiple/modular subMakefiles.
+For example in the data lineage graph of Figure \ref{fig:datalineage}, the analysis is broken into three subMakefiles: \inlinecode{format.mk}, \inlinecode{demo-plot.mk} and \inlinecode{analysis3.mk}.
+
+Theoretical discussion of this phase can be hard to follow; we will thus describe a demonstration project on data from \citet{menke20}.
+In Section \ref{sec:download}, the process of importing this dataset into the project was described.
+The first issue is that \inlinecode{menke20.xlsx} must be converted to a simple plain-text table which is generically usable by simple tools (see the principle of minimal complexity in Section \ref{principle:complexity}).
+For more on the problems with Microsoft Office and this format, see Section \ref{sec:lowlevelanalysis}.
+In \inlinecode{format.mk} (Figure \ref{fig:formatsrc}), we thus convert it to a simple white-space separated, plain-text table (\inlinecode{menke20-table-3.txt}) and do a basic calculation on it.
+
+\begin{figure}[t]
+ \input{tex/src/figure-src-format.tex}
+ \vspace{-3mm}
+ \caption{\label{fig:formatsrc}Simplified contents of \inlinecode{format.mk}.
+ Here, we want to convert the downloaded XLSX dataset (Office Open XML Workbook format) to a simple plain-text fixed-width-per-column table.
+ For the position of this subMakefile in the full project's data lineage, see Figure \ref{fig:datalineage}.
+ In particular, here the arrows of that figure from \inlinecode{menke20.xlsx} to \inlinecode{menke20-table-3.txt} and from the latter to \inlinecode{format.tex} are shown as the second and third Make rules.
+ See Figure \ref{fig:download} and Appendix \ref{appendix:make} for more on the Make notation and Section \ref{sec:analysis} for describing the steps.
+ }
+\end{figure}
+
+As shown in Figure \ref{fig:formatsrc}, the first operation (or Make \emph{rule}) is to define a directory to keep the generated files.
+To keep the source and build-directories separate, we thus define \inlinecode{a1dir} under the build-directory (\inlinecode{BDIR}, see Section \ref{sec:localdirs}).
+We'll then define all outputs/targets to be under this directory.
+The second rule (which depends on the directory as a prerequisite), then converts the Microsoft Excel spreadsheet file to a simple plain-text format using the XLSX I/O program.
+But XLSX I/O only converts to CSV and we don't need all the columns here, so we further shorten and modify the table (re-order columns and multiply them) using the AWK program (which is available on any Unix-like operating system).
+In Figure \ref{fig:datalineage} on the example data lineage, this second rule is shown with the arrow from \inlinecode{menke20.xlsx} to \inlinecode{menke20-table-3.txt}.
+
+Finally, as described in Section \ref{sec:valuesintext}, the last rule of a subMakefile should be a \LaTeX{} macro file (in Figure \ref{fig:formatsrc}, this is the third rule).
+Ending each analysis phase with a \LaTeX{} macro is natural in many papers/reports.
+For example, here, once the dataset is ready, we want to give the reader a general view of the dataset size.
+We thus need to report the number of subjects studied in \citet{menke20}.
+Therefore in the \LaTeX{} macro rule, we count them from the simplified table of the second rule.
+For each of the two, the count is written into a temporary shell variable \inlinecode{v}, which is then written into the \LaTeX{} macros \inlinecode{\textbackslash{}menkenumpapers} and \inlinecode{\textbackslash{}menkenumjournals} respectively.
+In the built PDF paper, they expand to $\menkenumpapers$ (number of papers studied) and $\menkenumjournals$ (number of journals studied) respectively.
+This rule is shown schematically in Figure \ref{fig:datalineage} with the arrow from \inlinecode{menke20-table-3.txt} to \inlinecode{format.tex}.
+
+To further demonstrate the concept, we'll reproduce (with some enhancements) Figure 1C of \citet{menke20} in Figure \ref{fig:toolsperyear}.
+Figure \ref{fig:toolsperyear} also shows the number of papers that were studied each year in the same plot (unlike the original plot).
+Its horizontal axis also shows the full range of the data (starting from $\menkefirstyear$) while the original Figure 1C in \citet{menke20} starts from 1997.
+The reason \citet{menke20} decided to avoid earlier years was probably the small number of papers before 1997.
+For example in \menkenumpapersdemoyear, they had only studied \menkenumpapersdemocount{} papers.
+Note that both the numbers of this sentence, and the first year of data mentioned above, are actually \LaTeX{} macros (see Figure \ref{fig:demoplotsrc}).
+
+\begin{figure}[t]
+ \begin{center}
+ \includetikz{figure-tools-per-year}
+ \end{center}
+ \vspace{-5mm}
+ \caption{\label{fig:toolsperyear}Fraction of papers mentioning software tools (green line, left vertical axis) to total number of papers studied in that year (light red bars, right vertical axis in log-scale).
+ Data from \citet{menke20}.
+  The subMakefile archiving the executable lineage of the figure's data is shown in Figure \ref{fig:demoplotsrc} and discussed in Section \ref{sec:analysis}.
+ }
+\end{figure}
+
+The operation of reproducing that figure is a contextually separate operation from the operations that were described above in \inlinecode{format.mk}.
+Therefore we add a new subMakefile to the project called \inlinecode{demo-plot.mk}, which is shown in Figure \ref{fig:demoplotsrc}.
+As before, in the first rule, we make the directory to host the data (\inlinecode{a2dir}).
+However, unlike before, this directory is placed under \inlinecode{texdir} which is the directory hosting all \LaTeX{} related files.
+This is because the plot of Figure \ref{fig:toolsperyear} is directly made within \LaTeX{}, using its PGFPlots package\footnote{The PGFPlots package of \LaTeX{}: \url{https://ctan.org/pkg/pgfplots}.
+ \inlinecode{texdir} has some special features when using \LaTeX{}, see Section \ref{sec:buildingpaper}.
+ PGFPlots uses the same graphics engine that is building the paper, producing a high quality figure that blends nicely in the paper.}.
+Note that this is just our personal choice; other methods of generating plots (for example with R, Gnuplot or Matplotlib) are also possible within this system.
+As with the input data files of PGFPlots, it is just necessary to put the files that are loaded into \LaTeX{} under the \inlinecode{\$(BDIR)/tex} directory, see Section \ref{sec:publishing}.
+
+The plain-text table that is used to build Figure \ref{fig:toolsperyear} is defined as the variable \inlinecode{a2mk20f1c} of Figure \ref{fig:demoplotsrc} (just above the second rule).
+As shown in the second rule, again we use GNU AWK to extract the necessary information from \inlinecode{mk20tab3} (which was built in \inlinecode{format.mk}).
+\inlinecode{mk20tab3} is thus the \emph{prerequisite} of \inlinecode{a2mk20f1c} (along with \inlinecode{a2dir}).
+In Figure \ref{fig:datalineage}, this lineage is shown as the arrow from \inlinecode{menke20-table-3.txt} (file name of \inlinecode{mk20tab3}) that points to \inlinecode{tools-per-year.txt} (file name of \inlinecode{a2mk20f1c}).
+
+As with all subMakefiles, \inlinecode{demo-plot.mk} finishes with the rule to build its \LaTeX{} macro file (\inlinecode{demo-plot.tex}) containing the values reported above.
+But here, it doesn't just depend on \inlinecode{a2mk20f1c}, it also depends on the \inlinecode{menke-demo-year.conf} configuration file.
+This is also visible in the data lineage (Figure \ref{fig:datalineage}): two arrows point to \inlinecode{demo-plot.tex}, one from a configuration file, and one from a built file.
+Configuration files are discussed in more detail in Section \ref{sec:configfiles}.
+
+\begin{figure}[t]
+ \input{tex/src/figure-src-demoplot.tex}
+ \vspace{-2mm}
+  \caption{\label{fig:demoplotsrc}Contents of the \inlinecode{demo-plot.mk} subMakefile used to generate the data for Figure \ref{fig:toolsperyear}.
+ }
+\end{figure}
+
+In a similar manner many more subMakefiles can be added in more complex analysis scenarios.
+This is shown with the lower opacity files and dashed arrows of the data lineage in Figure \ref{fig:datalineage}.
+Generally, the files created within one subMakefile don't necessarily have to be a prerequisite of its \LaTeX{} macro.
+For example see \inlinecode{demo-out.dat} in Figure \ref{fig:datalineage}: it is managed in \inlinecode{demo-plot.mk}, however, it isn't a prerequisite of \inlinecode{demo-plot.tex}, it is a prerequisite of \inlinecode{out-3b.dat} (which is managed in \inlinecode{another-step.mk} and is a prerequisite of \inlinecode{another-step.tex}).
+Hence ultimately, through another file, its descendants conclude in a \LaTeX{} macro.
+
+The high-level \inlinecode{top-make.mk} file is designed to simplify the addition of new subMakefiles for the authors, and reading the source for readers (see Section \ref{sec:highlevelanalysis}).
+As mentioned before, this high-level Makefile just defines the ultimate target (\inlinecode{paper.pdf}, see Section \ref{sec:paperpdf}) and imports all the subMakefiles in the specific order.
+For example Figure \ref{fig:topmake} shows this project's \inlinecode{top-make.mk}.
+When descriptive names are chosen for the subMakefiles, a simple glance over the values given to \inlinecode{makesrc} here provides a general understanding of the project without needing to get into the technical details.
+
+\begin{figure}[t]
+ \input{tex/src/figure-src-topmake.tex}
+ \vspace{-3mm}
+  \caption{\label{fig:topmake} General view of the high-level \inlinecode{top-make.mk} Makefile, which manages the project's analysis that is in various subMakefiles.
+ See Figures \ref{fig:files} \& \ref{fig:datalineage} for its location in the project's file structure and its data lineage, as well as the subMakefiles it includes.
+ }
+\end{figure}
+
+
+
+\subsubsection{Configuration files}
+\label{sec:configfiles}
+
+The analysis subMakefiles discussed above in Section \ref{sec:analysis} should only contain the organization of an analysis; they should not contain any fixed numbers, settings or parameters.
+Such elements should only be used as variables that are defined elsewhere.
+In the data lineage plot of Figure \ref{fig:datalineage}, configuration files are shown as the sharp-edged, green \inlinecode{*.conf} files in the top row.
+
+The last recipe of Figure \ref{fig:demoplotsrc} is a good demonstration of their usage: in Section \ref{sec:analysis}, we reported the number of papers studied by \citet{menke20} in \menkenumpapersdemoyear.
+However, note that in Figure \ref{fig:demoplotsrc}, the year's number is not written by hand in the subMakefile.
+It is referenced through the \inlinecode{menke-year-demo} variable, defined in \inlinecode{menke-demo-year.conf}, which is a prerequisite of the \inlinecode{demo-plot.tex} rule.
+This is also visible in the data lineage of Figure \ref{fig:datalineage}.
+
+All the configuration files of a project are placed under the \inlinecode{reproduce/analysis/config} (see Figure \ref{fig:files}) subdirectory, and are loaded into \inlinecode{top-make.mk} before any of the subMakefiles, see Figure \ref{fig:topmake}.
+The configuration files greatly simplify project management from multiple perspectives as listed below:
+
+\begin{itemize}
+\item If an analysis parameter is used in multiple places within the project, simply changing the value in the configuration file will change it everywhere in the project.
+  This is critical in more complex projects; if not done like this, it can lead to significant human error.
+\item Configuration files enable the logical separation between the low-level implementation and high-level running of a project.
+ For example after writing the project, the authors don't need to remember where the number/parameter was used, they can just modify the configuration file.
+ Other co-authors, or readers, of the project also benefit: they just need to know that there is a unified place for high-level project settings, parameters, or numbers without necessarily having to know the low-level implementation.
+\item A configuration file will be a prerequisite to any rule that uses its value (see the sketch after this list).
+  If the configuration file is updated (the value/parameter is changed), Make will automatically detect the data lineage branch that is affected by it and re-execute only that branch, without any human interference.
+\end{itemize}
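+
+A hedged sketch of the last point (simplified from the rule of Figure \ref{fig:demoplotsrc}; the configuration value shown is illustrative, not the project's actual choice):
+
+\begin{verbatim}
+# In menke-demo-year.conf (sketch; value illustrative):
+menke-year-demo = 1996
+
+# In demo-plot.mk (sketch): the configuration file is both
+# loaded (at the top of top-make.mk) and a prerequisite, so
+# changing it re-executes only this branch of the lineage.
+$(mtexdir)/demo-plot.tex: $(a2mk20f1c) \
+                reproduce/analysis/config/menke-demo-year.conf
+        n=$$(awk '$$1==$(menke-year-demo){print $$2}' $(a2mk20f1c)); \
+        printf '\\newcommand{\\menkenumpapersdemocount}{%s}\n' \
+               $$n > $@
+\end{verbatim}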
+
+This is a great leap compared to the current, mostly manual, project management that many scientists employ.
+Manual management is prone to serious human error factors: at the later phases of a project, scientists are least likely to experiment on their project's configurations.
+However, the later phases of a project are precisely the times where the lower-level parts of the project are complete and the authors can look at the bigger picture.
+This style of managing project parameters therefore produces a much healthier scientific result, where experimentation is very cheap during all phases of a project: before its publication (by the authors) and after it (by the authors and readers).
+
+
+
+
+\subsection{Projects as Git branches of Maneage}
+\label{sec:starting}
+
+Maneage is fully composed of plain-text files distributed in a directory structure (see Sections \ref{principle:text} \& \ref{sec:generalimplementation} and Figure \ref{fig:files}).
+Therefore it can be maintained under version control systems like Git (for more on version control, see Appendix \ref{appendix:versioncontrol}).
+Every commit in the version controlled history contains \emph{a complete} snapshot of the data lineage, for more see the completeness principle in Section \ref{principle:complete}.
+
+Maneage is maintained by its developers in a central branch, which we'll call \inlinecode{man\-eage} hereafter.
+The \inlinecode{man\-eage} branch contains all the low-level infrastructure, or skeleton, that is necessary for any project.
+This skeleton is primarily the configuration features discussed in Section \ref{sec:projectconfigure}\footnote{Recall that project configuration files are located under \inlinecode{reproduce/software} in Figure \ref{fig:files}, and executed with the \inlinecode{./project configure} command.}.
+The \inlinecode{maneage} branch only contains a minimal demonstration analysis in order to be complete\footnote{The names of all the files related to the demonstration of the \inlinecode{maneage} branch have a \inlinecode{delete-me} prefix to highlight that they must be deleted when starting a new project.}.
+
+To start a new project, users simply clone it from its reference repository and build their own Git branch over the most recent commit.
+This is demonstrated in the first phase of Figure \ref{fig:branching}: the project started by branching off commit \inlinecode{0c120cb} in \inlinecode{maneage}.
+They can then start customizing Maneage for their project and adding their high-level analysis in their own branch and push it to their own Git repository.
+Maneage contains a file called \inlinecode{README-hacking.md} that has a complete checklist of steps to start a new project and remove demonstration parts.
+This file is updated on the \inlinecode{maneage} branch and will always be up-to-date with the low-level infrastructure.
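+
+A hedged sketch of these first steps (the repository URL and branch names are illustrative; \inlinecode{README-hacking.md} is the authoritative checklist):
+
+\begin{verbatim}
+# Sketch: clone Maneage, keep its remote for future merges,
+# and start the project on a new branch.
+git clone https://git.maneage.org/project.git my-project
+cd my-project
+git remote rename origin origin-maneage
+git checkout -b master
+\end{verbatim}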
+
+%% Exact URLs of imported images.
+%% Collaboration icon: https://www.flaticon.com/free-icon/collaboration_809522
+%% Paper done: https://www.flaticon.com/free-icon/file_2521838
+%% Paper processing: https://www.flaticon.com/free-icon/file_2521989
+\begin{figure}[t]
+ \includetikz{figure-branching}
+ \vspace{-3mm}
+ \caption{\label{fig:branching} Projects start by branching off the main Maneage branch and developing their high-level analysis over the common low-level infrastructure: add flesh to a skeleton.
+ The low-level infrastructure can always be updated (keeping the added high-level analysis intact), with a simple merge between branches.
+  Two phases of a project's evolution are shown here: in phase 1, a co-author has made two commits in parallel to the main project branch, which have later been merged.
+ In phase 2, the project has finished: note the identical first project commit and the Maneage commits it branches from.
+  The dashed parts of phase 2 can be any arbitrary history after those shown in phase 1.
+  A second team now wants to build upon that published work in a derived branch, or project.
+ The second team applies two commits and merges their branch with Maneage to improve the skeleton and continue their research.
+ The Git commits are shown on their branches as colored ellipses, with their hash printed in them.
+ The commits are colored based on the team that is working on that branch.
+ The collaboration and paper icons are respectively made by `mynamepong' and `iconixar' and downloaded from \url{www.flaticon.com}.
+ }
+\end{figure}
+
+After a project starts, Maneage will evolve.
+For example, new features will be added, and low-level bugs that are relevant to any project will be fixed.
+Because all the changes in Maneage are committed on the \inlinecode{maneage} branch (that projects also branch-off from) updating the project's low-level infra-structure is as easy as merging the \inlinecode{maneage} branch into the project's branch.
+For example see how Maneage's \inlinecode{3c05235} commit has been merged into project's branch trough commit \inlinecode{2ed0c82} in Figure \ref{fig:branching} (phase 1).
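+As a minimal sketch of such an update from the command line (the branch names are assumptions; the exact steps are described in \inlinecode{README-hacking.md}):
+\begin{verbatim}
+# Bring in the latest Maneage commits.
+git checkout maneage
+git pull
+
+# Merge the updated infrastructure into the project's branch.
+git checkout master
+git merge maneage
+\end{verbatim}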
+
+This doesn't just apply to the pre-publication phase: when done in Maneage, a project can be revived at any later date by other researchers, as shown in phase 2 of Figure \ref{fig:branching}.
+In that figure, a new team of researchers have decided to experiment on the results of the published paper and have merged their branch with the Maneage branch (commit \inlinecode{a92b25a}) to fix a portability problem on their operating system, which was fixed as a bug in Maneage after the paper's publication.
+Propagating bug fixes or improvements in the low-level infrastructure to all projects using Maneage has been one of the reasons it evolved so well over the last 5 years; see Section \ref{sec:futurework}.
+As it was used in more projects by more users, many bugs were found and improvements made.
+We would then implement the fix in Maneage, and it would propagate to all other projects automatically in their next merge.
+
+Other scenarios include a third project that can easily merge various high-level components from different projects into its own branch, thus adding a temporal dimension to their data lineage.
+Modern version control systems provide many more capabilities that can be exploited through Maneage in project management, thanks to the shared branch it has with \emph{all} projects that use it.
+
+
+
+
+
+\subsection{Multi-user collaboration on single build directory}
+\label{sec:collaborating}
+
+Because the project's source and build directories are separate, it is possible for different users to share a build directory while working on their own separate project branches during a collaboration, similar to the parallel branch that is later merged in phase 1 of Figure \ref{fig:branching}.
+
+To give all users the necessary permissions, Maneage assumes that they are in the same (POSIX) user group on the system.
+All files built in the build directory are then automatically assigned to this user group, with read-write permissions for all group members (\inlinecode{-rwxrwx---}), through the \inlinecode{sg} and \inlinecode{umask} commands that are prepended to the call to Make.
+The \inlinecode{./project} script has a special \inlinecode{--group} option which activates this mode in both configuration and analysis phases.
+It takes the user group name as its argument and the built files will only be accessible by the group members, even when the shared location is accessible by people outside the project.
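+As a minimal sketch (the group name \inlinecode{myteam} is hypothetical, and the \inlinecode{--group=NAME} calling convention is an assumption):
+\begin{verbatim}
+./project configure --group=myteam
+./project make --group=myteam
+\end{verbatim}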
+
+When multiple project members are contributing on a shared build directory, they usually work on independent parts of the project which won't cause any conflict.
+When there is a conflict, a member can temporarily change the name of the part's top directory within their branch.
+For example, if Alice is working on \inlinecode{demo-plot.mk} (Figure \ref{fig:demoplotsrc}) in parallel with others, she can set \inlinecode{a2dir} to be \inlinecode{\$(texdir)/tools-per-year-alice}.
+Other project members can also compare her results in this way, and once her work is merged into the master branch, \inlinecode{a2dir} can be set back to its original value.
+
+The project already applies this strategy to the part of the project that runs \LaTeX{} to build the final report.
+This is because project members will usually also be editing their parts of the report/paper as they progress.
+To avoid conflicts, when the project is configured and built with \inlinecode{--group}, each project member's user-name is appended to the \LaTeX{} build directory (which is under \inlinecode{\$(BDIR)/tex}).
+
+However, human error is inevitable, so when some phases of the project take long to run, the user and group write-permission flags can be manually removed from the respective subdirectories under \inlinecode{\$(BDIR)} until the project is to be built from scratch, for example for a test prior to submission.
+
+
+
+
+\subsection{Publishing the project}
+\label{sec:publishing}
+
+Once the project is complete, publishing it is the final step.
+In a scientific scenario, the project should be made public so the community can study, check, and build upon it.
+As discussed in the various steps before, the source of the project (the software configuration, data lineage and narrative text) is fully in plain text, greatly facilitating its publication.
+
+
+\subsubsection{Automatic creation of publication tarball}
+\label{sec:makedist}
+To facilitate the publication of the project source, Maneage has a special \inlinecode{dist} target during the build process which is activated with the command \inlinecode{./project make dist}.
+In this mode, Maneage will not do any analysis; it will simply copy the full project source (at the given commit) into a temporary directory and compress it into a \inlinecode{.tar.gz} file.
+If Zip compression is necessary, the \inlinecode{dist-zip} target can be called instead of \inlinecode{dist}.
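+As a sketch of both commands:
+\begin{verbatim}
+./project make dist       # package the source into a .tar.gz
+./project make dist-zip   # same content, in Zip format
+\end{verbatim}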
+
+The \inlinecode{dist} tarball contains the project's full data lineage and is enough to reproduce the full project: it can build the software, download the data, run the analysis, and build the final PDF.
+However, it doesn't contain the Git history; it is just a checkout of one commit.
+Instead of the history, it contains all the necessary \emph{built products} that go into building the final paper without the analysis: for example the used plots, figures, tables, and \inlinecode{project.tex}, see Section \ref{sec:valuesintext}.
+As a result, the tarball can \emph{also} be used to build only the final report, with a simple \inlinecode{pdflatex paper} command and \emph{without} running \inlinecode{./project}.
+When the project is distributed as a tarball (not as a Git repository), building the report may be the main purpose (as in the arXiv distribution scenario discussed below), and the data lineage (under the \inlinecode{reproduce/} directory) is likely just a supplement.
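+For example (the tarball name is hypothetical):
+\begin{verbatim}
+tar xf project-dist.tar.gz
+cd project-dist
+pdflatex paper    # builds the PDF from the stored built products
+\end{verbatim}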
+
+\subsubsection{What to publish, and where?}
+\label{sec:whatpublish}
+The project's source, which is fully in hand-written plain-text, has a very small volume, usually much less than one megabyte.
+However, the necessary input files (see Section \ref{definition:input}) and built datasets may be arbitrarily large, from megabytes to petabytes or more.
+Therefore, there are various scenarios for the publication of the project as described below:
+
+\begin{itemize}
+\item \textbf{Only source:} Publishing the project source is very easy because it only contains plain-text files with a very small volume: a commit will usually be on the scale of $\sim100$ kB. With the Git history, it will usually only be on the scale of $\sim5$ MB.
+
+ \begin{itemize}
+ \item \textbf{Public Git repository:} This is the simplest publication method.
+ The project will already be on a (private) Git repository prior to publication.
+ In such cases, the private configuration can be removed so it becomes public.
+  \item \textbf{In journal or PDF-only preprint systems (e.g., bioRxiv):} If the journal or pre-print server allows publication of small supplement files to the paper, the commit that produced the final paper can be submitted as a compressed file, for example the tarball produced by \inlinecode{./project make dist} (Section \ref{sec:makedist}).
+ \item \textbf{arXiv:} Besides simply uploading a PDF pre-print, on arXiv, it is also possible to upload the \LaTeX{} source of the paper.
+ arXiv will run its own internal \LaTeX{} engine on the uploaded files and produce the PDF that is published.
+  When the project is published, arXiv also allows users to anonymously download the \LaTeX{} source tarball that the authors uploaded\footnote{In the current arXiv user interface, the tarball is available by clicking the ``Other formats'' link on the paper's main page, and then clicking ``Download source''; this can be checked with \url{https://arxiv.org/abs/1909.11230} of \citet{akhlaghi19}.}.
+  Therefore, simply uploading the tarball from the \inlinecode{./project make dist} command is sufficient for arXiv, and will allow the full project data lineage to also be published there with the \LaTeX{} source.
+  We did this in \citet[arXiv:1909.11230]{akhlaghi19} and \citet[arXiv:1911.01430]{infante20}.
+  Since arXiv is mirrored by many institutes across the planet, this is a robust way to preserve the reproducible lineage.
+  \item \textbf{In output datasets:} Many data storage formats support an internal structure within the data file.
+  One commonly used example today is the Hierarchical Data Format (HDF), in particular HDF5, which can host a complex filesystem in POSIX syntax.
+  It is even used by some reproducible analysis solutions like the Active Papers project \citep[for more, see Appendix \ref{appendix:activepapers}]{hinsen11}.
+  Since the volume of the project source is so insignificant compared to the output datasets of most projects, the whole project source can be stored with each published data file if the format supports it.
+ \end{itemize}
+\item \textbf{Source and data:} The project inputs (including the software tarballs, or possible datasets) may have a large volume.
+  Publishing them with the source is thus not always possible.
+  However, based on the definition of inputs in Section \ref{definition:input}, they are usable in other projects: another project may use the same data or software source code in a different way.
+  Therefore, even when they can be published with the source, it is encouraged to publish them as separate files.
+
+  For example, this strategy was followed in \href{https://doi.org/10.5281/zenodo.3408481}{zenodo.3408481}\footnote{https://doi.org/10.5281/zenodo.3408481}, which supplements \citet{akhlaghi19} and contains the following files.
+
+ \begin{itemize}
+ \item \textbf{Final PDF:} for easy understanding of the project.
+  \item \textbf{Git history:} as the Git ``bundle'' of the project.
+  This single file contains the full Git history of the project until its publication date (only $\sim4$ MB); see Section \ref{sec:starting} and the sketch after this list.
+ \item \textbf{Project source tarball}: output of \inlinecode{./project make dist}, as explained above.
+  \item \textbf{Tarballs of all necessary software:} This is necessary in case the software web pages are not accessible for any reason at a later date, or the project must be run with no internet access.
+ This is only possible because of the free software principle discussed in Section \ref{principle:freesoftware}.
+ \end{itemize}
+
+ Note that \citet{akhlaghi19} used previously published datasets which are automatically accessed when necessary.
+  Also, that paper didn't produce any output datasets beyond the figures shown in the report; therefore the Zenodo upload doesn't contain any datasets.
+ When a project involves data collection, or added-value data products, they can also be uploaded with the files above.
+\end{itemize}
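+
+As a minimal sketch of creating, and later using, the Git bundle mentioned above (the file and directory names are hypothetical):
+\begin{verbatim}
+# Pack the full history of all branches into one file.
+git bundle create project.bundle --all
+
+# Anyone can later recover the repository from that file.
+git clone project.bundle project
+\end{verbatim}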
+
+\subsubsection{Worries about getting scooped!}
+\label{sec:scooped}
+Publishing the project source with the paper can have many benefits for the researcher and the larger community.
+For example, if the source is published with a pre-print, others may help the authors find bugs or improvements that can affect the validity or precision of the result, or simply optimize it so it does the same work in less time.
+
+However, one particular concern raised by a minority of researchers is that publishing the project's reproducible data lineage immediately after publication may hamper their ability to continue harvesting from all their hard work.
+Because others can easily reproduce the work, they may take on the next follow-up project that the original authors had intended to do.
+This is informally known as getting scooped.
+
+The extent to which this may happen is an interesting subject to be studied once many papers become reproducible.
+But it is a valid concern that must be addressed.
+Given the strong integrity checks in Maneage, we believe it has features to address this problem in the following ways:
+
+\begin{enumerate}
+\item This worry is essentially addressed by the second phase of Figure \ref{fig:branching}.
+  The commits of the other team are built upon the commits of the original authors.
+  It is therefore perfectly clear (with the precision of a character!) how much of their result is purely their own work (qualitatively or quantitatively).
+  In this way, Maneage can contribute to a new concept of authorship in scientific projects and help to quantify Newton's famous ``standing on the shoulders of giants'' quote.
+  However, this is a long-term goal and requires major changes to academic value systems.
+\item The authors can be given a grace period where the journal, or some third authority, keeps the source and publishes it a certain time after publication.
+  In fact, journals can create specific policies for such scenarios, for example stating that all project sources will be made publicly available $N$ months/years after publication, while allowing authors to opt out if they like (so the source is published immediately with the paper).
+  However, journals cannot expect exclusive copyright to distribute the project source, in the same manner they do with the final paper.
+  As discussed in the free software principle of Section \ref{principle:freesoftware}, it is critical that the project source be free for the community to use, modify and distribute.
+
+  This can also be done by the authors on servers like Zenodo, where the dataset's final DOI can be reserved before publication and the contents uploaded at a later date.
+  Reproducibility is indeed very important for the sciences, but the hard work that went into a project should also be acknowledged; authors who wish to can thus publish the source at a later date.
+\end{enumerate}
+
+
+
+
+\subsection{Future of Maneage and its past}
+\label{sec:futurework}
+As with any software, the core architecture of Maneage will inevitably evolve after the publication of this paper.
+The current version introduced here has already experienced 5 years of evolution and several reincarnations.
+Its primordial implementation was written for \citet{akhlaghi15}.
+This paper described a new detection algorithm in astronomical image processing.
+The detection algorithm was developed as the paper was being written (initially a small report!).
+An automated sequence of commands to build the figures and update the paper/report was a practical necessity as the algorithm was evolving.
+In particular, it didn't just reproduce figures; it also used \LaTeX{} macros to update numbers printed within the text.
+Finally, since the full analysis pipeline was in plain text and roughly 100 kB (much less than a single figure), it was uploaded to arXiv with the paper's \LaTeX{} source, under a \inlinecode{reproduce/} directory, see \href{https://arxiv.org/abs/1505.01664}{arXiv:1505.01664}\footnote{
+ To download the \LaTeX{} source of any arXiv paper, click on the ``Other formats'' link, containing necessary instructions and links.}.
+
+The system later evolved in \citet{bacon17}, in particular the two sections of that paper that were done by M. Akhlaghi (first author of this paper): \citet[\href{http://doi.org/10.5281/zenodo.1163746}{zenodo.1163746}]{akhlaghi18a} and \citet[\href{http://doi.org/10.5281/zenodo.1164774}{zenodo.1164774}]{akhlaghi18b}.
+With these projects, the skeleton of the system was written as a more abstract ``template'' that could be customized for separate projects.
+The template later matured by including installation of all necessary software from source and used in \citet[\href{https://doi.org/10.5281/zenodo.3408481}{zenodo.3408481}]{akhlaghi19} and \citet[\href{https://doi.org/10.5281/zenodo.3524937}{zenodo.3524937}]{infante20}.
+The short historical review above highlights how this template was created by practicing scientists, and has evolved and matured significantly.
+
+We already have a list of roughly 30 tasks for future development, affecting various high-level phases of the project as described here.
+However, the core of the system has already been used and has become stable enough that we do not foresee any major change in the core methodology in the near future.
+A list of the notable changes after the publication of this paper will be kept in the project's \inlinecode{README-hacking.md} file.
+Once the improvements become substantial, new paper(s) will be written to complement or replace this one.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Discussion}
+\label{sec:discussion}
+
+\begin{itemize}
+\item Science is defined by its method, not its results.
+  Just as papers are quality-checked for reasonable English (which is not necessary for conveying the final result), the necessities of modern science require a similar check on the computation, which is easiest when the result is exactly reproducible.
+\item Initiatives such as \url{https://software.ac.uk} (UK) and \url{http://urssi.us} (USA) are good attempts at improving the quality of research software.
+\item Hiring software engineers is not the solution: the language of science has changed.
+  Could Galileo have created a telescope if he had not been familiar with what a lens is?
+  Science is not independent of its tools.
+\item The actual processing is archived in multiple places (with the paper on arXiv, with the data on Zenodo, on a Git repository, in future versions of the project).
+\item As shown by the very common use of something like Conda, software (even free software) is mainly seen in executable form, but this is wrong: even if the software source code is no longer compilable, it is still readable.
+\item The software/workflow is not independent of the paper.
+\item Cost of archiving is a critical issue (the NSF director mentions this during the first Panel of the National Academies meeting\footnote{\url{https://vimeo.com/367085708} (around minute 45:00)}).
+\item Meta-science (or ``Science of science'', ``economics of science'', ``Research on research'') and its importance.
+\item Provenance tracking (like some tools in Appendix \ref{appendix:existingsolutions}) is built-in and doesn't need any manual tagging.
+\item Important that a project starts by following good practice \citep{fineberg19}, not an extra step in the end.
+\item It is possible to write graphic user interface wrappers like those in Appendix \ref{appendix:existingsolutions}.
+\item What is often discussed is ``taking data and applying different methods to it'', but an even more productive question is: ``take (exact) methods and give different data to them''.
+\item \citet{munafo19} discuss how collective action is necessary.
+\item Research objects (Appendix \ref{appendix:researchobject}) can automatically be generated from the Makefiles, we can also apply special commenting conventions, to be included as annotations/descriptions in the research object metadata.
+\item Provenance between projects: through Git, all projects based on this template are automatically connected; but also, through inputs/outputs, the lineage of a project can be traced back to earlier projects.
+\item \citet{gibney20}: After code submission was encouraged by the Neural Information Processing Systems (NeurIPS), the frac % incomplete
+\item When the data are confidential, \citet{perignon19} suggest having a third party familiar with coding referee the code and give their approval.
+  In this system, because of the automatic verification of inputs and outputs, no technical knowledge is necessary for the verification.
+\item \citet{miksa19b} Machine-actionable data management plans (maDMPs) embedded in workflows, allowing
+\item \citet{miksa19a} RDA recommendation on maDMPs.
+\item FAIR Principles \citep{wilkinson16}.
+\item \citet{cheney09}: ``In both data warehouses and curated databases, tremendous (\emph{and often manual}) effort is usually expended in the construction''
+\item \url{https://arxiv.org/pdf/2001.11506.pdf}
+\item Apache NiFi for automated data flow.
+\item \url{https://arxiv.org/pdf/2003.04915.pdf}: how data lineage can help machine learning.
+\item Interesting patent on ``documenting data lineage'': \url{https://patentimages.storage.googleapis.com/c0/51/6e/1f3af366cd73b1/US10481961.pdf}
+\item Automated data lineage extractor: \url{http://hdl.handle.net/20.500.11956/110206}.
+\item Caveat: Many low-level tools.
+\item High-level tools can be written to exploit the low-level features.
+\end{itemize}
+
+
+
+
+
+%% Acknowledgements
+\section{Acknowledgments}
+The authors wish to thank Pedram Ashofteh Ardakani, Zahra Sharbaf and Surena Fatemi for their useful suggestions and feedback on Maneage and this paper.
+Work on the reproducible paper template has been funded by the Japanese Ministry of Education, Culture, Sports, Science, and Technology ({\small MEXT}) scholarship and its Grant-in-Aid for Scientific Research (21244012, 24253003), the European Research Council (ERC) advanced grant 339659-MUSICOS, European Union’s Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement No 721463 to the SUNDIAL ITN, and from the Spanish Ministry of Economy and Competitiveness (MINECO) under grant number AYA2016-76219-P.
+The reproducible paper template was also supported by European Union’s Horizon 2020 (H2020) research and innovation programme via the RDA EU 4.0 project (ref. GA no. 777388).
+
+%% Tell BibLaTeX to put the bibliography list here.
+\printbibliography
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\newpage
+\appendix
+\noindent
+ {\Large\bf Appendices}\\
+\vspace{-5mm}
+\section{Survey of existing tools for various phases}
+\label{appendix:existingtools}
+
+Conducting a reproducible research project is a high-level process, which involves using various lower-level tools.
+In this section, a survey of the most commonly used lower-level tools for the various aspects of a reproducible project is presented, with an introduction to each as it relates to reproducibility and the proposed template.
+In particular, we focus on the tools used within the proposed template, as well as the tools used by the existing reproducible frameworks that are reviewed in Appendix \ref{appendix:existingsolutions}.
+
+
+
+\subsection{Independent environment}
+\label{appendix:independentenvironment}
+
+There are three general ways of controlling the environment: (1) virtual machines, (2) containers, and (3) controlled builds of the software and its environment.
+Below, a short description of each solution is provided.
+
+\subsubsection{Virtual machines}
+\label{appendix:virtualmachines}
+Virtual machines (VMs) keep a copy of a full operating system that can be run on other operating systems.
+This includes the lowest-level kernel which connects to the hardware.
+VMs thus provide the ultimate control one can have over the run-time environment of an analysis.
+However, the VM's kernel does not talk directly to the hardware that is doing the analysis; it talks to simulated hardware that is provided by the host operating system's kernel.
+Therefore, a process that is run inside a virtual machine can be much slower than one that is run on a native kernel.
+VMs are used by cloud providers, enabling them to sell fully independent operating systems on their large servers to their customers (where the customer can have root access).
+But because of all the overhead, they aren't often used for reproducing individual processes.
+
+\subsubsection{Containers}
+Containers are higher-level constructs that don't have their own kernel: they talk directly with the host operating system's kernel, but have their own independent software for everything else.
+Therefore, they have much less overhead in storage and hardware/CPU access.
+Users choose an operating system for the container's independent environment (most commonly a GNU/Linux distribution, which is free software).
+
+Below we'll review some of the most common container solutions: Docker and Singularity.
+
+\begin{itemize}
+\item {\bf\small Docker containers:} Docker is one of the most popular tools today for keeping an independent analysis environment.
+  It is primarily driven by the needs of software developers: they need to be able to reproduce a bug on the ``cloud'' (which is just a remote VM), where they have root access.
+  A Docker container is composed of independent Docker ``images'' that are built with Dockerfiles.
+  It is possible to precisely version/tag the images that are imported (to avoid downloading a different/latest version in a future build).
+  To have a reproducible Docker image, it must be ensured that all the imported Docker images have their dependency tags fixed, down to the initial image which contains the C library.
+
+ Another important drawback of Docker for scientific applications is that it runs as a daemon (a program that is always running in the background) with root permissions.
+ This is a major security flaw that discourages many high performance computing (HPC) facilities from installing it.
+
+\item {\bf\small Singularity:} Singularity is a single-image container (unlike Docker, which is composed of modular/independent images).
+  Although it needs root permissions to be installed on the system (once), it doesn't require root permissions every time it is run.
+  Its main program is also not a daemon, but a normal program that can be stopped.
+  These features make it much easier for HPC administrators to install Singularity.
+  However, the fact that it requires root access for the initial install is still a hindrance for a random project: if it's not present on the HPC, the project can't be run as a normal user.
+
+\item {\bf\small Virtualenv:} \tonote{Discuss it later.}
+\end{itemize}
+
+When the installed software within VMs or containers is precisely under control, they are good solutions to reproducibly ``running''/repeating an analysis.
+However, because they store the already-built software environment, they are not good for ``studying'' the analysis (how the environment was built).
+Currently, the most common practice to install software within containers is to use the package manager of the operating system within the image, usually a minimal Debian-based GNU/Linux operating system.
+For example, the Dockerfile\footnote{\url{https://github.com/benmarwick/1989-excavation-report-Madjedbebe/blob/master/Dockerfile}} in the reproducible scripts of \citet{clarkso15} uses \inlinecode{sudo apt-get install r-cran-rjags -y} to install the R interface to the JAGS Bayesian statistics library (rjags).
+However, the operating system package managers aren't static.
+Therefore the versions of the downloaded and used tools within the Docker image will change depending on when it was built.
+At the time \citet{clarkso15} was published (June 2015), the \inlinecode{apt} command above would download and install rjags 3-15, but today (January 2020), it will install rjags 4-10.
+Such problems can be corrected with robust/reproducible package managers like Nix or GNU Guix within the Docker image (see Appendix \ref{appendix:packagemanagement}), but this is rarely practiced today.
+
+\subsubsection{Package managers}
+\label{appendix:packagemanagersinenv}
+The virtual machine and container solutions mentioned above install software in standard Unix locations (for example \inlinecode{/usr/bin}), but in their own independent operating systems.
+But if software are built in, and used from, a non-standard, project-specific directory, we can have an independent build and run-time environment without needing root access, or the extra layers of the container or VM.
+This leads us to the final method of having an independent environment: a controlled build of the software and its run-time environment.
+Because this is highly intertwined with the way software are installed, we'll describe it in more detail in Section \ref{appendix:packagemanagement} where package managers are reviewed.
+
+
+
+
+
+\subsection{Package management}
+\label{appendix:packagemanagement}
+
+Package management is the process of automating the installation of software.
+A package manager thus records, for each software package, the information needed to install it automatically: the URL of the software's tarball, the other software that it possibly depends on, and how to configure and build it.
+
+Here, some of the package management solutions used by the reviewed reproducibility solutions of Appendix \ref{appendix:existingsolutions} are discussed\footnote{For a list of existing package managers, please see \url{https://en.wikipedia.org/wiki/List_of_software_package_management_systems}}.
+Note that we are not including package managers that are limited to one language, for example \inlinecode{pip} (for Python) or \inlinecode{tlmgr} (for \LaTeX).
+
+
+
+\subsubsection{Operating system's package manager}
+The most commonly used package managers are those of the host operating system, for example \inlinecode{apt} or \inlinecode{yum} respectively on Debian-based, or RedHat-based GNU/Linux operating systems (among many others).
+
+These package managers are tightly intertwined with the operating system.
+Therefore they require root access, and arbitrary control (for different projects) of the versions and configuration options of software within them is not trivial, or not possible: for example, a special version of a software that may be necessary for a project may conflict with an operating system component, or with another project.
+Furthermore, in many operating systems it is only possible to have one version of a software at any moment (not including Nix or GNU Guix, which can also be independent of the operating system, described below).
+Hence if two projects need different versions of a software, it is not possible to work on them at the same time.
+
+When a full container or virtual machine (see Appendix \ref{appendix:independentenvironment}) is used for each project, it is common for projects to use the containerized operating system's package manager.
+However, it is important to remember that operating system package managers are not static: software are updated on their servers.
+For example, simply adding \inlinecode{apt install gcc} to a \inlinecode{Dockerfile} will install different versions of GCC based on when the Docker image is created.
+Requesting a special version also doesn't fully address the problem because the package managers also download and install its dependencies.
+Hence a fixed version of the dependencies must also be included.
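+As a sketch of the problem (the pinned version string is hypothetical):
+\begin{verbatim}
+# Installs whatever GCC version the repository serves today:
+apt-get update && apt-get install -y gcc
+
+# Pinning a version only fixes GCC itself; its dependencies
+# are still resolved at build time:
+apt-get install -y gcc=4:8.3.0-1
+\end{verbatim}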
+
+In summary, these package managers are primarily meant for the operating system components.
+Hence, many robust reproducible analysis solutions (reviewed in Appendix \ref{appendix:existingsolutions}) don't use the host's package manager, but an independent package manager, like the ones below.
+
+\subsubsection{Conda/Anaconda}
+\label{appendix:conda}
+Conda is an independent package manager that can be used on GNU/Linux, macOS, or Windows operating systems, although not all software packages are available for all operating systems.
+Conda is able to maintain an approximately independent environment on an operating system without requiring root access.
+
+Conda tracks the dependencies of a package/environment through a YAML formatted file, where the necessary software and their acceptable versions are listed.
+However, it is not possible to fix the versions of the dependencies through the YAML files alone.
+This is thoroughly discussed under issue 787 (in May 2019) of \inlinecode{conda-forge}\footnote{\url{https://github.com/conda-forge/conda-forge.github.io/issues/787}}.
+In that discussion, the authors of \citet{uhse19} report that the half-life of their environment (defined in a YAML file) is 3 months, and that at least one of their dependencies breaks shortly after this period.
+The main reply they got in the discussion is to build the Conda environment in a container, which is also the suggested solution by \citet{gruning18}.
+However, as described in Appendix \ref{appendix:independentenvironment}, containers just hide the reproducibility problem, they don't fix it: containers aren't static and need to evolve (i.e., be re-built) with the project.
+Given these limitations, \citet{uhse19} are forced to host their conda-packaged software as tarballs on a separate repository.
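+As a minimal sketch of such a YAML file, written from the shell (the package names and versions are hypothetical):
+\begin{verbatim}
+cat > environment.yml <<'EOF'
+name: myproject
+dependencies:
+  - python=3.7
+  - numpy=1.18
+EOF
+conda env create -f environment.yml
+\end{verbatim}
+The directly listed packages are pinned, but the versions of \emph{their} dependencies are resolved at creation time, and can thus differ between two builds from the same file.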
+
+Conda installs with a shell script that contains a binary blob (over 500 megabytes, embedded in the shell script).
+This is the first major issue with Conda: from the shell script, it is not clear what is in this binary blob and what it does.
+After installing Conda in any location, users can easily activate that environment by loading a special shell script into their shell.
+However, the resulting environment is not fully independent of the host operating system as described below:
+
+\begin{itemize}
+\item The Conda installation directory is present at the start of environment variables like \inlinecode{PATH} (which is used to find programs to run) and other such environment variables.
+  However, the host operating system's directories are also appended afterwards.
+  Therefore, a user or script may not notice that a software that is being used is actually coming from the operating system, and not from the controlled Conda installation; see the sketch after this list.
+
+\item Generally, by default Conda relies heavily on the operating system and doesn't include core analysis components like \inlinecode{mkdir}, \inlinecode{ls} or \inlinecode{cp}.
+  Although they are generally the same between different Unix-like operating systems, they have their differences.
+  For example, \inlinecode{mkdir -p} is a common way to build directories, but this option is only available with GNU Coreutils (the default on GNU/Linux systems).
+  Running the same command within a Conda environment on macOS, for example, will crash.
+  Important packages like GNU Coreutils are available in channels like conda-forge, but they are not the default.
+  Many users may not recognize this, and failing to account for it will cause unexpected crashes.
+
+\item Many major Conda packaging ``channels'' (for example the core Anaconda channel, or the very popular conda-forge channel) don't include the C library that a package was built with as a dependency.
+ They rely on the host operating system's C library.
+ C is the core language of most modern operating systems and even higher-level languages like Python or R are written in it, and need it to run.
+ Therefore if the host operating system's C library is different from the C library that a package was built with, a Conda-packaged program will crash and the project will not be executable.
+ Theoretically, it is possible to define a new Conda ``channel'' which includes the C library as a dependency of its software packages, but it will take too much time for any individual team to practically implement all their necessary packages, up to their high-level science software.
+
+\item Conda does allow a package to depend on a special build of its prerequisites (specified by a checksum, fixing its version and the version of its dependencies).
+ However, this is rarely practiced in the main Git repositories of channels like Anaconda and conda-forge: only the name of the high-level prerequisite packages is listed in a package's \inlinecode{meta.yaml} file, which is version-controlled.
+ Therefore two builds of the package from the same Git repository will result in different tarballs (depending on what prerequisites were present at build time).
+ In the Conda tarball (that contains the binaries and is not under version control) \inlinecode{meta.yaml} does include the exact versions of most build-time dependencies.
+ However, because the different software of one project may have been built at different times, if they depend on different versions of a single software there will be a conflict and the tarball can't be rebuilt, or the project can't be run.
+\end{itemize}
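+
+As a minimal sketch of the first two points above (the directory layout and outputs shown are illustrative):
+\begin{verbatim}
+# The host directories remain at the end of PATH:
+echo $PATH
+  /home/alice/miniconda3/envs/proj/bin:...:/usr/bin:/bin
+
+# A program absent from the environment silently
+# resolves to the host operating system:
+which mkdir
+  /bin/mkdir
+\end{verbatim}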
+
+As reviewed above, the low-level dependence of Conda on the host operating system's components and build-time conditions is the primary reason that it is very fast to install (thus making it an attractive tool for software developers who just need to reproduce a bug in a few minutes).
+However, these same factors are major caveats in a scientific scenario, where long-term archivability, readability and usability are important. % alternative to `archivability`?
+
+
+
+\subsubsection{Nix or GNU Guix}
+\label{appendix:nixguix}
+Nix \citep{dolstra04} and GNU Guix \citep{courtes15} are independent package managers that can be installed and used on GNU/Linux operating systems, and macOS (only for Nix, prior to macOS Catalina).
+Both also have a fully functioning operating system based on their packages: NixOS and ``Guix System''.
+GNU Guix is based on Nix, so we'll focus the review here on Nix.
+
+The Nix approach to package management is unique in that it allows exact tracking of all dependencies, and allows multiple versions of a software to co-exist; for more details, see \citet{dolstra04}.
+In summary, a unique hash is created from all the components that go into the building of the package.
+That hash is then prefixed to the software's installation directory.
+For example \citep[from][]{dolstra04}, if a certain build of GNU C Library 2.3.2 has a hash of \inlinecode{8d013ea878d0}, then it is installed under \inlinecode{/nix/store/8d013ea878d0-glibc-2.3.2}, and all software that is compiled with it (and thus needs it to run) will link to this unique address.
+This allows for multiple versions of the software to co-exist on the system, while keeping an accurate dependency tree.
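+As an illustrative sketch (the hashes and the second version shown are hypothetical), two builds can live side by side in the store:
+\begin{verbatim}
+ls /nix/store | grep glibc
+  8d013ea878d0...-glibc-2.3.2
+  1c9bf6c0d34e...-glibc-2.3.3
+\end{verbatim}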
+
+As mentioned in \citet{courtes15}, one major caveat with using these package managers is that they require a daemon with root privileges.
+This is necessary ``to use the Linux kernel container facilities that allow it to isolate build processes and maximize build reproducibility''.
+
+\tonote{While inspecting the Guix build instructions for some software, I noticed they don't actually mention the version names. This creates a similar issue with the Conda example above (how to regenerate the software with a given hash, given that its dependency versions aren't explicitly mentioned). Ask Ludo' about this.}
+
+
+\subsubsection{Spack}
+Spack is a package manager that is also influenced by Nix (similar to GNU Guix); see \citet{gamblin15}.
+But unlike Nix or GNU Guix, it doesn't aim for full, bit-wise reproducibility, and it can be built without root access in any generic location.
+It relies on the host operating system for the C library.
+
+Spack is fully written in Python, where each software package is an instance of a class, which defines how it should be downloaded, configured, built and installed.
+Therefore, if the proper version of Python is not present, Spack cannot be used, and when incompatibilities arise in future versions of Python (similar to how Python 3 is not compatible with Python 2), software building recipes, or the whole system, have to be upgraded.
+Because of such bootstrapping problems (for example, how Spack needs Python to build Python and other software), it is generally good practice to use simpler, lower-level languages/systems for a low-level operation like package management.
+
+
+\subsection{Package management conclusion}
+There are a few common issues regarding generic package managers that hinder their usage for high-level scientific projects, as listed below:
+\begin{itemize}
+\item {\bf\small Pre-compiled/binary downloads:} Most package managers (excluding Nix or its derivatives) only download the software in a binary (pre-compiled) format.
+ This allows users to download it very fast and almost instantaneously be able to run it.
+ However, to provide for this, servers need to keep binary files for each build of the software on different operating systems (for example Conda needs to keep binaries for Windows, macOS and GNU/Linux operating systems).
+ It is also necessary for them to store binaries for each build, which includes different versions of its dependencies.
+  This will take major space on the servers; therefore, once the shelf-life of a binary has expired, it will not be easy to reproduce a project that depends on it.
+
+  For example, Debian's Long Term Support is only valid for 5 years.
+  Pre-built binaries of the ``Stable'' branch will only be kept during this period, and this branch is only updated once every two years.
+  However, scientific software commonly evolves at much faster rates.
+  Therefore, scientific projects using Debian often use the ``Testing'' branch, which has more up-to-date features.
+  The problem is that binaries on the Testing branch are immediately removed when a newer version is available and no other package depends on them.
+  This is not limited to operating systems; similar problems are also reported in Conda, see the discussion of Conda above for one real-world example.
+
+
+\item {\bf\small Adding high-level software:} Packaging new software is not trivial and needs a good level of knowledge/experience with that package manager.
+  For example, each has its own special syntax/standards/language, with pre-defined variables that must already be known to someone packaging new software.
+  However, in many scenarios, the most high-level software of a research project is written and used only by the team that is doing the research, even when it is distributed with free licenses on open repositories.
+  Although active package manager members are commonly very supportive in helping to package new software, many teams may not take that extra effort/time.
+  They will thus manually install their high-level software in an uncontrolled, or non-standard, way, thus jeopardizing the reproducibility of the whole work.
+
+\item {\bf\small Built for a generic scenario:} All the package managers above are built for one full system that may be used by multiple projects.
+  This can result in not fully documenting the process by which each package was built (for example, the versions of a package's dependent libraries).
+\end{itemize}
+
+Addressing these issues has been the basic raison d'\^etre of the proposed template's approach to package management: instructions to download and build the packages are included within the actual science project (thus fully customizable), and no special/new syntax/language is used.
+Software downloading, building and installation is done with the same language/syntax that researchers use to manage their research: the shell (GNU Bash) and Make (GNU Make).
+
+
+
+\subsection{Version control}
+\label{appendix:versioncontrol}
+A scientific project is not written in a day.
+It commonly takes more than a year (for example, a PhD project is 3 or 4 years).
+During this time, the project evolves significantly from its starting date, and components are added or updated constantly as it approaches completion.
+Combined with the complexity of modern projects, it is not trivial to manually track this evolution and its effect on the final output: files produced in one stage of the project may be used at later stages (where the project has evolved).
+Furthermore, scientific projects do not progress linearly: earlier stages of the analysis are often modified after later stages are written.
+This is a natural consequence of the scientific method, where progress is defined by experimentation and the modification of hypotheses (earlier phases).
+
+It is thus very important for the integrity of a scientific project that the state/version of its processing is recorded as the project evolves, for example when better methods are found or more data arrive.
+Any intermediate dataset that is produced should also be tagged with the version of the project at the time it was created.
+In this way, later processing stages can make sure that they can safely be used, i.e., no change has been made in their processing steps.
+
+Solutions to keep track of a project's history have existed since the early days of software engineering in the 1970s and they have constantly improved over the last decades.
+Today the distributed model of ``version control'' is the most common, where the full history of the project is stored locally on different systems and can easily be integrated.
+There are many existing version control solutions, for example CVS, SVN, Mercurial, GNU Bazaar, or GNU Arch.
+However, Git is currently by far the most commonly used, both in individual projects and in long-term archival systems like Software Heritage \citep{dicosmo18}.
+It is also the system that is used in the proposed template, so we'll only review Git here.
+
+\subsubsection{Git}
+With Git, changes in a project's contents are accurately identified by comparing them with their previous version in the archived Git repository.
+When the user decides the changes are significant compared to the archived state, they can ``commit'' the changes into the history/repository.
+The commit involves copying the changed files into the repository and calculating a 40-character checksum/hash from the files, an accompanying ``message'' (a narrative description of the project's state), and the previous commit (thus creating a ``chain'' of commits that are strongly connected to each other).
+For example, \inlinecode{f4953cc\-f1ca8a\-33616ad\-602ddf\-4cd189\-c2eff97b} is a commit identifier in the Git history that this paper is being written in.
+Commits are commonly summarized by the checksum's first few characters, for example \inlinecode{f4953cc}.
+
+With Git, making parallel ``branches'' (in the project's history) is very easy and its distributed nature greatly helps in the parallel development of a project by a team.
+The team can host the Git history on a webpage and collaborate through that.
+There are several Git hosting services for example \href{http://github.com}{github.com}, \href{http://gitlab.com}{gitlab.com}, or \href{http://bitbucket.org}{bitbucket.org} (among many others).
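+As a minimal sketch of this commit/branch workflow (the file, message and branch names are hypothetical):
+\begin{verbatim}
+git add paper.tex             # stage the changed file
+git commit -m "Add results"   # record it, e.g., as f4953cc
+git checkout -b new-idea      # develop on a parallel branch
+\end{verbatim}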
+
+
+
+
+
+\subsection{Job management}
+\label{appendix:jobmanagement}
+Any analysis will involve more than one logical step.
+For example it is first necessary to download a dataset, then to do some preparations on it, then to actually use it, and finally to make visualizations/tables that can be imported into the final report.
+Each one of these is a logically independent step which needs to be run before/after the others in a specific order.
+There are many tools for managing the sequence of jobs, below we'll review the most common ones that are also used in the proposed template, or the existing reproducibility solutions of Appendix \ref{appendix:existingsolutions}.
+
+\subsubsection{Scripts}
+\label{appendix:scripts}
+Scripts (in any language, for example GNU Bash, or Python) are the most common ways of organizing a series of steps.
+They are primarily designed to execute each step sequentially (one after another), making them also very intuitive.
+However, as the series of operations becomes large and complex, managing the workflow in a script becomes difficult.
+For example, if 90\% of a long project is already done and a researcher wants to add a follow-up step, a script will still go through all the previous steps (which can take significant time).
+Also, if a small step in the middle of an analysis has to be changed, the full analysis needs to be re-run: scripts have no concept of dependencies, which would allow re-running only the steps affected by that change.
+Such factors discourage experimentation, which is a critical component of the scientific method.
+It is possible to manually add conditionals all over the script to add dependencies, but they just make it harder to read, and introduce many bugs themselves.
+Parallelization is another drawback of using scripts.
+While it's not impossible, because of the high-level nature of scripts it is not trivial, and parallelization can also be very inefficient or buggy.
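+As a minimal sketch (the script names are hypothetical), every step below is re-run on every invocation, even when its inputs are unchanged:
+\begin{verbatim}
+#!/bin/bash
+./download.sh     # re-runs even if the data are already present
+./preprocess.sh   # re-runs even if its input hasn't changed
+./analyze.sh
+./plot.sh
+\end{verbatim}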
+
+
+\subsubsection{Make}
+\label{appendix:make}
+Make was originally designed to address the problems mentioned in Appendix \ref{appendix:scripts} for scripts \citep{feldman79}.
+In particular this motivation arose from management issues related to program compilation with many source code files.
+With Make, the various source files of a program that haven't been changed wouldn't be recompiled.
+Also, when two source files didn't depend on each other, and both needed to be rebuilt, they could be built in parallel.
+This greatly helped in the debugging of software projects and in speeding up test builds, giving Make a core place in software building tools ever since.
+The most common implementation of Make, since the early 1990s, is GNU Make \citep[\url{http://www.gnu.org/s/make}]{stallman88}.
+The proposed solution uses Make to organize its workflow, see Section \ref{sec:usingmake}.
+Here, we'll complement that section with more technical details on Make.
+
+Usually, the top-level Make instructions are placed in a file called \inlinecode{Makefile}, but it is also common to use the \inlinecode{.mk} suffix for custom file names.
+Each stage/step in the analysis is defined through a \emph{rule}.
+Rules define \emph{recipes} to build \emph{targets} from \emph{pre-requisites}.
+In POSIX operating systems (Unix-like), everything is a file, even directories and devices.
+Therefore all three components in a rule must be files on the running filesystem.
+Figure \ref{fig:makeexample} demonstrates a hypothetical Makefile with the targets, prerequisites and recipes highlighted.
+
+\begin{figure}[t]
+ {\small
+ \texttt{\mkcomment{\# The ultimate "target" of this Makefile is 'ultimate.txt' (the first target Make finds).}}
+
+ \texttt{\mktarget{ultimate.txt}: out.txt\hfill\mkcomment{\# 'ultimate.txt' depends on 'out.txt'.{ }{ }{ }{ }{ }}}
+
+ \texttt{\mktab{}awk '\$1\textless5' out.txt \textgreater{ }\mktarget{ultimate.txt}\hfill\mkcomment{\# Only rows with 1st column less than 5.{ }{ }{ }}}
+
+ \vspace{1em}
+ \texttt{\mkcomment{\# But 'out.txt', is created by a Python script, and 'params.conf' keeps its configuration.}}
+
+ \texttt{\mktarget{out.txt}: run.py params.conf}
+
+ \texttt{\mktab{}python run.py --in=params.conf --out=\mktarget{out.txt}}
+ }
+
+ \caption{\label{fig:makeexample}An example Makefile that describes how to build \inlinecode{ultimate.txt} with two \emph{rules}.
+ \emph{targets} (blue) are placed before the colon (\texttt{:}).
+ \emph{prerequisites} (green) are placed after the colon.
+ The \emph{recipe} to build the targets from the prerequisites is placed after a \texttt{TAB}.
+ The final target is the first one that Make confronts (\inlinecode{ultimate.txt}).
+ It depends on the output of a Python program (\inlinecode{run.py}), which is configured by \inlinecode{params.conf}.
+ Anytime \inlinecode{run.py} or \inlinecode{params.conf} are edited/updated, \inlinecode{out.txt} is re-created and thus \inlinecode{ultimate.txt} is also re-created.
+ }
+\end{figure}
+
+To decide which operation should be re-done when executed, Make compares the time stamp of the targets and prerequisites.
+When any of the prerequisite(s) is newer than a target, the recipe is re-run to re-build the target.
+When all the prerequisites are older than the target, that target doesn't need to be rebuilt.
+The recipe can contain any number of commands, they should just all start with a \inlinecode{TAB}.
+Going deeper into the syntax of Make is beyond the scope of this paper, but we recommend interested readers to consult the GNU Make manual\footnote{\url{http://www.gnu.org/software/make/manual/make.pdf}}.
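+As a minimal sketch of this behavior with the Makefile of Figure \ref{fig:makeexample}:
+\begin{verbatim}
+make               # first run: both recipes are executed
+make               # nothing changed: nothing is re-run
+touch params.conf  # update a prerequisite's time stamp
+make               # only the affected targets are re-built
+\end{verbatim}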
+
+\subsubsection{SCons}
+SCons (\url{https://scons.org}) is a Python package for managing operations outside of Python (in contrast to CGAT-core, discussed below, which only organizes Python functions).
+In many aspects it is similar to Make, for example it is managed through a `SConstruct' file.
+Like a Makefile, SConstruct is also declarative: the running order is not necessarily the top-to-bottom order of the written operations within the file (unlike the imperative paradigm which is common in languages like C, Python, or FORTRAN).
+However, unlike Make, SCons doesn't use a file's modification date to decide if a target should be remade.
+SCons keeps the MD5 hash of all the files (in a hidden binary file) to check if their contents have changed.
+
+SCons thus attempts to work on a declarative file with an imperative language (Python).
+It also goes beyond raw job management and attempts to extract information from within the files (for example to identify the libraries that must be linked while compiling a program).
+SCons is therefore more complex than Make: its manual is almost double the length of GNU Make's.
+Besides added complexity, all these ``smart'' features decrease its performance, especially as files get larger and more numerous: on every call, every file's checksum has to be calculated, and a Python system call has to be made (which is computationally expensive).
+
+Finally, it has the same drawback as any other tool that uses high-level languages, see Section \ref{appendix:highlevelinworkflow}.
+We encountered such a problem while testing SCons: on the Debian-10 testing system, the \inlinecode{python} program pointed to Python 2.
+However, since Python 2 is now obsolete, SCons was built with Python 3 and our first run crashed.
+To fix it, we had to either manually change the core operating system path, or the SCons source hashbang.
+The former will conflict with other system tools that assume \inlinecode{python} points to Python-2, the latter may need root permissions for some systems.
+This can also be problematic when a Python analysis library requires a Python version that conflicts with the running SCons.
+
+\subsubsection{CGAT-core}
+CGAT-Core (\url{https://cgat-core.readthedocs.io/en/latest}) is a Python package for managing workflows, see \citet{cribbs19}.
+It wraps analysis steps in Python functions and uses Python decorators to track the dependencies between tasks.
+It is used in papers like \citet{jones19}; as mentioned there, it is good for managing individual outputs (for example separate figures/tables in the paper, when they are fully created within Python).
+Because it is primarily designed for Python tasks, managing a full workflow (which includes many more components, written in other languages) is not trivial with it.
+Another drawback of this workflow manager is that Python is a very high-level language, and future versions of the language may no longer be compatible with Python 3, in which CGAT-core is implemented (similar to how Python 2 programs are not compatible with Python 3).
+
+\subsubsection{Guix Workflow Language (GWL)}
+GWL (\url{https://www.guixwl.org}) is based on the declarative language that GNU Guix uses for package management (see Appendix \ref{appendix:packagemanagement}), which is itself based on the general-purpose Scheme language.
+It is closely linked with GNU Guix and can even install the necessary software needed for each individual process.
+Hence in the GWL paradigm, software installation and usage doesn't have to be separated.
+GWL has two high-level concepts called ``processes'' and ``workflows'' where the latter defines how multiple processes should be executed together.
+
+As described above, shell scripts and Make are common and widely used systems that have existed for several decades; many researchers are already familiar with them and have already used them.
+The list of necessary software solutions for the various stages of a research project (listed in the subsections of Appendix \ref{appendix:existingtools}) is already very large, and each software has its own learning curve (which is a heavy burden for a natural or social scientist, for example).
+The other workflow management tools are too specific to a particular paradigm: for example, CGAT-core is written for Python, and GWL is intertwined with GNU Guix.
+Therefore their generalization to any kind of problem is not trivial.
+
+Also, high-level and specific solutions will evolve very fast, for example the Popper solution to reproducible research (see Appendix \ref{appendix:popper}) organized its workflow through the HashiCorp configuration language (HCL) because it was the default in GitHub.
+However, in September 2019, GitHub dropped HCL as its default configuration language and is now using its own custom YAML-based language.
+Using such high-level, or provider-specific, solutions also makes them hard, or impossible, to use on any generic system.
+Therefore a robust project would avoid designing its low-level processing steps in these languages, and only use them for the highest-level layer of the project, depending on which provider the project will run on.
+
+
+
+\subsection{Editing steps and viewing results}
+\label{appendix:editors}
+In order to later reproduce a project, the analysis steps must be stored in files.
+For example Shell, Python or R scripts, Makefiles, Dockerfiles, or even the source files of compiled languages like C or FORTRAN.
+Given that a scientific project does not evolve linearly and many edits are needed as it evolves, it is important to be able to actively test the analysis steps while writing the project's source files.
+Here we'll review some common methods that are currently used.
+
+\subsubsection{Text editors}
+The most basic way to edit text files is through simple text editors, which just allow viewing and editing such files, for example \inlinecode{gedit} on the GNOME graphical user interface.
+However, working with simple plain-text editors like \inlinecode{gedit} can be very frustrating, since it is necessary to save the file, then go to a terminal emulator and execute the source files.
+To solve this problem there are advanced text editors like GNU Emacs that allow direct execution of the script, or access to a terminal within the text editor.
+However, editors that can execute or debug the source (like GNU Emacs) merely run external programs for these jobs (for example GNU GCC or GNU GDB), just as if those programs were called from outside the editor.
+
+With text editors, the final edited file is independent of the actual editor and can be further edited with another editor, or executed without it.
+This is a very important feature that is not commonly present for other solutions mentioned below.
+Another very important advantage of advanced text editors like GNU Emacs or Vi(m) is that they can also be run without a graphical user interface, directly on the command line.
+This feature is critical when working on remote systems, in particular high performance computing (HPC) facilities that do not provide a graphical user interface.
+
+\subsubsection{Integrated Development Environments (IDEs)}
+To facilitate the development of source files, IDEs add software building and running environments as well as debugging tools to a plain text editor.
+Many IDEs have their own compilers and debuggers, hence source files that are maintained in IDEs are not necessarily usable/portable on other systems.
+Furthermore, they usually require a graphical user interface to run.
+In summary, IDEs are generally very specialized tools for particular kinds of projects, and are not a good solution when portability (the ability to run on different systems) is required.
+
+\subsubsection{Jupyter}
+Jupyter \citep[initially IPython,][]{kluyver16} is an implementation of Literate Programming \citep{knuth84}.
+The main user interface is a web-based ``notebook'' that contains blobs of executable code and narrative.
+Jupyter uses the custom-built \inlinecode{.ipynb} format\footnote{\url{https://nbformat.readthedocs.io/en/latest}}.
+Jupyter's name is a combination of the three main languages it was designed for: Julia, Python and R.
+The \inlinecode{.ipynb} format is a simple, human-readable file (it can be opened in a plain-text editor), formatted in JavaScript Object Notation (JSON).
+It contains various kinds of ``cells'', or blobs, that can contain narrative description, code, or multi-media visualizations (for example images/plots), all stored in one file.
+The cells can have any order, allowing a graphical implementation of the literate programming style, where narrative descriptions and executable patches of code are intertwined.
+For example, a paragraph of text about a patch of code can be followed by that patch, which is run immediately on the same page.
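+
+As a minimal sketch of this structure (the cell contents below are illustrative; the field names follow the nbformat version 4 specification), such a notebook can be constructed and serialized as follows:
+
+\begin{verbatim}
+# Sketch of the .ipynb JSON structure (nbformat v4 field names;
+# cell contents are illustrative).
+import json
+
+notebook = {
+    "nbformat": 4, "nbformat_minor": 5, "metadata": {},
+    "cells": [
+        {"cell_type": "markdown", "metadata": {},
+         "source": ["Narrative text about the code below."]},
+        {"cell_type": "code", "execution_count": 1,
+         "metadata": {}, "source": ["print(2 + 2)"],
+         "outputs": []},  # outputs (text/images) embedded here
+    ],
+}
+print(json.dumps(notebook, indent=1))
+\end{verbatim}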
+
+The \inlinecode{.ipynb} format does theoretically allow dependency tracking between cells; see the IPython mailing list discussion started by Gabriel Becker in July 2013\footnote{\url{https://mail.python.org/pipermail/ipython-dev/2013-July/010725.html}}.
+Defining dependencies between the cells can allow non-linear execution, which is critical for large scale (thousands of files) and complex (many dependencies between the cells) operations.
+It allows automation, run-time optimization (deciding not to run a cell if it is not necessary) and parallelization.
+However, Jupyter currently only supports a linear run of the cells: always from the start to the end.
+It is possible to manually execute only one cell, but the previous/next cells that may depend on it also have to be manually run (a common source of human error, and frustration for complex operations).
+Integration of directed graph features (dependencies between the cells) into Jupyter has been discussed, but as of this publication, there is no plan to implement it (see Jupyter's GitHub issue 1175\footnote{\url{https://github.com/jupyter/notebook/issues/1175}}).
+
+The fact that the \inlinecode{.ipynb} format stores narrative text, code and multi-media visualization of the outputs in one file is another major hurdle:
+the files can easily become very large (in volume/bytes) and hard to read from source.
+Both are critical for scientific processing, especially the latter: a web browser with the required JavaScript features may not be available in a few years.
+This is further exacerbated by the fact that binary data (for example images) are not directly supported in JSON and have to be converted into much less memory-efficient textual encodings.
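+
+This overhead is easy to quantify: base64 (the textual encoding commonly used for binary blobs in JSON) stores every 3 bytes as 4 characters, a one-third inflation before any other notebook overhead, as the following minimal demonstration shows:
+
+\begin{verbatim}
+# Base64 inflation of binary data embedded in JSON-based files.
+import base64, os
+
+raw = os.urandom(300_000)  # stand-in for a ~300kB binary plot
+txt = base64.b64encode(raw).decode("ascii")
+print(len(raw), len(txt))  # 300000 400000: a 4/3 increase
+\end{verbatim}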
+
+Finally, Jupyter has an extremely complex dependency graph: on a clean Debian 10 system, Pip (a Python package manager that is necessary for installing Jupyter) required 19 dependencies to install, and installing Jupyter with Pip needed 41 dependencies.
+\citet{hinsen15} reported such conflicts when building Jupyter into the Active Papers framework (see Appendix \ref{appendix:activepapers}).
+However, the dependencies above are only on the server side.
+Since Jupyter is a web-based system, it also requires many dependencies on the viewing/running browser (for example special JavaScript or HTML5 features, which evolve very fast).
+As discussed in Appendix \ref{appendix:highlevelinworkflow}, having so many dependencies is a major caveat for any system regarding scientific/long-term reproducibility (as opposed to industrial/immediate reproducibility).
+In summary, Jupyter is most useful for manual, interactive and graphical operations of a temporary nature (for example educational tutorials).
+
+
+
+
+
+
+\subsection{Project management in high-level languages}
+\label{appendix:highlevelinworkflow}
+
+Currently the most popular high-level data analysis language is Python.
+R is closely tracking it, and has superseded Python in some fields, while Julia \citep[with its much better performance compared to R and Python, in a high-level structure, see][]{bezanson17} is quickly gaining ground.
+These languages have themselves superseded previously popular languages for data analysis of the previous decades, for example Java, Perl or C++.
+All are part of the C-family programming languages.
+In many cases, this means that the tools to run these languages are themselves written in C, which is the language of the operating system.
+
+Scientists, or data analysts, mostly use these higher-level languages.
+Therefore they are naturally drawn to also apply the higher-level languages for lower-level project management, or designing the various stages of their workflow.
+For example Conda or Spack (Appendix \ref{appendix:packagemanagement}), CGAT-core (Appendix \ref{appendix:jobmanagement}), Jupyter (Appendix \ref{appendix:editors}) or Popper (Appendix \ref{appendix:popper}) are written in Python.
+The discussion below applies to both the actual analysis software and project management software.
+In this context, the focus is more on the latter.
+
+Because of their nature, higher-level languages evolve very fast, creating incompatibilities on the way.
+The most prominent example is the transition from Python 2 (released in 2000) to Python 3 (released in 2008).
+Python 3 was incompatible with Python 2, and it was initially decided to abandon Python 2 by 2015.
+However, due to community pressure, this was delayed to January 1st, 2020.
+The end-of-life of Python 2 caused many problems for projects that had invested heavily in it: all their previous work had to be translated, for example see \citet{jenness17} or Appendix \ref{appendix:sciunit}.
+Some projects couldn't make this investment and their developers decided to stop maintaining them, for example VisTrails (see Appendix \ref{appendix:vistrails}).
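+
+The incompatibility reached down to the most basic syntax; for example (a sketch runnable only under Python 3):
+
+\begin{verbatim}
+# Python 2:  print "hello"   (a statement; SyntaxError in Python 3)
+# Python 2:  1/2 == 0        (integer/floor division)
+print("hello")  # Python 3: print is a function
+print(1 / 2)    # Python 3: 0.5 (true division); Python 2 gave 0
+\end{verbatim}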
+
+The problems weren't just limited to translation.
+Python 2 was still being actively used during the transition period (and is still used by some, after its end-of-life).
+Therefore, developers of packages used by others had to maintain (for example fix bugs in) both versions in one package.
+This isn't particular to Python; a similar evolution occurred in Perl: in 2000 it was decided to improve Perl 5, but the proposed Perl 6 was incompatible with it.
+However, the Perl community decided not to abandon Perl 5, and Perl 6 was eventually defined as a new language that is now officially called ``Raku'' (\url{https://raku.org}).
+
+It is unreasonably optimistic to assume that high-level languages won't undergo similar incompatible evolutions in the (not too distant) future.
+For software developers, this isn't a problem at all: non-scientific software, and the general population's usage of it, evolves extremely fast and it is rarely (if ever) necessary to look into code that is more than a couple of years old.
+However, in the sciences (which are commonly funded by public money) this is a major caveat for the longer-term usability of solutions that are designed.
+
+In summary, in this section we are discussing the bootstrapping problem as regards scientific projects: the workflow/pipeline can reproduce the analysis and its dependencies, but the dependencies of the workflow itself cannot be ignored.
+The most robust way to address this problem is with a workflow management system that ideally doesn't need any major dependencies: tools that are already part of the operating system.
+
+Beyond the low-level technical problems for developers mentioned above, this causes major problems for scientific project management, as listed below:
+
+\subsubsection{Dependency hell}
+The evolution of high-level languages is extremely fast, even within one version.
+For example, packages that are written in Python 3 often only work with a specific interval of Python 3 versions (for example newer than Python 3.6).
+This isn't just limited to the core language; much faster changes occur in its higher-level libraries.
+For example, version 1.9 of Numpy (Python's numerical analysis module) discontinued support for Numpy's predecessor (called Numeric), causing many problems for scientific users \citep[see][]{hinsen15}.
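+
+Such version windows commonly surface as explicit run-time checks or packaging constraints; a hedged sketch of the pattern, with thresholds mirroring the examples in this paragraph:
+
+\begin{verbatim}
+# Failing early outside a supported version window
+# (thresholds are illustrative).
+import sys
+import numpy as np
+
+if sys.version_info < (3, 6):
+    raise RuntimeError("this package needs Python >= 3.6")
+if tuple(map(int, np.__version__.split(".")[:2])) < (1, 9):
+    raise RuntimeError("numpy >= 1.9 required")
+\end{verbatim}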
+
+On the other hand, the dependency graph of tools written in high-level languages is often extremely complex.
+For example, see Figure 1 of \citet{alliez19}, which shows the dependencies and their inter-dependencies for Matplotlib (a popular plotting module in Python).
+
+The acceptable version intervals of the various dependencies will cause incompatibilities within a year or two when a robust package manager is not used (see Appendix \ref{appendix:packagemanagement}).
+Since a domain scientist doesn't always have the resources/knowledge to modify the conflicting part(s), many are forced to create complex environments with different versions of Python and pass the data between them (for example just to use the work of a previous PhD student in the team).
+This greatly increases the complexity of the project, even for the principal author.
+A good reproducible workflow can account for these different versions.
+However, when the actual workflow system (not the analysis software) is written in a high-level language, this becomes a major problem.
+
+For example, merely installing the Python package installer (\inlinecode{pip}) on a Debian system (with \inlinecode{apt install pip2} for Python 2 packages) required 32 other packages as dependencies.
+\inlinecode{pip} is necessary to install Popper and Sciunit (Appendices \ref{appendix:popper} and \ref{appendix:sciunit}).
+As of this writing, the \inlinecode{pip3 install popper} and \inlinecode{pip2 install sciunit2} commands for installing each required 17 and 26 Python modules as dependencies, respectively.
+It is impossible to run either of these solutions if there is a single conflict in this very complex dependency graph.
+This problem actually occurred while we were testing Sciunit: even though it installed, it couldn't run because of conflicts (its last commit was only 1.5 years old); for more, see Appendix \ref{appendix:sciunit}.
+\citet{hinsen15} also report a similar problem when attempting to install Jupyter (see Appendix \ref{appendix:editors}).
+Of course, this also applies to tools that these systems use, for example Conda (which is also written in Python, see Appendix \ref{appendix:packagemanagement}).
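+
+On systems with Python 3.8 or newer, the first level of such a dependency graph can be inspected with the standard library alone (the package name below is only an example and must already be installed):
+
+\begin{verbatim}
+# Listing the direct requirements of an installed package; each
+# of these has its own requirements, and so on recursively.
+from importlib.metadata import requires
+
+for req in requires("jupyter") or []:
+    print(req)
+\end{verbatim}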
+
+
+
+
+
+\subsubsection{Generational gap}
+This occurs primarily for domain scientists (for example astronomers, biologists or social scientists).
+Once they have mastered one version of a language (mostly in the early stages of their career), they tend to ignore newer versions/languages.
+The inertia of programming languages is very strong.
+This is natural, because they have their own science field to focus on, and re-writing their very high-level analysis toolkits (which they have curated over their career and which are often only readable/usable by themselves) in newer languages requires too much investment and time.
+
+When this investment is not possible, either the mentee has to use the mentor's old method (and miss out on all the new tools, which they need for their future job prospects), or the mentor has to avoid implementation details in discussions with the mentee, because they don't share a common language.
+The authors of this paper have personal experience in both mentor/mentee scenarios.
+This failure to communicate the details is a very serious problem, leading to the loss of valuable inter-generational experience.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Survey of common existing reproducible workflows}
+\label{appendix:existingsolutions}
+
+As reviewed in the introduction (Section \ref{sec:introduction}), the problem of reproducibility has received a lot of attention over the last three decades and various solutions have already been proposed.
+In this appendix, some of the solutions are reviewed.
+The solutions are based on an evolving software landscape, therefore they are ordered by date\footnote{When the project has a webpage, the year of its first release is used, otherwise their paper's publication year is used.}.
+For each solution, we summarize its methodology and discuss how it relates to the principles in Section \ref{sec:principles}.
+Freedom of the software/method is a core concept behind scientific reproducibility, as opposed to industrial reproducibility where a black box is acceptable/desirable.
+Therefore proprietary solutions like Code Ocean (\url{https://codeocean.com}) or Nextjournal (\url{https://nextjournal.com}) will not be reviewed here.
+
+\citet{konkol20} have also reviewed a number of these tools from various points of view.
+
+
+
+
+\subsection{Reproducible Electronic Documents, RED (1992)}
+\label{appendix:red}
+
+Reproducible Electronic Documents (\url{http://sep.stanford.edu/doku.php?id=sep:research:reproducible}) is the first attempt that we could find at doing reproducible research \citep{claerbout1992,schwab2000}.
+It was developed within the Stanford Exploration Project (SEP) for Geophysics publications.
+Their introductions on the importance of reproducibility resonate strongly with today's environment in computational sciences.
+In particular, they highlight the heavy investment one has to make in order to re-do another scientist's work, even in the same team.
+RED also influenced other early reproducible works, for example \citet{buckheit1995}.
+
+To orchestrate the various figures/results of a project, from 1990 they used ``Cake'' \citep{somogyi87}, a dialect of Make (for more on Make, see Appendix \ref{appendix:jobmanagement}).
+As described in \citet{schwab2000}, in the latter half of that decade they moved to GNU Make \citep{stallman88}, which was much more commonly used and developed, and came with a complete and up-to-date manual.
+The basic idea behind RED's solution was to organize the analysis as independent steps, including the generation of plots, and organizing the steps through a Makefile.
+This enabled all the results to be re-executed with a single command.
+Several basic low-level Makefiles were included in the high-level/central Makefile.
+The reader/user of a project had to manually edit the central Makefile and set the variable \inlinecode{RESDIR} (result directory), the directory where built files are kept.
+Afterwards, the reader could set which figures/parts of the project to reproduce by manually adding their names to the central Makefile, and running Make.
+
+At the time, Make was already being used by individual researchers and projects as a job orchestration tool, but SEP's innovation was to standardize it as an internal policy, and to define conventions for the Makefiles to be consistent across projects.
+This enabled new members to benefit from the already existing work of previous team members (who had graduated or moved to other jobs).
+However, RED only used the existing software of the host system; it had no means to control it.
+Therefore, with wider adoption, they confronted a ``versioning problem'' where the host's analysis software had different versions on different hosts, creating different results, or crashing \citep{fomel09}.
+Hence in 2006 SEP moved to a new Python-based framework called Madagascar, see Appendix \ref{appendix:madagascar}.
+
+
+
+
+
+\subsection{Apache Taverna (2003)}
+\label{appendix:taverna}
+Apache Taverna (\url{https://taverna.incubator.apache.org}) is a workflow management system written in Java with a graphical user interface \citep[see][]{oinn04}; it is still being actively developed.
+A workflow is defined as a directed graph, where nodes are called ``processors''.
+Each processor transforms a set of inputs into a set of outputs; they are defined in the Scufl language (an XML-based language, where each step is an atomic task).
+Other components of the workflow are ``Data links'' and ``Coordination constraints''.
+The main user interface is graphical, where users place processors in a sheet and define links between their inputs and outputs.
+\citet{zhao12} have studied the problem of workflow decay in Taverna.
+In many aspects Taverna is like VisTrails, see Appendix \ref{appendix:vistrails}.
+\tonote{Since Kepler is older, it may be better to bring the VisTrails features here.}
+
+
+
+
+
+\subsection{Madagascar (2003)}
+\label{appendix:madagascar}
+Madagascar (\url{http://ahay.org}) is a set of extensions to the SCons job management tool \citep{fomel13}.
+For more on SCons, see Appendix \ref{appendix:jobmanagement}.
+Madagascar is a continuation of the Reproducible Electronic Documents (RED) project that was discussed in Appendix \ref{appendix:red}.
+
+Madagascar does include project management tools in the form of SCons extensions.
+However, it isn't just a reproducible project management tool; it is primarily a collection of analysis programs, tools to interact with RSF files, and plotting facilities.
+For example, in our test of Madagascar 3.0.1, it installed 855 Madagascar-specific analysis programs (\inlinecode{PREFIX/bin/sf*}).
+The analysis programs mostly target geophysical data analysis, including various project-specific tools: more than half of the total built tools are under the \inlinecode{build/user} directory, which includes the names of Madagascar users.
+Following the Unix spirit of modularized programs that communicate through text-based pipes, Madagascar's core is the custom Regularly Sampled File (RSF) format\footnote{\url{http://www.ahay.org/wiki/Guide\_to\_RSF\_file\_format}}.
+RSF is a plain-text file that points to the location of the actual data files on the filesystem, but it can also keep the raw binary dataset within the same plain-text file.
+
+Besides the location or contents of the data, RSF also contains name/value pairs that can be used as options to Madagascar programs, which are built with inputs and outputs of this format.
+Since RSF also contains program options, the inputs and outputs of Madagascar's analysis programs are read from, and written to, standard input and standard output.
+
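+For illustration, an RSF header is a short plain-text file of such name/value pairs; in the hypothetical sketch below (the key names are assumptions based on the description above, not a verified excerpt of the RSF specification), \inlinecode{in=} points at the raw binary data kept in a separate file:
+
+\begin{verbatim}
+# Writing a hypothetical RSF-style plain-text header.
+header = ('in="data.rsf@"\n'    # location of the raw binary data
+          'n1=512\n'            # illustrative axis length
+          'data_format="native_float"\n')
+with open("data.rsf", "w") as f:
+    f.write(header)
+\end{verbatim}
+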
+Madagascar has been used in the production of hundreds of research papers or book chapters\footnote{\url{http://www.ahay.org/wiki/Reproducible_Documents}} \citep[120 prior to][]{fomel13}.
+
+
+\subsection{GenePattern (2004)}
+\label{appendix:genepattern}
+GenePattern (\url{https://www.genepattern.org}) is a client-server software system containing many common analysis functions/modules, primarily focused on gene studies \citep[first released in 2004]{reich06}.
+Although it is highly focused on a specific research field, it is reviewed here because its concepts/methods are generic and relevant in the context of this paper.
+
+Its server-side software is installed with fixed software packages that are wrapped into GenePattern modules.
+The modules are used through a web interface; the modern implementation is GenePattern Notebook \citep{reich17}.
+It is an extension of the Jupyter notebook (see Appendix \ref{appendix:editors}), which also has a special ``GenePattern'' cell that will connect to GenePattern servers for doing the analysis.
+However, the wrapper modules just call an existing tool on the host system.
+Given that each server may have its own set of installed software, the analysis may differ (or crash) when run on different GenePattern servers, hampering reproducibility.
+
+The primary GenePattern server had been active since 2008 and had 40,000 registered users with 2000 to 5000 jobs running every week \citep{reich17}.
+However, it was shut down on November 15th 2019 due to end of funding\footnote{\url{https://www.genepattern.org/blog/2019/10/01/the-genomespace-project-is-ending-on-november-15-2019}}.
+All processing with this server has stopped, and any archived data on it has been deleted.
+Since GenePattern is free software, there are alternative public servers to use, so hopefully work on it will continue.
+However, funding is limited and those servers may face similar funding problems.
+This is a very nice example of the fragility of solutions that depend on archiving and running high-level research products (including data, binary/compiled code).
+
+
+
+
+
+\subsection{Kepler (2005)}
+Kepler (\url{https://kepler-project.org}) is a Java-based workflow management tool with a graphical user interface \citep{ludascher05}.
+Users drag-and-drop analysis components, called ``actors'', into a visual, directed graph, which is the workflow (similar to Figure \ref{fig:analysisworkflow}).
+Each actor is connected to others through the Ptolemy approach \citep{eker03}.
+In many aspects Kepler is like VisTrails, see Appendix \ref{appendix:vistrails}.
+\tonote{Since kepler is older, it may be better to bring the VisTrails features here.}
+
+
+
+
+
+\subsection{VisTrails (2005)}
+\label{appendix:vistrails}
+
+VisTrails (\url{https://www.vistrails.org}) was a graphical workflow managing system that is described in \citet{bavoil05}.
+According to its webpage, VisTrails maintenance stopped in May 2016; its last Git commit, as of this writing, was in November 2017.
+However, the fact that it was well maintained for over 10 years is an achievement.
+
+VisTrails (or ``visualization trails'') was initially designed for managing visualizations, but later grew into a generic workflow system with meta-data and provenance features.
+Each analysis step, or module, is recorded in an XML schema, which defines the operations and their dependencies.
+The XML attributes of each module can be used in any XML query language to find certain steps (for example those that used a certain command).
+Since the main goal was visualization (as images), apparently its primary output is in the form of image spreadsheets.
+Its design is based on a change-based provenance model using a custom VisTrails provenance query language (vtPQL), for more see \citet{scheidegger08}.
+Since XML is a plain-text format, as the user inspects the data and makes changes to the analysis, the changes are recorded as ``trails'' in the project's VisTrails repository, which operates very much like common version control systems (see Appendix \ref{appendix:versioncontrol}).
+
+With respect to keeping the history/provenance of the final dataset, VisTrails is very much like the template introduced in this paper.
+However, even though XML is in plain text, it is very hard to edit manually.
+VisTrails therefore provides a graphic user interface with a visual representation of the project's inter-dependent steps (similar to Figure \ref{fig:analysisworkflow}).
+Besides the fact that it is no longer maintained, the conceptual differences with the proposed template are substantial.
+The most important is that VisTrails doesn't control the software that is run; it only controls the sequence of steps in which they are run.
+This template also defines dependencies and operations based on the very standard and commonly known Make system, not a custom XML format.
+Scripts can easily be written to generate an XML-formatted output from Makefiles.
+
+
+
+
+
+\subsection{Galaxy (2010)}
+\label{appendix:galaxy}
+
+Galaxy (\url{https://galaxyproject.org}) is a web-based Genomics workbench \citep{goecks10}.
+The main user interface is ``Galaxy Pages'', which doesn't require any programming: users simply use abstract ``tools'', which are wrappers over command-line programs.
+Therefore the actual running version of the program can be hard to control across different Galaxy servers \tonote{confirm this}.
+Besides the automatically generated metadata of a project (which include version control, or its history), users can also tag/annotate each analysis step, describing its intent/purpose.
+Besides some small differences, this seems to be very similar to GenePattern (Appendix \ref{appendix:genepattern}).
+
+
+
+
+
+\subsection{Image Processing On Line journal, IPOL (2010)}
+The IPOL journal (\url{https://www.ipol.im}) attempts to publish the full implementation details of proposed image processing algorithms as scientific papers \citep[first published article in July 2010]{limare11}.
+An IPOL paper is a traditional research paper, but with a focus on implementation.
+The published narrative description of the algorithm must be detailed to a level that any specialist can implement it in their own programming language.
+The author's own implementation of the algorithm is also published with the paper (in C, C++ or MATLAB); the code must be well commented, linking each part of it to the relevant part of the paper.
+The authors must also submit several example datasets/scenarios.
+The referee actually inspects the code and narrative, confirming that they match with each other, and with the stated conclusions of the published paper.
+After publication, each paper also has a ``demo'' button on its webpage, allowing readers to try the algorithm on a web-interface and even provide their own input.
+
+The IPOL model is indeed the single most robust model of peer review and publishing computational research methods/implementations.
+It has grown steadily over the last 10 years, publishing 23 research articles in 2019 alone.
+We encourage the reader to visit its webpage and see some of its recent papers and their demos.
+It can be so thorough and complete because it has a very narrow scope (image processing), and the published algorithms are highly atomic, not needing significant dependencies (beyond input/output), allowing the referees to go deep into each implemented algorithm.
+In fact, high-level languages like Perl, Python or Java are not acceptable precisely because of the additional complexities/dependencies that they require.
+
+Ideally (if any referee/reader was inclined to do so), the proposed template of this paper allows for a similar level of scrutiny, but for much more complex research scenarios, involving hundreds of dependencies and complex processing on the data.
+
+
+
+\subsection{WINGS (2010)}
+\label{appendix:wings}
+
+WINGS (\url{https://wings-workflows.org}) is an automatic workflow generation algorithm \citep{gil10}.
+It runs on a centralized web server, requiring many dependencies (such that it is recommended to download Docker images).
+It allows users to define various workflow components (for example datasets and analysis components), with high-level goals.
+It then uses selection and rejection algorithms to find the best components using a pool of analysis components that can satisfy the requested high-level constraints.
+\tonote{Read more about this}
+
+
+
+
+
+\subsection{Active Papers (2011)}
+\label{appendix:activepapers}
+Active Papers (\url{http://www.activepapers.org}) attempts to package the code and data of a project into one file (in HDF5 format).
+It was initially written in Java because its compiled bytecode can be run on any machine with a Java virtual machine \citep[see][]{hinsen11}.
+However, Java is not a commonly used platform today, hence it was later implemented in Python \citep{hinsen15}.
+
+In the Python version, all processing steps and input data (or references to them) are stored in an HDF5 file.
+However, it can only account for pure-Python packages using the host operating system's Python modules \tonote{confirm this!}.
+When the Python module contains a component written in other languages (mostly C or C++), it needs to be an external dependency to the Active Paper.
+
+As mentioned in \citet{hinsen15}, the fact that it relies on HDF5 is a caveat of Active Papers, because many tools are necessary to access it.
+Downloading the pre-built HDF View binaries (provided by the HDF group) is not possible anonymously/automatically (login is required).
+Installing it using the Debian or Arch Linux package managers also failed due to dependencies.
+Furthermore, as a high-level data format, HDF5 evolves very fast; for example HDF5 1.12.0 (February 29th, 2020) is not usable with older libraries provided by the HDF5 team. % maybe replace with: February 29\textsuperscript{th}, 2020?
+
+While data and code are indeed fundamentally similar concepts technically \tonote{cite Konrad's paper on this}, they are used by humans differently.
+This becomes a burden when large datasets are used, this was also acknowledged in \citet{hinsen15}.
+If the data are proprietary (for example medical patient data), the data must not be released, but the methods by which they were produced can be.
+Furthermore, since all reading and writing is done in the HDF5 file, it can easily bloat to very large sizes due to temporary/reproducible files, and it becomes necessary to remove/dummify them, thus complicating the code and making it hard to read.
+For example, the Active Papers HDF5 file of \citet[in \href{https://doi.org/10.5281/zenodo.2549987}{zenodo.2549987}]{kneller19} is 1.8 gigabytes.
+
+In many scenarios, peers just want to inspect the processing by reading the code and checking a very specific part of it (one or two lines), not necessarily needing to run it or obtain the datasets.
+Hence the extra volume for data, and an HDF5 format that needs special tools even for reading plain-text code, are a major burden.
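+
+For example, even listing or reading the plain-text code stored inside an Active Paper requires the HDF5 libraries; a minimal sketch with h5py, where the file and dataset names are hypothetical:
+
+\begin{verbatim}
+# Reading an HDF5-packaged project needs HDF5 tooling even for
+# plain-text contents; names below are hypothetical.
+import h5py
+
+with h5py.File("paper.h5", "r") as f:
+    f.visit(print)                    # list every stored object
+    code = f["code/analysis.py"][()]  # a code blob as a dataset
+    print(code.decode() if isinstance(code, bytes) else code)
+\end{verbatim}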
+
+
+
+
+
+\subsection{Collage Authoring Environment (2011)}
+\label{appendix:collage}
+The Collage Authoring Environment \citep{nowakowski11} was the winner of the Elsevier Executable Paper Grand Challenge \citep{gabriel11}.
+It is based on the GridSpace2\footnote{\url{http://dice.cyfronet.pl}} distributed computing environment\tonote{find citation}, which has a web-based graphic user interface.
+Through its web-based interface, viewers of a paper can actively experiment with the parameters of a published paper's displayed outputs (for example figures).
+\tonote{See how it containerizes the software environment}
+
+
+
+
+
+\subsection{SHARE (2011)}
+\label{appendix:SHARE}
+SHARE (\url{https://is.ieis.tue.nl/staff/pvgorp/share}) is a web portal that hosts virtual machines (VMs) for storing the environment of a research project, for more, see \citet{vangorp11}.
+The top project webpage above is still active; however, the virtual machines and the SHARE system itself were removed in 2019.
+
+SHARE took second place in the Elsevier Executable Paper Grand Challenge \citep{gabriel11}.
+Simply put, SHARE was just a VM that users could download and run.
+The limitations of VMs for reproducibility were discussed in Appendix \ref{appendix:virtualmachines}, and the SHARE system does not specify any requirements on making the VM itself reproducible.
+
+
+
+
+
+\subsection{Verifiable Computational Result, VCR (2011)}
+\label{appendix:verifiableidentifier}
+A ``verifiable computational result'' (\url{http://vcr.stanford.edu}) is an output (table, figure, etc.) that is associated with a ``verifiable result identifier'' (VRI), see \citet{gavish11}.
+It was awarded the third prize in the Elsevier Executable Paper Grand Challenge \citep{gabriel11}.
+
+A VRI is created using tags within the programming source that produced that output, also recording its version control or history.
+This enables exact identification and citation of results.
+The VRIs are automatically generated web-URLs that link to public VCR repositories containing the data, inputs and scripts, that may be re-executed.
+According to \citet{gavish11}, the VRI generation routine has been implemented in MATLAB, R and Python, although only the MATLAB version was available during the writing of this paper.
+VCR also has special \LaTeX{} macros for loading the respective VRI into the generated PDF.
+
+Unfortunately most parts of the webpage are not complete at the time of this writing.
+The VCR webpage contains an example PDF\footnote{\url{http://vcr.stanford.edu/paper.pdf}} that is generated with this system, however, the linked VCR repository (\inlinecode{http://vcr-stat.stanford.edu}) does not exist at the time of this writing.
+Finally, the dates of the files in the MATLAB extension tarball are set to 2011, hinting that VCR was probably abandoned soon after the publication of \citet{gavish11}.
+
+
+
+
+
+\subsection{SOLE (2012)}
+\label{appendix:sole}
+SOLE (Science Object Linking and Embedding) defines ``science objects'' (SOs) that can be manually linked with phrases of the published paper \citep[for more, see ][]{pham12,malik13}.
+An SO is any code/content that is wrapped in begin/end tags with an associated type and name.
+For example special commented lines in a Python, R or C program.
+The SOLE command-line program parses the tagged file, generating metadata elements unique to the SO (including its URI).
+SOLE also supports workflows as Galaxy tools \citep{goecks10}.
+
+For reproducibility, \citet{pham12} suggest building a SOLE-based project in a virtual machine, using any custom package manager that is hosted on a private server to obtain a usable URI.
+However, as described in Appendices \ref{appendix:independentenvironment} and \ref{appendix:packagemanagement}, unless virtual machines are built with robust package managers, this is not a sustainable solution (the virtual machine itself is not reproducible).
+Also, hosting a large virtual machine server with a fixed IP address on a hosting service like Amazon (as suggested there) will be very expensive.
+The manual/artificial definition of tags to connect parts of the paper with the analysis scripts is also a caveat, due to human error and incompleteness (tags the authors may not consider important, but that may be useful later).
+The solution of the proposed template (where anything coming out of the analysis is directly linked to the paper's contents with \LaTeX{} elements) avoids these problems.
+
+
+
+
+
+\subsection{Sumatra (2012)}
+Sumatra (\url{http://neuralensemble.org/sumatra}) attempts to capture the environment information of a running project \citep{davison12}.
+It is written in Python and is a command-line wrapper over the analysis script; by controlling its execution, it is able to capture the environment it was run in.
+The captured environment can be viewed in plain text or through a web interface.
+Sumatra also provides \LaTeX/Sphinx features, which will link the paper with the project's Sumatra database.
+This enables researchers to use a fixed version of a project's figures in the paper, even at later times (while the project is being developed).
+
+The actual code that Sumatra wraps around must itself be under version control, and it doesn't run if there are uncommitted changes (although it is not clear what happens if a commit is amended).
+Since information on the environment has been captured, Sumatra is able to identify whether it has changed since a previous run of the project.
+Sumatra therefore makes no attempt at storing the environment of the analysis itself (as Sciunit does, see Appendix \ref{appendix:sciunit}), only information about it.
+It thus needs to know the language of the running program.
+
+
+
+
+
+\subsection{Research Object (2013)}
+\label{appendix:researchobject}
+
+The Research Object (\url{http://www.researchobject.org}) is a collection of metadata ontologies to describe the aggregation of resources, or workflows; see \citet{bechhofer13} and \citet{belhajjame15}.
+It thus provides resources to link various workflow/analysis components (see Appendix \ref{appendix:existingtools}) into a final workflow.
+
+\citet{bechhofer13} describes how a workflow in Apache Taverna (Appendix \ref{appendix:taverna}) can be translated into research objects.
+The important thing is that the research object concept is not specific to any particular workflow; it is just a metadata bundle, which is only as robust in reproducing the result as the workflow it describes.
+For example, Apache Taverna cannot guarantee exact reproducibility, as described in Appendix \ref{appendix:taverna}.
+However, a translator could be written to convert projects built with the proposed template into research objects.
+
+
+
+
+
+\subsection{Sciunit (2015)}
+\label{appendix:sciunit}
+Sciunit (\url{https://sciunit.run}) defines ``sciunits'' that keep the executed commands for an analysis and all the necessary programs and libraries that are used in those commands.
+It automatically parses all the executables in the script, and copies them, and their dependency libraries (down to the C library), into the sciunit.
+Because the sciunit contains all the programs and necessary libraries, it is possible to run it readily on other systems that have a similar CPU architecture.
+For more, please see \citet{meng15}.
+
+In our tests, Sciunit installed successfully, however we couldn't run it because of a dependency problem with the \inlinecode{tempfile} package (in the standard Python library).
+Sciunit is written in Python 2 (which reached its end-of-life on January 1st, 2020) and its last Git commit on its main branch is from June 2018 (more than 1.5 years ago).
+Recent activity in a \inlinecode{python3} branch shows that others are attempting to translate the code into Python 3 (the main author has graduated and is apparently no longer working on Sciunit).
+
+Because we weren't able to run it, the following discussion is necessarily theoretical.
+The main issue with Sciunit's approach is that the copied binaries are just black boxes.
+Therefore, it is not possible to see how the binaries copied from the initial system were built, or whether they have security problems.
+This is a major problem for scientific projects, both in principle (not knowing how the programs were built) and in practice (archiving a large-volume sciunit for every step of an analysis requires a lot of space).
+
+
+
+
+
+\subsection{Binder (2017)}
+Binder (\url{https://mybinder.org}) is a tool to containerize already existing Jupyter-based processing steps.
+Users simply add a set of Binder-recognized configuration files to their repository.
+Binder will build a Docker image and install all the dependencies inside of it with Conda (the list of necessary packages comes from Conda).
+One good feature of Binder is that the imported Docker image must be tagged (something like a checksum).
+This ensures that future/latest updates of the imported Docker image are not mistakenly used.
+However, it does not ensure that the Dockerfile used by the imported Docker image follows a similar convention.
+Binder is used by \citet{jones19}.
+
+
+
+
+
+\subsection{Gigantum (2017)}
+Gigantum (\url{https://gigantum.com}) is a client/server system, in which the client is a web-based (graphical) interface that is installed as ``Gigantum Desktop'' within a Docker image and is free software (MIT License).
+\tonote{I couldn't find the license to the server software yet, but it says that 20GB is provided for ``free'', so it is a little confusing if anyone can actually run the server.}
+\tonote{I took the date from their PiPy page, where the first version 0.1 was published in November 2016.}
+
+Gigantum uses Docker containers for an independent environment, Conda (or Pip) to install packages, Jupyter notebooks to edit and run code, and Git to store its history.
+Simply put, it is a high-level wrapper for combining these components.
+Internally, a Gigantum project is organized as files in a directory that can be opened without Gigantum's own client.
+The file structure (which is under version control) includes code, input data and output data.
+As acknowledged on their own webpage, this greatly reduces the speed of Git operations, and of transmitting or archiving the project.
+Therefore there are limits on the dataset/code sizes.
+However, there is one directory which can be used to store files that must not be tracked.
+
+
+
+
+
+\subsection{Popper (2017)}
+\label{appendix:popper}
+Popper (\url{https://falsifiable.us}) is a software implementation of the Popper Convention \citep{jimenez17}.
+The Convention is a set of very generic conditions that are also applicable to the template proposed in this paper.
+For a discussion of the convention, please see Section \ref{sec:principles}; in this section we review their software implementation.
+
+The Popper team's own solution is through a command-line program called \inlinecode{popper}.
+The \inlinecode{popper} program itself is written in Python, but job management is with the HashiCorp configuration language (HCL).
+HCL is primarily aimed at running jobs on HashiCorp's ``infrastructure as a service'' (IaaS) products.
+Until September 30th, 2019\footnote{\url{https://github.blog/changelog/2019-09-17-github-actions-will-stop-running-workflows-written-in-hcl}}, HCL was used by ``GitHub Actions'' to manage workflows. % maybe use the \textsuperscript{th} with dates?
+
+To start a project, the \inlinecode{popper} command-line program builds a template, or ``scaffold'', which is a minimal set of files that can be run.
+The scaffold is very similar to the raw template that is proposed in this paper.
+However, as of this writing, the scaffold isn't complete.
+It lacks a manuscript and validation of outputs (as mentioned in the convention).
+By default Popper runs in a Docker image (so root permissions are necessary), but Singularity is also supported.
+See Appendix \ref{appendix:independentenvironment} for more on containers, and Appendix \ref{appendix:highlevelinworkflow} for using high-level languages in the workflow.
+
+
+
+
+
+\subsection{Whole Tale (2019)}
+\label{appendix:wholetale}
+
+Whole Tale (\url{https://wholetale.org}) is a web-based platform for managing a project and organizing data provenance; see \citet{brinckman19}.
+It uses online editors like Jupyter or RStudio (see Appendix \ref{appendix:editors}) that are encapsulated in a Docker container (see Appendix \ref{appendix:independentenvironment}).
+
+The web-based nature of Whole Tale's approach, and its dependency on many tools (which have many dependencies themselves) is a major limitation for future reproducibility.
+For example, when following their own tutorial on ``Creating a new tale'', the provided Jupyter notebook could not be executed because of a dependency problem.
+This has been reported to the authors as issue 113\footnote{\url{https://github.com/whole-tale/wt-design-docs/issues/113}}, but as all the second-order dependencies evolve, it is not hard to envisage such dependency incompatibilities becoming the primary issue for older projects on Whole Tale.
+Furthermore, the fact that a Tale is stored as a binary Docker container causes two important problems: 1) it requires a very large storage capacity for every project that is hosted there, making it very expensive to scale if demand expands; 2) it is not possible to see accurately how the environment was built (when the Dockerfile uses \inlinecode{apt}); for more on this, please see Appendix \ref{appendix:packagemanagement}.
+
+
+
+
+
+\subsection{Things to add}
+\url{https://sites.nationalacademies.org/cs/groups/pgasite/documents/webpage/pga_180684.pdf} classifies tools as follows:
+ \begin{itemize}
+ \item Research environments: \href{http://vcr.stanford.edu}{Verifiable computational research} (discussed above), \href{http://www.sciencedirect.com/science/article/pii/S1877050911001207}{SHARE} (a Virtual Machine), \href{http://www.codeocean.com}{Code Ocean} (discussed above), \href{http://jupyter.org}{Jupyter} (discussed above), \href{https://yihui.name/knitr}{knitR} (based on Sweave, dynamic report generation with R), \href{https://cran.r-project.org}{Sweave} (Function in R, for putting R code within \LaTeX), \href{http://www.cyverse.org}{Cyverse} (proprietary web tool with servers for bioinformatics), \href{https://nanohub.org}{NanoHUB} (collection of Simulation Programs for nanoscale phenomena that run in the cloud), \href{https://www.elsevier.com/about/press-releases/research-and-journals/special-issue-computers-and-graphics-incorporates-executable-paper-grand-challenge-winner-collage-authoring-environment}{Collage Authoring Environment} (discussed above), \href{https://osf.io/ns2m3}{SOLE} (discussed above), \href{https://osf.io}{Open Science framework} (a hosting webpage), \href{https://www.vistrails.org}{VisTrails} (discussed above), \href{https://pypi.python.org/pypi/Sumatra}{Sumatra} (discussed above), \href{http://software.broadinstitute.org/cancer/software/genepattern}{GenePattern} (reviewed above), Image Processing On Line (\href{http://www.ipol.im}{IPOL}) journal (publishes full analysis scripts, but doesn't deal with dependencies), \href{https://github.com/systemslab/popper}{Popper} (reviewed above), \href{https://galaxyproject.org}{Galaxy} (reviewed above), \href{http://torch.ch}{Torch.ch} (finished project for neural networks on images), \href{http://wholetale.org/}{Whole Tale} (discussed above).
+ \item Workflow systems: \href{http://www.taverna.org.uk}{Taverna}, \href{http://www.wings-workflows.org}{Wings}, \href{https://pegasus.isi.edu}{Pegasus}, \href{http://www.pgbovine.net/cde.html}{CDE}, \href{http://binder.org}{Binder}, \href{http://wiki.datakurator.org/wiki}{Kurator}, \href{https://kepler-project.org}{Kepler}, \href{https://github.com/everware}{Everware}, \href{http://cds.nyu.edu/projects/reprozip}{Reprozip}.
+ \item Dissemination platforms: \href{http://researchcompendia.org}{ResearchCompendia}, \href{https://datacenterhub.org/about}{DataCenterHub}, \href{http://runmycode.org}{RunMyCode}, \href{https://www.chameleoncloud.org}{ChameleonCloud}, \href{https://occam.cs.pitt.edu}{Occam}, \href{http://rcloud.social/index.html}{RCloud}, \href{http://thedatahub.org}{TheDataHub}, \href{http://www.ahay.org/wiki/Package_overview}{Madagascar}.
+ \end{itemize}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\newpage
+\section{Things remaining to add}
+\begin{itemize}
+\item Special volume on ``Reproducible research'' in the Computing in Science Engineering \citep{fomel09}.
+\item ``I’ve learned that interactive programs are slavery (unless they include the ability to arrive in any previous state by means of a script).'' \citep{fomel09}.
+\item \citet{fomel09} discuss the ``versioning problem'': on different systems, programs have different versions.
+\item \citet{fomel09}: a C program written 20 years ago was still usable.
+\item \citet{fomel09}: ``in an attempt to increase the size of the community, Matthias Schwab and I submitted a paper to Computers in Physics, one of CiSE’s forerunners. It was rejected. The editors said if everyone used Microsoft computers, everything would be easily reproducible. They also predicted the imminent demise of Fortran''.
+\item \citet{alliez19}: Software citation, with a nice dependency plot for matplotlib.
+ \item SC \href{https://sc19.supercomputing.org/submit/reproducibility-initiative}{Reproducibility Initiative} for mandatory Artifact Description (AD).
+ \item \href{https://www.acm.org/publications/policies/artifact-review-badging}{Artifact review badging} by the Association of computing machinery (ACM).
+ \item eLife journal \href{https://elifesciences.org/labs/b521cf4d/reproducible-document-stack-towards-a-scalable-solution-for-reproducible-articles}{announcement} on reproducible papers. \citet{lewis18} is their first reproducible paper.
+ \item The \href{https://www.scientificpaperofthefuture.org}{Scientific paper of the future initiative} encourages geoscientists to include associate metadata with scientific papers \citep{gil16}.
+ \item Digital objects: \url{http://doi.org/10.23728/b2share.b605d85809ca45679b110719b6c6cb11} and \url{http://doi.org/10.23728/b2share.4e8ac36c0dd343da81fd9e83e72805a0}
+ \item \citet{mesirov10}, \citet{casadevall10}, \citet{peng11}: Importance of reproducible research.
+ \item \citet{sandve13} is an editorial recommendation to publish reproducible results.
+ \item \citet{easterbrook14} Free/open software for open science.
+ \item \citet{peng15}: Importance of better statistical education.
+ \item \citet{topalidou16}: Failed attempt to reproduce a result.
+ \item \citet{hutton16} reproducibility in hydrology, criticized in \citet{melson17}.
+ \item \citet{fomel09}: Editorial on reproducible research.
+ \item \citet{munafo17}: Reproducibility in social sciences.
+ \item \citet{stodden18}: Effectiveness of journal policy on computational reproducibility.
 \item \citet{fanelli18} is critical of the narrative that there is a ``reproducibility crisis'', and argues that it is important to empower scientists.
+ \item \citet{burrell18} open software (in particular Python) in heliophysics.
+ \item \citet{allen18} show that many papers don't cite software.
 \item \citet{zhang18} explicitly say that they won't release their code: ``We opt not to make the code used for the chemical evolution modeling publicly available because it is an important asset of the researchers’ toolkits''
 \item \citet{jones19} make a genuine effort at reproducing every number in the paper (using Docker, Conda, CGAT-core and Binder), but they can ultimately only release scripts. They claim it is not possible to achieve that level of reproducibility, but here we show it is.
+ \item LSST uses Kubernetes and docker for reproducibility \citep{banek19}.
+ \item Interesting survey/paper on the importance of coding in science \citep{merali10}.
 \item Discuss the Provenance challenge \citep{moreau08}, showing the importance of metadata and provenance tracking.
 Especially that it is organized by medical scientists.
+ Its webpage (for latest challenge) has a nice intro: \url{https://www.cccinnovationcenter.com/challenges/provenance-challenge}.
 \item In discussion: The XML provenance system is very interesting; scripts can be written to parse the Makefiles within this template to generate such XML outputs for easy standard metadata parsing.
+ The XML that contains a log of the outputs is also interesting.
+ \item \citet{becker17} Discuss reproducibility methods in R.
+ \item Elsevier Executable Paper Grand Challenge\footnote{\url{https://shar.es/a3dgl2}} \citep{gabriel11}.
 \item \citet{menke20} show how software identifiability has seen the best improvement, so there is hope!
+ \item Nature's collection on papers about reproducibility: \url{https://www.nature.com/collections/prbfkwmwvz}.
+ \item Nice links for applying FAIR principles in research software: \url{https://www.rd-alliance.org/group/software-source-code-ig/wiki/fair4software-reading-materials}
+ \item Jupyter Notebooks and problems with reproducibility: \citet{rule18} and \citet{pimentel19}.
+ \item Reproducibility certification \url{https://www.cascad.tech}.
+ \item \url{https://plato.stanford.edu/entries/scientific-reproducibility}.
+ \item
+Modern analysis tools are almost entirely implemented as software packages.
+This has led many scientists to adopt solutions that software developers use for reproducing software (for example to fix bugs, or avoid security issues).
+These tools, and how they are used, are thoroughly reviewed in Appendices \ref{appendix:existingtools} and \ref{appendix:existingsolutions}.
+However, the problem of reproducibility in the sciences is more complicated and subtle than that of software engineering.
+This difference can be broken up into the following categories, which are described more fully below:
+1) Reading vs. executing, 2) Archiving how software is used and 3) Citation of the software/methods used for scientific credit.
+
+The first difference is because in the sciences, reproducibility is not merely a problem of re-running a research project (where a binary blob like a container or virtual machine is sufficient).
+For a scientist it is more important to read/study a method of a paper that is 1, 10, or 100 years old.
+The hardware to execute the code may have become obsolete, or it may require too much processing power, storage, or time for another random scientist to execute.
+Another scientist just needs to be assured that the commands they are reading are exactly what was (and can potentially be) executed.
+
+On the second point, scientists are devoting a smaller fraction of their papers to the technical aspects of the work because they are done increasingly by pre-written software programs and libraries.
+Therefore, scientific papers are no longer a complete repository for preserving and archiving very important aspects of the scientific endeavor and hard gained experience.
+Attempts such as Software Heritage\footnote{\url{https://www.softwareheritage.org}} \citep{dicosmo18} do a wonderful job at long term preservation and archival of the software source code.
+However, preservation of the software's raw code is only part of the process; it is also critically important to preserve how the software was used: with what configuration or run-time options, for what kinds of problems, in conjunction with which other software tools, etc.
+
+The third major difference is scientific credit, which is measured in units of citations, not dollars.
+As described above, scientific software is playing an increasingly important role in modern science.
+Because of the domain-specific knowledge necessary to produce such software, it is mostly written by scientists for scientists.
+Therefore a significant amount of effort and research funding has gone into producing scientific software.
+At least for the software that does have an accompanying paper, it is thus important that those papers be cited when the software is used.
+\end{itemize}
+
+
+
+%% Mention all used software in an appendix.
+\section{Software acknowledgement}
+\label{appendix:softwareacknowledge}
+\input{tex/build/macros/dependencies.tex}
+
+%% Finish LaTeX
+\end{document}
+
+%% This file is part of the reproducible paper template
+%% https://gitlab.com/makhlaghi/reproducible-paper
+%
+%% This template is free software: you can redistribute it and/or modify it
+%% under the terms of the GNU General Public License as published by the
+%% Free Software Foundation, either version 3 of the License, or (at your
+%% option) any later version.
+%
+%% This template is distributed in the hope that it will be useful, but
+%% WITHOUT ANY WARRANTY; without even the implied warranty of
+%% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+%% General Public License for more details.
+%
+%% You should have received a copy of the GNU General Public License along
+%% with Template. If not, see <https://www.gnu.org/licenses/>.
diff --git a/tex/src/preamble-biblatex.tex b/tex/src/preamble-biblatex.tex
deleted file mode 100644
index 2ac18d9..0000000
--- a/tex/src/preamble-biblatex.tex
+++ /dev/null
@@ -1,147 +0,0 @@
-%% Biblatex settings.
-%%
-%% Settings necessary to make the bibliography with BibLaTeX. Keeping all
-%% BibLaTeX settings in a separate preamble was done in the spirit of
-%% modularity: 1) it is easily manageable, 2) if a similar BibLaTeX
-%% configuration is necessary in another LaTeX compilation, this file can
-%% just be copied there and used.
-%%
-%% USAGE:
-%% - 'tex/src/references.tex': the file containing Bibtex source of each
-%% reference. The file suffix doesn't have to be '.bib'. This naming
-%% helps in clearly identifying the files and avoiding places that
-%% complain about '.bib' files.
-%
-%% Copyright (C) 2018-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
-%
-%% This file is free software: you can redistribute it and/or modify it
-%% under the terms of the GNU General Public License as published by the
-%% Free Software Foundation, either version 3 of the License, or (at your
-%% option) any later version.
-%
-%% This file is distributed in the hope that it will be useful, but WITHOUT
-%% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
-%% FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
-%% for more details.
-%
-%% You should have received a copy of the GNU General Public License along
-%% with this file. If not, see <http://www.gnu.org/licenses/>.
-
-
-
-
-%% To break up highlighted text (for example \texttt{} text when some of it
-%% falls on a line break) and also to avoid underlining emphasized words
-%% (like journal titles in the references).
-\usepackage[normalem]{ulem}
-
-
-
-
-
-%% For quotation signs (sometimes used by BibLaTeX)
-\usepackage{csquotes}
-
-
-
-
-
-%% To define colors
-\usepackage{xcolor}
-
-
-
-
-
-% Basic BibLaTeX settings
-\usepackage[
- doi=false,
- url=false,
- dashed=false,
- eprint=false,
- maxbibnames=2,
- minbibnames=1,
- hyperref=true,
- maxcitenames=2,
- mincitenames=1,
- giveninits=true,
- style=authoryear,
- uniquelist=false,
- backend=biber,natbib]{biblatex}
-\DeclareFieldFormat[article]{pages}{#1}
-\DeclareFieldFormat{pages}{\mkfirstpage[{\mkpageprefix[bookpagination]}]{#1}}
-\addbibresource{tex/src/references.tex}
-\addbibresource{tex/build/macros/dependencies-bib.tex}
-\renewbibmacro{in:}{}
-\AtEveryBibitem{\clearfield{month}}
-\renewcommand*{\bibfont}{\footnotesize}
-\DefineBibliographyStrings{english}{references = {References}}
-
-%% Include the adsurl field key into those that are recognized:
-\DeclareSourcemap{
- \maps[datatype=bibtex]{
- \map{
- \step[fieldsource=adsurl,fieldtarget=iswc]
- \step[fieldsource=gbkurl,fieldtarget=iswc]
- }
- }
-}
-
-%% Set the color of the DOI links to magenta and the ADS links to
-%% mypurp (a custom purple):
-\definecolor{mypurp}{cmyk}{0.75,1,0,0}
-\newcommand{\doihref}[2]{\href{#1}{\color{magenta}{#2}}}
-\newcommand{\adshref}[2]{\href{#1}{\color{mypurp}{#2}}}
-\newcommand{\blackhref}[2]{\href{#1}{\color{black}{#2}}}
-
-%% Define a format for the printtext commands in
-%% DeclareBibliographyDriver to make links for the doi, ads link and
-%% arxiv link:
-\DeclareFieldFormat{doilink}{
- \iffieldundef{doi}{#1}{\doihref{http://dx.doi.org/\thefield{doi}}{#1}}}
-\DeclareFieldFormat{adslink}{
- \iffieldundef{iswc}{#1}{\adshref{\thefield{iswc}}{#1}}}
-\DeclareFieldFormat{arxivlink}{
- \iffieldundef{eprint}{#1}{\href{http://arxiv.org/abs/\thefield{eprint}}{#1}}}
-
-\DeclareListFormat{doiforbook}{
- \iffieldundef{doi}{#1}{\doihref{http://dx.doi.org/\thefield{doi}}{#1}}}
-\DeclareFieldFormat{googlebookslink}{
- \iffieldundef{iswc}{#1}{\adshref{\thefield{iswc}}{#1}}}
-
-%% Set the formatting to make the last three values into the
-%% appropriate link. Note that the % signs are necessary. Without
-%% them, the items will be indented.
-\DeclareBibliographyDriver{article}{%
- \usebibmacro{bibindex}%
- \usebibmacro{begentry}%
- \usebibmacro{author/translator+others}%
- \newunit%
- \ifdefined\makethesis\printtext{\usebibmacro{title}}\fi%
- \newunit%
- \printtext[doilink]{\usebibmacro{journal}}%
- \addcomma%
- \printtext[adslink]{\printfield{volume}}%
- \addcomma%
- \printtext[arxivlink]{\printfield{pages}}%
- \addperiod%
-}
-
-\DeclareBibliographyDriver{book}{%
- \usebibmacro{bibindex}%
- \usebibmacro{begentry}%
- \usebibmacro{author/translator+others}%
- \newunit%
- \printtext{\usebibmacro{title}}%
- \addperiod%
- \addspace%
- \printlist[doiforbook]{publisher}%
- \addcomma%
- \addspace%
- \printfield[googlebookslink]{edition}%
- \printtext{ ed.}%
- \addperiod%
-}
-
-%% In order to have et al. instead of et al.,:
-\renewcommand*{\nameyeardelim}{\addspace}
diff --git a/tex/src/preamble-maneage-default-style.tex b/tex/src/preamble-maneage-default-style.tex
deleted file mode 100644
index 10a61ad..0000000
--- a/tex/src/preamble-maneage-default-style.tex
+++ /dev/null
@@ -1,161 +0,0 @@
-%% General paper's style settings.
-%
-%% This preamble can be completely ignored when including this TeX file in
-%% another style. This is done because this LaTeX build is meant to be an
-%% initial/internal phase or part of a larger effort, so it has a basic
-%% style defined here as a preamble. To ignore it, comment out or delete the
-%% respective line in 'paper.tex'.
-%
-%% Copyright (C) 2019-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
-%
-%% This file is free software: you can redistribute it and/or modify it
-%% under the terms of the GNU General Public License as published by the
-%% Free Software Foundation, either version 3 of the License, or (at your
-%% option) any later version.
-%
-%% This file is distributed in the hope that it will be useful, but WITHOUT
-%% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
-%% FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
-%% for more details.
-%
-%% You should have received a copy of the GNU General Public License along
-%% with this file. If not, see <http://www.gnu.org/licenses/>.
-
-
-
-
-
-%% Font.
-\usepackage[T1]{fontenc}
-\usepackage{newtxtext}
-\usepackage{newtxmath}
-
-
-
-
-
-%% Print size
-\usepackage[a4paper, includeheadfoot, body={18.7cm, 24.5cm}]{geometry}
-
-
-
-
-
-%% Set the distance between the columns if two columns:
-\setlength{\columnsep}{0.75cm}
-
-
-
-
-
-% To allow figures to take up more space on the top of the page:
-\renewcommand{\topfraction}{.99}
-\renewcommand{\bottomfraction}{.7}
-\renewcommand{\textfraction}{.05}
-\renewcommand{\floatpagefraction}{.99}
-\renewcommand{\dbltopfraction}{.99}
-\renewcommand{\dblfloatpagefraction}{.99}
-\setcounter{topnumber}{1}
-\setcounter{bottomnumber}{0}
-\setcounter{totalnumber}{2}
-\setcounter{dbltopnumber}{1}
-
-
-
-
-
-%% To make the footnotes align:
-\usepackage[hang]{footmisc}
-\setlength\footnotemargin{10pt}
-
-
-
-
-
-%For including time in the title:
-\usepackage{datetime}
-
-
-
-
-
-%% Define the abstract environment
-\renewenvironment{abstract}
- {\vspace{-0.5cm}\small%
- \list{}{%
- \setlength{\leftmargin}{2cm}%
- \setlength{\rightmargin}{\leftmargin}%
- }%
- \item\relax}
- {\endlist}
-
-
-
-
-
-%% To keep the main page's code clean.
-\newcommand{\includeabstract}[1]{%
-\twocolumn[%
- \begin{@twocolumnfalse}%
- \maketitle%
- \begin{abstract}%
- #1%
- \end{abstract}%
- \vspace{1cm}%
- \end{@twocolumnfalse}%
- ]%
-}
-
-
-
-
-
-%% Basic header style
-%% ------------------
-%
-%% The steps below are to use the necessary LaTeX packages to get the demo
-%% Maneage paper running with a reasonably looking, custom paper style. If
-%% you are using a custom journal style, feel free to delete these.
-
-%% General page header settings.
-\usepackage{fancyhdr}
-\pagestyle{fancy}
-\lhead{\footnotesize{\scshape Draft paper}, {\footnotesize nnn:i (pp), Year Month day}}
-\rhead{\scshape\footnotesize YOUR-NAME et al.}
-\cfoot{\thepage}
-\setlength{\voffset}{0.75cm}
-\setlength{\headsep}{0.2cm}
-\setlength{\footskip}{0.75cm}
-\renewcommand{\headrulewidth}{0pt}
-
-%% Specific style for first page.
-\fancypagestyle{firststyle}
-{
- \lhead{\footnotesize{\scshape Draft paper}, nnn:i (pp), YYYY Month day\\
- \scriptsize \textcopyright YYYY, Your name. All rights reserved.}
- \rhead{\footnotesize \footnotesize \today, \currenttime\\}
-}
-
-%To set the style of the titles:
-\usepackage{titlesec}
-\titleformat{\section}
- {\centering\normalfont\uppercase}
- {\thesection.}
- {0em}
- { }
-\titleformat{\subsection}
- {\centering\normalsize\slshape}
- {\thesubsection.}
- {0em}
- { }
-\titleformat{\subsubsection}
- {\centering\small\slshape}
- {\thesubsubsection.}
- {0em}
- { }
-
-%% Title and author information
-\usepackage{authblk}
-\renewcommand\Authfont{\small\scshape}
-\renewcommand\Affilfont{\footnotesize\normalfont}
-\setlength{\affilsep}{0.2cm}
diff --git a/tex/src/preamble-pgfplots.tex b/tex/src/preamble-pgfplots.tex
index 75119d6..6c8ed5b 100644
--- a/tex/src/preamble-pgfplots.tex
+++ b/tex/src/preamble-pgfplots.tex
@@ -42,20 +42,15 @@
%
%% Copyright (C) 2018-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
%
-%% This file is part of Maneage (https://maneage.org).
+%% This LaTeX file is part of Maneage. Maneage is free software: you can
+%% redistribute it and/or modify it under the terms of the GNU General
+%% Public License as published by the Free Software Foundation, either
+%% version 3 of the License, or (at your option) any later version.
%
-%% This file is free software: you can redistribute it and/or modify it
-%% under the terms of the GNU General Public License as published by the
-%% Free Software Foundation, either version 3 of the License, or (at your
-%% option) any later version.
-%
-%% This file is distributed in the hope that it will be useful, but WITHOUT
+%% Maneage is distributed in the hope that it will be useful, but WITHOUT
%% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
%% FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
-%% for more details.
-%
-%% You should have received a copy of the GNU General Public License along
-%% with this file. If not, see <http://www.gnu.org/licenses/>.
+%% for more details. See <http://www.gnu.org/licenses/>.
@@ -69,7 +64,9 @@
%% slow with detailed plots). 2) You can use the PDFs of the individual
%% plots for other purposes (for example to include in slides) cleanly.
\usepackage{tikz}
+\usetikzlibrary{graphs}
\usetikzlibrary{external}
+\usetikzlibrary{positioning}
\tikzexternalize
\tikzsetexternalprefix{tikz/}
@@ -77,28 +74,17 @@
-%% The '\includetikz' can be used to either build the figures using
-%% PGFPlots (when '\makepdf' is defined), or use an existing file (when
-%% '\makepdf' isn't defined). When making the PDF, it will set the output
-%% figure name to be the same as the 'tex/src/XXXX.tex' file that contains
-%% the PGFPlots source of the figure. In this way, when using the PDF, it
-%% will also have the same name, thus allowing the figures to easily change
-%% their place relative to others: figure ordering won't be a problem. This
-%% is a problem by default because if an explicit name isn't set at the
-%% start, tikz will make images based on their order in the paper.
-%
-%% This function takes two arguments:
-%% 1) The base-name of the LaTeX file with the 'tikzpicture'
-%% environment. As mentioned above, this will also be the name of
-%% the produced figure.
-%% 2) The settings to use with 'includegraphics' when an already-built
-%% file should be used.
+%% The following rule causes the file keeping a figure's externalized PDF
+%% to be named after the file that holds its TikZ commands. Without this,
+%% TikZ numbers the external files by the order of the figures in the
+%% paper; those numbers change whenever figures are reordered in the final
+%% PDF, making them hard and error-prone to manage.
\newcommand{\includetikz}[2]{%
\ifdefined\makepdf%
\tikzsetnextfilename{#1}%
\input{tex/src/#1.tex}%
\else
- \includegraphics[#2]{tex/tikz/#1.pdf}
+ \includegraphics[#2]{tex/tikz/#1.eps}
\fi
}
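+%% A minimal usage sketch (the figure name and caption below are only
+%% illustrative, not files of this project):
+%%
+%%   \begin{figure}
+%%     \includetikz{figure-demo}{width=\linewidth}
+%%     \caption{A demonstration plot.}
+%%   \end{figure}
+%%
+%% With '\makepdf' defined, this builds the figure from
+%% 'tex/src/figure-demo.tex' and externalizes the result under 'tikz/';
+%% otherwise the previously built 'tex/tikz/figure-demo.eps' is included
+%% with the options given in the second argument.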
@@ -109,12 +95,15 @@
%% Uncomment the following lines for EPS and PS images. Note that you still
%% have to use the 'pdflatex' executable and also add a '[dvips]' option to
%% graphicx.
-
-%% \tikzset{external/system call={rm -f "\image".eps "\image".ps
-%% "\image".dvi; latex \tikzexternalcheckshellescape -halt-on-error
-%% -interaction=batchmode -jobname "\image" "\texsource";
-%% dvips -o "\image".ps "\image".dvi;
-%% ps2eps "\image.ps"}}
+\tikzset{
+ external/system call={
+ rm -f "\image".eps "\image".ps "\image".dvi;
+ latex \tikzexternalcheckshellescape -halt-on-error
+ -interaction=batchmode -jobname "\image" "\texsource";
+ dvips -o "\image".ps "\image".dvi;
+ ps2eps "\image.ps"
+ }
+}
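+%% Note that this call chain assumes the 'latex', 'dvips' and 'ps2eps'
+%% executables are all available on the PATH when the figures are built.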
@@ -132,3 +121,101 @@
legend style = {font=\footnotesize},
label style = {font=\footnotesize}
}
+
+
+
+
+
+%% Nodes in demo graphs
+
+%% sub-Makefiles.
+\tikzset{node-makefile/.style={
+ thick,
+ rectangle,
+ anchor=north,
+ minimum height=4.7cm,
+ minimum width=2.1cm,
+ draw=green!50!black!50,
+ fill=black!10!green!12!white}}
+
+%% Input files (green, sharp-edged boxes).
+\tikzset{node-nonterminal/.style={
+ rectangle,
+ very thick,
+ anchor=north,
+ text centered,
+ top color=white,
+ text width=1.7cm,
+ minimum height=4mm,
+ draw=green!50!black!50,
+ bottom color=green!80!black!50,
+ font=\ttfamily}}
+
+\tikzset{node-terminal/.style={
+ rectangle,
+ very thick,
+ draw=blue!50,
+ text centered,
+ top color=white,
+ text width=1.7cm,
+ minimum height=4mm,
+ rounded corners=2mm,
+ bottom color=blue!20,
+ font=\ttfamily}}
+
+
+%%%%%%%%%%%%%%%%%%%
+
+\tikzset{node-nonterminal-thin/.style={
+ rectangle,
+ thick,
+ text centered,
+ top color=white,
+ text width=2cm,
+ minimum size=2mm,
+ draw=green!50!black!50,
+ bottom color=green!80!black!50,
+ font=\ttfamily\scriptsize}}
+
+\tikzset{node-point/.style={
+ circle,
+ black!50,
+ inner sep=0pt,
+ minimum size=0pt,
+ fill=white}}
+
+\tikzset{ bbox/.style={
+ rectangle,
+ minimum width=2.5cm,
+ rounded corners=2mm,
+ very thick,draw=blue!50,
+ top color=white,
+ bottom color=blue!20 } }
+
+\tikzset{ rbox/.style={
+ rectangle,
+ dotted,
+ minimum width=2.5cm,
+ rounded corners=2mm,
+ very thick,draw=red!50!black!50,
+ top color=white,
+ bottom color=red!50!black!20 } }
+
+\tikzset{ gbox/.style={
+ rectangle,
+ minimum width=2.5cm,
+ very thick,
+ draw=green!50!black!50,
+ top color=white,
+ bottom color=green!50!black!20 } }
+
+\tikzset{ dirbox/.style={
+ thick,
+ rectangle,
+ anchor=north,
+ text centered,
+ font=\ttfamily,
+ minimum width=15cm,
+ minimum height=7.5cm,
+ draw=brown!50!black!50,
+ fill=brown!10!white }}
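+%% A minimal sketch of how these node styles can be used inside a
+%% 'tikzpicture' (the node names and labels below are only illustrative):
+%%
+%%   \begin{tikzpicture}
+%%     \node [node-nonterminal] (input) {input.fits};
+%%     \node [node-terminal, right=of input] (output) {result.txt};
+%%     \draw [->] (input) -- (output);
+%%   \end{tikzpicture}
+%%
+%% The 'right=of input' placement syntax is provided by the 'positioning'
+%% TikZ library loaded above.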
diff --git a/tex/src/preamble-project.tex b/tex/src/preamble-project.tex
index a596aec..d5a30af 100644
--- a/tex/src/preamble-project.tex
+++ b/tex/src/preamble-project.tex
@@ -33,67 +33,68 @@
%% For loading images into the output (with '\includegraphics').
\usepackage{graphicx}
-%% Ordering correction between 'figure' and 'figure*' ('figure*' is
-%% commonly used in two-column documents, where the figure should span both
-%% columns).
-\usepackage{fixltx2e}
-
-%% Color management.
-\usepackage{xcolor}
-\color{black} % Color of main text.
-\definecolor{DarkBlue}{RGB}{0,0,90}
-
-%% Caption management: The 'setspace' package defines the 'stretch'
-%% variable. 'abovecaptionskip' is the distance between the figure and the
-%% caption. You can use 'captionof{figure}{...}' to use these custom
-%% 'figure' caption that is defined here.
-\usepackage{setspace, caption}
-\captionsetup{font=footnotesize, labelfont={color=DarkBlue,bf}, skip=1pt}
-\captionsetup[figure]{font={stretch=1, small}}
-\setlength{\abovecaptionskip}{3pt plus 1pt minus 1pt}
-\setlength{\belowcaptionskip}{-1.25em}
-
-%% Manage links in the produced paper (for example their colors), and
-%% include document information in the "Properties" of the PDF.
+%% IEEEtran V1.6 and later pre-defines the format of the cite.sty package
+%% \cite{} output to follow that of the IEEE.
+\usepackage{cite}
+
+%% For the `\url' command.
+\usepackage{url}
+
+%% To have links.
\usepackage[
colorlinks,
- urlcolor=blue,
- citecolor=blue,
- linkcolor=blue,
+ urlcolor=gray,
+ citecolor=gray,
+ linkcolor=gray,
linktocpage]{hyperref}
\renewcommand\UrlFont{\rmfamily}
-\hypersetup{
- pdftitle={\projecttitle},
- pdfauthor={\projectcopyrightowner},
- pdfsubject={\projectgitrepo{} (commit \projectversion)},
- pdfkeywords={Reproducible research, Maneage, ADD YOUR OWN}
-}
+%% To have multiple bibliographies (one for the main paper, one for the
+%% appendix). With 'multibib' we need to specify a name for each
+%% bibliography. But this is only necessary when the appendices are to be
+%% included in the final paper. When the supplement should be separate, it
+%% will be treated as a completely independent build, so '\citeappendix'
+%% should just be mapped to '\cite'.
+\ifdefined\separatesupplement
+\newcommand{\citeappendix}{\cite}
+\else
+\usepackage{multibib}
+\newcites{appendix}{Bibliography}
+\fi
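+%% For example (with citation keys from 'tex/src/references.tex'): the main
+%% body would use '\cite{akhlaghi15}' as usual, while an appendix would use
+%% '\citeappendix{dicosmo18}'. In a separate-supplement build, the latter
+%% simply behaves as an ordinary '\cite'.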
+
+%% To have typewriter font
+\usepackage{courier}
+
+%% To have bold monospace
+%\usepackage[scaled=0.85]{beramono}
+\usepackage{inconsolata}
+
+%% To display codes.
+\usepackage{listings}
+\usepackage{etoolbox}
+\input{listings-bash.prf}
+\lstset{
+ frame=lines,
+ numbers=none,
+ language=bash,
+ commentstyle=\color{gray},
+ abovecaptionskip=0mm,
+ belowcaptionskip=0mm,
+ keywordstyle=\mdseries,
+ basicstyle=\small\ttfamily\color{blue!35!black},
+}
+\makeatletter
+\preto\lstlisting{\def\@captype{table}}
+\lst@AddToHook{OnEmptyLine}{\vspace{-0.5\baselineskip}}
+\pretocmd\lst@makecaption{\noindent{\rule{\linewidth}{1pt}}}{}{}
+\makeatother
+%% Custom macros
+\newcommand{\inlinecode}[1]{\textcolor{blue!35!black}{\texttt{#1}}}
+\newcommand\eprint[1]{\href{https://arXiv.org/abs/#1}{{arXiv:#1}}}
+\newcommand\doi[1]{\href{https://oadoi.org/#1}{{DOI:#1}}}
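+%% A short sketch of how the macros above can be used in the body of the
+%% paper (the arguments below are only illustrative; the arXiv ID and DOI
+%% are those of this paper):
+%%
+%%   Run \inlinecode{./project make} to reproduce the results
+%%   (\eprint{2006.03018}, \doi{10.1109/MCSE.2021.3072860}).
+%%
+%% Displayed code goes in a 'lstlisting' environment; with the hooks above,
+%% its caption is typeset and counted like a table caption:
+%%
+%%   \begin{lstlisting}[caption={Building the project.}]
+%%     ./project configure
+%%     ./project make
+%%   \end{lstlisting}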
-%% BibLaTeX or PGFPlots templates
-%% ------------------------------
-%
-%% These are ready-made customizations of these two commonly used packages
-%% that you can use as a template for your own project: BibLaTeX (advanced
-%% bibliography management) or PGFPlots (for drawing plots within LaTeX
-%% directly from tables of data). If you don't use them, you can just
-%% delete these two lines and also delete their files from your branch (to
-%% keep the 'tex/src' directory on your branch clean).
-\input{tex/src/preamble-biblatex.tex}
+%% Import Maneage template for PGFPlots.
\input{tex/src/preamble-pgfplots.tex}
-
-
-
-
-
-%% Style of default paper (DELETE IF USING JOURNAL STYLES)
-%% -------------------------------------------------------
-%
-%% This is primarily defined for the default Maneage paper style. So when
-%% you later import your journal's style, delete this line (and these
-%% comments). Also delete the file (to keep your project source branch
-%% clean from files you don't need/use).
-\input{tex/src/preamble-maneage-default-style.tex}
diff --git a/tex/src/references.tex b/tex/src/references.tex
index b38706b..4f6af2a 100644
--- a/tex/src/references.tex
+++ b/tex/src/references.tex
@@ -1,51 +1,162 @@
%% Non-software BibTeX entries. The software-specific BibTeX entries are
-%% stored in a '*.tex' file under the 'tex/dependencies' directory.
+%% stored in a `*.tex' file under the `tex/dependencies' directory.
%
-%% Copyright (C) 2018-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
+%% [[[BibTeX 0.99d complains about the at-character, even when it is in a
+%% comment line! So ::at:: is used instead in the email address]]].
+%% Copyright (C) 2018-2022 Mohammad Akhlaghi <mohammad::at::akhlaghi.org>
%
%% Copying and distribution of this file, with or without modification,
%% are permitted in any medium without royalty provided the copyright
%% notice and this notice are preserved. This file is offered as-is,
%% without any warranty.
-@ARTICLE{maneage,
- author = {{Akhlaghi}, Mohammad and {Infante-Sainz}, Raul and
- {Roukema}, Boudewijn F. and {Khellat}, Mohammadreza and
- {Valls-Gabaud}, David and {Baena-Galle}, Roberto},
- title = "{Toward Long-Term and Archivable Reproducibility}",
- journal = {Computing in Science and Engineering},
- keywords = {Computer Science - Digital Libraries},
- year = 2021,
- month = may,
- volume = {23},
- number = {3},
- pages = {82-91},
- doi = {10.1109/MCSE.2021.3072860},
-archivePrefix = {arXiv},
- eprint = {2006.03018},
+
+
+
+
+@ARTICLE{peper20,
+ author = {{Peper}, Marius and {Roukema}, Boudewijn F.},
+ title = "{The role of the elaphrocentre in low surface brightness galaxy formation}",
+ journal = {arXiv e-prints},
+ keywords = {Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - Astrophysics of Galaxies},
+ year = 2020,
+ month = oct,
+ eid = {arXiv:2010.03742},
+ pages = {arXiv:2010.03742},
+archivePrefix = {arXiv},
+ eprint = {2010.03742},
+ primaryClass = {astro-ph.CO},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2020arXiv201003742P},
+ adsnote = {Provided by the SAO/NASA Astrophysics Data System}
+}
+
+
+
+
+
+@ARTICLE{roukema20,
+ author = {{Roukema}, Boudewijn F.},
+ title = "{Anti-clustering in the national SARS-CoV-2 daily infection counts}",
+ journal = {arXiv e-prints},
+ keywords = {Quantitative Biology - Populations and Evolution, Physics - Physics and Society, Statistics - Methodology},
+ year = 2020,
+ month = jul,
+ eid = {arXiv:2007.11779},
+ pages = {arXiv:2007.11779},
+archivePrefix = {arXiv},
+ eprint = {2007.11779},
+ primaryClass = {q-bio.PE},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2020arXiv200711779R},
+ adsnote = {Provided by the SAO/NASA Astrophysics Data System}
+}
+
+
+
+
+
+@ARTICLE{aissi20,
+ author = {Dylan A\"issi},
+ title = "{Review for Towards Long-term and Archivable Reproducibility}",
+ year = {2020},
+ journal = {Authorea},
+ volume = {n/a},
+ pages = {n/a},
+ doi = {10.22541/au.159724632.29528907},
+
+}
+
+
+
+
+
+@ARTICLE{mesnard20,
+ author = {Olivier Mesnard and Lorena A. Barba},
+ title = {Reproducible Workflow on a Public Cloud for Computational Fluid Dynamics},
+ year = {2020},
+ journal = {Computing in Science \& Engineering},
+ volume = {22},
+ pages = {102-116},
+ doi = {10.1109/MCSE.2019.2941702},
+}
+
+
+
+
+
+@ARTICLE{dicosmo20,
+ author = {{Di Cosmo}, Roberto and {Gruenpeter}, Morane and {Zacchiroli}, Stefano},
+ title = "{Referencing Source Code Artifacts: a Separate Concern in Software Citation}",
+ journal = {Computing in Science \& Engineering},
+ year = 2020,
+ volume = 22,
+ eid = {arXiv:2001.08647},
+ pages = {33},
+archivePrefix = {arXiv},
+ eprint = {2001.08647},
primaryClass = {cs.DL},
- adsurl = {https://ui.adsabs.harvard.edu/abs/2021CSE....23c..82A},
+ doi = {10.1109/MCSE.2019.2963148},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2020arXiv200108647D},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
-@ARTICLE{alliez19,
- author = {{Alliez}, Pierre and {Di Cosmo}, Roberto and {Guedj}, Benjamin and
- {Girault}, Alain and {Hacid}, Mohand-Said and {Legrand}, Arnaud and
- {Rougier}, Nicolas P.},
- title = "{Attributing and Referencing (Research) Software: Best Practices and Outlook from Inria}",
- journal = {CiSE},
- volume = {22},
- year = "2020",
- month = "Jan",
- pages = {39-52},
+
+@ARTICLE{hinsen20,
+ author = {Konrad Hinsen},
+ title = {The Magic of Content-Addressable Storage},
+ year = {2020},
+ journal = {Computing in Science \& Engineering},
+ volume = {22},
+ number = {03},
+ pages = {113-119},
+ doi = {10.1109/MCSE.2019.2949441},
+}
+
+
+
+
+@ARTICLE{menke20,
+ author = {Joe Menke and Martijn Roelandse and Burak Ozyurt and Maryann Martone and Anita Bandrowski},
+ title = {Rigor and Transparency Index, a new metric of quality for assessing biological and medical science methods},
+ year = {2020},
+ journal = {iScience},
+ volume = {23},
+ issue = {11},
+ pages = {101698},
+ doi = {10.1016/j.isci.2020.101698},
+}
+
+
+
+
+
+@ARTICLE{nust20,
+ author = {Daniel N\"ust and Vanessa Sochat and Ben Marwick and Stephen J. Eglen and Tim Head and Tony Hirst and Benjamin D. Evans},
+ title = "{Ten simple rules for writing Dockerfiles for reproducible data science}",
+ year = {2020},
+ journal = {PLOS Computational Biology},
+ volume = {16},
+ pages = {e1008316},
+ doi = {10.1371/journal.pcbi.1008316},
+}
+
+
+
+
+
+@ARTICLE{konkol20,
+ author = {{Konkol}, Markus and {N{\"u}st}, Daniel and {Goulier}, Laura},
+ title = "{Publishing computational research -- A review of infrastructures for reproducible and transparent scholarly communication}",
+ journal = {arXiv},
+ year = 2020,
+ month = jan,
+ pages = {2001.00484},
archivePrefix = {arXiv},
- eprint = {1905.11123},
+ eprint = {2001.00484},
primaryClass = {cs.DL},
- doi = {10.1109/MCSE.2019.2949413},
- adsurl = {https://ui.adsabs.harvard.edu/abs/2019arXiv190511123A},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2020arXiv200100484K},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@@ -53,13 +164,14 @@ archivePrefix = {arXiv},
-@ARTICLE{infantesainz20,
+@ARTICLE{infante20,
author = {{Infante-Sainz}, Ra{\'u}l and {Trujillo}, Ignacio and
{Rom{\'a}n}, Javier},
title = "{The Sloan Digital Sky Survey extended point spread functions}",
- journal = {MNRAS},
- year = 2020,
- month = feb,
+ journal = {Monthly Notices of the Royal Astronomical Society},
+ keywords = {instrumentation: detectors, methods: data analysis, techniques: image processing, techniques: photometric, galaxies: haloes, Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - Astrophysics of Galaxies},
+ year = "2020",
+ month = "Feb",
volume = {491},
number = {4},
pages = {5317-5329},
@@ -75,17 +187,1858 @@ archivePrefix = {arXiv},
+@ARTICLE{gibney20,
+ author = {Elizabeth Gibney},
+ title = "{This AI researcher is trying to ward off a reproducibility crisis}",
+ year = {2020},
+ journal = {Nature},
+ volume = {577},
+ pages = {14},
+ doi = {10.1038/d41586-019-03895-5},
+}
+
+
+
+
+
+@ARTICLE{lofstead19,
+ author = {Jay Lofstead and Joshua Baker and Andrew Younge},
+ title = {Data Pallets: Containerizing Storage for Reproducibility and Traceability},
+ year = {2019},
+ journal = {High Performance Computing. ISC High Performance 2019},
+ volume = {11887},
+ pages = {1},
+ doi = {10.1007/978-3-030-34356-9\_4},
+}
+
+
+
+
+
+@ARTICLE{clement19,
+  author = {Cl\'ement-Fontaine, M\'elanie and Di Cosmo, Roberto and Guerry, Bastien and Moreau, Patrick and Pellegrini, Fran\c cois},
+ title = {Encouraging a wider usage of software derived from research},
+ year = {2019},
+ journal = {Archives ouvertes HAL},
+ volume = {},
+ pages = {\href{https://hal.archives-ouvertes.fr/hal-02545142}{hal-02545142}},
+}
+
+
+
+
+
+@ARTICLE{pimentel19,
+ author = {{Jo\~ao Felipe} Pimentel and Leonardo Murta and Vanessa Braganholo and Juliana Freire},
+ title = {A large-scale study about quality and reproducibility of jupyter notebooks},
+ year = {2019},
+ journal = {Proceedings of the 16th International Conference on Mining Software Repositories},
+ volume = {1},
+ pages = {507-517},
+ doi = {10.1109/MSR.2019.00077},
+}
+
+
+
+
+
+@ARTICLE{miksa19a,
+ author = {Tomasz Miksa and Paul Walk and Peter Neish},
+ title = "{RDA DMP Common Standard for Machine-actionable Data Management Plans}",
+ year = {2019},
+ journal = {RDA},
+ pages = {doi:10.15497/rda00039},
+ doi = {10.15497/rda00039},
+}
+
+
+
+
+
+@ARTICLE{miksa19b,
+ author = {Tomasz Miksa and Stephanie Simms and Daniel Mietchen and Sarah Jones},
+ title = {Ten principles for machine-actionable data management plans},
+ year = {2019},
+ journal = {PLoS Computational Biology},
+ volume = {15},
+ pages = {e1006750},
+ doi = {10.1371/journal.pcbi.1006750},
+}
+
+
+
+
+
+@ARTICLE{dicosmo19,
+ author = {M\'elanie Cl\'ement-Fontaine and Roberto Di Cosmo and Bastien Guerry and Patrick Moreau and Francois Pellegrini},
+ title = {Encouraging a wider usage of software derived from research},
+ year = {2019},
+ journal = {Ouvrir la science},
+ volume = {},
+ pages = {\href{https://hal.archives-ouvertes.fr/hal-02545142}{hal-02545142}},
+ doi = {},
+}
+
+
+
+
+
+@ARTICLE{perignon19,
+ author = {Christophe P\'erignon and Kamel Gadouche and Christophe Hurlin and Roxane Silberman and Eric Debonnel},
+ title = {Certify reproducibility with confidential data},
+ year = {2019},
+ journal = {Science},
+ volume = {365},
+ pages = {127},
+ doi = {10.1126/science.aaw2825},
+}
+
+
+
+
+
+@ARTICLE{munafo19,
+ author = {Marcus Munaf\'o},
+ title = {Raising research quality will require collective action},
+ year = {2019},
+ journal = {Nature},
+ volume = {576},
+ pages = {183},
+ doi = {10.1038/d41586-019-03750-7},
+}
+
+
+
+
+
+@ARTICLE{jones19,
+ author = {{Jones}, M.~G. and {Verdes-Montenegro}, L. and {Damas-Segovia}, A. and
+ {Borthakur}, S. and {Yun}, M. and {del Olmo}, A. and {Perea}, J. and
+ {Rom{\'a}n}, J. and {Luna}, S. and {Lopez Gutierrez}, D. and
+ {Williams}, B. and {Vogt}, F.~P.~A. and {Garrido}, J. and
+ {Sanchez}, S. and {Cannon}, J. and {Ram{\'\i}rez-Moreta}, P.},
+ title = "{Evolution of compact groups from intermediate to final stages. A case study of the H I content of HCG 16}",
+ journal = {Astronomy \& Astrophysics},
+ eprint = {1910.03420},
+ keywords = {galaxies: groups: individual: HCG 16, galaxies: interactions, galaxies: evolution, galaxies: ISM, radio lines: galaxies},
+ year = "2019",
+ month = "Dec",
+ volume = {632},
+ eid = {A78},
+ pages = {A78},
+ doi = {10.1051/0004-6361/201936349},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2019A&A...632A..78J},
+ adsnote = {Provided by the SAO/NASA Astrophysics Data System}
+}
+
+
+
+
+
+@ARTICLE{banek19,
+ author = {{Banek}, Christine and {Thornton}, Adam and {Economou}, Frossie and
+ {Fausti}, Angelo and {Krughoff}, K. Simon and {Sick}, Jonathan},
+ title = "{Why is the LSST Science Platform built on Kubernetes?}",
+ journal = {Proceedings of ADASS XXIX},
+ volume = {arXiv},
+ keywords = {Astrophysics - Instrumentation and Methods for Astrophysics},
+ year = "2019",
+ month = "Nov",
+ eid = {arXiv:1911.06404},
+ pages = {1911.06404},
+archivePrefix = {arXiv},
+ eprint = {1911.06404},
+ primaryClass = {astro-ph.IM},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2019arXiv191106404B},
+ adsnote = {Provided by the SAO/NASA Astrophysics Data System}
+}
+
+
+
+
+
+@ARTICLE{fineberg19,
+  author = {Harvey V. Fineberg and David B. Allison and Lorena A. Barba and Dianne Chong and David L. Donoho and Juliana Freire and Gerald Gabrielse and Constantine Gatsonis and Edward Hall and Thomas H. Jordan and Dietram A. Scheufele and Victoria Stodden and Simine Vazire and Timothy D. Wilson and Wendy Wood and Jennifer Heimberg and Thomas Arrison and Michael Cohen and Michele Schwalbe and Adrienne Stith Butler and Barbara A. Wanchisen and Tina Winters and Rebecca Morgan and Thelma Cox and Lesley Webb and Garret Tyson and Erin Hammers Forstag},
+ title = {Reproducibility and Replicability in Science},
+ journal = {The National Academies Press},
+ year = 2019,
+ pages = {1-256},
+ doi = {10.17226/25303},
+}
+
+
+
+
+
@ARTICLE{akhlaghi19,
author = {{Akhlaghi}, Mohammad},
title = "{Carving out the low surface brightness universe with NoiseChisel}",
- journal = {IAU Symposium 335},
- year = 2019,
- month = sep,
+ journal = {IAU Symposium 355},
+ volume = {\,},
+ keywords = {Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - Astrophysics of Galaxies, Computer Science - Computer Vision and Pattern Recognition},
+ year = "2019",
+ month = "Sep",
eid = {arXiv:1909.11230},
- pages = {arXiv:1909.11230},
+ pages = {},
archivePrefix = {arXiv},
eprint = {1909.11230},
primaryClass = {astro-ph.IM},
adsurl = {https://ui.adsabs.harvard.edu/abs/2019arXiv190911230A},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
+
+
+
+
+
+@ARTICLE{cribbs19,
+ author = {Cribbs, AP and Luna-Valero, S and George, C and Sudbery, IM and Berlanga-Taylor, AJ and Sansom, SN and Smith, T and Ilott, NE and Johnson, J and Scaber, J and Brown, K and Sims, D and Heger, A},
+ title = {CGAT-core: a python framework for building scalable, reproducible computational biology workflows [version 2; peer review: 1 approved, 1 approved with reservations]},
+ journal = {F1000Research},
+ year = 2019,
+ volume = 8,
+ pages = {377},
+ doi = {10.12688/f1000research.18674.2},
+}
+
+
+
+
+
+@ARTICLE{kurtzer17,
+ author = {Gregory M. Kurtzer and Vanessa Sochat and Michael W. Bauer},
+ title = {Singularity: Scientific containers for mobility of compute},
+ journal = {PLoS ONE},
+ year = {2017},
+ volume = {12},
+ pages = {e0177459},
+ doi = {10.1371/journal.pone.0177459},
+}
+
+
+
+
+
+@ARTICLE{meng17,
+ author = {Haiyan Meng and Douglas Thain},
+ title = {Facilitating the Reproducibility of Scientific Workflows with Execution Environment Specifications},
+ journal = {Procedia Computer Science},
+ year = {2017},
+ volume = {108},
+ pages = {705},
+ doi = {10.1016/j.procs.2017.05.116},
+}
+
+
+
+
+
+@ARTICLE{tommaso17,
+ author = {Paolo Di Tommaso and Maria Chatzou and Evan W Floden and Pablo Prieto Barja and Emilio Palumbo and Cedric Notredame},
+ title = {Nextflow enables reproducible computational workflows},
+ journal = {Nature Biotechnology},
+ year = 2017,
+ volume = 35,
+ pages = 316,
+ doi = {10.1038/nbt.3820},
+}
+
+
+
+
+
+@ARTICLE{brinckman17,
+ author = {Adam Brinckman and Kyle Chard and Niall Gaffney and Mihael Hategan and Matthew B. Jones and Kacper Kowalik and Sivakumar Kulasekaran and Bertram Ludäscher and Bryce D. Mecum and Jarek Nabrzyski and Victoria Stodden and Ian J. Taylor and Matthew J. Turk and Kandace Turner},
+ title = {Computing environments for reproducibility: Capturing the ``Whole Tale''},
+ journal = {Future Generation Computer Systems},
+ year = 2017,
+ volume = 94,
+ pages = 854,
+ doi = {10.1016/j.future.2017.12.029},
+}
+
+
+
+
+
+@ARTICLE{uhse19,
+ author = {Uhse, Simon and Pflug, Florian G. and {von Haeseler}, Arndt and Djamei, Armin},
+  title = {Insertion Pool Sequencing for Insertional Mutant Analysis in Complex Host-Microbe Interactions},
+ journal = {Current Protocols in Plant Biology},
+ volume = {4},
+ year = "2019",
+ month = "July",
+ pages = {e20097},
+ doi = {10.1002/cppb.20097},
+}
+
+
+
+
+
+@ARTICLE{alliez19,
+ author = {{Alliez}, Pierre and {Di Cosmo}, Roberto and {Guedj}, Benjamin and
+ {Girault}, Alain and {Hacid}, Mohand-Said and {Legrand}, Arnaud and
+ {Rougier}, Nicolas P.},
+ title = "{Attributing and Referencing (Research) Software: Best Practices and Outlook from Inria}",
+ journal = {Computing in Science \& Engineering},
+ volume = {22},
+ keywords = {Computer Science - Digital Libraries, Computer Science - Software Engineering},
+ year = "2019",
+ month = "May",
+ pages = {39-52},
+archivePrefix = {arXiv},
+ eprint = {1905.11123},
+ primaryClass = {cs.DL},
+ doi = {10.1109/MCSE.2019.2949413},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2019arXiv190511123A},
+ adsnote = {Provided by the SAO/NASA Astrophysics Data System}
+}
+
+
+
+
+
+@ARTICLE{kneller19,
+  author = {Kneller, Gerald R. and Hinsen, Konrad},
+ title = {Memory effects in a random walk description of protein structure ensembles},
+ journal = {The Journal of Chemical Physics},
+ volume = {150},
+ year = {2019},
+ pages = {064911},
+ doi = {10.1063/1.5054887},
+}
+
+
+
+
+
+@ARTICLE{oliveira18,
+ author = {Lu\'is Oliveira and David Wilkinson and Daniel Moss\'e and Bruce Robert Childers},
+ title = {Supporting Long-term Reproducible Software Execution},
+ journal = {Proceedings of the First International Workshop on Practical Reproducible Evaluation of Computer Systems (P-RECS'18)},
+ volume = {1},
+ year = {2018},
+ pages = {6},
+ doi = {10.1145/3214239.3214245},
+}
+
+
+
+
+
+@ARTICLE{rule19,
+ author = {{Rule}, Adam and {Birmingham}, Amanda and {Zuniga}, Cristal and
+ {Altintas}, Ilkay and {Huang}, Shih-Cheng and {Knight}, Rob and
+ {Moshiri}, Niema and {Nguyen}, Mai H. and {Brin Rosenthal},
+ Sara and {P{\'e}rez}, Fernando and {Rose}, Peter W.},
+ title = {Ten Simple Rules for Reproducible Research in Jupyter Notebooks},
+ journal = {PLOS Computational Biology},
+ volume = {15},
+ year = 2019,
+ month = jul,
+ pages = {e1007007},
+archivePrefix = {arXiv},
+ eprint = {1810.08055},
+ doi = {10.1371/journal.pcbi.1007007},
+}
+
+
+
+
+
+@article{tange18,
+ author = {Tange, Ole},
+ title = {GNU Parallel 2018},
+ Journal = {Zenodo},
+ volume = {1146014},
+ pages = {\href{https://doi.org/10.5281/zenodo.1146014}{DOI:10.5281/zenodo.1146014}},
+ year = 2018,
+ ISBN = {9781387509881},
+ doi = {10.5281/zenodo.1146014},
+ url = {https://doi.org/10.5281/zenodo.1146014}
+}
+
+
+
+
+
+@ARTICLE{rule18,
+ author = {Adam Rule and Aur\'elien Tabard and {James D.} Hollan},
+ title = {Exploration and Explanation in Computational Notebooks},
+ journal = {Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
+ volume = {1},
+ year = {2018},
+ pages = {30},
+ doi = {10.1145/3173574.3173606},
+}
+
+
+
+
+
+@ARTICLE{plesser18,
+ author = {Hans E. Plesser},
+ title = {Reproducibility vs. Replicability: A Brief History of a Confused Terminology},
+ journal = {Frontiers in Neuroinformatics},
+ volume = {11},
+ year = {2018},
+ pages = {76},
+ doi = {10.3389/fninf.2017.00076},
+}
+
+
+
+
+
+@ARTICLE{zhang18,
+ author = {{Zhang}, Zhi-Yu and {Romano}, D. and {Ivison}, R.~J. and
+ {Papadopoulos}, Padelis P. and {Matteucci}, F.},
+ title = "{Stellar populations dominated by massive stars in dusty starburst galaxies across cosmic time}",
+ journal = {Nature},
+ keywords = {Astrophysics - Astrophysics of Galaxies},
+ year = "2018",
+ month = "Jun",
+ volume = {558},
+ number = {7709},
+ pages = {260},
+ doi = {10.1038/s41586-018-0196-x},
+archivePrefix = {arXiv},
+ eprint = {1806.01280},
+ primaryClass = {astro-ph.GA},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2018Natur.558..260Z},
+ adsnote = {Provided by the SAO/NASA Astrophysics Data System}
+}
+
+
+
+
+
+@ARTICLE{smart18,
+ author = {{Smart}, A.G.},
+ title = {The war over supercooled water},
+ journal = {Physics Today},
+ volume = {Aug},
+ year = "2018",
+ pages = {DOI:10.1063/PT.6.1.20180822a},
+ doi = {10.1063/PT.6.1.20180822a},
+}
+
+
+
+
+
+@ARTICLE{kaiser18,
+ author = {{Kaiser}, J.},
+ title = {Plan to replicate 50 high-impact cancer papers shrinks to just 18},
+ journal = {Science},
+ volume = {Jul},
+ year = "2018",
+ pages = {31},
+ doi = {10.1126/science.aau9619},
+}
+
+
+
+
+
+@ARTICLE{abramatic18,
+ author = {Jean-Francois Abramatic and Roberto {Di Cosmo} and Stefano Zacchiroli},
+  title = {Building the Universal Archive of Source Code},
+ journal = {Communications of the ACM},
+ year = 2018,
+ volume = 61,
+ issue = 10,
+ pages = {29},
+ doi = {10.1145/3183558},
+}
+
+
+
+
+
+@ARTICLE{dicosmo18,
+ author = {{Di Cosmo}, Roberto and {Gruenpeter}, Morane and {Zacchiroli}, Stefano},
+ title = {Identifiers for Digital Objects: The case of software source code preservation},
+ journal = {Proceedings of iPRES 2018},
+ year = "2018",
+ pages = {204.4},
+ doi = {10.17605/osf.io/kde56},
+}
+
+
+
+
+
+@ARTICLE{gruning18,
+ author = {Gr\"uning, Bj\"orn and Chilton, John and K\"oster, Johannes and Dale, Ryan and Soranzo, Nicola and {van den Beek}, Marius and Goecks, Jeremy and Backofen, Rolf and Nekrutenko, Anton and Taylor, James},
+ title = {Practical Computational Reproducibility in the Life Sciences},
+ journal = {Cell Systems},
+ volume = 6,
+ year = "2018",
+ pages = {631. bioRxiv:\href{https://www.biorxiv.org/content/10.1101/200683v2}{200683}},
+ doi = {10.1016/j.cels.2018.03.014},
+}
+
+
+
+
+
+@ARTICLE{allen18,
+ author = {{Allen}, Alice and {Teuben}, Peter J. and {Ryan}, P. Wesley},
+ title = "{Schroedinger's Code: A Preliminary Study on Research Source Code Availability and Link Persistence in Astrophysics}",
+ journal = {The Astrophysical Journal Supplement Series},
+ keywords = {methods: numerical, Astrophysics - Instrumentation and Methods for Astrophysics, Computer Science - Digital Libraries},
+ year = "2018",
+ month = "May",
+ volume = {236},
+ number = {1},
+ eid = {10},
+ pages = {10},
+ doi = {10.3847/1538-4365/aab764},
+archivePrefix = {arXiv},
+ eprint = {1801.02094},
+ primaryClass = {astro-ph.IM},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2018ApJS..236...10A},
+ adsnote = {Provided by the SAO/NASA Astrophysics Data System}
+}
+
+
+
+
+
+@ARTICLE{burrell18,
+ author = {{Burrell}, A.G. and {Halford}, A. and {Klenzing}, J. and {Stoneback}, R.A. and {Morley}, S.K. and {Annex}, A.M. and {Laundal}, K.M. and {Kellerman}, A.C. and {Stansby}, D. and {Ma}, J.},
+ title = {Snakes on a Spaceship—An Overview of Python in Heliophysics},
+ journal = {Journal of Geophysical Research: Space Physics},
+ volume = {123},
+ year = "2018",
+ pages = {384},
+ doi = {10.1029/2018JA025877},
+}
+
+
+
+
+
+@article{stodden18,
+ author = {{Stodden}, V. and {Seiler}, J. and {Ma}, Z.},
+ title = {An empirical analysis of journal policy effectiveness for computational reproducibility},
+ volume = {115},
+ number = {11},
+ pages = {2584},
+ year = {2018},
+ doi = {10.1073/pnas.1708290115},
+ issn = {0027-8424},
+ URL = {https://www.pnas.org/content/115/11/2584},
+ journal = {Proceedings of the National Academy of Sciences}
+}
+
+
+
+
+
+@article {fanelli18,
+ author = {{Fanelli}, D.},
+ title = {Opinion: Is science really facing a reproducibility crisis, and do we need it to?},
+ volume = {115},
+ number = {11},
+ pages = {2628},
+ year = {2018},
+ doi = {10.1073/pnas.1708272114},
+ publisher = {National Academy of Sciences},
+ issn = {0027-8424},
+ URL = {https://www.pnas.org/content/115/11/2628},
+ journal = {Proceedings of the National Academy of Sciences}
+}
+
+
+
+
+
+
+@ARTICLE{lewis18,
+ author = {{Lewis}, L.M. and {Edwards}, M.C. and {Meyers}, Z.R. and {Conover Talbot}, C. and {Hao}, H. and {Blum}, D. },
+ title = "{Replication Study: Transcriptional amplification in tumor cells with elevated c-Myc}",
+ journal = {eLife},
+ volume = {7},
+ year = "2018",
+ month = "January",
+ pages = {e30274},
+ doi = {10.7554/eLife.30274},
+}
+
+
+
+
+
+@ARTICLE{akhlaghi18b,
+ author = {{Akhlaghi}, Mohammad and {Bacon}, Roland and {Inami}, Hanae},
+ title = "{MUSE HUDF survey I \& II, Sections 7.3 \& 3.4: photometry for objects with no prior broad-band segmentation map}",
+ journal = {Zenodo},
+ pages = {DOI:10.5281/zenodo.1164774},
+ year = "2018",
+ month = "February",
+ doi = {10.5281/zenodo.1164774},
+}
+
+
+
+
+
+@ARTICLE{akhlaghi18a,
+ author = {{Akhlaghi}, Mohammad and {Bacon}, Roland},
+ title = "{MUSE HUDF survey I, Section 4: data and reproduction pipeline for photometry and astrometry}",
+ journal = {Zenodo},
+ pages = {DOI:10.5281/zenodo.1163746},
+ year = "2018",
+ month = "January",
+ doi = {10.5281/zenodo.1163746},
+}
+
+
+
+
+
+@ARTICLE{leek17,
+ author = {Jeffrey T. Leek and Leah R. Jager},
+ title = {Is Most Published Research Really False?},
+ journal = {Annual Review of Statistics and Its Application},
+ volume = {4},
+ year = {2017},
+ pages = {109},
+ doi = {10.1146/annurev-statistics-060116-054104},
+}
+
+
+
+
+
+@ARTICLE{reich17,
+ author = {Michael Reich and Thorin Tabor and Ted Liefeld and Helga Thorvaldsdóttir and Barbara Hill and Pablo Tamayo and Jill P. Mesirov},
+ title = {The GenePattern Notebook Environment},
+ journal = {Cell Systems},
+ year = {2017},
+ volume = {5},
+ pages = {149},
+ doi = {10.1016/j.cels.2017.07.003},
+}
+
+
+
+
+
+@ARTICLE{becker17,
+ author = {Gabriel Becker and Cory Barr and Robert Gentleman and Michael Lawrence},
+ title = {Enhancing Reproducibility and Collaboration via Management of R Package Cohorts},
+ journal = {Journal of Statistical Software, Articles},
+ volume = {82},
+ pages = 1,
+ year = "2017",
+archivePrefix = {arXiv},
+ eprint = {1501.02284},
+ doi = {10.18637/jss.v082.i01},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2015arXiv150102284B},
+}
+
+
+
+
+
+@ARTICLE{jenness17,
+ author = {{Jenness}, Tim},
+ title = "{Modern Python at the Large Synoptic Survey Telescope}",
+ journal = {ADASS 27},
+ year = "2017",
+ month = "Dec",
+ eid = {arXiv:1712.00461},
+ pages = {arXiv:1712.00461},
+archivePrefix = {arXiv},
+ eprint = {1712.00461},
+ primaryClass = {astro-ph.IM},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2017arXiv171200461J},
+ adsnote = {Provided by the SAO/NASA Astrophysics Data System}
+}
+
+
+
+
+
+@article{bezanson17,
+ title={Julia: A fresh approach to numerical computing},
+ author={Bezanson, Jeff and Edelman, Alan and Karpinski, Stefan and Shah, Viral B},
+ journal={SIAM {R}eview},
+ volume={59},
+ number={1},
+ pages={65},
+ year={2017},
+ archivePrefix={arXiv},
+ eprint={1411.1607},
+ publisher={SIAM},
+ doi={10.1137/141000671},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2014arXiv1411.1607B},
+}
+
+
+
+
+
+@ARTICLE{melson17,
+  author = {{Melsen}, L.A. and {Torfs}, P.J.J.F. and {Uijlenhoet}, R. and {Teuling}, A.J.},
+  title = {Comment on ``Most computational hydrology is not reproducible, so is it really science?'' by Christopher Hutton et al.},
+ journal = {Water Resources Research},
+ volume = 53,
+ pages = {2568},
+ year = {2017},
+ doi = {10.1002/2016WR020208},
+}
+
+
+
+
+
+@ARTICLE{munafo17,
+  author = {{Munaf\'o}, M.R. and {Nosek}, B.A. and {Bishop}, D.V.M. and {Button}, K.S. and {Chambers}, C.D. and {Percie du Sert}, N. and {Simonsohn}, U. and {Wagenmakers}, E.J. and {Ware}, J.J. and {Ioannidis}, J.P.A.},
+ title = {A manifesto for reproducible science},
+ journal = {Nature Human Behaviour},
+ volume = 1,
+ pages = {21},
+ year = {2017},
+ doi = {10.1038/s41562-016-0021},
+}
+
+
+
+
+
+@ARTICLE{jimenez17,
+ title={The popper convention: Making reproducible systems evaluation practical},
+ author = {{Jimenez}, I. and {Sevilla}, M. and {Watkins}, N. and {Maltzahn}, C. and {Lofstead}, J. and {Mohror}, K. and {Arpaci-Dusseau}, A. and {Arpaci-Dusseau}, R.},
+ journal = {IEEE IPDPSW},
+ pages = {1561},
+ year = {2017},
+ doi = {10.1109/IPDPSW.2017.157},
+}
+
+
+
+
+
+@ARTICLE{bacon17,
+ author = {{Bacon}, Roland and {Conseil}, Simon and {Mary}, David and
+ {Brinchmann}, Jarle and {Shepherd}, Martin and {Akhlaghi}, Mohammad and
+ {Weilbacher}, Peter M. and {Piqueras}, Laure and {Wisotzki}, Lutz and
+ {Lagattuta}, David and {Epinat}, Benoit and {Guerou}, Adrien and
+ {Inami}, Hanae and {Cantalupo}, Sebastiano and
+ {Courbot}, Jean Baptiste and {Contini}, Thierry and {Richard}, Johan and
+ {Maseda}, Michael and {Bouwens}, Rychard and {Bouch{\'e}}, Nicolas and
+ {Kollatschny}, Wolfram and {Schaye}, Joop and {Marino}, Raffaella Anna and
+ {Pello}, Roser and {Herenz}, Christian and {Guiderdoni}, Bruno and
+ {Carollo}, Marcella},
+ title = "{The MUSE Hubble Ultra Deep Field Survey. I. Survey description, data reduction, and source detection}",
+ journal = {Astronomy \& Astrophysics},
+ keywords = {galaxies: distances and redshifts, galaxies: high-redshift, cosmology: observations, methods: data analysis, techniques: imaging spectroscopy, galaxies: formation, Astrophysics - Astrophysics of Galaxies},
+ year = "2017",
+ month = "Nov",
+ volume = {608},
+ eid = {A1},
+ pages = {A1},
+ doi = {10.1051/0004-6361/201730833},
+archivePrefix = {arXiv},
+ eprint = {1710.03002},
+ primaryClass = {astro-ph.GA},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2017A\&A...608A...1B},
+ adsnote = {Provided by the SAO/NASA Astrophysics Data System}
+}
+
+
+
+
+
+@ARTICLE{austin17,
+ author = {{Claire C.} Austin and Theodora Bloom and Sünje Dallmeier-Tiessen and {Varsha K.} Khodiyar and Fiona Murphy and Amy Nurnberger and Lisa Raymond and Martina Stockhause and Jonathan Tedds and Mary Vardigan and Angus Whyte},
+ title = {Key components of data publishing: using current best practices to develop a reference model for data publishing},
+ journal = {International Journal on Digital Libraries},
+ volume = {18},
+ year = {2017},
+ pages = {77-92},
+ doi = {10.1007/s00799-016-0178-2},
+}
+
+
+
+
+
+@ARTICLE{chirigati16,
+ author = {Chirigati, Fernando and Rampin, R{\'e}mi and Shasha, Dennis and Freire, Juliana},
+ title = {ReproZip: Computational Reproducibility With Ease},
+ journal = {Proceedings of the 2016 International Conference on Management of Data (SIGMOD 16)},
+ volume = {2},
+ year = {2016},
+ pages = {2085},
+ doi = {10.1145/2882903.2899401},
+}
+
+
+
+
+
+@ARTICLE{smith16,
+ author = {Arfon M. Smith and Daniel S. Katz and Kyle E. Niemeyer},
+ title = {Software citation principles},
+ journal = {PeerJ Computer Science},
+ volume = {2},
+ year = {2016},
+ pages = {e86},
+ doi = {10.7717/peerj-cs.86},
+}
+
+
+
+
+
+@ARTICLE{ziemann16,
+ author = {Mark Ziemann and Yotam Eren and Assam El-Osta},
+ title = {Gene name errors are widespread in the scientific literature},
+ journal = {Genome Biology},
+ volume = {17},
+ year = {2016},
+ pages = {177},
+ doi = {10.1186/s13059-016-1044-7},
+}
+
+
+
+
+
+@ARTICLE{hinsen16,
+ author = {Konrad Hinsen},
+ title = {Scientific notations for the digital era},
+ journal = {The Self Journal of Science},
+ year = {2016},
+ pages = {1: arXiv:\href{https://arxiv.org/abs/1605.02960}{1605.02960}},
+}
+
+
+
+
+
+@ARTICLE{kluyver16,
+ author = {Thomas Kluyver and Benjamin Ragan-Kelley and Fernando Pérez and Brian Granger and Matthias Bussonnier and Jonathan Frederic and Kyle Kelley and Jessica Hamrick and Jason Grout and Sylvain Corlay and Paul Ivanov and Damián Avila and Safia Abdalla and Carol Willing},
+ title = "{Jupyter Notebooks – a publishing format for reproducible computational workflows}",
+ journal = {Positioning and Power in Academic Publishing: Players, Agents and Agendas},
+ year = {2016},
+ pages = {87},
+ doi = {10.3233/978-1-61499-649-1-87},
+}
+
+
+
+
+
+@ARTICLE{baker16,
+ author = {{Baker}, M.},
+ title = "{Is there a reproducibility crisis?}",
+ journal = {Nature},
+ volume = {533},
+ year = "2016",
+ month = "May",
+ pages = {452},
+ doi = {10.1038/533452a},
+}
+
+
+
+
+
+@ARTICLE{wilkinson16,
+ author = { {Wilkinson}, M.D and {Dumontier}, M. and {Aalbersberg}, I.J. and {Appleton}, G. and {Axton}, M. and {Baak}, A. and {Blomberg}, N. and {Boiten}, J. and {da Silva Santos}, L.B and {Bourne}, P.E. and {Bouwman}, J. and {Brookes}, A.J. and {Clark}, T. and {Crosas}, M. and {Dillo}, I. and {Dumon}, O. and {Edmunds}, S. and {Evelo}, C. and {Finkers}, R. and {Gonzalez-Beltran}, A. and {Gray}, A.J.G. and {Groth}, P. and {Goble}, C. and {Grethe}, Jeffrey S. and {Heringa}, J. and {’t Hoen}, P.A.C and {Hooft}, R. and {Kuhn}, T. and {Kok}, R. and {Kok}, J. and {Lusher}, S. and {Martone}, M. and {Mons}, A. and {Packer}, A. and {Persson}, B. and {Rocca-Serra}, P. and {Roos}, M. and {van Schaik}, R. and {Sansone}, S. and {Schultes}, E. and {Sengstag}, T. and {Slater}, T. and {Strawn}, G. and {Swertz}, M. and {Thompson}, M. and {van der Lei}, J. and {van Mulligen}, E. and {Velterop}, J. and {Waagmeester}, A. and {Wittenburg}, P. and {Wolstencroft}, K. and {Zhao}, J. and {Mons}, B.},
+ title = "{The FAIR Guiding Principles for scientific data management and stewardship}",
+ journal = {Scientific Data},
+ year = 2016,
+ month = mar,
+ volume = 3,
+ pages = {160018},
+ doi = {10.1038/sdata.2016.18},
+}
+
+
+
+
+@ARTICLE{hutton16,
+ author = {{Hutton}, C. and {Wagener}, T. and {Freer}, J. and {Han}, D. and {Duffy}, C. and {Arheimer}, B.},
+ title = {Most computational hydrology is not reproducible, so is it really science?},
+ journal = {Water Resources Research},
+ year = {2016},
+ volume = 52,
+ pages = {7548},
+ doi = {10.1002/2016WR019285},
+}
+
+
+
+
+
+@ARTICLE{topalidou16,
+ author = {{Topalidou}, M. and {Leblois}, A. and {Boraud}, T. and {Rougier}, N.P.},
+ title = {A long journey into reproducible computational neuroscience},
+ journal = {Frontiers in Computational Neuroscience},
+ year = {2016},
+ volume = 9,
+ pages = {30},
+ doi = {10.3389/fncom.2015.00030},
+}
+
+
+
+
+
+@ARTICLE{gil16,
+ author = {{Gil}, Yolanda and {David}, C.H. and {Demir}, I. and {Essawy}, B.T. and {Fulweiler}, R.W. and {Goodall}, J.L. and {Karlstrom}, L. and {Lee}, H. and {Mills}, H.J. and {Oh}, J. and {Pierce}, S.A. and {Pope}, A. and {Tzeng}, M.W. and {Villamizar}, S.R. and {Yu}, X},
+ title = {Toward the Geoscience Paper of the Future: Best practices for documenting and sharing research from data to software to provenance},
+ journal = {Earth and Space Science},
+ year = 2016,
+ volume = 3,
+ pages = {388},
+ doi = {10.1002/2015EA000136},
+}
+
+
+
+
+
+@ARTICLE{romine15,
+ author = {Charles H. Romine},
+ title = {Secure Hash Standard (SHS)},
+ journal = {Federal Information processing standards publication},
+ volume = {180},
+ pages = {4},
+ year = {2015},
+ doi = {10.6028/NIST.FIPS.180-4},
+}
+
+
+
+
+
+@ARTICLE{horvath15,
+ author = {Steve Horvath},
+ title = "{Erratum to: DNA methylation age of human tissues and cell types}",
+ journal = {Genome Biology},
+ volume = {16},
+ pages = {96},
+ year = {2015},
+ doi = {10.1186/s13059-015-0649-6},
+}
+
+
+
+
+
+@ARTICLE{chang15,
+ author = {Andrew C. Chang and Phillip Li},
+ title = {Is Economics Research Replicable? Sixty Published Papers from Thirteen Journals Say ``Usually Not''},
+ journal = {Finance and Economics Discussion Series 2015-083},
+ year = {2015},
+ pages = {1},
+ doi = {10.17016/FEDS.2015.083},
+}
+
+
+
+
+
+@ARTICLE{schaffer15,
+ author = {Jonathan Schaffer},
+ title = {What Not to Multiply Without Necessity},
+ journal = {Australasian Journal of Philosophy},
+ volume = {93},
+ pages = {644},
+ year = {2015},
+ doi = {10.1080/00048402.2014.992447},
+}
+
+
+
+
+
+@ARTICLE{clarkso15,
+ author = "Chris Clarkson and Mike Smith and Ben Marwick and Richard Fullagar and Lynley A. Wallis and Patrick Faulkner and Tiina Manne and Elspeth Hayes and Richard G. Roberts and Zenobia Jacobs and Xavier Carah and Kelsey M. Lowe and Jacqueline Matthews and S. Anna Florin",
+ title = "{The archaeology, chronology and stratigraphy of Madjedbebe (Malakunanja II): A site in northern Australia with early occupation}",
+ journal = {Journal of Human Evolution},
+ year = 2015,
+ volume = 83,
+ pages = 46,
+ doi = {10.1016/j.jhevol.2015.03.014},
+}
+
+
+
+
+
+@ARTICLE{meng15b,
+ author = {Haiyan Meng and Douglas Thain},
+ title = {Umbrella: A Portable Environment Creator for Reproducible Computing on Clusters, Clouds, and Grids},
+ journal = {Proceedings of the 8th International Workshop on Virtualization Technologies in Distributed Computing (VTDC 15)},
+ year = 2015,
+ volume = 1,
+ pages = 23,
+ doi = {10.1145/2755979.2755982},
+}
+
+
+
+
+
+@ARTICLE{meng15,
+ author = {Haiyan Meng and Rupa Kommineni and Quan Pham and Robert Gardner and Tanu Malik and Douglas Thain},
+ title = {An invariant framework for conducting reproducible computational science},
+ journal = {Journal of Computational Science},
+ year = 2015,
+ volume = 9,
+ pages = 137,
+ doi = {10.1016/j.jocs.2015.04.012},
+}
+
+
+
+
+
+@ARTICLE{gamblin15,
+ author = {Gamblin, Todd and LeGendre, Matthew and Collette, Michael R. and Lee, Gregory L. and Moody, Adam and {de Supinski}, Bronis R. and Futral, Scott},
+ title = {The Spack package manager: bringing order to HPC software chaos},
+ journal = {IEEE SC15},
+ year = 2015,
+ volume = 1,
+ pages = {1},
+ doi = {10.1145/2807591.2807623},
+}
+
+
+
+
+@ARTICLE{akhlaghi15,
+ author = {{Akhlaghi}, M. and {Ichikawa}, T.},
+ title = "{Noise-based Detection and Segmentation of Nebulous Objects}",
+ journal = {The Astrophysical Journal Supplement Series},
+ archivePrefix = "arXiv",
+ eprint = {1505.01664},
+ primaryClass = "astro-ph.IM",
+ keywords = {galaxies: irregular, galaxies: photometry, galaxies: structure, methods: data analysis, techniques: image processing, techniques: photometric},
+ year = 2015,
+ month = sep,
+ volume = 220,
+ eid = {1},
+ pages = {1-33},
+ doi = {10.1088/0067-0049/220/1/1},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2015ApJS..220....1A},
+ adsnote = {Provided by the SAO/NASA Astrophysics Data System}
+}
+
+
+
+
+
+@ARTICLE{courtes15,
+ author = {{Court{\'e}s}, Ludovic and {Wurmus}, Ricardo},
+ title = {Reproducible and User-Controlled Software Environments in HPC with Guix},
+ journal = {Euro-Par},
+ volume = {9523},
+ keywords = {Computer Science - Distributed, Parallel, and Cluster Computing, Computer Science - Operating Systems, Computer Science - Software Engineering},
+ year = {2015},
+ month = {Jun},
+ eid = {arXiv:1506.02822},
+ pages = {arXiv:1506.02822},
+archivePrefix = {arXiv},
+ eprint = {1506.02822},
+ primaryClass = {cs.DC},
+ doi = {10.1007/978-3-319-27308-2\_47},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2015arXiv150602822C},
+ adsnote = {Provided by the SAO/NASA Astrophysics Data System}
+}
+
+
+
+
+
+@ARTICLE{hinsen15,
+ author = {{Hinsen}, K.},
+ title = {ActivePapers: a platform for publishing and archiving computer-aided research},
+ journal = {F1000Research},
+ year = 2015,
+ volume = 3,
+ pages = {289},
+ doi = {10.12688/f1000research.5773.3},
+}
+
+
+
+
+
+@ARTICLE{belhajjame15,
+ author = {{Belhajjame}, K. and {Zhao}, Z. and {Garijo}, D. and {Gamble}, M. and {Hettne}, K. and {Palma}, R. and {Mina}, E. and {Corcho}, O. and {Gómez-Pérez}, J.M. and {Bechhofer}, S. and {Klyne}, G. and {Goble}, C},
+ title = "{Using a suite of ontologies for preserving workflow-centric research objects}",
+ journal = {Journal of Web Semantics},
+ year = 2015,
+ volume = 32,
+ pages = {16},
+ doi = {10.1016/j.websem.2015.01.003},
+}
+
+
+
+
+
+@ARTICLE{bechhofer13,
+  author = {{Bechhofer}, S. and {Buchan}, I. and {De Roure}, D. and {Missier}, P. and {Ainsworth}, J. and {Bhagat}, J. and {Couch}, P. and {Cruickshank}, D. and {Delderfield}, M. and {Dunlop}, I. and {Gamble}, M. and {Michaelides}, D. and {Owen}, S. and {Newman}, D. and {Sufi}, S. and {Goble}, C.},
+ title = "{Why linked data is not enough for scientists}",
+ journal = {Future Generation Computer Systems},
+ year = 2013,
+ volume = 29,
+ pages = {599},
+ doi = {10.1016/j.future.2011.08.004},
+}
+
+
+
+
+
+@ARTICLE{peng15,
+ author = {{Peng}, R.D.},
+ title = {The reproducibility crisis in science: A statistical counterattack},
+ journal = {Significance},
+ year = 2015,
+ month = jun,
+ volume = 12,
+ pages = {30},
+ doi = {10.1111/j.1740-9713.2015.00827.x},
+}
+
+
+
+
+
+@ARTICLE{dolfi14,
+ author = {{Dolfi}, M. and {Gukelberger}, J. and {Hehn}, A. and
+ {Imri{\v{s}}ka}, J. and {Pakrouski}, K. and {R{\o}nnow},
+ T.~F. and {Troyer}, M. and {Zintchenko}, I. and {Chirigati},
+ F. and {Freire}, J. and {Shasha}, D.},
+ title = "{A model project for reproducible papers: critical temperature for the Ising model on a square lattice}",
+ journal = {arXiv},
+ year = 2014,
+ month = jan,
+ eid = {arXiv:1401.2000},
+ pages = {arXiv:1401.2000},
+  archivePrefix = {arXiv},
+ eprint = {1401.2000},
+ primaryClass = {cs.CE},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2014arXiv1401.2000D},
+ adsnote = {Provided by the SAO/NASA Astrophysics Data System}
+}
+
+
+
+
+
+@ARTICLE{katz14,
+ author = {Daniel S. Katz},
+ title = {Transitive Credit as a Means to Address Social and Technological Concerns Stemming from Citation and Attribution of Digital Products},
+ journal = {Journal of Open Research Software},
+ year = {2014},
+ volume = {2},
+ pages = {e20},
+ doi = {10.5334/jors.be},
+}
+
+
+
+
+
+@ARTICLE{herndon14,
+ author = {Thomas Herndon and Michael Ash and Robert Pollin},
+ title = "{Does high public debt consistently stifle economic growth? A critique of Reinhart and Rogoff}",
+ journal = {Cambridge Journal of Economics},
+ year = {2014},
+  month = dec,
+ volume = {38},
+ pages = {257},
+ doi = {10.1093/cje/bet075},
+}
+
+
+
+
+
+@ARTICLE{easterbrook14,
+  author = {{Easterbrook}, S.},
+ title = {Open code for open science?},
+ journal = {Nature Geoscience},
+ year = 2014,
+ month = oct,
+ volume = 7,
+ pages = {779},
+ doi = {10.1038/ngeo2283},
+}
+
+
+
+
+
+@ARTICLE{fomel13,
+ author = {Sergey Fomel and Paul Sava and Ioan Vlad and Yang Liu and Vladimir Bashkardin},
+ title = {Madagascar: open-source software project for multidimensional data analysis and reproducible computational experiments},
+  journal = {Journal of Open Research Software},
+ year = {2013},
+ volume = {1},
+ pages = {e8},
+ doi = {10.5334/jors.ag},
+}
+
+
+
+
+
+@ARTICLE{sandve13,
+ author = {{Sandve}, G.K. and {Nekrutenko}, A. and {Taylor}, J. and {Hovig}, E.},
+ title = {Ten Simple Rules for Reproducible Computational Research},
+ journal = {PLoS Computational Biology},
+ year = 2013,
+ month = oct,
+ volume = 9,
+ pages = {e1003285},
+ doi = {10.1371/journal.pcbi.1003285},
+}
+
+
+
+
+
+@ARTICLE{malik13,
+ author = {Tanu Malik and Quan Pham and Ian Foster},
+ title = {SOLE: Towards Descriptive and Interactive Publications},
+ journal = {Implementing Reproducible Research},
+ year = 2013,
+ volume = {Chapter 2},
+ pages = {1. URL: \url{https://osf.io/ns2m3}},
+}
+
+
+
+
+
+@ARTICLE{koster12,
+ author = {Johannes K\"oster and Sven Rahmann},
+ title = {Snakemake—a scalable bioinformatics workflow engine},
+ journal = {Bioinformatics},
+ volume = {28},
+  number = {19},
+ year = {2012},
+ pages = {2520},
+ doi = {10.1093/bioinformatics/bts480},
+}
+
+
+
+
+
+@ARTICLE{gronenschild12,
+ author = {Ed H. B. M. Gronenschild and Petra Habets and Heidi I. L. Jacobs and Ron Mengelers and Nico Rozendaal and Jim van Os and Machteld Marcelis},
+ title = "{The Effects of FreeSurfer Version, Workstation Type, and Macintosh Operating System Version on Anatomical Volume and Cortical Thickness Measurements}",
+ journal = {PLoS ONE},
+ volume = {7},
+ year = {2012},
+ pages = {e38234},
+ doi = {10.1371/journal.pone.0038234},
+}
+
+
+
+
+
+@ARTICLE{pham12,
+ author = {Quan Pham and Tanu Malik and Ian Foster and Roberto {Di Lauro} and Raffaele Montella},
+ title = {SOLE: Linking Research Papers with Science Objects},
+ journal = {Provenance and Annotation of Data and Processes (IPAW)},
+ year = {2012},
+ pages = {203},
+ doi = {10.1007/978-3-642-34222-6\_16},
+}
+
+
+
+
+
+@ARTICLE{davison12,
+ author = {Andrew Davison},
+ title = {Automated Capture of Experiment Context for Easier Reproducibility in Computational Research},
+ journal = {Computing in Science \& Engineering},
+ volume = {14},
+ year = {2012},
+ pages = {48},
+ doi = {10.1109/MCSE.2012.41},
+}
+
+
+
+
+
+@ARTICLE{zhao12,
+ author = {Jun Zhao and Jose Manuel Gomez-Perez and Khalid Belhajjame and Graham Klyne and Esteban Garcia-Cuesta and Aleix Garrido and Kristina Hettne and Marco Roos and David {De Roure} and Carole Goble},
+ title = {Why workflows break — Understanding and combating decay in Taverna workflows},
+ journal = {IEEE 8th International Conference on E-Science},
+ year = {2012},
+ pages = {1},
+ doi = {10.1109/eScience.2012.6404482},
+}
+
+
+
+
+@ARTICLE{vangorp11,
+ author = {Pieter {Van Gorp} and Steffen Mazanek},
+ title = {SHARE: a web portal for creating and sharing executable research},
+ journal = {Procedia Computer Science},
+ year = 2011,
+ volume = 4,
+ pages = {589},
+ doi = {10.1016/j.procs.2011.04.062},
+}
+
+
+
+
+
+@ARTICLE{hinsen11,
+ author = {{Hinsen}, Konrad},
+ title = {A data and code model for reproducible research and executable papers},
+ journal = {Procedia Computer Science},
+ year = 2011,
+ volume = 4,
+ pages = {579},
+ doi = {10.1016/j.procs.2011.04.061},
+}
+
+
+
+
+
+@ARTICLE{limare11,
+ author = {Nicolas Limare and Jean-Michel Morel},
+  title = {The IPOL Initiative: Publishing and Testing Algorithms on Line for Reproducible Research in Image Processing},
+ journal = {Procedia Computer Science},
+ year = 2011,
+ volume = 4,
+ pages = {716},
+ doi = {10.1016/j.procs.2011.04.075},
+}
+
+
+
+
+
+@ARTICLE{gavish11,
+ author = {Matan Gavish and David L. Donoho},
+ title = {A Universal Identifier for Computational Results},
+ journal = {Procedia Computer Science},
+ year = 2011,
+ volume = 4,
+ pages = {637},
+ doi = {10.1016/j.procs.2011.04.067},
+}
+
+
+
+
+@ARTICLE{gabriel11,
+ author = {Ann Gabriel and Rebecca Capone},
+ title = {Executable Paper Grand Challenge Workshop},
+ journal = {Procedia Computer Science},
+ volume = {4},
+ year = {2011},
+ pages = {577},
+ doi = {10.1016/j.procs.2011.04.060},
+}
+
+
+
+
+
+@ARTICLE{nowakowski11,
+ author = {Piotr Nowakowski and Eryk Ciepiela and Daniel Hare\.{z}lak and Joanna Kocot and Marek Kasztelnik and Tomasz Barty\'nski and Jan Meizner and Grzegorz Dyk and Maciej Malawski},
+ title = {The Collage Authoring Environment},
+ journal = {Procedia Computer Science},
+ volume = {4},
+ year = {2011},
+ pages = {608},
+  doi = {10.1016/j.procs.2011.04.064},
+}
+
+
+
+
+
+@ARTICLE{peng11,
+ author = {{Peng}, R.D.},
+ title = {Reproducible Research in Computational Science},
+ journal = {Science},
+ year = {2011},
+ month = dec,
+ volume = 334,
+ pages = {1226},
+ doi = {10.1126/science.1213847},
+}
+
+
+
+
+
+@ARTICLE{gil10,
+ author = {Yolanda Gil and Pedro A. González-Calero and Jihie Kim and Joshua Moody and Varun Ratnakar},
+ title = {A semantic framework for automatic generation of computational workflows using distributed data and component catalogues},
+ journal = {Journal of Experimental \& Theoretical Artificial Intelligence},
+ year = {2010},
+ volume = {23},
+ pages = {389},
+ doi = {10.1080/0952813X.2010.490962},
+}
+
+
+
+
+
+@ARTICLE{pence10,
+ author = {{Pence}, W.~D. and {Chiappetti}, L. and {Page}, C.~G. and {Shaw}, R.~A. and
+ {Stobie}, E.},
+ title = "{Definition of the Flexible Image Transport System (FITS), version 3.0}",
+ journal = {Astronomy and Astrophysics},
+ keywords = {instrumentation: miscellaneous, methods: miscellaneous, techniques: miscellaneous, reference systems, standards, astronomical databases: miscellaneous},
+ year = "2010",
+ month = "Dec",
+ volume = {524},
+ eid = {A42},
+ pages = {A42},
+ doi = {10.1051/0004-6361/201015362},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2010A\&A...524A..42P},
+ adsnote = {Provided by the SAO/NASA Astrophysics Data System}
+}
+
+
+
+
+
+@ARTICLE{goecks10,
+ author = {Jeremy Goecks and Anton Nekrutenko and James Taylor},
+ title = {Galaxy: a comprehensive approach for supporting accessible, reproducible, and transparent computational research in the life sciences},
+ journal = {Genome Biology},
+ year = {2010},
+ volume = {11},
+ pages = {R86},
+ doi = {10.1186/gb-2010-11-8-r86},
+}
+
+
+
+
+
+@ARTICLE{merali10,
+ author = {Zeeya Merali},
+ title = {Computational science: ...Error},
+ journal = {Nature},
+ year = 2010,
+ volume = 467,
+ pages = {775},
+ doi = {10.1038/467775a},
+}
+
+
+
+
+
+@ARTICLE{casadevall10,
+  author = {{Casadevall}, A. and {Fang}, F.C.},
+ title = {Reproducible Science},
+ journal = {Infection and Immunity},
+ year = 2010,
+ volume = 78,
+ pages = {4972},
+ doi = {10.1128/IAI.00908-10},
+}
+
+
+
+
+
+@ARTICLE{mesirov10,
+ author = {{Mesirov}, J.P.},
+ title = {Accessible Reproducible Research},
+ journal = {Science},
+ year = 2010,
+ volume = 327,
+ pages = {415},
+ doi = {10.1126/science.1179653},
+}
+
+
+
+
+
+@ARTICLE{cheney09,
+ author = {James Cheney and Laura Chiticariu and Wang-Chiew Tan},
+ title = {Provenance in Databases: Why, How, and Where},
+ journal = {Foundations and Trends in Databases},
+ year = {2009},
+ volume = {1},
+ pages = {379},
+ doi = {10.1561/1900000006},
+}
+
+
+
+
+
+@ARTICLE{ioannidis2009,
+  author = {John P. A. Ioannidis and David B. Allison and Catherine A. Ball and Issa Coulibaly and Xiangqin Cui and Aedín C. Culhane and Mario Falchi and Cesare Furlanello and Laurence Game and Giuseppe Jurman and Jon Mangion and Tapan Mehta and Michael Nitzberg and Grier P. Page and Enrico Petretto and Vera {van Noort}},
+ title = {Repeatability of published microarray gene expression analyses},
+ journal = {Nature Genetics},
+ year = {2009},
+ volume = {41},
+ pages = {149},
+ doi = {10.1038/ng.295},
+}
+
+
+
+
+
+@ARTICLE{fomel09,
+ author = {Sergey Fomel and Jon F. Claerbout},
+ title = {Reproducible Research},
+  journal = {Computing in Science \& Engineering},
+ year = {2009},
+ volume = {11},
+ pages = {5},
+ doi = {10.1109/MCSE.2009.14},
+}
+
+
+
+
+
+@ARTICLE{baggerly09,
+  author = {Keith A. Baggerly and Kevin R. Coombes},
+ title = {Deriving chemosensitivity from cell lines: Forensic bioinformatics and reproducible research in high-throughput biology},
+ journal = {The Annals of Applied Statistics},
+ year = {2009},
+ volume = {3},
+ pages = {1309},
+ doi = {10.1214/09-AOAS291},
+}
+
+
+
+
+
+@ARTICLE{scheidegger08,
+ author = {Carlos Scheidegger and David Koop and Emanuele Santos and Huy Vo and Steven Callahan and Juliana Freire and Cláudio Silva},
+ title = {Tackling the Provenance Challenge one layer at a time},
+  journal = {Concurrency and Computation: Practice and Experience},
+ year = {2008},
+ volume = {20},
+ pages = {473},
+ doi = {10.1002/cpe.1237},
+}
+
+
+
+
+
+@ARTICLE{moreau08,
+  author = {Moreau, Luc and Ludäscher, Bertram and Altintas, Ilkay and Barga, Roger S. and Bowers, Shawn and Callahan, Steven and Chin Jr., George and Clifford, Ben and Cohen, Shirley and Cohen-Boulakia, Sarah and Davidson, Susan and Deelman, Ewa and Digiampietri, Luciano and Foster, Ian and Freire, Juliana and Frew, James and Futrelle, Joe and Gibson, Tara and Gil, Yolanda and Goble, Carole and Golbeck, Jennifer and Groth, Paul and Holland, David A. and Jiang, Sheng and Kim, Jihie and Koop, David and Krenek, Ales and McPhillips, Timothy and Mehta, Gaurang and Miles, Simon and Metzger, Dominic and Munroe, Steve and Myers, Jim and Plale, Beth and Podhorszki, Norbert and Ratnakar, Varun and Santos, Emanuele and Scheidegger, Carlos and Schuchardt, Karen and Seltzer, Margo and Simmhan, Yogesh L. and Silva, Claudio and Slaughter, Peter and Stephan, Eric and Stevens, Robert and Turi, Daniele and Vo, Huy and Wilde, Mike and Zhao, Jun and Zhao, Yong},
+  title = {The First Provenance Challenge},
+  journal = {Concurrency and Computation: Practice and Experience},
+  year = {2008},
+  volume = {20},
+  pages = {409},
+ doi = {10.1002/cpe.1233},
+}
+
+
+
+
+
+@ARTICLE{matplotlib2007,
+  author = {Hunter, J. D.},
+  title = {Matplotlib: A 2D graphics environment},
+  journal = {Computing in Science \& Engineering},
+  year = 2007,
+  volume = {9},
+  number = {3},
+  pages = {90},
+  publisher = {IEEE Computer Society},
+  doi = {10.1109/MCSE.2007.55},
+}
+
+
+
+
+
+@ARTICLE{witten2007,
+ author = {Ben Witten and Bill Curry and Jeff Shragge},
+ title = {A New Build Environment for SEP},
+ journal = {Stanford Exploration Project},
+ year = {2007},
+ volume = {129},
+ pages = {247: \url{http://sepwww.stanford.edu/data/media/public/docs/sep129/ben1.pdf}},
+}
+
+
+
+
+
+@ARTICLE{miller06,
+ author = {Greg Miller},
+ title = {A Scientist's Nightmare: Software Problem Leads to Five Retractions},
+ journal = {Science},
+ year = {2006},
+ volume = {314},
+ pages = {1856},
+ doi = {10.1126/science.314.5807.1856},
+}
+
+
+
+
+
+@ARTICLE{reich06,
+ author = {Michael Reich and Ted Liefeld and Joshua Gould and Jim Lerner and Pablo Tamayo and Jill P Mesirov},
+ title = {GenePattern 2.0},
+ journal = {Nature Genetics},
+ year = {2006},
+ volume = {38},
+ pages = {500},
+ doi = {10.1038/ng0506-500},
+}
+
+
+
+
+
+@ARTICLE{ludascher05,
+  author = {Ludäscher, Bertram and Altintas, Ilkay and Berkley, Chad and Higgins, Dan and Jaeger, Efrat and Jones, Matthew and Lee, Edward A. and Tao, Jing and Zhao, Yang},
+  title = "{Scientific workflow management and the Kepler system}",
+  journal = {Concurrency and Computation: Practice and Experience},
+ year = {2006},
+ volume = {18},
+ pages = {1039},
+ doi = {10.1002/cpe.994},
+}
+
+
+
+
+
+@ARTICLE{ioannidis05,
+ author = {John P. A. Ioannidis},
+ title = {Why Most Published Research Findings Are False},
+  journal = {PLoS Medicine},
+ year = {2005},
+ volume = {2},
+ pages = {e124},
+ doi = {10.1371/journal.pmed.0020124},
+}
+
+
+
+
+
+@ARTICLE{bavoil05,
+ author = {Louis Bavoil and Steven P. Callahan and Patricia J. Crossno and Juliana Freire and Carlos E. Scheidegger and Cláudio T. Silva and Huy T. Vo},
+ title = "{VisTrails: Enabling Interactive Multiple-View Visualizations}",
+ journal = {VIS 05. IEEE Visualization},
+ year = {2005},
+ pages = {135},
+ doi = {10.1109/VISUAL.2005.1532788},
+}
+
+
+
+
+
+@ARTICLE{dolstra04,
+ author = {{Dolstra}, Eelco and {de Jonge}, Merijn and {Visser}, Eelco},
+ title = {Nix: A Safe and Policy-Free System for Software Deployment},
+ journal = {Large Installation System Administration Conference},
+ year = {2004},
+ volume = {18},
+ pages = {79, PDF in \href{https://www.usenix.org/legacy/events/lisa04/tech/full\_papers/dolstra/dolstra.pdf}{LISA04 webpage}},
+}
+
+
+
+
+
+@ARTICLE{oinn04,
+ author = {Oinn, Tom and Addis, Matthew and Ferris, Justin and Marvin, Darren and Senger, Martin and Greenwood, Mark and Carver, Tim and Glover, Kevin and Pocock, Matthew R. and Wipat, Anil and Li, Peter},
+ title = {Taverna: a tool for the composition and enactment of bioinformatics workflows},
+ journal = {Bioinformatics},
+ year = {2004},
+ volume = {20},
+ pages = {3045},
+ doi = {10.1093/bioinformatics/bth361},
+}
+
+
+
+
+
+@ARTICLE{schwab2000,
+ author = {Matthias Schwab and Martin Karrenbach and Jon F. Claerbout},
+ title = {Making scientific computations reproducible},
+ journal = {Computing in Science \& Engineering},
+ year = {2000},
+ volume = {2},
+ pages = {61},
+ doi = {10.1109/5992.881708},
+}
+
+
+
+
+
+@ARTICLE{buckheit1995,
+ author = {Jonathan B. Buckheit and David L. Donoho},
+ title = "{WaveLab and Reproducible Research}",
+ journal = {Wavelets and Statistics},
+ year = {1995},
+ volume = {1},
+ pages = {55},
+ doi = {10.1007/978-1-4612-2544-7\_5},
+}
+
+
+
+
+
+@ARTICLE{claerbout1992,
+ author = {Jon F. Claerbout and Martin Karrenbach},
+ title = {Electronic documents give reproducible research a new meaning},
+ journal = {SEG Technical Program Expanded Abstracts},
+ year = {1992},
+ volume = {1},
+ pages = {601-604},
+ doi = {10.1190/1.1822162},
+}
+
+
+
+
+
+@ARTICLE{eker03,
+  author = {Johan Eker and J\"orn W. Janneck and Edward A. Lee and Jie Liu and Xiaojun Liu and Jozsef Ludvig and Sonia Sachs and Yuhong Xiong and Stephen Neuendorffer},
+ title = "{Taming heterogeneity - the Ptolemy approach}",
+ journal = {Proceedings of the IEEE},
+ year = {2003},
+ volume = {91},
+ pages = {127},
+ doi = {10.1109/JPROC.2002.805829},
+}
+
+
+
+
+
+@ARTICLE{stevens03,
+  author = {Robert Stevens and Kevin Glover and Chris Greenhalgh and Claire Jennings and Simon Pearce and Peter Li and Milena Radenkovic and Anil Wipat},
+  title = {Performing in silico Experiments on the Grid: A Users' Perspective},
+ journal = {Proceedings of UK e-Science All Hands Meeting},
+ year = {2003},
+ pages = {43},
+}
+
+
+
+
+
+@ARTICLE{knuth84,
+ author = {Donald Knuth},
+ title = {Literate Programming},
+ journal = {The Computer Journal},
+ year = {1984},
+ volume = {27},
+ pages = {97},
+ doi = {10.1093/comjnl/27.2.97},
+}
+
+
+
+
+
+@ARTICLE{somogyi87,
+ author = {Zoltan Somogyi},
+ title = {Cake: a fifth generation version of make},
+ journal = {University of Melbourne},
+ year = {1987},
+ pages = {\href{https://pdfs.semanticscholar.org/3e97/3b5c9af7763d70cdfaabdd1b96b3b75b5483.pdf}{Corpus ID: 107669553}},
+}
+
+
+
+
+
+@ARTICLE{feldman79,
+ author = {Stuart I. Feldman},
+ title = {Make -- a program for maintaining computer programs},
+ journal = {Journal of Software: Practice and Experience},
+ volume = {9},
+ pages = {255},
+ year = {1979},
+ doi = {10.1002/spe.4380090402},
+}
+
+
+
+
+
+@ARTICLE{mcilroy78,
+ author = {M. D. McIlroy and E. N. Pinson and B. A. Tague},
+ title = "{UNIX Time-Sharing System: Forward}",
+ journal = {\doihref{https://archive.org/details/bstj57-6-1899/mode/2up}{Bell System Technical Journal}},
+ year = {1978},
+ volume = {57},
+ pages = {6, ark:/13960/t0gt6xf72},
+ doi = {},
+}
+
+
+
+
+
+@ARTICLE{anscombe73,
+ author = {{Anscombe}, F.J.},
+ title = {Graphs in Statistical Analysis},
+ journal = {The American Statistician},
+ year = {1973},
+ volume = {27},
+ pages = {17},
+ doi = {10.1080/00031305.1973.10478966},
+}
+
+
+
+
+
+@ARTICLE{roberts69,
+ author = {{Roberts}, K.V.},
+ title = {The publication of scientific fortran programs},
+ journal = {Computer Physics Communications},
+ year = {1969},
+ volume = {1},
+ pages = {1},
+ doi = {10.1016/0010-4655(69)90011-3},
+}
diff --git a/tex/src/supplement.tex b/tex/src/supplement.tex
new file mode 100644
index 0000000..362f304
--- /dev/null
+++ b/tex/src/supplement.tex
@@ -0,0 +1,99 @@
+%% The top-level file to build the separate supplement that contains the
+%% appendices (to be published as a separate PDF file).
+%
+%% Copyright (C) 2020-2022 Mohammad Akhlaghi <mohammad@akhlaghi.org>
+%% Copyright (C) 2020-2022 Boudewijn F. Roukema <boud@astro.uni.torun.pl>
+%
+%% This file is free software: you can redistribute it and/or modify it
+%% under the terms of the GNU General Public License as published by the
+%% Free Software Foundation, either version 3 of the License, or (at your
+%% option) any later version.
+%
+%% This file is distributed in the hope that it will be useful, but WITHOUT
+%% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+%% FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+%% for more details. See <http://www.gnu.org/licenses/>.
+\documentclass[journal]{IEEEtran}
+
+%% This is a convenience variable if you are using PGFPlots to build plots
+%% within LaTeX. If you want to import PDF files for figures directly, you
+%% can use the standard `\includegraphics' command. See the definition of
+%% `\includetikz' in `tex/src/preamble-pgfplots.tex' for where the files are
+%% assumed to be if you use `\includetikz' when `\makepdf' is not defined.
+\newcommand{\makepdf}{}
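+%
+%% As a minimal sketch of the two modes (`my-figure' is a hypothetical
+%% name here; the exact signature of `\includetikz' is defined in
+%% `tex/src/preamble-pgfplots.tex'), a figure would be called as:
+%%
+%%   \includetikz{my-figure}{width=\linewidth}
+%%
+%% With `\makepdf' defined, the plot is (re)built from its PGFPlots
+%% source; without it, the previously built PDF of the same figure is
+%% imported as an ordinary graphic.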
+
+%% VALUES FROM ANALYSIS (NUMBERS AND STRINGS): this file is automatically
+%% generated at the end of the processing and includes LaTeX macros
+%% (defined with '\newcommand') for various processing outputs to be used
+%% within the paper.
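+%% As a sketch, the generated file contains definitions of the form
+%% (hypothetical values; the real ones are written by the analysis):
+%%   \newcommand{\projectversion}{1a2b3c4}
+%%   \newcommand{\projectzenodoid}{1234567}
+%% These macros are expanded below in the page headers and the abstract.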
+\input{tex/build/macros/project.tex}
+\input{tex/src/preamble-maneage.tex}
+
+%% Import the other necessary TeX files for this particular project.
+\input{tex/src/preamble-project.tex}
+
+%% Title and author names.
+\title{\textsc{\LARGE Supplemental appendices for paper:}\\ \emph{\projecttitle}}
+\author{
+ Mohammad Akhlaghi,
+ Ra\'ul Infante-Sainz,
+ Boudewijn F. Roukema,
+ Mohammadreza Khellat,\\
+ David Valls-Gabaud,
+ Roberto Baena-Gall\'e\\
+ \footnotesize{Manuscript received June 5th, 2020; accepted April 7th, 2021; first published by CiSE April 13th, 2021}
+}
+
+%% The paper headers
+\markboth{Computing in Science and Engineering, Vol. 23, Issue 3, Pages 82--91, 2021: \href{https://doi.org/10.1109/MCSE.2021.3072860}{DOI:10.1109/MCSE.2021.3072860}, arXiv:2006.03018, \href{https://doi.org/10.5281/zenodo.\projectzenodoid}{zenodo.\projectzenodoid}}%
+{Akhlaghi \MakeLowercase{\textit{et al.}}: \projecttitle}
+
+
+
+
+
+
+
+
+
+
+%% Start the paper.
+\begin{document}
+
+% make the title area
+\maketitle
+
+% For peer review papers, you can put extra information on the cover
+% page as needed:
+% \ifCLASSOPTIONpeerreview
+% \begin{center} \bfseries EDICS Category: 3-BBND \end{center}
+% \fi
+%
+% For peer review papers, this IEEEtran command inserts a page break and
+% creates the second title. It will be ignored for other modes.
+\IEEEpeerreviewmaketitle
+
+%% A short appendix describing this file.
+\begin{abstract}
+ This supplement contains appendices to the main body of Akhlaghi et al., published in CiSE (\href{https://doi.org/10.1109/MCSE.2021.3072860}{DOI:10.1109/MCSE.2021.3072860}, available as a preprint at \href{https://arxiv.org/abs/\projectarxivid}{\texttt{arXiv:\projectarxivid}} or \href{https://doi.org/10.5281/zenodo.\projectzenodoid}{\texttt{zenodo.\projectzenodoid}}).
+  In the paper's main body we introduced criteria for the longevity of reproducible workflow solutions and presented a proof of concept that implements them, called Maneage (\emph{Man}aging data lin\emph{eage}).
+  This supplement provides an in-depth literature review of previous methods, comparing them and their lower-level tools in detail against our criteria and against the proof of concept presented in this work.
+ Appendix \ref{appendix:existingtools} reviews the low-level tools that are used by many reproducible workflow solutions (including our proof of concept).
+ Appendix \ref{appendix:existingsolutions} reviews many solutions that attempt(ed) reproducible management of workflows (including solutions that have stopped development, or worse, are no longer available online).
+ To highlight the evolving landscape and its effects on longevity, Appendix \ref{appendix:existingsolutions} discusses the solutions in historical order.
+  Finally, Appendix \ref{appendix:software} gives an automatically generated, exhaustive list of the software (with the exact versions) used to build this project.
+ This supplement was built from project commit \projectversion.
+\end{abstract}
+
+%% Import the appendices.
+\appendices
+\input{tex/src/appendix-existing-tools.tex}
+\input{tex/src/appendix-existing-solutions.tex}
+\input{tex/src/appendix-used-software.tex}
+
+%% Bibliography.
+\bibliographystyle{IEEEtran_openaccess}
+\bibliography{IEEEabrv,references}
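+%
+%% For example, a hypothetical `\cite{schwab2000}' within the appendices
+%% would be resolved against the `schwab2000' entry of
+%% `tex/src/references.tex' and formatted with the open-access variant of
+%% the IEEEtran bibliography style.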
+
+%% End the paper.
+\end{document}