-rw-r--r--  README-hacking.md                                           127
-rw-r--r--  README.md                                                    25
-rwxr-xr-x  project                                                     327
-rw-r--r--  reproduce/analysis/make/initialize.mk                        45
-rw-r--r--  reproduce/analysis/make/paper.mk                             20
-rw-r--r--  reproduce/analysis/make/top-make.mk                           6
-rw-r--r--  reproduce/analysis/make/top-prepare.mk                        6
-rw-r--r--  reproduce/software/config/LOCAL.conf.in                      29
-rw-r--r--  reproduce/software/config/versions.conf                      19
-rw-r--r--  reproduce/software/make/basic.mk                             13
-rw-r--r--  reproduce/software/make/high-level.mk                        10
-rw-r--r--  reproduce/software/shell/apptainer-README.md (renamed from reproduce/software/containers/README-apptainer.md)   38
-rwxr-xr-x  reproduce/software/shell/apptainer.sh (renamed from reproduce/software/containers/apptainer.sh)                121
-rwxr-xr-x  reproduce/software/shell/configure.sh                      1750
-rw-r--r--  reproduce/software/shell/docker-README.md (renamed from reproduce/software/containers/README-docker.md)         43
-rwxr-xr-x  reproduce/software/shell/docker.sh (renamed from reproduce/software/containers/docker.sh)                      129
-rwxr-xr-x  reproduce/software/shell/pre-make-build.sh                   57
17 files changed, 1583 insertions, 1182 deletions
diff --git a/README-hacking.md b/README-hacking.md
index ad44d3c..fa14795 100644
--- a/README-hacking.md
+++ b/README-hacking.md
@@ -180,14 +180,71 @@ evolving rapidly, so some details will differ between the different
versions. The more recent papers will tend to be the most useful as good
working examples.
+ - Saremi et
+ al. [2025](https://ui.adsabs.harvard.edu/abs/2025arXiv250802780S),
+ Astronomy and Astrophysics (accepted): The project's version controlled
+   source is on [GitLab](https://gitlab.com/nasim-projects/pipeline),
+ necessary software, outputs and backup of history are available at
+ [zenodo.16152699](https://doi.org/10.5281/zenodo.16152699); and the
+ archived git history is available at
+ [swh:1:dir:b3657cfb6053fd976695bd63c15cb99e5095648a](https://archive.softwareheritage.org/swh:1:dir:b3657cfb6053fd976695bd63c15cb99e5095648a;origin=https://gitlab.com/nasim-projects/pipeline;visit=swh:1:snp:ab7c6f0b9999f42d77154103c1bc082fa23b325c;anchor=swh:1:rev:afeb282c01983cba2a11eb4b2f25d5a40d35c164).
+
+ - Eskandarlou & Akhlaghi
+ [2024](https://ui.adsabs.harvard.edu/abs/2024RNAAS...8..168E), Research
+   Notes of the American Astronomical Society (RNAAS), Volume 8, Issue 6,
+ id.168. The project's version controlled source is on
+ [Codeberg](https://codeberg.org/gnuastro/papers) (the `polar-plot`
+ branch). Necessary software, outputs and backup of history are available
+ at [zenodo.11403643](https://doi.org/10.5281/zenodo.11403643); and the
+ archived git history is available at
+ [swh:1:dir:4e09bf85f9f87336fa55920bf67e7bcf6d58bbd5](https://archive.softwareheritage.org/swh:1:dir:4e09bf85f9f87336fa55920bf67e7bcf6d58bbd5;origin=https://codeberg.org/gnuastro/papers;visit=swh:1:snp:557ee1a90de465659659ecc46df0c5ce29d0bb61;anchor=swh:1:rev:375e12e52080006be6a28e10980e79ef54d13d1d).
+
+ - Infante-Sainz et
+ al. [2024](https://ui.adsabs.harvard.edu/abs/2024RNAAS...8...22I),
+   Research Notes of the American Astronomical Society (RNAAS), Volume 8, Issue
+ 1, id.22. The project's version controlled source is on
+ [Codeberg](https://codeberg.org/gnuastro/papers) (the `radial-profile`
+ branch). Necessary software, outputs and backup of history are available
+ at [zenodo.10124582](https://doi.org/10.5281/zenodo.10124582); and the
+ archived git history is available at
+ [swh:1:dir:d5029e066916cb64f0d95d20eb88294acc78b2b1](https://archive.softwareheritage.org/swh:1:dir:d5029e066916cb64f0d95d20eb88294acc78b2b1;origin=https://codeberg.org/gnuastro/papers;visit=swh:1:snp:b065324c2ef3b48bc26e8f30e48102a1abd2052f;anchor=swh:1:rev:61764447b16da44538e5ddbf7fb69937ba138e81).
+
+ - Infante-Sainz & Akhlaghi
+ [2024](https://ui.adsabs.harvard.edu/abs/2024RNAAS...8...10I), Research
+   Notes of the American Astronomical Society (RNAAS), Volume 8, Issue 1,
+ id.10. The project's version controlled source is on
+ [Codeberg](https://codeberg.org/gnuastro/papers) (the `color-faint-gray`
+ branch). Necessary software, outputs and backup of history are available
+ at [zenodo.10058165](https://doi.org/10.5281/zenodo.10058165); and the
+ archived git history is available at
+ [swh:1:dir:1064a48d4bb58d6684c3df33c6633a04d4141d2d](https://archive.softwareheritage.org/swh:1:dir:1064a48d4bb58d6684c3df33c6633a04d4141d2d;origin=https://codeberg.org/gnuastro/papers;visit=swh:1:snp:a083ff647c571f895d1ccc9f7432fa1b9a1d03a8;anchor=swh:1:rev:ff77b619daa50b05ddd83206d979d1f8a53d040b).
+
+ - Eskandarlou et
+ al. [2023](https://ui.adsabs.harvard.edu/abs/2023RNAAS...7..269E),
+   Research Notes of the American Astronomical Society (RNAAS), Volume 7, Issue
+ 12, id.269. The project's version controlled source is on
+ [Codeberg](https://codeberg.org/gnuastro/papers) (the `zeropoint`
+ branch). Necessary software, outputs and backup of history are available
+ at [zenodo.10256845](https://doi.org/10.5281/zenodo.10256845); and the
+ archived git history is available at
+ [swh:1:dir:8b2d1f63be96de3de03aa3e2bb68fa7fa52df56f](https://archive.softwareheritage.org/swh:1:dir:8b2d1f63be96de3de03aa3e2bb68fa7fa52df56f;origin=https://codeberg.org/gnuastro/papers;visit=swh:1:snp:e37e226bab517eef24d854467682b2fcf5d7dc32;anchor=swh:1:rev:ea682783d83707c0e1d114a5de74a100be9f545d).
+
+ - Akhlaghi [2023](https://ui.adsabs.harvard.edu/abs/2023RNAAS...7..211A),
+   Research Notes of the American Astronomical Society (RNAAS), Volume 7, Issue
+ 10, id.211. The project's version controlled source is on
+ [Codeberg](https://codeberg.org/gnuastro/papers) (the
+ `pointing-simulate` branch).
+
- Borkowska & Roukema
- ([2022](https://ui.adsabs.harvard.edu/abs/2021arXiv211214174B), MNRAS
- Submitted, arXiv:2112.14174): The live version of the controlled source
- is [at Codeberg](https://codeberg.org/boud/gevcurvtest); the main input
+ ([2022](https://ui.adsabs.harvard.edu/abs/2022CQGra..39u5007B),
+ Classical and Quantum Gravity, arXiv:2112.14174): The live version of
+ the controlled source is [at
+ Codeberg](https://codeberg.org/boud/gevcurvtest); the main input
dataset, a software snapshot, the software tarballs, the project outputs
and editing history are available at
[zenodo.5806027](https://doi.org/10.5281/zenodo.5806027); and the
- archived git history is available at [swh:1:rev:54398b720ddbac269ede30bf1e27fe27f07567f7](https://archive.softwareheritage.org/browse/revision/54398b720ddbac269ede30bf1e27fe27f07567f7).
+ archived git history is available at
+ [swh:1:rev:54398b720ddbac269ede30bf1e27fe27f07567f7](https://archive.softwareheritage.org/browse/revision/54398b720ddbac269ede30bf1e27fe27f07567f7).
- Peper & Roukema
([2021](https://ui.adsabs.harvard.edu/abs/2021MNRAS.505.1223P), MNRAS,
@@ -580,7 +637,47 @@ First custom commit
$ pwd # Just to confirm where you are.
```
- 2. **Prepare to build project**: The `./project configure` command of the
+ 2. The final job of Maneage is to create your paper's PDF. By default it
+ uses a custom LaTeX style that resembles that of the Astrophysical
+      Journal (because the precursor of Maneage was for [Akhlaghi & Ichikawa
+ 2015](https://ui.adsabs.harvard.edu/abs/2015ApJS..220....1A)). The
+      journal you plan to submit your paper to will have its own separate
+ style. So it is best that you start your project by writing in the
+ desired style. We have already customized Maneage for the official
+ styles of some journals. To find them, run `git branch -r | grep
+ journal`. If your planned journal is one of them, you can take the
+ following steps to start your project based on that journal's style. If
+      it is not among them, you can ignore this step for now and customize the
+      style later (you can model yours on these branches). In the commands
+ below, we'll assume you want to prepare for the Astronomy and
+ Astrophysics journal (A&A).
+
+ ```shell
+ $ git checkout -b journal origin-maneage/journal-a-and-a
+ $ git log -1 --oneline | awk '{print $1}' # To keep the commit hash
+ $ git rebase -i main # See description below
+ ```
+
+ In the first text editor that opens after the last command, change all
+ (except the first) `pick`s into `squash`, then save the change and
+      close the editor. If there is no conflict, the second editor will
+ be pre-filled with all the commit messages in that branch. You do not
+ need those, so you can delete everything and write a commit message
+ like the following: `A&A journal (commit XXXXX of Maneage's
+ journal-a-and-a branch)`. Just replace the `XXXXX` with the output of
+      the second command above. Storing the commit hash here is important:
+      it lets you later check whether that branch has been updated since.
+      After completing the git rebase operation (last command above), run
+      the following commands to put the new commit in your `main` branch
+      (and continue working on top of it).
+
+ ```shell
+ $ git checkout main
+ $ git merge journal
+ $ git branch -D journal
+ ```
+
+ 3. **Prepare to build project**: The `./project configure` command of the
next step will build the different software packages within the
"build" directory (that you will specify). Nothing else on your system
      will be touched. However, since it takes a long time, it is useful to see
@@ -600,7 +697,7 @@ First custom commit
$ ./project --check-config
```
- 3. **Test Maneage**: Before making any changes, it is important to test it
+ 4. **Test Maneage**: Before making any changes, it is important to test it
and see if everything works properly with the commands below. If there
is any problem in the `./project configure` or `./project make` steps,
please contact us to fix the problem before continuing. Since the
@@ -618,7 +715,7 @@ First custom commit
# Open 'paper.pdf' and see if everything is ok.
```
- 4. **Setup the remote**: You can use any [hosting
+ 5. **Setup the remote**: You can use any [hosting
facility](https://en.wikipedia.org/wiki/Comparison_of_source_code_hosting_facilities)
that supports Git to keep an online copy of your project's version
controlled history. We recommend [GitLab](https://gitlab.com) because
@@ -646,7 +743,7 @@ First custom commit
git push origin maneage # Push 'maneage' branch to 'origin' (no tracking).
```
- 5. **Title**, **short description** and **author**: You can start adding
+ 6. **Title**, **short description** and **author**: You can start adding
your name (with your possible coauthors) and tentative abstract in
`paper.tex`. You should see the relevant place in the preamble (prior
      to `\begin{document}`). Just note that some core project metadata like
@@ -659,7 +756,7 @@ First custom commit
      specific journal's style), please feel free to use your own methods
after finishing this checklist and doing your first commit.
- 6. **Delete dummy parts**: Maneage contains some parts that are only for
+ 7. **Delete dummy parts**: Maneage contains some parts that are only for
the initial/test run, mainly as a demonstration of important steps,
      which you can use as a reference for your own project. But they are
      not for any real analysis, so you should remove these parts as
@@ -712,7 +809,7 @@ First custom commit
$ ./project make
```
- 7. **Ignore changes in some Maneage files**: One of the main advantages of
+ 8. **Ignore changes in some Maneage files**: One of the main advantages of
      Maneage is that you can later update your infrastructure by merging
your `main` branch with the `maneage` branch. This is good for many
low-level features that you will likely never modify yourself. But it
@@ -744,7 +841,7 @@ First custom commit
$ git add .gitattributes
```
- 8. **Copyright and License notice**: It is necessary that _all_ the
+ 9. **Copyright and License notice**: It is necessary that _all_ the
"copyright-able" files in your project (those larger than 10 lines)
have a copyright and license notice. Please take a moment to look at
several existing files to see a few examples. The copyright notice is
@@ -766,7 +863,7 @@ First custom commit
Copyright (C) 2025-2025 YOUR NAME <YOUR@EMAIL.ADDRESS>
```
- 9. **Configure Git for fist time**: If this is the first time you are
+ 10. **Configure Git for the first time**: If this is the first time you are
running Git on this system, then you have to configure it with some
basic information in order to have essential information in the commit
messages (ignore this step if you have already done it). Git will
@@ -780,7 +877,7 @@ First custom commit
$ git config --global core.editor nano
```
- 10. **Your first commit**: You have already made some small and basic
+ 11. **Your first commit**: You have already made some small and basic
changes in the steps above and you are in your project's `main`
branch. So, you can officially make your first commit in your
project's history and push it. But before that, you need to make sure
@@ -799,7 +896,7 @@ First custom commit
$ git push # Push your commit to your remote.
```
- 11. **Read the publication checklist**: The publication checklist below is
+ 12. **Read the publication checklist**: The publication checklist below is
very similar to this one, but for the final phase of your project. For
now, you don't have to do any of its steps, but reading it will give
you good insight into the later stages of your project. If you already
@@ -809,7 +906,7 @@ First custom commit
Making it much easier to complete that checklist when you are ready
for submission.
- 12. **Start your exciting research**: You are now ready to add flesh and
+ 13. **Start your exciting research**: You are now ready to add flesh and
blood to this raw skeleton by further modifying and adding your
exciting research steps. You can use the "published works" section in
the introduction (above) as some fully working models to learn
diff --git a/README.md b/README.md
index 79106ec..5fbd320 100644
--- a/README.md
+++ b/README.md
@@ -332,25 +332,30 @@ disable internet after the configuration phase. Note that only the
necessary TeXLive packages are installed (~350 MB), not the full TeXLive
collection!
-The container technologies that Maneage has been tested on an documentation
-exists in this project (with the `reproduce/software/containers` directory)
-are listed below. See the respective `README-*.md` file in that directory
-for the details:
+The container technologies that Maneage has a high-level interface for
+(in the `reproduce/software/shell` directory) are listed below. Each has
+a dedicated shell script in that directory with an (almost) identical
+interface. For more details, see the respective `*-README.md` file in
+that directory, run your desired script with `--help`, or read the
+comments at the top of the file.
- [Apptainer](https://apptainer.org): useful in high performance
computing (HPC) facilities (where you do not have root
permissions). Apptainer is fully free and open source software.
Apptainer containers can only be created and used on GNU/Linux
- operating systems, but are stored as files (easy to manage).
+ operating systems, but are stored as a single file (very easy to
+ manage).
- [Docker](https://www.docker.com): requires root access, but useful on
virtual private servers (VPSs). Docker images are stored and managed by
a root-level daemon, so you can only manage them through its own
- interface. A docker container build on a GNU/Linux host can also be
- executed on Windows or macOS. However, while the Docker engine and its
- command-line interface on GNU/Linux are free and open source software,
- its desktop application (with a GUI and components necessary for
- Windows or macOS) is not (requires payment for large companies).
+  interface (by default, making every user's containers visible and
+  accessible to all other users of a system). A Docker container built on a
+ GNU/Linux host can also be executed on Windows or macOS. However, while
+ the Docker engine and its command-line interface on GNU/Linux are free
+ and open source software, its desktop application (with a GUI and
+ components necessary for Windows or macOS) is not (requires payment for
+ large companies).
diff --git a/project b/project
index c30bfbf..f2986fb 100755
--- a/project
+++ b/project
@@ -33,6 +33,7 @@ set -e
jobs=0 # 0 is for the default for the 'configure.sh' script.
group=
debug=
+quiet=0
timing=0
host_cc=0
offline=
@@ -43,6 +44,7 @@ keep_going=
check_config=
make_targets=
software_dir=
+pauseformsg=1
clean_texdir=0
prepare_redo=0
highlightnew=0
@@ -107,16 +109,18 @@ Project 'make' special tagets
With the options below you can modify the default behavior.
Configure options:
+ --all-highlevel Build all high-level software (for development).
-b, --build-dir=STR Top directory to build the project in.
- -e, --existing-conf Use (possibly existing) local configuration.
- --host-cc Use host system's C compiler, don't build GCC.
- -i, --input-dir=STR Directory containing input datasets (optional).
- -s, --software-dir=STR Directory containing necessary software tarballs.
--check-config During configuration, show what is being built.
--clean-texdir Remove possibly existing build-time subdirectories
under the project's 'tex/' directory (can happen
when source is from arXiv for example).
- --all-highlevel Build all high-level software (for development).
+ -e, --existing-conf Use (possibly existing) local configuration.
+ -i, --input-dir=STR Directory containing input datasets (optional).
+ --host-cc Use host system's C compiler, don't build GCC.
+ --quiet Do not print basic info messages (with '-e').
+ --no-pause Do not sleep/pause after basic info messages.
+ -s, --software-dir=STR Directory containing necessary software tarballs.
Configure and Make options:
-d, --debug[=FLAGS] In configure: use -j1, no -k, and no Zenodo check.
@@ -180,67 +184,71 @@ do
shell) func_operation_set $1; shift;;
# Configure options:
- -b|--build-dir) build_dir="$2"; check_v "$1" "$build_dir"; shift;shift;;
- -b=*|--build-dir=*) build_dir="${1#*=}"; check_v "$1" "$build_dir"; shift;;
- -b*) build_dir=$(echo "$1" | sed -e's/-b//'); check_v "$1" "$build_dir"; shift;;
- -e|--existing-conf) existing_conf=1; shift;;
+ -e|--existing-conf) existing_conf=1; shift;;
-e*|--existing-conf=*) on_off_option_error --existing-conf -e;;
- --host-cc) host_cc=1; shift;;
+ --host-cc) host_cc=1; shift;;
--host-cc=*) on_off_option_error --host-cc;;
- --offline) offline=1; shift;;
+ --offline) offline=1; shift;;
--offline=*) on_off_option_error --offline;;
- -i|--input-dir) input_dir="$2"; check_v "$1" "$input_dir"; shift;shift;;
- -i=*|--input-dir=*) input_dir="${1#*=}"; check_v "$1" "$input_dir"; shift;;
- -i*) input_dir=$(echo "$1" | sed -e's/-i//'); check_v "$1" "$input_dir"; shift;;
- -s|--software-dir) software_dir="$2"; check_v "$1" "$software_dir"; shift;shift;;
- -s=*|--software-dir=*) software_dir="${1#*=}"; check_v "$1" "$software_dir"; shift;;
- -s*) software_dir=$(echo "$1" | sed -e's/-s//'); check_v "$1" "$software_dir"; shift;;
- --check-config) check_config=1; shift;;
+ -i|--input-dir) input_dir="$2"; check_v "$1" "$input_dir"; shift;shift;;
+ -i=*|--input-dir=*) input_dir="${1#*=}"; check_v "$1" "$input_dir"; shift;;
+ -i*) input_dir=$(echo "$1" | sed -e's/-i//'); check_v "$1" "$input_dir"; shift;;
+ -s|--software-dir) software_dir="$2"; check_v "$1" "$software_dir"; shift;shift;;
+ -s=*|--software-dir=*) software_dir="${1#*=}"; check_v "$1" "$software_dir"; shift;;
+ -s*) software_dir=$(echo "$1" | sed -e's/-s//'); check_v "$1" "$software_dir"; shift;;
+ --check-config) check_config=1; shift;;
--check-config=*) on_off_option_error --check-config;;
- --clean-texdir) clean_texdir=1; shift;;
+ --clean-texdir) clean_texdir=1; shift;;
--clean-texdir=*) on_off_option_error --clean-texdir;;
- --all-highlevel) all_highlevel=1; shift;;
+ --all-highlevel) all_highlevel=1; shift;;
--all-highlevel=*) on_off_option_error --all-highlevel;;
+ --no-pause) pauseformsg=0; shift;;
+ --no-pause=*) on_off_option_error --no-pause;;
+ --quiet) quiet=1; shift;;
+ --quiet=*) on_off_option_error --quiet;;
# Configure and Make options:
- -g|--group) group="$2"; check_v group "$group"; shift;shift;;
- -g=*|--group=*) group="${1#*=}"; check_v group "$group"; shift;;
- -g*) group=$(echo "$1" | sed -e's/-g//'); check_v group "$group"; shift;;
- -j|--jobs) jobs="$2"; check_v jobs "$jobs"; shift;shift;;
- -j=*|--jobs=*) jobs="${1#*=}"; check_v jobs "$jobs"; shift;;
- -j*) jobs=$(echo "$1" | sed -e's/-j//'); check_v jobs "$jobs"; shift;;
- -k|--keep-going) keep_going="--keep-going"; shift;;
- -k=*|--keep-going=*) on_off_option_error --keep-going -k;;
- -k*) on_off_option_error --keep-going -k;;
- -'?'|--help) print_help; exit 0;;
- -'?'*|--help=*) on_off_option_error --help -?;;
+ -b|--build-dir) build_dir="$2"; check_v "$1" "$build_dir";shift;shift;;
+ -b=*|--build-dir=*) build_dir="${1#*=}"; check_v "$1" "$build_dir";shift;;
+ -b*) build_dir=$(echo "$1" | sed -e's/-b//'); check_v "$1" "$build_dir";shift;;
+ -g|--group) group="$2"; check_v group "$group"; shift;shift;;
+ -g=*|--group=*) group="${1#*=}"; check_v group "$group"; shift;;
+ -g*) group=$(echo "$1" | sed -e's/-g//'); check_v group "$group"; shift;;
+ -j|--jobs) jobs="$2"; check_v jobs "$jobs"; shift;shift;;
+ -j=*|--jobs=*) jobs="${1#*=}"; check_v jobs "$jobs"; shift;;
+ -j*) jobs=$(echo "$1" | sed -e's/-j//'); check_v jobs "$jobs"; shift;;
+ -k|--keep-going) keep_going="--keep-going"; shift;;
+ -k=*|--keep-going=*) on_off_option_error --keep-going -k;;
+ -k*) on_off_option_error --keep-going -k;;
+ -'?'|--help) print_help; exit 0;;
+ -'?'*|--help=*) on_off_option_error --help -?;;
# Make options (analysis):
- -p|--prepare-redo) prepare_redo=1; shift;;
- -p=*|--prepare-redo=*) on_off_option_error --prepare-redo; shift;;
- -t|--timing) timing=1; shift;;
- -t=*|--timing=*) on_off_option_error --timing; shift;;
+ -p|--prepare-redo) prepare_redo=1; shift;;
+ -p=*|--prepare-redo=*) on_off_option_error --prepare-redo; shift;;
+ -t|--timing) timing=1; shift;;
+ -t=*|--timing=*) on_off_option_error --timing; shift;;
# Make options (final PDF):
- --refresh-bib) [ -f tex/src/references.tex ] && touch tex/src/references.tex; shift;;
- --highlight-all) highlightnew=1; highlightnotes=1; shift;;
- --highlight-all=*) on_off_option_error --highlight-new;;
- --highlight-new) highlightnew=1; shift;;
- --highlight-new=*) on_off_option_error --highlight-new;;
- --highlight-notes) highlightnotes=1; shift;;
- --highlight-notes=*) on_off_option_error --highlight-notes;;
- -d|--debug) if [ x$operation = x ]; then
- echo "Please set the operation before calling '--debug'"; exit 1
- elif [ x$operation = xconfigure ]; then debug=a; shift;
- elif [ x$operation = xmake ]; then
- if [ x"$2" = x ]; then
- echo "In make-mode, '--debug' needs a value; see GNU Make manual"; exit 1
- else debug="$2"; check_v debug "$debug"; shift;shift; fi
- else
- echo "Operation '$operation' not recognized, please use 'configure' or 'make'"
- fi;;
- -d=*|--debug=*) debug="${1#*=}"; check_v debug "$debug"; shift;;
- -d*) debug=$(echo "$1" | sed -e's/-d//'); check_v debug "$debug"; shift;;
+ --refresh-bib) [ -f tex/src/references.tex ] && touch tex/src/references.tex; shift;;
+ --highlight-all) highlightnew=1; highlightnotes=1; shift;;
+ --highlight-all=*) on_off_option_error --highlight-new;;
+ --highlight-new) highlightnew=1; shift;;
+ --highlight-new=*) on_off_option_error --highlight-new;;
+ --highlight-notes) highlightnotes=1; shift;;
+ --highlight-notes=*) on_off_option_error --highlight-notes;;
+ -d|--debug) if [ x$operation = x ]; then
+ echo "Please set the operation before calling '--debug'"; exit 1
+ elif [ x$operation = xconfigure ]; then debug=a; shift;
+ elif [ x$operation = xmake ]; then
+ if [ x"$2" = x ]; then
+ echo "In make-mode, '--debug' needs a value; see GNU Make manual"; exit 1
+ else debug="$2"; check_v debug "$debug"; shift;shift; fi
+ else
+ echo "Operation '$operation' not recognized, please use 'configure' or 'make'"
+ fi;;
+ -d=*|--debug=*) debug="${1#*=}"; check_v debug "$debug"; shift;;
+ -d*) debug=$(echo "$1" | sed -e's/-d//'); check_v debug "$debug"; shift;;
# Unrecognized option:
-*) echo "$scriptname: unknown option '$1'"; exit 1;;
@@ -294,8 +302,8 @@ EOF
ls $coloropt .build/software/build-tmp || junk=1;
fi
- # Make the temporary directory, delete its contents, then put new
- # links of all built software.
+ # Make the temporary directory, delete its contents, then put
+ # new links of all built software.
if ! [ -d $checkdir ]; then mkdir $checkdir; fi
rm -f $checkdir/*
@@ -316,10 +324,11 @@ EOF
if [ $printresults = 1 ]; then
echo "--- Last 5 packages that were built:"
- # Then sort all the links based on the most recent dates of the
- # files they link to (with '-L').
+ # Then sort all the links based on the most recent dates of
+ # the files they link to (with '-L').
ls -Llt $checkdir \
- | awk '/^-/ && c++<5 {printf "[at %s] %s\n", $(NF-1), $NF}'
+ | awk '/^-/ && c++<5 {printf "[at %s] %s\n", \
+ $(NF-1), $NF}'
fi
else
cat <<EOF
@@ -343,14 +352,15 @@ fi
-# Basic group settings
-# --------------------
+# Group check
+# -----------
if ! [ x$group = x ]; then
# Check if group is usable.
if ! sg "$group" "echo Group \'$group\' exists"; then
- echo "$scriptname: '$group' is not a usable group name on this system.";
- echo "(TIP: you can use the 'groups' command to see your groups)"
+ printf "$scriptname: '$group' is not a usable group name on "
+    printf "this system. (TIP: you can use the 'groups' command "
+ printf "to see your groups)\n"
exit 1
fi
@@ -362,9 +372,45 @@ fi
-# Error when configuration isn't run
-configuration_necessary() {
- cat <<EOF
+# Build directory symbolic links
+# ------------------------------
+#
+# The source directory will contain two symbolic links that point to the
+# build directory:
+#
+# - .build: the top build directory.
+#
+# - .local: the installed-software directory (inside the build
+#   directory). Both links are used during the configuration phase to
+#   simplify commands, and are also very useful during the development
+#   of a Maneage'd project (to easily get to the build directory or
+#   execute Maneage'd software).
+#
+# This needs to be done on every run because:
+# - './project configure' can be run with a new build directory, and
+# keeping the old '.build' conflicts with the new build directory
+# that the user gave.
+# - './project make' or './project shell' (within a newly cloned source
+# directory from inside a container): the links do not exist but have
+# to be set to the container's build directory.
+# - This is not an expensive operation.
+if ! [ x"$build_dir" = x ]; then
+ rm -f .build .local
+ ln -s $build_dir .build
+ ln -s $build_dir/software/installed .local
+fi
+
+
+
+
+
+# Function to validate configuration
+# ----------------------------------
+#
+# Check if the configuration is missing/incomplete
+configuration_check() {
+ confdone=software/config/hardware-parameters.tex
+ if ! [ -f .build/$confdone ]; then
+ cat <<EOF
The project is either (1) not configured on this system, or (2) the
configuration wasn't successful.
@@ -384,15 +430,69 @@ If there was a problem, please let us know by filling this online form:
http://savannah.nongnu.org/support/?func=additem&group=reproduce
EOF
- exit 1
+ exit 1
+ fi
}
-# Run operations in controlled environment
-# ----------------------------------------
+# Function for TeX Preparations
+# -----------------------------
+#
+# Make sure that the necessary analysis directories exist in the build
+# directory. These will be necessary in various phases of the analysis,
+# and having them inside the lower-level Make steps would require setting
+# them as prerequisites for many basic jobs (thus making the Makefiles
+# harder to read and adding potential for bugs: forgetting to add them,
+# for example). Also, we don't want the configure phase to make any
+# edits in the analysis directory, so they are not built there.
+tex_preparations () {
+
+ # Extract the location of the build directory.
+ bdir=$(.local/bin/realpath .build)
+
+ # We are using our custom-built 'mkdir' which is guaranteed to have the
+ # '-p' option (that will also build intermediate directories) and won't
+ # complain if the directory already exists.
+ badir=$bdir/analysis
+ texdir=$badir/tex
+ btexdir=$texdir/build
+ tikzdir=$btexdir/tikz
+ mtexdir=$texdir/macros
+ .local/bin/mkdir -p $mtexdir $btexdir $tikzdir
+
+ # If 'tex/build' and 'tex/tikz' are symbolic links then 'rm -f'
+ # will delete them and we can continue. However, when the project
+ # is being built from the tarball (from arXiv for example), these
+ # two are not symbolic links but actual directories with the
+ # necessary built-components to build the PDF in them. In this
+ # case, because 'tex/build' is a directory, 'rm -f' will fail, so
+ # we'll just rename the two directories (as backup) and let the
+ # project build the proper symbolic links afterwards.
+ if rm -f tex/build; then
+ rm -f tex/tikz
+ else
+ mv tex/tikz tex/tikz-from-tarball
+ mv tex/build tex/build-from-tarball
+ fi
+
+ # Build the symbolic links.
+ if ! [ -L tex/tikz ]; then ln -s "$tikzdir" tex/tikz; fi
+ if ! [ -L tex/build ]; then ln -s "$texdir" tex/build; fi
+}
+
+
+
+
+
+# Function to run in controlled environment
+# -----------------------------------------
+#
+# Controlling the environment is necessary for running the analysis. Like
+# the other functions here, this is defined to simplify the high-level code
+# within the 'make)' switch statement.
perms="u+r,u+w,g+r,g+w,o-r,o-w,o-x"
controlled_env() {
@@ -464,11 +564,13 @@ case $operation in
# Variables to pass to the configuration script.
export jobs=$jobs
export debug=$debug
+ export quiet=$quiet
export host_cc=$host_cc
export offline=$offline
export build_dir=$build_dir
export input_dir=$input_dir
export scriptname=$scriptname
+ export pauseformsg=$pauseformsg
export maneage_group_name=$group
export software_dir=$software_dir
export existing_conf=$existing_conf
@@ -488,7 +590,8 @@ case $operation in
# creates problems when another group member wants to update
# the software for example. We thus need to manually add the
# group writing flag to all installed software files.
- echo "Enabling group writing permission on all installed software..."
+ printf "Enabling group writing permission on all installed "
+ printf "software...\n"
.local/bin/chmod -R g+w .local/;
fi
;;
@@ -500,43 +603,14 @@ case $operation in
# Batch execution of the project.
make)
- # Make sure the configure script has been completed properly
- # ('configuration-done.txt' exists).
- if ! [ -f .build/software/configuration-done.txt ]; then
- configuration_necessary
- fi
+ # Make sure the configure script is complete and necessary LaTeX
+ # directories are in place.
+ configuration_check
+ tex_preparations
- # Make sure that the necessary analysis directories directory exist
- # in the build directory. These will be necessary in various phases
- # of hte analysis and having them inside the lower-level Make steps
- # will require setting them as prerequisites for many basic jobs
- # (thus making the Makefiles harder to read and add potentials for
- # bugs: forgetting to add them for example). Also, we don't want
- # the configure phase to make any edits in the analysis directory,
- # so they are not built there.
- badir=.build/analysis
- texdir=$badir/tex
- mtexdir=$texdir/macros
- if ! [ -d $badir ]; then mkdir $badir; fi
- if ! [ -d $texdir ]; then mkdir $texdir; fi
- if ! [ -d $mtexdir ]; then mkdir $mtexdir; fi
-
- # TeX build directory. If built in a group scenario, the TeX build
- # directory must be separate for each member (so they can work on their
- # relevant parts of the paper without conflicting with each other).
- if [ "x$maneage_group_name" = x ]; then
- texbdir="$texdir"/build
- else
- user=$(whoami)
- texbdir="$texdir"/build-$user
- fi
- tikzdir="$texbdir"/tikz
- if ! [ -L tex/build ]; then ln -s "$(pwd -P)/$texdir" tex/build; fi
- if ! [ -L tex/tikz ]; then ln -s "$(pwd -P)/$tikzdir" tex/tikz; fi
-
- # Register the start of this run (we are appending the new
- # information so previous information is preserved until the user
- # intentionally deletes/cleans it).
+ # Register the start of this run if requested (we are appending the
+ # new information so previous information is preserved until the
+ # user intentionally deletes/cleans it).
if [ $timing = 1 ]; then echo "start: $(date)" >> timing.txt; fi
# Run data preparation phase (optionally build Makefiles with
@@ -564,36 +638,33 @@ case $operation in
# Interactive shell of Maneage.
shell)
- # Make sure the configure script has been completed properly
- # ('configuration-done.txt' exists).
- if ! [ -f .build/software/configuration-done.txt ]; then
- configuration_necessary
- fi
+ # Make sure the configure script has been completed properly.
+ configuration_check
# Run the project's own shell without inheriting any environment
# from the host. The 'TERM' environment variable is necessary for
# tools like some text editors.
- bdir=`.local/bin/realpath .build`
+ bdir=$(.local/bin/realpath .build)
instdir="$bdir"/software/installed
bindir="$bdir"/software/installed/bin
rcfile=$(pwd)/reproduce/software/shell/bashrc.sh
.local/bin/env -i \
- HOME="$bdir" \
- TERM="$TERM" \
- PATH="$bindir" \
- CCACHE_DISABLE=1 \
- PROJECT_STATUS=shell \
- SHELL="$bindir"/bash \
- COLORTERM="$COLORTERM" \
- PROJECT_RCFILE="$rcfile" \
- LDFLAGS=-L"$instdir"/lib \
- CPPFLAGS=-I"$instdir"/include \
- LD_LIBRARY_PATH="$instdir"/lib \
- OMPI_MCA_plm_rsh_agent=/bin/false \
- PYTHONPATH="$instdir"/lib/python/site-packages \
- PYTHONPATH3="$instdir"/lib/python/site-packages \
- PS1="[\[\033[01;35m\]maneage@\h \W\[\033[32m\]\[\033[00m\]]$ " \
- "$bindir"/bash --noprofile --rcfile "$rcfile"
+ HOME="$bdir" \
+ TERM="$TERM" \
+ PATH="$bindir" \
+ CCACHE_DISABLE=1 \
+ PROJECT_STATUS=shell \
+ SHELL="$bindir"/bash \
+ COLORTERM="$COLORTERM" \
+ PROJECT_RCFILE="$rcfile" \
+ LDFLAGS=-L"$instdir"/lib \
+ CPPFLAGS=-I"$instdir"/include \
+ LD_LIBRARY_PATH="$instdir"/lib \
+ OMPI_MCA_plm_rsh_agent=/bin/false \
+ PYTHONPATH="$instdir"/lib/python/site-packages \
+ PYTHONPATH3="$instdir"/lib/python/site-packages \
+ PS1="[\[\033[01;35m\]maneage@\h \W\[\033[32m\]\[\033[00m\]]$ " \
+ "$bindir"/bash --noprofile --rcfile "$rcfile"
;;
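The `shell` sub-command above builds its environment from scratch with `.local/bin/env -i`, passing only an explicit list of variables so nothing leaks in from the host. A minimal, self-contained sketch of that pattern (using the system `env` and an illustrative variable name):

```shell
# Run a command in a scrubbed environment: with 'env -i', only the
# variables listed on the command line exist inside the child shell,
# so host variables like HOME expand to nothing.
out=$(env -i PATH=/usr/bin:/bin MYVAR=hello sh -c 'echo "$MYVAR:$HOME"')
echo "$out"    # prints 'hello:' (HOME is unset inside)
```

The project script additionally points `SHELL`, `PS1` and the rc-file at Maneage's own Bash so the interactive session is fully self-hosted.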
diff --git a/reproduce/analysis/make/initialize.mk b/reproduce/analysis/make/initialize.mk
index c51f910..b57b3a9 100644
--- a/reproduce/analysis/make/initialize.mk
+++ b/reproduce/analysis/make/initialize.mk
@@ -265,16 +265,8 @@ clean:
# executing 'build'.
rm -f *.aux *.log *.synctex *.auxlock *.dvi *.out *.run.xml *.bcf
-# Delete all the built outputs except the dependency programs. We'll
-# use Bash's extended options builtin ('shopt') to enable "extended
-# glob" (for listing of files). It allows extended features like
-# ignoring the listing of a file with '!()' that we are using
-# afterwards.
- shopt -s extglob
- rm -rf $(texdir)/macros/!(dependencies.tex|dependencies-bib.tex|hardware-parameters.tex)
- rm -rf $(badir)/!(tex) $(texdir)/!(macros|$(texbtopdir))
- rm -rf $(texdir)/build/!(tikz) $(texdir)/build/tikz/*
- rm -rf $(badir)/preparation-done.mk
+# Delete the full 'badir' (containing all analysis outputs).
+ rm -rf $(badir)
distclean: clean
# Without cleaning the Git hooks, we won't be able to easily commit
@@ -285,7 +277,7 @@ distclean: clean
# 'rm' program. So for this recipe, we'll use the host system's 'rm',
# not our own.
$$sys_rm -rf $(BDIR)
- $$sys_rm -f .local .build $(pconfdir)/LOCAL.conf
+ $$sys_rm -f .local .build
@@ -329,12 +321,11 @@ $(project-package-contents): paper.pdf | $(texdir)
paper.tex > $$dir/paper.tex
# Copy ONLY the version-controlled files in 'reproduce' and
-# 'tex/src'. This is important because files like 'LOCAL.conf' (in
-# 'reproduce/software/config') should not be archived, they contain
-# information about the host computer and are irrelevant for
-# others. Also some project authors may have temporary files here
-# that are not under version control and thus shouldn't be archived
-# (although this is bad practice, but that is up to the user).
+# 'tex/src'. This is important because the Git commit hash goes into
+# the tarball name (so the contents should correspond to it) and some
+# project authors may have temporary files here that are not under
+# version control and thus shouldn't be archived (although this is bad
+# practice, that is up to the user).
#
# To keep the sub-directory structure, we are packaging the files
# with Tar, piping it, and unpacking it in the archive directory. So
@@ -362,17 +353,20 @@ $(project-package-contents): paper.pdf | $(texdir)
rm -rf $$dir/tex/build/build*
# If the project has any PDFs in its 'tex/tikz' directory (TiKZ or
-# PGFPlots was used to generate them), copy them too.
+# PGFPlots was used to generate them), copy them too. Note that in
+# the main project source, 'tex/tikz' is just a symbolic link to
+# 'tex/build/tikz'. But the tarball should not contain symbolic
+# links, so the PDFs are copied as independent files.
if ls tex/tikz/*.pdf &> /dev/null; then
cp tex/tikz/*.pdf $$dir/tex/tikz
fi
# When submitting to places like arXiv, they will just run LaTeX once
-# and won't run 'biber'. So we need to also keep the '.bbl' file into
-# the distributing tarball. However, BibLaTeX is particularly
-# sensitive to versioning (a '.bbl' file has to be read by the same
-# BibLaTeX version that created it). This is hard to do with
-# non-up-to-date places like arXiv. Therefore, we thus just copy the
+# and won't run 'biber' or 'biblatex'. So we also need to keep the
+# '.bbl' file in the distributed tarball. However, BibLaTeX is
+# particularly sensitive to versioning (a '.bbl' file has to be read
+# by the same BibLaTeX version that created it). This is hard to
+# guarantee with not-always-up-to-date services like arXiv.
+# Therefore, we just copy the
# whole of BibLaTeX's source (the version we are using) into the top
# tarball directory. In this way, arXiv's LaTeX engine will use the
# same BibLaTeX version to interpret the '.bbl' file. TIP: you can
@@ -521,7 +515,10 @@ $(inputdatasets): $(indir)/%: | $(indir) $(lockdir)
# Unrecognized format.
*)
- echo "Maneage: 'DATABASEAUTHTYPE' format not recognized! Please see the description of this variable in 'reproduce/software/config/LOCAL.conf' for the acceptable values."; exit 1;;
+ printf "Maneage: 'DATABASEAUTHTYPE' format not recognized! "
+ printf "Please see the description of this variable in "
+ printf "'$(bsdir)/config/LOCAL.conf' for the acceptable "
+	  printf "values.\n"; exit 1;;
esac
fi
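The packaging recipe in this Makefile preserves sub-directory structure by piping a Tar stream into the destination directory rather than copying file-by-file. A standalone sketch of that tar-pipe idiom (file and directory names here are made up for illustration):

```shell
# Copy selected files into 'dest/' while keeping their relative
# paths: 'tar -c' writes an archive to stdout, and the subshell
# unpacks it from stdin inside the destination.
mkdir -p src/deep dest
echo A > src/a.txt
echo B > src/deep/b.txt
tar -c -f - src/a.txt src/deep/b.txt \
    | (cd dest && tar -x -f -)
ls dest/src/deep    # 'b.txt' is now at the same relative path
```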
diff --git a/reproduce/analysis/make/paper.mk b/reproduce/analysis/make/paper.mk
index 3c06ce3..a399637 100644
--- a/reproduce/analysis/make/paper.mk
+++ b/reproduce/analysis/make/paper.mk
@@ -105,8 +105,6 @@ else
texbdir:=$(texdir)/build-$(shell whoami)
endif
tikzdir:=$(texbdir)/tikz
-$(texbdir):; mkdir $@
-$(tikzdir): | $(texbdir); mkdir $@
@@ -120,8 +118,8 @@ $(tikzdir): | $(texbdir); mkdir $@
# on). Therefore, we should copy those macros here in the LaTeX build
# directory, so the TeX directory is completely independent from each
# other.
-$(mtexdir)/dependencies.tex: $(bsdir)/tex/dependencies.tex
- cp $(bsdir)/tex/*.tex $(mtexdir)/
+$(mtexdir)/dependencies.tex: $(bsdir)/config/dependencies.tex
+ cp $(bsdir)/config/*.tex $(mtexdir)/
@@ -140,7 +138,7 @@ $(mtexdir)/dependencies.tex: $(bsdir)/tex/dependencies.tex
# been modified, we don't want to re-build the bibliography, only the final
# PDF.
$(texbdir)/paper.bbl: tex/src/references.tex $(mtexdir)/dependencies.tex \
- | $(mtexdir)/project.tex $(tikzdir)
+ | $(mtexdir)/project.tex
# If '$(mtexdir)/project.tex' is empty, don't build PDF.
@macros=$$(cat $(mtexdir)/project.tex)
@@ -166,9 +164,9 @@ $(texbdir)/paper.bbl: tex/src/references.tex $(mtexdir)/dependencies.tex \
# for details.
#
# We need the modification to 'LD_LIBRARY_PATH' because we do not
-# build LaTeX from source and it uses '/bin/sh' (among other
-# possible system-wide things).
- export LD_LIBRARY_PATH="$(sys_library_sh_path):$$LD_LIBRARY_PATH"
+# build LaTeX from source and it (or its packages) may use
+# '/bin/sh' (among other possible system-wide things).
+ export LD_LIBRARY_PATH="$(SYS_LIBRARY_SH_PATH):$$LD_LIBRARY_PATH"
pdflatex -shell-escape -halt-on-error "$$p"/paper.tex
biber paper
fi
@@ -200,9 +198,9 @@ paper.pdf: $(mtexdir)/project.tex paper.tex $(texbdir)/paper.bbl
# option '-shell-escape'.
#
# We need the modification to 'LD_LIBRARY_PATH' because we do not
-# build LaTeX from source and it uses '/bin/sh' (among other
-# possible system-wide things).
- export LD_LIBRARY_PATH="$(sys_library_sh_path):$$LD_LIBRARY_PATH"
+# build LaTeX from source and it (or its packages) may use
+# '/bin/sh' (among other possible system-wide things).
+ export LD_LIBRARY_PATH="$(SYS_LIBRARY_SH_PATH):$$LD_LIBRARY_PATH"
pdflatex -shell-escape -halt-on-error "$$p"/paper.tex
# Come back to the top project directory and copy the built PDF
diff --git a/reproduce/analysis/make/top-make.mk b/reproduce/analysis/make/top-make.mk
index 2689e64..e87aed8 100644
--- a/reproduce/analysis/make/top-make.mk
+++ b/reproduce/analysis/make/top-make.mk
@@ -19,9 +19,9 @@
-# Load the local configuration (created after running
-# './project configure').
-include reproduce/software/config/LOCAL.conf
+# Load the local configuration (created after running './project
+# configure').
+include .build/software/config/LOCAL.conf
diff --git a/reproduce/analysis/make/top-prepare.mk b/reproduce/analysis/make/top-prepare.mk
index ea40f39..d2d1c14 100644
--- a/reproduce/analysis/make/top-prepare.mk
+++ b/reproduce/analysis/make/top-prepare.mk
@@ -23,9 +23,9 @@
-# Load the local configuration (created after running
-# './project configure').
-include reproduce/software/config/LOCAL.conf
+# Load the local configuration (created after running './project
+# configure').
+include .build/software/config/LOCAL.conf
diff --git a/reproduce/software/config/LOCAL.conf.in b/reproduce/software/config/LOCAL.conf.in
index 341a78e..b95bb5f 100644
--- a/reproduce/software/config/LOCAL.conf.in
+++ b/reproduce/software/config/LOCAL.conf.in
@@ -9,12 +9,31 @@
# permitted in any medium without royalty provided the copyright notice and
# this notice are preserved. This file is offered as-is, without any
# warranty.
-BDIR = @bdir@
-INDIR = @indir@
+
+
+
+
+# Local system settings
+# ---------------------
+#
+# Build directory (mandatory). All files created by the project will
+# be within this directory.
+BDIR = @bdir@
+
+# Input data directory. This can be empty or a non-existent location.
+# If so, the inputs will be downloaded (as specified in 'INPUTS.conf')
+# into the build directory.
+INDIR = @indir@
+
+# Software source code directory. This can be empty or a non-existent
+# location. If so, the software tarballs will be downloaded.
DEPENDENCIES-DIR = @ddir@
-SYS_CPATH = @sys_cpath@
-DOWNLOADER = @downloader@
-GROUP-NAME = @groupname@
+
+# Other local settings (compiler, downloader and user).
+SYS_CPATH = @sys_cpath@
+GROUP-NAME = @groupname@
+DOWNLOADER = @downloader@
+SYS_LIBRARY_SH_PATH = @sys_library_sh_path@
diff --git a/reproduce/software/config/versions.conf b/reproduce/software/config/versions.conf
index 9c82c8d..166e8ff 100644
--- a/reproduce/software/config/versions.conf
+++ b/reproduce/software/config/versions.conf
@@ -17,7 +17,6 @@
# --------------------------------------------------------------
#
# CLASS:BASIC (important identifier for 'awk'; don't modify this line)
-bash-version = 5.2.37
binutils-version = 2.43.1
bison-version = 3.8.2
coreutils-version = 9.6
@@ -42,15 +41,12 @@ libtool-version = 2.5.4
libunistring-version = 1.3
libxml2-version = 2.13.5
lzip-version = 1.25
-m4-version = 1.4.19
make-version = 4.4.1
mpc-version = 1.3.1
mpfr-version = 4.2.1
nano-version = 8.3
-ncurses-version = 6.5
openssl-version = 3.4.0
perl-version = 5.40.1
-pkgconfig-version = 0.29.2
podlators-version = 6.0.2
readline-version = 8.2.13
sed-version = 4.9
@@ -90,6 +86,21 @@ certpem-version = 2025-02-10
# supported.
patchelf-version = 0.13
+# Not working with C23
+# --------------------
+#
+# As of GCC 15.1, the default C standard has been changed from C17 to
+# C23 and the following software cannot be built with C23. So we have
+# added '-std=gnu17' to the CFLAGS environment variable in their build
+# rules (see 'basic.mk'). After updating their version (and if you have
+# GCC 15.1 or later) first remove '-std=gnu17' and then try the build.
+# If it works, move the software back up to the main list before the
+# commit.
+ncurses-version = 6.5
+bash-version = 5.2.37
+m4-version = 1.4.19
+pkgconfig-version = 0.29.2
+
+
diff --git a/reproduce/software/make/basic.mk b/reproduce/software/make/basic.mk
index 40c5a4e..4b18c29 100644
--- a/reproduce/software/make/basic.mk
+++ b/reproduce/software/make/basic.mk
@@ -39,7 +39,7 @@
# along with this Makefile. If not, see <http://www.gnu.org/licenses/>.
# Top level environment
-include reproduce/software/config/LOCAL.conf
+include .build/software/config/LOCAL.conf
include reproduce/software/make/build-rules.mk
include reproduce/software/config/versions.conf
include reproduce/software/config/checksums.conf
@@ -63,10 +63,13 @@ ibidir = $(BDIR)/software/installed/version-info/proglib
# editor) is installed by default, it is recommended to have it in the
# 'basic.mk', so Maneaged projects can be edited on any system (even when
# there is no command-line text editor available).
+#
+# The recipe is '@echo > /dev/null' so Make does not print "make: Nothing
+# to be done for 'all'."
targets-proglib = low-level-links \
gcc-$(gcc-version) \
nano-$(nano-version)
-all: $(foreach p, $(targets-proglib), $(ibidir)/$(p))
+all: $(foreach p, $(targets-proglib), $(ibidir)/$(p)); @echo > /dev/null
# Define the shell environment
# ----------------------------
@@ -423,6 +426,7 @@ $(ibidir)/ncurses-$(ncurses-version): $(ibidir)/patchelf-$(patchelf-version)
rm -f $(ibdir)/bash* $(ibdir)/awk* $(ibdir)/gawk*
# Standard build process.
+ export CFLAGS="-std=gnu17 $$CFLAGS"
$(call gbuild, ncurses-$(ncurses-version), static, \
--with-shared --enable-rpath --without-normal \
--without-debug --with-cxx-binding \
@@ -561,7 +565,7 @@ $(ibidir)/bash-$(bash-version): \
if [ "x$(static_build)" = xyes ]; then stopt="--enable-static-link"
else stopt=""
fi;
- export CFLAGS="$$CFLAGS \
+ export CFLAGS="$$CFLAGS -std=gnu17 \
-DDEFAULT_PATH_VALUE='\"$(ibdir)\"' \
-DSTANDARD_UTILS_PATH='\"$(ibdir)\"' \
-DSYS_BASHRC='\"$(BASH_ENV)\"' "
@@ -1014,6 +1018,7 @@ $(ibidir)/gmp-$(gmp-version): \
$(ibidir)/coreutils-$(coreutils-version)
tarball=gmp-$(gmp-version).tar.lz
$(call import-source, $(gmp-url), $(gmp-checksum))
+ export CFLAGS="-std=gnu17 $$CFLAGS"
$(call gbuild, gmp-$(gmp-version), static, \
--enable-cxx --enable-fat, \
-j$(numthreads))
@@ -1074,6 +1079,7 @@ $(ibidir)/grep-$(grep-version): $(ibidir)/coreutils-$(coreutils-version)
$(ibidir)/m4-$(m4-version): $(ibidir)/patchelf-$(patchelf-version)
tarball=m4-$(m4-version).tar.lz
$(call import-source, $(m4-url), $(m4-checksum))
+ export CFLAGS="-std=gnu17 $$CFLAGS"
$(call gbuild, m4-$(m4-version), static, \
--with-syscmd-shell=$(ibdir)/dash, \
-j$(numthreads) V=1)
@@ -1106,6 +1112,7 @@ $(ibidir)/pkg-config-$(pkgconfig-version): $(ibidir)/patchelf-$(patchelf-version
if [ x$(on_mac_os) = xyes ]; then export compiler="CC=clang"
else export compiler=""
fi
+ export CFLAGS="-std=gnu17 $$CFLAGS"
$(call gbuild, pkg-config-$(pkgconfig-version), static, \
$$compiler --with-internal-glib \
--with-pc-path=$(ildir)/pkgconfig, V=1)
diff --git a/reproduce/software/make/high-level.mk b/reproduce/software/make/high-level.mk
index 4ed5d62..67ca8b6 100644
--- a/reproduce/software/make/high-level.mk
+++ b/reproduce/software/make/high-level.mk
@@ -29,7 +29,7 @@
# along with this Makefile. If not, see <http://www.gnu.org/licenses/>.
# Top level environment (same as 'basic.mk')
-include reproduce/software/config/LOCAL.conf
+include .build/software/config/LOCAL.conf
include reproduce/software/make/build-rules.mk
include reproduce/software/config/versions.conf
include reproduce/software/config/checksums.conf
@@ -123,11 +123,12 @@ ifneq ($(strip $(offline)),1)
target-texlive := $(itidir)/texlive
endif
-# Ultimate Makefile target.
+# Ultimate Makefile target. The recipe is '@echo > /dev/null' so Make does
+# not print "make: Nothing to be done for 'all'."
all: $(foreach p, $(targets-proglib), $(ibidir)/$(p)) \
$(foreach p, $(targets-python), $(ipydir)/$(p)) \
$(foreach p, $(targets-r-cran), $(ircrandir)/$(p)) \
- $(target-texlive)
+ $(target-texlive); @echo > /dev/null
# Define the shell environment
# ----------------------------
@@ -2020,7 +2021,7 @@ $(itidir)/texlive: reproduce/software/config/texlive-packages.conf \
# We do not build TeXLive from source and for its installation it
# downloads components from the web internally; and those
# components can use '/bin/sh' (which needs 'sys_library_sh_path').
- export LD_LIBRARY_PATH="$(sys_library_sh_path):$$LD_LIBRARY_PATH"
+	export LD_LIBRARY_PATH="$(SYS_LIBRARY_SH_PATH)"
# To update itself, tlmgr needs a backup directory.
backupdir=$(idir)/texlive/backups
@@ -2049,7 +2050,6 @@ $(itidir)/texlive: reproduce/software/config/texlive-packages.conf \
# files (this is because we do not yet install LaTeX from source):
cdir=$$(pwd)
cd $(idir)/texlive
- $(shsrcdir)/prep-source.sh $(ibdir)
cd $$cdir
# Get all the necessary versions.
diff --git a/reproduce/software/containers/README-apptainer.md b/reproduce/software/shell/apptainer-README.md
index 9608dc8..a7826ec 100644
--- a/reproduce/software/containers/README-apptainer.md
+++ b/reproduce/software/shell/apptainer-README.md
@@ -22,24 +22,22 @@ analysis files (data and PDF) on your host operating system. This enables
you to keep the size of the image to a minimum (only containing the built
software environment) to easily move it from one computer to another.
- 1. Using your favorite text editor, create a `apptainer-local.sh` in your
- project's top directory that contains the usage command shown at the
- top of the 'apptainer.sh' script and take the following steps:
- * Set the respective directories based on your own preferences.
- * The `--software-dir` is optional (if you don't have the source
- tarballs, Maneage will download them automatically. But that requires
- internet (which may not always be available). If you regularly build
- Maneage'd projects, you can clone the repository containing all the
- tarballs at https://gitlab.cefca.es/maneage/tarballs-software
- * Add an extra `--build-only` for the first run so it doesn't go onto
- doing the analysis and just builds the image. After it has completed,
- remove the `--build-only` and it will only run the analysis of your
- project.
-
- 2. Once step one finishes, the build directory will contain two
- Singularity Image Format (SIF) files listed below. You can move them to
- any other (more permanent) positions in your filesystem or to other
- computers as needed.
+ 1. Using your favorite text editor, create a `run.sh` in your top Maneage
+ directory (as described in the comments at the start of the
+ `apptainer.sh` script in this directory). Just add `--build-only` on
+ the first run so it doesn't go onto doing the analysis and just sets up
+ the software environment. Set the respective directory(s) based on your
+ filesystem (the software directory is optional). The `run.sh` file name
+ is already in `.gitignore` (because it contains local directories), so
+ Git will ignore it and it won't be committed by mistake.
+
+ 2. Make the script executable with `chmod +x ./run.sh`, and run it with
+ `./run.sh`.
+
+ 3. Once the build finishes, the build directory (on your host) will
+ contain two Singularity Image Format (SIF) files listed below. You can
+ move them to any other (more permanent) positions in your filesystem or
+ to other computers as needed.
* `maneage-base.sif`: image containing the base operating system that
was used to build your project. You can safely delete this unless you
need to keep it for future builds without internet (you can give it
@@ -49,6 +47,10 @@ software environment) to easily move it from one computer to another.
project. This file is necessary for future runs of your project
within the container.
+ 4. To execute your project, remove the `--build-only` option and use
+    `./run.sh` again. If you want to enter your Maneage'd project shell,
+    add the `--project-shell` option to the call inside `./run.sh`.
+
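For reference, a hypothetical `run.sh` for the steps above might look like the following (all directories are placeholders you must set for your own system; `--software-dir` is optional):

```sh
#!/bin/sh
# Hypothetical wrapper; set the directories for your own filesystem.
# Keep '--build-only' for the first run, remove it afterwards.
./reproduce/software/shell/apptainer.sh \
    --build-dir=/PATH/TO/BUILD/DIRECTORY \
    --software-dir=/PATH/TO/SOFTWARE/TARBALLS \
    --build-only
```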
diff --git a/reproduce/software/containers/apptainer.sh b/reproduce/software/shell/apptainer.sh
index 52315f6..c581ade 100755
--- a/reproduce/software/containers/apptainer.sh
+++ b/reproduce/software/shell/apptainer.sh
@@ -9,34 +9,35 @@
#
# Usage:
#
-# - When you are at the top Maneage'd project directory, you can run this
-# script like the example below. Just set all the '/PATH/TO/...'
-# directories. See the items below for optional values.
+# - When you are at the top Maneage'd project directory, run this script
+# like the example below. Just set the build directory location on your
+#   system. See the items below for optional values to optimize the
+#   process (to avoid re-downloading, for example).
#
-# ./reproduce/software/containers/apptainer.sh \
-# --build-dir=/PATH/TO/BUILD/DIRECTORY \
-# --software-dir=/PATH/TO/SOFTWARE/TARBALLS
+# ./reproduce/software/shell/apptainer.sh \
+# --build-dir=/PATH/TO/BUILD/DIRECTORY
#
-# - Non-mandatory options:
+# - Non-mandatory options:
#
-# - If you already have the input data that is necessary for your
-# project's, use the '--input-dir' option to specify its location
-# on your host file system. Otherwise the necessary analysis
-# files will be downloaded directly into the build
-# directory. Note that this is only necessary when '--build-only'
-# is not given.
+# - If you already have the input data that is necessary for your
+# project, use the '--input-dir' option to specify its location
+# on your host file system. Otherwise the necessary analysis
+# files will be downloaded directly into the build
+# directory. Note that this is only necessary when '--build-only'
+# is not given.
#
-# - The '--software-dir' is only useful if you want to build a
-# container. Even in that case, it is not mandatory: if not
-# given, the software tarballs will be downloaded (thus requiring
-# internet).
+#     - If you already have the software tarballs that are necessary
+#       for your project, use the '--software-dir' option to specify
+#       their location on your host file system; this is only relevant
+#       when building the container. No problem if you don't have
+#       them: they will be downloaded during the configuration phase.
#
-# - To avoid having to set them every time you want to start the
-# apptainer environment, you can put this command (with the proper
-# directories) into a 'run.sh' script in the top Maneage'd project
-# source directory and simply execute that. The special name 'run.sh'
-# is in Maneage's '.gitignore', so it will not be included in your
-# git history by mistake.
+# - To avoid having to set them every time you want to start the
+# apptainer environment, you can put this command (with the proper
+# directories) into a 'run.sh' script in the top Maneage'd project
+# source directory and simply execute that. The special name 'run.sh'
+# is in Maneage's '.gitignore', so it will not be included in your
+# git history by mistake.
#
# Known problems:
#
@@ -70,7 +71,7 @@ set -e
# Default option values
-jobs=
+jobs=0
quiet=0
source_dir=
build_only=
@@ -105,7 +106,7 @@ Top-level script to build and run a Maneage'd project within Apptainer.
--container-shell Open the container shell.
Operating mode:
- --quiet Do not print informative statements.
+ -q, --quiet Do not print informative statements.
-?, --help Give this help list.
-j, --jobs=INT Number of threads to use in each phase.
--build-only Just build the container, don't run it.
@@ -166,8 +167,8 @@ do
--container_shell=*) on_off_option_error --container-shell;;
# Operating mode
- --quiet) quiet=1; shift;;
- --quiet=*) on_off_option_error --quiet;;
+ -q|--quiet) quiet=1; shift;;
+ -q*|--quiet=*) on_off_option_error --quiet;;
-j|--jobs) jobs="$2"; check_v "$1" "$jobs"; shift;shift;;
-j=*|--jobs=*) jobs="${1#*=}"; check_v "$1" "$jobs"; shift;;
-j*) jobs=$(echo "$1" | sed -e's/-j//'); check_v "$1" "$jobs"; shift;;
@@ -245,8 +246,22 @@ if ! [ x"$input_dir" = x ]; then
fi
# If no '--jobs' has been specified, use the maximum available jobs to the
-# operating system.
-if [ x$jobs = x ]; then jobs=$(nproc); fi
+# operating system. Apptainer only works on GNU/Linux operating systems, so
+# there is no need to account for reading the number of threads on macOS.
+if [ x"$jobs" = x0 ]; then jobs=$(nproc); fi
+
+# Since the container is read-only and is run with the '--contain' option
+# (which makes an empty '/tmp'), we need to make a dedicated directory for
+# the container to be able to write to. This is necessary because some
+# software (Biber in particular on the default branch) need to write there!
+# See https://github.com/plk/biber/issues/494. We'll keep the directory on
+# the host OS within the build directory, but as a hidden file (since it is
+# not necessary in other types of build and ultimately only contains
+# temporary files of programs that need it).
+toptmp=$build_dir/.apptainer-tmp-$(whoami)
+if ! [ -d $toptmp ]; then mkdir $toptmp; fi
+chmod -R +w $toptmp/ # Some software remove write permission on /tmp files.
+if ! [ x"$( ls -A $toptmp )" = x ]; then rm -r "$toptmp"/*; fi
# [APPTAINER-ONLY] Optional mounting option for the software directory.
software_dir_mnt=""
@@ -254,18 +269,6 @@ if ! [ x"$software_dir" = x ]; then
software_dir_mnt="--mount type=bind,src=$software_dir,dst=/home/maneager/tarballs-software"
fi
-# [APPTAINER-ONLY] Since the container is read-only and is run with the
-# '--contain' option (which makes an empty '/tmp'), we need to make a
-# dedicated directory for the container to be able to write to. This is
-# necessary because some software (Biber in particular on the default
-# branch) need to write there! See https://github.com/plk/biber/issues/494.
-# We'll keep the directory on the host OS within the build directory, but
-# as a hidden file (since it is not necessary in other types of build and
-# ultimately only contains temporary files of programs that need it).
-toptmp=$build_dir/.apptainer-tmp-$(whoami)
-if ! [ -d $toptmp ]; then mkdir $toptmp; fi
-rm -rf $toptmp/* # So previous runs don't affect this run.
-
@@ -284,7 +287,8 @@ if [ -f $project_name ]; then
fi
else
- # Build the basic definition, with just Debian and gcc/g++
+    # Build the basic definition: just Debian-slim with the minimal
+    # necessary tools.
if [ -f $base_name ]; then
if [ $quiet = 0 ]; then
printf "$scriptname: info: base OS docker image ('$base_name') "
@@ -300,7 +304,7 @@ Bootstrap: docker
From: $base_os
%post
- apt-get update && apt-get install -y gcc g++
+ apt-get update && apt-get install -y gcc g++ wget
EOF
# Build the base operating system container and delete the
# temporary definition file.
@@ -321,6 +325,7 @@ EOF
# software tarball directory, they will all be symbolic links that
# aren't valid when the user runs the container (since we only
# mount the software tarballs at build time).
+ intbuild=/home/maneager/build
maneage_def=$build_dir/maneage.def
cat <<EOF > $maneage_def
Bootstrap: localimage
@@ -336,24 +341,34 @@ From: $base_name
cd /home/maneager/source
./project configure --jobs=$jobs \\
--input-dir=/home/maneager/input \\
- --build-dir=/home/maneager/build \\
+ --build-dir=$intbuild \\
--software-dir=/home/maneager/tarballs-software
rm /home/maneager/build/software/tarballs/*
%runscript
cd /home/maneager/source
- if [ x"\$maneage_apptainer_stat" = xshell ]; then \\
- ./project shell; \\
- elif [ x"\$maneage_apptainer_stat" = xrun ]; then \\
- if [ x"\$maneage_jobs" = x ]; then \\
- ./project make; \\
+ if ./project configure --build-dir=$intbuild \\
+ --existing-conf --no-pause \\
+ --offline --quiet; then \\
+ if [ x"\$maneage_apptainer_stat" = xshell ]; then \\
+ ./project shell --build-dir=$intbuild; \\
+ elif [ x"\$maneage_apptainer_stat" = xrun ]; then \\
+ if [ x"\$maneage_jobs" = x ]; then \\
+ ./project make --build-dir=$intbuild; \\
+ else \\
+ ./project make --build-dir=$intbuild --jobs=\$maneage_jobs; \\
+ fi; \\
else \\
- ./project make --jobs=\$maneage_jobs; \\
+            printf "$scriptname: '\$maneage_apptainer_stat' (value "; \\
+            printf "given to the 'maneage_apptainer_stat' environment "; \\
+            printf "variable) is not recognized: should be either "; \\
+            printf "'shell' or 'run'.\n"; \\
+ exit 1; \\
fi; \\
else \\
- printf "$scriptname: '\$maneage_apptainer_stat' (value "; \\
- printf "to 'maneage_apptainer_stat' environment variable) "; \\
- printf "is not recognized: should be either 'shell' or 'run'"; \\
+        printf "$scriptname: configuration failed! This is probably "; \\
+        printf "due to a mismatch between the software versions of "; \\
+        printf "the container and those of the source being "; \\
+        printf "executed.\n"; \\
exit 1; \\
fi
EOF
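Both the temporary-directory handling here and `empty_build_tmp` in `configure.sh` use the same defensive idiom: check `ls -A` before expanding `"$dir"/*`, and avoid `rm -rf` so mistakes fail loudly instead of silently deleting. A standalone sketch of that guard (the directory name is illustrative):

```shell
# Empty a directory only when it actually has contents. 'ls -A'
# prints nothing for an empty directory, so the guard skips 'rm'
# when the '*' glob would not expand (plain 'rm -r' would fail on
# the literal '$tmpdir/*' in that case).
tmpdir=./demo-apptainer-tmp
mkdir -p "$tmpdir"
touch "$tmpdir"/junk1 "$tmpdir"/junk2
if ! [ x"$(ls -A "$tmpdir")" = x ]; then
    rm -r "$tmpdir"/*
fi
```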
diff --git a/reproduce/software/shell/configure.sh b/reproduce/software/shell/configure.sh
index e291f7b..a409920 100755
--- a/reproduce/software/shell/configure.sh
+++ b/reproduce/software/shell/configure.sh
@@ -40,6 +40,14 @@ set -e
# had the chance to implement it yet (please help if you can!). Until then,
# please set them based on your project (if they differ from the core
# branch).
+
+# If set to 1, a message will be printed showing the milliseconds
+# elapsed since the previous step: useful with '-e --offline
+# --no-pause --quiet' to find bottlenecks for speed optimization.
+# Speed is important because this script is called automatically
+# every time by the container scripts.
+check_elapsed=0
+
+# Set to 1 when a Fortran compiler also needs to be checked.
need_gfortran=0
@@ -52,14 +60,12 @@ need_gfortran=0
# These are defined to help make this script more readable.
topdir="$(pwd)"
optionaldir="/optional/path"
-adir=reproduce/analysis/config
cdir=reproduce/software/config
-pconf=$cdir/LOCAL.conf
-ptconf=$cdir/LOCAL_tmp.conf
-poconf=$cdir/LOCAL_old.conf
-depverfile=$cdir/versions.conf
-depshafile=$cdir/checksums.conf
+
+
+
+
@@ -73,14 +79,21 @@ depshafile=$cdir/checksums.conf
# that their changes are not going to be permanent.
create_file_with_notice ()
{
- if echo "# IMPORTANT: file can be RE-WRITTEN after './project configure'" > "$1"
+ if printf "# IMPORTANT: " > "$1"
then
- echo "#" >> "$1"
- echo "# This file was created during configuration" >> "$1"
- echo "# ('./project configure'). Therefore, it is not under" >> "$1"
- echo "# version control and any manual changes to it will be" >> "$1"
- echo "# over-written if the project re-configured." >> "$1"
- echo "#" >> "$1"
+        # These commands may look messy, but the goal is the comments
+        # produced in the file: they remain readable, without our
+        # having to break the source-code line length here.
+ printf "file can be RE-WRITTEN after './project " >> "$1"
+ printf "configure'.\n" >> "$1"
+ printf "#\n" >> "$1"
+ printf "# This file was created during configuration " >> "$1"
+ printf "('./project configure').\n" >> "$1"
+ printf "# Therefore, it is not under version control " >> "$1"
+ printf "and any manual changes\n" >> "$1"
+ printf "# to it will be over-written when the " >> "$1"
+ printf "project is re-configured.\n" >> "$1"
+ printf "#\n" >> "$1"
else
echo; echo "Can't write to $1"; echo;
exit 1
@@ -102,7 +115,7 @@ absolute_dir ()
if stat "$address" 1> /dev/null; then
echo "$(cd "$(dirname "$1")" && pwd )/$(basename "$1")"
else
- exit 1;
+ echo "$optionaldir"
fi
}
@@ -200,30 +213,113 @@ free_space_warning()
-# See if we are on a Linux-based system
-# --------------------------------------
+# Function to empty the temporary software building directory. This can
+# either be a symbolic link (to RAM) or an actual directory, so we can't
+# simply use 'rm -r' (because a symbolic link is not a directory for 'rm').
+empty_build_tmp() {
+
+ # 'ls -A' does not print the '.' and '..' and the '-z' option of '['
+ # checks if the string is empty or not. This allows us to only attempt
+ # deleting the directory's contents if it actually has anything inside
+ # of it. Otherwise, '*' will not expand and we'll get an 'rm' error
+ # complaining that '$tmpblddir/*' doesn't exist. We also don't want to
+ # use 'rm -rf $tmpblddir/*' because in case of a typo or while
+ # debugging (if '$tmpblddir' becomes an empty string), this can
+ # accidentally delete the whole root partition (or at least the '/home'
+ # partition of the user).
+ if ! [ x"$(ls -A "$tmpblddir")" = x ]; then
+ rm -r "$tmpblddir"/*
+ fi
+ rm -r "$tmpblddir"
+}
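The guard in 'empty_build_tmp' can be exercised with a throw-away directory (illustrative sketch, not part of the patch; 'tmpdir' is a hypothetical stand-in for '$tmpblddir'):

```shell
# Hypothetical stand-in for '$tmpblddir'; created fresh so the test is safe.
tmpdir=$(mktemp -d)
touch "$tmpdir/a" "$tmpdir/b"

# 'ls -A' ignores '.' and '..', so the string is empty only for an empty
# directory; '*' is thus only expanded when there is something to delete.
if ! [ x"$(ls -A "$tmpdir")" = x ]; then
    rm -r "$tmpdir"/*
fi
rm -r "$tmpdir"
```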
+
+
+
+
+
+# Function to report the elapsed time between steps (if it was activated
+# above with 'check_elapsed').
+elapsed_time_from_prev_step() {
+ if [ $check_elapsed = 1 ]; then
+ chel_now=$(date +"%N");
+ chel_delta=$(echo $chel_prev $chel_now \
+ | awk '{ delta=($2-$1)/1e6; \
+ if(delta>0) d=delta; else d=0; \
+ print d}')
+ chel_dsum=$(echo $chel_dsum $chel_delta | awk '{print $1+$2}')
+ echo $chel_counter $chel_delta "$1" \
+ | awk '{ printf "Step %02d: %-6.2f [millisec]; %s\n", \
+ $1, $2, $3}'
+ chel_counter=$((chel_counter+1))
+ chel_prev=$(date +"%N")
+ fi
+}
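A minimal sketch of the awk arithmetic above (outside the patch): two hypothetical '%N' readings, 1000000 and 4000000 nanoseconds, give a 3 millisecond delta; a wrapped (negative) difference is clamped to zero because '%N' only counts nanoseconds within the current second:

```shell
# Delta between two nanosecond readings, in milliseconds.
chel_delta=$(echo 1000000 4000000 \
                 | awk '{ delta=($2-$1)/1e6; \
                          if(delta>0) d=delta; else d=0; print d }')

# A reading taken just after a second boundary is smaller than the
# previous one; the negative delta is clamped to 0.
chel_clamped=$(echo 900000000 100000000 \
                   | awk '{ delta=($2-$1)/1e6; \
                            if(delta>0) d=delta; else d=0; print d }')
```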
+
+
+
+
+
+
+
+
+
+
+# In already-built container
+# --------------------------
+#
+# We need to run './project configure' at the start of every run of Maneage
+# within a container (with 'shell' or 'make'). This is because we need to
+# ensure the versions of all software are correct. However, the container
+# filesystem (where the build/software directory is located) should be
+# mounted read-only during the analysis. So we will not be able to run
+# some of the tests that require writing files or are generally not relevant
+# when the container is already built (we want the configure command to be
+# as fast as possible).
+#
+# The project source in Maneage'd containers is '/home/maneager/source'.
+built_container=0
+if [ "$topdir" = /home/maneager/source ] \
+ && [ -f .build/software/config/hardware-parameters.tex ]; then
+ built_container=1;
+fi
+
+# Initialize the elapsed time measurement parameters.
+if [ $check_elapsed = 1 ]; then
+ chel_dsum=0.00
+ chel_counter=1
+ chel_prev=$(date +"%N")
+ chel_start=$(date +"%N")
+fi
+
+
+
+
+# Identify the running OS
+# -----------------------
#
# Some features are tailored to GNU/Linux systems, while the BSD-based
# behavior is different. Initially we only tested macOS (hence the name of
# the variable), but FreeBSD is also being included in our tests. As
# more systems get used, we need to tailor these kinds of things better.
-kernelname=$(uname -s)
-if [ x$kernelname = xLinux ]; then
- on_mac_os=no
-
- # Don't forget to add the respective C++ compiler below (leave 'cc' in
- # the end).
- c_compiler_list="gcc clang cc"
-elif [ x$kernelname = xDarwin ]; then
- host_cc=1
- on_mac_os=yes
-
- # Don't forget to add the respective C++ compiler below (leave 'cc' in
- # the end).
- c_compiler_list="clang gcc cc"
-else
- on_mac_os=no
- cat <<EOF
+if [ $built_container = 0 ]; then
+ kernelname=$(uname -s)
+ if [ $pauseformsg = 1 ]; then pausesec=10; else pausesec=0; fi
+ if [ x$kernelname = xLinux ]; then
+ on_mac_os=no
+
+ # Don't forget to add the respective C++ compiler below (leave 'cc' in
+ # the end).
+ c_compiler_list="gcc clang cc"
+ elif [ x$kernelname = xDarwin ]; then
+ host_cc=1
+ on_mac_os=yes
+
+ # Don't forget to add the respective C++ compiler below (leave 'cc' in
+ # the end).
+ c_compiler_list="clang gcc cc"
+ else
+ on_mac_os=no
+ cat <<EOF
______________________________________________________
!!!!!!! WARNING !!!!!!!
@@ -234,17 +330,20 @@ web-form:
https://savannah.nongnu.org/support/?func=additem&group=reproduce
-The configuration will continue in 10 seconds...
+The configuration will continue in $pausesec seconds. To avoid the
+pause on such messages, use the '--no-pause' option.
+
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
EOF
- sleep 10
+ sleep $pausesec
+ fi
+ elapsed_time_from_prev_step os_identify
fi
-
# Collect CPU information
# -----------------------
#
@@ -255,42 +354,43 @@ fi
# later recorded as a LaTeX macro to be put in the final paper, but it
# could be used in a more systematic way to optimize/revise project
# workflow and build.
-hw_class=$(uname -m)
-if [ x$kernelname = xLinux ]; then
- byte_order=$(lscpu \
- | grep 'Byte Order' \
- | awk '{ \
- for(i=3;i<NF;++i) \
- printf "%s ", $i; \
- printf "%s", $NF}')
- address_sizes=$(lscpu \
- | grep 'Address sizes' \
- | awk '{ \
- for(i=3;i<NF;++i) \
- printf "%s ", $i; \
- printf "%s", $NF}')
-elif [ x$on_mac_os = xyes ]; then
- hw_byteorder=$(sysctl -n hw.byteorder)
- if [ x$hw_byteorder = x1234 ]; then byte_order="Little Endian";
- elif [ x$hw_byteorder = x4321 ]; then byte_order="Big Endian";
- fi
- # On macOS, the way of obtaining the number of cores is different
- # between Intel or Apple M1 CPUs. Here we disinguish between Apple M1
- # or others.
- maccputype=$(sysctl -n machdep.cpu.brand_string)
- if [ x"$maccputype" = x"Apple M1" ]; then
- address_size_physical=$(sysctl -n machdep.cpu.thread_count)
- address_size_virtual=$(sysctl -n machdep.cpu.logical_per_package)
+if [ $built_container = 0 ]; then
+ if [ x$kernelname = xLinux ]; then
+ byte_order=$(lscpu \
+ | grep 'Byte Order' \
+ | awk '{ \
+ for(i=3;i<NF;++i) \
+ printf "%s ", $i; \
+ printf "%s", $NF}')
+ address_sizes=$(lscpu \
+ | grep 'Address sizes' \
+ | awk '{ \
+ for(i=3;i<NF;++i) \
+ printf "%s ", $i; \
+ printf "%s", $NF}')
+ elif [ x$on_mac_os = xyes ]; then
+ hw_byteorder=$(sysctl -n hw.byteorder)
+ if [ x$hw_byteorder = x1234 ]; then byte_order="Little Endian";
+ elif [ x$hw_byteorder = x4321 ]; then byte_order="Big Endian";
+ fi
+
+ # On macOS, the way of obtaining the number of cores differs
+ # between Intel and Apple M1 CPUs. Here we distinguish between
+ # Apple M1 and others.
+ maccputype=$(sysctl -n machdep.cpu.brand_string)
+ if [ x"$maccputype" = x"Apple M1" ]; then
+ address_size_physical=$(sysctl -n machdep.cpu.thread_count)
+ address_size_virtual=$(sysctl -n machdep.cpu.logical_per_package)
+ else
+ address_size_physical=$(sysctl -n machdep.cpu.address_bits.physical)
+ address_size_virtual=$(sysctl -n machdep.cpu.address_bits.virtual)
+ fi
+ address_sizes="$address_size_physical bits physical, "
+ address_sizes+="$address_size_virtual bits virtual"
else
- address_size_physical=$(sysctl -n machdep.cpu.address_bits.physical)
- address_size_virtual=$(sysctl -n machdep.cpu.address_bits.virtual)
- fi
- address_sizes="$address_size_physical bits physical, "
- address_sizes+="$address_size_virtual bits virtual"
-else
- byte_order="unrecognized"
- address_sizes="unrecognized"
- cat <<EOF
+ byte_order="unrecognized"
+ address_sizes="unrecognized"
+ cat <<EOF
______________________________________________________
!!!!!!! WARNING !!!!!!!
@@ -300,10 +400,15 @@ the necessary steps in the 'reproduce/software/shell/configure.sh' script
https://savannah.nongnu.org/support/?func=additem&group=reproduce
+The configuration will continue in $pausesec seconds. To avoid the
+pause on such messages, use the '--no-pause' option.
+
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
EOF
- sleep 5
+ sleep $pausesec
+ fi
+ elapsed_time_from_prev_step cpu-info
fi
@@ -318,7 +423,7 @@ fi
# avoid these error it is highly recommended to install Xcode in the host
# system. Here, it is checked that this is the case, and if not, warn the user
# about not having Xcode already installed.
-if [ x$on_mac_os = xyes ]; then
+if [ $built_container = 0 ] && [ x$on_mac_os = xyes ]; then
# 'which' isn't in POSIX, so we are using 'command -v' instead.
xcode=$(command -v xcodebuild)
@@ -341,12 +446,15 @@ web-form:
https://savannah.nongnu.org/support/?func=additem&group=reproduce
-The configuration will continue in 5 seconds ...
+The configuration will continue in $pausesec seconds. To avoid the
+pause on such messages, use the '--no-pause' option.
+
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
EOF
- sleep 5
+ sleep $pausesec
fi
+ elapsed_time_from_prev_step compiler-of-mac-os
fi
@@ -359,14 +467,15 @@ fi
# To build the software, we'll need some basic tools (the C/C++ compilers
# in particular) to be present.
has_compilers=no
-for c in $c_compiler_list; do
+if [ $built_container = 0 ]; then
+ for c in $c_compiler_list; do
- # Set the respective C++ compiler.
- if [ x$c = xcc ]; then cplus=c++;
- elif [ x$c = xgcc ]; then cplus=g++;
- elif [ x$c = xclang ]; then cplus=clang++;
- else
- cat <<EOF
+ # Set the respective C++ compiler.
+ if [ x$c = xcc ]; then cplus=c++;
+ elif [ x$c = xgcc ]; then cplus=g++;
+ elif [ x$c = xclang ]; then cplus=clang++;
+ else
+ cat <<EOF
______________________________________________________
!!!!!!! BUG !!!!!!!
@@ -379,21 +488,21 @@ script (just above this error message), or contact us with this web-form:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
EOF
- exit 1
- fi
+ exit 1
+ fi
- # Check if they exist.
- if type $c > /dev/null 2>/dev/null; then
- export CC=$c;
- if type $cplus > /dev/null 2>/dev/null; then
- export CXX=$cplus
- has_compilers=yes
- break
+ # Check if they exist.
+ if type $c > /dev/null 2>/dev/null; then
+ export CC=$c;
+ if type $cplus > /dev/null 2>/dev/null; then
+ export CXX=$cplus
+ has_compilers=yes
+ break
+ fi
fi
- fi
-done
-if [ x$has_compilers = xno ]; then
- cat <<EOF
+ done
+ if [ x$has_compilers = xno ]; then
+ cat <<EOF
______________________________________________________
!!!!!!! C/C++ Compiler NOT FOUND !!!!!!!
@@ -416,51 +525,52 @@ Xcode install are recommended. There are known problems with GCC on macOS.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
EOF
- exit 1
+ exit 1
+ fi
+ elapsed_time_from_prev_step compiler-present
fi
-
-# Special directory for compiler testing
-# --------------------------------------
-#
-# This directory will be deleted when the compiler testing is finished.
-compilertestdir=.compiler_test_dir_please_delete
-if ! [ -d $compilertestdir ]; then mkdir $compilertestdir; fi
-
-
-
-
-
# Check C compiler
# ----------------
#
-# Here we check if the C compiler works properly. About the "no warning"
-# variable ('nowarnings'):
-#
-# -Wno-nullability-completeness: on macOS Big Sur 11.2.3 and Xcode 12.4,
-# hundreds of 'nullability-completeness' warnings are printed which can
-# be very annoying and even hide important errors or warnings. It is
-# also harmless for our test here, so it is generally added.
-testprog=$compilertestdir/test
+# We are checking the C compiler before asking for the directories to let
+# the user fix lower-level problems before giving inputs.
+compilertestdir=.compiler_test_dir_please_delete
testsource=$compilertestdir/test.c
-if [ x$on_mac_os = xyes ]; then
- noccwarnings="-Wno-nullability-completeness"
-fi
-echo; echo; echo "Checking host C compiler ('$CC')...";
-cat > $testsource <<EOF
+testprog=$compilertestdir/test
+if [ $built_container = 0 ]; then
+
+ # Here we check if the C compiler works properly. We'll start by
+ # making a directory to keep the products.
+ if ! [ -d $compilertestdir ]; then mkdir $compilertestdir; fi
+
+ # About the "no warning" variable ('nowarnings'):
+ #
+ # -Wno-nullability-completeness: on macOS Big Sur 11.2.3 and
+ # Xcode 12.4, hundreds of 'nullability-completeness' warnings
+ # are printed which can be very annoying and even hide
+ # important errors or warnings. It is also harmless for our
+ # test here, so it is generally added.
+ if [ x$on_mac_os = xyes ]; then
+ noccwarnings="-Wno-nullability-completeness"
+ fi
+ if [ $quiet = 0 ]; then
+ echo; echo "Checking host C compiler ('$CC')...";
+ fi
+ cat > $testsource <<EOF
#include <stdio.h>
#include <stdlib.h>
-int main(void){printf("...C compiler works.\n");
- return EXIT_SUCCESS;}
+int main(void){printf("Good!\n"); return EXIT_SUCCESS;}
EOF
-if $CC $noccwarnings $testsource -o$testprog && $testprog; then
- rm $testsource $testprog
-else
- rm $testsource
- cat <<EOF
+ if $CC $noccwarnings $testsource -o$testprog && $testprog > /dev/null; then
+ if [ $quiet = 0 ]; then echo "... yes"; fi
+ rm $testsource $testprog
+ else
+ rm $testsource
+ cat <<EOF
______________________________________________________
!!!!!!! C compiler doesn't work !!!!!!!
@@ -479,13 +589,14 @@ https://savannah.nongnu.org/support/?func=additem&group=reproduce
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
EOF
- exit 1
+ exit 1
+ fi
+ elapsed_time_from_prev_step compiler-c-check
fi
-
# See if we need the dynamic-linker (-ldl)
# ----------------------------------------
#
@@ -493,7 +604,8 @@ fi
# GNU/Linux systems, we'll need the '-ldl' flag to link such programs. But
# Mac OS doesn't need any explicit linking. So we'll check here to see if
# it is present (thus necessary) or not.
-cat > $testsource <<EOF
+if [ $built_container = 0 ]; then
+ cat > $testsource <<EOF
#include <stdio.h>
#include <dlfcn.h>
int
@@ -502,17 +614,17 @@ main(void) {
return 0;
}
EOF
-if $CC $testsource -o$testprog 2>/dev/null > /dev/null; then
- needs_ldl=no;
-else
- needs_ldl=yes;
+ if $CC $testsource -o$testprog 2>/dev/null > /dev/null; then
+ needs_ldl=no;
+ else
+ needs_ldl=yes;
+ fi
+ elapsed_time_from_prev_step compiler-needs-dynamic-linker
fi
-
-
# See if the C compiler can build static libraries
# ------------------------------------------------
#
@@ -528,32 +640,30 @@ fi
# the library came from the system or our build.
static_build=no
-
-
-
-
# Print warning if the host CC is to be used.
-if [ x$host_cc = x1 ]; then
+if [ $built_container = 0 ] && [ x$host_cc = x1 ]; then
cat <<EOF
______________________________________________________
!!!!!!!!!!!!!!! Warning !!!!!!!!!!!!!!!!
The GNU Compiler Collection (GCC, including compilers for C, C++, Fortran
-and etc) is currently not built on macOS systems for this project. To build
-the project's necessary software on this system, we need to use your
-system's C compiler.
+and so on) is not going to be built for this project: either this is a
+macOS system, or you have used '--host-cc'.
-Project's configuration will continue in 5 seconds.
-______________________________________________________
+The configuration will continue in $pausesec seconds. To avoid the
+pause on such messages, use the '--no-pause' option.
+
+!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
EOF
- sleep 5
+ sleep $pausesec
fi
+
# Necessary C library element positions
# -------------------------------------
#
@@ -563,7 +673,7 @@ fi
# similarly different location.
sys_cpath=""
sys_library_path=""
-if [ x"$on_mac_os" != xyes ]; then
+if [ $built_container = 0 ] && [ x"$on_mac_os" != xyes ]; then
# Get the GCC target name of the compiler, when its given, special
# C libraries and headers are in a sub-directory of the host.
@@ -581,6 +691,7 @@ if [ x"$on_mac_os" != xyes ]; then
# For a check:
#echo "sys_library_path: $sys_library_path"
#echo "sys_cpath: $sys_cpath"
+ elapsed_time_from_prev_step compiler-sys-cpath
fi
@@ -592,25 +703,28 @@ fi
#
# A static C library and the 'sys/cdefs.h' header are necessary for
# building GCC.
-if [ x"$host_cc" = x0 ]; then
- echo; echo; echo "Checking if static C library is available...";
- cat > $testsource <<EOF
+if [ $built_container = 0 ]; then
+ if [ x"$host_cc" = x0 ]; then
+ if [ $quiet = 0 ]; then
+ echo; echo "Checking if static C library is available...";
+ fi
+ cat > $testsource <<EOF
#include <stdio.h>
#include <stdlib.h>
#include <sys/cdefs.h>
-int main(void){printf("...yes\n");
- return EXIT_SUCCESS;}
+int main(void){printf("...yes\n"); return EXIT_SUCCESS;}
EOF
- cc_call="$CC $testsource $CPPFLAGS $LDFLAGS -o$testprog -static -lc"
- if $cc_call && $testprog; then
- gccwarning=0
- rm $testsource $testprog
- else
- echo; echo "Compilation command:"; echo "$cc_call"
- rm $testsource
- gccwarning=1
- host_cc=1
- cat <<EOF
+ cc_call="$CC $testsource $CPPFLAGS $LDFLAGS -o$testprog -static -lc"
+ if $cc_call && $testprog > /dev/null; then
+ gccwarning=0
+ rm $testsource $testprog
+ if [ $quiet = 0 ]; then echo "... yes"; fi
+ else
+ echo; echo "Compilation command:"; echo "$cc_call"
+ rm $testsource
+ gccwarning=1
+ host_cc=1
+ cat <<EOF
_______________________________________________________
!!!!!!!!!!!! Warning !!!!!!!!!!!!
@@ -637,15 +751,14 @@ re-configure the project to fix this problem.
$ export LDFLAGS="-L/PATH/TO/STATIC/LIBC \$LDFLAGS"
$ export CPPFLAGS="-I/PATH/TO/SYS/CDEFS_H \$CPPFLAGS"
-
_______________________________________________________
EOF
+ fi
fi
-fi
-# Print a warning if GCC is not meant to be built.
-if [ x"$gccwarning" = x1 ]; then
+ # Print a warning if GCC is not meant to be built.
+ if [ x"$gccwarning" = x1 ]; then
cat <<EOF
PLEASE SEE THE WARNINGS ABOVE.
@@ -655,10 +768,13 @@ seconds and use your system's C compiler (it won't build a custom GCC). But
please consider installing the necessary package(s) to complete your C
compiler, then re-run './project configure'.
-Project's configuration will continue in 5 seconds.
+The configuration will continue in $pausesec seconds. To avoid the
+pause on such messages, use the '--no-pause' option.
EOF
- sleep 5
+ sleep $pausesec
+ fi
+ elapsed_time_from_prev_step compiler-linkable-static
fi
@@ -672,7 +788,7 @@ fi
# have a fortran compiler: we'll build it internally for high-level
# programs with GCC. However, when the host C compiler is to be used, the
# user needs to have a Fortran compiler available.
-if [ $host_cc = 1 ]; then
+if [ $built_container = 0 ] && [ $host_cc = 1 ]; then
# If a Fortran compiler is necessary, see if 'gfortran' exists and can
# be used.
@@ -705,8 +821,9 @@ EOF
# Then, see if the Fortran compiler works
testsourcef=$compilertestdir/test.f
echo; echo; echo "Checking host Fortran compiler...";
- echo " PRINT *, \"... Fortran Compiler works.\"" > $testsourcef
- echo " END" >> $testsourcef
+ echo " PRINT *, \"... Fortran Compiler works.\"" \
+ > $testsourcef
+ echo " END" >> $testsourcef
if gfortran $testsourcef -o$testprog && $testprog; then
rm $testsourcef $testprog
else
@@ -732,6 +849,68 @@ EOF
exit 1
fi
fi
+ elapsed_time_from_prev_step compiler-fortran
+fi
+
+
+
+
+
+# See if the linker accepts -Wl,-rpath-link
+# -----------------------------------------
+#
+# '-rpath-link' is used to write the information of the linked shared
+# library into the shared object (library or program). But some versions of
+# LLVM's linker don't accept it, which can cause problems.
+#
+# IMPORTANT NOTE: This test has to be done **AFTER** the definition of
+# 'instdir', otherwise, it is going to be used as an empty string.
+if [ $built_container = 0 ]; then
+ cat > $testsource <<EOF
+#include <stdio.h>
+#include <stdlib.h>
+int main(void) {return EXIT_SUCCESS;}
+EOF
+ if $CC $testsource -o$testprog -Wl,-rpath-link 2>/dev/null \
+ > /dev/null; then
+ export rpath_command="-Wl,-rpath-link=$instdir/lib"
+ else
+ export rpath_command=""
+ fi
+
+ # Delete the temporary directory for compiler checking.
+ rm -f $testprog $testsource
+ rm -r $compilertestdir
+ elapsed_time_from_prev_step compiler-rpath
+fi
+
+
+
+
+
+# Paths needed by the host compiler (only for 'basic.mk')
+# -------------------------------------------------------
+#
+# At the end of the basic build, we need to build GCC. But GCC will build
+# in multiple phases, making its own simple compiler in order to build
+# itself completely. The intermediate/simple compiler doesn't recognize
+# some system specific locations like '/usr/lib/ARCHITECTURE' that some
+# operating systems use. We thus need to tell the intermediate compiler
+# where its necessary libraries and headers are.
+if [ $built_container = 0 ]; then
+ if [ x"$sys_library_path" != x ]; then
+ if [ x"$LIBRARY_PATH" = x ]; then
+ export LIBRARY_PATH="$sys_library_path"
+ else
+ export LIBRARY_PATH="$LIBRARY_PATH:$sys_library_path"
+ fi
+ if [ x"$CPATH" = x ]; then
+ export CPATH="$sys_cpath"
+ else
+ export CPATH="$CPATH:$sys_cpath"
+ fi
+ fi
+ elapsed_time_from_prev_step compiler-paths
fi
@@ -743,7 +922,8 @@ fi
#
# Print some basic information so the user gets a feeling of what is going
# on and is prepared on what will happen next.
-cat <<EOF
+if [ $quiet = 0 ]; then
+ cat <<EOF
-----------------------------
Project's local configuration
@@ -758,33 +938,29 @@ components from pre-defined webpages). It is STRONGLY recommended to read
the description above each question before answering it.
EOF
+fi
-
-# What to do with possibly existing configuration file
-# ----------------------------------------------------
+# Previous configuration
+# ----------------------
#
-# 'LOCAL.conf' is the top-most local configuration for the project. If it
-# already exists when this script is run, we'll make a copy of it as backup
-# (for example the user might have ran './project configure' by mistake).
-printnotice=yes
-rewritepconfig=yes
-if [ -f $pconf ]; then
+# 'LOCAL.conf' is the top-most local configuration for the project. At this
+# point, if a LOCAL.conf exists within the '.build' symlink, we use it
+# (instead of asking the user to interactively specify it).
+rewritelconfig=yes
+lconf=.build/software/config/LOCAL.conf
+if [ -f $lconf ]; then
if [ $existing_conf = 1 ]; then
- printnotice=no
- if [ -f $pconf ]; then rewritepconfig=no; fi
+ rewritelconfig=no;
fi
fi
-
-
-
# Make sure the group permissions satisfy the previous configuration (if it
# exists and we don't want to re-write it).
-if [ $rewritepconfig = no ]; then
- oldgroupname=$(awk '/GROUP-NAME/ {print $3; exit 0}' $pconf)
+if [ $rewritelconfig = no ]; then
+ oldgroupname=$(awk '/GROUP-NAME/ {print $3; exit 0}' $lconf)
if [ "x$oldgroupname" = "x$maneage_group_name" ]; then
just_a_place_holder_to_avoid_not_equal_test=1;
else
@@ -805,65 +981,9 @@ if [ $rewritepconfig = no ]; then
echo " $confcommand"; echo
exit 1
fi
-fi
-
-
-
-
-
-# Identify the downloader tool
-# ----------------------------
-#
-# After this script finishes, we will have both Wget and cURL for
-# downloading any necessary dataset during the processing. However, to
-# complete the configuration, we may also need to download the source code
-# of some necessary software packages (including the downloaders). So we
-# need to check the host's available tool for downloading at this step.
-if [ $rewritepconfig = yes ]; then
- if type wget > /dev/null 2>/dev/null; then
-
- # 'which' isn't in POSIX, so we are using 'command -v' instead.
- name=$(command -v wget)
-
- # See if the host wget has the '--no-use-server-timestamps' option
- # (for example wget 1.12 doesn't have it). If not, we'll have to
- # remove it. This won't affect the analysis of Maneage in anyway,
- # its just to avoid re-downloading if the server timestamps are
- # bad; at the worst case, it will just cause a re-download of an
- # input software source code (for data inputs, we will use our own
- # wget that has this option).
- tsname="no-use-server-timestamps"
- tscheck=$(wget --help | grep $tsname || true)
- if [ x"$tscheck" = x ]; then wgetts=""
- else wgetts="--$tsname";
- fi
-
- # By default Wget keeps the remote file's timestamp, so we'll have
- # to disable it manually.
- downloader="$name $wgetts -O";
- elif type curl > /dev/null 2>/dev/null; then
- name=$(command -v curl)
-
- # - cURL doesn't keep the remote file's timestamp by default.
- # - With the '-L' option, we tell cURL to follow redirects.
- downloader="$name -L -o"
- else
- cat <<EOF
-
-!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
-!!!!!!!!!!!!!!!!!!!!!! Warning !!!!!!!!!!!!!!!!!!!!!!
-!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
-
-Couldn't find GNU Wget, or cURL on this system. These programs are used for
-downloading necessary programs and data if they aren't already present (in
-directories that you can specify with this configure script). Therefore if
-the necessary files are not present, the project will crash.
-
-!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
-EOF
- downloader="no-downloader-found"
- fi;
+ # Report timing of this step if necessary.
+ elapsed_time_from_prev_step LOCAL-and-group-check
fi
@@ -873,7 +993,7 @@ fi
# Build directory
# ---------------
currentdir="$(pwd)"
-if [ $rewritepconfig = yes ]; then
+if [ $rewritelconfig = yes ]; then
cat <<EOF
===============
@@ -901,12 +1021,18 @@ Do not choose any directory under the top source directory (this
directory). The build directory cannot be a subdirectory of the source.
---------------
+Build directory:
+ - Must be writable by the running user.
+ - Not a sub-directory of the source directory.
+ - No meta-characters in name: SPACE ! ' @ # $ % ^ & * ( ) + ;
+
EOF
bdir=
junkname=pure-junk-974adfkj38
while [ x"$bdir" = x ]
do
- # Ask the user (if not already set on the command-line).
+ # Ask the user (if not already set on the command-line: 'build_dir'
+ # comes from the 'project' script).
if [ x"$build_dir" = x ]; then
if read -p"Please enter the top build directory: " build_dir;
then
@@ -948,9 +1074,11 @@ EOF
# If it was newly created, it will be empty, so delete it.
if ! [ "$(ls -A $bdir)" ]; then rm --dir "$bdir"; fi
- # Inform the user that this is not acceptable and reset 'bdir'.
+ # Inform the user that this is not acceptable and reset
+ # 'bdir'.
bdir=
- echo " ** The build-directory cannot be under the source-directory."
+ printf " ** The build-directory cannot be under the "
+ printf "source-directory."
fi
fi
@@ -959,7 +1087,8 @@ EOF
# building.
if ! [ x"$bdir" = x ]; then
hasmeta=0;
- case $bdir in *['!'\@\#\$\%\^\&\*\(\)\+\;\ ]* ) hasmeta=1 ;; esac
+ case $bdir in *['!'\@\#\$\%\^\&\*\(\)\+\;\ ]* ) hasmeta=1 ;;
+ esac
if [ $hasmeta = 1 ]; then
# If it was newly created, it will be empty, so delete it.
@@ -967,9 +1096,10 @@ EOF
# Inform the user and set 'bdir' to empty again.
bdir=
- echo " ** Build directory should not contain meta-characters"
- echo " ** (like SPACE, %, \$, !, ;, or parenthesis, among "
- echo " ** others): they can interrup the build for some software."
+ printf " ** Build directory should not contain "
+ printf "meta-characters (like SPACE, %%, \$, !, ;, or "
+ printf "parentheses, among others): they can interrupt "
+ printf "the build for some software."
fi
fi
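The 'case' pattern above can be sketched as a stand-alone check (illustrative only; 'check_meta' is a hypothetical helper name, and '!' is placed last in the bracket expression so no shell can read it as negation):

```shell
# Print 1 if the argument contains a meta-character that can interrupt
# the build (space, '@', '#', '$', '%', '^', '&', '*', '(', ')', '+',
# ';' or '!'), otherwise print 0.
check_meta() {
    hasmeta=0
    case $1 in *[\@\#\$\%\^\&\*\(\)\+\;\ !]* ) hasmeta=1 ;; esac
    echo $hasmeta
}
```

For example, 'check_meta /tmp/build' prints 0, while a name containing a space or parenthesis prints 1.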
@@ -980,16 +1110,29 @@ EOF
if ! $(check_permission "$bdir"); then
# Unable to handle permissions well
bdir=
- echo " ** File permissions can't be modified in this directory"
+ printf " ** File permissions cannot be modified in "
+ printf "this directory"
else
# Able to handle permissions, now check for 5GB free space
# in the given partition (note that the number is in units
# of 1024 bytes). If this is not the case, print a warning.
if $(free_space_warning 5000000 "$bdir"); then
- echo " !! LESS THAN 5GB FREE SPACE IN: $bdir"
- echo " !! We recommend choosing another partition."
- echo " !! Build will continue in 5 seconds..."
- sleep 5
+ cat <<EOF
+
+_______________________________________________________
+!!!!!!!!!!!! Warning !!!!!!!!!!!!
+
+Less than 5GB free space in '$bdir'. We recommend choosing another
+partition. Note that the software environment alone will take roughly
+4.5GB, so if your datasets are large, it will fill up very soon.
+
+The configuration will continue in $pausesec seconds. To avoid the
+pause on such messages, use the '--no-pause' option.
+
+!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+
+EOF
+ sleep $pausesec
fi
fi
fi
@@ -1003,9 +1146,42 @@ EOF
echo " ** Please select another directory."
echo ""
else
+ # Set the '.build' and '.local' symbolic links (and delete
+ # possibly existing symbolic links). These commands are also
+ # present in the top-level 'project' script, but they are only
+ # invoked when '--build-dir' is called. When it is not called
+ # (the user wants to insert the directories interactively: the
+ # scenario here), the links need to be created from
+ # scratch. Furthermore, in case the given directory to
+ # '--build-dir' has problems (fails to pass the sanity checks
+ # above), the symbolic links also need to be recreated.
+ rm -f .build .local
+ ln -s $bdir .build
+ ln -s $bdir/software/installed .local
+
+ # Inform the user
echo " -- Build directory set to ($instring): '$bdir'"
fi
done
+
+ # Report timing if necessary
+ elapsed_time_from_prev_step build-dir
+
+# The directory should be extracted from the existing LOCAL.conf, not from
+# the command-line or in interactive mode.
+else
+
+ # Read the build directory from existing configuration file. It is
+ # assumed that 'LOCAL.conf' is created by this script (above the
+ # 'else') and that all the sanity checks there have already been
+ # applied. We'll just check if it is empty or not.
+ bdir=$(awk '$1=="BDIR" {print $3}' $lconf)
+ if [ x"$bdir" = x ]; then
+ printf "$scriptname: no value for 'BDIR' in '$lconf'. Please run "
+ printf "the project configuration again, but without "
+ printf "'--existing-conf' (or '-e').\n"
+ exit 1
+ fi
fi
@@ -1014,13 +1190,10 @@ fi
# Input directory
# ---------------
-if [ x"$input_dir" = x ]; then
- indir="$optionaldir"
-else
- indir="$input_dir"
+if [ x"$input_dir" = x ]; then indir="$optionaldir"
+else indir="$input_dir"
fi
-noninteractive_sleep=2
-if [ $rewritepconfig = yes ] && [ x"$input_dir" = x ]; then
+if [ $rewritelconfig = yes ]; then
cat <<EOF
----------------------------------
@@ -1047,35 +1220,61 @@ don't want to make duplicates, you can create symbolic links to them and
put those symbolic links in the given top-level directory.
EOF
- # Read the input directory if interactive mode is enabled.
- if read -p"(OPTIONAL) Input datasets directory ($indir): " inindir; then
- just_a_place_holder_to_avoid_not_equal_test=1;
- else
- echo "WARNING: interactive-mode seems to be disabled!"
- echo "If you have a local copy of the inputs, use '--input-dir'."
- echo "... project configuration will continue in $noninteractive_sleep sec ..."
- sleep $noninteractive_sleep
+ # In case an input directory is not given, ask the user interactively.
+ if [ x"$input_dir" = x ]; then
+
+ # Read the input directory if interactive mode is enabled.
+ if read -p"(OPTIONAL) Input datasets directory ($indir): " \
+ inindir; then
+ just_a_place_holder_to_avoid_not_equal_test=1;
+ else
+ cat <<EOF
+______________________________________________________
+!!!!!!!!!!!!!!! Warning !!!!!!!!!!!!!!!!
+
+WARNING: interactive-mode seems to be disabled! If you have a local copy of
+the inputs, use '--input-dir'. Otherwise, all the data will be downloaded.
+
+The configuration will continue in $pausesec seconds. To avoid the
+pause on such messages, use the '--no-pause' option.
+
+!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+
+EOF
+ sleep $pausesec
+ fi
+ else # An input directory was given.
+ inindir="$input_dir"
fi
- # In case an input-directory is given, write it in 'indir'.
+ # If the given string is not empty, write it in 'indir'.
if [ x$inindir != x ]; then
indir="$(absolute_dir "$inindir")"
echo " -- Using '$indir'"
fi
+
+ # Report timing if necessary.
+ elapsed_time_from_prev_step input-dir
+
+# The directory should be extracted from the existing LOCAL.conf, not from
+# the command-line or in interactive mode; similar to 'bdir' above.
+else
+ indir=$(awk '$1=="INDIR" {print $3}' $lconf)
fi
+
# Dependency tarball directory
# ----------------------------
-if [ x"$software_dir" = x ]; then
- ddir=$optionaldir
-else
- ddir=$software_dir
+if [ x"$software_dir" = x ]; then ddir=$optionaldir
+else ddir=$software_dir
fi
-if [ $rewritepconfig = yes ] && [ x"$software_dir" = x ]; then
+if [ $rewritelconfig = yes ]; then
+
+ # Print information.
cat <<EOF
---------------------------------------
@@ -1091,14 +1290,32 @@ of a dependency, it is necessary to have an internet connection because the
project will download the tarballs it needs automatically.
EOF
- # Read the software directory if interactive mode is enabled.
- if read -p"(OPTIONAL) Directory of dependency tarballs ($ddir): " tmpddir; then
- just_a_place_holder_to_avoid_not_equal_test=1;
+
+ # Ask the user for the software directory if it is not given as an
+ # option.
+ if [ x"$software_dir" = x ]; then
+ if read -p"(OPTIONAL) Directory of dependency tarballs ($ddir): " \
+ tmpddir; then
+ just_a_place_holder_to_avoid_not_equal_test=1;
+ else
+ cat <<EOF
+______________________________________________________
+!!!!!!!!!!!!!!! Warning !!!!!!!!!!!!!!!!
+
+WARNING: interactive-mode seems to be disabled! If you have a local copy of
+the software source tarballs, use '--software-dir'. Otherwise, all the
+necessary tarballs will be downloaded.
+
+The configuration will continue in $pausesec seconds. To avoid the
+pause on such messages, use the '--no-pause' option.
+
+!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+
+EOF
+ sleep $pausesec
+ fi
else
- echo "WARNING: interactive-mode seems to be disabled!"
- echo "If you have a local copy of the software source, use '--software-dir'."
- echo "... project configuration will continue in $noninteractive_sleep sec ..."
- sleep $noninteractive_sleep
+ tmpddir="$software_dir"
fi
# If given, write the software directory.
@@ -1106,105 +1323,165 @@ EOF
ddir="$(absolute_dir "$tmpddir")"
echo " -- Using '$ddir'"
fi
-fi
-
+# The directory should be extracted from the existing LOCAL.conf, not from
+# the command-line or in interactive mode; similar to 'bdir' above.
+else
+    ddir=$(awk '$1=="DEPENDENCIES-DIR" {print $3}' $lconf)
+fi
+elapsed_time_from_prev_step software-dir
-# Write the parameters into the local configuration file.
-if [ $rewritepconfig = yes ]; then
- # Add commented notice.
- create_file_with_notice $pconf
- # Write the values.
- sed -e's|@bdir[@]|'"$bdir"'|' \
- -e's|@indir[@]|'"$indir"'|' \
- -e's|@ddir[@]|'"$ddir"'|' \
- -e's|@sys_cpath[@]|'"$sys_cpath"'|' \
- -e's|@downloader[@]|'"$downloader"'|' \
- -e's|@groupname[@]|'"$maneage_group_name"'|' \
- $pconf.in >> $pconf
-else
- # Read the values from existing configuration file. Note that the build
- # directory may have space characters. Even though we currently check
- # against it, we hope to be able to remove this condition in the
- # future.
- inbdir=$(awk '$1=="BDIR" { for(i=3; i<NF; i++) \
- printf "%s ", $i; \
- printf "%s", $NF }' $pconf)
-
- # Read the software directory (same as 'inbdir' above about space).
- ddir=$(awk '$1=="DEPENDENCIES-DIR" { for(i=3; i<NF; i++) \
- printf "%s ", $i; \
- printf "%s", $NF}' $pconf)
-
- # The downloader command may contain multiple elements, so we'll just
- # change the (in memory) first and second tokens to empty space and
- # write the full line (the original file is unchanged).
- downloader=$(awk '$1=="DOWNLOADER" {$1=""; $2=""; print $0}' $pconf)
-
- # Make sure all necessary variables have a value
- err=0
- verr=0
- novalue=""
- if [ x"$inbdir" = x ]; then novalue="BDIR, "; fi
- if [ x"$downloader" = x ]; then novalue="$novalue"DOWNLOADER; fi
- if [ x"$novalue" != x ]; then verr=1; err=1; fi
-
- # Make sure 'bdir' is an absolute path and it exists.
- berr=0
- ierr=0
- bdir="$(absolute_dir "$inbdir")"
-
- if ! [ -d "$bdir" ]; then if ! mkdir "$bdir"; then berr=1; err=1; fi; fi
- if [ $err = 1 ]; then
- cat <<EOF
+# Downloader
+# ----------
+#
+# After this script finishes, we will have both Wget and cURL for
+# downloading any necessary dataset during the processing. However, to
+# complete the configuration, we may also need to download the source code
+# of some necessary software packages (including the downloaders). So we
+# need to check the host's available tool for downloading at this step.
+if [ $rewritelconfig = yes ]; then
+ if type wget > /dev/null 2>/dev/null; then
-#################################################################
-######## ERORR reading existing configuration file ############
-#################################################################
-EOF
- if [ $verr = 1 ]; then
- cat <<EOF
+    # 'which' is not a POSIX utility, so we use 'command -v' instead.
+ name=$(command -v wget)
-These variables have no value: $novalue.
-EOF
+ # See if the host wget has the '--no-use-server-timestamps' option
+ # (for example wget 1.12 doesn't have it). If not, we'll have to
+        # remove it. This won't affect the analysis of Maneage in any way;
+        # it is just to avoid re-downloading if the server timestamps are
+        # bad. In the worst case, it will just cause a re-download of an
+ # input software source code (for data inputs, we will use our own
+ # wget that has this option).
+ tsname="no-use-server-timestamps"
+ tscheck=$(wget --help | grep $tsname || true)
+ if [ x"$tscheck" = x ]; then wgetts=""
+ else wgetts="--$tsname";
fi
- if [ $berr = 1 ]; then
- cat <<EOF
-Couldn't create the build directory '$bdir' (value to 'BDIR') in
-'$pconf'.
-EOF
- fi
+ # By default Wget keeps the remote file's timestamp, so we'll have
+ # to disable it manually.
+ downloader="$name $wgetts -O";
+ elif type curl > /dev/null 2>/dev/null; then
+ name=$(command -v curl)
+ # - cURL doesn't keep the remote file's timestamp by default.
+ # - With the '-L' option, we tell cURL to follow redirects.
+ downloader="$name -L -o"
+ else
cat <<EOF
-Please run the configure script again (accepting to re-write existing
-configuration file) so all the values can be filled and checked.
-#################################################################
+!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+!!!!!!!!!!!!!!!!!!!!!! Warning !!!!!!!!!!!!!!!!!!!!!!
+!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+
+Couldn't find GNU Wget or cURL on this system. These programs are used for
+downloading necessary programs and data if they aren't already present (in
+directories that you can specify with this configure script). Therefore if
+the necessary files are not present, the project will crash.
+
+!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+
EOF
+ downloader="no-downloader-found"
+ fi;
+
+# The downloader should be extracted from the existing LOCAL.conf.
+else
+ # The value will be a command (including white spaces), so we will read
+ # all the "fields" from the third to the end.
+ downloader=$(awk '$1=="DOWNLOADER" { for(i=3; i<NF; i++) \
+ printf "%s ", $i; \
+ printf "%s", $NF }' $lconf)
+
+ if [ x"$downloader" = x ]; then
+ printf "$scriptname: no value to 'DOWNLOADER' of '$lconf'. "
+ printf "Please run the project configuration again, but "
+        printf "without '--existing-conf' (or '-e').\n"
+ exit 1
fi
fi
+elapsed_time_from_prev_step downloader
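The detection logic above (prefer Wget, fall back to cURL, record a sentinel otherwise) can be exercised in isolation. A condensed sketch, assuming only the POSIX `command -v` utility; the `_demo` variable name is ours, not the script's:

```shell
# Pick a downloader command prefix: prefer wget ('-O' names the output
# file), fall back to curl ('-L' follows redirects, '-o' names the
# output file), otherwise record a sentinel value.
if command -v wget > /dev/null 2>&1; then
    downloader_demo="$(command -v wget) -O"
elif command -v curl > /dev/null 2>&1; then
    downloader_demo="$(command -v curl) -L -o"
else
    downloader_demo="no-downloader-found"
fi
echo "$downloader_demo"
```

The sentinel (rather than an immediate exit) matches the script's behavior: configuration can still succeed offline if all tarballs are already present locally.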
-# Delete final configuration target
-# ---------------------------------
+# Libraries necessary for the system's shell
+# ------------------------------------------
#
-# We only want to start running the project later if this script has
-# completed successfully. To make sure it hasn't crashed in the middle
-# (without the user noticing), in the end of this script we make a file and
-# we'll delete it here (at the start). Therefore if the script crashed in
-# the middle that file won't exist.
-sdir="$bdir"/software
-finaltarget="$sdir"/configuration-done.txt
-if ! [ -d "$sdir" ]; then mkdir "$sdir"; fi
-rm -f "$finaltarget"
+# In some cases (mostly the programs that Maneage doesn't yet build by
+# itself), the programs may call the system's shell, not Maneage's
+# shell. After we close-off the system environment from Maneage, this will
+# cause a crash! To avoid such cases, we need to find the locations of the
+# libraries that the shell needs and temporarily add them to the library
+# search path.
+#
+# About the 'grep -v "(0x[^)]*)"' term (from bug 66847, see [1]): On some
+# systems [2], the output of 'ldd /bin/sh' includes a line for the vDSO [3]
+# that is different from the formats that are assumed, prior to this commit,
+# by the algorithm in 'configure.sh' when evaluating the variable
+# 'sys_library_sh_path'. This leads to a fatal syntax error in (at least)
+# 'ncurses', because the option using 'sys_library_sh_path' contains an
+# unquoted RAM address in parentheses. Even if the address were quoted, it
+# would still be incorrect. This 'grep command excludes candidate host path
+# strings that look like RAM addresses to address the problem.
+#
+# [1] https://savannah.nongnu.org/bugs/index.php?66847
+# [2] https://stackoverflow.com/questions/34428037/how-to-interpret-the-output-of-the-ldd-program
+# [3] man vdso
+if [ $built_container = 0 ]; then
+ if [ x"$on_mac_os" = xyes ]; then
+ sys_library_sh_path=$(otool -L /bin/sh \
+ | awk '/\/lib/{print $1}' \
+ | sed 's#/[^/]*$##' \
+ | sort \
+ | uniq \
+ | awk '{if (NR==1) printf "%s", $1; \
+ else printf ":%s", $1}')
+ else
+ sys_library_sh_path=$(ldd /bin/sh \
+ | awk '{if($3!="") print $3}' \
+ | sed 's#/[^/]*$##' \
+ | grep -v "(0x[^)]*)" \
+ | sort \
+ | uniq \
+ | awk '{if (NR==1) printf "%s", $1; \
+ else printf ":%s", $1}')
+ fi
+ elapsed_time_from_prev_step sys-library-sh-path
+fi
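The GNU/Linux branch above reduces `ldd` output to a colon-separated list of unique library directories, dropping the vDSO line whose only token is a RAM address. The same pipeline applied to a canned `ldd /bin/sh`-style sample (the paths and addresses below are fabricated for illustration):

```shell
# Fabricated 'ldd /bin/sh'-like output: the vDSO and dynamic-loader
# lines have no third field and are skipped by the first awk; the grep
# additionally guards against address-only tokens (bug 66847).
sys_library_sh_path_demo=$(printf '%s\n' \
    '    linux-vdso.so.1 (0x00007ffd2a5f2000)' \
    '    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f0000000000)' \
    '    libdl.so.2 => /usr/lib/demo/libdl.so.2 (0x00007f0000100000)' \
    '    /lib64/ld-linux-x86-64.so.2 (0x00007f0000200000)' \
  | awk '{if($3!="") print $3}' \
  | sed 's#/[^/]*$##' \
  | grep -v "(0x[^)]*)" \
  | sort \
  | uniq \
  | awk '{if (NR==1) printf "%s", $1; \
          else printf ":%s", $1}')
echo "$sys_library_sh_path_demo"
```

The final awk joins the sorted unique directories with colons, producing a value ready for a library search path variable.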
+
+
+
+
+
+# When no local configuration existed, write the parameters into the local
+# configuration file.
+sdir=$bdir/software
+sconfdir=$sdir/config
+if ! [ -d "$sdir" ]; then mkdir "$sdir"; fi
+if ! [ -d "$sconfdir" ]; then mkdir "$sconfdir"; fi
+if [ $rewritelconfig = yes ]; then
+
+ # Put the basic comments at the top of the file.
+ create_file_with_notice $lconf
+
+ # Write the values.
+ lconfin=$cdir/LOCAL.conf.in
+ sed -e's|@bdir[@]|'"$bdir"'|' \
+ -e's|@indir[@]|'"$indir"'|' \
+ -e's|@ddir[@]|'"$ddir"'|' \
+ -e's|@sys_cpath[@]|'"$sys_cpath"'|' \
+ -e's|@downloader[@]|'"$downloader"'|' \
+ -e's|@groupname[@]|'"$maneage_group_name"'|' \
+ -e's|@sys_library_sh_path[@]|'"$sys_library_sh_path"'|' \
+ $lconfin >> $lconf
+fi
+elapsed_time_from_prev_step LOCAL-write
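In the sed expressions above, `@bdir[@]` matches the literal placeholder `@bdir@` in the template; the `[@]` bracket expression keeps the sed script from itself containing the placeholder (so a template processed twice is not corrupted). A reduced sketch with a hypothetical two-placeholder template:

```shell
# Hypothetical LOCAL.conf.in fragment with '@...@' placeholders, filled
# in by sed as done above for the real template.
tmplin=$(mktemp); tmplout=$(mktemp)
cat > "$tmplin" <<'EOF'
BDIR = @bdir@
INDIR = @indir@
EOF

bdir_demo=/path/to/build
indir_demo=/path/to/inputs

# '|' is used as the s-command delimiter so the replacement may contain
# slashes (directory paths) without escaping.
sed -e's|@bdir[@]|'"$bdir_demo"'|' \
    -e's|@indir[@]|'"$indir_demo"'|' \
    "$tmplin" > "$tmplout"
result_demo=$(cat "$tmplout")
echo "$result_demo"
rm -f "$tmplin" "$tmplout"
```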
@@ -1217,99 +1494,58 @@ rm -f "$finaltarget"
# avoid too many directory dependencies throughout the software and
# analysis Makefiles (thus making them hard to read), we are just building
# them here
-# Software tarballs
tardir="$sdir"/tarballs
-if ! [ -d "$tardir" ]; then mkdir "$tardir"; fi
-
-# Installed software
instdir="$sdir"/installed
-if ! [ -d "$instdir" ]; then mkdir "$instdir"; fi
+tmpblddir="$sdir"/build-tmp
-# To record software versions and citation.
+# Second-level directories.
+instlibdir="$instdir"/lib
+instbindir="$instdir"/bin
verdir="$instdir"/version-info
-if ! [ -d "$verdir" ]; then mkdir "$verdir"; fi
-# Program and library versions and citation.
-ibidir="$verdir"/proglib
-if ! [ -d "$ibidir" ]; then mkdir "$ibidir"; fi
-
-# Python module versions and citation.
+# Sub-directories of version-info
+itidir="$verdir"/tex
+ictdir="$verdir"/cite
ipydir="$verdir"/python
-if ! [ -d "$ipydir" ]; then mkdir "$ipydir"; fi
-
-# R module versions and citation.
+ibidir="$verdir"/proglib
ircrandir="$verdir"/r-cran
-if ! [ -d "$ircrandir" ]; then mkdir "$ircrandir"; fi
-
-# Used software BibTeX entries.
-ictdir="$verdir"/cite
-if ! [ -d "$ictdir" ]; then mkdir "$ictdir"; fi
-
-# TeXLive versions.
-itidir="$verdir"/tex
-if ! [ -d "$itidir" ]; then mkdir "$itidir"; fi
-
-# Some software install their libraries in '$(idir)/lib64'. But all other
-# libraries are in '$(idir)/lib'. Since Maneage's build is only for a
-# single architecture, we can set the '$(idir)/lib64' as a symbolic link to
-# '$(idir)/lib' so all the libraries are always available in the same
-# place.
-instlibdir="$instdir"/lib
-if ! [ -d "$instlibdir" ]; then mkdir "$instlibdir"; fi
-ln -fs "$instlibdir" "$instdir"/lib64
-
-# Wrapper over Make as a single command so it does not default to '/bin/sh'
-# during installation (needed by some programs like CMake).
-instbindir=$instdir/bin
-if ! [ -d $instbindir ]; then mkdir $instbindir; fi
-makewshell="$instbindir/make-with-shell"
-echo "$instbindir/make SHELL=$instbindir/bash \$@" > $makewshell
-chmod +x $makewshell
-
-
-
-
-
-# Project's top-level built analysis directories
-# ----------------------------------------------
+if [ $built_container = 0 ]; then
+
+ # Top-level directories.
+ if ! [ -d "$tardir" ]; then mkdir "$tardir"; fi
+ if ! [ -d "$instdir" ]; then mkdir "$instdir"; fi
+
+ # Second-level directories.
+ if ! [ -d "$verdir" ]; then mkdir "$verdir"; fi
+ if ! [ -d "$instbindir" ]; then mkdir "$instbindir"; fi
+
+ # Sub-directories of version-info
+ if ! [ -d "$itidir" ]; then mkdir "$itidir"; fi
+ if ! [ -d "$ictdir" ]; then mkdir "$ictdir"; fi
+ if ! [ -d "$ipydir" ]; then mkdir "$ipydir"; fi
+ if ! [ -d "$ibidir" ]; then mkdir "$ibidir"; fi
+ if ! [ -d "$ircrandir" ]; then mkdir "$ircrandir"; fi
+
+ # Some software install their libraries in '$(idir)/lib64'. But all
+ # other libraries are in '$(idir)/lib'. Since Maneage's build is only
+ # for a single architecture, we can set the '$(idir)/lib64' as a
+ # symbolic link to '$(idir)/lib' so all the libraries are always
+ # available in the same place.
+ if ! [ -d "$instlibdir" ]; then mkdir "$instlibdir"; fi
+ ln -fs "$instlibdir" "$instdir"/lib64
+
+ # Wrapper over Make as a single command so it does not default to
+ # '/bin/sh' during installation (needed by some programs like CMake).
+ makewshell="$instbindir/make-with-shell"
+ if ! [ -f "$makewshell" ]; then
+ echo "$instbindir/make SHELL=$instbindir/bash \$@" > $makewshell
+ chmod +x $makewshell
+ fi
-# Top-level LaTeX.
-texdir="$sdir"/tex
-if ! [ -d "$texdir" ]; then mkdir "$texdir"; fi
-
-# If 'tex/build' and 'tex/tikz' are symbolic links then 'rm -f' will delete
-# them and we can continue. However, when the project is being built from
-# the tarball, these two are not symbolic links but actual directories with
-# the necessary built-components to build the PDF in them. In this case,
-# because 'tex/build' is a directory, 'rm -f' will fail, so we'll just
-# rename the two directories (as backup) and let the project build the
-# proper symbolic links afterwards.
-if rm -f tex/build; then
- rm -f tex/tikz
-else
- mv tex/tikz tex/tikz-from-tarball
- mv tex/build tex/build-from-tarball
+ # Report the execution time of this step.
+ elapsed_time_from_prev_step subdirectories-of-build
fi
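The `make-with-shell` wrapper above is a one-line generated script that pins Make's `SHELL` to the Maneage-built Bash. A standalone sketch of the same generation step, using a temporary directory instead of the real `$instbindir`:

```shell
# Generate a wrapper script that forces a specific SHELL for make (as
# above, for programs like CMake that would otherwise get '/bin/sh').
# The directory here is temporary and hypothetical.
demo_bindir=$(mktemp -d)
makewshell_demo="$demo_bindir/make-with-shell"
echo "$demo_bindir/make SHELL=$demo_bindir/bash \$@" > "$makewshell_demo"
chmod +x "$makewshell_demo"

# Capture the result for inspection, then clean up.
wrapper_demo=$(cat "$makewshell_demo")
wrapper_exec_demo=no
[ -x "$makewshell_demo" ] && wrapper_exec_demo=yes
rm -rf "$demo_bindir"
echo "$wrapper_demo"
```

Note the escaped `\$@` in the `echo`: it must survive into the wrapper as a literal `$@` so the wrapper forwards its own arguments to `make` at run time.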
-# Set the symbolic links for easy access to the top project build
-# directories. Note that these are put in each user's source/cloned
-# directory, not in the build directory (which can be shared between many
-# users and thus may already exist).
-#
-# Note: if we don't delete them first, it can happen that an extra link
-# will be created in each directory that points to its parent. So to be
-# safe, we are deleting all the links on each re-configure of the
-# project. Note that at this stage, we are using the host's 'ln', not our
-# own, so its best not to assume anything (like 'ln -sf').
-rm -f .build .local
-
-ln -s "$bdir" .build
-ln -s "$instdir" .local
-
-# --------- Delete for no Gnuastro ---------
-rm -f .gnuastro
-# ------------------------------------------
-
@@ -1322,120 +1558,116 @@ rm -f .gnuastro
# HDDs/SSDs and improve speed, it is therefore better to build them in the
# RAM when possible. The RAM of most systems today (>8GB) is large enough
# for the parallel building of the software.
-
+#
# Set the top-level shared memory location. Currently there is only one
# standard location (for GNU/Linux OSs), so doing this check here and the
# main job below may seem redundant. However, it is written separately from
# the main code below because later, we expect to add more possible
# mounting locations (for other OSs).
-if [ -d /dev/shm ]; then shmdir=/dev/shm
-else shmdir=""
-fi
+if [ $built_container = 0 ]; then
+ if [ -d /dev/shm ]; then shmdir=/dev/shm
+ else shmdir=""
+ fi
-# If a shared memory mounted directory exists and has the necessary
-# conditions, set that directory to build software.
-if [ x"$shmdir" != x ]; then
-
- # Make sure it has enough space.
- needed_space=2000000
- available_space=$(df "$shmdir" | awk 'NR==2{print $4}')
- if [ $available_space -gt $needed_space ]; then
-
- # Set the Maneage-specific directory within the shared
- # memory. We'll use the names of the two parent directories to the
- # current/running directory, separated by a '-' instead of
- # '/'. We'll then appended that with the user's name (in case
- # multiple users may be working on similar project names).
- #
- # Maybe later, we can use something like 'mktemp' to add random
- # characters to this name and make it unique to every run (even for
- # a single user).
- dirname=$(pwd | sed -e's/\// /g' \
- | awk '{l=NF-1; printf("%s-%s", $l, $NF)}')
- tbshmdir="$shmdir"/"$dirname"-$(whoami)
-
- # Try to make the directory if it does not yet exist. A failed
- # directory creation will be tested for a few lines later, when
- # testing for the existence and executability of a test file.
- if ! [ -d "$tbshmdir" ]; then (mkdir "$tbshmdir" || true); fi
-
- # Some systems may protect '/dev/shm' against the right to execute
- # programs by ordinary users. We thus need to check that the device
- # allows execution within this directory by this user.
- shmexecfile="$tbshmdir"/shm-execution-check.sh
- rm -f $shmexecfile # We also don't want any existing flags.
-
- # Create the file to be executed, but do not fail fatally if it
- # cannot be created. We will check a few lines later if the file
- # really exists.
- (cat > "$shmexecfile" <<EOF || true)
+ # If a shared memory mounted directory exists and has the necessary
+ # conditions, set that directory to build software.
+ if [ x"$shmdir" != x ]; then
+
+ # Make sure it has enough space.
+ needed_space=2000000
+ available_space=$(df "$shmdir" | awk 'NR==2{print $4}')
+ if [ $available_space -gt $needed_space ]; then
+
+ # Set the Maneage-specific directory within the shared
+ # memory. We'll use the names of the two parent directories to
+ # the current/running directory, separated by a '-' instead of
+            # '/'. We'll then append the user's name to that (in case
+ # multiple users may be working on similar project names).
+ #
+ # Maybe later, we can use something like 'mktemp' to add random
+ # characters to this name and make it unique to every run (even
+ # for a single user).
+ dirname=$(pwd | sed -e's/\// /g' \
+ | awk '{l=NF-1; printf("%s-%s", $l, $NF)}')
+ tbshmdir="$shmdir"/"$dirname"-$(whoami)
+
+ # Try to make the directory if it does not yet exist. A failed
+ # directory creation will be tested for a few lines later, when
+ # testing for the existence and executability of a test file.
+ if ! [ -d "$tbshmdir" ]; then (mkdir "$tbshmdir" || true); fi
+
+ # Some systems may protect '/dev/shm' against the right to
+ # execute programs by ordinary users. We thus need to check
+ # that the device allows execution within this directory by
+ # this user.
+ shmexecfile="$tbshmdir"/shm-execution-check.sh
+ rm -f $shmexecfile # We also don't want any existing flags.
+
+ # Create the file to be executed, but do not fail fatally if it
+ # cannot be created. We will check a few lines later if the
+ # file really exists.
+ (cat > "$shmexecfile" <<EOF || true)
#!/bin/sh
-echo "This file successfully executed."
+a=b
EOF
- # If the file was successfully created, then make the file
- # executable and see if it runs. If not, set 'tbshmdir' to an empty
- # string so it is not used in later steps. In any case, delete the
- # temporary file afterwards.
- #
- # We aren't adding '&> /dev/null' after the execution command
- # because it can produce false failures randomly on some systems.
- if [ -e "$shmexecfile" ]; then
-
- # Add the executable flag.
- chmod +x "$shmexecfile"
-
- # The following line tries to execute the file.
- if "$shmexecfile"; then
- # Successful execution. The colon is a "no-op" (no
- # operation) shell command.
- :
+ # If the file was successfully created, then make the file
+ # executable and see if it runs. If not, set 'tbshmdir' to an
+ # empty string so it is not used in later steps. In any case,
+ # delete the temporary file afterwards.
+ #
+ # We aren't adding '&> /dev/null' after the execution command
+ # because it can produce false failures randomly on some
+ # systems.
+ if [ -e "$shmexecfile" ]; then
+
+ # Add the executable flag.
+ chmod +x "$shmexecfile"
+
+ # The following line tries to execute the file.
+ if "$shmexecfile"; then
+ # Successful execution. The colon is a "no-op" (no
+ # operation) shell command.
+ :
+ else
+ tbshmdir=""
+ fi
+ rm "$shmexecfile"
else
tbshmdir=""
fi
- rm "$shmexecfile"
- else
- tbshmdir=""
fi
+ else
+ tbshmdir=""
fi
-else
- tbshmdir=""
-fi
-
-
-
+ # If a shared memory directory was created, set the software building
+ # directory to be a symbolic link to it. Otherwise, just build the
+ # temporary build directory under the project's build directory.
+ #
+ # If it is a link, we need to empty its contents first, then itself.
+ if [ -d "$tmpblddir" ]; then empty_build_tmp; fi
+
+ # Now that we are sure it doesn't exist, we'll make it (either as a
+ # directory or as a symbolic link).
+ if [ x"$tbshmdir" = x ]; then mkdir "$tmpblddir";
+ else ln -s "$tbshmdir" "$tmpblddir";
+ fi
-# If a shared memory directory was created, set the software building
-# directory to be a symbolic link to it. Otherwise, just build the
-# temporary build directory under the project's build directory.
-tmpblddir="$sdir"/build-tmp
-rm -rf "$tmpblddir"/* "$tmpblddir" # If it is a link, we need to empty
- # its contents first, then itself.
-if [ x"$tbshmdir" = x ]; then mkdir "$tmpblddir";
-else ln -s "$tbshmdir" "$tmpblddir";
+ # Report the time this step took.
+ elapsed_time_from_prev_step temporary-software-building-dir
fi
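The free-space test above (`df` row 2, field 4, compared against a 1K-block threshold) can be tried on any mount point. A standalone sketch; it assumes `df`'s default single-line-per-filesystem output, and falls back to "no" if the field comes out empty:

```shell
# Check whether a directory's filesystem has at least ~2GB free, as done
# above for /dev/shm before building software in RAM. The directory and
# '_demo' names are illustrative.
checkdir=${TMPDIR:-/tmp}
needed_space_demo=2000000    # in 1K blocks (df's default unit)
available_space_demo=$(df "$checkdir" | awk 'NR==2{print $4}')
if [ "$available_space_demo" -gt "$needed_space_demo" ] 2>/dev/null; then
    enough_demo=yes
else
    enough_demo=no
fi
echo "$enough_demo"
```

On some `df` implementations a long device name wraps the first data row onto two lines, which would shift the field; the real code tolerates this because a failed comparison simply falls through to the non-RAM build path.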
-# Make sure the temporary build directory is empty (un-finished
-# source/build files from previous builds can remain there during debugging
-# or software updates).
-rm -rf $tmpblddir/*
-
-
-
-
-
# Inform the user that the build process is starting
# -------------------------------------------------
#
# Everything is ready, let the user know that the building is going to
# start.
-if [ $printnotice = yes ]; then
- tsec=10
+if [ $quiet = 0 ]; then
cat <<EOF
-------------------------
@@ -1450,20 +1682,20 @@ NOTE: the built software will NOT BE INSTALLED in standard places of your
OS (so no root access is required). They are only for local usage by this
project.
-**TIP**: you can see which software are being installed at every moment
-with the following command. See "Inspecting status" section of
-'README-hacking.md' for more. In short, run it while the project is being
-configured (in another terminal, but in this same directory:
-'$currentdir'):
+TIP: you can see which software are being installed at every moment with
+the following command. See "Inspecting status" section of
+'README-hacking.md' for more. In short, run it in another terminal while
+the project is being configured.
$ ./project --check-config
-Project's configuration will continue in $tsec seconds.
+Project's configuration will continue in $pausesec seconds. To avoid the
+pause on such messages, use the '--no-pause' option.
-------------------------
EOF
- sleep $tsec
+ sleep $pausesec
fi
@@ -1479,123 +1711,20 @@ fi
# - On BSD-based systems (for example FreeBSD and macOS), we have a
# 'hw.ncpu' in the output of 'sysctl'.
# - When none of the above work, just set the number of threads to 1.
-if [ $jobs = 0 ]; then
- if type nproc > /dev/null 2> /dev/null; then
- numthreads=$(nproc --all);
- else
- numthreads=$(sysctl -a | awk '/^hw\.ncpu/{print $2}')
- if [ x"$numthreads" = x ]; then numthreads=1; fi
- fi
-else
- numthreads=$jobs
-fi
-
-
-
-
-
-# See if the linker accepts -Wl,-rpath-link
-# -----------------------------------------
-#
-# '-rpath-link' is used to write the information of the linked shared
-# library into the shared object (library or program). But some versions of
-# LLVM's linker don't accept it an can cause problems.
-#
-# IMPORTANT NOTE: This test has to be done **AFTER** the definition of
-# 'instdir', otherwise, it is going to be used as an empty string.
-cat > $testsource <<EOF
-#include <stdio.h>
-#include <stdlib.h>
-int main(void) {return EXIT_SUCCESS;}
-EOF
-if $CC $testsource -o$testprog -Wl,-rpath-link 2>/dev/null > /dev/null; then
- export rpath_command="-Wl,-rpath-link=$instdir/lib"
-else
- export rpath_command=""
-fi
-
-
-
-
-
-# Delete the compiler testing directory
-# -------------------------------------
-#
-# This directory was made above to make sure the necessary compilers can be
-# run.
-rm -f $testprog $testsource
-rm -rf $compilertestdir
-
-
-
-
-
-# Paths needed by the host compiler (only for 'basic.mk')
-# -------------------------------------------------------
#
-# At the end of the basic build, we need to build GCC. But GCC will build
-# in multiple phases, making its own simple compiler in order to build
-# itself completely. The intermediate/simple compiler doesn't recognize
-# some system specific locations like '/usr/lib/ARCHITECTURE' that some
-# operating systems use. We thus need to tell the intermediate compiler
-# where its necessary libraries and headers are.
-if [ x"$sys_library_path" != x ]; then
- if [ x"$LIBRARY_PATH" = x ]; then
- export LIBRARY_PATH="$sys_library_path"
- else
- export LIBRARY_PATH="$LIBRARY_PATH:$sys_library_path"
- fi
- if [ x"$CPATH" = x ]; then
- export CPATH="$sys_cpath"
+# This check is also used in 'reproduce/software/shell/docker.sh'.
+if [ $built_container = 0 ]; then
+ if [ $jobs = 0 ]; then
+ if type nproc > /dev/null 2> /dev/null; then
+ numthreads=$(nproc --all);
+ else
+ numthreads=$(sysctl -a | awk '/^hw\.ncpu/{print $2}')
+ if [ x"$numthreads" = x ]; then numthreads=1; fi
+ fi
else
- export CPATH="$CPATH:$sys_cpath"
+ numthreads=$jobs
fi
-fi
-
-
-
-
-
-# Libraries necessary for the system's shell
-# ------------------------------------------
-#
-# In some cases (mostly the programs that Maneage doesn't yet build by
-# itself), the programs may call the system's shell, not Maneage's
-# shell. After we close-off the system environment from Maneage, this will
-# cause a crash! To avoid such cases, we need to find the locations of the
-# libraries that the shell needs and temporarily add them to the library
-# search path.
-#
-# About the 'grep -v "(0x[^)]*)"' term (from bug 66847, see [1]): On some
-# systems [2], the output of 'ldd /bin/sh' includes a line for the vDSO [3]
-# that is different to the formats that are assumed, prior to this commit,
-# by the algorithm in 'configure.sh' when evaluating the variable
-# 'sys_library_sh_path'. This leads to a fatal syntax error in (at least)
-# 'ncurses', because the option using 'sys_library_sh_path' contains an
-# unquoted RAM address in parentheses. Even if the address were quoted, it
-# would still be incorrect. This 'grep command excludes candidate host path
-# strings that look like RAM addresses to address the problem.
-#
-# [1] https://savannah.nongnu.org/bugs/index.php?66847
-# [2] https://stackoverflow.com/questions/34428037/how-to-interpret-the-output-of-the-ldd-program
-# [3] man vdso
-if [ x"$on_mac_os" = xyes ]; then
- sys_library_sh_path=$(otool -L /bin/sh \
- | awk '/\/lib/{print $1}' \
- | sed 's#/[^/]*$##' \
- | sort \
- | uniq \
- | awk '{if (NR==1) printf "%s", $1; \
- else printf ":%s", $1}')
-else
- sys_library_sh_path=$(ldd /bin/sh \
- | awk '{if($3!="") print $3}' \
- | sed 's#/[^/]*$##' \
- | grep -v "(0x[^)]*)" \
- | sort \
- | uniq \
- | awk '{if (NR==1) printf "%s", $1; \
- else printf ":%s", $1}')
+ elapsed_time_from_prev_step num-threads
fi
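The thread-count logic above, condensed into a standalone sketch: GNU `nproc` where available, BSD/macOS `sysctl` otherwise, with a final fallback to 1 (the `_demo` name is ours):

```shell
# Detect the number of available threads, mirroring the fallback chain
# above: GNU coreutils 'nproc', then 'sysctl' (BSD/macOS 'hw.ncpu'),
# then a safe default of 1.
if command -v nproc > /dev/null 2>&1; then
    numthreads_demo=$(nproc --all)
else
    numthreads_demo=$(sysctl -a 2>/dev/null | awk '/^hw\.ncpu/{print $2}')
    [ x"$numthreads_demo" = x ] && numthreads_demo=1
fi
echo "$numthreads_demo"
```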
@@ -1619,42 +1748,32 @@ fi
# which will download the DOI-resolved webpage, and extract the Zenodo-URL
# of the most recent version from there (using the 'coreutils' tarball as
# an example, the directory part of the URL for all the other software are
-# the same). This is not done if the options '--debug' or `--offline` are used.
+# the same). This is not done if the options '--debug' or `--offline` are
+# used.
zenodourl=""
user_backup_urls=""
-zenodocheck=.build/software/zenodo-check.html
-if [ x$debug = x ] && [ x$offline = x ]; then
- if $downloader $zenodocheck https://doi.org/10.5281/zenodo.3883409; then
- zenodourl=$(sed -n -e'/coreutils/p' $zenodocheck \
- | sed -n -e'/http/p' \
- | tr ' ' '\n' \
- | grep http \
- | sed -e 's/href="//' -e 's|/coreutils| |' \
- | awk 'NR==1{print $1}')
- fi
+zenodocheck="$bdir"/software/zenodo-check.html
+if [ $built_container = 0 ]; then
+ if [ x$debug = x ] && [ x$offline = x ]; then
+ if $downloader $zenodocheck \
+ https://doi.org/10.5281/zenodo.3883409; then
+ zenodourl=$(sed -n -e'/coreutils/p' $zenodocheck \
+ | sed -n -e'/http/p' \
+ | tr ' ' '\n' \
+ | grep http \
+ | sed -e 's/href="//' -e 's|/coreutils| |' \
+ | awk 'NR==1{print $1}')
+ fi
+ fi
+ rm -f $zenodocheck
+
+    # Add the Zenodo URL to the user's given backup software URLs. Since the
+ # user can specify 'user_backup_urls' (not yet implemented as an option
+ # in './project'), we'll give preference to their specified servers,
+ # then add the Zenodo URL afterwards.
+ user_backup_urls="$user_backup_urls $zenodourl"
+ elapsed_time_from_prev_step zenodo-url
fi
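The sed/tr/grep/awk chain above isolates the first `coreutils` link in the downloaded page and keeps only its directory part (the `s|/coreutils| |` substitution splits the URL at the package name, and the final awk keeps field 1). The same chain on a canned HTML line; the record number below is fabricated for illustration:

```shell
# A fabricated Zenodo-like HTML line standing in for zenodo-check.html.
zenodourl_demo=$(printf '%s\n' \
    '<a href="https://zenodo.org/record/0000000/files/coreutils-9.0.tar.lz">x</a>' \
  | sed -n -e'/coreutils/p' \
  | sed -n -e'/http/p' \
  | tr ' ' '\n' \
  | grep http \
  | sed -e 's/href="//' -e 's|/coreutils| |' \
  | awk 'NR==1{print $1}')
echo "$zenodourl_demo"
```

Splitting on `/coreutils` works because the directory part of the URL is the same for every software tarball in that Zenodo record, so the extracted prefix can be reused for all of them.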
-rm -f $zenodocheck
-
-# Add the Zenodo URL to the user's given back software URLs. Since the user
-# can specify 'user_backup_urls' (not yet implemented as an option in
-# './project'), we'll give preference to their specified servers, then add
-# the Zenodo URL afterwards.
-user_backup_urls="$user_backup_urls $zenodourl"
-
-
-
-
-
-# Build core tools for project
-# ----------------------------
-#
-# Here we build the core tools that 'basic.mk' depends on: Lzip
-# (compression program), GNU Make (that 'basic.mk' is written in), Dash
-# (minimal Bash-like shell) and Flock (to lock files and enable serial
-# download).
-export on_mac_os
-./reproduce/software/shell/pre-make-build.sh \
- "$bdir" "$ddir" "$downloader" "$user_backup_urls"
@@ -1682,13 +1801,29 @@ fi
-# Build other basic tools our own GNU Make
-# ----------------------------------------
+# Core software
+# -------------
#
-# When building these software we don't have our own un-packing software,
-# Bash, Make, or AWK. In this step, we'll install such low-level basic
-# tools, but we have to be very portable (and use minimal features in all).
-echo; echo "Building necessary software (if necessary)..."
+# Here we build the core tools that 'basic.mk' depends on: Lzip
+# (compression program), GNU Make (that 'basic.mk' is written in), Dash
+# (minimal Bash-like shell) and Flock (to lock files and enable serial
+# operations where necessary: mostly in download).
+export on_mac_os
+if [ $quiet = 0 ]; then echo "Building/validating software: pre-make"; fi
+./reproduce/software/shell/pre-make-build.sh \
+ "$bdir" "$ddir" "$downloader" "$user_backup_urls"
+elapsed_time_from_prev_step make-software-pre-make
+
+
+
+
+
+# Basic software
+# --------------
+#
+# Having built the core tools, we are now ready to build GCC and all its
+# dependencies (the "basic" software).
+if [ $quiet = 0 ]; then echo "Building/validating software: basic"; fi
.local/bin/make $keepgoing -f reproduce/software/make/basic.mk \
sys_library_sh_path=$sys_library_sh_path \
user_backup_urls="$user_backup_urls" \
@@ -1700,23 +1835,19 @@ echo; echo "Building necessary software (if necessary)..."
on_mac_os=$on_mac_os \
host_cc=$host_cc \
-j$numthreads
+elapsed_time_from_prev_step make-software-basic
-# All other software
-# ------------------
+# High-level software
+# -------------------
#
-# We will be making all the dependencies before running the top-level
-# Makefile. To make the job easier, we'll do it in a Makefile, not a
-# script. Bash and Make were the tools we need to run Makefiles, so we had
-# to build them in this script. But after this, we can rely on Makefiles.
-if [ $jobs = 0 ]; then
- numthreads=$(.local/bin/nproc --all)
-else
- numthreads=$jobs
-fi
+# Having our custom GCC in place, we can now build the high-level (science)
+# software: we are using our custom-built 'env' to ensure that nothing from
+# the host environment leaks into the high-level software environment.
+if [ $quiet = 0 ]; then echo "Building/validating software: high-level"; fi
.local/bin/env -i HOME=$bdir \
.local/bin/make $keepgoing \
-f reproduce/software/make/high-level.mk \
@@ -1732,16 +1863,7 @@ fi
host_cc=$host_cc \
offline=$offline \
-j$numthreads
-
-
-
-
-
-# Delete the temporary Make wrapper
-# ---------------------------------
-#
-# See above for its description.
-rm $makewshell
+elapsed_time_from_prev_step make-software-high-level
@@ -1756,17 +1878,17 @@ rm $makewshell
# will just stop at the stage when all the processing is complete and it is
# only necessary to build the PDF. So we don't want to stop the project's
# configuration and building if its not present.
-if [ -f $itidir/texlive-ready-tlmgr ]; then
- texlive_result=$(cat $itidir/texlive-ready-tlmgr)
-else
- texlive_result="NOT!"
-fi
-if [ x"$texlive_result" = x"NOT!" ]; then
- cat <<EOF
+if [ $built_container = 0 ]; then
+ if [ -f $itidir/texlive-ready-tlmgr ]; then
+ texlive_result=$(cat $itidir/texlive-ready-tlmgr)
+ else
+ texlive_result="NOT!"
+ fi
+ if [ x"$texlive_result" = x"NOT!" ]; then
+ cat <<EOF
-!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
-!!!!!!!!!!!!!!!!!!!!!! Warning !!!!!!!!!!!!!!!!!!!!!!
-!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+______________________________________________________
+!!!!!!!!!!!!!!! Warning !!!!!!!!!!!!!!!!
TeX Live couldn't be installed during the configuration (probably because
there were downloading problems). TeX Live is only necessary in making the
@@ -1786,18 +1908,23 @@ and re-run configure:
./project configure -e
-!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
-!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+The configuration will continue in $pausesec seconds. To avoid pausing on
+such messages, use the '--no-pause' option.
+
+!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
EOF
- sleep 10 # increase the chance that an interactive user reads this message
+ sleep $pausesec
+ fi
+ elapsed_time_from_prev_step check-tex-installation
fi
-# Citation of installed software
+# Software information for the paper
+# ----------------------------------
#
# After everything is installed, we'll put all the names and versions in a
# human-readable paragraph and also prepare the BibTeX citation for the
@@ -1839,101 +1966,101 @@ prepare_name_version ()
fi
}
-# Import the context/sentences for placing between the list of software
-# names during their acknowledgment.
-. $cdir/software_acknowledge_context.sh
-
-# Report the different software in separate contexts (separating Python and
-# TeX packages from the C/C++ programs and libraries).
-proglibs=$(prepare_name_version $verdir/proglib/*)
-pymodules=$(prepare_name_version $verdir/python/*)
-texpkg=$(prepare_name_version $verdir/tex/texlive)
-
-# Acknowledge these software packages in a LaTeX paragraph.
-pkgver=$texdir/dependencies.tex
-
-# Add the text to the ${pkgver} file.
-.local/bin/echo "$thank_software_introduce " > $pkgver
-.local/bin/echo "$thank_progs_libs $proglibs. " >> $pkgver
-if [ x"$pymodules" != x ]; then
- .local/bin/echo "$thank_python $pymodules. " >> $pkgver
-fi
-.local/bin/echo "$thank_latex $texpkg. " >> $pkgver
-.local/bin/echo "$thank_software_conclude" >> $pkgver
-
-# Prepare the BibTeX entries for the used software (if there are any).
-hasentry=0
-bibfiles="$ictdir/*"
-for f in $bibfiles; do if [ -f $f ]; then hasentry=1; break; fi; done;
-
-# Make sure we start with an empty output file.
-pkgbib=$texdir/dependencies-bib.tex
-echo "" > $pkgbib
-
-# Fill it in with all the BibTeX entries in this directory. We'll just
-# avoid writing any comments (usually copyright notices) and also put an
-# empty line after each file's contents to make the output more readable.
-if [ $hasentry = 1 ]; then
- for f in $bibfiles; do
- awk '!/^%/{print} END{print ""}' $f >> $pkgbib
- done
-fi
+# Relevant files
+pkgver=$sconfdir/dependencies.tex
+pkgbib=$sconfdir/dependencies-bib.tex
+# Build the software LaTeX source but only when not in a container.
+if [ $built_container = 0 ]; then
+ # Import the context/sentences for placing between the list of software
+ # names during their acknowledgment.
+ . $cdir/software_acknowledge_context.sh
+ # Report the different software in separate contexts (separating Python
+ # and TeX packages from the C/C++ programs and libraries).
+ proglibs=$(prepare_name_version $verdir/proglib/*)
+ pymodules=$(prepare_name_version $verdir/python/*)
+ texpkg=$(prepare_name_version $verdir/tex/texlive)
+ # Acknowledge these software packages in a LaTeX paragraph.
+ .local/bin/echo "$thank_software_introduce " > $pkgver
+ .local/bin/echo "$thank_progs_libs $proglibs. " >> $pkgver
+ if [ x"$pymodules" != x ]; then
+ .local/bin/echo "$thank_python $pymodules. " >> $pkgver
+ fi
+ .local/bin/echo "$thank_latex $texpkg. " >> $pkgver
+ .local/bin/echo "$thank_software_conclude" >> $pkgver
+
+ # Prepare the BibTeX entries for the used software (if there are any).
+ hasentry=0
+ bibfiles="$ictdir/*"
+ for f in $bibfiles; do if [ -f $f ]; then hasentry=1; break; fi; done;
+
+ # Fill it in with all the BibTeX entries in this directory. We'll just
+ # avoid writing any comments (usually copyright notices) and also put an
+ # empty line after each file's contents to make the output more readable.
+ echo "" > $pkgbib # We don't want to inherit any pre-existing content.
+ if [ $hasentry = 1 ]; then
+ for f in $bibfiles; do
+ awk '!/^%/{print} END{print ""}' $f >> $pkgbib
+ done
+ fi
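The awk filter above can be checked in isolation: lines beginning with '%' (usually copyright notices) are dropped, and an empty line is appended after each file's contents. The BibTeX entry below is a hypothetical sample, not one of the project's real dependency files.

```shell
# Drop BibTeX comment lines ('%...') and append a trailing empty line.
printf '%% copyright notice\n@article{demo, year={2025}}\n' \
    | awk '!/^%/{print} END{print ""}'
# → @article{demo, year={2025}}
# → (followed by one empty line)
```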
-# Report machine architecture
-# ---------------------------
-#
-# Report hardware
-hwparam="$texdir/hardware-parameters.tex"
-
-# Add the text to the ${hwparam} file. Since harware class might include
-# underscore, it must be replaced with '\_', otherwise pdftex would
-# complain and break the build process when doing ./project make.
-hw_class_fixed="$(echo $hw_class | sed -e 's/_/\\_/')"
-.local/bin/echo "\\newcommand{\\machinearchitecture}{$hw_class_fixed}" > $hwparam
-.local/bin/echo "\\newcommand{\\machinebyteorder}{$byte_order}" >> $hwparam
-.local/bin/echo "\\newcommand{\\machineaddresssizes}{$address_sizes}" >> $hwparam
+ # Report the time that this operation took.
+ elapsed_time_from_prev_step tex-macros
+fi
-# Clean the temporary build directory
-# ---------------------------------
+# Report machine architecture (has to be final created file)
+# ----------------------------------------------------------
#
-# By the time the script reaches here the temporary software build
-# directory should be empty, so just delete it. Note 'tmpblddir' may be a
-# symbolic link to shared memory. So, to work in any scenario, first delete
-# the contents of the directory (if it has any), then delete 'tmpblddir'.
-.local/bin/rm -rf $tmpblddir/* $tmpblddir
-
-
-
-
-
-# Register successful completion
-# ------------------------------
-echo `.local/bin/date` > $finaltarget
-
+# This is the final file that is created in the configuration phase: it is
+# used by the high-level project script to verify that configuration has
+# been completed. If any other files should be created in the final stages
+# of configuration, be sure to add them before this.
+#
+# Since hardware class might include an underscore, it must be replaced with
+# '\_', otherwise pdftex would complain and break the build process when
+# doing ./project make.
+if [ $built_container = 0 ]; then
+ hw_class=$(uname -m)
+ hwparam="$sconfdir/hardware-parameters.tex"
+ hw_class_fixed="$(echo $hw_class | sed -e 's/_/\\_/')"
+ .local/bin/echo "\\newcommand{\\machinearchitecture}{$hw_class_fixed}" \
+ > $hwparam
+ .local/bin/echo "\\newcommand{\\machinebyteorder}{$byte_order}" \
+ >> $hwparam
+ .local/bin/echo "\\newcommand{\\machineaddresssizes}{$address_sizes}" \
+ >> $hwparam
+ elapsed_time_from_prev_step hardware-params
+fi
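The underscore-escaping step above can be verified on its own: pdfTeX treats a bare '_' as a math-mode subscript, so a hardware class such as 'x86_64' must become 'x86\_64' before being written into a LaTeX macro. This is a minimal, self-contained sketch of that substitution.

```shell
# Escape '_' for pdfTeX before writing the LaTeX macro.
hw_class=x86_64
hw_class_fixed="$(printf '%s' "$hw_class" | sed -e 's/_/\\_/')"
printf '%s\n' "\\newcommand{\\machinearchitecture}{$hw_class_fixed}"
# → \newcommand{\machinearchitecture}{x86\_64}
```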
-# Final notice
-# ------------
+# Clean up and final notice
+# -------------------------
#
-# The configuration is now complete, we can inform the user on the next
-# step(s) to take.
-if [ x$maneage_group_name = x ]; then
- buildcommand="./project make -j8"
-else
- buildcommand="./project make --group=$maneage_group_name -j8"
-fi
-cat <<EOF
+# The configuration is now complete. We just need to delete the temporary
+# build directory and inform the user (if '--quiet' wasn't called) on the
+# next step(s).
+if [ -d $tmpblddir ]; then empty_build_tmp; fi
+if [ $quiet = 0 ]; then
+
+ # Suggest the command to use.
+ if [ x$maneage_group_name = x ]; then
+ buildcommand="./project make -j8"
+ else
+ buildcommand="./project make --group=$maneage_group_name -j8"
+ fi
+
+ # Print the message.
+ cat <<EOF
----------------
The project and its environment are configured with no errors.
@@ -1951,3 +2078,10 @@ Please run the following command to start the project.
$buildcommand
EOF
+fi
+
+
+# Total time
+if [ $check_elapsed = 1 ]; then
+ echo $chel_dsum | awk '{printf "Total: %-6.2f [millisec]\n", $1}'
+fi
diff --git a/reproduce/software/containers/README-docker.md b/reproduce/software/shell/docker-README.md
index f86dceb..d651e22 100644
--- a/reproduce/software/containers/README-docker.md
+++ b/reproduce/software/shell/docker-README.md
@@ -35,17 +35,20 @@ software environment) to easily move it from one computer to another.
systemctl start docker
```
- 2. Using your favorite text editor, create a `docker-local.sh` in your top
- Maneage directory (as described in the comments at the start of the
- `docker.sh` script in this directory). Just activate `--build-only` on
- the first run so it doesn't go onto doing the analysis and just sets up
- the software environment.
-
- 3. After the setup is complete, run the following command to confirm that
- the `maneage-base` (the OS of the container) and `maneaged` (your
- project's full Maneage'd environment) images are available. If you want
- different names for these images, add the `--project-name` and
- `--base-name` options to the `docker.sh` call.
+ 2. Using your favorite text editor, create a `run.sh` in your top Maneage
+    directory (as described in the comments at the start of the `docker.sh`
+    script in this directory). Just activate `--build-only` on the first
+    run so that it only sets up the software environment without going on
+    to the analysis. Set the respective directory(ies) based on your
+    filesystem (the software directory is optional). The `run.sh` file name
+    is already in `.gitignore` (because it contains local directories), so
+    Git will ignore it and it won't be committed by mistake.
+
+ 3. After the setup is complete, remove the `--build-only` and run the
+ command below to confirm that `maneage-base` (the OS of the container)
+ and `maneaged` (your project's full Maneage'd environment) images are
+ available. If you want different names for these images, add the
+ `--project-name` and `--base-name` options to the `docker.sh` call.
```shell
docker image list
@@ -85,6 +88,24 @@ image into it.
Below are some useful Docker usage scenarios that have proved to be
relevant for us in Maneage'd projects.
+### Saving and loading an image as a file
+
+Docker keeps its images in a location on the operating system that is hard
+for humans to access. Very much like Git, but with much less elegance: the
+location is shared by all users and projects of the system, so the images
+are not easy to archive at a low level for use on another system. However,
+Docker does have an interface (`docker save`) to copy all the relevant
+files within an image into a tarball that you can archive externally.
+There is also a separate interface (`docker load`) to load such a tarball
+back into Docker.
+
+Both of these have been implemented as the `--image-file` option of the
+`docker.sh` script. If you want to save your Maneage'd image into a
+tarball, simply give the tarball name to this option. Alternatively, if
+you already have a tarball and want to load it into Docker, give it to
+this option once (until you "clean up", as explained below). In fact,
+Docker images take a lot of space, so it is better to "clean up"
+regularly; and the only way to clean up safely is to first save your
+needed images as files.
+
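A minimal usage sketch of this option (the tarball name `maneaged.tar` and the build directory are hypothetical placeholders; as described above, the same `--image-file` option covers both saving and loading, presumably depending on whether the tarball already exists):

```shell
# First call: the image exists in Docker but the tarball does not, so
# the image is saved into 'maneaged.tar' for external archival.
./reproduce/software/shell/docker.sh --build-dir=/PATH/TO/BUILD/DIRECTORY \
    --image-file=maneaged.tar

# Later (for example after cleaning up, or on another computer), the
# tarball exists, so the same call loads it back into Docker.
./reproduce/software/shell/docker.sh --build-dir=/PATH/TO/BUILD/DIRECTORY \
    --image-file=maneaged.tar
```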
### Cleaning up
Docker has stored many large files in your operating system that can drain
diff --git a/reproduce/software/containers/docker.sh b/reproduce/software/shell/docker.sh
index d5b5682..714c75f 100755
--- a/reproduce/software/containers/docker.sh
+++ b/reproduce/software/shell/docker.sh
@@ -9,35 +9,35 @@
#
# Usage:
#
-# - When you are at the top Maneage'd project directory, you can run this
-# script like the example below. Just set all the '/PATH/TO/...'
-# directories (see below for '--tmp-dir'). See the items below for
-# optional values.
+# - When you are at the top Maneage'd project directory, run this script
+#   like the example below. Just set the build directory location on your
+#   system. See the items below for optional values that optimize the
+#   process (avoiding downloads, for example).
#
-# ./reproduce/software/containers/docker.sh --shm-size=15gb \
-# --software-dir=/PATH/TO/SOFTWARE/TARBALLS \
-# --build-dir=/PATH/TO/BUILD/DIRECTORY
+# ./reproduce/software/shell/docker.sh --shm-size=20gb \
+# --build-dir=/PATH/TO/BUILD/DIRECTORY
#
-# - Non-mandatory options:
+# - Non-mandatory options:
#
-# - If you already have the input data that is necessary for your
-# project's, use the '--input-dir' option to specify its location
-# on your host file system. Otherwise the necessary analysis
-# files will be downloaded directly into the build
-# directory. Note that this is only necessary when '--build-only'
-# is not given.
+# - If you already have the input data that is necessary for your
+# project, use the '--input-dir' option to specify its location
+# on your host file system. Otherwise the necessary analysis
+# files will be downloaded directly into the build
+# directory. Note that this is only necessary when '--build-only'
+# is not given.
#
-# - The '--software-dir' is only useful if you want to build a
-# container. Even in that case, it is not mandatory: if not
-# given, the software tarballs will be downloaded (thus requiring
-# internet).
+#        - If you already have the software tarballs that are necessary
+#          for your project, use the '--software-dir' option to specify
+#          their location on your host file system when building the
+#          container. No problem if you don't have them; they will be
+#          downloaded during the configuration phase.
#
-# - To avoid having to set the directory(s) every time you want to
-# start the docker environment, you can put this command (with the
-# proper directories) into a 'run.sh' script in the top Maneage'd
-# project source directory and simply execute that. The special name
-# 'run.sh' is in Maneage's '.gitignore', so it will not be included
-# in your git history by mistake.
+# - To avoid having to set them every time you want to start the
+#     docker environment, you can put this command (with the proper
+# directories) into a 'run.sh' script in the top Maneage'd project
+# source directory and simply execute that. The special name 'run.sh'
+# is in Maneage's '.gitignore', so it will not be included in your
+# git history by mistake.
#
# Known problems:
#
@@ -76,7 +76,7 @@ set -e
# Default option values
-jobs=
+jobs=0
quiet=0
source_dir=
build_only=
@@ -116,7 +116,7 @@ Top-level script to build and run a Maneage'd project within Docker.
--container-shell Open the container shell.
Operating mode:
- --quiet Do not print informative statements.
+ -q, --quiet Do not print informative statements.
-?, --help Give this help list.
--shm-size=STR Passed to 'docker build' (default: $shm_size).
-j, --jobs=INT Number of threads to use in each phase.
@@ -168,6 +168,8 @@ do
# Container options.
--base-name) base_name="$2"; check_v "$1" "$base_name"; shift;shift;;
--base-name=*) base_name="${1#*=}"; check_v "$1" "$base_name"; shift;;
+ --project-name) project_name="$2"; check_v "$1" "$project_name"; shift;shift;;
+ --project-name=*) project_name="${1#*=}"; check_v "$1" "$project_name"; shift;;
# Interactive shell.
--project-shell) project_shell=1; shift;;
@@ -176,8 +178,8 @@ do
--container_shell=*) on_off_option_error --container-shell;;
# Operating mode
- --quiet) quiet=1; shift;;
- --quiet=*) on_off_option_error --quiet;;
+ -q|--quiet) quiet=1; shift;;
+ -q*|--quiet=*) on_off_option_error --quiet;;
-j|--jobs) jobs="$2"; check_v "$1" "$jobs"; shift;shift;;
-j=*|--jobs=*) jobs="${1#*=}"; check_v "$1" "$jobs"; shift;;
-j*) jobs=$(echo "$1" | sed -e's/-j//'); check_v "$1" "$jobs"; shift;;
@@ -250,17 +252,43 @@ if ! [ x"$input_dir" = x ]; then
input_dir_mnt="-v $input_dir:/home/maneager/input"
fi
-# If no '--jobs' has been specified, use the maximum available jobs to the
-# operating system.
-if [ x$jobs = x ]; then jobs=$(nproc); fi
+# Number of threads to build software (taken from 'configure.sh').
+if [ x"$jobs" = x0 ]; then
+ if type nproc > /dev/null 2> /dev/null; then
+ numthreads=$(nproc --all);
+ else
+ numthreads=$(sysctl -a | awk '/^hw\.ncpu/{print $2}')
+ if [ x"$numthreads" = x ]; then numthreads=1; fi
+ fi
+else
+ numthreads=$jobs
+fi
-# [DOCKER-ONLY] Make sure the user is a member of the 'docker' group:
-glist=$(groups $(whoami) | awk '/docker/')
-if [ x"$glist" = x ]; then
- printf "$scriptname: you are not a member of the 'docker' group "
- printf "You can run the following command as root to fix this: "
- printf "'usermod -aG docker $(whoami)'\n"
- exit 1
+# Since the container is read-only and is run with the '--contain' option
+# (which makes an empty '/tmp'), we need to make a dedicated directory for
+# the container to be able to write to. This is necessary because some
+# software (Biber in particular on the default branch or Ghostscript) need
+# to write there! See https://github.com/plk/biber/issues/494. We'll keep
+# the directory on the host OS within the build directory, but as a hidden
+# file (since it is not necessary in other types of build and ultimately
+# only contains temporary files of programs that need it).
+toptmp=$build_dir/.docker-tmp-$(whoami)
+if ! [ -d $toptmp ]; then mkdir $toptmp; fi
+chmod -R +w $toptmp/ # Some software remove writing flags on /tmp files.
+if ! [ x"$( ls -A $toptmp )" = x ]; then rm -r "$toptmp"/*; fi
+
+# [DOCKER-ONLY] Make sure the user is a member of the 'docker' group. This
+# is needed only for Linux, given that other systems use other strategies.
+# (See: https://stackoverflow.com/a/70385997)
+kernelname=$(uname -s)
+if [ x$kernelname = xLinux ]; then
+ glist=$(groups $(whoami) | awk '/docker/')
+ if [ x"$glist" = x ]; then
+ printf "$scriptname: you are not a member of the 'docker' group "
+ printf "You can run the following command as root to fix this: "
+ printf "'usermod -aG docker $(whoami)'\n"
+ exit 1
+ fi
fi
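The membership test above can be sketched with a canned `groups` output (hypothetical) so the result is deterministic: awk prints the line only when it contains 'docker', so an empty result means "not a member".

```shell
# A sample 'groups' line for a user that is NOT in the docker group.
sample="maneager : maneager wheel"
glist=$(echo "$sample" | awk '/docker/')
if [ x"$glist" = x ]; then echo "not a member of docker"; fi
# → not a member of docker
```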
# [DOCKER-ONLY] Function to check the temporary directory for building the
@@ -341,6 +369,7 @@ fi
#
# Having the base operating system in place, we can now construct the
# project's docker file.
+intbuild=/home/maneager/build
if docker image list | grep $project_name &> /dev/null; then
if [ $quiet = 0 ]; then
printf "$scriptname: info: project's image ('$project_name') "
@@ -387,7 +416,7 @@ else
printf " cp -r $dsr /home/maneager/source; \x5C\n" >> $df
printf " cd /home/maneager/source; \x5C\n" >> $df
printf " ./project configure --jobs=$jobs \x5C\n" >> $df
- printf " --build-dir=/home/maneager/build \x5C\n" >> $df
+ printf " --build-dir=$intbuild \x5C\n" >> $df
printf " --input-dir=/home/maneager/input \x5C\n" >> $df
printf " --software-dir=$dts; \x5C\n" >> $df
@@ -456,29 +485,33 @@ fi
# The startup command of the container is managed though the 'shellopt'
# variable that starts here.
shellopt=""
+sobase="/bin/bash -c 'cd source; "
+sobase="$sobase ./project configure --build-dir=$intbuild "
+sobase="$sobase --existing-conf --no-pause --offline --quiet && "
+sobase="$sobase ./project MODE --build-dir=$intbuild"
if [ $container_shell = 1 ] || [ $project_shell = 1 ]; then
- # If the user wants to start the project shell within the container,
- # add the necessary command.
+ # The interactive flag is necessary for both these scenarios.
+ interactiveopt="-it"
+
+ # With '--project-shell' we need 'shellopt', the MODE just needs to be
+ # set to 'shell'.
if [ $project_shell = 1 ]; then
- shellopt="/bin/bash -c 'cd source; ./project shell;'"
+ shellopt="$(echo $sobase | sed -e's|MODE|shell|');'"
fi
- # Finish the 'shellop' string with a single quote (necessary in any
- # case) and run Docker.
- interactiveopt="-it"
-
# No interactive shell requested, just run the project.
else
interactiveopt=""
- shellopt="/bin/bash -c 'cd source; ./project make --jobs=$jobs;'"
+ shellopt="$(echo $sobase | sed -e's|MODE|make|') --jobs=$jobs;'"
fi
# Execute Docker. The 'eval' is because the 'shellopt' variable contains a
# single-quote that the shell should "evaluate".
-eval docker run \
+eval docker run --read-only \
-v "$analysis_dir":/home/maneager/build/analysis \
-v "$source_dir":/home/maneager/source \
+ -v $toptmp:/tmp \
$input_dir_mnt \
$shm_mnt \
$interactiveopt \
diff --git a/reproduce/software/shell/pre-make-build.sh b/reproduce/software/shell/pre-make-build.sh
index 28b7385..172bdb6 100755
--- a/reproduce/software/shell/pre-make-build.sh
+++ b/reproduce/software/shell/pre-make-build.sh
@@ -135,16 +135,6 @@ download_tarball() {
else mv "$ucname" "$maneagetar"
fi
fi
-
- # If the tarball is newer than the (possibly existing) program (the
- # version has changed), then delete the program. When the LaTeX name is
- # not given here, the software is re-built later (close to the end of
- # 'basic.mk') and the name is properly placed there.
- if [ -f "$ibidir/$progname" ]; then
- if [ "$maneagetar" -nt "$ibidir/$progname" ]; then
- rm "$ibidir/$progname"
- fi
- fi
}
@@ -159,6 +149,9 @@ build_program() {
# Options
configoptions=$1
+ # Inform the user.
+ echo; echo "Pre-make building of $progname"; echo
+
# Go into the temporary building directory.
cd "$tmpblddir"
unpackdir="$progname"-"$version"
@@ -183,7 +176,8 @@ build_program() {
# build the project, either with Make and either without it.
if [ x$progname = xlzip ]; then
- ./configure --build --check --installdir="$instdir/bin" $configoptions
+ ./configure --build --check --installdir="$instdir/bin" \
+ $configoptions
else
# All others accept the configure script.
./configure --prefix="$instdir" $configoptions
@@ -196,7 +190,10 @@ build_program() {
case $on_mac_os in
yes) sed -e's/\%1u/\%d/' src/flock.c > src/flock-new.c;;
no) sed -e's/\%1u/\%ld/' src/flock.c > src/flock-new.c;;
- *) echo "pre-make-build.sh: '$on_mac_os' unrecognized value for on_mac_os";;
+ *)
+ printf "pre-make-build.sh: '$on_mac_os' "
+            printf "unrecognized value for on_mac_os\n"
+ exit 1;;
esac
mv src/flock-new.c src/flock.c
fi
@@ -218,9 +215,9 @@ build_program() {
cd "$topdir"
rm -rf "$tmpblddir/$unpackdir"
if [ x"$progname_tex" = x ]; then
- echo "" > "$ibidir/$progname"
+ echo "" > "$texfile"
else
- echo "$progname_tex $version" > "$ibidir/$progname"
+ echo "$progname_tex $version" > "$texfile"
fi
fi
}
@@ -238,12 +235,12 @@ build_program() {
# (without compression it is just ~400Kb). So we use its '.tar' file and
# won't rely on the host's compression tools at all.
progname="lzip"
-progname_tex="" # Lzip re-built after GCC (empty string to avoid repetition)
+progname_tex="" # Lzip is re-built after GCC (empty to avoid repetition)
url=$(awk '/^'$progname'-url/{print $3}' $urlfile)
version=$(awk '/^'$progname'-version/{print $3}' "$versionsfile")
tarball=$progname-$version.tar
-download_tarball
-build_program
+texfile="$ibidir/$progname-$version-pre-make"
+if ! [ -f $texfile ]; then download_tarball; build_program; fi
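The guard above is a stamp-file pattern: the download and build run only when the stamp file ('texfile') is absent, so re-running the configuration skips programs that are already built. A minimal, self-contained sketch (the names 'demo-1.0-pre-make' and 'build_program' here are hypothetical stand-ins):

```shell
# Stamp-file idempotence: the build runs only when its stamp is missing.
stampdir=$(mktemp -d)
texfile="$stampdir/demo-1.0-pre-make"
build_program() { echo "building demo"; echo "Demo 1.0" > "$texfile"; }

if ! [ -f "$texfile" ]; then build_program; fi   # first run: builds
if ! [ -f "$texfile" ]; then build_program; fi   # second run: skipped
cat "$texfile"   # → Demo 1.0
rm -r "$stampdir"
```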
@@ -268,8 +265,11 @@ progname_tex="" # Make re-built after GCC (empty string to avoid repetition)
url=$(awk '/^'$progname'-url/{print $3}' $urlfile)
version=$(awk '/^'$progname'-version/{print $3}' $versionsfile)
tarball=$progname-$version.tar.lz
-download_tarball
-build_program "--disable-dependency-tracking --without-guile"
+texfile="$ibidir/$progname-$version-pre-make"
+if ! [ -f $texfile ]; then
+ download_tarball
+ build_program "--disable-dependency-tracking --without-guile"
+fi
@@ -286,13 +286,11 @@ progname_tex="Dash"
url=$(awk '/^'$progname'-url/{print $3}' $urlfile)
version=$(awk '/^'$progname'-version/{print $3}' $versionsfile)
tarball=$progname-$version.tar.lz
-download_tarball
-build_program
+texfile="$ibidir/$progname-$version"
+if ! [ -f $texfile ]; then download_tarball; build_program; fi
# If the 'sh' symbolic link isn't set yet, set it to point to Dash.
-if [ -f $bindir/sh ]; then just_a_place_holder=1
-else ln -sf $bindir/dash $bindir/sh;
-fi
+if ! [ -f $bindir/sh ]; then ln -sf $bindir/dash $bindir/sh; fi
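The idempotent symbolic-link step above can be sketched in a temporary directory (the 'dash' file here is an empty placeholder, not the real shell):

```shell
# Create the 'sh' link only when it does not already exist; a second run
# is a no-op because '[ -f ... ]' follows the symlink to its target.
bindir=$(mktemp -d)
: > "$bindir/dash"                 # placeholder for the dash binary
if ! [ -f "$bindir/sh" ]; then ln -sf "$bindir/dash" "$bindir/sh"; fi
if ! [ -f "$bindir/sh" ]; then echo "would re-link"; fi   # not printed
readlink "$bindir/sh"              # prints the path of the link target
rm -r "$bindir"
```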
@@ -315,12 +313,5 @@ progname_tex="Discoteq flock"
url=$(awk '/^'$progname'-url/{print $3}' $urlfile)
version=$(awk '/^'$progname'-version/{print $3}' $versionsfile)
tarball=$progname-$version.tar.lz
-download_tarball
-build_program
-
-
-
-
-
-# Finish this script successfully
-exit 0
+texfile="$ibidir/$progname-$version"
+if ! [ -f $texfile ]; then download_tarball; build_program; fi