|
Until now, we hadn't actually tested the case where a whole software
directory (Python modules in particular) is empty. So the configure script
finished with some errors in this case.
With this commit, this step of the configure script was modified to deal
with such cases cleanly.
Also, in `initialize.mk', I added a `-f' to the symbolic link command, so
it doesn't complain if the link already exists.
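As a minimal sketch of that fix (link name and target are hypothetical
here), the `-f' makes the command idempotent over re-runs:
    # Without `-f', a second run of this command would abort with
    # "File exists"; with it, the link is silently replaced.
    ln -sf "$builddir" .build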
|
|
In order to get a consistent final result, in its later steps, the
configure script uses our own build of the basic command-line tools (like
`cat', `awk').
Also, errors in the parsing of short options that are given an unwanted
argument were corrected, and `-?*' was changed to `-'?'*' to avoid
unnecessary shell interpretation of the `?' wildcard (which could give
unreasonable results).
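A minimal sketch of the quoting difference inside an option-parsing
`case' statement (whether `-?' is used as a help request or a catch-all
depends on the script; only the wildcard semantics matter here):
    case "$1" in
        # Quoted: the `?' is literal, so this only matches `-?...'.
        -'?'*) echo "literal -? given" ;;
        # Unquoted: the `?' is a wildcard matching any one character,
        # so this would match every short option.
        -?*)   echo "some short option given" ;;
    esac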
|
|
On some systems, M4 isn't available, so linking to the host's M4 fails
and, as a result, we can't build GNU Libtool.
The main reason we weren't building M4 was a bug with the most recent GNU C
library
(http://lists.gnu.org/archive/html/bug-gnulib/2019-04/msg00004.html). But I
found a patch used by Arch Linux which fixes the issue and allows M4 to be
built. As a result, the pipeline is now building M4 also and the patched M4
tarball is now uploaded to my own webpage as backup.
While doing the steps above, I also noticed that we weren't using a tab at
the start of the link definitions of `dependencies-basic.mk'. Although it's
not necessary, to be consistent, it's good for the lines to always start
with a tab.
|
|
The step where we check the possibility of using `sys/cdefs.h' was still
using `$$' for shell variables (the Make convention), not the single `$'
that the shell itself expects. This was corrected.
Also, since Astropy needs two citations, the `,' in the citation command
would conflict with Make's function-call parsing. So we just used an
`echo' command to re-write the version info.
In Astroquery, the prerequisite list was simply reordered by length to be
easier on the eye.
|
|
In some cases (especially when debugging the pipeline), it's very
time-consuming to install GCC. With this commit, a `--host-cc' option has
been added to allow skipping the build of the C compiler when it isn't
needed.
The test to see if `sys/cdefs.h' is available on the system (necessary to
build GCC) has also been moved to the configure script to print a more
visible warning and also use the new `host_cc' variable to let
`dependencies-basic.mk' know that GCC shouldn't be built.
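A minimal sketch of such a check in the configure script (variable name,
values and warning text are assumptions; the real script may differ):
    # Feed a one-line source to the host compiler's preprocessor; if
    # it fails, `sys/cdefs.h' isn't available and GCC can't be built.
    if echo '#include <sys/cdefs.h>' | cc -x c -E - > /dev/null 2>&1; then
        host_cc=0
    else
        host_cc=1
        echo "WARNING: sys/cdefs.h not found, GCC will not be built."
    fi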
Finally, we are having problems installing M4 from source, so it has been
set as a mandatory dependency.
|
|
Until now, management of the software names and versions in the paper was
done manually (a macro had to be defined in `initialize.mk', then used in
`paper.tex', so they had to be manually set in two places). Managing this
was not easy.
To fix this, with this commit, each software building rule's target is a
text file that contains its human-readable name and its version. In the
end, the configure script sorts them by their name and writes them into a
LaTeX input file that we can easily import as a file into the main paper.
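As a sketch of the idea (file layout and names are hypothetical): if
every rule writes a line like `GNU Bash 5.0' into its own small target
file, the configure script only has to concatenate, sort, and wrap them
for LaTeX:
    # Gather all the name/version lines, sort them by name, and append
    # a LaTeX line-break so the list renders one entry per line.
    cat "$versionsdir"/* \
        | sort \
        | sed 's/$/\\\\/' > "$texdir"/dependencies.tex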
|
|
Until now, these versions were written in each run. This was mainly
inherited from the old days of the pipeline, where we didn't know the
software on the host. But now that we have almost everything under control,
we can just write these LaTeX macros at the end of the configure script and
make `initialize.mk' simpler and also (very slightly!) speed-up/simplify
the processing.
|
|
Double quotes were placed around the checked values so they can have space
within them. Also, some checks were added for options that don't accept a
value.
|
|
Until now, the short options to the configure script needed a delimiter
(either white-space or an `=') between the name and value.
With this commit, for short options, it also accepts the value immediately
touching the option name.
Also, when trying to find the absolute address of a given path, a check
was added to abort if it doesn't exist.
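A minimal sketch of the two behaviors (all names hypothetical):
    # Short option with the value immediately touching the name (`-j8').
    case "$1" in
        -j)  numthreads="$2"; shift ;;
        -j*) numthreads="${1#-j}" ;;
    esac
    # Resolve a path to an absolute address, aborting when it is absent.
    absolute_dir() {
        if [ -d "$1" ]; then (cd "$1" && pwd)
        else echo "$1: no such directory" 1>&2; return 1
        fi
    }
    builddir=$(absolute_dir "$builddir") || exit 1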
|
|
Until now we were (wrongly) assuming that the configure script's
`--existing-conf' option takes a value, while in fact it doesn't.
|
|
We were developing the build of Numpy and Scipy on Mac in a parallel thread
and things seem to be working relatively well now. There were only two
problems:
1) GCC still has some random building issues on Mac.
2) ATLAS shared libraries can't be built on Mac (so we used OpenBLAS to
build Numpy and Scipy on both Mac and GNU/Linux).
But for now, none of these problems are critical. So, we can progress in
one branch.
There were only very minor conflicts in the merge.
|
|
Until now, the steps to manage the command-line options of the configure
script were limited (they couldn't accept an equal sign between the
option name and value, only white-space). With this commit, an optional
equal sign between the option name and value is also accepted, avoiding
much confusion.
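A minimal sketch of accepting both forms (option name hypothetical):
    case "$1" in
        -b|--build-dir)     builddir="$2"; shift ;;  # space-separated
        -b=*|--build-dir=*) builddir="${1#*=}"   ;;  # with equal sign
    esac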
Also, it is more logically consistent for the link to the build-directory
to be placed in the top directory (as a hidden file like `.local' until
now), and not as a visible directory like `reproduce/build' (which we used
until now). Therefore, with this commit, the link to easily access the
build-directory is `.build' in the top source directory.
Finally, because `minmapsize' is too specific to Gnuastro and has now been
given its default value at the start of the configure script, the
description for `minmapsize' has been removed (to not confuse users who
don't use Gnuastro). If anyone is familiar enough with Gnuastro to change
it, they already know it from its book.
|
|
Until this commit, the installation of all Python packages was
done in a separate Makefile.
With this commit, the pipeline installs Python packages as part of the
high-level software. All the Python package rules remain in a
separate Makefile, but that Makefile is now included in the high-level
dependency Makefile `reproduce/src/make/dependencies.mk'.
|
|
With the options, it is now possible to run the configure script more
easily after the initial run. The `--help' option provides a nice and
complete introduction along with a listing of the input options and the
`-j' option can be used to manually set the number of threads.
|
|
Until now, we were using `flock' (file-lock) for downloading the input
datasets in series. But we couldn't do this when downloading the software
tarballs because `flock' wasn't yet available. Generally, unlike
processing, downloading is much better done in series than in parallel.
To also enable serial downloads of the software, with this commit we are
installing `flock' early in the configure script (not in a Makefile). As a
result, besides `flock', we can also benefit from the other good features
of the `reproduce/src/bash/download-multi-try' script (for example,
attempting the download again after some time).
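A minimal sketch of serializing the downloads with `flock' (the
lock-file path and direct Wget call are assumptions; the real recipes go
through `download-multi-try'):
    # All download recipes share one lock file, so even under a
    # parallel Make they run one at a time.
    flock "$lockdir"/download wget -O "$output" "$url"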
Some GNU mirrors may have problems at the time of download, so with this
commit, we are using the main GNU FTP server for GNU programs.
|
|
All the dependencies for building the astroquery package are now in
place. Until now, the Python dependencies were built in the same
Makefile as the high-level libraries and programs. But because
astroquery has many dependencies, we have split Python and the Python
packages into a new Makefile.
The installation of the different packages is done with Python directly,
not with pip, because we found some problems when doing it with pip.
Apparently there are some interferences between the packages
installed by the pip of the system and the pip installed as part
of Python in the pipeline.
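As a sketch (package name and version hypothetical), each package is
therefore installed from its unpacked tarball with the pipeline's own
Python:
    # Using `setup.py' directly avoids the system pip entirely.
    cd "$ddir"/astroquery-X.Y.Z
    python setup.py install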
|
|
Until now, the group name to build the project actually went into the Git
source of the project! This doesn't allow exact reproducibility on
different machines (where the group name may be different).
With this commit, the `for-group' script has been modified to accept the
group name as its first argument and pass that onto `configure' and
Make. This is much better now, because not only is the existence of a group
installation checked, but also the name of the group. It also made
things simpler (in particular in `LOCAL.mk.in').
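A minimal sketch of the idea (the exact contents of `for-group' are
assumed here, not copied):
    # Hypothetical sketch; usage: ./for-group <groupname> <command...>
    group=$1; shift
    # `sg' runs the command with <groupname> as the effective group,
    # so all the created files are group-owned.
    sg "$group" -c "$*"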
|
|
Until now, the `./configure' script would only print the `.local/bin/make
-j8' command. But when configured for groups, a different command should be
used. It now does a check just before running and suggests the proper
command.
|
|
Until now it was `/bin/sh', but on Debian systems this can cause problems,
because there `/bin/sh' points by default to a much more minimal shell
(Dash), which doesn't recognize some of the syntax we use (for example,
Bash-style function definitions).
|
|
If the `./for-group' script is not used properly, it can lead to the whole
pipeline being re-run. Therefore it is important to do a sanity check
immediately at the start of Make's processing and inform the user if there
is a problem.
With this commit, `./for-group' exports the `reproducible_paper_for_group'
variable which is used by both the initial `./configure' script, and later
in each call to Make. The `./configure' script will use it to write a value
in `reproduce/config/pipeline/LOCAL.mk' and Make will use it to compare
with the value in `reproduce/config/pipeline/LOCAL.mk'.
If there is an inconsistency, Make will not even attempt to build anything
and will just print a message and abort.
|
|
Since the current implementation of this pipeline officially started in
2018, all the files only had 2018 in their copyright years. This has now
been corrected to 2018-2019.
|
|
To make things clear for users of the pipeline, it is now mentioned that
the given input directory is only read from and that nothing is written
to it.
|
|
Some problems with using the number of threads in dependency building were
fixed.
|
|
Some host Make systems may not allow automatic passing of the number of
threads to sub-Makes. So while building the basic dependencies, we'll need
to explicitly add the `-j' option to the Make files that can benefit most
from it: those that are dependencies of many others (Tar & Make), or are
the last to build (Coreutils).
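As a sketch (names assumed), a recipe for such a dependency passes the
job count explicitly rather than relying on the host Make's jobserver:
    # Build one of the heavily-depended-upon programs with an explicit
    # job count, independent of how the parent Make was invoked.
    cd "$ddir"/make-"$makeversion" \
       && ./configure --prefix="$idir" \
       && make -j"$numthreads" \
       && make install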
|
|
The comment above the downloader section of the configure script was not up
to date with how the pipeline uses a downloader during configuration and
building now. So it was updated.
|
|
Until now we had constrained the configuration step to one thread to easily
see failures on other systems. But with most tests passing successfully
now, we are using the total number of available threads.
|
|
The build systems of Libgit2 and WCSLIB on Mac OS do not account for
installation in non-standard addresses: `Libgit2' keeps the absolute
address of its build directory (not the installation directory) and WCSLIB
doesn't write any absolute address at all (so the system uses the first one
it finds).
To address these issues, we are now using Mac OS's `install_name_tool'
program to fix the absolute path within the installed shared library.
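A minimal sketch of the fix (library file name and version are
hypothetical):
    # Re-write the install name recorded inside the installed library,
    # so programs linking against it find it at its real address.
    install_name_tool -id "$instdir"/lib/libgit2.0.26.dylib \
                      "$instdir"/lib/libgit2.0.26.dylib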
Since the version of the library is actually present in its shared library
name, in `dependency-versions.mk' we have also separated these two
libraries so later when their version is changed, we are careful in
correcting the shared library name also.
|
|
OpenSSL can't automatically detect the architecture of Mac OS systems, so
as it suggests on its Wiki, it needs some help for doing that. With this
commit, we are detecting a build on Mac OS through the presence of `otool'
(Mac OS's object-file inspection tool). If it's there, we'll add the
OpenSSL configuration
options suggested by OpenSSL's Wiki.
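A minimal sketch of the check (the exact target name is an assumption
based on OpenSSL's Wiki for 64-bit Macs; other architectures need other
targets):
    if type otool > /dev/null 2>&1; then
        # On Mac OS, tell OpenSSL its target explicitly.
        ./Configure darwin64-x86_64-cc shared --prefix="$instdir"
    else
        # Elsewhere, let OpenSSL guess the architecture itself.
        ./config shared --prefix="$instdir"
    fi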
|
|
Until now, we weren't adding the `rpath' linking options to the basic
dependencies. They are now added. Also, when the download of an input file
fails for any reason, an empty output file won't be left in its place any
more.
|
|
To enable easy downloading of HTTPS links with Wget (this pipeline's default
downloader), we need a set of trusted CA certificates. Until the time that
we can generate one ourselves, one generic set of trusted CA certificates
is now downloaded as a tarball and placed in the OpenSSL configuration
directory.
With these CA certificates, within the pipeline we can now safely use the
pipeline's own installed Wget.
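A sketch of the step (the URL and directory layout are assumptions):
    # Fetch a generic CA-certificate bundle and unpack it where our
    # OpenSSL build looks for certificates.
    wget -O "$tmpdir"/cert-bundle.tar.gz "$certurl"
    tar -x -f "$tmpdir"/cert-bundle.tar.gz \
        -C "$instdir"/etc/ssl/certs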
|
|
Some high-level programs like Wget and cURL need to be built in shared mode
because they also include dynamic loading of libraries. Therefore, if we
only build the lower-level libraries in static mode, our own build will be
ignored and they will go and find the system's shared libraries to link
with. Because of this, for now, we have manually set the `static_build'
variable in the configure script to `no'.
Also, if the downloader fails, we'll delete the output (an empty file in
the case of Wget) because it interferes with a target definition.
|
|
To help in debugging, we are only running the Makefiles within the
configure script on one thread.
|
|
The TeX Live installer needs Wget to operate smoothly, especially on recent
Mac OS systems that don't have Wget pre-installed. Also, it would be good
for the pipeline to have its own downloader. So with this commit, the
pipeline also installs Wget, along with its dependency OpenSSL.
Many other small changes/fixes were done in this process.
|
|
Thanks to the check by Cristina Martínez, some corrections were made in
the steps that download and install TeX Live. Further checks and
corrections will come in due time.
|
|
The main reason I wasn't using cURL as a downloading tool was that I wasn't
familiar with how to ask it to follow a redirect. But I just found out that
it's done with its `-L' option. So it is now added as a downloader
tool to the pipeline.
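For reference, a minimal call (output name and URL hypothetical):
    # `-L' makes cURL follow HTTP redirects to the final address.
    curl -L -o "$output" "$url"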
|
|
The Libgit2 webpage recommends building it statically on Mac
systems. By default we are doing this on GNU/Linux systems, but the
`-static' flag failed on Mac. Apparently CMake might be able to deal with
the issue in a different way.
|
|
Thanks to a test build on Raul Infante Sainz's Mac OS computer, we were
able to address some issues and will be trying them after this commit:
a) The LLVM linker on that computer didn't recognize `-rpath-link'! So at
   configure time, we now check for it and only include it when the linker
   recognizes it (see the sketch after this list).
b) CMake corrections: 1) `CMAKE_LIBRARY_PATH' is now defined so CMake can
look in our custom directory to find the necessary libraries. 2) To
build and install the CMake built programs, we now simply use `make'
and `make install'.
c) To avoid particular linking problems with WCSLIB (which has special
problems compared to other libraries), we are now deleting the shared
library version (both on GNU and Mac systems).
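A minimal sketch of that configure-time linker check (file names
assumed):
    # Try linking a trivial program with `-rpath-link'; keep the flag
    # only if the host linker accepts it.
    echo 'int main(void){return 0;}' > rpathtest.c
    if cc rpathtest.c -o rpathtest \
          -Wl,-rpath-link,"$instdir"/lib 2> /dev/null; then
        rpathlink="-Wl,-rpath-link,$instdir/lib"
    else
        rpathlink=""
    fi
    rm -f rpathtest rpathtest.c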
|
|
GNU Binutils (which provides the GNU Linker) is not ported to Mac OS
systems. GCC also takes a very long time to build, and if we are to still
have linking problems with LLVM's linker, it would be better to just ignore
GCC also and use the system's C compiler and linker together.
So for the time being, GCC isn't a main target of the basic dependencies
and won't be installed. But we have kept the rules that were checked on a
GNU/Linux operating system.
|
|
The pipeline now installs GCC and all its necessary prerequisites.
|
|
We had forgotten to add the rule to build the lock file directory for
downloading data. This has been corrected.
|
|
To avoid redundant steps in the top-level Makefile and make it simpler
and easier to follow, we now define the base names of all the Makefiles in
the `makesrc' variable of the top-level Makefile. `makesrc' is then used to
define the Makefiles to include and the necessary TeX macros at the same
time. This is much clearer and more obvious than the previous case, where
we had to list the Makefiles and TeX macro files separately in the
top-level Makefile.
|
|
While preparing the input dataset downloading scripts, we wanted to have
two input datasets, but that was too much, so one was ultimately removed.
But I had forgotten to remove the information about this file in the
`./configure' script. So it is removed now.
|
|
In most analysis situations (except for simulations), an input dataset is
necessary, but that part of the pipeline was just left out and a general
`SURVEY' variable was set and never used. So with this commit, we actually
use a sample FITS file from the FITS standard webpage, show it (as well as
its histogram) and do some basic calculations on it.
This preparation of the input datasets is done in a generic way to enable
easy addition of more datasets if necessary.
|
|
Until now, in the instructions, we were suggesting to run
`./.local/bin/make', but the `./' part is redundant: the command already
contains a slash, so the shell runs it directly without a PATH search. So
to make things easier to read/write, we removed the `./' part from the
calls to our custom Make installation.
|
|
By default, Wget gives the downloaded file the same time stamp as on the
server. As a result, if a dependency was previously built before the
tarball was uploaded to the server, the pipeline wouldn't recognize that
it's new and wouldn't re-build it. With this commit, we are adding
`--no-use-server-timestamps' to the call to Wget to fix this problem.
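For reference, the call now looks something like this (output name and
URL hypothetical):
    # The downloaded file gets the current time, so Make correctly
    # sees it as newer than previously built targets.
    wget --no-use-server-timestamps -O "$output" "$url"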
I also noticed that (probably as a bug left over from the time we were
testing the configure script), we were manually setting the `userread'
variable to `n' independent of what the user provided. This has also been
fixed.
|
|
When someone visits this project online, the `README.md' file is the
default file that opens on GitLab and other online Git repositories. Since
a reproduction pipeline project is different from the actual pipeline, it's
best for the default text that opens to describe the paper, not the
pipeline. The old `README.md' is also kept, but it's now called
`README-pipeline.md'.
|
|
When there is a problem in creating the final TeX Live installation, the
previous version of the pipeline would not notice and would just finish! We
would later have problems in building the paper.
So the following series of steps were taken: to keep the recipes
shorter and easier to understand, the steps to install TeX Live are now
one rule (that produces `.local/bin/texlive-ready-tlmgr' when it is
successful), and the steps to install the necessary packages are in another
rule (that produces `.local/bin/texlive-ready' when it is successful).
When control comes back inside configure, if `.local/bin/texlive-ready'
isn't there (something failed during the TeX Live installation, or building
packages), then the whole TeX Live installation directory
(`.local/texlive') will be deleted along with the two output files. This
will help ensure that future steps can check the availability of a working
TeX Live in the pipeline.
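A minimal sketch of that final check in the configure script (recovery
steps as described above):
    # If the packages rule never succeeded, wipe the half-built TeX
    # Live so a future run starts from a clean state.
    if ! [ -f .local/bin/texlive-ready ]; then
        rm -rf .local/texlive \
               .local/bin/texlive-ready-tlmgr \
               .local/bin/texlive-ready
    fi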
|
|
After installing the basic dependencies, we have access to the internally
built `nproc' program (by Coreutils) to find the number of threads we can
use for building the high-level batch of dependencies. Unfortunately, I had
added an extra `./' before `$instdir' when calling `nproc' so the script
failed. This problem is fixed now.
|
|
The system's libraries are no longer used in building the higher-level
dependencies. Also, thanks to Raul Infante Sainz, we found out that Bash's
build script was still removing the extra directory information (not
good!).
|
|
When the C compiler is not GNU GCC, linking with GNU Binutils is going to
cause problems. So until the time that we can include GCC in this
pipeline, it's best to avoid Binutils also.
Also, for building CMake, we were relying on an installed CMake, but now,
we are using its own `./bootstrap' script, so it can be built even if the
host system doesn't have CMake.
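For reference, the bootstrapped build looks something like this (version
hypothetical):
    # CMake's own `bootstrap' script builds a minimal CMake first and
    # then uses it to build the real thing; no host CMake is needed.
    cd "$ddir"/cmake-X.Y.Z
    ./bootstrap --prefix="$instdir" && make && make install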
Also, for TeX Live, we are now setting a custom file as main target to
avoid complications with symbolic links as targets in Make.
Finally, when the user says they don't want to re-write an existing
configuration file, no extra notices will be printed and the configure
script will immediately start building programs.
|