|
This section was a little outdated; since it was written, a clearer and more
exact description of using Nix with the reproducible paper template has been
added.
|
|
In the libpng installation, `ilibdir' was mistakenly used instead of `ilidir'.
|
|
Until now, the pipeline was not installing its own `gcc', but was using the
system's GCC through a symbolic link.
With this commit, GNU GCC has been added to the pipeline. Right now the
installation does not work on macOS because of some conflicts with `clang',
but in principle it should work on GNU/Linux distributions.
|
|
Until now, the installation of Python and its packages (numpy, astropy,
astroquery, etc.) was done in the same `makefile' as the other software.
With this commit, the installation of Python and its packages has been
split off and is now independent of the other programs. The installation
of every Python package needs to be written explicitly because pip is
no longer used.
|
|
Until now, once the Git hooks had been installed (after the installation of
Metastore), if Metastore didn't exist (for example after manually deleting
the build directory for a re-build with the same configuration as before),
`git commit' couldn't be run and `git checkout' would print an ugly warning.
With this commit, the two Git hooks check for the existence of Metastore
and if it doesn't exist, they don't do anything.
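A minimal sketch of such a guard (the exact path to the Metastore executable
here is only illustrative):

    # At the top of both Git hooks: if Metastore is not available
    # (e.g., the build directory was deleted), quietly do nothing.
    if ! [ -x .local/bin/metastore ]; then
        exit 0
    fi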
|
|
In an attempt to test the GCC build rule (without Binutils, because it is too
architecture dependent), all the necessary dependencies were moved to GCC
(from `ld'). `fortran' was also added to the languages supported by GCC.
This rule built GCC 8.2.0 nicely on my GNU/Linux system. But `gcc' is still
not a final target to build, so the rule is being ignored for now.
|
|
As matplotlib is a general plotting package that is widely used in science,
we have added it to the pipeline.
When installing `python-dateutil' (a dependency of matplotlib), we found a
conflict in the download of the tarball. This is because its name has a
dash (-) in the middle, and it also starts with 'python', the same as Python
itself. Now it is possible to install a package with any name by just adding
an `elif' before the URL definition.
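The idea, in a simplified and purely illustrative form (the URLs, version
number and name-parsing are placeholders, not the pipeline's real rule), is
to check the more specific name before the generic one:

    # Choose the download location from the tarball's base name; the
    # `python-dateutil' case must come before the plain `python' case.
    tarball=python-dateutil-2.7.5.tar.gz     # example name only
    name=${tarball%%-[0-9]*}                 # --> python-dateutil
    if   [ "$name" = python-dateutil ]; then
        url=https://pypi.org/project/python-dateutil
    elif [ "$name" = python ]; then
        url=https://www.python.org/ftp/python
    else
        url=https://example.org/tarballs
    fi
    echo "$url/$tarball"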
|
|
All the dependencies for building the astroquery package are now in place.
Until now, the Python dependencies were built in the same Makefile as the
high-level libraries and programs. But because astroquery has many
dependencies, we have split the installation of Python and the Python
packages into a new Makefile.
The installation of the different packages is done with Python directly and
not with pip, because we found some problems when doing it with pip.
Apparently there is some interference between the packages installed by the
system's pip and the pip installed as part of the pipeline's Python.
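Concretely, each package's recipe boils down to steps like the following
(the paths and package name are placeholders, shown only to illustrate the
pip-free installation):

    # Unpack the package tarball and install it with the pipeline's
    # own Python interpreter (no pip involved).
    cd /path/to/build/directory
    tar xf /path/to/tarballs/astroquery-X.X.X.tar.gz
    cd astroquery-X.X.X
    python setup.py install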
|
|
As in all programs, the build process of ncurses depends on the running
shell (Bash) and AWK. At the start of the ncurses build, we remove its
library. But Bash and AWK depend on ncurses to run (this creates a circular
dependency). Therefore it is necessary to also remove the Bash and AWK
executables when re-building ncurses.
This bug was found by Raul Infante-Sainz.
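In short, the start of the ncurses recipe now behaves roughly like this
(the installation paths are placeholders):

    # Deleting the installed ncurses libraries makes the already-built
    # Bash and AWK unusable, so delete them too and let their rules
    # re-build them afterwards.
    rm -f /path/to/installed/lib/libncurses*
    rm -f /path/to/installed/bin/bash
    rm -f /path/to/installed/bin/awk /path/to/installed/bin/gawk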
|
|
Raul Infante-Sainz added the building of Python (along with the Numpy and
Astropy packages) to the pipeline. That work is now merged into the main
pipeline branch.
There was only one small problem that needed to be fixed: the Python
tarball's name after unpacking is actually `Python-X.X.X' (with a capital
P), not `python-X.X.X'. This has been corrected with this merge.
|
|
The zip program wasn't placed correctly (in alphabetical order) and its URL
command had the wrong indentation! Both have no effect at all on the
processing and are only cosmetic (to help in readability).
|
|
Astropy was added. One very important point is that we have to use the
PyPI tarball (https://pypi.org/), which is bootstrapped, and not the
GitHub tarball.
|
|
Python needs some packages to be really useful. Numpy is the most
important package for using Python and a lot of other packages
depend on it.
In this commit we add numpy to the pipeline. The numpy tarball is
currently downloaded from fossies.
|
|
Many projects use Python, so it is necessary to include it in the
pipeline.
|
|
In the example command of the wrapper script, I had just written
`./download-multi-try', but this script is meant to be run from the top of
the project directory, which could cause confusion.
So the example now starts with `/path/to/download-multi-try'.
|
|
We don't have a `.sh' suffix in the other scripts of `reproduce/src/bash',
so it was also removed from this script.
|
|
Until now, downloading was treated like any other operation in the
Makefile: if it crashed, the pipeline would crash. But network errors
aren't like processing errors: attempting the download a second time will
probably not crash (network relays are very complex and not reproducible,
and packets get lost all the time)!
This is usually not felt when downloading one or two files, but when
downloading many thousands of files, it will happen every once in a while
and it's a real waste of time until you check and just press enter again!
With this commit we have the `reproduce/src/bash/download-multi-try.sh'
script in the pipeline, which will repeat the download several times (with
increasing time intervals) before crashing, and thus fixes the problem.
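The core of such a script is just a retry loop with a growing delay; a
minimal sketch (argument handling, the number of attempts and the
downloader are simplified here) would be:

    # Try the download up to 5 times, waiting longer after each
    # failure; only give up (and crash the pipeline) on the last one.
    url=$1
    out=$2
    for i in 1 2 3 4 5; do
        if wget -O "$out" "$url"; then
            exit 0
        fi
        sleep $((i * 10))      # increasing interval between attempts
    done
    echo "Failed to download $url after 5 attempts" >&2
    exit 1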
|
|
In order to collaborate effectively in the project, even project members
that don't necessarily want (or have the capacity) to do the whole analysis
must be able to contribute to the project. Until now, the users of the
distributed tarball could only modify the text and not the figures (built
with PGFPlots) of the paper.
With this commit, the management of TeX source files in the pipeline was
slightly modified to allow this as cleanly as I could think of now! In
short, the hand-written TeX files are now kept in `tex/src' and for the
pipeline's generated TeX files (in particular the old `tex/pipeline.tex'),
we now have a `tex/pipeline' symbolic-link/directory that points to the
`tex' directory under the build directory.
When packaging the project, `tex/pipeline' will be a full directory with a
copy of all the necessary files. Therefore as far as LaTeX is concerned,
having a build-directory is no longer relevant. Many other small changes
were made to do this job cleanly which will just make this commit message
too long!
Also, the old `tarball' and `zip' targets are now `dist' and `dist-zip' (as
in the standard GNU Build system).
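In essence, the link is created once (the build-directory path below is a
placeholder) and LaTeX only ever sees `tex/pipeline':

    # Point `tex/pipeline' to the generated-TeX directory inside the
    # build directory; in the distributed tarball this link is
    # replaced by a real directory with copies of the files.
    ln -sf /path/to/build-directory/tex tex/pipeline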
|
|
With this commit, it is now possible to package the project into a tarball
or zip file, ready to be distributed to collaborators who only want to
modify the final paper (and not do the analysis technicalities), or for
uploading to sites like arXiv, or online LaTeX sharing pages.
|
|
A few issues came up while testing the `for-group' script in one of the
projects based on this pipeline; they are fixed with this commit:
1) We are ultimately using the `sg' command to use the specified group,
   not `chgrp'. So in cases where `chgrp' has problems, this would report
   a misleading error. So for the test of the given group's existence, we
   now directly call `sg'.
2) In the call to `make' we were mistakenly giving make the `$2' (which
   is `make' on the command-line) argument. Since `./for-group' now takes
   the group name as its first argument, this should have been `$3'.
3) To help readability, and also to allow group names with a space,
   `reproducible_paper_group_name' is now defined and exported before the
   final call to `sg' (see the sketch after this list).
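The end of the script therefore looks roughly like this (a simplified
sketch, not the exact code):

    # Define and export the group name from the first argument, then
    # run the rest of the command-line under that group with `sg'.
    export reproducible_paper_group_name="$1"
    shift
    sg "$reproducible_paper_group_name" -c "$*"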
|
|
Until now, the group name to build the project actually went into the Git
source of the project! This doesn't allow exact reproducibility on
different machines (where the group name may be different).
With this commit, the `for-group' script has been modified to accept the
group name as its first argument and pass it on to `configure' and
Make. This is much better now, because not only is the existence of a group
installation checked, but also the name of the group. It also made things
simpler (in particular in `LOCAL.mk.in').
|
|
Until now, the `./configure' script would only print the `.local/bin/make
-j8' command. But when configured for groups, a different command should be
used. It now does a check just before running and suggests the proper
command.
|
|
Until now it was `/bin/sh', but on Debian systems this can cause problems
because their default `/bin/sh' is a much more minimal shell (dash) which
doesn't recognize the function syntax used here.
|
|
I recently found another fork of metastore that allows its build on macOS
systems (https://github.com/mpctx/metastore). So I forked it into my own
fork with several other corrections (mostly cosmetic!), so it is now much
better suited for this pipeline.
Raul Infante-Sainz has already tested the building of metastore on his
macOS. In a previous test, we also noticed that libbsd should not be built
on Mac systems, so it is now a conditional prerequisite to metastore.
|
|
While editing files, some editors create temporary `~' files that can
prevent Metastore from deleting their host directory if it doesn't exist on
the other branch. With this commit, a `find' call was added to the
post-checkout Git hook to remove such temporary files before Metastore is
called.
Also, some comments were added to both Git hooks to make them easier to
understand for a beginner.
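For example, a call along these lines near the top of the post-checkout
hook (the exact options used may differ):

    # Delete editor backup files (ending in `~') so Metastore can
    # remove directories that don't exist on the checked-out branch.
    find . -type f -name "*~" -delete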
|
|
I needed to take these steps on a few occasions in a project I am building
over this pipeline. This will commonly happen when a team starts using this
pipeline, so it was added to make things easier.
|
|
To be more generic and recognizable, the `README-pipeline.md' file was
renamed to `README-hacking.md'. In essence, that is just what it describes:
how to hack the existing pipeline for your own project. A similar naming
convention is followed in many GNU software packages.
|
|
Two corrections were made in the Git hooks of Metastore.
1) The shebang at the start of the scripts now uses the absolute address of
   our installed Bash, not the relative `.local/bin/bash'. Note that Git can
   be run from within subdirectories, and in that scenario the relative
   `.local' path will fail.
2) The `$$user' section was removed from the command to find the user's
   group. With the user as an argument, `groups' may print the user's name
   first, then their list of groups. When this happens, the script would
   just be repeating the user's name. But the raw `groups' command only
   lists the groups of the running user (see the example after this list).
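To illustrate the difference (the output format varies between systems;
the user and group names here are made up):

    $ groups alice        # with an argument: the user name is repeated
    alice : alice wheel video
    $ groups              # no argument: just the running user's groups
    alice wheel video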
|
|
Until now, the check to see if the Patchelf program should be used or not
(for GNU/Linux vs. macOS installations) was mistakenly placed over the step
that defines the `sh' symbolic link, not over the call to Patchelf. This is
corrected with this commit.
|
|
In this version, the many extra notices (just regarding a change from
branch to branch) are no longer printed when `-q' is given. Instead, only a
one-line statement is printed, saying that the metadata were saved or
applied.
|
|
Until we see what happens with the pull request for our suggested features
in Metastore, its version isn't written directly into the executable, so we
won't actually check it, but will write the version directly into the paper.
|
|
After testing the build of Metastore on a server, I noticed that because
its `/etc/passwd' doesn't have the list of users, the `getpwuid' call
within Metastore failed and wouldn't let it finish.
So I looked into the code and was able to implement a solution to this
problem by adding two options to it for default user and group values.
Also, file attributes are not necessary in our (current) use case of
Metastore and caused crashes on our server, so they are also disabled.
|
|
Metastore depends on `bsd/string.h' to work properly (at least on GNU/Linux
systems). The first system I tried building on had that library, so I
didn't notice! With this commit, we also build `libbsd' as part of the
pipeline.
Also, I couldn't find libbsd's version in any of its installed headers, so
like Libjpeg, we can't actually check it and will directly write our
internal version into the paper.
|
|
The pipeline heavily depends on file metadata (in particular the
modification dates); for example, the configuration-Makefiles within the
pipeline are set as prerequisites to the rules of the pipeline.
However, when Git checks out a branch, it doesn't preserve the metadata of
the files unique to that branch (for example program source files or
configuration-Makefiles). As a result, the rules that depend on them will
be re-done.
This is especially troublesome in the scenario of this reproducible paper
project because we commonly need to switch between branches (for example to
import recent work in the pipeline into the projects). After some
searching, I think the Metastore program is the best solution. Metastore is
now built as part of the pipeline and, through two Git hooks, it is called
by Git to store the original metadata of the files into a binary file that
is version controlled (and managed by Metastore).
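Conceptually, the two hooks reduce to the following (the options shown are
only indicative; the exact invocation depends on the Metastore fork and the
pipeline's settings):

    # pre-commit: record the current metadata of all tracked files
    # into the version-controlled metadata file.
    metastore --save
    # post-checkout: re-apply the recorded metadata (in particular the
    # modification dates) onto the newly checked-out files.
    metastore --apply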
|
|
When building in group mode, users can organize themselves to work on
independent analysis steps and thus not cause conflicts. However, until
now, there was no way to avoid conflicts in building the final paper.
To fix this problem, when we are in group mode, the pipeline will create a
separate LaTeX build directory for each user and also a separate PDF file
for each user. This ensures that their compilations don't conflict.
|
|
With the current build system, Bash and AWK don't write RPATH into their
executables. This causes many problems in the pipeline (for example when
using Make's `$(shell)' function, which doesn't have `LD_LIBRARY_PATH').
After consulting the Bash and Make mailing lists, so far the best solution
was to use the Patchelf program to manually write RPATH into these
executables. With this commit, Patchelf is now installed in the pipeline
and used on Bash and AWK to fix this problem.
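The fix is essentially one Patchelf call per affected program, along these
lines (the installation paths are placeholders):

    # Write the pipeline's library directory into the executable's
    # RPATH so it finds its libraries even when LD_LIBRARY_PATH is not
    # set (e.g., inside Make's `$(shell)' function).
    patchelf --set-rpath /path/to/installed/lib /path/to/installed/bin/bash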
|
|
The build of bash has been made a little cleaner to help in readability and
management of the code.
|
|
The TIFF library can optionally depend on webp [1] and zstd [2]. But these
aren't commonly used in scientific datasets, so to avoid a longer build and
the management of extra dependencies (at least for now!), we are disabling
them. The problem is that they would cause a dependency on the host system,
and if they are updated/removed, the relevant pipeline programs would crash.
[1] https://en.wikipedia.org/wiki/WebP
[2] https://en.wikipedia.org/wiki/Zstandard
|
|
In the previous commit, the copyright year and owner were mistakenly
modified. They are corrected now.
|
|
While working on a pipeline based on this one, I noticed many linking
errors from our installed Bash, complaining that it can't link with
libreadline. This was while readline was present in the proper directory
and the Bash within a recipe would work properly.
After some investigation, I found out that this is because Make's `foreach'
function (which was used to define the targets) was apparently calling Bash
without setting `LD_LIBRARY_PATH', causing this error.
To avoid such situations, Bash now uses its internal build of readline and
we no longer ask it to link with the installed readline.
|
|
The targets of the links that provide the extra common `ncurses' package
names were previously just `pkgconfig/*.pc'! But this would only work when
run from within the `installed/lib' directory, not from anywhere else! So
the targets for these packages now use an absolute address.
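In other words, the links are now made roughly like this (the paths and the
particular `.pc' names are placeholders):

    # Use an absolute target so the link resolves correctly no matter
    # which directory it is accessed from.
    ln -sf /path/to/installed/lib/pkgconfig/ncursesw.pc \
           /path/to/installed/lib/pkgconfig/curses.pc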
|
|
In a few cases, I had used a single quote to close a `. This would not
display properly on the GitLab web page, so they are corrected.
|
|
If the `./for-group' script is not used properly, it can lead to the whole
pipeline being re-run. Therefore it is important to do a sanity check
immediately at the start of Make's processing and inform the user if there
is a problem.
With this commit, `./for-group' exports the `reproducible_paper_for_group'
variable which is used by both the initial `./configure' script, and later
in each call to Make. The `./configure' script will use it to write a value
in `reproduce/config/pipeline/LOCAL.mk' and Make will use it to compare
with the value in `reproduce/config/pipeline/LOCAL.mk'.
If there is an inconsistency, Make will not even attempt to build anything
and will just print a message and abort.
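A minimal sketch of such a check at the start of Make's processing (the
variable name recorded in `LOCAL.mk' is a placeholder here; only
`reproducible_paper_for_group' comes from the description above):

    # Compare the group setting recorded by `./configure' in LOCAL.mk
    # with the value exported by `./for-group' for this run; abort
    # before building anything if they disagree.
    include reproduce/config/pipeline/LOCAL.mk
    ifneq ($(for-group-in-local),$(reproducible_paper_for_group))
    $(error Inconsistent use of `./for-group'; see README-hacking.md)
    endif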
|
|
Until now, Gnuastro was only mentioned in the first acknowledgments
section, but not in the paragraph with all the program names. But these two
are not mutually exclusive. All the software should be mentioned in the
last paragraph and those that need special mention can be mentioned before
it.
|
|
Until now, there was no reference to `README-pipeline.md' within the
`README.md' file. Since `README.md' is the first file that someone reads,
and the basic purpose and structure of the pipeline are described in
`README-pipeline.md', it was necessary to bring it up there.
|
|
Wget and cURL depend on many network-related libraries by default, and if
these are present on the host operating system, they will be linked with
them. This causes problems for the pipeline when these libraries are
updated on the host system.
With this commit, I went through the configure-time options of both Wget
and cURL and removed any library that didn't seem related to merely
downloading files (possibly over SSL, because we do build OpenSSL in the
pipeline).
Also, I noticed a new version of cURL has been released, so that is also
updated.
|
|
In a previous implementation, we were using a `target' variable to define
the final target of several links, but with the new `sov' solution we just
use its base name. However, we had forgotten to correct two instances of
`target'. This is corrected now.
Also, the step to clean all previously built outputs of the ncurses library
has been simplified to a platform-independent wildcard.
|
|
On macOS systems, the full version number is not used in the file name
given to libncurses. For example, for version 6.1 it is called
`libncursesw.6.dylib'. So a more generic script that is easier to maintain
and read is now used to make the links for both macOS and GNU/Linux
systems.
In short, instead of checking whether we are on macOS every time, we set
the suffixes once at the start (based on the machine) as variables and use
those to define the links.
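Schematically, the logic is now something like this (a simplified sketch;
the real rule also handles the versioned file names):

    # Decide the shared-library suffix once, based on the running
    # system, and use it for all the link definitions.
    if [ "$(uname)" = Darwin ]; then suffix=dylib; else suffix=so; fi
    ln -sf libncursesw.$suffix libncurses.$suffix
    ln -sf libncursesw.$suffix libcurses.$suffix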
|
|
The call to SED in `dependencies-build-rules.mk' had the file name before
the options. On some versions of SED, this causes problems, so the file
name is now given after the options.
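In other words, the calls now have this form (the expression and file name
below are just dummies):

    # Portable form: expression first, file name last.
    sed -e 's/OLD/NEW/' FILE
    # The previous form (file name first) fails on some SED
    # implementations:
    #     sed FILE -e 's/OLD/NEW/'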
|
|
The new `--colormap' option was added to the call to Gnuastro's ConvertType
program. Since Gnuastro 0.8, ConvertType needs this option for converting a
single-channel dataset to a color-supporting format.
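For example, a call along these lines (the input/output names, HDU and
colormap value are only illustrative):

    # Convert a single-channel FITS image to PDF through a colormap
    # (required since Gnuastro 0.8 for color-supporting outputs).
    astconvertt image.fits -h1 --colormap=gray --output=image.pdf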
|