This work is now merged, I just added the new argument to the `pybuild'
function.
|
|
Until now, management of the software names and versions in the paper was
done manually (a macro had to be defined in `initialize.mk', then used in
`paper.tex', so they had to be manually set in two places). Managing this
was not easy.
To fix this, with this commit, each software building rule's target is a
text file that contains its human-readable name and its version. In the
end, the configure script sorts them by their name and writes them into a
LaTeX input file that we can easily import into the main paper.
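For illustration, the idea is roughly the following (the directory and
file names here are only illustrative, not necessarily the pipeline's
actual ones):
  # Each version file contains a line like "Name X.Y.Z"; sort them by
  # name and end every line with a LaTeX line-break.
  cat .build/versions/* | sort -f \
      | awk '{print $0 "\\\\"}' > .build/tex/dependencies.tex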
|
|
After trying to set up the pipeline from scratch with no internet connection
(but with all the tarballs already downloaded), the `h5py' Python package
complained about not having access to download `pkgconfig'. After solving
this dependency, it also complained about not having `cython'.
With this commit, we add `pkgconfig' (the Python package) and `cython' to
the pipeline in order to be able to install `h5py' properly.
|
|
Until now, these versions were written in each run. This was mainly
inherited from the old days of the pipeline, where we didn't know the
software on the host. But now that we have almost everything under control,
we can just write these LaTeX macros at the end of the configure script,
making `initialize.mk' simpler and also (very slightly!) speeding up and
simplifying the processing.
|
|
Until now, the handling of the configure script's command-line options was
limited (a value could only be given after a space, not with an equal sign
between the option name and value). With this commit, it can now also
accept an optional equal sign between the option name and value, avoiding
much confusion.
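The parsing now follows the common shell pattern below (a rough sketch
with an example option, not the script's exact code):
  while [ $# -gt 0 ]; do
      case $1 in
          -b|--build-dir)     build_dir="$2";      shift 2;;
          -b=*|--build-dir=*) build_dir="${1#*=}"; shift;;
          *) echo "Unknown option: $1" >&2; exit 1;;
      esac
  done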
Also, it is more logically consistent for the link to the build-directory
to be placed in the top directory (as a hidden link, like `.local'), and
not as a visible directory like `reproduce/build' (which we used until
now). Therefore, with this commit, the link to easily access the
build-directory is `.build' in the top source directory.
Finally, because `minmapsize' is too specific to Gnuastro and has now been
given its default value at the start of the configure script, the
description for `minmapsize' has been removed (to not confuse users who
don't use Gnuastro). If anyone is familiar enough with Gnuastro to change
it, they already know it from its book.
|
|
In the previous commit, a copyright notice was added after a systematic
search of the version controlled files. However, we missed `.gitignore'
(because we were discarding those with the `*.git*' pattern to avoid files
in the `.git' directory). This has been fixed by using this command (in the
top project directory) instead:
for f in $(find ./ -type f); do \
if [[ $f != ./.git/* ]]; then \
n=$(grep -i copyright $f | wc -l); \
echo "$n $f"; \
fi; \
done | awk '$1==0'
|
|
After doing a systematic search, a few more files were found that didn't
have a copyright notice, so one was added to them.
I used this Bash command to find the files:
for f in $(find ./ -type f); do \
if [[ $f != *.git* ]]; then \
n=$(grep -i copyright $f | wc -l); \
echo "$n $f"; \
fi; \
done | awk '$1==0'
|
|
To help make it easier to re-use (like the rest of the "large" files).
|
|
In order to be more clear, a copyright statement was added to all the LaTeX
and README files.
|
|
With the options, it is now possible to run the configure script more
easily after the initial run. The `--help' option provides a nice and
complete introduction along with a listing of the input options and the
`-j' option can be used to manually set the number of threads.
|
|
Until now, we were using `flock' (file-lock) for downloading the input
datasets in series. But we couldn't do this when downloading the software
tarballs because `flock' wasn't yet available. Generally, unlike
processing, downloading is much better done in series than in parallel.
To enable serial downloads of the software also, with this commit we are
installing `flock' in the configure script (not in a Makefile). As a
result, besides `flock', we can also benefit from the other good features
of the `reproduce/src/bash/download-multi-try' script (for example,
attempting the download again after some time).
Some GNU mirrors may have problems at the time of download, so with this
commit, we are using the main GNU FTP server for GNU programs.
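For reference, the core of serializing the downloads with `flock' is
something like the following (a rough sketch, not the script's actual
interface; the variable names are illustrative):
  # Wait for any other download to finish before starting this one.
  lockfile=$bdir/lock/download-software
  flock "$lockfile" wget -O "$tarball" "$url"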
|
|
The LaTeX macro for libgit2 was not properly used in `paper.tex'.
On Mac systems, after browsing the directory, a `.DS_Store' file was
created. So to keep things clean on those systems, it is added to the files
to be ignored by Git.
|
|
Until recently, there was no problem with the `makelink' script of
`dependencies-basic.mk' because it was called on separate recipe lines (and
thus separate shells). But recently we added a call to it within a single
shell (for GCC on macOS systems). So a previous call to it would affect
the next call. To fix this, in this commit, we are re-setting PATH to its
original value after each call finishes.
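In essence, the recipe now does something like this (illustrative calls,
not the exact recipe):
  orig_path="$PATH"
  makelink gcc                 # this call may modify PATH internally
  export PATH="$orig_path"     # restore PATH before the next call
  makelink g++
  export PATH="$orig_path"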
|
|
Bzip2 has a special/separate Makefile to build its shared libraries, which
didn't work on macOS. So with this commit, we are only building Bzip2's
shared libraries on non-macOS systems.
Also, I noticed that macOS's `sed' doesn't have the `-i' option (to do the
change in place within the same file). So we are using `-e' to write the
changed Makefile to a temporary copy, then rename it over the original.
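For reference, the portable replacement for `sed -i' is along these lines
(the file name is just an example):
  sed -e 's/OLD/NEW/g' Makefile > Makefile.tmp \
      && mv Makefile.tmp Makefile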
|
|
Until now, we were actually running all the programs to check their
versions during initialization. But now that the number of programs has
increased, this can be slow. With this commit, we simply report the version
as a constant string. Maybe later, we can follow the strategy of the TeX
Live packages and write them all at configure time.
|
|
Since the `install' script also sets permissions manually, the permissions
that we define in `for-group' don't usually affect the installed
files. Therefore the installed files of one user can't be modified/deleted
by another. With this commit, after for-group finishes configuration, it
also adds the write flag for all group members in the whole installation
directory.
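In essence, the added step is just the following (the path is only an
example for the installation directory):
  chmod -R g+w .local/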
|
|
Until now, the `./for-group' script would only add one argument to the Make
call, but in some situations a second argument is needed as well. With this
commit, any possible fourth argument to `./for-group' is passed on to Make.
|
|
We still have a few problems with building GCC on macOS systems. To allow
using the pipeline on this operating system until we find a solution,
GCC is only built on non-Mac systems. On Mac, we'll just make a symbolic
link to the host's executables.
|
|
To ensure that we have all the necessary Python dependencies, I did an
offline build and noticed that several packages were also necessary for the
`./configure' step to finish (`libffi', `asn1crypto', `cffi', `jeepney',
`pycparser' and `secretstorage'). With this commit they are added.
|
|
With the help of Raul, we were able to build many higher-level Python
packages to enable the installation of packages like Matplotlib and
Astroquery. With this commit, that work is being merged into the master
branch.
|
|
Until this commit, we had some of the Python packages installed,
but they did not work properly because of the `PYTHONPATH' variable.
That is, the `python' that was actually being used was the system's
`python' instead of the pipeline's.
With this commit, this issue has been fixed by setting the correct
`PYTHONPATH'. In this commit we also modify the installation of
`bzip2', because `CMake' was complaining about some libraries being
built statically.
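The essence of the fix is to point `PYTHONPATH' to the pipeline's own
site-packages directory, something like this (the version and path are
only illustrative):
  export PYTHONPATH=$installdir/lib/python3.6/site-packages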
|
|
This section was a little outdated and, since it was written, a
clearer/more exact picture of the experience of using Nix for the
reproducible paper template has been formed.
|
|
In an attempt to test the GCC build rule (without Binutils, because it's too
architecture-dependent), all the necessary dependencies were moved to GCC
(from `ld'). `fortran' was also added to the languages supported by
GCC. This rule built GCC 8.2.0 nicely on my GNU/Linux system. But `gcc' is
still not a final target to be built, so the rule is being ignored for now.
|
|
As Matplotlib is a general package for plotting and is widely
used in science, we have added it to the pipeline.
When installing `python-dateutil' (a dependency of Matplotlib), we
found a conflict in the download of the tarball. This is because
the name has a dash (-) in the middle. In addition, the name starts
with 'python', so it clashes with Python itself. It is now
possible to install a package with any such name, just by adding an
`elif' before the URL selection.
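Roughly, the added logic in the tarball-URL selection looks like this
(the URLs are left as placeholders here):
  if   [ x$name = xpython          ]; then url=https://...   # Python itself
  elif [ x$name = xpython-dateutil ]; then url=https://...   # dash in name
  else                                     url=https://.../$name
  fi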
|
|
All the dependencies for building the Astroquery package have now been built.
Until now, the Python dependencies were built in the same Makefile
as the high-level libraries and programs. But because Astroquery
has many dependencies, we have split the installation of Python and
the Python packages into a new Makefile.
The installation of the different packages is done using Python
directly and not pip, because we found some problems when doing it
with pip. Apparently there is some interference between the packages
installed by the system's pip and the pip installed as part
of the pipeline's Python.
|
|
As in all programs, the build process of ncurses depends on the running
shell (Bash) and AWK. At the start of the building of ncurses, we remove
its library. But Bash and AWK depend on ncurses to run (this creates a
circular dependency). Therefore it's necessary to remove the Bash and AWK
executables when re-building ncurses.
This bug was found by Raul Infante Sainz.
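In essence, the ncurses recipe now starts with something like this (the
variable names are illustrative):
  rm -f $ildir/libncurses*        # remove the old library
  rm -f $ibdir/bash $ibdir/awk    # they link to it, so force their rebuild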
|
|
Raul Infante-Sainz added the building of Python (along with the Numpy and
Astropy packages) into the pipeline. That work is now being merged into the
main pipeline branch.
There was only this small problem that needed to be fixed: the Python
tarball's name after unpacking is actually `Python-X.X.X' (with a capital
P), not `python-X.X.X'. This has been corrected with this merge.
|
|
The zip program wasn't placed correctly (in alphabetical order) and its URL
command had the wrong indentation! Both have no effect at all on the
processing and are only cosmetic (to help in readability).
|
|
Astropy was added. One very important point is that we have to
use the PyPI tarball (https://pypi.org/), which is bootstrapped,
and not the GitHub tarball.
|
|
Python needs some packages to be really useful. Numpy is the most
important package for using Python and a lot of other packages
depend on it.
In this commit we add Numpy to the pipeline. The tarball of Numpy
is currently fetched from Fossies.
|
|
Many projects use Python, so it is necessary to include it in the
pipeline.
|
|
In the example running code of the wrapper script, I had just written
`./download-multi-try', but this script is meant to be run from the top of
the project directory. This could cause confusion.
So the example script now starts with `/path/to/download-multi-try'.
|
|
We don't have a `.sh' suffix in the other scripts of `reproduce/src/bash',
so it was also removed from this script.
|
|
Until now, downloading was treated similar to any other operation in the
Makefile: if it crashes, the pipeline would crash. But network errors
aren't like processing errors: attempting to download a second time will
probably not crash (network relays are very complex and not reproducible
and packages get lost all the time)!
This is usually not felt when downloading one or two files, but when
downloading many thousands of files, it will happen every once in a while,
and it's a real waste of time until you come back and just press enter again!
With this commit we have the `reproduce/src/bash/download-multi-try.sh'
script in the pipeline, which will repeat the download several times (with
increasing time intervals) before crashing, and thus fixes the problem.
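The core retry logic is roughly the following (the number of attempts and
the wait times here are only illustrative):
  trynum=1
  while ! wget -O "$outname" "$inurl"; do
      if [ $trynum -ge 5 ]; then
          echo "$inurl: failed after $trynum attempts" >&2
          exit 1
      fi
      sleep $((trynum * 10))      # wait longer after every failure
      trynum=$((trynum + 1))
  done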
|
|
In order to collaborate effectively in the project, even project members
that don't necessarily want (or have the capacity) to do the whole analysis
must be able to contribute to the project. Until now, the users of the
distributed tarball could only modify the text and not the figures (built
with PGFPlots) of the paper.
With this commit, the management of TeX source files in the pipeline was
slightly modified to allow this as cleanly as I could think of now! In
short, the hand-written TeX files are now kept in `tex/src' and for the
pipeline's generated TeX files (in particular the old `tex/pipeline.tex'),
we now have a `tex/pipeline' symbolic-link/directory that points to the
`tex' directory under the build directory.
When packaging the project, `tex/pipeline' will be a full directory with a
copy of all the necessary files. Therefore as far as LaTeX is concerned,
having a build-directory is no longer relevant. Many other small changes
were made to do this job cleanly which will just make this commit message
too long!
Also, the old `tarball' and `zip' targets are now `dist' and `dist-zip' (as
in the standard GNU Build system).
|
|
With this commit, it is now possible to package the project into a tarball
or zip file, ready to be distributed to collaborators who only want to
modify the final paper (and not do the analysis technicalities), or for
uploading to sites like arXiv, or online LaTeX sharing pages.
|
|
A few issues came up while testing the `for-group' script in one of the
projects based on this pipeline that are being fixed with this commit:
1) We are ultimately using the `sg' command to use the specified group,
   not `chgrp'. So in cases where `chgrp' has problems, this would cause
   a misleading error. So for the test of the given group's existence, we
   are now directly calling `sg'.
2) In the call to `make' we were mistakenly giving make the `$2' (which
is `make' on the command-line) argument. Since `./for-group' now takes
the group name as its first argument, this should have been `$3'.
3) To help readability, and also to allow group names with a space,
   `reproducible_paper_group_name' is now defined and exported before the
   final call to `sg' (see the rough sketch after this list).
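Roughly, points 2 and 3 now look like this in the script (not its exact
lines; `$1' is the group name and `$3' the optional extra Make argument):
  export reproducible_paper_group_name="$1"
  sg "$reproducible_paper_group_name" -c ".local/bin/make $3"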
|
|
Until now, the group name to build the project actually went into the Git
source of the project! This doesn't allow exact reproducibility on
different machines (where the group name may be different).
With this commit, the `for-group' script has been modified to accept the
group name as its first argument and pass that onto `configure' and
Make. This is much better now, because not only is the existence of a group
installation checked, but also the name of the group. It also made
things simpler (in particular in `LOCAL.mk.in').
|
|
Until now, the `./configure' script would only print the `.local/bin/make
-j8' command. But when configured for groups, a different command should be
used. It now does a check just before running and suggests the proper
command.
|
|
Until now it was `/bin/sh', but on Debian systems, this can cause problems
because by default they use a much weaker shell (dash) which doesn't
recognize functions.
|
|
I recently found another fork of metastore that allows its build on macOS
systems (https://github.com/mpctx/metastore). So I forked it and added
several other corrections (mostly cosmetic!), so it is now much better
suited to this pipeline.
Raul Infante-Sainz has already tested the building of metastore on his
macOS. In a previous test, we also noticed that libbsd should not be built
on Mac systems, so it is now a conditional prerequisite to metastore.
|
|
While editing files, some editors create temporary `~' files that can cause
problems for metastore's ability to delete their host directory if it's not
on the other branch. With this commit, a `find' call was added to the
post-checkout Git hook to remove such temporary files before metastore is
called.
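The added call does essentially the following (a rough equivalent, run
from the top of the project):
  find . -name '*~' -type f -delete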
Also, some comments were added to both git hooks to make them easier to
understand for a beginner.
|
|
I needed to take these steps on a few occasions in a project I am building
over this pipeline. This will commonly happen when a team starts using this
pipeline, so it was added to make things easier.
|
|
To be more generic and recognizable, the `README-pipeline.md' file was
renamed to `README-hacking.md'. In essence, it is just that: hacking the
existing pipeline for your own project. A similar naming convention is
followed in many GNU software packages.
|
|
Two corrections were made in the Git hooks of Metastore.
1) The shebang at the start of the scripts now uses the absolute path of
   our installed Bash, not the relative `.local/bin/bash'. Note that it is
   possible to use Git within subdirectories, and in that scenario the
   relative `.local' path will fail.
2) The `$$user' section was removed from the command to find the user's
group. With the user as an argument, `groups' may print the user's name
first, then their list of groups. When this happens, the script would
be just repeating the user's name. But the raw `groups' command will
list the groups of the running user.
|
|
Until now, the check to see if the patchelf program should be used or not
(for GNU/Linux vs. Mac installations) was mistakenly placed over the step
where we define the `sh' symbolic link, not over the call to patchelf. This
is corrected with this commit.
|
|
In this version, the many extra notices (just regarding a change from
branch to branch) are no longer printed with `-q'. Instead, only a one-line
statement is printed saying that it has been saved or applied.
|
|
Until we see what happens with the pull request for our suggested features
in metastore, its version isn't written directly into the executable, so we
won't actually check it and will instead write the version directly into
the paper.
|
|
After testing the build of Metastore on a server, I noticed that because
its `/etc/passwd' doesn't have the list of users, the `getpwuid' call
within metastore failed and wouldn't let it finish.
So I looked into the code and was able to implement a solution to this
problem by adding two options to it for default values for the user and
group. Also, file attributes are not necessary in our (current) use case of
metastore and caused crashes on our server, so they are also disabled.
|
|
Metastore depends on `bsd/string.h' to work properly (at least on GNU/Linux
systems). The first system I tried building with had that library, so I
didn't notice! With this commit, we also build `libbsd' as part of the
pipeline.
Also, I couldn't find libbsd's version in any of its installed headers, so
like Libjpeg, we can't actually check and will directly write our internal
version into the paper.
|