|
Until this commit, some of the Python packages were installed but did not
work properly because of the `PYTHONPATH' variable. In effect, the
`python' that was being used was the system's `python' instead of the
pipeline's own.
With this commit, this issue has been fixed by setting the correct
`PYTHONPATH'. In this commit we also modify the installation of `bzip2',
because `CMake' was complaining about some libraries that were built
statically.
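
A minimal sketch of the fix (the variable is real, but the paths and the
Python version below are only illustrative):

    # Illustrative: "$instdir" stands for the pipeline's installation
    # directory; the Python version is just an example.
    instdir=$HOME/pipeline-install
    export PYTHONPATH="$instdir/lib/python3.6/site-packages"
    # The pipeline's `python' now finds Numpy/Astropy in its own
    # installation instead of the host's.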
|
|
Raul Infante-Sainz added the building of Python (along with the Numpy and
Astropy packages) into the pipeline. That work is now being merged into the
main pipeline branch.
There was only one small problem that needed to be fixed: the Python
tarball's name after unpacking is actually `Python-X.X.X' (with a capital
P), not `python-X.X.X'. This has been corrected with this merge.
|
|
Astropy has been added. One important point is that we have to use the
PyPI tarball (https://pypi.org/), which is bootstrapped, and not the
GitHub tarball.
|
|
In order to collaborate effectively, even project members that don't
necessarily want (or have the capacity) to do the whole analysis must be
able to contribute to the project. Until now, the users of the
distributed tarball could only modify the text of the paper, not its
figures (built with PGFPlots).
With this commit, the management of TeX source files in the pipeline was
slightly modified to allow this as cleanly as I could think of now! In
short, the hand-written TeX files are now kept in `tex/src' and for the
pipeline's generated TeX files (in particular the old `tex/pipeline.tex'),
we now have a `tex/pipeline' symbolic-link/directory that points to the
`tex' directory under the build directory.
When packaging the project, `tex/pipeline' will be a full directory with a
copy of all the necessary files. Therefore as far as LaTeX is concerned,
having a build directory is no longer relevant. Many other small changes
were made to do this job cleanly; describing them all would just make
this commit message too long!
Also, the old `tarball' and `zip' targets are now `dist' and `dist-zip' (as
in the standard GNU Build system).
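
As a sketch of the new TeX layout described above (the build-directory
path below is illustrative):

    # Hand-written TeX sources live in the version-controlled tree:
    #   tex/src/
    # Generated TeX files live in the build directory, reachable
    # through a symbolic link:
    bdir=$HOME/pipeline-build          # illustrative build directory
    rm -f tex/pipeline
    ln -s "$bdir/tex" tex/pipeline
    # In a distribution tarball, `tex/pipeline' is instead a real
    # directory holding copies of the generated files, so LaTeX no
    # longer cares whether a build directory exists.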
|
|
With this commit, it is now possible to package the project into a tarball
or zip file, ready to be distributed to collaborators who only want to
modify the final paper (and not deal with the analysis technicalities),
or for uploading to sites like arXiv or online LaTeX-sharing pages.
|
|
Until now, the group name to build the project actually went into the Git
source of the project! This doesn't allow exact reproducibility on
different machines (where the group name may be different).
With this commit, the `for-group' script has been modified to accept the
group name as its first argument and pass it on to `configure' and
Make. This is much better now, because not only is the existence of a
group installation checked, but also the name of the group. It also made
things simpler (in particular in `LOCAL.mk.in').
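
A minimal sketch of the new calling convention (the body below is
illustrative, not the actual script; `--group' and `GROUP' are just
placeholders for however the name is passed on):

    #!/bin/sh
    # Usage:  ./for-group <group-name> configure
    #         ./for-group <group-name> make
    group="$1"
    step="$2"
    # Check that the named group actually exists on this system.
    if ! getent group "$group" > /dev/null 2>&1; then
        echo "for-group: group '$group' does not exist" >&2
        exit 1
    fi
    # Run the requested step with that group as the effective group,
    # passing the name along so it never has to live in the Git source.
    case "$step" in
        configure) sg "$group" -c "./configure --group=$group" ;;
        make)      sg "$group" -c "make GROUP=$group" ;;
        *)         echo "for-group: unknown step '$step'" >&2; exit 1 ;;
    esac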
|
|
In this version, the many extra notices (just regarding a change from
branch to branch) are no longer printed when `-q' is given. Instead, only
a one-line statement is printed saying that the metadata were saved or
applied.
|
|
Until we see what happens with the pull request for our suggested
features in Metastore, its version isn't written into the executable, so
we won't actually check it and will instead write the version directly
into the paper.
|
|
Metastore depends on `bsd/string.h' to work properly (at least on
GNU/Linux systems). The first system I tried building on had that
library, so I didn't notice! With this commit, we also build `libbsd' as
part of the pipeline.
Also, I couldn't find libbsd's version in any of its installed headers,
so, as with Libjpeg, we can't actually check it and will directly write
our internal version into the paper.
|
|
The pipeline heavily depends on file metadata (in particular the
modification dates); for example, the configuration Makefiles within the
pipeline are set as prerequisites to the rules of the pipeline.
However, when Git checks out a branch, it doesn't preserve the metadata
of the files unique to that branch (for example program source files or
configuration Makefiles). As a result, the rules that depend on them will
be re-done.
This is especially troublesome in the scenario of this reproducible paper
project, because we commonly need to switch between branches (for example
to import recent work in the pipeline into the projects). After some
searching, I think the Metastore program is the best solution. Metastore
is now built as part of the pipeline and, through two Git hooks, it is
called by Git to store the original metadata of the files in a binary
file that is version-controlled (and managed by Metastore).
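
Conceptually, the two hooks boil down to something like this (a sketch;
`-s', `-a' and `-m' are Metastore's save/apply/mtime options, but the
exact hook contents used here may differ):

    #!/bin/sh
    # .git/hooks/pre-commit (illustrative): save the current metadata
    # (mode, ownership, mtime) into the version-controlled .metadata.
    metastore -s

    #!/bin/sh
    # .git/hooks/post-checkout (illustrative): re-apply the saved
    # metadata, including mtimes, so files unique to the new branch
    # don't make Make consider everything out of date.
    metastore -a -m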
|
|
When building in group mode, users can organize themselves to work on
independent analysis steps and thus not cause conflicts. However, until
now, there was no way to avoid conflicts in building the final paper.
To fix this problem, when in group mode, the pipeline will create a
separate LaTeX build directory and a separate PDF file for each
user. This ensures that their compilations don't conflict.
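
A sketch of the idea in Make (`GROUP-NAME' and the paths are
illustrative):

    # When group building is enabled, give each user their own TeX
    # build directory and final PDF so compilations never collide.
    ifdef GROUP-NAME
    texbdir = $(BDIR)/tex/build-$(shell whoami)
    final   = paper-$(shell whoami).pdf
    else
    texbdir = $(BDIR)/tex/build
    final   = paper.pdf
    endif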
|
|
Readline is a prerequisite of Bash and AWK, while NCURSES is a prerequisite
of Readline. With the recent update of GNU Bash (and thus GNU Readline) on
my host operating system, the pipeline crashed and I noticed this hole in
the pipeline. In particular, AWK (which was linked with Readline 7.0)
would complain about not finding it and abort.
|
|
ccache is a super annoying program in the context of the reproduction
pipeline. On systems that use it, the `gcc' and `g++' that are found in
PATH are actually calls to `ccache' (so it can manage their call)!
Two steps have been taken to ignore and disable ccache (in case it isn't
ignored properly!): 1) when making symbolic links to compilers, any
directory containing `ccache' is first removed from the PATH, and only
then do we look for the low-level programs that we won't be building;
2) the `CCACHE_DISABLE' environment variable is set to 1 where necessary
(along with the other environment variables).
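
In shell terms, the two steps are roughly:

    # 1) Remove any PATH component that contains `ccache' before
    #    searching for the host compilers we will symlink to.
    PATH=$(printf '%s\n' "$PATH" | tr ':' '\n' \
                 | grep -v ccache | paste -sd: -)
    export PATH
    # 2) And, in case it is still reached somehow, tell ccache itself
    #    to do nothing.
    export CCACHE_DISABLE=1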
|
|
Since the current implementation of this pipeline officially started in
2018, all the files only had 2018 in their copyright years. This has now
been corrected to 2018-2019.
|
|
Both Gzip and Gnuastro were being bootstrapped manually from their Git
repositories until now. But fortunately a new release of each came out
last week, so to keep things standard we are now using their official
tarballs.
I also noticed that we weren't checking the version of Gzip or mentioning
it in the acknowledgement section. This was also corrected.
|
|
Bzip2's version is found differently from the other programs (because it
writes its version information to standard error, not standard
output!). So a custom function is written for it, which includes creating
a temporary file. But we had forgotten to delete that file after the
version was found.
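
Roughly, the check and the (previously missing) cleanup look like this;
the parsing is only illustrative:

    # Bzip2 reports its version on standard error, so capture that
    # stream in a temporary file, pull the version out, then remove
    # the file -- the step that had been forgotten.
    tmpfile=$(mktemp)
    bzip2 --help 2> "$tmpfile"
    version=$(sed -n 's/.*[Vv]ersion \([0-9.]*\).*/\1/p' "$tmpfile" \
                  | head -n 1)
    rm -f "$tmpfile"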
|
|
The TeX Live installer needs Wget to operate smoothly, especially on recent
Mac OS systems that don't have Wget pre-installed. Also, it would be good
for the pipeline to have its own downloader. So with this commit, the
pipeline also installs Wget, along with OpenSSL, which is one of its
dependencies.
Many other small changes/fixes were made in this process.
|
|
Thanks to the check by Cristina Martínez, some corrections were made to
how we attempt to download and install TeX Live. Further checks and
corrections will come in due time.
|
|
Until now we weren't explicitly writing the full path of the dynamic
libraries necessary for linking a program. But now, with
`-Wl,-rpath=$(ildir)', we ensure that the linker records the location of
the dynamic libraries used at link time, so the same ones are found at
run time. `pkg-config' is now also built when preparing the
basics. Several other minor corrections were made thanks to the great
help of Raúl Infante Sainz.
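
In practice this just means adding the flag to the link options, roughly
as below (`$ildir' here plays the role of the Make variable `$(ildir)';
the paths are illustrative):

    # Tell the linker both where to find the libraries now (-L) and
    # where to look for them at run time (-Wl,-rpath=...), so the
    # pipeline's own libraries keep being used after installation.
    ildir=$HOME/pipeline-install/lib           # illustrative
    export LDFLAGS="-L$ildir -Wl,-rpath=$ildir"
    ./configure --prefix="$HOME/pipeline-install" && make && make install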
|
|
We had forgotten to add the rule to build the lock-file directory for
downloading data. This has been corrected.
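
The forgotten rule is essentially just a directory target, something like
this (the variable name is illustrative):

    # Directory holding the per-file locks that serialize downloads;
    # download rules can then list it as an order-only prerequisite
    # (i.e. `... : | $(lockdir)').
    $(lockdir):
            mkdir -p $@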
|
|
The high-level dependencies are now built without having access to the
system's PATH. To do this, all the necessary software that we aren't
building ourselves is now brought into the installed `bin/' directory
through symbolic links to the corresponding software on the host. To do
this, it was also necessary to increase the number of basic/low-level
packages that we are building, and add several more (Diffutils and
Findutils).
With this process in place, we now have a list of the exact software
packages that we are not building ourselves, enabling easy building of
all such dependencies in the future.
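
A sketch of the mechanism (the program list and directories are
illustrative):

    # For every low-level host program we are NOT building ourselves,
    # put a symbolic link to it inside the pipeline's own bin/; the
    # high-level builds then run with PATH limited to that directory.
    instbindir=$HOME/pipeline-install/bin      # illustrative
    for prog in ls cp sed gcc ld ar; do
        path=$(command -v "$prog") && ln -sf "$path" "$instbindir/$prog"
    done
    export PATH="$instbindir"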
|
|
In most analysis situations (except for simulations), an input dataset is
necessary, but that part of the pipeline was just left out and a general
`SURVEY' variable was set and never used. So with this commit, we actually
use a sample FITS file from the FITS standard webpage, show it (as well as
its histogram) and do some basic calculations on it.
This preparation of the input datasets is done in a generic way to enable
easy addition of more datasets if necessary.
|
|
When the C compiler is not GNU GCC, linking with GNU Binutils is going to
cause problems. So until the time that we can include GCC in this
pipeline, it's best to avoid Binutils too.
Also, for building CMake, we were relying on an installed CMake, but now,
we are using its own `./bootstrap' script, so it can be built even if the
host system doesn't have CMake.
Also, for TeX Live, we are now setting a custom file as the main target
to avoid complications with symbolic links as targets in Make.
Finally, when the user says they don't want to re-write an existing
configuration file, no extra notices will be printed and the configure
script will immediately start building programs.
|
|
After going through the checklist for starting a new project based on the
pipeline, I noticed some parts that could be modified to be more
clear. They are now applied.
|
|
Since the final product of the pipeline is a LaTeX-created PDF file, it was
necessary to also have LaTeX within the pipeline. With this commit, TeX
Live is also built as part of the configuration and all the necessary
packages to build the PDF are also installed and mentioned in the paper
along with their versions.
|
|
TeX Live is now also downloaded and built by the reproduction
pipeline. Currently only the basic (TeX and LaTeX) source is built, with
no extra packages, so the PDF build will fail. We'll add them in the next
commit.
|
|
To have better control over the build, GNU Binutils, Bzip2, GNU Gzip, and
XZ Utils have also been added to the pipeline. Some other minor cleanups
and fixes were also implemented throughout the process.
|
|
To ensure the easy unpacking and building of the programs, Lzip and Tar
are now also built during the initial setup phase.
Some minor corrections were also applied to make things cleaner and
smoother.
|
|
Until now, we used semicolons in Make's call function definitions to
build the programs with the GNU Build System or CMake. Therefore, if any
step of the process failed, the rest would be ignorant of it and
pass. Now, we use `&&' to separate the different processing steps. In
this way, we can be sure that if any of them fails (during configuration
or building, for example), the pipeline will also stop and not continue
to the next command (in the same recipe).
Since the two Make call functions were identical in
`dependencies-basic.mk' and `dependencies.mk', they are now kept in one
file that is imported by both.
This bug was found by Raul Infante Sainz.
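
A simplified sketch of the shared call function (the real one takes more
options; `$(ddir)' and `$(idir)' stand for the unpacking and installation
directories):

    # Configure, build and install a GNU-Build-System package; `&&'
    # ensures a failed step aborts the recipe instead of being
    # silently skipped over.
    gbuild = cd $(ddir)/$(1) && \
             ./configure --prefix=$(idir) && \
             make && \
             make install

    # Used inside a rule's recipe, for example:
    #   $(call gbuild,package-1.0)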
|
|
All the libraries that define their version string as a macro in their
headers are now also checked in `reproduce/src/make/initialize.mk'.
Also, the CFITSIO tarball now follows the same versioning style as the
rest of the tarballs: a script has been added to convert the version
string into the form that is used in the tarball's name.
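
For example, if the upstream tarball embeds the version without the dot
(as CFITSIO tarballs of that period did, for example `3.45' appearing as
`3450' in the name), the conversion can be as simple as the following
(padding rules may differ):

    # Illustrative: turn "3.45" into the "3450" used in the tarball
    # name (the trailing zero is just that era's naming scheme).
    v=3.45
    tarver="$(echo "$v" | sed -e 's/\.//')0"
    echo "$tarver"      # --> 3450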
|
|
The versions of all programs are now checked in
`reproduce/make/src/initialize.mk' and the pipeline won't complete if any
of the program versions differs from those listed in
`reproduce/config/pipeline/dependency-versions.mk'.
Since the pipeline is systematically checking all program versions, we no
longer need Gnuastro's `--onlyversion' option. So it (and all references
to it) has been removed.
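
Conceptually, each check compares the reported version with the pinned
one, along these lines (a shell sketch for one program; the expected
value would come from `dependency-versions.mk'):

    # Illustrative check for one program (Gzip); the pinned value
    # below is just an example.
    expected=1.9
    found=$(gzip --version | awk 'NR==1{print $NF}')
    if [ "x$found" != "x$expected" ]; then
        echo "Gzip version is $found but $expected was expected." >&2
        exit 1
    fi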
|
|
The system's environment is now removed from the internal processing of the
pipeline, except for LaTeX.
|
|
During the configuration step, several new programs that are necessary
for a more completely controlled environment are now also downloaded and
built statically.
|
|
To enable easy/proper reproduction of results, all the high-level
dependencies are now built within the pipeline and installed in a fixed
directory that is added to the PATH of the Makefile. This includes GNU
Bash and GNU Make, which are then used to run the pipeline.
The `./configure' script will first build Bash and Make within itself,
then it will build the rest of the dependencies with them.
All the dependencies are also built to be static. So after they are
built, changes to the system's low-level libraries (like the C library)
won't affect the built programs.
Currently the C library and C compiler aren't built within the pipeline,
but we'll hopefully add them to the build process too.
With this change, we now have full control of the shell and Make that
will be used in the pipeline, so we can safely remove some of the
generalities we had before.
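
Schematically, once Bash and Make are in place, the rest of the build is
handed to them with a restricted environment (paths and file names below
are illustrative):

    # Only the pipeline's own programs are visible from here on, and
    # static linking is requested for everything that supports it.
    instdir=$HOME/pipeline-install             # illustrative
    export PATH="$instdir/bin"
    export SHELL="$instdir/bin/bash"
    export LDFLAGS="-static"
    "$instdir/bin/make" -f reproduce/src/make/dependencies.mk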
|
|
While we had set the default Makefile SHELL to be bash, we weren't
actually checking if `bash' is available on the system. With this
commit, it is also checked at configure time.
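
The check itself is tiny, something like:

    # Abort configuration early if Bash isn't available, since the
    # Makefiles set SHELL to bash.
    if ! command -v bash > /dev/null 2>&1; then
        echo "GNU Bash is required but wasn't found in PATH." >&2
        exit 1
    fi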
|
|
Making plots and including references are integral parts of a scientific
paper. Therefore to demonstrate how cleanly they can be used within the
pipeline, they are now used to produce the final PDF.
To demonstrate PGFPlots, a random dataset is made (using AWK's random
function) and plotted. The minimum and maximum values of the dataset are
also included in the text to further show how such calculations can go
into the macros and text.
For the references, the NoiseChisel paper was added as a reference to
cite when using this pipeline, along with the MUSE UDF paper I, which
uses this pipeline for two of its sections. Following this discussion,
citation is also discussed in `README.md' and the NoiseChisel paper is
also added as a published work with a reproduction pipeline.
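
For example, the random table and the two TeX macros can be produced with
something like the following (file and macro names are illustrative; the
pipeline's real macro format may differ):

    # Make a small table of (index, random value) pairs with AWK.
    awk 'BEGIN{ srand(); for(i=1; i<=100; i++) print i, rand() }' \
        > table.txt
    # Put the minimum and maximum of the second column into TeX
    # macros that the paper's text can use.
    awk 'NR==1{min=$2; max=$2}
         {if($2<min) min=$2; if($2>max) max=$2}
         END{printf "\\newcommand{\\datamin}{%.3f}\n", min;
             printf "\\newcommand{\\datamax}{%.3f}\n", max}' \
        table.txt > macros.tex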
|
|
Until now, the copyright statement was left empty for the users of the
pipeline to fill. However, the files have already been created and have an
author (or contributing authors) before the user starts using the
pipeline. So the original authors of the files are added along with the
year. The user can add their own name to the existing files under
"Contributing author" when they start, and they will be the "Original
author" of any new files they create.
Several changes were also made to the TeX management:
- LaTeX is now run within a `reproduce/build/tex/build' directory, not
  in the top reproduction pipeline directory. This helps keep all the
  auxiliary TeX files and directories in that directory and keeps the
  top reproduction pipeline directory clean. After the final PDF is
  built, a copy is put in the top reproduction pipeline directory for
  easy viewing.
- The PGFPlots preamble was also made more useful, allowing the name of
  the `.tex' file to also be the name of the final plot that is
  produced. This is a GREAT feature, because without it, the TikZ
  externalization would be based on the order of the plots within the
  paper. But now, order is irrelevant and we can even delete the TikZ
  files within the processing workhorse-Makefiles so the plots are
  definitely rebuilt on the next run.
- The paper is now in a two-column format to be more similar to published
papers.
A tip on debugging Make was added to `README.md'.
|
|
As described in the comments above `MINMAPSIZE' in `LOCAL.mk.in', the
amount of memory to map to HDD/SSD or keep in RAM is a local issue and
not relevant to the pipeline's results. So it is now defined in a
`gnuastro-local.conf' file.
To keep the Makefiles clean, this file is created by the `./configure'
script. To do this cleanly, the `./configure' script was also almost
fully rewritten and now has better functionality.
|
|
Until now, Gnuastro's `mmap' files were included in the `rm' commands of
the `clean*' rules twice. But by setting `clean-mmap' as a dependency of
`clean', it is now only necessary to have them in the Makefile once. This
also makes the code much cleaner.
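
The structure is now simply (a sketch; the real file patterns differ):

    # `clean' now depends on `clean-mmap', so the mmap files only
    # need to be listed in one place (patterns here are illustrative).
    .PHONY: clean clean-mmap
    clean-mmap:
            rm -f $(BDIR)/mmap-*
    clean: clean-mmap
            rm -rf $(BDIR)/tex $(BDIR)/*.pdf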
|
|
Managing this symbolic link as a prerequisite that may or may not be
defined just made the code too dirty. It is almost always needed, so it
is now a
super-high-level prerequisite (first dependency of the `all' target, even
before the final PDF). In this way, we can be sure it is always built and
that nothing else depends on it.
If the user doesn't want it, they can simply remove it from the top
`Makefile'.
|
|
The choice of whether or not to make a PDF is now also a local system
issue, not a general pipeline issue. So it has been put in the new
`LOCAL.mk.in' file which replaces the old `DIRECTORIES.mk.in'. All local
settings (things that when changed should not be version-controlled) should
be defined in this file.
A sanity check was added to find if `./configure' has been run before
`make' or not (using the `LOCAL.mk' file which is an output of the
configuration step). If `LOCAL.mk' doesn't exist, an error will be printed
informing the user that `./configure' needs to be run first.
The configure script also provides clearer and hopefully better
information on its purpose and what must be done.
Since `make clean' is executed even when `./configure' hasn't been run,
it will only delete the build directory and its contents when the local
configuration has been done.
A `distclean' target was also added, which will first "clean" the
pipeline and then delete the `LOCAL.mk.in' file.
To allow `make' to be run even if `BDIR' isn't defined (i.e.,
`./configure' hasn't been run yet), a fake `BDIR' is defined in such
cases.
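
A sketch of the top-Makefile logic (the location of `LOCAL.mk' and the
placeholder value are illustrative):

    # If ./configure has been run, LOCAL.mk defines BDIR and friends;
    # otherwise complain and define a harmless placeholder BDIR so
    # targets like `clean' and `distclean' can still be parsed.
    ifneq ($(wildcard LOCAL.mk),)
    include LOCAL.mk
    else
    $(info Please run './configure' before running Make.)
    BDIR := .build-not-configured
    endif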
|
|
Recently, the file keeping the TeX macros for the versions was renamed
from `versions.tex' to `initialization.tex' (since it also contains the
build directory). However, we forgot to update the name in the `.PHONY'
targets, so the file was not being rebuilt every time. This is now
corrected.
|
|
Let's start working on this pipeline independently with this first
commit. It is based on my previous experiences, but I had never made a
skeleton of a pipeline before; it was always within a working analysis.
But now that the pipeline has a separate repository for itself, we will
be able to work on it and use it as a base for future work and modify it to
make it even better. Hopefully in time (and with the help of others), it
will grow and become much more robust and useful.
|