All occurrences of "pipeline" have been changed to "project" or "template"
within the text (comments and READMEs) of the template. The
main template branch is now also named `template'.
This was done because `pipeline' is too generic and didn't distinguish
between the base template and a customized project.
|
|
Until now, the files that people were meant to change didn't have a
proper copyright notice (for example `Copyright (C) YOUR NAME.'). This was
wrong because the license does not convey copyright ownership. So the name
of the file's original author must always be included, and people who
modify the file (adding their own copyright-able changes) must be listed
as well.
With this commit, each file's original author (and email) are added to the
copyright notice, and when more than one person has modified a file, each
name gets its own copyright notice.
Based on this, the description for adding a copyright notice in
`README-hacking.md' has also been modified.
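For example, after this change a file that two people have worked on
would carry one notice per author in its header comments (the names,
years and addresses below are purely illustrative):

    # Copyright (C) 2018-2019 Original Author <original@example.org>
    # Copyright (C) 2019 Contributing Author <contributor@example.org>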
|
|
Until now, we were using `flock' (file-lock) for downloading the input
datasets in series. But we couldn't do this when downloading the software
tarballs because `flock' wasn't yet available. Generally, unlike
processing, downloading is much better done in series than in parallel.
To also enable serial downloads of the software, with this commit we are
installing `flock' in the configure script (not in a Makefile). As a
result, besides `flock', we can also benefit from the other good features
of the `reproduce/src/bash/download-multi-try' script (for example
attempting the download again after some time).
Some GNU mirrors may have problems at the time of download, so with this
commit, we are using the main GNU FTP server for GNU programs.
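In practice, serializing the downloads comes down to wrapping each
download command in a `flock' call on a shared lock file; a minimal
sketch of the idea (the variable names are illustrative, not the
script's actual interface):

    # All download recipes share one lock file, so even when Make runs
    # many jobs in parallel, only one download is active at a time.
    lockfile="$lockdir/download.lock"
    flock "$lockfile" wget -O "$output" "$url"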
|
|
We don't have a `.sh' suffix in the other scripts of `reproduce/src/bash',
so it was also removed from this script.
|
|
Until now, downloading was treated like any other operation in the
Makefile: if it failed, the pipeline would crash. But network errors
aren't like processing errors: a second download attempt will probably
succeed (network relays are very complex and not reproducible, and
packets get lost all the time)!
This is usually not felt when downloading one or two files, but when
downloading many thousands of files it will happen every once in a while,
and it is a real waste of time to have to notice the crash and manually
restart the download.
With this commit we have the `reproduce/src/bash/download-multi-try.sh'
script in the pipeline, which will repeat the download several times (with
increasing time intervals) before crashing, and thus fixes the problem.
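The heart of the script is a simple retry loop with a growing delay; a
rough sketch of the logic (the variables and the number of attempts are
illustrative, not the script's exact code):

    # Try the download up to 3 times, waiting longer after each failure.
    tries=0
    while [ $tries -lt 3 ]; do
        if wget -O "$output" "$url"; then exit 0; fi
        rm -f "$output"           # don't leave a partial file behind
        tries=$((tries + 1))
        sleep $((tries * 30))     # increasing interval between attempts
    done
    echo "$url: download failed after $tries attempts" >&2
    exit 1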
|
|
Since the current implementation of this pipeline officially started in
2018, all the files only had 2018 in their copyright years. This has now
been corrected to 2018-2019.
|
|
Until now, we weren't passing the `rpath' linking options when building
the basic dependencies. They are now added. Also, when the download of an
input file fails for any reason, an empty file is no longer left in its
place.
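For reference, the `rpath' options embed the library search path into
the built programs so they can find the pipeline's own libraries at run
time; a hedged sketch of the kind of flags involved (`$instdir' is an
illustrative name for the pipeline's installation prefix):

    # -L tells the linker where the libraries are at build time;
    # -rpath records where the programs should look at run time.
    export LDFLAGS="-L$instdir/lib -Wl,-rpath,$instdir/lib"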
|
|
To enable easy downloading of HTTPS links with Wget (this pipeline's default
downloader), we need a set of trusted CA certificates. Until the time that
we can generate one ourselves, one generic set of trusted CA certificates
is now downloaded like a tarball and placed in the OpenSSL configuration
directory.
With these CA certificates, within the pipeline we can now safely use the
pipeline's own installed Wget.
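As an illustration, once the bundle is in place Wget can verify HTTPS
servers through its `--ca-certificate' option (the paths and URL below
are examples; the real locations depend on the build configuration):

    # Verify the server against the downloaded CA bundle.
    wget --ca-certificate="$instdir/etc/ssl/cert.pem" \
         -O sample.fits "https://example.org/sample.fits"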
|
|
While testing on another computer, I noticed that to operate properly, the
file given to `flock' must be created before it is called. This is a
low-level difference (how the system treats files), so it wasn't apparent
on my system. To fix it, we have added a `touch' command before it.
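The fix amounts to one extra line before taking the lock (the file and
variable names are illustrative):

    touch "$lockfile"         # make sure the lock file already exists
    flock "$lockfile" wget -O "$output" "$url"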
|
|
There was an extra `$(lockdir)' target in `download.mk'. This has been
corrected.
|
|
We had forgotten to add the rule to build the lock file directory for
downloading data. This has been corrected.
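The missing piece is the standard directory-creation rule; a minimal
sketch, using the `$(lockdir)' name from the earlier commit (recall that
in a real Makefile the recipe line must start with a TAB):

    # Create the directory that will hold the download lock files.
    $(lockdir):
            mkdir -p $@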
|
|
Until now, we were keeping the input file within the reproduction
pipeline's directories under the same name as on the database/server. Now,
we are using a short/summarized filename convention for the input dataset.
|
|
In most analysis situations (except for simulations), an input dataset is
necessary, but that part of the pipeline was just left out and a general
`SURVEY' variable was set and never used. So with this commit, we actually
use a sample FITS file from the FITS standard webpage, show it (as well as
its histogram) and do some basic calculations on it.
This preparation of the input datasets is done in a generic way to enable
easy addition of more datasets if necessary.
|
|
Until now, the copyright statement was left empty for the users of the
pipeline to fill. However, the files have already been created and have an
author (or contributing authors) before the user starts using the
pipeline. So the original authors of the files are added along with the
year. The user can add their own name to the existing files as a
"Contributing author" when they start, and they will be the "Original
author" of the new files they create.
Several changes were also made to the TeX management:
- LaTeX is run within a `reproduce/build/tex/build' directory now. Not in
the top reproduction pipeline directory. This helps keep all the
auxiliary TeX files and directories in that directory and keep the top
reproduction pipeline directory clean. After the final PDF is built, a
copy is put in the top reproduction pipeline directory for easy viewing.
- The PGFPlots preamble was also made more useful, allowing the name of
  the `.tex' file to also be the name of the final plot that is
  produced. This is a GREAT feature, because without it, the TiKZ
  externalization would be based on the order of the plots within the
  paper. But now, order is irrelevant and we can even delete the TiKZ
  files within the processing workhorse-Makefiles, so the plots are
  definitely rebuilt on the next run (see the sketch after this list).
- The paper is now in a two-column format to be more similar to published
papers.
A tip on debugging Make was added to `README.md'.
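For context, the standard mechanism behind this is the TikZ `external'
library; a hedged sketch of the preamble idea (the prefix and file name
here are illustrative, and the template's actual preamble may differ):

    % Externalize every TikZ picture into its own stand-alone PDF.
    \usetikzlibrary{external}
    \tikzexternalize[prefix=tikz/]

    % Before each plot, name the externalized figure explicitly (for
    % example after its `.tex' file), so the result doesn't depend on
    % the order of the plots in the paper.
    \tikzsetnextfilename{plot-histogram}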
|
|
The mandatory and optional (for example downloader) dependencies are now
checked at configure time, so users can know what they may be missing
before the processing starts. Since the pipeline is recommended to be run
in parallel, it can be hard to find what you are missing after it has
started. As part of these checks, the program to use for downloading is
now also set at configure time; during Make's processing it is only used
as a pre-defined variable (in `LOCAL.mk').
A small title was also added to discuss the pipeline architecture; it
will be filled in the next commit.
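Such a configure-time check can be as simple as probing the candidate
downloaders in order; an illustrative sketch (the variable name and the
file location are examples, not the configure script's actual code):

    # Pick the first available downloader and record it for Make.
    if   type wget > /dev/null 2>&1; then downloader="wget -O"
    elif type curl > /dev/null 2>&1; then downloader="curl -o"
    else
        echo "No downloader (Wget or cURL) found!" >&2; exit 1
    fi
    echo "DOWNLOADER = $downloader" >> LOCAL.mk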
|
|
Let's start working on this pipeline independently with this first
commit. It is based on my previous experiences, but I had never made a
skeleton of a pipeline before; it was always within a working analysis.
Now that the pipeline has a separate repository of its own, we will be
able to work on it, use it as a base for future work, and modify it to
make it even better. Hopefully in time (and with the help of others), it
will grow and become much more robust and useful.
|