To collaborate effectively on the project, even project members who don't
necessarily want (or have the capacity) to do the whole analysis must be
able to contribute to it. Until now, the users of the distributed tarball
could only modify the text of the paper, not its figures (which are built
with PGFPlots).
With this commit, the management of TeX source files in the pipeline was
slightly modified to allow this as cleanly as I could think of now! In
short, the hand-written TeX files are now kept in `tex/src' and for the
pipeline's generated TeX files (in particular the old `tex/pipeline.tex'),
we now have a `tex/pipeline' symbolic-link/directory that points to the
`tex' directory under the build directory.
When packaging the project, `tex/pipeline' will be a full directory with a
copy of all the necessary files. Therefore, as far as LaTeX is concerned,
having a build directory is no longer relevant. Many other small changes
were made to do this job cleanly; listing them all would just make this
commit message too long!
Also, the old `tarball' and `zip' targets are now `dist' and `dist-zip' (as
in the standard GNU Build system).
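A rough shell sketch of this arrangement (`BDIR' stands for the build
directory; the variable name and exact commands are illustrative, not the
pipeline's actual rules):

    # During normal builds, the generated TeX files stay in the build
    # directory and are reached through a symbolic link.
    ln -sf "$BDIR/tex" tex/pipeline

    # When packaging (`make dist'), the link is replaced by a real
    # directory holding copies, so LaTeX no longer needs the build
    # directory.
    rm tex/pipeline && mkdir tex/pipeline
    cp "$BDIR"/tex/*.tex tex/pipeline/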
|
|
With this commit, it is now possible to package the project into a tarball
or zip file, ready to be distributed to collaborators who only want to
modify the final paper (and not do the analysis technicalities), or for
uploading to sites like arXiv, or online LaTeX sharing pages.
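Usage is then along these lines (target names as of this commit; they
were later renamed to `dist' and `dist-zip' as described in the commit
above):

    make tarball    # package the LaTeX-only project as a tarball
    make zip        # the same content, but as a zip file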
|
|
With this version, the many extra notices (merely reporting a change from
one branch to another) are no longer printed when `-q' is given. Instead,
only a one-line statement is printed saying that the meta-data was saved
or applied.
|
|
Metastore depends on `bsd/string.h' to work properly (at least on
GNU/Linux systems). The first system I tried building with had that
library, so I didn't notice! With this commit, we also build `libbsd' as
part of the pipeline.
Also, I couldn't find libbsd's version in any of its installed headers,
so, as with Libjpeg, we can't actually check it and will directly write
our internal version into the paper.
|
|
The pipeline heavily depends on file meta-data (in particular the
modification dates); for example, the configuration-Makefiles within the
pipeline are set as prerequisites to the rules of the pipeline.
However, when Git checks out a branch, it doesn't preserve the meta-data of
the files unique to that branch (for example program source files or
configuration-Makefiles). As a result, the rules that depend on them will
be re-done.
This is especially troublesome in the scenario of this reproducible
paper project because we commonly need to switch between branches (for
example, to import recent work on the pipeline into the projects based on
it). After some
searching, I think the Metastore program is the best solution. Metastore is
now built as part of the pipeline and through two Git hooks, it is called
by Git to store the original meta-data of files into a binary file that is
version controlled (and managed by Metastore).
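A minimal sketch of what such hooks can run (the file name and exact
options here are illustrative assumptions, not necessarily the pipeline's
actual hooks):

    # In `.git/hooks/pre-commit': save the current meta-data (including
    # modification times) into the version-controlled Metastore file.
    metastore --save --mtime --file .file-metadata

    # In `.git/hooks/post-checkout': re-apply the stored meta-data, so
    # Make doesn't see spurious modification dates after a checkout.
    metastore --apply --mtime --file .file-metadata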
|
|
Until now, Gnuastro was only mentioned in the first acknowledgments
section, but not in the paragraph with all the program names. But these two
are not mutually exclusive. All the software should be mentioned in the
last paragraph and those that need special mention can be mentioned before
it.
|
|
Readline is a prerequisite of Bash and AWK, while NCURSES is a prerequisite
of Readline. With the recent update of GNU Bash (and thus GNU Readline) on
my host operating system, the pipeline crashed and I noticed this hole in
the pipeline. In particular, AWK (which was linked against Readline 7.0)
would complain about not finding it and abort.
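In Make terms, the fix amounts to dependency declarations along these
lines (target and variable names are assumptions and the recipes are
omitted; only the ordering matters here):

    # Ncurses before Readline, Readline before the programs that link
    # against it (Bash and GNU AWK).
    $(ildir)/libreadline.so: $(ildir)/libncurses.so
    $(ibdir)/bash:           $(ildir)/libreadline.so
    $(ibdir)/gawk:           $(ildir)/libreadline.so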
|
|
Until now, both Gzip and Gnuastro were being bootstrapped from their Git
repositories. But fortunately a new release of each came out last week,
so to keep things standard we are now using their official tarballs.
I also noticed that we weren't checking the version of Gzip or mentioning
it in the acknowledgments section. This has also been corrected.
|
|
The TeX Live installer needs Wget to operate smoothly, especially on recent
Mac OS systems that don't have Wget pre-installed. Also, it would be good
for the pipeline to have its own downloader. So with this commit, the
pipeline also installs Wget, along with OpenSSL (which Wget depends on).
Many other small changes/fixes were made in this process.
|
|
Until now we weren't explicitly writing the full path of the dynamic
libraries necessary for linking a program. Now, with
`-Wl,-rpath=$(ildir)', we ensure that the linker records the location of
the necessary dynamic libraries inside the program at link time, so they
don't have to be found through the environment at run time. `pkg-config'
is also built when preparing the basics. Several other minor corrections
were made thanks to the great help of Raúl Infante Sainz.
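Concretely, the recipes now pass something along these lines to each
program's configure step (a sketch; `$(idir)' is an assumed variable for
the installation prefix and the real invocations differ per program):

    # Record the library directory inside the binary at link time, so
    # our libraries are found at run time without extra environment
    # settings.
    ./configure --prefix="$(idir)" \
                LDFLAGS="-L$(ildir) -Wl,-rpath=$(ildir)"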
|
|
The high-level dependencies are now built without having access to the
system's PATH. To do this, all the necessary software that we aren't
building ourselves is now brought into the installed `bin/' directory
using symbolic links to the corresponding programs on the host. It was
also necessary to increase the number of basic/low-level packages that we
build ourselves, adding several more (Diffutils and Findutils).
With this process in place, we now have a list of the exact software
packages that we are not building ourselves, enabling easy building of
all such dependencies in the future.
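For illustration, the host links can be made with a small loop like this
(the `ibdir' variable and the program names are purely illustrative):

    # Link programs we don't build ourselves from the host into the
    # installed bin/, so PATH can later be restricted to it alone.
    for prog in gcc g++ ld ar; do
        ln -sf "$(which $prog)" "$ibdir/$prog"
    done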
|
|
Until now, we were keeping the input file within the reproduction
pipeline's directories using the same name as the database/server. Now, we
are using a short/summarized filename convention for the input dataset.
|
|
In most analysis situations (except for simulations), an input dataset is
necessary, but that part of the pipeline was just left out and a general
`SURVEY' variable was set and never used. So with this commit, we actually
use a sample FITS file from the FITS standard webpage, show it (as well as
its histogram) and do some basic calculations on it.
This preparation of the input datasets is done in a generic way to enable
easy addition of more datasets if necessary.
|
|
Gnuastro is just one of the software packages used in the pipeline, so
it is too much to include its version just under the abstract. However,
the version of the pipeline, as well as the link to it, is very
important. This change puts more emphasis on these two points.
|
|
When the C compiler is not GNU GCC, linking with GNU Binutils is going
to cause problems. So until we can include GCC in this pipeline, it's
best to avoid Binutils too.
Also, for building CMake, we were relying on an installed CMake; now we
use its own `./bootstrap' script, so it can be built even if the host
system doesn't have CMake.
Also, for TeX Live, we are now setting a custom file as the main target
to avoid complications with symbolic links as targets in Make.
Finally, when the user says they don't want to re-write an existing
configuration file, no extra notices will be printed and the configure
script will immediately start building programs.
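For the CMake part, the change boils down to something like this (a
sketch; `$idir' is an assumed prefix variable and the pipeline's actual
recipe may differ):

    # Build CMake with its own bootstrap script (no host CMake needed),
    # then compile and install it into the pipeline's own prefix.
    ./bootstrap --prefix="$idir" && make && make install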
|
|
Since the final product of the pipeline is a LaTeX-created PDF file, it was
necessary to also have LaTeX within the pipeline. With this commit, TeX
Live is also built as part of the configuration and all the necessary
packages to build the PDF are also installed and mentioned in the paper
along with their versions.
|
|
Some minor corrections have been made to the paper's text to make it
easier to read and more formal.
|
|
To have better control over the build, GNU Binutils, Bzip2, GNU Gzip, and
XZ Utils have also been added to the pipeline. Some other minor cleanups
and fixes were also implemented throughout the process.
|
|
To ensure easy unpacking and building of the programs, Lzip and Tar are
now also built during the initial setup phase.
Some minor corrections were also applied to make things cleaner and
smoother.
|
|
All the software packages used are now acknowledged in the template
paper along with their versions. This section is also mentioned in the
checklist, so users don't delete it by mistake.
|
|
Different implementations of AWK may use different random number
generators, so even setting the seed will not ensure a reproducible
result. Because of this, the random plot may be different when the
pipeline runs on different systems, and this can confuse early users (it
is contrary to the exact reproducibility that is the whole purpose of
this pipeline).
The plot is just a simple X^2 plot, showing the squared value of the X
axis on the Y axis. It is very simple, but at least it will be identical
on all systems. Also, there may be too many complicated things in the
pipeline already for an early user, and this is just a demonstration, so
the easier/simpler, the better.
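The data behind such a plot can be generated with a tiny, deterministic
AWK command, for example (the exact range and output file name are
assumptions):

    # Print x and x^2 for x = 1..50; identical on any AWK
    # implementation because no random number generator is involved.
    awk 'BEGIN{ for(i=1; i<=50; i++) print i, i*i }' > x-squared.txt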
|
|
The old comments could be a little confusing, so they are now clearer
and describe where to look to see how this variable is used.
|
|
The Computer Modern font, designed by Donald Knuth and the default in
LaTeX, is indeed an elegant font in print. However, most journals use
roman-style fonts, so Computer Modern (subjectively) doesn't fit the
journal format nicely. The default font of this pipeline's paper
therefore now uses LaTeX's `newtx' package for a roman-style font.
Also, a set of preamble settings was added to allow headers on the pages
of the paper, making the result look more like a journal paper (familiar
to the eye) while also adding important information. A new header was
made for this job; it now also contains the title and author settings
(after all, these are also a type of header).
Finally, the LaTeX `authblk' package was used to organize authors and their
affiliations.
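A minimal sketch of the relevant preamble lines (only `newtx' and
`authblk' are named in this commit; the author and institute names are
placeholders):

    % Roman-style text and math fonts instead of Computer Modern.
    \usepackage{newtxtext,newtxmath}

    % Authors and their affiliations.
    \usepackage{authblk}
    \author[1]{First Author}
    \author[2]{Second Author}
    \affil[1]{First Institute}
    \affil[2]{Second Institute}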
|
|
An abstract is also something most research reports will need, so a
simple macro was defined to make it easy to implement one (without too
many code lines within the text of the main body).
The title was also moved up a little to better use the extra white space at
the top of the page.
Finally, `\highlightchanges', along with its explanation (both as
comments and within the text with examples), was added to `paper.tex' to
demonstrate how useful the `\new' and `\tonote' macros are.
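One way such a macro can look (purely illustrative; the macro name and
formatting are assumptions, not the pipeline's actual definition):

    % Typeset the abstract from a single argument, so the main body of
    % `paper.tex' stays short.
    \newcommand{\makeabstract}[1]{%
      \begin{center}%
        \begin{minipage}{0.85\linewidth}\small #1\end{minipage}%
      \end{center}}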
|
|
Until now, we were using the `multicol' package which is mainly designed
for more than two columns. Instead, we are just passing a `twocolumn'
option to the article document class.
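In other words, the relevant line is now simply:

    \documentclass[twocolumn]{article}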
|
|
While looking over the TeX source, some minor edits were made to the
comments to make them clearer.
|
|
Making plots and including references are integral parts of a scientific
paper. Therefore to demonstrate how cleanly they can be used within the
pipeline, they are now used to produce the final PDF.
To demonstrate PGFPlots, a random dataset is made (using AWK's random
function) and plotted with it. The minimum and maximum values of the
dataset are also included in the text to further show how such
calculations can go into the macros and text.
For the references, the NoiseChisel paper was added as the reference to
cite when using this pipeline, along with the MUSE UDF paper I, which
uses this pipeline for two of its sections. Citation is also discussed in
`README.md', and the NoiseChisel paper is added there as a published work
with a reproduction pipeline.
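As an illustration of the macro part, the dataset's minimum and maximum
can be exported as LaTeX macros by a step like this (the file and macro
names are hypothetical):

    # Find the minimum and maximum of column 2 and write them out as
    # LaTeX macros that the paper's text can use anywhere.
    awk 'NR==1{min=$2; max=$2}
         {if($2<min) min=$2; if($2>max) max=$2}
         END{printf "\\newcommand{\\datamin}{%g}\n", min;
             printf "\\newcommand{\\datamax}{%g}\n", max}' data.txt > macros.tex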
|
|
Until now, the copyright statement was left empty for the users of the
pipeline to fill. However, the files have already been created and have an
author (or contributing authors) before the user starts using the
pipeline. So the original authors of the files are added along with the
year. The user can add their own name to the existing files under
"Contributing author" when they start, and they will be the "Original
author" of the new files they create.
Several changes were also made to the TeX management:
- LaTeX is now run within a `reproduce/build/tex/build' directory, not
  in the top reproduction pipeline directory. This helps keep all the
  auxiliary TeX files and directories in that directory and keeps the
  top reproduction pipeline directory clean. After the final PDF is
  built, a copy is put in the top reproduction pipeline directory for
  easy viewing.
- The PGFPlots preamble was also made more useful, allowing the name of
  the `.tex' file to also be the name of the final plot that is produced
  (see the sketch after this list). This is a GREAT feature, because
  without it, the TiKZ externalization would be based on the order of
  the plots within the paper. But now, order is irrelevant and we can
  even delete the TiKZ files within the processing workhorse-Makefiles
  so the plots are definitely rebuilt on the next run.
- The paper is now in a two-column format to be more similar to published
papers.
A tip on debugging Make was added to `README.md'.
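A sketch of the externalization setup referred to in the second item
above (minimal and illustrative; the pipeline's actual preamble contains
more settings):

    % Externalize each TiKZ/PGFPlots figure into its own PDF under
    % tikz/.
    \usepackage{pgfplots}
    \usetikzlibrary{external}
    \tikzexternalize[prefix=tikz/]

    % Inside a figure's `.tex' file (e.g. `squared.tex'), name the
    % externalized output after that file.
    \tikzsetnextfilename{squared}   % output becomes tikz/squared.pdf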
|
|
The first comment of the top LaTeX source was confusing and is now fixed.
|
|
Let's start working on this pipeline independently with this first
commit. It is based on my previous experiences, but I had never made a
skeleton of a pipeline before; it was always within a working analysis.
Now that the pipeline has a separate repository for itself, we will be
able to work on it, use it as a base for future work, and modify it to
make it even better. Hopefully in time (and with the help of others), it
will grow and become much more robust and useful.
|