Table of Contents

Name

tricks - some Unix tricks when using NEMO

Files: Compression and Such

One can use cat(1) to concatenate structured binary files, e.g.
    cat r001a.dat >>r001.dat

To copy structured binary files between machines of different binary format, use the tsf(1NEMO) and rsf(1NEMO) programs and, if available, the compress(1) and uncompress(1) utilities:

    tsf in=r001.dat maxprec=t allline=t | compress >> r001.data.Z
and on the other machine:
    zcat r001.data.Z | rsf - r001.dat
On non-Unix supercomputers, often the ASCII "205" format (see e.g. atos(1NEMO)) will be used. This may also be saved in compressed form, and can be processed by NEMO after
    zcat r001.data.Z | atos - r001.dat
See also the tcppipe(1NEMO) program to read the data over a pipe from another machine.

Some N-body programs, which are capable of handling a series of snapshots and selecting them using the times= keyword, are not able to handle a subsequent snapshot that is larger than the first one. In fact, unpredictable things may happen, although usually it results in a core dump because of illegal memory access. There are two solutions: recompile the program with the -DREALLOC flag (or #define it in the source code), or prepend the data file with a large enough 'dummy' file.
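The prepend can be done with cat(1), as in the compression trick above. A minimal sketch with stand-in files (in real use dummy.dat would be a structured snapshot at least as large as any snapshot in the data file, e.g. created with mkplummer(1NEMO)):

```shell
# stand-in files; real use would have structured binary snapshots
printf 'DUMMY' > dummy.dat
printf 'DATA'  > r001.dat
cat dummy.dat r001.dat > r001-padded.dat    # dummy snapshot comes first
```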

To display a scatter diagram in the form of a contour map, convert the two columns to a snapshot by treating them as the 'x' and 'y' coordinates; the remaining phase-space coordinates are unimportant. Set the masses to 1, and use the atos(1NEMO) format. A program like awk(1) can write the input file for atos(1NEMO); snapgrid(1NEMO) then creates an image(5NEMO) file, which can optionally be smoothed using ccdsmooth(1NEMO) and displayed with ccdplot(1NEMO). In case your host has nicer contour plotting programs, use ccdfits(1NEMO) to write a fits(5NEMO) format file. Check also the tabccd shell script, if available, or perhaps someone has written it in C already; it calls awk, atos and snapgrid.

The ds9(1) program is one of the external programs that can be used to display images; it understands a variety of FITS compression standards. Transform your image to a FITS file using ccdfits(1NEMO), and run ds9 on that FITS file.

PIPES and SHARED MEMORY

NEMO encourages you to use pipes where serial operations make sense (see also tee(1)). However, on Linux, assuming you have enough memory, using /dev/shm (or anything backed by tmpfs(5)) will sacrifice memory to store data temporarily, and can speed up an I/O dominated workflow. Here is an example:
       mkdir /dev/shm/$USER
       cd /dev/shm/$USER
       mkspiral s000 1000000 nmodel=40
The mkspiral(1NEMO) example is taken from the NEMO bench(5NEMO) suite, but it is actually not very I/O dominated. The variable $XDG_RUNTIME_DIR can also be used, or $TMPDIR, depending on your system configuration. Another option is the mktemp(1) command, e.g. mktemp tmp.XXXXXXXX
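A sketch combining these: create a private scratch directory on a (likely) memory-backed filesystem, falling back to /tmp when $XDG_RUNTIME_DIR is not set:

```shell
# XDG_RUNTIME_DIR is usually a tmpfs on modern Linux; /tmp is the fallback
tmp=$(mktemp -d "${XDG_RUNTIME_DIR:-/tmp}/nemo.XXXXXXXX")
cd "$tmp"
# ... run the I/O heavy pipeline here ...
cd - && rm -rf "$tmp"
```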

Using tcppipe(1NEMO) one can read data produced on other machines.

zrun(1) can uncompress on the fly when prepended to a command:

    zrun fitsccd ngc6503.fits.gz - | tsf -

pee(1) is tee(1) for pipes.

sponge(1) soaks up standard input and writes it to a file.

vipe(1) edits a pipe using your editor, e.g.

    command1 | vipe | command2

pv(1) monitors the progress of data through a pipe, e.g.

        mkplummer - 10000 nmodel=10 | pv | snapscale - . mscale=1
The (bash) array ${PIPESTATUS[@]} contains the exit status ($?) of each program in a Unix pipe. In this example it should return "0 0 0".
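A self-contained illustration using only standard commands (PIPESTATUS is bash-specific, not POSIX):

```shell
true | false | true
echo "${PIPESTATUS[@]}"     # prints "0 1 0": only the middle command failed
```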

PARALLEL and OpenMP

Most programs in NEMO are still single-core, but many current machines have a number of cores. If a program only uses a fraction of the memory on the machine, the GNU parallel(1) program can be used to spread the load, e.g.
       echo mkplummer . 10000 > run.txt
       echo mkplummer . 10000 >> run.txt
       parallel --jobs 2 < run.txt
which will run both jobs in parallel.

One can also use the -j flag of make to run commands in parallel. Similar to the run.txt file created above, a well crafted Runfile can be created, and

       make -f Runfile -j 2

should achieve the same result.
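A minimal sketch of such a Runfile (target and file names are illustrative; recipe lines must be indented with a TAB):

```make
# Runfile: each job is a phony target; 'make -f Runfile -j 2' runs two at a time
all: run1 run2

run1:
	mkplummer run1.dat 10000

run2:
	mkplummer run2.dat 10000

.PHONY: all run1 run2
```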

Although NEMO can be configured with --with-openmp to take advantage of multi-core OpenMP computing, there are really no programs in NEMO taking advantage of this yet. However, programs using nemo_main() should be aware of the user interface implications of controlling how many cores are used:

   0.  By default, the number of cores as per omp_get_num_procs(). We actually
       take a short-cut and use omp_get_max_threads(), since it listens to
       OMP_NUM_THREADS (see next item). [the user has no control over this]
   1.  Setting the environment variable OMP_NUM_THREADS gives the (max) number
       of cores it will use.
   2.  Using the np= system keyword will override any possible setting of
       OMP_NUM_THREADS in the previous step.

Slurm

slurm is a popular package you will find on large computer clusters. See also sbatch_nemo.sh(8NEMO) for a helper script in NEMO to use slurm.

Tables

Tables (table(5NEMO)) come in many forms. Here are some TL;DR-style reminders on manipulating and displaying tables:

  # align a table for viewing
  csvlook data.csv
  column -s, -t data.csv
  python -c "import sys,prettytable; print(prettytable.from_csv(sys.stdin))" < data.csv
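For example, with a small comma-separated file (contents and file name are illustrative):

```shell
printf 'name,x,y\nplum,1,2\nspiral,10,20\n' > data.csv
column -s, -t data.csv      # aligns the three columns for viewing
```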
See Also

nemo(1NEMO), bench(5NEMO), table(5NEMO), tcppipe(1NEMO), sbatch_nemo(8NEMO), tee(1), pee(1), netcat(1), zrun(1), parallel(1), sponge(1), vipe(1)

Author

Peter Teuben

Update History

18-Aug-88    Document created      PJT
5-mar-89     tabccd added          PJT
6-mar-89     ds added              PJT
9-oct-90     fixed some typos      PJT
jan-2020     added pipe/shm        PJT
may-2021     OpenMP                PJT

