The following allied software is required for using TransAT on Linux:
64-bit machine and operating system.
Python 2.6.6 or a newer version of Python 2
Packages from Linux distribution:
gcc and g++ (4.4.6 or newer)
gfortran
GNU make
libstdc++6
libstdc++6-devel or libstdc++6-dev (depending on package manager)
libglib2.0-0
libglib2.0-dev
Packages for exporting images: libpng12, libtiff
Recommended: gnuplot, gzip
openMPI 1.5.5
PETSc-3.2-p7 (including HYPRE package)
ParaView or Tecplot for post-processing
Please note: Instructions to install openMPI 1.5.5 and PETSc-3.2-p7 are given in the present
document.
Please note: The libstdc++6-devel or libstdc++6-dev package names might be slightly different
depending on the package manager. In some package managers, an extra version number is included in
the package’s name e.g. libstdc++6-4.7-dev.
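The version requirements above can be verified from a terminal, for example as follows (how to install missing packages differs per distribution):
gcc --version
g++ --version
gfortran --version
make --version
python --version
The reported versions should meet the minima listed above (gcc/g++ 4.4.6 or newer, Python 2.6.6 or a newer version of Python 2).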
Chapter 2 Hardware Requirements
The hardware requirements in terms of memory and disk space depend on the problem at hand. The
memory and disk space requirements increase with the number of cells in a given simulation.
Likewise, they also increase with the number of activated equations and/or models in the
simulation.
To illustrate this, the hardware recommendations for a simulation with a one-million-cell grid solving
only for pressure and velocities are the following:
RAM: 2 GB
Disk space:
per restart file: 1 GB
per output file: ParaView 50 MB, Tecplot 100 MB
whereas the hardware recommendations for a simulation with a one-million-cell grid solving for
pressure and velocities with turbulence, multiphase and phase change models activated are the
following:
RAM: 4.5 GB
Disk space:
per restart file: 2 GB
per output file: ParaView 100 MB, Tecplot 150 MB
Chapter 3 Installing openMPI and PETSc
3.1 Downloading openMPI and PETSc
To run simulations with TransAT, openMPI and PETSc need to be installed. The following versions
need to be installed: openMPI 1.5.5 and PETSc-3.2-p7 (including the HYPRE package).
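As a sketch only, a PETSc configuration consistent with the "--prefix" and "--with-mpi-dir" options described below might look like the following (the paths are assumptions to be adapted to your system; --download-hypre requests the HYPRE package required by TransAT):
cd /home/yourname/software/petsc-3.2-p7
./configure --prefix=/home/yourname/software/petsc-3.2-p7-install --with-mpi-dir=/home/yourname/software/openmpi-1.5.5 --download-hypre=1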
Note that the path defined by "--prefix" is the installation directory into which PETSc should be
installed, and the path defined by "--with-mpi-dir" is the path to the openMPI installation
directory.
After the configuration step, follow the instructions of PETSc to build and install PETSc.
The commands should be similar to the following:
make PETSC_DIR=/home/yourname/software/petsc-3.2-p7 PETSC_ARCH=arch-linux2-c-opt all
make PETSC_DIR=/home/yourname/software/petsc-3.2-p7 PETSC_ARCH=arch-linux2-c-opt install
Listing 3.5: Build PETSc
3.5 Setting path and environment variables
Typically, path and environment variables can be set in one of the following files located in your home
directory:
.bashrc If you are using BASH. This script is executed when you open a BASH terminal. Add the
following lines to the file:
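Assuming openMPI and PETSc were installed under /home/yourname/software, the lines would be similar to the following sketch (the exact paths depend on your installation):
export PATH=/home/yourname/software/openmpi-1.5.5/bin:$PATH
export LD_LIBRARY_PATH=/home/yourname/software/openmpi-1.5.5/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/home/yourname/software/petsc-3.2-p7/lib:$LD_LIBRARY_PATH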
The path /home/yourname/software refers to the directory where openMPI and PETSc have been
installed.
Chapter 4 Installing TransAT 5.7.1
4.1 Downloading TransAT
The TransAT installation files can be downloaded at the following address:
http://transat-cfd.com/download-center/
Note that the software is only available on 64-bit platforms.
If you do not know which kind of platform you have, the platform can be checked as follows.
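A common check, assuming a standard Linux system, is the uname utility:
uname -m
An output of x86_64 indicates a 64-bit platform.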
Listing 4.8: Check platform
The script to create the executable is called tmb_link.py and is located in the bin folder of the
transatMB directory. It links the TransAT libraries against the local MPI and PETSc libraries to
produce the executable.
Execute the following command to create the bin/transatmbDP executable:
tmb_link.py
Listing 4.9: Link transatmb
If the executable has been successfully linked and created in the bin folder, you can check that the required environment variables are set.
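A minimal sketch of such a check, assuming the PATH and LD_LIBRARY_PATH settings from section 3.5 are the ones to verify:
echo $PATH
echo $LD_LIBRARY_PATH
The openMPI and PETSc installation directories should appear in the output.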
Listing 4.11: Check if environment variables are set
If the executable is not created, please refer to the troubleshooting section 6.5.
Chapter 5 TransAT 5.7.1 License
TransAT has a built-in demo version which lets you run simulations for up to 1 hour wall time, using a
maximum of 4 cores. No checkpoint/restart options are provided in this version.
A license can also be installed. This provides full usage of TransAT, with no restriction on
simulation runtime and with checkpoint/restart capability. A restriction on the number of cores
might remain, depending on the contract.
5.1 Install license
The license file for TransAT will be sent separately by email. Save the license file (e.g.
/home/jerry/transat_jerry.lic).
The environment variable TRANSATLIC should point to the license file. The license file should
have both READ and WRITE permissions for the user(s). How the variable is set depends on the
SHELL being used, which can be checked in the terminal using "echo":
echo $SHELL
/usr/bin/csh
echo $SHELL
/usr/bin/bash
Listing 5.1: Check SHELL being used
If the SHELL is csh, then set the environment variable TRANSATLIC as follows (you can add this line
to .cshrc in your home directory to automatically set the variable):
setenv TRANSATLIC /home/jerry/transat_jerry.lic
Listing 5.2: Set license file path (csh)
If the SHELL is bash, then set the environment variable TRANSATLIC as follows (you can add this
line to .bashrc in your home directory to automatically set the variable):
export TRANSATLIC="/home/jerry/transat_jerry.lic"
Listing 5.3: Set license file path (bash)
Please note: After TransAT and the license file are installed, TransAT has to be run at least once in
the same month the license file was created in order to activate it.
Chapter 6 Running TransAT 5.7.1
Executables that run on distributed-memory machines cannot be run like normal executables on Linux.
The parallel environment has to be set up and the number of processors involved has to be
defined.
On small machines (desktop computers with multicore processors), the user can directly submit parallel
jobs if openMPI is installed. For instance, to run the TransAT executable transatmbinitialDP on 4
processors from the current folder, the following command can be executed:
mpiexec -n 4 ./transatmbinitialDP
The tmb_runinit.py and tmb_run.py scripts from the transatMB/bin folder allow users to run
small parallel jobs locally (see the next section 6.1).
On clusters, many users run simulations simultaneously and consequently there is a job system to
manage parallel jobs. The job management system and the job submission process on a specific cluster
are usually documented by the provider of the service. Section 6.2 shows which
executables from TransAT should be submitted to the job system.
6.1 Running without job system
All the following commands should be executed from the project directory:
Compiling initialconditions.cxx Use the script tmb_init_compile.py -n nprocessors to compile your
initial conditions (initialconditions.cxx) in the project folder and link them to the code library.
This will create the executable transatmbinitialDP inside the project directory and run it locally.
The argument -n nprocessors defines the number of processors used in the initialisation. If it is
not given, the executable will not be run; it can be run later using tmb_runinit.py, described below.
Initializing the simulation Use the script tmb_runinit.py (-n nprocessors) to run the initialisation
executable locally. The optional argument (-n nprocessors) defines the number of processors used in
the simulation. If it is not given, the simulation will run on one processor. This step is not needed
if tmb_init_compile.py has been used with the argument -n nprocessors (see the previous point).
Running the simulation Use the script tmb_run.py -n nprocessors to run the executable locally. The
mandatory argument -n nprocessors defines the number of processors used in the simulation. A combined
example of these steps is shown below.
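As an illustration of the steps above, assuming a run on 4 processors (the processor count is an arbitrary choice):
tmb_init_compile.py -n 4
tmb_run.py -n 4
The first command compiles initialconditions.cxx and runs the initialisation on 4 processors; the second runs the simulation itself on 4 processors.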
6.2 Running on a cluster with job system
All the following commands should be executed from the project directory, e.g. from your own project
folder.
Compiling initialconditions.cxx Use the script tmb_init_compile.py to compile your initial conditions
(initialconditions.cxx) in the project folder and link them to the code library. This will create
the executable transatmbinitialDP inside the project directory.
Running initialconditions Use the executable transatmbinitialDP to submit your parallel job to the job system.
Running the simulation Use the executable from the TransAT directory. The full path to the executable is /home/yourname/software/TransAT_5.7.1/transatMB/bin/transatmbDP (the path will differ depending on where you installed TransAT). A sketch of a job script is shown below.
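The submission syntax depends on the cluster's job system. The following is only a sketch for a SLURM-like scheduler; the directives, processor count, and paths are assumptions, not part of the original instructions:
#!/bin/bash
#SBATCH --ntasks=16
mpiexec -n 16 /home/yourname/software/TransAT_5.7.1/transatMB/bin/transatmbDP
A corresponding job running transatmbinitialDP would be submitted first to initialise the simulation.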
6.3 Running a TransAT - OLGA coupled simulation
The commercial simulation software OLGA is not available for Linux, thus to run a TransAT-OLGA
coupled simulation from Linux, a separate Windows machine is needed which must meet the following
requirements:
OLGA 7.2 (supported version) installed
Python 2.7.3 (or newer version of Python 2) installed
OpenOPC 1.2.0 package installed
6.4 Troubleshooting: modifying transatui.target
transatui.target is used by the Graphical User Interface to locate the executables of TransAT. For a
standard installation, it is not necessary to modify this file.
If the TransAT executable is not found when running a simulation, the issue might come from
incorrectly set paths in the transatui.target file. The transatui.target file can be found at the
following location:
where /home/yourname/software refers to the path to the directory where TransAT has been
installed.
The transatui.target file contains the paths to the installation directories of the TransAT solver and
to the executable of a default text editor used to modify initial conditions. The paths to the
transatUI and transatMB installation directories can be either absolute or relative. The following is
an example of the content of transatui.target.
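As a hypothetical sketch consistent with the description below (the section names other than TextEditor and PythonCmd, the key names, and all paths are assumptions):
[TransATMB]
Dir = ../transatMB
[TextEditor]
Bin = /usr/bin/gedit
[PythonCmd]
Bin = /usr/bin/python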
The "Bin" variable in the "TextEditor" section defines the path to the text editor used by TransATUI.
Note that the path to the text editor executable has to be absolute. The text editor is the only
variable that should be changed by the user; all the other directories should be correct after the
installation.
The "Bin" variable in the "PythonCmd" section defines the python executable which TransAT uses to
execute the python scripts.
6.5 Troubleshooting: linking problems
Depending on the operating system and version being used, there can be linking problems when creating
the executable transatmbDP. This is a known issue with Ubuntu 18.04, where linking fails with errors
similar to the one shown below.
Depending on how the dependencies have been installed, tmb_link.py can have problems finding system
libraries in their proper location during linking. This has been encountered on Ubuntu 16.04 when
locating libglib-2.0. If the problem is encountered, the following procedure can be followed to fix
the issue; if another library is not found, the same procedure applies with the name of the
problematic library substituted. The error shown for missing libraries may be similar to the following
output:
/usr/bin/ld: cannot find -lglib-2.0
Listing 6.5: Missing library error
The following instructions may resolve the issue. Use the locate command to see whether the library
is installed:
locate libglib-2.0
Listing 6.6: Locate library
You should get an output similar to the following, showing where the library, with a .so suffix, is
located:
/lib/x86_64-linux-gnu/libglib-2.0.so.0
Listing 6.7: Library location
If the locate command does not return any libraries, then the library in question is not installed.
To install the library, use the package manager of your Linux system (apt-get for Ubuntu, zypper for
openSUSE, or others), as in the following example:
sudo apt-get install libglib2.0-dev
Listing 6.8: Install package
If a library other than libglib-2.0 is missing, use a web search to find which package contains it.
Once installed, use locate again to check that the library is present on the system. It is now
possible to try running tmb_link.py again, as the issue may already be resolved. If tmb_link.py still
cannot find the library, try creating a symbolic link in the /usr/lib/x86_64-linux-gnu/ folder. This
can be achieved with the following command:
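Based on the explanation that follows, the command should be similar to the following sketch, where the first path is taken from the locate output above:
sudo ln -s /lib/x86_64-linux-gnu/libglib-2.0.so.0 /usr/lib/x86_64-linux-gnu/libglib-2.0.so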
Please note that in the above command, the first path given is where the library actually is; this
should be the path returned by the locate command. The second path is where the link should be
created. Libraries are typically searched for in /usr/lib/, so that is where the link is created,
along with an .so suffix.