
CFL3D Installation Notes - ODU Wahab Cluster

About the CFL3D Software

CFL3D is a structured-grid, cell-centered Reynolds-averaged Navier-Stokes (RANS) flow solver developed at NASA Langley Research Center; its documentation and test cases are published at https://nasa.github.io/CFL3D/.

Installation on ODU Cluster

  • Base container: intel/2023.0 (ICC + Intel MPI)

  • Build instructions followed: https://nasa.github.io/CFL3D/Cfl3dv6/cfl3dv6_build.html#make

  • Configuration (called the "Installation" stage in CFL3D's own documentation): from the build subfolder, run: ./Install -noredirect -linux_compiler_flags=Intel

  • Build:

    • make cfl3d_seq
    • make cfl3d_mpi
    • ... and so on; see the help text printed by make when run with no target. A combined sketch of the whole build sequence follows this list.
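
For reference, here is the whole configure-and-build sequence in one place. This is only a sketch: it assumes the CFL3D source tree was unpacked under ~/CFL3D and that, just like the run instructions below, every step is executed inside the intel/2023.0 container via crun.intel.

module load container_env intel/2023.0

# configure ("Install" stage); run from the build subfolder of the source tree
cd ~/CFL3D/build        # assumed location of the source tree
crun.intel ./Install -noredirect -linux_compiler_flags=Intel

# build the serial and the MPI executables
crun.intel make cfl3d_seq
crun.intel make cfl3d_mpi

# make with no target prints the help text listing all available targets
crun.intel make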

Usage Instructions

(Initially written for Dr. Adem Ibrahim, 2023-05-31)

Dr. Ibrahim,

Below are instructions for running CFL3D on our cluster:

The software is currently installed in your home directory at the following path:

~/CFL3D/bin

Prerequisites for running CFL3D

This software was built on top of the "intel/2023.0" container, so the first thing you must do is run the following command in your shell:

module load container_env intel/2023.0

For serial runs, the main input file MUST be named cfl3d.inp. Assuming this input file already exists in the current directory, run the serial CFL3D executable like this:

crun.intel ~/CFL3D/bin/cfl3d_seq

There is also an MPI (parallel) version of CFL3D, called cfl3d_mpi, installed in the same folder.
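
Parallel runs still have an open issue on Wahab (see the FIXME below), but for reference a job script for the MPI build would presumably look like the sketch below. The task count and the use of srun to launch the MPI ranks are my assumptions, not a tested recipe:

#!/bin/bash
#SBATCH --job-name cfl3d_mpi
#SBATCH --ntasks 4        # assumed rank count; match it to how the grid was split

module load container_env intel/2023.0
srun crun.intel ~/CFL3D/bin/cfl3d_mpi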

This is an example SLURM job script to run CFL3D in serial (sequential) mode:

#!/bin/bash
#SBATCH --job-name cfl3d
#SBATCH --ntasks 1

module load container_env intel/2023.0
crun.intel ~/CFL3D/bin/cfl3d_seq
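
Save the script to a file (the name below is just an example) and submit it with sbatch:

sbatch cfl3d_serial.sh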

CFL3D has a lot of sample calculations located here: https://nasa.github.io/CFL3D/Cfl3dv6/cfl3dv6_testcases.html

Demo: Flat Plate Steady Flow

Source: https://nasa.github.io/CFL3D/Cfl3dv6/cfl3dv6_testcases.html#flatplate

Here are the commands I invoked:

module load container_env intel/2023.0

mkdir -p ~/LIONS/Cfl3dv6/examples
cd ~/LIONS/Cfl3dv6/examples

# download and unpack the input files
wget https://nasa.github.io/CFL3D/Cfl3dv6/2DTestcases/Flatplate/Flatplate.tar.Z
tar xvf Flatplate.tar.Z
cd Flatplate/

# split the input files and generate the unformatted grid file,
# which is grdflat5.bin
crun.intel ~/CFL3D/bin/splitter < split.inp_1blk

# copy the main input file to "cfl3d.inp" before running:
cp grdflat5.inp cfl3d.inp
srun crun.intel ~/CFL3D/bin/cfl3d_seq

The files are unpacked into a subfolder called Flatplate, and this folder is also where the calculation takes place. The main output goes to a file named cfl3d.out.
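
To keep an eye on a running calculation, you can simply follow that output file from another shell:

tail -f cfl3d.out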

FIXME: Running in parallel still has an issue on Wahab.

Update 2023-06-02

A few notes:

  1. This build of CFL3D can only run on Wahab, because Wahab's hardware is new enough to support the instruction sets used in the code. Please do not run it on Turing; it will quit with an error message.

  2. With version 6, there is no longer any need to recompile CFL3D every time you want to run a different physical system (model). The code now allocates arrays dynamically, so precfl3d is not needed anymore.

  3. I have included the source code in the <<#TODO>> directory in case you want to experiment with and modify it.

  4. The code was built NOT to read from stdin. Please do not run it this way:

    crun.intel ~/CFL3D/bin/cfl3d_seq < MY_INPUT.inp     ### WON'T WORK
    

    Instead, run it in two steps:

    cp MY_INPUT.inp cfl3d.inp
    crun.intel ~/CFL3D/bin/cfl3d_seq
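
    If copying the input file before every run gets tedious, a small wrapper script along the lines of the sketch below could do both steps in one go. The script name and its single-argument interface are my own invention, not part of CFL3D:

    #!/bin/bash
    # run_cfl3d.sh -- hypothetical convenience wrapper: copy the chosen input
    # file to cfl3d.inp, then launch the serial solver in the current directory.
    # Assumes "module load container_env intel/2023.0" was already done.
    set -e
    cp "$1" cfl3d.inp
    crun.intel ~/CFL3D/bin/cfl3d_seq

    Example: ./run_cfl3d.sh MY_INPUT.inp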