CFL3D Installation Notes - ODU Wahab Cluster
============================================

About the CFL3D Software
------------------------

* Software home page: https://software.nasa.gov/software/LAR-16003-1
* Git repo: https://github.com/nasa/CFL3D
* Documentation: https://nasa.github.io/CFL3D/

Installation on ODU Cluster
---------------------------

* Base container: intel/2023.0 (ICC + Intel MPI)

* Followed the build instructions at:
  https://nasa.github.io/CFL3D/Cfl3dv6/cfl3dv6_build.html#make

* Configuration (this is called the "Installation" stage in the CFL3D docs):
  from the `build` subfolder, issue `./Install -noredirect -linux_compiler_flags=Intel`

* Build:

  - `make cfl3d_seq`
  - `make cfl3d_mpi`
  - ... and so on; run `make` with no target to see the list of available targets.


Usage Instructions
------------------

(Initially written for Dr. Adem Ibrahim)

Dr. Ibrahim,

Below are instructions for running CFL3D on our cluster.

The software is currently installed in your home directory at the following path:

    ~/CFL3D/bin

**Prerequisites for running CFL3D**

This software was built on top of the "intel/2023.0" container,
so the first thing you must do is load it with the following command in your shell:

```bash
module load container_env intel/2023.0
```

For serial runs, the main input file MUST be named `cfl3d.inp`.
Assuming this input file exists in the current directory,
you run the *serial* CFL3D executable like this:

```bash
crun.intel ~/CFL3D/bin/cfl3d_seq
```

There is also an MPI (parallel) version of CFL3D, called `cfl3d_mpi`,
installed in the same folder.

Here is an example SLURM job script to run CFL3D in serial (sequential) mode:

```bash
#!/bin/bash
#SBATCH --job-name cfl3d
#SBATCH --ntasks 1

# load the container environment, then run the serial solver
module load container_env intel/2023.0
crun.intel ~/CFL3D/bin/cfl3d_seq
```

CFL3D ships with many sample calculations, listed here:
https://nasa.github.io/CFL3D/Cfl3dv6/cfl3dv6_testcases.html

### Demo: Flat Plate Steady Flow

Source: https://nasa.github.io/CFL3D/Cfl3dv6/cfl3dv6_testcases.html#flatplate

Here are the commands I invoked:

```bash
module load container_env intel/2023.0

mkdir -p ~/LIONS/Cfl3dv6/examples
cd ~/LIONS/Cfl3dv6/examples

# download and unpack the input files
wget https://nasa.github.io/CFL3D/Cfl3dv6/2DTestcases/Flatplate/Flatplate.tar.Z
tar xvf Flatplate.tar.Z
cd Flatplate/

# split the input files and generate the unformatted grid file,
# which is grdflat5.bin
crun.intel ~/CFL3D/bin/splitter < split.inp_1blk

# copy the main input file to "cfl3d.inp" before running:
cp grdflat5.inp cfl3d.inp
srun crun.intel ~/CFL3D/bin/cfl3d_seq
```

The archive unpacks into a subfolder called `Flatplate`,
and this folder is also where the calculation takes place.
The main output goes to a file named `cfl3d.out`.

# FIXME: Run in parallel. Still has issues on Wahab.
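
For reference, here is a minimal sketch of the parallel job script I have been experimenting with. It reuses the `crun.intel` wrapper and the `cfl3d_mpi` binary mentioned above, but it is not yet verified on Wahab: the `--ntasks` value of 4 is only a placeholder (the correct count depends on how the grid/input has been split into blocks; check the CFL3D documentation), and launching the containerized binary through `srun` is an assumption based on the serial examples above.

```bash
#!/bin/bash
#SBATCH --job-name cfl3d-mpi
#SBATCH --ntasks 4        # placeholder: adjust to the block decomposition of the input

# same container environment as the serial runs
module load container_env intel/2023.0

# assumed launch pattern (untested): srun starts the MPI ranks,
# each running the containerized cfl3d_mpi executable
srun crun.intel ~/CFL3D/bin/cfl3d_mpi
```

Submit it with `sbatch` as usual. I am assuming `srun` here (rather than `mpirun`/`mpiexec`) because it matches the serial demo above; if that fails, launching with `mpirun` inside the container would be the next thing to try.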