Examples of data sets
This page provides examples illustrating different configurations of the "Domain_Features" namelist.
For the sake of clarity, variables that keep their default values are generally omitted.
The data values shown correspond to the equations used in dimensional form.
2D domain configuration
No parallel setting
No OpenMP parallelization is used.
No domain decomposition (no MPI parallelization).
The grid is regular, with 80 cells along each of the I and J directions (a single cell along K).
&Domain_Features
  Geometric_Layout             = 0,
  Start_Coordinate_I_Direction = -0.05,
  End_Coordinate_I_Direction   =  0.05,
  Start_Coordinate_J_Direction = -0.05,
  End_Coordinate_J_Direction   =  0.05,
  Start_Coordinate_K_Direction =  0.00,
  End_Coordinate_K_Direction   =  0.00,
  Cells_Number_I_Direction     = 80,
  Cells_Number_J_Direction     = 80,
  Cells_Number_K_Direction     = 1
/
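As a quick sanity check, the uniform cell size implied by the bounds and cell counts of this namelist can be computed by hand; the snippet below is purely illustrative and its variable names are not part of the namelist.

```python
# Illustrative check of the uniform cell size implied by the namelist above.
# Bounds and cell counts are copied from the example; nothing here is read
# by the solver itself.
x_min, x_max, nx = -0.05, 0.05, 80   # I direction
y_min, y_max, ny = -0.05, 0.05, 80   # J direction

dx = (x_max - x_min) / nx
dy = (y_max - y_min) / ny

# dx = dy = 0.00125 (up to floating-point rounding): the grid is regular
# and square in the (I, J) plane.
print(dx, dy)
```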
Parallel setting: OpenMP only
OpenMP parallelization is used with 4 threads.
No domain decomposition (no MPI parallelization).
The grid is regular, with 80 cells along each of the I and J directions (a single cell along K).
&Domain_Features
  Geometric_Layout             = 0,
  Start_Coordinate_I_Direction = -0.05,
  End_Coordinate_I_Direction   =  0.05,
  Start_Coordinate_J_Direction = -0.05,
  End_Coordinate_J_Direction   =  0.05,
  Start_Coordinate_K_Direction =  0.00,
  End_Coordinate_K_Direction   =  0.00,
  Cells_Number_I_Direction     = 80,
  Cells_Number_J_Direction     = 80,
  Cells_Number_K_Direction     = 1,
  Number_OMP_Threads           = 4
/
Parallel setting: MPI only (MPI Cartesian topology)
No OpenMP parallelization is used.
Domain decomposition (MPI parallelization) with an MPI Cartesian topology. The domain is divided into 8 subdomains:
- 4 along the I-direction
- 2 along the J-direction
- 1 along the K-direction (default)
The grid is regular; each subdomain has 80 cells along each of the I and J directions.
&Domain_Features
  Geometric_Layout                = 0,
  Start_Coordinate_I_Direction    = -0.05,
  End_Coordinate_I_Direction      =  0.05,
  Start_Coordinate_J_Direction    = -0.05,
  End_Coordinate_J_Direction      =  0.05,
  Start_Coordinate_K_Direction    =  0.00,
  End_Coordinate_K_Direction      =  0.00,
  Cells_Number_I_Direction        = 80,
  Cells_Number_J_Direction        = 80,
  Cells_Number_K_Direction        = 1,
  MPI_Cartesian_Topology          = .true.,
  Total_Number_MPI_Processes      = 8,
  Max_Number_MPI_Proc_I_Direction = 4,
  Max_Number_MPI_Proc_J_Direction = 2,
  Max_Number_MPI_Proc_K_Direction = 1
/
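In the Cartesian case the total number of MPI processes must equal the product of the per-direction process counts. A minimal consistency check, with purely illustrative variable names:

```python
# Illustrative consistency check for the Cartesian example above:
# Total_Number_MPI_Processes must equal the product of the per-direction
# maxima (4 x 2 x 1 = 8).
procs_i, procs_j, procs_k = 4, 2, 1
total_processes = 8

assert total_processes == procs_i * procs_j * procs_k
print(procs_i * procs_j * procs_k)  # 8
```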
Parallel setting: MPI only (MPI graphic topology)
No OpenMP parallelization is used.
Domain decomposition (MPI parallelization) with an MPI graphic topology. The domain is divided into 6 subdomains, starting from a Cartesian-like candidate decomposition of at most:
- 4 along the I-direction (maximum value)
- 2 along the J-direction (maximum value)
- 1 along the K-direction (default)
The grid is regular; each subdomain has 80 cells along each of the I and J directions.
&Domain_Features
  Geometric_Layout                = 0,
  Start_Coordinate_I_Direction    = -0.05,
  End_Coordinate_I_Direction      =  0.05,
  Start_Coordinate_J_Direction    = -0.05,
  End_Coordinate_J_Direction      =  0.05,
  Start_Coordinate_K_Direction    =  0.00,
  End_Coordinate_K_Direction      =  0.00,
  Cells_Number_I_Direction        = 80,
  Cells_Number_J_Direction        = 80,
  Cells_Number_K_Direction        = 1,
  MPI_Graphic_Topology            = .true.,
  Total_Number_MPI_Processes      = 6,
  Max_Number_MPI_Proc_I_Direction = 4,
  Max_Number_MPI_Proc_J_Direction = 2,
  Max_Number_MPI_Proc_K_Direction = 1
/
The code must be compiled with the MPI options.
The MPI graphic topology is intended for configurations containing large immersed bodies.
The aim is to build a domain decomposition that excludes the solid parts, so that the retained subdomains are mainly fluid.
In a first step, the domain decomposition is carried out as if the MPI Cartesian decomposition were used. The number of candidate processes equals the product of Max_Number_MPI_Proc_I_Direction, Max_Number_MPI_Proc_J_Direction and Max_Number_MPI_Proc_K_Direction.
Subdomains totally occupied by solid parts are useless, so they are removed to reduce the number of MPI processes. Since the "holes" created this way mean the MPI topology is no longer Cartesian, the subdomain decomposition is handled with the MPI graphic topology.
The software "mpi_subdomain_decomposition" has been developed to help the user build this subdomain decomposition.
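The counting step described above can be sketched as follows. The solid mask and the subdomain layout are hypothetical inputs chosen to match the 4 x 2 example; this is not the actual data format used by mpi_subdomain_decomposition.

```python
# Hypothetical sketch of how fully solid subdomains reduce the MPI process
# count. True marks a subdomain entirely occupied by a solid body; such
# subdomains are dropped, and the survivors become the MPI processes of the
# graphic topology. The 4 x 2 layout mirrors the example above.
solid = [
    [False, False, True,  False],  # first row of subdomains along J
    [False, True,  False, False],  # second row of subdomains along J
]

fluid_subdomains = [
    (i, j)
    for j, row in enumerate(solid)
    for i, fully_solid in enumerate(row)
    if not fully_solid
]

# 8 candidate subdomains, 2 removed -> 6 MPI processes, matching
# Total_Number_MPI_Processes = 6 in the namelist above.
print(len(fluid_subdomains))  # 6
```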
The user