This page presents some examples illustrating different configurations related to the namelist "Domain_Features".
For the sake of clarity, data initialized by default and not explicitly required are generally omitted.
Data values are shown for equations used in dimensional form.
&Domain_Features
  Geometric_Layout = 0,
  Start_Coordinate_I_Direction = -0.05 , End_Coordinate_I_Direction = 0.05,
  Start_Coordinate_J_Direction = -0.05 , End_Coordinate_J_Direction = 0.05,
  Start_Coordinate_K_Direction =  0.00 , End_Coordinate_K_Direction = 0.00,
  Cells_Number_I_Direction = 80 , Cells_Number_J_Direction = 60 , Cells_Number_K_Direction = 1 /
&Domain_Features
  Geometric_Layout = 0,
  Start_Coordinate_I_Direction = -0.05 , End_Coordinate_I_Direction = 0.05,
  Start_Coordinate_J_Direction = -0.05 , End_Coordinate_J_Direction = 0.05,
  Start_Coordinate_K_Direction =  0.00 , End_Coordinate_K_Direction = 0.00,
  Cells_Number_I_Direction = 80 , Cells_Number_J_Direction = 60 , Cells_Number_K_Direction = 1,
  Number_OMP_Threads = 4 /
The grid is regular, with 80 cells in the I-direction, 60 cells in the J-direction and 1 cell in the K-direction.
&Domain_Features
  Geometric_Layout = 0,
  Start_Coordinate_I_Direction = -0.05 , End_Coordinate_I_Direction = 0.05,
  Start_Coordinate_J_Direction = -0.05 , End_Coordinate_J_Direction = 0.05,
  Start_Coordinate_K_Direction =  0.00 , End_Coordinate_K_Direction = 0.00,
  Cells_Number_I_Direction = 80 , Cells_Number_J_Direction = 60 , Cells_Number_K_Direction = 1,
  MPI_Cartesian_Topology = .false. ,
  Total_Number_MPI_Processes = 8,
  Max_Number_MPI_Proc_I_Direction = 4,
  Max_Number_MPI_Proc_J_Direction = 2,
  Max_Number_MPI_Proc_K_Direction = 1 /
The grid is regular. Each subdomain contains 80 cells in the I-direction, 60 cells in the J-direction and 1 cell in the K-direction.
&Domain_Features
  Geometric_Layout = 0,
  Start_Coordinate_I_Direction = -0.05 , End_Coordinate_I_Direction = 0.05,
  Start_Coordinate_J_Direction = -0.05 , End_Coordinate_J_Direction = 0.05,
  Start_Coordinate_K_Direction =  0.00 , End_Coordinate_K_Direction = 0.00,
  Cells_Number_I_Direction = 80 , Cells_Number_J_Direction = 60 , Cells_Number_K_Direction = 1,
  MPI_Graphic_Topology = .true. ,
  Total_Number_MPI_Processes = (value depending on the geometrical configuration of the domain),
  Max_Number_MPI_Proc_I_Direction = 4,
  Max_Number_MPI_Proc_J_Direction = 2,
  Max_Number_MPI_Proc_K_Direction = 1 /
The code must be compiled with the MPI options.
The MPI graph topology is used when the domain configuration contains large immersed bodies.
The aim is to build a domain decomposition that excludes the solid parts from the domain and ensures that the numerical domain is mainly fluid.
In a first step, the domain decomposition is carried out as if the MPI Cartesian decomposition were used. The number of processes "Total_Number_MPI_Processes" is then the product of "Max_Number_MPI_Proc_I_Direction", "Max_Number_MPI_Proc_J_Direction" and "Max_Number_MPI_Proc_K_Direction".
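With the values used in the examples above, this product reads:

```
Total_Number_MPI_Processes = Max_Number_MPI_Proc_I_Direction
                           * Max_Number_MPI_Proc_J_Direction
                           * Max_Number_MPI_Proc_K_Direction
                           = 4 * 2 * 1 = 8
```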
Subdomains that are entirely occupied by solid parts are useless. They must therefore be removed in order to reduce the number of MPI processes. As the MPI topology is no longer Cartesian, because of the "holes" created by the removed subdomains, the subdomain decomposition is handled with the MPI graph topology.
The software "mpi_subdomain_decomposition" has been developed to help the user build the subdomain decomposition in these cases:
The user must set the variable "Total_Number_MPI_Processes" to this new value and keep the other data unchanged. During the run, sunfluidh reads its own data file together with the new data file "data_mpi_subdomain_layout.dat" in order to know the MPI process layout related to the MPI graph topology.
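As an illustration, a minimal sketch of the resulting namelist, assuming (hypothetically) that two of the eight subdomains of the Cartesian example above turned out to be fully solid and were removed:

```fortran
&Domain_Features
  Geometric_Layout = 0,
  Start_Coordinate_I_Direction = -0.05 , End_Coordinate_I_Direction = 0.05,
  Start_Coordinate_J_Direction = -0.05 , End_Coordinate_J_Direction = 0.05,
  Start_Coordinate_K_Direction =  0.00 , End_Coordinate_K_Direction = 0.00,
  Cells_Number_I_Direction = 80 , Cells_Number_J_Direction = 60 , Cells_Number_K_Direction = 1,
  MPI_Graphic_Topology = .true. ,
  Total_Number_MPI_Processes = 6,    ! 8 Cartesian subdomains minus 2 fully solid ones (hypothetical count)
  Max_Number_MPI_Proc_I_Direction = 4,
  Max_Number_MPI_Proc_J_Direction = 2,
  Max_Number_MPI_Proc_K_Direction = 1 /
```

The actual value of "Total_Number_MPI_Processes" must come from the output of "mpi_subdomain_decomposition" for the real geometry; the value 6 here is only for illustration.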