We are using a PETScKrylovSolver with CG and Hypre preconditioning to solve a system of PDEs. As it stands, Hypre is performing quite poorly on a system that we know it is capable of solving well. We believe that we need to set HYPRE_BoomerAMGSetNumFunctions(<num_unknowns>), where <num_unknowns> is the number of unknown functions in the system, in order to recover the expected Hypre performance.
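For concreteness, here is a minimal sketch of the kind of setup we use; the toy vector Poisson problem (with three unknown functions) stands in for our actual PDE system, and all problem details are placeholders:

from dolfin import *

# Toy stand-in for our system: 3 unknown functions per vertex
mesh = UnitCubeMesh(8, 8, 8)
V = VectorFunctionSpace(mesh, "CG", 1)
u, v = TrialFunction(V), TestFunction(V)
a = inner(grad(u), grad(v))*dx
L = dot(Constant((1.0, 1.0, 1.0)), v)*dx
bc = DirichletBC(V, Constant((0.0, 0.0, 0.0)), "on_boundary")

A, b = PETScMatrix(), PETScVector()
assemble_system(a, L, bc, A_tensor=A, b_tensor=b)

solver = PETScKrylovSolver("cg", "hypre_amg")  # CG + BoomerAMG via PETSc
solver.set_operator(A)
x = Function(V)
solver.solve(x.vector(), b)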
The Dolfin interface doesn't appear to offer a way to set this Hypre option directly. Knowing that Dolfin interfaces very closely with PETSc, I did a little research and found that if PETSc calls MatSetBlockSize(<matrix>, <num_unknowns>), then, upon calling Hypre, PETSc will automatically call HYPRE_BoomerAMGSetNumFunctions(...) correctly. For reference, please see the following thread: HYPRE with multiple variables.
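In principle the fix could then be applied by hand from Python, assuming Dolfin was built with petsc4py support; setBlockSize is the petsc4py wrapper for MatSetBlockSize, and 3 is our (hypothetical) number of unknowns. Continuing the sketch above:

Amat = down_cast(A).mat()  # underlying petsc4py Mat of the sketch's A
Amat.setBlockSize(3)       # PETSc should forward this to Hypre as
                           # HYPRE_BoomerAMGSetNumFunctions(3)
# Note: some PETSc versions reject a block-size change after
# preallocation, so this may need to happen before assembly.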
After doing a little more exploring, I found that the Dolfin implementation of PETScMatrix::init contains the correct call:
if (tensor_layout.block_size > 1)
{
  ierr = MatSetBlockSize(_matA, tensor_layout.block_size);
  if (ierr != 0) petsc_error(ierr, __FILE__, "MatSetBlockSize");
}
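To rule out the PETSc side, a standalone petsc4py check (toy 9x9 matrix, block size 3 chosen arbitrarily) suggests that a Mat created with a block size does report it back, so the mechanism this code wraps works on its own:

from petsc4py import PETSc

# Same MatSetBlockSize mechanism that PETScMatrix::init wraps above
M = PETSc.Mat().createAIJ([9, 9], bsize=3)
print(M.getBlockSize())  # -> 3 (expected)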
After assembling the matrix, A, I found a way to check its block size (please let me know if there is a better way to accomplish this):
Apetsc = down_cast(A)  # expose the PETSc backend of the Dolfin matrix
Amat = Apetsc.mat()    # the underlying petsc4py Mat
Amat.block_size        # petsc4py property wrapping MatGetBlockSize
The call to Amat.block_size returns 1, indicating that the block structure of the system is not being picked up. I am not familiar enough with Dolfin (yet) to track down where tensor_layout.block_size comes from or how/where to set it. Sorry for the long background explanation, but has anyone figured this out, or does anyone know a simple way to set this information so that we can recover the expected Hypre performance?