This is a read-only copy of the old FEniCS QA forum. Please visit the new QA forum to ask questions.

Setting the number of unknowns for Hypre

+2 votes

We are using a PETScKrylovSolver with CG and Hypre preconditioning to solve a system of PDEs. As it stands, Hypre is performing quite poorly on a system that we know it is capable of solving well. We believe we need to call HYPRE_BoomerAMGSetNumFunctions(<num_unknowns>), where <num_unknowns> is the number of unknown functions in the system, in order to recover the expected Hypre performance.

The Dolfin interface doesn't appear to offer a way to set this Hypre option directly. Knowing that Dolfin interfaces closely with PETSc, I did a little research and found that if PETSc calls MatSetBlockSize(<matrix>, <num_unknowns>), then, upon handing the matrix to Hypre, PETSc will automatically call HYPRE_BoomerAMGSetNumFunctions(...) correctly. As a reference, please read the following thread: HYPRE with multiple variables.
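For intuition, here is a plain-Python sketch (not DOLFIN or PETSc code): as I understand the Hypre defaults, with an interleaved (node-major) dof ordering BoomerAMG assigns each matrix row to a field by taking the row index modulo the number of functions, which is why the block size must match the number of unknowns per node.

```python
# Plain-Python illustration only (no DOLFIN/PETSc): as I understand it,
# with interleaved dof ordering and num_functions = 3, BoomerAMG maps
# row i to field i % num_functions by default.
num_functions = 3      # unknowns per node: u_x, u_y, p
num_nodes = 4
field_of_row = [i % num_functions for i in range(num_functions * num_nodes)]
# e.g. rows 0, 3, 6, 9 belong to u_x; rows 2, 5, 8, 11 belong to p
```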

After doing a little more exploring, I found the Dolfin implementation of PETScMatrix::init contains the correct call:

if (tensor_layout.block_size > 1)
{
  ierr = MatSetBlockSize(_matA, tensor_layout.block_size);
  if (ierr != 0) petsc_error(ierr, __FILE__, "MatSetBlockSize");
}

After assembling the matrix, A, I found a way to check its block size (please let me know if there is a better way to accomplish this):

Apetsc = down_cast(A)
Amat = Apetsc.mat()
Amat.block_size

Accessing Amat.block_size returns 1, indicating that the block structure of the system is not being picked up correctly. I am not yet familiar enough with Dolfin to track down where tensor_layout.block_size comes from or how/where to set it. Sorry for the long background explanation, but has anyone figured this out, or does anyone know a simple way to set this information so that we can recover the correct Hypre performance?

asked Feb 16, 2015 by leibs FEniCS Novice (360 points)

This is strange. Which dolfin-version are you running?

The following returns the expected for me (2):

mesh = UnitSquareMesh(4,4)
V = VectorFunctionSpace(mesh, "CG", 1)
u,v = TrialFunction(V), TestFunction(V)
M = assemble(dot(u,v)*dx)
m = as_backend_type(M).mat()
print m.block_size

Thanks for your help with this; your answer gives me hope that we can get to the bottom of it. This is strange indeed. We are on Dolfin version 1.5.0, and your example code returns 2 for us as well, so our problem must be an artifact of how we are building our system. Here is a minimal working example showing the issue; I would expect block_size to return 3, but it returns 1:

from dolfin import *

mesh = UnitSquareMesh(4, 4)

order = 1
Vv = VectorFunctionSpace(mesh, 'Lagrange', order)
Vs = FunctionSpace(mesh, 'Lagrange', order)
V = Vv*Vs

def LSop(u, p):
    L1 = u - grad(p)
    L2 = div(u)
    L3 = curl(u)
    return [L1, L2, L3]


(u, p) = TrialFunctions(V)
(v, q) = TestFunctions(V)

Lu = LSop(u, p)
Lv = LSop(v, q)

a = dot(Lu[0], Lv[0])*dx + Lu[1]*Lv[1]*dx + dot(Lu[2], Lv[2])*dx
A = assemble(a)

Aback = as_backend_type(A).mat()
print Aback.block_size

Thanks again.

Just a quick update: we found a way around this by not building V = Vv*Vs, and instead constructing a "flat" VectorFunctionSpace with the total number of unknowns and extracting the trial and test functions component-wise:

...
Vsp = VectorFunctionSpace(mesh, 'Lagrange', order, 3)

UU = TrialFunctions(Vsp)
VV = TestFunctions(Vsp)

u = as_vector((UU[0], UU[1]))
p = UU[2]

v = as_vector((VV[0], VV[1]))
q = VV[2]
...

Using the above formulation recovers the correct block_size of 3, and Hypre again performs as expected. We would, of course, still prefer not to construct our trial and test functions component by component by hand, but we can happily live with this. Maybe you know of a more elegant way to set this up.
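If this workaround is needed in more than one place, the component bookkeeping can be factored out. A minimal sketch in plain Python, where group_components is a hypothetical helper of my own and the strings stand in for the components returned by TrialFunctions(Vsp):

```python
# Hypothetical helper: slice the flat list of components according to a
# field layout, e.g. [2, 1] for a 2-vector u followed by a scalar p.
def group_components(components, layout):
    groups, start = [], 0
    for size in layout:
        groups.append(components[start:start + size])
        start += size
    return groups

UU = ['u_x', 'u_y', 'p']          # stand-ins for TrialFunctions(Vsp)
u_parts, p_parts = group_components(UU, [2, 1])
# in DOLFIN one would then take u = as_vector(u_parts) and p = p_parts[0]
```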

I'm glad you got it working. However, it's definitely not a proper fix, because there's no way to set this up if the element or order differs between the blocks.

Could you please report an issue to https://bitbucket.org/fenics-project/dolfin/issues?status=new&status=open?

If the elements (including degrees) of Vv and Vs differ, a block size greater than 1 no longer makes sense.

You are correct. I've submitted an enhancement report suggesting exactly that: if a product space is composed of subspaces with the same element type and order, then the block_size should be set appropriately. See here: https://bitbucket.org/fenics-project/dolfin/issue/471/petsc-matsetblocksize-for-product-function.
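For what it's worth, the suggested check could look something like the following plain-Python sketch; common_block_size and the (family, degree, value_size) tuples are my own illustration, not DOLFIN code. The idea is to report a block size greater than 1 only when every subspace uses the same element family and degree.

```python
# Hypothetical sketch of the suggested behaviour: set a block size > 1
# only when all subelements share the same family and degree.
def common_block_size(subelements):
    # subelements: list of (family, degree, value_size) tuples
    if len({(fam, deg) for fam, deg, _ in subelements}) != 1:
        return 1                     # mixed elements: no meaningful block size
    return sum(vs for _, _, vs in subelements)

# Vv (vector Lagrange 1, value size 2) * Vs (scalar Lagrange 1) -> block size 3
same = common_block_size([('Lagrange', 1, 2), ('Lagrange', 1, 1)])
# mixing degrees falls back to block size 1
mixed = common_block_size([('Lagrange', 1, 2), ('Lagrange', 2, 1)])
```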

...