
block preconditioning, AMG blocks

+10 votes

I would like to create a block preconditioner. As far as I can tell, the current recommendation is to use cbc.block. Is this correct?

In cbc.block, is it possible to create a block preconditioner with a PETSc AMG strategy?

asked Sep 10, 2013 by nschloe FEniCS User (7,120 points)

2 Answers

+5 votes

I have working code using PETSc FieldSplit for block preconditioning. There are some issues around the design that we're still working on. Once that is cleaned up, I'll add it to the stokes-iterative demo. This should be very soon.

I would recommend using block preconditioners via the linear algebra backends rather than cbc.block.

answered Sep 11, 2013 by Garth N. Wells FEniCS Expert (35,930 points)
edited Sep 11, 2013 by nschloe

I'm quite happy to hear that. I'll check out the fieldsplit branch.

Garth, can you comment on the relative merits of the two approaches and why you favor FieldSplit? Is it because getting to reuse your top-level simulation code outweighs being tied to PETSc, or is there some other consideration?

I ask because we're about to embark on some block preconditioners and are looking at options.

Thanks,
Rob

I think that if good block preconditioning is supported by the linear algebra backend, then that's the place to use it rather than building one's own block library on top. We're using FieldSplit to start, but we're aiming to make it look 'nice' for the user, with user code decoupled, as far as is practical, from the linear algebra backend. Some reasons we're working towards using PETSc FieldSplit (and possibly supporting similar functionality via Trilinos in the future) rather than using cbc.block are:

  • Speed
  • Actively developed
  • Well tested
  • Good support from PETSc devs
  • C++ interface (with Python for free via SWIG)
  • No need to assemble separate matrices
  • No need to modify existing assembly code
  • Easy to switch between direct, standard preconditioned and block preconditioned systems (no need to change the matrix)

Thanks, this makes sense. It's an interesting question as to who should own the block structure, but writing your top-level assembly code once and being able to switch is a big user advantage, and as long as you don't require both block structure and something accessible only in a particular (non-block) backend, it's probably the best pathway.

If we wanted to use this, do we need to clone the fieldsplit branch, or is this in (or soon-to-be in) the main one?

PETSc has an option for matrices that controls the internal storage format: either one big matrix with the fields indicated via index sets, in which case any preconditioner can be applied, or, more efficiently, nested storage (MatNest), in which case some preconditioners cannot be used (e.g., LU for the whole matrix).
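For illustration, the index-set route with petsc4py looks roughly like the untested sketch below. Here A, b, x, is_u and is_p are assumed to be an already-assembled monolithic PETSc matrix, the right-hand side/solution vectors, and index sets marking the two fields:

    from petsc4py import PETSc

    # Assumed: A is the assembled monolithic PETSc matrix, b and x are PETSc
    # vectors, and is_u/is_p are PETSc.IS objects marking the dofs of the
    # two fields.
    ksp = PETSc.KSP().create()
    ksp.setOperators(A)
    ksp.setType('minres')

    pc = ksp.getPC()
    pc.setType('fieldsplit')
    pc.setFieldSplitIS(('u', is_u), ('p', is_p))
    pc.setFieldSplitType(PETSc.PC.CompositeType.ADDITIVE)  # block-diagonal

    # AMG on each diagonal block (assumes PETSc was configured with hypre;
    # 'gamg' is an alternative)
    ksp.setUp()
    for sub in pc.getFieldSplitSubKSP():
        sub.setType('preonly')
        sub.getPC().setType('hypre')

    ksp.solve(b, x)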

The fieldsplit branch has some of the required functionality, but more is required. I've been reluctant to merge it into master because the interface is very unstable at present.

Has there been any news on this? What is the current status?

We now have things working with PETSc MatNest. A demo should land in the next few weeks.
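For the nested-storage variant, something along these lines should work (again an untested sketch; A00, A01 and A10 are assumed to be already-assembled PETSc sub-matrices):

    from petsc4py import PETSc

    # Assumed: A00, A01, A10 are assembled PETSc sub-matrices; the (1,1)
    # block is empty.
    A = PETSc.Mat().createNest([[A00, A01],
                                [A10, None]])
    isets, _ = A.getNestISs()   # index sets describing the two row blocks

    ksp = PETSc.KSP().create()
    ksp.setOperators(A)
    pc = ksp.getPC()
    pc.setType('fieldsplit')
    pc.setFieldSplitIS(('0', isets[0]), ('1', isets[1]))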

+3 votes

If simple block preconditioning strategies like the one in the stokes-iterative demo are sufficient for your needs, then you don't need cbc.block.

However, cbc.block has far more advanced capabilities because you can compose expression trees of operators; see Chapter 35 of the FEniCS book. For many mixed problems, e.g. the Navier-Stokes equations, this is essential.
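As a rough, untested sketch in the style of the cbc.block demos (a00, a01, a10, m, L0 and L1 are assumed UFL forms, and the exact imports/signatures may differ between cbc.block versions), the composition looks something like:

    from dolfin import assemble
    from block import block_mat, block_vec
    from block.iterative import MinRes, ConjGrad
    from block.algebraic.trilinos import ML

    # Assemble the individual blocks (m is e.g. a pressure mass matrix form)
    A, B, C, M = assemble(a00), assemble(a01), assemble(a10), assemble(m)
    b0, b1 = assemble(L0), assemble(L1)

    AA = block_mat([[A, B],
                    [C, 0]])
    bb = block_vec([b0, b1])

    # Compose operators symbolically: an inner AMG-preconditioned CG solve
    # for the (0,0) block and AMG on the mass matrix for the (1,1) block
    prec = block_mat([[ConjGrad(A, precond=ML(A), tolerance=1e-6), 0],
                      [0, ML(M)]])

    AAinv = MinRes(AA, precond=prec, tolerance=1e-8)
    x = AAinv * bb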

cbc.block is currently tied to Trilinos, but it could probably be extended to PETSc fairly easily given that PETSc now has a mature Python interface. The interface to Trilinos (see the trilinos directory) is small, and a similar one for PETSc would be feasible to implement if needed.

cbc.block is actively maintained and used, but relies on some assembly code in DOLFIN that changes once in a while. Other than that it is very stable.

answered Sep 23, 2013 by Kent-Andre Mardal FEniCS Expert (14,380 points)

Kent,
In what way is cbc.block tied to Trilinos? Nothing backend-specific should be needed at the level of an array of matrices/GenericTensors. Is it because you need a Python interface to instantiate particular solver/preconditioner objects, and that currently happens to be PyTrilinos?

Yes. And for this reason it should not be a big effort to extend to PETSc
(given that the Python interface to PETSc is stable and well-functioning).

The Trilinos Python interface is not sufficiently comprehensive for serious computation. For example, it is not possible to set the near-nullspace for a preconditioner (this was the case 12 months ago - it may have changed in the meantime). The ML/AMG-based preconditioners in cbc.block are therefore not optimal, as they do not set the near-nullspace, and the iteration count will grow with problem size. It is possible to attach the nullspace from Python using DOLFIN additions, which have been implemented in the C++ interface.
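For reference, constructing such a near-nullspace (rigid body modes for 3D elasticity) on the DOLFIN side looks roughly like the untested sketch below, adapted from the elasticity demo; how the basis is then attached to the AMG preconditioner depends on the backend and DOLFIN version:

    from dolfin import VectorSpaceBasis

    def build_rigid_body_modes(V, x):
        """Rigid body modes for 3D elasticity, intended as the AMG
        near-nullspace. V is a vector function space, x a template vector."""
        basis = [x.copy() for i in range(6)]

        # Translations
        V.sub(0).dofmap().set(basis[0], 1.0)
        V.sub(1).dofmap().set(basis[1], 1.0)
        V.sub(2).dofmap().set(basis[2], 1.0)

        # Rotations
        V.sub(0).set_x(basis[3], -1.0, 1); V.sub(1).set_x(basis[3],  1.0, 0)
        V.sub(0).set_x(basis[4],  1.0, 2); V.sub(2).set_x(basis[4], -1.0, 0)
        V.sub(2).set_x(basis[5],  1.0, 1); V.sub(1).set_x(basis[5], -1.0, 2)

        for vec in basis:
            vec.apply("insert")

        vsb = VectorSpaceBasis(basis)
        vsb.orthonormalize()
        return vsb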

You can set the nullspace in ML. We have a demo in cbc.block where this is used to remove the rigid motions in a poroelasticity problem (it is called biot.py).

...