
Solve Poisson problem with Neumann BC

+12 votes

How to solve the Poisson problem with a pure Neumann boundary condition?

The usual Poisson problem with pure Neumann BC
$$ \int_\Omega \nabla u \cdot \nabla v \,\mathrm{d}x
= \int_\Omega f\,v\,\mathrm{d}x + \int_{\partial\Omega} g\,v\,\mathrm{d}s
\textrm{ for all } v\in V $$
is singular when $V = H^1(\Omega)$.

The necessary (and also sufficient in this setting) condition for existence of a solution is
$$ \int_\Omega f\,\mathrm{d}x + \int_{\partial\Omega} g\,\mathrm{d}s = 0 .$$
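This follows from testing with $v \equiv 1$: the left-hand side of the weak form vanishes because $\nabla 1 = 0$, leaving
$$ 0 = \int_\Omega \nabla u \cdot \nabla 1 \,\mathrm{d}x = \int_\Omega f\,\mathrm{d}x + \int_{\partial\Omega} g\,\mathrm{d}s . $$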

Possible approaches to implementation

  1. Supply the null space to a Krylov solver (which can handle it) and orthogonalize the right-hand side against the null space to fulfill the necessary condition. Implemented in the demo singular-poisson using the PETSc backend, GMRES (no need to use GMRES, CG is suitable here!) and no preconditioner.

  2. Restrict the problem by applying DirichletBC(..., method='pointwise') to just one DOF and take care of the necessary condition either algebraically (for example using normalize) or by projecting the original data at the DOLFIN Function level. Demo needed! A minimal sketch is given after this list.

  3. Add a Lagrange multiplier which takes care of both the necessary condition and the singularity of the original problem. Implemented in the demo neumann-poisson using a direct solver.
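Since approach 2 has no demo yet, here is a minimal sketch of it (the pinned corner vertex, its value, and the data f are placeholder choices, not taken from any demo):

from dolfin import *

mesh = UnitSquareMesh(20, 20)
V = FunctionSpace(mesh, 'CG', 1)

u = TrialFunction(V)
v = TestFunction(V)
f = Expression('sin(x[0]) * sin(x[1])')
a = inner(grad(u), grad(v))*dx
L = f*v*dx

A = assemble(a)
b = assemble(L)

# Take care of the necessary condition algebraically: shift b to zero average,
# i.e., orthogonalize it against the constant vector spanning the kernel of A
normalize(b, 'average')

# Restrict the problem: pin a single DOF (an arbitrary corner vertex here)
bc = DirichletBC(V, 0.0, 'near(x[0], 0.0) && near(x[1], 0.0)', method='pointwise')
bc.apply(A, b)

u_h = Function(V)
solve(A, u_h.vector(), b)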

asked Jan 23, 2014 by Jan Blechta FEniCS Expert (51,420 points)
edited Feb 5, 2014 by Jan Blechta

The problem is singular all right, but that doesn't usually mean you need to restrict the spaces. The original task would be to find one solution (out of the many that possibly exist). One possibility is to use a Krylov method on the problem, another is to extend the problem using Lagrange multipliers. The latter has some pitfalls, one being that it isn't immediately clear what a good preconditioner for the system is.

Please, add your method to the question as another possibility.

I already did.

I meant to the summary above, in the statement of the question. It is closest to approach 1, but you just claim that it works without setting the null space.

Maybe we could formulate it like this:
Q: How do I solve problem XYZ?
A1: "supply null space"
A2: "restrict the problem"
A3: "Krylov ..."
...
(Right now, the answers are basically provided in the question.)

Ok, next week.

3 Answers

+6 votes
 
Best answer

Krylov solvers can deal with singular systems natively as long as the problem is well-posed, i.e., the right-hand side is indeed in the image of the linear operator. Consequently, the original problem above can be solved with CG (it is symmetric positive semidefinite); an extension by Lagrange multipliers is not necessary.

A little bit of care has to be taken when using preconditioners. AMG, for example, will do an optimal job if you make sure that the coarse solve doesn't mess with the null space. Using an iterative solver (like Jacobi) on the coarse level does the trick, and AMG will then work as well as it does for a Dirichlet-Poisson-type problem, i.e., the number of Krylov iterations necessary to reach any given residual norm is independent of the size of the equation system.

The following code illustrates the behavior.

from dolfin import *

mesh = UnitSquareMesh(20, 20)
V = FunctionSpace(mesh, 'CG', 1)

f = Expression('sin(x[0]) * sin(x[1])')

u = TrialFunction(V)
v = TestFunction(V)
a = dot(grad(u), grad(v)) * dx
L = f * v * dx

A = assemble(a)
b = assemble(L)

# Make sure the system is consistent, i.e., the right-hand side is in the image
# of A. This is equivalent to the rhs not having a component in the kernel of
# A (since A is symmetric).
e = Function(V)
e.interpolate(Constant(1.0))
evec = e.vector()
evec /= norm(evec)
alpha = b.inner(evec)
b -= alpha * evec

x = Function(V)

prec = PETScPreconditioner('hypre_amg')
PETScOptions.set('pc_hypre_boomeramg_relax_type_coarse', 'jacobi')
solver = PETScKrylovSolver('cg', prec)
solver.parameters['absolute_tolerance'] = 0.0
solver.parameters['relative_tolerance'] = 1.0e-10
solver.parameters['maximum_iterations'] = 100
solver.parameters['monitor_convergence'] = True
# Extract PETSc backend objects and solve the system
A_petsc = as_backend_type(A)
b_petsc = as_backend_type(b)
x_petsc = as_backend_type(x.vector())
solver.set_operator(A_petsc)
solver.solve(x_petsc, b_petsc)
answered Jan 23, 2014 by nschloe FEniCS User (7,120 points)
selected Feb 5, 2014 by Jan Blechta

Errata

solver = PETScKrylovSolver('cg')

should be

solver = PETScKrylovSolver('cg', prec)

Great solution to my problem. I would like to use it on a machine where only petsc4py is installed. What would your code look like using only petsc4py? My attempt:

  from petsc4py import PETSc
  comm = PETSc.COMM_WORLD
  method_PETSc = PETSc.KSP()
  method_PETSc.create(comm)
  pc_PETSc = method_PETSc.getPC()
  #PETSc.Options().setValue('pc_hypre_boomeramg_relax_type_coarse', 'jacobi')
  pc_PETSc.setType('hypre')
  pc_PETSc.setFromOptions()
  print '### Preconditioner set to: '+str(pc_PETSc.getType())
  pc_PETSc.setHYPREType('boomeramg')

returns:

### Preconditioner set to: hypre
 Traceback (most recent call last):
   File "test_prec.py", line 11, in <module>
     pc_PETSc.setHYPREType('boomeramg')
   File "PETSc/PC.pyx", line 320, in petsc4py.PETSc.PC.setHYPREType (src/petsc4py.PETSc.c:142383)
 petsc4py.PETSc.Error: error code 56
 [0] PCHYPRESetType() line 14 in src/include/compat/hypre.h
 [0] No support for this operation for this object type
 [0] PCHYPRESetType() requires HYPRE 

What am I doing wrong?
Thanks a lot for your support.
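The traceback answers the question: "PCHYPRESetType() requires HYPRE" (error code 56) means this PETSc build was configured without HYPRE, so BoomerAMG cannot be selected from petsc4py on that machine at all. Below is a pure-petsc4py sketch of the same idea that avoids HYPRE by using PETSc's built-in GAMG, and lets a NullSpace object do the orthogonalization instead of the manual projection above; the 1D finite-difference Neumann Laplacian is only a stand-in operator so that the snippet stays self-contained:

from petsc4py import PETSc

n = 50
# Assemble a 1D Neumann Laplacian directly; it is singular with the
# constant vector in its kernel, just like the FEM pure-Neumann problem
A = PETSc.Mat().createAIJ([n, n], nnz=3)
for i in range(n):
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
    A.setValue(i, i, 1.0 if i in (0, n - 1) else 2.0)
A.assemble()

# Attach the null space of constants so the Krylov solver can handle it
ns = PETSc.NullSpace().create(constant=True)
A.setNullSpace(ns)

b = A.createVecLeft()
b.setRandom()
ns.remove(b)  # project out the constant component: makes the rhs consistent

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setType('cg')
ksp.getPC().setType('gamg')  # PETSc's native AMG, no HYPRE required
ksp.setTolerances(rtol=1.0e-10)
ksp.setFromOptions()

x = A.createVecRight()
ksp.solve(b, x)
print 'converged in', ksp.getIterationNumber(), 'iterations'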

0 votes

Implementation of approach 3 using

  • the modified neumann-poisson demo by Marie,
  • MINRES (the problem is symmetric indefinite),
  • BoomerAMG with Jacobi relaxation on the coarse level, by Nico,
  • the preconditioning operator $\int_\Omega ( \nabla u \cdot \nabla v + c\,d ) \,\mathrm{d}x$, inspired by the stokes-iterative demo by Garth.

Unfortunately, PETSc's MINRES supports neither KSP_NORM_UNPRECONDITIONED nor PC_RIGHT, so residuals/cost cannot be compared to the other implementations. A comparison could be done using errornorm with some manufactured solution (a sketch is appended after the code below).

Deprecation warning: Nico's approach should be preferred when a Krylov solver is to be employed. There is no need to introduce a Lagrange multiplier when the Krylov solver can handle the null space of constants. A big advantage of Nico's approach is that it keeps the problem positive semidefinite rather than making it indefinite.

from dolfin import *

# Create mesh and define function space
mesh = UnitSquareMesh(64, 64)
V = FunctionSpace(mesh, "CG", 1)
R = FunctionSpace(mesh, "R", 0)
W = V * R

# Define variational problem
(u, c) = TrialFunctions(W)
(v, d) = TestFunctions(W)
f = Expression("10*exp(-(pow(x[0] - 0.5, 2) + pow(x[1] - 0.5, 2)) / 0.02)")
g = Expression("-sin(5*x[0])")
a  = (inner(grad(u), grad(v)) + c*v + u*d)*dx
ap = (inner(grad(u), grad(v)) + c*d)*dx
L  = f*v*dx + g*v*ds

# Assemble
A  = assemble(a)
Ap = assemble(ap)
b  = assemble(L)

# Create Krylov solver
#PETScOptions.set("ksp_norm_type", "unpreconditioned")
#PETScOptions.set("ksp_pc_side", "right")
prec = PETScPreconditioner('hypre_amg')
prec.parameters['hypre']['BoomerAMG']['relax_type_coarse'] = 'jacobi'
solver = PETScKrylovSolver('minres', prec)
#solver.parameters['convergence_norm_type'] = 'true'
solver.parameters['absolute_tolerance'] = 0.0
solver.parameters['relative_tolerance'] = 1e-15 
solver.parameters['maximum_iterations'] = 100
solver.parameters['monitor_convergence'] = True

# Solve system
w = Function(W)
solver.set_operators(A, Ap)
solver.solve(w.vector(), b)
(u, c) = w.split()

# Plot solution
plot(u, interactive=True)
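
As a sketch of the errornorm comparison mentioned above (the manufactured solution u_e is a hypothetical choice; using it properly would mean replacing the data with f = 2*pi*pi*u_e and g = 0):

# Hypothetical manufactured solution: zero mean and zero normal derivative
# on the unit square, hence compatible with the pure Neumann problem
u_e = Expression('cos(pi*x[0])*cos(pi*x[1])')

# The multiplier equation already enforces zero mean of u, so compare directly;
# solutions from approaches 1 and 2 would first need shifting to zero mean
u_h, c_h = w.split(deepcopy=True)
print errornorm(u_e, u_h, 'H1')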
answered Jan 23, 2014 by Jan Blechta FEniCS Expert (51,420 points)
edited Feb 5, 2014 by Jan Blechta

In this approach, you invert the matrix Ap, a 2x2 block matrix. The blocks (1,2) and (2,1) are zero, and the block (1,1) is exactly the original operator which you're trying not to invert in the first place.

It seems that this approach is unnecessarily complicated. Just invert Ap(1,1).

So you suggest switching the preconditioning operator to

ap = (inner(grad(u), grad(v)) + 0.0*d)*dx

do you? This does not work.

That operator is even less regular, so it's not too surprising that it won't work.
What I suggest is to invert A(1,1) directly, using AMG. I do that in the alternative answer, and you do that implicitly as well, but on top of that you decorate it with a Lagrange multiplier.

So basically, you don't suggest a better way of solving the Lagrange-multiplier problem, but rather tell us not to solve it at all.

BTW, I don't see any good reason now why this problem is evil. Supposedly it is evil because:

  • DOLFIN's SparsityPatternBuilder and Assembler do not like it
  • Garth and Nico do not like it

I would like to see some evidence.

There is also a possibility that the Lagrange multiplier column makes the problem numerically better behaved than the original problem, which depends on the necessary condition being enforced in floating-point arithmetic. Indeed, it seems hard, if not impossible, to tighten the tolerance of your solution.

I'd indeed rather solve the problem without the Lagrange multipliers. It seems that the multiplier is introduced to make the problem regular, but that isn't actually necessary; the 0-eigenvalue doesn't hurt convergence. Also, the extended problem must converge more slowly than the original problem since it has the exact same spectrum except that it splits the 0-eigenvalue (which doesn't influence the convergence) into two separate eigenvalues which the Krylov iteration does need to account for. On top of that, one of them is negative, so you're losing the comfort of your (semi)definiteness and need to move to indefinite methods (MINRES).

Ok, I agree. Also, some scaling of the multiplier terms may be useful to place the additional eigenvalues well, as Kent-Andre suggested.

Nevertheless, can you convince me that you can tighten a tolerance in your setting?

Hm, what do you mean by "tighten a tolerance"? You should be able to converge to any level of tolerance well above machine precision ($10^{-13}$ or so).

0 votes

Personally, I simply apply a Dirichlet BC to one node. It works very well: I performed some homogenization calculations on a microstructure to get the effective diffusivity of the medium, and I found good agreement between the Dirichlet-Dirichlet, Dirichlet-Neumann and Neumann-Neumann cases.

answered Nov 10, 2016 by fussegli FEniCS Novice (700 points)