
parallel solver calls

0 votes

This is related to my previous question: https://fenicsproject.org/qa/12201/embarassingly-parallel-fenics, but it may shed more light on what my problem actually is. I'm trying to solve the same PDE with different parameter values in parallel: each process is given a different input parameter a, solves the PDE, and returns the solution u(a).
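For concreteness, here is a minimal Python sketch of that pattern (the code below in the question is C++, but the idea is the same): each rank builds its mesh on MPI_COMM_SELF and solves its own copy of the problem. The per-rank parameter a_value is a hypothetical stand-in for however the parameters are actually assigned:

from dolfin import *

# The global rank decides which parameter this process solves for.
rank = MPI.rank(mpi_comm_world())
a_value = 1.0 + rank  # hypothetical per-process parameter

# Everything below lives on MPI_COMM_SELF, so each rank works alone.
mesh = UnitSquareMesh(mpi_comm_self(), 32, 32)
V = FunctionSpace(mesh, "CG", 1)
u, v = TrialFunction(V), TestFunction(V)
bc = DirichletBC(V, Constant(0.0), "on_boundary")

w = Function(V)
solve(Constant(a_value)*dot(grad(u), grad(v))*dx == Constant(1.0)*v*dx, w, bc)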

I've created the mesh:

auto mesh = make_shared<dfn::UnitSquareMesh>(MPI_COMM_SELF, 32, 32);
auto space = make_shared<Model::FunctionSpace>(mesh);

which works fine. Then I create the forms:

Model::LinearForm F(space);
F.u = u; F.f = f;
Model::JacobianForm J(space, space);
J.u = u;

dfn::Parameters params("nonlinear_variational_solver");
dfn::Parameters newton_params("newton_solver");
newton_params.add("relative_tolerance", tol);
newton_params.add("absolute_tolerance", tol);
params.add(newton_params);  // nest the Newton settings inside the solver parameters

dfn::solve(F == 0, *u, boundary_conditions, J, params);
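For reference, here is a rough Python equivalent of this nonlinear solve, a sketch in which a simple hypothetical residual stands in for the generated Model forms; it also shows how the Newton tolerances are nested inside the solver parameters:

from dolfin import *

mesh = UnitSquareMesh(mpi_comm_self(), 32, 32)
V = FunctionSpace(mesh, "CG", 1)
u = Function(V)
v = TestFunction(V)
# Hypothetical nonlinear residual; the Jacobian is derived automatically.
F = (1 + u**2)*dot(grad(u), grad(v))*dx - Constant(1.0)*v*dx
bc = DirichletBC(V, Constant(0.0), "on_boundary")

solve(F == 0, u, bc,
      solver_parameters={"newton_solver": {"relative_tolerance": 1e-8,
                                           "absolute_tolerance": 1e-8}})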

This works fine if I run it with

mpirun -n 1 ./model

but if I use

mpirun -n 2 ./model

I get the error:

*** -------------------------------------------------------------------------
*** Error:   Unable to creating EigenVector.
*** Reason:  EigenVector does not support parallel communicators.
*** Where:   This error was encountered inside EigenVector.cpp.
*** Process: 0
*** 
*** DOLFIN version: 2017.1.0.dev0
*** Git changeset:  f8389e7178fcf6f74e392f084089da533cbc1501
*** -------------------------------------------------------------------------
asked Jan 22, 2017 by davisad FEniCS Novice (470 points)

1 Answer

0 votes

Can you post a complete minimal working example? I tried this (in Python):

from dolfin import *
parameters['linear_algebra_backend'] = 'Eigen'

mesh = UnitSquareMesh(mpi_comm_self(), 20, 20)
Q = FunctionSpace(mesh, "CG", 1)
v = TestFunction(Q)
u = TrialFunction(Q)
a = dot(grad(u), grad(v))*dx
L = Constant(1.0)*v*dx

def boundary(x):
    return x[0] < DOLFIN_EPS or x[0] > 1.0 - DOLFIN_EPS
bc = DirichletBC(Q, Constant(0.0), boundary)

w = Function(Q)
solve(a==L, w, bc)

and it seems to work fine in parallel. Probably there is an implicit assumption of MPI_COMM_WORLD somewhere in your code (or inside the dolfin library calls it makes).

answered Jan 24, 2017 by chris_richardson FEniCS Expert (31,740 points)

You are quite correct. Actually, I don't think the problem is with my code. The problem is that I didn't have PETSc (or, more generally, any distributed-memory vectors or parallel solvers) enabled when I compiled DOLFIN. Apparently, FEniCS won't fall back to the serial backend (at least not by default) even when it detects that the solve only involves one process?

Either way, I'm recompiling with PETSc and, hopefully, that will work.
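For anyone who hits the same thing, a quick sketch of how to check at runtime which backends a DOLFIN build actually supports (using has_linear_algebra_backend, shown here in Python):

from dolfin import *

# True only if DOLFIN was compiled against PETSc
# (i.e. with distributed vectors and parallel solvers).
print(has_linear_algebra_backend("PETSc"))

# Eigen is serial only; with it, meshes and solves must stay on MPI_COMM_SELF.
print(has_linear_algebra_backend("Eigen"))

# The backend DOLFIN is currently using.
print(parameters["linear_algebra_backend"])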

EDIT: see https://fenicsproject.org/qa/12247/nonlinear-solver-in-parallel for a related question about what breaks next.
