
Solving a problem in parallel with MPI & PETSc

+5 votes

Hi all,

I'm not sure I really understand how FEniCS/PETSc solves problems in parallel with MPI. The program I am trying to understand is a modification of the Cahn-Hilliard demo (below, with PETScSNESSolver replacing the Newton solver).

Run with:

mpirun -np 3 python demo_cahn-hilliard.py

FEniCS seems to split the problem spatially and essentially run three instances of the code, one on each chunk (I remember something about ghost cells being used to communicate between chunks, but I can't find any documentation on them). I also get three different plots. Meanwhile, the PETSc output does say it is using the three MPI processes too...

I had thought that I would have a single FEniCS program running, which would call PETSc to solve the system in parallel. Obviously I'm missing something. Can anyone point me to a few demos of how this works, or some documentation, please?

"""This demo illustrates how to use of DOLFIN for solving the Cahn-Hilliard
equation, which is a time-dependent nonlinear PDE """

# Copyright (C) 2009 Garth N. Wells
#
# This file is part of DOLFIN.
#
# DOLFIN is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# DOLFIN is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with DOLFIN. If not, see <http://www.gnu.org/licenses/>.
#
# First added:  2009-06-20
# Last changed: 2013-11-20

# Begin demo

import random
from dolfin import *

# Class representing the initial conditions
class InitialConditions(Expression):
    def __init__(self):
        random.seed(2 + MPI.process_number())
    def eval(self, values, x):
        values[0] = 0.63 + 0.02*(0.5 - random.random())
        values[1] = 0.0
    def value_shape(self):
        return (2,)

# Class for interfacing with the Newton solver
class CahnHilliardEquation(NonlinearProblem):
    def __init__(self, a, L):
        NonlinearProblem.__init__(self)
        self.L = L
        self.a = a
        self.reset_sparsity = True
    def F(self, b, x):
        assemble(self.L, tensor=b)
    def J(self, A, x):
        assemble(self.a, tensor=A, reset_sparsity=self.reset_sparsity)
        #self.reset_sparsity = False

# Model parameters
lmbda  = 1.0e-02  # surface parameter
dt     = 5.0e-06  # time step
theta  = 0.5      # time stepping family, e.g. theta=1 -> backward Euler, theta=0.5 -> Crank-Nicolson

# Form compiler options
parameters["form_compiler"]["optimize"]     = True
parameters["form_compiler"]["cpp_optimize"] = True
parameters["form_compiler"]["representation"] = "quadrature"

# Create mesh and define function spaces
mesh = UnitSquareMesh(96, 96)
V = FunctionSpace(mesh, "Lagrange", 1)
ME = V*V

# Define trial and test functions
du    = TrialFunction(ME)
q, v  = TestFunctions(ME)

# Define functions
u   = Function(ME)  # current solution
u0  = Function(ME)  # solution from previous converged step

# Split mixed functions
dc, dmu = split(du)
c,  mu  = split(u)
c0, mu0 = split(u0)

# Create initial conditions and interpolate
u_init = InitialConditions()
u.interpolate(u_init)
u0.interpolate(u_init)

# Compute the chemical potential df/dc
c = variable(c)
f    = 100*c**2*(1-c)**2
dfdc = diff(f, c)

# mu_(n+theta)
mu_mid = (1.0-theta)*mu0 + theta*mu

# Weak statement of the equations
L0 = c*q*dx - c0*q*dx + dt*dot(grad(mu_mid), grad(q))*dx
L1 = mu*v*dx - dfdc*v*dx - lmbda*dot(grad(c), grad(v))*dx
L = L0 + L1

F = L

# Compute directional derivative about u in the direction of du (Jacobian)
a = derivative(L, u, du)

# Create nonlinear problem and Newton solver
problem = CahnHilliardEquation(a, L)
solver = PETScSNESSolver()
solver.parameters["relative_tolerance"] = 1e-6

# Output file
file = File("output.pvd", "compressed")

#set_log_level(ERROR)
# Step in time
t = 0.0
T = 50*dt
while (t < T):
    t += dt
    u0.vector()[:] = u.vector()
    #solver.solve()
    solver.solve(problem, u.vector())

    file << (u.split()[0], t)

plot(u.split()[0])
interactive()
asked Mar 20, 2014 by mwelland FEniCS User (8,410 points)

1 Answer

+5 votes
 
Best answer

FEniCS seems to split the problem spatially and essentially run three instances of the code, one on each chunk

Exactly. Moreover, there is communication between processes so that the results on each process are coupled - for instance through the definition (say, continuity) of the function space, or through penalization terms in the weak form for DG methods.
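
A quick way to see this partitioning is to print what each process owns. This is a minimal sketch, assuming the DOLFIN 1.3-era API used in the question (MPI.process_number(); newer versions use MPI.rank(mpi_comm_world())): each process reports only its local share of the cells and of the solution vector, while the global vector size is the same everywhere because PETSc stores it as one distributed object.

from dolfin import *

mesh = UnitSquareMesh(96, 96)
V = FunctionSpace(mesh, "Lagrange", 1)
u = Function(V)

rank = MPI.process_number()  # per-process rank (newer DOLFIN: MPI.rank(mpi_comm_world()))

# num_cells() and local_size() are per-process quantities; size() is global.
print("Process %d: %d local cells, %d of %d vector entries"
      % (rank, mesh.num_cells(), u.vector().local_size(), u.vector().size()))

Run it with mpirun -np 3 python and each rank prints a different local share of the same global system.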

answered Mar 24, 2014 by Jan Blechta FEniCS Expert (51,420 points)
selected Mar 24, 2014 by mwelland

Thanks Jan, so how does it work with the solver then? Are there three different solvers, each solving its own chunk, or is it one solver solving all of it (distributed over the three cores)?

Thanks

There are many kinds of distributed (or threaded) solvers which DOLFIN can call. Don't worry: they solve the coupled system, as it is not usually block-diagonal.

Your plots are simply partitioned by process. Store the results to a file and visualize them with ParaView to get a non-partitioned visualization.
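
Concretely, on the solver question: the SNES solve is a single solve over the whole distributed matrix; you only choose which (parallel-capable) Krylov method and preconditioner it uses inside. A hedged sketch, assuming PETScSNESSolver exposes the same "linear_solver"/"preconditioner" parameters as NewtonSolver - check info(solver.parameters, True) for the exact names in your DOLFIN version:

solver = PETScSNESSolver()
solver.parameters["relative_tolerance"] = 1e-6
# Assumed parameter names; pick a Krylov method/preconditioner that runs in
# parallel (jacobi is for illustration only; a stronger preconditioner is
# usually needed for the mixed Cahn-Hilliard system).
solver.parameters["linear_solver"] = "gmres"
solver.parameters["preconditioner"] = "jacobi"

Each process then assembles only its own rows, but the Krylov iterations couple them into one global solve.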

Thanks, actually it's not the plots that I am worried about. I have a simple adaptive timestepper going that is taking different time steps (based on the norm of the change) for each chunk! From what you tell me, I think I need to have one node collect all the norms of the chunks and then broadcast the new time step. I will keep toiling away.

Many DOLFIN functions (like the norm functions) gather the result automatically for you. For simple things you can use MPI.max, MPI.min, and MPI.sum.
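
For example, a minimal sketch of a globally consistent time-step choice (the adaptation rule here is hypothetical, and it uses the old-style MPI wrappers matching the question's code - newer DOLFIN versions expect a communicator as the first argument, e.g. MPI.max(mpi_comm_world(), x)):

from dolfin import *
import numpy

def choose_dt(u, u0, dt, target=0.05, dt_max=1e-4):
    """Hypothetical adaptive rule giving one dt that every process agrees on."""
    # .array() returns only this process's portion of the vector, so this
    # maximum is a purely local quantity and differs between processes.
    local_change = numpy.abs(u.vector().array() - u0.vector().array()).max()

    # Reduce explicitly so every process sees the same value before picking dt.
    change = MPI.max(float(local_change))

    return min(dt*target/max(change, DOLFIN_EPS), dt_max)

If you instead use norm(u) or u.vector().norm("l2"), the reduction is already done for you and all processes get the same number.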

...