
DirichletBC from Function not working in parallel

+1 vote

Hi all,

For domain decomposition, I want to use the result "U0", which is a Function in a mixed function space "W", from a previous iteration as a DirichletBC for the computation of "U1" (the next iteration), something like:

bcs = DirichletBC(W, U0, boundaries, 2)
problem = LinearVariationalProblem(L1, R1, U1, bcs)
# ... solve with MUMPS

This works fine in serial but NOT in parallel, where the solution looks weird, maybe due to a wrong dof distribution during the automatic application of the BC? Is there any special usage of DirichletBC in parallel to solve this issue?

I have found the DirichletBC methods "gather()" (from DOLFIN 1.3.0, which doesn't exist anymore) and "get_boundary_values()", but I don't know how to use them, or whether they could fix my problem.
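For reference, "get_boundary_values()" can at least be used to inspect what the BC does on each process, something like this (only a sketch, assuming the DOLFIN 1.x/2016.x Python API; W, U0 and boundaries are the objects from the snippet above):

bc = DirichletBC(W, U0, boundaries, 2)
bc_values = bc.get_boundary_values()  # dict: local dof index -> boundary value
info("rank %d: BC sets %d local dofs" % (MPI.rank(mpi_comm_world()), len(bc_values)))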

I'm thankful for any hint or workaround to solve this issue, because I really depend on parallel computation due to very large problems.

asked Jan 6, 2017 by RR FEniCS User (3,330 points)

Hi, can you post the minimal code which shows the problem?

Here it is. I tried to skip the unimportant parts without missing anything that could also lead to the issue in parallel. I checked whether setting the function values of "u2" is correct; it looks fine to me.

from dolfin import *

mesh=Mesh("fullspace_coarse.xml")
parameters["allow_extrapolation"] = True

#%% define some modeling parameters
...

#%% define domains and export 2 overlapping SubMeshes

class Plate(SubDomain): ...
class Dom1(SubDomain): ...
class Dom2(SubDomain): ...
class B(SubDomain): ...

# ! this part is run in serial only once at the beginning to create the
# submesh1 and submesh2 xml files

# the SubDomain instances are also needed further below, so they stay active
dom1 = Dom1()
dom2 = Dom2()
plate = Plate()
b = B()

#domains = CellFunction("size_t", mesh)
#domains.set_all(0)
#dom1.mark(domains, 1)
#submesh1 = SubMesh(mesh, domains, 1)
#File('submesh1.xml') << submesh1
#dom2.mark(domains, 2)
#submesh2 = SubMesh(mesh, domains, 2)
#File('submesh2.xml') << submesh2
#File('whole.pvd') << domains
#
#raise SystemExit

#%% FEM computation, the important part

submesh1 = Mesh('submesh1.xml')
submesh2 = Mesh('submesh2.xml')

boundaries1 = FacetFunction("size_t", submesh1)
boundaries1.set_all(1)
b.mark(boundaries1, 2)

boundaries2 = FacetFunction("size_t", submesh2)
boundaries2.set_all(1)
b.mark(boundaries2, 2)

niter = 3

for j in range(niter):

    print('Iteration:' + str(j))

    if (j % 2) == 0:

        submesh = submesh1
        boundaries = boundaries1
        subdom = CellFunction("size_t", submesh)
        subdom.set_all(0)
        dom1.mark(subdom, 1)      
        plate.mark(subdom, 2)

    else:
        submesh = submesh2
        boundaries = boundaries2
        subdom = CellFunction("size_t", submesh)
        subdom.set_all(0)
        dom2.mark(subdom, 1)   
        plate.mark(subdom, 2)

    V = FunctionSpace(submesh,'N1curl',p)
    W = V*V

    (v_r, v_i) = TestFunctions(W)
    (u_r, u_i) = TrialFunctions(W)

    f_0 = Expression(('0', '0', '0'), cell=tetrahedron)
    f_1 = Expression(('(J / sqrt(a * pi)) * exp(-(x[0]*x[0]/(a*a)))', ...), cell=tetrahedron, a=a, J=J)

    dx = Measure('dx')[subdom]
    L = inner(curl(u_r), curl(v_r))*dx(1) ...
    R = inner(f_0 * sigma, v_r)*dx(1) ...

    # THIS IS THE PART WHICH RUNS ONLY CORRECT IN SERIAL BUT NOT IN PARALLEL
    # AFTER THE 0th ITERATION ...

    u = Function(W)
    if j == 0:
        problem = LinearVariationalProblem(L, R, u)
    else:
        bcs = [DirichletBC(W, u2, boundaries, 2)]
        problem = LinearVariationalProblem(L, R, u, bcs)

    solver = LinearVariationalSolver(problem)
    solver.parameters["linear_solver"] = "mumps"
    solver.parameters['symmetric'] = True
    solver.solve()

    if j == (niter - 1):  # j runs from 0 to niter - 1
        print('last iteration done')
    else:
        # u2.vector()[:] = u.vector()[:]
        # I use this alternative method to copy "u" to "u2", which should work in parallel
        u2 = Function(W)
        u2.vector().set_local(u.vector().array())
        u2.vector().apply('insert')

    if (j % 2) == 0:
        V_cg1 = VectorFunctionSpace(submesh1,'CG',p)
        V_cg2 = VectorFunctionSpace(submesh2,'CG',p)       
    else:   
        V_cg2 = VectorFunctionSpace(submesh1,'CG',p)
        V_cg1 = VectorFunctionSpace(submesh2,'CG',p)

    e_r1, e_i1 = u.split()
    e_r_cg1 = interpolate(e_r1, V_cg1)
    e_i_cg1 = interpolate(e_i1, V_cg1)

    e_r2, e_i2 = u2.split()
    e_r_cg2 = interpolate(e_r2, V_cg2)
    e_i_cg2 = interpolate(e_i2, V_cg2)

    #%% save output

    name = '0' + str(niter) + '_iter_' + str(j)
    File(name + "_e_r1.pvd") << e_r_cg1
    ...

Another comment!

There's still the "interpolate" problem described in:
https://bitbucket.org/fenics-project/dolfin/issues/507/interpolation-between-non-lagrange-spaces

I tested it by directly exporting the solution on the Nédélec space; the mentioned parallel interpolation issue has no influence here, probably because I'm on the same mesh.
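For illustration, this is the kind of cross-mesh interpolation the issue is about (a stand-alone sketch; the unit-cube meshes and the N1curl spaces are made up just for the illustration):

from dolfin import *

parameters["allow_extrapolation"] = True
mesh_a = UnitCubeMesh(8, 8, 8)
mesh_b = UnitCubeMesh(12, 12, 12)

Va = FunctionSpace(mesh_a, 'N1curl', 1)
Vb = FunctionSpace(mesh_b, 'N1curl', 1)

ua = interpolate(Expression(('x[1]', 'x[2]', 'x[0]'), degree=1), Va)
ub = interpolate(ua, Vb)  # fine in serial, unreliable in parallel (see the issue above)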

1 Answer

0 votes

I'm pretty sure that I have finally discovered the issue; let me explain it with the help of iteration step 1:

Since the result function "u2" is defined on "submesh1" at iteration step 0 and is used as boundary condition for "submesh2" afterwards, I suppose that "DirichletBC" internally does something like 'interpolate u2 from submesh1 onto submesh2' before applying the BC.

It is known (see the previous comment) that "interpolate" does not yet work between different meshes in parallel, at least for non-Lagrange spaces. Thus the above approach probably cannot work.
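Expressed with the names from my code above, the BC construction effectively needs a step like this (only my interpretation of what happens internally, not the actual DOLFIN code):

# u2 lives on the other submesh, so it first has to be brought onto the current W
u2_on_current = interpolate(u2, W)  # cross-mesh interpolation -> unreliable in parallel
bcs = [DirichletBC(W, u2_on_current, boundaries, 2)]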

If anyone is interested in my (at the moment) very uncomfortable workaround and the reason for my assumption, write me a PM or open a new question. It took me quite a while and would be too much to explain at this point.

Until the issue is solved, maybe an error message could be added for this incorrect use of "DirichletBC" in parallel, i.e. when a solution Function living on a different mesh is used as BC value instead of an Expression, as in my case?
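As a stop-gap on the user side, a small check like the following could at least warn about this situation (just a sketch with a hypothetical helper name):

def warn_if_cross_mesh_bc(bc_value, target_space):
    # hypothetical helper: warn when a Function from a different mesh is used
    # as DirichletBC value while running on more than one process
    if isinstance(bc_value, Function) and MPI.size(mpi_comm_world()) > 1:
        if bc_value.function_space().mesh().id() != target_space.mesh().id():
            warning("DirichletBC value lives on a different mesh; "
                    "cross-mesh interpolation is not reliable in parallel.")

# usage inside the loop of the question:
# warn_if_cross_mesh_bc(u2, W)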

answered Jan 19, 2017 by RR FEniCS User (3,330 points)
...