This is a read only copy of the old FEniCS QA forum. Please visit the new QA forum to ask questions

MixedFunctionSpace on two different SubMeshes

0 votes

I am coupling a Laplace equation for the velocity potential with a Navier-Stokes equation for the velocity. Up to now I have assembled the equations separately on two different meshes, extracted them as SciPy sparse matrices, and set up a sparse block system in which the coupling conditions on the interface between the subdomains, which go into the off-diagonal blocks, are hand coded. This approach works, but coding the quadrature on the shared facets is tedious.
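For context, the block setup described above can be sketched with SciPy's bmat. All names and sizes here are hypothetical stand-ins: A1 and A2 play the role of the separately assembled operators (in practice converted from DOLFIN matrices to CSR), and C12/C21 are the hand-coded interface coupling blocks.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical dof counts for the two subproblems.
n1, n2 = 4, 6

# Stand-ins for the separately assembled operators; in practice these
# would come from the two single-domain assemblies converted to CSR.
A1 = sp.identity(n1, format='csr')
A2 = sp.identity(n2, format='csr')

# Hand-coded interface coupling blocks (placeholders, here all zero).
C12 = sp.csr_matrix((n1, n2))
C21 = sp.csr_matrix((n2, n1))

# Assemble the coupled block system [[A1, C12], [C21, A2]] and solve.
A = sp.bmat([[A1, C12], [C21, A2]], format='csr')
b = np.ones(n1 + n2)
x = spla.spsolve(A, b)
```

With zero coupling blocks the system decouples into the two identity subproblems, so x is simply the right-hand side; the real coupling terms would populate C12 and C21.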

A naive attempt at solving the problem by setting up a large mixed space fails with "Unable to create function space -> Nonmatching meshes for function space":

from dolfin import *

mesh = UnitSquareMesh(1, 1)
cell_marker = CellFunction('size_t', mesh)
cell_marker[0] = 1
cell_marker[1] = 2
mesh1 = SubMesh(mesh, cell_marker, 1)
mesh2 = SubMesh(mesh, cell_marker, 2)

V_phi = FunctionSpace(mesh1, 'CG', 2)
V_u = VectorFunctionSpace(mesh2, 'CG', 1)
W = MixedFunctionSpace([V_phi, V_u])

Question
Can this be set up elegantly in UFL by the existing FEniCS machinery, or must I continue assembling the coupling terms myself? I have noticed the work on overlapping meshes, which this is some kind of special case of, but I would really appreciate some pointers if this is a way to go forward.

Side question
To help with hand coding the quadrature on shared facets, is there an easy way to evaluate a single basis function (one dof) and its derivatives from the Python layer? I know this functionality exists in the UFC element class. EDIT: MiroK just told me that this element functionality can be found under FunctionSpace.element() (instead of FunctionSpace.ufl_element()). Thanks!

asked Feb 23, 2016 by Tormod Landet FEniCS User (5,130 points)
edited Feb 23, 2016 by Tormod Landet

Some posts related to the UFC question are evaluate_basis and evaluate_basis_derivatives.
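For reference, a sketch of how this looks from the Python layer in 2016-era (legacy) DOLFIN. The exact argument types of evaluate_basis and evaluate_basis_derivatives (lists vs. NumPy arrays, and whether the cell orientation argument is required) vary between DOLFIN versions, so treat this as an illustration rather than an exact signature.

```python
from dolfin import *

mesh = UnitSquareMesh(1, 1)
V = FunctionSpace(mesh, 'CG', 2)
element = V.element()  # the compiled UFC/FFC element wrapper

cell = Cell(mesh, 0)
coords = cell.get_vertex_coordinates()
x = [0.25, 0.25]  # a point inside cell 0

# Value of basis function 0 at x.
phi = element.evaluate_basis(0, x, coords, cell.orientation())
# First derivatives (order 1) of basis function 0 at x.
dphi = element.evaluate_basis_derivatives(0, 1, x, coords, cell.orientation())
```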

1 Answer

0 votes
 
Best answer

Is there a reason you need to use separate meshes? A simple (but not necessarily effective) approach would be just to use one mesh. The code would be something like

from dolfin import *
from mshr import *

class LambdaDomain(SubDomain):
    def __init__(self, condition):
        self.condition = condition
        SubDomain.__init__(self)

    def inside(self, x, on_boundary):
        return self.condition(x, on_boundary)

res = 50
# Generate mesh with two subdomains.
unit_sq = Rectangle(Point(0, 0), Point(1.0, 1.0))
half_sq = Rectangle(Point(0, 0), Point(0.5, 1.0))
unit_sq.set_subdomain(1, half_sq)
mesh = generate_mesh(unit_sq, res)
# Mark mesh regions.
cell_func = CellFunction('size_t', mesh)
cell_func.set_all(0)  # right half keeps marker 0
LambdaDomain(lambda x, on: x[0] <= 0.5).mark(cell_func, 1)
dx = Measure("dx")(subdomain_data=cell_func)
# Create function space on mesh.
V_phi = FunctionSpace(mesh, 'CG', 2)
V_u = VectorFunctionSpace(mesh, 'CG', 1)
W = MixedFunctionSpace([V_phi, V_u])

Now, when you formulate your problem, you simply use dx(1) for integration on the left part of the mesh (the region marked 1 above) and dx(0) for integration on the right part.
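As an illustration of how the measures might appear in a coupled form (the actual equations, coefficients, and interface terms depend on the problem; this assumes legacy DOLFIN and uses AutoSubDomain in place of the LambdaDomain helper above):

```python
from dolfin import *

mesh = UnitSquareMesh(16, 16)
cell_func = CellFunction('size_t', mesh)
cell_func.set_all(0)
AutoSubDomain(lambda x, on: x[0] <= 0.5).mark(cell_func, 1)
dx = Measure('dx')(subdomain_data=cell_func)

V_phi = FunctionSpace(mesh, 'CG', 2)
V_u = VectorFunctionSpace(mesh, 'CG', 1)
W = MixedFunctionSpace([V_phi, V_u])

phi, u = TrialFunctions(W)
q, v = TestFunctions(W)

# A Laplace term on the left half (marked 1) and a viscous term on the
# right half (marked 0); the interface coupling would use a facet measure.
a = dot(grad(phi), grad(q))*dx(1) + inner(grad(u), grad(v))*dx(0)
```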

answered Mar 1, 2016 by emher FEniCS User (1,290 points)
selected Mar 1, 2016 by Tormod Landet

This is what I have implemented now. It would be nice if there were a simple method to avoid all the unnecessary dofs. Right now I use matrix.ident_zeros(), so the extra dofs only cost some extra memory and communication time; little to no extra work is required in the solver.
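For readers unfamiliar with it: ident_zeros() puts a 1 on the diagonal of every row that is entirely zero, so the unused dofs do not make the system singular. A SciPy analogue, purely for illustration (the helper name is made up):

```python
import numpy as np
import scipy.sparse as sp

def ident_zeros(A):
    """Return a copy of CSR matrix A with a unit diagonal entry in
    every all-zero row (mimicking dolfin's matrix.ident_zeros())."""
    A = A.tocsr().copy()
    row_nnz = np.diff(A.indptr)          # stored entries per row
    empty = np.where(row_nnz == 0)[0]    # indices of all-zero rows
    if len(empty):
        fix = sp.csr_matrix((np.ones(len(empty)), (empty, empty)),
                            shape=A.shape)
        A = (A + fix).tocsr()
    return A

# Row 1 has no entries; after the fix it gets a 1 on the diagonal.
A = sp.csr_matrix(np.array([[2.0, 0.0, 0.0],
                            [0.0, 0.0, 0.0],
                            [0.0, 0.0, 3.0]]))
B = ident_zeros(A)
```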

I am accepting your answer. If I find that I need more speed I will try to see if the multimesh support can handle this better. I suspect that domain decomposition (in this case between CPUs) will be hard to get right/optimal since some cells will have many more unknowns than others.

...