This is a read only copy of the old FEniCS QA forum. Please visit the new QA forum to ask questions

Suspecting a loss of accuracy somewhere in my code, need suggestions where to look

+2 votes

I am in the process of vetting a FEniCS code for sources of loss of accuracy.

I have created a FEniCS implementation of a non-overlapping optimized Schwarz (OSM) domain decomposition (DD) method for the Poisson problem. The DD iterations converge to the global solution, and the number of iterations depends on the grid-dependent OSM parameter as expected. Unfortunately, the FEniCS code needs roughly double the number of iterations of an equivalent finite-difference code to converge to the global solution.

Previously I resolved a convergence-rate issue by changing the solver type of the project() function to 'lu'. I am left wondering if the poor performance of the FEniCS OSM code might also be due to some unseen default setting in one of the FEniCS functions that I use.

The main part of my code, where I suspect a loss of accuracy, is provided below. In particular, I update the assembled vectors G1 and G2 at every iteration, and if this update is accuracy-limited, it could explain the slower convergence.

Any guidance here will be appreciated.
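For context, the G1/G2 update in the loop below implements the standard OSM Robin transmission condition, g1_new = 2*alpha*u2 - g2 on the interface (with B1, B2 restricting to interface dofs). Here is a minimal NumPy sketch of that update in isolation; the arrays u2_trace and g2_trace are hypothetical stand-ins for the interface traces, not taken from my code:

```python
import numpy as np

alpha = 20.0
# Hypothetical interface traces (stand-ins, for illustration only):
u2_trace = np.array([1.0, 2.0, 3.0])   # trace of the neighbouring solution u2
g2_trace = np.array([0.5, 0.5, 0.5])   # previous Robin datum on subdomain 2

# OSM Robin transmission update on the interface:
g1_new = 2.0 * alpha * u2_trace - g2_trace
# e.g. first entry: 2*20*1.0 - 0.5 = 39.5
```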

# Find initial error    
alpha = 20

g1 = Function(V1)
g2 = Function(V2)

ds1 = Measure("ds")[sub_domains1]
ds2 = Measure("ds")[sub_domains2]

a1 = - inner(grad(uL), grad(vL))*dx - inner(alpha*uL,vL)*ds1(2)
a2 = - inner(grad(uR), grad(vR))*dx - inner(alpha*uR,vR)*ds2(1)

A1 = assemble(a1)
A2 = assemble(a2)

G1 = assemble(inner(g1,vL)*ds1(2))
G2 = assemble(inner(g2,vR)*ds2(1))

L1 = assemble(inner(f1,vL)*dx)
L2 = assemble(inner(f2,vR)*dx)

b1 = L1 - G1
b2 = L2 - G2

bc1.apply(A1, b1)
bc2.apply(A2, b2)

u1 = Function(V1)
solve(A1, u1.vector(), b1, 'lu')  # direct LU solve

u2 = Function(V2)
solve(A2, u2.vector(), b2, 'lu')  # direct LU solve

u1np = u1.vector().array()[v2d1]
u2np = u2.vector().array()[v2d2]

u1np = u1np.reshape((ny,nx))
u2np = u2np.reshape((ny,nx))

uGnp = np.zeros((ny,ny))   # global grid: ny x (2*nx-1) == ny x ny here
uGnp[:,0:nx-1] = u1np[:,0:-1]
uGnp[:,nx:] = u2np[:,1:]
uGnp[:,nx-1] = 0.5*(u1np[:,-1] + u2np[:,0])  # average on the shared interface column

e0 = np.linalg.norm((uGnp.reshape((ny*ny,)) - uGlob.reshape((ny*ny,))), np.inf) \
    /np.linalg.norm(uGlob.reshape((ny*ny,)), np.inf)

maxiter = 100
niter = 1 # Since I do one iteration before entering loop to get initial error
tol = 1e-6
resid = []
errors = []
err = 1

# Iterate

while err > tol and niter < maxiter:  

    niter += 1

    U1 = assemble(inner(u1,vL)*ds1(2))
    U2 = assemble(inner(u2,vR)*ds2(1))

    G1_old = G1[v2d1].copy()
    G1.set_local(np.dot(B1.T, (2.0*alpha*np.dot(B2, U2[v2d2]) - np.dot(B2, G2[v2d2])).T)[d2v1])

    G2.set_local(np.dot(B2.T, (2.0*alpha*np.dot(B1, U1.array()[v2d1]) - np.dot(B1, G1_old)).T)[d2v2])

    b1 = L1 - G1
    b2 = L2 - G2

    bc1.apply(b1)
    bc2.apply(b2)

    u1 = Function(V1)
    solve(A1, u1.vector(), b1, 'lu')  # direct LU solve

    u2 = Function(V2)
    solve(A2, u2.vector(), b2, 'lu')  # direct LU solve

    u1np = u1.vector().array()[v2d1]
    u2np = u2.vector().array()[v2d2]

    u1np = u1np.reshape((ny,nx))
    u2np = u2np.reshape((ny,nx))

    uGnp = np.zeros((ny,ny))
    uGnp[:,0:nx-1] = u1np[:,0:-1]
    uGnp[:,nx:] = u2np[:,1:]
    uGnp[:,nx-1] = 0.5*(u1np[:,-1] + u2np[:,0])


    err = np.linalg.norm((uGnp.reshape((ny*ny,)) - uGlob.reshape((ny*ny,))), np.inf) \
         /np.linalg.norm(uGlob.reshape((ny*ny,)), np.inf)
    errors.append(err)
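
The stopping criterion above is a relative infinity-norm error against the precomputed global solution uGlob. Stripped of the FEniCS context, it is just (illustrative arrays, not my actual data):

```python
import numpy as np

uG = np.array([1.0, 2.0, 4.0])    # candidate (e.g. stitched DD solution)
uRef = np.array([1.0, 2.0, 5.0])  # reference (e.g. global solve)

# Relative infinity-norm error, as used for err above:
err = np.linalg.norm(uG - uRef, np.inf) / np.linalg.norm(uRef, np.inf)
# -> |4 - 5| / 5 = 0.2
```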
asked Nov 17, 2015 by brk888 FEniCS Novice (990 points)

1 Answer

0 votes

This might not be the reason, but you can try tightening the convergence tolerances of the iterative solver:

solver = KrylovSolver("gmres", "hypre_amg") # whatever solver and preconditioner that works best for your problem
solver.parameters["relative_tolerance"] = 1e-10
solver.parameters["absolute_tolerance"] = 1e-15

If you are using an LU solver on a single core (no MPI), then the PETSc LU solver gives the lowest errors in my experience.

solver = PETScLUSolver("petsc")

Then run

solver.solve(A, func.vector(), b)
answered Nov 17, 2015 by Tormod Landet FEniCS User (5,130 points)

Thank you for the response. I am currently using lu for all solves, but I will try one of the Krylov solvers with tight tolerances next.

...