Hello, I am using an explicit RK DG scheme to solve a set of two coupled equations (the 1-D compressible isentropic Navier-Stokes equations). In this case the mass matrix is block diagonal. Currently I assemble the mass matrix once, and then during the time iterations I do
b = assemble(b_rhs)
solve(A, U.vector(), b)
However, this is very slow since it does not exploit the block diagonal structure of the matrix. The mass matrix could be solved element by element, which would be faster. Is there any way I can improve the speed of solving such a problem?
Also, if lumping is not desired, make sure that an iterative method is used, e.g.,

solve(A, x, b, "cg", "jacobi")
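Since the mass matrix does not change between time steps, it can also pay off to construct the solver once and reuse it, instead of calling solve in every step. A minimal sketch with the legacy DOLFIN API:

solver = KrylovSolver("cg", "jacobi")  # CG with Jacobi preconditioning
solver.set_operator(A)  # the constant mass matrix, set once

# inside the time loop:
b = assemble(b_rhs)
solver.solve(U.vector(), b)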
Lumping of the mass matrix often leads to a speed-up; see Chapter 22 in the FEniCS book.
You could assemble the mass matrix into a vector (as you would do with a lumped matrix) and apply its inverse by multiplying with the reciprocal entries of that vector.
# Assemble the lumped mass (the row sums of the mass matrix) into a vector
# by applying the mass form to the constant-one function.
A = assemble(action(u*v*dx, Constant(1)))

# Apply the inverse of the lumped mass entry by entry.
U = Vector(b.size())
for i, bi in enumerate(b):
    U[i] = bi / A[i]
I have no code to test this on, but it should give you some ideas on how to proceed.
I think FEniCS uses nodal Lagrange basis functions. In this case the mass matrix is not diagonal, but block diagonal. It would be diagonal if the basis functions were orthogonal. Are there function spaces available in FEniCS that have orthogonal basis functions, like Legendre polynomials, at least in 1-D?
I have not understood your answer. Does

A = assemble(action(u*v*dx, Constant(1)))

compute the inverse of the mass matrix?
No, it assembles the lumped mass matrix. The following lines of the snippet apply the inverse of this matrix to the vector b. I proposed this because I initially read your post as expecting a diagonal mass matrix, in which case the above makes even more sense, since lumping a matrix that is already diagonal gives back the original matrix.
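If the Python loop over the entries turns out to be slow, the same division can be done in one go on the underlying arrays; a sketch, assuming the legacy array()/set_local() vector interface:

U = Vector(b.size())
U.set_local(b.array() / A.array())  # entry-wise division b_i / A_i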
In case a DG basis does not yield a diagonal mass matrix but only a block diagonal one, you should try a different solver if lumping is out of the question. I suggest trying the LU solver from the PETSc backend first and configuring it to reuse the factorization. This will probably still not be the optimal solution, but it may be considerably faster than whatever default you get by simply calling solve.
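A minimal sketch of this with the legacy DOLFIN LUSolver (the reuse_factorization parameter name may differ between versions):

solver = LUSolver(A)
solver.parameters["reuse_factorization"] = True  # factorize once, reuse in every step

# inside the time loop:
b = assemble(b_rhs)
solver.solve(U.vector(), b)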
Christian, a follow-up question: once you've computed the solution U of the lumped system, how do you store it in the coefficients of a Function?
Edit: never mind, found it! http://fenicsproject.org/qa/1425/derivatives-at-the-quadrature-points?show=1438#c1438
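For anyone finding this later, one way to do it is simply to copy the entries into the Function's vector (a sketch; u_sol and V are illustrative names for the Function and its space):

u_sol = Function(V)  # Function on the same space the system was built on
u_sol.vector()[:] = U.array()  # copy the solved coefficients into it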
The mass matrix will still be SPD, so you'll probably have good chances for a speedup with

solve(A, x, b, "cg", "amg")

When using PETSc, I think the block structure is even accounted for automatically.
Jacobi and AMG are both order-optimal for the mass matrix, but AMG is heavier.