
Elementwise vector multiplication and parallelisation for explicit central difference method

+1 vote

I am developing a simple explicit solver for nonlinear elastodynamics. The following line of code updates the displacements after the vectors F_ext and F_int have been assembled. Md_inv is the inverse of a diagonal mass matrix that can be precomputed.

u_vec[:] = u0_vec + beta*(u0_vec - u1_vec) + alpha*Md_inv*(F_ext - F_int)

First of all, I would like to use a vector instead of a matrix to store the diagonal of Md_inv, but I cannot find a way to multiply two vectors element by element (the equivalent of MATLAB's .* operator).

Secondly, the conditional stability of the central difference method limits the time step size, so the update above must be performed thousands of times (as must the assembly of the F_int vector). The equations are completely decoupled and the displacement for each degree of freedom can be computed individually (all variables are either scalars or vectors). I've tried multithreading, which speeds up the assembly of F_int and F_ext for large models. Is there a simple way to also perform the displacement update for u_vec in parallel using FEniCS? A rough sketch of the kind of update I have in mind is given below.
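
For concreteness, one way I could imagine writing this update is to pull the local values out of the DOLFIN vectors as NumPy arrays, operate on them elementwise, and write them back. This is only a sketch, where md_inv_diag would be a NumPy array holding the diagonal of Md_inv; I don't know whether this is the intended approach or fast enough:

# Local (per-process) copies of the DOLFIN vectors as NumPy arrays
u0_arr = u0_vec.array()
u1_arr = u1_vec.array()
f_arr = F_ext.array() - F_int.array()

# Elementwise operations (MATLAB's .* and ./) on the NumPy arrays
u_new = u0_arr + beta*(u0_arr - u1_arr) + alpha*md_inv_diag*f_arr

# Write the result back into the DOLFIN vector
u_vec.set_local(u_new)
u_vec.apply("insert")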

asked Sep 3, 2015 by benzwick FEniCS Novice (350 points)

1 Answer

+1 vote

Given a consistent mass bilinear form M=rho*inner(du, v)*dx, create the diagonal part of the lumped matrix

M_lumped = assemble(action(M, Constant(1.0)))

where, for the vector-valued 2D/3D cases, Constant(1.0) is to be replaced by Constant([1.0, 1.0]) and so on.
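
For reference, a minimal self-contained version of this lumping step might look as follows (legacy DOLFIN, vector P1 elements; the mesh, function space and density here are illustrative):

from dolfin import *

mesh = UnitSquareMesh(16, 16)
V = VectorFunctionSpace(mesh, "Lagrange", 1)
du = TrialFunction(V)
v = TestFunction(V)
rho = Constant(1.0)

# Consistent mass form
M = rho*inner(du, v)*dx

# Row-sum lumping: applying M to a vector of all ones yields the diagonal
# of the lumped mass matrix as a GenericVector
M_lumped = assemble(action(M, Constant((1.0, 1.0))))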

Given your total force vector F = F_ext - F_int, the nodal acceleration vector can be computed by

a.set_local(F.array() / M_lumped.array())
a.apply("insert")

This code should also work in parallel. You can try

mpirun -np 2 python script.py
answered Sep 4, 2015 by tianyikillua FEniCS User (1,620 points)

Thanks Tianyi. I've written the code in C++ now and it's quite fast. I used the following form for the mass matrix

M = rho*inner(du, v)*dx
L = action(M) 

and compiled it using the vertex quadrature scheme, so that assembling L gives me the diagonal of the lumped mass matrix as a vector.
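
Roughly, the corresponding UFL file looks something like this (a sketch only; the element is illustrative and the exact metadata keys for selecting the vertex rule may vary with the FFC version):

# MassLumped.ufl -- element and names are illustrative
element = VectorElement("Lagrange", tetrahedron, 1)

du = TrialFunction(element)
v = TestFunction(element)
rho = Constant(tetrahedron)

# Vertex quadrature makes the assembled mass matrix diagonal (row-sum lumping)
dxv = dx(metadata={"quadrature_degree": 1, "quadrature_rule": "vertex"})

M = rho*inner(du, v)*dxv
L = action(M)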

Then I update the displacements like this:

std::vector<double> u, u0, u1, f_int, f_ext, m;
U.vector()->get_local(u);
// Repeat for the other vectors u0, u1, m, f_int, f_ext

// Central difference update, evaluated independently for each local dof
for(size_t i=0; i < u.size(); ++i)
    u[i] = u0[i] + beta*(u0[i] - u1[i]) + alpha/m[i]*(f_ext[i] - f_int[i]);

U.vector()->set_local(u);
U.vector()->apply("insert");

Do you know if there is a way to do this in Python with the same performance as C++?

...