This is a read-only copy of the old FEniCS QA forum. Please visit the new QA forum to ask questions.

How to use MPI communication with fenics

+1 vote

I am solving a problem where I need a custom expression

  class scalarExpression(Expression):
    def __init__(self, M, d, data, element):
      self.M = M              # 3x3 matrix used by space2pixel
      self.d = d              # 3-component vector used by space2pixel
      self.data = data        # full 512x512x600 voxel array
      self.element = element
    def eval(self, value, x):
      # Map the physical coordinate x to (1-based) voxel indices,
      # then look up the corresponding data value.
      i, j, k = [int(e) for e in space2pixel(x, self.M, self.d)]
      value[0] = self.data[i - 1, j - 1, k - 1]

which requires a 3x3 matrix M, a 3-dimensional vector d, and a large data array of size 512x512x600. I would like to run the code in parallel without splitting this function, because every process needs it over the entire domain. How is this possible? I haven't been able to find MPI directives to use in Python with FEniCS. Thanks in advance.
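For illustration, here is a minimal NumPy sketch of how such a voxel lookup can work outside of FEniCS. Note that `space2pixel` is not shown in the question, so the affine mapping below (voxel index = M @ x + d) is only an assumption, as are the toy values of `M`, `d`, and `data`:

```python
import numpy as np

def space2pixel(x, M, d):
    # ASSUMPTION: the real space2pixel is not shown in the question;
    # here we guess a simple affine map from physical coordinates to
    # 1-based voxel indices.
    return M @ np.asarray(x, dtype=float) + d

# Toy stand-ins for the real inputs (the real array is 512x512x600).
M = np.diag([4.0, 4.0, 4.0])     # scales [0,1]^3 onto a 4x4x4 grid
d = np.array([1.0, 1.0, 1.0])    # shifts to 1-based indices
data = np.arange(64, dtype=float).reshape(4, 4, 4)

def eval_at(x):
    # Mirrors the body of scalarExpression.eval: truncate to integer
    # indices, then convert from 1-based to 0-based indexing.
    i, j, k = [int(e) for e in space2pixel(x, M, d)]
    return data[i - 1, j - 1, k - 1]

print(eval_at([0.0, 0.0, 0.0]))  # voxel (0, 0, 0)
print(eval_at([0.9, 0.0, 0.0]))  # voxel (3, 0, 0)
```

The key point is that `eval_at` (like `eval`) can be called at any point of the domain, but only if the calling process holds the whole `data` array.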

asked Sep 26, 2016 by nabarnaf FEniCS User (2,940 points)

Does FEniCS not do what you need by default? i.e.: Have you tried just running:

mpiexec -n 2 python foo.py

(where foo.py is the name of your code)?

Yes, I have tried it and plotted a function given by an instance of scalarExpression. By default the domain is decomposed, so when each process tries to evaluate the function, the point may fall outside its part of the domain.
For example, if the data were just a constant function on [0,1], then running mpiexec with 2 processes (I use mpirun; what is the difference?) and plotting the function would generate two plots, one defined on [0,0.5] and another on [0.5,1]. The partition is not exactly at 0.5, but this illustrates what happens.
With this in mind, my idea would be to give every process the whole function so that each one can evaluate it without issues, but so far I haven't found how to do this with FEniCS.

...