
Problems running MPI

–1 vote

Hi all,
I built dolfin and verified it works correctly when I'm using one core, with PETSc as the solver backend. I had to build an MPI-enabled Python using mpi4py. I'm using MPICH2 and PETSc 3.4.4 (although I tried a few other versions too).

If I try to use 2 cores with mpirun -np 2, I get:

Solving nonlinear variational problem.
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Invalid pointer!
[0]PETSC ERROR: Null Pointer: Parameter # 3!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.4.4, Mar, 13, 2014 
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Unknown Name on a arch-linux2-c-debug named blogin2 by mwelland Mon Apr 21 20:37:10 2014
[0]PETSC ERROR: Libraries linked from /home/mwelland/local_gnu_32I/lib
[0]PETSC ERROR: Configure run at Mon Apr 21 19:13:42 2014
[0]PETSC ERROR: Configure options CFLAGS= -fopenmp -m64 -I/soft/mkl/11.0.5.192/mkl/include -march=native -mavx CXXFLAGS= -fopenmp -m64 -I/soft/mkl/11.0.5.192/mkl/include -march=native -mavx FFLAGS= -fopenmp -m64 -I/soft/mkl/11.0.5.192/mkl/include -march=native -mavx LDFLAGS=-L/soft/mkl/11.0.5.192/mkl/lib/intel64 -ldl -lpthread -lm --prefix=/home/mwelland/local_gnu_32I --with-shared-libraries --with-petsc4py=1 --with-mpi-dir=/software/mvapich2-gnu-psm-1.9.5/ --with-boost-dir=/home/mwelland/local_gnu_32I --with-mpi4py=1 --with-blas-lapack-dir=/soft/mkl/11.0.5.192/mkl/lib/intel64 --with-numpy=1 --with-scientificpython=1 --with-scientificpython-dir=/home/mwelland/local_gnu_32I --with-umfpack=1 --download-umfpack=yes --with-valgrind=1 --with-valgrind-dir=/usr/bin/ --search-dirs="[/usr/bin,/home/mwelland/local_gnu_32I]" --package-dirs="[/usr/lib64,/home/mwelland/local_gnu_32I]"
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: VecGetValues() line 920 in /fusion/gpfs/home/mwelland/programs/petsc-3.4.4/src/vec/vec/interface/rvector.c
Traceback (most recent call last):
  File "13_PETScTweak.py", line 312, in <module>
    solver.solve()
RuntimeError: 

*** -------------------------------------------------------------------------
*** DOLFIN encountered an error. If you are not able to resolve this issue
*** using the information listed below, you can ask for help at
***
***     fenics@fenicsproject.org
***
*** Remember to include the error message listed below and, if possible,
*** include a *minimal* running example to reproduce the error.
***
*** -------------------------------------------------------------------------
*** Error:   Unable to successfully call PETSc function 'VecGetValues'.
*** Reason:  PETSc error code is: 68.
*** Where:   This error was encountered inside /home/mwelland/programs/git/dolfin/dolfin/la/PETScVector.cpp.
*** Process: unknown
*** 
*** DOLFIN version: 1.3.0+
*** Git changeset:  15187a300a3ed92da2cd5e13e64eae5383265d45
*** -------------------------------------------------------------------------

application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0

The PETSc demos worked fine, as did some testing scripts using mpi4py (a minimal example of that kind of check is sketched below), so I'm left thinking it has something to do with how dolfin calls PETSc... does anyone have any advice?
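
For reference, a minimal mpi4py sanity check of the kind mentioned above (hypothetical; the actual testing scripts are not shown), run with something like mpirun -np 2 python check_mpi.py:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # With a working MPI stack, each process prints its own line
    print("Hello from rank %d of %d" % (rank, size))
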
Thanks

asked Apr 22, 2014 by mwelland FEniCS User (8,410 points)

1 Answer

0 votes

It appears this problem stems from using a CompiledSubDomain to mark a boundary condition. Replacing it with a subdomain defined in pure Python (a SubDomain subclass) solves the problem.
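
For illustration, a minimal sketch of that workaround, assuming the boundary condition marks the x[0] = 0 side of a unit square (the actual mesh and subdomain in 13_PETScTweak.py are not shown):

    from dolfin import *

    mesh = UnitSquareMesh(16, 16)
    V = FunctionSpace(mesh, "CG", 1)

    # Original approach, which triggered the parallel failure in this case:
    # left = CompiledSubDomain("near(x[0], 0.0) && on_boundary")
    # bc = DirichletBC(V, Constant(0.0), left)

    # Workaround: define the subdomain as a Python SubDomain subclass instead
    class Left(SubDomain):
        def inside(self, x, on_boundary):
            return near(x[0], 0.0) and on_boundary

    bc = DirichletBC(V, Constant(0.0), Left())

Both versions mark the same boundary; only the pure-Python definition avoided the VecGetValues error above in this case.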

answered Apr 24, 2014 by mwelland FEniCS User (8,410 points)