
dolfin not found under slurm sbatch

+1 vote

Hello,

I've recently installed FEniCS on a cluster. I can now execute my code with "python myscript.py", with "mpirun python myscript.py", and with a bash script containing either of them. However, when I try to run this bash script through "sbatch -A myaccount -p myqueue bashscript", Python is not able to find dolfin at all. I always get:

Traceback (most recent call last):
  File "kk.py", line 1, in <module>
    from dolfin import *
ImportError: No module named dolfin

I tried exporting all environment variables (especially those from fenics.conf) through sbatch and mpirun, but no luck. I know this may not be a question directly related to FEniCS, but I'm writing in case anyone has had a similar issue (it's my last step to having a fully functioning FEniCS install on a cluster).

Thank you very much for your help.

asked Sep 17, 2014 by Juan Córcoles FEniCS Novice (250 points)

Did you set any environment variables in your bash script?
Use env to list them all before calling python.

#!/bin/bash

# Make sure the FEniCS install is on the Python path
export PYTHONPATH=/my/python/lib:$PYTHONPATH
# Print the environment the job actually sees, for debugging
env
mpirun -n 4 python example.py
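
If the environment still isn't passed through, a fuller submission script along these lines might help (the fenics.conf path and the account/partition/task values are just placeholders taken from the question - adjust them to your setup):

#!/bin/bash
#SBATCH -A myaccount
#SBATCH -p myqueue
#SBATCH -n 4

# Load the FEniCS environment variables (placeholder path)
source /path/to/fenics.conf

# List the environment the job actually sees, for debugging
env

mpirun -n 4 python example.py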

1 Answer

0 votes

Hello,

Yes, I tried exporting all the environment variables, but it turned out to be a cluster configuration issue, which my cluster admin fixed. Just in case someone has a similar issue: the filesystem where I compiled and installed FEniCS was a local filesystem - I couldn't install in my home directory, which is shared among all the nodes of the cluster, because of the quota limit. The cluster admin moved the FEniCS installation to a location accessible to all the nodes, and it now works.
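
In case it helps anyone debugging the same thing, a quick check like the following (the account, partition and install path are just placeholders - adjust to your setup) shows whether the compute nodes can actually import dolfin and see the installation:

# Check that a compute node can import dolfin and report where it lives
srun -A myaccount -p myqueue python -c "import dolfin; print(dolfin.__file__)"

# Check that the install directory is visible from a compute node
srun -A myaccount -p myqueue ls /path/to/fenics/install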

Thank you for your help.

answered Sep 18, 2014 by Juan Córcoles FEniCS Novice (250 points)