ptest: *** An error occurred in MPI_Recv

Hi,

I'm installing ParMETIS 4.0.2 on our cluster for our users. I would like to verify that the compiled binaries work properly in our environment, but the documentation doesn't say much beyond noting that the files in the 'Graphs' directory can be used for testing. It looks like mtest and ptest are the two binaries available for testing, so I've run them on the following files and received this output (not sure whether it's correct):

$ mtest bricks.hex3d
Nelements: 117649, Nnodes: 125000, EType: 3
MGCNUM: 4

...is this the right graph file, and is the output correct? And for ptest:

$ ptest rotor.graph
******begin output******
Testing ParMETIS_V3_PartKway with ncon: 1, nparts: 2
ParMETIS_V3_PartKway reported a cut of 2207 [OK:2207]

Testing ParMETIS_V3_RefineKway with ncon: 1, nparts: 2
Setup: Max: 0.017, Sum: 0.017, Balance: 1.000
Matching: Max: 0.052, Sum: 0.052, Balance: 1.000
Contraction: Max: 0.078, Sum: 0.078, Balance: 1.000
Project: Max: 0.002, Sum: 0.002, Balance: 1.000
Initialize: Max: 0.018, Sum: 0.018, Balance: 1.000
K-way: Max: 0.014, Sum: 0.014, Balance: 1.000
Total: Max: 0.184, Sum: 0.184, Balance: 1.000
Final 2-way Cut: 2207 Balance: 1.001
NMoved: 0 0 0 0
ParMETIS_V3_RefineKway reported a cut of 2207 [OK:2207]

Testing ParMETIS_V3_AdaptiveRepart with ipc2redist: 1000.000, ncon: 1, nparts: 2
[ 99617 1324862 99617 99617][100]
[ 52014 775450 52014 52014][100]
[ 27308 413280 27308 27308][100]
[ 14448 220040 14448 14448][100]
[ 7683 117072 7683 7683][100]
[ 4118 62058 4118 4118][100]
[ 2229 32872 2229 2229][100]
[ 1207 17288 1207 1207][100]
[ 669 8950 669 669][100]
[ 383 4698 383 383][100]
[ 228 2470 228 228][100]
[ 160 1602 160 160][100]
[ 146 1464 146 146][100]
[t-cn1034.hpc2n.umu.se:31590] *** An error occurred in MPI_Recv
[t-cn1034.hpc2n.umu.se:31590] *** on communicator MPI COMMUNICATOR 4 DUP FROM 3
[t-cn1034.hpc2n.umu.se:31590] *** MPI_ERR_RANK: invalid rank
[t-cn1034.hpc2n.umu.se:31590] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
****end output****

...not sure whether I'm using the correct graph file for ptest, but it looks like all of the tests pass until the very end.

Is this the proper way to verify that the binaries and environment are OK?

Why is ptest getting MPI errors at the end?

I have Open MPI properly loaded in the environment.

Any help/suggestions appreciated.

RE: The inputs are correct;

The inputs are correct; however, shouldn't you be using something like "mpirun -np 4 ptest rotor.graph"?
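
The MPI_ERR_RANK at the end is consistent with a single-process launch: presumably the later ptest routines post receives addressed to other ranks, and with only one process in the communicator those ranks don't exist. As a minimal standalone illustration (not ptest's actual code, just a sketch of the failure mode), the following program aborts the same way when launched with a single rank:

#include <stdio.h>
#include <mpi.h>

/* Sketch only: a receive addressed to rank 1 is valid only if the job was
 * launched with at least 2 processes. With a single rank, rank 1 is out of
 * range for MPI_COMM_WORLD, so MPI_Recv fails with MPI_ERR_RANK and, under
 * the default MPI_ERRORS_ARE_FATAL handler, the job aborts. */
int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Expects a partner at rank 1; invalid when only 1 rank exists. */
        MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d from rank 1\n", value);
    } else if (rank == 1) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Compile it with mpicc and launch it with "mpirun -np 1": you should see the same MPI_ERR_RANK / MPI_ERRORS_ARE_FATAL abort, while "-np 2" completes normally. ptest is an MPI program, so the same applies: launch it through mpirun with however many ranks you want to test.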

RE: Yes that was it, thanks.

Yes, that was it, thanks. Also, my apologies for not responding sooner; I thought I had already replied to the thread but was mistaken.