Different mesh partition results on two machines

I tested the following parallel mesh partitioning code on my MacBook and on a Linux machine, and I got different results (shown at the end of the post). Both machines use the same compiler (gcc-4.9.1), the same MPI (mpich-3.2), and the same parmetis-4.0.3 (built with the same IDXTYPEWIDTH and REALTYPEWIDTH).
Why does this happen, and how can I make the two machines produce the same partitioning? I need reproducible results to help debug my own code.


//mesh.c
#include <stdio.h>
#include <assert.h>
#include <mpi.h>
#include <parmetis.h>

int main(int argc, char **argv)
{
    int rank, nproc;

    /* element distribution: rank 0 owns elements 0-2, rank 1 owns 3-6,
       rank 2 owns 7-13 (14 elements in total) */
    int elmdist[] = {0, 3, 7, 14};

    int eptr0[] = {0, 3, 6, 9};
    int eind0[] = {21,20,11,0,8,21,20,9,10};

    int eptr1[] = {0, 3, 6, 9, 12};
    int eind1[] = {23,15,12,8,24,13,9,14,22,22,23,24};

    int eptr2[] = {0, 3, 6, 9, 12, 15, 18, 21};
    int eind2[] = {26,5,16,28,17,10,14,25,17,15,26,25,18,29,11,19,16,27,27,28,29};

    int *eptr, *eind;

    int *elmwgt = NULL;
    int wgtflag = 0;
    int numflag = 0;
    int ncon = 1;
    int ncommonnodes = 1;
    int nparts = 3;
    double tpwgts[3] = {1/3.0, 1/3.0, 1/3.0};
    double ubvec[1] = {1.05};
    /* options[0]=1: use the user-supplied values below;
       options[1]=7: debug level; options[2]=0: random seed */
    int options[5] = {1, 7, 0};
    int edgecut;
    int part[10];
    MPI_Comm comm = MPI_COMM_WORLD;

    MPI_Init(&argc, &argv);

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    assert(nproc == 3);

    if (rank == 0) {
        eptr = eptr0;
        eind = eind0;
    } else if (rank == 1) {
        eptr = eptr1;
        eind = eind1;
    } else if (rank == 2) {
        eptr = eptr2;
        eind = eind2;
    }

    for (int i = 0; i < nparts*ncon; i++) tpwgts[i] = 1.0/nparts;

    ParMETIS_V3_PartMeshKway(elmdist, eptr, eind, elmwgt, &wgtflag, &numflag, &ncon,
                             &ncommonnodes, &nparts, tpwgts, ubvec, options,
                             &edgecut, part, &comm);

    printf("Rank = %d, edgecut = %d, part[] = ", rank, edgecut);
    for (int i = 0; i < elmdist[rank+1] - elmdist[rank]; i++) printf("%d ", part[i]);
    printf("\n");

    MPI_Finalize();
    return 0;
}
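
(One thing worth double-checking when comparing the two builds: the program passes plain int and double arrays, which only match ParMETIS's idx_t and real_t if metis.h was configured with IDXTYPEWIDTH=32 and REALTYPEWIDTH=64. As a rough sketch, assuming a C11 compiler such as gcc-4.9 with -std=c11, a compile-time check near the top of mesh.c would catch a mismatch; these asserts are an addition for illustration, not part of the original program.)

/* Compile-time sanity check: the int/double arrays above match ParMETIS's
   idx_t/real_t only if IDXTYPEWIDTH == 32 and REALTYPEWIDTH == 64. */
_Static_assert(sizeof(idx_t) == sizeof(int),
               "idx_t does not match int: rebuild with IDXTYPEWIDTH=32 or declare the arrays as idx_t");
_Static_assert(sizeof(real_t) == sizeof(double),
               "real_t does not match double: rebuild with REALTYPEWIDTH=64 or declare tpwgts/ubvec as real_t");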

On my MacBook:

$mpicc -o mesh mesh.c -std=c99 -lparmetis -Isoft/parmetis-4.0.3/include -Lsoft/parmetis-4.0.3/lib
$mpirun -n 3 ./mesh
Partitioning a graph of size 14 serially
Setup: Max: 0.001, Sum: 0.004, Balance: 1.004
Remap: Max: 0.000, Sum: 0.000, Balance: 1.150
Total: Max: 0.006, Sum: 0.018, Balance: 1.000
Final 3-way Cut: 8 Balance: 1.071
Rank = 0, edgecut = 8, part[] = 0 1 0
Rank = 1, edgecut = 8, part[] = 2 1 0 1
Rank = 2, edgecut = 8, part[] = 2 0 2 2 1 2 0

On my Linux machine:
$mpicc -o mesh mesh.c -std=c99 -lparmetis -Isoft/parmetis-4.0.3/include -Lsoft/parmetis-4.0.3/lib
$mpirun -n 3 ./mesh
Partitioning a graph of size 14 serially
Setup: Max: 0.000, Sum: 0.000, Balance: 1.015
Remap: Max: 0.000, Sum: 0.000, Balance: 1.003
Total: Max: 0.001, Sum: 0.004, Balance: 1.006
Final 3-way Cut: 9 Balance: 1.071
Rank = 0, edgecut = 9, part[] = 1 0 2
Rank = 1, edgecut = 9, part[] = 1 0 1 1
Rank = 2, edgecut = 9, part[] = 2 2 2 2 0 0 0

RE: Different mesh partition results on two machines

This is due to differences in the random number generators between the two machines.
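
For reference, the randomness enters through the options array: when options[0] is 1, ParMETIS reads options[1] as the debug level and options[2] as the random-number seed. Below is a minimal sketch of pinning the seed explicitly (the value 42 is just an arbitrary example). A fixed seed makes repeated runs on one machine reproducible, but it cannot make the two machines agree if their ParMETIS builds generate random numbers differently.

/* Sketch: explicit options for ParMETIS_V3_PartMeshKway.
   Declared idx_t to match the ParMETIS prototype; the original int
   declaration is equivalent when IDXTYPEWIDTH is 32. */
idx_t options[3];
options[0] = 1;   /* use the user-supplied values below        */
options[1] = 7;   /* dbglvl, as in the original program        */
options[2] = 42;  /* random seed; any fixed value will do      */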