Parallel Graph & Mesh Partitioning

Discussions about the routines in ParMETIS

Hi,
int elmdist[]={0, 2, 4}, eptr[]={0, 4, 8}, eind[8];
int eind_cpu1[]={0, 1, 4, 3, 1, 2, 5, 4};
int eind_cpu2[]={3, 4, 7, 6, 4, 5, 8, 7};
int wgtflag=0, numflag=0, ncon=0, nparts=2, ncommonnodes=4, edgecut, elmwgt=0, part[2], options[]= {0, 0, 0};
float tpwgts=1.0F, ubvec= 1.05F;

Second try: Sorry.
That's my input for the function named in the title, run with 2 processes. My output is:
cpu1 part[0]=0
cpu1 part[1]=0
cpu2 part[0]=0
cpu2 part[1]=0

I've noticed that the nodes are numbered in the clockwise direction, but this has no effect on the result.
Is my input wrong? Can someone explain the result to me? I expected 2 elements on cpu2.

Regards stonator

Hi,
I would like to use ParMetis to partition large meshes, but I am failing on a simple 4-element mesh.

#include <mpi.h>
#include <parmetis.h>
#include <string.h>
#include <stdio.h>

int test_main(int argc, char* argv[])
{
MPI_Comm comm;
MPI_Init(&argc, &argv);

MPI_Comm_dup(MPI_COMM_WORLD,&comm);
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

int elmdist[]={0, 2, 4}, eptr[]={0, 4, 8}, eind[8];
int eind1[]={0, 1, 4, 3, 1, 2, 5, 4};
int eind2[]={3, 4, 7, 6, 4, 5, 8, 7};

int wgtflag=0, numflag=0, ncon=0, nparts=2, ncommonnodes=4, edgecut, elmwgt=0, part[2], options[]= {0, 0, 0};
float tpwgts=1.0F, ubvec= 1.05F;

if(rank == 0){
memcpy(eind, eind1, sizeof(eind1));
}else{
memcpy(eind, eind2, sizeof(eind2));
}
ParMETIS_V3_PartMeshKway(elmdist, eptr, eind, &elmwgt, &wgtflag, &numflag, &ncon, &ncommonnodes, &nparts, &tpwgts, &ubvec, options, &edgecut, part, &comm);

printf("rank %d: part[0]=%d part[1]=%d\n", rank, part[0], part[1]);

MPI_Finalize();
return 0;
}
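For comparison, here is a rough, untested sketch of how the size-dependent arguments are usually set up for ParMETIS_V3_PartMeshKway, reusing elmdist, eptr, eind and comm from the program above: ncon needs to be at least 1, tpwgts is an array of ncon*nparts floats summing to 1.0, ubvec is an array of ncon floats, and for 2D quadrilaterals neighbouring elements share an edge, so ncommonnodes is normally 2. This is only a sketch of the argument shapes, not a verified fix.

/* Sketch only: argument shapes as described in the ParMETIS manual. */
int wgtflag = 0, numflag = 0, ncon = 1;   /* at least one balance constraint */
int nparts = 2, ncommonnodes = 2;         /* 2 shared nodes = edge connectivity for 2D quads */
int edgecut, options[3] = {0, 0, 0}, part[2];
float tpwgts[2] = {0.5F, 0.5F};           /* ncon*nparts entries, summing to 1.0 */
float ubvec[1]  = {1.05F};                /* ncon entries: load-imbalance tolerance */

ParMETIS_V3_PartMeshKway(elmdist, eptr, eind, NULL /* elmwgt unused since wgtflag=0 */,
                         &wgtflag, &numflag, &ncon, &ncommonnodes, &nparts,
                         tpwgts, ubvec, options, &edgecut, part, &comm);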

Hello,

I am using the function:
ParMETIS_V3_Mesh2Dual ( idxtype * elmdist,
idxtype * eptr,
idxtype * eind,
int * numflag,
int * ncommonnodes,
idxtype ** xadj,
idxtype ** adjncy,
MPI_Comm * comm)
in my Fortran program.
However, I do not know how to declare xadj and adjncy.
When I declare them as ordinary arrays, I get a segmentation fault during the allocation of myxadj inside ParMETIS_V3_Mesh2Dual.
Thanks
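For what it's worth, in the C interface the last two arguments are output pointers that ParMETIS_V3_Mesh2Dual allocates itself, so they cannot be declared as ordinary preallocated arrays. A rough C sketch of the calling convention (assuming elmdist, eptr, eind, numflag, ncommonnodes and comm are already set up):

/* xadj and adjncy are allocated inside ParMETIS_V3_Mesh2Dual and returned
 * through the double pointers; declare them as NULL and pass their addresses. */
idxtype *xadj = NULL, *adjncy = NULL;

ParMETIS_V3_Mesh2Dual(elmdist, eptr, eind, &numflag, &ncommonnodes,
                      &xadj, &adjncy, &comm);

/* ... use the dual graph ... */

free(xadj);     /* assumption: the arrays can be released with free();   */
free(adjncy);   /* check the manual for the version you are linking against */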

I am getting the following assertion failure:

***ASSERTION failed on line 213 of file wave.c: fabs(ssum(nparts, tmpvec)) < .0001.

Valgrind reports that the sum is evaluated with an uninitialised variable.
I am calling from a Fortran code, and am using version 3.1.1. Any help would be appreciated.

Hi,

we are trying to partition a graph with 287496 vertices and an average of 8 edges per vertex, using 287496 processors, into roughly 32000 parts.
We tried PARMETIS_PartGraphKway, but apparently ParMetis (as well as Metis) tries to allocate an integer array of size 32000^2, which fails due to an integer overflow:

Error! ***Memory allocation failed for AllocateWorkSpace: pmat. Requested size: -1631581632

The same happens when using METIS_PartGraphKway sequentially.
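For reference, here is a quick back-of-the-envelope check of why the requested size comes out negative, assuming pmat is an nparts-by-nparts array of 4-byte ints (as the error message suggests):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    long long nparts  = 32000;
    long long entries = nparts * nparts;                   /* 1,024,000,000 */
    long long bytes   = entries * (long long)sizeof(int);  /* ~4.1 GB with 4-byte ints */

    printf("pmat entries: %lld\n", entries);
    printf("pmat bytes  : %lld\n", bytes);
    printf("INT_MAX     : %d\n", INT_MAX);
    /* bytes > INT_MAX, so a 32-bit size computation wraps around to a
     * negative value, which is what the allocator then reports. */
    return 0;
}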

We were able to partition the graph using METIS_PartGraphRecursive on one processor, but would prefer a parallel partitioning strategy for scalability. Is there a ParMETIS counterpart to METIS_PartGraphKway?

Regards,

Markus Blatt

On some OS X systems, compilation of ParMETIS stops with:


(cd ParMETISLib ; make )
mpicc -DNDEBUG -O3 -I. -c comm.c
In file included from ./parmetislib.h:19:0,
from comm.c:11:
./stdheaders.h:17:20: fatal error: malloc.h: No such file or directory
compilation terminated.
make[1]: *** [comm.o] Error 1
make: *** [default] Error 2

This is because OS X does not provide a malloc.h header; the malloc declarations are instead in stdlib.h. The problem can be fixed by wrapping the #include in ParMETISLib/stdheaders.h in a conditional, as shown below:


#if !defined(__APPLE__)
#include <malloc.h>
#endif

Hello

I am a new user of ParMETIS. I am trying to use ParMETIS_V3_PartKway from ParMETIS 3.1.1 to partition my large graph (sparse matrix), but it crashed and I got the following error message:

[cluster:29557] *** Process received signal ***
[cluster:29557] Signal: Aborted (6)
[cluster:29557] Signal code: (-6)
Error! ***Memory allocation failed for SetUp: sendind. Requested size: -1175656032 bytes
[cluster:29557] [ 0] /lib64/libc.so.6 [0x2aaaae4912d0]
[cluster:29557] [ 1] /lib64/libc.so.6(gsignal+0x35) [0x2aaaae491265]
[cluster:29557] [ 2] /lib64/libc.so.6(abort+0x110) [0x2aaaae492d10]
[cluster:29557] [ 3] ./test_1 [0x422214]
[cluster:29557] [ 4] ./test_1 [0x4227df]
[cluster:29557] [ 5] ./test_1 [0x4230cf]
[cluster:29557] [ 6] ./test_1 [0x4264aa]
[cluster:29557] [ 7] ./test_1 [0x41db97]
[cluster:29557] [ 8] ./test_1 [0x40ecf5]
[cluster:29557] [ 9] /lib64/libc.so.6(__libc_start_main+0xf4) [0x2aaaae47e994]

Hi,

I am new to ParMetis.

Can someone please tell me how it works and how to submit a job with ParMetis? I have to partition a mesh with 40 million nodes into 64 partitions.

What is the meaning of these parameters?

./parmetis

Thanks,

Vimal

Is there a tentative release date for ParMetis 4.0? (I noticed that ParMetis 4.0 is included in the issue tracker.)

I am wondering if there is a way to tell Metis (either through the command line or a library call) to generate partitions of unequal sizes up to a certain tolerance factor, similar to ubfactor in hmetis. I have looked into ubvec for mCPartGraphKway, but since my graph does not have multiple constraints, that does not apply. Thanks!
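One possibility (a sketch, not verified against any particular Metis version): ParMETIS_V3_PartKway takes a tpwgts array giving the target fraction of vertex weight for each part, so unequal target sizes can be requested directly, with ubvec acting as the tolerance. For example, asking for a 25%/75% split across two parts with a single constraint, and assuming vtxdist, xadj, adjncy, nvtxs_local and comm already describe the distributed graph:

/* Sketch: uneven target part sizes via tpwgts (ncon = 1, nparts = 2). */
int wgtflag = 0, numflag = 0, ncon = 1, nparts = 2;
int options[3] = {0, 0, 0}, edgecut;
float tpwgts[2] = {0.25F, 0.75F};   /* target fraction per part, summing to 1.0 */
float ubvec[1]  = {1.05F};          /* allowed imbalance relative to the targets */
idxtype *part = malloc(nvtxs_local * sizeof(idxtype));  /* one entry per local vertex */

ParMETIS_V3_PartKway(vtxdist, xadj, adjncy, NULL, NULL, &wgtflag, &numflag,
                     &ncon, &nparts, tpwgts, ubvec, options, &edgecut, part, &comm);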