Adaptive repartitioning in Metis

Is an equivalent to ParMetis_V3_AdaptiveRepart available in Metis? I am looking to repartition a previously partitioned graph, but certainly do not want the overhead of MPI. Thank you.


RE: Metis does not have adaptive

Metis does not have adaptive repartitioning routines the way ParMetis does. Depending on your application, the partition refinement routines that Metis does have may be sufficient.

I'm not planning on including such adaptive repartitioning algorithms in Metis, as I do not see a use case for them. Of course, you can convince me otherwise.

RE: Re: adaptive repartitioning in Metis

The parallel scenario for adaptive repartitioning goes like this: given a perfectly partitioned graph on two processors, you don't want to change it such that everything on proc 0 moves to proc 1 and vice versa. That is, you want to save on the expensive communication.

Now, if you kept track of where your original graph was, you could make that decision outside of ParMetis. For example, in the pathological case above, ParMetis says I have to move N nodes from proc 0 to proc 1 and N from proc 1 to proc 0. Why not just remap the labels, so that what ParMetis calls partition 0 is proc 1 and what it calls partition 1 is proc 0?

I'd say this is essentially what adaptive repartitioning is trying to avoid: the needless relabeling, not the communication, which is related but secondary to the partitioning. So I don't see the use case as being parallel-dependent.

I can imagine there are a variety of reasons for needing this. As a loose example, suppose there is a container for each partition, numbered to match. If the graph changed as in the case above, everything would have to be removed from one container and put in the other. My use case is a mesh that is contained the same way. In my case, each partition also lives on its own thread, so there's more overhead than simply exchanging data.

RE: I'm looking to find this out

I'm looking to find this out as well! It seems like this is a great topic to discuss.

RE: Could someone reply? Thanks.

Could someone reply? Thanks.