ParMETIS (4.0.3) PartKway returns discontiguous partitions with a "large" number of MPI tasks

Dear all,

I am using ParMETIS (version 4.0.3) to partition, in parallel,
the dual graph of a finite element mesh with 8.75M tetrahedra.
In particular, I build the dual graph myself and then call the
ParMETIS_V3_PartKway subroutine to partition it into as many parts
as there are MPI tasks in the parallel computation. The dual graph
is constructed in such a way that two vertices are neighbours if and
only if the corresponding tetrahedra share a face. I am using ncon==1,
no vertex/edge weights, uniform tpwgts, and ubvec=1.10.
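
For reference, the call I perform looks roughly like the sketch below
(simplified, error checking omitted; vtxdist/xadj/adjncy are the
distributed CSR arrays of my dual graph, and, since ParMETIS requires a
non-NULL tpwgts array, I pass uniform target part weights of 1/nparts):

```c
#include <stdlib.h>
#include <mpi.h>
#include <parmetis.h>

/* Partition a distributed dual graph (CSR arrays vtxdist/xadj/adjncy)
 * into nparts = number of MPI tasks. Sketch only: graph construction
 * and error handling omitted. */
void partition_dual_graph(idx_t *vtxdist, idx_t *xadj, idx_t *adjncy,
                          idx_t nparts, idx_t *part, MPI_Comm comm)
{
    idx_t wgtflag = 0;            /* no vertex or edge weights   */
    idx_t numflag = 0;            /* C-style (0-based) numbering */
    idx_t ncon    = 1;            /* one balance constraint      */
    idx_t options[3] = {0, 0, 0}; /* default options             */
    idx_t edgecut;

    /* ParMETIS requires tpwgts; use uniform target part weights. */
    real_t *tpwgts = malloc(ncon * nparts * sizeof(real_t));
    for (idx_t i = 0; i < ncon * nparts; i++)
        tpwgts[i] = 1.0 / (real_t)nparts;

    real_t ubvec[1] = {1.10};     /* 10% load imbalance tolerance */

    ParMETIS_V3_PartKway(vtxdist, xadj, adjncy,
                         NULL, NULL,            /* vwgt, adjwgt */
                         &wgtflag, &numflag, &ncon, &nparts,
                         tpwgts, ubvec, options, &edgecut, part, &comm);
    free(tpwgts);
}
```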

With this set-up, I obtain discontiguous partitions with
1024 MPI tasks/partitions and beyond. I am aware that
ParMETIS 4.0.3 does not guarantee contiguous partitions, but ...

1) Is there any algorithmic set-up that favours contiguous
partitions? I have tried increasing NGR_PASSES from 4 to 20
in defs.h, without success. I also tried replacing the parallel
recursive bisection at the coarsest level with a METIS 5.x PartKway
call, replicated on all processors, with METIS_OPTION_CONTIG enabled,
also without success (i.e., the uncoarsening/refinement phase seems
to be the source of the discontiguity).
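
For completeness, the replicated coarsest-level call I describe above
is essentially the following (sketch only; gxadj/gadjncy stand for the
serial CSR arrays of the gathered coarsest graph, and note that
METIS_OPTION_CONTIG requires the input graph itself to be connected):

```c
#include <metis.h>

/* Replicated serial partitioning of the (gathered) coarsest graph
 * with METIS 5.x, requesting contiguous parts. Sketch only. */
void coarse_partition_contig(idx_t nvtxs, idx_t *gxadj, idx_t *gadjncy,
                             idx_t nparts, idx_t *part)
{
    idx_t ncon = 1;
    idx_t objval;
    real_t ubvec[1] = {1.10};

    idx_t options[METIS_NOPTIONS];
    METIS_SetDefaultOptions(options);
    options[METIS_OPTION_CONTIG] = 1;  /* request contiguous partitions */

    METIS_PartGraphKway(&nvtxs, &ncon, gxadj, gadjncy,
                        NULL, NULL, NULL,      /* vwgt, vsize, adjwgt */
                        &nparts, NULL, ubvec,  /* uniform tpwgts      */
                        options, &objval, part);
}
```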

2) Would it be very hard to write a message-passing code that tries to
enforce contiguous partitions a posteriori, as is done in METIS 5.x
when METIS_OPTION_CONTIG is enabled?

Find at

a picture showing what a discontiguous partition
looks like. It is the local finite element mesh corresponding
to part id 117. Note that, in the bottom-right corner, there are two
tetrahedra sharing only one corner; these two tetrahedra are
DISCONNECTED in the dual graph. This kind of situation can lead to
singularities in the solution of local Neumann problems in domain
decomposition solvers (the ones fed by the finite element mesh
partition computed by ParMETIS).

Thanks for your support.
Best regards,