Scalability of rb in terms of the number of clusters

Hello,
I am using CLUTO to cluster about 1M documents with rb. According to the manual, the computational time complexity is O(NNZ*log(k)), but it does not behave like this as k gets big. I ran vcluster 2.1.2 (Linux-x86_64) for 10, 100, 1,000, 10,000, and 100,000 clusters with the default options:

Matrix Information -----------------------------------------------------------
Name: ./foo.mat, #Rows: 1033461, #Columns: 693328, #NonZeros: 150672156

Options ----------------------------------------------------------------------
CLMethod=RB, CRfun=I2, SimFun=Cosine, #Clusters: 1000
RowModel=None, ColModel=IDF, GrModel=SY-DIR, NNbrs=40
Colprune=1.00, EdgePrune=-1.00, VtxPrune=-1.00, MinComponent=5
CSType=Best, AggloFrom=0, AggloCRFun=I2, NTrials=10, NIter=10

These are the runtimes:

#clusters   #seconds
10               626
100             1412
1000            6400
10000         161186
100000        not finished after one week

It runs constantly at 100% CPU, and memory is not a problem.

Is this dramatic increase in runtime as k gets big due to the implementation rather than the algorithm? Under the O(NNZ*log(k)) model, going from 1,000 to 10,000 clusters should increase the runtime by only a factor of log(10000)/log(1000) ≈ 1.33, yet I see a ~25x increase.
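
For what it's worth, here is a quick back-of-the-envelope script (plain Python, nothing to do with CLUTO itself) that estimates the empirical exponent p in t ~ k^p between successive rows of the table above:

    # Estimate how runtime scales with k from the measurements above.
    # Under the documented O(NNZ*log(k)) model the exponent should stay
    # close to 0; a linear-in-k cost per split would push it toward 1.
    import math

    runs = [(10, 626), (100, 1412), (1000, 6400), (10000, 161186)]

    for (k1, t1), (k2, t2) in zip(runs, runs[1:]):
        p = math.log(t2 / t1) / math.log(k2 / k1)
        print(f"k {k1:>5} -> {k2:>5}: runtime x{t2 / t1:6.1f}, t ~ k^{p:.2f}")

The exponent climbs from about 0.35 to about 1.4 per decade of k, so at large k the total runtime grows at least linearly in k, not logarithmically.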

BTW, thank you very much for making this program freely available.

RE: The problem with that is due

The problem is due to a not-so-smart way of implementing the selection of which cluster to bisect next. This can be optimized so that its performance is comparable to that of the large-cluster selection scheme.
I may be able to find some time to do that during the next month or two.
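
In case it helps to picture the fix, here is a minimal sketch (in Python, not CLUTO's actual code) of what heap-based selection could look like, so that picking the next cluster to bisect costs O(log k) instead of a scan over all current clusters; `bisect` and `score` are hypothetical placeholders:

    # Repeated bisection with heap-based selection of the next cluster
    # to split. This is a sketch: it omits edge cases such as clusters
    # that are too small to bisect.
    import heapq
    from itertools import count

    def repeated_bisection(docs, k, bisect, score):
        # bisect(cluster) -> (left, right): one 2-way split (placeholder).
        # score(cluster) -> float: splitting priority, e.g. cluster size
        # (the large-cluster scheme) or the expected gain in the I2
        # criterion function (the best-cluster scheme).
        tie = count()  # tiebreaker so the heap never compares clusters
        heap = [(-score(docs), next(tie), docs)]  # max-heap via negation
        while len(heap) < k:
            _, _, cluster = heapq.heappop(heap)   # O(log k) selection
            left, right = bisect(cluster)
            heapq.heappush(heap, (-score(left), next(tie), left))
            heapq.heappush(heap, (-score(right), next(tie), right))
        return [c for _, _, c in heap]

With a structure like this, the total selection cost over the k-1 splits is O(k log k), consistent with the manual's O(NNZ*log(k)) claim, whereas rescanning and re-scoring every current cluster before each split would add a term that grows roughly quadratically in k, which would match the slowdown reported above.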

RE: Re: Scalability ...

I can't see in the change log that there have been any changes since I originally wrote about this two years ago. Is CLUTO still being maintained? Or, even better, are there any plans to release the source code?