[Networkit] NetworKit Sprint
elisabetta.bergamini at kit.edu
Fri Dec 2 15:34:21 CET 2016
Matteo, thank you for your e-mail and for bringing this up.
In my opinion, the “pseudo-optimization” is not necessary: it’s not really faster, and when using it we lose the theoretical guarantee of the RK algorithm. If one doesn’t need a guarantee, it would probably make more sense to use ApproxBetweenness2 (the algorithm by Geisberger et al.), which doesn’t have any theoretical guarantee but seems to give a good estimate even with few samples. By the way, the names ApproxBetweenness and ApproxBetweenness2 are not very meaningful. How about we call them BetweennessApproximation and BetweennessEstimation, respectively?
If nobody has anything against that, I would suggest simply removing the pseudo-optimization and using the “pedantic” computation all the time.
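To make the failure mode concrete, here is a minimal Python sketch of the RK sample-size computation (the form of the bound and the constant c are as I recall them from the Riondato–Kornaropoulos paper; the function name and defaults are illustrative, not our actual code):

```python
import math

def rk_sample_size(vd_upper, eps=0.1, delta=0.1, c=0.5):
    """Sketch of the RK sample-size bound:
    r = (c / eps^2) * (floor(log2(VD - 2)) + 1 + ln(1/delta)).
    vd_upper must be an UPPER bound on the vertex diameter, with VD >= 3."""
    return math.ceil((c / eps ** 2)
                     * (math.floor(math.log2(vd_upper - 2)) + 1 + math.log(1 / delta)))

# With a sound upper bound, the sample count is moderate:
print(rk_sample_size(35))  # 416

# The buggy path can instead return a LOWER bound of 1, so VD - 2 == -1.
# In Python this would raise a math domain error, but C++ code doing the
# subtraction on an unsigned integer type silently wraps around instead:
bad_operand = (1 - 2) % 2 ** 32          # simulated 32-bit unsigned underflow
print(rk_sample_size(bad_operand + 2))   # 1716: the log2 term jumps from 5 to 31
```

In the C++ implementation the wrap-around makes the log term (and hence the sample count) explode, which is exactly the overflow Matteo describes below.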
Karlsruhe Institute of Technology (KIT)
Institute of Theoretical Informatics (ITI)
Theoret. Informatics / Parallel Computing
KIT - The Research University in the Helmholtz Association
On Dec 2, 2016, at 15:03, Matteo Riondato <matteo at cs.brown.edu> wrote:
>> On Dec 2, 2016, at 8:55 AM, Elisabetta Bergamini <elisabetta.bergamini at kit.edu> wrote:
>> Dear NetworKit developers,
>> On Tuesday, December 13 there will be a sprint to complete the new NetworKit release. We will start at 12:00 in room 010, building 50.34, Karlsruhe Institute of Technology.
>> We’ll stay in room 010 until 16:00, then, if necessary, we’ll move to room 315 on the third floor.
>> If you would like to contribute, feel free to join us at any time during the sprint. For those of you who were not at the developer meeting, please let me know as soon as possible if there’s any code you’d like to integrate into the new release.
>> Thank you,
> There is a small bug in the pseudo-optimization (I don’t want to say pessimization =) ) of the approximate computation of the vertex diameter in the RK algorithm (i.e., the computation that returns a lower bound on the vertex diameter, rather than the upper bound the algorithm requires).
> Specifically, there are cases in which the lower bound is 1, and the sample size computation overflows, making the algorithm collect an astronomic number of samples.
> Some time ago I switched the default to use the pedantic computation, but the bug is still there. I’ll fix it and commit before Dec 13.
> NetworKit mailing list
> NetworKit at ira.uni-karlsruhe.de