[Networkit] new core decomposition algorithm

Maximilian Vogel maximilian.vogel at student.kit.edu
Thu Sep 17 10:35:56 CEST 2015


> But clearly, it must be found out what hampers the ParK implementation 
> (and if this is due to the algorithm or the implementation).
The paper states that the atomicIncrement operations, realized with the
omp atomic capture statement, are carried out in batches of at least one
cache line in size. Thread-local buffers are needed for that. I've pushed
some changes in which each thread in the loops in parallelScan and
processSublevelParallel has its own vector to push nodes to. These vectors
are combined at the end of the function, so only one omp atomic capture
statement remains (a sketch of this pattern follows the timings below).
Here are the results from phipute:

graph G(n=862664, m=16138468) from METIS file: eu-2005
Time for ParK of 
/algoDaten/staudt/Graphs/Collections/ICPP/L/eu-2005.metis.graph: (280 ms)
Time for bucket queue based k-core decomposition of 
/algoDaten/staudt/Graphs/Collections/ICPP/L/eu-2005.metis.graph: (1118 ms)

graph G(n=18520486, m=261787258) from METIS file: uk-2002
Time for ParK of 
/algoDaten/staudt/Graphs/Collections/ICPP/L/uk-2002.metis.graph: (4773 ms)
Time for bucket queue based k-core decomposition of 
/algoDaten/staudt/Graphs/Collections/ICPP/L/uk-2002.metis.graph: (22349 ms)

Note that this is not the same eu-2005 graph Christian used.
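
For illustration, here is a minimal, self-contained sketch of the
thread-local buffering pattern described above. This is not the actual
NetworKit code: the function name scanLevel, the parameter names and the
node/count aliases are made up, and it assumes that buffer has already
been sized to hold all nodes. The point is that each thread issues one
atomic capture per call instead of one per node.

#include <omp.h>
#include <algorithm>
#include <cstdint>
#include <vector>

using node = std::uint64_t;
using count = std::uint64_t;

// Append all nodes whose remaining degree equals `level` to `buffer`,
// starting at position `bufferEnd` (advanced atomically, once per thread).
void scanLevel(const std::vector<count>& degrees, count level,
               std::vector<node>& buffer, count& bufferEnd) {
    const std::int64_t n = static_cast<std::int64_t>(degrees.size());

    #pragma omp parallel
    {
        std::vector<node> localBuffer; // thread-local, no synchronization

        #pragma omp for nowait
        for (std::int64_t u = 0; u < n; ++u) {
            if (degrees[u] == level) {
                localBuffer.push_back(static_cast<node>(u));
            }
        }

        // Reserve a contiguous slice of the shared buffer with a single
        // atomic capture per thread instead of one per node.
        count start;
        #pragma omp atomic capture
        { start = bufferEnd; bufferEnd += localBuffer.size(); }

        std::copy(localBuffer.begin(), localBuffer.end(),
                  buffer.begin() + static_cast<std::ptrdiff_t>(start));
    }
}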

There are still a few issues with the implementation:
- the scan function (in both implementations) scans over all nodes in every
iteration. If the implementation kept the "alive" information for the
nodes, this work could be reduced.
- it should be possible to remove the remaining omp atomic capture
statement as well (see the sketch after this list), but I'm not yet sure
whether this will actually improve performance.
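
Regarding the second point: one way the remaining omp atomic capture
statement could be removed is to compute per-thread write offsets with an
exclusive prefix sum over the sizes of the thread-local buffers, so that
each thread copies into its own non-overlapping slice of the shared buffer
without any atomics. The following is only a sketch of that idea;
scanLevelNoAtomic and all other identifiers are made up, not the NetworKit
code, and whether this beats the single atomic capture would have to be
measured.

#include <omp.h>
#include <algorithm>
#include <cstdint>
#include <vector>

using node = std::uint64_t;
using count = std::uint64_t;

void scanLevelNoAtomic(const std::vector<count>& degrees, count level,
                       std::vector<node>& buffer, count& bufferEnd) {
    const std::int64_t n = static_cast<std::int64_t>(degrees.size());
    const int maxThreads = omp_get_max_threads();

    std::vector<std::vector<node>> localBuffers(maxThreads);
    std::vector<count> offsets(maxThreads + 1, 0);

    #pragma omp parallel
    {
        const int tid = omp_get_thread_num();
        std::vector<node>& localBuffer = localBuffers[tid];

        #pragma omp for
        for (std::int64_t u = 0; u < n; ++u) {
            if (degrees[u] == level) {
                localBuffer.push_back(static_cast<node>(u));
            }
        }
        // The implicit barrier of the worksharing loop guarantees that all
        // local buffers are complete here.

        // One thread computes the exclusive prefix sum of the local sizes.
        #pragma omp single
        {
            for (int t = 0; t < maxThreads; ++t) {
                offsets[t + 1] = offsets[t] + localBuffers[t].size();
            }
        } // implicit barrier of "single"

        // Every thread now owns a private, non-overlapping output range.
        const std::ptrdiff_t dest =
            static_cast<std::ptrdiff_t>(bufferEnd + offsets[tid]);
        std::copy(localBuffer.begin(), localBuffer.end(),
                  buffer.begin() + dest);
    }

    bufferEnd += offsets[maxThreads];
}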


Best,
Max