Shortest path problems are among the most fundamental combinatorial optimization problems, with many applications both direct and as subroutines in other combinatorial optimization algorithms. Algorithms for these problems have been studied since the 1950s and remain an active area of research.
One goal of this Challenge is to create a reproducible picture of the state of the art in the area of shortest path algorithms. To this end we are identifying a standard set of benchmark instances and generators, as well as benchmark implementations of well-known shortest path algorithms.
Another goal is to enable current researchers to compare their codes with each other, in hopes of identifying the most effective of the recently proposed algorithmic innovations.
The final goal is to publish proceedings containing results presented at the Challenge workshop, and a book containing the best of the proceedings papers.
The Challenge addresses a wide range of shortest path problems, including all sensible combinations of the following:
|k-shortest paths, point-to-point, single-source, all-pairs|
|Non-negative arc lengths and arbitrary arc lengths (including negative cycle detection)|
|Directed and undirected graphs|
|Static and dynamic problems. The latter include problems that are dynamic in the CS sense (arc additions, deletions, and length changes) and those dynamic in the OR sense (arc transit times that depend on arrival times)|
|Exact and approximate shortest paths|
|Compact routing tables and shortest path oracles|
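To make the simplest of the variants above concrete (single-source, non-negative arc lengths, directed graph), here is a minimal Dijkstra sketch; the function name and adjacency-list representation are illustrative only and are not part of the Challenge benchmark codes:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths with non-negative arc lengths.

    adj: dict mapping each vertex to a list of (neighbor, length) pairs.
    Returns a dict of shortest distances from source.
    """
    dist = {source: 0}
    heap = [(0, source)]  # (tentative distance, vertex) priority queue
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Example: a small directed graph with three vertices
g = {1: [(2, 7), (3, 2)], 3: [(2, 3)], 2: []}
print(dijkstra(g, 1))  # {1: 0, 2: 5, 3: 2}
```

The other variants listed above (point-to-point, all-pairs, negative lengths, dynamic settings) call for different algorithms and data structures, which is precisely the design space the Challenge benchmarks are meant to probe.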
Implementations on any platform of interest, for example desktop machines, parallel machines, supercomputers, and handheld devices, are encouraged.
This challenge is open to anyone who wishes to participate. Participants can submit either algorithms or problem instances. Participants may submit results for as many algorithms as they wish.
If you are interested in participating in the Challenge, send email to firstname.lastname@example.org to let us know of your plans. A good way to stay in the loop is to register for the Challenge mailing list.
Participants should download benchmark instances and code for the problem they address from the download page. The page describes the file formats and includes instructions for using the instance generators to create the benchmark instances in the testbed.
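The authoritative file format descriptions are on the download page; as a hedged illustration, the sketch below assumes the plain-text graph convention commonly associated with DIMACS shortest-path instances, in which `c` lines are comments, a `p sp <nodes> <arcs>` line declares the problem size, and each `a <tail> <head> <length>` line describes one arc:

```python
def read_dimacs_gr(lines):
    """Parse a graph in the assumed DIMACS shortest-path text format.

    'c' lines are comments, 'p sp <n> <m>' declares the problem size,
    and each 'a <tail> <head> <length>' line describes one directed arc.
    Returns (number of nodes, list of (tail, head, length) tuples).
    """
    n, arcs = 0, []
    for line in lines:
        fields = line.split()
        if not fields or fields[0] == "c":
            continue  # skip blank lines and comments
        if fields[0] == "p":    # problem line: p sp <nodes> <arcs>
            n = int(fields[2])
        elif fields[0] == "a":  # arc line: a <tail> <head> <length>
            arcs.append((int(fields[1]), int(fields[2]), int(fields[3])))
    return n, arcs

sample = """c tiny example instance
p sp 3 3
a 1 2 7
a 1 3 2
a 3 2 3""".splitlines()
print(read_dimacs_gr(sample))  # (3, [(1, 2, 7), (1, 3, 2), (3, 2, 3)])
```

Check any reader you write against the format specification and generators on the download page, since the details there take precedence over this sketch.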
Your work can take one of two directions.
|Instances for algorithm evaluation. The instances should be natural and interesting. By the latter we mean instances that cause good algorithms to behave differently than they do on other instances. Interesting real-life application data is especially welcome.|
|Algorithm evaluation. Descriptions of algorithm implementations, with experimental data that supports conclusions about practical performance. Common benchmark instances and codes should be used so that there is common ground for comparison. The most obvious way for such a paper to be interesting (and selected for the proceedings) is for the implementation to improve on the state of the art. However, there may be other ways to produce an interesting paper, for example by showing that an approach that looks good in theory does not work well in practice, and explaining why this is the case.|
The Challenge will have three phases. The first phase will be devoted to collecting and improving testbeds. During the second phase, algorithms and codes are evaluated and improved. The second phase culminates in a workshop with talks describing testbeds and implementations. In the post-workshop phase, papers describing the most interesting results of the Challenge are assembled into a book of refereed Proceedings. As with previous Challenges, we expect that the website with benchmarks will remain accessible after the Challenge is complete. We plan to have an initial collection of benchmark instances and codes available by September 30, 2005, officially starting the first phase. We expect the first phase to be completed by March 31, 2006, at which point the benchmarks will be finalized and the second phase will start. The deadline for initial versions of the algorithm implementation papers is August 25, 2006. The workshop will take place on November 13-14, 2006 at the DIMACS Center, Rutgers University, Piscataway, NJ.
|October 12, 2005||Start of Phase 1 devoted to collecting and improving testbeds|
|March 31, 2006||Start of Phase 2 devoted to algorithm evaluations|
|August 25, 2006||Deadline for submitting papers to the workshop|
|November 13-14, 2006||Workshop @ DIMACS Center, Rutgers University, Piscataway, NJ|
|December, 2006||Start of Phase 3 devoted to assembling the final Challenge Proceedings|
University of Rome "La Sapienza"
Andrew Goldberg, Microsoft Research
David Johnson, AT&T Labs - Research
Questions and comments should be sent to email@example.com
Paolo Dell'Olmo, University of Rome "La Sapienza"
Irina Dumitrescu, University of New South Wales
Mikkel Thorup, AT&T Labs - Research
Dorothea Wagner, Universität Karlsruhe