Abstracts (as of June 21, 2015):

Michael A. Bender, NYU, USA

TBD: Three Backoff Dilemmas

In this talk we revisit the classical problem of randomized exponential backoff on a multiple-access channel. We analyze randomized backoff with respect to throughput, robustness to jamming or failures, and the number of attempts to access the channel.
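To fix intuition for the object of study, here is a minimal simulation sketch of randomized binary exponential backoff on a shared channel (my own illustration, not the algorithm or analysis from the talk): each pending station transmits with probability 1/window, a round succeeds only if exactly one station transmits, and colliding stations double their windows.

```python
import random

def exponential_backoff(n_stations, seed=0, max_rounds=100000):
    """Simulate randomized binary exponential backoff on a multiple-access
    channel. A round succeeds iff exactly one station transmits; on a
    collision, every transmitting station doubles its backoff window.
    Returns the number of rounds until all stations have succeeded."""
    rng = random.Random(seed)
    windows = {i: 1 for i in range(n_stations)}  # per-station backoff window
    for t in range(1, max_rounds + 1):
        senders = [i for i in windows if rng.random() < 1.0 / windows[i]]
        if len(senders) == 1:
            del windows[senders[0]]      # that station's message got through
            if not windows:
                return t
        else:
            for i in senders:            # collision: back off
                windows[i] *= 2
    raise RuntimeError("did not finish within max_rounds")
```

Counting the returned number of rounds against n_stations gives a crude empirical handle on throughput, one of the three measures the talk considers.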

(Joint work with Jeremy Fineman, Seth Gilbert, and Maxwell Young)


Aaron Bernstein, Columbia U., USA

Dynamic matching in bipartite graphs

We study the problem of maintaining an approximate maximum cardinality matching in an unweighted bipartite graph G that is subject to a sequence of edge insertions and deletions. The goal is to recompute the approximate matching as quickly as possible after each insertion/deletion. Currently, there exist algorithms with only polylog update time for maintaining a 2-or-worse approximate matching (e.g. a maximal matching), while the fastest update time for any algorithm that achieves a better-than-2 approximation is O(m^(1/2)). Intuitively, the reason for this large gap is that maintaining a 2-approximate matching is a purely local problem: as long as we match all legal edges, it does not matter which edges we choose; if both (u,v) and (u,w) are legal, either one can be chosen for the 2-approximate matching. To go beyond a 2-approximation, we need to keep track of some global structure in the graph, which is difficult to do in a dynamic setting.

In our paper, we set out to devise faster algorithms for maintaining a better-than-2 approximation in bipartite graphs. We achieve an update time of O(m^(1/4)) for a (3/2 + eps) approximation, and in the case of small-arboricity graphs, we achieve polylog update time and a (1 + eps) approximation. The previous state of the art was an O(m^(1/2)) update time (1 + eps) approximation for general graphs, and a polylog update time 2-approximation for small arboricity; unlike our results, both worked in non-bipartite graphs. Our algorithm thus achieves the fastest update time for a better-than-2 approximation, as well as the fastest existing deterministic update time for any constant approximation. Our main technique is (loosely speaking) a new way of characterizing (1 + eps)- and (3/2 + eps)-approximate matchings, one that does not rely on the non-existence of short augmenting paths.

(Joint work with Clifford Stein)


Timo Bingmann, KIT, Germany

Engineering Parallel Shared-Memory String Sorting

We present parallel string sorting algorithms for modern multi-core and NUMA shared memory architectures. We first describe the challenges posed by these architectures, and discuss key points to achieving high performance gains. Then we give an overview of existing sequential string sorting algorithms, and continue by developing two easily and well parallelizable ones: Super Scalar String Sample Sort (S^5) and Multiway LCP-Mergesort. We then discuss applications and an outlook on future string sorting work.


Guy Blelloch, Carnegie Mellon U., USA

Teaching an Introduction to Parallel and Functional Algorithms

We have developed a new version of our introduction to algorithms and data structures course (sophomore level), which we have been teaching for three years. The course teaches parallel algorithms from the start, and in a purely functional style. Algorithms are analyzed in terms of work and depth, and sequential algorithms are simply algorithms in which the work and depth are the same (the distinction is seamless). An important goal is to have students think about parallelism while not giving up on the key ideas from traditional algorithms courses, most of which are valid for either parallel or sequential algorithms. I will describe the course, the motivation for using a functional style, and go through many examples of algorithms and data structures that we cover. I hope to get some feedback on the course.

The course includes: divide-and-conquer, solving recurrences, sorting, merging, median, randomized algorithms, sequences (prefix sums, reductions), sets and tables, balanced search trees (treaps, split and join, union and intersection, augmented trees, ordered sets), leftist heaps, hash tables, BFS (reachability, unweighted shortest paths), DFS* (topological sort, biconnected components), weighted shortest paths (Dijkstra's*, Bellman Ford), connectivity (graph contraction), MST (Boruvka and Prim*), computational geometry (2d range searching and skyline), and dynamic programming (subset sum, edit distance, optimal binary search trees, longest increasing subsequence). All except those marked with * are parallel, and all are described in a (purely) functional style.
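As an illustration of the work/depth style of analysis (my own sketch, not course material): a purely functional divide-and-conquer reduction does O(n) work, but has only O(log n) depth, because the two recursive calls are independent and could run in parallel.

```python
def reduce_dc(f, xs, identity):
    """Purely functional divide-and-conquer reduction.

    Work: O(n) applications of f (one per internal node of the recursion
    tree).  Depth: O(log n), since the two recursive calls below are
    independent of each other and could be evaluated in parallel."""
    if not xs:
        return identity
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    left = reduce_dc(f, xs[:mid], identity)    # these two calls are
    right = reduce_dc(f, xs[mid:], identity)   # independent: depth halves
    return f(left, right)
```

Run sequentially, work and depth coincide, which is exactly the sense in which sequential algorithms are a special case in this framework.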

(This course has been jointly taught with Umut Acar, Margaret Reid-Miller, Danny Sleator, and Kanat Tangwongsan, and about 400 students take the course per year)


Allan Borodin, U. Toronto, Canada

(Enhanced) Sequential Posted Price Mechanisms with Correlated Valuations

We study the revenue performance of sequential posted price (SPP) mechanisms when the valuations of the buyers are drawn from a correlated distribution. SPP mechanisms are conceptually simple mechanisms that work by proposing a ``take-it-or-leave-it'' offer to each buyer. We apply SPP mechanisms to settings in which each buyer has unit demand and the mechanism can assign the service to at most $k$ of the buyers. In contrast to the positive results of Chawla et al. when buyers have independent distributions, we prove that no SPP mechanism can extract a constant fraction of the optimal expected revenue, even with unlimited supply.

However, when we enhance these mechanisms by asking a small fraction of buyers for their valuations, we show that one can often obtain constant approximations. More specifically, we prove that such enhanced SPP mechanisms can achieve an Omega(1/max{1,d}) fraction of the optimal revenue, where $d$ is the "directed degree of dependence" of the valuations, ranging from independence ($d=0$) to complete dependence ($d = n-1$). Without this enhancement, standard SPP mechanisms cannot achieve a constant approximation even when $d = 1$.

(Joint work with Marek Adamczyk, Diodato Ferraioli, Bart de Keijzer, and Stefano Leonardi)


Keren Censor-Hillel, Technion, Israel

Tight Bounds on Vertex Connectivity Under Vertex Sampling

This talk will discuss a line of work which begins with studying a distributed routing problem and continues with obtaining a graph-theoretic result about vertex connectivity.

In particular, we show that for any k-vertex-connected graph G with n nodes, if each node is independently sampled with probability p=\Omega(sqrt{log(n)/k}), then the subgraph induced by the sampled nodes has vertex connectivity \Omega(kp^2), with high probability.

This is an analogous result to the fundamental result by Karger [SODA 1994], which states that for any k-edge-connected graph with n nodes, independently sampling each edge with probability p = \Omega(log(n)/k) results in a graph that has edge connectivity \Omega(kp), with high probability.

(Based on joint work with Mohsen Ghaffari, George Giakkoupis, Bernhard Haeupler, and Fabian Kuhn)


Shiri Chechik, Tel Aviv U., Israel

Approximate Distance Oracles with Improved Bounds

The design of algorithms for fast shortest path retrieval has attracted much attention in recent years, and has been studied extensively both from the theoretical and practical perspectives. While classic shortest path algorithms, e.g., Dijkstra's algorithm, enable one to easily compute a shortest path in a given graph, such algorithms suffer from two main drawbacks. First, they require the entire graph to be stored, resulting in high memory usage when the input graph is very dense. Second, the time required by such algorithms to answer a single query is linear in the number of nodes, much too slow for many applications. One approach to overcome these drawbacks, and achieve a sub-linear query time, is to preprocess the input graph and construct an appropriate data structure. A distance oracle is a data structure that allows fast retrieval of a distance estimate for any pair of nodes.

In this talk I will discuss a new construction for distance oracles in general undirected weighted graphs, which obtains improved bounds over previous works (both in size and query time).


Thomas Dueholm Hansen, Aarhus U., Denmark

Hollow Heaps

We introduce the hollow heap, a very simple data structure with the same amortized efficiency as the classical Fibonacci heap. All heap operations, except delete and delete-min, take O(1) time, worst case as well as amortized; delete and delete-min take O(log n) amortized time. Hollow heaps are by far the simplest structure to achieve this. Hollow heaps combine two novel ideas: the use of lazy deletion and re-insertion to do decrease-key operations, and the use of a dag (directed acyclic graph) instead of a tree or set of trees to represent a heap. Lazy deletion produces hollow nodes (nodes without items), giving the data structure its name.
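The lazy-deletion idea can be illustrated with a standard trick on an ordinary binary heap (this is a sketch of the principle only, not the hollow heap itself, which uses a dag of nodes and achieves O(1) worst-case decrease-key): decrease-key re-inserts the item with the smaller key and leaves the old entry behind as a stale, "hollow" entry that is skipped at pop time.

```python
import heapq

class LazyPQ:
    """Min-priority queue where decrease-key works by lazy re-insertion.

    Stale entries are left in the heap and skipped when popped, in the
    spirit of hollow heaps' hollow nodes (nodes without items)."""
    def __init__(self):
        self.heap = []   # (key, item) pairs, possibly stale
        self.best = {}   # item -> current (smallest known) key

    def insert(self, item, key):
        self.best[item] = key
        heapq.heappush(self.heap, (key, item))

    def decrease_key(self, item, key):
        if key < self.best[item]:
            self.best[item] = key
            heapq.heappush(self.heap, (key, item))  # old entry goes stale

    def pop_min(self):
        while self.heap:
            key, item = heapq.heappop(self.heap)
            if self.best.get(item) == key:          # skip stale entries
                del self.best[item]
                return item
        raise IndexError("pop from empty queue")
```

The point of the real data structure is that, with the dag representation, this laziness can be accounted for so that decrease-key costs O(1) worst case, not just amortized.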

(Joint work with Haim Kaplan, Robert E. Tarjan, and Uri Zwick)


Amos Fiat, Tel Aviv U., Israel

Pricing Online Decisions

We show that appropriately computed posted prices allow one to achieve essentially the same performance as the best online algorithm in a wide variety of settings. Unlike online algorithms, which learn about the event and then make enforceable decisions, prices are posted without knowing the future events or even the current event, and are thus inherently dominant strategy incentive compatible. In particular, we show that one can give efficient posted price mechanisms for metrical task systems, some instances of the k-server problem, and metrical matching problems. We give both deterministic and randomized algorithms. Such posted price mechanisms decrease the social cost dramatically over selfish behavior where no decision incurs a charge. One alluring application of this is reducing the social cost of free parking exponentially.


Daniel Fleischman, Cornell U., USA

A Simple O(|A||B|) algorithm for the Longest Common Cyclic Subsequence

The Longest Common Subsequence problem (LCS) is a classical problem with a simple O(|A||B|) dynamic programming solution taught in undergraduate and graduate courses around the world. While not optimal, this solution endures thanks to its simplicity and its fast runtime in practice.
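For reference, the classical dynamic program can be sketched as follows (this is the textbook LCS recurrence, not the cyclic extension presented in the talk); it keeps only one row of the table at a time.

```python
def lcs_length(a, b):
    """Classic O(|A||B|) LCS dynamic program, keeping a rolling row.

    Recurrence: T[i][j] = T[i-1][j-1] + 1            if a[i] == b[j],
                          max(T[i-1][j], T[i][j-1])  otherwise."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]
```

The naive way to handle rotations is to rerun this for every rotation of A, costing O(|A|^2|B|); the point of the talk is an extension that stays within O(|A||B|).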

In some applications, such as when dealing with circular bacterial chromosomes, we would like to consider rotations of the sequences. The Longest Common Cyclic Subsequence (LCCS) problem asks for the longest common subsequence allowing rotation of the sequences. We present an extension of the traditional LCS algorithm to LCCS that shares its simplicity, its O(|A||B|) runtime, and its fast runtime in practice. The techniques can also be applied to other cyclic and sliding-window problems.


Loukas Georgiadis, U. Western Macedonia, Greece

2-Edge-Connectivity Problems in Directed Graphs

We consider problems related to several concepts of 2-edge-connectivity in directed graphs. We say that two vertices v and w are 2-edge-connected if there are two edge-disjoint paths from v to w and two edge-disjoint paths from w to v. This relation partitions the vertices into blocks such that all vertices in the same block are 2-edge-connected. Differently from the undirected case, those blocks do not correspond to the 2-edge-connected components of the graph. In this talk, we present the first linear-time algorithm that computes the 2-edge-connected blocks of a directed graph.

We also consider the following optimization problems where we wish to compute the smallest strongly connected spanning subgraph of a directed graph that maintains respectively: the 2-edge-connected blocks (2EC-B); the 2-edge-connected components (2EC-C); both the 2-edge-connected blocks and the 2-edge-connected components (2EC-B-C). All three problems are NP-hard, and thus we are interested in efficient approximation algorithms. For 2EC-C we can obtain a 3/2-approximation by combining previously known results. For 2EC-B and 2EC-B-C, we present new 4-approximation algorithms that run in linear time.

(Joint work with Giuseppe Italiano, Luigi Laura, Charis Papadopoulos, and Nikos Parotsidis)


Andrew V. Goldberg, Amazon.com Inc., USA

Hub Labeling Algorithms

Given a weighted graph, a distance oracle takes as an input a pair of vertices and returns the distance between them. The labeling approach to distance oracle design is to precompute a label for every vertex so that the distances can be computed from the corresponding labels, without looking at the graph. We survey results on hub labeling (HL), a labeling algorithm that received a lot of attention recently in both theoretical and experimental contexts.
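In hub labeling, each vertex v stores a label: a set of hubs h with the distances d(v,h), chosen so that every pair u,v has a common hub on some shortest u-v path (the cover property). A query then needs only the two labels. A minimal sketch of the query (label contents below are made-up toy data, not from any real instance):

```python
def hl_query(label_u, label_v):
    """Hub-labeling distance query.

    Each label maps hub -> distance from the labeled vertex. The cover
    property guarantees some shortest u-v path contains a common hub, so
    min over common hubs h of d(u,h) + d(h,v) is the exact distance."""
    common = label_u.keys() & label_v.keys()
    if not common:
        return float("inf")  # no common hub: u and v are disconnected
    return min(label_u[h] + label_v[h] for h in common)
```

Query time and memory are governed by label sizes, which is why approximating the minimum-size labels (the NP-hard problem mentioned below) matters.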

HL query time and memory requirements depend on the label size. In polynomial time one can approximate the labels up to a factor of O(log n) [Cohen et al. 2003]. Our recent paper shows that the problem is NP-hard. We also developed a faster approximation algorithm; however, in practice it does not scale to large graphs.

We introduce a subclass of HL, hierarchical labels (HHL), and give experimental results showing that HHL heuristics work well on a wide range of problems. We also give theoretical results on the complexity of computing the optimal HHL labels.


Ilia Gorelik, Tel Aviv U., Israel

The Temp Secretary Problem

We consider a generalization of the secretary problem where contracts are temporary and last for a fixed duration $\gamma$. This models online hiring of temporary employees, or online auctions for re-usable resources. Consider a firm employing temporary employees, where candidates interview at random times drawn from a known prior. Hiring may not be preemptive, and is subject to capacity and budget constraints:

- Employees cannot be fired, they leave when their contract runs out (no preemption).

- At no point in time can there be more than $d$ employees employed (capacity).

We give bounds on the competitive ratio of this problem. Not surprisingly, the competitive ratio improves as the employment duration $\gamma$ decreases and as the capacity $d$ increases. For uniform priors and large capacity the ratio can be arbitrarily close to one; for arbitrary priors we give upper and lower bounds close to 1/2. We also generalize the capacity constraints to matroid and knapsack settings. This work uses the $k$-secretary algorithm of R. Kleinberg and an analysis of the expected maximum size of an independent set in random interval graphs.


Bernhard Haeupler, Carnegie Mellon U., USA

Distributed Algorithms for Planar Networks: Planar Embeddings

This talk explains the first distributed planar embedding algorithm and how it fits into a broader program to design efficient distributed algorithms for planar networks.

We work in the standard distributed model in which nodes can send an O(log n)-bit message along each edge per round. In a planar network with n nodes and diameter D, our deterministic planar embedding algorithm uses O(D·min{log n, D}) rounds to compute a combinatorial planar embedding, which consists of each node knowing the clockwise order of its incident edges in a fixed planar drawing. The complexity of our algorithm is near-optimal and matches the trivial lower bound of Omega(D) up to a log n factor. No non-trivial algorithm outperforming the trivial round complexity of O(n) was known prior to this work.

(Joint work with Mohsen Ghaffari, MIT)


John Iacono, NYU, USA

Cache-Oblivious Persistence

Partial persistence is a general transformation that takes a data structure and allows queries to be executed on any past state of the structure. The cache-oblivious model is the leading model of a modern multi-level memory hierarchy. We present the first general transformation for making cache-oblivious model data structures partially persistent.

(Joint work with Pooya Davoodi, Jeremy T. Fineman, and Özgür Özkan)


Giuseppe F. Italiano, U. Rome Tor Vergata, Italy

Deterministic Fully Dynamic Data Structures for Vertex Cover and Matching

We present the first deterministic data structures for maintaining approximate minimum vertex cover and maximum matching in a fully dynamic graph G=(V,E), with |V|=n and |E|=m, in o(sqrt{m}) time per update. In particular, for minimum vertex cover we provide deterministic data structures for maintaining a (2+eps) approximation in O(log n/eps^2) amortized time per update. For maximum matching, we show how to maintain a (3+eps) approximation in O(min(sqrt{n}/eps, m^{1/3}/eps^2)) amortized time per update, and a (4+eps) approximation in O(m^{1/3}/eps^2) worst-case time per update. Our data structure for fully dynamic minimum vertex cover is essentially near-optimal and settles an open problem by Onak and Rubinfeld.

(Joint work with Sayan Bhattacharya and Monika Henzinger)


Haim Kaplan, Tel-Aviv U., Israel

Minimum Cost Flows in Graphs with Unit Capacities

We consider the minimum cost flow problem on graphs with unit capacities and its special cases. In previous studies, special purpose algorithms for unit capacities have been developed. In contrast, for maximum flow with unit capacities, the best bounds are proven for slight modifications of classical blocking flow and push-relabel algorithms.

We show that the classical cost scaling algorithms of Goldberg and Tarjan (for general integer capacities) applied to a problem with unit capacities achieve or improve the best known bounds.

For weighted bipartite matching we establish a bound of O(sqrt{r} m log C). Here r is the size of the smaller side of the bipartite graph, m is the number of edges, and C is the largest absolute value of an arc-cost. This simplifies and improves a result of Duan et al., answering an open question of Tarjan and Ramshaw. For graphs with unit vertex capacities we establish a novel O(sqrt{n} m log(nC)) bound. We also give the first cycle canceling algorithm for minimum cost flow with unit capacities. The algorithm generalizes the single source shortest path algorithm of Goldberg.


Valerie King, U. Victoria, Canada

Improved worst case update time (and sublinear space) for dynamic graph connectivity

The dynamic graph connectivity problem is to be able to process queries of the form: "Are nodes x and y connected?" while a graph is undergoing edge insertions and deletions. In 2013, my collaborators and I presented a Monte Carlo algorithm with O(log^5 n) worst case time per edge update where n is the number of nodes. In this talk, I'll show how a small conceptual modification brings this down to O(log^4 n). If the sequence of updates is bounded in length by a polynomial in n, only O(n log^2 n) words of memory are required to answer all queries correctly with high probability.

(Joint work with Bruce Kapron)


Sebastian Krinninger, U. Wien, Austria

Decremental Single-Source Shortest Paths on Undirected Graphs in Near-Linear Total Update Time

We call an algorithm dynamic if it regularly updates the result of its computations as the input undergoes changes over time. The primary goal is to be faster than the naive algorithm that recomputes the result from scratch after each change. In decremental graph algorithms the only changes allowed to the graph are edge deletions. We present a decremental algorithm for maintaining (1 + \epsilon)-approximate shortest paths from a fixed source node in weighted, undirected graphs. The algorithm has a total update time of O(m^{1+o(1)}) accumulated over \Theta(m) deletions. This improves upon previous solutions (which only worked for unweighted graphs) with total update times of O(n^{2+o(1)}) (Bernstein and Roditty SODA 2011) and O(n^{1.8+o(1)} + m^{1+o(1)}). In contrast to these previous results, which rely on maintaining a sparse emulator, our algorithm relies on maintaining a so-called sparse (h, \epsilon)-hop set, a notion introduced by Cohen (JACM 2000) in the PRAM literature. In this talk we present our new hop-set construction and, besides its use in our dynamic algorithm, explain its applications to distributed and streaming algorithms.

(Joint work with Monika Henzinger and Danupon Nanongkai)


Alberto Marchetti-Spaccamela, Sapienza U. Rome, Italy

Scheduling of Systems of Conditional Sporadic DAG Tasks

Programming methodologies for next-generation many-core embedded platforms will have to promote predictability while being able to exploit massive parallelism. We first discuss task models that have been proposed to characterize real-time workloads at different levels of abstraction for the design and analysis of real-time systems.

Among these models the DAG task model exposes parallelism that may exist within individual tasks to the run-time scheduling mechanism, and is therefore considered a particularly suitable model for representing recurrent real-time tasks that are implemented upon multiprocessor platforms. We propose and evaluate an extension to the model to allow for the concurrent modeling of conditional execution of pieces of an individual task, along with the modeling of intra-task parallelism.

(Joint work with S. Baruah and V. Bonifaci)


Sam McCauley, SUNY Stony Brook, USA

Cache-Adaptive Analysis

In this talk we analyze algorithms that are I/O-efficient on a cache whose size changes dynamically. We show a novel analysis method that can be applied to most recursive algorithms. In particular, we are the first to analyze non-optimal algorithms in this model, and we greatly expand the class of efficient cache-adaptive algorithms.

(Joint work with Michael Bender, Erik Demaine, Roozbeh Ebrahimi, Jeremy Fineman, Golnaz Ghasemiesfeh, Andrea Lincoln, Rob Johnson, and Jayson Lynch)


Ian Munro, U. Waterloo, Canada

Succinct Data Structures for Equivalence Classes and Unordered Permutations

We discuss the issue of representing equivalence classes on n objects, in which each of the n objects is assigned a label from some bounded range, and we are to answer questions of the form "Are the elements with label a and label b in the same equivalence class?" A tradeoff is shown between the size of the label space and the amount of extra space required (to answer such queries quickly). We also consider the similar problem of representing the cycles of an arbitrary permutation PI on n elements such that PI^i(a) can be determined quickly, for any element a and any (positive or negative) integer i. Tradeoffs between label space and extra memory are again investigated.
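The query interface for equivalence classes, though none of the label-space/extra-space tradeoffs that are the subject of the talk, can be illustrated with a plain union-find structure (a deliberately non-succinct stand-in, using Theta(n) words):

```python
class EquivClasses:
    """Answer "same class?" queries via union-find with path halving.

    This is only an interface illustration: it spends Theta(n) words,
    whereas the talk concerns representations with labels from a bounded
    range plus small extra space."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

    def same(self, a, b):
        return self.find(a) == self.find(b)
```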


Nikos Parotsidis, U. Ioannina, Greece

2-Vertex Connectivity in Directed Graphs

Given a directed graph, two vertices v and w are 2-vertex-connected if there are two internally vertex-disjoint paths from v to w and two internally vertex-disjoint paths from w to v. In this talk, we show how to compute this relation in O(m+n) time, where n is the number of vertices and m is the number of edges of the graph. As a side result, we show how to build in linear time an O(n)-space data structure, which can answer in constant time queries on whether any two vertices are 2-vertex-connected. Additionally, when two query vertices v and w are not 2-vertex-connected, our data structure can produce in constant time a "witness" of this property, by exhibiting a vertex or an edge that is contained in all paths from v to w or in all paths from w to v. We are also able to compute in linear time a sparse certificate for 2-vertex connectivity, i.e., a subgraph of the input graph that has O(n) edges and maintains the same 2-vertex connectivity properties as the input graph.

(Joint work with Loukas Georgiadis, Giuseppe F. Italiano, and Luigi Laura)


Seth Pettie, U. Michigan, Ann Arbor, USA

Weighted Matching on General Graphs: Faster and Simpler

We present a new scaling algorithm for maximum (or minimum) weight perfect matching on general, edge-weighted graphs. The algorithm runs in O(mn^{1/2} log(nW)) time, where m and n are the numbers of edges and vertices, and W is the maximum integer edge weight. This bound matches the fastest algorithm for bipartite graphs and comes within a log(nW) factor of the Micali-Vazirani cardinality matching algorithm. In terms of running time our algorithm is just slightly faster than the previous best (Gabow and Tarjan, 1991), by roughly a (log n)^{1/2} factor. However, it is dramatically simpler to describe and analyze.

(Joint work with Ran Duan [IIIS, Tsinghua] and Hsin-Hao Su [University of Michigan]. Manuscript available at http://arxiv.org/abs/1411.1919v2)


Ely Porat, Bar-Ilan U., Israel

3SUM Hardness in (Dynamic) Data Structures

We prove lower bounds for several (dynamic) data structure problems conditioned on the well known conjecture that 3SUM cannot be solved in O(n^{2-Omega(1)}) time. This continues a line of work that was initiated by Patrascu [STOC 2010] and strengthened recently by Abboud and Vassilevska-Williams [FOCS 2014]. The problems we consider are from several subfields of algorithms, including text indexing, dynamic and fault tolerant graph problems, and distance oracles. In particular we prove polynomial lower bounds for the data structure version of the following problems: Dictionary Matching with Gaps, Document Retrieval problems with more than one pattern or an excluded pattern, Maximum Cardinality Matching in bipartite graphs (improving known lower bounds), d-failure Connectivity Oracles, Preprocessing for Induced Subgraphs, and Distance Oracles for Colors.
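For context, the conjecture is about beating the easy quadratic algorithm for 3SUM (sketched below with hashing; this baseline is folklore, not part of the talk's contribution):

```python
def three_sum(nums):
    """Decide 3SUM in O(n^2) expected time: is there i < j < k with
    nums[i] + nums[j] + nums[k] == 0?

    For each fixed i we scan j > i, keeping earlier elements in a hash
    set so the needed third value can be tested in O(1)."""
    n = len(nums)
    for i in range(n - 2):
        target = -nums[i]
        seen = set()
        for j in range(i + 1, n):
            if target - nums[j] in seen:
                return True
            seen.add(nums[j])
    return False
```

The 3SUM conjecture asserts no O(n^{2-Omega(1)}) algorithm exists; the reductions in the talk transfer this hardness to the listed data structure problems.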

Our lower bounds are based on several reductions from 3SUM to a special set intersection problem introduced by Patrascu, which we call Patrascu's Problem. In particular, we provide a new reduction from 3SUM to Patrascu's Problem which allows us to obtain stronger conditional lower bounds for (some) problems that have already been shown to be 3SUM hard, and for several of the problems examined here. Our other lower bounds are based on reductions from the Convolution3SUM problem, which was introduced by Patrascu. We also prove that up to a logarithmic factor, the Convolution3SUM problem is equivalent to 3SUM when the inputs are integers. A previous reduction of Patrascu shows that a subquadratic algorithm for Convolution3SUM implies a similarly subquadratic 3SUM algorithm, but not that the two problems are asymptotically equivalent or nearly equivalent.


Liam Roditty, Bar-Ilan U., Israel

New routing techniques and their applications

In this talk I will present two new routing techniques that allow us to obtain the following new routing schemes:

1. A routing scheme for n-nodes m-edges unweighted graphs that uses \Ot(\frac{1}{\eps} n^{2/3}) space at each vertex and \Ot(1/\eps)-bit headers, to route a message between any pair of vertices u,v\in V on a (2 + \eps,1)-stretch path, i.e., a path of length at most (2 + \eps)\cdot d+1, where d is the distance between u and v. This should be compared to the (2,1)-stretch and \Ot(n^{5/3}) space distance oracle of Patrascu and Roditty [FOCS'10 and SIAM J. Comput. 2014] and to the (2,1)-stretch routing scheme of Abraham and Gavoille [DISC'11] that uses \Ot( n^{3/4}) space at each vertex. It follows from Patrascu, Thorup and Roditty [FOCS'12] that a 2-stretch routing scheme with \Ot( m^{2/3}) space at each vertex is optimal, assuming a hardness conjecture on set intersection holds.

2. A routing scheme for n-nodes weighted graphs with normalized diameter D, that uses \Ot(\frac{1}{\eps} n^{1/3}\log D) space at each vertex and \Ot(\frac{1}{\eps} \log D)-bit headers, to route a message between any pair of vertices on a (5+\eps)-stretch path. This should be compared to the 5-stretch and \Ot(n^{4/3}) space distance oracle of Thorup and Zwick [STOC'01 and J. ACM. 2005] and to the 7-stretch routing scheme of Thorup and Zwick [SPAA'01] that uses \Ot( n^{1/3}) space at each vertex. Since a 5-stretch routing scheme must use tables of \Omega( n^{1/3}) space our result is almost tight.

3. For an integer \ell>1, a routing scheme for n-nodes unweighted graphs that uses \Ot(\ell \frac{1}{\eps} n^{\ell/(2\ell \pm 1)}) space at each vertex and \Ot(\frac{1}{\eps})-bit headers, to route a message between any pair of vertices on a (3 \pm 2 / \ell + \eps,2)-stretch path. This should be compared to the distance oracles of Patrascu, Thorup and Roditty [FOCS'12] for weighted graphs with (3\pm 2/\ell)-stretch and \Ot(\ell m^{1+\ell/(2\ell \pm 1)}) total space.

4. A routing scheme for n-nodes weighted graphs, that for any integer k>2, uses \Ot(\frac{1}{\eps} n^{1/k}\log D) space at each vertex and \Ot(\frac{1}{\eps} \log D)-bit headers, to route a message between any pair of vertices on a (4k-7+\eps)-stretch path. This improves the (4k-5)-stretch routing scheme of Thorup and Zwick [SPAA'01] and can also be used in the recent ((4 - \alpha)k - \beta)-stretch routing scheme of Chechik [PODC'13] to obtain slightly better values for \alpha and \beta.


Jared Saia, University of New Mexico, USA

Interactive Communication with Unknown Noise Rate

Alice and Bob want to run a protocol over an unreliable channel, where a certain number of bits are flipped adversarially. Several results take a protocol requiring L bits of noise-free communication and make it robust over such a channel. In a recent breakthrough result, Haeupler described an algorithm that sends a number of bits that is conjectured to be near optimal in such a model. However, his algorithm, like all previous algorithms, critically requires knowledge of the number of bits that will be flipped by the adversary.

In this talk, we describe an algorithm requiring no such knowledge. If an adversary flips T bits, our algorithm sends L + soft-O(T+\sqrt{LT+L}) bits in expectation and succeeds with high probability in L. It does so without any a priori knowledge of T. Assuming a conjectured lower bound by Haeupler, our result is optimal up to logarithmic factors.

Our algorithm critically relies on the assumption of a private channel. We show that such an assumption is necessary when the amount of noise is unknown.

(Joint work with Varsha Dani, Tom Hayes, Mahnush Mohavedi, and Maxwell Young)


Peter Sanders, KIT, Germany

Parallel Algorithms Reconsidered

Parallel algorithms have been a subject of intensive algorithmic research in the 1980s. This research almost died out in the mid 1990s. In this paper we argue that it is high time to reconsider this subject since a lot of things have changed. First and foremost, parallel processing has moved from a niche application to something mandatory for any performance-critical computer application. We will also point out that even very fundamental results can still be obtained. We give examples and also formulate some open problems.


Clifford Stein, Columbia U., USA

Hallucination Helps: Energy Efficient Virtual Circuit Routing

In this talk, we give online and offline approximation algorithms for energy efficient circuit routing protocols for a network of components that are speed scalable, and that may be shutdown when idle. We will consider both the case where the speed-scalable components are edges and the case when they are nodes. For the case of edges, we describe a polynomial-time offline algorithm that has a poly-log approximation ratio. The key step of the algorithm design is a random sampling technique that we call hallucination. The algorithm extends rather naturally to an online algorithm. The analysis of the online algorithm introduces a natural "priority" multicommodity flow problem, and bounds the priority multicommodity flow-cut gap. We will then explain the additional difficulty faced if the power management components are vertices, and explain how to surmount this difficulty in the offline case.

(This work spans several papers and is joint with Antonios Antoniadis, Nikhil Bansal, Sungjin Im, Ravishankar Krishnaswamy, Benjamin Moseley, Viswanath Nagarajan, Kirk Pruhs)


Suresh Venkatasubramanian, U. Utah, USA

Streaming verification of graph and geometric problems

One answer to the challenge of working with large data is to use streaming algorithms. An algorithm with limited access to storage can read a stream of data and perform (surprisingly many) computations on it. We have now moved into an era of third-party cloud computing systems like EC2 and Azure, where powerful clusters can compute answers quickly.

Imagine therefore a client who has data and wants to compute with it. They can outsource the computation to a cloud entity, and get answers back. How can such a computationally limited client check that the answers are correct without having to compute them?

It turns out that ideas that go back to the theory of interactive proofs can be used very effectively in this setting, and there's now a large body of work combining streaming algorithms (on the client side) and interaction (between client and server) to efficiently verify the results of a computation.

In this talk I'll briefly review the landscape of these methods, and focus on some recent work that can verify results of fundamental problems like near neighbor search, classification, clustering and graph matchings.


Philipp Woelfel, U. Calgary, Canada

Randomized Mutual Exclusion with Constant Amortized RMR Complexity

The efficiency of mutual exclusion algorithms is usually measured in terms of the number of remote memory references (RMRs) that are necessary for processes to enter a critical section. In this talk I will present a recent algorithm that achieves expected constant amortized RMR complexity and is deterministically deadlock-free on the distributed shared memory model (DSM) with an oblivious adversary.

(Joint work with George Giakkoupis)


Uri Zwick, Tel Aviv U., Israel

An improved version of the Random-Facet pivoting rule for the simplex algorithm

Random-Facet is a randomized pivoting rule for the simplex method used to solve linear programming (LP) problems. It was devised independently by Kalai and by Matousek, Sharir and Welzl. It solves any LP problem using a sub-exponential number of pivoting steps, making it the fastest known pivoting rule for the simplex algorithm. We present a slightly improved version of this pivoting rule. Using the new pivoting rule, a linear program with O(d) constraints in d variables can be solved using an expected number of 2^{O(\sqrt{d})} pivoting steps, improved from the previous bound of 2^{O(\sqrt{d\log d})}.

(Joint work with Thomas Dueholm Hansen)