Institut für Informatik

Date: 31. Jan 2018 14:15

Location: HS III LBH

Speaker: Henning Meyerhenke (Universität zu Köln)

Title: Faster Computation of Network Centrality Measures

Abstract: Network centrality measures indicate the importance of nodes (or edges) in a network. In this talk we will discuss a few popular measures. As one example, closeness is a widely used centrality measure in social network analysis. For a node it indicates the inverse average shortest-path distance to the other nodes of the network. While the identification of the k nodes with highest closeness has received significant attention, many applications are actually interested in finding a group of nodes that is central as a whole. For this problem, a greedy algorithm with approximation ratio (1−1/e) has only recently been proposed [Chen et al., ADC 2016].
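To make the definition concrete, here is a minimal sketch (not from the talk) that computes the closeness of a node on a small hypothetical unweighted graph with a single BFS per query:

```python
from collections import deque

def closeness(adj, v):
    """Closeness of node v: (n - 1) divided by the sum of shortest-path
    distances from v to all other nodes (unweighted graph, BFS)."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    total = sum(dist.values())
    return (len(adj) - 1) / total if total > 0 else 0.0

# A small path graph 0-1-2-3: middle nodes are more central.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

On this path graph, node 1 has distances 1, 1, 2 to the others, so its closeness is 3/4, higher than the endpoints' 1/2.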

In this talk we develop new techniques to speed up the greedy algorithm without losing its theoretical guarantee. Compared to a straightforward implementation, our approach is orders of magnitude faster and, compared to a heuristic proposed by Chen et al., we always find a solution of better quality in comparable running time in our experiments. Our method Greedy++ allows us to approximate the group with maximum closeness on networks with up to hundreds of millions of edges in minutes or at most a few hours. To achieve the same theoretical guarantee, the greedy approach by [Chen et al., ADC 2016] would already take several days on networks with hundreds of thousands of edges. In a comparison with the optimum, our experiments show that the solution found by Greedy++ is actually much better than the theoretical guarantee. Over all tested networks, the empirical approximation ratio is never lower than 0.97. Our results also show that the overlap between the most central group and the most central individual nodes is relatively small, which indicates empirically the need to distinguish clearly between the two problems.

Mostly based on joint work with Elisabetta Bergamini and Tanya Gonser (ALENEX 2018).

Date: 13. Oct 2016 15:00

Location: LBH E.08

On schematic maps of public transport systems, distances on the map are typically not proportional to actual travel times, which may cause surprises for the map user. In this presentation, I propose several solutions to give a user visual cues about which connections are fast and which connections are slow in terms of distance on the map per minute of travel time. I will discuss modelling and algorithmic questions that need to be addressed to enable practical implementation of these solutions.

Date: 26. Jul 2016 10:30

Location: LBH, E.08

Online matching on a line involves matching an online stream of items of various sizes to stored items of various sizes, with the objective of minimizing the average discrepancy in size between matched items. The best previously known upper and lower bounds on the optimal deterministic competitive ratio are linear in the number of items, and constant, respectively. We show that online matching on a line is essentially equivalent to a particular search problem, which we call k-lost cows. We then obtain the first deterministic sub-linearly competitive algorithm for online matching on a line by giving such an algorithm for the k-lost cows problem.

Joint work with Neal Barcelo, Michael Nugent, Kirk Pruhs and Michele Scquizzato.
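For background, the k-lost cows problem generalizes the classic single lost-cow search on a line, for which the standard doubling strategy is 9-competitive. A minimal sketch of that classic strategy (not the algorithm from the talk), assuming a nonzero integer target position:

```python
def lost_cow_distance(target):
    """Total distance walked by the classic doubling strategy:
    from the origin, walk 1 step right and return, 2 steps left and
    return, 4 steps right, ... until the target (target != 0) is crossed."""
    traveled = 0
    i = 0
    while True:
        reach = 2 ** i
        direction = 1 if i % 2 == 0 else -1
        # Does this phase pass over the target?
        found = (0 < target <= reach) if direction == 1 else (-reach <= target < 0)
        if found:
            return traveled + abs(target)
        traveled += 2 * reach  # walk out and back to the origin
        i += 1
```

The strategy never walks more than 9 times the distance to the target, which is optimal for deterministic strategies.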

Date: 23. Mar 2016 10:30

Location: Lecture Room III.03, Friedrich-Ebert-Allee 144

Energy consumption has become an important aspect of computing. On the one hand, in large-scale data centers, energy is a major cost factor. On the other hand, energy efficiency is also a key benchmark for smartphones and other small portable devices, as it strongly affects battery lifetime. Two of the most common energy-saving techniques used by modern microprocessors are (i) dynamic speed scaling, where the operating speed of the processor can be dynamically adjusted over time, and (ii) a sleep state to which the processor can transition in order to incur further energy savings.

The main focus of this talk is on the combined setting where a processor is equipped with both aforementioned techniques. When the processor is in the active state, its power consumption is modelled by an arbitrary convex function of its speed, and while in the sleep state the processor has negligible energy consumption. Furthermore, a constant amount of energy is required for each transition from the sleep to the active state. The combination of speed-scaling and sleep state raises challenges that do not appear in either of these two techniques when regarded in isolation.

We consider classical deadline-based scheduling in this setting: each job is specified by a release time, a deadline and a processing volume, and we seek a feasible schedule of minimal total energy consumption. We discuss the NP-hardness of the associated algorithmic problem, and present a Fully Polynomial Time Approximation Scheme (FPTAS) for it. The scheme is based on a transformation to a non-preemptive variant of the problem, and a discretization that exploits a carefully defined lexicographical ordering among schedules. We conclude the talk by taking a quick glimpse at other energy-efficient scheduling results.

Date: 27. Jan 2016 10:30

Location: Lecture Room III.03, LBH

The k-means problem consists of finding k centers that minimize the sum of the squared distances of all points in a set P to their nearest center. It is a basic geometric problem and also a basic unsupervised learning model. As such, it has been studied for more than sixty years. The talk contains a short overview on known results (from the theory/algorithm community) on solving the k-means problem for small and big inputs. Furthermore, we see short excursions to results on lower bounds and dimensionality reduction for k-means.
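For concreteness, the k-means objective itself fits in a few lines; a minimal sketch (illustrating only the cost function, not any of the algorithms discussed in the talk) on a hypothetical toy instance:

```python
def kmeans_cost(points, centers):
    """k-means objective: sum over all points of the squared Euclidean
    distance to the nearest center."""
    return sum(
        min(sum((p - c) ** 2 for p, c in zip(point, center)) for center in centers)
        for point in points
    )

# Four points forming two tight pairs; one center per pair is optimal.
points = [(0.0, 0.0), (0.0, 2.0), (10.0, 0.0), (10.0, 2.0)]
good = [(0.0, 1.0), (10.0, 1.0)]   # one center per pair
bad = [(5.0, 1.0), (5.0, 1.0)]     # both centers in the middle
```

Here the good centers give cost 4 (each point at squared distance 1), while the bad placement costs 104.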

Date: 16. Dec 2015 10:00

Location: Lecture Room III.03, LBH

Let R be a finite ring and let M, N be two finite left R-modules. We present two distinct deterministic algorithms that decide in polynomial time whether or not M and N are isomorphic, and if they are, exhibit an isomorphism. As by-products, we are able to determine the largest isomorphic common direct summand between two modules and the minimum number of generators of a module. By not requiring R to contain a field, avoiding computation of the Jacobson radical and not distinguishing between large and small characteristic, both algorithms constitute improvements to known results. If time allows, we will also briefly discuss some applications of these algorithms, and the related problem of algorithmically approximating the Jacobson radical of a finite ring.

Date: 09. Dec 2015 10:30

Location: Lecture Room III.03, LBH

Complete-linkage clustering is a very popular method for computing hierarchical clusterings in practice, which is not fully understood theoretically. Given a finite set P \subset R^d of points, the complete-linkage method starts with each point from P in a cluster of its own and then iteratively merges two clusters from the current clustering that have the smallest diameter when merged into a single cluster. We study the problem of partitioning P into k clusters such that the largest diameter of the clusters is minimized and we prove that the complete-linkage method computes an O(1)-approximation for this problem for any metric that is induced by a norm, assuming that the dimension d is a constant. This improves the best previously known bound of O(log k) due to Ackermann et al. (Algorithmica, 2014). Our improved bound also carries over to the k-center and the discrete k-center problem.
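The merging rule is easy to state in code; a naive sketch of complete-linkage in the Euclidean metric (for illustration only, not an optimized implementation):

```python
from itertools import combinations
import math

def diameter(cluster):
    """Largest pairwise Euclidean distance within a cluster."""
    return max(
        (math.dist(p, q) for p, q in combinations(cluster, 2)),
        default=0.0,
    )

def complete_linkage(points, k):
    """Start with singletons; repeatedly merge the pair of clusters
    whose union has the smallest diameter, until k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        i, j = min(
            combinations(range(len(clusters)), 2),
            key=lambda ij: diameter(clusters[ij[0]] + clusters[ij[1]]),
        )
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters
```

On two well-separated pairs of points, the method recovers the two pairs as clusters for k = 2.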

Date: 25. Nov 2015 10:00

Location: Lecture Room III.03, LBH

In the bin covering problem, the goal is to fill as many bins as possible up to a certain minimal level with a given set of items of different sizes. Online variants, in which the items arrive one after another and have to be packed immediately on their arrival without knowledge about the future items, have been studied extensively in the literature. We study the simplest possible online algorithm Dual Next-Fit, which packs all arriving items into the same bin until it is filled and then proceeds with the next bin in the same manner. The competitive ratio of this and any other reasonable online algorithm is 1/2.
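Dual Next-Fit is short enough to state directly; a minimal sketch, assuming item sizes in (0, 1] and a cover level of 1:

```python
def dual_next_fit(items, level=1.0):
    """Dual Next-Fit for online bin covering: pack every arriving item
    into the current bin; once the bin is filled up to `level`, count it
    as covered and open a new one. Returns the number of covered bins."""
    covered = 0
    current = 0.0
    for size in items:
        current += size
        if current >= level:
            covered += 1
            current = 0.0
    return covered
```

On the sequence 0.5, 1.0, 0.5, 1.0 it covers only 2 bins, whereas an offline packing covers 3 (the two items of size 1.0 alone, and the two halves together), illustrating why the competitive ratio is below 1.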

We study Dual Next-Fit in a probabilistic setting where the item sizes are chosen i.i.d. according to a discrete distribution and we prove that, for every distribution, its expected competitive ratio is at least 1/2+epsilon for a constant epsilon > 0 independent of the distribution. We also prove an upper bound of 2/3 and better lower bounds for certain restricted classes of distributions. Finally, we prove that the expected competitive ratio equals, for a large class of distributions, the random-order ratio, which is the expected competitive ratio when adversarially chosen items arrive in uniformly random order.

Date: 18. Nov 2015 10:00

Location: Lecture Room III.03, LBH

Motivated by the problem of physical realizability of certain graph parameters, Freedman, Lovasz and Schrijver [Reflection positivity, rank connectivity, and homomorphisms of graphs, J. American Math. Soc. 20 (2007)] introduced algebras of quantum graphs as the proper framework to characterize the graph parameters that can be represented as homomorphism functions into weighted graphs. These algebras have special elements, called contractors, which were introduced by Lovasz and Szegedy [Contractors and connectors in graph algebras, J. Graph Theory 60:1 (2009)] as a generalization of the deletion-contraction identity of the Tutte polynomial, a two-variable polynomial associated with any graph that contains a great amount of information about it. This is an important generalization, among other reasons, because the Tutte polynomial is considered a basic tool in combinatorics and graph theory.

In their paper, Lovasz and Szegedy comment that "there does not seem to be a simple explicit construction for a contractor for the number of B-flows of a graph", where B is a subset of a finite Abelian group closed under inverses. In this talk, after giving an overview of the context and the basic notions in detail, we will describe a surprising technique to obtain an explicit construction for such a contractor. Our technique involves homomorphism functions, flows and tensions, and as a main tool, the Fourier transform on finite Abelian groups. This is a joint work with Andrew Goodall and Jarik Nesetril.

Date: 09. Nov 2015 11:00

Location: Seminar Room E.08, LBH

We consider the following online appointment scheduling problem: Jobs of different processing times and weights arrive online step-by-step. Upon arrival of a job, its (future) starting date has to be determined immediately and irrevocably before the next job arrives, with the objective of minimizing the average weighted completion time. In this type of scheduling problem it is impossible to achieve non-trivial competitive ratios in the classical, adversarial arrival model, even if jobs have unit processing times. We weaken the adversary and consider random order of arrival instead. In this model the adversary defines the weight-processing time pairs for all jobs, but the order in which the jobs arrive online is a permutation drawn uniformly at random. For the case of jobs with unit processing time we give a constant competitive algorithm. We use this algorithm as a building block for the general case of variable job processing times and achieve competitive ratio O(log n). We complement these algorithms with a lower bound of Omega(n) for unit-processing time jobs in the adversarial input model.

Date: 04. Nov 2015 10:30

Location: Lecture Room III.03, LBH

In this talk, we study output-sensitive algorithms for enumeration problems in multiobjective combinatorial optimization (MOCO). We develop two methods for enumerating the extreme points of the Pareto front of MOCO problems. Using a well-known method from multiobjective optimization, we show that we can enumerate these extreme points in output-polynomial time for every fixed number of objectives if the weighted-sum scalarization can be solved in polynomial time. We also propose a new lexicographic version of the above method that runs in incremental polynomial time in the case that the lexicographic optimization variant can be solved in polynomial time. As a consequence, the extreme points of the Pareto front of the multiobjective spanning tree problem as well as the multiobjective global min-cut problem can be computed in polynomial time for a fixed number of objectives. We also embed these relatively new questions in the context of computational complexity in multiobjective optimization.

Date: 14. Oct 2015 11:00

Location: Room E.08, LBH

Despite much effort, for the classic problem of finding a longest common subsequence (LCS) of strings x and y over an alphabet Sigma, no algorithm with a worst-case running time of O((|x| * |y|)^{1-eps}) is known. Recent work indeed shows that finding such an algorithm is impossible without refuting the Strong Exponential Time Hypothesis (SETH) [Abboud, Backurs, Vassilevska-Williams FOCS'15; Bringmann, Künnemann FOCS'15]. In the first part of this talk, we will sketch a proof of this result.
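For reference, the classic quadratic dynamic program whose running time these conditional lower bounds show to be essentially optimal:

```python
def lcs_length(x, y):
    """Classic O(|x| * |y|) dynamic program: table[i][j] is the length
    of a longest common subsequence of the prefixes x[:i] and y[:j]."""
    m, n = len(x), len(y)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]
```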

Notwithstanding the lack of improvement in the worst case, a long and successful line of research identified and exploited parameters of the input strings, achieving strongly subquadratic running times for highly relevant special cases, e.g., comparing large, but very similar files. The questions arise whether (i) the best known algorithms have an optimal parameter dependence (up to lower order factors, assuming SETH) and whether (ii) some parameter constellations admit faster algorithms than currently known (without refuting SETH). To this end, in the second part of the talk we provide a systematic study of the (conditional) complexity of LCS, taking into account the parameters previously discussed in the literature: n:=max{|x|,|y|}, m:=min{|x|,|y|}, the length L of an LCS of x and y, the alphabet size |Sigma|, m-L, n-L, as well as the numbers of matching and dominant pairs. We obtain effectively tight lower bounds in most of the special cases considered.

This is joint work with Karl Bringmann, Simons Institute for the Theory of Computing.

Date: 27. Aug 2015 11:00

Location: Seminar Room LBH E.08

The famous Braess paradox describes the following phenomenon: It might happen that the improvement of resources, like building a new street within a congested network, may in fact lead to larger costs for the players in an equilibrium. Throughout the talk we consider general nonatomic congestion games and give a characterization of the maximal combinatorial property of strategy spaces for which Braess paradox does not occur. In a nutshell, bases of matroids are exactly this maximal structure. We prove our characterization by two novel sensitivity results for convex separable optimization problems over polymatroid base polyhedra which may be of independent interest.

(Joint work with Satoru Fujishige, Tobias Harks, Michel Goemans, and Rico Zenklusen.)

Date: 21. Aug 2015 10:30

Location: Seminar Room LBH E.08

Due to its good practical performance the simplex method is one of the most popular algorithms to solve linear programming problems. Theoretical investigations however show that its worst case running time is very poor. This motivated several average case analyses in the 1970/80s (e.g. by Borgwardt, Adler/Karp/Shamir, Haimovich, Megiddo, Smale and Todd). In addition to that, Spielman and Teng introduced the concept of smoothed analysis in 2001 and applied it to further investigate the practical performance of the simplex method. Their result for the smoothed running time (published in 2004) was substantially improved by a paper of Vershynin (published in 2009). Both analyses are based on randomized variants and my talk mainly deals with the question how one can also derive the smoothed running time for a deterministic variant of the simplex method.

Date: 29. Jul 2015 10:15

Location: Lecture Room III.03, LBH

It has been discovered that graphs with a power law degree distribution are prevalent in communication, social, technological, and biological networks, as well as in the design problems arising from them. In this talk we consider the special case of the (1,2)-TSP in which the underlying graph of 1-edges is a Power Law Graph (PLG). We also consider instances of the Graphical TSP in Power Law Graphs. We give the first upper and lower bounds for the approximability of these problems for the case of power law exponent beta > 1.

(Joint work with Mikael Gast and Marek Karpinski)

Date: 15. Jul 2015 10:15

Location: Lecture Room III.03, LBH

In real network applications, one frequently encounters phenomena of the following type:

a. In biological networks, network motifs are often nested.

b. In biological regulatory networks, paths mediating up- or down-regulation of a target node starting from the same regulator node often have many small crosstalk paths.

c. An eavesdropper with limited sensor ranges can often intercept communications between nodes very far apart from it.

d. In traffic networks, congestion can happen in a node that is not a hub.

Although each of these phenomena can be studied on its own, it is desirable to have a network measure reflecting salient properties of complex large-scale networks that can explain all of these phenomena at once. In this talk we adapt a combinatorial measure of negative curvature (also called hyperbolicity) to parameterized finite networks, and show that a variety of biological and social networks are hyperbolic. The hyperbolicity property has strong implications on the higher-order connectivity and other topological properties of these networks. Specifically, we derive and prove bounds on the distance among shortest or approximately shortest paths in hyperbolic networks, and explain how implications of these bounds may provide answers to observations such as a-d above.

Based on joint results with R. Albert and N. Mobasheri (Physical Review E, 89(3), 032811, 2014). No prior knowledge of metric topology will be assumed.
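One standard way to quantify hyperbolicity is Gromov's four-point condition; a minimal generic sketch (not necessarily the exact parameterized measure from the talk), taking a full distance matrix as input:

```python
from itertools import combinations

def hyperbolicity(d):
    """Gromov hyperbolicity via the four-point condition: for every
    quadruple of nodes, form the three pairwise distance sums and take
    half the gap between the two largest; return the maximum over all
    quadruples (0 for trees, larger for less tree-like metrics)."""
    n = len(d)
    delta = 0.0
    for x, y, u, v in combinations(range(n), 4):
        sums = sorted([d[x][y] + d[u][v], d[x][u] + d[y][v], d[x][v] + d[y][u]])
        delta = max(delta, (sums[2] - sums[1]) / 2)
    return delta
```

A path (a tree) has hyperbolicity 0, while the 4-cycle already has hyperbolicity 1.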

Date: 08. Jul 2015 10:15

Location: Lecture Room III.03, LBH

Recently it has been discovered that many large-scale networks have a power law distribution. The Independent Set Problem arose early in this framework in several applications. In this talk we give the first nonconstant lower bounds for the approximability of the Independent Set Problem in Power Law Graphs. For beta < 1 the lower bound is of the form n^epsilon, and for beta = 1 the lower bound is (log n)^epsilon, for some epsilon > 0. The constructions are based on efficient embeddings of graphs into power law graphs.

(joint work with Marek Karpinski)

Date: 01. Jul 2015 10:15

Location: Lecture Room III.03, LBH

We study parity games in which one of the two players controls only a small number k of nodes and the other player controls the n - k other nodes of the game. Our main result is a fixed-parameter algorithm that solves parity games in time (p + k)^{O(sqrt{k})} · O(pnm), where p denotes the number of distinct priorities and m denotes the number of edges. For all games with k = o(n) this improves the previously fastest algorithm by Jurdzinski, Paterson, and Zwick (SICOMP 2008). We also obtain novel kernelization results and an improved deterministic algorithm for graphs with small average degree.

Date: 24. Jun 2015 10:15

Location: Lecture Room III.03, LBH

We give a fully polynomial time randomized approximation scheme (FPRAS) for counting matchings in k-uniform hypergraphs without so-called wide edges, or equivalently, hypergraphs whose intersection graphs are claw-free. The method generalizes the canonical path method of Jerrum and Sinclair to that class of hypergraphs. We also prove that the problem of counting matchings in k-uniform hypergraphs without that restriction is approximation-hard for all k ≥ 6. The status of the counting problem for k = 3, 4 and 5 is wide open. In this direction, however, we were able to design a deterministic approximation scheme (FPTAS) for counting matchings in 3-uniform hypergraphs (3D matchings) of degree 3. The method used in this design differs from the canonical path method and uses a contraction-tree correlation decay technique and a new combinatorial analysis of the underlying intersection graph structures. This perhaps opens a possibility of extending our result to arbitrary 3-uniform hypergraphs of bounded degree, and perhaps beyond.

(Joint work with A.Dudek, A.Rucinski and E.Szymanska)

Date: 19. Jun 2015 10:30

Location: Seminar Room E.08, LBH

The following zero-sum game is played on a weighted graph: Alice selects a matching M and Bob selects a number k. Then, Alice receives a payoff equal to the ratio of the weight of the k heaviest edges of M to the maximum weight of a matching of size at most k in the graph. If M guarantees a payoff of at least L then it is called L-robust. In 2002, Hassin and Rubinstein gave an algorithm that returns a sqrt(1/2)-robust matching, which is best possible.

In this talk, we will see that Alice can improve on the guarantee of sqrt(1/2) when allowing her to play a randomized strategy. We devise a simple algorithm that returns a 1/ln(4)-robust randomized matching, based on the following observation: If all edge weights are integer powers of 2, then the lexicographically optimum matching is 1-robust. We prove this property not only for matchings but for a very general class of independence systems that includes matroid intersection, b-matchings, and strong 2-exchange systems. We also show that our robustness results for randomized matchings translate to an asymptotic robustness guarantee for deterministic matchings: When restricting Bob's choice to cardinalities larger than a given constant, then Alice can find a single deterministic matching with approximately the same guaranteed payoff as in the randomized setting.

(joint work with Martin Skutella and José A. Soto)

Date: 17. Jun 2015 10:15

Location: Lecture Room III.03, LBH

The instances of many optimization problems, such as the traveling salesman problem (TSP), are basically a finite metric space. However, average-case analyses of heuristics for such problems usually use either Euclidean instances, which yields restricted metric spaces, or independent edge lengths, which does not yield metrics at all.

We consider generating random metric instances for optimization problems as follows: Every edge of a complete graph gets a weight drawn independently at random. And then the actual length of an edge is the length of a shortest path with respect to these weights that connects its two endpoints. We investigate properties of the instances obtained from this model, and we analyze the approximation ratios of heuristics for the TSP and the k-center problem.

Based on joint work with Karl Bringmann, Christian Engels, and B. V. Raghavendra Rao.
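The generation model described above can be sketched directly: draw i.i.d. edge weights on the complete graph and take the shortest-path closure, which is a metric by construction. A minimal sketch with uniform weights (the analysis in the talk covers more general distributions):

```python
import random
from itertools import combinations

def random_metric(n, seed=0):
    """Draw i.i.d. uniform(0,1) weights on the edges of K_n, then return
    the shortest-path closure (Floyd-Warshall), which satisfies the
    triangle inequality by construction."""
    rng = random.Random(seed)
    d = [[0.0 if i == j else rng.random() for j in range(n)] for i in range(n)]
    for i, j in combinations(range(n), 2):
        d[j][i] = d[i][j]  # symmetrize the draws
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

d = random_metric(6)
```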

Date: 10. Jun 2015 10:15

Location: Lecture Room III.03, LBH

The Secluded Path problem models a situation where sensitive information has to be transmitted between a pair of nodes along a path in a network. The measure of the quality of a selected path is its exposure, which is the total weight of vertices in its closed neighborhood. In order to minimize the risk of intercepting the information, we are interested in selecting a secluded path, i.e., a path with small exposure. Similarly, the Secluded Steiner Tree problem is to find a tree in a graph connecting a given set of terminals such that the exposure of the tree is minimized. The problems were introduced by Chechik et al. in [ESA 2013]. Among other results, Chechik et al. have shown that Secluded Path is fixed-parameter tractable (FPT) on unweighted graphs when parameterized by the maximum vertex degree of the graph, and that Secluded Steiner Tree is FPT when parameterized by the treewidth of the graph. In this work, we obtain the following results about the parameterized complexity of secluded connectivity problems.

We give FPT algorithms deciding if a graph G with a given cost function contains a secluded path or a secluded Steiner tree of exposure at most k and cost at most C. We initiate the study of "above guarantee" parameterizations for secluded problems, where the lower bound is given by the size of a Steiner tree. We investigate Secluded Steiner Tree from the kernelization perspective and provide several lower and upper bounds when the parameters are the treewidth, the size of a vertex cover, the maximum vertex degree, and the solution size. Finally, we refine the algorithmic result of Chechik et al. by improving the exponential dependence on the treewidth of the input graph.

Joint work with Fedor Fomin, Nikolay Karpov, and Alexander Kulikov.

Date: 03. Jun 2015 10:15

Location: Lecture Room III.03, LBH

We study the convergence time of local search for a standard machine scheduling problem in which jobs are assigned to identical or related machines. Local search corresponds to the best response dynamics that arises when jobs selfishly try to minimize their costs. We assume that each machine runs a coordination mechanism that determines the order of execution of jobs assigned to it. We obtain various new polynomial and pseudo-polynomial bounds for the well-studied coordination mechanisms Makespan and Shortest-Job-First, using worst-case and smoothed analysis. We also introduce a natural coordination mechanism FIFO, which takes into account the order in which jobs arrive at a machine, and study both its impact on the convergence time and its price of anarchy.

Date: 29. Apr 2015 10:15

Location: Lecture Room III.03, LBH

Voronoi diagrams are a fundamental structure used in many areas of science. For a given set of sites, they separate the plane into regions such that points belonging to the same region have the same nearest site. By varying the set of sites, which may be points, line segments, polygons, etc., and the chosen distance measure, which may be anything from L_p metrics to convex distance functions, we obtain a large class of Voronoi diagrams with different features, and different algorithms were required to construct them.

Abstract Voronoi diagrams, AVDs for short, were introduced to serve as a unifying concept, covering as many concrete Voronoi diagrams as possible. They are based on bisecting curves enjoying simple combinatorial properties, rather than on the geometric notions of sites and distances. Once the bisector system of any concrete type of Voronoi diagram is shown to fulfill the AVD axioms, structural results and efficient algorithms become available without further effort.

In this talk I will give a survey on what is already known about AVDs and where there is still work to do. For example, one AVD axiom states that Voronoi regions need to be connected and all bisectors must be unbounded. There are concrete diagrams, like the multiplicatively weighted diagram of point sites, where this does not hold, and these diagrams were thus not yet covered by the AVD concept. Some new results show how to overcome this problem and how to efficiently construct such diagrams by randomized incremental construction. This algorithm can now be applied not only to multiplicatively weighted point sites in the Euclidean metric, but to multiplicatively weighted points under all convex distance functions whose unit circle is of constant algebraic degree.

Date: 15. Apr 2015 10:15

Location: Lecture Room III.03, LBH

Kernelization is a fundamental technique for preprocessing instances of NP-hard problems, where the kernelization algorithm compresses the input instance in polynomial time to an equivalent instance whose size is bounded by a structural parameter only. We present the first distributed kernelization algorithms that run in a sublinear number of rounds in terms of the input size. Our focus is on the fundamental optimization problem of computing minimum dominating sets in graphs. We show that the dominating set problem admits a kernelization algorithm that yields a kernel of size O(k) in planar graphs, which can be obtained in O(1) rounds where vertices carry as initial information only their ID, and k is the minimum size of a dominating set of the input graph. For the more general class of graphs of bounded genus g, we show how to obtain kernels of size O(g^{5/2} · k) in O(log Delta) rounds, where Delta is the maximum degree of the input graph. These bounds match the best known complexities for distributed approximation algorithms. For the connected variant of dominating set, we obtain similar results. Further, we show that no local kernelization algorithm can compute kernels with fewer than (7 - epsilon)k vertices for any epsilon > 0 for the dominating set problem on planar graphs; this constitutes the first kernel lower bound to hold unconditionally.

Date: 18. Feb 2015 10:15

Location: Seminar Room II.57, LBH

The feedback vertex set problem is one of the most well-studied graph modification problems: find a small vertex subset in a given graph whose removal makes the graph a forest. Many combinatorial problems can be easily solved on forests because of their simple structure; they have tree-width 1 and are very sparse. However, for dense graphs like complete graphs, there is no hope of finding a small solution for this problem. As a natural generalization of tree-width to dense graphs, rank-width has been developed extensively over the last ten years. Rank-width essentially measures the complexity of graphs using the rank of the corresponding matrices, and it is directly related to width parameters of matroids. In particular, graphs of rank-width 1 substantially generalize graphs of tree-width 1, and many optimization problems can still be solved efficiently on those graphs. We consider the problem of finding a vertex subset of size at most k in a given graph whose removal makes the graph have rank-width at most 1. We prove that this parameterized version of the Rank-width 1 Vertex Deletion problem can be solved in time 2^{O(k log k)} n^{O(1)}, where the parameter k is the solution size. This is joint work with Christophe Paul, Eun Jung Kim, and Sang-il Oum.

Date: 09. Feb 2015 16:15

Location: Seminar Room E.08, LBH

Fighting wildfires and epidemics has become a serious issue in the last years. To provide professional fire fighters with models and simulation tools, a good understanding of the theoretical foundations seems necessary. So far, mainly graph-based models have received attention. In this talk we discuss a geometric model, where a circular fire is spreading at unit speed, and a fire fighter is charged with containing the fire by building barriers at speed v. The fire cannot cross a barrier, and the fighter cannot move into the fire.

In the first case, the fire starts inside a simple polygon, and the fire fighter can choose which barriers to build from a pre-defined set of cuts, in order to save a maximum area inside the polygon from burning. While this problem is NP-hard, we can provide an 11.65-approximation algorithm, based on a hybrid approach of packing and scheduling. In the second case, no surrounding polygon or pre-defined barrier set is given. How large must v be, and how should the fire fighter proceed, to save the whole plane from burning? We show that a speed v > v_c = 2.6144... is sufficient if the fighter keeps building a barrier along the ever-expanding frontier of the fire (but the number of rounds this approach needs before the fire is contained grows to infinity as v decreases towards v_c). On the other hand, every "spiralling" strategy needs speed v > 1.618..., the golden ratio, to be successful.

Date: 19. Jan 2015 16:15

Location: Seminar Room E.08, LBH

In our research we work on online algorithms in the random-order model. In this setting the algorithm makes irrevocable decisions before the whole input sequence is known. Furthermore, in this model the adversary has no power over the order of arrival of the online input. We work on the competitive ratio for extensions of the well-known secretary problem, e.g. online matching, combinatorial auctions, and packing linear programs. We have an e-competitive algorithm for weighted matching in the random-order model. Since the problem is a direct generalization of the classical secretary problem, this algorithm is tight. Additionally, we have an online algorithm for packing linear programs that is (1-epsilon)-competitive if the capacities in the packing constraints are large enough. This algorithm is also tight in the sense that it matches a lower bound on the capacities.
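The classical secretary rule underlying such random-order algorithms can be sketched in a few lines. This is a minimal illustration of the threshold rule only (not the weighted-matching algorithm from the talk), with an assumed list-of-values input:

```python
import math

def select_secretary(values):
    """Classical secretary rule: observe the first n/e candidates without
    selecting anything, then pick the first candidate beating all of them.
    Falls back to the last candidate if none does."""
    n = len(values)
    k = int(n / math.e)  # length of the observation phase
    threshold = max(values[:k], default=float("-inf"))
    for v in values[k:]:
        if v > threshold:
            return v
    return values[-1]

# Over a uniformly random arrival order, this rule selects the best
# candidate with probability tending to 1/e.
print(select_secretary([3, 1, 4, 1, 5, 9, 2, 6]))
```

The e-competitiveness of the matching algorithm mentioned above rests on the same observe-then-commit idea, applied edge by edge.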

Date: 10. Dec 2014 10:30

Location: Lecture Room III.03a, Friedrich-Ebert-Allee 144

We propose polynomial-time algorithms that sparsify planar and bounded-genus graphs while preserving optimal or near-optimal solutions to Steiner problems. Our main contribution is a polynomial-time algorithm that, given an unweighted graph G embedded on a surface of genus g and a designated face f bounded by a simple cycle of length k, uncovers a subset F of E(G) of size polynomial in g and k that contains an optimal Steiner tree for any set of terminals that is a subset of the vertices of f. We apply this general theorem to prove that:

* given an unweighted graph G embedded on a surface of genus g and a terminal set S in V(G), one can in polynomial time find a set F in E(G) that contains an optimal Steiner tree T for S and that has size polynomial in g and |E(T)|;

* an analogous result holds for an optimal Steiner forest for a set S of terminal pairs;

* given an unweighted planar graph G and a terminal set S in V(G), one can in polynomial time find a set F in E(G) that contains an optimal (edge) multiway cut C separating S and that has size polynomial in |C|.

In the language of parameterized complexity, these results imply the first polynomial kernels for Steiner Tree and Steiner Forest on planar and bounded-genus graphs (parameterized by the size of the tree and forest, respectively) and for (Edge) Multiway Cut on planar graphs (parameterized by the size of the cutset). Steiner Tree and similar "subset" problems were identified in [Demaine, Hajiaghayi, Computer J., 2008] as important to the quest to widen the reach of the theory of bidimensionality ([Demaine et al, JACM 2005], [Fomin et al, SODA 2010]). Therefore, our results can be seen as a leap forward toward achieving this broader goal. Additionally, we obtain a weighted variant of our main contribution.

Date: 25. Nov 2014 17:15

Location: Seminar Room II.57, LBH

In this talk we consider the task of designing a walker (also known as an agent or a walking automaton), whose goal is to explore all the nodes of an unknown graph. We focus on a model of a walker which is not capable of storing any information on the nodes of the graph, but which is allowed to carry around the graph a small amount of local memory, whose size typically ranges from O(1) to O(log n) bits. We exhibit tradeoffs between a number of parameters of such an exploration process: the available memory space, the time required for exploration, and the knowledge of global parameters of the system. The applied approaches rely on a variety of subroutines, both randomized (biased random walks) and deterministic (universal traversal sequences). We also discuss implications of the obtained results for the undirected s-t connectivity problem (USTCON) in the standard RAM model of computation.
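A walker of the kind discussed here can be simulated directly. The sketch below is an illustration only (simple unbiased random walk, assumed adjacency-list representation): the walker itself stores no per-node information, and the visited set is pure bookkeeping of the simulation used to measure the cover time:

```python
import random

def cover_time(adj, start=0, rng=None):
    """Simulate a simple random walk on an undirected graph given as an
    adjacency list {node: [neighbors]}, counting steps until every node
    has been visited at least once."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    visited = {start}
    node, steps = start, 0
    while len(visited) < len(adj):
        node = rng.choice(adj[node])
        visited.add(node)
        steps += 1
    return steps

# Cycle on 6 nodes.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(cover_time(cycle))
```

The tradeoffs in the talk concern exactly this quantity: how much the exploration time grows when the walker's memory shrinks toward O(1) bits.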

Date: 05. Aug 2014 17:15

Location: LBH, Seminar Room II.57

Date: 31. Jul 2014 15:00

Location: LBH, Seminar Room II.57

Date: 25. Mar 2014 14:15

Location: Seminar Room E.08

The Maximum Weight Independent Set of Polygons problem is a fundamental problem in computational geometry. Given a set of weighted polygons in the two-dimensional plane, the goal is to find a set of pairwise non-overlapping polygons with maximum total weight. Despite a lot of research, the problem is not well-understood yet. The best known hardness result is strong NP-hardness, while the best known polynomial time algorithm achieves an approximation ratio of n^{epsilon}, even for the special case when all input objects are line segments (or rectangles). We present a (1 + epsilon)-approximation algorithm, assuming that each polygon in the input has at most a polylogarithmic number of vertices. Our algorithm has quasi-polynomial running time, i.e., it runs in time 2^{poly(log n, 1/epsilon)}.

Date: 26. Nov 2013 17:15

Location: Seminar Room E.08, Friedrich-Ebert-Allee 144

We study the structure of solutions to linear programming formulations for the traveling salesperson problem (TSP).

We perform a detailed analysis of the support of the subtour elimination linear programming relaxation, which leads to algorithms that find 2-matchings with few components in polynomial time. The number of components directly leads to integrality gap upper bounds for the TSP with distances one and two, for both undirected and directed graphs.

Our main results for fractionally Hamiltonian instances are: For undirected instances we obtain an integrality gap upper bound of 5/4 without any restrictions, of 7/6 if the optimal LP solution is half-integral, and of 10/9 if there is an optimal solution that is a basic solution of the fractional 2-matching polytope. For directed instances we obtain an integrality gap upper bound of 3/2, and of 4/3 if given an optimal 1/2-integral solution. Our algorithms perform sequences of local improvements that harness the structure of the support.

Additionally, we show that relying on the structure of the support is not an artifact of our algorithm, but is necessary under standard complexity-theoretic assumptions: we show that finding improved solutions via local search is W[1]-hard for k-edge change neighborhoods even for the TSP with distances one and two, which strengthens a result of Dániel Marx.

(joint work with Tobias Moemke)

Date: 12. Nov 2013 17:15

Location: Seminar Room II.57, Friedrich-Ebert-Allee 144

Recently it has been discovered that many large-scale networks have a power-law degree distribution, i.e. the number of nodes of degree i is proportional to i^{-beta}, where beta is the power-law exponent. In this talk we give a survey on the approximability of combinatorial optimization problems on Power Law Graphs (PLG). For the Dominating Set problem on PLGs, we give both new upper approximation bounds and the first tight logarithmic inapproximability results for the case beta > 2. Furthermore, we show a tight phase transition between approximability and inapproximability at the point beta = 2. We also give a survey on the relatively new notions of hyperbolic graphs and the associated metric spaces, and some of their interesting applications in the analysis of large-scale networks.
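For intuition, the exponent beta of a power-law histogram can be estimated by a log-log fit. The sketch below is illustrative only, using a synthetic histogram with counts exactly proportional to i^{-beta}:

```python
import math

def estimate_beta(counts):
    """Least-squares slope of log(count) versus log(degree) for a
    histogram counts[i] ~ C * i^(-beta); returns the estimate of beta."""
    xs = [math.log(i) for i in counts]
    ys = [math.log(c) for c in counts.values()]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope

# Synthetic degree histogram with beta = 2: counts proportional to i^(-2).
hist = {i: 1000 * i ** -2 for i in range(1, 6)}
print(estimate_beta(hist))
```

On real network data the fit is noisier and degree cutoffs matter; this only shows the relationship between the histogram and the exponent beta used throughout the abstract.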

(joint work with Mikael Gast and Marek Karpinski)

Date: 05. Nov 2013 17:15

Location: Seminar Room II.57, LBH

We present a framework for approximating the metric TSP based on a novel use of matchings. Traditionally, matchings have been used to add edges in order to make a given graph Eulerian, whereas our approach also allows for the removal of certain edges leading to a decreased cost.

For the TSP on graphic metrics (graph-TSP), we show that the approach gives a 1.461-approximation algorithm with respect to the Held-Karp lower bound. For graph-TSP restricted either to half-integral solutions of the Held-Karp relaxation or to a class of graphs that contains graphs of degree at most three and claw-free graphs, we show that the integrality gap of the Held-Karp relaxation matches the conjectured ratio 4/3. The framework also allows for generalizations in a natural way and leads to analogous results for the s,t-path traveling salesman problem on graphic metrics, where the start and end vertices are prespecified.

(joint work with Ola Svensson)

Date: 16. Jul 2013 17:15

Location: Seminar Room II.57

We consider the problem of recovering a hidden element s of a finite field F_q of q elements from queries to an oracle that for a given x \in F_q returns (x+s)^e for a given divisor e|q-1. This question is motivated by some applications to pairing based cryptography.

Using Lagrange interpolation one can recover s in time ep^{o(1)} on a classical computer. In the case of e = (q - 1)/2 an efficient quantum algorithm has been given by W. van Dam, S. Hallgren and L. Ip.

We describe some techniques from additive combinatorics and analytic number theory that lead to more efficient classical algorithms than the naive interpolation algorithm, for example, they use substantially fewer queries to the oracle. We formulate some questions and discuss whether quantum algorithms can give further improvement.
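On toy parameters the recovery task itself can be demonstrated by brute force. This is purely illustrative (chosen small values q = 13, e = 4, s = 5 are assumptions for the demo); the algorithms in the talk are vastly more query-efficient:

```python
# Toy instance: F_13, exponent e = 4 (a divisor of q - 1 = 12),
# and a hidden shift s = 5.
q, e, s = 13, 4, 5

def oracle(x):
    """Returns (x + s)^e in F_q for the hidden s."""
    return pow(x + s, e, q)

# Brute-force recovery: keep exactly the candidates consistent
# with the oracle's answers on a few queries.
queries = [0, 1, 2]
candidates = [t for t in range(q)
              if all(pow(x + t, e, q) == oracle(x) for x in queries)]
print(candidates)
```

Since e-th powers collide (here each nonzero fourth power mod 13 has four preimages), a single query does not pin down s; a few queries cut the candidate set down to the hidden shift.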

(joint work with Jean Bourgain, Moubariz Garaev and Sergei Konyagin)

Date: 02. Jul 2013 17:15

Location: Seminar Room E.08

Abstract Voronoi diagrams are based on bisecting curves enjoying simple combinatorial properties, rather than on the geometric notions of sites and circles. They serve as a unifying concept. Once the bisector system of any concrete type of Voronoi diagram is shown to fulfill the AVD properties, structural results and efficient algorithms become available without further effort. For example, the first optimal algorithms for constructing nearest Voronoi diagrams of disjoint convex objects, or of line segments under the Hausdorff metric, have been obtained this way. In a concrete order-k Voronoi diagram, all points that have the same k nearest neighbors among the given sites are placed into the same region. This work is the first to study abstract Voronoi diagrams of arbitrary order k. We prove that their complexity is upper bounded by 2k(n-k). So far, an O(k(n-k)) bound has been shown only for point sites in the Euclidean and L_p plane, and, very recently, for line segments. These proofs made extensive use of the geometry of the sites. Our result on AVDs implies a 2k(n-k) upper bound for a wide range of cases for which only trivial upper complexity bounds were previously known, and a slightly sharper bound for the known cases. Also, our proof shows that the reasons for this bound are combinatorial properties of certain permutation sequences.

Date: 18. Jun 2013 17:15

Location: Seminar Room II.57

Tree-width is a traditional measure of similarity to a tree. All graphs of bounded tree-width are sparse and allow efficient algorithms for many interesting problems that are NP-hard for arbitrary graphs. Clique-width is a parameter serving a similar purpose. Its strength is the inclusion of many dense graphs. Graphs of bounded tree-width are of bounded clique-width. This containment is non-trivial and involves an exponential blow-up of the parameter.

A new parameter, the multi-tree-width, is introduced as a natural generalization of both tree-width and clique-width. Containment of both bounded tree-width and bounded clique-width in bounded multi-tree-width is trivial and incurs no significant parameter increase. Nevertheless, it can be shown that classes of graphs of bounded multi-tree-width are also of bounded clique-width.

Date: 11. Jun 2013 17:15

Location: Seminar Room E.08, Friedrich-Ebert-Allee 144

We show that the simplex method with shadow vertex pivot rule can be used to compute a short path between a given pair of vertices of a polyhedron P = {x : Ax \leq b} along the edges of P, where A \in R^{m\times n} is a real-valued matrix. Both, the length of the path and the running time of the algorithm, are polynomial in m, n, and a parameter 1/delta that is a measure for the flatness of the vertices of P. For integer matrices A \in Z^{m \times n} we show a connection between delta and the largest absolute value Delta of any sub-determinant of A, yielding a bound of O(Delta^4 m n^4) for the length of the computed path. This bound is expressed in the same parameter Delta as the recent non-constructive bound of O(Delta^2 n^4\log (n Delta)) by Bonifas et al. For the special case of totally unimodular matrices, the length of the computed path simplifies to O(m n^4), which significantly improves the previously best known constructive bound of O(m^{16} n^3 \log^3(mn)) by Dyer and Frieze.

Date: 10. Jun 2013 15:00

Location: Seminar Room E.08

Let P be a given set of n points in the plane. The problem of computing the exact number of (straight-edge) triangulations of P has recently received some attention. Until very recently it was not known whether this number could always be computed asymptotically faster than by enumerating all triangulations of P. However, we now know that this number can be computed in O*(2^n) time, which is less than the lower bound of Omega(2.43^n) on the number of triangulations of any point set. In this talk I will present this O*(2^n) algorithm along with another, slightly older algorithm for this problem that happens to be interesting in its own right. I will also show experimental results comparing these two algorithms with yet another, older algorithm that was reported to be extremely fast in practice.
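One special case with a closed form makes a handy sanity check for such counting algorithms: for points in convex position, the number of triangulations of a convex polygon with n vertices is the Catalan number C_{n-2}. A short sketch of this standard fact (not the O*(2^n) algorithm from the talk):

```python
from math import comb

def triangulations_convex(n):
    """Number of triangulations of a convex polygon with n >= 3 vertices:
    the Catalan number C_{n-2} = binom(2m, m) / (m + 1) with m = n - 2."""
    m = n - 2
    return comb(2 * m, m) // (m + 1)

print([triangulations_convex(n) for n in range(3, 8)])
```

For general (non-convex) point sets no such formula is known, which is what makes the counting algorithms above interesting.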

Date: 22. Jan 2013 17:15

Location: Lecture Room III.03, LBH

The Traveling Salesman problem is one of the most famous problems in combinatorial optimization and its approximability is a huge open problem. Though there have been recent breakthroughs on the algorithmic side, until recently the best inapproximability constant was only 220/219, due to Papadimitriou and Vempala. In this talk we will sketch a much simpler reduction which achieves a slightly better inapproximability bound. The main ingredient is inapproximable constraint satisfaction problems where each variable only appears a bounded number of times. We will discuss the role of expanders and amplifier graphs in the inapproximability of such problems and outline a simple (but hard to analyze) idea which allows us to obtain better expander graphs using the probabilistic method, improving on a 25-year old result by Bollobas. Finally, we will discuss whether it is realistic to hope that further work in this direction might eventually lead to a strong inapproximability result for TSP (spoiler: probably not!).

Date: 11. Dec 2012 17:15

Location: Lecture Room III.03a, Friedrich-Ebert-Allee 144

We consider the Euclidean facility location problem with uniform opening cost. In this problem, we are given a set of n points P \subseteq R^2 and an opening cost f in R^+, and we want to find a set of facilities F \subseteq R^2 that minimizes f |F| + sum_{p in P} min_{q in F} dist(p,q), where dist(p,q) is the Euclidean distance between p and q. We obtain two main results: 1- A (1+eps)-approximation algorithm with running time O(n log^2 n loglog n) for any constant eps. 2- The first (1+eps)-approximation algorithm for the cost of the facility location problem for dynamic geometric data streams, i.e., when the stream consists of insert and delete operations of points from a discrete space [1,\dots,Delta]^2. The streaming algorithm uses poly(log Delta/eps) space. Our PTAS is significantly faster than any previously known (1+eps)-approximation algorithm for the problem, and is also relatively simple. Our algorithm for dynamic geometric data streams is the first (1+eps)-approximation algorithm for the cost of the facility location problem with polylogarithmic space, and it resolves an open problem in the streaming area.

Both algorithms are based on a novel and simple decomposition of an input point set P into small subsets P_i, such that: 1- the facility location problem for each P_i can be solved with a small, polylogarithmic number of facilities, and 2- sum_i opt(P_i) \le (1+eps) opt(P), where for a point set P, opt(P) denotes the cost of an optimal solution for P.

The decomposition can be used directly to obtain the PTAS by splitting the point set into the subsets and efficiently solving the problem for each subset independently. By combining our partitioning with dynamic-data-stream techniques that sample from the cells of the partition and estimate the cost from the sample, we obtain our data streaming algorithm.
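The objective being approximated is easy to state in code. A minimal sketch of the cost function from the abstract, for evaluating a candidate facility set (illustrative helper, not part of the algorithms above):

```python
import math

def fl_cost(points, facilities, f):
    """Facility location cost: f * |F| plus, for each client point,
    the Euclidean distance to its nearest open facility."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return f * len(facilities) + sum(
        min(dist(p, q) for q in facilities) for p in points)

P = [(0, 0), (2, 0), (2, 1)]
print(fl_cost(P, [(0, 0)], f=1.0))          # a single facility at the origin
print(fl_cost(P, [(0, 0), (2, 0)], f=1.0))  # opening a second facility pays off
```

The tension the algorithms exploit is exactly this tradeoff between opening cost f per facility and total connection distance.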

(joint work with Artur Czumaj, Christiane Lammersen and Christian Sohler)

Date: 30. Oct 2012 17:15

Location: Seminar Room II.57, Friedrich-Ebert-Allee 144

We study the boundary of tractability for maximization problems parameterized above a tight lower bound. We define a general property, namely strong lambda-extendibility, and prove that all strongly lambda-extendible properties parameterized above a tight lower bound are fixed-parameter tractable on graphs which are O(k) vertices away from being a graph in which each block is a clique. Our results hold for properties of oriented graphs and graphs with edge labels. This solves several open questions from the literature and improves existing algorithms, for example:

* Max-Cut above the Edwards-Erdős bound is fixed-parameter tractable: we give an algorithm that for any connected graph with n vertices and m edges finds a cut of size m/2 + (n-1)/4 + k in time 8^k O(n^4), or decides that no such cut exists. This answers a long-standing open question from parameterized complexity that has been posed several times over the past 15 years.

* Max-Bisection above its tight lower bound admits a linear kernel, improving a quadratic kernel by Gutin and Yeo.

* Max Acyclic Oriented Digraph above its tight lower bound is fixed-parameter tractable, solving an open question of Raman and Saurabh.

* Max q-Colorable Subgraph is fixed-parameter tractable for all values of q.

(joint work with Robert Crowston, Philip Geevarghese, Mark Jones, Saket Saurabh, Ondrey Suchy, and Rico Zenklusen)
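The Edwards-Erdős bound m/2 + (n-1)/4 can be checked by brute force on small connected graphs. A tiny illustrative sketch of the guarantee (exponential-time check, not the 8^k algorithm from the talk):

```python
from itertools import product

def max_cut(n, edges):
    """Brute-force maximum cut of a graph on vertices 0..n-1:
    try every two-coloring and count crossing edges."""
    best = 0
    for side in product([0, 1], repeat=n):
        cut = sum(1 for u, v in edges if side[u] != side[v])
        best = max(best, cut)
    return best

# Triangle: n = 3, m = 3; Edwards-Erdos bound = 3/2 + 2/4 = 2,
# and the triangle's maximum cut meets it exactly.
triangle = [(0, 1), (1, 2), (0, 2)]
assert max_cut(3, triangle) >= 3 / 2 + (3 - 1) / 4
print(max_cut(3, triangle))
```

The parameterized question above asks how far beyond this always-achievable bound a given graph's maximum cut reaches, with k measuring the excess.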

Date: 01. Aug 2012 16:15

Location: Seminar Room E.08, LBH, Friedrich-Ebert-Allee 144

Given three permutations on the integers 1 through n, consider the set system consisting of each interval in each of the three permutations. In 1982, Beck conjectured that the discrepancy of this set system is O(1). In other words, the conjecture says that each integer from 1 through n can be colored either red or blue so that the number of red and blue integers in each interval of each permutation differs only by a constant. (The discrepancy of a set system based on two permutations is at most two.)

We present a counterexample to this conjecture: for any positive integer n = 3^k, we construct three permutations whose corresponding set system has discrepancy Omega(log(n)). Our counterexample is based on a simple recursive construction, and our proof of the discrepancy lower bound is by induction. This construction also disproves a generalization of Beck's conjecture due to Spencer, Srinivasan and Tetali, who conjectured that a set system corresponding to \ell permutations has discrepancy O(sqrt(\ell)).

Our work was inspired by an intriguing paper from SODA 2011 by Eisenbrand, Palvolgyi and Rothvoss, who show a surprising connection between the discrepancy of three permutations and the bin packing problem: they show that Beck's conjecture implies a constant worst-case bound on the additive integrality gap of the Gilmore-Gomory LP relaxation for bin packing in the special case when all items have sizes strictly between 1/4 and 1/2, also known as the three-partition problem. Our counterexample shows that this approach to bounding the additive integrality gap for bin packing will not work. We can, however, prove an interesting implication of our construction in the reverse direction: there are instances of bin packing and corresponding optimal basic feasible solutions of the Gilmore-Gomory LP relaxation such that any packing that contains only patterns from the support of these solutions requires at least OPT + Omega(log(m)) bins, where m is the number of items.

(joint work with Ofer Neiman and Aleksandar Nikolov)
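On tiny instances the discrepancy notion used here is concrete enough to compute by brute force. An illustrative sketch (exponential time; permutations assumed to be given as lists of 1..n):

```python
from itertools import product

def discrepancy(perms):
    """Minimum over red/blue colorings (encoded +1/-1) of the maximum
    imbalance |#red - #blue| over all intervals of all given permutations."""
    n = len(perms[0])
    best = n
    for colors in product([-1, 1], repeat=n):  # colors[x-1] colors element x
        worst = 0
        for perm in perms:
            for i in range(n):
                s = 0
                for j in range(i, n):  # interval perm[i..j]
                    s += colors[perm[j] - 1]
                    worst = max(worst, abs(s))
        best = min(best, worst)
    return best

# A single permutation: alternating colors give discrepancy 1.
print(discrepancy([[1, 2, 3, 4]]))
# A pair of permutations can force discrepancy 2, matching the
# "at most two" bound quoted in the abstract.
print(discrepancy([[1, 2, 3, 4], [2, 4, 1, 3]]))
```

The counterexample in the talk shows that with three permutations this quantity grows like log n rather than staying constant.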

Hausdorff Colloquium Lecture

Amino Acid Chains and Peptide Binding Predictions

Date: 20. Jun 2012 18:00

Location: Mathematik-Zentrum, Lipschitz Lecture Hall, Endenicher Allee 60

We will give some mathematical foundations for immunology via a geometry of kernels, metric spaces, and finite spaces, and apply that to spaces associated to genes.

Date: 28. Feb 2012 17:15

Location: Seminar Room II.57, LBH

Stochastic mean payoff games are a powerful class of two-player zero-sum games that generalize, e.g., cyclic games, simple stochastic games, parity games, and Markov decision processes. A stochastic mean payoff game consists of a directed graph with black, white, and red vertices: Black and white vertices belong to the two players, red vertices are random vertices. The edges are labeled with rewards. During the game a token is moved along the edges, and the black player pays the white player the corresponding reward. The goal of the players is to optimize their average payment.

The question whether equilibrium strategies and game values of stochastic mean payoff games can be computed efficiently is a long-standing open problem. It is only known that this problem lies in the intersection of NP and co-NP, and there are pseudo-polynomial algorithms for some subclasses.

To get some more insights into the complexity of solving these games, we consider two relaxed concepts: Approximate Nash equilibria and smoothed analysis. In the former, players are happy with a strategy if they cannot gain too much by deviating. In the latter, we rule out pathological instances by perturbing the rewards. We prove that the existence of pseudo-polynomial algorithms implies the existence of approximation schemes and polynomial smoothed complexity.

(Joint work with Endre Boros, Khaled Elbassioni, Mahmoud Fouz, Vladimir Gurvich, and Kazuhisa Makino.)

Association Schemes and Polynomial Factoring

Location: Room II.57, Friedrich-Ebert-Allee 144

**10:00-11:00**

Paul-Hermann Zieschang (University of Texas),

*Hypergroups, Association Schemes, Buildings*

*Abstract:*

An association scheme is a complete graph with a coloring satisfying a certain regularity condition. Cayley graphs of groups are examples and show that each group can be viewed as an association scheme. Buildings were introduced by Jacques Tits in order to associate to each simple algebraic group over an arbitrary field a geometry, in exactly the same way as the finite plane of order 2 (the Fano plane) is associated to the simple group L_3(2) of order 168. In my talk I will first show that, in the theory of association schemes, the class of buildings plays exactly the same role which the class of Coxeter groups plays in group theory. After that, I will show that the notion of a hypergroup is the concept which best explains the relationship between association schemes and buildings.

**11:30-12:30**

Sergei Evdokimov (Steklov Institute of Mathematics St.Petersburg)

*Factorization of polynomials over finite fields and odd association schemes*

*Abstract:*

To each separable split polynomial over a finite field we associate an odd association scheme on the set of its roots. Based on this and on the theory of multidimensional extensions of schemes, a new deterministic algorithm to decompose a polynomial of degree n over a field of cardinality q into irreducible factors (a version of the author's quasipolynomial factorization algorithm) is constructed. Its running time (assuming GRH) is polynomial in n^{b_n} and log(q), where b_n is the maximum base number of a primitive odd association scheme on at most n points.

**Thursday, December 8th, 2011**

**10:00-11:00**

Nitin Saxena (Bonn)

*Combinatorial Schemes in Algebraic Algorithms*

*Abstract:*

We introduce a new combinatorial object called m-scheme. It subsumes several standard objects like finite groups, strongly regular graphs and association schemes. An m-scheme on points V is basically a partition of V^m satisfying certain natural conditions. We state a conjecture about schemes and prove it in special cases. Finally, we show how it appears in the classical problem of factoring polynomials over finite fields. This is joint work with Gabor Ivanyos and Marek Karpinski.

**11:30-12:30**

Manuel Arora (Bonn)

*Hanaki and Uno's Theorem for Schemes of Prime Order and its Application to Polynomial Factoring*

*Abstract:*

We give an overview of Hanaki and Uno's results for schemes of prime order. A central result of Hanaki and Uno states that all association schemes of prime order are commutative; moreover, all valencies of non-trivial relations and all multiplicities of non-trivial irreducible characters are constant for such schemes. We give an overview of the techniques used in the proof of Hanaki-Uno. Furthermore, we describe how their result for schemes of prime order can be applied in the context of polynomial factoring over finite fields, especially in connection with the new factoring algorithm by Ivanyos, Karpinski and Saxena. As we will show, the latter algorithm can factor prime-degree polynomials efficiently under certain conditions, as a direct result of Hanaki-Uno.

**Friday, December 9th, 2011**

**10:00 - 11:00**

Ilia Ponomarenko (Steklov Institute of Mathematics St.Petersburg)

*Polynomial-time recognition of antisymmetric coherent configurations*

*Abstract:*

It is known that for any permutation group G of odd order one can find a subset of the permuted set whose stabilizer in G is trivial, and, if G is primitive, also a base of size at most 3. Both of these results are generalized to the coherent configuration of G (which in this case is a schurian antisymmetric coherent configuration). This enables us to construct a polynomial-time algorithm for recognition and isomorphism testing of schurian antisymmetric coherent configurations.

**11:30 - 12:30**

Sergei Evdokimov (Steklov Institute of Mathematics St.Petersburg)

*Schurity of S-rings and Cayley schemes*

*Abstract:*

Criteria for schurity and non-schurity of S-rings and Cayley schemes over an abelian (especially cyclic) group are given. In particular, we completely characterize those finite cyclic groups over which every S-ring, equivalently every Cayley scheme, is schurian (joint work with Istvan Kovacs and Ilia Ponomarenko).

Date: 29. Nov 2011 17:15

Location: Seminar Room II.57

This talk is based on joint work with Sungjin Im and Maxim Sviridenko, in which we give a 2-approximation for Preemptive Generalized Min Sum Set Cover and use it to improve the current best approximation algorithm for Generalized Min Sum Set Cover. The problem was introduced by Azar, Gamzu and Yin, who gave an O(log n) approximation; this was improved by Bansal, Gupta and Krishnaswamy to roughly 500, and very recently to 28 by Skutella and Williamson. In this talk I will motivate why this is an interesting problem to look at, for example both broadcast scheduling and web search ranking can be modelled as Generalized Min Sum Set Cover, and why it makes sense to look at the preemptive version of the problem. I will not give proofs in this talk; I will just highlight the main ingredients needed to obtain the 2-approximation for the preemptive problem and later show how to transform it into a non-preemptive solution. At the end, time permitting, I will say something about generalizing this problem using submodular cost functions.
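For the plain (non-generalized) Min Sum Set Cover problem, the natural greedy rule, repeatedly picking the set that covers the most still-uncovered elements, is the classical constant-factor approximation. A sketch of that baseline for intuition (not the 2-approximation from the talk; assumes the sets cover the universe):

```python
def greedy_mssc(universe, sets):
    """Greedy Min Sum Set Cover: order the sets so that each step covers
    as many still-uncovered elements as possible. The objective is the
    sum, over elements, of the step at which each is first covered."""
    uncovered = set(universe)
    sets = [set(s) for s in sets]
    step, total = 0, 0
    while uncovered:
        step += 1
        best = max(sets, key=lambda s: len(s & uncovered))
        newly = best & uncovered
        total += step * len(newly)
        uncovered -= newly
    return total

U = {1, 2, 3, 4}
S = [{1, 2, 3}, {3, 4}, {4}]
print(greedy_mssc(U, S))
```

In the generalized version each element must be covered a prescribed number of times, which is what breaks this simple rule and motivates the LP-based techniques in the talk.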

Graph Limits - The Hyperfinite Case

Alfred Renyi Institute of Mathematics, Budapest and EPFL Lausanne

Date: 25. Nov 2011 14:00

Location: Seminar Room II.57

The testability results of Newman and Sohler, as well as the work of Hassidim, Kelner, Nguyen and Onak, suggest that the theory of hyperfinite graphs (say, bounded-degree planar graphs) is analogous to that of dense graphs. In this talk, we show how to build a limit theory for hyperfinite graphs parallel to the theory of graphons by Lovasz and Szegedy.

Date: 22. Nov 2011 17:15

Location: Seminar Room II.57

This talk presents the problem of identifying an optimal DFA from a given (usually incomplete) description of its I/O behavior. Given a set of example strings drawn from a regular language (the so-called positive examples) and another set of strings that do not occur in this language (the so-called negative examples), the task is to find a finite automaton with a minimum number of states that accepts all positive examples and none of the negative examples. This problem is NP-hard. However, there are several algorithms that use heuristic approaches and usually deliver good results. One of the most powerful among them is the algorithm by R. Price, which won "The Abbadingo One DFA Learning Competition" in 1997. It belongs to the family of Evidence-Driven State Merging (EDSM) algorithms. The talk presents a new algorithm, developed as part of a diploma thesis, that also belongs to the EDSM family. There are, however, indications that it performs better than Price's algorithm in many cases. This is discussed in detail in a subsequent comparison of the two algorithms.
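The consistency requirement at the core of this identification problem is easy to state in code. A minimal sketch (hypothetical transition-dict DFA representation; not Price's EDSM algorithm) that checks whether a candidate DFA accepts all positive and rejects all negative examples:

```python
def accepts(delta, start, accepting, word):
    """Run the DFA given by the transition dict delta[(state, symbol)];
    a missing transition rejects the word."""
    state = start
    for ch in word:
        state = delta.get((state, ch))
        if state is None:
            return False
    return state in accepting

def consistent(delta, start, accepting, positives, negatives):
    """True iff the DFA accepts every positive and no negative example."""
    return (all(accepts(delta, start, accepting, w) for w in positives)
            and not any(accepts(delta, start, accepting, w) for w in negatives))

# Two-state DFA accepting words with an even number of 'a'.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
print(consistent(delta, 0, {0}, ["", "aa", "baba"], ["a", "ba"]))
```

EDSM algorithms search for the smallest automaton satisfying exactly this predicate, starting from a prefix tree of the examples and greedily merging states.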

Strategic Network Models: From Building to Bargaining

Date: 11. Nov 2011 14:00

Location: HIM Lecture Hall, Poppelsdorfer Allee 45

In the first part of this talk, I will consider a new game theoretic model of network formation. The model is motivated by the observation that the social cost of relationships is often the effort needed to maintain them. In contrast to many popular models of network formation whose equilibria tend to be complete graphs or trees, our model has equilibria which support many observed network structures, including triadic closure and heavy-tailed degree distributions. This part of the talk represents work with J.T. Chayes, J. Ding and B. Lucier. If time permits, I will then turn to recent work by N. Balcan, M. Braverman, J.T. Chayes, S.-H. Teng, and myself in which we give an algorithm to discover overlapping communities within a social network.

Date: 12. Jul 2011 17:15

Location: Seminar Room II.57

Parsers for LR(k)-grammars are usually based on pushdown transducers with a potentially exponential number of states. In his paper "On LR(k)-parsers of polynomial size", Blum introduced a new concept for parsers. It is based on the simulation of a pushdown transducer by manipulating a directed graph and leads to parsers of polynomial size. For my diploma thesis, I implemented a parser generator which transforms a formal specification of an LR(k)-grammar into such an "expanded LR(k)-parser". For some grammars, parsers were generated and tested against those produced by Bison and MSTA. In this talk I will give an introduction to the concept of expanded LR(k)-parsers and present the results of the performed tests.

Date: 10. May 2011 17:15

Location: Seminar Room II.57

Consider the following combinatorial auction problem with conflict graph: There is a set of k communication channels that should be allocated to n >> k bidders. Bidders have arbitrary valuations for bundles of channels. A channel can be assigned to multiple bidders unless there is a conflict between two bidders sharing the same channel. Conflicts are described in terms of a graph in which the nodes represent the bidders and the edges represent conflicts. This problem generalizes combinatorial auctions (assuming that the conflict graph is a clique) and maximum weight independent set (assuming that there is only one channel).

To motivate our study, we show that problem formulations for secondary spectrum auctions in well-established models for wireless communication, e.g., the protocol or the physical model, can be reduced to the combinatorial auction problem with different variants of conflict graphs. We prove that the conflict graphs obtained in our reductions have an interesting property: the so-called inductive independence, rho, can be bounded by a slowly growing function depending on the communication model. For the protocol model and various related models, rho is even bounded by a constant.

We make use of the bounded inductive independence in order to obtain polynomial-time approximation algorithms bypassing the n^{1-epsilon} lower bound on the approximation ratio for the independent set problem on general graphs. In fact, in terms of the inductive independence and the number of channels, we can prove an approximation factor of O(rho sqrt k) for the combinatorial auction problem with conflict graph. Known lower bounds for combinatorial auctions and independent set show that this approximation ratio is close to best possible both in terms of rho and k. Applied, e.g., to the protocol model, our algorithm gives an O(sqrt k) approximation guarantee for spectrum auctions, which scales nicely as it only depends on the number of channels but is independent of the number of bidders.

Our approach can be used to derive incentive-compatible mechanisms using the LP-based framework of Lavi and Swamy for general bidders with demand oracles. In fact, we slightly extend this framework by starting with an infeasible rather than feasible ILP formulation. The approximation algorithm computes a linear combination of feasible solutions approximating the optimal solution of the corresponding LP and then chooses one of these solutions at random. The obtained mechanism is truthful in expectation.

(joint work with Martin Hoefer and Thomas Kesselheim)

A 3/2-Approximation Algorithm for Minimal Weighted Constrained Forest

Date: 27. Apr 2011 10:00

Location: Seminar Room II.57, Friedrich-Ebert-Allee 144

Let Min WCF(p) denote the following graph problem: Given an edge-weighted graph, we want to find a covering forest such that every tree of the forest has order at least p, minimizing the total weight of this forest.

This problem is a rather natural problem in graph theory. Imielinska et al. (1993) have shown that it is NP-hard and that it admits a 2-approximation. This ratio has not been significantly improved since. The general approach to constrained forests by Goemans and Williamson (1995) gives a 2-1/n approximation ratio, where n is the number of vertices.
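The Kruskal-style rule behind such a 2-approximation can be sketched as follows. This is a minimal illustration, assuming the variant that adds edges in order of weight whenever at least one of the two components being merged still has fewer than p vertices; it is not taken verbatim from either paper.

```python
def constrained_forest(n, edges, p):
    """Sketch of a Kruskal-style 2-approximation for Min WCF(p).

    edges: list of (weight, u, v) on vertices 0..n-1.
    Scans edges in nondecreasing weight and adds an edge whenever it merges
    two components of which at least one still has fewer than p vertices.
    Returns the list of chosen edges (u, v).
    """
    parent = list(range(n))
    size = [1] * n

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    forest = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv and (size[ru] < p or size[rv] < p):
            parent[ru] = rv
            size[rv] += size[ru]
            forest.append((u, v))
    return forest
```

On a 4-cycle with two cheap opposite edges and p = 2, the sketch picks exactly the two cheap edges, giving two trees of order 2 each; once every component has reached order p, all remaining edges are skipped.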

We propose a 3/2 approximation for this problem.

Date: 19. Apr 2011 17:15

Location: Seminar Room II.57

We give an outline of the proof of the following fact. Let P be an arbitrary simple polygon, and let S be an arbitrary set of 15 points inside P. Then there exists a subset T of S that is not "visually discernible", that is, T does not equal the intersection of vis(v) with S for the visibility region vis(v) of any point v in P. In other words, the VC-dimension d of visibility regions in a simple polygon cannot exceed 14. Since Valtr proved in 1998 that d lies between 6 and 23, no progress had been made on this bound. Via epsilon-net theorems, our result immediately implies a smaller upper bound on the number of guards needed to cover P.
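The shattering condition underlying the VC-dimension can be checked mechanically for any finite set system; the following brute-force sketch is purely illustrative of the definition (it is not the paper's method, and the function names are our own).

```python
from itertools import combinations

def shattered(points, ranges):
    """True if every subset of `points` equals r & points for some range r."""
    traces = {frozenset(r) & frozenset(points) for r in ranges}
    # traces is always a subset of the power set of `points`, so equality of
    # cardinalities means every subset is realized as a trace.
    return len(traces) == 2 ** len(points)

def vc_dimension(universe, ranges):
    """Largest d for which some d-subset of `universe` is shattered."""
    return max(
        d for d in range(len(universe) + 1)
        if any(shattered(sub, ranges) for sub in combinations(universe, d))
    )
```

For example, the intervals over three collinear points shatter any pair but never realize the trace {first, third} without the middle point, so the VC-dimension of intervals on a line is 2; the paper plays the analogous game with visibility regions vis(v) as the ranges.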

Date: 22. Mar 2011 17:15

Location: Seminar Room II.57, Friedrich-Ebert-Allee 144

Euclidean optimization problems such as TSP and minimum-length matching admit fast partitioning algorithms that compute near-optimal solutions on typical instances.

In order to explain their performance, we design a framework for the application of smoothed analysis to partitioning algorithms for Euclidean optimization problems. We use this framework to analyze Karp's partitioning scheme and Dyer and Frieze's matching algorithm, among others.

Joint work with Markus Bläser and B. V. Raghavendra Rao (Saarland University).

Date: 20. Dec 2010 11:00

Location: Lecture Hall, Hausdorff Research Institute for Mathematics, Poppelsdorfer Allee 45

We consider two game theoretic network models - one concerning network formation, and the other concerning bargaining on exchange networks. In the first case, we define the model and show that it has equilibria which support many observed network structures, including triadic closure and heavy-tailed degree distributions. We do not, however, consider the problem of finding dynamics which converge to (some of) these equilibria. In the second case, we study the well-known Nash bargaining equilibria on exchange networks. We construct natural local bargaining dynamics, achievable by actual players, and show that these dynamics converge quickly to an approximate Nash bargaining equilibrium.

Date: 17. Dec 2010 11:15

Location: Room 3.066, Bruehler Str. 7

Adaptive analysis is a well-known technique in algorithm design that refines the traditional worst-case analysis over all instances of fixed input size by taking further parameters into account, such as the size of the output in the case of output-sensitive analysis. We present adaptive techniques for the computation of the convex hull in two and three dimensions, and for related problems. The main analysis technique is unordered instance optimality, based on the structural entropy of the instance. Through a generalization of output sensitivity and a more precise analysis of Kirkpatrick and Seidel's algorithm, it yields algorithms that are instance optimal among all algorithms in the multilinear decision tree model, under worst-case or random input order. We will also describe complementary analysis techniques based on the input order, which improve results on the computation of convex hulls in two and three dimensions and yield new results for the adaptive computation of Voronoi and Delaunay diagrams, through the entropy of a partition of the input into easier instances.

Date: 14. Dec 2010 17:15

Location: Room 3.066, Bruehler Str. 7

A property testing algorithm for a property Pi in the bounded degree graph model [GR97] is an algorithm that, given access to the adjacency list representation of a graph G=(V,E) with maximum degree at most d, accepts G with probability at least 2/3 if G has property Pi, and rejects G with probability at least 2/3 if G differs on more than epsilon*dn edges from every graph with property Pi. A property is testable if, for every epsilon, d and n, there is a property testing algorithm A_{epsilon,n,d} that makes at most q(epsilon,d) queries to the graph, i.e., a non-uniform algorithm that makes only a constant number of queries to the graph.

A k-disc around a vertex v of a graph G=(V,E) is the subgraph induced by all vertices of distance at most k from v. We show that the structure of a planar graph on n >= N(epsilon,d) vertices with constant maximum degree d is determined, up to the deletion of epsilon*dn edges, by the frequency of k-discs for a certain k = k(epsilon,d) that is independent of the size of the graph. We can replace planar graphs by any hyperfinite class of graphs, which includes, for example, every graph class that does not contain a set of forbidden minors. We use this result to obtain new results and to improve upon existing results in the area of property testing. In particular, we prove that graph isomorphism is testable, that every graph property is testable for every class of hyperfinite graphs, that every hyperfinite graph property is testable, and that a large class of graph parameters is approximable for hyperfinite graphs. Our results also give a partial explanation of the success of motifs in the analysis of complex networks.

(joint work with Ilan Newman).

Date: 14. Sep 2010 17:15

Location: Seminar Room N327

Finding the length of the longest increasing subsequence (LIS) is a classic algorithmic problem. A small example should explain the problem. In the array 1 9 4 10 6 7, the LIS has length 4 and is 1 4 6 7. Let n denote the size of the input array. Simple textbook solutions achieve an O(n log n) running time, using dynamic programming. What can a sublinear time algorithm say about the LIS? For any constant delta > 0, we construct a polylogarithmic time randomized algorithm that estimates the length of the LIS within an additive error of (delta n). Previously, the best known polylogarithmic time algorithms could only achieve an additive n/2 approximation. Why is this problem challenging? The LIS length is the output of a dynamic program, which means unless we solve many (read linear) subproblems, we cannot get the exact LIS length. We are looking to get extremely accurate (read delta n) approximations in *polylogarithmic time*. The algorithm we construct attempts to follow the progress of a (top-down) dynamic program. In polylogarithmic time, it zeroes in on the "important" sub-problems to solve. In essence, it is a sublinear-time divide and conquer algorithm. This requires some new sublinear algorithm ideas, which we hope will have applications for approximating other dynamic programs.
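The O(n log n) textbook solution mentioned above is commonly implemented as dynamic programming with binary search (patience sorting); a minimal sketch:

```python
from bisect import bisect_left

def lis_length(a):
    """Length of the longest strictly increasing subsequence in O(n log n).

    tails[k] holds the smallest possible tail value of an increasing
    subsequence of length k+1 seen so far; each new element either extends
    the longest subsequence found or improves one of the tails.
    """
    tails = []
    for x in a:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # x extends the current longest subsequence
        else:
            tails[i] = x      # x improves the best tail of length i+1
    return len(tails)
```

On the abstract's example, lis_length([1, 9, 4, 10, 6, 7]) returns 4, matching the subsequence 1 4 6 7. Note that this exact algorithm is inherently linear-time; the talk's point is that a sublinear algorithm cannot run it and must instead sample the "important" subproblems.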

Joint work with Michael Saks. (To appear in FOCS 2010)

Date: 10. Aug 2010 17:15

Location: Seminar Room N327, Roemerstr. 164

We revisit the well-known problem of sorting under partial information: sort a finite set given the outcomes of comparisons between some pairs of elements. The input is a partially ordered set P, and solving the problem amounts to discovering an unknown linear extension of P, using pairwise comparisons. The information-theoretic lower bound on the number of comparisons needed in the worst case is log e(P), the binary logarithm of the number of linear extensions of P. In a breakthrough paper, Jeff Kahn and Jeong Han Kim (STOC 1992) showed that there exists a polynomial-time algorithm for the problem achieving this bound up to a constant factor. Their algorithm invokes the ellipsoid algorithm at each iteration for determining the next comparison, making it impractical. We develop efficient algorithms for sorting under partial information, in particular a subcubic algorithm performing O(log e(P)) comparisons. Like Kahn and Kim, our approach relies on graph entropy. This is joint work with Samuel Fiorini, Gwenael Joret, Raphael Jungers, and Ian Munro.
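The quantity e(P) behind the lower bound log e(P) can be made concrete by brute-force counting linear extensions of a tiny poset. This sketch is illustrative only (counting linear extensions is #P-hard in general, and the talk's algorithms never enumerate them):

```python
from itertools import permutations

def linear_extensions(n, relations):
    """Count linear orders of 0..n-1 consistent with the pairs (a, b),
    each meaning a < b in P. Brute force, for tiny posets only."""
    count = 0
    for perm in permutations(range(n)):
        pos = {v: i for i, v in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            count += 1
    return count
```

For the poset 0 < 1, 0 < 2 on three elements, e(P) = 2 (orders 012 and 021), so any comparison-based algorithm needs at least log2(2) = 1 further comparison in the worst case; with no relations at all, e(P) = 3! = 6, recovering the usual log(n!) sorting bound.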

Date: 02. Jul 2010 14:15

Location: Seminar Room N327

This talk will be concerned with how well we can approximate NP-hard problems. One of the most successful algorithmic strategies, from an upper bound perspective, is to write a tractable relaxation for an NP-hard problem and present a "rounding" algorithm. To prove a hardness of approximation result, on the other hand, one often gives a reduction assuming the existence of computationally hard structures, for instance those promised by the conjecture P neq NP.

Until recently, these seemed to be two different facets of a problem. This distinction is now blurring: we are beginning to understand systematic recipes of how to use the output of algorithmic relaxations to come up with reductions, and how to use the analysis of the hardness reduction to produce a rounding algorithm for the relaxation itself.

This viewpoint has led to the discovery of new and optimal algorithms and reductions for several important problems such as Max-Cut, Vertex Cover and beyond for which elegant and clever, but seemingly unrelated, algorithms and reductions were previously known, and seems to point to a promising future for approximability.

In this talk I will attempt to explain this emerging duality between algorithms and complexity, and I will conclude with many open questions. The only prerequisite for this talk is knowledge of the Central Limit Theorem and an interest in the question "Where do algorithms and reductions for NP-hard problems come from?"

Date: 09. Mar 2010 17:15

Location: Seminar Room N327, Roemerstr. 164

Computing the visibility among geometric objects is one of the most natural subjects in the field of computational geometry. We consider a visibility problem for two non-intersecting open or closed simple polygonal chains P and Q in the plane. The visibility area between P and Q is the union of all line segments pq where p lies on the boundary of P, q lies on the boundary of Q, and pq intersects neither P nor Q.

We present an optimal linear time algorithm for computing this area. The given result has two main benefits. On one hand, we generalize a known visibility result inside simple polygons. It was already shown how to compute the inner visibility region of an edge e of a simple polygon P efficiently. Instead of e and P we allow more general polygonal objects P and Q while maintaining optimal linear time. On the other hand, the given problem arises in the context of a digital three dimensional building construction model. An efficient computation of the visibility area between two wall axes A and B in the digital model of a building is an important question for computer aided construction management.

Joint work with Elmar Langetepe and Jan Tulke.

Date: 10. Feb 2010 17:15

Location: N327

A traveller is planning a tour from some start position s to goal position g in d-dimensional space. Transportation is provided by n carriers. Each carrier is a convex object that results from intersecting finitely many closed linear subspaces; it moves at constant speed along a line. Different carriers may be assigned different velocity vectors.

While using carrier C, the traveller can walk at innate speed v in any direction, like a passenger on board a vessel. Whenever his current position on C is simultaneously contained in some other carrier C', the traveller can change from C to C', and continue his tour by C'.

Given initial positions of the carriers and of s and g, is the traveller able to reach g starting from s? If so, what minimum travel time can be achieved?

Date: 15. Dec 2009 17:15

Location: Seminar Room N327, Roemerstr. 164

An O(n log^2 n) algorithm is presented to compute the characteristic polynomial of a tree on n vertices, improving on the previously best quadratic running time. With the same running time, the algorithm can be generalized in two directions. The algorithm is a counting algorithm, and the same ideas can be used to count other objects. For example, one can count the number of independent sets of all possible sizes simultaneously within the same running time. These counting algorithms not only work for trees, but can be extended to arbitrary graphs of bounded tree-width.
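The counting generalization mentioned above (independent sets of all sizes in a tree) can be written as a straightforward tree DP over polynomials in the set size. This naive sketch multiplies polynomials in quadratic time; the talk's O(n log^2 n) bound comes from replacing these products with fast polynomial multiplication, which is omitted here.

```python
def poly_mul(p, q):
    # Naive polynomial product (coefficient lists, lowest degree first).
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def independent_set_counts(adj, root=0):
    """Coefficient k of the result = number of independent sets of size k
    in the tree given as an adjacency dict."""
    def dfs(v, parent):
        with_v, without_v = [0, 1], [1]   # polynomials in the set size
        for c in adj[v]:
            if c != parent:
                cw, cwo = dfs(c, v)
                with_v = poly_mul(with_v, cwo)                # v in: children out
                without_v = poly_mul(without_v, poly_add(cw, cwo))
        return with_v, without_v

    w, wo = dfs(root, -1)
    return poly_add(w, wo)
```

On the path 0-1-2 this returns [1, 3, 1]: one empty set, three singletons, and one independent set of size two ({0, 2}).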

Date: 08. Dec 2009 17:15

Location: N327

On the optimality of spiral search

Elmar Langetepe (University of Bonn)

Abstract: Searching for a point in the plane is a well-known search game problem introduced in the early eighties. The best known search strategy is given by a spiral and achieves a competitive ratio of 17.289... Gal (1980) showed that this strategy is the best among all monotone and periodic strategies; since then it has been open whether it is optimal in general. This work settles this long-standing open problem and shows that spiral search is indeed optimal. The problem can be considered as the continuous version of the well-known m-ray search problem and also appears in several non-geometric applications and modifications. The optimality of spiral search has therefore been an important question considered by many researchers over the last decades. The presented work answers the logarithmic spiral conjecture for the given problem. The lower bound construction might be helpful for similar settings; it also simplifies existing proofs on classical m-ray search.

Date: 17. Nov 2009 17:15

Location: Seminar Room N327, Roemerstr. 164

Inputs in the real world are probably not worst-case and usually have some structure worth exploiting. Unfortunately, this "structure" may not be known in advance, or even quantifiable easily. Suppose the input is coming from some fixed (unknown) distribution. Can we use this to improve our running time? The model of self-improving algorithms addresses this question, and we give an algorithm for computing planar convex hulls in the self-improving model: given a sequence I_1, I_2, ... of planar n-point sets, the upper convex hull UH(I) of each set I is desired. We assume that there exists a probability distribution D on n-point sets, such that the inputs I_j are drawn independently according to D. Furthermore, D is such that the individual points are distributed independently of each other. In other words, the ith point is distributed according to D_i. The D_i's can be arbitrary but are independent of each other. The distribution D is not known to the algorithm in advance. Such distributions are not really esoteric. Computing the convex hull of n random points in the unit square is a well studied problem. We deal with a far more general setting. Our aim is to compute convex hulls of such inputs in expected running time *better* than the standard lower bound of n log n. We would like our algorithm to "become" optimal for any such D. We give an algorithm whose expected running time is O(n + H(UH(I))). Here, H(UH(I)) is the entropy of the output, which is a lower bound for the expected running time of any algebraic computation tree that computes the convex hull. Our algorithm is thus asymptotically optimal for D. Joint work with Ken Clarkson and Wolfgang Mulzer.
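The object being computed, UH(I), is the standard upper convex hull; the worst-case O(n log n) baseline that the self-improving algorithm beats in expectation can be sketched via the upper half of Andrew's monotone chain:

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def upper_hull(points):
    """Upper convex hull from left to right in O(n log n).

    After sorting, pop the last hull point while the last three points fail
    to make a right (clockwise) turn; collinear points are discarded.
    """
    hull = []
    for p in sorted(set(points)):
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return hull
```

The cost of this baseline is dominated by the sort; the self-improving algorithm replaces that uniform n log n term by the output entropy H(UH(I)) of the distribution D.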

Date: 29. Sep 2009 17:15

Location: Seminar Room N327

Internet ad auctions have inspired new algorithmic questions, the study of which has not only given new insights for this industry, but has also added fundamental results to the theory of algorithms. One such problem is that of Budgeted Allocation: We are given a set of m indivisible items and n agents; each agent i is willing to pay b_ij for the item j and has a maximum overall budget of B_i. The goal is to allocate items to the agents so as to maximize the total revenue.

In this talk, I will briefly survey our recent results on the design of online and approximation algorithms for the budgeted allocation problem. Then, I will describe one of the algorithms and the lower bound technique which improves the hardness of budgeted allocation and some other allocation problems.

Date: 12. May 2009 17:15

Location: Seminar Room N327, Roemerstr. 164

We consider the following motion planning problem from swarm robotics: given one or more awake robots in some metric space M, wake up a set of asleep robots. A robot awakens a sleeping robot by moving to the sleeping robot's position. When a robot awakens, it is available to assist in awakening other slumbering robots. We investigate offline and online versions of this problem.

Date: 03. Feb 2009 17:15

Location: Seminar Room N327

We study the approximability of the reconstruction problem of phylogenetic trees with respect to three different cost measures and give the first explicit lower bounds, under standard complexity-theoretic assumptions. For the Steiner Tree Problem in Phylogeny (STPP) and the Generalized Tree Alignment (GTA) problem, we show that unless P=NP, no polynomial-time algorithm can approximate these problems with an approximation ratio below 359/358. For the Ancestral Maximum Likelihood (AML) problem we give a lower bound of 1577/1576. Furthermore we construct a polynomial-time approximation scheme A_epsilon for the AML problem, such that for each epsilon > 0, A_epsilon is a polynomial-time approximation algorithm with ratio (1+epsilon)*(1+ln(3)/2). This result is based on the Steiner tree algorithm of Robins and Zelikovsky and on a new exact algorithm for AML instances of constant size. This improves upon recent results by Alon et al. (2008), who gave a 1.78-approximation algorithm for the AML problem.

(joint work with Marlis Lamp)

Date: 13. Jan 2009 17:15

Location: Seminar Room N327

Combinatorial property testing deals with the following relaxation of decision problems: Given a fixed property and an input x, one wants to decide whether x satisfies the property or is "far" from satisfying it. The main focus of property testing is in identifying large families of properties that can be tested with a certain number of queries to the input. In this paper we study the relation between the space complexity of a language and its query complexity. Our main result is that for any space complexity s(n) <= log n there is a language with space complexity O(s(n)) and query complexity 2^{Omega(s(n))}. Our result has implications with respect to testing languages accepted by certain restricted machines. Alon et al. [FOCS 1999] have shown that any regular language is testable with a constant number of queries. It is well known that any language in space o(log log n) is regular, thus implying that such languages can be so tested. It was previously known that there are languages in space O(log n) that are not testable with a constant number of queries, and Newman [FOCS 2000] raised the question of closing the exponential gap between these two results. A special case of our main result resolves this problem, as it implies that there is a language in space O(log log n) that is not testable with a constant number of queries. It was also previously known that the class of testable properties cannot be extended to all context-free languages. We further show that one cannot even extend the family of testable languages to the class of languages accepted by single counter machines.

Date: 11. Nov 2008 17:15

Location: Seminar Room N327

The presentation is about k-means and Euclidean k-median clustering of moving objects. The problem of motion clustering is formalized in the same way as by Sariel Har-Peled [1]: The objects are n points in d-dimensional real space, whose motion is given by functions that are polynomial in time. The degree of these functions is bounded by some constant mu. The objective is to calculate a clustering that guarantees an approximation factor to the optimal k-clustering at each point in time. The method introduced by Har-Peled is applied to k-means and k-median clustering, and O(1)-approximation algorithms, which are polynomial in n, are introduced. Furthermore, it is shown how to construct coresets for k-median and k-means motion clustering problems. [1] Sariel Har-Peled: Clustering Motion, Discrete & Computational Geometry, Volume 31, Issue 4 (March 2004), pp. 545ff.

Date: 28. Oct 2008 17:15

Location: Seminar Room N327, Roemerstr. 164

In this talk I will report on recent ideas of algorithmic consistency developed in the context of our SFB proposal.

Date: 24. Sep 2008 17:15

Location: N327

We define a transitive-closure spanner of a directed graph. Given a directed graph G = (V;E) and an integer k>0, a k-transitive-closure-spanner (k-TC-spanner) of G is a directed graph H = (V;E') that has (1) the same transitive-closure as G and (2) diameter at most k. These spanners were studied implicitly in access control, property testing, and data structures, and properties of these spanners have been rediscovered over many years. We abstract the common task implicitly tackled in these applications as the problem of approximating the size of the sparsest k-TC-spanner of a given digraph, and obtain the following results. We give an O~(n^{1-1/k})-approximation algorithm for k > 2. Our method, based on convex programming and sampling, yields the first sublinear approximation ratios for DIRECTED k-SPANNER, a well-studied generalization of k-TC-SPANNER, and its variants. This resolves the main open question of Elkin and Peleg (IPCO, 2001). The second algorithm shows that for k = Omega (sqrt{n}), our problem has a provably better approximation ratio than DIRECTED k-SPANNER, resolving an open question of Hesse (SODA, 2003). Finally, we show that every H-minor-free digraph (e.g., planar or bounded genus/tree-width) has a k-TC-spanner of size n polylog n. Given that 2-TC-spanners yield monotonicity testers, we thus obtain an O(log^2 n/epsilon)-query tester for any poset whose transitive reduction is an H-minor free digraph. This improves the previous Theta(sqrt{n} log n/ epsilon)-query tester of Fischer et al (STOC, 2002). On the negative side, we resolve the approximability of 2-TC-spanners, showing that it is Theta(log n) unless P=NP. For constant k>2, we prove that the size of the sparsest k-TC-spanner is hard to approximate within 2^ {log^{1-epsilon} n}, for any epsilon > 0, under standard complexity assumptions. 
Our hardness result explains the difficulty in designing efficient solutions for the applications above, and it cannot be improved without resolving a long-standing open question in complexity theory. It uses generalized butterfly and broom graphs, as well as noise-resilient transformations of hard problems, which may be of independent interest. Joint work with Arnab Bhattacharyya (MIT), Elena Grigorescu (MIT), Kyomin Jung (MIT), and Sofya Raskhodnikova (Penn State). To appear in SODA, 2009.

Date: 09. Sep 2008 17:15

Location: N327

A generic technique for deriving approximation and online algorithms for graph problems consists of first approximating a given graph G by a tree network T, and then solving problems on T instead of solving them on the original graph G. This technique has proved very successful for distance-based cost-measures where it can be shown that a shortest path metric on a graph can be embedded into a probability distribution over dominating trees with distortion O(log n). In this talk we review this technique for so-called cut-based cost-measures that aim at approximating the bottlenecks in a graph instead of point-to-point distances. We show that there exists a distribution over trees that approximates the cut-structure of the graph with an approximation guarantee of O(log n). This in turn gives improved oblivious routing schemes and an improved approximation guarantee for the Minimum Bisection problem.

Date: 27. Aug 2008 17:00

Location: N327

In the classic minimum makespan scheduling problem, we are given an input sequence of jobs with processing times. A scheduling algorithm has to assign the jobs to m parallel machines. The objective is to minimize the makespan, which is the time it takes until all jobs are processed. In this talk, we consider online scheduling algorithms without preemption. However, we do not require that each arriving job has to be assigned immediately to one of the machines. A reordering buffer with limited storage capacity can be used to reorder the input sequence in a restricted fashion so as to schedule the jobs with a smaller makespan. This is a natural extension of lookahead. We present an extensive study of the power and limits of online reordering for minimum makespan scheduling. As our main result, we give, for m identical machines, tight and, in comparison to the problem without reordering, much improved bounds on the competitive ratio for minimum makespan scheduling with reordering buffers.
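To see how a reordering buffer helps, consider the following hypothetical heuristic (a sketch of ours, not the algorithm from the talk): hold up to b jobs in a buffer and always release the largest buffered job to the currently least-loaded machine, approximating largest-processing-time order.

```python
import heapq

def schedule_with_buffer(jobs, m, b):
    """Makespan of a simple buffer heuristic on m identical machines.

    Keeps up to b jobs buffered; whenever the buffer overflows (and at the
    end), releases the largest buffered job to the least-loaded machine.
    Illustrative only; not the talk's algorithm.
    """
    loads = [0] * m            # min-heap of machine loads
    heapq.heapify(loads)
    buf = []                   # max-heap via negated processing times

    def release():
        largest = -heapq.heappop(buf)
        heapq.heappush(loads, heapq.heappop(loads) + largest)

    for p in jobs:
        heapq.heappush(buf, -p)
        if len(buf) > b:
            release()
    while buf:
        release()
    return max(loads)
```

On the sequence 1, 1, 1, 3 with two machines, a buffer of size 4 lets the heuristic place the big job first and reach the optimal makespan 3, whereas with a buffer of size 1 (essentially no reordering) it ends at makespan 4, illustrating the power of reordering that the talk quantifies.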

Date: 15. Jul 2008 17:15

Location: Seminar Room N327, Roemerstr. 164

Given a point set P in d-dimensional space and an integer j > 0, the j-flat mean clustering problem is to find a flat OPT of dimension j such that the sum of squared distances between the points q in P and OPT is minimized. In this paper we show that every point set P has a weak coreset of size O(2^{j^2}) for this problem. A weak coreset S is a weighted set, not necessarily a subset of P, together with a set T such that T contains a (1+eps)-approximation of OPT and, for every j-dimensional flat F in T, the cost of F for S is a (1+eps)-approximation of the cost of F for P. We apply our weak coreset to obtain a PTAS for the one j-flat mean problem with running time O(n d j^2 + d 2^{j^6}). We also show that this problem can be solved in the one-pass data stream scenario using space O(d log^6 n 2^{j^2}).

Date: 10. Jul 2008 17:15

Location: Seminar Room N327, Roemerstr. 164

The k-means algorithm is one of the most popular algorithms for clustering high-dimensional data into k classes. However, its good performance in practice is in stark contrast to its theoretical performance: In the worst case, the running time can be exponential. The goal of smoothed analysis is to provide an explanation for the good practical performance of algorithms with poor worst-case performance. We present a smoothed analysis of the k-means algorithm: An adversary specifies a point set, and then the points are perturbed by Gaussian noise with standard deviation sigma. We prove two bounds on the smoothed running time of the k-means algorithm: Our first bound is polynomial in n^{sqrt(k)} and 1/sigma, where n is the number of points. Our second bound is k^{kd} * poly(n, 1/sigma), where d is the dimension of the instance. This yields a polynomial bound if both k and d are in O(sqrt(log n / log log n)). Finally, we prove that k-means has polynomial smoothed running time for one-dimensional instances.
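The algorithm being analyzed is the classic Lloyd iteration; the smoothed bounds above concern the number of rounds of exactly this loop. A minimal self-contained sketch:

```python
def kmeans(points, centers):
    """Plain k-means (Lloyd) iteration on tuples of coordinates.

    Alternates two steps until the assignment stops changing:
    (1) assign each point to its nearest center (squared Euclidean distance),
    (2) move each center to the mean of its cluster.
    Returns the final centers and the assignment of points to centers.
    """
    def nearest(p):
        return min(range(len(centers)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(p, centers[i])))

    assignment = None
    while True:
        new_assignment = [nearest(p) for p in points]
        if new_assignment == assignment:
            return centers, assignment
        assignment = new_assignment
        for i in range(len(centers)):
            cluster = [p for p, a in zip(points, assignment) if a == i]
            if cluster:
                centers[i] = tuple(sum(c) / len(cluster)
                                   for c in zip(*cluster))
```

Each round strictly decreases the clustering cost, so the loop terminates; the point of the talk is how many rounds that takes, which is exponential for adversarial inputs but polynomial after small Gaussian perturbations.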

Date: 08. Jul 2008 17:15

Location: Seminar Room N327, Roemerstr. 164

Abstract Voronoi diagrams were designed as a unifying concept that should cover as many types of concrete Voronoi diagrams as possible. They are based on a system of bisecting curves, rather than on the notion of sites and distance. These curves must satisfy certain combinatorial axioms. One of these axioms was that any two curves intersect only finitely often. But T. Recio and his group provided an example of a norm in the plane that violates this requirement and was, therefore, not covered by the AVD concept. In this talk we provide a new foundation of abstract Voronoi diagrams that works without the finiteness assumption (and contains all norms, among many other metrics).

Date: 01. Jul 2008 17:15

Location: Seminar Room N327, Roemerstr. 164

Suppose you ran a chess tournament, everybody played everybody, and you wanted to use the results to rank everybody. Unless you were really lucky, the results would not be acyclic, so you could not just sort the players by who beat whom. A natural objective is to find a ranking that minimizes the number of upsets, where an upset is a pair of players where the player ranked lower on the ranking beats the player ranked higher. This is the minimum feedback arc set (FAS) problem on tournaments. Our main result is a polynomial time approximation scheme (PTAS) for this problem. A simple weighted generalization gives a PTAS for Kemeny-Young rank aggregation. This work was published at STOC 2007; see http://www.cs.brown.edu/~ws/tournament2.pdf for the paper.
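The objective can be made concrete by counting upsets of a ranking and brute-forcing the minimum over tiny tournaments; the talk's PTAS is precisely about avoiding this exponential search (the helper names below are ours).

```python
from itertools import permutations

def upsets(ranking, beats):
    """Number of upsets: pairs where the lower-ranked player beat the
    higher-ranked one. `beats` is a set of (winner, loser) pairs."""
    return sum(
        1
        for i, high in enumerate(ranking)
        for low in ranking[i + 1:]
        if (low, high) in beats
    )

def min_fas_ranking(players, beats):
    """Brute-force minimum feedback arc set ranking; tiny inputs only."""
    return min(permutations(players), key=lambda r: upsets(r, beats))
```

In the cyclic tournament where 0 beats 1, 1 beats 2, and 2 beats 0, every ranking has at least one upset, so the brute-force optimum is 1; no acyclic ordering exists, which is exactly why the problem is nontrivial.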

Date: 24. Jun 2008 17:15

Location: Seminar Room N327, Roemerstr. 164

We present a deterministic kinetic data structure for the facility location problem that maintains a subset of the moving points as facilities such that, at any point of time, the accumulated cost for the whole point set is at most a constant factor larger than the current optimal cost. At any point of time, the cost that arises for a point depends on the status and the position of the point. In the case that a point is currently a facility it causes maintenance cost, otherwise it causes connection cost depending on its demand and its distance to the nearest facility. In our scenario, each point can change its status between client and facility and moves continuously along a known trajectory in a d-dimensional Euclidean space, where d is a constant. (joint work with Bastian Degener and Joachim Gehweiler)

Date: 17. Jun 2008 17:15

Location: Seminar Room N327

We will discuss an adaptation of Fuerer's (2007) algorithm for multiplying two N-bit integers. The algorithm uses arithmetic modulo prime powers (which we call modular arithmetic) as opposed to arithmetic over complex numbers in Fuerer's algorithm. One advantage of using modular arithmetic is that, by choosing a prime carefully, certain precision analysis associated with the algorithm can be made very simple. However, choosing an appropriate prime is a costly operation, which fortunately can be made easy by considering the Fast Fourier Transform (FFT) over multivariate polynomials. All previous algorithms used the FFT over univariate polynomials. Our algorithm can also be viewed as a p-adic version of Fuerer's algorithm and achieves a running time of O(N log N 2^{O(log* N)}) bit operations.

Date: 10. Jun 2008 17:15

Location: N327, Roemerstr. 164

We consider the problem of determining suitable meeting times and locations for a group of participants wishing to schedule a new meeting, subject to already scheduled meetings possibly held at a number of different locations. Each participant must be able to reach the new meeting location, attend for the entire duration, and reach the next meeting location on time. In particular, we give a solution to the problem instance where each participant has two scheduled meetings separated by a free time interval. Our approach uses the concept of LP-type problems and leads to a randomized algorithm with expected running time O(n).

Date: 03. Jun 2008 17:15

Location: Seminar Room N327, Roemerstr. 164

We investigate a generalisation of the (longest) common subsequence problem called consensus pattern. A consensus pattern for a string s is a sequence of strings that occur in s in the given order without overlapping. For a set S of strings, a consensus pattern is one that is a consensus pattern for every string of S. In this talk we will present some characteristics and a data structure that represents all consensus patterns for a set of strings.
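The defining condition — pieces occurring in order without overlap — can be checked by a greedy left-to-right scan. A minimal Python sketch (function names are illustrative, not from the talk):

```python
def occurs_in_order(pieces, s):
    """True if the strings in `pieces` occur in s in the given order
    without overlapping.  Greedy scanning suffices: taking each piece's
    leftmost feasible occurrence leaves maximal room for the rest."""
    pos = 0
    for p in pieces:
        i = s.find(p, pos)
        if i < 0:
            return False
        pos = i + len(p)  # next piece must start after this one ends
    return True

def is_consensus_pattern(pieces, strings):
    # A consensus pattern for a set S is one that occurs in every string of S.
    return all(occurs_in_order(pieces, s) for s in strings)

S = ["xabyccz", "abqcc", "ab1cc2"]
print(is_consensus_pattern(["ab", "cc"], S))  # True
print(is_consensus_pattern(["cc", "ab"], S))  # False (wrong order)
```

The talk's data structure represents all such patterns at once; the sketch only verifies a single candidate.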

Date: 27. May 2008 17:15

Location: N327

The graph isomorphism problem resists classification in terms of computational complexity. There is strong evidence that it is not NP-complete, yet nobody seems to have even an idea for a polynomial time algorithm, and indeed none might exist. Still, much progress has been made over the last thirty years. The subclasses of graphs with bounded genus, bounded eigenvalue multiplicity, and bounded degree have all been shown to be testable in polynomial time. The methods come from combinatorics, the theory of finite groups, linear algebra, and mathematical logic. The most natural combinatorial method for the graph isomorphism problem is the k-dimensional Weisfeiler-Lehman method. The aim is to classify the k-tuples of vertices by their neighborhoods in order to obtain a classification of the vertices themselves. The initial hope of obtaining all known polynomial time isomorphism test results via the k-dimensional Weisfeiler-Lehman method has been shown to fail. There are some interesting classes of graphs on which this method is amazingly ineffective in two ways. Most importantly, sometimes k has to be of order n to succeed. Furthermore, when the method succeeds for a constant k, it is possible that a large (even though polynomial) number of iterations is required. On the other hand, the k-dimensional Weisfeiler-Lehman method is successful for graphs of bounded genus. There are many long-standing open problems. The question whether Graph Isomorphism is in P is the most obvious one. Could one at least improve the old upper bound for general graph isomorphism based on Zemlyachenko's method? For more than twenty years, this bound has been moderately exponential, with the square root of the number of vertices in the exponent. It would still be interesting to better understand the power of the combinatorial method in general and the k-dimensional Weisfeiler-Lehman method in particular. The latter has strong connections to games and the expressive power of certain restricted logics.
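To illustrate the combinatorial method, the 1-dimensional case of the Weisfeiler-Lehman method — plain color refinement — fits in a few lines. This is only a heuristic non-isomorphism test, not the full k-dimensional method discussed in the talk:

```python
from collections import Counter

def color_refinement(adj):
    """1-dimensional Weisfeiler-Lehman: iteratively refine vertex colors
    by the multiset of neighbor colors until the coloring stabilizes.
    `adj` maps each vertex to its list of neighbors.  Returns the stable
    color histogram, an isomorphism invariant."""
    color = {v: 0 for v in adj}
    while True:
        # new signature = (old color, sorted multiset of neighbor colors)
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
               for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: relabel[sig[v]] for v in adj}
        if new == color:
            return Counter(color.values())
        color = new

# A path on 3 vertices vs. a triangle: different histograms, so not isomorphic.
path = {0: [1], 1: [0, 2], 2: [1]}
tri  = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(color_refinement(path) != color_refinement(tri))  # True
```

Equal histograms prove nothing — color refinement notoriously fails on regular graphs (e.g. a 6-cycle vs. two triangles), which is exactly the kind of ineffectiveness the abstract alludes to for small k.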

Date: 06. May 2008 17:15

Location: N327, Roemerstr. 164

Identity Testing is the computational problem of checking whether a given arithmetic circuit computes the zero polynomial. Its deterministic complexity is unknown, and a solution would have important repercussions in complexity theory. In this talk I will motivate the problem and solve it in some special cases, especially when the depth of the circuit is just 3.
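While the deterministic complexity is open, the randomized version is simple: evaluate the circuit at random points and invoke the Schwartz-Zippel lemma. A sketch, under the illustrative assumption that the circuit is given as a Python function together with a bound on its total degree:

```python
import random

def is_zero_poly(circuit, nvars, degree_bound, trials=20):
    """Randomized polynomial identity test (Schwartz-Zippel): a nonzero
    polynomial of total degree <= d vanishes at a uniform point of an
    integer range of size > 100*d with probability < 1/100, so repeated
    random evaluation detects nonzero circuits with high probability."""
    for _ in range(trials):
        point = [random.randint(0, 100 * degree_bound) for _ in range(nvars)]
        if circuit(*point) != 0:
            return False  # witness found: definitely not the zero polynomial
    return True  # vanished on all samples: zero with high probability

# (x+y)^2 - (x^2 + 2xy + y^2) is identically zero; (x+y)^2 - x^2 - y^2 = 2xy is not.
print(is_zero_poly(lambda x, y: (x + y)**2 - (x*x + 2*x*y + y*y), 2, 2))  # True
print(is_zero_poly(lambda x, y: (x + y)**2 - x*x - y*y, 2, 2))  # False (w.h.p.)
```

A "False" answer is always correct; "True" is wrong only with negligible probability. Derandomizing exactly this test is the open problem the talk addresses.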

Date: 29. Apr 2008 17:15

Location: N327, Roemerstr. 164

We consider the problem of testing expansion in bounded degree graphs. In this problem we are given access to a bounded degree graph G and we want to approximately decide, with a sublinear number of queries to G, whether G is an expander graph: we want to accept G with probability 2/3 if G is an expander, and reject G with probability 2/3 if G is far away from every expander. We prove that an algorithm proposed by Goldreich and Ron (2000) is indeed a property tester for expansion. (joint work with Artur Czumaj)

Date: 22. Apr 2008 17:15

Location: N327, Roemerstr. 164

We construct the first constant time approximation schemes (CTASs) for Metric and Quasi-Metric CSP problems in a preprocessed model of computation. They also entail the first sublinear time approximation schemes for the above optimization problems.

Date: 30. Jan 2008 13:15

Location: N327

An instance G=(V,E), S subset of V, of the Steiner Tree Problem in graphs is called epsilon-dense if every terminal s in S has at least epsilon*|V\S| neighbors in V\S. Karpinski and Zelikovsky (1997) constructed the first PTAS for the epsilon-Dense Steiner Tree Problem. An EPTAS for the problem was given by Hauptmann (2004).

In this talk we give a survey and some new results on the approximability of Dense Steiner Tree and Forest Problems. In particular we construct a PTAS for dense instances of the k-Steiner Forest Problem where one has to cover a set of terminals with a forest consisting of at most k connected components.

Date: 23. Jan 2008 13:15

Location: N327

We present some new results on sample complexity and approximability of general instances of MAX-CUT and MAX-CSP extending the results known only for dense instances of those problems.

Date: 28. Nov 2007 12:45

Location: N327

In cryptography, we need encryption schemes that are secure and efficient. More such secure systems can be created using generic constructions. Also, to provide a security proof, a reasonable intractability assumption must be reduced to an attack against the scheme.

In this talk, we will discuss two new generic constructions that convert a weakly secure Key Encapsulation Mechanism (KEM) into a highly secure encryption scheme in the public-key setting. We also define a new computational assumption and describe two new Identity Based Encryption schemes that are provably secure and based on the Sakai-Kasahara key construction.

Date: 21. Nov 2007 13:15

Location: N327

In contrast to standard scheduling problems, in load balancing games there is no authority that controls how the jobs are distributed among the servers. Instead, each job is controlled by a single player that selfishly selects one of the available servers.

In this talk we give an introduction to load balancing games. We will discuss standard models of load balancing games and provide an overview of extensions of these models.

Date: 31. Oct 2007 13:15

Location: N327

A pride of lions is prowling among the vertices and edges of an n x n grid. If their paths are known in advance, is it possible to design a safe path for a man that avoids all lions, assuming that man and lions move at the same speed? In their recent paper, Dumitrescu et al. employed probabilistic arguments to show that the number of lions which can always be avoided, k(n), lies in Omega(sqrt(n)). They raised the question whether k(n) even lies in Omega(n).

Using a proof technique quite different from theirs, we give a positive answer: even n/2 lions can be avoided in dimension 2. However, up to order of magnitude, there is no escaping from Theta(n^(d-1)/sqrt(d)) lions on the d-dimensional grid.

Date: 12. Jul 2006 13:15

Location: N327

Many organisms share functional similarities. These similarities can often be found in the genetic structure of these organisms, too. By comparing the functional to the genetic similarities, one can conclude which part of a gene controls a specific function. The genetic similarities manifest in so-called consensus patterns, and calculating these patterns is one objective of bioinformatics. The usual approach to finding a pattern in a set of genetic strings is to calculate a multiple alignment of the strings that minimizes a given cost function; this way, many possibly important patterns are not recognized. In my talk I will follow a different approach that uses an iterative method to calculate all non-expandable consensus patterns for a given set of strings. Specifically, my talk deals with the first step of this iterative method: the calculation of all non-expandable consensus patterns for two given strings.

Date: 24. May 2006 13:15

Location: Seminar Room N327

We consider several variants of the Prize Collecting Steiner Tree Problem. In the original version considered by Goemans and Williamson (1995), we are given an instance of the Steiner Tree Problem with terminal set S and prizes p(s) for s in S, and the task is to construct a tree T minimizing the cost of T plus the sum of prizes of terminals that are not in T. This problem is well known to admit a 2-approximation algorithm based on the primal-dual method. In this talk we show how to apply the primal-dual approach to the variant where we are given prizes for subsets of terminals, and the prize for a subset has to be paid if the subset is not connected in the solution. Furthermore, we consider a variant in which the prizes of terminals depend on their distance to the solution tree T.

Date: 25. Jan 2006 13:15

Location: N327

In this talk we give a survey on quantitative methods in computational complexity, based on the notion of effective measure and effective Hausdorff dimension of J. Lutz.

In particular, we present the following new result: if NP does not have p-measure 0, then the set of minimal pairs in NP (w.r.t. polynomial-time reductions) does not have p-measure 0 either.

Date: 22. Dec 2005 15:00

Location: N327

The number partitioning problem is a classical combinatorial optimization problem: Given n numbers or weights, one is faced with the problem of partitioning this set of numbers into two subsets to minimize the discrepancy, defined as the absolute value of the difference in the total weights of the two subsets.
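For concreteness, the optimal discrepancy just defined can be computed by brute force over all subsets — exponential time, so for illustration on small instances only:

```python
from itertools import combinations

def min_discrepancy(weights):
    """Minimum discrepancy over all 2-partitions of `weights`:
    min over subsets A of |sum(A) - sum(complement of A)|.
    Brute force over subsets of size <= n/2 (the rest follow by symmetry)."""
    total = sum(weights)
    best = total
    for r in range(len(weights) // 2 + 1):
        for subset in combinations(weights, r):
            # discrepancy of the partition (subset, rest) = |total - 2*sum(subset)|
            best = min(best, abs(total - 2 * sum(subset)))
    return best

print(min_discrepancy([4, 5, 6, 7, 8]))  # 0: {4, 5, 6} vs {7, 8}
```

The talk concerns the joint distribution of all 2^{n-1} such discrepancies for random weights, not their computation; the sketch only pins down the cost function.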

Here we consider random instances of this problem where the n numbers are i.i.d. random variables, and we study the distribution of the discrepancies and the correlations between partitions with similar discrepancy. In spite of the fact that the discrepancies of the 2^{n-1} possible partitions are clearly correlated, a surprising recent conjecture states that the discrepancies near any given threshold become asymptotically independent, and that the partitions corresponding to these discrepancies become uncorrelated. In other words, the conjecture claims that near any fixed threshold, the cost function of the number partitioning problem behaves asymptotically like a random cost function.

In this talk, I describe our recent proof of this conjecture. I also discuss the analogous conjecture for spin glasses.

Date: 22. Dec 2005 14:00

Location: N327

We consider the spread of viruses on preferential attachment networks - a problem of relevance both to the spread of viruses and worms on the Internet, and to the spread of communicable diseases in a population. We consider the problem in which the virus may spread from a site to any neighboring site with some rate, and is spontaneously cured with another rate. We also make the assumption that sites may be re-infected - an assumption corresponding to rapidly mutating viruses or worms, which have recently been observed on the Internet and which can also occur in animal populations.

First we show that if the ratio of the infection rate to the curing rate is bounded below, then there will be an epidemic. In other words, the epidemic threshold is zero, in contrast to the epidemic threshold on networks with a bounded number of neighbors per site. Next, we consider various algorithms for distributing a fixed amount of antidote to control the spread of virus. We show that commonly used algorithms, in particular so-called contact tracing, are ineffective in controlling the epidemic. We also propose an alternative algorithm which does control the epidemic, in the sense that the epidemic threshold becomes positive, and show that this algorithm is best possible in the sense that no other algorithm is more than a constant factor better.

This talk represents several projects in collaboration with subsets of Noam Berger, Christian Borgs, Ayalvadi Ganesh, Amin Saberi and David Wilson.

Date: 14. Dec 2005 13:15

Location: N327

A typical university campus contains facilities like lecture halls, dorms, a library, a cafeteria, and supermarkets, which are connected by some path system. Students in a hurry are tempted to walk straight across the lawn, if the shortcut seems worth it. After a while, this causes new paths to appear. Since their intersections are frequented by many people, they attract coffee shops or other new facilities. Now, people will walk across the lawn to get quickly to a coffee shop, and so on.

Two natural questions arise:

(1) What happens to the lawn as this process continues?
(2) Could the original campus map be designed in such a way that the process never starts, because the advantage of walking straight is always negligible?

Since an answer to the first question is given in Sanaz Kamali's talk, I will concentrate on question (2).

Date: 07. Dec 2005 13:15

Location: N327

Given $S_1$, a starting set of points in the plane, we define a sequence of planar point sets $\{S_i\}$ as follows: with $S_i$ already determined, let $L_i$ be the set of all line segments connecting pairs of points of $\bigcup_{j=1}^i S_j$, and let $S_{i+1}$ be the set of intersection points of those line segments in $L_i$ which cross but do not overlap. We will show that, with the exception of some starting configurations, the limit point set $\bigcup_{i=1}^\infty S_i$ is dense in a particular subset of the plane with nonempty interior, which admits a simple description.

Date: 30. Nov 2005 13:15

Location: N327

In contrast to standard scheduling problems, in load balancing games there is no authority that controls how the jobs are distributed among the servers. Instead, each job is controlled by a single player that selfishly selects one of the available servers.

In this talk we give an introduction to the basic concepts of strategic games and their application to selfish loadbalancing. Furthermore we compare the cost of selfish server selection to the cost of a centralized optimum.

Date: 23. Nov 2005 13:15

Location: N327

While traditional auctions of a single object rarely pose difficult computational problems, computing the outcome of multi-object auctions often requires solving NP-hard optimization problems.

In this talk, we give an introduction to the basic concepts of auction theory and present multi-object auctions, the difficulties involved in solving them, as well as some algorithmic strategies for dealing with these problems.

Date: 16. Nov 2005 13:15

Location: N327

Given a set A of integers, the predecessor problem consists in finding, for an arbitrary integer x, the largest element a of A such that $a \leq x$. If x is smaller than all elements of A, a default value is returned.

In this talk we present a new static data structure for batched predecessor queries. In particular, our data structure supports $O(\sqrt{\log n})$ queries in $O(1)$ time per query and requires $O(n^{\varepsilon \sqrt{\log n}} )$ space for any $\varepsilon > 0$. Several modifications and generalizations of this result are also described. (Joint work with M. Karpinski)
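As a baseline for comparison, a single predecessor query on a sorted array takes O(log n) time by binary search; a minimal Python sketch (not the batched data structure from the talk):

```python
import bisect

def predecessor(sorted_A, x, default=None):
    """Largest a in sorted_A with a <= x, or `default` if every element
    exceeds x.  O(log n) per query via binary search -- the elementary
    baseline that specialized predecessor structures improve on."""
    i = bisect.bisect_right(sorted_A, x)  # number of elements <= x
    return sorted_A[i - 1] if i > 0 else default

A = [2, 5, 11, 17]
print(predecessor(A, 11))  # 11
print(predecessor(A, 10))  # 5
print(predecessor(A, 1))   # None
```

The point of batching is to answer a group of queries faster than this bound times the group size.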

Date: 09. Nov 2005 13:15

Location: N327

The metric dimension mdim((V,d)) of a finite metric space (V,d) is defined as the minimum cardinality of a subset U of V such that for any two points u,v in V, there is at least one point w in U such that d(u,w) \neq d(v,w). If the metric space is given by a distance matrix or by a graph representation, the problem of computing the metric dimension is essentially a special case of the Set Cover Problem. In this talk we give an approximation preserving reduction from the Set Cover Problem to the Metric Dimension Problem in graphs of diameter 2. This implies that it is NP-hard to approximate the Metric Dimension by a factor of o(log n), even for graphs and (1,2)-metric spaces. We will also discuss some special cases and applications to classification problems in Algorithmic Molecular Biology.

(joint work in progress with Andreas Bjoerklund and Claus Viehmann)
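The Set Cover view mentioned above directly suggests the standard greedy O(log n)-approximation. A small Python sketch, under the assumption that the metric is given as a distance matrix (names are illustrative):

```python
from itertools import combinations

def greedy_resolving_set(d, V):
    """Greedy approximation for the metric dimension of (V, d), viewed
    as Set Cover: a landmark w 'covers' a pair {u, v} if d(u,w) != d(v,w).
    `d` is a nested dict with d[u][v] the distance between u and v."""
    unresolved = {frozenset(p) for p in combinations(V, 2)}
    U = []
    while unresolved:
        # pick the vertex distinguishing the most still-unresolved pairs
        w = max(V, key=lambda w: sum(1 for p in unresolved
                                     if len({d[u][w] for u in p}) == 2))
        U.append(w)
        unresolved = {p for p in unresolved if len({d[u][w] for u in p}) == 1}
    return U

# Path on 4 vertices with d(u, v) = |u - v|: a single endpoint resolves everything.
d = {u: {v: abs(u - v) for v in range(4)} for u in range(4)}
print(greedy_resolving_set(d, list(range(4))))  # [0]
```

By the classical Set Cover analysis this yields an O(log n)-factor guarantee, and the reduction in the talk shows that, up to constants, no polynomial-time algorithm can do better.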

Date: 13. Jul 2005 13:05

Location: N328

Given a set of points $P \subset \mathbb{R}^d$, the range reporting problem is to find all points in $P \cap Q$ for an arbitrary $d$-dimensional rectangle $Q$.

In this talk we present new space-efficient dynamic data structures for orthogonal range reporting. Our results match the corresponding best upper space bounds of Chazelle (1988) for the static case. (This paper will appear in Proc. 21st ACM Symposium on Computational Geometry, 2005.)
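For reference, the trivial solution that such data structures improve on checks every point against the query box in O(dn) time per query; a few lines of Python (illustrative only):

```python
def range_report(points, lo, hi):
    """Naive orthogonal range reporting: return all points of P lying in
    the axis-aligned box [lo, hi] (inclusive in every coordinate).
    O(dn) per query with no preprocessing -- the baseline that range
    trees and their refinements beat with polylogarithmic query time."""
    return [p for p in points
            if all(l <= c <= h for c, l, h in zip(p, lo, hi))]

P = [(1, 1), (2, 5), (4, 3), (6, 2)]
print(range_report(P, (1, 2), (5, 5)))  # [(2, 5), (4, 3)]
```

The interesting trade-off, and the subject of the talk, is how little extra space a structure needs while still answering queries in roughly O(polylog n + output size) time.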

Date: 06. Jul 2005 13:15

Location: N328

Date: 29. Jun 2005 13:15

Location: N328

There are natural problems in NP that are not known to be in P but can be solved by polynomial time NTMs which make only a sublinear number of nondeterministic steps. Accordingly, the class NP[f(n)] consists of all problems in NP that can be solved by a polytime NTM which makes only O(f(n)) nondeterministic steps. In this talk we give a lower bound theorem for the amount of nondeterminism for problems in NP. It turns out that the gap between lower bound function and solvability can be made "arbitrarily" small. We show how this result can be used to obtain a non-uniform approximate intersection theorem, dealing with the intersection of classes NP[f_i(n)].

Date: 15. Jun 2005 13:15

Location: N328

This talk gives an overview of my PhD thesis, which is about material cost optimization in sheet metal stamping. The optimization is done by computing nested layouts for the blanks on the sheet metal coil. The work was motivated by the CoilNest project, in which we developed a fully automatic system for this task.

Consequently, the interaction of all contributing algorithms is one of the main results of the thesis. Besides this, and besides the adaptation of known techniques for some subproblems, the thesis contains new results on Minkowski sums, a new exact algorithm for a special case of the main nesting problem, and new nesting techniques based on symmetries.

Date: 25. May 2005 13:15

Location: N 328

The hidden subgroup problem drew substantial attention when a polynomial time quantum algorithm for it was found. The algorithm generalized the algorithms for factoring integers and finding discrete logarithms.

We consider the hidden subgroup problem in the context of quantum ordered read-once decision diagrams. Our presentation starts with some fundamental functions related to the hidden subgroup problem. These functions include Equality, Periodicity and simplified Simon. We show linear upper and lower bounds on the width of quantum OBDD representations of these functions.

In the second part of the presentation we show upper and lower bounds for the hidden subgroup test.

(Joint work with Farid Ablayev and Marek Karpinski)

Date: 26. Jan 2005 13:15

Location: N328

Since Feder and Motwani found a faster algorithm for the bipartite cardinality matching problem by using graph compression, there have been several attempts to transfer this result to the nonbipartite case. This was eventually achieved by Jungnickel and Fremuth-Paeger and by Goldberg and Karzanov in 2002. The algorithms used are quite complicated, as they were developed for the general skew-symmetric flow problem and then specialized to the case of matching.

In this talk we present a subgraph substitution procedure using modified superconcentrators which allows the direct application of the algorithm of Blum to the resulting graph.

Date: 19. Jan 2005 13:15

Location: N328

We survey recent developments and open problems concerning the Market Equilibrium Problem in Fisher and Arrow-Debreu markets with concave utility functions. We consider a class of concave utility functions satisfying a Lipschitz condition such that the demand of a buyer does not decrease if the price of the good is increased by other buyers. We compare this class to the gross-substitute functions and generalized logarithmic functions that have been considered before in the literature, and discuss possible extensions to non-additive separable functions and to segmented markets.

Date: 24. Nov 2004 13:15

Location: N328

We define the Simon and Periodicity Boolean functions. We consider the Equality function and its generalization that was used in the proof that the classes AC_0 and BPP-OBDD are incomparable. For all these problems we prove linear asymptotic complexity on Read-Once Quantum Branching Programs (quantum Ordered Binary Decision Diagrams).

Date: 17. Nov 2004 13:15

Location: N328

The concept of a Merkle tree, introduced by Ralph Merkle in 1979, has been used to build secure cryptographic authentication schemes from hash functions.

In this talk we describe optimal trade-offs between time and space complexity of Merkle tree traversals, improving on the previous results of Jacobson, Leighton, Micali, and Szydlo (2003) and Szydlo (2003). In particular, our algorithm requires $2 \log n/\log^{(3)} n$ hash function computations and storage for less than $(\log n/\log^{(3)} n + 1)\log\log n + 2 \log n$ hash values, where $n$ is the number of leaves in the Merkle tree. We also prove that these trade-offs are optimal, i.e., there is no algorithm that requires less than $O(\log n/ \log t)$ time and less than $O(t\log n/\log t)$ space for any choice of parameter $t \geq 2$.

Our algorithm could be of special use in the case when both time and space are limited. (Joint work with P. Berman and M. Karpinski.)
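For background, computing the root of a Merkle tree — the single value that authenticates all leaves — is straightforward; a minimal Python sketch using SHA-256, assuming a power-of-two number of leaves (the talk's trade-offs concern traversal, i.e. producing successive authentication paths, not this root computation):

```python
import hashlib

def merkle_root(leaves):
    """Merkle root of a list of byte-string leaves: hash every leaf,
    then repeatedly hash adjacent pairs until one digest remains.
    Assumes len(leaves) is a power of two."""
    level = [hashlib.sha256(x).digest() for x in leaves]
    while len(level) > 1:
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"a", b"b", b"c", b"d"])
print(root.hex())
```

A traversal algorithm must output, for each leaf in turn, the sibling hashes along its root path; the time/space trade-off above measures how cheaply those paths can be produced without storing the whole tree.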

Date: 10. Nov 2004 13:15

Location: N328

Given a context-free grammar G satisfying certain constraints, and a word w from the Kleene closure of the grammar's terminal alphabet, an LR(k) parser attempts to construct a rightmost derivation for w with respect to the rules of G, or signals failure if w \not\in L(G). The constraints on the grammar are such that at each derivation step, the parser is able to uniquely determine its next action by examining only the string derived so far and the next k input symbols.

In this talk we present a parsing algorithm whose precomputed information is bounded only by a polynomial in the size of the grammar, where the "LR(k) table" of the canonical method needs exponential space in the worst case (k being fixed).

Date: 27. Oct 2004 13:15

Location: N328

In this talk we consider the following market equilibrium problem: we are given a finite set of buyers B and a finite set of goods A, each good a available in amount b_a. Each buyer b \in B has an initial endowment e_b and concave utility functions u_{b,a}, a \in A. The task is to compute prices p_a for the goods such that the market clears and each buyer b obtains an optimum bundle of goods (i.e., one maximizing his utility under the given prices p). This problem goes back to Irving Fisher (1891) and is nowadays one of the fundamental computational problems in economics and algorithmic game theory.

We will give a survey on the current status of the market equilibrium problem with special emphasis put on the linear case. Furthermore we present a new approximation scheme for a special class of concave utility functions.

Date: 14. Jul 2004 13:15

Location: N328

The Steiner Tree Problem asks for a minimum cost tree T connecting some given set of vertices S (called the set of terminals) in an edge-weighted graph G=(V,E), c: E -> R. In this talk we consider the epsilon-dense version of the Steiner Tree Problem, where the edge costs are all equal to 1 and we have the density condition that every s \in S has at least \epsilon \cdot |V \setminus S| neighbors in V \setminus S. For this problem, Karpinski and Zelikovsky obtained a PTAS in 1997. We apply their methods to obtain polynomial time approximation schemes for the epsilon-dense versions of the Group Steiner Tree Problem, the Prize Collecting Steiner Tree Problem and the k-Steiner Tree Problem. For the average-dense versions of these problems we prove APX-hardness. Furthermore, we consider extensions of the approximation schemes which turn out to be efficient, meaning that the dependence of the running time on the precision appears only as a multiplicative factor, not in the exponent of the input size.

Date: 09. Jun 2004 13:15

Location: N328

How efficiently can we search an unknown environment for a goal in unknown position? How much would it help if the environment is known in advance? We answer these questions for simple polygons and for general graphs, by providing online search strategies that are as good as the best offline search algorithms, up to a constant factor. For other settings we can prove that no such constant competitive online approximation exists.

Date: 12. May 2004 13:15

Location: N328

Let X be some NP optimization problem. A polynomial time approximation scheme for X is an algorithm A which, on input x, eps with eps > 0, computes a (1+eps)-approximate solution to instance x of problem X, and such that for each fixed eps > 0 the running time is polynomially bounded in |x|. Equivalently, the running time bound is O(|x|^{f(1/eps)}) for some function f(1/eps).

The approximation scheme is called efficient if instead the running time is bounded by O(g(1/eps) |x|^c) for some function g and some constant c. The corresponding classes of optimization problems are usually denoted PTAS and EPTAS. The assumption P \neq NP is not known to imply EPTAS \neq PTAS. In 1997, Cesati and Trevisan proved EPTAS \neq PTAS under a stronger assumption from fixed parameter complexity theory, namely FPT \neq W[P].

We prove EPTAS \neq PTAS under a different assumption, namely the existence of problems in NP \setminus P with a superpolynomial lower bound on their deterministic time complexity. Furthermore, we are able to separate these two assumptions by an oracle construction, implying that relativizing proof techniques cannot show that our assumption implies theirs.
