University of Bonn | Department of Computer Science | Chair V
RAND-Seminar/Oberseminar Theoretische Informatik
Wednesday, December 16th 2015, 10:00
Lecture Room III.03, LBH
Iuliana Ciocanea Teodorescu
(University of Leiden)
The Module Isomorphism Problem for Finite Rings and Related Results
Abstract
Let R be a finite ring and let M, N be two finite left R-modules. We present two distinct deterministic
algorithms that decide in polynomial time whether or not M and N are isomorphic, and if they are,
exhibit an isomorphism. As by-products, we are able to determine the largest isomorphic common direct
summand between two modules and the minimum number of generators of a module. By not requiring R to
contain a field, avoiding computation of the Jacobson radical and not distinguishing between large
and small characteristic, both algorithms constitute improvements to known results. If time allows,
we will also briefly discuss some applications of these algorithms, and the related problem of
algorithmically approximating the Jacobson radical of a finite ring.
RAND-Seminar/Oberseminar Theoretische Informatik
Tuesday, November 25th 2014, 17:15
Seminar Room II.57
Adrian Kosowski
(Universite Paris Diderot)
Time-space tradeoffs for graph exploration and s-t connectivity
Abstract
In this talk we consider the task of designing a walker (also known as an
agent or a walking automaton), whose goal is to explore all the nodes of an
unknown graph. We focus on a model of a walker which is not capable of storing
any information on the nodes of the graph, but which is allowed to carry
around the graph a small amount of local memory, whose size typically ranges
from O(1) to O(log n) bits. We exhibit tradeoffs between a number of
parameters of such an exploration process: the available memory space, the
time required for exploration, and the knowledge of global parameters of the
system. The applied approaches rely on a variety of subroutines, both
randomized (biased random walks) and deterministic (universal traversal
sequences). We also discuss implications of the obtained results for the
undirected s-t connectivity problem (USTCON) in the standard RAM model of
computation.
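As a toy illustration of the exploration model (not of the talk's algorithms), the following sketch simulates the simplest such walker, a plain random walk that writes nothing on the nodes and only counts its own steps; the function and variable names are illustrative assumptions.

```python
import random

def random_walk_cover_steps(adj, start=0, limit=10**6):
    """Simulate a walker that stores nothing at the nodes of the
    unknown graph and performs a plain random walk, counting the
    steps until every node has been visited at least once.
    `adj` maps each node to a list of its neighbors."""
    random.seed(0)  # fixed seed so the illustration is reproducible
    unvisited = set(adj) - {start}
    node, steps = start, 0
    while unvisited and steps < limit:
        node = random.choice(adj[node])  # move to a uniform random neighbor
        unvisited.discard(node)
        steps += 1
    return steps if not unvisited else None  # None: step limit exceeded
```

Even this memoryless strategy explores any connected undirected graph in expected polynomial time; the talk's tradeoffs concern how much faster one can go with a few bits of memory or some global knowledge.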
RAND-Seminar/Oberseminar Theoretische Informatik
Tuesday, August 5th 2014, 17:15
Seminar Room II.57
Michael Fellows
(Charles Darwin University)
Overview of Multivariate Algorithmics
RAND-Seminar/Oberseminar Theoretische Informatik
Thursday, July 31st 2014, 15:00
Seminar Room II.57
Eli Shamir
(Hebrew University, Jerusalem)
Dissection Rotation Schemes Enhancing Parsing and Ambiguity Resolution
RAND-Seminar/Lecture Series "Algorithms in Bioinformatics"
Thursday, December 19th 2013, 17:15
Hörsaal, B-IT, Dahlmannstraße 2
Mikael Gast
(University of Bonn)
Approximation Algorithms for Combinatorial Optimization Problems on Power Law Networks and Some Applications
Abstract
One of the central tasks in the analysis of large scale networks is
the identification of a set of key nodes such as to reach or to affect
the whole network from this - ideally small - set. Areas of
application range from the World Wide Web and the Internet to various
social and biological networks.
In this talk we present new results on the approximability of some
fundamental optimization problems on such networks. Some of the
results are state of the art (sometimes optimal) on those networks.
We also give a short survey of the relatively new concept of hyperbolic
networks and its applicability in the analysis of real-world
large scale networks.
(joint work with Mathias Hauptmann and Marek Karpinski)
RAND-Seminar/Oberseminar Theoretische Informatik
Tuesday, July 16th 2013, 17:15
Seminar Room II.57
Igor Shparlinski
(University of New South Wales)
On the hidden shifted power problem
Abstract
We consider the problem of recovering a hidden element s of a finite field
F_q of q elements from queries to an oracle that for a given x \in F_q returns
(x+s)^e for a given divisor e|q-1. This question is motivated by some
applications to pairing based cryptography.
Using Lagrange interpolation one can recover s in time eq^{o(1)} on a classical
computer. In the case e = (q - 1)/2 an efficient quantum algorithm has been
given by W. van Dam, S. Hallgren and L. Ip.
We describe some techniques from additive combinatorics and analytic number
theory that lead to more efficient classical algorithms than the naive
interpolation algorithm, for example, they use substantially fewer queries to
the oracle. We formulate some questions and discuss whether quantum algorithms
can give further improvement.
(joint work with Jean Bourgain, Moubariz Garaev and Sergei Konyagin)
RAND-Seminar/ Master Seminar Computational Complexity
Friday, May 17th 2013, 9:15
Seminar Room II.57
Mathias Hauptmann
(University of Bonn)
Approximability of Generalized Steiner Tree Problems
Abstract
In this talk we give a survey on some of the most important generalizations of
the Steiner Tree Problem. We give an introduction to the Primal-Dual Method and
explain how it is used in the construction of approximation algorithms
for the Steiner Forest Problem, the Prize Collecting Steiner Forest Problem and
related problems.
Master-/RAND-Seminar
Monday, April 29th 2013, 10:15
Seminar Room II.57
Johannes Mittmann
(University of Bonn)
Polynomial Identity Testing
Abstract
Polynomial Identity Testing (PIT) is the computational problem of deciding whether a given arithmetic circuit computes the zero polynomial. Efficient randomized PIT algorithms are known, but deterministic polynomial-time algorithms have so far been found only for restricted circuit classes. In this talk we give an introduction to PIT and highlight some recent developments in this area, with a focus on methods featuring the themes of linear and algebraic independence.
The presented material comprises joint work with Malte Beecken, Nitin Saxena, and Peter Scheiblechner.
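The standard randomized approach to PIT rests on the Schwartz-Zippel lemma: a nonzero n-variate polynomial of total degree d vanishes at a random point of S^n with probability at most d/|S|. A minimal black-box sketch (the function names, trial count and prime modulus are illustrative choices, not from the talk):

```python
import random

def probably_zero(poly, nvars, trials=20, p=10**9 + 7):
    """Black-box randomized identity test: evaluate `poly` (a callable
    taking a tuple of integers) at random points mod a prime p.
    A single nonzero evaluation proves the polynomial is nonzero;
    if every trial vanishes, it is zero with high probability
    by the Schwartz-Zippel lemma."""
    random.seed(0)  # fixed seed so the illustration is reproducible
    for _ in range(trials):
        point = tuple(random.randrange(p) for _ in range(nvars))
        if poly(point) % p != 0:
            return False  # witness found: definitely not the zero polynomial
    return True           # identically zero with high probability
```

For instance, (x+y)^2 - x^2 - 2xy - y^2 is accepted as identically zero, while xy - 1 is almost surely rejected after the first evaluation.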
Master-/RAND-Seminar
Monday, February 25th 2013, 10:15
Seminar Room II.57
Maximilian Lorse
(University of Bonn)
Approximation Complexity of Gale-Berlekamp Games and Related Minimization Problems
Abstract
We study minimum Constraint Satisfaction Problems, especially the Gale-Berlekamp Switching Game, which can be formulated
as a Nearest Codeword Problem (NCP) and is equivalent to computing the matrix rigidity of {-1,1}-matrices. We will show
that the GB Game is NP-hard. We also apply the linear time approximation scheme of Karpinski and Schudy to it and generalize it to a wider class
of fragile dense minimum Constraint Satisfaction Problems, including the Nearest Codeword Problem and Unique Games
Problems.
Master-/RAND-Seminar
Monday, November 5th 2012, 15:15
Seminar Room II.57
Andrea Munaro
(University of Bonn)
New Bounds for VC-Dimension and epsilon-Nets
Special Lecture
Hausdorff Colloquium Lecture
Wednesday, June 20th 2012, 18:00
Mathematik-Zentrum, Lipschitz Lecture Hall, Endenicher Allee 60
Steve Smale
(Hong Kong/Berkeley)
Amino Acid Chains and Peptide Binding Predictions
Abstract
We will give some mathematical foundations for immunology
via a geometry of kernels and metric spaces, finite spaces, and
apply that to spaces associated to genes.
RAND-Workshop on
Association Schemes and Polynomial Factoring
December 7th-9th, 2011
Room II.57, Friedrich-Ebert-Allee 144
Wednesday, December 7th, 2011
10:00-11:00  Paul-Hermann Zieschang (University of Texas)
Hypergroups, Association Schemes, Buildings
Abstract. An association scheme is a complete graph with a coloring satisfying a certain regularity condition. Cayley graphs of groups are examples and show that each group can be viewed as an association scheme. Buildings were introduced by Jacques Tits in order to associate to each simple algebraic group over an arbitrary field a geometry, in exactly the same way as the finite plane of order 2 (the Fano plane) is associated to the simple group L_3(2) of order 168. In my talk I will first show that, in the theory of association schemes, the class of buildings plays exactly the same role that the class of Coxeter groups plays in group theory. After that, I will show that the notion of a hypergroup is the concept which best explains the relationship between association schemes and buildings.
11:30-12:30  Sergei Evdokimov (Steklov Institute of Mathematics St. Petersburg)
Factorization of polynomials over finite fields and odd association schemes
Abstract. To each separable split polynomial over a finite field we associate an odd association scheme on the set of its roots. Based on this and on the theory of multidimensional extensions of schemes, a new deterministic algorithm is constructed that decomposes a polynomial of degree n over a field of cardinality q into irreducible factors (a version of the author's quasipolynomial factorization algorithm). Its running time (assuming GRH) is polynomial in n^{b_n} and log(q), where b_n is the maximum base number of a primitive odd association scheme on at most n points.
Thursday, December 8th, 2011
10:00-11:00  Nitin Saxena (Bonn)
Combinatorial Schemes in Algebraic Algorithms
Abstract.
We introduce a new combinatorial object called m-scheme. It
subsumes several standard objects like finite groups, strongly regular
graphs and association schemes. An m-scheme on points V is basically a
partition of V^m satisfying certain natural conditions. We state a
conjecture about schemes and prove it in special cases. Finally, we show
how it appears in the classical problem of factoring polynomials over
finite fields. This is joint work with Gabor Ivanyos and Marek Karpinski.
11:30-12:30  Manuel Arora (Bonn)
Hanaki and Uno's Theorem for Schemes of Prime Order and its Application to Polynomial Factoring
Abstract. We give an overview of Hanaki and Uno's results for schemes of prime order. A central result of Hanaki and Uno states that all association schemes of prime order are commutative; moreover, all valencies of non-trivial relations and all multiplicities of non-trivial irreducible characters are constant for such schemes. We give an overview of the techniques used in the proof of Hanaki-Uno. Furthermore, we describe how their result for schemes of prime order can be applied in the context of polynomial factoring over finite fields, especially in connection with the new factoring algorithm by Ivanyos, Karpinski and Saxena. As we will show, the latter algorithm can factor prime-degree polynomials efficiently under certain conditions, as a direct result of Hanaki-Uno. |
Friday, December 9th, 2011
10:00-11:00  Ilia Ponomarenko (Steklov Institute of Mathematics St. Petersburg)
Polynomial-time recognition of antisymmetric coherent configurations
Abstract. It is known that for any permutation group G of odd order one can find a subset of the permuted set whose stabilizer in G is trivial, and if G is primitive, then also a base of size at most 3. Both of these results are generalized to the coherent configuration of G (which is, in this case, a schurian antisymmetric coherent configuration). This enables us to construct a polynomial-time algorithm for recognition and isomorphism testing of schurian antisymmetric coherent configurations.
11:30-12:30  Sergei Evdokimov (Steklov Institute of Mathematics St. Petersburg)
Schurity of S-rings and Cayley schemes
Abstract. Criteria for schurity and non-schurity of S-rings and Cayley schemes over an abelian (especially cyclic) group are given. In particular, we completely characterize those finite cyclic groups over which every S-ring, equivalently every Cayley scheme, is schurian (joint work with Istvan Kovacs and Ilia Ponomarenko). |
Special Lecture/RAND-Seminar
Friday, November 11th 2011, 14:00 HIM Lecture Hall, Poppelsdorfer Allee 45
Christian Borgs
Microsoft Research
Strategic Network Models: From Building to Bargaining
Abstract
In the first part of this talk, I will consider a new game theoretic model of network formation. The model is motivated by the observation that the social cost of relationships is often the effort needed to maintain them. In contrast to many popular models of network formation, whose equilibria tend to be complete graphs or trees, our model has equilibria which support many observed network structures, including triadic closure and heavy-tailed degree distributions. This part of the talk represents work with J.T. Chayes, J. Ding and B. Lucier. If time permits, I will then turn to recent work by N. Balcan, M. Braverman, J.T. Chayes, S.-H. Teng, and myself in which we give an algorithm to discover overlapping communities within a social network.
RAND/SAC-Seminar
Wednesday, April 27th 2011, 10:00 Seminar Room II.57, Institut für Informatik, Friedrich-Ebert-Allee 144
Basile Couetoux
Lamsade, Universite Paris-Dauphine
A 3/2-Approximation Algorithm for
Minimal Weighted Constrained Forest
Abstract
Min WCF(p) denotes the following graph problem: given an edge weighted graph, we want to find a covering forest such that every tree of the forest has order at least p, minimizing the total weight of this forest. This is a rather natural problem in graph theory. Imielinska et al. (93) have shown that it is NP-hard and that it admits a 2-approximation. This ratio has not been significantly improved since. The general approach to constrained forest problems of Goemans and Williamson (95) gives a 2-1/n approximation ratio, with n the number of vertices. We propose a 3/2-approximation for this problem.
Oberseminar Theoretische Informatik/ RAND/SAC-Seminar
Friday, December 17th 2010, 11:15, Seminar Room 3.066, Institut für Informatik, Bruehler Str. 7
Jeremy Barbay
Universidad de Chile
Instance Optimality and Order Adaptivity
for Convex Hulls and Related Problems
Abstract
Adaptive analysis is a well known technique in algorithm design, which refines the traditional worst case analysis over all instances of fixed input size by taking into account some other parameters, such as the size of the output in the case of output sensitive analysis. We present some adaptive techniques for the computation of the convex hull in two and three dimensions, and for related problems. The main analysis technique is unordered instance optimality, based on the structural entropy of the instance. Through a generalization of output sensitivity and a more precise analysis of the complexity of Kirkpatrick and Seidel's algorithm, it yields algorithms which are instance optimal among all algorithms in the multilinear decision tree model with worst or random input order. We will also describe complementary analysis techniques based on the input order, which improve results on the computation of convex hulls in two and three dimensions, and yield new results on the adaptive computation of Voronoi and Delaunay diagrams, through the entropy of a partition of the input into easier instances.
RAND Seminar
SAC Seminar, Excellence Cluster
Monday, January 28th 2008, 10:15 Seminar Room N327, 3rd floor, Institut für Informatik, Römerstr. 164
Bhaskar DasGupta
University of Illinois at Chicago (UIC)
Randomized approximations for offline
and online set-multicover problems
Abstract
The set-multicover problem differs from the set-cover problem in that
an element now needs to be covered a pre-specified number of times.
In this talk we will present our work on improved randomized approximations
for offline and online versions of this problem, together with a novel
application of this problem to reverse engineering of signal-transduction
networks in biology.
(Joint work with Piotr Berman and Eduardo Sontag.)
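For concreteness, the multicover constraint can be illustrated with the classical greedy heuristic adapted to multicover. In this sketch every set has unit cost and may be used at most once, and each element e carries a demand req[e]; these simplifications and all names are assumptions of the illustration, not the talk's algorithm.

```python
def greedy_set_multicover(req, sets_):
    """Greedy sketch for set multicover with unit costs: repeatedly
    pick the unused set covering the largest amount of remaining
    demand, until every element e is covered req[e] times."""
    remaining = dict(req)
    unused = list(range(len(sets_)))
    chosen = []
    while any(r > 0 for r in remaining.values()):
        if not unused:
            raise ValueError("demands cannot be met")
        # residual demand a set would cover if picked right now
        gain = lambda i: sum(1 for e in sets_[i] if remaining.get(e, 0) > 0)
        best = max(unused, key=gain)
        if gain(best) == 0:
            raise ValueError("demands cannot be met")
        chosen.append(best)
        unused.remove(best)
        for e in sets_[best]:
            if remaining.get(e, 0) > 0:
                remaining[e] -= 1
    return chosen
```

For example, with demands {a: 2, b: 1} and sets {a,b}, {a}, {b}, the greedy rule first takes {a,b} and then {a}.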
RAND Seminar
SAC Seminar, Excellence Cluster
Friday, January 18th 2008, 16:15 Seminar Room N328, 3rd floor, Institut für Informatik, Römerstr. 164
Jieun Jeong
Pennsylvania State University
Improving a classifying function by tree evaluation
and efficient message delivery in weighted scale-free network
Abstract
In the first part of the talk we consider the situation in which we have
a training sample of objects divided into a number of classes, and a
function we would like to use to predict the classification of
future query objects: given a query, the function computes its
similarity with already classified objects and thus points to the
most plausible classes for the query.
Once we have a general form of such a function, we want to improve it,
e.g. by changing parameters used in its formula. To do it, we need to
evaluate the function after each modification. One general approach is
to use the function to build a "phylogenetic" tree from the training
sample of objects and evaluate how well this tree fits their
classification. We show how a natural evaluation method can give wrong
answers, how to correct this flaw, and how to quickly evaluate the expected
values that are used in the corrected formula.
In the second part we consider the problem of finding a file located
at some unknown node in a random network, scale free or Erdoes-Renyi,
assuming variable cost of sending a message through a network link.
A simple class of criteria yields superior results when it is used
to guide a random walk through the network (compared with the earlier
work).
RAND Seminar
HIM Seminar, Excellence Cluster
Thursday, December 20th 2007, 17:00 HIM Lecture Hall
Hausdorff Research Institute for Mathematics, Poppelsdorfer Allee 45
Jennifer Chayes
Microsoft Research
Redmond, WA 98052
Epidemics in Technological and Social Networks:
The Downside of Six Degrees of Separation
Abstract
During the past decade, complex networks have become increasingly important
in communication and information technology. Vast, self-engineered networks,
like the Internet, the World Wide Web, and Instant Messaging Networks, have
facilitated the flow of information, and served as media for social and
economic interaction. In social networks, the ease of information flow goes
by many names: the "small world" phenomenon, the "Kevin Bacon phenomenon",
and "six degrees of separation", the claim that any two people on earth
can be connected through a chain of acquaintances with at most five
intermediaries. Unfortunately, many of the properties that facilitate
information transmission also facilitate the spread of viruses in both
technological and social networks. The speaker uses simple mathematical
models to explain these epidemics and to examine strategies for their
containment.
RAND Seminar
SAC Seminar, Excellence Cluster
Monday, October 15th 2007, 10:00 Seminar Room N327, 3rd floor, Institut für Informatik, Römerstr. 164
Mihai Patrascu
Massachusetts Institute of Technology
Dynamic Graph Algorithms invade Geometry
Abstract
After almost 30 years of intense research on dynamic graph algorithms, the time seems ripe for the field to tackle problems of a geometric nature. In dynamic graph algorithms, we have a graph that is being changed through updates (typically edge insertions and deletions), and we want to query it for various graph properties (say, whether two nodes are connected, the cost of an MST, the shortest path between two nodes, etc.). Now consider a scenario in which we have sensors distributed in space, where each sensor can only talk to another sensor within radius r. Connectivity is defined by the intersection graph of discs of radius r/2 around every point. Can we handle a dynamic environment, e.g. a situation where new sensors can be inserted, or some sensors run out of battery? Our results apply to the intersection graphs of a very general class of objects (including disks, segments, rectangles) and support the dynamic connectivity problem in time O(n^{c}) per update and query, where c is a constant less than 1 depending on the geometric shape in question. This opens a very broad and promising research direction.
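The static version of the geometric connectivity question is easy to sketch (the dynamic data structure achieving sublinear updates is the talk's contribution and is far more involved; all names here are illustrative):

```python
from math import hypot

def disk_connected(points, r, a, b):
    """Decide whether sensors a and b are connected in the intersection
    graph of discs of radius r/2: two sensors are adjacent iff their
    distance is at most r. Plain depth-first search over an implicit
    O(n^2) adjacency test; purely illustrative."""
    n = len(points)
    def near(i, j):
        return hypot(points[i][0] - points[j][0],
                     points[i][1] - points[j][1]) <= r
    seen, stack = {a}, [a]
    while stack:
        u = stack.pop()
        if u == b:
            return True
        for v in range(n):
            if v not in seen and near(u, v):
                seen.add(v)
                stack.append(v)
    return False
```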
RAND Seminar
SAC Seminar, Excellence Cluster
Monday, December 11th 2006, 10:15 Seminar Room N328, 3rd floor, Institut für Informatik, Römerstr. 164
Marek Karpinski
University of Bonn
Constant Time Approximation Schemes
Abstract
We survey some new results and recent developments in designing constant
time (optimum) value approximation schemes (CTASs) for large classes of
NP-hard optimization problems. We describe some new underlying paradigms
in designing such approximation schemes and put the recent progress in a
more general perspective.
RAND Seminar
Oberseminar Theoretische Informatik
Wednesday, January 25th 2006, 13:15 Seminar Room N327, 3rd floor, Institut für Informatik, Römerstr. 164
Mathias Hauptmann
University of Bonn
Effective Measure and Dimension: Quantitative Methods in Computational Complexity
Abstract
In this talk we give a survey on quantitative methods
in computational complexity, based on the notion of
effective measure and effective Hausdorff dimension of J.Lutz.
In particular we present the following new result: if NP does not have
p-measure 0, then the set of minimal pairs in NP (with respect to polynomial-time
reductions) does not have p-measure 0 either.
RAND Seminar / BIGS
Oberseminar Theoretische Informatik
Thursday, December 22nd 2005, 14:00 Seminar Room N327, 3rd floor, Institut für Informatik, Römerstr. 164
Jennifer Chayes
Microsoft Research,
Redmond, WA 98052
Controlling the Spread of Viruses on Power-Law Networks
Abstract
We consider the spread of viruses on preferential attachment
networks - a problem of relevance both to the spread of viruses and
worms on the Internet, and to the spread of communicable diseases in a
population. We consider the problem in which the virus may spread
from a site to any neighboring site at some rate, and is spontaneously
cured at another rate. We also make the assumption that sites may be
re-infected - an assumption corresponding to rapidly mutating viruses or
worms, which have recently been observed on the Internet, and which can
also occur in animal populations.
First we show that if the ratio of the infection rate to the curing rate
is bounded below, then there will be an epidemic. In other words, the
epidemic threshold is zero, in contrast to the epidemic threshold on
networks with a bounded number of neighbors per site. Next, we consider
various algorithms for distributing a fixed amount of antidote to
control the spread of virus. We show that commonly used algorithms, in
particular so-called contact tracing, are ineffective in controlling the
epidemic. We also propose an alternative algorithm which does control
the epidemic, in the sense that the epidemic threshold becomes positive,
and show that this algorithm is best possible in the sense that no other
algorithm is more than a constant factor better.
This talk represents several projects in collaboration with subsets of
Noam Berger, Christian Borgs, Ayalvadi Ganesh, Amin Saberi and David
Wilson.
RAND Seminar / BIGS
Oberseminar Theoretische Informatik
Thursday, December 22nd 2005, 15:00 Seminar Room N327, 3rd floor, Institut für Informatik, Römerstr. 164
Christian Borgs
Microsoft Research,
Redmond, WA 98052
The Random Energy Conjecture for Number Partitioning and Spin Glasses
Abstract
The number partitioning problem is a classical combinatorial
optimization problem: Given n numbers or weights, one is faced with the
problem of partitioning this set of numbers into two subsets to minimize
the discrepancy, defined as the absolute value of the difference in the
total weights of the two subsets.
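The discrepancy objective just defined is easy to state in code; a brute-force sketch over all sign assignments (each sign vector encodes a partition, the two subsets being the +1 and -1 parts; exponential time, for illustration only):

```python
from itertools import product

def min_discrepancy(weights):
    """Minimum discrepancy of a partition of `weights` into two
    subsets: for each sign vector, the discrepancy is the absolute
    difference of the two subset sums; return the minimum over all
    2^n partitions. Exponential time; illustration only."""
    return min(abs(sum(w * s for w, s in zip(weights, signs)))
               for signs in product([1, -1], repeat=len(weights)))
```

For example, the weights 4, 5, 6, 7, 8 admit a perfect partition (15 against 15), so the minimum discrepancy is 0.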
Here we consider random instances of this problem where the n numbers
are i.i.d. random variables, and we study the distribution of the
discrepancies and the correlations between partitions with similar
discrepancy. In spite of the fact that the discrepancies of the
2^{n-1} possible partitions are clearly correlated, a surprising recent
conjecture states that the discrepancies near any given threshold
become asymptotically independent, and that the partitions corresponding
to these discrepancies become uncorrelated. In other words, the
conjecture claims that near any fixed threshold, the cost function of
the number partitioning problem behaves asymptotically like a random
cost function.
In this talk, I describe our recent proof of this conjecture. I also
discuss the analogous conjecture for spin glasses.
Promotionskolloquium / BIGS
Thursday, July 14th 2004, 13:30 Lecture Room B, 1st floor, Institut für Informatik, Römerstr. 164
Airat Khasianov
University of Bonn
and
Bonn International Graduate School in Mathematics, Physics and Astronomy
Complexity Bounds on Some
Fundamental Computational Problems for
Quantum Branching Programs
Abstract
We study quantum computational complexity of several problems
connected to the Hidden Subgroup Problem. This problem drew
substantial attention when a polynomial time quantum algorithm
for it was found. The
algorithm generalized the quantum algorithms for factoring integers
and finding discrete logarithms.
We consider these problems in the context of
quantum ordered read-once decision diagrams. Our presentation
starts with some fundamental functions related to the Hidden
Subgroup Problem. These functions include Equality,
Periodicity and simplified "Simon". We show linear upper
and lower bounds on the width of quantum OBDD representations
of these functions.
In the second part of the presentation we show upper and lower
bounds for the Hidden Subgroup Test.
(Part of this work is based on joint research with Farid
Ablayev and Marek Karpinski.)
Theory Lunch Talk
Wednesday, December 22nd 2004, 13:00 Seminar Room N328
Marek Karpinski
UNIVERSITY OF BONN
Existence of Small Symmetric 3SAT-Enforcers
Promotionskolloquium / BIGS
Friday, June 4th 2004, 13:15 Hörsaal 1, Institut für Informatik, Römerstr. 164
Mathias Hauptmann
University of Bonn
Approximation Complexity of Optimization Problems:
Structural Foundations and Steiner Tree Problems
Abstract
The Steiner Tree Problem asks for a minimum-cost connection of a
given set of vertices S in a graph G=(V,E), where the cost of a connection
is the number of edges it uses. For the epsilon-dense case where each s\in S
has at least \epsilon\cdot |V-S| neighbors in V-S, Karpinski
and Zelikovsky gave a polynomial time approximation scheme, which allows one
to approximately solve the problem within arbitrary precision \delta > 0,
such that for every fixed \delta >0 the running time is polynomial in the size of
graph G. We apply their methods to obtain a polynomial time approximation
scheme for the \epsilon-Dense Class Steiner Tree Problem and several other related
problems.
These algorithms share the property that the running time depends
exponentially on the inverse of the precision parameter \delta. An approximation scheme is
called efficient if instead the running time is bounded by
f(1\slash\delta) times a polynomial in the size of G. For the dense versions
of the Steiner Tree Problem, existence of efficient approximation schemes is an
open problem. In the second part of the talk we consider structural aspects
of this question, proving the existence of problems in PTAS which do not admit
efficient approximation schemes, under some strong but reasonable assumptions.
RAND-Seminar / BIGS
Friday, December 19th 2003, 14:15 Seminar Room N327
Jennifer Chayes
Microsoft Research
The Phase Transition in Random Load Balancing
Abstract
Phase transitions are familiar phenomena in physical systems.
But they also occur in random versions of combinatorial models,
including random versions of some of the canonical problems
of theoretical computer science.
In this talk, I will illustrate this by discussing joint work with Christian Borgs,
Stephan Mertens and Boris Pittel on the so-called random optimum partitioning
or load balancing problem -- a fundamental problem in combinatorial optimization.
I will show how this problem undergoes a phase transition from a phase
in which it is typically possible to balance loads to a phase in which such
balance cannot be achieved. I will also discuss how notions of phase transitions
may help us to understand what makes hard problems hard.
No previous knowledge of phase transitions will be assumed in this talk.
RAND-Seminar / BIGS
Thursday, December 18th 2003, 14:15 Seminar Room N327
Christian Borgs
Microsoft Research
What Makes a Network High-Dimensional:
Erdos-Renyi Scaling for Finite Graphs
Abstract
Many models of practical relevance, like faulty wireless networks, are
well described by random subgraphs of finite graphs, or, more
probabilistically, by percolation on finite graphs. For both theoretical and practical
reasons, one of the most interesting properties of these models is the behavior
of the largest connected component as the underlying edge density is varied.
While the behavior of the largest component is well understood for the
complete graph, which was first systematically studied by Erdos and Renyi in
1963, not much was known for general finite graphs.
In the work presented in this talk we formulate a sufficient condition
under which transitive graphs on N vertices exhibit the same scaling behavior
as the complete graph. Our condition is related to a well known condition
in percolation theory on infinite graphs, where it characterizes high
dimensionality.
This work is in collaboration with Jennifer Chayes, Gordon Slade, Joel
Spencer and Remco van der Hofstad.
RAND-Seminar/Diplomandenseminar
Friday, August 1st 2003, 10:00 Seminar Room N328
Martin Schido
University of Bonn
Polynomial Time Approximation Schemes
for Metric Min-k-Sum
Clustering and Metric Max-k-Cut
Abstract
The Metric Min-k-Sum Clustering Problem is the following:
Given n points in a metric space (V,c), build a k-clustering
C_1,...,C_k of
these points so as to minimize $\sum_{i=1}^k \sum_{u,v\in C_i} c(u,v)$.
In this talk we will discuss the first known PTAS for this problem by
Fernandez de la Vega, Karpinski, Kenyon and Rabani
for arbitrary k and a PTAS by Indyk for k=2. As a part of these PTASs, we
will also introduce approximation schemes for Metric Max-(k)-Cut.
We will present ways for optimization and parallelization with respect to an
efficient implementation.
Furthermore we will discuss fast Max-(k)-Cut and Min-k-Sum Clustering
heuristics as an alternative for large instances and to support and speed up
the PTASs.
We will finally present the first implementations of the clustering PTASs
and report on practical experiments up to k=8.
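The Min-k-Sum objective stated above is straightforward to evaluate for a given clustering; a sketch summing c(u,v) over unordered pairs within each cluster (function names are mine, and reading the sum as ranging over unordered pairs is an assumption):

```python
def min_k_sum_cost(clusters, c):
    """Evaluate the Min-k-Sum Clustering objective: the sum, over all
    clusters C_i, of c(u, v) over unordered pairs u, v in C_i.
    `c` is the metric; `clusters` is a list of lists of points."""
    total = 0
    for cluster in clusters:
        for i in range(len(cluster)):
            for j in range(i + 1, len(cluster)):
                total += c(cluster[i], cluster[j])
    return total
```

With points on the real line and c(u, v) = |u - v|, the clustering {0, 1, 2}, {10} costs 1 + 2 + 1 = 4.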
RAND-Seminar
Wednesday, July 23rd 2003, 11:00 Seminar Room N328
V.G. Papadopoulou
University of Patras
On Radiocoloring
Hierarchically Specified
Planar Graphs:
PSPACE-completeness and Approximations
Abstract
Hierarchical specifications of graphs have been widely
used in many important applications, such as VLSI design, parallel
programming and software engineering. A well known hierarchical
specification model, considered in this work, is that of Lengauer
[L89,LW92], referred to as L-specifications. In this
talk we discuss a restriction on L-specifications resulting
in graphs which we call Well-Separated (WS). This class is
characterized by a combinatorial property testable in time
polynomial in the size of the specification of the graph.
In this work we study the Radiocoloring Problem (RCP) on WS
L-specified hierarchical planar graphs. The optimization version
of RCP studied here, consists in assigning colors to the vertices
of a graph, such that any two vertices of distance at most two get
different colors. The objective here is to minimize the number of
colors used. This problem is equivalent to the problem of vertex
coloring the square of a graph G, G^2, where G^2 has the
same vertex set as G and there is an edge between any two
vertices of G^2 if their distance in G is at most 2.
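The square of a graph, as defined above, is straightforward to construct; radiocoloring then amounts to properly coloring this square. A sketch with adjacency given as a dict of neighbor sets (representation and names are my own):

```python
def graph_square(adj):
    """Return G^2 for a graph G given as {vertex: set_of_neighbors}:
    G^2 has the same vertex set and an edge between any two distinct
    vertices whose distance in G is at most 2."""
    square = {v: set() for v in adj}
    for v in adj:
        for u in adj[v]:
            square[v].add(u)            # neighbors at distance 1
            for w in adj[u]:
                if w != v:
                    square[v].add(w)    # neighbors at distance 2
    return square
```

On the path a-b-c-d, for instance, vertex a becomes adjacent to both b and c in the square.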
We first show that RCP is PSPACE-complete for WS L-specified
hierarchical planar graphs. Second, we present a polynomial time
2.66-approximation algorithm as well as a more efficient
3.33-approximation algorithm for RCP on graphs of this class.
We note that the best currently known approximation ratio for
RCP on ordinary (non-hierarchical) planar graphs of general degree
is 5/3 [MS02]. Note also that the only known results on coloring
problems for hierarchical graphs concern another special class
(unit disk graphs), for which a 6-approximation is achieved
[MRHR97].
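To make the reformulation above concrete: radiocoloring an instance G amounts to ordinary vertex coloring of its square. A minimal sketch (ours, for illustration only; not the approximation algorithms of the talk) that builds G^2 and colors it greedily:

```python
from itertools import combinations

def square(n, edges):
    """Edge set of G^2: pairs of vertices at distance 1 or 2 in G."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return {(u, v) for u, v in combinations(range(n), 2)
            if v in adj[u] or adj[u] & adj[v]}

def greedy_coloring(n, edges):
    """First-fit coloring of the given graph; not optimal in general."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = [-1] * n
    for u in range(n):
        used = {color[w] for w in adj[u]}
        color[u] = next(c for c in range(n) if c not in used)
    return color

# Radiocoloring the path 0-1-2-3: its square needs 3 colors.
path = [(0, 1), (1, 2), (2, 3)]
colors = greedy_coloring(4, square(4, path))
```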
RAND-Seminar
Friday, July 4th 2003, 14:30 Seminar Room N327
Lucia Penso
Brown University / University of Bonn
Tight Bounds for k-Set Agreement with Limited-Scope Failure Detectors (2nd Part)
Abstract
We employ mechanisms from Combinatorial Topology to derive
(tight) bounds for the k-Set Agreement Problem in the context of an
asynchronous message-passing system augmented with Eventual or Perpetual
Limited-Scope Failure Detectors.
Both the Eventual and the Perpetual f-fault tolerant protocols presented in
the previous talk match our bounds, which shows at once the tightness of the
bounds and the optimality of the protocols.
In the perpetual case the number of failures satisfies f < k+x-q,
whereas in the eventual case f < min( (n+1)/2, k+x-q ).
Recall that n+1 is the total number of
processes, q is the number of sets containing a reliable process, and x is the
total number of processes belonging to any of the q sets.
Though these methods have been successful in other models, this is the
first time such methods have been applied to a model augmented with failure
detectors.
RAND-Seminar
Friday, June 20th 2003, 14:30 Seminar Room N327
Lucia Penso
Brown University / University of Bonn
Tight Bounds for k-Set Agreement with Limited-Scope Failure Detectors (1st Part)
Abstract
In the k-set agreement problem, each process in a
group starts with a private input value, communicates
with the others, and then halts after choosing a
private output value. Each process is required to
choose some process's input, and at most k distinct
values may be chosen.
We consider this problem in an asynchronous
message-passing system of n+1 processes, of which at
most f may fail by halting. In our model, each process
has a failure detector attached to itself, where a
failure detector is an unreliable oracle that informs
its process when it suspects other processes to have
failed.
In practice, however, a process can typically detect
some failures more easily than others. For example,
timeouts may more reliably detect failures on the same
local area network, but less reliably over a wide-area
network.
Thus, we will be interested in a particular type of
failure detector that captures this idea, namely,
limited-scope failure detectors. In this case, the set
of processes encompasses sets $X_0,...,X_{q-1}$, such
that some correct process in $X_i$ is never suspected by
any process in $X_i$. This can be satisfied either
eventually or all the time (perpetually).
We give the first ever (tight) lower bounds for k-set
agreement protocols employing limited-scope failure
detectors. We also give a novel protocol that matches
our lower bound for the eventual case, and that is the
first ever to do so, disproving the conjecture of
Mostefaoui and Raynal that their protocol would be
optimal for this (eventual) case. We also prove a
conjecture of Mostefaoui and Raynal: that actually
their protocol is optimal for the other (perpetual)
case, since it matches our lower bound.
RAND-Seminar / BIGS
Friday, February 7th 2003, 14:30 Seminar Room N327
Piotr Sankowski
University of Warsaw
Extensions of the Godsil-Gutman estimator
Abstract
In the first part of the talk we will summarize current results concerning
extensions of the Godsil-Gutman complex and quaternionic estimators,
Clifford algebra estimators and usage of symmetrized determinants. We will
concentrate on problems appearing in these methods. In the second part of
the talk we present and analyze a new estimator that uses specially
constructed hypernumbers.
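For orientation, the basic real-valued Godsil-Gutman estimator that these extensions build on can be sketched as follows (our illustration: replacing each 1 in the 0/1 biadjacency matrix of a bipartite graph by a random sign makes det^2 an unbiased estimator of the permanent, i.e. of the number of perfect matchings):

```python
import random

def det(M):
    """Determinant by cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def godsil_gutman_estimate(biadj, trials=2000, seed=0):
    """Average det(B)^2 over random sign matrices B with B_ij = +/-A_ij.
    E[det(B)^2] equals the permanent of the 0/1 matrix A."""
    rng = random.Random(seed)
    n = len(biadj)
    total = 0
    for _ in range(trials):
        B = [[biadj[i][j] * rng.choice((-1, 1)) for j in range(n)]
             for i in range(n)]
        total += det(B) ** 2
    return total / trials

# K_{2,2} has exactly 2 perfect matchings; the estimate concentrates near 2.
```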
RAND-Seminar / BIGS
Wednesday, February 5th 2003, 12:15 Seminar Room N328
Piotr Sankowski
University of Warsaw
Alternative Algorithms for Counting All Matchings in Graphs
Abstract
We will present two methods for counting all matchings in a graph. Both
methods are alternatives to the well-known methods based on Markov chains,
and both are unbiased. The first one is a generalization of the Godsil-Gutman
estimator. This algorithm works in polynomial time for dense
graphs and in expected polynomial time for random graphs. The second
method uses importance sampling. It works in exponential time, but
it can easily be enriched with heuristics, leading to very efficient
algorithms in practice. Experiments show that our methods give better
estimates than the Markov chain approach.
RAND-Seminar
Friday, January 17th 2003, 14:30 Seminar Room N327
Ayrat Khasyanov
University of Bonn
Polynomial Time Quantum Factoring Algorithm
Abstract
Integer factoring is one of the most extensively studied number-theoretic
problems. So far, neither polynomial time deterministic nor
BPP algorithms are known for it.
It is the quantum computational model that offered a way forward.
In 1994 P. Shor came up with quantum polynomial time algorithms for
Factoring and the Discrete Logarithm Problem,
a result for which he received the Nevanlinna Prize in 1998.
Both problems had been believed to be hard enough to serve as a basis for
cryptosystems
(e.g. the RSA cryptosystem is based on the assumption that integer
factoring is hard).
In this talk we present Shor's Quantum Polynomial Time Factoring
Algorithm and also some recent improvements of this algorithm.
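The structure of Shor's algorithm can be illustrated by its classical post-processing step: the quantum subroutine supplies the multiplicative order r of a base a modulo N, from which a factor is read off. A sketch (ours, with brute-force order finding standing in for the quantum part):

```python
from math import gcd

def factor_via_order(N, a):
    """Classical post-processing in Shor's algorithm.  The quantum
    subroutine returns the multiplicative order r of a mod N; here a
    brute-force loop stands in for it.  If r is even and a^(r/2) is a
    nontrivial square root of 1 mod N, a gcd gives a proper factor."""
    g = gcd(a, N)
    if g > 1:
        return g                  # lucky draw: a already shares a factor
    r, x = 1, a % N
    while x != 1:                 # brute-force order finding (quantum part)
        x = x * a % N
        r += 1
    if r % 2 == 1:
        return None               # odd order: retry with another base
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None               # trivial square root: retry
    return gcd(y - 1, N)
```

For N = 15 and a = 7 the order is r = 4, and the gcd step yields the factor 3.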
RAND-Seminar
Monday, January 13th 2003, 11:15 Seminar Room N327
Jana Chlebikova
UNIVERSITY OF KIEL
Some inapproximability results for combinatorial optimization problems
Abstract
Our currently best inapproximability results for several NP-hard
combinatorial optimization problems will be presented (e.g.
Steiner Tree Problem, 3-Dimensional Matching, bounded occurrence
instances of the Independent Set and Node Cover Problems). Our approximation
lower bounds depend on parameters of amplifiers that provably exist but
need not be efficiently constructible. The methods of construction (or the
proofs of existence)
of amplifiers with better parameters will be discussed as well.
Theory Lunch Talk
Wednesday, January 8th 2003, 13:00 Seminar Room N328
Christel Baier
UNIVERSITY OF BONN
Model Checking and Abstraction
RAND-Seminar/MPI-Seminar
joint with the
Colloquium of the Institute of Computer Science
Friday, December 20th 2002, 14:30 Seminar Room N327
Institute of Computer Science, Römerstraße 164
Sanjeev Arora
PRINCETON UNIVERSITY
Proving Integrality Gaps Without Knowing the Linear Program
Abstract
Many approximation algorithms for NP-hard optimization
problems are designed using a linear program relaxation of the problem.
The integrality gap of the relaxation is the worst-case ratio of the
cost of the optimum (integer) solution to the optimum value of the
linear program. If we can show that the integrality gap is large, it
rules out using that linear program for computing good approximations.
Proving integrality gaps is a difficult task and usually undertaken on a
case-by-case basis. We initiate a more systematic approach that proves
integrality gaps for large families of linear programs. We prove an
integrality gap of 2-o(1) for three families of relaxations for vertex
cover, including those obtained from the Lovasz-Schrijver
''lift-and-project'' proof system. Our methods seem relevant to other
problems as well. (Joint work with Bela Bollobas and Laszlo Lovasz)
RAND-Seminar/Doktorandenseminar
Monday, December 9th 2002, 10:15 Seminar Room N327
Mathias Hauptmann
UNIVERSITY OF BONN
Polynomial Time Algorithms for Market Equilibria
Abstract
In this talk we consider the problem of finding market clearing prices:
Given a finite market with n kinds of goods and m buyers, each buyer j having
an initial amount e_j of money and a utility function u_j, the task is
to find prices p_i, i=1,..,n for the goods such that the goods can be assigned
to the buyers in a way that gives each buyer j maximum utility with respect to
the budget e_j. We discuss
a new polynomial time algorithm due to Devanur, Papadimitriou, Saberi and
Vazirani which constructs market clearing prices in the case of linear utility
functions.
Their algorithm is modeled after Kuhn's primal-dual algorithm for bipartite
matching.
RAND-Seminar
Friday, December 6th 2002, 15:15 Seminar Room N327
Ayrat Khasyanov
UNIVERSITY OF BONN
A Polynomial Time Primality Test
Abstract
Primality testing is one of the most important problems in mathematics, but
for a long time no efficient algorithms for it were known. With the advent
of electronic computers, the problem of testing whether a natural number
given in binary is prime
was brought to the forefront of computer science, as it concerns such
problems as
the construction of pseudorandom key generators and public key cryptosystems.
Fermat's Little Theorem, proved in the 17th century, was the starting point
for many efficient primality tests. In 1976 G. Miller proposed a polynomial
time
deterministic algorithm which is correct assuming the Extended Riemann
Hypothesis. Since then many other efficient tests have been devised, but for
decades no unconditional polynomial time deterministic
test was known. Thus for a long time the question whether $PRIMES \in P$
has been open. Finally, this August M. Agrawal,
N. Kayal and N. Saxena made a significant breakthrough: they proved that
$PRIMES \in P$ and obtained the first polynomial time deterministic
primality testing algorithm.
We are going to present this result along with a brief history of primality
testing.
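The Fermat test mentioned above can be sketched in a few lines (our illustration of the probabilistic starting point, shown only to contrast with the deterministic AKS result; Carmichael numbers can fool it):

```python
import random

def fermat_test(n, trials=20, seed=0):
    """Probabilistic compositeness test based on Fermat's Little Theorem:
    if n is prime, then a^(n-1) = 1 (mod n) for every a coprime to n.
    Returns False only when n is certainly composite; True means
    'probably prime' (Carmichael numbers such as 561 can fool it)."""
    if n < 4:
        return n in (2, 3)
    rng = random.Random(seed)
    for _ in range(trials):
        a = rng.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:    # a witnesses compositeness
            return False
    return True
```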
RAND-Seminar/Doktorandenseminar
Friday, December 6th 2002, 14:15 Seminar Room N327
Mathias Hauptmann
UNIVERSITY OF BONN
Structure in PTAS
Abstract
PTAS is the class of NP optimization problems X
that admit a polynomial time approximation scheme (ptas), i.e. an
algorithm
A such that for each instance x of X and each \eps > 0, A(x,\eps) returns
a (1+\eps)-approximate solution, and for fixed \eps the running time is
polynomial in |x|. Equivalently, the running time is O(|x|^{f(1/\eps)})
for some function f. In this talk we present a new kind of ptas-preserving
reduction which can be used to study structure inside PTAS. We prove the
existence of complete and intermediate problems and furthermore
discuss the problem of separating PTAS from EPTAS (the class of NPO problems
admitting an efficient ptas, i.e. one with running time O(f(1/\eps)\cdot
|x|^\alpha) for some constant \alpha and some function f).
RAND-Seminar/Special Seminar
Wednesday, November 20th 2002, 12:00 Seminar Room N328
Ignatios Souvatzis
UNIVERSITY OF BONN
Summary of the 2nd European BSD Conference
Abstract
From the 15th to the 17th of November 2002, BSD developers, administrators
and users (not only) from Europe met in Amsterdam.
Short summaries of some of the presented lectures are given, covering
themes such as
software installation systems, system and network control, video
streaming with SCTP, kernel extensions for VLAN routing, the use of
PC-cluster-based parallel computing systems to evaluate video material,
shared root for
network clients, strategies for porting an OS to new hardware, telephony
software for Unix, and MacOS X on less costly computers.
RAND-Seminar/Diplomandenseminar
Friday, November 15th 2002, 14:00 Seminar Room N328
Holger Dach
UNIVERSITY OF BONN
The Hydrophilic-Hydrophobic model for protein folding
Abstract
In this talk we will give a survey of the two-dimensional
hydrophilic-hydrophobic model (HP model) proposed by Dill.
In this model the protein is modeled as a sequence of
0's and 1's, and folding entails embedding the sequence in the
two-dimensional lattice so as to minimize a given energy function.
We will present well-known hardness results and algorithmic approaches for
this and related problems.
RAND-Seminar/Math.Kolloquium
Friday, October 25th 2002, 17:15 Kleiner Hörsaal, Wegelerstr.10
Farid Ablayev
KAZAN UNIVERSITY/MPI
The Power of Quantum Computational Models
Abstract
In the first part of the talk we present basic notions of quantum
computational models and basic classic and quantum
complexity classes.
The name BQP stands for Bounded-error Quantum Polynomial time
--- the class of languages that are decidable (with small
error probability) on a quantum Turing machine in polynomial
time. BQP is the analog of the well-known classical probabilistic
complexity class BPP (the class of languages that are decidable, with small
error probability, on a probabilistic Turing machine in polynomial
time).
It is known that P \subseteq BPP \subseteq PP
and that P \subseteq NP \subseteq PP.
In 1997 it was proved that BPP \subseteq BQP \subseteq PP.
The importance of the class BQP is underlined by the fact that the factoring
problem is in BQP (Shor).
In the second part of the talk we define quantum branching programs (qbps)
--- the quantum variant of the well-known computational model of branching
programs.
We prove that quantum branching programs can be more effective than
classical ones:
we present a certain symmetric Boolean function which
is computable by a read-once qbp of width O(log n), but which
requires width Omega(n) on any classical
randomized read-once bp with permanent transitions on each level.
Next, a known
result from the 1980s states that
BP_5 = NC^1, that is, polynomial size
deterministic width-5 branching programs have the same power as
polynomial size logarithmic depth circuits. Our result
is QBP_2 = NC^1, where QBP_2 stands for the class of functions
computable by polynomial size quantum width-2 branching
programs.
Note that BP_2 \subsetneq ProbBP_2 \subsetneq QBP_2, where
ProbBP_2 is the complexity class determined by probabilistic branching
programs of width 2 (with small error probability).
Diplomandenseminar
Friday, October 25th 2002, 11:00 Seminar Room N328
Martin Schido
UNIVERSITY OF BONN
Polynomial Time Approximation Schemes for Metric Min-Sum Clustering
Abstract
The Metric Min-Sum k-Clustering Problem is the following:
given n points in some metric space (V,c), build a k-clustering
C_1,...,C_k of these points so as to minimize
sum_{i=1}^k sum_{u,v\in C_i} c(u,v).
In this talk we will discuss a PTAS for this problem by Fernandez de la Vega,
Karpinski, Kenyon and Rabani. We will also present the first implementation
of this algorithm and report on practical experiments up to k=8.
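The objective above is easy to state in code; a small sketch (ours) evaluating a candidate clustering, counting each unordered pair within a cluster once:

```python
def min_sum_cost(points, clusters, c):
    """Metric Min-Sum objective: sum over clusters C_i of the distances
    c(u, v) over all unordered pairs {u, v} within C_i."""
    total = 0.0
    for cl in clusters:
        for i in range(len(cl)):
            for j in range(i + 1, len(cl)):
                total += c(points[cl[i]], points[cl[j]])
    return total

# Four points on a line: grouping the near pairs costs 2, the far pairs 20.
pts = [0.0, 1.0, 10.0, 11.0]
dist = lambda p, q: abs(p - q)
```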
MPI-Seminar/Doktorandenseminar
Tuesday, October 8th 2002, 14:00 Hörsaal MPI, Vivatsgasse 7
Ayrat Khasyanov
UNIVERSITY OF BONN
Grover's Quantum Search Algorithm: A Tutorial
Abstract
We are going to give an introductory talk on one of the famous results in
Quantum Computing. In 1996 Lov Grover developed a method that applies to
a whole class of problems, namely those for which it is hard to find a
solution but easy to check whether a candidate is one.
The search problem is one for which it is proven that any classical
algorithm is worse than the quantum one: the latter can solve the
problem in $O(\sqrt{n})$ queries, whereas the former needs $O(n)$, even
when randomized. Moreover, the quantum lower bound for the search problem
is matched exactly by Grover's algorithm.
We should remark that no new results are proposed in this talk. The talk is
planned in the framework of the Seminar in Quantum Computing; its aim is to
enable newcomers to the area of quantum computing, as well as those searching
for the most recent results, to follow the whole seminar.
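The quadratic speedup can be checked on a toy state-vector simulation; a sketch (ours) of the standard oracle/diffusion iteration:

```python
import math

def grover_success_probability(n_qubits, target):
    """Dense simulation of Grover search over N = 2^n_qubits items.
    Oracle: flip the sign of the target amplitude.  Diffusion: reflect
    the state about the uniform superposition (2|s><s| - I).  After
    about (pi/4)*sqrt(N) iterations the target probability is near 1."""
    N = 2 ** n_qubits
    state = [1.0 / math.sqrt(N)] * N
    for _ in range(round(math.pi / 4 * math.sqrt(N))):
        state[target] = -state[target]        # oracle query
        mean = sum(state) / N                 # diffusion operator
        state = [2.0 * mean - a for a in state]
    return state[target] ** 2
```

For n_qubits = 8 (N = 256) the 13 iterations drive the target probability above 0.98, versus 1/256 for a single classical random probe.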
MPI-Seminar/Special Year on Computational Complexity
Tuesday, September 17th 2002, 14:00 Hörsaal MPI, Vivatsgasse 7
Farid Ablayev
KAZAN STATE UNIVERSITY/MPI
Complexity of classic and quantum computational models
Abstract
In this talk we introduce a model of a Quantum Branching Program
(qbp) and study its computational power. We define several natural
restrictions of the general qbp model, such as read-once and
read-k-times qbps, noting that obliviousness is inherent in
the quantum nature of such programs.
In particular we show that any Boolean function can be computed
deterministically (exactly) by a read-once qbp of width O(2^n),
contrary to the analogous situation for quantum finite
automata. Further, we display a certain symmetric Boolean function which
is computable by a read-once qbp of width O(log n), but which requires
width Omega(n) on any deterministic read-once branching program
and on any (classical) randomized read-once branching program with permanent
transitions on each level.
We also present a general lower bound on the width of read-once qbps,
showing that the upper bound for the considered symmetric function is
almost tight.
MPI-Seminar/Special Year on Computational Complexity
Tuesday, September 10th 2002, 14:00 Hörsaal MPI, Vivatsgasse 7
Farid Ablayev
KAZAN STATE UNIVERSITY/MPI
Quantum bounded width branching programs versus classic width bounded branching programs
MPI-Seminar/Special Year on Computational Complexity
Thursday, September 5th 2002, 15:00 Hörsaal MPI, Vivatsgasse 7
Farid Ablayev
KAZAN STATE UNIVERSITY/MPI
Complexity of classic and quantum computational models
RAND-Seminar
Wednesday, July 17th 2002, 12:00 Seminar Room N328
Martin Loehnertz
UNIVERSITY OF BONN
A Fast Algorithm for Finding a Perfect Matching
in a Bipartite
Graph
Containing Many Disjoint Perfect Matchings
Abstract
How many needles must be in a haystack to make finding a needle easy?
How many disjoint perfect matchings must be in a graph
to make finding one simpler? We present an $O(m\cdot\sqrt{n}/d)$ algorithm
for
finding an almost regular subgraph of a bipartite graph containing
$\sqrt{n}d^3$ disjoint matchings. We then demonstrate that this subgraph
already contains a very large matching which can be found and augmented
to a perfect matching within the same time bound.
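For background, the generic augmenting-path subroutine that such matching algorithms build on can be sketched as follows (a textbook O(nm) routine, ours; not the specialized algorithm of the talk):

```python
def max_bipartite_matching(n_left, n_right, adj):
    """Maximum matching via augmenting paths; adj[u] lists the right-side
    neighbours of left vertex u.  O(n*m) overall."""
    match_right = [-1] * n_right      # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its partner can be re-matched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(n_left))
```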
Theory Lunch Talk
Wednesday, July 3rd 2002, 13:00 Seminar Room N328
Rolf Klein
UNIVERSITY OF BONN
On the Detour of Planar Curves
Friday, June 7th 2002, 17:15 Kleiner Hörsaal
Wegelerstraße 10
Invitation
for the Special Colloquium
In Memoriam
Roman Smolensky
Alexander Razborov
INSTITUTE FOR ADVANCED STUDY PRINCETON
Algebraic Proof Systems
Abstract Algebraic proof systems are proof systems that operate with commutative multi-variable polynomials over some ring and prove that a given system of polynomial equations is unsolvable. Of especial interest to us in this talk is the "combinatorial" framework in which the equations $x^2_i-x_i=0$ must ultimately be present in the system (equivalently, we at once restrict ourselves to 0-1 solutions). Algebraic proof systems originally emerged in 1994 as a lower bound tool for attacking the propositional proof system $F_d + MOD_p$. This system makes a straightforward analogue of Smolensky's complexity class $ACC^0[p]$ in the context of proof complexity. It soon became clear, however, that the importance of algebraic proof systems goes far beyond this particular task: they provide natural and elegant models for studying (our way of thinking about) the most basic algebraic facts and constructions. In this talk we firstly survey various connections between algebraic proof systems on the one hand, and such areas as propositional proof systems or automatic theorem proving on the other. In particular, we try to explain from the complexity perspective what makes algebraic proof systems very unique for the automatic proof generation. Then we survey lower bounds known for algebraic proof systems.
(joint with RAND-Seminar)
RAND-Seminar / Oberseminar Mengenlehre / Special Year on Computational Complexity
Thursday, June 6th 2002, 16:30 Seminar Room B (Room37)
Beringstraße 4
Patrick Speissegger
UNIVERSITY OF WISCONSIN
''Tame'' expansions of the real field that are not o-minimal
RAND-Seminar/Doktorandenseminar
Friday, May 17th 2002, 14:00 Seminar Room N327
Peter Wegner
UNIVERSITY OF BONN
Protein Motif Recognition
Abstract The structural motif recognition problem is an important
step in predicting a protein's three-dimensional fold
when given only its amino acid sequence. Given a known local structure,
or motif, one would like to determine whether
it occurs in a given amino acid sequence. Additionally, one would like
to determine the positions of the motif.
In this talk we will present some existing algorithmic approaches to solving
this problem.
Theory Lunch Talk
Wednesday, May 8th 2002, 13:00 Seminar Room N328
Marek Karpinski
UNIVERSITY OF BONN
Satisfiability and Approximability of Regular SAT Problems
RAND-Seminar/
Doktorandenseminar
Friday, May 3rd 2002, 14:30 Seminar Room N327
Mathias Hauptmann
UNIVERSITY OF BONN
Approximation of the Steiner Forest Problem
RAND-Seminar/
Special Year on Computational
Complexity
Friday, April 19th 2002, 14:30 Seminar Room N327
Pavel Pudlak
Mathematical Institute AVCR Prague
An application of Hindman's theorem to a problem on communication complexity
Abstract We consider the k-party communication complexity
of determining whether a word w is of the form w_0a_1w_1a_2...w_{k-1}a_kw_k,
for fixed letters a_1,...,a_k. Using the well-known theorem of Hindman
(a Ramsey type result about finite subsets of natural numbers) we prove
that for k=4 and 5, the communication complexity of the problem increases
with the length of the word w.
RAND-Seminar
Thursday, March 21st 2002, 11.15, Seminar Room N327
Leonid Gurvits
Los Alamos National Laboratory
From the intersection of two geometric matroids to Quantum (Operator) Matching Theory, with applications to quantum entanglement
Abstract Given two M-tuples of complex N-dimensional
vectors X={x_1,x_2,...,x_M} and Y={y_1,y_2,...,y_M}, we associate with them
the matroidal
permanent MP_{X,Y}. This quantity is related to the intersection of the two
corresponding geometric matroids and includes permanents and mixed
discriminants
as special cases. The matroidal permanent is a particular case of the quantum
permanent associated with completely positive operators and corresponds
to so-called separable operators. I will discuss an analog of the van der
Waerden
conjecture for MP_{X,Y}, computational issues, and how this seemingly strange
quantity is related to a fundamental notion in quantum mechanics called
entanglement.
I will sketch a generalization of classical matching theory, Quantum Matching
Theory (with a corresponding generalization of Sinkhorn's theory). Classical
matching theory is part of the ''combinatorics of nonnegative matrices'',
whereas Quantum Matching Theory is formulated in terms of positive linear
operators. Again, I will explain how this theory touches the most fundamental
issues in quantum mechanics. One of the results I plan to present is a
polynomial time deterministic algorithm for the following question: given
a linear subspace $L \subset M(N)$ with the promise that L has a (hidden)
basis consisting of rank-one matrices, decide whether L contains a
nonsingular matrix, or, more generally,
compute the maximum matrix rank achieved in L. Time permitting, I will talk
about my recent result on the ''hardness'' of the Weak Membership Test for
the set of entangled mixed quantum states. This last result says, in a sense,
that there is no hope of getting a classical ''operational'' algorithm
to check whether a given ''large'' density matrix is entangled.
RAND-Seminar
Friday, December 21st 2001, 14.30, Seminar Room N327
Mathias Hauptmann
UNIVERSITY OF BONN
The Steiner Forest Problem in Graphs and Bounded Hypergraphs
Abstract In the classical Steiner Tree Problem, we are
given an edge weighted graph G and a subset S of the vertices of G, the
terminal set. We ask for a shortest connected subgraph of G containing
S. In the Steiner Forest Problem, instead of one terminal set we are given
a collection S_{1}, ... ,S_{r} of pairwise disjoint nonempty terminal
sets, and we ask for a minimum cost subgraph of G that contains a Steiner
Tree for each of the sets S_{i}. In this talk we will give a survey
on approximation algorithms and hardness results for the Steiner Forest
Problem and generalizations to proper and weakly supermodular requirement
functions. We will also present a generalization of the 2-approximation
algorithm of Goemans, Williamson for proper functions to bounded hypergraphs.
RAND-Seminar
Friday, December 14th 2001, 11.00, Seminar Room N327
Tatiana Roubinchtein
UNIVERSITY OF BONN
The Maximum Bisection Problem in Regular Graphs and Hypergraphs
Abstract In the Maximum Bisection Problem in
graphs, we are given a graph G with an even number of vertices. We ask for
a partition of the vertex set V into two parts of equal size with a maximum
number of cut edges. This problem is MAX SNP-hard even for $\Delta$-regular
graphs with small constant $\Delta$, and polynomial time approximation
algorithms with small constant ratio for the regular case were given by
Feige, Karpinski and Langberg. In this talk we will give a survey on the
problem and present an approximation algorithm with improved ratio for
the Hypergraph Maximum Bisection Problem on uniform regular hypergraphs.
Furthermore we will report on practical experiences with different algorithms
for the graph case.
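As a baseline for such experiments, a uniformly random balanced bisection already cuts each edge with probability (n/2)/(n-1) > 1/2 in expectation; a sketch (ours):

```python
import random

def random_bisection_cut(n, edges, seed=0):
    """Cut value of a uniformly random balanced bisection.  Each edge is
    cut with probability (n/2)/(n-1), slightly more than 1/2, since the
    two sides are forced to have equal size."""
    assert n % 2 == 0
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    left = set(order[: n // 2])
    return sum((u in left) != (v in left) for u, v in edges)
```

On K_4, for instance, every balanced bisection cuts exactly 4 of the 6 edges.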
RAND-Seminar
Friday, November 16th 2001, 14.30, Seminar Room N327
Mathias Hauptmann
UNIVERSITY OF BONN
The Directed Zero Skew Tree Problem
Abstract We will give a survey on Zero Skew Tree Problems in
graphs, a class of problems that arise naturally in various applications
in VLSI Design: A signal has to be sent from a source in a given network
to a set of sinks such that the signal arrives at every sink at the same
time. To satisfy this constraint one is allowed to use additional delay
segments, the task is to minimize length plus sum of costs of all delay
segments used in the solution. In this talk we will deal with the directed
version of this problem, i.e. the underlying network is a directed edge-weighted
graph. We give upper and lower bounds for approximability of this problem.
RAND-Seminar
Friday, June 15th 2001, 14.15, Seminar Room N327
Claire Kenyon
UNIVERSITY PARIS-SUD
Designing approximation schemes via rounding
Abstract Frustrated by the NP-hardness of many
combinatorial optimization problems, researchers have in the last decade
turned their attention away from exact algorithms, to approximation algorithms.
Of particular interest are approximation schemes, which construct a solution
whose cost is arbitrarily close to the optimal solution cost. Rounding
is a powerful tool for this. We will give several examples of rounding
techniques: rounding followed by exhaustive search (example: planar max
cut), by linear programming (examples: bin-packing, strip packing), or
by dynamic programming.
RAND-Seminar
Friday, May 25th 2001, 14.00, Seminar Room N327
Stefan Jania
UNIVERSITY OF BONN
Design, Implementation and Analysis of Approximation Algorithms for Bisection Problems based on Simulated Annealing
Abstract The Minimum Bisection Problem for
graphs is to partition a given graph into two subgraphs of the same size
such that the number of edges connecting them is minimal. In this talk
we will present an algorithm of Jerrum and Sorkin for the Minimum Bisection
Problem based on Simulated Annealing. During my master project, this algorithm
was implemented in C and parallelized using PVM. In the talk we will first
define the Minimum Bisection Problem, then describe the algorithm of Jerrum
and Sorkin and finally present the C-program. We will also report on practical
experiences and demonstrate the program on random instances.
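The flavor of such an annealing algorithm can be conveyed by a generic sketch (ours; swap moves with geometric cooling, not the exact Jerrum-Sorkin schedule or the PVM-parallelized C implementation):

```python
import math
import random

def anneal_bisection(n, edges, steps=20000, t0=2.0, cooling=0.9995, seed=0):
    """Simulated annealing for Minimum Bisection: the state is a balanced
    partition, a move swaps one vertex from each side, and worsening moves
    are accepted with probability exp(-delta/T) under geometric cooling."""
    rng = random.Random(seed)
    side = [i < n // 2 for i in range(n)]
    cut = sum(side[u] != side[v] for u, v in edges)
    best, T = cut, t0
    for _ in range(steps):
        u = rng.choice([i for i in range(n) if side[i]])
        v = rng.choice([i for i in range(n) if not side[i]])
        side[u], side[v] = side[v], side[u]      # swap keeps the sides balanced
        new_cut = sum(side[a] != side[b] for a, b in edges)
        if new_cut <= cut or rng.random() < math.exp((cut - new_cut) / T):
            cut = new_cut
            best = min(best, cut)
        else:
            side[u], side[v] = side[v], side[u]  # reject: undo the swap
        T *= cooling
    return best
```

Recomputing the full cut value in each step keeps the sketch short; a serious implementation would update it incrementally in O(deg) per swap.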