Time-Independent Liveness of Interval Petri Nets
Sparse Instances of Hard Problems
Michael Elberfeld (Lübeck)
Algorithmic meta theorems state that problems definable in some logic are efficiently solvable on some class of logical structures. The prime example of these results is Courcelle's Theorem, which states that all monadic second-order (MSO) definable problems are solvable in linear time on structures of bounded tree width.
In the talk I will discuss the computational complexity of classes of problems that are solvable by applying algorithmic meta theorems. We will have a look at two results and discuss their proofs as well as their applications: The first result is an algorithmic meta theorem for L, which states that all MSO-definable decision, optimization, and counting problems are solvable in deterministic logarithmic space on structures of bounded tree width. The second result transfers the idea of unified MSO-based problem definitions to a class inside L. It states that all MSO-definable decision, optimization, and counting problems are solvable by uniform circuit families from TC0 on structures of bounded tree depth, a structural parameter related to the length of the longest simple path in a graph. In the course of the talk we will see that each result covers complete problems for the corresponding complexity class and thus classifies the computational complexity of the considered class of problems.
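To make "MSO-definable" concrete, a standard textbook illustration (my own example, not necessarily one used in the talk) is 3-colorability, which is expressed by quantifying over three vertex sets:

$\exists R\,\exists G\,\exists B\; \big[\, \forall x\, (R(x) \lor G(x) \lor B(x)) \;\land\; \forall x\,\forall y\, \big(E(x,y) \rightarrow \neg((R(x)\land R(y)) \lor (G(x)\land G(y)) \lor (B(x)\land B(y)))\big) \,\big]$

By Courcelle's Theorem this problem is solvable in linear time on graphs of bounded tree width, and by the first result above also in deterministic logarithmic space.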
Determining the number of distinct elements $F_0$ in a data stream is a fundamental problem in a wide range of applications, among them network traffic monitoring and databases. Algorithms that solve this problem with sublinear space must be randomized and approximate. The lower bounds for randomized approximation algorithms established in recent years have now been matched by Kane, Nelson, and Woodruff in their paper "An Optimal Algorithm for the Distinct Elements Problem" at PODS 2010. In my talk I will present the ideas of their algorithm.
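To convey the flavour of such estimators, here is a minimal Python sketch of the classic k-minimum-values idea; it is a much simpler (and suboptimal) relative of the Kane-Nelson-Woodruff algorithm, shown only to illustrate why hashing yields a randomized, approximate, small-space estimate of $F_0$ (the function name and parameters are my own):

    import hashlib

    def kmv_estimate(stream, k=64):
        """Estimate the number of distinct elements (F_0) in a stream.

        k-minimum-values idea: hash each element to a pseudo-random point
        in [0, 1) and keep only the k smallest distinct hash values. If the
        k-th smallest value is v, roughly k/v distinct elements were seen.
        This is NOT the optimal algorithm of Kane, Nelson, and Woodruff;
        it only illustrates the randomized, approximate approach.
        """
        smallest = []  # sorted list of the k smallest distinct hash values
        for item in stream:
            h = hashlib.sha256(str(item).encode()).digest()
            v = int.from_bytes(h[:8], "big") / 2**64  # point in [0, 1)
            if v in smallest:
                continue
            if len(smallest) < k:
                smallest.append(v)
                smallest.sort()
            elif v < smallest[-1]:
                smallest[-1] = v
                smallest.sort()
        if len(smallest) < k:        # fewer than k distinct elements: exact
            return len(smallest)
        return int(k / smallest[-1])

    # Example: a stream with 1000 distinct values, each repeated ten times.
    print(kmv_estimate(i % 1000 for i in range(10000)))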
Computing Universal Models Under Linear TGDs
Databases are dynamic structures - a database is subject to an ongoing sequence of small changes and queries. In a dynamic approach one can initialize auxiliary relations which help to answer a query faster than recomputing it from scratch. These relations must be updated after each change in order to keep them consistent with the database.
In this talk I will introduce the Dyn-FO framework of Patnaik and Immerman, which models databases as finite relational structures and measures the complexity of maintaining auxiliary relations by logical means. I will give an overview of important results from dynamic complexity theory.
In the literature it is usually assumed that the considered structures are ordered or, equivalently, that the dynamic process starts with an empty structure (in which case an order of arrival can be built incrementally). I will discuss the importance of the availability of an ordering of the considered structures. In particular, I will show how to modify the known dynamic algorithms for symmetric reachability and how to modify the concept of dynamic reductions to work on unordered structures. On the other hand, I will show that several queries which can be handled with first-order updates in the ordered setting cannot be handled in the unordered setting.
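To give a flavour of first-order updates, consider the standard textbook example of maintaining reachability under edge insertions (deletions are the hard case): when an edge (u,v) is inserted, the new transitive closure is definable from the old one by $TC'(x,y) = TC(x,y) \lor (TC(x,u) \land TC(v,y))$. Below is a minimal Python sketch of this update rule (my own illustration, not code from the talk):

    class IncrementalReachability:
        """Maintain the reachability relation of a directed graph under
        edge insertions only, using the first-order update rule
            TC'(x, y) = TC(x, y) or (TC(x, u) and TC(v, y))
        for an inserted edge (u, v)."""

        def __init__(self, nodes):
            self.nodes = list(nodes)
            self.tc = {(x, x) for x in self.nodes}  # initially no edges

        def insert_edge(self, u, v):
            # The update formula is evaluated over the old relation;
            # the comprehension is built completely before the union.
            self.tc |= {(x, y) for x in self.nodes for y in self.nodes
                        if (x, u) in self.tc and (v, y) in self.tc}

        def reachable(self, x, y):
            return (x, y) in self.tc

    r = IncrementalReachability(range(4))
    r.insert_edge(0, 1)
    r.insert_edge(1, 2)
    print(r.reachable(0, 2), r.reachable(2, 0))  # True False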
Lots of data leads to lots of difficulties. When storage space, bandwidth, or processing power are limited, we often use data reduction techniques to reduce the size and the resolution of data, or we compile small summaries and statistics that we hope still contain the information we want to keep or transmit. Kernelization is a mathematically rigorous framework for preprocessing data by creating small summaries. In particular, kernelization allows us to make precise which core information of the data we want to preserve in the data reduction step, and it allows us to analyze the amount of storage space that is required to store the summaries, called kernels, in the worst case.
The kernelization framework was proposed by Downey and Fellows. It has been studied in the setting where the information we want to preserve is an NP-hard property of the input, and where we only have polynomial time to compute the kernels. In the first part of the talk, we will see some examples of how kernels can be computed in that setting. Moreover, we will informally discuss a result from my thesis that gives us a complexity-theoretic reason to believe that the kernelization algorithms from the first part of the talk are essentially optimal.
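As a concrete example of a kernel in that setting (a textbook example, not necessarily one from the talk or the thesis), the classical Buss kernelization for Vertex Cover shrinks any instance to an equivalent one with at most k^2 edges; the sketch below and its function name are my own illustration:

    def vertex_cover_kernel(edges, k):
        """Buss kernelization for Vertex Cover: given an edge list and a
        parameter k, return an equivalent instance (edges', k') with at
        most k'^2 edges, or None if the instance is a trivial no-instance.

        Rules: a vertex of degree > k must be in every vertex cover of
        size <= k, so take it and decrease k; if more than k^2 edges
        remain afterwards, no k vertices can cover them all."""
        edges = set(edges)
        while True:
            if k < 0:
                return None
            degree = {}
            for u, v in edges:
                degree[u] = degree.get(u, 0) + 1
                degree[v] = degree.get(v, 0) + 1
            high = next((x for x, d in degree.items() if d > k), None)
            if high is None:
                break
            edges = {(u, v) for (u, v) in edges if high not in (u, v)}
            k -= 1
        if len(edges) > k * k:
            return None
        return edges, k

    # A star with five leaves and k = 2: the centre has degree 5 > 2,
    # so it is taken into the cover and an empty kernel remains.
    print(vertex_cover_kernel([(0, i) for i in range(1, 6)], 2))  # (set(), 1)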
We present two aspects of the use of randomisation in computational complexity theory.
In the first part, we review certain gap-introducing reductions which were used to prove inapproximability results. Because these reductions were originally stated as randomised reductions, they implied non-approximability results under a complexity hypothesis involving randomised complexity classes. We were able to derandomise them and thereby weaken the hypotheses to ones about deterministic complexity classes.
In the second part we show how descriptive complexity can be used to shed light on questions about randomised complexity classes. To this end we define randomised logics as a counterpart to randomised complexity classes and show that we can indeed capture several important randomised complexity classes by these logics. We then show derandomisation results, both positive and negative, for these randomised logics. In particular, we can prove unconditionally that a certain uniform variant of randomised AC0 can _not_ be derandomised.
(joint work with Martin Grohe, Magdalena Grüber, Kristoffer Arnsfelt Hansen and Elad Verbin).
Johann A. Makowsky (Technion IIT, Haifa)
The Spectrum Problem of H. Scholz (1954) and Its Influence on the Development of Finite Model Theory