09:00 – 09:05 | Opening Ceremony
09:05 – 09:35 | 朱涵
09:35 – 10:05 | 蔡小娟
10:05 – 10:35 | 龙环
10:35 – 11:05 | 蔡烜
11:05 – 11:35 | 何超栋
11:35 – 12:05 | 岳厚光
12:05 – 13:00 | Lunch Break
13:00 – 13:45 | Dr. 邓玉欣
13:45 – 14:30 | Prof. 朱洪
14:30 – 14:40 | BREAK
14:40 – 15:25 | Dr. 陈翌佳
15:25 – 16:10 | Prof. 陈仪香
16:10 – 16:20 | BREAK
16:20 – 17:05 | Dr. 钱海峰
17:05 – 17:50 | 孙贺
17:50 – 18:35 | Prof. 傅育熙
18:35 – 18:45 | Closing Ceremony
18:45 | Lab Activities
We know that 3SAT and SAT are both NP-hard. However, they differ in an important way: a CNF formula $\alpha$ with $n$ variables can have length $m$ exponentially larger than $n$, whereas the length of a 3CNF formula is bounded polynomially in $n$. In [Harnik and Naor, FOCS '06], Harnik and Naor asked whether there is an algorithm that computes from a CNF formula $\alpha$ with $n$ variables an equivalent formula $\beta$ of length polynomial in $n$, i.e., a polynomial compression algorithm. Moreover, they showed that such an algorithm would have very significant applications in cryptography.
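To see the size gap concretely, here is a standard counting argument (added for illustration; it is not part of the abstract): over variables $x_1, \ldots, x_n$, each variable occurs in a clause positively, negatively, or not at all, so there are $3^n - 1$ distinct nonempty clauses, and a CNF formula without repeated clauses can have length $m = 2^{\Theta(n)}$. A 3CNF formula instead draws each clause from at most $(2n)^3 = O(n^3)$ possibilities, so without repeated clauses its length is $O(n^3)$.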
This problem was actually first posed in parameterized complexity [Flum and Grohe 06], in the context of proving lower bounds for kernelizations, which has been a major open problem in the field.
A recent breakthrough due to Fortnow and Santhanam shows that the problem is closely related to the polynomial hierarchy (PH). More precisely, they showed that if PH does not collapse, then SAT has no such compression algorithm. By refining their method, we show that for every $\varepsilon > 0$ there is no polynomial-time algorithm that, for every CNF formula $\alpha$ with $n$ variables and of length $m$, computes an equivalent instance $\beta$ of length $n^{O(1)} \cdot m^{1-\varepsilon}$, unless PH collapses.
This is joint work with Joerg Flum and Moritz Mueller.
We present two new algorithms for the range-efficient $F_0$ estimation problem, improving the best previously known result, due to Pavan and Tirthapura in 2005. Furthermore, our algorithms can be applied to improve the best previously known result for the Max-Dominance Norm problem.
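For orientation: $F_0$ denotes the number of distinct elements in a data stream, and "range-efficient" means that each stream update inserts an entire integer range $[a, b]$ rather than a single item. As a baseline only, here is a minimal sketch of the standard k-minimum-values (KMV) estimator for the plain single-item setting; it is not the algorithm of this talk, and all names in it are our own.

    # A minimal sketch of the standard k-minimum-values (KMV) F_0
    # estimator for single-item streams. This is a well-known baseline,
    # NOT the range-efficient algorithms of the talk.
    import hashlib
    import heapq

    def kmv_estimate(stream, k=256):
        """Estimate the number of distinct elements by tracking the
        k smallest hash values seen so far."""
        def h(x):
            # Hash an item to a pseudo-uniform value in (0, 1].
            d = hashlib.sha1(str(x).encode()).digest()
            return (int.from_bytes(d[:8], 'big') + 1) / 2.0**64
        heap, seen = [], set()  # max-heap (via negation) of the k smallest hashes
        for x in stream:
            v = h(x)
            if v in seen:
                continue
            if len(heap) < k:
                heapq.heappush(heap, -v)
                seen.add(v)
            elif v < -heap[0]:
                # v displaces the current k-th smallest hash.
                seen.remove(-heapq.heappushpop(heap, -v))
                seen.add(v)
        if len(heap) < k:
            return len(heap)  # fewer than k distinct items: count is exact
        return (k - 1) / (-heap[0])  # -heap[0] is the k-th smallest hash

    # Example: a stream with 5000 distinct items; prints a value near 5000.
    print(round(kmv_estimate((i % 5000 for i in range(100000)), k=64)))

The range-efficient setting is strictly harder: a single update may cover exponentially many items, so the smallest hash values within a range must be located without enumerating the range one item at a time.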
The idea of testing can be used in a wide variety of situations, from daily life to computer science. It entered concurrency theory in the 1980s and yielded equivalences that make only inarguable distinctions between processes.
In probabilistic concurrency, however, it seemed necessary to extend the standard testing framework beyond a direct probabilistic generalization in order for it to remain useful, as observed by several researchers in the 1990s. An attractive possibility was the extension from one success action to multiple success actions, so as to represent multiple objectives in one test, yielding vectors of outcomes rather than scalars. In this talk we will see that multiple success actions are unnecessary, at least when processes are finitary, that is, finitely branching and finite-state: scalar outcomes are just as powerful. Thus for finitary processes we can retain the original, simpler testing approach and its direct connections with other naturally scalar-valued phenomena. This result enables us to characterise testing preorders in terms of simulations, modal logics, and inequational theories.
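To fix notation (a standard formulation of probabilistic testing preorders, paraphrased from the literature rather than quoted from the talk): write $Apply(T, p) \subseteq [0, 1]$ for the set of success probabilities that a test $T$ can produce when applied to a process $p$, one value for each resolution of the nondeterminism. The may preorder compares best outcomes: $p \sqsubseteq_{pmay} q$ iff for every test $T$ and every $x \in Apply(T, p)$ there is a $y \in Apply(T, q)$ with $x \leq y$. The must preorder compares worst outcomes: $p \sqsubseteq_{pmust} q$ iff for every test $T$ and every $y \in Apply(T, q)$ there is an $x \in Apply(T, p)$ with $x \leq y$. With vector-valued tests the same comparisons are read pointwise; the result above says that, for finitary processes, this adds no distinguishing power.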
(The talk is based on joint work with R. van Glabbeek, M. Hennessy, C. Morgan and C. Zhang.)