BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Exponential Lower Bounds for Solving Infinitary Payoff Games and L
 inear Programs - Oliver Friedmann (LMU Munich)
DTSTART:20120507T140000Z
DTEND:20120507T150000Z
UID:TALK37305@talks.cam.ac.uk
CONTACT:Felix Fischer
DESCRIPTION:Policy iteration is one of the most important algorithmic sch
 emes for solving infinitary payoff games such as parity games\, stochast
 ic games\, Markov decision processes\, and many others. It is parameteri
 zed by an improvement rule that determines how to proceed in the iterati
 on from one policy to the next. It is a major open problem whether there
 is an improvement rule that results in a polynomial time algorithm for s
 olving one of the considered (two-player) game classes.\nSimplex algorit
 hms for solving linear programs are closely related to policy iteration al
 gorithms. Like policy iteration\, the simplex algorithm is parameteriz
 ed by a so-called pivoting rule that describes how to proceed from one b
 asic feasible solution in the linear program to the next. Also\, it is a m
 ajor open problem whether there is a pivoting rule that results in a (st
 rongly) polynomial time algorithm for solving linear programs. We descri
 be our recent constructions for parity games that give rise to exponenti
 al lower bounds for several improvement rules\, and how to extend the lo
 wer bound to more expressive game classes like stochastic games. We sho
 w that our construction for parity games can be translated to Markov dec
 ision processes\, transferring our lower bound to their domain\, and fin
 ally show how the lower bounds for the MDPs can be transferred to the li
 near programming domain\, solving problems that have been open for sever
 al decades.
LOCATION:MR12\, Centre for Mathematical Sciences\, Wilberforce Road\, Camb
 ridge
END:VEVENT
END:VCALENDAR
