Organizer: David Lipshutz. Click on a title to display the associated abstract. Non-standard seminar times and locations are emphasized in **bold**.

Tuesday, September 8, 2015, 11 a.m., Room 110:

William McEneaney, University of California, San Diego.

*Stationary action and a diffusion representation for the Schrödinger equation*.

One approach to the modeling of conservative dynamical systems is through the principle of stationary action. The approach was first investigated by Hamilton, who proposed a least-action principle. In particular, he hypothesized that a conservative system moves along the path that minimizes the so-called action functional. This functional is the time-integral of the kinetic energy minus the potential energy over the path of the process. However, the least-action principle applies only to systems over sufficiently short time durations. Later researchers determined that the correct formulation is the principle of stationary action, which states that a conservative system moves along a path at which the first-order Fréchet derivative of the action functional is the zero element. Traditional control approaches may be applied to the least-action principle, but are inadequate for the stationary-action principle. The latter requires the development of the notion of a staticization operation, which is similar to minimization and maximization, but seeks the input that makes the payoff stationary. One can develop a dynamic programming approach to staticization, and obtain a relationship with the associated Hamilton-Jacobi partial differential equation (PDE). As staticization is defined by a zero derivative rather than by minimization or maximization, it may be extended to complex-valued functionals.
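For orientation, the action functional and its stationarity condition take the following standard form (textbook notation, not taken from the talk):

```latex
% Action functional for a path q : [0,T] \to \mathbb{R}^n
S[q] = \int_0^T \Big( \tfrac{1}{2}\, m\, |\dot q(t)|^2 - V(q(t)) \Big)\, dt.
% Stationary action: the first variation vanishes for all perturbations h
% with h(0) = h(T) = 0,
\delta S[q;h] = \lim_{\epsilon \to 0} \frac{S[q + \epsilon h] - S[q]}{\epsilon} = 0
\quad \text{for all admissible } h,
% which is equivalent to the Euler--Lagrange (Newton) equation
m\, \ddot q(t) = -\nabla V(q(t)).
% Least action (minimization over paths q) characterizes the same
% trajectories only over sufficiently short time horizons.
```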

There have long been discussions regarding the relationship of the time-dependent Schrödinger equation to the principle of stationary action. If one takes the Maslov dequantization of the Schrödinger equation, which is similar to a log transform thereof, the resulting PDE is second order with complex-valued coefficients over real space. Extending the domain to complex space, but retaining purely real time, one may develop a diffusion representation for the solution. In particular, the diffusion process is complex-valued due to the presence of a complex-valued multiplier on the real-valued Brownian motion input. A verification result is obtained, in which the solution of the dequantized Schrödinger equation is given as the stationary value of a complex stochastic control problem. The result is somewhat related to Feynman's path-integral concept.

Tuesday, September 15, 2015, 11 a.m., Room 110:

Daniel Hernández-Hernández, CIMAT.

*Singular control and stopping problems for Lévy processes*.

Singular control problems appear in a large class of applications, such as queueing, mathematical finance and inventory systems. In this talk we shall present an overview of recent developments in the study of the partial differential equations, with a nonlocal integral operator, associated with this class of control problems. The solution is based on analytical arguments.

We also plan to present the extension of these singular control problems to the case where an additional stopping controller is included, either competing or cooperating with the first one. The model for this class of games is a Lévy process with one-sided jumps.

**Friday, October 2, 2015, 9 a.m. – 5 p.m., Microsoft Research, Cambridge, MA**:

Charles River Lectures on Probability Theory and Related Topics.

Tuesday, October 6, 2015, 11 a.m., Room 110:

Mohammadreza Aghajani, Brown University.

*PDE method for randomized load balancing networks*.

We introduce a general framework for studying a class of randomized load balancing models in a system with a large number of servers that have generally distributed service times and use a first-come-first-served policy within each queue. Under fairly general conditions, we use an interacting measure-valued process representation to obtain hydrodynamic limits for these models, and establish a propagation of chaos result. Furthermore, we present a set of partial integro-differential equations whose solution can be used to approximate the transient behavior of such systems. We prove that this set of equations has a unique solution and use this solution to gain insight into properties of systems that are of practical interest.

This is joint work with Kavita Ramanan.
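The abstract above treats general service distributions; a minimal simulation sketch of one classical instance of such a model, the power-of-$d$-choices policy with exponential services (an assumption for illustration, not the talk's general setting), looks as follows:

```python
import random

def simulate_jsq_d(n_servers=50, d=2, arrival_rate=0.9, horizon=500.0, seed=0):
    """Toy continuous-time simulation of the power-of-d-choices policy:
    each arrival samples d queues uniformly at random and joins the
    shortest of them; each busy server completes jobs at rate 1 (FIFO)."""
    rng = random.Random(seed)
    queues = [0] * n_servers   # queue length at each server
    busy = 0                   # number of servers with at least one job
    t = 0.0
    while t < horizon:
        # competing exponential clocks: arrivals vs. service completions
        total_rate = arrival_rate * n_servers + busy
        t += rng.expovariate(total_rate)
        if rng.random() * total_rate < arrival_rate * n_servers:
            # arrival: sample d servers, join the shortest queue among them
            choices = rng.sample(range(n_servers), d)
            target = min(choices, key=lambda i: queues[i])
            if queues[target] == 0:
                busy += 1
            queues[target] += 1
        else:
            # departure: a uniformly chosen busy server completes a job
            i = rng.choice([j for j in range(n_servers) if queues[j] > 0])
            queues[i] -= 1
            if queues[i] == 0:
                busy -= 1
    return queues

queues = simulate_jsq_d()
print(sum(queues) / len(queues))  # empirical mean queue length
```

In the exponential case the hydrodynamic limit is the classical Vvedenskaya–Dobrushin–Karpelevich ODE system; the talk's PDE framework extends this picture to general service distributions.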

Tuesday, October 13, 2015, 11 a.m., Room 110:

Natesh Pillai, Harvard University.

*Mixing times for a constrained Ising process on the torus at low density*.

We discuss a kinetically constrained Ising process (KCIP) associated with a graph $G$ and density parameter $p$; this process is an interacting particle system with state space $\{0,1\}^G$. The stationary distribution of the KCIP Markov chain is the Binomial($|G|,p$) distribution on the number of particles, conditioned on having at least one particle. The 'constraint' in the name of the process refers to the rule that a vertex cannot change its state unless it has at least one neighbor in state '1'. The KCIP has been proposed by statistical physicists as a model for the glass transition, and more recently as a simple algorithm for data storage in computer networks. In this talk, we will give a proof sketch for finding the mixing time of this process on the torus $G=\mathbb{Z}_L^d$, $d\geq 3$, in the low-density regime $p=c/n$, where $n = L^d$ is the number of vertices, for arbitrary $0< c<1$; this regime is the subject of a conjecture of Aldous and is natural in the context of computer networks. Our arguments provide a counterexample to Aldous' conjecture, suggest a natural modification of the conjecture, and show that this modification is correct up to logarithmic factors. En route, we also give a novel characterization for 'decomposable' reversible Markov chains.

This is joint work with Aaron Smith, U of Ottawa.
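The constrained dynamics can be sketched in a few lines. The version below runs on the cycle $\mathbb{Z}_L$ (a one-dimensional stand-in for the torus $\mathbb{Z}_L^d$ of the talk) and uses one common convention for the update rule (Fredrickson-Andersen-type heat-bath refresh); the exact dynamics in the talk may differ in details:

```python
import random

def kcip_step(state, p, rng):
    """One update of a kinetically constrained Ising process on the cycle:
    a uniformly chosen vertex refreshes its spin (1 w.p. p, 0 w.p. 1-p)
    only if at least one of its neighbors is in state 1."""
    L = len(state)
    v = rng.randrange(L)
    if state[(v - 1) % L] == 1 or state[(v + 1) % L] == 1:
        state[v] = 1 if rng.random() < p else 0

def run_kcip(L=30, p=0.1, steps=20_000, seed=1):
    rng = random.Random(seed)
    state = [0] * L
    state[0] = 1  # start from a single particle
    for _ in range(steps):
        kcip_step(state, p, rng)
    return state

final = run_kcip()
print(sum(final))  # number of particles after the run
```

Note that the constraint automatically preserves the at-least-one-particle condition: a lone particle has no occupied neighbor, so the vertex carrying it can never update, which matches the conditioning in the stationary distribution.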

Tuesday, October 27, 2015, 11 a.m., Room 110:

Rama Cont, Imperial College London.

*Kolmogorov without Markov: Path-dependent Kolmogorov equations*.

Path-dependent Kolmogorov equations are a class of functional partial differential equations on the space of continuous functions which extend Kolmogorov's backward equation to path-dependent functionals of stochastic processes [1]. Solutions of such equations are non-anticipative functionals which extend the notion of harmonic function to a non-Markovian, functional setting. We discuss existence, uniqueness and properties of weak and strong solutions of path-dependent Kolmogorov equations and provide an analytical characterization of all square-integrable martingale functionals of an Ito process as weak solutions of the corresponding path-dependent Kolmogorov equation.

The proofs are based on the recently developed Functional Ito calculus [2,3]. These results have natural applications to non-Markovian stochastic control and the modeling of systems with path-dependent features.
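In the notation of the Functional Ito calculus (a standard formulation for orientation, not quoted verbatim from the talk), the path-dependent Kolmogorov equation for a non-anticipative functional $F$ along an Ito process with coefficients $b,\sigma$ reads:

```latex
% Path-dependent (functional) backward Kolmogorov equation:
% \mathcal{D}_t F is the horizontal (time) derivative and
% \nabla_\omega F, \nabla_\omega^2 F the vertical (space) derivatives.
\mathcal{D}_t F(t,\omega)
  + b(t,\omega) \cdot \nabla_\omega F(t,\omega)
  + \frac{1}{2} \operatorname{tr}\!\left(
      \sigma(t,\omega)\, \sigma(t,\omega)^{\top} \nabla_\omega^2 F(t,\omega)
    \right) = 0,
\qquad F(T,\omega) = H(\omega).
% Solutions play the role of harmonic functions in this setting:
% t \mapsto F(t, X_{t \wedge \cdot}) is then a (local) martingale.
```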

[1] R Cont (2012) Functional Ito Calculus and Functional Kolmogorov Equations, in: V Bally et al: Stochastic integration by parts and Functional Ito calculus, *Lecture Notes of the Barcelona Summer School in Stochastic Analysis* (July 2012), Springer.

[2] R Cont, D Fournié (2013) Functional Ito calculus and stochastic integral representations of martingales, Annals of Probability, Vol 41, No 1, 109-133.

[3] R Cont, D Fournié (2010) Change of variable formulas for non-anticipative functionals on path space, Journal of Functional Analysis, Vol 259, 1043-1072.

**Thursday, November 5, 2015**, 11 a.m., Room 110:

Daniel Lacker, Brown University.

*Mean field limits for stochastic differential games*.

Mean field game (MFG) theory generalizes classical models of interacting particle systems by replacing the particles with rational agents, making the theory applicable in economics and other social sciences. Most research so far has focused on the existence and uniqueness of Nash equilibria in a model which arises intuitively as a continuum limit (i.e. an infinite-agent version) of a given large-population stochastic differential game of a certain symmetric type. This talk discusses some recent results in this direction, particularly for MFGs with common noise, but more attention is paid to recent progress on a less well-understood problem: given, for each n, a Nash equilibrium for the n-player game, in what sense, if any, do these equilibria converge as n tends to infinity? The answer is somewhat unexpected, and certain forms of randomness can prevail in the limit which are well beyond the scope of the usual notion of an MFG solution. A new notion of weak MFG solution is shown to precisely characterize the set of possible limits of approximate Nash equilibria of n-player games, for a large class of models.

Tuesday, November 10, 2015, 11 a.m., Room 110:

Shuenn-Jyi Sheu, National Central University.

*Merton optimal consumption problems in incomplete markets*.

Merton-type optimal consumption problems are classical portfolio optimization problems. There is a huge literature addressing them, although their study is still far from complete. The dynamic programming approach and the martingale method are the two main ideas developed to study these problems. We can view the problem as a stochastic control problem. Dynamic programming is a standard approach in control theory, and it leads to the Hamilton-Jacobi-Bellman (HJB) equation. When the HJB equation can be solved, a candidate (Markovian) optimal consumption policy can be derived. This approach suggests that the optimal consumption problem can be solved through the study of a nonlinear partial differential equation (of elliptic or parabolic type), hence the need for a serious study of the HJB equation. On the other hand, the martingale method uses stochastic calculus and duality arguments from convex analysis. It has been applied very successfully in the case of complete markets.
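For orientation, the classical complete-market instance of the problem and its HJB equation take the following standard textbook form (not specific to the incomplete-market setting of the talk):

```latex
% Wealth dynamics under portfolio fraction \pi_t and consumption rate c_t,
% with interest rate r, stock drift \mu and volatility \sigma:
dX_t = \big[ r X_t + \pi_t (\mu - r) X_t - c_t \big]\, dt
       + \sigma \pi_t X_t\, dW_t.
% HJB equation for the value function V(t,x) with utility U:
\partial_t V + \sup_{\pi \in \mathbb{R},\; c \ge 0}
  \Big\{ \big( r x + \pi (\mu - r) x - c \big) V_x
  + \tfrac{1}{2} \sigma^2 \pi^2 x^2 V_{xx} + U(c) \Big\} = 0.
% For CRRA utility U(c) = c^{1-\gamma}/(1-\gamma), the first-order
% condition in \pi yields the classical Merton fraction
\pi^* = \frac{\mu - r}{\gamma \sigma^2}.
```

In incomplete markets such closed forms are generally unavailable, which is the motivation for the iterative market-completion scheme described below.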

In this talk, we show some limitations of these two important methods in solving the portfolio optimization problem, and hence the need for new ideas. We also discuss some new observations for dealing with these limitations. The HJB equation can be rewritten as an Isaacs equation, an inf-sup type equation, with an additional parameter describing market completions; the latter is an idea developed in the martingale method. This Isaacs equation suggests an updating scheme for choosing a new parameter (for the market completion). The problem can thus be studied through a sequence of complete markets, which calls for a convergence analysis. Another interesting observation is the convexity of the value as a function of the parameter describing the market completion. The updating scheme (of changing the parameter for the market completion) is shown to be closely related to the gradient flow induced by the value function. As a consequence, we expect this iterative scheme to converge exponentially fast.

The talk is based on joint works with Hata (2012) and Hata-Nagai-Sheu (2015), and on ongoing projects of Fleming-Nagai-Sheu and Sekine-Sheu.

**Thursday, November 19 – Friday, November 20, 2015, Courant Institute, New York, NY**:

Northeast Probability Seminar.

Tuesday, December 1, 2015, 11 a.m., Room 110:

Douglas Rizzolo, University of Delaware.

*Random flight processes in external fields*.

Random flight processes naturally arise as the Boltzmann-Grad limit of Lorentz gas models. Lorentz gases are notoriously difficult to analyze directly, so in order to study them one often studies the associated random flight process instead. Motivated by the effects of an external field on a Lorentz gas, we consider random flight processes in external fields. We will discuss results on determining transience versus recurrence for these processes, as well as results showing how the external field influences the free path and the diffusive limit for these models. This talk is based on joint work with Krzysztof Burdzy, Alexandru Hening, and Eric Wayman.

Tuesday, December 15, 2015, 11 a.m., Room 110:

Nawaf Bou-Rabee, Rutgers University, Camden.

*Markov chain approximation methods for the numerical solution of stochastic differential equations*.

Stochastic differential equations (SDEs) model random fluctuations in applications as diverse as molecular dynamics, mathematical finance, population dynamics, epidemiology, laser dynamics and atmosphere/ocean sciences. For the most part, these SDEs cannot be solved exactly, and numerical methods are used to approximate their solutions. The main goal of these approximations is to estimate statistics associated with the SDE solution, such as mean first passage times, exit probabilities, and multi-time expectations of observables. This talk demonstrates that the go-to method for numerically solving SDEs, the Euler-Maruyama scheme, is impractical for SDEs with numerical stiffness, long-time simulation of ergodic SDEs, SDEs with unattainable boundaries, SDEs with internal discontinuities, SDEs with boundary conditions (e.g. Dirichlet, Neumann, and oblique derivative boundary conditions), and SDEs with interface conditions (for these SDEs, existence/uniqueness of weak solutions are active areas of research). These issues motivate a shift in perspective that naturally leads to Markov chain approximation methods. We will show how to use numerical PDE methods to construct these types of approximations, and stochastic methods to simulate and analyze them. This talk is influenced by ongoing work with Paul Dupuis, Ioannis Karatzas, Harold Kushner and Gideon Simpson, and previous work with Eric Vanden-Eijnden.
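A minimal sketch of the idea in one dimension, using the standard Kushner-Dupuis "locally consistent" transition probabilities and a hypothetical test SDE (dX = -X dt + dW, chosen here for illustration only):

```python
import random

def mcam_chain(b, sigma, h):
    """Kushner-Dupuis locally consistent chain on the grid hZ for the SDE
    dX = b(X) dt + sigma dW: from x the chain jumps to x+h or x-h with
    probabilities proportional to sigma^2/2 + h*b^+(x) (up) and
    sigma^2/2 + h*b^-(x) (down); each jump carries an interpolation
    time dt(x) = h^2 / Q(x), where Q(x) = sigma^2 + h*|b(x)|."""
    def step(x, rng):
        bp, bm = max(b(x), 0.0), max(-b(x), 0.0)
        Q = sigma ** 2 + h * (bp + bm)
        p_up = (sigma ** 2 / 2 + h * bp) / Q
        dt = h ** 2 / Q
        return (x + h if rng.random() < p_up else x - h), dt
    return step

def mean_exit_time(x0=0.0, lo=-1.0, hi=1.0, h=0.05, n_paths=1000, seed=0):
    """Monte Carlo estimate of the mean exit time of (lo, hi) for the
    chain approximating dX = -X dt + dW started at x0."""
    rng = random.Random(seed)
    step = mcam_chain(b=lambda x: -x, sigma=1.0, h=h)
    total = 0.0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while lo < x < hi:
            x, dt = step(x, rng)
            t += dt
        total += t
    return total / n_paths

print(mean_exit_time())
```

Because the chain matches the drift and diffusion of the SDE to first order in h, such statistics converge to those of the SDE as h goes to 0; boundary and interface conditions are imposed directly on the chain, which is what makes the approach robust in the problematic cases listed above.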