During the panel discussion on the first day of the workshop, Jim Gatheral pointed out that "for some reason" exotic derivatives have gone out of fashion during the past two years, with the result that a whole bunch of talented quants had to divert their brain power to something else, in particular algorithmic trading. Naturally, the workshop organizers were aware of that and algorithmic trading was the predominant topic of the second day. I suspect this was one of the reasons for changing the original name of the workshop from "numerical" to "computational" methods in finance - a very appropriate generalization.
The first talk by Ciamac Moallemi allowed me for the first time to properly understand the dynamics of a limit order book, mainly because he used a deterministic fluid model, thereby developing a good deal of intuition before actually solving the optimal execution problem in such a model.
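To convey the flavor of that intuition (this is not Moallemi's actual model, just a toy sketch with parameters of my own choosing): in a fluid limit, a single FIFO price queue receives limit orders at a constant rate and loses volume to market orders at a constant rate, so an order joining the back of a queue fills deterministically once cumulative market-order flow works through the volume ahead of it.

```python
# Toy deterministic fluid model of a single limit-order queue.
# All rates and the FIFO assumption are illustrative, not from the talk.

def time_to_fill(q0, mu):
    """Time until an order joining the back of a FIFO queue of size q0
    is executed, when market orders consume volume at rate mu.
    Later limit orders queue behind us, so only q0 matters."""
    if mu <= 0:
        return float("inf")  # no consumption: never filled
    return q0 / mu

def queue_size(t, q0, lam, mu):
    """Fluid queue size at time t: inflow lam, outflow mu, floored at 0."""
    return max(q0 + (lam - mu) * t, 0.0)

# Example: queue of 1000 shares, market orders eat 50 shares/sec.
print(time_to_fill(1000, 50))        # 20.0 seconds until our order fills
print(queue_size(10, 1000, 30, 50))  # 800.0 shares remain after 10 sec
```

The stochastic versions of these quantities are what make the real problem hard; the fluid limit strips that away so the trade-offs become visible.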
In characteristically impeccable style, Jim Gatheral followed with a review of the best-known optimal execution models in the literature, adding personal touches along the way (such as re-deriving closed-form optimal policies by Euler-Lagrange instead of HJB) and leading up to his own new results about the possibility of price manipulation. As anyone who has read his volatility surface book knows, Jim is not a fan of unnecessarily complicated notation, nor is he afraid of deep mathematics. As a result, everything that he writes looks very simple, but moving from one equation to the next can require unexpected effort (if you really want to understand what is going on, that is).
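For concreteness, here is a minimal sketch of the kind of closed-form policy such a calculus-of-variations derivation produces, in the standard Almgren-Chriss setting with linear temporary impact (the parameter values below are mine, purely illustrative): the Euler-Lagrange equation x'' = kappa^2 x with boundary conditions x(0) = X0 and x(T) = 0 is solved by a sinh trajectory.

```python
import math

def almgren_chriss_trajectory(X0, T, sigma, eta, lam, n=10):
    """Optimal holdings x(t) in the Almgren-Chriss model with linear
    temporary impact eta, volatility sigma and risk aversion lam.
    The Euler-Lagrange equation x'' = kappa^2 * x with x(0)=X0, x(T)=0
    gives x(t) = X0 * sinh(kappa*(T - t)) / sinh(kappa*T)."""
    kappa = math.sqrt(lam * sigma**2 / eta)
    ts = [T * i / n for i in range(n + 1)]
    return [(t, X0 * math.sinh(kappa * (T - t)) / math.sinh(kappa * T))
            for t in ts]

# Liquidate 1,000,000 shares over one day (illustrative parameters).
traj = almgren_chriss_trajectory(X0=1e6, T=1.0, sigma=0.3, eta=1e-6, lam=1e-6)
# Holdings start at X0 and decay to zero; higher risk aversion (larger
# kappa) front-loads the schedule, kappa -> 0 recovers linear (VWAP-like).
```

The point of the Euler-Lagrange route is exactly what the two lines above suggest: a second-order ODE with two boundary conditions, rather than a full HJB PDE.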
Petter Kolm shifted the focus of the discussion by concentrating on the "buy-side" of optimal execution, that is, incorporating the views of a portfolio manager into the execution problem. Since these bring different objectives (maximizing returns over a long time horizon, improving diversification, etc.), he showed that traditional execution policies are generally sub-optimal. In particular, he prescribed that rebalancing should be done more often than traditionally suggested, and predicted a corresponding growth in the research and practice of multi-period portfolio optimization.
Chris Rogers brought the discussion back to numerical methods and introduced a way to solve optimal stopping problems using convex approximations to a convex value function. As another participant commented during the break, you might not be clever enough to exactly reproduce what Chris does, but his presentations always make perfectly clear what is going on at each step of the way. He is also very good at identifying fatal flaws in other people's work (I personally tremble whenever he is listening to my talks), but unlike other acute critics, he applies the same level of rigor to his own work. As a result, he can be disconcertingly honest in pointing out the weaknesses of his proposed methods. My impression is that it is too early to pass judgment on this particular method, but at least all the advantages and disadvantages were clearly spelled out.
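To see why convexity is a natural hook (this is emphatically not Chris's algorithm, just the basic fact such methods exploit, with illustrative parameters of my own): the value function of a canonical stopping problem like the American put is convex in the spot price, so it can in principle be approximated from below by envelopes of affine functions. A plain binomial backward induction lets us check the convexity numerically.

```python
import math

# Generic illustration (not Rogers's method): American put values from
# binomial backward induction are convex in the spot price.
# All parameters below are illustrative.

def american_put(S0, K, r, u, d, steps):
    """Binomial-tree backward induction for an American put.
    r is the per-step interest rate; u, d are per-step moves."""
    q = (math.exp(r) - d) / (u - d)      # risk-neutral up-probability
    disc = math.exp(-r)
    prices = [S0 * u**j * d**(steps - j) for j in range(steps + 1)]
    values = [max(K - s, 0.0) for s in prices]
    for n in range(steps - 1, -1, -1):
        prices = [S0 * u**j * d**(n - j) for j in range(n + 1)]
        values = [max(max(K - prices[j], 0.0),
                      disc * (q * values[j + 1] + (1 - q) * values[j]))
                  for j in range(n + 1)]
    return values[0]

spots = [80, 90, 100, 110, 120]
vals = [american_put(s, K=100, r=0.002, u=1.05, d=1 / 1.05, steps=50)
        for s in spots]
# Convexity on an equally spaced grid: each interior value lies below
# the average of its neighbours.
convex = all(vals[i] <= (vals[i - 1] + vals[i + 1]) / 2 + 1e-9
             for i in range(1, len(vals) - 1))
```

Convexity is preserved through the backward induction (maxima and positive combinations of convex functions are convex), which is what makes approximating the value function within a convex family plausible in the first place.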
In an informal presentation after lunch, Mike Giles gave a fascinating overview of the use of GPUs (graphics processing units) in quantitative finance. It was highly entertaining to hear that banks in Canary Wharf had exhausted the energy capacity available to power their computers (with any new power generation being committed to the 2012 Olympic Games), so that they faced the choice of either migrating to less power-hungry GPUs (and having to rewrite all their code) or building new data centers on the periphery of London.
In the context of portfolio credit risk, Kay Giesecke showed how to use importance sampling to efficiently simulate rare events and calculate several related risk measures. Liming Feng concluded the day by introducing a discrete Hilbert transform method to calculate prices for several exotic options (Bermudan options, lookbacks, etc.) in a variety of Lévy models, with surprisingly good error estimates. Apologies for being brief on these last two talks, but I'm running out of time for this post.
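The importance-sampling idea can be illustrated in a setting far simpler than portfolio credit risk (the threshold, mean shift and sample count below are my choices, purely for illustration): to estimate a deep Gaussian tail probability, sample from a mean-shifted distribution where the rare event is common, and reweight each sample by the likelihood ratio.

```python
import math, random

def tail_prob_is(threshold, mu, n, seed=0):
    """Estimate P(Z > threshold) for Z ~ N(0,1) by importance sampling:
    draw Y ~ N(mu, 1) and reweight by the likelihood ratio
    phi(y) / phi(y - mu) = exp(-mu*y + mu^2/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(mu, 1.0)
        if y > threshold:
            total += math.exp(-mu * y + 0.5 * mu * mu)
    return total / n

est = tail_prob_is(4.0, mu=4.0, n=50_000)
# The exact value is about 3.17e-5; plain Monte Carlo with 50,000 samples
# would typically see only one or two such events, while here roughly
# half the (reweighted) samples land in the rare region.
```

In the credit context the same trick is applied to default events, where the tilting must be engineered for the portfolio loss distribution rather than a single Gaussian; that engineering is the substance of the talk, not reproduced here.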