Research Projects List
Monis Quantitative Analysis Team
(February 16, 1999)
Here is a summary of the possible research directions identified by the team:
Candidates are warmly encouraged to develop, within the limits of the constraints
imposed by the internship, a critical approach to the problems. Moreover, in
any research activity it is important to foster synergy through discussions
with colleagues. For that purpose we have set up a world-wide discussion
group hosted by www.mailbase.ac.uk, to which any interested individual can
subscribe.
Some useful information on numerical simulations can be obtained from
the lecture notes of Prof. M. Caffarel (available in French only).
Prototyping of models can be performed on the available numerical and
algebraic computation platforms (Maple V, Matlab), or by implementing
algorithms in C/C++.
1) Malliavin Calculus for Monte Carlo methods.
Malliavin calculus has been applied to financial problems at different levels.
From a mathematical perspective, the standard problem of portfolio analysis
solved by Black-Scholes is well defined in terms of Backward Stochastic
Differential Equations (BSDEs). While the general theory of BSDEs guarantees
that this standard problem has a unique solution, it says little about how to
find that solution explicitly. Malliavin calculus allows the rigorously minded
mathematician to answer standard questions such as: what is the option price,
and what is the optimal strategy to use? Thus, in this first approach,
Malliavin calculus appears as the natural calculus for the study
of financial problems.
A direct implication of this statement is that Malliavin calculus provides
a hedging formula more fundamental than the usual ones. Thus it can be used
to derive fundamental methods of pricing and hedging, as described for example
in the references listed at the end of this section.
Another use of Malliavin calculus, the one of interest in
this project, is to devise efficient Monte Carlo methods for
calculating the prices and sensitivities of financial products. These are
computed, respectively, as expected values of functionals of Brownian motion
and as the differentials of such expectations.
The Malliavin calculus defines the derivative of functions on Wiener
space and can be seen as a theory of integration by parts in this space.
Thanks to Malliavin calculus, we can show that the hedge factors, i.e. the
differentials of the price, can be computed as the expectation, under a
risk-neutral probability measure, of the option payoff multiplied by a weight.
A good description of this method is given in [4,5].
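Schematically, and deferring to [4,5] for the precise assumptions, the statement is that for a payoff \(\Phi\) of a diffusion \(X_T^{\lambda}\) depending on a parameter \(\lambda\),

\[ \frac{\partial}{\partial\lambda}\,E\big[\Phi(X_T^{\lambda})\big] \;=\; E\big[\Phi(X_T^{\lambda})\,\pi\big], \]

where the weight \(\pi\) is produced by the integration-by-parts formula and does not involve the derivative of \(\Phi\).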
The aim of the present project is to apply Malliavin calculus to compute
the differentials of a wide set of options. This requires, first, an extension
of the work of Fournié et al. to overcome the difficulties associated
with non-differentiable payoffs, followed by an implementation of the method
(which will be used solely to demonstrate the good behaviour of the proposed
approach). Among the most important characteristics we will focus on are
accuracy and speed.
To make our goal precise, we will expand on the previous paragraph.
At Monis we have an application called Generalized Monte Carlo (GMC) which
performs, as its name suggests, a Monte Carlo simulation to compute the price
of an option as well as its differentials, commonly called Greeks. To
compute the Greeks, GMC uses a finite difference approximation, e.g.
(F(x+d) - F(x-d))/2d, where F(x) is the option price (computed as an average
over the set of sample paths). Greek calculations therefore require roughly
twice the time needed for the price itself. Using Malliavin calculus
we expect to reduce this calculation time as well as to improve the
stability of the Greeks.
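As an illustration of this bump-and-revalue approach (a minimal sketch only, not the GMC implementation; the model, payoff and parameter values are assumptions chosen for the example), the following C++ fragment prices a European call under Black-Scholes and estimates its delta by the central difference above, reusing the same Gaussian draws for the bumped and unbumped spots:

    // Minimal sketch (not the GMC code): Monte Carlo price and central
    // finite-difference delta for a European call under geometric Brownian
    // motion, reusing the same draws for all three spot levels.
    #include <algorithm>
    #include <cmath>
    #include <iostream>
    #include <random>

    int main() {
        const double S0 = 100.0, K = 100.0, r = 0.05, sigma = 0.2, T = 1.0;
        const double d = 0.01 * S0;            // bump size in (F(x+d)-F(x-d))/2d
        const long   N = 100000;               // number of sample paths
        std::mt19937 gen(42);
        std::normal_distribution<double> gauss(0.0, 1.0);

        double sum = 0.0, sumUp = 0.0, sumDown = 0.0;
        for (long i = 0; i < N; ++i) {
            const double z = gauss(gen);       // one terminal draw per path
            const double g = std::exp((r - 0.5 * sigma * sigma) * T
                                      + sigma * std::sqrt(T) * z);
            sum     += std::max(S0 * g - K, 0.0);
            sumUp   += std::max((S0 + d) * g - K, 0.0);   // bumped spot, same draw
            sumDown += std::max((S0 - d) * g - K, 0.0);
        }
        const double disc = std::exp(-r * T) / N;
        std::cout << "price    = " << disc * sum << "\n"
                  << "FD delta = " << disc * (sumUp - sumDown) / (2.0 * d) << "\n";
    }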
A particularly interesting feature of GMC is that the option payoff
is specified by the user in a "high-level programming language"; we are
therefore looking for an implementation of Malliavin calculus which is, to
a great extent, option independent. A difficulty to overcome, in order
to achieve this goal, is to find an appropriate treatment of singularities
in the option payoff. A good example is presented in Fournié et al., who show
how the discontinuity of a call payoff can influence the computation (see the
sketch below). A prototype program should be implemented and used to test the
various methods found to treat singularities, as well as to estimate the
accuracy and speed of convergence.
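As a concrete special case (a sketch under Black-Scholes assumptions, in the spirit of Fournié et al. rather than reproducing their general result), the delta can be rewritten as delta = e^{-rT} E[payoff(S_T) W_T/(S0 sigma T)], where the weight is independent of the payoff. The fragment below applies this to a digital call, whose discontinuous payoff makes the finite-difference estimator noisy; all parameter values are illustrative:

    // Sketch: Malliavin-weighted delta for a digital call under Black-Scholes,
    //   delta = e^{-rT} E[ 1{S_T > K} * W_T / (S0 * sigma * T) ].
    // The discontinuous payoff is never differentiated.
    #include <cmath>
    #include <iostream>
    #include <random>

    int main() {
        const double S0 = 100.0, K = 100.0, r = 0.05, sigma = 0.2, T = 1.0;
        const long   N = 100000;
        std::mt19937 gen(7);
        std::normal_distribution<double> gauss(0.0, 1.0);

        double sum = 0.0;
        for (long i = 0; i < N; ++i) {
            const double WT = std::sqrt(T) * gauss(gen);   // terminal Brownian value
            const double ST = S0 * std::exp((r - 0.5 * sigma * sigma) * T
                                            + sigma * WT);
            if (ST > K) sum += WT / (S0 * sigma * T);      // payoff * Malliavin weight
        }
        std::cout << "Malliavin delta = " << std::exp(-r * T) * sum / N << "\n";
    }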
Finally, a report (leading to a further publication) should be provided,
with a clear description of the algorithms. This report should be clear
enough to allow for a direct implementation of the methods in the existing
GMC application.
[1] D. Nualart, The Malliavin Calculus and Related Topics, Probability and Its Applications, Springer-Verlag, 1995.
[2] B. Oksendal, An Introduction to Malliavin Calculus with Applications to Economics, Lecture Notes, 1997.
[3] E. Barucci and M.E. Mancino, Wiener chaos and Hermite polynomial expansion for pricing and hedging contingent claims, preprint.
[4] E. Fournié, J.M. Lasry, J. Lebuchoux, P.L. Lions and N. Touzi, An application of Malliavin calculus to Monte Carlo methods in finance, Ceremade 9726, 1997.
[5] E. Fournié, J.M. Lasry, J. Lebuchoux and P.L. Lions, Applications of Malliavin calculus to Monte Carlo methods in finance II, Ceremade 9901, 1999.
2) Application of Extreme Value Theory to risk estimation and pricing
of financial instruments: CAT-Bonds / Insurance / Weather Derivatives.
Due to the increasing complexity of financial instruments, ever more
sophisticated tools to manage the associated risk have to be put in place.
The securitisation of risk and alternative risk transfer call for a coherent
and realistic valuation of extreme fluctuations. The conventional approach in
parametric risk estimation in finance is to use a "normal" distribution; this
choice makes the mathematics easier, but it describes the tail characteristics
of real distributions badly. Indeed, extreme fluctuations are characterised by
their intrinsic scarcity. A purely frequentist probabilistic approach is
therefore ruled out, and the conventional "Gaussian" approach often leads to
dangerously unrealistic risk estimates.
Nevertheless, results from EVT, such as the Fisher-Tippett theorem, can
be used to overcome this difficulty. Indeed, under very general assumptions,
the probability distributions of extreme events (which from a frequentist
standpoint could be estimated only with an infinite number of observations)
can be described by a generalised Pareto distribution. Moreover, for a large
class of underlying distributions it is possible to determine the Hill
estimator, which is needed to fix the actual excess distribution. The
interesting point is that the Hill estimator can be computed from market data
with a limited number of observations. It is therefore possible, given
a time horizon and a bound on the jump size, to estimate the probability
of a jump bigger than the stated bound.
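In formulas (standard Pickands-Balkema-de Haan notation, not taken from this document): for a high threshold \(u\), the distribution of the excesses is approximately generalised Pareto,

\[ P(X - u > y \mid X > u) \;\approx\; \Big(1 + \xi\,\frac{y}{\beta}\Big)^{-1/\xi}, \qquad y > 0, \]

where the tail index \(\xi\) can be obtained from the Hill estimator and \(\beta > 0\) is a scale parameter.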
[Table: boundary levels at the 99.6% and 99.92% confidence levels.]
In simple terms, using real-time data with a sparseness compatible with
our forecast horizon, we should be able to assess a realistic probability
of exceeding a fixed boundary level. This value clearly depends on the
confidence level, i.e. on the quantile we need, as shown in the previous
table. Extreme value theory offers an important set of techniques and
estimators for quantifying the boundaries between different gain/loss classes.
Furthermore, the Hill estimator could be an indirect indicator of market
liquidity: by analysing high-frequency data under a time-aggregation
hypothesis, it should be possible to detect trends in the evolution of
liquidity (i.e. in the bid/offer spread). A minimal numerical sketch of the
estimator follows.
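The following fragment is a minimal sketch of the Hill estimator and of the implied tail probability, applied here to a synthetic Pareto sample standing in for, e.g., absolute index returns; the sample, the threshold and the choice of k (the number of upper order statistics, a delicate point in practice) are all assumptions of the example:

    // Sketch: Hill estimator of the tail index from a sample of positive
    // "losses", and the implied tail probability P(X > x).
    #include <algorithm>
    #include <cmath>
    #include <functional>
    #include <iostream>
    #include <random>
    #include <vector>

    int main() {
        const double alpha = 3.0;                 // true tail index of the sample
        const std::size_t n = 5000, k = 250;      // k upper order statistics
        std::mt19937 gen(1);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        std::vector<double> x(n);
        for (auto& xi : x)                        // Pareto(alpha) via inversion
            xi = std::pow(1.0 - u(gen), -1.0 / alpha);

        std::sort(x.begin(), x.end(), std::greater<double>());   // descending
        double gammaHat = 0.0;                    // estimates gamma = 1/alpha
        for (std::size_t i = 0; i < k; ++i)
            gammaHat += std::log(x[i]) - std::log(x[k]);
        gammaHat /= k;

        const double bound = 5.0;                 // jump size we care about
        const double pTail = (double(k) / n)
                           * std::pow(bound / x[k], -1.0 / gammaHat);
        std::cout << "Hill tail index ~ " << 1.0 / gammaHat
                  << ", P(X > " << bound << ") ~ " << pTail << "\n";
    }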
The candidate will be asked to familiarise himself with the basics of Extreme
Value Theory (e.g. P. Embrechts et al., the web site on EVT in finance at
ETHZ, Richard Davis & Thomas Mikosch), and to apply it to the analysis
of different time series (e.g. market indexes such as the CAC 40, futures on
the CAC 40, or, if possible, meteorological data). An area of application
and interest is the valuation of properly securitised products in the realm
of catastrophe insurance, such as the CAT futures traded at the CBOT, where
securitisation is achieved through the construction of derivatives written on
a newly constructed industry-wide loss-ratio index. Good overviews stressing
the financial engineering of these products are Doherty (1997), Tilley (1997),
Schmock (1997), and Samorodnitsky and Resnick (1998).
P. Embrechts, C. Klüppelberg and T. Mikosch, "Modelling Extremal Events", Applications of Mathematics, Springer.
N.A. Doherty, Financial innovation in the management of catastrophe risk, Joint Day Proceedings, XXVIIIth International ASTIN Colloquium and 7th International AFIR Colloquium, Cairns (Australia), 1-26, 1997.
J.A. Tilley, The securitisation of catastrophic property risks, Joint Day Proceedings, XXVIIIth International ASTIN Colloquium and 7th International AFIR Colloquium, Cairns (Australia), 27-53, 1997.
U. Schmock, Estimating the value of the WinCAT coupons of the Winterthur Insurance Convertible Bond, Joint Day Proceedings, XXVIIIth International ASTIN Colloquium and 7th International AFIR Colloquium, Cairns (Australia), 231-259, 1997.
P. Embrechts et al., "Living on the edge", RISK 11, 96-100, 1998.
4) PDE solutions, comparison with existing trees.
This project is concerned with the pricing of convertible bonds using Partial
Differential Equation (PDE) methods. In the following, the model is presented
and a possible PDE method is suggested for its evaluation. In the choice of
method, particular emphasis should be given to the calculation of derivatives
as well as to the treatment of discontinuities.
A bond is a contract, paid up-front, that yields a known amount on a
known date in the future, the maturity date T. The bond may also pay known
cash dividends, the coupons, at fixed times during the life of the
contract. Bonds may be issued by both governments and companies. The main
purpose of a bond issue is the raising of capital.
Convertible bonds are bonds involving a dual option. On the one hand, the
holder has the option to exchange the bond for the company's stock at
certain times in the future. The amount of stock obtained in exchange for
one bond is called the exchange ratio n(t). On the other hand, the issuer
has the right to buy back the bonds. The price at which the bonds can be
bought back is the call price CP(t). The holder retains the right to convert
the bonds once they have been called; the call feature is therefore often
a way of forcing conversion at a time earlier than the holder would otherwise
choose. Furthermore, the convertible bond is sometimes puttable, i.e. the
holder has the right to sell it back to the company at a known put price.
We consider a two-factor model to compute the price of the convertible
bond. The two factors are the company stock price S(t) and the interest
rate r(t). Furthermore, a third stochastic component is introduced into the
conversion ratio alpha(t) by equity resets. As a consequence, the bond price
is a function V(S, r, alpha, t) of the three stochastic components.
The company's stock price is assumed to follow a geometric Brownian
motion, and the interest rate r(t) is described by the extended Vasicek
model (the two processes are correlated).
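For concreteness, a standard way of writing such dynamics under the pricing measure (the precise extended Vasicek parametrisation below is an assumption, not taken from this document) is

\[ dS_t = r_t S_t\,dt + \sigma_S S_t\,dW_t^1, \qquad dr_t = \big(\theta(t) - a\,r_t\big)\,dt + \sigma_r\,dW_t^2, \qquad dW_t^1\,dW_t^2 = \rho\,dt, \]

where \(\theta(t)\) is fitted to the initial term structure and \(\rho\) is the equity/rate correlation.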
The starting point of this project is the PDE, which is to be solved with
particular attention to the method for handling the boundary conditions.
For instance, the boundaries related to the equity reset feature reduce
the PDE to a first-order hyperbolic equation, which must be discretised
carefully to avoid spurious oscillations.
We will discretise the equation using a Galerkin finite element method for
the diffusion terms (all the second-order derivatives). The convection terms
(involving first-order derivatives in S, alpha and r) will be discretised
using a finite volume approach. One popular method, already applied in
finance, is the flux limiter of van Leer, which is known to produce an
oscillation-free solution (a sketch is given below). An automatic time-step
selection method will be used between sampling dates (resets, dividends and
coupons). An unstructured grid of triangular elements will be used; the
possibility of inserting new nodes at arbitrary locations in the computational
domain is an important feature, particularly useful for financial applications.
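A minimal sketch of the limiter itself, for a one-dimensional advection step with positive velocity (the full convertible-bond discretisation is of course much richer than this; the smoke-test values are arbitrary):

    #include <cmath>
    #include <iostream>

    // van Leer flux limiter: phi(r) = (r + |r|) / (1 + |r|), where r is the
    // ratio of consecutive solution gradients; phi = 0 for r <= 0 (pure
    // upwind) and phi < 2 always, which keeps the scheme oscillation-free.
    double vanLeer(double r) {
        return (r + std::fabs(r)) / (1.0 + std::fabs(r));
    }

    // Limited face flux for 1-D advection u_t + a u_x = 0 with a > 0:
    // F_{i+1/2} = a*u_i + (a/2)*(1 - a*dt/dx) * phi(r_i) * (u_{i+1} - u_i),
    // with r_i = (u_i - u_{i-1}) / (u_{i+1} - u_i).
    double limitedFlux(double a, double dtOverDx,
                       double uPrev, double u, double uNext) {
        const double denom = uNext - u;
        const double r = (std::fabs(denom) < 1e-14) ? 0.0 : (u - uPrev) / denom;
        return a * u + 0.5 * a * (1.0 - a * dtOverDx) * vanLeer(r) * denom;
    }

    int main() {  // evaluate the flux across a sharp step as a smoke test
        std::cout << limitedFlux(1.0, 0.5, 0.0, 0.0, 1.0) << " "
                  << limitedFlux(1.0, 0.5, 0.0, 1.0, 1.0) << "\n";
    }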
Two methods for handling the early exercise feature associated
with American options will be considered: (1) view the problem as a linear
complementarity problem and use a projected SOR technique to solve the
discrete algebraic equations (see the sketch below); (2) alternatively, view
the problem as a nonlinear algebraic system in which the constraint is imposed
through a penalty method, the resulting system of nonlinear algebraic
equations being solved by Newton iteration.
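A minimal sketch of the first approach (the tridiagonal structure, relaxation parameter and smoke test are assumptions of the example, not a prescription):

    // Sketch: projected SOR (PSOR) for the linear complementarity problem
    //   A V >= b,  V >= g,  (V - g)'(A V - b) = 0,
    // with tridiagonal A (sub/diag/sup), as arises from an implicit time
    // step with an early-exercise (or conversion) constraint g.
    #include <algorithm>
    #include <iostream>
    #include <vector>

    void psorSweep(const std::vector<double>& sub,   // a_{i,i-1} (sub[0] unused)
                   const std::vector<double>& diag,  // a_{i,i}
                   const std::vector<double>& sup,   // a_{i,i+1} (last unused)
                   const std::vector<double>& b,
                   const std::vector<double>& g,     // constraint: V >= g
                   std::vector<double>& V, double omega) {
        const std::size_t n = V.size();
        for (std::size_t i = 0; i < n; ++i) {
            double res = b[i] - diag[i] * V[i];
            if (i > 0)     res -= sub[i] * V[i - 1];  // uses updated V[i-1]
            if (i + 1 < n) res -= sup[i] * V[i + 1];
            V[i] = std::max(V[i] + omega * res / diag[i], g[i]);  // SOR + project
        }
    }

    int main() {  // tiny smoke test: A = I, b = 0, constraint g = 1 forces V = 1
        std::vector<double> sub(3, 0.0), diag(3, 1.0), sup(3, 0.0),
                            b(3, 0.0), g(3, 1.0), V(3, 0.0);
        for (int it = 0; it < 50; ++it) psorSweep(sub, diag, sup, b, g, V, 1.2);
        std::cout << V[0] << " " << V[1] << " " << V[2] << "\n";  // prints 1 1 1
    }

Sweeps are repeated until the update falls below a tolerance.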
This project is mainly a numerical analysis study of PDEs applied to
finance. Its main interest lies in finding a good and efficient
PDE method to deal with the financial constraints. From a technical point
of view, the PDE is linear, so all the difficulty is in the treatment
of the boundary conditions, which can be discontinuous. An operational
constraint is the requirement of a solution in a time not exceeding a few
minutes. The PDE method is competing with lattice-based methods as well as
with Monte Carlo methods; an advanced implementation making use of adaptive
techniques is therefore imperative for the success of this project.
P. Wilmott, S. Howison and J. Dewynne, The Mathematics of Financial Derivatives, Cambridge University Press, 1997.
R.J. LeVeque, Numerical Methods for Conservation Laws, Birkhäuser, 1992.
R. Zvan, P.A. Forsyth and K.R. Vetzal, A finite element approach to the pricing of discrete lookbacks with stochastic volatility, preprint.
5) Interacting agent models and definition of crash precursors.
In our research planning we are interested in investigating interacting agent
models. The idea is to apply techniques from statistical physics to
construct a model of an evolving stock market driven by individuality
(noise traders) and by general expectations (fundamentalists). Indeed, the
frequency of large variations in stock prices raises doubts about existing
models, which all fail to account for non-Gaussian statistics. From a less
quantitative perspective, but with a possibly valuable impact from a
forecasting point of view, it would be interesting to understand market
dynamics and to identify crash precursors from the fundamental structure of
market evolution. In a general view, financial markets seem to exhibit
scale-invariant behaviour.
If for a physicist scale invariance means the absence of a characteristic
scale, from a business point of view it means the existence of catastrophic
risk which can bankrupt a company. It is rather tempting to identify the
mechanism responsible for this scale invariance with what physicists call
Self-Organised Criticality (SOC), i.e. the expression of an underlying
unstable dynamical critical point.
General examples of SOC systems are sand piles, earthquakes, forest
fires, etc. In the sand-pile case, for example, the pile grows until the
structure becomes unstable and an avalanche is initiated. In this way the
pile reaches a stationary critical state, characterised by a critical slope,
in which additional grains of sand fall off the pile via avalanches whose
lifetimes and sizes are distributed according to a power law. A toy
simulation of this mechanism is sketched below.
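A toy implementation (the classic Bak-Tang-Wiesenfeld sandpile, purely illustrative and unrelated to any market data; the grid size and grain count are arbitrary choices):

    // Toy Bak-Tang-Wiesenfeld sandpile, illustrating SOC (not a market model):
    // grains are dropped at random sites; any site holding >= 4 grains topples,
    // giving one grain to each neighbour (grains fall off at the edges).
    // Avalanche sizes settle into a power-law distribution.
    #include <iostream>
    #include <random>
    #include <vector>

    int main() {
        const int L = 20;
        std::vector<std::vector<int>> h(L, std::vector<int>(L, 0));
        std::vector<long> histogram(20, 0);          // log2-binned avalanche sizes
        std::mt19937 gen(3);
        std::uniform_int_distribution<int> site(0, L - 1);

        for (long grain = 0; grain < 200000; ++grain) {
            ++h[site(gen)][site(gen)];
            long size = 0;                           // topplings due to this grain
            for (bool moved = true; moved; ) {
                moved = false;
                for (int i = 0; i < L; ++i)
                    for (int j = 0; j < L; ++j)
                        if (h[i][j] >= 4) {
                            h[i][j] -= 4; ++size; moved = true;
                            if (i > 0)     ++h[i - 1][j];
                            if (i < L - 1) ++h[i + 1][j];
                            if (j > 0)     ++h[i][j - 1];
                            if (j < L - 1) ++h[i][j + 1];
                        }
            }
            int bin = 0;
            for (long s = size; s > 0; s >>= 1) ++bin;   // ~ log2(size) + 1
            if (bin > 19) bin = 19;
            ++histogram[bin];                        // bin 0 = no avalanche
        }
        for (int b = 1; b < 20; ++b)
            std::cout << "size in [" << (1 << (b - 1)) << ", " << (1 << b)
                      << "): " << histogram[b] << "\n";
    }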
Albeit the general conditions under which a physical system exhibits
SOC are largely unknown, some facts have nevertheless been established:
- the large-scale evolution should obey a diffusion process (as markets
are supposed to do) satisfying a global conservation law;
- more generally, a feedback mechanism must operate to attract the dynamics
towards a critical state.
The second point is one of the most neglected in mathematical modelling
in finance, since the market and the market players are considered as totally
independent, not influencing each other. However, recent results in that
direction are reported in the literature cited below.
It is rather tempting to apply the picture of a dynamical critical point,
similar to what has been found recently for earthquakes, to predict
financial crashes. Dynamical critical points exhibit characteristic
log-periodic signatures, evidence of which has been found by Sornette et
al. in the analysis of the two major crashes of this century: October 1929
(Dow Jones) and October 1987 (S&P 500). For these two cases they found,
using historical data preceding the crashes, a critical time (i.e. the time
of the market crash) in very good agreement with the real timing of the
events. They suggest that this agreement reflects the fundamental
co-operative nature of the behaviour of stock markets.
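For reference, the log-periodic signature used in such fits has the form p(t) = A + B (tc - t)^beta [1 + C cos(omega ln(tc - t) + phi)]; the fragment below merely evaluates one such trajectory, with all parameter values chosen for illustration rather than fitted to any crash:

    // Sketch: evaluate a log-periodic precursor trajectory
    //   p(t) = A + B*(tc - t)^beta * (1 + C*cos(omega*log(tc - t) + phi))
    // for t < tc.  All parameter values below are purely illustrative.
    #include <cmath>
    #include <iostream>

    int main() {
        const double A = 400.0, B = -50.0, beta = 0.33;
        const double C = 0.1, omega = 7.0, phi = 1.0;
        const double tc = 100.0;                    // critical (crash) time

        for (double t = 0.0; t < tc - 0.5; t += 1.0) {
            const double dt = tc - t;
            const double p = A + B * std::pow(dt, beta)
                               * (1.0 + C * std::cos(omega * std::log(dt) + phi));
            std::cout << t << " " << p << "\n";     // columns: time, index level
        }
    }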
In general, the co-operative behaviour of complex systems cannot be reduced
to a decomposition into elementary causes: a crash emerges naturally as an
intrinsic signature of the functioning of the market. There is therefore a
need to insert into market models the effect of positive feedback
interactions, in which traders exchange information according to a
hierarchical structure. As a general comment, it is interesting to note the
existing similarities between the log-periodic structures observed in the
market and Elliott waves, a technique strongly rooted in financial
analysts' folklore.
Crash statistics are very poor, and the natural excitement that any
kind of evidence of a crash precursor can produce should always be moderated
by critical views. Albeit it is tempting to see financial crashes
as critical phenomena described by statistical mechanics, where in a
particular situation all the subparts of the system react co-operatively,
there are only a few experimental situations where these scenarios can be
tested. In that sense any empirical finding is not statistically significant,
and could be even more dangerous than the crash itself if its conclusions
were used to set up a strategy based on an ex ante prediction
(L. Laloux et al.). To use such empirical results, the lack of a
statistically relevant set of experimental configurations must be compensated
by a priori knowledge of the market conditions. Any conclusion on a market
crash prediction based on empirical results must be handled as a Bayesian
inference on the magnitude and time-scale of the next catastrophic event,
given the knowledge of the past.
In this framework it is rather tempting to apply the picture of a dynamical
critical point, where the system is spontaneously driven towards a critical
dynamical state. This approach has already been analysed by several authors,
from P. Bak et al. to Lux and Marchesi (Nature 397, 498-500 (1999)); see
also Vandewalle et al. Indeed, the evidence of scaling properties in
financial prices, similar to those characterising systems with a large number
of interacting particles, supports the idea that the main observed features
of financial data can be explained by a multi-agent model of the financial
market.
The candidate will be asked to familiarise himself with the existing
literature on the subject and to test the theory on real data. This means
that he will be asked to create a working prototype of the model and to
implement it so that it can be tested against real market data. Clearly, the
previous literature constitutes only a starting point, and any original
contribution will be appreciated in its own right.
Eur. Phys. J. B 4, 139-141 (1998).
R. Cont, Proceedings of the CNRS Workshop on Scale Invariance, Les Houches, 1997.
D. Sornette, in "Physics of Complexity", Editions Frontieres.
J. Cvitanic, in "Mathematics of Derivative Securities", Cambridge University Press, and references therein.
A.J. Frost and R. Prechter, "Elliott Wave Principle", New Classics Library, 1985.
6) MC generator for interest rates.
In view of developing a new Monte Carlo generator for interest rate financial
products, it will be interesting to analyse the relation between the
different models and real data. The primary objective will be to clarify the
roles played by the various features of the models and by their parameters in
the pricing of bonds and related assets (see e.g. D. Backus), analysing a
database of monthly spot rates (continuously compounded zero-coupon yields)
and forward rates from the McCulloch-Kwon dataset (binary "zipped" file,
487k); high-density data are also available.
The universe of bond pricing is populated by a variety of models used by
academics and practitioners alike. Since a theory of bond prices is
essentially the choice of a price kernel, i.e. of the stochastic process
governing the prices of state-contingent claims, different models differ in
their description of the underlying process. It follows that the assumptions
used to develop coherent pricing criteria for interest-rate-dependent
securities may differ from the statistical description of the movements of
real interest rates (see R. Cont). Indeed, as pointed out by Bouchaud
et al. and R. Cont, some empirical observations on the deformation of the
term structure cannot be explained by classical arbitrage-free models.
Starting from a statistical description of the real data, it will be
interesting to perform a comparative study of the most fashionable models
used by practitioners, e.g. Black, Derman and Toy; Black-Karasinski; Heath,
Jarrow and Morton, and to identify possible inconsistencies between the
theories and the observations. As a comparative test, a multi-affine analysis
can be performed to investigate the existence of scaling properties in the
available data. The results can then be compared with the analysis of
synthetic interest rate time series obtained via Monte Carlo simulation under
a given choice of price kernel (a minimal sketch of such a generator is given
below). Finally, a phenomenological approach via principal component analysis
could be applied, following the method of Bouchaud et al. and the
infinite-dimensional approach of R. Cont.
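As a minimal seed for such a generator (a sketch only: plain Vasicek rather than the extended model, with illustrative parameters and the exact Gaussian transition of the short rate), the following fragment produces short-rate paths and prices a zero-coupon bond as E[exp(-integral of r dt)]:

    // Sketch: Monte Carlo zero-coupon bond price under a plain Vasicek model
    //   dr = a*(b - r) dt + sigma dW
    // using the exact Gaussian transition of the short rate over each step.
    // Parameters are illustrative; the project would use richer models.
    #include <cmath>
    #include <iostream>
    #include <random>

    int main() {
        const double a = 0.1, b = 0.05, sigma = 0.01, r0 = 0.04;
        const double T = 5.0;
        const int    steps = 60;                 // monthly steps over 5 years
        const long   paths = 50000;
        const double dt = T / steps;
        // Exact one-step transition: r' = r*e^{-a dt} + b*(1 - e^{-a dt}) + sd*Z
        const double decay = std::exp(-a * dt);
        const double sd = sigma * std::sqrt((1.0 - decay * decay) / (2.0 * a));

        std::mt19937 gen(11);
        std::normal_distribution<double> gauss(0.0, 1.0);

        double sumDisc = 0.0;
        for (long p = 0; p < paths; ++p) {
            double r = r0, integral = 0.0;
            for (int s = 0; s < steps; ++s) {
                const double rNext = r * decay + b * (1.0 - decay)
                                   + sd * gauss(gen);
                integral += 0.5 * (r + rNext) * dt;   // trapezoidal integral of r
                r = rNext;
            }
            sumDisc += std::exp(-integral);
        }
        std::cout << "P(0," << T << ") ~ " << sumDisc / paths << "\n";
    }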
N. Vandewalle and M. Ausloos, Eur. Phys. J. B 4, 257-261 (1998).
London, February 16th, 1999.
Dr Gabriele Susinno
Dr Marco Rigo
Quantitative Analysts at Monis