{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "### Training \"inspecteur modèles\" BNP Paribas - Monte-Carlo methods - December 12-17/2019" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# TP 3 - Continuous time processes and their approximation schemes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 1. Euler scheme\n", "\n", "We consider the process $(S_t)_{t \\ge 0}$ satisfying the equation\n", "\n", "$$\n", "dS_t = r \\, S_tdt + \\sigma \\, S_t dW_t,\n", "\\qquad S_0=s_0>0,\n", "\\qquad r,\\sigma>0,\n", "$$\n", "\n", "where $(W_t)_{t \\ge 0}$ is a real valued Brownian motion.\n", "Recall that the stochastic differential equation (SDE) above admits an explicit solution:\n", "\n", "$$\n", "S_t = s_0 \\exp \\left( \\bigl(r-\\frac{\\sigma^2}{2} \\bigr)t + \\sigma W_t \\right), \\qquad t \\ge 0,\n", "$$\n", "\n", "called geometric Brownian motion.\n", "\n", "The explicit knowledge of the solution allows to\n", "\n", "+ _Exactly_ simulate both the solution of the equation and its Euler scheme.\n", "\n", "+ Compare their trajectories, and estimate the error of the Euler scheme.\n", "\n", "We fix a time horizon $T$ and a number $n$ of discretization steps, yielding the grid\n", "\n", "$$ t_k = k \\ \\frac T n $$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Question 1 (a) (fundamental) :\n", "\n", "Write the evolution of the true process \n", "\n", "$$ \\bigl( S_{t_k} \\bigr)_{0\\leq k\\leq n} $$\n", "\n", "on the time grid $t_k$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Question 1 (b) (fundamental as well) :\n", "\n", "Write the evolution of the Euler scheme\n", "\n", "$$ \\bigl(S^n_{t_k} \\bigr)_{0\\leq k\\leq n} $$\n", "\n", "on the same time grid $t_k$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### $\\blacktriangleright$ Implementation: \n", "\n", "Questions 1 (a) and (b) provide recursive simulation schemes for both the true process $S$ and its Euler scheme, which can be simulated using the same Brownian increments.\n", "\n", "In the cells below, implement the simulation schemes given above, and compare their trajectories of $S$ and the Euler scheme $S^n$." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "T = 1. \n", "sigma = 0.3\n", "r = 0.02\n", "S_0 = 100" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#########################################\n", "## Discretisation dates t_1, ..., t_n\n", "n = 52\n", "dates = np.linspace(0, T, n+1)\n", "\n", "# Time step\n", "Delta_T = T/n\n", "\n", "#################################################\n", "# Simulation of the trajectories\n", "# S: the true process\n", "# Se: the Euler scheme\n", "#################################################\n", "# We initialize vectors with n+1 components\n", "S = S_0 * np.ones(n+1)\n", "Se = S_0 * np.ones(n+1)\n", "\n", "for i in range(n):\n", " ## Simulation of a standard Gaussian distribution N(0,1)\n", " G = np.random.randn()\n", " \n", " ##############################################################\n", " ## TO DO: generate the Brownien increment on [t_i, t_{i+1}]\n", " ## using the random variable G\n", " ##############################################################\n", " accr_brownien = 0. 
 " \n", " ############################################################\n", " ## TO DO: complete the simulation\n", " # + of the trajectory of the process S\n", " # + of the trajectory of the Euler scheme Se\n", " ############################################################\n", " \n", " ############################################################\n", " # Process S\n", " S[i+1] = S[i] * 1 # UPDATE here\n", " \n", " ############################################################\n", " # Euler scheme\n", " Se[i+1] = Se[i] * 1 # UPDATE here\n", "\n", "###########################\n", "## Plot the trajectories\n", "###########################\n", "import matplotlib.pyplot as plt\n", "\n", "###########################################################\n", "plt.plot(dates, S, color=\"b\", label=\"Process S\")\n", "plt.plot(dates, Se, color=\"r\", label=r\"Euler scheme $S^n$\",\n", " marker = \".\", linewidth=0.,)\n", "###########################################################\n", "\n", "plt.xlabel(\"discretization dates t_i\", fontsize=12)\n", "\n", "plt.legend(loc=\"best\", fontsize=12)\n", "\n", "######################################\n", "# Comparison of final values: \n", "# remove the if 0\n", "if 0:\n", " print(\"Final value $S_T$ of the true process: %1.2f\" %S[-1])\n", " print(\"Final value $S^n_T$ of the Euler scheme: %1.2f\" %Se[-1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Question 2:\n", "\n", "Our goal is now to observe the error at the final time $T$:\n", "\n", "$$\n", "error(n) = \mathbb{E} \Bigl[ \bigl|S_T - S^n_T \bigr| \Bigr],\n", "$$\n", "\n", "called the _strong_ error of the Euler scheme.\n", "\n", "We can estimate $error(n)$ with a standard Monte-Carlo method: replace the expectation with the empirical mean\n", "\n", "$$\n", "\frac 1 N \sum_{i=1}^N \bigl|S^i_T - S^{n,i}_T \bigr|\n", "$$\n", "\n", "over a large number $N$ of simulated trajectories of the process and the Euler scheme." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def errorEuler(T, sig, r, S_0, n, N):\n", " \"\"\"\n", " Euler scheme with n steps for geometric Brownian motion.\n", " \n", " This function estimates the strong error E[|S_T - S^n_T|]\n", " from a sample of N iid simulations of the Euler scheme.\n", " \n", " Output of the function: the pair (strong error, radius of the 95% confidence interval)\n", " \"\"\"\n", " S = S_0*np.ones(N)\n", " Se = S_0*np.ones(N)\n", " strong_error = np.zeros(N)\n", " \n", " Delta_T = T / n\n", " \n", " ####################################################\n", " # We can now recursively compute the evolution of the\n", " # whole iid sample of size N simultaneously: \n", " # recall S and Se are vectors of size N\n", " \n", " for k in range(n):\n", " g = np.random.randn(N)\n", " #######################################################\n", " # To do: implement the evolution of the Euler scheme Se\n", " # and the process S\n", " #\n", " # REMARK: we do not need to store the whole trajectories.\n", " # We just require the terminal values S_T and S^n_T\n", " # in order to evaluate the final error\n", " #######################################################\n", " S = S_0 # UPDATE\n", " \n", " Se = S_0 # UPDATE\n", " \n", " #######################################################\n", " # The for loop ends here.\n", " # Now: evaluate the empirical approximation of the error \n", " #######################################################\n", " strong_error = 0. # UPDATE\n",
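 " #######################################################\n", " # Possible completion (a hedged sketch, not necessarily the intended solution),\n", " # assuming S and Se hold the N terminal values after the loop:\n", " #   in the loop:  S  = S  * np.exp((r - sig**2/2)*Delta_T + sig*np.sqrt(Delta_T)*g)\n", " #                 Se = Se * (1. + r*Delta_T + sig*np.sqrt(Delta_T)*g)\n", " #   and here:     strong_error       = np.mean(np.abs(S - Se))\n", " #                 empirical_variance = np.var(np.abs(S - Se))\n", " #######################################################\n",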
 " \n", " #############################################\n", " # To Do: evaluate the 95% confidence interval\n", " ############################################\n", " empirical_variance = 0. # UPDATE \n", "\n", " radius_conf_int = 1.96 * np.sqrt(empirical_variance / N)\n", " \n", " return strong_error, radius_conf_int" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The cell below\n", "\n", "+ Estimates the strong error for different values of $n$ (number of time steps): $n = 1, 2, 4, ..., 2^A$ (here for $A = 6$).\n", "\n", "\n", "+ Plots the graph of the strong error +/- the radius of the confidence interval, as a function of the time step $\frac T n$." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "N = int(1.e4) # size of the iid sample\n", "\n", "n = 1 # initial number of time steps for the Euler scheme\n", "\n", "## The number of time steps will evolve as n = 1, 2, 4, ..., 2^A\n", "A = 6\n", "\n", "number_steps = np.zeros(A)\n", "\n", "strong_errors = np.zeros(A) \n", "radii_conf_intervals = np.zeros(A) \n", "\n", "for i in range(A):\n", " number_steps[i] = n\n", " \n", " #############################################\n", " # UPDATE here\n", " #############################################\n", " # strong_errors[i], radii_conf_intervals[i] = ???\n", "\n", " n = 2*n # double the number of time steps\n", "\n", "#############################################\n", "# Plot the error +/- the confidence interval\n", "# as functions of the time step T / n\n", "#############################################\n", "plt.clf()\n", "plt.plot( T / number_steps , strong_errors , color=\"r\", label=\"Strong error\")\n", "plt.plot( T / number_steps , strong_errors - radii_conf_intervals, color=\"b\", label=\"95% conf interval\")\n", "plt.plot( T / number_steps , strong_errors + radii_conf_intervals, color=\"b\")\n", "\n", "plt.xlabel('Discretisation step')\n", "plt.legend(loc=\"best\", fontsize = 12)\n", "plt.show()\n", "\n", "print(\"Number of steps n:\"); print(number_steps)\n", "print(\"Strong errors of the Euler scheme with n steps:\"); print(strong_errors)\n", "print(\"95% conf interval radii:\"); print(radii_conf_intervals)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 2. Multi-level Monte-Carlo method" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now consider the application of the Multi-level Monte-Carlo method to the evaluation of\n", "\n", "$$\mathbb{E}[f(S_T)],$$\n", "\n", "where $f(s)=(K-s)^+$ and $K$ is fixed, $S$ being the process considered in the previous exercise.\n",
 "\n", "Reminder: the Multi-level method is based on a combination of Euler schemes with decreasing steps\n", "\n", "$$h_l = \frac T {n_l} = \frac T{2^l}.$$\n", "\n", "The corresponding time grids are given by $t_k^l = k h_l = k \frac T{2^l}$, for $k = 0, \dots, 2^l$.\n", "\n", "Precisely, the multi-level estimator of $\mathbb{E}[f(S_T)]$ is given by\n", "\n", "$$\n", "\widehat{f(S_T)}_{\mathrm{ML}}\n", "= \frac1{M_0} \sum_{m=1}^{M_0} (K - S_T^{h_0,0,m})^+\n", "+ \sum_{l=1}^L \frac1{M_l} \sum_{m=1}^{M_l}\n", "\left( (K - S_T^{h_l,l,m})^+ - (K - S_T^{h_{l-1},l,m})^+ \right)\n", "$$ \n", "\n", "where\n", "- for each $h$ and $l$, the $(S^{h,l,m})_{1 \le m \le M_l}$ are iid samples of the Euler scheme with step $h$;\n", "\n", "\n", "- all the simulations are independent between different levels $l$;\n", "\n", "\n", "- the schemes $S^{h_l,l,m}$ and $S^{h_{l-1},l,m}$ are constructed from the same Brownian increments.\n", "\n", "\n", "We recall that, using the properties of the Euler scheme (weak error of order $1$, strong error of order $1/2$), one can show the following upper bound for the quadratic error of the multi-level estimator:\n", "\n", "$$\n", "\mathbb{E} \Bigl[\n", "\bigl(\n", "\widehat{f(S_T)}_{\mathrm{ML}} - \mathbb{E}[(K - S_T)^+]\n", "\bigr)^2 \Bigr]\n", "\le\n", "C \biggl( h_L^2 + \sum_{l=0}^L \frac{h_l}{M_l} \biggr)\n", "\qquad \forall M_0, \dots, M_L,\n", "$$\n", "\n", "where $C$ is a constant independent of $L$ and the $M_l$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "1. Find the smallest $L$ such that $h_L^2 \le \varepsilon^2$, where $\varepsilon$ is a fixed desired precision.\n", "\n", "The solution $(M_0, \dots, M_L)$ of the optimisation problem\n", "\n", "$$\n", "\min_{M_0, \dots, M_L} \sum_{l=0}^L \frac{M_l}{h_l} \qquad \mbox{ s.t. } \qquad\n", "\sum_{l=0}^L \frac{h_l}{M_l} = \varepsilon^2\n", "$$\n", "\n", "yields\n", "\n", "$$\n", "M_0 \sim_c \frac{|\log \varepsilon|}{\varepsilon^2},\n", "\qquad\n", "M_l = M_0 \, 2^{-l}.\n", "$$\n", "\n", "Why is the solution of this problem of interest for the application of the Multi-level method?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "2. The code in the cells below generates $P$ iid samples of the multi-level estimator.\n", "\n", "Complete the code:\n", "\n", "+ implement the evolution, between the times $t^{l-1}_k = T \frac{k}{2^{l-1}}$ and $t^{l-1}_{k+1} = T \frac{k+1}{2^{l-1}}$,\n", "\n", "+ of a matrix *Se\_fine* (resp. *Se\_coarse*) of size $M_l\times P$, containing the Euler scheme with finer discretization step $\frac T{2^{l}}$ (resp. with coarser discretization step $\frac T{2^{l-1}}$),\n", "\n", "+ using the same Brownian increments for both schemes.\n", "\n", "\n", "+ Finally, add to the variable *estim\_multi* the sum\n", "\n", "$$\sum_{m = 1}^{M_l} \bigl( (K - S_T^{h_l,l,m})^+ - (K - S_T^{h_{l-1},l,m})^+ \bigr)$$\n", "\n", "divided by $M_l$.",
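 "\n", "One possible completion of the inner loop is sketched below (a hedged illustration, not the official solution; it assumes the coarse Brownian increment over $[t^{l-1}_k, t^{l-1}_{k+1}]$ is obtained by summing the two fine increments built from *g1* and *g2*):\n", "\n", "```python\n", "# inside the loop over k: two fine Euler steps of size h_fine (increments from g1, g2),\n", "# then one coarse step of size 2*h_fine driven by the summed increment\n", "Se_fine = Se_fine*(1. + r*h_fine + sig*np.sqrt(h_fine)*g1)\n", "Se_fine = Se_fine*(1. + r*h_fine + sig*np.sqrt(h_fine)*g2)\n", "Se_coarse = Se_coarse*(1. + r*2*h_fine + sig*np.sqrt(h_fine)*(g1 + g2))\n", "\n", "# after the loop over k: add the level-l contribution,\n", "# with the same payoff computation as at level 0\n", "estim_multi = estim_multi + np.mean((K - Se_fine)*(K > Se_fine)\n", "                                    - (K - Se_coarse)*(K > Se_coarse), axis=0)\n", "```"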
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "T = 1.\n", "sig = 0.2\n", "r = 0.05\n", "S0 = 100.\n", "K = 110" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def estimMultiniveaux(eps, P):\n", " \"\"\"\n", " This function: \n", " \n", " outputs an iid sample of P simulations of the Multi-level estimator\n", " \n", " having a desired target precision of order eps\n", " \"\"\" \n", " h_0 = T # initial value of the discretization step: just one step at level l=0 !\n", " \n", " ####################################################\n", " # To do: fix the total number L of levels \n", " # and the number of samples M_0 wished at level 0\n", " ####################################################\n", " L = ????\n", " M_0 = ????\n", " \n", " g = np.random.randn(M_0, P)\n", " \n", " Se_coarse = S0*(1. + r*h_0 + sig * np.sqrt(h_0)*g)\n", "\n", " estim_multi = np.mean((K-Se_coarse)*(K > Se_coarse), axis=0)\n", " \n", " ##########################################\n", " # This is the loop on the different levels \n", " for l in range(1, L+1):\n", " M_l = int(M_0/2.**l)\n", " \n", " #############################################\n", " # Parameters of Euler scheme with finer step\n", " #############################################\n", " h_fine = h_0/2.**l\n", " \n", " Se_fine = S0*np.ones((M_l, P)) # Euler scheme: fine step h_l\n", " Se_coarse = S0*np.ones((M_l, P)) # Euler scheme: coarse step h_{l-1} \n", " \n", " ######################################################\n", " # This is the loop over the discretization grid t_k, \n", " # \n", " # Two Euler schemes evolve: the fine and the coarse \n", " #\n", " # Here, they are both arrays of size M_l x P\n", " #\n", " for k in range(2**(l-1)):\n", " \n", " g1 = np.random.randn(M_l, P)\n", " g2 = np.random.randn(M_l, P)\n", " ####################################################\n", " # Implement the evolution of the two Euler schemes\n", " ###################################################\n", " Se_fine = ????\n", " Se_coarse = ????\n", " \n", " ###################################################################\n", " # Add to the variable estim_multi the contribution at level l\n", " # \n", " # (See formulas above: we want the mean over the raws of the M_l x P arrays)\n", " ###################################################################\n", " estim_multi = ????\n", " \n", " return estim_multi" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "3. The code in the cell below represents the histogram of $P=1000$ simulations of the Multi-level estimator with a fixed desired precision threshold $\\varepsilon$ divided by $2$ from an histogram to the other.\n", "\n", "The variable *RMSE* contains the empirical estimation of the root mean square error as a function of $\\varepsilon$, where\n", "the true value of the expectation $\\mathbb{E}[(K - S_T)^+]$ is (in this special case!) computed exactly using the Black-Scholes formula\n", "\n", "$$\n", "\\mathbb{E}[(K-S_T)^+] = K \\, {\\cal N}(-d+\\sigma\\sqrt{T}) - s_0e^{rT} {\\cal N}(-d)\n", "\\ \\mbox{ where } \\ d = \\frac{\\ln(s_0/K)+ (r+\\frac{\\sigma^2}2) T}{\\sigma\\sqrt{T}}\n", "$$\n", "\n", "and with ${\\cal N}(x)=\\int_{-\\infty}^xe^{-\\frac{y^2}{2}}\\frac{dy}{\\sqrt{2\\pi}}$.\n", "\n", "\n", "Does the error evolve as expected in terms of $\\varepsilon$ ?" 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "eps = 0.2 # target precision\n", "P = 1000 # wished number of samples of the Multi-level estimator\n", "\n", "import scipy.stats as sps\n", "\n", "###################################################\n", "# Black Scholes explicit formula for E[(K-S_T)^+]\n", "###################################################\n", "d = (np.log(S0/K) + r*T) / (sig*np.sqrt(T)) + sig*np.sqrt(T)/2.\n", "d2 = d - sig*np.sqrt(T)\n", "expectation = K * sps.norm.cdf(-d2) - S0*np.exp(r*T) * sps.norm.cdf(-d)\n", "\n", "for i in range(3):\n", " estim_multi = estimMultiniveaux(eps,P)\n", " \n", " plt.hist(estim_multi, density=\"True\", bins=int(np.sqrt(P)), label='eps = %1.2f' %(eps))\n", " \n", " #########################################################\n", " # To do: estimation of the RMSE from the sample estim_multi\n", " RMSE = ????\n", " #########################################################\n", " print(\"Root mean square error (eps=%1.2f) : %1.2f\" %(eps, RMSE))\n", " \n", " eps = eps/2\n", "\n", "plt.axvline(expectation, linewidth=2.0, color='k',label=\"true expectation\")\n", "plt.legend(loc=\"best\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 2 }