Solving The Dynamic Volatility Fitting Problem: A Deep Reinforcement Learning Approach
Oct 12, 2024
Emmanuel G.*, Omar Karkar*, Imad Idboufous
* Equal contribution
Abstract
Volatility fitting is one of the core problems in the equity derivatives business. Through a set of deterministic rules, the degrees of freedom in the implied volatility surface encoding (parametrization, density, diffusion) are fixed. While very effective, this approach, widespread in the industry, is not natively tailored to learn from shifts in market regimes or to discover unsuspected optimal behaviors. In this paper, we change the classical paradigm and apply recent advances in Deep Reinforcement Learning (DRL) to solve the fitting problem. In particular, we show that variants of Deep Deterministic Policy Gradient (DDPG) and Soft Actor-Critic (SAC) can perform at least as well as standard fitting algorithms. Furthermore, we explain why the reinforcement learning framework is well suited to handling complex objective functions and is natively adapted to online learning.
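The abstract casts volatility fitting as a sequential decision problem with continuous states and actions. The paper's exact encoding is not reproduced here; as a purely illustrative sketch (the raw SVI parametrization, the `VolFitEnv` class, and all bounds below are assumptions, not the authors' method), one minimal way to frame it is: the state holds the current surface parameters and their pricing errors, the action is a small increment on the parameters, and the reward penalizes the remaining fitting error.

```python
import numpy as np

def svi_total_variance(k, a, b, rho, m, sigma):
    """Raw SVI total implied variance at log-moneyness k (Gatheral's
    parametrization; chosen here only as an illustrative surface encoding)."""
    return a + b * (rho * (k - m) + np.sqrt((k - m) ** 2 + sigma ** 2))

class VolFitEnv:
    """Toy MDP for the fitting problem (hypothetical, not from the paper):
    state  = current SVI parameters concatenated with pricing errors,
    action = bounded increments on the five parameters,
    reward = negative RMSE against observed market total variances."""

    def __init__(self, log_moneyness, market_w):
        self.k = np.asarray(log_moneyness, dtype=float)
        self.w_mkt = np.asarray(market_w, dtype=float)
        self.reset()

    def reset(self):
        # crude flat initial guess for (a, b, rho, m, sigma)
        self.params = np.array([0.04, 0.1, 0.0, 0.0, 0.1])
        return self._state()

    def _errors(self):
        return svi_total_variance(self.k, *self.params) - self.w_mkt

    def _state(self):
        return np.concatenate([self.params, self._errors()])

    def step(self, action):
        # clip keeps the continuous action inside a small trust region
        self.params = self.params + np.clip(action, -0.01, 0.01)
        rmse = np.sqrt(np.mean(self._errors() ** 2))
        return self._state(), -rmse, rmse < 1e-3
```

An off-the-shelf continuous-control agent (e.g. a DDPG or SAC implementation) could then be trained on `step`/`reset`; the deterministic rules mentioned in the abstract would correspond to a fixed hand-crafted policy on the same state.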
Type
Deep Reinforcement Learning (DRL)
Continuous State Action Spaces
Stochastic and Continuous Control
Actor-Critic
Sequential Decision Making
Deep Reinforcement Learning in Stochastic Environment

Authors
Emmanuel G.
(he/him)
Researcher in Mathematics and Applications
I am a Research Scientist in Mathematics working on stochastic analysis, optimal control, diffusion models, and statistics, with applications to mathematical finance and machine learning.
Prior to this, I studied at École Polytechnique, where I earned an engineering degree with a major in mathematics. I also obtained a Master's degree in Probability and Finance from IP Paris, jointly with Sorbonne Université, graduating with highest honors (mention Très Bien), and a bachelor's degree in Philosophy from Université Paris Nanterre.