Svetlana Borovkova: Machine Learning for market risk

By Svetlana Borovkova, Head of Quant Modelling at Probability & Partners

In two recent columns, I discussed the current state of Machine Learning (ML) applications in finance (February 15, 2022) and the use of ML in credit (November 23, 2021). Recently, a new class of machine learning algorithms – the so-called autoencoders – was proposed for market risk applications by Prof. John Hull, well known for his widely used textbook on derivatives, and his co-author Alexander Sokol.

I heard their talks on this at the latest Risk Minds conference in December and immediately got so excited by this ingenious application of machine learning that we are already implementing and extending it within our quant team at Probability & Partners.

One of the main problems in assessing the market risk of (large) portfolios is the multidimensionality of the task. Portfolios typically comprise tens if not hundreds of assets – stocks, commodities, indices, ETFs and other instruments. Treating each of them as an individual risk factor is not feasible. A similar problem arises when modelling interest rate curves or implied volatility surfaces (volatilities for different option strikes and maturities) – these problems are also multivariate in nature.

To ease the modelling task, one would like to find a smaller set of (often unobservable) latent risk factors, which, when taken together, explain most of the risk of a large portfolio. In interest rate curves, these latent factors are the level, the slope and the curvature of the curve. These three factors can explain 99% of all interest rate moves. But how can we find such factors for more complicated situations, such as large diversified investment portfolios or implied volatility surfaces? This is where autoencoders can help.

Neural networks

An autoencoder is a type of neural network: the most popular class of machine learning models. A neural network consists of nodes, also called neurons, which are arranged in an input layer, one or more hidden layers and an output layer, connected sequentially. The goal of a neural network is to predict an outcome on the basis of a set of features, just as in traditional regression.

For example, the outcome might be the price of a security in the next trading period, the price and the delta of an option, or the credit rating of a loan holder. The nodes in the input layer receive the information about features (for instance previously observed prices, trading book features, option parameters or characteristics of a loan holder), process it in a certain way and transfer it to the first hidden layer, where the process is repeated, and so on, until the output layer is reached.

The information processing in the nodes is done in such a way that the predicted output maximally matches the actual observed output – the network ‘learns’ these processing rules on the basis of a multitude of examples presented to it. This process is called training. The trained network is then tested on examples it has never seen before. The architecture of a typical neural network is so flexible and its generalization power so great that it can learn very complex relationships between the input and the output.
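To make this concrete, here is a minimal sketch of such a network in PyTorch – my own illustration, not the Hull–Sokol implementation – in which the features, outputs, dimensions and training settings are all synthetic placeholders:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Synthetic example: 500 observations, 10 input features, 1 outcome.
    X = torch.randn(500, 10)
    y = X[:, :3].sum(dim=1, keepdim=True) + 0.1 * torch.randn(500, 1)

    model = nn.Sequential(
        nn.Linear(10, 16),  # input layer -> first hidden layer
        nn.Tanh(),
        nn.Linear(16, 16),  # second hidden layer
        nn.Tanh(),
        nn.Linear(16, 1),   # output layer: the predicted outcome
    )

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    # Training: adjust the node weights so that predictions match the
    # observed outputs on the examples presented to the network.
    for epoch in range(200):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()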

Autoencoders

While the goal of a typical neural network is to predict a response on the basis of features, an autoencoder is designed to perform a seemingly useless task: to recover the input information. But first, this information has to go through an intermediate layer with very few nodes – as few as three or four – which is called the bottleneck.

Essentially, an autoencoder is a ‘compressor’, or encoder, of information, which is decoded again after passing through the bottleneck. The autoencoder ‘decides’ on the most efficient way of encoding the input, without any guidance from the modeler.
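As a hedged illustration – a sketch of the idea rather than the authors’ implementation – an autoencoder for asset returns could look as follows in PyTorch. The 50 assets, 1,000 observations and bottleneck of three nodes are assumptions made purely for the example:

    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self, n_assets: int, n_factors: int = 3):
            super().__init__()
            # Encoder: compress the full return vector into a few latent factors.
            self.encoder = nn.Sequential(
                nn.Linear(n_assets, 16), nn.Tanh(),
                nn.Linear(16, n_factors),   # the bottleneck
            )
            # Decoder: reconstruct all returns from the latent factors.
            self.decoder = nn.Sequential(
                nn.Linear(n_factors, 16), nn.Tanh(),
                nn.Linear(16, n_assets),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    returns = torch.randn(1000, 50)   # placeholder for observed daily returns
    model = Autoencoder(n_assets=50)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Train the network to reproduce its own input through the bottleneck.
    for epoch in range(500):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(returns), returns)
        loss.backward()
        optimizer.step()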

In a way, it is similar to the well-known statistical technique of Principal Component Analysis (PCA). But while PCA’s latent factors are linear functions of the original features, an autoencoder can encode them in a much more flexible way, using virtually any nonlinear combination of them.
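For contrast, PCA fits in a few lines of NumPy on the same kind of placeholder return matrix; note that its factors are, by construction, linear combinations of the original returns:

    import numpy as np

    rng = np.random.default_rng(0)
    returns = rng.standard_normal((1000, 50))   # placeholder return matrix
    centered = returns - returns.mean(axis=0)

    # Singular value decomposition of the centered return matrix.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)

    loadings = vt[:3]                 # first three principal components (linear)
    factors = centered @ loadings.T   # latent factor time series
    explained = (s[:3] ** 2).sum() / (s ** 2).sum()  # share of variance explained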

And this is where the magic happens: imagine that the input features are the returns of many assets in a large portfolio, a set of interest rates for different maturities, or implied volatilities for a multitude of options’ maturities and strikes. Then the compressed information coming out of the bottleneck is exactly those few ‘hidden’ risk factors that drive almost all of the portfolio’s risk, or the evolution of the interest rate curve or the implied volatility surface.

In this way, we handle the risk resulting from many actual risk factors by modelling just a few latent factors and then ‘decoding’ them back to the full risk factor universe.
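Continuing the autoencoder sketch above (same assumed model and data), the workflow could look like this: encode the observed history into latent factors, generate scenarios in that small latent space, and decode them back to the full risk factor universe.

    # Encode the return history into the three latent risk factors.
    with torch.no_grad():
        factors = model.encoder(returns)          # shape: (1000, 3)

        # Illustrative scenario generation: resample the historical latent
        # factors (a stand-in for whatever model one fits to their dynamics).
        idx = torch.randint(0, factors.shape[0], (10_000,))
        scenarios = factors[idx]

        # Decode: map latent scenarios back to 50-dimensional return
        # scenarios, from which portfolio P&L and VaR can be computed.
        full_scenarios = model.decoder(scenarios)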

Applications

While the main application of such autoencoders is market risk modelling, they can also be used for recovering realistic interest rate curves from ‘corrupted’ or incomplete quotes, or for generating implied volatilities for illiquid or unobserved strikes and maturities. But for now, there are more questions than answers, and efficient algorithms for these applications still need to be developed and tested – something our quants at Probability & Partners are enthusiastically working on.
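To give a flavour of the curve-completion idea – again purely a sketch, reusing the toy model above and pretending its 50 inputs are points on a curve – one could search the latent space for the factors whose decoded curve best matches the quotes that are actually observed:

    # Hypothetical setup: quotes are observed at only four of the 50 points.
    observed_idx = torch.tensor([0, 5, 20, 49])
    observed_vals = returns[0, observed_idx]      # placeholder observations

    z = torch.zeros(1, 3, requires_grad=True)     # latent factors to be fitted
    opt = torch.optim.Adam([z], lr=1e-2)

    # Fit the latent factors so the decoded curve matches the observed quotes.
    for step in range(500):
        opt.zero_grad()
        curve = model.decoder(z)
        loss = nn.functional.mse_loss(curve[0, observed_idx], observed_vals)
        loss.backward()
        opt.step()

    full_curve = model.decoder(z).detach()        # the completed curve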

Get in touch with us if you want to know how Machine Learning can be applied to problems in your organization, and how you can make your risk management and modelling more efficient with ML techniques.

Probability & Partners is a Risk Advisory Firm offering integrated risk management and quantitative modelling solutions to the financial sector and data-driven enterprises.