How do you identify the apple, the orange, and the banana if the juggler tosses everything around at high speed? Or, put differently, how do you identify which groups of banks and other financial institutions are alike when new regulation kicks in fast, fintech companies are rapidly changing the scene, and central banks are implementing non-standard policies and keeping interest rates uncommonly low for uncannily long?
In our paper Bank Business Models at Zero Interest Rates, which has been accepted at the Journal of Business and Economic Statistics, Julia Schaumburg, Bernd Schwaab, and I (Andre Lucas) investigate how to identify groups of peer banks in such a volatile environment. Identifying peer groups is extremely important for regulators who want to create a level playing field: similar banks should face similar capital buffers, both to safeguard the financial system and to retain fair competition.
We devise a new technique that combines ideas from the machine learning literature (clustering) with ideas from the financial econometrics literature (score-driven models). Think of one statistical model describing the moving centers of the juggler's fruits (the apple, the orange, and the banana), and another determining the sizes of the fruits.
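To give a flavor of the idea, here is a minimal sketch of dynamic clustering with moving centers. This is not the model from our paper: the data, parameters (two clusters, step size 0.2), and the simple mean-tracking update are all illustrative assumptions. It only shows the core mechanic of cluster centers that are updated period by period in the direction suggested by the new data, as a score-driven update would do.

```python
import numpy as np

rng = np.random.default_rng(0)

T, n = 200, 20                                # periods, banks per cluster
centers = {"a": np.array([0.0, 0.0]),         # true (slowly drifting) centers
           "b": np.array([6.0, 6.0])}
mu = np.array([[-1.0, -1.0], [7.0, 7.0]])     # initial estimated centers
step = 0.2                                    # update step size (illustrative)

for t in range(T):
    # the true centers drift slowly over time
    for key in centers:
        centers[key] += 0.02
    xa = centers["a"] + rng.standard_normal((n, 2))
    xb = centers["b"] + rng.standard_normal((n, 2))
    x = np.vstack([xa, xb])
    # assign each observation to the nearest estimated center
    d = np.linalg.norm(x[:, None, :] - mu[None, :, :], axis=2)
    lab = d.argmin(axis=1)
    # move each estimated center a step toward the mean of its members
    for k in range(2):
        if (lab == k).any():
            mu[k] += step * (x[lab == k].mean(axis=0) - mu[k])

print(mu)  # estimated centers, tracking the drifted true centers
```

Despite the drift in the true centers, the estimated centers keep tracking them because every period contributes a small correction in the direction of the new observations.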
The model is put to work on a sample of 208 European banks observed over the period 2008-2015, which includes the aftermath of the financial crisis and all of the European sovereign debt crisis. Six peer groups of banks are identified. These groups partly overlap with, and partly differ from, classifications by ECB experts. The findings can thus serve as a complement for policy makers comparing bank profitability, riskiness, and buffer requirements. For example, leverage (as a measure of riskiness) and the share of net interest income (as a measure of profitability) evolve quite differently for some groups of banks than for others, particularly during the low interest rate environment. A follow-up to this paper is Do negative interest rates make banks less safe?.
Download the paper's published version in the Journal of Business and Economic Statistics, or see the Tinbergen Institute working paper version or the ECB working paper version.
Risk managers, pension funds, asset managers, and banks nowadays use advanced models to assess the risk of investment portfolios. Much scientific progress has been made over the past decade in developing new techniques to measure the risk of such portfolios. In recent years, scientists and professionals have started to use so-called high-frequency data to measure risk. High-frequency data are frequent measurements of, for instance, stock prices or exchange rates. Think of measurements every minute, every second, or in some cases even every millisecond. Such measurements often result in more accurate risk assessments than traditional daily measurements.
An important issue, however, is how to deal with so-called outliers in high-frequency data. An outlier is an anomalous measurement in the data. Think of a temporary crash in markets due to a faulty algorithm, a typo by a trader, or any other reason. Such anomalous events occur more often than you would think. For instance, in May 2010 there was a famous flash crash that unsettled the main U.S. financial markets: within the time span of 36 minutes, the Dow Jones index dropped by 9% (!) and subsequently recovered. Such big swings within a single day result in enormous swings in risk measures and in incorrect risk forecasts for subsequent days.
Anne Opschoor, Pawel Janus, Dick van Dijk, and I (Andre Lucas) have developed a new technique to deal with such anomalous observations in high-frequency data. Our paper New HEAVY Models for Fat-Tailed Realized Covariances and Returns has been accepted at the Journal of Business and Economic Statistics. The core novelty of our approach is that anomalous events do not automatically inflate risk forecasts, as they do in traditional models. Instead, the model weighs whether an increase in measured risk reflects a true increase in risk or an incidental, anomalous event. We use statistical techniques calibrated on financial data to make this trade-off properly.
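The mechanism can be illustrated with a much simpler, univariate sketch than the model in the paper. Under a fat-tailed (Student's t) assumption, the variance update automatically downweights extreme observations: the weight shrinks toward zero as an observation becomes more anomalous. The function name, the parameter values, and the simulated data below are all illustrative assumptions, not the paper's specification.

```python
import numpy as np

def t_gas_variance(returns, nu=5.0, omega=0.05, alpha=0.1, beta=0.9):
    """Score-driven variance recursion under a Student's t assumption.

    The weight w shrinks for anomalous returns, so a single extreme
    observation barely moves the variance forecast. All parameter
    values here are illustrative.
    """
    sig2 = np.empty(len(returns))
    sig2[0] = np.var(returns)
    for t in range(len(returns) - 1):
        # outlier weight: close to 1 for ordinary returns, near 0 for outliers
        w = (nu + 1.0) / (nu + returns[t] ** 2 / sig2[t])
        score = w * returns[t] ** 2 - sig2[t]
        sig2[t + 1] = omega + beta * sig2[t] + alpha * score
    return sig2

# demo: ordinary returns with one flash-crash-like outlier
rng = np.random.default_rng(1)
r = rng.standard_normal(200)
r[100] = 20.0                      # an anomalous 20-standard-deviation move
sig2 = t_gas_variance(r)
```

In a model without the weight w, the squared outlier (400) would feed straight into the next variance forecast; here its contribution is capped at roughly (nu + 1) times the current variance, so the forecast for the next day barely moves.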
We test the model on a long time series of 30 U.S. stocks over the period 2000-2014. During that period, we have seen big events like the financial crisis of 2008, but also peak events like the May 2010 flash-crash. Using the new techniques, risk forecasts are significantly better than with the most recent competing methods. Moreover, our method is relatively straightforward to implement, which should increase its potential impact.
Download the paper's published version in the Journal of Business and Economic Statistics, or see the Tinbergen Institute working paper version.
The European Central Bank has implemented a number of unconventional monetary policies since the financial crisis and the subsequent sovereign debt crisis. One of these policies involves setting the official rate at which banks can park money at the ECB close to zero, or even below zero. Was this a wise decision?
The idea of setting the official rate close to or below zero is that banks then have more of an incentive *not* to park money at the central bank. Rather, it would pay for banks to lend the money out, providing more financing to people and businesses and helping the economy grow again.
Then again, others argue that low interest rates squeeze the profit opportunities for banks, making them more vulnerable to new economic shocks and risking a new crisis in the financial sector.
In this paper, we investigate how markets perceived the effect of the ECB's decision to impose negative interest rates on the riskiness of banks. In particular, we are interested in whether some bank business models are more prone to the potential negative effects of the ECB's policy than others.
We measure the riskiness of banks using a well-established methodology, SRisk: the expected amount of capital that has to be injected into a troubled bank in case of an extreme market-wide shock. It is important to consider a situation of extreme market stress, because that is precisely when injecting more capital into a troubled bank is most problematic and hurts most.
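The capital-shortfall idea behind SRisk can be sketched in a few lines. The formula below follows the standard SRisk construction of Brownlees and Engle; the 8% prudential capital ratio and the balance-sheet numbers in the example are illustrative assumptions, not figures from our paper.

```python
def srisk(debt, equity, lrmes, k=0.08):
    """Expected capital shortfall of a bank conditional on a
    market-wide crisis (the SRisk measure of Brownlees & Engle).

    debt   : book value of debt
    equity : market value of equity
    lrmes  : long-run marginal expected shortfall, i.e. the expected
             fractional loss of equity in a crisis
    k      : prudential capital ratio (8% here, an assumption)
    """
    crisis_equity = (1.0 - lrmes) * equity   # equity left after the crisis
    # shortfall = required capital in the crisis minus remaining equity
    return k * (debt + crisis_equity) - crisis_equity

# made-up illustration: debt 900, equity 100, expected crisis equity loss 60%
shortfall = srisk(900.0, 100.0, 0.6)
print(shortfall)
```

A positive value means the bank would need a capital injection in the stress scenario; the larger the value, the riskier the bank by this measure.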
We find that policy rate cuts below zero trigger different SRisk responses than an equally sized cut to zero. There is only weak evidence that large universal banks are affected differently from other banks, in the sense that the riskiness of the large banks decreases somewhat more for rate cuts into negative territory.
Download the paper's published version in Economics Letters or see the Tinbergen Institute working paper version.