Choosing the Best Volatility Models

The paper, “Determining the best forecasting models”, is about testing 55 models that belong to the GARCH family. If we have just one model and a straw-man model, it is easy to show, via some statistic computed on the test sample, that the hypothesized model is superior. But how do we go about testing a whole set of competing models? There are many wonderful techniques in the Bayesian world; this paper, however, is frequentist in nature.
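To make the pairwise case concrete, here is a minimal sketch of a Diebold-Mariano-style test of equal predictive accuracy between two forecasting models. Everything here is illustrative: the loss series are simulated, the seed is arbitrary, and the variance estimate naively assumes i.i.d. loss differentials. Comparing 55 models at once needs a multiple-testing correction such as White's reality check or Hansen's SPA test, which this sketch does not implement.

```python
import math
import random

def dm_statistic(loss_a, loss_b):
    """Diebold-Mariano statistic for equal predictive accuracy.

    Positive values mean model B has the smaller average loss.
    Naive variance estimate: assumes i.i.d. loss differentials.
    """
    d = [la - lb for la, lb in zip(loss_a, loss_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical example: squared-error losses of a benchmark model versus
# a (deliberately slightly worse) rival on the same out-of-sample window.
random.seed(0)
bench = [random.gauss(1.0, 0.2) ** 2 for _ in range(250)]
rival = [random.gauss(1.1, 0.2) ** 2 for _ in range(250)]
t = dm_statistic(rival, bench)  # positive: the benchmark wins here
```

Compared against a standard normal critical value, a large |t| rejects the null of equal forecast accuracy for this one pair; the paper's problem is doing this honestly across many pairs at once.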

Liquidity considerations in estimating implied volatility

The paper titled, “Liquidity considerations in estimating implied volatility”, by Susan Thomas and Rohini Grover, is about a new way of constructing a volatility index: weighting the implied volatilities of options by the relative spreads at various strikes. The key idea behind the paper is that there is considerable liquidity asymmetry across strikes for the near-month and mid-month NIFTY option contracts. This leads the authors to propose a spread-weighted measure of implied volatility.
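A toy version of such a spread-weighted index might look like the following. The option chain, the strikes, and the inverse-spread weighting scheme are all made-up illustrations of the general idea, not the authors' exact formula:

```python
# Hypothetical option chain: (strike, implied vol, relative bid-ask spread).
chain = [
    (17000, 0.18, 0.010),
    (17500, 0.16, 0.004),
    (18000, 0.15, 0.002),   # near the money: tightest spread
    (18500, 0.17, 0.006),
]

def spread_weighted_vix(chain):
    """Weight each strike's implied volatility by the inverse of its
    relative spread, so the more liquid strikes dominate the index.
    (Illustrative weighting scheme only.)
    """
    weights = [1.0 / spread for _, _, spread in chain]
    total = sum(weights)
    return sum(w * iv for w, (_, iv, _) in zip(weights, chain)) / total

svix = spread_weighted_vix(chain)
```

Because the tightest-spread strike gets the largest weight, the index is pulled toward the implied volatility of the most liquid options rather than treating every strike equally.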

Efficient Estimation of Volatility using High Frequency Data

The paper titled, “Efficient Estimation of Volatility using High Frequency Data”, is about testing a set of volatility estimators on high frequency data. I will attempt to briefly summarize the paper. For a person working in the financial markets, not a day goes by without hearing the word “volatility”. Yet it is something that is never directly observed. If you assume that stocks follow some random process such as a GBM, then the relevant question to ask is, “How does one estimate the diffusion parameter/process in the model?”
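Under the GBM assumption, one standard answer is the realized-variance estimator: sum the squared high-frequency log-returns over the day. A small self-contained sketch, with simulated returns and hypothetical parameter values:

```python
import math
import random

def simulate_gbm_log_returns(sigma, dt, n, seed=42):
    """Log-returns of a driftless GBM sampled every dt years."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, sigma * math.sqrt(dt)) for _ in range(n)]

# One trading day of 1-minute returns (390 minutes, ~252 trading days/year).
dt = 1.0 / (252 * 390)
true_sigma = 0.20               # hypothetical annualised volatility
r = simulate_gbm_log_returns(true_sigma, dt, 390)

# Realized variance: sum of squared intraday returns, annualised.
rv = sum(x * x for x in r) / (390 * dt)
sigma_hat = math.sqrt(rv)       # should land near true_sigma
```

With 390 intraday observations the estimator is already quite precise; the estimators surveyed in the paper refine this basic idea in various ways.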

Bootstrap method for robust inference

The paper titled, “Regression analysis with many specifications”, uses the stationary bootstrap method to evaluate a large set of models. In a typical data mining set-up, the problem of choosing the number of covariates can be handled in many ways, such as:

- best subset selection
- forward and/or backward stepwise regression
- forward stagewise regression
- lasso
- ridge regression
- a combination of lasso and ridge penalties (elastic net)
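The stationary bootstrap itself is easy to sketch. The resampler below, following the Politis-Romano construction, draws blocks of geometrically distributed length (mean 1/p) with wrap-around indexing, so the resampled series is stationary and preserves local dependence; the series, seed, and mean block length here are illustrative choices:

```python
import random

def stationary_bootstrap(series, p, seed=0):
    """One stationary-bootstrap resample (Politis & Romano, 1994).

    Blocks have geometrically distributed lengths with mean 1/p;
    indices wrap around the end of the series.
    """
    rng = random.Random(seed)
    n = len(series)
    out = []
    i = rng.randrange(n)
    while len(out) < n:
        out.append(series[i])
        if rng.random() < p:     # start a new block at a random index
            i = rng.randrange(n)
        else:                    # continue the current block
            i = (i + 1) % n
    return out

data = list(range(100))
sample = stationary_bootstrap(data, p=0.1)  # mean block length of 10
```

Repeating the resampling many times and re-evaluating every model specification on each resample is what lets the paper attach honest uncertainty to the comparison of a large set of models.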

Consistent High-Precision Volatility from High Frequency Data

The paper titled, “Consistent High-Precision Volatility from High Frequency Data”, talks about the trade-off one faces when choosing a sampling frequency. If you increase the sampling frequency, the measurement error goes down but microstructure noise increases. If you decrease the sampling frequency, the microstructure noise decreases but the measurement error goes up. Researchers in the past have suggested using 10min/20min/xmin intervals based on visual tools with fancy names such as volatility signature plots.
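A volatility signature plot is simply realized variance computed at a range of sampling intervals. The toy simulation below, with made-up volatility and noise parameters, shows the effect: at one-second sampling the i.i.d. microstructure noise in observed prices dominates the realized variance, while at coarser intervals the estimate settles down toward the true integrated variance:

```python
import random

def realized_variance(returns):
    return sum(r * r for r in returns)

def subsample(returns, k):
    """Aggregate 1-second returns into k-second returns (drop remainder)."""
    m = len(returns) // k
    return [sum(returns[i * k:(i + 1) * k]) for i in range(m)]

# Simulated 1-second efficient returns plus i.i.d. noise in observed
# prices (a toy setup, not real data; parameters are arbitrary).
rng = random.Random(1)
n = 23400                        # one 6.5-hour trading day in seconds
eff = [rng.gauss(0.0, 0.0001) for _ in range(n)]
noise = [rng.gauss(0.0, 0.0005) for _ in range(n + 1)]
obs = [e + noise[i + 1] - noise[i] for i, e in enumerate(eff)]

# Signature plot: RV at several sampling intervals, in seconds.
signature = {k: realized_variance(subsample(obs, k)) for k in (1, 5, 60, 300)}
```

Here `signature[1]` is badly inflated by the noise, while `signature[300]` (5-minute sampling) is much closer to the efficient price's integrated variance, which is the pattern the eyeballed 10min/20min rules of thumb respond to.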