Moving from IBOR to RFR
In the last few years, the projects I have worked on have been really squiggly in nature. Last Christmas (2020), while the world was celebrating the year-end holidays, I was slogging away, writing code that would help a bank incorporate risk-free rates into their products. It was exciting to work on something where the big boys of enterprise software were not flexible enough to support the new requirements. Many aspects were also still evolving, and hence there was a need to “figure out” what needed to be done, instead of coding something that was already available as a spec. I had to write the spec taking into consideration that not everything was black and white, and subsequently implement it. In any case, the project I worked on turned out to be a success, and there have since been interesting offshoots to the work that others have put in place. One of the offshoots is in the “Fallback” world. The basic idea of “fallback rates” is that the risk-free rates are credit adjusted and made available to market participants, so that they can start transitioning their interest rate derivative contracts. The “fallback”, as the name suggests, creates a safety net in the contracts while an active transition plan is put in place.
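To make the idea a little more concrete, here is a minimal sketch in Python of how a fallback rate can be thought of: the overnight risk-free rate is compounded in arrears over the interest period and a fixed credit spread adjustment is added on top. The function names, day-count convention and the spread number are purely illustrative, not the official ISDA methodology.

```python
import numpy as np

def compounded_rfr_in_arrears(daily_rates, day_counts, year_basis=360):
    """Compound daily overnight RFR fixings over an interest period (ACT/360 style)."""
    daily_rates = np.asarray(daily_rates, dtype=float)
    day_counts = np.asarray(day_counts, dtype=float)
    growth = np.prod(1.0 + daily_rates * day_counts / year_basis)
    return (growth - 1.0) * year_basis / day_counts.sum()

def fallback_rate(daily_rates, day_counts, spread_adjustment):
    """Fallback rate = compounded RFR in arrears + a fixed credit spread adjustment."""
    return compounded_rfr_in_arrears(daily_rates, day_counts) + spread_adjustment

# Illustrative example: a one-week period with a flat 0.05% overnight rate;
# the Friday fixing applies over the weekend (3 calendar days).
rates = [0.0005, 0.0005, 0.0005, 0.0005, 0.0005]
days = [1, 1, 1, 1, 3]
# The 11.4 bps spread below is a made-up number, not an official published fixing.
print(fallback_rate(rates, days, 0.00114))
```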
I stumbled upon a concise write-up by KPMG that covers all the relevant aspects of this transition. This blog post lists some of the main points mentioned in the write-up.
Visual Guide to ETFs
This blog post summarizes the book “Visual Guide to ETFs” by David J. Abner.
Hands-On Gradient Boosting - Book Review
This blog post summarizes the book “Hands-On Gradient Boosting with XGBoost and scikit-learn” by Corey Wade.
XGBoost Seminal Paper - Summary
The paper titled “XGBoost: A Scalable Tree Boosting System”, by Tianqi Chen and Carlos Guestrin, came out in 2016, and since then it has been the go-to algorithm for classification and regression tasks, at least until deep learning implementations became available across various platforms. Of course, one can build a super deep neural network, feed in the features, run backprop and get all the weights of the network. No feature engineering, no need to understand the data, no need to think through missing values: use a deep neural network and get the job done. In one sense, I think that is what makes neural networks appealing to many. Also, the fact that you get to meet your objective of minimizing out-of-sample error seems like nirvana. Why would one ever want to use classical statistical procedures? XGBoost, however, still seems to be one of the favorite choices for many ML practitioners. The technique is peculiar in the sense that it is not just an applied statistical technique but incorporates a healthy dose of system design optimizations that seem to have given it a massive edge over similar algorithms.
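As a quick illustration of why practitioners keep reaching for it, here is a minimal sketch (assuming xgboost and scikit-learn are installed) that trains an XGBClassifier on a toy dataset with some missing values deliberately injected. The paper’s sparsity-aware split finding learns a default direction for missing entries, so no imputation step is needed; the specific dataset and hyperparameters are just placeholders.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Load a small, well-known classification dataset
X, y = load_breast_cancer(return_X_y=True)

# Deliberately blank out ~5% of the entries; XGBoost handles np.nan natively
# by learning a default split direction for missing values.
rng = np.random.default_rng(0)
X = X.copy()
X[rng.random(X.shape) < 0.05] = np.nan

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Gradient-boosted trees: each new tree is fit to the gradients of the loss
# from the current ensemble; "hist" selects fast histogram-based split finding.
model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                      tree_method="hist")
model.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```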