Hands on Gradient Boosting - Book Review
This blog post summarizes the book “Hands-On Gradient Boosting with XGBoost and scikit-learn” by Corey Wade.
The paper “XGBoost: A Scalable Tree Boosting System” by Tianqi Chen and Carlos Guestrin came out in 2016, and since then XGBoost has been the go-to algorithm for classification and regression tasks, at least until deep learning implementations became widely available across platforms. Of course, one can build a very deep neural network, feed in the features, run backpropagation, and obtain all the weights of the network: no feature engineering, no need to understand the data, no need to think through missing values; just use a deep neural network and get the job done. In one sense, I think that is what draws many people towards neural networks. The fact that you get to meet your objective of minimizing out-of-sample error also seems like nirvana; why would one ever want to use classical statistical procedures? XGBoost, however, still seems to be one of the favourite choices for many ML practitioners. The technique is peculiar in the sense that it is not just an applied statistical method; it incorporates a healthy dose of systems-level design optimizations that have given it a massive edge over similar algorithms.
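To give a flavour of why the library pairs so naturally with scikit-learn (the combination the book is built around), here is a minimal sketch of fitting XGBClassifier on a toy dataset. The dataset and hyperparameter values are illustrative assumptions on my part, not examples taken from the book; it assumes the xgboost and scikit-learn packages are installed.

```python
# Minimal sketch: XGBoost through its scikit-learn style API.
# Dataset and hyperparameters are illustrative, not from the book.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# Load a small built-in classification dataset and split it.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Gradient-boosted trees with a handful of common hyperparameters;
# tuning these is where much of the practical work lies.
model = XGBClassifier(
    n_estimators=100,
    max_depth=3,
    learning_rate=0.1,
    eval_metric="logloss",
)
model.fit(X_train, y_train)

# Evaluate on held-out data, the out-of-sample error we care about.
preds = model.predict(X_test)
print(f"Test accuracy: {accuracy_score(y_test, preds):.3f}")
```

Because the estimator follows the scikit-learn interface, it drops straight into pipelines, cross-validation, and grid search, which is a large part of the book's appeal.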