My Learnings from Charles Franklin’s Lectures 1–5
Charles Franklin conducted a wonderful workshop on MLE, and I was fortunate to find all the slides on his site. This post contains the points I learnt from the presentation.
- There was a bitter dispute between Gauss and Legendre over who discovered the method of least squares
- Legendre was the first to actually publish the method of least squares, with a nice worked example
- In a non-parametric model there are no parameters, so there is no parameter interpretation whatsoever
- Chebyshev used the method of moments to prove the central limit theorem
- Markov was Chebyshev’s student
- Fisher pioneered MLE
- Posterior ∝ Prior × Likelihood (Bayes’ rule); strictly a proportionality, not an equality – written out below
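Writing that out in symbols (the equality needs the normalizing marginal likelihood p(y); drop it and you get the proportionality in the bullet):

```latex
p(\theta \mid y) \;=\; \frac{p(y \mid \theta)\, p(\theta)}{p(y)}
\;\propto\; \underbrace{p(y \mid \theta)}_{\text{likelihood}} \times \underbrace{p(\theta)}_{\text{prior}}
```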
- If you just do OLS, you can compute the intercept and slope, but nothing more than that – you can never say anything about the beta of the population regression line
- You have to make assumptions in order to draw inferences about the intercept and the slope
- OLS – you make assumptions about the error terms and not the y_i; however, in essence you are assuming a DGP in which every y_i is normal
- ML – you specify the DGP to begin with (a sketch of the two routes agreeing is below)
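A minimal sketch of that equivalence – my own code with made-up data, not from the slides: assume each y_i is normal around a line, maximize the likelihood, and you recover the same intercept and slope as the OLS closed form.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Made-up data: y = 2 + 3x + N(0, 1) noise
x = rng.uniform(0, 10, size=200)
y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, size=200)

def neg_log_lik(params):
    """Negative log-likelihood under the DGP y_i ~ N(b0 + b1*x_i, sigma^2)."""
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)  # optimize on the log scale so sigma stays positive
    return -norm.logpdf(y, loc=b0 + b1 * x, scale=sigma).sum()

mle = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], method="BFGS")

# OLS closed form for comparison: the point estimates match the MLE
b1_ols = np.cov(x, y, bias=True)[0, 1] / np.var(x)
b0_ols = y.mean() - b1_ols * x.mean()
print("MLE:", mle.x[:2])
print("OLS:", b0_ols, b1_ols)
```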
- Likelihood as a metric is useless in isolation; it only makes sense when compared with other likelihoods
- So, in that sense, likelihood-based tests are usually tests by comparison (the likelihood-ratio sketch below is an example)
- Likelihood is not a synonym for probability: it does not obey the axioms of probability
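As an example of testing by comparison, here is a likelihood-ratio sketch with made-up coin-flip numbers: neither log-likelihood means anything alone, but twice their difference is asymptotically chi-square.

```python
import numpy as np
from scipy.stats import chi2

# Made-up data: 62 heads in 100 flips
n, heads = 100, 62

def log_lik(p):
    """Binomial log-likelihood (the constant binomial coefficient is dropped)."""
    return heads * np.log(p) + (n - heads) * np.log(1 - p)

p_hat = heads / n                          # MLE of the heads probability
lr = 2 * (log_lik(p_hat) - log_lik(0.5))   # compare against the null p = 0.5
p_value = chi2.sf(lr, df=1)                # one restricted parameter => 1 df
print(f"LR = {lr:.3f}, p-value = {p_value:.4f}")
```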
- The grid search method is one way to find the parameter that maximizes the likelihood
- In practice, use numerical optimization techniques instead of grid search, which is rather time-consuming (both are sketched below)
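A sketch of both routes on a sample I made up (exponential data, where the MLE of the mean is just the sample mean): the grid evaluates the likelihood a thousand times, the optimizer needs only a handful of evaluations.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=500)  # made-up sample, true mean 2.0

def neg_log_lik(mu):
    """Negative log-likelihood of an Exponential sample with mean mu."""
    return len(data) * np.log(mu) + data.sum() / mu

# Grid search: evaluate on a coarse grid and keep the best point
grid = np.linspace(0.1, 10.0, 1000)
mu_grid = grid[np.argmin([neg_log_lik(m) for m in grid])]

# Numerical optimization: same answer, far fewer likelihood evaluations
mu_opt = minimize_scalar(neg_log_lik, bounds=(0.1, 10.0), method="bounded").x

print(f"grid: {mu_grid:.4f}  optimizer: {mu_opt:.4f}  sample mean: {data.mean():.4f}")
```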
- Information matrix and Hessian matrix: the observed information is the negative Hessian of the log-likelihood at the maximum, and its inverse estimates the variance of the ML estimates (see the sketch below)
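A sketch of how the two matrices are used in practice – note that BFGS’s hess_inv is only a numerical approximation of the inverse observed information, so these standard errors are approximate:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
data = rng.normal(loc=5.0, scale=2.0, size=400)  # made-up sample

def neg_log_lik(params):
    """Negative normal log-likelihood in (mu, log sigma)."""
    mu, log_sigma = params
    return -norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)).sum()

res = minimize(neg_log_lik, x0=[0.0, 0.0], method="BFGS")

# The observed information is the Hessian of the negative log-likelihood at the
# MLE; its inverse estimates the variance of the estimator. BFGS carries an
# approximation of that inverse in res.hess_inv.
se = np.sqrt(np.diag(res.hess_inv))
print("MLE (mu, log_sigma):", res.x)
print("approximate standard errors:", se)
```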
- Consistency, efficiency, asymptotic normality, and invariance are the properties of MLE
- ML estimators are asymptotically normally distributed
- Invariance: the MLE of a transformation of a parameter is the transformation of the MLE (written out below)
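The invariance property written out, with the usual sigma example:

```latex
\hat{\theta}_{\text{ML}} = \arg\max_{\theta} L(\theta \mid y)
\quad\Longrightarrow\quad
\widehat{g(\theta)}_{\text{ML}} = g\big(\hat{\theta}_{\text{ML}}\big),
\qquad \text{e.g. } \hat{\sigma}_{\text{ML}} = \sqrt{\hat{\sigma}^{2}_{\text{ML}}}
```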
- Bayesian is different – you have to specify a prior distribution for theta before looking at the data (a small example follows these bullets)
- The Bayesian solution is not invariant under different parameterizations, while ML is invariant under different parameterizations
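A small conjugate-prior example of that recipe, with numbers I made up: a Beta prior on a coin’s heads probability combined with a binomial likelihood gives a Beta posterior, so the update is just arithmetic on the Beta parameters.

```python
from scipy.stats import beta

# Made-up example: Beta(2, 2) prior on the heads probability,
# then we observe 62 heads in 100 flips
a, b = 2, 2
heads, n = 62, 100

# Conjugacy: Beta prior * binomial likelihood => Beta posterior
posterior = beta(a + heads, b + (n - heads))
print("posterior mean:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```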
- Learnt the difference between OLS and MLE