Bayesians and the Likelihood Principle
Stumbled onto an interesting paper that connects Bayesian ideas to likelihood-based inference. The two are related in the sense that likelihood-based inference can be thought of as Bayesian inference with a uniform/vague prior. However, when you get down to estimating and inferring from data under these two philosophies, the math, the equations you use, and the code you need to write are completely different.
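To make the "same idea, different machinery" point concrete, here is a minimal sketch (the Bernoulli model and flat Beta(1, 1) prior are my choices for illustration, not anything from the paper): the likelihood route optimizes the log-likelihood numerically, the Bayesian route does a conjugate prior-to-posterior update, and with a flat prior the posterior mode lands exactly on the MLE even though the two computations look nothing alike.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import beta

# Toy data (made up): 7 successes out of 10 Bernoulli trials
successes, n = 7, 10

# Likelihood route: maximize the log-likelihood numerically
def neg_log_lik(p):
    return -(successes * np.log(p) + (n - successes) * np.log(1 - p))

mle = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded").x

# Bayesian route with a flat Beta(1, 1) prior: conjugate update to Beta(1 + s, 1 + f)
posterior = beta(1 + successes, 1 + n - successes)
posterior_mode = successes / n      # mode of Beta(1 + s, 1 + f) coincides with the MLE
posterior_mean = posterior.mean()   # other summaries (mean, intervals) differ in machinery

print(f"MLE: {mle:.3f}, posterior mode: {posterior_mode:.3f}, "
      f"posterior mean: {posterior_mean:.3f}")
```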
This paper by Steel asks whether a hard-core Bayesian must accept the Likelihood Principle or not. It discusses two versions of the Likelihood Principle (LP) that can be easily connected to the prior-to-posterior Bayes framework. The first version (LP1) concerns different datasets that nevertheless give rise to the same (proportional) likelihood function; the second version (LP2) concerns evaluating competing hypotheses against the same dataset.
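A toy illustration of LP1 (the textbook binomial vs. negative-binomial stopping-rule example, chosen by me and not taken from Steel's paper): two different experimental designs yield the same observed data, their likelihood functions differ only by a multiplicative constant, and so any Bayesian posterior built from either of them is identical.

```python
import numpy as np
from scipy.special import comb

# Observed outcome: 9 successes and 3 failures in 12 Bernoulli(p) trials.
p_grid = np.linspace(0.01, 0.99, 99)

# Design A: binomial sampling, n = 12 fixed in advance, 9 successes observed
lik_binomial = comb(12, 9) * p_grid**9 * (1 - p_grid)**3

# Design B: negative-binomial sampling, stop at the 3rd failure, which took 12 trials
lik_negbinom = comb(11, 2) * p_grid**9 * (1 - p_grid)**3

# The two likelihood functions are proportional, so LP1 says they carry the same
# evidence about p, and a Bayesian posterior from either design is the same.
ratio = lik_binomial / lik_negbinom
print(np.allclose(ratio, ratio[0]))   # True: the ratio is constant in p
```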
The paper uses Bayesian confirmation measures to show that Bayesians can accept LP1, whereas LP2 can be accepted ONLY when the competing hypotheses are jointly exhaustive (which is rarely the case in the real world). One comes across LP2 in many contexts, for example the log-likelihood ratio of the null and alternative hypotheses in a GLM. In such cases likelihood theory has a straightforward test: compute the deviance and check its realization against the relevant asymptotic distribution. This paper shows that this kind of reasoning is weak in the Bayesian world.
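For reference, here is roughly what that likelihood-theory recipe looks like in code, on simulated data with statsmodels (the logistic model, sample size, and coefficients are made up purely for illustration): fit the null and alternative GLMs, take the drop in deviance, and refer it to the asymptotic chi-square distribution.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

# Simulated data: one covariate, binary response from a logistic model
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.2 * x))))

# Alternative hypothesis: logistic regression with intercept and covariate
full = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()
# Null hypothesis: intercept-only model
null = sm.GLM(y, np.ones((len(y), 1)), family=sm.families.Binomial()).fit()

# Deviance drop = 2 * (loglik_full - loglik_null), asymptotically chi-square(1)
deviance_drop = null.deviance - full.deviance
p_value = chi2.sf(deviance_drop, df=1)
print(f"deviance drop: {deviance_drop:.2f}, p-value: {p_value:.4f}")
```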