My statistical training focused on classical frequentist inference, but soon after I got to the real world of statistics I realized that Bayesian statistics was not only more widely used than I thought, but that more and more people were adopting it. So I set a personal goal to learn it.
I found it was not that easy. Maybe I am just not that smart. As time went by I got the impression that Bayesian statistics is actually simple, maybe simpler than frequentist statistics, but there are some barriers for those like me who never got the tools or the way of thinking of Bayesian statistics.
For one, we did not get strong training in simulation, programming, or computational statistics. We also did not learn much about Markov processes, especially those on continuous state spaces. So when we see something like the Metropolis-Hastings algorithm, it looks inaccessible.
Nowadays there are many sources from which we can learn Bayesian statistics, but I always wanted to start by learning the MH algorithm, and many times I wasn't successful. That changed when I found this paper in The American Statistician. Using the paper, for the first time I was able to build my own simple Markov chain and simulate from a target distribution, and I have published the R code I used here.
The paper is not that simple, and I would not actually recommend it if you want to learn the MH algorithm but do not have a good grasp of the math. The paper is also somewhat old, and probably any more recently published book on Bayesian statistics will give you a gentler introduction to the MH algorithm. But the paper got me started and I learned a lot from it. The MH algorithm is so useful because of its power to simulate from intractable distributions, but when we are taking our first steps, even the code I used to simulate from a bivariate Normal distribution made me proud.
Here I will leave the function I created; the MH algorithm is actually quite simple. I wanted to publish the whole code here instead of just linking to it above, but it is impressive how difficult that is. Even the GitHub option does not work very well (not to mention that I could not get syntax highlighting), because when you have many pieces of code, graphs, and R output it becomes a lot of work. The alternative is to paste the code here, copy and paste the output, and insert figures, which is not really any better. The knitr package offers an awesome way to create HTML with code and output, but I still have not figured out how to publish that in a blog like this.
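Since I have not managed to embed the original file nicely, here is a minimal sketch of what a random-walk Metropolis-Hastings sampler for a bivariate Normal can look like in R. This is an illustration, not necessarily the exact function I linked above; it assumes the mvtnorm package for the target density, and the function name and parameters are just my choices here.

```r
library(mvtnorm)

## Random-walk MH targeting a bivariate Normal with mean mu and covariance Sigma.
## Proposals are the current state plus independent Normal noise.
mh_bvnorm <- function(n_iter, mu = c(0, 0),
                      Sigma = matrix(c(1, 0.8, 0.8, 1), 2, 2),
                      prop_sd = 0.5, start = c(0, 0)) {
  chain <- matrix(NA_real_, nrow = n_iter, ncol = 2)
  current <- start
  log_dens <- function(x) dmvnorm(x, mean = mu, sigma = Sigma, log = TRUE)
  for (i in seq_len(n_iter)) {
    # Propose a move by perturbing the current state
    proposal <- current + rnorm(2, mean = 0, sd = prop_sd)
    # Accept with probability min(1, target(proposal) / target(current)),
    # computed on the log scale for numerical stability
    log_alpha <- log_dens(proposal) - log_dens(current)
    if (log(runif(1)) < log_alpha) current <- proposal
    chain[i, ] <- current
  }
  chain
}

## Usage: run the chain, discard a burn-in, and check the sample moments
set.seed(42)
draws <- mh_bvnorm(20000)
draws <- draws[-(1:2000), ]
colMeans(draws)              # should be close to c(0, 0)
cor(draws[, 1], draws[, 2])  # should be close to 0.8
```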
2 comments:
Nice post. It rekindles how I started with Bayesian inference and computation back in early 2005, with the Gibbs sampler. Anyway, you rightly say that present-day books attempt a gentler approach, but papers like the one you mention, and this one, are always worth citing: Explaining the Gibbs Sampler, George Casella and Edward I. George, The American Statistician, Vol. 46, No. 3 (Aug. 1992), pp. 167-174.
Nice! Thanks for the reference, Subbiah; I will surely check it out!
Marcos