Slides from my RStanARM tutorial
Back in September, I gave a tutorial on RStanARM to the Madison R users' group. As I did for my magrittr tutorial, I broke the content down into slide decks. They were:
- How I got into Bayesian statistics
- Some intuition-building about Bayes' theorem
- Tour of RStanARM
- Where to learn more about Bayesian statistics
The source code and supporting materials are on GitHub.
Observations (training data)
The intuition-building section was the most challenging and rewarding, because I had to brush up on Bayesian statistics well enough to informally, hand-wavily teach about it to a crowd of R users. Like, I have a good sense of how to fit these models and interpret them in practice, but there's a gulf between understanding something and teaching about it. It was a bit of trial by fire 🔥.
One thing I did was work through a toy Bayesian updating demo. What's the mean of some IQ scores, assuming a standard deviation of 15 and a flat prior over a reasonable range of values? Cue some plots of how the distribution of probabilities updates as new data is observed.
See how the beliefs are updated? See how we retain uncertainty around that most likely value? And so on.
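The demo boils down to a few lines of R. Here is a minimal grid-approximation sketch of that updating loop; it is not the tutorial's actual code, and the scores and grid below are made up for illustration:

```r
# Toy Bayesian updating by grid approximation: estimate the mean of some
# IQ scores, with a known SD of 15 and a flat prior over a reasonable range.
# (Illustrative data, not from the talk.)
iq_scores <- c(108, 95, 120, 102, 111)

# Candidate means and a flat prior over them
mu_grid <- seq(70, 130, by = 0.5)
posterior <- rep(1 / length(mu_grid), length(mu_grid))

for (score in iq_scores) {
  # Likelihood of this observation under each candidate mean
  likelihood <- dnorm(score, mean = mu_grid, sd = 15)
  # Bayes' theorem: posterior is proportional to prior times likelihood
  posterior <- posterior * likelihood
  posterior <- posterior / sum(posterior)  # renormalize
  # (In the talk, each pass through this loop was one frame of the animation)
}

mu_grid[which.max(posterior)]  # most probable mean after seeing all the data
#> [1] 107
```

Each pass through the loop uses the previous posterior as the new prior, which is exactly the "beliefs being updated" shown in the plots: the distribution narrows around the sample mean while retaining uncertainty.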
Naturally, I animated the thing. I'll take any excuse to use gganimate.
Someone asked a good question about what advantages these models have over classical ones. I find the models more intuitive^{1}, because posterior probabilities are post-data probabilities. I also find them more flexible. For example, I can use a t-distribution for my error terms: thick tails! If I write the thing in Stan, I can incorporate measurement error into the model. If I put my head down and work really hard, I could even fit one of those gorgeous Gaussian process models. We can fit vanilla regression models or get really, really fancy, but it all kind of emerges nicely from the general framework of writing out priors and a likelihood definition.
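To sketch that flexibility (this is not code from the talk, and every name in it is illustrative): a robust regression in Stan only requires swapping the usual normal likelihood for a Student-t one.

```stan
// Hedged sketch: linear regression with thick-tailed (Student-t) errors.
// All names (N, x, y, alpha, beta, sigma, nu) are made up for illustration.
data {
  int<lower=1> N;
  vector[N] x;
  vector[N] y;
}
parameters {
  real alpha;
  real beta;
  real<lower=0> sigma;
  real<lower=1> nu;  // degrees of freedom: small nu means thicker tails
}
model {
  // Weakly informative priors
  alpha ~ normal(0, 10);
  beta ~ normal(0, 10);
  sigma ~ exponential(1);
  nu ~ gamma(2, 0.1);
  // The one-line change from y ~ normal(alpha + beta * x, sigma):
  y ~ student_t(nu, alpha + beta * x, sigma);
}
```

(Staying in R, the brms package offers the same model without writing raw Stan, via `family = student()`.)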
Last knitted on 2021-02-15. Source code on GitHub.^{2}

But I was taught the classical models first… I sometimes think that these models are only more intuitive because this is my second bite at the apple. This learning came more easily because the first time I learned regression, I was a total novice and had to learn everything. I had to learn about t-tests, reductions in variance, collinearity, and what interactions do. Here, I can build off of that prior learning. Maybe if I learn everything again (as what? everything as a neural network?) it will be even more intuitive. ↩

```r
sessioninfo::session_info()
#> - Session info ---------------------------------------------------------------
#>  setting  value
#>  version  R version 4.0.3 (2020-10-10)
#>  os       Windows 10 x64
#>  system   x86_64, mingw32
#>  ui       RTerm
#>  language (EN)
#>  collate  English_United States.1252
#>  ctype    English_United States.1252
#>  tz       America/Chicago
#>  date     2021-02-15
#>
#> - Packages -------------------------------------------------------------------
#>  package     * version    date       lib source
#>  assertthat    0.2.1      2019-03-21 [1] CRAN (R 4.0.2)
#>  cli           2.3.0      2021-01-31 [1] CRAN (R 4.0.3)
#>  crayon        1.4.1      2021-02-08 [1] CRAN (R 4.0.3)
#>  emo           0.0.0.9000 2020-07-06 [1] Github (hadley/emo@3f03b11)
#>  evaluate      0.14       2019-05-28 [1] CRAN (R 4.0.2)
#>  generics      0.1.0      2020-10-31 [1] CRAN (R 4.0.3)
#>  git2r         0.28.0     2021-01-10 [1] CRAN (R 4.0.3)
#>  glue          1.4.2      2020-08-27 [1] CRAN (R 4.0.2)
#>  here          1.0.1      2020-12-13 [1] CRAN (R 4.0.3)
#>  highr         0.8        2019-03-20 [1] CRAN (R 4.0.2)
#>  knitr       * 1.31       2021-01-27 [1] CRAN (R 4.0.3)
#>  lubridate     1.7.9.2    2020-11-13 [1] CRAN (R 4.0.3)
#>  magrittr      2.0.1      2020-11-17 [1] CRAN (R 4.0.3)
#>  purrr         0.3.4      2020-04-17 [1] CRAN (R 4.0.2)
#>  ragg          0.4.1      2021-01-11 [1] CRAN (R 4.0.3)
#>  Rcpp          1.0.6      2021-01-15 [1] CRAN (R 4.0.3)
#>  rlang         0.4.10     2020-12-30 [1] CRAN (R 4.0.3)
#>  rprojroot     2.0.2      2020-11-15 [1] CRAN (R 4.0.3)
#>  sessioninfo   1.1.1      2018-11-05 [1] CRAN (R 4.0.2)
#>  stringi       1.5.3      2020-09-09 [1] CRAN (R 4.0.2)
#>  stringr       1.4.0      2019-02-10 [1] CRAN (R 4.0.2)
#>  systemfonts   1.0.0      2021-02-01 [1] CRAN (R 4.0.3)
#>  textshaping   0.2.1      2020-11-13 [1] CRAN (R 4.0.3)
#>  withr         2.4.1      2021-01-26 [1] CRAN (R 4.0.3)
#>  xfun          0.20       2021-01-06 [1] CRAN (R 4.0.3)
#>
#> [1] C:/Users/Tristan/Documents/R/win-library/4.0
#> [2] C:/Program Files/R/R-4.0.3/library
```