As part of my tutorial talk on RStanARM, I presented some examples of how to visualize the uncertainty in Bayesian linear regression models. This post is an expanded demonstration of the approaches I presented in that tutorial.
Data: Does brain mass predict how much mammals sleep in a day?
Let’s use the mammal sleep dataset from ggplot2. This dataset contains the number of hours spent sleeping per day for 83 different species of mammals along with each species’ brain mass (kg) and body mass (kg), among other measures. Here’s a first look at the data.
library(dplyr, warn.conflicts = FALSE)
library(ggplot2)
# Preview sorted by brain/body ratio. I chose this sorting so that humans would
# show up in the preview.
msleep %>%
select(name, sleep_total, brainwt, bodywt, everything()) %>%
arrange(desc(brainwt / bodywt))
#> # A tibble: 83 x 11
#> name sleep_total brainwt bodywt genus vore order conservation sleep_rem
#> <chr> <dbl> <dbl> <dbl> <chr> <chr> <chr> <chr> <dbl>
#> 1 Thir~ 13.8 4.00e-3 0.101 Sper~ herbi Rode~ lc 3.4
#> 2 Owl ~ 17 1.55e-2 0.48 Aotus omni Prim~ <NA> 1.8
#> 3 Less~ 9.1 1.40e-4 0.005 Cryp~ omni Sori~ lc 1.4
#> 4 Squi~ 9.6 2.00e-2 0.743 Saim~ omni Prim~ <NA> 1.4
#> 5 Maca~ 10.1 1.79e-1 6.8 Maca~ omni Prim~ <NA> 1.2
#> 6 Litt~ 19.9 2.50e-4 0.01 Myot~ inse~ Chir~ <NA> 2
#> 7 Gala~ 9.8 5.00e-3 0.2 Gala~ omni Prim~ <NA> 1.1
#> 8 Mole~ 10.6 3.00e-3 0.122 Spal~ <NA> Rode~ <NA> 2.4
#> 9 Tree~ 8.9 2.50e-3 0.104 Tupa~ omni Scan~ <NA> 2.6
#> 10 Human 8 1.32e+0 62 Homo omni Prim~ <NA> 1.9
#> # ... with 73 more rows, and 2 more variables: sleep_cycle <dbl>, awake <dbl>
ggplot(msleep) +
aes(x = brainwt, y = sleep_total) +
geom_point()
#> Warning: Removed 27 rows containing missing values (geom_point).
Hmmm, not very helpful! We should put our measures on a log10 scale. Also, 27 of the species don’t have brain mass data, so we’ll exclude those rows for the rest of this tutorial.
msleep <- msleep %>%
filter(!is.na(brainwt)) %>%
mutate(
log_brainwt = log10(brainwt),
log_bodywt = log10(bodywt),
log_sleep_total = log10(sleep_total)
)
Now, plot the log-transformed data. But let’s also get a little fancy and label the points for some example critters 🐱 so that we can get some intuition about the data in this scaling. (Plus, I wanted to try out the annotation tips from the R4DS book.)
# Create a separate dataframe of species to highlight
ex_mammals <- c(
"Domestic cat", "Human", "Dog", "Cow", "Rabbit",
"Big brown bat", "House mouse", "Horse", "Golden hamster"
)
# We will give some familiar species shorter names
renaming_rules <- c(
"Domestic cat" = "Cat",
"Golden hamster" = "Hamster",
"House mouse" = "Mouse"
)
ex_points <- msleep %>%
filter(name %in% ex_mammals) %>%
mutate(name = stringr::str_replace_all(name, renaming_rules))
# Define these labels only once for all the plots
lab_lines <- list(
brain_log = "Brain mass (kg., log-scaled)",
sleep_raw = "Sleep per day (hours)",
sleep_log = "Sleep per day (log-hours)"
)
ggplot(msleep) +
aes(x = brainwt, y = sleep_total) +
geom_point(color = "grey40") +
# Circles around highlighted points + labels
geom_point(size = 3, shape = 1, color = "grey40", data = ex_points) +
ggrepel::geom_text_repel(aes(label = name), data = ex_points) +
# Use log scaling on the x-axis
scale_x_log10(breaks = c(.001, .01, .1, 1)) +
labs(x = lab_lines$brain_log, y = lab_lines$sleep_raw)
As a child growing up on a dairy farm 🐮, it was remarkable to me how little I saw cows sleeping, compared to dogs or cats. Were they okay? Are they constantly tired and groggy? Maybe they are asleep when I’m asleep? Here, it looks like they just don’t need very much sleep.
Next, let’s fit a classical regression model. We will use a log-scaled sleep measure so that the regression line doesn’t imply negative sleep (even though brains never get that large).
m1_classical <- lm(log_sleep_total ~ log_brainwt, data = msleep)
arm::display(m1_classical)
#> lm(formula = log_sleep_total ~ log_brainwt, data = msleep)
#> coef.est coef.se
#> (Intercept) 0.74 0.04
#> log_brainwt -0.13 0.02
#> 
#> n = 56, k = 2
#> residual sd = 0.17, R-Squared = 0.40
We can interpret the model in the usual way: A mammal with 1 kg (0 log-kg) of brain mass sleeps 10^{0.74} = 5.5 hours per day. A mammal with a tenth of that brain mass (−1 log-kg) sleeps 10^{0.74 + 0.13} = 7.4 hours.
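To spell that arithmetic out, here is a quick sanity check in base R using the rounded coefficients from the model output above (toy values, not model code):

```r
# Rounded coefficients from the lm() output above
b0 <- 0.74    # intercept
b1 <- -0.13   # slope on log10(brain mass)
# A mammal with 1 kg of brain mass sits at 0 log-kg:
10^(b0 + b1 * 0)    # about 5.5 hours
# A tenth of that brain mass is one log10 unit lower (-1 log-kg):
10^(b0 + b1 * -1)   # about 7.4 hours
```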
We illustrate the regression results by plotting the predicted mean of y and its 95% confidence interval. This task is readily accomplished in ggplot2 using stat_smooth(). This function fits a model and plots the mean and CI for each aesthetic grouping of data^{1} in a plot.
ggplot(msleep) +
aes(x = log_brainwt, y = log_sleep_total) +
geom_point() +
stat_smooth(method = "lm", level = .95) +
scale_x_continuous(labels = function(x) 10 ^ x) +
labs(x = lab_lines$brain_log, y = lab_lines$sleep_log)
#> `geom_smooth()` using formula 'y ~ x'
This interval conveys some uncertainty in the estimate of the mean, but this interval has a frequentist interpretation which can be unintuitive for this sort of data.
Now, for the point of this post: What’s the Bayesian version of this kind of visualization? Specifically, we want to illustrate:
 Predictions from a regression model
 Some uncertainty about those predictions
 Raw data used to train the model
Option 1: The pile-of-lines plot
The regression line in the classical plot is just one particular line. It’s the line of best fit that satisfies a least-squares or maximum-likelihood objective. Our Bayesian model estimates an entire distribution of plausible regression lines. The first way to visualize our uncertainty is to plot our own line of best fit along with a sample of other lines from the posterior distribution of the model.
First, we fit a model with RStanARM using weakly informative priors.
library("rstanarm")
m1 <- stan_glm(
log_sleep_total ~ log_brainwt,
family = gaussian(),
data = msleep,
prior = normal(0, 3),
prior_intercept = normal(0, 3)
)
We now have 4,000 credible regression lines for our data.
summary(m1)
#>
#> Model Info:
#> function: stan_glm
#> family: gaussian [identity]
#> formula: log_sleep_total ~ log_brainwt
#> algorithm: sampling
#> sample: 4000 (posterior sample size)
#> priors: see help('prior_summary')
#> observations: 56
#> predictors: 2
#>
#> Estimates:
#> mean sd 10% 50% 90%
#> (Intercept) 0.7 0.0 0.7 0.7 0.8
#> log_brainwt -0.1 0.0 -0.2 -0.1 -0.1
#> sigma 0.2 0.0 0.2 0.2 0.2
#>
#> Fit Diagnostics:
#> mean sd 10% 50% 90%
#> mean_PPD 1.0 0.0 0.9 1.0 1.0
#>
#> The mean_ppd is the sample average posterior predictive distribution of the outcome variable (for details see help('summary.stanreg')).
#>
#> MCMC diagnostics
#> mcse Rhat n_eff
#> (Intercept) 0.0 1.0 3942
#> log_brainwt 0.0 1.0 3994
#> sigma 0.0 1.0 2941
#> mean_PPD 0.0 1.0 3750
#> log-posterior 0.0 1.0 1663
#>
#> For each parameter, mcse is Monte Carlo standard error, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence Rhat=1).
For models fit by RStanARM, the generic coefficient function coef() returns the median parameter values.
coef(m1)
#> (Intercept) log_brainwt
#> 0.7366180 -0.1256099
coef(m1_classical)
#> (Intercept) log_brainwt
#> 0.7363492 -0.1264049
We can see that the intercept and slope of the median line are pretty close to the classical model’s intercept and slope. The median line serves as the “point estimate” for our model: If we had to summarize the modeled relationship using just a single number for each parameter, we could use the medians.
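The point estimate here is nothing more than a column-wise median over posterior samples. A toy sketch with simulated draws (the means and sds below are just made-up values near the model estimates, not the actual posterior):

```r
set.seed(1)
# Fake posterior draws standing in for the model's 4,000 samples
draws <- data.frame(
  intercept   = rnorm(4000, mean = 0.737, sd = 0.04),
  log_brainwt = rnorm(4000, mean = -0.126, sd = 0.02)
)
# Column-wise medians play the role of coef() on the stanreg object
sapply(draws, median)
```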
One way to visualize our model therefore is to plot our point-estimate line plus a sample of the other credible lines from our model. First, we create a dataframe with all 4,000 regression lines.
# Coercing a model to a dataframe returns a dataframe of posterior samples.
# One row per sample.
fits <- m1 %>%
as_tibble() %>%
rename(intercept = `(Intercept)`) %>%
select(-sigma)
fits
#> # A tibble: 4,000 x 2
#> intercept log_brainwt
#> <dbl> <dbl>
#> 1 0.828 -0.0940
#> 2 0.662 -0.156
#> 3 0.722 -0.133
#> 4 0.718 -0.126
#> 5 0.742 -0.136
#> 6 0.742 -0.142
#> 7 0.706 -0.134
#> 8 0.771 -0.125
#> 9 0.731 -0.133
#> 10 0.741 -0.119
#> # ... with 3,990 more rows
We now plot 500 randomly sampled lines from our model as light, semi-transparent lines.
# aesthetic controllers
n_draws <- 500
alpha_level <- .15
col_draw <- "grey60"
col_median <- "#3366FF"
ggplot(msleep) +
aes(x = log_brainwt, y = log_sleep_total) +
# Plot a random sample of rows as gray semi-transparent lines
geom_abline(
aes(intercept = intercept, slope = log_brainwt),
data = sample_n(fits, n_draws),
color = col_draw,
alpha = alpha_level
) +
# Plot the median values in blue
geom_abline(
intercept = median(fits$intercept),
slope = median(fits$log_brainwt),
size = 1,
color = col_median
) +
geom_point() +
scale_x_continuous(labels = function(x) 10 ^ x) +
labs(x = lab_lines$brain_log, y = lab_lines$sleep_log)
Each of these light lines represents a credible prediction of the mean across the values of x. As these lines pile up on top of each other, they create an uncertainty band around our line of best fit. More plausible lines are more likely to be sampled, so these lines overlap and create a uniform color around the median line. As we move left or right, getting farther away from the mean of x, the lines start to fan out and we see very faint individual lines for some of the more extreme (yet still plausible) lines.
The advantage of this plot is that it is a direct visualization of posterior samples—one line per sample. It provides an estimate for the central tendency in the data but it also conveys uncertainty around that estimate.
This approach has limitations, however. Lines for subgroups require a little more effort to undo interactions. Also, the regression lines span the whole x-axis, which is not appropriate when subgroups only use a portion of the x-axis. (This limitation is solvable though.) Finally, I haven’t found good defaults for the aesthetic options: The number of samples, the colors to use, and the transparency level. One can lose lots and lots and lots of time fiddling with those knobs!
Option 2: Mean and its 95% interval
Another option is a direct port of the stat_smooth() plot: Draw a line of best fit and the 95% uncertainty interval around it.
To limit the amount of the x-axis used by the lines, we’re going to create a sequence of 80 points along the range of the data.
x_rng <- range(msleep$log_brainwt)
x_steps <- seq(x_rng[1], x_rng[2], length.out = 80)
new_data <- tibble(
observation = seq_along(x_steps),
log_brainwt = x_steps
)
new_data
#> # A tibble: 80 x 2
#> observation log_brainwt
#> <int> <dbl>
#> 1 1 -3.85
#> 2 2 -3.80
#> 3 3 -3.74
#> 4 4 -3.68
#> 5 5 -3.62
#> 6 6 -3.56
#> 7 7 -3.50
#> 8 8 -3.45
#> 9 9 -3.39
#> 10 10 -3.33
#> # ... with 70 more rows
The function posterior_linpred() returns the model-fitted means for a dataframe of new data. I say means because the function computes 80 predicted means for each sample from the posterior. The result is a 4000-by-80 matrix of fitted means.
pred_lin <- posterior_linpred(m1, newdata = new_data)
dim(pred_lin)
#> [1] 4000 80
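Conceptually, each row of that matrix is one posterior draw’s regression line evaluated at the 80 grid points. A hand-rolled sketch with toy coefficient draws (made-up values, not RStanARM internals) reproduces the shape:

```r
set.seed(2)
n_draws <- 4000
x_steps <- seq(-3.85, 0.75, length.out = 80)  # a grid like new_data
# Toy coefficient draws (not the real posterior)
betas <- cbind(
  rnorm(n_draws, 0.74, 0.04),   # intercepts
  rnorm(n_draws, -0.13, 0.02)   # slopes
)
# Each row: one draw's line evaluated at all 80 grid points
mat_means <- betas %*% rbind(1, x_steps)
dim(mat_means)  # 4000 80
```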
We are going to reduce this down to just a median and a 95% interval around each point. I do some tidying to get the data into a long format (one row per fitted mean per posterior sample), and then do a table-join with the observation column included in new_data. I store these steps in a function because I have to do them again later in this post.
tidy_predictions <- function(
mat_pred,
df_data,
obs_name = "observation",
prob_lwr = .025,
prob_upr = .975
) {
# Get dataframe with one row per fitted value per posterior sample
df_pred <- mat_pred %>%
as_tibble() %>%
setNames(seq_len(ncol(.))) %>%
tibble::rownames_to_column("posterior_sample") %>%
tidyr::gather_(obs_name, "fitted", setdiff(names(.), "posterior_sample"))
# Helps with joining later
class(df_pred[[obs_name]]) <- class(df_data[[obs_name]])
# Summarise prediction interval for each observation
df_pred %>%
group_by_(obs_name) %>%
summarise(
median = median(fitted),
lower = quantile(fitted, prob_lwr),
upper = quantile(fitted, prob_upr)
) %>%
left_join(df_data, by = obs_name)
}
df_pred_lin <- tidy_predictions(pred_lin, new_data)
#> Warning: `group_by_()` was deprecated in dplyr 0.7.0.
#> Please use `group_by()` instead.
#> See vignette('programming') for more help
df_pred_lin
#> # A tibble: 80 x 5
#> observation median lower upper log_brainwt
#> <int> <dbl> <dbl> <dbl> <dbl>
#> 1 1 1.22 1.13 1.32 -3.85
#> 2 2 1.21 1.12 1.31 -3.80
#> 3 3 1.21 1.12 1.30 -3.74
#> 4 4 1.20 1.11 1.29 -3.68
#> 5 5 1.19 1.11 1.28 -3.62
#> 6 6 1.19 1.10 1.27 -3.56
#> 7 7 1.18 1.10 1.27 -3.50
#> 8 8 1.17 1.09 1.26 -3.45
#> 9 9 1.16 1.08 1.25 -3.39
#> 10 10 1.16 1.08 1.24 -3.33
#> # ... with 70 more rows
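(As an aside: for a draws-by-observations matrix like pred_lin, base R can compute the same per-column summaries with apply(). The helper below is a hypothetical stand-in for tidy_predictions(), shown on a toy matrix, and it sidesteps the deprecated underscore verbs entirely.)

```r
# Hypothetical base-R helper: summarise a draws-by-observations matrix
summarise_columns <- function(mat_pred, prob_lwr = .025, prob_upr = .975) {
  data.frame(
    observation = seq_len(ncol(mat_pred)),
    median = apply(mat_pred, 2, median),
    lower  = apply(mat_pred, 2, quantile, probs = prob_lwr),
    upper  = apply(mat_pred, 2, quantile, probs = prob_upr)
  )
}
set.seed(3)
toy <- matrix(rnorm(4000 * 3), ncol = 3)  # toy stand-in for pred_lin
summarise_columns(toy)
```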
We can do the line-plus-interval plot using geom_ribbon() for the uncertainty band.
p_linpread <- ggplot(msleep) +
aes(x = log_brainwt) +
geom_ribbon(
aes(ymin = lower, ymax = upper),
data = df_pred_lin,
alpha = 0.4,
fill = "grey60"
) +
geom_line(
aes(y = median),
data = df_pred_lin,
colour = "#3366FF",
size = 1
) +
geom_point(aes(y = log_sleep_total)) +
scale_x_continuous(labels = function(x) 10 ^ x) +
labs(x = lab_lines$brain_log, y = lab_lines$sleep_log)
p_linpread
This plot is just like the stat_smooth() plot, except the interval here is interpreted in terms of post-data probabilities: We’re 95% certain—given the data, model and our prior information—that the “true” average sleep duration is contained in this interval. I put “true” in quotes because this is truth in the “small world” of the model, to quote Statistical Rethinking, not necessarily the real world.
Although the interpretation of the interval changes (compared to a classical confidence interval), its location barely changes at all. If we overlay a stat_smooth() layer onto this plot, we can see that the two sets of intervals are virtually identical. With this much data and for this simple a model, both types of models make very similar estimates.
p_linpread +
stat_smooth(aes(y = log_sleep_total), method = "lm")
#> `geom_smooth()` using formula 'y ~ x'
The previous plot illustrates one limitation of this approach: Pragmatically speaking, stat_smooth() basically does the same thing, and we’re not taking advantage of the affordances provided by our model. This is why RStanARM, in a kind of amusing way, disowns posterior_linpred() in its documentation:
This function is occasionally convenient, but it should be used sparingly. Inference and model checking should generally be carried out using the posterior predictive distribution (see posterior_predict).
Occasionally convenient. 😮 And elsewhere:
See also: posterior_predict to draw from the posterior predictive distribution of the outcome, which is almost always preferable.
Option 3: Mean and 95% interval for model-generated data
The reason why posterior_predict() is preferable is that it uses more information from our model, namely the error term sigma. posterior_linpred() predicts averages; posterior_predict() predicts new observations. This posterior predictive checking helps us confirm whether our model—a story of how the data could have been generated—can produce new data that resembles our data.
Here, we can use the function we defined earlier to get prediction intervals.
# Still a matrix with one row per posterior draw and one column per observation
pred_post <- posterior_predict(m1, newdata = new_data)
dim(pred_post)
#> [1] 4000 80
df_pred_post <- tidy_predictions(pred_post, new_data)
df_pred_post
#> # A tibble: 80 x 5
#> observation median lower upper log_brainwt
#> <int> <dbl> <dbl> <dbl> <dbl>
#> 1 1 1.22 0.856 1.57 -3.85
#> 2 2 1.21 0.864 1.58 -3.80
#> 3 3 1.20 0.842 1.56 -3.74
#> 4 4 1.20 0.843 1.56 -3.68
#> 5 5 1.20 0.834 1.54 -3.62
#> 6 6 1.19 0.823 1.54 -3.56
#> 7 7 1.18 0.827 1.54 -3.50
#> 8 8 1.17 0.818 1.51 -3.45
#> 9 9 1.16 0.802 1.53 -3.39
#> 10 10 1.16 0.817 1.51 -3.33
#> # ... with 70 more rows
And we can plot the interval in the same way.
ggplot(msleep) +
aes(x = log_brainwt) +
geom_ribbon(
aes(ymin = lower, ymax = upper),
data = df_pred_post,
alpha = 0.4,
fill = "grey60"
) +
geom_line(
aes(y = median),
data = df_pred_post,
colour = "#3366FF",
size = 1
) +
geom_point(aes(y = log_sleep_total)) +
scale_x_continuous(labels = function(x) 10 ^ x) +
labs(x = lab_lines$brain_log, y = lab_lines$sleep_log)
First, we can appreciate that this interval is much wider. That’s because the interval doesn’t summarize a particular statistic (like an average) but all of the observations that can be generated by our model. Okay, not all of the observations—just the 95% most probable observations.
Next, we can also appreciate that the line and the ribbon are jagged due to simulation randomness. Each prediction is a random number draw, and at each value of x, we have 4000 such random draws. We computed a median and 95% interval at each x, but due to randomness from simulating new data, these medians do not smoothly connect together in the plot. That’s okay, because these fluctuations are relatively small.
Finally, we can see that there are only two points outside of the interval. These appear to be the restless roe deer and the ever-sleepy giant armadillo. These two represent the main outliers for our model because they fall slightly outside of the 95% prediction interval. In this way, the posterior predictive interval can help us discover which data points are relative outliers for our model.
(Maybe outliers isn’t the right word. It makes perfect sense that 2/56 = 3.6% of the observations fall outside of the 95% interval.)
This posterior prediction plot does reveal a shortcoming of our model, when plotted in a different manner.
last_plot() +
geom_hline(yintercept = log10(24), color = "grey50") +
geom_label(x = 0, y = log10(24), label = "24 hours")
One faulty consequence of how our model was specified is that it predicts that some mammals sleep more than 24 hours per day—oh, what a life to live 😴.
Wrap up
In this post, I covered three different ways to plot the results of an RStanARM model, while demonstrating some of the key functions for working with RStanARM models. Time well spent, I think.
As for future directions, I learned about the under-development (as of November 2016) R package bayesplot by the Stan team. The package README shows off a lot of different ways to visualize posterior samples from a model. I’ll be sure to demo it on this dataset once it goes live.
Last knitted on 2021-02-15. Source code on GitHub.^{2}
That is, if we map the plot’s color aesthetic to a categorical variable in the data,
stat_smooth()
will fit a separate model for each color/category. I figured this out when I tried to write my own functionstat_smooth_stan()
based on ggplot2’s extensions vignette and noticed that RStanARM was printing out MCMC sampling information for each color/category of the data. ↩ 
sessioninfo::session_info()
#> (Full session info omitted: R version 4.0.3 (2020-10-10) on Windows 10 x64, run on 2021-02-15, with dplyr 1.0.4, ggplot2 3.3.3, rstan 2.21.2, and rstanarm 2.21.1, among other packages.) ↩