Load packages

# Load packages-----------------------------------------------------------------
library(readxl)
library(tidyverse)
library(lmerTest)
library(emmeans)
library(ggplot2)
library(kableExtra)
library(nlme)
library(MAW636)
library(broom.mixed)
library(brms)
library(sjPlot)
theme_set(theme_bw())

Load data

Converting to factor may help brms run faster.

phdat <- read.csv("01_organize/rumen_imp 2023.02.13 backup.csv") %>% 
  mutate(cow = as.character(cow),
         hr = as.factor(hr),
         period = factor(period, levels = c(paste0("P", 1:4))),
         day = factor(day, levels = c(paste0("Day", 1:4)))) %>% 
  dplyr::select(period, day, hr, cow, osc_lvl, prot_lvl, trt, rumen_ph)
    
str(phdat)
## 'data.frame':    640 obs. of  8 variables:
##  $ period  : Factor w/ 4 levels "P1","P2","P3",..: 1 2 3 4 1 2 3 4 1 2 ...
##  $ day     : Factor w/ 4 levels "Day1","Day2",..: 1 1 1 1 2 2 2 2 3 3 ...
##  $ hr      : Factor w/ 5 levels "6","8","10","12",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ cow     : chr  "8810" "8810" "8810" "8810" ...
##  $ osc_lvl : chr  "S" "S" "O" "O" ...
##  $ prot_lvl: chr  "LP" "HP" "HP" "LP" ...
##  $ trt     : chr  "SLP" "SHP" "OHP" "OLP" ...
##  $ rumen_ph: num  6.51 6.67 6.94 6.29 6.42 ...
hist(phdat$rumen_ph, main = "Ruminal pH", xlab = NULL)

Fit LMM Frequentist

ANOVA-style Expression

Same as in the production manuscript, except that Hour has more levels (5 instead of 2) and there is no cannulated effect; it is more similar to the plasma model in the intensive manuscript. Because there are 5 observations within each cow x period x day, an AR1 structure is included. This was not implemented for the total collection or plasma data, which had only 2 or 3 observations to model. The structure was selected on the basis of BIC (a sketch of that comparison appears after the lme() fit below).

The ANOVA expression emphasizes that we are modeling conditional means, i.e., means at different levels of the indices i, j, k, l, m, n.

\[ \begin{aligned} y_{i j k l m n}=\mu+ & E_j+P_k+F_l+P F_{k l}+D_m+H_n+\left(D_m \times H_n\right) \\ & +D_m\left(P_k+F_l+P F_{k l}\right) \\ & +H_n\left(P_k+F_l+P F_{k l}\right) \\ & +\left(D_m \times H_n\right)\left(P_k+F_l+P F_{k l}\right) \\ & +C_i+(C: E)_{i j}+(C: E: D)_{i j m}+\epsilon_{i j k l m n} \end{aligned}\]

Complete model:
experimental period (E_j, where j = 1, 2, 3, 4).
dietary CP level (P_k, where k = LP, HP). CP feeding pattern (F_l, where l = OF, SF). Interaction term between CP level and CP feeding pattern (PF_kl).
day (D_m, where m = 1 to 4) and hour (H_n, where n indexes the 5 sampling hours), plus their interactions with the treatment terms.
Random effects of cow (C_i, where i = 1 to 9), cow:period ((C:E)_ij), and cow:period:day ((C:E:D)_ijm).
Random error term (ϵ_ijklmn; N = 640).

The actual model has missingness (2.6% of observations) imputed, and 2 of the 32 cow:periods omitted (and not imputed).

lme() Expression

model_f = lme(rumen_ph ~ period + osc_lvl * prot_lvl * day * hr,
    random = ~ 1 | cow / period / day,
    correlation = corAR1(form = ~ 1 | cow / period / day), data = phdat, 
    control = lmeControl(opt = "optim")
  )
# summary(model_f, verbose = F) omitted, too verbose
tab_model(model_f)
  rumen ph
Predictors Estimates CI p
(Intercept) 6.56 6.37 – 6.75 <0.001
period [P2] 0.08 0.01 – 0.15 0.036
period [P3] -0.02 -0.10 – 0.05 0.550
period [P4] 0.18 0.10 – 0.26 <0.001
osc lvl [S] 0.12 -0.14 – 0.38 0.345
prot lvl [LP] 0.08 -0.18 – 0.34 0.505
day [Day2] 0.06 -0.19 – 0.30 0.630
day [Day3] 0.11 -0.14 – 0.35 0.381
day [Day4] -0.04 -0.29 – 0.20 0.726
hr [8] 0.28 0.05 – 0.51 0.016
hr [10] -0.23 -0.46 – -0.00 0.049
hr [12] -0.45 -0.68 – -0.22 <0.001
hr [19] -0.78 -1.00 – -0.57 <0.001
osc lvl [S] × prot lvl [LP] -0.13 -0.49 – 0.24 0.479
osc lvl [S] × day [Day2] -0.16 -0.51 – 0.18 0.352
osc lvl [S] × day [Day3] -0.25 -0.60 – 0.10 0.155
osc lvl [S] × day [Day4] -0.11 -0.46 – 0.24 0.528
prot lvl [LP] × day [Day2] -0.15 -0.49 – 0.20 0.397
prot lvl [LP] × day [Day3] -0.12 -0.47 – 0.23 0.493
prot lvl [LP] × day [Day4] 0.14 -0.21 – 0.49 0.423
osc lvl [S] × hr [8] -0.30 -0.62 – 0.03 0.073
osc lvl [S] × hr [10] -0.14 -0.46 – 0.19 0.412
osc lvl [S] × hr [12] -0.09 -0.42 – 0.23 0.567
osc lvl [S] × hr [19] -0.10 -0.40 – 0.20 0.506
prot lvl [LP] × hr [8] -0.36 -0.69 – -0.04 0.028
prot lvl [LP] × hr [10] -0.15 -0.47 – 0.18 0.375
prot lvl [LP] × hr [12] -0.24 -0.56 – 0.08 0.148
prot lvl [LP] × hr [19] -0.04 -0.34 – 0.26 0.775
day [Day2] × hr [8] -0.05 -0.38 – 0.27 0.740
day [Day3] × hr [8] -0.41 -0.73 – -0.08 0.014
day [Day4] × hr [8] -0.29 -0.61 – 0.04 0.085
day [Day2] × hr [10] 0.02 -0.31 – 0.34 0.907
day [Day3] × hr [10] -0.26 -0.58 – 0.07 0.121
day [Day4] × hr [10] -0.00 -0.33 – 0.32 0.996
day [Day2] × hr [12] 0.03 -0.30 – 0.35 0.871
day [Day3] × hr [12] -0.13 -0.45 – 0.20 0.440
day [Day4] × hr [12] 0.05 -0.28 – 0.37 0.777
day [Day2] × hr [19] -0.10 -0.40 – 0.20 0.507
day [Day3] × hr [19] -0.02 -0.32 – 0.28 0.888
day [Day4] × hr [19] 0.19 -0.11 – 0.49 0.211
(osc lvl [S] × prot lvl [LP]) × day [Day2] 0.35 -0.14 – 0.84 0.159
(osc lvl [S] × prot lvl [LP]) × day [Day3] 0.34 -0.15 – 0.83 0.172
(osc lvl [S] × prot lvl [LP]) × day [Day4] 0.13 -0.36 – 0.61 0.611
(osc lvl [S] × prot lvl [LP]) × hr [8] 0.38 -0.08 – 0.84 0.101
(osc lvl [S] × prot lvl [LP]) × hr [10] 0.16 -0.30 – 0.62 0.494
(osc lvl [S] × prot lvl [LP]) × hr [12] 0.17 -0.28 – 0.63 0.460
(osc lvl [S] × prot lvl [LP]) × hr [19] 0.21 -0.22 – 0.63 0.341
(osc lvl [S] × day [Day2]) × hr [8] 0.29 -0.17 – 0.75 0.217
(osc lvl [S] × day [Day3]) × hr [8] 0.62 0.16 – 1.08 0.008
(osc lvl [S] × day [Day4]) × hr [8] 0.54 0.08 – 1.00 0.022
(osc lvl [S] × day [Day2]) × hr [10] 0.06 -0.40 – 0.52 0.805
(osc lvl [S] × day [Day3]) × hr [10] 0.25 -0.21 – 0.71 0.292
(osc lvl [S] × day [Day4]) × hr [10] 0.15 -0.31 – 0.61 0.524
(osc lvl [S] × day [Day2]) × hr [12] 0.00 -0.45 – 0.46 0.998
(osc lvl [S] × day [Day3]) × hr [12] 0.25 -0.21 – 0.70 0.284
(osc lvl [S] × day [Day4]) × hr [12] -0.00 -0.46 – 0.45 0.990
(osc lvl [S] × day [Day2]) × hr [19] 0.37 -0.06 – 0.79 0.091
(osc lvl [S] × day [Day3]) × hr [19] 0.36 -0.06 – 0.79 0.094
(osc lvl [S] × day [Day4]) × hr [19] 0.04 -0.38 – 0.47 0.842
(prot lvl [LP] × day [Day2]) × hr [8] 0.37 -0.09 – 0.83 0.112
(prot lvl [LP] × day [Day3]) × hr [8] 0.43 -0.03 – 0.89 0.070
(prot lvl [LP] × day [Day4]) × hr [8] 0.25 -0.21 – 0.71 0.286
(prot lvl [LP] × day [Day2]) × hr [10] 0.00 -0.46 – 0.46 0.997
(prot lvl [LP] × day [Day3]) × hr [10] 0.35 -0.11 – 0.81 0.136
(prot lvl [LP] × day [Day4]) × hr [10] -0.08 -0.54 – 0.38 0.722
(prot lvl [LP] × day [Day2]) × hr [12] 0.11 -0.34 – 0.57 0.632
(prot lvl [LP] × day [Day3]) × hr [12] 0.24 -0.22 – 0.69 0.304
(prot lvl [LP] × day [Day4]) × hr [12] -0.00 -0.46 – 0.45 0.987
(prot lvl [LP] × day [Day2]) × hr [19] 0.19 -0.23 – 0.62 0.378
(prot lvl [LP] × day [Day3]) × hr [19] 0.02 -0.41 – 0.44 0.942
(prot lvl [LP] × day [Day4]) × hr [19] -0.17 -0.60 – 0.26 0.432
(osc lvl [S] × prot lvl [LP] × day [Day2]) × hr [8] -0.57 -1.22 – 0.08 0.086
(osc lvl [S] × prot lvl [LP] × day [Day3]) × hr [8] -0.65 -1.30 – -0.00 0.050
(osc lvl [S] × prot lvl [LP] × day [Day4]) × hr [8] -0.69 -1.34 – -0.04 0.037
(osc lvl [S] × prot lvl [LP] × day [Day2]) × hr [10] -0.05 -0.70 – 0.60 0.879
(osc lvl [S] × prot lvl [LP] × day [Day3]) × hr [10] -0.38 -1.03 – 0.27 0.256
(osc lvl [S] × prot lvl [LP] × day [Day4]) × hr [10] -0.30 -0.95 – 0.35 0.366
(osc lvl [S] × prot lvl [LP] × day [Day2]) × hr [12] -0.11 -0.76 – 0.53 0.727
(osc lvl [S] × prot lvl [LP] × day [Day3]) × hr [12] -0.32 -0.97 – 0.32 0.327
(osc lvl [S] × prot lvl [LP] × day [Day4]) × hr [12] -0.23 -0.88 – 0.41 0.478
(osc lvl [S] × prot lvl [LP] × day [Day2]) × hr [19] -0.57 -1.17 – 0.03 0.064
(osc lvl [S] × prot lvl [LP] × day [Day3]) × hr [19] -0.62 -1.23 – -0.02 0.042
(osc lvl [S] × prot lvl [LP] × day [Day4]) × hr [19] -0.23 -0.83 – 0.37 0.456
Random Effects
σ2 0.05
τ00 day 0.09
τ00 period 0.01
τ00 cow 0.08
N day 4
N period 4
N cow 9
Observations 640
Marginal R2 / Conditional R2 0.675 / NA

Because it is somewhat complex, nlme applies the AR1 structure only at the lowest grouping level, i.e., within a cow x period x day. In other words, the timepoints are autocorrelated within a cow x period x day. There are random intercepts for cow, cow:period, and cow:period:day.
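As a check, the estimated autocorrelation parameter can be pulled out of the fitted object (a small sketch against the model_f fit above):

# Extract the estimated AR(1) coefficient (phi) on its natural scale
coef(model_f$modelStruct$corStruct, unconstrained = FALSE)
# intervals(model_f) also reports an approximate 95% CI for phi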

Optimizer changed to optim due to convergence issues with default.
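The BIC comparison mentioned above can be sketched like this (the no-AR1 model is refit here rather than loaded from a stored object):

m_iid <- update(model_f, correlation = NULL) # same model, independent errors
BIC(m_iid, model_f) # lower BIC favors keeping the AR1 structure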

Matrix Expression

\[ \mathbf{Y = XB + Z\gamma + {\varepsilon}} \] where
- \(\mathbf{Y}\) is the \(n\)-by-1 response vector, and \(n\) is the number of observations.
- \(\mathbf{X}\) is an \(n\)-by-\(p\) fixed-effects design matrix.
- \(\mathbf{B}\) is a \(p\)-by-1 fixed-effects vector.
- \(\mathbf{Z}\) is an \(n\)-by-\(q\) random-effects design matrix.
- \(\gamma\) is a \(q\)-by-1 random-effects vector.
- \(\varepsilon\) is the \(n\)-by-1 observation error vector.

where the random effects and errors have expectation zero:
\[ E\begin{pmatrix}\gamma \\ \varepsilon \end{pmatrix} = \begin{bmatrix}0 \\ 0 \end{bmatrix} \] Because the other parts (e.g., \(\mathbf{X}\), \(\mathbf{B}\)) are fixed and do not vary, we only describe the variance-covariance matrices of the random effects and the error term, which are assumed independent of each other. \[var\begin{pmatrix}\gamma \\ \varepsilon \end{pmatrix} = \begin{bmatrix} \mathbf{G}& O \\ O & \mathbf{R} \end{bmatrix}\] The \(\mathbf{R}\) matrix is structured such that the covariance between observations at times \(t_i\) and \(t_{i'}\) is:

\[ \frac{\sigma^2}{1-\phi^2}\phi^{|t_i-t_{i'}|}\]

# example
delta <- c(0,1,3,4,8) # gaps between successive sampling times
phi <- .5 # autocorrelation parameter
sigma <- 1 # innovation standard deviation
t <- cumsum(delta);t
## [1]  0  1  4  8 16
outer(t, t, "-")
##      [,1] [,2] [,3] [,4] [,5]
## [1,]    0   -1   -4   -8  -16
## [2,]    1    0   -3   -7  -15
## [3,]    4    3    0   -4  -12
## [4,]    8    7    4    0   -8
## [5,]   16   15   12    8    0
Sigma <- sigma^2/(1-phi^2)*phi^abs(outer(t,t,"-"))
Sigma
##              [,1]         [,2]         [,3]        [,4]         [,5]
## [1,] 1.333333e+00 0.6666666667 0.0833333333 0.005208333 2.034505e-05
## [2,] 6.666667e-01 1.3333333333 0.1666666667 0.010416667 4.069010e-05
## [3,] 8.333333e-02 0.1666666667 1.3333333333 0.083333333 3.255208e-04
## [4,] 5.208333e-03 0.0104166667 0.0833333333 1.333333333 5.208333e-03
## [5,] 2.034505e-05 0.0000406901 0.0003255208 0.005208333 1.333333e+00

where phi is the autocorrelation parameter; the covariance decays geometrically as the time separation grows.
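Rescaling Sigma to a correlation matrix shows the \(\phi^{|t_i-t_{i'}|}\) decay directly:

# The implied correlation matrix is phi^|ti - ti'|
cov2cor(Sigma)
all.equal(cov2cor(Sigma), phi^abs(outer(t, t, "-")))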

Fit LMM Bayesian

Model Expression

The model has the same form, except we will specify priors.

NOTE: this is a different mu than before! I am trying to write this out but am not sure I have it correct yet. I tried to stay away from Greek letters to make it simpler to translate and match with the frequentist version. The term order is a little different. Treating the intercept alpha as separate from the X matrix is not typical.

B, alpha, and sigma_e are population-level terms. I'm not sure it is typical to write mu with indices.

A prior can also be set on the autocorrelation parameter (it appears as class ar in the get_prior() output below), although the default is left unchanged here.

\[ \begin{aligned} y_{i j k l m n} &\sim Normal(\mu_{i j k l m n}, \sigma_e) \\ \mu_{i j k l m n} &= \alpha + \mathbf{XB} + C_i+(C: E)_{i j}+(C: E: D)_{i j m} \\ C_i &\sim Normal(0,\sigma_{C_i}) \\ (C: E)_{i j} &\sim Normal(0,\sigma_{(C: E)_{i j}}) \\ (C: E: D)_{i j m} &\sim Normal(0,\sigma_{(C: E: D)_{i j m}}) \\ \alpha &\sim Normal(7,10) \\ \text{class of B } &\sim Normal(0,10) \\ \sigma_{e} &\sim HalfCauchy(0,10) \\ \sigma_{C_i} &\sim HalfCauchy(0,10) \\ \sigma_{(C: E)_{i j}}&\sim HalfCauchy(0,10) \\ \sigma_{(C: E: D)_{i j m}}&\sim HalfCauchy(0,10) \\ \end{aligned} \] In this case, a global intercept (alpha) is modeled, analogous to the mu in the earlier notation. ANOVA notation does not express coefficients separately, so the fixed effects are described with the XB matrix notation.

X is still assumed fixed and known. Because the model has only random intercepts (no random slopes), the variance components are independent and there is no need for a covariance matrix relating intercepts and slopes. The C_i, C:E, and C:E:D terms could also be considered part of the intercept class.

The global intercept gets its own prior, separate from the error term.

The sigma_e term in the first expression is the residual standard deviation; the group-level variance components enter through mu.

Priors should be realistic given scientific knowledge. For example, we know pH must lie between 0 and 14.

Get priors using lme-style syntax

## Get all parameters and parameter classes on which priors can be set
prior <- get_prior(
  rumen_ph ~ period + osc_lvl * prot_lvl * day * hr + (1 | cow/period/day) +
    ar(gr = cow:period:day),
  data = phdat, family = gaussian()
)
prior
##                   prior     class                             coef
##                  (flat)        ar                                 
##                  (flat)         b                                 
##                  (flat)         b                          dayDay2
##                  (flat)         b                     dayDay2:hr10
##                  (flat)         b                     dayDay2:hr12
##                  (flat)         b                     dayDay2:hr19
##                  (flat)         b                      dayDay2:hr8
##                  (flat)         b                          dayDay3
##                  (flat)         b                     dayDay3:hr10
##                  (flat)         b                     dayDay3:hr12
##                  (flat)         b                     dayDay3:hr19
##                  (flat)         b                      dayDay3:hr8
##                  (flat)         b                          dayDay4
##                  (flat)         b                     dayDay4:hr10
##                  (flat)         b                     dayDay4:hr12
##                  (flat)         b                     dayDay4:hr19
##                  (flat)         b                      dayDay4:hr8
##                  (flat)         b                             hr10
##                  (flat)         b                             hr12
##                  (flat)         b                             hr19
##                  (flat)         b                              hr8
##                  (flat)         b                         osc_lvlS
##                  (flat)         b                 osc_lvlS:dayDay2
##                  (flat)         b            osc_lvlS:dayDay2:hr10
##                  (flat)         b            osc_lvlS:dayDay2:hr12
##                  (flat)         b            osc_lvlS:dayDay2:hr19
##                  (flat)         b             osc_lvlS:dayDay2:hr8
##                  (flat)         b                 osc_lvlS:dayDay3
##                  (flat)         b            osc_lvlS:dayDay3:hr10
##                  (flat)         b            osc_lvlS:dayDay3:hr12
##                  (flat)         b            osc_lvlS:dayDay3:hr19
##                  (flat)         b             osc_lvlS:dayDay3:hr8
##                  (flat)         b                 osc_lvlS:dayDay4
##                  (flat)         b            osc_lvlS:dayDay4:hr10
##                  (flat)         b            osc_lvlS:dayDay4:hr12
##                  (flat)         b            osc_lvlS:dayDay4:hr19
##                  (flat)         b             osc_lvlS:dayDay4:hr8
##                  (flat)         b                    osc_lvlS:hr10
##                  (flat)         b                    osc_lvlS:hr12
##                  (flat)         b                    osc_lvlS:hr19
##                  (flat)         b                     osc_lvlS:hr8
##                  (flat)         b              osc_lvlS:prot_lvlLP
##                  (flat)         b      osc_lvlS:prot_lvlLP:dayDay2
##                  (flat)         b osc_lvlS:prot_lvlLP:dayDay2:hr10
##                  (flat)         b osc_lvlS:prot_lvlLP:dayDay2:hr12
##                  (flat)         b osc_lvlS:prot_lvlLP:dayDay2:hr19
##                  (flat)         b  osc_lvlS:prot_lvlLP:dayDay2:hr8
##                  (flat)         b      osc_lvlS:prot_lvlLP:dayDay3
##                  (flat)         b osc_lvlS:prot_lvlLP:dayDay3:hr10
##                  (flat)         b osc_lvlS:prot_lvlLP:dayDay3:hr12
##                  (flat)         b osc_lvlS:prot_lvlLP:dayDay3:hr19
##                  (flat)         b  osc_lvlS:prot_lvlLP:dayDay3:hr8
##                  (flat)         b      osc_lvlS:prot_lvlLP:dayDay4
##                  (flat)         b osc_lvlS:prot_lvlLP:dayDay4:hr10
##                  (flat)         b osc_lvlS:prot_lvlLP:dayDay4:hr12
##                  (flat)         b osc_lvlS:prot_lvlLP:dayDay4:hr19
##                  (flat)         b  osc_lvlS:prot_lvlLP:dayDay4:hr8
##                  (flat)         b         osc_lvlS:prot_lvlLP:hr10
##                  (flat)         b         osc_lvlS:prot_lvlLP:hr12
##                  (flat)         b         osc_lvlS:prot_lvlLP:hr19
##                  (flat)         b          osc_lvlS:prot_lvlLP:hr8
##                  (flat)         b                         periodP2
##                  (flat)         b                         periodP3
##                  (flat)         b                         periodP4
##                  (flat)         b                       prot_lvlLP
##                  (flat)         b               prot_lvlLP:dayDay2
##                  (flat)         b          prot_lvlLP:dayDay2:hr10
##                  (flat)         b          prot_lvlLP:dayDay2:hr12
##                  (flat)         b          prot_lvlLP:dayDay2:hr19
##                  (flat)         b           prot_lvlLP:dayDay2:hr8
##                  (flat)         b               prot_lvlLP:dayDay3
##                  (flat)         b          prot_lvlLP:dayDay3:hr10
##                  (flat)         b          prot_lvlLP:dayDay3:hr12
##                  (flat)         b          prot_lvlLP:dayDay3:hr19
##                  (flat)         b           prot_lvlLP:dayDay3:hr8
##                  (flat)         b               prot_lvlLP:dayDay4
##                  (flat)         b          prot_lvlLP:dayDay4:hr10
##                  (flat)         b          prot_lvlLP:dayDay4:hr12
##                  (flat)         b          prot_lvlLP:dayDay4:hr19
##                  (flat)         b           prot_lvlLP:dayDay4:hr8
##                  (flat)         b                  prot_lvlLP:hr10
##                  (flat)         b                  prot_lvlLP:hr12
##                  (flat)         b                  prot_lvlLP:hr19
##                  (flat)         b                   prot_lvlLP:hr8
##  student_t(3, 6.4, 2.5) Intercept                                 
##    student_t(3, 0, 2.5)        sd                                 
##    student_t(3, 0, 2.5)        sd                                 
##    student_t(3, 0, 2.5)        sd                        Intercept
##    student_t(3, 0, 2.5)        sd                                 
##    student_t(3, 0, 2.5)        sd                        Intercept
##    student_t(3, 0, 2.5)        sd                                 
##    student_t(3, 0, 2.5)        sd                        Intercept
##    student_t(3, 0, 2.5)     sigma                                 
##           group resp dpar nlpar lb ub       source
##                                            default
##                                            default
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                       (vectorized)
##                                            default
##                                  0         default
##             cow                  0    (vectorized)
##             cow                  0    (vectorized)
##      cow:period                  0    (vectorized)
##      cow:period                  0    (vectorized)
##  cow:period:day                  0    (vectorized)
##  cow:period:day                  0    (vectorized)
##                                  0         default

Set priors

You can set priors on an entire class of parameters (e.g., sd, b, sigma) or on individual coefficients. When cauchy() is applied to sd or sigma, it is actually a half-Cauchy, so the variance parameters are constrained to be positive. The Cauchy has thicker tails than a normal distribution, which allows for more aberrant observations. We will set weak priors.
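To see how weak these priors are relative to the pH scale (0 to 14), the densities can be plotted:

# Visualize the priors relative to the plausible pH range
curve(dnorm(x, 7, 10), from = -20, to = 30, ylab = "density") # intercept prior
curve(dcauchy(x, 0, 10), from = 0, to = 50, ylab = "density") # half-Cauchy for SD terms (positive half only)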

prior3 = c(
  prior(normal(7,10), class = Intercept),
  prior(normal(0,10), class = b), # speeds up relative to default flat prior
  prior(cauchy(0,10), class = sd), # variance components
  prior(cauchy(0,10), class = sigma) # residual variance
)
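Because get_prior() listed an ar class above, a prior on the autocorrelation coefficient could be added in the same way (a sketch; not used in the fit below):

prior4 = c(
  prior3,
  prior(normal(0, .5), class = ar) # brms constrains ar to (-1, 1)
)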

brm() Expression

Cut iterations to 5000 for speed because this is just for fun.

t1 = Sys.time()
bmod3 = brm(rumen_ph~period + osc_lvl * prot_lvl * day * hr + (1|cow/period/day) +
    ar(gr = cow:period:day), 
            data = phdat, family = gaussian(),
            prior = prior3,
            warmup = 2000, iter = 5000)
## Compiling Stan program...
## Start sampling
## 
## SAMPLING FOR MODEL '6e174f85b3ec0adb19023fe1609e678b' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0.000446 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 4.46 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 5000 [  0%]  (Warmup)
## Chain 1: Iteration:  500 / 5000 [ 10%]  (Warmup)
## Chain 1: Iteration: 1000 / 5000 [ 20%]  (Warmup)
## Chain 1: Iteration: 1500 / 5000 [ 30%]  (Warmup)
## Chain 1: Iteration: 2000 / 5000 [ 40%]  (Warmup)
## Chain 1: Iteration: 2001 / 5000 [ 40%]  (Sampling)
## Chain 1: Iteration: 2500 / 5000 [ 50%]  (Sampling)
## Chain 1: Iteration: 3000 / 5000 [ 60%]  (Sampling)
## Chain 1: Iteration: 3500 / 5000 [ 70%]  (Sampling)
## Chain 1: Iteration: 4000 / 5000 [ 80%]  (Sampling)
## Chain 1: Iteration: 4500 / 5000 [ 90%]  (Sampling)
## Chain 1: Iteration: 5000 / 5000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 35.5926 seconds (Warm-up)
## Chain 1:                39.1338 seconds (Sampling)
## Chain 1:                74.7264 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL '6e174f85b3ec0adb19023fe1609e678b' NOW (CHAIN 2).
## Chain 2: 
## Chain 2: Gradient evaluation took 0.000226 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 2.26 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:    1 / 5000 [  0%]  (Warmup)
## Chain 2: Iteration:  500 / 5000 [ 10%]  (Warmup)
## Chain 2: Iteration: 1000 / 5000 [ 20%]  (Warmup)
## Chain 2: Iteration: 1500 / 5000 [ 30%]  (Warmup)
## Chain 2: Iteration: 2000 / 5000 [ 40%]  (Warmup)
## Chain 2: Iteration: 2001 / 5000 [ 40%]  (Sampling)
## Chain 2: Iteration: 2500 / 5000 [ 50%]  (Sampling)
## Chain 2: Iteration: 3000 / 5000 [ 60%]  (Sampling)
## Chain 2: Iteration: 3500 / 5000 [ 70%]  (Sampling)
## Chain 2: Iteration: 4000 / 5000 [ 80%]  (Sampling)
## Chain 2: Iteration: 4500 / 5000 [ 90%]  (Sampling)
## Chain 2: Iteration: 5000 / 5000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 34.1584 seconds (Warm-up)
## Chain 2:                38.9771 seconds (Sampling)
## Chain 2:                73.1354 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL '6e174f85b3ec0adb19023fe1609e678b' NOW (CHAIN 3).
## Chain 3: 
## Chain 3: Gradient evaluation took 0.000225 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 2.25 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:    1 / 5000 [  0%]  (Warmup)
## Chain 3: Iteration:  500 / 5000 [ 10%]  (Warmup)
## Chain 3: Iteration: 1000 / 5000 [ 20%]  (Warmup)
## Chain 3: Iteration: 1500 / 5000 [ 30%]  (Warmup)
## Chain 3: Iteration: 2000 / 5000 [ 40%]  (Warmup)
## Chain 3: Iteration: 2001 / 5000 [ 40%]  (Sampling)
## Chain 3: Iteration: 2500 / 5000 [ 50%]  (Sampling)
## Chain 3: Iteration: 3000 / 5000 [ 60%]  (Sampling)
## Chain 3: Iteration: 3500 / 5000 [ 70%]  (Sampling)
## Chain 3: Iteration: 4000 / 5000 [ 80%]  (Sampling)
## Chain 3: Iteration: 4500 / 5000 [ 90%]  (Sampling)
## Chain 3: Iteration: 5000 / 5000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 34.0676 seconds (Warm-up)
## Chain 3:                39.0733 seconds (Sampling)
## Chain 3:                73.1409 seconds (Total)
## Chain 3: 
## 
## SAMPLING FOR MODEL '6e174f85b3ec0adb19023fe1609e678b' NOW (CHAIN 4).
## Chain 4: 
## Chain 4: Gradient evaluation took 0.000208 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 2.08 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4: 
## Chain 4: 
## Chain 4: Iteration:    1 / 5000 [  0%]  (Warmup)
## Chain 4: Iteration:  500 / 5000 [ 10%]  (Warmup)
## Chain 4: Iteration: 1000 / 5000 [ 20%]  (Warmup)
## Chain 4: Iteration: 1500 / 5000 [ 30%]  (Warmup)
## Chain 4: Iteration: 2000 / 5000 [ 40%]  (Warmup)
## Chain 4: Iteration: 2001 / 5000 [ 40%]  (Sampling)
## Chain 4: Iteration: 2500 / 5000 [ 50%]  (Sampling)
## Chain 4: Iteration: 3000 / 5000 [ 60%]  (Sampling)
## Chain 4: Iteration: 3500 / 5000 [ 70%]  (Sampling)
## Chain 4: Iteration: 4000 / 5000 [ 80%]  (Sampling)
## Chain 4: Iteration: 4500 / 5000 [ 90%]  (Sampling)
## Chain 4: Iteration: 5000 / 5000 [100%]  (Sampling)
## Chain 4: 
## Chain 4:  Elapsed Time: 36.2865 seconds (Warm-up)
## Chain 4:                38.3825 seconds (Sampling)
## Chain 4:                74.669 seconds (Total)
## Chain 4:
t2 = Sys.time()
#summary(bmod3)
tab_model(bmod3)
  rumen ph
Predictors Estimates CI (95%)
Intercept 6.56 6.37 – 6.76
periodP2 0.08 0.00 – 0.16
periodP3 -0.02 -0.10 – 0.06
periodP4 0.18 0.10 – 0.26
osc_lvlS 0.12 -0.12 – 0.36
prot_lvlLP 0.08 -0.16 – 0.33
dayDay2 0.06 -0.19 – 0.30
dayDay3 0.11 -0.13 – 0.35
dayDay4 -0.04 -0.29 – 0.20
hr: hr 8 0.28 0.04 – 0.51
hr: hr 10 -0.23 -0.46 – 0.00
hr: hr 12 -0.45 -0.67 – -0.22
hr: hr 19 -0.79 -1.00 – -0.57
osc_lvlS:prot_lvlLP -0.13 -0.47 – 0.21
osc_lvlS:dayDay2 -0.16 -0.50 – 0.17
osc_lvlS:dayDay3 -0.25 -0.59 – 0.10
osc_lvlS:dayDay4 -0.11 -0.45 – 0.23
prot_lvlLP:dayDay2 -0.15 -0.48 – 0.20
prot_lvlLP:dayDay3 -0.12 -0.46 – 0.23
prot_lvlLP:dayDay4 0.14 -0.20 – 0.48
osc_lvlS:hr8 -0.30 -0.62 – 0.04
osc_lvlS:hr10 -0.14 -0.46 – 0.19
osc_lvlS:hr12 -0.10 -0.41 – 0.22
osc_lvlS:hr19 -0.10 -0.41 – 0.20
prot_lvlLP:hr8 -0.36 -0.69 – -0.03
prot_lvlLP:hr10 -0.15 -0.48 – 0.18
prot_lvlLP:hr12 -0.24 -0.56 – 0.08
prot_lvlLP:hr19 -0.04 -0.35 – 0.26
dayDay2:hr8 -0.05 -0.38 – 0.27
dayDay3:hr8 -0.40 -0.73 – -0.07
dayDay4:hr8 -0.28 -0.61 – 0.04
dayDay2:hr10 0.02 -0.30 – 0.35
dayDay3:hr10 -0.25 -0.59 – 0.08
dayDay4:hr10 -0.00 -0.33 – 0.32
dayDay2:hr12 0.03 -0.29 – 0.35
dayDay3:hr12 -0.12 -0.45 – 0.19
dayDay4:hr12 0.05 -0.28 – 0.36
dayDay2:hr19 -0.10 -0.40 – 0.20
dayDay3:hr19 -0.02 -0.32 – 0.28
dayDay4:hr19 0.19 -0.11 – 0.50
osc_lvlS:prot_lvlLP:dayDay2 0.35 -0.12 – 0.82
osc_lvlS:prot_lvlLP:dayDay3 0.34 -0.14 – 0.82
osc_lvlS:prot_lvlLP:dayDay4 0.13 -0.36 – 0.61
osc_lvlS:prot_lvlLP:hr8 0.39 -0.08 – 0.83
osc_lvlS:prot_lvlLP:hr10 0.16 -0.30 – 0.63
osc_lvlS:prot_lvlLP:hr12 0.18 -0.28 – 0.62
osc_lvlS:prot_lvlLP:hr19 0.21 -0.20 – 0.63
osc_lvlS:dayDay2:hr8 0.29 -0.17 – 0.74
osc_lvlS:dayDay3:hr8 0.62 0.15 – 1.08
osc_lvlS:dayDay4:hr8 0.54 0.09 – 1.00
osc_lvlS:dayDay2:hr10 0.06 -0.40 – 0.52
osc_lvlS:dayDay3:hr10 0.24 -0.23 – 0.71
osc_lvlS:dayDay4:hr10 0.15 -0.31 – 0.61
osc_lvlS:dayDay2:hr12 -0.00 -0.46 – 0.45
osc_lvlS:dayDay3:hr12 0.25 -0.20 – 0.70
osc_lvlS:dayDay4:hr12 -0.00 -0.45 – 0.45
osc_lvlS:dayDay2:hr19 0.36 -0.06 – 0.79
osc_lvlS:dayDay3:hr19 0.36 -0.06 – 0.79
osc_lvlS:dayDay4:hr19 0.04 -0.38 – 0.47
prot_lvlLP:dayDay2:hr8 0.37 -0.10 – 0.83
prot_lvlLP:dayDay3:hr8 0.42 -0.04 – 0.88
prot_lvlLP:dayDay4:hr8 0.25 -0.21 – 0.72
prot_lvlLP:dayDay2:hr10 -0.00 -0.46 – 0.46
prot_lvlLP:dayDay3:hr10 0.34 -0.11 – 0.82
prot_lvlLP:dayDay4:hr10 -0.08 -0.54 – 0.38
prot_lvlLP:dayDay2:hr12 0.11 -0.34 – 0.56
prot_lvlLP:dayDay3:hr12 0.24 -0.21 – 0.70
prot_lvlLP:dayDay4:hr12 -0.00 -0.45 – 0.45
prot_lvlLP:dayDay2:hr19 0.19 -0.24 – 0.62
prot_lvlLP:dayDay3:hr19 0.01 -0.41 – 0.44
prot_lvlLP:dayDay4:hr19 -0.17 -0.60 – 0.26
osc_lvlS:prot_lvlLP:dayDay2:hr8 -0.57 -1.21 – 0.08
osc_lvlS:prot_lvlLP:dayDay3:hr8 -0.65 -1.30 – 0.00
osc_lvlS:prot_lvlLP:dayDay4:hr8 -0.69 -1.35 – -0.05
osc_lvlS:prot_lvlLP:dayDay2:hr10 -0.05 -0.70 – 0.59
osc_lvlS:prot_lvlLP:dayDay3:hr10 -0.37 -1.04 – 0.28
osc_lvlS:prot_lvlLP:dayDay4:hr10 -0.31 -0.97 – 0.35
osc_lvlS:prot_lvlLP:dayDay2:hr12 -0.12 -0.74 – 0.53
osc_lvlS:prot_lvlLP:dayDay3:hr12 -0.32 -0.97 – 0.32
osc_lvlS:prot_lvlLP:dayDay4:hr12 -0.24 -0.88 – 0.40
osc_lvlS:prot_lvlLP:dayDay2:hr19 -0.57 -1.16 – 0.02
osc_lvlS:prot_lvlLP:dayDay3:hr19 -0.62 -1.22 – -0.03
osc_lvlS:prot_lvlLP:dayDay4:hr19 -0.23 -0.83 – 0.38
Random Effects
σ2 0.01
τ00 0.18
ICC 0.06
N day 4
N period 4
N cow 9
Observations 640
Marginal R2 / Conditional R2 0.303 / 0.309

Time elapsed

t2 - t1
## Time difference of 5.27798 mins

See Stan code

This is the Stan code compiled at runtime. brms does not appear to cache the compiled code across sessions by default, although brm(file = ...) can save the fitted model for reuse. The code may be helpful for checking that the model expression above is correct.

stancode(bmod3)
## // generated with brms 2.19.0
## functions {
## }
## data {
##   int<lower=1> N;  // total number of observations
##   vector[N] Y;  // response variable
##   int<lower=1> K;  // number of population-level effects
##   matrix[N, K] X;  // population-level design matrix
##   // data needed for ARMA correlations
##   int<lower=0> Kar;  // AR order
##   int<lower=0> Kma;  // MA order
##   // number of lags per observation
##   int<lower=0> J_lag[N];
##   // data for group-level effects of ID 1
##   int<lower=1> N_1;  // number of grouping levels
##   int<lower=1> M_1;  // number of coefficients per level
##   int<lower=1> J_1[N];  // grouping indicator per observation
##   // group-level predictor values
##   vector[N] Z_1_1;
##   // data for group-level effects of ID 2
##   int<lower=1> N_2;  // number of grouping levels
##   int<lower=1> M_2;  // number of coefficients per level
##   int<lower=1> J_2[N];  // grouping indicator per observation
##   // group-level predictor values
##   vector[N] Z_2_1;
##   // data for group-level effects of ID 3
##   int<lower=1> N_3;  // number of grouping levels
##   int<lower=1> M_3;  // number of coefficients per level
##   int<lower=1> J_3[N];  // grouping indicator per observation
##   // group-level predictor values
##   vector[N] Z_3_1;
##   int prior_only;  // should the likelihood be ignored?
## }
## transformed data {
##   int Kc = K - 1;
##   matrix[N, Kc] Xc;  // centered version of X without an intercept
##   vector[Kc] means_X;  // column means of X before centering
##   int max_lag = max(Kar, Kma);
##   for (i in 2:K) {
##     means_X[i - 1] = mean(X[, i]);
##     Xc[, i - 1] = X[, i] - means_X[i - 1];
##   }
## }
## parameters {
##   vector[Kc] b;  // population-level effects
##   real Intercept;  // temporary intercept for centered predictors
##   vector[Kar] ar;  // autoregressive coefficients
##   real<lower=0> sigma;  // dispersion parameter
##   vector<lower=0>[M_1] sd_1;  // group-level standard deviations
##   vector[N_1] z_1[M_1];  // standardized group-level effects
##   vector<lower=0>[M_2] sd_2;  // group-level standard deviations
##   vector[N_2] z_2[M_2];  // standardized group-level effects
##   vector<lower=0>[M_3] sd_3;  // group-level standard deviations
##   vector[N_3] z_3[M_3];  // standardized group-level effects
## }
## transformed parameters {
##   vector[N_1] r_1_1;  // actual group-level effects
##   vector[N_2] r_2_1;  // actual group-level effects
##   vector[N_3] r_3_1;  // actual group-level effects
##   real lprior = 0;  // prior contributions to the log posterior
##   r_1_1 = (sd_1[1] * (z_1[1]));
##   r_2_1 = (sd_2[1] * (z_2[1]));
##   r_3_1 = (sd_3[1] * (z_3[1]));
##   lprior += normal_lpdf(b | 0, 10);
##   lprior += normal_lpdf(Intercept | 7, 10);
##   lprior += cauchy_lpdf(sigma | 0, 10)
##     - 1 * cauchy_lccdf(0 | 0, 10);
##   lprior += cauchy_lpdf(sd_1 | 0, 10)
##     - 1 * cauchy_lccdf(0 | 0, 10);
##   lprior += cauchy_lpdf(sd_2 | 0, 10)
##     - 1 * cauchy_lccdf(0 | 0, 10);
##   lprior += cauchy_lpdf(sd_3 | 0, 10)
##     - 1 * cauchy_lccdf(0 | 0, 10);
## }
## model {
##   // likelihood including constants
##   if (!prior_only) {
##     // matrix storing lagged residuals
##     matrix[N, max_lag] Err = rep_matrix(0, N, max_lag);
##     vector[N] err;  // actual residuals
##     // initialize linear predictor term
##     vector[N] mu = rep_vector(0.0, N);
##     mu += Intercept + Xc * b;
##     for (n in 1:N) {
##       // add more terms to the linear predictor
##       mu[n] += r_1_1[J_1[n]] * Z_1_1[n] + r_2_1[J_2[n]] * Z_2_1[n] + r_3_1[J_3[n]] * Z_3_1[n];
##     }
##     // include ARMA terms
##     for (n in 1:N) {
##       err[n] = Y[n] - mu[n];
##       for (i in 1:J_lag[n]) {
##         Err[n + 1, i] = err[n + 1 - i];
##       }
##       mu[n] += Err[n, 1:Kar] * ar;
##     }
##     target += normal_lpdf(Y | mu, sigma);
##   }
##   // priors including constants
##   target += lprior;
##   target += std_normal_lpdf(z_1[1]);
##   target += std_normal_lpdf(z_2[1]);
##   target += std_normal_lpdf(z_3[1]);
## }
## generated quantities {
##   // actual population-level intercept
##   real b_Intercept = Intercept - dot_product(means_X, b);
## }

Diagnostics

pp_check(bmod3)
## Using 10 posterior draws for ppc type 'dens_overlay' by default.
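A few other standard brms checks are worth running (output omitted here):

# plot(bmod3)                  # trace and density plots per parameter
# pp_check(bmod3, ndraws = 50) # overlay more posterior draws
# max(rhat(bmod3))             # convergence: max R-hat should be near 1.00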

Compare Frequentist and Bayesian Estimates

b = tidy(bmod3) %>% mutate(model = "Bayesian")
## Warning in tidy.brmsfit(bmod3): some parameter names contain underscores: term
## naming may be unreliable!
f = tidy(model_f)%>% mutate(model = "Frequentist") # does not give random parameters
## Warning in tidy.lme(model_f): ran_pars not yet implemented for multiple levels
## of nesting
both = b %>% 
  full_join(f) 
## Joining with `by = join_by(effect, term, estimate, std.error, model)`
  # find duplicates, then remove them; random parameters cannot be compared with tidy()
both %>%
  dplyr::group_by(term, model) %>%
  dplyr::summarise(n = dplyr::n(), .groups = "drop") %>%
  dplyr::filter(n > 1L) 
## # A tibble: 1 × 3
##   term            model        n
##   <chr>           <chr>    <int>
## 1 sd__(Intercept) Bayesian     3
both_wide = both %>% 
  filter(!(effect == "ran_pars")) %>% 
  dplyr::select( term, model, estimate) %>% 
  pivot_wider( names_from = model, values_from = estimate) %>% 
  dplyr::select(term,  Bayesian, Frequentist) %>% 
  mutate(DifferenceBvsF = Bayesian - Frequentist) %>% 
  mutate(across(where(is.numeric), \(x) round(x, 3))) # anonymous function avoids the deprecated across(.fns, ...) form
kable(both_wide)
term Bayesian Frequentist DifferenceBvsF
(Intercept) 6.564 6.561 0.002
periodP2 0.083 0.080 0.002
periodP3 -0.022 -0.022 0.001
periodP4 0.177 0.179 -0.002
osc_lvlS 0.120 0.120 0.000
prot_lvlLP 0.084 0.084 0.000
dayDay2 0.057 0.059 -0.002
dayDay3 0.106 0.108 -0.002
dayDay4 -0.044 -0.043 -0.001
hr8 0.280 0.283 -0.002
hr10 -0.232 -0.231 -0.001
hr12 -0.447 -0.447 0.000
hr19 -0.786 -0.784 -0.002
osc_lvlS:prot_lvlLP -0.129 -0.126 -0.003
osc_lvlS:dayDay2 -0.162 -0.163 0.001
osc_lvlS:dayDay3 -0.249 -0.249 0.001
osc_lvlS:dayDay4 -0.112 -0.110 -0.001
prot_lvlLP:dayDay2 -0.147 -0.148 0.001
prot_lvlLP:dayDay3 -0.118 -0.120 0.002
prot_lvlLP:dayDay4 0.140 0.140 -0.001
osc_lvlS:hr8 -0.296 -0.297 0.001
osc_lvlS:hr10 -0.136 -0.136 0.000
osc_lvlS:hr12 -0.096 -0.094 -0.002
osc_lvlS:hr19 -0.101 -0.102 0.001
prot_lvlLP:hr8 -0.362 -0.364 0.002
prot_lvlLP:hr10 -0.146 -0.147 0.001
prot_lvlLP:hr12 -0.239 -0.238 -0.002
prot_lvlLP:hr19 -0.043 -0.044 0.001
dayDay2:hr8 -0.052 -0.055 0.003
dayDay3:hr8 -0.403 -0.406 0.003
dayDay4:hr8 -0.284 -0.286 0.003
dayDay2:hr10 0.022 0.019 0.003
dayDay3:hr10 -0.253 -0.257 0.003
dayDay4:hr10 0.000 -0.001 0.001
dayDay2:hr12 0.028 0.027 0.001
dayDay3:hr12 -0.126 -0.127 0.001
dayDay4:hr12 0.046 0.046 0.000
dayDay2:hr19 -0.098 -0.102 0.003
dayDay3:hr19 -0.018 -0.022 0.003
dayDay4:hr19 0.194 0.192 0.003
osc_lvlS:prot_lvlLP:dayDay2 0.351 0.349 0.002
osc_lvlS:prot_lvlLP:dayDay3 0.341 0.339 0.002
osc_lvlS:prot_lvlLP:dayDay4 0.130 0.125 0.004
osc_lvlS:prot_lvlLP:hr8 0.385 0.385 0.001
osc_lvlS:prot_lvlLP:hr10 0.162 0.160 0.002
osc_lvlS:prot_lvlLP:hr12 0.177 0.171 0.005
osc_lvlS:prot_lvlLP:hr19 0.208 0.207 0.001
osc_lvlS:dayDay2:hr8 0.287 0.289 -0.003
osc_lvlS:dayDay3:hr8 0.620 0.623 -0.003
osc_lvlS:dayDay4:hr8 0.539 0.540 -0.001
osc_lvlS:dayDay2:hr10 0.056 0.058 -0.002
osc_lvlS:dayDay3:hr10 0.244 0.247 -0.003
osc_lvlS:dayDay4:hr10 0.151 0.149 0.002
osc_lvlS:dayDay2:hr12 -0.001 0.001 -0.002
osc_lvlS:dayDay3:hr12 0.249 0.249 0.000
osc_lvlS:dayDay4:hr12 0.000 -0.003 0.003
osc_lvlS:dayDay2:hr19 0.364 0.367 -0.003
osc_lvlS:dayDay3:hr19 0.361 0.364 -0.003
osc_lvlS:dayDay4:hr19 0.043 0.043 0.000
prot_lvlLP:dayDay2:hr8 0.369 0.373 -0.004
prot_lvlLP:dayDay3:hr8 0.422 0.425 -0.004
prot_lvlLP:dayDay4:hr8 0.249 0.250 -0.001
prot_lvlLP:dayDay2:hr10 -0.001 0.001 -0.002
prot_lvlLP:dayDay3:hr10 0.345 0.349 -0.004
prot_lvlLP:dayDay4:hr10 -0.083 -0.083 0.001
prot_lvlLP:dayDay2:hr12 0.111 0.111 0.000
prot_lvlLP:dayDay3:hr12 0.237 0.239 -0.001
prot_lvlLP:dayDay4:hr12 -0.003 -0.004 0.001
prot_lvlLP:dayDay2:hr19 0.189 0.191 -0.002
prot_lvlLP:dayDay3:hr19 0.012 0.016 -0.004
prot_lvlLP:dayDay4:hr19 -0.170 -0.170 0.000
osc_lvlS:prot_lvlLP:dayDay2:hr8 -0.569 -0.569 0.001
osc_lvlS:prot_lvlLP:dayDay3:hr8 -0.652 -0.652 0.000
osc_lvlS:prot_lvlLP:dayDay4:hr8 -0.695 -0.693 -0.002
osc_lvlS:prot_lvlLP:dayDay2:hr10 -0.051 -0.050 -0.001
osc_lvlS:prot_lvlLP:dayDay3:hr10 -0.374 -0.376 0.002
osc_lvlS:prot_lvlLP:dayDay4:hr10 -0.304 -0.299 -0.005
osc_lvlS:prot_lvlLP:dayDay2:hr12 -0.117 -0.114 -0.003
osc_lvlS:prot_lvlLP:dayDay3:hr12 -0.324 -0.322 -0.002
osc_lvlS:prot_lvlLP:dayDay4:hr12 -0.237 -0.233 -0.005
osc_lvlS:prot_lvlLP:dayDay2:hr19 -0.567 -0.568 0.001
osc_lvlS:prot_lvlLP:dayDay3:hr19 -0.624 -0.625 0.001
osc_lvlS:prot_lvlLP:dayDay4:hr19 -0.232 -0.229 -0.003
both %>% 
  left_join(both_wide) %>% 
  filter(term != "(Intercept)") %>%  # much larger than other terms
  ggplot(aes(y = estimate, x = term, color = model)) + # differences are too small for the segments to show up
  geom_segment(aes(y = estimate - DifferenceBvsF, yend = estimate, xend = term, x = term))+
  geom_point() + coord_flip() + facet_wrap(~model) + theme(legend.position = "none") + labs(x = NULL, y = "Estimate")
## Joining with `by = join_by(term)`
## Warning: Removed 4 rows containing missing values (`geom_segment()`).

The fixed effects are very close.

Visualize autocorrelation

See how the phi parameter affects the degree of autocorrelation, using example data. The first image is close to the estimated phi in our data.

# corCAR1 (continuous-time AR1) is used here for illustration; it gives the
# same phi^|t - t'| decay pattern as the corAR1 in the model above
cs <- corCAR1(value = .2)
cst1 <- Initialize(cs, data = Orthodont)
cm <- as.matrix(cst1)
image(cm)

cs <- corCAR1(value = .9)
cst1 <- Initialize(cs, data = Orthodont)
cm <- as.matrix(cst1)
image(cm)

Estimated marginal means (emmeans)

Emmeans with lme()


fem = emmeans(model_f, ~osc_lvl*prot_lvl)
## NOTE: Results may be misleading due to involvement in interactions
fem 
##  osc_lvl prot_lvl emmean     SE df lower.CL upper.CL
##  O       HP         6.37 0.0396  8     6.28     6.46
##  S       HP         6.38 0.0396  8     6.29     6.47
##  O       LP         6.35 0.0396  8     6.25     6.44
##  S       LP         6.38 0.0396  8     6.29     6.47
## 
## Results are averaged over the levels of: period, day, hr 
## Degrees-of-freedom method: containment 
## Confidence level used: 0.95

Emmeans with brms()

HPD = highest posterior density interval, which defines the credible interval here: the narrowest region of the posterior containing the stated probability.

bem = emmeans(bmod3, ~osc_lvl*prot_lvl)
## Loading required namespace: rstanarm
## NOTE: Results may be misleading due to involvement in interactions
bem
##  osc_lvl prot_lvl emmean lower.HPD upper.HPD
##  O       HP         6.37      6.27      6.47
##  S       HP         6.38      6.28      6.47
##  O       LP         6.35      6.25      6.44
##  S       LP         6.38      6.29      6.48
## 
## Results are averaged over the levels of: period, day, hr 
## Point estimate displayed: median 
## HPD interval probability: 0.95
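Pairwise treatment comparisons use the same emmeans syntax on both objects (output omitted):

pairs(fem) # frequentist: differences with Tukey-adjusted p-values
pairs(bem) # Bayesian: posterior medians and HPD intervals of the differences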

Compare Emmeans

bem1 <- bem %>%
  data.frame() %>%
  rename(
    lower = lower.HPD,
    upper = upper.HPD
  ) %>% 
  mutate(model = "Bayesian")
fem1 <- fem %>%
  data.frame() %>%
  rename(
    lower = lower.CL,
    upper = upper.CL
  ) %>% 
  mutate(model = "Frequentist")

fem1 %>% full_join(bem1) %>% 
  ggplot(aes(x = interaction(osc_lvl, prot_lvl), y = emmean, color = model)) +
  geom_point(position = position_dodge(width = .5)) + 
  geom_errorbar(aes(ymin = lower, ymax = upper), position = position_dodge(width = .5)) +
  labs(x = NULL, y = "Estimate")
## Joining with `by = join_by(osc_lvl, prot_lvl, emmean, lower, upper, model)`

Interestingly, the intervals are slightly wider for the Bayesian model; both are 95% intervals.

Missing data imputation with brms?

There is a way to pool fits across multiple imputations from packages such as mice (brm_multiple()). Alternatively, missing values can be imputed simultaneously while modeling (mi() terms). The former is what I was interested in.
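A minimal sketch of the multiple-imputation route (object names hypothetical; not run):

# library(mice)
# imp <- mice(phdat_missing, m = 5) # phdat_missing: hypothetical data with NAs retained
# bmod_mi <- brm_multiple(rumen_ph ~ period + osc_lvl * prot_lvl * day * hr +
#     (1 | cow/period/day), data = imp, prior = prior3, family = gaussian())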
