import os
import sys

sys.path.insert(0, os.path.abspath("../../"))  # for running from the repo

try:
    from brmspy import brms
    import seaborn
except ImportError:
    %pip install -q brmspy seaborn
    from brmspy import brms

from brmspy import bf, set_rescor, lf
import pandas as pd
import arviz as az
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

sns.set_style("darkgrid")

# brms.install_runtime()  # uncomment on first use to set up the R runtime
brms.install_rpackage("MCMCglmm")
R callback write-console: Error in loadNamespace(x) : there is no package called ‘cmdstanr’
R callback write-console: CmdStan path set to: /Users/sebastian/.brmspy/runtime/macos-arm64-r4.5-0.2.0/cmdstan
[brmspy] MCMCglmm 2.36 already installed.
Introduction
In the present example, we want to discuss how to specify multivariate multilevel models using brms. We call a model multivariate if it contains multiple response variables, each being predicted by its own set of predictors. Consider an example from biology. Hadfield, Nutall, Osorio, and Owens (2007) analyzed data of the Eurasian blue tit (https://en.wikipedia.org/wiki/Eurasian_blue_tit). They predicted the tarsus length as well as the back color of chicks. Half of the brood were put into another fosternest, while the other half stayed in the fosternest of their own dam. This allows us to separate genetic from environmental factors. Additionally, we have information about the hatchdate and sex of the chicks (the latter being known for 94% of the animals).
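The logic of the cross-fostering design can be illustrated with a small simulation (a hypothetical sketch with made-up variance components, not part of the original analysis): because siblings are split across nests, sibling pairs raised apart differ by the nest variance on top of the residual variance, which makes the environmental component estimable at all.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical variance components (illustrative values only).
sigma_dam, sigma_nest, sigma_e = 0.5, 0.3, 0.8
n_dams, brood = 400, 8

dam_effect = rng.normal(0, sigma_dam, n_dams)    # "genetic" contribution per dam
nest_effect = rng.normal(0, sigma_nest, n_dams)  # "environmental" contribution per nest

# Cross-fostering: half of each brood moves to a random foster nest.
dam_id = np.repeat(np.arange(n_dams), brood)
nest_id = dam_id.copy()
moved = rng.random(dam_id.size) < 0.5
nest_id[moved] = rng.integers(0, n_dams, moved.sum())

trait = dam_effect[dam_id] + nest_effect[nest_id] + rng.normal(0, sigma_e, dam_id.size)

# For full siblings, E[(y1 - y2)^2] is 2*sigma_e^2 if they share a nest and
# 2*sigma_e^2 + 2*sigma_nest^2 otherwise, so contrasting the two isolates
# the environmental variance -- impossible without cross-fostering.
same, diff = [], []
for d in range(n_dams):
    idx = np.flatnonzero(dam_id == d)
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            a, b = idx[i], idx[j]
            (same if nest_id[a] == nest_id[b] else diff).append((trait[a] - trait[b]) ** 2)

sigma_nest2_hat = (np.mean(diff) - np.mean(same)) / 2
print(sigma_nest2_hat)  # close to sigma_nest**2 = 0.09
```

The full model below estimates both variance components jointly instead of via such pairwise contrasts, but the identification argument is the same.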
df = brms.get_data("BTdata", package = "MCMCglmm")
df.head()
|   | tarsus | back | animal | dam | fosternest | hatchdate | sex |
|---|---|---|---|---|---|---|---|
| 1 | -1.892297 | 1.146421 | 207 | 56 | 74 | -0.687402 | 1 |
| 2 | 1.136110 | -0.759652 | 219 | 57 | 72 | -0.687402 | 2 |
| 3 | 0.984689 | 0.144937 | 395 | 61 | 16 | -0.427981 | 2 |
| 4 | 0.379008 | 0.255585 | 46 | 38 | 4 | -1.465664 | 2 |
| 5 | -0.075253 | -0.300699 | 38 | 43 | 12 | -1.465664 | 1 |
Basic Multivariate Models
We begin with a relatively simple multivariate normal model.
bform1 = bf("""
mvbind(tarsus, back) ~
sex +
hatchdate +
(1|p|fosternest) +
(1|q|dam)
""") + set_rescor(rescor=True)
fit1 = brms.brm(bform1, data = df, chains = 2, cores = 2, silent = 2, refresh = 0)
[brmspy] Fitting model with brms (backend: cmdstanr)...
As can be seen in the model code, we have used mvbind notation to tell brms that both tarsus and back are separate response variables. The term (1|p|fosternest) indicates a varying intercept over fosternest. The |p| in between indicates that all varying effects of fosternest should be modeled as correlated. This makes sense since we actually have two model parts, one for tarsus and one for back. The indicator p is arbitrary and can be replaced by any other symbol that comes to mind (for details about the multilevel syntax of brms, see help("brmsformula") and vignette("brms_multilevel")). Similarly, the term (1|q|dam) indicates correlated varying effects of the genetic mother of the chicks. Alternatively, we could have modeled the genetic similarities through pedigrees and corresponding relatedness matrices, but this is not the focus of this vignette (see vignette("brms_phylogenetics")). Before summarizing the model, we compute and store the LOO information criterion of each response variable, which we will use for model comparisons later on:
for var in fit1.idata.posterior_predictive.data_vars:
print(var)
print(az.loo(fit1.idata, var_name=var))
print("\n")
tarsus
/Users/sebastian/PycharmProjects/pybrms/.venv/lib/python3.12/site-packages/arviz/stats/stats.py:797: UserWarning: Estimated shape parameter of Pareto distribution is greater than 0.70 for one or more samples. You should consider using a more robust model, this is because importance sampling is less likely to work well if the marginal posterior and LOO posterior are very different. This is more likely to happen with a non-robust model and highly influential observations. warnings.warn(
Computed from 2000 posterior samples and 828 observations log-likelihood matrix.
Estimate SE
elpd_loo -1037.00 26.33
p_loo 100.21 -
There has been a warning during the calculation. Please check the results.
------
Pareto k diagnostic values:
Count Pct.
(-Inf, 0.70] (good) 826 99.8%
(0.70, 1] (bad) 2 0.2%
(1, Inf) (very bad) 0 0.0%
back
Computed from 2000 posterior samples and 828 observations log-likelihood matrix.
Estimate SE
elpd_loo -1128.30 19.51
p_loo 73.65 -
------
Pareto k diagnostic values:
Count Pct.
(-Inf, 0.70] (good) 828 100.0%
(0.70, 1] (bad) 0 0.0%
(1, Inf) (very bad) 0 0.0%
brms.summary(fit1)
Family: MV(gaussian, gaussian)
Links: mu = identity
mu = identity
Formula: tarsus ~ sex + hatchdate + (1 | p | fosternest) + (1 | q | dam)
back ~ sex + hatchdate + (1 | p | fosternest) + (1 | q | dam)
Data: structure(list(tarsus = c(-1.89229718155107, 1.136 (Number of observations: 828)
Draws: 2 chains, each with iter = 2000; warmup = 1000; thin = 1;
total post-warmup draws = 2000
Multilevel Hyperparameters:
~dam (Number of levels: 106)
Estimate Est.Error l-95% CI u-95% CI Rhat
sd(tarsus_Intercept) 0.47 0.05 0.37 0.57 1.00
sd(back_Intercept) 0.24 0.07 0.10 0.38 1.00
cor(tarsus_Intercept,back_Intercept) -0.55 0.22 -0.94 -0.09 1.00
Bulk_ESS Tail_ESS
sd(tarsus_Intercept) 1051 1241
sd(back_Intercept) 308 763
cor(tarsus_Intercept,back_Intercept) 561 620
~fosternest (Number of levels: 104)
Estimate Est.Error l-95% CI u-95% CI Rhat
sd(tarsus_Intercept) 0.31 0.06 0.20 0.42 1.00
sd(back_Intercept) 0.35 0.06 0.24 0.46 1.00
cor(tarsus_Intercept,back_Intercept) 0.67 0.20 0.22 0.97 1.00
Bulk_ESS Tail_ESS
sd(tarsus_Intercept) 684 1076
sd(back_Intercept) 439 912
cor(tarsus_Intercept,back_Intercept) 341 619
Regression Coefficients:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
tarsus_Intercept -0.80 0.10 -0.99 -0.60 1.00 2429 1724
back_Intercept -0.06 0.10 -0.26 0.14 1.00 2684 1376
tarsus_sex 0.49 0.05 0.39 0.59 1.00 4291 1308
tarsus_hatchdate -0.08 0.06 -0.19 0.04 1.00 1523 1408
back_sex 0.04 0.06 -0.08 0.15 1.00 3311 1215
back_hatchdate -0.09 0.05 -0.19 0.01 1.00 2280 1178
Further Distributional Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma_tarsus 0.79 0.02 0.75 0.84 1.00 2154 1318
sigma_back 0.90 0.02 0.86 0.95 1.00 2331 1227
Residual Correlations:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
rescor(tarsus,back) -0.06 0.04 -0.14 0.02 1.00 2762 1432
Draws were sampled using sample(hmc). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
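The group-level correlations in this output (e.g., cor(tarsus_Intercept,back_Intercept)) are exactly what the |p| and |q| terms buy us: per-group intercepts for the two responses are drawn from one joint distribution rather than two independent ones. A minimal numpy sketch of such correlated varying intercepts, with standard deviations and correlation loosely matching the ~fosternest block above (illustrative values, not model output):

```python
import numpy as np

rng = np.random.default_rng(0)

# sds ~0.31 / 0.35 and correlation ~0.67, roughly as in the fit above.
sd_t, sd_b, rho = 0.31, 0.35, 0.67
cov = np.array([
    [sd_t**2,           rho * sd_t * sd_b],
    [rho * sd_t * sd_b, sd_b**2],
])

# One pair of (tarsus, back) intercepts per fosternest, drawn jointly.
intercepts = rng.multivariate_normal([0.0, 0.0], cov, size=104)
print(np.corrcoef(intercepts.T)[0, 1])  # empirical correlation near rho
```

Omitting |p| would correspond to setting rho = 0, i.e., drawing the two intercept sets independently.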
The summary output of multivariate models closely resembles that of univariate models, except that the parameters now carry the corresponding response variable as a prefix. Across dams, tarsus length and back color appear to be negatively correlated, while across fosternests the opposite is true. This indicates differential effects of genetic and environmental factors on these two characteristics. Further, the small residual correlation rescor(tarsus,back) at the bottom of the output indicates that there is little unmodeled dependency between tarsus length and back color. Although not necessary at this point, we have already computed and stored the LOO information criterion of fit1, which we will use for model comparisons. Next, let's take a look at some posterior-predictive checks, which give us a first impression of the model fit.
az.plot_ppc(fit1.idata, var_names=['tarsus'])
az.plot_ppc(fit1.idata, var_names=["back"])
This looks pretty solid, but we notice a slight unmodeled left skewness in the distribution of tarsus, which we will come back to later on. Next, we want to investigate how much variation in the response variables can be explained by our model, using a Bayesian generalization of the R² coefficient.
brms.call("bayes_R2", fit1)
|   | Estimate | Est.Error | Q2.5 | Q97.5 |
|---|---|---|---|---|
| R2tarsus | 0.378385 | 0.025264 | 0.326799 | 0.425129 |
| R2back | 0.196298 | 0.026960 | 0.141394 | 0.250162 |
Clearly, there is much variation in both animal characteristics that we can not explain, but apparently we can explain more of the variation in tarsus length than in back color.
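Conceptually, bayes_R2 computes one R² value per posterior draw as the variance of the fitted values divided by that same variance plus the residual variance, then summarizes the resulting distribution. A self-contained sketch on synthetic data (not the blue tit fit, and with a fabricated "posterior" of coefficients):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic regression data.
n, n_draws = 200, 1000
x = rng.normal(size=n)
y = 1.0 + 0.8 * x + rng.normal(size=n)

# Pretend posterior draws of (intercept, slope) around the true values.
draws = rng.normal([1.0, 0.8], 0.05, size=(n_draws, 2))

yhat = draws[:, [0]] + draws[:, [1]] * x  # (n_draws, n) fitted values
resid = y - yhat

# One R^2 per draw: explained variance over explained + residual variance.
r2 = yhat.var(axis=1) / (yhat.var(axis=1) + resid.var(axis=1))
print(r2.mean(), r2.std())  # posterior mean and sd of R^2
```

The Estimate and Est.Error columns in the bayes_R2 output above are exactly such posterior summaries of the per-draw R² values.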
More Complex Multivariate Models
Now, suppose we only want to control for sex in tarsus but not in back, and vice versa for hatchdate. Not that this is particularly reasonable for the present example, but it allows us to illustrate how to specify different formulas for different response variables. Since we can no longer use mvbind syntax, we have to take a more verbose approach:
bf_tarsus = bf("tarsus ~ sex + (1|p|fosternest) + (1|q|dam)")
bf_back = bf("back ~ hatchdate + (1|p|fosternest) + (1|q|dam)")
fit2 = brms.brm(bf_tarsus + bf_back + set_rescor(True), data = df, chains = 2, cores = 2, silent = 2, refresh = 0)
[brmspy] Fitting model with brms (backend: cmdstanr)...
Note that we have literally added the two model parts via the + operator, which in this case is equivalent to writing mvbf(bf_tarsus, bf_back). See help("brmsformula") and help("mvbrmsformula") for more details about this syntax. Again, we compute the LOO criteria first and then summarize the model.
for var in fit2.idata.posterior_predictive.data_vars:
print(var)
print(az.loo(fit2.idata, var_name=var))
print("\n")
tarsus
/Users/sebastian/PycharmProjects/pybrms/.venv/lib/python3.12/site-packages/arviz/stats/stats.py:797: UserWarning: Estimated shape parameter of Pareto distribution is greater than 0.70 for one or more samples. You should consider using a more robust model, this is because importance sampling is less likely to work well if the marginal posterior and LOO posterior are very different. This is more likely to happen with a non-robust model and highly influential observations. warnings.warn(
Computed from 2000 posterior samples and 828 observations log-likelihood matrix.
Estimate SE
elpd_loo -1035.99 26.24
p_loo 99.67 -
There has been a warning during the calculation. Please check the results.
------
Pareto k diagnostic values:
Count Pct.
(-Inf, 0.70] (good) 827 99.9%
(0.70, 1] (bad) 1 0.1%
(1, Inf) (very bad) 0 0.0%
back
Computed from 2000 posterior samples and 828 observations log-likelihood matrix.
Estimate SE
elpd_loo -1126.33 19.48
p_loo 72.99 -
------
Pareto k diagnostic values:
Count Pct.
(-Inf, 0.70] (good) 828 100.0%
(0.70, 1] (bad) 0 0.0%
(1, Inf) (very bad) 0 0.0%
brms.summary(fit2)
Family: MV(gaussian, gaussian)
Links: mu = identity
mu = identity
Formula: tarsus ~ sex + (1 | p | fosternest) + (1 | q | dam)
back ~ hatchdate + (1 | p | fosternest) + (1 | q | dam)
Data: structure(list(tarsus = c(-1.89229718155107, 1.136 (Number of observations: 828)
Draws: 2 chains, each with iter = 2000; warmup = 1000; thin = 1;
total post-warmup draws = 2000
Multilevel Hyperparameters:
~dam (Number of levels: 106)
Estimate Est.Error l-95% CI u-95% CI Rhat
sd(tarsus_Intercept) 0.47 0.05 0.38 0.58 1.00
sd(back_Intercept) 0.25 0.08 0.09 0.39 1.01
cor(tarsus_Intercept,back_Intercept) -0.52 0.23 -0.93 -0.06 1.00
Bulk_ESS Tail_ESS
sd(tarsus_Intercept) 628 1394
sd(back_Intercept) 377 687
cor(tarsus_Intercept,back_Intercept) 482 683
~fosternest (Number of levels: 104)
Estimate Est.Error l-95% CI u-95% CI Rhat
sd(tarsus_Intercept) 0.31 0.05 0.21 0.42 1.00
sd(back_Intercept) 0.35 0.06 0.23 0.47 1.00
cor(tarsus_Intercept,back_Intercept) 0.65 0.20 0.21 0.96 1.00
Bulk_ESS Tail_ESS
sd(tarsus_Intercept) 753 1052
sd(back_Intercept) 500 667
cor(tarsus_Intercept,back_Intercept) 255 536
Regression Coefficients:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
tarsus_Intercept -0.80 0.10 -1.01 -0.60 1.00 1866 1572
back_Intercept -0.00 0.05 -0.11 0.10 1.00 1651 1612
tarsus_sex 0.49 0.05 0.39 0.58 1.00 3138 1403
back_hatchdate -0.08 0.05 -0.18 0.02 1.00 1963 1664
Further Distributional Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma_tarsus 0.79 0.02 0.75 0.84 1.00 2046 1610
sigma_back 0.90 0.03 0.85 0.95 1.00 1731 1203
Residual Correlations:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
rescor(tarsus,back) -0.06 0.04 -0.14 0.01 1.00 2399 1478
Draws were sampled using sample(hmc). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
Let's find out how the model fit changed after excluding certain effects from the initial model:
var = "back"
# Compare the two fits on the back response via PSIS-LOO.
cmp = az.compare({"m1": fit1.idata, "m2": fit2.idata}, ic="loo", var_name=var)
cmp
|   | rank | elpd_loo | p_loo | elpd_diff | weight | se | dse | warning | scale |
|---|---|---|---|---|---|---|---|---|---|
| m1 | 0 | -30.367550 | 60.744969 | 0.000000 | 1.0 | 10.249669 | 0.00000 | False | log |
| m2 | 1 | -1126.327534 | 72.989176 | 1095.959985 | 0.0 | 19.482218 | 22.70914 | False | log |
Judging by the LOO estimates for back computed above (elpd_loo of -1128.3 for fit1 vs. -1126.3 for fit2), there is no noteworthy difference in the model fit. Accordingly, we do not really need to model sex and hatchdate for both response variables, but there is also no harm in including them (so I would probably just include them).
To give you a glimpse of the capabilities of brms' multivariate syntax, we change our model in various directions at the same time. Remember the slight left skewness of tarsus, which we will now model by using the skew_normal family instead of the gaussian family. Since we no longer have a multivariate normal (or Student-t) model, estimating residual correlations is no longer possible. We make this explicit using the set_rescor function. Further, we investigate whether the relationship of back and hatchdate is really linear, as previously assumed, by fitting a non-linear spline of hatchdate. On top of that, we model separate residual variances of tarsus for male and female chicks.
from brmspy import skew_normal, gaussian
bf_tarsus = bf("tarsus ~ sex + (1|p|fosternest) + (1|q|dam)") + lf("sigma ~ 0 + sex") + skew_normal()
bf_back = bf("back ~ s(hatchdate) + (1|p|fosternest) + (1|q|dam)") + gaussian()
fit3 = brms.brm(
bf_tarsus + bf_back + set_rescor(False),
data = df, chains = 2, cores = 2,
control = {"adapt_delta": 0.95},
silent = 2, refresh = 0
)
[brmspy] Fitting model with brms (backend: cmdstanr)...
R callback write-console: Warning: 3 of 2000 (0.0%) transitions ended with a divergence. See https://mc-stan.org/misc/warnings for details.
Again, we summarize the model and look at some posterior-predictive checks.
for var in fit3.idata.posterior_predictive.data_vars:
print(var)
print(az.loo(fit3.idata, var_name=var))
print("\n")
tarsus
/Users/sebastian/PycharmProjects/pybrms/.venv/lib/python3.12/site-packages/arviz/stats/stats.py:797: UserWarning: Estimated shape parameter of Pareto distribution is greater than 0.70 for one or more samples. You should consider using a more robust model, this is because importance sampling is less likely to work well if the marginal posterior and LOO posterior are very different. This is more likely to happen with a non-robust model and highly influential observations. warnings.warn(
Computed from 2000 posterior samples and 828 observations log-likelihood matrix.
Estimate SE
elpd_loo -1049.76 25.55
p_loo 95.05 -
There has been a warning during the calculation. Please check the results.
------
Pareto k diagnostic values:
Count Pct.
(-Inf, 0.70] (good) 826 99.8%
(0.70, 1] (bad) 1 0.1%
(1, Inf) (very bad) 1 0.1%
back
Computed from 2000 posterior samples and 828 observations log-likelihood matrix.
Estimate SE
elpd_loo -1125.37 19.54
p_loo 70.02 -
------
Pareto k diagnostic values:
Count Pct.
(-Inf, 0.70] (good) 828 100.0%
(0.70, 1] (bad) 0 0.0%
(1, Inf) (very bad) 0 0.0%
brms.summary(fit3)
R callback write-console: Warning message:
There were 3 divergent transitions after warmup. Increasing adapt_delta above 0.95 may help. See http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
Family: MV(skew_normal, gaussian)
Links: mu = identity; sigma = log
mu = identity
Formula: tarsus ~ sex + (1 | p | fosternest) + (1 | q | dam)
sigma ~ 0 + sex
back ~ s(hatchdate) + (1 | p | fosternest) + (1 | q | dam)
Data: structure(list(tarsus = c(-1.89229718155107, 1.136 (Number of observations: 828)
Draws: 2 chains, each with iter = 2000; warmup = 1000; thin = 1;
total post-warmup draws = 2000
Smoothing Spline Hyperparameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
sds(back_shatchdate_1) 2.23 1.13 0.54 4.92 1.00 538
Tail_ESS
sds(back_shatchdate_1) 646
Multilevel Hyperparameters:
~dam (Number of levels: 106)
Estimate Est.Error l-95% CI u-95% CI Rhat
sd(tarsus_Intercept) 0.46 0.05 0.36 0.56 1.01
sd(back_Intercept) 0.23 0.07 0.09 0.36 1.01
cor(tarsus_Intercept,back_Intercept) -0.54 0.24 -0.96 -0.06 1.02
Bulk_ESS Tail_ESS
sd(tarsus_Intercept) 616 1097
sd(back_Intercept) 237 427
cor(tarsus_Intercept,back_Intercept) 317 454
~fosternest (Number of levels: 104)
Estimate Est.Error l-95% CI u-95% CI Rhat
sd(tarsus_Intercept) 0.30 0.06 0.19 0.42 1.01
sd(back_Intercept) 0.31 0.06 0.20 0.42 1.00
cor(tarsus_Intercept,back_Intercept) 0.62 0.24 0.11 0.98 1.01
Bulk_ESS Tail_ESS
sd(tarsus_Intercept) 394 521
sd(back_Intercept) 312 554
cor(tarsus_Intercept,back_Intercept) 146 255
Regression Coefficients:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
tarsus_Intercept -0.76 0.11 -0.98 -0.54 1.00 992 1493
back_Intercept 0.00 0.05 -0.10 0.10 1.00 873 1317
tarsus_sex 0.46 0.06 0.35 0.57 1.00 1540 1376
sigma_tarsus_sex -0.11 0.02 -0.14 -0.08 1.00 1291 1257
back_shatchdate_1 0.08 3.36 -5.75 7.36 1.00 663 1019
Further Distributional Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma_back 0.90 0.02 0.85 0.95 1.00 1302 1459
alpha_tarsus -0.91 0.64 -1.82 0.64 1.00 750 1247
Draws were sampled using sample(hmc). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
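Two parameters in this output deserve unpacking: alpha_tarsus controls the skewness of the skew-normal (negative values produce left-skewed residuals), and sigma_tarsus is modeled on the log scale, so coefficients on sigma translate to residual sds via exp(). A quick standalone check with scipy, not tied to the fitted model:

```python
import numpy as np
from scipy import stats

# Negative shape parameter -> left-skewed samples; positive -> right-skewed.
left = stats.skewnorm.rvs(a=-5, size=20_000, random_state=1)
right = stats.skewnorm.rvs(a=5, size=20_000, random_state=2)
print(stats.skew(left), stats.skew(right))  # negative vs. positive

# Because sigma uses a log link, a coefficient of -0.11 on sigma_tarsus
# (as in the output above) corresponds to a residual sd of exp(-0.11).
print(np.exp(-0.11))  # ~0.90
```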
We see that the (log) residual standard deviation of tarsus is somewhat larger for chicks whose sex could not be identified than for male or female chicks. Further, the negative alpha (skewness) parameter of tarsus confirms that the residuals are indeed slightly left-skewed. Lastly, running
result = brms.call("conditional_effects", fit3, "hatchdate", resp="back")
df_result = result['back.back_hatchdate']
df_plot = df_result.sort_values("hatchdate")
fig, ax = plt.subplots()
ax.plot(
df_plot["hatchdate"],
df_plot["estimate__"],
color="blue"
)
ax.fill_between(
df_plot["hatchdate"],
df_plot["lower__"],
df_plot["upper__"],
alpha=0.3,
color="blue"
)
ax.set_xlabel("hatchdate")
ax.set_ylabel("back")
ax.set_title("Conditional effect of hatchdate on back")
plt.show()
reveals a non-linear effect of hatchdate on the back color, which seems to change in waves over the course of the hatch dates.
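The s(hatchdate) term is a penalized regression spline estimated inside the Stan model; a rough standalone analogue of what such a smooth recovers is a classical smoothing spline (a scipy sketch on synthetic wavy data, not what brms does internally):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)

# Wavy signal plus noise; s is the target residual sum of squares,
# set here to roughly n * noise_sd**2 so the fit smooths at the noise level.
n, noise_sd = 200, 0.3
x = np.sort(rng.uniform(-2, 2, n))
y = np.sin(2 * x) + rng.normal(scale=noise_sd, size=n)

spline = UnivariateSpline(x, y, s=n * noise_sd**2)
rmse = np.sqrt(np.mean((spline(x) - np.sin(2 * x)) ** 2))
print(rmse)  # the smooth tracks the underlying wave closely
```

The key difference is that brms estimates the amount of smoothing (the sds(...) hyperparameter in the summary) from the data rather than fixing it in advance.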
There are many more modeling options for multivariate models, which are not discussed in this vignette. Examples include autocorrelation structures, Gaussian processes, or explicit non-linear predictors (e.g., see help("brmsformula") or vignette("brms_multilevel")). In fact, nearly all the flexibility of univariate models is retained in multivariate models.
References
Hadfield JD, Nutall A, Osorio D, Owens IPF (2007). Testing the phenotypic gambit: phenotypic, genetic and environmental correlations of colour. Journal of Evolutionary Biology, 20(2), 549-557.