This tutorial demonstrates the core functionality of the
lgpr package. We show how to visualize data, create
and fit an additive GP model, study the results, assess covariate
relevances, and visualize the inferred covariate effects.
require("lgpr")
# Loading required package: lgpr
# Attached lgpr 1.2.5, using rstan 2.32.7. Type ?lgpr to get started.
require("rstan")
# Loading required package: rstan
# Loading required package: StanHeaders
#
# rstan version 2.32.7 (Stan version 2.32.2)
# For execution on a local, multicore CPU with excess RAM we recommend calling
# options(mc.cores = parallel::detectCores()).
# To avoid recompilation of unchanged Stan programs, we recommend calling
# rstan_options(auto_write = TRUE)
# For within-chain threading using `reduce_sum()` or `map_rect()` Stan functions,
# change `threads_per_chain` option:
# rstan_options(threads_per_chain = 1)
require("ggplot2")
# Loading required package: ggplot2
In this tutorial we use the simulated dataset testdata_002, which is
included in the lgpr package. It consists of 12 individuals
with 8 time points each. The plot_data() function can be
used to visualize the data in many ways.
plot_data(testdata_002, facet_by = "id", color_by = "sex") + xlab("Age (months)")
# Warning: `aes_string()` was deprecated in ggplot2 3.0.0.
# ℹ Please use tidy evaluation idioms with `aes()`.
# ℹ See also `vignette("ggplot2-in-packages")` for more information.
# ℹ The deprecated feature was likely used in the lgpr package.
# Please report the issue at <https://github.com/jtimonen/lgpr/issues>.
# This warning is displayed once every 8 hours.
# Call `lifecycle::last_lifecycle_warnings()` to see where this warning was generated.
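For example, we could instead facet by sex and color by group (a small sketch reusing the same plot_data() arguments as above):
# Sketch: facet the same data by sex, color by group
plot_data(testdata_002, facet_by = "sex", color_by = "group") + xlab("Age (months)")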
Coloring according to the disease-related age
(diseaseAge) shows that individuals 01-06 are cases and
07-12 are controls. The observed disease onset is at around 60 months
for each case individual.
plot_data(testdata_002, facet_by = "id", color_by = "diseaseAge") + scale_color_gradient2() + xlab("Age (months)")
The vignette “Mathematical
description of lgpr models” describes the statistical models and how
they can be customized in lgpr. Here we focus on practical code
examples.
The class lgpmodel represents an additive Gaussian
process model. Such models can be created in lgpr using the
function create_model(), and they can be fit by calling
sample_model(). The easiest way, however, is probably to
use the lgp() function which wraps both
create_model() and sample_model() and has all
the same arguments. The complete information about these arguments is in
the documentation, but here we review the most common ones,
i.e. formula, data and prior.
In this section we focus on defining a model with
create_model(), and the same syntax applies to
lgp(). In R, you can get more help about these
(and any other) functions using ?, as in
?lgp.
The data argument to lgp() or
create_model() should be a data.frame where
continuous variables have a numeric type and categorical variables are
factors. Our test data is already in this format.
str(testdata_002)
# 'data.frame': 96 obs. of 6 variables:
# $ id : Factor w/ 12 levels "01","02","03",..: 1 1 1 1 1 1 1 1 2 2 ...
# $ age : num 12 24 36 48 60 72 84 96 12 24 ...
# $ diseaseAge: num -48 -36 -24 -12 0 12 24 36 -48 -36 ...
# $ sex : Factor w/ 2 levels "Female","Male": 2 2 2 2 2 2 2 2 1 1 ...
# $ y : num 651 534 491 435 269 ...
# $ group : Factor w/ 2 levels "Case","Control": 1 1 1 1 1 1 1 1 1 1 ...
If you don’t know how to create a data.frame for your
own data, consult for example ?data.frame or an introductory
R tutorial.
Functions in lgpr that can help in adding new variables to
a data frame are add_factor() and
add_dis_age().
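As a rough sketch of how these helpers might be used (the argument names here are assumptions; see ?add_factor and ?add_dis_age for the actual signatures):
# Hypothetical sketch: 'dat' has columns id and age, with individuals "01" and "02"
onset <- c("01" = 60, "02" = NaN) # NaN marks individuals without disease onset
dat <- add_dis_age(dat, t_init = onset) # adds a disease-related age column
group <- c("01" = "Case", "02" = "Control")
dat <- add_factor(dat, group) # adds a factor column named after the variable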
The formula argument to lgp() or
create_model() specifies the response variable, model
components and covariates.
Here we create a model with the continuous variable age
and categorical variables sex and subject id
as predictors for y.
model <- create_model(y ~ age + age | sex + age | id, testdata_002, verbose = FALSE)
print(model)
# An object of class lgpmodel. See ?lgpmodel for more info.
# Formula: y ~ gp(age) + gp(age) * zs(sex) + gp(age) * zs(id)
# Likelihood: gaussian
# Data: 96 observations, 6 variables
#
# Component type ker het ns vm unc cat cont
# 1 gp(age) 1 0 0 0 0 0 0 1
# 2 gp(age)*zs(sex) 2 0 0 0 0 0 1 1
# 3 gp(age)*zs(id) 2 0 0 0 0 0 2 1
#
# Variable #Missing
# 1 age 0
#
# Factor #Levels Values
# 1 sex 2 Female, Male
# 2 id 12 ...
#
# Parameter Bounds Prior
# 1 alpha[1] [0, Inf) alpha[1] ~ student-t(20)
# 2 alpha[2] [0, Inf) alpha[2] ~ student-t(20)
# 3 alpha[3] [0, Inf) alpha[3] ~ student-t(20)
# 4 ell[1] [0, Inf) ell[1] ~ log-normal(0,1)
# 5 ell[2] [0, Inf) ell[2] ~ log-normal(0,1)
# 6 ell[3] [0, Inf) ell[3] ~ log-normal(0,1)
# 7 sigma[1] [0, Inf) (sigma[1])^2 ~ inv-gamma(2,1)
#
# Created on Thu Oct 30 21:09:10 2025 with lgpr 1.2.5.
The formula y ~ age + age|sex + age|id has the same
format as for example in the lme4 package, which fits
linear mixed effects models. In lgpr, this formula creates a
model with three components:
1. age: a shared age effect
2. age|sex: a sex-specific deviation from the shared age effect
3. age|id: an individual-specific deviation from the shared age effect
If we wanted effects 2. and 3. to not depend on age, we could write
the formula as just y ~ age + sex + id.
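For example (a quick sketch using the same data):
# Sketch: age-independent sex and id effects
model_simple <- create_model(y ~ age + sex + id, testdata_002, verbose = FALSE)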
The above formula was actually automatically converted to the more
specific format
y ~ gp(age) + gp(age)*zs(sex) + gp(age)*zs(id). The
lgpr package has this advanced syntax, because the common
formula syntax of R is not expressive enough for all kinds of components
that our models can have. Formulae specified using the advanced syntax
consist of terms which can combine the following expressions through
addition and multiplication:
- gp(x) specifies a standard GP with an exponentiated quadratic kernel
- gp_ns(x) specifies a GP with a nonstationary kernel
- gp_vm(x) specifies a GP with a variance-masked kernel
- zs(z) specifies a GP with a zero-sum kernel
- categ(z) specifies a GP with a categorical kernel
- components containing unc(z) have uncertainty in their continuous covariate, so that each level of z introduces one uncertainty parameter
- components containing het(z) have heterogeneity in the effect of their continuous covariate, so that each level of z introduces one level-specific magnitude parameter
Above, x was used to denote any continuous and
z any categorical variable. Each term can contain at most
one continuous variable. Components which contain gp(x),
gp_ns(x), or gp_vm(x) and have
NaN or NA in the data of x are
automatically multiplied by a mask for the missing values.
Here we define the formula using the advanced syntax.
model <- create_model(
y ~ gp(age) + zs(id) * gp(age) + zs(sex) * gp(age) + gp_vm(diseaseAge) + zs(group),
testdata_002
)
# Warning in create_model.prior(prior, stan_input, verbose): Using a default
# prior for input warping steepness (wrp), in a model that involves a gp_ns() or
# gp_vm() expression. This is not recommended. See the 'Basic usage' tutorial at
# https://jtimonen.github.io/lgpr-usage/index.html.
We received a warning because we used gp_vm() without
specifying a prior. This is because the default prior only makes sense
if the covariate (diseaseAge) is expressed in months and
the measurement time interval is several years. We will get back to this
in Section 2.5.
print(model)
# An object of class lgpmodel. See ?lgpmodel for more info.
# Formula: y ~ gp(age) + zs(id) * gp(age) + zs(sex) * gp(age) + gp_vm(diseaseAge) + zs(group)
# Likelihood: gaussian
# Data: 96 observations, 6 variables
#
# Component type ker het ns vm unc cat cont
# 1 gp(age) 1 0 0 0 0 0 0 1
# 2 zs(id)*gp(age) 2 0 0 0 0 0 1 1
# 3 zs(sex)*gp(age) 2 0 0 0 0 0 2 1
# 4 gp_vm(diseaseAge) 1 0 0 1 1 0 0 2
# 5 zs(group) 0 0 0 0 0 0 3 0
#
# Variable #Missing
# 1 age 0
# 2 diseaseAge 48
#
# Factor #Levels Values
# 1 id 12 ...
# 2 sex 2 Female, Male
# 3 group 2 Case, Control
#
# Parameter Bounds Prior
# 1 alpha[1] [0, Inf) alpha[1] ~ student-t(20)
# 2 alpha[2] [0, Inf) alpha[2] ~ student-t(20)
# 3 alpha[3] [0, Inf) alpha[3] ~ student-t(20)
# 4 alpha[4] [0, Inf) alpha[4] ~ student-t(20)
# 5 alpha[5] [0, Inf) alpha[5] ~ student-t(20)
# 6 ell[1] [0, Inf) ell[1] ~ log-normal(0,1)
# 7 ell[2] [0, Inf) ell[2] ~ log-normal(0,1)
# 8 ell[3] [0, Inf) ell[3] ~ log-normal(0,1)
# 9 ell[4] [0, Inf) ell[4] ~ log-normal(0,1)
# 10 wrp[1] [0, Inf) wrp[1] ~ inv-gamma(14,5)
# 11 sigma[1] [0, Inf) (sigma[1])^2 ~ inv-gamma(2,1)
#
# Created on Thu Oct 30 21:09:10 2025 with lgpr 1.2.5.
As can be seen in the above output, the model has five magnitude
parameters (alpha), one for each component, and three
lengthscale parameters (ell), since the
zs(group) component does not have a lengthscale parameter.
The other parameters are the input warping steepness parameter
(wrp) for the nonstationary gp_vm(diseaseAge)
component, and the Gaussian noise magnitude parameter
sigma.
Note: the continuous covariate age and
the response variable y are normalized to have zero mean
and unit variance, and the inference of the lengthscale, magnitude, and
noise parameters is done on that scale.
The prior argument to lgp() or
create_model() specifies the prior distribution for kernel
hyperparameters and other model parameters. In the above model we did not
specify a prior, so default priors were used. Custom priors
can be given as a named list, like here:
prior <- list(
alpha = normal(mu = 0, sigma = 1), # gaussian for magnitudes <alpha>
ell = igam(shape = 5, scale = 5) # inverse gamma for lengthscales <ell>
)
model <- create_model(y ~ age + age | sex + age | id,
data = testdata_002,
prior = prior
)
param_summary(model)
# Parameter Bounds Prior
# 1 alpha[1] [0, Inf) alpha[1] ~ normal(0,1)
# 2 alpha[2] [0, Inf) alpha[2] ~ normal(0,1)
# 3 alpha[3] [0, Inf) alpha[3] ~ normal(0,1)
# 4 ell[1] [0, Inf) ell[1] ~ inv-gamma(5,5)
# 5 ell[2] [0, Inf) ell[2] ~ inv-gamma(5,5)
# 6 ell[3] [0, Inf) ell[3] ~ inv-gamma(5,5)
# 7 sigma[1] [0, Inf) (sigma[1])^2 ~ inv-gamma(2,1)
Specifying the prior argument as a named list gave the
same prior to all parameters matching that name. If we wanted separate
priors, for example for each lengthscale parameter, we could specify
ell itself as a list of length 3, as sketched below.
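A minimal sketch of this (the particular priors below are arbitrary, for illustration only):
# Sketch: a separate prior for each of the three lengthscale parameters
prior <- list(
  ell = list(
    igam(shape = 5, scale = 5), # ell[1]
    igam(shape = 3, scale = 3), # ell[2]
    log_normal(mu = 0, sigma = 1) # ell[3]
  )
)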
It is not recommended to use default priors blindly. Rather, priors
should be specified according to the knowledge about the problem at
hand, as in any Bayesian analysis. In lgpr this is
especially important when:
- sample_f = TRUE. In this case the response variable is not
normalized, so the scale on which the data varies must be taken into
account when defining priors of the signal magnitude parameters and
possible noise parameters (sigma, phi,
gamma). Also it should be checked that c_hat is
set in a sensible way.
- the model contains a gp_ns(x) or
gp_vm(x) expression in its formula. In this case the
corresponding covariate x is not normalized, and the prior
for the input warping steepness parameter wrp must be set
according to the expected width of the window in which the nonstationary
effect of x occurs. By default, the width of this window is
about 36, which has been set assuming that the unit of x is
months.
The disease-related age diseaseAge is not normalized,
and we need to take its range into account when setting the prior for
the wrp parameter. Here we define a log-normal prior, take
draws from this prior and visualize the input warping function using the
drawn steepness values.
num_draws <- 300
mu_wrp <- -0.7
sigma_wrp <- 0.3
r <- range(testdata_002$diseaseAge, na.rm = TRUE)
# Define prior for lgp()
my_prior <- list(wrp = log_normal(mu_wrp, sigma_wrp))
# Prior draws
wrp_draws <- stats::rlnorm(num_draws, mu_wrp, sigma_wrp)
# Plot corresponding input warping functions
# - this function is not exported from lgpr so we need to use triple colons :::
# - alpha here is line opacity, not a GP parameter
lgpr:::plot_inputwarp(wrp_draws, seq(r[1], r[2], by = 1), alpha = 0.1)
Higher wrp values mean more rapid changes in the
nonstationary component and lower values mean slower changes. We see
that our prior seems to allow changes in the input warping function only
in the interval \([-20, 20]\). This
means that all possible variation in the diseaseAge
component occurs at most \(\pm 20\)
months before/after the disease effect time/onset.
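We can also summarize the drawn steepness values numerically as a quick sanity check on the prior:
# Quantiles of the prior draws for the steepness parameter
quantile(wrp_draws, probs = c(0.05, 0.5, 0.95))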
A model created using create_model() could be fitted
using the sample_model() function. However, in this section
we show how to do both model creation and model fitting using the
lgp() function.
Here we define a model with five components and fit it. The
gp(age) component is a shared age effect,
zs(id)*gp(age) is an individual-specific deviation from it,
zs(sex)*gp(age) is a sex-specific deviation from it,
gp_vm(diseaseAge) is a non-stationary variance-masked
disease effect, and zs(group) is a static offset component
between the two groups (Case/Control).
fit <- lgp(y ~ gp(age) + zs(id) * gp(age) + zs(sex) * gp(age) + gp_vm(diseaseAge) + zs(group),
data = testdata_002,
prior = my_prior, # defined earlier
iter = 2000,
chains = 4,
refresh = 500
)
#
# SAMPLING FOR MODEL 'lgp' NOW (CHAIN 1).
# Chain 1:
# Chain 1: Gradient evaluation took 0.002474 seconds
# Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 24.74 seconds.
# Chain 1: Adjust your expectations accordingly!
# Chain 1:
# Chain 1:
# Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
# Chain 1: Iteration: 500 / 2000 [ 25%] (Warmup)
# Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
# Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
# Chain 1: Iteration: 1500 / 2000 [ 75%] (Sampling)
# Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
# Chain 1:
# Chain 1: Elapsed Time: 25.691 seconds (Warm-up)
# Chain 1: 17.488 seconds (Sampling)
# Chain 1: 43.179 seconds (Total)
# Chain 1:
#
# SAMPLING FOR MODEL 'lgp' NOW (CHAIN 2).
# Chain 2:
# Chain 2: Gradient evaluation took 0.001753 seconds
# Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 17.53 seconds.
# Chain 2: Adjust your expectations accordingly!
# Chain 2:
# Chain 2:
# Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup)
# Chain 2: Iteration: 500 / 2000 [ 25%] (Warmup)
# Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup)
# Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling)
# Chain 2: Iteration: 1500 / 2000 [ 75%] (Sampling)
# Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
# Chain 2:
# Chain 2: Elapsed Time: 28.258 seconds (Warm-up)
# Chain 2: 23.721 seconds (Sampling)
# Chain 2: 51.979 seconds (Total)
# Chain 2:
#
# SAMPLING FOR MODEL 'lgp' NOW (CHAIN 3).
# Chain 3:
# Chain 3: Gradient evaluation took 0.001737 seconds
# Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 17.37 seconds.
# Chain 3: Adjust your expectations accordingly!
# Chain 3:
# Chain 3:
# Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup)
# Chain 3: Iteration: 500 / 2000 [ 25%] (Warmup)
# Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup)
# Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling)
# Chain 3: Iteration: 1500 / 2000 [ 75%] (Sampling)
# Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
# Chain 3:
# Chain 3: Elapsed Time: 27.514 seconds (Warm-up)
# Chain 3: 20.053 seconds (Sampling)
# Chain 3: 47.567 seconds (Total)
# Chain 3:
#
# SAMPLING FOR MODEL 'lgp' NOW (CHAIN 4).
# Chain 4:
# Chain 4: Gradient evaluation took 0.001762 seconds
# Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 17.62 seconds.
# Chain 4: Adjust your expectations accordingly!
# Chain 4:
# Chain 4:
# Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup)
# Chain 4: Iteration: 500 / 2000 [ 25%] (Warmup)
# Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup)
# Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling)
# Chain 4: Iteration: 1500 / 2000 [ 75%] (Sampling)
# Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
# Chain 4:
# Chain 4: Elapsed Time: 30.74 seconds (Warm-up)
# Chain 4: 23.99 seconds (Sampling)
# Chain 4: 54.73 seconds (Total)
# Chain 4:
# Computing analytic function posteriors...
# | 10%| 20%| 30%| 40%| 50%| 60%| 70%| 80%| 90%| 100%|
# ======================================================================
# Done.
The lgp() function can be given any arguments that
should be passed to rstan::sampling().
We could use for example cores = 4 to parallelize the
chains or control = list(adapt_delta = 0.95) to set the
target acceptance rate.
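A sketch of such a call (not run here):
# Sketch (not run): parallel chains and a higher target acceptance rate
fit2 <- lgp(y ~ gp(age) + zs(sex) * gp(age),
  data = testdata_002,
  cores = 4,
  control = list(adapt_delta = 0.95)
)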
Printing the fit object gives information about the posterior distribution of the parameters, the MCMC settings, and convergence diagnostics.
print(fit)
# An object of class lgpfit. See ?lgpfit for more info.
# Inference for Stan model: lgp.
# 4 chains, each with iter=2000; warmup=1000; thin=1;
# post-warmup draws per chain=1000, total post-warmup draws=4000.
#
# mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
# alpha[1] 0.81 0.01 0.33 0.41 0.59 0.74 0.95 1.63 2661 1
# alpha[2] 0.10 0.00 0.08 0.00 0.04 0.08 0.14 0.29 1806 1
# alpha[3] 0.62 0.01 0.31 0.25 0.41 0.54 0.75 1.42 2527 1
# alpha[4] 0.99 0.01 0.47 0.36 0.66 0.90 1.21 2.23 3505 1
# alpha[5] 0.38 0.01 0.44 0.01 0.08 0.22 0.54 1.63 4123 1
# ell[1] 0.57 0.00 0.17 0.23 0.45 0.56 0.68 0.93 2577 1
# ell[2] 1.39 0.03 1.83 0.11 0.38 0.80 1.69 6.17 2882 1
# ell[3] 0.93 0.02 0.68 0.28 0.56 0.75 1.02 2.87 1734 1
# ell[4] 2.50 0.04 2.67 0.21 0.94 1.74 3.08 9.10 3544 1
# wrp[1] 0.52 0.00 0.16 0.28 0.41 0.49 0.61 0.92 2987 1
# sigma[1] 0.50 0.00 0.04 0.42 0.48 0.50 0.53 0.59 2265 1
#
# Samples were drawn using NUTS(diag_e) at Thu Oct 30 21:12:28 2025.
# For each parameter, n_eff is a crude measure of effective sample size,
# and Rhat is the potential scale reduction factor on split chains (at
# convergence, Rhat=1).
The distribution of the parameter draws can be visualized:
plot_draws(fit, type = "dens")
The slot fit@stan_fit is a stanfit object,
and can therefore be studied further using rstan tools, as is done here:
sf <- fit@stan_fit
sampler_params <- rstan::get_sampler_params(sf, inc_warmup = FALSE)
myfun <- function(x) mean(x[, "accept_stat__"])
mean_accept_stat_by_chain <- sapply(sampler_params, myfun)
print(mean_accept_stat_by_chain)
# [1] 0.8474148 0.9067873 0.8673308 0.9143441
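For example, rstan's built-in HMC diagnostic checks can be run on the same object:
# Run rstan's HMC diagnostics (divergences, tree depth, E-BFMI)
rstan::check_hmc_diagnostics(sf)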
Here we compute the average relevance of each component. We see that
id and group have a very low relevance.
Relevance <- relevances(fit, reduce = mean)
data.frame(Relevance)
# Relevance
# gp(age) 0.405688043
# zs(id)*gp(age) 0.002778684
# zs(sex)*gp(age) 0.194161308
# gp_vm(diseaseAge) 0.181415103
# zs(group) 0.001163845
# noise 0.214793016
Components can be selected using a threshold for the proportion of
total explained variance. We see that id and
group are not selected.
select(fit, threshold = 0.95)
# Component
# gp(age) TRUE
# zs(id)*gp(age) FALSE
# zs(sex)*gp(age) TRUE
# gp_vm(diseaseAge) TRUE
# zs(group) FALSE
# noise TRUE
In order to avoid having to set a fixed threshold for
selection, we can integrate over a threshold density on the [0,1]
interval, and obtain probabilistic selection. Here we define the density
to be stats::dbeta(x, 20, 2), which has most mass between
\(0.9\) and \(1.0\).
threshold_density <- function(x) {
stats::dbeta(x, 20, 2)
}
s <- select.integrate(fit, p = threshold_density)
# | 10%| 20%| 30%| 40%| 50%| 60%| 70%| 80%| 90%| 100%|
# ======================================================================
print(s$expected)
# Component
# gp(age) 1.0000000
# zs(id)*gp(age) 0.0000000
# zs(sex)*gp(age) 0.9992968
# gp_vm(diseaseAge) 0.9212750
# zs(group) 0.0000000
# noise 1.0000000
The above table shows, for each component, the expected value of the selection decision, i.e. the probability that the component is selected.
We visualize the predictive distribution of the model, using
posterior mean hyperparameters (reduce = mean). The
predictive mean (\(\pm 2 \times\)
standard deviation) is computed at 120 time points for each of the 12
individuals (1440 prediction points in total, created using
new_x()) and visualized against the data.
t <- seq(1, 120, by = 1)
x_pred <- new_x(testdata_002, t, x_ns = "diseaseAge")
p <- pred(fit, x_pred, reduce = mean) # use posterior mean kernel parameters
# Computing analytic function posteriors...
# Done.
plot_pred(fit, pred = p)
The posterior distribution of each component can also be visualized:
plot_components(fit,
pred = p,
color_by = c(NA, NA, "sex", "group", "group", "group"),
ylim = c(-3, 2),
ncol = 2
)
Note: the plot_pred() function scales
the predictions back to the original data scale, whereas the
plot_components() function plots the inferred functions on
the (normalized) inference scale.
Note: the plot_pred() and
plot_components() functions can also be used without the
x and pred arguments, in which case
computing out-of-sample predictions is not needed. The predictive and
component posteriors are then shown only at the data points.
plot_pred(fit)
# Missing 'pred' argument, computing or extracting it...
# Computing analytic function posteriors...
# Done.
plot_components(fit,
color_by = c(NA, NA, "sex", "group", "group", "group"),
ylim = c(-3, 2),
ncol = 2
)
# Missing 'pred' argument, computing or extracting it...
# Computing analytic function posteriors...
# Done.
Below is the original environment that was used to run this tutorial.
sessionInfo()
# R version 4.4.1 (2024-06-14)
# Platform: aarch64-apple-darwin20
# Running under: macOS Sonoma 14.6.1
#
# Matrix products: default
# BLAS: /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRblas.0.dylib
# LAPACK: /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRlapack.dylib; LAPACK version 3.12.0
#
# locale:
# [1] C/UTF-8/C/C/C/C
#
# time zone: Europe/Helsinki
# tzcode source: internal
#
# attached base packages:
# [1] stats graphics grDevices utils datasets methods base
#
# other attached packages:
# [1] ggplot2_4.0.0 rstan_2.32.7 StanHeaders_2.32.10
# [4] lgpr_1.2.5 rmarkdown_2.30
#
# loaded via a namespace (and not attached):
# [1] tensorA_0.36.2.1 sass_0.4.10 generics_0.1.4
# [4] stringi_1.8.7 digest_0.6.37 magrittr_2.0.4
# [7] evaluate_1.0.5 grid_4.4.1 RColorBrewer_1.1-3
# [10] fastmap_1.2.0 plyr_1.8.9 jsonlite_2.0.0
# [13] pkgbuild_1.4.8 backports_1.5.0 gridExtra_2.3
# [16] QuickJSR_1.8.1 scales_1.4.0 codetools_0.2-20
# [19] jquerylib_0.1.4 abind_1.4-8 cli_3.6.5
# [22] rlang_1.1.6 crayon_1.5.3 withr_3.0.2
# [25] cachem_1.1.0 yaml_2.3.10 tools_4.4.1
# [28] inline_0.3.21 parallel_4.4.1 reshape2_1.4.4
# [31] checkmate_2.3.3 rstantools_2.5.0 dplyr_1.1.4
# [34] colorspace_2.1-2 curl_6.2.3 posterior_1.6.1
# [37] vctrs_0.6.5 R6_2.6.1 matrixStats_1.5.0
# [40] stats4_4.4.1 lifecycle_1.0.4 stringr_1.5.2
# [43] V8_6.0.0 pkgconfig_2.0.3 RcppParallel_5.1.11-1
# [46] pillar_1.11.1 bslib_0.9.0 gtable_0.3.6
# [49] loo_2.8.0 glue_1.8.0 Rcpp_1.1.0
# [52] xfun_0.53 tibble_3.3.0 tidyselect_1.2.1
# [55] knitr_1.50 bayesplot_1.14.0 farver_2.1.2
# [58] htmltools_0.5.8.1 labeling_0.4.3 compiler_4.4.1
# [61] S7_0.2.0 distributional_0.5.0