require("lgpr")
# Loading required package: lgpr
# Attached lgpr 1.2.5, using rstan 2.32.7. Type ?lgpr to get started.
require("ggplot2")
# Loading required package: ggplot2
require("rstan")
# Loading required package: rstan
# Loading required package: StanHeaders
# 
# rstan version 2.32.7 (Stan version 2.32.2)
# For execution on a local, multicore CPU with excess RAM we recommend calling
# options(mc.cores = parallel::detectCores()).
# To avoid recompilation of unchanged Stan programs, we recommend calling
# rstan_options(auto_write = TRUE)
# For within-chain threading using `reduce_sum()` or `map_rect()` Stan functions,
# change `threads_per_chain` option:
# rstan_options(threads_per_chain = 1)

1 Introduction

In this tutorial we simulate and analyse a test data set containing 6 case and 6 control individuals, where the disease effect on the case individuals is modeled using the disease-related age (diseaseAge) as a covariate. The disease-related age is defined as age relative to the observed disease initiation. The true disease effect time for each case individual \(q=1, \ldots, 6\) is drawn from \(\mathcal{N}(36, 4^2)\), but the disease initiation is observed only after an additional delay \(t_q \sim \text{Exponential}(0.05)\).

set.seed(121)
relev <- c(0, 1, 1, 1, 0, 0)
effect_time_fun <- function() {
  rnorm(n = 1, mean = 36, sd = 4)
}
obs_fun <- function(t) {
  min(t + stats::rexp(n = 1, rate = 0.05), 96 - 1e-5)
}

simData <- simulate_data(
  N = 12,
  t_data = seq(12, 96, by = 12),
  covariates = c(0, 2, 2, 2),
  relevances = relev,
  lengthscales = c(18, 24, 1.1, 18, 18, 18),
  t_effect_range = effect_time_fun,
  t_observed = obs_fun,
  snr = 3
)

plot_sim(simData) + xlab("Age (months)")
# Warning: `aes_string()` was deprecated in ggplot2 3.0.0.
# ℹ Please use tidy evaluation idioms with `aes()`.
# ℹ See also `vignette("ggplot2-in-packages")` for more information.
# ℹ The deprecated feature was likely used in the lgpr package.
#   Please report the issue at <https://github.com/jtimonen/lgpr/issues>.
# This warning is displayed once every 8 hours.
# Call `lifecycle::last_lifecycle_warnings()` to see where this warning was generated.
# - Dots are noisy observations of the response var.
# - Line is the true signal mapped through inv. link fun.
# - Solid vert. line is the real effect time (used to generate signal) 
# - Dashed vert. line is the 'observed' disease initiation time

# plot_sim(simData, comp_idx = 3) # to visualize one generated component

Above, the blue line represents the data-generating signal and black dots are noisy observations of the response variable.

dat <- simData@data
str(dat)
# 'data.frame': 96 obs. of  7 variables:
#  $ id        : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 2 2 ...
#  $ age       : num  12 24 36 48 60 72 84 96 12 24 ...
#  $ diseaseAge: num  -36 -24 -12 0 12 24 36 48 -60 -48 ...
#  $ z1        : Factor w/ 2 levels "1","2": 2 2 2 2 2 2 2 2 1 1 ...
#  $ z2        : Factor w/ 2 levels "1","2": 2 2 2 2 2 2 2 2 2 2 ...
#  $ z3        : Factor w/ 2 levels "1","2": 2 2 2 2 2 2 2 2 2 2 ...
#  $ y         : num  -1.42 -2.632 -2.467 -0.275 0.857 ...
simData@effect_times
# $true
#        1        2        3        4        5        6        7        8 
# 34.97856 36.43350 36.51112 35.67571 33.14527 42.46133      NaN      NaN 
#        9       10       11       12 
#      NaN      NaN      NaN      NaN 
# 
# $observed
#   1   2   3   4   5   6   7   8   9  10  11  12 
#  48  72  96  36  48  60 NaN NaN NaN NaN NaN NaN

2 Declaring effect time uncertainty

We will define a formula where the term unc(id)*gp_vm(diseaseAge) declares that the effect time for the nonstationary gp_vm term is uncertain and that one uncertainty parameter is needed for each level of id.

formula <- y ~ zs(id) * gp(age) + gp(age) + unc(id) * gp_vm(diseaseAge) + zs(z1) * gp(age) + zs(z2) * gp(age) + zs(z3) * gp(age)

Because diseaseAge is NaN for the control individuals, lgpr automatically takes into account that a separate uncertainty parameter is needed only for each case individual.
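The detection logic can be illustrated with plain base R on a small toy data frame (this is a sketch, not lgpr's internal code; `toy`, `is_case`, and `case_ids` are hypothetical names):

```r
# Toy illustration: individuals whose diseaseAge is entirely NaN are
# controls; the rest are cases, and only cases need a teff parameter.
toy <- data.frame(
  id = factor(rep(1:4, each = 2)),
  diseaseAge = c(-12, 0, NaN, NaN, 24, 36, NaN, NaN)
)
is_case <- tapply(toy$diseaseAge, toy$id, function(x) !all(is.nan(x)))
case_ids <- names(is_case)[is_case]
case_ids
# [1] "1" "3"
```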

3 Defining the effect time prior

Declaring a temporally uncertain component adds a parameter vector teff to the model, with length equal to the number of case individuals. We must define a prior for each teff parameter. This means that the prior argument must be a list containing elements named effect_time and effect_time_info. The first is specified using any of the basic prior definition functions, such as uniform() or normal(). The second, effect_time_info, must be a named list containing the fields

  • zero - a vector with the same length as teff, which can be used to shift the center of the prior
  • backwards - a boolean value; the prior defined in effect_time is for
    • (teff - zero) if backwards = FALSE
    • - (teff - zero) if backwards = TRUE
  • lower - a vector with the same length as teff, defining the lower bound for each teff parameter
  • upper - a vector with the same length as teff, defining the upper bound for each teff parameter

You can also give zero, lower, and upper as a single number, in which case they are turned into vectors that repeat the same value. The prior defined in effect_time is truncated at the lower and upper bounds.
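The scalar-to-vector expansion described above can be sketched in base R as follows (`expand_bound` is a hypothetical helper for illustration, not an lgpr function):

```r
# Sketch of the scalar-to-vector expansion: a single number is repeated
# once per case individual, while a full-length vector passes through.
expand_bound <- function(x, n_cases) {
  if (length(x) == 1) rep(x, n_cases) else x
}
expand_bound(18, 6)                          # lower = 18 -> c(18, 18, 18, 18, 18, 18)
expand_bound(c(48, 72, 96, 36, 48, 60), 6)   # vectors are kept as-is
```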

3.1 Prior for the effect time directly

We observed the disease onset at times \(48, 72, 96, 36, 48, 60\) months for the case individuals, respectively. If we think that the true effect of the disease occurred for each individual at some time point before the detection of the disease, but not before the age of \(18\) months, we can set the prior as follows.

obs_onset <- c(48, 72, 96, 36, 48, 60)
lb <- 18
ub <- obs_onset
effect_time_info <- list(zero = 0, backwards = FALSE, lower = lb, upper = ub)
my_prior <- list(
  effect_time = uniform(), # between lb and ub
  effect_time_info = effect_time_info,
  wrp = igam(14, 5) # see how to set this in the 'Basic usage' tutorial
)
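With these settings, each teff[q] gets a uniform prior on [18, obs_onset[q]], which matches the per-individual bounds lgpr later reports for the teff parameters. A quick base-R sanity check of what this prior implies (illustrative only, not lgpr code):

```r
# Draw once from the implied prior: each teff[q] is uniform between the
# common lower bound 18 and that individual's observed onset time.
obs_onset <- c(48, 72, 96, 36, 48, 60)
set.seed(1)
teff_draw <- runif(6, min = 18, max = obs_onset)
all(teff_draw >= 18 & teff_draw <= obs_onset)
# [1] TRUE
```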

3.2 Prior relative to a known time point

It is possible that we want a prior where values closer to the observed onset are more likely than values closer to birth. This can be done by defining, for example, an exponentially decaying prior for - (teff - obs_onset), as is done here.

lb <- 18
ub <- obs_onset
effect_time_info <- list(zero = ub, backwards = TRUE, lower = lb, upper = ub)
my_prior <- list(
  wrp = igam(14, 5),
  effect_time_info = effect_time_info,
  effect_time = gam(shape = 1, inv_scale = 0.05) # = Exponential(rate=0.05)
)

Now our uncertainty priors are actually for the time differences relative to the observed disease initiation times, and the backwards = TRUE argument defines the direction so that the prior decays “backwards” in time. We used gam() because the Gamma distribution with shape = 1 and inv_scale = lambda is equal to the Exponential distribution with rate = lambda.
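This Gamma-Exponential equivalence is easy to verify numerically with the base R density functions (inv_scale in gam() corresponds to the rate parameter of dgamma):

```r
# Gamma(shape = 1, rate = lambda) has density lambda * exp(-lambda * x),
# which is exactly the Exponential(rate = lambda) density.
x <- seq(0, 80, by = 0.5)
max(abs(dgamma(x, shape = 1, rate = 0.05) - dexp(x, rate = 0.05)))
# effectively zero
```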

4 Fitting the model

fit <- lgp(
  formula = formula,
  data = dat,
  prior = my_prior,
  iter = 3000,
  chains = 4,
  cores = 4,
  verbose = TRUE
)
# Creating model... 
# Parsing formula...
# Formula interpreted as: y ~ zs(id) * gp(age) + gp(age) + unc(id) * gp_vm(diseaseAge) + zs(z1) * gp(age) + zs(z2) * gp(age) + zs(z3) * gp(age)
# Parsing covariates and components... 
# Parsing options... 
# Parsing response and likelihood... 
# Parsing prior...
# User-specified priors found for: {wrp, effect_time, effect_time_info}.
# If any of the following parameters are included in the model, default priors are used for them: {alpha, ell, sigma, phi, beta, gamma}.
# 
# Model created, printing it here. 
# An object of class lgpmodel. See ?lgpmodel for more info.
# Formula: y ~ zs(id) * gp(age) + gp(age) + unc(id) * gp_vm(diseaseAge) + zs(z1) * gp(age) + zs(z2) * gp(age) + zs(z3) * gp(age)
# Likelihood: gaussian
# Data: 96 observations, 7 variables
# 
#                   Component type ker het ns vm unc cat cont
# 1            zs(id)*gp(age)    2   0   0  0  0   0   1    1
# 2                   gp(age)    1   0   0  0  0   0   0    1
# 3 unc(id)*gp_vm(diseaseAge)    1   0   0  1  1   1   0    2
# 4            zs(z1)*gp(age)    2   0   0  0  0   0   2    1
# 5            zs(z2)*gp(age)    2   0   0  0  0   0   3    1
# 6            zs(z3)*gp(age)    2   0   0  0  0   0   4    1
# 
#     Variable #Missing
# 1        age        0
# 2 diseaseAge       48
# 
#   Factor #Levels Values
# 1     id      12    ...
# 2     z1       2   1, 2
# 3     z2       2   1, 2
# 4     z3       2   1, 2
# 
#    Parameter   Bounds                             Prior
# 1   alpha[1] [0, Inf)          alpha[1] ~ student-t(20)
# 2   alpha[2] [0, Inf)          alpha[2] ~ student-t(20)
# 3   alpha[3] [0, Inf)          alpha[3] ~ student-t(20)
# 4   alpha[4] [0, Inf)          alpha[4] ~ student-t(20)
# 5   alpha[5] [0, Inf)          alpha[5] ~ student-t(20)
# 6   alpha[6] [0, Inf)          alpha[6] ~ student-t(20)
# 7     ell[1] [0, Inf)          ell[1] ~ log-normal(0,1)
# 8     ell[2] [0, Inf)          ell[2] ~ log-normal(0,1)
# 9     ell[3] [0, Inf)          ell[3] ~ log-normal(0,1)
# 10    ell[4] [0, Inf)          ell[4] ~ log-normal(0,1)
# 11    ell[5] [0, Inf)          ell[5] ~ log-normal(0,1)
# 12    ell[6] [0, Inf)          ell[6] ~ log-normal(0,1)
# 13    wrp[1] [0, Inf)          wrp[1] ~ inv-gamma(14,5)
# 14  sigma[1] [0, Inf)     (sigma[1])^2 ~ inv-gamma(2,1)
# 15   teff[1] [18, 48]  - (teff[1] - 48) ~ gamma(1,0.05)
# 16   teff[2] [18, 72]  - (teff[2] - 72) ~ gamma(1,0.05)
# 17   teff[3] [18, 96]  - (teff[3] - 96) ~ gamma(1,0.05)
# 18   teff[4] [18, 36]  - (teff[4] - 36) ~ gamma(1,0.05)
# 19   teff[5] [18, 48]  - (teff[5] - 48) ~ gamma(1,0.05)
# 20   teff[6] [18, 60]  - (teff[6] - 60) ~ gamma(1,0.05)
# 
#                                   
# id                     1 2 3 4 5 6
# beta_or_teff_param_idx 1 2 3 4 5 6
# 
# Created on Thu Oct 30 21:17:11 2025 with lgpr 1.2.5. 
# 
# Sampling model... 
# 
# SAMPLING FOR MODEL 'lgp' NOW (CHAIN 1).
# 
# SAMPLING FOR MODEL 'lgp' NOW (CHAIN 2).
# Chain 1: 
# Chain 1: Gradient evaluation took 0.005374 seconds
# Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 53.74 seconds.
# Chain 1: Adjust your expectations accordingly!
# Chain 1: 
# Chain 1: 
# Chain 2: 
# Chain 2: Gradient evaluation took 0.005849 seconds
# Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 58.49 seconds.
# Chain 2: Adjust your expectations accordingly!
# Chain 2: 
# Chain 2: 
# 
# SAMPLING FOR MODEL 'lgp' NOW (CHAIN 3).
# 
# SAMPLING FOR MODEL 'lgp' NOW (CHAIN 4).
# Chain 3: 
# Chain 3: Gradient evaluation took 0.006909 seconds
# Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 69.09 seconds.
# Chain 3: Adjust your expectations accordingly!
# Chain 3: 
# Chain 3: 
# Chain 4: 
# Chain 4: Gradient evaluation took 0.006835 seconds
# Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 68.35 seconds.
# Chain 4: Adjust your expectations accordingly!
# Chain 4: 
# Chain 4: 
# Chain 1: Iteration:    1 / 3000 [  0%]  (Warmup)
# Chain 2: Iteration:    1 / 3000 [  0%]  (Warmup)
# Chain 3: Iteration:    1 / 3000 [  0%]  (Warmup)
# Chain 4: Iteration:    1 / 3000 [  0%]  (Warmup)
# Chain 4: Iteration:  300 / 3000 [ 10%]  (Warmup)
# Chain 1: Iteration:  300 / 3000 [ 10%]  (Warmup)
# Chain 3: Iteration:  300 / 3000 [ 10%]  (Warmup)
# Chain 2: Iteration:  300 / 3000 [ 10%]  (Warmup)
# Chain 4: Iteration:  600 / 3000 [ 20%]  (Warmup)
# Chain 1: Iteration:  600 / 3000 [ 20%]  (Warmup)
# Chain 3: Iteration:  600 / 3000 [ 20%]  (Warmup)
# Chain 2: Iteration:  600 / 3000 [ 20%]  (Warmup)
# Chain 1: Iteration:  900 / 3000 [ 30%]  (Warmup)
# Chain 4: Iteration:  900 / 3000 [ 30%]  (Warmup)
# Chain 3: Iteration:  900 / 3000 [ 30%]  (Warmup)
# Chain 2: Iteration:  900 / 3000 [ 30%]  (Warmup)
# Chain 1: Iteration: 1200 / 3000 [ 40%]  (Warmup)
# Chain 4: Iteration: 1200 / 3000 [ 40%]  (Warmup)
# Chain 3: Iteration: 1200 / 3000 [ 40%]  (Warmup)
# Chain 2: Iteration: 1200 / 3000 [ 40%]  (Warmup)
# Chain 1: Iteration: 1500 / 3000 [ 50%]  (Warmup)
# Chain 1: Iteration: 1501 / 3000 [ 50%]  (Sampling)
# Chain 4: Iteration: 1500 / 3000 [ 50%]  (Warmup)
# Chain 4: Iteration: 1501 / 3000 [ 50%]  (Sampling)
# Chain 3: Iteration: 1500 / 3000 [ 50%]  (Warmup)
# Chain 3: Iteration: 1501 / 3000 [ 50%]  (Sampling)
# Chain 2: Iteration: 1500 / 3000 [ 50%]  (Warmup)
# Chain 2: Iteration: 1501 / 3000 [ 50%]  (Sampling)
# Chain 1: Iteration: 1800 / 3000 [ 60%]  (Sampling)
# Chain 3: Iteration: 1800 / 3000 [ 60%]  (Sampling)
# Chain 4: Iteration: 1800 / 3000 [ 60%]  (Sampling)
# Chain 2: Iteration: 1800 / 3000 [ 60%]  (Sampling)
# Chain 1: Iteration: 2100 / 3000 [ 70%]  (Sampling)
# Chain 3: Iteration: 2100 / 3000 [ 70%]  (Sampling)
# Chain 2: Iteration: 2100 / 3000 [ 70%]  (Sampling)
# Chain 4: Iteration: 2100 / 3000 [ 70%]  (Sampling)
# Chain 1: Iteration: 2400 / 3000 [ 80%]  (Sampling)
# Chain 3: Iteration: 2400 / 3000 [ 80%]  (Sampling)
# Chain 2: Iteration: 2400 / 3000 [ 80%]  (Sampling)
# Chain 1: Iteration: 2700 / 3000 [ 90%]  (Sampling)
# Chain 3: Iteration: 2700 / 3000 [ 90%]  (Sampling)
# Chain 4: Iteration: 2400 / 3000 [ 80%]  (Sampling)
# Chain 2: Iteration: 2700 / 3000 [ 90%]  (Sampling)
# Chain 3: Iteration: 3000 / 3000 [100%]  (Sampling)
# Chain 3: 
# Chain 3:  Elapsed Time: 298.937 seconds (Warm-up)
# Chain 3:                3129.47 seconds (Sampling)
# Chain 3:                3428.41 seconds (Total)
# Chain 3: 
# Chain 1: Iteration: 3000 / 3000 [100%]  (Sampling)
# Chain 1: 
# Chain 1:  Elapsed Time: 275.025 seconds (Warm-up)
# Chain 1:                3153.81 seconds (Sampling)
# Chain 1:                3428.84 seconds (Total)
# Chain 1: 
# Chain 2: Iteration: 3000 / 3000 [100%]  (Sampling)
# Chain 2: 
# Chain 2:  Elapsed Time: 323.166 seconds (Warm-up)
# Chain 2:                3126.38 seconds (Sampling)
# Chain 2:                3449.54 seconds (Total)
# Chain 2: 
# Chain 4: Iteration: 2700 / 3000 [ 90%]  (Sampling)
# Chain 4: Iteration: 3000 / 3000 [100%]  (Sampling)
# Chain 4: 
# Chain 4:  Elapsed Time: 279.176 seconds (Warm-up)
# Chain 4:                4270.68 seconds (Sampling)
# Chain 4:                4549.86 seconds (Total)
# Chain 4: 
# Sampling done. 
# 
# Postprocessing... 
# Computing analytic function posteriors... 
# |   10%|   20%|   30%|   40%|   50%|   60%|   70%|   80%|   90%|  100%| 
# ======================================================================  
# Done. 
# Postprocessing done.

Printing the fit object summarizes the posterior.

print(fit)
# An object of class lgpfit. See ?lgpfit for more info.
# Inference for Stan model: lgp.
# 4 chains, each with iter=3000; warmup=1500; thin=1; 
# post-warmup draws per chain=1500, total post-warmup draws=6000.
# 
#            mean se_mean    sd  2.5%   25%   50%   75% 97.5% n_eff Rhat
# alpha[1]   0.12    0.00  0.09  0.01  0.05  0.10  0.17  0.33  2717 1.00
# alpha[2]   0.84    0.01  0.31  0.43  0.62  0.77  0.98  1.63  3101 1.00
# alpha[3]   1.07    0.01  0.51  0.28  0.72  1.00  1.36  2.28  2279 1.00
# alpha[4]   0.76    0.01  0.36  0.33  0.51  0.67  0.92  1.71  2779 1.00
# alpha[5]   0.16    0.00  0.19  0.00  0.05  0.11  0.21  0.70  4624 1.00
# alpha[6]   0.16    0.00  0.19  0.00  0.05  0.11  0.20  0.67  5059 1.00
# ell[1]     1.64    0.04  2.29  0.12  0.46  0.99  2.01  6.85  4114 1.00
# ell[2]     0.46    0.00  0.16  0.12  0.35  0.46  0.56  0.77  1788 1.00
# ell[3]     1.40    0.03  1.47  0.19  0.59  1.01  1.69  5.17  3057 1.00
# ell[4]     1.05    0.01  0.65  0.24  0.63  0.92  1.29  2.70  3290 1.00
# ell[5]     1.85    0.04  2.56  0.12  0.46  1.09  2.27  7.83  3744 1.00
# ell[6]     1.72    0.04  2.36  0.14  0.48  0.93  2.03  8.23  3580 1.00
# wrp[1]     0.37    0.00  0.10  0.22  0.30  0.36  0.43  0.61  3053 1.00
# sigma[1]   0.53    0.00  0.05  0.43  0.49  0.53  0.57  0.64  1708 1.00
# teff[1,1] 36.28    0.12  7.21 20.49 31.49 36.84 42.23 47.29  3379 1.00
# teff[1,2] 39.17    0.34  9.40 24.08 34.41 37.67 41.19 69.63   770 1.00
# teff[1,3] 56.95    0.72 17.43 31.46 41.14 57.59 68.42 93.53   584 1.01
# teff[1,4] 30.87    0.09  4.28 20.25 28.48 32.07 34.25 35.84  2261 1.00
# teff[1,5] 37.67    0.21  6.55 20.27 34.79 38.96 42.18 47.08   976 1.00
# teff[1,6] 48.33    0.36 10.35 22.40 41.32 50.90 57.50 59.77   813 1.00
# 
# Samples were drawn using NUTS(diag_e) at Thu Oct 30 22:33:01 2025.
# For each parameter, n_eff is a crude measure of effective sample size,
# and Rhat is the potential scale reduction factor on split chains (at 
# convergence, Rhat=1).

Printing the model information clarifies the model structure and priors.

model_summary(fit)
# Formula: y ~ zs(id) * gp(age) + gp(age) + unc(id) * gp_vm(diseaseAge) + zs(z1) * gp(age) + zs(z2) * gp(age) + zs(z3) * gp(age)
# Likelihood: gaussian
# Data: 96 observations, 7 variables
# 
#                   Component type ker het ns vm unc cat cont
# 1            zs(id)*gp(age)    2   0   0  0  0   0   1    1
# 2                   gp(age)    1   0   0  0  0   0   0    1
# 3 unc(id)*gp_vm(diseaseAge)    1   0   0  1  1   1   0    2
# 4            zs(z1)*gp(age)    2   0   0  0  0   0   2    1
# 5            zs(z2)*gp(age)    2   0   0  0  0   0   3    1
# 6            zs(z3)*gp(age)    2   0   0  0  0   0   4    1
# 
#     Variable #Missing
# 1        age        0
# 2 diseaseAge       48
# 
#   Factor #Levels Values
# 1     id      12    ...
# 2     z1       2   1, 2
# 3     z2       2   1, 2
# 4     z3       2   1, 2
# 
#    Parameter   Bounds                             Prior
# 1   alpha[1] [0, Inf)          alpha[1] ~ student-t(20)
# 2   alpha[2] [0, Inf)          alpha[2] ~ student-t(20)
# 3   alpha[3] [0, Inf)          alpha[3] ~ student-t(20)
# 4   alpha[4] [0, Inf)          alpha[4] ~ student-t(20)
# 5   alpha[5] [0, Inf)          alpha[5] ~ student-t(20)
# 6   alpha[6] [0, Inf)          alpha[6] ~ student-t(20)
# 7     ell[1] [0, Inf)          ell[1] ~ log-normal(0,1)
# 8     ell[2] [0, Inf)          ell[2] ~ log-normal(0,1)
# 9     ell[3] [0, Inf)          ell[3] ~ log-normal(0,1)
# 10    ell[4] [0, Inf)          ell[4] ~ log-normal(0,1)
# 11    ell[5] [0, Inf)          ell[5] ~ log-normal(0,1)
# 12    ell[6] [0, Inf)          ell[6] ~ log-normal(0,1)
# 13    wrp[1] [0, Inf)          wrp[1] ~ inv-gamma(14,5)
# 14  sigma[1] [0, Inf)     (sigma[1])^2 ~ inv-gamma(2,1)
# 15   teff[1] [18, 48]  - (teff[1] - 48) ~ gamma(1,0.05)
# 16   teff[2] [18, 72]  - (teff[2] - 72) ~ gamma(1,0.05)
# 17   teff[3] [18, 96]  - (teff[3] - 96) ~ gamma(1,0.05)
# 18   teff[4] [18, 36]  - (teff[4] - 36) ~ gamma(1,0.05)
# 19   teff[5] [18, 48]  - (teff[5] - 48) ~ gamma(1,0.05)
# 20   teff[6] [18, 60]  - (teff[6] - 60) ~ gamma(1,0.05)
# 
#                                   
# id                     1 2 3 4 5 6
# beta_or_teff_param_idx 1 2 3 4 5 6
# 
# Created on Thu Oct 30 21:17:11 2025 with lgpr 1.2.5.
rstan::get_elapsed_time(fit@stan_fit)
#          warmup  sample
# chain:1 275.025 3153.81
# chain:2 323.166 3126.38
# chain:3 298.937 3129.47
# chain:4 279.176 4270.68

5 Visualizing the inferred effect times

We can visualize the inferred effect times for each case individual. We see that for individuals 2 and 3 the inferred effect time is much earlier than the observed one.

plot_effect_times(fit) + xlab("Age (months)")
#                                   
# id                     1 2 3 4 5 6
# beta_or_teff_param_idx 1 2 3 4 5 6

Finally, we plot the inferred disease component.

t <- seq(0, 100, by = 1)
x_pred <- new_x(dat, t, x_ns = "diseaseAge")
p <- pred(fit, x_pred, verbose = FALSE)
plot_f(fit, pred = p, comp_idx = 3, color_by = "diseaseAge") + xlab("Age (months)")

6 Computing environment

sessionInfo()
# R version 4.4.1 (2024-06-14)
# Platform: aarch64-apple-darwin20
# Running under: macOS Sonoma 14.6.1
# 
# Matrix products: default
# BLAS:   /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRblas.0.dylib 
# LAPACK: /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRlapack.dylib;  LAPACK version 3.12.0
# 
# locale:
# [1] C/UTF-8/C/C/C/C
# 
# time zone: Europe/Helsinki
# tzcode source: internal
# 
# attached base packages:
# [1] stats     graphics  grDevices utils     datasets  methods   base     
# 
# other attached packages:
# [1] rstan_2.32.7        StanHeaders_2.32.10 ggplot2_4.0.0      
# [4] lgpr_1.2.5          rmarkdown_2.30     
# 
# loaded via a namespace (and not attached):
#  [1] tensorA_0.36.2.1      sass_0.4.10           generics_0.1.4       
#  [4] stringi_1.8.7         digest_0.6.37         magrittr_2.0.4       
#  [7] evaluate_1.0.5        grid_4.4.1            RColorBrewer_1.1-3   
# [10] fastmap_1.2.0         plyr_1.8.9            jsonlite_2.0.0       
# [13] pkgbuild_1.4.8        backports_1.5.0       gridExtra_2.3        
# [16] QuickJSR_1.8.1        scales_1.4.0          codetools_0.2-20     
# [19] jquerylib_0.1.4       abind_1.4-8           cli_3.6.5            
# [22] crayon_1.5.3          rlang_1.1.6           withr_3.0.2          
# [25] cachem_1.1.0          yaml_2.3.10           tools_4.4.1          
# [28] inline_0.3.21         parallel_4.4.1        reshape2_1.4.4       
# [31] checkmate_2.3.3       rstantools_2.5.0      dplyr_1.1.4          
# [34] colorspace_2.1-2      curl_6.2.3            posterior_1.6.1      
# [37] vctrs_0.6.5           R6_2.6.1              ggridges_0.5.7       
# [40] matrixStats_1.5.0     stats4_4.4.1          lifecycle_1.0.4      
# [43] stringr_1.5.2         V8_6.0.0              MASS_7.3-60.2        
# [46] pkgconfig_2.0.3       RcppParallel_5.1.11-1 pillar_1.11.1        
# [49] bslib_0.9.0           gtable_0.3.6          loo_2.8.0            
# [52] glue_1.8.0            Rcpp_1.1.0            xfun_0.53            
# [55] tibble_3.3.0          tidyselect_1.2.1      knitr_1.50           
# [58] farver_2.1.2          bayesplot_1.14.0      htmltools_0.5.8.1    
# [61] labeling_0.4.3        compiler_4.4.1        S7_0.2.0             
# [64] distributional_0.5.0