Last updated on 2025-02-16 09:51:05 CET.
Package | ERROR | NOTE | OK
---|---|---|---
easystats | | | 15
esc | | | 15
ggeffects | | | 15
insight | 2 | | 13
parameters | 2 | | 13
performance | 1 | | 14
sjlabelled | | | 15
sjmisc | | 3 | 12
sjPlot | | | 15
sjstats | | | 15
easystats: Current CRAN status: OK: 15
esc: Current CRAN status: OK: 15
ggeffects: Current CRAN status: OK: 15
insight: Current CRAN status: ERROR: 2, OK: 13
Version: 1.0.2
Check: tests
Result: ERROR
Running ‘testthat.R’ [10m/13m]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(insight)
> test_check("insight")
Starting 2 test processes
[ FAIL 27 | WARN 507 | SKIP 72 | PASS 3414 ]
══ Skipped tests (72) ══════════════════════════════════════════════════════════
• On CRAN (64): 'test-GLMMadaptive.R:1:1', 'test-averaging.R:3:1',
'test-brms.R:1:1', 'test-brms_aterms.R:1:1',
'test-brms_gr_random_effects.R:1:1', 'test-brms_missing.R:1:1',
'test-brms_von_mises.R:1:1', 'test-blmer.R:249:3',
'test-clean_names.R:103:3', 'test-clean_parameters.R:2:3',
'test-clean_parameters.R:35:3', 'test-clmm.R:165:3', 'test-cpglmm.R:145:3',
'test-export_table.R:4:3', 'test-export_table.R:8:3',
'test-export_table.R:106:3', 'test-export_table.R:133:3',
'test-export_table.R:164:3', 'test-export_table.R:193:3',
'test-export_table.R:205:3', 'test-export_table.R:233:3',
'test-find_smooth.R:31:3', 'test-format_table.R:1:1',
'test-format_table_ci.R:71:3', 'test-find_random.R:27:3', 'test-gam.R:1:1',
'test-get_data.R:385:1', 'test-get_loglikelihood.R:93:3',
'test-get_loglikelihood.R:158:3', 'test-get_predicted.R:2:1',
'test-get_priors.R:3:3', 'test-get_varcov.R:40:3',
'test-is_converged.R:28:1', 'test-lme.R:34:3', 'test-lme.R:210:3',
'test-glmmTMB.R:71:3', 'test-glmmTMB.R:755:3', 'test-glmmTMB.R:787:3',
'test-marginaleffects.R:1:1', 'test-mgcv.R:1:1', 'test-mipo.R:1:1',
'test-mvrstanarm.R:1:1', 'test-panelr-asym.R:142:3', 'test-panelr.R:272:3',
'test-phylolm.R:5:1', 'test-r2_nakagawa_bernoulli.R:1:1',
'test-r2_nakagawa_beta.R:1:1', 'test-r2_nakagawa_binomial.R:1:1',
'test-r2_nakagawa_gamma.R:1:1', 'test-r2_nakagawa_linear.R:1:1',
'test-r2_nakagawa_negbin.R:1:1', 'test-r2_nakagawa_negbin_zi.R:1:1',
'test-r2_nakagawa_ordered_beta.R:1:1', 'test-r2_nakagawa_poisson.R:1:1',
'test-r2_nakagawa_poisson_zi.R:1:1',
'test-r2_nakagawa_truncated_poisson.R:1:1', 'test-r2_nakagawa_tweedie.R:1:1',
'test-rlmer.R:259:3', 'test-rqss.R:1:1', 'test-rstanarm.R:1:1',
'test-spatial.R:1:1', 'test-svylme.R:1:1', 'test-vgam.R:1:1',
'test-weightit.R:1:1'
• On Linux (3): 'test-BayesFactorBF.R:1:1', 'test-MCMCglmm.R:1:1',
'test-get_data.R:150:3'
• Package `logistf` is loaded and breaks `mmrm::mmrm()` (1): 'test-mmrm.R:4:1'
• TRUE is TRUE (1): 'test-fixest.R:2:1'
• works interactively (2): 'test-coxph.R:38:3', 'test-coxph-panel.R:34:3'
• {bigglm} is not installed (1): 'test-model_info.R:24:3'
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-get_loglikelihood.R:216:3'): get_loglikelihood - gamm4 ───────
as.numeric(ll) (`actual`) not equal to -101.1107 (`expected`).
`actual`: -99.1
`expected`: -101.1
── Failure ('test-get_variance.R:49:3'): get_variance-1 ────────────────────────
v1$var.intercept (`actual`) not equal to c(Subject = 612.10016) (`expected`).
`actual`: 593.3
`expected`: 612.1
── Failure ('test-get_variance.R:53:3'): get_variance-1 ────────────────────────
v1$var.slope (`actual`) not equal to c(Subject.Days = 35.07171) (`expected`).
`actual`: 593.3
`expected`: 35.1
── Failure ('test-get_variance.R:60:3'): get_variance-2 ────────────────────────
v2$var.intercept (`actual`) not equal to c(Subject = 627.56905) (`expected`).
`actual`: 593.3
`expected`: 627.6
── Failure ('test-get_variance.R:64:3'): get_variance-2 ────────────────────────
v2$var.slope (`actual`) not equal to c(Subject.Days = 35.85838) (`expected`).
`actual`: 593.3
`expected`: 35.9
── Failure ('test-get_variance.R:71:3'): get_variance-3 ────────────────────────
v3$var.intercept (`actual`) not equal to c(subgrp.grp.1 = 0, Subject = 662.52047, grp.1 = 0) (`expected`).
`actual` is NULL
`expected` is a double vector (0, 662.52047, 0)
── Failure ('test-get_variance.R:79:3'): get_variance-3 ────────────────────────
v3$var.slope (`actual`) not equal to c(Subject.Days = 34.25771, subgrp.grp.Days = 7.88485, grp.Days = 0) (`expected`).
`actual` is NULL
`expected` is a double vector (34.25771, 7.88485, 0)
── Failure ('test-get_variance.R:91:3'): get_variance-4 ────────────────────────
v4$var.intercept (`actual`) not equal to c(Subject = 1378.17851) (`expected`).
`actual`: 811.1
`expected`: 1378.2
── Failure ('test-get_variance.R:99:3'): get_variance-5 ────────────────────────
v5$var.intercept (`actual`) not equal to c(`subgrp:grp` = 38.76069, Subject = 1377.50569, grp = 3.32031) (`expected`).
`actual`: 654.2 654.2 654.2
`expected`: 38.8 1377.5 3.3
── Failure ('test-get_variance.R:112:3'): get_variance-6 ───────────────────────
v6$var.intercept (`actual`) not equal to c(plate = 0.71691) (`expected`).
`actual`: 0.13
`expected`: 0.72
── Failure ('test-get_variance.R:113:3'): get_variance-6 ───────────────────────
v6$var.random (`actual`) not equal to 0.71691 (`expected`).
`actual`: 0.13
`expected`: 0.72
── Failure ('test-get_variance.R:124:3'): get_variance-7 ───────────────────────
`vmodel` (`actual`) not equal to list(...) (`expected`).
`actual$var.random`: 593.3
`expected$var.random`: 627.6
`actual$var.residual`: 593.3
`expected$var.residual`: 653.6
`actual$var.distribution`: 593.3
`expected$var.distribution`: 653.6
`actual$var.intercept`: 593.3
`expected$var.intercept`: 627.6
`actual$var.slope`: 593.3
`expected$var.slope`: 35.9
── Failure ('test-get_variance.R:144:3'): get_variance-8 ───────────────────────
`vmodel` (`actual`) not equal to list(...) (`expected`).
`actual$var.random`: 21856.9
`expected$var.random`: 1502.2
`actual$var.residual`: 766.9
`expected$var.residual`: 842.0
`actual$var.distribution`: 766.9
`expected$var.distribution`: 842.0
`actual$var.slope`: 766.9
`expected$var.slope`: 52.7
── Failure ('test-get_variance.R:169:3'): get_variance-9 ───────────────────────
`vmodel` (`actual`) not equal to list(...) (`expected`).
`actual$var.random`: 1264.6
`expected$var.random`: 1711.4
`actual$var.residual`: 790.4
`expected$var.residual`: 748.8
`actual$var.distribution`: 790.4
`expected$var.distribution`: 748.8
`actual$var.intercept`: 790.4
`expected$var.intercept`: 663.3
`actual$var.slope`: 790.4 790.4
`expected$var.slope`: 882.4 1415.7
actual$cor.slope_intercept vs expected$cor.slope_intercept
[,1]
- actual$cor.slope_intercept[1, ] 0.0000000
+ expected$cor.slope_intercept[1, ] 0.3611731
- actual$cor.slope_intercept[2, ] 0.0000000
+ expected$cor.slope_intercept[2, ] 0.3318785
`actual$cor.slopes`: 0.00
`expected$cor.slopes`: 0.85
── Failure ('test-get_variance.R:198:3'): get_variance-10 ──────────────────────
`vmodel` (`actual`) not equal to list(...) (`expected`).
`actual` is length 8
`expected` is length 7
names(actual) | names(expected)
[1] "var.fixed" | "var.fixed" [1]
[2] "var.random" -
[3] "var.residual" | "var.residual" [2]
[4] "var.distribution" | "var.distribution" [3]
[5] "var.dispersion" | "var.dispersion" [4]
`actual$var.random` is a double vector (739.145250002286)
`expected$var.random` is absent
names(actual$var.slope) | names(expected$var.slope)
[1] "Subject.1.Days2(3,6]" -
[2] "Subject.1.Days2(6,10]" -
[3] "Subject.Days2(-1,3]" | "Subject.Days2(-1,3]" [1]
[4] "Subject.Days2(3,6]" | "Subject.Days2(3,6]" [2]
[5] "Subject.Days2(6,10]" | "Subject.Days2(6,10]" [3]
actual$var.slope | expected$var.slope
[1] 739.145250002286 - 0 [1]
[2] 739.145250002286 - 994.015865559888 [2]
[3] 739.145250002286 - 1545.72576115283 [3]
[4] 739.145250002286 -
[5] 739.145250002286 -
names(actual$cor.slopes) | names(expected$cor.slopes)
[1] "Subject.1.Days2(-1,3]-Days2(3,6]" -
[2] "Subject.1.Days2(-1,3]-Days2(6,10]" -
[3] "Subject.1.Days2(3,6]-Days2(6,10]" | "Subject.1.Days2(3,6]-Days2(6,10]" [1]
`actual$cor.slopes`: 0 0 0
`expected$cor.slopes`: 0.859480774219098
── Failure ('test-get_variance.R:222:3'): get_variance-11 ──────────────────────
`vmodel` (`actual`) not equal to list(...) (`expected`).
`actual$var.random`: 589.0
`expected$var.random`: 1446.1
`actual$var.residual`: 981.6
`expected$var.residual`: 748.8
`actual$var.distribution`: 981.6
`expected$var.distribution`: 748.8
`actual$var.slope`: 981.6 981.6 981.6
`expected$var.slope`: 663.3 2098.2 2722.2
`actual$cor.slopes`: 0.00 0.00 0.00
`expected$cor.slopes`: 0.80 0.73 0.92
── Failure ('test-get_variance.R:250:3'): get_variance-12 ──────────────────────
`vmodel` (`actual`) not equal to list(...) (`expected`).
`actual$var.random`: 589.0
`expected$var.random`: 1446.1
`actual$var.residual`: 981.6
`expected$var.residual`: 748.8
`actual$var.distribution`: 981.6
`expected$var.distribution`: 748.8
`actual$var.slope`: 981.6 981.6 981.6
`expected$var.slope`: 663.3 2098.2 2722.2
`actual$cor.slopes`: 0.00 0.00 0.00
`expected$cor.slopes`: 0.80 0.73 0.92
── Failure ('test-get_variance.R:283:3'): get_variance-cat_random_slope ────────
vc$cor.slopes (`actual`) not equal to c(...) (`expected`).
`actual`: 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
`expected`: 1.0 1.0 1.0 1.0 1.0 1.0 -1.0 -1.0 -1.0 -1.0
── Failure ('test-get_variance.R:326:3'): random effects CIs, simple slope ─────
`vc` (`actual`) not equal to list(...) (`expected`).
`actual[[1]]`: 935.7
`expected[[1]]`: 921.9
`actual[[2]]`: 17808.7
`expected[[2]]`: 1068.0
`actual[[3]]`: 624.9
`expected[[3]]`: 764.5
`actual[[4]]`: 624.9
`expected[[4]]`: 764.5
`actual[[6]]`: 624.9 624.9
`expected[[6]]`: 37.5 27.6
`actual[[7]]`: 0.00
`expected[[7]]`: 0.46
── Failure ('test-get_variance.R:353:3'): random effects CIs, poly slope ───────
vc$cor.slopes (`actual`) not equal to c(`replicate.poly(temp, 2)1-poly(temp, 2)2` = 0.940016422944175) (`expected`).
`actual`: 0.00
`expected`: 0.94
── Failure ('test-is_converged.R:23:3'): is_converged ──────────────────────────
is_converged(model) is not TRUE
`actual`: FALSE
`expected`: TRUE
── Failure ('test-lmer.R:60:3'): get_df ────────────────────────────────────────
get_df(m1, type = "satterthwaite") (`actual`) not equal to c(`(Intercept)` = 16.99973, Days = 16.99998) (`expected`).
`actual`: 18.1 115.7
`expected`: 17.0 17.0
── Failure ('test-lmer.R:334:3'): get_variance ─────────────────────────────────
get_variance(m1) (`actual`) not equal to list(...) (`expected`).
`actual$var.random`: 17503.7
`expected$var.random`: 1698.1
`actual$var.slope`: 593.3
`expected$var.slope`: 35.1
── Failure ('test-lmer.R:353:3'): get_variance ─────────────────────────────────
get_variance_random(m1) (`actual`) not equal to c(var.random = 1698.084) (`expected`).
`actual`: 17503.7
`expected`: 1698.1
── Failure ('test-lmer.R:377:3'): get_variance ─────────────────────────────────
get_variance_slope(m1) (`actual`) not equal to c(var.slope.Subject.Days = 35.07171) (`expected`).
`actual`: 593.3
`expected`: 35.1
── Failure ('test-lmer.R:388:3'): get_variance ─────────────────────────────────
suppressWarnings(get_variance(m2)) (`actual`) not equal to list(...) (`expected`).
`actual` is length 6
`expected` is length 5
names(actual) | names(expected)
[1] "var.fixed" | "var.fixed" [1]
[2] "var.random" -
[3] "var.residual" | "var.residual" [2]
[4] "var.distribution" | "var.distribution" [3]
[5] "var.dispersion" | "var.dispersion" [4]
`actual$var.random` is a double vector (2058.53584626217)
`expected$var.random` is absent
`actual$var.residual`: 686.2
`expected$var.residual`: 941.8
`actual$var.distribution`: 686.2
`expected$var.distribution`: 941.8
`actual$var.intercept`: 686.2 686.2 686.2
`expected$var.intercept`: 0.0 1357.4 24.4
── Failure ('test-lmer.R:558:3'): get_predicted_ci: warning when model matrix and varcovmat do not match ──
head(data.frame(p)$Predicted) (`actual`) not equal to known$Predicted (`expected`).
`actual`: 37.586 47.556 56.800 65.318 73.111 80.177
`expected`: 37.534 47.957 58.789 70.029 81.677 93.735
[ FAIL 27 | WARN 507 | SKIP 72 | PASS 3414 ]
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang
Version: 1.0.2
Check: tests
Result: ERROR
Running ‘testthat.R’ [11m/30m]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(insight)
> test_check("insight")
Starting 2 test processes
[ FAIL 27 | WARN 507 | SKIP 72 | PASS 3414 ]
══ Skipped tests (72) ══════════════════════════════════════════════════════════
• On CRAN (64): 'test-GLMMadaptive.R:1:1', 'test-averaging.R:3:1',
'test-brms.R:1:1', 'test-brms_aterms.R:1:1',
'test-brms_gr_random_effects.R:1:1', 'test-brms_missing.R:1:1',
'test-brms_von_mises.R:1:1', 'test-blmer.R:249:3',
'test-clean_names.R:103:3', 'test-clean_parameters.R:2:3',
'test-clean_parameters.R:35:3', 'test-clmm.R:165:3', 'test-cpglmm.R:145:3',
'test-export_table.R:4:3', 'test-export_table.R:8:3',
'test-export_table.R:106:3', 'test-export_table.R:133:3',
'test-export_table.R:164:3', 'test-export_table.R:193:3',
'test-export_table.R:205:3', 'test-export_table.R:233:3',
'test-find_random.R:27:3', 'test-format_table.R:1:1',
'test-format_table_ci.R:71:3', 'test-gam.R:1:1', 'test-find_smooth.R:31:3',
'test-get_data.R:385:1', 'test-get_loglikelihood.R:93:3',
'test-get_loglikelihood.R:158:3', 'test-get_predicted.R:2:1',
'test-get_priors.R:3:3', 'test-get_varcov.R:40:3',
'test-is_converged.R:28:1', 'test-lme.R:34:3', 'test-lme.R:210:3',
'test-glmmTMB.R:71:3', 'test-glmmTMB.R:755:3', 'test-glmmTMB.R:787:3',
'test-marginaleffects.R:1:1', 'test-mgcv.R:1:1', 'test-mipo.R:1:1',
'test-mvrstanarm.R:1:1', 'test-panelr-asym.R:142:3', 'test-panelr.R:272:3',
'test-phylolm.R:5:1', 'test-r2_nakagawa_bernoulli.R:1:1',
'test-r2_nakagawa_beta.R:1:1', 'test-r2_nakagawa_binomial.R:1:1',
'test-r2_nakagawa_gamma.R:1:1', 'test-r2_nakagawa_linear.R:1:1',
'test-r2_nakagawa_negbin.R:1:1', 'test-r2_nakagawa_negbin_zi.R:1:1',
'test-r2_nakagawa_ordered_beta.R:1:1', 'test-r2_nakagawa_poisson.R:1:1',
'test-r2_nakagawa_poisson_zi.R:1:1',
'test-r2_nakagawa_truncated_poisson.R:1:1', 'test-r2_nakagawa_tweedie.R:1:1',
'test-rlmer.R:259:3', 'test-rqss.R:1:1', 'test-rstanarm.R:1:1',
'test-spatial.R:1:1', 'test-svylme.R:1:1', 'test-vgam.R:1:1',
'test-weightit.R:1:1'
• On Linux (3): 'test-BayesFactorBF.R:1:1', 'test-MCMCglmm.R:1:1',
'test-get_data.R:150:3'
• Package `logistf` is loaded and breaks `mmrm::mmrm()` (1): 'test-mmrm.R:4:1'
• TRUE is TRUE (1): 'test-fixest.R:2:1'
• works interactively (2): 'test-coxph-panel.R:34:3', 'test-coxph.R:38:3'
• {bigglm} is not installed (1): 'test-model_info.R:24:3'
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-get_loglikelihood.R:216:3'): get_loglikelihood - gamm4 ───────
as.numeric(ll) (`actual`) not equal to -101.1107 (`expected`).
`actual`: -99.1
`expected`: -101.1
── Failure ('test-get_variance.R:49:3'): get_variance-1 ────────────────────────
v1$var.intercept (`actual`) not equal to c(Subject = 612.10016) (`expected`).
`actual`: 593.3
`expected`: 612.1
── Failure ('test-get_variance.R:53:3'): get_variance-1 ────────────────────────
v1$var.slope (`actual`) not equal to c(Subject.Days = 35.07171) (`expected`).
`actual`: 593.3
`expected`: 35.1
── Failure ('test-get_variance.R:60:3'): get_variance-2 ────────────────────────
v2$var.intercept (`actual`) not equal to c(Subject = 627.56905) (`expected`).
`actual`: 593.3
`expected`: 627.6
── Failure ('test-get_variance.R:64:3'): get_variance-2 ────────────────────────
v2$var.slope (`actual`) not equal to c(Subject.Days = 35.85838) (`expected`).
`actual`: 593.3
`expected`: 35.9
── Failure ('test-get_variance.R:71:3'): get_variance-3 ────────────────────────
v3$var.intercept (`actual`) not equal to c(subgrp.grp.1 = 0, Subject = 662.52047, grp.1 = 0) (`expected`).
`actual` is NULL
`expected` is a double vector (0, 662.52047, 0)
── Failure ('test-get_variance.R:79:3'): get_variance-3 ────────────────────────
v3$var.slope (`actual`) not equal to c(Subject.Days = 34.25771, subgrp.grp.Days = 7.88485, grp.Days = 0) (`expected`).
`actual` is NULL
`expected` is a double vector (34.25771, 7.88485, 0)
── Failure ('test-get_variance.R:91:3'): get_variance-4 ────────────────────────
v4$var.intercept (`actual`) not equal to c(Subject = 1378.17851) (`expected`).
`actual`: 811.1
`expected`: 1378.2
── Failure ('test-get_variance.R:99:3'): get_variance-5 ────────────────────────
v5$var.intercept (`actual`) not equal to c(`subgrp:grp` = 38.76069, Subject = 1377.50569, grp = 3.32031) (`expected`).
`actual`: 654.2 654.2 654.2
`expected`: 38.8 1377.5 3.3
── Failure ('test-get_variance.R:112:3'): get_variance-6 ───────────────────────
v6$var.intercept (`actual`) not equal to c(plate = 0.71691) (`expected`).
`actual`: 0.13
`expected`: 0.72
── Failure ('test-get_variance.R:113:3'): get_variance-6 ───────────────────────
v6$var.random (`actual`) not equal to 0.71691 (`expected`).
`actual`: 0.13
`expected`: 0.72
── Failure ('test-get_variance.R:124:3'): get_variance-7 ───────────────────────
`vmodel` (`actual`) not equal to list(...) (`expected`).
`actual$var.random`: 593.3
`expected$var.random`: 627.6
`actual$var.residual`: 593.3
`expected$var.residual`: 653.6
`actual$var.distribution`: 593.3
`expected$var.distribution`: 653.6
`actual$var.intercept`: 593.3
`expected$var.intercept`: 627.6
`actual$var.slope`: 593.3
`expected$var.slope`: 35.9
── Failure ('test-get_variance.R:144:3'): get_variance-8 ───────────────────────
`vmodel` (`actual`) not equal to list(...) (`expected`).
`actual$var.random`: 21856.9
`expected$var.random`: 1502.2
`actual$var.residual`: 766.9
`expected$var.residual`: 842.0
`actual$var.distribution`: 766.9
`expected$var.distribution`: 842.0
`actual$var.slope`: 766.9
`expected$var.slope`: 52.7
── Failure ('test-get_variance.R:169:3'): get_variance-9 ───────────────────────
`vmodel` (`actual`) not equal to list(...) (`expected`).
`actual$var.random`: 1264.6
`expected$var.random`: 1711.4
`actual$var.residual`: 790.4
`expected$var.residual`: 748.8
`actual$var.distribution`: 790.4
`expected$var.distribution`: 748.8
`actual$var.intercept`: 790.4
`expected$var.intercept`: 663.3
`actual$var.slope`: 790.4 790.4
`expected$var.slope`: 882.4 1415.7
actual$cor.slope_intercept vs expected$cor.slope_intercept
[,1]
- actual$cor.slope_intercept[1, ] 0.0000000
+ expected$cor.slope_intercept[1, ] 0.3611731
- actual$cor.slope_intercept[2, ] 0.0000000
+ expected$cor.slope_intercept[2, ] 0.3318785
`actual$cor.slopes`: 0.00
`expected$cor.slopes`: 0.85
── Failure ('test-get_variance.R:198:3'): get_variance-10 ──────────────────────
`vmodel` (`actual`) not equal to list(...) (`expected`).
`actual` is length 8
`expected` is length 7
names(actual) | names(expected)
[1] "var.fixed" | "var.fixed" [1]
[2] "var.random" -
[3] "var.residual" | "var.residual" [2]
[4] "var.distribution" | "var.distribution" [3]
[5] "var.dispersion" | "var.dispersion" [4]
`actual$var.random` is a double vector (739.145250002286)
`expected$var.random` is absent
names(actual$var.slope) | names(expected$var.slope)
[1] "Subject.1.Days2(3,6]" -
[2] "Subject.1.Days2(6,10]" -
[3] "Subject.Days2(-1,3]" | "Subject.Days2(-1,3]" [1]
[4] "Subject.Days2(3,6]" | "Subject.Days2(3,6]" [2]
[5] "Subject.Days2(6,10]" | "Subject.Days2(6,10]" [3]
actual$var.slope | expected$var.slope
[1] 739.145250002286 - 0 [1]
[2] 739.145250002286 - 994.015865559888 [2]
[3] 739.145250002286 - 1545.72576115283 [3]
[4] 739.145250002286 -
[5] 739.145250002286 -
names(actual$cor.slopes) | names(expected$cor.slopes)
[1] "Subject.1.Days2(-1,3]-Days2(3,6]" -
[2] "Subject.1.Days2(-1,3]-Days2(6,10]" -
[3] "Subject.1.Days2(3,6]-Days2(6,10]" | "Subject.1.Days2(3,6]-Days2(6,10]" [1]
`actual$cor.slopes`: 0 0 0
`expected$cor.slopes`: 0.859480774219098
── Failure ('test-get_variance.R:222:3'): get_variance-11 ──────────────────────
`vmodel` (`actual`) not equal to list(...) (`expected`).
`actual$var.random`: 589.0
`expected$var.random`: 1446.1
`actual$var.residual`: 981.6
`expected$var.residual`: 748.8
`actual$var.distribution`: 981.6
`expected$var.distribution`: 748.8
`actual$var.slope`: 981.6 981.6 981.6
`expected$var.slope`: 663.3 2098.2 2722.2
`actual$cor.slopes`: 0.00 0.00 0.00
`expected$cor.slopes`: 0.80 0.73 0.92
── Failure ('test-get_variance.R:250:3'): get_variance-12 ──────────────────────
`vmodel` (`actual`) not equal to list(...) (`expected`).
`actual$var.random`: 589.0
`expected$var.random`: 1446.1
`actual$var.residual`: 981.6
`expected$var.residual`: 748.8
`actual$var.distribution`: 981.6
`expected$var.distribution`: 748.8
`actual$var.slope`: 981.6 981.6 981.6
`expected$var.slope`: 663.3 2098.2 2722.2
`actual$cor.slopes`: 0.00 0.00 0.00
`expected$cor.slopes`: 0.80 0.73 0.92
── Failure ('test-get_variance.R:283:3'): get_variance-cat_random_slope ────────
vc$cor.slopes (`actual`) not equal to c(...) (`expected`).
`actual`: 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
`expected`: 1.0 1.0 1.0 1.0 1.0 1.0 -1.0 -1.0 -1.0 -1.0
── Failure ('test-get_variance.R:326:3'): random effects CIs, simple slope ─────
`vc` (`actual`) not equal to list(...) (`expected`).
`actual[[1]]`: 935.7
`expected[[1]]`: 921.9
`actual[[2]]`: 17808.7
`expected[[2]]`: 1068.0
`actual[[3]]`: 624.9
`expected[[3]]`: 764.5
`actual[[4]]`: 624.9
`expected[[4]]`: 764.5
`actual[[6]]`: 624.9 624.9
`expected[[6]]`: 37.5 27.6
`actual[[7]]`: 0.00
`expected[[7]]`: 0.46
── Failure ('test-get_variance.R:353:3'): random effects CIs, poly slope ───────
vc$cor.slopes (`actual`) not equal to c(`replicate.poly(temp, 2)1-poly(temp, 2)2` = 0.940016422944175) (`expected`).
`actual`: 0.00
`expected`: 0.94
── Failure ('test-is_converged.R:23:3'): is_converged ──────────────────────────
is_converged(model) is not TRUE
`actual`: FALSE
`expected`: TRUE
── Failure ('test-lmer.R:60:3'): get_df ────────────────────────────────────────
get_df(m1, type = "satterthwaite") (`actual`) not equal to c(`(Intercept)` = 16.99973, Days = 16.99998) (`expected`).
`actual`: 18.1 115.7
`expected`: 17.0 17.0
── Failure ('test-lmer.R:334:3'): get_variance ─────────────────────────────────
get_variance(m1) (`actual`) not equal to list(...) (`expected`).
`actual$var.random`: 17503.7
`expected$var.random`: 1698.1
`actual$var.slope`: 593.3
`expected$var.slope`: 35.1
── Failure ('test-lmer.R:353:3'): get_variance ─────────────────────────────────
get_variance_random(m1) (`actual`) not equal to c(var.random = 1698.084) (`expected`).
`actual`: 17503.7
`expected`: 1698.1
── Failure ('test-lmer.R:377:3'): get_variance ─────────────────────────────────
get_variance_slope(m1) (`actual`) not equal to c(var.slope.Subject.Days = 35.07171) (`expected`).
`actual`: 593.3
`expected`: 35.1
── Failure ('test-lmer.R:388:3'): get_variance ─────────────────────────────────
suppressWarnings(get_variance(m2)) (`actual`) not equal to list(...) (`expected`).
`actual` is length 6
`expected` is length 5
names(actual) | names(expected)
[1] "var.fixed" | "var.fixed" [1]
[2] "var.random" -
[3] "var.residual" | "var.residual" [2]
[4] "var.distribution" | "var.distribution" [3]
[5] "var.dispersion" | "var.dispersion" [4]
`actual$var.random` is a double vector (2058.53584626217)
`expected$var.random` is absent
`actual$var.residual`: 686.2
`expected$var.residual`: 941.8
`actual$var.distribution`: 686.2
`expected$var.distribution`: 941.8
`actual$var.intercept`: 686.2 686.2 686.2
`expected$var.intercept`: 0.0 1357.4 24.4
── Failure ('test-lmer.R:558:3'): get_predicted_ci: warning when model matrix and varcovmat do not match ──
head(data.frame(p)$Predicted) (`actual`) not equal to known$Predicted (`expected`).
`actual`: 37.586 47.556 56.800 65.318 73.111 80.177
`expected`: 37.534 47.957 58.789 70.029 81.677 93.735
[ FAIL 27 | WARN 507 | SKIP 72 | PASS 3414 ]
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc
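Most of the insight failures above are numeric mismatches in `get_variance()` and related helpers on the two Fedora r-devel flavors. Below is a minimal sketch for reproducing them locally; it assumes the insight source tree is the working directory and that the suggested modelling packages (e.g. lme4) are installed.

```r
# Sketch: re-run only the test file that accounts for most failures above.
# Assumes the insight package source is checked out in the working directory.
library(testthat)
library(insight)

test_file("tests/testthat/test-get_variance.R")

# Alternatively, with devtools, filter tests by file name:
# devtools::test(filter = "get_variance")
```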
parameters: Current CRAN status: ERROR: 2, OK: 13
Version: 0.24.1
Check: tests
Result: ERROR
Running ‘testthat.R’ [309s/207s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(parameters)
> library(testthat)
>
> test_check("parameters")
Starting 2 test processes
[ FAIL 22 | WARN 35 | SKIP 119 | PASS 638 ]
══ Skipped tests (119) ═════════════════════════════════════════════════════════
• Installed marginaleffects is version 0.25.0; but 1.0.0 is required (1):
'test-marginaleffects.R:1:1'
• On CRAN (106): 'test-GLMMadaptive.R:1:1', 'test-averaging.R:1:1',
'test-backticks.R:1:1', 'test-bootstrap_emmeans.R:1:1',
'test-bootstrap_parameters.R:1:1', 'test-brms.R:1:1',
'test-compare_parameters.R:91:7', 'test-compare_parameters.R:95:5',
'test-complete_separation.R:14:5', 'test-complete_separation.R:24:5',
'test-complete_separation.R:35:5', 'test-coxph.R:79:5', 'test-efa.R:1:1',
'test-emmGrid-df_colname.R:1:1', 'test-equivalence_test.R:10:3',
'test-equivalence_test.R:18:3', 'test-equivalence_test.R:82:3',
'test-format_model_parameters2.R:2:3', 'test-gam.R:30:1',
'test-get_scores.R:1:1', 'test-glmer.R:1:1', 'test-glmmTMB-2.R:1:1',
'test-glmmTMB-profile_CI.R:2:3', 'test-glmmTMB.R:8:1', 'test-helper.R:1:1',
'test-ivreg.R:54:3', 'test-include_reference.R:15:3',
'test-include_reference.R:67:3', 'test-lmerTest.R:1:1', 'test-mipo.R:19:3',
'test-mipo.R:33:3', 'test-mmrm.R:1:1', 'test-model_parameters.anova.R:1:1',
'test-model_parameters.aov.R:1:1', 'test-model_parameters.bracl.R:5:1',
'test-model_parameters.cgam.R:1:1', 'test-model_parameters.coxme.R:1:1',
'test-model_parameters.aov_es_ci.R:158:3',
'test-model_parameters.aov_es_ci.R:269:3',
'test-model_parameters.aov_es_ci.R:319:3',
'test-model_parameters.aov_es_ci.R:372:3',
'test-model_parameters.epi2x2.R:1:1', 'test-model_parameters.fixest.R:2:3',
'test-model_parameters.fixest.R:77:3',
'test-model_parameters.fixest.R:147:5',
'test-model_parameters.fixest_multi.R:3:1',
'test-model_parameters.ggeffects.R:12:3',
'test-model_parameters.glmgee.R:1:1', 'test-model_parameters.glm.R:40:3',
'test-model_parameters.glm.R:68:3', 'test-model_parameters.logistf.R:1:1',
'test-model_parameters.logitr.R:1:1', 'test-model_parameters.mclogit.R:5:1',
'test-model_parameters.mediate.R:32:3', 'test-model_parameters.mixed.R:2:1',
'test-model_parameters.nnet.R:5:1', 'test-model_parameters.vgam.R:3:1',
'test-model_parameters_df.R:1:1', 'test-model_parameters_ordinal.R:1:1',
'test-model_parameters_random_pars.R:1:1', 'test-model_parameters_std.R:1:1',
'test-model_parameters_std_mixed.R:3:1', 'test-n_factors.R:10:3',
'test-n_factors.R:26:3', 'test-n_factors.R:76:3', 'test-p_adjust.R:1:1',
'test-p_direction.R:1:1', 'test-p_significance.R:1:1', 'test-p_value.R:14:1',
'test-panelr.R:1:1', 'test-pipe.R:1:1', 'test-pca.R:66:3', 'test-polr.R:2:1',
'test-plm.R:111:3', 'test-posterior.R:2:1', 'test-pool_parameters.R:11:3',
'test-pool_parameters.R:32:1', 'test-print_AER_labels.R:11:5',
'test-pretty_names.R:65:5', 'test-printing-stan.R:2:1',
'test-printing.R:1:1', 'test-quantreg.R:1:1', 'test-random_effects_ci.R:4:1',
'test-robust.R:2:1', 'test-rstanarm.R:3:1', 'test-serp.R:16:5',
'test-printing2.R:15:7', 'test-printing2.R:22:7', 'test-printing2.R:27:7',
'test-printing2.R:32:7', 'test-printing2.R:37:7', 'test-printing2.R:49:7',
'test-printing2.R:91:7', 'test-printing2.R:127:7', 'test-svylme.R:1:1',
'test-visualisation_recipe.R:7:3', 'test-weightit.R:23:3',
'test-weightit.R:43:3', 'test-standardize_parameters.R:31:3',
'test-standardize_parameters.R:36:3', 'test-standardize_parameters.R:61:3',
'test-standardize_parameters.R:173:3', 'test-standardize_parameters.R:298:3',
'test-standardize_parameters.R:332:3', 'test-standardize_parameters.R:425:3',
'test-standardize_parameters.R:515:3'
• On Linux (5): 'test-model_parameters.BFBayesFactor.R:1:1',
'test-nestedLogit.R:78:3', 'test-random_effects_ci-glmmTMB.R:3:1',
'test-simulate_model.R:1:1', 'test-simulate_parameters.R:1:1'
• TODO: fix this test (1): 'test-model_parameters.lqmm.R:40:3'
• TODO: this one actually is not correct. (1):
'test-model_parameters_robust.R:127:3'
• empty test (5): 'test-wrs2.R:8:1', 'test-wrs2.R:18:1', 'test-wrs2.R:30:1',
'test-wrs2.R:43:1', 'test-wrs2.R:55:1'
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-ci.R:12:3'): ci ──────────────────────────────────────────────
suppressMessages(ci(model, method = "normal"))[1, 3] (`actual`) not equal to -0.335063 (`expected`).
`actual`: -0.46
`expected`: -0.34
── Failure ('test-ci.R:15:3'): ci ──────────────────────────────────────────────
ci(model)[1, 3] (`actual`) not equal to -0.3795646 (`expected`).
`actual`: -0.51
`expected`: -0.38
── Failure ('test-ci.R:19:3'): ci ──────────────────────────────────────────────
`val` (`actual`) not equal to -0.555424 (`expected`).
`actual`: -0.593
`expected`: -0.555
── Failure ('test-model_parameters.blmerMod.R:10:3'): model_parameters.blmerMod ──
params$SE (`actual`) not equal to c(6.8246, 1.54579) (`expected`).
`actual`: 6.66 5.78
`expected`: 6.82 1.55
── Failure ('test-model_parameters.blmerMod.R:20:3'): model_parameters.blmerMod-all ──
params$SE (`actual`) not equal to c(6.8246, 1.54579, 5.83626, 1.24804, 0.31859, 1.50801) (`expected`).
`actual`: 6.66 5.78 NA NA NA NA
`expected`: 6.82 1.55 5.84 1.25 0.32 1.51
── Failure ('test-model_parameters.blmerMod.R:21:3'): model_parameters.blmerMod-all ──
params$Coefficient (`actual`) not equal to c(251.4051, 10.46729, 24.74066, 5.92214, 0.06555, 25.5918) (`expected`).
`actual`: 251.4051 10.4673 24.3587 24.3587 0.0000 24.3587
`expected`: 251.4051 10.4673 24.7407 5.9221 0.0655 25.5918
── Failure ('test-model_parameters_df_method.R:16:3'): model_parameters, ci_method default (residual) ──
mp0$SE (`actual`) not equal to c(...) (`expected`).
`actual`: 3.7352 4.3554 3.6616 0.0201 1.5764 0.8709 0.0373 0.0174
`expected`: 2.7746 3.6957 3.5210 0.0157 1.5851 0.8632 0.0297 0.0167
── Failure ('test-model_parameters_df_method.R:31:3'): model_parameters, ci_method default (residual) ──
mp0$p (`actual`) not equal to c(0, 0.00258, 0.14297, 0.17095, 0.84778, 0.00578, 0.00151, 0.32653) (`expected`).
`actual`: 0.0000 0.0127 0.2125 0.4124 0.8151 0.0077 0.0107 0.3555
`expected`: 0.0000 0.0026 0.1430 0.1709 0.8478 0.0058 0.0015 0.3265
── Failure ('test-model_parameters_df_method.R:45:3'): model_parameters, ci_method default (residual) ──
mp0$CI_low (`actual`) not equal to c(...) (`expected`).
`actual`: 21.7702 2.7847 -2.8922 -0.0584 -2.8962 -4.3616 -0.1816 -0.0524
`expected`: 24.5472 4.8970 -1.9532 -0.0549 -2.9795 -4.4285 -0.1693 -0.0513
── Failure ('test-model_parameters_df_method.R:62:3'): model_parameters, ci_method normal ──
mp1$SE (`actual`) not equal to c(...) (`expected`).
`actual`: 3.7352 4.3554 3.6616 0.0201 1.5764 0.8709 0.0373 0.0174
`expected`: 2.7746 3.6957 3.5210 0.0157 1.5851 0.8632 0.0297 0.0167
── Failure ('test-model_parameters_df_method.R:81:3'): model_parameters, ci_method normal ──
mp1$p (`actual`) not equal to c(0, 0.00068, 0.12872, 0.15695, 0.846, 0.00224, 0.00029, 0.31562) (`expected`).
`actual`: 0.0000 0.0067 0.1991 0.4034 0.8129 0.0033 0.0053 0.3453
`expected`: 0.0000 0.0007 0.1287 0.1570 0.8460 0.0022 0.0003 0.3156
── Failure ('test-model_parameters_df_method.R:86:3'): model_parameters, ci_method normal ──
mp1$CI_low (`actual`) not equal to c(...) (`expected`).
`actual`: 22.1956 3.2808 -2.4751 -0.0561 -2.7166 -4.2624 -0.1774 -0.0504
`expected`: 24.8633 5.3180 -1.5521 -0.0531 -2.7989 -4.3301 -0.1659 -0.0494
── Failure ('test-model_parameters_df_method.R:103:3'): model_parameters, ci_method satterthwaite ──
mp2$SE (`actual`) not equal to c(...) (`expected`).
`actual`: 3.7352 4.3554 3.6616 0.0201 1.5764 0.8709 0.0373 0.0174
`expected`: 2.7746 3.6957 3.5210 0.0157 1.5851 0.8632 0.0297 0.0167
── Failure ('test-model_parameters_df_method.R:118:3'): model_parameters, ci_method satterthwaite ──
mp2$p (`actual`) not equal to c(0, 0.00236, 0.14179, 0.16979, 0.84763, 0.00542, 0.00136, 0.32563) (`expected`).
`actual`: 0.0000 0.0121 0.2114 0.4117 0.8149 0.0072 0.0101 0.3547
`expected`: 0.0000 0.0024 0.1418 0.1698 0.8476 0.0054 0.0014 0.3256
── Failure ('test-model_parameters_df_method.R:132:3'): model_parameters, ci_method satterthwaite ──
mp2$CI_low (`actual`) not equal to c(...) (`expected`).
`actual`: 21.8060 2.8281 -2.8557 -0.0582 -2.8804 -4.3529 -0.1812 -0.0522
`expected`: 24.5749 4.9338 -1.9181 -0.0548 -2.9637 -4.4199 -0.1690 -0.0512
── Failure ('test-model_parameters_df_method.R:149:3'): model_parameters, ci_method kenward ──
mp3$SE (`actual`) not equal to c(...) (`expected`).
`actual`: 3.8581 4.4785 3.7104 0.0217 1.5776 0.8745 0.0388 0.0175
`expected`: 2.9761 6.1045 3.9875 0.0203 1.6033 0.9160 0.0551 0.0196
── Failure ('test-model_parameters_df_method.R:163:3'): model_parameters, ci_method kenward ──
mp3$df (`actual`) not equal to c(...) (`expected`).
`actual`: 10.71 22.97 22.53 24.00 22.04 22.18 23.26 22.40
`expected`: 19.40 5.28 23.57 8.97 22.74 23.76 2.73 22.83
── Failure ('test-model_parameters_df_method.R:177:3'): model_parameters, ci_method kenward ──
mp3$p (`actual`) not equal to c(0, 0.09176, 0.19257, 0.30147, 0.84942, 0.00828, 0.15478, 0.40248) (`expected`).
`actual`: 0.0000 0.0147 0.2181 0.4467 0.8153 0.0079 0.0131 0.3602
`expected`: 0.0000 0.0918 0.1926 0.3015 0.8494 0.0083 0.1548 0.4025
── Failure ('test-model_parameters_df_method.R:191:3'): model_parameters, ci_method kenward ──
mp3$CI_low (`actual`) not equal to c(...) (`expected`).
`actual`: 20.9968 2.5523 -2.9830 -0.0615 -2.8984 -4.3684 -0.1843 -0.0527
`expected`: 24.0809 -2.8870 -2.8889 -0.0683 -3.0108 -4.5299 -0.2934 -0.0573
── Failure ('test-model_parameters_df_method.R:208:3'): model_parameters, ci_method wald (t) ──
mp4$SE (`actual`) not equal to c(...) (`expected`).
`actual`: 3.7352 4.3554 3.6616 0.0201 1.5764 0.8709 0.0373 0.0174
`expected`: 2.7746 3.6957 3.5210 0.0157 1.5851 0.8632 0.0297 0.0167
── Failure ('test-model_parameters_df_method.R:223:3'): model_parameters, ci_method wald (t) ──
mp4$p (`actual`) not equal to c(0, 0.00258, 0.14297, 0.17095, 0.84778, 0.00578, 0.00151, 0.32653) (`expected`).
`actual`: 0.0000 0.0127 0.2125 0.4124 0.8151 0.0077 0.0107 0.3555
`expected`: 0.0000 0.0026 0.1430 0.1709 0.8478 0.0058 0.0015 0.3265
── Failure ('test-model_parameters_df_method.R:237:3'): model_parameters, ci_method wald (t) ──
mp4$CI_low (`actual`) not equal to c(...) (`expected`).
`actual`: 21.7702 2.7847 -2.8922 -0.0584 -2.8962 -4.3616 -0.1816 -0.0524
`expected`: 24.5472 4.8970 -1.9532 -0.0549 -2.9795 -4.4285 -0.1693 -0.0513
[ FAIL 22 | WARN 35 | SKIP 119 | PASS 638 ]
Deleting unused snapshots:
• equivalence_test/equivalence-test-1.svg
• equivalence_test/equivalence-test-2.svg
• equivalence_test/equivalence-test-3.svg
• equivalence_test/equivalence-test-4.svg
• equivalence_test/equivalence-test-5.svg
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang
Version: 0.24.1
Check: tests
Result: ERROR
Running ‘testthat.R’ [6m/18m]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(parameters)
> library(testthat)
>
> test_check("parameters")
Starting 2 test processes
[ FAIL 22 | WARN 35 | SKIP 119 | PASS 638 ]
══ Skipped tests (119) ═════════════════════════════════════════════════════════
• Installed marginaleffects is version 0.25.0; but 1.0.0 is required (1):
'test-marginaleffects.R:1:1'
• On CRAN (106): 'test-GLMMadaptive.R:1:1', 'test-averaging.R:1:1',
'test-backticks.R:1:1', 'test-bootstrap_emmeans.R:1:1',
'test-bootstrap_parameters.R:1:1', 'test-brms.R:1:1',
'test-compare_parameters.R:91:7', 'test-compare_parameters.R:95:5',
'test-complete_separation.R:14:5', 'test-complete_separation.R:24:5',
'test-complete_separation.R:35:5', 'test-coxph.R:79:5', 'test-efa.R:1:1',
'test-emmGrid-df_colname.R:1:1', 'test-equivalence_test.R:10:3',
'test-equivalence_test.R:18:3', 'test-equivalence_test.R:82:3',
'test-format_model_parameters2.R:2:3', 'test-gam.R:30:1',
'test-get_scores.R:1:1', 'test-glmer.R:1:1', 'test-glmmTMB-2.R:1:1',
'test-glmmTMB-profile_CI.R:2:3', 'test-glmmTMB.R:8:1', 'test-helper.R:1:1',
'test-ivreg.R:54:3', 'test-include_reference.R:15:3',
'test-include_reference.R:67:3', 'test-lmerTest.R:1:1', 'test-mipo.R:19:3',
'test-mipo.R:33:3', 'test-mmrm.R:1:1', 'test-model_parameters.anova.R:1:1',
'test-model_parameters.aov.R:1:1', 'test-model_parameters.bracl.R:5:1',
'test-model_parameters.cgam.R:1:1', 'test-model_parameters.coxme.R:1:1',
'test-model_parameters.aov_es_ci.R:158:3',
'test-model_parameters.aov_es_ci.R:269:3',
'test-model_parameters.aov_es_ci.R:319:3',
'test-model_parameters.aov_es_ci.R:372:3',
'test-model_parameters.epi2x2.R:1:1',
'test-model_parameters.fixest_multi.R:3:1',
'test-model_parameters.fixest.R:2:3', 'test-model_parameters.fixest.R:77:3',
'test-model_parameters.fixest.R:147:5',
'test-model_parameters.ggeffects.R:12:3',
'test-model_parameters.glmgee.R:1:1', 'test-model_parameters.glm.R:40:3',
'test-model_parameters.glm.R:68:3', 'test-model_parameters.logistf.R:1:1',
'test-model_parameters.logitr.R:1:1', 'test-model_parameters.mclogit.R:5:1',
'test-model_parameters.mediate.R:32:3', 'test-model_parameters.mixed.R:2:1',
'test-model_parameters.nnet.R:5:1', 'test-model_parameters.vgam.R:3:1',
'test-model_parameters_df.R:1:1', 'test-model_parameters_ordinal.R:1:1',
'test-model_parameters_random_pars.R:1:1', 'test-model_parameters_std.R:1:1',
'test-model_parameters_std_mixed.R:3:1', 'test-n_factors.R:10:3',
'test-n_factors.R:26:3', 'test-n_factors.R:76:3', 'test-p_adjust.R:1:1',
'test-p_direction.R:1:1', 'test-p_significance.R:1:1', 'test-p_value.R:14:1',
'test-panelr.R:1:1', 'test-pipe.R:1:1', 'test-pca.R:66:3', 'test-polr.R:2:1',
'test-plm.R:111:3', 'test-posterior.R:2:1', 'test-pool_parameters.R:11:3',
'test-pool_parameters.R:32:1', 'test-print_AER_labels.R:11:5',
'test-printing-stan.R:2:1', 'test-printing.R:1:1', 'test-printing2.R:15:7',
'test-printing2.R:22:7', 'test-printing2.R:27:7', 'test-printing2.R:32:7',
'test-printing2.R:37:7', 'test-printing2.R:49:7', 'test-printing2.R:91:7',
'test-printing2.R:127:7', 'test-quantreg.R:1:1',
'test-random_effects_ci.R:4:1', 'test-robust.R:2:1', 'test-rstanarm.R:3:1',
'test-serp.R:16:5', 'test-pretty_names.R:65:5', 'test-svylme.R:1:1',
'test-visualisation_recipe.R:7:3', 'test-weightit.R:23:3',
'test-weightit.R:43:3', 'test-standardize_parameters.R:31:3',
'test-standardize_parameters.R:36:3', 'test-standardize_parameters.R:61:3',
'test-standardize_parameters.R:173:3', 'test-standardize_parameters.R:298:3',
'test-standardize_parameters.R:332:3', 'test-standardize_parameters.R:425:3',
'test-standardize_parameters.R:515:3'
• On Linux (5): 'test-model_parameters.BFBayesFactor.R:1:1',
'test-nestedLogit.R:78:3', 'test-random_effects_ci-glmmTMB.R:3:1',
'test-simulate_model.R:1:1', 'test-simulate_parameters.R:1:1'
• TODO: fix this test (1): 'test-model_parameters.lqmm.R:40:3'
• TODO: this one actually is not correct. (1):
'test-model_parameters_robust.R:127:3'
• empty test (5): 'test-wrs2.R:8:1', 'test-wrs2.R:18:1', 'test-wrs2.R:30:1',
'test-wrs2.R:43:1', 'test-wrs2.R:55:1'
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-ci.R:12:3'): ci ──────────────────────────────────────────────
suppressMessages(ci(model, method = "normal"))[1, 3] (`actual`) not equal to -0.335063 (`expected`).
`actual`: -0.46
`expected`: -0.34
── Failure ('test-ci.R:15:3'): ci ──────────────────────────────────────────────
ci(model)[1, 3] (`actual`) not equal to -0.3795646 (`expected`).
`actual`: -0.51
`expected`: -0.38
── Failure ('test-ci.R:19:3'): ci ──────────────────────────────────────────────
`val` (`actual`) not equal to -0.555424 (`expected`).
`actual`: -0.593
`expected`: -0.555
── Failure ('test-model_parameters.blmerMod.R:10:3'): model_parameters.blmerMod ──
params$SE (`actual`) not equal to c(6.8246, 1.54579) (`expected`).
`actual`: 6.66 5.78
`expected`: 6.82 1.55
── Failure ('test-model_parameters.blmerMod.R:20:3'): model_parameters.blmerMod-all ──
params$SE (`actual`) not equal to c(6.8246, 1.54579, 5.83626, 1.24804, 0.31859, 1.50801) (`expected`).
`actual`: 6.66 5.78 NA NA NA NA
`expected`: 6.82 1.55 5.84 1.25 0.32 1.51
── Failure ('test-model_parameters.blmerMod.R:21:3'): model_parameters.blmerMod-all ──
params$Coefficient (`actual`) not equal to c(251.4051, 10.46729, 24.74066, 5.92214, 0.06555, 25.5918) (`expected`).
`actual`: 251.4051 10.4673 24.3587 24.3587 0.0000 24.3587
`expected`: 251.4051 10.4673 24.7407 5.9221 0.0655 25.5918
── Failure ('test-model_parameters_df_method.R:16:3'): model_parameters, ci_method default (residual) ──
mp0$SE (`actual`) not equal to c(...) (`expected`).
`actual`: 3.7352 4.3554 3.6616 0.0201 1.5764 0.8709 0.0373 0.0174
`expected`: 2.7746 3.6957 3.5210 0.0157 1.5851 0.8632 0.0297 0.0167
── Failure ('test-model_parameters_df_method.R:31:3'): model_parameters, ci_method default (residual) ──
mp0$p (`actual`) not equal to c(0, 0.00258, 0.14297, 0.17095, 0.84778, 0.00578, 0.00151, 0.32653) (`expected`).
`actual`: 0.0000 0.0127 0.2125 0.4124 0.8151 0.0077 0.0107 0.3555
`expected`: 0.0000 0.0026 0.1430 0.1709 0.8478 0.0058 0.0015 0.3265
── Failure ('test-model_parameters_df_method.R:45:3'): model_parameters, ci_method default (residual) ──
mp0$CI_low (`actual`) not equal to c(...) (`expected`).
`actual`: 21.7702 2.7847 -2.8922 -0.0584 -2.8962 -4.3616 -0.1816 -0.0524
`expected`: 24.5472 4.8970 -1.9532 -0.0549 -2.9795 -4.4285 -0.1693 -0.0513
── Failure ('test-model_parameters_df_method.R:62:3'): model_parameters, ci_method normal ──
mp1$SE (`actual`) not equal to c(...) (`expected`).
`actual`: 3.7352 4.3554 3.6616 0.0201 1.5764 0.8709 0.0373 0.0174
`expected`: 2.7746 3.6957 3.5210 0.0157 1.5851 0.8632 0.0297 0.0167
── Failure ('test-model_parameters_df_method.R:81:3'): model_parameters, ci_method normal ──
mp1$p (`actual`) not equal to c(0, 0.00068, 0.12872, 0.15695, 0.846, 0.00224, 0.00029, 0.31562) (`expected`).
`actual`: 0.0000 0.0067 0.1991 0.4034 0.8129 0.0033 0.0053 0.3453
`expected`: 0.0000 0.0007 0.1287 0.1570 0.8460 0.0022 0.0003 0.3156
── Failure ('test-model_parameters_df_method.R:86:3'): model_parameters, ci_method normal ──
mp1$CI_low (`actual`) not equal to c(...) (`expected`).
`actual`: 22.1956 3.2808 -2.4751 -0.0561 -2.7166 -4.2624 -0.1774 -0.0504
`expected`: 24.8633 5.3180 -1.5521 -0.0531 -2.7989 -4.3301 -0.1659 -0.0494
── Failure ('test-model_parameters_df_method.R:103:3'): model_parameters, ci_method satterthwaite ──
mp2$SE (`actual`) not equal to c(...) (`expected`).
`actual`: 3.7352 4.3554 3.6616 0.0201 1.5764 0.8709 0.0373 0.0174
`expected`: 2.7746 3.6957 3.5210 0.0157 1.5851 0.8632 0.0297 0.0167
── Failure ('test-model_parameters_df_method.R:118:3'): model_parameters, ci_method satterthwaite ──
mp2$p (`actual`) not equal to c(0, 0.00236, 0.14179, 0.16979, 0.84763, 0.00542, 0.00136, 0.32563) (`expected`).
`actual`: 0.0000 0.0121 0.2114 0.4117 0.8149 0.0072 0.0101 0.3547
`expected`: 0.0000 0.0024 0.1418 0.1698 0.8476 0.0054 0.0014 0.3256
── Failure ('test-model_parameters_df_method.R:132:3'): model_parameters, ci_method satterthwaite ──
mp2$CI_low (`actual`) not equal to c(...) (`expected`).
`actual`: 21.8060 2.8281 -2.8557 -0.0582 -2.8804 -4.3529 -0.1812 -0.0522
`expected`: 24.5749 4.9338 -1.9181 -0.0548 -2.9637 -4.4199 -0.1690 -0.0512
── Failure ('test-model_parameters_df_method.R:149:3'): model_parameters, ci_method kenward ──
mp3$SE (`actual`) not equal to c(...) (`expected`).
`actual`: 3.8581 4.4785 3.7104 0.0217 1.5776 0.8745 0.0388 0.0175
`expected`: 2.9761 6.1045 3.9875 0.0203 1.6033 0.9160 0.0551 0.0196
── Failure ('test-model_parameters_df_method.R:163:3'): model_parameters, ci_method kenward ──
mp3$df (`actual`) not equal to c(...) (`expected`).
`actual`: 10.71 22.97 22.53 24.00 22.04 22.18 23.26 22.40
`expected`: 19.40 5.28 23.57 8.97 22.74 23.76 2.73 22.83
── Failure ('test-model_parameters_df_method.R:177:3'): model_parameters, ci_method kenward ──
mp3$p (`actual`) not equal to c(0, 0.09176, 0.19257, 0.30147, 0.84942, 0.00828, 0.15478, 0.40248) (`expected`).
`actual`: 0.0000 0.0147 0.2181 0.4467 0.8153 0.0079 0.0131 0.3602
`expected`: 0.0000 0.0918 0.1926 0.3015 0.8494 0.0083 0.1548 0.4025
── Failure ('test-model_parameters_df_method.R:191:3'): model_parameters, ci_method kenward ──
mp3$CI_low (`actual`) not equal to c(...) (`expected`).
`actual`: 20.9968 2.5523 -2.9830 -0.0615 -2.8984 -4.3684 -0.1843 -0.0527
`expected`: 24.0809 -2.8870 -2.8889 -0.0683 -3.0108 -4.5299 -0.2934 -0.0573
── Failure ('test-model_parameters_df_method.R:208:3'): model_parameters, ci_method wald (t) ──
mp4$SE (`actual`) not equal to c(...) (`expected`).
`actual`: 3.7352 4.3554 3.6616 0.0201 1.5764 0.8709 0.0373 0.0174
`expected`: 2.7746 3.6957 3.5210 0.0157 1.5851 0.8632 0.0297 0.0167
── Failure ('test-model_parameters_df_method.R:223:3'): model_parameters, ci_method wald (t) ──
mp4$p (`actual`) not equal to c(0, 0.00258, 0.14297, 0.17095, 0.84778, 0.00578, 0.00151, 0.32653) (`expected`).
`actual`: 0.0000 0.0127 0.2125 0.4124 0.8151 0.0077 0.0107 0.3555
`expected`: 0.0000 0.0026 0.1430 0.1709 0.8478 0.0058 0.0015 0.3265
── Failure ('test-model_parameters_df_method.R:237:3'): model_parameters, ci_method wald (t) ──
mp4$CI_low (`actual`) not equal to c(...) (`expected`).
`actual`: 21.7702 2.7847 -2.8922 -0.0584 -2.8962 -4.3616 -0.1816 -0.0524
`expected`: 24.5472 4.8970 -1.9532 -0.0549 -2.9795 -4.4285 -0.1693 -0.0513
[ FAIL 22 | WARN 35 | SKIP 119 | PASS 638 ]
Deleting unused snapshots:
• equivalence_test/equivalence-test-1.svg
• equivalence_test/equivalence-test-2.svg
• equivalence_test/equivalence-test-3.svg
• equivalence_test/equivalence-test-4.svg
• equivalence_test/equivalence-test-5.svg
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc
performance: Current CRAN status: ERROR: 1, OK: 14
Version: 0.13.0
Check: tests
Result: ERROR
Running ‘testthat.R’ [138s/356s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(performance)
>
> test_check("performance")
Starting 2 test processes
[ FAIL 5 | WARN 18 | SKIP 34 | PASS 375 ]
══ Skipped tests (34) ══════════════════════════════════════════════════════════
• On CRAN (29): 'test-binned_residuals.R:137:3',
'test-binned_residuals.R:164:3', 'test-bootstrapped_icc_ci.R:2:3',
'test-bootstrapped_icc_ci.R:44:3', 'test-check_collinearity.R:181:3',
'test-check_collinearity.R:218:3', 'test-check_dag.R:1:1',
'test-check_distribution.R:35:3', 'test-check_itemscale.R:28:3',
'test-check_model.R:1:1', 'test-check_predictions.R:2:1',
'test-check_residuals.R:2:3', 'test-check_singularity.R:2:3',
'test-check_singularity.R:23:3', 'test-check_zeroinflation.R:73:3',
'test-check_zeroinflation.R:112:3', 'test-check_outliers.R:110:3',
'test-helpers.R:1:1', 'test-icc.R:2:1', 'test-compare_performance.R:21:3',
'test-mclogit.R:53:3', 'test-model_performance.bayesian.R:1:1',
'test-model_performance.merMod.R:2:3',
'test-model_performance.merMod.R:22:3', 'test-model_performance.rma.R:33:3',
'test-pkg-ivreg.R:7:3', 'test-r2_nakagawa.R:20:1', 'test-rmse.R:35:3',
'test-test_likelihoodratio.R:55:1'
• On Linux (3): 'test-nestedLogit.R:1:1', 'test-r2_bayes.R:1:1',
'test-test_wald.R:1:1'
• getRversion() > "4.4.0" is TRUE (1): 'test-check_outliers.R:258:3'
• packageVersion("glmmTMB") > "1.1.10" is not TRUE (1): 'test-r2.R:82:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-check_distribution.R:18:3'): check_distribution ──────────────
out$p_Residuals (`actual`) not equal to c(...) (`expected`).
actual | expected
[2] 0.000 | 0.000 [2]
[3] 0.000 | 0.000 [3]
[4] 0.000 | 0.000 [4]
[5] 0.938 - 0.906 [5]
[6] 0.000 | 0.000 [6]
[7] 0.000 | 0.000 [7]
[8] 0.000 | 0.000 [8]
[9] 0.031 - 0.062 [9]
[10] 0.000 | 0.000 [10]
[11] 0.000 | 0.000 [11]
... ... ... and 1 more ...
── Failure ('test-check_convergence.R:27:3'): check_convergence ────────────────
check_convergence(model) is not TRUE
`actual`: FALSE
`expected`: TRUE
── Failure ('test-performance_aic.R:29:3'): performance_aic lme4 default ───────
performance_aic(m1, estimator = "ML") (`actual`) not equal to 125.0043 (`expected`).
`actual`: 128.2
`expected`: 125.0
── Failure ('test-performance_aic.R:31:3'): performance_aic lme4 default ───────
performance_aic(m2, estimator = "REML") (`actual`) not equal to 128.0054 (`expected`).
`actual`: 132.9
`expected`: 128.0
── Failure ('test-r2_nakagawa.R:7:3'): r2_nakagawa ─────────────────────────────
r2_nakagawa(model) (`actual`) not equal to structure(...) (`expected`).
`actual$R2_conditional`: 0.950
`expected$R2_conditional`: 0.969
`actual$R2_marginal`: 0.87
`expected$R2_marginal`: 0.66
[ FAIL 5 | WARN 18 | SKIP 34 | PASS 375 ]
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc
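The large skip counts reported throughout these logs ("On CRAN", "On Linux", "{bigglm} is not installed") come from testthat's skip helpers. The sketch below illustrates that pattern with a placeholder expectation; it is not taken from any of the packages' test suites.

```r
# Illustrative testthat skip pattern (placeholder test, not from these packages).
library(testthat)

test_that("expensive model check", {
  skip_on_cran()                    # reported as "On CRAN (n)" in the skip summary
  skip_on_os("linux")               # reported as "On Linux (n)"
  skip_if_not_installed("bigglm")   # reported as "{bigglm} is not installed"

  # Numeric comparisons in these suites use expect_equal(), which accepts a
  # tolerance argument for small platform-dependent differences.
  expect_equal(1 + 1, 2, tolerance = 1e-8)
})
```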
sjlabelled: Current CRAN status: OK: 15
sjmisc: Current CRAN status: NOTE: 3, OK: 12
Version: 2.8.10
Check: Rd cross-references
Result: NOTE
Found the following Rd file(s) with Rd \link{} targets missing package
anchors:
to_value.Rd: set_labels
Please provide package anchors for all Rd \link{} targets not in the
package itself and the base packages.
Flavors: r-devel-linux-x86_64-debian-clang, r-devel-linux-x86_64-debian-gcc, r-devel-windows-x86_64
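The NOTE asks for a package anchor on the `\link{}` target in `to_value.Rd`. A minimal sketch of the fix in the roxygen header that generates that file, assuming `set_labels()` is the function exported by sjlabelled:

```r
# Sketch of the Rd cross-reference fix (roxygen comments; assumes set_labels()
# comes from sjlabelled, i.e. not from the package itself or base R).

# Before: unanchored link, flagged by R-devel
#' @seealso \code{\link{set_labels}}

# After: package-anchored link
#' @seealso \code{\link[sjlabelled]{set_labels}}
```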
sjPlot: Current CRAN status: OK: 15
sjstats: Current CRAN status: OK: 15