CRAN Package Check Results for Package finetune

Last updated on 2025-02-18 09:50:37 CET.

All times (Tinstall, Tcheck, Ttotal) are in seconds.

Flavor                             Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang  1.2.0       16.85  160.50  177.35  NOTE
r-devel-linux-x86_64-debian-gcc    1.2.0       11.40  109.24  120.64  NOTE
r-devel-linux-x86_64-fedora-clang  1.2.0                      317.80  ERROR
r-devel-linux-x86_64-fedora-gcc    1.2.0                      280.41  ERROR
r-devel-macos-arm64                1.2.0                       75.00  OK
r-devel-macos-x86_64               1.2.0                      162.00  OK
r-devel-windows-x86_64             1.2.0       15.00  157.00  172.00  NOTE
r-patched-linux-x86_64             1.2.0       17.81  145.51  163.32  OK
r-release-linux-x86_64             1.2.0       14.45  144.59  159.04  OK
r-release-macos-arm64              1.2.0                       72.00  OK
r-release-macos-x86_64             1.2.0                      100.00  OK
r-release-windows-x86_64           1.2.0       16.00  150.00  166.00  OK
r-oldrel-macos-arm64               1.2.0                       77.00  OK
r-oldrel-macos-x86_64              1.2.0                      202.00  OK
r-oldrel-windows-x86_64            1.2.0       20.00  181.00  201.00  OK

Check Details

Version: 1.2.0
Check: Rd cross-references
Result: NOTE
    Found the following Rd file(s) with Rd \link{} targets missing package
    anchors:
      collect_predictions.Rd: collect_metrics, tune
      show_best.Rd: show_best
      tune_sim_anneal.Rd: tune_grid, tune_bayes
    Please provide package anchors for all Rd \link{} targets not in the
    package itself and the base packages.
Flavors: r-devel-linux-x86_64-debian-clang, r-devel-linux-x86_64-debian-gcc,
    r-devel-windows-x86_64
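The fix is to give each \link{} target that lives outside finetune and base R a package anchor, i.e. \link[tune]{tune_grid} rather than \link{tune_grid} in the generated Rd. A minimal sketch of the roxygen2 form, assuming (as the Rd file names suggest) that the unresolved topics come from the tune package:

    #' @seealso
    #'   Anchored cross-references resolve even when the target package is
    #'   not installed at check time: [tune::tune_grid()], [tune::tune_bayes()],
    #'   [tune::collect_metrics()], [tune::show_best()].
    #'
    #'   Unanchored forms such as [tune_grid()] leave the \link{} target
    #'   without a package anchor, which is what this NOTE flags.

Re-documenting then emits an anchored \link[tune:tune_grid]{tune::tune_grid()} in the Rd, which satisfies the check.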

Version: 1.2.0
Check: tests
Result: ERROR
    Running ‘spelling.R’
    Running ‘testthat.R’ [99s/237s on fedora-clang; 90s/122s on fedora-gcc]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    >
    > suppressPackageStartupMessages(library(finetune))
    >
    > # CRAN wants packages to be able to be check without the Suggests dependencies
    > if (rlang::is_installed(c("modeldata", "lme4", "testthat"))) {
    +   suppressPackageStartupMessages(library(testthat))
    +   test_check("finetune")
    + }
    [ FAIL 7 | WARN 2 | SKIP 21 | PASS 148 ]

    ══ Skipped tests (21) ═══════════════════════════════════════════════════
    • On CRAN (21): 'test-anova-filter.R:132:3', 'test-anova-overall.R:3:3',
      'test-anova-overall.R:25:3', 'test-anova-overall.R:44:3',
      'test-anova-overall.R:69:3', 'test-race-control.R:17:3',
      'test-race-control.R:38:3', 'test-sa-control.R:19:3',
      'test-sa-control.R:48:3', 'test-sa-misc.R:5:3', 'test-sa-overall.R:2:3',
      'test-sa-overall.R:21:3', 'test-sa-overall.R:42:3',
      'test-sa-overall.R:92:3', 'test-sa-overall.R:131:3',
      'test-sa-overall.R:186:3', 'test-win-loss-filter.R:2:3',
      'test-win-loss-overall.R:3:3', 'test-win-loss-overall.R:22:3',
      'test-win-loss-overall.R:39:3', 'test-win-loss-overall.R:56:3'

    ══ Failed tests ═════════════════════════════════════════════════════════
    ── Failure ('test-anova-filter.R:50:3'): anova filtering and logging ─────
    anova_res$lower (`actual`) not equal to unname(rmse_ci[, 1]) (`expected`).
      `actual`: -0.89 -0.82 -0.77 -0.76 -0.76 -0.73 -0.60
    `expected`: -0.24 -0.16 -0.12 -0.11 -0.11 -0.08  0.06
    ── Failure ('test-anova-filter.R:51:3'): anova filtering and logging ─────
    anova_res$upper (`actual`) not equal to unname(rmse_ci[, 2]) (`expected`).
      `actual`: 0.91 0.99 1.03 1.04 1.04 1.08 1.21
    `expected`: 0.26 0.34 0.38 0.39 0.39 0.42 0.56
    ── Failure ('test-race-s3.R:33:3'): racing S3 methods ────────────────────
    nrow(collect_metrics(anova_race)) (`actual`) not equal to 2 (`expected`).
      `actual`: 6.0
    `expected`: 2.0
    ── Failure ('test-race-s3.R:35:3'): racing S3 methods ────────────────────
    nrow(collect_metrics(anova_race, summarize = FALSE)) (`actual`) not equal
    to 2 * 20 (`expected`).
      `actual`: 120.0
    `expected`:  40.0
    ── Failure ('test-race-s3.R:44:3'): racing S3 methods ────────────────────
    nrow(collect_predictions(anova_race, all_configs = FALSE, summarize =
    TRUE)) (`actual`) not equal to nrow(mtcars) * 1 (`expected`).
      `actual`: 96.0
    `expected`: 32.0
    ── Failure ('test-race-s3.R:52:3'): racing S3 methods ────────────────────
    nrow(collect_predictions(anova_race, all_configs = FALSE, summarize =
    FALSE)) (`actual`) not equal to nrow(mtcars) * 1 * 2 (`expected`).
      `actual`: 192.0
    `expected`:  64.0
    ── Failure ('test-race-s3.R:64:3'): racing S3 methods ────────────────────
    nrow(show_best(anova_race, metric = "rmse")) (`actual`) not equal to 1
    (`expected`).
      `actual`: 3.0
    `expected`: 1.0
    [ FAIL 7 | WARN 2 | SKIP 21 | PASS 148 ]
    Error: Test failures
    Execution halted
Flavors: r-devel-linux-x86_64-fedora-clang, r-devel-linux-x86_64-fedora-gcc
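All seven failures tell the same story: on these Fedora flavors the ANOVA filter eliminated fewer candidates than the test fixtures assume, leaving three surviving configurations where one was expected (6 rows = 3 configurations × 2 metrics from collect_metrics(), 96 = 3 × nrow(mtcars) predictions, 3 rows from show_best()). The shifted confidence intervals in test-anova-filter.R point the same way, presumably a numeric difference in the lme4 mixed-model fit on those builds. A minimal sketch of the kind of racing call these S3 methods operate on; this is a hypothetical setup, not the actual fixtures in test-race-s3.R:

    library(tune)
    library(finetune)
    library(parsnip)
    library(rsample)

    set.seed(1)
    folds <- vfold_cv(mtcars, v = 5)

    spec <- decision_tree(cost_complexity = tune()) |>
      set_engine("rpart") |>
      set_mode("regression")

    # Race a small grid: after a burn-in, candidates that an ANOVA model on
    # the resampled metrics shows to be worse than the best are dropped early
    race <- tune_race_anova(
      spec, mpg ~ .,
      resamples = folds,
      grid = 4,
      control = control_race(save_pred = TRUE)
    )

    collect_metrics(race)                    # one row per surviving config and metric
    collect_metrics(race, summarize = FALSE) # one row per config, metric, and resample
    collect_predictions(race, all_configs = FALSE)  # predictions for finishers only
    show_best(race, metric = "rmse")         # surviving configs ranked by RMSE

How many configurations survive a race is data- and platform-dependent, so hard-coded row counts such as the 2, 40, and 32 above are what make these tests flavor-sensitive.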
