* Fix documentation issues detected during the shapr 1.0.2 release (#442)
* Replace `print()` by `warning()` on two occasions
* Fix the error `Expected <nn_module> but got object of type <NULL>` for `approach = 'vaeac'` after a recent `torch` update broke it (#444)
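For context, a minimal sketch of the kind of call this fix restores; the `lm` model and the `airquality` split are illustrative (not from the changelog), and `approach = "vaeac"` requires a working `torch` installation:

```r
library(shapr)

df <- na.omit(airquality)
x_var <- c("Solar.R", "Wind", "Temp", "Month")
model <- lm(Ozone ~ Solar.R + Wind + Temp + Month, data = df)

# Trains a vaeac model via torch under the hood; this is the type of call
# that previously failed with "Expected <nn_module> but got object of type <NULL>"
explanation <- explain(
  model = model,
  x_explain = df[1:6, x_var],
  x_train = df[-(1:6), x_var],
  approach = "vaeac",
  phi0 = mean(df$Ozone[-(1:6)])
)
```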
* Change the default seed in `explain()` and `explain_forecast()` from 1 to `NULL`, so that the internal `set.seed()` call no longer conflicts with code run afterwards (#445)
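A sketch of the practical consequence, with an illustrative `lm` model on `airquality`: `explain()` no longer seeds the RNG behind the scenes, so users who relied on the implicit `set.seed(1)` should now pass a seed explicitly for reproducibility:

```r
library(shapr)

df <- na.omit(airquality)
x_var <- c("Solar.R", "Wind", "Temp", "Month")
model <- lm(Ozone ~ Solar.R + Wind + Temp + Month, data = df)
x_train <- df[-(1:6), x_var]
x_explain <- df[1:6, x_var]
p0 <- mean(df$Ozone[-(1:6)])

# New default (seed = NULL): explain() no longer calls set.seed(),
# so random draws made after this call are not silently pinned to seed 1
ex <- explain(model = model, x_explain = x_explain, x_train = x_train,
              approach = "gaussian", phi0 = p0)

# For reproducible sampling inside explain(), set the seed explicitly
ex_repro <- explain(model = model, x_explain = x_explain, x_train = x_train,
                    approach = "gaussian", phi0 = p0, seed = 1)
```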
* Other minor fixes
* Use `expect_snapshot_rds()` to reduce false-positive round-off errors between platforms (#444)
* `explain_forecast()` (#433)
* `by = .I` (#434)
* Move `paired_shap_sampling` and `kernelSHAP_reweighting` into `extra_computation_args` (#428)
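For calls that previously supplied these as top-level arguments, a migration sketch; the values shown (`TRUE`, `"on_all"`) are illustrative choices rather than documented defaults, and `model`, `x_train`, `x_explain`, and `p0` are assumed set up as in the sketches above:

```r
# Before (pre-#428, schematic):
# explain(..., paired_shap_sampling = TRUE, kernelSHAP_reweighting = "on_all")

# After: both settings live in the extra_computation_args list
explanation <- explain(
  model = model,
  x_explain = x_explain,
  x_train = x_train,
  approach = "gaussian",
  phi0 = p0,
  extra_computation_args = list(
    paired_shap_sampling = TRUE,
    kernelSHAP_reweighting = "on_all"
  )
)
```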
* Fix `iterative = TRUE` for `explain_forecast()`, which was not using coalitions from previous iterations (#426)
* `verbose` argument for `explain_forecast()` (#425)
* The `party` package returns a `constparty` object (#423)
* `keep_samp_for_vS` with the iterative approach (#417)
* `explain()` in R (#416)
* The user workflow has changed from two function calls (`shapr()` for the initial setup + `explain()` for the explanation of specific observations) to a single function call (also named `explain()`). The data used for training and the data to be explained have gotten explicit names (`x_train` and `x_explain`). The order of the input arguments has also been slightly changed (`model` is now the first argument).
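Schematically, the migration looks as follows; the data preparation is illustrative, and the old 0.x calls are shown commented out for contrast:

```r
library(shapr)

df <- na.omit(airquality)
x_var <- c("Solar.R", "Wind", "Temp")
model <- lm(Ozone ~ Solar.R + Wind + Temp, data = df)
x_train <- df[-(1:6), x_var]
x_explain <- df[1:6, x_var]
p0 <- mean(df$Ozone[-(1:6)])

# Old two-step workflow (shapr <= 0.2.x):
# explainer <- shapr(x_train, model)
# explanation <- explain(x_explain, explainer,
#                        approach = "empirical", prediction_zero = p0)

# New single call: model first, explicit x_train / x_explain
explanation <- explain(
  model = model,
  x_explain = x_explain,
  x_train = x_train,
  approach = "empirical",
  phi0 = p0
)
```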
* Custom model prediction functions are now passed directly to `explain()` (via the `predict_model` argument) instead of being defined as functions of a specific class in the global env.
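A sketch of the new mechanism; the `my_model` class below is invented for illustration, while `predict_model` is the hook in `explain()` for models that are not natively supported:

```r
library(shapr)

df <- na.omit(airquality)
x_var <- c("Solar.R", "Wind", "Temp")

# A model class shapr does not know natively: a plain list wrapping an lm fit
custom_model <- structure(
  list(fit = lm(Ozone ~ Solar.R + Wind + Temp, data = df)),
  class = "my_model"
)

explanation <- explain(
  model = custom_model,
  x_explain = df[1:6, x_var],
  x_train = df[-(1:6), x_var],
  approach = "empirical",
  phi0 = mean(df$Ozone[-(1:6)]),
  # Passed directly to explain(), instead of defining a
  # predict_model.my_model() function in the global environment
  predict_model = function(model, newdata) {
    predict(model$fit, newdata = as.data.frame(newdata))
  }
)
```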
* `make_dummies`, used to explain `xgboost` models with categorical data, is removed to simplify the code base. This is instead handled with a custom prediction model.
* `explain.ctree_comb_mincrit`, which allowed combining models with `approach = 'ctree'` using different `mincrit` parameters, has been removed to simplify the code base. It may return in a completely general manner in a later version of `shapr`.
* Python wrapper (`shaprpy`, #325) for explaining predictions from Python models (from Python), utilizing almost all functionality of `shapr`. The wrapper moves back and forth between Python and R, doing the prediction in Python and almost everything else in R. This simplifies maintenance of `shaprpy` significantly. The wrapper is available here.
* Progress updates via the `progressr` package. Must be activated by the user with `progressr::handlers(global = TRUE)` or by wrapping the call to `explain()` in `progressr::with_progress({})`.
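Both activation patterns, assuming `model`, `x_train`, `x_explain`, and `p0` as in the sketches above:

```r
library(progressr)

# Option 1: turn on progress reporting globally for the session
handlers(global = TRUE)
explanation <- explain(model = model, x_explain = x_explain, x_train = x_train,
                       approach = "empirical", phi0 = p0)

# Option 2: request progress for a single call only
with_progress({
  explanation <- explain(model = model, x_explain = x_explain, x_train = x_train,
                         approach = "empirical", phi0 = p0)
})
```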
* New `approach = 'categorical'` (#256, #307), used to explain models with solely categorical features by directly using/estimating the joint distribution of all feature combinations.
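A sketch with solely categorical features, binning `airquality` columns into factors purely for illustration:

```r
library(shapr)

df <- na.omit(airquality)

# Make every feature categorical by binning it into a factor
df_cat <- data.frame(
  Ozone   = df$Ozone,
  Solar.R = cut(df$Solar.R, breaks = 3),
  Wind    = cut(df$Wind, breaks = 3),
  Temp    = cut(df$Temp, breaks = 3)
)
model <- lm(Ozone ~ ., data = df_cat)

explanation <- explain(
  model = model,
  x_explain = df_cat[1:6, -1],
  x_train = df_cat[-(1:6), -1],
  approach = "categorical",  # uses/estimates the joint distribution of the factor levels
  phi0 = mean(df_cat$Ozone[-(1:6)])
)
```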
* New `approach = 'timeseries'` (#314) for explaining classifications based on time series data/models, with the method described in Sec. 4.3 of the groupShapley paper.
* New function `explain_forecast()` to explain forecasts from time series models at various prediction horizons (#328). Uses a different set of input arguments, which is more appropriate for these models.
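A sketch in the spirit of the package examples, explaining three-step-ahead forecasts from an AR(2) model of the `airquality` temperatures; the index and lag choices are illustrative:

```r
library(shapr)

temp <- airquality$Temp  # 153 daily temperature observations

# Fit an AR(2) model and explain forecasts for horizons 1-3
model_ar <- ar(temp, aic = FALSE, order.max = 2)

explanation <- explain_forecast(
  model = model_ar,
  y = matrix(temp, dimnames = list(NULL, "Temp")),
  train_idx = 2:151,
  explain_idx = 152:153,     # forecast origins to explain
  explain_y_lags = 2,        # use two lags of y as features
  horizon = 3,
  approach = "empirical",
  phi0 = rep(mean(temp), 3)  # one baseline value per forecast horizon
)
```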
* Separate implementation of the `approach = 'independence'` method, providing significantly faster computation (no longer a special case of the `empirical` method). Also allows the method to be used on models with categorical data (#315).
* Snapshot tests for the output of `explain()`, also using `vdiffr` for plot tests. Test functions are only written for exported core functions; internal functions are only tested through the exported ones.
* Switched to the `datasets::airquality` dataset. This avoids including a new package just for the dataset (#248).
* `shapr(data[,1:5], model...)`
* `attach()`: fixed by changing how we simulate adding a function to `.GlobalEnv` in the failing test. The actual package is not affected.