Compares stacked elastic net, tuned elastic net, ridge and lasso.
cv.starnet(
y,
X,
family = "gaussian",
nalpha = 21,
alpha = NULL,
nfolds.ext = 10,
nfolds.int = 10,
foldid.ext = NULL,
foldid.int = NULL,
type.measure = "deviance",
alpha.meta = 1,
nzero = NULL,
intercept = NULL,
upper.limit = NULL,
unit.sum = NULL,
...
)
y: response: numeric vector of length \(n\)
X: covariates: numeric matrix with \(n\) rows (samples) and \(p\) columns (variables)
family: character "gaussian", "binomial" or "poisson"
nalpha: number of alpha values
alpha: elastic net mixing parameters: vector of length nalpha with entries between \(0\) (ridge) and \(1\) (lasso); or NULL (equidistant values)
nfolds.ext, nfolds.int: number of folds (nfolds): positive integer
foldid.ext, foldid.int: fold identifiers (foldid): vector of length \(n\) with entries between \(1\) and nfolds, or NULL; for hold-out (single split) instead of cross-validation (multiple splits), set foldid.ext to \(0\) for training and to \(1\) for testing samples (see the sketch after this list)
type.measure: loss function: character "deviance", "class", "mse" or "mae" (see cv.glmnet)
alpha.meta: meta-learner: value between \(0\) (ridge) and \(1\) (lasso) for elastic net regularisation; NA for convex combination
nzero: number of non-zero coefficients: scalar/vector including positive integer(s) or NA; or NULL (no post hoc feature selection)
intercept, upper.limit, unit.sum: settings for meta-learner: logical, or NULL (intercept=!is.na(alpha.meta), upper.limit=TRUE, unit.sum=is.na(alpha.meta))
...: further arguments (not applicable)
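A minimal sketch of the hold-out split described for foldid.ext, assuming simulated Gaussian data (the sample size, number of covariates and split proportions below are illustrative, not part of the package):

library(starnet)
set.seed(1)
n <- 100; p <- 20
X <- matrix(rnorm(n * p), nrow = n, ncol = p)     # n samples, p covariates
y <- rnorm(n)                                     # Gaussian response
split <- sample(rep(c(0, 1), times = c(70, 30)))  # 0 = training, 1 = testing sample
loss <- cv.starnet(y = y, X = X, family = "gaussian",
                   foldid.ext = split,            # single split instead of external CV
                   nzero = c(5, 10))              # post hoc selection of at most 5 or 10 coefficients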
List containing the cross-validated loss (or out-of-sample loss if nfolds.ext equals two and foldid.ext contains zeros and ones). The slot meta contains the loss from the stacked elastic net (stack), the tuned elastic net (tune), ridge, lasso, and the intercept-only model (none). The slot base contains the loss from the base learners. The slot extra contains the loss from the restricted stacked elastic net (stack), lasso, and lasso-like elastic net (enet), with the maximum number of non-zero coefficients shown in the column name.
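Assuming a fitted object loss as returned above, the slots could be inspected like this (only the slot names meta, base and extra are taken from the description; the exact internal structure is not asserted here):

loss$meta   # stacked (stack) and tuned (tune) elastic net, ridge, lasso, intercept-only model (none)
loss$base   # base learners (presumably one elastic net per alpha value)
loss$extra  # restricted stack, lasso and lasso-like enet; columns named by maximum number of non-zero coefficients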
# \donttest{
loss <- cv.starnet(y=y, X=X)
# }
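The documented example assumes y and X already exist in the workspace. A self-contained variant under assumed simulated data (seed and dimensions are illustrative) might look like:

library(starnet)
set.seed(42)
n <- 50; p <- 10
X <- matrix(rnorm(n * p), nrow = n)
y <- X[, 1] + rnorm(n)            # response driven by the first covariate
loss <- cv.starnet(y = y, X = X)  # defaults: nested CV with nfolds.ext = 10, nfolds.int = 10
names(loss)                       # expected to include "meta", "base" and "extra" (see above)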