17982
12269
sklearn.pipeline.Pipeline(step_0=sklearn.feature_selection._variance_threshold.VarianceThreshold,step_1=sklearn.naive_bayes.BernoulliNB)
sklearn.Pipeline(VarianceThreshold,BernoulliNB)
sklearn.pipeline.Pipeline
1
openml==0.10.2,sklearn==0.22.1
Pipeline of transforms with a final estimator.
Sequentially apply a list of transforms and a final estimator.
Intermediate steps of the pipeline must be 'transforms', that is, they
must implement fit and transform methods.
The final estimator only needs to implement fit.
The transformers in the pipeline can be cached using ``memory`` argument.
The purpose of the pipeline is to assemble several steps that can be
cross-validated together while setting different parameters.
For this, it enables setting parameters of the various steps using their
names and the parameter name separated by a '__', as in the example below.
A step's estimator may be replaced entirely by setting the parameter
with its name to another estimator, or a transformer removed by setting
it to 'passthrough' or ``None``.
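The ``__`` parameter convention and step replacement described above can be sketched as follows, reconstructing this flow's two steps (``step_0``/``step_1``) and its recorded hyperparameter values with the standard scikit-learn API:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import VarianceThreshold
from sklearn.naive_bayes import BernoulliNB

# Rebuild the flow's structure: VarianceThreshold followed by BernoulliNB.
pipe = Pipeline(steps=[
    ("step_0", VarianceThreshold()),
    ("step_1", BernoulliNB()),
])

# Parameters of individual steps are addressed as <step name>__<parameter>,
# using the hyperparameter values recorded in this flow.
pipe.set_params(step_0__threshold=0.0, step_1__alpha=45.72041457701043)

# A step can also be disabled entirely by setting it to 'passthrough'.
pipe.set_params(step_0="passthrough")
```

This is how tools such as GridSearchCV address nested parameters when cross-validating the whole pipeline.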
2020-05-19T07:21:44
English
sklearn==0.22.1
numpy>=1.6.1
scipy>=0.9
memory
None
null
Used to cache the fitted transformers of the pipeline. By default,
no caching is performed. If a string is given, it is the path to
the caching directory. Enabling caching triggers a clone of
the transformers before fitting. Therefore, the transformer
instance given to the pipeline cannot be inspected
directly. Use the attribute ``named_steps`` or ``steps`` to
inspect estimators within the pipeline. Caching the
transformers is advantageous when fitting is time consuming.
steps
list
[{"oml-python:serialized_object": "component_reference", "value": {"key": "step_0", "step_name": "step_0"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "step_1", "step_name": "step_1"}}]
List of (name, transform) tuples (implementing fit/transform) that are
chained, in the order in which they are chained, with the last object
an estimator.
verbose
bool
false
If True, the time elapsed while fitting each step will be printed as it
is completed.
step_1
17698
12269
sklearn.naive_bayes.BernoulliNB
sklearn.BernoulliNB
sklearn.naive_bayes.BernoulliNB
11
openml==0.10.2,sklearn==0.22.1
Naive Bayes classifier for multivariate Bernoulli models.
Like MultinomialNB, this classifier is suitable for discrete data. The
difference is that while MultinomialNB works with occurrence counts,
BernoulliNB is designed for binary/boolean features.
2020-05-18T19:37:55
English
sklearn==0.22.1
numpy>=1.6.1
scipy>=0.9
alpha
float
45.72041457701043
Additive (Laplace/Lidstone) smoothing parameter
(0 for no smoothing).
binarize
float or None
0.0
Threshold for binarizing (mapping to booleans) of sample features.
If None, input is presumed to already consist of binary vectors.
class_prior
array
null
Prior probabilities of the classes. If specified, the priors are not
adjusted according to the data.
fit_prior
bool
true
Whether to learn class prior probabilities or not.
If false, a uniform prior will be used.
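A minimal sketch of how ``binarize`` interacts with continuous inputs (the data here is invented for illustration, and ``alpha=1.0`` is used instead of the flow's recorded value of ~45.7 to keep the toy example readable):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Continuous features; with binarize=0.0, every value > 0.0 is mapped
# to 1 and the rest to 0 before the Bernoulli model is fit.
X = np.array([[0.0, 1.2],
              [0.9, 0.0],
              [0.0, 0.3],
              [1.5, 0.0]])
y = np.array([0, 1, 0, 1])

clf = BernoulliNB(alpha=1.0, binarize=0.0)
clf.fit(X, y)
pred = clf.predict(X)  # binarized rows are perfectly separable by class
```

Unlike MultinomialNB, only feature presence/absence after binarization matters here, not the magnitude of the counts.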
openml-python
python
scikit-learn
sklearn
sklearn_0.22.1
step_0
17744
12269
sklearn.feature_selection._variance_threshold.VarianceThreshold
sklearn.VarianceThreshold
sklearn.feature_selection._variance_threshold.VarianceThreshold
2
openml==0.10.2,sklearn==0.22.1
Feature selector that removes all low-variance features.
This feature selection algorithm looks only at the features (X), not the
desired outputs (y), and can thus be used for unsupervised learning.
2020-05-18T23:48:44
English
sklearn==0.22.1
numpy>=1.6.1
scipy>=0.9
threshold
float
0.0
Features with a training-set variance lower than this threshold will
be removed. The default is to keep all features with non-zero variance,
i.e. remove the features that have the same value in all samples.
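With the flow's recorded ``threshold=0.0``, only features that are constant across all samples are dropped. A small sketch on invented data:

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

# Column 0 is constant (zero variance); threshold=0.0 removes it
# and keeps the two columns that vary across samples.
X = np.array([[1, 2, 0],
              [1, 4, 1],
              [1, 6, 0]])

selector = VarianceThreshold(threshold=0.0)
X_reduced = selector.fit_transform(X)
```

Because only X is inspected (no y), this step works identically in supervised and unsupervised settings.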
openml-python
python
scikit-learn
sklearn
sklearn_0.22.1
openml-python
python
scikit-learn
sklearn
sklearn_0.22.1