17919
12269
sklearn.pipeline.Pipeline(step_0=sklearn.decomposition._fastica.FastICA,step_1=sklearn.naive_bayes.BernoulliNB)
sklearn.Pipeline(FastICA,BernoulliNB)
sklearn.pipeline.Pipeline
1
openml==0.10.2,sklearn==0.22.1
Pipeline of transforms with a final estimator.
Sequentially apply a list of transforms and a final estimator.
Intermediate steps of the pipeline must be 'transforms', that is, they
must implement fit and transform methods.
The final estimator only needs to implement fit.
The transformers in the pipeline can be cached using ``memory`` argument.
The purpose of the pipeline is to assemble several steps that can be
cross-validated together while setting different parameters.
For this, it enables setting parameters of the various steps using their
names and the parameter name separated by a '__', as in the example below.
A step's estimator may be replaced entirely by setting the parameter
with its name to another estimator, or a transformer removed by setting
it to 'passthrough' or ``None``.
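The `name__parameter` convention and step replacement described above can be sketched as follows (the step names `ica` and `nb` are chosen for illustration; they are not this flow's recorded step names, which are `step_0` and `step_1`):

```python
from sklearn.pipeline import Pipeline
from sklearn.decomposition import FastICA
from sklearn.naive_bayes import BernoulliNB

# Build the pipeline with named steps (names are illustrative).
pipe = Pipeline(steps=[("ica", FastICA()), ("nb", BernoulliNB())])

# Set nested parameters via the '<step name>__<parameter>' convention.
pipe.set_params(ica__n_components=4, nb__alpha=0.5)

# A step can be removed entirely by setting it to 'passthrough'.
pipe.set_params(ica="passthrough")
```

Setting a step to `'passthrough'` keeps its slot in the pipeline, so the `step__param` names of the remaining steps stay valid across cross-validation.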
2020-05-19T03:53:44
English
sklearn==0.22.1
numpy>=1.6.1
scipy>=0.9
memory
None
null
Used to cache the fitted transformers of the pipeline. By default,
no caching is performed. If a string is given, it is the path to
the caching directory. Enabling caching triggers a clone of
the transformers before fitting. Therefore, the transformer
instance given to the pipeline cannot be inspected
directly. Use the attribute ``named_steps`` or ``steps`` to
inspect estimators within the pipeline. Caching the
transformers is advantageous when fitting is time consuming.
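A minimal sketch of the caching behaviour described above (the cache directory is a temporary path created for illustration):

```python
import tempfile
from sklearn.pipeline import Pipeline
from sklearn.decomposition import FastICA
from sklearn.naive_bayes import BernoulliNB

# Any directory path string passed as `memory` enables transformer caching.
cachedir = tempfile.mkdtemp()
pipe = Pipeline(
    steps=[("ica", FastICA(n_components=2, random_state=0)),
           ("nb", BernoulliNB())],
    memory=cachedir,
)

# Because caching clones the transformers before fitting, inspect steps
# through `named_steps` or `steps` rather than the instances created above.
ica_step = pipe.named_steps["ica"]
```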
steps
list
[{"oml-python:serialized_object": "component_reference", "value": {"key": "step_0", "step_name": "step_0"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "step_1", "step_name": "step_1"}}]
List of (name, transform) tuples (implementing fit/transform) that are
chained, in the order in which they are chained, with the last object
an estimator.
verbose
bool
false
If True, the time elapsed while fitting each step will be printed as it
is completed.
step_1
17698
12269
sklearn.naive_bayes.BernoulliNB
sklearn.BernoulliNB
sklearn.naive_bayes.BernoulliNB
11
openml==0.10.2,sklearn==0.22.1
Naive Bayes classifier for multivariate Bernoulli models.
Like MultinomialNB, this classifier is suitable for discrete data. The
difference is that while MultinomialNB works with occurrence counts,
BernoulliNB is designed for binary/boolean features.
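A minimal sketch of the binary/boolean-feature setting described above (the toy data is invented for illustration; the flow's own `alpha` is the value recorded below):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Toy binary feature matrix: 4 samples, 3 boolean features.
X = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 0, 1],
              [0, 1, 0]])
y = np.array([1, 1, 0, 0])

clf = BernoulliNB(alpha=1.0)  # alpha: Laplace/Lidstone smoothing
clf.fit(X, y)
pred = clf.predict(np.array([[1, 0, 0]]))  # feature 0 is on only in class 1
```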
2020-05-18T19:37:55
English
sklearn==0.22.1
numpy>=1.6.1
scipy>=0.9
alpha
float
45.72041457701043
Additive (Laplace/Lidstone) smoothing parameter
(0 for no smoothing).
binarize
float or None
0.0
Threshold for binarizing (mapping to booleans) sample features.
If None, input is presumed to already consist of binary vectors.
class_prior
array
null
Prior probabilities of the classes. If specified, the priors are not
adjusted according to the data.
fit_prior
bool
true
Whether to learn class prior probabilities or not.
If false, a uniform prior will be used.
openml-python
python
scikit-learn
sklearn
sklearn_0.22.1
step_0
17728
12269
sklearn.decomposition._fastica.FastICA
sklearn.FastICA
sklearn.decomposition._fastica.FastICA
1
openml==0.10.2,sklearn==0.22.1
FastICA: a fast algorithm for Independent Component Analysis.
2020-05-18T23:43:57
English
sklearn==0.22.1
numpy>=1.6.1
scipy>=0.9
algorithm
{'parallel', 'deflation'}
"deflation"
Apply parallel or deflational algorithm for FastICA.
fun
string or function
"exp"
The functional form of the G function used in the
approximation to neg-entropy. Could be either 'logcosh', 'exp',
or 'cube'.
You can also provide your own function. It should return a tuple
containing the value of the function, and of its derivative, in the
point. Example:
    def my_g(x):
        return x ** 3, (3 * x ** 2).mean(axis=-1)
fun_args
dictionary
null
Arguments to send to the functional form.
If empty and if fun='logcosh', fun_args will take value
{'alpha' : 1.0}.
max_iter
int
297
Maximum number of iterations during fit.
n_components
int
4
Number of components to use. If None is passed, all are used.
random_state
int
42
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
tol
float
0.051729889556082674
Tolerance on update at each iteration.
w_init
None or an (n_components, n_components) ndarray
null
The mixing matrix to be used to initialize the algorithm.
whiten
boolean
false
If whiten is false, the data is already considered to be
whitened, and no whitening is performed.
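The FastICA settings recorded in this flow (deflation algorithm, fun='exp', 4 components, max_iter=297, random_state=42) can be sketched as below. The input data is random and invented for illustration, and default whitening is used so that `n_components` takes effect:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.RandomState(42)
X = rng.standard_normal((200, 6))  # toy data: 200 samples, 6 features

ica = FastICA(
    n_components=4,
    algorithm="deflation",
    fun="exp",
    max_iter=297,
    tol=0.05,
    random_state=42,
)
S = ica.fit_transform(X)  # estimated sources, shape (200, 4)
```

On pure Gaussian noise like this, ICA may emit a convergence warning; the transform is still returned with one column per component.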
openml-python
python
scikit-learn
sklearn
sklearn_0.22.1
openml-python
python
scikit-learn
sklearn
sklearn_0.22.1