17887
12269
sklearn.pipeline.Pipeline(step_0=automl.util.sklearn.StackingEstimator(estimator=sklearn.linear_model._stochastic_gradient.SGDClassifier),step_1=sklearn.ensemble._hist_gradient_boosting.gradient_boosting.HistGradientBoostingClassifier)
sklearn.Pipeline(StackingEstimator,HistGradientBoostingClassifier)
sklearn.pipeline.Pipeline
1
automl==0.0.1,openml==0.10.2,sklearn==0.22.1
Pipeline of transforms with a final estimator.
Sequentially apply a list of transforms and a final estimator.
Intermediate steps of the pipeline must be 'transforms', that is, they
must implement fit and transform methods.
The final estimator only needs to implement fit.
The transformers in the pipeline can be cached using ``memory`` argument.
The purpose of the pipeline is to assemble several steps that can be
cross-validated together while setting different parameters.
For this, it enables setting parameters of the various steps using their
names and the parameter name separated by a '__', as in the example below.
A step's estimator may be replaced entirely by setting the parameter
with its name to another estimator, or a transformer removed by setting
it to 'passthrough' or ``None``.
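A minimal usage sketch of the parameter-naming and 'passthrough' conventions described above (using a plain scaler and SGD classifier for illustration rather than the exact components of this flow)::

    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=200, random_state=42)

    # Steps are (name, estimator) tuples; the last step is the final estimator.
    pipe = Pipeline([("scale", StandardScaler()),
                     ("clf", SGDClassifier(random_state=42))])

    # A step's parameter is addressed as <step name>__<parameter name>.
    pipe.set_params(clf__alpha=0.001)

    # A transformer can be disabled by replacing it with 'passthrough'.
    pipe.set_params(scale="passthrough")

    pipe.fit(X, y)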
2020-05-19T03:36:21
English
sklearn==0.22.1
numpy>=1.6.1
scipy>=0.9
memory
None
null
Used to cache the fitted transformers of the pipeline. By default,
no caching is performed. If a string is given, it is the path to
the caching directory. Enabling caching triggers a clone of
the transformers before fitting. Therefore, the transformer
instance given to the pipeline cannot be inspected
directly. Use the attribute ``named_steps`` or ``steps`` to
inspect estimators within the pipeline. Caching the
transformers is advantageous when fitting is time consuming
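A small sketch of enabling the cache (the temporary directory below is an arbitrary choice of caching location)::

    from shutil import rmtree
    from tempfile import mkdtemp

    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    X, y = make_classification(n_samples=200, n_features=20, random_state=0)

    cache_dir = mkdtemp()  # any writable directory can serve as the cache
    pipe = Pipeline([("pca", PCA(n_components=5)),
                     ("clf", LogisticRegression())],
                    memory=cache_dir)
    pipe.fit(X, y)

    # The fitted (cloned) transformer must be inspected via named_steps/steps.
    print(pipe.named_steps["pca"].n_components_)

    rmtree(cache_dir)  # remove the cache when done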
steps
list
[{"oml-python:serialized_object": "component_reference", "value": {"key": "step_0", "step_name": "step_0"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "step_1", "step_name": "step_1"}}]
List of (name, transform) tuples (implementing fit/transform) that are
chained, in the order in which they are chained, with the last object
an estimator
verbose
bool
false
If True, the time elapsed while fitting each step will be printed as it
is completed.
step_1
17709
12269
sklearn.ensemble._hist_gradient_boosting.gradient_boosting.HistGradientBoostingClassifier
sklearn.HistGradientBoostingClassifier
sklearn.ensemble._hist_gradient_boosting.gradient_boosting.HistGradientBoostingClassifier
7
openml==0.10.2,sklearn==0.22.1
Histogram-based Gradient Boosting Classification Tree.
This estimator is much faster than
:class:`GradientBoostingClassifier<sklearn.ensemble.GradientBoostingClassifier>`
for big datasets (n_samples >= 10 000).
This estimator has native support for missing values (NaNs). During
training, the tree grower learns at each split point whether samples
with missing values should go to the left or right child, based on the
potential gain. When predicting, samples with missing values are
assigned to the left or right child consequently. If no missing values
were encountered for a given feature during training, then samples with
missing values are mapped to whichever child has the most samples.
This implementation is inspired by
`LightGBM <https://github.com/Microsoft/LightGBM>`_.
.. note::
This estimator is still **experimental** for now: the predictions
and the API might change without any deprecation cycle. To use it,
you need to explicitly import ``enable_hist_gradient_boosting``::
>>> # explicit...
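The docstring example is truncated above; a minimal sketch of the explicit import it refers to, as required in sklearn 0.22::

    # The experimental flag must be imported before the estimator itself.
    from sklearn.experimental import enable_hist_gradient_boosting  # noqa
    from sklearn.ensemble import HistGradientBoostingClassifier

    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=500, random_state=42)
    clf = HistGradientBoostingClassifier(random_state=42).fit(X, y)
    print(clf.score(X, y))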
2020-05-18T19:47:40
English
sklearn==0.22.1
numpy>=1.6.1
scipy>=0.9
l2_regularization
float
0.029157851614848844
The L2 regularization parameter. Use 0 for no regularization
learning_rate
float
0.0002615635618827854
The learning rate, also known as *shrinkage*. This is used as a
multiplicative factor for the leaves values. Use ``1`` for no
shrinkage
loss
"auto"
max_bins
int
219
The maximum number of bins to use for non-missing values. Before
training, each feature of the input array `X` is binned into
integer-valued bins, which allows for a much faster training stage
Features with a small number of unique values may use less than
``max_bins`` bins. In addition to the ``max_bins`` bins, one more bin
is always reserved for missing values. Must be no larger than 255
max_depth
int or None
9
The maximum depth of each tree. The depth of a tree is the number of
nodes to go from the root to the deepest leaf. Must be strictly greater
than 1. Depth isn't constrained by default
max_iter
int
938
The maximum number of iterations of the boosting process, i.e. the
maximum number of trees for binary classification. For multiclass
classification, `n_classes` trees per iteration are built
max_leaf_nodes
int or None
107
The maximum number of leaves for each tree. Must be strictly greater
than 1. If None, there is no maximum limit
min_samples_leaf
int
266
The minimum number of samples per leaf. For small datasets with less
than a few hundred samples, it is recommended to lower this value
since only very shallow trees would be built
n_iter_no_change
int or None
65
Used to determine when to "early stop". The fitting process is
stopped when none of the last ``n_iter_no_change`` scores are better
than the ``n_iter_no_change - 1`` -th-to-last one, up to some
tolerance. If None or 0, no early-stopping is done
random_state
int
42
Pseudo-random number generator to control the subsampling in the
binning process, and the train/validation data split if early stopping
is enabled. See :term:`random_state`.
scoring
str or callable or None
"neg_log_loss"
Scoring parameter to use for early stopping. It can be a single
string (see :ref:`scoring_parameter`) or a callable (see
:ref:`scoring`). If None, the estimator's default scorer
is used. If ``scoring='loss'``, early stopping is checked
w.r.t the loss value. Only used if ``n_iter_no_change`` is not None
tol
float or None
0.09169788469283188
The absolute tolerance to use when comparing scores. The higher the
tolerance, the more likely we are to early stop: higher tolerance
means that it will be harder for subsequent iterations to be
considered an improvement upon the reference score
validation_fraction
int or float or None
0.19574305926541347
Proportion (or absolute size) of training data to set aside as
validation data for early stopping. If None, early stopping is done on
the training data
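A sketch tying together the early-stopping parameters above (the values are illustrative, not the tuned settings of this flow)::

    from sklearn.experimental import enable_hist_gradient_boosting  # noqa
    from sklearn.ensemble import HistGradientBoostingClassifier
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=2000, random_state=0)

    clf = HistGradientBoostingClassifier(
        max_iter=500,
        n_iter_no_change=10,      # stop after 10 non-improving scores
        tol=1e-7,                 # absolute tolerance for score comparison
        scoring="loss",           # check early stopping against the loss
        validation_fraction=0.2,  # hold out 20% of the training data
        random_state=0,
    )
    clf.fit(X, y)
    print(clf.n_iter_)  # boosting iterations actually performed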
verbose
int
0
The verbosity level. If not zero, print some information about the
fitting process
warm_start
bool
false
When set to ``True``, reuse the solution of the previous call to fit
and add more estimators to the ensemble. For results to be valid, the
estimator should be re-trained on the same data only
See :term:`the Glossary <warm_start>`
openml-python
python
scikit-learn
sklearn
sklearn_0.22.1
step_0
17875
12269
automl.util.sklearn.StackingEstimator(estimator=sklearn.linear_model._stochastic_gradient.SGDClassifier)
automl.StackingEstimator
automl.util.sklearn.StackingEstimator
1
automl==0.0.1,openml==0.10.2,sklearn==0.22.1
StackingEstimator
A shallow wrapper around a classification algorithm to implement the transform method. Allows stacking of
arbitrary classification algorithms in a pipeline.
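The wrapper's implementation is not part of this flow description; a minimal sketch of what such a transform-enabling wrapper could look like (the class below is illustrative only, not the actual automl.util.sklearn.StackingEstimator)::

    import numpy as np
    from sklearn.base import BaseEstimator, TransformerMixin, clone

    class IllustrativeStackingEstimator(BaseEstimator, TransformerMixin):
        """Fit a classifier and expose its predictions as extra features,
        so the wrapped model can sit in the middle of a Pipeline."""

        def __init__(self, estimator):
            self.estimator = estimator

        def fit(self, X, y=None, **fit_params):
            self.estimator_ = clone(self.estimator).fit(X, y, **fit_params)
            return self

        def transform(self, X):
            # Append hard predictions (and probabilities, when available) to X.
            new_cols = [self.estimator_.predict(X).reshape(-1, 1)]
            if hasattr(self.estimator_, "predict_proba"):
                new_cols.append(self.estimator_.predict_proba(X))
            return np.hstack([np.asarray(X)] + new_cols)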
2020-05-19T03:30:09
English
sklearn==0.22.1
numpy>=1.6.1
scipy>=0.9
estimator
PredictionMixin
{"oml-python:serialized_object": "component_reference", "value": {"key": "estimator", "step_name": null}}
An instance implementing PredictionMixin
estimator
17703
12269
sklearn.linear_model._stochastic_gradient.SGDClassifier
sklearn.SGDClassifier
sklearn.linear_model._stochastic_gradient.SGDClassifier
2
openml==0.10.2,sklearn==0.22.1
Linear classifiers (SVM, logistic regression, a.o.) with SGD training.
This estimator implements regularized linear models with stochastic
gradient descent (SGD) learning: the gradient of the loss is estimated
each sample at a time and the model is updated along the way with a
decreasing strength schedule (aka learning rate). SGD allows minibatch
(online/out-of-core) learning, see the partial_fit method.
For best results using the default learning rate schedule, the data should
have zero mean and unit variance.
This implementation works with data represented as dense or sparse arrays
of floating point values for the features. The model it fits can be
controlled with the loss parameter; by default, it fits a linear support
vector machine (SVM).
The regularizer is a penalty added to the loss function that shrinks model
parameters towards the zero vector using either the squared euclidean norm
L2 or the absolute norm L1 or a combination of both (Elastic Net). If the
parameter update crosses the 0.0 value b...
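A brief usage sketch of the pattern the docstring describes: standardize the features (zero mean, unit variance), fit, and optionally continue learning out-of-core with partial_fit (the loss and penalty match this flow's settings; everything else is illustrative)::

    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=1000, random_state=0)
    X = StandardScaler().fit_transform(X)  # SGD works best on standardized data

    clf = SGDClassifier(loss="modified_huber", penalty="elasticnet",
                        random_state=0)
    clf.fit(X, y)

    # Online / out-of-core learning: update on one mini-batch at a time.
    clf.partial_fit(X[:100], y[:100])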
2020-05-18T19:46:26
English
sklearn==0.22.1
numpy>=1.6.1
scipy>=0.9
alpha
float
18.576489600940455
Constant that multiplies the regularization term. Defaults to 0.0001
Also used to compute learning_rate when set to 'optimal'
average
bool or int
false
When set to True, computes the averaged SGD weights and stores the
result in the ``coef_`` attribute. If set to an int greater than 1,
averaging will begin once the total number of samples seen reaches
average. So ``average=10`` will begin averaging after seeing 10
samples.
class_weight
dict
null
Preset for the class_weight fit parameter
Weights associated with classes. If not given, all classes
are supposed to have weight one
The "balanced" mode uses the values of y to automatically adjust
weights inversely proportional to class frequencies in the input data
as ``n_samples / (n_classes * np.bincount(y))``
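The "balanced" weights follow directly from that formula; a tiny sketch::

    import numpy as np

    y = np.array([0, 0, 0, 1])  # toy labels: class 0 is three times as frequent
    n_samples, n_classes = len(y), 2

    # n_samples / (n_classes * np.bincount(y))
    weights = n_samples / (n_classes * np.bincount(y))
    print(weights)  # [0.66666667 2.        ]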
early_stopping
bool
true
Whether to use early stopping to terminate training when validation
score is not improving. If set to True, it will automatically set aside
a stratified fraction of training data as validation and terminate
training when validation score is not improving by at least tol for
n_iter_no_change consecutive epochs
.. versionadded:: 0.20
epsilon
float
0.9292126060861614
Epsilon in the epsilon-insensitive loss functions; only if `loss` is
'huber', 'epsilon_insensitive', or 'squared_epsilon_insensitive'
For 'huber', determines the threshold at which it becomes less
important to get the prediction exactly right
For epsilon-insensitive, any differences between the current prediction
and the correct label are ignored if they are less than this threshold
eta0
double
0.8230008156002588
The initial learning rate for the 'constant', 'invscaling' or
'adaptive' schedules. The default value is 0.0 as eta0 is not used by
the default schedule 'optimal'
fit_intercept
bool
false
Whether the intercept should be estimated or not. If False, the
data is assumed to be already centered. Defaults to True
l1_ratio
float
0.3888555696189441
The Elastic Net mixing parameter, with 0 <= l1_ratio <= 1
l1_ratio=0 corresponds to L2 penalty, l1_ratio=1 to L1
Defaults to 0.15
learning_rate
str
"constant"
The learning rate schedule:
'constant':
eta = eta0
'optimal': [default]
eta = 1.0 / (alpha * (t + t0))
where t0 is chosen by a heuristic proposed by Leon Bottou
'invscaling':
eta = eta0 / pow(t, power_t)
'adaptive':
eta = eta0, as long as the training keeps decreasing
Each time n_iter_no_change consecutive epochs fail to decrease the
training loss by tol or fail to increase validation score by tol if
early_stopping is True, the current learning rate is divided by 5
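A tiny illustrative sketch of how the first three schedules scale the step size with iteration t (t0 for 'optimal' is chosen internally by a heuristic, so an arbitrary value stands in here)::

    def eta(schedule, t, eta0=0.01, alpha=0.0001, power_t=0.5, t0=1.0):
        # Step size at iteration t under the documented schedules.
        if schedule == "constant":
            return eta0
        if schedule == "optimal":
            return 1.0 / (alpha * (t + t0))  # t0 set internally by a heuristic
        if schedule == "invscaling":
            return eta0 / (t ** power_t)
        raise ValueError(schedule)

    print(eta("invscaling", t=100))  # 0.001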
loss
str
"modified_huber"
The loss function to be used. Defaults to 'hinge', which gives a
linear SVM
The possible options are 'hinge', 'log', 'modified_huber',
'squared_hinge', 'perceptron', or a regression loss: 'squared_loss',
'huber', 'epsilon_insensitive', or 'squared_epsilon_insensitive'
The 'log' loss gives logistic regression, a probabilistic classifier
'modified_huber' is another smooth loss that brings tolerance to
outliers as well as probability estimates
'squared_hinge' is like hinge but is quadratically penalized
'perceptron' is the linear loss used by the perceptron algorithm
The other losses are designed for regression but can be useful in
classification as well; see SGDRegressor for a description
max_iter
int
1657
The maximum number of passes over the training data (aka epochs)
It only impacts the behavior in the ``fit`` method, and not the
:meth:`partial_fit` method
.. versionadded:: 0.19
n_iter_no_change
int
44
Number of iterations with no improvement to wait before early stopping
.. versionadded:: 0.20
n_jobs
int
1
The number of CPUs to use to do the OVA (One Versus All, for
multi-class problems) computation
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context
``-1`` means using all processors. See :term:`Glossary <n_jobs>`
for more details
penalty
str
"elasticnet"
The penalty (aka regularization term) to be used. Defaults to 'l2'
which is the standard regularizer for linear SVM models. 'l1' and
'elasticnet' might bring sparsity to the model (feature selection)
not achieva...
power_t
double
0.5337479304733836
The exponent for inverse scaling learning rate [default 0.5]
random_state
int
42
The seed of the pseudo random number generator to use when shuffling
the data. If int, random_state is the seed used by the random number
generator; If RandomState instance, random_state is the random number
generator; If None, the random number generator is the RandomState
instance used by `np.random`
shuffle
bool
true
Whether or not the training data should be shuffled after each epoch
tol
float
0.0016762319789051909
The stopping criterion. If it is not None, the iterations will stop
when (loss > best_loss - tol) for ``n_iter_no_change`` consecutive
epochs
.. versionadded:: 0.19
validation_fraction
float
0.5789400632046087
The proportion of training data to set aside as validation set for
early stopping. Must be between 0 and 1
Only used if early_stopping is True
.. versionadded:: 0.20
verbose
int
0
The verbosity level
warm_start
bool
false
When set to True, reuse the solution of the previous call to fit as
initialization, otherwise, just erase the previous solution
See :term:`the Glossary <warm_start>`
Repeatedly calling fit or partial_fit when warm_start is True can
result in a different solution than when calling fit a single time
because of the way the data is shuffled
If a dynamic learning rate is used, the learning rate is adapted
depending on the number of samples already seen. Calling ``fit`` resets
this counter, while ``partial_fit`` will result in increasing the
existing counter
openml-python
python
scikit-learn
sklearn
sklearn_0.22.1
openml-python
python
scikit-learn
sklearn
sklearn_0.22.1
openml-python
python
scikit-learn
sklearn
sklearn_0.22.1