{"flow":{"id":"18701","uploader":"18886","name":"sklearn.model_selection._search.RandomizedSearchCV(estimator=sklearn.linear_model._logistic.LogisticRegression)","custom_name":"sklearn.RandomizedSearchCV(LogisticRegression)","class_name":"sklearn.model_selection._search.RandomizedSearchCV","version":"1","external_version":"openml==0.10.2,sklearn==0.23.2","description":"Randomized search on hyper parameters.\n\nRandomizedSearchCV implements a \"fit\" and a \"score\" method.\nIt also implements \"predict\", \"predict_proba\", \"decision_function\",\n\"transform\" and \"inverse_transform\" if they are implemented in the\nestimator used.\n\nThe parameters of the estimator used to apply these methods are optimized\nby cross-validated search over parameter settings.\n\nIn contrast to GridSearchCV, not all parameter values are tried out, but\nrather a fixed number of parameter settings is sampled from the specified\ndistributions. The number of parameter settings that are tried is\ngiven by n_iter.\n\nIf all parameters are presented as a list,\nsampling without replacement is performed. If at least one parameter\nis given as a distribution, sampling with replacement is used.\nIt is highly recommended to use continuous distributions for continuous\nparameters.","upload_date":"2020-10-21T01:04:20","language":"English","dependencies":"sklearn==0.23.2\nnumpy>=1.6.1\nscipy>=0.9","parameter":[{"name":"cv","data_type":"int","default_value":"null","description":"Determines the cross-validation splitting strategy\n Possible inputs for cv are:\n\n - None, to use the default 5-fold cross validation,\n - integer, to specify the number of folds in a `(Stratified)KFold`,\n - :term:`CV splitter`,\n - An iterable yielding (train, test) splits as arrays of indices\n\n For integer\/None inputs, if the estimator is a classifier and ``y`` is\n either binary or multiclass, :class:`StratifiedKFold` is used. 
In all\n other cases, :class:`KFold` is used\n\n Refer :ref:`User Guide ` for the various\n cross-validation strategies that can be used here\n\n .. versionchanged:: 0.22\n ``cv`` default value if None changed from 3-fold to 5-fold"},{"name":"error_score","data_type":"'raise' or numeric","default_value":"NaN","description":"Value to assign to the score if an error occurs in estimator fitting\n If set to 'raise', the error is raised. If a numeric value is given,\n FitFailedWarning is raised. This parameter does not affect the refit\n step, which will always raise the error"},{"name":"estimator","data_type":"estimator object","default_value":"{\"oml-python:serialized_object\": \"component_reference\", \"value\": {\"key\": \"estimator\", \"step_name\": null}}","description":"An object of that type is instantiated for each grid point\n This is assumed to implement the scikit-learn estimator interface\n Either estimator needs to provide a ``score`` function,\n or ``scoring`` must be passed"},{"name":"iid","data_type":"bool","default_value":"\"deprecated\"","description":"If True, return the average score across folds, weighted by the number\n of samples in each test set. In this case, the data is assumed to be\n identically distributed across the folds, and the loss minimized is\n the total loss per sample, and not the mean loss across the folds\n\n .. deprecated:: 0.22\n Parameter ``iid`` is deprecated in 0.22 and will be removed in 0.24"},{"name":"n_iter","data_type":"int","default_value":"10","description":"Number of parameter settings that are sampled. n_iter trades\n off runtime vs quality of the solution"},{"name":"n_jobs","data_type":"int","default_value":"-1","description":"Number of jobs to run in parallel\n ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context\n ``-1`` means using all processors. See :term:`Glossary `\n for more details\n\n .. 
versionchanged:: v0.20\n `n_jobs` default changed from 1 to None"},{"name":"param_distributions","data_type":"dict or list of dicts","default_value":"{\"C\": {\"oml-python:serialized_object\": \"rv_frozen\", \"value\": {\"dist\": \"scipy.stats._continuous_distns.uniform_gen\", \"a\": 0.0, \"b\": 1.0, \"args\": [1, 100], \"kwds\": {}}}, \"penalty\": [\"l1\", \"l2\", \"elasticnet\"]}","description":"Dictionary with parameter names (`str`) as keys and distributions\n or lists of parameters to try. Distributions must provide a ``rvs``\n method for sampling (such as those from scipy.stats.distributions)\n If a list is given, it is sampled uniformly\n If a list of dicts is given, first a dict is sampled uniformly, and\n then a parameter is sampled using that dict as above"},{"name":"pre_dispatch","data_type":"int, or str","default_value":"\"2*n_jobs\"","description":"Controls the number of jobs that get dispatched during parallel\n execution. Reducing this number can be useful to avoid an\n explosion of memory consumption when more jobs get dispatched\n than CPUs can process. This parameter can be:\n\n - None, in which case all the jobs are immediately\n created and spawned. 
Use this for lightweight and\n fast-running jobs, to avoid delays due to on-demand\n spawning of the jobs\n\n - An int, giving the exact number of total jobs that are\n spawned\n\n - A str, giving an expression as a function of n_jobs,\n as in '2*n_jobs'"},{"name":"random_state","data_type":"int or RandomState instance","default_value":"0","description":"Pseudo random number generator state used for random uniform sampling\n from lists of possible values instead of scipy.stats distributions\n Pass an int for reproducible output across multiple\n function calls\n See :term:`Glossary `"},{"name":"refit","data_type":"bool","default_value":"true","description":"Refit an estimator using the best found parameters on the whole\n dataset\n\n For multiple metric evaluation, this needs to be a `str` denoting the\n scorer that would be used to find the best parameters for refitting\n the estimator at the end\n\n Where there are considerations other than maximum score in\n choosing a best estimator, ``refit`` can be set to a function which\n returns the selected ``best_index_`` given the ``cv_results``. 
In that\n case, the ``best_estimator_`` and ``best_params_`` will be set\n according to the returned ``best_index_`` while the ``best_score_``\n attribute will not be available\n\n The refitted estimator is made available at the ``best_estimator_``\n attribute and permits using ``predict`` directly on this\n ``RandomizedSearchCV`` instance\n\n Also for multiple metric evaluation, the attributes ``best_index_``,\n ``best_score_`` and ``best_params_`` will only be available if\n ``refit`` is set and all of them will be determined w.r.t this speci..."},{"name":"return_train_score","data_type":"bool","default_value":"false","description":"If ``False``, the ``cv_results_`` attribute will not include training\n scores\n Computing training scores is used to get insights on how different\n parameter settings impact the overfitting\/underfitting trade-off\n However computing the scores on the training set can be computationally\n expensive and is not strictly required to select the parameters that\n yield the best generalization performance\n\n .. versionadded:: 0.19\n\n .. versionchanged:: 0.21\n Default value was changed from ``True`` to ``False``"},{"name":"scoring","data_type":"str","default_value":"null","description":"A single str (see :ref:`scoring_parameter`) or a callable\n (see :ref:`scoring`) to evaluate the predictions on the test set\n\n For evaluating multiple metrics, either give a list of (unique) strings\n or a dict with names as keys and callables as values\n\n NOTE that when using custom scorers, each scorer should return a single\n value. 
Metric functions returning a list\/array of values can be wrapped\n into multiple scorers that return one value each\n\n See :ref:`multimetric_grid_search` for an example\n\n If None, the estimator's score method is used"},{"name":"verbose","data_type":"integer","default_value":"0","description":"Controls the verbosity: the higher, the more messages"}],"component":{"identifier":"estimator","flow":{"id":"18686","uploader":"11601","name":"sklearn.linear_model._logistic.LogisticRegression","custom_name":"sklearn.LogisticRegression","class_name":"sklearn.linear_model._logistic.LogisticRegression","version":"4","external_version":"openml==0.10.2,sklearn==0.23.2","description":"Logistic Regression (aka logit, MaxEnt) classifier.\n\nIn the multiclass case, the training algorithm uses the one-vs-rest (OvR)\nscheme if the 'multi_class' option is set to 'ovr', and uses the\ncross-entropy loss if the 'multi_class' option is set to 'multinomial'.\n(Currently the 'multinomial' option is supported only by the 'lbfgs',\n'sag', 'saga' and 'newton-cg' solvers.)\n\nThis class implements regularized logistic regression using the\n'liblinear' library, 'newton-cg', 'sag', 'saga' and 'lbfgs' solvers. **Note\nthat regularization is applied by default**. It can handle both dense\nand sparse input. Use C-ordered arrays or CSR matrices containing 64-bit\nfloats for optimal performance; any other input format will be converted\n(and copied).\n\nThe 'newton-cg', 'sag', and 'lbfgs' solvers support only L2 regularization\nwith primal formulation, or no regularization. The 'liblinear' solver\nsupports both L1 and L2 regularization, with a dual formulation only for\nthe L2 penalty. 
The Elastic-Net regularization is only su...","upload_date":"2020-09-18T03:11:00","language":"English","dependencies":"sklearn==0.23.2\nnumpy>=1.6.1\nscipy>=0.9","parameter":[{"name":"C","data_type":"float","default_value":"1.0","description":"Inverse of regularization strength; must be a positive float\n Like in support vector machines, smaller values specify stronger\n regularization"},{"name":"class_weight","data_type":"dict or 'balanced'","default_value":"null","description":"Weights associated with classes in the form ``{class_label: weight}``\n If not given, all classes are supposed to have weight one\n\n The \"balanced\" mode uses the values of y to automatically adjust\n weights inversely proportional to class frequencies in the input data\n as ``n_samples \/ (n_classes * np.bincount(y))``\n\n Note that these weights will be multiplied with sample_weight (passed\n through the fit method) if sample_weight is specified\n\n .. versionadded:: 0.17\n *class_weight='balanced'*"},{"name":"dual","data_type":"bool","default_value":"false","description":"Dual or primal formulation. Dual formulation is only implemented for\n l2 penalty with liblinear solver. Prefer dual=False when\n n_samples > n_features"},{"name":"fit_intercept","data_type":"bool","default_value":"true","description":"Specifies if a constant (a.k.a. bias or intercept) should be\n added to the decision function"},{"name":"intercept_scaling","data_type":"float","default_value":"1","description":"Useful only when the solver 'liblinear' is used\n and self.fit_intercept is set to True. In this case, x becomes\n [x, self.intercept_scaling],\n i.e. a \"synthetic\" feature with constant value equal to\n intercept_scaling is appended to the instance vector\n The intercept becomes ``intercept_scaling * synthetic_feature_weight``\n\n Note! 
the synthetic feature weight is subject to l1\/l2 regularization\n as all other features\n To lessen the effect of regularization on synthetic feature weight\n (and therefore on the intercept) intercept_scaling has to be increased"},{"name":"l1_ratio","data_type":"float","default_value":"null","description":"The Elastic-Net mixing parameter, with ``0 <= l1_ratio <= 1``. Only\n used if ``penalty='elasticnet'``. Setting ``l1_ratio=0`` is equivalent\n to using ``penalty='l2'``, while setting ``l1_ratio=1`` is equivalent\n to using ``penalty='l1'``. For ``0 < l1_ratio < 1``, the penalty is a\n combination of L1 and L2."},{"name":"max_iter","data_type":"int","default_value":"100","description":"Maximum number of iterations taken for the solvers to converge"},{"name":"multi_class","data_type":"{'auto', 'ovr', 'multinomial'}","default_value":"\"auto\"","description":"If the option chosen is 'ovr', then a binary problem is fit for each\n label. For 'multinomial' the loss minimised is the multinomial loss fit\n across the entire probability distribution, *even when the data is\n binary*. 'multinomial' is unavailable when solver='liblinear'\n 'auto' selects 'ovr' if the data is binary, or if solver='liblinear',\n and otherwise selects 'multinomial'\n\n .. versionadded:: 0.18\n Stochastic Average Gradient descent solver for 'multinomial' case\n .. versionchanged:: 0.22\n Default changed from 'ovr' to 'auto' in 0.22"},{"name":"n_jobs","data_type":"int","default_value":"null","description":"Number of CPU cores used when parallelizing over classes if\n multi_class='ovr'. This parameter is ignored when the ``solver`` is\n set to 'liblinear' regardless of whether 'multi_class' is specified or\n not. ``None`` means 1 unless in a :obj:`joblib.parallel_backend`\n context. 
``-1`` means using all processors\n See :term:`Glossary ` for more details"},{"name":"penalty","data_type":[],"default_value":"\"l2\"","description":[]},{"name":"random_state","data_type":"int or RandomState instance","default_value":"null","description":"Used when ``solver`` == 'sag', 'saga' or 'liblinear' to shuffle the\n data. See :term:`Glossary ` for details"},{"name":"solver","data_type":"{'newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'}","default_value":"\"lbfgs\"","description":"Algorithm to use in the optimization problem\n\n - For small datasets, 'liblinear' is a good choice, whereas 'sag' and\n 'saga' are faster for large ones\n - For multiclass problems, only 'newton-cg', 'sag', 'saga' and 'lbfgs'\n handle multinomial loss; 'liblinear' is limited to one-versus-rest\n schemes\n - 'newton-cg', 'lbfgs', 'sag' and 'saga' handle L2 or no penalty\n - 'liblinear' and 'saga' also handle L1 penalty\n - 'saga' also supports 'elasticnet' penalty\n - 'liblinear' does not support setting ``penalty='none'``\n\n Note that 'sag' and 'saga' fast convergence is only guaranteed on\n features with approximately the same scale. You can\n preprocess the data with a scaler from sklearn.preprocessing\n\n .. versionadded:: 0.17\n Stochastic Average Gr..."},{"name":"tol","data_type":"float","default_value":"0.0001","description":"Tolerance for stopping criteria"},{"name":"verbose","data_type":"int","default_value":"0","description":"For the liblinear and lbfgs solvers set verbose to any positive\n number for verbosity"},{"name":"warm_start","data_type":"bool","default_value":"false","description":"When set to True, reuse the solution of the previous call to fit as\n initialization, otherwise, just erase the previous solution\n Useless for liblinear solver. See :term:`the Glossary `\n\n .. 
versionadded:: 0.17\n *warm_start* to support *lbfgs*, *newton-cg*, *sag*, *saga* solvers"}],"tag":["openml-python","python","scikit-learn","sklearn","sklearn_0.23.2"]}},"tag":["openml-python","python","scikit-learn","sklearn","sklearn_0.23.2"]}}
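The flow above serializes a `RandomizedSearchCV` whose `param_distributions` combine a frozen scipy distribution (`C`) with a plain list (`penalty`). A minimal reconstruction sketch in Python, assuming a scikit-learn 0.23-era API and that the serialized `rv_frozen` with `args: [1, 100]` corresponds to `uniform(loc=1, scale=100)`:

```python
# Reconstruction sketch of the flow's search object (assumption: the
# serialized rv_frozen with args [1, 100] maps to uniform(loc=1, scale=100)).
from scipy.stats import uniform
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

# param_distributions as given in the flow's default_value
param_distributions = {
    "C": uniform(1, 100),                    # distribution: sampled with replacement
    "penalty": ["l1", "l2", "elasticnet"],   # list: sampled uniformly
}

search = RandomizedSearchCV(
    estimator=LogisticRegression(),
    param_distributions=param_distributions,
    n_iter=10,        # flow default: 10 sampled settings
    n_jobs=-1,        # flow default: use all processors
    random_state=0,   # flow default: reproducible sampling
)
```

Note that with the flow's defaults the inner `LogisticRegression` keeps `solver='lbfgs'`, which rejects the `'l1'` and `'elasticnet'` penalties; because `error_score` defaults to NaN, those sampled candidates are recorded as NaN with a `FitFailedWarning` rather than aborting the search.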