This article explains in detail how the accuracy score of scikit-learn's cross_val_predict is computed, with particular attention to the accuracy reported by clf.score. Through worked examples and data analysis, it also covers the difference between cross_val_score and cross_val_predict, a detailed guide to the code and usage of sklearn's make_pipeline, RobustScaler, KFold, and cross_val_score functions, whether pymc3 has an equivalent of scikit-learn's predict method, and the difference between r2_score and explained_variance_score in sklearn.metrics.
Contents of this article:
- How is the accuracy score of scikit-learn cross_val_predict computed? (clf.score accuracy)
- The difference between cross_val_score and cross_val_predict
- ML with sklearn: a detailed guide to the code and usage of the make_pipeline, RobustScaler, KFold, and cross_val_score functions
- Does pymc3 have an equivalent of the predict method in scikit-learn?
- Python scikit-learn (metrics): difference between r2_score and explained_variance_score?
How is the accuracy score of scikit-learn cross_val_predict computed? (clf.score accuracy)
As shown in the code below, I am using cross_val_predict with the k-fold method (see the docs, v0.18). Does it compute the accuracy for each fold and then average them?

cv = KFold(n_splits=20)
clf = SVC()
ypred = cross_val_predict(clf, td, labels, cv=cv)
accuracy = accuracy_score(labels, ypred)
print(accuracy)
Answer 1

No, it does not!
According to the cross-validation documentation page, cross_val_predict does not return any score, but only the labels obtained under a certain strategy, as described there:

The function cross_val_predict has a similar interface to cross_val_score, but for each element in the input it returns the prediction that was obtained for that element when it was in the test set. Only cross-validation strategies that assign all elements to a test set exactly once can be used (otherwise an exception is raised).
So by calling accuracy_score(labels, ypred) you are simply computing the accuracy of the labels predicted by that particular strategy against the true labels. Again, the same documentation page states:
These predictions can then be used to evaluate the classifier:

predicted = cross_val_predict(clf, iris.data, iris.target, cv=10)
metrics.accuracy_score(iris.target, predicted)
Note that the result of this computation may differ slightly from the result obtained with cross_val_score, because the elements are grouped in different ways.

If you want accuracy scores for the individual folds, you should try:
>>> scores = cross_val_score(clf, X, y, cv=cv)
>>> scores
array([ 0.96...,  1.  ...,  0.96...,  0.96...,  1.        ])
Then, for the mean accuracy over all folds, use scores.mean():

>>> print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
Accuracy: 0.98 (+/- 0.03)
How can I compute the Cohen kappa coefficient and the confusion matrix for each fold?

For the Cohen kappa coefficient and the confusion matrix, I assume you mean the kappa coefficient and the confusion matrix between the true labels and the labels predicted for each fold:
from sklearn.model_selection import KFold
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score, confusion_matrix

cv = KFold(n_splits=20)
clf = SVC()
for train_index, test_index in cv.split(X):
    clf.fit(X[train_index], labels[train_index])
    ypred = clf.predict(X[test_index])
    kappa_score = cohen_kappa_score(labels[test_index], ypred)
    fold_cm = confusion_matrix(labels[test_index], ypred)  # renamed so it does not shadow the imported function
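If you also want a single summary over all folds, one option (an addition, not part of the original answer) is to average the kappa scores and sum the per-fold confusion matrices, since confusion-matrix counts are additive over disjoint test sets; passing labels= keeps every fold's matrix the same shape:

import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score, confusion_matrix

cv = KFold(n_splits=20)
clf = SVC()
kappas, cms = [], []
for train_index, test_index in cv.split(X):
    clf.fit(X[train_index], labels[train_index])
    ypred = clf.predict(X[test_index])
    kappas.append(cohen_kappa_score(labels[test_index], ypred))
    # fix the label set so every fold produces a matrix of identical shape
    cms.append(confusion_matrix(labels[test_index], ypred, labels=np.unique(labels)))

print("mean kappa:", np.mean(kappas))
print("summed confusion matrix:\n", np.sum(cms, axis=0))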
What does cross_val_predict return?

It uses KFold to split the data into k parts and then, for i = 1..k, it:
- takes the i-th part as the test data and all the other parts as the training data
- trains the model on the training data (all parts except the i-th)
- uses the trained model to predict labels for the i-th part (the test data)

In each iteration, the labels of the i-th part of the data are predicted. In the end, cross_val_predict merges all the partial predictions and returns them as the final result.
This code shows the process step by step:
import numpy as np
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.svm import SVC

X = np.array([[0], [1], [2], [3], [4], [5]])
labels = np.array(['a', 'a', 'a', 'b', 'b', 'b'])
cv = KFold(n_splits=3)
clf = SVC()
ypred_all = np.chararray((labels.shape))
i = 1
for train_index, test_index in cv.split(X):
    print("iteration", i, ":")
    print("train indices:", train_index)
    print("train data:", X[train_index])
    print("test indices:", test_index)
    print("test data:", X[test_index])
    clf.fit(X[train_index], labels[train_index])
    ypred = clf.predict(X[test_index])
    print("predicted labels for data of indices", test_index, "are:", ypred)
    ypred_all[test_index] = ypred
    print("merged predicted labels:", ypred_all)
    i = i + 1
    print("=====================================")
y_cross_val_predict = cross_val_predict(clf, X, labels, cv=cv)
print("predicted labels by cross_val_predict:", y_cross_val_predict)
The result is:
iteration 1 :
train indices: [2 3 4 5]
train data: [[2] [3] [4] [5]]
test indices: [0 1]
test data: [[0] [1]]
predicted labels for data of indices [0 1] are: ['b' 'b']
merged predicted labels: ['b' 'b' '' '' '' '']
=====================================
iteration 2 :
train indices: [0 1 4 5]
train data: [[0] [1] [4] [5]]
test indices: [2 3]
test data: [[2] [3]]
predicted labels for data of indices [2 3] are: ['a' 'b']
merged predicted labels: ['b' 'b' 'a' 'b' '' '']
=====================================
iteration 3 :
train indices: [0 1 2 3]
train data: [[0] [1] [2] [3]]
test indices: [4 5]
test data: [[4] [5]]
predicted labels for data of indices [4 5] are: ['a' 'a']
merged predicted labels: ['b' 'b' 'a' 'b' 'a' 'a']
=====================================
predicted labels by cross_val_predict: ['b' 'b' 'a' 'b' 'a' 'a']
The difference between cross_val_score and cross_val_predict
I want to evaluate a regression model built with scikit-learn using cross-validation, and I am confused about which of the two functions, cross_val_score and cross_val_predict, I should use. One option is:

cvs = DecisionTreeRegressor(max_depth=depth)
scores = cross_val_score(cvs, predictors, target, cv=cvfolds, scoring='r2')
print("R2-Score: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
The other is to use the cv predictions with the standard r2_score:

cvp = DecisionTreeRegressor(max_depth=depth)
predictions = cross_val_predict(cvp, predictors, target, cv=cvfolds)
print("CV R^2-Score: {}".format(r2_score(target, predictions)))
I would assume that both methods are valid and give similar results, but that is only the case for small k. While the R² is roughly the same for 10-fold cv, for higher values of k it gets increasingly lower in the first version using cross_val_score. The second version is mostly unaffected by the changing number of folds.

Is this behavior to be expected, or am I missing something about CV in sklearn?
Answer 1

cross_val_score returns the score of the test folds, whereas cross_val_predict returns the predicted y values for the test folds.
With cross_val_score() you are taking the mean of the outputs, which can be affected by the number of folds, because some folds may have a high error (fail to fit correctly).

cross_val_predict(), on the other hand, returns, for each element in the input, the prediction that was obtained for that element when it was in the test set. [Note that only cross-validation strategies that assign all elements to a test set exactly once can be used.] So increasing the number of folds only increases the training data available for each test element, and hence its result may not be affected much.
Edit (after the comment)

Please have a look at the following answer on how cross_val_predict works:

How is the accuracy score of scikit-learn cross_val_predict computed?
I think cross_val_predict can be problematic here because, as the number of folds increases, more data is used for training and less for testing, so the resulting labels depend more heavily on the training data. Also, as mentioned above, the prediction for each sample is made only once, so it may be more susceptible to how the data happens to be split. That is why most places and tutorials recommend using cross_val_score for analysis.
ML with sklearn: a detailed guide to the code and usage of the make_pipeline, RobustScaler, KFold, and cross_val_score functions
Contents

make_pipeline: code explanation and usage
make_pipeline: code explanation
make_pipeline: usage
1. Using the Pipeline class to express a workflow that scales data with MinMaxScaler and then trains an SVM
2. Creating a pipeline with make_pipeline
RobustScaler: code explanation and usage
RobustScaler: code explanation
RobustScaler: usage
KFold: code explanation and usage
KFold: code explanation
KFold: usage
cross_val_score: code explanation and usage
cross_val_score: code explanation
Objects accepted by the scoring parameter
cross_val_score: usage
1. Regression example: the diabetes dataset
2. Classification example: the iris dataset

make_pipeline: code explanation and usage
To simplify the process of building chains of transformations and models, scikit-learn provides the Pipeline class, which merges multiple processing steps into a single scikit-learn estimator. The Pipeline class itself has fit, predict, and score methods and behaves like any other scikit-learn model.
make_pipeline: code explanation
def make_pipeline(*steps, **kwargs)

Construct a Pipeline from the given estimators. This is a shorthand for the Pipeline constructor; it does not require, and does not permit, naming the estimators. Instead, their names will be set to the lowercase of their types automatically.

Parameters
----------
*steps : list of estimators
memory : None, str or object with the joblib.Memory interface, optional
    Used to cache the fitted transformers of the pipeline. By default, no caching is performed. If a string is given, it is the path to the caching directory. Enabling caching triggers a clone of the transformers before fitting, so the transformer instances given to the pipeline cannot be inspected directly. Use the attributes named_steps or steps to inspect estimators within the pipeline. Caching the transformers is advantageous when fitting is time consuming.
make_pipeline: usage
Examples
--------
>>> from sklearn.naive_bayes import GaussianNB
>>> from sklearn.preprocessing import StandardScaler
>>> make_pipeline(StandardScaler(), GaussianNB(priors=None))
...     # doctest: +NORMALIZE_WHITESPACE
Pipeline(memory=None,
         steps=[('standardscaler',
                 StandardScaler(copy=True, with_mean=True, with_std=True)),
                ('gaussiannb', GaussianNB(priors=None))])

Returns
-------
p : Pipeline
1. Using the Pipeline class to express a workflow that scales data with MinMaxScaler and then trains an SVM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

pipe = Pipeline([("scaler", MinMaxScaler()), ("svm", SVC())])
pipe.fit(X_train, y_train)
pipe.score(X_test, y_test)
2. Creating a pipeline with make_pipeline

The syntax for building a pipeline with the Pipeline class is a little cumbersome, and we usually do not need user-specified names for every step. In that case we can use the make_pipeline function, which creates the pipeline for us and names each step automatically after its class.
from sklearn.pipeline import make_pipeline
pipe = make_pipeline(MinMaxScaler(), SVC())
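A quick way to see the automatic naming (a small sketch reusing the pipe object created above):

print(pipe.named_steps.keys())    # dict_keys(['minmaxscaler', 'svc'])
print(pipe.named_steps['svc'])    # each step is reachable under the lowercase name of its class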
References

"Introduction to Machine Learning with Python": building pipelines (make_pipeline)
Python sklearn.pipeline.make_pipeline() Examples
RobustScaler: code explanation and usage

RobustScaler: code explanation
class RobustScaler(BaseEstimator, TransformerMixin)

Scale features using statistics that are robust to outliers. This scaler removes the median and scales the data according to the quantile range (defaults to IQR: the interquartile range). Standardization of a dataset is a common requirement for many machine learning estimators; typically this is done by removing the mean and scaling to unit variance. However, outliers can often influence the sample mean and variance in a negative way. In such cases, the median and the interquartile range often give better results. (Added in version 0.17; read more in the User Guide under preprocessing_scaler.)

Parameters
----------
with_centering : boolean, True by default
with_scaling : boolean, True by default
quantile_range : tuple (q_min, q_max), 0.0 < q_min < q_max < 100.0 (added in version 0.18)
copy : boolean, optional, default True

Attributes
----------
scale_ : array of floats (added in version 0.17)

See also: sklearn.decomposition.PCA. Notes: https://en.wikipedia.org/wiki/Median_(statistics)

The class implements __init__, fit (which computes the median and the percentiles given by quantile_range along each column, e.g. q = np.percentile(X, self.quantile_range, axis=0)), transform, and inverse_transform; the latter two also accept sparse input, provided the scaler was fitted without centering.
RobustScaler: usage
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.linear_model import Lasso, ElasticNet

lasso = make_pipeline(RobustScaler(), Lasso(alpha=0.5, random_state=1))
ENet = make_pipeline(RobustScaler(), ElasticNet(alpha=0.5, l1_ratio=.9, random_state=3))
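To make the median/IQR behaviour concrete, here is a small sketch with made-up numbers: a single outlier barely moves RobustScaler's output, while it distorts the mean-based StandardScaler noticeably:

import numpy as np
from sklearn.preprocessing import RobustScaler, StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])   # 100.0 is an outlier

# (x - median) / IQR: median = 3.0, IQR = 4.0 - 2.0 = 2.0
print(RobustScaler().fit_transform(X).ravel())
# -> [-1.  -0.5  0.   0.5  48.5]; the four normal points keep their spacing

# (x - mean) / std: the outlier inflates both, squashing the normal points together
print(StandardScaler().fit_transform(X).ravel())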
KFold: code explanation and usage

KFold: code explanation
class KFold(_BaseKFold)    # found at sklearn.model_selection._split

K-Folds cross-validator. (A usage example follows in the next section.)

Notes
-----
The first n_samples % n_splits folds have size n_samples // n_splits + 1; the other folds have size n_samples // n_splits, where n_samples is the number of samples.

See also
--------
StratifiedKFold : takes group information into account to avoid building folds with imbalanced class distributions (for binary or multiclass classification tasks).
GroupKFold : K-fold iterator variant with non-overlapping groups.
RepeatedKFold : repeats K-Fold n times.

The implementation (simplified):

def __init__(self, n_splits=3, shuffle=False, random_state=None):
    super(KFold, self).__init__(n_splits, shuffle, random_state)

def _iter_test_indices(self, X, y=None, groups=None):
    n_samples = _num_samples(X)
    indices = np.arange(n_samples)
    if self.shuffle:
        check_random_state(self.random_state).shuffle(indices)
    n_splits = self.n_splits
    fold_sizes = (n_samples // n_splits) * np.ones(n_splits, dtype=np.int)
    fold_sizes[:n_samples % n_splits] += 1
    current = 0
    for fold_size in fold_sizes:
        start, stop = current, current + fold_size
        yield indices[start:stop]
        current = stop
KFold: usage
Examples
--------
>>> from sklearn.model_selection import KFold
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([1, 2, 3, 4])
>>> kf = KFold(n_splits=2)
>>> kf.get_n_splits(X)
2
>>> print(kf) # doctest: +NORMALIZE_WHITESPACE
KFold(n_splits=2, random_state=None, shuffle=False)
>>> for train_index, test_index in kf.split(X):
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
TRAIN: [2 3] TEST: [0 1]
TRAIN: [0 1] TEST: [2 3]
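The fold-size rule from the Notes above shows up as soon as the sample count does not divide evenly; a small sketch:

>>> import numpy as np
>>> from sklearn.model_selection import KFold
>>> X = np.arange(5).reshape(5, 1)    # 5 samples, 2 folds
>>> for train_index, test_index in KFold(n_splits=2).split(X):
...     print("TEST:", test_index)
TEST: [0 1 2]
TEST: [3 4]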
cross_val_score: code explanation and usage

cross_val_score: code explanation
def cross_val_score(estimator, X, y=None, groups=None, scoring=None, cv=None, n_jobs=1, verbose=0, fit_params=None, pre_dispatch='2*n_jobs')    # found at sklearn.model_selection._validation

Evaluate a score by cross-validation. Read more in the User Guide.
Parameters
----------
estimator : estimator object implementing 'fit'
    The object to use to fit the data.
X : array-like
    The data to fit. Can be for example a list, or an array.
y : array-like, optional, default: None
    The target variable to try to predict in the case of supervised learning.
groups : array-like, with shape (n_samples,), optional
    Group labels for the samples used while splitting the dataset into train/test set.
scoring : string, callable or None, optional, default: None
    A string (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y).
cv : int, cross-validation generator or an iterable, optional
    Determines the cross-validation splitting strategy. Possible inputs for cv are:
    - None, to use the default 3-fold cross validation,
    - integer, to specify the number of folds in a (Stratified)KFold,
    - an object to be used as a cross-validation generator,
    - an iterable yielding train, test splits.
    For integer/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here.
n_jobs : integer, optional
    The number of CPUs to use to do the computation. -1 means 'all CPUs'.
verbose : integer, optional
    The verbosity level.
fit_params : dict, optional
    Parameters to pass to the fit method of the estimator.
pre_dispatch : int, or string, optional
    Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be:
    - None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs.
    - An int, giving the exact number of total jobs that are spawned.
    - A string, giving an expression as a function of n_jobs, as in '2*n_jobs'.

Returns
-------
scores : array of float, shape=(len(list(cv)),)
    Array of scores of the estimator for each run of the cross validation.
Examples
--------
>>> from sklearn import datasets, linear_model
>>> from sklearn.model_selection import cross_val_score
>>> diabetes = datasets.load_diabetes()
>>> X = diabetes.data[:150]
>>> y = diabetes.target[:150]
>>> lasso = linear_model.Lasso()
>>> print(cross_val_score(lasso, X, y))  # doctest: +ELLIPSIS
[ 0.33150734  0.08022311  0.03531764]

See Also
--------
sklearn.model_selection.cross_validate : to run cross-validation on multiple metrics and also to return train scores, fit times and score times.
sklearn.metrics.make_scorer : make a scorer from a performance metric or loss function.

The implementation (simplified):

# To ensure multimetric format is not supported
scorer = check_scoring(estimator, scoring=scoring)
cv_results = cross_validate(estimator=estimator, X=X, y=y, groups=groups,
                            scoring={'score': scorer}, cv=cv,
                            return_train_score=False, n_jobs=n_jobs,
                            verbose=verbose, fit_params=fit_params,
                            pre_dispatch=pre_dispatch)
return cv_results['test_score']
Objects accepted by the scoring parameter
https://scikit-learn.org/stable/modules/model_evaluation.html#scoring-parameter
(The Function column below is restored from the scikit-learn documentation page linked above.)

Scoring | Function | Comment
---|---|---
Classification | |
‘accuracy’ | metrics.accuracy_score |
‘balanced_accuracy’ | metrics.balanced_accuracy_score |
‘average_precision’ | metrics.average_precision_score |
‘neg_brier_score’ | metrics.brier_score_loss |
‘f1’ | metrics.f1_score | for binary targets
‘f1_micro’ | metrics.f1_score | micro-averaged
‘f1_macro’ | metrics.f1_score | macro-averaged
‘f1_weighted’ | metrics.f1_score | weighted average
‘f1_samples’ | metrics.f1_score | by multilabel sample
‘neg_log_loss’ | metrics.log_loss | requires predict_proba support
‘precision’ etc. | metrics.precision_score | suffixes apply as with ‘f1’
‘recall’ etc. | metrics.recall_score | suffixes apply as with ‘f1’
‘jaccard’ etc. | metrics.jaccard_score | suffixes apply as with ‘f1’
‘roc_auc’ | metrics.roc_auc_score |
‘roc_auc_ovr’ | metrics.roc_auc_score |
‘roc_auc_ovo’ | metrics.roc_auc_score |
‘roc_auc_ovr_weighted’ | metrics.roc_auc_score |
‘roc_auc_ovo_weighted’ | metrics.roc_auc_score |
Clustering | |
‘adjusted_mutual_info_score’ | metrics.adjusted_mutual_info_score |
‘adjusted_rand_score’ | metrics.adjusted_rand_score |
‘completeness_score’ | metrics.completeness_score |
‘fowlkes_mallows_score’ | metrics.fowlkes_mallows_score |
‘homogeneity_score’ | metrics.homogeneity_score |
‘mutual_info_score’ | metrics.mutual_info_score |
‘normalized_mutual_info_score’ | metrics.normalized_mutual_info_score |
‘v_measure_score’ | metrics.v_measure_score |
Regression | |
‘explained_variance’ | metrics.explained_variance_score |
‘max_error’ | metrics.max_error |
‘neg_mean_absolute_error’ | metrics.mean_absolute_error |
‘neg_mean_squared_error’ | metrics.mean_squared_error |
‘neg_root_mean_squared_error’ | metrics.mean_squared_error |
‘neg_mean_squared_log_error’ | metrics.mean_squared_log_error |
‘neg_median_absolute_error’ | metrics.median_absolute_error |
‘r2’ | metrics.r2_score |
‘neg_mean_poisson_deviance’ | metrics.mean_poisson_deviance |
‘neg_mean_gamma_deviance’ | metrics.mean_gamma_deviance |
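As a short sketch of how these strings are used in practice, and how make_scorer turns a plain metric function into a scorer (the Ridge model here is an arbitrary choice for illustration):

from sklearn import datasets
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.metrics import make_scorer, mean_absolute_error

X, y = datasets.load_diabetes(return_X_y=True)
model = Ridge()

# built-in scoring string; the 'neg_' prefix makes larger values better
print(cross_val_score(model, X, y, cv=5, scoring='neg_mean_squared_error'))

# an equivalent custom scorer built from a plain metric function
mae_scorer = make_scorer(mean_absolute_error, greater_is_better=False)
print(cross_val_score(model, X, y, cv=5, scoring=mae_scorer))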
cross_val_score: usage

1. Regression example: the diabetes dataset
>>> from sklearn import datasets, linear_model
>>> from sklearn.model_selection import cross_val_score
>>> diabetes = datasets.load_diabetes()
>>> X = diabetes.data[:150]
>>> y = diabetes.target[:150]
>>> lasso = linear_model.Lasso()
>>> print(cross_val_score(lasso, X, y)) # doctest: +ELLIPSIS
[ 0.33150734 0.08022311 0.03531764]
2. Classification example: the iris dataset
from sklearn import datasets                        # built-in datasets
from sklearn.model_selection import train_test_split, cross_val_score  # data splitting and cross-validation
from sklearn.neighbors import KNeighborsClassifier  # a simple model with a single hyperparameter K, similar in spirit to K-means
import matplotlib.pyplot as plt

iris = datasets.load_iris()   # load the iris dataset that ships with sklearn
X = iris.data                 # the features
y = iris.target               # the label of each sample
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=1/3, random_state=3)  # hold out 1/3 of the data as the test set

k_range = range(1, 31)
cv_scores = []                # holds the result of each model
for n in k_range:
    knn = KNeighborsClassifier(n)  # a knn model; with a single hyperparameter we can search like this, with several use GridSearchCV instead
    scores = cross_val_score(knn, train_X, train_y, cv=10, scoring='accuracy')  # cv: number of folds; accuracy: the evaluation metric (may be omitted to use the default)
    cv_scores.append(scores.mean())

plt.plot(k_range, cv_scores)
plt.xlabel('K')
plt.ylabel('Accuracy')        # pick the best parameter from the plot
plt.show()

best_knn = KNeighborsClassifier(n_neighbors=3)  # pass the best K=3 into the model
best_knn.fit(train_X, train_y)                  # train the model
print(best_knn.score(test_X, test_y))           # evaluate it on the test set
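For the multi-hyperparameter case mentioned in the comments above, a sketch with GridSearchCV (the parameter grid is an arbitrary illustration) would look like this:

from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

param_grid = {'n_neighbors': range(1, 31), 'weights': ['uniform', 'distance']}
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=10, scoring='accuracy')
search.fit(train_X, train_y)          # runs 10-fold CV for every parameter combination
print(search.best_params_, search.best_score_)
print(search.score(test_X, test_y))   # the refitted best model on the held-out test set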
Does pymc3 have an equivalent of the predict method in scikit-learn?

I think I found the answer: use pm.Data().
import pymc3 as pm

with pm.Model() as m_adult:
    weight_s = pm.Data("weight_s", adults.weight_s.values)
    a = pm.Normal("α", mu=155, sd=20)
    b = pm.Lognormal("β", mu=0, sd=1)
    mu = pm.Deterministic("μ", a + b * weight_s)
    sigma = pm.Uniform("σ", 0, 50)  # assumed bounds; the original line passed only a single value (50)
    height = pm.Normal("height", mu=mu, sd=sigma, observed=adults.height)
    trace_adult = pm.sample()
Then, when we want predictions, we use pm.set_data():
missing_weights = np.array([45, 40, 65, 31, 53])
with m_adult:
    pm.set_data({"weight_s": standardize(missing_weights, adults.weight)})
    height_pred_data = pm.fast_sample_posterior_predictive(trace_adult)["height"]

missing_df = pd.DataFrame(missing_weights, columns=["weight"])
missing_df["expected_height"] = height_pred_data.mean(axis=0)
missing_df[["hdi_lower", "hdi_upper"]] = az.hdi(height_pred_data)
print(missing_df)
This gives:
weight expected_height hdi_lower hdi_upper
0 45 154.584063 145.828088 162.512174
1 40 150.184853 142.272258 158.451555
2 65 172.662069 164.522903 180.803430
3 31 141.949137 133.310865 149.811098
4 53 161.719867 153.848599 169.638495
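To make the analogy with scikit-learn explicit, the set_data / posterior-predictive step can be wrapped in a small helper. This is only a sketch: predict_height is a hypothetical name, and it reuses m_adult, trace_adult, standardize, and adults from above:

import numpy as np

def predict_height(new_weights):
    # roughly analogous to sklearn's predict: point predictions as posterior means
    with m_adult:
        pm.set_data({"weight_s": standardize(np.asarray(new_weights), adults.weight)})
        samples = pm.fast_sample_posterior_predictive(trace_adult)["height"]
    return samples.mean(axis=0)

print(predict_height([45, 60]))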
Python scikit-learn (metrics): difference between r2_score and explained_variance_score?
I noticed that that ''r2_score'' and ''explained_variance_score'' are both build-in sklearn.metrics methods for regression problems.
I was always under the impression that r2_score is the percent variance explained by the model. How is it different from ''explained_variance_score''?
When would you choose one over the other?
Thanks!
OK, look at this example:

In [123]:
# data
from sklearn import metrics
import numpy as np
y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
print(metrics.explained_variance_score(y_true, y_pred))
print(metrics.r2_score(y_true, y_pred))
0.957173447537
0.948608137045

In [124]:
# what explained_variance_score really is
1 - np.cov(np.array(y_true) - np.array(y_pred)) / np.cov(y_true)
Out[124]:
0.95717344753747324

In [125]:
# what r^2 really is
1 - ((np.array(y_true) - np.array(y_pred))**2).sum() / (4 * np.array(y_true).std()**2)
Out[125]:
0.94860813704496794

In [126]:
# notice that the mean residual is not 0
(np.array(y_true) - np.array(y_pred)).mean()
Out[126]:
-0.25

In [127]:
# if the predicted values are different, such that the mean residual IS 0:
y_pred = [2.5, 0.0, 2, 7]
(np.array(y_true) - np.array(y_pred)).mean()
Out[127]:
0.0

In [128]:
# they become the same thing
print(metrics.explained_variance_score(y_true, y_pred))
print(metrics.r2_score(y_true, y_pred))
0.982869379015
0.982869379015
So, when the mean residual is 0, they are the same. Which one to choose depends on your needs, that is: is the mean residual supposed to be 0?
Most of the answers I found (including here) emphasize the difference between R² and the explained variance score, that is: the mean residual (i.e., the mean error).

However, there is an important question left behind: why on earth would I need to consider the mean error?
Refresher:

R² is the coefficient of determination, which measures the amount of variation explained by the (least-squares) linear regression.

You can look at it from a different angle for the purpose of evaluating the predicted values of y, like this:

Variance(actual_y) × R² = Variance(predicted_y)

So intuitively, the closer R² is to 1, the more actual_y and predicted_y have the same variance (i.e., the same spread).
As previously mentioned, the main difference is the mean error; and if we look at the formulas, we find that's true:

R² = 1 − [(sum of squared residuals / n) / Variance(actual_y)]

Explained Variance Score = 1 − [Variance(predicted_y − actual_y) / Variance(actual_y)]

in which

Variance(predicted_y − actual_y) = (sum of squared residuals) / n − (mean error)²

So, obviously, the only difference is that the squared mean error is subtracted in the second formula's numerator! ... But why?
When we compare the R² score with the explained variance score, we are basically checking the mean error; so if R² = explained variance score, that means the mean error is zero!

The mean error reflects the tendency of our estimator, that is: biased vs. unbiased estimation.

In summary:

If you want an unbiased estimator, so that our model neither underestimates nor overestimates, you may want to take the mean of the error into account.
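A quick numeric check of this relationship, a sketch using the example values from earlier: since explained_variance_score = 1 − Var(e)/Var(y) and r2_score = 1 − mean(e²)/Var(y), their gap should equal mean(e)²/Var(y):

import numpy as np
from sklearn import metrics

y_true = np.array([3, -0.5, 2, 7])
y_pred = np.array([2.5, 0.0, 2, 8])
e = y_true - y_pred

gap = metrics.explained_variance_score(y_true, y_pred) - metrics.r2_score(y_true, y_pred)
print(gap)                          # ≈ 0.008565
print(e.mean()**2 / y_true.var())   # matches the gap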
Reference: https://stackoverflow.com/questions/24378176/python-sci-kit-learn-metrics-difference-between-r2-score-and-explained-varian