Evaluate ML Models Notes

Dummy Models

A dummy model assigns target values directly while ignoring all features. There are several strategies: 1. Predict the most frequent class. 2. Always predict a chosen constant class. 3. Stratified prediction (random predictions that follow the training class distribution).

It serves as a baseline against which real models are compared.

from sklearn.dummy import DummyClassifier

# Always predict the most frequent class in the training set
dummy_majority = DummyClassifier(strategy='most_frequent').fit(X_train, y_train)
y_dummy_predictions = dummy_majority.predict(X_test)

# Accuracy of the majority-class baseline
dummy_majority.score(X_test, y_test)
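
The other two strategies can be sketched the same way (a minimal sketch assuming the same X_train/y_train/X_test split; the choice of constant=1 is just for illustration):

from sklearn.dummy import DummyClassifier

# Stratified: random predictions that follow the training class proportions
dummy_stratified = DummyClassifier(strategy='stratified').fit(X_train, y_train)

# Constant: always predict one chosen class (here class 1)
dummy_constant = DummyClassifier(strategy='constant', constant=1).fit(X_train, y_train)

print(dummy_stratified.score(X_test, y_test))
print(dummy_constant.score(X_test, y_test))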
Confusion Matrix

Use the confusion matrix to investigate false positives and false negatives.

from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report

# Per-class precision, recall, and F1 in one report
print(classification_report(y_test, y_predicted))
confusion = confusion_matrix(y_test, y_predicted)
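
In scikit-learn's layout the true labels are the rows and the predictions are the columns, so for a binary problem the four cells can be unpacked like this (a small sketch reusing the same y_test and y_predicted):

# [[TN, FP],
#  [FN, TP]]  -- rows: true labels, columns: predictions
tn, fp, fn, tp = confusion_matrix(y_test, y_predicted).ravel()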

Sensitivity / True Positive Rate / Recall: how many of the actual positive cases are identified?
TP / (TP + FN)

Precision: how many of the predicted positive cases are correct?
TP / (TP + FP)

False Positive Rate (1 − Specificity): what fraction of all negative cases is incorrectly identified as positive? (Specificity itself is TN / (TN + FP).)
FP / (TN + FP)

Raising sensitivity (recall) generally lowers precision, and vice versa; the trade-off is governed by the decision threshold.

The F score combines precision and recall into a single number; F1 is their harmonic mean, 2 · Precision · Recall / (Precision + Recall).
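
A small worked example with hypothetical counts, just to show the arithmetic:

# Hypothetical confusion-matrix counts (not from any real model)
tp, fn, fp, tn = 40, 10, 20, 30

recall = tp / (tp + fn)                              # 40/50 = 0.80
precision = tp / (tp + fp)                           # 40/60 ≈ 0.67
fpr = fp / (tn + fp)                                 # 20/50 = 0.40
f1 = 2 * precision * recall / (precision + recall)   # ≈ 0.73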

Curves

Precision-Recall Curves:
X axis: Precision
Y axis: Recall

Top right corner is the ideal point (precision = 1, recall = 1)

ROC Curves
X axis: False Positive Rate
Y axis: True Positive Rate

The top left corner is the ideal point: false positive rate is zero and true positive rate is one. The area under the curve (AUC) summarizes the whole curve; 1.0 is perfect, 0.5 is chance level.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve

# y_scores_lr: decision_function (or probability) scores for X_test
precision, recall, thresholds = precision_recall_curve(y_test, y_scores_lr)

# Mark the point whose decision threshold is closest to zero
closest_zero = np.argmin(np.abs(thresholds))
closest_zero_p = precision[closest_zero]
closest_zero_r = recall[closest_zero]

plt.figure()
plt.xlim([0.0, 1.01])
plt.ylim([0.0, 1.01])
plt.plot(precision, recall, label='Precision-Recall Curve')
plt.plot(closest_zero_p, closest_zero_r, 'o', markersize=12, fillstyle='none', c='r', mew=3)
plt.xlabel('Precision', fontsize=16)
plt.ylabel('Recall', fontsize=16)
plt.gca().set_aspect('equal')
plt.show()

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc

# X and y_binary_imbalanced are assumed from earlier data preparation
X_train, X_test, y_train, y_test = train_test_split(X, y_binary_imbalanced, random_state=0)

lr = LogisticRegression()
y_score_lr = lr.fit(X_train, y_train).decision_function(X_test)
fpr_lr, tpr_lr, _ = roc_curve(y_test, y_score_lr)
roc_auc_lr = auc(fpr_lr, tpr_lr)

plt.figure()
plt.xlim([-0.01, 1.00])
plt.ylim([-0.01, 1.01])
plt.plot(fpr_lr, tpr_lr, lw=3, label='LogRegr ROC curve (area = {:0.2f})'.format(roc_auc_lr))
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.title('ROC curve (1-of-10 digits classifier)', fontsize=16)
plt.legend(loc='lower right', fontsize=13)
plt.plot([0, 1], [0, 1], color='navy', lw=3, linestyle='--')  # chance-level diagonal
plt.gca().set_aspect('equal')
plt.show()
Multi-Class Evaluation

Confusion matrix (each class is evaluated one-vs-rest against all the others).

Macro-averaged precision: compute precision for each class, then take the unweighted mean, so every class counts equally.
Micro-averaged precision: pool the counts over all instances first, so every instance counts equally.

If macro precision is low, examine the small classes; if micro precision is low, examine the large classes.

Set the average parameter to 'macro' or 'micro' to choose which value is computed.

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# y_test_mc / svm_predicted_mc: multi-class true labels and predictions
precision_score(y_test_mc, svm_predicted_mc, average='micro')
precision_score(y_test_mc, svm_predicted_mc, average='macro')
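
A minimal sketch with made-up labels showing how the two averages diverge when one class dominates:

from sklearn.metrics import precision_score

# Hypothetical 3-class example: class 0 is large, classes 1 and 2 are small
y_true = [0, 0, 0, 0, 0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 0, 0, 0, 0, 1, 2, 2, 1]

print(precision_score(y_true, y_pred, average='micro'))  # 0.80: pooled over instances
print(precision_score(y_true, y_pred, average='macro'))  # ~0.67: unweighted mean over classes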
GridSearch
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

## cross_val_score, scored with AUC instead of the default accuracy
# clf: any estimator, e.g. SVC()
cross_val_score(clf, X, y, cv=5, scoring='roc_auc')

## Grid Search
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score

dataset = load_digits()
X, y = dataset.data, dataset.target == 1   # binary task: digit '1' vs. all other digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel='rbf')
grid_values = {'gamma': [0.001, 0.01, 0.05, 0.1, 1, 10, 100]}

# default metric to optimize over grid parameters: accuracy
grid_clf_acc = GridSearchCV(clf, param_grid = grid_values)
grid_clf_acc.fit(X_train, y_train)
y_decision_fn_scores_acc = grid_clf_acc.decision_function(X_test) 

print('Grid best parameter (max. accuracy): ', grid_clf_acc.best_params_)
print('Grid best score (accuracy): ', grid_clf_acc.best_score_)
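
To optimize a different metric, pass scoring to GridSearchCV. Continuing with the objects defined above, an AUC-driven search looks like this:

# Same grid, but select gamma by AUC instead of accuracy
grid_clf_auc = GridSearchCV(clf, param_grid=grid_values, scoring='roc_auc')
grid_clf_auc.fit(X_train, y_train)
y_decision_fn_scores_auc = grid_clf_auc.decision_function(X_test)

print('Test set AUC: ', roc_auc_score(y_test, y_decision_fn_scores_auc))
print('Grid best parameter (max. AUC): ', grid_clf_auc.best_params_)
print('Grid best score (AUC): ', grid_clf_auc.best_score_)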
