What are Performance parameters in Machine Learning (Important topic for ML)?
Performance Parameters in Machine Learning Models
Understanding performance parameters (evaluation metrics) is essential for judging how well a Machine Learning model performs. Here is a complete explanation of the most important performance parameters for ML models, explained simply with examples and formulas.
1. Accuracy
Accuracy measures the percentage of correctly predicted instances out of the total predictions.
Accuracy = \frac{TP + TN}{TP + TN + FP + FN}
Where:
- TP = True Positive
- TN = True Negative
- FP = False Positive
- FN = False Negative
Example:
If your model correctly predicts 90 out of 100 results,
Accuracy = 90%.
- Best for balanced datasets.
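The accuracy formula can be sketched in plain Python. The counts below are hypothetical, chosen to match the 90-of-100 example above:

```python
# Hypothetical confusion counts: 90 correct predictions out of 100.
TP, TN, FP, FN = 45, 45, 5, 5

accuracy = (TP + TN) / (TP + TN + FP + FN)
print(accuracy)  # 0.9
```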
2. Precision
Precision tells how many of the predicted positive results are actually correct.
Precision = \frac{TP}{TP + FP}
Example:
Out of 100 “spam” predictions, if 90 are truly spam → Precision = 90%.
- Good when the cost of false positives is high.
- Used in email spam detection.
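A minimal sketch of the spam-filter example, using hypothetical counts:

```python
# Hypothetical spam-filter counts: 100 "spam" predictions, 90 truly spam.
TP, FP = 90, 10

precision = TP / (TP + FP)
print(precision)  # 0.9
```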
3. Recall (Sensitivity / True Positive Rate)
Recall measures how many actual positive cases were correctly identified by the model.
Recall = \frac{TP}{TP + FN}
Example:
If 100 emails are truly spam and your model correctly identifies 80 → Recall = 80%.
- Good when missing a positive case is costly.
- Used in medical diagnosis and fraud detection.
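The recall example above, sketched with hypothetical counts:

```python
# Hypothetical counts: 100 truly spam emails, 80 caught by the model.
TP, FN = 80, 20

recall = TP / (TP + FN)
print(recall)  # 0.8
```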
4. F1 Score
F1 Score is the harmonic mean of Precision and Recall.
It balances both — useful when classes are imbalanced.
F1 = 2 \times \frac{(Precision \times Recall)}{(Precision + Recall)}
Example:
If Precision = 0.9 and Recall = 0.8,
F1 = 0.85.
- Best for imbalanced datasets.
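The worked example above can be checked directly; note the harmonic mean pulls F1 toward the smaller of the two values:

```python
precision, recall = 0.9, 0.8

# Harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 0.85
```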
5. Confusion Matrix
A confusion matrix is a table of actual vs. predicted outcomes; for binary classification it is the 2×2 table below.
| | Predicted Positive | Predicted Negative |
|---|---|---|
| Actual Positive | True Positive (TP) | False Negative (FN) |
| Actual Negative | False Positive (FP) | True Negative (TN) |
- Gives detailed insight into model performance.
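The four cells can be tallied directly from label lists. A minimal sketch (the `confusion_matrix` helper and the toy labels are illustrative, not a library API):

```python
def confusion_matrix(y_true, y_pred):
    """Tally TP, FN, FP, TN for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fn, fp, tn

y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(confusion_matrix(y_true, y_pred))  # (2, 1, 1, 2)
```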
6. ROC Curve (Receiver Operating Characteristic Curve)
ROC Curve plots True Positive Rate (Recall) against False Positive Rate (FPR) at different thresholds.
- TPR = TP / (TP + FN)
- FPR = FP / (FP + TN)
- Shows the trade-off between sensitivity and specificity.
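Each point on the ROC curve comes from one classification threshold. A sketch with hypothetical scores and labels, showing how TPR and FPR move together as the threshold changes:

```python
def roc_point(y_true, scores, threshold):
    """(TPR, FPR) when scores >= threshold are predicted positive."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, preds))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, preds))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, preds))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, preds))
    return tp / (tp + fn), fp / (fp + tn)

y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.1]  # hypothetical model scores
for thr in (0.2, 0.5, 0.8):
    print(thr, roc_point(y_true, scores, thr))
```

Lowering the threshold raises both TPR and FPR; the curve traces that trade-off.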
7. AUC (Area Under the ROC Curve)
AUC represents the degree of separability between classes.
Higher AUC → better model performance.
Range:
- 1.0 → Perfect model
- 0.5 → Random guessing
- Used in binary classification problems.
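AUC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one (ties count half). A sketch of that rank-based computation, with the same hypothetical data:

```python
def auc(y_true, scores):
    """Probability a random positive outscores a random negative."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.1]  # hypothetical model scores
print(auc(y_true, scores))  # 8 of 9 positive/negative pairs ranked correctly
```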
8. Mean Absolute Error (MAE) (For Regression Models)
MAE measures the average of absolute differences between predicted and actual values.
MAE = \frac{1}{n}\sum |y_{pred} - y_{true}|
- Easy to interpret; less sensitive to outliers.
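A minimal sketch with hypothetical regression values:

```python
y_true = [3.0, 5.0, 2.0]
y_pred = [2.5, 5.5, 2.0]  # hypothetical predictions

# Average of absolute errors.
mae = sum(abs(p - t) for p, t in zip(y_pred, y_true)) / len(y_true)
print(mae)
```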
9. Mean Squared Error (MSE)
MSE calculates the average of squared differences between predicted and actual values.
MSE = \frac{1}{n}\sum (y_{pred} - y_{true})^2
- Penalizes larger errors more strongly.
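The same sketch for MSE, plus RMSE (its square root, listed in the summary table), on hypothetical values; note how the single error of 2.0 dominates the result:

```python
y_true = [3.0, 5.0, 2.0]
y_pred = [2.5, 5.5, 4.0]  # hypothetical predictions; last one is far off

# Average of squared errors; squaring weights large errors more heavily.
mse = sum((p - t) ** 2 for p, t in zip(y_pred, y_true)) / len(y_true)
rmse = mse ** 0.5  # back in the same units as the data
print(mse, rmse)
```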
10. R² Score (Coefficient of Determination)
Indicates how much of the variance in the dependent variable is explained by the model.
R^2 = 1 - \frac{SS_{res}}{SS_{tot}}
Where:
- SS_{res} = \sum (y_{true} - y_{pred})^2
- SS_{tot} = \sum (y_{true} - \bar{y})^2
R² = 1 means a perfect fit; R² = 0 means the model explains none of the variance (it can even go negative if the model does worse than simply predicting the mean).
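The formula can be sketched directly from its two sums, using hypothetical values:

```python
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]  # hypothetical predictions

mean = sum(y_true) / len(y_true)
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
ss_tot = sum((t - mean) ** 2 for t in y_true)               # total sum of squares

r2 = 1 - ss_res / ss_tot
print(round(r2, 2))
```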
Summary Table
| Metric | Type | Use Case |
|---|---|---|
| Accuracy | Classification | Balanced datasets |
| Precision | Classification | False positives costly |
| Recall | Classification | False negatives costly |
| F1 Score | Classification | Imbalanced data |
| Confusion Matrix | Classification | Visual error analysis |
| ROC / AUC | Classification | Threshold evaluation |
| MAE | Regression | Average absolute error |
| MSE | Regression | Penalize large errors |
| RMSE | Regression | Root of MSE (same units as data) |
| R² Score | Regression | Goodness of fit |
In simple terms:
- Use Accuracy when your data is balanced.
- Use Precision & Recall when dealing with imbalanced data.
- Use F1 Score for overall classification performance.
- Use MAE/MSE/R² for regression problems.
- Use AUC-ROC to check how well your model separates classes.