Classical ML & Statistics
Feature engineering, model evaluation, regression, classification, ensemble methods, and statistical foundations.
Overview
Classical Machine Learning encompasses the foundational algorithms and statistical techniques that remain essential in production ML systems. Despite the rise of deep learning, classical methods are often preferred when data is tabular, interpretability is required, or compute is constrained.
Key areas include supervised learning (linear/logistic regression, decision trees, random forests, gradient boosting with XGBoost/LightGBM), model evaluation (cross-validation, bias-variance tradeoff, precision-recall, ROC-AUC), and feature engineering (feature selection, encoding, scaling, handling missing data).
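The evaluation workflow above can be sketched with scikit-learn (assumed available); the synthetic dataset from `make_classification` stands in for real tabular data:

```python
# Sketch: 5-fold cross-validated ROC-AUC for a logistic regression baseline
# on a synthetic binary-classification dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Scaling inside the pipeline keeps the CV estimate honest: the scaler is
# re-fit on each training fold, so no information leaks from the held-out fold.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"ROC-AUC per fold: {scores.round(3)}")
print(f"Mean ROC-AUC: {scores.mean():.3f}")
```

Swapping the pipeline's estimator for a tree ensemble (e.g. `GradientBoostingClassifier`) reuses the same evaluation code unchanged, which is one reason the pipeline-plus-`cross_val_score` pattern is standard practice.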
A firm grasp of the fundamentals is critical: overfitting vs. underfitting, L1/L2 regularization, ensemble methods, and interpretability tools (SHAP, feature importance). Many interview questions probe these concepts because they reveal whether a candidate understands how and why ML algorithms work.
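The difference between L1 and L2 regularization can be seen directly in fitted coefficients; a minimal sketch with scikit-learn, assuming a synthetic regression problem where only a few features are informative:

```python
# Sketch: contrast L1 (Lasso) and L2 (Ridge) regularization on a sparse
# synthetic regression problem.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Only 5 of the 30 features actually carry signal.
X, y = make_regression(n_samples=200, n_features=30, n_informative=5,
                       noise=10.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)

# L1 drives irrelevant coefficients exactly to zero (implicit feature
# selection); L2 only shrinks them toward zero without zeroing them out.
n_zero_lasso = int(np.sum(lasso.coef_ == 0))
n_zero_ridge = int(np.sum(ridge.coef_ == 0))
print(f"Zero coefficients - Lasso: {n_zero_lasso}, Ridge: {n_zero_ridge}")
```

This sparsity is why L1 is the answer to "which regularizer performs feature selection?", a common interview follow-up.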