


For my Machine Learning class at UCSD, the final project only required building one model with three variants, but I wanted to push further and explore the full landscape of ML techniques. Instead of the minimum, I built eight different models: traditional linear methods like log-transformed OLS and Poisson regression, ensemble methods like Random Forest and Gradient Boosting, and Neural Networks.

The most interesting challenge was matching predictor-selection strategies to each model family: backward selection for the linear models, L1 (Lasso) and L2 (Ridge) regularization combined through Elastic Net with various hyperparameters, and the full feature set for the tree-based models and neural networks. I spent countless hours tuning hyperparameters with GridSearchCV, running 10-fold cross-validation on every model, and analyzing why certain approaches worked better than others.

Unfortunately, the dataset was noisy, with too little signal in the predictors to establish strong relationships with injury counts - which is itself a valuable lesson about the importance of data quality and feature engineering. Despite those limitations, seeing Elastic Net reach a 21.7% adjusted R² after all the tuning was incredibly satisfying. This project taught me a lot about the practical realities of machine learning: sometimes the data tells you more about what isn't possible than what is, and the process of systematically comparing models is just as valuable as the final results. It gave me deep hands-on experience with the entire ML workflow, from data preprocessing to model evaluation.
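As a rough illustration of the tuning loop described above, here is a minimal sketch of Elastic Net hyperparameter search with GridSearchCV and 10-fold cross-validation, plus the adjusted R² computation. The dataset here is synthetic (the actual injury data and grid values are not shown), so the numbers and parameter ranges are assumptions, not the project's real configuration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the injury dataset (hypothetical shapes/noise).
X, y = make_regression(n_samples=500, n_features=20, noise=25.0, random_state=0)

# Example grid: overall penalty strength and the L1/L2 mixing ratio
# (l1_ratio=1 is pure Lasso, l1_ratio=0 is pure Ridge).
param_grid = {
    "alpha": [0.01, 0.1, 1.0, 10.0],
    "l1_ratio": [0.1, 0.5, 0.9],
}
search = GridSearchCV(
    ElasticNet(max_iter=10_000),
    param_grid,
    cv=10,          # 10-fold cross-validation, as used in the project
    scoring="r2",
)
search.fit(X, y)

# Adjusted R² penalizes plain R² for the number of predictors p.
n, p = X.shape
r2 = search.best_score_
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(search.best_params_, round(adj_r2, 3))
```

The same pattern extends to the other model families by swapping the estimator and grid (e.g. tree depth and number of estimators for Random Forest), which is what makes the side-by-side comparison systematic.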