Abstract
This work presents a comparative evaluation of machine learning (ML) and deep learning (DL) models for solar irradiance decomposition in Nordic regions, where traditional empirical models often struggle. Using data from the Alpha Centauri outdoor test facility in Trondheim, Norway, it benchmarks Histogram-based Gradient Boosting (HGB), Artificial Neural Networks (ANN), and Long Short-Term Memory (LSTM) networks against the Erbs model. Results indicate that HGB performs best on the initial evaluation set, achieving strong R² scores for both direct normal irradiance (DNI) and diffuse horizontal irradiance (DHI) while requiring minimal computational resources. However, in a zero-shot prediction scenario using independent data, HGB's performance drops significantly, suggesting overfitting to seasonal patterns. In this setting, ANN maintains the highest accuracy for DNI decomposition, capturing nonlinear dependencies more effectively, whereas LSTM shows mixed results, underperforming in particular for DNI estimation.