Abstract
Machine learning (ML) models, particularly in the context of Federated Learning (FL), are increasingly used to enable predictive maintenance in smart industry. However, as these models become integral to industrial operations, they also become potential targets for data leakage and membership inference attacks (MIAs). In this paper, we hypothesise that training on multi-dimensional data (i.e., multiple features instead of a single feature) enhances resilience against MIAs compared to simpler, single-feature models. To test this hypothesis, we design an experimental testbed to empirically evaluate the vulnerability of ML models to black-box MIAs. Our approach involves training models of varying complexity on industrial time-series data and measuring their susceptibility to such attacks. Additionally, we introduce a human expert’s perspective to contextualise our findings in the realm of industrial espionage, highlighting the potential real-world implications of data leakage. We offer a set of observations and lessons learnt from a series of controlled experiments, shedding light on the trade-offs between model complexity, security, and computational effort. These insights can help inform future design choices in deploying FL-based predictive maintenance solutions in data-sensitive industrial environments.