Machine Learning Model Complexity as a Mitigation Strategy Against Industrial Espionage through Membership Inference Attacks

Abstract

Machine learning (ML) models, particularly in the context of Federated Learning (FL), are increasingly used to enable predictive maintenance in smart industry. However, as these models become integral to industrial operations, they also become potential targets for data leakage and membership inference attacks (MIAs). In this paper, we hypothesise that training on multi-dimensional data (i.e., multiple features instead of a single feature) enhances resilience against MIAs compared to simpler, single-feature models. To test this hypothesis, we design an experimental testbed to empirically evaluate the vulnerability of ML models to black-box MIAs. Our approach involves training models of varying complexity on industrial time-series data and measuring their vulnerability to MIAs. Additionally, we introduce a human expert's perspective to contextualise our findings in the realm of industrial espionage, highlighting the potential real-world implications of data leakage. We offer a set of observations and lessons learnt from a series of controlled experiments, shedding light on the trade-offs between model complexity, security, and computational effort. These insights can help inform future design choices in deploying FL-based predictive maintenance solutions in data-sensitive industrial environments.
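To illustrate the kind of black-box membership inference attack the abstract refers to, the sketch below implements a common confidence-thresholding MIA on synthetic data: an attacker who can only query `predict_proba` guesses that a record was in the training set whenever the model's top-class confidence is high. All names, the synthetic data, and the threshold are illustrative assumptions, not the authors' actual experimental setup.

```python
# Minimal sketch of a black-box, confidence-threshold membership
# inference attack (MIA). Synthetic data and all parameters are
# assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for industrial feature data with a noisy label,
# so the model partially memorises its training set.
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
noise = rng.random(len(y)) < 0.2          # flip 20% of labels
y = np.where(noise, 1 - y, y)

# "Members" were used for training; "non-members" were not.
X_mem, X_non, y_mem, y_non = train_test_split(
    X, y, test_size=0.5, random_state=0
)
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_mem, y_mem)

def mia_guess(model, X, threshold=0.9):
    """Guess 'member' when the top-class confidence exceeds a threshold."""
    confidence = model.predict_proba(X).max(axis=1)
    return confidence > threshold

# Attack advantage: member hit rate minus non-member false-alarm rate.
tpr = mia_guess(model, X_mem).mean()
fpr = mia_guess(model, X_non).mean()
print(f"TPR={tpr:.2f} FPR={fpr:.2f} advantage={tpr - fpr:.2f}")
```

Because the forest memorises the noisy training labels, members receive systematically higher confidence than non-members, so the attack advantage (TPR minus FPR) is positive; a less overfit model would shrink this gap, which is the trade-off the paper studies.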

Category

Academic chapter

Language

English

Author(s)

Affiliation

  • SINTEF Digital / Sustainable Communication Technologies
  • Ericsson AB
  • Austria

Date

26.08.2025

Year

2025

Publisher

IEEE (Institute of Electrical and Electronics Engineers)

Book

Proceedings of the 2025 IEEE International Conference on Cyber Security and Resilience (CSR), August 4–6, 2025, Chania, Crete, Greece

ISBN

9798331535919

Page(s)

469–475

View this publication at Norwegian Research Information Repository