Attention Shaping for GenAI Security Risks

Abstract

The adoption of Generative Artificial Intelligence (GenAI) in software development introduces new security risks that extend beyond traditional threat models. Facing these risks, security managers must deliberately manage their attention as a critical resource. To support this, we conducted a multi-case study with four software organizations and distilled ten key security risks related to GenAI. Rather than a static checklist, we offer a socio-technical analysis that reveals three distinct attention shapes for GenAI security risks, concerning governance, operation, and implementation. These attention shapes show how security risks are entangled with a software organization’s structure, actors, technologies, and tasks. We explain how such attention shaping helps security managers uncover socio-technical blind spots in managing GenAI security risks.

Category

Academic article

Language

Other

Author(s)

Affiliation

  • SINTEF Digital / Software Engineering, Safety and Security

Year

2026

Published in

IEEE Software

ISSN

0740-7459

Page(s)

7
