Abstract
The adoption of Generative Artificial Intelligence (GenAI) in software development introduces new security risks that extend beyond traditional threat models. When
facing these new risks, security managers need to deliberately manage their attention as a critical resource. To support this, we conducted a multi-case study with four software organizations, from which we distilled ten key security risks related to GenAI. Rather than a static checklist, we offer a socio-technical analysis that reveals three distinct attention shapes for the
security risks associated with GenAI, concerning governance, operation, and implementation. These attention shapes show how security risks are entangled with a
software organization’s structure, actors, technologies, and tasks. We explain how such attention shaping helps security managers uncover socio-technical blind spots
in managing GenAI security risks.