Abstract
Risks of artificial intelligence (AI) may concern the totality of AI as provided by myriad vendors and taken up at a societal scale. Consequently, it becomes increasingly important to address trustworthy AI at the societal level. In this position paper, we discuss such a societal perspective on trustworthy AI. To support this perspective, we draw on Beck’s theory of Risk Society, which concerns how society is increasingly shaped by the identification and management of risks arising from technological development. We explore how this theory can help in understanding trustworthy AI at the societal level and detail two key implications. Specifically, we argue that the theory of Risk Society entails (a) the importance of evaluating AI trustworthiness at a societal level and (b) the benefit of open research on trustworthy AI, which can foster public trust in AI by showing that risks are being actively studied and addressed.