Abstract
This two-phase survey study investigates Software Engineering (SE) researchers' attitudes toward incorporating AI tools, particularly large language models (LLMs), into their workflows. Conducted at a large, independent European research institute, the study sheds light on the emerging norms and attitudes surrounding AI integration in empirical SE practice, insights that are crucial for ensuring responsible AI use and maintaining scientific integrity in the field. Our findings reveal that SE researchers favor LLM contributions on small, narrowly scoped, and verifiable tasks rather than open-ended ones, and as a supplement to traditional methods rather than a replacement for them. In contrast, they view LLM use as inappropriate for evaluating others' work. Finally, we observed conflicting views on high-stakes tasks that traditionally reflect genuine human effort and emotional commitment.