The integration of artificial intelligence into software development is increasingly shaping how security-related decisions are made during the design and implementation of secure systems. While AI tools can support threat detection, risk analysis, and vulnerability identification, their effectiveness depends on how development teams interpret AI-generated insights and integrate them into their decision processes. This paper examines human–AI collaboration in secure system development, focusing on the interaction between AI-based decision-support tools and team decision-making in security-critical development environments. The study adopts a conceptual and analytical approach grounded in a review of current research on artificial intelligence, software security, and team-based decision-making. Particular attention is given to how AI-generated insights support tasks such as threat modeling, security architecture design, and risk assessment. The paper contributes a conceptual framework describing the interaction between human expertise and AI-supported decision processes in secure software development teams. It further identifies key organizational factors that influence the effective integration of AI into secure development practices, including trust in AI recommendations, transparency of AI models, and team coordination.