Description
The growing use of AI in software security is reshaping how development teams evaluate vulnerabilities and prioritize security decisions. Increasingly, teams rely on AI-supported systems that generate recommendations for risk assessment and threat detection, yet the extent to which these tools influence collective decision-making within secure software development teams remains insufficiently explored. This study investigates how AI-supported security tools affect decision dynamics in such teams, applying an empirical research design based on a structured survey of software developers and security specialists working in security-critical environments. The study examines several key variables: trust in AI-supported systems, perceived usefulness of AI tools, transparency of AI-generated recommendations, decision confidence, and team coordination during security-related tasks. The findings indicate that trust in AI-supported tools and the perceived transparency of AI-generated insights significantly influence developers’ decision confidence, while improved team coordination facilitates the integration of AI recommendations into collective security decisions. These results suggest that the successful use of AI in secure development depends not only on technological capability but also on team-level decision processes and the organizational conditions that shape how AI-generated insights are interpreted and applied.