5+ years of experience in security research, penetration testing, or offensive security roles, with demonstrated expertise in AI/ML security
Hands-on experience discovering and exploiting vulnerabilities in AI systems and platforms
Strong understanding of AI attack vectors including prompt injection, agent manipulation, and workflow exploitation
Proficiency in Python with experience in AI frameworks and security testing tools
Experience with offensive security tools and vulnerability discovery methodologies
Ability to read and analyze code across multiple languages and codebases
Strong analytical and problem-solving skills with creative thinking about attack scenarios
Excellent written communication skills for documenting technical findings and creating security advisories
Ability to translate technical findings into clear risk assessments and remediation recommendations
Nice-to-Haves:
Direct experience testing AI agent platforms, conversational AI systems, or AI orchestration architectures
Published security research or conference presentations on AI security topics
Background in software engineering with distributed systems expertise
Security certifications such as OSCP, OSCE, GPEN, or similar
Experience with GitLab or similar DevSecOps platforms
Knowledge of AI agent communication protocols and multi-agent architectures
What You'll Be Doing:
Identify and validate security vulnerabilities in GitLab's AI systems through hands-on testing, developing proof-of-concept exploits that demonstrate real-world attack scenarios
Execute comprehensive penetration testing targeting AI agent platforms, including prompt injection, jailbreaking, and workflow manipulation techniques
Research emerging AI security threats and attack techniques to assess their potential impact on GitLab's AI-powered platform
Design and implement testing methodologies and tools for evaluating AI agent security and multi-agent system exploitation
Create detailed technical reports and advisories that translate complex findings into actionable remediation strategies
Collaborate with AI engineering teams to validate security fixes through iterative testing and verification
Contribute to the development of AI security testing frameworks and automated validation tools
Partner with Security Architecture to inform architectural improvements based on research findings
Share knowledge and mentor team members on AI security testing techniques and vulnerability discovery
Perks and Benefits:
Benefits to support your health, finances, and well-being
All-remote, asynchronous work environment
Flexible Paid Time Off
Team Member Resource Groups
Equity Compensation & Employee Stock Purchase Plan