Minimum qualifications:
Bachelor's degree or equivalent practical experience.
2 years of experience with security assessments, security design reviews, or threat modeling.
2 years of experience with security engineering, computer and network security, and security protocols.
2 years of coding experience in one or more general-purpose languages.
Nice to haves:
Master's or PhD degree in Computer Science or a related technical field with a specialization in Security, AI/ML, or a related area.
Experience in Artificial Intelligence/Machine Learning (AI/ML) security research, including areas such as adversarial machine learning, prompt injection, model extraction, and privacy-preserving ML.
Track record of security research contributions (e.g., publications in relevant security/ML venues, CVEs, conference talks, open-source tools).
Familiarity with the architecture and potential failure modes of LLMs and AI agent systems.
What you'll be doing:
Conduct research to identify, analyze, and understand novel security threats, vulnerabilities, and attack vectors targeting AI agents and underlying LLMs.
Design, prototype, evaluate, and refine innovative defense mechanisms and mitigation strategies against identified threats.
Develop proof-of-concept exploits and testing methodologies to validate vulnerabilities and assess the effectiveness of proposed defenses.
Collaborate with engineering and research teams to translate research findings into practical security solutions deployable across Google's agent ecosystem.
Document research findings and contribute to internal knowledge sharing, security guidelines, and potentially external publications or presentations.
Perks and benefits:
Work with the Security team to maintain the safest operating environment for Google's users and developers.
Pioneer defenses for systems like Gemini and Workspace AI, addressing novel threats unique to autonomous agents and LLMs.
Help define secure development practices for AI agents within Google and influence the broader industry in this evolving field.