Minimum Qualifications
Bachelor’s degree or equivalent practical experience.
5 years of experience with software development in one or more programming languages.
3 years of experience testing, maintaining, or launching software products, and 1 year of experience with software design and architecture.
Nice to Haves
3 years of experience with ML infrastructure (e.g., model deployment, model evaluation, optimization, data processing, debugging).
Experience in full-stack development, including programming languages such as Python, Java, or C++, as well as web frameworks and cloud platforms.
Experience with Generative AI technologies and their associated risks and opportunities.
Knowledge of Trust and Safety, Content Safety, or Responsible AI domains, with an understanding of adversarial dynamics, content moderation challenges, or similar safety-adjacent fields.
Knowledge of ML algorithms, including supervised and unsupervised learning, deep learning, reinforcement learning, and generative AI.
Knowledge of API design, database technologies, and front-end development principles.
What You'll Be Doing
Lead the full lifecycle of projects that protect Google's users and business-critical products, including the latest generative AI experiences.
Deliver high-quality, future-proof, and performant infrastructure, unlocking new opportunities for the business.
Lead the deployment of Agents, or AI-based protections and classifiers, from initial problem definition and data acquisition through model development, evaluation, deployment, and long-term maintenance.
Collaborate effectively with cross-functional teams of data scientists, software engineers, product managers, and business stakeholders to deliver impactful safety solutions.
Perks and Benefits
Opportunities to switch teams and projects as you and the business grow and evolve.
The opportunity to make impactful technical decisions across the company as part of the Core team.
The chance to contribute to the success of Content Safety engineers working on Agents, Classifiers, and Responsible AI efforts.