Minimum qualifications:
Bachelor’s degree or equivalent practical experience.
1 year of experience with software development in one or more programming languages (e.g., Python, C, C++, Java, JavaScript).
1 year of experience with data structures or algorithms.
1 year of experience implementing core ML concepts.
Nice-to-haves:
Experience in content safety, as applied to software products.
Experience in safety-adjacent fields, such as fairness and factuality.
Experience with Machine Learning Fairness, Generative AI, and Python.
What you'll be doing:
Be accountable for delivering high-quality, future-proof, and performant infrastructure. Protect users from exposure to offensive, sensitive, and potentially harmful content, and unlock new opportunities for the business.
Contribute to the success of server-side and on-device protections. Grow the team by interviewing qualified candidates, actively mentor those around you, and be accountable for the team's deliverables.
Information collected and processed as part of your Google Careers profile, and any job applications you choose to submit, is subject to Google's Applicant and Candidate Privacy Policy.
Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), expecting or parents-to-be, criminal histories consistent with legal requirements, or any other basis protected by law.