Minimum qualifications
Bachelor’s degree or equivalent practical experience.
1 year of experience with software development in one or more programming languages (e.g., Python, C, C++, Java, JavaScript).
1 year of experience with data structures or algorithms.
1 year of experience implementing core ML concepts.
Nice-to-haves
Experience with large-scale distributed systems in the context of machine learning infrastructure or applications.
Experience developing and deploying machine learning models, especially those using deep learning or natural language processing techniques.
Familiarity with TensorFlow or similar frameworks.
Experience in the adversarial space, fraud detection, or other anti-abuse domains.
What you'll be doing
Analyze and enhance machine learning models for abuse prevention and detection.
Identify relevant data sources and design signals to detect threats, contributing to improved protection strategies.
Optimize signal generation processes, including identifying and resolving performance bottlenecks to ensure scalability.
Gain expertise in emerging abuse trends, and design, implement, and test innovative strategies to combat them.
Collaborate with Trust and Safety Analysts and other abuse engineering teams to develop new detection methodologies and evaluate the efficacy and impact of existing protection mechanisms.