Requirements
- 5+ years of experience with large-scale distributed systems and ML infrastructure.
- Strong hands-on experience with ML inference optimization and GPU utilization.
- Deep proficiency with ML frameworks like PyTorch or TensorFlow.
- Strong background in distributed processing frameworks and managing large-scale data.
- Strong cloud expertise (AWS or Azure).
- Familiarity with vector databases and semantic search infrastructure is a plus.
- Strong proficiency in Python.
- Bachelor's degree or equivalent experience in Computer Science, Machine Learning, or a related field required; Master's degree or equivalent experience preferred.
What You'll Be Doing
- Build and optimize scalable data management pipelines for multimodal training data.
- Develop reusable data processing frameworks and components for the feature platform.
- Develop and maintain data quality, lineage, and governance tooling.
- Integrate semantic search and vector database infrastructure.
- Build and optimize distributed batch and real-time inference pipelines using technologies such as Ray.
- Implement backend services for ML inference workflows.
- Partner with product and research teams to translate model requirements into production-ready capabilities.
- Mentor junior and mid-level engineers through code reviews and knowledge sharing.
Perks and Benefits
At Adobe, we empower employees to innovate with AI, and we look for candidates eager to do the same. As part of the hiring experience, we provide clear guidance on where AI use is encouraged during the process and where it is restricted, such as during live interviews.