A Bachelor's, Master's, or Ph.D. in Computer Science or a related technical field (or equivalent experience).
8+ years of relevant work experience.
A strong command of Python and experience building complex, well-tested software systems.
Hands-on experience with deep learning frameworks like PyTorch or JAX. You understand how models are built and where the performance challenges lie.
A solid foundation in compiler concepts such as abstract syntax trees (ASTs), intermediate representations (e.g., SSA form), program analysis, and code generation.
Excellent communication and collaboration skills, essential for working effectively in a distributed, open-source environment.
Nice to Haves:
Previous contributions to deep learning compiler projects (e.g., TVM, MLIR, IREE) or deep learning frameworks themselves.
Deep expertise in the internals of PyTorch, particularly its compiler stack (TorchDynamo, TorchInductor).
Experience with JAX-like functional transformations and their application in a compiler context.
Familiarity with parallel programming, distributed systems, and writing high-performance CUDA code.
A track record of impactful participation in open-source communities, such as through code contributions, design discussions, or mentorship.
What you'll be doing:
Contribute directly to the future of accelerated AI by leading the design, implementation, optimization, and maintenance of core compiler technologies.
Work alongside engineers who built PyTorch for NVIDIA hardware to pioneer new features and stay at the forefront of framework development.
Analyze performance to identify optimization opportunities for Thunder, and collaborate with leading compiler, library, and systems teams.
Perks and Benefits:
Highly competitive salaries.
An extensive benefits package.
A work environment that promotes diversity, inclusion, and flexibility.