What we're looking for:
A Bachelor's, Master's, or Ph.D. in Computer Science or a related technical field (or equivalent experience).
8+ years of relevant work experience.
A strong command of Python and experience building complex, well-tested software systems.
Hands-on experience with deep learning frameworks like PyTorch or JAX. You understand how models are built and where the performance challenges lie.
A solid foundation in compiler concepts such as abstract syntax trees (ASTs), intermediate representations (e.g., SSA form), program analysis, and code generation.
Excellent communication and collaboration skills, essential for working effectively in a distributed, open-source environment.
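As a small illustration of the compiler concepts listed above (ASTs, program analysis, and code generation), here is a toy constant-folding pass written with Python's standard-library `ast` module. It is illustrative only and makes no assumptions about any particular project's codebase.

```python
import ast

class FoldMul(ast.NodeTransformer):
    """Toy compiler pass: fold Constant * Constant into a single Constant."""

    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children bottom-up first
        if (isinstance(node.op, ast.Mult)
                and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)):
            return ast.Constant(value=node.left.value * node.right.value)
        return node

tree = ast.parse("y = 2 * 3 * x")   # front end: source -> AST
folded = FoldMul().visit(tree)      # analysis/transformation pass
ast.fix_missing_locations(folded)
print(ast.unparse(folded))          # code generation: prints "y = 6 * x"
```

Real deep learning compilers run much richer passes over their intermediate representations, but the parse/analyze/transform/emit structure is the same.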
Nice to haves:
Previous contributions to deep learning compiler projects (e.g., TVM, MLIR, IREE) or deep learning frameworks themselves.
Deep expertise in the internals of PyTorch, particularly its compiler stack (TorchDynamo, TorchInductor).
Experience with JAX-like functional transformations and their application in a compiler context.
Familiarity with parallel programming, distributed systems, and writing high-performance CUDA code.
A track record of impactful participation in open-source communities, such as through code contributions, design discussions, or mentorship.
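To sketch the idea behind the trace-based compiler stacks mentioned above (this is a minimal toy, not the actual API of Thunder, TorchDynamo, or JAX): operations on a proxy value are recorded into a linear trace, which a compiler can then analyze and transform.

```python
# Illustrative sketch only: a toy tracer that records arithmetic on a
# proxy value -- the core idea behind trace-based deep learning compilers.
class Proxy:
    def __init__(self, name, trace):
        self.name = name
        self.trace = trace

    def _emit(self, op, other):
        # Record the operation and return a fresh proxy for its result.
        result = Proxy(f"t{len(self.trace)}", self.trace)
        self.trace.append((result.name, op, self.name,
                           getattr(other, "name", other)))
        return result

    def __add__(self, other):
        return self._emit("add", other)

    def __mul__(self, other):
        return self._emit("mul", other)

def capture(fn):
    """Run fn on a proxy input and return the recorded trace."""
    trace = []
    fn(Proxy("x", trace))
    return trace

for name, op, lhs, rhs in capture(lambda x: x * x + 1):
    print(f"{name} = {op} {lhs} {rhs}")
# Prints:
# t0 = mul x x
# t1 = add t0 1
```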
What you'll be doing:
Contributing directly to the future of accelerated AI.
Leading the design, implementation, optimization, and maintenance of core compiler technologies.
Collaborating with leading compiler, library, and systems teams to create high-impact solutions.
Diving deep into performance analysis to find optimization opportunities for Thunder.
Perks and benefits:
Highly competitive salaries.
Extensive benefits package.
A diverse, inclusive, and flexible work environment.