What You'll Do
Lead the evolution of our high-performance robotics simulation platform
Design and implement the compute infrastructure and data flow mechanisms to optimize performance for physics simulation and foundation model training
Lead development of our compiler stack, focusing on JIT compilation, LLVM IR, and GPU codegen to minimize compile time and maximize runtime performance
Collaborate with the team to improve the compiler's support for differentiable programming, which is crucial for training neural networks within simulations
Stay current on state-of-the-art ML compilers, such as those in PyTorch, Triton, and JAX, and decide which techniques and approaches are best suited for our application
Work closely with simulation and robotics engineers to align compiler enhancements with application needs
Contribute to relevant open-source projects and participate actively in the broader compiler and systems community
What You'll Bring
Strong background in compiler construction, particularly in JIT compilation and LLVM-based code generation
Extensive experience with GPU programming models (e.g., CUDA, Vulkan) and understanding of GPU architecture
Track record as a core contributor to GPU programming infrastructure, such as PyTorch, JAX, Mojo, Taichi, or Warp
Proven ability to profile and optimize complex systems for performance and scalability
Understanding of automatic differentiation and its application in simulation and machine learning contexts
Excellent communication skills and a collaborative approach to problem-solving
Enthusiasm for contributing to and engaging with open-source communities