What You’ll Do
Develop LM/VLM-powered agents that autonomously interface with code, GUIs, web tools, and design software to drive generative simulation
Build agentic systems that synthesize high-quality, diverse physical data by orchestrating simulation, generative models, and interactive tools
Build data pipelines to curate, generate, and preprocess datasets used for training and evaluating language- and vision-language-based agents
Collaborate with simulation and infrastructure teams to co-develop APIs, GUIs, and services that enable seamless interaction with physics engines and simulation environments
Drive innovation by staying at the forefront of LLM agents, autonomous tool use, and generative simulation, and by translating cutting-edge research into robust, production-ready systems
Contribute to the creation of a generative simulation stack that empowers downstream applications in robotics, embodied AI, and physical reasoning
What You’ll Bring
A bold and imaginative vision for a next-generation paradigm of physical data synthesis, combining simulation, generative models, and autonomous agents
Deep curiosity and strong technical ownership, with a track record of driving complex, open-ended projects from concept to implementation
Experience with (multimodal) large language models, generative AI tools, agentic software design, GUI automation, and program synthesis
Passion for inventing creative solutions at the edge of AI, simulation, and physical reasoning, with an eagerness to build systems that generate high-fidelity, useful data for embodied intelligence
Bonus: Familiarity with interactive design tools (e.g., CAD software, game engines) or simulation environments (e.g., physics engines, robotics simulators)