
Fractional Head of AI
Motoniq — Building the Data Engine for Physical AI

About Motoniq
Motoniq trains humanoid robots using human motion to teach real-world skills. It records motion and tactile data with its proprietary Motoniq Motion-Capture System (MMoCS) and uses the CUR humanoid platform to translate that data into transferable capabilities. The company operates a SaaS and licensing model to deliver motion intelligence to logistics, manufacturing, construction, healthcare, and space applications. Motoniq is building a library of deployable capabilities and aims to scale robots that learn by moving like us across industries.
Every humanoid robotics company is racing to deploy, but they all face the same bottleneck: real-world training data. We're building the full-stack data engine for Physical AI, delivering high-fidelity, robot-ready human motion data at scale.
Our proprietary capture system enables real workers to generate training data while performing actual tasks—no teleoperation rigs, no specialists, no humanoid robots required. We deliver sensorimotor data (kinematics, tactile, RGB-D) compatible with major Vision-Language-Action (VLA) models including RT-2, Octo, OpenVLA, and π₀.
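
As a rough illustration of what "robot-ready" means here, one captured demonstration might be packaged as a sequence of synchronized sensorimotor frames. This is a minimal sketch only; the field names below are hypothetical, not our published schema:

    # Illustrative only: a minimal, hypothetical record for one captured task
    # demonstration. Field names are assumptions, not a published schema.
    from dataclasses import dataclass, field

    import numpy as np


    @dataclass
    class CaptureStep:
        """One synchronized frame of sensorimotor data."""
        timestamp_ns: int            # capture-clock timestamp, nanoseconds
        rgb: np.ndarray              # (H, W, 3) uint8 color image
        depth: np.ndarray            # (H, W) float32 depth, meters
        joint_positions: np.ndarray  # (J,) body/hand kinematics
        tactile: np.ndarray          # (T,) tactile sensor readings
        instruction: str             # natural-language task description


    @dataclass
    class CaptureEpisode:
        """One full task demonstration: an ordered sequence of frames."""
        episode_id: str
        steps: list[CaptureStep] = field(default_factory=list)

In practice, the exact schema, sampling rates, and coordinate conventions are among the things this role would define.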
The Role
We're seeking a fractional Head of AI to architect our AI strategy and ensure our data products integrate seamlessly with state-of-the-art embodied AI systems. This is a hands-on, technical leadership role with direct impact on how the robotics industry trains the next generation of humanoid robots.
You'll work directly with founders and the technical team to define data formats, validation pipelines, and integration protocols for Vision-Language-Action models. This role requires deep expertise in VLAs, robotic learning, and the ability to bridge cutting-edge research with production systems.
Key Responsibilities
VLA Integration Strategy
Define technical requirements and data specifications to ensure compatibility with major VLA architectures (RT-2, Octo, OpenVLA, π₀, and emerging models). Establish validation frameworks to verify data quality meets training requirements.
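
As a sketch of what this compatibility work involves, an exporter targeting RLDS-style pipelines (the episode format popularized by Open X-Embodiment) might emit steps like the following. The key names and the joint-delta action convention here are assumptions for illustration, not a published spec:

    # Illustrative only: emitting RLDS-style (observation, action) steps of the
    # kind consumed by Open X-Embodiment-era VLA training pipelines. Key names
    # and the joint-delta action convention are assumptions, not a spec.
    import numpy as np


    def to_rlds_steps(rgb_frames, joint_positions, instruction):
        """Yield one RLDS-style step dict per synchronized frame."""
        q = np.asarray(joint_positions, dtype=np.float32)  # (N, J)
        for t in range(len(q) - 1):
            yield {
                "observation": {
                    "image": rgb_frames[t],      # (H, W, 3) uint8
                    "state": q[t],               # (J,) float32
                    "natural_language_instruction": instruction,
                },
                # Action as next-frame joint delta; a real exporter must match
                # the target model's action space (e.g., end-effector pose).
                "action": q[t + 1] - q[t],
                "is_first": t == 0,
                "is_last": t == len(q) - 2,
            }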
AI Research & Roadmap
Track developments in embodied AI, foundation models for robotics, and multimodal learning. Advise on which research directions will most impact our data product strategy and customer needs. Identify opportunities to contribute to open-source VLA ecosystems.
Data Pipeline & Quality Assurance
Design and implement validation systems for sensorimotor data streams. Develop metrics for data quality, diversity, and distribution alignment. Ensure annotation, augmentation, and packaging processes preserve information critical for policy learning.
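
As a minimal sketch of the kind of automated per-episode checks such a pipeline might run (thresholds and field names are hypothetical):

    # Illustrative only: automated per-episode quality checks a validation
    # pipeline might run. Thresholds and field names are hypothetical.
    import numpy as np


    def validate_episode(timestamps_ns, joint_positions, max_gap_ms=50.0):
        """Return quality flags and summary stats for one captured episode."""
        gaps_ms = np.diff(np.asarray(timestamps_ns, dtype=np.int64)) / 1e6
        q = np.asarray(joint_positions, dtype=np.float64)  # (N, J)
        return {
            "monotonic_time": bool(np.all(gaps_ms > 0)),   # no clock jumps
            "no_dropped_frames": bool(np.all(gaps_ms <= max_gap_ms)),
            "no_nans": not np.isnan(q).any(),              # sensor dropouts
            # Per-joint range of motion; near-zero spread can flag a stuck sensor.
            "joint_range": (q.max(axis=0) - q.min(axis=0)).tolist(),
        }

Checks like these would gate each episode before annotation and packaging.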
Customer & Partner Collaboration
Engage directly with robotics companies and research labs to understand their VLA training workflows, data requirements, and integration challenges. Translate technical feedback into product improvements.
Technical Leadership & Evangelism
Represent Motoniq at conferences, publish technical documentation, and build relationships within the embodied AI research community. Establish Motoniq as the authoritative source on training data for Physical AI.
Required Qualifications
PhD in Computer Science, Robotics, Machine Learning, or a related field, with a focus on embodied AI, robotic learning, or vision-language-action models.
Hands-on experience training or deploying Vision-Language-Action (VLA) models such as RT-1, RT-2, Octo, OpenVLA, PaLM-E, π₀, or similar architectures.
Deep understanding of robotic learning paradigms: imitation learning, behavior cloning, reinforcement learning, and foundation model fine-tuning for manipulation tasks.
Experience with multimodal data (vision, language, proprioception, tactile) and sensor fusion for robotics applications.
Track record of publications in top-tier conferences (CoRL, RSS, ICRA, IROS, NeurIPS, ICLR, CVPR) or contributions to major open-source robotics/AI projects.
Strong software engineering skills in Python, PyTorch/JAX, and robotics stacks (ROS, Isaac Sim, MuJoCo, or similar).
Based in the San Francisco Bay Area, or willing to relocate post-funding.
Preferred Qualifications
Previous work at leading robotics/AI labs: Google DeepMind, Google Research (Robotics), OpenAI, Tesla AI, Figure AI, Physical Intelligence, UC Berkeley RAIL Lab, Stanford SVL, CMU Robotics Institute, MIT CSAIL.
Experience building or contributing to large-scale robotic datasets (e.g., Open X-Embodiment, RoboNet, Bridge Data, DROID).
Expertise in human motion capture, biomechanics, or human-robot interaction systems.
Understanding of data markets, ML data pipelines, or experience at data-centric AI companies (Scale AI, Robust Intelligence, Snorkel AI).
Active contributor to embodied AI open-source ecosystems (OpenVLA, Dora-rs, Open Robotics, LeRobot).
What We Offer
Fractional engagement (10-15 hours/week) with a potential path to a full-time Chief AI Officer role after the seed round closes.
Competitive equity package commensurate with a senior technical leadership role.
Direct influence on the technical foundation of the Physical AI data ecosystem—your decisions will shape how thousands of robots are trained.
Collaboration with experienced founding team (backgrounds from YC-backed companies, UC Berkeley, CMU, MIT, Google, Apple).
Advisory board access including leaders from Stanford, Princeton, NYU, NVIDIA, Figure AI, and Google DeepMind.
Opportunity to publish research, contribute to open-source projects, and establish thought leadership in the rapidly evolving field of embodied AI.
Work on the most pressing bottleneck in robotics: high-quality, diverse, real-world training data for foundation models.
Timeline & Transition
Fractional engagement begins immediately, with an expected transition to full-time Chief AI Officer within 3-6 months of the seed round closing (targeted for Q2 2026). Relocation support is provided for a Bay Area move post-funding.
To Apply
Please send your CV, links to publications/GitHub, and a brief note on your experience with VLAs and embodied AI to j.nguyen@motoniq.ai. Include 'Fractional Head of AI' in the subject line. If you've contributed to any VLA models or large-scale robotic datasets, please highlight this in your note.
Motoniq is an equal opportunity employer. We value diversity and are committed to creating an inclusive environment for all employees.