Foundation is a robotics company focused on developing advanced humanoid robots. Their flagship product, Phantom, is designed for strength and fluid motion, eliminating a 'robotic' feel for seamless integration into human environments. Phantom utilizes proprietary cycloid actuators for power, precision, and efficiency. The company is also involved in AI, hardware, and has a culture focused on tackling ambitious projects. They are actively hiring for various engineering and design roles, aiming to shape the future of technology, potentially for applications on Earth and in space.
Company: Foundation (Foundation Future Industries, Inc.) — developer of general-purpose humanoid robots (product: Phantom)
Founded: May 2024
Headquarters: San Francisco, United States
Funding: Pre-seed $11M (reported), lead: Tribe Capital
Team size: Approximately 85 employees (reported)
Related Companies

| Company | HQ | Industry | Total Funding |
| --- | --- | --- | --- |
| Generalist | 🇺🇸 San Mateo, US | Data and Analytics, DeepTech, Hardware, Information Technology, Software | - |
| Humanoid | 🇬🇧 London, GB | DeepTech | - |
| Verne Robotics | 🇺🇸 San Francisco, US | DeepTech | $500K |
| Embodied AI | 🇨🇭 Lausanne, CH | DeepTech | $165K |
| Bright Machines | 🇺🇸 San Francisco, US | Consumer Products, Data and Analytics, DeepTech, Hardware, Information Technology, Manufacturing, Software | $437M |
Company Overview
Problem Domain: Industrial and manufacturing automation; general-purpose physical robotics to address labor shortages
Founded: 2024
Industry: DeepTech
Tech Stack: Proprietary cycloid actuators; embodied AI / perception and autonomy
Funding Track Record: Pre-seed (2024-08-22), $11 million; reported participation from unnamed angel investors
Investor Signal: "Tribe Capital led pre-seed"
Join the Team
Vision Language Action (VLA) Models Engineer
On-Site • San Francisco Bay Area, US
About Us
Foundation is developing the future of general-purpose robotics with the goal of addressing the labor shortage.
Our mission is to create advanced robots that can operate in complex environments, reducing human risk in conflict zones and enhancing efficiency in labor-intensive industries.
We are on the lookout for extraordinary engineers and scientists to join our team. Your previous experience in robotics isn't a prerequisite — it's your talent and determination that truly count.
We expect that many of our team members will bring diverse perspectives from various industries and fields. We are looking for individuals with a proven record of exceptional ability and a history of creating things that work.
Our Culture
We like to be frank and honest about who we are, so that people can decide for themselves whether this is a culture they resonate with. Please read more about our culture at https://foundation.bot/culture.
Who should join:
- You like working in person with a team in San Francisco.
- You deeply believe that this is the most important mission for humanity and that it needs to happen yesterday.
- You are highly technical, regardless of your role. We are building technology; you need to understand technology well.
- You care about aesthetics and design inside and out. If it's not the best product ever, it bothers you, and you need to "fix" it.
- You don't need someone to motivate you; you get things done.
Why Are We Hiring for This Role
What Kind of Person Are We Looking For
- Strong experience training multimodal models, including VLAs, VLMs, vision transformers, and LLMs.
- Ability to build and iterate on large-scale training pipelines.
- Deep proficiency in PyTorch or JAX, distributed training, and GPU acceleration.
- Strong software engineering skills in Python and modern ML tooling.
- Experience with (synthetic) dataset creation and curation.
- Understanding of real-time deployment constraints on embedded hardware.
- Ideally, familiarity with robotics simulation environments (Isaac Lab, MuJoCo, or similar).
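To make the multimodal modeling in the requirements above concrete, here is a deliberately tiny PyTorch sketch of a vision-language-action style module: it fuses image and instruction features into a continuous action vector. All names, layer sizes, and the seven-dimensional action space are invented for illustration and are not Foundation's architecture.

```python
import torch
import torch.nn as nn

class TinyVLA(nn.Module):
    """Toy vision-language-action model: fuses image and text features
    into a continuous action vector. Purely illustrative."""
    def __init__(self, vocab_size=1000, dim=64, action_dim=7):
        super().__init__()
        # Minimal vision encoder: conv -> global pool -> linear projection.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim),
        )
        # Mean-pooled token embeddings stand in for a language encoder.
        self.text = nn.EmbeddingBag(vocab_size, dim)
        # Fuse both modalities and regress an action vector.
        self.action_head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, action_dim),
        )

    def forward(self, images, tokens):
        fused = torch.cat([self.vision(images), self.text(tokens)], dim=-1)
        return self.action_head(fused)

model = TinyVLA()
images = torch.randn(4, 3, 64, 64)        # batch of RGB frames
tokens = torch.randint(0, 1000, (4, 12))  # tokenized instructions
actions = model(images, tokens)
print(actions.shape)  # torch.Size([4, 7])
```

A production system would replace each stub with a pretrained encoder and train the whole pipeline with distributed data parallelism, but the interface — images plus instructions in, actions out — is the core idea.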
Benefits
We provide market-standard benefits (health, vision, dental, 401k, etc.). Join us for the culture and the mission, not for the benefits.
Salary
The annual compensation is expected to be between $150,000 and $300,000. Exact compensation may vary based on skills, experience, and location.
Responsibilities:
- Develop and optimize vision-language-action models, including transformers, diffusion models, and multimodal encoders/decoders.
- Build representations for 2D/3D perception, affordances, scene understanding, and spatial reasoning.
- Integrate LLM-based reasoning with action planning and control policies.
- Design datasets for multimodal learning: video-action trajectories, instruction following, teleoperation data, and synthetic data.
- Interface VLA model outputs with real-time robot control stacks (navigation, manipulation, locomotion).
- Implement grounding layers that convert natural-language instructions into symbolic, geometric, or skill-level action plans.
- Deploy models on on-board or edge compute platforms, optimizing for latency, safety, and reliability.
- Build scalable pipelines for ingesting, labeling, and generating multimodal training data.
- Create simulation-to-real (Sim2Real) training workflows using synthetic environments and teleoperated demonstration data.
- Optimize training pipelines, model parallelism, and evaluation frameworks.
- Work closely with robotics, hardware, controls, and safety teams to ensure model outputs are executable, safe, and predictable.
- Collaborate with product teams to define robot capabilities and user-facing behaviors.
- Participate in user and field testing to iterate on real-world performance.
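The grounding-layer responsibility above can be sketched in miniature: map a natural-language instruction to a skill-level plan. A real grounding layer would use learned models; this keyword-matching stub, with an invented skill vocabulary, only illustrates the interface between language and action plans.

```python
# Hypothetical skill vocabulary; a real system would learn this mapping.
SKILLS = {
    "pick up": "grasp",
    "place": "release",
    "move to": "navigate",
}

def ground_instruction(instruction: str) -> list[tuple[str, str]]:
    """Return (skill, argument) steps recognized in the instruction."""
    plan = []
    # Split conjoined clauses, then match each against known skill phrases.
    for clause in instruction.lower().split(" and "):
        for phrase, skill in SKILLS.items():
            if clause.startswith(phrase):
                plan.append((skill, clause[len(phrase):].strip()))
    return plan

plan = ground_instruction("Pick up the red cube and place it on the shelf")
print(plan)  # [('grasp', 'the red cube'), ('release', 'it on the shelf')]
```

The output is a symbolic plan the downstream control stack could dispatch to motion primitives; the hard part in practice is resolving referents like "the red cube" against perception.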
- Ideally, hands-on experience with robotics, embodied AI, or reinforcement/imitation learning.
- MSc or PhD in Computer Science, Robotics, Machine Learning, or a related field, or equivalent industry experience.