Physical Intelligence is bringing general-purpose AI into the physical world. We are a group of engineers, scientists, roboticists, and company builders developing foundation models and learning algorithms to power the robots of today and the physically-actuated devices of the future.
Focus: Foundation models and learning algorithms for robots (vision-language-action models)
Employee count: 83
Notable funding: $400M Series A (Nov 2024) following $70M seed (Mar 2024)
Related Companies

| Company | HQ | Industry | Total Funding |
| --- | --- | --- | --- |
| Essential AI | 🇺🇸 San Francisco, US | Data and Analytics, DeepTech, Education, Information Technology, Software | $65M |
| Together AI | 🇺🇸 US | Data and Analytics, DeepTech, Health, Information Technology, Internet Services, Software | $534M |
| Grafton Sciences | 🇺🇸 Redwood City, US | Biotechnology, Data and Analytics, DeepTech, Software | - |
| Modular | 🇺🇸 US | Data and Analytics, DeepTech, Information Technology, Software | $380M |
| mimic | 🇨🇭 CH | - | - |
Company Overview
Problem Domain: Robotics software and AI for physical control
Founded: 2024
Industry: Research Services

Funding Track Record
Seed (March 2024): $70M
Series A (November 2024): $400M
Valuation reported at roughly $2B

Investor Signal
“Backed by investors including Jeff Bezos, Lux Capital, Thrive Capital, OpenAI Startup Fund, Redpoint Ventures, Bond”
Join the Team
Machine Learning Infrastructure Engineer
On-Site • San Francisco Bay Area, US
Who you are
Strong software engineering fundamentals and experience building ML training infrastructure or internal platforms
Hands-on large-scale training experience in JAX (preferred) or PyTorch
Familiarity with distributed training, multi-host setups, data loaders, and evaluation pipelines
Experience managing training workloads on cloud platforms (e.g., SLURM, Kubernetes, GCP TPU/GKE, AWS)
Ability to debug and optimize performance bottlenecks across the training stack
Strong cross-functional communication and ownership mindset
Deep ML systems background (e.g., training compilers, runtime optimization, custom kernels)
Experience operating close to hardware (GPU/TPU performance tuning)
Background in robotics, multimodal models, or large-scale foundation models
Experience designing abstractions that balance researcher flexibility with system reliability
What the job involves
In this role you will help scale and optimize our training systems and core model code
You’ll own critical infrastructure for large-scale training, from managing GPU/TPU compute and job orchestration to building reusable and efficient JAX training pipelines (see the sketch below)
You’ll work closely with researchers and model engineers to translate ideas into experiments—and those experiments into production training runs
This is a hands-on, high-leverage role at the intersection of ML, software engineering, and scalable infrastructure
The ML Infrastructure team supports and accelerates PI’s core modeling efforts by building the systems that make large-scale training reliable, reproducible, and fast
The team works closely with research, data, and platform engineers to ensure models can scale from prototype to production-grade training runs
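To make the "reusable JAX training pipelines" above concrete, here is a minimal, hypothetical sketch of a data-parallel training step. The toy linear model, shapes, learning rate, and plain SGD update are placeholder assumptions for illustration, not Physical Intelligence code.

```python
# Hypothetical sketch: a data-parallel JAX training step.
# Model, shapes, and optimizer are illustrative placeholders.
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

def loss_fn(params, batch):
    # Toy linear regression; a real pipeline would plug in the actual model.
    preds = batch["x"] @ params["w"] + params["b"]
    return jnp.mean((preds - batch["y"]) ** 2)

@jax.jit
def train_step(params, batch, lr=1e-3):
    loss, grads = jax.value_and_grad(loss_fn)(params, batch)
    # Plain SGD for brevity; production code would typically use an
    # optimizer library instead.
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params, loss

# Lay all local devices (TPU or GPU) out on a 1-D mesh and shard the batch
# along its leading axis; jit then compiles a single SPMD program.
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))
batch_sharding = NamedSharding(mesh, P("data"))

params = {"w": jnp.zeros((16, 1)), "b": jnp.zeros((1,))}
batch = {  # batch size must divide evenly across devices
    "x": jax.device_put(jnp.ones((32, 16)), batch_sharding),
    "y": jax.device_put(jnp.ones((32, 1)), batch_sharding),
}
params, loss = train_step(params, batch)
```

Expressing parallelism through the sharding of the inputs, rather than inside the step function, keeps the same training code runnable on one device or a full cluster, which is the kind of researcher-facing abstraction the role describes.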
Own training/inference infrastructure: Design, implement, and maintain systems for large-scale model training, including scheduling, job management, checkpointing, and metrics/logging
Scale distributed training: Work with researchers to scale JAX-based training across TPU and GPU clusters with minimal friction
Optimize performance: Profile and improve memory usage, device utilization, throughput, and distributed synchronization (see the profiling sketch after this list)
Enable rapid iteration: Build abstractions for launching, monitoring, debugging, and reproducing experiments
Manage compute resources: Ensure efficient allocation and utilization of cloud-based GPU/TPU compute while controlling cost
Partner with researchers: Translate research needs into infra capabilities and guide best practices for training at scale
Contribute to core training code: Evolve JAX model and training code to support new architectures, modalities, and evaluation metrics
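As a small illustration of the "Optimize performance" responsibility, here is a hypothetical helper that traces a single step with jax.profiler. The train_step argument is assumed to be a jitted step like the sketch earlier; the log directory is a placeholder.

```python
# Hypothetical sketch: profiling one training step with jax.profiler.
# Traces can be opened in TensorBoard/Perfetto to inspect device
# utilization, memory, and per-op timing.
import jax

def profile_one_step(train_step, params, batch, logdir="/tmp/jax-trace"):
    # Warm-up run so XLA compilation time does not pollute the trace.
    params, loss = train_step(params, batch)
    loss.block_until_ready()
    with jax.profiler.trace(logdir):
        params, loss = train_step(params, batch)
        # JAX dispatch is asynchronous; block so the traced step
        # actually executes inside the trace window.
        loss.block_until_ready()
    return params, loss
```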