
Runpod is a foundational platform for developers to build and run custom AI systems that scale, trusted by more than 500,000 developers at the world's leading AI companies. With GPU Cloud, users can spin up an on-demand GPU instance in a few clicks. With Serverless, users can create autoscaling API endpoints to serve inference on their models in production.

Product: On-demand GPU Pods, Serverless autoscaling GPU endpoints, and multi-node Instant Clusters for training and inference
Customers / scale: Claims more than 500,000 developers; reported $120M ARR (Jan 2026)
Founders: Zhen Lu (Co-founder & CEO) and Pardeep Singh (Co-founder & CTO)
Funding: $20M seed co-led by Intel Capital and Dell Technologies Capital (May 2024)
Focus: AI infrastructure for training and inference (GPU compute, scaling, and deployment)
Founded: 2022
Industry: Cloud infrastructure / AI infrastructure
Total raised: $20M
Angels: Angel and strategic participants reported to include Julien Chaumond, Nat Friedman, Amjad Masad, and Adam Lewis
Runpod is pioneering the future of AI and machine learning, offering cutting-edge cloud infrastructure for full-stack AI applications. Founded in 2022, we are a rapidly growing, well-funded company with a remote-first organization spread globally. Our mission is to empower innovators and enterprises to unlock AI's true potential, driving technology and transforming industries. Join us as we shape the future of AI.
As our cloud platform grows, we are seeking a full-time, remote Storage Engineer to join our team. This role will be central to scaling the platform as we integrate new data centers, optimize existing functionality, and deliver new features to customers. You will work on the storage systems and services that power training, inference, and HPC workloads across a global network of data centers, as well as more traditional cloud workloads.
If you are passionate about cloud infrastructure, distributed systems, and enabling the future of high-performance AI computation, we want to hear from you. Join our team and help shape the future of computing!
Responsibilities:
Requirements:
Preferred:
What You'll Receive:
Runpod is committed to maintaining a workplace free from discrimination and upholding the principles of equality and respect for all individuals. We believe that diversity in all its forms enhances our team. As an equal opportunity employer, Runpod is committed to creating an inclusive workforce at every level. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, marital status, protected veteran status, disability status, or any other characteristic protected by law.