Runpod is a foundational platform for developers to build and run custom AI systems that scale, trusted by more than 500,000 developers at the world’s leading AI companies.
With GPU Cloud, users can spin up an on-demand GPU instance in a few clicks. With Serverless, users can create autoscaling API endpoints for scaling inference on their models in production.
Product: On-demand GPU Pods, Serverless autoscaling GPU endpoints, and multi-node Instant Clusters for training and inference
Customers / scale: Claims more than 500,000 developers; reported $120M ARR (Jan 2026)
Founders: Zhen Lu (Co-founder & CEO) and Pardeep Singh (Co-founder & CTO)
Funding: $20M seed co-led by Intel Capital and Dell Technologies Capital (May 2024)
Related Companies

Company | HQ | Industry | Total Funding
Replit | US | Software | $872M
GMI Cloud | US | Administrative Services, Data and Analytics, Hardware, Information Technology, Software | -
Triomics | US | Data and Analytics, DeepTech, Health, Information Technology, Software | -
SteepRock Inc | Miami, US | Biotechnology, Data and Analytics, DeepTech, Health, Manufacturing | -
Qualified Health | US | Data and Analytics, DeepTech, Health, Software | -
Company Overview

Problem Domain: AI infrastructure for training and inference (GPU compute, scaling, and deployment)
Founded: 2022
Industry: Cloud infrastructure / AI infrastructure
Tech Stack: GPU instances (multiple SKUs including RTX 4090, A100, H100/H200); Serverless autoscaling endpoints; multi-node cluster orchestration
Funding Track Record: Seed (May 2024), $20M. Angel and strategic participants reported to include Julien Chaumond, Nat Friedman, Amjad Masad, and Adam Lewis.
Join the Team
Manager, Datacenter Network Engineering
Remote • Seattle, WA, US
Runpod is pioneering the future of AI and machine learning, offering cutting-edge cloud infrastructure for full-stack AI applications. Founded in 2022, we are a rapidly growing, well-funded, remote-first company with a global team across the US, Canada, and Europe. Our mission is to create a foundational platform that enables developers and companies to build, deploy, and scale custom AI systems with speed and flexibility.
We are looking for an Engineering Manager, Datacenter Network Engineering to lead the team responsible for designing, deploying, and operating Runpod's global datacenter and backbone network. This role manages engineers working on L2/L3 fabrics, high-performance GPU networking, and global WAN connectivity that underpin our AI platform.
You will lead execution across multiple regions and vendors, while setting technical direction for network architecture that supports massive east-west traffic, low-latency GPU collectives, and secure multi-tenant isolation. This role is hands-on at the architectural level, while focused on team leadership, operational excellence, and scalability.
Preferred Qualifications
Experience operating networks for GPU clusters, HPC environments, or AI/ML platforms.
Familiarity with RDMA tuning, NCCL traffic patterns, and distributed training communication models.
Experience with automation frameworks and network-as-code (e.g., Terraform, Ansible, internal tooling).
Background in multi-region or multi-cloud networking architectures.
Experience working in high-growth or hyperscale infrastructure environments.
What You'll Receive:
The competitive base pay for this position ranges from $150,000 to $240,000. This salary range may be inclusive of several career levels at Runpod and will be narrowed during the interview process based on a number of factors, including the candidate's experience, qualifications, and location.
Meaningful equity in a fast-growing company: everyone on the team receives stock options; your impact drives our growth, and you share in the upside.
Generous medical, dental & vision plans — we cover 100% for all employees and partial for dependents.
Flexible PTO: take the time you need to recharge.
Most roles are remote-first, with inclusive, collaborative teams using Slack as the main form of internal communication.
Join a passionate team on the cutting edge of AI infrastructure — where culture, learning, and ownership are at the heart of how we scale.
Runpod is committed to maintaining a workplace free from discrimination and upholding the principles of equality and respect for all individuals. We believe that diversity in all its forms enhances our team. As an equal opportunity employer, Runpod is committed to creating an inclusive workforce at every level. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, marital status, protected veteran status, disability status, or any other characteristic protected by law. We welcome every qualified candidate eligible to work in the United States; however, we are currently unable to sponsor employment visas.
Responsibilities
Lead the Datacenter Networking Team:
Manage and grow a team of network engineers responsible for datacenter fabrics, interconnects, and global WAN connectivity. Provide mentorship, technical guidance, and clear ownership boundaries.
Own Datacenter Network Architecture:
Define and evolve network designs for GPU-heavy clusters, including spine-leaf topologies, ECMP routing, and high-bandwidth east-west traffic patterns.
High-Performance GPU Networking:
Oversee design and operation of InfiniBand and RoCE-based fabrics supporting distributed training and inference workloads. Ensure performance, loss characteristics, and congestion control meet AI workload requirements.
Encapsulation & Overlay Protocols:
Guide implementation and operations of encapsulation technologies such as VXLAN, EVPN, Geneve, or similar, enabling scalable multi-tenant isolation and flexible network provisioning.
Global WAN & Backbone Connectivity:
Lead strategy and execution for global WAN connectivity, including private backbone links, IX connectivity, and hybrid connectivity with cloud providers and partners.
Reliability & Operations:
Establish operational best practices for monitoring, capacity planning, change management, incident response, and post-mortems across the network stack.
Cross-Functional Collaboration:
Partner closely with Infrastructure, SRE, Hardware, and Product Engineering teams to ensure network capabilities align with platform and customer requirements.
Vendor & Partner Management:
Work with hardware vendors, colocation providers, and transit partners on network design, procurement, deployment timelines, and escalations.
Security & Segmentation:
Ensure network designs support secure isolation, DDoS resilience, and compliance requirements without compromising performance.
Requirements
Engineering Leadership Experience:
3+ years managing network or infrastructure engineering teams, with experience scaling teams and systems in production environments.
Datacenter Networking Expertise:
8+ years designing and operating large-scale datacenter networks, including spine-leaf architectures, BGP-based routing, and high-throughput fabrics.
Encapsulation & Overlays:
Strong hands-on experience with VXLAN/EVPN or equivalent encapsulation protocols, including control-plane and data-plane considerations.
High-Performance Networking:
Proven experience with InfiniBand and/or RoCE, including congestion management, lossless Ethernet concepts, and performance tuning for GPU workloads.
Global WAN Experience:
Deep familiarity with global WAN technologies, including private backbone design, inter-region connectivity, routing policy, and traffic engineering.
Linux & Network OS Fluency:
Comfortable working with Linux-based systems, network operating systems, and automation tooling.
Operational Excellence:
Strong background in network observability, incident management, capacity forecasting, and change control.
Communication & Leadership:
Clear written and verbal communication skills, with the ability to align stakeholders and lead teams through complex technical challenges.