
We build the models. You build the future. AGI for the enterprise, starting with software agents.

Founded: 2023
Product focus: Foundation models and developer tools (coding copilots / software agents)
Founders / leadership: Jason Warner and Eiso Kant, co-founders and co-CEOs
Recent funding: $500M Series B (Oct 2024); total reported funding ≈ $626M
| Company | Poolside |
|---|---|
| Description | Accelerating software development and coding through generative AI and foundation models |
| Founded | 2023 |
| Category | Generative AI / Developer tools |
| Series B (Oct 2024) | $500M; round reported to include participation from Nvidia and eBay Ventures; post-money valuation reported at ~$3B |
| Earlier reported raise | $400M+ (reported while in process), at a reported $2B valuation |
| Investors | "Participation from institutional and strategic investors including Bain Capital Ventures, Nvidia, and eBay Ventures" |
About Poolside
In this decade, the world will create Artificial General Intelligence. Only a small number of companies will achieve this, and their ability to stack advantages and pull ahead will define the winners. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research, engineering, infrastructure, and deployment at scale. They will continue to scale their training toward larger and more capable models. They will earn the right to raise large amounts of capital along the journey to enable this. They will create powerful economic engines. They will obsess over the success of their users and customers.
Poolside exists to be this company.
About Our Team
We are a remote-first team spread across Europe and North America. We come together in person for three days each month, always Monday–Wednesday, with an open invitation to stay the whole week. We also hold longer off-sites once a year.
Our team combines research-oriented and engineering-oriented profiles; however, everyone deeply cares about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which lets us compound our efforts.
About The Role
You will be a core member of our Pretraining Data team, responsible for building and scaling our Model Factory: our system for quickly training, scaling, and experimenting with our foundation models. This is a hands-on role where your #1 mission is to architect and maintain the high-performance pipelines that transform trillions of raw tokens into the high-quality dataset "fuel" our models require.
To enable us to conduct and implement the latest research, you'll engineer the ingestion, deduplication, and streaming systems that handle petabyte-scale data. You will bridge the gap between raw web crawls and our GPU clusters, directly influencing model performance through superior data modeling, algorithmic sorting, and distributed pipeline optimization. You will collaborate closely with other teams, such as Pretraining, Post-training, Evals, and Product, to generate high-quality datasets that target missing model capabilities and downstream use cases.
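At toy scale, the deduplication stage of such a pipeline can be sketched as a hash-based exact-dedup pass. This is purely illustrative: `dedup_stream` and the in-memory digest set are assumptions for the example, not a description of poolside's actual system, which at petabyte scale would shard digests across a distributed store and typically add near-duplicate detection (e.g., MinHash) on top.

```python
import hashlib

def dedup_stream(records):
    """Exact deduplication via content hashing (illustrative sketch).

    Keeps only a 32-byte SHA-256 digest per seen document, so memory
    grows with the number of *unique* documents, not their total size.
    """
    seen = set()
    for text in records:
        # Hash lightly normalized text so only a digest is retained.
        digest = hashlib.sha256(text.strip().encode("utf-8")).digest()
        if digest not in seen:
            seen.add(digest)
            yield text

docs = ["def f(): pass", "def f(): pass", "print('hi')"]
print(list(dedup_stream(docs)))  # → ["def f(): pass", "print('hi')"]
```

Because the function is a generator, it composes naturally with streaming ingestion: documents flow through one at a time rather than being materialized in memory.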
YOUR MISSION
To deliver large, high-quality, and diverse datasets of natural language and source code for training poolside models and coding agents.
Skills & Experience
Strong background in building production-grade, distributed data systems for machine learning, with experience in:
Orchestration: Slurm, Airflow, or Dagster
Observability & Reliability: CI/CD, Grafana, Prometheus, etc.
Infra: Git, Docker, k8s, cloud managed services
Batched inference (e.g., vLLM)
Performance obsession, especially with large-scale GPU clusters and distributed pipelines
Expert-level Python knowledge and the ability to write clean, maintainable code
Strong algorithmic foundations
Proficiency with libraries like Polars, Dask, or PySpark
Nice to have:
Experience in building trillion-scale SOTA pretraining datasets
Experience translating research to production at scale
Experience with OCR, web crawling, or evals
Prior experience pre-training LLMs