
At Lyceum, we believe accessing computing power should be as simple as plugging a cord into an outlet. That's why we built the first software that doesn't just connect you to the GPUs and CPUs you need - it brings them to you. Our platform simplifies AI model deployment by removing infrastructure complexity. We track your workload and give you access to the ideal hardware configuration at the click of a button, directly from your IDE. No more overprovisioning. No more DevOps costs. 100% efficiency. Know your costs upfront, benefit from low energy prices, and leverage low-demand cycles. We are a European company committed to running your workloads in 100% sovereign data centers.

What they do: European GPU cloud platform for training and deploying generative AI models (serverless inference, GPU VMs, managed clusters)
Headquarters: Zürich, Switzerland
Stage & funding: Pre-Seed; €10.3M pre-seed announced June 24, 2025
Notable investors: redalpine (lead); participation from 10x Founders
Focus: AI infrastructure for training and inference of generative AI models
Sector: Cloud infrastructure / AI infrastructure
Round size: €10.3 million
Other investors: Participation from 10x Founders
“redalpine leading a €10.3M pre-seed indicates institutional VC backing at an early stage”
About Lyceum:
Lyceum is building a user-centric GPU cloud from the ground up. Our mission is to make high-performance computing seamless, accessible, and tailored to the needs of modern AI and ML workloads. We're not just deploying infrastructure - we’re designing and building our own large-scale GPU cluster from scratch. If you've ever wanted to help shape a cloud platform from day one, this is your moment.
The Role:
We're looking for a systems-minded software engineer to lead the design and implementation of our orchestration and scheduling systems. This is a foundational role - you'll be building the core infrastructure that determines how jobs run across thousands of GPUs. You'll work closely with our hardware, backend, and data teams to create a robust orchestration layer using open-source tools and custom-built logic. You'll need to balance performance, reliability, and user experience, all while moving quickly.
What You’ll Do:
What We’re Looking For:
Bonus Points:
Why Join Us: