
We believe the future of AI is private, sovereign, and purpose-built for those who demand privacy, control, and compliance. As a leading European NeoCloud, our mission is to deliver end-to-end Private AI capabilities and Cloud Services on your terms. Our Private GPT is the European, compliant alternative to ChatGPT, built for European organizations that want to let staff use AI, connected to their own data, safely and securely, without the risk of sharing that data with the world.

As an official NVIDIA Elite Cloud Service Provider, DGX Cloud Partner, and DGX Solution Provider, we help European organizations navigate the complexities of deploying supercomputing and GPU workloads across private cloud, hybrid, and multi-cloud environments. We offer NVIDIA platforms such as DGX, HGX, OVX, and RTX with enterprise-class GPUs including the B200, H200, H100, L40S, and L4, designed for graphics/rendering, digital twins, model training, and inference at any scale. Nebul leverages the latest NVIDIA supercomputers to power your AI applications from efficient, green-energy data centers strategically located across the European continent.

In June 2024, Nebul announced a €20M funding round to expand its EU Sovereign AI Cloud and data centers, serving European native AI infrastructure projects and providing related engineering support. This funding allows Nebul to serve its expanding customer base and meet growing AI infrastructure demand in Europe.

For project inquiries, contact us here: https://nebul.com/contact/#get-in-touch or email us: hello@nebul.com
What they do: European private AI and sovereign cloud provider offering GPU-accelerated infrastructure and Private GPT
HQ & origins: Headquartered in Amsterdam; founded in 2021
Funding: €20M growth investment announced Jun 18, 2024 (lead investor: BeStacking)
Tech partnerships: Official NVIDIA Elite Cloud Service Provider and DGX Cloud/solution partner
| Company | Details |
|---|---|
| Focus | Data sovereignty, privacy-compliant AI deployment, GPU-accelerated compute for AI workloads |
| Founded | 2021 |
| Industry | Technology, Information and Internet |
| Funding | €20M growth investment (reported as a multi-ten-million-euro round) |
| Funding notes | Announced as a growth investment to expand EU sovereign AI cloud and data centers. “BeStacking led a multi-ten-million euro growth investment” |
About Nebul
At Nebul, we’re building Europe’s sovereign AI cloud — trusted, secure, and purpose-built for the next generation of intelligent infrastructure.
Beyond infrastructure, we’re investing heavily in AI-native applications that sit on top of this cloud: intelligent systems that reason, automate, and collaborate with humans using large language models (LLMs).
What You’ll Be Doing
In this role, you’ll play a key part in designing and building applications that communicate with LLMs, orchestrate agentic workflows, and automate complex processes end to end.
You’ll work across backend, AI orchestration, and lightweight frontend layers to deliver scalable, observable, and reliable AI systems. Your focus will be on agent-based architectures, LLM integrations, and automation flows, rather than traditional CRUD applications.
You’ll have significant autonomy in technical decisions, influence architectural direction, and help define best practices for building AI systems that are ready for real-world production use.
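To make the idea of an agentic workflow concrete, here is a minimal sketch of the pattern described above: a loop in which a model chooses which tool to invoke until it decides the goal is met. This is purely illustrative and not Nebul's actual stack; the LLM call is stubbed out, and the tool and function names (`search_docs`, `stub_llm`, `run_agent`) are hypothetical.

```python
# Minimal agent-loop sketch: a (stubbed) LLM picks a tool each step,
# the loop executes it, and the observation feeds back into the history.
from typing import Callable

def search_docs(query: str) -> str:
    """Hypothetical tool: look up internal documents."""
    return f"3 documents matched '{query}'"

TOOLS: dict[str, Callable[[str], str]] = {"search_docs": search_docs}

def stub_llm(history: list[str]) -> tuple[str, str]:
    """Stand-in for a real chat-completion call: returns (action, argument)."""
    if not any("documents matched" in h for h in history):
        return ("search_docs", "GPU cluster onboarding")
    return ("finish", "Onboarding guide found.")

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"goal: {goal}"]
    for _ in range(max_steps):
        action, arg = stub_llm(history)
        if action == "finish":
            return arg
        history.append(TOOLS[action](arg))  # execute the chosen tool
    return "step budget exhausted"

print(run_agent("find the GPU cluster onboarding guide"))
```

In a production system, `stub_llm` would be replaced by a real LLM API call with tool schemas, and the loop would add observability, retries, and guardrails — which is exactly the gap between a demo and the production-grade systems this role targets.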
Key Responsibilities
What Your Day Will Not Look Like
What You Bring
Bonus Points If You Have
Eligibility & Application Information
We welcome non-native Dutch speakers to apply. However, to be eligible, you must:
Ready to build production-grade AI systems that go beyond demos?
Apply now through Frank Poll and help Nebul shape the future of intelligent, sovereign AI applications.