Multiverse Computing is the leading quantum AI software company. We apply quantum and quantum-inspired AI to solve complex problems in finance, energy, manufacturing, defense, cybersecurity, life…
Founded: 2019; headquartered in Donostia / San Sebastián, Spain
Focus: Quantum and quantum-inspired AI software for optimization and model compression
Flagship products: Singularity (quantum-inspired optimization) and CompactifAI (AI/LLM compression)
Recent funding: Announced large Series B (€189M) and prior Series A (€25M)
Company Overview
Problem Domain
Enterprise optimization and AI model compression for industries such as finance, energy, manufacturing, cybersecurity and telecommunications.
Founded
2019
Industry
Software Development
Funding Track Record
Series A - 2024-03-04
€25 million
Oversubscribed Series A announced March 2024; additional participation later from CDP Venture Capital.
Series B - 2025-06-12
€189 million
Announced June 2025 to support commercialization of CompactifAI and broader AI efforts.
Investor Signal
“Institutional and corporate participation including Bullhound Capital, Columbus Venture Partners, CDP Venture Capital, HP Tech Ventures, Toshiba and others”
Founders
What we do
Join the Team
MLOps/LLMOps Engineer
On-Site • Barcelona, ES
Who you are
Bachelor's or master's degree in Computer Science, Engineering, or a related field
1+ years of experience as an ML/LLM engineer in public cloud platforms
Proven experience in MLOps, LLMOps, or related roles, with hands-on experience in managing machine/deep learning and large language model pipelines from development to deployment and monitoring
Experience in cloud platforms (e.g., AWS, Azure) for ML workloads, MLOps, DevOps, or Data Engineering
Knowledge of model parallelism for model training and serving, as well as data parallelism and hyperparameter tuning
Proficiency in programming languages such as Python, distributed computing tools such as Ray, model parallelism frameworks such as DeepSpeed, Fully Sharded Data Parallel (FSDP), or Megatron LM
Knowledge of generative AI applications and domains, including content creation, data augmentation, and style transfer
Strong understanding of Generative AI architectures and methods, such as chunking, vectorization, context-based retrieval and search, and working with Large Language Models like OpenAI GPT-3.5/4, Llama 2, Llama 3, Mistral, etc.
Excellent English; Spanish is a plus
Great communication skills and a passion for working collaboratively in an international environment
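The retrieval concepts named in the requirements (chunking, vectorization, context-based retrieval) can be sketched in miniature. The toy below uses bag-of-words vectors and cosine similarity in plain Python purely for illustration; production pipelines use learned embedding models and a vector store, and nothing here reflects Multiverse Computing's actual stack.

```python
from collections import Counter
import math

def chunk_sentences(text):
    # Naive sentence-level chunking; real pipelines chunk on token
    # counts or semantic boundaries.
    return [s.strip() for s in text.split(".") if s.strip()]

def vectorize(text):
    # Toy bag-of-words "embedding"; real systems use learned embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    # Context-based retrieval: rank chunks by similarity to the query.
    qv = vectorize(query)
    return sorted(chunks, key=lambda c: cosine(qv, vectorize(c)), reverse=True)[:k]

docs = ("Singularity targets quantum-inspired optimization. "
        "CompactifAI compresses large language models.")
chunks = chunk_sentences(docs)
print(retrieve("compresses language models", chunks))
```

The same ranking step generalizes directly once `vectorize` is swapped for a real embedding model and `chunks` for entries in a vector database.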
What the job involves
Fixed-term contract ending in June 2026
Deploy cutting-edge ML models and LLMs to Fortune Global 500 clients
Join a world-class team of Quantum experts with an extensive track record in both academia and industry
Collaborate with the founding team in a fast-paced startup environment
Design, develop, and implement Machine Learning (ML) and Large Language Model (LLM) pipelines, encompassing data acquisition, preprocessing, model training and tuning, deployment, and monitoring
Employ automation practices and tools such as GitOps, CI/CD pipelines, and containerization technologies (Docker, Kubernetes) to enhance ML/LLM processes throughout the Large Language Model lifecycle
Establish and maintain comprehensive monitoring and alerting systems to track Large Language Model performance, detect data drift, and monitor key metrics, proactively addressing any issues
Conduct truth analysis to evaluate the accuracy and effectiveness of Large Language Model outputs against known, accurate data
Collaborate closely with Product and DevOps teams and Generative AI researchers to optimize model performance and resource utilization
Manage and maintain cloud infrastructure (e.g., AWS, Azure) for Large Language Model workloads, ensuring both cost-efficiency and scalability
Stay updated with the latest developments in ML/LLM Ops, integrating these advancements into generative AI platforms and processes
Communicate effectively with both technical and non-technical stakeholders, providing updates on Large Language Model performance and status
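As one concrete illustration of the drift-monitoring duties above: the Population Stability Index (PSI) is a common, generic heuristic for flagging data drift between a training baseline and live traffic. The implementation and the 0.1/0.25 thresholds below are conventional rules of thumb, not a description of any specific tool used in this role.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.
    Rule of thumb: PSI < 0.1 means no meaningful drift, > 0.25 means
    significant drift worth alerting on."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def dist(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Smooth to avoid log(0) on empty bins.
        return [(c + 1e-6) / (len(sample) + bins * 1e-6) for c in counts]

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
baseline   = [random.gauss(0.0, 1.0) for _ in range(5000)]  # training data
live_ok    = [random.gauss(0.0, 1.0) for _ in range(5000)]  # same distribution
live_shift = [random.gauss(0.8, 1.0) for _ in range(5000)]  # mean has drifted

print(psi(baseline, live_ok))     # small: no alert
print(psi(baseline, live_shift))  # large: drift alert
```

In a monitoring system this check would run per feature on a schedule, with PSI values exported as metrics and the thresholds wired into alerting.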
The application process
We are looking to fill this role immediately and are reviewing applications daily. Expect a fast, transparent process with quick feedback