
We are an AI research and application company that researches, develops and operationalises large-scale AI models for language, image data and strategy, thereby contributing to securing Europe's digital sovereignty.

Headquarters: Heidelberg, Germany
Founded: 2019
Core focus: Sovereign, human-centric generative AI and large-scale models
Notable product: PhariaAI (AI operating system)
Total funding reported: ≈ $650M
Focus: Generative AI for language, image data and strategic applications, with emphasis on sovereignty and domain specialization
Industry: Technology, Information and Internet
Series A funding: €23M (~$27M)
Series B funding: $500M (announced); the consortium included strategic and corporate investors (Schwarz Group, Bosch Ventures, HPE, SAP, Burda) and institutional VCs (Earlybird, Lakestar)
Aleph Alpha Research’s mission is to deliver category-defining AI innovation that enables open, accessible, and trustworthy deployment of GenAI in industrial applications. Our organization develops foundational models and next-generation methods that make it easy and affordable for Aleph Alpha’s customers to increase productivity in finance, administration, R&D, logistics, and manufacturing processes.
Our model releases and research blog can be found on our company's Hugging Face page.
We are growing our Frontier LLM Research Engineering team with an experienced model trainer who will be responsible for conducting experiments that drive novel research across the entire lifecycle of model training. This involves proposing novel LLM architectures, datamixes, and evaluations to advance our offerings. This role sits at the intersection of research and engineering and requires a strong coding and AI science background.
The goal of our Frontier team is to own the entire lifecycle of model training. This role will contribute to research across a wide range of topics, including (continuous) pre-training, post-training, and synthetic data generation, with a strong emphasis on scalable training and data pipelines. Since the model artifacts produced by Frontier will power our product offerings, we expect the successful candidate to be excited about language model applications and their real-world use cases.
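The data-mix work mentioned above typically involves weighted sampling across heterogeneous corpora during pre-training. A minimal illustrative sketch of that idea follows; the corpus names, weights, and function are hypothetical examples, not Aleph Alpha's actual pipeline:

```python
import random


def sample_datamix(corpora, weights, n, seed=0):
    """Draw n training documents from several corpora according to
    mixture weights, a common setup in LLM pre-training data pipelines."""
    rng = random.Random(seed)
    names = list(corpora)
    total = sum(weights[name] for name in names)
    probs = [weights[name] / total for name in names]
    batch = []
    for _ in range(n):
        # Pick a corpus proportionally to its weight, then a document from it.
        corpus = rng.choices(names, weights=probs, k=1)[0]
        batch.append(rng.choice(corpora[corpus]))
    return batch


# Hypothetical corpora and mixture weights for illustration only.
corpora = {
    "web": ["web_doc_1", "web_doc_2", "web_doc_3"],
    "code": ["code_doc_1", "code_doc_2"],
    "papers": ["paper_doc_1"],
}
weights = {"web": 0.6, "code": 0.3, "papers": 0.1}
batch = sample_datamix(corpora, weights, n=10)
```

Changing the weights shifts the proportion of each domain seen during training, which is one of the levers (alongside architecture and evaluations) a frontier training team experiments with.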
Basic Qualifications
Preferred Qualifications