At Neuralk-AI, we develop our own AI embedding models specialized for structured data representation, allowing enterprises to build custom AI solutions that accurately interact with their own structured data.
Our Graph Neural Network (GNN)-based models deliver strong performance on representation and transfer learning tasks over structured datasets, making them well suited to data cleaning and enrichment.
What they do: Build tabular foundation models and APIs for structured (tabular) enterprise data
Headquarters: Paris, Île‑de‑France, France
Founders: Antoine Moissenot (CEO) and Alexandre Pasquiou (CSO)
Stage & funding: Pre‑seed; $4M round led by Fly Ventures (Feb 2025)
Tech focus: Graph Neural Networks (GNN)‑based embeddings for structured data
Company Overview
Problem Domain: Representation and predictive modeling for structured (tabular) enterprise data
Founded: 2023
Industry: IT Services and IT Consulting
Tech Stack: Graph Neural Networks (GNNs); tabular foundation/embedding models; APIs for model access
Funding Track Record: Pre-seed (February 2025), $4 million, with participation from StemAI and multiple angel investors
Investor Signal: "Led by Fly Ventures with participation from StemAI and prominent angel investors (including Thomas Wolf and others)"
Join the Team
ML Ops Engineer
Hybrid • Paris, FR
About the Company
Neuralk-AI is a passionate team leading the way in AI innovation, committed to driving the rapid adoption of transformative AI applications. Our focus is on developing the technical tools to allow any company to build AI applications that natively interact with their structured databases (tabular or graph databases). Specifically, we develop a modern AI embedding platform to convert any structured database to a vectorstore that can later be combined with classic Machine Learning models for classification, regression, or clustering purposes.
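The workflow this describes — embed the rows of a structured table into a vector store, then run classic ML on the embeddings — can be sketched as follows. This is an illustrative mock-up only: Neuralk's embedding API is not public, so `embed_rows` is a hypothetical stand-in (a simple hashed bag-of-features encoder), and `predict_nearest_centroid` stands in for any downstream classifier.

```python
# Illustrative sketch, NOT Neuralk's actual API: structured rows -> dense
# embeddings (the "vectorstore") -> a classic ML model on top.
import hashlib
import numpy as np

def embed_rows(rows, dim=32):
    """Map each row (a dict of column -> value) to a unit-norm vector
    using a hashed bag-of-features encoding (hypothetical stand-in)."""
    vectors = []
    for row in rows:
        vec = np.zeros(dim)
        for col, val in row.items():
            h = int(hashlib.md5(f"{col}={val}".encode()).hexdigest(), 16)
            vec[h % dim] += 1.0  # bucket each (column, value) feature
        vectors.append(vec / np.linalg.norm(vec))
    return np.vstack(vectors)

def predict_nearest_centroid(X_train, y_train, X_query):
    """Classify query vectors by similarity to per-class mean embeddings."""
    y = np.array(y_train)
    classes = sorted(set(y_train))
    centroids = np.vstack([X_train[y == c].mean(axis=0) for c in classes])
    sims = X_query @ centroids.T
    return [classes[i] for i in sims.argmax(axis=1)]

# Toy structured data, e.g. a product-categorization cleanup task.
rows = [
    {"category": "shoes", "brand": "acme"},
    {"category": "shoes", "brand": "zenith"},
    {"category": "laptops", "brand": "acme"},
    {"category": "laptops", "brand": "zenith"},
]
labels = [0, 0, 1, 1]

X = embed_rows(rows)                       # table -> vectorstore
preds = predict_nearest_centroid(X, labels, X)  # classic ML on embeddings
```

The same embedding matrix `X` could equally feed a regression or clustering model, which is the flexibility the platform description emphasizes.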
As an early-stage AI-driven startup backed by significant funding (several millions), we base our approach on state-of-the-art academic research to drive practical business solutions. We value clear communication and simplicity in our approaches, promoting a constant optimization mindset.
Join Neuralk to be part of a growing team, eager to learn and adapt, united by the belief that our technology can make a significant positive impact and contribute to transforming the AI industry.
Co-founders: Alexandre Pasquiou (CSO) & Antoine Moissenot (CEO)
Diversity Commitment: Neuralk is dedicated to equal opportunity employment and fosters an environment that is open and respectful of diversity. All applicants are encouraged to apply.
About the Role
Job Title: ML Ops Engineer
Location: Station F, Paris
Employment Type: Full Time
Department: Open
Mission Highlights
As an MLOps Engineer, your mission will be to bridge the gap between machine learning research and production systems, ensuring seamless integration, deployment, and management of AI models on a large scale. You will collaborate closely with our research, data, and engineering teams (~8 people) to ensure the scalability, performance, and reliability of our AI-driven solutions, focusing on automation, model lifecycle management, and continuous delivery.
Profile
M.S. or B.S. in Computer Science, Software Engineering, or a closely related field with a focus on DevOps, MLOps, or Machine Learning.
5+ years of experience in MLOps, DevOps, or cloud-based infrastructure roles, focusing on deploying, monitoring, and scaling machine learning systems (in particular Transformer-based models).
Experience with cloud platforms (AWS, GCP, Azure) for ML model deployment and management.
Bonuses
Experience in building and maintaining large-scale ML infrastructure.
Proven experience in translating research into production at scale.
Strong programming skills in Python and familiarity with ML frameworks (e.g., PyTorch, TensorFlow).
Expertise
MLOps: In-depth understanding of MLOps principles, model lifecycle management, automation, and infrastructure scaling.
Cloud Infrastructure: Experience in managing cloud-based environments and optimizing resources for scalable ML deployments.
Compensation & Benefits
We are a fast-paced startup, but we also value a healthy work-life balance and attractive compensation. We offer:
A competitive salary
Equity (BSPCE), to reflect the value you bring to Neuralk and to foster a shared journey
Comprehensive health insurance
Paid leave and time off in line with French standards
Dynamic work setting: our preference is for in-person collaboration, but we are flexible about occasional remote work.
And more to come as we grow.
Role & Responsibilities
Model Deployment & Automation: Design, develop, and optimize continuous integration/continuous deployment (CI/CD) pipelines for deploying Deep Learning models. Automate model training, testing, deployment, and monitoring to ensure efficiency and reliability.
Infrastructure Management: Build and maintain scalable infrastructure for machine learning workflows, leveraging cloud environments, container orchestration (Docker, Kubernetes), and monitoring tools.
Model Versioning and Lifecycle Management: Oversee the lifecycle of Deep Learning models, including versioning, governance, and deprecation strategies, ensuring proper integration with the platform and other tools.
Monitoring & Debugging: Implement robust monitoring systems to track the performance of live models, proactively identify issues, and develop tools for model debugging and maintenance.
Reproducibility and Traceability: Establish and manage practices that ensure the reproducibility of models, experiments, and pipelines. Implement version control and logging systems for model and data changes.
Collaboration: Work closely with data scientists, machine learning engineers, and other stakeholders to design, implement, and manage production-grade model pipelines that enhance the company’s research and engineering capabilities.
Platform Optimization: Suggest improvements to the infrastructure to handle large-scale AI model deployments, ensuring scalability, performance, and cost-efficiency.
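The reproducibility and traceability responsibilities above can be illustrated with a minimal sketch (not Neuralk's actual stack): derive a deterministic run ID from the training configuration and a fingerprint of the data, so every model artifact can be traced back to exactly what produced it. The `fingerprint` helper and the example config are hypothetical.

```python
# Minimal reproducibility sketch (illustrative only): a deterministic
# run ID computed from the training config and a data fingerprint.
import hashlib
import json

def fingerprint(obj) -> str:
    """Stable short hash of any JSON-serializable object.
    Sorting keys makes the hash independent of dict ordering."""
    blob = json.dumps(obj, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

# Hypothetical training config and data sample.
config = {"model": "gnn-embedder", "lr": 1e-3, "epochs": 20}
data_rows = [{"id": 1, "category": "shoes"}, {"id": 2, "category": "laptops"}]

# Same config + same data always yields the same run ID, so logs,
# metrics, and artifacts tagged with it stay traceable.
run_id = f"{fingerprint(config)}-{fingerprint(data_rows)}"
```

In practice a tool such as MLflow plays this role, but the principle is the same: every artifact is keyed by an identifier that is a pure function of its inputs.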
Profile (continued)
Strong experience with CI/CD pipelines and version control systems like Git.
Familiarity with containerization and orchestration technologies (Docker, Kubernetes).
Excellent communication skills in English and a proven ability to work in interdisciplinary teams.
Thrives in a fast-paced, evolving startup environment.
Self-starter and autonomous, with a focus on operational efficiency and scalability.
Bonuses (continued)
Experience with MLOps tools, such as MLflow, Kubeflow, or TFX.
Familiarity with observability tools for monitoring machine learning pipelines (Prometheus, Grafana).
Expertise (continued)
CI/CD & Automation: Expertise in setting up and maintaining CI/CD pipelines, automating model training, testing, and deployment.
Monitoring & Observability: Familiarity with tools and techniques to track model performance in production, including alerting and debugging.
Containerization: Proficiency in Docker and Kubernetes for managing and scaling ML models.
Programming: Proficient in Python and other relevant programming languages, with experience in developing and managing infrastructure as code (e.g., Terraform).