At Cardo AI, we build intelligent technology to help our clients scale in the asset-based finance market. Our mission is to make their operations more efficient, transparent, and accessible —…
Space: Asset-based finance (ABF) fintech — software, data, and AI
Employees: ~128
Recent funding: Series A $15M (Nov 20, 2024) — co-led by Blackstone Innovations Investments, FINTOP Capital, JAM FINTOP
Assets on platform: $40B+ managed through technology (company-stated)
Company Overview
Problem Domain: Inefficient, manual processes and limited analytics in asset-based finance and private credit; need for automated workflows, document intelligence, and portfolio analytics.
Founded: 2018
Industry: Software Development
Funding Track Record
Seed (2019): recorded as an early round (per third-party profiles).
Seed (2021): recorded as an early round (per third-party profiles).
Series A (2024-11-20): $15M, co-led by Blackstone Innovations Investments, FINTOP Capital, and JAM FINTOP; additional individual investors also participated.
Investor Signal
“Blackstone Innovations Investments participating as a co-lead (strategic early-stage arm)”
Join the Team
Senior Data Engineer
Hybrid • Milan, Lombardy, IT
THE ROLE
We’re hiring a Senior Data Engineer to own and evolve the data infrastructure behind Cardo AI’s platform — the engine that powers portfolio analytics, transaction monitoring, and AI-driven insights for $80Bn+ in assets under management.
You’ll design and operate the pipelines, lakehouse architecture, and query layer that feed our data science models and customer-facing products. This is a high-leverage role: the data systems you build will directly shape how banks, trustees, and investors make decisions on billions in private credit and asset-based finance.
ABOUT CARDO AI
Cardo AI is a fintech platform that helps institutions like Blackstone, Encina, and other top-tier players optimize portfolios, streamline transactions, and unlock growth in Private Credit and Asset-Based Finance. We recently closed a funding round co-led by Blackstone Innovations Investments, FINTOP Capital, and JAM FINTOP to scale our platform and help expand the ABF market beyond $40Tn.
Learn more at cardoai.com
WHAT YOU’LL DO
Architect and maintain production data pipelines —
Design, build, and operate scalable ETL workflows using Apache Spark and Airflow, processing structured and semi-structured financial data at scale (a minimal orchestration sketch follows this list).
Own the lakehouse layer —
Evolve our data lakehouse (Delta Lake / Iceberg) on AWS to support analytics, reporting, and ML workloads with high reliability and low latency.
Optimize query performance —
Tune SQL queries on Trino and Postgres to keep dashboards and data products fast as data volumes grow.
Lead high-impact data projects —
Drive cross-functional initiatives end-to-end — from scoping with Product and Data Science through to production deployment and monitoring.
Build and maintain cloud infrastructure —
Manage data infrastructure on AWS (S3, Glue, EKS, Aurora, Athena, Lambda), including CI/CD for data workflows and Infrastructure as Code.
Raise the bar for the team —
Mentor junior engineers, establish best practices for pipeline design and data modeling, and contribute to architectural decisions that shape the platform’s future.
Evaluate and adopt new tools —
Assess emerging technologies (e.g., DuckDB, Polars, PyIceberg) to improve scalability, cost efficiency, and developer experience.
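The pipeline bullet above calls out Apache Spark and Airflow. As a rough, hypothetical illustration of the orchestration side only, here is a minimal Airflow 2.x DAG sketch; the DAG id, task name, and print-only transform are invented placeholders, not Cardo AI’s actual pipeline.

```python
# Minimal, illustrative Airflow DAG: one daily task standing in for a Spark transform.
# The dag_id, task_id, and the print-only callable are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def transform_positions(ds: str, **_) -> None:
    # In a real pipeline this step would typically submit a Spark job (for example
    # via a SparkSubmitOperator or an EMR/EKS operator) that reads the raw files
    # for the run date and writes curated tables to the lakehouse.
    print(f"Transforming positions for run date {ds}")


with DAG(
    dag_id="daily_portfolio_etl",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
):
    PythonOperator(
        task_id="transform_positions",
        python_callable=transform_positions,
    )
```

In practice the daily task would hand off to Spark rather than run in-process; the sketch only shows the shape of a scheduled, retried step.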
WHAT WE’RE LOOKING FOR
Must Have
5+ years of professional data engineering experience, with at least 2 years in a senior or lead capacity.
Apache Spark & Airflow —
Extensive hands-on experience building and operating production ETL/ELT pipelines.
Advanced SQL —
Strong query writing and performance tuning skills on relational databases (Postgres preferred).
Python —
Proficient, production-quality Python for data processing, tooling, and automation.
Lakehouse architecture —
Working experience with at least one Lakehouse format: Apache Iceberg, Delta Lake, or Hudi (a small illustrative sketch follows the Nice to Have list below).
AWS data services —
Deep familiarity with S3, Glue, Aurora, Athena, EKS, and Lambda.
Data modeling —
Solid understanding of dimensional modeling, schema design, and pipeline architecture patterns.
Technical leadership —
Track record of leading cross-functional projects and mentoring other engineers.
Nice to Have
Experience with DuckDB, PyIceberg, delta-rs, Polars, or PyArrow.
Hands-on experience with Terraform or CloudFormation for Infrastructure as Code.
Familiarity with Docker and Kubernetes in production environments.
Experience working with financial data or in fintech / capital markets.
Exposure to data quality frameworks and observability tooling.
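To make the lakehouse requirement a bit more concrete, here is a small, hypothetical PyIceberg/PyArrow sketch of appending a batch to an Iceberg table and reading it back. The catalog name, warehouse bucket, table name, and columns are invented for illustration and are not Cardo AI’s configuration or schema; an equivalent flow could be written against Delta Lake with delta-rs.

```python
# Illustrative only: append a small batch to an Iceberg table and scan it back.
# Catalog name, warehouse bucket, namespace, table, and columns are hypothetical.
import pyarrow as pa
from pyiceberg.catalog import load_catalog

# Assumes an AWS Glue catalog; PyIceberg also supports REST, Hive, and SQL catalogs.
catalog = load_catalog(
    "demo",
    **{"type": "glue", "warehouse": "s3://example-bucket/warehouse"},
)

batch = pa.table(
    {
        "position_id": pa.array([101, 102], type=pa.int64()),
        "notional": pa.array([1_250_000.0, 980_000.0], type=pa.float64()),
        "as_of_date": pa.array(["2024-11-20", "2024-11-20"], type=pa.string()),
    }
)

table = catalog.load_table("analytics.positions")  # hypothetical namespace.table
table.append(batch)  # commits a new table snapshot

# Read the latest snapshot back as Arrow for downstream analytics (or DuckDB/Polars).
latest = table.scan().to_arrow()
print(latest.num_rows)
```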
WHY CARDO AI
Real impact —
Your pipelines power decisions on billions in assets. The work matters.
Greenfield + scale —
We’re growing fast, which means you’ll build foundational systems, not just maintain legacy ones.
Strong technical culture —
You’ll work alongside engineers who care about clean architecture, testing, and doing things right.
Direct influence —
In a company our size, senior engineers shape product direction, not just execute tickets.
WHAT WE OFFER
Hybrid work with flexible hours —
Structure your day around deep work, not office politics.
Competitive salary + yearly performance bonuses —
We pay well and reward results.
Stock options —
Share in the upside as we scale.
Growth investment —
Training budget, mentorship, and room to evolve your role as the company grows.
Team culture —
Regular social events, annual company retreats, and a team that genuinely enjoys working together.
Referral bonuses —
Bring great people, get rewarded.
Ready to build the data backbone of the next generation of private credit infrastructure?
Apply now and help us shape the future of fintech.