
IOMETE offers a self-hosted data lakehouse platform designed for enterprises dealing with large datasets, particularly in regulated industries like Financial Services, Healthcare, Government, and Technology. Unlike SaaS solutions that can be rigid, expensive, and lead to vendor lock-in, IOMETE provides full control, data ownership, enhanced security, and cost efficiency by allowing deployment on-premises, in private clouds, or public clouds. The platform supports advanced data management, engineering, business intelligence, real-time analytics, AI, and machine learning. It emphasizes a partnership approach, acting as an extension of the engineering team to solve complex data challenges and ensure customer success.
Product: Self-hosted sovereign data lakehouse platform for AI and analytics
Deployment: On-premises, private cloud, public cloud, or hybrid
Tech: Open-core using Apache Iceberg and Apache Spark
Founders: Vusal Dadalov (CEO) and Piet Jan de Bruin (COO)
Stage / Funding: Multiple early rounds (Crunchbase lists 5); total funding reported as $2,200,000
Focus: Data infrastructure for AI and analytics, with emphasis on data sovereignty, security, and cost-effective self-hosted deployment for regulated industries
Founded: 2020
Industry: Data infrastructure / Data platforms
Funding history: Crunchbase lists multiple rounds (5 total) and shows Seed as the most recent; specific round dates and amounts are redacted in available summaries
Investors: Y Combinator (W22); Crunchbase also lists Caucasus Ventures and Thunder Future
About Us

We looked at the enterprise data world and said: This is broken. The biggest companies on earth are drowning in data complexity. They're trapped between legacy systems that can't scale and cloud platforms that want to own their data. They're bleeding money on infrastructure while their teams struggle with tools that weren't built for their reality.

We built IOMETE to fix this. A sovereign-by-design data lakehouse platform that gives enterprises back control of their data. Where AI optimization actually makes queries faster in the real world, not just in benchmarks. Where Data Mesh isn't just buzzword compliance, but a fundamental architecture. Fortune 500 companies run petabyte-scale data lakehouses on IOMETE.

About The Role

We're hiring a senior data engineer to support one of our enterprise customers as they move to the IOMETE data lakehouse platform. You'll work closely with the customer's engineering teams and act as an extension of their internal staff. While you'll be part of IOMETE, your day-to-day work will happen inside the customer's environment, using their tools and workflows. You'll help lead their migration from legacy platforms such as Cloudera, Databricks, Teradata, Greenplum, or Dremio.

The role requires strong technical knowledge and the ability to guide others. You should know Spark and Apache Iceberg very well; experience with Kubernetes is also important.

This is a hands-on role. You'll support large data workflows, solve platform issues, and help the customer get the most from IOMETE. You'll also work with our product and engineering teams to share what you're seeing in the field and help shape improvements.

This role is right for someone who enjoys working directly with other engineers, solving practical problems, and making systems work better.
What You Should Know Well
- Spark for large-scale data processing
- Apache Iceberg for table storage and optimization
- SQL for querying and managing data
- Python or PySpark for building data pipelines
- Kubernetes and container-based deployment
- Working with data warehouses and legacy systems
- Helping with cloud and on-prem migrations
- Clear written and verbal communication

Experience with any of these platforms is a plus: Cloudera, Databricks, Teradata, Dremio, Greenplum.

Location

IOMETE is a fully distributed team with team members in eight countries. For this role, we expect you to be based in the continental United States. You will be working from home; occasional travel for team or customer events is required (max. four weeks per year).

What We Offer
- Work on technology that's actually changing how enterprises handle data. Not incrementally. Fundamentally.
- Solve problems alongside teams at the world's largest companies. Real scale. Real complexity. Real impact.
- Competitive compensation and meaningful equity. We're building something big.
- Full-time role. No bureaucracy. Just impact.

Ready to build something that matters? Show us what you've built. Tell us what broke. Explain how you fixed it. We care more about your battle scars than your certificates.