

Business: Enterprise AI and digital/data transformation services company
Product: NAVIO — GenAI-powered enterprise productivity / workflow unification platform
Funding: Announced $7.5M seed (Feb 2026); company states total funding USD 11.6M
Founders: Anjan Lahiri (Founder & CEO) and Samit Deb (Founder)
Geography & Size: Presence in the U.S. and India; Crunchbase lists Princeton, New Jersey; ~151 employees
Focus areas: Enterprise productivity, workflow unification, AI-led digital and data transformation
Sector: Enterprise AI / digital & data transformation services
Round size: USD 7.5M
Notes: The company press release states that the founders participated and that total funding reached USD 11.6M after this round (implying roughly USD 4.1M raised prior to it). Crunchbase lists a Seed round announced on this date and shows funding across two rounds.
Round description: “Seed round led/anchored by industry figures P R Chandrasekar (Sekar PRC) and Sudip Nandy, with founder participation”
Summary: Looking for experienced Databricks Resident consultants with prior hands-on experience in Databricks data engineering. The candidate will be responsible for setting up Databricks workspaces, creating Delta Lakes, writing declarative ingestion pipelines, and using Databricks data quality frameworks (constraints) to deliver high-quality data pipelines. The role is customer-facing and requires strong problem-solving, stakeholder-management, and thought-leadership skills.

Roles and Responsibilities:
* Proficient in data engineering, data platforms, and analytics, with a strong track record of successful projects and in-depth knowledge of industry best practices.
* Has worked on at least two data warehouse migration engagements (preferably one on Databricks, though non-Databricks migrations also qualify).
* Prior experience with dimensional data modeling using tools such as erwin Data Modeler.
* 5+ years of hands-on experience in data engineering, including 2+ years architecting solutions on Databricks and Apache Spark.
* Proficient in cluster management and workspace organization.
* Proficient with the Delta Lake framework, Databricks workspaces, and Unity Catalog.
* Proficient in writing ETL pipelines on Databricks using Delta Live Tables, covering streaming, batch, and incremental batch ingestion.
* Comfortable architecting and writing code in Python or SQL.
* Excellent understanding of Apache Spark job performance tuning, including partition optimization.
* Working knowledge of two or more major cloud ecosystems (AWS, Azure, GCP), with expertise in at least one.
* Deep experience with distributed computing on Apache Spark™ and knowledge of Spark runtime internals.
* Familiarity with CI/CD for production deployments.
* Working knowledge of MLOps.
* Design and deployment of performant end-to-end data architectures.
* Experience with technical project delivery, including managing scope and timelines.
* Build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects.

Certifications:
* Databricks Certified Associate Developer for Apache Spark
* Databricks Certified Professional Data Engineer
* Cloud certifications (AWS Certified Solutions Architect, Azure Solutions Architect Expert)
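The "data quality frameworks (constraints)" mentioned above refer to Delta Live Tables expectations, which keep, drop, or fail records based on named rules. Real pipelines declare these with `dlt` decorators inside a Databricks workspace; as a minimal plain-Python sketch of the drop-on-violation semantics (the rule names and sample rows below are invented for illustration, not from any real pipeline):

```python
# Sketch of drop-on-violation semantics similar in spirit to Delta Live
# Tables' @dlt.expect_or_drop: rows failing a named rule are dropped,
# and per-rule violation counts are tracked as quality metrics.
# Rules and sample data are hypothetical, for illustration only.

def apply_expectations(rows, rules):
    """Return (kept_rows, metrics) after applying drop-on-violation rules.

    rows  -- iterable of dicts, one per record
    rules -- dict mapping rule name -> predicate(row) -> bool
    """
    kept = []
    metrics = {name: 0 for name in rules}
    for row in rows:
        failed = [name for name, pred in rules.items() if not pred(row)]
        for name in failed:
            metrics[name] += 1  # count every violated rule
        if not failed:
            kept.append(row)    # only fully valid rows survive
    return kept, metrics

rows = [
    {"id": 1, "amount": 250.0},
    {"id": None, "amount": 99.0},   # violates valid_id -> dropped
    {"id": 3, "amount": -5.0},      # violates positive_amount -> dropped
]
rules = {
    "valid_id": lambda r: r["id"] is not None,
    "positive_amount": lambda r: r["amount"] > 0,
}
kept, metrics = apply_expectations(rows, rules)
print(kept)     # [{'id': 1, 'amount': 250.0}]
print(metrics)  # {'valid_id': 1, 'positive_amount': 1}
```

In an actual Databricks pipeline the same idea is written declaratively, e.g. `@dlt.expect_or_drop("valid_id", "id IS NOT NULL")` on a table definition, and the runtime records the violation metrics in the pipeline event log.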