
Akaike Technologies is an AI company founded in 2018, inspired by Dr. Hirotugu Akaike's work on the Akaike Information Criterion. They aim to positively impact quality of life through Artificial Intelligence by unlocking insights from unstructured data. Akaike specializes in making multimodal AI accessible, effective, and transformative for enterprises, offering both plug-and-play products and custom AI solutions. Their core expertise lies in Vision AI, Generative AI, Conversational AI, and Natural Language Processing. Their flagship product is BYOB (Build Your Own Brain), an analyst co-pilot designed for enterprises, which centralizes data to decentralize wisdom and makes intelligence more accessible and proactive. The company also undertakes AI transformation projects across various industries. They have offices in Pennsylvania, USA; Bengaluru, India; and Chennai, India.
Data Engineer
Experience: 2–3 years
Location: Bengaluru (Hybrid)
About the Company
Akaike Technologies is a fast-growing AI-first organization focused on building real-world, high-impact AI systems across industries. We work at the intersection of Generative AI, Multimodal AI, and Large-Scale ML Engineering, enabling enterprises to operationalize cutting-edge AI solutions at scale. We foster a culture of ownership, deep technical rigor, and continuous learning.
Experience Pre-Requisite
2–3 years of hands-on experience in Data Engineering, with strong exposure to Databricks (PySpark, Spark SQL) and Azure Data Services (ADF, ADLS, Azure SQL). Candidates must demonstrate real-world experience in designing, building, deploying, and optimizing scalable data pipelines end-to-end.
Job Description
We are seeking a highly skilled Data Engineer to design, develop, and deploy scalable data solutions on Databricks and Azure Data Services. This role requires deep technical expertise in PySpark, SQL, Delta Lake, and Medallion Architecture, combined with strong problem-solving ability and collaboration skills.
The ideal candidate will take end-to-end ownership of data engineering workflows, from data ingestion and transformation to data modeling and governance, while ensuring performance, reliability, and security. You will work in a fast-paced, cloud-native environment, partnering with cross-functional teams to deliver production-grade data pipelines that enable advanced analytics and business insights.
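As an illustrative sketch of the Medallion Architecture pattern this role works with (a hedged example, assuming Delta Lake on Databricks; all table names, columns, and JSON fields below are hypothetical):

```sql
-- Bronze layer: raw ingested data, landed as-is in Delta format
CREATE TABLE IF NOT EXISTS bronze.events (
  raw_payload STRING,
  ingest_ts   TIMESTAMP
) USING DELTA;

-- Silver layer: cleaned, conformed records derived from bronze
CREATE TABLE IF NOT EXISTS silver.events AS
SELECT
  get_json_object(raw_payload, '$.user_id') AS user_id,
  get_json_object(raw_payload, '$.action')  AS action,
  ingest_ts
FROM bronze.events
WHERE raw_payload IS NOT NULL;

-- Gold layer: business-level aggregates ready for analytics
CREATE TABLE IF NOT EXISTS gold.daily_actions AS
SELECT date_trunc('DAY', ingest_ts) AS day,
       action,
       COUNT(*)                     AS action_count
FROM silver.events
GROUP BY date_trunc('DAY', ingest_ts), action;
```

Each layer progressively refines the data (raw, cleaned, aggregated), which is the core idea behind the bronze/silver/gold naming.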
Key Responsibilities
1. Design and Development
2. Data Processing
3. Data Integration
4. Data Governance and Quality
5. Collaboration
6. Documentation
Must Have Technical Skills
Must Have Soft Skills
Relevant to Have
Benefits and Perks