
The Industry’s Largest Pure-Play Supply Chain Service Provider
Core focus: AI-powered application transformation for the digital/connected supply chain
Headquarters: San Jose, California
Employee count (reported): 3401
Ownership: Part of the Mahindra Group
Funding signal: One recorded Series B; lead investor listed as Ridgewood Capital (details obfuscated)
Digital transformation of supply chains and supply-chain application modernization
Supply chain consulting / digital supply-chain services
Funding details and amounts are obfuscated in available records.
Job Description

We are seeking an experienced Senior Data Engineer to join our team on a long-term contracting basis. As a Senior Data Engineer, you will play a crucial role in designing, implementing, and maintaining scalable data pipelines using industry-leading tools such as Airflow, PySpark, and Databricks. You will also be responsible for supporting machine learning models in production, developing data products from large datasets, and collaborating with cross-functional teams to understand data needs and requirements.

Key Responsibilities
- Design and implement scalable data pipelines using Airflow, PySpark, and Databricks (illustrated in the sketch after this listing).
- Support machine learning models in production to ensure their reliability and performance.
- Develop data products from large datasets to meet business needs and objectives for enterprise products.
- Collaborate with cross-functional teams to understand data needs and requirements, and translate them into technical solutions.
- Implement process monitoring to ensure reliability and data quality, including support for ACID transactions.
- Provide technical guidance and support to other team members as needed.

Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- At least 5 years of professional experience in a data engineering role, with a proven track record of delivering scalable and reliable data solutions.
- Extensive experience with Databricks, AWS, and Airflow ETL architectures, including supporting data streams from real-time consumer applications.
- Strong proficiency in Python, PySpark, and Spark SQL for data processing and analysis.
- Solid knowledge of SQL databases and data modeling principles.
- Familiarity with Databricks Lakeview and other data visualization tools for monitoring and reporting.
- Ability to work independently and in a team environment, with excellent problem-solving and communication skills.
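The responsibilities above center on orchestrating PySpark transformations with Airflow and writing results to tables with ACID guarantees. The sketch below is a minimal, hypothetical illustration of that pattern and is not part of the posting: the DAG id, bucket paths, column names, and schedule are invented, and it assumes an Airflow 2.4+ deployment plus a cluster with PySpark and the delta-spark package configured.

```python
# Illustrative sketch only: a minimal Airflow DAG that runs a PySpark job and
# writes a Delta table. All identifiers, paths, and columns are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_daily_aggregation(**_context):
    # Import PySpark inside the task so the Airflow scheduler itself
    # does not need Spark installed.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_orders_aggregation").getOrCreate()

    # Hypothetical source: raw order events landed by an upstream process.
    orders = spark.read.parquet("s3://example-bucket/raw/orders/")

    # Aggregate order amounts per customer per day.
    daily_totals = (
        orders
        .withColumn("order_date", F.to_date("order_timestamp"))
        .groupBy("order_date", "customer_id")
        .agg(
            F.sum("amount").alias("total_amount"),
            F.count("*").alias("order_count"),
        )
    )

    # Writing in Delta format provides ACID guarantees on the target table
    # (assumes the delta-spark package is available on the cluster).
    (daily_totals.write
        .format("delta")
        .mode("overwrite")
        .save("s3://example-bucket/curated/daily_order_totals/"))

    spark.stop()


with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    aggregate = PythonOperator(
        task_id="aggregate_daily_orders",
        python_callable=run_daily_aggregation,
    )
```

In a Databricks setting the same transformation would more likely run as a Databricks job triggered from Airflow rather than an in-process PythonOperator; the sketch keeps everything local purely to show the pipeline shape in one file.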