At OxygenAI, we are dedicated to transforming organizations by creating solutions using Artificial Intelligence. We combine Big Data, both structured and unstructured, with state-of-the-art research in Deep Learning, Computer Vision, and Data Science to provide data-driven solutions. We are a team of experienced AI researchers, data scientists, and software engineers, currently working with multiple industries, including transportation, media, and finance.
Design, build, test, and maintain data transformation pipelines that convert raw Computer Vision-generated transactional data into clean, reliable aggregated datasets.
Define and document data models, metrics, and aggregation logic.
Ensure interpretability and accuracy of aggregated data.
Analyze data to identify biases, gaps, and CV-pipeline artifacts, and design compensations where needed.
Develop internal tools for monitoring, debugging, validating assumptions, and helping other teams reason about aggregated data.
Collaborate with product managers to gather requirements and assess feasibility and constraints.
Collaborate with upstream CV teams to define data contracts and ensure input data quality.
Collaborate with downstream teams (e.g., web backend engineers) to design aggregates that meet product constraints and performance needs.
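To illustrate the kind of work involved, here is a minimal sketch of such a transformation step in Python. All field names (`ts`, `camera`, `track_id`) and the footfall scenario are hypothetical: it deduplicates CV detection events, drops malformed records instead of crashing, and aggregates unique visitors per camera and hour.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical raw CV detection events; field names are assumptions.
raw_events = [
    {"ts": "2024-05-01T09:12:03", "camera": "entrance_1", "track_id": 17},
    {"ts": "2024-05-01T09:48:55", "camera": "entrance_1", "track_id": 18},
    {"ts": "2024-05-01T09:48:55", "camera": "entrance_1", "track_id": 18},  # duplicate artifact
    {"ts": "2024-05-01T10:05:41", "camera": "entrance_2", "track_id": 19},
    {"ts": "not-a-timestamp",     "camera": "entrance_2", "track_id": 20},  # noisy record
]

def aggregate_hourly_footfall(events):
    """Deduplicate, drop malformed records, and count unique tracks per camera-hour."""
    seen = set()
    unique_tracks = defaultdict(set)
    dropped = 0
    for e in events:
        key = (e["camera"], e["track_id"], e["ts"])
        if key in seen:  # CV pipelines often emit duplicate detections
            continue
        seen.add(key)
        try:
            hour = datetime.fromisoformat(e["ts"]).strftime("%Y-%m-%dT%H:00")
        except ValueError:
            dropped += 1  # surface data-quality issues rather than failing silently
            continue
        unique_tracks[(e["camera"], hour)].add(e["track_id"])
    aggregates = {k: len(v) for k, v in unique_tracks.items()}
    return aggregates, dropped

aggregates, dropped = aggregate_hourly_footfall(raw_events)
# aggregates now maps (camera, hour) to a unique-visitor count;
# dropped counts malformed records for monitoring.
```

In practice the deduplication and validation rules would come from data contracts agreed with the upstream CV teams, and the aggregate shapes from the needs of downstream consumers.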
Qualifications:
Bachelor’s degree in Computer Engineering or Computer Science.
Strong understanding of data modeling and scalable custom ETL/ELT pipeline design.
Detail-oriented with strong systems-thinking skills and the ability to anticipate data caveats and evolving requirements.
Comfortable working with incomplete or noisy data and making sound technical judgments under uncertainty.
Proficiency in Python for data processing, analysis, and tool-building.
Familiarity with MongoDB and PostgreSQL (or willingness to learn both).
Comfortable working in a Unix/Linux development environment.
Understanding of the needs and challenges of working in a startup environment.
Knowledge of data-driven mall/retail analytics or footfall systems is a plus.