Computer Vision Engineer | NewSpace Research and Technologies · Teeming.ai
NewSpace Research and Technologies
NewSpace builds long-endurance aerospace platforms that enable persistent Earth observation and high-altitude communications. The company designs piloted and unmanned high-altitude systems, including HAPS and endurance drones, using autonomy, sensor fusion, distributed intelligence, computer vision, machine learning/AI, and GPS‑denied navigation to extend endurance from days to months. NewSpace integrates aircraft design and collective robotics to support stratospheric operations and payloads for surveillance, communications, and sensing. It operates as a B2B aerospace technology provider serving organizations that require persistent airborne coverage and communications infrastructure.
About the Role:
The Engineer III – Computer Vision & Machine Learning role is a senior, hands-on position focused on developing and deploying perception and learning systems for autonomous UAVs. The role requires strong practical experience in computer vision, deep learning, and real-time deployment on embedded platforms.
The engineer will take technical ownership of vision and ML pipelines and work closely with the robotics team.
This role is ideal for engineers who enjoy taking algorithms from research to real-world deployment.
Key Responsibilities:
Design, develop, and deploy computer vision and machine learning algorithms for autonomous UAV perception.
Build and manage datasets and data pipelines, including data collection during flight tests, annotation, augmentation, training, and evaluation workflows.
Ensure robust runtime behaviour under real-world conditions such as motion blur, lighting variation, and sensor noise.
Optimize models and perception pipelines for real-time performance on embedded compute platforms, accounting for memory, latency, and power constraints.
Integrate vision and ML outputs with robotics and autonomy stacks to enable downstream tasks such as navigation, obstacle avoidance, mapping, and decision-making.
Document system designs, experiments, model performance, and deployment workflows to support maintainability and knowledge sharing.
Basic Requirements:
Bachelor's or Master's degree in Computer Science, Robotics, Artificial Intelligence, or Electrical Engineering.
1+ years (Master's) or 3+ years (Bachelor's) of industry experience in computer vision, machine learning, or robotics-related roles.
Strong hands-on experience with deep learning architectures such as CNNs, RNNs, and transformer-based models for vision tasks.
Experience with image enhancement techniques such as denoising, deblurring, contrast enhancement, and low-light image improvement.
Proficiency in Python and C++ for algorithm development and deployment.
Experience with deep learning frameworks such as PyTorch or TensorFlow.
Solid understanding of image processing, linear algebra, and optimization fundamentals.
Experience working in Linux-based development environments.
Strong debugging, analytical, and problem-solving skills.
Preferred Requirements:
Experience optimizing inference using TensorRT, CUDA, or hardware accelerators.
Experience designing teacher-student frameworks and applying knowledge distillation to balance accuracy, latency, and power constraints.
Hands-on experience with deep learning–based object detection models (e.g., single-stage and two-stage detectors) for real-world vision tasks.
Experience with multi-object tracking approaches, including learning-based tracking and data association techniques.
Hands-on experience with spatial and frequency-domain filtering, including smoothing, edge-preserving, and sharpening filters.
Experience working with UAVs or autonomous robotic platforms.
Hands-on experience deploying models on embedded edge platforms.
Experience with ROS/ROS2 or robotics middleware.
Experience with multi-sensor fusion involving vision and IMU.
Working Hours
Standard working hours are 9:30 AM to 6:30 PM, Monday to Friday.
Field-testing activities may require early-morning or extended hours, depending on mission requirements.
Compensation Range
Competitive compensation aligned with industry standards, including performance-based incentives.
Exact salary ranges can be customised according to experience.
Benefits
Comprehensive health insurance.
Professional development support.
Detachment allowance.