Computer Vision Engineer | NewSpace Research and Technologies
NewSpace Research and Technologies
NewSpace builds long-endurance aerospace platforms that enable persistent Earth observation and high-altitude communications. The company designs piloted and unmanned high-altitude systems, including…
About the Role:
The Engineer III – Computer Vision & Machine Learning role is a senior, hands-on position focused on developing and deploying perception and learning systems for autonomous UAVs. The role requires strong practical experience in computer vision, deep learning, and real-time deployment on embedded platforms.
The engineer will take technical ownership of vision and ML pipelines and work closely with the robotics team.
This role is ideal for engineers who enjoy taking algorithms from research to real-world deployment.
Key Responsibilities:
Design, develop, and deploy computer vision and machine learning algorithms for autonomous UAV perception.
Build and manage datasets and data pipelines, including data collection during flight tests, annotation, augmentation, training, and evaluation workflows.
Ensure robust runtime behavior under real-world conditions such as motion blur, lighting variation, and sensor noise.
Optimize models and perception pipelines for real-time performance on embedded compute platforms, accounting for memory, latency, and power constraints (see the latency sketch after this list).
Integrate vision and ML outputs with robotics and autonomy stacks to enable downstream tasks such as navigation, obstacle avoidance, mapping, and decision-making.
Document system designs, experiments, model performance, and deployment workflows to support maintainability and knowledge sharing.
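To make the real-time optimization responsibility above concrete, here is a minimal per-frame latency benchmark, a sketch assuming PyTorch; the MobileNetV3 model, the 640×480 input resolution, and the iteration counts are illustrative placeholders, not anything specified by the role.

```python
# Hypothetical sketch: measuring mean per-frame inference latency against
# an embedded deployment budget. Model and input shape are placeholders.
import time
import torch
import torchvision

def benchmark(model: torch.nn.Module, input_shape=(1, 3, 480, 640),
              warmup: int = 10, iters: int = 100) -> float:
    """Return mean per-frame latency in milliseconds."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.eval().to(device)
    x = torch.randn(*input_shape, device=device)
    with torch.inference_mode():
        for _ in range(warmup):        # warm-up stabilises clocks and caches
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()   # flush queued GPU work before timing
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1e3

if __name__ == "__main__":
    net = torchvision.models.mobilenet_v3_small(weights=None)
    print(f"mean latency: {benchmark(net):.2f} ms/frame")
```

On an actual embedded target, the same loop would be run against the deployed runtime rather than eager PyTorch, since framework overhead dominates at small model sizes.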
Basic Requirements:
Bachelor’s or Master’s degree in Computer Science, Robotics, Artificial Intelligence, or Electrical Engineering.
1+ years (Master’s) or 3+ years (Bachelor’s) of industry experience in computer vision, machine learning, or robotics-related roles.
Strong hands-on experience with deep learning architectures such as CNNs, RNNs, and transformer-based models for vision tasks.
Experience with image enhancement techniques such as denoising, deblurring, contrast enhancement, and low-light image improvement (a minimal sketch follows this list).
Proficiency in Python and C++ for algorithm development and deployment.
Experience with deep learning frameworks such as PyTorch or TensorFlow.
Solid understanding of image processing, linear algebra, and optimization fundamentals.
Experience working in Linux-based development environments.
Strong debugging, analytical, and problem-solving skills.
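The image-enhancement bullet above covers techniques along the lines of this minimal sketch, assuming OpenCV; the non-local-means parameters and CLAHE settings are illustrative defaults, and frame.jpg is a placeholder input.

```python
# Hypothetical low-light enhancement sketch: denoise, then lift local
# contrast on the luminance channel only, to avoid shifting colours.
import cv2

def enhance_low_light(bgr):
    """Non-local-means denoising followed by CLAHE on the L channel."""
    denoised = cv2.fastNlMeansDenoisingColored(bgr, None, 10, 10, 7, 21)
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

frame = cv2.imread("frame.jpg")   # placeholder input frame
cv2.imwrite("frame_enhanced.jpg", enhance_low_light(frame))
```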
Preferred Requirements:
Experience optimizing inference using TensorRT, CUDA, or hardware accelerators.
Experience designing teacher-student frameworks and applying knowledge distillation to balance accuracy, latency, and power constraints (see the sketch after this list).
Hands-on experience with deep learning–based object detection models (e.g., single-stage and two-stage detectors) for real-world vision tasks.
Experience with multi-object tracking approaches, including learning-based tracking and data association techniques.
Hands-on experience with spatial and frequency-domain filtering, including smoothing, edge-preserving, and sharpening filters.
Experience working with UAVs or autonomous robotic platforms.
Hands-on experience deploying models on embedded edge platforms.
Experience with ROS/ROS2 or robotics middleware.
Experience with multi-sensor fusion involving vision and IMU.
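For the teacher-student bullet above, a standard distillation loss looks roughly like the following sketch, assuming PyTorch; the temperature T and mixing weight alpha are illustrative hyperparameters, not values prescribed by the role.

```python
# Hypothetical knowledge-distillation loss: a softened KL term against the
# teacher's logits blended with the usual hard-label cross-entropy.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.7):
    """alpha weights the soft (teacher) term against the hard-label term."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)   # rescale so gradients stay comparable across temperatures
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In practice the smaller student trades a modest accuracy drop for the latency and power headroom the embedded platform actually has.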
Working Hours
Standard working hours are 9:30 AM to 6:30 PM, Monday to Friday.
Field-testing activities may require early-morning or extended hours, depending on mission requirements.
Compensation Range
Competitive compensation aligned with industry standards, including performance-based incentives.
The exact salary range is calibrated to experience.
Benefits
Comprehensive health insurance.
Professional development support.
Detachment allowance.