
CounterFake is a SaaS service that automatically detects counterfeit products on online channels using a unique AI-based smart detection system. We are the only web scanner that works intelligently to remove fake goods and the malicious traffic behind them. Our main goal is to transform the retail industry with advanced security solutions and professional monitoring services, supporting online sellers and e-marketplaces around the world.

About Us:
Counterfake is an AI-powered platform fighting counterfeit products in e-commerce and social media marketplaces. We help global brands detect and remove fake listings with advanced automation and data intelligence tools.
We are looking for an AI Platform Engineer to help build and operate the infrastructure behind Counterfake’s AI-driven detection systems. This role sits at the intersection of platform engineering and product engineering: you will own cloud infrastructure and developer platforms while working closely with AI and product teams to ship reliable, scalable systems. You will primarily work on managing Kubernetes-based workloads, AI services, and data infrastructure that run in production at scale.
What you’ll do
Design, deploy, and operate AI/ML services on Google Cloud Platform
Manage and optimize GKE (Kubernetes) clusters for production workloads
Build and maintain infrastructure using Infrastructure as Code (Terraform preferred)
Support CI/CD pipelines for containerized services
Ensure reliability through monitoring, logging, alerting, and incident response
Work with vector databases (e.g. Qdrant or similar) used for similarity search and retrieval
Collaborate with AI teams on model serving, embedding pipelines, and inference workloads
Apply cloud security best practices (IAM, secrets, network isolation)
Optimize infrastructure cost, performance, and scalability
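To give candidates a feel for the similarity-search workload mentioned above, here is a minimal, illustrative Python sketch of the core operation a vector database like Qdrant performs: ranking stored embeddings by cosine similarity to a query embedding. The toy 3-dimensional "embeddings" and product names below are purely hypothetical, not Counterfake's actual data or stack.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, catalog, k=2):
    # Rank stored listing embeddings by similarity to the query embedding,
    # as a vector database would for a "find similar listings" lookup.
    scored = sorted(catalog.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Toy 3-dimensional "embeddings" for illustration only.
catalog = {
    "genuine-sneaker": [0.9, 0.1, 0.0],
    "fake-sneaker":    [0.8, 0.2, 0.1],
    "handbag":         [0.0, 0.9, 0.4],
}
print(top_k([0.85, 0.15, 0.05], catalog, k=2))
# -> ['genuine-sneaker', 'fake-sneaker']
```

In production the brute-force scan above is replaced by approximate nearest-neighbor indexes (e.g. HNSW), which is what makes dedicated vector databases necessary at scale.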
Required qualifications
4–5 years of experience in platform, infrastructure, DevOps, or backend engineering roles
Hands-on experience with Google Cloud Platform
Strong knowledge of Kubernetes (GKE) in production environments
Experience with Docker and container-based deployments
Experience using Terraform or similar IaC tools
Familiarity with monitoring and logging systems (Cloud Monitoring, Prometheus, etc.)
Solid Linux and cloud networking fundamentals
Nice to have
Experience with vector databases (Qdrant, Milvus, Weaviate, pgvector, Pinecone, etc.)
Exposure to AI/ML systems (model serving, embeddings, inference pipelines)
Experience with autoscaling, high-availability systems, or distributed services
Experience supporting AI-powered or data-heavy products
How to apply:
Send us your CV along with links to any relevant projects or GitHub repositories.
We’d love to hear from you!