
Zero distance innovation for GenAI creators and industries. Expertly engineering platforms and curating multimodal, multilingual data, we empower the ‘Magnificent Seven’ and enterprise clients with safe, scalable AI deployment. We are a team of over 150 PhDs and data scientists, along with more than 4,000 AI practitioners and engineers. We bring together platforms, partners, and 1.8 million vertical domain experts to create high-quality pre-training datasets, fine-tuned industry-specific LLMs, and RAG pipelines supported by vector databases. These innovations can reduce GenAI costs by up to 80% and bring GenAI solutions to market 50% faster across 230 locales.
What they do: Global AI data and platform company providing dataset generation, RLHF/human-in-the-loop pipelines, and a marketplace to accelerate model-to-production
Founded / HQ: 2020; Redmond, Washington
Team size: Over 3,000 employees (company snapshot lists 3,126)
Recent funding: $60M Series A (announced June 24, 2025), lead investor Granite Asia
Focus: AI training data, model evaluation, and deployment workflows for enterprises and large-scale models
Founded: 2020
Industry: Software Development
Funding amount: $60 million
Funding purpose: Company announcement of Series A to scale AI infrastructure and product roadmap
Lead investor: Granite Asia (Series A)
Responsibilities
Design, build, and deploy agentic AI systems using frameworks such as LangChain, LangGraph, and related libraries.
Develop and deploy multi-agent systems capable of autonomous decision-making, reasoning, planning, and collaboration.
Implement and optimize retrieval-augmented generation (RAG) systems, ensuring agents can access and incorporate external knowledge sources for grounded, accurate responses.
Fine-tune and prompt-engineer LLMs for task-specific reasoning, planning, and dynamic adaptation.
Work with LLM/SLM APIs, embeddings, and advanced generative AI techniques.
Lead the development of enterprise-grade AI platforms integrating LLMs, RAG, embeddings, and agentic AI protocols.
Implement and standardize Model Context Protocol (MCP) for consistent context management across models and agents.
Establish and enforce best practices for MLOps, monitoring, and observability, ensuring scalable and maintainable AI solutions.
Rapidly prototype, experiment, and iterate to improve AI agent capabilities.
Participate in the full research cycle: literature review, data exploration, experimentation, and presentation of findings.
Collaborate effectively with other engineers, researchers, and data scientists.
Contribute to the documentation and standardization of technical code and practices.
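The retrieval-augmented generation (RAG) responsibility above can be illustrated with a minimal, framework-free sketch. Toy bag-of-words embeddings stand in for a real embedding model and vector database; the names `embed`, `retrieve`, and `build_prompt` are illustrative only, not any specific library's API.

```python
from collections import Counter
from math import sqrt

# Toy knowledge base an agent might ground its answers in.
DOCUMENTS = [
    "RAG pipelines ground LLM answers in retrieved documents.",
    "Vector databases store embeddings for similarity search.",
    "Fine-tuning adapts an LLM to a specific industry domain.",
]

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by similarity to the query embedding (the role a
    # vector database plays in a production pipeline).
    q = embed(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Augment the query with retrieved context before calling an LLM,
    # so the model's answer is grounded in external knowledge.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

In a production system, `embed` would call a real embedding model, `DOCUMENTS` would live in a vector database, and the assembled prompt would be sent to an LLM; the grounding pattern itself is the same.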
Required Education
Bachelor’s degree in Computer Science, Engineering, or a related quantitative field; a Master’s or Ph.D. is a strong plus.
Required Experience