
Check out our open positions on nagish.com/careers. **We do not post positions anywhere else.** Nagish stands for 'Accessible' in Hebrew, and our mission is to make communication more accessible. We believe that everyone should have the right to use a phone without having someone listen to their private calls. Nagish gives its users full ownership of their calls by converting text-to-speech and speech-to-text in real time, so that one side of a phone call can type and read while the other side can hear and speak.
What they do: AI-powered real-time phone call captioning, live transcription, and related accessibility communication tools
Users served: People who are deaf, hard of hearing, or speech-disabled
Headquarters: New York
Latest announced funding: $11M Series A (part of $16M total announced)
Employee count: 36
Focus: Accessibility for telephone communication; real-time captioning and relay services
Industry: Telecommunications
Funding history:
* $120,000
* $150,000
* $750,000 (listed as a convertible note)
* $5,000,000 (seed round, reported as $5M)
* $11,000,000 (Series A, led by Canaan Partners)
“Canaan Partners led the Series A and participating investors include Vertex Ventures Israel, Tokyo Black, Precursor Ventures, K5 Global, Contour Venture Partners, and Cardumen Capital.”
As a Computer Vision / ML Researcher at Nagish, you will lead the development and deployment of models that power our video generation, sign language animation, transcription, and gesture segmentation efforts. You will collaborate across teams to build models that are indistinguishable from human interpreters and help scale our impact through automation and cutting-edge research.

On a day-to-day basis, you will:
* Use and extend models for pose estimation, gesture segmentation, sign language animation, and transcription
* Integrate and champion a video generation pipeline that looks and feels like a human interpreter
* Evaluate, optimize, and deploy computer vision models to production
* Collaborate with the data engineer to build scalable training and inference pipelines
* Automate and accelerate CV tasks for annotation and content generation

Requirements:
* PhD in Computer Science, AI, or a related field, or equivalent industry experience
* 3+ years working with PyTorch and computer vision models
* Proven ability to take ML models from research to production
* Solid understanding of machine learning, deep learning, optimization, and language models
* Experience working with motion or sign language data is a strong advantage
* Publication record in top-tier computer vision or ML venues preferred
* Strong Python skills and experience integrating models into cloud environments

Benefits:
* Work on a fulfilling, life-changing product (literally)
* Join as a key player at an early stage, and receive generous options
* Unlimited time off and sick days
* Hybrid work model
* Annual company get-together
* Bring your pet to work

About Us: Nagish makes communication accessible for people who are Deaf or hard of hearing. Our team is passionate about making the world more accessible using our state-of-the-art tech - made for consumers and enterprises. We are backed by some of the best investors out there: Comcast, Techstars, Vertex, Precursor, Contour, Cardumen, and more.