
HyperAccel is a company specializing in hyper-accelerated silicon IP and hardware solutions designed for generative AI applications, particularly transformer-based large language models (LLMs) such as OpenAI GPT and Meta LLaMA. Its flagship product is the Latency Processing Unit (LPU), the world's first hardware accelerator dedicated to end-to-end LLM inference, offering strong performance, scalability, and energy efficiency. The LPU features a latency-optimized architecture and a peer-to-peer communication technology (Expandable Synchronization Link) that enables efficient acceleration of hyperscale models with tens to hundreds of billions of parameters. HyperAccel also offers LPU-based datacenter servers positioned to outperform leading GPU platforms in performance, cost-effectiveness, and power efficiency. Its upcoming ASIC product (4 nm) targets LLM inference with features such as 32 LPU cores, 128 GB of LPDDR5X memory, and support for multiple data types and model types. The company operates a business model centered on silicon IP and hardware product sales, targeting datacenters and enterprises deploying large-scale AI models. It emphasizes software integration through its HyperDex framework, a compiler technology bridging datacenter applications and LPU hardware to facilitate the transition from narrow AI to general AI.

HyperAccel is an AI semiconductor startup founded in 2023 that develops the **LPU (Latency Processing Unit)**, a next-generation chip optimized for inference of hyperscale large language models (LLMs).
Through its LPU-based chip, server, and IP licensing businesses, the company is growing into a key partner for global generative AI infrastructure, and within three years of founding it has raised more than 60 billion KRW in total investment, earning recognition for its technology and potential.
*Related article: "AI semiconductors are not a GPU alternative but a new direction... differentiating the LPU through its approach to memory"
*Related article: IEEE best paper award and AWS instance listing... 'HyperAccel' builds its presence ahead of chip launch
*Related article: AI semiconductor startup HyperAccel raises 55 billion KRW in funding... "Mass production on Samsung 4nm next year"
We are looking for new colleagues to build the future of global AI semiconductors with us!
Key Responsibilities
Qualifications
Preferred Qualifications
Hiring Process
Document Screening ▶ Job Interview ▶ Comprehensive Interview ▶ (Reference Check) ▶ Final Offer
* The hiring process may change as circumstances require; any changes will be announced in advance.
* A three-month probationary period applies after joining.