
Leading supplier of end-to-end high-speed Ethernet and InfiniBand intelligent interconnect solutions and services.

Founded: 1993
Headcount (approx.): 42,295
Core focus: GPUs, AI computing platforms, systems, and software
Notable software: CUDA, Omniverse
Focus areas: High-performance computing, AI infrastructure, graphics rendering, data-center networking, and industrial/autonomous systems.
Industry: Semiconductors / AI compute / Software platforms
Recent investments:
* $2 billion: Investment in CoreWeave to expand AI compute capacity.
* $5 billion: Purchase of an equity stake in Intel as part of a collaboration.
* $500 million to $1 billion (reported): Investment in Poolside as part of a larger funding round.
We are now looking for a Senior DL Algorithms Engineer! We are seeking a highly skilled Deep Learning Algorithms Engineer with hands-on experience optimizing and deploying Large Language Models (LLMs) and Vision-Language Models (VLMs) in production environments. In this role, you will focus on optimizing and deploying deep learning models for efficient, fast inference across diverse GPU platforms. You will collaborate with research scientists, software engineers, and hardware specialists to bring cutting-edge AI models from prototype to production.

What you'll be doing:
* Optimize deep learning models for low-latency, high-throughput inference.
* Convert and deploy models using frameworks such as TensorRT and TensorRT-LLM.
* Understand, analyze, profile, and optimize the performance of deep learning workloads on state-of-the-art hardware and software platforms.
* Collaborate with internal and external researchers to ensure seamless integration of models from training to deployment.

What we need to see:
* Master's or PhD in Computer Science, Electrical Engineering, Computer Engineering, or a related field (or equivalent experience).
* 3+ years of professional experience in deep learning or applied machine learning.
* Strong foundation in deep learning algorithms, including hands-on experience with LLMs and VLMs.
* Deep understanding of transformer architectures, attention mechanisms, and inference bottlenecks.
* Proficiency in building and deploying models using PyTorch or TensorFlow in production-grade environments.
* Solid programming skills in Python and C++.
* Proven experience deploying LLMs or VLMs at scale in real-world applications.
* Hands-on experience with model optimization and serving frameworks such as TensorRT, TensorRT-LLM, vLLM, and SGLang.

As NVIDIA makes inroads into the datacenter business, our team plays a central role in getting the most out of our exponentially growing datacenter deployments, as well as in establishing a data-driven approach to hardware design and system software development.
We collaborate with a broad cross-section of teams at NVIDIA, ranging from DL research teams to CUDA kernel and DL framework development teams, to silicon architecture teams. As our team grows, and as we seek to identify and take advantage of long-term opportunities, our skillset needs are expanding as well.

NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people on the planet working for us. If you're creative and autonomous, we want to hear from you!

The base salary range is 184,000 USD to 356,500 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. You will also be eligible for equity and benefits.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.

JR1997088