
Lexsi Labs is dedicated to building the foundations for Safe Superintelligence by integrating alignment theory, interpretability science, and agentic autonomy into a cohesive research framework. Their mission is to create AI systems that are not only intelligent but also aligned with human values, ensuring safety and accountability as they scale. The company develops a comprehensive research stack that includes specialized foundation models and tools like TabTune, which streamline the deployment and fine-tuning of tabular data models, making AI more interpretable and trustworthy. With a focus on open-source contributions, Lexsi Labs aims to lead in the AI research space, providing practical solutions for complex challenges in AI deployment.

Lexsi Labs is a premier research lab dedicated to solving one of the most critical challenges of our time: ensuring that advanced artificial intelligence systems are safe, transparent, and aligned with human values. Founded by pioneers in the deep learning space, our lab operates at the global forefront of AI research with hubs in Mumbai and Paris. We tackle foundational problems in explainability, robustness, and reasoning, publishing our work in top-tier venues (ICML, IJCNN, WWW, ACL) and creating patented, open-source tools like DL-Backtrace that push the boundaries of the field.
Key areas where we are looking to expand our R&D: new RL techniques for efficient, faster learning; new tabular foundation models; and interpretability-driven alignment strategies such as pruning, unlearning, safety fine-tuning, and new exploration agents.
We are a team of world-class researchers, engineers, and visionaries, seeking passionate individuals to join us in shaping a future where AI operates as a beneficial and trustworthy partner to humanity.
As an AI Research Scientist, you will work at the heart of our research agenda, tackling fundamental questions in AI alignment and interpretability. You will have the freedom to pursue ambitious, long-term research projects while collaborating with a team of leading experts.
Your responsibilities will include:
Who You Are
You are an inquisitive and driven researcher with a passion for solving the AI alignment problem. You are motivated by challenging questions and thrive in a collaborative, intellectually stimulating environment.
Ideal Qualifications:
What We Offer
Join us in building safe and aligned AI for the future.
To Apply: Please send your CV, a link to your Google Scholar profile, and a brief statement of your research interests.