Lexsi Labs is dedicated to building the foundations for Safe Superintelligence by integrating alignment theory, interpretability science, and agentic autonomy into a cohesive research framework. Its mission is to create AI systems that are not only intelligent but also aligned with human values, ensuring safety and accountability as they scale. The company develops a comprehensive research stack that includes specialized foundation models and tools like TabTune, which streamline the deployment and fine-tuning of tabular data models, making AI more interpretable and trustworthy. With a focus on open-source contributions, Lexsi Labs aims to lead in the AI research space, providing practical solutions for complex challenges in AI deployment.
AI Research Intern – Lexsi Labs
Commitment: Full-time internship (6 months; potential extension or full-time offer)
Start Date: Rolling
About Lexsi Labs
Lexsi Labs is one of the leading frontier labs focused on building aligned, interpretable, and safe Superintelligence. Most of our work involves creating new methodologies for efficient alignment, interpretability-led strategies, and tabular foundation model research. Our mission is to create AI tools that empower researchers, engineers, and organizations to unlock AI's full potential while maintaining transparency and safety.
Our team thrives on a shared passion for cutting-edge innovation, collaboration, and a relentless drive for excellence. At Lexsi.ai, everyone contributes hands-on to our mission in a flat organizational structure that values curiosity, initiative, and exceptional performance.

As a research intern at Lexsi.ai, you will be uniquely positioned to work on very large-scale industry problems and push forward the frontiers of AI technologies. You will join a unique atmosphere where startup culture meets research innovation, with speed and reliability as key outcomes.
What You’ll Do
We work on multiple frontier research ideas and challenges. If selected, you will work on one of the following areas, collaborating closely with our research and engineering teams:
Library Development: architect and enhance open-source Python tooling for alignment, explainability, uncertainty quantification, robustness, and machine unlearning.
Explainability & Trust: apply our own and other SOTA XAI techniques (DLB, LRP, SHAP, Grad-CAM, Backtrace) across text, image, and tabular modalities to uncover and present new model interpretability findings.
Mechanistic Interpretability: probe internal model representations and circuits (using activation patching, feature visualization, and related methods) to diagnose failure modes and emergent behaviors.
Uncertainty & Risk: develop, implement, and benchmark uncertainty estimation methods (Bayesian approaches, ensembles, test-time augmentation) alongside robustness metrics for foundation models.
Tabular Foundational Models (Orion): work with our Tabular Foundational Model team to improve and launch new tabular foundation model architectures, and contribute to our open-source library TabTune.
Reinforcement Learning: explore new ideas and algorithms around RL and our new RL fine-tuning library.
Research Contributions: author and maintain experiment code, run systematic studies, and co-author whitepapers or conference submissions.

General Required Qualifications
Strong Python expertise: writing clean, modular, and testable code.
Theoretical foundations: deep understanding of machine learning and deep learning principles, with hands-on PyTorch experience.
Transformer architectures & fundamentals: comprehensive knowledge of attention mechanisms, positional encodings, tokenization, and training objectives in BERT, GPT, LLaMA, T5, MoE, Mamba, etc.
Version control & CI/CD: Git workflows, packaging, documentation, and collaborative development practices.
Collaborative mindset: excellent communication, peer code reviews, and agile teamwork.

Preferred Domain Expertise (any one of these is good):
Explainability: applied experience with XAI methods such as DLB, SHAP, LIME, IG, LRP, DL-Backtrace, or Grad-CAM.
Mechanistic interpretability: familiarity with circuit analysis, activation patching, and feature visualization for neural network introspection.
Uncertainty estimation: hands-on experience with Bayesian techniques, ensembles, or test-time augmentation.
Quantization & pruning: applying model compression to optimize size, latency, and memory footprint.
LLM alignment techniques: crafting and evaluating few-shot, zero-shot, and chain-of-thought prompts; experience with RLHF workflows, reward modeling, and human-in-the-loop fine-tuning.
Tabular Foundational Models: experience using or improving TFMs such as Orion, TabPFN, or TabICL.
Post-training adaptation & fine-tuning: practical work with full-model fine-tuning and parameter-efficient methods (LoRA, adapters), instruction tuning, knowledge distillation, and domain specialization.

Additional Experience (Nice-to-Have)
Publications: contributions to CVPR, ICLR, ICML, KDD, WWW, WACV, NeurIPS, ACL, NAACL, EMNLP, IJCAI, or equivalent research experience.
Open-source contributions: prior work on AI/ML libraries or tooling.
Domain exposure: risk-sensitive applications in finance, healthcare, or similar fields.
Performance optimization: familiarity with large-scale training infrastructure.

What We Offer
Real-world impact: address high-stakes AI challenges in regulated industries.
Compute resources: access to GPUs, cloud credits, and proprietary models.
Competitive stipend: with potential for full-time conversion.
Authorship opportunities: co-authorship on papers, technical reports, and conference submissions.