
Safe Intelligence is a venture-backed spin-out from Imperial College London, developing software to make AI safe and secure. Our overarching ambition is to help realise a society where people seamlessly interact with trustworthy AI.

What they do: Formal verification, robustification, testing and continuous monitoring tooling for ML models
Origin: Spin-out from Imperial College London
Funding: £4.15M seed round announced Feb 20, 2025
Lead investor: Amadeus Capital Partners
Headcount: About 16 employees
Sector: AI/ML safety and validation
Industry: Software Development
Seed round: £4.15M, with participation from OTB Ventures and Vsquared Ventures
Investor profile: "Led by a specialist deep-tech / AI investor (Amadeus Capital Partners) with participation from early-stage investors"
Safe Intelligence is on a mission to make AI safe and reliable for anyone to use. To help us succeed, we're looking for a Senior Python Engineer, and we're hoping it's you! In this role, you'll help lead and implement improvements to our algorithms, optimising the execution, usability, and package architecture of our ML verification packages.
Note that this is a Python-focused engineering role rather than a Data Science role, so Python programming skills are the priority, whereas Machine Learning and Data Science skills are secondary. There will be plenty of opportunity to get involved in machine learning model training, evaluation and deployment, along with the associated ecosystem of tools; however, the key mission is to work magic on our Python-based tools and packages.
We’re looking forward to having you on board!
Responsibilities
The position requires a passion for science and engineering, paired with an ability to produce production-ready solutions while working closely with both product and research teams.
As a Safe Intelligence Senior Python Engineer, you will:
Requirements
The technical requirements for the role are:
Additional beneficial experience includes:
At a personal level we’re also looking for someone who is:
Why Safe Intelligence is for you
We strongly believe AI can bring great benefits to individuals and society, but these will only be achieved if the systems we build are safe to use. To meet this need, we are developing advanced deep validation techniques and tools that allow AI/ML engineers worldwide to validate the robustness of their models, as well as repair the fragilities that they discover.
By joining us, you’ll be able to help advance the techniques, bring advanced technologies to AI/ML engineers worldwide and contribute to our shared mission to realise successful and reliable AI.
Grow with us!
If you think you can bring something special to this role, please apply even if you do not meet all listed criteria. Safe Intelligence is exploring uncharted waters, and finding the right crewmates is important to us. We support ongoing learning for the whole team, ranging from individual mentorship to internal seminars and support for sector and technology-specific upskilling.
Compensation & Benefits
Safe Intelligence provides competitive compensation based on role and candidate experience. In addition, company benefits for all roles include:
Equality and Inclusion
We are proud to be an equal-opportunity employer and work hard to create an environment where people of diverse backgrounds and life experiences can thrive. The team is highly collaborative and meritocratic. Great ideas come from everywhere, and we strive to make it easy for people to express themselves and be heard.
Location & Office Culture
Safe Intelligence is based in London, UK, and we’re focused on building the initial team here. We highly value the ability to work flexibly and remotely at times, but we also have a strong belief that regular in-office interactions make for a much more fulfilling and productive work experience.
Our company culture combines optimism for the future (hard problems can be solved with the right effort), speed of iteration (the best ideas come from many ideas tested), and rigor in what matters (correctness and precision are critical for safety).
Come and join us to add your skills and passion to the future of Safe Artificial Intelligence!