
We set Faculty up in 2014 because we thought that AI would be the most important technology of our time. Our sole purpose is to make it useful to the world. We do that by helping our clients access…

Founded: 2014
Headquarters: London
Core offering: Applied AI, decision-intelligence platform (Faculty Frontier) and AI consultancy
Recent notable funding: £30M growth round (May 2021) led by Apax Digital Fund
Bridging the gap between data/models and real-world business decisions to enable safe, impactful AI deployments.
Founded: 2014
Industry: Technology, Information and Internet
Growth round: £30,000,000 (reported as a growth round that brought total investment to nearly £40M at the time)
Series A: USD 10,500,000 (reported in December 2019; named investors in reporting include LocalGlobe, GMG Ventures and Jaan Tallinn)
Investors: “Backed by investors including Apax Digital Fund, LocalGlobe, GMG Ventures LP, and angel investor Jaan Tallinn”
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we’ve worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here.
We don’t chase hype cycles. We innovate, build and deploy responsible AI which moves the needle - and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients who span government, finance, retail, energy, life sciences and defence.
Our business, and reputation, is growing fast and we’re always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology.
AI is an epoch-defining technology. Join a company where you’ll be empowered to envision its most powerful applications, and to make them happen.
Faculty’s Research team conducts critical red teaming and builds evaluations for misuse capabilities in sensitive areas, such as CBRN, cybersecurity and international security, for several leading frontier model developers and national safety institutes; notably, our work has been featured in OpenAI's system card for o1.
Our commitment also extends to conducting fundamental technical research on mitigation strategies, with our findings published in peer-reviewed conferences and delivered to national security institutes. Complementing this, we design evaluations for model developers across broader safety-relevant fields, including the societal impacts of increasingly capable frontier models, showcasing our expertise across the safety landscape.
This is a brand new senior leadership role to provide technical leadership of Faculty's work on AI safety for the Foundation Labs - and presents a unique opportunity to shape how AI safety is done globally.
Faculty is one of the world's leading applied AI companies, helping many of the organisations that shape our world to adopt AI successfully and safely. We play an important role in the emerging AI safety ecosystem. We already have many of the key Frontier Labs as clients, including OpenAI and Anthropic, for whom we provide third-party red teaming, technical testing and other AI safety services. And we work with the UK government and other international governments on AI safety, including helping set up the AI Security Institute and delivering technical work which catalysed the first global AI Safety Summit at Bletchley Park in 2023.
With the recent announcement of Faculty's acquisition by Accenture, we are investing to take our work on AI safety to global scale, and this role will be key to shaping that. This will include:
This role will suit someone with a deep passion and commitment to AI safety, and represents a unique opportunity to contribute to this agenda globally.
We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We’re united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.
Some of our standout benefits:
If you don’t feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please don’t hesitate to apply; you might be right for this role, or for other roles. We are open to conversations about part-time hours.
You have a proven track record of designing and leading high-performing technical teams, with the ability to manage R&D budgets and mentor senior technical staff.
You bring deep expertise in AI safety research, specifically regarding alignment, interpretability, and robustness in large language models (LLMs) or safety-critical systems.
You possess a strong scientific background evidenced by high-impact machine learning publications and a comprehensive understanding of transformer architectures.
You are a strategic visionary capable of setting research priorities that align with long-term organisational goals while remaining at the cutting edge of developments in the field.
You are a compelling communicator who can synthesise complex technical concepts into narratives that influence both C-suite executives and the broader research community.
You exhibit strong commercial acumen and stakeholder management skills, allowing you to navigate complex organisations and accelerate the delivery of high-value projects.